WorldWideScience

Sample records for large-scale race approach

  1. Adiabatic hyperspherical approach to large-scale nuclear dynamics

    CERN Document Server

    Suzuki, Yasuyuki

    2015-01-01

    We formulate a fully microscopic approach to large-scale nuclear dynamics using a hyperradius as a collective coordinate. An adiabatic potential is defined by taking account of all possible configurations at a fixed hyperradius, and its hyperradius dependence plays a key role in governing the global nuclear motion. In order to go to larger systems beyond few-body systems, we suggest basis functions of a microscopic multicluster model, propose a method for calculating matrix elements of an adiabatic Hamiltonian with use of Fourier transforms, and test its effectiveness.
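
    The record gives only the outline of the adiabatic construction, but the core idea (at each fixed hyperradius, diagonalize the remaining degrees of freedom; the lowest eigenvalues traced over the hyperradius form the adiabatic potential curves) can be illustrated generically. A minimal sketch in Python, with an invented model Hamiltonian standing in for the paper's microscopic multicluster formulation:

      import numpy as np

      def adiabatic_potentials(rho_grid, coupling, n_channels=4):
          """For each fixed hyperradius rho, diagonalize a small channel-space
          Hamiltonian; the sorted eigenvalues traced over rho form the
          adiabatic potential curves."""
          curves = []
          for rho in rho_grid:
              # Hypothetical model: centrifugal-like diagonal terms plus a
              # rho-dependent off-diagonal coupling (purely illustrative).
              H = np.diag([(k + 1.5) * (k + 2.5) / rho**2 for k in range(n_channels)])
              H += coupling * np.exp(-rho) * (np.ones((n_channels, n_channels)) - np.eye(n_channels))
              curves.append(np.sort(np.linalg.eigvalsh(H)))
          return np.array(curves)   # shape: (len(rho_grid), n_channels)

      rho = np.linspace(1.0, 20.0, 200)
      U = adiabatic_potentials(rho, coupling=0.5)   # U[:, 0] is the lowest curve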

  2. Efficient Graph Based Approach to Large Scale Role Engineering

    Directory of Open Access Journals (Sweden)

    Dana Zhang

    2014-04-01

    Role engineering is the process of defining a set of roles that offer administrative benefit for Role Based Access Control (RBAC), which ensures data privacy. It is a business-critical task required by enterprises wishing to migrate to RBAC. However, existing methods of role generation have not analysed what constitutes a beneficial role and, as a result, often produce inadequate solutions in a time-consuming manner. To address the urgent issue of identifying high-quality RBAC structures in real enterprise environments, we present a cost-based analysis of the problem for both flat and hierarchical RBAC structures. Specifically, we propose two cost models to evaluate the administration cost of roles and provide a k-partite graph approach to role engineering. Existing role cost evaluations are approximations that overestimate the benefit of a role. Our method and cost models can provide exact role cost and show when existing role cost evaluations can be used as a lower bound to improve efficiency without affecting the quality of results. In the first work to address role engineering using large-scale real data sets, we propose RoleAnnealing, a fast solution space search algorithm with incremental computation and guided search space heuristics. Our experimental results on both real and synthetic data sets demonstrate that high-quality RBAC configurations that maintain data privacy are identified efficiently by RoleAnnealing. Comparison with an existing approach shows RoleAnnealing is significantly faster and produces RBAC configurations with lower cost.
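
    The paper's two cost models are not reproduced in the record, but the flavour of cost-based role evaluation can be sketched: given a user-permission matrix, a candidate role set is scored by a simple administration-cost proxy (per-role overhead plus user-role and role-permission assignments). A minimal sketch under those assumptions:

      import numpy as np

      def rbac_cost(UPA, UA, PA, w_role=1.0, w_assign=1.0):
          """Administration cost of a flat-RBAC candidate (a simple proxy,
          not the paper's exact cost models).
          UPA: users x permissions boolean matrix (the access to reproduce)
          UA:  users x roles boolean assignment
          PA:  roles x permissions boolean assignment"""
          covered = (UA.astype(int) @ PA.astype(int)) > 0
          if not np.array_equal(covered, UPA.astype(bool)):
              return np.inf   # candidate does not reproduce the required access
          return w_role * PA.shape[0] + w_assign * (UA.sum() + PA.sum())

      # Toy example: 3 users, 4 permissions, 2 candidate roles.
      UPA = np.array([[1, 1, 0, 0],
                      [1, 1, 1, 1],
                      [0, 0, 1, 1]], dtype=bool)
      UA  = np.array([[1, 0], [1, 1], [0, 1]], dtype=bool)
      PA  = np.array([[1, 1, 0, 0], [0, 0, 1, 1]], dtype=bool)
      print(rbac_cost(UPA, UA, PA))   # 2 roles + 8 assignments = 10.0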

  3. A Novel Approach Towards Large Scale Cross-Media Retrieval

    Institute of Scientific and Technical Information of China (English)

    Bo Lu; Guo-Ren Wang; Ye Yuan

    2012-01-01

    With the rapid development of Internet and multimedia technology, cross-media retrieval is concerned with retrieving all the related media objects of multiple modalities upon submission of a query media object. Unfortunately, the complexity and the heterogeneity of multi-modality data pose the following two major challenges for cross-media retrieval: 1) how to construct a unified and compact model for media objects with multi-modality; 2) how to improve the performance of retrieval for large-scale cross-media databases. In this paper, we propose a novel method dedicated to solving these issues to achieve effective and accurate cross-media retrieval. Firstly, a multi-modality semantic relationship graph (MSRG) is constructed using the semantic correlation amongst the media objects with multi-modality. Secondly, all the media objects in the MSRG are mapped onto an isomorphic semantic space. Further, an efficient index, the MK-tree, based on heterogeneous data distribution is proposed to manage the media objects within the semantic space and improve the performance of cross-media retrieval. Extensive experiments on real large-scale cross-media datasets indicate that our proposal dramatically improves the accuracy and efficiency of cross-media retrieval, significantly outperforming existing methods.

  4. Parametric Approach in Designing Large-Scale Urban Architectural Objects

    Directory of Open Access Journals (Sweden)

    Arne Riekstiņš

    2011-04-01

    When the disciplines of various science fields converge and develop, new approaches to contemporary architecture arise. The author approaches digital architecture from a parametric viewpoint, revealing its generative capacity, which originates from the fields of the aeronautical, naval, automobile and product-design industries. The author also goes explicitly through his design-cycle workflow for testing the latest methodologies in architectural design. The design process involved the following steps: extrapolating valuable statistical data about the site into three-dimensional diagrams, defining the materiality of what is being produced, finding ways of presenting structural skin and structure simultaneously, contacting the object with the ground, defining the interior program of the building with floors and possible spaces, working out the logic of fabrication, and CNC milling of the prototype. The author's tool that is reviewed in this article features enormous performative capacity and is applicable to various architectural design scales.

  5. K-minus Estimator Approach to Large Scale Structure

    CERN Document Server

    Martinis, M

    2007-01-01

    Self-similar 3D distributions of point particles, with a given quasifractal dimension D, were generated on a Menger sponge model and then compared with 2dFGRS and Virgo project data (http://www.mso.anu.edu.au/2dFGRS/, http://www.mpa-garching.mpg.de/Virgo/). Using the principle of local knowledge, it is argued that in a finite volume of space only the two-point minus estimator is acceptable in the correlation analysis of self-similar spatial distributions. In this sense, we have simplified the Pietronero-Labini correlative analysis by defining a K-minus estimator, which when applied to 2dFGRS data revealed the quasifractal dimension D ≈ 2, as expected. In our approach the K-minus estimator is used only locally. Dimensions between D = 1 and D = 1.7, as suggested by the standard ξ(r) analysis, were found to be a fallacy of the method. In order to visualize spatial quasifractal objects, we created a small software program called RoPo ("Rotate Points"). This program i...

  6. Various approaches to the modelling of large scale 3-dimensional circulation in the Ocean

    Digital Repository Service at National Institute of Oceanography (India)

    Shaji, C.; Bahulayan, N.; Rao, A.D.; Dube, S.K.

    In this paper, three different approaches to the modelling of large-scale 3-dimensional flow in the ocean, namely the diagnostic, semi-diagnostic (adaptation) and prognostic approaches, are discussed in detail. Three-dimensional solutions are obtained...

  7. Approaches to large scale unsaturated flow in heterogeneous, stratified, and fractured geologic media

    Energy Technology Data Exchange (ETDEWEB)

    Ababou, R.

    1991-08-01

    This report develops a broad review and assessment of quantitative modeling approaches and data requirements for large-scale subsurface flow in a radioactive waste geologic repository. The data review includes discussions of controlled field experiments, existing contamination sites, and site-specific hydrogeologic conditions at Yucca Mountain. Local-scale constitutive models for the unsaturated hydrodynamic properties of geologic media are analyzed, with particular emphasis on the effect of structural characteristics of the medium. The report further reviews and analyzes large-scale hydrogeologic spatial variability from aquifer data, unsaturated soil data, and fracture network data gathered from the literature. Finally, various modeling strategies for large-scale flow simulations are assessed, including direct high-resolution simulation, and coarse-scale simulation based on auxiliary hydrodynamic models such as the single equivalent continuum and the dual-porosity continuum. The roles of anisotropy, fracturing, and broad-band spatial variability are emphasized. 252 refs.

  8. Hierarchical approach to optimization of parallel matrix multiplication on large-scale platforms

    KAUST Repository

    Hasanov, Khalid

    2014-03-04

    Many state-of-the-art parallel algorithms, which are widely used in scientific applications executed on high-end computing systems, were designed in the twentieth century with relatively small-scale parallelism in mind. Indeed, while in the 1990s a system with a few hundred cores was considered a powerful supercomputer, modern top supercomputers have millions of cores. In this paper, we present a hierarchical approach to the optimization of message-passing parallel algorithms for execution on large-scale distributed-memory systems. The idea is to reduce the communication cost by introducing hierarchy, and hence more parallelism, in the communication scheme. We apply this approach to SUMMA, the state-of-the-art parallel algorithm for matrix-matrix multiplication, and demonstrate both theoretically and experimentally that the modified Hierarchical SUMMA significantly improves the communication cost and the overall performance on large-scale platforms.
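
    The paper's MPI implementation is not shown in the record, but the blocking idea can be sketched serially: a two-level blocked matrix multiplication whose outer level stands in for inter-group communication and whose inner level stands in for intra-group communication. A minimal illustration (block sizes and the hierarchy mapping are assumptions, not the authors' code):

      import numpy as np

      def hierarchical_block_matmul(A, B, outer=2, inner=2):
          """Two-level blocked matrix multiplication. Serially this is just a
          reordering of the same flops; the point is the access pattern: in a
          distributed-memory setting the outer level would correspond to
          broadcasts between groups of processes and the inner level to
          broadcasts within a group, which is the hierarchy that
          Hierarchical SUMMA exploits to cut communication costs."""
          n = A.shape[0]
          C = np.zeros_like(A)
          ob = n // outer                 # outer block size (assumes it divides n)
          ib = ob // inner                # inner block size
          for I in range(0, n, ob):       # outer, "inter-group" loops
              for J in range(0, n, ob):
                  for K in range(0, n, ob):
                      for i in range(I, I + ob, ib):   # inner, "intra-group" loops
                          for j in range(J, J + ob, ib):
                              for k in range(K, K + ob, ib):
                                  C[i:i+ib, j:j+ib] += A[i:i+ib, k:k+ib] @ B[k:k+ib, j:j+ib]
          return C

      A = np.random.rand(8, 8); B = np.random.rand(8, 8)
      assert np.allclose(hierarchical_block_matmul(A, B), A @ B)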

  9. Semiconductor Nanocrystal Quantum Dot Synthesis Approaches Towards Large-Scale Industrial Production for Energy Applications.

    Science.gov (United States)

    Hu, Michael Z; Zhu, Ting

    2015-12-01

    This paper reviews the experimental synthesis and engineering developments focused on various green approaches and large-scale production routes for quantum dots. Fundamental process engineering principles are illustrated. In relation to the small-scale hot-injection method, our discussion focuses on the non-injection route, which could be scaled up with engineered stirred-tank reactors. In addition, applications that demand quantum dots as "commodity" chemicals are discussed, including solar cells and solid-state lighting.

  10. Local and global approaches of affinity propagation clustering for large scale data

    CERN Document Server

    Xia, Dingyin; Zhang, Xuqing; Zhuang, Yueting

    2009-01-01

    Recently a new clustering algorithm called 'affinity propagation' (AP) has been proposed, which efficiently clusters sparsely related data by passing messages between data points. However, in many cases we want to cluster large-scale data where the similarities are not sparse. This paper presents two variants of AP for grouping large-scale data with a dense similarity matrix. The local approach is partition affinity propagation (PAP) and the global method is landmark affinity propagation (LAP). PAP passes messages in the subsets of data first and then merges them after an initial number of iterations; it can effectively reduce the number of iterations of clustering. LAP passes messages between the landmark data points first and then clusters non-landmark data points; it is a large global approximation method to speed up clustering. Experiments are conducted on many datasets, such as random data points, manifold subspaces, images of faces and Chinese calligraphy, and the results demonstrate that the two...

  11. Evaluating the Health Impact of Large-Scale Public Policy Changes: Classical and Novel Approaches.

    Science.gov (United States)

    Basu, Sanjay; Meghani, Ankita; Siddiqi, Arjumand

    2017-03-20

    Large-scale public policy changes are often recommended to improve public health. Despite varying widely, from tobacco taxes to poverty-relief programs, such policies present a common dilemma to public health researchers: how to evaluate their health effects when randomized controlled trials are not possible. Here, we review the state of knowledge and experience of public health researchers who rigorously evaluate the health consequences of large-scale public policy changes. We organize our discussion by detailing approaches to address three common challenges of conducting policy evaluations: distinguishing a policy effect from time trends in health outcomes or preexisting differences between policy-affected and -unaffected communities (using difference-in-differences approaches); constructing a comparison population when a policy affects a population for whom a well-matched comparator is not immediately available (using propensity score or synthetic control approaches); and addressing unobserved confounders by utilizing quasi-random variations in policy exposure (using regression discontinuity, instrumental variables, or near-far matching approaches).
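
    As a concrete illustration of the first approach named above, a difference-in-differences estimate is the coefficient on the treated-by-post interaction in a simple regression. A minimal sketch on synthetic data (the data-generating numbers are invented, and statsmodels stands in for whatever estimation tooling a study would actually use):

      import numpy as np
      import pandas as pd
      import statsmodels.formula.api as smf

      rng = np.random.default_rng(0)
      n = 2000
      df = pd.DataFrame({
          "treated": rng.integers(0, 2, n),   # community affected by the policy
          "post":    rng.integers(0, 2, n),   # observed after the policy change
      })
      true_effect = -1.5                      # assumed for the synthetic data
      df["y"] = (10 + 2 * df["treated"] + 1 * df["post"]     # group gap + trend
                 + true_effect * df["treated"] * df["post"]
                 + rng.normal(0, 1, n))

      # The interaction coefficient is the difference-in-differences estimate:
      # it nets out the pre-existing group gap and the common time trend.
      fit = smf.ols("y ~ treated * post", data=df).fit()
      print(fit.params["treated:post"])       # close to -1.5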

  12. An Axiomatic Analysis Approach for Large-Scale Disaster-Tolerant Systems Modeling

    Directory of Open Access Journals (Sweden)

    Theodore W. Manikas

    2011-02-01

    Full Text Available Disaster tolerance in computing and communications systems refers to the ability to maintain a degree of functionality throughout the occurrence of a disaster. We accomplish the incorporation of disaster tolerance within a system by simulating various threats to the system operation and identifying areas for system redesign. Unfortunately, extremely large systems are not amenable to comprehensive simulation studies due to the large computational complexity requirements. To address this limitation, an axiomatic approach that decomposes a large-scale system into smaller subsystems is developed that allows the subsystems to be independently modeled. This approach is implemented using a data communications network system example. The results indicate that the decomposition approach produces simulation responses that are similar to the full system approach, but with greatly reduced simulation time.

  14. An efficient approach of attractor calculation for large-scale Boolean gene regulatory networks.

    Science.gov (United States)

    He, Qinbin; Xia, Zhile; Lin, Bin

    2016-11-07

    Boolean network models provide an efficient way of studying gene regulatory networks. The main dynamics of a Boolean network are determined by its attractors, so attractor calculation plays a key role in the analysis of Boolean gene regulatory networks. An approach to attractor calculation is proposed in this study, improving on the predecessor-based approach; it is further combined with the identification of constant nodes and simplification of the Boolean network to accelerate attractor calculation. The proposed algorithm is effective in calculating all attractors of large-scale Boolean gene regulatory networks. If the average degree of the network is not too large, the algorithm can find all attractors of a Boolean network with dozens or even hundreds of nodes.
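
    As a point of reference for what 'attractor calculation' means, the brute-force baseline below enumerates every state of a small synchronous Boolean network and follows each trajectory into its cycle; the paper's contribution is precisely to avoid this 2^n enumeration. A toy sketch (the three-gene network is invented for illustration):

      def attractors(update_fns):
          """Enumerate all attractors of a synchronous Boolean network by
          walking every trajectory to its cycle. Brute force (2**n states),
          so only a baseline for small networks."""
          n = len(update_fns)
          step = lambda s: tuple(f(s) for f in update_fns)
          found = set()
          for code in range(2 ** n):
              state = tuple((code >> i) & 1 for i in range(n))
              seen = {}
              while state not in seen:
                  seen[state] = len(seen)
                  state = step(state)
              cycle_start = seen[state]
              cycle = [s for s, i in sorted(seen.items(), key=lambda kv: kv[1])
                       if i >= cycle_start]
              found.add(tuple(sorted(cycle)))   # canonical form de-duplicates
          return found

      # Toy 3-gene network: x0' = x1 AND x2, x1' = NOT x0, x2' = x1
      fns = [lambda s: s[1] & s[2], lambda s: 1 - s[0], lambda s: s[1]]
      for a in attractors(fns):
          print(a)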

  15. Local and global approaches of affinity propagation clustering for large scale data

    Institute of Scientific and Technical Information of China (English)

    Ding-yin XIA; Fei WU; Xu-qing ZHANG; Yue-ting ZHUANG

    2008-01-01

    Recently a new clustering algorithm called 'affinity propagation' (AP) has been proposed, which efficiently clusters sparsely related data by passing messages between data points. However, in many cases we want to cluster large-scale data where the similarities are not sparse. This paper presents two variants of AP for grouping large-scale data with a dense similarity matrix. The local approach is partition affinity propagation (PAP) and the global method is landmark affinity propagation (LAP). PAP passes messages in the subsets of data first and then merges them after an initial number of iterations; it can effectively reduce the number of iterations of clustering. LAP passes messages between the landmark data points first and then clusters non-landmark data points; it is a large global approximation method to speed up clustering. Experiments are conducted on many datasets, such as random data points, manifold subspaces, images of faces and Chinese calligraphy, and the results demonstrate that the two approaches are feasible and practicable.
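
    The record does not include the authors' message-passing code; the two-phase landmark idea can nonetheless be sketched with scikit-learn's stock AffinityPropagation standing in for the first phase (the data, landmark count, and nearest-exemplar assignment rule are assumptions for illustration):

      import numpy as np
      from sklearn.cluster import AffinityPropagation

      rng = np.random.default_rng(0)
      X = np.vstack([rng.normal(c, 0.3, size=(500, 2)) for c in (0.0, 3.0, 6.0)])

      # Phase 1: run AP only on a small random landmark subset, so the dense
      # similarity matrix stays affordable.
      landmarks = X[rng.choice(len(X), size=120, replace=False)]
      ap = AffinityPropagation(random_state=0).fit(landmarks)

      # Phase 2: assign every point to its nearest exemplar.
      exemplars = ap.cluster_centers_
      labels = np.argmin(((X[:, None, :] - exemplars[None, :, :]) ** 2).sum(-1), axis=1)
      print(len(exemplars), "exemplars;", np.bincount(labels))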

  16. Preparing laboratory and real-world EEG data for large-scale analysis: A containerized approach

    Directory of Open Access Journals (Sweden)

    Nima Bigdely-Shamlo

    2016-03-01

    Large-scale analysis of EEG and other physiological measures promises new insights into brain processes and more accurate and robust brain-computer interface (BCI) models. However, the absence of standardized vocabularies for annotating events in a machine-understandable manner, the welter of collection-specific data organizations, the difficulty in moving data across processing platforms, and the unavailability of agreed-upon standards for preprocessing have prevented large-scale analyses of EEG. Here we describe a containerized approach and freely available tools we have developed to facilitate the process of annotating, packaging, and preprocessing EEG data collections to enable data sharing, archiving, large-scale machine learning/data mining and (meta-)analysis. The EEG Study Schema (ESS) comprises three data Levels, each with its own XML-document schema and file/folder convention, plus a standardized (PREP) pipeline to move raw (Data Level 1) data to a basic preprocessed state (Data Level 2) suitable for application of a large class of EEG analysis methods. Researchers can ship a study as a single unit and operate on its data using a standardized interface. ESS does not require a central database and provides all the metadata necessary to execute a wide variety of EEG processing pipelines. The primary focus of ESS is automated in-depth analysis and meta-analysis of EEG studies. However, ESS can also encapsulate meta-information for other modalities, such as eye tracking, that are increasingly used in both laboratory and real-world neuroimaging. ESS schema and tools are freely available at eegstudy.org, and a central catalog of over 850 GB of existing data in ESS format is available at studycatalog.org. These tools and resources are part of a larger effort to enable data sharing at sufficient scale for researchers to engage in truly large-scale EEG analysis and data mining (BigEEG.org).

  17. A Two-Stage Approach for Medical Supplies Intermodal Transportation in Large-Scale Disaster Responses

    Directory of Open Access Journals (Sweden)

    Junhu Ruan

    2014-10-01

    We present a two-stage approach for the "helicopters and vehicles" intermodal transportation of medical supplies in large-scale disaster responses. In the first stage, a fuzzy-based method and its heuristic algorithm are developed to select the locations of temporary distribution centers (TDCs) and assign medical aid points (MAPs) to each TDC. In the second stage, an integer-programming model is developed to determine the delivery routes. Numerical experiments verified the effectiveness of the approach and yielded several findings: (i) more TDCs often increase the efficiency and utility of medical supplies; (ii) it is not necessarily true that vehicles should load more and more medical supplies in emergency responses; (iii) the more contrasting the traveling speeds of helicopters and vehicles are, the more advantageous the intermodal transportation is.

  18. Solving Large-Scale TSP Using a Fast Wedging Insertion Partitioning Approach

    Directory of Open Access Journals (Sweden)

    Zuoyong Xiang

    2015-01-01

    A new partitioning method, called Wedging Insertion, is proposed for solving the large-scale symmetric Traveling Salesman Problem (TSP). The idea of our proposed algorithm is to cut a TSP tour into four segments by nodes' coordinates (not by rectangles, as in Strip, FRP, and Karp). Each node, apart from four particular nodes, is located in exactly one segment, and no segment twists with the others. After the partitioning process, the algorithm applies a traditional construction method, namely the insertion method, to each segment to improve the quality of the tour, and then connects the starting node and the ending node of each segment to obtain the complete tour. In order to test the performance of our proposed algorithm, we conduct experiments on various TSPLIB instances. The experimental results show that our proposed algorithm is more efficient for solving large-scale TSPs. Specifically, our approach obviously reduces the running time of the algorithm, while losing only about 10% of the algorithm's performance.
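
    The exact wedging rule is specific to the paper, but the two-phase shape of the algorithm (geometric partitioning, then per-segment insertion, then chaining) can be sketched. In this simplified stand-in, the four segments are angular wedges about the centroid rather than the paper's construction:

      import numpy as np

      def insert_path(pts):
          """Greedy insertion: build an open path through pts, inserting each
          point where it lengthens the path least."""
          d = lambda a, b: float(np.linalg.norm(pts[a] - pts[b]))
          path = [0]
          for k in range(1, len(pts)):
              best_pos, best_cost = 0, float("inf")
              for i in range(len(path) + 1):        # try every insertion slot
                  prev = d(path[i - 1], k) if i > 0 else 0.0
                  nxt = d(k, path[i]) if i < len(path) else 0.0
                  rem = d(path[i - 1], path[i]) if 0 < i < len(path) else 0.0
                  cost = prev + nxt - rem
                  if cost < best_cost:
                      best_pos, best_cost = i, cost
              path.insert(best_pos, k)
          return [pts[i] for i in path]

      def segmented_tour(cities):
          """Partition cities into four angular wedges about the centroid,
          solve each wedge with insertion, then chain the wedges."""
          cities = np.asarray(cities, dtype=float)
          ang = np.arctan2(*(cities - cities.mean(0)).T[::-1])   # angle per city
          wedge = np.minimum(((ang + np.pi) // (np.pi / 2)).astype(int), 3)
          tour = []
          for q in range(4):                        # wedges in angular order
              seg = cities[wedge == q]
              if len(seg):
                  tour += insert_path(seg)
          return tour   # closed implicitly: last city connects back to the first

      tour = segmented_tour(np.random.default_rng(1).random((200, 2)))
      print(len(tour))                              # 200 cities, all visited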

  19. A Ranking Approach on Large-Scale Graph With Multidimensional Heterogeneous Information.

    Science.gov (United States)

    Wei, Wei; Gao, Bin; Liu, Tie-Yan; Wang, Taifeng; Li, Guohui; Li, Hang

    2016-04-01

    Graph-based ranking has been extensively studied and is frequently applied in many applications, such as webpage ranking. It aims at mining potentially valuable information from raw graph-structured data. Recently, with the proliferation of rich heterogeneous information (e.g., node/edge features and prior knowledge) available in many real-world graphs, how to effectively and efficiently leverage all of this information to improve ranking performance has become a new and challenging problem. Previous methods utilize only part of such information and attempt to rank graph nodes according to link-based methods, whose ranking performance is severely affected by several well-known issues, e.g., overfitting or high computational complexity, especially when the scale of the graph is very large. In this paper, we address the large-scale graph-based ranking problem and focus on how to effectively exploit the rich heterogeneous information of the graph to improve ranking performance. Specifically, we propose an innovative and effective semi-supervised PageRank (SSP) approach that parameterizes the derived information within a unified semi-supervised learning framework (SSLF-GR), and then simultaneously optimizes the parameters and the ranking scores of graph nodes. Experiments on real-world large-scale graphs demonstrate that our method significantly outperforms algorithms that consider such graph information only partially.

  20. Gene prediction in metagenomic fragments: A large scale machine learning approach

    Directory of Open Access Journals (Sweden)

    Morgenstern Burkhard

    2008-04-01

    Background: Metagenomics is an approach to the characterization of microbial genomes via the direct isolation of genomic sequences from the environment without prior cultivation. The amount of metagenomic sequence data is growing fast, while computational methods for metagenome analysis are still in their infancy. In contrast to genomic sequences of single species, which can usually be assembled and analyzed by many available methods, a large proportion of metagenome data remains as unassembled anonymous sequencing reads. One of the aims of all metagenomic sequencing projects is the identification of novel genes. The short read length (Sanger sequencing, for example, yields fragments of 700 bp on average) and the unknown phylogenetic origin of most fragments require approaches to gene prediction that are different from the currently available methods for genomes of single species. In particular, the large size of metagenomic samples requires fast and accurate methods with small numbers of false positive predictions. Results: We introduce a novel gene prediction algorithm for metagenomic fragments based on a two-stage machine learning approach. In the first stage, we use linear discriminants for monocodon usage, dicodon usage and translation initiation sites to extract features from DNA sequences. In the second stage, an artificial neural network combines these features with open reading frame length and fragment GC-content to compute the probability that this open reading frame encodes a protein. This probability is used for the classification and scoring of gene candidates. With large-scale training, our method provides fast single-fragment predictions with good sensitivity and specificity on artificially fragmented genomic DNA. Additionally, this method is able to predict translation initiation sites accurately and distinguishes complete from incomplete genes with high reliability. Conclusion: Large-scale machine learning methods are well-suited for gene

  1. Burnout of pulverized biomass particles in large scale boiler - Single particle model approach

    Energy Technology Data Exchange (ETDEWEB)

    Saastamoinen, Jaakko; Aho, Martti; Moilanen, Antero [VTT Technical Research Centre of Finland, Box 1603, 40101 Jyvaeskylae (Finland); Soerensen, Lasse Holst [ReaTech/ReAddit, Frederiksborgsveij 399, Niels Bohr, DK-4000 Roskilde (Denmark); Clausen, Soennik [Risoe National Laboratory, DK-4000 Roskilde (Denmark); Berg, Mogens [ENERGI E2 A/S, A.C. Meyers Vaenge 9, DK-2450 Copenhagen SV (Denmark)

    2010-05-15

    The burning of coal and biomass particles is studied and compared by measurements in an entrained flow reactor and by modelling. The results are applied to study the burning of pulverized biomass in a large-scale utility boiler originally designed for coal. A simplified single-particle approach, in which the particle combustion model is coupled with a one-dimensional equation of motion of the particle, is applied to calculate burnout in the boiler. Due to its lower density and greater reactivity, biomass can reach complete burnout at much larger particle sizes than coal. The burner location and the trajectories of the particles might be optimised to maximise the residence time and burnout. (author)

  2. A Coordinated Approach to Channel Estimation in Large-scale Multiple-antenna Systems

    CERN Document Server

    Yin, Haifan; Filippou, Miltiades; Liu, Yingzhuang

    2012-01-01

    This paper addresses the problem of channel estimation in multi-cell interference-limited cellular networks. We consider systems employing multiple antennas and are interested in both the finite and large-scale antenna number regimes (so-called "Massive MIMO"). Such systems deal with multi-cell interference by way of per-cell beamforming applied at each base station. Channel estimation in such networks, which is known to be hampered by the pilot contamination effect, constitutes a major bottleneck for overall performance. We present a novel approach which tackles this problem by enabling low-rate coordination between cells during the channel estimation phase itself. The coordination makes use of additional second-order statistical information about the user channels, which is shown to offer a powerful way of discriminating across interfering users with even strongly correlated pilot sequences. Importantly, we demonstrate analytically that in the large number of antennas regime the pilot contaminatio...
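
    The mechanism behind 'discriminating across interfering users' via second-order statistics can be illustrated with a toy covariance-aided MMSE estimator; the one-ring covariance model, array size, and angular spreads below are assumptions for illustration, not the paper's system model:

      import numpy as np

      rng = np.random.default_rng(0)
      M = 64                                   # base-station antennas

      def cov_from_angles(center_deg, spread_deg, paths=200):
          """Toy one-ring channel covariance for a uniform linear array."""
          ang = np.deg2rad(center_deg + spread_deg * (rng.random(paths) - 0.5))
          A = np.exp(1j * np.pi * np.outer(np.arange(M), np.sin(ang)))
          return (A @ A.conj().T) / paths

      def sample(R):
          """Draw a zero-mean channel with covariance R."""
          z = (rng.standard_normal(M) + 1j * rng.standard_normal(M)) / np.sqrt(2)
          return np.linalg.cholesky(R + 1e-9 * np.eye(M)) @ z

      R_des = cov_from_angles(10, 20)          # desired user's covariance
      R_int = cov_from_angles(50, 20)          # contaminating user, same pilot
      sigma2 = 0.1
      h_des, h_int = sample(R_des), sample(R_int)

      # Pilot observation: both users transmit the same pilot sequence.
      noise = np.sqrt(sigma2) * (rng.standard_normal(M) + 1j * rng.standard_normal(M)) / np.sqrt(2)
      y = h_des + h_int + noise

      # Covariance-aware MMSE estimate: when the users' angular supports
      # barely overlap, R_des (R_des + R_int + sigma2 I)^{-1} suppresses
      # the contamination.
      h_hat = R_des @ np.linalg.solve(R_des + R_int + sigma2 * np.eye(M), y)
      err = np.linalg.norm(h_hat - h_des)**2 / np.linalg.norm(h_des)**2
      print(f"relative estimation error: {err:.3f}")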

  3. Integrating adaptive behaviour in large-scale flood risk assessments: an Agent-Based Modelling approach

    Science.gov (United States)

    Haer, Toon; Aerts, Jeroen

    2015-04-01

    Between 1998 and 2009, Europe suffered over 213 major damaging floods, causing 1126 deaths and displacing around half a million people. In this period, floods caused at least 52 billion euro in insured economic losses, making floods the most costly natural hazard faced in Europe. In many low-lying areas, the main strategy for coping with floods is to reduce the risk of the hazard through flood defence structures, like dikes and levees. However, it has been suggested that part of the responsibility for flood protection needs to shift to households and businesses in areas at risk, and that governments and insurers can effectively stimulate the implementation of individual protective measures. Yet adaptive behaviour towards flood risk reduction and the interaction between governments, insurers, and individuals have hardly been studied in large-scale flood risk assessments. In this study, a European Agent-Based Model is developed that includes agent representatives for the administrative stakeholders of European Member States, insurer and reinsurer markets, and individuals following complex behaviour models. The Agent-Based Modelling approach allows for an in-depth analysis of the interaction between heterogeneous autonomous agents and the resulting (non-)adaptive behaviour. Existing flood damage models are part of the European Agent-Based Model to allow for a dynamic response of both the agents and the environment to changing flood risk and protective efforts. By following an Agent-Based Modelling approach, this study is a first contribution towards overcoming the limitations of traditional large-scale flood risk models, in which the influence of individual adaptive behaviour towards flood risk reduction is often lacking.

  4. Stability and Control of Large-Scale Dynamical Systems: A Vector Dissipative Systems Approach

    CERN Document Server

    Haddad, Wassim M

    2011-01-01

    Modern complex large-scale dynamical systems exist in virtually every aspect of science and engineering, and are associated with a wide variety of physical, technological, environmental, and social phenomena, including aerospace, power, communications, and network systems, to name just a few. This book develops a general stability analysis and control design framework for nonlinear large-scale interconnected dynamical systems, and presents the most complete treatment on vector Lyapunov function methods, vector dissipativity theory, and decentralized control architectures. Large-scale dynami

  5. Large scale debris-flow hazard assessment: a geotechnical approach and GIS modelling

    Directory of Open Access Journals (Sweden)

    G. Delmonaco

    2003-01-01

    A deterministic distributed model has been developed for large-scale debris-flow hazard analysis in the basin of the River Vezza (Tuscany Region, Italy). This area (51.6 km²) was affected by over 250 landslides, classified as debris/earth flows, mainly involving the metamorphic geological formations outcropping in the area and triggered by the pluviometric event of 19 June 1996. In recent decades, landslide hazard and risk analysis has been favoured by the development of GIS techniques permitting the generalisation, synthesis and modelling of stability conditions at large-scale investigation (>1:10 000). In this work, the main results derived from the application of a geotechnical model coupled with a hydrological model for debris-flow hazard assessment are reported. The analysis was developed through the following steps: a landslide inventory map derived from aerial photo interpretation and direct field survey; generation of a database and digital maps; elaboration of a DTM and derived themes (i.e. a slope angle map); definition of a superficial soil thickness map; geotechnical soil characterisation through back-analysis on test slopes and laboratory tests; inference of the influence of precipitation, for distinct return times, on ponding time and pore pressure generation; implementation of a slope stability model (the infinite slope model); and generalisation of the safety factor for estimated rainfall events with different return times. Such an approach has allowed the identification of potential source areas of debris-flow triggering for precipitation events with estimated return times of 10, 50, 75 and 100 years. The model shows a dramatic decrease of safety conditions for the simulation related to a 75-year return-time rainfall event, corresponding to an estimated cumulated daily intensity of 280–330 mm. This value can be considered the hydrological triggering
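
    The infinite slope model mentioned above has a standard closed form relating the factor of safety to soil strength, slope angle and the saturated fraction of the soil column; a minimal sketch (parameter values are illustrative, not the paper's calibration):

      import numpy as np

      def infinite_slope_fs(c_eff, phi_eff_deg, gamma, z, beta_deg, m, gamma_w=9.81):
          """Factor of safety of the infinite slope model (a standard form;
          the paper couples it with a hydrological model for pore pressure).
          c_eff: effective cohesion (kPa); phi_eff_deg: friction angle (deg)
          gamma: soil unit weight (kN/m3); z: soil depth (m)
          beta_deg: slope angle (deg); m: saturated fraction of the column"""
          beta = np.deg2rad(beta_deg)
          phi = np.deg2rad(phi_eff_deg)
          resisting = c_eff + (gamma - m * gamma_w) * z * np.cos(beta)**2 * np.tan(phi)
          driving = gamma * z * np.sin(beta) * np.cos(beta)
          return resisting / driving

      # FS drops toward 1 as rainfall raises the saturated fraction m.
      for m in (0.0, 0.5, 1.0):
          print(m, round(float(infinite_slope_fs(5.0, 30.0, 18.0, 2.0, 35.0, m)), 2))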

  6. A Cost-Effective Planning Graph Approach for Large-Scale Web Service Composition

    Directory of Open Access Journals (Sweden)

    Szu-Yin Lin

    2012-01-01

    Web Service Composition (WSC) problems can be considered as a service matching problem, in which the output parameters of one Web service can be used as the inputs of another. However, when a very large number of Web services are deployed in the environment, service composition becomes a sophisticated and complicated process. In this study, we propose a novel cost-effective Web service composition mechanism. It utilizes a planning graph based on a backward search algorithm to find multiple feasible solutions and recommends the best composition solution according to the lowest service cost. In other words, the proposed approach is a goal-driven mechanism which recommends approximate solutions that consume fewer Web services and fewer nested levels of composite services. Finally, we implement a simulation platform to validate the proposed cost-effective planning graph mechanism in a large-scale Web services environment. The simulation results show that our algorithm based on the backward planning graph reduces service cost by 94% in three different service composition environments, compared with other existing service composition approaches based on a forward planning graph.
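
    The record gives only the outline of the backward search, but the core idea (resolve the goal's parameters by working from outputs back to inputs, preferring low-cost producers) can be sketched; the service registry below is entirely hypothetical:

      # Toy registry (all names hypothetical): service -> (inputs, outputs, cost).
      SERVICES = {
          "geocode":   ({"address"}, {"lat", "lon"}, 1.0),
          "weather":   ({"lat", "lon"}, {"forecast"}, 2.0),
          "cheap_wx":  ({"lat", "lon"}, {"forecast"}, 0.5),
          "translate": ({"forecast"}, {"forecast_fr"}, 1.0),
      }

      def backward_compose(provided, goal):
          """Backward search from the goal parameters: repeatedly pick the
          cheapest service producing a still-unresolved parameter, until
          every open parameter is user-provided. (No cycle handling; a sketch.)"""
          plan, open_params = [], set(goal)
          while open_params - provided:
              need = min(open_params - provided)    # deterministic pick
              producers = [(cost, name, ins, outs)
                           for name, (ins, outs, cost) in SERVICES.items()
                           if need in outs]
              if not producers:
                  raise ValueError(f"no service produces {need!r}")
              cost, name, ins, outs = min(producers, key=lambda t: t[0])
              plan.append(name)
              open_params = (open_params - outs) | set(ins)
          return list(reversed(plan))               # execution order, inputs first

      print(backward_compose({"address"}, {"forecast_fr"}))
      # -> ['geocode', 'cheap_wx', 'translate']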

  7. A modular approach to large-scale design optimization of aerospace systems

    Science.gov (United States)

    Hwang, John T.

    Gradient-based optimization and the adjoint method form a synergistic combination that enables the efficient solution of large-scale optimization problems. Though the gradient-based approach struggles with non-smooth or multi-modal problems, the capability to efficiently optimize up to tens of thousands of design variables provides a valuable design tool for exploring complex tradeoffs and finding unintuitive designs. However, the widespread adoption of gradient-based optimization is limited by the implementation challenges for computing derivatives efficiently and accurately, particularly in multidisciplinary and shape design problems. This thesis addresses these difficulties in two ways. First, to deal with the heterogeneity and integration challenges of multidisciplinary problems, this thesis presents a computational modeling framework that solves multidisciplinary systems and computes their derivatives in a semi-automated fashion. This framework is built upon a new mathematical formulation developed in this thesis that expresses any computational model as a system of algebraic equations and unifies all methods for computing derivatives using a single equation. The framework is applied to two engineering problems: the optimization of a nanosatellite with 7 disciplines and over 25,000 design variables; and simultaneous allocation and mission optimization for commercial aircraft involving 330 design variables, 12 of which are integer variables handled using the branch-and-bound method. In both cases, the framework makes large-scale optimization possible by reducing the implementation effort and code complexity. The second half of this thesis presents a differentiable parametrization of aircraft geometries and structures for high-fidelity shape optimization. Existing geometry parametrizations are not differentiable, or they are limited in the types of shape changes they allow. This is addressed by a novel parametrization that smoothly interpolates aircraft

  8. Topographic mapping on large-scale tidal flats with an iterative approach on the waterline method

    Science.gov (United States)

    Kang, Yanyan; Ding, Xianrong; Xu, Fan; Zhang, Changkuan; Ge, Xiaoping

    2017-05-01

    Tidal flats, which are both a natural ecosystem and a type of landscape, are of significant importance to ecosystem function and land resource potential. Morphologic monitoring of tidal flats has become increasingly important with respect to achieving sustainable development targets. Remote sensing is an established technique for the measurement of topography over tidal flats; of the available methods, the waterline method is particularly effective for constructing a digital elevation model (DEM) of intertidal areas. However, application of the waterline method is more limited in large-scale, shifting tidal flat areas, where the tides are not synchronized and the waterline is not a quasi-contour line. For this study, a topographic map of the intertidal regions within the Radial Sand Ridges (RSR) along the Jiangsu Coast, China, was generated using an iterative approach to the waterline method. A series of 21 multi-temporal satellite images (18 HJ-1A/B CCD and three Landsat TM/OLI) of the RSR area, collected at different water levels within a five-month period (31 December 2013-28 May 2014), was used to extract waterlines based on feature extraction techniques followed by manual refinement. These 'remotely-sensed waterlines' were combined with the corresponding water levels from the 'model waterlines' simulated by a hydrodynamic model with an initial generalized DEM of exposed tidal flats. Based on the 21 heighted 'remotely-sensed waterlines', a DEM was constructed using the ANUDEM interpolation method. Using this new DEM as the input data, it was re-entered into the hydrodynamic model, and a new round of water-level assignment of waterlines was performed. A third and final output DEM was generated, covering an area of approximately 1900 km² of tidal flats in the RSR. The water-level simulation accuracy of the hydrodynamic model was within 0.15 m based on five real-time tide stations, and the height accuracy (root mean square error) of the final DEM was 0.182 m

  9. Neural ensemble communities: open-source approaches to hardware for large-scale electrophysiology.

    Science.gov (United States)

    Siegle, Joshua H; Hale, Gregory J; Newman, Jonathan P; Voigts, Jakob

    2015-06-01

    One often-overlooked factor when selecting a platform for large-scale electrophysiology is whether or not a particular data acquisition system is 'open' or 'closed': that is, whether or not the system's schematics and source code are available to end users. Open systems have a reputation for being difficult to acquire, poorly documented, and hard to maintain. With the arrival of more powerful and compact integrated circuits, rapid prototyping services, and web-based tools for collaborative development, these stereotypes must be reconsidered. We discuss some of the reasons why multichannel extracellular electrophysiology could benefit from open-source approaches and describe examples of successful community-driven tool development within this field. In order to promote the adoption of open-source hardware and to reduce the need for redundant development efforts, we advocate a move toward standardized interfaces that connect each element of the data processing pipeline. This will give researchers the flexibility to modify their tools when necessary, while allowing them to continue to benefit from the high-quality products and expertise provided by commercial vendors.

  10. Query Large Scale Microarray Compendium Datasets Using a Model-Based Bayesian Approach with Variable Selection

    Science.gov (United States)

    Hu, Ming; Qin, Zhaohui S.

    2009-01-01

    In microarray gene expression data analysis, it is often of interest to identify genes that share similar expression profiles with a particular gene such as a key regulatory protein. Multiple studies have been conducted using various correlation measures to identify co-expressed genes. While working well for small datasets, the heterogeneity introduced from increased sample size inevitably reduces the sensitivity and specificity of these approaches. This is because most co-expression relationships do not extend to all experimental conditions. With the rapid increase in the size of microarray datasets, identifying functionally related genes from large and diverse microarray gene expression datasets is a key challenge. We develop a model-based gene expression query algorithm built under the Bayesian model selection framework. It is capable of detecting co-expression profiles under a subset of samples/experimental conditions. In addition, it allows linearly transformed expression patterns to be recognized and is robust against sporadic outliers in the data. Both features are critically important for increasing the power of identifying co-expressed genes in large scale gene expression datasets. Our simulation studies suggest that this method outperforms existing correlation coefficients or mutual information-based query tools. When we apply this new method to the Escherichia coli microarray compendium data, it identifies a majority of known regulons as well as novel potential target genes of numerous key transcription factors. PMID:19214232

  11. Comparing large-scale computational approaches to epidemic modeling: Agent-based versus structured metapopulation models

    Directory of Open Access Journals (Sweden)

    Merler Stefano

    2010-06-01

    Background: In recent years, large-scale computational models for the realistic simulation of epidemic outbreaks have been used with increasing frequency. Methodologies adapt to the scale of interest and range from very detailed agent-based models to spatially-structured metapopulation models. One major issue thus concerns to what extent the geotemporal spreading pattern found by different modeling approaches may differ and depend on the different approximations and assumptions used. Methods: We provide for the first time a side-by-side comparison of the results obtained with a stochastic agent-based model and a structured metapopulation stochastic model for the progression of a baseline pandemic event in Italy, a large and geographically heterogeneous European country. The agent-based model is based on the explicit representation of the Italian population through highly detailed data on the socio-demographic structure. The metapopulation simulations use the GLobal Epidemic and Mobility (GLEaM) model, based on high-resolution census data worldwide, and integrating airline travel flow data with short-range human mobility patterns at the global scale. The model also considers age structure data for Italy. GLEaM and the agent-based models are synchronized in their initial conditions by using the same disease parameterization, and by defining the same importation of infected cases from international travels. Results: The results obtained show that both models provide epidemic patterns that are in very good agreement at the granularity levels accessible by both approaches, with differences in peak timing of the order of a few days. The relative difference of the epidemic size depends on the basic reproductive ratio, R0, and on the fact that the metapopulation model consistently yields a larger incidence than the agent-based model, as expected due to the differences in the structure of the intra-population contact pattern of the approaches. The age

  12. Presenting an Approach for Conducting Knowledge Architecture within Large-Scale Organizations.

    Science.gov (United States)

    Varaee, Touraj; Habibi, Jafar; Mohaghar, Ali

    2015-01-01

    Knowledge architecture (KA) establishes the basic groundwork for the successful implementation of a short-term or long-term knowledge management (KM) program. An example of KA is the design of a prototype before a new vehicle is manufactured. Due to a transformation to large-scale organizations, the traditional architecture of organizations is undergoing fundamental changes. This paper explores the main strengths and weaknesses in the field of KA within large-scale organizations and provides a suitable methodology and supervising framework to overcome specific limitations. This objective was achieved by applying and updating the concepts from the Zachman information architectural framework and the information architectural methodology of enterprise architecture planning (EAP). The proposed solution may be beneficial for architects in knowledge-related areas to successfully accomplish KM within large-scale organizations. The research method is descriptive; its validity is confirmed by performing a case study and polling the opinions of KA experts.

  13. Large-scale co-expression approach to dissect secondary cell wall formation across plant species

    Directory of Open Access Journals (Sweden)

    Colin Ruprecht

    2011-07-01

    Plant cell walls are complex composites largely consisting of carbohydrate-based polymers, and are generally divided into primary and secondary walls based on content and characteristics. Cellulose microfibrils constitute a major component of both primary and secondary cell walls and are synthesized at the plasma membrane by cellulose synthase (CESA) complexes. Several studies in Arabidopsis have demonstrated the power of co-expression analyses to identify new genes associated with secondary wall cellulose biosynthesis. However, cross-species comparative co-expression analyses remain largely unexplored. Here, we compared co-expressed gene vicinity networks of primary and secondary wall CESAs in Arabidopsis, barley, rice, poplar, soybean, Medicago and wheat, and identified gene families that are consistently co-regulated with cellulose biosynthesis. In addition to the expected polysaccharide-acting enzymes, we also found many gene families associated with the cytoskeleton, signaling, transcriptional regulation, oxidation and protein degradation. Based on these analyses, we selected and biochemically analyzed T-DNA insertion lines corresponding to approximately twenty genes from gene families that recur in the co-expressed gene vicinity networks of secondary wall CESAs across the seven species. We developed a statistical pipeline using principal component analysis (PCA) and optimal clustering based on silhouette width to analyze sugar profiles. One of the mutants, corresponding to a pinoresinol reductase gene, displayed disturbed xylem morphology and contained lower levels of lignin. We propose that this type of large-scale co-expression approach, coupled with statistical analysis of cell wall contents, will be useful for facilitating rapid knowledge transfer across plant species.

  14. An Accurate Approach to Large-Scale IP Traffic Matrix Estimation

    Science.gov (United States)

    Jiang, Dingde; Hu, Guangmin

    This letter proposes a novel method for large-scale IP traffic matrix (TM) estimation, called algebraic reconstruction technique inference (ARTI), which is based on partial flow measurements and the Fratar model. In contrast to previous methods, ARTI can accurately capture the spatio-temporal correlations of the TM. Moreover, ARTI is computationally simple since it uses the algebraic reconstruction technique. We use real data from the Abilene network to validate ARTI. Simulation results show that ARTI can accurately estimate large-scale IP TMs and track their dynamics.
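
    The algebraic reconstruction technique at ARTI's core is the classical Kaczmarz row-projection iteration for the under-determined link-load equations A x = y; a minimal sketch (the toy routing matrix and the way a prior seeds the iteration are assumptions, since the record does not detail how the Fratar model enters):

      import numpy as np

      def art_solve(A, y, iters=200, relax=0.5, x0=None):
          """Kaczmarz-style algebraic reconstruction for A x = y: repeatedly
          project the estimate onto each measurement row's hyperplane."""
          m, n = A.shape
          x = np.zeros(n) if x0 is None else x0.astype(float).copy()
          for _ in range(iters):
              for i in range(m):
                  a = A[i]
                  denom = float(a @ a)
                  if denom > 0.0:
                      x += relax * (y[i] - a @ x) / denom * a
              x = np.maximum(x, 0.0)      # traffic demands are non-negative
          return x

      # Toy network: 4 origin-destination flows observed via 3 link loads.
      A = np.array([[1, 1, 0, 0],         # routing matrix: flows per link
                    [0, 1, 1, 0],
                    [0, 0, 1, 1]], dtype=float)
      x_true = np.array([3.0, 1.0, 4.0, 2.0])
      x_hat = art_solve(A, A @ x_true, x0=np.full(4, 2.5))   # prior as the seed
      print(x_hat)                        # consistent with all link loads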

  15. A novel computational approach towards the certification of large-scale boson sampling

    Science.gov (United States)

    Huh, Joonsuk

    Recent proposals of boson sampling and the corresponding experiments exhibit a possible disproof of the extended Church-Turing thesis. Furthermore, the application of boson sampling to molecular computation has been suggested theoretically. Until now, however, only small-scale experiments with a few photons have been successfully performed. Boson sampling experiments with 20-30 photons are expected to reveal the computational superiority of the quantum device. A novel theoretical proposal for large-scale boson sampling using microwave photons is highly promising due to the deterministic photon sources and the scalability. Therefore, a certification protocol for large-scale boson sampling experiments is needed to complete this exciting story. We propose, in this presentation, a computational protocol towards the certification of large-scale boson sampling. The correlations of paired photon modes and the time-dependent characteristic functional with its Fourier components can show the fingerprint of large-scale boson sampling. This work was supported by Basic Science Research Program through the National Research Foundation of Korea (NRF) funded by the Ministry of Education, Science and Technology (NRF-2015R1A6A3A04059773), the ICT R&D program of MSIP/IITP [2015-019, Fundamental Research Toward Secure Quantum Communication] and the Mueunjae Institute for Chemistry (MIC) postdoctoral fellowship.

  16. Strategic and Collaborative Crisis Management: A Partnership Approach to Large-Scale Crisis

    Science.gov (United States)

    Mann, Timothy

    2007-01-01

    Large-scale crisis such as natural disasters and acts of terrorism can have a paralyzing effect on the campus community and business continuity. Campus officials in these situations face significant challenges that go beyond the immediate response including re-building the physical plant, restoring campus infrastructure, retaining displaced…

  17. Large-Scale Computations Leading to a First-Principles Approach to Nuclear Structure

    Energy Technology Data Exchange (ETDEWEB)

    Ormand, W E; Navratil, P

    2003-08-18

    We report on large-scale applications of the ab initio, no-core shell model with the primary goal of achieving an accurate description of nuclear structure from the fundamental inter-nucleon interactions. In particular, we show that realistic two-nucleon interactions are inadequate to describe the low-lying structure of ¹⁰B, and that realistic three-nucleon interactions are essential.

  18. PLATO: data-oriented approach to collaborative large-scale brain system modeling.

    Science.gov (United States)

    Kannon, Takayuki; Inagaki, Keiichiro; Kamiji, Nilton L; Makimura, Kouji; Usui, Shiro

    2011-11-01

    The brain is a complex information processing system, which can be divided into sub-systems, such as the sensory organs, functional areas in the cortex, and motor control systems. In this sense, most of the mathematical models developed in the field of neuroscience have mainly targeted a specific sub-system. In order to understand the details of the brain as a whole, such sub-system models need to be integrated toward the development of a neurophysiologically plausible large-scale system model. In the present work, we propose a model integration library where models can be connected by means of a common data format. Here, the common data format should be portable so that models written in any programming language, computer architecture, and operating system can be connected. Moreover, the library should be simple so that models can be adapted to use the common data format without requiring any detailed knowledge on its use. Using this library, we have successfully connected existing models reproducing certain features of the visual system, toward the development of a large-scale visual system model. This library will enable users to reuse and integrate existing and newly developed models toward the development and simulation of a large-scale brain system model. The resulting model can also be executed on high performance computers using Message Passing Interface (MPI).

  19. A compact to revitalise large-scale irrigation systems: A ‘theory of change’ approach

    Directory of Open Access Journals (Sweden)

    Bruce A. Lankford

    2016-02-01

    In countries with transitional economies such as those found in South Asia, large-scale irrigation systems (LSIS) with a history of public ownership account for about 115 million ha (Mha), or approximately 45% of their total area under irrigation. In terms of the global area of irrigation (320 Mha for all countries), LSIS are estimated at 130 Mha, or 40% of irrigated land. These systems can potentially deliver significant local, regional and global benefits in terms of food, water and energy security, employment, economic growth and ecosystem services. For example, primary crop production is conservatively valued at about US$355 billion. However, efforts to enhance these benefits and reform the sector have been costly, and outcomes have been underwhelming and short-lived. We propose the application of a 'theory of change' (ToC) as a foundation for promoting transformational change in large-scale irrigation, centred upon a 'global irrigation compact' that promotes new forms of leadership, partnership and ownership (LPO). The compact argues that LSIS can change by switching away from the current channelling of aid finances controlled by government irrigation agencies. Instead it is for irrigators, closely partnered by private, public and NGO advisory and regulatory services, to develop strong leadership models and to find new compensatory partnerships with cities and other river basin neighbours. The paper summarises key assumptions for change in the LSIS sector, including the need to initially test this change via a handful of volunteer systems. Our other key purpose is to demonstrate a ToC template by which large-scale irrigation policy can be better elaborated and discussed.

  20. A quantitative approach to the topology of large-scale structure [for galactic clustering computation]

    Science.gov (United States)

    Gott, J. Richard, III; Weinberg, David H.; Melott, Adrian L.

    1987-01-01

    A quantitative measure of the topology of large-scale structure, the genus of density contours in a smoothed density distribution, is described and applied. For random-phase (Gaussian) density fields, the mean genus per unit volume exhibits a universal dependence on threshold density, with a normalizing factor that can be calculated from the power spectrum. If large-scale structure formed from the gravitational instability of small-amplitude density fluctuations, the topology observed today on suitable scales should follow the topology of the initial conditions. The technique is illustrated by applying it to simulations of galaxy clustering in a flat universe dominated by cold dark matter. The technique is also applied to a volume-limited sample of the CfA redshift survey and to a model in which galaxies reside on the surfaces of polyhedral 'bubbles'. The topology of the evolved mass distribution and 'biased' galaxy distribution in the cold dark matter models closely matches the topology of the density fluctuations in the initial conditions. The topology of the observational sample is consistent with the random-phase, cold dark matter model.
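
    The 'universal dependence on threshold density' has the well-known closed form g(ν) ∝ (1 − ν²) exp(−ν²/2); a small sketch evaluating it (the normalization uses the standard spectral-moment factor, with the moment ratio set to 1 purely for illustration):

      import numpy as np

      def gaussian_genus_density(nu, sigma1_over_sigma=1.0):
          """Mean genus per unit volume of excursion sets of a Gaussian random
          field at threshold nu (in units of the field's standard deviation):
              g(nu) = N * (1 - nu**2) * exp(-nu**2 / 2),
          where the normalization N depends on the power spectrum through the
          moment ratio sigma1/sigma (set to 1 here for illustration)."""
          N = (1.0 / (2 * np.pi) ** 2) * (sigma1_over_sigma ** 2 / 3.0) ** 1.5
          return N * (1.0 - nu ** 2) * np.exp(-nu ** 2 / 2.0)

      nu = np.linspace(-3, 3, 7)
      print(gaussian_genus_density(nu))   # positive (sponge-like) near nu = 0,
                                          # negative (isolated clusters/voids) at |nu| > 1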

  1. An Analytical Approach for Optimal Clustering Architecture for Maximizing Lifetime in Large Scale Wireless Sensor Networks

    Directory of Open Access Journals (Sweden)

    Mr. Yogesh Rai

    2011-09-01

    Many methods have been researched to prolong sensor network lifetime using mobile technologies. In mobile sink research, the track-based methods and the anchor-point-based methods are representative operation methods for mobile sinks. However, the existing methods decrease Quality of Service (QoS) and create routing hotspots in the vicinity of the mobile sink. In large-scale wireless sensor networks, clustering is an effective technique for improving the utilization of limited energy and prolonging the network lifetime. However, the problem of unbalanced energy dissipation exists in the multi-hop clustering model, where the cluster heads closer to the sink have to relay heavier traffic and consume more energy than farther nodes. In this paper we analyze several aspects of the optimal clustering architecture for maximizing lifetime in large-scale wireless sensor networks. We also provide some analytical concepts for energy-aware head rotation and routing protocols to further balance the energy consumption among all nodes.
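
    The energy analysis behind such clustering arguments is usually built on the first-order radio model; a minimal sketch of that model and the LEACH-style closed form for the energy-balancing cluster count (the radio constants and the closed form are standard textbook assumptions, not taken from this paper):

      import numpy as np

      # First-order radio model constants (typical textbook values, assumed).
      E_ELEC = 50e-9        # J/bit, transceiver electronics
      EPS_FS = 10e-12       # J/bit/m^2, free-space amplifier
      EPS_MP = 0.0013e-12   # J/bit/m^4, multipath amplifier

      def optimal_clusters(n_nodes, field_m, d_to_sink):
          """LEACH-style closed form for the cluster count that balances
          intra-cluster traffic against cluster-head-to-sink transmissions."""
          return (np.sqrt(n_nodes / (2 * np.pi)) * np.sqrt(EPS_FS / EPS_MP)
                  * field_m / d_to_sink**2)

      def tx_energy(bits, d, d0=np.sqrt(EPS_FS / EPS_MP)):
          """Energy to transmit `bits` over distance d (two-slope path loss)."""
          amp = EPS_FS * d**2 if d < d0 else EPS_MP * d**4
          return bits * (E_ELEC + amp)

      print(round(float(optimal_clusters(400, 100.0, 120.0)), 1), "cluster heads")
      print(tx_energy(4000, 80.0), "J per 4000-bit packet at 80 m")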

  2. Sub-bottom profiling for large-scale maritime archaeological survey: An experience-based approach

    DEFF Research Database (Denmark)

    Grøn, Ole; Boldreel, Lars Ole

    2013-01-01

    …investigation of the sea floor. This commercial activity can take the form of aggregate extraction, fishing, installation of facilities such as windmills, cables or pipelines, and the construction of bridges, harbours, etc. Non-invasive acoustic survey methods play a significant role in the mapping of the submerged cultural heritage. Elements such as archaeological wreck sites exposed on the sea floor are mapped using side-scan and multi-beam techniques. These can also provide information on bathymetric patterns representing potential Stone Age settlements, whereas the detection of such archaeological sites and wrecks partially or wholly embedded in the sea-floor sediments demands the application of high-resolution sub-bottom profilers. This paper presents a strategy for the cost-effective large-scale mapping of unknown sediment-embedded sites such as submerged Stone Age settlements or wrecks, based on sub-bottom profiling…

  3. Descriptor-variable approach to modeling and optimization of large-scale systems. Final report, March 1976 – February 1979

    Energy Technology Data Exchange (ETDEWEB)

    Stengel, D N; Luenberger, D G; Larson, R E; Cline, T B

    1979-02-01

    A new approach to modeling and analysis of systems is presented that exploits the underlying structure of the system. The development of the approach focuses on a new modeling form, called 'descriptor variable' systems, which was first introduced in this research. Key concepts concerning the classification and solution of descriptor-variable systems are identified, and theories are presented for the linear case, the time-invariant linear case, and the nonlinear case. Several standard systems notions are demonstrated to have interesting interpretations when analyzed via descriptor-variable theory. The approach developed also addresses the optimization of large-scale systems. Descriptor-variable models are convenient representations of subsystems in an interconnected network, and optimization of these models via dynamic programming is described. A general procedure for the optimization of large-scale systems, called spatial dynamic programming, is presented, in which the optimization is spatially decomposed in the way standard dynamic programming temporally decomposes the optimization of dynamical systems. Applications of this approach to large-scale economic markets and power systems are discussed.
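
    To make the 'descriptor variable' form concrete: a linear descriptor system E x' = A x has a singular matrix E, so it mixes differential and algebraic equations. The toy reduction below is illustrative only (the matrices are invented and assumed index-1): it solves the algebraic constraint and integrates the remaining differential state:

    ```python
    import numpy as np
    from scipy.integrate import solve_ivp

    # E x' = A x with E = [[1, 0], [0, 0]]:
    #   x1' = a11*x1 + a12*x2   (differential equation)
    #   0   = a21*x1 + a22*x2   (algebraic constraint)
    A = np.array([[-1.0, 0.5],
                  [ 2.0, -4.0]])

    def reduced_rhs(t, x1):
        x2 = -A[1, 0] * x1 / A[1, 1]          # solve the constraint for x2
        return A[0, 0] * x1 + A[0, 1] * x2

    sol = solve_ivp(reduced_rhs, (0.0, 5.0), [1.0])
    x1_final = sol.y[0, -1]
    x2_final = -A[1, 0] * x1_final / A[1, 1]  # recover the algebraic state
    print(x1_final, x2_final)
    ```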

  4. Experimental approach for mixed-mode fatigue delamination crack growth with large-scale bridging in polymer composites

    DEFF Research Database (Denmark)

    Holmes, John W.; Liu, Liu; Sørensen, Bent F.

    2014-01-01

    An experimental apparatus utilizing double cantilever beam specimens loaded with uneven bending moments was developed to study mixed-mode fatigue crack growth in composites. The approach is suitable when large-scale bridging of cracks is present. To illustrate the testing method, cyclic growth of delaminations in a typical fibre-reinforced polymer composite was investigated under a constant cyclic loading amplitude. Pure mode I, mode II and mixed-mode crack growth conditions were examined. The results, analysed using a J-integral approach, show that the double cantilever beam loaded with uneven bending moments … crack growth rate observed. In addition to details concerning the equipment, a general discussion of the development of cyclic bridging laws for delamination growth in the presence of large-scale bridging is provided.
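
    Cyclic delamination data of this kind are commonly condensed into a Paris-type relation da/dN = C(dJ)^m, with dJ supplied by a J-integral analysis. The fit below uses synthetic numbers and is a generic illustration, not the paper's bridging-law development:

    ```python
    import numpy as np

    # Synthetic (delta-J, da/dN) pairs standing in for measured growth rates.
    delta_J = np.array([100.0, 200.0, 400.0, 800.0])    # J/m^2
    da_dN = np.array([1.0e-8, 8.0e-8, 6.4e-7, 5.1e-6])  # m/cycle

    # da/dN = C * delta_J**m is a straight line in log-log space.
    m, log_C = np.polyfit(np.log(delta_J), np.log(da_dN), 1)
    C = np.exp(log_C)
    print(f"m = {m:.2f}, C = {C:.3e}")
    print("predicted da/dN at delta_J = 300 J/m^2:", C * 300.0 ** m)
    ```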

  5. Large scale production of megakaryocytes from human pluripotent stem cells by a chemically defined forward programming approach

    OpenAIRE

    Moreau, Thomas; Evans, Amanda L.; Vasquez, Louella; Tijssen, Marloes R.; Yan, Ying; Trotter, Matthew W.; Howard, Daniel; Colzani, Maria; Arumugam, Meera; Wu, Wing Han; Dalby, Amanda; Lampela, Riina; Bouet, Guenaelle; Hobbs, Catherine M.; Pask, Dean C.

    2016-01-01

    The production of megakaryocytes (MKs) – the precursors of blood platelets – from human pluripotent stem cells (hPSCs) offers exciting clinical opportunities for transfusion medicine. We describe an original approach for the large scale generation of MKs in chemically defined conditions using a forward programming strategy relying on the concurrent exogenous e...

  6. Stereotype Threat, Inquiring about Test Takers' Race and Gender, and Performance on Low-Stakes Tests in a Large-Scale Assessment. Research Report. ETS RR-15-02

    Science.gov (United States)

    Stricker, Lawrence J.; Rock, Donald A.; Bridgeman, Brent

    2015-01-01

    This study explores stereotype threat on low-stakes tests used in a large-scale assessment, math and reading tests in the Education Longitudinal Study of 2002 (ELS). Issues identified in laboratory research (though not observed in studies of high-stakes tests) were assessed: whether inquiring about their race and gender is related to the…

  7. A large-scale functional approach to uncover human genes and pathways in Drosophila

    Institute of Scientific and Technical Information of China (English)

    Rong Xu; Yuan Zhuang; Tian Xu; Kejing Deng; Yi Zhu; Yue Wu; Jing Ren; Min Wan; Shouyuan Zhao; Xiaohui Wu; Min Han

    2008-01-01

    We demonstrate the feasibility of performing a systematic screen for human gene functions in Drosophila by assaying for their ability to induce overexpression phenotypes. Over 1,500 transgenic fly lines corresponding to 236 human genes have been established. In all, 51 lines are capable of eliciting a phenotype, suggesting that the human genes are functional. These heterologous genes are functionally relevant, as we have found a similar mutant phenotype caused either by a dominant negative mutant form of the human ribosomal protein L8 gene or by RNAi downregulation of the Drosophila RPL8. Significantly, the Drosophila RPL8 mutant can be rescued by wild-type human RPL8. We also provide genetic evidence that Drosophila RPL8 is a new member of the insulin signaling pathway. In summary, the functions of many human genes appear to be highly conserved, and the ability to identify them in Drosophila represents a powerful genetic tool for large-scale analysis of human transcripts in vivo.

  8. A New Approach to Probing Large Scale Power with Peculiar Velocities

    CERN Document Server

    Feldman, Hume A.; Watkins, Richard

    1997-01-01

    We propose a new strategy to probe the power spectrum on large scales using galaxy peculiar velocities. We explore the properties of surveys that cover only two small fields in opposing directions on the sky. Surveys of this type have several advantages over those that attempt to cover the entire sky; in particular, by concentrating galaxies in narrow cones these surveys are able to achieve the density needed to measure several moments of the velocity field with only a modest number of objects, even for surveys designed to probe very large scales. We characterize them in terms of the three moments to which they are most sensitive, calculate window functions for these moments, and construct a $\chi^2$ statistic which can be used to put constraints on the power spectrum. In order to explore the sensitivity of these surveys, we calculate the expectation values of the moments and their associated measurement noise as a function of survey parameters such as density and depth, and for several popular models of structure formation. We find…
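
    Schematically, the statistic works as follows: with measured moments m (zero mean under the model), window functions W_ij(k), a trial spectrum P(k) and a noise covariance N, one forms C = sum_k W(k) P(k) + N and evaluates chi^2 = m^T C^{-1} m. Everything in the sketch below is an invented placeholder for real survey quantities:

    ```python
    import numpy as np

    def chi2_for_model(moments, window, p_k, noise_cov):
        """chi^2 of measured velocity moments under a trial power spectrum.
        window[i, j, k] holds W_ij at the k-th point of the k-grid."""
        signal_cov = np.tensordot(window, p_k, axes=([2], [0]))
        cov = signal_cov + noise_cov
        return moments @ np.linalg.solve(cov, moments)

    rng = np.random.default_rng(0)
    n_moments, n_k = 3, 50
    vecs = rng.normal(size=(n_k, n_moments))
    window = np.einsum("ki,kj->ijk", vecs, vecs)  # positive semi-definite W(k)
    moments = rng.normal(size=n_moments)
    print(chi2_for_model(moments, window, np.ones(n_k), 0.1 * np.eye(n_moments)))
    ```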

  9. Data-Gathering Scheme Using AUVs in Large-Scale Underwater Sensor Networks: A Multihop Approach

    Directory of Open Access Journals (Sweden)

    Jawaad Ullah Khan

    2016-09-01

    In this paper, we propose a data-gathering scheme for hierarchical underwater sensor networks, where multiple Autonomous Underwater Vehicles (AUVs) are deployed over large-scale coverage areas. The deployed AUVs constitute an intermittently connected multihop network through inter-AUV synchronization (in this paper, synchronization means an interconnection between nodes for communication) for forwarding data to the designated sink. In such a scenario, the performance of the multihop communication depends upon the synchronization among the vehicles. The mobility parameters of the vehicles vary continuously because of the constantly changing underwater currents. The variations in the AUV mobility parameters reduce the inter-AUV synchronization frequency, contributing to delays in the multihop communication. The proposed scheme improves the AUV synchronization frequency by permitting neighboring AUVs to share their status information via a pre-selected node called an agent-node at the static layer of the network. We evaluate the proposed scheme in terms of the AUV synchronization frequency, vertical delay (node→AUV), horizontal delay (AUV→AUV), end-to-end delay, and the packet loss ratio. Simulation results show that the proposed scheme significantly reduces the aforementioned delays without the synchronization time-out process employed in conventional works.

  10. An efficient approach for preprocessing data from a large-scale chemical sensor array.

    Science.gov (United States)

    Leo, Marco; Distante, Cosimo; Bernabei, Mara; Persaud, Krishna

    2014-09-24

    In this paper, an artificial olfactory system (Electronic Nose) that mimics the biological olfactory system is introduced. The device consists of a Large-Scale Chemical Sensor Array (16,384 sensors, made of 24 different kinds of conducting polymer materials) that supplies data to software modules, which perform advanced data processing. In particular, the paper concentrates on the software components, consisting, first, of a crucial step that normalizes the heterogeneous sensor data and reduces their inherent noise. Cleaned data are then supplied as input to a data reduction procedure that extracts the most informative and discriminant directions, in order to obtain an efficient representation in a lower-dimensional space where it is easier to find a robust mapping between the observed outputs and the characteristics of the odors presented to the device. Experimental qualitative proofs of the validity of the procedure are given by analyzing data acquired for two different pure analytes and their binary mixtures. Moreover, a classification task is performed in order to explore the possibility of automatically recognizing pure compounds and of predicting binary mixture concentrations.
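
    A common concrete realization of this normalize-then-reduce pipeline uses per-channel standardization followed by PCA and a simple classifier; these are generic stand-ins for the paper's specific procedures, and the array shape and labels below are synthetic:

    ```python
    import numpy as np
    from sklearn.decomposition import PCA
    from sklearn.linear_model import LogisticRegression
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler

    rng = np.random.default_rng(1)
    # 200 sniffs x 16384 sensor responses; two analyte classes (synthetic stand-in).
    X = rng.normal(size=(200, 16384))
    y = rng.integers(0, 2, size=200)
    X[y == 1, :100] += 1.5  # weak class-dependent signal on a sensor subset

    # Normalize each sensor channel, project onto a few informative directions,
    # then classify pure compounds in the reduced space.
    clf = make_pipeline(StandardScaler(), PCA(n_components=10), LogisticRegression())
    clf.fit(X[:150], y[:150])
    print("held-out accuracy:", clf.score(X[150:], y[150:]))
    ```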

  11. Improving irrigation efficiency in Italian apple orchards: A large-scale approach

    Science.gov (United States)

    Della Chiesa, Stefano; la Cecilia, Daniele; Niedrist, Georg; Hafner, Hansjörg; Thalheimer, Martin; Tappeiner, Ulrike

    2016-04-01

    The northern Italian region of South Tyrol is Europe's largest apple-growing area. To achieve economically relevant fruit quality and quantity, the relatively dry climate of the region (450-700 mm) is compensated by large-scale irrigation management, which until now has followed old, traditional rights. Due to ongoing climatic changes and rising public sensitivity toward the sustainable use of water resources, irrigation practices are increasingly discussed critically. In order to establish an objective and quantitative basis of information to optimise irrigation practice, 17 existing microclimatic stations were upgraded with soil moisture and soil water potential sensors. As a second information layer, a data set of 20,000 soil analyses has been geo-referenced and spatialized using a modern geostatistical method. Finally, to assess whether zones with a shallow aquifer influence soil water availability, data from 70 groundwater depth measuring stations were retrieved. The preliminary results highlight that in many locations, in particular in the valley bottoms, irrigation largely exceeds plant water needs, because either the shallow aquifer provides sufficient water supply by capillary rise processes into the root zone, or irrigation is applied without accounting for the specific soil properties.

  12. Estimating heterotrophic respiration at large scales: challenges, approaches, and next steps

    Energy Technology Data Exchange (ETDEWEB)

    Bond-Lamberty, Benjamin; Epron, Daniel; Harden, Jennifer W.; Harmon, Mark; Hoffman, F. M.; Kumar, Jitendra; McGuire, A. David; Vargas, Rodrigo

    2016-06-27

    Heterotrophic respiration (HR), the aerobic and anaerobic processes mineralizing organic matter, is a key carbon flux but one impossible to measure at scales significantly larger than small experimental plots. This impedes our ability to understand carbon and nutrient cycles, benchmark models, or reliably upscale point measurements. Given that a new generation of highly mechanistic, genomic-specific global models is not imminent, we suggest that a useful step to improve this situation would be the development of "Decomposition Functional Types" (DFTs). Analogous to plant functional types (PFTs), DFTs would abstract and capture important differences in HR metabolism and flux dynamics, allowing models to efficiently group and vary these characteristics across space and time. We argue that DFTs should be initially informed by top-down expert opinion, but ultimately developed using bottom-up, data-driven analyses, and provide specific examples of potential dependent and independent variables that could be used. We present and discuss an example clustering analysis to show how model-produced annual HR can be broken into distinct groups associated with global variability in biotic and abiotic factors, and demonstrate that these groups are distinct from already-existing PFTs. A similar analysis, incorporating observational data, could form a basis for future DFTs. Finally, we suggest next steps and critical priorities: collection and synthesis of existing data; more in-depth analyses combining open data with high-performance computing; rigorous testing of analytical results; and planning by the global modeling community for decoupling decomposition from fixed site data. These are all critical steps to build a foundation for DFTs in global models, thus providing the ecological and climate change communities with robust, scalable estimates of HR at large scales.
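
    A toy version of the clustering analysis described: group synthetic model grid cells by their HR flux and abiotic covariates with k-means, producing candidate "Decomposition Functional Type" groupings. The variables, distributions and cluster count are illustrative assumptions, not the study's configuration:

    ```python
    import numpy as np
    from sklearn.cluster import KMeans
    from sklearn.preprocessing import StandardScaler

    rng = np.random.default_rng(42)
    n = 5000  # synthetic model grid cells
    # Columns: annual HR (gC/m2/yr), mean temperature (C), precipitation (mm), soil C (kgC/m2)
    data = np.column_stack([
        rng.gamma(4.0, 100.0, n),
        rng.normal(10.0, 8.0, n),
        rng.gamma(2.0, 400.0, n),
        rng.gamma(3.0, 5.0, n),
    ])

    # Standardize so no variable dominates, then group cells into candidate DFTs.
    z = StandardScaler().fit_transform(data)
    labels = KMeans(n_clusters=6, n_init=10, random_state=0).fit_predict(z)
    for k in range(6):
        print(k, "mean HR:", data[labels == k, 0].mean().round(1), "n:", (labels == k).sum())
    ```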

  13. Modeling long-term, large-scale sediment storage using a simple sediment budget approach

    Science.gov (United States)

    Naipal, Victoria; Reick, Christian; Van Oost, Kristof; Hoffmann, Thomas; Pongratz, Julia

    2016-05-01

    Currently, the anthropogenic perturbation of the biogeochemical cycles remains unquantified due to the poor representation of lateral fluxes of carbon and nutrients in Earth system models (ESMs). This lateral transport of carbon and nutrients between terrestrial ecosystems is strongly affected by accelerated soil erosion rates. However, the quantification of global soil erosion by rainfall and runoff, and the resulting redistribution is missing. This study aims at developing new tools and methods to estimate global soil erosion and redistribution by presenting and evaluating a new large-scale coarse-resolution sediment budget model that is compatible with ESMs. This model can simulate spatial patterns and long-term trends of soil redistribution in floodplains and on hillslopes, resulting from external forces such as climate and land use change. We applied the model to the Rhine catchment using climate and land cover data from the Max Planck Institute Earth System Model (MPI-ESM) for the last millennium (here AD 850-2005). Validation is done using observed Holocene sediment storage data and observed scaling between sediment storage and catchment area. We find that the model reproduces the spatial distribution of floodplain sediment storage and the scaling behavior for floodplains and hillslopes as found in observations. After analyzing the dependence of the scaling behavior on the main parameters of the model, we argue that the scaling is an emergent feature of the model and mainly dependent on the underlying topography. Furthermore, we find that land use change is the main contributor to the change in sediment storage in the Rhine catchment during the last millennium. Land use change also explains most of the temporal variability in sediment storage in floodplains and on hillslopes.

  14. Assistive Technology Approaches for Large-Scale Assessment: Perceptions of Teachers of Students with Visual Impairments

    Science.gov (United States)

    Johnstone, Christopher; Thurlow, Martha; Altman, Jason; Timmons, Joe; Kato, Kentaro

    2009-01-01

    Assistive technology approaches to aid students with visual impairments are becoming commonplace in schools. These approaches, however, present challenges for assessment because students' level of access to different technologies may vary by school district and state. To better understand what assistive technology tools are used in reading…

  15. Qualitative approaches to large scale studies and students' achievements in Science and Mathematics - An Australian and Nordic Perspective

    DEFF Research Database (Denmark)

    Davidsson, Eva; Sørensen, Helene

    Large-scale studies play an increasing role in educational politics, and results from surveys such as TIMSS and PISA are extensively used in media debates about students' knowledge in science and mathematics. Although this debate does not usually shed light on the more extensive quantitative … This is done through different examples, which all use additional analysis in order to sort out the complex issues behind, maybe at first glance, simple relationships. These examples could constitute possible approaches for qualitative analysis of quantitative data. For example, the relationship between low…

  16. An approach to large scale identification of non-obvious structural similarities between proteins

    Directory of Open Access Journals (Sweden)

    Cherkasov Artem

    2004-05-01

    Background: A new sequence-independent bioinformatics approach allowing genome-wide search for proteins with similar three-dimensional structures has been developed. By utilizing the numerical output of sequence threading, it establishes putative non-obvious structural similarities between proteins. When applied to a testing set of proteins with known three-dimensional structures, the developed approach was able to recognize structurally similar proteins with high accuracy. Results: The method has been developed to identify pathogenic proteins with low sequence identity and high structural similarity to host analogues. Such protein structure relationships would be hypothesized to arise through convergent evolution or through ancient horizontal gene transfer events, now undetectable using current sequence alignment techniques. The pathogen proteins, which could mimic or interfere with host activities, would represent candidate virulence factors. The developed approach utilizes the numerical outputs from sequence-structure threading. It identifies potential structural similarity between a pair of proteins by correlating the threading scores of the corresponding two primary sequences against the library of standard folds. This approach allowed up to 64% sensitivity and 99.9% specificity in distinguishing protein pairs with high structural similarity. Conclusion: Preliminary results obtained by comparison of the genomes of Homo sapiens and several strains of Chlamydia trachomatis have demonstrated the potential usefulness of the method in the identification of bacterial proteins with known or potential roles in virulence.
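
    The core mechanism, scoring each sequence against a shared fold library and correlating the two score profiles, can be sketched as follows; the threading scores here are random placeholders for the output of a real threading program:

    ```python
    import numpy as np
    from scipy.stats import pearsonr

    rng = np.random.default_rng(7)
    n_folds = 300  # size of the standard fold library

    # Placeholder threading scores: one score per (sequence, library fold).
    scores_a = rng.normal(size=n_folds)
    scores_b = 0.8 * scores_a + 0.2 * rng.normal(size=n_folds)  # similar pair
    scores_c = rng.normal(size=n_folds)                          # unrelated protein

    def putative_structural_similarity(u, v, threshold=0.5):
        """Flag a protein pair as structurally similar if their threading-score
        profiles over the fold library correlate strongly."""
        r, _ = pearsonr(u, v)
        return r, r >= threshold

    print(putative_structural_similarity(scores_a, scores_b))
    print(putative_structural_similarity(scores_a, scores_c))
    ```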

  17. Can key vegetation parameters be retrieved at the large scale using LAI satellite products and a generic modelling approach?

    Science.gov (United States)

    Dewaele, Helene; Calvet, Jean-Christophe; Carrer, Dominique; Laanaia, Nabil

    2016-04-01

    In the context of climate change, the need to assess and predict the impact of droughts on vegetation and water resources increases. Generic approaches permitting the modelling of continental surfaces at the large scale have progressed in recent decades towards land surface models able to couple the cycles of water, energy and carbon. A major source of uncertainty in these generic models is the maximum available water content of the soil (MaxAWC) usable by plants, which is constrained by the rooting depth parameter and is unobservable at the large scale. In this study, vegetation products derived from SPOT/VEGETATION satellite data available since 1999 are used to optimize the model rooting depth over rainfed croplands and permanent grasslands at 1 km x 1 km resolution. The inter-annual variability of the Leaf Area Index (LAI) is simulated over France using the Interactions between Soil, Biosphere and Atmosphere, CO2-reactive (ISBA-A-gs) generic land surface model and a two-layer force-restore (FR-2L) soil profile scheme. The leaf nitrogen concentration directly impacts the modelled value of the maximum annual LAI. In a first step, this parameter is estimated for the last 15 years by using an iterative procedure that matches the maximum values of LAI modelled by ISBA-A-gs to the highest satellite-derived LAI values, with the Root Mean Square Error (RMSE) used as the cost function to be minimized. In a second step, the model rooting depth is optimized in order to reproduce the inter-annual variability resulting from the drought impact on the vegetation. The evaluation of the retrieved soil rooting depth is achieved using the French agricultural statistics of Agreste, and retrieved leaf nitrogen concentrations are compared with values from previous studies. The preliminary results show a good potential of this approach to estimate these two vegetation parameters (leaf nitrogen concentration, MaxAWC) at the large scale over grassland areas. Besides, a marked impact of the…
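
    The calibration logic of the first step, tuning a scalar parameter so that modelled maximum LAI matches the satellite-derived maxima with RMSE as the cost, can be sketched generically; `toy_lai_model` below is an invented surrogate, not ISBA-A-gs:

    ```python
    import numpy as np
    from scipy.optimize import minimize_scalar

    rng = np.random.default_rng(3)
    n_years = 15
    lai_obs_max = 3.0 + 0.4 * rng.normal(size=n_years)  # satellite annual max LAI

    def toy_lai_model(leaf_nitrogen):
        """Invented surrogate: annual maximum LAI rises linearly with leaf N."""
        return 1.2 * leaf_nitrogen * np.ones(n_years)

    def rmse(leaf_nitrogen):
        return np.sqrt(np.mean((toy_lai_model(leaf_nitrogen) - lai_obs_max) ** 2))

    best = minimize_scalar(rmse, bounds=(0.5, 5.0), method="bounded")
    print("calibrated leaf N (arbitrary units):", round(best.x, 2))
    ```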

  18. Authormagic – An Approach to Author Disambiguation in Large-Scale Digital Libraries

    CERN Document Server

    Weiler, Henning; Mele, Salvatore

    2011-01-01

    A collaboration of leading research centers in the field of High Energy Physics (HEP) has built INSPIRE, a novel information infrastructure, which comprises the entire corpus of about one million documents produced within the discipline, including a rich set of metadata, citation information and half a million full-text documents, and offers a unique opportunity for author disambiguation strategies. The presented approach features extended metadata comparison metrics and a three-step unsupervised graph clustering technique. The algorithm aided in identifying 200,000 individuals from 6,500,000 author signatures. Preliminary tests based on the knowledge of external experts and a pilot of a crowd-sourcing system show a success rate of more than 96% within the selected test cases. The obtained author clusters serve as a recommendation for INSPIRE users to further clean the publication list in a crowd-sourced approach.

  19. The ranking probability approach and its usage in design and analysis of large-scale studies.

    Science.gov (United States)

    Kuo, Chia-Ling; Zaykin, Dmitri

    2013-01-01

    In experiments with many statistical tests there is a need to balance type I and type II error rates while taking multiplicity into account. In the traditional approach, the nominal α-level such as 0.05 is adjusted by the number of tests, m, i.e., as 0.05/m. Assuming that some proportion of tests represent "true signals", that is, originate from a scenario where the null hypothesis is false, power depends on the number of true signals and the respective distribution of effect sizes. One way to define power is for it to be the probability of making at least one correct rejection at the assumed α-level. We advocate an alternative way of establishing how "well-powered" a study is. In our approach, useful for studies with multiple tests, the ranking probability P(k) is controlled, defined as the probability of making at least k correct rejections while rejecting the hypotheses with the k smallest P-values. The two approaches are statistically related. The probability that the smallest P-value is a true signal (i.e., P(1)) is equal to the power at the Bonferroni-adjusted level, to a very good approximation. Ranking probabilities are also related to the false discovery rate and to the Bayesian posterior probability of the null hypothesis. We study properties of our approach when the effect size distribution is replaced, for convenience, by a single "typical" value taken to be the mean of the underlying distribution. We conclude that its performance is often satisfactory under this simplification; however, substantial imprecision is to be expected when the number of tests is very large and the number of true signals is small. Precision is largely restored when three values with their respective abundances are used instead of a single typical effect size value.
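
    A small Monte Carlo sketch of this notion, as reconstructed above (the effect size, test counts and significance model are all invented): estimate the probability that the k smallest P-values among m tests all correspond to true signals:

    ```python
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(11)

    def ranking_probability(m=1000, n_true=50, effect=3.0, k=5, reps=2000):
        """P(k): chance that the k smallest P-values are all true signals."""
        hits = 0
        for _ in range(reps):
            z = rng.normal(size=m)
            z[:n_true] += effect                 # first n_true tests are true signals
            p = 2 * stats.norm.sf(np.abs(z))     # two-sided P-values
            top_k = np.argsort(p)[:k]
            hits += np.all(top_k < n_true)
        return hits / reps

    print("estimated P(5):", ranking_probability())
    ```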

  1. Image subsampling and point scoring approaches for large-scale marine benthic monitoring programs

    Science.gov (United States)

    Perkins, Nicholas R.; Foster, Scott D.; Hill, Nicole A.; Barrett, Neville S.

    2016-07-01

    Benthic imagery is an effective tool for quantitative description of ecologically and economically important benthic habitats and biota. The recent development of autonomous underwater vehicles (AUVs) allows surveying of spatial scales that were previously unfeasible. However, an AUV collects a large number of images, the scoring of which is time- and labour-intensive. There is a need to optimise the way that subsamples of imagery are chosen and scored to gain meaningful inferences for ecological monitoring studies. We examine the trade-off between the number of images selected within transects and the number of random points scored within images on the percent cover of target biota, the typical output of such monitoring programs. We also investigate the efficacy of various image selection approaches, such as systematic or random, on the bias and precision of cover estimates. We use simulated biotas that have varying size, abundance and distributional patterns. We find that a relatively small sampling effort is required to minimise bias. An increased precision for groups that are likely to be the focus of monitoring programs is best gained through increasing the number of images sampled rather than the number of points scored within images. For rare species, sampling using point count approaches is unlikely to provide sufficient precision, and alternative sampling approaches may need to be employed. The approach by which images are selected (simple random sampling, regularly spaced, etc.) had no discernible effect on mean and variance estimates, regardless of the distributional pattern of biota. Field validation of our findings is provided through Monte Carlo resampling analysis of a previously scored benthic survey from temperate waters. We show that point count sampling approaches are capable of providing relatively precise cover estimates for candidate groups that are not overly rare. The amount of sampling required, in terms of both the number of images and…
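
    The image-versus-point trade-off can be probed with a quick simulation (a much-simplified stand-in for the paper's simulated biotas): percent cover is estimated from scored points, and the precision of the estimate is compared for two allocations of the same total effort:

    ```python
    import numpy as np

    rng = np.random.default_rng(5)

    def survey_sd(n_images, pts_per_image, true_cover=0.2,
                  between_image_sd=0.1, reps=3000):
        """SD of estimated percent cover for a given sampling allocation."""
        est = np.empty(reps)
        for r in range(reps):
            # Cover varies between images; points within an image are binomial.
            cover = np.clip(rng.normal(true_cover, between_image_sd, n_images), 0, 1)
            scored = rng.binomial(pts_per_image, cover)
            est[r] = scored.sum() / (n_images * pts_per_image)
        return est.std()

    # Same total effort (1000 points), different allocations:
    print("50 images x 20 pts:", survey_sd(50, 20))
    print("20 images x 50 pts:", survey_sd(20, 50))
    ```

    Under these assumptions the first allocation yields the smaller standard deviation, mirroring the paper's finding that adding images helps more than adding points per image.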

  2. Evaluating two model reduction approaches for large scale hedonic models sensitive to omitted variables and multicollinearity

    DEFF Research Database (Denmark)

    Panduro, Toke Emil; Thorsen, Bo Jellesmark

    2014-01-01

    We evaluate two common model reduction approaches in an empirical case. The first relies on a principal component analysis (PCA) used to construct new orthogonal variables, which are applied in the hedonic model. The second relies on a stepwise model reduction based on the variance inflation factor and Akaike's information criterion. Our empirical application focuses on estimating the implicit price of forest proximity in a Danish case area, with a dataset containing 86 relevant variables. We demonstrate that the estimated implicit price for forest proximity, while positive in all models, is clearly sensitive…
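
    The second strategy can be sketched generically: repeatedly drop the regressor with the largest variance inflation factor (VIF) until all VIFs fall below a cutoff. The data, variable names and cutoff below are invented for illustration:

    ```python
    import numpy as np

    def vif(X):
        """Variance inflation factor for each column (standardized variables:
        VIF_j is the j-th diagonal of the inverse correlation matrix)."""
        corr_inv = np.linalg.inv(np.corrcoef(X, rowvar=False))
        return np.diag(corr_inv)

    def prune_by_vif(X, names, cutoff=10.0):
        names = list(names)
        while X.shape[1] > 2:
            v = vif(X)
            worst = int(np.argmax(v))
            if v[worst] < cutoff:
                break
            print(f"dropping {names[worst]} (VIF={v[worst]:.1f})")
            X = np.delete(X, worst, axis=1)
            del names[worst]
        return X, names

    rng = np.random.default_rng(9)
    a = rng.normal(size=500)
    X = np.column_stack([a, a + 0.01 * rng.normal(size=500), rng.normal(size=500)])
    prune_by_vif(X, ["dist_forest", "dist_forest_copy", "house_age"])
    ```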

  3. Extracting Noun Phrases from Large-Scale Texts: A Hybrid Approach and Its Automatic Evaluation

    CERN Document Server

    Chen, Kuang-hua; Chen, Hsin-Hsi

    1994-01-01

    Acquiring noun phrases from running texts is useful for many applications, such as word grouping, terminology indexing, etc. The reported literature adopts either a pure probabilistic approach or a pure rule-based noun phrase grammar to tackle this problem. In this paper, we apply a probabilistic chunker to decide the implicit boundaries of constituents and utilize linguistic knowledge to extract the noun phrases with a finite-state mechanism. The test texts are the SUSANNE Corpus, and the results are evaluated automatically by comparison with the parse field of the SUSANNE Corpus. The results of this preliminary experiment are encouraging.
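
    The rule-based half of such a hybrid can be approximated by a finite-state pattern over part-of-speech tags (optional determiner, adjectives, nouns); the tagged input below is hand-made for illustration and would normally come from the probabilistic chunker:

    ```python
    import re

    # Tagged tokens: (word, POS). A real system would produce these upstream.
    tagged = [("the", "DT"), ("large", "JJ"), ("scale", "NN"), ("texts", "NNS"),
              ("contain", "VB"), ("many", "DT"), ("useful", "JJ"), ("phrases", "NNS")]

    # Finite-state NP pattern over the tag sequence: optional determiner,
    # any number of adjectives, one or more nouns.
    tag_string = " ".join(tag for _, tag in tagged)
    np_pattern = re.compile(r"(?:DT )?(?:JJ )*(?:NNS?(?: |$))+")

    for m in np_pattern.finditer(tag_string):
        start = tag_string[:m.start()].count(" ")   # token index of match start
        length = len(m.group().split())
        print(" ".join(w for w, _ in tagged[start:start + length]))
        # -> "the large scale texts" and "many useful phrases"
    ```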

  4. Multicontroller: an object programming approach to introduce advanced control algorithms for the GCS large scale project

    CERN Document Server

    Cabaret, S; Coppier, H; Rachid, A; Barillère, R; CERN. Geneva. IT Department

    2007-01-01

    The GCS (Gas Control System) project team at CERN uses a Model Driven Approach with a framework, UNICOS (UNified Industrial COntrol System), based on PLC (Programmable Logic Controller) and SCADA (Supervisory Control And Data Acquisition) technologies. The first UNICOS versions were able to provide a PID (Proportional-Integral-Derivative) controller, whereas the Gas Systems required more advanced control strategies. The MultiController is a new UNICOS object which provides the following advanced control algorithms: Smith Predictor, PFC (Predictive Function Control), RST* and GPC (Global Predictive Control). Its design is based on a monolithic entity with a global structure definition which is able to capture the desired set of parameters of any specific control algorithm supported by the object. The SCADA system, PVSS, supervises the MultiController operation. The PVSS interface provides users with a supervision faceplate; in particular, it links any MultiController with recipes: the GCS experts are ab...

  5. A DEM-based approach for large-scale floodplain mapping in ungauged watersheds

    Science.gov (United States)

    Jafarzadegan, Keighobad; Merwade, Venkatesh

    2017-07-01

    Binary threshold classifiers are a simple form of supervised classification method that can be used in floodplain mapping. In these methods, a given watershed is examined as a grid of cells, each with a particular morphologic value. A reference map is a grid of cells labeled as flood and non-flood from hydraulic modeling or remote sensing observations. By using the reference map, a threshold on the morphologic feature is determined to label the unknown cells as flood and non-flood (binary classification). The main limitation of these methods is the threshold transferability assumption, in which a homogeneous geomorphological and hydrological behavior is assumed for the entire region, and the same threshold derived from the reference map (training area) is used for other locations (ungauged watersheds) inside the study area. In order to overcome this limitation and consider the threshold variability inside a large region, regression modeling is used in this paper to predict the threshold by relating it to watershed characteristics. Application of this approach for North Carolina shows that the threshold is related to main stream slope, average watershed elevation, and average watershed slope. By using the Fitness (F) and Correct (C) criteria of C > 0.9 and F > 0.6, results show the threshold prediction and the corresponding floodplain for the 100-year design flow are comparable to those from the Federal Emergency Management Agency's (FEMA) Flood Insurance Rate Maps (FIRMs) in the region. However, the floodplains from the proposed model are underpredicted and overpredicted in the flat and steep watersheds (average watershed slope above 20% in the latter), respectively. Overall, the proposed approach provides an alternative way of mapping floodplains in data-scarce regions.
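
    A compact sketch of the binary-threshold idea: label cells flood/non-flood by thresholding a morphologic raster, and score the map against a reference with a correctness fraction C and an intersection-over-union fit F. The raster is synthetic, and these metric definitions are common choices that may differ in detail from the paper's:

    ```python
    import numpy as np

    rng = np.random.default_rng(2)
    hand = rng.gamma(2.0, 5.0, size=(200, 200))  # synthetic morphologic raster
    reference = hand < 6.0                        # synthetic "observed" flood map

    def classify(raster, threshold):
        return raster < threshold                 # low values -> flood-prone

    def fitness_correct(pred, ref):
        inter = np.logical_and(pred, ref).sum()
        union = np.logical_or(pred, ref).sum()
        f = inter / union                         # fit: intersection over union
        c = (pred == ref).mean()                  # correct: matching cell fraction
        return f, c

    for t in (4.0, 6.0, 8.0):
        f, c = fitness_correct(classify(hand, t), reference)
        print(f"threshold={t}: F={f:.2f}, C={c:.2f}")
    ```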

  6. The opportunities and challenges of large-scale molecular approaches to songbird neurobiology.

    Science.gov (United States)

    Mello, C V; Clayton, D F

    2015-03-01

    High-throughput methods for analyzing genome structure and function are having a large impact in songbird neurobiology. Methods include genome sequencing and annotation, comparative genomics, DNA microarrays and transcriptomics, and the development of a brain atlas of gene expression. Key emerging findings include the identification of complex transcriptional programs active during singing, the robust brain expression of non-coding RNAs, evidence of profound variations in gene expression across brain regions, and the identification of molecular specializations within song production and learning circuits. Current challenges include the statistical analysis of large datasets, effective genome curations, the efficient localization of gene expression changes to specific neuronal circuits and cells, and the dissection of behavioral and environmental factors that influence brain gene expression. The field requires efficient methods for comparisons with organisms like chicken, which offer important anatomical, functional and behavioral contrasts. As sequencing costs plummet, opportunities emerge for comparative approaches that may help reveal evolutionary transitions contributing to vocal learning, social behavior and other properties that make songbirds such compelling research subjects.

  7. A Large-Scale, Multiagency Approach to Defining a Reference Network for Pacific Northwest Streams

    Science.gov (United States)

    Miller, Stephanie; Eldred, Peter; Muldoon, Ariel; Anlauf-Dunn, Kara; Stein, Charlie; Hubler, Shannon; Merrick, Lesley; Haxton, Nick; Larson, Chad; Rehn, Andrew; Ode, Peter; Vander Laan, Jake

    2016-12-01

    Aquatic monitoring programs vary widely in objectives and design. However, each program faces the unifying challenge of assessing conditions and quantifying reasonable expectations for measured indicators. A common approach for setting resource expectations is to define reference conditions that represent areas of least human disturbance or the most natural state of a resource, characterized by the range of natural variability across a region of interest. Identification of reference sites often relies heavily on professional judgment, resulting in varying and unrepeatable methods. Standardized methods for data collection, site characterization, and reference site selection facilitate greater cooperation among assessment programs and the development of assessment tools that are readily shareable and comparable. We illustrate an example, applicable to the broader global monitoring community, of how to create a consistent and transparent reference network for multiple stream resource agencies. We provide a case study that offers a simple example of how reference sites can be used, at the landscape level, to link upslope management practices to a specific in-channel response. We found that areas with high road densities have more fine sediment than areas with fewer roads. While this example uses data from only one of the partner agencies, data collected in a similar manner can be combined to create a larger, more robust dataset. We hope that this starts a dialog regarding more standardized, inter-agency ways to evaluate data. Creating more consistency in physical and biological field protocols will increase the ability to share data.

  8. A 454 sequencing approach for large scale phylogenomic analysis of the common emperor scorpion (Pandinus imperator).

    Science.gov (United States)

    Roeding, Falko; Borner, Janus; Kube, Michael; Klages, Sven; Reinhardt, Richard; Burmester, Thorsten

    2009-12-01

    In recent years, phylogenetic tree reconstructions that rely on multiple gene alignments deduced from expressed sequence tags (ESTs) have become a popular method in molecular systematics. Here, we present a 454 pyrosequencing approach to infer the transcriptome of the Emperor scorpion Pandinus imperator. We obtained 428,844 high-quality reads (mean length 223 ± 50 b) from total cDNA, which were assembled into 8334 contigs (mean length 422 ± 313 bp) and 26,147 singletons. About 1200 contigs were successfully annotated by BLAST and orthology search. Specific analyses of eight distinct hemocyanin sequences provided further proof of the quality of the 454 reads and the assembly process. The P. imperator sequences were included in a concatenated alignment of 149 orthologous genes of 67 metazoan taxa that covers 39,842 amino acids. After removal of low-quality regions, 11,168 positions were employed for phylogenetic reconstructions. Using Bayesian and maximum likelihood methods, we obtained strongly supported monophyletic Ecdysozoa, Arthropoda (excluding Tardigrada), Euarthropoda, Pancrustacea and Hexapoda. We also recovered the Myriochelata (Chelicerata+Myriapoda). Within the chelicerates, Pycnogonida form the sister group of Euchelicerata. However, Arachnida were found paraphyletic because the Acari (mites and ticks) were recovered as the sister group of a clade comprising Xiphosura, Scorpiones and Araneae. In summary, we have shown that 454 pyrosequencing is a cost-effective method that provides sufficient data and coverage depth for gene detection and multigene-based phylogenetic analyses.

  9. QAPgrid: a two level QAP-based approach for large-scale data analysis and visualization.

    Directory of Open Access Journals (Sweden)

    Mario Inostroza-Ponta

    BACKGROUND: The visualization of large volumes of data is a computationally challenging task that often promises rewarding new insights. There is great potential in the application of new algorithms and models from combinatorial optimisation. Datasets often contain "hidden regularities", and a combined identification and visualization method should reveal these structures and present them in a way that helps analysis. While several methodologies exist, including those that use non-linear optimization algorithms, severe limitations exist even when working with only a few hundred objects. METHODOLOGY/PRINCIPAL FINDINGS: We present a new data visualization approach (QAPgrid) that reveals patterns of similarities and differences in large datasets of objects for which a similarity measure can be computed. Objects are assigned to positions on an underlying square grid in a two-dimensional space. We use the Quadratic Assignment Problem (QAP) as a mathematical model to provide an objective function for the assignment of objects to positions on the grid. We employ a Memetic Algorithm (a powerful metaheuristic) to tackle the large instances of this NP-hard combinatorial optimization problem, and we show its performance on the visualization of real data sets. CONCLUSIONS/SIGNIFICANCE: Overall, the results show that the QAPgrid algorithm is able to produce a layout that represents the relationships between objects in the data set. Furthermore, it also represents the relationships between clusters that are fed into the algorithm. We apply QAPgrid to the 84 Indo-European languages instance, producing a near-optimal layout. Next, we produce a layout of 470 world universities with an observed high degree of correlation with the score used by the Academic Ranking of World Universities compiled at Shanghai Jiao Tong University, without the need for ad hoc weighting of attributes. Finally, our Gene Ontology-based study on…
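
    For intuition, the QAP objective for such a layout can be written as the sum over object pairs of similarity times grid distance: similar objects placed on distant cells are penalized. The sketch below minimizes it with a plain pairwise-swap local search, a very small stand-in for the memetic algorithm; the similarity matrix is random:

    ```python
    import numpy as np
    from itertools import combinations

    rng = np.random.default_rng(4)
    n, side = 16, 4
    sim = rng.random((n, n))
    sim = (sim + sim.T) / 2
    np.fill_diagonal(sim, 0)

    # Grid positions and the distances between them.
    pos = np.array([(i // side, i % side) for i in range(n)], dtype=float)
    dist = np.linalg.norm(pos[:, None] - pos[None, :], axis=2)

    def cost(perm):
        """QAP objective: perm[i] is the grid cell assigned to object i."""
        return (sim * dist[np.ix_(perm, perm)]).sum()

    perm = rng.permutation(n)
    best = cost(perm)
    improved = True
    while improved:
        improved = False
        for a, b in combinations(range(n), 2):
            perm[[a, b]] = perm[[b, a]]
            c = cost(perm)
            if c < best - 1e-12:
                best, improved = c, True
            else:
                perm[[a, b]] = perm[[b, a]]  # revert the swap
    print("final layout cost:", round(best, 3))
    ```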

  10. Discovering Outliers of Potential Drug Toxicities Using a Large-scale Data-driven Approach.

    Science.gov (United States)

    Luo, Jake; Cisler, Ron A

    2016-01-01

    We systematically compared the adverse effects of cancer drugs to detect event outliers across different clinical trials using a data-driven approach. Because many cancer drugs are toxic to patients, a better understanding of the adverse events of cancer drugs is critical for developing therapies that could minimize their toxic effects. However, due to the large variability of adverse events across different cancer drugs, methods to efficiently compare adverse effects across different cancer drugs are lacking. To address this challenge, we present an exploration study that integrates multiple adverse event reports from clinical trials in order to systematically compare adverse events across different cancer drugs. To demonstrate our methods, we first collected data on 186,339 clinical trials from ClinicalTrials.gov and selected 30 common cancer drugs. We identified 1602 cancer trials that studied the selected cancer drugs. Our methods effectively extracted 12,922 distinct adverse events from the clinical trial reports. Using the extracted data, we ranked all 12,922 adverse events based on their prevalence in the clinical trials, such as nausea (82%), fatigue (77%), and vomiting (75.97%). To detect significant drug outliers that could have a statistically high possibility of causing an event, we used the boxplot method to visualize adverse event outliers across different drugs and applied Grubbs' test to evaluate the significance. Analyses showed that by systematically integrating cross-trial data from multiple clinical trial reports, adverse event outliers associated with cancer drugs can be detected. The method was demonstrated by detecting the following four statistically significant adverse event cases: the association of the drug axitinib with hypertension (Grubbs' test, P < 0.001), the association of the drug imatinib with muscle spasm (P < 0.001), the association of the drug vorinostat with deep vein thrombosis (P < 0.001), and the association of the drug afatinib…
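
    Grubbs' test flags a single extreme value in an approximately normal sample. A minimal two-sided implementation following the standard formula is shown below; the per-drug event rates are invented for illustration:

    ```python
    import numpy as np
    from scipy import stats

    def grubbs_test(x, alpha=0.05):
        """Two-sided Grubbs' test for a single outlier in a normal sample."""
        x = np.asarray(x, dtype=float)
        n = x.size
        g = np.max(np.abs(x - x.mean())) / x.std(ddof=1)
        t = stats.t.ppf(1 - alpha / (2 * n), n - 2)
        g_crit = (n - 1) / np.sqrt(n) * np.sqrt(t ** 2 / (n - 2 + t ** 2))
        return g, g_crit, g > g_crit

    # Invented hypertension rates (%) for 10 drugs; one suspicious outlier.
    rates = [3.1, 2.8, 4.0, 3.5, 2.9, 3.3, 3.8, 3.0, 3.4, 17.2]
    g, g_crit, is_outlier = grubbs_test(rates)
    print(f"G={g:.2f}, critical={g_crit:.2f}, outlier={is_outlier}")
    ```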

  11. Large-scale circulation patterns and related rainfall in the Amazon Basin: a neuronal networks approach

    Energy Technology Data Exchange (ETDEWEB)

    Espinoza, Jhan Carlo [LOCEAN - IPSL (IRD, CNRS, MNHN, UPMC), Paris Cedex 05 (France); Universidad Agraria La Molina UNALM, Lima (Peru); Lengaigne, Matthieu; Janicot, Serge [LOCEAN - IPSL (IRD, CNRS, MNHN, UPMC), Paris Cedex 05 (France); Ronchail, Josyane [LOCEAN - IPSL (IRD, CNRS, MNHN, UPMC), Paris Cedex 05 (France); Universite Paris 7, Paris (France)

    2012-01-15

    This study describes the main circulation patterns (CPs) in the Amazon Basin over the 1975-2002 period and their relationship with rainfall variability. CPs in the Amazon Basin have been computed for each season from the ERA-40 daily 850 hPa winds, using an approach combining artificial neural networks (Self-Organizing Maps) and Hierarchical Ascendant Classification. Solutions of 6 to 8 clusters (depending on the season considered) are shown to yield an integrated view of the complex regional circulation variability. For austral fall, winter and spring, the temporal evolution between the different CPs shows a clear tendency to describe a cycle, with southern wind anomalies and their convergence with the trade winds progressing northward from the La Plata Basin to the Amazon Basin. This sequence is strongly related to eastward-moving extratropical perturbations and their incursion toward low latitudes, which modulate the geopotential and winds over South America and its adjoining oceans. During austral summer, CPs are less spatially and temporally organized compared to other seasons, principally due to weaker extratropical perturbations and more frequent shallow low situations. Each of these CPs is shown to be associated with coherent northward-moving regional rainfall patterns (both in in situ data and in the ERA-40 reanalysis) and convective activity. However, our results reveal that precipitation variability is better reproduced by ERA-40 in the southern part of the Amazon Basin than in the northern part, where rainfall variability is likely to be more constrained by local and subdaily processes (e.g. squall lines) that could be misrepresented in the reanalysis dataset. This analysis clearly illustrates the existing connections between the southern and northern parts of the Amazon Basin in terms of regional circulation/rainfall patterns. The identification of these CPs provides useful information to understand local rainfall variability and could hence be used to…

  12. A refined regional modeling approach for the Corn Belt - Experiences and recommendations for large-scale integrated modeling

    Science.gov (United States)

    Panagopoulos, Yiannis; Gassman, Philip W.; Jha, Manoj K.; Kling, Catherine L.; Campbell, Todd; Srinivasan, Raghavan; White, Michael; Arnold, Jeffrey G.

    2015-05-01

    Nonpoint source pollution from agriculture is the main source of nitrogen and phosphorus in the stream systems of the Corn Belt region in the Midwestern US. This region comprises two large river basins, the intensely row-cropped Upper Mississippi River Basin (UMRB) and Ohio-Tennessee River Basin (OTRB), which are considered the key contributing areas for the Northern Gulf of Mexico hypoxic zone according to the US Environmental Protection Agency. Thus, in this area it is of utmost importance to ensure that intensive agriculture for food, feed and biofuel production can coexist with a healthy water environment. To address these objectives within a river basin management context, an integrated modeling system has been constructed with the hydrologic Soil and Water Assessment Tool (SWAT) model, capable of estimating river basin responses to alternative cropping and/or management strategies. To improve modeling performance compared to previous studies and provide a spatially detailed basis for scenario development, this SWAT Corn Belt application incorporates a greatly refined subwatershed structure based on 12-digit hydrologic units, or 'subwatersheds', as defined by the US Geological Survey. The model setup, calibration and validation are time-demanding and challenging tasks for these large systems, given the scale-intensive data requirements and the need to ensure the reliability of flow and pollutant load predictions at multiple locations. Thus, the objectives of this study are both to comprehensively describe this large-scale modeling approach, providing estimates of pollution and crop production in the region, and to present strengths and weaknesses of integrated modeling at such a large scale, along with how it can be improved on the basis of the current modeling structure and results. The predictions were based on a semi-automatic hydrologic calibration approach for large-scale and spatially detailed modeling studies, with the use of the Sequential…

  13. A Social-Ecological System Approach to Analyze Stakeholders' Interactions within a Large-Scale Rangeland Restoration Program

    Directory of Open Access Journals (Sweden)

    Thorunn Petursdottir

    2013-06-01

    In this research, we investigated the SES of rangeland restoration in Iceland to estimate whether social factors, such as stakeholders' attitudes and behavior, can be used to evaluate the effectiveness of agri-environmental policies on rangeland restoration and improved land management. We used qualitative approaches, interviewing 15 stakeholders. Our results indicate that social factors such as attitude toward restoration and land management practices can be used as indicators to evaluate the effectiveness of restoration policies. They also strongly indicate that lack of functionality in the governance system of social-ecological systems can reduce the desired progress of policies related to large-scale natural resource management projects, such as rangeland restoration, and possibly halt the necessary paradigm shift among stakeholders regarding improved rangeland management.

  14. Tokyo Tech–Hitotsubashi Interdisciplinary Conference: New Approaches to the Analysis of Large-Scale Business and Economic Data

    CERN Document Server

    Takayasu, Misako; Takayasu, Hideki

    2010-01-01

    The new science of econophysics has arisen out of the information age. As large-scale economic data are being increasingly generated by industries and enterprises worldwide, researchers from fields such as physics, mathematics, and information sciences are becoming involved. The vast number of transactions taking place, both in the financial markets and in the retail sector, has traditionally been studied by economists and management scientists, and is now also studied by econophysicists. Using cutting-edge tools of computational analysis while searching for regularities and "laws" such as those found in the natural sciences, econophysicists have come up with intriguing results. The ultimate aim is to establish fundamental data collection and analysis techniques that embrace the expertise of a variety of academic disciplines. This book comprises selected papers from the international conference on novel analytical approaches to economic data held in Tokyo in March 2009. The papers include detailed reports on market behavior during the finan…

  15. Large scale atmospheric forcing and topographic modification of precipitation rates over High Asia – a neural network based approach

    Directory of Open Access Journals (Sweden)

    L. Gerlitz

    2014-10-01

    The heterogeneity of precipitation rates in high mountain regions is not sufficiently captured by state-of-the-art climate reanalysis products, due to their limited spatial resolution. Thus there exists a large gap between the available data sets and the demands of climate impact studies. The presented approach aims to generate spatially high-resolution precipitation fields for a target area in Central Asia, covering the Tibetan Plateau, the adjacent mountain ranges and lowlands. Based on the assumption that observed local-scale precipitation amounts are triggered by varying large-scale atmospheric situations and modified by local-scale topographic characteristics, the statistical downscaling approach estimates local-scale precipitation rates as a function of large-scale atmospheric conditions, derived from the ERA-Interim reanalysis, and high-resolution terrain parameters. Since the relationships of the predictor variables with local-scale observations are rather unknown and highly non-linear, an Artificial Neural Network (ANN) was utilized for the development of adequate transfer functions. Different ANN architectures were evaluated with regard to their predictive performance. The final downscaling model was used for the cellwise estimation of monthly precipitation sums, the number of rainy days and the maximum daily precipitation amount, with a spatial resolution of 1 km². The model was found to sufficiently capture the temporal and spatial variations of precipitation rates in the highly structured target area and allows a detailed analysis of the precipitation distribution. A concluding sensitivity analysis of the ANN model reveals the effect of the atmospheric and topographic predictor variables on the precipitation estimations in the climatically diverse subregions.
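
    The general shape of such a downscaling transfer function, atmospheric predictors plus terrain parameters in and local precipitation out, can be sketched with a small multilayer perceptron (scikit-learn's MLPRegressor as a generic stand-in; the predictors and data are synthetic):

    ```python
    import numpy as np
    from sklearn.model_selection import train_test_split
    from sklearn.neural_network import MLPRegressor
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler

    rng = np.random.default_rng(8)
    n = 4000
    # Synthetic predictors: [geopotential anomaly, humidity flux, elevation, slope, aspect]
    X = rng.normal(size=(n, 5))
    # Synthetic monthly precipitation with a nonlinear predictor dependence.
    y = (50 + 20 * np.tanh(X[:, 0] + 0.5 * X[:, 1])
         + 10 * X[:, 2] * X[:, 3] + 5 * rng.normal(size=n))

    X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
    model = make_pipeline(
        StandardScaler(),
        MLPRegressor(hidden_layer_sizes=(32, 16), max_iter=2000, random_state=0),
    )
    model.fit(X_tr, y_tr)
    print("R^2 on held-out cells:", round(model.score(X_te, y_te), 2))
    ```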

  16. Mutual coupling of hydrologic and hydrodynamic models - a viable approach for improved large-scale inundation estimates?

    Science.gov (United States)

    Hoch, Jannis; Winsemius, Hessel; van Beek, Ludovicus; Haag, Arjen; Bierkens, Marc

    2016-04-01

    Fluvial floods are large-scale and cross-border phenomena that, due to their increasing occurrence rate and associated economic costs, need to be well understood. Sound information about temporal and spatial variations of flood hazard is essential for adequate flood risk management and climate change adaptation measures. While progress has been made in assessments of flood hazard and risk on the global scale, studies to date have made compromises between spatial resolution on the one hand and the local detail that influences their temporal characteristics (rate of rise, duration) on the other. Moreover, global models cannot realistically model flood wave propagation, due to a lack of detail in channel and floodplain geometry and in the representation of hydrologic processes influencing the surface water balance, such as open water evaporation from inundated areas and re-infiltration of water in river banks. To overcome these restrictions and to obtain a better understanding of flood propagation, including its spatio-temporal variations at the large scale yet at a sufficiently high resolution, the present study aims to develop a large-scale modeling tool by coupling the global hydrologic model PCR-GLOBWB and the recently developed hydrodynamic model DELFT3D-FM. The first computes surface water volumes, which are routed by the latter, solving the full Saint-Venant equations. With DELFT3D-FM being capable of representing the model domain as a flexible mesh, model accuracy is improved only at relevant locations (river and adjacent floodplain) and the computation time is not unnecessarily increased. This efficiency is very advantageous for large-scale modelling approaches. The model domain is schematized by 2D floodplains derived from global data sets (HydroSHEDS and G3WBM). Since a previous study with one-way coupling showed good model performance (J.M. Hoch et al., in prep.), this approach was extended to two-way coupling to fully represent evaporation…

  17. 5D Modelling: An Efficient Approach for Creating Spatiotemporal Predictive 3D Maps of Large-Scale Cultural Resources

    Science.gov (United States)

    Doulamis, A.; Doulamis, N.; Ioannidis, C.; Chrysouli, C.; Grammalidis, N.; Dimitropoulos, K.; Potsiou, C.; Stathopoulou, E.-K.; Ioannides, M.

    2015-08-01

    Outdoor large-scale cultural sites are highly sensitive to environmental, natural and human-made factors, implying an imminent need for spatio-temporal assessment to identify regions of potential cultural interest (material degradation, structuring, conservation). On the other hand, quite different actors are involved in Cultural Heritage research (archaeologists, curators, conservators, simple users), each with diverse needs. All these statements advocate that 5D modelling (3D geometry plus time plus levels of detail) is ideally required for the preservation and assessment of outdoor large-scale cultural sites; it is currently implemented as a simple aggregation of 3D digital models at different times and levels of detail. The main bottleneck of such an approach is its complexity, making 5D modelling impossible to validate in real-life conditions. In this paper, a cost-effective and affordable framework for 5D modelling is proposed, based on a spatial-temporal dependent aggregation of 3D digital models, incorporating a predictive assessment procedure that indicates which regions (surfaces) of an object should be reconstructed at higher levels of detail at the next time instances and which at lower ones. In this way, dynamic change history maps are created, indicating the spatial probability of regions needing further 3D modelling at forthcoming instances. Using these maps, a predictive assessment can be made, that is, surfaces within the objects where a high-accuracy reconstruction process needs to be activated at the forthcoming time instances can be localized. The proposed 5D Digital Cultural Heritage Model (5D-DCHM) is implemented using open interoperable standards based on the CityGML framework, which also allows the description of additional semantic metadata information. Visualization aspects are also supported to allow easy manipulation, interaction and representation of the 5D-DCHM geometry and the respective semantic information. The open-source 3DCity…

  19. Large-Scale Compute-Intensive Analysis via a Combined In-situ and Co-scheduling Workflow Approach

    Energy Technology Data Exchange (ETDEWEB)

    Messer, Bronson [ORNL; Sewell, Christopher [Los Alamos National Laboratory (LANL); Heitmann, Katrin [ORNL; Finkel, Dr. Hal J [Argonne National Laboratory (ANL); Fasel, Patricia [Los Alamos National Laboratory (LANL); Zagaris, George [Lawrence Livermore National Laboratory (LLNL); Pope, Adrian [Los Alamos National Laboratory (LANL); Habib, Salman [ORNL; Parete-Koon, Suzanne T [ORNL

    2015-01-01

    Large-scale simulations can produce tens of terabytes of data per analysis cycle, complicating and limiting the efficiency of workflows. Traditionally, outputs are stored on the file system and analyzed in post-processing. With the rapidly increasing size and complexity of simulations, this approach faces an uncertain future. Trending techniques consist of performing the analysis in situ, utilizing the same resources as the simulation, and/or off-loading subsets of the data to a compute-intensive analysis system. We introduce an analysis framework developed for HACC, a cosmological N-body code, that uses both in situ and co-scheduling approaches for handling Petabyte-size outputs. An initial in situ step is used to reduce the amount of data to be analyzed, and to separate out the data-intensive tasks handled off-line. The analysis routines are implemented using the PISTON/VTK-m framework, allowing a single implementation of an algorithm that simultaneously targets a variety of GPU, multi-core, and many-core architectures.
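
    A generic sketch of the pattern described above (reduce in situ, then analyze on separate resources): a subsample stands in for the reduction step and a second process stands in for the compute-intensive analysis system. It is illustrative only and unrelated to the actual HACC/PISTON implementation.

        import multiprocessing as mp
        import numpy as np

        def analyze(queue):
            """Co-scheduled analysis: consumes reduced snapshots as they arrive."""
            while True:
                item = queue.get()
                if item is None:
                    break
                step, xs = item
                print(f"step {step}: mean x-coordinate {xs.mean():.4f}")

        if __name__ == "__main__":
            queue = mp.Queue()
            worker = mp.Process(target=analyze, args=(queue,))
            worker.start()
            particles = np.random.default_rng(1).random((1_000_000, 3))
            for step in range(3):
                particles += 0.01                       # stand-in for the N-body update
                reduced = particles[::1000, 0].copy()   # in situ reduction: 0.1% subsample
                queue.put((step, reduced))              # off-load only the reduced data
            queue.put(None)                             # sentinel: no more snapshots
            worker.join()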

  20. Assessing sandy beach macrofaunal patterns along large-scale environmental gradients: A Fuzzy Naïve Bayes approach

    Science.gov (United States)

    Bozzeda, Fabio; Zangrilli, Maria Paola; Defeo, Omar

    2016-06-01

    A Fuzzy Naïve Bayes (FNB) classifier was developed to assess large-scale variations in abundance, species richness and diversity of the macrofauna inhabiting fifteen Uruguayan sandy beaches affected by beach morphodynamics and by the estuarine gradient generated by the Rio de la Plata. Information from six beaches was used to estimate FNB parameters, while abiotic data from the remaining nine beaches were used to forecast abundance, species richness and diversity. FNB simulations reproduced the general increasing trend of the target variables from inner estuarine reflective beaches to marine dissipative ones. The FNB model also identified a threshold value of salinity range beyond which diversity markedly increased towards marine beaches. Salinity range is therefore suggested as an ecological master factor governing distributional patterns in sandy beach macrofauna. However, the model 1) underestimated abundance and species richness at the innermost estuarine beach, with the lowest salinity, and 2) overestimated species richness in marine beaches with a reflective morphodynamic state, which is strongly linked to low abundance, species richness and diversity. Future modeling efforts should therefore be refined by assigning different weights to the gradients defined by estuarine (estuarine beaches) and morphodynamic (marine beaches) variables, which could improve predictions of the target variables. Our modeling approach could be applied to a wide spectrum of issues, ranging from basic ecology to social-ecological systems. This approach seems relevant, given the current challenge of developing predictive methodologies to assess the simultaneous and nonlinear effects of anthropogenic and natural impacts on coastal ecosystems.
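
    A compact sketch of how a fuzzy naïve Bayes step can work, shown for a single predictor (salinity range) and two diversity classes. Membership breakpoints, likelihoods and priors are invented for illustration; the actual FNB model uses several abiotic variables fitted to the six training beaches.

        import numpy as np

        def tri(x, a, b, c):
            """Triangular membership function on [a, c] peaking at b."""
            return float(np.clip(min((x - a) / (b - a), (c - x) / (c - b)), 0.0, 1.0))

        SETS = {"low": (-1, 3, 12), "mid": (5, 15, 25), "high": (18, 27, 35)}
        LIK = {"low_div":  {"low": 0.7, "mid": 0.2, "high": 0.1},   # P(set | class)
               "high_div": {"low": 0.1, "mid": 0.3, "high": 0.6}}
        PRIOR = {"low_div": 0.5, "high_div": 0.5}

        def fnb_posterior(salinity_range):
            mu = {s: tri(salinity_range, *p) for s, p in SETS.items()}
            score = {c: PRIOR[c] * sum(mu[s] * LIK[c][s] for s in SETS) for c in PRIOR}
            z = sum(score.values())
            return {c: v / z for c, v in score.items()}

        print(fnb_posterior(8.0))     # estuarine-like beach -> mostly "low_div"
        print(fnb_posterior(27.0))    # marine-like beach    -> mostly "high_div"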

  1. Estimation of Truck Trips on Large-Scale Irrigation Project: A Combinatory Input-Output Commodity-Based Approach

    Directory of Open Access Journals (Sweden)

    Ackchai Sirikijpanichkul

    2015-01-01

    For agriculture-based countries, requirements on transportation infrastructure should not be limited to accommodating general traffic but should also cover the transportation of crops and agricultural products during harvest seasons. Most past research focuses on the development of truck trip estimation techniques for urban, statewide, or nationwide freight movement, but neglects rural freight movement, which contributes to pavement deterioration on rural roads, especially during harvest seasons. Recently, the Thai Government initiated a plan to construct a network of reservoirs within the northeastern region, aiming at improving the existing irrigation system, particularly in areas where a more effective irrigation system is needed. The plan is expected to bring new opportunities for expanding cultivation areas, increasing economies of scale and enlarging the extent of market areas. As a consequence, its effects on truck trip generation need to be investigated to assure the service quality of the related transportation infrastructure. This paper proposes a combinatory input-output commodity-based approach to estimate truck trips on the rural highway infrastructure network. The large-scale irrigation project for northeastern Thailand is presented as a case study.
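
    A toy illustration of the commodity-based step, converting commodity tonnage (as would come from input-output production estimates) into truck trips. All coefficients are invented for illustration and are not taken from the paper.

        crop_output_tons = {"rice": 120_000, "cassava": 45_000}   # harvest-season output
        truckload_tons = {"rice": 15.0, "cassava": 12.0}          # average payload per truck
        empty_trip_factor = 1.9       # loaded trips plus most trucks returning empty

        total_trips = 0.0
        for crop, tons in crop_output_tons.items():
            loaded = tons / truckload_tons[crop]                  # loaded trips needed
            total_trips += loaded * empty_trip_factor
            print(f"{crop}: {loaded:,.0f} loaded trips")
        print(f"total truck trips (incl. empty returns): {total_trips:,.0f}")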

  2. Fault diagnosis of nonlinear and large-scale processes using novel modified kernel Fisher discriminant analysis approach

    Science.gov (United States)

    Shi, Huaitao; Liu, Jianchang; Wu, Yuhou; Zhang, Ke; Zhang, Lixiu; Xue, Peng

    2016-04-01

    Timely and accurate fault diagnosis is highly significant for improving the dependability of industrial processes. In this study, fault diagnosis of nonlinear and large-scale processes by variable-weighted kernel Fisher discriminant analysis (KFDA) based on improved biogeography-based optimisation (IBBO) is proposed, referred to as IBBO-KFDA, where IBBO is used to determine the parameters of variable-weighted KFDA, and variable-weighted KFDA is used to solve the multi-classification overlapping problem. The main contributions of this work, which further improve the performance of KFDA for fault diagnosis, are four-fold. First, a nonlinear fault diagnosis approach with variable-weighted KFDA is developed to maximise the separation between overlapping fault samples. Second, the kernel parameters and feature selection of variable-weighted KFDA are simultaneously optimised using IBBO. Third, a single fitness function that combines the erroneous diagnosis rate with the feature cost is created and serves as the target function in the optimisation problem, and a novel mixed kernel function is introduced to improve the classification capability in the feature space and the diagnosis accuracy of IBBO-KFDA. Fourth, an IBBO procedure is developed to obtain solutions of better quality with faster convergence. The proposed IBBO-KFDA method is first used on the Tennessee Eastman process benchmark data sets to validate its feasibility and efficiency, and is then applied to diagnose faults in an automatic gauge control system. Simulation results demonstrate that IBBO-KFDA can obtain better kernel parameters and feature vectors with a lower computing cost, higher diagnosis accuracy and better real-time capacity.

  3. Development of lichen response indexes using a regional gradient modeling approach for large-scale monitoring of forests

    Science.gov (United States)

    Susan Will-Wolf; Peter Neitlich

    2010-01-01

    Development of a regional lichen gradient model from community data is a powerful tool to derive lichen indexes of response to environmental factors for large-scale and long-term monitoring of forest ecosystems. The Forest Inventory and Analysis (FIA) Program of the U.S. Department of Agriculture Forest Service includes lichens in its national inventory of forests of...

  4. Chirping for Large-Scale Maritime Archaeological Survey: A Strategy Developed from a Practical Experience-Based Approach

    OpenAIRE

    Ole Grøn; Lars Ole Boldreel

    2014-01-01

    Archaeological wrecks exposed on the sea floor are mapped using side-scan and multibeam techniques, whereas the detection of submerged archaeological sites, such as Stone Age settlements, and wrecks, partially or wholly embedded in sea-floor sediments, requires the application of high-resolution subbottom profilers. This paper presents a strategy for cost-effective, large-scale mapping of previously undetected sediment-embedded sites and wrecks based on subbottom profiling with chirp systems....

  5. Large-Scale Participation: A Case Study of a Participatory Approach to Developing a New Public Library

    DEFF Research Database (Denmark)

    Dalsgaard, Peter; Eriksson, Eva

    2013-01-01

    In this paper, we present a case study of a participatory project that focuses on interaction in large-scale design, namely, the development of the new Urban Mediaspace Aarhus. This project, which has been under way for ten years, embodies a series of issues that arise when participatory design a...... different paradigms of inquiry and practice in a project of this scale, and how to capture and anchor the insights from participatory events to inform the ongoing design process....

  6. Features of liquid mixtures separation in large-scale distillation columns with structured packing. New ideas and approaches

    Science.gov (United States)

    Pavlenko, A. N.; Zhukov, V. E.; Pecherkin, N. I.; Li, X.; Sui, H.

    2016-10-01

    Negative vapor stratification along the height of a distillation column, caused by the different densities of the vapor mixture components and the higher temperature at the column bottom, leads to the formation of large-scale maldistribution of temperature and mixture composition over the column cross-section, even at uniform irrigation of the structured packing. Experimental results concerning the dynamic effect of packing irrigation on the separation efficiency of the two-component mixture of R-21 and R-114 are presented in this paper. The structured packing Sulzer 350Y was installed in a distillation column with a diameter of 0.9 m. Experiments were carried out on 10- and 19-layer packings with overall heights of 2.1 and 4 m, respectively. The liquid distributor developed by the authors, with 126 independently controlled valves, one for each irrigation point, was used for packing irrigation. The experiments showed that the periodic impact of the irrigation system on the large-scale non-uniformity of mixture composition formed in the packing could significantly affect the distribution of flow parameters over the cross-section and height of the mass transfer unit. Essentially non-uniform periodic irrigation of the packing can improve the separation efficiency of the column by up to 20% if the switching periods are comparable with the times of formation of the large-scale non-uniformity.

  7. Simulation of large-scale tropical tuna movements in relation with daily remote sensing data: the artificial life approach.

    Science.gov (United States)

    Dagorn, L; Petit, M; Stretta, J M

    1997-01-01

    Tunas are known to be able to travel long distances. The aim of this paper is to propose new ethological models which reproduce some tuna movements using the dynamics of their environment. We use sea surface temperature animations (from remote sensing data) to model the South West Indian Ocean, and French purse seiner data are used to estimate movements of fish. The objective of the models is to find a northern movement from the Mozambique Channel to the Seychelles Islands at the appropriate time (May-July). The initial model uses our ecological knowledge of tunas, i.e., search behavior for the high concentrations of food commonly associated with thermal fronts. In some cases, this simple model creates northern movements out of the Mozambique Channel, but it cannot reproduce large-scale movements between the Mozambique Channel and the Seychelles Islands. A next-generation model is then created in which tuna behavior is modeled by an artificial neural network, with a genetic algorithm adjusting the connection weights. The tuna school-network receives daily information from its local environment and chooses the best actions so as to pass from the Mozambique Channel to the Seychelles Islands at the appropriate time. One neural network emerges and represents an adaptive behavior able to interpret daily sea surface temperatures and mimic large-scale tuna movements. This artificial behavior can be generalized to each possible departure position in the Mozambique Channel. This modelling represents a new tool to study large-scale movements of pelagic fish, and is a first step towards real-time management of fisheries.
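
    A toy neuroevolution sketch in the spirit of the model described above: a small network maps local environmental cues to a heading, and a genetic algorithm tunes its weights. The cues, reward and network are invented; the actual model learned from daily sea surface temperature fields.

        import numpy as np

        rng = np.random.default_rng(2)

        def policy(w, obs):
            """One-neuron 'school' network: local cues -> heading in (-pi, pi)."""
            return np.tanh(obs @ w[:3] + w[3]) * np.pi

        def fitness(w, n_days=30):
            """Reward cumulative northward displacement over a season."""
            north = 0.0
            for day in range(n_days):
                obs = np.array([np.sin(day / 5.0), 1.0, rng.normal(0.0, 0.1)])  # toy cues
                north += np.cos(policy(w, obs))     # cos(heading): +1 means due north
            return north

        pop = rng.normal(0.0, 1.0, size=(40, 4))    # 40 candidate weight vectors
        for gen in range(50):                       # genetic algorithm over weights
            scores = np.array([fitness(w) for w in pop])
            parents = pop[np.argsort(scores)[-10:]]             # truncation selection
            children = parents[rng.integers(0, 10, 30)] + rng.normal(0.0, 0.3, (30, 4))
            pop = np.vstack([parents, children])                # elitism + mutation
        print("best northward score:", max(fitness(w) for w in pop))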

  8. Validation of a two-step quality control approach for a large-scale human urine metabolomic study conducted in seven experimental batches with LC/QTOF-MS.

    Science.gov (United States)

    Demetrowitsch, Tobias J; Petersen, Beate; Keppler, Julia K; Koch, Andreas; Schreiber, Stefan; Laudes, Matthias; Schwarz, Karin

    2015-01-01

    After studying food science at the Rheinische Friedrich-Wilhelms University of Bonn, Tobias J Demetrowitsch obtained his doctoral degree in the research field of metabolomics at the Christian-Albrechts-University of Kiel. The present paper is part of his doctoral thesis and describes an extended strategy to evaluate and verify complex or large-scale experiments and data sets. Large-scale studies result in high sample numbers, requiring the analysis of samples in different batches. So far, the verification of such LC-MS-based metabolomics studies has been difficult, and common approaches have not provided a reliable validation procedure to date. This article presents a novel verification process for a large-scale human urine study (analyzed on an LC/QToF-MS system) using a two-step validation procedure. The first step comprises a targeted approach that aims to examine and exclude statistical outliers. The second step consists of a principal component analysis, with the aim of obtaining one tight cluster of all quality controls and a second of all volunteer samples. The applied study design provides a reliable two-step validation procedure for large-scale studies and additionally contains an in-house verification procedure.
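
    A minimal sketch of the two-step idea on simulated data: first exclude QC outliers with a robust z-score, then check by PCA that the remaining quality controls form one tight cluster relative to the biological samples. The data, thresholds and choice of summary statistic are assumptions for illustration.

        import numpy as np
        from sklearn.decomposition import PCA

        rng = np.random.default_rng(3)
        samples = rng.normal(0.0, 1.0, (60, 50))    # 60 volunteer samples x 50 features
        qcs = rng.normal(0.0, 0.15, (12, 50))       # 12 pooled-QC injections, low spread

        # Step 1 (targeted): robust z-scores on a per-injection summary statistic.
        tic = qcs.sum(axis=1)                       # e.g., total signal per QC injection
        mad = 1.4826 * np.median(np.abs(tic - np.median(tic)))
        keep = np.abs(tic - np.median(tic)) / mad < 3.5
        print("QC outliers removed:", int((~keep).sum()))

        # Step 2 (untargeted): PCA; validation passes if QCs cluster tightly.
        scores = PCA(n_components=2).fit_transform(np.vstack([samples, qcs[keep]]))
        spread = scores[60:].std(axis=0) / scores.std(axis=0)
        print("QC spread relative to all samples (PC1, PC2):", np.round(spread, 2))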

  9. Chirping for Large-Scale Maritime Archaeological Survey: A Strategy Developed from a Practical Experience-Based Approach

    Directory of Open Access Journals (Sweden)

    Ole Grøn

    2014-01-01

    Archaeological wrecks exposed on the sea floor are mapped using side-scan and multibeam techniques, whereas the detection of submerged archaeological sites, such as Stone Age settlements, and of wrecks partially or wholly embedded in sea-floor sediments, requires the application of high-resolution subbottom profilers. This paper presents a strategy for cost-effective, large-scale mapping of previously undetected sediment-embedded sites and wrecks based on subbottom profiling with chirp systems. The mapping strategy described includes (a) definition of line spacing depending on the target; (b) interactive surveying, for example, immediate detailed investigation of potential archaeological anomalies on detection, with a denser pattern of subbottom survey lines; (c) onboard interpretation during data acquisition; and (d) recognition of nongeological anomalies. Consequently, this strategy differs from those employed in several detailed studies of known wreck sites and from the way in which geologists map the sea floor and the geological column beneath it. The strategy has been developed on the basis of extensive practical experience gained during the use of an off-the-shelf 2D chirp system and, given the present state of this technology, it appears well suited to large-scale maritime archaeological mapping.

  10. Novel approach for extinguishing large-scale coal fires using gas-liquid foams in open pit mines.

    Science.gov (United States)

    Lu, Xinxiao; Wang, Deming; Qin, Botao; Tian, Fuchao; Shi, Guangyi; Dong, Shuaijun

    2015-12-01

    Coal fires are a serious threat to worker safety and to safe production in open pit mines. Coal fire sources are hidden and numerous, and large cavities are prevalent in the coal seam after the coal has burned, making conventional extinguishing technologies difficult to apply. Foams are considered an efficient means of fire extinguishment in these large-scale workplaces. A novel foam preparation method is introduced, and an original cavitation jet device is proposed to add foaming agent stably. Jet cavitation occurs when the water flow rate and pressure ratio reach specified values. Using a self-built foaming system, high-performance foams are produced and then infused into the blast drilling holes at a high flow rate. Without complicated operation, this system is found to be very suitable for extinguishing large-scale coal fires. Field application shows that foam generation adopting the proposed key technology achieves a good fire extinguishing effect. The temperature reduction using foams is 6-7 times greater than with water, and the CO concentration in the drilling hole is reduced from 9.43 to 0.092‰. The coal fires were controlled successfully in the open pit mine, ensuring normal production as well as the security of personnel and equipment.

  11. Multimethod study of a large-scale programme to improve patient safety using a harm-free care approach

    Science.gov (United States)

    Power, Maxine; Brewster, Liz; Parry, Gareth; Brotherton, Ailsa; Minion, Joel; Ozieranski, Piotr; McNicol, Sarah; Harrison, Abigail; Dixon-Woods, Mary

    2016-01-01

    Objectives We aimed to evaluate whether a large-scale two-phase quality improvement programme achieved its aims and to characterise the influences on achievement. Setting National Health Service (NHS) in England. Participants NHS staff. Interventions The programme sought to (1) develop a shared national, regional and locally aligned safety focus for 4 high-cost, high-volume harms; (2) establish a new measurement system based on a composite measure of ‘harm-free’ care and (3) deliver improved outcomes. Phase I involved a quality improvement collaborative intended to involve 100 organisations; phase II used financial incentives for data collection. Measures Multimethod evaluation of the programme. In phase I, analysis of regional plans and of rates of data submission and clinical outcomes reported to the programme. A concurrent process evaluation was conducted of phase I, but only data on submission rates and clinical outcomes were available for phase II. Results A context of extreme policy-related structural turbulence impacted strongly on phase I. Most regions' plans did not demonstrate full alignment with the national programme; most fell short of recruitment targets, and attrition in attendance at the collaborative meetings occurred over time. Though collaborative participants saw the principles underlying the programme as attractive, useful and innovative, they often struggled to convert enthusiasm into change. Developing the measurement system was arduous, yet it continued to be met by controversy. Data submission rates remained patchy throughout phase I but improved in reach and consistency in phase II in response to financial incentives. Some evidence of improvement in clinical outcomes over time could be detected but was hard to interpret owing to variability in the denominators. Conclusions These findings offer important lessons for large-scale improvement programmes, particularly when they seek to develop novel concepts and measures. External contexts may

  12. Large-scale circuit simulation

    Science.gov (United States)

    Wei, Y. P.

    1982-12-01

    The simulation of VLSI (Very Large Scale Integration) circuits falls beyond the capabilities of conventional circuit simulators like SPICE. On the other hand, conventional logic simulators can only give results as logic levels 1 and 0, with the attendant loss of detail in the waveforms. The aim of developing large-scale circuit simulation is to bridge the gap between conventional circuit simulation and logic simulation. This research investigates new approaches for fast and relatively accurate time-domain simulation of MOS (Metal Oxide Semiconductor), LSI (Large Scale Integration) and VLSI circuits. New techniques and new algorithms are studied in the following areas: (1) analysis sequencing, (2) nonlinear iteration, (3) a modified Gauss-Seidel method, and (4) latency criteria and a timestep control scheme. The developed methods have been implemented in a simulation program, PREMOS, which can be used as a design verification tool for MOS circuits.
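
    A small sketch of the kind of latency exploitation named above: a Gauss-Seidel nodal solve that stops revisiting nodes whose last update fell below a latency threshold. The network and thresholds are invented; a production simulator would also reactivate a latent node when its neighbors change appreciably, which is omitted here for brevity.

        import numpy as np

        def gauss_seidel_latent(G, b, v, tol=1e-9, latency=1e-12, sweeps=200):
            """Solve G v = b by Gauss-Seidel, skipping nodes flagged as latent."""
            active = np.ones(len(b), dtype=bool)
            for _ in range(sweeps):
                delta = 0.0
                for i in range(len(b)):
                    if not active[i]:
                        continue                    # latent node: skip this sweep
                    vi = (b[i] - G[i] @ v + G[i, i] * v[i]) / G[i, i]
                    change = abs(vi - v[i])
                    active[i] = change > latency    # node goes latent when quiescent
                    v[i] = vi
                    delta = max(delta, change)
                if delta < tol:
                    break
            return v

        # Three-node resistive network: diagonally dominant conductance matrix.
        G = np.array([[3.0, -1.0, 0.0],
                      [-1.0, 3.0, -1.0],
                      [0.0, -1.0, 3.0]])
        b = np.array([1.0, 0.0, 0.5])
        print(gauss_seidel_latent(G, b, np.zeros(3)))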

  13. Large Scale Glazed

    DEFF Research Database (Denmark)

    Bache, Anja Margrethe

    2010-01-01

    World-famous architects today challenge the exposure of concrete in their architecture; it is my hope to be able to complement these efforts. I try to develop new aesthetic potentials for concrete and ceramics, at large scales that have not been seen before in the ceramic area. It is expected to result...

  14. Vertically integrated approaches to large scale CO2 storage: Evaluating long-term storage security of CO2 injection in saline aquifers

    Science.gov (United States)

    Gasda, S. E.; Nordbotten, J.; Celia, M. A.

    2009-12-01

    Storage security of injected carbon dioxide (CO2) is an essential component of risk management for geological carbon sequestration operations. During the injection and early post-injection periods, CO2 leakage may occur along faults and leaky wells, but this risk may be partly managed by proper site selection and sensible deployment of monitoring and remediation technologies. On the other hand, long-term storage security is an entirely different risk management problem, one that is dominated by a mobile CO2 plume that may travel over very large spatial and temporal scales before it is trapped by different physical and chemical processes. The primary trapping mechanisms are capillary and solubility trapping, which evolve over thousands to tens of thousands of years and can immobilize a significant portion of the mobile, free-phase CO2 plume. However, these processes are complex, involving a combination of small and large spatial scales over varying time scales. Solubility trapping is a prime example of this complexity, where small-scale density instabilities in the dissolved CO2 region lead to convective mixing that has a significant effect on the large-scale dissolution process over very long time scales. Using appropriate models that can capture both large- and small-scale effects is essential for understanding the role of dissolution and convective mixing in the long-term storage security of CO2 sequestration operations. There are several approaches to modeling long-term CO2 trapping mechanisms. One modeling option is the use of traditional numerical methods, which are often highly sophisticated models that can handle multiple complex phenomena with high levels of accuracy. However, these complex models quickly become prohibitively expensive for the type of large-scale, long-term modeling that is necessary for risk assessment applications such as the late post-injection period. We present an alternative modeling option, the VESA model, that combines

  15. Large-scale retrospective evaluation of regulated liquid chromatography-mass spectrometry bioanalysis projects using different total error approaches.

    Science.gov (United States)

    Tan, Aimin; Saffaj, Taoufiq; Musuku, Adrien; Awaiye, Kayode; Ihssane, Bouchaib; Jhilal, Fayçal; Sosse, Saad Alaoui; Trabelsi, Fethi

    2015-03-01

    The current approach in regulated LC-MS bioanalysis, which evaluates the precision and trueness of an assay separately, has long been criticized for inadequate balancing of lab-customer risks. Accordingly, different total error approaches have been proposed. The aims of this research were to evaluate the aforementioned risks in reality and the differences among four common total error approaches (β-expectation, β-content, uncertainty, and risk profile) through retrospective analysis of regulated LC-MS projects. Twenty-eight projects (14 validations and 14 productions) were randomly selected from two GLP bioanalytical laboratories, representing a wide variety of assays. The results show that the risk of accepting unacceptable batches did exist with the current approach (9% and 4% of the evaluated QC levels failed for validation and production, respectively). The fact that the risk was not widespread was only because the precision and bias of modern LC-MS assays are usually much better than the minimum regulatory requirements. Despite minor differences in magnitude, very similar accuracy profiles and/or conclusions were obtained from the four different total error approaches. High correlation was even observed in the width of the bias intervals. For example, the mean width of SFSTP's β-expectation is 1.10-fold (CV=7.6%) that of Saffaj-Ihssane's uncertainty approach, while the latter is 1.13-fold (CV=6.0%) that of Hoffman-Kringle's β-content approach. To conclude, the risk of accepting unacceptable batches was real with the current approach, suggesting that total error approaches should be used instead. Moreover, any of the four total error approaches may be used because of their overall similarity. Lastly, the difficulties and obstacles associated with the application of total error approaches in routine analysis, and their desirable future improvements, are discussed.
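
    A minimal numeric illustration of the β-expectation variant (not the papers' software): a two-sided tolerance interval around the mean recovery at one QC level, compared against common ±15% acceptance limits. The recovery values are invented.

        import numpy as np
        from scipy import stats

        recoveries = np.array([98.2, 101.5, 97.8, 103.1, 99.4, 100.9, 96.7, 102.2])
        beta = 0.95                    # expected proportion of future results covered
        n = len(recoveries)
        mean, sd = recoveries.mean(), recoveries.std(ddof=1)

        # beta-expectation tolerance interval: mean +/- t * sd * sqrt(1 + 1/n)
        t = stats.t.ppf(1 - (1 - beta) / 2, df=n - 1)
        half = t * sd * np.sqrt(1 + 1 / n)
        lo, hi = mean - half, mean + half
        print(f"accuracy profile at this level: [{lo:.1f}%, {hi:.1f}%]")
        print("acceptable" if lo >= 85 and hi <= 115 else "not acceptable")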

  16. Approaches to learning as predictors of academic achievement: Results from a large scale, multi-level analysis

    DEFF Research Database (Denmark)

    Herrmann, Kim Jesper

    2017-01-01

    While there has been considerable interest in the ways in which academic programmes shape learning and teaching, the effects of these contexts on the relationship between approaches to learning and academic outcomes are under-researched. ... Controlling for the effects of age, gender, and progression, we found that the students’ end-of-semester grade point averages were related negatively to a surface approach and positively to organised effort. Interestingly, the effect of the surface approach on academic achievement varied across programmes. The results are discussed in relation to findings from recent meta...

  17. Approaches to learning as predictors of academic achievement: Results from a large scale, multi-level analysis

    DEFF Research Database (Denmark)

    Herrmann, Kim Jesper

    2017-01-01

    The relationships between university students’ academic achievement and their approaches to learning and studying continuously attract scholarly attention. We report the results of an analysis in which multilevel linear modelling was used to analyse data from 3,626 Danish university students... Controlling for the effects of age, gender, and progression, we found that the students’ end-of-semester grade point averages were related negatively to a surface approach and positively to organised effort. Interestingly, the effect of the surface approach on academic achievement varied across programmes...

  18. A Novel Spatial-Temporal Voronoi Diagram-Based Heuristic Approach for Large-Scale Vehicle Routing Optimization with Time Constraints

    Directory of Open Access Journals (Sweden)

    Wei Tu

    2015-10-01

    Vehicle routing optimization (VRO) designs the best routes to reduce travel cost, energy consumption, and carbon emission. Due to their non-deterministic polynomial-time hard (NP-hard) complexity, many VROs involved in real-world applications require too much computing effort. Shortening computing time for VRO is a great challenge for state-of-the-art spatial optimization algorithms. From a spatial-temporal perspective, this paper presents a spatial-temporal Voronoi diagram-based heuristic approach for large-scale vehicle routing problems with time windows (VRPTW). Considering time constraints, a spatial-temporal Voronoi distance is derived from the spatial-temporal Voronoi diagram to find near neighbors in the space-time searching context. A Voronoi distance decay strategy that integrates a time warp operation is proposed to accelerate the local search procedure. A spatial-temporal feature-guided search is developed to improve unpromising micro route structures. Experiments on VRPTW benchmarks and real-world instances are conducted to verify performance. The results demonstrate that the proposed approach is competitive with state-of-the-art heuristics and achieves high-quality solutions for large-scale instances of VRPTW in a short time. This novel approach will contribute to the spatial decision support community by providing an effective vehicle routing optimization method for large transportation applications in both the public and private sectors.
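
    An illustrative sketch of a spatial-temporal distance for VRPTW neighbor search: travel time plus penalties for time-window incompatibility, so that two customers at the same location but with incompatible windows are "far apart". This is a simplification of the paper's Voronoi-based distance, with invented weights.

        import numpy as np

        def st_distance(ci, cj, speed=1.0, w_wait=0.5, w_late=10.0):
            """ci, cj = (x, y, window_open, window_close, service_time)."""
            travel = np.hypot(cj[0] - ci[0], cj[1] - ci[1]) / speed
            arrival = ci[2] + ci[4] + travel        # leave ci at its window opening
            wait = max(0.0, cj[2] - arrival)        # arrive early: driver waits
            late = max(0.0, arrival - cj[3])        # arrive late: heavily penalized
            return travel + w_wait * wait + w_late * late

        depot = (0, 0, 0, 1e9, 0)
        cust_a = (3, 4, 10, 20, 2)                  # compatible time window
        cust_b = (3, 4, 100, 120, 2)                # same spot, much later window
        print(st_distance(depot, cust_a))           # 7.5  -> near neighbor
        print(st_distance(depot, cust_b))           # 52.5 -> effectively distant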

  19. New grid-planning and certification approaches for the large-scale offshore-wind farm grid-connection systems

    Energy Technology Data Exchange (ETDEWEB)

    Heising, C.; Bartelt, R. [Avasition GmbH, Dortmund (Germany); Zadeh, M. Koochack; Lebioda, T.J.; Jung, J. [TenneT Offshore GmbH, Bayreuth (Germany)

    2012-07-01

    Stable operation of offshore wind farms (OWF) and a stable grid connection under stationary and dynamic conditions are essential to achieve a stable public power supply. To reach this aim, adequate grid-planning and certification approaches are a major advantage. Within this paper, the fundamental characteristics of offshore wind farms and their grid-connection systems are given. The main goal of this research project is to study the stability of the offshore grid, especially in terms of subharmonic stability, for the likely future extension stage of offshore grids, i.e., the parallel connection of two or more HVDC links, and for certain operating scenarios, e.g., an overload scenario. The current requirements according to the grid code are not the focus of this research project; the goal is to study and define potential additional grid code requirements, simulations, tests and grid-planning methods for the future. (orig.)

  20. [Energy Consumption Comparison and Energy Saving Approaches for Different Wastewater Treatment Processes in a Large-scale Reclaimed Water Plant].

    Science.gov (United States)

    Yang, Min; Li, Ya-ming; Wei, Yuan-song; Lü, Jian; Yu, Da-wei; Liu, Ji-bao; Fan, Yao-bo

    2015-06-01

    Energy consumption is the main performance indicator of reclaimed water plant (RWP) operation. Specific energy consumption analysis, unit energy consumption analysis and redundancy analysis were applied to investigate the composition and spatio-temporal distribution of energy consumption in the Qinghe RWP, which runs inverted A2/O, A2/O and A2/O-MBR processes. The A2/O-MBR process was analyzed in particular to identify the main nodes of, and reasons for, high energy consumption; approaches for energy saving were explored, and the energy consumption before and after upgrading was compared. The results showed that aeration was the key factor affecting energy consumption in both the conventional and the A2/O-MBR processes, accounting for 42.97% and 50.65% of total energy consumption, respectively. Pulsating aeration allowed an increased membrane flux and remarkably reduced the energy consumption of the A2/O-MBR process while still meeting the effluent standard: the membrane flux was increased by 20%, and the energy consumption per cubic meter of wastewater and per kilogram of COD removed decreased by 42.39% to 0.53 kW·h·m⁻³ and by 54.74% to 1.29 kW·h·kg⁻¹, respectively. Decreasing the backflow ratio of the A2/O-MBR process within a certain range did not deteriorate the effluent quality, given its insignificant correlation with effluent quality, and may therefore be considered a further energy-saving measure.

  1. Large Scale Solar Heating

    DEFF Research Database (Denmark)

    Heller, Alfred

    2001-01-01

    The main objective of the research was to evaluate large-scale solar heating connected to district heating (CSDHP), to build up a simulation tool and to demonstrate the application of the simulation tool for design studies and on a local energy-planning case. The evaluation was mainly carried out... The model is designed and validated on the Marstal case. Applying the Danish Reference Year, a design tool is presented. The simulation tool is used for proposals for the application of alternative designs, including high-performance solar collector types (trough solar collectors, vacuum pipe collectors). Simulation programs are proposed as a control-supporting tool for daily operation and performance prediction of central solar heating plants. Finally, the CSHP technology is put into perspective with respect to alternatives, and a short discussion on the barriers and breakthrough of the technology is given.

  2. An Automated Approach to Map Winter Cropped Area of Smallholder Farms across Large Scales Using MODIS Imagery

    Directory of Open Access Journals (Sweden)

    Meha Jain

    2017-06-01

    Fine-scale agricultural statistics are an important tool for understanding trends in food production and their associated drivers, yet these data are rarely collected in smallholder systems. These statistics are particularly important for smallholder systems given the large amount of fine-scale heterogeneity in production that occurs in these regions. To overcome the lack of ground data, satellite data are often used to map fine-scale agricultural statistics. However, doing so is challenging for smallholder systems because of (1) complex sub-pixel heterogeneity; (2) little to no available calibration data; and (3) high amounts of cloud cover, as most smallholder systems occur in the tropics. We develop an automated method, termed the MODIS Scaling Approach (MSA), to map smallholder cropped area across large spatial and temporal scales using MODIS Enhanced Vegetation Index (EVI) satellite data. We use this method to map winter cropped area, a key measure of cropping intensity, across the Indian subcontinent annually from 2000–2001 to 2015–2016. The MSA defines a pixel as cropped based on winter growing season phenology and scales the percent of cropped area within a single MODIS pixel based on observed EVI values at peak phenology. We validated the result with eleven high-resolution scenes (spatial scale of 5 × 5 m2 or finer) that we classified into cropped versus non-cropped maps using training data collected by visual inspection of the high-resolution imagery. The MSA had moderate to high accuracies when validated using these eleven scenes across India (R2 ranging between 0.19 and 0.89, with an overall R2 of 0.71 across all sites). This method requires no calibration data, making it easy to implement across large spatial and temporal scales, with 100% spatial coverage due to the compositing of EVI to generate cloud-free data sets. The accuracies found in this study are similar to those of other studies that map crop production using automated methods
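
    A toy version of the two MSA steps just described: classify a pixel as winter-cropped from its EVI phenology, then scale sub-pixel percent cropped area by the EVI value at peak phenology. The thresholds and linear scaling are invented stand-ins for the calibrated values.

        import numpy as np

        def winter_crop_fraction(evi_series, crop_thresh=0.35, evi_full_cover=0.65):
            evi = np.asarray(evi_series, dtype=float)
            peak = evi.max()
            greenup = (peak - evi[0]) > 0.15          # rising winter-season phenology
            if not (greenup and peak > crop_thresh):
                return 0.0                            # pixel classified as not cropped
            return min(1.0, peak / evi_full_cover)    # scale percent area by peak EVI

        # Synthetic winter-season EVI composites for three MODIS pixels.
        print(winter_crop_fraction([0.20, 0.30, 0.55, 0.40]))   # densely cropped
        print(winter_crop_fraction([0.20, 0.25, 0.42, 0.35]))   # partially cropped
        print(winter_crop_fraction([0.22, 0.24, 0.25, 0.23]))   # fallow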

  3. Large scale tracking algorithms

    Energy Technology Data Exchange (ETDEWEB)

    Hansen, Ross L. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Love, Joshua Alan [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Melgaard, David Kennett [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Karelitz, David B. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Pitts, Todd Alan [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Zollweg, Joshua David [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Anderson, Dylan Z. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Nandy, Prabal [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Whitlow, Gary L. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Bender, Daniel A. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Byrne, Raymond Harry [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)

    2015-01-01

    Low signal-to-noise data processing algorithms for improved detection, tracking, discrimination and situational threat assessment are a key research challenge. As sensor technologies progress, the number of pixels will increase significantly. This will result in increased resolution, which could improve object discrimination but, unfortunately, will also result in a significant increase in the number of potential targets to track. Many tracking techniques, like multi-hypothesis trackers, suffer from a combinatorial explosion as the number of potential targets increases. As the resolution increases, the phenomenology applied towards detection algorithms also changes. For low-resolution sensors, "blob" tracking is the norm. For higher-resolution data, additional information may be employed in the detection and classification steps. The most challenging scenarios are those where the targets cannot be fully resolved, yet must be tracked and distinguished from neighboring closely spaced objects. Tracking vehicles in an urban environment is an example of such a challenging scenario. This report evaluates several potential tracking algorithms for large-scale tracking in an urban environment.

  4. A solution approach based on Benders decomposition for the preventive maintenance scheduling problem of a stochastic large-scale energy system

    DEFF Research Database (Denmark)

    Lusby, Richard Martin; Muller, Laurent Flindt; Petersen, Bjørn

    2013-01-01

    This paper describes a Benders decomposition-based framework for solving the large-scale energy management problem that was posed for the ROADEF 2010 challenge. The problem was taken from the power industry and entailed scheduling the outage dates for a set of nuclear power plants, which need... to be regularly taken down for refueling and maintenance, in such a way that the expected cost of meeting the power demand in a number of potential scenarios is minimized. We show that the problem structure naturally lends itself to Benders decomposition; however, not all constraints can be included in the mixed... integer programming model. We present a two-phase approach that first uses Benders decomposition to solve the linear programming relaxation of a relaxed version of the problem. In the second phase, integer solutions are enumerated and a procedure is applied to make them satisfy constraints not included...

  5. A Framing Approach to Cross-disciplinary Research Collaboration: Experiences from a Large-scale Research Project on Adaptive Water Management

    Directory of Open Access Journals (Sweden)

    Art Dewulf

    2007-12-01

    Although cross-disciplinary research collaboration is necessary to achieve a better understanding of how human and natural systems are dynamically linked, it often turns out to be very difficult in practice. We outline a framing approach to cross-disciplinary research that focuses on the different perspectives that researchers from different backgrounds use to make sense of the issues they want to research jointly. Based on interviews, participants' evaluations, and our own observations during meetings, we analyze three aspects of frame diversity in a large-scale research project. First, we identify dimensions of difference in the way project members frame the central concept of adaptive water management. Second, we analyze the challenges provoked by the multiple framings of concepts. Third, we analyze how a number of interventions (interactive workshops, facilitation, group model building, and concrete case contexts) contribute to the connection and integration of different frames through a process of joint learning and knowledge construction.

  6. The health system and population health implications of large-scale diabetes screening in India: a microsimulation model of alternative approaches.

    Directory of Open Access Journals (Sweden)

    Sanjay Basu

    2015-05-01

    Like a growing number of rapidly developing countries, India has begun to develop a system for large-scale community-based screening for diabetes. We sought to identify the implications of using alternative screening instruments to detect people with undiagnosed type 2 diabetes among diverse populations across India. We developed and validated a microsimulation model that incorporated data from 58 studies from across the country into a nationally representative sample of Indians aged 25-65 years. We estimated the diagnostic and health system implications of three major survey-based screening instruments and of random glucometer-based screening. Of the 567 million Indians eligible for screening, depending on which of the four screening approaches is utilized, between 158 and 306 million would be expected to screen as "high risk" for type 2 diabetes and be referred for confirmatory testing. Between 26 million and 37 million of these people would be expected to meet international diagnostic criteria for diabetes, but between 126 million and 273 million would be "false positives." The ratio of false positives to true positives varied from 3.9 (when using random glucose screening) to 8.2 (when using a survey-based screening instrument) in our model. The cost per case found would be expected to range from US$5.28 (when using random glucose screening) to US$17.06 (when using a survey-based screening instrument), representing a total cost of between US$169 and US$567 million. The major limitation of our analysis is its dependence on published cohort studies that are unlikely to fully capture the poorest and most rural areas of the country. Because these areas are thought to have the lowest diabetes prevalence, this may result in overestimation of the efficacy and health benefits of screening. Large-scale community-based screening is anticipated to produce a large number of false-positive results, particularly if using currently available survey-based screening
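
    A worked-arithmetic sketch using round figures of the kind quoted above (taking the upper-bound instrument as an example; the cost assumption is invented, so the output only illustrates the mechanics, not the paper's exact estimates).

        eligible = 567e6                  # Indians eligible for screening
        screen_positive = 306e6           # referred for confirmatory testing
        true_cases = 37e6                 # meet diagnostic criteria among positives
        false_pos = screen_positive - true_cases

        print(f"false positives: {false_pos / 1e6:.0f} million")
        print(f"FP : TP ratio  : {false_pos / true_cases:.1f}")

        cost_per_confirmatory_test = 2.0  # assumed US$ per follow-up test
        cost_per_case = cost_per_confirmatory_test * screen_positive / true_cases
        print(f"cost per true case found: US${cost_per_case:.2f}")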

  7. Re-fraction: a machine learning approach for deterministic identification of protein homologues and splice variants in large-scale MS-based proteomics.

    Science.gov (United States)

    Yang, Pengyi; Humphrey, Sean J; Fazakerley, Daniel J; Prior, Matthew J; Yang, Guang; James, David E; Yang, Jean Yee-Hwa

    2012-05-04

    A key step in the analysis of mass spectrometry (MS)-based proteomics data is the inference of proteins from identified peptide sequences. Here we describe Re-Fraction, a novel machine learning algorithm that enhances deterministic protein identification. Re-Fraction utilizes several protein physical properties to assign proteins to expected protein fractions that comprise large-scale MS-based proteomics data. This information is then used to appropriately assign peptides to specific proteins. This approach is sensitive, highly specific, and computationally efficient. We provide algorithms and source code for the current version of Re-Fraction, which accepts output tables from the MaxQuant environment. Nevertheless, the principles behind Re-Fraction can be applied to other protein identification pipelines where data are generated from samples fractionated at the protein level. We demonstrate the utility of this approach through reanalysis of data from a previously published study and generate lists of proteins deterministically identified by Re-Fraction that were previously only identified as members of a protein group. We find that this approach is particularly useful in resolving protein groups composed of splice variants and homologues, which are frequently expressed in a cell- or tissue-specific manner and may have important biological consequences.
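
    A toy stand-in for the core idea: predict the expected protein-level fraction from physical properties, then check whether a candidate protein is consistent with the fraction its peptides were observed in. Data, features and thresholds are simulated; Re-Fraction itself works on MaxQuant output tables.

        import numpy as np
        from sklearn.ensemble import RandomForestClassifier

        rng = np.random.default_rng(4)
        n = 400
        X = np.column_stack([rng.normal(50, 15, n),     # molecular weight (kDa)
                             rng.uniform(4, 11, n),     # isoelectric point
                             rng.normal(0, 1, n)])      # hydrophobicity index
        fraction = np.digitize(X[:, 0], [35, 50, 65])   # fractions roughly by mass

        clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, fraction)

        observed_fraction = 3                           # peptide seen in heaviest band
        candidates = {"isoform_A": [30.0, 6.1, 0.2],    # two members of one protein group
                      "isoform_B": [70.0, 6.0, 0.1]}
        for name, props in candidates.items():
            pred = clf.predict([props])[0]
            verdict = "consistent" if pred == observed_fraction else "unlikely"
            print(f"{name}: predicted fraction {pred} -> {verdict}")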

  8. A Hybrid Approach Using an Artificial Bee Algorithm with Mixed Integer Programming Applied to a Large-Scale Capacitated Facility Location Problem

    Directory of Open Access Journals (Sweden)

    Guillermo Cabrera G.

    2012-01-01

    We present a hybridization of two different approaches applied to the well-known Capacitated Facility Location Problem (CFLP). The Artificial Bee algorithm (BA) is used to select a promising subset of locations (warehouses), which are solely included in the Mixed Integer Programming (MIP) model. Next, the algorithm solves the subproblem by considering the entire set of customers. The hybrid implementation allows us to bypass certain inherent weaknesses of each algorithm, which means that we are able to find an optimal solution in an acceptable computational time. In this paper we demonstrate that BA can be significantly improved by use of the MIP algorithm. At the same time, our hybrid implementation allows the MIP algorithm to reach the optimal solution in a considerably shorter time than is needed to solve the model using the entire dataset directly within the model. Our hybrid approach outperforms the results obtained by each technique separately. It is able to find the optimal solution in a shorter time than each technique on its own, and the results are highly competitive with the state-of-the-art in large-scale optimization. Furthermore, according to our results, combining the BA with a mathematical programming approach appears to be an interesting research area in combinatorial optimization.

  9. Understanding uncertainties in non-linear population trajectories: a Bayesian semi-parametric hierarchical approach to large-scale surveys of coral cover.

    Directory of Open Access Journals (Sweden)

    Julie Vercelloni

    Full Text Available Recently, attempts to improve decision making in species management have focussed on uncertainties associated with modelling temporal fluctuations in populations. Reducing model uncertainty is challenging; while larger samples improve estimation of species trajectories and reduce statistical errors, they typically amplify variability in observed trajectories. In particular, traditional modelling approaches aimed at estimating population trajectories usually do not account well for nonlinearities and uncertainties associated with multi-scale observations characteristic of large spatio-temporal surveys. We present a Bayesian semi-parametric hierarchical model for simultaneously quantifying uncertainties associated with model structure and parameters, and scale-specific variability over time. We estimate uncertainty across a four-tiered spatial hierarchy of coral cover from the Great Barrier Reef. Coral variability is well described; however, our results show that, in the absence of additional model specifications, conclusions regarding coral trajectories become highly uncertain when considering multiple reefs, suggesting that management should focus more at the scale of individual reefs. The approach presented facilitates the description and estimation of population trajectories and associated uncertainties when variability cannot be attributed to specific causes and origins. We argue that our model can unlock value contained in large-scale datasets, provide guidance for understanding sources of uncertainty, and support better informed decision making.

  10. Comparative analysis of European wide marine ecosystem shifts: a large-scale approach for developing the basis for ecosystem-based management.

    Science.gov (United States)

    Möllmann, Christian; Conversi, Alessandra; Edwards, Martin

    2011-08-23

    Abrupt and rapid ecosystem shifts (where major reorganizations of food-web and community structures occur), commonly termed regime shifts, are changes between contrasting and persisting states of ecosystem structure and function. These shifts have been increasingly reported for exploited marine ecosystems around the world from the North Pacific to the North Atlantic. Understanding the drivers and mechanisms leading to marine ecosystem shifts is crucial in developing adaptive management strategies to achieve sustainable exploitation of marine ecosystems. An international workshop on a comparative approach to analysing these marine ecosystem shifts was held at Hamburg University, Institute for Hydrobiology and Fisheries Science, Germany on 1-3 November 2010. Twenty-seven scientists from 14 countries attended the meeting, representing specialists from seven marine regions, including the Baltic Sea, the North Sea, the Barents Sea, the Black Sea, the Mediterranean Sea, the Bay of Biscay and the Scotian Shelf off the Canadian East coast. The goal of the workshop was to conduct the first large-scale comparison of marine ecosystem regime shifts across multiple regional areas, in order to support the development of ecosystem-based management strategies.
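
    A minimal sketch of the kind of shift detection used in such comparisons: scan a time series for the split point that maximizes the standardized difference in means between the two candidate regimes. This is a crude stand-in for STARS-type regime-shift tests, run on simulated data.

        import numpy as np

        def detect_shift(x, min_len=5):
            """Return (split index, test statistic) for the strongest mean shift."""
            x = np.asarray(x, dtype=float)
            best_stat, best_t = 0.0, None
            for t in range(min_len, len(x) - min_len):
                a, b = x[:t], x[t:]
                se = np.sqrt(a.var(ddof=1) / len(a) + b.var(ddof=1) / len(b))
                stat = abs(a.mean() - b.mean()) / se
                if stat > best_stat:
                    best_stat, best_t = stat, t
            return best_t, best_stat

        rng = np.random.default_rng(5)
        series = np.concatenate([rng.normal(0.0, 1.0, 30),    # regime 1
                                 rng.normal(2.5, 1.0, 30)])   # regime 2, shifted mean
        print(detect_shift(series))    # expected split near index 30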

  11. Ethics of large-scale change

    DEFF Research Database (Denmark)

    Arler, Finn

    2006-01-01

    , which kind of attitude is appropriate when dealing with large-scale changes like these from an ethical point of view. Three kinds of approaches are discussed: Aldo Leopold's mountain thinking, the neoclassical economists' approach, and finally the so-called Concentric Circle Theories approach...

  12. A nurse-led interdisciplinary primary care approach to prevent disability among community-dwelling frail older people: a large-scale process evaluation.

    Science.gov (United States)

    Metzelthin, Silke F; Daniëls, Ramon; van Rossum, Erik; Cox, Karen; Habets, Herbert; de Witte, Luc P; Kempen, Gertrudis I J M

    2013-09-01

    The complex healthcare needs of frail older people and their increased risk of disability require an integrated and proactive approach. In the Netherlands, an interdisciplinary primary care approach has recently been developed, involving individualized assessment and interventions (tailor-made care), case management and long-term follow-up. The practice nurse, based in a general practice, acts as case manager and plans, organizes and monitors the care process, facilitating cooperation between professionals. The approach showed positive indications regarding its feasibility in a small pilot, but its implementation on a large scale had not hitherto been investigated. The aims were to examine the extent to which the interdisciplinary care approach is implemented as planned and to gain insight into healthcare professionals' and frail older people's experiences regarding the benefits, burden, stimulating factors and barriers. A process evaluation was conducted using a mixed-methods design in six GP practices in the south of the Netherlands. Practice nurses (n=7), GPs (n=12), occupational therapists (n=6) and physical therapists (n=20) participated in the process evaluation. Furthermore, 194 community-dwelling frail older people (≥ 70 years) were included using the Groningen Frailty Indicator. People who were terminally ill, were confined to bed, had severe cognitive or psychological impairments or were unable to communicate in Dutch were excluded. Quantitative data (logbooks and evaluation forms) were collected from all the participating frail older people, and 13 semi-structured interviews with a selection of them were conducted. In addition, data from healthcare professionals were collected through 12 semi-structured interviews and four focus group discussions. Although some parts of the protocol were insufficiently executed, healthcare professionals and frail older people were satisfied with the care approach, as it provided a useful structure for the delivery of geriatric primary

  13. Large-scale data analytics

    CERN Document Server

    Gkoulalas-Divanis, Aris

    2014-01-01

    Provides cutting-edge research in large-scale data analytics from diverse scientific areas; surveys varied subject areas and reports on individual results of research in the field; shares many tips and insights into large-scale data analytics from authors and editors with long-term experience and specialization in the field.

  14. Comprehensive Approaches to Multiphase Flows in Geophysics - Application to nonisothermal, nonhomogenous, unsteady, large-scale, turbulent dusty clouds I. Hydrodynamic and Thermodynamic RANS and LES Models

    Energy Technology Data Exchange (ETDEWEB)

    S. Dartevelle

    2005-09-05

    The objective of this manuscript is to fully derive a geophysical multiphase model able to "accommodate" different multiphase turbulence approaches, viz. the Reynolds-Averaged Navier-Stokes (RANS), Large Eddy Simulation (LES), or hybrid RANS-LES approaches. This manuscript is the first part of a larger geophysical multiphase project, led by LANL, that aims to develop comprehensive modeling tools for large-scale, atmospheric, transient-buoyancy dusty jets and plumes (e.g., Plinian clouds, nuclear "mushrooms", "supercell" forest fire plumes) and for boundary-dominated geophysical multiphase gravity currents (e.g., dusty surges, dilute pyroclastic flows, dusty gravity currents in street canyons). LES is a partially deterministic approach constructed on either a spatial or a temporal separation between the large and small scales of the flow, whereas RANS is an entirely probabilistic approach constructed on a statistical separation between an ensemble-averaged mean and higher-order statistical moments (the so-called "fluctuating parts"). Within this specific multiphase context, both turbulence approaches are built upon the same phasic binary-valued "function of presence". This function of presence formally describes the occurrence, or not, of any phase at a given position and time and therefore allows the same basic multiphase Navier-Stokes model to be derived for either the RANS or the LES framework. The only differences between these turbulence frameworks are the closures for the various "turbulence" terms involving the unknown variables from the fluctuating (RANS) or subgrid (LES) parts. Even though the hydrodynamic and thermodynamic models for RANS and LES have the same set of Partial Differential Equations, the physical interpretations of these PDEs cannot be the same, i.e., RANS models an averaged field, while LES simulates a
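
    In standard multiphase notation (a sketch consistent with the description above, not necessarily the manuscript's exact symbols), the function of presence of phase k and the two averagings can be written as:

        X_k(\mathbf{x},t) =
          \begin{cases}
            1 & \text{if phase } k \text{ occupies } (\mathbf{x},t) \\
            0 & \text{otherwise}
          \end{cases}

        \text{RANS (ensemble average):}\quad \alpha_k = \langle X_k \rangle

        \text{LES (spatial filter):}\quad \bar{\alpha}_k(\mathbf{x},t) = \int G(\mathbf{x}-\mathbf{x}')\, X_k(\mathbf{x}',t)\, d\mathbf{x}'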

  17. A systems immunology approach to the host-tumor interaction: large-scale patterns of natural autoantibodies distinguish healthy and tumor-bearing mice.

    Directory of Open Access Journals (Sweden)

    Yifat Merbl

    Full Text Available Traditionally, immunology has considered a meaningful antibody response to be marked by large amounts of high-affinity antibodies reactive with the specific inciting antigen; the detection of small amounts of low-affinity antibodies binding to seemingly unrelated antigens has been considered to be beneath the threshold of immunological meaning. A systems-biology approach to immunology, however, suggests that large-scale patterns in the antibody repertoire might also reflect the functional state of the immune system. To investigate such global patterns of antibodies, we have used an antigen-microarray device combined with informatic analysis. Here we asked whether antibody-repertoire patterns might reflect the state of an implanted tumor. We studied the serum antibodies of inbred C57BL/6 mice before and after implantation of syngeneic 3LL tumor cells of either metastatic or non-metastatic clones. We analyzed patterns of IgG and IgM autoantibodies binding to over 300 self-antigens arrayed on slides using support vector machines and genetic algorithm techniques. We now report that antibody patterns, but not single antibodies, were informative: (1) mice, even before tumor implantation, manifest both individual and common patterns of low-titer natural autoantibodies; (2) the patterns of these autoantibodies respond to the growth of the tumor cells, and can distinguish between metastatic and non-metastatic tumor clones; and (3) curative tumor resection induces dynamic changes in these low-titer autoantibody patterns. The informative patterns included autoantibodies binding to self-molecules not known to be tumor-associated antigens (including insulin, DNA, myosin, fibrinogen) as well as to known tumor-associated antigens (including p53, cytokeratin, carbonic anhydrases, tyrosinase). Thus, low-titer autoantibodies that are not the direct products of tumor-specific immunization can still generate an immune biomarker of the body-tumor interaction. System

  18. Optimal Control for Nonlinear Interconnected Large-scale Systems: A Successive Approximation Approach%非线性互联大系统的最优控制:逐次逼近法

    Institute of Scientific and Technical Information of China (English)

    唐功友; 孙亮

    2005-01-01

    The optimal control problem for nonlinear interconnected large-scale dynamic systems is considered. A successive approximation approach for designing the optimal controller is proposed with respect to quadratic performance indexes. By using the approach, the high order, coupling,nonlinear two-point boundary value (TPBV) problem is transformed into a sequence of linear decoupling TPBV problems. It is proven that the TPBV problem sequence uniformly converges to the optimal control for nonlinear interconnected large-scale systems. A suboptimal control law is obtained by using a finite iterative result of the optimal control sequence.
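
    As an illustration of the scheme, the sketch below applies the successive approximation idea to a scalar instance: at each iteration the nonlinearity and its derivative are frozen at the previous iterate, so the two-point boundary value problem solved per step is linear. All problem data are made-up stand-ins, and scipy's generic BVP solver takes the place of a dedicated TPBV routine.

```python
import numpy as np
from scipy.integrate import solve_bvp
from scipy.interpolate import interp1d

# Scalar illustration (all data hypothetical): x' = a*x + f(x) + b*u,
# J = 0.5 * integral(q*x^2 + r*u^2) dt, with optimal u = -(b/r)*lambda.
a, b, q, r, T, x0 = -1.0, 1.0, 1.0, 1.0, 5.0, 1.0
f = lambda x: 0.5 * np.sin(x)      # nonlinearity, frozen at previous iterate
fx = lambda x: 0.5 * np.cos(x)     # its derivative

t = np.linspace(0.0, T, 200)
x_prev, lam_prev = np.full_like(t, x0), np.zeros_like(t)

for k in range(10):                # successive approximation loop
    xi = interp1d(t, x_prev, fill_value="extrapolate")
    li = interp1d(t, lam_prev, fill_value="extrapolate")

    def ode(s, y):                 # linear, decoupled TPBV problem of step k
        x, lam = y
        return np.vstack([a * x - (b**2 / r) * lam + f(xi(s)),
                          -q * x - a * lam - fx(xi(s)) * li(s)])

    def bc(ya, yb):                # x(0) = x0, lambda(T) = 0
        return np.array([ya[0] - x0, yb[1]])

    sol = solve_bvp(ode, bc, t, np.vstack([x_prev, lam_prev]))
    x_new, lam_new = sol.sol(t)
    if np.max(np.abs(x_new - x_prev)) < 1e-6:
        break
    x_prev, lam_prev = x_new, lam_new

u = -(b / r) * lam_prev            # suboptimal control from a finite iterate
```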

  19. Large-Scale Information Systems

    Energy Technology Data Exchange (ETDEWEB)

    D. M. Nicol; H. R. Ammerlahn; M. E. Goldsby; M. M. Johnson; D. E. Rhodes; A. S. Yoshimura

    2000-12-01

    Large enterprises are ever more dependent on their Large-Scale Information Systems (LSIS), computer systems that are distinguished architecturally by distributed components--data sources, networks, computing engines, simulations, human-in-the-loop control and remote access stations. These systems provide such capabilities as workflow, data fusion and distributed database access. The Nuclear Weapons Complex (NWC) contains many examples of LSIS components, a fact that motivates this research. However, most LSIS in use grew up from collections of separate subsystems that were not designed to be components of an integrated system. For this reason, they are often difficult to analyze and control. The problem is made more difficult by the size of a typical system, its diversity of information sources, and the institutional complexities associated with its geographic distribution across the enterprise. Moreover, there is no integrated approach for analyzing or managing such systems. Indeed, integrated development of LSIS is an active area of academic research. This work developed such an approach by simulating the various components of the LSIS and allowing the simulated components to interact with real LSIS subsystems. This research demonstrated two benefits. First, applying it to a particular LSIS provided a thorough understanding of the interfaces between the system's components. Second, it demonstrated how more rapid and detailed answers could be obtained to questions significant to the enterprise by interacting with the relevant LSIS subsystems through simulated components designed with those questions in mind. In a final, added phase of the project, investigations were made on extending this research to wireless communication networks in support of telemetry applications.

  20. 大规模生产TiO2薄膜的方法%Approach for Producing TiO2 Thin Films in Large Scale

    Institute of Scientific and Technical Information of China (English)

    汪洋; 彭晓光; 陈樱

    2007-01-01

    Introduction: Metal oxides are in use as catalysts in industrial processes. The surface of titanium dioxide (TiO2) has been of great interest because of its capability for heterogeneous catalysis and photocatalysis, and the adsorption of small molecules on TiO2 has received considerable attention in relation to the elimination of atmospheric pollutants. It is therefore important to produce TiO2 thin films on a large scale.

  1. Sentinel-1 data massive processing for large scale DInSAR analyses within Cloud Computing environments through the P-SBAS approach

    Science.gov (United States)

    Lanari, Riccardo; Bonano, Manuela; Buonanno, Sabatino; Casu, Francesco; De Luca, Claudio; Fusco, Adele; Manunta, Michele; Manzo, Mariarosaria; Pepe, Antonio; Zinno, Ivana

    2017-04-01

    -core programming techniques. Currently, Cloud Computing environments make available large collections of computing resources and storage that can be effectively exploited through the presented S1 P-SBAS processing chain to carry out interferometric analyses at a very large scale, in reduced time. This also allows us to address the problems connected with using the S1 P-SBAS chain in operational contexts, related to hazard monitoring and risk prevention and mitigation, where handling large amounts of data represents a challenging task. As a significant experimental result we performed a large spatial scale SBAS analysis relevant to Central and Southern Italy by exploiting the Amazon Web Services Cloud Computing platform. In particular, we processed in parallel 300 S1 acquisitions covering the Italian peninsula from Lazio to Sicily through the presented S1 P-SBAS processing chain, generating 710 interferograms, thus finally obtaining the displacement time series of the whole processed area. This work has been partially supported by the CNR-DPC agreement, the H2020 EPOS-IP project (GA 676564) and the ESA GEP project.
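
    The parallel structure described here can be sketched generically. The snippet below fans interferogram generation out over worker processes; the pairing rule, function name and scene identifiers are placeholders, not the actual P-SBAS implementation.

```python
from multiprocessing import Pool

# Hypothetical worker: the body stands in for the real interferometric
# processing steps (co-registration, interferogram formation, unwrapping).
def make_interferogram(pair):
    master, slave = pair
    # ... actual SAR processing would happen here ...
    return f"ifg_{master}_{slave}"

acquisitions = [f"S1_{i:03d}" for i in range(300)]   # 300 scenes, as in the test
# illustrative small-baseline pairing: each scene with its next two neighbours
pairs = [(acquisitions[i], acquisitions[j])
         for i in range(len(acquisitions))
         for j in (i + 1, i + 2) if j < len(acquisitions)]

if __name__ == "__main__":
    with Pool(processes=32) as pool:       # e.g. one worker per cloud vCPU
        interferograms = pool.map(make_interferogram, pairs)
    print(len(interferograms), "interferograms generated")
```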

  2. Large Scale Dynamos in Stars

    Science.gov (United States)

    Vishniac, Ethan T.

    2015-01-01

    We show that a differentially rotating conducting fluid automatically creates a magnetic helicity flux with components along the rotation axis and in the direction of the local vorticity. This drives a rapid growth in the local density of current helicity, which in turn drives a large scale dynamo. The dynamo growth rate derived from this process is not constant, but depends inversely on the large scale magnetic field strength. This dynamo saturates when buoyant losses of magnetic flux compete with the large scale dynamo, providing a simple prediction for magnetic field strength as a function of Rossby number in stars. Increasing anisotropy in the turbulence produces a decreasing magnetic helicity flux, which explains the flattening of the B/Rossby number relation at low Rossby numbers. We also show that the kinetic helicity is always a subdominant effect. There is no kinematic dynamo in real stars.

  3. Large-scale multimedia modeling applications

    Energy Technology Data Exchange (ETDEWEB)

    Droppo, J.G. Jr.; Buck, J.W.; Whelan, G.; Strenge, D.L.; Castleton, K.J.; Gelston, G.M.

    1995-08-01

    Over the past decade, the US Department of Energy (DOE) and other agencies have faced increasing scrutiny for a wide range of environmental issues related to past and current practices. A number of large-scale applications have been undertaken that required analysis of large numbers of potential environmental issues over a wide range of environmental conditions and contaminants. Several of these applications, referred to here as large-scale applications, have addressed long-term public health risks using a holistic approach for assessing impacts from potential waterborne and airborne transport pathways. Multimedia models such as the Multimedia Environmental Pollutant Assessment System (MEPAS) were designed for use in such applications. MEPAS integrates radioactive and hazardous contaminants impact computations for major exposure routes via air, surface water, ground water, and overland flow transport. A number of large-scale applications of MEPAS have been conducted to assess various endpoints for environmental and human health impacts. These applications are described in terms of lessons learned in the development of an effective approach for large-scale applications.

  4. Very Large Scale Integration (VLSI).

    Science.gov (United States)

    Yeaman, Andrew R. J.

    Very Large Scale Integration (VLSI), the state-of-the-art production techniques for computer chips, promises such powerful, inexpensive computing that, in the future, people will be able to communicate with computer devices in natural language or even speech. However, before full-scale VLSI implementation can occur, certain salient factors must be…

  5. Large-scale solar heat

    Energy Technology Data Exchange (ETDEWEB)

    Tolonen, J.; Konttinen, P.; Lund, P. [Helsinki Univ. of Technology, Otaniemi (Finland). Dept. of Engineering Physics and Mathematics

    1998-12-31

    In this project a large domestic solar heating system was built and a solar district heating system was modelled and simulated. Objectives were to improve the performance and reduce costs of a large-scale solar heating system. As a result of the project the benefit/cost ratio can be increased by 40 % through dimensioning and optimising the system at the designing stage. (orig.)

  6. Testing gravity on Large Scales

    OpenAIRE

    Raccanelli Alvise

    2013-01-01

    We show how it is possible to test general relativity and different models of gravity via Redshift-Space Distortions using forthcoming cosmological galaxy surveys. However, the theoretical models currently used to interpret the data often rely on simplifications that make them not accurate enough for precise measurements. We will discuss improvements to the theoretical modeling at very large scales, including wide-angle and general relativistic corrections; we then show that for wide and deep surveys those corrections need to be taken into account if we want to measure the growth of structures at a few percent level, and so perform tests on gravity, without introducing systematic errors. Finally, we report the results of some recent cosmological model tests carried out using those precise models.

  7. Large-scale neuromorphic computing systems

    Science.gov (United States)

    Furber, Steve

    2016-10-01

    Neuromorphic computing covers a diverse range of approaches to information processing all of which demonstrate some degree of neurobiological inspiration that differentiates them from mainstream conventional computing systems. The philosophy behind neuromorphic computing has its origins in the seminal work carried out by Carver Mead at Caltech in the late 1980s. This early work influenced others to carry developments forward, and advances in VLSI technology supported steady growth in the scale and capability of neuromorphic devices. Recently, a number of large-scale neuromorphic projects have emerged, taking the approach to unprecedented scales and capabilities. These large-scale projects are associated with major new funding initiatives for brain-related research, creating a sense that the time and circumstances are right for progress in our understanding of information processing in the brain. In this review we present a brief history of neuromorphic engineering then focus on some of the principal current large-scale projects, their main features, how their approaches are complementary and distinct, their advantages and drawbacks, and highlight the sorts of capabilities that each can deliver to neural modellers.

  8. A large-scale distribution of milk-based fortified spreads: evidence for a new approach in regions with high burden of acute malnutrition.

    Science.gov (United States)

    Defourny, Isabelle; Minetti, Andrea; Harczi, Géza; Doyon, Stéphane; Shepherd, Susan; Tectonidis, Milton; Bradol, Jean-Hervé; Golden, Michael

    2009-01-01

    There are 146 million underweight children in the developing world, which contribute to up to half of the world's child deaths. In high burden regions for malnutrition, the treatment of individual children is limited by available resources. Here, we evaluate a large-scale distribution of a nutritional supplement on the prevention of wasting. A new ready-to-use food (RUF) was developed as a diet supplement for children under three. The intervention consisted of six monthly distributions of RUF during the 2007 hunger gap in a district of Maradi region, Niger, for approximately 60,000 children (length: 60-85 cm). At each distribution, all children over 65 cm had their Mid-Upper Arm Circumference (MUAC) recorded. Admission trends for severe wasting (WFH<70% NCHS) in Maradi, 2002-2005, show an increase every year during the hunger gap. In contrast, in 2007, throughout the period of the distribution, the incidence of severe acute malnutrition (MUAC<110 mm) remained at extremely low levels. Comparison of year-over-year admissions to the therapeutic feeding program shows that the 2007 blanket distribution had essentially the same flattening effect on the seasonal rise in admissions as the 2006 individualized treatment of almost 60,000 moderately wasted children. These results demonstrate the potential for distribution of fortified spreads to reduce the incidence of severe wasting in a large population of children 6-36 months of age. Although further information is needed on the cost-effectiveness of such distributions, these results highlight the importance of re-evaluating current nutritional strategies and international recommendations for high burden areas of childhood malnutrition.

  9. Strings and large scale magnetohydrodynamics

    CERN Document Server

    Olesen, P

    1995-01-01

    From computer simulations of magnetohydrodynamics one knows that a turbulent plasma becomes very intermittent, with the magnetic fields concentrated in thin flux tubes. This situation looks very "string-like", so we investigate whether strings could be solutions of the magnetohydrodynamics equations in the limit of infinite conductivity. We find that the induction equation is satisfied, and we discuss the Navier-Stokes equation (without viscosity) with the Lorentz force included. We argue that the string equations (with non-universal maximum velocity) should describe the large scale motion of narrow magnetic flux tubes, because of a large reparametrization (gauge) invariance of the magnetic and electric string fields.

  10. Japanese large-scale interferometers

    CERN Document Server

    Kuroda, K; Miyoki, S; Ishizuka, H; Taylor, C T; Yamamoto, K; Miyakawa, O; Fujimoto, M K; Kawamura, S; Takahashi, R; Yamazaki, T; Arai, K; Tatsumi, D; Ueda, A; Fukushima, M; Sato, S; Shintomi, T; Yamamoto, A; Suzuki, T; Saitô, Y; Haruyama, T; Sato, N; Higashi, Y; Uchiyama, T; Tomaru, T; Tsubono, K; Ando, M; Takamori, A; Numata, K; Ueda, K I; Yoneda, H; Nakagawa, K; Musha, M; Mio, N; Moriwaki, S; Somiya, K; Araya, A; Kanda, N; Telada, S; Sasaki, M; Tagoshi, H; Nakamura, T; Tanaka, T; Ohara, K

    2002-01-01

    The objective of the TAMA 300 interferometer was to develop advanced technologies for kilometre scale interferometers and to observe gravitational wave events in nearby galaxies. It was designed as a power-recycled Fabry-Perot-Michelson interferometer and was intended as a step towards a final interferometer in Japan. The present successful status of TAMA is presented. TAMA forms a basis for LCGT (large-scale cryogenic gravitational wave telescope), a 3 km scale cryogenic interferometer to be built in the Kamioka mine in Japan, implementing cryogenic mirror techniques. The plan of LCGT is schematically described along with its associated R and D.

  11. Models of large scale structure

    Energy Technology Data Exchange (ETDEWEB)

    Frenk, C.S. (Physics Dept., Univ. of Durham (UK))

    1991-01-01

    The ingredients required to construct models of the cosmic large scale structure are discussed. Input from particle physics leads to a considerable simplification by offering concrete proposals for the geometry of the universe, the nature of the dark matter and the primordial fluctuations that seed the growth of structure. The remaining ingredient is the physical interaction that governs dynamical evolution. Empirical evidence provided by an analysis of a redshift survey of IRAS galaxies suggests that gravity is the main agent shaping the large-scale structure. In addition, this survey implies large values of the mean cosmic density, Ω ≳ 0.5, and is consistent with a flat geometry if IRAS galaxies are somewhat more clustered than the underlying mass. Together with current limits on the density of baryons from Big Bang nucleosynthesis, this lends support to the idea of a universe dominated by non-baryonic dark matter. Results from cosmological N-body simulations evolved from a variety of initial conditions are reviewed. In particular, neutrino dominated and cold dark matter dominated universes are discussed in detail. Finally, it is shown that apparent periodicities in the redshift distributions in pencil-beam surveys arise frequently from distributions which have no intrinsic periodicity but are clustered on small scales. (orig.).

  12. A large-scale distribution of milk-based fortified spreads: evidence for a new approach in regions with high burden of acute malnutrition.

    Directory of Open Access Journals (Sweden)

    Isabelle Defourny

    Full Text Available BACKGROUND: There are 146 million underweight children in the developing world, which contribute to up to half of the world's child deaths. In high burden regions for malnutrition, the treatment of individual children is limited by available resources. Here, we evaluate a large-scale distribution of a nutritional supplement on the prevention of wasting. METHODS AND FINDINGS: A new ready-to-use food (RUF) was developed as a diet supplement for children under three. The intervention consisted of six monthly distributions of RUF during the 2007 hunger gap in a district of Maradi region, Niger, for approximately 60,000 children (length: 60-85 cm). At each distribution, all children over 65 cm had their Mid-Upper Arm Circumference (MUAC) recorded. Admission trends for severe wasting (WFH<70% NCHS) in Maradi, 2002-2005, show an increase every year during the hunger gap. In contrast, in 2007, throughout the period of the distribution, the incidence of severe acute malnutrition (MUAC<110 mm) remained at extremely low levels. Comparison of year-over-year admissions to the therapeutic feeding program shows that the 2007 blanket distribution had essentially the same flattening effect on the seasonal rise in admissions as the 2006 individualized treatment of almost 60,000 moderately wasted children. CONCLUSIONS: These results demonstrate the potential for distribution of fortified spreads to reduce the incidence of severe wasting in a large population of children 6-36 months of age. Although further information is needed on the cost-effectiveness of such distributions, these results highlight the importance of re-evaluating current nutritional strategies and international recommendations for high burden areas of childhood malnutrition.

  13. Comparative evaluation of seven different sample treatment approaches for large-scale multiclass sport drug testing in urine by liquid chromatography-mass spectrometry.

    Science.gov (United States)

    Domínguez-Romero, Juan C; García-Reyes, Juan F; Molina-Díaz, Antonio

    2014-09-26

    Sample preparation is a critical step in large-scale multiclass analysis such as sport drug testing. Due to the wide heterogeneity of the analytes and the complexity of the matrix, the selection of a correct sample preparation method is essential, seeking a compromise between good recoveries for most of the analytes and cleanliness of the extract. In the present work, seven sample preparation procedures based on solid-phase extraction (SPE) (with 5 different cartridges), liquid-liquid extraction (LLE) and sorbent-supported liquid extraction (SLE) were evaluated for multiclass sport drug testing in urine. The selected SPE sorbents were polymeric cartridges Agilent PLEXA™ and Oasis HLB™, mixed-mode cation and anion exchange cartridges Oasis MAX™ and MCX™, and C18 cartridges. LLE was performed using tert-butyl methyl ether and SLE was carried out using Agilent Chem Elut™ cartridges. To evaluate the proposed extraction procedures, a list of 189 compounds was selected as representative of different groups of doping agents, including 34 steroids, 14 glucocorticosteroids, 24 diuretics and masking agents, 11 stimulants, 9 beta-agonists, 16 beta-blockers, 6 Selective Estrogen Receptor Modulators (SERMs), 24 narcotics and 22 other drugs of abuse/sport drugs. Blank urine samples were spiked at two concentration levels, 2.5 and 25 μg L⁻¹, and extracted with the different extraction protocols (n=6). The analysis of the extracts was carried out by liquid chromatography electrospray time-of-flight mass spectrometry. The use of solid-phase extraction with polymer cartridges provided high recoveries for most of the analytes tested and was found to be the most suitable method for this type of application, given additional advantages such as low sample and solvent consumption along with increased automation and throughput.
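
    The recovery evaluation underlying such a comparison is straightforward to script. A minimal sketch (analyte names and numbers invented; n = 6 replicates per spiking level, as in the study design):

```python
import numpy as np

spiked = 2.5  # ug/L spiking level (the study also used 25 ug/L)
# measured concentrations for two hypothetical analytes, n = 6 replicates
measured = {
    "analyte_A": [2.1, 2.3, 2.2, 2.0, 2.4, 2.2],
    "analyte_B": [1.4, 1.5, 1.3, 1.6, 1.4, 1.5],
}

for analyte, reps in measured.items():
    reps = np.asarray(reps)
    recovery = reps.mean() / spiked * 100.0          # mean recovery, %
    rsd = reps.std(ddof=1) / reps.mean() * 100.0     # relative std. dev., %
    flag = "" if recovery >= 70.0 else "  <- low recovery"
    print(f"{analyte}: recovery {recovery:5.1f}%  RSD {rsd:4.1f}%{flag}")
```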

  14. Investigation of the relationship between very warm days in Romania and large-scale atmospheric circulation using multiple linear regression approach

    Science.gov (United States)

    Barbu, N.; Cuculeanu, V.; Stefan, S.

    2016-10-01

    The aim of this study is to investigate the relationship between the frequency of very warm days (TX90p) in Romania and large-scale atmospheric circulation for winter (December-February) and summer (June-August) between 1962 and 2010. To achieve this, two catalogues from COST733 Action were used to derive daily circulation types. Seasonal occurrence frequencies of the circulation types were calculated and have been utilized as predictors within the multiple linear regression model (MLRM) for the estimation of winter and summer TX90p values for 85 synoptic stations covering the entire Romania. A forward selection procedure has been utilized to find adequate predictor combinations, and those predictor combinations were tested for collinearity. The performance of the MLRMs has been quantified based on the explained variance. Furthermore, the leave-one-out cross-validation procedure was applied and the root-mean-squared error skill score was calculated at station level in order to obtain reliable evidence of MLRM robustness. From this analysis, it can be stated that the MLRM performance is higher in winter compared to summer. This is due to the annual cycle of incoming insolation and to the local factors such as orography and surface albedo variations. The MLRM performances exhibit distinct variations between regions, with high performance in wintertime for the eastern and southern part of the country and in summertime for the western part of the country. One can conclude that the MLRM generally captures the TX90p variability quite well and reveals the potential for statistical downscaling of TX90p values based on circulation types.
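
    A minimal sketch of this kind of MLRM workflow, with synthetic stand-ins for the circulation-type frequencies and the TX90p series (the real study uses COST733 catalogues and 85 stations):

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import LeaveOneOut, cross_val_predict

# Synthetic stand-ins: X holds seasonal occurrence frequencies of circulation
# types (the predictors), y the seasonal TX90p values at one station.
rng = np.random.default_rng(0)
n_seasons, n_types = 49, 9            # 1962-2010 seasons, 9 types (illustrative)
X = rng.random((n_seasons, n_types))
y = X @ rng.normal(size=n_types) + rng.normal(scale=0.5, size=n_seasons)

model = LinearRegression().fit(X, y)
print("explained variance R^2:", round(model.score(X, y), 3))

# leave-one-out cross-validation and an RMSE skill score against climatology
pred = cross_val_predict(LinearRegression(), X, y, cv=LeaveOneOut())
rmse = np.sqrt(np.mean((y - pred) ** 2))
rmse_clim = np.sqrt(np.mean((y - y.mean()) ** 2))
print("RMSE skill score:", round(1.0 - rmse / rmse_clim, 3))
```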

  15. Safeguards instruments for Large-Scale Reprocessing Plants

    Energy Technology Data Exchange (ETDEWEB)

    Hakkila, E.A. [Los Alamos National Lab., NM (United States); Case, R.S.; Sonnier, C. [Sandia National Labs., Albuquerque, NM (United States)

    1993-06-01

    Between 1987 and 1992 a multi-national forum known as LASCAR (Large Scale Reprocessing Plant Safeguards) met to assist the IAEA in development of effective and efficient safeguards for large-scale reprocessing plants. The US provided considerable input for safeguards approaches and instrumentation. This paper reviews and updates instrumentation of importance in measuring plutonium and uranium in these facilities.

  16. Economically viable large-scale hydrogen liquefaction

    Science.gov (United States)

    Cardella, U.; Decker, L.; Klein, H.

    2017-02-01

    The liquid hydrogen demand, particularly driven by clean energy applications, will rise in the near future. As industrial large scale liquefiers will play a major role within the hydrogen supply chain, production capacity will have to increase by a multiple of today's typical sizes. The main goal is to reduce the total cost of ownership for these plants by increasing energy efficiency with innovative and simple process designs, optimized for capital expenditure. New concepts must ensure a manageable plant complexity and flexible operability. In the phase of process development and selection, a dimensioning of key equipment for large scale liquefiers, such as turbines and compressors as well as heat exchangers, must be performed iteratively to ensure technological feasibility and maturity. Further critical aspects related to hydrogen liquefaction, e.g. fluid properties, ortho-para hydrogen conversion, and coldbox configuration, must be analysed in detail. This paper provides an overview on the approach, challenges and preliminary results in the development of efficient as well as economically viable concepts for large-scale hydrogen liquefaction.

  17. Large-Scale Galaxy Bias

    CERN Document Server

    Desjacques, Vincent; Schmidt, Fabian

    2016-01-01

    This review presents a comprehensive overview of galaxy bias, that is, the statistical relation between the distribution of galaxies and matter. We focus on large scales where cosmic density fields are quasi-linear. On these scales, the clustering of galaxies can be described by a perturbative bias expansion, and the complicated physics of galaxy formation is absorbed by a finite set of coefficients of the expansion, called bias parameters. The review begins with a pedagogical proof of this very important result, which forms the basis of the rigorous perturbative description of galaxy clustering, under the assumptions of General Relativity and Gaussian, adiabatic initial conditions. Key components of the bias expansion are all leading local gravitational observables, which includes the matter density but also tidal fields and their time derivatives. We hence expand the definition of local bias to encompass all these contributions. This derivation is followed by a presentation of the peak-background split in i...
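
    For orientation, the lowest orders of such a bias expansion take the schematic form below (conventions vary between works; this is an illustrative truncation, not the review's complete result):

```latex
\delta_g(\mathbf{x},\tau)\;=\;b_1\,\delta\;+\;\frac{b_2}{2}\,\delta^2\;+\;b_{K^2}\,(K_{ij})^2\;+\;\dots,
\qquad
K_{ij}\;\equiv\;\left(\frac{\partial_i\partial_j}{\nabla^2}-\frac{\delta_{ij}}{3}\right)\delta,
```

    where δ is the matter density contrast and K_ij the tidal field; higher orders bring in further powers of these local gravitational observables and their time derivatives.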

  18. Large scale biomimetic membrane arrays

    DEFF Research Database (Denmark)

    Hansen, Jesper Søndergaard; Perry, Mark; Vogel, Jörg

    2009-01-01

    To establish planar biomimetic membranes across large scale partition aperture arrays, we created a disposable single-use horizontal chamber design that supports combined optical-electrical measurements. Functional lipid bilayers could easily and efficiently be established across CO2 laser micro-structured 8 x 8 aperture partition arrays with average aperture diameters of 301 +/- 5 mu m. We addressed the electro-physical properties of the lipid bilayers established across the micro-structured scaffold arrays by controllable reconstitution of biotechnologically and physiologically relevant membrane peptides and proteins. Next, we tested the scalability of the biomimetic membrane design by establishing lipid bilayers in rectangular 24 x 24 and hexagonal 24 x 27 aperture arrays, respectively. The results presented show that the design is suitable for further developments of sensitive biosensor assays...

  19. Testing gravity on Large Scales

    Directory of Open Access Journals (Sweden)

    Raccanelli Alvise

    2013-09-01

    Full Text Available We show how it is possible to test general relativity and different models of gravity via Redshift-Space Distortions using forthcoming cosmological galaxy surveys. However, the theoretical models currently used to interpret the data often rely on simplifications that make them not accurate enough for precise measurements. We will discuss improvements to the theoretical modeling at very large scales, including wide-angle and general relativistic corrections; we then show that for wide and deep surveys those corrections need to be taken into account if we want to measure the growth of structures at a few percent level, and so perform tests on gravity, without introducing systematic errors. Finally, we report the results of some recent cosmological model tests carried out using those precise models.

  20. Conference on Large Scale Optimization

    CERN Document Server

    Hearn, D; Pardalos, P

    1994-01-01

    On February 15-17, 1993, a conference on Large Scale Optimization, hosted by the Center for Applied Optimization, was held at the University of Florida. The conference was supported by the National Science Foundation, the U.S. Army Research Office, and the University of Florida, with endorsements from SIAM, MPS, ORSA and IMACS. Forty one invited speakers presented papers on mathematical programming and optimal control topics with an emphasis on algorithm development, real world applications and numerical results. Participants from Canada, Japan, Sweden, The Netherlands, Germany, Belgium, Greece, and Denmark gave the meeting an important international component. Attendees also included representatives from IBM, American Airlines, US Air, United Parcel Service, AT&T Bell Labs, Thinking Machines, Army High Performance Computing Research Center, and Argonne National Laboratory. In addition, the NSF sponsored attendance of thirteen graduate students from universities in the United States and abroad.

  1. Large Scale Correlation Clustering Optimization

    CERN Document Server

    Bagon, Shai

    2011-01-01

    Clustering is a fundamental task in unsupervised learning. The focus of this paper is the Correlation Clustering functional, which combines positive and negative affinities between the data points. The contribution of this paper is twofold: (i) we provide a theoretical analysis of the functional; (ii) we present new optimization algorithms which can cope with large scale problems (>100K variables) that are infeasible using existing methods. Our theoretical analysis provides a probabilistic generative interpretation for the functional, and justifies its intrinsic "model-selection" capability. Furthermore, we draw an analogy between optimizing this functional and the well known Potts energy minimization. This analogy allows us to suggest several new optimization algorithms, which exploit the intrinsic "model-selection" capability of the functional to automatically recover the underlying number of clusters. We compare our algorithms to existing methods on both synthetic and real data. In addition we suggest two new applications t...
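
    For intuition, here is a toy greedy local search on a Potts-like form of the functional. It illustrates the objective and the emergent number of clusters only; it is not one of the paper's algorithms.

```python
import numpy as np

def correlation_clustering(W, n_sweeps=20):
    """Greedy local search on the Potts-like energy E(l) = -sum_{l_i=l_j} w_ij.
    Mixed-sign affinities let the number of clusters emerge from the energy."""
    n = W.shape[0]
    labels = np.arange(n)                      # start from singletons
    for _ in range(n_sweeps):
        changed = False
        for i in range(n):
            best_c, best_gain = labels[i], 0.0
            for c in np.unique(labels):
                mask = labels == c
                mask[i] = False
                gain = W[i, mask].sum()        # total affinity of i to cluster c
                if gain > best_gain:
                    best_c, best_gain = c, gain
            if best_gain > 0:
                new_c = best_c                 # join the most attractive cluster
            elif (labels == labels[i]).sum() > 1:
                new_c = n + i                  # detach into a fresh singleton
            else:
                new_c = labels[i]              # already alone: keep
            if new_c != labels[i]:
                labels[i], changed = new_c, True
        if not changed:
            break
    return np.unique(labels, return_inverse=True)[1]

rng = np.random.default_rng(0)
A = rng.normal(size=(30, 30)); W = (A + A.T) / 2.0   # symmetric mixed-sign affinities
np.fill_diagonal(W, 0.0)
print(correlation_clustering(W))
```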

  2. Uncovering Substantive Patterns in Student Responses in International Large-Scale Assessments--Comparing a Latent Class to a Manifest DIF Approach

    Science.gov (United States)

    Oliveri, María Elena; Ercikan, Kadriye; Zumbo, Bruno D.; Lawless, René

    2014-01-01

    In this study, we contrast results from two differential item functioning (DIF) approaches (manifest and latent class) in terms of the number and sources of items identified as DIF, using data from an international reading assessment. The latter approach yielded three latent classes, presenting evidence of heterogeneity in examinee response…

  3. A new approach to e-commerce customs control in China: Integrated supply chain - A practical application towards large-scale data pipeline implementation

    NARCIS (Netherlands)

    R. Hu (Rong); Yao-Hua Tan (Yao Hua); F. Heijmann (Frank)

    2016-01-01

    Developments in e-commerce are presenting new challenges in customs control and China's customs agency is developing a new approach to how it addresses these challenges. China Customs has adopted an integrated supply chain approach to secure international trade lanes and to facilitate legitimate trade

  4. Large scale cluster computing workshop

    Energy Technology Data Exchange (ETDEWEB)

    Dane Skow; Alan Silverman

    2002-12-23

    Recent revolutions in computer hardware and software technologies have paved the way for the large-scale deployment of clusters of commodity computers to address problems heretofore the domain of tightly coupled SMP processors. Near-term projects within High Energy Physics and other computing communities will deploy clusters of scale 1000s of processors and be used by 100s to 1000s of independent users. This will expand the reach in both dimensions by an order of magnitude from the current successful production facilities. The goals of this workshop were: (1) to determine what tools exist which can scale up to the cluster sizes foreseen for the next generation of HENP experiments (several thousand nodes) and, by implication, to identify areas where some investment of money or effort is likely to be needed; (2) to compare and record experiences gained with such tools; (3) to produce a practical guide to all stages of planning, installing, building and operating a large computing cluster in HENP; (4) to identify and connect groups with similar interests within HENP and the larger clustering community.

  5. Large Scale Magnetostrictive Valve Actuator

    Science.gov (United States)

    Richard, James A.; Holleman, Elizabeth; Eddleman, David

    2008-01-01

    Marshall Space Flight Center's Valves, Actuators and Ducts Design and Development Branch developed a large scale magnetostrictive valve actuator. The potential advantages of this technology are faster, more efficient valve actuators that consume less power, provide precise position control and deliver higher flow rates than conventional solenoid valves. Magnetostrictive materials change dimensions when a magnetic field is applied; this property is referred to as magnetostriction. Magnetostriction is caused by the alignment of the magnetic domains in the material's crystalline structure with the applied magnetic field lines. Typically, the material changes shape by elongating in the axial direction and constricting in the radial direction, resulting in no net change in volume. All hardware and testing is complete. This paper discusses the potential applications of the technology; gives an overview of the as-built actuator design; describes problems that were uncovered during development testing; reviews test data and evaluates weaknesses of the design; and discusses areas of improvement for future work. This actuator holds promise of a low power, high load, proportionally controlled actuator for valves requiring 440 to 1500 newtons load.

  6. Large-scale pool fires

    Directory of Open Access Journals (Sweden)

    Steinhaus Thomas

    2007-01-01

    Full Text Available A review of research into the burning behavior of large pool fires and fuel spill fires is presented. The features which distinguish such fires from smaller pool fires are mainly associated with the fire dynamics at low source Froude numbers and the radiative interaction with the fire source. In hydrocarbon fires, higher soot levels at increased diameters result in radiation blockage effects around the perimeter of large fire plumes; this yields lower emissive powers and a drastic reduction in the radiative loss fraction. Whilst there are simplifying factors with these phenomena, arising from the fact that soot yield can saturate, there are other complications deriving from the intermittency of the behavior, with luminous regions of efficient combustion appearing randomly on the outer surface of the fire according to the turbulent fluctuations in the fire plume. Knowledge of the fluid flow instabilities, which lead to the formation of large eddies, is also key to understanding the behavior of large-scale fires. Here, modeling tools can be effectively exploited in order to investigate the fluid flow phenomena, including RANS- and LES-based computational fluid dynamics codes. The latter are well-suited to representation of the turbulent motions, but a number of challenges remain with their practical application. Massively-parallel computational resources are likely to be necessary in order to be able to adequately address the complex coupled phenomena to the level of detail that is necessary.

  7. Handbook of Large-Scale Random Networks

    CERN Document Server

    Bollobas, Bela; Miklos, Dezso

    2008-01-01

    Covers various aspects of large-scale networks, including mathematical foundations and rigorous results of random graph theory, modeling and computational aspects of large-scale networks, as well as areas in physics, biology, neuroscience, sociology and technical areas

  8. On the spot ethical decision-making in CBRN (chemical, biological, radiological or nuclear event) response: approaches to on the spot ethical decision-making for first responders to large-scale chemical incidents.

    Science.gov (United States)

    Rebera, Andrew P; Rafalowski, Chaim

    2014-09-01

    First responders to chemical, biological, radiological, or nuclear (CBRN) events face decisions having significant human consequences. Some operational decisions are supported by standard operating procedures, yet these may not suffice for ethical decisions. Responders will be forced to weigh their options, factoring in contextual peculiarities; they will require guidance on how they can approach novel (indeed unique) ethical problems: they need strategies for "on the spot" ethical decision making. The primary aim of this paper is to examine how first responders should approach on the spot ethical decision-making amid the stress and uncertainty of a CBRN event. Drawing on the long-term professional CBRN experience of one of the authors, this paper sets out a series of practical ethical dilemmas potentially arising in the context of a large-scale chemical incident. We propose a broadly consequentialist approach to on the spot ethical decision-making, but one which incorporates ethical values and rights as "side-constraints".

  9. Robin Hood: A cost-efficient two-stage approach to large-scale simultaneous inference with non-homogeneous sparse effects.

    Science.gov (United States)

    Pecanka, Jakub; Goeman, Jelle

    2017-04-25

    A classical approach to experimental design in many scientific fields is to first gather all of the data and then analyze it in a single analysis. It has been recognized that in many areas such practice leaves substantial room for improvement in terms of the researcher's ability to identify relevant effects, in terms of cost efficiency, or both. Considerable attention has been paid in recent years to multi-stage designs, in which the user alternates between data collection and analysis and thereby sequentially reduces the size of the problem. However, the focus has generally been towards designs that require a hypothesis to be tested in every single stage before it can be declared as rejected by the procedure. Such procedures are well-suited for homogeneous effects, i.e. effects of (almost) equal sizes; with effects of varying size, however, a procedure that permits rejection at interim stages is much more suitable. Here we present precisely such a multi-stage testing procedure, called Robin Hood. We show that with heterogeneous effects our method substantially improves on the existing multi-stage procedures, with an essentially zero efficiency trade-off in the homogeneous effect realm, which makes it especially useful in areas such as genetics, where heterogeneous effects are common. Our method improves on existing approaches in a number of ways, including a novel way of performing two-sided testing in a multi-stage procedure with increased power for detecting small effects.
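
    To fix ideas, here is a generic two-stage procedure of this flavor, not the actual Robin Hood method: hypotheses with extreme stage-1 p-values are rejected at the interim analysis, a middle band is carried to independent stage-2 data, and the rest are dropped early. All thresholds are arbitrary illustrations.

```python
import numpy as np

def two_stage(p1, p2, alpha=0.05, reject_now=1e-4, carry_on=0.05):
    m = len(p1)
    early = p1 <= reject_now / m              # interim rejections: large effects
    follow = (~early) & (p1 <= carry_on)      # promising: promoted to stage 2
    final = np.zeros(m, dtype=bool)
    k = follow.sum()                          # fewer survivors -> milder correction
    if k:
        final[follow] = p2[follow] <= alpha / k   # Bonferroni on survivors
    return early | final                      # everything else was dropped early

rng = np.random.default_rng(1)
p1, p2 = rng.random(10_000), rng.random(10_000)   # null-only mock p-values
print(two_stage(p1, p2).sum(), "rejections")
```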

  10. A divide and conquer approach for construction of large-scale signaling networks from PPI and RNAi data using linear programming.

    Science.gov (United States)

    Ozsoy, Oyku Eren; Can, Tolga

    2013-01-01

    Inference of the topology of signaling networks from perturbation experiments is a challenging problem. Recently, the inference problem has been formulated as a reference network editing problem, and it has been shown that finding the minimum number of edit operations on a reference network to comply with perturbation experiments is an NP-complete problem. In this paper, we propose an integer linear programming (ILP) model for reconstruction of signaling networks from RNAi data and a reference network. The ILP model guarantees the optimal solution; however, it is practical only for small signaling networks of size 10-15 genes due to computational complexity. To scale to large signaling networks, we propose a divide and conquer-based heuristic, in which a given reference network is divided into smaller subnetworks that are solved separately and the solutions are merged together to form the solution for the large network. We validate our proposed approach on real and synthetic data sets, and comparison with the state of the art shows that our proposed approach is able to scale better for large networks while attaining similar or better biological accuracy.
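
    The divide-and-conquer scaffold can be expressed compactly. In the sketch below the partitioning uses off-the-shelf community detection and `exact_solver` is a placeholder for the ILP step; the names and the merging rule are illustrative assumptions, not the paper's implementation.

```python
import networkx as nx
from networkx.algorithms.community import greedy_modularity_communities

def solve_large(reference, rnai_data, exact_solver, max_size=15):
    """Partition the reference network, solve each piece exactly, merge."""
    parts = greedy_modularity_communities(reference.to_undirected())
    merged = nx.DiGraph()
    for nodes in parts:
        sub = reference.subgraph(nodes).copy()
        # a full implementation would re-partition pieces still above max_size
        edited = exact_solver(sub, rnai_data)   # exact ILP on the small piece
        merged.update(edited)                   # merge edited subnetworks
    return merged
```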

  11. Conundrum of the Large Scale Streaming

    CERN Document Server

    Malm, T M

    1999-01-01

    The etiology of the large scale peculiar velocity (large scale streaming motion) of clusters seems increasingly tenuous within the context of the gravitational instability hypothesis. Are there any alternative testable models that could account for such large scale streaming of clusters?

  12. Large-scale cloning of human chromosome 2-specific yeast artificial chromosomes (YACs) using an interspersed repetitive sequences (IRS)-PCR approach.

    Science.gov (United States)

    Liu, J; Stanton, V P; Fujiwara, T M; Wang, J X; Rezonzew, R; Crumley, M J; Morgan, K; Gros, P; Housman, D; Schurr, E

    1995-03-20

    We report here an efficient approach to the establishment of extended YAC contigs on human chromosome 2 by using an interspersed repetitive sequences (IRS)-PCR-based screening strategy for YAC DNA pools. Genomic DNA was extracted from 1152 YAC pools comprised of 55,296 YACs mostly derived from the CEPH Mark I library. Alu-element-mediated PCR was performed for each pool, and amplification products were spotted on hybridization membranes (IRS filters). IRS probes for the screening of the IRS filters were obtained by Alu-element-mediated PCR. Of 708 distinct probes obtained from chromosome 2-specific somatic cell hybrids, 85% were successfully used for library screening. Similarly, 80% of 80 YAC walking probes were successfully used for library screening. Each probe detected an average of 6.6 YACs, which is in good agreement with the 7- to 7.5-fold genome coverage provided by the library. In a preliminary analysis, we have identified 188 YAC groups that are the basis for building contigs for chromosome 2. The coverage of the telomeric half of chromosome 2q was considered to be good since 31 of 34 microsatellites and 22 of 23 expressed sequence tags that were chosen from chromosome region 2q13-q37 were contained in a chromosome 2 YAC sublibrary generated by our experiments. We have identified a minimum of 1610 distinct chromosome 2-specific YACs, which will be a valuable asset for the physical mapping of the second largest human chromosome.
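
    The quoted agreement can be checked with standard clone-library arithmetic (Lander-Waterman-style reasoning, not taken from the paper): with c-fold coverage, the number of YACs overlapping a probe locus is roughly Poisson distributed with mean c.

```python
import math

# expected screening yield per probe under c-fold clone coverage
for c in (7.0, 7.5):
    print(f"{c:.1f}x coverage: mean hits/probe = {c:.1f}, "
          f"P(locus not covered) = {math.exp(-c):.5f}")
```

    The observed mean of 6.6 hits per probe, just below the nominal 7-7.5, is consistent with a screening efficiency somewhat below 100%.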

  13. Large-scale cloning of human chromosome 2-specific yeast artificial chromosomes (YACs) using an interspersed repetitive sequences (IRS)-PCR approach

    Energy Technology Data Exchange (ETDEWEB)

    Liu, J.; Rezonzew, R. [McGill Centre for the Study of Host Resistance, Montreal, Quebec (Canada)]|[McGill Univ., Montreal, Quebec (Canada); Stanton, V.P. Jr. [Massachusetts Institute of Technology, Cambridge, MA (United States)] [and others

    1995-03-20

    We report here an efficient approach to the establishment of extended YAC contigs on human chromosome 2 by using an interspersed repetitive sequences (IRS)-PCR-based screening strategy for YAC DNA pools. Genomic DNA was extracted from 1152 YAC pools comprised of 55,296 YACs mostly derived from the CEPH Mark I library. Alu-element-mediated PCR was performed for each pool, and amplification products were spotted on hybridization membranes (IRS filters). IRS probes for the screening of the IRS filters were obtained by Alu-element-mediated PCR. Of 708 distinct probes obtained from chromosome 2-specific somatic cell hybrids, 85% were successfully used for library screening. Similarly, 80% of 80 YAC walking probes were successfully used for library screening. Each probe detected an average of 6.6 YACs, which is in good agreement with the 7- to 7.5-fold genome coverage provided by the library. In a preliminary analysis, we have identified 188 YAC groups that are the basis for building contigs for chromosome 2. The coverage of the telomeric half of chromosome 2q was considered to be good since 31 of 34 microsatellites and 22 of 23 expressed sequence tags that were chosen from chromosome region 2q13-q37 were contained in a chromosome 2 YAC sublibrary generated by our experiments. We have identified a minimum of 1610 distinct chromosome 2-specific YACs, which will be a valuable asset for the physical mapping of the second largest human chromosome. 81 refs., 8 figs., 3 tabs.

  14. Efficient algorithms for collaborative decision making for large scale settings

    DEFF Research Database (Denmark)

    Assent, Ira

    2011-01-01

    Collaborative decision making is a successful approach in settings where data analysis and querying can be done interactively. In large scale systems with huge data volumes or many users, collaboration is often hindered by impractical runtimes. Existing work on improving collaboration focuses ... to bring about more effective and more efficient retrieval systems that support the users' decision making process. We sketch promising research directions for more efficient algorithms for collaborative decision making, especially for large scale systems.

  15. Use of large-scale transient stresses and a coupled adjoint-sensitivity/kriging approach to calibrate a groundwater-flow model at the WIPP (Waste Isolation Pilot Plant) site

    Energy Technology Data Exchange (ETDEWEB)

    Beauheim, R.L. (Sandia National Labs., Albuquerque, NM (USA)); LaVenue, A.M. (INTERA, Inc., Albuquerque, NM (USA))

    1990-01-01

    A coupled adjoint-sensitivity/kriging approach was used to calibrate a groundwater-flow model to 10 years of human-induced transient hydraulic stresses at the WIPP site in New Mexico, USA. Transmissivity data obtained from local-scale hydraulic tests were first kriged to define an initial transmissivity distribution. Steady-state model calibration was then performed employing adjoint-sensitivity techniques to identify regions where transmissivity changes would improve the model fit to the observed steady-state heads. Subsequent transient calibration to large-scale hydraulic stresses created by shaft construction and long-term pumping tests aided in the identification of smaller scale features not detected during steady-state calibration. This transient calibration resulted in a much more reliable and defendable model for use in performance-assessment calculations. 7 refs., 6 figs.
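
    The first step of the workflow, kriging point transmissivity estimates onto a model grid, can be sketched in a few lines. This is a generic ordinary-kriging routine with an exponential covariance whose sill and range are hypothetical; in practice they come from a fitted variogram.

```python
import numpy as np

def ordinary_kriging(xy, z, xy0, sill=1.0, rng_=5000.0):
    """Minimal ordinary kriging with an exponential covariance model.
    xy: (n, 2) well locations, z: log-transmissivities, xy0: target point."""
    cov = lambda h: sill * np.exp(-h / rng_)
    d = np.linalg.norm(xy[:, None, :] - xy[None, :, :], axis=2)
    n = len(z)
    A = np.ones((n + 1, n + 1))               # kriging system with a Lagrange
    A[:n, :n] = cov(d)                        # multiplier row/column enforcing
    A[n, n] = 0.0                             # unbiasedness (weights sum to 1)
    b = np.ones(n + 1)
    b[:n] = cov(np.linalg.norm(xy - xy0, axis=1))
    w = np.linalg.solve(A, b)
    return w[:n] @ z, sill - w @ b            # estimate and kriging variance

rng = np.random.default_rng(2)
xy = rng.uniform(0.0, 1e4, size=(25, 2))      # 25 mock test locations (m)
z = rng.normal(size=25)                       # mock log10-transmissivity data
est, var = ordinary_kriging(xy, z, np.array([5e3, 5e3]))
print(f"kriged estimate {est:.3f}, kriging variance {var:.3f}")
```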

  16. Large-scale simulations of reionization

    Energy Technology Data Exchange (ETDEWEB)

    Kohler, Katharina; /JILA, Boulder /Fermilab; Gnedin, Nickolay Y.; /Fermilab; Hamilton, Andrew J.S.; /JILA, Boulder

    2005-11-01

    We use cosmological simulations to explore the large-scale effects of reionization. Since reionization is a process that involves a large dynamic range--from galaxies to rare bright quasars--we need to be able to cover a significant volume of the universe in our simulation without losing the important small scale effects from galaxies. Here we have taken an approach that uses clumping factors derived from small scale simulations to approximate the radiative transfer on the sub-cell scales. Using this technique, we can cover a simulation size up to 1280 h⁻¹ Mpc with 10 h⁻¹ Mpc cells. This allows us to construct synthetic spectra of quasars similar to observed spectra of SDSS quasars at high redshifts and compare them to the observational data. These spectra can then be analyzed for HII region sizes, the presence of the Gunn-Peterson trough, and the Lyman-α forest.
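
    The clumping-factor idea reduces to a one-line correction: the small-scale simulations supply C = <n²>/<n>², which multiplies the recombination rate computed from cell-averaged densities. A toy check with a mock lognormal density field (values illustrative):

```python
import numpy as np

# Mock sub-cell densities: for a lognormal field with sigma = 1 the analytic
# clumping factor is C = exp(sigma^2) ~ 2.72, a useful sanity check.
rng = np.random.default_rng(3)
n_H = rng.lognormal(mean=0.0, sigma=1.0, size=100_000)

C = (n_H**2).mean() / n_H.mean()**2        # C = <n^2> / <n>^2
alpha_B = 2.6e-13                          # case-B recombination coefficient
                                           # at ~1e4 K, in cm^3 s^-1
recomb_rate = C * alpha_B * n_H.mean()**2  # effective rate on the coarse cell
print(f"clumping factor C = {C:.2f}")
```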

  17. Grid sensitivity capability for large scale structures

    Science.gov (United States)

    Nagendra, Gopal K.; Wallerstein, David V.

    1989-01-01

    The considerations and the resultant approach used to implement design sensitivity capability for grids into a large scale, general purpose finite element system (MSC/NASTRAN) are presented. The design variables are grid perturbations with a rather general linking capability. Moreover, shape and sizing variables may be linked together. The design is general enough to facilitate geometric modeling techniques for generating design variable linking schemes in an easy and straightforward manner. Test cases have been run and validated by comparison with the overall finite difference method. The linking of a design sensitivity capability for shape variables in MSC/NASTRAN with an optimizer would give designers a powerful, automated tool to carry out practical optimization design of real life, complicated structures.
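
    The validation strategy, comparing analytic design sensitivities against overall finite differences, is easy to illustrate on a toy response function (the functions below are stand-ins, not MSC/NASTRAN quantities):

```python
import numpy as np

# `response` plays the role of a structural response, `analytic_grad` the
# role of the design-sensitivity module's output for the design variables.
def response(x):
    return x[0]**2 * x[1] + np.sin(x[1])

def analytic_grad(x):
    return np.array([2.0 * x[0] * x[1], x[0]**2 + np.cos(x[1])])

x = np.array([1.3, 0.7])
h = 1e-6
fd = np.array([(response(x + h * e) - response(x - h * e)) / (2.0 * h)
               for e in np.eye(len(x))])          # central differences
print("max |analytic - finite difference|:", np.abs(analytic_grad(x) - fd).max())
```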

  18. Unfolding large-scale maps.

    Science.gov (United States)

    Jenkins, Glyn

    2003-12-01

    This is an account of the development and use of genetic maps, from humble beginnings at the hands of Thomas Hunt Morgan to the sophistication of genome sequencing. The review charts the emergence of molecular marker maps exploiting DNA polymorphism, the renaissance of cytogenetics through the use of fluorescence in situ hybridisation, and the discovery and isolation of genes by map-based cloning. The historical significance of the sequencing of DNA prefaces a section describing the sequencing of genomes, the ascendancy of particular model organisms, and the utility and limitations of comparative genomic and functional genomic approaches to further our understanding of the control of biological processes. Emphasis is given throughout the treatise as to how the structure and biological behaviour of the DNA molecule underpin the technological development and biological applications of maps.

  19. Supporting large-scale computational science

    Energy Technology Data Exchange (ETDEWEB)

    Musick, R

    1998-10-01

    A study has been carried out to determine the feasibility of using commercial database management systems (DBMSs) to support large-scale computational science. Conventional wisdom in the past has been that DBMSs are too slow for such data. Several events over the past few years have muddied the clarity of this mindset: (1) several commercial DBMS systems have demonstrated storage and ad-hoc query access to Terabyte data sets; (2) several large-scale science teams, such as EOSDIS [NAS91], high energy physics [MM97] and human genome [Kin93], have adopted (or make frequent use of) commercial DBMS systems as the central part of their data management scheme; (3) several major DBMS vendors have introduced their first object-relational products (ORDBMSs), which have the potential to support large, array-oriented data; (4) in some cases, performance is a moot issue. This is true in particular if the performance of legacy applications is not reduced while new, albeit slow, capabilities are added to the system. The basic assessment is still that DBMSs do not scale to large computational data. However, many of the reasons have changed, and there is an expiration date attached to that prognosis. This document expands on this conclusion, identifies the advantages and disadvantages of various commercial approaches, and describes the studies carried out in exploring this area. The document is meant to be brief, technical and informative, rather than a motivational pitch. The conclusions within are very likely to become outdated within the next 5-7 years, as market forces will have a significant impact on the state of the art in scientific data management over the next decade.

  1. Large Scale Computations in Air Pollution Modelling

    DEFF Research Database (Denmark)

    Zlatev, Z.; Brandt, J.; Builtjes, P. J. H.

    Proceedings of the NATO Advanced Research Workshop on Large Scale Computations in Air Pollution Modelling, Sofia, Bulgaria, 6-10 July 1998.

  2. Large scale mechanical metamaterials as seismic shields

    Science.gov (United States)

    Miniaci, Marco; Krushynska, Anastasiia; Bosia, Federico; Pugno, Nicola M.

    2016-08-01

    Earthquakes represent one of the most catastrophic natural events affecting mankind. At present, a universally accepted risk mitigation strategy for seismic events remains to be proposed. Most approaches are based on vibration isolation of structures rather than on the remote shielding of incoming waves. In this work, we propose a novel approach to the problem and discuss the feasibility of a passive isolation strategy for seismic waves based on large-scale mechanical metamaterials, including, for the first time, numerical analysis of both surface and guided waves and of soil dissipation effects, adopting full 3D simulations. The study focuses on realistic structures that can be effective in frequency ranges of interest for seismic waves, and optimal design criteria are provided, exploring different metamaterial configurations, combining phononic crystals and locally resonant structures, and different ranges of mechanical properties. Dispersion analysis and full-scale 3D transient wave transmission simulations are carried out on finite size systems to assess the seismic wave amplitude attenuation in realistic conditions. Results reveal that both surface and bulk seismic waves can be considerably attenuated, making this strategy viable for the protection of civil structures against seismic risk. The proposed remote shielding approach could open up new perspectives in the field of seismology and in related areas of low-frequency vibration damping or blast protection.
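
    As a feel for the dispersion analysis involved, the classical two-layer transfer-matrix relation already exhibits low-frequency band gaps for strongly mismatched layers. Material values below are illustrative stand-ins, not the paper's designs:

```python
import numpy as np

# 1D two-layer unit cell treated with the classical transfer-matrix relation
#   cos(qL) = cos(k1 d1) cos(k2 d2) - (Z1/Z2 + Z2/Z1)/2 sin(k1 d1) sin(k2 d2)
rho1, c1, d1 = 2400.0, 3000.0, 2.0   # stiff layer: kg/m^3, m/s, m
rho2, c2, d2 = 1200.0, 200.0, 0.5    # soft layer
Z1, Z2 = rho1 * c1, rho2 * c2        # acoustic impedances

f = np.linspace(0.1, 100.0, 5000)    # Hz, covering the seismic range
w = 2.0 * np.pi * f
rhs = (np.cos(w / c1 * d1) * np.cos(w / c2 * d2)
       - 0.5 * (Z1 / Z2 + Z2 / Z1) * np.sin(w / c1 * d1) * np.sin(w / c2 * d2))
gap = np.abs(rhs) > 1.0              # no real Bloch wavenumber: band gap
if gap.any():
    print(f"first band gap opens near {f[gap][0]:.1f} Hz")
```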

  3. Participatory Design of Large-Scale Information Systems

    DEFF Research Database (Denmark)

    Simonsen, Jesper; Hertzum, Morten

    2008-01-01

    In this article we discuss how to engage in large-scale information systems development by applying a participatory design (PD) approach that acknowledges the unique situated work practices conducted by the domain experts of modern organizations. We reconstruct the iterative prototyping approach...

  4. Large scale network-centric distributed systems

    CERN Document Server

    Sarbazi-Azad, Hamid

    2014-01-01

    A highly accessible reference offering a broad range of topics and insights on large scale network-centric distributed systems. Evolving from the fields of high-performance computing and networking, large scale network-centric distributed systems continue to grow as one of the most important topics in computing and communication and many interdisciplinary areas. Dealing with both wired and wireless networks, this book focuses on the design and performance issues of such systems. Large Scale Network-Centric Distributed Systems provides in-depth coverage ranging from ground-level hardware issues...

  5. Network robustness under large-scale attacks

    CERN Document Server

    Zhou, Qing; Liu, Ruifang; Cui, Shuguang

    2014-01-01

    Network Robustness under Large-Scale Attacks provides the analysis of network robustness under attacks, with a focus on large-scale correlated physical attacks. The book begins with a thorough overview of the latest research and techniques to analyze the network responses to different types of attacks over various network topologies and connection models. It then introduces a new large-scale physical attack model coined as area attack, under which a new network robustness measure is introduced and applied to study the network responses. With this book, readers will learn the necessary tools to evaluate how a complex network responds to random and possibly correlated attacks.

  6. Generation Expansion Planning Considering Integrating Large-scale Wind Generation

    DEFF Research Database (Denmark)

    Zhang, Chunyu; Ding, Yi; Østergaard, Jacob

    2013-01-01

    Generation expansion planning (GEP) is the problem of finding the optimal strategy to plan the construction of new generation while satisfying technical and economic constraints. In the deregulated and competitive environment, large-scale integration of wind generation (WG) in power systems has...... necessitated the inclusion of more innovative and sophisticated approaches in power system investment planning. A bi-level generation expansion planning approach considering large-scale wind generation was proposed in this paper. The first phase is investment decision, while the second phase is production...

  7. Large Scale Metal Additive Techniques Review

    Energy Technology Data Exchange (ETDEWEB)

    Nycz, Andrzej [ORNL; Adediran, Adeola I [ORNL; Noakes, Mark W [ORNL; Love, Lonnie J [ORNL

    2016-01-01

    In recent years additive manufacturing has made long strides toward becoming a mainstream production technology. Particularly strong progress has been made in large-scale polymer deposition. However, large-scale metal additive manufacturing has not yet reached parity with large-scale polymer. This paper is a review study of metal additive techniques in the context of building large structures. Current commercial devices are capable of printing metal parts on the order of several cubic feet, compared to hundreds of cubic feet on the polymer side. In order to follow the polymer progress path, several factors are considered: potential to scale, economy, environmental friendliness, material properties, feedstock availability, robustness of the process, quality and accuracy, potential for defects, and post-processing, as well as potential applications. This paper focuses on the current state of the art of large-scale metal additive technology, with emphasis on expanding its geometric limits.

  8. DECOVALEX III/BENCHPAR PROJECTS. Approaches to Upscaling Thermal-Hydro-Mechanical Processes in a Fractured Rock Mass and its Significance for Large-Scale Repository Performance Assessment. Summary of Findings. Report of BMT2/WP3

    Energy Technology Data Exchange (ETDEWEB)

    Andersson, Johan (comp.) [JA Streamflow AB, Aelvsjoe (Sweden); Staub, Isabelle (comp.) [Golder Associates AB, Stockholm (Sweden); Knight, Les (comp.) [Nirex UK Ltd, Oxon (United Kingdom)

    2005-02-15

    stress results in increasing the permeability (when applying the non-linear stiffness model for fractures) and in channelling of flow paths (potentially caused by fracture dilation). Despite the relatively limited amount of large-scale analyses conducted within the Task, some general remarks seem possible. It is suggested that the stress is so high at the depth of the repository that fractures are almost completely compressed mechanically and the permeability is approaching its residual value. Therefore further stress increase due to thermal stresses would not significantly reduce the permeability. Also the TH effects, due to buoyancy, are relatively limited and would add an uncertainty on the order of a factor of 2 or so. These observations support the conclusion that it is the upscaling of hydraulic properties, rather than the added complication of T and M couplings, that is the main source of uncertainty in a problem of this nature. The added disturbance, in relation to in-situ stress, is small in the far-field of a deep repository. Yet, understanding the stress/permeability relation is important for understanding the nature of the permeability field. It can also be noted that most conclusions to be drawn from the large-scale analyses could already be drawn from studying intermediate performance measures such as permeability, deformation modulus and k-versus-stress relations.
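
    As a toy illustration of the stress/permeability relation discussed above, the sketch below assumes a simple exponential closure law in which permeability decays from an initial value towards a residual value with increasing effective normal stress; the functional form and all parameter values are hypothetical, not those of the report.

```python
# Illustrative sketch (not the report's model): exponential stress-
# permeability law with a residual value k_res. Parameters are invented.
import numpy as np

k0, k_res = 1e-14, 1e-17   # initial and residual permeability [m^2]
sigma_ref = 5.0            # stress scale controlling closure [MPa]

def permeability(sigma_n):
    """Permeability at effective normal stress sigma_n [MPa]."""
    return k_res + (k0 - k_res) * np.exp(-sigma_n / sigma_ref)

# At repository depth the stress is already high and k is close to its
# residual value, so a further thermal stress increment changes k little:
for s in (10.0, 30.0, 35.0):
    print(f"sigma_n = {s:5.1f} MPa -> k = {permeability(s):.2e} m^2")
```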

  9. Large-scale wind turbine structures

    Science.gov (United States)

    Spera, David A.

    1988-01-01

    The purpose of this presentation is to show how structural technology was applied in the design of modern wind turbines, which were recently brought to an advanced stage of development as sources of renewable power. Wind turbine structures present many difficult problems because they are relatively slender and flexible; subject to vibration and aeroelastic instabilities; acted upon by loads which are often nondeterministic; operated continuously with little maintenance in all weather; and dominated by life-cycle cost considerations. Progress in horizontal-axis wind turbine (HAWT) development was paced by progress in the understanding of structural loads, modeling of structural dynamic response, and the design of innovative structural responses. During the past 15 years a series of large HAWTs was developed. This has culminated in the recent completion of the world's largest operating wind turbine, the 3.2 MW Mod-5B power plant installed on the island of Oahu, Hawaii. Some of the applications of structures technology to wind turbines will be illustrated by referring to the Mod-5B design. First, a video overview will be presented to provide familiarization with the Mod-5B project and the important components of the wind turbine system. Next, the structural requirements for large-scale wind turbines will be discussed, emphasizing the difficult fatigue-life requirements. Finally, the procedures used to design the structure will be presented, including the use of the fracture mechanics approach for determining allowable fatigue stresses.

  10. Seismic safety in conducting large-scale blasts

    Science.gov (United States)

    Mashukov, I. V.; Chaplygin, V. V.; Domanov, V. P.; Semin, A. A.; Klimkin, M. A.

    2017-09-01

    In mining enterprises a drilling and blasting method is used to prepare hard rock for excavation. As mining operations approach settlements, the negative effects of large-scale blasts increase. To assess the level of seismic impact of large-scale blasts, the scientific staff of Siberian State Industrial University carried out assessments for coal mines and iron ore enterprises. The magnitude of surface seismic vibrations caused by mass explosions was determined using seismic receivers and an analog-to-digital converter with recording on a laptop. Records of surface seismic vibrations during more than 280 large-scale blasts at 17 mining enterprises in 22 settlements are presented. The maximum velocity values of the Earth’s surface vibrations are determined. The safety evaluation of the seismic effect was carried out according to the permissible value of vibration velocity. For cases where permissible values were exceeded, recommendations were developed to reduce the level of seismic impact.
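
    A hedged sketch of how such a safety evaluation can be framed: assuming a Sadovsky-type scaled-distance law for peak particle velocity (a common empirical form in blasting practice, though not necessarily the one used by the authors), with purely hypothetical constants and permissible limit.

```python
# Scaled-distance check of blast vibration safety, assuming the empirical
# Sadovsky-type law v = K * (Q**(1/3) / R)**n. K, n and the permissible
# velocity are site-specific; all numbers below are hypothetical.
def peak_particle_velocity(Q_kg, R_m, K=2000.0, n=1.7):
    """Estimated peak particle velocity [cm/s] for charge Q at distance R."""
    return K * (Q_kg ** (1.0 / 3.0) / R_m) ** n

v_perm = 3.0  # permissible vibration velocity [cm/s], hypothetical class
for Q, R in [(500, 300), (500, 600), (2000, 600)]:
    v = peak_particle_velocity(Q, R)
    status = "OK" if v <= v_perm else "exceeds limit"
    print(f"Q={Q} kg, R={R} m -> v={v:.2f} cm/s ({status})")
```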

  11. Large Scale Anomalies of the Cosmic Microwave Background with Planck

    DEFF Research Database (Denmark)

    Frejsel, Anne Mette

    This thesis focuses on the large scale anomalies of the Cosmic Microwave Background (CMB) and their possible origins. The investigations consist of two main parts. The first part is on statistical tests of the CMB, and the consistency of both maps and power spectrum. We find that the Planck data...... is very consistent, while the WMAP 9 year release appears more contaminated by non-CMB residuals than the 7 year release. The second part is concerned with the anomalies of the CMB from two approaches. One is based on an extended inflationary model as the origin of one specific large scale anomaly, namely....... Here we find evidence that the Planck CMB maps contain residual radiation in the loop areas, which can be linked to some of the large scale CMB anomalies: the point-parity asymmetry, the alignment of quadrupole and octupole and the dipole modulation....

  13. Sensitivity technologies for large scale simulation.

    Energy Technology Data Exchange (ETDEWEB)

    Collis, Samuel Scott; Bartlett, Roscoe Ainsworth; Smith, Thomas Michael; Heinkenschloss, Matthias (Rice University, Houston, TX); Wilcox, Lucas C. (Brown University, Providence, RI); Hill, Judith C. (Carnegie Mellon University, Pittsburgh, PA); Ghattas, Omar (Carnegie Mellon University, Pittsburgh, PA); Berggren, Martin Olof (University of UppSala, Sweden); Akcelik, Volkan (Carnegie Mellon University, Pittsburgh, PA); Ober, Curtis Curry; van Bloemen Waanders, Bart Gustaaf; Keiter, Eric Richard

    2005-01-01

    Sensitivity analysis is critically important to numerous analysis algorithms, including large scale optimization, uncertainty quantification, reduced order modeling, and error estimation. Our research focused on developing tools, algorithms and standard interfaces to facilitate the implementation of sensitivity-type analysis into existing code and, equally important, on ways to increase the visibility of sensitivity analysis. We attempt to accomplish the first objective through the development of hybrid automatic differentiation tools, standard linear algebra interfaces for numerical algorithms, time domain decomposition algorithms and two-level Newton methods. We attempt to accomplish the second goal by presenting the results of several case studies in which direct sensitivities and adjoint methods have been effectively applied, in addition to an investigation of h-p adaptivity using adjoint-based a posteriori error estimation. A mathematical overview is provided of direct sensitivities and adjoint methods for both steady state and transient simulations. Two case studies are presented to demonstrate the utility of these methods. A direct sensitivity method is implemented to solve a source inversion problem for steady state internal flows subject to convection diffusion. Real time performance is achieved using a novel decomposition into offline and online calculations. Adjoint methods are used to reconstruct initial conditions of a contamination event in an external flow. We demonstrate an adjoint-based transient solution. In addition, we investigated time domain decomposition algorithms in an attempt to improve the efficiency of transient simulations. Because derivative calculations are at the root of sensitivity calculations, we have developed hybrid automatic differentiation methods and implemented this approach for shape optimization for gas dynamics using the Euler equations. The hybrid automatic differentiation method was applied to a first

  15. Accelerating sustainability in large-scale facilities

    CERN Multimedia

    Marina Giampietro

    2011-01-01

    Scientific research centres and large-scale facilities are intrinsically energy intensive, but how can big science improve its energy management and eventually contribute to the environmental cause with new cleantech? CERN’s commitment to providing tangible answers to these questions was sealed in the first workshop on energy management for large scale scientific infrastructures held in Lund, Sweden, on 13-14 October. Participants at the energy management for large scale scientific infrastructures workshop. The workshop, co-organised with the European Spallation Source (ESS) and the European Association of National Research Facilities (ERF), tackled a recognised need for addressing energy issues in relation to science and technology policies. It brought together more than 150 representatives of Research Infrastructures (RIs) and energy experts from Europe and North America. “Without compromising our scientific projects, we can ...

  16. Intelligent information retrieval approach for large-scale collections of full-text documents

    Institute of Scientific and Technical Information of China (English)

    金小峰

    2011-01-01

    A latent semantic model is analyzed, the representation of texts in the latent semantic space is studied, and a retrieval strategy for large-scale text collections is proposed. The retrieval process consists of two steps: a coarse-grained culling of non-relevant documents, followed by precise retrieval over the relevant ones. The latent semantic space model is first used to screen the collection and remove non-relevant texts; a large-scale retrieval method is then applied to search the relevant documents precisely at the passage level, with a genetic algorithm (GA) introduced into the search to improve execution efficiency. Finally, the indices of the candidate passages are returned. Experimental results demonstrate the validity and high efficiency of the proposed method.
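
    The two-stage strategy can be sketched as follows, assuming an LSA-style latent space built with truncated SVD; the genetic-algorithm passage search of the paper is replaced by plain cosine ranking for brevity, so this is a structural illustration only.

```python
# Two-stage retrieval sketch: coarse culling in a latent semantic space,
# then fine ranking of the survivors. Documents, query and the similarity
# threshold are toy values.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.decomposition import TruncatedSVD
from sklearn.metrics.pairwise import cosine_similarity

docs = ["wind turbine structural fatigue design",
        "phononic metamaterials attenuate seismic waves",
        "topic models for information retrieval"]
query = ["retrieval of documents with topic models"]

vec = TfidfVectorizer()
X = vec.fit_transform(docs + query)          # last row is the query
Z = TruncatedSVD(n_components=2).fit_transform(X)

sims = cosine_similarity(Z[-1:], Z[:-1])[0]
survivors = [i for i, s in enumerate(sims) if s > 0.1]  # coarse culling
ranked = sorted(survivors, key=lambda i: -sims[i])       # fine ranking
print([docs[i] for i in ranked])
```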

  17. GPU-based large-scale visualization

    KAUST Repository

    Hadwiger, Markus

    2013-11-19

    Recent advances in image and volume acquisition as well as computational advances in simulation have led to an explosion of the amount of data that must be visualized and analyzed. Modern techniques combine the parallel processing power of GPUs with out-of-core methods and data streaming to enable the interactive visualization of giga- and terabytes of image and volume data. A major enabler for interactivity is making both the computational and the visualization effort proportional to the amount of data that is actually visible on screen, decoupling it from the full data size. This leads to powerful display-aware multi-resolution techniques that enable the visualization of data of almost arbitrary size. The course consists of two major parts: An introductory part that progresses from fundamentals to modern techniques, and a more advanced part that discusses details of ray-guided volume rendering, novel data structures for display-aware visualization and processing, and the remote visualization of large online data collections. You will learn how to develop efficient GPU data structures and large-scale visualizations, implement out-of-core strategies and concepts such as virtual texturing that have only been employed recently, as well as how to use modern multi-resolution representations. These approaches reduce the GPU memory requirements of extremely large data to a working set size that fits into current GPUs. You will learn how to perform ray-casting of volume data of almost arbitrary size and how to render and process gigapixel images using scalable, display-aware techniques. We will describe custom virtual texturing architectures as well as recent hardware developments in this area. We will also describe client/server systems for distributed visualization, on-demand data processing and streaming, and remote visualization. We will describe implementations using OpenGL as well as CUDA, exploiting parallelism on GPUs combined with additional asynchronous

  18. Large-Scale Analysis of Art Proportions

    DEFF Research Database (Denmark)

    Jensen, Karl Kristoffer

    2014-01-01

    While literature often tries to impute mathematical constants into art, this large-scale study (11 databases of paintings and photos, around 200,000 items) shows a different truth. The analysis, consisting of the width/height proportions, shows a value that is rarely if ever one (square)...
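
    The underlying measurement is simple enough to sketch; assuming a list of (width, height) pairs, the ratio statistics look like this (toy data, not the 11 databases of the study).

```python
# Toy version of the proportion measurement: the statistic of interest is
# simply width/height per item. The dimensions below are invented.
import statistics

dimensions = [(1200, 900), (800, 800), (1600, 1000), (750, 1050)]
ratios = [w / h for w, h in dimensions]
print("median ratio:", statistics.median(ratios))
print("share exactly square:", sum(r == 1 for r in ratios) / len(ratios))
```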

  19. Large-scale Complex IT Systems

    CERN Document Server

    Sommerville, Ian; Calinescu, Radu; Keen, Justin; Kelly, Tim; Kwiatkowska, Marta; McDermid, John; Paige, Richard

    2011-01-01

    This paper explores the issues around the construction of large-scale complex systems which are built as 'systems of systems' and suggests that there are fundamental reasons, derived from the inherent complexity in these systems, why our current software engineering methods and techniques cannot be scaled up to cope with the engineering challenges of constructing such systems. It then goes on to propose a research and education agenda for software engineering that identifies the major challenges and issues in the development of large-scale complex, software-intensive systems. Central to this is the notion that we cannot separate software from the socio-technical environment in which it is used.

  20. Topological Routing in Large-Scale Networks

    DEFF Research Database (Denmark)

    Pedersen, Jens Myrup; Knudsen, Thomas Phillip; Madsen, Ole Brun

    2004-01-01

    A new routing scheme, Topological Routing, for large-scale networks is proposed. It allows for efficient routing without large routing tables as known from traditional routing schemes. It presupposes a certain level of order in the networks, known from Structural QoS. The main issues in applying...... Topological Routing to large-scale networks are discussed. Hierarchical extensions are presented along with schemes for shortest path routing, fault handling and path restoration. Further research in the area is discussed and perspectives on the prerequisites for practical deployment of Topological Routing...

  2. Large scale topic modeling made practical

    DEFF Research Database (Denmark)

    Wahlgreen, Bjarne Ørum; Hansen, Lars Kai

    2011-01-01

    Topic models are of broad interest. They can be used for query expansion and result structuring in information retrieval and as an important component in services such as recommender systems and user adaptive advertising. In large scale applications both the size of the database (number of documents)...... topics at par with a much larger case specific vocabulary....
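
    For readers unfamiliar with topic models, a minimal small-scale fit looks like the sketch below; it uses scikit-learn's LDA on toy documents and does not reproduce the paper's large-scale implementation or vocabulary-reduction strategy.

```python
# Hedged sketch of fitting a topic model at modest scale with scikit-learn.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

docs = ["stocks fell amid rate fears", "the team won the final match",
        "bond yields rose sharply", "coach praises young players"]
X = CountVectorizer(stop_words="english").fit_transform(docs)
lda = LatentDirichletAllocation(n_components=2, random_state=0).fit(X)
print(lda.transform(X).round(2))   # per-document topic mixtures
```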

  3. Multiple Realities: A Relationship Narrative Approach in Therapy with Black-White Mixed-Race Clients.

    Science.gov (United States)

    Rockquemore, Kerry Ann; Laszloffy, Tracey A.

    2003-01-01

    Emerging empirical research on racial identity formation among persons with one Black and one White parent reveals that multiple identity options are possible. Presents a relational narrative approach to therapy with Black-White mixed-race clients who experience systematic invalidation of their chosen racial identity through a detailed case…

  4. Evaluating Large-Scale Interactive Radio Programmes

    Science.gov (United States)

    Potter, Charles; Naidoo, Gordon

    2009-01-01

    This article focuses on the challenges involved in conducting evaluations of interactive radio programmes in South Africa with large numbers of schools, teachers, and learners. It focuses on the role such large-scale evaluation has played during the South African radio learning programme's development stage, as well as during its subsequent…

  5. Configuration management in large scale infrastructure development

    NARCIS (Netherlands)

    Rijn, T.P.J. van; Belt, H. van de; Los, R.H.

    2000-01-01

    Large Scale Infrastructure (LSI) development projects such as the construction of roads, railways and other civil engineering (water)works are tendered differently today than a decade ago. Traditional workflow requested quotes from construction companies for construction works where the works to be

  6. Computing in Large-Scale Dynamic Systems

    NARCIS (Netherlands)

    Pruteanu, A.S.

    2013-01-01

    Software applications developed for large-scale systems have always been difficult to develop due to problems caused by the large number of computing devices involved. Above a certain network size (roughly one hundred), necessary services such as code updating, topology discovery and data dissem

  7. Sensitivity analysis for large-scale problems

    Science.gov (United States)

    Noor, Ahmed K.; Whitworth, Sandra L.

    1987-01-01

    The development of efficient techniques for calculating sensitivity derivatives is studied. The objective is to present a computational procedure for calculating sensitivity derivatives as part of performing structural reanalysis for large-scale problems. The scope is limited to framed type structures. Both linear static analysis and free-vibration eigenvalue problems are considered.
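
    The direct method for linear static problems can be illustrated in a few lines: differentiating K(p)u = f gives K (du/dp) = df/dp - (dK/dp)u, so one extra solve with the already-assembled stiffness matrix yields the derivative. The 2-DOF stiffness matrix below is invented for illustration and is not from the paper.

```python
# Generic direct-sensitivity sketch for a linear static problem K(p) u = f.
import numpy as np

def K(p):
    """Made-up 2-DOF stiffness matrix depending on a design parameter p."""
    return np.array([[2.0 * p, -p], [-p, 3.0 * p]])

def dK_dp(p):
    return np.array([[2.0, -1.0], [-1.0, 3.0]])

p, f = 1.5, np.array([1.0, 0.0])
u = np.linalg.solve(K(p), f)
du_dp = np.linalg.solve(K(p), -dK_dp(p) @ u)   # df/dp = 0 here

# Cross-check against a central finite difference:
h = 1e-6
fd = (np.linalg.solve(K(p + h), f) - np.linalg.solve(K(p - h), f)) / (2 * h)
print(du_dp, fd)
```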

  8. Large-scale perspective as a challenge

    NARCIS (Netherlands)

    Plomp, M.G.A.

    2012-01-01

    1. Scale forms a challenge for chain researchers: when exactly is something ‘large-scale’? What are the underlying factors (e.g. number of parties, data, objects in the chain, complexity) that determine this? It appears to be a continuum between small- and large-scale, where positioning on that cont

  9. Ensemble methods for large scale inverse problems

    NARCIS (Netherlands)

    Heemink, A.W.; Umer Altaf, M.; Barbu, A.L.; Verlaan, M.

    2013-01-01

    Variational data assimilation, also sometimes simply called the ‘adjoint method’, is used very often for large scale model calibration problems. Using the available data, the uncertain parameters in the model are identified by minimizing a certain cost function that measures the difference between t
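
    Although this abstract is truncated, the setting it describes, identifying uncertain parameters by minimizing a data-misfit cost function, can be illustrated with a toy ensemble update in which sample covariances stand in for adjoint-based gradients. The forward model and all numbers below are invented, and the scheme is a generic ensemble-smoother step, not necessarily the authors' method.

```python
# Toy adjoint-free ensemble update for parameter estimation: sample
# covariances between parameters and model outputs replace gradients.
import numpy as np

rng = np.random.default_rng(1)
H = lambda p: np.array([p[0] + p[1], p[0] * p[1]])  # toy forward model
y = np.array([3.0, 2.0])                             # "observations"
R = 0.05 * np.eye(2)                                 # observation error cov

P = rng.normal([0.5, 0.5], 1.0, size=(50, 2))        # prior ensemble
D = np.array([H(p) for p in P])                      # predicted outputs
C = np.cov(P.T, D.T)                                 # joint 4x4 covariance
C_pd, C_dd = C[:2, 2:], C[2:, 2:]

K = C_pd @ np.linalg.inv(C_dd + R)                   # Kalman-style gain
perturbed_y = y + rng.multivariate_normal(np.zeros(2), R, 50)
P_post = P + (perturbed_y - D) @ K.T
print(P_post.mean(axis=0))   # should drift towards parameters fitting y
```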

  10. Inflation, large scale structure and particle physics

    Indian Academy of Sciences (India)

    S F King

    2004-02-01

    We review experimental and theoretical developments in inflation and its application to structure formation, including the curvaton idea. We then discuss a particle physics model of supersymmetric hybrid inflation at the intermediate scale in which the Higgs scalar field is responsible for large scale structure, and show how such a theory is completely natural in the framework of extra dimensions with an intermediate string scale.

  11. Systematic Literature Review of Agile Scalability for Large Scale Projects

    Directory of Open Access Journals (Sweden)

    Hina saeeda

    2015-09-01

    Full Text Available Among new methods, “agile” has emerged as the top approach in the software industry for developing software. In its different forms, agile is applied to handle issues such as low cost, tight time-to-market schedules, continuously changing requirements, communication and coordination, team size and distributed environments. Agile has proved successful in small and medium size projects; however, it has several limitations when applied to large size projects. The purpose of this study is to examine agile techniques in detail, finding and highlighting their restrictions for large size projects with the help of a systematic literature review. The systematic literature review answers the following research questions: (1) How can agile approaches be made scalable and adoptable for large projects? (2) What existing methods, approaches, frameworks and practices support the agile process in large scale projects? (3) What are the limitations of existing agile approaches, methods, frameworks and practices with reference to large scale projects? This study identifies the current research problems of agile scalability for large size projects by giving a detailed literature review of the identified problems and of the existing work providing solutions to them, and it points out the limitations of that work. All results are summarized statistically, and based on these findings remedial work will be planned in future for handling the identified limitations of agile approaches for large scale projects.

  12. Application of Large-Scale 3D Non-Orthogonal Boundary Fitted Sediment Transport Model and Small-Scale Approach for Offshore Structure in Cimanuk Delta North Java Sea

    Directory of Open Access Journals (Sweden)

    Muslim Muin

    2016-08-01

    Full Text Available The morphology of the Cimanuk Delta at the North Java Sea has changed rapidly over the last two decades. The annual sediment deposition is about two million cubic meters (Yuanita and Tingsanchali, [1]). The large-scale ocean hydrodynamics and sediment transport model MuSed3D (Muin, [2]) was applied to the North Java Sea to simulate suspended sediment transport at the study site. A potential offshore structure was positioned at approximately 30 km from the Cimanuk Delta. The results of the large-scale model were calibrated against observation data and Landsat satellite image interpretation. The agreement between the modeling results and the observations was excellent. It was found that the critical shear stresses for erosion and deposition were 0.1 Pa and 0.05 Pa respectively. An empirical formula was further utilized to assess the local scour at the potential offshore structure site in a small-scale domain and under extreme conditions.

  13. The large-scale structure of vacuum

    CERN Document Server

    Albareti, F D; Maroto, A L

    2014-01-01

    The vacuum state in quantum field theory is known to exhibit an important number of fundamental physical features. In this work we explore the possibility that this state could also present a non-trivial space-time structure on large scales. In particular, we will show that by imposing the renormalized vacuum energy-momentum tensor to be conserved and compatible with cosmological observations, the vacuum energy of sufficiently heavy fields behaves at late times as non-relativistic matter rather than as a cosmological constant. In this limit, the vacuum state supports perturbations whose speed of sound is negligible and accordingly allows the growth of structures in the vacuum energy itself. This large-scale structure of vacuum could seed the formation of galaxies and clusters very much in the same way as cold dark matter does.

  14. Growth Limits in Large Scale Networks

    DEFF Research Database (Denmark)

    Knudsen, Thomas Phillip

    the fundamental technological resources in network technologies are analysed for scalability. Here several technological limits to continued growth are presented. The third step involves a survey of major problems in managing large scale networks given the growth of user requirements and the technological...... limitations. The rising complexity of network management with the convergence of communications platforms is shown as problematic for both automatic management feasibility and for manpower resource management. In the fourth step the scope is extended to include the present society with the DDN project as its...... main focus. Here the general perception of the nature and role in society of large scale networks as a fundamental infrastructure is analysed. This analysis focuses on the effects of the technical DDN projects and on the perception of network infrastructure as expressed by key decision makers...

  15. Quantum Signature of Cosmological Large Scale Structures

    CERN Document Server

    Capozziello, S; De Siena, S; Illuminati, F; Capozziello, Salvatore; Martino, Salvatore De; Siena, Silvio De; Illuminati, Fabrizio

    1998-01-01

    We demonstrate that to all large scale cosmological structures where gravitation is the only overall relevant interaction assembling the system (e.g. galaxies), there is associated a characteristic unit of action per particle whose order of magnitude coincides with the Planck action constant $h$. This result extends the class of physical systems for which quantum coherence can act on macroscopic scales (as e.g. in superconductivity) and agrees with the absence of screening mechanisms for the gravitational forces, as predicted by some renormalizable quantum field theories of gravity. It also seems to support those lines of thought invoking that large scale structures in the Universe should be connected to quantum primordial perturbations as requested by inflation, that the Newton constant should vary with time and distance and, finally, that gravity should be considered as an effective interaction induced by quantization.

  16. Process Principles for Large-Scale Nanomanufacturing.

    Science.gov (United States)

    Behrens, Sven H; Breedveld, Victor; Mujica, Maritza; Filler, Michael A

    2017-06-07

    Nanomanufacturing-the fabrication of macroscopic products from well-defined nanoscale building blocks-in a truly scalable and versatile manner is still far from our current reality. Here, we describe the barriers to large-scale nanomanufacturing and identify routes to overcome them. We argue for nanomanufacturing systems consisting of an iterative sequence of synthesis/assembly and separation/sorting unit operations, analogous to those used in chemicals manufacturing. In addition to performance and economic considerations, phenomena unique to the nanoscale must guide the design of each unit operation and the overall process flow. We identify and discuss four key nanomanufacturing process design needs: (a) appropriately selected process break points, (b) synthesis techniques appropriate for large-scale manufacturing, (c) new structure- and property-based separations, and (d) advances in stabilization and packaging.

  17. Condition Monitoring of Large-Scale Facilities

    Science.gov (United States)

    Hall, David L.

    1999-01-01

    This document provides a summary of the research conducted for the NASA Ames Research Center under grant NAG2-1182 (Condition-Based Monitoring of Large-Scale Facilities). The information includes copies of view graphs presented at NASA Ames in the final Workshop (held during December of 1998), as well as a copy of a technical report provided to the COTR (Dr. Anne Patterson-Hine) subsequent to the workshop. The material describes the experimental design, collection of data, and analysis results associated with monitoring the health of large-scale facilities. In addition to this material, a copy of the Pennsylvania State University Applied Research Laboratory data fusion visual programming tool kit was also provided to NASA Ames researchers.

  18. Large-scale structure of the Universe

    Energy Technology Data Exchange (ETDEWEB)

    Shandarin, S.F.; Doroshkevich, A.G.; Zel'dovich, Ya.B. (Inst. Prikladnoj Matematiki, Moscow, USSR)

    1983-01-01

    A review of the theory of the large-scale structure of the Universe is given, including the formation of clusters and superclusters of galaxies as well as large voids. Particular attention is paid to the theory of the neutrino-dominated Universe - the cosmological model where neutrinos with a rest mass of several tens of eV dominate the mean density. The evolution of small perturbations is discussed, and estimates of microwave background radiation fluctuations are given for different angular scales. The adiabatic theory of structure formation in the Universe, known as the 'pancake' scenario, and the successive fragmentation of pancakes are described. This scenario is based on an approximate nonlinear theory of gravitational instability. Results of numerical experiments modeling the processes of large-scale structure formation are discussed.

  19. Large-scale structure of the universe

    Energy Technology Data Exchange (ETDEWEB)

    Shandarin, S.F.; Doroshkevich, A.G.; Zel'dovich, Y.B.

    1983-01-01

    A survey is given of theories for the origin of large-scale structure in the universe: clusters and superclusters of galaxies, and vast black regions practically devoid of galaxies. Special attention is paid to the theory of a neutrino-dominated universe: a cosmology in which electron neutrinos with a rest mass of a few tens of electron volts would contribute the bulk of the mean density. The evolution of small perturbations is discussed, and estimates are made for the temperature anisotropy of the microwave background radiation on various angular scales. The nonlinear stage in the evolution of smooth irrotational perturbations in a low-pressure medium is described in detail. Numerical experiments simulating large-scale structure formation processes are discussed, as well as their interpretation in the context of catastrophe theory.

  20. Wireless Secrecy in Large-Scale Networks

    CERN Document Server

    Pinto, Pedro C; Win, Moe Z

    2011-01-01

    The ability to exchange secret information is critical to many commercial, governmental, and military networks. The intrinsically secure communications graph (iS-graph) is a random graph which describes the connections that can be securely established over a large-scale network, by exploiting the physical properties of the wireless medium. This paper provides an overview of the main properties of this new class of random graphs. We first analyze the local properties of the iS-graph, namely the degree distributions and their dependence on fading, target secrecy rate, and eavesdropper collusion. To mitigate the effect of the eavesdroppers, we propose two techniques that improve secure connectivity. Then, we analyze the global properties of the iS-graph, namely percolation on the infinite plane, and full connectivity on a finite region. These results help clarify how the presence of eavesdroppers can compromise secure communication in a large-scale network.
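
    A Monte-Carlo sketch of the iS-graph's local structure, assuming the common model variant in which node i has a secure edge to node j whenever j is closer to i than i's nearest eavesdropper; the densities and window size below are arbitrary.

```python
# iS-graph-style out-degree on Poisson fields of legitimate nodes and
# eavesdroppers. Secure edge rule (assumed variant): j is reachable from i
# iff dist(i, j) < dist(i, nearest eavesdropper).
import numpy as np

rng = np.random.default_rng(0)
L, lam_leg, lam_eav = 20.0, 1.0, 0.2   # window side, node densities

leg = rng.uniform(0, L, size=(rng.poisson(lam_leg * L * L), 2))
eav = rng.uniform(0, L, size=(rng.poisson(lam_eav * L * L), 2))

def out_degree(i):
    d_leg = np.linalg.norm(leg - leg[i], axis=1)
    d_eav = np.linalg.norm(eav - leg[i], axis=1).min()
    return int(np.sum((d_leg > 0) & (d_leg < d_eav)))

degrees = [out_degree(i) for i in range(len(leg))]
# Up to boundary effects, the mean should come out near lam_leg / lam_eav.
print("mean secure out-degree:", np.mean(degrees))
```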

  1. ELASTIC: A Large Scale Dynamic Tuning Environment

    Directory of Open Access Journals (Sweden)

    Andrea Martínez

    2014-01-01

    Full Text Available The spectacular growth in the number of cores in current supercomputers poses design challenges for the development of performance analysis and tuning tools. To be effective, such analysis and tuning tools must be scalable and be able to manage the dynamic behaviour of parallel applications. In this work, we present ELASTIC, an environment for dynamic tuning of large-scale parallel applications. To be scalable, the architecture of ELASTIC takes the form of a hierarchical tuning network of nodes that perform a distributed analysis and tuning process. Moreover, the tuning network topology can be configured to adapt itself to the size of the parallel application. To guide the dynamic tuning process, ELASTIC supports a plugin architecture. These plugins, called ELASTIC packages, allow the integration of different tuning strategies into ELASTIC. We also present experimental tests conducted using ELASTIC, showing its effectiveness to improve the performance of large-scale parallel applications.

  2. Measuring Bulk Flows in Large Scale Surveys

    CERN Document Server

    Feldman, H A; Feldman, Hume A.; Watkins, Richard

    1993-01-01

    We follow a formalism presented by Kaiser to calculate the variance of bulk flows in large scale surveys. We apply the formalism to a mock survey of Abell clusters à la Lauer & Postman and find the variance in the expected bulk velocities in a universe with CDM, MDM and IRAS-QDOT power spectra. We calculate the velocity variance as a function of the 1-D velocity dispersion of the clusters and the size of the survey.
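
    In linear theory the variance of such a bulk flow is usually written as a window-weighted integral over the power spectrum; in our notation, which need not match the paper's:

```latex
% Standard linear-theory expression for the bulk-flow variance through a
% survey window W of characteristic scale R (notation assumed here):
\begin{equation}
  \langle v_B^2 \rangle = \frac{H_0^2 f^2}{2\pi^2}
  \int_0^\infty dk \, P(k)\, \bigl|\widetilde{W}(kR)\bigr|^2 ,
\end{equation}
% where P(k) is the matter power spectrum, f the linear growth rate, and
% \widetilde{W} the Fourier transform of the survey window.
```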

  3. Statistical characteristics of Large Scale Structure

    OpenAIRE

    Demianski; Doroshkevich

    2002-01-01

    We investigate the mass functions of different elements of the Large Scale Structure -- walls, pancakes, filaments and clouds -- and the impact of transverse motions -- expansion and/or compression -- on their statistical characteristics. Using the Zel'dovich theory of gravitational instability we show that the mass functions of all structure elements are approximately the same and the mass of all elements is found to be concentrated near the corresponding mean mass. At high redshifts, both t...

  4. Topologies for large scale photovoltaic power plants

    OpenAIRE

    Cabrera Tobar, Ana; Bullich Massagué, Eduard; Aragüés Peñalba, Mònica; Gomis Bellmunt, Oriol

    2016-01-01

    The concern of increasing renewable energy penetration into the grid together with the reduction of prices of photovoltaic solar panels during the last decade have enabled the development of large scale solar power plants connected to the medium and high voltage grid. Photovoltaic generation components, the internal layout and the ac collection grid are being investigated for ensuring the best design, operation and control of these power plants. This ...

  5. Large-scale instabilities of helical flows

    CERN Document Server

    Cameron, Alexandre; Brachet, Marc-Étienne

    2016-01-01

    Large-scale hydrodynamic instabilities of periodic helical flows are investigated using 3D Floquet numerical computations. A minimal three-mode analytical model that reproduces and explains some of the full Floquet results is derived. The growth-rate $\sigma$ of the most unstable modes (at small scale, low Reynolds number $Re$ and small wavenumber $q$) is found to scale differently in the presence or absence of the anisotropic kinetic alpha (AKA) effect. When an AKA effect is present, the scaling $\sigma \propto q\, Re$ predicted by the AKA effect theory [U. Frisch, Z. S. She, and P. L. Sulem, Physica D: Nonlinear Phenomena 28, 382 (1987)] is recovered for $Re \ll 1$ as expected (with most of the energy of the unstable mode concentrated in the large scales). However, as $Re$ increases, the growth-rate is found to saturate and most of the energy is found at small scales. In the absence of the AKA effect, it is found that flows can still have large-scale instabilities, but with a negative eddy-viscosity sca...

  6. Large-Scale Visual Data Analysis

    Science.gov (United States)

    Johnson, Chris

    2014-04-01

    Modern high performance computers have speeds measured in petaflops and handle data set sizes measured in terabytes and petabytes. Although these machines offer enormous potential for solving very large-scale realistic computational problems, their effectiveness will hinge upon the ability of human experts to interact with their simulation results and extract useful information. One of the greatest scientific challenges of the 21st century is to effectively understand and make use of the vast amount of information being produced. Visual data analysis will be among our most important tools in helping to understand such large-scale information. Our research at the Scientific Computing and Imaging (SCI) Institute at the University of Utah has focused on innovative, scalable techniques for large-scale 3D visual data analysis. In this talk, I will present state-of-the-art visualization techniques, including scalable visualization algorithms and software, cluster-based visualization methods and innovative visualization techniques applied to problems in computational science, engineering, and medicine. I will conclude with an outline of future high performance visualization research challenges and opportunities.

  7. Bayesian large-scale structure inference and cosmic web analysis

    CERN Document Server

    Leclercq, Florent

    2015-01-01

    Surveys of the cosmic large-scale structure carry opportunities for building and testing cosmological theories about the origin and evolution of the Universe. This endeavor requires appropriate data assimilation tools, for establishing the contact between survey catalogs and models of structure formation. In this thesis, we present an innovative statistical approach for the ab initio simultaneous analysis of the formation history and morphology of the cosmic web: the BORG algorithm infers the primordial density fluctuations and produces physical reconstructions of the dark matter distribution that underlies observed galaxies, by assimilating the survey data into a cosmological structure formation model. The method, based on Bayesian probability theory, provides accurate means of uncertainty quantification. We demonstrate the application of BORG to the Sloan Digital Sky Survey data and describe the primordial and late-time large-scale structure in the observed volume. We show how the approach has led to the fi...

  8. Large-scale innovation and change in UK higher education

    Directory of Open Access Journals (Sweden)

    Stephen Brown

    2013-09-01

    Full Text Available This paper reflects on challenges universities face as they respond to change. It reviews current theories and models of change management, discusses why universities are particularly difficult environments in which to achieve large scale, lasting change and reports on a recent attempt by the UK JISC to enable a range of UK universities to employ technology to deliver such changes. Key lessons that emerged from these experiences are reviewed covering themes of pervasiveness, unofficial systems, project creep, opposition, pressure to deliver, personnel changes and technology issues. The paper argues that collaborative approaches to project management offer greater prospects of effective large-scale change in universities than either management-driven top-down or more champion-led bottom-up methods. It also argues that while some diminution of control over project outcomes is inherent in this approach, this is outweighed by potential benefits of lasting and widespread adoption of agreed changes.

  9. Quantum noise in large-scale coherent nonlinear photonic circuits

    CERN Document Server

    Santori, Charles; Beausoleil, Raymond G; Tezak, Nikolas; Hamerly, Ryan; Mabuchi, Hideo

    2014-01-01

    A semiclassical simulation approach is presented for studying quantum noise in large-scale photonic circuits incorporating an ideal Kerr nonlinearity. A netlist-based circuit solver is used to generate matrices defining a set of stochastic differential equations, in which the resonator field variables represent random samplings of the Wigner quasi-probability distributions. Although the semiclassical approach involves making a large-photon-number approximation, tests on one- and two-resonator circuits indicate satisfactory agreement between the semiclassical and full-quantum simulation results in the parameter regime of interest. The semiclassical model is used to simulate random errors in a large-scale circuit that contains 88 resonators and hundreds of components in total, and functions as a 4-bit ripple counter. The error rate as a function of on-state photon number is examined, and it is observed that the quantum fluctuation amplitudes do not increase as signals propagate through the circuit, an important...

  10. Image-based Exploration of Large-Scale Pathline Fields

    KAUST Repository

    Nagoor, Omniah H.

    2014-05-27

    While real-time applications are nowadays routinely used in visualizing large numerical simulations and volumes, handling these large-scale datasets requires high-end graphics clusters or supercomputers to process and visualize them. However, not all users have access to powerful clusters. Therefore, it is challenging to come up with a visualization approach that provides insight into large-scale datasets on a single computer. Explorable images (EI) is one of the methods that allows users to handle large data on a single workstation. Although it is a view-dependent method, it combines both exploration and modification of visual aspects without re-accessing the original huge data. In this thesis, we propose a novel image-based method that applies the concept of EI in visualizing large flow-field pathlines data. The goal of our work is to provide an optimized image-based method, which scales well with the dataset size. Our approach is based on constructing a per-pixel linked list data structure in which each pixel contains a list of pathline segments. With this view-dependent method it is possible to filter, color-code and explore large-scale flow data in real-time. In addition, optimization techniques such as early-ray termination and deferred shading are applied, which further improves the performance and scalability of our approach.
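
    The core data structure can be mimicked on the CPU in a few lines; the sketch below uses a Python dict of per-pixel fragment lists where the GPU implementation would use atomically built linked lists, and all segment data are invented.

```python
# CPU toy of the per-pixel fragment-list idea: each screen pixel stores a
# list of the pathline segments that project onto it, so filtering and
# recoloring can be done per pixel without touching the original dataset.
from collections import defaultdict

pixel_lists = defaultdict(list)

def splat(segment_id, pixels, depth, attribute):
    """Record a segment in every pixel it covers (p = (x, y) coords)."""
    for p in pixels:
        pixel_lists[p].append((depth, segment_id, attribute))

splat(0, [(10, 12), (11, 12)], depth=0.3, attribute=2.5)
splat(1, [(11, 12)], depth=0.1, attribute=7.0)

# View-dependent pass: sort each pixel's list by depth, filter by attribute.
for p, frags in pixel_lists.items():
    visible = sorted(f for f in frags if f[2] > 3.0)
    print(p, visible)
```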

  11. RESTRUCTURING OF THE LARGE-SCALE SPRINKLERS

    Directory of Open Access Journals (Sweden)

    Paweł Kozaczyk

    2016-09-01

    Full Text Available One of the best ways for agriculture to become independent from shortages of precipitation is irrigation. In the seventies and eighties of the last century a number of large-scale sprinklers were built in Wielkopolska. At the end of the 1970s, 67 sprinklers with a total area of 6400 ha were installed in the Poznan province. The average size of a sprinkler reached 95 ha. In 1989 there were 98 sprinklers, covering more than 10 130 ha. The study was conducted on 7 large sprinklers with areas ranging from 230 to 520 hectares in 1986-1998. After the introduction of the market economy in the early 1990s and ownership changes in agriculture, large-scale sprinklers underwent significant or total devastation. Land of the State Farms of the State Agricultural Property Agency was leased or sold, and the new owners used the existing sprinklers to a very small extent. This involved a change in crop structure and demand structure and an increase in operating costs. There was also a threefold increase in electricity prices. In practice, the operation of large-scale irrigation encountered all kinds of barriers and limitations of system solutions, supply difficulties and high levels of equipment failure, which did not encourage rational use of the available sprinklers. The aim of the local site survey was to show the current status of the remaining irrigation infrastructure. The adopted scheme for the restructuring of Polish agriculture was not the best solution, causing massive destruction of the assets previously invested in the sprinkler systems.

  12. Large-Scale PV Integration Study

    Energy Technology Data Exchange (ETDEWEB)

    Lu, Shuai; Etingov, Pavel V.; Diao, Ruisheng; Ma, Jian; Samaan, Nader A.; Makarov, Yuri V.; Guo, Xinxin; Hafen, Ryan P.; Jin, Chunlian; Kirkham, Harold; Shlatz, Eugene; Frantzis, Lisa; McClive, Timothy; Karlson, Gregory; Acharya, Dhruv; Ellis, Abraham; Stein, Joshua; Hansen, Clifford; Chadliev, Vladimir; Smart, Michael; Salgo, Richard; Sorensen, Rahn; Allen, Barbara; Idelchik, Boris

    2011-07-29

    This research effort evaluates the impact of large-scale photovoltaic (PV) and distributed generation (DG) output on NV Energy’s electric grid system in southern Nevada. It analyzes the ability of NV Energy’s generation to accommodate increasing amounts of utility-scale PV and DG, and the resulting cost of integrating variable renewable resources. The study was jointly funded by the United States Department of Energy and NV Energy, and conducted by a project team comprised of industry experts and research scientists from Navigant Consulting Inc., Sandia National Laboratories, Pacific Northwest National Laboratory and NV Energy.

  13. Conformal Anomaly and Large Scale Gravitational Coupling

    CERN Document Server

    Salehi, H

    2000-01-01

    We present a model in which the breakdown of conformal symmetry of a quantum stress-tensor due to the trace anomaly is related to a cosmological effect in a gravitational model. This is done by characterizing the traceless part of the quantum stress-tensor in terms of the stress-tensor of a conformally invariant classical scalar field. We introduce a conformal frame in which the anomalous trace is identified with a cosmological constant. In this conformal frame we establish the Einstein field equations by connecting the quantum stress-tensor with the large scale distribution of matter in the universe.

  14. Large Scale Quantum Simulations of Nuclear Pasta

    Science.gov (United States)

    Fattoyev, Farrukh J.; Horowitz, Charles J.; Schuetrumpf, Bastian

    2016-03-01

    Complex and exotic nuclear geometries collectively referred to as "nuclear pasta" are expected to naturally exist in the crust of neutron stars and in supernovae matter. Using a set of self-consistent microscopic nuclear energy density functionals we present the first results of large scale quantum simulations of pasta phases at baryon densities 0.03 [...] pasta configurations. This work is supported in part by DOE Grants DE-FG02-87ER40365 (Indiana University) and DE-SC0008808 (NUCLEI SciDAC Collaboration).

  15. Large scale wind power penetration in Denmark

    DEFF Research Database (Denmark)

    Karnøe, Peter

    2013-01-01

    The Danish electricity generating system prepared to adopt nuclear power in the 1970s, yet has become the world's front runner in wind power with a national plan for 50% wind power penetration by 2020. This paper deploys a sociotechnical perspective to explain the historical transformation of "net...... expertise evolves and contributes to the normalization and large-scale penetration of wind power in the electricity generating system. The analysis teaches us how technological paths become locked-in, but also indicates keys for locking them out....

  16. Stabilization Algorithms for Large-Scale Problems

    DEFF Research Database (Denmark)

    Jensen, Toke Koldborg

    2006-01-01

    The focus of the project is on stabilization of large-scale inverse problems where structured models and iterative algorithms are necessary for computing approximate solutions. For this purpose, we study various iterative Krylov methods and their abilities to produce regularized solutions. Some......-curve. This heuristic is implemented as a part of a larger algorithm which is developed in collaboration with G. Rodriguez and P. C. Hansen. Last, but not least, a large part of the project has, in different ways, revolved around the object-oriented Matlab toolbox MOORe Tools developed by PhD Michael Jacobsen. New...

  17. What is a large-scale dynamo?

    Science.gov (United States)

    Nigro, G.; Pongkitiwanichakul, P.; Cattaneo, F.; Tobias, S. M.

    2017-01-01

    We consider kinematic dynamo action in a sheared helical flow at moderate to high values of the magnetic Reynolds number (Rm). We find exponentially growing solutions which, for large enough shear, take the form of a coherent part embedded in incoherent fluctuations. We argue that at large Rm large-scale dynamo action should be identified by the presence of structures coherent in time, rather than those at large spatial scales. We further argue that although the growth rate is determined by small-scale processes, the period of the coherent structures is set by mean-field considerations.

  18. Large scale phononic metamaterials for seismic isolation

    Energy Technology Data Exchange (ETDEWEB)

    Aravantinos-Zafiris, N. [Department of Sound and Musical Instruments Technology, Ionian Islands Technological Educational Institute, Stylianou Typaldou ave., Lixouri 28200 (Greece); Sigalas, M. M. [Department of Materials Science, University of Patras, Patras 26504 (Greece)

    2015-08-14

    In this work, we numerically examine structures that could be characterized as large scale phononic metamaterials. These novel structures could have band gaps in the frequency spectrum of seismic waves when their dimensions are chosen appropriately, thus raising the belief that they could be serious candidates for seismic isolation structures. Different and easy to fabricate structures were examined made from construction materials such as concrete and steel. The well-known finite difference time domain method is used in our calculations in order to calculate the band structures of the proposed metamaterials.

  19. Hiearchical Engine for Large Scale Infrastructure Simulation

    Energy Technology Data Exchange (ETDEWEB)

    2017-03-15

    HELICS is a new open-source, cyber-physical-energy co-simulation framework for electric power systems. HELICS is designed to support very-large-scale (100,000+ federates) co-simulations with off-the-shelf power-system, communication, market, and end-use tools. Other key features include cross-platform operating system support, the integration of both event-driven (e.g., packetized communication) and time-series (e.g., power flow) simulations, and the ability to co-iterate among federates to ensure physical model convergence at each time step.

  20. Colloquium: Large scale simulations on GPU clusters

    Science.gov (United States)

    Bernaschi, Massimo; Bisson, Mauro; Fatica, Massimiliano

    2015-06-01

    Graphics processing units (GPU) are currently used as a cost-effective platform for computer simulations and big-data processing. Large scale applications require that multiple GPUs work together, but the efficiency obtained with clusters of GPUs is, at times, sub-optimal because the GPU features are not exploited at their best. We describe how it is possible to achieve an excellent efficiency for applications in statistical mechanics, particle dynamics and networks analysis by using suitable memory access patterns and mechanisms like CUDA streams, profiling tools, etc. Similar concepts and techniques may be applied also to other problems like the solution of Partial Differential Equations.

  1. Reliability assessment for components of large scale photovoltaic systems

    Science.gov (United States)

    Ahadi, Amir; Ghadimi, Noradin; Mirabbasi, Davar

    2014-10-01

    Photovoltaic (PV) systems have significantly shifted from independent power generation systems to large-scale grid-connected generation systems in recent years. The power output of PV systems is affected by the reliability of various components in the system. This study proposes an analytical approach to evaluate the reliability of large-scale, grid-connected PV systems. The fault tree method with an exponential probability distribution function is used to analyze the components of large-scale PV systems. The system is considered in its various sequential and parallel fault combinations in order to find all realistic ways in which the top or undesired events can occur. Additionally, the method can identify areas on which planned maintenance should focus. By monitoring the critical components of a PV system, it is possible not only to improve the reliability of the system, but also to optimize the maintenance costs; the latter is achieved by informing the operators about the status of system components. The flexibility of the approach in monitoring applications also makes it useful for ensuring secure operation of the system. The implementation demonstrates that the proposed method is effective and efficient and can conveniently incorporate additional system maintenance plans and diagnostic strategies.
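
    For readers unfamiliar with the fault tree method, the following minimal Python sketch shows its exponential-distribution building blocks: a basic event fails with probability F(t) = 1 - exp(-lambda*t), an AND gate multiplies the probabilities of independent events, and an OR gate combines them through complements. The failure rates and the miniature tree are hypothetical, not taken from the study.

    import numpy as np

    # Fault-tree building blocks with exponential failure distributions.
    # Failure rates (per hour) and the miniature tree are hypothetical.
    def failure_prob(rate, t):
        # Basic event: F(t) = 1 - exp(-rate * t)
        return 1.0 - np.exp(-rate * t)

    def or_gate(probs):
        # Top event occurs if any independent input event occurs.
        return 1.0 - np.prod([1.0 - p for p in probs])

    def and_gate(probs):
        # Top event occurs only if all independent input events occur.
        return float(np.prod(probs))

    t = 8760.0                                  # one year, in hours
    p_inverter = failure_prob(1e-5, t)
    p_module = failure_prob(2e-6, t)
    p_redundant_pair = and_gate([p_module, p_module])
    p_top = or_gate([p_inverter, p_redundant_pair])
    print(f"P(top event within a year) = {p_top:.4f}")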

  2. Analysis using large-scale ringing data

    Directory of Open Access Journals (Sweden)

    Baillie, S. R.

    2004-06-01

    survival and recruitment estimates from the French CES scheme to assess the relative contributions of survival and recruitment to overall population changes. He develops a novel approach to modelling survival rates from such multi-site data by using within-year recaptures to provide a covariate of between-year recapture rates. This provided parsimonious models of variation in recapture probabilities between sites and years. The approach provides promising results for the four species investigated and can potentially be extended to similar data from other CES/MAPS schemes. The final paper by Blandine Doligez, David Thomson and Arie van Noordwijk (Doligez et al., 2004) illustrates how large-scale studies of population dynamics can be important for evaluating the effects of conservation measures. Their study is concerned with the reintroduction of White Stork populations to the Netherlands, where a re-introduction programme started in 1969 had resulted in a breeding population of 396 pairs by 2000. They demonstrate the need to consider a wide range of models in order to account for potential age, time, cohort and “trap-happiness” effects. As the data are based on resightings, such trap-happiness must reflect some form of heterogeneity in resighting probabilities. Perhaps surprisingly, the provision of supplementary food did not influence survival, but it may have had an indirect effect via the alteration of migratory behaviour. Spatially explicit modelling of data gathered at many sites inevitably results in starting models with very large numbers of parameters. The problem is often complicated further by having relatively sparse data at each site, even where the total amount of data gathered is very large. Both Julliard (2004) and Doligez et al. (2004) give explicit examples of problems caused by needing to handle very large numbers of parameters and show how they overcame them for their particular data sets. Such problems involve both the choice of appropriate

  3. Large-Scale Optimization for Bayesian Inference in Complex Systems

    Energy Technology Data Exchange (ETDEWEB)

    Willcox, Karen [MIT; Marzouk, Youssef [MIT

    2013-11-12

    The SAGUARO (Scalable Algorithms for Groundwater Uncertainty Analysis and Robust Optimization) Project focused on the development of scalable numerical algorithms for large-scale Bayesian inversion in complex systems that capitalize on advances in large-scale simulation-based optimization and inversion methods. The project was a collaborative effort among MIT, the University of Texas at Austin, Georgia Institute of Technology, and Sandia National Laboratories. The research was directed in three complementary areas: efficient approximations of the Hessian operator, reductions in complexity of forward simulations via stochastic spectral approximations and model reduction, and employing large-scale optimization concepts to accelerate sampling. The MIT-Sandia component of the SAGUARO Project addressed the intractability of conventional sampling methods for large-scale statistical inverse problems by devising reduced-order models that are faithful to the full-order model over a wide range of parameter values; sampling then employs the reduced model rather than the full model, resulting in very large computational savings. Results indicate little effect on the computed posterior distribution. On the other hand, in the Texas-Georgia Tech component of the project, we retain the full-order model, but exploit inverse problem structure (adjoint-based gradients and partial Hessian information of the parameter-to-observation map) to implicitly extract lower dimensional information on the posterior distribution; this greatly speeds up sampling methods, so that fewer sampling points are needed. We can think of these two approaches as "reduce then sample" and "sample then reduce." In fact, these two approaches are complementary, and can be used in conjunction with each other. Moreover, they both exploit deterministic inverse problem structure, in the form of adjoint-based gradient and Hessian information of the underlying parameter-to-observation map, to

  4. Internationalization Measures in Large Scale Research Projects

    Science.gov (United States)

    Soeding, Emanuel; Smith, Nancy

    2017-04-01

    Large-scale research projects (LSRP) often serve as flagships used by universities or research institutions to demonstrate their performance and capability to stakeholders and other interested parties. As the global competition among universities for the recruitment of the brightest brains has increased, effective internationalization measures have become hot topics for universities and LSRP alike. Nevertheless, most projects and universities are challenged by having little experience in how to conduct these measures and make internationalization a cost-efficient and useful activity. Furthermore, such undertakings constantly have to be justified to the project PIs as important, valuable tools to improve the capacity of the project and the research location. There is a variety of measures suited to supporting universities in international recruitment. These include, e.g., institutional partnerships, research marketing, a welcome culture, support for science mobility and an effective alumni strategy. These activities, although often conducted by different university entities, are interlocked and can be very powerful measures if interfaced in an effective way. On this poster we display a number of internationalization measures for various target groups and identify interfaces between project management, university administration, researchers and international partners to work together, exchange information and improve processes in order to recruit, support and keep the brightest heads for the project.

  5. Large-scale Globally Propagating Coronal Waves

    Directory of Open Access Journals (Sweden)

    Alexander Warmuth

    2015-09-01

    Full Text Available Large-scale, globally propagating wave-like disturbances have been observed in the solar chromosphere and by inference in the corona since the 1960s. However, detailed analysis of these phenomena has only been conducted since the late 1990s. This was prompted by the availability of high-cadence coronal imaging data from numerous space-based instruments, which routinely show spectacular globally propagating bright fronts. Coronal waves, as these perturbations are usually referred to, have now been observed in a wide range of spectral channels, yielding a wealth of information. Many findings have supported the “classical” interpretation of the disturbances: fast-mode MHD waves or shocks that are propagating in the solar corona. However, observations that seemed inconsistent with this picture have stimulated the development of alternative models in which “pseudo waves” are generated by magnetic reconfiguration in the framework of an expanding coronal mass ejection. This has resulted in a vigorous debate on the physical nature of these disturbances. This review focuses on demonstrating how the numerous observational findings of the last one and a half decades can be used to constrain our models of large-scale coronal waves, and how a coherent physical understanding of these disturbances is finally emerging.

  6. Less is more: regularization perspectives on large scale machine learning

    CERN Document Server

    CERN. Geneva

    2017-01-01

    Deep learning based techniques provide a possible solution at the expense of theoretical guidance and, especially, of computational requirements. It is then a key challenge for large-scale machine learning to devise approaches guaranteed to be accurate and yet computationally efficient. In this talk, we will consider a regularization perspective on machine learning, appealing to classical ideas in linear algebra and inverse problems to scale up dramatically nonparametric methods such as kernel methods, often dismissed because of prohibitive costs. Our analysis derives optimal theoretical guarantees while providing experimental results on par with or outperforming state-of-the-art approaches.

  7. Multitree Algorithms for Large-Scale Astrostatistics

    Science.gov (United States)

    March, William B.; Ozakin, Arkadas; Lee, Dongryeol; Riegel, Ryan; Gray, Alexander G.

    2012-03-01

    this number every week, resulting in billions of objects. At such scales, even linear-time analysis operations present challenges, particularly since statistical analyses are inherently interactive processes, requiring that computations complete within some reasonable human attention span. The quadratic (or worse) runtimes of straightforward implementations quickly become unbearable. Examples of applications. These analysis subroutines occur ubiquitously in astrostatistical work. We list just a few examples. The need to cross-match objects across different catalogs has led to various algorithms, which at some point perform an AllNN computation. 2-point and higher-order spatial correlations form the basis of spatial statistics, and are utilized in astronomy to compare the spatial structures of two datasets, such as an observed sample and a theoretical sample, forming the basis for two-sample hypothesis testing. Friends-of-friends clustering is often used to identify halos in data from astrophysical simulations. Minimum spanning tree properties have also been proposed as statistics of large-scale structure. Comparison of the distributions of different kinds of objects requires accurate density estimation, for which KDE is the overall statistical method of choice. The prediction of redshifts from optical data requires accurate regression, for which kernel regression is a powerful method. The identification of objects of various types in astronomy, such as stars versus galaxies, requires accurate classification, for which KDA is a powerful method. Overview. In this chapter, we will briefly sketch the main ideas behind recent fast algorithms which achieve, for example, linear runtimes for pairwise-distance problems, or similarly dramatic reductions in computational growth. In some cases, the runtime orders for these algorithms are mathematically provable statements, while in others we have only conjectures backed by experimental observations for the time being
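
    As a baseline for the kernel density estimation mentioned above, the brute-force computation sketched below costs O(N*M) kernel evaluations for N sample points and M query points; this is exactly the quadratic-type growth the chapter's multitree algorithms are designed to avoid. The synthetic data are stand-ins for a real catalog.

    import numpy as np
    from scipy.stats import gaussian_kde

    # Brute-force KDE baseline: O(N*M) kernel evaluations for N sample
    # points and M queries, the growth multitree methods aim to beat.
    rng = np.random.default_rng(0)
    sample = rng.normal(size=(2, 10_000))      # synthetic 2-D "catalog"
    kde = gaussian_kde(sample)                 # bandwidth via Scott's rule

    query = rng.normal(size=(2, 1_000))
    density = kde(query)                       # estimated density at queries
    print(density[:5])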

  8. Large-Scale Multiobjective Static Test Generation for Web-Based Testing with Integer Programming

    Science.gov (United States)

    Nguyen, M. L.; Hui, Siu Cheung; Fong, A. C. M.

    2013-01-01

    Web-based testing has become a ubiquitous self-assessment method for online learning. One useful feature that is missing from today's web-based testing systems is the reliable capability to fulfill different assessment requirements of students based on a large-scale question data set. A promising approach for supporting large-scale web-based…

  9. Turbulent large-scale structure effects on wake meandering

    Science.gov (United States)

    Muller, Y.-A.; Masson, C.; Aubrun, S.

    2015-06-01

    This work studies the effects of large-scale turbulent structures on wake meandering using Large Eddy Simulations (LES) over an actuator disk. Other potential sources of wake meandering, such as the instability mechanisms associated with tip vortices, are not treated in this study. A crucial element of efficient, pragmatic and successful simulations of large-scale turbulent structures in the Atmospheric Boundary Layer (ABL) is the generation of the stochastic turbulent atmospheric flow. This is an essential capability since one source of wake meandering is these large turbulent structures, larger than the turbine diameter. The unsteady wind turbine wake in the ABL is simulated using a combination of LES and actuator disk approaches. In order to dedicate the large majority of the available computing power to the wake, the ABL ground region of the flow is not part of the computational domain. Instead, mixed Dirichlet/Neumann boundary conditions are applied at all the computational surfaces except at the outlet. Prescribed values for the Dirichlet contribution of these boundary conditions are provided by a stochastic turbulent wind generator. This allows the simulation of large-scale turbulent structures, larger than the computational domain, leading to an efficient simulation technique for wake meandering. Since the stochastic wind generator includes shear, turbulence production is included in the analysis without the necessity of resolving the flow near the ground. The classical Smagorinsky sub-grid model is used. The resulting numerical methodology has been implemented in OpenFOAM. Comparisons with experimental measurements in porous-disk wakes have been undertaken, and the agreement is good. While the temporal resolution of experimental measurements is high, the spatial resolution is often too low. LES numerical results provide a more complete spatial description of the flow. They tend to demonstrate that inflow low-frequency content, or large-scale turbulent structures, is

  10. Large-Scale Astrophysical Visualization on Smartphones

    Science.gov (United States)

    Becciani, U.; Massimino, P.; Costa, A.; Gheller, C.; Grillo, A.; Krokos, M.; Petta, C.

    2011-07-01

    Nowadays digital sky surveys and long-duration, high-resolution numerical simulations using high performance computing and grid systems produce multidimensional astrophysical datasets on the order of several petabytes. Sharing visualizations of such datasets within communities and collaborating research groups is of paramount importance for disseminating results and advancing astrophysical research. Moreover educational and public outreach programs can benefit greatly from novel ways of presenting these datasets by promoting understanding of complex astrophysical processes, e.g., formation of stars and galaxies. We have previously developed VisIVO Server, a grid-enabled platform for high-performance large-scale astrophysical visualization. This article reviews the latest developments on VisIVO Web, a custom designed web portal wrapped around VisIVO Server, then introduces VisIVO Smartphone, a gateway connecting VisIVO Web and data repositories for mobile astrophysical visualization. We discuss current work and summarize future developments.

  11. Accelerated large-scale multiple sequence alignment

    Directory of Open Access Journals (Sweden)

    Lloyd Scott

    2011-12-01

    Full Text Available Background: Multiple sequence alignment (MSA) is a fundamental analysis method used in bioinformatics and many comparative genomic applications. Prior MSA acceleration attempts with reconfigurable computing have only addressed the first stage of progressive alignment and consequently exhibit performance limitations according to Amdahl's Law. This work is the first known to accelerate the third stage of progressive alignment on reconfigurable hardware. Results: We reduce subgroups of aligned sequences into discrete profiles before they are pairwise aligned on the accelerator. Using an FPGA accelerator, an overall speedup of up to 150 has been demonstrated on a large data set when compared to a 2.4 GHz Core2 processor. Conclusions: Our parallel algorithm and architecture accelerates large-scale MSA with reconfigurable computing and allows researchers to solve the larger problems that confront biologists today. Program source is available from http://dna.cs.byu.edu/msa/.

  12. Clumps in large scale relativistic jets

    CERN Document Server

    Tavecchio, F; Celotti, A

    2003-01-01

    The relatively intense X-ray emission from large-scale (tens to hundreds of kpc) jets discovered with Chandra likely implies that jets (at least in powerful quasars) are still relativistic at those distances from the active nucleus. In this case the emission is due to Compton scattering off seed photons provided by the Cosmic Microwave Background, which on one hand permits magnetic fields close to equipartition with the emitting particles, and on the other hand minimizes the requirements on the total power carried by the jet. The emission comes from compact (kpc-scale) knots, and we here investigate what we can predict about the possible emission between the bright knots. This is motivated by the fact that bulk relativistic motion makes Compton scattering off the CMB photons efficient even when electrons are cold or mildly relativistic in the comoving frame. This implies relatively long cooling times, dominated by adiabatic losses. Therefore the relativistically moving plasma can emit, by Compton sc...

  13. Large-scale parametric survival analysis.

    Science.gov (United States)

    Mittal, Sushil; Madigan, David; Cheng, Jerry Q; Burd, Randall S

    2013-10-15

    Survival analysis has been a topic of active statistical research in the past few decades, with applications spread across several areas. Traditional applications usually consider data with only a small number of predictors and a few hundred or thousand observations. Recent advances in data acquisition techniques and computational power have led to considerable interest in analyzing very-high-dimensional data where the number of predictor variables and the number of observations range between 10^4 and 10^6. In this paper, we present a tool for performing large-scale regularized parametric survival analysis using a variant of the cyclic coordinate descent method. Through our experiments on two real data sets, we show that application of regularized models to high-dimensional data avoids overfitting and can provide improved predictive performance and calibration over corresponding low-dimensional models.
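
    The paper's solver applies cyclic coordinate descent to a regularized survival likelihood. As a hedged stand-in, the Python sketch below implements the same update pattern on a simpler objective, lasso-penalized least squares, where each coordinate is updated in turn by soft-thresholding while the residual is maintained incrementally; the survival-specific likelihood terms of the paper are not reproduced here.

    import numpy as np

    def soft_threshold(z, gamma):
        return np.sign(z) * max(abs(z) - gamma, 0.0)

    def lasso_cd(X, y, lam, n_sweeps=100):
        # Cyclic coordinate descent for 0.5*||y - X b||^2 + lam*||b||_1.
        n, p = X.shape
        beta = np.zeros(p)
        resid = y.astype(float).copy()         # residual for beta = 0
        col_sq = (X ** 2).sum(axis=0)
        for _ in range(n_sweeps):
            for j in range(p):
                # rho = x_j' r_(-j): fit with coordinate j excluded
                rho = X[:, j] @ resid + col_sq[j] * beta[j]
                new_bj = soft_threshold(rho, lam) / col_sq[j]
                resid += X[:, j] * (beta[j] - new_bj)
                beta[j] = new_bj
        return beta

    rng = np.random.default_rng(1)
    X = rng.normal(size=(200, 50))
    true_beta = np.zeros(50)
    true_beta[:5] = 2.0
    y = X @ true_beta + rng.normal(size=200)
    print(lasso_cd(X, y, lam=10.0)[:8])   # first coordinates near 2.0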

  14. Curvature constraints from Large Scale Structure

    CERN Document Server

    Di Dio, Enea; Raccanelli, Alvise; Durrer, Ruth; Kamionkowski, Marc; Lesgourgues, Julien

    2016-01-01

    We modified the CLASS code in order to include relativistic galaxy number counts in spatially curved geometries; we present the formalism and study the effect of relativistic corrections on spatial curvature. The new version of the code is now publicly available. Using a Fisher matrix analysis, we investigate how measurements of the spatial curvature parameter $\\Omega_K$ with future galaxy surveys are affected by relativistic effects, which influence observations of the large scale galaxy distribution. These effects include contributions from cosmic magnification, Doppler terms and terms involving the gravitational potential. As an application, we consider angle and redshift dependent power spectra, which are especially well suited for model independent cosmological constraints. We compute our results for a representative deep, wide and spectroscopic survey, and our results show the impact of relativistic corrections on the spatial curvature parameter estimation. We show that constraints on the curvature para...

  15. Large-Scale Tides in General Relativity

    CERN Document Server

    Ip, Hiu Yan

    2016-01-01

    Density perturbations in cosmology, i.e. spherically symmetric adiabatic perturbations of a Friedmann-Lemaître-Robertson-Walker (FLRW) spacetime, are locally exactly equivalent to a different FLRW solution, as long as their wavelength is much larger than the sound horizon of all fluid components. This fact is known as the "separate universe" paradigm. However, no such relation is known for anisotropic adiabatic perturbations, which correspond to an FLRW spacetime with large-scale tidal fields. Here, we provide a closed, fully relativistic set of evolutionary equations for the nonlinear evolution of such modes, based on the conformal Fermi (CFC) frame. We show explicitly that the tidal effects are encoded by the Weyl tensor, and are hence entirely different from an anisotropic Bianchi I spacetime, where the anisotropy is sourced by the Ricci tensor. In order to close the system, certain higher derivative terms have to be dropped. We show that this approximation is equivalent to the local tidal approximation ...

  16. Large scale water lens for solar concentration.

    Science.gov (United States)

    Mondol, A S; Vogel, B; Bastian, G

    2015-06-01

    Properties of large-scale water lenses for solar concentration were investigated. These lenses were built from readily available materials: normal tap water and hyper-elastic linear low-density polyethylene foil. Exposed to sunlight, the focal lengths and light intensities in the focal spot were measured and calculated. Their optical properties were modeled with ray-tracing software based on the lens shape. We achieved a good match between experimental and theoretical data by considering the wavelength-dependent concentration factor, absorption and focal length. The change in light concentration as a function of water volume was examined via the resulting load on the foil and the corresponding change of shape. The latter was extracted from images and modeled by a finite element simulation.

  17. Constructing sites on a large scale

    DEFF Research Database (Denmark)

    Braae, Ellen Marie; Tietjen, Anne

    2011-01-01

    for setting the design brief in a large scale urban landscape in Norway, the Jaeren region around the city of Stavanger. In this paper, we first outline the methodological challenges and then present and discuss the proposed method based on our teaching experiences. On this basis, we discuss aspects...... within the development of our urban landscapes. At the same time, urban and landscape designers are confronted with new methodological problems. Within a strategic transformation perspective, the formulation of the design problem or brief becomes an integrated part of the design process. This paper...... discusses new design (education) methods based on a relational concept of urban sites and design processes. Within this logic site survey is not simply a pre-design activity nor is it a question of comprehensive analysis. Site survey is an integrated part of the design process. By means of active site...

  18. Introducing Large-Scale Innovation in Schools

    Science.gov (United States)

    Sotiriou, Sofoklis; Riviou, Katherina; Cherouvis, Stephanos; Chelioti, Eleni; Bogner, Franz X.

    2016-08-01

    Education reform initiatives tend to promise higher effectiveness in classrooms especially when emphasis is given to e-learning and digital resources. Practical changes in classroom realities or school organization, however, are lacking. A major European initiative entitled Open Discovery Space (ODS) examined the challenge of modernizing school education via a large-scale implementation of an open-scale methodology in using technology-supported innovation. The present paper describes this innovation scheme which involved schools and teachers all over Europe, embedded technology-enhanced learning into wider school environments and provided training to teachers. Our implementation scheme consisted of three phases: (1) stimulating interest, (2) incorporating the innovation into school settings and (3) accelerating the implementation of the innovation. The scheme's impact was monitored for a school year using five indicators: leadership and vision building, ICT in the curriculum, development of ICT culture, professional development support, and school resources and infrastructure. Based on about 400 schools, our study produced four results: (1) The growth in digital maturity was substantial, even for previously high scoring schools. This was even more important for indicators such as "vision and leadership" and "professional development." (2) The evolution of networking is presented graphically, showing the gradual growth of connections achieved. (3) These communities became core nodes, involving numerous teachers in sharing educational content and experiences: One out of three registered users (36 %) has shared his/her educational resources in at least one community. (4) Satisfaction scores ranged from 76 % (offer of useful support through teacher academies) to 87 % (good environment to exchange best practices). Initiatives such as ODS add substantial value to schools on a large scale.

  19. Large-scale sequential quadratic programming algorithms

    Energy Technology Data Exchange (ETDEWEB)

    Eldersveld, S.K.

    1992-09-01

    The problem addressed is the general nonlinear programming problem: finding a local minimizer for a nonlinear function subject to a mixture of nonlinear equality and inequality constraints. The methods studied are in the class of sequential quadratic programming (SQP) algorithms, which have previously proved successful for problems of moderate size. Our goal is to devise an SQP algorithm that is applicable to large-scale optimization problems, using sparse data structures and storing less curvature information but maintaining the property of superlinear convergence. The main features are: 1. The use of a quasi-Newton approximation to the reduced Hessian of the Lagrangian function. Only an estimate of the reduced Hessian matrix is required by our algorithm. The impact of not having available the full Hessian approximation is studied and alternative estimates are constructed. 2. The use of a transformation matrix Q. This allows the QP gradient to be computed easily when only the reduced Hessian approximation is maintained. 3. The use of a reduced-gradient form of the basis for the null space of the working set. This choice of basis is more practical than an orthogonal null-space basis for large-scale problems. The continuity condition for this choice is proven. 4. The use of incomplete solutions of quadratic programming subproblems. Certain iterates generated by an active-set method for the QP subproblem are used in place of the QP minimizer to define the search direction for the nonlinear problem. An implementation of the new algorithm has been obtained by modifying the code MINOS. Results and comparisons with MINOS and NPSOL are given for the new algorithm on a set of 92 test problems.
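
    For orientation, the toy problem below is solved with SciPy's SLSQP routine, a small-scale member of the SQP family; it illustrates the kind of nonlinear program the report targets, though none of the report's sparse, reduced-Hessian machinery is involved. The objective and constraints are invented for illustration.

    import numpy as np
    from scipy.optimize import minimize

    # Toy nonlinear program solved with SLSQP, an SQP-family method.
    def objective(x):
        return (x[0] - 1.0) ** 2 + (x[1] - 2.5) ** 2

    constraints = [
        {"type": "eq", "fun": lambda x: x[0] + x[1] - 3.0},   # x0 + x1 = 3
        {"type": "ineq", "fun": lambda x: x[0]},              # x0 >= 0
    ]

    res = minimize(objective, np.zeros(2), method="SLSQP",
                   constraints=constraints)
    print(res.x, res.fun)                   # expect roughly [0.75, 2.25]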

  20. Robust large-scale parallel nonlinear solvers for simulations.

    Energy Technology Data Exchange (ETDEWEB)

    Bader, Brett William; Pawlowski, Roger Patrick; Kolda, Tamara Gibson (Sandia National Laboratories, Livermore, CA)

    2005-11-01

    This report documents research to develop robust and efficient solution techniques for solving large-scale systems of nonlinear equations. The most widely used method for solving systems of nonlinear equations is Newton's method. While much research has been devoted to augmenting Newton-based solvers (usually with globalization techniques), little has been devoted to exploring the application of different models. Our research has been directed at evaluating techniques using different models than Newton's method: a lower order model, Broyden's method, and a higher order model, the tensor method. We have developed large-scale versions of each of these models and have demonstrated their use in important applications at Sandia. Broyden's method replaces the Jacobian with an approximation, allowing codes that cannot evaluate a Jacobian or have an inaccurate Jacobian to converge to a solution. Limited-memory methods, which have been successful in optimization, allow us to extend this approach to large-scale problems. We compare the robustness and efficiency of Newton's method, modified Newton's method, Jacobian-free Newton-Krylov method, and our limited-memory Broyden method. Comparisons are carried out for large-scale applications of fluid flow simulations and electronic circuit simulations. Results show that, in cases where the Jacobian was inaccurate or could not be computed, Broyden's method converged in some cases where Newton's method failed to converge. We identify conditions where Broyden's method can be more efficient than Newton's method. We also present modifications to a large-scale tensor method, originally proposed by Bouaricha, for greater efficiency, better robustness, and wider applicability. Tensor methods are an alternative to Newton-based methods and are based on computing a step based on a local quadratic model rather than a linear model. The advantage of Bouaricha's method is that it can use any
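
    SciPy exposes small-scale relatives of two of the model classes compared in this report: a Jacobian-free Newton-Krylov solver and a Broyden (secant) solver. The toy system below, invented purely for illustration, can be handed to both; neither call reproduces the report's limited-memory, large-scale implementations.

    import numpy as np
    from scipy.optimize import broyden1, newton_krylov

    # Toy system F(x) = 0 solved with a Jacobian-free Newton-Krylov
    # method and with a Broyden (secant) method.
    def F(x):
        return np.array([x[0] + 0.5 * np.cos(x[1]) - 1.0,
                         x[1] + 0.5 * np.sin(x[0]) - 1.0])

    x0 = np.zeros(2)
    print("newton_krylov:", newton_krylov(F, x0))
    print("broyden1:     ", broyden1(F, x0))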

  1. Patterns of horse-rider coordination during endurance race: a dynamical system approach.

    Science.gov (United States)

    Viry, Sylvain; Sleimen-Malkoun, Rita; Temprado, Jean-Jacques; Frances, Jean-Philippe; Berton, Eric; Laurent, Michel; Nicol, Caroline

    2013-01-01

    In riding, most biomechanical studies have focused on the description of horse locomotion in the unridden condition. In this study, we outline how the basic principles established for inter-personal coordination by the theory of Coordination Dynamics may provide a conceptual and methodological framework for understanding the horse-rider coupling. The recent development of mobile technologies allows combined horse and rider recordings during long-lasting natural events such as endurance races. Six international horse-rider dyads were thus recorded during a 120 km race using two tri-axial accelerometers placed on the horses and riders, respectively. The analysis concentrated on their combined vertical displacements. The obtained shapes and angles of Lissajous plots, together with values of the relative phase between horse and rider displacements at the lower reversal point, allowed us to characterize four coordination patterns, reflecting the use of two riding techniques per horse's gait (trot and canter). The present study shows that the concepts, methods and tools of the self-organizing dynamical system approach offer new directions for understanding horse-rider coordination. The identification of the horse-rider coupling patterns constitutes a firm basis to further study the coalition of multiple constraints that determine their emergence and their dynamics in endurance races.
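
    A common way to quantify such coordination, sketched below under the assumption of roughly sinusoidal signals, is a continuous relative phase computed from the analytic signal via the Hilbert transform. The study itself estimated relative phase at the lower reversal points of the displacement curves, so this is an illustrative alternative on synthetic data rather than the authors' exact procedure.

    import numpy as np
    from scipy.signal import hilbert

    # Continuous relative phase between two oscillatory signals via the
    # Hilbert transform; the sinusoids stand in for recorded horse and
    # rider vertical displacements.
    fs = 200.0                                   # assumed sampling rate (Hz)
    t = np.arange(0.0, 10.0, 1.0 / fs)
    horse = np.sin(2.0 * np.pi * 2.0 * t)        # ~2 Hz oscillation
    rider = np.sin(2.0 * np.pi * 2.0 * t - 0.6)  # phase-lagged follower

    phase_h = np.angle(hilbert(horse - horse.mean()))
    phase_r = np.angle(hilbert(rider - rider.mean()))
    rel_phase = np.unwrap(phase_h - phase_r)
    print(f"mean relative phase: {np.degrees(rel_phase.mean()):.1f} deg")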

  2. Patterns of horse-rider coordination during endurance race: a dynamical system approach.

    Directory of Open Access Journals (Sweden)

    Sylvain Viry

    Full Text Available In riding, most biomechanical studies have focused on the description of horse locomotion in the unridden condition. In this study, we outline how the basic principles established for inter-personal coordination by the theory of Coordination Dynamics may provide a conceptual and methodological framework for understanding the horse-rider coupling. The recent development of mobile technologies allows combined horse and rider recordings during long-lasting natural events such as endurance races. Six international horse-rider dyads were thus recorded during a 120 km race using two tri-axial accelerometers placed on the horses and riders, respectively. The analysis concentrated on their combined vertical displacements. The obtained shapes and angles of Lissajous plots, together with values of the relative phase between horse and rider displacements at the lower reversal point, allowed us to characterize four coordination patterns, reflecting the use of two riding techniques per horse's gait (trot and canter). The present study shows that the concepts, methods and tools of the self-organizing dynamical system approach offer new directions for understanding horse-rider coordination. The identification of the horse-rider coupling patterns constitutes a firm basis to further study the coalition of multiple constraints that determine their emergence and their dynamics in endurance races.

  3. High Speed Networking and Large-scale Simulation in Geodynamics

    Science.gov (United States)

    Kuang, Weijia; Gary, Patrick; Seablom, Michael; Truszkowski, Walt; Odubiyi, Jide; Jiang, Weiyuan; Liu, Dong

    2004-01-01

    Large-scale numerical simulation has been one of the most important approaches for understanding global geodynamical processes. In this approach, peta-scale floating point operations (pflops) are often required to carry out a single physically meaningful numerical experiment. For example, to model convective flow in the Earth's core and the generation of the geomagnetic field (geodynamo), a simulation for one magnetic free-decay time (approximately 15000 years) with a modest resolution of 150 in three spatial dimensions would require approximately 0.2 pflops. If such a numerical model is used to predict geomagnetic secular variation over decades and longer, with e.g. an ensemble Kalman filter assimilation approach, approximately 30 (and perhaps more) independent simulations of similar scales would be needed for one data assimilation analysis. Obviously, such a simulation would require an enormous computing resource that exceeds the capacity of any single facility currently at our disposal. One solution is to utilize a very fast network (e.g. 10 Gb optical networks) and available middleware (e.g. the Globus Toolkit) to allocate available but often heterogeneous resources for such large-scale computing efforts. At NASA GSFC, we are experimenting with such an approach by networking several clusters for geomagnetic data assimilation research. We shall present our initial testing results at the meeting.

  4. Summarizing Large-Scale Database Schema Using Community Detection

    Institute of Scientific and Technical Information of China (English)

    Xue Wang; Xuan Zhou; Shan Wang

    2012-01-01

    Schema summarization on large-scale databases is a challenge. In a typical large database schema, a great proportion of the tables are closely connected through a few high-degree tables. It is thus difficult to separate these tables into clusters that represent different topics. Moreover, as a schema can be very big, the schema summary needs to be structured into multiple levels to further improve usability. In this paper, we introduce a new schema summarization approach utilizing the techniques of community detection in social networks. Our approach contains three steps. First, we use a community detection algorithm to divide a database schema into subject groups, each representing a specific subject; a sketch of this step is given below. Second, we cluster the subject groups into abstract domains to form a multi-level navigation structure. Third, we discover representative tables in each cluster to label the schema summary. We evaluate our approach on Freebase, a real-world large-scale database. The results show that our approach can identify subject groups precisely, and the generated abstract schema layers are very helpful for users exploring the database.
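
    The first step can be illustrated with NetworkX's modularity-based community detection, as in the sketch below, run on an invented miniature schema graph (tables as nodes, foreign-key links as edges); the paper's actual algorithm and the Freebase schema are not reproduced.

    import networkx as nx
    from networkx.algorithms.community import greedy_modularity_communities

    # Step 1: treat the schema as a graph and detect communities as
    # candidate subject groups. The table names below are invented.
    G = nx.Graph()
    G.add_edges_from([
        ("film", "film_actor"), ("actor", "film_actor"),
        ("film", "film_genre"), ("film", "distributor"),
        ("book", "book_author"), ("author", "book_author"),
        ("book", "publisher"),
    ])

    for i, group in enumerate(greedy_modularity_communities(G)):
        print(f"subject group {i}: {sorted(group)}")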

  5. DECOVALEX III/BENCHPAR PROJECTS. Approaches to Upscaling Thermal-Hydro-Mechanical Processes in a Fractured Rock Mass and its Significance for Large-Scale Repository Performance Assessment. Summary of Findings. Report of BMT2/WP3

    Energy Technology Data Exchange (ETDEWEB)

    Andersson, Johan (comp.) [JA Streamflow AB, Aelvsjoe (Sweden); Staub, Isabelle (comp.) [Golder Associates AB, Stockholm (Sweden); Knight, Les (comp.) [Nirex UK Ltd, Oxon (United Kingdom)

    2005-02-15

    The Benchmark Test 2 of DECOVALEX III and Work Package 3 of BENCHPAR concern the upscaling of Thermal (T), Hydrological (H) and Mechanical (M) processes in a fractured rock mass and its significance for large-scale repository performance assessment. The work is primarily concerned with the extent to which various thermo-hydro-mechanical couplings in a fractured rock mass adjacent to a repository are significant in terms of solute transport typically calculated in large-scale repository performance assessments. Since the presence of even quite small fractures may control the hydraulic, mechanical and coupled hydromechanical behaviour of the rock mass, a key aim of the work has been to explore the extent to which these can be upscaled and represented by 'equivalent' continuum properties appropriate for PA calculations. From these general aims the BMT was set up as a numerical study of a large-scale reference problem. Analysing this reference problem should: help explore how different means of simplifying the geometrical detail of a site, with their implications for model parameters ('upscaling'), impact model predictions of relevance to repository performance; explore to what extent the THM coupling needs to be considered in relation to PA measures; and compare the uncertainties in upscaling (both uncertainty in how to upscale and uncertainty that arises from the upscaling process) and in the consideration of THM couplings with the inherent uncertainty and spatial variability of the site-specific data. Furthermore, it has been an essential component of the work that individual teams not only produce numerical results but are forced to make their own judgements and to provide proper justification for their conclusions based on their analysis. It should also be understood that the conclusions drawn will partly be specific to the problem analysed, in particular as it mainly concerns a 2D application. This means that specific conclusions may have limited applicability

  6. Accurate emulators for large-scale computer experiments

    CERN Document Server

    Haaland, Ben; 10.1214/11-AOS929

    2012-01-01

    Large-scale computer experiments are becoming increasingly important in science. A multi-step procedure is introduced to statisticians for modeling such experiments, which builds an accurate interpolator in multiple steps. In practice, the procedure shows substantial improvements in overall accuracy, but its theoretical properties are not well established. We introduce the terms nominal and numeric error and decompose the overall error of an interpolator into nominal and numeric portions. Bounds on the numeric and nominal error are developed to show theoretically that substantial gains in overall accuracy can be attained with the multi-step approach.
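
    A minimal emulator of the kind discussed can be built with SciPy's RBFInterpolator, as in the sketch below: a cheap interpolator is fitted to a small design of "simulation" runs and then queried elsewhere. The test function stands in for an expensive computer experiment; the paper's multi-step procedure and its nominal/numeric error decomposition are not reproduced.

    import numpy as np
    from scipy.interpolate import RBFInterpolator

    # Minimal emulator: fit an interpolator to a few "simulation" runs,
    # then predict cheaply elsewhere. The test function is a stand-in.
    def simulator(x):
        return np.sin(3.0 * x[:, 0]) * np.exp(-x[:, 1] ** 2)

    rng = np.random.default_rng(2)
    X_train = rng.uniform(-1.0, 1.0, size=(40, 2))   # design points
    y_train = simulator(X_train)

    emulator = RBFInterpolator(X_train, y_train)     # exact at the design
    X_test = rng.uniform(-1.0, 1.0, size=(5, 2))
    print(emulator(X_test) - simulator(X_test))      # approximation error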

  7. Large-scale computing techniques for complex system simulations

    CERN Document Server

    Dubitzky, Werner; Schott, Bernard

    2012-01-01

    Complex systems modeling and simulation approaches are being adopted in a growing number of sectors, including finance, economics, biology, astronomy, and many more. Technologies ranging from distributed computing to specialized hardware are explored and developed to address the computational requirements arising in complex systems simulations. The aim of this book is to present a representative overview of contemporary large-scale computing technologies in the context of complex systems simulations applications. The intention is to identify new research directions in this field and

  8. Large Scale Composite Manufacturing for Heavy Lift Launch Vehicles

    Science.gov (United States)

    Stavana, Jacob; Cohen, Leslie J.; Houseal, Keth; Pelham, Larry; Lort, Richard; Zimmerman, Thomas; Sutter, James; Western, Mike; Harper, Robert; Stuart, Michael

    2012-01-01

    Risk reduction for large-scale composite manufacturing is an important goal in producing lightweight components for heavy-lift launch vehicles. NASA and an industry team successfully employed a building-block approach using low-cost Automated Tape Layup (ATL) of autoclave and Out-of-Autoclave (OoA) prepregs. Several large, curved sandwich panels were fabricated at HITCO Carbon Composites. The aluminum honeycomb core sandwich panels are segments of a 1/16th arc from a 10-meter cylindrical barrel. Lessons learned highlight the manufacturing challenges involved in producing lightweight composite structures such as fairings for heavy-lift launch vehicles.

  9. Petascale computations for Large-scale Atomic and Molecular collisions

    CERN Document Server

    McLaughlin, Brendan M

    2014-01-01

    Petaflop architectures are currently being utilized efficiently to perform large scale computations in Atomic, Molecular and Optical Collisions. We solve the Schroedinger or Dirac equation for the appropriate collision problem using the R-matrix or R-matrix with pseudo-states approach. We briefly outline the parallel methodology used and implemented for the current suite of Breit-Pauli and DARC codes. Various examples are shown of our theoretical results compared with those obtained from Synchrotron Radiation facilities and from Satellite observations. We also indicate future directions and implementation of the R-matrix codes on emerging GPU architectures.

  10. An iterative decoupling solution method for large scale Lyapunov equations

    Science.gov (United States)

    Athay, T. M.; Sandell, N. R., Jr.

    1976-01-01

    A great deal of attention has been given to the numerical solution of the Lyapunov equation. A useful classification of the variety of solution techniques is the grouping into direct, transformation, and iterative methods. The paper summarizes those methods that are at least partly favorable numerically, giving special attention to two criteria: exploitation of a general sparse system matrix structure and efficiency in resolving the governing linear matrix equation for different matrices. An iterative decoupling solution method is proposed as a promising approach for solving large-scale Lyapunov equations when the system matrix exhibits a general sparse structure. A Fortran computer program that realizes the iterative decoupling algorithm is also discussed.
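
    For small dense instances, SciPy already provides a direct solver for the continuous Lyapunov equation A X + X A^T = Q, which makes a convenient correctness baseline, as sketched below; the iterative decoupling method proposed in the paper targets the large sparse case where such direct solves become impractical. The matrices are random stand-ins.

    import numpy as np
    from scipy.linalg import solve_continuous_lyapunov

    # Direct dense baseline for A X + X A^T = Q; the paper's iterative
    # decoupling method targets large sparse A where this is impractical.
    rng = np.random.default_rng(3)
    n = 5
    A = -np.eye(n) + 0.1 * rng.normal(size=(n, n))   # stable-ish matrix
    Q = -np.eye(n)

    X = solve_continuous_lyapunov(A, Q)
    residual = A @ X + X @ A.T - Q
    print("max residual:", np.abs(residual).max())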

  12. Quantum computation for large-scale image classification

    Science.gov (United States)

    Ruan, Yue; Chen, Hanwu; Tan, Jianing; Li, Xi

    2016-10-01

    Due to the lack of an effective quantum feature extraction method, there is currently no effective way to perform quantum image classification or recognition. In this paper, for the first time, a global quantum feature extraction method based on Schmidt decomposition is proposed. A revised quantum learning algorithm is also proposed that classifies images by computing the Hamming distance between these features. From the experimental results obtained on the benchmark database Caltech 101, together with an analysis of the algorithm, an effective approach to large-scale image classification is derived and proposed against the background of big data.
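
    The classification rule itself has a simple classical analogue, sketched below: binary feature vectors are assigned to the class whose prototype is nearest in Hamming distance. The random bit vectors stand in for the Schmidt-decomposition features, and nothing quantum is simulated here.

    import numpy as np

    # Classical analogue of the classifier: minimum Hamming distance
    # to per-class prototype bit vectors. All data here are invented.
    rng = np.random.default_rng(4)
    n_bits, n_classes = 64, 3
    prototypes = rng.integers(0, 2, size=(n_classes, n_bits))

    def classify(features):
        dists = (features != prototypes).sum(axis=1)   # Hamming distances
        return int(dists.argmin())

    query = prototypes[1].copy()
    flip = rng.choice(n_bits, size=5, replace=False)
    query[flip] ^= 1                                   # corrupt five bits
    print("predicted class:", classify(query))         # expect class 1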

  13. Improving the Fine-Tuning of Metaheuristics: An Approach Combining Design of Experiments and Racing Algorithms

    Directory of Open Access Journals (Sweden)

    Eduardo Batista de Moraes Barbosa

    2017-01-01

    Full Text Available Usually, metaheuristic algorithms are adapted to a large set of problems by applying a few parameter modifications for each specific case. However, this flexibility demands a huge effort to correctly tune such parameters. Therefore, the tuning of metaheuristics arises as one of the most important challenges in research on these algorithms. Thus, this paper presents a methodology combining statistical and artificial intelligence methods for the fine-tuning of metaheuristics. The key idea is a heuristic method, called Heuristic Oriented Racing Algorithm (HORA), which explores a search space of parameters looking for candidate configurations close to a promising alternative. To confirm the validity of this approach, we present a case study for fine-tuning two distinct metaheuristics, Simulated Annealing (SA) and Genetic Algorithm (GA), in order to solve the classical traveling salesman problem. The results are compared against the same metaheuristics tuned through a racing method. Broadly, the proposed approach proved to be effective in terms of the overall time of the tuning process. Our results reveal that metaheuristics tuned by means of HORA achieve, with much less computational effort, results similar to those obtained when they are tuned by the other fine-tuning approach.
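
    The racing idea at the core of HORA can be caricatured in a few lines, as below: candidate configurations are evaluated instance by instance, and candidates whose running mean falls clearly behind the current best are eliminated early. The noisy qualities and the fixed tolerance are invented; a real racing method, HORA included, replaces the tolerance with statistical elimination tests.

    import numpy as np

    # Bare-bones racing loop: evaluate candidates instance by instance
    # and drop those whose running mean falls clearly behind the best.
    rng = np.random.default_rng(5)
    true_quality = {"cfg_a": 10.0, "cfg_b": 12.0, "cfg_c": 18.0}  # lower wins

    def run_once(cfg):
        # Stand-in for one run of a metaheuristic with configuration cfg.
        return true_quality[cfg] + rng.normal(scale=2.0)

    scores = {cfg: [] for cfg in true_quality}
    alive = set(scores)
    for step in range(30):
        for cfg in alive:
            scores[cfg].append(run_once(cfg))
        if step >= 4:                      # wait for a few samples first
            means = {cfg: np.mean(scores[cfg]) for cfg in alive}
            best = min(means.values())
            alive = {cfg for cfg in alive if means[cfg] < best + 3.0}

    print("surviving configuration(s):", alive)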

  14. Cold flows and large scale tides

    Science.gov (United States)

    van de Weygaert, R.; Hoffman, Y.

    1999-01-01

    Within the context of the general cosmological setting it has remained puzzling that the local Universe is a relatively cold environment, in the sense that small-scale peculiar velocities are relatively small. Indeed, it has long figured as an important argument for the Universe having a low Ω or, if the Universe were to have a high Ω, for the existence of a substantial bias between the galaxy and the matter distribution. Here we investigate the dynamical impact of neighbouring matter concentrations on local small-scale characteristics of cosmic flows. While regions in which huge nearby matter clumps represent a dominating component in the local dynamics and kinematics may experience a faster collapse because of the corresponding tidal influence, the latter will also slow down or even prevent a thorough mixing and virialization of the collapsing region. By means of N-body simulations starting from constrained realizations of regions of modest density surrounded by more pronounced massive structures, we have explored the extent to which the large-scale tidal fields may indeed suppress the 'heating' of the small-scale cosmic velocities. Among other measures, we quantify the resulting cosmic flows through the cosmic Mach number. This allows us to draw conclusions about the validity of estimates of global cosmological parameters from local cosmic phenomena and the necessity of taking into account the structure and distribution of mass in the local Universe.

  15. Large-scale autostereoscopic outdoor display

    Science.gov (United States)

    Reitterer, Jörg; Fidler, Franz; Saint Julien-Wallsee, Ferdinand; Schmid, Gerhard; Gartner, Wolfgang; Leeb, Walter; Schmid, Ulrich

    2013-03-01

    State-of-the-art autostereoscopic displays are often limited in size, effective brightness, number of 3D viewing zones, and maximum 3D viewing distances, all of which are mandatory requirements for large-scale outdoor displays. Conventional autostereoscopic indoor concepts like lenticular lenses or parallax barriers cannot simply be adapted for these screens due to the inherent loss of effective resolution and brightness, which would reduce both image quality and sunlight readability. We have developed a modular autostereoscopic multi-view laser display concept with sunlight readable effective brightness, theoretically up to several thousand 3D viewing zones, and maximum 3D viewing distances of up to 60 meters. For proof-of-concept purposes a prototype display with two pixels was realized. Due to various manufacturing tolerances each individual pixel has slightly different optical properties, and hence the 3D image quality of the display has to be calculated stochastically. In this paper we present the corresponding stochastic model, we evaluate the simulation and measurement results of the prototype display, and we calculate the achievable autostereoscopic image quality to be expected for our concept.

  16. Management of large-scale multimedia conferencing

    Science.gov (United States)

    Cidon, Israel; Nachum, Youval

    1998-12-01

    The goal of this work is to explore management strategies and algorithms for large-scale multimedia conferencing over a communication network. Since the use of multimedia conferencing is still limited, the management of such systems has not yet been studied in depth. A well-organized and human-friendly multimedia conference management should utilize its limited resources efficiently and fairly, as well as take into account the requirements of the conference participants. The ability of the management to enforce fair policies and to quickly take into account the participants' preferences may even lead to a conference environment that is more pleasant and more effective than a similar face-to-face meeting. We suggest several principles for defining and solving resource sharing problems in this context. The conference resources addressed in this paper are bandwidth (conference network capacity), time (participants' scheduling) and the limitations of audio and visual equipment. The participants' requirements for these resources are defined and translated in terms of Quality of Service requirements and fairness criteria.

  17. Large-scale tides in general relativity

    Science.gov (United States)

    Ip, Hiu Yan; Schmidt, Fabian

    2017-02-01

    Density perturbations in cosmology, i.e. spherically symmetric adiabatic perturbations of a Friedmann-Lemaître-Robertson-Walker (FLRW) spacetime, are locally exactly equivalent to a different FLRW solution, as long as their wavelength is much larger than the sound horizon of all fluid components. This fact is known as the "separate universe" paradigm. However, no such relation is known for anisotropic adiabatic perturbations, which correspond to an FLRW spacetime with large-scale tidal fields. Here, we provide a closed, fully relativistic set of evolutionary equations for the nonlinear evolution of such modes, based on the conformal Fermi (CFC) frame. We show explicitly that the tidal effects are encoded by the Weyl tensor, and are hence entirely different from an anisotropic Bianchi I spacetime, where the anisotropy is sourced by the Ricci tensor. In order to close the system, certain higher derivative terms have to be dropped. We show that this approximation is equivalent to the local tidal approximation of Hui and Bertschinger [1]. We also show that this very simple set of equations matches the exact evolution of the density field at second order, but fails at third and higher order. This provides a useful, easy-to-use framework for computing the fully relativistic growth of structure at second order.

  18. Large scale probabilistic available bandwidth estimation

    CERN Document Server

    Thouin, Frederic; Rabbat, Michael

    2010-01-01

    The common utilization-based definition of available bandwidth and many of the existing tools to estimate it suffer from several important weaknesses: i) most tools report a point estimate of average available bandwidth over a measurement interval and do not provide a confidence interval; ii) the commonly adopted models used to relate the available bandwidth metric to the measured data are invalid in almost all practical scenarios; iii) existing tools do not scale well and are not suited to the task of multi-path estimation in large-scale networks; iv) almost all tools use ad-hoc techniques to address measurement noise; and v) tools do not provide enough flexibility in terms of accuracy, overhead, latency and reliability to adapt to the requirements of various applications. In this paper we propose a new definition for available bandwidth and a novel framework that addresses these issues. We define probabilistic available bandwidth (PAB) as the largest input rate at which we can send a traffic flow along a pa...

  19. Gravitational redshifts from large-scale structure

    CERN Document Server

    Croft, Rupert A C

    2013-01-01

    The recent measurement of the gravitational redshifts of galaxies in galaxy clusters by Wojtak et al. has opened a new observational window on dark matter and modified gravity. By stacking clusters this determination effectively used the line of sight distortion of the cross-correlation function of massive galaxies and lower mass galaxies to estimate the gravitational redshift profile of clusters out to 4 Mpc/h. Here we use a halo model of clustering to predict the distortion due to gravitational redshifts of the cross-correlation function on scales from 1 - 100 Mpc/h. We compare our predictions to simulations and use the simulations to make mock catalogues relevant to current and future galaxy redshift surveys. Without formulating an optimal estimator, we find that the full BOSS survey should be able to detect gravitational redshifts from large-scale structure at the ~4 sigma level. Upcoming redshift surveys will greatly increase the number of galaxies useable in such studies and the BigBOSS and Euclid exper...

  20. Food appropriation through large scale land acquisitions

    Science.gov (United States)

    Rulli, Maria Cristina; D'Odorico, Paolo

    2014-05-01

    The increasing demand for agricultural products and the uncertainty of international food markets have recently drawn the attention of governments and agribusiness firms toward investments in productive agricultural land, mostly in the developing world. The targeted countries are typically located in regions that have remained only marginally utilized because of lack of modern technology. It is expected that in the long run large-scale land acquisitions (LSLAs) for commercial farming will bring the technology required to close the existing crop yield gaps. While the extent of the acquired land and the associated appropriation of freshwater resources have been investigated in detail, the amount of food this land can produce and the number of people it could feed still need to be quantified. Here we use a unique dataset of land deals to provide a global quantitative assessment of the rates of crop and food appropriation potentially associated with LSLAs. We show how up to 300-550 million people could be fed by crops grown in the acquired land, should these investments in agriculture improve crop production and close the yield gap. In contrast, about 190-370 million people could be supported by this land without closing the yield gap. These numbers raise some concern because the food produced in the acquired land is typically exported to other regions, while the target countries exhibit high levels of malnourishment. Conversely, if used for domestic consumption, the crops harvested in the acquired land could ensure food security to the local populations.

  1. Large-scale clustering of cosmic voids

    Science.gov (United States)

    Chan, Kwan Chuen; Hamaus, Nico; Desjacques, Vincent

    2014-11-01

    We study the clustering of voids using N-body simulations and simple theoretical models. The excursion-set formalism describes fairly well the abundance of voids identified with the watershed algorithm, although the void formation threshold required is quite different from the spherical collapse value. The void cross bias bc is measured and its large-scale value is found to be consistent with the peak-background split results. A simple fitting formula for bc is found. We model the void auto-power spectrum taking into account the void biasing and exclusion effect. A good fit to the simulation data is obtained for voids with radii ≳ 30 Mpc/h, especially when the void biasing model is extended to 1-loop order. However, the best-fit bias parameters do not agree well with the peak-background results. Being able to fit the void auto-power spectrum is particularly important not only because it is the direct observable in galaxy surveys, but also because our method enables us to treat the bias parameters as nuisance parameters, which are sensitive to the techniques used to identify voids.

  2. Large scale digital atlases in neuroscience

    Science.gov (United States)

    Hawrylycz, M.; Feng, D.; Lau, C.; Kuan, C.; Miller, J.; Dang, C.; Ng, L.

    2014-03-01

    Imaging in neuroscience has revolutionized our current understanding of brain structure, architecture and increasingly its function. Many characteristics of morphology, cell type, and neuronal circuitry have been elucidated through methods of neuroimaging. Combining these data in a meaningful, standardized, and accessible manner is the scope and goal of the digital brain atlas. Digital brain atlases are used today in neuroscience to characterize the spatial organization of neuronal structures, for planning and guidance during neurosurgery, and as a reference for interpreting other data modalities such as gene expression and connectivity data. The field of digital atlases is extensive and, in addition to atlases of the human brain, includes high-quality atlases of the mouse, rat, rhesus macaque, and other model organisms. Using techniques based on histology, structural and functional magnetic resonance imaging as well as gene expression data, modern digital atlases use probabilistic and multimodal techniques, as well as sophisticated visualization software, to form an integrated product. Toward this goal, brain atlases form a common coordinate framework for summarizing, accessing, and organizing this knowledge and will undoubtedly remain a key technology in neuroscience in the future. Since the development of its flagship project of a genome-wide image-based atlas of the mouse brain, the Allen Institute for Brain Science has used imaging as a primary data modality for many of its large scale atlas projects. We present an overview of Allen Institute digital atlases in neuroscience, with a focus on the challenges and opportunities for image processing and computation.

  3. On the Phenomenology of an Accelerated Large-Scale Universe

    Directory of Open Access Journals (Sweden)

    Martiros Khurshudyan

    2016-10-01

    In this review paper, several new results towards the explanation of the accelerated expansion of the large-scale universe are discussed. On the other hand, inflation is the early-time accelerated era, and the universe is symmetric in the sense of accelerated expansion. The accelerated expansion of the universe is one of the long-standing problems in modern cosmology, and physics in general. There are several well-defined approaches to solve this problem. One of them is an assumption concerning the existence of dark energy in the recent universe. It is believed that dark energy is responsible for antigravity, while dark matter has gravitational nature and is responsible, in general, for structure formation. A different approach is an appropriate modification of general relativity including, for instance, f(R) and f(T) theories of gravity. On the other hand, attempts to build theories of quantum gravity and assumptions about the existence of extra dimensions, possible variability of the gravitational constant and the speed of light (among others) provide interesting modifications of general relativity applicable to problems of modern cosmology, too. In particular, here two groups of cosmological models are discussed. In the first group the problem of the accelerated expansion of the large-scale universe is discussed involving a new idea, named the varying ghost dark energy. The second group contains cosmological models addressed to the same problem involving either new parameterizations of the equation of state parameter of dark energy (like the varying polytropic gas), or nonlinear interactions between dark energy and dark matter. Moreover, for cosmological models involving varying ghost dark energy, massless particle creation in an appropriate radiation-dominated universe (when the background dynamics is due to general relativity) is demonstrated as well. Exploring the nature of the accelerated expansion of the large-scale universe involving generalized

  4. Developing Large-Scale Bayesian Networks by Composition

    Data.gov (United States)

    National Aeronautics and Space Administration — In this paper, we investigate the use of Bayesian networks to construct large-scale diagnostic systems. In particular, we consider the development of large-scale...

  5. Distributed large-scale dimensional metrology new insights

    CERN Document Server

    Franceschini, Fiorenzo; Maisano, Domenico

    2011-01-01

    Focuses on the latest insights into and challenges of distributed large-scale dimensional metrology. Enables practitioners to study distributed large-scale dimensional metrology independently. Includes specific examples of the development of new system prototypes.

  6. Micropolitics in Resistance: The Micropolitics of Large-Scale Natural Resource Extraction in South East Asia

    NARCIS (Netherlands)

    Rasch, E.D.; Kohne, F.M.

    2015-01-01

    This article analyzes Southeast Asian local communities’ resistance against the globalizing large-scale exploitation of natural resources using a micropolitical ecology approach. It focuses on how communities struggle for livelihoods, both resisting and appropriating globalized practices and narrati

  7. Large Scale Flame Spread Environmental Characterization Testing

    Science.gov (United States)

    Clayman, Lauren K.; Olson, Sandra L.; Gokoghi, Suleyman A.; Brooker, John E.; Ferkul, Paul V.; Kacher, Henry F.

    2013-01-01

    Under the Advanced Exploration Systems (AES) Spacecraft Fire Safety Demonstration Project (SFSDP), as a risk mitigation activity in support of the development of a large-scale fire demonstration experiment in microgravity, flame-spread tests were conducted in normal gravity on thin, cellulose-based fuels in a sealed chamber. The primary objective of the tests was to measure pressure rise in a chamber as sample material, burning direction (upward/downward), total heat release, heat release rate, and heat loss mechanisms were varied between tests. A Design of Experiments (DOE) method was applied to produce an array of tests from a fixed set of constraints, and a coupled response model was developed. Supplementary tests were run without experimental design to additionally vary select parameters such as initial chamber pressure. The starting chamber pressure for each test was set below atmospheric to prevent chamber overpressure. Bottom ignition, or upward propagating burns, produced rapid acceleratory turbulent flame spread. Pressure rise in the chamber increases as the amount of fuel burned increases, mainly because of the larger amount of heat generation and, to a much smaller extent, due to the increase in the number of moles of gas. Top ignition, or downward propagating burns, produced a steady flame spread with a very small flat flame across the burning edge. Steady-state pressure is achieved during downward flame spread as the pressure rises and plateaus. This indicates that the heat generation by the flame matches the heat loss to surroundings during the longer, slower downward burns. One heat loss mechanism included mounting a heat exchanger directly above the burning sample in the path of the plume to act as a heat sink and more efficiently dissipate the heat due to the combustion event. This proved an effective means for chamber overpressure mitigation for those tests producing the most total heat release and thus was determined to be a feasible mitigation

  8. Lightweight computational steering of very large scale molecular dynamics simulations

    Energy Technology Data Exchange (ETDEWEB)

    Beazley, D.M. [Univ. of Utah, Salt Lake City, UT (United States). Dept. of Computer Science; Lomdahl, P.S. [Los Alamos National Lab., NM (United States)

    1996-09-01

    We present a computational steering approach for controlling, analyzing, and visualizing very large scale molecular dynamics simulations involving tens to hundreds of millions of atoms. Our approach relies on extensible scripting languages and an easy to use tool for building extensions and modules. The system is extremely easy to modify, works with existing C code, is memory efficient, and can be used from inexpensive workstations and networks. We demonstrate how we have used this system to manipulate data from production MD simulations involving as many as 104 million atoms running on the CM-5 and Cray T3D. We also show how this approach can be used to build systems that integrate common scripting languages (including Tcl/Tk, Perl, and Python), simulation code, user extensions, and commercial data analysis packages.

  9. Participatory Design and the Challenges of Large-Scale Systems

    DEFF Research Database (Denmark)

    Simonsen, Jesper; Hertzum, Morten

    2008-01-01

    With its 10th biannual anniversary conference, Participatory Design (PD) is leaving its teens and must now be considered ready to join the adult world. In this article we encourage the PD community to think big: PD should engage in large-scale information-systems development and opt for a PD approach applied throughout design and organizational implementation. To pursue this aim we extend the iterative PD prototyping approach by (1) emphasizing PD experiments as transcending traditional prototyping by evaluating fully integrated systems exposed to real work practices; (2) incorporating improvisational change management including anticipated, emergent, and opportunity-based change; and (3) extending initial design and development into a sustained and ongoing stepwise implementation that constitutes an overall technology-driven organizational change. The extended approach is exemplified through...

  10. Halo detection via large-scale Bayesian inference

    Science.gov (United States)

    Merson, Alexander I.; Jasche, Jens; Abdalla, Filipe B.; Lahav, Ofer; Wandelt, Benjamin; Jones, D. Heath; Colless, Matthew

    2016-08-01

    We present a proof-of-concept of a novel and fully Bayesian methodology designed to detect haloes of different masses in cosmological observations subject to noise and systematic uncertainties. Our methodology combines the previously published Bayesian large-scale structure inference algorithm, HAmiltonian Density Estimation and Sampling algorithm (HADES), and a Bayesian chain rule (the Blackwell-Rao estimator), which we use to connect the inferred density field to the properties of dark matter haloes. To demonstrate the capability of our approach, we construct a realistic galaxy mock catalogue emulating the wide-area 6-degree Field Galaxy Survey, which has a median redshift of approximately 0.05. Application of HADES to the catalogue provides us with accurately inferred three-dimensional density fields and corresponding quantification of uncertainties inherent to any cosmological observation. We then use a cosmological simulation to relate the amplitude of the density field to the probability of detecting a halo with mass above a specified threshold. With this information, we can sum over the HADES density field realisations to construct maps of detection probabilities and demonstrate the validity of this approach within our mock scenario. We find that the probability of successful detection of haloes in the mock catalogue increases as a function of the signal to noise of the local galaxy observations. Our proposed methodology can easily be extended to account for more complex scientific questions and is a promising novel tool to analyse the cosmic large-scale structure in observations.
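
    The final step described above amounts to a Monte Carlo average over posterior density samples: a calibrated conditional probability P(halo | delta), fitted from a simulation, is evaluated on each inferred realisation and averaged. A schematic version (the logistic calibration and all array shapes are invented for illustration, not the paper's pipeline):

        import numpy as np

        def detection_probability_map(density_samples, p_halo_given_delta):
            """Average P(halo above threshold | delta) over posterior samples.

            density_samples    : (n_samples, nx, ny, nz) inferred density fields
            p_halo_given_delta : calibrated map delta -> probability,
                                 e.g. fitted from an N-body simulation
            """
            probs = p_halo_given_delta(density_samples)  # per-sample maps
            return probs.mean(axis=0)                    # marginalise over samples

        # hypothetical calibration: smooth logistic response to overdensity
        calib = lambda d: 1.0 / (1.0 + np.exp(-(d - 2.0)))
        samples = np.random.lognormal(0.0, 0.5, size=(100, 64, 64, 64)) - 1.0
        prob_map = detection_probability_map(samples, calib)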

  11. Synchronization of coupled large-scale Boolean networks

    Energy Technology Data Exchange (ETDEWEB)

    Li, Fangfei, E-mail: li-fangfei@163.com [Department of Mathematics, East China University of Science and Technology, No. 130, Meilong Road, Shanghai, Shanghai 200237 (China)

    2014-03-15

    This paper investigates the complete synchronization and partial synchronization of two large-scale Boolean networks. First, the aggregation algorithm towards large-scale Boolean network is reviewed. Second, the aggregation algorithm is applied to study the complete synchronization and partial synchronization of large-scale Boolean networks. Finally, an illustrative example is presented to show the efficiency of the proposed results.
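
    The aggregation machinery itself is beyond a short example, but the property being checked is easy to state in code. A brute-force check of complete synchronization for a small coupled pair, feasible only when the subnetworks are tiny (which is exactly why aggregation is needed at scale):

        from itertools import product

        def completely_synchronizes(fns_a, fns_b, n):
            """Brute-force test that networks A and B (n nodes each, updated
            synchronously from the joint state) reach identical states from
            every initial condition and stay identical on the attractor."""
            n_states = 2 ** (2 * n)
            for init in product((0, 1), repeat=2 * n):
                a, b = init[:n], init[n:]
                for _ in range(n_states):      # run past any transient
                    joint = a + b
                    a = tuple(f(joint) for f in fns_a)
                    b = tuple(f(joint) for f in fns_b)
                for _ in range(n_states):      # equality must persist on the cycle
                    if a != b:
                        return False
                    joint = a + b
                    a = tuple(f(joint) for f in fns_a)
                    b = tuple(f(joint) for f in fns_b)
            return True

        # drive-response example: both networks update from A's nodes only
        fa = [lambda s: s[1], lambda s: s[0] ^ s[1]]
        fb = [lambda s: s[1], lambda s: s[0] ^ s[1]]
        assert completely_synchronizes(fa, fb, n=2)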

  12. Synchronization of coupled large-scale Boolean networks

    Science.gov (United States)

    Li, Fangfei

    2014-03-01

    This paper investigates the complete synchronization and partial synchronization of two large-scale Boolean networks. First, the aggregation algorithm towards large-scale Boolean network is reviewed. Second, the aggregation algorithm is applied to study the complete synchronization and partial synchronization of large-scale Boolean networks. Finally, an illustrative example is presented to show the efficiency of the proposed results.

  13. Making Sense of "Race" in the History Classroom: A Literary Approach

    Science.gov (United States)

    Cruz, Barbara C.; Duplass, James A.

    2009-01-01

    Most students today are not able to distinguish between the notion of "race" as a social construct and the pernicious, false belief that "race" is a biological reality. The American Anthropological Association has adopted a policy that the word "race" should always be put in quotation marks because, as a social and not biological construct, there…

  14. Large scale dynamics of protoplanetary discs

    Science.gov (United States)

    Béthune, William

    2017-08-01

    Planets form in the gaseous and dusty disks orbiting young stars. These protoplanetary disks are dispersed in a few million years, being accreted onto the central star or evaporated into the interstellar medium. To explain the observed accretion rates, it is commonly assumed that matter is transported through the disk by turbulence, although the mechanism sustaining turbulence is uncertain. On the other side, irradiation by the central star could heat up the disk surface and trigger a photoevaporative wind, but thermal effects cannot account for the observed acceleration and collimation of the wind into a narrow jet perpendicular to the disk plane. Both issues can be solved if the disk is sensitive to magnetic fields. Weak fields lead to the magnetorotational instability, whose outcome is a state of sustained turbulence. Strong fields can slow down the disk, causing it to accrete while launching a collimated wind. However, the coupling between the disk and the neutral gas is done via electric charges, each of which is outnumbered by several billion neutral molecules. The imperfect coupling between the magnetic field and the neutral gas is described in terms of "non-ideal" effects, introducing new dynamical behaviors. This thesis is devoted to the transport processes happening inside weakly ionized and weakly magnetized accretion disks; the role of microphysical effects on the large-scale dynamics of the disk is of primary importance. As a first step, I exclude the wind and examine the impact of non-ideal effects on the turbulent properties near the disk midplane. I show that the flow can spontaneously organize itself if the ionization fraction is low enough; in this case, accretion is halted and the disk exhibits axisymmetric structures, with possible consequences on planetary formation. As a second step, I study the launching of disk winds via a global model of stratified disk embedded in a warm atmosphere. This model is the first to compute non-ideal effects from

  15. Large-Scale Spacecraft Fire Safety Tests

    Science.gov (United States)

    Urban, David; Ruff, Gary A.; Ferkul, Paul V.; Olson, Sandra; Fernandez-Pello, A. Carlos; T'ien, James S.; Torero, Jose L.; Cowlard, Adam J.; Rouvreau, Sebastien; Minster, Olivier; Toth, Balazs; Legros, Guillaume; Eigenbrod, Christian; Smirnov, Nickolay; Fujita, Osamu; Jomaas, Grunde

    2014-01-01

    An international collaborative program is underway to address open issues in spacecraft fire safety. Because of limited access to long-term low-gravity conditions and the small volume generally allotted for these experiments, there have been relatively few experiments that directly study spacecraft fire safety under low-gravity conditions. Furthermore, none of these experiments have studied sample sizes and environment conditions typical of those expected in a spacecraft fire. The major constraint has been the size of the sample, with prior experiments limited to samples of the order of 10 cm in length and width or smaller. This lack of experimental data forces spacecraft designers to base their designs and safety precautions on 1-g understanding of flame spread, fire detection, and suppression. However, low-gravity combustion research has demonstrated substantial differences in flame behavior in low gravity. This, combined with the differences caused by the confined spacecraft environment, necessitates practical scale spacecraft fire safety research to mitigate risks for future space missions. To address this issue, a large-scale spacecraft fire experiment is under development by NASA and an international team of investigators. This poster presents the objectives, status, and concept of this collaborative international project (Saffire). The project plan is to conduct fire safety experiments on three sequential flights of an unmanned ISS re-supply spacecraft (the Orbital Cygnus vehicle) after they have completed their delivery of cargo to the ISS and have begun their return journeys to Earth. On two flights (Saffire-1 and Saffire-3), the experiment will consist of a flame spread test involving a meter-scale sample ignited in the pressurized volume of the spacecraft and allowed to burn to completion while measurements are made. On one of the flights (Saffire-2), 9 smaller (5 x 30 cm) samples will be tested to evaluate NASA's material flammability screening tests

  16. Large-scale comparative visualisation of sets of multidimensional data

    CERN Document Server

    Vohl, Dany; Fluke, Christopher J; Poudel, Govinda; Georgiou-Karistianis, Nellie; Hassan, Amr H; Benovitski, Yuri; Wong, Tsz Ho; Kaluza, Owen; Nguyen, Toan D; Bonnington, C Paul

    2016-01-01

    We present encube - a qualitative, quantitative and comparative visualisation and analysis system, with application to high-resolution, immersive three-dimensional environments and desktop displays. encube extends previous comparative visualisation systems by considering: 1) the integration of comparative visualisation and analysis into a unified system; 2) the documentation of the discovery process; and 3) an approach that enables scientists to continue the research process once back at their desktop. Our solution enables tablets, smartphones or laptops to be used as interaction units for manipulating, organising, and querying data. We highlight the modularity of encube, allowing additional functionalities to be included as required. Additionally, our approach supports a high level of collaboration within the physical environment. We show how our implementation of encube operates in a large-scale, hybrid visualisation and supercomputing environment using the CAVE2 at Monash University, and on a local deskt...

  17. Bonus algorithm for large scale stochastic nonlinear programming problems

    CERN Document Server

    Diwekar, Urmila

    2015-01-01

    This book presents the details of the BONUS algorithm and its real world applications in areas like sensor placement in large scale drinking water networks, sensor placement in advanced power systems, water management in power systems, and capacity expansion of energy systems. A generalized method for stochastic nonlinear programming based on a sampling based approach for uncertainty analysis and statistical reweighting to obtain probability information is demonstrated in this book. Stochastic optimization problems are difficult to solve since they involve dealing with optimization and uncertainty loops. There are two fundamental approaches used to solve such problems. The first being the decomposition techniques and the second method identifies problem specific structures and transforms the problem into a deterministic nonlinear programming problem. These techniques have significant limitations on either the objective function type or the underlying distributions for the uncertain variables. Moreover, these ...

  18. Deep Feature Learning and Cascaded Classifier for Large Scale Data

    DEFF Research Database (Denmark)

    Prasoon, Adhish

    This thesis focuses on voxel/pixel classification based approaches for image segmentation. The main application is segmentation of articular cartilage in knee MRIs. The first major contribution of the thesis deals with large scale machine learning problems. Many medical imaging problems need huge amounts of training data to cover sufficient biological variability. Learning methods scaling badly with the number of training data points cannot be used in such scenarios. This may restrict the usage of many powerful classifiers having excellent generalization ability. We propose a cascaded classifier which... A further contribution deals with learning features autonomously from data rather than having a predefined feature set. We explore the deep learning approach of convolutional neural networks (CNN) for segmenting three dimensional medical images. We propose a novel system integrating three 2D CNNs, which have a one-to-one association with the xy, yz and zx planes of 3D...

  19. Maestro: an orchestration framework for large-scale WSN simulations.

    Science.gov (United States)

    Riliskis, Laurynas; Osipov, Evgeny

    2014-03-18

    Contemporary wireless sensor networks (WSNs) have evolved into large and complex systems and are one of the main technologies used in cyber-physical systems and the Internet of Things. Extensive research on WSNs has led to the development of diverse solutions at all levels of software architecture, including protocol stacks for communications. This multitude of solutions is due to the limited computational power and restrictions on energy consumption that must be accounted for when designing typical WSN systems. It is therefore challenging to develop, test and validate even small WSN applications, and this process can easily consume significant resources. Simulations are inexpensive tools for testing, verifying and generally experimenting with new technologies in a repeatable fashion. Consequently, as the size of the systems to be tested increases, so does the need for large-scale simulations. This article describes a tool called Maestro for the automation of large-scale simulation and investigates the feasibility of using cloud computing facilities for such a task. Using tools that are built into Maestro, we demonstrate a feasible approach for benchmarking cloud infrastructure in order to identify cloud Virtual Machine (VM) instances that provide an optimal balance of performance and cost for a given simulation.
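
    The benchmarking idea reduces to timing a fixed reference simulation on each candidate instance type and ranking by cost per completed run. A minimal sketch (the run_reference_sim hook and the price table are hypothetical stand-ins, not Maestro's API):

        import time

        def rank_vm_types(run_reference_sim, vm_types, price_per_hour):
            """Rank cloud VM types by the cost of one reference simulation.
            run_reference_sim(vm) is a hypothetical hook that executes a fixed
            WSN scenario on an instance of type vm and returns on completion."""
            cost = {}
            for vm in vm_types:
                t0 = time.perf_counter()
                run_reference_sim(vm)
                hours = (time.perf_counter() - t0) / 3600.0
                cost[vm] = hours * price_per_hour[vm]  # currency per run
            return sorted(cost.items(), key=lambda kv: kv[1])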

  20. Literature Review: Herbal Medicine Treatment after Large-Scale Disasters.

    Science.gov (United States)

    Takayama, Shin; Kaneko, Soichiro; Numata, Takehiro; Kamiya, Tetsuharu; Arita, Ryutaro; Saito, Natsumi; Kikuchi, Akiko; Ohsawa, Minoru; Kohayagawa, Yoshitaka; Ishii, Tadashi

    2017-09-27

    Large-scale natural disasters, such as earthquakes, tsunamis, volcanic eruptions, and typhoons, occur worldwide. After the Great East Japan earthquake and tsunami, our medical support operation's experiences suggested that traditional medicine might be useful for treating the various symptoms of the survivors. However, little information is available regarding herbal medicine treatment in such situations. Considering that further disasters will occur, we performed a literature review and summarized the traditional medicine approaches for treatment after large-scale disasters. We searched PubMed and Cochrane Library for articles written in English, and Ichushi for those written in Japanese. Articles published before 31 March 2016 were included. Keywords "disaster" and "herbal medicine" were used in our search. Among studies involving herbal medicine after a disaster, we found two randomized controlled trials investigating post-traumatic stress disorder (PTSD), three retrospective investigations of trauma or common diseases, and seven case series or case reports of dizziness, pain, and psychosomatic symptoms. In conclusion, herbal medicine has been used to treat trauma, PTSD, and other symptoms after disasters. However, few articles have been published, likely due to the difficulty in designing high quality studies in such situations. Further study will be needed to clarify the usefulness of herbal medicine after disasters.

  1. Large-scale Direct Targeting for Drug Repositioning and Discovery

    Science.gov (United States)

    Zheng, Chunli; Guo, Zihu; Huang, Chao; Wu, Ziyin; Li, Yan; Chen, Xuetong; Fu, Yingxue; Ru, Jinlong; Ali Shar, Piar; Wang, Yuan; Wang, Yonghua

    2015-01-01

    A system-level identification of drug-target direct interactions is vital to drug repositioning and discovery. However, the biological means on a large scale remain challenging and expensive even nowadays. The available computational models mainly focus on predicting indirect interactions or direct interactions on a small scale. To address these problems, in this work, a novel algorithm termed weighted ensemble similarity (WES) has been developed to identify drug direct targets based on a large-scale set of 98,327 drug-target relationships. WES includes: (1) identifying the key ligand structural features that are highly related to the pharmacological properties in a framework of ensemble; (2) determining a drug’s affiliation with a target by evaluation of the overall similarity (ensemble) rather than a single ligand judgment; and (3) integrating the standardized ensemble similarities (Z score) by Bayesian network and multi-variate kernel approach to make predictions. All these lead WES to predict drug direct targets with external and experimental test accuracies of 70% and 71%, respectively. This shows that the WES method provides a potential in silico model for drug repositioning and discovery. PMID:26155766
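
    Step (3) of WES standardises an ensemble similarity against a background distribution before the Bayesian combination. A simplified sketch of that standardisation (the background moments and the linear combination are illustrative stand-ins for the paper's Bayesian network and kernel integration):

        import numpy as np

        def ensemble_z(sims_to_known_ligands, bg_mean, bg_std):
            """Z score of a candidate's mean similarity to a target's known
            ligand ensemble, relative to a random-compound background."""
            s = float(np.mean(sims_to_known_ligands))
            return (s - bg_mean) / bg_std

        def combined_score(z_scores, weights):
            # illustrative weighted combination; WES itself feeds the Z scores
            # into a Bayesian network / multivariate kernel model
            z, w = np.asarray(z_scores), np.asarray(weights)
            return float(np.dot(w, z) / w.sum())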

  2. Evaluating large scale orthophotos derived from high resolution satellite imagery

    Science.gov (United States)

    Ioannou, Maria Teresa; Georgopoulos, Andreas

    2013-08-01

    For the purposes of a research project, for the compilation of the archaeological and environmental digital map of the island of Antiparos, the production of updated large scale orthophotos was required. Hence suitable stereoscopic high resolution satellite imagery was acquired. Two Geoeye-1 stereopairs were enough to cover this small island of the Cyclades complex in the central Aegean. For the orientation of the two stereopairs numerous ground control points were determined using GPS observations. Some of them would also serve as check points. The images were processed using commercial stereophotogrammetric software suitable to process satellite stereoscopic imagery. The results of the orientations are evaluated and the digital terrain model was produced using automated and manual procedures. The DTM was checked both internally and externally with comparison to other available DTMs. In this paper the procedures for producing the desired orthophotography are critically presented and the final result is compared and evaluated for its accuracy, completeness and efficiency. The final product is also compared against the orthophotography produced by Ktimatologio S.A. using aerial images in 2007. The orthophotography produced has been evaluated metrically using the available check points, while qualitative evaluation has also been performed. The results are presented and a critical approach for the usability of satellite imagery for the production of large scale orthophotos is attempted.

  3. Large Scale Emerging Properties from Non Hamiltonian Complex Systems

    Directory of Open Access Journals (Sweden)

    Marco Bianucci

    2017-06-01

    The concept of “large scale” depends obviously on the phenomenon we are interested in. For example, in the field of the foundation of Thermodynamics from microscopic dynamics, the spatial and time large scales are of the order of fractions of millimetres and microseconds, respectively, or less, and are defined in relation to the spatial and time scales of the microscopic systems. In large-scale oceanography or global climate dynamics problems the scales of interest are of the order of thousands of kilometres in space and many years in time, and are compared to the local and daily/monthly scales of atmosphere and ocean dynamics. In all these cases a Zwanzig projection approach is, at least in principle, an effective tool to obtain a class of universal smooth “large scale” dynamics for the few degrees of freedom of interest, starting from the complex dynamics of the whole (usually many-degrees-of-freedom) system. The projection approach leads to a very complex calculus with differential operators, which is drastically simplified when the basic dynamics of the system of interest is Hamiltonian, as happens in foundation of Thermodynamics problems. However, in geophysical fluid dynamics, biology, and in most physical problems the building-block fundamental equations of motion have a non-Hamiltonian structure. Thus, to continue to apply the useful projection approach also in these cases, we exploit the generalization of the Hamiltonian formalism given by the Lie algebra of dissipative differential operators. In this way, we are able to analytically deal with the series of differential operators stemming from the projection approach applied to these general cases. Then we apply this formalism to obtain some relevant results concerning the statistical properties of the El Niño Southern Oscillation (ENSO).

  4. A new system of labour management in African large-scale agriculture?

    DEFF Research Database (Denmark)

    Gibbon, Peter; Riisgaard, Lone

    2014-01-01

    This paper applies a convention theory (CT) approach to the analysis of labour management systems in African large-scale farming. The reconstruction of previous analyses of high-value crop production on large-scale farms in Africa in terms of CT suggests that, since 1980–95, labour management has moved from a ‘domestic’ to a ‘market’ system. However, data collected by the authors from a sample of 11 large-scale rose farms in Kenya in 2011 (covering around 20% of national output) points to the adoption of systems that, in CT terms, combine ‘industrial’ and ‘civic’ elements. The paper concludes...

  5. Design and implementation of a large-scale multi-class text classifier

    Institute of Scientific and Technical Information of China (English)

    YU Shui; ZHANG Liang; MA Fan-yuan

    2005-01-01

    Although researchers in the ATC field have done a wide range of work based on SVM, almost all existing approaches utilize an empirical model of selection algorithms. Their attempts to model automatic selection in practical, large-scale, text classification systems have been limited. In this paper, we propose a new model selection algorithm that utilizes the DDAG learning architecture. This architecture derives a new large-scale text classifier with very good performance. Experimental results show that the proposed algorithm has good efficiency and the necessary generalization capability while handling large-scale multi-class text classification tasks.
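
    For context, classification with a Decision Directed Acyclic Graph (DDAG) over one-vs-one SVMs needs only N-1 pairwise evaluations for N classes: each duel eliminates one candidate class. A generic sketch of the evaluation step (independent of the paper's selection algorithm):

        def ddag_predict(x, pairwise, classes):
            """Evaluate a DDAG of one-vs-one classifiers.
            pairwise[(a, b)](x) returns the winning class, a or b."""
            remaining = list(classes)
            while len(remaining) > 1:
                a, b = remaining[0], remaining[-1]
                duel = pairwise[(a, b)] if (a, b) in pairwise else pairwise[(b, a)]
                if duel(x) == a:
                    remaining.pop()       # b is eliminated
                else:
                    remaining.pop(0)      # a is eliminated
            return remaining[0]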

  6. Large-scale assembly of colloidal particles

    Science.gov (United States)

    Yang, Hongta

    This study reports a simple, roll-to-roll compatible coating technology for producing three-dimensional highly ordered colloidal crystal-polymer composites, colloidal crystals, and macroporous polymer membranes. A vertically beveled doctor blade is utilized to shear align silica microsphere-monomer suspensions to form large-area composites in a single step. The polymer matrix and the silica microspheres can be selectively removed to create colloidal crystals and self-standing macroporous polymer membranes. The thickness of the shear-aligned crystal is correlated with the viscosity of the colloidal suspension and the coating speed, and the correlations can be qualitatively explained by adapting the mechanisms developed for conventional doctor blade coating. Five important research topics related to the application of large-scale three-dimensional highly ordered macroporous films by doctor blade coating are covered in this study. The first topic describes the invention of large-area and low-cost color reflective displays. This invention is inspired by heat pipe technology. The self-standing macroporous polymer films exhibit brilliant colors which originate from the Bragg diffraction of visible light from the three-dimensional highly ordered air cavities. The colors can be easily changed by tuning the size of the air cavities to cover the whole visible spectrum. When the air cavities are filled with a solvent which has the same refractive index as that of the polymer, the macroporous polymer films become completely transparent due to the index matching. When the solvent trapped in the cavities is evaporated by in-situ heating, the sample color changes back to brilliant color. This process is highly reversible and reproducible for thousands of cycles. The second topic reports the achievement of rapid and reversible vapor detection by using 3-D macroporous photonic crystals. Capillary condensation of a condensable vapor in the interconnected macropores leads to the

  7. Racing Sampling Based Microimmune Optimization Approach Solving Constrained Expected Value Programming

    Directory of Open Access Journals (Sweden)

    Kai Yang

    2016-01-01

    This work investigates a bioinspired microimmune optimization algorithm to solve a general kind of single-objective nonlinear constrained expected value programming without any prior distribution. In the study of the algorithm, two lower bound sample estimates of random variables are theoretically developed to estimate the empirical values of individuals. Two adaptive racing sampling schemes are designed to identify the competitive individuals in a given population, by which high-quality individuals can obtain a large sampling size. An immune evolutionary mechanism, along with a local search approach, is constructed to evolve the current population. Comparative experiments have shown that the proposed algorithm can effectively solve higher-dimensional benchmark problems and has potential for further applications.
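
    The racing idea is that statistically uncompetitive individuals stop receiving samples, so the evaluation budget concentrates on promising ones. A generic racing loop with Hoeffding-style bounds (a stand-in for the paper's two adaptive schemes, whose exact bounds differ):

        import math

        def race(candidates, sample_value, n0=10, max_n=1000, delta=0.05, span=1.0):
            """Keep sampling only candidates whose confidence interval still
            overlaps the incumbent best (minimisation); sample_value(c) draws
            one noisy evaluation of candidate c with value range `span`."""
            stats = {c: [sample_value(c) for _ in range(n0)] for c in candidates}
            alive = set(candidates)
            mean = lambda c: sum(stats[c]) / len(stats[c])
            bound = lambda c: span * math.sqrt(math.log(2 / delta) / (2 * len(stats[c])))
            while len(alive) > 1 and all(len(stats[c]) < max_n for c in alive):
                best = min(alive, key=mean)
                alive = {c for c in alive
                         if mean(c) - bound(c) <= mean(best) + bound(best)}
                for c in alive:              # survivors earn a larger sample size
                    stats[c].append(sample_value(c))
            return min(alive, key=mean)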

  8. Thermal power generation projects ``Large Scale Solar Heating``; EU-Thermie-Projekte ``Large Scale Solar Heating``

    Energy Technology Data Exchange (ETDEWEB)

    Kuebler, R.; Fisch, M.N. [Steinbeis-Transferzentrum Energie-, Gebaeude- und Solartechnik, Stuttgart (Germany)

    1998-12-31

    The aim of this project is the preparation of the "Large Scale Solar Heating" programme for a Europe-wide development of this technology. The demonstration programme that followed was judged well by the experts but was not immediately (1996) accepted for financial subsidies. In November 1997 the EU commission provided 1.5 million ECU, which allowed the realisation of an updated project proposal. By mid-1997 a smaller project had already been approved, requested under the lead of Chalmers Industriteknik (CIT) in Sweden and carried out mainly for the transfer of technology. (orig.)

  9. Hybrid constraint programming and metaheuristic methods for large scale optimization problems

    OpenAIRE

    2011-01-01

    This work presents hybrid Constraint Programming (CP) and metaheuristic methods for the solution of Large Scale Optimization Problems; it aims at integrating concepts and mechanisms from the metaheuristic methods into a CP-based tree search environment in order to exploit the advantages of both approaches. The modeling and solution of large scale combinatorial optimization problems is a topic which has aroused the interest of many researchers in the Operations Research field; combinatori...

  10. Feature Hashing for Large Scale Multitask Learning

    CERN Document Server

    Weinberger, Kilian; Attenberg, Josh; Langford, John; Smola, Alex

    2009-01-01

    Empirical evidence suggests that hashing is an effective strategy for dimensionality reduction and practical nonparametric estimation. In this paper we provide exponential tail bounds for feature hashing and show that the interaction between random subspaces is negligible with high probability. We demonstrate the feasibility of this approach with experimental results for a new use case -- multitask learning with hundreds of thousands of tasks.
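
    The core construction is compact enough to state directly: features are mapped into a fixed-dimensional vector by one hash for the index and one for the sign, and the multitask use case hashes (task, feature) pairs into the same space. A sketch (using Python's built-in hash, which is salted per process; a stable hash such as hashlib would be preferable for persistent models):

        import numpy as np

        def hash_features(token_counts, dim=2 ** 20, seed=0):
            """Hashing trick: hashed index plus a pseudo-random sign, so
            collisions cancel in expectation and inner products stay unbiased."""
            x = np.zeros(dim)
            for token, count in token_counts.items():
                h = hash((seed, token))
                sign = 1.0 if (h >> 31) & 1 == 0 else -1.0
                x[h % dim] += sign * count
            return x

        def multitask_features(task_id, token_counts, dim=2 ** 20):
            # shared term plus a personalised term hashed from (task, token)
            shared = hash_features(token_counts, dim)
            personal = hash_features({(task_id, t): c for t, c in token_counts.items()}, dim)
            return shared + personal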

  11. Risk management in large scale underground infrastructures

    NARCIS (Netherlands)

    Helmholt, K.A.; Courage, W.M.G.

    2013-01-01

    Underground infrastructures can fail due to ground movements. Due to the underground nature this is difficult to detect above ground. In a collaboration of multiple research institutes a new approach has been developed to estimate the probability of failure using underground position sensors. A Proo

  12. Solving large scale traveling salesman problems by chaotic neurodynamics.

    Science.gov (United States)

    Hasegawa, Mikio; Ikeguch, Tohru; Aihara, Kazuyuki

    2002-03-01

    We propose a novel approach for solving large scale traveling salesman problems (TSPs) by chaotic dynamics. First, we realize the tabu search on a neural network, by utilizing the refractory effects as the tabu effects. Then, we extend it to a chaotic neural network version. We propose two types of chaotic searching methods, which are based on two different tabu searches. While the first one requires neurons of the order of n^2 for an n-city TSP, the second one requires only n neurons. Moreover, an automatic parameter tuning method of our chaotic neural network is presented for easy application to various problems. Finally, we show that our method with n neurons is applicable to large TSPs such as an 85,900-city problem and exhibits better performance than the conventional stochastic searches and the tabu searches.
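
    The chaotic neurodynamics itself is beyond a few lines, but the tabu search it perturbs is easy to sketch: recently used 2-opt moves become "refractory" for a fixed tenure, which is the role the refractory neurons play above. A plain (non-chaotic) baseline for comparison:

        import random

        def tour_length(tour, dist):
            n = len(tour)
            return sum(dist[tour[i]][tour[(i + 1) % n]] for i in range(n))

        def tabu_2opt(dist, n_iters=5000, tenure=25, seed=1):
            """Tabu search over random 2-opt moves; a reversed city pair stays
            tabu for `tenure` iterations unless it improves the best tour."""
            rng = random.Random(seed)
            n = len(dist)
            tour = list(range(n)); rng.shuffle(tour)
            best, best_len = tour[:], tour_length(tour, dist)
            tabu = {}
            for it in range(n_iters):
                i, j = sorted(rng.sample(range(n), 2))
                key = (min(tour[i], tour[j]), max(tour[i], tour[j]))
                cand = tour[:i] + tour[i:j + 1][::-1] + tour[j + 1:]
                cand_len = tour_length(cand, dist)
                if tabu.get(key, -1) >= it and cand_len >= best_len:
                    continue                   # move is tabu and no aspiration
                tour, tabu[key] = cand, it + tenure
                if cand_len < best_len:
                    best, best_len = cand[:], cand_len
            return best, best_len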

  13. Risk Management Challenges in Large-scale Energy PSS

    DEFF Research Database (Denmark)

    Tegeltija, Miroslava; Oehmen, Josef; Kozin, Igor

    2017-01-01

    Probabilistic risk management approaches have a long tradition in engineering. A large variety of tools and techniques based on the probabilistic view of risk is available and applied in PSS practice. However, uncertainties that arise due to lack of knowledge and information are still missing adequate representations. We focus on a large-scale energy company in Denmark as one case of current product/service-systems risk management best practices. We analyze their risk management process and investigate the tools they use in order to support decision making processes within the company. First, we identify the following challenges in the current risk management practices that are in line with the literature: (1) current methods are not appropriate for situations dominated by weak knowledge and information; (2) the quality of traditional models in such situations is open to debate; (3) the quality of input

  14. Including investment risk in large-scale power market models

    DEFF Research Database (Denmark)

    Lemming, Jørgen Kjærgaard; Meibom, P.

    2003-01-01

    Long-term energy market models can be used to examine investments in production technologies; however, with market liberalisation it is crucial that such models include investment risks and investor behaviour. This paper analyses how the effect of investment risk on production technology selection can be included in large-scale partial equilibrium models of the power market. The analyses are divided into a part about risk measures appropriate for power market investors and a more technical part about the combination of a risk-adjustment model and a partial-equilibrium model. To illustrate the analyses quantitatively, a framework based on an iterative interaction between the equilibrium model and a separate risk-adjustment module was constructed. To illustrate the features of the proposed modelling approach we examined how uncertainty in demand and variable costs affects the optimal choice...

  15. Unfolding large-scale online collaborative human dynamics

    CERN Document Server

    Zha, Yilong; Zhou, Changsong

    2015-01-01

    Large-scale interacting human activities underlie all social and economic phenomena, but quantitative understanding of regular patterns and mechanism is very challenging and still rare. Self-organized online collaborative activities with precise record of event timing provide unprecedented opportunity. Our empirical analysis of the history of millions of updates in Wikipedia shows a universal double power-law distribution of time intervals between consecutive updates of an article. We then propose a generic model to unfold collaborative human activities into three modules: (i) individual behavior characterized by Poissonian initiation of an action, (ii) human interaction captured by a cascading response to others with a power-law waiting time, and (iii) population growth due to increasing number of interacting individuals. This unfolding allows us to obtain analytical formula that is fully supported by the universal patterns in empirical data. Our modeling approaches reveal "simplicity" beyond complex interac...
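
    The three modules translate directly into a toy generator: Poissonian initiations whose rate grows with population, each followed by a cascade of responses with power-law waiting times. A sketch with invented parameter values (not the paper's fitted ones):

        import random

        def simulate_updates(t_max=1e4, base_rate=0.05, p_resp=0.6,
                             alpha=2.5, growth=1e-4, seed=0):
            """Toy three-module model: (i) Poisson initiation, (ii) cascading
            responses with Pareto waiting times (density ~ w**-alpha, w >= 1),
            (iii) linearly growing population of participants."""
            rng = random.Random(seed)
            events, t = [], 0.0
            while t < t_max:
                rate = base_rate * (1.0 + growth * t)   # growing population
                t += rng.expovariate(rate)              # next initiation
                events.append(t)
                s = t
                while rng.random() < p_resp:            # response cascade
                    s += (1.0 - rng.random()) ** (-1.0 / (alpha - 1.0))
                    events.append(s)
            return sorted(e for e in events if e <= t_max)

        events = simulate_updates()
        gaps = [b - a for a, b in zip(events, events[1:])]  # inter-event times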

  16. Large-scale Cosmic Flows from Cosmicflows-2 Catalog

    CERN Document Server

    Watkins, Richard

    2014-01-01

    We calculate the large-scale bulk flow from the Cosmicflows-2 peculiar velocity catalog (Tully et al. 2013) using the minimum variance method introduced in Watkins et al. (2009). We find a bulk flow of 262 +/- 60 km/sec on a scale of 100 Mpc/h, a result somewhat smaller than that found from the COMPOSITE catalog introduced in Watkins et al. (2009), a compendium of peculiar velocity data that has many objects in common with the Cosmicflows-2 sample. We find that distances are systematically larger in the Cosmicflows-2 catalog for objects in common due to a different approach to bias correction, and that this explains the difference in the bulk flows derived from the two catalogs. The bulk flow result from the Cosmicflows-2 survey is consistent with expectations from LCDM, and thus this catalog potentially resolves an important challenge to the standard cosmological model.
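
    For orientation, the simplest bulk flow estimator (maximum likelihood rather than the minimum variance weighting used above, which adds weights matched to an ideal survey window) solves a 3x3 linear system built from the radial peculiar velocities:

        import numpy as np

        def ml_bulk_flow(rhat, v_rad, sigma):
            """Maximum-likelihood bulk flow B minimising
            sum_i (v_i - B . rhat_i)**2 / sigma_i**2.
            rhat: (N, 3) unit position vectors; v_rad, sigma: (N,) arrays."""
            w = 1.0 / np.asarray(sigma) ** 2
            A = np.einsum('i,ij,ik->jk', w, rhat, rhat)  # 3x3 normal matrix
            b = np.einsum('i,i,ij->j', w, v_rad, rhat)
            return np.linalg.solve(A, b)                 # km/s components of B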

  17. SOLVING TRUST REGION PROBLEM IN LARGE SCALE OPTIMIZATION

    Institute of Scientific and Technical Information of China (English)

    Bing-sheng He

    2000-01-01

    This paper presents a new method for solving the basic problem in the “model trust region” approach to large scale minimization: compute a vector x such that (1/2) x^T H x + c^T x = min, subject to the constraint ||x||_2 <= a. The method is a combination of the CG method and a projection and contraction (PC) method. The first (CG) method with x_0 = 0 as the starting point either directly offers a solution of the problem or, as soon as the norm of the iterate becomes greater than a, it gives a suitable starting point and a favourable choice of a crucial scaling parameter in the second (PC) method. Some numerical examples are given, which indicate that the method is applicable.
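
    For comparison, the widely used Steihaug-Toint truncated CG attacks the same subproblem and likewise starts from x_0 = 0, stopping at the boundary when the iterate leaves the ball (a related method, not the paper's CG + PC combination):

        import numpy as np

        def steihaug_cg(H, c, a, tol=1e-8, max_iter=None):
            """Truncated CG for min 0.5 x^T H x + c^T x  s.t. ||x||_2 <= a."""
            n = len(c)
            x = np.zeros(n)
            r = -np.asarray(c, dtype=float)     # residual = -gradient at x0 = 0
            if np.linalg.norm(r) < tol:
                return x
            d = r.copy()
            for _ in range(max_iter or 2 * n):
                Hd = H @ d
                dHd = d @ Hd
                if dHd <= 0:                    # negative curvature: hit boundary
                    return x + _tau(x, d, a) * d
                alpha = (r @ r) / dHd
                if np.linalg.norm(x + alpha * d) >= a:
                    return x + _tau(x, d, a) * d
                x = x + alpha * d
                r_new = r - alpha * Hd
                if np.linalg.norm(r_new) < tol:
                    return x                    # interior solution
                d = r_new + ((r_new @ r_new) / (r @ r)) * d
                r = r_new
            return x

        def _tau(x, d, a):
            # positive root of ||x + tau * d||_2 = a
            dd, xd, xx = d @ d, x @ d, x @ x
            return (-xd + np.sqrt(xd ** 2 + dd * (a ** 2 - xx))) / dd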

  18. Statistics of Caustics in Large-Scale Structure Formation

    Science.gov (United States)

    Feldbrugge, Job L.; Hidding, Johan; van de Weygaert, Rien

    2016-10-01

    The cosmic web is a complex spatial pattern of walls, filaments, cluster nodes and underdense void regions. It emerged through gravitational amplification from the Gaussian primordial density field. Here we infer analytical expressions for the spatial statistics of caustics in the evolving large-scale mass distribution. In our analysis, following the quasi-linear Zel'dovich formalism and confined to the 1D and 2D situation, we compute number density and correlation properties of caustics in cosmic density fields that evolve from Gaussian primordial conditions. The analysis can be straightforwardly extended to the 3D situation. We moreover, are currently extending the approach to the non-linear regime of structure formation by including higher order Lagrangian approximations and Lagrangian effective field theory.
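
    For context, the Zel'dovich setting used above maps Lagrangian positions q to Eulerian positions x via the growth factor and a displacement field (standard formulas, added here only for orientation):

        \mathbf{x}(\mathbf{q},t) = \mathbf{q} + D_+(t)\,\boldsymbol{\psi}(\mathbf{q}),
        \qquad
        \det\!\left[\frac{\partial x_i}{\partial q_j}\right]
          = \prod_k \bigl(1 + D_+(t)\,\lambda_k(\mathbf{q})\bigr) = 0,

    where the lambda_k are the eigenvalues of the deformation tensor dpsi_i/dq_j; caustics form where the Jacobian first vanishes, i.e. where 1 + D_+ lambda_k = 0 along the fastest-collapsing eigendirection.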

  19. High pressure sheet metal forming of large scale body structures

    Energy Technology Data Exchange (ETDEWEB)

    Trompeter, M.; Krux, R.; Homberg, W.; Kleiner, M. [Dortmund Univ. (Germany). Inst. of Forming Technology and Lightweight Construction

    2005-07-01

    An important trend in the automotive industry is the weight reduction of car bodies by lightweight construction. One approach to realise lightweight structures is the use of load optimised sheet metal parts (e.g. tailored blanks), especially for crash relevant car body structures. To form such parts which are mostly complex and primarily made of high strength steels, the use of working media based forming processes is favorable. The paper presents the manufacturing of a large scale structural component made of tailor rolled blanks (TRB) by high pressure sheet metal forming (HBU). The paper focuses mainly on the tooling system, which is integrated into a specific 100 MN hydroform press at the IUL. The HBU tool basically consists of a multipoint blankholder, a specially designed flange draw-in sensor, which is necessary to determine the material flow, and a sealing system. Furthermore, the paper presents a strategy for an effective closed loop flange draw-in control. (orig.)

  20. Statistics of Caustics in Large-Scale Structure Formation

    CERN Document Server

    Feldbrugge, Job; van de Weygaert, Rien

    2014-01-01

    The cosmic web is a complex spatial pattern of walls, filaments, cluster nodes and underdense void regions. It emerged through gravitational amplification from the Gaussian primordial density field. Here we infer analytical expressions for the spatial statistics of caustics in the evolving large-scale mass distribution. In our analysis, following the quasi-linear Zeldovich formalism and confined to the 1D and 2D situation, we compute number density and correlation properties of caustics in cosmic density fields that evolve from Gaussian primordial conditions. The analysis can be straightforwardly extended to the 3D situation. We moreover, are currently extending the approach to the non-linear regime of structure formation by including higher order Lagrangian approximations and Lagrangian effective field theory.

  1. Automatic Installation and Configuration for Large Scale Farms

    CERN Document Server

    Novák, J

    2005-01-01

    Since the early appearance of commodity hardware, the utilization of computers rose rapidly, and they became essential in all areas of life. Soon it was realized that nodes are able to work cooperatively, in order to solve new, more complex tasks. This conception materialized in coherent aggregations of computers called farms and clusters. Collective application of nodes, being efficient and economical, was adopted in education, research and industry before long. But maintenance, especially at large scale, appeared as a problem to be resolved. New challenges needed new methods and tools. Development work has been started to build farm management applications and frameworks. In the first part of the thesis, these systems are introduced. After a general description of the matter, a comparative analysis of different approaches and tools illustrates the practical aspects of the theoretical discussion. CERN, the European Organization for Nuclear Research, is the largest Particle Physics laboratory in the world....

  2. Applications of large-scale density functional theory in biology

    Science.gov (United States)

    Cole, Daniel J.; Hine, Nicholas D. M.

    2016-10-01

    Density functional theory (DFT) has become a routine tool for the computation of electronic structure in the physics, materials and chemistry fields. Yet the application of traditional DFT to problems in the biological sciences is hindered, to a large extent, by the unfavourable scaling of the computational effort with system size. Here, we review some of the major software and functionality advances that enable insightful electronic structure calculations to be performed on systems comprising many thousands of atoms. We describe some of the early applications of large-scale DFT to the computation of the electronic properties and structure of biomolecules, as well as to paradigmatic problems in enzymology, metalloproteins, photosynthesis and computer-aided drug design. With this review, we hope to demonstrate that first-principles modelling of biological structure-function relationships is approaching reality.

  3. Deep Feature Learning and Cascaded Classifier for Large Scale Data

    DEFF Research Database (Denmark)

    Prasoon, Adhish

    This thesis focuses on voxel/pixel classification based approaches for image segmentation. The main application is segmentation of articular cartilage in knee MRIs. The first major contribution of the thesis deals with large scale machine learning problems. Many medical imaging problems need huge... to a state-of-the-art method for cartilage segmentation using a one-stage nearest neighbour classifier. Our method achieved better results than the state-of-the-art method for tibial as well as femoral cartilage segmentation. The next main contribution of the thesis deals with learning features autonomously... image, respectively, and this system is referred to as the triplanar convolutional neural network in the thesis. We applied the triplanar CNN for segmenting articular cartilage in knee MRI and compared its performance with the same state-of-the-art method which was used as a benchmark for the cascaded classifier

  4. Large-scale quantum networks based on graphs

    Science.gov (United States)

    Epping, Michael; Kampermann, Hermann; Bruß, Dagmar

    2016-05-01

    Society relies and depends increasingly on information exchange and communication. In the quantum world, security and privacy is a built-in feature for information processing. The essential ingredient for exploiting these quantum advantages is the resource of entanglement, which can be shared between two or more parties. The distribution of entanglement over large distances constitutes a key challenge for current research and development. Due to losses of the transmitted quantum particles, which typically scale exponentially with the distance, intermediate quantum repeater stations are needed. Here we show how to generalise the quantum repeater concept to the multipartite case, by describing large-scale quantum networks, i.e. network nodes and their long-distance links, consistently in the language of graphs and graph states. This unifying approach comprises both the distribution of multipartite entanglement across the network, and the protection against errors via encoding. The correspondence to graph states also provides a tool for optimising the architecture of quantum networks.
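
    For reference, the graph-state formalism underlying this approach associates to a graph G = (V, E) the stabilizer state (standard definition, not specific to this paper)

        |G\rangle \;=\; \prod_{(a,b)\in E} CZ_{ab}\, |+\rangle^{\otimes |V|},

    i.e. one qubit per vertex prepared in |+> and one controlled-Z per edge; network nodes and long-distance links then map onto the vertices and edges of G.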

  5. Harvesting Collective Trend Observations from Large Scale Study Trips

    DEFF Research Database (Denmark)

    Eriksen, Kaare; Ovesen, Nis

    2014-01-01

    To enhance industrial design students’ decoding and understanding of the technological possibilities and the diversity of needs and preferences in different cultures, it is not unusual to arrange study trips where such students acquire a broader view to strengthen their professional skills and approach, hence linking the design education and the design culture of the surrounding world. To improve the professional learning it is useful, though, to facilitate and organize the trips in a way that involves systematic data collection and reporting. This paper presents a method for facilitating study trips for engineering students in architecture & design and the results from crowd-collecting a large amount of trend observations, as well as the derived experience from using the method on a large scale study trip. The method has been developed and formalized in relation to study trips with large....

  6. Development of large-scale structure in the Universe

    CERN Document Server

    Ostriker, J P

    1991-01-01

    This volume grew out of the 1988 Fermi lectures given by Professor Ostriker, and is concerned with cosmological models that take into account the large scale structure of the universe. He starts with homogeneous isotropic models of the universe and then, by considering perturbations, he leads us to modern cosmological theories of the large scale, such as superconducting strings. This will be an excellent companion for all those interested in the cosmology and the large scale nature of the universe.

  7. On soft limits of large-scale structure correlation functions

    Energy Technology Data Exchange (ETDEWEB)

    Sagunski, Laura

    2016-08-15

    Large-scale structure surveys have the potential to become the leading probe for precision cosmology in the next decade. To extract valuable information on the cosmological evolution of the Universe from the observational data, it is of major importance to derive accurate theoretical predictions for the statistical large-scale structure observables, such as the power spectrum and the bispectrum of (dark) matter density perturbations. Hence, one of the greatest challenges of modern cosmology is to theoretically understand the non-linear dynamics of large-scale structure formation in the Universe from first principles. While analytic approaches to describe the large-scale structure formation are usually based on the framework of non-relativistic cosmological perturbation theory, we pursue another road in this thesis and develop methods to derive generic, non-perturbative statements about large-scale structure correlation functions. We study unequal- and equal-time correlation functions of density and velocity perturbations in the limit where one of their wavenumbers becomes small, that is, in the soft limit. In the soft limit, it is possible to link (N+1)-point and N-point correlation functions to non-perturbative 'consistency conditions'. These provide in turn a powerful tool to test fundamental aspects of the underlying theory at hand. In this work, we first rederive the (resummed) consistency conditions at unequal times by using the so-called eikonal approximation. The main appeal of the unequal-time consistency conditions is that they are solely based on symmetry arguments and thus are universal. Proceeding from this, we direct our attention to consistency conditions at equal times, which, on the other hand, depend on the interplay between soft and hard modes. We explore the existence and validity of equal-time consistency conditions within and beyond perturbation theory. For this purpose, we investigate the predictions for the soft limit of the

  8. Large-Scale Collective Entity Matching

    CERN Document Server

    Rastogi, Vibhor; Garofalakis, Minos

    2011-01-01

    There have been several recent advancements in the Machine Learning community on the Entity Matching (EM) problem. However, their lack of scalability has prevented them from being applied in practical settings on large real-life datasets. Towards this end, we propose a principled framework to scale any generic EM algorithm. Our technique consists of running multiple instances of the EM algorithm on small neighborhoods of the data and passing messages across neighborhoods to construct a global solution. We prove formal properties of our framework and experimentally demonstrate the effectiveness of our approach in scaling EM algorithms.
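
    A drastically simplified version of the framework can be phrased in a few lines: run the expensive EM matcher only inside small neighborhoods (e.g. blocks from cheap blocking keys) and stitch the local decisions into a global solution, with records shared by overlapping neighborhoods playing the role of messages. The union-find stitching below is a transitive-closure simplification of the paper's message passing, not its actual protocol:

        from itertools import combinations

        class DSU:
            def __init__(self):
                self.p = {}
            def find(self, x):
                self.p.setdefault(x, x)
                while self.p[x] != x:
                    self.p[x] = self.p[self.p[x]]   # path halving
                    x = self.p[x]
                return x
            def union(self, a, b):
                self.p[self.find(a)] = self.find(b)

        def scale_em(records, neighborhoods, em_match):
            """records: id -> record; neighborhoods: iterable of id lists;
            em_match: any generic pairwise EM decision function being scaled."""
            dsu = DSU()
            for hood in neighborhoods:
                for a, b in combinations(hood, 2):
                    if em_match(records[a], records[b]):
                        dsu.union(a, b)
            clusters = {}
            for rid in records:
                clusters.setdefault(dsu.find(rid), []).append(rid)
            return list(clusters.values())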

  9. Dynamic Reactive Power Compensation of Large Scale Wind Integrated Power System

    DEFF Research Database (Denmark)

    Rather, Zakir Hussain; Chen, Zhe; Thøgersen, Paul

    2015-01-01

    Due to progressive displacement of conventional power plants by wind turbines, dynamic security of large scale wind integrated power systems gets significantly compromised. In this paper we first highlight the importance of dynamic reactive power support/voltage security in large scale wind...... integrated power systems with least presence of conventional power plants. Then we propose a mixed integer dynamic optimization based method for optimal dynamic reactive power allocation in large scale wind integrated power systems. One of the important aspects of the proposed methodology is that unlike...... static optimal power flow based approaches, the proposed method considers detailed system dynamics and wind turbine grid code fulfilment while optimizing the allocation of dynamic reactive power sources. We also argue that in large scale wind integrated power systems, i) better utilization of existing...

  10. The predictability of large-scale wind-driven flows

    Directory of Open Access Journals (Sweden)

    A. Mahadevan

    2001-01-01

    content is usually relatively low and the predictability of the wind-driven circulation is determined by the large-scale structure of the flow. In the more realistic, strongly chaotic regime, in which energetic mesoscale eddies are produced by the meandering of the separated western boundary current extension, the predictability of the flow locally tends to be a stronger function of the local mesoscale eddy structure than of the larger scale structure of the circulation. This has a broader implication for the effectiveness of different approaches to forecasting the ocean with models which sequentially assimilate new observations.

  11. Economic and agricultural transformation through large-scale farming : impacts of large-scale farming on local economic development, household food security and the environment in Ethiopia

    NARCIS (Netherlands)

    Bekele, M.S.

    2016-01-01

    This study examined impacts of large-scale farming in Ethiopia on local economic development, household food security, incomes, employment, and the environment. The study adopted a mixed research approach in which both qualitative and quantitative data were generated from secondary and primary sources.

  12. Large Scale Asset Extraction for Urban Images

    KAUST Repository

    Affara, Lama

    2016-09-16

    Object proposals are currently used for increasing the computational efficiency of object detection. We propose a novel adaptive pipeline for interleaving object proposals with object classification and use it as a formulation for asset detection. We first preprocess the images using a novel and efficient rectification technique. We then employ a particle filter approach to keep track of three priors, which guide proposed samples and get updated using classifier output. Tests performed on over 1000 urban images demonstrate that our rectification method is faster than existing methods without loss in quality, and that our interleaved proposal method outperforms current state-of-the-art. We further demonstrate that other methods can be improved by incorporating our interleaved proposals. © Springer International Publishing AG 2016.

  13. Large-Scale Analysis of Relationships between Mineral Dust, Ice Cloud Properties, and Precipitation from Satellite Observations Using a Bayesian Approach: Theoretical Basis and First Results for the Tropical Atlantic Ocean

    Directory of Open Access Journals (Sweden)

    Lars Klüser

    2017-01-01

    Full Text Available Mineral dust and ice cloud observations from the Infrared Atmospheric Sounding Interferometer (IASI) are used to assess the relationships between desert dust aerosols and ice clouds over the tropical Atlantic Ocean during the hurricane season 2008. Cloud property histograms are first adjusted for varying cloud top temperature or ice water path distributions with a Bayesian approach to account for meteorological constraints on the cloud variables. Then, histogram differences between dust load classes are used to describe the impact of dust load on cloud property statistics. The analysis of the histogram differences shows that ice crystal sizes are reduced with increasing aerosol load, while ice cloud optical depth and ice water path are increased. The distributions of all three variables broaden and get less skewed in dusty environments. For ice crystal size the significant bimodality is reduced and the order of peaks is reversed. Moreover, it is shown that the distributions of ice cloud variables are not simply shifted linearly; their variance, skewness, and complexity are also significantly affected. This implies that the whole cloud variable distributions have to be considered for indirect aerosol effects in any application to climate modelling.

  14. Electrodialysis system for large-scale enantiomer separation

    NARCIS (Netherlands)

    Ent, van der E.M.; Thielen, T.P.H.; Cohen Stuart, M.A.; Padt, van der A.; Keurentjes, J.T.F.

    2001-01-01

    In contrast to analytical methods, the range of technologies currently applied for large-scale enantiomer separations is not very extensive. Therefore, a new system has been developed for large-scale enantiomer separations that can be regarded as the scale-up of a capillary electrophoresis system.

  16. Prospects for large scale electricity storage in Denmark

    DEFF Research Database (Denmark)

    Krog Ekman, Claus; Jensen, Søren Højgaard

    2010-01-01

    In a future power system with additional wind power capacity there will be an increased need for large scale power management as well as reliable balancing and reserve capabilities. Different technologies for large scale electricity storage provide solutions to the different challenges arising w...

  17. Large Scale Implementations for Twitter Sentiment Classification

    Directory of Open Access Journals (Sweden)

    Andreas Kanavos

    2017-03-01

    Full Text Available Sentiment Analysis on Twitter Data is indeed a challenging problem due to the nature, diversity and volume of the data. People tend to express their feelings freely, which makes Twitter an ideal source for accumulating a vast amount of opinions towards a wide spectrum of topics. This amount of information offers huge potential and can be harnessed to extract the sentiment tendency towards these topics. However, since no one can invest an infinite amount of time to read through these tweets, an automated decision making approach is necessary. Nevertheless, most existing solutions are limited to centralized environments. Thus, they can only process at most a few thousand tweets. Such a sample is not representative for defining the sentiment polarity towards a topic, given the massive number of tweets published daily. In this work, we develop two systems: the first in MapReduce and the second in the Apache Spark framework for programming with Big Data. The algorithm exploits all hashtags and emoticons inside a tweet as sentiment labels, and proceeds to a classification of diverse sentiment types in a parallel and distributed manner. Moreover, the sentiment analysis tool is based on Machine Learning methodologies alongside Natural Language Processing techniques and utilizes Apache Spark's Machine Learning library, MLlib. In order to address the nature of Big Data, we introduce some pre-processing steps for achieving better results in Sentiment Analysis as well as Bloom filters to compact the storage size of intermediate data and boost the performance of our algorithm. Finally, the proposed system was trained and validated with real data crawled from Twitter, and, through an extensive experimental evaluation, we prove that our solution is efficient, robust and scalable while confirming the quality of our sentiment identification.
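A minimal PySpark sketch of the distant-supervision idea in the abstract (emoticons as labels); the file name, schema, and two-emoticon rule are illustrative assumptions, not the paper's pipeline:

```python
# Hedged sketch: label tweets by emoticons, then train an MLlib classifier.
from pyspark.sql import SparkSession, functions as F
from pyspark.ml import Pipeline
from pyspark.ml.feature import Tokenizer, HashingTF, IDF
from pyspark.ml.classification import LogisticRegression

spark = SparkSession.builder.appName("tweet-sentiment").getOrCreate()
tweets = spark.read.json("tweets.json")  # assumed schema: {"text": ...}

# Distant supervision: emoticons inside the tweet act as sentiment labels.
labeled = tweets.withColumn(
    "label",
    F.when(F.col("text").contains(":)"), 1.0)
     .when(F.col("text").contains(":("), 0.0),
).dropna(subset=["label"])

pipeline = Pipeline(stages=[
    Tokenizer(inputCol="text", outputCol="words"),
    HashingTF(inputCol="words", outputCol="tf"),
    IDF(inputCol="tf", outputCol="features"),
    LogisticRegression(maxIter=10),
])
model = pipeline.fit(labeled)  # runs in a parallel, distributed fashion
```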

  18. Large-scale comparative visualisation of sets of multidimensional data

    Directory of Open Access Journals (Sweden)

    Dany Vohl

    2016-10-01

    Full Text Available We present encube—a qualitative, quantitative and comparative visualisation and analysis system, with application to high-resolution, immersive three-dimensional environments and desktop displays. encube extends previous comparative visualisation systems by considering: (1) the integration of comparative visualisation and analysis into a unified system; (2) the documentation of the discovery process; and (3) an approach that enables scientists to continue the research process once back at their desktop. Our solution enables tablets, smartphones or laptops to be used as interaction units for manipulating, organising, and querying data. We highlight the modularity of encube, allowing additional functionalities to be included as required. Additionally, our approach supports a high level of collaboration within the physical environment. We show how our implementation of encube operates in a large-scale, hybrid visualisation and supercomputing environment using the CAVE2 at Monash University, and on a local desktop, making it a versatile solution. We discuss how our approach can help accelerate the discovery rate in a variety of research scenarios.

  19. Fast Large Scale Structure Perturbation Theory using 1D FFTs

    CERN Document Server

    Schmittfull, Marcel; McDonald, Patrick

    2016-01-01

    The usual fluid equations describing the large-scale evolution of mass density in the universe can be written as local in the density, velocity divergence, and velocity potential fields. As a result, the perturbative expansion in small density fluctuations, usually written in terms of convolutions in Fourier space, can be written as a series of products of these fields evaluated at the same location in configuration space. Based on this, we establish a new method to numerically evaluate the 1-loop power spectrum (i.e., Fourier transform of the 2-point correlation function) with one-dimensional Fast Fourier Transforms. This is exact and a few orders of magnitude faster than previously used numerical approaches. Numerical results of the new method are in excellent agreement with the standard quadrature integration method. This fast model evaluation can in principle be extended to higher loop order where existing codes become painfully slow. Our approach follows by writing higher order corrections to the 2-point...
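For reference (standard one-loop SPT, not quoted from the paper), the type of integral being accelerated is the mode-coupling convolution

$$P_{22}(k) \;=\; 2 \int \frac{d^3q}{(2\pi)^3}\, \big[ F_2(\mathbf q, \mathbf k - \mathbf q) \big]^2\, P_{\rm lin}(q)\, P_{\rm lin}(|\mathbf k - \mathbf q|),$$

a convolution in Fourier space. Since convolutions become products in configuration space, decomposing the kernel $F_2$ into a few separable terms turns the evaluation into products of one-dimensional transforms, which is what permits the 1D FFT approach.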

  20. ANALYSIS OF TURBULENT MIXING JETS IN LARGE SCALE TANK

    Energy Technology Data Exchange (ETDEWEB)

    Lee, S; Richard Dimenna, R; Robert Leishear, R; David Stefanko, D

    2007-03-28

    Flow evolution models were developed to evaluate the performance of the new advanced design mixer pump for sludge mixing and removal operations with high-velocity liquid jets in one of the large-scale Savannah River Site waste tanks, Tank 18. This paper describes the computational model, the flow measurements used to provide validation data in the region far from the jet nozzle, the extension of the computational results to real tank conditions through the use of existing sludge suspension data, and finally, the sludge removal results from actual Tank 18 operations. A computational fluid dynamics approach was used to simulate the sludge removal operations. The models employed a three-dimensional representation of the tank with a two-equation turbulence model. Both the computational approach and the models were validated with onsite test data reported here and literature data. The model was then extended to actual conditions in Tank 18 through a velocity criterion to predict the ability of the new pump design to suspend settled sludge. A qualitative comparison with sludge removal operations in Tank 18 showed a reasonably good comparison with final results subject to significant uncertainties in actual sludge properties.

  1. Distribution probability of large-scale landslides in central Nepal

    Science.gov (United States)

    Timilsina, Manita; Bhandary, Netra P.; Dahal, Ranjan Kumar; Yatabe, Ryuichi

    2014-12-01

    Large-scale landslides in the Himalaya are defined as huge, deep-seated landslide masses that occurred in the geological past. They are widely distributed in the Nepal Himalaya. The steep topography and high local relief provide high potential for such failures, whereas the dynamic geology and adverse climatic conditions play a key role in the occurrence and reactivation of such landslides. The major geoscientific problems related to such large-scale landslides are 1) difficulties in their identification and delineation, 2) sources of small-scale failures, and 3) reactivation. Only a few scientific publications have been published concerning large-scale landslides in Nepal. In this context, the identification and quantification of large-scale landslides and their potential distribution are crucial. Therefore, this study explores the distribution of large-scale landslides in the Lesser Himalaya. It provides simple guidelines to identify large-scale landslides based on their typical characteristics and using a 3D schematic diagram. Based on the spatial distribution of landslides, geomorphological/geological parameters and logistic regression, an equation of large-scale landslide distribution is also derived. The equation is validated by applying it to another area. For this new area, the area under the receiver operating characteristic curve of the landslide distribution probability is 0.699, and the distribution probability explains > 65% of existing landslides. Therefore, the regression equation can be applied to areas of the Lesser Himalaya of central Nepal with similar geological and geomorphological conditions.
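A hypothetical sketch of the logistic-regression step (predictor names and the synthetic data are illustrative; the study's actual geomorphological/geological parameters differ):

```python
# Fit P(landslide) = 1 / (1 + exp(-(b0 + b1*x1 + b2*x2 + b3*x3)))
# and score it with the area under the ROC curve, as in the study.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
# illustrative columns: slope (deg), local relief (m), distance to fault (km)
X = rng.normal([30.0, 800.0, 5.0], [8.0, 250.0, 3.0], size=(500, 3))
y = (X[:, 0] + 0.01 * X[:, 1] - X[:, 2] + rng.normal(0, 5, 500) > 35).astype(int)

model = LogisticRegression(max_iter=1000).fit(X, y)
print("coefficients:", model.intercept_[0], model.coef_[0])
print("AUC:", roc_auc_score(y, model.predict_proba(X)[:, 1]))
```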

  2. Statistical methods for the analysis of rotation measure grids in large scale structures in the SKA era

    CERN Document Server

    Vacca, Valentina; Ensslin, Torsten; Selig, Marco; Junklewitz, Henrik; Greiner, Maksim; Jasche, Jens; Hales, Christopher; Reinecke, Martin; Carretti, Ettore; Feretti, Luigina; Ferrari, Chiara; Giovannini, Gabriele; Govoni, Federica; Horellou, Cathy; Ideguchi, Shinsuke; Johnston-Hollitt, Melanie; Murgia, Matteo; Paladino, Rosita; Pizzo, Roberto Francesco; Scaife, Anna

    2015-01-01

    To better understand the origin and properties of cosmological magnetic fields, a detailed knowledge of magnetic fields in the large-scale structure of the Universe (galaxy clusters, filaments) is crucial. We propose a new statistical approach to study magnetic fields on large scales with the rotation measure grid data that will be obtained with the new generation of radio interferometers.

  3. New Approaches for Very Large-Scale Integer Programming

    Science.gov (United States)

    2016-06-24

    problems. In the context of MIP, some recent works propose ML techniques for constructing a portfolio of good parameter configurations for a MIP solver... benefit of being instance-specific and of continuing the branch-and-bound seamlessly, without losing work when switching between learning and prediction... computational results on instances from MIPLIB 2010 illustrating the benefits of this framework.

  4. Organised convection embedded in a large-scale flow

    Science.gov (United States)

    Naumann, Ann Kristin; Stevens, Bjorn; Hohenegger, Cathy

    2017-04-01

    In idealised simulations of radiative convective equilibrium, convection aggregates spontaneously from randomly distributed convective cells into organized mesoscale convection despite homogeneous boundary conditions. Although these simulations apply very idealised setups, the process of self-aggregation is thought to be relevant for the development of tropical convective systems. One feature that idealised simulations usually neglect is the occurrence of a large-scale background flow. In the tropics, organised convection is embedded in a large-scale circulation system, which advects convection in the along-wind direction and alters near-surface convergence in the convective areas. A large-scale flow also modifies the surface fluxes, which are expected to be enhanced upwind of the convective area if a large-scale flow is applied. Convective clusters that are embedded in a large-scale flow therefore experience an asymmetric component of the surface fluxes, which influences the development and the pathway of a convective cluster. In this study, we use numerical simulations with explicit convection and add a large-scale flow to the established setup of radiative convective equilibrium. We then analyse how aggregated convection evolves when exposed to wind forcing. The simulations suggest that convective line structures are more prevalent if a large-scale flow is present and that convective clusters move considerably slower than advection by the large-scale flow would suggest. We also study the asymmetric component of convective aggregation due to enhanced surface fluxes, and discuss the pathway and speed of convective clusters as a function of the large-scale wind speed.

  5. Ferroelectric opening switches for large-scale pulsed power drivers.

    Energy Technology Data Exchange (ETDEWEB)

    Brennecka, Geoffrey L.; Rudys, Joseph Matthew; Reed, Kim Warren; Pena, Gary Edward; Tuttle, Bruce Andrew; Glover, Steven Frank

    2009-11-01

    Fast electrical energy storage or Voltage-Driven Technology (VDT) has dominated fast, high-voltage pulsed power systems for the past six decades. Fast magnetic energy storage or Current-Driven Technology (CDT) is characterized by 10,000× higher energy density than VDT and has a great number of other substantial advantages, but it has all but been neglected for all of these decades. The uniform explanation for neglect of CDT technology is invariably that the industry has never been able to make an effective opening switch, which is essential for the use of CDT. Most approaches to opening switches have involved plasma of one sort or another. On a large scale, gaseous plasmas have been used as a conductor to bridge the switch electrodes that provides an opening function when the current wave front propagates through to the output end of the plasma and fully magnetizes the plasma - this is called a Plasma Opening Switch (POS). Opening can be triggered in a POS using a magnetic field to push the plasma out of the A-K gap - this is called a Magnetically Controlled Plasma Opening Switch (MCPOS). On a small scale, depletion of electron plasmas in semiconductor devices is used to effect opening switch behavior, but these devices are relatively low voltage and low current compared to the hundreds of kilo-volts and tens of kilo-amperes of interest to pulsed power. This work is an investigation into an entirely new approach to opening switch technology that utilizes new materials in new ways. The new materials are ferroelectrics, and using them as an opening switch stands in stark contrast to their traditional applications in optics and transducers. Emphasis is on the use of high-performance ferroelectrics with the objective of developing an opening switch that would be suitable for large scale pulsed power applications. Over the course of exploring this new ground, we have discovered new behaviors and properties of these materials that were heretofore unknown. Some of

  6. Large scale stochastic spatio-temporal modelling with PCRaster

    Science.gov (United States)

    Karssenberg, Derek; Drost, Niels; Schmitz, Oliver; de Jong, Kor; Bierkens, Marc F. P.

    2013-04-01

    software from the eScience Technology Platform (eSTeP), developed at the Netherlands eScience Center. This will allow us to scale up to hundreds of machines, with thousands of compute cores. A key requirement is not to change the user experience of the software. PCRaster operations and the use of the Python framework classes should work in a similar manner on machines ranging from a laptop to a supercomputer. This enables a seamless transfer of models from small machines, where model development is done, to large machines used for large-scale model runs. Domain specialists from a large range of disciplines, including hydrology, ecology, sedimentology, and land use change studies, currently use the PCRaster Python software within research projects. Applications include global scale hydrological modelling and error propagation in large-scale land use change models. The software runs on MS Windows, Linux operating systems, and OS X.

  7. Probabilistic cartography of the large-scale structure

    CERN Document Server

    Leclercq, Florent; Lavaux, Guilhem; Wandelt, Benjamin

    2015-01-01

    The BORG algorithm is an inference engine that derives the initial conditions given a cosmological model and galaxy survey data, and produces physical reconstructions of the underlying large-scale structure by assimilating the data into the model. We present the application of BORG to real galaxy catalogs and describe the primordial and late-time large-scale structure in the considered volumes. We then show how these results can be used for building various probabilistic maps of the large-scale structure, with rigorous propagation of uncertainties. In particular, we study dynamic cosmic web elements and secondary effects in the cosmic microwave background.

  8. Large scale and big data processing and management

    CERN Document Server

    Sakr, Sherif

    2014-01-01

    Large Scale and Big Data: Processing and Management provides readers with a central source of reference on the data management techniques currently available for large-scale data processing. Presenting chapters written by leading researchers, academics, and practitioners, it addresses the fundamental challenges associated with Big Data processing tools and techniques across a range of computing environments.The book begins by discussing the basic concepts and tools of large-scale Big Data processing and cloud computing. It also provides an overview of different programming models and cloud-bas

  9. Formative Classroom Assessment and Large-Scale Assessment: Toward a More Balanced System

    Directory of Open Access Journals (Sweden)

    Felipe Martínez Rizo

    2009-11-01

    Full Text Available Given the proliferation of large-scale standardized tests that has occurred in Mexico in recent years, this article constitutes a review of the international literature on the subject for the purpose of reflecting on the possible consequences of this phenomenon and exploring the progress of alternative assessment approaches. It also reviews the development of concepts related to formative classroom assessment, and summarizes current thinking on this subject. It emphasizes the importance of such approaches for improving educational quality. In conclusion, it argues that it is necessary to move toward assessment systems that combine large-scale assessment and classroom assessment in a more balanced fashion.

  10. The theory of large-scale ocean circulation

    National Research Council Canada - National Science Library

    Samelson, R. M

    2011-01-01

    "This is a concise but comprehensive introduction to the basic elements of the theory of large-scale ocean circulation for advanced students and researchers"-- "Mounting evidence that human activities...

  11. Learning networks for sustainable, large-scale improvement.

    Science.gov (United States)

    McCannon, C Joseph; Perla, Rocco J

    2009-05-01

    Large-scale improvement efforts known as improvement networks offer structured opportunities for exchange of information and insights into the adaptation of clinical protocols to a variety of settings.

  12. Personalized Opportunistic Computing for CMS at Large Scale

    CERN Document Server

    CERN. Geneva

    2015-01-01

    **Douglas Thain** is an Associate Professor of Computer Science and Engineering at the University of Notre Dame, where he designs large scale distributed computing systems to power the needs of advanced science and...

  13. An Evaluation Framework for Large-Scale Network Structures

    DEFF Research Database (Denmark)

    Pedersen, Jens Myrup; Knudsen, Thomas Phillip; Madsen, Ole Brun

    2004-01-01

    An evaluation framework for large-scale network structures is presented, which facilitates evaluations and comparisons of different physical network structures. A number of quantitative and qualitative parameters are presented, and their importance to networks discussed. Choosing a network...

  14. Modified gravity and large scale flows, a review

    Science.gov (United States)

    Mould, Jeremy

    2017-02-01

    Large scale flows have been a challenging feature of cosmography ever since galaxy scaling relations came on the scene 40 years ago. The next generation of surveys will offer a serious test of the standard cosmology.

  15. Some perspective on the Large Scale Scientific Computation Research

    Institute of Scientific and Technical Information of China (English)

    DU Qiang

    2004-01-01

    @@ The "Large Scale Scientific Computation (LSSC) Research"project is one of the State Major Basic Research projects funded by the Chinese Ministry of Science and Technology in the field ofinformation science and technology.

  17. PetroChina to Expand Dushanzi Refinery on Large Scale

    Institute of Scientific and Technical Information of China (English)

    2005-01-01

    A large-scale expansion project for PetroChina Dushanzi Petrochemical Company has been given the green light, a move which will make it one of the largest refineries and petrochemical complexes in the country.

  18. Needs, opportunities, and options for large scale systems research

    Energy Technology Data Exchange (ETDEWEB)

    Thompson, G.L.

    1984-10-01

    The Office of Energy Research was recently asked to perform a study of Large Scale Systems in order to facilitate the development of a true large systems theory. It was decided to ask experts in the fields of electrical engineering, chemical engineering and manufacturing/operations research for their ideas concerning large scale systems research. The author was asked to distribute a questionnaire among these experts to find out their opinions concerning recent accomplishments and future research directions in large scale systems research. He was also requested to convene a conference which included three experts in each area as panel members to discuss the general area of large scale systems research. The conference was held on March 26--27, 1984 in Pittsburgh with nine panel members, and 15 other attendees. The present report is a summary of the ideas presented and the recommendations proposed by the attendees.

  19. Comparison Between Overtopping Discharge in Small and Large Scale Models

    DEFF Research Database (Denmark)

    Helgason, Einar; Burcharth, Hans F.

    2006-01-01

    small and large scale model tests show no clear evidence of scale effects for overtopping above a threshold value. In the large scale model no overtopping was measured for wave heights below Hs = 0.5m as the water sunk into the voids between the stones on the crest. For low overtopping scale effects...... are presented as the small-scale model underpredicts the overtopping discharge....

  20. Balancing modern Power System with large scale of wind power

    OpenAIRE

    Basit, Abdul; Altin, Müfit; Hansen, Anca Daniela; Sørensen, Poul Ejnar

    2014-01-01

    Power system operators must ensure robust, secure and reliable power system operation even with a large scale integration of wind power. Electricity generated from the intermittent wind in large proportion may impact the control of power system balance and thus deviations in the power system frequency in small or islanded power systems or tie line power flows in interconnected power systems. Therefore, the large scale integration of wind power into the power system strongly concerns the s...

  1. Large-Scale Spray Releases: Additional Aerosol Test Results

    Energy Technology Data Exchange (ETDEWEB)

    Daniel, Richard C.; Gauglitz, Phillip A.; Burns, Carolyn A.; Fountain, Matthew S.; Shimskey, Rick W.; Billing, Justin M.; Bontha, Jagannadha R.; Kurath, Dean E.; Jenks, Jeromy WJ; MacFarlan, Paul J.; Mahoney, Lenna A.

    2013-08-01

    One of the events postulated in the hazard analysis for the Waste Treatment and Immobilization Plant (WTP) and other U.S. Department of Energy (DOE) nuclear facilities is a breach in process piping that produces aerosols with droplet sizes in the respirable range. The current approach for predicting the size and concentration of aerosols produced in a spray leak event involves extrapolating from correlations reported in the literature. These correlations are based on results obtained from small engineered spray nozzles using pure liquids that behave as a Newtonian fluid. The narrow ranges of physical properties on which the correlations are based do not cover the wide range of slurries and viscous materials that will be processed in the WTP and in processing facilities across the DOE complex. To expand the data set upon which the WTP accident and safety analyses were based, an aerosol spray leak testing program was conducted by Pacific Northwest National Laboratory (PNNL). PNNL’s test program addressed two key technical areas to improve the WTP methodology (Larson and Allen 2010). The first technical area was to quantify the role of slurry particles in small breaches where slurry particles may plug the hole and prevent high-pressure sprays. The results from an effort to address this first technical area can be found in Mahoney et al. (2012a). The second technical area was to determine aerosol droplet size distribution and total droplet volume from prototypic breaches and fluids, including sprays from larger breaches and sprays of slurries for which literature data are mostly absent. To address the second technical area, the testing program collected aerosol generation data at two scales, commonly referred to as small-scale and large-scale testing. The small-scale testing and resultant data are described in Mahoney et al. (2012b), and the large-scale testing and resultant data are presented in Schonewill et al. (2012). In tests at both scales, simulants were used

  2. Large-scale bias of dark matter halos

    CERN Document Server

    Valageas, Patrick

    2010-01-01

    We build a simple analytical model for the bias of dark matter halos that applies to objects defined by an arbitrary density threshold, $200\leq\delta\leq 1600$, and that provides accurate predictions from low-mass to high-mass halos. We point out that it is possible to build simple and efficient models, with no free parameter for the halo bias, by using integral constraints that govern the behavior of low-mass and typical halos, whereas the properties of rare massive halos are derived through explicit asymptotic approaches. We also describe how to take into account the impact of halo motions on their bias, using their linear displacement field. We obtain a good agreement with numerical simulations for the halo mass functions and large-scale bias at redshifts $0\leq z \leq 2.5$, for halos defined by the nonlinear density threshold $200\leq\delta\leq 1600$. We also evaluate the impact on the halo bias of two common approximations: i) neglecting halo motions, and ii) linearizing the halo two-point correlation.

  3. Plasma suppression of large scale structure formation in the universe.

    Science.gov (United States)

    Chen, Pisin; Lai, Kwang-Chang

    2007-12-07

    We point out that during the reionization epoch of the cosmic history, the plasma collective effect among the ordinary matter would suppress the large scale structure formation. The imperfect Debye shielding at finite temperature would induce an electrostatic pressure which, working together with the thermal pressure, would counter the gravitational collapse. As a result, the effective Jeans length, $\bar{\lambda}_J$, is increased by a factor $\bar{\lambda}_J/\lambda_J=\sqrt{8/5}$ relative to the conventional one. For scales smaller than the effective Jeans scale the plasma would oscillate at the ion-acoustic frequency. The modes that would be influenced by this effect lie roughly in the range $k \gtrsim 0.5\,h\,\mathrm{Mpc}^{-1}$, where the suppression would approach $1-(\Omega_{\rm dm}/\Omega_{m})^{2} \approx 1-(5/6)^{2} \approx 30\%$.

  4. Fast Component Pursuit for Large-Scale Inverse Covariance Estimation.

    Science.gov (United States)

    Han, Lei; Zhang, Yu; Zhang, Tong

    2016-08-01

    The maximum likelihood estimation (MLE) for the Gaussian graphical model, which is also known as the inverse covariance estimation problem, has gained increasing interest recently. Most existing works assume that inverse covariance estimators contain sparse structure and then construct models with the ℓ1 regularization. In this paper, different from existing works, we study the inverse covariance estimation problem from another perspective by efficiently modeling the low-rank structure in the inverse covariance, which is assumed to be a combination of a low-rank part and a diagonal matrix. One motivation for this assumption is that the low-rank structure is common in many applications including climate and financial analysis, and another is that such an assumption can reduce the computational complexity when computing the inverse. Specifically, we propose an efficient COmponent Pursuit (COP) method to obtain the low-rank part, where each component can be sparse. For optimization, the COP method greedily learns a rank-one component in each iteration by maximizing the log-likelihood. Moreover, the COP algorithm enjoys several appealing properties including the existence of an efficient solution in each iteration and the theoretical guarantee on the convergence of this greedy approach. Experiments on large-scale synthetic and real-world datasets with up to thousands of millions of variables show that the COP method is faster than the state-of-the-art techniques for the inverse covariance estimation problem while achieving comparable log-likelihood on test data.
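Read as a formula (our notation, a generic rendering of the stated assumption rather than the paper's):

$$\Omega \;=\; \Sigma^{-1} \;\approx\; D \;+\; \sum_{t=1}^{r} u_t u_t^{\top},$$

with $D$ diagonal and one rank-one component $u_t u_t^{\top}$ learned greedily per iteration to increase the Gaussian log-likelihood; a low-rank-plus-diagonal precision matrix also keeps its inverse cheap via the Woodbury identity.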

  5. Multi-Resolution Modeling of Large Scale Scientific Simulation Data

    Energy Technology Data Exchange (ETDEWEB)

    Baldwin, C; Abdulla, G; Critchlow, T

    2003-01-31

    This paper discusses using the wavelets modeling technique as a mechanism for querying large-scale spatio-temporal scientific simulation data. Wavelets have been used successfully in time series analysis and in answering surprise and trend queries. Our approach, however, is driven by the need for compression, which is necessary for viable throughput given the size of the targeted data, along with the end user requirements from the discovery process. Our users would like to run fast queries to check the validity of the simulation algorithms used. In some cases users are willing to accept approximate results if the answer comes back within a reasonable time. In other cases they might want to identify a certain phenomenon and track it over time. We face a unique problem because of the data set sizes. It may take months to generate one set of the targeted data; because of its sheer size, the data cannot be stored on disk for long and thus needs to be analyzed immediately before it is sent to tape. We integrated wavelets within AQSIM, a system that we are developing to support exploration and analyses of tera-scale data sets. We will discuss the way we utilized wavelet decomposition in our domain to facilitate compression and in answering a specific class of queries that is harder to answer with any other modeling technique. We will also discuss some of the shortcomings of our implementation and how to address them.

  6. Large Scale Obscuration and Related Climate Effects Workshop: Proceedings

    Energy Technology Data Exchange (ETDEWEB)

    Zak, B.D.; Russell, N.A.; Church, H.W.; Einfeld, W.; Yoon, D.; Behl, Y.K. [eds.

    1994-05-01

    A Workshop on Large Scale Obscuration and Related Climate Effects was held 29--31 January, 1992, in Albuquerque, New Mexico. The objectives of the workshop were: to determine through the use of expert judgement the current state of understanding of regional and global obscuration and related climate effects associated with nuclear weapons detonations; to estimate how large the uncertainties are in the parameters associated with these phenomena (given specific scenarios); to evaluate the impact of these uncertainties on obscuration predictions; and to develop an approach for the prioritization of further work on newly-available data sets to reduce the uncertainties. The workshop consisted of formal presentations by the 35 participants, and subsequent topical working sessions on: the source term; aerosol optical properties; atmospheric processes; and electro-optical systems performance and climatic impacts. Summaries of the conclusions reached in the working sessions are presented in the body of the report. Copies of the transparencies shown as part of each formal presentation are contained in the appendices (microfiche).

  7. Optimal management of large scale aquifers under uncertainty

    Science.gov (United States)

    Ghorbanidehno, H.; Kokkinaki, A.; Kitanidis, P. K.; Darve, E. F.

    2016-12-01

    Water resources systems, and especially groundwater reservoirs, are a valuable resource that is often being endangered by contamination and over-exploitation. Optimal control techniques can be applied for groundwater management to ensure the long-term sustainability of this vulnerable resource. Linear Quadratic Gaussian (LQG) control is an optimal control method that combines a Kalman filter for real time estimation with a linear quadratic regulator for dynamic optimization. The LQG controller can be used to determine the optimal controls (e.g. pumping schedule) upon receiving feedback about the system from incomplete noisy measurements. However, applying LQG control for systems of large dimension is computationally expensive. This work presents the Spectral Linear Quadratic Gaussian (SpecLQG) control, a new fast LQG controller that can be used for large scale problems. SpecLQG control combines the Spectral Kalman filter, which is a fast Kalman filter algorithm, with an efficient low rank LQR, and provides a practical approach for combined monitoring, parameter estimation, uncertainty quantification and optimal control for linear and weakly non-linear systems. The computational cost of SpecLQG controller scales linearly with the number of unknowns, a great improvement compared to the quadratic cost of basic LQG. We demonstrate the accuracy and computational efficiency of SpecLQG control using two applications: first, a linear validation case for pumping schedule management in a small homogeneous confined aquifer; and second, a larger scale nonlinear case with unknown heterogeneities in aquifer properties and boundary conditions.
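As generic background (a textbook LQG skeleton on a toy two-state system, not the SpecLQG algorithm; all matrices are assumptions), the controller pairs a steady-state LQR gain with a steady-state Kalman gain:

```python
# u = -K x_hat, with x_hat maintained by a Kalman filter with gain L.
# SpecLQG replaces these dense Riccati solves with low-rank/spectral
# approximations so the cost scales linearly with the state dimension.
import numpy as np
from scipy.linalg import solve_discrete_are

A = np.array([[1.0, 0.1], [0.0, 0.9]])    # state transition (toy aquifer)
B = np.array([[0.0], [0.1]])              # control input (e.g., pumping)
C = np.array([[1.0, 0.0]])                # observation operator
Q, R = np.eye(2), np.eye(1)               # state / control cost weights
W, V = 0.01 * np.eye(2), 0.1 * np.eye(1)  # process / measurement noise

P = solve_discrete_are(A, B, Q, R)                 # regulator Riccati
K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)  # LQR gain

S = solve_discrete_are(A.T, C.T, W, V)             # estimator Riccati (dual)
L = S @ C.T @ np.linalg.inv(C @ S @ C.T + V)       # Kalman gain
print("LQR gain:", K.ravel(), "Kalman gain:", L.ravel())
```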

  8. Large scale filaments associated with Milky Way spiral arms

    CERN Document Server

    Wang, Ke; Ginsburg, Adam; Walmsley, C Malcolm; Molinari, Sergio; Schisano, Eugenio

    2015-01-01

    The ubiquity of filamentary structure at various scales throughout the Galaxy has triggered a renewed interest in their formation, evolution, and role in star formation. The largest filaments can reach up to Galactic scale as part of the spiral arm structure. However, such large scale filaments are hard to identify systematically due to limitations in identification methodology (i.e., as extinction features). We present a new approach to directly search for the largest, coldest, and densest filaments in the Galaxy, making use of sensitive Herschel Hi-GAL data complemented by spectral line cubes. We present a sample of the 9 most prominent Herschel filaments, including 6 identified from a pilot search field plus 3 from outside the field. These filaments measure 37-99 pc long and 0.6-3.0 pc wide with masses (0.5-8.3)$\times10^4 \, M_\odot$, and beam-averaged ($28"$, or 0.4-0.7 pc) peak H$_2$ column densities of (1.7-9.3)$\times 10^{22} \, \rm{cm^{-2}}$. The bulk of the filaments are relatively cold (17-21 K), whi...

  9. Simulated evolution of the dark matter large-scale structure

    CERN Document Server

    Demiański, M; Pilipenko, S; Gottlöber, S

    2011-01-01

    We analyze the evolution of the basic properties of simulated large scale structure elements formed by dark matter (DM LSS) and confront it with the observed evolution of the Lyman-$\alpha$ forest. In three high resolution simulations we selected samples of compact DM clouds of moderate overdensity. Clouds are selected at redshifts $0\leq z\leq 3$ with the Minimal Spanning Tree (MST) technique. The main properties of the selected clouds are analyzed in 3D space and with the core sampling approach, which allows us to compare estimates of the DM LSS evolution obtained with two different techniques and to clarify some important aspects of the LSS evolution. In both cases we find that regular redshift variations of the mean characteristics of the DM LSS are accompanied only by small variations of their PDFs, which indicates the self-similar character of the DM LSS evolution. The high degree of relaxation of DM particles compressed within the LSS is found along the shortest principal axis of clouds. We see that the inter...

  10. A study of MLFMA for large-scale scattering problems

    Science.gov (United States)

    Hastriter, Michael Larkin

    This research is centered on computational electromagnetics with a focus on solving large-scale problems accurately in a timely fashion using first-principles physics. Error control of the translation operator in 3-D is shown. A parallel implementation of the multilevel fast multipole algorithm (MLFMA) was studied with respect to parallel efficiency and scaling. The large-scale scattering program (LSSP), based on the ScaleME library, was used to solve ultra-large-scale problems including a $200\lambda$ sphere with 20 million unknowns. As these large-scale problems were solved, techniques were developed to accurately estimate the memory requirements. Careful memory management is needed in order to solve these massive problems. The study of MLFMA in large-scale problems revealed significant errors that stemmed from inconsistencies in constants used by different parts of the algorithm. These were fixed to produce the most accurate data possible for large-scale surface scattering problems. Data was calculated on a missile-like target using both high frequency methods and MLFMA. This data was compared and analyzed to determine possible strategies to increase data acquisition speed and accuracy through multiple computation method hybridization.

  11. Vector dissipativity theory for large-scale impulsive dynamical systems

    Directory of Open Access Journals (Sweden)

    Haddad Wassim M.

    2004-01-01

    Full Text Available Modern complex large-scale impulsive systems involve multiple modes of operation placing stringent demands on controller analysis of increasing complexity. In analyzing these large-scale systems, it is often desirable to treat the overall impulsive system as a collection of interconnected impulsive subsystems. Solution properties of the large-scale impulsive system are then deduced from the solution properties of the individual impulsive subsystems and the nature of the impulsive system interconnections. In this paper, we develop vector dissipativity theory for large-scale impulsive dynamical systems. Specifically, using vector storage functions and vector hybrid supply rates, dissipativity properties of the composite large-scale impulsive systems are shown to be determined from the dissipativity properties of the impulsive subsystems and their interconnections. Furthermore, extended Kalman-Yakubovich-Popov conditions, in terms of the impulsive subsystem dynamics and interconnection constraints, characterizing vector dissipativeness via vector system storage functions, are derived. Finally, these results are used to develop feedback interconnection stability results for large-scale impulsive dynamical systems using vector Lyapunov functions.

  12. Multi-Resolution Modeling of Large Scale Scientific Simulation Data

    Energy Technology Data Exchange (ETDEWEB)

    Baldwin, C; Abdulla, G; Critchlow, T

    2002-02-25

    Data produced by large scale scientific simulations, experiments, and observations can easily reach terabytes in size. The ability to examine data-sets of this magnitude, even in moderate detail, is problematic at best. Generally this scientific data consists of multivariate field quantities with complex inter-variable correlations and spatial-temporal structure. To provide scientists and engineers with the ability to explore and analyze such data sets we are using a twofold approach. First, we model the data with the objective of creating a compressed yet manageable representation. Second, with that compressed representation, we provide the user with the ability to query the resulting approximation to obtain approximate yet sufficient answers; a process called ad hoc querying. This paper is concerned with a wavelet modeling technique that seeks to capture the important physical characteristics of the target scientific data. Our approach is driven by the compression, which is necessary for viable throughput, along with the end user requirements from the discovery process. Our work contrasts existing research which applies wavelets to range querying, change detection, and clustering problems by working directly with a decomposition of the data. The difference in these procedures is due primarily to the nature of the data and the requirements of the scientists and engineers. Our approach directly uses the wavelet coefficients of the data to compress as well as query. We will provide some background on the problem, describe how the wavelet decomposition is used to facilitate data compression and how queries are posed on the resulting compressed model. Results of this process will be shown for several problems of interest and we will end with some observations and conclusions about this research.
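A hedged sketch of the compress-then-query idea using the PyWavelets package (the field, wavelet choice, and 5% threshold are illustrative assumptions, not the production pipeline):

```python
# Decompose a field, keep only the largest coefficients, and answer
# approximate ("ad hoc") queries against the compressed surrogate.
import numpy as np
import pywt

field = np.sin(np.linspace(0, 20, 4096)) + 0.1 * np.random.randn(4096)

coeffs = pywt.wavedec(field, "db4", level=5)
cut = np.quantile(np.abs(np.concatenate(coeffs)), 0.95)  # keep top 5%
compressed = [np.where(np.abs(c) >= cut, c, 0.0) for c in coeffs]

surrogate = pywt.waverec(compressed, "db4")
print("approximate range mean:", surrogate[1000:2000].mean())
```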

  14. Developments in large-scale coastal flood hazard mapping

    Science.gov (United States)

    Vousdoukas, Michalis I.; Voukouvalas, Evangelos; Mentaschi, Lorenzo; Dottori, Francesco; Giardino, Alessio; Bouziotas, Dimitrios; Bianchi, Alessandra; Salamon, Peter; Feyen, Luc

    2016-08-01

    Coastal flooding related to marine extreme events has severe socioeconomic impacts, and even though the latter are projected to increase under the changing climate, there is a clear deficit of information and predictive capacity related to coastal flood mapping. The present contribution reports on efforts towards a new methodology for mapping coastal flood hazard at European scale, combining (i) the contribution of waves to the total water level; (ii) improved inundation modeling; and (iii) an open, physics-based framework which can be constantly upgraded, whenever new and more accurate data become available. Four inundation approaches of gradually increasing complexity and computational costs were evaluated in terms of their applicability to large-scale coastal flooding mapping: static inundation (SM); a semi-dynamic method, considering the water volume discharge over the dykes (VD); the flood intensity index approach (Iw); and the model LISFLOOD-FP (LFP). A validation test performed against observed flood extents during the Xynthia storm event showed that SM and VD can lead to an overestimation of flood extents by 232 and 209 %, while Iw and LFP showed satisfactory predictive skill. Application at pan-European scale for the present-day 100-year event confirmed that static approaches can overestimate flood extents by 56 % compared to LFP; however, Iw can deliver results of reasonable accuracy in cases when reduced computational costs are a priority. Moreover, omitting the wave contribution in the extreme total water level (TWL) can result in a ~60 % underestimation of the flooded area. The present findings have implications for impact assessment studies, since combination of the estimated inundation maps with population exposure maps revealed differences in the estimated number of people affected within the 20-70 % range.

  15. The Race Race: Assimilation in America

    Science.gov (United States)

    Balis, Andrea; Aman, Michael

    2013-01-01

    Can race and assimilation be taught? Interdisciplinary pedagogy provides a methodology, context, and use of nontraditional texts culled from American cultural history, such as theater and historical texts. This approach and these texts prove useful for an examination of race and assimilation in America. The paper describes a course that while…

  16. Coupled binary embedding for large-scale image retrieval.

    Science.gov (United States)

    Zheng, Liang; Wang, Shengjin; Tian, Qi

    2014-08-01

    Visual matching is a crucial step in image retrieval based on the bag-of-words (BoW) model. In the baseline method, two keypoints are considered a matching pair if their SIFT descriptors are quantized to the same visual word. However, the SIFT visual word has two limitations. First, it loses most of its discriminative power during quantization. Second, SIFT only describes the local texture feature. Both drawbacks impair the discriminative power of the BoW model and lead to false positive matches. To tackle this problem, this paper proposes to embed multiple binary features at the indexing level. To model the correlation between features, a multi-IDF scheme is introduced, through which different binary features are coupled into the inverted file. We show that matching verification methods based on binary features, such as Hamming embedding, can be effectively incorporated in our framework. As an extension, we explore the fusion of a binary color feature into image retrieval. The joint integration of the SIFT visual word and binary features greatly enhances the precision of visual matching, reducing the impact of false positive matches. Our method is evaluated through extensive experiments on four benchmark datasets (Ukbench, Holidays, DupImage, and MIR Flickr 1M). We show that our method significantly improves the baseline approach. In addition, large-scale experiments indicate that the proposed method requires acceptable memory usage and query time compared with other approaches. Further, when the global color feature is integrated, our method yields performance competitive with the state of the art.

  17. Decoupling local mechanics from large-scale structure in modular metamaterials.

    Science.gov (United States)

    Yang, Nan; Silverberg, Jesse L

    2017-04-04

    A defining feature of mechanical metamaterials is that their properties are determined by the organization of internal structure instead of the raw fabrication materials. This shift of attention to engineering internal degrees of freedom has coaxed relatively simple materials into exhibiting a wide range of remarkable mechanical properties. For practical applications to be realized, however, this nascent understanding of metamaterial design must be translated into a capacity for engineering large-scale structures with prescribed mechanical functionality. Thus, the challenge is to systematically map desired functionality of large-scale structures backward into a design scheme while using finite parameter domains. Such "inverse design" is often complicated by the deep coupling between large-scale structure and local mechanical function, which limits the available design space. Here, we introduce a design strategy for constructing 1D, 2D, and 3D mechanical metamaterials inspired by modular origami and kirigami. Our approach is to assemble a number of modules into a voxelized large-scale structure, where the module's design has a greater number of mechanical design parameters than the number of constraints imposed by bulk assembly. This inequality allows each voxel in the bulk structure to be uniquely assigned mechanical properties independent from its ability to connect and deform with its neighbors. In studying specific examples of large-scale metamaterial structures we show that a decoupling of global structure from local mechanical function allows for a variety of mechanically and topologically complex designs.

  18. Large-scale modeling - a tool for conquering the complexity of the brain

    Directory of Open Access Journals (Sweden)

    Mikael Djurfeldt

    2008-04-01

    Full Text Available Is there any hope of achieving a thorough understanding of higher functions such as perception, memory, thought and emotion, or is the stunning complexity of the brain a barrier which will limit such efforts for the foreseeable future? In this perspective we discuss methods to handle complexity, approaches to model building, and point to detailed large-scale models as a new contribution to the toolbox of the computational neuroscientist. We elucidate some aspects which distinguish large-scale models and some of the technological challenges which they entail.

  19. Software quality assurance: in large scale and complex software-intensive systems

    NARCIS (Netherlands)

    Mistrik, I.; Soley, R.; Ali, N.; Grundy, J.; Tekinerdogan, B.

    2015-01-01

    Software Quality Assurance in Large Scale and Complex Software-intensive Systems presents novel and high-quality research related approaches that relate the quality of software architecture to system requirements, system architecture and enterprise-architecture, or software testing. Modern software

  20. New Principles of Coordination in Large-scale Micro- and Molecular-Robotic Groups

    CERN Document Server

    Kornienko, S

    2011-01-01

    Micro- and molecular-robotic systems act as large-scale swarms. Capabilities of sensing, communication and information processing are very limited on these scales. This short position paper describes a swarm-based minimalistic approach, which can be applied for coordinating collective behavior in such systems.

  2. The Role of Reading Comprehension in Large-Scale Subject-Matter Assessments

    Science.gov (United States)

    Zhang, Ting

    2013-01-01

    This study was designed with the overall goal of understanding how difficulties in reading comprehension are associated with early adolescents' performance in large-scale assessments in subject domains including science and civic-related social studies. The current study extended previous research by taking a cognition-centered approach based on…

  3. Hybrid Nested Partitions and Math Programming Framework for Large-scale Combinatorial Optimization

    Science.gov (United States)

    2010-03-31

    optimization problems: 1) exact algorithms and 2) metaheuristic algorithms. This project will integrate concepts from these two technologies to develop...generic optimization frameworks to find provably good solutions to large-scale discrete optimization problems often encountered in many real applications...integer programming decomposition approaches, such as Dantzig-Wolfe decomposition and Lagrangian relaxation, and metaheuristics such as the Nested

  4. Testing cosmological models with large-scale power modulation using microwave background polarization observations

    CERN Document Server

    Bunn, Emory F; Zheng, Haoxuan

    2016-01-01

    We examine the degree to which observations of large-scale cosmic microwave background (CMB) polarization can shed light on the puzzling large-scale power modulation in maps of CMB anisotropy. We consider a phenomenological model in which the observed anomaly is caused by modulation of large-scale primordial curvature perturbations, and calculate Fisher information and error forecasts for future polarization data, constrained by the existing CMB anisotropy data. Because a significant fraction of the available information is contained in correlations with the anomalous temperature data, it is essential to account for these constraints. We also present a systematic approach to finding a set of normal modes that maximize the available information, generalizing the well-known Karhunen-Loeve transformation to take account of the constraints from the temperature data. A polarization map covering at least $\\sim 60\\%$ of the sky should be able to provide a $3\\sigma$ detection of modulation at the level favored by the...
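
    As background for the generalized Karhunen-Loeve construction mentioned above, the sketch below shows the standard, unconstrained version as a signal-versus-noise generalized eigenproblem on toy covariance matrices; the paper's temperature-constrained extension is not reproduced here:

      import numpy as np
      from scipy.linalg import eigh

      def kl_modes(signal_cov, noise_cov):
          # Standard Karhunen-Loeve: solve S v = lambda N v; the eigenvalues
          # rank the modes by signal-to-noise, i.e. by available information.
          evals, evecs = eigh(signal_cov, noise_cov)
          order = np.argsort(evals)[::-1]               # most informative first
          return evals[order], evecs[:, order]

      rng = np.random.default_rng(1)
      A = rng.standard_normal((5, 5))
      S = A @ A.T                                       # toy signal covariance
      N = 0.1 * np.eye(5)                               # toy noise covariance
      lam, V = kl_modes(S, N)
      print(lam[:2])                                    # two best modes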

  5. Cross-indexing of binary SIFT codes for large-scale image search.

    Science.gov (United States)

    Liu, Zhen; Li, Houqiang; Zhang, Liyan; Zhou, Wengang; Tian, Qi

    2014-05-01

    In recent years, there has been growing interest in mapping visual features into compact binary codes for applications on large-scale image collections. Encoding high-dimensional data as compact binary codes reduces the memory cost of storage, and it benefits computational efficiency, since similarity can be measured cheaply by Hamming distance. In this paper, we propose a novel flexible scale invariant feature transform (SIFT) binarization (FSB) algorithm for large-scale image search. The FSB algorithm explores the magnitude patterns of the SIFT descriptor. It is unsupervised, and the generated binary codes are demonstrated to be distance-preserving. In addition, we propose a new search strategy that finds target features by cross-indexing between the binary SIFT space and the original SIFT space. We evaluate our approach on two publicly released data sets. Experiments on a large-scale partial-duplicate image retrieval system demonstrate the effectiveness and efficiency of the proposed algorithm.
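
    A toy illustration of descriptor binarization and Hamming comparison is given below; the median-threshold rule is a deliberate simplification for illustration, not the actual FSB algorithm:

      import numpy as np

      def binarize(desc):
          # Toy rule: set a bit where a descriptor dimension exceeds the
          # descriptor's median magnitude (FSB exploits magnitude patterns
          # more carefully than this).
          bits = (desc > np.median(desc)).astype(np.uint8)
          return int.from_bytes(np.packbits(bits).tobytes(), "big")

      def hamming(x, y):
          return bin(x ^ y).count("1")

      rng = np.random.default_rng(0)
      d1 = rng.random(128)                      # SIFT descriptors are 128-D
      d2 = d1 + rng.normal(0, 0.01, 128)        # near-duplicate feature
      d3 = rng.random(128)                      # unrelated feature
      b1, b2, b3 = map(binarize, (d1, d2, d3))
      print(hamming(b1, b2), hamming(b1, b3))   # small vs. large distance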

  6. Hierarchical Neural Networks Method for Fault Diagnosis of Large-Scale Analog Circuits

    Institute of Scientific and Technical Information of China (English)

    TAN Yanghong; HE Yigang; FANG Gefeng

    2007-01-01

    A novel hierarchical neural networks (HNNs) method for fault diagnosis of large-scale circuits is proposed. Conventional neural network (NN) approaches require a large amount of computation to simulate the various faulty-component possibilities; for large-scale circuits, the number of possible faults, and hence the simulations, grows rapidly and becomes tedious, sometimes even impractical. In the proposed method, NNs are distributed to the torn sub-blocks according to the proposed tearing principles for large-scale circuits, and the NNs are trained in batches by different patterns, in light of the presented rules for the various patterns, when the DC, AC and transient responses of the circuit are available. The method is characterized by decreasing the overlapped feasible domains of responses of circuits with tolerance, and it leads to better performance and higher correct classification rates. The methodology is illustrated by means of diagnosis examples.

  7. Large Scale Meteorological Pattern of Extreme Rainfall in Indonesia

    Science.gov (United States)

    Kuswanto, Heri; Grotjahn, Richard; Rachmi, Arinda; Suhermi, Novri; Oktania, Erma; Wijaya, Yosep

    2014-05-01

    Extreme Weather Events (EWEs) have negative social, economic, and environmental impacts, so forecasting them is crucial. Indonesia has been identified as being among the countries most vulnerable to the risk of natural disasters, such as floods, heat waves, and droughts. Current forecasting of extreme events in Indonesia is carried out by interpreting synoptic maps for several fields without taking into account the link between the observed events in the 'target' area and remote conditions. This situation may cause misidentification of the event, leading to an inaccurate prediction. Grotjahn and Faure (2008) compute composite maps from extreme events (including heat waves and intense rainfall) to help forecasters identify such events in model output. The composite maps show large scale meteorological patterns (LSMP) that occurred during historical EWEs. Such maps provide vital information about the EWEs in addition to forecaster guidance, and have revealed robust mid-latitude meteorological patterns (for Sacramento and California Central Valley, USA EWEs). We study the performance of the composite approach for tropical weather conditions such as Indonesia's. Initially, the composite maps are developed to identify and forecast extreme weather events in the Indramayu district of West Java, the main rice producer in Indonesia, contributing about 60% of the national total rice production. Studying extreme weather events in Indramayu is important since EWEs there affect national agricultural and fisheries activities. During a recent EWE, more than a thousand houses in Indramayu suffered serious flooding, with each home more than one meter underwater. The flood also destroyed a thousand hectares of rice plantings in 5 regencies. Identifying the dates of extreme events is one of the most important steps and has to be carried out carefully. An approach has been applied to identify the

  8. Large-scale-vortex dynamos in planar rotating convection

    CERN Document Server

    Guervilly, Céline; Jones, Chris A

    2016-01-01

    Several recent studies have demonstrated how large-scale vortices may arise spontaneously in rotating planar convection. Here we examine the dynamo properties of such flows in rotating Boussinesq convection. For moderate values of the magnetic Reynolds number ($100 \\lesssim Rm \\lesssim 550$, with $Rm$ based on the box depth and the convective velocity), a large-scale (i.e. system-size) magnetic field is generated. The amplitude of the magnetic energy oscillates in time, out of phase with the oscillating amplitude of the large-scale vortex. The dynamo mechanism relies on those components of the flow that have length scales lying between that of the large-scale vortex and the typical convective cell size; smaller-scale flows are not required. The large-scale vortex plays a crucial role in the magnetic induction despite being essentially two-dimensional. For larger magnetic Reynolds numbers, the dynamo is small scale, with a magnetic energy spectrum that peaks at the scale of the convective cells. In this case, ...

  9. Considering Race and Space: Mapping Developmental Approaches for Providing Culturally Responsive Advising

    Science.gov (United States)

    Mitchell, Roland W.; Wood, Gerald K.; Witherspoon, Noelle

    2010-01-01

    This exploratory essay critically examines how social relations structure the production of space on a college campus. In particular, we analyze how the organization of one particular site--the student advising office at a southeastern university--calls attention to the relationship between race and space in ways that re-inscribe narrow…

  10. An inertia-free filter line-search algorithm for large-scale nonlinear programming

    Energy Technology Data Exchange (ETDEWEB)

    Chiang, Nai-Yuan; Zavala, Victor M.

    2016-02-15

    We present a filter line-search algorithm that does not require inertia information of the linear system. This feature enables the use of a wide range of linear algebra strategies and libraries, which is essential to tackle large-scale problems on modern computing architectures. The proposed approach performs curvature tests along the search step to detect negative curvature and to trigger convexification. We prove that the approach is globally convergent and we implement the approach within a parallel interior-point framework to solve large-scale and highly nonlinear problems. Our numerical tests demonstrate that the inertia-free approach is as efficient as inertia detection via symmetric indefinite factorizations. We also demonstrate that the inertia-free approach can lead to reductions in solution time because it reduces the amount of convexification needed.
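
    The directional curvature test can be sketched in a few lines. The simplification below (dense NumPy linear algebra and a geometrically growing regularization, both illustrative assumptions) tests curvature along the computed step and convexifies on failure, with no eigenvalue-sign counting:

      import numpy as np

      def curvature_safe_step(W, g, kappa=1e-8, delta0=1e-4, max_tries=20):
          # Solve W d = -g, test the directional curvature d^T W d along the
          # step, and add delta*I until the test passes (inertia-free idea).
          delta = 0.0
          for _ in range(max_tries):
              Wd = W + delta * np.eye(len(g))
              d = np.linalg.solve(Wd, -g)
              if d @ (Wd @ d) >= kappa * (d @ d):   # curvature test along d
                  return d, delta
              delta = delta0 if delta == 0.0 else 10.0 * delta
          raise RuntimeError("convexification failed")

      W = np.array([[1.0, 0.0], [0.0, -2.0]])   # indefinite Hessian block
      g = np.array([0.0, 1.0])                  # this step hits negative curvature
      d, delta = curvature_safe_step(W, g)
      print(d, delta)                           # convexification was triggered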

  11. Large-Scale Sequencing: The Future of Genomic Sciences Colloquium

    Energy Technology Data Exchange (ETDEWEB)

    Margaret Riley; Merry Buckley

    2009-01-01

    Genetic sequencing and the various molecular techniques it has enabled have revolutionized the field of microbiology. Examining and comparing the genetic sequences borne by microbes - including bacteria, archaea, viruses, and microbial eukaryotes - provides researchers insights into the processes microbes carry out, their pathogenic traits, and new ways to use microorganisms in medicine and manufacturing. Until recently, sequencing entire microbial genomes has been laborious and expensive, and the decision to sequence the genome of an organism was made on a case-by-case basis by individual researchers and funding agencies. Now, thanks to new technologies, the cost and effort of sequencing is within reach for even the smallest facilities, and the ability to sequence the genomes of a significant fraction of microbial life may be possible. The availability of numerous microbial genomes will enable unprecedented insights into microbial evolution, function, and physiology. However, the current ad hoc approach to gathering sequence data has resulted in an unbalanced and highly biased sampling of microbial diversity. A well-coordinated, large-scale effort to target the breadth and depth of microbial diversity would result in the greatest impact. The American Academy of Microbiology convened a colloquium to discuss the scientific benefits of engaging in a large-scale, taxonomically-based sequencing project. A group of individuals with expertise in microbiology, genomics, informatics, ecology, and evolution deliberated on the issues inherent in such an effort and generated a set of specific recommendations for how best to proceed. The vast majority of microbes are presently uncultured and, thus, pose significant challenges to such a taxonomically-based approach to sampling genome diversity. However, we have yet to even scratch the surface of the genomic diversity among cultured microbes. A coordinated sequencing effort of cultured organisms is an appropriate place to begin

  12. Acoustic Studies of the Large Scale Ocean Circulation

    Science.gov (United States)

    Menemenlis, Dimitris

    1999-01-01

    Detailed knowledge of ocean circulation and its transport properties is prerequisite to an understanding of the earth's climate and of important biological and chemical cycles. Results from two recent experiments, THETIS-2 in the Western Mediterranean and ATOC in the North Pacific, illustrate the use of ocean acoustic tomography for studies of the large scale circulation. The attraction of acoustic tomography is its ability to sample and average the large-scale oceanic thermal structure, synoptically, along several sections, and at regular intervals. In both studies, the acoustic data are compared to, and then combined with, general circulation models, meteorological analyses, satellite altimetry, and direct measurements from ships. Both studies provide complete regional descriptions of the time-evolving, three-dimensional, large scale circulation, albeit with large uncertainties. The studies raise serious issues about existing ocean observing capability and provide guidelines for future efforts.

  13. Human pescadillo induces large-scale chromatin unfolding

    Institute of Scientific and Technical Information of China (English)

    ZHANG Hao; FANG Yan; HUANG Cuifen; YANG Xiao; YE Qinong

    2005-01-01

    The human pescadillo gene encodes a protein with a BRCT domain. Pescadillo plays an important role in DNA synthesis, cell proliferation and transformation. Since BRCT domains have been shown to induce large-scale chromatin unfolding, we tested the role of Pescadillo in the regulation of large-scale chromatin unfolding. To this end, we isolated the coding region of Pescadillo from human mammary MCF10A cells. Compared with the reported sequence, the isolated Pescadillo contains an in-frame deletion from amino acid 580 to 582. Targeting Pescadillo to an amplified, lac operator-containing chromosome region in the mammalian genome results in large-scale chromatin decondensation. This unfolding activity maps to the BRCT domain of Pescadillo. These data provide a new clue to understanding the vital role of Pescadillo.

  14. Transport of Large Scale Poloidal Flux in Black Hole Accretion

    CERN Document Server

    Beckwith, Kris; Krolik, Julian H

    2009-01-01

    We perform a global, three-dimensional GRMHD simulation of an accretion torus embedded in a large scale vertical magnetic field orbiting a Schwarzschild black hole. This simulation investigates how a large scale vertical field evolves within a turbulent accretion disk and whether global magnetic field configurations suitable for launching jets and winds can develop. We identify a "coronal mechanism" of magnetic flux motion, which dominates the global flux evolution. In this coronal mechanism, magnetic stresses driven by orbital shear create large-scale half-loops of magnetic field that stretch radially inward and then reconnect, leading to discontinuous jumps in the location of magnetic flux. This mechanism is supplemented by a smaller amount of flux advection in the accretion flow proper. Because the black hole in this case does not rotate, the magnetic flux on the horizon determines the mean magnetic field strength in the funnel around the disk axis; this field strength is regulated by a combination of th...

  15. Large Scale Magnetohydrodynamic Dynamos from Cylindrical Differentially Rotating Flows

    CERN Document Server

    Ebrahimi, F

    2015-01-01

    For cylindrical differentially rotating plasmas threaded with a uniform vertical magnetic field, we study large-scale magnetic field generation from finite amplitude perturbations using analytic theory and direct numerical simulations. Analytically, we impose helical fluctuations, a seed field, and a background flow and use quasi-linear theory for a single mode. The predicted large-scale field growth agrees with numerical simulations in which the magnetorotational instability (MRI) arises naturally. The vertically and azimuthally averaged toroidal field is generated by a fluctuation-induced EMF that depends on differential rotation. Given fluctuations, the method also predicts large-scale field growth for MRI-stable rotation profiles and flows with no rotation but shear.

  16. A relativistic signature in large-scale structure

    Science.gov (United States)

    Bartolo, Nicola; Bertacca, Daniele; Bruni, Marco; Koyama, Kazuya; Maartens, Roy; Matarrese, Sabino; Sasaki, Misao; Verde, Licia; Wands, David

    2016-09-01

    In General Relativity, the constraint equation relating metric and density perturbations is inherently nonlinear, leading to an effective non-Gaussianity in the dark matter density field on large scales, even if the primordial metric perturbation is Gaussian. Intrinsic non-Gaussianity in the large-scale dark matter overdensity in GR is real and physical. However, the variance smoothed on a local physical scale is not correlated with the large-scale curvature perturbation, so that there is no relativistic signature in the galaxy bias when using the simplest model of bias. It is an open question whether the observable mass proxies such as luminosity or weak lensing correspond directly to the physical mass in the simple halo bias model. If not, there may be observables that encode this relativistic signature.

  17. Balancing modern Power System with large scale of wind power

    DEFF Research Database (Denmark)

    Basit, Abdul; Altin, Müfit; Hansen, Anca Daniela

    2014-01-01

    Power system operators must ensure robust, secure and reliable power system operation even with a large scale integration of wind power. Electricity generated from intermittent wind in large proportion may impact the control of power system balance and thus deviations in the power system...... to be analysed with improved analytical tools and techniques. This paper proposes techniques for the active power balance control in future power systems with large scale wind power integration, where the power balancing model provides the hour-ahead dispatch plan with reduced planning horizon and the real time...... frequency in small or islanded power systems or tie line power flows in interconnected power systems. Therefore, the large scale integration of wind power into the power system strongly concerns the secure and stable grid operation. To ensure stable power system operation, the evolving power system has...

  18. Large-scale networks in engineering and life sciences

    CERN Document Server

    Findeisen, Rolf; Flockerzi, Dietrich; Reichl, Udo; Sundmacher, Kai

    2014-01-01

    This edited volume provides insights into and tools for the modeling, analysis, optimization, and control of large-scale networks in the life sciences and in engineering. Large-scale systems are often the result of networked interactions between a large number of subsystems, and their analysis and control are becoming increasingly important. The chapters of this book present the basic concepts and theoretical foundations of network theory and discuss its applications in different scientific areas such as biochemical reactions, chemical production processes, systems biology, electrical circuits, and mobile agents. The aim is to identify common concepts, to understand the underlying mathematical ideas, and to inspire discussions across the borders of the various disciplines.  The book originates from the interdisciplinary summer school “Large Scale Networks in Engineering and Life Sciences” hosted by the International Max Planck Research School Magdeburg, September 26-30, 2011, and will therefore be of int...

  19. The Internet As a Large-Scale Complex System

    Science.gov (United States)

    Park, Kihong; Willinger, Walter

    2005-06-01

    The Internet may be viewed as a "complex system" with diverse features and many components that can give rise to unexpected emergent phenomena, revealing much about its own engineering. This book brings together chapter contributions from a workshop held at the Santa Fe Institute in March 2001. This volume captures a snapshot of some features of the Internet that may be fruitfully approached using a complex systems perspective, meaning using interdisciplinary tools and methods to tackle the subject area. The Internet penetrates the socioeconomic fabric of everyday life; a broader and deeper grasp of the Internet may be needed to meet the challenges facing the future. The resulting empirical data have already proven to be invaluable for gaining novel insights into the network's spatio-temporal dynamics, and can be expected to become even more important when trying to explain the Internet's complex and emergent behavior in terms of elementary networking-based mechanisms. The discoveries of fractal or self-similar network traffic traces, power-law behavior in network topology and World Wide Web connectivity are instances of unsuspected, emergent system traits. Another important factor at the heart of fair, efficient, and stable sharing of network resources is user behavior. Network systems, when inhabited by selfish or greedy users, take on the traits of a noncooperative multi-party game, and their stability and efficiency are integral to understanding the overall system and its dynamics. Lastly, fault-tolerance and robustness of large-scale network systems can exhibit spatial and temporal correlations whose effective analysis and management may benefit from rescaling techniques applied in certain physical and biological systems. The present book will bring together several of the leading workers involved in the analysis of complex systems with the future development of the Internet.

  20. From Systematic Errors to Cosmology Using Large-Scale Structure

    Science.gov (United States)

    Huterer, Dragan

    We propose to carry out a two-pronged program to significantly improve links between galaxy surveys and constraints on primordial cosmology and fundamental physics. We will first develop the methodology to self-calibrate the survey, that is, determine the large-angle calibration systematics internally from the survey. We will use this information to correct biases that propagate from the largest to smaller angular scales. Our approach for tackling the systematics is very complementary to existing ones, in particular in the sense that it does not assume knowledge of specific systematic maps or templates. It is timely to undertake these analyses, since none of the currently known methods addresses the multiplicative effects of large-angle calibration errors that contaminate the small-scale signal and present one of the most significant sources of error in the large-scale structure. The second part of the proposal is to precisely quantify the statistical and systematic errors in the reconstruction of the Integrated Sachs-Wolfe (ISW) contribution to the cosmic microwave background (CMB) sky map using information from galaxy surveys. Unlike the ISW contributions to CMB power, the ISW map reconstruction has not been studied in detail to date. We will create a nimble plug-and-play pipeline to ascertain how reliably a map from an arbitrary LSS survey can be used to separate the late-time and early-time contributions to CMB anisotropy at large angular scales. We will pay particular attention to partial sky coverage, incomplete redshift information, finite redshift range, and imperfect knowledge of the selection function for the galaxy survey. Our work should serve as the departure point for a variety of implications in cosmology, including the physical origin of the large-angle CMB "anomalies".

  1. Cosmological parameters from large scale structure - geometric versus shape information

    Science.gov (United States)

    Hamann, Jan; Hannestad, Steen; Lesgourgues, Julien; Rampf, Cornelius; Wong, Yvonne Y. Y.

    2010-07-01

    The matter power spectrum as derived from large scale structure (LSS) surveys contains two important and distinct pieces of information: an overall smooth shape and the imprint of baryon acoustic oscillations (BAO). We investigate the separate impact of these two types of information on cosmological parameter estimation for current data, and show that for the simplest cosmological models, the broad-band shape information currently contained in the SDSS DR7 halo power spectrum (HPS) is by far superseded by geometric information derived from the baryonic features. An immediate corollary is that contrary to popular beliefs, the upper limit on the neutrino mass mν presently derived from LSS combined with cosmic microwave background (CMB) data does not in fact arise from the possible small-scale power suppression due to neutrino free-streaming, if we limit the model framework to minimal ΛCDM+mν. However, in more complicated models, such as those extended with extra light degrees of freedom and a dark energy equation of state parameter w differing from -1, shape information becomes crucial for the resolution of parameter degeneracies. This conclusion will remain true even when data from the Planck spacecraft are combined with SDSS DR7 data. In the course of our analysis, we update both the BAO likelihood function by including an exact numerical calculation of the time of decoupling, as well as the HPS likelihood, by introducing a new dewiggling procedure that generalises the previous approach to models with an arbitrary sound horizon at decoupling. These changes allow a consistent application of the BAO and HPS data sets to a much wider class of models, including the ones considered in this work. All the cases considered here are compatible with the conservative 95%-bounds ∑mν < 1.16eV, Neff = 4.8±2.0.

  2. GAS MIXING ANALYSIS IN A LARGE-SCALED SALTSTONE FACILITY

    Energy Technology Data Exchange (ETDEWEB)

    Lee, S

    2008-05-28

    Computational fluid dynamics (CFD) methods have been used to estimate the flow patterns, mainly driven by temperature gradients, inside the vapor space of a large-scale Saltstone vault facility at the Savannah River Site (SRS). The purpose of this work is to examine the gas motions inside the vapor space under the current vault configurations by taking a three-dimensional transient momentum-energy coupled approach for the vapor space domain of the vault. The modeling calculations were based on prototypic vault geometry and expected normal operating conditions as defined by Waste Solidification Engineering. The modeling analysis was focused on the air flow patterns near the ventilated corner zones of the vapor space inside the Saltstone vault. The turbulence behavior and natural convection mechanism used in the present model were benchmarked against the literature information and theoretical results. The verified model was applied to the Saltstone vault geometry for the transient assessment of the air flow patterns inside the vapor space of the vault region using the potential operating conditions. The baseline model considered two cases for the estimation of the flow patterns within the vapor space: a reference nominal case, and a case with a negative temperature gradient between the roof inner surface and the top grout surface, intended as a potential bounding condition. The flow patterns of the vapor space calculated by the CFD model demonstrate that ambient air comes into the vapor space of the vault through the lower-end ventilation hole and is heated by a Benard-cell type circulation before leaving the vault via the higher-end ventilation hole. The calculated results are consistent with the literature information. Detailed results and the cases considered in the calculations will be discussed here.

  3. CLAST: CUDA implemented large-scale alignment search tool.

    Science.gov (United States)

    Yano, Masahiro; Mori, Hiroshi; Akiyama, Yutaka; Yamada, Takuji; Kurokawa, Ken

    2014-12-11

    Metagenomics is a powerful methodology to study microbial communities, but it is highly dependent on nucleotide sequence similarity searching against sequence databases. Metagenomic analyses with next-generation sequencing technologies produce enormous numbers of reads from microbial communities, and many reads are derived from microbes whose genomes have not yet been sequenced, limiting the usefulness of existing sequence similarity search tools. Therefore, there is a clear need for a sequence similarity search tool that can rapidly detect weak similarity in large datasets. We developed a tool, which we named CLAST (CUDA implemented large-scale alignment search tool), that enables analyses of millions of reads and thousands of reference genome sequences, and runs on NVIDIA Fermi architecture graphics processing units. CLAST has four main advantages over existing alignment tools. First, CLAST was capable of identifying sequence similarities ~80.8 times faster than BLAST and 9.6 times faster than BLAT. Second, CLAST executes global alignment as the default (local alignment is also an option), enabling CLAST to assign reads to taxonomic and functional groups based on evolutionarily distant nucleotide sequences with high accuracy. Third, CLAST does not need a preprocessed sequence database like Burrows-Wheeler Transform-based tools, and this enables CLAST to incorporate large, frequently updated sequence databases. Fourth, CLAST has modest memory requirements and runs on a standard computer or server node. CLAST achieved very high speed (similar to the Burrows-Wheeler Transform-based Bowtie 2 for long reads) and sensitivity (equal to BLAST, BLAT, and FR-HIT) without the need for extensive database preprocessing or a specialized computing platform. Our results demonstrate that CLAST has the potential to be one of the most powerful and realistic approaches to analyze the massive amount of sequence data from next-generation sequencing technologies.
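
    CLAST's default mode is global alignment; as background, the classic Needleman-Wunsch dynamic program that such tools accelerate can be sketched as follows (a textbook scoring scheme, not CLAST's GPU implementation):

      def global_align(a, b, match=1, mismatch=-1, gap=-2):
          # Needleman-Wunsch global alignment score.
          n, m = len(a), len(b)
          # score[i][j]: best score aligning a[:i] with b[:j]
          score = [[0] * (m + 1) for _ in range(n + 1)]
          for i in range(1, n + 1):
              score[i][0] = i * gap
          for j in range(1, m + 1):
              score[0][j] = j * gap
          for i in range(1, n + 1):
              for j in range(1, m + 1):
                  diag = score[i - 1][j - 1] + (match if a[i - 1] == b[j - 1] else mismatch)
                  score[i][j] = max(diag, score[i - 1][j] + gap, score[i][j - 1] + gap)
          return score[n][m]

      print(global_align("GATTACA", "GATGACA"))   # 6 matches, 1 mismatch: score 5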

  4. Large-scale synthesis of YSZ nanopowder by Pechini method

    Indian Academy of Sciences (India)

    Morteza Hajizadeh-Oghaz; Reza Shoja Razavi; Mohammadreza Loghman Estarki

    2014-08-01

    Yttria-stabilized zirconia nanopowders were synthesized on a relatively large scale using the Pechini method. In the present paper, nearly spherical yttria-stabilized zirconia nanopowders with a tetragonal structure were synthesized by the Pechini process from zirconium oxynitrate hexahydrate, yttrium nitrate, citric acid and ethylene glycol. The phase and structural analyses were accomplished by X-ray diffraction; morphological analysis was carried out by field emission scanning electron microscopy and transmission electron microscopy. The results revealed nearly spherical yttria-stabilized zirconia powder with a tetragonal crystal structure and a chemical purity of 99.1%, as determined by inductively coupled plasma optical emission spectroscopy.

  5. Practical Large Scale Syntheses of New Drug Candidates

    Institute of Scientific and Technical Information of China (English)

    Hui-Yin; Li

    2001-01-01

    This presentation will focus on practical large scale syntheses of lead compounds and drug candidates from three major therapeutic areas from the DuPont Pharmaceuticals Research Laboratory: 1) DMP777, a selective, non-toxic, orally active human elastase inhibitor; 2) DMP754, a potent glycoprotein IIb/IIIa antagonist; 3) R-warfarin, the pure enantiomeric form of warfarin. The key technology used for preparing these drug candidates is asymmetric hydrogenation under very mild reaction conditions, which produced very high quality final products at large scale (>99% de, >99 A% and >99 wt%). Some practical and GMP aspects of process development will also be discussed.

  6. Statistical equilibria of large scales in dissipative hydrodynamic turbulence

    CERN Document Server

    Dallas, Vassilios; Alexakis, Alexandros

    2015-01-01

    We present a numerical study of the statistical properties of three-dimensional dissipative turbulent flows at scales larger than the forcing scale. Our results indicate that the large scale flow can be described to a large degree by the truncated Euler equations with the predictions of the zero flux solutions given by absolute equilibrium theory, both for helical and non-helical flows. Thus, the functional shape of the large scale spectra can be predicted provided that scales sufficiently larger than the forcing length scale but also sufficiently smaller than the box size are examined. Deviations from the predictions of absolute equilibrium are discussed.

  7. Fatigue Analysis of Large-scale Wind turbine

    Directory of Open Access Journals (Sweden)

    Zhu Yongli

    2017-01-01

    Full Text Available This paper investigates fatigue damage of the top flange of a large-scale wind turbine generator. It establishes a finite element model of the top flange connection system with the finite element analysis software MSC.Marc/Mentat, analyzes its fatigue strain, simulates the flange fatigue working-condition loads with the Bladed software, acquires the flange fatigue load spectrum with the rain-flow counting method and, finally, carries out the fatigue analysis of the top flange with the fatigue analysis software MSC.Fatigue and Palmgren-Miner linear cumulative damage theory. The results provide a new approach to flange fatigue analysis of large-scale wind turbine generators and possess practical engineering value.
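
    The Palmgren-Miner step of such an analysis is easy to illustrate. The sketch below assumes a power-law S-N curve with invented constants and a toy rain-flow load spectrum; it is not the paper's flange model:

      def miner_damage(spectrum, sn_C=1e12, sn_m=3.0):
          # Palmgren-Miner linear damage: D = sum(n_i / N_i), with allowable
          # cycles N_i from an illustrative power-law S-N curve N = C * S**(-m).
          damage = 0.0
          for stress_range, n_cycles in spectrum:   # pairs from rain-flow counting
              N_allow = sn_C * stress_range ** (-sn_m)
              damage += n_cycles / N_allow
          return damage                             # failure predicted at D >= 1

      # Toy load spectrum: (stress range in MPa, counted cycles)
      spectrum = [(120.0, 2.0e5), (80.0, 1.5e6), (40.0, 1.0e7)]
      print(f"cumulative damage D = {miner_damage(spectrum):.3f}")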

  8. Large-Scale Inverse Problems and Quantification of Uncertainty

    CERN Document Server

    Biegler, Lorenz; Ghattas, Omar

    2010-01-01

    Large-scale inverse problems and associated uncertainty quantification has become an important area of research, central to a wide range of science and engineering applications. Written by leading experts in the field, Large-scale Inverse Problems and Quantification of Uncertainty focuses on the computational methods used to analyze and simulate inverse problems. The text provides PhD students, researchers, advanced undergraduate students, and engineering practitioners with the perspectives of researchers in areas of inverse problems and data assimilation, ranging from statistics and large-sca

  9. [Issues of large scale tissue culture of medicinal plant].

    Science.gov (United States)

    Lv, Dong-Mei; Yuan, Yuan; Zhan, Zhi-Lai

    2014-09-01

    In order to increase the yield and quality of medicinal plants and enhance the competitiveness of the medicinal plant industry in our country, this paper analyzes the status, problems, and countermeasures of large-scale tissue culture of medicinal plants. Although biotechnology is one of the most efficient and promising means of medicinal plant production, it still faces problems such as the stability of the material, the safety of transgenic medicinal plants, and the optimization of culture conditions. Establishing a sound evaluation system according to the characteristics of the medicinal plant is the key measure to assure the sustainable development of large-scale tissue culture of medicinal plants.

  10. The fractal octahedron network of the large scale structure

    CERN Document Server

    Battaner, E

    1998-01-01

    In a previous article, we have proposed that the large scale structure network generated by large scale magnetic fields could consist of a network of octahedra only contacting at their vertexes. Assuming such a network could arise at different scales producing a fractal geometry, we study here its properties, and in particular how a sub-octahedron network can be inserted within an octahedron of the large network. We deduce that the scale of the fractal structure would range from $\\approx$100 Mpc, i.e. the scale of the deepest surveys, down to about 10 Mpc, as other smaller scale magnetic fields were probably destroyed in the radiation dominated Universe.

  11. Large-scale streaming motions and microwave background anisotropies

    Energy Technology Data Exchange (ETDEWEB)

    Martinez-Gonzalez, E.; Sanz, J.L. (Cantabria Universidad, Santander (Spain))

    1989-12-01

    The minimal microwave background radiation anisotropy is calculated on each angular scale implied by the existence of large-scale streaming motions. These minimal anisotropies, due to the Sachs-Wolfe effect, are obtained for different experiments, and give quite different results from those found in previous work. They are not in conflict with present theories of galaxy formation. Upper limits are imposed on the scale at which large-scale streaming motions can occur by extrapolating results from present double-beam-switching experiments. 17 refs.

  12. Distributed chaos tuned to large scale coherent motions in turbulence

    CERN Document Server

    Bershadskii, A

    2016-01-01

    It is shown, using direct numerical simulations and laboratory experimental data, that distributed chaos is often tuned to large scale coherent motions in anisotropic inhomogeneous turbulence. The examples considered are: a fully developed turbulent boundary layer (range of coherence: $14 < y^{+} < 80$), turbulent thermal convection (in a horizontal cylinder), and Couette-Taylor flow. Two ways of tuning are described: one via the fundamental frequency (wavenumber) and another via a subharmonic (period doubling). In the second case the large scale coherent motions are a natural component of distributed chaos. In all considered cases spontaneous breaking of space translational symmetry is accompanied by reflexional symmetry breaking.

  13. Topology Optimization of Large Scale Stokes Flow Problems

    DEFF Research Database (Denmark)

    Aage, Niels; Poulsen, Thomas Harpsøe; Gersborg-Hansen, Allan

    2008-01-01

    This note considers topology optimization of large scale 2D and 3D Stokes flow problems using parallel computations. We solve problems with up to 1.125.000 elements in 2D and 128.000 elements in 3D on a shared memory computer consisting of Sun UltraSparc IV CPUs.

  14. Large-scale liquid scintillation detectors for solar neutrinos

    Energy Technology Data Exchange (ETDEWEB)

    Benziger, Jay B.; Calaprice, Frank P. [Princeton University Princeton, Princeton, NJ (United States)

    2016-04-15

    Large-scale liquid scintillation detectors are capable of providing spectral yields of the low energy solar neutrinos. These detectors require > 100 tons of liquid scintillator with high optical and radiopurity. In this paper requirements for low-energy neutrino detection by liquid scintillation are specified and the procedures to achieve low backgrounds in large-scale liquid scintillation detectors for solar neutrinos are reviewed. The designs, operations and achievements of Borexino, KamLAND and SNO+ in measuring the low-energy solar neutrino fluxes are reviewed. (orig.)

  15. Optimal Dispatching of Large-scale Water Supply System

    Institute of Scientific and Technical Information of China (English)

    2003-01-01

    This paper deals with the use of optimal control techniques in large-scale water distribution networks. According to the network characteristics and the actual state of water supply systems in China, an implicit model, which may be solved with a hierarchical optimization method, is established. In particular, based on analyses of water supply systems containing variable-speed pumps, a software tool has been developed successfully. The application of this model to the city of Shenyang (China) is compared to the experience-based strategy. The results of this study show that the developed model is a very promising optimization method for controlling large-scale water supply systems.

  16. Reliability Evaluation considering Structures of a Large Scale Wind Farm

    DEFF Research Database (Denmark)

    Shin, Je-Seok; Cha, Seung-Tae; Wu, Qiuwei

    2012-01-01

    evaluation on the wind farm is necessarily required. Also, because a large scale offshore wind farm has a long repair time and a high repair cost as well as a high investment cost, it is essential to take the economic aspect into account. One of the methods to efficiently build and operate a wind farm is to construct......Wind energy is one of the most widely used renewable energy resources. Wind power has been connected to the grid as large scale wind farms made up of dozens of wind turbines, and the scale of wind farms has increased recently. Due to the intermittent and variable wind source, reliability

  18. Fast paths in large-scale dynamic road networks

    CERN Document Server

    Nannicini, Giacomo; Barbier, Gilles; Krob, Daniel; Liberti, Leo

    2007-01-01

    Efficiently computing fast paths in large scale dynamic road networks (where dynamic traffic information is known over a part of the network) is a practical problem faced by several traffic information service providers who wish to offer realistic fast path computation to GPS-enabled vehicles. The heuristic solution method we propose builds on a highway-hierarchy shortest path algorithm for static large-scale networks; we maintain a static highway hierarchy and perform each query on the dynamically evaluated network.
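
    The query problem can be illustrated with a time-dependent variant of Dijkstra's algorithm, sketched below with hypothetical edge-cost functions; this assumes FIFO (non-overtaking) travel times and is not the authors' highway-hierarchy method:

      import heapq

      def fastest_path(graph, src, dst, t0=0.0):
          # Dijkstra where each edge cost is a function travel_time(departure),
          # as when dynamic traffic information covers part of the network.
          dist = {src: t0}
          pq = [(t0, src)]
          while pq:
              t, u = heapq.heappop(pq)
              if u == dst:
                  return t
              if t > dist.get(u, float("inf")):
                  continue
              for v, travel_time in graph.get(u, []):
                  arrival = t + travel_time(t)
                  if arrival < dist.get(v, float("inf")):
                      dist[v] = arrival
                      heapq.heappush(pq, (arrival, v))
          return float("inf")

      static = lambda w: (lambda t: w)                  # edge with no traffic data
      rush_hour = lambda t: 10.0 if t < 60 else 4.0     # congested early on
      graph = {"A": [("B", static(5)), ("C", rush_hour)],
               "B": [("C", static(2))],
               "C": []}
      print(fastest_path(graph, "A", "C"))              # detour via B: 5 + 2 = 7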

  19. Large-Scale Spray Releases: Initial Aerosol Test Results

    Energy Technology Data Exchange (ETDEWEB)

    Schonewill, Philip P.; Gauglitz, Phillip A.; Bontha, Jagannadha R.; Daniel, Richard C.; Kurath, Dean E.; Adkins, Harold E.; Billing, Justin M.; Burns, Carolyn A.; Davis, James M.; Enderlin, Carl W.; Fischer, Christopher M.; Jenks, Jeromy WJ; Lukins, Craig D.; MacFarlan, Paul J.; Shutthanandan, Janani I.; Smith, Dennese M.

    2012-12-01

    One of the events postulated in the hazard analysis at the Waste Treatment and Immobilization Plant (WTP) and other U.S. Department of Energy (DOE) nuclear facilities is a breach in process piping that produces aerosols with droplet sizes in the respirable range. The current approach for predicting the size and concentration of aerosols produced in a spray leak involves extrapolating from correlations reported in the literature. These correlations are based on results obtained from small engineered spray nozzles using pure liquids with Newtonian fluid behavior. The narrow ranges of physical properties on which the correlations are based do not cover the wide range of slurries and viscous materials that will be processed in the WTP and across processing facilities in the DOE complex. Two key technical areas were identified where testing results were needed to improve the technical basis by reducing the uncertainty due to extrapolating existing literature results. The first technical need was to quantify the role of slurry particles in small breaches where the slurry particles may plug and result in substantially reduced, or even negligible, respirable fraction formed by high-pressure sprays. The second technical need was to determine the aerosol droplet size distribution and volume from prototypic breaches and fluids, specifically including sprays from larger breaches with slurries where data from the literature are scarce. To address these technical areas, small- and large-scale test stands were constructed and operated with simulants to determine aerosol release fractions and generation rates from a range of breach sizes and geometries. The properties of the simulants represented the range of properties expected in the WTP process streams and included water, sodium salt solutions, slurries containing boehmite or gibbsite, and a hazardous chemical simulant. The effect of anti-foam agents was assessed with most of the simulants. Orifices included round holes and

  20. GroFi: Large-scale fiber placement research facility

    Directory of Open Access Journals (Sweden)

    Christian Krombholz

    2016-03-01

    and processes for large-scale composite components. Due to the use of coordinated and simultaneously working layup units, a high flexibility of the research platform is achieved. This allows the investigation of new materials, technologies and processes on both small coupons and large components such as wing covers or fuselage skins.

  1. Flexibility in design of large-scale methanol plants

    Institute of Scientific and Technical Information of China (English)

    Esben Lauge Sørensen; Helge Holm-Larsen; Haldor Topsøe A/S

    2006-01-01

    This paper presents a cost effective design for large-scale methanol production. It is demonstrated how recent technological progress can be utilised to design a methanol plant which is inexpensive and easy to operate, while at the same time very robust towards variations in feedstock composition and product specifications.

  2. Large-scale search for dark-matter axions

    Energy Technology Data Exchange (ETDEWEB)

    Hagmann, C.A., LLNL; Kinion, D.; Stoeffl, W.; Van Bibber, K.; Daw, E.J. [Massachusetts Inst. of Tech., Cambridge, MA (United States); McBride, J. [Massachusetts Inst. of Tech., Cambridge, MA (United States); Peng, H. [Massachusetts Inst. of Tech., Cambridge, MA (United States); Rosenberg, L.J. [Massachusetts Inst. of Tech., Cambridge, MA (United States); Xin, H. [Massachusetts Inst. of Tech., Cambridge, MA (United States); Laveigne, J. [Florida Univ., Gainesville, FL (United States); Sikivie, P. [Florida Univ., Gainesville, FL (United States); Sullivan, N.S. [Florida Univ., Gainesville, FL (United States); Tanner, D.B. [Florida Univ., Gainesville, FL (United States); Moltz, D.M. [Lawrence Berkeley Lab., CA (United States); Powell, J. [Lawrence Berkeley Lab., CA (United States); Clarke, J. [Lawrence Berkeley Lab., CA (United States); Nezrick, F.A. [Fermi National Accelerator Lab., Batavia, IL (United States); Turner, M.S. [Fermi National Accelerator Lab., Batavia, IL (United States); Golubev, N.A. [Russian Academy of Sciences, Moscow (Russia); Kravchuk, L.V. [Russian Academy of Sciences, Moscow (Russia)

    1998-01-01

    Early results from a large-scale search for dark matter axions are presented. In this experiment, axions constituting our dark-matter halo may be resonantly converted to monochromatic microwave photons in a high-Q microwave cavity permeated by a strong magnetic field. Sensitivity at the level of one important axion model (KSVZ) has been demonstrated.

  3. Temporal Variation of Large Scale Flows in the Solar Interior

    Indian Academy of Sciences (India)

    Sarbani Basu; H. M. Antia

    2000-09-01

    We attempt to detect short-term temporal variations in the rotation rate and other large scale velocity fields in the outer part of the solar convection zone using the ring diagram technique applied to Michelson Doppler Imager (MDI) data. The measured velocity field shows variations by about 10 m/s on the scale of few days.

  4. Large-scale Homogenization of Bulk Materials in Mammoth Silos

    NARCIS (Netherlands)

    Schott, D.L.

    2004-01-01

    This doctoral thesis concerns the large-scale homogenization of bulk materials in mammoth silos. The objective of this research was to determine the best stacking and reclaiming method for homogenization in mammoth silos. For this purpose a simulation program was developed to estimate the homogeniza

  5. Quantized pressure control in large-scale nonlinear hydraulic networks

    NARCIS (Netherlands)

    Persis, Claudio De; Kallesøe, Carsten Skovmose; Jensen, Tom Nørgaard

    2010-01-01

    It was shown previously that semi-global practical pressure regulation at designated points of a large-scale nonlinear hydraulic network is guaranteed by distributed proportional controllers. For a correct implementation of the control laws, each controller, which is located at these designated poin

  6. Main Achievements of Cotton Large-scale Transformation System

    Institute of Scientific and Technical Information of China (English)

    LI Fu-guang; LIU Chuan-liang; WU Zhi-xia; ZHANG Chao-jun; ZHANG Xue-yan

    2008-01-01

    A large-scale cotton transformation system was established based on innovations in cotton transformation methods. It produces 8000 transgenic cotton plants per year by efficiently combining the Agrobacterium tumefaciens-mediated, pollen-tube pathway and biolistic methods. More than 1000 transgenic lines are selected from the transgenic plants with molecular-assisted breeding and conventional breeding methods.

  7. Segmentation by Large Scale Hypothesis Testing - Segmentation as Outlier Detection

    DEFF Research Database (Denmark)

    Darkner, Sune; Dahl, Anders Lindbjerg; Larsen, Rasmus

    2010-01-01

    locally. We propose a method based on large scale hypothesis testing with a consistent method for selecting an appropriate threshold for the given data. By estimating the background distribution we characterize the segment of interest as a set of outliers with a certain probability based on the estimated...
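
    The idea of segmentation as outlier detection can be sketched as follows; the robust Gaussian background fit and the fixed significance level are illustrative assumptions, not the paper's exact threshold-selection procedure:

      import numpy as np
      from scipy.stats import norm

      def segment_outliers(image, alpha=1e-3):
          # Fit a Gaussian background robustly (median and MAD, so the segment
          # itself does not skew the fit), then flag pixels whose one-sided
          # p-value falls below alpha as part of the segment.
          med = np.median(image)
          sigma = 1.4826 * np.median(np.abs(image - med))
          pvals = norm.sf(image, loc=med, scale=sigma)
          return pvals < alpha

      rng = np.random.default_rng(0)
      img = rng.normal(100.0, 5.0, (64, 64))
      img[20:24, 30:34] += 40.0                 # a bright object to segment
      mask = segment_outliers(img)
      print(mask.sum(), "pixels flagged")       # the 16 object pixels, plus a few
                                                # background false alarms at alpha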

  8. Large Scale Survey Data in Career Development Research

    Science.gov (United States)

    Diemer, Matthew A.

    2008-01-01

    Large scale survey datasets have been underutilized but offer numerous advantages for career development scholars, as they contain numerous career development constructs with large and diverse samples that are followed longitudinally. Constructs such as work salience, vocational expectations, educational expectations, work satisfaction, and…

  9. Large-scale coastal impact induced by a catastrophic storm

    DEFF Research Database (Denmark)

    Fruergaard, Mikkel; Andersen, Thorbjørn Joest; Johannessen, Peter N

    breaching. Our results demonstrate that violent, millennial-scale storms can trigger significant large-scale and long-term changes on barrier coasts, and that coastal changes assumed to take place over centuries or even millennia may occur in association with a single extreme storm event....

  10. Regeneration and propagation of reed grass for large-scale ...

    African Journals Online (AJOL)

    전서범

    2012-01-26

    containing different sucrose concentrations; this experiment found that 60 g L-1 ... All these uses of reeds require the large-scale rege- ... numbers of plant in a small space within a short time ... callus stock and grown in vitro were used in this study. .... presence of 4-FA were converted to friable and light-.

  11. Dual Decomposition for Large-Scale Power Balancing

    DEFF Research Database (Denmark)

    Halvgaard, Rasmus; Jørgensen, John Bagterp; Vandenberghe, Lieven

    2013-01-01

    Dual decomposition is applied to power balancing of flexible thermal storage units. The centralized large-scale problem is decomposed into smaller subproblems and solved locally by each unit in the Smart Grid. Convergence is achieved by coordinating the units' consumption through a negotiation...
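
    A minimal sketch of the scheme, assuming hypothetical quadratic unit costs and a simple subgradient price update as the negotiation step:

      import numpy as np

      def dual_balance(costs, demand, alpha=0.05, iters=500):
          # Units with cost c_i * u_i^2 must jointly meet `demand`. Each unit
          # minimizes c_i*u^2 - lam*u locally (solution u_i = lam / (2 c_i));
          # a coordinator raises or lowers the price lam with the imbalance.
          lam = 0.0
          for _ in range(iters):
              u = np.array([lam / (2.0 * c) for c in costs])   # local solves
              lam += alpha * (demand - u.sum())                # price update
          return u, lam

      u, lam = dual_balance(costs=[1.0, 2.0, 4.0], demand=7.0)
      print(np.round(u, 3), round(lam, 3))   # cheap units take larger shares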

  12. Large-Scale Assessment and English Language Learners with Disabilities

    Science.gov (United States)

    Liu, Kristin K.; Ward, Jenna M.; Thurlow, Martha L.; Christensen, Laurene L.

    2017-01-01

    This article highlights a set of principles and guidelines, developed by a diverse group of specialists in the field, for appropriately including English language learners (ELLs) with disabilities in large-scale assessments. ELLs with disabilities make up roughly 9% of the rapidly increasing ELL population nationwide. In spite of the small overall…

  13. Large scale radial stability density of Hill's equation

    NARCIS (Netherlands)

    Broer, Henk; Levi, Mark; Simo, Carles

    2013-01-01

    This paper deals with large scale aspects of Hill's equation $\ddot{x} + (a + b\,p(t))\,x = 0$, where p is periodic with a fixed period. In particular, the interest is the asymptotic radial density of the stability domain in the (a, b)-plane. It turns out that this density changes discontinuously in a certa
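
    Stability for a given (a, b) can be tested numerically via Floquet theory: integrate two independent solutions over one period and examine the trace of the monodromy matrix. The sketch below uses p(t) = cos t (the Mathieu case) as an illustrative choice:

      import numpy as np
      from scipy.integrate import solve_ivp

      def hill_stable(a, b, p=np.cos, T=2 * np.pi):
          # x'' + (a + b p(t)) x = 0 is stable iff the monodromy matrix M
          # (fundamental matrix after one period T) satisfies |trace M| < 2.
          def rhs(t, y):
              x, v = y
              return [v, -(a + b * p(t)) * x]
          cols = []
          for y0 in ([1.0, 0.0], [0.0, 1.0]):   # two independent solutions
              sol = solve_ivp(rhs, (0.0, T), y0, rtol=1e-10, atol=1e-12)
              cols.append(sol.y[:, -1])
          M = np.column_stack(cols)
          return abs(np.trace(M)) < 2.0

      # Away from a resonance tongue vs. inside the first tongue at a = 1/4:
      print(hill_stable(1.1, 0.1), hill_stable(0.25, 0.2))   # True False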

  14. Water Implications of Large-Scale Land Acquisitions in Ghana

    Directory of Open Access Journals (Sweden)

    Timothy Olalekan Williams

    2012-06-01

    The paper offers recommendations which can help the government to achieve its stated objective of developing a "policy framework and guidelines for large-scale land acquisitions by both local and foreign investors for biofuels that will protect the interests of investors and the welfare of Ghanaian farmers and landowners".

  15. Evaluating Large-scale National Public Management Reforms

    DEFF Research Database (Denmark)

    Breidahl, Karen Nielsen; Gjelstrup, Gunnar; Hansen, Morten Balle

    This article explores differences and similarities between two evaluations of large-scale administrative reforms which were carried out in the 2000s: The evaluation of the Norwegian NAV reform (EVANAV) and the evaluation of the Danish Local Government Reform (LGR). We provide a comparative analys...

  16. A Chain Perspective on Large-scale Number Systems

    NARCIS (Netherlands)

    Grijpink, J.H.A.M.

    2012-01-01

    As large-scale number systems gain significance in social and economic life (electronic communication, remote electronic authentication), the correct functioning and the integrity of public number systems take on crucial importance. They are needed to uniquely indicate people, objects or phenomena i

  17. Main Achievements of Cotton Large-scale Transformation System

    Institute of Scientific and Technical Information of China (English)

    2008-01-01

    Cotton large-scale transformation methods system was established based on innovation of cotton transformation methods.It obtains 8000 transgenic cotton plants per year by combining Agrobacterium tumefaciens-mediated,pollen-tube pathway and biolistic methods together efficiently.More than

  18. Newton Methods for Large Scale Problems in Machine Learning

    Science.gov (United States)

    Hansen, Samantha Leigh

    2014-01-01

    The focus of this thesis is on practical ways of designing optimization algorithms for minimizing large-scale nonlinear functions with applications in machine learning. Chapter 1 introduces the overarching ideas in the thesis. Chapters 2 and 3 are geared towards supervised machine learning applications that involve minimizing a sum of loss…
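
    A standard large-scale technique in this setting is the Hessian-free Newton-CG step, sketched below on a toy quadratic; in practice the Hessian-vector product comes from automatic differentiation rather than an explicit matrix:

      import numpy as np

      def newton_cg_step(grad, hess_vec, x, cg_iters=50, tol=1e-8):
          # Solve H d = -g by conjugate gradients using only Hessian-vector
          # products, so the Hessian is never formed or stored.
          g = grad(x)
          d = np.zeros_like(x)
          r = -g.copy()                 # residual of H d = -g at d = 0
          p = r.copy()
          rs = r @ r
          for _ in range(cg_iters):
              Hp = hess_vec(x, p)
              step = rs / (p @ Hp)
              d += step * p
              r -= step * Hp
              rs_new = r @ r
              if np.sqrt(rs_new) < tol:
                  break
              p = r + (rs_new / rs) * p
              rs = rs_new
          return d

      # Toy quadratic f(x) = 0.5 x^T A x - b^T x stands in for a training loss.
      A = np.array([[3.0, 1.0], [1.0, 2.0]])
      b = np.array([1.0, 1.0])
      grad = lambda x: A @ x - b
      hess_vec = lambda x, v: A @ v     # would be an autodiff product in ML
      x = np.zeros(2)
      print(x + newton_cg_step(grad, hess_vec, x))   # exact minimizer [0.2 0.4]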

  19. Large-Scale Machine Learning for Classification and Search

    Science.gov (United States)

    Liu, Wei

    2012-01-01

    With the rapid development of the Internet, nowadays tremendous amounts of data including images and videos, up to millions or billions, can be collected for training machine learning models. Inspired by this trend, this thesis is dedicated to developing large-scale machine learning techniques for the purpose of making classification and nearest…

  20. Cost Overruns in Large-scale Transportation Infrastructure Projects

    DEFF Research Database (Denmark)

    Cantarelli, Chantal C; Flyvbjerg, Bent; Molin, Eric J. E

    2010-01-01

    Managing large-scale transportation infrastructure projects is difficult due to frequent misinformation about the costs which results in large cost overruns that often threaten the overall project viability. This paper investigates the explanations for cost overruns that are given in the literature...

  1. Ultra-Large-Scale Systems: Scale Changes Everything

    Science.gov (United States)

    2008-03-06

    Briefing slide fragments (Ultra-Large-Scale Systems briefing, Linda Northrop, March 2008): statistical mechanics and complexity; networks are everywhere, with recurring "scale-free" structure (internet, yeast protein interactions); analogous dynamics; design representation and analysis; assimilation; determining and managing requirements.

  2. The Role of Plausible Values in Large-Scale Surveys

    Science.gov (United States)

    Wu, Margaret

    2005-01-01

    In large-scale assessment programs such as NAEP, TIMSS and PISA, students' achievement data sets provided for secondary analysts contain so-called "plausible values." Plausible values are multiple imputations of the unobservable latent achievement for each student. In this article it has been shown how plausible values are used to: (1) address…

  3. Large-scale data analysis using the Wigner function

    Science.gov (United States)

    Earnshaw, R. A.; Lei, C.; Li, J.; Mugassabi, S.; Vourdas, A.

    2012-04-01

    Large-scale data are analysed using the Wigner function. It is shown that the 'frequency variable' provides important information, which is lost with other techniques. The method is applied to 'sentiment analysis' in data from social networks and also to financial data.
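    The abstract does not spell out the estimator, so the following is a minimal discrete Wigner-Ville sketch in Python; the function name and the toy chirp are mine, frequency-axis conventions vary, and the authors' implementation may well differ.

      import numpy as np

      def wigner_ville(x):
          """Discrete (pseudo) Wigner-Ville distribution of a 1-D signal.

          Rows of the returned array index time, columns index frequency
          bins (up to the usual factor-of-two scaling of the WVD axis).
          """
          x = np.asarray(x, dtype=complex)
          N = len(x)
          W = np.zeros((N, N))
          for n in range(N):
              L = min(n, N - 1 - n)           # largest lag inside the signal
              taus = np.arange(-L, L + 1)
              r = np.zeros(N, dtype=complex)  # instantaneous autocorrelation
              r[taus % N] = x[n + taus] * np.conj(x[n - taus])
              W[n, :] = np.real(np.fft.fft(r))  # transform over the lag
          return W

      # toy example: a chirp whose instantaneous frequency rises with time
      t = np.arange(256)
      W = wigner_ville(np.exp(1j * 2 * np.pi * (0.05 + 0.001 * t) * t))
      print(W.shape)  # (256, 256)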

  4. Lessons from Large-Scale Renewable Energy Integration Studies: Preprint

    Energy Technology Data Exchange (ETDEWEB)

    Bird, L.; Milligan, M.

    2012-06-01

    In general, large-scale integration studies in Europe and the United States find that high penetrations of renewable generation are technically feasible with operational changes and increased access to transmission. This paper describes other key findings such as the need for fast markets, large balancing areas, system flexibility, and the use of advanced forecasting.

  5. VESPA: Very large-scale Evolutionary and Selective Pressure Analyses

    Directory of Open Access Journals (Sweden)

    Andrew E. Webb

    2017-06-01

    Background: Large-scale molecular evolutionary analyses of protein coding sequences require a number of preparatory inter-related steps, from finding gene families to generating alignments and phylogenetic trees and assessing selective pressure variation. Each phase of these analyses can represent significant challenges, particularly when working with entire proteomes (all protein coding sequences in a genome) from a large number of species. Methods: We present VESPA, software capable of automating a selective pressure analysis using codeML in addition to the preparatory analyses and summary statistics. VESPA is written in Python and Perl and is designed to run within a UNIX environment. Results: We have benchmarked VESPA and our results show that the method is consistent, performs well on both large-scale and smaller-scale datasets, and produces results in line with previously published datasets. Discussion: Large-scale gene family identification, sequence alignment, and phylogeny reconstruction are all important aspects of large-scale molecular evolutionary analyses. VESPA provides flexible software for simplifying these processes along with downstream selective pressure variation analyses. The software automatically interprets results from codeML and produces simplified summary files to assist the user in better understanding the results. VESPA may be found at the following website: http://www.mol-evol.org/VESPA.

  6. High-Throughput, Large-Scale SNP Genotyping: Bioinformatics Considerations

    OpenAIRE

    Margetic, Nino

    2004-01-01

    In order to provide a high-throughput, large-scale genotyping facility at the national level we have developed a set of inter-dependent information systems. A combination of commercial, publicly-available and in-house developed tools links a series of data repositories based both on flat files and relational databases providing an almost complete semi-automated pipeline.

  7. Chain Analysis for large-scale Communication systems

    NARCIS (Netherlands)

    Grijpink, Jan

    2010-01-01

    The chain concept is introduced to explain how large-scale information infrastructures so often fail and sometimes even backfire. Next, the assessment framework of the doctrine of Chain-computerisation and its chain analysis procedure are outlined. In this procedure chain description precedes assess

  8. Large-Scale Machine Learning for Classification and Search

    Science.gov (United States)

    Liu, Wei

    2012-01-01

    With the rapid development of the Internet, nowadays tremendous amounts of data including images and videos, up to millions or billions, can be collected for training machine learning models. Inspired by this trend, this thesis is dedicated to developing large-scale machine learning techniques for the purpose of making classification and nearest…

  9. Newton Methods for Large Scale Problems in Machine Learning

    Science.gov (United States)

    Hansen, Samantha Leigh

    2014-01-01

    The focus of this thesis is on practical ways of designing optimization algorithms for minimizing large-scale nonlinear functions with applications in machine learning. Chapter 1 introduces the overarching ideas in the thesis. Chapters 2 and 3 are geared towards supervised machine learning applications that involve minimizing a sum of loss…

  10. How large-scale subsidence affects stratocumulus transitions (discussion paper)

    NARCIS (Netherlands)

    Van der Dussen, J.J.; De Roode, S.R.; Siebesma, A.P.

    2015-01-01

    Some climate modeling results suggest that the Hadley circulation might weaken in a future climate, causing a subsequent reduction in the large-scale subsidence velocity in the subtropics. In this study we analyze the cloud liquid water path (LWP) budget from large-eddy simulation (LES) results of

  11. Large-Scale Innovation and Change in UK Higher Education

    Science.gov (United States)

    Brown, Stephen

    2013-01-01

    This paper reflects on challenges universities face as they respond to change. It reviews current theories and models of change management, discusses why universities are particularly difficult environments in which to achieve large scale, lasting change and reports on a recent attempt by the UK JISC to enable a range of UK universities to employ…

  12. Measurement, Sampling, and Equating Errors in Large-Scale Assessments

    Science.gov (United States)

    Wu, Margaret

    2010-01-01

    In large-scale assessments, such as state-wide testing programs, national sample-based assessments, and international comparative studies, there are many steps involved in the measurement and reporting of student achievement. There are always sources of inaccuracies in each of the steps. It is of interest to identify the source and magnitude of…

  13. A Chain Perspective on Large-scale Number Systems

    NARCIS (Netherlands)

    Grijpink, J.H.A.M.

    2012-01-01

    As large-scale number systems gain significance in social and economic life (electronic communication, remote electronic authentication), the correct functioning and the integrity of public number systems take on crucial importance. They are needed to uniquely indicate people, objects or phenomena i

  14. Large-Scale Innovation and Change in UK Higher Education

    Science.gov (United States)

    Brown, Stephen

    2013-01-01

    This paper reflects on challenges universities face as they respond to change. It reviews current theories and models of change management, discusses why universities are particularly difficult environments in which to achieve large scale, lasting change and reports on a recent attempt by the UK JISC to enable a range of UK universities to employ…

  15. Primordial non-Gaussianity from the large scale structure

    CERN Document Server

    Desjacques, Vincent

    2010-01-01

    Primordial non-Gaussianity is a potentially powerful discriminant of the physical mechanisms that generated the cosmological fluctuations observed today. Any detection of non-Gaussianity would have profound implications for our understanding of cosmic structure formation. In this paper, we review past and current efforts in the search for primordial non-Gaussianity in the large scale structure of the Universe.

  16. Planck intermediate results XLII. Large-scale Galactic magnetic fields

    DEFF Research Database (Denmark)

    Adam, R.; Ade, P. A. R.; Alves, M. I. R.

    2016-01-01

    Recent models for the large-scale Galactic magnetic fields in the literature have been largely constrained by synchrotron emission and Faraday rotation measures. We use three different but representative models to compare their predicted polarized synchrotron and dust emission with that measured...

  17. Electric vehicles and large-scale integration of wind power

    DEFF Research Database (Denmark)

    Liu, Wen; Hu, Weihao; Lund, Henrik

    2013-01-01

    was 6.5% in 2009 and which plans to develop large-scale wind power. The results show that electric vehicles (EVs) have the ability to balance electricity demand and supply and to further wind power integration. In the best case, the energy system with EVs can increase wind power...

  18. Large scale solar district heating. Evaluation, modelling and designing - Appendices

    Energy Technology Data Exchange (ETDEWEB)

    Heller, A.

    2000-07-01

    The appendices present the following: A) CAD drawing of the Marstal CSHP design. B) Key values - large-scale solar heating in Denmark. C) Monitoring - a system description. D) WMO classification of pyranometers (solarimeters). E) The computer simulation model in TRNSYS. F) Selected papers by the author. (EHS)

  19. New Visions for Large Scale Networks: Research and Applications

    Data.gov (United States)

    Networking and Information Technology Research and Development, Executive Office of the President — This paper documents the findings of the March 12-14, 2001 Workshop on New Visions for Large-Scale Networks: Research and Applications. The workshop's objectives were...

  20. Combining Flux Balance and Energy Balance Analysis for Large-Scale Metabolic Network: Biochemical Circuit Theory for Analysis of Large-Scale Metabolic Networks

    Science.gov (United States)

    Beard, Daniel A.; Liang, Shou-Dan; Qian, Hong; Biegel, Bryan (Technical Monitor)

    2001-01-01

    Predicting behavior of large-scale biochemical metabolic networks represents one of the greatest challenges of bioinformatics and computational biology. Approaches, such as flux balance analysis (FBA), that account for the known stoichiometry of the reaction network while avoiding implementation of detailed reaction kinetics are perhaps the most promising tools for the analysis of large complex networks. As a step towards building a complete theory of biochemical circuit analysis, we introduce energy balance analysis (EBA), which complements the FBA approach by introducing fundamental constraints based on the first and second laws of thermodynamics. Fluxes obtained with EBA are thermodynamically feasible and provide valuable insight into the activation and suppression of biochemical pathways.
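    As a concrete illustration of the flux-balance step (the thermodynamic constraints that EBA adds are nonlinear and are not sketched here), below is a hedged Python toy of plain FBA on a made-up three-reaction network; the stoichiometry, bounds and objective are illustrative only.

      import numpy as np
      from scipy.optimize import linprog

      # Toy pathway: R1 takes up A, R2 converts A -> B, R3 exports B.
      # Rows are internal metabolites (A, B); columns are reactions R1..R3.
      S = np.array([
          [1, -1,  0],   # A: produced by R1, consumed by R2
          [0,  1, -1],   # B: produced by R2, consumed by R3
      ])

      bounds = [(0, 10)] * 3          # flux bounds for each reaction
      c = np.array([0, 0, -1.0])      # maximize v3 (linprog minimizes)

      # steady state S v = 0 plus bounds; this is the FBA linear program
      res = linprog(c, A_eq=S, b_eq=np.zeros(2), bounds=bounds)
      print("optimal fluxes:", res.x)  # expect [10, 10, 10]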

  1. Novel method to construct large-scale design space in lubrication process utilizing Bayesian estimation based on a small-scale design-of-experiment and small sets of large-scale manufacturing data.

    Science.gov (United States)

    Maeda, Jin; Suzuki, Tatsuya; Takayama, Kozo

    2012-12-01

    A large-scale design space was constructed using a Bayesian estimation method with a small-scale design of experiments (DoE) and small sets of large-scale manufacturing data, without enforcing a large-scale DoE. The small-scale DoE was conducted using various Froude numbers (X1) and blending times (X2) in the lubricant blending process for theophylline tablets. The response surfaces, the design space, and their reliability for the compression rate of the powder mixture (Y1), tablet hardness (Y2), and dissolution rate (Y3) on a small scale were calculated using multivariate spline interpolation, a bootstrap resampling technique, and self-organizing map clustering. The constant Froude number was applied as a scale-up rule. Three experiments under an optimal condition and two experiments under other conditions were performed on a large scale. The response surfaces on the small scale were corrected to those on a large scale by Bayesian estimation using the large-scale results. Large-scale experiments under three additional sets of conditions showed that the corrected design space was more reliable than that on the small scale, even if there was some discrepancy in the pharmaceutical quality between the manufacturing scales. This approach is useful for setting up a design space in pharmaceutical development when a DoE cannot be performed at a commercial large manufacturing scale.
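    The paper's pipeline (spline response surfaces, bootstrap resampling, SOM clustering) is far richer than can be shown here; the following Python toy sketches only the core idea of correcting a small-scale response surface with a few large-scale runs, modeling the scale-up discrepancy as a single offset with a Gaussian prior. All names, priors and numbers are illustrative assumptions, not the authors' method.

      import numpy as np

      def correct_surface(f_small, X_large, y_large, tau2=1.0, sigma2=0.25):
          """Bayesian correction of a small-scale surface f_small.

          Model: y = f_small(x) + delta + noise, with prior delta ~ N(0, tau2)
          and noise variance sigma2. Returns the surface shifted by the
          posterior mean of delta.
          """
          resid = y_large - np.array([f_small(x) for x in X_large])
          n = len(resid)
          post_mean = tau2 * resid.sum() / (n * tau2 + sigma2)  # conjugate update
          return lambda x: f_small(x) + post_mean

      # small-scale surface fitted beforehand (a made-up quadratic here)
      f_small = lambda x: 1.0 + 0.5 * x[0] - 0.1 * x[1] ** 2

      X_large = [(0.5, 1.0), (1.0, 1.0), (1.5, 2.0)]  # (Froude no., blend time)
      y_large = np.array([1.6, 1.9, 1.8])             # large-scale observations
      f_corr = correct_surface(f_small, X_large, y_large)
      print(f_corr((1.0, 1.5)))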

  2. Porous microwells for geometry-selective, large-scale microparticle arrays

    Science.gov (United States)

    Kim, Jae Jung; Bong, Ki Wan; Reátegui, Eduardo; Irimia, Daniel; Doyle, Patrick S.

    2017-01-01

    Large-scale microparticle arrays (LSMAs) are key for material science and bioengineering applications. However, previous approaches suffer from trade-offs between scalability, precision, specificity and versatility. Here, we present a porous microwell-based approach to create large-scale microparticle arrays with complex motifs. Microparticles are guided to and pushed into microwells by fluid flow through small open pores at the bottom of the porous well arrays. A scaling theory allows for the rational design of LSMAs to sort and array particles on the basis of their size, shape, or modulus. Sequential particle assembly allows for proximal and nested particle arrangements, as well as particle recollection and pattern transfer. We demonstrate the capabilities of the approach by means of three applications: high-throughput single-cell arrays; microenvironment fabrication for neutrophil chemotaxis; and complex, covert tags by the transfer of an upconversion nanocrystal-laden LSMA.

  3. A new asynchronous parallel algorithm for inferring large-scale gene regulatory networks.

    Directory of Open Access Journals (Sweden)

    Xiangyun Xiao

    The reconstruction of gene regulatory networks (GRNs) from high-throughput experimental data has been considered one of the most important issues in systems biology research. With the development of high-throughput technology and the complexity of biological problems, we need to reconstruct GRNs that contain thousands of genes. However, when many existing algorithms are used to handle these large-scale problems, they encounter two important issues: low accuracy and high computational cost. To overcome these difficulties, the main goal of this study is to design an effective parallel algorithm to infer large-scale GRNs based on high-performance parallel computing environments. In this study, we propose a novel asynchronous parallel framework to improve the accuracy and lower the time complexity of large-scale GRN inference by combining splitting technology and ordinary differential equation (ODE)-based optimization. The presented algorithm uses the sparsity and modularity of GRNs to split whole large-scale GRNs into many small-scale modular subnetworks. Through the ODE-based optimization of all subnetworks in parallel and their asynchronous communications, we can easily obtain the parameters of the whole network. To test the performance of the proposed approach, we used well-known benchmark datasets from the Dialogue for Reverse Engineering Assessments and Methods challenge (DREAM), the experimentally determined GRN of Escherichia coli, and one published dataset that contains more than 10 thousand genes to compare the proposed approach with several popular algorithms on the same high-performance computing environments in terms of both accuracy and time complexity. The numerical results demonstrate that our parallel algorithm exhibits obvious superiority in inferring large-scale GRNs.
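    A skeleton of the split-and-parallelize idea in Python (the real method detects modules from sparsity/modularity and fits nonlinear ODE models with asynchronous communication; here the modules are given, the per-module model is crude linear least squares, and all names are mine):

      import numpy as np
      from multiprocessing import Pool

      def fit_subnetwork(args):
          """Fit regulatory weights for one module by least squares,
          approximating dx/dt ~ x @ W restricted to the module's genes."""
          genes, expr, dexpr = args
          X, dX = expr[:, genes], dexpr[:, genes]
          W, *_ = np.linalg.lstsq(X, dX, rcond=None)
          return genes, W

      if __name__ == "__main__":
          rng = np.random.default_rng(0)
          expr = rng.normal(size=(50, 100))    # 50 time points, 100 genes
          dexpr = np.gradient(expr, axis=0)    # crude derivative estimate
          modules = [list(range(i, i + 20)) for i in range(0, 100, 20)]
          with Pool() as pool:
              fits = pool.map(fit_subnetwork, [(m, expr, dexpr) for m in modules])
          print(len(fits), "subnetworks fitted in parallel")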

  4. Investigation of the large scale regional hydrogeological situation at Beberg

    Energy Technology Data Exchange (ETDEWEB)

    Hartley, L. [AEA Technology, Harwell (United Kingdom); Boghammar, A.; Grundfelt, B. [Kemakta Konsult AB, Stockholm (Sweden)

    1998-12-01

    The present study forms part of the large-scale groundwater flow studies within the SR 97 project. The site of interest is Beberg, which is based on data from the Finnsjoen site. To understand the groundwater flow and salinity at present it is necessary to consider the history of the Finnsjoen site over the last few thousand years. A numerical approach is developed to model a situation of transient variable-density flow on a regional scale. The model is considered a reasonable representation of the real situation at the site, since it is consistent with the current conceptual understanding of the site and predictions of salinity are in broad agreement with field observations. The effects of post-glacial uplift on the hydrogeological conditions are considered. It is assumed that about 9,000-4,000 years BP the rock was completely infiltrated by saline groundwaters during the periods when the Yoldia and Litorina Seas covered the Finnsjoen area. The model represents the interval from the end of this period up to the present day, during which recharge of fresh water in the form of rainfall flushed or displaced saline water from the near-surface. Many variants on the basic conceptual model are considered, both to calibrate the model against field data and to quantify the sensitivities of groundwater pathways. The present study shows that: Salinity and transient processes are very important in determining groundwater pathways in the deep rock. Salinity drives flow downwards, increasing the length of pathways and groundwater travel times. A high contrast in hydraulic conductivity between major structures and the rock mass results in the fracture zones being the dominant flow paths, in particular Zone 1 and Imundbo. There are two distinct flow paths. In the east of the hypothetical repository the path is short along Zone 1; for the west the path is longer, through the rock mass. Fracture zones are flushed of salt on a time-scale of 10-100 years. For the rock mass it

  5. A first large-scale flood inundation forecasting model

    Energy Technology Data Exchange (ETDEWEB)

    Schumann, Guy J-P; Neal, Jeffrey C.; Voisin, Nathalie; Andreadis, Konstantinos M.; Pappenberger, Florian; Phanthuwongpakdee, Kay; Hall, Amanda C.; Bates, Paul D.

    2013-11-04

    At present, continental- to global-scale flood forecasting focuses on predicting discharge at a point, with little attention to the detail and accuracy of local-scale inundation predictions. Yet inundation is actually the variable of interest, and all flood impacts are inherently local in nature. This paper proposes a first large-scale flood inundation ensemble forecasting model that uses the best available data and modeling approaches in data-scarce areas and at continental scales. The model was built for the Lower Zambezi River in southeast Africa to demonstrate current flood inundation forecasting capabilities in large data-scarce regions. The inundation model domain has a surface area of approximately 170,000 km2. ECMWF meteorological data were used to force the VIC (Variable Infiltration Capacity) macro-scale hydrological model, which simulated and routed daily flows to the input boundary locations of the 2-D hydrodynamic model. Efficient hydrodynamic modeling over large areas still requires model grid resolutions that are typically larger than the width of many river channels that play a key role in flood wave propagation. We therefore employed a novel sub-grid channel scheme to describe the river network in detail whilst at the same time representing the floodplain at an appropriate and efficient scale. The modeling system was first calibrated using water levels on the main channel from the ICESat (Ice, Cloud, and land Elevation Satellite) laser altimeter and then applied to predict the February 2007 Mozambique floods. Model evaluation showed that simulated flood edge cells were within a distance of about 1 km (one model resolution) of the observed flood edge of the event. Our study highlights that physically plausible parameter values and satisfactory performance can be achieved at spatial scales ranging from tens to several hundreds of thousands of km2 and at model grid resolutions up to several km2. However, initial model test runs in forecast mode

  6. Do land parameters matter in large-scale hydrological modelling?

    Science.gov (United States)

    Gudmundsson, Lukas; Seneviratne, Sonia I.

    2013-04-01

    Many of the most pressing issues in large-scale hydrology are concerned with predicting hydrological variability at ungauged locations. However, the current-generation hydrological and land surface models that are used for their estimation suffer from large uncertainties. These models rely on mathematical approximations of the physical system as well as on mapped values of land parameters (e.g. topography, soil types, land cover) to predict hydrological variables (e.g. evapotranspiration, soil moisture, stream flow) as a function of atmospheric forcing (e.g. precipitation, temperature, humidity). Despite considerable progress in recent years, it remains unclear whether better estimates of land parameters can improve predictions, or whether a refinement of model physics is necessary. To approach this question we suggest scrutinizing our perception of hydrological systems by confronting it with the radical assumption that hydrological variability at any location in space depends on past and present atmospheric forcing only, and not on location-specific land parameters. This so-called "Constant Land Parameter Hypothesis" (CLPH) assumes that variables like runoff can be predicted without taking location-specific factors such as topography or soil types into account. We demonstrate, using a modern statistical tool, that monthly runoff in Europe can be skilfully estimated using atmospheric forcing alone, without accounting for locally varying land parameters. The resulting runoff estimates are used to benchmark state-of-the-art process models. These are found to have inferior performance, despite their explicit process representation, which accounts for locally varying land parameters. This suggests that progress in the theory of hydrological systems is likely to yield larger improvements in model performance than more precise land parameter estimates. The results also question the current modelling paradigm that is dominated by the attempt to account for locally varying land

  7. Disentangling the dynamic core: a research program for a neurodynamics at the large-scale.

    Science.gov (United States)

    Le Van Quyen, Michel

    2003-01-01

    My purpose in this paper is to sketch a research direction based on Francisco Varela's pioneering work in neurodynamics (see also Rudrauf et al. 2003, in this issue). Very early on he argued that the internal coherence of every mental-cognitive state lies in the global self-organization of the brain activities at the large-scale, constituting a fundamental pole of integration called here a "dynamic core". Recent neuroimaging evidence appears to broadly support this hypothesis and suggests that a global brain dynamics emerges at the large scale level from the cooperative interactions among widely distributed neuronal populations. Despite a growing body of evidence supporting this view, our understanding of these large-scale brain processes remains hampered by the lack of a theoretical language for expressing these complex behaviors in dynamical terms. In this paper, I propose a rough cartography of a comprehensive approach that offers a conceptual and mathematical framework to analyze spatio-temporal large-scale brain phenomena. I emphasize how these nonlinear methods can be applied, what property might be inferred from neuronal signals, and where one might productively proceed for the future. This paper is dedicated, with respect and affection, to the memory of Francisco Varela.

  8. Race-related cognitive test bias in the active study: a mimic model approach.

    Science.gov (United States)

    Aiken Morgan, Adrienne T; Marsiske, Michael; Dzierzewski, Joseph M; Jones, Richard N; Whitfield, Keith E; Johnson, Kathy E; Cresci, Mary K

    2010-10-01

    The present study investigated evidence for race-related test bias in cognitive measures used in the baseline assessment of the ACTIVE clinical trial. Test bias against African Americans has been documented in both cognitive aging and early life span studies. Despite significant mean performance differences, Multiple Indicators Multiple Causes (MIMIC) models suggested most differences were at the construct level. There was little evidence that specific measures put either group at particular advantage or disadvantage and little evidence of cognitive test bias in this sample. Small group differences in education, cognitive status, and health suggest positive selection may have attenuated possible biases.
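    For readers unfamiliar with the technique, a generic MIMIC measurement-bias specification (the textbook form, not necessarily the study's exact model) is:

      \eta = \gamma z + \zeta, \qquad y_j = \lambda_j \eta + \beta_j z + \varepsilon_j

    where z codes group membership (here, race), \eta is the latent cognitive construct, and \lambda_j are factor loadings. A nonzero direct effect \beta_j flags bias in the specific indicator y_j, whereas \gamma \neq 0 with all \beta_j = 0 indicates a difference at the construct level only, which is the pattern the abstract reports.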

  9. Rapid social perception is flexible: Approach and avoidance motivational states shape P100 responses to other-race faces

    Directory of Open Access Journals (Sweden)

    William eCunningham

    2012-05-01

    Research on person categorization suggests that people automatically and inflexibly categorize others according to group memberships, such as race. Consistent with this view, research using electroencephalography (EEG) has found that White participants tend to show an early difference in processing Black versus White faces. Yet new research has shown that these ostensibly automatic biases may not be as inevitable as once thought and that motivational influences may be able to eliminate them. It is unclear, however, whether motivational influences shape the initial biases or whether these biases can only be modulated by later, controlled processes. Using EEG to examine the time course of biased processing, we manipulated approach and avoidance motivational states by having participants pull or push a joystick, respectively, while viewing White or Black faces. Consistent with previous work on own-race bias, we observed a greater P100 response to White than Black faces; however, this racial bias was attenuated in the approach condition. These data suggest that rapid social perception may be flexible and can be modulated by motivational states.

  10. Large-scale flow generation by inhomogeneous helicity

    CERN Document Server

    Yokoi, Nobumitsu

    2015-01-01

    The effect of kinetic helicity (velocity-vorticity correlation) on turbulent momentum transport is investigated. The turbulent kinetic helicity (pseudoscalar) enters the Reynolds stress (mirror-symmetric tensor) expression in the form of a helicity gradient, as the coupling coefficient for the mean vorticity and/or the angular velocity (axial vector), which suggests the possibility of mean-flow generation in the presence of inhomogeneous helicity. This inhomogeneous helicity effect, which was previously confirmed at the level of a turbulence- or closure-model simulation, is examined with the aid of direct numerical simulations of rotating turbulence with non-uniform helicity sustained by an external forcing. The numerical simulations show that the spatial distribution of the Reynolds stress is in agreement with the helicity-related term coupled with the angular velocity, and that a large-scale flow is generated in the direction of the angular velocity. Such a large-scale flow is not induced in the case of hom...

  11. Series Design of Large-Scale NC Machine Tool

    Institute of Scientific and Technical Information of China (English)

    TANG Zhi

    2007-01-01

    Product system design is a mature concept in developed Western countries, where it has been applied in the military industry since the last century. Up until now, however, functional combination has remained the main method of product system design in China, so in terms of product generations and product interaction we are in a weak position relative to the requirements of global markets. Today the idea of serial product design has attracted much attention in the design field, and the definition of a product generation and its parameters has become the standard in serial product design. Although the design of a large-scale NC machine tool is complicated, it can be further optimized by the precise exercise of object design, placing the concept of platform establishment firmly into serial product design. The essence of serial product design is demonstrated here by the design process of a large-scale NC machine tool.

  12. Evaluation of Large-scale Public Sector Reforms

    DEFF Research Database (Denmark)

    Breidahl, Karen Nielsen; Gjelstrup, Gunnar; Hansen, Hanne Foss

    2017-01-01

    Research on the evaluation of large-scale public sector reforms is rare. This article sets out to fill that gap in the evaluation literature and argues that it is of vital importance. The impact of such reforms is considerable. Furthermore, they change the context in which evaluations of other, more delimited policy areas take place. In our analysis we apply four governance perspectives (rational-instrumental, rational-interest based, institutional-cultural and a chaos perspective) in a comparative analysis of the evaluations of two large-scale public sector reforms in Denmark and Norway. We compare the evaluation process (focus and purpose), the evaluators and the organization of the evaluation, as well as the utilization of the evaluation results. The analysis uncovers several significant findings, including how the initial organization of the evaluation shows a strong impact on the utilization...

  13. Magnetic Helicity and Large Scale Magnetic Fields: A Primer

    CERN Document Server

    Blackman, Eric G

    2014-01-01

    Magnetic fields of laboratory, planetary, stellar, and galactic plasmas commonly exhibit significant order on large temporal or spatial scales compared to the otherwise random motions within the hosting system. Such ordered fields can be measured in the case of planets, stars, and galaxies, or inferred indirectly by the action of their dynamical influence, such as jets. Whether large scale fields are amplified in situ or are a remnant from previous stages of an object's history is often debated for objects without a definitive magnetic activity cycle. Magnetic helicity, a measure of twist and linkage of magnetic field lines, is a unifying tool for understanding large scale field evolution for both mechanisms of origin. Its importance stems from its two basic properties: (1) magnetic helicity is typically better conserved than magnetic energy; and (2) the magnetic energy associated with a fixed amount of magnetic helicity is minimized when the system relaxes this helical structure to the largest scale available. H...

  14. Real-time simulation of large-scale floods

    Science.gov (United States)

    Liu, Q.; Qin, Y.; Li, G. D.; Liu, Z.; Cheng, D. J.; Zhao, Y. H.

    2016-08-01

    Given the complexity of real-time water situations, the real-time simulation of large-scale floods is very important for flood prevention practice. Model robustness and running efficiency are two critical factors in successful real-time flood simulation. This paper proposes a robust, two-dimensional shallow water model based on the unstructured Godunov-type finite volume method. A robust wet/dry front method is used to enhance the numerical stability, and an adaptive method is proposed to improve the running efficiency. The proposed model is used for large-scale flood simulation on real topography. Results compared to those of MIKE21 show the strong performance of the proposed model.
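    For context, these are the conservative 2-D shallow water equations that a Godunov-type finite-volume scheme of this kind discretizes (textbook form, with bed-slope and friction source terms abbreviated as S_x and S_y; not necessarily the authors' exact formulation):

      \partial_t h + \partial_x(hu) + \partial_y(hv) = 0
      \partial_t(hu) + \partial_x\!\left(hu^2 + \tfrac{1}{2}gh^2\right) + \partial_y(huv) = S_x
      \partial_t(hv) + \partial_x(huv) + \partial_y\!\left(hv^2 + \tfrac{1}{2}gh^2\right) = S_y

    where h is the water depth and (u, v) the depth-averaged velocity.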

  15. Constraining cosmological ultra-large scale structure using numerical relativity

    CERN Document Server

    Braden, Jonathan; Peiris, Hiranya V; Aguirre, Anthony

    2016-01-01

    Cosmic inflation, a period of accelerated expansion in the early universe, can give rise to large amplitude ultra-large scale inhomogeneities on distance scales comparable to or larger than the observable universe. The cosmic microwave background (CMB) anisotropy on the largest angular scales is sensitive to such inhomogeneities and can be used to constrain the presence of ultra-large scale structure (ULSS). We numerically evolve nonlinear inhomogeneities present at the beginning of inflation in full General Relativity to assess the CMB quadrupole constraint on the amplitude of the initial fluctuations and the size of the observable universe relative to a length scale characterizing the ULSS. To obtain a statistically significant number of simulations, we adopt a toy model in which inhomogeneities are injected along a preferred direction. We compute the likelihood function for the CMB quadrupole including both ULSS and the standard quantum fluctuations produced during inflation. We compute the posterior given...

  16. Ultra-large scale cosmology with next-generation experiments

    CERN Document Server

    Alonso, David; Ferreira, Pedro G; Maartens, Roy; Santos, Mario G

    2015-01-01

    Future surveys of large-scale structure will be able to measure perturbations on the scale of the cosmological horizon, and so could potentially probe a number of novel relativistic effects that are negligibly small on sub-horizon scales. These effects leave distinctive signatures in the power spectra of clustering observables and, if measurable, would open a new window on relativistic cosmology. We quantify the size and detectability of the effects for a range of future large-scale structure surveys: spectroscopic and photometric galaxy redshift surveys, intensity mapping surveys of neutral hydrogen, and continuum surveys of radio galaxies. Our forecasts show that next-generation experiments, reaching out to redshifts z ~ 4, will not be able to detect previously-undetected general-relativistic effects from the single-tracer power spectra alone, although they may be able to measure the lensing magnification in the auto-correlation. We also perform a rigorous joint forecast for the detection of primordial non-...

  17. First Mile Challenges for Large-Scale IoT

    KAUST Repository

    Bader, Ahmed

    2017-03-16

    The Internet of Things is large-scale by nature. This is not only manifested by the large number of connected devices, but also by the sheer scale of spatial traffic intensity that must be accommodated, primarily in the uplink direction. To that end, cellular networks are indeed a strong first mile candidate to accommodate the data tsunami to be generated by the IoT. However, IoT devices are required in the cellular paradigm to undergo random access procedures as a precursor to resource allocation. Such procedures impose a major bottleneck that hinders cellular networks' ability to support large-scale IoT. In this article, we shed light on the random access dilemma and present a case study based on experimental data as well as system-level simulations. Accordingly, a case is built for the latent need to revisit random access procedures. A call for action is motivated by listing a few potential remedies and recommendations.
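    A standard back-of-envelope calculation (my illustration, not taken from the article) shows why contention-based random access saturates: if N devices activate simultaneously and each picks one of M preambles uniformly at random, a given device avoids collision with probability

      P_\text{success} = \left(1 - \frac{1}{M}\right)^{N-1} \approx e^{-(N-1)/M}

    which collapses once the number of active devices N approaches the preamble pool size M.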

  18. Supermassive black holes, large scale structure and holography

    CERN Document Server

    Mongan, T R

    2013-01-01

    A holographic analysis of large scale structure in the universe estimates the mass of supermassive black holes at the center of large scale structures with matter density varying inversely as the square of the distance from their center. The estimate is consistent with two important test cases involving observations of the supermassive black hole with mass 3.6×10^-6 times the galactic mass in Sagittarius A* near the center of our Milky Way and the 2×10^9 solar mass black hole in the quasar ULAS J112001.48+064124.3 at redshift z = 7.085. It is also consistent with upper bounds on central black hole masses in the globular clusters M15, M19 and M22 developed using the Jansky Very Large Array in New Mexico.

  19. Optimization of Survivability Analysis for Large-Scale Engineering Networks

    CERN Document Server

    Poroseva, S V

    2012-01-01

    Engineering networks fall into the category of large-scale networks with heterogeneous nodes such as sources and sinks. The survivability analysis of such networks requires the analysis of the connectivity of the network components for every possible combination of faults to determine a network response to each combination of faults. From the computational complexity point of view, the problem belongs to the class of exponential time problems at least. Partially, the problem complexity can be reduced by mapping the initial topology of a complex large-scale network with multiple sources and multiple sinks onto a set of smaller sub-topologies with multiple sources and a single sink connected to the network of sources by a single link. In this paper, the mapping procedure is applied to the Florida power grid.
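    A brute-force version of the enumeration step is easy to write down and makes the exponential cost concrete; this Python sketch (topology, sources and sink are stand-ins, not the Florida grid data) checks, for every combination of up to k node faults, whether any source still reaches the sink:

      import itertools
      import networkx as nx

      def survivability(G, sources, sink, max_faults=2):
          """Fraction of fault combinations that keep the sink supplied."""
          candidates = [n for n in G if n != sink and n not in sources]
          ok = total = 0
          for k in range(1, max_faults + 1):
              for faults in itertools.combinations(candidates, k):
                  H = G.copy()
                  H.remove_nodes_from(faults)   # apply this fault combination
                  total += 1
                  ok += any(nx.has_path(H, s, sink) for s in sources)
          return ok / total

      G = nx.petersen_graph()   # stand-in for a power-grid topology
      print(survivability(G, sources={0, 1}, sink=5))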

  20. Distant galaxy clusters in the XMM Large Scale Structure survey

    CERN Document Server

    Willis, J P; Bremer, M N; Pierre, M; Adami, C; Ilbert, O; Maughan, B; Maurogordato, S; Pacaud, F; Valtchanov, I; Chiappetti, L; Thanjavur, K; Gwyn, S; Stanway, E R; Winkworth, C

    2012-01-01

    (Abridged) Distant galaxy clusters provide important tests of the growth of large scale structure in addition to highlighting the process of galaxy evolution in a consistently defined environment at large look back time. We present a sample of 22 distant (z > 0.8) galaxy clusters and cluster candidates selected from the 9 deg2 footprint of the overlapping X-ray Multi Mirror (XMM) Large Scale Structure (LSS), CFHTLS Wide and Spitzer SWIRE surveys. Clusters are selected as extended X-ray sources with an accompanying overdensity of galaxies displaying optical to mid-infrared photometry consistent with z > 0.8. Nine clusters have confirmed spectroscopic redshifts in the interval 0.8 < z < 1.2, and the remainder are candidate z > 0.8 clusters.

  1. Instrumentation Development for Large Scale Hypersonic Inflatable Aerodynamic Decelerator Characterization

    Science.gov (United States)

    Swanson, Gregory T.; Cassell, Alan M.

    2011-01-01

    Hypersonic Inflatable Aerodynamic Decelerator (HIAD) technology is currently being considered for multiple atmospheric entry applications as the limitations of traditional entry vehicles have been reached. The Inflatable Re-entry Vehicle Experiment (IRVE) has successfully demonstrated this technology as a viable candidate with a sub-orbital flight of a 3.0 m diameter vehicle. To further this technology, large-scale HIADs (6.0-8.5 m) must be developed and tested. To characterize the performance of large-scale HIAD technology, new instrumentation concepts must be developed to accommodate the flexible nature of the inflatable aeroshell. Many of the concepts under consideration for the HIAD FY12 subsonic wind tunnel test series are discussed below.

  2. The complexity nature of large-scale software systems

    Institute of Scientific and Technical Information of China (English)

    Yan Dong; Qi Guo-Ning; Gu Xin-Jian

    2006-01-01

    In software engineering, class diagrams are often used to describe the system's class structures in Unified Modelling Language (UML). A class diagram, as a graph, is a collection of static declarative model elements, such as classes, interfaces, and the relationships of their connections with each other. In this paper, class graphs are examined within several Java software systems provided by Sun and IBM, and some new features are found. For a large-scale Java software system, its in-degree distribution tends to an exponential distribution, while its out-degree and degree distributions reveal the power-law behaviour. And then a directed preferential-random model is established to describe the corresponding degree distribution features and evolve large-scale Java software systems.
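    One possible growth model of this "directed preferential-random" kind is sketched below in Python; this is my construction for illustration (the paper's exact attachment rule and parameters may differ): each new class draws m dependency edges, preferentially toward already much-used classes with probability p_pref, and uniformly at random otherwise.

      import random

      def directed_pref_random(n, m=3, p_pref=0.7, seed=1):
          """Grow a directed graph mixing preferential and random attachment."""
          rng = random.Random(seed)
          edges, in_deg = [], {v: 0 for v in range(n)}
          for v in range(1, n):
              targets = set()
              while len(targets) < min(m, v):
                  if rng.random() < p_pref:
                      # preferential: weight earlier nodes by in-degree + 1
                      w = [in_deg[u] + 1 for u in range(v)]
                      targets.add(rng.choices(range(v), weights=w)[0])
                  else:
                      targets.add(rng.randrange(v))   # uniform random target
              for u in targets:
                  edges.append((v, u))
                  in_deg[u] += 1
          return edges

      print(len(directed_pref_random(1000)))   # number of edges grown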

  3. Electron drift in a large scale solid xenon

    CERN Document Server

    Yoo, J

    2015-01-01

    A study of charge drift in a large-scale, optically transparent solid xenon is reported. A pulsed high-power xenon light source is used to liberate electrons from a photocathode. The drift speeds of the electrons are measured using an 8.7 cm long electrode in both the liquid and solid phases of xenon. In the liquid phase (163 K), the drift speed is 0.193 ± 0.003 cm/μs, while the drift speed in the solid phase (157 K) is 0.397 ± 0.006 cm/μs at 900 V/cm over 8.0 cm of uniform electric field. It is therefore demonstrated that the electron drift speed in solid-phase xenon is a factor of two faster than that in the liquid, in a large-scale solid xenon volume.
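    The quoted factor of two follows directly from the two measured speeds:

      \frac{v_\text{solid}}{v_\text{liquid}} = \frac{0.397\ \text{cm}/\mu\text{s}}{0.193\ \text{cm}/\mu\text{s}} \approx 2.06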

  4. Prototype Vector Machine for Large Scale Semi-Supervised Learning

    Energy Technology Data Exchange (ETDEWEB)

    Zhang, Kai; Kwok, James T.; Parvin, Bahram

    2009-04-29

    Practical data mining rarely falls exactly into the supervised learning scenario. Rather, the growing amount of unlabeled data poses a big challenge to large-scale semi-supervised learning (SSL). We note that the computational intensiveness of graph-based SSL arises largely from the manifold or graph regularization, which in turn leads to large models that are difficult to handle. To alleviate this, we propose the prototype vector machine (PVM), a highly scalable, graph-based algorithm for large-scale SSL. Our key innovation is the use of "prototype vectors" for efficient approximation of both the graph-based regularizer and the model representation. The choice of prototypes is grounded upon two important criteria: they not only perform effective low-rank approximation of the kernel matrix, but also span a model suffering the minimum information loss compared with the complete model. We demonstrate encouraging performance and appealing scaling properties of the PVM on a number of machine learning benchmark data sets.
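    The low-rank idea can be illustrated with a standard Nystrom-style approximation built from prototype points; this numpy sketch is my illustration of the general technique, not the PVM algorithm itself:

      import numpy as np

      def rbf(A, B, gamma=0.5):
          """Gaussian RBF kernel matrix between row sets A and B."""
          d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
          return np.exp(-gamma * d2)

      rng = np.random.default_rng(0)
      X = rng.normal(size=(2000, 10))                  # all data points
      P = X[rng.choice(len(X), 50, replace=False)]     # 50 prototypes

      # Low-rank approximation K ~ K_xp K_pp^{-1} K_px, so kernel/graph
      # operations cost O(n m^2) instead of O(n^2) for m << n prototypes.
      K_xp = rbf(X, P)
      K_pp = rbf(P, P) + 1e-8 * np.eye(len(P))         # jitter for stability
      L = np.linalg.cholesky(np.linalg.inv(K_pp))
      Phi = K_xp @ L                                   # K ~ Phi @ Phi.T

      err = np.abs(rbf(X[:1], X) - Phi[:1] @ Phi.T).max()
      print("max abs error on one kernel row:", err)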

  5. Large-Scale Agriculture and Outgrower Schemes in Ethiopia

    DEFF Research Database (Denmark)

    Wendimu, Mengistu Assefa

    As a result of the growing demand for food, feed and industrial raw materials in the first decade of this century, and the usually welcoming policies regarding investors amongst the governments of developing countries, there has been a renewed interest in agriculture and an increase in large-scale land acquisitions. Outgrower schemes and contract farming have been promoted as alternatives to 'land grabbing' for large-scale farming (i.e. they could modernise agricultural production while allowing smallholders to maintain their land ownership), as ways to integrate smallholders into global agro-food value chains and to increase their productivity and welfare. However, the impact of large-scale agriculture and outgrower schemes on productivity, household welfare and wages in developing countries is highly contentious. Chapter 1 of this thesis provides an introduction to the study, while also reviewing the key debate in the contemporary land 'grabbing' and historical large

  6. Clusters and Large-Scale Structure: the Synchrotron Keys

    CERN Document Server

    Rudnick, L; Andernach, H; Battaglia, N; Brown, S; Brunetti, Gf; Burns, J; Clarke, T; Dolag, K; Farnsworth, D; Giovannini, G; Hallman, E; Johnston-Hollit, M; Jones, T W; Kang, H; Kassim, N; Kravtsov, A; Lazio, J; Lonsdale, C; McNamara, B; Myers, S; Owen, F; Pfrommer, C; Ryu, D; Sarazin, C; Subrahmanyan, R; Taylor, G; Taylor, R

    2009-01-01

    For over four decades, synchrotron-radiating sources have played a series of pathfinding roles in the study of galaxy clusters and large scale structure. Such sources are uniquely sensitive to the turbulence and shock structures of large-scale environments, and their cosmic rays and magnetic fields often play important dynamic and thermodynamic roles. They provide essential complements to studies at other wavebands. Over the next decade, they will fill essential gaps in both cluster astrophysics and the cosmological growth of structure in the universe, especially where the signatures of shocks and turbulence, or even the underlying thermal plasma itself, are otherwise undetectable. Simultaneously, synchrotron studies offer a unique tool for exploring the fundamental question of the origins of cosmic magnetic fields. This work will be based on the new generation of m/cm-wave radio telescopes now in construction, as well as major advances in the sophistication of 3-D MHD simulations.

  7. Cluster Galaxy Dynamics and the Effects of Large Scale Environment

    CERN Document Server

    White, Martin; Smit, Renske

    2010-01-01

    We use a high-resolution N-body simulation to study how the influence of large-scale structure in and around clusters causes correlated signals in different physical probes, and discuss some implications this has for multi-physics probes of clusters. We pay particular attention to velocity dispersions, matching galaxies to subhalos which are explicitly tracked in the simulation. We find that not only do halos persist as subhalos when they fall into a larger host, but groups of subhalos also retain their identity for long periods within larger host halos. The highly anisotropic nature of infall into massive clusters, and their triaxiality, translates into an anisotropic velocity ellipsoid: line-of-sight galaxy velocity dispersions for any individual halo show large variance depending on viewing angle. The orientation of the velocity ellipsoid is correlated with the large-scale structure, and thus velocity outliers correlate with outliers caused by projection in other probes. We quantify this orientation uncertainty and ...

  8. Imaging HF-induced large-scale irregularities above HAARP

    Science.gov (United States)

    Djuth, Frank T.; Reinisch, Bodo W.; Kitrosser, David F.; Elder, John H.; Snyder, A. Lee; Sales, Gary S.

    2006-02-01

    The University of Massachusetts-Lowell digisonde is used with the HAARP high-frequency (HF), ionospheric modification facility to obtain radio images of artificially-produced, large-scale, geomagnetic field-aligned irregularities. F region irregularities generated with the HAARP beam pointed in the vertical and geomagnetic field-aligned directions are examined in a smooth background plasma. It is found that limited large-scale irregularity production takes place with vertical transmissions, whereas there is a dramatic increase in the number of source irregularities with the beam pointed parallel to the geomagnetic field. Strong irregularity production appears to be confined to within ~5° of the geomagnetic zenith and does not fill the volume occupied by the HF beam. A similar effect is observed in optical images of artificial airglow.

  9. In the fast lane: large-scale bacterial genome engineering.

    Science.gov (United States)

    Fehér, Tamás; Burland, Valerie; Pósfai, György

    2012-07-31

    The last few years have witnessed rapid progress in bacterial genome engineering. The long-established, standard ways of DNA synthesis, modification, transfer into living cells, and incorporation into genomes have given way to more effective, large-scale, robust genome modification protocols. Expansion of these engineering capabilities is due to several factors. Key advances include: (i) progress in oligonucleotide synthesis and in vitro and in vivo assembly methods, (ii) optimization of recombineering techniques, (iii) introduction of parallel, large-scale, combinatorial, and automated genome modification procedures, and (iv) rapid identification of the modifications by barcode-based analysis and sequencing. Combination of the brute force of these techniques with sophisticated bioinformatic design and modeling opens up new avenues for the analysis of gene functions and cellular network interactions, as well as for engineering more effective producer strains. This review presents a summary of recent technological advances in bacterial genome engineering.

  10. Human race as indicator of 3D planning of soft tissue of face and multidisciplinary approach.

    Science.gov (United States)

    Nadazdyova, A; Samohyl, M; Stefankova, E; Pintesova, S; Stanko, P

    2017-01-01

    The aim of this study was to determine the optimal parameters for 3D soft tissue planning for orthognathic treatment by gender and to increase the effectiveness of multidisciplinary cooperation. The craniofacial parameters analysed were: nose breadth (al-al), bi-entocanthion breadth (en-en), bi-zygomatic breadth (zy-zy), bi-gonial breadth (go-go), total facial height (n-gn), mouth breadth (ch-ch), morphologic face height (sn-gn), upper-lip height (Ls-Stm), lower-lip height (Stm-Li) and pupils - mid-face (right). The level of statistical significance was set at p values < 0.05 ... maxillofacial surgeon. Gender and age influenced the variability of the following parameters: bi-gonial breadth, total facial height and morphologic face height. The soft tissue values for craniofacial parameters can be used to identify the surgical-orthodontic goal for patients of Europoid race. Due to immigration and the mixing of races it is necessary to take this fact into account (Tab. 3, Fig. 1, Ref. 41).

  11. Robust regression for large-scale neuroimaging studies.

    OpenAIRE

    2015-01-01

    Multi-subject datasets used in neuroimaging group studies have a complex structure, as they exhibit non-stationary statistical properties across regions and display various artifacts. While studies with small sample sizes can rarely be shown to deviate from standard hypotheses (such as the normality of the residuals) due to the poor sensitivity of normality tests with low degrees of freedom, large-scale studies (e.g. >100 subjects) exhibit more obvious deviations from these hypot...

  12. Measuring large scale space perception in literary texts

    Science.gov (United States)

    Rossi, Paolo

    2007-07-01

    A center and radius of “perception” (in the sense of environmental cognition) can be formally associated with a written text and operationally defined. Simple algorithms for their computation are presented, and indicators for anisotropy in large scale space perception are introduced. The relevance of these notions for the analysis of literary and historical records is briefly discussed and illustrated with an example taken from medieval historiography.
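    The abstract's "simple algorithms" are not reproduced in this record, so the sketch below shows one natural operationalization in Python (weighted centroid and RMS radius over geocoded place mentions, on a flat-earth approximation); the data and weighting scheme are my assumptions:

      import math

      def perception_center(mentions):
          """Center (weighted centroid) and radius (weighted RMS distance)
          of the places a text mentions; mentions maps place -> (lon, lat, count)."""
          W = sum(c for _, _, c in mentions.values())
          cx = sum(lon * c for lon, _, c in mentions.values()) / W
          cy = sum(lat * c for _, lat, c in mentions.values()) / W
          r2 = sum(c * ((lon - cx) ** 2 + (lat - cy) ** 2)
                   for lon, lat, c in mentions.values()) / W
          return (cx, cy), math.sqrt(r2)

      # toy example: place mentions extracted from a medieval chronicle
      mentions = {"Roma": (12.5, 41.9, 12), "Pisa": (10.4, 43.7, 30),
                  "Constantinopolis": (29.0, 41.0, 3)}
      print(perception_center(mentions))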

  13. The Phoenix series large scale LNG pool fire experiments.

    Energy Technology Data Exchange (ETDEWEB)

    Simpson, Richard B.; Jensen, Richard Pearson; Demosthenous, Byron; Luketa, Anay Josephine; Ricks, Allen Joseph; Hightower, Marion Michael; Blanchat, Thomas K.; Helmick, Paul H.; Tieszen, Sheldon Robert; Deola, Regina Anne; Mercier, Jeffrey Alan; Suo-Anttila, Jill Marie; Miller, Timothy J.

    2010-12-01

    The increasing demand for natural gas could increase the number and frequency of Liquefied Natural Gas (LNG) tanker deliveries to ports across the United States. Because of the increasing number of shipments and the number of possible new facilities, concerns about the potential safety of the public and property from accidental, and even more importantly intentional, spills have increased. While improvements have been made over the past decade in assessing hazards from LNG spills, the existing experimental data is much smaller in size and scale than many postulated large accidental and intentional spills. Since the physics and hazards from a fire change with fire size, there are concerns about the adequacy of current hazard prediction techniques for large LNG spills and fires. To address these concerns, Congress funded the Department of Energy (DOE) in 2008 to conduct a series of laboratory and large-scale LNG pool fire experiments at Sandia National Laboratories (Sandia) in Albuquerque, New Mexico. This report presents the test data and results of both sets of fire experiments. A series of five reduced-scale (gas burner) tests (yielding 27 sets of data) was conducted in 2007 and 2008 at Sandia's Thermal Test Complex (TTC) to assess flame height to fire diameter ratios as a function of nondimensional heat release rates for extrapolation to large-scale LNG fires. The large-scale LNG pool fire experiments were conducted in a 120 m diameter pond specially designed and constructed in Sandia's Area III large-scale test complex. Two fire tests of LNG spills of 21 and 81 m in diameter were conducted in 2009 to improve the understanding of flame height, smoke production, and burn rate and therefore the physics and hazards of large LNG spills and fires.

  14. Large-scale Alfvén vortices

    Energy Technology Data Exchange (ETDEWEB)

    Onishchenko, O. G., E-mail: onish@ifz.ru [Institute of Physics of the Earth, 10 B. Gruzinskaya, 123242 Moscow, Russian Federation and Space Research Institute, 84/32 Profsouznaya str., 117997 Moscow (Russian Federation); Pokhotelov, O. A., E-mail: pokh@ifz.ru [Institute of Physics of the Earth, 10 B. Gruzinskaya, 123242 Moscow (Russian Federation); Horton, W., E-mail: wendell.horton@gmail.com [Institute for Fusion Studies and Applied Research Laboratory, University of Texas at Austin, Austin, Texas 78713 (United States); Scullion, E., E-mail: scullie@tcd.ie [School of Physics, Trinity College Dublin, Dublin 2 (Ireland); Fedun, V., E-mail: v.fedun@sheffield.ac.uk [Department of Automatic Control and Systems Engineering, University of Sheffield, Sheffield S13JD (United Kingdom)

    2015-12-15

    A new type of large-scale vortex structure of dispersionless Alfvén waves in collisionless plasma is investigated. It is shown that Alfvén waves can propagate in the form of Alfvén vortices of finite characteristic radius, characterised by magnetic flux ropes carrying orbital angular momentum. The structures of the toroidal and radial velocity, the fluid and magnetic-field vorticity, and the longitudinal electric current in the plane orthogonal to the external magnetic field are discussed.

  15. UAV Data Processing for Large Scale Topographical Mapping

    Science.gov (United States)

    Tampubolon, W.; Reinhardt, W.

    2014-06-01

    Large-scale topographical mapping in developing countries is a prominent challenge for the geospatial industry nowadays. On the one hand demand is increasing significantly, while on the other it is constrained by the limited budgets available for mapping projects. Since the advent of Act Nr.4/yr.2011 on Geospatial Information in Indonesia, large-scale topographical mapping has been a high priority for supporting nationwide development, e.g. detailed spatial planning. Large-scale topographical mapping usually relies on conventional aerial survey campaigns to provide high-resolution 3D geospatial data sources. Having grown widely as a leisure hobby, aero models in the form of the so-called Unmanned Aerial Vehicle (UAV) offer an alternative, semi-photogrammetric means of aerial data acquisition suitable for a relatively small Area of Interest (AOI); in Indonesia this area size can serve as a mapping unit, since mapping usually concentrates at the sub-district (kecamatan) level. In this paper different camera and processing software systems are analyzed to identify the optimum components of a UAV data acquisition campaign in combination with the data processing scheme. The selected AOI covers the cultural heritage of Borobudur Temple as one of the Seven Wonders of the World. A detailed accuracy assessment concentrates on the object features of the temple in the first place. Feature compilation involving planimetric objects (2D) and digital terrain models (3D) is integrated in order to provide Digital Elevation Models (DEM) as the main interest of the topographic mapping activity. Incorporating the optimum number of GCPs in the UAV photo data processing increases the accuracy along with its high resolution of 5 cm Ground Sampling Distance (GSD). Finally this result will be used as the benchmark for alternative geospatial data acquisition in the future in which it can support

  16. Hierarchical Engine for Large-scale Infrastructure Co-Simulation

    Energy Technology Data Exchange (ETDEWEB)

    2017-04-24

    HELICS is designed to support very-large-scale (100,000+ federates) co-simulations with off-the-shelf power-system, communication, market, and end-use tools. Other key features include cross-platform operating system support, the integration of both event-driven (e.g., packetized communication) and time-series (e.g., power flow) simulations, and the ability to co-iterate among federates to ensure physical model convergence at each time step.

  17. Large scale-small scale duality and cosmological constant

    CERN Document Server

    Darabi, F

    1999-01-01

    We study a model of quantum cosmology originating from a classical model of gravitation in which a self-interacting scalar field is coupled to gravity, with the metric undergoing a signature transition. We show that there are dual classical signature-changing solutions, one at large scales and the other at small scales. It is possible to fine-tune the physics at both scales with an infinitesimal effective cosmological constant.

  18. Information Tailoring Enhancements for Large-Scale Social Data

    Science.gov (United States)

    2016-09-26

    Final report for the reporting period September 22, 2015 - September 16, 2016, Contract No. N00014-15-P-5138, sponsored by ONR. The work pursued the goals of (i) further enhancing the capability to analyze unstructured social media data rapidly and at scale, and (ii) improving IAI social media software.

  19. Large-scale prediction of drug-target relationships

    DEFF Research Database (Denmark)

    Kuhn, Michael; Campillos, Mónica; González, Paula

    2008-01-01

    ... but also provides a more global view on drug-target relations. Here we review recent attempts to apply large-scale computational analyses to predict novel interactions of drugs and targets from molecular and cellular features. In this context, we quantify the family-dependent probability of two proteins binding the same ligand as a function of their sequence similarity. We finally discuss how phenotypic data could help to expand our understanding of the complex mechanisms of drug action.

  20. One-dimensional adhesion model for large scale structures

    Directory of Open Access Journals (Sweden)

    Kayyunnapara Thomas Joseph

    2010-05-01

    We discuss initial value problems and initial boundary value problems for some systems of partial differential equations appearing in the modelling of large scale structure formation in the universe. We restrict the initial data to bounded measurable functions of locally bounded variation and use the Volpert product to make sense of the products which appear in the equations. For more general initial data in the class of generalized functions of Colombeau, we construct the solution in the sense of association.
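
    The abstract does not write out the equations, but the standard one-dimensional adhesion model for large scale structure formation is the pressureless gas system below (quoted from the general literature, so it may differ in detail from the exact system treated in the paper):

      \[
        u_t + \Big(\tfrac{u^2}{2}\Big)_x = 0, \qquad
        \rho_t + (\rho u)_x = 0,
      \]

    with bounded measurable initial data u(x,0) = u_0(x) and \rho(x,0) = \rho_0(x); the flux \rho u is the product that requires the Volpert interpretation when u and \rho are merely of bounded variation.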

  1. Robust regression for large-scale neuroimaging studies.

    OpenAIRE

    BOKDE, ARUN

    2015-01-01

    Multi-subject datasets used in neuroimaging group studies have a complex structure, as they exhibit non-stationary statistical properties across regions and display various artifacts. While studies with small sample sizes can rarely be shown to deviate from standard hypotheses (such as the normality of the residuals) due to the poor sensitivity of normality tests with low degrees of freedom, large-scale studies (e.g. >100 subjects) exhibit more obvious deviations from these hypotheses...

  2. Multimodel Design of Large Scale Systems with Multiple Decision Makers.

    Science.gov (United States)

    1982-08-01

    ... the closed-loop system (BFI-S2L) is stable for all e in H. To avoid mathematical complications, the feedback matrices of (2.31) are restricted in form ... control values used during all past sampling intervals. This information pattern, though not of much practical importance, is mathematically convenient.

  3. A Large-Scale Study of Online Shopping Behavior

    OpenAIRE

    Nalchigar, Soroosh; Weber, Ingmar

    2012-01-01

    The continuous growth of electronic commerce has stimulated great interest in studying online consumer behavior. Given the significant growth in online shopping, a better understanding of customers allows better marketing strategies to be designed. While studies of online shopping attitudes are widespread in the literature, studies of differences in browsing habits in relation to online shopping are scarce. This research performs a large scale study of the relationship between Internet browsing habits...

  4. Foundations of Large-Scale Multimedia Information Management and Retrieval

    CERN Document Server

    Chang, Edward Y

    2011-01-01

    "Foundations of Large-Scale Multimedia Information Management and Retrieval - Mathematics of Perception" covers knowledge representation and semantic analysis of multimedia data and scalability in signal extraction, data mining, and indexing. The book is divided into two parts: Part I - Knowledge Representation and Semantic Analysis focuses on the key components of mathematics of perception as it applies to data management and retrieval. These include feature selection/reduction, knowledge representation, semantic analysis, distance function formulation for measuring similarity, and

  5. Multivariate Clustering of Large-Scale Scientific Simulation Data

    Energy Technology Data Exchange (ETDEWEB)

    Eliassi-Rad, T; Critchlow, T

    2003-06-13

    Simulations of complex scientific phenomena involve the execution of massively parallel computer programs. These simulation programs generate large-scale data sets over the spatio-temporal space. Modeling such massive data sets is an essential step in helping scientists discover new information from their computer simulations. In this paper, we present a simple but effective multivariate clustering algorithm for large-scale scientific simulation data sets. Our algorithm utilizes the cosine similarity measure to cluster the field variables in a data set. Field variables include all variables except the spatial (x, y, z) and temporal (time) variables. The exclusion of the spatial dimensions is important since "similar" characteristics could be located (spatially) far from each other. To scale our multivariate clustering algorithm for large-scale data sets, we take advantage of the geometrical properties of the cosine similarity measure. This allows us to reduce the modeling time from O(n^2) to O(n · g(f(u))), where n is the number of data points, f(u) is a function of the user-defined clustering threshold, and g(f(u)) is the number of data points satisfying f(u). We show that on average g(f(u)) is much less than n. Finally, even though spatial variables do not play a role in building clusters, it is desirable to associate each cluster with its correct spatial region. To achieve this, we present a linking algorithm for connecting each cluster to the appropriate nodes of the data set's topology tree (where the spatial information of the data set is stored). Our experimental evaluations on two large-scale simulation data sets illustrate the value of our multivariate clustering and linking algorithms.
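
    To make the core idea concrete, here is a minimal Python sketch of threshold-based clustering with the cosine similarity measure: a point joins the first cluster whose representative is within the user-defined similarity threshold, otherwise it seeds a new cluster. The O(n · g(f(u))) pruning described in the abstract is not reproduced here, and the threshold value and data are illustrative only.

      import numpy as np

      def cosine_similarity(a, b):
          # cos(theta) between two field-variable vectors
          return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

      def threshold_cluster(points, threshold=0.95):
          """Greedy clustering: assign each point to the first cluster whose
          representative (its first member) is cosine-similar above threshold."""
          reps, clusters = [], []
          for p in points:
              for i, r in enumerate(reps):
                  if cosine_similarity(p, r) >= threshold:
                      clusters[i].append(p)
                      break
              else:
                  reps.append(p)          # p seeds a new cluster
                  clusters.append([p])
          return clusters

      # Toy data: rows are data points, columns are field variables
      rng = np.random.default_rng(0)
      data = rng.normal(size=(100, 5))
      print([len(c) for c in threshold_cluster(data)])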

  6. Large-Scale Integrated Carbon Nanotube Gas Sensors

    OpenAIRE

    Kim, Joondong

    2012-01-01

    The carbon nanotube (CNT) is a promising one-dimensional nanostructure for various nanoscale electronics. Nanostructures also provide a significantly large surface area at a fixed volume, which is an advantage for highly responsive gas sensors. However, difficulties in the fabrication process limit CNT gas sensors for large-scale production. We review viable schemes for large-area application, including CNT gas sensor fabrication and reaction mechanisms, with a practical d...

  7. Large Scale Density Estimation of Blue and Fin Whales (LSD)

    Science.gov (United States)

    2015-09-30

    ... sensors, or both. The goal of this research is to develop and implement a new method for estimating blue and fin whale density that is effective over ... to develop and implement a density estimation methodology for quantifying blue and fin whale abundance from passive acoustic data recorded on sparse

  8. Concurrent Programming Using Actors: Exploiting Large-Scale Parallelism,

    Science.gov (United States)

    1985-10-07

    Agha, G., et al., Massachusetts Institute of Technology, Artificial Intelligence Laboratory, 545 Technology Square, Cambridge.

  9. Large-Scale Weather Disturbances in Mars’ Southern Extratropics

    Science.gov (United States)

    Hollingsworth, Jeffery L.; Kahre, Melinda A.

    2015-11-01

    Between late autumn and early spring, Mars' middle and high latitudes support strong mean thermal gradients between the tropics and poles. Observations from both the Mars Global Surveyor (MGS) and Mars Reconnaissance Orbiter (MRO) indicate that this strong baroclinicity supports intense, large-scale eastward traveling weather systems (i.e., transient synoptic-period waves). These extratropical weather disturbances are key components of the global circulation, acting as agents in the transport of heat, momentum, and generalized scalar/tracer quantities (e.g., atmospheric dust, water vapor, and ice clouds). The character of large-scale, traveling extratropical synoptic-period disturbances in Mars' southern hemisphere during late winter through early spring is investigated using a moderately high-resolution Mars global climate model (Mars GCM). This Mars GCM imposes interactively lifted and radiatively active dust based on a threshold value of the surface stress, and it exhibits a reasonable dust cycle (i.e., a globally averaged dustier atmosphere during southern spring and summer). Compared to their northern-hemisphere counterparts, southern synoptic-period weather disturbances and accompanying frontal waves have smaller meridional and zonal scales, and are far less intense. Influences of the zonally asymmetric (i.e., east-west varying) topography on southern large-scale weather are examined. Simulations that adopt Mars' full topography, compared with simulations that utilize synthetic topographies emulating key large-scale features of the southern middle latitudes, indicate that Mars' transient barotropic/baroclinic eddies are highly influenced by the great impact basins of this hemisphere (e.g., Argyre and Hellas). The occurrence of a southern storm zone in late winter and early spring appears to be anchored to the western hemisphere via orographic influences from the Tharsis highlands, and the Argyre

  10. Experimental simulation of microinteractions in large scale explosions

    Energy Technology Data Exchange (ETDEWEB)

    Chen, X.; Luo, R.; Yuen, W.W.; Theofanous, T.G. [California Univ., Santa Barbara, CA (United States). Center for Risk Studies and Safety

    1998-01-01

    This paper presents data and analysis from recent experiments conducted in the SIGMA-2000 facility to simulate microinteractions in large scale explosions. Specifically, the fragmentation behavior of a high temperature molten steel drop under high pressure (beyond critical) conditions is investigated. The current data demonstrate, for the first time, the effect of high pressure in suppressing the thermal effect of fragmentation under supercritical conditions. The results support the microinteractions idea and the ESPROSE.m prediction of fragmentation rate. (author)

  11. Large-Scale Sequencing: The Future of Genomic Sciences Colloquium

    Energy Technology Data Exchange (ETDEWEB)

    Margaret Riley; Merry Buckley

    2009-01-01

    Genetic sequencing and the various molecular techniques it has enabled have revolutionized the field of microbiology. Examining and comparing the genetic sequences borne by microbes - including bacteria, archaea, viruses, and microbial eukaryotes - provides researchers with insights into the processes microbes carry out, their pathogenic traits, and new ways to use microorganisms in medicine and manufacturing. Until recently, sequencing entire microbial genomes has been laborious and expensive, and the decision to sequence the genome of an organism was made on a case-by-case basis by individual researchers and funding agencies. Now, thanks to new technologies, the cost and effort of sequencing is within reach for even the smallest facilities, and the ability to sequence the genomes of a significant fraction of microbial life may be possible. The availability of numerous microbial genomes will enable unprecedented insights into microbial evolution, function, and physiology. However, the current ad hoc approach to gathering sequence data has resulted in an unbalanced and highly biased sampling of microbial diversity. A well-coordinated, large-scale effort to target the breadth and depth of microbial diversity would result in the greatest impact. The American Academy of Microbiology convened a colloquium to discuss the scientific benefits of engaging in a large-scale, taxonomically-based sequencing project. A group of individuals with expertise in microbiology, genomics, informatics, ecology, and evolution deliberated on the issues inherent in such an effort and generated a set of specific recommendations for how best to proceed. The vast majority of microbes are presently uncultured and, thus, pose significant challenges to such a taxonomically-based approach to sampling genome diversity. However, we have yet to even scratch the surface of the genomic diversity among cultured microbes. A coordinated sequencing effort of cultured organisms is an appropriate place to begin.

  12. Robust regression for large-scale neuroimaging studies.

    Science.gov (United States)

    Fritsch, Virgile; Da Mota, Benoit; Loth, Eva; Varoquaux, Gaël; Banaschewski, Tobias; Barker, Gareth J; Bokde, Arun L W; Brühl, Rüdiger; Butzek, Brigitte; Conrod, Patricia; Flor, Herta; Garavan, Hugh; Lemaitre, Hervé; Mann, Karl; Nees, Frauke; Paus, Tomas; Schad, Daniel J; Schümann, Gunter; Frouin, Vincent; Poline, Jean-Baptiste; Thirion, Bertrand

    2015-05-01

    Multi-subject datasets used in neuroimaging group studies have a complex structure, as they exhibit non-stationary statistical properties across regions and display various artifacts. While studies with small sample sizes can rarely be shown to deviate from standard hypotheses (such as the normality of the residuals) due to the poor sensitivity of normality tests with low degrees of freedom, large-scale studies (e.g. >100 subjects) exhibit more obvious deviations from these hypotheses and call for more refined models for statistical inference. Here, we demonstrate the benefits of robust regression as a tool for analyzing large neuroimaging cohorts. First, we use an analytic test based on robust parameter estimates; based on simulations, this procedure is shown to provide an accurate statistical control without resorting to permutations. Second, we show that robust regression yields more detections than standard algorithms using as an example an imaging genetics study with 392 subjects. Third, we show that robust regression can avoid false positives in a large-scale analysis of brain-behavior relationships with over 1500 subjects. Finally we embed robust regression in the Randomized Parcellation Based Inference (RPBI) method and demonstrate that this combination further improves the sensitivity of tests carried out across the whole brain. Altogether, our results show that robust procedures provide important advantages in large-scale neuroimaging group studies.
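
    As a pointer to practice, the Python sketch below contrasts ordinary least squares with a robust fit (Huber weighting) on data containing outliers, using statsmodels. It illustrates the general robust-regression idea rather than the paper's specific analytic test or RPBI pipeline, and the simulated data are invented.

      import numpy as np
      import statsmodels.api as sm

      rng = np.random.default_rng(42)
      n = 200
      x = rng.normal(size=n)
      y = 2.0 * x + rng.normal(scale=0.5, size=n)
      y[:10] += 15.0                     # contaminate 5% of "subjects" with outliers

      X = sm.add_constant(x)
      ols = sm.OLS(y, X).fit()                               # standard regression
      rlm = sm.RLM(y, X, M=sm.robust.norms.HuberT()).fit()   # robust regression

      print("OLS slope:", round(ols.params[1], 3))   # pulled away by outliers
      print("RLM slope:", round(rlm.params[1], 3))   # close to the true value 2.0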

  13. Critical thinking, politics on a large scale and media democracy

    Directory of Open Access Journals (Sweden)

    José Antonio IBÁÑEZ-MARTÍN

    2015-06-01

    A first look at current social reality offers numerous reasons for worry. The spectacle of violence and immorality can easily frighten us. More worrying still is to see that the horizon of conviviality, peace and wellbeing that Europe had been building since the Treaty of Rome of 1957 has been seriously compromised by the economic crisis. Today democratic politics is under assault, dismissed by the media democracy as an exhausted system that must be replaced by a new and great politics, a politics on a large scale. The article analyses the concept of a politics on a large scale, attending primarily to Nietzsche and noting its connection with great philosophy and great education. The study of Nietzsche's texts leads us to conclude that they often offer an interesting analysis of the problems together with misguided proposals for solutions. We cannot claim to solve every problem, but we outline various proposals for changes in political activity that can reasonably be defended against the media democracy. In conclusion, we point out that a politics on a large scale requires statesmen able to propose modes of life in common that can sustain long-term coexistence.

  14. Large Scale Magnetic Fields: Density Power Spectrum in Redshift Space

    Indian Academy of Sciences (India)

    Rajesh Gopal; Shiv K. Sethi

    2003-09-01

    We compute the density redshift-space power spectrum in the presence of tangled magnetic fields and compare it with existing observations. Our analysis shows that if these magnetic fields originated in the early universe then it is possible to construct models for which the shape of the power spectrum agrees with the large scale slope of the observed power spectrum. However, requiring compatibility with observed CMBR anisotropies, the normalization of the power spectrum is too low for magnetic fields to have significant impact on the large scale structure at present. Magnetic fields of a more recent origin generically give a density power spectrum ∝ k^4, which does not agree with the shape of the observed power spectrum at any scale. Magnetic fields generate curl modes of the velocity field which increase both the quadrupole and hexadecapole of the redshift-space power spectrum. For curl modes, the hexadecapole dominates over the quadrupole, so the presence of curl modes could be indicated by an anomalously large hexadecapole, which has not yet been computed from observations. It appears difficult to construct models in which tangled magnetic fields could have played a major role in shaping the large scale structure in the present epoch. However, if they did, one of the best ways to infer their presence would be from the redshift-space effects in the density power spectrum.
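
    The quadrupole and hexadecapole mentioned above are the usual Legendre multipoles of the redshift-space power spectrum; for reference (standard definitions from the literature, not taken from this paper):

      \[
        P_s(k,\mu) \;=\; \sum_{\ell = 0,2,4,\dots} P_\ell(k)\, \mathcal{L}_\ell(\mu),
        \qquad
        P_\ell(k) \;=\; \frac{2\ell+1}{2} \int_{-1}^{1} P_s(k,\mu)\, \mathcal{L}_\ell(\mu)\, \mathrm{d}\mu,
      \]

    where \mu is the cosine of the angle between the wavevector and the line of sight and \mathcal{L}_\ell is the Legendre polynomial of degree \ell; \ell = 2 gives the quadrupole and \ell = 4 the hexadecapole.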

  15. A visualization framework for large-scale virtual astronomy

    Science.gov (United States)

    Fu, Chi-Wing

    Motivated by advances in modern positional astronomy, this research attempts to digitally model the entire Universe through computer graphics technology. Our first challenge is space itself. The gigantic size of the Universe makes it impossible to put everything into a typical graphics system at its own scale, and the rendering process can easily fail because of limited computational precision. The second challenge is that the enormous amount of data could slow down the graphics; we need clever techniques to speed up the rendering. Third, since the Universe is dominated by empty space, objects are widely separated; this makes navigation difficult. We attempt to tackle these problems through various techniques designed to extend and optimize the conventional graphics framework, including the following: power homogeneous coordinates for large-scale spatial representations, generalized large-scale spatial transformations, and rendering acceleration via environment caching and object disappearance criteria. Moreover, we implemented an assortment of techniques for modeling and rendering a variety of astronomical bodies, ranging from the Earth up to faraway galaxies, and attempted to visualize cosmological time; a method we call the Lightcone representation was introduced to visualize the whole space-time of the Universe at a single glance. In addition, several navigation models were developed to handle the large-scale navigation problem. Our final results include a collection of visualization tools, two educational animations appropriate for planetarium audiences, and state-of-the-art-advancing rendering techniques that can be transferred to practice in digital planetarium systems.
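
    The abstract does not spell out the paper's power homogeneous coordinates, so the following Python sketch only illustrates the underlying idea of separating an object's direction from an exponent-encoded magnitude to avoid floating-point failure across astronomical scales; the class name and encoding are hypothetical.

      import math

      class ScaledPoint:
          """Hypothetical large-scale position: a unit direction plus a
          base-10 exponent for the magnitude, so metre-to-gigaparsec
          distances coexist without overflow or precision collapse."""

          def __init__(self, x, y, z):
              r = math.sqrt(x * x + y * y + z * z)
              self.exp = math.log10(r) if r > 0 else 0.0   # scale exponent
              self.dir = (x / r, y / r, z / r) if r > 0 else (0.0, 0.0, 0.0)

          def to_local(self, view_exp):
              """Re-express the point at a camera's scale: distances shrink
              toward 0 or grow harmlessly instead of breaking the render."""
              s = 10.0 ** (self.exp - view_exp)
              return tuple(c * s for c in self.dir)

      # One astronomical unit (~1.5e11 m) viewed at "solar system" scale (1e12 m):
      p = ScaledPoint(1.496e11, 0.0, 0.0)
      print(p.to_local(view_exp=12))   # ~(0.1496, 0, 0): comfortably representable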

  16. Impact of Large-scale Geological Architectures On Recharge

    Science.gov (United States)

    Troldborg, L.; Refsgaard, J. C.; Engesgaard, P.; Jensen, K. H.

    Geological and hydrogeological data constitute the basis for assessment of groundwater flow patterns and recharge zones. The accessibility and applicability of hard geological data is often a major obstacle in deriving plausible conceptual models. Nevertheless, focus is often on the parameter uncertainty caused by geological heterogeneity due to lack of hard geological data, thus neglecting the possibility of alternative conceptualizations of the large-scale geological architecture. For a catchment in the eastern part of Denmark we have constructed different geological models based on different conceptualizations of the major geological trends and facies architecture. The geological models are equally plausible in a conceptual sense and they are all calibrated to well-head and river-flow measurements. Comparison of the differences in recharge zones, and subsequently in well protection zones, emphasizes the importance of assessing the large-scale geological architecture in regional-scale hydrological modelling in a non-deterministic way. Geostatistical modelling carried out in a transition-probability framework shows the possibility of assessing multiple realizations of large-scale geological architecture from a combination of soft and hard geological information.
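
    To illustrate the transition-probability idea, the Python sketch below draws alternative one-dimensional facies sequences from a Markov chain. The facies names and transition matrix are invented for illustration; real transition-probability geostatistics (e.g., over 3-D lag distances) is considerably richer.

      import numpy as np

      facies = ["sand", "clay", "gravel"]     # hypothetical facies classes
      # Hypothetical vertical transition probabilities P[i][j] = P(next=j | current=i)
      P = np.array([[0.70, 0.20, 0.10],
                    [0.30, 0.60, 0.10],
                    [0.25, 0.25, 0.50]])

      def realization(n_cells, start=0, seed=None):
          """One stochastic facies column honoring the transition probabilities."""
          rng = np.random.default_rng(seed)
          seq, state = [start], start
          for _ in range(n_cells - 1):
              state = rng.choice(3, p=P[state])
              seq.append(state)
          return [facies[s] for s in seq]

      # Multiple equally plausible realizations of the same architecture:
      for seed in range(3):
          print(realization(10, seed=seed))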

  17. Large-scale magnetic topologies of early M dwarfs

    CERN Document Server

    Donati, JF; Petit, P; Delfosse, X; Forveille, T; Aurière, M; Cabanac, R; Dintrans, B; Fares, R; Gastine, T; Jardine, MM; Lignières, F; Paletou, F; Velez, J Ramirez; Théado, S

    2008-01-01

    We present here additional results of a spectropolarimetric survey of a small sample of stars ranging from spectral type M0 to M8, aimed at investigating observationally how dynamo processes operate in stars on both sides of the full convection threshold (spectral type M4). The present paper focuses on early M stars (M0-M3), i.e. above the full convection threshold. Applying tomographic imaging techniques to time series of rotationally modulated circularly polarised profiles collected with the NARVAL spectropolarimeter, we determine the rotation periods and reconstruct the large-scale magnetic topologies of 6 early M dwarfs. We find that early-M stars preferentially host large-scale fields with dominantly toroidal and non-axisymmetric poloidal configurations, along with significant differential rotation (and long-term variability); only the lowest-mass star of our subsample is found to host an almost fully poloidal, mainly axisymmetric large-scale field resembling those found in mid-M dwarfs. This abrupt change...

  18. Multiresolution comparison of precipitation datasets for large-scale models

    Science.gov (United States)

    Chun, K. P.; Sapriza Azuri, G.; Davison, B.; DeBeer, C. M.; Wheater, H. S.

    2014-12-01

    Gridded precipitation datasets are crucial for driving the large-scale models used in weather forecasting and climate research. However, the quality of precipitation products is usually validated individually. Comparisons between gridded precipitation products, along with ground observations, provide another avenue for investigating how precipitation uncertainty affects the performance of large-scale models. In this study, using data from a set of precipitation gauges over British Columbia and Alberta, we evaluate several widely used North American gridded products, including the Canadian Gridded Precipitation Anomalies (CANGRD), the National Center for Environmental Prediction (NCEP) reanalysis, the Water and Global Change (WATCH) project, the thin-plate spline smoothing algorithms (ANUSPLIN) and the Canadian Precipitation Analysis (CaPA). Based on verification criteria for various temporal and spatial scales, the results provide an assessment of possible applications for the various precipitation datasets. For long-term climate variation studies (~100 years), CANGRD, NCEP, WATCH and ANUSPLIN have different comparative advantages in terms of their resolution and accuracy. For synoptic and mesoscale precipitation patterns, CaPA provides appealing spatial coherence. In addition to the products comparison, various downscaling methods are also surveyed to explore new verification and bias-reduction methods for improving gridded precipitation outputs for large-scale models.
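
    As a minimal example of the kind of gauge-versus-grid verification described here, the following Python sketch computes common skill scores (bias, RMSE, correlation) for a gridded product sampled at gauge locations; the arrays are placeholders standing in for real CaPA/ANUSPLIN/CANGRD-style data.

      import numpy as np

      def verify(gauge, grid_at_gauge):
          """Basic verification statistics between gauge observations and
          the gridded-product values extracted at the gauge locations."""
          diff = grid_at_gauge - gauge
          return {
              "bias": float(diff.mean()),                     # systematic error
              "rmse": float(np.sqrt((diff ** 2).mean())),     # total error
              "corr": float(np.corrcoef(gauge, grid_at_gauge)[0, 1]),
          }

      # Placeholder data standing in for, e.g., monthly gauge totals and a
      # gridded product sampled at those sites:
      rng = np.random.default_rng(1)
      gauge = rng.gamma(shape=2.0, scale=30.0, size=500)          # mm/month
      product = gauge * 1.05 + rng.normal(scale=10.0, size=500)   # biased + noisy
      print(verify(gauge, product))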

  19. Searching for Large Scale Structure in Deep Radio Surveys

    CERN Document Server

    Baleisis, A; Loan, A J; Wall, J V; Baleisis, Audra; Lahav, Ofer; Loan, Andrew J.; Wall, Jasper V.

    1997-01-01

    (Abridged abstract) We calculate the expected amplitude of the dipole and higher spherical harmonics in the angular distribution of radio galaxies. The median redshift of radio sources in existing catalogues is z=1, which allows us to study large scale structure on scales between those accessible to present optical and infrared surveys and that of the Cosmic Microwave Background (CMB). The dipole is due to two effects which turn out to be of comparable magnitude: (i) our motion with respect to the CMB, and (ii) large scale structure, parameterised here by a family of Cold Dark Matter power spectra. We make specific predictions for the Green Bank (87GB) and Parkes-MIT-NRAO (PMN) catalogues. For these relatively sparse catalogues both the motion and large scale structure dipole effects are expected to be smaller than the Poisson shot noise. However, we detect dipole and higher harmonics in the combined 87GB-PMN catalogue which are far larger than expected. We attribute this to a 2% flux mismatch between the two...
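
    For context, the kinematic contribution of the observer's motion to the radio source dipole is usually quoted in the Ellis & Baldwin form below (a standard result from the literature, not reproduced from this paper's text):

      \[
        \frac{\delta N}{N} \;=\; \bigl[\,2 + x\,(1+\alpha)\,\bigr]\,\frac{v}{c}\,\cos\theta,
      \]

    where the integral source counts scale as N(>S) \propto S^{-x}, flux densities as S \propto \nu^{-\alpha}, v is the observer's speed with respect to the CMB frame, and \theta is the angle from the apex of motion.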

  20. Geospatial Optimization of Siting Large-Scale Solar Projects

    Energy Technology Data Exchange (ETDEWEB)

    Macknick, J.; Quinby, T.; Caulfield, E.; Gerritsen, M.; Diffendorfer, J.; Haines, S.

    2014-03-01

    Recent policy and economic conditions have encouraged a renewed interest in developing large-scale solar projects in the U.S. Southwest. However, siting large-scale solar projects is complex. In addition to the quality of the solar resource, solar developers must take into consideration many environmental, social, and economic factors when evaluating a potential site. This report describes a proof-of-concept, Web-based Geographical Information Systems (GIS) tool that evaluates multiple user-defined criteria in an optimization algorithm to inform discussions and decisions regarding the locations of utility-scale solar projects. Existing siting recommendations for large-scale solar projects from governmental and non-governmental organizations are not consistent with each other, are often not transparent in methods, and do not take into consideration the differing priorities of stakeholders. The siting assistance GIS tool we have developed improves upon the existing siting guidelines by being user-driven, transparent, interactive, capable of incorporating multiple criteria, and flexible. This work provides the foundation for a dynamic siting assistance tool that can greatly facilitate siting decisions among multiple stakeholders.
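
    As an illustration of multi-criteria siting in the spirit described above, this Python sketch scores candidate grid cells by a user-weighted sum of normalized criterion layers; the criteria, weights, and rasters are invented, and the actual tool's optimization algorithm may differ.

      import numpy as np

      def normalize(layer, higher_is_better=True):
          """Rescale a criterion raster to [0, 1]."""
          lo, hi = layer.min(), layer.max()
          scaled = (layer - lo) / (hi - lo)
          return scaled if higher_is_better else 1.0 - scaled

      def site_scores(layers, weights):
          """Weighted linear combination of normalized criterion layers."""
          w = np.asarray(weights, dtype=float)
          w = w / w.sum()                      # user-defined weights, normalized
          return sum(wi * li for wi, li in zip(w, layers))

      # Hypothetical 100x100 criterion rasters for a candidate region:
      rng = np.random.default_rng(7)
      solar = normalize(rng.uniform(4, 8, (100, 100)))            # kWh/m2/day
      slope = normalize(rng.uniform(0, 30, (100, 100)), False)    # flatter is better
      dist_tx = normalize(rng.uniform(0, 50, (100, 100)), False)  # km to transmission

      scores = site_scores([solar, slope, dist_tx], weights=[0.5, 0.2, 0.3])
      best = np.unravel_index(np.argmax(scores), scores.shape)
      print("best cell:", best, "score:", round(float(scores[best]), 3))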