WorldWideScience

Sample records for large-scale multi-year randomized

  1. Coordinated Multi-layer Multi-domain Optical Network (COMMON) for Large-Scale Science Applications

    Energy Technology Data Exchange (ETDEWEB)

    Vokkarane, Vinod [University of Massachusetts]

    2013-09-01

    We intend to implement a Coordinated Multi-layer Multi-domain Optical Network (COMMON) Framework for Large-scale Science Applications. In the COMMON project, specific problems to be addressed include 1) anycast/multicast/manycast request provisioning, 2) deployable OSCARS enhancements, 3) multi-layer, multi-domain quality of service (QoS), and 4) multi-layer, multi-domain path survivability. In what follows, we outline the progress in the above categories (Year 1, 2, and 3 deliverables).

  2. Financial Management of a Large Multi-site Randomized Clinical Trial

    Science.gov (United States)

    Sheffet, Alice J.; Flaxman, Linda; Tom, MeeLee; Hughes, Susan E.; Longbottom, Mary E.; Howard, Virginia J.; Marler, John R.; Brott, Thomas G.

    2014-01-01

    Background The Carotid Revascularization Endarterectomy versus Stenting Trial (CREST) received five years’ funding ($21,112,866) from the National Institutes of Health to compare carotid stenting to surgery for stroke prevention in 2,500 randomized participants at 40 sites. Aims Herein we evaluate the change in the CREST budget from a fixed to variable-cost model and recommend strategies for the financial management of large-scale clinical trials. Methods Projections of the original grant’s fixed-cost model were compared to the actual costs of the revised variable-cost model. The original grant’s fixed-cost budget included salaries, fringe benefits, and other direct and indirect costs. For the variable-cost model, the costs were actual payments to the clinical sites and core centers based upon actual trial enrollment. We compared annual direct and indirect costs and per-patient cost for both the fixed and variable models. Differences between clinical site and core center expenditures were also calculated. Results Using a variable-cost budget for clinical sites, funding was extended by no-cost extension from five to eight years. Randomizing sites tripled from 34 to 109. Of the 2,500 targeted sample size, 138 (5.5%) were randomized during the first five years and 1,387 (55.5%) during the no-cost extension. The actual per-patient costs of the variable model were 9% ($13,845) of the projected per-patient costs ($152,992) of the fixed model. Conclusions Performance-based budgets conserve funding, promote compliance, and allow for additional sites at modest additional cost. Costs of large-scale clinical trials can thus be reduced through effective management without compromising scientific integrity. PMID:24661748
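The budget comparison in the abstract reduces to simple arithmetic on the quoted figures. A minimal check, using only the numbers stated above (the variable names are illustrative, not from the trial's accounting):

```python
# Per-patient cost comparison for the two CREST budget models,
# using the figures quoted in the abstract.

fixed_per_patient = 152_992     # projected per-patient cost, fixed-cost model ($)
variable_per_patient = 13_845   # actual per-patient cost, variable-cost model ($)

ratio = variable_per_patient / fixed_per_patient
print(f"variable/fixed per-patient cost: {ratio:.0%}")  # ≈ 9%, as reported
```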

  3. Multi-level discriminative dictionary learning with application to large scale image classification.

    Science.gov (United States)

    Shen, Li; Sun, Gang; Huang, Qingming; Wang, Shuhui; Lin, Zhouchen; Wu, Enhua

    2015-10-01

    The sparse coding technique has shown flexibility and capability in image representation and analysis. It is a powerful tool in many visual applications. Some recent work has shown that incorporating the properties of the task (such as discrimination for classification) into dictionary learning is effective for improving accuracy. However, traditional supervised dictionary learning methods suffer from high computation complexity when dealing with a large number of categories, making them less satisfactory in large-scale applications. In this paper, we propose a novel multi-level discriminative dictionary learning method and apply it to large-scale image classification. Our method takes advantage of hierarchical category correlation to encode multi-level discriminative information. Each internal node of the category hierarchy is associated with a discriminative dictionary and a classification model. The dictionaries at different layers are learnt to capture the information of different scales. Moreover, each node at lower layers also inherits the dictionary of its parent, so that the categories at lower layers can be described with multi-scale information. The learning of dictionaries and associated classification models is jointly conducted by minimizing an overall tree loss. The experimental results on challenging data sets demonstrate that our approach achieves excellent accuracy and competitive computation cost compared with other sparse coding methods for large-scale image classification.

  4. Large-Scale, Parallel, Multi-Sensor Data Fusion in the Cloud

    Science.gov (United States)

    Wilson, B. D.; Manipon, G.; Hua, H.

    2012-12-01

    NASA's Earth Observing System (EOS) is an ambitious facility for studying global climate change. The mandate now is to combine measurements from the instruments on the "A-Train" platforms (AIRS, AMSR-E, MODIS, MISR, MLS, and CloudSat) and other Earth probes to enable large-scale studies of climate change over periods of years to decades. However, moving from predominantly single-instrument studies to a multi-sensor, measurement-based model for long-duration analysis of important climate variables presents serious challenges for large-scale data mining and data fusion. For example, one might want to compare temperature and water vapor retrievals from one instrument (AIRS) to another instrument (MODIS), and to a model (ECMWF), stratify the comparisons using a classification of the "cloud scenes" from CloudSat, and repeat the entire analysis over years of AIRS data. To perform such an analysis, one must discover & access multiple datasets from remote sites, find the space/time "matchups" between instrument swaths and model grids, understand the quality flags and uncertainties for retrieved physical variables, assemble merged datasets, and compute fused products for further scientific and statistical analysis. To efficiently assemble such decade-scale datasets in a timely manner, we are utilizing Elastic Computing in the Cloud and parallel map/reduce-based algorithms. "SciReduce" is a Hadoop-like parallel analysis system, programmed in parallel python, that is designed from the ground up for Earth science. SciReduce executes inside VMWare images and scales to any number of nodes in the Cloud. Unlike Hadoop, in which simple tuples (keys & values) are passed between the map and reduce functions, SciReduce operates on bundles of named numeric arrays, which can be passed in memory or serialized to disk in netCDF4 or HDF5. Thus, SciReduce uses the native datatypes (geolocated grids, swaths, and points) that geo-scientists are familiar with. We are deploying within Sci

  5. Large-Scale, Parallel, Multi-Sensor Atmospheric Data Fusion Using Cloud Computing

    Science.gov (United States)

    Wilson, B. D.; Manipon, G.; Hua, H.; Fetzer, E. J.

    2013-12-01

    NASA's Earth Observing System (EOS) is an ambitious facility for studying global climate change. The mandate now is to combine measurements from the instruments on the 'A-Train' platforms (AIRS, AMSR-E, MODIS, MISR, MLS, and CloudSat) and other Earth probes to enable large-scale studies of climate change over decades. Moving to multi-sensor, long-duration analyses of important climate variables presents serious challenges for large-scale data mining and fusion. For example, one might want to compare temperature and water vapor retrievals from one instrument (AIRS) to another (MODIS), and to a model (MERRA), stratify the comparisons using a classification of the 'cloud scenes' from CloudSat, and repeat the entire analysis over 10 years of data. To efficiently assemble such datasets, we are utilizing Elastic Computing in the Cloud and parallel map/reduce-based algorithms. However, these are data-intensive computing problems, so the data transfer times and storage costs (for caching) are key issues. SciReduce is a Hadoop-like parallel analysis system, programmed in parallel python, that is designed from the ground up for Earth science. SciReduce executes inside VMWare images and scales to any number of nodes in the Cloud. Unlike Hadoop, SciReduce operates on bundles of named numeric arrays, which can be passed in memory or serialized to disk in netCDF4 or HDF5. Figure 1 shows the architecture of the full computational system, with SciReduce at the core. Multi-year datasets are automatically 'sharded' by time and space across a cluster of nodes so that years of data (millions of files) can be processed in a massively parallel way. Input variables (arrays) are pulled on-demand into the Cloud using OPeNDAP URLs or other subsetting services, thereby minimizing the size of the cached input and intermediate datasets. We are using SciReduce to automate the production of multiple versions of a ten-year A-Train water vapor climatology under a NASA MEASURES grant. We will
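The time-sharded map/reduce pattern the abstract describes can be sketched in a few lines. SciReduce itself is not shown here; the function names and shard layout below are illustrative assumptions, meant only to show how per-shard partial sums merge into a multi-year mean field (a climatology):

```python
# Sketch: shard multi-year data by time, map computes partial statistics
# per shard, reduce merges the partials into a climatology grid.
import numpy as np

def map_shard(shard):
    """Per-shard partial sums over one bundle of gridded values."""
    return {"sum": shard.sum(axis=0), "count": shard.shape[0]}

def reduce_partials(partials):
    """Merge partial sums into a mean field (e.g. a water-vapor climatology)."""
    total = sum(p["sum"] for p in partials)
    n = sum(p["count"] for p in partials)
    return total / n

# Three "years" of daily values on a toy 2x2 grid, sharded by time.
rng = np.random.default_rng(0)
shards = [rng.normal(size=(365, 2, 2)) for _ in range(3)]

climatology = reduce_partials([map_shard(s) for s in shards])
print(climatology.shape)  # (2, 2) -- one mean value per grid cell
```

Because the partials carry both sums and counts, the reduce step is associative: shards can be processed on any number of nodes in any order and merged at the end.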

  6. Preparation of a large-scale and multi-layer molybdenum crystal and its characteristics

    International Nuclear Information System (INIS)

    Fujii, Tadayuki

    1989-01-01

    In the present work, the secondary recrystallization method was applied to obtain a large-scale and multi-layer crystal from a hot-rolled multi-laminated molybdenum sheet doped and stacked alternately with different amounts of dopant. It was found that the time and/or temperature at which secondary recrystallization commences from the multi-layer sheet is strongly dependent on the amounts of dopants. Therefore nucleation of secondary grains occurred first in the layer with a small amount of dopant, and the grains then grew into the layer with a large amount of dopant after annealing at 1800 °C-2000 °C. Consequently a large-scale and multi-layer molybdenum crystal can easily be obtained. 12 refs., 9 figs., 2 tabs. (Author)

  7. Large-Scale Multi-Resolution Representations for Accurate Interactive Image and Volume Operations

    KAUST Repository

    Sicat, Ronell B.

    2015-11-25

    The resolutions of acquired image and volume data are ever increasing. However, the resolutions of commodity display devices remain limited. This leads to an increasing gap between data and display resolutions. To bridge this gap, the standard approach is to employ output-sensitive operations on multi-resolution data representations. Output-sensitive operations facilitate interactive applications since their required computations are proportional only to the size of the data that is visible, i.e., the output, and not the full size of the input. Multi-resolution representations, such as image mipmaps, and volume octrees, are crucial in providing these operations direct access to any subset of the data at any resolution corresponding to the output. Despite its widespread use, this standard approach has some shortcomings in three important application areas, namely non-linear image operations, multi-resolution volume rendering, and large-scale image exploration. This dissertation presents new multi-resolution representations for large-scale images and volumes that address these shortcomings. Standard multi-resolution representations require low-pass pre-filtering for anti-aliasing. However, linear pre-filters do not commute with non-linear operations. This becomes problematic when applying non-linear operations directly to any coarse resolution levels in standard representations. Particularly, this leads to inaccurate output when applying non-linear image operations, e.g., color mapping and detail-aware filters, to multi-resolution images. Similarly, in multi-resolution volume rendering, this leads to inconsistency artifacts which manifest as erroneous differences in rendering outputs across resolution levels. To address these issues, we introduce the sparse pdf maps and sparse pdf volumes representations for large-scale images and volumes, respectively. These representations sparsely encode continuous probability density functions (pdfs) of multi-resolution pixel
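The core observation of this record, that linear pre-filters do not commute with non-linear operations, can be demonstrated in a toy example. The code below is illustrative only (it is not from the dissertation): it contrasts applying a non-linear operation to a pre-averaged coarse pixel against the correct answer obtained from the pixel's child distribution, which is the kind of information a pdf-based coarse level retains:

```python
# A linear pre-filter (averaging) does not commute with a non-linear op.
import numpy as np

children = np.array([0.1, 0.2, 0.8, 0.9])  # a 2x2 block of a fine image
op = lambda x: (x > 0.5).astype(float)     # non-linear op, e.g. a color map

naive = op(children.mean())    # operate on the pre-averaged coarse pixel
correct = op(children).mean()  # expectation of the op under the child pdf

print(naive, correct)  # 0.0 vs 0.5 -- the two orders of operations disagree
```

Storing a (sparse) pdf per coarse pixel, rather than a single averaged value, is what allows the `correct` quantity to be computed at any resolution level.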

  8. Large-Scale, Multi-Sensor Atmospheric Data Fusion Using Hybrid Cloud Computing

    Science.gov (United States)

    Wilson, B. D.; Manipon, G.; Hua, H.; Fetzer, E. J.

    2015-12-01

    NASA's Earth Observing System (EOS) is an ambitious facility for studying global climate change. The mandate now is to combine measurements from the instruments on the "A-Train" platforms (AIRS, MODIS, MLS, and CloudSat) and other Earth probes to enable large-scale studies of climate change over decades. Moving to multi-sensor, long-duration analyses presents serious challenges for large-scale data mining and fusion. For example, one might want to compare temperature and water vapor retrievals from one instrument (AIRS) to another (MODIS), and to a model (ECMWF), stratify the comparisons using a classification of the "cloud scenes" from CloudSat, and repeat the entire analysis over 10 years of data. HySDS is a Hybrid-Cloud Science Data System that has been developed and applied under NASA AIST, MEaSUREs, and ACCESS grants. HySDS uses the SciFlow workflow engine to partition analysis workflows into parallel tasks (e.g. segmenting by time or space) that are pushed into a durable job queue. The tasks are "pulled" from the queue by worker Virtual Machines (VM's) and executed in an on-premise Cloud (Eucalyptus or OpenStack) or at Amazon in the public Cloud or govCloud. In this way, years of data (millions of files) can be processed in a massively parallel way. Input variables (arrays) are pulled on-demand into the Cloud using OPeNDAP URLs or other subsetting services, thereby minimizing the size of the transferred data. We are using HySDS to automate the production of multiple versions of a ten-year A-Train water vapor climatology under a MEASURES grant. We will present the architecture of HySDS, describe the achieved "clock time" speedups in fusing datasets on our own nodes and in the Amazon Cloud, and discuss the Cloud cost tradeoffs for storage, compute, and data transfer. Our system demonstrates how one can pull A-Train variables (Levels 2 & 3) on-demand into the Amazon Cloud, and cache only those variables that are heavily used, so that any number of compute jobs can be

  9. Study of multi-functional precision optical measuring system for large scale equipment

    Science.gov (United States)

    Jiang, Wei; Lao, Dabao; Zhou, Weihu; Zhang, Wenying; Jiang, Xingjian; Wang, Yongxi

    2017-10-01

    The effective application of high-performance measurement technology can greatly improve large-scale equipment manufacturing ability. The measurement of geometric parameters such as size, attitude and position therefore requires a measurement system with high precision, multiple functions, portability and other characteristics. However, existing measuring instruments, such as the laser tracker, the total station and photogrammetry systems, mostly offer a single function and require station moving, among other shortcomings. The laser tracker needs to work with a cooperative target and can hardly meet the requirements of measurement in extreme environments. The total station is mainly used for outdoor surveying and mapping, and it hardly achieves the accuracy demanded in industrial measurement. Photogrammetry systems can achieve wide-range multi-point measurement, but the measuring range is limited and the station must be moved repeatedly. This paper presents a non-contact opto-electronic measuring instrument that can work both by scanning the measurement path and by tracking and measuring a cooperative target. The system is based on several key technologies, such as absolute distance measurement, two-dimensional angle measurement, automatic target recognition and accurate aiming, precision control, assembly of a complex mechanical system, and multi-functional 3D visualization software. Among them, the absolute distance measurement module ensures measurement with high accuracy, and the two-dimensional angle measuring module provides precision angle measurement. The system is suitable for non-contact measurement of large-scale equipment; it can ensure the quality and performance of large-scale equipment throughout the manufacturing process and improve the manufacturing ability of large-scale and high-end equipment.

  10. Knowledge Guided Disambiguation for Large-Scale Scene Classification With Multi-Resolution CNNs

    Science.gov (United States)

    Wang, Limin; Guo, Sheng; Huang, Weilin; Xiong, Yuanjun; Qiao, Yu

    2017-04-01

    Convolutional Neural Networks (CNNs) have made remarkable progress on scene recognition, partially due to recent large-scale scene datasets such as Places and Places2. Scene categories are often defined by multi-level information, including local objects, global layout, and background environment, thus leading to large intra-class variations. In addition, with the increasing number of scene categories, label ambiguity has become another crucial issue in large-scale classification. This paper focuses on large-scale scene recognition and makes two major contributions to tackle these issues. First, we propose a multi-resolution CNN architecture that captures visual content and structure at multiple levels. The multi-resolution CNNs are composed of coarse resolution CNNs and fine resolution CNNs, which are complementary to each other. Second, we design two knowledge guided disambiguation techniques to deal with the problem of label ambiguity. (i) We exploit the knowledge from the confusion matrix computed on validation data to merge ambiguous classes into a super category. (ii) We utilize the knowledge of extra networks to produce a soft label for each image. Then the super categories or soft labels are employed to guide CNN training on the Places2. We conduct extensive experiments on three large-scale image datasets (ImageNet, Places, and Places2), demonstrating the effectiveness of our approach. Furthermore, our method took part in two major scene recognition challenges, achieving second place at the Places2 challenge in ILSVRC 2015 and first place at the LSUN challenge in CVPR 2016. Finally, we directly test the learned representations on other scene benchmarks, and obtain new state-of-the-art results on the MIT Indoor67 (86.7%) and SUN397 (72.0%). We release the code and models at https://github.com/wanglimin/MRCNN-Scene-Recognition.
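The first disambiguation technique, merging classes that the validation confusion matrix shows are frequently confused, can be sketched concretely. The threshold and the union-find grouping below are illustrative assumptions; the paper's exact merging criterion may differ:

```python
# Sketch: merge ambiguous classes into super categories based on a
# row-normalized, symmetrized confusion matrix from validation data.
import numpy as np

def merge_ambiguous(conf, threshold):
    """Group classes whose symmetrized confusion rate exceeds threshold."""
    n = conf.shape[0]
    parent = list(range(n))

    def find(i):  # union-find with path compression
        while parent[i] != i:
            parent[i] = parent[parent[i]]
            i = parent[i]
        return i

    rates = conf / conf.sum(axis=1, keepdims=True)  # per-class error rates
    sym = (rates + rates.T) / 2                     # symmetrize confusion
    for i in range(n):
        for j in range(i + 1, n):
            if sym[i, j] > threshold:
                parent[find(i)] = find(j)           # same super category
    return [find(i) for i in range(n)]

# Toy case: classes 0 and 1 are heavily confused; class 2 is clean.
conf = np.array([[80, 20,  0],
                 [25, 75,  0],
                 [ 1,  1, 98]])
labels = merge_ambiguous(conf, threshold=0.1)
print(labels)  # classes 0 and 1 share a super-category label; class 2 does not
```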

  11. HD-MTL: Hierarchical Deep Multi-Task Learning for Large-Scale Visual Recognition.

    Science.gov (United States)

    Fan, Jianping; Zhao, Tianyi; Kuang, Zhenzhong; Zheng, Yu; Zhang, Ji; Yu, Jun; Peng, Jinye

    2017-02-09

    In this paper, a hierarchical deep multi-task learning (HD-MTL) algorithm is developed to support large-scale visual recognition (e.g., recognizing thousands or even tens of thousands of atomic object classes automatically). First, multiple sets of multi-level deep features are extracted from different layers of deep convolutional neural networks (deep CNNs), and they are used to accomplish the coarse-to-fine tasks of hierarchical visual recognition more effectively. A visual tree is then learned by assigning the visually-similar atomic object classes with similar learning complexities into the same group, which can provide a good environment for determining the interrelated learning tasks automatically. By leveraging the inter-task relatedness (inter-class similarities) to learn more discriminative group-specific deep representations, our deep multi-task learning algorithm can train more discriminative node classifiers for distinguishing the visually-similar atomic object classes effectively. Our hierarchical deep multi-task learning (HD-MTL) algorithm can integrate two discriminative regularization terms to control the inter-level error propagation effectively, and it provides an end-to-end approach for jointly learning more representative deep CNNs (for image representation) and a more discriminative tree classifier (for large-scale visual recognition) and updating them simultaneously. Our incremental deep learning algorithms can effectively adapt both the deep CNNs and the tree classifier to new training images and new object classes. Our experimental results demonstrate that our HD-MTL algorithm achieves very competitive results on improving the accuracy rates for large-scale visual recognition.

  12. Exploiting multi-scale parallelism for large scale numerical modelling of laser wakefield accelerators

    International Nuclear Information System (INIS)

    Fonseca, R A; Vieira, J; Silva, L O; Fiuza, F; Davidson, A; Tsung, F S; Mori, W B

    2013-01-01

    A new generation of laser wakefield accelerators (LWFA), supported by the extreme accelerating fields generated in the interaction of PW-Class lasers and underdense targets, promises the production of high quality electron beams in short distances for multiple applications. Achieving this goal will rely heavily on numerical modelling to further understand the underlying physics and identify optimal regimes, but large scale modelling of these scenarios is computationally heavy and requires the efficient use of state-of-the-art petascale supercomputing systems. We discuss the main difficulties involved in running these simulations and the new developments implemented in the OSIRIS framework to address these issues, ranging from multi-dimensional dynamic load balancing and hybrid distributed/shared memory parallelism to the vectorization of the PIC algorithm. We present the results of the OASCR Joule Metric program on the issue of large scale modelling of LWFA, demonstrating speedups of over 1 order of magnitude on the same hardware. Finally, scalability to over ∼10^6 cores and sustained performance over ∼2 PFlops is demonstrated, opening the way for large scale modelling of LWFA scenarios. (paper)

  13. Large-scale alcohol use and socioeconomic position of origin: longitudinal study from ages 15 to 19 years

    DEFF Research Database (Denmark)

    Andersen, Anette; Holstein, Bjørn E; Due, Pernille

    2008-01-01

    AIM: To examine socioeconomic position (SEP) of origin as predictor of large-scale alcohol use in adolescence. METHODS: The study population was a random sample of 15-year-olds at baseline (n=843) with a first follow-up 4 years later (n=729). Excess alcohol intake was assessed by consumption last...

  14. Multi-Agent System Supporting Automated Large-Scale Photometric Computations

    Directory of Open Access Journals (Sweden)

    Adam Sȩdziwy

    2016-02-01

    The technologies related to green energy, smart cities and similar areas, dynamically developed in recent years, frequently face problems of a computational rather than a technological nature. One example is the ability to accurately predict weather conditions for PV farms or wind turbines. Another group of issues relates to the complexity of the computations required to obtain an optimal setup of a solution being designed. In this article, we present a case representing the latter group of problems, namely designing large-scale power-saving lighting installations. The term “large-scale” refers to an entire city area, containing tens of thousands of luminaires. Although a simple power reduction for a single street, giving limited savings, is relatively easy, it becomes infeasible for tasks covering thousands of luminaires described by precise coordinates (instead of simplified layouts). To overcome this critical issue, we propose introducing a formal representation of the computing problem and applying a multi-agent system to perform design-related computations in parallel. An important measure introduced in the article, indicating optimization progress, is entropy. It also allows for terminating the optimization when the solution is satisfying. The article contains the results of real-life calculations made with the help of the presented approach.
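The abstract uses entropy as a progress indicator and a termination criterion. One common reading, assumed here rather than taken from the article, is the Shannon entropy of the distribution of candidate solutions: as the agents converge on one lighting setup, the entropy falls toward zero and the optimization can stop:

```python
# Shannon entropy of the candidate-solution distribution as a
# convergence/termination measure (illustrative interpretation).
import math
from collections import Counter

def entropy(candidates):
    counts = Counter(candidates)
    n = len(candidates)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

early = ["A", "B", "C", "D"]          # agents disagree: maximal entropy
late = ["A", "A", "A", "B"]           # search converging: lower entropy

print(entropy(early), entropy(late))  # 2.0 then ~0.81

STOP_BELOW = 1.0                      # hypothetical threshold
print(entropy(late) < STOP_BELOW)     # True -- terminate the optimization
```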

  15. Tenant Placement Strategies within Multi-Level Large-Scale Shopping Centers

    OpenAIRE

    Tony Shun-Te Yuo; Colin Lizieri

    2013-01-01

    This paper argues that tenant placement strategies for large-scale multi-unit shopping centers differ depending on the number of floor levels. Two core strategies are identified: dispersion and departmentalization. There exists a trade-off between three income effects: basic footfall effects, spillover effects, and an effective floor area effect, which varies by the number of floor levels. Departmentalization is favored for centers with more than four floors. Greater spatial complexity also p...

  16. Biodiversity conservation in Swedish forests: ways forward for a 30-year-old multi-scaled approach.

    Science.gov (United States)

    Gustafsson, Lena; Perhans, Karin

    2010-12-01

    A multi-scaled model for biodiversity conservation in forests was introduced in Sweden 30 years ago, which makes it a pioneer example of an integrated ecosystem approach. Trees are set aside for biodiversity purposes at multiple scale levels varying from individual trees to areas of thousands of hectares, with landowner responsibility at the lowest level and with increasing state involvement at higher levels. Ecological theory supports the multi-scaled approach, and retention efforts at every harvest occasion stimulate landowners' interest in conservation. We argue that the model has large advantages but that in a future with intensified forestry and global warming, development based on more progressive thinking is necessary to maintain and increase biodiversity. Suggestions for the future include joint planning for several forest owners, consideration of cost-effectiveness, accepting opportunistic work models, adjusting retention levels to stand and landscape composition, introduction of temporary reserves, creation of "receiver habitats" for species escaping climate change, and protection of young forests.

  17. Multi-GNSS PPP-RTK: From Large- to Small-Scale Networks

    Directory of Open Access Journals (Sweden)

    Nandakumaran Nadarajah

    2018-04-01

    Precise point positioning (PPP) and its integer ambiguity resolution-enabled variant, PPP-RTK (real-time kinematic), can benefit enormously from the integration of multiple global navigation satellite systems (GNSS). In such a multi-GNSS landscape, the positioning convergence time is expected to be reduced considerably as compared to the one obtained by a single-GNSS setup. It is therefore the goal of the present contribution to provide numerical insights into the role taken by multi-GNSS integration in delivering fast and high-precision positioning solutions (sub-decimeter and centimeter levels) using PPP-RTK. To that end, we employ the Curtin PPP-RTK platform and process data-sets of GPS, the BeiDou Navigation Satellite System (BDS) and Galileo in stand-alone and combined forms. The data-sets are collected by various receiver types, ranging from high-end multi-frequency geodetic receivers to low-cost single-frequency mass-market receivers. The corresponding stations form a large-scale (Australia-wide) network as well as a small-scale network with inter-station distances less than 30 km. In the case of the Australia-wide GPS-only ambiguity-float setup, 90% of the horizontal positioning errors (kinematic mode) are shown to become less than five centimeters after 103 min. The stated required time is reduced to 66 min for the corresponding GPS + BDS + Galileo setup. The time is further reduced to 15 min by applying single-receiver ambiguity resolution. The outcomes are supported by the positioning results of the small-scale network.

  18. Large-scale inverse model analyses employing fast randomized data reduction

    Science.gov (United States)

    Lin, Youzuo; Le, Ellen B.; O'Malley, Daniel; Vesselinov, Velimir V.; Bui-Thanh, Tan

    2017-08-01

    When the number of observations is large, it is computationally challenging to apply classical inverse modeling techniques. We have developed a new computationally efficient technique for solving inverse problems with a large number of observations (e.g., on the order of 10^7 or greater). Our method, which we call the randomized geostatistical approach (RGA), is built upon the principal component geostatistical approach (PCGA). We employ a data reduction technique combined with the PCGA to improve the computational efficiency and reduce the memory usage. Specifically, we employ a randomized numerical linear algebra technique based on a so-called "sketching" matrix to effectively reduce the dimension of the observations without losing the information content needed for the inverse analysis. In this way, the computational and memory costs for RGA scale with the information content rather than the size of the calibration data. Our algorithm is coded in Julia and implemented in the MADS open-source high-performance computational framework (http://mads.lanl.gov). We apply our new inverse modeling method to invert for a synthetic transmissivity field. Compared to a standard geostatistical approach (GA), our method is more efficient when the number of observations is large. Most importantly, our method is capable of solving larger inverse problems than the standard GA and PCGA approaches. Therefore, our new model inversion method is a powerful tool for solving large-scale inverse problems. The method can be applied in any field and is not limited to hydrogeological applications such as the characterization of aquifer heterogeneity.
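The "sketching" idea can be illustrated on a toy linear inverse problem: a random matrix S compresses a tall observation vector and forward operator so the fit is done in the reduced space. Gaussian sketching is one standard choice assumed here for illustration; the paper's specific sketch and the geostatistical machinery of RGA are not reproduced:

```python
# Sketching a tall least-squares inverse problem: solve with S @ A and
# S @ y (k rows) instead of A and y (m rows), k << m.
import numpy as np

rng = np.random.default_rng(42)
m, n, k = 10_000, 20, 200            # many observations, few parameters

A = rng.normal(size=(m, n))          # forward operator
x_true = rng.normal(size=n)
y = A @ x_true                       # noiseless synthetic observations

S = rng.normal(size=(k, m)) / np.sqrt(k)        # Gaussian sketching matrix
x_sketch, *_ = np.linalg.lstsq(S @ A, S @ y, rcond=None)

# In this noiseless, consistent case the sketched solve recovers x_true.
print(np.allclose(x_sketch, x_true, atol=1e-6))  # True
```

The cost of the solve now scales with the sketch size k rather than the number of observations m, which is the point made in the abstract about scaling with information content rather than data size.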

  19. Large-scale, multi-compartment tests in PANDA for LWR-containment analysis and code validation

    International Nuclear Information System (INIS)

    Paladino, Domenico; Auban, Olivier; Zboray, Robert

    2006-01-01

    The large-scale thermal-hydraulic PANDA facility has been used in recent years for investigating passive decay heat removal systems and related containment phenomena relevant for next-generation and current light water reactors. As part of the 5th EURATOM Framework Programme project TEMPEST, a series of tests was performed in PANDA to experimentally investigate the distribution of hydrogen inside the containment and its effect on the performance of the Passive Containment Cooling System (PCCS) designed for the Economic Simplified Boiling Water Reactor (ESBWR). In a postulated severe accident, a large amount of hydrogen could be released in the Reactor Pressure Vessel (RPV) as a consequence of the cladding Metal-Water (M-W) reaction and discharged together with steam to the Drywell (DW) compartment. In the PANDA tests, hydrogen was simulated by using helium. This paper illustrates the results of a TEMPEST test performed in PANDA, named Test T1.2. In Test T1.2, the gas stratification (steam-helium) patterns forming in the large-scale multi-compartment PANDA DW, and the effect of non-condensable gas (helium) on the overall behaviour of the PCCS, were identified. Gas mixing and stratification in a large-scale multi-compartment system are currently being further investigated in PANDA in the frame of the OECD project SETH. The testing philosophy in this new PANDA program is to produce data for code validation in relation to specific phenomena, such as gas stratification in the containment, gas transport between containment compartments, and wall condensation. These types of phenomena are driven by buoyant high-momentum injections (jets) and/or low-momentum injections (plumes), depending on the transient scenario. In this context, the new SETH tests in PANDA are particularly valuable for producing an experimental database for code assessment.
This paper also presents an overview of the PANDA SETH tests and the major improvements in instrumentation carried out in the PANDA

  20. PKI security in large-scale healthcare networks.

    Science.gov (United States)

    Mantas, Georgios; Lymberopoulos, Dimitrios; Komninos, Nikos

    2012-06-01

    During the past few years, numerous PKIs (Public Key Infrastructures) have been proposed for healthcare networks in order to ensure secure communication services and exchange of data among healthcare professionals. However, these healthcare PKIs face a plethora of challenges, particularly when deployed over large-scale healthcare networks. In this paper, we propose a PKI to ensure security in a large-scale Internet-based healthcare network connecting a wide spectrum of healthcare units geographically distributed within a wide region. Furthermore, the proposed PKI addresses the trust issues that arise in a large-scale healthcare network comprising multi-domain PKIs.

  1. Distributed parallel cooperative coevolutionary multi-objective large-scale immune algorithm for deployment of wireless sensor networks

    DEFF Research Database (Denmark)

    Cao, Bin; Zhao, Jianwei; Yang, Po

    2018-01-01

    Using immune algorithms is generally a time-intensive process, especially for problems with a large number of variables. In this paper, we propose a distributed parallel cooperative coevolutionary multi-objective large-scale immune algorithm that is implemented using the message passing interface (MPI). The proposed algorithm is composed of three layers: objective, group and individual layers. First, for each objective in the multi-objective problem to be addressed, a subpopulation is used for optimization, and an archive population is used to optimize all the objectives. Second, the large... Compared with three multi-objective evolutionary algorithms, namely the Cooperative Coevolutionary Generalized Differential Evolution 3, the Cooperative Multi-objective Differential Evolution and the Nondominated Sorting Genetic Algorithm III, the proposed algorithm addresses the deployment optimization problem efficiently and effectively.

  2. Large-scale grid management

    International Nuclear Information System (INIS)

    Langdal, Bjoern Inge; Eggen, Arnt Ove

    2003-01-01

    The network companies in the Norwegian electricity industry now have to establish large-scale network management, a concept essentially characterized by (1) a broader focus (broadband, multi-utility, ...) and (2) bigger units with large networks and more customers. Research done by SINTEF Energy Research shows so far that the approaches within large-scale network management may be structured according to three main challenges: centralization, decentralization and outsourcing. The article is part of a planned series.

  3. Band gaps and localization of surface water waves over large-scale sand waves with random fluctuations

    Science.gov (United States)

    Zhang, Yu; Li, Yan; Shao, Hao; Zhong, Yaozhao; Zhang, Sai; Zhao, Zongxi

    2012-06-01

    Band structure and wave localization are investigated for sea surface water waves over large-scale sand wave topography. Sand wave height, sand wave width, water depth, and water width between adjacent sand waves have significant impacts on band gaps. Random fluctuations of sand wave height, sand wave width, and water depth induce water wave localization. However, a random water width produces a perfect transmission tunnel of water waves at a certain frequency, so that localization does not occur no matter how large a disorder level is applied. Together with the theoretical results, field experimental observations in the Taiwan Bank suggest band gaps and wave localization as the physical mechanisms of sea surface water waves propagating over natural large-scale sand waves.

  4. A Large-Scale Multi-Hop Localization Algorithm Based on Regularized Extreme Learning for Wireless Networks.

    Science.gov (United States)

    Zheng, Wei; Yan, Xiaoyong; Zhao, Wei; Qian, Chengshan

    2017-12-20

    A novel large-scale multi-hop localization algorithm based on regularized extreme learning is proposed in this paper. The large-scale multi-hop localization problem is formulated as a learning problem. Unlike similar localization algorithms, the proposed algorithm overcomes the shortcoming of traditional algorithms, which are only applicable to isotropic networks, and therefore adapts well to complex deployment environments. The proposed algorithm is composed of three stages: data acquisition, modeling and location estimation. In the data acquisition stage, the training information between nodes of the given network is collected. In the modeling stage, a model relating hop counts to the physical distances between nodes is constructed using regularized extreme learning. In the location estimation stage, each node finds its specific location in a distributed manner. Theoretical analysis and several experiments show that the proposed algorithm can adapt to different topological environments with low computational cost. Furthermore, high accuracy can be achieved by this method without setting complex parameters.
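
    As an illustration of the modeling stage described above, the following is a minimal sketch of a ridge-regularized extreme learning machine that maps hop-count vectors (to a set of anchors) onto node coordinates. The network layout, the quantized-distance stand-in for hop counts, and all parameter values are hypothetical, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic network: 8 anchor nodes and 200 training nodes on a unit square.
# Hop counts are approximated by quantized distances -- a hypothetical stand-in
# for the data acquisition stage described in the abstract.
anchors = rng.uniform(0, 1, size=(8, 2))
train_xy = rng.uniform(0, 1, size=(200, 2))

def hop_counts(xy):
    return np.ceil(np.linalg.norm(xy[:, None] - anchors[None], axis=2) / 0.15)

# Regularized extreme learning machine: a fixed random hidden layer followed by
# a ridge-regressed linear readout, beta = (H^T H + lam*I)^{-1} H^T Y.
# The small weight scale keeps tanh out of saturation for hop-count inputs.
L, lam = 100, 1e-2
W = rng.normal(scale=0.1, size=(anchors.shape[0], L))
b = rng.normal(size=L)

def hidden(X):
    return np.tanh(X @ W + b)

H = hidden(hop_counts(train_xy))
beta = np.linalg.solve(H.T @ H + lam * np.eye(L), H.T @ train_xy)

# Location estimation for unseen nodes.
test_xy = rng.uniform(0, 1, size=(50, 2))
pred = hidden(hop_counts(test_xy)) @ beta
err = np.linalg.norm(pred - test_xy, axis=1).mean()
print(round(err, 3))
```

    The regularization term lam keeps the normal-equation solve well conditioned, which is what distinguishes regularized extreme learning from the plain variant.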

  5. Large-scale multi-stage constructed wetlands for secondary effluents treatment in northern China: Carbon dynamics.

    Science.gov (United States)

    Wu, Haiming; Fan, Jinlin; Zhang, Jian; Ngo, Huu Hao; Guo, Wenshan

    2018-02-01

    Multi-stage constructed wetlands (CWs) have proven to be a cost-effective alternative for treating various wastewaters, improving treatment performance compared with conventional single-stage CWs. However, few long-term full-scale multi-stage CWs have been operated and evaluated for polishing effluents from domestic wastewater treatment plants (WWTP). This study investigated the seasonal and spatial dynamics of carbon, and the effects of the key factors (input loading and temperature), in the large-scale seven-stage Wu River CW polishing domestic WWTP effluents in northern China. The results indicated a significant improvement in water quality. Significant seasonal and spatial variations of organics removal were observed in the Wu River CW, with a higher COD removal efficiency of 64-66% in summer and fall. Obvious seasonal and spatial variations of CH₄ and CO₂ emissions were also found, with average CH₄ and CO₂ emission rates of 3.78-35.54 mg m⁻² d⁻¹ and 610.78-8992.71 mg m⁻² d⁻¹, respectively, while the higher CH₄ and CO₂ emission fluxes were obtained in spring and summer. Seasonal air temperatures and inflow COD loading rates significantly affected organics removal and CH₄ emission, but they appeared to have a weak influence on CO₂ emission. Overall, this study suggested that the large-scale Wu River CW might be a potential source of greenhouse gases (GHG); however, considering the sustainability of the multi-stage CW, an inflow COD loading rate of 1.8-2.0 g m⁻² d⁻¹ and a temperature of 15-20 °C may be the suitable conditions for achieving higher organics removal efficiency and lower GHG emission in polishing the domestic WWTP effluent. The obtained knowledge of the carbon dynamics in the large-scale Wu River CW will not only be helpful for understanding the carbon cycles, but can also provide useful field experience for the design, operation and management of multi-stage CW treatments. Copyright © 2017 Elsevier Ltd. All rights reserved.

  6. Multi-level infrastructure of interconnected testbeds of large-scale wireless sensor networks (MI2T-WSN)

    CSIR Research Space (South Africa)

    Abu-Mahfouz, Adnan M

    2012-06-01

    Full Text Available are still required for further testing before the real implementation. In this paper we propose a multi-level infrastructure of interconnected testbeds of large-scale WSNs. This testbed consists of 1000 sensor motes that will be distributed into four...

  7. Implementation of a large-scale hospital information infrastructure for multi-unit health-care services.

    Science.gov (United States)

    Yoo, Sun K; Kim, Dong Keun; Kim, Jung C; Park, Youn Jung; Chang, Byung Chul

    2008-01-01

    With the increase in demand for high-quality medical services, the need for an innovative hospital information system has become essential. An improved system has been implemented in all hospital units of the Yonsei University Health System. Interoperability between the multiple units required an appropriate hardware infrastructure and software architecture. This large-scale hospital information system encompassed PACS (Picture Archiving and Communications Systems), EMR (Electronic Medical Records) and ERP (Enterprise Resource Planning). It involved two tertiary hospitals and 50 community hospitals. The monthly data production rate of the integrated hospital information system is about 1.8 TByte, and the total quantity of data produced so far is about 60 TByte. Large-scale information exchange and sharing will be particularly useful for telemedicine applications.

  8. Technology for the large-scale production of multi-crystalline silicon solar cells and modules

    International Nuclear Information System (INIS)

    Weeber, A.W.; De Moor, H.H.C.

    1997-06-01

    In cooperation with Shell Solar Energy (formerly R and S Renewable Energy Systems) and the Research Institute for Materials of the Catholic University Nijmegen, the Netherlands Energy Research Foundation (ECN) plans to develop a competitive technology for the large-scale manufacturing of solar cells and solar modules on the basis of multi-crystalline silicon. The project will be carried out within the framework of the Economy, Ecology and Technology (EET) programme of the Dutch Ministry of Economic Affairs and the Dutch Ministry of Education, Culture and Sciences. The aim of the EET project is to reduce the costs of a solar module by 50%, by means of increasing the conversion efficiency as well as developing cheap processes for large-scale production.

  9. A decomposition heuristics based on multi-bottleneck machines for large-scale job shop scheduling problems

    Directory of Open Access Journals (Sweden)

    Yingni Zhai

    2014-10-01

    Full Text Available Purpose: A decomposition heuristic based on multi-bottleneck machines for large-scale job shop scheduling problems (JSP) is proposed. Design/methodology/approach: In the algorithm, a number of sub-problems are constructed by iteratively decomposing the large-scale JSP according to the process route of each job. The solution of the large-scale JSP can then be obtained by iteratively solving the sub-problems. In order to improve the sub-problems' solving efficiency and the solution quality, a detection method for multi-bottleneck machines based on the critical path is proposed, whereby the unscheduled operations can be divided into bottleneck operations and non-bottleneck operations. According to the principle that "the bottleneck leads the performance of the whole manufacturing system" in TOC (Theory of Constraints), the bottleneck operations are scheduled by a genetic algorithm for high solution quality, and the non-bottleneck operations are scheduled by dispatching rules to improve the solving efficiency. Findings: In the process of constructing the sub-problems, partial operations in the previously scheduled sub-problem are moved into the successive sub-problem for re-optimization. This strategy can improve the solution quality of the algorithm. In the process of solving the sub-problems, the strategy of evaluating a chromosome's fitness by predicting the global scheduling objective value can improve the solution quality. Research limitations/implications: In this research, there are some assumptions which reduce the complexity of the large-scale scheduling problem. They are as follows: the processing route of each job is predetermined, and the processing time of each operation is fixed. There is no machine breakdown, and no preemption of the operations is allowed. These assumptions should be considered if the algorithm is used in an actual job shop. Originality/value: The research provides an efficient scheduling method for the

  10. Multi-granularity Bandwidth Allocation for Large-Scale WDM/TDM PON

    Science.gov (United States)

    Gao, Ziyue; Gan, Chaoqin; Ni, Cuiping; Shi, Qiongling

    2017-12-01

    WDM (wavelength-division multiplexing)/TDM (time-division multiplexing) PON (passive optical network) is viewed as a promising solution for delivering multiple services and applications, such as high-definition video, video conferencing and data traffic. Considering real-time transmission, QoS (quality of service) requirements and a differentiated services model, a multi-granularity dynamic bandwidth allocation (DBA) scheme, in both the wavelength and time domains, for large-scale hybrid WDM/TDM PON is proposed in this paper. The proposed scheme achieves load balance by using bandwidth prediction. Based on the bandwidth prediction, wavelength assignment can be realized fairly and effectively to satisfy the different demands of the various classes. In particular, the allocation of residual bandwidth further augments the DBA and makes full use of the bandwidth resources in the network. To further improve network performance, two schemes named extending the cycle of one free wavelength (ECoFW) and large bandwidth shrinkage (LBS) are proposed, which prevent transmission interruptions when a user employs more than one wavelength. The simulation results show the effectiveness of the proposed scheme.

  11. Challenges and opportunities for large landscape-scale management in a shifting climate: The importance of nested adaptation responses across geospatial and temporal scales

    Science.gov (United States)

    Gary M. Tabor; Anne Carlson; Travis Belote

    2014-01-01

    The Yellowstone to Yukon Conservation Initiative (Y2Y) was established over 20 years ago as an experiment in large landscape conservation. Initially, Y2Y emerged as a response to large-scale habitat fragmentation by advancing ecological connectivity. It also laid the foundation for large-scale multi-stakeholder conservation collaboration with almost 200 non-...

  12. Multi-Scale Models for the Scale Interaction of Organized Tropical Convection

    Science.gov (United States)

    Yang, Qiu

    Assessing the upscale impact of organized tropical convection from small spatial and temporal scales is a research imperative, not only for having a better understanding of the multi-scale structures of dynamical and convective fields in the tropics, but also for eventually helping in the design of new parameterization strategies to improve the next-generation global climate models. Here self-consistent multi-scale models are derived systematically by following the multi-scale asymptotic methods and used to describe the hierarchical structures of tropical atmospheric flows. The advantages of using these multi-scale models lie in isolating the essential components of multi-scale interaction and providing assessment of the upscale impact of the small-scale fluctuations onto the large-scale mean flow through eddy flux divergences of momentum and temperature in a transparent fashion. Specifically, this thesis includes three research projects about multi-scale interaction of organized tropical convection, involving tropical flows at different scaling regimes and utilizing different multi-scale models correspondingly. Inspired by the observed variability of tropical convection on multiple temporal scales, including daily and intraseasonal time scales, the goal of the first project is to assess the intraseasonal impact of the diurnal cycle on the planetary-scale circulation such as the Hadley cell. As an extension of the first project, the goal of the second project is to assess the intraseasonal impact of the diurnal cycle over the Maritime Continent on the Madden-Julian Oscillation. In the third project, the goals are to simulate the baroclinic aspects of the ITCZ breakdown and assess its upscale impact on the planetary-scale circulation over the eastern Pacific. These simple multi-scale models should be useful to understand the scale interaction of organized tropical convection and help improve the parameterization of unresolved processes in global climate models.

  13. Multi-scale salient feature extraction on mesh models

    KAUST Repository

    Yang, Yongliang; Shen, ChaoHui

    2012-01-01

    We present a new method for extracting multi-scale salient features on meshes. It is based on robust estimation of curvature on multiple scales. The coincidence between a salient feature and the scale of interest can be established straightforwardly: detailed features appear on small scales, and features with more global shape information show up on large scales. We demonstrate that this multi-scale description of features accords with human perception and can further be used for several applications, such as feature classification and viewpoint selection. Experiments show that our method is very helpful as a multi-scale analysis tool for studying 3D shapes. © 2012 Springer-Verlag.

  14. Multi-parameter sensor based on random fiber lasers

    Directory of Open Access Journals (Sweden)

    Yanping Xu

    2016-09-01

    Full Text Available We demonstrate a concept of utilizing random fiber lasers to achieve multi-parameter sensing. The proposed random fiber ring laser consists of an erbium-doped fiber as the gain medium and a random fiber grating as the feedback. The random feedback is effectively realized by a large number of reflections from around 50,000 femtosecond-laser-induced refractive-index modulation regions along a 10 cm standard single-mode fiber. Numerous polarization-dependent spectral filters are formed and superimposed to provide multiple lasing lines with a signal-to-noise ratio of up to 40 dB, which enables a high-fidelity multi-parameter sensing scheme. The number of sensing parameters can be controlled by the number of lasing lines via the input polarization, and the wavelength shift of each peak can be exploited for simultaneous multi-parameter sensing with one sensing probe. In addition, the random-grating-induced coupling between core and cladding modes can potentially be used for liquid medical sample sensing in medical diagnostics, biology and remote sensing in hostile environments.

  15. Large-scale grid management; Storskala Nettforvaltning

    Energy Technology Data Exchange (ETDEWEB)

    Langdal, Bjoern Inge; Eggen, Arnt Ove

    2003-07-01

    The network companies in the Norwegian electricity industry now have to establish large-scale network management, a concept essentially characterized by (1) a broader focus (broadband, multi-utility, ...) and (2) bigger units with large networks and more customers. Research done by SINTEF Energy Research shows so far that the approaches within large-scale network management may be structured according to three main challenges: centralization, decentralization and outsourcing. The article is part of a planned series.

  16. A fast learning method for large scale and multi-class samples of SVM

    Science.gov (United States)

    Fan, Yu; Guo, Huiming

    2017-06-01

    A fast learning method for multi-class classification SVMs (Support Vector Machines), based on a binary tree, is presented to address the low learning efficiency of SVMs when processing large-scale multi-class samples. This paper adopts a bottom-up method to set up the binary-tree hierarchy; according to the achieved hierarchy, a sub-classifier learns from the corresponding samples of each node. During learning, several class clusters are generated after the first clustering of the training samples. First, central points are extracted from those class clusters which have only one type of sample. For those which have two types of samples, the cluster numbers of their positive and negative samples are set according to their degree of mixture, and a secondary clustering is undertaken, after which central points are extracted from the resulting sub-class clusters. Sub-classifiers are then obtained by learning from the reduced samples formed by integrating the extracted central points. Simulation experiments show that this fast learning method, which is based on multi-level clustering, can guarantee higher classification accuracy, greatly reduce the number of samples and effectively improve learning efficiency.
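
    The bottom-up tree construction can be sketched as follows. This toy builds only the binary hierarchy (repeatedly merging the two class groups with the closest centroids); the per-node SVM training and the secondary clustering are omitted, and the data and parameters are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical 6-class toy data: each class clustered around its own centre.
centres = rng.uniform(-5, 5, size=(6, 2))
X = np.concatenate([c + rng.normal(scale=0.3, size=(40, 2)) for c in centres])
y = np.repeat(np.arange(6), 40)

# Bottom-up construction: start from per-class centroids and repeatedly merge
# the two closest groups, yielding the hierarchy on which each binary
# sub-classifier (an SVM in the paper; omitted here) would be trained.
groups = {k: ([k], X[y == k].mean(axis=0)) for k in range(6)}
merges = []
next_id = 6
while len(groups) > 1:
    keys = list(groups)
    best, best_dist = None, np.inf
    for i, a in enumerate(keys):
        for c in keys[i + 1:]:
            d = np.linalg.norm(groups[a][1] - groups[c][1])
            if d < best_dist:
                best, best_dist = (a, c), d
    a, c = best
    members = groups[a][0] + groups[c][0]          # original class labels only
    centroid = X[np.isin(y, members)].mean(axis=0)
    merges.append((a, c, next_id))
    groups[next_id] = (members, centroid)
    del groups[a], groups[c]
    next_id += 1

print(len(merges))  # a tree over 6 leaf classes always needs 5 merges
```

    Each internal node recorded in merges would then get its own binary classifier separating its two children, so classification at test time walks the tree from root to leaf.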

  17. PANDA: a Large Scale Multi-Purpose Test Facility for LWR Safety Research

    Energy Technology Data Exchange (ETDEWEB)

    Dreier, Joerg; Paladino, Domenico; Huggenberger, Max; Andreani, Michele [Laboratory for Thermal-Hydraulics, Nuclear Energy and Safety Research Department, Paul Scherrer Institut (PSI), CH-5232 Villigen PSI (Switzerland); Yadigaroglu, George [ETH Zuerich, Technoparkstrasse 1, Einstein 22- CH-8005 Zuerich (Switzerland)

    2008-07-01

    PANDA is a large-scale multi-purpose thermal-hydraulics test facility, built and operated by PSI. Due to its modular structure, PANDA provides flexibility for a variety of applications, ranging from integral containment system investigations, primary system tests and component experiments to large-scale separate-effects tests. For many applications the experimental results are used directly, for example for concept demonstrations or for the characterisation of phenomena or components, but all the experimental data generated in the various test campaigns is unique and has been, and will continue to be, widely used for the validation and improvement of a variety of computer codes, including codes with 3D capabilities, for reactor safety analysis. The paper provides an overview of the completed and on-going research programmes performed in the PANDA facility in the different areas of application, including the main results and conclusions of the investigations. In particular, the advanced passive containment cooling system concept investigations for the SBWR, the ESBWR and the SWR1000 are presented in relation to various aspects, and the main findings are summarised. Finally, the goals, planned investigations and expected results of the on-going OECD project SETH-2 are presented. (authors)

  18. PANDA: a Large Scale Multi-Purpose Test Facility for LWR Safety Research

    International Nuclear Information System (INIS)

    Dreier, Joerg; Paladino, Domenico; Huggenberger, Max; Andreani, Michele; Yadigaroglu, George

    2008-01-01

    PANDA is a large-scale multi-purpose thermal-hydraulics test facility, built and operated by PSI. Due to its modular structure, PANDA provides flexibility for a variety of applications, ranging from integral containment system investigations, primary system tests and component experiments to large-scale separate-effects tests. For many applications the experimental results are used directly, for example for concept demonstrations or for the characterisation of phenomena or components, but all the experimental data generated in the various test campaigns is unique and has been, and will continue to be, widely used for the validation and improvement of a variety of computer codes, including codes with 3D capabilities, for reactor safety analysis. The paper provides an overview of the completed and on-going research programmes performed in the PANDA facility in the different areas of application, including the main results and conclusions of the investigations. In particular, the advanced passive containment cooling system concept investigations for the SBWR, the ESBWR and the SWR1000 are presented in relation to various aspects, and the main findings are summarised. Finally, the goals, planned investigations and expected results of the on-going OECD project SETH-2 are presented. (authors)

  19. Large-Scale Cubic-Scaling Random Phase Approximation Correlation Energy Calculations Using a Gaussian Basis.

    Science.gov (United States)

    Wilhelm, Jan; Seewald, Patrick; Del Ben, Mauro; Hutter, Jürg

    2016-12-13

    We present an algorithm for computing the correlation energy in the random phase approximation (RPA) in a Gaussian basis requiring O(N³) operations and O(N²) memory. The method is based on the resolution of the identity (RI) with the overlap metric, a reformulation of RI-RPA in the Gaussian basis, imaginary time and imaginary frequency integration techniques, and the use of sparse linear algebra. Additional memory reduction without extra computations can be achieved by an iterative scheme that overcomes the memory bottleneck of canonical RPA implementations. We report a massively parallel implementation that is the key for the application to large systems. Finally, cubic-scaling RPA is applied to a thousand water molecules using a correlation-consistent triple-ζ quality basis.

  20. Multi-scale properties of large eddy simulations: correlations between resolved-scale velocity-field increments and subgrid-scale quantities

    Science.gov (United States)

    Linkmann, Moritz; Buzzicotti, Michele; Biferale, Luca

    2018-06-01

    We provide analytical and numerical results concerning multi-scale correlations between the resolved velocity field and the subgrid-scale (SGS) stress tensor in large eddy simulations (LES). Following previous studies for the Navier-Stokes equations, we derive the exact hierarchy of LES equations governing the spatio-temporal evolution of velocity structure functions of any order. The aim is to assess the influence of the subgrid model on inertial range intermittency. We provide a series of predictions, within the multifractal theory, for the scaling of correlations involving the SGS stress, and we compare them against numerical results from high-resolution Smagorinsky LES and from a priori filtered data generated from direct numerical simulations (DNS). We find that LES data generally agree very well with filtered DNS results and with the multifractal prediction for all leading terms in the balance equations. Discrepancies are measured for some of the sub-leading terms involving cross-correlations between resolved velocity increments and the SGS tensor or the SGS energy transfer, suggesting that there is room to improve the SGS modelling to further extend the inertial range properties for any fixed LES resolution.

  1. Low rank approximation methods for MR fingerprinting with large scale dictionaries.

    Science.gov (United States)

    Yang, Mingrui; Ma, Dan; Jiang, Yun; Hamilton, Jesse; Seiberlich, Nicole; Griswold, Mark A; McGivney, Debra

    2018-04-01

    This work proposes new low rank approximation approaches with significant memory savings for large-scale MR fingerprinting (MRF) problems. We introduce a compressed MRF with randomized singular value decomposition method to significantly reduce the memory required for calculating a low rank approximation of large MRF dictionaries. We further relax this requirement by exploiting the structure of MRF dictionaries in the randomized singular value decomposition space and fitting them to low-degree polynomials to generate high-resolution MRF parameter maps. In vivo 1.5T and 3T brain scan data are used to validate the approaches. T1, T2, and off-resonance maps are in good agreement with those of the standard MRF approach. Moreover, the memory savings are up to 1000-fold for the MRF fast imaging with steady-state precession sequence and more than 15-fold for the MRF balanced steady-state free precession sequence. The proposed compressed MRF with randomized singular value decomposition and dictionary fitting methods are memory-efficient low rank approximation methods, which can benefit the use of MRF in clinical settings. They also have great potential in large-scale MRF problems, such as problems considering multi-component MRF parameters or high resolution in the parameter space. Magn Reson Med 79:2392-2400, 2018. © 2017 International Society for Magnetic Resonance in Medicine.
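
    The randomized singular value decomposition at the heart of the compression step can be sketched in a few lines of numpy; the matrix sizes and rank below are hypothetical stand-ins for an MRF dictionary, which is highly compressible in practice.

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical stand-in for an MRF dictionary: time points x dictionary atoms,
# with rapidly decaying singular values (here exactly rank 10), which is what
# makes the low-rank projection effective.
m, n, true_rank = 500, 2000, 10
D = rng.normal(size=(m, true_rank)) @ rng.normal(size=(true_rank, n))

def randomized_svd(A, k, oversample=10):
    # Sketch the range of A with a random Gaussian test matrix, orthonormalize,
    # then take an exact SVD of the small projected matrix Q^T A.
    Omega = rng.normal(size=(A.shape[1], k + oversample))
    Q, _ = np.linalg.qr(A @ Omega)
    U_small, s, Vt = np.linalg.svd(Q.T @ A, full_matrices=False)
    return (Q @ U_small)[:, :k], s[:k], Vt[:k]

U, s, Vt = randomized_svd(D, k=10)
rel_err = np.linalg.norm(D - U @ np.diag(s) @ Vt) / np.linalg.norm(D)
print(rel_err)
```

    The full SVD of D is never formed: only the tall sketch A @ Omega and the small matrix Q^T A are decomposed, which is where the memory savings reported in the abstract come from.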

  2. BROAD ABSORPTION LINE VARIABILITY ON MULTI-YEAR TIMESCALES IN A LARGE QUASAR SAMPLE

    Energy Technology Data Exchange (ETDEWEB)

    Filiz Ak, N.; Brandt, W. N.; Schneider, D. P. [Department of Astronomy and Astrophysics, Pennsylvania State University, University Park, PA 16802 (United States); Hall, P. B. [Department of Physics and Astronomy, York University, 4700 Keele St., Toronto, Ontario, M3J 1P3 (Canada); Anderson, S. F. [Astronomy Department, University of Washington, Seattle, WA 98195 (United States); Hamann, F. [Department of Astronomy, University of Florida, Gainesville, FL 32611-2055 (United States); Lundgren, B. F. [Department of Astronomy, University of Wisconsin, Madison, WI 53706 (United States); Myers, Adam D. [Department of Physics and Astronomy, University of Wyoming, Laramie, WY 82071 (United States); Pâris, I. [Departamento de Astronomía, Universidad de Chile, Casilla 36-D, Santiago (Chile); Petitjean, P. [Université Paris 6, Institut d'Astrophysique de Paris, 75014, Paris (France); Ross, Nicholas P. [Lawrence Berkeley National Laboratory, 1 Cyclotron Road, Berkeley, CA 94720 (United States); Shen, Yue [Harvard-Smithsonian Center for Astrophysics, 60 Garden St., MS-51, Cambridge, MA 02138 (United States); York, Don, E-mail: nfilizak@astro.psu.edu [The University of Chicago, Department of Astronomy and Astrophysics, Chicago, IL 60637 (United States)

    2013-11-10

    We present a detailed investigation of the variability of 428 C IV and 235 Si IV broad absorption line (BAL) troughs identified in multi-epoch observations of 291 quasars by the Sloan Digital Sky Survey-I/II/III. These observations primarily sample rest-frame timescales of 1-3.7 yr, over which significant rearrangement of the BAL wind is expected. We derive a number of observational results on, e.g., the frequency of BAL variability, the velocity range over which BAL variability occurs, the primary observed form of BAL-trough variability, the dependence of BAL variability upon timescale, the frequency of BAL strengthening versus weakening, correlations between BAL variability and BAL-trough profiles, relations between C IV and Si IV BAL variability, coordinated multi-trough variability, and BAL variations as a function of quasar properties. We assess the implications of these observational results for quasar winds. Our results support models where most BAL absorption is formed within an order of magnitude of the wind-launching radius, although a significant minority of BAL troughs may arise on larger scales. We estimate an average lifetime for a BAL trough along our line of sight of a few thousand years. BAL disappearance and emergence events appear to be extremes of general BAL variability, rather than qualitatively distinct phenomena. We derive the parameters of a random-walk model for BAL EW variability, finding that this model can acceptably describe some key aspects of EW variability. The coordinated trough variability of BAL quasars with multiple troughs suggests that changes in 'shielding gas' may play a significant role in driving general BAL variability.
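
    The random-walk model for EW variability mentioned above can be illustrated with a toy simulation; the step size, baseline EW, and epoch sampling below are invented purely to show the square-root-of-time growth of |ΔEW| that such a model predicts.

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy random-walk model for equivalent-width (EW) variability: each trough's EW
# takes independent Gaussian steps between epochs, so the expected |delta EW|
# grows as the square root of the elapsed time. All numbers are hypothetical.
n_troughs, n_epochs, step = 2000, 64, 0.5
ew = 10.0 + np.cumsum(rng.normal(scale=step, size=(n_troughs, n_epochs)), axis=1)

lags = np.array([1, 4, 16])
spread = np.array([np.abs(ew[:, lag:] - ew[:, :-lag]).mean() for lag in lags])

# sqrt-of-time scaling: quadrupling the lag should roughly double mean |delta EW|.
ratios = spread[1:] / spread[:-1]
print(np.round(ratios, 2))
```

    Fitting the observed growth of EW differences against this sqrt(Δt) baseline is one way a random-walk description can be checked against multi-epoch data.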

  3. Creating Large Scale Database Servers

    International Nuclear Information System (INIS)

    Becla, Jacek

    2001-01-01

    The BaBar experiment at the Stanford Linear Accelerator Center (SLAC) is designed to perform a high precision investigation of the decays of the B-meson produced from electron-positron interactions. The experiment, started in May 1999, will generate approximately 300 TB/year of data for 10 years. All of the data will reside in Objectivity databases accessible via the Advanced Multi-threaded Server (AMS). To date, over 70 TB of data have been placed in Objectivity/DB, making it one of the largest databases in the world. Providing access to such a large quantity of data through a database server is a daunting task. A full-scale testbed environment had to be developed to tune various software parameters, and a fundamental change had to occur in the AMS architecture to allow it to scale past several hundred terabytes of data. Additionally, several protocol extensions had to be implemented to provide practical access to large quantities of data. This paper will describe the design of the database and the changes that we needed to make in the AMS for scalability reasons, and how the lessons we learned would be applicable to virtually any kind of database server seeking to operate in the Petabyte region.

  4. Creating Large Scale Database Servers

    Energy Technology Data Exchange (ETDEWEB)

    Becla, Jacek

    2001-12-14

    The BaBar experiment at the Stanford Linear Accelerator Center (SLAC) is designed to perform a high precision investigation of the decays of the B-meson produced from electron-positron interactions. The experiment, started in May 1999, will generate approximately 300TB/year of data for 10 years. All of the data will reside in Objectivity databases accessible via the Advanced Multi-threaded Server (AMS). To date, over 70TB of data have been placed in Objectivity/DB, making it one of the largest databases in the world. Providing access to such a large quantity of data through a database server is a daunting task. A full-scale testbed environment had to be developed to tune various software parameters and a fundamental change had to occur in the AMS architecture to allow it to scale past several hundred terabytes of data. Additionally, several protocol extensions had to be implemented to provide practical access to large quantities of data. This paper will describe the design of the database and the changes that we needed to make in the AMS for scalability reasons and how the lessons we learned would be applicable to virtually any kind of database server seeking to operate in the Petabyte region.

  5. Scalable multi-objective control for large scale water resources systems under uncertainty

    Science.gov (United States)

    Giuliani, Matteo; Quinn, Julianne; Herman, Jonathan; Castelletti, Andrea; Reed, Patrick

    2016-04-01

    The use of mathematical models to support the optimal management of environmental systems has expanded rapidly in recent years, driven by advances in scientific knowledge of natural processes, the efficiency of optimization techniques, and the availability of computational resources. However, ongoing changes in climate and society introduce additional challenges for controlling these systems, ultimately motivating the emergence of complex models to explore key causal relationships and dependencies on uncontrolled sources of variability. In this work, we contribute a novel implementation of the evolutionary multi-objective direct policy search (EMODPS) method for controlling environmental systems under uncertainty. The proposed approach combines direct policy search (DPS) with hierarchical parallelization of multi-objective evolutionary algorithms (MOEAs) and offers a threefold advantage: first, the DPS simulation-based optimization can be combined with any simulation model and does not constrain the modeled information, allowing the use of exogenous information in conditioning the decisions. Moreover, the combination of DPS and MOEAs prompts the generation of a Pareto-approximate set of solutions for up to 10 objectives, thus overcoming the decision biases produced by cognitive myopia, where narrow or restrictive definitions of optimality strongly limit the discovery of decision-relevant alternatives. Finally, the use of large-scale MOEA parallelization improves the ability of the designed solutions to handle the uncertainty due to severe natural variability. The proposed approach is demonstrated on a challenging water resources management problem represented by the optimal control of a network of four multipurpose water reservoirs in the Red River basin (Vietnam). As part of the medium-long term energy and food security national strategy, four large reservoirs have been constructed on the Red River tributaries, which are mainly operated for hydropower
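
    A minimal sketch of the direct policy search idea described above, with illustrative (not the authors') choices: the release decision of a single reservoir is a parameterized sum of Gaussian radial basis functions of storage, a mass-balance simulation scores each parameter vector on one objective (squared supply deficit), and a plain random search stands in for the hierarchically parallelized MOEA.

```python
import math, random

def rbf_policy(params, storage):
    """DPS: the decision is a parameterized function of the observed state;
    the optimizer tunes the parameters, never the decisions directly."""
    release = 0.0
    for c, b, w in zip(params[0::3], params[1::3], params[2::3]):
        release += w * math.exp(-((storage - c) / b) ** 2)
    return max(release, 0.0)

def total_deficit(params, inflows, capacity=100.0, demand=20.0):
    """Mass-balance simulation of one reservoir over a year of inflows;
    returns the squared water-supply deficit (one objective to minimize)."""
    storage, deficit = 50.0, 0.0
    for q in inflows:
        release = min(rbf_policy(params, storage), storage + q)
        storage = min(storage + q - release, capacity)
        deficit += max(demand - release, 0.0) ** 2
    return deficit

rng = random.Random(0)
inflows = [rng.uniform(5.0, 35.0) for _ in range(365)]

# Random search stands in here for the MOEA; two RBFs = 6 parameters.
best, best_cost = None, float("inf")
for _ in range(2000):
    cand = [x for _ in range(2)
            for x in (rng.uniform(0, 100),   # centre
                      rng.uniform(1, 50),    # radius
                      rng.uniform(0, 40))]   # weight
    cost = total_deficit(cand, inflows)
    if cost < best_cost:
        best, best_cost = cand, cost

print(f"no-release deficit: {total_deficit([0, 1, 0], inflows):.0f}")  # = 365 * 20**2
print(f"best policy found:  {best_cost:.0f}")
```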

  6. Exploring Hardware Support For Scaling Irregular Applications on Multi-node Multi-core Architectures

    Energy Technology Data Exchange (ETDEWEB)

    Secchi, Simone; Ceriani, Marco; Tumeo, Antonino; Villa, Oreste; Palermo, Gianluca; Raffo, Luigi

    2013-06-05

    With the recent emergence of large-scale knowledge discovery, data mining and social network analysis, irregular applications have gained renewed interest. Classic cache-based high-performance architectures do not provide optimal performance with such workloads, mainly due to the very low spatial and temporal locality of the irregular control and memory access patterns. In this paper, we present a multi-node, multi-core, fine-grained multi-threaded shared-memory system architecture specifically designed for the execution of large-scale irregular applications, built on top of three pillars that we believe are fundamental to support these workloads. First, we offer transparent hardware support for Partitioned Global Address Space (PGAS) to provide a large globally shared address space with no software library overhead. Second, we employ multi-threaded multi-core processing nodes to achieve the necessary latency tolerance required by accessing global memory, which potentially resides in a remote node. Finally, we devise hardware support for inter-thread synchronization on the whole global address space. We first model performance using an analytical model that takes into account the main architecture and application characteristics. We then describe the hardware design of the proposed custom architectural building blocks that provide support for the above-mentioned three pillars. Finally, we present a limited-scale evaluation of the system on a multi-board FPGA prototype with typical irregular kernels and benchmarks. The experimental evaluation demonstrates the architecture's performance scalability for different configurations of the whole system.

  7. Deep multi-scale convolutional neural network for hyperspectral image classification

    Science.gov (United States)

    Zhang, Feng-zhe; Yang, Xia

    2018-04-01

    In this paper, we propose a multi-scale convolutional neural network for the hyperspectral image classification task. Firstly, compared with conventional convolution, we utilize multi-scale convolutions, which possess larger receptive fields, to extract spectral features of the hyperspectral image. We design a deep neural network with a multi-scale convolution layer that contains three different convolution kernel sizes. Secondly, to avoid overfitting of the deep neural network, dropout is utilized, which randomly deactivates neurons and helps improve classification accuracy. In addition, deep-learning techniques such as the ReLU activation are also utilized. We conduct experiments on the University of Pavia and Salinas datasets and obtain better classification accuracy compared with other methods.
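
    A minimal numpy sketch (illustrative, not the paper's architecture) of the multi-scale convolution idea: filters of three different kernel sizes run over the same spectral vector and their ReLU feature maps are concatenated, so that larger kernels contribute larger receptive fields.

```python
import numpy as np

def conv1d_same(x, kernel):
    """'Same'-padded 1-D convolution of a spectral vector with one kernel."""
    pad = len(kernel) // 2
    xp = np.pad(x, pad)
    return np.array([np.dot(xp[i:i + len(kernel)], kernel)
                     for i in range(len(x))])

def multi_scale_conv(x, kernel_sizes=(3, 5, 7), n_filters=4, seed=0):
    """Multi-scale convolution layer: random filters of several sizes run
    on the same input and their ReLU feature maps are stacked, so each
    kernel size contributes a different receptive field."""
    rng = np.random.default_rng(seed)
    maps = []
    for k in kernel_sizes:
        for _ in range(n_filters):
            w = rng.normal(0.0, 1.0 / np.sqrt(k), size=k)
            maps.append(np.maximum(conv1d_same(x, w), 0.0))  # ReLU
    return np.stack(maps)  # (len(kernel_sizes) * n_filters, len(x))

spectrum = np.sin(np.linspace(0, 6 * np.pi, 103))   # toy 103-band pixel
features = multi_scale_conv(spectrum)
print(features.shape)  # (12, 103)
```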

  8. Comparison of Multi-Scale Digital Elevation Models for Defining Waterways and Catchments Over Large Areas

    Science.gov (United States)

    Harris, B.; McDougall, K.; Barry, M.

    2012-07-01

    Digital Elevation Models (DEMs) allow for the efficient and consistent creation of waterways and catchment boundaries over large areas. Studies of waterway delineation from DEMs are usually undertaken over small or single catchment areas due to the nature of the problems being investigated. Improvements in Geographic Information Systems (GIS) techniques, software, hardware and data allow for analysis of larger data sets and also facilitate a consistent tool for the creation and analysis of waterways over extensive areas. However, rarely are they developed over large regional areas because of the lack of available raw data sets and the amount of work required to create the underlying DEMs. This paper examines the definition of waterways and catchments over an area of approximately 25,000 km2 to establish the optimal DEM scale required for waterway delineation over large regional projects. The comparative study analysed multi-scale DEMs over two test areas (Wivenhoe catchment, 543 km2 and a detailed 13 km2 within the Wivenhoe catchment) including various data types, scales, quality, and variable catchment input parameters. Historic and available DEM data were compared to high resolution Lidar based DEMs to assess variations in the formation of stream networks. The results identified that, particularly in areas of high elevation change, DEMs at 20 m cell size created from broad scale 1:25,000 data (combined with more detailed data or manual delineation in flat areas) are adequate for the creation of waterways and catchments at a regional scale.
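
    The standard way such delineation works (a sketch of the common D8 method, not necessarily the paper's exact GIS toolchain) is: each DEM cell drains to its steepest-descent neighbour, and cells whose accumulated upstream area exceeds a threshold are classified as waterways.

```python
import numpy as np

# 8 neighbour offsets (D8) and their distances (diagonals are longer).
OFFSETS = [(-1,-1),(-1,0),(-1,1),(0,-1),(0,1),(1,-1),(1,0),(1,1)]
DIST = [2**0.5, 1, 2**0.5, 1, 1, 2**0.5, 1, 2**0.5]

def d8_flow_dir(dem):
    """For each cell, the index of its steepest-descent D8 neighbour
    (or -1 for a pit/outlet with no lower neighbour)."""
    rows, cols = dem.shape
    fdir = np.full(dem.shape, -1, dtype=int)
    for r in range(rows):
        for c in range(cols):
            best, best_slope = -1, 0.0
            for i, (dr, dc) in enumerate(OFFSETS):
                rr, cc = r + dr, c + dc
                if 0 <= rr < rows and 0 <= cc < cols:
                    slope = (dem[r, c] - dem[rr, cc]) / DIST[i]
                    if slope > best_slope:
                        best, best_slope = i, slope
            fdir[r, c] = best
    return fdir

def flow_accumulation(dem):
    """Number of cells draining through each cell; cells above a chosen
    accumulation threshold form the stream network."""
    fdir = d8_flow_dir(dem)
    rows, cols = dem.shape
    acc = np.ones(dem.shape, dtype=int)
    for idx in np.argsort(dem, axis=None)[::-1]:  # highest cells first
        r, c = divmod(int(idx), cols)
        if fdir[r, c] >= 0:
            dr, dc = OFFSETS[fdir[r, c]]
            acc[r + dr, c + dc] += acc[r, c]
    return acc

dem = np.array([[5., 4., 3.],
                [4., 3., 2.],
                [3., 2., 1.]])
print(flow_accumulation(dem))  # outlet corner accumulates all 9 cells
```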

  9. Large-scale hydrology in Europe : observed patterns and model performance

    Energy Technology Data Exchange (ETDEWEB)

    Gudmundsson, Lukas

    2011-06-15

    In a changing climate, terrestrial water storages are of great interest as water availability impacts key aspects of ecosystem functioning. Thus, a better understanding of the variations of wet and dry periods will contribute to fully grasping processes of the earth system such as nutrient cycling and vegetation dynamics. Currently, river runoff from small, nearly natural catchments is one of the few variables of the terrestrial water balance that is regularly monitored with detailed spatial and temporal coverage on large scales. River runoff therefore provides a foundation to approach European hydrology with respect to observed patterns on large scales, and with regard to the ability of models to capture these. The analysis of observed river flow from small catchments focused on the identification and description of spatial patterns of simultaneous temporal variations of runoff. These are dominated by large-scale variations of climatic variables but also altered by catchment processes. It was shown that time series of annual low, mean and high flows follow the same atmospheric drivers. The observation that high flows are more closely coupled to large-scale atmospheric drivers than low flows indicates the increasing influence of catchment properties on runoff under dry conditions. Further, it was shown that the low-frequency variability of European runoff is dominated by two opposing centres of simultaneous variations, such that dry years in the north are accompanied by wet years in the south. Large-scale hydrological models are simplified representations of our current perception of the terrestrial water balance on large scales. Quantification of the models' strengths and weaknesses is the prerequisite for a reliable interpretation of simulation results. Model evaluations may also help detect shortcomings in model assumptions and thus enable a refinement of the current perception of hydrological systems.
The ability of a multi model ensemble of nine large-scale

  10. COMPARISON OF MULTI-SCALE DIGITAL ELEVATION MODELS FOR DEFINING WATERWAYS AND CATCHMENTS OVER LARGE AREAS

    Directory of Open Access Journals (Sweden)

    B. Harris

    2012-07-01

    Digital Elevation Models (DEMs) allow for the efficient and consistent creation of waterways and catchment boundaries over large areas. Studies of waterway delineation from DEMs are usually undertaken over small or single catchment areas due to the nature of the problems being investigated. Improvements in Geographic Information Systems (GIS) techniques, software, hardware and data allow for analysis of larger data sets and also facilitate a consistent tool for the creation and analysis of waterways over extensive areas. However, rarely are they developed over large regional areas because of the lack of available raw data sets and the amount of work required to create the underlying DEMs. This paper examines definition of waterways and catchments over an area of approximately 25,000 km2 to establish the optimal DEM scale required for waterway delineation over large regional projects. The comparative study analysed multi-scale DEMs over two test areas (Wivenhoe catchment, 543 km2 and a detailed 13 km2 within the Wivenhoe catchment) including various data types, scales, quality, and variable catchment input parameters. Historic and available DEM data was compared to high resolution Lidar based DEMs to assess variations in the formation of stream networks. The results identified that, particularly in areas of high elevation change, DEMs at 20 m cell size created from broad scale 1:25,000 data (combined with more detailed data or manual delineation in flat areas) are adequate for the creation of waterways and catchments at a regional scale.

  11. Multi-year expansion planning of large transmission networks

    Energy Technology Data Exchange (ETDEWEB)

    Binato, S; Oliveira, G C [Centro de Pesquisas de Energia Eletrica (CEPEL), Rio de Janeiro, RJ (Brazil)

    1994-12-31

    This paper describes a model for multi-year transmission network expansion to be used in long-term system planning. The network is represented by a linearized (DC) power flow and, for each year, operation costs are evaluated by a linear programming (LP) based algorithm that provides sensitivity indices for circuit reinforcements. A backward/forward approach is proposed to devise an expansion plan over the study period. A case study with the southeastern Brazilian system is presented and discussed. (author) 18 refs., 5 figs., 1 tab.
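
    The linearized (DC) power-flow representation mentioned above reduces, per year, to a linear system: net bus injections P relate to voltage angles theta through the susceptance matrix B, and branch flows follow from angle differences. A minimal sketch on a toy 3-bus network (illustrative only, not CEPEL's planning model):

```python
import numpy as np

def dc_power_flow(n_bus, branches, injections, slack=0):
    """Linearized (DC) power flow: solve B * theta = P for bus angles,
    then branch flows f_ij = (theta_i - theta_j) / x_ij.
    branches: list of (i, j, x) with reactance x in p.u."""
    B = np.zeros((n_bus, n_bus))
    for i, j, x in branches:
        b = 1.0 / x
        B[i, i] += b; B[j, j] += b
        B[i, j] -= b; B[j, i] -= b
    # Fix the slack-bus angle at zero and solve the reduced system.
    keep = [k for k in range(n_bus) if k != slack]
    theta = np.zeros(n_bus)
    theta[keep] = np.linalg.solve(B[np.ix_(keep, keep)],
                                  np.asarray(injections, float)[keep])
    return {(i, j): (theta[i] - theta[j]) / x for i, j, x in branches}

# Toy 3-bus network: generator at bus 0, equal loads at buses 1 and 2.
branches = [(0, 1, 0.1), (0, 2, 0.1), (1, 2, 0.1)]
flows = dc_power_flow(3, branches, injections=[1.0, -0.5, -0.5])
print(flows)  # by symmetry: 0.5 p.u. on each line from bus 0, 0 on line 1-2
```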

  12. Large-scale building energy efficiency retrofit: Concept, model and control

    International Nuclear Information System (INIS)

    Wu, Zhou; Wang, Bo; Xia, Xiaohua

    2016-01-01

    BEER (Building energy efficiency retrofit) projects are initiated in many nations and regions around the world. Existing studies of BEER focus on modeling and planning based on one building and a one-year retrofit period, which cannot be applied to large BEER projects with multiple buildings and multi-year retrofits. In this paper, the large-scale BEER problem is defined in a general TBT (time-building-technology) framework, which fits the essential requirements of real-world projects. The large-scale BEER problem is newly studied via a control approach rather than the optimization approach commonly used before. Optimal control is proposed to design the optimal retrofitting strategy in terms of maximal energy savings and maximal NPV (net present value). The designed strategy changes dynamically along the dimensions of time, building and technology. The TBT framework and the optimal control approach are verified on a large BEER project, and results indicate that promising energy and cost savings can be achieved in the general TBT framework. - Highlights: • Energy efficiency retrofit of many buildings is studied. • A TBT (time-building-technology) framework is proposed. • The control system of the large-scale BEER is modeled. • The optimal retrofitting strategy is obtained.
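
    A toy sketch of the TBT (time-building-technology) decision space: retrofit options are indexed by building and technology, scheduled across years under an annual budget, and ranked by net present value over the remaining horizon. The buildings, costs, discount rate and greedy rule are invented for illustration; the paper solves this with optimal control, not greedy selection.

```python
def npv(annual_saving, cost, years, rate=0.08):
    """Net present value of one retrofit: discounted savings minus cost."""
    return sum(annual_saving / (1 + rate) ** t
               for t in range(1, years + 1)) - cost

# Hypothetical (building, technology) -> (upfront cost, annual saving).
options = {
    ("B1", "lighting"):   (10_000, 3_000),
    ("B1", "insulation"): (40_000, 6_000),
    ("B2", "lighting"):   (12_000, 3_500),
    ("B2", "hvac"):       (45_000, 11_000),
}

horizon, budget_per_year = 10, 50_000
remaining = dict(options)
plan = {}  # year -> retrofits funded that year

# Greedy sweep over the TBT grid: each year, fund the remaining options
# with the best NPV over the years left, until the budget runs out.
for year in range(1, horizon + 1):
    budget, chosen = budget_per_year, []
    years_left = horizon - year + 1
    for opt in sorted(remaining,
                      key=lambda o: npv(remaining[o][1], remaining[o][0],
                                        years_left),
                      reverse=True):
        cost, saving = remaining[opt]
        if cost <= budget and npv(saving, cost, years_left) > 0:
            budget -= cost
            chosen.append(opt)
    for opt in chosen:
        del remaining[opt]
    if chosen:
        plan[year] = chosen

print(plan)
```

Note how the schedule is dynamic across years: the high-NPV HVAC retrofit exhausts the first year's budget, pushing the lighting retrofits to year two, while the insulation option's NPV never justifies its cost.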

  13. First Mile Challenges for Large-Scale IoT

    KAUST Repository

    Bader, Ahmed

    2017-03-16

    The Internet of Things is large-scale by nature. This is manifested not only by the large number of connected devices, but also by the sheer scale of spatial traffic intensity that must be accommodated, primarily in the uplink direction. To that end, cellular networks are indeed a strong first-mile candidate to accommodate the data tsunami to be generated by the IoT. However, in the cellular paradigm IoT devices are required to undergo random access procedures as a precursor to resource allocation. Such procedures impose a major bottleneck that hinders cellular networks' ability to support large-scale IoT. In this article, we shed light on the random access dilemma and present a case study based on experimental data as well as system-level simulations. Accordingly, a case is built for the latent need to revisit random access procedures. A call for action is motivated by listing a few potential remedies and recommendations.
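
    The random-access bottleneck can be illustrated with a simple contention model (an illustrative sketch, not the article's experimental setup): each device independently picks one of K preambles, and an attempt succeeds only if no other device picked the same one, so throughput collapses once the device count grows past the preamble pool.

```python
import random

def successful_attempts(n_devices, n_preambles, trials=2000, seed=1):
    """Monte Carlo estimate of the mean number of devices that succeed in
    one random-access opportunity: a device succeeds only if no other
    device picked the same preamble (otherwise the attempts collide)."""
    rng = random.Random(seed)
    total = 0
    for _ in range(trials):
        counts = [0] * n_preambles
        for _ in range(n_devices):
            counts[rng.randrange(n_preambles)] += 1
        total += sum(1 for c in counts if c == 1)
    return total / trials

# With a 54-preamble contention pool (a commonly cited LTE configuration),
# mean successes per opportunity collapse as the device count grows --
# the large-scale-IoT bottleneck the article describes.
for n in (10, 54, 200):
    print(n, round(successful_attempts(n, 54), 1))
```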

  14. Random number generators for large-scale parallel Monte Carlo simulations on FPGA

    Science.gov (United States)

    Lin, Y.; Wang, F.; Liu, B.

    2018-05-01

    Through parallelization, field-programmable gate arrays (FPGAs) can achieve unprecedented speeds in large-scale parallel Monte Carlo (LPMC) simulations. FPGAs present both new constraints and new opportunities for the implementation of random number generators (RNGs), which are key elements of any Monte Carlo (MC) simulation system. Using empirical and application-based tests, this study evaluates all four RNGs used in previous FPGA-based MC studies, together with newly proposed FPGA implementations of two well-known high-quality RNGs suitable for LPMC studies on FPGA. One of the newly proposed FPGA implementations, a parallel version of the additive lagged Fibonacci generator (parallel ALFG), is found to be the best among the evaluated RNGs in fulfilling the needs of LPMC simulations on FPGA.
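
    A software sketch of the additive lagged Fibonacci generator (ALFG) family evaluated above, with the common lags (24, 55). The parallelization shown, independently seeded lag tables per stream as one might replicate per FPGA pipeline, is one standard approach and an assumption here, not necessarily the paper's exact scheme.

```python
import random

class ALFG:
    """Additive lagged Fibonacci generator:
        x_n = (x_{n-24} + x_{n-55}) mod 2^32
    Each parallel stream carries its own independently seeded lag table,
    so streams advance without communicating (the seeding scheme here is
    an illustrative assumption)."""

    def __init__(self, seed, j=24, k=55, m=32):
        self.j, self.k, self.mask = j, k, (1 << m) - 1
        seeder = random.Random(seed)
        self.state = [seeder.getrandbits(m) for _ in range(k)]
        self.state[0] |= 1  # at least one odd word for maximal period

    def next(self):
        x = (self.state[-self.j] + self.state[-self.k]) & self.mask
        self.state = self.state[1:] + [x]
        return x

streams = [ALFG(seed=s) for s in range(4)]        # 4 parallel streams
samples = [[g.next() / 2**32 for _ in range(10_000)]
           for g in streams]
means = [sum(s) / len(s) for s in samples]
print([round(m, 2) for m in means])  # each mean should sit near 0.5
```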

  15. Multi-scale magnetic field intermittence in the plasma sheet

    Directory of Open Access Journals (Sweden)

    Z. Vörös

    2003-09-01

    This paper demonstrates that intermittent magnetic field fluctuations in the plasma sheet exhibit transitory, localized, and multi-scale features. We propose a multifractal-based algorithm, which quantifies intermittence on the basis of the statistical distribution of the "strength of burstiness", estimated within a sliding window. Interesting multi-scale phenomena observed by the Cluster spacecraft include large-scale motion of the current sheet and bursty bulk flow associated turbulence, interpreted as a cross-scale coupling (CSC) process. Key words: Magnetospheric physics (magnetotail; plasma sheet) – Space plasma physics (turbulence)
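
    One simple sliding-window measure of burstiness (a stand-in proxy; the paper's actual measure is multifractal) is the flatness of field increments: Gaussian fluctuations give values near 3, while localized intermittent bursts drive the flatness well above 3.

```python
import numpy as np

def sliding_flatness(signal, scale, window):
    """Flatness (normalized 4th moment) of increments at a given scale,
    in half-overlapping sliding windows. Gaussian noise gives ~3;
    intermittent bursts push the value well above 3."""
    inc = signal[scale:] - signal[:-scale]
    out = []
    for start in range(0, len(inc) - window + 1, window // 2):
        w = inc[start:start + window]
        out.append(np.mean(w**4) / np.mean(w**2) ** 2)
    return np.array(out)

rng = np.random.default_rng(7)
noise = rng.normal(0.0, 1.0, 4000)        # quiet Gaussian background
bursty_noise = noise.copy()
bursty_noise[2000:2200] *= 8.0            # one localized burst

f_quiet = sliding_flatness(np.cumsum(noise), scale=4, window=512)
f_bursty = sliding_flatness(np.cumsum(bursty_noise), scale=4, window=512)
print(round(float(f_quiet.max()), 1), round(float(f_bursty.max()), 1))
```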

  16. Large-scale, high-performance and cloud-enabled multi-model analytics experiments in the context of the Earth System Grid Federation

    Science.gov (United States)

    Fiore, S.; Płóciennik, M.; Doutriaux, C.; Blanquer, I.; Barbera, R.; Williams, D. N.; Anantharaj, V. G.; Evans, B. J. K.; Salomoni, D.; Aloisio, G.

    2017-12-01

    The increased model resolution in the development of comprehensive Earth System Models is rapidly leading to very large climate simulation outputs that pose significant scientific data management challenges in terms of data sharing, processing, analysis, visualization, preservation, curation, and archiving. Large-scale global experiments for Climate Model Intercomparison Projects (CMIP) have led to the development of the Earth System Grid Federation (ESGF), a federated data infrastructure which has been serving the CMIP5 experiment, providing access to 2 PB of data for the IPCC Assessment Reports. In such a context, running a multi-model data analysis experiment is very challenging, as it requires the availability of a large amount of data related to multiple climate model simulations and scientific data management tools for large-scale data analytics. To address these challenges, a case study on climate model intercomparison data analysis has been defined and implemented in the context of the EU H2020 INDIGO-DataCloud project. The case study has been tested and validated on CMIP5 datasets, in the context of a large-scale, international testbed involving several ESGF sites (LLNL, ORNL and CMCC), one orchestrator site (PSNC) and one more hosting INDIGO PaaS services (UPV). Additional ESGF sites, such as NCI (Australia) and a couple more in Europe, are also joining the testbed. The added value of the proposed solution is summarized in the following: it implements a server-side paradigm which limits data movement; it relies on a High-Performance Data Analytics (HPDA) stack to address performance; it exploits the INDIGO PaaS layer to support flexible, dynamic and automated deployment of software components; it provides user-friendly web access based on the INDIGO Future Gateway; and finally it integrates, complements and extends the support currently available through ESGF. Overall, it provides a new "tool" for climate scientists to run multi-model experiments.

  17. Financial management of a large multisite randomized clinical trial.

    Science.gov (United States)

    Sheffet, Alice J; Flaxman, Linda; Tom, MeeLee; Hughes, Susan E; Longbottom, Mary E; Howard, Virginia J; Marler, John R; Brott, Thomas G

    2014-08-01

    The Carotid Revascularization Endarterectomy versus Stenting Trial (CREST) received five years' funding ($21,112,866) from the National Institutes of Health to compare carotid stenting to surgery for stroke prevention in 2500 randomized participants at 40 sites. Herein we evaluate the change in the CREST budget from a fixed to variable-cost model and recommend strategies for the financial management of large-scale clinical trials. Projections of the original grant's fixed-cost model were compared to the actual costs of the revised variable-cost model. The original grant's fixed-cost budget included salaries, fringe benefits, and other direct and indirect costs. For the variable-cost model, the costs were actual payments to the clinical sites and core centers based upon actual trial enrollment. We compared annual direct and indirect costs and per-patient cost for both the fixed and variable models. Differences between clinical site and core center expenditures were also calculated. Using a variable-cost budget for clinical sites, funding was extended by no-cost extension from five to eight years. Randomizing sites tripled from 34 to 109. Of the 2500 targeted sample size, 138 (5.5%) were randomized during the first five years and 1387 (55.5%) during the no-cost extension. The actual per-patient costs of the variable model were 9% ($13,845) of the projected per-patient costs ($152,992) of the fixed model. Performance-based budgets conserve funding, promote compliance, and allow for additional sites at modest additional cost. Costs of large-scale clinical trials can thus be reduced through effective management without compromising scientific integrity. © 2014 The Authors. International Journal of Stroke © 2014 World Stroke Organization.
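
    The headline figures above can be checked directly from the record's own numbers (a quick arithmetic sketch; all values are taken from the abstract, none are new):

```python
# Figures taken from the abstract above (USD).
total_award = 21_112_866          # five-year fixed-cost award
projected_per_patient = 152_992   # fixed-cost model projection
actual_per_patient = 13_845       # variable-cost model actual

randomized_first_5y = 138
randomized_extension = 1_387
target = 2_500

print(f"{actual_per_patient / projected_per_patient:.0%}")   # 9%
print(f"{randomized_first_5y / target:.1%}")                 # 5.5%
print(f"{randomized_extension / target:.1%}")                # 55.5%
```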

  18. On Modeling Large-Scale Multi-Agent Systems with Parallel, Sequential and Genuinely Asynchronous Cellular Automata

    International Nuclear Information System (INIS)

    Tosic, P.T.

    2011-01-01

    We study certain types of Cellular Automata (CA) viewed as an abstraction of large-scale Multi-Agent Systems (MAS). We argue that the classical CA model needs to be modified in several important respects in order to become a relevant and sufficiently general model for large-scale MAS, so that the generalized model can capture many important MAS properties at the level of agent ensembles and their long-term collective behavior patterns. We specifically focus on the issue of inter-agent communication in CA, and propose sequential cellular automata (SCA) as the first step, and genuinely asynchronous cellular automata (ACA) as the ultimate deterministic CA-based abstract models for large-scale MAS made of simple reactive agents. We first formulate deterministic and nondeterministic versions of sequential CA, and then summarize some interesting configuration space properties (i.e., possible behaviors) of a restricted class of sequential CA. In particular, we compare and contrast those properties of sequential CA with the corresponding properties of the classical (that is, parallel and perfectly synchronous) CA with the same restricted class of update rules. We analytically demonstrate the failure of the studied sequential CA models to simulate all possible behaviors of perfectly synchronous parallel CA, even for a very restricted class of non-linear totalistic node update rules. The lesson learned is that the interleaving semantics of concurrency, when applied to sequential CA, is not refined enough to adequately capture the perfect synchrony of parallel CA updates. Last but not least, we outline what would be an appropriate CA-like abstraction for large-scale distributed computing insofar as the inter-agent communication model is concerned, and in that context we propose genuinely asynchronous CA. (author)
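
    The parallel-versus-sequential distinction is easy to see concretely (a toy sketch with an invented rule, not the paper's formal construction): under synchronous update every cell reads the old configuration, while under interleaving each cell sees its left neighbour's already-updated state, and the two semantics diverge even for a simple totalistic rule.

```python
def step_parallel(cells, rule):
    """Perfectly synchronous update: every cell reads the old configuration."""
    n = len(cells)
    return [rule(cells[(i - 1) % n], cells[i], cells[(i + 1) % n])
            for i in range(n)]

def step_sequential(cells, rule):
    """Interleaved update: cells fire one at a time, left to right, each
    seeing its neighbours' possibly already-updated states."""
    cells = list(cells)
    n = len(cells)
    for i in range(n):
        cells[i] = rule(cells[(i - 1) % n], cells[i], cells[(i + 1) % n])
    return cells

# A simple non-linear totalistic rule: a cell becomes 1 iff exactly one
# cell in its three-cell neighbourhood is 1.
rule = lambda l, c, r: 1 if l + c + r == 1 else 0

start = [0, 0, 1, 0, 0, 0]
print(step_parallel(start, rule))    # [0, 1, 1, 1, 0, 0]
print(step_sequential(start, rule))  # [0, 1, 0, 0, 0, 0] -- they diverge
```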

  19. A Multi-scale Modeling System with Unified Physics to Study Precipitation Processes

    Science.gov (United States)

    Tao, W. K.

    2017-12-01

    In recent years, exponentially increasing computer power has extended Cloud Resolving Model (CRM) integrations from hours to months, and the number of computational grid points from less than a thousand to close to ten million. Three-dimensional models are now more prevalent. Much attention is devoted to precipitating cloud systems where the crucial 1-km scales are resolved in horizontal domains as large as 10,000 km in two dimensions, and 1,000 x 1,000 km2 in three dimensions. Cloud-resolving models now provide statistical information useful for developing more realistic, physically based parameterizations for climate models and numerical weather prediction (NWP) models. It is also expected that NWP and mesoscale models can be run at grid sizes similar to cloud-resolving models through nesting techniques. Recently, a multi-scale modeling system with unified physics was developed at NASA Goddard. It consists of (1) a cloud-resolving model (the Goddard Cumulus Ensemble model, GCE), (2) a regional-scale model (the NASA unified Weather Research and Forecasting model, WRF), and (3) a coupled CRM and global model (the Goddard Multi-scale Modeling Framework, MMF). The same microphysical processes, long- and short-wave radiative transfer, land processes, and the explicit cloud-radiation and cloud-land surface interactive processes are applied in this multi-scale modeling system. This modeling system has been coupled with a multi-satellite simulator to use NASA high-resolution satellite data to identify the strengths and weaknesses of cloud and precipitation processes simulated by the model. In this talk, a review of developments and applications of the multi-scale modeling system will be presented. In particular, results from using the multi-scale modeling system to study precipitation processes and their sensitivity to model resolution and microphysics schemes will be presented. Also, the use of the multi-satellite simulator to improve simulated precipitation processes will be discussed.

  20. Multi-format all-optical processing based on a large-scale, hybridly integrated photonic circuit.

    Science.gov (United States)

    Bougioukos, M; Kouloumentas, Ch; Spyropoulou, M; Giannoulis, G; Kalavrouziotis, D; Maziotis, A; Bakopoulos, P; Harmon, R; Rogers, D; Harrison, J; Poustie, A; Maxwell, G; Avramopoulos, H

    2011-06-06

    We investigate through numerical studies and experiments the performance of a large-scale, silica-on-silicon photonic integrated circuit for multi-format regeneration and wavelength conversion. The circuit encompasses a monolithically integrated array of four SOAs inside two parallel Mach-Zehnder structures, four delay interferometers and a large number of silica waveguides and couplers. Exploiting phase-incoherent techniques, the circuit is capable of processing OOK signals at variable bit rates, DPSK signals at 22 or 44 Gb/s and DQPSK signals at 44 Gbaud. Simulation studies reveal the wavelength-conversion potential of the circuit, with enhanced regenerative capabilities for OOK and DPSK modulation formats and acceptable quality degradation for the DQPSK format. Regeneration of 22 Gb/s OOK signals with amplified spontaneous emission (ASE) noise and of DPSK data signals degraded with amplitude, phase and ASE noise is experimentally validated, demonstrating a power penalty improvement of up to 1.5 dB.

  1. Amplification of large-scale magnetic field in nonhelical magnetohydrodynamics

    KAUST Repository

    Kumar, Rohit

    2017-08-11

    It is typically assumed that the kinetic and magnetic helicities play a crucial role in the growth of large-scale dynamo. In this paper, we demonstrate that helicity is not essential for the amplification of large-scale magnetic field. For this purpose, we perform nonhelical magnetohydrodynamic (MHD) simulation, and show that the large-scale magnetic field can grow in nonhelical MHD when random external forcing is employed at scale 1/10 the box size. The energy fluxes and shell-to-shell transfer rates computed using the numerical data show that the large-scale magnetic energy grows due to the energy transfers from the velocity field at the forcing scales.

  2. Large Scale Document Inversion using a Multi-threaded Computing System.

    Science.gov (United States)

    Jung, Sungbo; Chang, Dar-Jen; Park, Juw Won

    2017-06-01

    Current microprocessor architecture is moving towards multi-core/multi-threaded systems. This trend has led to a surge of interest in using multi-threaded computing devices, such as the Graphics Processing Unit (GPU), for general purpose computing. We can utilize the GPU in computation as a massive parallel coprocessor because the GPU consists of multiple cores. The GPU is also an affordable, attractive, and user-programmable commodity. Nowadays, vast amounts of information are flooding into the digital domain around the world. Huge volumes of data, such as digital libraries, social networking services, e-commerce product data, reviews, etc., are produced or collected every moment, with dramatic growth in size. Although the inverted index is a useful data structure that can be used for full-text searches or document retrieval, a large number of documents will require a tremendous amount of time to create the index. The performance of document inversion can be improved by multi-threaded or multi-core GPUs. Our approach is to implement a linear-time, hash-based, single program multiple data (SPMD) document inversion algorithm on the NVIDIA GPU/CUDA programming platform, utilizing the huge computational power of the GPU to develop high-performance solutions for document indexing. Our proposed parallel document inversion system shows 2-3 times faster performance than a sequential system on two different test datasets from PubMed abstracts and e-commerce product reviews. CCS Concepts: • Information systems ➝ Information retrieval; • Computing methodologies ➝ Massively parallel and high-performance simulations.
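
    A sequential sketch of the hash-based inversion the abstract describes (a Python stand-in for the CUDA kernel; on the GPU each document or chunk would be processed by its own thread before the per-term postings are merged):

```python
from collections import defaultdict

def invert(documents):
    """Build an inverted index: term -> sorted list of (doc_id, positions).
    One linear pass over each document, hashing terms into the index --
    the same per-document work an SPMD thread would do in parallel."""
    index = defaultdict(dict)
    for doc_id, text in enumerate(documents):
        for pos, term in enumerate(text.lower().split()):
            index[term].setdefault(doc_id, []).append(pos)
    return {term: sorted(postings.items())
            for term, postings in index.items()}

docs = ["GPU computing accelerates document inversion",
        "inverted index structures support full text searches",
        "document retrieval uses the inverted index"]
index = invert(docs)
print(index["document"])  # [(0, [3]), (2, [0])]
print(index["inverted"])  # [(1, [0]), (2, [4])]
```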

  3. The Adaptive Multi-scale Simulation Infrastructure

    Energy Technology Data Exchange (ETDEWEB)

    Tobin, William R. [Rensselaer Polytechnic Inst., Troy, NY (United States)

    2015-09-01

    The Adaptive Multi-scale Simulation Infrastructure (AMSI) is a set of libraries and tools developed to support the development, implementation, and execution of general multimodel simulations. Using a minimal set of simulation metadata, AMSI allows existing single-scale simulations to be adapted for use in multi-scale simulations with minimal intrusion. Support for dynamic runtime operations such as single- and multi-scale adaptive properties is a key focus of AMSI. Particular attention has been paid to the development of scale-sensitive load-balancing operations that allow single-scale simulations incorporated into a multi-scale simulation using AMSI to use standard load-balancing operations without affecting the integrity of the overall multi-scale simulation.

  4. Use of electronic healthcare records in large-scale simple randomized trials at the point of care for the documentation of value-based medicine.

    Science.gov (United States)

    van Staa, T-P; Klungel, O; Smeeth, L

    2014-06-01

    A solid foundation of evidence of the effects of an intervention is a prerequisite of evidence-based medicine. The best source of such evidence is considered to be randomized trials, which are able to avoid confounding. However, they may not always estimate effectiveness in clinical practice. Databases that collate anonymized electronic health records (EHRs) from different clinical centres have been widely used for many years in observational studies. Randomized point-of-care trials have been initiated recently to recruit and follow patients using the data from EHR databases. In this review, we describe how EHR databases can be used for conducting large-scale simple trials and discuss the advantages and disadvantages of their use. © 2014 The Association for the Publication of the Journal of Internal Medicine.

  5. Large Scale Document Inversion using a Multi-threaded Computing System

    Science.gov (United States)

    Jung, Sungbo; Chang, Dar-Jen; Park, Juw Won

    2018-01-01

    Current microprocessor architecture is moving toward multi-core/multi-threaded systems. This trend has led to a surge of interest in using multi-threaded computing devices, such as the Graphics Processing Unit (GPU), for general-purpose computing. Because the GPU consists of many cores, it can serve as a massively parallel coprocessor; it is also an affordable, attractive, and user-programmable commodity. Enormous volumes of data, such as digital libraries, social networking services, e-commerce product data, and reviews, are produced or collected continuously and are growing dramatically in size. Although the inverted index is a useful data structure for full-text search and document retrieval, building the index for a large document collection takes a tremendous amount of time. The performance of document inversion can be improved with multi-threaded or multi-core GPUs. Our approach is to implement a linear-time, hash-based, single program multiple data (SPMD) document inversion algorithm on the NVIDIA GPU/CUDA programming platform, exploiting the huge computational power of the GPU to develop high-performance solutions for document indexing. Our proposed parallel document inversion system shows 2-3 times faster performance than a sequential system on two different test datasets from PubMed abstracts and e-commerce product reviews. CCS Concepts: Information systems → Information retrieval; Computing methodologies → Massively parallel and high-performance simulations.
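
The inverted index that the paper parallelizes can be illustrated with a minimal serial sketch (plain Python standing in for the CUDA kernels; the function name and sample documents are ours):

```python
from collections import defaultdict

def build_inverted_index(docs):
    """Map each term to the sorted list of IDs of the documents containing it."""
    index = defaultdict(set)
    for doc_id, text in enumerate(docs):
        for term in text.lower().split():
            index[term].add(doc_id)
    return {term: sorted(ids) for term, ids in index.items()}

docs = ["GPU computing with CUDA",
        "document inversion on the GPU",
        "e-commerce product reviews"]
index = build_inverted_index(docs)  # e.g. index["gpu"] == [0, 1]
```

The GPU version in the paper distributes documents across threads and uses hashing to assign terms to buckets; the serial sketch only shows the data structure being built.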

  6. Large-scale HTS bulks for magnetic application

    International Nuclear Information System (INIS)

    Werfel, Frank N.; Floegel-Delor, Uta; Riedel, Thomas; Goebel, Bernd; Rothfeld, Rolf; Schirrmeister, Peter; Wippich, Dieter

    2013-01-01

    Highlights: ► ATZ Company has constructed about 130 HTS magnet systems. ► Multi-seeded YBCO bulks paved the way for large-scale application. ► Levitation platforms demonstrated superconductivity to a large public audience (100-year anniversary). ► HTS magnetic bearings show forces up to 1 t. ► Modular HTS maglev vacuum cryostats are being tested for train demonstrators in Brazil, China, and Germany. -- Abstract: ATZ Company has constructed about 130 HTS magnet systems using high-Tc bulk magnets. A key feature in scaling up is the fabrication of melt-textured, multi-seeded large YBCO bulks with three to eight seeds. Besides levitation, magnetization, trapped field, and hysteresis, we review system engineering parameters of HTS magnetic linear and rotational bearings, such as compactness, cryogenics, power density, efficiency, and robust construction. We examine mobile compact YBCO bulk magnet platforms cooled with LN2 and a Stirling cryo-cooler for demonstrator use. Compact cryostats for maglev train operation contain 24 three-seed bulks and can levitate 2500-3000 N at 10 mm above a permanent magnet (PM) track. The effective magnetic distance of the thermally insulated bulks is only 2 mm; the stored 2.5 l of LN2 allows more than 24 h of operation without refilling. Thirty-four HTS maglev vacuum cryostats have been manufactured, tested, and operated in Germany, China, and Brazil. The magnetic levitation load-to-weight ratio is more than 15, and by assembling groups of HTS cryostats under vehicles, total loads of up to 5 t have been levitated above a magnetic track.

  7. Large-scale HTS bulks for magnetic application

    Energy Technology Data Exchange (ETDEWEB)

    Werfel, Frank N., E-mail: werfel@t-online.de [Adelwitz Technologiezentrum GmbH (ATZ), Rittergut Adelwitz 16, 04886 Arzberg-Adelwitz (Germany); Floegel-Delor, Uta; Riedel, Thomas; Goebel, Bernd; Rothfeld, Rolf; Schirrmeister, Peter; Wippich, Dieter [Adelwitz Technologiezentrum GmbH (ATZ), Rittergut Adelwitz 16, 04886 Arzberg-Adelwitz (Germany)

    2013-01-15

    Highlights: ► ATZ Company has constructed about 130 HTS magnet systems. ► Multi-seeded YBCO bulks paved the way for large-scale application. ► Levitation platforms demonstrated superconductivity to a large public audience (100-year anniversary). ► HTS magnetic bearings show forces up to 1 t. ► Modular HTS maglev vacuum cryostats are being tested for train demonstrators in Brazil, China, and Germany. -- Abstract: ATZ Company has constructed about 130 HTS magnet systems using high-Tc bulk magnets. A key feature in scaling up is the fabrication of melt-textured, multi-seeded large YBCO bulks with three to eight seeds. Besides levitation, magnetization, trapped field, and hysteresis, we review system engineering parameters of HTS magnetic linear and rotational bearings, such as compactness, cryogenics, power density, efficiency, and robust construction. We examine mobile compact YBCO bulk magnet platforms cooled with LN2 and a Stirling cryo-cooler for demonstrator use. Compact cryostats for maglev train operation contain 24 three-seed bulks and can levitate 2500-3000 N at 10 mm above a permanent magnet (PM) track. The effective magnetic distance of the thermally insulated bulks is only 2 mm; the stored 2.5 l of LN2 allows more than 24 h of operation without refilling. Thirty-four HTS maglev vacuum cryostats have been manufactured, tested, and operated in Germany, China, and Brazil. The magnetic levitation load-to-weight ratio is more than 15, and by assembling groups of HTS cryostats under vehicles, total loads of up to 5 t have been levitated above a magnetic track.

  8. Multi(scale)gravity: a telescope for the micro-world

    International Nuclear Information System (INIS)

    Kogan, I.I.

    2001-01-01

    A short review of the current status of multi-gravity, i.e., the modification of gravity at both short and large distances, is given. Embedding the standard model and general relativity into a multidimensional construction usually gives rise to all possible sorts of new effects in the micro-world, but it can also drastically modify the laws of gravity at ultra-large scales. One reason multi-gravity can modify the CMB (cosmic microwave background) is that it leads to a large-distance modification of the curvature. A striking feature of multi-gravity is that it provides a sort of dark matter that is simply matter from other branes. The author shows, for a 5-dimensional case, that at large distances multi-gravity opens a window into extra dimensions, so that matter localized on other branes can be felt gravitationally. (A.C.)

  9. 5 years of experience with a large-scale mentoring program for medical students

    Directory of Open Access Journals (Sweden)

    Pinilla, Severin

    2015-02-01

    Full Text Available In this paper we present our five years of experience with a large-scale mentoring program for undergraduate medical students at the Ludwig Maximilians-Universität Munich (LMU). We implemented a two-tiered program with a peer-mentoring concept for preclinical students and a 1:1 mentoring concept for clinical students, aided by a fully automated online matching algorithm. Approximately 20-30% of each student cohort participates in our voluntary mentoring program. Defining ideal program evaluation strategies, recruiting mentors from beyond the academic environment, and accounting for the reality of mentoring networks remain challenging. We conclude that a two-tiered program is well accepted by students and faculty. In addition, online matching appears to be effective for large-scale mentoring programs.

  10. Multi-fidelity uncertainty quantification in large-scale predictive simulations of turbulent flow

    Science.gov (United States)

    Geraci, Gianluca; Jofre-Cruanyes, Lluis; Iaccarino, Gianluca

    2017-11-01

    The performance characterization of complex engineering systems often relies on accurate, but computationally intensive numerical simulations. It is also well recognized that obtaining a reliable numerical prediction requires propagating uncertainties. Therefore, Uncertainty Quantification (UQ) plays a fundamental role in building confidence in predictive science. Despite great improvements in recent years, even the most advanced UQ algorithms are still limited to fairly simplified applications and only moderate parameter dimensionality. Moreover, in the case of extremely large dimensionality, sampling methods, i.e., Monte Carlo (MC) based approaches, appear to be the only viable alternative. In this talk we describe and compare a family of approaches that aim to accelerate the convergence of standard MC simulations. These methods are based on hierarchies of generalized numerical resolutions (multi-level) or model fidelities (multi-fidelity), and attempt to leverage the correlation between Low- and High-Fidelity (HF) models to obtain a more accurate statistical estimator without introducing additional HF realizations. The performance of these methods is assessed on an irradiated particle-laden turbulent flow (PSAAP II solar energy receiver). This investigation was funded by the United States Department of Energy's (DoE) National Nuclear Security Administration (NNSA) under the Predictive Science Academic Alliance Program (PSAAP) II at Stanford University.
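
The multi-fidelity idea described above, leveraging the correlation between a cheap low-fidelity model and an expensive high-fidelity one, can be sketched as a two-model control-variate Monte Carlo estimator (an illustrative toy, not the PSAAP II solver; both model functions are stand-ins):

```python
import random

random.seed(0)

def high_fidelity(x):
    """Stand-in for an expensive simulation (true mean is 1 for x ~ N(0,1))."""
    return x ** 2 + 0.05 * x ** 3

def low_fidelity(x):
    """Cheap, strongly correlated surrogate."""
    return x ** 2

mean = lambda v: sum(v) / len(v)

# A small batch of paired HF/LF evaluations estimates the control-variate
# coefficient alpha and the raw HF mean.
xs = [random.gauss(0.0, 1.0) for _ in range(200)]
hf = [high_fidelity(x) for x in xs]
lf = [low_fidelity(x) for x in xs]
m_hf, m_lf = mean(hf), mean(lf)
alpha = mean([(a - m_hf) * (b - m_lf) for a, b in zip(hf, lf)]) \
        / mean([(b - m_lf) ** 2 for b in lf])

# Many cheap LF-only samples pin down the LF mean accurately.
mu_lf = mean([low_fidelity(random.gauss(0.0, 1.0)) for _ in range(20000)])

# Control-variate (multi-fidelity) estimate of E[high_fidelity],
# with no additional HF realizations.
estimate = m_hf - alpha * (m_lf - mu_lf)
```

The correction term cancels most of the sampling error in `m_hf` because the two models are highly correlated, which is exactly the mechanism the abstract describes.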

  11. Multi-scale Material Parameter Identification Using LS-DYNA® and LS-OPT®

    Energy Technology Data Exchange (ETDEWEB)

    Stander, Nielen; Basudhar, Anirban; Basu, Ushnish; Gandikota, Imtiaz; Savic, Vesna; Sun, Xin; Choi, Kyoo Sil; Hu, Xiaohua; Pourboghrat, F.; Park, Taejoon; Mapar, Aboozar; Kumar, Shavan; Ghassemi-Armaki, Hassan; Abu-Farha, Fadi

    2015-09-14

    Test Ban Treaty of 1996, which banned surface testing of nuclear devices [1]. This had the effect of reducing experimental work from large-scale tests to multi-scale experiments that provide material models with validation at different length scales. In the subsequent years industry realized that multi-scale modeling and simulation-based design were transferable to the design optimization of any structural system. Horstemeyer [1] lists a number of advantages of multi-scale modeling, among them the reduction of product development time by alleviating costly trial-and-error iterations, and the reduction of product costs through innovations in material, product, and process designs. Multi-scale modeling can reduce the number of costly large-scale experiments and can increase product quality by providing more accurate predictions. Research tends to be focused on each particular length scale, which enhances accuracy in the long term. This paper serves as an introduction to the LS-OPT and LS-DYNA methodology for multi-scale modeling. It mainly focuses on an approach to integrating material identification using material models of different length scales. As an example, a multi-scale material identification strategy, consisting of a Crystal Plasticity (CP) material model and a homogenized State Variable (SV) model, is discussed, and the parameter identification of the individual material models at different length scales is demonstrated. The paper concludes with thoughts on integrating the multi-scale methodology into the overall vehicle design.

  12. A Multi-Scale Settlement Matching Algorithm Based on ARG

    Science.gov (United States)

    Yue, Han; Zhu, Xinyan; Chen, Di; Liu, Lingjia

    2016-06-01

    Homonymous entity matching is an important part of multi-source spatial data integration, automatic updating, and change detection. Given the low accuracy of existing methods for matching multi-scale settlement data, an algorithm based on the Attributed Relational Graph (ARG) is proposed. The algorithm first divides two settlement scenes at different scales into blocks using the small-scale road network and constructs local ARGs in each block. It then ascertains candidate sets through merging procedures and obtains the optimal matching pairs by iteratively comparing the similarity of the ARGs. Finally, the corresponding relations between settlements at large and small scales are identified. A demonstration at the end of this article indicates that the proposed algorithm is capable of handling sophisticated cases.
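
A toy version of scoring one candidate settlement pairing between two local ARGs might combine node-attribute similarity with the fraction of preserved edges (the attributes, weights, and helper names below are illustrative, not taken from the paper):

```python
def node_sim(a, b):
    """Similarity of two settlement nodes from area and name attributes (illustrative)."""
    area = min(a["area"], b["area"]) / max(a["area"], b["area"])
    name = 1.0 if a["name"] == b["name"] else 0.0
    return 0.5 * area + 0.5 * name

def arg_similarity(g1, g2, pairing):
    """Score a candidate node pairing between two local ARGs: mean node
    similarity plus the fraction of g1 edges preserved under the mapping."""
    ns = sum(node_sim(g1["nodes"][i], g2["nodes"][j]) for i, j in pairing) / len(pairing)
    mapped = dict(pairing)
    kept = sum(1 for u, v in g1["edges"]
               if (mapped[u], mapped[v]) in g2["edges"]
               or (mapped[v], mapped[u]) in g2["edges"])
    return 0.5 * ns + 0.5 * kept / max(len(g1["edges"]), 1)

# Two tiny settlement scenes at different scales (attributes invented).
g1 = {"nodes": {0: {"name": "A", "area": 10.0}, 1: {"name": "B", "area": 5.0}},
      "edges": {(0, 1)}}
g2 = {"nodes": {0: {"name": "A", "area": 12.0}, 1: {"name": "B", "area": 5.0}},
      "edges": {(0, 1)}}
good = arg_similarity(g1, g2, [(0, 0), (1, 1)])  # correct correspondence
bad = arg_similarity(g1, g2, [(0, 1), (1, 0)])   # swapped correspondence
```

The paper's iterative procedure would evaluate many such candidate pairings per block and keep the highest-scoring ones; this sketch shows only the scoring step.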

  13. Emergence of multi-scaling in fluid turbulence

    Science.gov (United States)

    Donzis, Diego; Yakhot, Victor

    2017-11-01

    We present new theoretical and numerical results on the transition to strong turbulence in an infinite fluid stirred by a Gaussian random force. The transition is defined as the first appearance of anomalous scaling of normalized moments of velocity derivatives (or dissipation rates) emerging from the low-Reynolds-number Gaussian background. It is shown that, due to multi-scaling, strongly intermittent rare events can be quantitatively described in terms of an infinite number of different ``Reynolds numbers'' reflecting a multitude of anomalous scaling exponents. We found that anomalous scaling of high-order moments emerges at very low Reynolds numbers, implying that intense dissipative-range fluctuations are established at even lower Reynolds numbers than required for an inertial range. Thus, our results suggest that information about inertial-range dynamics can be obtained from dissipative scales even when the former does not exist. We discuss our further prediction that the transition to fully anomalous turbulence disappears at Rλ < 3. Support from NSF is acknowledged.
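
The normalized moments whose anomalous growth defines the transition can be estimated directly from samples; for a Gaussian background the fourth normalized moment equals 3, i.e. no anomaly (a quick illustrative estimator, not the authors' diagnostics):

```python
import random

random.seed(1)

def normalized_moment(samples, n):
    """M_n = <x^n> / <x^2>^(n/2); equals 3 for n = 4 on a Gaussian signal."""
    mean = lambda v: sum(v) / len(v)
    return mean([s ** n for s in samples]) / mean([s ** 2 for s in samples]) ** (n / 2)

# Low-Reynolds-number Gaussian background: normalized moments take their
# Gaussian values, so there is no anomalous scaling.
xs = [random.gauss(0.0, 1.0) for _ in range(200000)]
m4 = normalized_moment(xs, 4)
```

Departure of such moments from their Gaussian values as the forcing strengthens is the signature of intermittency the abstract describes.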

  14. Exploring Multi-Scale Spatiotemporal Twitter User Mobility Patterns with a Visual-Analytics Approach

    Directory of Open Access Journals (Sweden)

    Junjun Yin

    2016-10-01

    Full Text Available Understanding human mobility patterns is of great importance for urban planning, traffic management, and even marketing campaigns. However, the capability to capture detailed human movements at fine-grained spatial and temporal granularity is still limited. In this study, we extracted high-resolution mobility data from a collection of over 1.3 billion geo-located Twitter messages. Because of concerns about infringement on individual privacy (as with mobile phone call records, whose access is restricted), the dataset was collected from publicly accessible Twitter data streams. We employed a visual-analytics approach to study multi-scale spatiotemporal Twitter user mobility patterns in the contiguous United States during the year 2014. Our approach includes a scalable visual-analytics framework that delivers efficiency and scalability in filtering large volumes of geo-located tweets, modeling and extracting Twitter user movements, generating space-time user trajectories, and summarizing multi-scale spatiotemporal user mobility patterns. We performed a set of statistical analyses to understand Twitter user mobility patterns across multiple spatial scales and temporal ranges. In particular, mobility patterns measured by the displacements and radii of gyration of individuals revealed multi-scale or multi-modal behavior. By further studying such patterns over different temporal ranges, we identified both consistency and seasonal fluctuations in the corresponding distance-decay effects. Our approach also provides a geo-visualization unit with an interactive 3D virtual-globe web mapping interface for exploratory geo-visual analytics of the multi-level spatiotemporal Twitter user movements.
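
The radius of gyration used to characterize individual mobility can be computed from a user's geo-located points roughly as follows (a sketch; the centroid is approximated by averaging latitude/longitude, which is adequate for regional trajectories, and the sample coordinates are invented):

```python
from math import radians, sin, cos, asin, sqrt

def haversine_km(p, q):
    """Great-circle distance in km between two (lat, lon) points."""
    lat1, lon1, lat2, lon2 = map(radians, (*p, *q))
    h = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * 6371.0 * asin(sqrt(h))

def radius_of_gyration(points):
    """RMS great-circle distance of a user's tweet locations from their
    centroid (centroid taken as the mean lat/lon, a flat-earth shortcut)."""
    n = len(points)
    centroid = (sum(p[0] for p in points) / n, sum(p[1] for p in points) / n)
    return sqrt(sum(haversine_km(p, centroid) ** 2 for p in points) / n)

# A commuter staying within one city vs. a bicoastal traveller.
local = radius_of_gyration([(40.71, -74.00), (40.75, -73.99), (40.69, -74.02)])
bicoastal = radius_of_gyration([(40.71, -74.00), (34.05, -118.24)])
```

The distribution of this quantity across users is what reveals the multi-modal mobility patterns reported in the study.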

  15. Study design of a cluster-randomized controlled trial to evaluate a large-scale distribution of cook stoves and water filters in Western Province, Rwanda.

    Science.gov (United States)

    Nagel, Corey L; Kirby, Miles A; Zambrano, Laura D; Rosa, Ghislane; Barstow, Christina K; Thomas, Evan A; Clasen, Thomas F

    2016-12-15

    In Rwanda, pneumonia and diarrhea are the first and second leading causes of death, respectively, among children under five. Household air pollution (HAP) resulting from cooking indoors with biomass fuels on traditional stoves is a significant risk factor for pneumonia, while consumption of contaminated drinking water is a primary cause of diarrheal disease. To date, there have been no large-scale effectiveness trials of programmatic efforts to provide either improved cookstoves or household water filters at scale in a low-income country. In this paper we describe the design of a cluster-randomized trial to evaluate the impact of a national-level program to distribute and promote the use of improved cookstoves and advanced water filters among the poorest quarter of households in Rwanda. We randomly allocated 72 sectors (administratively defined units) in Western Province to the intervention, with the remaining 24 sectors in the province serving as controls. In the intervention sectors, roughly 100,000 households received improved cookstoves and household water filters through a government-sponsored program targeting the poorest quarter of households nationally. The primary outcome measures are the incidence of acute respiratory infection (ARI) and diarrhea among children under five years of age. Over a one-year surveillance period, all cases of ARI and diarrhea identified by health workers in the study area will be extracted from records maintained at health facilities and by community health workers (CHWs). In addition, we are conducting intensive, longitudinal data collection among a random sample of households in the study area for in-depth assessment of coverage, use, environmental exposures, and additional health measures. Although previous research has examined the impact of providing household water treatment and improved cookstoves on child health, there have been no studies of national-level programs to deliver these interventions.
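
The sector-level allocation described above (72 of 96 sectors assigned to the intervention) can be sketched as a simple cluster randomization (sector names and the seed are illustrative):

```python
import random

def allocate_sectors(sector_ids, n_treatment, seed=2016):
    """Cluster-level randomization: assign n_treatment whole sectors to the
    intervention arm; all remaining sectors serve as controls."""
    rng = random.Random(seed)
    treatment = set(rng.sample(sector_ids, n_treatment))
    return {s: "intervention" if s in treatment else "control" for s in sector_ids}

# 96 sectors in Western Province: 72 intervention, 24 control, as in the design.
sectors = [f"sector_{i:02d}" for i in range(96)]
arms = allocate_sectors(sectors, n_treatment=72)
```

Randomizing whole sectors rather than households is what makes the design "cluster-randomized": every household in a sector receives the same assignment, and the analysis must account for within-cluster correlation.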

  16. Mass-flux subgrid-scale parameterization in analogy with multi-component flows: a formulation towards scale independence

    Directory of Open Access Journals (Sweden)

    J.-I. Yano

    2012-11-01

    Full Text Available A generalized mass-flux formulation is presented, which no longer takes the limit of vanishing fractional areas for subgrid-scale components. The presented formulation is applicable to a situation in which scale separation is still satisfied, but the fractional areas occupied by individual subgrid-scale components are no longer small. A self-consistent formulation is obtained by generalizing the mass-flux formulation under the segmentally-constant approximation (SCA) to the grid-scale variabilities. The present formulation is expected to alleviate problems arising from the increasing resolutions of operational forecast models without invoking a more extensive overhaul of parameterizations.

    The present formulation leads to an analogy of the large-scale atmospheric flow with multi-component flows. This analogy allows a generality of including any subgrid-scale variability into the mass-flux parameterization under SCA. Those include stratiform clouds as well as cold pools in the boundary layer.

    An important finding under the present formulation is that subgrid-scale quantities are advected by the large-scale velocities characteristic of the given subgrid-scale components (large-scale subcomponent flows), rather than by the total large-scale flow as simply defined by the grid-box average. In this manner, each subgrid-scale component behaves like a component of a multi-component flow. As a result, the formulation ensures the lateral interaction of subgrid-scale variability across grid boxes, which is missing in current parameterizations based on vertical one-dimensional models, leading to a reduction of grid-size dependencies in its performance. It is shown that the large-scale subcomponent flows are driven by large-scale subcomponent pressure gradients. The formulation therefore also includes a self-contained description of subgrid-scale momentum transport.
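
The grid-box decomposition underlying the SCA can be written compactly (a standard identity stated here for orientation, not quoted from the paper): for subgrid-scale components $i$ occupying fractional areas $\sigma_i$ of a grid box, the grid-box mean of any field $\varphi$ decomposes as

```latex
\overline{\varphi} \;=\; \sum_i \sigma_i \, \varphi_i ,
\qquad \sum_i \sigma_i = 1 .
```

Conventional mass-flux parameterization takes the limit $\sigma_i \to 0$ for the convective components; the generalized formulation keeps the $\sigma_i$ finite, which is what permits fractional areas that are no longer small.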

    The main purpose of the present paper

  17. Implementation of Grid-computing Framework for Simulation in Multi-scale Structural Analysis

    Directory of Open Access Journals (Sweden)

    Data Iranata

    2010-05-01

    Full Text Available A new grid-computing framework for simulation in multi-scale structural analysis is presented. Two levels of parallel processing are involved in this framework: multiple local distributed-computing environments connected over a local network to form a grid-based, cluster-to-cluster distributed computing environment. To perform the simulation, a large-scale structural system task is decomposed into the simulations of a simplified global model and several detailed component models at various scales. These correlated multi-scale structural tasks are distributed among the clusters, connected together in a multi-level hierarchy, and coordinated over the internet. The software framework supporting this multi-scale simulation approach is also presented; its architecture allows the integration of several multi-scale models as clients and servers under a single platform. To check its feasibility, a prototype system has been designed and implemented to demonstrate the proposed concept. The simulation results show that the software framework can increase the speedup performance of the structural analysis, indicating that the proposed grid-computing framework is suitable for simulating multi-scale structural analyses.

  18. Optical interconnect for large-scale systems

    Science.gov (United States)

    Dress, William

    2013-02-01

    This paper presents a switchless, optical interconnect module that serves as a node in a network of identical distribution modules for large-scale systems. Thousands to millions of hosts or endpoints may be interconnected by a network of such modules, avoiding the need for multi-level switches. Several common network topologies are reviewed and their scaling properties assessed. The concept of message-flow routing is discussed in conjunction with the unique properties enabled by the optical distribution module where it is shown how top-down software control (global routing tables, spanning-tree algorithms) may be avoided.

  19. Using Multi-Scale Modeling Systems and Satellite Data to Study the Precipitation Processes

    Science.gov (United States)

    Tao, Wei-Kuo; Chern, J.; Lamg, S.; Matsui, T.; Shen, B.; Zeng, X.; Shi, R.

    2011-01-01

    In recent years, exponentially increasing computer power has extended Cloud Resolving Model (CRM) integrations from hours to months and the number of computational grid points from less than a thousand to close to ten million. Three-dimensional models are now more prevalent. Much attention is devoted to precipitating cloud systems, where the crucial 1-km scales are resolved in horizontal domains as large as 10,000 km in two dimensions and 1,000 x 1,000 km2 in three dimensions. Cloud-resolving models now provide statistical information useful for developing more realistic, physically based parameterizations for climate models and numerical weather prediction (NWP) models. NWP and mesoscale models are also expected to run at grid sizes similar to cloud-resolving models through nesting techniques. Recently, a multi-scale modeling system with unified physics was developed at NASA Goddard. It consists of (1) a cloud-resolving model (the Goddard Cumulus Ensemble model, GCE), (2) a regional-scale model (the NASA-unified Weather Research and Forecasting model, WRF), (3) a coupled CRM and global model (the Goddard Multi-scale Modeling Framework, MMF), and (4) a land modeling system. The same microphysical processes, long- and short-wave radiative transfer, land processes, and explicit cloud-radiation and cloud-land-surface interactive processes are applied throughout this multi-scale modeling system. The system has been coupled with a multi-satellite simulator so that NASA high-resolution satellite data can be used to identify the strengths and weaknesses of the cloud and precipitation processes simulated by the model. In this talk, recent developments and applications of the multi-scale modeling system will be presented, in particular results from using it to study precipitating systems and hurricanes/typhoons. High-resolution spatial and temporal visualization will be used to show the evolution of precipitation processes.
Also how to

  20. Evaluating 20th Century precipitation characteristics between multi-scale atmospheric models with different land-atmosphere coupling

    Science.gov (United States)

    Phillips, M.; Denning, A. S.; Randall, D. A.; Branson, M.

    2016-12-01

    Multi-scale models of the atmosphere provide an opportunity to investigate processes unresolved by traditional Global Climate Models (GCMs) while remaining computationally viable for climate-length time scales. The Multi-scale Modeling Framework (MMF) represents a shift away from the large horizontal grid spacing of traditional GCMs, which leads to overabundant light precipitation and a lack of heavy events, toward a model in which precipitation intensity can vary over a much wider range. Resolving atmospheric motions on the scale of 4 km makes it possible to recover features of precipitation, such as intense downpours, that were previously obtained only by computationally expensive regional simulations. These heavy precipitation events may have little impact on large-scale moisture and energy budgets, but they are significant in terms of interaction with the land surface and potential impact on human life. Three versions of the Community Earth System Model were used in this study: the standard CESM; the multi-scale `Super-Parameterized' CESM (SP-CESM), in which large-scale parameterizations have been replaced with a 2D cloud-permitting model; and a multi-instance land version of the SP-CESM, in which each column of the 2D CRM interacts with an individual land unit. These simulations were carried out using prescribed sea surface temperatures for the period 1979-2006, with daily precipitation saved for all 28 years. The statistical properties of precipitation were compared between model architectures and against rain-gauge observations, with specific focus on the detection and evaluation of extreme precipitation events.

  1. Large scale and cloud-based multi-model analytics experiments on climate change data in the Earth System Grid Federation

    Science.gov (United States)

    Fiore, Sandro; Płóciennik, Marcin; Doutriaux, Charles; Blanquer, Ignacio; Barbera, Roberto; Donvito, Giacinto; Williams, Dean N.; Anantharaj, Valentine; Salomoni, Davide D.; Aloisio, Giovanni

    2017-04-01

    In many scientific domains, such as climate, data is often n-dimensional and requires tools that support specialized data types and primitives to be properly stored, accessed, analysed, and visualized. Moreover, new challenges arise in large-scale scenarios and ecosystems where petabytes (PB) of data may be available and data can be distributed and/or replicated, such as the Earth System Grid Federation (ESGF) serving the Coupled Model Intercomparison Project, Phase 5 (CMIP5) experiment, which provides access to 2.5 PB of data for the Intergovernmental Panel on Climate Change (IPCC) Fifth Assessment Report (AR5). A case study on climate model intercomparison data analysis addressing several classes of multi-model experiments is being implemented in the context of the EU H2020 INDIGO-DataCloud project. Such experiments require the availability of large amounts of data (of multi-terabyte order) related to the output of several climate model simulations, as well as the exploitation of scientific data-management tools for large-scale data analytics. More specifically, the talk discusses in detail a use case on precipitation trend analysis in terms of requirements, architectural design, and infrastructural implementation. The experiment has been tested and validated on CMIP5 datasets in the context of a large-scale distributed testbed across the EU and US involving three ESGF sites (LLNL, ORNL, and CMCC) and one central orchestrator site (PSNC). The general environment of the case study involves: (i) multi-model data analysis inter-comparison challenges; (ii) addressed on CMIP5 data; and (iii) made available through the IS-ENES/ESGF infrastructure. The added value of the solution proposed in the INDIGO-DataCloud project can be summarized as follows: (i) it implements a different paradigm (from client-side to server-side); (ii) it intrinsically reduces data movement; (iii) it makes the end-user setup lightweight; (iv) it fosters re-usability (of data, final

  2. Evaluation of convergence behavior of metamodeling techniques for bridging scales in multi-scale multimaterial simulation

    International Nuclear Information System (INIS)

    Sen, Oishik; Davis, Sean; Jacobs, Gustaaf; Udaykumar, H.S.

    2015-01-01

    The effectiveness of several metamodeling techniques, viz. the Polynomial Stochastic Collocation method, the Adaptive Stochastic Collocation method, a Radial Basis Function Neural Network, a Kriging method, and a Dynamic Kriging (DKG) method, is evaluated, with the express purpose of using metamodels to bridge scales between micro- and macro-scale models in a multi-scale multimaterial simulation. The rate of convergence of the error when reconstructing hypersurfaces of known functions is studied. For a sufficiently large number of training points, the Stochastic Collocation methods generally converge faster than the other metamodeling techniques, while the DKG method converges faster when the number of input points is less than 100 in a two-dimensional parameter space. Because the input points correspond to computationally expensive micro/meso-scale computations, the DKG is favored for bridging scales in a multi-scale solver.
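
As one concrete instance of such a metamodel, a Gaussian radial-basis-function interpolant can be fit to a handful of "expensive" samples and then evaluated cheaply anywhere in the parameter space (a one-dimensional pure-Python sketch, not the authors' implementation; the target function and shape parameter are illustrative):

```python
from math import exp

def gauss_solve(A, b):
    """Solve A x = b by Gaussian elimination with partial pivoting."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for c in range(n):
        p = max(range(c, n), key=lambda r: abs(M[r][c]))
        M[c], M[p] = M[p], M[c]
        for r in range(c + 1, n):
            f = M[r][c] / M[c][c]
            for k in range(c, n + 1):
                M[r][k] -= f * M[c][k]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][k] * x[k] for k in range(r + 1, n))) / M[r][r]
    return x

def rbf_fit(xs, ys, eps=1.0):
    """Fit Gaussian RBF weights so the metamodel interpolates the training data."""
    phi = lambda r: exp(-(eps * r) ** 2)
    A = [[phi(abs(xi - xj)) for xj in xs] for xi in xs]
    return gauss_solve(A, ys)

def rbf_eval(x, xs, w, eps=1.0):
    """Evaluate the metamodel at a new point: a weighted sum of basis functions."""
    return sum(wi * exp(-(eps * (x - xi)) ** 2) for wi, xi in zip(w, xs))

# Train on f(x) = x^2 at a few "expensive" sample points, then query cheaply.
xs = [0.0, 0.5, 1.0, 1.5, 2.0]
ys = [x * x for x in xs]
w = rbf_fit(xs, ys)
pred = rbf_eval(0.75, xs, w)  # cheap surrogate evaluation between samples
```

By construction the interpolant reproduces the training data exactly at the sample points; its accuracy between points is what the convergence study in the paper quantifies as the number of training points grows.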

  3. A Multi-Scale Settlement Matching Algorithm Based on ARG

    Directory of Open Access Journals (Sweden)

    H. Yue

    2016-06-01

    Full Text Available Homonymous entity matching is an important part of multi-source spatial data integration, automatic updating, and change detection. Given the low accuracy of existing methods for matching multi-scale settlement data, an algorithm based on the Attributed Relational Graph (ARG) is proposed. The algorithm first divides two settlement scenes at different scales into blocks using the small-scale road network and constructs local ARGs in each block. It then ascertains candidate sets through merging procedures and obtains the optimal matching pairs by iteratively comparing the similarity of the ARGs. Finally, the corresponding relations between settlements at large and small scales are identified. A demonstration at the end of this article indicates that the proposed algorithm is capable of handling sophisticated cases.

  4. Front-end vision and multi-scale image analysis multi-scale computer vision theory and applications, written in Mathematica

    CERN Document Server

    Romeny, Bart M Haar

    2008-01-01

    Front-End Vision and Multi-Scale Image Analysis is a tutorial in multi-scale methods for computer vision and image processing. It builds on the cross-fertilization between human visual perception and multi-scale computer vision (`scale-space') theory and applications. The multi-scale strategies recognized in the first stages of the human visual system are carefully examined and taken as inspiration for the many geometric methods discussed. All chapters are written in Mathematica, a spectacular high-level language for symbolic and numerical manipulations. The book presents a new and effective

  5. Detecting Multi-scale Structures in Chandra Images of Centaurus A

    Science.gov (United States)

    Karovska, M.; Fabbiano, G.; Elvis, M. S.; Evans, I. N.; Kim, D. W.; Prestwich, A. H.; Schwartz, D. A.; Murray, S. S.; Forman, W.; Jones, C.; Kraft, R. P.; Isobe, T.; Cui, W.; Schreier, E. J.

    1999-12-01

    Centaurus A (NGC 5128) is a giant early-type galaxy with a merger history, containing the nearest radio-bright AGN. Recent Chandra High Resolution Camera (HRC) observations of Cen A reveal X-ray multi-scale structures in this object with unprecedented detail and clarity. We show the results of an analysis of the Chandra data with smoothing and edge-enhancement techniques that allow us to enhance and quantify the multi-scale structures present in the HRC images. These techniques include an adaptive smoothing algorithm (Ebeling et al. 1999) and a multi-directional gradient detection algorithm (Karovska et al. 1994). The Ebeling et al. adaptive smoothing algorithm, which is incorporated in the CXC analysis software package, is a powerful tool for smoothing images containing complex structures at various spatial scales. The adaptively smoothed images of Centaurus A simultaneously show the high-angular-resolution bright structures at scales as small as an arcsecond and the extended faint structures as large as several arcminutes. The large-scale structures suggest complex symmetry, including a component possibly associated with the inner radio lobes (as suggested by the ROSAT HRI data; Dobereiner et al. 1996), and a separate component with an orthogonal symmetry that may be associated with the galaxy as a whole. The dust lane and the X-ray ridges are clearly visible. The adaptively smoothed and edge-enhanced images also suggest several filamentary features, including a large filament-like structure extending as far as about 5 arcminutes to the north-west.

  6. Multimode Resource-Constrained Multiple Project Scheduling Problem under Fuzzy Random Environment and Its Application to a Large Scale Hydropower Construction Project

    Science.gov (United States)

    Xu, Jiuping

    2014-01-01

    This paper presents an extension of the multimode resource-constrained project scheduling problem for a large scale construction project where multiple parallel projects and a fuzzy random environment are considered. By taking into account the most typical goals in project management, a cost/weighted makespan/quality trade-off optimization model is constructed. To deal with the uncertainties, a hybrid crisp approach is used to transform the fuzzy random parameters into fuzzy variables that are subsequently defuzzified using an expected value operator with an optimistic-pessimistic index. Then a combinatorial-priority-based hybrid particle swarm optimization algorithm is developed to solve the proposed model, where the combinatorial particle swarm optimization and priority-based particle swarm optimization are designed to assign modes to activities and to schedule activities, respectively. Finally, the results and analysis of a practical example at a large scale hydropower construction project are presented to demonstrate the practicality and efficiency of the proposed model and optimization method. PMID:24550708

  7. Multimode resource-constrained multiple project scheduling problem under fuzzy random environment and its application to a large scale hydropower construction project.

    Science.gov (United States)

    Xu, Jiuping; Feng, Cuiying

    2014-01-01

    This paper presents an extension of the multimode resource-constrained project scheduling problem for a large scale construction project where multiple parallel projects and a fuzzy random environment are considered. By taking into account the most typical goals in project management, a cost/weighted makespan/quality trade-off optimization model is constructed. To deal with the uncertainties, a hybrid crisp approach is used to transform the fuzzy random parameters into fuzzy variables that are subsequently defuzzified using an expected value operator with an optimistic-pessimistic index. Then a combinatorial-priority-based hybrid particle swarm optimization algorithm is developed to solve the proposed model, where the combinatorial particle swarm optimization and priority-based particle swarm optimization are designed to assign modes to activities and to schedule activities, respectively. Finally, the results and analysis of a practical example at a large scale hydropower construction project are presented to demonstrate the practicality and efficiency of the proposed model and optimization method.

  8. Multi-Scale Factor Analysis of High-Dimensional Brain Signals

    KAUST Repository

    Ting, Chee-Ming; Ombao, Hernando; Salleh, Sh-Hussain

    2017-01-01

    In this paper, we develop an approach to modeling high-dimensional networks with a large number of nodes arranged in a hierarchical and modular structure. We propose a novel multi-scale factor analysis (MSFA) model which partitions the massive

  9. Homogenization of Large-Scale Movement Models in Ecology

    Science.gov (United States)

    Garlick, M.J.; Powell, J.A.; Hooten, M.B.; McFarlane, L.R.

    2011-01-01

    A difficulty in using diffusion models to predict large scale animal population dispersal is that individuals move differently based on local information (as opposed to gradients) in differing habitat types. This can be accommodated by using ecological diffusion. However, real environments are often spatially complex, limiting application of a direct approach. Homogenization for partial differential equations has long been applied to Fickian diffusion (in which average individual movement is organized along gradients of habitat and population density). We derive a homogenization procedure for ecological diffusion and apply it to a simple model for chronic wasting disease in mule deer. Homogenization allows us to determine the impact of small scale (10-100 m) habitat variability on large scale (10-100 km) movement. The procedure generates asymptotic equations for solutions on the large scale with parameters defined by small-scale variation. The simplicity of this homogenization procedure is striking when compared to the multi-dimensional homogenization procedure for Fickian diffusion, and the method will be equally straightforward for more complex models. © 2010 Society for Mathematical Biology.
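
    As a minimal numerical sketch of the idea (assuming, as in one-dimensional homogenization of ecological diffusion, that the effective large-scale motility is the harmonic average of the rapidly varying small-scale motility — the motility values below are made up for demonstration):

```python
import numpy as np

# Made-up small-scale (10-100 m) habitat mosaic: low motility in good habitat
# (animals linger), high motility in poor habitat (animals pass through fast).
mu = np.where(np.arange(1000) % 2 == 0, 10.0, 1000.0)  # motility, m^2/day

arithmetic_mean = mu.mean()
# Assumed 1-D homogenized (large-scale) motility: the harmonic average of mu(x)
harmonic_mean = 1.0 / np.mean(1.0 / mu)
```

    The harmonic average is dominated by the low-motility (high-residence) patches, which is precisely why small-scale habitat structure can control large-scale spread.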

  10. A multi-directional rapidly exploring random graph (mRRG) for protein folding

    KAUST Repository

    Nath, Shuvra Kanti; Thomas, Shawna; Ekenna, Chinwe; Amato, Nancy M.

    2012-01-01

    Modeling large-scale protein motions, such as those involved in folding and binding interactions, is crucial to better understanding not only how proteins move and interact with other molecules but also how proteins misfold, thus causing many devastating diseases. Robotic motion planning algorithms, such as Rapidly Exploring Random Trees (RRTs), have been successful in simulating protein folding pathways. Here, we propose a new multi-directional Rapidly Exploring Random Graph (mRRG) specifically tailored for proteins. Unlike traditional RRGs which only expand a parent conformation in a single direction, our strategy expands the parent conformation in multiple directions to generate new samples. Resulting samples are connected to the parent conformation and its nearest neighbors. By leveraging multiple directions, mRRG can model the protein motion landscape with reduced computational time compared to several other robotics-based methods for small to moderate-sized proteins. Our results on several proteins agree with experimental hydrogen out-exchange, pulse-labeling, and F-value analysis. We also show that mRRG covers the conformation space better as compared to the other computation methods. Copyright © 2012 ACM.

  11. Identification of low order models for large scale processes

    NARCIS (Netherlands)

    Wattamwar, S.K.

    2010-01-01

    Many industrial chemical processes are complex, multi-phase and large scale in nature. These processes are characterized by various nonlinear physiochemical effects and fluid flows. Such processes often show coexistence of fast and slow dynamics during their time evolutions. The increasing demand

  12. Large scale electrolysers

    International Nuclear Information System (INIS)

    B Bello; M Junker

    2006-01-01

    Hydrogen production by water electrolysis represents nearly 4% of the world's hydrogen production. Future development of hydrogen vehicles will require large quantities of hydrogen, so the installation of large scale hydrogen production plants will be needed. In this context, the development of low cost large scale electrolysers that could use 'clean power' seems necessary. ALPHEA HYDROGEN, a European network and center of expertise on hydrogen and fuel cells, performed a study for its members in 2005 to evaluate the potential of large scale electrolysers to produce hydrogen in the future. The different electrolysis technologies were compared, and a state of the art of currently available electrolysis modules was established. A review of the large scale electrolysis plants that have been installed around the world was also carried out, and the main projects related to large scale electrolysis were listed. The economics of large scale electrolysers are discussed, and the influence of energy prices on the hydrogen production cost by large scale electrolysis is evaluated. (authors)
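
    The influence of electricity price on production cost mentioned above is, to first order, a linear relationship; a sketch with illustrative figures that are not taken from the ALPHEA study:

```python
# Illustrative figures, not from the study: a typical alkaline electrolyser
# consumes roughly 50 kWh of electricity per kg of hydrogen (the H2 lower
# heating value is ~33.3 kWh/kg, i.e. roughly 67% efficiency).
SPECIFIC_ENERGY = 50.0  # kWh per kg H2 (assumed)

def h2_electricity_cost(elec_price_per_kwh):
    """Electricity contribution to hydrogen production cost, per kg of H2."""
    return SPECIFIC_ENERGY * elec_price_per_kwh

for price in (0.03, 0.05, 0.10):  # EUR/kWh
    print(f"{price:.2f} EUR/kWh -> {h2_electricity_cost(price):.2f} EUR/kg H2")
```

    Capital and maintenance costs are omitted; the point is only that electricity price dominates the variable cost of large scale electrolysis.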

  13. Large Scale Gaussian Processes for Atmospheric Parameter Retrieval and Cloud Screening

    Science.gov (United States)

    Camps-Valls, G.; Gomez-Chova, L.; Mateo, G.; Laparra, V.; Perez-Suay, A.; Munoz-Mari, J.

    2017-12-01

    Current Earth-observation (EO) applications for image classification have to deal with an unprecedented amount of heterogeneous and complex data sources. Spatio-temporally explicit classification methods are a requirement in a variety of Earth system data processing applications. Upcoming missions such as the super-spectral Copernicus Sentinels, EnMAP and FLEX will soon provide unprecedented data streams. Very high resolution (VHR) sensors like WorldView-3 also pose big challenges to data processing. The challenge is not only attached to optical sensors but also to infrared sounders and radar images, which have increased in spectral, spatial and temporal resolution. Besides, we should not forget the availability of the extremely large remote sensing data archives already collected by several past missions, such as ENVISAT, Cosmo-SkyMed, Landsat, SPOT, or Seviri/MSG. These large-scale data problems require enhanced processing techniques that should be accurate, robust and fast. Standard parameter retrieval and classification algorithms cannot cope with this new scenario efficiently. In this work, we review the field of large scale kernel methods for both atmospheric parameter retrieval and cloud detection using infrared sounding IASI data and optical Seviri/MSG imagery. We propose novel Gaussian Processes (GPs) to train problems with millions of instances and a large number of input features. The algorithms can cope with non-linearities efficiently, accommodate multi-output problems, and provide confidence intervals for the predictions. Several strategies to speed up the algorithms are devised: random Fourier features and variational approaches for cloud classification using IASI data and Seviri/MSG, and engineered randomized kernel functions and emulation in temperature, moisture and ozone atmospheric profile retrieval from IASI as a proxy to the upcoming MTG-IRS sensor. An excellent compromise between accuracy and scalability is obtained in all applications.
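
    The random Fourier features speed-up mentioned above replaces the RBF kernel with an explicit low-dimensional feature map whose inner products approximate kernel evaluations; a self-contained sketch (bandwidth and dimensions assumed for demonstration):

```python
import numpy as np

def rff_map(X, n_features, sigma, rng):
    """Random Fourier features approximating the RBF kernel
    k(x, y) = exp(-||x - y||^2 / (2 sigma^2))."""
    d = X.shape[1]
    W = rng.normal(0.0, 1.0 / sigma, size=(d, n_features))
    b = rng.uniform(0.0, 2 * np.pi, size=n_features)
    return np.sqrt(2.0 / n_features) * np.cos(X @ W + b)

rng = np.random.default_rng(42)
X = rng.normal(size=(50, 5))
sigma = 2.0

Z = rff_map(X, 5000, sigma, rng)          # explicit feature map
K_approx = Z @ Z.T                         # linear inner products
K_exact = np.exp(-((X[:, None] - X[None, :]) ** 2).sum(-1) / (2 * sigma**2))
err = np.abs(K_exact - K_approx).max()
```

    A linear model trained on Z then scales to millions of instances, because the n x n kernel matrix is never formed; the approximation error decays like one over the square root of the number of features.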

  14. Modeling Multi-Level Systems

    CERN Document Server

    Iordache, Octavian

    2011-01-01

    This book is devoted to modeling of multi-level complex systems, a challenging domain for engineers, researchers and entrepreneurs, confronted with the transition from learning and adaptability to evolvability and autonomy for technologies, devices and problem solving methods. Chapter 1 introduces the multi-scale and multi-level systems and highlights their presence in different domains of science and technology. Methodologies as, random systems, non-Archimedean analysis, category theory and specific techniques as model categorification and integrative closure, are presented in chapter 2. Chapters 3 and 4 describe polystochastic models, PSM, and their developments. Categorical formulation of integrative closure offers the general PSM framework which serves as a flexible guideline for a large variety of multi-level modeling problems. Focusing on chemical engineering, pharmaceutical and environmental case studies, the chapters 5 to 8 analyze mixing, turbulent dispersion and entropy production for multi-scale sy...

  15. Fast Decentralized Averaging via Multi-scale Gossip

    Science.gov (United States)

    Tsianos, Konstantinos I.; Rabbat, Michael G.

    We are interested in the problem of computing the average consensus in a distributed fashion on random geometric graphs. We describe a new algorithm called Multi-scale Gossip which employs a hierarchical decomposition of the graph to partition the computation into tractable sub-problems. Using only pairwise messages of fixed size that travel at most O(n^{1/3}) hops, our algorithm is robust and has communication cost of O(n log log n log ε^{-1}) transmissions, which is order-optimal up to the logarithmic factor in n. Simulated experiments verify the good expected performance on graphs of many thousands of nodes.
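
    The gossip primitive underlying the algorithm can be sketched as plain pairwise randomized gossip (on a complete graph here for simplicity — not the hierarchical multi-scale scheme itself): each update averages the two endpoints of a random edge, which preserves the global mean while the node values contract toward consensus.

```python
import numpy as np

def pairwise_gossip(values, edges, n_rounds, rng):
    """Plain randomized gossip: repeatedly average the two endpoints of a
    randomly chosen edge. The global mean is invariant under each update."""
    x = values.astype(float).copy()
    for _ in range(n_rounds):
        i, j = edges[rng.integers(len(edges))]
        x[i] = x[j] = 0.5 * (x[i] + x[j])
    return x

rng = np.random.default_rng(1)
n = 50
edges = [(i, j) for i in range(n) for j in range(i + 1, n)]  # complete graph
x0 = rng.normal(size=n)
x = pairwise_gossip(x0, edges, 5000, rng)
```

    On sparser topologies such as random geometric graphs this plain scheme mixes slowly; the hierarchical decomposition in the record above is exactly what restores near-optimal communication cost.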

  16. Investigation on the integral output power model of a large-scale wind farm

    Institute of Scientific and Technical Information of China (English)

    BAO Nengsheng; MA Xiuqian; NI Weidou

    2007-01-01

    The integral output power model of a large-scale wind farm is needed when estimating the wind farm's output over a period of time in the future. The actual wind speed power model and calculation method of a wind farm made up of many wind turbine units are discussed. After analyzing the incoming wind flow characteristics and their energy distributions, and after considering the mutual effects among the wind turbine units under certain assumptions, the incoming wind flow model of multiple units is built. The calculation algorithms and steps of the integral output power model of a large-scale wind farm are provided. Finally, an actual power output of the wind farm is calculated and analyzed using practical wind speed measurement data. The characteristics of a large-scale wind farm are also discussed.
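
    A minimal sketch of such an integral output model, assuming a generic turbine power curve and a single aggregate wake-loss factor (all parameters below are illustrative, not the paper's):

```python
import numpy as np

def turbine_power(v, rated_kw=2000.0, v_in=3.0, v_rated=12.0, v_out=25.0):
    """Generic turbine power curve: cubic ramp between cut-in and rated
    speed, constant at rated power, zero below cut-in and above cut-out."""
    v = np.asarray(v, dtype=float)
    p = np.zeros_like(v)
    ramp = (v >= v_in) & (v < v_rated)
    p[ramp] = rated_kw * (v[ramp] ** 3 - v_in**3) / (v_rated**3 - v_in**3)
    p[(v >= v_rated) & (v < v_out)] = rated_kw
    return p

def farm_output(v, n_units=50, wake_loss=0.10):
    """Integral farm output: unit curve x number of units x (1 - wake loss).
    The interactions among units are collapsed into one loss factor here."""
    return n_units * (1.0 - wake_loss) * turbine_power(v)
```

    Feeding a measured wind-speed time series through `farm_output` and summing over time gives the estimated energy yield; the record above refines exactly the parts this sketch collapses, namely the incoming-flow model and the unit interactions.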

  17. Human visual system automatically represents large-scale sequential regularities.

    Science.gov (United States)

    Kimura, Motohiro; Widmann, Andreas; Schröger, Erich

    2010-03-04

    Our brain recordings reveal that large-scale sequential regularities defined across non-adjacent stimuli can be automatically represented in visual sensory memory. To show that, we adapted to the visual domain an auditory paradigm developed by Sussman, Ritter, and Vaughan (1998, NeuroReport, 9, 4167-4170) and Sussman and Gumenyuk (2005, NeuroReport, 16, 1519-1523) by presenting task-irrelevant infrequent luminance-deviant stimuli (D, 20%) inserted among task-irrelevant frequent stimuli of standard luminance (S, 80%) in randomized (randomized condition, SSSDSSSSSDSSSSD...) and fixed manners (fixed condition, SSSSDSSSSDSSSSD...). Comparing the visual mismatch negativity (visual MMN), an event-related brain potential (ERP) index of memory-mismatch processes in the human visual sensory system, revealed that the visual MMN elicited by deviant stimuli was reduced in the fixed compared to the randomized condition. Thus, the large-scale sequential regularity present in the fixed condition (SSSSD) must have been represented in visual sensory memory. Interestingly, this effect did not occur in conditions with stimulus-onset asynchronies (SOAs) of 480 and 800 ms but was confined to the 160-ms SOA condition, supporting the hypothesis that large-scale regularity extraction was based on perceptual grouping of the five successive stimuli defining the regularity. © 2010 Elsevier B.V. All rights reserved.

  18. NASA: Assessments of Selected Large-Scale Projects

    Science.gov (United States)

    2011-03-01

    This March 2011 report presents assessments of selected large-scale NASA projects, ranging from probes designed to explore the Martian surface, to satellites equipped with advanced sensors to study the earth, to telescopes intended to explore the ... Acronyms used include MAVEN (Mars Atmosphere and Volatile EvolutioN), MEP (Mars Exploration Program), MIB (Mishap Investigation Board), MMRTG (Multi Mission Radioisotope Thermoelectric Generator), and MMS (Magnetospheric ...).

  19. Bush encroachment monitoring using multi-temporal Landsat data and random forests

    Science.gov (United States)

    Symeonakis, E.; Higginbottom, T.

    2014-11-01

    It is widely accepted that land degradation and desertification (LDD) are serious global threats to humans and the environment. Around a third of the savannahs in Africa are affected by LDD processes that may lead to substantial declines in ecosystem functioning and services. Indirectly, LDD can be monitored using relevant indicators. The encroachment of woody plants into grasslands, and the subsequent conversion of savannahs and open woodlands into shrublands, has attracted a lot of attention over the last decades and has been identified as a potential indicator of LDD. Mapping bush encroachment over large areas can only effectively be done using Earth Observation (EO) data and techniques. However, the accurate assessment of large-scale savannah degradation through bush encroachment with satellite imagery remains a formidable task, because vegetation variability in response to highly variable rainfall patterns might obscure the underlying degradation processes in the satellite data. Here, we present a methodological framework for the monitoring of bush-encroachment-related land degradation in a savannah environment in the Northwest Province of South Africa. We utilise multi-temporal Landsat TM and ETM+ (SLC-on) data from 1989 until 2009, mostly from the dry season, and ancillary data in a GIS environment. We then use the machine learning classification approach of random forests to identify the extent of encroachment over the 20-year period. The results show that in the study area, bush encroachment is as alarming as permanent vegetation loss. The classification of the year 2009 is validated, yielding low commission and omission errors and high kappa-statistic values for the grass and woody vegetation classes. Our approach is a step towards a rigorous and effective savannah degradation assessment.
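
    The random-forest idea used above — bagged, randomized weak classifiers combined by majority vote — can be sketched in miniature with decision stumps on synthetic two-band "spectral" features (the class means, spreads, and stump construction below are all assumptions for demonstration, not the paper's Landsat pipeline):

```python
import numpy as np

def fit_stump(X, y, rng):
    """One randomized stump: bootstrap resample, random feature, median split,
    majority class on each side of the split."""
    idx = rng.integers(len(X), size=len(X))   # bootstrap resample
    Xb, yb = X[idx], y[idx]
    f = rng.integers(X.shape[1])              # random feature choice
    t = np.median(Xb[:, f])
    left, right = yb[Xb[:, f] <= t], yb[Xb[:, f] > t]
    vote = lambda s: int(np.bincount(s, minlength=2).argmax()) if s.size else 0
    return f, t, vote(left), vote(right)

def forest_predict(stumps, X):
    """Majority vote over the ensemble of stumps."""
    votes = np.array([np.where(X[:, f] <= t, l, r) for f, t, l, r in stumps])
    return (votes.mean(axis=0) > 0.5).astype(int)

rng = np.random.default_rng(0)
n = 400
# Synthetic two-band reflectance features: grassland vs woody-vegetation pixels.
grass = rng.normal([0.2, 0.6], 0.08, size=(n, 2))
woody = rng.normal([0.5, 0.3], 0.08, size=(n, 2))
X = np.vstack([grass, woody])
y = np.array([0] * n + [1] * n)

stumps = [fit_stump(X, y, rng) for _ in range(101)]
acc = (forest_predict(stumps, X) == y).mean()
```

    Real random forests grow full trees with per-node feature subsampling rather than single stumps, but the bagging-plus-voting structure is the same.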

  20. Algorithm 896: LSA: Algorithms for Large-Scale Optimization

    Czech Academy of Sciences Publication Activity Database

    Lukšan, Ladislav; Matonoha, Ctirad; Vlček, Jan

    2009-01-01

    Roč. 36, č. 3 (2009), 16-1-16-29 ISSN 0098-3500 R&D Projects: GA AV ČR IAA1030405; GA ČR GP201/06/P397 Institutional research plan: CEZ:AV0Z10300504 Keywords: algorithms * design * large-scale optimization * large-scale nonsmooth optimization * large-scale nonlinear least squares * large-scale nonlinear minimax * large-scale systems of nonlinear equations * sparse problems * partially separable problems * limited-memory methods * discrete Newton methods * quasi-Newton methods * primal interior-point methods Subject RIV: BB - Applied Statistics, Operational Research Impact factor: 1.904, year: 2009

  1. Up-scaling of multi-variable flood loss models from objects to land use units at the meso-scale

    Science.gov (United States)

    Kreibich, Heidi; Schröter, Kai; Merz, Bruno

    2016-05-01

    Flood risk management increasingly relies on risk analyses, including loss modelling. Most of the flood loss models usually applied in standard practice have in common that complex damaging processes are described by simple approaches like stage-damage functions. Novel multi-variable models significantly improve loss estimation on the micro-scale and may also be advantageous for large-scale applications. However, more input parameters also introduce additional uncertainty, even more so in up-scaling procedures for meso-scale applications, where the parameters need to be estimated on a regional, area-wide basis. To gain more knowledge about the challenges associated with the up-scaling of multi-variable flood loss models, the following approach is applied: single- and multi-variable micro-scale flood loss models are up-scaled and applied on the meso-scale, namely on the basis of ATKIS land-use units. Application and validation are undertaken in 19 municipalities, which were affected during the 2002 flood by the River Mulde in Saxony, Germany, by comparison to official loss data provided by the Saxon Relief Bank (SAB). In the meso-scale, case-study-based model validation, most multi-variable models show smaller errors than the uni-variable stage-damage functions. The results show the suitability of the up-scaling approach and, in accordance with micro-scale validation studies, that multi-variable models are an improvement in flood loss modelling on the meso-scale as well. However, uncertainties remain high, stressing the importance of uncertainty quantification. Thus, the development of probabilistic loss models, like BT-FLEMO used in this study, which inherently provide uncertainty information, is the way forward.
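
    The "simple approaches like stage-damage functions" that the abstract contrasts against amount to a one-variable lookup of loss ratio against water depth; a sketch with made-up support points (not SAB or FLEMO data):

```python
import numpy as np

# Illustrative stage-damage function for residential buildings: loss ratio
# (0 = no damage, 1 = total loss) as a function of water depth in metres.
# These support points are invented for demonstration.
DEPTHS = np.array([0.0, 0.5, 1.0, 2.0, 3.0, 5.0])       # m above ground floor
LOSS_RATIO = np.array([0.0, 0.10, 0.25, 0.45, 0.60, 0.80])

def stage_damage(depth_m):
    """Linear interpolation of the loss ratio; np.interp clamps to the
    end values outside the supported depth range."""
    return np.interp(depth_m, DEPTHS, LOSS_RATIO)

def building_loss(depth_m, building_value):
    return stage_damage(depth_m) * building_value
```

    Multi-variable models replace this single depth predictor with additional inputs such as building type, contamination, or precaution, which is where both the improved accuracy and the extra up-scaling uncertainty discussed above come from.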

  2. Multi-Scale Pixel-Based Image Fusion Using Multivariate Empirical Mode Decomposition

    Directory of Open Access Journals (Sweden)

    Naveed ur Rehman

    2015-05-01

    A novel scheme to perform the fusion of multiple images using the multivariate empirical mode decomposition (MEMD) algorithm is proposed. Standard multi-scale fusion techniques make a priori assumptions regarding input data, whereas standard univariate empirical mode decomposition (EMD)-based fusion techniques suffer from inherent mode-mixing and mode-misalignment issues, characterized respectively by either a single intrinsic mode function (IMF) containing multiple scales or the same-indexed IMFs corresponding to multiple input images carrying different frequency information. We show that MEMD overcomes these problems by being fully data adaptive and by aligning common frequency scales from multiple channels, thus enabling their comparison at a pixel level and subsequent fusion at multiple data scales. We then demonstrate the potential of the proposed scheme on a large dataset of real-world multi-exposure and multi-focus images and compare the results against those obtained from standard fusion algorithms, including principal component analysis (PCA), the discrete wavelet transform (DWT) and the non-subsampled contourlet transform (NCT). A variety of image fusion quality measures are employed for the objective evaluation of the proposed method. We also report the results of a hypothesis-testing approach on our large image dataset to identify statistically significant performance differences.

  3. Multi-scale analysis to uncover habitat use of red-crowned cranes: Implications for conservation

    Directory of Open Access Journals (Sweden)

    Chunyue LIU, Hongxing JIANG, Shuqing ZHANG, Chunrong LI,Yunqiu HOU, Fawen QIAN

    2013-10-01

    A multi-scale approach is essential to assess the factors that limit avian habitat use. Numerous studies have examined habitat use by the red-crowned crane, but integrated multi-scale habitat use information is lacking. We evaluated the effects of several habitat variables quantified across many spatial scales on crane use and abundance in two periods (2000 and 2009) at Yancheng National Nature Reserve, China. The natural wetlands decreased in area by 30,601 ha (-6.9%) from 2000 to 2009, predominantly as a result of conversion to aquaculture ponds and farmland, and the remaining wetlands were degrading due to expansion of the exotic smooth cordgrass. The cranes focused on either larger patches or those in close proximity to each other in both years, but occupied patches had smaller size, less proximity and more regular boundaries in 2009. At landscape scales, the area percentage of common seepweed, reed ponds and paddy fields had a greater positive impact on crane presence than the area percentage of aquaculture ponds. The cranes were more abundant in patches that had a greater percent area of common seepweed and reed ponds, while the percent area of paddy fields was inversely related to crane abundance in 2009 due to changing agricultural practices. In 2009, cranes tended to use less fragmented plots in natural wetlands and more fragmented plots in anthropogenic paddy fields, which was largely associated with the huge loss and degradation of natural habitats between the two years. Management should focus on restoration of large patches of natural wetlands, and on formation of a relatively stable area of large paddy fields and reed ponds to mitigate the loss of natural wetlands [Current Zoology 59(5): 604-617, 2013].

  4. Classifying epileptic EEG signals with delay permutation entropy and Multi-Scale K-means.

    Science.gov (United States)

    Zhu, Guohun; Li, Yan; Wen, Peng Paul; Wang, Shuaifang

    2015-01-01

    Most epileptic EEG classification algorithms are supervised and require large training datasets, which hinders their use in real-time applications. This chapter proposes an unsupervised Multi-Scale K-means (MSK-means) algorithm to distinguish epileptic EEG signals and identify epileptic zones. The random initialization of the K-means algorithm can lead to wrong clusters. Based on the characteristics of EEGs, the MSK-means algorithm initializes the coarse-scale centroid of a cluster with a suitable scale factor. In this chapter, the MSK-means algorithm is proved theoretically superior to the K-means algorithm in efficiency. In addition, three classifiers, the K-means, MSK-means and support vector machine (SVM), are used to identify seizures and localize the epileptogenic zone using delay permutation entropy features. The experimental results demonstrate that identifying seizures with the MSK-means algorithm and delay permutation entropy achieves 4.7% higher accuracy than K-means, and 0.7% higher accuracy than the SVM.
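
    The delay permutation entropy feature used above can be sketched as counting the ordinal patterns of delay-embedded samples and taking the normalized Shannon entropy of their distribution; a regular signal (few patterns) scores low, while noise (all patterns equally likely) scores near one. Embedding order and delay below are typical choices, not necessarily the chapter's.

```python
import numpy as np
from math import factorial, log

def delay_permutation_entropy(x, order=3, delay=1):
    """Normalized permutation entropy of ordinal patterns in a 1-D signal."""
    n = len(x) - (order - 1) * delay
    patterns = {}
    for i in range(n):
        window = x[i : i + order * delay : delay]
        key = tuple(np.argsort(window))        # ordinal pattern of the window
        patterns[key] = patterns.get(key, 0) + 1
    probs = np.array(list(patterns.values())) / n
    H = -np.sum(probs * np.log(probs))
    return H / log(factorial(order))           # normalize to [0, 1]

rng = np.random.default_rng(0)
t = np.arange(2000)
sine = np.sin(2 * np.pi * t / 50)   # regular signal: few ordinal patterns
noise = rng.normal(size=2000)       # irregular signal: patterns near-uniform
```

    Feature vectors of such entropies, computed per channel and per window, are what the K-means, MSK-means and SVM classifiers above operate on.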

  5. Spatiotemporal property and predictability of large-scale human mobility

    Science.gov (United States)

    Zhang, Hai-Tao; Zhu, Tao; Fu, Dongfei; Xu, Bowen; Han, Xiao-Pu; Chen, Duxin

    2018-04-01

    Spatiotemporal characteristics of human mobility emerging from complexity on individual scale have been extensively studied due to the application potential on human behavior prediction and recommendation, and control of epidemic spreading. We collect and investigate a comprehensive data set of human activities on large geographical scales, including both websites browse and mobile towers visit. Numerical results show that the degree of activity decays as a power law, indicating that human behaviors are reminiscent of scale-free random walks known as Lévy flight. More significantly, this study suggests that human activities on large geographical scales have specific non-Markovian characteristics, such as a two-segment power-law distribution of dwelling time and a high possibility for prediction. Furthermore, a scale-free featured mobility model with two essential ingredients, i.e., preferential return and exploration, and a Gaussian distribution assumption on the exploration tendency parameter is proposed, which outperforms existing human mobility models under scenarios of large geographical scales.

  6. Hierarchical Learning of Tree Classifiers for Large-Scale Plant Species Identification.

    Science.gov (United States)

    Fan, Jianping; Zhou, Ning; Peng, Jinye; Gao, Ling

    2015-11-01

    In this paper, a hierarchical multi-task structural learning algorithm is developed to support large-scale plant species identification, where a visual tree is constructed for organizing large numbers of plant species in a coarse-to-fine fashion and determining the inter-related learning tasks automatically. For a given parent node on the visual tree, it contains a set of sibling coarse-grained categories of plant species or sibling fine-grained plant species, and a multi-task structural learning algorithm is developed to train their inter-related classifiers jointly for enhancing their discrimination power. The inter-level relationship constraint, e.g., a plant image must first be assigned to a parent node (high-level non-leaf node) correctly if it can further be assigned to the most relevant child node (low-level non-leaf node or leaf node) on the visual tree, is formally defined and leveraged to learn more discriminative tree classifiers over the visual tree. Our experimental results have demonstrated the effectiveness of our hierarchical multi-task structural learning algorithm on training more discriminative tree classifiers for large-scale plant species identification.

  7. Signatures of non-universal large scales in conditional structure functions from various turbulent flows

    International Nuclear Information System (INIS)

    Blum, Daniel B; Voth, Greg A; Bewley, Gregory P; Bodenschatz, Eberhard; Gibert, Mathieu; Xu Haitao; Gylfason, Ármann; Mydlarski, Laurent; Yeung, P K

    2011-01-01

    We present a systematic comparison of conditional structure functions in nine turbulent flows. The flows studied include forced isotropic turbulence simulated on a periodic domain, passive grid wind tunnel turbulence in air and in pressurized SF6, active grid wind tunnel turbulence (in both synchronous and random driving modes), the flow between counter-rotating discs, oscillating grid turbulence and the flow in the Lagrangian exploration module (in both constant and random driving modes). We compare longitudinal Eulerian second-order structure functions conditioned on the instantaneous large-scale velocity in each flow to assess the ways in which the large scales affect the small scales in a variety of turbulent flows. Structure functions are shown to have larger values when the large-scale velocity significantly deviates from the mean in most flows, suggesting that dependence on the large scales is typical in many turbulent flows. The effects of the large-scale velocity on the structure functions can be quite strong, with the structure function varying by up to a factor of 2 when the large-scale velocity deviates from the mean by ±2 standard deviations. In several flows, the effects of the large-scale velocity are similar at all the length scales we measured, indicating that the large-scale effects are scale independent. In a few flows, the effects of the large-scale velocity are larger on the smallest length scales. (paper)
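
    The quantity being compared above — a second-order structure function conditioned on the local large-scale velocity — can be sketched on a synthetic record (a Brownian-like signal stands in for a velocity series here, and the coarse-graining window is an assumption; for such a signal the unconditional D2(r) grows linearly in r):

```python
import numpy as np

def structure_function(u, r):
    """Second-order structure function D2(r) = <(u(x + r) - u(x))^2>."""
    du = u[r:] - u[:-r]
    return np.mean(du**2)

def conditional_structure_function(u, r, smooth=256):
    """D2(r) conditioned on whether the local large-scale velocity
    (a running mean over `smooth` samples) is above or below its median."""
    kernel = np.ones(smooth) / smooth
    large = np.convolve(u, kernel, mode="same")  # coarse-grained velocity
    du2 = (u[r:] - u[:-r]) ** 2
    cond = large[:-r] > np.median(large)
    return du2[cond].mean(), du2[~cond].mean()

rng = np.random.default_rng(0)
u = np.cumsum(rng.normal(size=200000))  # Brownian-like record: D2(r) ~ r
```

    In the experiments of the record above, the conditioning variable is the instantaneous large-scale velocity of a real flow, and the interesting result is precisely that the two conditional branches differ.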

  8. Integrated multi-scale modelling and simulation of nuclear fuels

    International Nuclear Information System (INIS)

    Valot, C.; Bertolus, M.; Masson, R.; Malerba, L.; Rachid, J.; Besmann, T.; Phillpot, S.; Stan, M.

    2015-01-01

    This chapter aims at discussing the objectives, implementation and integration of multi-scale modelling approaches applied to nuclear fuel materials. We will first show why the multi-scale modelling approach is required, due to the nature of the materials and by the phenomena involved under irradiation. We will then present the multiple facets of multi-scale modelling approach, while giving some recommendations with regard to its application. We will also show that multi-scale modelling must be coupled with appropriate multi-scale experiments and characterisation. Finally, we will demonstrate how multi-scale modelling can contribute to solving technology issues. (authors)

  9. Use of Large-Scale Multi-Configuration EMI Measurements to Characterize Subsurface Structures of the Vadose Zone.

    Science.gov (United States)

    Huisman, J. A.; Brogi, C.; Pätzold, S.; Weihermueller, L.; von Hebel, C.; Van Der Kruk, J.; Vereecken, H.

    2017-12-01

    Subsurface structures of the vadose zone can play a key role in crop yield potential, especially during water stress periods. Geophysical techniques like electromagnetic induction (EMI) can provide information about dominant shallow subsurface features. However, previous studies with EMI have typically not reached beyond the field scale. We used high-resolution large-scale multi-configuration EMI measurements to characterize patterns of soil structural organization (layering and texture) and their impact on crop productivity at the km2 scale. We collected EMI data on an agricultural area of 1 km2 (102 ha) near Selhausen (NRW, Germany). The area consists of 51 agricultural fields cropped in rotation. Therefore, measurements were collected between April and December 2016, preferably within a few days after the harvest. EMI data were automatically filtered, temperature corrected, and interpolated onto a common grid of 1 m resolution. Inspecting the ECa maps, we identified three main sub-areas with different subsurface heterogeneity. We also identified small-scale geomorphological structures as well as anthropogenic activities such as soil management and buried drainage networks. To identify areas with similar subsurface structures, we applied image classification techniques. We fused ECa maps obtained with different coil distances in a multiband image and applied supervised and unsupervised classification methodologies. Both showed good results in reconstructing observed patterns in plant productivity and the subsurface structures associated with them. However, the supervised methodology proved more efficient in classifying the whole study area. In a second step, we selected one hundred locations within the study area and obtained a soil profile description with type, depth, and thickness of the soil horizons. Using this ground truth data, it was possible to assign a typical soil profile to each of the main classes obtained from the classification. 
The proposed methodology was

  10. Up-scaling of multi-variable flood loss models from objects to land use units at the meso-scale

    Directory of Open Access Journals (Sweden)

    H. Kreibich

    2016-05-01

    Full Text Available Flood risk management increasingly relies on risk analyses, including loss modelling. Most of the flood loss models usually applied in standard practice have in common that complex damaging processes are described by simple approaches like stage-damage functions. Novel multi-variable models significantly improve loss estimation on the micro-scale and may also be advantageous for large-scale applications. However, more input parameters also introduce additional uncertainty, all the more in up-scaling procedures for meso-scale applications, where the parameters need to be estimated on a regional area-wide basis. To gain more knowledge about challenges associated with the up-scaling of multi-variable flood loss models, the following approach is applied: single- and multi-variable micro-scale flood loss models are up-scaled and applied on the meso-scale, namely on the basis of ATKIS land-use units. Application and validation are undertaken in 19 municipalities, which were affected during the 2002 flood by the River Mulde in Saxony, Germany, by comparison to official loss data provided by the Saxon Relief Bank (SAB). In the meso-scale, case-study-based model validation, most multi-variable models show smaller errors than the uni-variable stage-damage functions. The results show the suitability of the up-scaling approach, and, in accordance with micro-scale validation studies, that multi-variable models are an improvement in flood loss modelling also on the meso-scale. However, uncertainties remain high, stressing the importance of uncertainty quantification. Thus, probabilistic loss models like BT-FLEMO, used in this study, which inherently provide uncertainty information, are the way forward.
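    The contrast between a uni-variable stage-damage function and a multi-variable model can be sketched as follows. The depth/loss pairs and the adjustment factors are synthetic illustrations, not calibrated FLEMO values:

```python
import numpy as np

# Uni-variable stage-damage function: relative building loss as a
# piecewise-linear function of water depth (synthetic node values).
depth_pts = np.array([0.0, 0.5, 1.0, 2.0, 3.0])      # water depth [m]
loss_pts  = np.array([0.0, 0.08, 0.15, 0.30, 0.45])  # loss ratio [-]

def stage_damage(depth):
    return np.interp(depth, depth_pts, loss_pts)

# A minimal multi-variable extension: scale the depth-based estimate by
# contamination and precaution indicators (hypothetical factor values).
def multi_variable(depth, contamination, precaution):
    factor = (1.2 if contamination else 1.0) * (0.8 if precaution else 1.0)
    return np.clip(stage_damage(depth) * factor, 0.0, 1.0)
```

    Each extra predictor refines the estimate but, as the abstract notes, must itself be estimated area-wide when the model is up-scaled, which is where the additional uncertainty enters.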

  11. Multi-scale and multi-orientation medical image analysis

    NARCIS (Netherlands)

    Haar Romenij, ter B.M.; Deserno, T.M.

    2011-01-01

    Inspired by multi-scale and multi-orientation mechanisms recognized in the first stages of our visual system, this chapter gives a tutorial overview of the basic principles. Images are discrete, measured data. The optimal aperture for an observation with as few artefacts as possible is derived

  12. Novel probabilistic and distributed algorithms for guidance, control, and nonlinear estimation of large-scale multi-agent systems

    Science.gov (United States)

    Bandyopadhyay, Saptarshi

    Multi-agent systems are widely used for constructing a desired formation shape, exploring an area, surveillance, coverage, and other cooperative tasks. This dissertation introduces novel algorithms in the three main areas of shape formation, distributed estimation, and attitude control of large-scale multi-agent systems. In the first part of this dissertation, we address the problem of shape formation for thousands to millions of agents. Here, we present two novel algorithms for guiding a large-scale swarm of robotic systems into a desired formation shape in a distributed and scalable manner. These probabilistic swarm guidance algorithms adopt an Eulerian framework, where the physical space is partitioned into bins and the swarm's density distribution over each bin is controlled using tunable Markov chains. In the first algorithm - Probabilistic Swarm Guidance using Inhomogeneous Markov Chains (PSG-IMC) - each agent determines its bin transition probabilities using a time-inhomogeneous Markov chain that is constructed in real-time using feedback from the current swarm distribution. This PSG-IMC algorithm minimizes the expected cost of the transitions required to achieve and maintain the desired formation shape, even when agents are added to or removed from the swarm. The algorithm scales well with a large number of agents and complex formation shapes, and can also be adapted for area exploration applications. In the second algorithm - Probabilistic Swarm Guidance using Optimal Transport (PSG-OT) - each agent determines its bin transition probabilities by solving an optimal transport problem, which is recast as a linear program. In the presence of perfect feedback of the current swarm distribution, this algorithm minimizes the given cost function, guarantees faster convergence, reduces the number of transitions for achieving the desired formation, and is robust to disturbances or damages to the formation. 
We demonstrate the effectiveness of these two proposed swarm
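    The Eulerian bin-transition idea behind PSG-IMC can be sketched with a Markov chain whose stationary distribution equals the desired swarm shape. The bin count, target distribution and Metropolis-Hastings construction below are illustrative assumptions, not the paper's optimized time-inhomogeneous chains:

```python
import numpy as np

rng = np.random.default_rng(0)

n_bins, n_agents = 5, 2000
target = np.array([0.1, 0.2, 0.4, 0.2, 0.1])   # desired swarm density

# Metropolis-Hastings chain with uniform proposals: its stationary
# distribution equals `target`, so independent agents following it
# collectively converge to the desired shape.
M = np.zeros((n_bins, n_bins))
for i in range(n_bins):
    for j in range(n_bins):
        if i != j:
            M[i, j] = min(1.0, target[j] / target[i]) / n_bins
    M[i, i] = 1.0 - M[i].sum()

bins = rng.integers(0, n_bins, size=n_agents)   # arbitrary initial placement
for _ in range(50):
    # vectorized inverse-CDF sampling: each agent draws its next bin
    # from the row of M for its current bin
    bins = (rng.random((n_agents, 1)) < np.cumsum(M[bins], axis=1)).argmax(axis=1)

density = np.bincount(bins, minlength=n_bins) / n_agents
```

    After a few dozen steps the empirical density matches the target up to binomial sampling noise, with no agent needing more than its own bin and the shared chain.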

  13. Modified truncated randomized singular value decomposition (MTRSVD) algorithms for large scale discrete ill-posed problems with general-form regularization

    Science.gov (United States)

    Jia, Zhongxiao; Yang, Yanfei

    2018-05-01

    In this paper, we propose new randomization based algorithms for large scale linear discrete ill-posed problems with general-form regularization: minimize ||Lx||_2 subject to ||Ax - b||_2 = min, where L is a regularization matrix. Our algorithms are inspired by the modified truncated singular value decomposition (MTSVD) method, which suits only for small to medium scale problems, and randomized SVD (RSVD) algorithms that generate good low rank approximations to A. We use rank-k truncated randomized SVD (TRSVD) approximations to A by truncating the rank-(k+q) RSVD approximations to A, where q is an oversampling parameter. The resulting algorithms are called modified TRSVD (MTRSVD) methods. At every step, we use the LSQR algorithm to solve the resulting inner least squares problem, which is proved to become better conditioned as k increases so that LSQR converges faster. We present sharp bounds for the approximation accuracy of the RSVDs and TRSVDs for severely, moderately and mildly ill-posed problems, and substantially improve a known basic bound for TRSVD approximations. We prove how to choose the stopping tolerance for LSQR in order to guarantee that the computed and exact best regularized solutions have the same accuracy. Numerical experiments illustrate that the best regularized solutions by MTRSVD are as accurate as the ones by the truncated generalized singular value decomposition (TGSVD) algorithm, and at least as accurate as those by some existing truncated randomized generalized singular value decomposition (TRGSVD) algorithms. This work was supported in part by the National Science Foundation of China (Nos. 11771249 and 11371219).
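    The TRSVD building block (rank-(k+q) randomized SVD, then truncation to rank k) can be sketched in a few lines; this follows the standard Halko-Martinsson-Tropp recipe and is not the paper's full MTRSVD method:

```python
import numpy as np

def trsvd(A, k, q=10, seed=0):
    """Rank-k truncated randomized SVD: build a rank-(k+q) RSVD
    approximation from a Gaussian sketch, then truncate to rank k
    (q is the oversampling parameter)."""
    rng = np.random.default_rng(seed)
    m, n = A.shape
    Omega = rng.standard_normal((n, k + q))   # Gaussian test matrix
    Q, _ = np.linalg.qr(A @ Omega)            # orthonormal basis for range(A @ Omega)
    B = Q.T @ A                               # small (k+q) x n projected problem
    Ub, s, Vt = np.linalg.svd(B, full_matrices=False)
    return Q @ Ub[:, :k], s[:k], Vt[:k]       # truncated factors
```

    For a matrix that is numerically low-rank, the truncated factors reproduce it essentially to machine precision, which is the property the MTRSVD methods exploit.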

  14. Multi-scale modeling for sustainable chemical production.

    Science.gov (United States)

    Zhuang, Kai; Bakshi, Bhavik R; Herrgård, Markus J

    2013-09-01

    With recent advances in metabolic engineering, it is now technically possible to produce a wide portfolio of existing petrochemical products from biomass feedstock. In recent years, a number of modeling approaches have been developed to support the engineering and decision-making processes associated with the development and implementation of a sustainable biochemical industry. The temporal and spatial scales of modeling approaches for sustainable chemical production vary greatly, ranging from metabolic models that aid the design of fermentative microbial strains to material and monetary flow models that explore the ecological impacts of all economic activities. Research efforts that attempt to connect the models at different scales have been limited. Here, we review a number of existing modeling approaches and their applications at the scales of metabolism, bioreactor, overall process, chemical industry, economy, and ecosystem. In addition, we propose a multi-scale approach for integrating the existing models into a cohesive framework. The major benefit of this proposed framework is that the design and decision-making at each scale can be informed, guided, and constrained by simulations and predictions at every other scale. In addition, the development of this multi-scale framework would promote cohesive collaborations across multiple traditionally disconnected modeling disciplines to achieve sustainable chemical production. Copyright © 2013 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  15. Decision aid on breast cancer screening reduces attendance rate: results of a large-scale, randomized, controlled study by the DECIDEO group

    Science.gov (United States)

    Bourmaud, Aurelie; Soler-Michel, Patricia; Oriol, Mathieu; Regnier, Véronique; Tinquaut, Fabien; Nourissat, Alice; Bremond, Alain; Moumjid, Nora; Chauvin, Franck

    2016-01-01

    Controversies regarding the benefits of breast cancer screening programs have led to the promotion of new strategies taking into account individual preferences, such as decision aids. The aim of this study was to assess the impact of a decision aid leaflet on the participation of women invited to participate in a national breast cancer screening program. This was a randomized, multicentre, controlled trial. Women aged 50 to 74 years were randomly assigned to receive either a decision aid or the usual invitation letter. The primary outcome was the participation rate 12 months after the invitation. 16 000 women were randomized and 15 844 were included in the modified intention-to-treat analysis. The participation rate in the intervention group was 40.25% (3174/7885 women) compared with 42.13% (3353/7959) in the control group (p = 0.02). Previous attendance for screening (RR = 6.24; 95% CI: 5.75-6.77; p < 0.0001) and medium household income (RR = 1.05; 95% CI: 1.01-1.09; p = 0.0074) were independently associated with attendance for screening. This large-scale study demonstrates that the decision aid reduced the participation rate. The decision aid appears to have activated the decision-making process of women toward non-attendance at screening. These results show the importance of promoting informed patient choices, especially when those choices cannot be anticipated. PMID:26883201
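    The reported group difference can be checked against the published counts with a standard two-proportion z-test (a back-of-the-envelope normal-approximation check, not the trial's prespecified analysis):

```python
from math import sqrt, erfc

# Attendance counts from the abstract
x1, n1 = 3174, 7885   # decision-aid arm
x2, n2 = 3353, 7959   # usual-invitation arm

p1, p2 = x1 / n1, x2 / n2
p_pool = (x1 + x2) / (n1 + n2)                          # pooled proportion
se = sqrt(p_pool * (1 - p_pool) * (1 / n1 + 1 / n2))    # standard error
z = (p1 - p2) / se
p_value = erfc(abs(z) / sqrt(2))                        # two-sided normal tail
```

    The result (z about -2.4, p about 0.017) is consistent with the p = 0.02 quoted in the abstract.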

  16. Cluster galaxy dynamics and the effects of large-scale environment

    Science.gov (United States)

    White, Martin; Cohn, J. D.; Smit, Renske

    2010-11-01

    Advances in observational capabilities have ushered in a new era of multi-wavelength, multi-physics probes of galaxy clusters and ambitious surveys are compiling large samples of cluster candidates selected in different ways. We use a high-resolution N-body simulation to study how the influence of large-scale structure in and around clusters causes correlated signals in different physical probes and discuss some implications this has for multi-physics probes of clusters (e.g. richness, lensing, Compton distortion and velocity dispersion). We pay particular attention to velocity dispersions, matching galaxies to subhaloes which are explicitly tracked in the simulation. We find that not only do haloes persist as subhaloes when they fall into a larger host, but groups of subhaloes retain their identity for long periods within larger host haloes. The highly anisotropic nature of infall into massive clusters, and their triaxiality, translates into an anisotropic velocity ellipsoid: line-of-sight galaxy velocity dispersions for any individual halo show large variance depending on viewing angle. The orientation of the velocity ellipsoid is correlated with the large-scale structure, and thus velocity outliers correlate with outliers caused by projection in other probes. We quantify this orientation uncertainty and give illustrative examples. Such a large variance suggests that velocity dispersion estimators will work better in an ensemble sense than for any individual cluster, which may inform strategies for obtaining redshifts of cluster members. We similarly find that the ability of substructure indicators to find kinematic substructures is highly viewing angle dependent. While groups of subhaloes which merge with a larger host halo can retain their identity for many Gyr, they are only sporadically picked up by substructure indicators. We discuss the effects of correlated scatter on scaling relations estimated through stacking, both analytically and in the simulations
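    The viewing-angle dependence of the line-of-sight dispersion can be made concrete with a quadratic form: for an anisotropic velocity ellipsoid with covariance C, the dispersion seen along unit vector n is sqrt(n^T C n). The axis values below are illustrative, not measured from the simulation:

```python
import numpy as np

# Velocity covariance along the ellipsoid's principal axes, in (km/s)^2
C = np.diag([900.0, 400.0, 250.0])

def sigma_los(n):
    """Line-of-sight velocity dispersion for viewing direction n."""
    n = np.asarray(n, dtype=float)
    n = n / np.linalg.norm(n)
    return np.sqrt(n @ C @ n)

s_major = sigma_los([1, 0, 0])   # looking along the major axis: 30 km/s
s_minor = sigma_los([0, 0, 1])   # looking along the minor axis: ~15.8 km/s
```

    An almost factor-of-two spread between viewing directions of the same halo is exactly the variance the abstract warns about for single-cluster dispersion estimates.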

  17. A multi-scale, multi-disciplinary approach for assessing the technological, economic and environmental performance of bio-based chemicals.

    Science.gov (United States)

    Herrgård, Markus; Sukumara, Sumesh; Campodonico, Miguel; Zhuang, Kai

    2015-12-01

    In recent years, bio-based chemicals have gained interest as a renewable alternative to petrochemicals. However, there is a significant need to assess the technological, biological, economic and environmental feasibility of bio-based chemicals, particularly during the early research phase. Recently, the Multi-scale framework for Sustainable Industrial Chemicals (MuSIC) was introduced to address this issue by integrating modelling approaches at different scales ranging from cellular to ecological scales. This framework can be further extended by incorporating modelling of the petrochemical value chain and the de novo prediction of metabolic pathways connecting existing host metabolism to desirable chemical products. This multi-scale, multi-disciplinary framework for quantitative assessment of bio-based chemicals will play a vital role in supporting engineering, strategy and policy decisions as we progress towards a sustainable chemical industry. © 2015 Authors; published by Portland Press Limited.

  18. Tuneable resolution as a systems biology approach for multi-scale, multi-compartment computational models.

    Science.gov (United States)

    Kirschner, Denise E; Hunt, C Anthony; Marino, Simeone; Fallahi-Sichani, Mohammad; Linderman, Jennifer J

    2014-01-01

    The use of multi-scale mathematical and computational models to study complex biological processes is becoming increasingly productive. Multi-scale models span a range of spatial and/or temporal scales and can encompass multi-compartment (e.g., multi-organ) models. Modeling advances are enabling virtual experiments to explore and answer questions that are problematic to address in the wet-lab. Wet-lab experimental technologies now allow scientists to observe, measure, record, and analyze experiments focusing on different system aspects at a variety of biological scales. We need the technical ability to mirror that same flexibility in virtual experiments using multi-scale models. Here we present a new approach, tuneable resolution, which can begin providing that flexibility. Tuneable resolution involves fine- or coarse-graining existing multi-scale models at the user's discretion, allowing adjustment of the level of resolution specific to a question, an experiment, or a scale of interest. Tuneable resolution expands options for revising and validating mechanistic multi-scale models, can extend the longevity of multi-scale models, and may increase computational efficiency. The tuneable resolution approach can be applied to many model types, including differential equation, agent-based, and hybrid models. We demonstrate our tuneable resolution ideas with examples relevant to infectious disease modeling, illustrating key principles at work. © 2014 The Authors. WIREs Systems Biology and Medicine published by Wiley Periodicals, Inc.

  19. Modeling and control of a large nuclear reactor. A three-time-scale approach

    Energy Technology Data Exchange (ETDEWEB)

    Shimjith, S.R. [Indian Institute of Technology Bombay, Mumbai (India); Bhabha Atomic Research Centre, Mumbai (India); Tiwari, A.P. [Bhabha Atomic Research Centre, Mumbai (India); Bandyopadhyay, B. [Indian Institute of Technology Bombay, Mumbai (India). IDP in Systems and Control Engineering

    2013-07-01

    This monograph presents recent research on modeling and control of a large nuclear reactor using a three-time-scale approach, written by leading experts in the field. Control analysis and design of large nuclear reactors requires a suitable mathematical model representing the steady state and dynamic behavior of the reactor with reasonable accuracy. This task is, however, quite challenging because of several complex dynamic phenomena existing in a reactor. Quite often, the models developed would be of prohibitively large order, non-linear and of complex structure not readily amenable for control studies. Moreover, the existence of simultaneously occurring dynamic variations at different speeds makes the mathematical model susceptible to numerical ill-conditioning, inhibiting direct application of standard control techniques. This monograph introduces a technique for mathematical modeling of large nuclear reactors in the framework of multi-point kinetics, to obtain a comparatively smaller order model in standard state space form thus overcoming these difficulties. It further brings in innovative methods for controller design for systems exhibiting multi-time-scale property, with emphasis on three-time-scale systems.

  20. Inflation in random landscapes with two energy scales

    Science.gov (United States)

    Blanco-Pillado, Jose J.; Vilenkin, Alexander; Yamada, Masaki

    2018-02-01

    We investigate inflation in a multi-dimensional landscape with a hierarchy of energy scales, motivated by string theory, where the energy scale of Kahler moduli is usually assumed to be much lower than that of complex structure moduli and dilaton field. We argue that in such a landscape, the dynamics of slow-roll inflation is governed by the low-energy potential, while the initial conditions for inflation are determined by tunneling through high-energy barriers. We then use the scale factor cutoff measure to calculate the probability distribution for the number of inflationary e-folds and the amplitude of density fluctuations Q, assuming that the low-energy landscape is described by a random Gaussian potential with a correlation length much smaller than M_pl. We find that the distribution for Q has a unique shape and a preferred domain, which depends on the parameters of the low-energy landscape. We discuss some observational implications of this distribution and the constraints it imposes on the landscape parameters.
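    A 1-D sample of such a random Gaussian potential with a short correlation length can be drawn by spectral filtering of white noise. The grid size, box length and unit correlation length below are illustrative assumptions, not the paper's landscape parameters:

```python
import numpy as np

rng = np.random.default_rng(1)
N, L, xi = 32768, 1000.0, 1.0            # grid points, box size, correlation length
dx = L / N
k = 2.0 * np.pi * np.fft.rfftfreq(N, d=dx)

# Gaussian correlation C(r) = exp(-r^2 / (2 xi^2)) corresponds to a Gaussian
# power spectrum ~ exp(-k^2 xi^2 / 2); filter white noise with its square root.
amp = np.exp(-0.25 * (k * xi) ** 2)
noise = rng.standard_normal(len(k)) + 1j * rng.standard_normal(len(k))
V = np.fft.irfft(noise * amp, n=N)
V /= V.std()                             # normalize the potential to unit variance
```

    The empirical autocorrelation of V at lag xi is close to exp(-1/2), confirming the intended correlation length; slow-roll statistics can then be read off from such realizations.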

  1. Large-scale modeling on the fate and transport of polycyclic aromatic hydrocarbons (PAHs) in multimedia over China

    Science.gov (United States)

    Huang, Y.; Liu, M.; Wada, Y.; He, X.; Sun, X.

    2017-12-01

    In recent decades, with rapid economic growth, industrial development and urbanization, expanding pollution of polycyclic aromatic hydrocarbons (PAHs) has become a diversified and complicated phenomenon in China. However, sufficient monitoring of PAHs in multiple compartments and of the corresponding multi-interface migration processes is still limited, especially over large geographic areas. In this study, we couple the Multimedia Fate Model (MFM) to the Community Multi-Scale Air Quality (CMAQ) model in order to consider the fugacity and the transient contamination processes. This coupled dynamic contaminant model can evaluate the detailed local variations and mass fluxes of PAHs in different environmental media (e.g., air, surface film, soil, sediment, water and vegetation) across different spatial (county to country) and temporal (days to years) scales. This model has been applied to a large geographical domain of China at a 36 km by 36 km grid resolution. The model considers the response characteristics of typical environmental media to complex underlying surfaces. Results suggest that direct emission is the main input pathway of PAHs entering the atmosphere, while advection is the main outward flow of pollutants from the environment. In addition, both soil and sediment act as the main sink of PAHs and have the longest retention time. Importantly, the highest PAHs loadings are found in urbanized and densely populated regions of China, such as Yangtze River Delta and Pearl River Delta. This model can provide a good scientific basis towards a better understanding of the large-scale dynamics of environmental pollutants for land conservation and sustainable development. 
In a next step, the dynamic contaminant model will be integrated with the continental-scale hydrological and water resources model (i.e., Community Water Model, CWatM) to quantify a more accurate representation and feedbacks between the hydrological cycle and water quality at
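    The compartment mass-balance idea behind such fate models can be sketched with a two-box air-soil system; the rate constants and emission below are illustrative, not fitted PAH values:

```python
import numpy as np

# First-order exchange, degradation, and constant emission to air.
k_as, k_sa = 0.20, 0.02          # air->soil, soil->air exchange [1/day]
k_deg_a, k_deg_s = 0.10, 0.001   # degradation in air and soil [1/day]
E = 1.0                          # emission to air [mass/day]

# Linear system dM/dt = A @ M + b; steady state solves A @ M = -b.
A = np.array([[-(k_as + k_deg_a), k_sa],
              [k_as, -(k_sa + k_deg_s)]])
b = np.array([E, 0.0])
steady = np.linalg.solve(A, -b)  # steady-state masses: [air, soil]
```

    Because degradation in soil is slow, the soil box accumulates roughly an order of magnitude more mass than the air box at steady state, mirroring the abstract's finding that soil acts as a main sink with the longest retention time.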

  2. Why small-scale cannabis growers stay small: five mechanisms that prevent small-scale growers from going large scale.

    Science.gov (United States)

    Hammersvik, Eirik; Sandberg, Sveinung; Pedersen, Willy

    2012-11-01

    Over the past 15-20 years, domestic cultivation of cannabis has been established in a number of European countries. New techniques have made such cultivation easier; however, the bulk of growers remain small-scale. In this study, we explore the factors that prevent small-scale growers from increasing their production. The study is based on 1 year of ethnographic fieldwork and qualitative interviews conducted with 45 Norwegian cannabis growers, 10 of whom were growing on a large-scale and 35 on a small-scale. The study identifies five mechanisms that prevent small-scale indoor growers from going large-scale. First, large-scale operations involve a number of people, large sums of money, a high workload and a high risk of detection, and thus demand a higher level of organizational skills than for small growing operations. Second, financial assets are needed to start a large 'grow-site'. Housing rent, electricity, equipment and nutrients are expensive. Third, to be able to sell large quantities of cannabis, growers need access to an illegal distribution network and knowledge of how to act according to black market norms and structures. Fourth, large-scale operations require advanced horticultural skills to maximize yield and quality, which demands greater skills and knowledge than does small-scale cultivation. Fifth, small-scale growers are often embedded in the 'cannabis culture', which emphasizes anti-commercialism, anti-violence and ecological and community values. Hence, starting up large-scale production will imply having to renegotiate or abandon these values. Going from small- to large-scale cannabis production is a demanding task: ideologically, technically, economically and personally. The many obstacles that small-scale growers face and the lack of interest and motivation for going large-scale suggest that the risk of a 'slippery slope' from small-scale to large-scale growing is limited. Possible political implications of the findings are discussed. 
Copyright

  3. Use of large-scale multi-configuration EMI measurements to characterize heterogeneous subsurface structures and their impact on crop productivity

    Science.gov (United States)

    Brogi, Cosimo; Huisman, Johan Alexander; Kaufmann, Manuela Sarah; von Hebel, Christian; van der Kruk, Jan; Vereecken, Harry

    2017-04-01

    Soil subsurface structures can play a key role in crop performance, especially during water stress periods. Geophysical techniques like electromagnetic induction (EMI) have been shown to be capable of providing information about dominant shallow subsurface features. However, previous work with EMI has typically not reached beyond the field scale. The objective of this study is to use large-scale multi-configuration EMI to characterize patterns of soil structural organization (layering and texture) and the associated impact on crop vegetation at the km2 scale. For this, we carried out an intensive measurement campaign and collected high spatial resolution multi-configuration EMI data on an agricultural area of approx. 1 km2 (102 ha) near Selhausen (North Rhine-Westphalia, Germany) with a maximum depth of investigation of around 2.5 m. We measured using two EMI instruments simultaneously with a total of nine coil configurations. The instruments were placed inside polyethylene sleds that were pulled by an all-terrain-vehicle along parallel lines with a spacing of 2 to 2.5 m. The driving speed was between 5 and 7 km h-1 and we used a 0.2 s sampling interval to obtain an in-line resolution of approximately 0.3 m. The survey area consists of almost 50 different fields managed in different ways. The EMI measurements were collected between April and December 2016 within a few days after the harvest of each field. After data acquisition, EMI data were automatically filtered, temperature corrected, and interpolated onto a common grid. The resulting EMI maps allowed us to identify three main areas with different subsurface heterogeneities. The differences between these areas are likely related to the late quaternary geological history (Pleistocene and Holocene) of the area that resulted in spatially variable soil texture and layering, which has a strong impact on spatio-temporal soil water content variability. The high resolution surveys also allowed us to identify small scale

  4. Experimental Evaluation of Suitability of Selected Multi-Criteria Decision-Making Methods for Large-Scale Agent-Based Simulations.

    Science.gov (United States)

    Tučník, Petr; Bureš, Vladimír

    2016-01-01

    Multi-criteria decision-making (MCDM) can be formally implemented by various methods. This study compares suitability of four selected MCDM methods, namely WPM, TOPSIS, VIKOR, and PROMETHEE, for future applications in agent-based computational economic (ACE) models of larger scale (i.e., over 10 000 agents in one geographical region). These four MCDM methods were selected according to their appropriateness for computational processing in ACE applications. Tests of the selected methods were conducted on four hardware configurations. For each method, 100 tests were performed, which represented one testing iteration. With four testing iterations conducted on each hardware setting and separated testing of all configurations with the -server parameter de/activated, altogether, 12800 data points were collected and consequently analyzed. An illustrative decision-making scenario was used that allows the mutual comparison of all of the selected decision making methods. Our test results suggest that although all methods are convenient and can be used in practice, the VIKOR method accomplished the tests with the best results and thus can be recommended as the most suitable for simulations of large-scale agent-based models.
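    One of the compared methods, TOPSIS, is simple enough to sketch in full; this is the standard textbook formulation, not the authors' benchmarked implementation:

```python
import numpy as np

def topsis(X, weights, benefit):
    """TOPSIS: rank alternatives (rows of X) by relative closeness to the
    ideal solution. benefit[j] is True if criterion j is to be maximized."""
    X = np.asarray(X, dtype=float)
    weights = np.asarray(weights, dtype=float)
    benefit = np.asarray(benefit, dtype=bool)
    R = X / np.linalg.norm(X, axis=0)              # vector-normalize each criterion
    V = R * weights                                # weighted normalized matrix
    ideal = np.where(benefit, V.max(axis=0), V.min(axis=0))
    anti  = np.where(benefit, V.min(axis=0), V.max(axis=0))
    d_pos = np.linalg.norm(V - ideal, axis=1)      # distance to ideal
    d_neg = np.linalg.norm(V - anti, axis=1)       # distance to anti-ideal
    return d_neg / (d_pos + d_neg)                 # closeness: higher is better
```

    For an agent, one such call per decision is all that is required, which is why the per-call cost of each method matters at the 10 000-agent scale the study targets.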

  5. Multi-Time Scale Coordinated Scheduling Strategy with Distributed Power Flow Controllers for Minimizing Wind Power Spillage

    Directory of Open Access Journals (Sweden)

    Yi Tang

    2017-11-01

    Full Text Available The inherent variability and randomness of large-scale wind power integration have brought great challenges to power flow control and dispatch. The distributed power flow controller (DPFC) offers greater flexibility and capacity for power flow control in systems with wind generation. This paper proposes a multi-time scale coordinated scheduling model with DPFC to minimize wind power spillage. Configuration of DPFCs is initially determined by a stochastic method. Afterward, two sequential procedures covering day-ahead and real-time scales are applied for determining maximum schedulable wind sources, optimal outputs of generating units and operation setting of DPFCs. The generating plan is obtained initially in the day-ahead scheduling stage and modified in the real-time scheduling model, while considering the uncertainty of wind power and fast operation of DPFC. Numerical simulation results in the IEEE-RTS79 system illustrate that wind power is maximally scheduled with the optimal deployment and operation of DPFC, which confirms the applicability and effectiveness of the proposed method.

  6. Multi-scale approximation of Vlasov equation

    International Nuclear Information System (INIS)

    Mouton, A.

    2009-09-01

One of the most important difficulties in the numerical simulation of magnetized plasmas is the existence of multiple time and space scales, which can be very different. In order to produce good simulations of these multi-scale phenomena, it is necessary to develop models and numerical methods adapted to these problems. Nowadays, the two-scale convergence theory introduced by G. Nguetseng and G. Allaire is one of the tools which can be used to rigorously derive multi-scale limits and to obtain new limit models which can be discretized with a usual numerical method: this procedure is the so-called two-scale numerical method. The purpose of this thesis is to develop a two-scale semi-Lagrangian method and to apply it to a gyrokinetic Vlasov-like model in order to simulate a plasma subjected to a large external magnetic field. However, the physical phenomena we have to simulate are quite complex and there are many open questions about the behaviour of a two-scale numerical method, especially when such a method is applied to a nonlinear model. In a first part, we develop a two-scale finite volume method and apply it to the weakly compressible 1D isentropic Euler equations. Even if this mathematical context is far from a Vlasov-like model, it is a relatively simple framework in which to study the behaviour of a two-scale numerical method applied to a nonlinear model. In a second part, we develop a two-scale semi-Lagrangian method for the two-scale model developed by E. Frenod, F. Salvarani and E. Sonnendrucker in order to simulate axisymmetric charged particle beams. Even if the studied physical phenomena are quite different from magnetic fusion experiments, the mathematical context of the one-dimensional paraxial Vlasov-Poisson model is very simple for establishing the basis of a two-scale semi-Lagrangian method. In a third part, we use the two-scale convergence theory in order to improve M. Bostan's weak-* convergence results about the finite
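
    The backbone of any semi-Lagrangian scheme, two-scale or not, is backward tracing of characteristics followed by interpolation of the old solution. A minimal sketch for 1D constant-coefficient advection on a periodic grid (illustrative only; the thesis works with far richer gyrokinetic and paraxial models):

    ```python
    import numpy as np

    def semi_lagrangian_step(f, velocity, dx, dt):
        """One semi-Lagrangian step for f_t + v f_x = 0 on a periodic grid:
        trace each node backward along the characteristic, then linearly
        interpolate the old solution at the departure point."""
        n = f.size
        x = np.arange(n) * dx
        x_dep = (x - velocity * dt) % (n * dx)   # departure points, wrapped
        i = np.floor(x_dep / dx).astype(int)
        w = x_dep / dx - i                       # interpolation weight in [0, 1)
        return (1 - w) * f[i % n] + w * f[(i + 1) % n]

    # Advect a Gaussian bump once around the periodic domain [0, 1).
    n, v = 64, 1.0
    dx = 1.0 / n
    dt = dx / v          # CFL = 1: departure points land exactly on grid nodes
    f0 = np.exp(-200 * (np.arange(n) * dx - 0.5) ** 2)
    f = f0.copy()
    for _ in range(n):   # n steps of size dx traverse the whole domain
        f = semi_lagrangian_step(f, v, dx, dt)
    # with CFL = 1 the interpolation is exact, so f returns to f0
    ```

    Note that, unlike explicit Eulerian schemes, the step remains stable for any dt; the CFL = 1 choice above is only to make the round trip exact.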

  7. Dynamic Modeling and Analysis of the Large-Scale Rotary Machine with Multi-Supporting

    Directory of Open Access Journals (Sweden)

    Xuejun Li

    2011-01-01

Full Text Available The large-scale rotary machine with multi-supporting, such as the rotary kiln and the rope laying machine, is key equipment in the architectural, chemistry, and agriculture industries. The body, rollers, wheels, and bearings constitute a chain multibody system. Axis line deflection is a key parameter for determining the mechanical state of a rotary machine, so body axial vibration needs to be studied for dynamic monitoring and adjusting of the machine. By using the Riccati transfer matrix method, the body system of the rotary machine is divided into many subsystems composed of three elements, namely, rigid disk, elastic shaft, and linear spring. Multiple wheel-bearing structures are simplified as springs. The transfer matrices of the body system and the overall transfer equation are developed, as well as the overall motion equation of the response. Taking a rotary kiln as an example, natural frequencies, modal shapes, and the vibration response to a given exciting axis line deflection are obtained by numerical computation. The body vibration modal curves illustrate the cause of dynamical errors in common axis line measurement methods. The displacement response can be used for further measurement dynamical error analysis and compensation. The overall motion equation of the response can be applied to predict the body motion under abnormal mechanical conditions, and provides theoretical guidance for machine failure diagnosis.
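
    The transfer-matrix idea behind the method can be sketched with the classical (non-Riccati) formulation: a state vector of deflection, slope, moment, and shear is propagated through field matrices for shaft segments and point matrices for spring supports, and the overall matrix is their product. The matrices below are textbook Euler-Bernoulli forms under one common sign convention, not the paper's exact formulation:

    ```python
    import numpy as np

    # State vector: [y, theta, M, Q] = deflection, slope, bending moment, shear.

    def shaft_matrix(L, EI):
        """Field matrix of a massless Euler-Bernoulli shaft segment of
        length L and bending stiffness EI (one common sign convention)."""
        return np.array([
            [1.0, L,   L**2 / (2 * EI), L**3 / (6 * EI)],
            [0.0, 1.0, L / EI,          L**2 / (2 * EI)],
            [0.0, 0.0, 1.0,             L],
            [0.0, 0.0, 0.0,             1.0],
        ])

    def spring_matrix(k):
        """Point matrix of an elastic support of stiffness k:
        the shear force jumps by -k * y across the support."""
        T = np.eye(4)
        T[3, 0] = -k
        return T

    # Chain the elements of a body carried by one intermediate wheel-bearing
    # station (modeled as a spring): overall matrix = product of element matrices.
    T_total = shaft_matrix(2.0, 1.0e6) @ spring_matrix(5.0e4) @ shaft_matrix(2.0, 1.0e6)
    # Applying boundary conditions at both ends turns T_total into the frequency
    # and response equations; the Riccati variant propagates a reduced state to
    # avoid the ill-conditioning of this raw product for long chains.
    ```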

  8. A multi-scale model for correlation in B cell VDJ usage of zebrafish

    International Nuclear Information System (INIS)

    Pan, Keyao; Deem, Michael W

    2011-01-01

The zebrafish (Danio rerio) is one of the model animals used for the study of immunology because the dynamics in the adaptive immune system of zebrafish are similar to that in higher animals. In this work, we built a multi-scale model to simulate the dynamics of B cells in the primary and secondary immune responses of zebrafish. We use this model to explain the reported correlation between VDJ usage of B cell repertoires in individual zebrafish. We use a delay ordinary differential equation (ODE) system to model the immune responses in the 6-month lifespan of a zebrafish. This mean field theory gives the number of high-affinity B cells as a function of time during an infection. The sequences of those B cells are then taken from a distribution calculated by a 'microscopic' random energy model. This generalized NK model shows that mature B cells specific to one antigen largely possess a single VDJ recombination. The model allows first-principle calculation of the probability, p, that two zebrafish responding to the same antigen will select the same VDJ recombination. This probability p increases with the B cell population size and the B cell selection intensity. The probability p decreases with the B cell hypermutation rate. The multi-scale model predicts correlations in the immune system of the zebrafish that are highly similar to those from experiment.

  9. A multi-scale model for correlation in B cell VDJ usage of zebrafish

    Science.gov (United States)

    Pan, Keyao; Deem, Michael W.

    2011-10-01

The zebrafish (Danio rerio) is one of the model animals used for the study of immunology because the dynamics in the adaptive immune system of zebrafish are similar to that in higher animals. In this work, we built a multi-scale model to simulate the dynamics of B cells in the primary and secondary immune responses of zebrafish. We use this model to explain the reported correlation between VDJ usage of B cell repertoires in individual zebrafish. We use a delay ordinary differential equation (ODE) system to model the immune responses in the 6-month lifespan of a zebrafish. This mean field theory gives the number of high-affinity B cells as a function of time during an infection. The sequences of those B cells are then taken from a distribution calculated by a 'microscopic' random energy model. This generalized NK model shows that mature B cells specific to one antigen largely possess a single VDJ recombination. The model allows first-principle calculation of the probability, p, that two zebrafish responding to the same antigen will select the same VDJ recombination. This probability p increases with the B cell population size and the B cell selection intensity. The probability p decreases with the B cell hypermutation rate. The multi-scale model predicts correlations in the immune system of the zebrafish that are highly similar to those from experiment.
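
    The headline quantity, the probability p that two fish responding to the same antigen pick the same VDJ recombination, reduces to the collision probability Σᵢ qᵢ² of the selection distribution. A toy sketch in which a Boltzmann softmax over random energies stands in for the paper's generalized NK model (all names and numbers are illustrative):

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    def match_probability(energies, beta):
        """Probability that two fish independently select the same VDJ
        recombination, assuming each picks recombination i with Boltzmann
        weight exp(-beta * E_i): p = sum_i q_i**2 (a collision probability).
        The Boltzmann softmax is a toy stand-in for the generalized NK model."""
        w = np.exp(-beta * np.asarray(energies))
        q = w / w.sum()
        return float((q ** 2).sum())

    energies = rng.normal(size=200)                    # random-energy-style affinities
    p_weak = match_probability(energies, beta=0.5)     # weak selection intensity
    p_strong = match_probability(energies, beta=3.0)   # strong selection intensity
    # stronger selection concentrates the repertoire on fewer recombinations,
    # so the sharing probability p rises, as the model predicts
    ```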

  10. Formation of Robust Multi-Agent Networks through Self-Organizing Random Regular Graphs

    KAUST Repository

    Yasin Yazicioǧlu, A.; Egerstedt, Magnus; Shamma, Jeff S.

    2015-01-01

Multi-agent networks are often modeled as interaction graphs, where the nodes represent the agents and the edges denote direct interactions. The robustness of a multi-agent network to perturbations such as failures, noise, or malicious attacks largely depends on the corresponding graph. In many applications, networks are desired to have well-connected interaction graphs with a relatively small number of links. One such family of graphs is the random regular graphs. In this paper, we present a decentralized scheme for transforming any connected interaction graph with a possibly non-integer average degree of k into a connected random m-regular graph for some m ∈ [k, k+2]. Accordingly, the agents improve the robustness of the network while maintaining a similar number of links as in the initial configuration by locally adding or removing some edges. © 2015 IEEE.

  11. Formation of Robust Multi-Agent Networks through Self-Organizing Random Regular Graphs

    KAUST Repository

    Yasin Yazicioǧlu, A.

    2015-11-25

Multi-agent networks are often modeled as interaction graphs, where the nodes represent the agents and the edges denote direct interactions. The robustness of a multi-agent network to perturbations such as failures, noise, or malicious attacks largely depends on the corresponding graph. In many applications, networks are desired to have well-connected interaction graphs with a relatively small number of links. One such family of graphs is the random regular graphs. In this paper, we present a decentralized scheme for transforming any connected interaction graph with a possibly non-integer average degree of k into a connected random m-regular graph for some m ∈ [k, k+2]. Accordingly, the agents improve the robustness of the network while maintaining a similar number of links as in the initial configuration by locally adding or removing some edges. © 2015 IEEE.
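
    The decentralized scheme itself is beyond a short sketch, but its effect, shifting links from high-degree to low-degree agents so all degrees approach m, can be illustrated with a toy centralized loop (this is a stand-in, not the paper's algorithm, and it omits the scheme's connectivity guarantees):

    ```python
    import random

    def balance_degrees(adj, m, steps=1000, seed=0):
        """Toy centralized stand-in for the decentralized scheme: repeatedly
        detach one edge from the highest-degree node and reattach it to the
        lowest-degree node, pushing all degrees toward m. The real scheme is
        local and also guarantees connectivity, which this sketch omits."""
        rng = random.Random(seed)
        nodes = list(adj)
        for _ in range(steps):
            hi = max(nodes, key=lambda u: len(adj[u]))
            lo = min(nodes, key=lambda u: len(adj[u]))
            if len(adj[hi]) <= m or len(adj[lo]) >= m:
                break  # degrees are as balanced as the edge budget allows
            # pick a neighbour of hi that is not already adjacent to lo
            candidates = [v for v in adj[hi] if v != lo and v not in adj[lo]]
            if not candidates:
                break
            v = rng.choice(candidates)
            adj[hi].remove(v); adj[v].remove(hi)   # delete edge hi-v
            adj[lo].add(v); adj[v].add(lo)         # add edge lo-v
        return adj

    # A star graph (degrees 4,1,1,1,1; average degree 1.6) pushed toward m = 2.
    adj = {0: {1, 2, 3, 4}, 1: {0}, 2: {0}, 3: {0}, 4: {0}}
    balance_degrees(adj, m=2)
    degrees = sorted(len(adj[u]) for u in adj)   # as regular as 4 edges allow
    ```

    Since this toy rule only moves edge endpoints, the edge count is conserved; the paper's scheme additionally adds or removes edges to reach an exactly m-regular graph.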

  12. Output variability caused by random seeds in a multi-agent transport simulation model

    DEFF Research Database (Denmark)

    Paulsen, Mads; Rasmussen, Thomas Kjær; Nielsen, Otto Anker

    2018-01-01

Dynamic transport simulators are intended to support decision makers in transport-related issues, and as such it is valuable that the random variability of their outputs is as small as possible. In this study we analyse the output variability caused by random seeds of a multi-agent transport...... simulator (MATSim) when applied to a case study of Santiago de Chile. Results based on 100 different random seeds show that the relative accuracies of estimated link loads tend to increase with link load, but that relative errors of up to 10% do occur even for links with large volumes. Although...
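
    The reported pattern, relative accuracy improving with link load, is what one would expect if seed-to-seed noise grows roughly like the square root of the load. A synthetic sketch (the loads and the noise model are made up, not MATSim output):

    ```python
    import numpy as np

    rng = np.random.default_rng(42)

    # Synthetic stand-in for MATSim link loads over 100 random seeds: each
    # seed perturbs the true load with noise whose standard deviation grows
    # like sqrt(load), so the *relative* spread shrinks for busy links.
    true_loads = np.geomspace(10, 10_000, 50)                      # 50 links
    runs = true_loads + rng.normal(scale=np.sqrt(true_loads), size=(100, 50))

    rel_spread = runs.std(axis=0) / runs.mean(axis=0)   # one value per link
    # relative seed-to-seed variability decreases with link volume, while a
    # few percent of spread can persist even on the busiest links
    ```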

  13. A multi-scale correlative investigation of ductile fracture

    International Nuclear Information System (INIS)

    Daly, M.; Burnett, T.L.; Pickering, E.J.; Tuck, O.C.G.; Léonard, F.; Kelley, R.; Withers, P.J.; Sherry, A.H.

    2017-01-01

The use of novel multi-scale correlative methods, which involve the coordinated characterisation of matter across a range of length scales, is becoming increasingly valuable to materials scientists. Here, we describe for the first time how a multi-scale correlative approach can be used to investigate the nature of ductile fracture in metals. Specimens of a nuclear pressure vessel steel, SA508 Grade 3, are examined following ductile fracture using medium and high-resolution 3D X-ray computed tomography (CT) analyses, and a site-specific analysis using a dual beam plasma focused ion beam scanning electron microscope (PFIB-SEM). The methods are employed sequentially to characterise damage by void nucleation and growth in one volume of interest, allowing for the imaging of voids that ranged in size from less than 100 nm to over 100 μm. This enables voids initiated at carbide particles to be detected, as well as the large voids initiated at inclusions. We demonstrate that this multi-scale correlative approach is a powerful tool, which not only enhances our understanding of ductile failure through detailed characterisation of microstructure, but also provides quantitative information about the size, volume fractions and spatial distributions of voids that can be used to inform models of failure. It is found that the vast majority of large voids nucleated at MnS inclusions, and that the volume of a void varied according to the volume of its initiating inclusion raised to the power 3/2. The most severe voiding was concentrated within 500 μm of the fracture surface, but measurable damage was found to extend to a depth of at least 3 mm. Microvoids associated with carbides (carbide-initiated voids) were found to be concentrated around larger inclusion-initiated voids at depths of at least 400 μm. Methods for quantifying X-ray CT void data are discussed, and a procedure for using this data to calibrate parameters in the Gurson-Tvergaard Needleman (GTN
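
    The reported scaling, void volume growing as the initiating-inclusion volume raised to the power 3/2, can be recovered from tomography data by a straight-line fit in log-log space. A sketch on synthetic data (all values are invented for illustration):

    ```python
    import numpy as np

    rng = np.random.default_rng(1)

    # Synthetic tomography-style data following the reported scaling
    # V_void ~ V_inclusion**(3/2), with lognormal scatter.
    v_incl = rng.lognormal(mean=2.0, sigma=0.8, size=300)           # inclusion volumes
    v_void = 0.5 * v_incl ** 1.5 * rng.lognormal(sigma=0.2, size=300)

    # A power law is a straight line in log-log space; the slope is the exponent.
    slope, intercept = np.polyfit(np.log(v_incl), np.log(v_void), 1)
    # slope recovers ~1.5 despite the scatter
    ```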

  14. Optimization of large-scale industrial systems : an emerging method

    Energy Technology Data Exchange (ETDEWEB)

    Hammache, A.; Aube, F.; Benali, M.; Cantave, R. [Natural Resources Canada, Varennes, PQ (Canada). CANMET Energy Technology Centre

    2006-07-01

    This paper reviewed optimization methods of large-scale industrial production systems and presented a novel systematic multi-objective and multi-scale optimization methodology. The methodology was based on a combined local optimality search with global optimality determination, and advanced system decomposition and constraint handling. The proposed method focused on the simultaneous optimization of the energy, economy and ecology aspects of industrial systems (E{sup 3}-ISO). The aim of the methodology was to provide guidelines for decision-making strategies. The approach was based on evolutionary algorithms (EA) with specifications including hybridization of global optimality determination with a local optimality search; a self-adaptive algorithm to account for the dynamic changes of operating parameters and design variables occurring during the optimization process; interactive optimization; advanced constraint handling and decomposition strategy; and object-oriented programming and parallelization techniques. Flowcharts of the working principles of the basic EA were presented. It was concluded that the EA uses a novel decomposition and constraint handling technique to enhance the Pareto solution search procedure for multi-objective problems. 6 refs., 9 figs.
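
    At the heart of any such multi-objective EA is the Pareto dominance test used to maintain the solution front. A minimal sketch for three minimized objectives, loosely mirroring the paper's energy/economy/ecology setting (the numbers are illustrative):

    ```python
    def dominates(a, b):
        """a dominates b (minimization): no worse in every objective and
        strictly better in at least one."""
        return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

    def pareto_front(points):
        """Non-dominated subset of a list of objective vectors."""
        return [p for p in points if not any(dominates(q, p) for q in points)]

    # (energy, economy, ecology) objective vectors, all to be minimized
    solutions = [(1.0, 5.0, 3.0), (2.0, 4.0, 2.0), (3.0, 5.0, 3.0), (1.0, 5.0, 4.0)]
    front = pareto_front(solutions)   # the last two are dominated by the first
    ```

    Real EAs avoid this quadratic all-pairs scan with non-dominated sorting, but the dominance relation is the same.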

  15. Multi-year lags between forest browning and soil respiration at high northern latitudes.

    Directory of Open Access Journals (Sweden)

    Ben Bond-Lamberty

Full Text Available High-latitude northern ecosystems are experiencing rapid climate changes, and represent a large potential climate feedback because of their high soil carbon densities and shifting disturbance regimes. A significant carbon flow from these ecosystems is soil respiration (R_S), the flow of carbon dioxide, generated by plant roots and soil fauna, from the soil surface to the atmosphere, and any change in the high-latitude carbon cycle might thus be reflected in R_S observed in the field. This study used two variants of a machine-learning algorithm and least squares regression to examine how remotely-sensed canopy greenness (NDVI), climate, and other variables are coupled to annual R_S based on 105 observations from 64 circumpolar sites in a global database. The addition of NDVI roughly doubled model performance, with the best-performing models explaining ∼62% of observed R_S variability. We show that early-summer NDVI from previous years is generally the best single predictor of R_S, and is better than current-year temperature or moisture. This implies significant temporal lags between these variables, with multi-year carbon pools exerting large-scale effects. Areas of decreasing R_S are spatially correlated with browning boreal forests and warmer temperatures, particularly in western North America. We suggest that total circumpolar R_S may have slowed by ∼5% over the last decade, depressed by forest stress and mortality, which in turn decrease R_S. Arctic tundra may exhibit a significantly different response, but few data are available with which to test this. Combining large-scale remote observations and small-scale field measurements, as done here, has the potential to allow inferences about the temporal and spatial complexity of the large-scale response of northern ecosystems to changing climate.
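
    The lag structure described here amounts to regressing annual R_S on predictors shifted by one or more years. A sketch on synthetic series in which respiration tracks NDVI from two years earlier (the data and coefficients are invented for illustration):

    ```python
    import numpy as np

    rng = np.random.default_rng(7)

    # Synthetic annual series: R_S tracks canopy greenness (NDVI) from two
    # years earlier much more closely than current-year temperature.
    n = 40
    ndvi = 0.6 + 0.1 * rng.standard_normal(n)
    temp = 2.0 + rng.standard_normal(n)
    rs = 400 + 600 * ndvi[:-2] + 5 * temp[2:] + 10 * rng.standard_normal(n - 2)

    def r_squared(x, y):
        """Variance explained by a simple linear fit y ~ a*x + b."""
        a, b = np.polyfit(x, y, 1)
        return 1 - (y - (a * x + b)).var() / y.var()

    r2_lagged_ndvi = r_squared(ndvi[:-2], rs)   # two-year-lagged greenness
    r2_current_temp = r_squared(temp[2:], rs)   # same-year temperature
    # the lagged NDVI predictor dominates, mirroring the reported lag
    ```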

  16. A large-scale multi-species spatial depletion model for overwintering waterfowl

    NARCIS (Netherlands)

    Baveco, J.M.; Kuipers, H.; Nolet, B.A.

    2011-01-01

    In this paper, we develop a model to evaluate the capacity of accommodation areas for overwintering waterfowl, at a large spatial scale. Each day geese are distributed over roosting sites. Based on the energy minimization principle, the birds daily decide which surrounding fields to exploit within

  17. Correlations of stock price fluctuations under multi-scale and multi-threshold scenarios

    Science.gov (United States)

    Sui, Guo; Li, Huajiao; Feng, Sida; Liu, Xueyong; Jiang, Meihui

    2018-01-01

The multi-scale method is widely used in analyzing time series of financial markets and it can provide market information for different economic entities who focus on different periods. Through constructing multi-scale networks of price fluctuation correlation in the stock market, we can detect the topological relationship between each time series. Previous research has not addressed the problem that the original fluctuation correlation networks are fully connected networks and that more information exists within these networks than is currently being utilized. Here we use listed coal companies as a case study. First, we decompose the original stock price fluctuation series into different time scales. Second, we construct the stock price fluctuation correlation networks at different time scales. Third, we delete the edges of the network based on thresholds and analyze the network indicators. Through combining the multi-scale method with the multi-threshold method, we bring to light the implicit information of fully connected networks.
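
    The thresholding step described above, deleting the edges of a fully connected correlation network below a cutoff, can be sketched as follows (the "stocks" are synthetic; a common factor makes the first three strongly correlated):

    ```python
    import numpy as np

    rng = np.random.default_rng(3)

    # Toy daily returns for 6 stocks; a shared factor couples the first three.
    common = rng.standard_normal(250)
    returns = rng.standard_normal((6, 250))
    returns[:3] += 2.0 * common

    corr = np.corrcoef(returns)   # fully connected correlation "network"

    def edges_above(corr, threshold):
        """Edge list after deleting links weaker than the threshold."""
        n = corr.shape[0]
        return [(i, j) for i in range(n) for j in range(i + 1, n)
                if abs(corr[i, j]) >= threshold]

    # Sweeping the threshold thins the fully connected network until only
    # the strongly coupled cluster (stocks 0-2, corr ~ 0.8) survives.
    edge_counts = {t: len(edges_above(corr, t)) for t in (0.0, 0.5, 0.9)}
    ```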

  18. Study on high density multi-scale calculation technique

    International Nuclear Information System (INIS)

    Sekiguchi, S.; Tanaka, Y.; Nakada, H.; Nishikawa, T.; Yamamoto, N.; Yokokawa, M.

    2004-01-01

To understand the degradation of nuclear materials under irradiation, it is essential to know each observed phenomenon from multiple points of view: the atomic-level micro-scale, the structural macro-scale, and the intermediate level. In this study, for application to meso-scale materials (100 Å ∼ 2 μm), computational techniques approaching from the micro- and macro-scales were developed, including modeling and applications based on computational science and technology methods. A grid-technology environment for multi-scale calculation was also prepared. The software and MD (molecular dynamics) stencil for verifying the multi-scale calculation were improved and their operation was confirmed. (A. Hishinuma)

  19. Scaling Fiber Lasers to Large Mode Area: An Investigation of Passive Mode-Locking Using a Multi-Mode Fiber.

    Science.gov (United States)

    Ding, Edwin; Lefrancois, Simon; Kutz, Jose Nathan; Wise, Frank W

    2011-01-01

    The mode-locking of dissipative soliton fiber lasers using large mode area fiber supporting multiple transverse modes is studied experimentally and theoretically. The averaged mode-locking dynamics in a multi-mode fiber are studied using a distributed model. The co-propagation of multiple transverse modes is governed by a system of coupled Ginzburg-Landau equations. Simulations show that stable and robust mode-locked pulses can be produced. However, the mode-locking can be destabilized by excessive higher-order mode content. Experiments using large core step-index fiber, photonic crystal fiber, and chirally-coupled core fiber show that mode-locking can be significantly disturbed in the presence of higher-order modes, resulting in lower maximum single-pulse energies. In practice, spatial mode content must be carefully controlled to achieve full pulse energy scaling. This paper demonstrates that mode-locking performance is very sensitive to the presence of multiple waveguide modes when compared to systems such as amplifiers and continuous-wave lasers.

  20. Evaluation of the multi-sums for large scale problems

    International Nuclear Information System (INIS)

    Bluemlein, J.; Hasselhuhn, A.; Schneider, C.

    2012-02-01

A big class of Feynman integrals, in particular, the coefficients of their Laurent series expansion w.r.t. the dimension parameter ε, can be transformed to multi-sums over hypergeometric terms and harmonic sums. In this article, we present a general summation method based on difference fields that simplifies these multi-sums by transforming them from the inside to the outside into representations in terms of indefinite nested sums and products. In particular, we present techniques that assist in the task to simplify huge expressions of such multi-sums in a completely automatic fashion. The ideas are illustrated on new calculations coming from 3-loop topologies of gluonic massive operator matrix elements containing two fermion lines, which contribute to the transition matrix elements in the variable flavor scheme. (orig.)

  1. Evaluation of the multi-sums for large scale problems

    Energy Technology Data Exchange (ETDEWEB)

    Bluemlein, J.; Hasselhuhn, A. [Deutsches Elektronen-Synchrotron (DESY), Zeuthen (Germany); Schneider, C. [Johannes Kepler Univ., Linz (Austria). Research Inst. for Symbolic Computation

    2012-02-15

A big class of Feynman integrals, in particular, the coefficients of their Laurent series expansion w.r.t. the dimension parameter ε, can be transformed to multi-sums over hypergeometric terms and harmonic sums. In this article, we present a general summation method based on difference fields that simplifies these multi-sums by transforming them from the inside to the outside into representations in terms of indefinite nested sums and products. In particular, we present techniques that assist in the task to simplify huge expressions of such multi-sums in a completely automatic fashion. The ideas are illustrated on new calculations coming from 3-loop topologies of gluonic massive operator matrix elements containing two fermion lines, which contribute to the transition matrix elements in the variable flavor scheme. (orig.)

  2. A novel artificial fish swarm algorithm for solving large-scale reliability-redundancy application problem.

    Science.gov (United States)

    He, Qiang; Hu, Xiangtao; Ren, Hong; Zhang, Hongqi

    2015-11-01

A novel artificial fish swarm algorithm (NAFSA) is proposed for solving the large-scale reliability-redundancy allocation problem (RAP). In NAFSA, the social behaviors of the fish swarm are classified in three ways: foraging behavior, reproductive behavior, and random behavior. The foraging behavior uses two position-updating strategies, and the selection and crossover operators are applied to define the reproductive ability of an artificial fish. For the random behavior, which is essentially a mutation strategy, the basic cloud generator is used as the mutation operator. Finally, numerical results for four benchmark problems and a large-scale RAP are reported and compared. NAFSA shows good performance in terms of computational accuracy and computational efficiency for the large-scale RAP. Copyright © 2015 ISA. Published by Elsevier Ltd. All rights reserved.
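
    The basic normal cloud generator used as the mutation operator draws a perturbed dispersion En' ~ N(En, He) and then a mutant from N(x, |En'|), so the mutation strength itself is random. A sketch (the parameter values are illustrative, and this is a one-dimensional simplification of NAFSA's random behavior):

    ```python
    import random

    def cloud_mutation(x, en, he, rng):
        """Basic normal cloud generator as a mutation operator: draw a
        perturbed dispersion En' ~ N(En, He), then the mutant ~ N(x, |En'|).
        One-dimensional simplification of NAFSA's random behavior."""
        en_prime = rng.gauss(en, he)
        return rng.gauss(x, abs(en_prime))

    rng = random.Random(0)
    parent = 5.0
    mutants = [cloud_mutation(parent, en=0.1, he=0.02, rng=rng) for _ in range(1000)]
    mean = sum(mutants) / len(mutants)
    # mutants scatter around the parent; He makes the scatter width itself random
    ```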

  3. Some Statistics for Measuring Large-Scale Structure

    OpenAIRE

Brandenberger, Robert H.; Kaplan, David M.; Ramsey, Stephen A.

    1993-01-01

Good statistics for measuring large-scale structure in the Universe must be able to distinguish between different models of structure formation. In this paper, two and three dimensional "counts in cell" statistics and a new "discrete genus statistic" are applied to toy versions of several popular theories of structure formation: the random phase cold dark matter model, cosmic string models, and the global texture scenario. All three statistics appear quite promising in terms of differentiating betw...

  4. Transitions of the Multi-Scale Singularity Trees

    DEFF Research Database (Denmark)

    Somchaipeng, Kerawit; Sporring, Jon; Kreiborg, Sven

    2005-01-01

Multi-Scale Singularity Trees (MSSTs) [10] are multi-scale image descriptors aimed at representing the deep structures of images. Changes in images are directly translated to changes in the deep structures and therefore to transitions in MSSTs. Because MSSTs can be used to represent the deep structure...

  5. Microphysics in Multi-scale Modeling System with Unified Physics

    Science.gov (United States)

    Tao, Wei-Kuo

    2012-01-01

Recently, a multi-scale modeling system with unified physics was developed at NASA Goddard. It consists of (1) a cloud-resolving model (the Goddard Cumulus Ensemble model, GCE model), (2) a regional scale model (a NASA unified Weather Research and Forecasting model, WRF), (3) a coupled CRM and global model (the Goddard Multi-scale Modeling Framework, MMF), and (4) a land modeling system. The same microphysical processes, long- and short-wave radiative transfer, land processes, and the explicit cloud-radiation and cloud-land surface interactive processes are applied in this multi-scale modeling system. This modeling system has been coupled with a multi-satellite simulator to use NASA high-resolution satellite data to identify the strengths and weaknesses of cloud and precipitation processes simulated by the model. In this talk, a review of developments and applications of the multi-scale modeling system will be presented. In particular, the microphysics development and its performance within the multi-scale modeling system will be presented.

  6. OBJECT-ORIENTED CHANGE DETECTION BASED ON MULTI-SCALE APPROACH

    Directory of Open Access Journals (Sweden)

    Y. Jia

    2016-06-01

Full Text Available The change detection of remote sensing images means analysing the change information quantitatively and recognizing the change types of the surface coverage data in different time phases. With the appearance of high-resolution remote sensing imagery, object-oriented change detection methods have emerged. In this paper, we investigate a multi-scale approach for high resolution images, which includes multi-scale segmentation, multi-scale feature selection and multi-scale classification. Experimental results show that this method outperforms the traditional single-scale method for high-resolution remote sensing image change detection.

  7. Random sampling of elementary flux modes in large-scale metabolic networks.

    Science.gov (United States)

    Machado, Daniel; Soons, Zita; Patil, Kiran Raosaheb; Ferreira, Eugénio C; Rocha, Isabel

    2012-09-15

    The description of a metabolic network in terms of elementary (flux) modes (EMs) provides an important framework for metabolic pathway analysis. However, their application to large networks has been hampered by the combinatorial explosion in the number of modes. In this work, we develop a method for generating random samples of EMs without computing the whole set. Our algorithm is an adaptation of the canonical basis approach, where we add an additional filtering step which, at each iteration, selects a random subset of the new combinations of modes. In order to obtain an unbiased sample, all candidates are assigned the same probability of getting selected. This approach avoids the exponential growth of the number of modes during computation, thus generating a random sample of the complete set of EMs within reasonable time. We generated samples of different sizes for a metabolic network of Escherichia coli, and observed that they preserve several properties of the full EM set. It is also shown that EM sampling can be used for rational strain design. A well distributed sample, that is representative of the complete set of EMs, should be suitable to most EM-based methods for analysis and optimization of metabolic networks. Source code for a cross-platform implementation in Python is freely available at http://code.google.com/p/emsampler. dmachado@deb.uminho.pt Supplementary data are available at Bioinformatics online.
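
    The filtering step is simple to sketch: at each iteration of the canonical-basis computation, the newly generated candidate combinations are subsampled uniformly, so every candidate has the same survival probability (function and variable names here are illustrative, not taken from the emsampler source):

    ```python
    import random

    def filter_candidates(candidates, sample_size, rng):
        """Unbiased filtering step: every newly generated candidate mode
        combination survives the iteration with the same probability."""
        if len(candidates) <= sample_size:
            return list(candidates)
        return rng.sample(list(candidates), sample_size)

    rng = random.Random(0)
    # stand-in for the combinations produced in one canonical-basis iteration
    new_modes = [frozenset({i, i + 1}) for i in range(10_000)]
    kept = filter_candidates(new_modes, sample_size=500, rng=rng)
    # the candidate pool stays bounded instead of growing exponentially
    ```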

  8. Study on dynamic multi-objective approach considering coal and water conflict in large scale coal group

    Science.gov (United States)

    Feng, Qing; Lu, Li

    2018-01-01

In the process of coal mining, the destruction and pollution of groundwater have reached a critical point, and groundwater is not only related to the ecological environment but also affects human health. Likewise, the coal-water conflict remains one of the world's pressing problems in large-scale coal mining regions. Based on this, this paper presents a dynamic multi-objective optimization model to deal with the conflict between coal and water in a coal group with multiple subordinate collieries, and to arrive at a comprehensive arrangement that achieves an environmentally friendly coal mining strategy. Through calculation, this paper derives the output of each subordinate coal mine. On this basis, we adjust the environmental protection parameters to compare coal production at different collieries at different stages under different government attitudes. Finally, the paper concludes that, in either case, the first arrangement is to give priority to the production of low-drainage, high-yield coal mines.

  9. Measuring large-scale social networks with high resolution.

    Directory of Open Access Journals (Sweden)

    Arkadiusz Stopczynski

Full Text Available This paper describes the deployment of a large-scale study designed to measure human interactions across a variety of communication channels, with high temporal resolution and spanning multiple years: the Copenhagen Networks Study. Specifically, we collect data on face-to-face interactions, telecommunication, social networks, location, and background information (personality, demographics, health, politics) for a densely connected population of 1000 individuals, using state-of-the-art smartphones as social sensors. Here we provide an overview of the related work and describe the motivation and research agenda driving the study. Additionally, the paper details the data types measured and the technical infrastructure in terms of both backend and phone software, as well as an outline of the deployment procedures. We document the participant privacy procedures and their underlying principles. The paper is concluded with early results from data analysis, illustrating the importance of a multi-channel, high-resolution approach to data collection.

  10. Multi-Scale Dissemination of Time Series Data

    DEFF Research Database (Denmark)

    Guo, Qingsong; Zhou, Yongluan; Su, Li

    2013-01-01

    In this paper, we consider the problem of continuous dissemination of time series data, such as sensor measurements, to a large number of subscribers. These subscribers fall into multiple subscription levels, where each subscription level is specified by the bandwidth constraint of a subscriber......, which is an abstract indicator for both the physical limits and the amount of data that the subscriber would like to handle. To handle this problem, we propose a system framework for multi-scale time series data dissemination that employs a typical tree-based dissemination network and existing time...

  11. An interactive display system for large-scale 3D models

    Science.gov (United States)

    Liu, Zijian; Sun, Kun; Tao, Wenbing; Liu, Liman

    2018-04-01

With the improvement of 3D reconstruction theory and the rapid development of computer hardware technology, reconstructed 3D models are growing in scale and complexity. Models with tens of thousands of 3D points or triangular meshes are common in practical applications. Due to storage and computing power limitations, it is difficult to achieve real-time display of and interaction with large-scale 3D models in common 3D display software such as MeshLab. In this paper, we propose a display system for large-scale 3D scene models. We construct the LOD (Levels of Detail) model of the reconstructed 3D scene in advance, and then use an out-of-core, view-dependent multi-resolution rendering scheme to realize real-time display of the large-scale 3D model. With the proposed method, our display system is able to render in real time while roaming in the reconstructed scene, and 3D camera poses can also be displayed. Furthermore, memory consumption can be significantly decreased via an internal and external memory exchange mechanism, so that it is possible to display a large-scale reconstructed scene with millions of 3D points or triangular meshes on a regular PC with only 4GB RAM.
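
    A common view-dependent selection rule of the kind such LOD renderers use picks, per frame, the coarsest level whose world-space geometric error projects below a pixel-error budget on screen. A sketch (the formula and thresholds are a generic choice, not necessarily the paper's exact criterion):

    ```python
    import math

    def choose_lod(distance, fov_y, screen_height, geometric_error, max_pixel_error=1.0):
        """Pick the coarsest LOD whose world-space geometric error projects
        to at most max_pixel_error pixels at the given viewing distance.
        geometric_error[i] is the error of level i, finest first."""
        pixels_per_unit = screen_height / (2 * distance * math.tan(fov_y / 2))
        for level in range(len(geometric_error) - 1, -1, -1):  # coarsest first
            if geometric_error[level] * pixels_per_unit <= max_pixel_error:
                return level
        return 0  # even the finest level exceeds the budget; use it anyway

    errors = [0.001, 0.004, 0.016, 0.064]          # per-level error, finest first
    near = choose_lod(2.0, math.radians(60), 1080, errors)     # close-up view
    far = choose_lod(200.0, math.radians(60), 1080, errors)    # distant view
    # a nearby viewer gets the finest level, a distant one the coarsest
    ```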

  12. 3D fully convolutional networks for subcortical segmentation in MRI: A large-scale study.

    Science.gov (United States)

    Dolz, Jose; Desrosiers, Christian; Ben Ayed, Ismail

    2018-04-15

This study investigates a 3D and fully convolutional neural network (CNN) for subcortical brain structure segmentation in MRI. 3D CNN architectures have been generally avoided due to their computational and memory requirements during inference. We address the problem via small kernels, allowing deeper architectures. We further model both local and global context by embedding intermediate-layer outputs in the final prediction, which encourages consistency between features extracted at different scales and embeds fine-grained information directly in the segmentation process. Our model is efficiently trained end-to-end on a graphics processing unit (GPU), in a single stage, exploiting the dense inference capabilities of fully CNNs. We performed comprehensive experiments over two publicly available datasets. First, we demonstrate state-of-the-art performance on the IBSR dataset. Then, we report a large-scale multi-site evaluation over 1112 unregistered subject datasets acquired from 17 different sites (ABIDE dataset), with ages ranging from 7 to 64 years, showing that our method is robust to various acquisition protocols, demographics and clinical factors. Our method yielded segmentations that are highly consistent with a standard atlas-based approach, while running in a fraction of the time needed by atlas-based methods and avoiding registration/normalization steps. This makes it convenient for massive multi-site neuroanatomical imaging studies. To the best of our knowledge, our work is the first to study subcortical structure segmentation on such large-scale and heterogeneous data. Copyright © 2017 Elsevier Inc. All rights reserved.
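
    The "small kernels allowing deeper architectures" argument rests on receptive-field arithmetic: stacked 3×3×3 convolutions cover the same neighborhood as one large kernel while using fewer parameters and more non-linearities. A sketch of the standard computation:

    ```python
    def receptive_field(kernel_sizes, strides=None):
        """Receptive field (per axis) of stacked conv layers:
        r grows by (k - 1) * jump per layer; jump multiplies by the stride."""
        strides = strides or [1] * len(kernel_sizes)
        r, jump = 1, 1
        for k, s in zip(kernel_sizes, strides):
            r += (k - 1) * jump
            jump *= s
        return r

    stacked = receptive_field([3, 3, 3])   # three stacked 3x3x3 convolutions
    single = receptive_field([7])          # one 7x7x7 convolution
    # same 7-voxel receptive field, but 3 * 27 = 81 weights per channel pair
    # versus 343, plus two extra non-linearities in between
    ```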

  13. Optimal Multi-scale Demand-side Management for Continuous Power-Intensive Processes

    Science.gov (United States)

    Mitra, Sumit

    With the advent of deregulation in electricity markets and an increasing share of intermittent power generation sources, the profitability of industrial consumers that operate power-intensive processes has become directly linked to the variability in energy prices. Thus, for industrial consumers that are able to adjust to the fluctuations, time-sensitive electricity prices (as part of so-called Demand-Side Management (DSM) in the smart grid) offer potential economic incentives. In this thesis, we introduce optimization models and decomposition strategies for the multi-scale Demand-Side Management of continuous power-intensive processes. On an operational level, we derive a mode formulation for scheduling under time-sensitive electricity prices. The formulation is applied to air separation plants and cement plants to minimize operating cost. We also describe how a mode formulation can be used for industrial combined heat and power plants that are co-located at integrated chemical sites to increase operating profit by adjusting their steam and electricity production according to their inherent flexibility. Furthermore, a robust optimization formulation is developed to address the uncertainty in electricity prices by accounting for correlations and multiple ranges in the realization of the random variables. On a strategic level, we introduce a multi-scale model that provides an understanding of the value of flexibility of the current plant configuration and the value of additional flexibility in terms of retrofits for Demand-Side Management under product demand uncertainty. The integration of multiple time scales leads to large-scale two-stage stochastic programming problems, for which we need to apply decomposition strategies in order to obtain a good solution within a reasonable amount of time.
Hence, we describe two decomposition schemes that can be applied to solve two-stage stochastic programming problems: First, a hybrid bi-level decomposition scheme with

  14. The Goddard multi-scale modeling system with unified physics

    Directory of Open Access Journals (Sweden)

    W.-K. Tao

    2009-08-01

    Recently, a multi-scale modeling system with unified physics was developed at NASA Goddard. It consists of (1) a cloud-resolving model (CRM), (2) a regional-scale model, the NASA unified Weather Research and Forecasting Model (WRF), and (3) a coupled CRM-GCM (general circulation model), known as the Goddard Multi-scale Modeling Framework or MMF. The same cloud-microphysical processes, long- and short-wave radiative transfer and land-surface processes are applied in all of the models to study explicit cloud-radiation and cloud-surface interactive processes in this multi-scale modeling system. This modeling system has been coupled with a multi-satellite simulator for comparison and validation with NASA high-resolution satellite data.

    This paper reviews the development and presents some applications of the multi-scale modeling system, including results from using the multi-scale modeling system to study the interactions between clouds, precipitation, and aerosols. In addition, use of the multi-satellite simulator to identify the strengths and weaknesses of the model-simulated precipitation processes will be discussed as well as future model developments and applications.

  15. A cloud-based framework for large-scale traditional Chinese medical record retrieval.

    Science.gov (United States)

    Liu, Lijun; Liu, Li; Fu, Xiaodong; Huang, Qingsong; Zhang, Xianwen; Zhang, Yin

    2018-01-01

    Electronic medical records are increasingly common in medical practice, and their secondary use has become increasingly important. Secondary use relies on the ability to retrieve complete information about desired patient populations, and how to effectively and accurately retrieve relevant medical records from large-scale medical big data is becoming a major challenge. Therefore, we propose an efficient and robust cloud-based framework for large-scale Traditional Chinese Medical Records (TCMRs) retrieval. First, we propose a parallel index-building method and build a distributed search cluster: the former is used to improve the performance of index building, and the latter is used to provide highly concurrent online TCMRs retrieval. Then, a real-time multi-indexing model is proposed to ensure that the latest relevant TCMRs are indexed and retrieved in real time, and a semantics-based query expansion method and a multi-factor ranking model are proposed to improve retrieval quality. Third, we implement a template-based visualization method for displaying medical reports. The proposed parallel indexing method and distributed search cluster improve the performance of index building and provide highly concurrent online TCMRs retrieval. The multi-indexing model ensures that the latest relevant TCMRs are indexed and retrieved in real time. The semantics expansion method and the multi-factor ranking model enhance retrieval quality. The template-based visualization method enhances availability and universality, as the medical reports are displayed via a friendly web interface. In conclusion, compared with current medical record retrieval systems, our system provides advantages that are useful in improving the secondary use of large-scale traditional Chinese medical records in a cloud environment. The proposed system is also more easily integrated with existing clinical systems and can be used in various scenarios. Copyright © 2017. Published by Elsevier Inc.

  16. A large-scale forest landscape model incorporating multi-scale processes and utilizing forest inventory data

    Science.gov (United States)

    Wen J. Wang; Hong S. He; Martin A. Spetich; Stephen R. Shifley; Frank R. Thompson III; David R. Larsen; Jacob S. Fraser; Jian. Yang

    2013-01-01

    Two challenges confronting forest landscape models (FLMs) are how to simulate fine, stand-scale processes while making large-scale (i.e., >10^7 ha) simulation possible, and how to take advantage of extensive forest inventory data such as U.S. Forest Inventory and Analysis (FIA) data to initialize and constrain model parameters. We present the LANDIS PRO model that...

  17. Planning under uncertainty solving large-scale stochastic linear programs

    Energy Technology Data Exchange (ETDEWEB)

    Infanger, G. [Stanford Univ., CA (United States). Dept. of Operations Research; Technische Univ., Vienna (Austria). Inst. fuer Energiewirtschaft]

    1992-12-01

    For many practical problems, solutions obtained from deterministic models are unsatisfactory because they fail to hedge against certain contingencies that may occur in the future. Stochastic models address this shortcoming, but until recently seemed to be intractable due to their size. Recent advances both in solution algorithms and in computer technology now allow us to solve important and general classes of practical stochastic problems. We show how large-scale stochastic linear programs can be efficiently solved by combining classical decomposition and Monte Carlo (importance) sampling techniques. We discuss the methodology for solving two-stage stochastic linear programs with recourse, present numerical results of large problems with numerous stochastic parameters, show how to efficiently implement the methodology on a parallel multi-computer and derive the theory for solving a general class of multi-stage problems with dependency of the stochastic parameters within a stage and between different stages.

  18. Sampling strategy for a large scale indoor radiation survey - a pilot project

    International Nuclear Information System (INIS)

    Strand, T.; Stranden, E.

    1986-01-01

    Optimisation of a stratified random sampling strategy for large scale indoor radiation surveys is discussed. It is based on the results from a small scale pilot project where variances in dose rates within different categories of houses were assessed. By selecting a predetermined precision level for the mean dose rate in a given region, the number of measurements needed can be optimised. The results of a pilot project in Norway are presented together with the development of the final sampling strategy for a planned large scale survey. (author)
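    The kind of optimisation the abstract describes can be illustrated with Neyman allocation, which uses per-category variance estimates from a pilot survey to size a stratified sample for a target precision. The house categories, stratum sizes, and standard deviations below are hypothetical stand-ins, not the Norwegian pilot data.

```python
import math

def neyman_allocation(strata, target_se):
    """
    strata: dict name -> (N_h, S_h), the stratum size and the pilot
            standard deviation of the dose rate in that stratum.
    target_se: desired standard error of the stratified mean.
    Returns the total sample size and a per-stratum allocation
    (finite-population correction ignored for simplicity).
    """
    N = sum(Nh for Nh, _ in strata.values())
    weighted = {k: (Nh / N) * Sh for k, (Nh, Sh) in strata.items()}
    total = sum(weighted.values())
    n = math.ceil((total / target_se) ** 2)   # n = (sum_h W_h * S_h)^2 / V
    alloc = {k: max(1, round(n * w / total)) for k, w in weighted.items()}
    return n, alloc

# hypothetical house categories from a pilot survey (counts, std devs in nGy/h)
strata = {"wood": (60000, 25.0), "brick": (30000, 40.0), "concrete": (10000, 15.0)}
n, alloc = neyman_allocation(strata, target_se=1.0)
print(n, alloc)
```

    Tightening `target_se` grows the total sample quadratically, which is why fixing the precision level in advance is what makes the survey size optimisable.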

  19. Large scale particle simulations in a virtual memory computer

    International Nuclear Information System (INIS)

    Gray, P.C.; Million, R.; Wagner, J.S.; Tajima, T.

    1983-01-01

    Virtual memory computers are capable of executing large-scale particle simulations even when the memory requirements exceed the computer core size. The required address space is automatically mapped onto slow disc memory by the operating system. When the simulation size is very large, frequent random accesses to slow memory occur during the charge accumulation and particle pushing processes. Accesses to slow memory significantly reduce the execution rate of the simulation. We demonstrate in this paper that with the proper choice of sorting algorithm, a nominal amount of sorting to keep physically adjacent particles near particles with neighboring array indices can reduce random access to slow memory, increase the efficiency of the I/O system, and hence, reduce the required computing time. (orig.)
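    The sorting idea can be illustrated with a toy 1D particle setup (hypothetical, not the paper's code): bin particles by grid cell, reorder them by cell index, and count cell-to-cell jumps during a sequential sweep as a rough proxy for faults into slow memory.

```python
import random

def cell_index(x, cell_size):
    """Grid cell containing position x."""
    return int(x // cell_size)

def sort_particles(positions, cell_size):
    """Reorder particles so neighbors in space are neighbors in the array.
    On a virtual-memory machine this keeps charge deposition and the
    particle push touching consecutive pages instead of random ones."""
    return sorted(positions, key=lambda x: cell_index(x, cell_size))

def cell_jumps(positions, cell_size):
    """Count cell-to-cell transitions during a sequential sweep
    (a crude proxy for random accesses to slow memory)."""
    cells = [cell_index(x, cell_size) for x in positions]
    return sum(1 for a, b in zip(cells, cells[1:]) if a != b)

random.seed(0)
unsorted = [random.uniform(0.0, 64.0) for _ in range(10000)]
sorted_p = sort_particles(unsorted, cell_size=1.0)

# after sorting, a full sweep crosses each cell boundary exactly once
print(cell_jumps(unsorted, 1.0), cell_jumps(sorted_p, 1.0))
```

    The contrast is stark: the unsorted sweep jumps cells on almost every step, while the sorted sweep crosses at most one boundary per cell, which is exactly why a nominal amount of sorting pays for itself.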

  20. Large-scale particle simulations in a virtual-memory computer

    International Nuclear Information System (INIS)

    Gray, P.C.; Wagner, J.S.; Tajima, T.; Million, R.

    1982-08-01

    Virtual memory computers are capable of executing large-scale particle simulations even when the memory requirements exceed the computer core size. The required address space is automatically mapped onto slow disc memory by the operating system. When the simulation size is very large, frequent random accesses to slow memory occur during the charge accumulation and particle pushing processes. Accesses to slow memory significantly reduce the execution rate of the simulation. We demonstrate in this paper that with the proper choice of sorting algorithm, a nominal amount of sorting to keep physically adjacent particles near particles with neighboring array indices can reduce random access to slow memory, increase the efficiency of the I/O system, and hence, reduce the required computing time

  1. Multi-scale characterization of surface blistering morphology of helium irradiated W thin films

    International Nuclear Information System (INIS)

    Yang, J.J.; Zhu, H.L.; Wan, Q.; Peng, M.J.; Ran, G.; Tang, J.; Yang, Y.Y.; Liao, J.L.; Liu, N.

    2015-01-01

    Highlights: • Multi-scale blistering morphology of He-irradiated W film was studied. • This complex morphology was characterized for the first time by a wavelet transform approach. - Abstract: Surface blistering morphologies of W thin films irradiated by a 30 keV He ion beam were studied quantitatively. It was found that the blistering morphology strongly depends on He fluence. For lower He fluence, the accumulation and growth of He bubbles induce intrinsic surface blisters with a mono-modal size distribution. At higher He fluence, the film surface morphology exhibits a multi-scale property, comprising two kinds of surface blisters with different characteristic sizes: in addition to the intrinsic He blisters, film/substrate interface delamination also induces large-sized surface blisters. A strategy based on the wavelet transform approach was proposed to distinguish and extract the multi-scale surface blistering morphologies. The density, lateral size and height of these different blisters were then estimated quantitatively, and the effect of He fluence on these geometrical parameters was investigated. Our method could provide a potential tool to describe irradiation-induced surface damage morphology with a multi-scale property.

  2. Evaluating the Performance of the Goddard Multi-Scale Modeling Framework against GPM, TRMM and CloudSat/CALIPSO Products

    Science.gov (United States)

    Chern, J. D.; Tao, W. K.; Lang, S. E.; Matsui, T.; Mohr, K. I.

    2014-12-01

    Four six-month (March-August 2014) experiments with the Goddard Multi-scale Modeling Framework (MMF) were performed to study the impacts of different Goddard one-moment bulk microphysical schemes and large-scale forcings on the performance of the MMF. Recently, a new Goddard one-moment bulk microphysics scheme with four ice classes (cloud ice, snow, graupel, and frozen drops/hail) was developed based on cloud-resolving model simulations with large-scale forcings from field campaign observations. The new scheme has been successfully implemented in the MMF, and two MMF experiments were carried out with this new scheme and the old three-ice-class (cloud ice, snow, graupel) scheme. The MMF has global coverage and can rigorously evaluate microphysics performance for different cloud regimes. The results show that the MMF with the new scheme outperforms the old one. The MMF simulations are also strongly affected by the interaction between large-scale and cloud-scale processes. Two MMF sensitivity experiments, with and without nudging large-scale forcings to those of the ERA-Interim reanalysis, were carried out to study the impacts of large-scale forcings. The model-simulated mean and variability of surface precipitation, cloud types, and cloud properties (such as cloud amount, hydrometeor vertical profiles, and cloud water contents) in different geographic locations and climate regimes are evaluated against GPM, TRMM, and CloudSat/CALIPSO satellite observations. The Goddard MMF has also been coupled with the Goddard Satellite Data Simulation Unit (G-SDSU), a system with multi-satellite, multi-sensor, and multi-spectrum satellite simulators. The statistics of MMF-simulated radiances and backscattering can be directly compared with satellite observations to assess the strengths and/or deficiencies of MMF simulations and provide guidance on how to improve the MMF and its microphysics.

  3. Multi-scale biomedical systems: measurement challenges

    International Nuclear Information System (INIS)

    Summers, R

    2016-01-01

    Multi-scale biomedical systems are those that represent interactions in materials, sensors, and systems from a holistic perspective. It is possible to view such multi-scale activity using measurement of spatial scale or time scale, though in this paper only the former is considered. The biomedical application paradigm comprises interactions that range from quantum biological phenomena at scales of 10^-12 for one individual to epidemiological studies of disease spread in populations that in a pandemic lead to measurement at a scale of 10^+7. It is clear that there are measurement challenges at either end of this spatial scale, but those challenges that relate to the use of new technologies that deal with big data and health service delivery at the point of care are also considered. The measurement challenges lead to the use, in many cases, of model-based measurement and the adoption of virtual engineering. It is these measurement challenges that will be uncovered in this paper. (paper)

  4. Fast selection of miRNA candidates based on large-scale pre-computed MFE sets of randomized sequences.

    Science.gov (United States)

    Warris, Sven; Boymans, Sander; Muiser, Iwe; Noback, Michiel; Krijnen, Wim; Nap, Jan-Peter

    2014-01-13

    Small RNAs are important regulators of genome function, yet their prediction in genomes is still a major computational challenge. Statistical analyses of pre-miRNA sequences indicated that their 2D structure tends to have a minimal free energy (MFE) significantly lower than the MFE values of equivalently randomized sequences with the same nucleotide composition, in contrast to other classes of non-coding RNA. The computation of many MFEs is, however, too intensive to allow for genome-wide screenings. Using a local grid infrastructure, MFE distributions of random sequences were pre-calculated on a large scale. These distributions follow a normal distribution and can be used to determine the MFE distribution for any given sequence composition by interpolation, allowing on-the-fly calculation of the normal distribution for any candidate sequence composition. The speedup achieved makes genome-wide screening with this characteristic of a pre-miRNA sequence practical. Although this property alone is not sufficiently discriminative to distinguish miRNAs from other sequences, the MFE-based P-value should be added to the parameters of choice to be included in the selection of potential miRNA candidates for experimental verification.
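    The interpolation-plus-normal-distribution idea can be sketched as follows. The table of pre-computed (mean, sd) pairs keyed by GC fraction is a tiny stand-in for the large pre-calculated MFE sets described above; all numbers are illustrative, not from the paper.

```python
from statistics import NormalDist

# hypothetical pre-computed (mean, sd) of MFE (kcal/mol) for randomized
# sequences, keyed by GC fraction on a coarse grid
MFE_TABLE = {0.40: (-28.0, 4.5), 0.50: (-32.0, 4.0), 0.60: (-37.0, 3.8)}

def interpolate_params(gc):
    """Linearly interpolate (mean, sd) between the two nearest grid points."""
    keys = sorted(MFE_TABLE)
    lo = max(k for k in keys if k <= gc)
    hi = min(k for k in keys if k >= gc)
    if lo == hi:
        return MFE_TABLE[lo]
    t = (gc - lo) / (hi - lo)
    (m0, s0), (m1, s1) = MFE_TABLE[lo], MFE_TABLE[hi]
    return m0 + t * (m1 - m0), s0 + t * (s1 - s0)

def mfe_p_value(mfe, gc):
    """P(a randomized sequence folds at least this low): lower normal tail."""
    mean, sd = interpolate_params(gc)
    return NormalDist(mean, sd).cdf(mfe)

# a candidate with GC = 0.55 and an unusually low MFE gets a small P-value
p = mfe_p_value(-45.0, 0.55)
print(p < 0.01)
```

    Since only the interpolation and one normal CDF are evaluated per candidate, this check is cheap enough to run genome-wide, which is the whole point of pre-computing the distributions.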

  5. 2D deblending using the multi-scale shaping scheme

    Science.gov (United States)

    Li, Qun; Ban, Xingan; Gong, Renbin; Li, Jinnuo; Ge, Qiang; Zu, Shaohuan

    2018-01-01

    Deblending can be posed as an inversion problem, which is ill-posed and requires constraints to obtain a unique and stable solution. In a blended record, the signal is coherent, whereas the interference is incoherent in some domains (e.g., the common receiver domain and the common offset domain). Due to this difference in sparsity, the coefficients of signal and interference lie in different curvelet scale domains and have different amplitudes. Taking these two differences into account, we propose a 2D multi-scale shaping scheme that constrains sparsity to separate the blended record. In the domains where the signal concentrates, the multi-scale scheme passes all the coefficients representing signal, while in the domains where the interference focuses, it suppresses the coefficients representing interference. Because the interference is suppressed markedly at each iteration, the constraints of the multi-scale shaping operator in all scale domains are weak, which guarantees the convergence of the algorithm. We evaluate the performance of the multi-scale shaping scheme and the traditional global shaping scheme using two synthetic examples and one field data example.

  6. GPU-based large-scale visualization

    KAUST Repository

    Hadwiger, Markus

    2013-11-19

    Recent advances in image and volume acquisition as well as computational advances in simulation have led to an explosion of the amount of data that must be visualized and analyzed. Modern techniques combine the parallel processing power of GPUs with out-of-core methods and data streaming to enable the interactive visualization of giga- and terabytes of image and volume data. A major enabler for interactivity is making both the computational and the visualization effort proportional to the amount of data that is actually visible on screen, decoupling it from the full data size. This leads to powerful display-aware multi-resolution techniques that enable the visualization of data of almost arbitrary size. The course consists of two major parts: An introductory part that progresses from fundamentals to modern techniques, and a more advanced part that discusses details of ray-guided volume rendering, novel data structures for display-aware visualization and processing, and the remote visualization of large online data collections. You will learn how to develop efficient GPU data structures and large-scale visualizations, implement out-of-core strategies and concepts such as virtual texturing that have only been employed recently, as well as how to use modern multi-resolution representations. These approaches reduce the GPU memory requirements of extremely large data to a working set size that fits into current GPUs. You will learn how to perform ray-casting of volume data of almost arbitrary size and how to render and process gigapixel images using scalable, display-aware techniques. We will describe custom virtual texturing architectures as well as recent hardware developments in this area. We will also describe client/server systems for distributed visualization, on-demand data processing and streaming, and remote visualization. We will describe implementations using OpenGL as well as CUDA, exploiting parallelism on GPUs combined with additional asynchronous

  7. Algorithmic foundation of multi-scale spatial representation

    CERN Document Server

    Li, Zhilin

    2006-01-01

    With the widespread use of GIS, multi-scale representation has become an important issue in the realm of spatial data handling. However, no book to date has systematically tackled the different aspects of this discipline. Emphasizing map generalization, Algorithmic Foundation of Multi-Scale Spatial Representation addresses the mathematical basis of multi-scale representation, specifically, the algorithmic foundation.Using easy-to-understand language, the author focuses on geometric transformations, with each chapter surveying a particular spatial feature. After an introduction to the essential operations required for geometric transformations as well as some mathematical and theoretical background, the book describes algorithms for a class of point features/clusters. It then examines algorithms for individual line features, such as the reduction of data points, smoothing (filtering), and scale-driven generalization, followed by a discussion of algorithms for a class of line features including contours, hydrog...

  8. Evolutionary Hierarchical Multi-Criteria Metaheuristics for Scheduling in Large-Scale Grid Systems

    CERN Document Server

    Kołodziej, Joanna

    2012-01-01

    One of the most challenging issues in modelling today's large-scale computational systems is to effectively manage highly parametrised distributed environments such as computational grids, clouds, ad hoc networks and P2P networks. Next-generation computational grids must provide a wide range of services and high performance computing infrastructures. Various types of information and data processed in the large-scale dynamic grid environment may be incomplete, imprecise, and fragmented, which complicates the specification of proper evaluation criteria and which affects both the availability of resources and the final collective decisions of users. The complexity of grid architectures and grid management may also contribute towards higher energy consumption. All of these issues necessitate the development of intelligent resource management techniques, which are capable of capturing all of this complexity and optimising meaningful metrics for a wide range of grid applications.   This book covers hot topics in t...

  9. Multi-scale calculation based on dual domain material point method combined with molecular dynamics

    Energy Technology Data Exchange (ETDEWEB)

    Dhakal, Tilak Raj [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2017-02-27

    This dissertation combines the dual domain material point method (DDMP) with molecular dynamics (MD) in an attempt to create a multi-scale numerical method to simulate materials undergoing large deformations with high strain rates. In these types of problems, the material is often in a thermodynamically non-equilibrium state, and conventional constitutive relations are often not available. In this method, the closure quantities, such as stress, at each material point are calculated from an MD simulation of a group of atoms surrounding the material point. Rather than restricting the multi-scale simulation to a small spatial region, such as phase interfaces or crack tips, this multi-scale method can be used to consider non-equilibrium thermodynamic effects in a macroscopic domain. The method takes advantage of the fact that material points communicate only with mesh nodes, not among themselves; therefore MD simulations for material points can be performed independently in parallel. First, using a one-dimensional shock problem as an example, the numerical properties of the original material point method (MPM), the generalized interpolation material point (GIMP) method, the convected particle domain interpolation (CPDI) method, and the DDMP method are investigated. Among these methods, only the DDMP method converges as the number of particles increases, but the large number of particles needed for convergence makes the method very expensive, especially in our multi-scale method where we calculate stress at each material point using MD simulation. To improve DDMP, the sub-point method is introduced in this dissertation, which provides high-quality numerical solutions with a very small number of particles. The multi-scale method based on DDMP with sub-points is successfully implemented for a one-dimensional problem of shock wave propagation in a cerium crystal. The MD simulation to calculate stress at each material point is performed on a GPU using CUDA to accelerate the

  10. Large-scale solar purchasing

    International Nuclear Information System (INIS)

    1999-01-01

    The principal objective of the project was to participate in the definition of a new IEA task concerning solar procurement ("the Task") and to assess whether involvement in the task would be in the interest of the UK active solar heating industry. The project also aimed to assess the importance of large scale solar purchasing to UK active solar heating market development and to evaluate the level of interest in large scale solar purchasing amongst potential large scale purchasers (in particular housing associations and housing developers). A further aim of the project was to consider means of stimulating large scale active solar heating purchasing activity within the UK. (author)

  11. Incipient multiple fault diagnosis in real time with applications to large-scale systems

    International Nuclear Information System (INIS)

    Chung, H.Y.; Bien, Z.; Park, J.H.; Seon, P.H.

    1994-01-01

    By using a modified signed directed graph (SDG) together with distributed artificial neural networks and a knowledge-based system, a method of incipient multi-fault diagnosis is presented for large-scale physical systems with complex pipes and instrumentation such as valves, actuators, sensors, and controllers. The proposed method is designed to (1) make real-time incipient fault diagnosis possible for large-scale systems, (2) perform fault diagnosis not only in the steady-state case but also in the transient case by using a concept of fault propagation time, which is newly adopted in the SDG model, (3) provide highly reliable diagnosis results and an explanation capability for the diagnosed faults, as in an expert system, and (4) diagnose pipe damage such as leaks, breaks, or throttling. The method is applied to the diagnosis of a pressurizer in the Kori Nuclear Power Plant (NPP) unit 2 in Korea under a transient condition, and the results show satisfactory performance for incipient multi-fault diagnosis of such a large-scale system in real time.

  12. Multi-scale associations between vegetation cover and woodland bird communities across a large agricultural region.

    Directory of Open Access Journals (Sweden)

    Karen Ikin

    Improving biodiversity conservation in fragmented agricultural landscapes has become an important global issue. Vegetation at the patch and landscape scale is important for species occupancy and diversity, yet few previous studies have explored multi-scale associations between vegetation and community assemblages. Here, we investigated how patch- and landscape-scale vegetation cover structure woodland bird communities. We asked: (1) How is the bird community associated with the vegetation structure of woodland patches and the amount of vegetation cover in the surrounding landscape? (2) Do species of conservation concern respond to woodland vegetation structure and surrounding vegetation cover differently to other species in the community? And (3) Can the relationships between the bird community and the woodland vegetation structure and surrounding vegetation cover be explained by the ecological traits of the species comprising the bird community? We studied 103 woodland patches (0.5-53.8 ha) over two time periods across a large (6,800 km²) agricultural region in southeastern Australia. We found that both patch vegetation and surrounding woody vegetation cover were important for structuring the bird community, and that these relationships were consistent over time. In particular, the occurrence of mistletoe within the patches and high values of woody vegetation cover within 1,000 ha and 10,000 ha were important, especially for bird species of conservation concern. We found that the majority of these species displayed similar, positive responses to patch and landscape vegetation attributes. We also found that these relationships were related to the foraging and nesting traits of the bird community. Our findings suggest that management strategies to increase both remnant vegetation quality and the cover of surrounding woody vegetation in fragmented agricultural landscapes may lead to improved conservation of bird communities.

  13. Multi-Scale Scattering Transform in Music Similarity Measuring

    Science.gov (United States)

    Wang, Ruobai

    The scattering transform is a Mel-frequency-spectrum-based, time-deformation-stable method that can be used to evaluate music similarity. Compared with dynamic time warping, it performs better at detecting similar audio signals under local time-frequency deformation. Multi-scale scattering means combining scattering transforms of different window lengths. This paper argues that the multi-scale scattering transform is a good alternative to dynamic time warping in music similarity measuring. We tested the performance of the multi-scale scattering transform against other popular methods, with data designed to represent different conditions.

  14. Multi-Scale Factor Analysis of High-Dimensional Brain Signals

    KAUST Repository

    Ting, Chee-Ming

    2017-05-18

    In this paper, we develop an approach to modeling high-dimensional networks with a large number of nodes arranged in a hierarchical and modular structure. We propose a novel multi-scale factor analysis (MSFA) model which partitions the massive spatio-temporal data defined over the complex networks into a finite set of regional clusters. To achieve further dimension reduction, we represent the signals in each cluster by a small number of latent factors. The correlation matrix for all nodes in the network is approximated by lower-dimensional sub-structures derived from the cluster-specific factors. To estimate regional connectivity between numerous nodes (within each cluster), we apply principal components analysis (PCA) to produce factors which are derived as the optimal reconstruction of the observed signals under the squared loss. Then, we estimate global connectivity (between clusters or sub-networks) based on the factors across regions, using the RV coefficient as the cross-dependence measure. This gives a reliable and computationally efficient multi-scale analysis of both regional and global dependencies of the large networks. The proposed novel approach is applied to estimate brain connectivity networks using functional magnetic resonance imaging (fMRI) data. Results on resting-state fMRI reveal interesting modular and hierarchical organization of human brain networks during rest.
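    The RV coefficient used as the cross-dependence measure between cluster factors is straightforward to compute. The sketch below (with made-up data in place of fMRI factors) just shows the definition, not the MSFA pipeline.

```python
def center(X):
    """Column-center a data matrix given as a list of rows."""
    n = len(X)
    means = [sum(col) / n for col in zip(*X)]
    return [[x - m for x, m in zip(row, means)] for row in X]

def gram(X):
    """Outer Gram matrix X X^T of a (centered) matrix."""
    return [[sum(a * b for a, b in zip(r1, r2)) for r2 in X] for r1 in X]

def rv_coefficient(X, Y):
    """RV coefficient between two data sets observed on the same n samples:
    RV = <XX', YY'> / (||XX'|| ||YY'||), a matrix analogue of r^2 in [0, 1]."""
    A, B = gram(center(X)), gram(center(Y))
    dot = lambda M, N: sum(m * n for row_m, row_n in zip(M, N)
                           for m, n in zip(row_m, row_n))
    return dot(A, B) / (dot(A, A) * dot(B, B)) ** 0.5

# two factor sets on the same 4 samples; Y is just a rescaling of X,
# so their cross-dependence is maximal
X = [[1.0, 2.0], [2.0, 0.5], [3.0, 1.5], [4.0, 3.0]]
Y = [[2.0, 4.0], [4.0, 1.0], [6.0, 3.0], [8.0, 6.0]]
print(round(rv_coefficient(X, Y), 6))
```

    Because the RV coefficient compares the n×n Gram matrices rather than the raw columns, it is well defined even when the two clusters contribute different numbers of factors.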

  15. Scaling Argument of Anisotropic Random Walk

    International Nuclear Information System (INIS)

    Xu Bingzhen; Jin Guojun; Wang Feifeng

    2005-01-01

    In this paper, we analytically discuss the scaling properties of the average square end-to-end distance ⟨R²⟩ for an anisotropic random walk in D-dimensional space (D ≥ 2), and the returning probability P_n(r_0) for the walker into a certain neighborhood of the origin. We not only give the calculating formulas for ⟨R²⟩ and P_n(r_0), but also point out that if there is a symmetric axis for the distribution of the probability density of a single step displacement, we always obtain ⟨R_{⊥,n}²⟩ ∼ n, where ⊥ refers to the projections of the displacement perpendicular to each symmetric axis of the walk; in D-dimensional space with D mutually perpendicular symmetric axes, we always have ⟨R_n²⟩ ∼ n and the random walk behaves like a purely random motion; if the number of mutually perpendicular symmetric axes is smaller than the dimension of the space, we must have ⟨R_n²⟩ ∼ n² for very large n and the walk behaves like ballistic motion. It is worthwhile to point out that unlike the isotropic random walk in one and two dimensions, which is certain to return to the neighborhood of the origin, in general there is only a nonzero probability for the anisotropic random walker in two dimensions to return to that neighborhood.
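    The two scaling regimes can be checked with a small Monte Carlo experiment. This is an illustrative 1D walk, not the paper's D ≥ 2 analytical setting: a symmetric walk gives ⟨R_n²⟩ ∼ n, while a biased walk (no symmetry) is drift-dominated and gives ⟨R_n²⟩ ∼ n² for large n, so doubling n roughly doubles ⟨R_n²⟩ in the first case and quadruples it in the second.

```python
import random

random.seed(1)

def mean_r2(n_steps, n_walkers, bias):
    """Average squared end-to-end distance of a 1D walk whose step is
    +1 with probability 0.5 + bias and -1 with probability 0.5 - bias."""
    total = 0.0
    for _ in range(n_walkers):
        x = sum(1 if random.random() < 0.5 + bias else -1
                for _ in range(n_steps))
        total += x * x
    return total / n_walkers

# symmetric walk: <R^2> ~ n, so the ratio at doubled n is near 2
sym_ratio = mean_r2(2000, 2000, 0.0) / mean_r2(1000, 2000, 0.0)
# biased walk: drift dominates, <R^2> ~ n^2, so the ratio is near 4
drift_ratio = mean_r2(2000, 2000, 0.25) / mean_r2(1000, 2000, 0.25)
print(sym_ratio < 3.0 < drift_ratio)
```

    The threshold of 3 cleanly separates the diffusive ratio (≈2) from the ballistic one (≈4) at these sample sizes.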

  16. A Randomized Controlled Trial Evaluation of "Time to Read", a Volunteer Tutoring Program for 8- to 9-Year-Olds

    Science.gov (United States)

    Miller, Sarah; Connolly, Paul

    2013-01-01

    Tutoring is commonly employed to prevent early reading failure, and evidence suggests that it can have a positive effect. This article presents findings from a large-scale ("n" = 734) randomized controlled trial evaluation of the effect of "Time to Read"--a volunteer tutoring program aimed at children aged 8 to 9 years--on…

  17. Control method for multi-input multi-output non-Gaussian random vibration test with cross spectra consideration

    Directory of Open Access Journals (Sweden)

    Ronghui ZHENG

    2017-12-01

    Full Text Available A control method for Multi-Input Multi-Output (MIMO) non-Gaussian random vibration tests with cross spectra consideration is proposed in this paper. The aim of the proposed control method is to replicate specified references composed of auto spectral densities, cross spectral densities and kurtoses on the test article in the laboratory. It is found that the cross spectral densities bring intractable coupling problems and induce difficulty in controlling the multi-output kurtoses. Hence, a sequential phase modification method is put forward to solve the coupling problems in the multi-input multi-output non-Gaussian random vibration test. To achieve the specified responses, an improved zero memory nonlinear transformation is first utilized to modify the Fourier phases of the signals, with the sequential phase modification method, to obtain one frame of reference response signals that satisfy the reference spectra and reference kurtoses. Then, an inverse system method is used in the frequency domain to obtain the continuous stationary drive signals. At the same time, the matrix power control algorithm is utilized to further control the spectra and kurtoses of the response signals. At the end of the paper, a simulation example with a cantilever beam and a vibration shaker test are presented, and the results support the proposed method very well. Keywords: Cross spectra, Kurtosis control, Multi-input multi-output, Non-Gaussian, Random vibration test

  18. Measuring the topology of large-scale structure in the universe

    Science.gov (United States)

    Gott, J. Richard, III

    1988-11-01

    An algorithm for quantitatively measuring the topology of large-scale structure has now been applied to a large number of observational data sets. The present paper summarizes and provides an overview of some of these observational results. On scales significantly larger than the correlation length, larger than about 1200 km/s, the cluster and galaxy data are fully consistent with a sponge-like random phase topology. At a smoothing length of about 600 km/s, however, the observed genus curves show a small shift in the direction of a meatball topology. Cold dark matter (CDM) models show similar shifts at these scales but not generally as large as those seen in the data. Bubble models, with voids completely surrounded on all sides by walls of galaxies, show shifts in the opposite direction. The CDM model is overall the most successful in explaining the data.

  19. Measuring the topology of large-scale structure in the universe

    International Nuclear Information System (INIS)

    Gott, J.R. III

    1988-01-01

    An algorithm for quantitatively measuring the topology of large-scale structure has now been applied to a large number of observational data sets. The present paper summarizes and provides an overview of some of these observational results. On scales significantly larger than the correlation length, larger than about 1200 km/s, the cluster and galaxy data are fully consistent with a sponge-like random phase topology. At a smoothing length of about 600 km/s, however, the observed genus curves show a small shift in the direction of a meatball topology. Cold dark matter (CDM) models show similar shifts at these scales but not generally as large as those seen in the data. Bubble models, with voids completely surrounded on all sides by walls of galaxies, show shifts in the opposite direction. The CDM model is overall the most successful in explaining the data. 45 references
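
    For a Gaussian (random phase) field, the genus curve as a function of the density threshold ν has the standard analytic form g(ν) ∝ (1 − ν²) exp(−ν²/2); shifts of an observed genus curve relative to this symmetric template are what distinguish "meatball" from "bubble" topologies. A quick sketch of that template, with an arbitrary amplitude normalisation:

```python
import numpy as np

def random_phase_genus(nu, amplitude=1.0):
    """Genus per unit volume of a Gaussian random-phase density field,
    as a function of the threshold nu (in standard deviations)."""
    return amplitude * (1.0 - nu ** 2) * np.exp(-nu ** 2 / 2.0)

nu = np.linspace(-3, 3, 601)
g = random_phase_genus(nu)

# Sponge-like regime: positive genus (many tunnels) near the median threshold,
# negative genus (isolated clusters or voids) beyond |nu| = 1.
print(random_phase_genus(np.array([0.0])))         # maximum at nu = 0
print(random_phase_genus(np.array([-1.0, 1.0])))   # zero crossings at nu = +/-1
```

    The curve is symmetric about ν = 0; a meatball shift moves its features toward positive ν, a bubble shift toward negative ν.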

  20. Multi-Label Learning via Random Label Selection for Protein Subcellular Multi-Locations Prediction.

    Science.gov (United States)

    Wang, Xiao; Li, Guo-Zheng

    2013-03-12

    Prediction of protein subcellular localization is an important but challenging problem, particularly when proteins may simultaneously exist at, or move between, two or more different subcellular location sites. Most existing protein subcellular localization methods deal only with single-location proteins. In the past few years, only a few methods have been proposed to tackle proteins with multiple locations. However, they adopt only a simple strategy, that is, transforming the multi-location proteins to multiple proteins with a single location, which does not take the correlations among different subcellular locations into account. In this paper, a novel method named RALS (multi-label learning via RAndom Label Selection) is proposed to learn from multi-location proteins in an effective and efficient way. Through a five-fold cross-validation test on a benchmark dataset, we demonstrate that our proposed method, which takes label correlations into consideration, clearly outperforms the baseline BR method, which does not, indicating that correlations among different subcellular locations really exist and contribute to improved prediction performance. Experimental results on two benchmark datasets also show that our proposed method achieves significantly higher performance than some other state-of-the-art methods in predicting the subcellular multi-locations of proteins. The prediction web server is available at http://levis.tongji.edu.cn:8080/bioinfo/MLPred-Euk/ for public use.

  1. Evaluation of drought propagation in an ensemble mean of large-scale hydrological models

    Directory of Open Access Journals (Sweden)

    A. F. Van Loon

    2012-11-01

    underestimation of wet-to-dry-season droughts and snow-related droughts. Furthermore, almost no composite droughts were simulated for slowly responding areas, while many multi-year drought events were expected in these systems.

    We conclude that most drought propagation processes are reasonably well reproduced by the ensemble mean of large-scale models in contrasting catchments in Europe. Challenges, however, remain in catchments with cold and semi-arid climates and catchments with large storage in aquifers or lakes. This leads to a high uncertainty in hydrological drought simulation at large scales. Improvement of drought simulation in large-scale models should focus on a better representation of hydrological processes that are important for drought development, such as evapotranspiration, snow accumulation and melt, and especially storage. Besides the more explicit inclusion of storage in large-scale models, the parametrisation of storage processes also requires attention, for example through a global-scale dataset on aquifer characteristics, improved large-scale datasets on other land characteristics (e.g. soils, land cover), and calibration/evaluation of the models against observations of storage (e.g. in snow, groundwater).

  2. Multi-Annual Climate Predictions for Fisheries: An Assessment of Skill of Sea Surface Temperature Forecasts for Large Marine Ecosystems

    Directory of Open Access Journals (Sweden)

    Desiree Tommasi

    2017-06-01

    Full Text Available Decisions made by fishers and fisheries managers are informed by climate and fisheries observations that now often span more than 50 years. Multi-annual climate forecasts could further inform such decisions if they were skillful in predicting future conditions relative to the 50-year scope of past variability. We demonstrate that an existing multi-annual prediction system skillfully forecasts the probability of next year, the next 1–3 years, and the next 1–10 years being warmer or cooler than the 50-year average at the surface in coastal ecosystems. Probabilistic forecasts of upper and lower sea surface temperature (SST) terciles over the next 3 or 10 years from the GFDL CM 2.1 10-member ensemble global prediction system showed significant improvements in skill over the use of a 50-year climatology for most Large Marine Ecosystems (LMEs) in the North Atlantic, the western Pacific, and Indian oceans. Through a comparison of the forecast skill of initialized and uninitialized hindcasts, we demonstrate that this skill is largely due to the predictable signature of radiative forcing changes over the 50-year timescale rather than prediction of evolving modes of climate variability. North Atlantic LMEs stood out as the only coastal regions where initialization significantly contributed to SST prediction skill at the 1 to 10 year scale.
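
    Skill over a fixed climatology is typically quantified with a Brier skill score on tercile events. The sketch below uses an invented synthetic ensemble (not the GFDL CM 2.1 hindcasts); forecast probabilities come from the fraction of members exceeding the climatological upper tercile:

```python
import numpy as np

rng = np.random.default_rng(6)

# Synthetic stand-in: 50 "years" of true SST anomalies and a 10-member
# ensemble whose members track the truth with noise (invented numbers).
years, members = 50, 10
truth = rng.standard_normal(years)
ensemble = truth[:, None] + 0.6 * rng.standard_normal((years, members))

upper = np.quantile(truth, 2 / 3)            # climatological upper tercile
event = (truth > upper).astype(float)        # did the warm tercile occur?

p_forecast = (ensemble > upper).mean(axis=1) # ensemble probability per year
p_clim = 1 / 3                               # climatology always says 1/3

bs_f = np.mean((p_forecast - event) ** 2)
bs_c = np.mean((p_clim - event) ** 2)
bss = 1.0 - bs_f / bs_c                      # > 0 means skill over climatology
print(round(bss, 3))
```

    A positive Brier skill score means the probabilistic forecast beats always issuing the climatological 1/3; the paper's comparison of initialized versus uninitialized hindcasts uses the same kind of reference-relative scoring.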

  3. Multi-Index Stochastic Collocation for random PDEs

    KAUST Repository

    Haji Ali, Abdul Lateef

    2016-03-28

    In this work we introduce the Multi-Index Stochastic Collocation method (MISC) for computing statistics of the solution of a PDE with random data. MISC is a combination technique based on mixed differences of spatial approximations and quadratures over the space of random data. We propose an optimization procedure to select the most effective mixed differences to include in the MISC estimator: such optimization is a crucial step and allows us to build a method that, provided with sufficient solution regularity, is potentially more effective than other multi-level collocation methods already available in the literature. We then provide a complexity analysis that assumes decay rates of product type for such mixed differences, showing that in the optimal case the convergence rate of MISC is only dictated by the convergence of the deterministic solver applied to a one-dimensional problem. We show the effectiveness of MISC with some computational tests, comparing it with other related methods available in the literature, such as the Multi-Index and Multilevel Monte Carlo, Multilevel Stochastic Collocation, Quasi Optimal Stochastic Collocation and Sparse Composite Collocation methods.
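
    The core of the combination technique can be illustrated on a deterministic toy problem: approximate a 2D integral with a "spatial" level a and a "stochastic quadrature" level b, form first mixed differences of the full-tensor approximations, and sum them over a triangular (downward-closed) index set. This is a minimal sketch of the mixed-difference idea only, not the MISC estimator or its optimised index sets:

```python
import numpy as np

def trap(vals, h):
    """Composite trapezoid rule along the last axis with spacing h."""
    return (vals.sum(axis=-1) - 0.5 * (vals[..., 0] + vals[..., -1])) * h

def tensor_approx(a, b):
    """Full-tensor approximation I[a,b] of the toy integral of exp(x*y)
    over [0,1]^2, with 2^a+1 points in x and 2^b+1 points in y."""
    x = np.linspace(0.0, 1.0, 2 ** a + 1)
    y = np.linspace(0.0, 1.0, 2 ** b + 1)
    fxy = np.exp(x[:, None] * y[None, :])
    inner = trap(fxy, 1.0 / 2 ** b)          # integrate over y
    return trap(inner, 1.0 / 2 ** a)         # then over x

def mixed_difference(a, b):
    """First mixed difference Delta[a,b], with I[-1,.] = I[.,-1] = 0."""
    total = 0.0
    for da in (0, 1):
        for db in (0, 1):
            if a - da >= 0 and b - db >= 0:
                total += (-1) ** (da + db) * tensor_approx(a - da, b - db)
    return total

def combination_estimate(level):
    """Sum the mixed differences over the triangular set a + b <= level."""
    return sum(mixed_difference(a, b)
               for a in range(level + 1) for b in range(level + 1 - a))

reference = tensor_approx(10, 10)            # fine-grid stand-in for the truth
for level in (1, 3, 5):
    print(level, abs(combination_estimate(level) - reference))
```

    Summing the mixed differences over a full rectangle of indices telescopes back to the finest tensor approximation; the sparse triangular set keeps most of the accuracy at a fraction of the cost, which is the mechanism MISC optimises.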

  4. Multi-Index Stochastic Collocation for random PDEs

    KAUST Repository

    Haji Ali, Abdul Lateef; Nobile, Fabio; Tamellini, Lorenzo; Tempone, Raul

    2016-01-01

    In this work we introduce the Multi-Index Stochastic Collocation method (MISC) for computing statistics of the solution of a PDE with random data. MISC is a combination technique based on mixed differences of spatial approximations and quadratures over the space of random data. We propose an optimization procedure to select the most effective mixed differences to include in the MISC estimator: such optimization is a crucial step and allows us to build a method that, provided with sufficient solution regularity, is potentially more effective than other multi-level collocation methods already available in the literature. We then provide a complexity analysis that assumes decay rates of product type for such mixed differences, showing that in the optimal case the convergence rate of MISC is only dictated by the convergence of the deterministic solver applied to a one-dimensional problem. We show the effectiveness of MISC with some computational tests, comparing it with other related methods available in the literature, such as the Multi-Index and Multilevel Monte Carlo, Multilevel Stochastic Collocation, Quasi Optimal Stochastic Collocation and Sparse Composite Collocation methods.

  5. Validating Remotely Sensed Land Surface Evapotranspiration Based on Multi-scale Field Measurements

    Science.gov (United States)

    Jia, Z.; Liu, S.; Ziwei, X.; Liang, S.

    2012-12-01

    Although validation experiments demonstrated that the models yield accurate estimates at flux measurement sites, the question remains whether they perform well over the broader landscape. Moreover, a large number of RS_ET products have been released in recent years. Thus, we also pay attention to the cross-validation of RS_ET derived from multi-source models. "The Multi-scale Observation Experiment on Evapotranspiration over Heterogeneous Land Surfaces: Flux Observation Matrix" campaign was carried out in the middle reaches of the Heihe River Basin, China, in 2012. Flux measurements from an observation matrix composed of 22 EC and 4 LAS instruments were acquired to investigate the cross-validation of multi-source models over different landscapes. In this case, six remote sensing models, including the empirical statistical model, the one-source and two-source models, the Penman-Monteith equation based model, the Priestley-Taylor equation based model, and the complementary relationship based model, are used to perform an intercomparison. All the results from the two cases of RS_ET validation showed that the proposed validation methods are reasonable and feasible.

  6. Multi scales based sparse matrix spectral clustering image segmentation

    Science.gov (United States)

    Liu, Zhongmin; Chen, Zhicai; Li, Zhanming; Hu, Wenjin

    2018-04-01

    In image segmentation, spectral clustering algorithms have to adopt an appropriate scaling parameter to calculate the similarity matrix between the pixels, which can have a great impact on the clustering result. Moreover, when the number of data instances is large, the computational complexity and memory use of the algorithm greatly increase. To solve these two problems, we propose a new spectral clustering image segmentation algorithm based on multiple scales and a sparse matrix. We first devise a new feature extraction method, then extract the features of the image at different scales, and finally use the feature information to construct a sparse similarity matrix, which improves operational efficiency. Compared with the traditional spectral clustering algorithm, image segmentation experiments show that our algorithm has better accuracy and robustness.
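
    A minimal NumPy illustration of the underlying machinery (Gaussian similarity sparsified by thresholding, normalised Laplacian, Fiedler-vector split). The paper's multi-scale feature extraction is not reproduced here, and the two-blob data is invented for the demo:

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy "pixels": two feature-space blobs standing in for two image regions.
x = np.vstack([rng.normal(0.0, 0.4, (40, 2)), rng.normal(2.5, 0.4, (40, 2))])
labels = np.array([0] * 40 + [1] * 40)

# Gaussian similarity, sparsified by zeroing near-zero entries.
d2 = ((x[:, None, :] - x[None, :, :]) ** 2).sum(-1)
w = np.exp(-d2 / (2.0 * 1.0 ** 2))
w[w < 1e-3] = 0.0
np.fill_diagonal(w, 0.0)

# Normalised Laplacian and its Fiedler (second-smallest) eigenvector.
deg = w.sum(axis=1)
lap = np.eye(len(x)) - w / np.sqrt(np.outer(deg, deg))
evals, evecs = np.linalg.eigh(lap)
pred = (evecs[:, 1] > 0).astype(int)

accuracy = max((pred == labels).mean(), (pred != labels).mean())
print(accuracy)        # the sign of the Fiedler vector separates the blobs
```

    Thresholding plays the role of the sparse similarity matrix here; on real images the similarity would be built from the multi-scale features rather than raw coordinates.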

  7. A Combined Eulerian-Lagrangian Data Representation for Large-Scale Applications.

    Science.gov (United States)

    Sauer, Franz; Xie, Jinrong; Ma, Kwan-Liu

    2017-10-01

    The Eulerian and Lagrangian reference frames each provide a unique perspective when studying and visualizing results from scientific systems. As a result, many large-scale simulations produce data in both formats, and analysis tasks that simultaneously utilize information from both representations are becoming increasingly popular. However, due to their fundamentally different nature, drawing correlations between these data formats is a computationally difficult task, especially in a large-scale setting. In this work, we present a new data representation which combines both reference frames into a joint Eulerian-Lagrangian format. By reorganizing Lagrangian information according to the Eulerian simulation grid into a "unit cell" based approach, we can provide an efficient out-of-core means of sampling, querying, and operating with both representations simultaneously. We also extend this design to generate multi-resolution subsets of the full data to suit the viewer's needs and provide a fast flow-aware trajectory construction scheme. We demonstrate the effectiveness of our method using three large-scale real world scientific datasets and provide insight into the types of performance gains that can be achieved.
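
    The "unit cell" reorganisation can be sketched as a sort by cell index: once Lagrangian particle records are grouped into contiguous slices per Eulerian cell, a cell-local query becomes two binary searches. This is an in-memory 2D reduction of the idea with invented data, not the authors' out-of-core format:

```python
import numpy as np

rng = np.random.default_rng(3)

# Eulerian grid: 8x8 unit cells over [0, 8)^2; Lagrangian data: positions.
n_cells = 8
particles = rng.uniform(0.0, 8.0, (1000, 2))

# "Unit cell" reorganization: sort particle indices by flattened cell id so
# each cell's particles occupy one contiguous slice of the index array.
cell_ids = particles[:, 1].astype(int) * n_cells + particles[:, 0].astype(int)
order = np.argsort(cell_ids, kind="stable")
sorted_ids = cell_ids[order]
starts = np.searchsorted(sorted_ids, np.arange(n_cells * n_cells))
ends = np.searchsorted(sorted_ids, np.arange(n_cells * n_cells), side="right")

def particles_in_cell(ix, iy):
    """Indices of all particles inside Eulerian cell (ix, iy)."""
    c = iy * n_cells + ix
    return order[starts[c]:ends[c]]

# Cross-check one cell against a brute-force scan over all particles.
got = set(particles_in_cell(3, 5))
want = {i for i, p in enumerate(particles)
        if int(p[0]) == 3 and int(p[1]) == 5}
print(got == want)
```

    Because each cell maps to one contiguous slice, sampling or querying a spatial region touches only the slices of the cells it covers, which is what makes the layout friendly to out-of-core access.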

  8. Parameters affecting the resilience of scale-free networks to random failures.

    Energy Technology Data Exchange (ETDEWEB)

    Link, Hamilton E.; LaViolette, Randall A.; Lane, Terran (University of New Mexico, Albuquerque, NM); Saia, Jared (University of New Mexico, Albuquerque, NM)

    2005-09-01

    It is commonly believed that scale-free networks are robust to massive numbers of random node deletions. For example, Cohen et al. in (1) study scale-free networks including some which approximate the measured degree distribution of the Internet. Their results suggest that if each node in this network failed independently with probability 0.99, most of the remaining nodes would still be connected in a giant component. In this paper, we show that a large and important subclass of scale-free networks are not robust to massive numbers of random node deletions. In particular, we study scale-free networks which have minimum node degree of 1 and a power-law degree distribution beginning with nodes of degree 1 (power-law networks). We show that, in a power-law network approximating the Internet's reported distribution, when the probability of deletion of each node is 0.5 only about 25% of the surviving nodes in the network remain connected in a giant component, and the giant component does not persist beyond a critical failure rate of 0.9. The new result is partially due to improved analytical accommodation of the large number of degree-0 nodes that result after node deletions. Our results apply to power-law networks with a wide range of power-law exponents, including Internet-like networks. We give both analytical and empirical evidence that such networks are not generally robust to massive random node deletions.
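
    The experiment is straightforward to reproduce in miniature: build a configuration-model network with a power-law degree sequence (minimum degree 1), delete each node independently with probability p, and measure the surviving giant-component share with union-find. The sketch below is an invented small-scale version (n = 2000, self-loops and multi-edges left uncorrected), not the paper's analysis of Internet-scale graphs:

```python
import numpy as np

rng = np.random.default_rng(4)

def power_law_graph(n, alpha=2.5, kmax=100):
    """Configuration-model edge list with P(k) ~ k^-alpha and minimum degree 1."""
    deg = np.minimum(rng.random(n) ** (-1.0 / (alpha - 1.0)), kmax).astype(int)
    deg = np.maximum(deg, 1)
    if deg.sum() % 2:                       # stub count must be even
        deg[0] += 1
    stubs = np.repeat(np.arange(n), deg)
    rng.shuffle(stubs)
    return stubs.reshape(-1, 2)             # random stub pairing

def giant_fraction(n, edges, p_del):
    """Largest-component share of the survivors after random node deletion."""
    alive = rng.random(n) >= p_del
    parent = np.arange(n)

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]   # path halving
            i = parent[i]
        return i

    for a, b in edges:
        if alive[a] and alive[b]:
            parent[find(a)] = find(b)
    roots = [find(i) for i in range(n) if alive[i]]
    if not roots:
        return 0.0
    _, counts = np.unique(roots, return_counts=True)
    return counts.max() / alive.sum()

n = 2000
edges = power_law_graph(n)
for p in (0.0, 0.5, 0.9):
    print(p, round(giant_fraction(n, edges, p), 3))
```

    With most of the mass on degree-1 nodes, the surviving giant-component share collapses well before the deletion probabilities at which heavier-tailed models are claimed to stay connected, which is the paper's qualitative point.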

  9. A Novel Architecture of Large-scale Communication in IOT

    Science.gov (United States)

    Ma, Wubin; Deng, Su; Huang, Hongbin

    2018-03-01

    In recent years, many scholars have done a great deal of research on the development of the Internet of Things and networked physical systems. However, few have studied the large-scale communication architecture of the IOT in detail. In fact, the non-uniform technologies between IPv6 and access points have led to a lack of broad principles for large-scale communication architectures. Therefore, this paper presents the Uni-IPv6 Access and Information Exchange Method (UAIEM), a new architecture and algorithm that addresses large-scale communications in the IOT.

  10. Large-scale pool fires

    Directory of Open Access Journals (Sweden)

    Steinhaus Thomas

    2007-01-01

    Full Text Available A review of research into the burning behavior of large pool fires and fuel spill fires is presented. The features which distinguish such fires from smaller pool fires are mainly associated with the fire dynamics at low source Froude numbers and the radiative interaction with the fire source. In hydrocarbon fires, higher soot levels at increased diameters result in radiation blockage effects around the perimeter of large fire plumes; this yields lower emissive powers and a drastic reduction in the radiative loss fraction; whilst there are simplifying factors with these phenomena, arising from the fact that soot yield can saturate, there are other complications deriving from the intermittency of the behavior, with luminous regions of efficient combustion appearing randomly on the outer surface of the fire according to the turbulent fluctuations in the fire plume. Knowledge of the fluid flow instabilities, which lead to the formation of large eddies, is also key to understanding the behavior of large-scale fires. Here modeling tools can be effectively exploited in order to investigate the fluid flow phenomena, including RANS- and LES-based computational fluid dynamics codes. The latter are well-suited to representation of the turbulent motions, but a number of challenges remain with their practical application. Massively-parallel computational resources are likely to be necessary in order to be able to adequately address the complex coupled phenomena to the level of detail that is necessary.

  11. Multi-year predictability in a coupled general circulation model

    Energy Technology Data Exchange (ETDEWEB)

    Power, Scott; Colman, Rob [Bureau of Meteorology Research Centre, Melbourne, VIC (Australia)

    2006-02-01

    surface heat fluxes by the surface mixed layer act to low pass filter the ENSO-forcing. The resulting off-equatorial variability is therefore more coherent with low pass filtered (decadal) ENSO indices than with unfiltered ENSO indices. Consequently large correlations between variability and NINO3 extend further poleward on decadal time-scales than they do on interannual time-scales. This explains why decadal ENSO-like patterns have a broader meridional structure than their interannual counterparts. This difference in appearance can occur even if ENSO indices do not have any predictability beyond interannual time-scales. The wings around 15-20° S, and sub-surface variability at many other locations are predictable on interannual and multi-year time-scales. This includes westward propagating internal RWs within about 25° of the equator. The slowest of these take up to 4 years to reach the western boundary. This sub-surface predictability has significant oceanographic interest. (orig.)

  12. A Method of Vector Map Multi-scale Representation Considering User Interest on Subdivision Gird

    Directory of Open Access Journals (Sweden)

    YU Tong

    2016-12-01

    Full Text Available Compared with traditional spatial data models and methods, global subdivision grids show a great advantage in the organization and representation of massive spatial data. In view of this, a method of vector map multi-scale representation considering user interest on a subdivision grid is proposed. First, a spatial interest field is built from a large number of POI records to describe the spatial distribution of user interest in geographic information. Second, spatial features are classified and graded, and their representation scale ranges are determined. Finally, subdivision surfaces at different levels are derived based on GeoSOT subdivision theory, and the correspondence between subdivision level and scale is established. According to the user interest of the subdivision surfaces, spatial features can be represented at different degrees of detail. This realizes multi-scale representation of spatial data based on user interest. The experimental results show that this method not only satisfies users' general-to-detail and important-to-secondary spatial cognition demands, but also achieves a better multi-scale representation effect.

  13. Phenomenology of two-dimensional stably stratified turbulence under large-scale forcing

    KAUST Repository

    Kumar, Abhishek; Verma, Mahendra K.; Sukhatme, Jai

    2017-01-01

    In this paper, we characterise the scaling of energy spectra, and the interscale transfer of energy and enstrophy, for strongly, moderately and weakly stably stratified two-dimensional (2D) turbulence, restricted in a vertical plane, under large-scale random forcing. In the strongly stratified case, a large-scale vertically sheared horizontal flow (VSHF) coexists with small-scale turbulence. The VSHF consists of internal gravity waves, and the turbulent flow has a kinetic energy (KE) spectrum that follows an approximate k^-3 scaling with zero KE flux and a robust positive enstrophy flux. The spectrum of the turbulent potential energy (PE) also approximately follows a k^-3 power-law and its flux is directed to small scales. For moderate stratification, there is no VSHF and the KE of the turbulent flow exhibits Bolgiano–Obukhov scaling that transitions from a shallow k^-11/5 form at large scales to a steeper approximate k^-3 scaling at small scales. The entire range of scales shows a strong forward enstrophy flux, and interestingly, large (small) scales show an inverse (forward) KE flux. The PE flux in this regime is directed to small scales, and the PE spectrum is characterised by an approximate k^-1.64 scaling. Finally, for weak stratification, KE is transferred upscale and its spectrum closely follows a k^-2.5 scaling, while PE exhibits a forward transfer and its spectrum shows an approximate k^-1.6 power-law. For all stratification strengths, the total energy always flows from large to small scales and almost all the spectral indices are well explained by accounting for the scale-dependent nature of the corresponding flux.

  14. Phenomenology of two-dimensional stably stratified turbulence under large-scale forcing

    KAUST Repository

    Kumar, Abhishek

    2017-01-11

    In this paper, we characterise the scaling of energy spectra, and the interscale transfer of energy and enstrophy, for strongly, moderately and weakly stably stratified two-dimensional (2D) turbulence, restricted in a vertical plane, under large-scale random forcing. In the strongly stratified case, a large-scale vertically sheared horizontal flow (VSHF) coexists with small-scale turbulence. The VSHF consists of internal gravity waves, and the turbulent flow has a kinetic energy (KE) spectrum that follows an approximate k^-3 scaling with zero KE flux and a robust positive enstrophy flux. The spectrum of the turbulent potential energy (PE) also approximately follows a k^-3 power-law and its flux is directed to small scales. For moderate stratification, there is no VSHF and the KE of the turbulent flow exhibits Bolgiano–Obukhov scaling that transitions from a shallow k^-11/5 form at large scales to a steeper approximate k^-3 scaling at small scales. The entire range of scales shows a strong forward enstrophy flux, and interestingly, large (small) scales show an inverse (forward) KE flux. The PE flux in this regime is directed to small scales, and the PE spectrum is characterised by an approximate k^-1.64 scaling. Finally, for weak stratification, KE is transferred upscale and its spectrum closely follows a k^-2.5 scaling, while PE exhibits a forward transfer and its spectrum shows an approximate k^-1.6 power-law. For all stratification strengths, the total energy always flows from large to small scales and almost all the spectral indices are well explained by accounting for the scale-dependent nature of the corresponding flux.
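
    Spectral scalings of this kind can be checked mechanically on synthetic data. The sketch below builds a 1D random-phase signal whose energy spectrum follows k^-3 by construction and recovers the slope from a log-log fit; it is a generic spectral-analysis illustration, not the paper's 2D stratified simulations:

```python
import numpy as np

rng = np.random.default_rng(7)

n = 4096
k = np.arange(1, n // 2)
# Random-phase 1D field whose energy spectrum follows E(k) ~ k^-3,
# i.e. Fourier amplitudes |u_k| ~ k^(-3/2).
amp = k ** -1.5
phase = np.exp(2j * np.pi * rng.random(k.size))
spec = np.zeros(n, dtype=complex)
spec[1:n // 2] = amp * phase
spec[-(n // 2 - 1):] = np.conj(spec[1:n // 2][::-1])   # Hermitian symmetry
u = np.fft.ifft(spec).real                             # real-valued signal

# Measured spectrum and its log-log slope over an inertial-like band.
e = np.abs(np.fft.fft(u)[1:n // 2]) ** 2
band = (k > 4) & (k < 400)
slope = np.polyfit(np.log(k[band]), np.log(e[band]), 1)[0]
print(round(slope, 2))       # close to -3 by construction
```

    The same fit applied to simulation output is how spectral indices such as the k^-11/5 and k^-2.5 ranges above are estimated in practice.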

  15. Relating system-to-CFD coupled code analyses to theoretical framework of a multi-scale method

    International Nuclear Information System (INIS)

    Cadinu, F.; Kozlowski, T.; Dinh, T.N.

    2007-01-01

    Over the past decades, analyses of transient processes and accidents in a nuclear power plant have been performed, to a significant extent and with great success, by means of so-called system codes, e.g. the RELAP5, CATHARE and ATHLET codes. These computer codes, based on a multi-fluid model of two-phase flow, provide an effective, one-dimensional description of the coolant thermal-hydraulics in the reactor system. For some components in the system, wherever needed, the effect of multi-dimensional flow is accounted for through approximate models. The latter are derived from scaled experiments conducted for selected accident scenarios. Increasingly, however, we have to deal with newer and ever more complex accident scenarios. In some such cases the system codes fail to serve as a simulation vehicle, largely due to their deficient treatment of multi-dimensional flow (in e.g. the downcomer and lower plenum). A possible way of improvement is to use the techniques of Computational Fluid Dynamics (CFD). Based on solving the Navier-Stokes equations, CFD codes have been developed and used broadly to perform analysis of multi-dimensional flow, predominantly in non-nuclear industries and for single-phase flow applications. It is clear that CFD simulations cannot substitute for system codes but only complement them. Given the intrinsic multi-scale nature of this problem, we propose to relate it to the more general field of research on multi-scale simulations. Even though multi-scale methods are developed on a case-by-case basis, the need for a unified framework led to the development of the heterogeneous multi-scale method (HMM)

  16. Large Scale EOF Analysis of Climate Data

    Science.gov (United States)

    Prabhat, M.; Gittens, A.; Kashinath, K.; Cavanaugh, N. R.; Mahoney, M.

    2016-12-01

    We present a distributed approach towards extracting EOFs from 3D climate data. We implement the method in Apache Spark, and process multi-TB sized datasets on O(1000-10,000) cores. We apply this method to latitude-weighted ocean temperature data from CFSR, a 2.2 terabyte-sized data set comprising ocean and subsurface reanalysis measurements collected at 41 levels in the ocean, at 6 hour intervals over 31 years. We extract the first 100 EOFs of this full data set and compare them to the EOFs computed on the surface temperature field alone. Our analyses provide evidence of Kelvin and Rossby waves and components of large-scale modes of oscillation, including the ENSO and PDO, that are not visible in the usual SST EOFs. Further, they provide information on the most influential parts of the ocean, such as the thermocline, that exist below the surface. Work is ongoing to understand the factors determining the depth-varying spatial patterns observed in the EOFs. We will experiment with weighting schemes to appropriately account for the differing depths of the observations. We also plan to apply the same distributed approach to the analysis of 3D atmospheric climate data sets, including multiple variables. Because the atmosphere changes on a quicker time-scale than the ocean, we expect the results to demonstrate an even greater advantage to computing 3D EOFs in lieu of 2D EOFs.
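
    The EOF computation itself is just an SVD of the centered data matrix; the Spark implementation distributes this factorization, but the linear algebra can be sketched in-memory on a synthetic field with two planted spatial modes (all sizes, modes and amplitudes below are invented for the demo):

```python
import numpy as np

rng = np.random.default_rng(5)

# Synthetic "climate" field: 200 time steps of a 10x12 spatial grid built
# from two known spatial modes plus weak noise.
ny, nx, nt = 10, 12, 200
yy, xx = np.mgrid[0:ny, 0:nx]
mode1 = np.sin(np.pi * yy / ny) * np.cos(np.pi * xx / nx)
mode2 = np.cos(2 * np.pi * yy / ny)
pcs = rng.standard_normal((nt, 2)) * np.array([3.0, 1.5])
field = (pcs[:, :1] * mode1.ravel() + pcs[:, 1:] * mode2.ravel()
         + 0.1 * rng.standard_normal((nt, ny * nx)))

# EOFs = right singular vectors of the centered (time x space) data matrix.
anom = field - field.mean(axis=0)
u, s, vt = np.linalg.svd(anom, full_matrices=False)
eofs = vt[:2]                                # leading spatial patterns
var_frac = s ** 2 / (s ** 2).sum()
print(var_frac[:2].round(3))                 # variance explained by EOFs 1-2

# Recovered EOF 1 should align with the planted mode (up to sign).
m1 = mode1.ravel() / np.linalg.norm(mode1)
print(abs(eofs[0] @ m1).round(3))            # close to 1
```

    On multi-TB data the same right singular vectors are obtained out of core, but the interpretation (spatial patterns ordered by explained variance) is identical.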

  17. Large scale modulation of high frequency acoustic waves in periodic porous media.

    Science.gov (United States)

    Boutin, Claude; Rallu, Antoine; Hans, Stephane

    2012-12-01

    This paper deals with the description of the modulation at large scale of high frequency acoustic waves in gas saturated periodic porous media. High frequencies mean local dynamics at the pore scale and therefore absence of scale separation in the usual sense of homogenization. However, although the pressure is spatially varying in the pores (according to periodic eigenmodes), the mode amplitude can present a large scale modulation, thereby introducing another type of scale separation to which the asymptotic multi-scale procedure applies. The approach is first presented on a periodic network of inter-connected Helmholtz resonators. The equations governing the modulations carried by periodic eigenmodes, at frequencies close to their eigenfrequency, are derived. The number of cells on which the carrying periodic mode is defined is therefore a parameter of the modeling. In a second part, the asymptotic approach is developed for periodic porous media saturated by a perfect gas. Using the "multicells" periodic condition, one obtains the family of equations governing the amplitude modulation at large scale of high frequency waves. The significant difference between modulations of simple and multiple mode are evidenced and discussed. The features of the modulation (anisotropy, width of frequency band) are also analyzed.

  18. Power recovery with multi-anode/cathode microbial fuel cells suitable for future large-scale applications

    Energy Technology Data Exchange (ETDEWEB)

    Jiang, Daqian; Li, Xiang; Raymond, Dustin; Mooradain, James; Li, Baikun [Department of Civil and Environmental Engineering, University of Connecticut, Storrs, CT 06269 (United States)

    2010-08-15

    Multi-anode/cathode microbial fuel cells (MFCs) incorporate multiple MFCs into a single unit, which maintains high power generation at low cost and small space occupation for scaled-up MFC systems. The power production of multi-anode/cathode MFCs was similar to the total power production of multiple single-anode/cathode MFCs. The power density of a 4-anode/cathode MFC was 1184 mW/m³, 3.2 times that of a single-anode/cathode MFC (350 mW/m³). The effect of chemical oxygen demand (COD) was studied as the preliminary factor affecting the MFC performance. The power density of the MFCs increased with COD concentration. Multi-anode/cathode MFCs exhibited higher power generation efficiencies than single-anode/cathode MFCs at high CODs. The power output of the 4-anode/cathode MFCs kept increasing from 200 mW/m³ to 1200 mW/m³ as COD increased from 500 mg/L to 3000 mg/L, while the single-anode/cathode MFC showed no increase in power output at CODs above 1000 mg/L. In addition, the internal resistance (R_in) exhibited a strong dependence on COD and electrode distance. The R_in decreased at high CODs and short electrode distances. The tests indicated that the multi-anode/cathode configuration efficiently enhanced the power generation. (author)

  19. Conformal-Based Surface Morphing and Multi-Scale Representation

    Directory of Open Access Journals (Sweden)

    Ka Chun Lam

    2014-05-01

    Full Text Available This paper presents two algorithms, based on conformal geometry, for the multi-scale representation of geometric shapes and for surface morphing. A multi-scale surface representation aims to describe a 3D shape at different levels of geometric detail, which allows analyzing or editing surfaces at global or local scales effectively. Surface morphing refers to the process of interpolating between two geometric shapes, which has been widely applied to estimate or analyze deformations in computer graphics, computer vision and medical imaging. In this work, we propose two geometric models for surface morphing and multi-scale representation of 3D surfaces. The basic idea is to represent a 3D surface by its mean curvature function, H, and conformal factor function, λ, which uniquely determine the geometry of the surface according to Riemann surface theory. Once we have the (λ, H) parameterization of the surface, post-processing of the surface can be done directly on the conformal parameter domain. In particular, the problem of multi-scale representation of shapes reduces to signal filtering on the λ and H parameters. On the other hand, the surface morphing problem can be transformed into an interpolation process between two sets of (λ, H) parameters. We test the proposed algorithms on 3D human face data and MRI-derived brain surfaces. Experimental results show that our proposed methods can effectively obtain multi-scale surface representations and give natural surface morphing results.

  20. Anisotropic multi-scale fluid registration: evaluation in magnetic resonance breast imaging

    International Nuclear Information System (INIS)

    Crum, W R; Tanner, C; Hawkes, D J

    2005-01-01

    Registration using models of compressible viscous fluids has not found the general application of some other techniques (e.g., free-form deformation (FFD)) despite its ability to model large diffeomorphic deformations. We report on a multi-resolution fluid registration algorithm which improves on previous work by (a) directly solving the Navier-Stokes equation at the resolution of the images, (b) accommodating image sampling anisotropy using semi-coarsening and implicit smoothing in a full multi-grid (FMG) solver, and (c) exploiting the inherent multi-resolution nature of FMG to implement a multi-scale approach. Evaluation is on five magnetic resonance (MR) breast images subject to six biomechanical deformation fields over 11 multi-resolution schemes. Quantitative assessment is by tissue overlaps and target registration errors, and by registering using the known correspondences rather than image features to validate the fluid model. Context is given by comparison with a validated FFD algorithm and by application to images of volunteers subjected to large applied deformations. The results show that fluid registration of 3D breast MR images to sub-voxel accuracy is possible in minutes on a 1.6 GHz Linux-based Athlon processor, with coarse solutions obtainable in a few tens of seconds. Accuracy and computation time are comparable to FFD techniques validated for this application.

  1. Large Scale Community Detection Using a Small World Model

    Directory of Open Access Journals (Sweden)

    Ranjan Kumar Behera

    2017-11-01

    Full Text Available In a social network, small or large communities within the network play a major role in deciding the functionalities of the network. Despite diverse definitions, communities in a network may be defined as groups of nodes that are more densely connected to each other than to nodes outside the group. Revealing such hidden communities is a challenging research problem. A real-world social network follows the small-world phenomenon, which indicates that any two social entities can be reached in a small number of steps. In this paper, nodes are mapped into communities based on random walks in the network. However, uncovering communities in large-scale networks is a challenging task due to the unprecedented growth in the size of social networks. A good number of community detection algorithms based on random walks exist in the literature, but when large-scale social networks are considered, these algorithms take considerably more time. In this work, with the objective of improving efficiency, a parallel programming framework, Map-Reduce, has been adopted for uncovering hidden communities in social networks. The proposed approach has been compared with some standard existing community detection algorithms on both synthetic and real-world datasets in order to examine its performance, and it is observed that the proposed algorithm is more efficient than the existing ones.
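
    As a toy, serial illustration of the random-walk idea in this abstract (the paper itself distributes the computation with Map-Reduce), the sketch below groups nodes whose short-random-walk visit distributions are close. The function names, the threshold `tol`, and the greedy grouping rule are our own illustrative choices, not the authors' algorithm.

```python
import numpy as np

def walk_profiles(adj, steps=3):
    """Row i of the result is the probability distribution over nodes
    reached by a `steps`-step random walk started at node i."""
    P = adj / adj.sum(axis=1, keepdims=True)   # row-stochastic transition matrix
    return np.linalg.matrix_power(P, steps)

def communities(adj, steps=3, tol=0.3):
    """Greedily merge nodes whose walk profiles are within L1 distance tol."""
    M = walk_profiles(adj, steps)
    labels = [-1] * len(adj)
    next_label = 0
    for i in range(len(adj)):
        if labels[i] == -1:
            labels[i] = next_label
            for j in range(i + 1, len(adj)):
                if labels[j] == -1 and np.abs(M[i] - M[j]).sum() < tol:
                    labels[j] = next_label
            next_label += 1
    return labels

# Two triangles joined by a single bridge edge (nodes 2-3).
A = np.array([[0, 1, 1, 0, 0, 0],
              [1, 0, 1, 0, 0, 0],
              [1, 1, 0, 1, 0, 0],
              [0, 0, 1, 0, 1, 1],
              [0, 0, 0, 1, 0, 1],
              [0, 0, 0, 1, 1, 0]], dtype=float)
labels = communities(A)
```

    A Map-Reduce version would shard the rows of the transition matrix across workers and combine the partial walk profiles in the reduce step, which is the kind of parallelization the abstract refers to.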

  2. Optimizing Implementation of Obesity Prevention Programs: A Qualitative Investigation Within a Large-Scale Randomized Controlled Trial.

    Science.gov (United States)

    Kozica, Samantha L; Teede, Helena J; Harrison, Cheryce L; Klein, Ruth; Lombard, Catherine B

    2016-01-01

    The prevalence of obesity in rural and remote areas is elevated in comparison to urban populations, highlighting the need for interventions targeting obesity prevention in these settings. Implementing evidence-based obesity prevention programs is challenging. This study aimed to investigate factors influencing the implementation of obesity prevention programs, including adoption, program delivery, community uptake, and continuation, specifically within rural settings. Nested within a large-scale randomized controlled trial, a qualitative exploratory approach was adopted, with purposive sampling techniques utilized to recruit stakeholders from 41 small rural towns in Australia. In-depth semistructured interviews were conducted with clinical health professionals, health service managers, and local government employees. Open coding was completed independently by 2 investigators and thematic analysis undertaken. In-depth interviews revealed that obesity prevention programs were valued by the rural workforce. Program implementation is influenced by interrelated factors across two domains: (1) contextual factors and (2) organizational capacity. Key recommendations to manage the challenges of implementing evidence-based programs focused on reducing program delivery costs, aided by the provision of a suite of implementation and evaluation resources. Informing the scale-up of future prevention programs, stakeholders highlighted the need to build local rural capacity through developing supportive university partnerships, generating local program ownership and promoting active feedback to all program partners. We demonstrate that the rural workforce places a high value on obesity prevention programs. Our results inform the future scale-up of obesity prevention programs, providing an improved understanding of strategies to optimize implementation of evidence-based prevention programs. © 2015 National Rural Health Association.

  3. Multi-parameter decoupling and slope tracking control strategy of a large-scale high altitude environment simulation test cabin

    Directory of Open Access Journals (Sweden)

    Li Ke

    2014-12-01

    Full Text Available A large-scale high-altitude environment simulation test cabin was developed to accurately control the temperatures and pressures encountered at high altitudes. The system provides slope-tracking dynamic control of the two parameters, temperature and pressure, and overcomes the control difficulties inherent in a large-inertia lag link within a complex control system composed of a turbine refrigeration device, a vacuum device and a liquid-nitrogen cooling device. The system includes multi-parameter decoupling of the cabin itself to avoid damage to the air refrigeration turbine caused by improper operation. Based on an analysis of the dynamic characteristics and modeling of the variations in temperature, pressure and rotation speed, an intelligent controller was implemented that combines decoupling and fuzzy arithmetic with an expert PID controller to control the test parameters by a decoupling and slope-tracking control strategy. The control system employs centralized management in an open industrial Ethernet architecture with an industrial computer at its core. Simulation, field-debugging and running results show that this method solves the problems of the poor anti-interference performance typical of a conventional PID and of overshooting that can readily damage equipment. The steady-state characteristics meet the system requirements.
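
    The slope-tracking loop described above can be illustrated with a minimal discrete PID following a ramp ("slope") temperature setpoint on a first-order lag plant. The gains, time constant, and ramp rate below are invented for the sketch; the actual cabin controller layers decoupling and fuzzy/expert tuning on top of such a loop.

```python
def simulate(kp=2.0, ki=0.5, kd=0.1, dt=1.0, tau=10.0, steps=200):
    """PID tracking of a ramp setpoint on the plant dT/dt = (u - T) / tau."""
    T = 20.0                 # current cabin temperature, deg C
    integral = 0.0
    prev_err = 0.0
    history = []
    for k in range(steps):
        setpoint = 20.0 - 0.2 * k * dt        # descend 0.2 deg C per second
        err = setpoint - T
        integral += err * dt
        deriv = (err - prev_err) / dt
        u = kp * err + ki * integral + kd * deriv
        prev_err = err
        T += dt / tau * (u - T)               # first-order plant response
        history.append((setpoint, T))
    return history

history = simulate()
```

    With a PI(D) controller on this type-0 plant, a ramp is tracked with a constant lag of roughly slope/ki (about 0.4 deg C here), which is one reason a fuzzy/expert layer that retunes gains online is useful in the real system.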

  4. Status of large scale wind turbine technology development abroad

    Institute of Scientific and Technical Information of China (English)

    Ye LI; Lei DUAN

    2016-01-01

    To facilitate large-scale (multi-megawatt) wind turbine development in China, foreign efforts and achievements in the area are reviewed and summarized. Not only the popular horizontal-axis wind turbines on land but also offshore wind turbines, vertical-axis wind turbines, airborne wind turbines, and shrouded wind turbines are discussed. The purpose of this review is to provide a comprehensive comment and assessment on the basic working principles, economic aspects, and environmental impacts of these turbines.

  5. The 2-Year Cosmetic Outcome of a Randomized Trial Comparing Prone and Supine Whole-Breast Irradiation in Large-Breasted Women

    Energy Technology Data Exchange (ETDEWEB)

    Veldeman, Liv, E-mail: liv.veldeman@uzgent.be [Department of Radiation Oncology, University Hospital Ghent, Ghent (Belgium); Department of Radiotherapy and Experimental Cancer Research, Ghent University, Ghent (Belgium); Schiettecatte, Kimberly; De Sutter, Charlotte; Monten, Christel; Greveling, Annick van [Department of Radiation Oncology, University Hospital Ghent, Ghent (Belgium); Berkovic, Patrick [Department of Radiation Oncology, University Hospital Ghent, Ghent (Belgium); Department of Radiation Oncology, Centre Hospitalier Universitaire de Liège, Liège (Belgium); Mulliez, Thomas [Department of Radiation Oncology, University Hospital Ghent, Ghent (Belgium); De Neve, Wilfried [Department of Radiation Oncology, University Hospital Ghent, Ghent (Belgium); Department of Radiotherapy and Experimental Cancer Research, Ghent University, Ghent (Belgium)

    2016-07-15

    Purpose: To report the 2-year cosmetic outcome of a randomized trial comparing prone and supine whole-breast irradiation in large-breasted patients. Methods and Materials: One hundred patients with a (European) cup size ≥C were included. Before and 2 years after radiation therapy, clinical endpoints were scored and digital photographs were taken with the arms alongside the body and with the arms elevated 180°. Three observers rated the photographs using the 4-point Harvard cosmesis scale. Cosmesis was also evaluated with the commercially available Breast Cancer Conservative Treatment.cosmetic results (BCCT.core) software. Results: Two-year follow-up data and photographs were available for 94 patients (47 supine treated and 47 prone treated). Patient and treatment characteristics were not significantly different between the 2 cohorts. A worsening of color change occurred more frequently in the supine than in the prone cohort (19/46 vs 10/46 patients, respectively, P=.04). Five patients in the prone group (11%) and 12 patients in the supine group (26%) presented with a worse scoring of edema at 2-year follow-up (P=.06). For retraction and fibrosis, no significant differences were found between the 2 cohorts, although scores were generally worse in the supine cohort. The cosmetic scoring by 3 observers did not reveal differences between the prone and supine groups. On the photographs with the hands up, 7 patients in the supine group versus none in the prone group had a worsening of cosmesis of 2 categories using the BCCT.core software (P=.02). Conclusion: With a limited follow-up of 2 years, better cosmetic outcome was observed in prone-treated than in supine-treated patients.

  6. Multi-scale symbolic transfer entropy analysis of EEG

    Science.gov (United States)

    Yao, Wenpo; Wang, Jun

    2017-10-01

    From both global and local perspectives, we symbolize two kinds of EEG and analyze their dynamic and asymmetric information using multi-scale transfer entropy. A multi-scale process with scale factors from 1 to 199 in steps of 2 is applied to the EEG of healthy people and of epileptic patients, and then permutation with embedding dimension 3 and a global approach are used to symbolize the sequences. The forward and reverse symbol sequences are taken as the inputs of transfer entropy. The scale-factor intervals over which the two kinds of EEG are satisfactorily distinguished are (37, 57) for permutation and (65, 85) for the global approach. At a scale factor of 67, the permutation-based transfer entropies of the healthy and epileptic subjects, 0.1137 and 0.1028, show the biggest difference; the corresponding values for global symbolization are 0.0641 and 0.0601, at a scale factor of 165. The results show that permutation, which takes account of local information, gives better discrimination and is more effectively applied in our multi-scale transfer entropy analysis of EEG.
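
    The two steps the abstract combines — permutation symbolization and transfer entropy — can be sketched as follows. This is a generic plug-in estimator on ordinal symbols with embedding dimension 3, not the authors' code, and the driving/driven test signals are synthetic.

```python
import numpy as np
from collections import Counter
from math import log2

def ordinal_symbols(x, m=3):
    """Permutation symbolization: each length-m window maps to the
    ordering pattern of its values (embedding dimension m=3 here)."""
    return [tuple(np.argsort(x[i:i + m])) for i in range(len(x) - m + 1)]

def transfer_entropy(xs, ys):
    """Plug-in estimate of TE(X -> Y), in bits, from symbol sequences."""
    n = len(ys) - 1
    c_yyx = Counter((ys[t + 1], ys[t], xs[t]) for t in range(n))
    c_yx = Counter((ys[t], xs[t]) for t in range(n))
    c_yy = Counter((ys[t + 1], ys[t]) for t in range(n))
    c_y = Counter(ys[t] for t in range(n))
    te = 0.0
    for (y1, y0, x0), c in c_yyx.items():
        p_cond_full = c / c_yx[(y0, x0)]       # p(y1 | y0, x0)
        p_cond = c_yy[(y1, y0)] / c_y[y0]      # p(y1 | y0)
        te += (c / n) * log2(p_cond_full / p_cond)
    return te

rng = np.random.default_rng(0)
x = rng.standard_normal(300)
y = np.roll(x, 1)                 # y lags x by one sample: x drives y
te_xy = transfer_entropy(ordinal_symbols(x), ordinal_symbols(y))
te_yx = transfer_entropy(ordinal_symbols(y), ordinal_symbols(x))
```

    Since transfer entropy is asymmetric, comparing te_xy with te_yx recovers the direction of coupling, which is the property the study exploits on forward and reverse EEG symbol sequences.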

  7. Cosmological streaming velocities and large-scale density maxima

    International Nuclear Information System (INIS)

    Peacock, J.A.; Lumsden, S.L.; Heavens, A.F.

    1987-01-01

    The statistical testing of models for galaxy formation against the observed peculiar velocities on 10-100 Mpc scales is considered. If it is assumed that observers are likely to be sited near maxima in the primordial field of density perturbations, then the observed filtered velocity field will be biased to low values by comparison with a point selected at random. This helps to explain how the peculiar velocities (relative to the microwave background) of the local supercluster and the Rubin-Ford shell can be so similar in magnitude. Using this assumption to predict peculiar velocities on two scales, we test models with large-scale damping (i.e. adiabatic perturbations). Allowed models have a damping length close to the Rubin-Ford scale and are mildly non-linear. Both purely baryonic universes and universes dominated by massive neutrinos can account for the observed velocities, provided 0.1 ≤ Ω ≤ 1. (author)

  8. Large-scale fracture mechanics testing -- requirements and possibilities

    International Nuclear Information System (INIS)

    Brumovsky, M.

    1993-01-01

    Application of fracture mechanics to very important and/or complicated structures, like reactor pressure vessels, also raises questions about the reliability and precision of such calculations. These problems become more pronounced under elastic-plastic loading conditions and/or in parts with non-homogeneous materials (base metal and austenitic cladding, property gradients through the material thickness) or with non-homogeneous stress fields (nozzles, bolt threads, residual stresses, etc.). For such special cases some verification by large-scale testing is necessary and valuable. This paper discusses the problems connected with planning such experiments, with respect to their limitations and to the requirements for a good transfer of the results to an actual vessel. At the same time, an analysis of the possibilities of small-scale model experiments is also presented, mostly in connection with transferring results between standard, small-scale and large-scale experiments. Experience from 30 years of large-scale testing at SKODA is used as an example to support this analysis. 1 fig

  9. Subgrid-scale stresses and scalar fluxes constructed by the multi-scale turnover Lagrangian map

    Science.gov (United States)

    AL-Bairmani, Sukaina; Li, Yi; Rosales, Carlos; Xie, Zheng-tong

    2017-04-01

    The multi-scale turnover Lagrangian map (MTLM) [C. Rosales and C. Meneveau, "Anomalous scaling and intermittency in three-dimensional synthetic turbulence," Phys. Rev. E 78, 016313 (2008)] uses nested multi-scale Lagrangian advection of fluid particles to distort a Gaussian velocity field and, as a result, generate non-Gaussian synthetic velocity fields. Passive scalar fields can be generated with the procedure when the fluid particles carry a scalar property [C. Rosales, "Synthetic three-dimensional turbulent passive scalar fields via the minimal Lagrangian map," Phys. Fluids 23, 075106 (2011)]. The synthetic fields have been shown to possess highly realistic statistics characterizing small scale intermittency, geometrical structures, and vortex dynamics. In this paper, we present a study of the synthetic fields using the filtering approach. This approach, which has not been pursued so far, provides insights on the potential applications of the synthetic fields in large eddy simulations and subgrid-scale (SGS) modelling. The MTLM method is first generalized to model scalar fields produced by an imposed linear mean profile. We then calculate the subgrid-scale stress, SGS scalar flux, SGS scalar variance, as well as related quantities from the synthetic fields. Comparison with direct numerical simulations (DNSs) shows that the synthetic fields reproduce the probability distributions of the SGS energy and scalar dissipation rather well. Related geometrical statistics also display close agreement with DNS results. The synthetic fields slightly under-estimate the mean SGS energy dissipation and slightly over-predict the mean SGS scalar variance dissipation. In general, the synthetic fields tend to slightly under-estimate the probability of large fluctuations for most quantities we have examined. Small scale anisotropy in the scalar field originating from the imposed mean gradient is captured. The sensitivity of the synthetic fields to the input spectra is assessed by
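
    The filtered quantities discussed above follow from the standard definitions tau_ij = bar(u_i u_j) - bar(u_i) bar(u_j) for the SGS stress and q_i = bar(u_i theta) - bar(u_i) bar(theta) for the SGS scalar flux. The sketch below evaluates them with a simple periodic top-hat (box) filter on random 2D fields; the paper's fields and filters differ, so this only illustrates the definitions, not the MTLM procedure.

```python
import numpy as np

def box_filter(f, w=3):
    """Periodic top-hat filter of odd width w, applied along each axis."""
    g = f
    for ax in range(f.ndim):
        g = sum(np.roll(g, s, axis=ax) for s in range(-(w // 2), w // 2 + 1)) / w
    return g

def sgs_quantities(u, v, theta, w=3):
    """SGS stress tau_uv = bar(uv) - bar(u)bar(v) and scalar flux
    q_u = bar(u theta) - bar(u)bar(theta) from resolved fields."""
    ub, vb, tb = box_filter(u, w), box_filter(v, w), box_filter(theta, w)
    tau_uv = box_filter(u * v, w) - ub * vb
    q_u = box_filter(u * theta, w) - ub * tb
    return tau_uv, q_u

rng = np.random.default_rng(1)
u = rng.standard_normal((32, 32))
theta = rng.standard_normal((32, 32))
tau_uu, q_u = sgs_quantities(u, u, theta)
```

    Because the box filter has non-negative weights summing to one, the normal stress bar(u^2) - bar(u)^2 is non-negative at every point (Jensen's inequality), a convenient sanity check on any filtering implementation.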

  10. Large scale tracking algorithms

    Energy Technology Data Exchange (ETDEWEB)

    Hansen, Ross L. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Love, Joshua Alan [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Melgaard, David Kennett [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Karelitz, David B. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Pitts, Todd Alan [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Zollweg, Joshua David [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Anderson, Dylan Z. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Nandy, Prabal [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Whitlow, Gary L. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Bender, Daniel A. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Byrne, Raymond Harry [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)

    2015-01-01

    Low signal-to-noise data processing algorithms for improved detection, tracking, discrimination and situational threat assessment are a key research challenge. As sensor technologies progress, the number of pixels will increase significantly. This will result in increased resolution, which could improve object discrimination, but unfortunately will also result in a significant increase in the number of potential targets to track. Many tracking techniques, like multi-hypothesis trackers, suffer from a combinatorial explosion as the number of potential targets increases. As the resolution increases, the phenomenology applied in detection algorithms also changes. For low-resolution sensors, "blob" tracking is the norm. For higher-resolution data, additional information may be employed in the detection and classification steps. The most challenging scenarios are those where the targets cannot be fully resolved, yet must be tracked and distinguished from neighboring closely spaced objects. Tracking vehicles in an urban environment is an example of such a challenging scenario. This report evaluates several potential tracking algorithms for large-scale tracking in an urban environment.

  11. The Cea multi-scale and multi-physics simulation project for nuclear applications

    International Nuclear Information System (INIS)

    Ledermann, P.; Chauliac, C.; Thomas, J.B.

    2005-01-01

    Full text of publication follows. Today numerical modelling is everywhere recognized as an essential tool for the capitalization, integration and sharing of knowledge. For this reason, it has become a central tool of research. Until now, the Cea has developed a set of scientific software allowing one to model, in each situation, the operation of all or part of a nuclear installation, and these codes are widely used in the nuclear industry. For the future, however, it is essential to aim for better accuracy, better control of uncertainties and better performance in computing time. The objective is to obtain validated models allowing accurate predictive calculations for actual complex nuclear problems such as fuel behaviour in accidental situations. This demands mastering a large and interactive set of phenomena ranging from nuclear reactions to heat transfer. To this end, the Cea, with industrial partners (EDF, Framatome-ANP, ANDRA), has designed an integrated calculation platform, devoted to the study of nuclear systems and intended both for industry and for scientists. The development of this platform is under way with the start in 2005 of the integrated project NURESIM, with 18 European partners. Improvement comes not only through a multi-scale description of all phenomena but also through an innovative design approach requiring a deep functional analysis upstream of the development of the simulation platform itself. In addition, the studies of future nuclear systems are increasingly multidisciplinary (simultaneous modelling of core physics, thermal-hydraulics and fuel behaviour). These multi-physics and multi-scale aspects make it mandatory to pay very careful attention to software architecture issues. A global platform is thus developed integrating dedicated specialized platforms: DESCARTES for core physics, NEPTUNE for thermal-hydraulics, PLEIADES for fuel behaviour, SINERGY for materials behaviour under irradiation, ALLIANCES for the performance

  12. Energy transfers in large-scale and small-scale dynamos

    Science.gov (United States)

    Samtaney, Ravi; Kumar, Rohit; Verma, Mahendra

    2015-11-01

    We present the energy transfers, mainly energy fluxes and shell-to-shell energy transfers, in small-scale dynamo (SSD) and large-scale dynamo (LSD) regimes, using numerical simulations of MHD turbulence for Pm = 20 (SSD) and Pm = 0.2 (LSD) on a 1024³ grid. For SSD, we demonstrate that the magnetic energy growth is caused by nonlocal energy transfers from the large-scale or forcing-scale velocity field to the small-scale magnetic field. The peak of these energy transfers moves towards lower wavenumbers as the dynamo evolves, which is the reason for the growth of the magnetic field at large scales. The energy transfers U2U (velocity to velocity) and B2B (magnetic to magnetic) are forward and local. For LSD, we show that the magnetic energy growth takes place via energy transfers from the large-scale velocity field to the large-scale magnetic field. We observe forward U2U and B2B energy fluxes, similar to SSD.

  13. Multi-scale simulation for homogenization of cement media

    International Nuclear Information System (INIS)

    Abballe, T.

    2011-01-01

    To solve diffusion problems in cement media, two scales must be taken into account: a fine scale, which describes the micrometre-wide microstructures present in the media, and a work scale, which is usually a few metres long. Direct numerical simulations are almost impossible because of the huge computational resources (memory, CPU time) required to resolve both scales at the same time. To overcome this problem, we present in this thesis multi-scale resolution methods using both Finite Volumes and Finite Elements, along with their efficient implementations. More precisely, we developed a multi-scale simulation tool which uses the SALOME platform to mesh domains and post-process data, and the parallel calculation code MPCube to solve problems. This SALOME/MPCube tool can carry out multi-scale simulations automatically and efficiently. The parallel structure of computer clusters can be used to dispatch the most time-consuming tasks. We optimized most functions to account for the specificities of cement media. We present numerical experiments on various cement media samples, e.g. mortar and cement paste. From these results, we manage to compute a numerical effective diffusivity of our cement media and to reconstruct a fine-scale solution. (author) [fr

  14. Characterizing multi-scale self-similar behavior and non-statistical properties of fluctuations in financial time series

    Science.gov (United States)

    Ghosh, Sayantan; Manimaran, P.; Panigrahi, Prasanta K.

    2011-11-01

    We make use of the wavelet transform to study the multi-scale, self-similar behavior, and deviations thereof, in the stock prices of large companies belonging to different economic sectors. The stock market returns exhibit multi-fractal characteristics, with some of the companies showing deviations at small and large scales. The fact that wavelets belonging to the Daubechies (Db) basis enable one to isolate local polynomial trends of different degrees plays the key role in isolating fluctuations at different scales. One of the primary motivations of this work is to study the emergence of the k⁻³ behavior [X. Gabaix, P. Gopikrishnan, V. Plerou, H. Stanley, A theory of power law distributions in financial market fluctuations, Nature 423 (2003) 267-270] of the fluctuations starting from high-frequency fluctuations. We make use of the Db4 and Db6 basis sets to isolate, respectively, local linear and quadratic trends at different scales in order to study the statistical characteristics of these financial time series. The fluctuations reveal fat-tailed non-Gaussian behavior and unstable periodic modulations at finer scales, from which the characteristic k⁻³ power-law behavior emerges at sufficiently large scales. We further identify stable periodic behavior through the continuous Morlet wavelet.
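
    The role the abstract assigns to Db4/Db6 — removing local linear or quadratic trends so that only the fluctuations at a given scale remain — can be mimicked crudely by windowed polynomial detrending, as in detrended fluctuation analysis. The sketch below is that simplification, not a wavelet transform; the function name and window scheme are ours.

```python
import numpy as np

def local_poly_fluctuations(x, scale, deg=1):
    """Remove a degree-`deg` polynomial trend in each non-overlapping
    window of length `scale` and return the residual fluctuations
    (deg=1 loosely mimics a Db4 analysis, deg=2 a Db6 analysis)."""
    n = (len(x) // scale) * scale
    out = np.empty(n)
    t = np.arange(scale, dtype=float)
    for start in range(0, n, scale):
        seg = x[start:start + scale]
        coeffs = np.polyfit(t, seg, deg)
        out[start:start + scale] = seg - np.polyval(coeffs, t)
    return out
```

    Sweeping `scale` and examining the statistics of the residuals at each value is the windowed analogue of studying wavelet coefficients scale by scale.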

  15. Parallel Index and Query for Large Scale Data Analysis

    Energy Technology Data Exchange (ETDEWEB)

    Chou, Jerry; Wu, Kesheng; Ruebel, Oliver; Howison, Mark; Qiang, Ji; Prabhat,; Austin, Brian; Bethel, E. Wes; Ryne, Rob D.; Shoshani, Arie

    2011-07-18

    Modern scientific datasets present numerous data management and analysis challenges. State-of-the-art index and query technologies are critical for facilitating interactive exploration of large datasets, but numerous challenges remain in terms of designing a system for processing general scientific datasets. The system needs to be able to run on distributed multi-core platforms, efficiently utilize the underlying I/O infrastructure, and scale to massive datasets. We present FastQuery, a novel software framework that addresses these challenges. FastQuery utilizes a state-of-the-art index and query technology (FastBit) and is designed to process massive datasets on modern supercomputing platforms. We apply FastQuery to a massive 50TB dataset generated by a large-scale accelerator modeling code. We demonstrate the scalability of the tool to 11,520 cores. Motivated by the scientific need to search for interesting particles in this dataset, we use our framework to reduce search time from hours to tens of seconds.
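
    FastBit's core idea is bitmap indexing: one bitmap per value bin, so that a range query reduces to bitwise ORs over bin bitmaps instead of a full scan. The few lines below sketch an uncompressed version of that idea (FastBit additionally compresses the bitmaps); the helper names and the binning are ours, not FastQuery's API.

```python
import numpy as np

def build_bitmap_index(values, bin_edges):
    """One boolean bitmap per bin: bit r of bitmap b is set when
    record r falls into bin b (bins defined by np.digitize)."""
    bins = np.digitize(values, bin_edges)
    return {b: bins == b for b in np.unique(bins)}

def query_range(index, lo_bin, hi_bin):
    """Record ids whose bin lies in [lo_bin, hi_bin]: OR the bin bitmaps."""
    mask = np.zeros_like(next(iter(index.values())))
    for b, bitmap in index.items():
        if lo_bin <= b <= hi_bin:
            mask |= bitmap
    return np.flatnonzero(mask)

values = np.array([0.1, 5.0, 9.5, 2.2])
index = build_bitmap_index(values, bin_edges=np.array([1.0, 3.0, 6.0]))
hits = query_range(index, 1, 2)   # records with value in [1.0, 6.0)
```

    Because each bitmap is independent, the index builds and queries parallelize naturally across cores, which is what lets this style of index scale to the core counts quoted in the abstract.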

  16. Analysing and Correcting the Differences between Multi-Source and Multi-Scale Spatial Remote Sensing Observations

    Science.gov (United States)

    Dong, Yingying; Luo, Ruisen; Feng, Haikuan; Wang, Jihua; Zhao, Jinling; Zhu, Yining; Yang, Guijun

    2014-01-01

    Differences exist among the analysis results of agriculture monitoring and crop production based on remote sensing observations that are obtained at different spatial scales from multiple remote sensors in the same time period and processed by the same algorithms, models or methods. These differences can be quantitatively described mainly from three aspects, i.e. the multiple remote sensing observations, the crop parameter estimation models, and the spatial-scale effects of surface parameters. Our research proposes a new method to analyse and correct the differences between multi-source and multi-scale spatial remote sensing surface reflectance datasets, aiming to provide references for further studies in agricultural applications with multiple remotely sensed observations from different sources. The new method is constructed on the basis of the physical and mathematical properties of multi-source and multi-scale reflectance datasets. Statistical theory is used to extract the statistical characteristics of the multiple surface reflectance datasets and to quantitatively analyse the spatial variations of these characteristics at multiple spatial scales. Then, taking the surface reflectance at the small spatial scale as the baseline data, Gaussian distribution theory is applied to correct the multiple surface reflectance datasets on the basis of the physical characteristics, the mathematical distribution properties obtained above, and their spatial variations. The proposed method was verified with two sets of multiple satellite images, obtained in two experimental fields located in Inner Mongolia and Beijing, China, with different degrees of homogeneity of the underlying surfaces. Experimental results indicate that the differences between surface reflectance datasets at multiple spatial scales can be effectively corrected over non-homogeneous underlying surfaces, which provides a database for further multi-source and multi-scale crop growth monitoring and yield prediction, and their corresponding
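
    Under the Gaussian assumption described in this abstract, the simplest correction that aligns a coarse-scale reflectance dataset with the fine-scale baseline is moment matching of mean and standard deviation. The sketch below shows only that step; the paper's method additionally accounts for the spatial variation of the statistics.

```python
import numpy as np

def match_to_baseline(coarse, fine_baseline):
    """Shift and scale `coarse` so its first two moments match the
    fine-scale baseline (Gaussian / moment-matching correction)."""
    mu_c, sd_c = coarse.mean(), coarse.std()
    mu_f, sd_f = fine_baseline.mean(), fine_baseline.std()
    return (coarse - mu_c) / sd_c * sd_f + mu_f

rng = np.random.default_rng(2)
fine = 0.25 + 0.05 * rng.standard_normal(10000)    # baseline reflectance
coarse = 0.30 + 0.08 * rng.standard_normal(10000)  # biased coarse-scale data
corrected = match_to_baseline(coarse, fine)
```

    After correction, the coarse dataset has the baseline's mean and spread by construction, so residual differences reflect genuine scale effects rather than sensor calibration offsets.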

  17. Origin of the large scale structures of the universe

    International Nuclear Information System (INIS)

    Oaknin, David H.

    2004-01-01

    We revisit the statistical properties of the primordial cosmological density anisotropies that, at the time of matter-radiation equality, seeded the gravitational development of large scale structures in the otherwise homogeneous and isotropic Friedmann-Robertson-Walker flat universe. Our analysis shows that random fluctuations of the density field at the same instant of equality and with comoving wavelength shorter than the causal horizon at that time can naturally account, when globally constrained to conserve the total mass (energy) of the system, for the observed scale invariance of the anisotropies over cosmologically large comoving volumes. Statistical systems with similar features are generically known as glasslike or latticelike. Obviously, these conclusions conflict with the widely accepted understanding of the primordial structures reported in the literature, which requires an epoch of inflationary cosmology to precede the standard expansion of the universe. The origin of the conflict must be found in the widespread, but unjustified, claim that scale invariant mass (energy) anisotropies at the instant of equality over comoving volumes of cosmological size, larger than the causal horizon at the time, must be generated by fluctuations in the density field with comparably large comoving wavelength

  18. Prediction of Coal Face Gas Concentration by Multi-Scale Selective Ensemble Hybrid Modeling

    Directory of Open Access Journals (Sweden)

    WU Xiang

    2014-06-01

    Full Text Available A selective ensemble hybrid modeling prediction method based on the wavelet transform is proposed to improve the fitting and generalization capability of existing prediction models for the coal face gas concentration, which exhibits strong stochastic volatility. The Mallat algorithm is employed for the multi-scale decomposition and single-scale reconstruction of the gas concentration time series. Every subsequence is then predicted by sparsely weighted multiple unstable ELM (extreme learning machine) predictors within the SERELM (sparse ensemble regressors of ELM) method. Finally, the predicted values of these models are superimposed to obtain the predicted values of the original sequence. The proposed method combines the multi-scale analysis of the wavelet transform, the accuracy and speed of ELM prediction, and the generalization ability of L1-regularized selective ensemble learning. The results show that forecast accuracy increases substantially with the proposed method: the average relative error is 0.65%, the maximum relative error is 4.16%, and the probability of a relative error of less than 1% reaches 0.785.
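
    One level of the Mallat scheme mentioned above — multi-scale decomposition followed by single-scale reconstruction — looks as follows with the simplest (Haar) filter pair. The paper decomposes over several levels and feeds each subsequence to ELM predictors; this sketch shows only the perfectly invertible split/merge step.

```python
import numpy as np

def haar_decompose(x):
    """One Mallat level with the Haar filters: split an even-length
    series into coarse approximation and detail subsequences."""
    a = (x[0::2] + x[1::2]) / np.sqrt(2.0)
    d = (x[0::2] - x[1::2]) / np.sqrt(2.0)
    return a, d

def haar_reconstruct(a, d):
    """Single-scale reconstruction: exactly invert one Haar level."""
    x = np.empty(2 * len(a))
    x[0::2] = (a + d) / np.sqrt(2.0)
    x[1::2] = (a - d) / np.sqrt(2.0)
    return x

x = np.random.default_rng(3).standard_normal(16)
a, d = haar_decompose(x)
```

    In the hybrid scheme, each subsequence (here `a` and `d`) would be forecast separately and the forecasts recombined through the reconstruction step, so prediction errors stay localized to individual scales.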

  19. Large-Scale Preventive Chemotherapy for the Control of Helminth Infection in Western Pacific Countries: Six Years Later

    Science.gov (United States)

    Montresor, Antonio; Cong, Dai Tran; Sinuon, Mouth; Tsuyuoka, Reiko; Chanthavisouk, Chitsavang; Strandgaard, Hanne; Velayudhan, Raman; Capuano, Corinne M.; Le Anh, Tuan; Tee Dató, Ah S.

    2008-01-01

    In 2001, Urbani and Palmer published a review of the epidemiological situation of helminthiases in the countries of the Western Pacific Region of the World Health Organization indicating the control needs in the region. Six years after this inspiring article, large-scale preventive chemotherapy for the control of helminthiasis has scaled up dramatically in the region. This paper analyzes the most recent published and unpublished country information on large-scale preventive chemotherapy and summarizes the progress made since 2000. Almost 39 million treatments were provided in 2006 in the region for the control of helminthiasis: nearly 14 million for the control of lymphatic filariasis, more than 22 million for the control of soil-transmitted helminthiasis, and over 2 million for the control of schistosomiasis. In general, control of these helminthiases is progressing well in the Mekong countries and Pacific Islands. In China, despite harboring the majority of the helminth infections of the region, the control activities have not reached the level of coverage of countries with much more limited financial resources. The control of food-borne trematodes is still limited, but pilot activities have been initiated in China, Lao People's Democratic Republic, and Vietnam. PMID:18846234

  20. Multi spectral scaling data acquisition system

    International Nuclear Information System (INIS)

    Behere, Anita; Patil, R.D.; Ghodgaonkar, M.D.; Gopalakrishnan, K.R.

    1997-01-01

In nuclear spectroscopy applications, it is often desired to acquire data at high rates with high resolution. With the availability of low-cost computers, it is possible to build a powerful data acquisition system with minimal hardware and software development by designing a PC plug-in acquisition board. However, if the PC processor itself performs the acquisition, the PC cannot be used as a multitasking node. With this in mind, PC plug-in acquisition boards with an on-board processor find wide application. A transputer-based data acquisition board has been designed which can be configured as a high-count-rate pulse-height MCA or as a multi spectral scaler. Multi spectral scaling (MSS) is a new technique in which multiple spectra are acquired in small time frames and are then analyzed. This paper describes the details of this multi spectral scaling data acquisition system. 2 figs
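The multi spectral scaling idea, one pulse-height spectrum per successive time frame, can be illustrated with a small software mock-up (synthetic events and parameters, not the transputer board's actual firmware):

```python
import numpy as np

rng = np.random.default_rng(0)

def multi_spectral_scaling(events, t_frame, n_frames, n_channels=256):
    """Sort (timestamp, channel) events into successive time frames,
    building one pulse-height spectrum per frame."""
    spectra = np.zeros((n_frames, n_channels), dtype=np.int64)
    for t, ch in events:
        frame = int(t // t_frame)
        if frame < n_frames:
            spectra[frame, ch] += 1
    return spectra

# mock event stream: ~200 counts/s with a photopeak around channel 128
n_ev = 20000
times = rng.exponential(5.0, n_ev).cumsum() * 1e-3   # seconds
chans = np.clip(rng.normal(128, 6, n_ev).astype(int), 0, 255)
spectra = multi_spectral_scaling(zip(times, chans), t_frame=10.0, n_frames=10)
```

Each row of `spectra` is one time-framed spectrum; comparing rows reveals how the spectrum evolves, which is the point of the MSS technique.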

  1. Building Participation in Large-scale Conservation: Lessons from Belize and Panama

    Directory of Open Access Journals (Sweden)

    Jesse Guite Hastings

    2015-01-01

Full Text Available Motivated by biogeography and a desire for alignment with the funding priorities of donors, the twenty-first century has seen big international NGOs shifting towards a large-scale conservation approach. This shift has meant that even before stakeholders at the national and local scale are involved, conservation programmes often have their objectives defined and funding allocated. This paper uses the experiences of Conservation International's Marine Management Area Science (MMAS) programme in Belize and Panama to explore how to build participation at the national and local scale while working within the bounds of the current conservation paradigm. Qualitative data about MMAS was gathered through a multi-sited ethnographic research process, utilising document review, direct observation, and semi-structured interviews with 82 informants in Belize, Panama, and the United States of America. Results indicate that while a large-scale approach to conservation disadvantages early national and local stakeholder participation, this effect can be mediated through focusing engagement efforts, paying attention to context, building horizontal and vertical partnerships, and using deliberative processes that promote learning. While explicit consideration of geopolitics and local complexity alongside biogeography in the planning phase of a large-scale conservation programme is ideal, actions taken by programme managers during implementation can still have a substantial impact on conservation outcomes.

  2. Analysis using large-scale ringing data

    Directory of Open Access Journals (Sweden)

    Baillie, S. R.

    2004-06-01

]; Peach et al., 1998; DeSante et al., 2001 are generally co-ordinated by ringing centres such as those that make up the membership of EURING. In some countries volunteer census work (often called Breeding Bird Surveys) is undertaken by the same organizations, while in others different bodies may co-ordinate this aspect of the work. This session was concerned with the analysis of such extensive data sets and the approaches that are being developed to address the key theoretical and applied issues outlined above. The papers reflect the development of more spatially explicit approaches to analyses of data gathered at large spatial scales. They show that while the statistical tools developed in recent years can be used to derive useful biological conclusions from such data, there is additional need for further developments. Future work should also consider how best to implement such analytical developments within future study designs. In his plenary paper Andy Royle (Royle, 2004) addresses this theme directly by describing a general framework for modelling spatially replicated abundance data. The approach is based on the idea that a set of spatially referenced local populations constitutes a metapopulation, within which local abundance is determined as a random process. This provides an elegant and general approach in which the metapopulation model described above is combined with a data-generating model specific to the type of data being analysed, to define a simple hierarchical model that can be analysed using conventional methods. It should be noted, however, that further software development will be needed if the approach is to be made readily available to biologists. The approach is well suited to dealing with sparse data and avoids the need for data aggregation prior to analysis. Spatial synchrony has received most attention in studies of species whose populations show cyclic fluctuations, particularly certain game birds and small mammals. 
However
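The hierarchical framework for spatially replicated abundance data described above can be sketched, under the assumption of a basic N-mixture form (Poisson local abundance, binomial detection), with simulated data and maximum-likelihood fitting. This is an illustration of the idea, not the paper's implementation:

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import binom, poisson

rng = np.random.default_rng(1)

# simulate replicated counts at R sites over T visits (an N-mixture design)
R, T = 100, 4
lam_true, p_true = 3.0, 0.5
N = rng.poisson(lam_true, R)                   # latent local abundances
y = rng.binomial(N[:, None], p_true, (R, T))   # observed replicated counts

def nll(theta, y, n_max=40):
    """Negative log-likelihood, summing out the latent abundance at each site."""
    lam = np.exp(theta[0])                     # log link keeps lambda positive
    p = 1.0 / (1.0 + np.exp(-theta[1]))        # logit link keeps p in (0, 1)
    ll = 0.0
    for yi in y:
        n = np.arange(yi.max(), n_max)         # feasible latent N values
        terms = poisson.pmf(n, lam) * binom.pmf(yi[:, None], n, p).prod(axis=0)
        ll += np.log(terms.sum())
    return -ll

res = minimize(nll, x0=np.zeros(2), args=(y,), method="Nelder-Mead")
lam_hat = np.exp(res.x[0])
p_hat = 1.0 / (1.0 + np.exp(-res.x[1]))
```

The marginal likelihood integrates over the unobserved local abundances, which is what makes this a hierarchical (metapopulation plus data-generating) model analysable by conventional methods.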

  3. Multi-Scale Validation of a Nanodiamond Drug Delivery System and Multi-Scale Engineering Education

    Science.gov (United States)

    Schwalbe, Michelle Kristin

    2010-01-01

    This dissertation has two primary concerns: (i) evaluating the uncertainty and prediction capabilities of a nanodiamond drug delivery model using Bayesian calibration and bias correction, and (ii) determining conceptual difficulties of multi-scale analysis from an engineering education perspective. A Bayesian uncertainty quantification scheme…

  4. Utilisation of ISA Reverse Genetics and Large-Scale Random Codon Re-Encoding to Produce Attenuated Strains of Tick-Borne Encephalitis Virus within Days.

    Science.gov (United States)

    de Fabritus, Lauriane; Nougairède, Antoine; Aubry, Fabien; Gould, Ernest A; de Lamballerie, Xavier

    2016-01-01

Large-scale codon re-encoding is a new method of attenuating RNA viruses. However, the use of infectious clones to generate attenuated viruses has inherent technical problems. We previously developed a bacterium-free reverse genetics protocol, designated ISA, and have now combined it with a large-scale random codon re-encoding method to produce attenuated tick-borne encephalitis virus (TBEV), a pathogenic flavivirus which causes febrile illness and encephalitis in humans. We produced wild-type (WT) and two re-encoded TBEVs, containing 273 or 273+284 synonymous mutations in the NS5 and NS5+NS3 coding regions, respectively. Both re-encoded viruses were attenuated compared with the WT virus in a laboratory mouse model, and the relative level of attenuation increased with the degree of re-encoding. Moreover, all infected animals produced neutralizing antibodies. This novel, rapid and efficient approach to engineering attenuated viruses could potentially expedite the development of safe and effective new-generation live attenuated vaccines.
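The core of random codon re-encoding, swapping codons for synonymous ones so the encoded protein is unchanged, can be sketched as follows. The codon table below is a toy subset of the standard genetic code and the sequence is arbitrary; the actual study introduced hundreds of such mutations into specific TBEV coding regions:

```python
import random

# minimal synonymous-codon table (a subset of the standard genetic code,
# enough for this toy example; the real method uses the full table)
SYN = {
    "L": ["CTT", "CTC", "CTA", "CTG", "TTA", "TTG"],
    "S": ["TCT", "TCC", "TCA", "TCG", "AGT", "AGC"],
    "R": ["CGT", "CGC", "CGA", "CGG", "AGA", "AGG"],
    "G": ["GGT", "GGC", "GGA", "GGG"],
}
CODON_TO_AA = {c: aa for aa, codons in SYN.items() for c in codons}

def translate(seq):
    """Codon-by-codon translation using the toy table above."""
    return "".join(CODON_TO_AA[seq[i:i + 3]] for i in range(0, len(seq), 3))

def re_encode(seq, rate=1.0, seed=0):
    """Swap each codon for a random synonymous codon with probability `rate`;
    the encoded amino-acid sequence is unchanged by construction."""
    rng = random.Random(seed)
    out = []
    for i in range(0, len(seq), 3):
        codon = seq[i:i + 3]
        aa = CODON_TO_AA.get(codon)
        if aa is not None and rng.random() < rate:
            codon = rng.choice(SYN[aa])
        out.append(codon)
    return "".join(out)

wt = "CTTTCTCGTGGTTTATCGAGA"   # toy ORF: L-S-R-G-L-S-R
mut = re_encode(wt)
```

Because only synonymous substitutions are made, the protein is identical while codon usage (and hence replication fitness) changes, which is the attenuation mechanism the abstract describes.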

  5. Variational Multi-Scale method with spectral approximation of the sub-scales.

    KAUST Repository

Dia, Ben Mansour; Chacón-Rebollo, Tomás

    2015-01-01

A variational multi-scale method where the sub-grid scales are computed by spectral approximations is presented. It is based upon an extension of the spectral theorem to not necessarily self-adjoint elliptic operators that have an associated base

  6. Large-scale data analytics

    CERN Document Server

    Gkoulalas-Divanis, Aris

    2014-01-01

    Provides cutting-edge research in large-scale data analytics from diverse scientific areas Surveys varied subject areas and reports on individual results of research in the field Shares many tips and insights into large-scale data analytics from authors and editors with long-term experience and specialization in the field

  7. Two-scale large deviations for chemical reaction kinetics through second quantization path integral

    International Nuclear Information System (INIS)

    Li, Tiejun; Lin, Feng

    2016-01-01

    Motivated by the study of rare events for a typical genetic switching model in systems biology, in this paper we aim to establish the general two-scale large deviations for chemical reaction systems. We build a formal approach to explicitly obtain the large deviation rate functionals for the considered two-scale processes based upon the second quantization path integral technique. We get three important types of large deviation results when the underlying two timescales are in three different regimes. This is realized by singular perturbation analysis to the rate functionals obtained by the path integral. We find that the three regimes possess the same deterministic mean-field limit but completely different chemical Langevin approximations. The obtained results are natural extensions of the classical large volume limit for chemical reactions. We also discuss its implication on the single-molecule Michaelis–Menten kinetics. Our framework and results can be applied to understand general multi-scale systems including diffusion processes. (paper)

  8. Multi-time, multi-scale correlation functions in turbulence and in turbulent models

    NARCIS (Netherlands)

    Biferale, L.; Boffetta, G.; Celani, A.; Toschi, F.

    1999-01-01

    A multifractal-like representation for multi-time, multi-scale velocity correlation in turbulence and dynamical turbulent models is proposed. The importance of subleading contributions to time correlations is highlighted. The fulfillment of the dynamical constraints due to the equations of motion is

  9. Reliability of Multi-Category Rating Scales

    Science.gov (United States)

    Parker, Richard I.; Vannest, Kimberly J.; Davis, John L.

    2013-01-01

    The use of multi-category scales is increasing for the monitoring of IEP goals, classroom and school rules, and Behavior Improvement Plans (BIPs). Although they require greater inference than traditional data counting, little is known about the inter-rater reliability of these scales. This simulation study examined the performance of nine…

  10. Large-scale event extraction from literature with multi-level gene normalization.

    Directory of Open Access Journals (Sweden)

    Sofie Van Landeghem

Full Text Available Text mining for the life sciences aims to aid database curation, knowledge summarization and information retrieval through the automated processing of biomedical texts. To provide comprehensive coverage and enable full integration with existing biomolecular database records, it is crucial that text mining tools scale up to millions of articles and that their analyses can be unambiguously linked to information recorded in resources such as UniProt, KEGG, BioGRID and NCBI databases. In this study, we investigate how fully automated text mining of complex biomolecular events can be augmented with a normalization strategy that identifies biological concepts in text, mapping them to identifiers at varying levels of granularity, ranging from canonicalized symbols to unique genes and proteins and broad gene families. To this end, we have combined two state-of-the-art text mining components, previously evaluated in two community-wide challenges, and have extended and improved upon these methods by exploiting their complementary nature. Using these systems, we perform normalization and event extraction to create a large-scale resource that is publicly available, unique in semantic scope, and covers all 21.9 million PubMed abstracts and 460 thousand PubMed Central open access full-text articles. This dataset contains 40 million biomolecular events involving 76 million gene/protein mentions, linked to 122 thousand distinct genes from 5032 species across the full taxonomic tree. Detailed evaluations and analyses reveal promising results for the application of this data in database and pathway curation efforts. The main software components used in this study are released under an open-source license. Further, the resulting dataset is freely accessible through a novel API, providing programmatic and customized access (http://www.evexdb.org/api/v001/. 
Finally, to allow for large-scale bioinformatic analyses, the entire resource is available for bulk download from

  11. Multi-Scale Three-Dimensional Variational Data Assimilation System for Coastal Ocean Prediction

    Science.gov (United States)

    Li, Zhijin; Chao, Yi; Li, P. Peggy

    2012-01-01

A multi-scale three-dimensional variational data assimilation system (MS-3DVAR) has been formulated, and the associated software system developed, to improve high-resolution coastal ocean prediction. The system improves coastal ocean prediction skill and has been used in support of operational coastal ocean forecasting systems and field experiments. It was developed to improve the capability of data assimilation for assimilating, simultaneously and effectively, sparse vertical profiles and high-resolution remote sensing surface measurements into coastal ocean models, as well as for constraining model biases. In this system, the cost function is decomposed into two separate units for the large- and small-scale components, respectively. As such, data assimilation is implemented sequentially from large to small scales, the background error covariance is constructed to be scale-dependent, and a scale-dependent dynamic balance is incorporated. This scheme allows the effective constraint of large scales and model bias through assimilating sparse vertical profiles, and of small scales through assimilating high-resolution surface measurements. MS-3DVAR thus enhances the capability of traditional 3DVAR for assimilating highly heterogeneously distributed observations, such as along-track satellite altimetry data, and in particular maximizes the extraction of information from limited numbers of vertical profile observations.
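The sequential large-to-small-scale assimilation can be illustrated in one dimension with two optimal-interpolation updates using scale-dependent background covariances. Truth, observation layout, and covariance parameters are synthetic assumptions; this is a sketch of the idea, not the MS-3DVAR system:

```python
import numpy as np

def gaussian_cov(n, length, sigma):
    """Background error covariance with a Gaussian correlation length scale."""
    i = np.arange(n)
    return sigma**2 * np.exp(-0.5 * ((i[:, None] - i[None, :]) / length) ** 2)

def analysis(xb, y, H, B, R):
    """Standard 3DVAR/OI update: xa = xb + B H' (H B H' + R)^-1 (y - H xb)."""
    K = B @ H.T @ np.linalg.inv(H @ B @ H.T + R)
    return xb + K @ (y - H @ xb)

n = 100
grid = np.arange(n)
truth = np.sin(2 * np.pi * grid / n) + 0.3 * np.sin(2 * np.pi * 7 * grid / n)
xb = np.zeros(n)                      # flat background

# stage 1: sparse "vertical profile" obs constrain the large scale
idx_L = np.arange(5, n, 20)
xa = analysis(xb, truth[idx_L], np.eye(n)[idx_L],
              gaussian_cov(n, 15.0, 1.0), 0.01 * np.eye(len(idx_L)))

# stage 2: dense "surface" obs correct the small scale of the residual
idx_S = np.arange(0, n, 2)
xa = analysis(xa, truth[idx_S], np.eye(n)[idx_S],
              gaussian_cov(n, 3.0, 0.3), 0.01 * np.eye(len(idx_S)))

rmse_b = np.sqrt(np.mean((xb - truth) ** 2))
rmse_a = np.sqrt(np.mean((xa - truth) ** 2))
```

The long correlation length in stage 1 spreads sparse profile information broadly; the short length in stage 2 lets dense surface data add fine structure without destroying the large-scale fit.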

  12. Large-scale automated image analysis for computational profiling of brain tissue surrounding implanted neuroprosthetic devices using Python.

    Science.gov (United States)

    Rey-Villamizar, Nicolas; Somasundar, Vinay; Megjhani, Murad; Xu, Yan; Lu, Yanbin; Padmanabhan, Raghav; Trett, Kristen; Shain, William; Roysam, Badri

    2014-01-01

    In this article, we describe the use of Python for large-scale automated server-based bio-image analysis in FARSIGHT, a free and open-source toolkit of image analysis methods for quantitative studies of complex and dynamic tissue microenvironments imaged by modern optical microscopes, including confocal, multi-spectral, multi-photon, and time-lapse systems. The core FARSIGHT modules for image segmentation, feature extraction, tracking, and machine learning are written in C++, leveraging widely used libraries including ITK, VTK, Boost, and Qt. For solving complex image analysis tasks, these modules must be combined into scripts using Python. As a concrete example, we consider the problem of analyzing 3-D multi-spectral images of brain tissue surrounding implanted neuroprosthetic devices, acquired using high-throughput multi-spectral spinning disk step-and-repeat confocal microscopy. The resulting images typically contain 5 fluorescent channels. Each channel consists of 6000 × 10,000 × 500 voxels with 16 bits/voxel, implying image sizes exceeding 250 GB. These images must be mosaicked, pre-processed to overcome imaging artifacts, and segmented to enable cellular-scale feature extraction. The features are used to identify cell types, and perform large-scale analysis for identifying spatial distributions of specific cell types relative to the device. Python was used to build a server-based script (Dell 910 PowerEdge servers with 4 sockets/server with 10 cores each, 2 threads per core and 1TB of RAM running on Red Hat Enterprise Linux linked to a RAID 5 SAN) capable of routinely handling image datasets at this scale and performing all these processing steps in a collaborative multi-user multi-platform environment. Our Python script enables efficient data storage and movement between computers and storage servers, logs all the processing steps, and performs full multi-threaded execution of all codes, including open and closed-source third party libraries.
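The flavor of such a thread-pooled, logged, tile-wise Python pipeline can be sketched as follows. The data and processing stages are mock stand-ins, not the actual FARSIGHT modules:

```python
import logging
from concurrent.futures import ThreadPoolExecutor

import numpy as np

logging.basicConfig(level=logging.INFO, format="%(asctime)s %(message)s")
log = logging.getLogger("pipeline")

def process_tile(tile_id, tile):
    """Per-tile stage: crude artifact correction then threshold segmentation
    (stand-ins for the real pre-processing and segmentation modules)."""
    corrected = tile - np.median(tile)       # mock flat-field correction
    mask = corrected > corrected.std()       # mock segmentation
    log.info("tile %d: %d foreground pixels", tile_id, int(mask.sum()))
    return int(mask.sum())

rng = np.random.default_rng(0)
tiles = [rng.normal(size=(64, 64)) for _ in range(8)]   # mock mosaic tiles

# multi-threaded execution over tiles, as a server-side script might do
with ThreadPoolExecutor(max_workers=4) as pool:
    counts = list(pool.map(process_tile, range(len(tiles)), tiles))
```

Each tile is logged as it is processed, so the full provenance of a run survives in the log, which matters when a mosaic has thousands of tiles spread across servers.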

  13. Large-scale automated image analysis for computational profiling of brain tissue surrounding implanted neuroprosthetic devices using Python

    Directory of Open Access Journals (Sweden)

    Nicolas eRey-Villamizar

    2014-04-01

Full Text Available In this article, we describe the use of Python for large-scale automated server-based bio-image analysis in FARSIGHT, a free and open-source toolkit of image analysis methods for quantitative studies of complex and dynamic tissue microenvironments imaged by modern optical microscopes, including confocal, multi-spectral, multi-photon, and time-lapse systems. The core FARSIGHT modules for image segmentation, feature extraction, tracking, and machine learning are written in C++, leveraging widely used libraries including ITK, VTK, Boost, and Qt. For solving complex image analysis tasks, these modules must be combined into scripts using Python. As a concrete example, we consider the problem of analyzing 3-D multi-spectral brain tissue images surrounding implanted neuroprosthetic devices, acquired using high-throughput multi-spectral spinning disk step-and-repeat confocal microscopy. The resulting images typically contain 5 fluorescent channels and 6,000 × 10,000 × 500 voxels with 16 bits/voxel, implying image sizes exceeding 250 GB. These images must be mosaicked, pre-processed to overcome imaging artifacts, and segmented to enable cellular-scale feature extraction. The features are used to identify cell types, and perform large-scale analytics for identifying spatial distributions of specific cell types relative to the device. Python was used to build a server-based script (Dell 910 PowerEdge servers with 4 sockets/server with 10 cores each, 2 threads per core and 1 TB of RAM running on Red Hat Enterprise Linux linked to a RAID 5 SAN) capable of routinely handling image datasets at this scale and performing all these processing steps in a collaborative multi-user multi-platform environment. Our Python script enables efficient data storage and movement between compute and storage servers, logs all processing steps, and performs full multi-threaded execution of all codes, including open and closed-source third party libraries.

  14. Unraveling The Connectome: Visualizing and Abstracting Large-Scale Connectomics Data

    KAUST Repository

    Al-Awami, Ali K.

    2017-04-30

    We explore visualization and abstraction approaches to represent neuronal data. Neuroscientists acquire electron microscopy volumes to reconstruct a complete wiring diagram of the neurons in the brain, called the connectome. This will be crucial to understanding brains and their development. However, the resulting data is complex and large, posing a big challenge to existing visualization techniques in terms of clarity and scalability. We describe solutions to tackle the problems of scalability and cluttered presentation. We first show how a query-guided interactive approach to visual exploration can reduce the clutter and help neuroscientists explore their data dynamically. We use a knowledge-based query algebra that facilitates the interactive creation of queries. This allows neuroscientists to pose domain-specific questions related to their research. Simple queries can be combined to form complex queries to answer more sophisticated questions. We then show how visual abstractions from 3D to 2D can significantly reduce the visual clutter and add clarity to the visualization so that scientists can focus more on the analysis. We abstract the topology of 3D neurons into a multi-scale, relative distance-preserving subway map visualization that allows scientists to interactively explore the morphological and connectivity features of neuronal cells. We then focus on the process of acquisition, where neuroscientists segment electron microscopy images to reconstruct neurons. The segmentation process of such data is tedious, time-intensive, and usually performed using a diverse set of tools. We present a novel web-based visualization system for tracking the state, progress, and evolution of segmentation data in neuroscience. Our multi-user system seamlessly integrates a diverse set of tools. Our system provides support for the management, provenance, accountability, and auditing of large-scale segmentations. Finally, we present a novel architecture to render very large

  15. Generation Expansion Planning Considering Integrating Large-scale Wind Generation

    DEFF Research Database (Denmark)

    Zhang, Chunyu; Ding, Yi; Østergaard, Jacob

    2013-01-01

necessitated the inclusion of more innovative and sophisticated approaches in power system investment planning. A bi-level generation expansion planning approach considering large-scale wind generation was proposed in this paper. The first phase is the investment decision, while the second phase is the production… optimization decision. A multi-objective PSO (MOPSO) algorithm was introduced to solve this optimization problem, which can accelerate the convergence and guarantee the diversity of the Pareto-optimal front set as well. The feasibility and effectiveness of the proposed bi-level planning approach and the MOPSO…
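A minimal MOPSO with an external Pareto archive, the core of the algorithm the abstract names, can be sketched on a standard two-objective test problem. The abstract does not specify the planning objectives or constraints, so Schaffer's problem stands in for them:

```python
import numpy as np

rng = np.random.default_rng(0)

def objectives(x):
    """Schaffer's two-objective test problem (a stand-in for the paper's
    investment-vs-production objectives, which are not given here)."""
    return np.array([float(x[0]) ** 2, (float(x[0]) - 2.0) ** 2])

def dominates(fa, fb):
    """Pareto dominance: no worse in all objectives, strictly better in one."""
    return np.all(fa <= fb) and np.any(fa < fb)

def pareto(archive):
    """Keep only non-dominated (position, objectives) pairs."""
    return [(x, f) for i, (x, f) in enumerate(archive)
            if not any(dominates(g, f) for j, (_, g) in enumerate(archive) if j != i)]

def mopso(n=20, iters=60, lo=-4.0, hi=4.0):
    pos = rng.uniform(lo, hi, (n, 1))
    vel = np.zeros_like(pos)
    pbest = pos.copy()
    pbest_f = [objectives(p) for p in pos]
    archive = pareto(list(zip(pos.copy(), pbest_f)))
    for _ in range(iters):
        for i in range(n):
            leader = archive[rng.integers(len(archive))][0]  # random Pareto leader
            r1, r2 = rng.random(2)
            vel[i] = 0.5 * vel[i] + 1.5 * r1 * (pbest[i] - pos[i]) \
                                  + 1.5 * r2 * (leader - pos[i])
            pos[i] = np.clip(pos[i] + vel[i], lo, hi)
            f = objectives(pos[i])
            if dominates(f, pbest_f[i]):
                pbest[i], pbest_f[i] = pos[i].copy(), f
            archive.append((pos[i].copy(), f))
        archive = pareto(archive)   # external non-dominated archive
    return archive

front = mopso()
```

The external archive is what preserves the diversity of the Pareto-optimal front the abstract mentions: every particle is guided toward a randomly chosen non-dominated leader rather than a single global best.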

  16. A multi scale model for small scale plasticity

    International Nuclear Information System (INIS)

    Zbib, Hussein M.

    2002-01-01

Full text: A framework for investigating size-dependent small-scale plasticity phenomena and related material instabilities at various length scales, ranging from the nano-microscale to the mesoscale, is presented. The model is based on fundamental physical laws that govern dislocation motion and its interaction with various defects and interfaces. In particular, a multi-scale model is developed merging two scales: the nano-microscale, where plasticity is determined by explicit three-dimensional dislocation dynamics analysis providing the material length scale, and the continuum scale, where energy transport is based on basic continuum mechanics laws. The result is a hybrid simulation model coupling discrete dislocation dynamics with finite element analyses. With this hybrid approach, one can address complex size-dependent problems, including dislocation boundaries, dislocations in heterogeneous structures, dislocation interaction with interfaces and associated shape changes and lattice rotations, as well as deformation in nano-structured materials, localized deformation and shear band

  17. Five hundred years of gridded high-resolution precipitation reconstructions over Europe and the connection to large-scale circulation

    Energy Technology Data Exchange (ETDEWEB)

    Pauling, Andreas [University of Bern, Institute of Geography, Bern (Switzerland); Luterbacher, Juerg; Wanner, Heinz [University of Bern, Institute of Geography, Bern (Switzerland); National Center of Competence in Research (NCCR) in Climate, Bern (Switzerland); Casty, Carlo [University of Bern, Climate and Environmental Physics Institute, Bern (Switzerland)

    2006-03-15

We present seasonal precipitation reconstructions for European land areas (30°W to 40°E, 30-71°N; given on a 0.5° × 0.5° grid) covering the period 1500-1900, together with gridded reanalysis data from 1901 to 2000 (Mitchell and Jones 2005). Principal component regression techniques were applied to develop this dataset. A large variety of long instrumental precipitation series, precipitation indices based on documentary evidence and natural proxies (tree-ring chronologies, ice cores, corals and a speleothem) that are sensitive to precipitation signals were used as predictors. Transfer functions were derived over the 1901-1983 calibration period and applied to 1500-1900 in order to reconstruct the large-scale precipitation fields over Europe. The performance (quality estimation based on unresolved variance within the calibration period) of the reconstructions varies over centuries, seasons and space. The highest reconstructive skill was found for winter over central Europe and the Iberian Peninsula. Precipitation variability over the last half millennium reveals both large interannual and decadal fluctuations. Applying running correlations, we found major non-stationarities in the relation between large-scale circulation and regional precipitation. For several periods during the last 500 years, we identified key atmospheric modes for southern Spain/northern Morocco and central Europe as representations of two precipitation regimes. Using scaled composite analysis, we show that precipitation extremes over central Europe and southern Spain are linked to distinct pressure patterns. Due to its high spatial and temporal resolution, this dataset allows detailed studies of regional precipitation variability for all seasons, impact studies on different time and space scales, comparisons with high-resolution climate models, as well as analysis of connections with regional temperature reconstructions. (orig.)
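Principal component regression of the kind used here to derive transfer functions can be sketched as follows. The proxies, target series, and the calibration/reconstruction split are synthetic stand-ins for the instrumental and proxy data:

```python
import numpy as np

rng = np.random.default_rng(0)

# mock data: 120 "years", 50 proxy predictors, one target precipitation series
years, n_proxy, k = 120, 50, 5
signal = np.sin(np.linspace(0, 12, years))          # shared climate signal
X = signal[:, None] * rng.normal(size=(1, n_proxy)) \
    + 0.3 * rng.normal(size=(years, n_proxy))       # noisy proxy network
y = 2.0 * signal + 0.1 * rng.normal(size=years)     # target precipitation

cal = slice(60, 120)                 # calibration window ("1901-1983" analogue)
X_mean = X[cal].mean(axis=0)
Xc = X[cal] - X_mean
U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
pcs = Xc @ Vt[:k].T                  # leading principal components as predictors
coef, *_ = np.linalg.lstsq(np.c_[np.ones(60), pcs], y[cal], rcond=None)

# apply the transfer function to the pre-calibration period ("1500-1900" analogue)
pcs_rec = (X[:60] - X_mean) @ Vt[:k].T
y_rec = np.c_[np.ones(60), pcs_rec] @ coef
corr = np.corrcoef(y_rec, y[:60])[0, 1]
```

Regressing on a few leading components rather than all fifty noisy proxies is what stabilizes the transfer function, at the cost of discarding variance unresolved within the calibration period, which is exactly the skill caveat the abstract notes.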

  18. Multi-scale analysis of lung computed tomography images

    CERN Document Server

    Gori, I; Fantacci, M E; Preite Martinez, A; Retico, A; De Mitri, I; Donadio, S; Fulcheri, C

    2007-01-01

    A computer-aided detection (CAD) system for the identification of lung internal nodules in low-dose multi-detector helical Computed Tomography (CT) images was developed in the framework of the MAGIC-5 project. The three modules of our lung CAD system, a segmentation algorithm for lung internal region identification, a multi-scale dot-enhancement filter for nodule candidate selection and a multi-scale neural technique for false positive finding reduction, are described. The results obtained on a dataset of low-dose and thin-slice CT scans are shown in terms of free response receiver operating characteristic (FROC) curves and discussed.
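The multi-scale dot-enhancement stage can be sketched with a Hessian-eigenvalue blob filter in 2-D, a common formulation of such filters; the CAD system's actual 3-D filter and parameters are not specified in the abstract:

```python
import numpy as np

def gaussian_smooth(img, sigma):
    """Separable Gaussian smoothing with numpy only."""
    r = int(3 * sigma)
    x = np.arange(-r, r + 1)
    k = np.exp(-0.5 * (x / sigma) ** 2)
    k /= k.sum()
    out = np.apply_along_axis(lambda m: np.convolve(m, k, mode="same"), 0, img)
    return np.apply_along_axis(lambda m: np.convolve(m, k, mode="same"), 1, out)

def dot_filter(img, sigmas=(1.0, 2.0, 4.0)):
    """Multi-scale dot enhancement: respond where both Hessian eigenvalues
    are negative (a bright blob); keep the maximum response over scales."""
    resp = np.zeros_like(img, dtype=float)
    for s in sigmas:
        sm = gaussian_smooth(img, s)
        gy, gx = np.gradient(sm)
        hyy, hyx = np.gradient(gy)
        hxy, hxx = np.gradient(gx)
        tr = hxx + hyy
        det = hxx * hyy - hxy * hyx
        disc = np.sqrt(np.maximum(tr**2 / 4 - det, 0.0))
        l1, l2 = tr / 2 + disc, tr / 2 - disc        # Hessian eigenvalues
        blob = np.where((l1 < 0) & (l2 < 0), s**2 * l1 * l2, 0.0)  # crude scale norm
        resp = np.maximum(resp, blob)
    return resp

# synthetic "CT slice" with one bright nodule at (40, 20)
yy, xx = np.mgrid[0:64, 0:64]
img = np.exp(-((yy - 40) ** 2 + (xx - 20) ** 2) / (2 * 3.0 ** 2))
resp = dot_filter(img)
peak = np.unravel_index(resp.argmax(), resp.shape)
```

Taking the maximum over several smoothing scales lets a single filter respond to nodules of different sizes, which is the point of the multi-scale design.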

  19. Traffic assignment models in large-scale applications

    DEFF Research Database (Denmark)

    Rasmussen, Thomas Kjær

the potential of the method proposed and the possibility to use individual-based GPS units for travel surveys in real-life large-scale multi-modal networks. Congestion is known to highly influence the way we act in the transportation network (and organise our lives), because of longer travel times… of observations of actual behaviour to obtain estimates of the (monetary) value of different travel time components, thereby increasing the behavioural realism of large-scale models. The generation of choice sets is a vital component in route choice models. This is, however, not a straightforward task in real…, but the reliability of the travel time also has a large impact on our travel choices. Consequently, in order to improve the realism of transport models, correct understanding and representation of two values that are related to the value of time (VoT) are essential: (i) the value of congestion (VoC), as the Vo…

  20. Large Scale Processes and Extreme Floods in Brazil

    Science.gov (United States)

    Ribeiro Lima, C. H.; AghaKouchak, A.; Lall, U.

    2016-12-01

Persistent large-scale anomalies in the atmospheric circulation and ocean state have been associated with heavy rainfall and extreme floods in water basins of different sizes across the world. Such studies have emerged in recent years as a new tool to improve the traditional, stationarity-based approach to flood frequency analysis and flood prediction. Here we seek to advance previous studies by evaluating the dominance of large-scale processes (e.g., atmospheric rivers/moisture transport) over local processes (e.g., local convection) in producing floods. We consider flood-prone regions in Brazil as case studies, and the role of large-scale climate processes in generating extreme floods in these regions is explored by means of observed streamflow, reanalysis data and machine learning methods. The dynamics of the large-scale atmospheric circulation in the days prior to the flood events are evaluated based on the vertically integrated moisture flux and its divergence field, which are interpreted in a low-dimensional space obtained by machine learning techniques, particularly supervised kernel principal component analysis. In this reduced-dimensional space, clusters are obtained in order to better understand the role of regional moisture recycling or teleconnected moisture in producing floods of a given magnitude. The convective available potential energy (CAPE) is also used as a measure of local convective activity. We investigate for individual sites the exceedance probability with which large-scale atmospheric fluxes dominate the flood process. Finally, we analyze regional patterns of floods and how the scaling law of floods with drainage area responds to changes in the climate forcing mechanisms (e.g., local vs. large-scale).
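The low-dimensional clustering step can be sketched with kernel PCA followed by a simple split. The study uses a supervised variant; here an unsupervised Gaussian-kernel PCA is shown for illustration, with synthetic two-regime data standing in for the moisture-flux fields:

```python
import numpy as np

rng = np.random.default_rng(0)

# synthetic stand-ins for daily moisture-flux fields in two regimes
# (e.g. local convection vs. large-scale transport); not real reanalysis data
n, d = 200, 40
regime = rng.integers(0, 2, n)
fields = rng.normal(size=(n, d)) + 3.0 * regime[:, None]

def kernel_pca(X, n_comp=2, gamma=0.01):
    """Unsupervised kernel PCA sketch: eigendecompose the doubly centred
    Gaussian kernel matrix and return the leading component scores."""
    m = len(X)
    sq = (X ** 2).sum(axis=1)
    K = np.exp(-gamma * (sq[:, None] + sq[None, :] - 2.0 * X @ X.T))
    J = np.eye(m) - 1.0 / m
    Kc = J @ K @ J                                   # centre in feature space
    w, V = np.linalg.eigh(Kc)
    top = np.argsort(w)[::-1][:n_comp]
    return V[:, top] * np.sqrt(np.abs(w[top]))       # component scores

scores = kernel_pca(fields)

# a median split on the first component should recover the two regimes
half = scores[:, 0] > np.median(scores[:, 0])
acc = max((half == regime).mean(), (half != regime).mean())
```

Clusters formed in this reduced space group flood days by the dominant moisture mechanism, which is the interpretation step the abstract describes.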

  1. Continuous micron-scaled rope engineering using a rotating multi-nozzle electrospinning emitter

    Science.gov (United States)

    Zhang, Chunchen; Gao, Chengcheng; Chang, Ming-Wei; Ahmad, Zeeshan; Li, Jing-Song

    2016-10-01

Electrospinning (ES) enables simple production of fibers for broad applications (e.g., biomedical engineering, energy storage, and electronics). However, the resulting structures are predominantly random, displaying significant disordered fiber entanglement, which inevitably gives rise to structural variation and poor reproducibility at the micron scale. Surface and structural features on this scale are critical for biomaterials, tissue engineering, and the pharmaceutical sciences. In this letter, a modified ES technique using a rotating multi-nozzle emitter is developed and utilized to fabricate continuous micron-scaled polycaprolactone (PCL) ropes, providing control over fiber intercalation (twist) and structural order. Micron-scaled ropes comprising 312 twists per millimeter are generated, and rope diameter and pitch length are regulated using polymer concentration and process parameters. Electric field simulations confirm the vector and distribution mechanisms that influence fiber orientation and deposition during the process. The modified fabrication system provides much-needed control over reproducibility and fiber entanglement, which is crucial for electrospun biomedical materials.

  2. Large-scale road safety programmes in low- and middle-income countries: an opportunity to generate evidence.

    Science.gov (United States)

    Hyder, Adnan A; Allen, Katharine A; Peters, David H; Chandran, Aruna; Bishai, David

    2013-01-01

    The growing burden of road traffic injuries, which kill over 1.2 million people yearly, falls mostly on low- and middle-income countries (LMICs). Despite this, evidence generation on the effectiveness of road safety interventions in LMIC settings remains scarce. This paper explores a scientific approach for evaluating road safety programmes in LMICs and introduces such a road safety multi-country initiative, the Road Safety in 10 Countries Project (RS-10). By building on existing evaluation frameworks, we develop a scientific approach for evaluating large-scale road safety programmes in LMIC settings. This also draws on '13 lessons' of large-scale programme evaluation: defining the evaluation scope; selecting study sites; maintaining objectivity; developing an impact model; utilising multiple data sources; using multiple analytic techniques; maximising external validity; ensuring an appropriate time frame; the importance of flexibility and a stepwise approach; continuous monitoring; providing feedback to implementers and policy-makers; promoting the uptake of evaluation results; and understanding evaluation costs. The use of relatively new approaches for evaluation of real-world programmes allows for the production of relevant knowledge. The RS-10 project affords an important opportunity to scientifically test these approaches for a real-world, large-scale road safety evaluation and generate new knowledge for the field of road safety.

  3. Modeling and Control of a Large Nuclear Reactor A Three-Time-Scale Approach

    CERN Document Server

    Shimjith, S R; Bandyopadhyay, B

    2013-01-01

    Control analysis and design of large nuclear reactors requires a suitable mathematical model representing the steady state and dynamic behavior of the reactor with reasonable accuracy. This task is, however, quite challenging because of several complex dynamic phenomena existing in a reactor. Quite often, the models developed are of prohibitively large order, non-linear and of complex structure not readily amenable to control studies. Moreover, the existence of simultaneously occurring dynamic variations at different speeds makes the mathematical model susceptible to numerical ill-conditioning, inhibiting direct application of standard control techniques. This monograph introduces a technique for mathematical modeling of large nuclear reactors in the framework of multi-point kinetics, to obtain a comparatively smaller order model in standard state space form, thus overcoming these difficulties. It further brings in innovative methods for controller design for systems exhibiting multi-time-scale property,...

  4. Scaling and criticality in a stochastic multi-agent model of a financial market

    Science.gov (United States)

    Lux, Thomas; Marchesi, Michele

    1999-02-01

    Financial prices have been found to exhibit some universal characteristics that resemble the scaling laws characterizing physical systems in which large numbers of units interact. This raises the question of whether scaling in finance emerges in a similar way - from the interactions of a large ensemble of market participants. However, such an explanation is in contradiction to the prevalent `efficient market hypothesis' in economics, which assumes that the movements of financial prices are an immediate and unbiased reflection of incoming news about future earning prospects. Within this hypothesis, scaling in price changes would simply reflect similar scaling in the `input' signals that influence them. Here we describe a multi-agent model of financial markets which supports the idea that scaling arises from mutual interactions of participants. Although the `news arrival process' in our model lacks both power-law scaling and any temporal dependence in volatility, we find that it generates such behaviour as a result of interactions between agents.
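
    A minimal toy sketch in the spirit of such chartist/fundamentalist interaction models (this is not the authors' model; the coefficients, the crude switching rule, and the price-impact form are all illustrative assumptions) shows how agent interaction, rather than the news process, can modulate volatility:

```python
import numpy as np

rng = np.random.default_rng(1)
T = 2000
p = np.zeros(T)                      # log price
pf = 0.0                             # log fundamental value (no scaling in "news")
n_c = 0.5                            # current fraction of chartists

for t in range(1, T):
    trend = p[t - 1] - p[t - 2] if t >= 2 else 0.0
    d_chart = 0.9 * trend            # chartists chase the recent trend
    d_fund = 0.2 * (pf - p[t - 1])   # fundamentalists revert toward value
    # crude stand-in for strategy switching/herding: chartist fraction drifts
    n_c = min(0.95, max(0.05, n_c + 0.05 * rng.normal()))
    excess = n_c * d_chart + (1.0 - n_c) * d_fund
    p[t] = p[t - 1] + excess + 0.01 * rng.normal()  # price impact plus noise

returns = np.diff(p)
kurt = np.mean(returns ** 4) / np.mean(returns ** 2) ** 2
print(kurt)  # kurtosis of returns; values above 3 indicate fat tails
```

    Because the effective feedback strength varies with the chartist fraction, volatility changes over time even though the exogenous noise is i.i.d. Gaussian.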

  5. Iterative equalization for OFDM systems over wideband Multi-Scale Multi-Lag channels

    NARCIS (Netherlands)

    Xu, T.; Tang, Z.; Remis, R.; Leus, G.

    2012-01-01

    OFDM suffers from inter-carrier interference (ICI) when the channel is time varying. This article seeks to quantify the amount of interference resulting from wideband OFDM channels, which are assumed to follow the multi-scale multi-lag (MSML) model. The MSML channel model results in full channel

  6. Large-scale bioenergy production: how to resolve sustainability trade-offs?

    Science.gov (United States)

    Humpenöder, Florian; Popp, Alexander; Bodirsky, Benjamin Leon; Weindl, Isabelle; Biewald, Anne; Lotze-Campen, Hermann; Dietrich, Jan Philipp; Klein, David; Kreidenweis, Ulrich; Müller, Christoph; Rolinski, Susanne; Stevanovic, Miodrag

    2018-02-01

    Large-scale 2nd generation bioenergy deployment is a key element of 1.5 °C and 2 °C transformation pathways. However, large-scale bioenergy production might have negative sustainability implications and thus may conflict with the Sustainable Development Goal (SDG) agenda. Here, we carry out a multi-criteria sustainability assessment of large-scale bioenergy crop production throughout the 21st century (300 EJ in 2100) using a global land-use model. Our analysis indicates that large-scale bioenergy production without complementary measures results in negative effects on the following sustainability indicators: deforestation, CO2 emissions from land-use change, nitrogen losses, unsustainable water withdrawals and food prices. One of our main findings is that single-sector environmental protection measures next to large-scale bioenergy production are prone to involve trade-offs among these sustainability indicators—at least in the absence of more efficient land or water resource use. For instance, if bioenergy production is accompanied by forest protection, deforestation and associated emissions (SDGs 13 and 15) decline substantially whereas food prices (SDG 2) increase. However, our study also shows that this trade-off strongly depends on the development of future food demand. In contrast to environmental protection measures, we find that agricultural intensification lowers some side-effects of bioenergy production substantially (SDGs 13 and 15) without generating new trade-offs—at least among the sustainability indicators considered here. Moreover, our results indicate that a combination of forest and water protection schemes, improved fertilization efficiency, and agricultural intensification would reduce the side-effects of bioenergy production most comprehensively. However, although our study includes more sustainability indicators than previous studies on bioenergy side-effects, our study represents only a small subset of all indicators relevant for the

  7. Model abstraction addressing long-term simulations of chemical degradation of large-scale concrete structures

    International Nuclear Information System (INIS)

    Jacques, D.; Perko, J.; Seetharam, S.; Mallants, D.

    2012-01-01

    This paper presents a methodology to assess the spatial-temporal evolution of chemical degradation fronts in real-size concrete structures typical of a near-surface radioactive waste disposal facility. The methodology consists of the abstraction of a so-called full (complicated) model accounting for the multicomponent - multi-scale nature of concrete to an abstracted (simplified) model which simulates chemical concrete degradation based on a single component in the aqueous and solid phase. The abstracted model is verified against chemical degradation fronts simulated with the full model under both diffusive and advective transport conditions. Implementation in the multi-physics simulation tool COMSOL allows simulation of the spatial-temporal evolution of chemical degradation fronts in large-scale concrete structures. (authors)
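
    The single-component abstracted model can be caricatured in one dimension: an aggressive aqueous species diffuses into the concrete and instantaneously consumes a single solid buffer, and the degraded zone is where the buffer is exhausted (a hedged sketch with illustrative parameters, not the authors' COMSOL implementation):

```python
import numpy as np

# one aggressive aqueous species diffusing into a 20 cm concrete column and
# instantly consuming a single solid buffer; all parameters are illustrative
nx, dx = 200, 1e-3                 # cells, cell size (m)
D, dt = 1e-9, 200.0                # diffusivity (m^2/s), time step (s)
c = np.zeros(nx); c[0] = 1.0       # scaled aqueous concentration, fixed boundary
s = np.full(nx, 5.0)               # scaled solid buffer capacity per cell

for step in range(50_000):         # ~115 days of exposure
    lap = np.zeros(nx)
    lap[1:-1] = (c[2:] - 2.0 * c[1:-1] + c[:-2]) / dx ** 2
    c[1:-1] += dt * D * lap[1:-1]  # explicit diffusion (dt*D/dx^2 = 0.2, stable)
    c[0] = 1.0
    react = np.minimum(c, s)       # instantaneous dissolution while buffer lasts
    s -= react
    c -= react
    c[0] = 1.0

depth = dx * np.count_nonzero(s < 1e-12)
print(depth)  # degraded depth in metres; the front advances roughly as sqrt(t)
```

    The sharp-front behaviour (depth proportional to the square root of time under diffusive transport) is the kind of result an abstracted model must reproduce to be verified against the full multi-component simulation.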

  8. Magnetic hysteresis at the domain scale of a multi-scale material model for magneto-elastic behaviour

    Energy Technology Data Exchange (ETDEWEB)

    Vanoost, D., E-mail: dries.vanoost@kuleuven-kulak.be [KU Leuven Technology Campus Ostend, ReMI Research Group, Oostende B-8400 (Belgium); KU Leuven Kulak, Wave Propagation and Signal Processing Research Group, Kortrijk B-8500 (Belgium); Steentjes, S. [Institute of Electrical Machines, RWTH Aachen University, Aachen D-52062 (Germany); Peuteman, J. [KU Leuven Technology Campus Ostend, ReMI Research Group, Oostende B-8400 (Belgium); KU Leuven, Department of Electrical Engineering, Electrical Energy and Computer Architecture, Heverlee B-3001 (Belgium); Gielen, G. [KU Leuven, Department of Electrical Engineering, Microelectronics and Sensors, Heverlee B-3001 (Belgium); De Gersem, H. [KU Leuven Kulak, Wave Propagation and Signal Processing Research Group, Kortrijk B-8500 (Belgium); TU Darmstadt, Institut für Theorie Elektromagnetischer Felder, Darmstadt D-64289 (Germany); Pissoort, D. [KU Leuven Technology Campus Ostend, ReMI Research Group, Oostende B-8400 (Belgium); KU Leuven, Department of Electrical Engineering, Microelectronics and Sensors, Heverlee B-3001 (Belgium); Hameyer, K. [Institute of Electrical Machines, RWTH Aachen University, Aachen D-52062 (Germany)

    2016-09-15

    This paper proposes a multi-scale energy-based material model for poly-crystalline materials. Describing the behaviour of poly-crystalline materials at three spatial scales of dominating physical mechanisms allows accounting for the heterogeneity and multi-axiality of the material behaviour. The three spatial scales are the poly-crystalline, grain and domain scale. Together with appropriate scale transition rules and models for local magnetic behaviour at each scale, the model is able to describe the magneto-elastic behaviour (magnetostriction and hysteresis) at the macroscale, although the data input is merely based on a set of physical constants. Introducing a new energy density function that describes the demagnetisation field, the anhysteretic multi-scale energy-based material model is extended to the hysteretic case. The hysteresis behaviour is included at the domain scale according to the micro-magnetic domain theory while preserving a valid description for the magneto-elastic coupling. The model is verified using existing measurement data for different mechanical stress levels. - Highlights: • A ferromagnetic hysteretic energy-based multi-scale material model is proposed. • The hysteresis is obtained by a newly proposed hysteresis energy density function. • Avoids tedious parameter identification.

  9. Ethics of large-scale change

    OpenAIRE

    Arler, Finn

    2006-01-01

      The subject of this paper is long-term large-scale changes in human society. Some very significant examples of large-scale change are presented: human population growth, human appropriation of land and primary production, the human use of fossil fuels, and climate change. The question is posed, which kind of attitude is appropriate when dealing with large-scale changes like these from an ethical point of view. Three kinds of approaches are discussed: Aldo Leopold's mountain thinking, th...

  10. Bio-stimuli-responsive multi-scale hyaluronic acid nanoparticles for deepened tumor penetration and enhanced therapy.

    Science.gov (United States)

    Huo, Mengmeng; Li, Wenyan; Chaudhuri, Arka Sen; Fan, Yuchao; Han, Xiu; Yang, Chen; Wu, Zhenghong; Qi, Xiaole

    2017-09-01

    In this study, we developed bio-stimuli-responsive multi-scale hyaluronic acid (HA) nanoparticles encapsulating polyamidoamine (PAMAM) dendrimers as the subunits. These HA/PAMAM nanoparticles of large scale (197.10±3.00 nm) were stable during systemic circulation and then enriched at the tumor sites; however, they were prone to degradation by the highly expressed hyaluronidase (HAase), releasing the inner PAMAM dendrimers and regaining a small scale (5.77±0.25 nm) with positive charge. In a tumor spheroid penetration assay on A549 3D tumor spheroids for 8 h, the fluorescein isothiocyanate (FITC) labeled multi-scale HA/PAMAM-FITC nanoparticles penetrated deeply into the tumor spheroids upon degradation by HAase. Moreover, small animal imaging in male nude mice bearing H22 tumors showed that HA/PAMAM-FITC nanoparticles possess more prolonged systemic circulation compared with both PAMAM-FITC nanoparticles and free FITC. In addition, after intravenous administration in mice bearing H22 tumors, methotrexate (MTX) loaded multi-scale HA/PAMAM-MTX nanoparticles exhibited 2.68-fold greater antitumor activity. Copyright © 2017 Elsevier Ltd. All rights reserved.

  11. [Review of occupational hazard census and large-scale surveys in sixty years in China].

    Science.gov (United States)

    Li, Tao; Li, Chao-lin; Wang, Huan-qiang

    2010-11-01

    To compare and analyze all previous censuses and large-scale surveys of occupational hazards in China, draw lessons from the past, and provide references for the development of occupational hazard censuses or surveys in the new period. A literature retrieval was performed mainly on the occupational hazard censuses and large-scale surveys conducted since the founding of the People's Republic of China; only surveys carried out on a national scale were selected. Keywords such as survey time, survey scope, industries, occupational diseases and the rate of examination, organization and technical director, and methods were drawn from these items, and the outcomes and experiences were summarized. Since the founding of the People's Republic of China, seven occupational hazard censuses and large-scale surveys have been carried out in China: three concerned silicosis or pneumoconiosis, two concerned poisons and carcinogens, one concerned noise, and one concerned township industrial enterprises. Leadership attention was the fundamental guarantee of the success of a survey, sound occupational health management organizations were the basis, and collaborative relationships were an important factor; only an interdisciplinary team, scientific design, quality control and incentive mechanisms could assure the quality of a survey. Future surveys should be designed and carried out according to industries.

  12. Sampling large random knots in a confined space

    International Nuclear Information System (INIS)

    Arsuaga, J; Blackstone, T; Diao, Y; Hinson, K; Karadayi, E; Saito, M

    2007-01-01

    DNA knots formed under extreme conditions of condensation, as in bacteriophage P4, are difficult to analyze experimentally and theoretically. In this paper, we propose to use the uniform random polygon model as a supplementary method to the existing methods for generating random knots in confinement. The uniform random polygon model allows us to sample knots with large crossing numbers and also to generate large diagrammatically prime knot diagrams. We show numerically that uniform random polygons sample knots with large minimum crossing numbers and certain complicated knot invariants (as those observed experimentally). We do this in terms of the knot determinants or colorings. Our numerical results suggest that the average determinant of a uniform random polygon of n vertices grows faster than O(e^{n^2}). We also investigate the complexity of prime knot diagrams. We show rigorously that the probability that a randomly selected 2D uniform random polygon of n vertices is almost diagrammatically prime goes to 1 as n goes to infinity. Furthermore, the average number of crossings in such a diagram is of the order O(n^2). Therefore, two-dimensional uniform random polygons offer an effective way of sampling large (prime) knots, which can be useful in various applications.
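
    A hedged sketch of the sampling idea (a uniform random polygon plus a crossing count of its planar projection; the vertex numbers and the intersection test are illustrative, not the authors' code):

```python
import numpy as np

def uniform_random_polygon(n, rng):
    """n vertices sampled uniformly in the unit cube, joined in order and closed."""
    return rng.random((n, 3))

def crossings_2d(P):
    """Number of crossings between non-adjacent edges of the xy-projection."""
    n = len(P)
    Q = P[:, :2]

    def orient(p, q, r):
        return np.sign((q[0] - p[0]) * (r[1] - p[1]) - (q[1] - p[1]) * (r[0] - p[0]))

    def cross(a, b, c, d):  # proper segment intersection (general position)
        return orient(a, b, c) != orient(a, b, d) and orient(c, d, a) != orient(c, d, b)

    count = 0
    for i in range(n):
        for j in range(i + 1, n):
            if j == i + 1 or (i == 0 and j == n - 1):
                continue  # skip edges sharing a vertex
            count += cross(Q[i], Q[(i + 1) % n], Q[j], Q[(j + 1) % n])
    return count

rng = np.random.default_rng(2)
avg = np.mean([crossings_2d(uniform_random_polygon(30, rng)) for _ in range(20)])
print(avg)  # the average crossing number grows on the order of n^2
```

    Computing the knot determinant from such a diagram (via colorings) is more involved and is omitted here.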

  15. Formalizing Knowledge in Multi-Scale Agent-Based Simulations.

    Science.gov (United States)

    Somogyi, Endre; Sluka, James P; Glazier, James A

    2016-10-01

    Multi-scale, agent-based simulations of cellular and tissue biology are increasingly common. These simulations combine and integrate a range of components from different domains. Simulations continuously create, destroy and reorganize constituent elements causing their interactions to dynamically change. For example, the multi-cellular tissue development process coordinates molecular, cellular and tissue scale objects with biochemical, biomechanical, spatial and behavioral processes to form a dynamic network. Different domain specific languages can describe these components in isolation, but cannot describe their interactions. No current programming language is designed to represent in human readable and reusable form the domain specific knowledge contained in these components and interactions. We present a new hybrid programming language paradigm that naturally expresses the complex multi-scale objects and dynamic interactions in a unified way and allows domain knowledge to be captured, searched, formalized, extracted and reused.

  16. Hypersingular integral equations, waveguiding effects in Cantorian Universe and genesis of large scale structures

    International Nuclear Information System (INIS)

    Iovane, G.; Giordano, P.

    2005-01-01

    In this work we introduce hypersingular integral equations and analyze a realistic model of gravitational waveguides on a Cantorian space-time. A waveguiding effect is considered with respect to the large scale structure of the Universe, where structure formation appears as if it were a classically self-similar random process at all astrophysical scales. The result is that it seems we live in an El Naschie's ε^(∞) Cantorian space-time, where gravitational lensing and waveguiding effects can explain the appearing Universe. In particular, we consider filamentary and planar large scale structures as possible refraction channels for electromagnetic radiation coming from cosmological structures. In this vision, supported by three numerical simulations, the Universe appears as a large set of self-similar adaptive mirrors. Consequently, an infinite Universe is just an optical illusion produced by mirroring effects connected with the large scale structure of a finite, and not so large, Universe.

  17. The application of liquid air energy storage for large scale long duration solutions to grid balancing

    Science.gov (United States)

    Brett, Gareth; Barnett, Matthew

    2014-12-01

    Liquid Air Energy Storage (LAES) provides large scale, long duration energy storage at the point of demand in the 5 MW/20 MWh to 100 MW/1,000 MWh range. LAES combines mature components from the industrial gas and electricity industries assembled in a novel process and is one of the few storage technologies that can be delivered at large scale, with no geographical constraints. The system uses no exotic materials or scarce resources and all major components have a proven lifetime of 25+ years. The system can also integrate low grade waste heat to increase power output. Founded in 2005, Highview Power Storage is a UK-based developer of LAES. The company has taken the concept from academic analysis, through laboratory testing, and in 2011 commissioned the world's first fully integrated system at pilot plant scale (300 kW/2.5 MWh) hosted at SSE's (Scottish & Southern Energy) 80 MW Biomass Plant in Greater London, which was partly funded by a Department of Energy and Climate Change (DECC) grant. Highview is now working with commercial customers to deploy multi MW commercial reference plants in the UK and abroad.

  18. Genome Partitioner: A web tool for multi-level partitioning of large-scale DNA constructs for synthetic biology applications.

    Science.gov (United States)

    Christen, Matthias; Del Medico, Luca; Christen, Heinz; Christen, Beat

    2017-01-01

    Recent advances in lower-cost DNA synthesis techniques have enabled new innovations in the field of synthetic biology. Still, efficient design and higher-order assembly of genome-scale DNA constructs remains a labor-intensive process. Given the complexity, computer assisted design tools that fragment large DNA sequences into fabricable DNA blocks are needed to pave the way towards streamlined assembly of biological systems. Here, we present the Genome Partitioner software implemented as a web-based interface that permits multi-level partitioning of genome-scale DNA designs. Without the need for specialized computing skills, biologists can submit their DNA designs to a fully automated pipeline that generates the optimal retrosynthetic route for higher-order DNA assembly. To test the algorithm, we partitioned a 783 kb Caulobacter crescentus genome design. We validated the partitioning strategy by assembling a 20 kb test segment encompassing a difficult to synthesize DNA sequence. Successful assembly from 1 kb subblocks into the 20 kb segment highlights the effectiveness of the Genome Partitioner for reducing synthesis costs and timelines for higher-order DNA assembly. The Genome Partitioner is broadly applicable to translate DNA designs into ready to order sequences that can be assembled with standardized protocols, thus offering new opportunities to harness the diversity of microbial genomes for synthetic biology applications. The Genome Partitioner web tool can be accessed at https://christenlab.ethz.ch/GenomePartitioner.
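
    The core partitioning idea, splitting a large design into synthesizable blocks that overlap so neighbours can be joined by assembly, can be sketched as follows (the block and overlap sizes are assumptions for illustration; the actual Genome Partitioner also applies synthesis constraints when choosing cut sites):

```python
def partition(seq, block=1000, overlap=40):
    """Split a DNA sequence into blocks of at most `block` bases, with
    neighbouring blocks sharing `overlap` bases as an assembly junction."""
    step = block - overlap
    blocks, i = [], 0
    while i < len(seq):
        blocks.append(seq[i:i + block])
        if i + block >= len(seq):
            break
        i += step
    return blocks

genome = "ACGT" * 5000  # 20 kb toy segment (cf. the validated test segment)
blocks = partition(genome)
print(len(blocks), len(blocks[0]))  # 21 1000
# each adjacent pair shares an overlap usable as an assembly junction
assert all(a[-40:] == b[:40] for a, b in zip(blocks, blocks[1:]))
```

    A real retrosynthetic route would additionally recurse over assembly levels (subblocks into blocks into segments) and screen each cut for hard-to-synthesize sequence features.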

  20. Multi-scale modeling of composites

    DEFF Research Database (Denmark)

    Azizi, Reza

    A general method to obtain the homogenized response of metal-matrix composites is developed. It is assumed that the microscopic scale is sufficiently small compared to the macroscopic scale such that the macro response does not affect the micromechanical model. Therefore, the microscopic scale… Hill-Mandel's energy principle is used to find macroscopic operators based on micro-mechanical analyses using the finite element method under generalized plane strain conditions. A phenomenological macroscopic model for metal matrix composites is developed based on constitutive operators describing the elastic… to plastic deformation. The macroscopic operators found can be used to model metal matrix composites on the macroscopic scale using a hierarchical multi-scale approach. Finally, decohesion under tension and shear loading is studied using a cohesive law for the interface between matrix and fiber.

  1. The Large-Scale Biosphere-Atmosphere Experiment in Amazonia: Analyzing Regional Land Use Change Effects.

    Science.gov (United States)

    Michael Keller; Maria Assunção Silva-Dias; Daniel C. Nepstad; Meinrat O. Andreae

    2004-01-01

    The Large-Scale Biosphere-Atmosphere Experiment in Amazonia (LBA) is a multi-disciplinary, multinational scientific project led by Brazil. LBA researchers seek to understand Amazonia in its global context especially with regard to regional and global climate. Current development activities in Amazonia including deforestation, logging, cattle ranching, and agriculture...

  2. Coupled numerical approach combining finite volume and lattice Boltzmann methods for multi-scale multi-physicochemical processes

    Energy Technology Data Exchange (ETDEWEB)

    Chen, Li; He, Ya-Ling [Key Laboratory of Thermo-Fluid Science and Engineering of MOE, School of Energy and Power Engineering, Xi' an Jiaotong University, Xi' an, Shaanxi 710049 (China); Kang, Qinjun [Computational Earth Science Group (EES-16), Los Alamos National Laboratory, Los Alamos, NM (United States); Tao, Wen-Quan, E-mail: wqtao@mail.xjtu.edu.cn [Key Laboratory of Thermo-Fluid Science and Engineering of MOE, School of Energy and Power Engineering, Xi' an Jiaotong University, Xi' an, Shaanxi 710049 (China)

    2013-12-15

    A coupled (hybrid) simulation strategy spatially combining the finite volume method (FVM) and the lattice Boltzmann method (LBM), called CFVLBM, is developed to simulate coupled multi-scale multi-physicochemical processes. In the CFVLBM, computational domain of multi-scale problems is divided into two sub-domains, i.e., an open, free fluid region and a region filled with porous materials. The FVM and LBM are used for these two regions, respectively, with information exchanged at the interface between the two sub-domains. A general reconstruction operator (RO) is proposed to derive the distribution functions in the LBM from the corresponding macro scalar, the governing equation of which obeys the convection–diffusion equation. The CFVLBM and the RO are validated in several typical physicochemical problems and then are applied to simulate complex multi-scale coupled fluid flow, heat transfer, mass transport, and chemical reaction in a wall-coated micro reactor. The maximum ratio of the grid size between the FVM and LBM regions is explored and discussed. -- Highlights: •A coupled simulation strategy for simulating multi-scale phenomena is developed. •Finite volume method and lattice Boltzmann method are coupled. •A reconstruction operator is derived to transfer information at the sub-domains interface. •Coupled multi-scale multiple physicochemical processes in micro reactor are simulated. •Techniques to save computational resources and improve the efficiency are discussed.

  3. Joint Conditional Random Field Filter for Multi-Object Tracking

    Directory of Open Access Journals (Sweden)

    Luo Ronghua

    2011-03-01

    Full Text Available Object tracking can improve the performance of a mobile robot, especially in populated dynamic environments. A novel joint conditional random field filter (JCRFF), based on a conditional random field with hierarchical structure, is proposed for multi-object tracking by abstracting the data associations between objects and measurements as a sequence of labels. Since the conditional random field makes no assumptions about the dependency structure between the observations and allows non-local dependencies between the state and the observations, the proposed method can not only fuse multiple cues, including shape information and motion information, to improve the stability of tracking, but also integrate moving object detection and object tracking quite well. At the same time, an implementation of multi-object tracking based on JCRFF with measurements from the laser range finder on a mobile robot is studied. Experimental results with the mobile robot developed in our lab show that the proposed method has higher precision and better stability than the joint probabilistic data association filter (JPDAF).

  4. Promoting Handwashing Behavior: The Effects of Large-scale Community and School-level Interventions.

    Science.gov (United States)

    Galiani, Sebastian; Gertler, Paul; Ajzenman, Nicolas; Orsola-Vidal, Alexandra

    2016-12-01

    This paper analyzes a randomized experiment that uses novel strategies to promote handwashing with soap at critical points in time in Peru. It evaluates a large-scale comprehensive initiative that involved both community and school activities in addition to communication campaigns. The analysis indicates that the initiative was successful in reaching the target audience and in increasing the treated population's knowledge about appropriate handwashing behavior. These improvements translated into higher self-reported and observed handwashing with soap at critical junctures. However, no significant improvements in the health of children under the age of 5 years were observed. Copyright © 2015 John Wiley & Sons, Ltd.

  5. Single Image Super-Resolution Based on Multi-Scale Competitive Convolutional Neural Network.

    Science.gov (United States)

    Du, Xiaofeng; Qu, Xiaobo; He, Yifan; Guo, Di

    2018-03-06

    Deep convolutional neural networks (CNNs) are successful in single-image super-resolution. Traditional CNNs are limited in exploiting multi-scale contextual information for image reconstruction due to the fixed convolutional kernel in their building modules. To restore various scales of image details, we enhance the multi-scale inference capability of CNNs by introducing competition among multi-scale convolutional filters, and build a shallow network under limited computational resources. The proposed network has two advantages: (1) the multi-scale convolutional kernels provide multiple contexts for image super-resolution, and (2) the maximum competitive strategy adaptively chooses the optimal scale of information for image reconstruction. Our experimental results on image super-resolution show that the proposed network outperforms the state-of-the-art methods.
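    The competition idea can be sketched in a few lines of NumPy (illustrative only, not the authors' network): run the input through kernels of different sizes in parallel and keep, at each pixel, the strongest response.

```python
import numpy as np

def conv2d_same(img, k):
    """Naive 'same'-size 2-D filtering with zero padding
    (cross-correlation; no kernel flip)."""
    kh, kw = k.shape
    ph, pw = kh // 2, kw // 2
    p = np.pad(img, ((ph, ph), (pw, pw)))
    out = np.zeros_like(img, dtype=float)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            out[i, j] = (p[i:i + kh, j:j + kw] * k).sum()
    return out

def competitive_multiscale(img, kernels):
    """Maximum competitive strategy: elementwise max over the responses
    of kernels of different sizes, so each pixel keeps its best scale."""
    return np.maximum.reduce([conv2d_same(img, k) for k in kernels])
```

    In the paper this competition happens between learned multi-scale filters inside the network; the snippet only shows the max-over-scales selection rule on fixed kernels.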

  6. Atmospheric Rivers across Multi-scales of the Hydrologic cycle

    Science.gov (United States)

    Hu, H.

    2017-12-01

    Atmospheric Rivers (ARs) are defined as filamentary structures of strong water vapor transport in the atmosphere, moving as much water as is discharged by the Amazon River. As a large-scale phenomenon, ARs are embedded in planetary-scale Rossby waves and account for the majority of poleward moisture transport in the midlatitudes. At the same time, ARs are the fundamental physical mechanism leading to extreme basin-scale precipitation and flooding over the U.S. West Coast in the winter season: the moisture transported by ARs is forced to rise and generate precipitation when it impinges on the mountainous coastal lands. My goal is to build the connection between the multi-scale features associated with ARs and their impacts on local hydrology, with particular focus on the U.S. West Coast. Moving across the different scales, I have: (1) examined the planetary-scale dynamics in the upper troposphere and established a robust relationship between the two regimes of Rossby wave breaking and AR precipitation and streamflow along the West Coast; (2) quantified the contribution from the tropics/subtropics to AR-related precipitation intensity and found a significant modulation by the large-scale thermodynamics; (3) developed a water tracer tool in a land surface model to track the lifecycle of water collected from AR precipitation over the terrestrial system, so that the role of catchment-scale factors in modulating ARs' hydrological consequences can be examined. Ultimately, the information gathered from these studies will indicate how dynamic and thermodynamic changes in response to climate change could affect local flooding and water resources, which would be helpful in decision making.

  7. Gradient networks on uncorrelated random scale-free networks

    International Nuclear Information System (INIS)

    Pan Guijun; Yan Xiaoqing; Huang Zhongbing; Ma Weichuan

    2011-01-01

    Uncorrelated random scale-free (URSF) networks are useful null models for checking the effects of scale-free topology on network-based dynamical processes. Here, we present a comparative study of the jamming level of gradient networks based on URSF networks and Erdos-Renyi (ER) random networks. We find that the URSF networks are less congested than ER random networks for average degree ⟨k⟩ > k_c, where k_c ≈ 2 denotes a critical connectivity. In addition, by investigating the topological properties of the two kinds of gradient networks, we discuss the relations between the topological structure and the transport efficiency of gradient networks. These findings show that the uncorrelated scale-free structure might allow more efficient transport than the random structure.
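    A gradient network and its jamming (congestion) factor are easy to construct; the sketch below uses the usual definition (an illustration, not the paper's code): each node forwards to the neighbour (or itself) with the highest random scalar potential, and J = 1 - R/N, where R is the number of distinct receiver nodes.

```python
import random

def er_graph(n, p, rng):
    """Erdos-Renyi G(n, p) as an adjacency dict."""
    adj = {i: set() for i in range(n)}
    for i in range(n):
        for j in range(i + 1, n):
            if rng.random() < p:
                adj[i].add(j)
                adj[j].add(i)
    return adj

def congestion(adj, rng):
    """Jamming factor J = 1 - R/N of the gradient network: every node
    points to the member of its closed neighbourhood with the largest
    random potential h; R counts the distinct receivers."""
    n = len(adj)
    h = [rng.random() for _ in range(n)]
    receivers = {max(adj[i] | {i}, key=lambda j: h[j]) for i in adj}
    return 1 - len(receivers) / n
```

    On a complete graph every node points to the global maximum, so J = 1 - 1/N, the fully congested limit; sparse graphs sit well below it.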

  8. CoDuSe group exercise programme improves balance and reduces falls in people with multiple sclerosis: A multi-centre, randomized, controlled pilot study.

    Science.gov (United States)

    Carling, Anna; Forsberg, Anette; Gunnarsson, Martin; Nilsagård, Ylva

    2017-09-01

    Imbalance leading to falls is common in people with multiple sclerosis (PwMS). To evaluate the effects of a balance group exercise programme (CoDuSe) on balance and walking in PwMS (Expanded Disability Status Scale, 4.0-7.5). A multi-centre, randomized, controlled, single-blinded pilot study with random allocation to early or late start of exercise, with the latter group serving as control group for the physical function measures. In total, 14 supervised 60-minute exercise sessions were delivered over 7 weeks. Pretest-posttest analyses were conducted for self-reported near falls and falls in the group starting late. Primary outcome was the Berg Balance Scale (BBS). A total of 51 participants were initially enrolled; three were lost to follow-up. Post-intervention, the exercise group showed statistically significant improvement (p = 0.015) in BBS and borderline significant improvement in the MS Walking Scale (p = 0.051), both with large effect sizes (3.66; -2.89). No other significant differences were found between groups. In the group starting late, the numbers of falls and near falls were statistically significantly reduced after exercise compared to before. The programme improved balance and reduced perceived walking limitations, compared to no exercise. The intervention also reduced the frequency of falls and near falls.

  9. The integration of novel diagnostics techniques for multi-scale monitoring of large civil infrastructures

    Directory of Open Access Journals (Sweden)

    F. Soldovieri

    2008-11-01

    Full Text Available In recent years, structural monitoring of large infrastructures (buildings, dams, bridges or, more generally, man-made structures) has attracted increased attention due to the growing interest in safety and security issues and in risk assessment through early detection. In this framework, the aim of this paper is to introduce a new integrated approach which combines two sensing techniques acting on different spatial and temporal scales. The first one is a distributed optic-fibre sensor based on the Brillouin scattering phenomenon, which allows spatially and temporally continuous monitoring of the structure with a "low" spatial resolution (of the order of a metre). The second technique is based on the use of Ground Penetrating Radar (GPR), which can provide detailed images of the inner status of the structure (with a spatial resolution of less than tens of centimetres), but does not allow temporally continuous monitoring. The paper describes the features of these two techniques and provides experimental results concerning preliminary test cases.

  10. Large-scale Instability during Gravitational Collapse with Neutrino Transport and a Core-Collapse Supernova

    Science.gov (United States)

    Aksenov, A. G.; Chechetkin, V. M.

    2018-04-01

    Most of the energy released in the gravitational collapse of the cores of massive stars is carried away by neutrinos, which play a pivotal role in explaining core-collapse supernovae. Currently, mathematical models of gravitational collapse are based on multi-dimensional gas dynamics and thermonuclear reactions, while neutrino transport is treated in a simplified way. Here, multi-dimensional gas dynamics is used with neutrino transport in the flux-limited diffusion approximation to study the role of multi-dimensional effects. The possibility of large-scale convection is discussed, which is interesting both for explaining SN II and for setting up observations to register possible high-energy (≳10 MeV) neutrinos from a supernova. A new multi-dimensional, multi-temperature gas dynamics method with neutrino transport is presented.

  11. Towards Agent-Based Simulation of Emerging and Large-Scale Social Networks. Examples of the Migrant Crisis and MMORPGs

    Directory of Open Access Journals (Sweden)

    Schatten, Markus

    2016-10-01

    Full Text Available Large-scale agent-based simulation of social networks is described in the context of the migrant crisis in Syria and the EU, as well as massively multi-player on-line role playing games (MMORPGs). The recipeWorld system by Terna and Fontana is proposed as a possible solution for simulating large-scale social networks. The initial system has been re-implemented using the Smart Python multi-Agent Development Environment (SPADE), and Pyinteractive was used for visualization. We present initial simulation models that we plan to develop further in future studies. This paper thus describes research in progress that will hopefully establish a novel agent-based modelling system in the context of the ModelMMORPG project.

  12. A Multi-Resolution Spatial Model for Large Datasets Based on the Skew-t Distribution

    KAUST Repository

    Tagle, Felipe

    2017-12-06

    Large, non-Gaussian spatial datasets pose a considerable modeling challenge as the dependence structure implied by the model needs to be captured at different scales, while retaining feasible inference. Skew-normal and skew-t distributions have only recently begun to appear in the spatial statistics literature, without much consideration, however, for the ability to capture dependence at multiple resolutions, and simultaneously achieve feasible inference for increasingly large data sets. This article presents the first multi-resolution spatial model inspired by the skew-t distribution, where a large-scale effect follows a multivariate normal distribution and the fine-scale effects follow multivariate skew-normal distributions. The resulting marginal distribution for each region is skew-t, thereby allowing for greater flexibility in capturing skewness and heavy tails characterizing many environmental datasets. Likelihood-based inference is performed using a Monte Carlo EM algorithm. The model is applied as a stochastic generator of daily wind speeds over Saudi Arabia.
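    The skew-normal building block of the fine-scale effects can be sampled with the standard additive representation (a generic sketch, not the article's full multi-resolution construction): X = d·|Z0| + sqrt(1-d²)·Z1 is standard skew-normal with shape parameter alpha = d/sqrt(1-d²) and mean d·sqrt(2/π).

```python
import numpy as np

def skew_normal(n, alpha, rng):
    """Draw n standard skew-normal variates with shape parameter alpha,
    via the additive representation X = d|Z0| + sqrt(1 - d^2) Z1
    with d = alpha / sqrt(1 + alpha^2)."""
    d = alpha / np.sqrt(1 + alpha ** 2)
    z0 = rng.standard_normal(n)
    z1 = rng.standard_normal(n)
    return d * np.abs(z0) + np.sqrt(1 - d ** 2) * z1
```

    The mean is d·sqrt(2/π) and the variance 1 - 2d²/π. The article combines a normal large-scale effect with skew-normal fine-scale effects so that each regional marginal becomes skew-t; the snippet only shows the skew-normal ingredient.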

  13. Protein multi-scale organization through graph partitioning and robustness analysis: application to the myosin–myosin light chain interaction

    International Nuclear Information System (INIS)

    Delmotte, A; Barahona, M; Tate, E W; Yaliraki, S N

    2011-01-01

    Despite the recognized importance of the multi-scale spatio-temporal organization of proteins, most computational tools can only access a limited spectrum of time and spatial scales, thereby ignoring the effects on protein behavior of the intricate coupling between the different scales. Starting from a physico-chemical atomistic network of interactions that encodes the structure of the protein, we introduce a methodology based on multi-scale graph partitioning that can uncover partitions and levels of organization of proteins that span the whole range of scales, revealing biological features occurring at different levels of organization and tracking their effect across scales. Additionally, we introduce a measure of robustness to quantify the relevance of the partitions through the generation of biochemically-motivated surrogate random graph models. We apply the method to four distinct conformations of myosin tail interacting protein, a protein from the molecular motor of the malaria parasite, and study properties that have been experimentally addressed, such as the closing mechanism, the presence of conserved clusters, and the identification, through computational mutational analysis, of key residues for binding.

  14. Harnessing Petaflop-Scale Multi-Core Supercomputing for Problems in Space Science

    Science.gov (United States)

    Albright, B. J.; Yin, L.; Bowers, K. J.; Daughton, W.; Bergen, B.; Kwan, T. J.

    2008-12-01

    The particle-in-cell kinetic plasma code VPIC has been migrated successfully to the world's fastest supercomputer, Roadrunner, a hybrid multi-core platform built by IBM for the Los Alamos National Laboratory. How this was achieved will be described, and examples of state-of-the-art calculations in space science, in particular the study of magnetic reconnection, will be presented. With VPIC on Roadrunner, we have performed, for the first time, plasma PIC calculations with over one trillion particles, >100× larger than calculations considered "heroic" by community standards. This allows examination of physics at unprecedented scale and fidelity. Roadrunner is an example of an emerging paradigm in supercomputing: the trend toward multi-core systems with deep hierarchies, where memory bandwidth optimization is vital to achieving high performance. Getting VPIC to perform well on such systems is a formidable challenge: the core algorithm is memory bandwidth limited with a low compute-to-data ratio and requires random access to memory in its inner loop. That we were able to get VPIC to perform and scale well, achieving >0.374 Pflop/s and linear weak scaling on real physics problems on up to the full 12240-core Roadrunner machine, bodes well for harnessing these machines for our community's needs in the future. Many of the design considerations encountered carry over to other multi-core and accelerated (e.g., GPU-based) platforms, and we modified VPIC with flexibility in mind. These will be summarized and strategies for how one might adapt a code for such platforms will be shared. Work performed under the auspices of the U.S. DOE by the LANS LLC Los Alamos National Laboratory. Dr. Bowers is a LANL Guest Scientist; he is presently at D. E. Shaw Research LLC, 120 W 45th Street, 39th Floor, New York, NY 10036.

  15. Autonomous management of a recursive area hierarchy for large scale wireless sensor networks using multiple parents

    Energy Technology Data Exchange (ETDEWEB)

    Cree, Johnathan Vee [Washington State Univ., Pullman, WA (United States); Delgado-Frias, Jose [Pacific Northwest National Lab. (PNNL), Richland, WA (United States)

    2016-03-01

    Large scale wireless sensor networks have been proposed for applications ranging from anomaly detection in an environment to vehicle tracking. Many of these applications require the networks to be distributed across a large geographic area while supporting three- to five-year network lifetimes. In order to support these requirements, large scale wireless sensor networks of duty-cycled devices need a method of efficient and effective autonomous configuration and maintenance. This method should gracefully handle the synchronization tasks of duty-cycled networks. Further, an effective configuration solution needs to recognize that in-network data aggregation and analysis present significant benefits to wireless sensor networks and should configure the network in such a way that these higher-level functions benefit from the logically imposed structure. NOA, the proposed configuration and maintenance protocol, provides a multi-parent hierarchical logical structure for the network that reduces the synchronization workload. It also provides higher-level functions with significant inherent benefits, such as, but not limited to: removing the network divisions that are created by single-parent hierarchies, guarantees for when data will be compared in the hierarchy, and redundancies for communication as well as for in-network data aggregation/analysis/storage.

  16. A Systematic Multi-Time Scale Solution for Regional Power Grid Operation

    Science.gov (United States)

    Zhu, W. J.; Liu, Z. G.; Cheng, T.; Hu, B. Q.; Liu, X. Z.; Zhou, Y. F.

    2017-10-01

    Many aspects need to be taken into consideration in a regional grid while making scheduling plans. In this paper, a systematic multi-time-scale solution for regional power grid operation considering large-scale renewable energy integration and Ultra High Voltage (UHV) power transmission is proposed. In the time-scale dimension, we treat the problem from monthly, weekly, day-ahead, and within-day to day-behind scheduling, and the system also covers multiple generator types, including thermal units, hydro plants, wind turbines, and pumped storage stations. The nine subsystems of the scheduling system are described, and their functions and relationships are elaborated. The proposed system has been constructed in a provincial power grid in Central China, and the operation results have further verified its effectiveness.

  17. Large-Scale Systems Control Design via LMI Optimization

    Czech Academy of Sciences Publication Activity Database

    Rehák, Branislav

    2015-01-01

    Roč. 44, č. 3 (2015), s. 247-253 ISSN 1392-124X Institutional support: RVO:67985556 Keywords : Combinatorial linear matrix inequalities * large-scale system * decentralized control Subject RIV: BC - Control Systems Theory Impact factor: 0.633, year: 2015

  18. Effects of unstratified and centre-stratified randomization in multi-centre clinical trials.

    Science.gov (United States)

    Anisimov, Vladimir V

    2011-01-01

    This paper deals with the analysis of randomization effects in multi-centre clinical trials. The two randomization schemes most often used in clinical trials are considered: unstratified and centre-stratified block-permuted randomization. The prediction of the number of patients randomized to different treatment arms in different regions during the recruitment period accounting for the stochastic nature of the recruitment and effects of multiple centres is investigated. A new analytic approach using a Poisson-gamma patient recruitment model (patients arrive at different centres according to Poisson processes with rates sampled from a gamma distributed population) and its further extensions is proposed. Closed-form expressions for corresponding distributions of the predicted number of the patients randomized in different regions are derived. In the case of two treatments, the properties of the total imbalance in the number of patients on treatment arms caused by using centre-stratified randomization are investigated and for a large number of centres a normal approximation of imbalance is proved. The impact of imbalance on the power of the study is considered. It is shown that the loss of statistical power is practically negligible and can be compensated by a minor increase in sample size. The influence of patient dropout is also investigated. The impact of randomization on predicted drug supply overage is discussed. Copyright © 2010 John Wiley & Sons, Ltd.
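    A quick Monte Carlo sketch of the setting (illustrative only; the paper derives closed-form distributions): centre recruitment rates are gamma distributed, patient counts are Poisson, and under centre-stratified block-permuted randomization the treatment imbalance comes only from each centre's last, possibly incomplete, block.

```python
import numpy as np

def total_imbalance(n_centres, shape, scale, T, block=4, rng=None):
    """One Monte Carlo draw of the total imbalance (arm A minus arm B)
    under centre-stratified block randomization with Poisson-gamma
    recruitment: rates ~ Gamma(shape, scale), counts ~ Poisson(rate * T).
    Complete blocks are perfectly balanced, so only the permuted prefix
    of each centre's final block contributes."""
    if rng is None:
        rng = np.random.default_rng()
    rates = rng.gamma(shape, scale, n_centres)
    counts = rng.poisson(rates * T)
    imb = 0
    for c in counts:
        r = c % block                  # size of the incomplete block
        head = rng.permutation([1] * (block // 2) + [-1] * (block // 2))[:r]
        imb += int(head.sum())
    return imb
```

    With block size 4 each centre contributes at most ±2, so the total imbalance grows only like the square root of the number of centres, consistent with the paper's conclusion that the resulting power loss is practically negligible.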

  19. Sensitivity of the Modified Children's Yale-Brown Obsessive Compulsive Scale to Detect Change: Results from Two Multi-Site Trials

    Science.gov (United States)

    Scahill, Lawrence; Sukhodolsky, Denis G.; Anderberg, Emily; Dimitropoulos, Anastasia; Dziura, James; Aman, Michael G.; McCracken, James; Tierney, Elaine; Hallett, Victoria; Katz, Karol; Vitiello, Benedetto; McDougle, Christopher

    2016-01-01

    Repetitive behavior is a core feature of autism spectrum disorder. We used 8-week data from two federally funded, multi-site, randomized trials with risperidone conducted by the Research Units on Pediatric Psychopharmacology Autism Network to evaluate the sensitivity of the Children's Yale-Brown Obsessive Compulsive Scale modified for autism…

  20. Randomness in multi-step direct reactions

    International Nuclear Information System (INIS)

    Koning, A.J.; Akkermans, J.M.

    1991-01-01

    The authors propose a quantum-statistical framework that provides an integrated perspective on the differences and similarities between the many current models for multi-step direct reactions in the continuum. It is argued that to obtain a statistical theory, two physically different approaches to postulating randomness are conceivable, called leading-particle statistics and residual-system statistics, respectively. They present a new leading-particle statistics theory for multi-step direct reactions. It is shown that the model of Feshbach et al. can be derived as a simplification of this theory and thus can be founded solely upon leading-particle statistics. The models developed by Tamura et al. and Nishioka et al. are based upon residual-system statistics and hence fall into a physically different class of multi-step direct theories, although the resulting cross-section formulae for the important first step are shown to be the same. The widely used semi-classical models, such as the generalized exciton model, can be interpreted as further phenomenological simplifications of the leading-particle statistics theory.

  1. Multi-scaling of the dense plasma focus

    Science.gov (United States)

    Saw, S. H.; Lee, S.

    2015-03-01

    The dense plasma focus is a copious source of multi-radiations with many potential new applications of special interest such as in advanced SXR lithography, materials synthesizing and testing, medical isotopes and imaging. This paper reviews the series of numerical experiments conducted using the Lee model code to obtain the scaling laws of the multi-radiations.

  2. Large-Scale Multiantenna Multisine Wireless Power Transfer

    Science.gov (United States)

    Huang, Yang; Clerckx, Bruno

    2017-11-01

    Wireless Power Transfer (WPT) is expected to be a technology reshaping the landscape of low-power applications such as the Internet of Things, Radio Frequency Identification (RFID) networks, etc. Although there has been some progress towards multi-antenna multi-sine WPT design, the large-scale design of WPT, reminiscent of massive MIMO in communications, remains an open challenge. In this paper, we derive efficient multiuser algorithms based on a generalizable optimization framework, in order to design transmit sinewaves that maximize the weighted-sum/minimum rectenna output DC voltage. The study highlights the significant effect of the nonlinearity introduced by the rectification process on the design of waveforms in multiuser systems. Interestingly, in the single-user case, the optimal spatial domain beamforming, obtained prior to the frequency domain power allocation optimization, turns out to be Maximum Ratio Transmission (MRT). In contrast, in the general weighted-sum criterion maximization problem, the spatial domain beamforming optimization and the frequency domain power allocation optimization are coupled. Assuming channel hardening, low-complexity algorithms are proposed based on asymptotic analysis, to maximize the two criteria. The structure of the asymptotically optimal spatial domain precoder can be found prior to the optimization. The performance of the proposed algorithms is evaluated. Numerical results confirm the inefficiency of linear model-based design for the single- and multi-user scenarios. It is also shown that, as nonlinear model-based designs, the proposed algorithms can benefit from an increasing number of sinewaves.
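    The single-user MRT result is easy to verify numerically (a generic sketch, not the paper's multiuser algorithm): conjugate-matched unit-norm weights maximize the received amplitude, which by Cauchy-Schwarz equals the channel norm.

```python
import numpy as np

def mrt(h):
    """Maximum Ratio Transmission: unit-norm weights matched
    (conjugated) to the channel vector h."""
    return h.conj() / np.linalg.norm(h)

rng = np.random.default_rng(1)
h = (rng.standard_normal(8) + 1j * rng.standard_normal(8)) / np.sqrt(2)
w = mrt(h)
# received amplitude |h^T w| equals ||h||; by Cauchy-Schwarz no other
# unit-norm weight vector can do better
```

    In the multi-sine setting this spatial beamformer is applied per frequency, with the (coupled) power allocation across sinewaves handled separately, as the abstract notes.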

  3. A scale-entropy diffusion equation to describe the multi-scale features of turbulent flames near a wall

    Science.gov (United States)

    Queiros-Conde, D.; Foucher, F.; Mounaïm-Rousselle, C.; Kassem, H.; Feidt, M.

    2008-12-01

    Multi-scale features of turbulent flames near a wall display two kinds of scale-dependent fractal features. In scale-space, a unique fractal dimension cannot be defined, and the fractal dimension of the front is scale-dependent. Moreover, when the front approaches the wall, this dependency changes: the fractal dimension also depends on the wall-distance. Our aim here is to propose a general geometrical framework that provides the possibility to integrate these two cases, in order to describe the multi-scale structure of turbulent flames interacting with a wall. Based on the scale-entropy quantity, which is simply linked to the roughness of the front, we thus introduce a general scale-entropy diffusion equation. We define the notion of “scale-evolutivity”, which characterises the deviation of a multi-scale system from pure fractal behaviour. The specific case of a constant “scale-evolutivity” over the scale range is studied. In this case, called “parabolic scaling”, the fractal dimension is a linear function of the logarithm of scale. The case of a constant scale-evolutivity in wall-distance space implies that the fractal dimension depends linearly on the logarithm of the wall-distance. We then verified experimentally that parabolic scaling represents a good approximation of the real multi-scale features of turbulent flames near a wall.

  4. Design study on sodium cooled large-scale reactor

    International Nuclear Information System (INIS)

    Murakami, Tsutomu; Hishida, Masahiko; Kisohara, Naoyuki

    2004-07-01

    In Phase 1 of the 'Feasibility Studies on Commercialized Fast Reactor Cycle Systems (F/S)', an advanced loop-type reactor was selected as a promising concept for a sodium-cooled large-scale reactor with the potential to fulfill the design requirements of the F/S. In Phase 2, design improvements for further cost reduction and establishment of the plant concept have been performed. This report summarizes the results of the design study on the sodium-cooled large-scale reactor performed in JFY2003, the third year of Phase 2. In the JFY2003 design study, critical subjects related to safety, structural integrity, and thermal hydraulics identified in the previous fiscal year were examined and the plant concept was modified. Furthermore, fundamental specifications of the main systems and components were set and the economics were evaluated. In addition, for the interim evaluation of the candidate concepts of the FBR fuel cycle, cost effectiveness and achievability of the development goal were evaluated, and data for the three large-scale reactor candidate concepts were prepared. As a result of this study, a plant concept of the sodium-cooled large-scale reactor has been constructed that has a prospect of satisfying the economic goal (construction cost: less than 200,000 yen/kWe, etc.) and of resolving the critical subjects. From now on, reflecting the results of elemental experiments, the preliminary conceptual design of this plant will proceed toward the selection and narrowing down of candidate concepts at the end of Phase 2. (author)

  5. Multi Scale Finite Element Analyses By Using SEM-EBSD Crystallographic Modeling and Parallel Computing

    International Nuclear Information System (INIS)

    Nakamachi, Eiji

    2005-01-01

    A crystallographic homogenization procedure is introduced into the conventional static-explicit and dynamic-explicit finite element formulations to develop a multi-scale (double-scale) analysis code to predict the plastic-strain-induced texture evolution, yield loci, and formability of sheet metal. The double-scale structure consists of a crystal aggregation (the micro-structure) and a macroscopic elastic-plastic continuum. First, we measure crystal morphologies using SEM-EBSD apparatus and define a unit cell of the micro-structure that satisfies the periodicity condition at the real scale of the polycrystal. Next, this crystallographic homogenization FE code is applied to 3N pure-iron and 'benchmark' aluminum A6022 polycrystal sheets. It reveals that the initial crystal orientation distribution (the texture) strongly affects the plastic-strain-induced texture, the anisotropic hardening evolution, and the sheet deformation. Since multi-scale finite element analysis requires a large computation time, a parallel computing technique using a PC cluster is developed for quick calculation. In this parallelization scheme, a dynamic workload balancing technique is introduced for quick and efficient calculations.

  6. PERSEUS-HUB: Interactive and Collective Exploration of Large-Scale Graphs

    Directory of Open Access Journals (Sweden)

    Di Jin

    2017-07-01

    Full Text Available Graphs emerge naturally in many domains, such as social science, neuroscience, transportation engineering, and more. In many cases, such graphs have millions or billions of nodes and edges, and their sizes increase daily at a fast pace. How can researchers from various domains explore large graphs interactively and efficiently to find out what is 'important'? How can multiple researchers explore a new graph dataset collectively and "help" each other with their findings? In this article, we present Perseus-Hub, a large-scale graph mining tool that computes a set of graph properties in a distributed manner, performs ensemble, multi-view anomaly detection to highlight regions that are worth investigating, and provides users with uncluttered visualization and easy interaction with complex graph statistics. Perseus-Hub uses a Spark cluster to calculate various statistics of large-scale graphs efficiently, and aggregates the results in a summary on the master node to support interactive user exploration. In Perseus-Hub, the visualized distributions of graph statistics provide preliminary analysis to understand a graph. To perform a deeper analysis, users with little prior knowledge can leverage patterns (e.g., spikes in the power-law degree distribution) marked by other users or experts. Moreover, Perseus-Hub guides users to regions of interest by highlighting anomalous nodes and helps users establish a more comprehensive understanding about the graph at hand. We demonstrate our system through a case study on real, large-scale networks.

  7. Measuring Cosmic Expansion and Large Scale Structure with Destiny

    Science.gov (United States)

    Benford, Dominic J.; Lauer, Tod R.

    2007-01-01

    Destiny is a simple, direct, low-cost mission to determine the properties of dark energy by obtaining a cosmologically deep supernova (SN) type Ia Hubble diagram and by measuring the large-scale mass power spectrum over time. Its science instrument is a 1.65m space telescope, featuring a near-infrared survey camera/spectrometer with a large field of view. During its first two years, Destiny will detect, observe, and characterize ~3000 SN Ia events over the redshift interval 0.4 < z < 1.7. Destiny will be used in its third year as a high-resolution, wide-field imager to conduct a weak lensing survey covering >1000 square degrees to measure the large-scale mass power spectrum. The combination of surveys is much more powerful than either technique on its own, and will have over an order of magnitude greater sensitivity than will be provided by ongoing ground-based projects.

  8. Large-scale modeling of rain fields from a rain cell deterministic model

    Science.gov (United States)

    Féral, Laurent; Sauvageot, Henri; Castanet, Laurent; Lemorton, Joël; Cornet, Frédéric; Leconte, Katia

    2006-04-01

    A methodology to simulate two-dimensional rain rate fields at large scale (1000 × 1000 km2, the scale of a satellite telecommunication beam or a terrestrial fixed broadband wireless access network) is proposed. It relies on a rain rate field cellular decomposition. At small scale (˜20 × 20 km2), the rain field is split up into its macroscopic components, the rain cells, described by the Hybrid Cell (HYCELL) cellular model. At midscale (˜150 × 150 km2), the rain field results from the conglomeration of rain cells modeled by HYCELL. To account for the rain cell spatial distribution at midscale, the latter is modeled by a doubly aggregative isotropic random walk, the optimal parameterization of which is derived from radar observations at midscale. The extension of the simulation area from the midscale to the large scale (1000 × 1000 km2) requires the modeling of the weather frontal area. The latter is first modeled by a Gaussian field with anisotropic covariance function. The Gaussian field is then turned into a binary field, giving the large-scale locations over which it is raining. This transformation requires the definition of the rain occupation rate over large-scale areas. Its probability distribution is determined from observations by the French operational radar network ARAMIS. The coupling with the rain field modeling at midscale is immediate whenever the large-scale field is split up into midscale subareas. The rain field thus generated accounts for the local CDF at each point, defining a structure spatially correlated at small scale, midscale, and large scale. It is then suggested that this approach be used by system designers to evaluate diversity gain, terrestrial path attenuation, or slant path attenuation for different azimuth and elevation angle directions.
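    The midscale step can be caricatured in a few lines (purely illustrative; the doubly aggregative walk parameterized from radar data and the HYCELL profiles are replaced by Gaussian stand-ins): cell centres follow an isotropic random walk, and Gaussian "cells" are superposed on a grid to form a rain-rate field.

```python
import numpy as np

def place_cells(n_cells, step, rng):
    """Cell centres from an isotropic Gaussian random walk (a crude
    stand-in for the doubly aggregative isotropic random walk)."""
    return np.cumsum(rng.normal(0.0, step, size=(n_cells, 2)), axis=0)

def rain_field(centres, peak, radius, grid):
    """Superpose Gaussian rain cells (stand-ins for HYCELL profiles)
    on a regular grid; returns a 2-D rain-rate field."""
    xx, yy = np.meshgrid(grid, grid)
    R = np.zeros_like(xx)
    for cx, cy in centres:
        R += peak * np.exp(-((xx - cx) ** 2 + (yy - cy) ** 2)
                           / (2 * radius ** 2))
    return R
```

    In the methodology above, such midscale subareas would then be tiled into the large-scale domain according to the binary rain/no-rain field derived from the correlated Gaussian field.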

  9. On the relationship between large-scale climate modes and regional synoptic patterns that drive Victorian rainfall

    Science.gov (United States)

    Verdon-Kidd, D. C.; Kiem, A. S.

    2009-04-01

    In this paper regional (synoptic) and large-scale climate drivers of rainfall are investigated for Victoria, Australia. A non-linear classification methodology known as self-organizing maps (SOM) is used to identify 20 key regional synoptic patterns, which are shown to capture a range of significant synoptic features known to influence the climate of the region. Rainfall distributions are assigned to each of the 20 patterns for nine rainfall stations located across Victoria, resulting in a clear distinction between wet and dry synoptic types at each station. The influence of large-scale climate modes on the frequency and timing of the regional synoptic patterns is also investigated. This analysis revealed that phase changes in the El Niño Southern Oscillation (ENSO), the Indian Ocean Dipole (IOD) and/or the Southern Annular Mode (SAM) are associated with a shift in the relative frequency of wet and dry synoptic types on an annual to inter-annual timescale. In addition, the relative frequency of synoptic types is shown to vary on a multi-decadal timescale, associated with changes in the Inter-decadal Pacific Oscillation (IPO). Importantly, these results highlight the potential to utilise the link between the regional synoptic patterns derived in this study and large-scale climate modes to improve rainfall forecasting for Victoria, both in the short- (i.e. seasonal) and long-term (i.e. decadal/multi-decadal scale). In addition, the regional and large-scale climate drivers identified in this study provide a benchmark by which the performance of Global Climate Models (GCMs) may be assessed.
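The SOM classification step can be illustrated with a small self-contained sketch. The code below is a toy stand-in, not the authors' implementation: it trains a 4 × 5 map (the 20 nodes mentioned above) on rows that would be flattened synoptic fields, then assigns each sample to its best-matching node, i.e. its synoptic type.

```python
import numpy as np

def train_som(data, grid=(4, 5), iters=2000, lr0=0.5, sigma0=2.0, seed=0):
    """Minimal self-organizing map; data rows are flattened synoptic fields.

    Toy stand-in for the paper's SOM step (grid=(4, 5) gives 20 nodes).
    """
    rng = np.random.default_rng(seed)
    rows, cols = grid
    n_nodes, dim = rows * cols, data.shape[1]
    weights = rng.normal(size=(n_nodes, dim))
    # Fixed 2-D coordinates of each node on the map lattice.
    coords = np.array([(r, c) for r in range(rows) for c in range(cols)], float)
    for t in range(iters):
        x = data[rng.integers(len(data))]
        bmu = np.argmin(((weights - x) ** 2).sum(axis=1))  # best-matching unit
        frac = t / iters
        lr, sigma = lr0 * (1 - frac), sigma0 * (1 - frac) + 0.5
        # Gaussian neighborhood on the lattice pulls nearby nodes too.
        d2 = ((coords - coords[bmu]) ** 2).sum(axis=1)
        h = np.exp(-d2 / (2 * sigma ** 2))[:, None]
        weights += lr * h * (x - weights)
    return weights

def classify(data, weights):
    """Assign each sample to its best-matching SOM node (synoptic type)."""
    d = ((data[:, None, :] - weights[None, :, :]) ** 2).sum(axis=2)
    return d.argmin(axis=1)
```

Once days are labeled by node, per-node rainfall distributions (the "wet" and "dry" types above) follow by grouping station rainfall on those labels.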

  10. Political consultation and large-scale research

    International Nuclear Information System (INIS)

    Bechmann, G.; Folkers, H.

    1977-01-01

Large-scale research and policy consulting occupy an intermediary position between sociological sub-systems. While large-scale research coordinates science, policy, and production, policy consulting coordinates science, policy, and the political sphere. In this position, large-scale research and policy consulting lack the institutional guarantees and rational background that are characteristic of their sociological environment. Large-scale research can neither deal with the production of innovative goods under considerations of profitability, nor can it hope for full recognition by the basis-oriented scientific community. Policy consulting possesses neither the political system's competence to make decisions, nor can it, at least in the present situation, be judged successfully by the critical standards of the established social sciences. This intermediary position of large-scale research and policy consulting supports, in three respects, the thesis that they represent a new form of institutionalization of science: 1) external control, 2) the form of organization, and 3) the theoretical conception of large-scale research and policy consulting. (orig.) [de

  11. Financing a large-scale picture archival and communication system.

    Science.gov (United States)

    Goldszal, Alberto F; Bleshman, Michael H; Bryan, R Nick

    2004-01-01

    An attempt to finance a large-scale multi-hospital picture archival and communication system (PACS) solely based on cost savings from current film operations is reported. A modified Request for Proposal described the technical requirements, PACS architecture, and performance targets. The Request for Proposal was complemented by a set of desired financial goals-the main one being the ability to use film savings to pay for the implementation and operation of the PACS. Financing of the enterprise-wide PACS was completed through an operating lease agreement including all PACS equipment, implementation, service, and support for an 8-year term, much like a complete outsourcing. Equipment refreshes, both hardware and software, are included. Our agreement also linked the management of the digital imaging operation (PACS) and the traditional film printing, shifting the operational risks of continued printing and costs related to implementation delays to the PACS vendor. An additional optimization step provided the elimination of the negative film budget variances in the beginning of the project when PACS costs tend to be higher than film and film-related expenses. An enterprise-wide PACS has been adopted to achieve clinical workflow improvements and cost savings. PACS financing was solely based on film savings, which included the entire digital solution (PACS) and any residual film printing. These goals were achieved with simultaneous elimination of any over-budget scenarios providing a non-negative cash flow in each year of an 8-year term.

  12. Screening wells by multi-scale grids for multi-stage Markov Chain Monte Carlo simulation

    DEFF Research Database (Denmark)

    Akbari, Hani; Engsig-Karup, Allan Peter

    2018-01-01

    /production wells, aiming at accurate breakthrough capturing as well as above mentioned efficiency goals. However this short time simulation needs fine-scale structure of the geological model around wells and running a fine-scale model is not as cheap as necessary for screening steps. On the other hand applying...... it on a coarse-scale model declines important data around wells and causes inaccurate results, particularly accurate breakthrough capturing which is important for prediction applications. Therefore we propose a multi-scale grid which preserves the fine-scale model around wells (as well as high permeable regions...... and fractures) and coarsens rest of the field and keeps efficiency and accuracy for the screening well stage and coarse-scale simulation, as well. A discrete wavelet transform is used as a powerful tool to generate the desired unstructured multi-scale grid efficiently. Finally an accepted proposal on coarse...
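The wavelet screening idea can be sketched with a single-level Haar transform. The toy example below is an assumption for illustration, not the paper's transform: it flags coarse cells of a 1-D permeability profile whose detail coefficient is large, i.e. where sharp contrasts (wells, fractures, high-permeability streaks) demand fine-scale resolution; contrasts falling across pair boundaries would be caught at the next transform level.

```python
import numpy as np

def refine_flags(perm, threshold=0.5):
    """Flag coarse cells whose Haar detail is large (keep the fine grid there).

    Toy sketch of the wavelet-based screening idea: one level of the Haar
    transform of a 1-D permeability profile. Cells with large detail
    coefficients (sharp contrasts) are flagged for fine-scale resolution;
    the rest can safely be coarsened.
    """
    a = perm[0::2]
    b = perm[1::2]
    approx = (a + b) / 2.0   # coarse-scale averages
    detail = (a - b) / 2.0   # fine-scale contrast within each pair
    return approx, np.abs(detail) > threshold

perm = np.array([1.0, 1.0, 1.0, 9.0, 9.0, 9.0, 1.0, 1.0])
approx, flags = refine_flags(perm)
print(flags.tolist())  # [False, True, False, False]
```

In a multi-dimensional field the same test, applied per level of the transform, yields an unstructured grid that is fine near flagged features and coarse elsewhere.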

  13. Sustainability Risk Evaluation for Large-Scale Hydropower Projects with Hybrid Uncertainty

    Directory of Open Access Journals (Sweden)

    Weiyao Tang

    2018-01-01

Full Text Available As large-scale hydropower projects are influenced by many factors, risk evaluations are complex. This paper considers a hydropower project as a complex system from the perspective of sustainability risk, and divides it into three subsystems: the natural environment subsystem, the eco-environment subsystem and the socioeconomic subsystem. Risk-related factors and quantitative dimensions of each subsystem are comprehensively analyzed, with the uncertainty of some quantitative dimensions handled by hybrid uncertainty methods, covering fuzzy uncertainty (e.g., the national health degree, the national happiness degree, the protection of cultural heritage), random uncertainty (e.g., underground water levels, river width), and fuzzy random uncertainty (e.g., runoff volumes, precipitation). By calculating the sustainability risk-related degree of each of the risk-related factors, a sustainability risk-evaluation model is built. Based on the calculation results, the critical sustainability risk-related factors are identified and targeted to reduce the losses caused by sustainability risk factors of the hydropower project. A case study at the under-construction Baihetan hydropower station is presented to demonstrate the viability of the risk-evaluation model and to provide a reference for the sustainable risk evaluation of other large-scale hydropower projects.

  14. Multi-Index Stochastic Collocation (MISC) for random elliptic PDEs

    KAUST Repository

    Haji Ali, Abdul Lateef; Nobile, Fabio; Tamellini, Lorenzo; Tempone, Raul

    2016-01-01

    In this work we introduce the Multi-Index Stochastic Collocation method (MISC) for computing statistics of the solution of a PDE with random data. MISC is a combination technique based on mixed differences of spatial approximations and quadratures over the space of random data. We propose an optimization procedure to select the most effective mixed differences to include in the MISC estimator: such optimization is a crucial step and allows us to build a method that, provided with sufficient solution regularity, is potentially more effective than other multi-level collocation methods already available in literature. We then provide a complexity analysis that assumes decay rates of product type for such mixed differences, showing that in the optimal case the convergence rate of MISC is only dictated by the convergence of the deterministic solver applied to a one dimensional problem. We show the effectiveness of MISC with some computational tests, comparing it with other related methods available in the literature, such as the Multi-Index and Multilevel Monte Carlo, Multilevel Stochastic Collocation, Quasi Optimal Stochastic Collocation and Sparse Composite Collocation methods.
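The mixed-difference machinery can be demonstrated on a toy problem. The sketch below is illustrative only (the real method targets random elliptic PDEs): it approximates E_y[∫₀¹ sin(πx)·e^y dx] with a trapezoid rule at spatial level l and a midpoint rule at stochastic level k, and verifies the telescoping identity that MISC then truncates — summing all first mixed differences over the full index set recovers the finest tensor approximation.

```python
import numpy as np

def Q(l, k):
    """Tensor approximation of E_y[ integral_0^1 sin(pi*x)*exp(y) dx ]:
    trapezoid rule with 2**l cells in x, midpoint rule with 2**k nodes in y
    (y uniform on [0, 1]); a toy stand-in for a PDE solve with random data."""
    x = np.linspace(0.0, 1.0, 2**l + 1)
    fx = np.sin(np.pi * x)
    h = 1.0 / 2**l
    integral_x = h * (0.5 * fx[0] + fx[1:-1].sum() + 0.5 * fx[-1])
    y = (np.arange(2**k) + 0.5) / 2**k
    return integral_x * np.mean(np.exp(y))

def mixed_diff(l, k):
    """First mixed difference (Delta_l x Delta_k) Q; levels below 0 are zero."""
    q = lambda a, b: Q(a, b) if (a >= 0 and b >= 0) else 0.0
    return q(l, k) - q(l - 1, k) - q(l, k - 1) + q(l - 1, k - 1)

# Summing the mixed differences over the full tensor index set telescopes
# back to the single finest approximation -- the identity behind MISC, which
# keeps only the most "profitable" indices instead of the full set.
L, K = 5, 5
combo = sum(mixed_diff(l, k) for l in range(L + 1) for k in range(K + 1))
print(abs(combo - Q(L, K)))  # ~0: telescoping identity
```

The optimization step in the paper amounts to ranking such mixed differences by estimated benefit versus cost and dropping the unprofitable ones.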

  15. Multi-Index Stochastic Collocation (MISC) for random elliptic PDEs

    KAUST Repository

    Haji Ali, Abdul Lateef

    2016-01-06

    In this work we introduce the Multi-Index Stochastic Collocation method (MISC) for computing statistics of the solution of a PDE with random data. MISC is a combination technique based on mixed differences of spatial approximations and quadratures over the space of random data. We propose an optimization procedure to select the most effective mixed differences to include in the MISC estimator: such optimization is a crucial step and allows us to build a method that, provided with sufficient solution regularity, is potentially more effective than other multi-level collocation methods already available in literature. We then provide a complexity analysis that assumes decay rates of product type for such mixed differences, showing that in the optimal case the convergence rate of MISC is only dictated by the convergence of the deterministic solver applied to a one dimensional problem. We show the effectiveness of MISC with some computational tests, comparing it with other related methods available in the literature, such as the Multi-Index and Multilevel Monte Carlo, Multilevel Stochastic Collocation, Quasi Optimal Stochastic Collocation and Sparse Composite Collocation methods.

  16. Internet-Assisted Parent Training Intervention for Disruptive Behavior in 4-Year-Old Children: A Randomized Clinical Trial.

    Science.gov (United States)

    Sourander, Andre; McGrath, Patrick J; Ristkari, Terja; Cunningham, Charles; Huttunen, Jukka; Lingley-Pottie, Patricia; Hinkka-Yli-Salomäki, Susanna; Kinnunen, Malin; Vuorio, Jenni; Sinokki, Atte; Fossum, Sturla; Unruh, Anita

    2016-04-01

There is a large gap worldwide in the provision of evidence-based early treatment of children with disruptive behavioral problems. The objective was to determine whether an Internet-assisted intervention using whole-population screening that targets the most symptomatic 4-year-old children is effective at 6 and 12 months after the start of treatment. This 2-parallel-group randomized clinical trial was performed from October 1, 2011, through November 30, 2013, at a primary health care clinic in Southwest Finland. Data analysis was performed from August 6, 2015, to December 11, 2015. Of a screened population of 4656 children, 730 met the screening criteria indicating a high level of disruptive behavioral problems. A total of 464 parents of 4-year-old children were randomized into the Strongest Families Smart Website (SFSW) intervention group (n = 232) or an education control (EC) group (n = 232). The SFSW intervention was an 11-session Internet-assisted parent training program that included weekly telephone coaching. Outcome measures were the Child Behavior Checklist version for preschool children (CBCL/1.5-5) externalizing scale (primary outcome), other CBCL/1.5-5 scales and subscores, the Parenting Scale, the Inventory of Callous-Unemotional Traits, and the 21-item Depression, Anxiety, and Stress Scale. All data were analyzed by intention to treat and per protocol. The assessments were made before randomization and 6 and 12 months after randomization. Of the children randomized, 287 (61.9%) were male and 79 (17.1%) did not live in a family with 2 biological parents. At 12-month follow-up, improvement in the SFSW intervention group was significantly greater compared with the control group on the following measures: CBCL/1.5-5 externalizing scale (effect size, 0.34; P anxiety (effect size, 0.26; P = .003), and emotional problems (effect size, 0.31; P = .001); Inventory of Callous-Unemotional Traits callousness scores (effect size, 0.19; P = .03); and self-reported parenting skills (effect size

  17. Large-scale multimedia modeling applications

    International Nuclear Information System (INIS)

    Droppo, J.G. Jr.; Buck, J.W.; Whelan, G.; Strenge, D.L.; Castleton, K.J.; Gelston, G.M.

    1995-08-01

    Over the past decade, the US Department of Energy (DOE) and other agencies have faced increasing scrutiny for a wide range of environmental issues related to past and current practices. A number of large-scale applications have been undertaken that required analysis of large numbers of potential environmental issues over a wide range of environmental conditions and contaminants. Several of these applications, referred to here as large-scale applications, have addressed long-term public health risks using a holistic approach for assessing impacts from potential waterborne and airborne transport pathways. Multimedia models such as the Multimedia Environmental Pollutant Assessment System (MEPAS) were designed for use in such applications. MEPAS integrates radioactive and hazardous contaminants impact computations for major exposure routes via air, surface water, ground water, and overland flow transport. A number of large-scale applications of MEPAS have been conducted to assess various endpoints for environmental and human health impacts. These applications are described in terms of lessons learned in the development of an effective approach for large-scale applications

  18. Enabling High Performance Large Scale Dense Problems through KBLAS

    KAUST Repository

    Abdelfattah, Ahmad

    2014-05-04

KBLAS (KAUST BLAS) is a small library that provides highly optimized BLAS routines on systems accelerated with GPUs. KBLAS is entirely written in CUDA C, and targets NVIDIA GPUs with compute capability 2.0 (Fermi) or higher. The current focus is on level-2 BLAS routines, namely the general matrix-vector multiplication (GEMV) kernel, and the symmetric/Hermitian matrix-vector multiplication (SYMV/HEMV) kernel. KBLAS provides these two kernels in all four precisions (s, d, c, and z), with support for multi-GPU systems. Through advanced optimization techniques that target latency hiding and pushing memory bandwidth to the limit, KBLAS outperforms state-of-the-art kernels by 20-90%. Competitors include CUBLAS-5.5, MAGMABLAS-1.4.0, and CULA-R17. The SYMV/HEMV kernel from KBLAS has been adopted by NVIDIA, and should appear in CUBLAS-6.0. KBLAS has been used in large scale simulations of multi-object adaptive optics.

  19. Comparative Analysis of Different Protocols to Manage Large Scale Networks

    OpenAIRE

    Anil Rao Pimplapure; Dr Jayant Dubey; Prashant Sen

    2013-01-01

In recent years, large-scale networks have increased in number, complexity, and size. The best example of a large-scale network is the Internet, and more recently data centers in cloud environments. Managing such networks involves several tasks, such as traffic monitoring, security, and performance optimization, which places a heavy burden on the network administrator. This research studies different protocols, i.e. conventional protocols like the Simple Network Management Protocol and the newer Gossip bas...

  20. NASA's Information Power Grid: Large Scale Distributed Computing and Data Management

    Science.gov (United States)

    Johnston, William E.; Vaziri, Arsi; Hinke, Tom; Tanner, Leigh Ann; Feiereisen, William J.; Thigpen, William; Tang, Harry (Technical Monitor)

    2001-01-01

Large-scale science and engineering are done through the interaction of people, heterogeneous computing resources, information systems, and instruments, all of which are geographically and organizationally dispersed. The overall motivation for Grids is to facilitate the routine interactions of these resources in order to support large-scale science and engineering. Multi-disciplinary simulations provide a good example of a class of applications that are very likely to require aggregation of widely distributed computing, data, and intellectual resources. Such simulations - e.g. whole system aircraft simulation and whole system living cell simulation - require integrating applications and data that are developed by different teams of researchers frequently in different locations. The research teams are the only ones that have the expertise to maintain and improve the simulation code and/or the body of experimental data that drives the simulations. This results in an inherently distributed computing and data management environment.

  1. Small-scale multi-axial hybrid simulation of a shear-critical reinforced concrete frame

    Science.gov (United States)

    Sadeghian, Vahid; Kwon, Oh-Sung; Vecchio, Frank

    2017-10-01

This study presents a numerical multi-scale simulation framework which is extended to accommodate hybrid simulation (numerical-experimental integration). The framework is enhanced with a standardized data exchange format and connected to a generalized controller interface program which facilitates communication with various types of laboratory equipment and testing configurations. A small-scale experimental program was conducted using six degree-of-freedom hydraulic testing equipment to verify the proposed framework and provide additional data for small-scale testing of shear-critical reinforced concrete structures. The specimens were tested in a multi-axial hybrid simulation manner under a reversed cyclic loading condition simulating earthquake forces. The physical models were 1/3.23-scale representations of a beam and two columns. A mixed-type modelling technique was employed to analyze the remainder of the structures. The hybrid simulation results were compared against those obtained from a large-scale test and finite element analyses. The study found that if precautions are taken in preparing model materials and if the shear-related mechanisms are accurately considered in the numerical model, small-scale hybrid simulations can adequately simulate the behaviour of shear-critical structures. Although the findings of the study are promising, to draw general conclusions, additional test data are required.

  2. A Web-based Multi-user Interactive Visualization System For Large-Scale Computing Using Google Web Toolkit Technology

    Science.gov (United States)

    Weiss, R. M.; McLane, J. C.; Yuen, D. A.; Wang, S.

    2009-12-01

We have created a web-based, interactive system for multi-user collaborative visualization of large data sets (on the order of terabytes) that allows users in geographically disparate locations to simultaneously and collectively visualize large data sets over the Internet. By leveraging asynchronous JavaScript and XML (AJAX) web development paradigms via the Google Web Toolkit (http://code.google.com/webtoolkit/), we are able to provide remote, web-based users a web portal to LCSE's (http://www.lcse.umn.edu) large-scale interactive visualization system already in place at the University of Minnesota that provides high resolution visualizations on the order of 15 million pixels. In the current version of our software, we have implemented a new, highly extensible back-end framework built around HTTP "server push" technology to provide a rich collaborative environment and a smooth end-user experience. Furthermore, the web application is accessible via a variety of devices including netbooks, iPhones, and other web- and javascript-enabled cell phones. New features in the current version include the ability for (1) users to launch multiple visualizations, (2) a user to invite one or more other users to view their visualization in real-time (multiple observers), (3) users to delegate control aspects of the visualization to others (multiple controllers), and (4) users to engage in collaborative chat and instant messaging within the user interface of the web application. We will explain choices made regarding implementation, overall system architecture and method of operation, and the benefits of an extensible, modular design. We will also discuss future goals, features, and our plans for increasing scalability of the system, which includes a discussion of the benefits potentially afforded us by a migration of server-side components to the Google Application Engine (http://code.google.com/appengine/).

  3. Large-Scale Multi-Dimensional Document Clustering on GPU Clusters

    Energy Technology Data Exchange (ETDEWEB)

    Cui, Xiaohui [ORNL; Mueller, Frank [North Carolina State University; Zhang, Yongpeng [ORNL; Potok, Thomas E [ORNL

    2010-01-01

Document clustering plays an important role in data mining systems. Recently, a flocking-based document clustering algorithm has been proposed to solve the problem through simulation resembling the flocking behavior of birds in nature. This method is superior to other clustering algorithms, including k-means, in the sense that the outcome is not sensitive to the initial state. One limitation of this approach is that the algorithmic complexity is inherently quadratic in the number of documents. As a result, execution time becomes a bottleneck with a large number of documents. In this paper, we assess the benefits of exploiting the computational power of Beowulf-like clusters equipped with contemporary Graphics Processing Units (GPUs) as a means to significantly reduce the runtime of flocking-based document clustering. Our framework scales up to over one million documents processed simultaneously in a sixteen-node GPU cluster. Results are also compared to a four-node cluster with higher-end GPUs. On these clusters, we observe 30X-50X speedups, which demonstrates the potential of GPU clusters to efficiently solve massive data mining problems. Such speedups combined with the scalability potential and accelerator-based parallelization are unique in the domain of document-based data mining, to the best of our knowledge.
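A single flocking update illustrates both the algorithm and its quadratic cost. The numpy sketch below is a simplified stand-in (coefficients, radius, and the 0.5 similarity cutoff are arbitrary assumptions): similar documents attract via cohesion and alignment, dissimilar neighbors repel, and the all-pairs distance matrix is exactly the O(n²) bottleneck the GPU cluster parallelizes.

```python
import numpy as np

def flock_step(pos, vel, sim, radius=1.5, dt=0.1):
    """One O(n^2) flocking update over document 'boids' on a 2-D plane.

    Simplified sketch of flocking-based clustering: documents with similar
    content attract (cohesion + alignment), dissimilar neighbors repel
    (separation). The all-pairs distance matrix is the quadratic bottleneck.
    """
    n = len(pos)
    diff = pos[:, None, :] - pos[None, :, :]
    dist = np.linalg.norm(diff, axis=2)
    near = (dist < radius) & ~np.eye(n, dtype=bool)
    like = near & (sim > 0.5)        # similar neighbors
    unlike = near & ~(sim > 0.5)     # dissimilar neighbors
    for i in range(n):
        if like[i].any():
            vel[i] += 0.05 * (pos[like[i]].mean(axis=0) - pos[i])    # cohesion
            vel[i] += 0.05 * (vel[like[i]].mean(axis=0) - vel[i])    # alignment
        if unlike[i].any():
            vel[i] += 0.05 * (pos[i] - pos[unlike[i]].mean(axis=0))  # separation
    return pos + dt * vel, vel
```

Iterating this step makes similar documents coalesce into spatial groups; the GPU implementation parallelizes the distance matrix and per-boid updates across nodes.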

  4. Efficient gate set tomography on a multi-qubit superconducting processor

    Science.gov (United States)

    Nielsen, Erik; Rudinger, Kenneth; Blume-Kohout, Robin; Bestwick, Andrew; Bloom, Benjamin; Block, Maxwell; Caldwell, Shane; Curtis, Michael; Hudson, Alex; Orgiazzi, Jean-Luc; Papageorge, Alexander; Polloreno, Anthony; Reagor, Matt; Rubin, Nicholas; Scheer, Michael; Selvanayagam, Michael; Sete, Eyob; Sinclair, Rodney; Smith, Robert; Vahidpour, Mehrnoosh; Villiers, Marius; Zeng, William; Rigetti, Chad

Quantum information processors with five or more qubits are becoming common. Complete, predictive characterization of such devices, e.g. via any form of tomography (including gate set tomography), appears impossible because the parameter space is intractably large. Randomized benchmarking scales well, but cannot predict device behavior or diagnose failure modes. We introduce a new type of gate set tomography that uses an efficient ansatz to model physically plausible errors, but scales polynomially with the number of qubits. We will describe the theory behind this multi-qubit tomography and present experimental results from using it to characterize a multi-qubit processor made by Rigetti Quantum Computing. Sandia National Laboratories is a multi-mission laboratory managed and operated by Sandia Corporation, a wholly owned subsidiary of Lockheed Martin Corporation, for the US Department of Energy's NNSA under contract DE-AC04-94AL85000.

  5. Multi scale analysis of ITER pre-compression rings

    Energy Technology Data Exchange (ETDEWEB)

    Park, Ben, E-mail: ben.park@sener.es [SENER Ingeniería y Sistemas S.A., Barcelona (Spain); Foussat, Arnaud [ITER Organization, St. Paul-Lez-Durance (France); Rajainmaki, Hannu [Fusion for Energy, Barcelona (Spain); Knaster, Juan [IFMIF, Aomori (Japan)

    2013-10-15

    Highlights: • A multi-scale analysis approach employing various scales of ABAQUS FEM models have been used to calculate the response and performance of the rings. • We have studied the effects of various defects on the performance of the rings under the operating temperatures and loading that will be applied to the PCRs. • The multi scale analysis results are presented here. -- Abstract: The Pre-compression Rings of ITER (PCRs) represent one of the largest and most highly stressed composite structures ever designed for long term operation at 4 K. Six rings, each 5 m in diameter and 337 mm × 288 mm in cross-section, will be manufactured from S2 fiber-glass/epoxy composite and installed three at the top and three at the bottom of the eighteen D shaped toroidal field (TF) coils to apply a total centripetal pre-load of 70 MN per TF coil. The composite rings will be fabricated with a high content (65% by volume) of S2 fiber-glass in an epoxy resin matrix. During the manufacture emphasis will be placed on obtaining a structure with a very low void content and minimal presence of critical defects, such as delaminations. This paper presents a unified framework for the multi-scale analysis of the composite structure of the PCRs. A multi-scale analysis approach employing various scales of ABAQUS FEM models and other analysis tools have been used to calculate the response and performance of the rings over the design life of the structure. We have studied the effects of various defects on the performance of the rings under the operating temperatures and loading that will be applied to the PCRs. The results are presented here.

  6. Topics in random walks in random environment

    International Nuclear Information System (INIS)

    Sznitman, A.-S.

    2004-01-01

Over the last twenty-five years random motions in random media have been intensively investigated and some new general methods and paradigms have by now emerged. Random walks in random environment constitute one of the canonical models of the field. However, in dimensions greater than one they are still poorly understood and many of the basic issues remain to this day unresolved. The present series of lectures attempts to give an account of the progress that has been made over the last few years, especially in the study of multi-dimensional random walks in random environment with ballistic behavior. (author)
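The canonical model is easy to simulate. The sketch below samples 1-D walks in a common random environment: each site receives a random right-jump probability, drawn once and then fixed, here uniform on an interval chosen (an assumption for illustration) to place the walk in the ballistic, transient-to-the-right regime the lectures focus on.

```python
import numpy as np

def rwre_paths(n_steps=1000, n_walks=500, seed=0):
    """Sample 1-D random walks in a single common random environment.

    Environment: each site x gets a random right-jump probability omega[x],
    drawn once and fixed; all walkers then move in that same environment.
    Drawing omega uniform on [0.55, 0.95] makes E[log(omega/(1-omega))] > 0,
    so the walk is transient to the right (the ballistic regime).
    """
    rng = np.random.default_rng(seed)
    span = 2 * n_steps + 1                       # all reachable sites
    omega = rng.uniform(0.55, 0.95, size=span)   # right-biased environment
    pos = np.zeros(n_walks, dtype=int)
    for _ in range(n_steps):
        p_right = omega[pos + n_steps]           # shift so indices are >= 0
        step = np.where(rng.random(n_walks) < p_right, 1, -1)
        pos += step
    return pos

final = rwre_paths()
print(final.mean())  # positive: drift to the right
```

Note the quenched structure: the randomness of the environment is sampled once, while the walkers share it; averaging over fresh environments instead would give the annealed law.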

  7. A new hybrid meta-heuristic algorithm for optimal design of large-scale dome structures

    Science.gov (United States)

    Kaveh, A.; Ilchi Ghazaan, M.

    2018-02-01

    In this article a hybrid algorithm based on a vibrating particles system (VPS) algorithm, multi-design variable configuration (Multi-DVC) cascade optimization, and an upper bound strategy (UBS) is presented for global optimization of large-scale dome truss structures. The new algorithm is called MDVC-UVPS in which the VPS algorithm acts as the main engine of the algorithm. The VPS algorithm is one of the most recent multi-agent meta-heuristic algorithms mimicking the mechanisms of damped free vibration of single degree of freedom systems. In order to handle a large number of variables, cascade sizing optimization utilizing a series of DVCs is used. Moreover, the UBS is utilized to reduce the computational time. Various dome truss examples are studied to demonstrate the effectiveness and robustness of the proposed method, as compared to some existing structural optimization techniques. The results indicate that the MDVC-UVPS technique is a powerful search and optimization method for optimizing structural engineering problems.
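The damped-vibration idea behind VPS can be caricatured in a few lines. The sketch below is a heavily simplified, hypothetical variant, not the MDVC-UVPS algorithm (which additionally uses cascade design-variable configurations and the upper bound strategy): particles are pulled toward the best solution found so far while a decaying damping factor shrinks their random vibration, mimicking damped free vibration about equilibrium.

```python
import numpy as np

def vps_minimize(f, dim=10, n_particles=20, iters=300, seed=0):
    """Very simplified vibrating-particles-style search (illustrative only).

    Each particle is pulled toward the best solution found so far, with a
    damping factor that shrinks the random vibration over time, mimicking
    damped free vibration of a single-degree-of-freedom system.
    """
    rng = np.random.default_rng(seed)
    pop = rng.uniform(-5, 5, size=(n_particles, dim))
    best = min(pop, key=f).copy()
    for t in range(1, iters + 1):
        damping = t ** -0.5                    # vibration amplitude decays
        for i in range(n_particles):
            pull = best - pop[i]
            noise = rng.normal(size=dim)
            pop[i] = pop[i] + 0.3 * pull + damping * noise
            if f(pop[i]) < f(best):
                best = pop[i].copy()
    return best, f(best)

best, val = vps_minimize(lambda x: float(np.sum(x * x)))
print(val)  # objective value at the best point found
```

In a truss-sizing setting, f would evaluate structural weight plus constraint penalties for each candidate set of member cross-sections.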

  8. Decentralized Large-Scale Power Balancing

    DEFF Research Database (Denmark)

    Halvgaard, Rasmus; Jørgensen, John Bagterp; Poulsen, Niels Kjølstad

    2013-01-01

    problem is formulated as a centralized large-scale optimization problem but is then decomposed into smaller subproblems that are solved locally by each unit connected to an aggregator. For large-scale systems the method is faster than solving the full problem and can be distributed to include an arbitrary...

  9. Health status, resource consumption, and costs of dysthymia. A multi-center two-year longitudinal study.

    Science.gov (United States)

    Barbui, Corrado; Motterlini, Nicola; Garattini, Livio

    2006-02-01

In this study we estimated the health status, resource consumption and costs of a large cohort of patients with early- and late-onset dysthymia. The DYSCO (DYSthymia COsts) project is a multi-center observational study which prospectively followed, for two years, a randomly chosen sample of patients with dysthymia in the Italian primary health care system. A total of 501 patients were followed for two years; 81% had early-onset dysthymic disorder. During the study, improvement was seen in most domains of the 36-Item Short Form Health Survey (SF-36) questionnaire. Comparison of the SF-36 scores for the two groups showed that only the physical health index significantly differed during the two years. The use of outpatient consultations, laboratory tests and diagnostic procedures was similar in the two groups, but patients with early-onset dysthymia were admitted significantly more often than late-onset cases. Hospital admissions were almost entirely responsible for the higher total cost per patient per year of early-onset dysthymia. A first limitation of this study is that general practitioners were selected on the basis of their willingness to participate, not at random; secondly, no information was collected on concomitant psychiatric comorbidities. The present study provides the first prospective, long-term data on service use and costs in patients with dysthymia. Unlike patients with early-onset dysthymia, patients with late-onset dysthymia were admitted less often and cost less.

  10. Multi codes and multi-scale analysis for void fraction prediction in hot channel for VVER-1000/V392

    International Nuclear Information System (INIS)

    Hoang Minh Giang; Hoang Tan Hung; Nguyen Huu Tiep

    2015-01-01

Recently, an approach using multiple codes and multi-scale analysis has been widely applied to study core thermal-hydraulic behavior, such as void fraction prediction. Better results are achieved by using multiple or coupled codes such as PARCS and RELAP5. The advantage of multi-scale analysis is zooming in on the part of the simulated domain of interest for detailed investigation. Therefore, in this study, the multiple codes MCNP5, RELAP5 and CTF, as well as multi-scale analysis based on RELAP5 and CTF, are applied to investigate the void fraction in the hot channel of the VVER-1000/V392 reactor. Since the VVER-1000/V392 is a typical advanced reactor that can be considered the basis for the later development of the VVER-1200, understanding core behavior in transient conditions is necessary in order to investigate VVER technology. It is shown that the near-wall boiling term Γ_w in RELAP5, proposed by the Lahey mechanistic method, may not predict void fraction as accurately as a smaller-scale code such as CTF. (author)

  11. An augmented Lagrangian multi-scale dictionary learning algorithm

    Directory of Open Access Journals (Sweden)

    Ye Meng

    2011-01-01

Full Text Available Learning overcomplete dictionaries for sparse signal representation has become a hot topic that has fascinated many researchers in recent years, but most of the existing approaches share a serious drawback: they often get trapped in local minima. In this article, we present a novel augmented Lagrangian multi-scale dictionary learning algorithm (ALM-DL), which first recasts the constrained dictionary learning problem into an AL scheme and then updates the dictionary after each inner iteration of the scheme, during which a majorization-minimization technique is employed for solving the inner subproblem. Refining the dictionary from low scale to high makes the proposed method less dependent on the initial dictionary, hence avoiding local optima. Numerical tests on synthetic data and denoising applications on real images demonstrate the superior performance of the proposed approach.
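For orientation, a generic alternating-minimization baseline is sketched below (ISTA sparse coding plus a MOD-style least-squares dictionary update). This is not ALM-DL itself, which instead recasts the problem in an augmented-Lagrangian scheme solved by majorization-minimization and refines the dictionary from low scale to high; all parameters here are arbitrary assumptions.

```python
import numpy as np

def soft(x, t):
    """Soft-thresholding, the proximal operator of the l1 penalty."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def dict_learn(Y, n_atoms=16, iters=30, lam=0.1, seed=0):
    """Generic alternating dictionary learning: ISTA coding + MOD update.

    Baseline sketch only; the article's ALM-DL replaces this with an
    augmented-Lagrangian scheme and multi-scale dictionary refinement.
    """
    rng = np.random.default_rng(seed)
    D = rng.normal(size=(Y.shape[0], n_atoms))
    D /= np.linalg.norm(D, axis=0)
    X = np.zeros((n_atoms, Y.shape[1]))
    for _ in range(iters):
        # Sparse coding: a few ISTA steps on 0.5||Y - DX||^2 + lam*||X||_1.
        L = np.linalg.norm(D, 2) ** 2            # Lipschitz constant of grad
        for _ in range(10):
            X = soft(X - (D.T @ (D @ X - Y)) / L, lam / L)
        # Dictionary update (MOD): least squares, then renormalize atoms,
        # rescaling X so the product D @ X is unchanged.
        D = Y @ np.linalg.pinv(X)
        norms = np.linalg.norm(D, axis=0) + 1e-12
        D /= norms
        X *= norms[:, None]
    return D, X
```

ALM-DL's multi-scale refinement would wrap this inner loop, solving first on a coarse version of the data and using the result to initialize finer scales.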

  12. A Multi-Component Day-Camp Weight-Loss Program Is Effective in Reducing BMI in Children after One Year

    DEFF Research Database (Denmark)

    Larsen, Kristian Traberg; Huang, Tao; Ried-Larsen, Mathias

    2016-01-01

    The objective of the present study was to evaluate the effectiveness of a one-year multi-component immersive day-camp weight-loss intervention for children with overweight and obesity. The study design was a parallel-group randomized controlled trial. One hundred fifteen 11-13-year-old children...

  13. Automating large-scale reactor systems

    International Nuclear Information System (INIS)

    Kisner, R.A.

    1985-01-01

    This paper conveys a philosophy for developing automated large-scale control systems that behave in an integrated, intelligent, flexible manner. Methods for operating large-scale systems under varying degrees of equipment degradation are discussed, and a design approach that separates the effort into phases is suggested. 5 refs., 1 fig

  14. Multi-scale model of epidemic fade-out: Will local extirpation events inhibit the spread of white-nose syndrome?

    Science.gov (United States)

    O'Reagan, Suzanne M; Magori, Krisztian; Pulliam, J Tomlin; Zokan, Marcus A; Kaul, RajReni B; Barton, Heather D; Drake, John M

    2015-04-01

    White-nose syndrome (WNS) is an emerging infectious disease that has resulted in severe declines of its hibernating bat hosts in North America. The ongoing epidemic of white-nose syndrome is a multi-scale phenomenon because it causes hibernaculum-level extirpations while simultaneously spreading over larger spatial scales. We investigate a neglected topic in ecological epidemiology: how local pathogen-driven extirpations impact large-scale pathogen spread. Previous studies have identified risk factors for propagation of WNS at hibernaculum and landscape scales, but none of these have tested the hypothesis that separation of spatial scales and disease-induced mortality at the hibernaculum level might slow or halt its spread. To test this hypothesis, we developed a mechanistic multi-scale model, parameterized using white-nose syndrome county and site incidence data, that connects hibernaculum-level susceptible-infectious-removed (SIR) epidemiology to the county-scale contagion process. Our key result is that hibernaculum-level extirpations will not inhibit county-scale spread of WNS. We show that over 80% of counties of the contiguous USA are likely to become infected before the current epidemic is over and that the geometry of habitat connectivity is such that host refuges are exceedingly rare. The macroscale spatiotemporal infection pattern that emerges from local SIR epidemiological processes falls within a narrow spectrum of possible outcomes, suggesting that recolonization, rescue effects, and multi-host complexities at local scales are not important to forward propagation of WNS at large spatial scales. If effective control measures are not implemented, precipitous declines in bat populations are likely, particularly in the cave-dense regions that constitute the main geographic corridors of the USA, a serious concern for bat conservation.

  15. Multi-scale graph-cut algorithm for efficient water-fat separation.

    Science.gov (United States)

    Berglund, Johan; Skorpil, Mikael

    2017-09-01

    To improve the accuracy and robustness to noise in water-fat separation by unifying the multi-scale and graph-cut based approaches to B0 correction. A previously proposed water-fat separation algorithm that corrects for B0 field inhomogeneity in 3D by a single quadratic pseudo-Boolean optimization (QPBO) graph cut was incorporated into a multi-scale framework, where field map solutions are propagated from coarse to fine scales for voxels that are not resolved by the graph cut. The accuracy of the single-scale and multi-scale QPBO algorithms was evaluated against benchmark reference datasets. The robustness to noise was evaluated by adding noise to the input data prior to water-fat separation. Both algorithms achieved the highest accuracy when compared with seven previously published methods, while computation times were acceptable for implementation in clinical routine. The multi-scale algorithm was more robust to noise than the single-scale algorithm, while causing only a small increase (+10%) in reconstruction time. The proposed 3D multi-scale QPBO algorithm offers accurate water-fat separation, robustness to noise, and fast reconstruction. The software implementation is freely available to the research community. Magn Reson Med 78:941-949, 2017. © 2016 International Society for Magnetic Resonance in Medicine.

  16. Application of Large-Scale, Multi-Resolution Watershed Modeling Framework Using the Hydrologic and Water Quality System (HAWQS)

    Directory of Open Access Journals (Sweden)

    Haw Yen

    2016-04-01

    Full Text Available In recent years, large-scale watershed modeling has been implemented broadly in the field of water resources planning and management. Complex hydrological, sediment, and nutrient processes can be simulated by sophisticated watershed simulation models for important issues such as water resources allocation, sediment transport, and pollution control. Among commonly adopted models, the Soil and Water Assessment Tool (SWAT) has been demonstrated to provide superior performance with a large amount of reference databases. However, it is cumbersome to perform tedious initialization steps such as preparing inputs and developing a model each time the targeted study area changes. In this study, the Hydrologic and Water Quality System (HAWQS) is introduced to serve as a national-scale Decision Support System (DSS) for challenging watershed modeling tasks. HAWQS is a web-based DSS developed and maintained by Texas A&M University and supported by the U.S. Environmental Protection Agency. Three spatial resolutions of Hydrologic Unit Codes (HUC8, HUC10, and HUC12) and three temporal scales (daily, monthly, and annual time steps) are available as alternatives for general users. In addition, users can specify preferred values of model parameters instead of using the pre-defined sets. With the aid of HAWQS, users can generate a preliminarily calibrated SWAT project within a few minutes by providing only the ending HUC number of the targeted watershed and the simulation period. In the case study, HAWQS was implemented on the Illinois River Basin, USA, with graphical demonstrations and associated analytical results. Scientists and/or decision-makers can take advantage of the HAWQS framework when addressing relevant topics or policies in the future.

  17. Multi-GNSS PPP-RTK: From Large- to Small-Scale Networks

    NARCIS (Netherlands)

    Nadarajah, Nandakumaran; Khodabandeh, Amir; Wang, Kan; Choudhury, Mazher; Teunissen, P.J.G.

    2018-01-01

    Precise point positioning (PPP) and its integer ambiguity resolution-enabled variant, PPP-RTK (real-time kinematic), can benefit enormously from the integration of multiple global navigation satellite systems (GNSS). In such a multi-GNSS landscape, the positioning convergence time is expected to

  18. Large scale integration of photovoltaics in cities

    International Nuclear Information System (INIS)

    Strzalka, Aneta; Alam, Nazmul; Duminil, Eric; Coors, Volker; Eicker, Ursula

    2012-01-01

    Highlights: ► We implement the photovoltaics on a large scale. ► We use three-dimensional modelling for accurate photovoltaic simulations. ► We consider the shadowing effect in the photovoltaic simulation. ► We validate the simulated results using detailed hourly measured data. - Abstract: For a large scale implementation of photovoltaics (PV) in the urban environment, building integration is a major issue. This includes installations on roof or facade surfaces with orientations that are not ideal for maximum energy production. To evaluate the performance of PV systems in urban settings and compare it with the building user’s electricity consumption, three-dimensional geometry modelling was combined with photovoltaic system simulations. As an example, the modern residential district of Scharnhauser Park (SHP) near Stuttgart/Germany was used to calculate the potential of photovoltaic energy and to evaluate the local own consumption of the energy produced. For most buildings of the district only annual electrical consumption data was available and only selected buildings have electronic metering equipment. The available roof area for one of these multi-family case study buildings was used for a detailed hourly simulation of the PV power production, which was then compared to the hourly measured electricity consumption. The results were extrapolated to all buildings of the analyzed area by normalizing them to the annual consumption data. The PV systems can produce 35% of the quarter’s total electricity consumption and half of this generated electricity is directly used within the buildings.

  19. Hierarchical modeling and robust synthesis for the preliminary design of large scale complex systems

    Science.gov (United States)

    Koch, Patrick Nathan

    Large-scale complex systems are characterized by multiple interacting subsystems and the analysis of multiple disciplines. The design and development of such systems inevitably requires the resolution of multiple conflicting objectives. The size of complex systems, however, prohibits the development of comprehensive system models, and thus these systems must be partitioned into their constituent parts. Because simultaneous solution of individual subsystem models is often not manageable, iteration is inevitable and often excessive. In this dissertation these issues are addressed through the development of a method for hierarchical robust preliminary design exploration, which facilitates concurrent system and subsystem design exploration and the concurrent generation of robust system and subsystem specifications for the preliminary design of multi-level, multi-objective, large-scale complex systems. This method is developed through the integration and expansion of current design techniques: (1) hierarchical partitioning and modeling techniques for partitioning large-scale complex systems into more tractable parts and allowing integration of subproblems for system synthesis; (2) statistical experimentation and approximation techniques for increasing both the efficiency and the comprehensiveness of preliminary design exploration; and (3) noise modeling techniques for implementing robust preliminary design when approximate models are employed. The method and associated approaches are illustrated through their application to the preliminary design of a commercial turbofan turbine propulsion system; the turbofan system-level problem is partitioned into engine cycle and configuration design, and a compressor module is integrated for more detailed subsystem-level design exploration, improving system evaluation.

  20. A multi-scaled approach to evaluating the fish assemblage structure within southern Appalachian streams USA.

    Science.gov (United States)

    Kirsch, Joseph; Peterson, James T.

    2014-01-01

    There is considerable uncertainty about the relative roles of stream habitat and landscape characteristics in structuring stream-fish assemblages. We evaluated the relative importance of environmental characteristics on fish occupancy at the local and landscape scales within the upper Little Tennessee River basin of Georgia and North Carolina. Fishes were sampled using a quadrat sample design at 525 channel units within 48 study reaches during two consecutive years. We evaluated species–habitat relationships (local and landscape factors) by developing hierarchical, multispecies occupancy models. Modeling results suggested that fish occupancy within the Little Tennessee River basin was primarily influenced by stream topology and topography, urban land coverage, and channel unit types. Landscape scale factors (e.g., urban land coverage and elevation) largely controlled the fish assemblage structure at a stream-reach level, and local-scale factors (i.e., channel unit types) influenced fish distribution within stream reaches. Our study demonstrates the utility of a multi-scaled approach and the need to account for hierarchy and the interscale interactions of factors influencing assemblage structure prior to monitoring fish assemblages, developing biological management plans, or allocating management resources throughout a stream system.

  1. MUSIC: MUlti-Scale Initial Conditions

    Science.gov (United States)

    Hahn, Oliver; Abel, Tom

    2013-11-01

    MUSIC generates multi-scale initial conditions with multiple levels of refinement for cosmological ‘zoom-in’ simulations. The code uses an adaptive convolution of Gaussian white noise with a real-space transfer function kernel, together with an adaptive multi-grid Poisson solver, to generate displacements and velocities following first- (1LPT) or second-order Lagrangian perturbation theory (2LPT). MUSIC achieves rms relative errors of the order of 10^-4 for displacements and velocities in the refinement region and thus improves in terms of errors by about two orders of magnitude over previous approaches. In addition, errors are localized at coarse-fine boundaries and do not suffer from Fourier-space-induced interference ringing.
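    The core generation step, Gaussian white noise shaped by a transfer function in Fourier space and then converted to first-order (1LPT) displacements, can be illustrated in one dimension. This is a toy sketch, not MUSIC's adaptive 3D multi-grid implementation; the k^-2 toy power spectrum and box parameters are assumptions chosen for illustration.

```python
import numpy as np

rng = np.random.default_rng(42)
N = 256                                   # grid cells (1D toy, not a 3D AMR grid)
L = 100.0                                 # box size, arbitrary units
noise = rng.standard_normal(N)            # Gaussian white noise on the grid

# Toy transfer function: amplitude ~ k^-1 (power spectrum ~ k^-2), applied
# as a multiplication in Fourier space; the k = 0 (mean) mode is removed.
k = 2 * np.pi * np.fft.rfftfreq(N, d=L / N)
T = np.zeros_like(k)
T[1:] = k[1:] ** -1.0
delta = np.fft.irfft(T * np.fft.rfft(noise), n=N)   # density contrast field

# Zel'dovich (1LPT) displacement in 1D: psi(k) = i delta(k) / k
# (up to sign convention).
delta_k = np.fft.rfft(delta)
psi_k = np.zeros(k.size, dtype=complex)
psi_k[1:] = 1j * delta_k[1:] / k[1:]
psi = np.fft.irfft(psi_k, n=N)
print(delta.shape, psi.shape)
```

Zeroing the k = 0 mode keeps the density contrast mean-free, mirroring the fact that perturbations are defined relative to the background density.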

  2. 8th International Symposium on Intelligent Distributed Computing & Workshop on Cyber Security and Resilience of Large-Scale Systems & 6th International Workshop on Multi-Agent Systems Technology and Semantics

    CERN Document Server

    Braubach, Lars; Venticinque, Salvatore; Badica, Costin

    2015-01-01

    This book represents the combined peer-reviewed proceedings of the Eighth International Symposium on Intelligent Distributed Computing - IDC'2014, of the Workshop on Cyber Security and Resilience of Large-Scale Systems - WSRL-2014, and of the Sixth International Workshop on Multi-Agent Systems Technology and Semantics - MASTS-2014. All the events were held in Madrid, Spain, during September 3-5, 2014. The 47 contributions published in this book address several topics related to the theory and applications of intelligent distributed computing and multi-agent systems, including: agent-based data processing, ambient intelligence, collaborative systems, cryptography and security, distributed algorithms, grid and cloud computing, information extraction, knowledge management, big data and ontologies, social networks, swarm intelligence, and videogames, amongst others.

  3. State of the Art in Large-Scale Soil Moisture Monitoring

    Science.gov (United States)

    Ochsner, Tyson E.; Cosh, Michael Harold; Cuenca, Richard H.; Dorigo, Wouter; Draper, Clara S.; Hagimoto, Yutaka; Kerr, Yan H.; Larson, Kristine M.; Njoku, Eni Gerald; Small, Eric E.

    2013-01-01

    Soil moisture is an essential climate variable influencing land atmosphere interactions, an essential hydrologic variable impacting rainfall runoff processes, an essential ecological variable regulating net ecosystem exchange, and an essential agricultural variable constraining food security. Large-scale soil moisture monitoring has advanced in recent years creating opportunities to transform scientific understanding of soil moisture and related processes. These advances are being driven by researchers from a broad range of disciplines, but this complicates collaboration and communication. For some applications, the science required to utilize large-scale soil moisture data is poorly developed. In this review, we describe the state of the art in large-scale soil moisture monitoring and identify some critical needs for research to optimize the use of increasingly available soil moisture data. We review representative examples of 1) emerging in situ and proximal sensing techniques, 2) dedicated soil moisture remote sensing missions, 3) soil moisture monitoring networks, and 4) applications of large-scale soil moisture measurements. Significant near-term progress seems possible in the use of large-scale soil moisture data for drought monitoring. Assimilation of soil moisture data for meteorological or hydrologic forecasting also shows promise, but significant challenges related to model structures and model errors remain. Little progress has been made yet in the use of large-scale soil moisture observations within the context of ecological or agricultural modeling. Opportunities abound to advance the science and practice of large-scale soil moisture monitoring for the sake of improved Earth system monitoring, modeling, and forecasting.

  4. Web tools for large-scale 3D biological images and atlases

    Directory of Open Access Journals (Sweden)

    Husz Zsolt L

    2012-06-01

    Full Text Available Abstract Background Large-scale volumetric biomedical image data of three or more dimensions are a significant challenge for distributed browsing and visualisation. Many images now exceed 10GB, which for most users is too large to handle in terms of computer RAM and network bandwidth. This is aggravated when users need to access tens or hundreds of such images from an archive. Here we solve the problem for 2D section views through archive data, delivering compressed tiled images that enable users to browse through very large volume data in the context of a standard web-browser. The system provides an interactive visualisation for grey-level and colour 3D images, including multiple image layers and spatial-data overlay. Results The standard Internet Imaging Protocol (IIP) has been extended to enable arbitrary 2D sectioning of 3D data as well as multi-layered images and indexed overlays. The extended protocol is termed IIP3D, and we have implemented a matching server to deliver the protocol and a series of Ajax/Javascript client codes that will run in an Internet browser. We have tested the server software on a low-cost linux-based server for image volumes up to 135GB and 64 simultaneous users. The section views are delivered with response times independent of scale and orientation. The exemplar client provided multi-layer image views with user-controlled colour-filtering and overlays. Conclusions Interactive browsing of arbitrary sections through large biomedical-image volumes is made possible by use of an extended internet protocol and efficient server-based image tiling. The tools open the possibility of enabling fast access to large image archives without requiring whole-image download or client computers with very large memory configurations. The system was demonstrated using a range of medical and biomedical image data extending up to 135GB for a single image volume.

  5. Extraction of multi-scale landslide morphological features based on local Gi* using airborne LiDAR-derived DEM

    Science.gov (United States)

    Shi, Wenzhong; Deng, Susu; Xu, Wenbing

    2018-02-01

    For automatic landslide detection, landslide morphological features should be quantitatively expressed and extracted. High-resolution Digital Elevation Models (DEMs) derived from airborne Light Detection and Ranging (LiDAR) data allow fine-scale morphological features to be extracted, but noise in DEMs influences morphological feature extraction, and the multi-scale nature of landslide features should be considered. This paper proposes a method to extract landslide morphological features characterized by homogeneous spatial patterns. Both profile and tangential curvature are utilized to quantify land surface morphology, and a local Gi* statistic is calculated for each cell to identify significant patterns of clustering of similar morphometric values. The method was tested on both synthetic surfaces simulating natural terrain and airborne LiDAR data acquired over an area dominated by shallow debris slides and flows. The test results of the synthetic data indicate that the concave and convex morphologies of the simulated terrain features at different scales and distinctness could be recognized using the proposed method, even when random noise was added to the synthetic data. In the test area, cells with large local Gi* values were extracted at a specified significance level from the profile and the tangential curvature image generated from the LiDAR-derived 1-m DEM. The morphologies of landslide main scarps, source areas and trails were clearly indicated, and the morphological features were represented by clusters of extracted cells. A comparison with the morphological feature extraction method based on curvature thresholds proved the proposed method's robustness to DEM noise. When verified against a landslide inventory, the morphological features of almost all recent and historical (> 10 years) landslides were extracted. This finding indicates that the proposed method can facilitate landslide detection, although the cell clusters extracted from curvature images should …
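    The clustering step described above, a local Gi* statistic that flags cells whose neighborhood values cluster high or low, can be sketched on a synthetic grid. This is a minimal NumPy stand-in assuming a square binary neighborhood and zero padding at the grid borders, not the authors' implementation.

```python
import numpy as np

def local_gi_star(z, radius=1):
    """Local Getis-Ord Gi* with binary weights over a (2r+1)x(2r+1) square
    neighborhood. Border cells use zero padding (a simplification)."""
    n = z.size
    xbar, s = z.mean(), z.std(ddof=1)
    k = 2 * radius + 1
    padded = np.pad(z, radius, mode="constant")
    wsum = np.zeros_like(z, dtype=float)           # sum over each neighborhood
    for di in range(k):
        for dj in range(k):
            wsum += padded[di:di + z.shape[0], dj:dj + z.shape[1]]
    w = float(k * k)                               # W = S1 = k^2 for binary weights
    denom = s * np.sqrt((n * w - w ** 2) / (n - 1))
    return (wsum - xbar * w) / denom

rng = np.random.default_rng(3)
field = rng.standard_normal((100, 100))            # noise, e.g. a curvature image
field[40:45, 40:45] += 5.0                         # plant a cluster of high values
gi = local_gi_star(field)
print(gi[42, 42] > 2.0)
```

Cells inside the planted cluster receive large positive Gi* scores, while isolated noisy cells do not, which is why the statistic suppresses DEM noise better than a simple curvature threshold.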

  6. Multi-scale path planning for reduced environmental impact of aviation

    Science.gov (United States)

    Campbell, Scot Edward

    A future air traffic management system capable of rerouting aircraft trajectories in real-time in response to transient and evolving events would result in increased aircraft efficiency, better utilization of the airspace, and decreased environmental impact. Mixed-integer linear programming (MILP) is used within a receding horizon framework to form aircraft trajectories which mitigate persistent contrail formation, avoid areas of convective weather, and seek a minimum fuel solution. Areas conducive to persistent contrail formation and areas of convective weather occur at disparate temporal and spatial scales, and thereby require the receding horizon controller to be adaptable to multi-scale events. In response, a novel adaptable receding horizon controller was developed to account for multi-scale disturbances, as well as generate trajectories using both a penalty function approach for obstacle penetration and hard obstacle avoidance constraints. A realistic aircraft fuel burn model based on aircraft data and engine performance simulations is used to form the cost function in the MILP optimization. The performance of the receding horizon algorithm is tested through simulation. A scalability analysis of the algorithm is conducted to ensure the tractability of the path planner. The adaptable receding horizon algorithm is shown to successfully negotiate multi-scale environments with performance exceeding static receding horizon solutions. The path planner is applied to realistic scenarios involving real atmospheric data. A single flight example for persistent contrail mitigation shows that fuel burn increases 1.48% when approximately 50% of persistent contrails are avoided, but 6.19% when 100% of persistent contrails are avoided. Persistent contrail mitigating trajectories are generated for multiple days of data, and the research shows that 58% of persistent contrails are avoided with a 0.48% increase in fuel consumption when averaged over a year.

  7. Effect of randomness on multi-frequency aeroelastic responses resolved by Unsteady Adaptive Stochastic Finite Elements

    International Nuclear Information System (INIS)

    Witteveen, Jeroen A.S.; Bijl, Hester

    2009-01-01

    The Unsteady Adaptive Stochastic Finite Elements (UASFE) method resolves the effect of randomness in numerical simulations of single-mode aeroelastic responses with a constant accuracy in time for a constant number of samples. In this paper, the UASFE framework is extended to multi-frequency responses and continuous structures by employing a wavelet decomposition pre-processing step to decompose the sampled multi-frequency signals into single-frequency components. The effect of the randomness on the multi-frequency response is then obtained by summing the results of the UASFE interpolation at constant phase for the different frequency components. Results for multi-frequency responses and continuous structures show a three orders of magnitude reduction of computational costs compared to crude Monte Carlo simulations in a harmonically forced oscillator, a flutter panel problem, and the three-dimensional transonic AGARD 445.6 wing aeroelastic benchmark subject to random fields and random parameters with various probability distributions.
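    The pre-processing idea, splitting a sampled multi-frequency response into single-frequency components before interpolating at constant phase, can be illustrated with a simple FFT band split. The paper uses a wavelet decomposition; the FFT version below, with a made-up two-tone signal, is an assumption chosen for illustration.

```python
import numpy as np

# A sampled two-frequency "response": 1 Hz and 7 Hz tones over 10 s at 200 Hz.
t = np.linspace(0, 10, 2000, endpoint=False)
signal = np.sin(2 * np.pi * 1.0 * t) + 0.5 * np.sin(2 * np.pi * 7.0 * t)

spec = np.fft.rfft(signal)
freqs = np.fft.rfftfreq(t.size, d=t[1] - t[0])

def band(spec, freqs, f0, half_width=0.5):
    """Inverse-transform only the Fourier modes within half_width of f0."""
    mask = np.abs(freqs - f0) < half_width
    return np.fft.irfft(np.where(mask, spec, 0), n=t.size)

low = band(spec, freqs, 1.0)    # the 1 Hz single-frequency component
high = band(spec, freqs, 7.0)   # the 7 Hz single-frequency component
resid = signal - low - high
print(float(np.max(np.abs(resid))) < 1e-8)
```

Each extracted component is single-frequency, so a constant-phase interpolation (the UASFE step) can be applied to it separately and the results summed, as the abstract describes.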

  8. Response of Moist Convection to Multi-scale Surface Flux Heterogeneity

    Science.gov (United States)

    Kang, S. L.; Ryu, J. H.

    2015-12-01

    We investigate the response of moist convection to the multi-scale character of the spatial variation of surface sensible heat flux (SHF) during the afternoon evolution of the convective boundary layer (CBL), using a mesoscale-domain large eddy simulation (LES) model. The multi-scale surface heterogeneity is created analytically as a function of the spectral slope of the surface SHF spectrum on a log-log scale, over wavelengths from a few tens of km down to a few hundreds of m. The response of moist convection to a κ^-3-slope surface SHF field (where κ is wavenumber) is compared with that to a κ^-2-slope surface, which has a relatively weak mesoscale feature, and to a homogeneous κ^0-slope surface. Given a surface energy balance with spatially uniform available energy, the prescribed SHF has a 180° phase lag with the latent heat flux (LHF) over a horizontal domain of (several tens of km)². Thus, warmer (cooler) surface patches are relatively dry (moist). The same observation-based sounding is prescribed as the initial condition for all cases. In all κ^-3-slope surface heterogeneity cases, early non-precipitating shallow clouds develop further into precipitating deep thunderstorms, whereas in all κ^-2-slope cases only shallow clouds develop. We compare the vertical profiles of domain-averaged fluxes and variances, and the mesoscale and turbulence contributions to them, between the κ^-3- and κ^-2-slope cases. The cross-scale processes are also investigated.
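    A heterogeneity field with a prescribed spectral slope, as in the κ^-3 and κ^-2 cases above, can be synthesized by assigning Fourier amplitudes from the target power law and drawing random phases. This is a 1D sketch with an assumed unit-variance normalization, not the study's 2D LES surface forcing.

```python
import numpy as np

def heterogeneity_field(n, slope, rng):
    """Synthesize a 1D flux anomaly whose power spectrum follows kappa**slope
    (random phases, unit variance; an illustrative construction)."""
    k = np.fft.rfftfreq(n)[1:]                      # skip the k = 0 (mean) mode
    amp = k ** (slope / 2.0)                        # power ~ k^slope
    phase = rng.uniform(0.0, 2.0 * np.pi, k.size)
    spec = np.concatenate(([0.0], amp * np.exp(1j * phase)))
    spec[-1] = spec[-1].real                        # Nyquist mode must be real
    field = np.fft.irfft(spec, n=n)
    return field / field.std()                      # normalize to unit variance

rng = np.random.default_rng(1)
shf_km3 = heterogeneity_field(1024, -3.0, rng)      # strong mesoscale variability
shf_km2 = heterogeneity_field(1024, -2.0, rng)      # weaker mesoscale variability
print(shf_km3.shape, shf_km2.shape)
```

The steeper κ^-3 field concentrates variance at long wavelengths, which is exactly the stronger mesoscale forcing the abstract associates with deep convective development.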

  9. Water Saving and Cost Analysis of Large-Scale Implementation of Domestic Rain Water Harvesting in Minor Mediterranean Islands

    Directory of Open Access Journals (Sweden)

    Alberto Campisano

    2017-11-01

    Full Text Available This paper describes a novel methodology to evaluate the benefits of large-scale installation of domestic Rain Water Harvesting (RWH) systems in multi-story buildings. The methodology was specifically developed for application to small settlements of the minor Mediterranean islands, characterized by sharp fluctuations in precipitation and water demands between winter and summer periods. The methodology is based on the combined use of regressive models for water saving evaluation and of geospatial analysis tools for semi-automatic collection of spatial information at the building/household level. An application to the old town of Lipari (Aeolian Islands) showed potential for high yearly water savings (between 30% and 50%), with a return on investment in less than 15 years for about 50% of the installed RWH systems.

  10. The Software Reliability of Large Scale Integration Circuit and Very Large Scale Integration Circuit

    OpenAIRE

    Artem Ganiyev; Jan Vitasek

    2010-01-01

    This article describes an evaluation method for the faultless function of large scale integration (LSI) and very large scale integration (VLSI) circuits. The article presents a comparative analysis of the factors that determine the faultlessness of integrated circuits, an analysis of existing methods, and a model for evaluating the faultless function of LSI and VLSI. The main part describes a proposed algorithm and program for the analysis of fault rate in LSI and VLSI circuits.

  11. Improved decomposition–coordination and discrete differential dynamic programming for optimization of large-scale hydropower system

    International Nuclear Information System (INIS)

    Li, Chunlong; Zhou, Jianzhong; Ouyang, Shuo; Ding, Xiaoling; Chen, Lu

    2014-01-01

    Highlights: • Optimization of a large-scale hydropower system in the Yangtze River basin. • Improved decomposition–coordination and discrete differential dynamic programming. • Generating the initial solution randomly to reduce generation time. • Proposing a relative coefficient for more power generation. • Proposing an adaptive bias corridor technology to enhance convergence speed. - Abstract: With the construction of major hydro plants, more and more large-scale hydropower systems are gradually taking shape, which poses a challenge to optimizing these systems. Optimization of a large-scale hydropower system (OLHS), which is to determine the water discharges or water levels of all hydro plants that maximize total power generation subject to many constraints, is a high-dimensional, nonlinear, and coupled complex problem. In order to solve the OLHS problem effectively, an improved decomposition–coordination and discrete differential dynamic programming (IDC–DDDP) method is proposed in this paper. A strategy of generating the initial solution randomly is adopted to reduce generation time. Meanwhile, a relative coefficient based on maximum output capacity is proposed for more power generation. Moreover, an adaptive bias corridor technology is proposed to enhance convergence speed. The proposed method is applied to the long-term optimal dispatch of the large-scale hydropower system (LHS) in the Yangtze River basin. Compared to other methods, IDC–DDDP has competitive performance in not only total power generation but also convergence speed, which provides a new method to solve the OLHS problem.
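    The corridor idea behind DDDP, dynamic programming restricted to a narrow band of states around the current trajectory with the band shrinking between passes, can be sketched for a toy single-reservoir problem. All numbers and the head/benefit model below are illustrative assumptions, not the paper's IDC-DDDP.

```python
import numpy as np

# Toy single-reservoir dispatch: choose end-of-stage storages to maximize the
# sum of release * head, with release = s_prev + inflow - s_next >= 0.
inflow = np.array([3.0, 2.0, 4.0, 1.0, 3.0])
T = len(inflow)
s0, sT = 5.0, 5.0                     # fixed initial and final storage
smin, smax = 0.0, 10.0

def benefit(s_prev, s_next, q_in):
    release = s_prev + q_in - s_next
    if release < 0:                   # infeasible transition
        return -np.inf
    head = 0.5 * (s_prev + s_next)    # crude head approximation
    return release * head

def ddp_pass(traj, delta):
    """One DDDP pass: DP restricted to a corridor traj[t] + {-delta, 0, +delta}."""
    corridors = [np.clip(traj[t] + np.array([-delta, 0.0, delta]), smin, smax)
                 for t in range(1, T)]             # interior stages only
    states = [np.array([s0])] + corridors + [np.array([sT])]
    val = [np.full(len(s), -np.inf) for s in states]
    arg = [np.zeros(len(s), dtype=int) for s in states]
    val[0][0] = 0.0
    for t in range(1, T + 1):
        for j, s in enumerate(states[t]):
            gains = [val[t - 1][i] + benefit(sp, s, inflow[t - 1])
                     for i, sp in enumerate(states[t - 1])]
            arg[t][j] = int(np.argmax(gains))
            val[t][j] = gains[arg[t][j]]
    new = np.zeros(T + 1)                          # backtrack the best path
    j = int(np.argmax(val[T]))
    for t in range(T, -1, -1):
        new[t] = states[t][j]
        j = arg[t][j]
    return new, float(np.max(val[T]))

traj = np.full(T + 1, 5.0)            # initial feasible trajectory
best = -np.inf
for delta in (2.0, 1.0, 0.5, 0.25):   # shrink the corridor between passes
    for _ in range(5):
        traj, best = ddp_pass(traj, delta)
print(best > 0)
```

Because the current trajectory always lies inside its own corridor, each pass can only keep or improve the objective; the adaptive bias corridor in the paper refines how this band is placed and shrunk.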

  12. An introduction to random interlacements

    CERN Document Server

    Drewitz, Alexander; Sapozhnikov, Artëm

    2014-01-01

    This book gives a self-contained introduction to the theory of random interlacements. The intended reader of the book is a graduate student with a background in probability theory who wants to learn about the fundamental results and methods of this rapidly emerging field of research. The model was introduced by Sznitman in 2007 in order to describe the local picture left by the trace of a random walk on a large discrete torus when it runs up to times proportional to the volume of the torus. Random interlacements is a new percolation model on the d-dimensional lattice. The main results covered by the book include the full proof of the local convergence of random walk trace on the torus to random interlacements and the full proof of the percolation phase transition of the vacant set of random interlacements in all dimensions. The reader will become familiar with the techniques relevant to working with the underlying Poisson Process and the method of multi-scale renormalization, which helps in overcoming the ch...

  13. Superhydrophobic multi-scale ZnO nanostructures fabricated by chemical vapor deposition method.

    Science.gov (United States)

    Zhou, Ming; Feng, Chengheng; Wu, Chunxia; Ma, Weiwei; Cai, Lan

    2009-07-01

    ZnO nanostructures were synthesized on Si(100) substrates by the chemical vapor deposition (CVD) method. Different morphologies of ZnO nanostructures, such as nanoparticle films, micro-pillars, and micro-nano multi-structures, were obtained under different conditions. XRD and TEM results showed the good quality of the ZnO crystal growth. Selected-area electron diffraction analysis indicates that each individual nanowire is a single crystal. The wettability of ZnO was studied with a contact angle measuring apparatus. We found that the wettability can be changed from hydrophobic to superhydrophobic as the structure changes from a smooth particle film to single micro-pillars, nanowires, and micro-nano multi-scale structures. Compared with the particle film, with a contact angle (CA) of 90.7 degrees, the CA of the single-scale microstructure and the sparse micro-nano multi-scale structure is 130-140 degrees and 140-150 degrees, respectively. But when the surface is a dense micro-nano multi-scale structure such as a nano-lawn, the CA can reach 168.2 degrees. The results indicate that surface microstructure is very important to surface wettability. The wettability of the micro-nano multi-structure is better than that of the single-scale structure, and that of the dense micro-nano multi-structure is better than that of the sparse multi-structure.

  14. A novel way to detect correlations on multi-time scales, with temporal evolution and for multi-variables

    Science.gov (United States)

    Yuan, Naiming; Xoplaki, Elena; Zhu, Congwen; Luterbacher, Juerg

    2016-06-01

    In this paper, two new methods, Temporal evolution of Detrended Cross-Correlation Analysis (TDCCA) and Temporal evolution of Detrended Partial-Cross-Correlation Analysis (TDPCCA), are proposed by generalizing DCCA and DPCCA. Applying TDCCA/TDPCCA, it is possible to study correlations on multiple time scales and over different periods. To illustrate their properties, we use two climatological examples: i) Global Sea Level (GSL) versus the North Atlantic Oscillation (NAO); and ii) Summer Rainfall over the Yangtze River (SRYR) versus the previous winter's Pacific Decadal Oscillation (PDO). We find significant correlations between GSL and NAO on time scales of 60 to 140 years, but the correlations are non-significant between 1865 and 1875. For SRYR and PDO, significant correlations are found on time scales of 30 to 35 years, and the correlations are more pronounced during the most recent 30 years. By combining TDCCA/TDPCCA and DCCA/DPCCA, we propose a new correlation-detection system which, compared to traditional methods, can objectively show how two time series are related (on which time scale and during which time period). This is important not only for the diagnosis of complex systems, but also for better design of prediction models. The new methods therefore offer new opportunities for applications in the natural sciences, such as ecology, economics, sociology and other research fields.
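    The DCCA building block underlying these methods can be sketched compactly. The code below is an illustrative implementation, not the authors' own; the function names and synthetic data are ours, and the temporal-evolution (TDCCA) variant would simply slide this computation over successive sub-periods of the series.

```python
import numpy as np

def detrended_covariance(x, y, scale):
    """F^2_DCCA(s): average detrended covariance of the integrated
    series over non-overlapping windows of length `scale`."""
    X = np.cumsum(x - x.mean())
    Y = np.cumsum(y - y.mean())
    n = len(X) // scale
    t = np.arange(scale)
    covs = []
    for i in range(n):
        xs = X[i * scale:(i + 1) * scale]
        ys = Y[i * scale:(i + 1) * scale]
        # remove a linear trend from each window before correlating
        px = np.polyval(np.polyfit(t, xs, 1), t)
        py = np.polyval(np.polyfit(t, ys, 1), t)
        covs.append(np.mean((xs - px) * (ys - py)))
    return np.mean(covs)

def dcca_coefficient(x, y, scale):
    """rho_DCCA(s) = F2_xy / sqrt(F2_xx * F2_yy), bounded in [-1, 1]."""
    fxy = detrended_covariance(x, y, scale)
    fxx = detrended_covariance(x, x, scale)
    fyy = detrended_covariance(y, y, scale)
    return fxy / np.sqrt(fxx * fyy)

# Two synthetic series sharing a common driver should show rho near their
# underlying correlation, while independent noise gives rho near zero.
rng = np.random.default_rng(0)
common = rng.standard_normal(4000)
x = common + 0.5 * rng.standard_normal(4000)
y = common + 0.5 * rng.standard_normal(4000)
rho = dcca_coefficient(x, y, scale=50)
```

Varying `scale` gives the scale dependence the abstract describes; restricting `x` and `y` to a moving sub-period before calling `dcca_coefficient` gives the temporal evolution.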

  15. A generic library for large scale solution of PDEs on modern heterogeneous architectures

    DEFF Research Database (Denmark)

    Glimberg, Stefan Lemvig; Engsig-Karup, Allan Peter

    2012-01-01

    Adapting to new programming models for modern multi- and many-core architectures requires code-rewriting and changing algorithms and data structures, in order to achieve good efficiency and scalability. We present a generic library for solving large scale partial differential equations (PDEs......), capable of utilizing heterogeneous CPU/GPU environments. The library can be used for fast proto-typing of PDE solvers, based on finite difference approximations of spatial derivatives in one, two, or three dimensions. In order to efficiently solve large scale problems, we keep memory consumption...... and memory access low, using a low-storage implementation of flexible-order finite difference operators. We will illustrate the use of library components by assembling such matrix-free operators to be used with one of the supported iterative solvers, such as GMRES, CG, Multigrid or Defect Correction...
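    The low-storage, matrix-free finite-difference operators described above can be illustrated with a minimal sketch: a 1-D second-order Laplacian applied as a stencil (no matrix is ever assembled) and handed to a plain conjugate gradient solver. This is our own toy code, not the library's API, and it stands in for the GPU kernels the library would actually use.

```python
import numpy as np

def apply_laplacian(u, h):
    """Matrix-free second-order finite-difference operator for -u''
    with homogeneous Dirichlet boundaries: only the 3-point stencil
    is applied, so memory use stays O(n)."""
    v = 2.0 * u
    v[:-1] -= u[1:]
    v[1:] -= u[:-1]
    return v / h**2

def cg(matvec, b, tol=1e-10, maxiter=1000):
    """Plain conjugate gradient on a matrix-free SPD operator."""
    x = np.zeros_like(b)
    r = b - matvec(x)
    p = r.copy()
    rs = r @ r
    for _ in range(maxiter):
        Ap = matvec(p)
        alpha = rs / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:
            break
        p = r + (rs_new / rs) * p
        rs = rs_new
    return x

# Manufactured solution u(x) = sin(pi x) for -u'' = f on (0, 1).
n = 64
h = 1.0 / (n + 1)
xg = np.linspace(h, 1.0 - h, n)
f = np.pi**2 * np.sin(np.pi * xg)
u = cg(lambda v: apply_laplacian(v, h), f)
```

Swapping `cg` for GMRES, multigrid, or defect correction, and swapping the stencil for a higher-order or 3-D one, follows the same pattern the library describes.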

  16. A review of large-scale solar heating systems in Europe

    International Nuclear Information System (INIS)

    Fisch, M.N.; Guigas, M.; Dalenback, J.O.

    1998-01-01

    Large-scale solar applications benefit from economies of scale. Compared to small solar domestic hot water (DHW) systems for single-family houses, the solar heat cost can be cut by at least a third. The most interesting projects for replacing fossil fuels and reducing CO2 emissions are solar systems with seasonal storage in combination with gas or biomass boilers. In the framework of the EU-APAS project Large-scale Solar Heating Systems, thirteen existing plants in six European countries have been evaluated. The yearly solar gains of the systems are between 300 and 550 kWh per m² of collector area. The investment cost of solar plants with short-term storage varies from 300 up to 600 ECU per m²; systems with seasonal storage show investment costs twice as high. Results of studies concerning the market potential for solar heating plants, taking new collector concepts and industrial production into account, are presented. Site-specific studies and predesign of large-scale solar heating plants for housing developments in six European countries show a 50% cost reduction compared to existing projects. The cost-benefit ratio for the planned systems with long-term storage is between 0.7 and 1.5 ECU per kWh per year. (author)

  17. Investigating the Role of Large-Scale Domain Dynamics in Protein-Protein Interactions.

    Science.gov (United States)

    Delaforge, Elise; Milles, Sigrid; Huang, Jie-Rong; Bouvier, Denis; Jensen, Malene Ringkjøbing; Sattler, Michael; Hart, Darren J; Blackledge, Martin

    2016-01-01

    Intrinsically disordered linkers provide multi-domain proteins with degrees of conformational freedom that are often essential for function. These highly dynamic assemblies represent a significant fraction of all proteomes, and deciphering the physical basis of their interactions represents a considerable challenge. Here we describe the difficulties associated with mapping large-scale domain dynamics and present two recent examples where solution-state methods, in particular NMR spectroscopy, are used to investigate conformational exchange on very different timescales.

  18. Multi-Scale Pattern Recognition for Image Classification and Segmentation

    NARCIS (Netherlands)

    Li, Y.

    2013-01-01

    Scale is an important parameter of images. Different objects or image structures (e.g. edges and corners) can appear at different scales and each is meaningful only over a limited range of scales. Multi-scale analysis has been widely used in image processing and computer vision, serving as the basis

  19. Phylogenetic distribution of large-scale genome patchiness

    Directory of Open Access Journals (Sweden)

    Hackenberg Michael

    2008-04-01

    Full Text Available. Abstract. Background: The phylogenetic distribution of large-scale genome structure (i.e. mosaic compositional patchiness) has been explored mainly by analytical ultracentrifugation of bulk DNA. However, with the availability of large, good-quality chromosome sequences, and the recently developed computational methods to directly analyze patchiness in the genome sequence, an evolutionary comparative analysis can be carried out at the sequence level. Results: The local variations in the scaling exponent of Detrended Fluctuation Analysis are used here to analyze large-scale genome structure and directly uncover the characteristic scales present in genome sequences. Furthermore, through shuffling experiments on selected genome regions, computationally identified, isochore-like regions were identified as the biological source of the uncovered large-scale genome structure. The phylogenetic distribution of short- and large-scale patchiness was determined in the best-sequenced genome assemblies of eleven eukaryotic genomes: mammals (Homo sapiens, Pan troglodytes, Mus musculus, Rattus norvegicus, and Canis familiaris), birds (Gallus gallus), fishes (Danio rerio), invertebrates (Drosophila melanogaster and Caenorhabditis elegans), plants (Arabidopsis thaliana) and yeasts (Saccharomyces cerevisiae). We found large-scale patchiness of genome structure, associated with in silico determined, isochore-like regions, throughout this wide phylogenetic range. Conclusion: Large-scale genome structure is detected by directly analyzing DNA sequences in a wide range of eukaryotic chromosome sequences, from human to yeast. In all these genomes, large-scale patchiness can be associated with the isochore-like regions, as directly detected in silico at the sequence level.
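    The core quantity here, the DFA scaling exponent, is straightforward to compute. The sketch below is our own illustrative implementation on synthetic data (the study applies it locally, in sliding windows along a chromosome; here a single global exponent is computed for brevity). White noise gives an exponent near 0.5 and a random walk near 1.5.

```python
import numpy as np

def dfa_exponent(x, scales):
    """Detrended Fluctuation Analysis: the slope of log F(s) vs log s.
    The patchiness analysis in the paper evaluates this locally along
    the sequence; this sketch returns one global exponent."""
    y = np.cumsum(x - np.mean(x))          # integrated profile
    fs = []
    for s in scales:
        n = len(y) // s
        t = np.arange(s)
        res = []
        for i in range(n):
            seg = y[i * s:(i + 1) * s]
            # subtract the best linear fit in each window
            trend = np.polyval(np.polyfit(t, seg, 1), t)
            res.append(np.mean((seg - trend) ** 2))
        fs.append(np.sqrt(np.mean(res)))
    slope, _ = np.polyfit(np.log(scales), np.log(fs), 1)
    return slope

rng = np.random.default_rng(1)
scales = [16, 32, 64, 128, 256]
alpha_wn = dfa_exponent(rng.standard_normal(8192), scales)          # ~0.5
alpha_rw = dfa_exponent(np.cumsum(rng.standard_normal(8192)), scales)  # ~1.5
```

For a DNA sequence one would first map bases to a numeric signal (e.g. GC = +1, AT = -1) before applying the same machinery.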

  20. Managing large-scale models: DBS

    International Nuclear Information System (INIS)

    1981-05-01

    A set of fundamental management tools for developing and operating a large-scale model and data base system is presented. Experience in operating and developing such a system shows that the only reasonable way to gain strong management control is to implement appropriate controls and procedures. Chapter I discusses the purpose of the book. Chapter II classifies a broad range of generic management problems into three groups: documentation, operations, and maintenance. First, system problems are identified, then solutions for gaining management control are discussed. Chapters III, IV, and V present practical methods for dealing with these problems. These methods were developed for managing SEAS but have general application to large-scale models and data bases

  1. Large-scale derived flood frequency analysis based on continuous simulation

    Science.gov (United States)

    Dung Nguyen, Viet; Hundecha, Yeshewatesfa; Guse, Björn; Vorogushyn, Sergiy; Merz, Bruno

    2016-04-01

    There is an increasing need for spatially consistent flood risk assessments at the regional scale (several 100,000 km²), in particular in the insurance industry and for national risk reduction strategies. However, most large-scale flood risk assessments are composed of smaller-scale assessments and show spatial inconsistencies. To overcome this deficit, a large-scale flood model composed of a weather generator and catchment models was developed, reflecting the spatially inherent heterogeneity. The weather generator is a multisite and multivariate stochastic model capable of generating synthetic meteorological fields (precipitation, temperature, etc.) at daily resolution for the regional scale. These fields respect the observed autocorrelation, spatial correlation and covariance between the variables. They are used as input to the catchment models. A long-term simulation of this combined system makes it possible to derive very long discharge series at many catchment locations, serving as a basis for spatially consistent flood risk estimates at the regional scale. This combined model was set up and validated for the major river catchments in Germany. The weather generator was trained on 53 years of observation data from 528 stations covering not only all of Germany but also parts of France, Switzerland, the Czech Republic and Austria, with an aggregated spatial extent of 443,931 km². 10,000 years of daily meteorological fields for the study area were generated. Likewise, rainfall-runoff simulations with SWIM were performed for the entire Elbe, Rhine, Weser, Donau and Ems catchments. The validation results illustrate good performance of the combined system, as the simulated flood magnitudes and frequencies agree well with the observed flood data. Based on continuous simulation, this model chain is then used to estimate flood quantiles for the whole of Germany, including upstream headwater catchments in neighbouring countries. This continuous large-scale approach overcomes the several
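    The multisite idea, drawing spatially correlated daily fields that respect a target inter-station correlation matrix, can be illustrated with a deliberately simplified sketch. This toy uses a Cholesky factor of a hypothetical 3-station correlation matrix and a crude wet/dry marginal transform; the study's actual generator additionally preserves autocorrelation and cross-variable covariance, both omitted here.

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical target correlations between three stations (ours, for
# illustration only -- not values from the study).
target_corr = np.array([
    [1.0, 0.7, 0.4],
    [0.7, 1.0, 0.5],
    [0.4, 0.5, 1.0],
])
L = np.linalg.cholesky(target_corr)

n_days = 20000
# Independent daily normals, mixed by the Cholesky factor so that the
# columns acquire the target spatial correlation.
z = rng.standard_normal((n_days, 3)) @ L.T

# Crude marginal transform: dry day below a threshold, skewed positive
# amounts above it (a stand-in for a fitted precipitation marginal).
precip = np.where(z > 0.5, np.expm1(z), 0.0)

sample_corr = np.corrcoef(z, rowvar=False)
```

A real generator would fit the correlation matrix and marginals to the 528-station record and add temporal persistence (e.g. an autoregressive term) before feeding the fields into the catchment models.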

  2. State-of-the-Art Report on Multi-scale Modelling of Nuclear Fuels

    International Nuclear Information System (INIS)

    Bartel, T.J.; Dingreville, R.; Littlewood, D.; Tikare, V.; Bertolus, M.; Blanc, V.; Bouineau, V.; Carlot, G.; Desgranges, C.; Dorado, B.; Dumas, J.C.; Freyss, M.; Garcia, P.; Gatt, J.M.; Gueneau, C.; Julien, J.; Maillard, S.; Martin, G.; Masson, R.; Michel, B.; Piron, J.P.; Sabathier, C.; Skorek, R.; Toffolon, C.; Valot, C.; Van Brutzel, L.; Besmann, Theodore M.; Chernatynskiy, A.; Clarno, K.; Gorti, S.B.; Radhakrishnan, B.; Devanathan, R.; Dumont, M.; Maugis, P.; El-Azab, A.; Iglesias, F.C.; Lewis, B.J.; Krack, M.; Yun, Y.; Kurata, M.; Kurosaki, K.; Largenton, R.; Lebensohn, R.A.; Malerba, L.; Oh, J.Y.; Phillpot, S.R.; Tulenko, J. S.; Rachid, J.; Stan, M.; Sundman, B.; Tonks, M.R.; Williamson, R.; Van Uffelen, P.; Welland, M.J.; Valot, Carole; Stan, Marius; Massara, Simone; Tarsi, Reka

    2015-10-01

    The Nuclear Science Committee (NSC) of the Nuclear Energy Agency (NEA) has undertaken an ambitious programme to document state-of-the-art of modelling for nuclear fuels and structural materials. The project is being performed under the Working Party on Multi-Scale Modelling of Fuels and Structural Material for Nuclear Systems (WPMM), which has been established to assess the scientific and engineering aspects of fuels and structural materials, describing multi-scale models and simulations as validated predictive tools for the design of nuclear systems, fuel fabrication and performance. The WPMM's objective is to promote the exchange of information on models and simulations of nuclear materials, theoretical and computational methods, experimental validation and related topics. It also provides member countries with up-to-date information, shared data, models, and expertise. The goal is also to assess needs for improvement and address them by initiating joint efforts. The WPMM reviews and evaluates multi-scale modelling and simulation techniques currently employed in the selection of materials used in nuclear systems. It serves to provide advice to the nuclear community on the developments needed to meet the requirements of modelling for the design of different nuclear systems. The original WPMM mandate had three components (Figure 1), with the first component currently completed, delivering a report on the state-of-the-art of modelling of structural materials. The work on modelling was performed by three expert groups, one each on Multi-Scale Modelling Methods (M3), Multi-Scale Modelling of Fuels (M2F) and Structural Materials Modelling (SMM). WPMM is now composed of three expert groups and two task forces providing contributions on multi-scale methods, modelling of fuels and modelling of structural materials. This structure will be retained, with the addition of task forces as new topics are developed. The mandate of the Expert Group on Multi-Scale Modelling of

  3. Relay-aided multi-cell broadcasting with random network coding

    DEFF Research Database (Denmark)

    Lu, Lu; Sun, Fan; Xiao, Ming

    2010-01-01

    We investigate a relay-aided multi-cell broadcasting system using random network codes, where the focus is on devising efficient scheduling algorithms between relay and base stations. Two scheduling algorithms are proposed based on different feedback strategies; namely, a one-step scheduling...
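    The random network coding primitive the system relies on can be sketched over GF(2): the source (or relay) sends random XOR combinations of the original packets, and a receiver decodes by Gaussian elimination once it has collected enough linearly independent combinations. This is a generic illustration of random linear network coding, not the paper's scheduling algorithms; practical systems often work over GF(256) instead.

```python
import numpy as np

def gf2_rank(M):
    """Rank of a binary matrix over GF(2)."""
    M = M.copy()
    rank = 0
    for col in range(M.shape[1]):
        rows = [r for r in range(rank, M.shape[0]) if M[r, col]]
        if not rows:
            continue
        M[[rank, rows[0]]] = M[[rows[0], rank]]
        for r in range(M.shape[0]):
            if r != rank and M[r, col]:
                M[r] ^= M[rank]
        rank += 1
    return rank

def encode(packets, rng):
    """One coded packet: a random GF(2) (XOR) combination of the sources."""
    k = len(packets)
    coeffs = rng.integers(0, 2, size=k, dtype=np.uint8)
    while not coeffs.any():               # skip the useless all-zero combination
        coeffs = rng.integers(0, 2, size=k, dtype=np.uint8)
    payload = np.zeros_like(packets[0])
    for c, p in zip(coeffs, packets):
        if c:
            payload ^= p
    return coeffs, payload

def decode(coded, k):
    """Gauss-Jordan elimination over GF(2) recovers the k source packets."""
    A = np.array([c for c, _ in coded], dtype=np.uint8)
    B = np.array([p for _, p in coded], dtype=np.uint8)
    for col in range(k):
        pivot = next(r for r in range(col, len(A)) if A[r, col])
        A[[col, pivot]] = A[[pivot, col]]
        B[[col, pivot]] = B[[pivot, col]]
        for r in range(len(A)):
            if r != col and A[r, col]:
                A[r] ^= A[col]
                B[r] ^= B[col]
    return [B[i] for i in range(k)]

rng = np.random.default_rng(7)
source = [rng.integers(0, 256, size=8, dtype=np.uint8) for _ in range(4)]
coded = []
while True:
    coded.append(encode(source, rng))
    if gf2_rank(np.array([c for c, _ in coded])) == len(source):
        break                              # enough innovative packets received
recovered = decode(coded, len(source))
```

The feedback strategies the abstract mentions would decide, per slot, whether the relay or a base station transmits the next coded packet, based on which receivers still lack innovative combinations.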

  4. Defect detection in industrial radiography: a multi-scale approach; Detection de defauts en radiographie industrielle: approches multiechelles

    Energy Technology Data Exchange (ETDEWEB)

    Lefevre, M

    1995-10-01

    Radiography is used by Electricite de France for pipe inspection in nuclear power plants in order to detect defects. For several years, the R&D Division of EDF has undertaken research to define image processing methods well adapted to radiographic images. The main issues raised by these images are their low contrast, their high level of noise, the presence of a trend and the variable size of the defects. A data base of digitized radiographs of pipes has been gathered and the statistical, topological and geometrical properties of all of these images have been analyzed. From this study, a global indicator of the presence of defects and local features, leading to a classification of images into areas with or without defects, have been extracted. The defect localisation problem has been considered in a multi-scale framework based on the creation of a family of images with increasing regularity, defined as the solution of a partial differential equation. From a choice of axioms, a set of equations may be deduced which define various multi-scale analyses. The survey of the properties of such analyses, when applied to images altered with different types of noise, has led to the selection of the multi-scale analysis best adapted to the digitized radiographs. The segmentation process uses the geodesic information attached to defects via the connection cost concept. The final decision is based on a summary of the information extracted at several scales; a fuzzy logic approach has been proposed to solve this part. We then developed methods and tools for expertise guidance and validated them on a complete data base of images. Some global indicators have been extracted, and a detection and localisation process has been achieved for large defects. (author). 117 refs., 73 figs.

  5. Large Scale Self-Organizing Information Distribution System

    National Research Council Canada - National Science Library

    Low, Steven

    2005-01-01

    This project investigates issues in "large-scale" networks. Here "large-scale" refers to networks with large number of high capacity nodes and transmission links, and shared by a large number of users...

  6. Multi-scale modeling of the environmental impact and energy performance of open-loop groundwater heat pumps in urban areas

    International Nuclear Information System (INIS)

    Sciacovelli, A.; Guelpa, E.; Verda, V.

    2014-01-01

    Groundwater heat pumps are expected to play a major role in future energy scenarios. Proliferation of such systems in urban areas may generate issues related to possible interference between installations. These issues are associated with the thermal plume produced by heat pumps during operation and are particularly evident in the case of groundwater flow, because of advective heat transfer. In this paper, the impact of an installation is investigated through a thermo-fluid dynamic model of the subsurface which considers fluid flow in the saturated unit and heat transfer in both the saturated and unsaturated units. Due to the large extension of the affected area, a multi-scale numerical model that combines a three-dimensional CFD model and a network model is proposed. The thermal request of the user and the heat pump performance are considered in the multi-scale numerical model through appropriate boundary conditions imposed at the wells. Various scenarios corresponding to different operating modes of the heat pump are considered. - Highlights: • A groundwater heat pump for a skyscraper under construction is considered. • The thermal plume induced in the groundwater is evaluated using a multi-scale model. • The multi-scale model combines a full 3D model and a network model. • The multi-scale approach permits studying large spaces over long times with low computational cost. • In some cases the thermal plume can reduce the COP of other heat pumps by 20%

  7. Nonlinear evolution of large-scale structure in the universe

    International Nuclear Information System (INIS)

    Frenk, C.S.; White, S.D.M.; Davis, M.

    1983-01-01

    Using N-body simulations we study the nonlinear development of primordial density perturbations in an Einstein-de Sitter universe. We compare the evolution of an initial distribution without small-scale density fluctuations to evolution from a random Poisson distribution. These initial conditions mimic the assumptions of the adiabatic and isothermal theories of galaxy formation. The large-scale structures which form in the two cases are markedly dissimilar. In particular, the correlation function xi(r) and the visual appearance of our adiabatic (or ''pancake'') models match better the observed distribution of galaxies. This distribution is characterized by large-scale filamentary structure. Because the pancake models do not evolve in a self-similar fashion, the slope of xi(r) steepens with time; as a result there is a unique epoch at which these models fit the galaxy observations. We find the ratio of cutoff length to correlation length at this time to be lambda_min/r_0 = 5.1; its expected value in a neutrino-dominated universe is 4(Ωh)^-1 (H_0 = 100h km s^-1 Mpc^-1). At early epochs these models predict a negligible amplitude for xi(r) and could explain the lack of measurable clustering in the Lyα absorption lines of high-redshift quasars. However, large-scale structure in our models collapses after z = 2. If this collapse precedes galaxy formation, as in the usual pancake theory, galaxies formed uncomfortably recently. The extent of this problem may depend on the cosmological model used; the present series of experiments should be extended in the future to include models with Ω < 1

  8. The role of fragmentation mechanism in large-scale vapor explosions

    International Nuclear Information System (INIS)

    Liu, Jie

    2003-01-01

    A non-equilibrium, multi-phase, multi-component code, PROVER-I, is developed for the propagation phase of vapor explosions. Two fragmentation models are used. The hydrodynamic fragmentation model is the same as Fletcher's. A new thermal fragmentation model is proposed with three kinds of time scale, modeling instant fragmentation, spontaneous nucleation fragmentation and normal boiling fragmentation. The role of the fragmentation mechanisms is investigated through simulations of pressure wave propagation and of the energy conversion ratio of ex-vessel vapor explosions. Spontaneous nucleation fragmentation results in a much higher pressure peak and a larger energy conversion ratio than hydrodynamic fragmentation. Instant fragmentation gives a slightly larger energy conversion ratio than spontaneous nucleation fragmentation, while normal boiling fragmentation results in a smaller energy conversion ratio. Detailed analysis of the structure of the pressure wave makes it clear that thermal detonation exists only under thermal fragmentation. A high energy conversion ratio is obtained at small vapor volume fractions; at larger vapor volume fractions, the vapor explosion is weak. In a large-scale vapor explosion, hydrodynamic fragmentation is essential once the pressure wave becomes strong, so a small energy conversion ratio is expected. (author)

  9. Multi-function nuclear weight scale system

    International Nuclear Information System (INIS)

    Zheng Mingquan; Sun Jinhua; Jia Changchun; Wang Mingqian; Tang Ke

    1998-01-01

    The author introduces methods for designing the hardware and software of a multi-function nuclear weight scale system, based on an RS-485 communication protocol between a master (an industrial control computer, a 386) and a slave (an 8098 single-chip microcontroller), and describes its main functions

  10. Persistent multi-scale fluctuations shift European hydroclimate to its millennial boundaries.

    Science.gov (United States)

    Markonis, Y; Hanel, M; Máca, P; Kyselý, J; Cook, E R

    2018-05-02

    In recent years, there has been growing concern about the effect of global warming on water resources, especially at regional and continental scales. The last IPCC report on extremes states that there is medium confidence in an increase in European drought frequency during the twentieth century. Here we use the Old World Drought Atlas palaeoclimatic reconstruction to show that when Europe's hydroclimate is examined from a millennial, multi-scale perspective, a significant decrease in dryness can be observed since 1920 over most of central and northern Europe. On the contrary, in the south, drying conditions have prevailed, creating an intense north-to-south dipole. In both cases, hydroclimatic conditions have shifted to, and in some regions exceeded, their millennial boundaries, remaining at these extreme levels for the longest period of the 1000-year-long record.

  11. Multi-scale sampling to evaluate assemblage dynamics in an oceanic marine reserve.

    Science.gov (United States)

    Thompson, Andrew R; Watson, William; McClatchie, Sam; Weber, Edward D

    2012-01-01

    To resolve the capacity of Marine Protected Areas (MPA) to enhance fish productivity it is first necessary to understand how environmental conditions affect the distribution and abundance of fishes independent of potential reserve effects. Baseline fish production was examined from 2002-2004 through ichthyoplankton sampling in a large (10,878 km²) Southern Californian oceanic marine reserve, the Cowcod Conservation Area (CCA) that was established in 2001, and the Southern California Bight as a whole (238,000 km² CalCOFI sampling domain). The CCA assemblage changed through time as the importance of oceanic-pelagic species decreased between 2002 (La Niña) and 2003 (El Niño) and then increased in 2004 (El Niño), while oceanic species and rockfishes displayed the opposite pattern. By contrast, the CalCOFI assemblage was relatively stable through time. Depth, temperature, and zooplankton explained more of the variability in assemblage structure at the CalCOFI scale than they did at the CCA scale. CalCOFI sampling revealed that oceanic species impinged upon the CCA between 2002 and 2003 in association with warmer offshore waters, thus explaining the increased influence of these species in the CCA during the El Niño years. Multi-scale, spatially explicit sampling and analysis was necessary to interpret assemblage dynamics in the CCA and likely will be needed to evaluate other focal oceanic marine reserves throughout the world.

  12. Selective vulnerability related to aging in large-scale resting brain networks.

    Science.gov (United States)

    Zhang, Hong-Ying; Chen, Wen-Xin; Jiao, Yun; Xu, Yao; Zhang, Xiang-Rong; Wu, Jing-Tao

    2014-01-01

    Normal aging is associated with cognitive decline. Evidence indicates that large-scale brain networks are affected by aging; however, it has not been established whether aging has equivalent effects on specific large-scale networks. In the present study, 40 healthy subjects including 22 older (aged 60-80 years) and 18 younger (aged 22-33 years) adults underwent resting-state functional MRI scanning. Four canonical resting-state networks, including the default mode network (DMN), executive control network (ECN), dorsal attention network (DAN) and salience network, were extracted, and the functional connectivities in these canonical networks were compared between the younger and older groups. We found distinct, disruptive alterations present in the large-scale aging-related resting brain networks: the ECN was affected the most, followed by the DAN. However, the DMN and salience networks showed limited functional connectivity disruption. The visual network served as a control and was similarly preserved in both groups. Our findings suggest that the aged brain is characterized by selective vulnerability in large-scale brain networks. These results could help improve our understanding of the mechanism of degeneration in the aging brain. Additional work is warranted to determine whether selective alterations in the intrinsic networks are related to impairments in behavioral performance.

  13. Large scale structure and baryogenesis

    International Nuclear Information System (INIS)

    Kirilova, D.P.; Chizhov, M.V.

    2001-08-01

    We discuss a possible connection between large-scale structure formation and baryogenesis in the universe. An updated review of the observational indications for the presence of a very large scale of 120 h^-1 Mpc in the distribution of the visible matter of the universe is provided. The possibility of generating a periodic distribution with the characteristic scale 120 h^-1 Mpc through a mechanism producing quasi-periodic baryon density perturbations during the inflationary stage is discussed. The evolution of the baryon charge density distribution is explored in the framework of a low-temperature boson condensate baryogenesis scenario. Both the observed very large scale of the visible matter distribution in the universe and the observed baryon asymmetry value could naturally appear as a result of the evolution of a complex scalar field condensate formed at the inflationary stage. Moreover, for some model parameters a natural separation of matter superclusters from antimatter ones can be achieved. (author)

  14. Variational Multi-Scale method with spectral approximation of the sub-scales.

    KAUST Repository

    Dia, Ben Mansour

    2015-01-07

    A variational multi-scale method where the sub-grid scales are computed by spectral approximations is presented. It is based upon an extension of the spectral theorem to not necessarily self-adjoint elliptic operators that have an associated basis of eigenfunctions which are orthonormal in weighted L2 spaces. We propose a feasible VMS-spectral method by truncation of this spectral expansion to a finite number of modes.

  15. Alignment of crystal orientations of the multi-domain photonic crystals in Parides sesostris wing scales

    Science.gov (United States)

    Yoshioka, S.; Fujita, H.; Kinoshita, S.; Matsuhana, B.

    2014-01-01

    It is known that the wing scales of the emerald-patched cattleheart butterfly, Parides sesostris, contain gyroid-type photonic crystals, which produce a green structural colour. However, the photonic crystal is not a single crystal that spreads over the entire scale, but it is separated into many small domains with different crystal orientations. As a photonic crystal generally has band gaps at different frequencies depending on the direction of light propagation, it seems mysterious that the scale is observed to be uniformly green under an optical microscope despite the multi-domain structure. In this study, we have carefully investigated the structure of the wing scale and discovered that the crystal orientations of different domains are not perfectly random, but there is a preferred crystal orientation that is aligned along the surface normal of the scale. This finding suggests that there is an additional factor during the developmental process of the microstructure that regulates the crystal orientation. PMID:24352678

  16. Multi-view L2-SVM and its multi-view core vector machine.

    Science.gov (United States)

    Huang, Chengquan; Chung, Fu-lai; Wang, Shitong

    2016-03-01

    In this paper, a novel L2-SVM based classifier, Multi-view L2-SVM, is proposed to address multi-view classification tasks. The proposed Multi-view L2-SVM classifier does not have any bias term in its objective function and hence has flexibility like ν-SVC, in the sense that the number of yielded support vectors can be controlled by a pre-specified parameter. The proposed Multi-view L2-SVM classifier can make full use of the coherence and the difference of different views by imposing consensus among multiple views to improve the overall classification performance. Besides, based on the generalized core vector machine (GCVM), the proposed Multi-view L2-SVM classifier is extended into its GCVM version, MvCVM, which enables fast training on large-scale multi-view datasets, with asymptotically linear time complexity in the sample size and space complexity independent of the sample size. Our experimental results demonstrated the effectiveness of the proposed Multi-view L2-SVM classifier for small-scale multi-view datasets and of the proposed MvCVM classifier for large-scale multi-view datasets. Copyright © 2015 Elsevier Ltd. All rights reserved.
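    The consensus idea, per-view classifiers penalized for disagreeing on their decision values, can be sketched with a tiny two-view linear L2-SVM trained by subgradient descent. This is our own simplified stand-in, not the paper's formulation: the loss, the consensus term `gamma * mean((X1 w1 - X2 w2)^2)`, and all parameter values here are illustrative assumptions.

```python
import numpy as np

def train_two_view_svm(X1, X2, y, lam=0.1, gamma=0.5, lr=0.01, epochs=300):
    """Two linear views with squared hinge loss (L2-SVM style) plus a
    consensus term coupling their decision values. Labels y in {-1, +1}."""
    rng = np.random.default_rng(0)
    w1 = rng.standard_normal(X1.shape[1]) * 0.01
    w2 = rng.standard_normal(X2.shape[1]) * 0.01
    n = len(y)
    for _ in range(epochs):
        f1, f2 = X1 @ w1, X2 @ w2
        m1, m2 = 1 - y * f1, 1 - y * f2          # per-view margins
        # gradient of lam/2 ||w||^2 + mean(max(0, m)^2) per view
        g1 = lam * w1 - 2.0 / n * (X1.T @ (y * np.maximum(m1, 0)))
        g2 = lam * w2 - 2.0 / n * (X2.T @ (y * np.maximum(m2, 0)))
        d = f1 - f2                               # consensus: pull views together
        g1 += 2 * gamma / n * (X1.T @ d)
        g2 -= 2 * gamma / n * (X2.T @ d)
        w1 -= lr * g1
        w2 -= lr * g2
    return w1, w2

def predict(X1, X2, w1, w2):
    """Combine the two views by summing their decision values."""
    return np.sign(X1 @ w1 + X2 @ w2)

# Synthetic two-view data: both views carry the label signal plus noise.
rng = np.random.default_rng(3)
y = np.where(rng.random(400) < 0.5, 1.0, -1.0)
X1 = y[:, None] * np.ones((400, 5)) + rng.standard_normal((400, 5))
X2 = y[:, None] * np.ones((400, 3)) * 0.8 + rng.standard_normal((400, 3))
w1, w2 = train_two_view_svm(X1, X2, y)
acc = np.mean(predict(X1, X2, w1, w2) == y)
```

The core-vector-machine extension in the paper replaces this full-batch descent with a minimum-enclosing-ball reformulation to reach linear-time training, which is beyond this sketch.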

  17. Automatic management software for large-scale cluster system

    International Nuclear Information System (INIS)

    Weng Yunjian; Chinese Academy of Sciences, Beijing; Sun Gongxing

    2007-01-01

    At present, large-scale cluster systems are difficult to manage: the manager carries a heavy workload, and much time must be spent on management and maintenance. The nodes in a large-scale cluster system easily fall into disarray; with thousands of nodes placed in large rooms, managers can easily confuse machines. How can accurate management of a large-scale cluster system be carried out effectively? This article introduces ELFms for large-scale cluster systems and proposes how to realize automatic management of such systems. (authors)

  18. Biointerface dynamics--Multi scale modeling considerations.

    Science.gov (United States)

    Pajic-Lijakovic, Ivana; Levic, Steva; Nedovic, Viktor; Bugarski, Branko

    2015-08-01

    The irreversible nature of matrix structural changes around immobilized cell aggregates caused by cell expansion is considered within Ca-alginate microbeads. It is related to various effects: (1) cell-bulk surface effects (cell-polymer mechanical interactions) and cell surface-polymer surface effects (cell-polymer electrostatic interactions) at the bio-interface, (2) polymer-bulk volume effects (polymer-polymer mechanical and electrostatic interactions) within the perturbed boundary layers around the cell aggregates, (3) cumulative surface and volume effects within parts of the microbead, and (4) macroscopic effects within the microbead as a whole, based on multi-scale modeling approaches. All modeling levels are discussed at two time scales, i.e. a long time scale (cell growth time) and a short time scale (cell rearrangement time). Matrix structural changes result in the generation of resistance stress, which has a feedback impact on: (1) single and collective cell migrations, (2) cell deformation and orientation, (3) decrease of cell-to-cell separation distances, and (4) cell growth. Herein, an attempt is made to discuss and connect various multi-scale modeling approaches over a range of time and space scales which have been proposed in the literature, in order to shed further light on this complex cause-consequence phenomenon, which induces the anomalous nature of energy dissipation during the structural changes of cell aggregates and matrix, quantified by the damping coefficients (the orders of the fractional derivatives). Deeper insight into the partial disintegration of the matrix within the boundary layers is useful for understanding and minimizing the generation of polymer matrix resistance stress within the interface and, on that basis, optimizing cell growth. Copyright © 2015 Elsevier B.V. All rights reserved.

  19. Large-scale data analysis of power grid resilience across multiple US service regions

    Science.gov (United States)

    Ji, Chuanyi; Wei, Yun; Mei, Henry; Calzada, Jorge; Carey, Matthew; Church, Steve; Hayes, Timothy; Nugent, Brian; Stella, Gregory; Wallace, Matthew; White, Joe; Wilcox, Robert

    2016-05-01

    Severe weather events frequently result in large-scale power failures, affecting millions of people for extended durations. However, the lack of comprehensive, detailed failure and recovery data has impeded large-scale resilience studies. Here, we analyse data from four major service regions representing Upstate New York during Super Storm Sandy and daily operations. Using non-stationary spatiotemporal random processes that relate infrastructural failures to recoveries and cost, our data analysis shows that local power failures have a disproportionally large non-local impact on people (that is, the top 20% of failures interrupted 84% of services to customers). A large number (89%) of small failures, represented by the bottom 34% of customers and commonplace devices, resulted in 56% of the total cost of 28 million customer interruption hours. Our study shows that extreme weather does not cause, but rather exacerbates, existing vulnerabilities, which are obscured in daily operations.
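The quoted statistic (the top 20% of failures interrupting 84% of services) is a Pareto-style concentration measure. A minimal sketch of how such a share is computed, using synthetic heavy-tailed failure sizes (the data and distribution are illustrative, not the study's):

```python
import numpy as np

def top_share(failure_sizes, top_frac=0.2):
    """Fraction of total customer interruptions caused by the
    largest `top_frac` of failure events."""
    sizes = np.sort(np.asarray(failure_sizes, dtype=float))[::-1]
    k = max(1, int(round(top_frac * len(sizes))))
    return sizes[:k].sum() / sizes.sum()

# Synthetic, heavy-tailed failure sizes (illustrative only).
rng = np.random.default_rng(0)
sizes = rng.pareto(1.2, size=10_000) + 1.0
share = top_share(sizes, 0.2)
print(f"top 20% of failures account for {share:.0%} of interruptions")
```

With heavy-tailed sizes the top 20% of events accounts for far more than 20% of the total, which is the disproportionate non-local impact the abstract describes.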

  20. Multi-scale simulation of droplet-droplet interactions and coalescence

    CSIR Research Space (South Africa)

    Musehane, Ndivhuwo M

    2016-10-01

Full Text Available Conference on Computational and Applied Mechanics, Potchefstroom, 3–5 October 2016. Multi-scale simulation of droplet-droplet interactions and coalescence, Ndivhuwo M. Musehane, Oliver F. Oxtoby and Daya B. Reddy (Aeronautic Systems, Council...) ... topology changes that result when droplets interact. This work endeavours to eliminate the need to use empirical correlations based on phenomenological models by developing a multi-scale model that predicts the outcome of a collision between droplets from...

  1. Addressing the multi-scale lapsus of landscape : multi-scale landscape process modelling to support sustainable land use : a case study for the Lower Guadalhorce valley South Spain

    NARCIS (Netherlands)

    Schoorl, J.M.

    2002-01-01

    "Addressing the Multi-scale Lapsus of Landscape" with the sub-title "Multi-scale landscape process modelling to support sustainable land use: A case study for the Lower Guadalhorce valley South Spain" focuses on the role of

  2. Received signal strength in large-scale wireless relay sensor network: a stochastic ray approach

    NARCIS (Netherlands)

    Hu, L.; Chen, Y.; Scanlon, W.G.

    2011-01-01

    The authors consider a point percolation lattice representation of a large-scale wireless relay sensor network (WRSN) deployed in a cluttered environment. Each relay sensor corresponds to a grid point in the random lattice and the signal sent by the source is modelled as an ensemble of photons that

  3. A practical process for light-water detritiation at large scales

    Energy Technology Data Exchange (ETDEWEB)

    Boniface, H.A. [Atomic Energy of Canada Limited, Chalk River, ON (Canada); Robinson, J., E-mail: jr@tyne-engineering.com [Tyne Engineering, Burlington, ON (Canada); Gnanapragasam, N.V.; Castillo, I.; Suppiah, S. [Atomic Energy of Canada Limited, Chalk River, ON (Canada)

    2014-07-01

    AECL and Tyne Engineering have recently completed a preliminary engineering design for a modest-scale tritium removal plant for light water, intended for installation at AECL's Chalk River Laboratories (CRL). This plant design was based on the Combined Electrolysis and Catalytic Exchange (CECE) technology developed at CRL over many years and demonstrated there and elsewhere. The general features and capabilities of this design have been reported as well as the versatility of the design for separating any pair of the three hydrogen isotopes. The same CECE technology could be applied directly to very large-scale wastewater detritiation, such as the case at Fukushima Daiichi Nuclear Power Station. However, since the CECE process scales linearly with throughput, the required capital and operating costs are substantial for such large-scale applications. This paper discusses some options for reducing the costs of very large-scale detritiation. Options include: Reducing tritium removal effectiveness; Energy recovery; Improving the tolerance of impurities; Use of less expensive or more efficient equipment. A brief comparison with alternative processes is also presented. (author)

  4. Multi-approaches analysis reveals local adaptation in the emmer wheat (Triticum dicoccoides) at macro- but not micro-geographical scale.

    Science.gov (United States)

    Volis, Sergei; Ormanbekova, Danara; Yermekbayev, Kanat; Song, Minshu; Shulgina, Irina

    2015-01-01

Detecting local adaptation and its spatial scale is one of the most important questions of evolutionary biology. However, recognition of the effect of local selection can be challenging when there is considerable environmental variation across distances within the whole species range. We analyzed patterns of local adaptation in emmer wheat, Triticum dicoccoides, at two spatial scales, small (inter-population distance less than one km) and large (inter-population distance more than 50 km), using several approaches. Plants originating from four distinct habitats at two geographic scales (cold edge, arid edge and two topographically dissimilar core locations) were reciprocally transplanted and their success over time was measured as 1) lifetime fitness in the year of planting, and 2) population growth four years after planting. In addition, we analyzed molecular (SSR) and quantitative trait variation and calculated the QST/FST ratio. No home advantage was detected at the small spatial scale. At the large spatial scale, home advantage was detected for the core population and the cold edge population in the year of introduction via measuring lifetime plant performance. However, superior performance of the arid edge population in its own environment was evident only after several generations, by measuring experimental population growth rate through genotyping with SSRs, which allowed counting the number of plants and seeds per introduced genotype per site. These results highlight the importance of multi-generation surveys of population growth rate in local adaptation testing. Despite predominant self-fertilization of T. dicoccoides and the associated high degree of structuring of genetic variation, the results of the QST-FST comparison were in general agreement with the pattern of local adaptation at the two spatial scales detected by reciprocal transplanting.

  5. Structural Quality of Service in Large-Scale Networks

    DEFF Research Database (Denmark)

    Pedersen, Jens Myrup

Digitalization has created the base for co-existence and convergence in communications, leading to an increasing use of multi service networks. This is for example seen in the Fiber To The Home implementations, where a single fiber is used for virtually all means of communication, including TV, telephony and data. To meet the requirements of the different applications, and to handle the increased vulnerability to failures, the ability to design robust networks providing good Quality of Service is crucial. However, most planning of large-scale networks today is ad-hoc based, leading to highly complex networks lacking predictability and global structural properties. The thesis applies the concept of Structural Quality of Service to formulate desirable global properties, and it shows how regular graph structures can be used to obtain such properties.

  6. A rate-dependent multi-scale crack model for concrete

    NARCIS (Netherlands)

    Karamnejad, A.; Nguyen, V.P.; Sluys, L.J.

    2013-01-01

    A multi-scale numerical approach for modeling cracking in heterogeneous quasi-brittle materials under dynamic loading is presented. In the model, a discontinuous crack model is used at macro-scale to simulate fracture and a gradient-enhanced damage model has been used at meso-scale to simulate

  7. Large scale Brownian dynamics of confined suspensions of rigid particles

    Science.gov (United States)

    Sprinkle, Brennan; Balboa Usabiaga, Florencio; Patankar, Neelesh A.; Donev, Aleksandar

    2017-12-01

    We introduce methods for large-scale Brownian Dynamics (BD) simulation of many rigid particles of arbitrary shape suspended in a fluctuating fluid. Our method adds Brownian motion to the rigid multiblob method [F. Balboa Usabiaga et al., Commun. Appl. Math. Comput. Sci. 11(2), 217-296 (2016)] at a cost comparable to the cost of deterministic simulations. We demonstrate that we can efficiently generate deterministic and random displacements for many particles using preconditioned Krylov iterative methods, if kernel methods to efficiently compute the action of the Rotne-Prager-Yamakawa (RPY) mobility matrix and its "square" root are available for the given boundary conditions. These kernel operations can be computed with near linear scaling for periodic domains using the positively split Ewald method. Here we study particles partially confined by gravity above a no-slip bottom wall using a graphical processing unit implementation of the mobility matrix-vector product, combined with a preconditioned Lanczos iteration for generating Brownian displacements. We address a major challenge in large-scale BD simulations, capturing the stochastic drift term that arises because of the configuration-dependent mobility. Unlike the widely used Fixman midpoint scheme, our methods utilize random finite differences and do not require the solution of resistance problems or the computation of the action of the inverse square root of the RPY mobility matrix. We construct two temporal schemes which are viable for large-scale simulations, an Euler-Maruyama traction scheme and a trapezoidal slip scheme, which minimize the number of mobility problems to be solved per time step while capturing the required stochastic drift terms. We validate and compare these schemes numerically by modeling suspensions of boomerang-shaped particles sedimented near a bottom wall. 
Using the trapezoidal scheme, we investigate the steady-state active motion in dense suspensions of confined microrollers, whose
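The schemes above hinge on capturing the stochastic drift arising from a configuration-dependent mobility via random finite differences (RFD). A hedged 1-D sketch of that idea (the scalar mobility law `mu(x)` is a made-up illustration, not the RPY mobility, and none of the paper's rigid multiblob machinery is reproduced):

```python
import numpy as np

kT = 1.0  # thermal energy; units chosen so kT = 1

def mu(x):
    # Hypothetical position-dependent mobility (e.g. hindrance near a wall);
    # illustrative only, not the RPY mobility used in the paper.
    return 1.0 / (1.0 + np.exp(-x))

def rfd_drift(x, rng, delta=1e-4):
    """Random finite difference estimate of the stochastic drift kT * mu'(x):
    E[(mu(x + delta*w/2) - mu(x - delta*w/2)) * w / delta] = mu'(x)."""
    w = rng.standard_normal(x.shape)
    return kT * (mu(x + 0.5 * delta * w) - mu(x - 0.5 * delta * w)) * w / delta

def em_step(x, force, dt, rng):
    """One Euler-Maruyama step with configuration-dependent mobility."""
    noise = rng.standard_normal(x.shape)
    return (x
            + mu(x) * force(x) * dt              # deterministic motion
            + rfd_drift(x, rng) * dt             # thermal (stochastic) drift
            + np.sqrt(2.0 * kT * mu(x) * dt) * noise)

rng = np.random.default_rng(1)
x = np.zeros(1000)                  # ensemble of independent 1-D particles
harmonic = lambda y: -y             # F = -k*y with k = 1 (harmonic trap)
for _ in range(8000):
    x = em_step(x, harmonic, 1e-3, rng)
# Boltzmann equilibrium is independent of mu(x): Var(x) -> kT/k = 1
print("ensemble variance:", x.var())
```

The RFD term replaces an analytic divergence of the mobility, which is exactly what makes the approach attractive when only matrix-vector products with the mobility are available.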

  8. Predicting Species Distributions Using Record Centre Data: Multi-Scale Modelling of Habitat Suitability for Bat Roosts.

    Science.gov (United States)

    Bellamy, Chloe; Altringham, John

    2015-01-01

    Conservation increasingly operates at the landscape scale. For this to be effective, we need landscape scale information on species distributions and the environmental factors that underpin them. Species records are becoming increasingly available via data centres and online portals, but they are often patchy and biased. We demonstrate how such data can yield useful habitat suitability models, using bat roost records as an example. We analysed the effects of environmental variables at eight spatial scales (500 m - 6 km) on roost selection by eight bat species (Pipistrellus pipistrellus, P. pygmaeus, Nyctalus noctula, Myotis mystacinus, M. brandtii, M. nattereri, M. daubentonii, and Plecotus auritus) using the presence-only modelling software MaxEnt. Modelling was carried out on a selection of 418 data centre roost records from the Lake District National Park, UK. Target group pseudoabsences were selected to reduce the impact of sampling bias. Multi-scale models, combining variables measured at their best performing spatial scales, were used to predict roosting habitat suitability, yielding models with useful predictive abilities. Small areas of deciduous woodland consistently increased roosting habitat suitability, but other habitat associations varied between species and scales. Pipistrellus were positively related to built environments at small scales, and depended on large-scale woodland availability. The other, more specialist, species were highly sensitive to human-altered landscapes, avoiding even small rural towns. The strength of many relationships at large scales suggests that bats are sensitive to habitat modifications far from the roost itself. The fine resolution, large extent maps will aid targeted decision-making by conservationists and planners. We have made available an ArcGIS toolbox that automates the production of multi-scale variables, to facilitate the application of our methods to other taxa and locations. 
Habitat suitability modelling has the

  9. Large scale network-centric distributed systems

    CERN Document Server

    Sarbazi-Azad, Hamid

    2014-01-01

    A highly accessible reference offering a broad range of topics and insights on large scale network-centric distributed systems Evolving from the fields of high-performance computing and networking, large scale network-centric distributed systems continues to grow as one of the most important topics in computing and communication and many interdisciplinary areas. Dealing with both wired and wireless networks, this book focuses on the design and performance issues of such systems. Large Scale Network-Centric Distributed Systems provides in-depth coverage ranging from ground-level hardware issu

  10. Complex modular structure of large-scale brain networks

    Science.gov (United States)

    Valencia, M.; Pastor, M. A.; Fernández-Seara, M. A.; Artieda, J.; Martinerie, J.; Chavez, M.

    2009-06-01

    Modular structure is ubiquitous among real-world networks from related proteins to social groups. Here we analyze the modular organization of brain networks at a large scale (voxel level) extracted from functional magnetic resonance imaging signals. By using a random-walk-based method, we unveil the modularity of brain webs and show modules with a spatial distribution that matches anatomical structures with functional significance. The functional role of each node in the network is studied by analyzing its patterns of inter- and intramodular connections. Results suggest that the modular architecture constitutes the structural basis for the coexistence of functional integration of distant and specialized brain areas during normal brain activities at rest.
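A random-walk-based module detection of the kind described can be sketched as follows: nodes are grouped by the similarity of their t-step random-walk distributions (a simplified Walktrap-style approach; the toy graph and parameters are illustrative, not the paper's voxel-level method):

```python
import numpy as np
from scipy.cluster.hierarchy import fcluster, linkage

def random_walk_modules(A, n_modules, steps=3):
    """Group nodes whose `steps`-step random-walk distributions are similar
    (a simplified Walktrap-style sketch)."""
    A = np.asarray(A, dtype=float)
    P = A / A.sum(axis=1, keepdims=True)   # row-stochastic transition matrix
    Pt = np.linalg.matrix_power(P, steps)  # t-step walk distributions per node
    Z = linkage(Pt, method="ward")         # agglomerate on profile distances
    return fcluster(Z, t=n_modules, criterion="maxclust")

# Toy graph: two 4-cliques joined by a single bridge edge.
A = np.zeros((8, 8))
for block in (range(4), range(4, 8)):
    for i in block:
        for j in block:
            if i != j:
                A[i, j] = 1
A[3, 4] = A[4, 3] = 1
labels = random_walk_modules(A, n_modules=2)
print(labels)
```

Nodes inside the same clique visit nearly identical neighbourhoods after a few steps, so their walk profiles cluster together, which is the intuition behind walk-based modularity detection.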

  11. Large-scale Ising-machines composed of magnetic neurons

    Science.gov (United States)

    Mizushima, Koichi; Goto, Hayato; Sato, Rie

    2017-10-01

    We propose Ising-machines composed of magnetic neurons, that is, magnetic bits in a recording track. In large-scale machines, the sizes of both neurons and synapses need to be reduced, and neat and smart connections among neurons are also required to achieve all-to-all connectivity among them. These requirements can be fulfilled by adopting magnetic recording technologies such as race-track memories and skyrmion tracks because the area of a magnetic bit is almost two orders of magnitude smaller than that of static random access memory, which has normally been used as a semiconductor neuron, and the smart connections among neurons are realized by using the read and write methods of these technologies.
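The annealing dynamics that an Ising machine implements in hardware can be emulated in software. A minimal simulated-annealing sketch on an all-to-all Ising model (the couplings, cooling schedule, and problem size are illustrative assumptions, not the proposed magnetic hardware):

```python
import numpy as np

def anneal_ising(J, h, steps=20000, T0=2.0, T1=0.01, seed=0):
    """Single-spin-flip simulated annealing on an Ising model with
    energy E(s) = -0.5 * s.J.s - h.s (J symmetric, zero diagonal)."""
    rng = np.random.default_rng(seed)
    n = len(h)
    s = rng.choice([-1, 1], size=n)
    for k in range(steps):
        T = T0 * (T1 / T0) ** (k / steps)      # geometric cooling schedule
        i = rng.integers(n)
        dE = 2 * s[i] * (J[i] @ s + h[i])      # energy cost of flipping spin i
        if dE <= 0 or rng.random() < np.exp(-dE / T):
            s[i] = -s[i]
    return s

# Toy problem: ferromagnetic all-to-all couplings favour fully aligned spins.
n = 16
J = np.ones((n, n)) - np.eye(n)
h = np.zeros(n)
s = anneal_ising(J, h)
print(s)
```

All-to-all connectivity is exactly the requirement the abstract highlights; here it is trivial in software, whereas the proposed racetrack/skyrmion implementations aim to achieve it physically.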

  12. Investigating the Role of Large-Scale Domain Dynamics in Protein-Protein Interactions

    Directory of Open Access Journals (Sweden)

    Elise Delaforge

    2016-09-01

    Full Text Available Intrinsically disordered linkers provide multi-domain proteins with degrees of conformational freedom that are often essential for function. These highly dynamic assemblies represent a significant fraction of all proteomes, and deciphering the physical basis of their interactions represents a considerable challenge. Here we describe the difficulties associated with mapping the large-scale domain dynamics and describe two recent examples where solution state methods, in particular NMR spectroscopy, are used to investigate conformational exchange on very different timescales.

  13. Computed tomography whole body imaging in multi-trauma: 7 years experience

    Energy Technology Data Exchange (ETDEWEB)

    Sampson, M.A. [Southampton General Hospital, Southampton (United Kingdom)]. E-mail: msampson@doctors.org.uk; Colquhoun, K.B.M. [Southampton General Hospital, Southampton (United Kingdom); Hennessy, N.L.M. [Southampton General Hospital, Southampton (United Kingdom)

    2006-04-15

AIM: To assess the impact of the introduction of a computed tomography (CT) imaging protocol for multi-trauma patients on the workload, overall diagnostic yield, and effect on detection of cervical spine injury and pneumothorax. METHOD: Between February 1997 and April 2004, all patients presenting acutely to the Emergency Department (ED) with haemodynamically stable trauma (Abbreviated Injury Scale 3 or more) involving more than two body systems were imaged with a comprehensive pre-set helical CT protocol (including non-contrast head, cervical spine: cranio-cervical and cervico-thoracic junctions; and oral and intravenous contrast-enhanced thoracic, abdomen and pelvis) after initial triage and a standard trauma series of radiographs (chest, lateral C-spine and pelvis). Diagnosis of cervical spine fracture and pneumothorax was noted before and after the CT protocol was carried out and findings from all studies were recorded prospectively. RESULTS: Over the 7-year period 296 multi-trauma CT studies were completed of which 41 (13.8%) were negative. Of the positive cases there were 127 (43%) head injuries; 25 cervical spine fractures (8%); 66 pelvic fractures (22%); 48 thoracic or lumbar spine fractures (16%); 97 pneumothoraces (33%); 22 mediastinal injuries (7%) and 49 intra-abdominal injuries (17%) with 19 (6%) splenic tears/ruptures. Positive findings included many unsuspected injuries, including 19 cervical spine fractures which were not demonstrated on the standard lateral radiograph from the resuscitation room. Of the 97 CT detected pneumothoraces, 12 were bilateral, 52 already had a chest drain in situ and 36 were not detected on initial supine chest radiography in the resuscitation room. One undetected case had bilateral tension pneumothoraces that were promptly drained on the CT table. Only three patients did not complete their multi-trauma examination because of deterioration in clinical condition and these were all immediately returned to the resuscitation

  14. Computed tomography whole body imaging in multi-trauma: 7 years experience

    International Nuclear Information System (INIS)

    Sampson, M.A.; Colquhoun, K.B.M.; Hennessy, N.L.M.

    2006-01-01

AIM: To assess the impact of the introduction of a computed tomography (CT) imaging protocol for multi-trauma patients on the workload, overall diagnostic yield, and effect on detection of cervical spine injury and pneumothorax. METHOD: Between February 1997 and April 2004, all patients presenting acutely to the Emergency Department (ED) with haemodynamically stable trauma (Abbreviated Injury Scale 3 or more) involving more than two body systems were imaged with a comprehensive pre-set helical CT protocol (including non-contrast head, cervical spine: cranio-cervical and cervico-thoracic junctions; and oral and intravenous contrast-enhanced thoracic, abdomen and pelvis) after initial triage and a standard trauma series of radiographs (chest, lateral C-spine and pelvis). Diagnosis of cervical spine fracture and pneumothorax was noted before and after the CT protocol was carried out and findings from all studies were recorded prospectively. RESULTS: Over the 7-year period 296 multi-trauma CT studies were completed of which 41 (13.8%) were negative. Of the positive cases there were 127 (43%) head injuries; 25 cervical spine fractures (8%); 66 pelvic fractures (22%); 48 thoracic or lumbar spine fractures (16%); 97 pneumothoraces (33%); 22 mediastinal injuries (7%) and 49 intra-abdominal injuries (17%) with 19 (6%) splenic tears/ruptures. Positive findings included many unsuspected injuries, including 19 cervical spine fractures which were not demonstrated on the standard lateral radiograph from the resuscitation room. Of the 97 CT detected pneumothoraces, 12 were bilateral, 52 already had a chest drain in situ and 36 were not detected on initial supine chest radiography in the resuscitation room. One undetected case had bilateral tension pneumothoraces that were promptly drained on the CT table. Only three patients did not complete their multi-trauma examination because of deterioration in clinical condition and these were all immediately returned to the resuscitation

  15. Concurrent Validity and Feasibility of Short Tests Currently Used to Measure Early Childhood Development in Large Scale Studies.

    Directory of Open Access Journals (Sweden)

    Marta Rubio-Codina

Full Text Available In low- and middle-income countries (LMICs), measuring early childhood development (ECD) with standard tests in large scale surveys and evaluations of interventions is difficult and expensive. Multi-dimensional screeners and single-domain tests ('short tests') are frequently used as alternatives. However, their validity in these circumstances is unknown. We examined the feasibility, reliability, and concurrent validity of three multi-dimensional screeners (Ages and Stages Questionnaires (ASQ-3), Denver Developmental Screening Test (Denver-II), Battelle Developmental Inventory screener (BDI-2)) and two single-domain tests (MacArthur-Bates Short-Forms (SFI and SFII), WHO Motor Milestones (WHO-Motor)) in 1,311 children 6-42 months in Bogota, Colombia. The scores were compared with those on the Bayley Scales of Infant and Toddler Development (Bayley-III), taken as the 'gold standard'. The Bayley-III was given at a center by psychologists, whereas the short tests were administered in the home by interviewers, as in a survey setting. Findings indicated good internal validity of all short tests except the ASQ-3. The BDI-2 took a long time to administer and was expensive, while the single-domain tests were quickest and cheapest and the Denver-II and ASQ-3 were intermediate. Concurrent validity of the multi-dimensional tests' cognitive, language, and fine motor scales with the corresponding Bayley-III scale was low below 19 months. However, it increased with age, becoming moderate-to-high over 30 months. In contrast, gross motor scales' concurrence was high under 19 months and then decreased. Of the single-domain tests, the WHO-Motor had high validity with gross motor under 16 months, and the SFI and SFII expressive scales showed moderate correlations with language under 30 months. Overall, the Denver-II was the most feasible and valid multi-dimensional test and the ASQ-3 performed poorly under 31 months. By domain, gross motor development had the highest concurrence

16. Development of multi-dimensional body image scale for Malaysian female adolescents.

    Science.gov (United States)

    Chin, Yit Siew; Taib, Mohd Nasir Mohd; Shariff, Zalilah Mohd; Khor, Geok Lin

    2008-01-01

    The present study was conducted to develop a Multi-dimensional Body Image Scale for Malaysian female adolescents. Data were collected among 328 female adolescents from a secondary school in Kuantan district, state of Pahang, Malaysia by using a self-administered questionnaire and anthropometric measurements. The self-administered questionnaire comprised multiple measures of body image, Eating Attitude Test (EAT-26; Garner & Garfinkel, 1979) and Rosenberg Self-esteem Inventory (Rosenberg, 1965). The 152 items from selected multiple measures of body image were examined through factor analysis and for internal consistency. Correlations between Multi-dimensional Body Image Scale and body mass index (BMI), risk of eating disorders and self-esteem were assessed for construct validity. A seven factor model of a 62-item Multi-dimensional Body Image Scale for Malaysian female adolescents with construct validity and good internal consistency was developed. The scale encompasses 1) preoccupation with thinness and dieting behavior, 2) appearance and body satisfaction, 3) body importance, 4) muscle increasing behavior, 5) extreme dieting behavior, 6) appearance importance, and 7) perception of size and shape dimensions. Besides, a multidimensional body image composite score was proposed to screen negative body image risk in female adolescents. The result found body image was correlated with BMI, risk of eating disorders and self-esteem in female adolescents. In short, the present study supports a multi-dimensional concept for body image and provides a new insight into its multi-dimensionality in Malaysian female adolescents with preliminary validity and reliability of the scale. The Multi-dimensional Body Image Scale can be used to identify female adolescents who are potentially at risk of developing body image disturbance through future intervention programs.
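Internal consistency of a scale like this is commonly summarized by Cronbach's alpha. A sketch with synthetic single-factor Likert-style responses (the data are simulated for illustration, not the study's):

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha: k/(k-1) * (1 - sum of item variances / variance of the total score)."""
    items = np.asarray(items, dtype=float)   # shape (respondents, items)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_vars.sum() / total_var)

# Synthetic Likert-style responses driven by one latent trait (illustrative).
rng = np.random.default_rng(0)
trait = rng.normal(size=500)
items = np.clip(np.round(3 + trait[:, None] + 0.8 * rng.normal(size=(500, 6))), 1, 5)
print(f"alpha = {cronbach_alpha(items):.2f}")
```

In practice alpha is reported per subscale (e.g. for each of the seven factors above), since pooling unrelated dimensions inflates or deflates the statistic.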

  17. Sensitivity analysis for missing dichotomous outcome data in multi-visit randomized clinical trial with randomization-based covariance adjustment.

    Science.gov (United States)

    Li, Siying; Koch, Gary G; Preisser, John S; Lam, Diana; Sanchez-Kam, Matilde

    2017-01-01

    Dichotomous endpoints in clinical trials have only two possible outcomes, either directly or via categorization of an ordinal or continuous observation. It is common to have missing data for one or more visits during a multi-visit study. This paper presents a closed form method for sensitivity analysis of a randomized multi-visit clinical trial that possibly has missing not at random (MNAR) dichotomous data. Counts of missing data are redistributed to the favorable and unfavorable outcomes mathematically to address possibly informative missing data. Adjusted proportion estimates and their closed form covariance matrix estimates are provided. Treatment comparisons over time are addressed with Mantel-Haenszel adjustment for a stratification factor and/or randomization-based adjustment for baseline covariables. The application of such sensitivity analyses is illustrated with an example. An appendix outlines an extension of the methodology to ordinal endpoints.
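The redistribution idea can be sketched directly: a sensitivity parameter specifies what fraction of the missing counts is treated as unfavorable, and the adjusted proportion is recomputed across a range of such assumptions (the counts and parameter grid are hypothetical; the paper's closed form covariance estimates and Mantel-Haenszel adjustments are not reproduced):

```python
import numpy as np

def adjusted_proportion(n_fav, n_unfav, n_miss, pi_unfav):
    """Redistribute missing counts under an MNAR assumption: a fraction
    `pi_unfav` of the missing outcomes is counted as unfavorable,
    and the remainder as favorable."""
    fav = n_fav + (1 - pi_unfav) * n_miss
    n = n_fav + n_unfav + n_miss
    return fav / n

# Hypothetical arm: 60 favorable, 30 unfavorable, 10 missing outcomes.
for pi in (0.0, 0.5, 1.0):
    print(pi, adjusted_proportion(60, 30, 10, pi))

# Sensitivity surface for a hypothetical treatment difference: vary the
# MNAR assumption independently in each arm.
grid = np.linspace(0, 1, 5)
diff = np.array([[adjusted_proportion(60, 30, 10, a)
                  - adjusted_proportion(50, 40, 10, b)
                  for b in grid] for a in grid])
```

If the estimated treatment difference keeps its sign over the whole grid, the conclusion is robust to the informative-missingness assumption.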

  18. The Transition to Large-scale Cosmic Homogeneity in the WiggleZ Dark Energy Survey

    Science.gov (United States)

Scrimgeour, Morag; Davis, T.; Blake, C.; James, B.; Poole, G. B.; Staveley-Smith, L.; WiggleZ Dark Energy Survey

    2013-01-01

The most fundamental assumption of the standard cosmological model (ΛCDM) is that the universe is homogeneous on large scales. This is clearly not true on small scales, where clusters and voids exist, and some studies seem to suggest that galaxies follow a fractal distribution up to very large scales (200 h-1 Mpc or more), whereas the ΛCDM model predicts a transition to homogeneity at scales of ~100 h-1 Mpc. Any cosmological measurements made below the scale of homogeneity (such as the power spectrum) could be misleading, so it is crucial to measure the scale of homogeneity in the Universe. We have used the WiggleZ Dark Energy Survey to make the largest volume measurement to date of the transition to homogeneity in the galaxy distribution. WiggleZ is a UV-selected spectroscopic survey of ~200,000 luminous blue galaxies up to z=1, made with the Anglo-Australian Telescope. We have corrected for survey incompleteness using random catalogues that account for the various survey selection criteria, and tested the robustness of our results using a suite of fractal mock catalogues. The large volume and depth of WiggleZ allows us to probe the transition of the galaxy distribution to homogeneity on large scales and over several epochs, and see if this is consistent with a ΛCDM prediction.
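The transition to homogeneity is typically diagnosed through the correlation dimension D2 of counts-in-spheres, which approaches 3 when the distribution becomes homogeneous. A sketch on a uniform mock catalogue (survey selection, incompleteness corrections and the fractal mocks of the study are not modelled):

```python
import numpy as np

rng = np.random.default_rng(2)
pts = rng.uniform(0.0, 100.0, size=(5000, 3))          # homogeneous mock catalogue
inner = pts[(pts > 30).all(axis=1) & (pts < 70).all(axis=1)]
centres = inner[:100]                                  # keep spheres inside the box

# Squared distances from each centre to every point (self included).
d2 = ((pts[None, :, :] - centres[:, None, :]) ** 2).sum(axis=-1)

radii = np.array([5.0, 10.0, 15.0, 20.0, 25.0])
# Mean count-in-sphere N(<r), excluding the centre point itself.
counts = np.array([(d2 < r * r).sum(axis=1).mean() - 1.0 for r in radii])

# Correlation dimension D2 = d ln N(<r) / d ln r; homogeneity gives D2 -> 3.
D2 = np.polyfit(np.log(radii), np.log(counts), 1)[0]
print(f"D2 = {D2:.2f}")
```

A fractal distribution would give D2 below 3 at all probed radii, which is the signature the survey tests against.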

  19. Localization Algorithm Based on a Spring Model (LASM) for Large Scale Wireless Sensor Networks

    Directory of Open Access Journals (Sweden)

    Shuai Li

    2008-03-01

Full Text Available A navigation method for a lunar rover based on large scale wireless sensor networks is proposed. To obtain high navigation accuracy and a large exploration area, high node localization accuracy and large network scale are required. However, the computational and communication complexity and time consumption are greatly increased with the increase of the network scales. A localization algorithm based on a spring model (LASM) method is proposed to reduce the computational complexity, while maintaining the localization accuracy in large scale sensor networks. The algorithm simulates the dynamics of a physical spring system to estimate the positions of nodes. The sensor nodes are set as particles with masses and connected with neighbor nodes by virtual springs. The virtual springs will force the particles to move to the original positions, the node positions correspondingly, from the randomly set positions. Therefore, a blind node position can be determined from the LASM algorithm by calculating the related forces with the neighbor nodes. The computational and communication complexity are O(1) for each node, since the number of the neighbor nodes does not increase proportionally with the network scale size. Three patches are proposed to avoid local optimization, kick out bad nodes and deal with node variation. Simulation results show that the computational and communication complexity are almost constant despite the increase of the network scale size. The time consumption has also been proven to remain almost constant since the calculation steps are almost unrelated with the network scale size.
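The spring dynamics described can be sketched for a single blind node ranging to a few known anchors: each measured distance sets a virtual spring's rest length, and the position estimate follows the net spring force (the geometry, step size and iteration count are illustrative; the paper's three patches are not implemented):

```python
import numpy as np

def lasm_localize(anchors, blind_init, dists, iters=3000, step=0.05):
    """Spring-model localization sketch: one blind node connected by
    virtual springs (rest length = measured distance) to known anchors.
    The net spring force pulls the randomly initialized position toward
    the true one; we follow it with damped (overdamped) updates."""
    x = np.array(blind_init, dtype=float)
    for _ in range(iters):
        force = np.zeros(2)
        for a, d in zip(anchors, dists):
            delta = x - a
            r = np.linalg.norm(delta)
            if r > 0:
                force += -(r - d) * delta / r   # Hooke's law along the link
        x += step * force
    return x

anchors = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0], [10.0, 10.0]])
true = np.array([3.0, 7.0])
dists = np.linalg.norm(anchors - true, axis=1)   # noise-free range measurements
est = lasm_localize(anchors, blind_init=[9.0, 1.0], dists=dists)
print(est)
```

Because each node only needs forces from its own neighbors, the per-node cost stays constant as the network grows, which is the O(1) property the abstract claims.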

  20. Large-Scale Outflows in Seyfert Galaxies

    Science.gov (United States)

    Colbert, E. J. M.; Baum, S. A.

    1995-12-01

    \\catcode`\\@=11 \\ialign{m @th#1hfil ##hfil \\crcr#2\\crcr\\sim\\crcr}}} \\catcode`\\@=12 Highly collimated outflows extend out to Mpc scales in many radio-loud active galaxies. In Seyfert galaxies, which are radio-quiet, the outflows extend out to kpc scales and do not appear to be as highly collimated. In order to study the nature of large-scale (>~1 kpc) outflows in Seyferts, we have conducted optical, radio and X-ray surveys of a distance-limited sample of 22 edge-on Seyfert galaxies. Results of the optical emission-line imaging and spectroscopic survey imply that large-scale outflows are present in >~{{1} /{4}} of all Seyferts. The radio (VLA) and X-ray (ROSAT) surveys show that large-scale radio and X-ray emission is present at about the same frequency. Kinetic luminosities of the outflows in Seyferts are comparable to those in starburst-driven superwinds. Large-scale radio sources in Seyferts appear diffuse, but do not resemble radio halos found in some edge-on starburst galaxies (e.g. M82). We discuss the feasibility of the outflows being powered by the active nucleus (e.g. a jet) or a circumnuclear starburst.

  1. HSTDEK: Developing a methodology for construction of large-scale, multi-use knowledge bases

    Science.gov (United States)

    Freeman, Michael S.

    1987-01-01

The primary research objectives of the Hubble Space Telescope Design/Engineering Knowledgebase (HSTDEK) are to develop a methodology for constructing and maintaining large scale knowledge bases which can be used to support multiple applications. To ensure the validity of its results, this research is being pursued in the context of a real world system, the Hubble Space Telescope. The HSTDEK objectives are described in detail. The history and motivation of the project are briefly described. The technical challenges faced by the project are outlined.

  2. Multi-scale diffuse interface modeling of multi-component two-phase flow with partial miscibility

    Science.gov (United States)

    Kou, Jisheng; Sun, Shuyu

    2016-08-01

In this paper, we introduce a diffuse interface model to simulate multi-component two-phase flow with partial miscibility based on a realistic equation of state (e.g. the Peng-Robinson equation of state). Because of partial miscibility, thermodynamic relations are used to model not only interfacial properties but also bulk properties, including density, composition, pressure, and realistic viscosity. As far as we know, this is the first effort to use diffuse interface modeling based on an equation of state for multi-component two-phase flow with partial miscibility. In numerical simulation, the key issue is to resolve the high contrast of scales from the microscopic interface composition to the macroscale bulk fluid motion, since the interface has only a nanoscale thickness. To efficiently solve this challenging problem, we develop a multi-scale simulation method. At the microscopic scale, we deduce a reduced interfacial equation under reasonable assumptions, and then we propose a formulation of capillary pressure which is consistent with the macroscale flow equations. Moreover, we show that the Young-Laplace equation is an approximation of this capillarity formulation, and this formulation is also consistent with the concept of the Tolman length, which is a correction of the Young-Laplace equation. At the macroscopic scale, the interfaces are treated as discontinuous surfaces separating two phases of fluids. Our approach differs from the conventional sharp-interface two-phase flow model in that we use the capillary pressure directly instead of a combination of surface tension and the Young-Laplace equation, because capillarity can be calculated from our proposed capillarity formulation. A compatible condition is also derived for the pressure in the flow equations. Furthermore, based on the proposed capillarity formulation, we design an efficient numerical method for directly computing the capillary pressure between two fluids composed of multiple components.
Finally, numerical tests

  3. Multi-scale diffuse interface modeling of multi-component two-phase flow with partial miscibility

    KAUST Repository

    Kou, Jisheng

    2016-05-10

In this paper, we introduce a diffuse interface model to simulate multi-component two-phase flow with partial miscibility based on a realistic equation of state (e.g., the Peng-Robinson equation of state). Because of partial miscibility, thermodynamic relations are used to model not only interfacial properties but also bulk properties, including density, composition, pressure, and realistic viscosity. To the best of our knowledge, this is the first use of diffuse interface modeling based on an equation of state for multi-component two-phase flow with partial miscibility. In numerical simulation, the key issue is to resolve the high contrast of scales from the microscopic interface composition to the macroscale bulk fluid motion, since the interface is only nanometers thick. To solve this challenging problem efficiently, we develop a multi-scale simulation method. At the microscopic scale, we deduce a reduced interfacial equation under reasonable assumptions, and then propose a formulation of capillary pressure that is consistent with the macroscale flow equations. Moreover, we show that the Young-Laplace equation is an approximation of this capillarity formulation, and that the formulation is also consistent with the concept of the Tolman length, which is a correction to the Young-Laplace equation. At the macroscopic scale, the interfaces are treated as discontinuous surfaces separating the two fluid phases. Our approach differs from conventional sharp-interface two-phase flow models in that we use the capillary pressure directly, instead of combining surface tension with the Young-Laplace equation, because capillarity can be calculated from our proposed formulation. A compatible condition is also derived for the pressure in the flow equations. Furthermore, based on the proposed capillarity formulation, we design an efficient numerical method for directly computing the capillary pressure between two fluids composed of multiple components.
Finally, numerical tests
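The capillarity discussion above can be made concrete with the two standard relations it references (notation ours, not necessarily the authors'): the Young-Laplace equation for a droplet of radius R, and the Tolman correction for the curvature dependence of surface tension. The abstract states that the Young-Laplace equation is an approximation of the proposed capillarity formulation, with the Tolman length as a correction:

```latex
p_c \;=\; \frac{2\,\sigma}{R},
\qquad
\sigma(R) \;=\; \sigma_\infty\!\left(1 - \frac{2\,\delta}{R} + \cdots\right),
```

where \(\sigma_\infty\) is the planar surface tension and \(\delta\) is the Tolman length.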

  4. Classification of high-resolution remote sensing images based on multi-scale superposition

    Science.gov (United States)

    Wang, Jinliang; Gao, Wenjie; Liu, Guangjie

    2017-07-01

Landscape structures and processes at different scales show different characteristics. In the study of specific target landmarks, the most appropriate image scale can be attained by scale conversion, which improves the accuracy and efficiency of feature identification and classification. In this paper, the authors carried out multi-scale classification experiments, taking the Shangri-La area in northwestern Yunnan Province as the research area and images from SPOT5 HRG and the GF-1 satellite as data sources. First, the authors upscaled the two images by cubic convolution and calculated the optimal scale for different ground objects shown in the images using variogram functions. They then performed multi-scale superposition classification by maximum likelihood and evaluated the classification accuracy. The results indicate that: (1) for most ground objects, the optimal scale is larger than the original one. Specifically, water has the largest optimal scale, around 25-30 m; farmland, grassland, brushwood, roads, settlements, and woodland follow at 20-24 m. The optimal scales for shadows and flood land are essentially the same as the original ones, i.e., 8 m and 10 m respectively. (2) Regarding the classification of the multi-scale superposed images, the overall accuracy of those from SPOT5 HRG and GF-1 is 12.84% and 14.76% higher, respectively, than that of the original multi-spectral images, and the Kappa coefficient is 0.1306 and 0.1419 higher, respectively. Hence, the multi-scale superposition classification applied in the research area can enhance the classification accuracy of remote sensing images.
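As a hedged illustration of the two steps this abstract describes (cubic-convolution upscaling, then a variogram to locate the optimal scale), the following is a minimal sketch using SciPy; the function names and the row-wise lag scheme are our own simplifications, not the authors' implementation:

```python
import numpy as np
from scipy.ndimage import zoom

def upscale_cubic(img, factor):
    """Resample a 2D image to a coarser grid by cubic convolution (factor > 1 coarsens)."""
    return zoom(img, 1.0 / factor, order=3)

def semivariogram(img, max_lag):
    """Empirical semivariance along image rows for pixel lags 1..max_lag.

    The lag at which the curve levels off (the range) is one common way to
    pick a characteristic scale for a ground-object class.
    """
    gammas = []
    for h in range(1, max_lag + 1):
        diff = img[:, h:] - img[:, :-h]          # pairs of pixels h apart
        gammas.append(0.5 * float(np.mean(diff ** 2)))
    return np.array(gammas)
```

In practice the variogram would be computed per class and per candidate resampling scale, and the scale minimizing within-class variability selected before the maximum-likelihood classification step.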

  5. Bright is the New Black - Multi-Year Performance of Generic High-Albedo Roofs in an Urban Climate

    Science.gov (United States)

    Gaffin, S. R.; Imhoff, M.; Rosenzweig, C.; Khanbilvardi, R.; Pasqualini, A.; Kong, A. Y. Y.; Grillo, D.; Freed, A.; Hillel, D.; Hartung, E.

    2012-01-01

High-albedo white and cool roofing membranes are recognized as a fundamental strategy that dense urban areas can deploy on a large scale, at low cost, to mitigate the urban heat island effect. We are monitoring three generic white membranes within New York City that represent a cross-section of the dominant white membrane options for U.S. flat roofs: (1) an ethylene propylene diene monomer (EPDM) rubber membrane; (2) a thermoplastic polyolefin (TPO) membrane; and (3) an asphaltic multi-ply built-up membrane coated with white elastomeric acrylic paint. The paint product is being used by New York City's government for the first major urban albedo enhancement program in its history. We report on the temperature and related albedo performance of these three membranes at three different sites over a multi-year period. The results indicate that the professionally installed white membranes are maintaining their temperature control effectively and are meeting the Energy Star Cool Roofing performance standards requiring a three-year aged albedo above 0.50. The EPDM membrane, however, shows evidence of low emissivity. The painted asphaltic surface shows high emissivity but lost about half of its initial albedo within two years after installation. Given that the acrylic approach is an important "do-it-yourself," low-cost retrofit technique, and, as such, offers the most rapid route to increasing urban albedo, further product performance research is recommended to identify conditions that optimize its long-term albedo control. Even so, its current multi-year performance still represents a significant albedo enhancement for urban heat island mitigation.

  6. A Large-scale Plume in an X-class Solar Flare

    Energy Technology Data Exchange (ETDEWEB)

    Fleishman, Gregory D.; Nita, Gelu M.; Gary, Dale E. [Physics Department, Center for Solar-Terrestrial Research, New Jersey Institute of Technology Newark, NJ, 07102-1982 (United States)

    2017-08-20

Ever-increasing multi-frequency imaging of solar observations suggests that solar flares often involve more than one magnetic fluxtube. Some of the fluxtubes are closed, while others can contain open fields. The relative proportion of nonthermal electrons among these distinct loops is highly important for understanding energy release, particle acceleration, and transport. The access of nonthermal electrons to the open field is also important because the open field facilitates solar energetic particle (SEP) escape from the flaring site, and thus controls the SEP fluxes in the solar system, both directly and as seed particles for further acceleration. The large-scale fluxtubes are often filled with a tenuous plasma, which is difficult to detect in either EUV or X-ray wavelengths; however, they can dominate at low radio frequencies, where a modest component of nonthermal electrons can render the source optically thick and, thus, bright enough to be observed. Here we report the detection of a large-scale “plume” at the impulsive phase of an X-class solar flare, SOL2001-08-25T16:23, using multi-frequency radio data from the Owens Valley Solar Array. To quantify the flare’s spatial structure, we employ 3D modeling utilizing force-free-field extrapolations from line-of-sight SOHO/MDI magnetograms with our modeling tool GX-Simulator. We found that a significant fraction of the nonthermal electrons accelerated at the flare site low in the corona escape to the plume, which contains both closed and open fields. We propose that the proportion between the closed and open fields at the plume is what determines the SEP population escaping into interplanetary space.

  7. Large-scale hydrogen production using nuclear reactors

    Energy Technology Data Exchange (ETDEWEB)

    Ryland, D.; Stolberg, L.; Kettner, A.; Gnanapragasam, N.; Suppiah, S. [Atomic Energy of Canada Limited, Chalk River, ON (Canada)

    2014-07-01

For many years, Atomic Energy of Canada Limited (AECL) has been studying the feasibility of using nuclear reactors, such as the Supercritical Water-cooled Reactor, as an energy source for large scale hydrogen production processes such as High Temperature Steam Electrolysis and the Copper-Chlorine thermochemical cycle. Recent progress includes the augmentation of AECL's experimental capabilities by the construction of experimental systems to test high temperature steam electrolysis button cells at ambient pressure and temperatures up to 850 °C and CuCl/HCl electrolysis cells at pressures up to 7 bar and temperatures up to 100 °C. In parallel, detailed models of solid oxide electrolysis cells and the CuCl/HCl electrolysis cell are being refined and validated using experimental data. Process models are also under development to assess options for economic integration of these hydrogen production processes with nuclear reactors. Options for large-scale energy storage, including hydrogen storage, are also under study. (author)

  8. Weighted Scaling in Non-growth Random Networks

    International Nuclear Information System (INIS)

    Chen Guang; Yang Xuhua; Xu Xinli

    2012-01-01

We propose a weighted model to explain the self-organized formation of the scale-free phenomenon in non-growth random networks. In this model, we use multiple-edges to represent the connections between vertices, define the weight of a multiple-edge as the total weight of all single-edges within it, and define the strength of a vertex as the sum of the weights of the multiple-edges attached to it. The network evolves according to a vertex-strength preferential selection mechanism. During the evolution process, the network holds its total numbers of vertices and single-edges constant. We show analytically and numerically that a network forms a steady scale-free distribution under this model. The results show that a weighted non-growth random network can evolve into a scale-free state. Interestingly, the network also acquires an exponential edge-weight distribution; that is, a scale-free distribution and an exponential distribution coexist.
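The evolution rule described above (fixed vertex and single-edge counts, strength-preferential reattachment) can be sketched in a few lines. This is our own illustrative reading of the mechanism, not the authors' code; in particular, we approximate vertex strength by its endpoint count and reattach one single-edge per step:

```python
import random
from collections import Counter

def evolve_network(n_vertices, n_edges, steps, seed=0):
    """Fixed-size multigraph evolving by vertex-strength preferential selection.

    At every step one single-edge is removed uniformly at random and a new one
    is attached to endpoints drawn proportionally to vertex strength, so the
    totals of vertices and single-edges stay constant throughout.
    """
    rng = random.Random(seed)
    edges = [(rng.randrange(n_vertices), rng.randrange(n_vertices))
             for _ in range(n_edges)]
    for _ in range(steps):
        edges.pop(rng.randrange(len(edges)))        # remove one single-edge
        strength = Counter()
        for u, v in edges:
            strength[u] += 1
            strength[v] += 1
        verts = list(range(n_vertices))
        weights = [strength[v] + 1 for v in verts]  # +1 keeps isolated vertices selectable
        u, = rng.choices(verts, weights)
        v, = rng.choices(verts, weights)
        edges.append((u, v))                        # preferential reattachment
    return edges
```

Running this long enough and histogramming vertex strengths is one way to check for the heavy-tailed (scale-free) distribution the abstract reports.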

  9. Multi-scale graphene patterns on arbitrary substrates via laser-assisted transfer-printing process

    KAUST Repository

    Park, J. B.

    2012-01-01

A laser-assisted transfer-printing process is developed for multi-scale graphene patterns on arbitrary substrates, using femtosecond laser scanning on a graphene/metal substrate and transfer techniques without multi-step patterning processes. The short-pulse nature of the femtosecond laser on a graphene/copper sheet enables fabrication of high-resolution graphene patterns. Thanks to its scalable, fast, direct-writing, high-resolution multi-scale, and reliable process characteristics, it can be an alternative pathway to multi-step photolithography methods for printing arbitrary graphene patterns on desired substrates. We also demonstrate transparent strain devices without expensive photomasks or multi-step patterning processes. © 2012 American Institute of Physics.

  10. Progresses in application of computational fluid dynamics methods to large scale wind turbine aerodynamics

    Institute of Scientific and Technical Information of China (English)

    Zhenyu ZHANG; Ning ZHAO; Wei ZHONG; Long WANG; Bofeng XU

    2016-01-01

    Computational fluid dynamics (CFD) methods are applied to aerodynamic problems for large scale wind turbines. The progress, including the aerodynamic analyses of wind turbine profiles, numerical flow simulation of wind turbine blades, evaluation of aerodynamic performance, and multi-objective blade optimization, is discussed. Based on the CFD methods, significant improvements are obtained in predicting the two/three-dimensional aerodynamic characteristics of wind turbine airfoils and blades, and the vortical structure in their wake flows is accurately captured. Combined with a multi-objective genetic algorithm, a 1.5 MW NH-1500 optimized blade is designed with high efficiency in wind energy conversion.

  11. SCALE INTERACTION IN A MIXING LAYER. THE ROLE OF THE LARGE-SCALE GRADIENTS

    KAUST Repository

    Fiscaletti, Daniele

    2015-08-23

    The interaction between scales is investigated in a turbulent mixing layer. The large-scale amplitude modulation of the small scales, already observed in other works, depends on the crosswise location. Large-scale positive fluctuations correlate with stronger small-scale activity on the low-speed side of the mixing layer, and with reduced activity on the high-speed side. However, from physical considerations we would expect the scales to interact in a qualitatively similar way within the flow and across different turbulent flows. Therefore, in addition to the large-scale fluctuations, the modulation of the small scales by the large-scale gradients has been investigated.

  12. Multi-scale ocean and climate drivers of widespread bleaching in the Coral Triangle

    Science.gov (United States)

    Drenkard, E.; Curchitser, E. N.; Kleypas, J. A.; Castruccio, F. S.

    2016-12-01

    The Maritime Continent is home to the Coral Triangle (CT): the global pinnacle of tropical coral biodiversity. Historically, extensive bleaching-induced mortality (caused by thermal stress) among corals in the CT has been associated with extremes of the El Niño Southern Oscillation (ENSO), particularly years when a strong El Niño transitions to a La Niña state (i.e., 1998 and 2010). Similarities in the spatial distribution of satellite-derived indices and in the multi-scale environmental drivers of elevated sea surface temperatures (SSTs) during the 1998 and 2010 bleaching events suggest a potential predictability that has important implications for reef conservation. Using numerical models and ocean and atmosphere reanalysis products, we discuss the roles of ENSO-associated anomalies in both large-scale atmospheric circulation patterns (e.g., the South Asian Monsoon) and regional ocean-cooling mechanisms such as coastal upwelling, tropical storm activity, and divergent (i.e., upwelling) circulation patterns (e.g., the Mindanao Eddy) in determining SSTs and, consequently, projected patterns of reef ecosystem vulnerability to thermal stress. Conditions associated with the recent and ongoing 2015/2016 coral bleaching and mortality will be compared and contrasted.

  13. Scrubbing Up: Multi-Scale Investigation of Woody Encroachment in a Southern African Savannah

    Directory of Open Access Journals (Sweden)

    Christopher G. Marston

    2017-04-01

    Changes in the extent of woody vegetation represent a major conservation question in many savannah systems around the globe. To address the current lack of broad-scale, cost-effective tools for land cover monitoring in complex savannah environments, we use a multi-scale approach to quantifying vegetation change in Kruger National Park (KNP), South Africa. We test whether medium spatial resolution satellite data (Landsat, existing back to the 1970s), which have pixel sizes larger than typical vegetation patches, can nevertheless capture the thematic detail required to detect woody encroachment in savannahs. We quantify vegetation change over a 13-year period in KNP, examine the changes that have occurred, assess the drivers of these changes, and compare appropriate remote sensing data sources for monitoring change. We generate land cover maps for three areas of southern KNP using very high resolution (VHR) and medium resolution satellite sensor imagery from February 2001 to 2014. Considerable land cover change has occurred, with large increases in shrubs replacing both trees and grassland. Examination of exclosure areas and potential environmental driver data suggests two mechanisms: elephant herbivory removing trees, and at least one separate mechanism responsible for the conversion of grassland to shrubs, theorised to be increasing atmospheric CO2. The combination of these mechanisms causes the novel two-directional shrub encroachment that we observe (tree loss and grassland conversion). Multi-scale comparison of classifications indicates that although spatial detail is lost when using medium resolution rather than VHR imagery for land cover classification (e.g., Landsat imagery cannot readily distinguish between tree and shrub classes, while VHR imagery can), the thematic detail contained within both VHR and medium resolution classifications is remarkably congruent. This suggests that medium resolution imagery contains sufficient
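The multi-scale congruence comparisons in studies like this typically rest on standard agreement metrics computed from a cross-tabulation of the two classifications. As a hedged illustration (our own minimal sketch, not the authors' workflow), Cohen's Kappa from a confusion matrix:

```python
def cohens_kappa(confusion):
    """Cohen's Kappa from a square confusion matrix of counts.

    Rows index classes in map A, columns index classes in map B; Kappa is
    observed agreement corrected for the agreement expected by chance.
    """
    n = float(sum(sum(row) for row in confusion))
    k = len(confusion)
    observed = sum(confusion[i][i] for i in range(k)) / n
    expected = sum(
        (sum(confusion[i]) / n) * (sum(row[i] for row in confusion) / n)
        for i in range(k)
    )
    return (observed - expected) / (1.0 - expected)
```

A Kappa of 1 indicates perfect agreement between the two maps, 0 indicates agreement no better than chance; the per-image Kappa gains quoted in record 4 above are differences of such values.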

  14. Updating Geospatial Data from Large Scale Data Sources

    Science.gov (United States)

    Zhao, R.; Chen, J.; Wang, D.; Shang, Y.; Wang, Z.; Li, X.; Ai, T.

    2011-08-01

    In the past decades, many geospatial databases have been established at national, regional, and municipal levels around the world. It is now widely recognized that keeping these established geospatial databases up to date is critical to their value, so more and more effort has been devoted to their continuous updating. Currently, there exist two main types of methods for geospatial database updating: direct updating with remote sensing images or field surveying materials, and indirect updating with other updated data, such as newly updated larger-scale data. The former is fundamental, since the update data in both methods ultimately derive from field surveying and remote sensing; the latter is often more economical and faster. Therefore, after a larger-scale database is updated, the smaller-scale database should be updated correspondingly in order to keep the consistency of the multi-scale geospatial database. In this situation, it is very reasonable to apply map generalization technology to the process of geospatial database updating. The latter is recognized as one of the most promising methods of geospatial database updating, especially in a collaborative updating environment in terms of map scale, i.e., where databases at different scales are produced and maintained separately by organizations at different levels, as in China. This paper focuses on applying digital map generalization to the updating of geospatial databases from large scale in the collaborative updating environment for SDI. The requirements for applying map generalization to spatial database updating are analyzed first. A brief review of geospatial data updating based on digital map generalization is then given.
Based on the requirements analysis and review, we analyze the key factors for implementing updating geospatial data from large scale including technical

  15. Large scale applicability of a Fully Adaptive Non-Intrusive Spectral Projection technique: Sensitivity and uncertainty analysis of a transient

    International Nuclear Information System (INIS)

    Perkó, Zoltán; Lathouwers, Danny; Kloosterman, Jan Leen; Hagen, Tim van der

    2014-01-01

    Highlights: • Grid and basis adaptive Polynomial Chaos techniques are presented for S and U analysis. • Dimensionality reduction and incremental polynomial order reduce computational costs. • An unprotected loss of flow transient is investigated in a Gas Cooled Fast Reactor. • S and U analysis is performed with MC and adaptive PC methods, for 42 input parameters. • PC accurately estimates means, variances, PDFs, sensitivities and uncertainties. - Abstract: Since the early years of reactor physics, the most prominent sensitivity and uncertainty (S and U) analysis methods in the nuclear community have been adjoint-based techniques. While these are very effective for pure neutronics problems due to the linearity of the transport equation, they become complicated when coupled non-linear systems are involved. With the continuous increase in computational power, such complicated multi-physics problems are becoming progressively tractable; hence, affordable and easily applicable S and U analysis tools have to be developed in parallel. For reactor physics problems for which adjoint methods are prohibitive, Polynomial Chaos (PC) techniques offer an attractive alternative to traditional random-sampling-based approaches. At TU Delft such PC methods have been studied for a number of years, and this paper presents a large-scale application of our Fully Adaptive Non-Intrusive Spectral Projection (FANISP) algorithm to the sensitivity and uncertainty analysis of a Gas Cooled Fast Reactor (GFR) Unprotected Loss Of Flow (ULOF) transient. The transient was simulated using the Cathare 2 code system and a fully detailed model of the GFR2400 reactor design investigated in the European FP7 GoFastR project. Several sources of uncertainty were taken into account, amounting to an unusually high number of stochastic input parameters (42), and numerous output quantities were investigated. The results show consistently good performance of the applied adaptive PC
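As background on the non-intrusive spectral projection idea (not the FANISP algorithm itself, which is adaptive and high-dimensional), a minimal one-dimensional sketch: for a standard normal input, the model response is projected onto probabilists' Hermite polynomials by Gauss quadrature, and the PC coefficients then give means and variances directly. Function names are ours:

```python
import math
import numpy as np
from numpy.polynomial import hermite_e as He

def pc_coefficients(f, order, n_quad=None):
    """Non-intrusive spectral projection of f(xi), xi ~ N(0, 1),
    onto probabilists' Hermite polynomials He_k (1D illustration only)."""
    n_quad = n_quad or 2 * order + 1
    x, w = He.hermegauss(n_quad)        # Gauss nodes/weights, weight exp(-x^2/2)
    w = w / np.sqrt(2.0 * np.pi)        # normalize to the standard normal pdf
    fx = f(x)
    coeffs = []
    for k in range(order + 1):
        he_k = He.hermeval(x, [0.0] * k + [1.0])              # He_k at the nodes
        coeffs.append(np.sum(w * fx * he_k) / math.factorial(k))  # E[He_k^2] = k!
    return np.array(coeffs)
```

The mean of the response is the zeroth coefficient and the variance is the weighted sum of squares of the rest; the adaptivity in FANISP concerns how the quadrature grid and polynomial basis are grown, which this sketch omits.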

  16. Aeroelastic Stability Investigations for Large-scale Vertical Axis Wind Turbines

    International Nuclear Information System (INIS)

    Owens, B. C. (Senior Member of Technical Staff, Analytical Structural Dynamics, Sandia National Laboratories, P.O. Box 5800, Albuquerque, NM 87185 (United States)); Griffith, D. T. (Principal Member of Technical Staff, Wind Energy Technologies, Sandia National Laboratories, P.O. Box 5800, Albuquerque, NM 87185 (United States))

    2014-01-01

    The availability of offshore wind resources in coastal regions, along with a high concentration of load centers in these areas, makes offshore wind energy an attractive opportunity for clean renewable electricity production. High infrastructure costs such as the offshore support structure and operation and maintenance costs for offshore wind technology, however, are significant obstacles that need to be overcome to make offshore wind a more cost-effective option. A vertical-axis wind turbine (VAWT) rotor configuration offers a potential transformative technology solution that significantly lowers cost of energy for offshore wind due to its inherent advantages for the offshore market. However, several potential challenges exist for VAWTs and this paper addresses one of them with an initial investigation of dynamic aeroelastic stability for large-scale, multi-megawatt VAWTs. The aeroelastic formulation and solution method from the BLade Aeroelastic STability Tool (BLAST) for HAWT blades was employed to extend the analysis capability of a newly developed structural dynamics design tool for VAWTs. This investigation considers the effect of configuration geometry, material system choice, and number of blades on the aeroelastic stability of a VAWT, and provides an initial scoping for potential aeroelastic instabilities in large-scale VAWT designs

  17. Aeroelastic Stability Investigations for Large-scale Vertical Axis Wind Turbines

    Science.gov (United States)

    Owens, B. C.; Griffith, D. T.

    2014-06-01

    The availability of offshore wind resources in coastal regions, along with a high concentration of load centers in these areas, makes offshore wind energy an attractive opportunity for clean renewable electricity production. High infrastructure costs such as the offshore support structure and operation and maintenance costs for offshore wind technology, however, are significant obstacles that need to be overcome to make offshore wind a more cost-effective option. A vertical-axis wind turbine (VAWT) rotor configuration offers a potential transformative technology solution that significantly lowers cost of energy for offshore wind due to its inherent advantages for the offshore market. However, several potential challenges exist for VAWTs and this paper addresses one of them with an initial investigation of dynamic aeroelastic stability for large-scale, multi-megawatt VAWTs. The aeroelastic formulation and solution method from the BLade Aeroelastic STability Tool (BLAST) for HAWT blades was employed to extend the analysis capability of a newly developed structural dynamics design tool for VAWTs. This investigation considers the effect of configuration geometry, material system choice, and number of blades on the aeroelastic stability of a VAWT, and provides an initial scoping for potential aeroelastic instabilities in large-scale VAWT designs.

  18. A 10-year Ground-Based Radar Climatology of Convective Penetration of Stratospheric Intrusions and Associated Large-Scale Transport over the CONUS

    Science.gov (United States)

    Homeyer, C. R.

    2017-12-01

    Deep convection reaching the upper troposphere and lower stratosphere (UTLS) and its impact on atmospheric composition through rapid vertical transport of lower troposphere air and stratosphere-troposphere exchange has received increasing attention in the past 5-10 years. Most efforts focused on convection have been directed toward storms that reach and/or penetrate the coincident environmental lapse-rate tropopause. However, convection has also been shown to reach into large-scale stratospheric intrusions (depressions of stratospheric air lying well below the lapse-rate tropopause on the cyclonic side of upper troposphere jet streams). Such convective penetration of stratospheric intrusions is not captured by studies of lapse-rate tropopause-penetrating convection. In this presentation, it will be shown using hourly, high-quality mergers of ground-based radar observations from 2004 to 2013 in the contiguous United States (CONUS) and forward large-scale trajectory analysis that convective penetration of stratospheric intrusions: 1) is more frequent than lapse-rate tropopause-penetrating convection, 2) occurs over a broader area of the CONUS than lapse-rate tropopause-penetrating convection, and 3) can influence the composition of the lower stratosphere through large-scale advection of convectively influenced air to altitudes above the lapse-rate tropopause, which we find to occur for about 8.5% of the intrusion volumes reached by convection.

  19. Cloud Detection by Fusing Multi-Scale Convolutional Features

    Science.gov (United States)

    Li, Zhiwei; Shen, Huanfeng; Wei, Yancong; Cheng, Qing; Yuan, Qiangqiang

    2018-04-01

    Cloud detection is an important pre-processing step for the accurate application of optical satellite imagery. Recent studies indicate that deep learning achieves the best performance in image segmentation tasks. Aiming at boosting the accuracy of cloud detection for multispectral imagery, especially imagery that contains only visible and near-infrared bands, in this paper we propose a deep learning based cloud detection method termed MSCN (multi-scale cloud net), which segments clouds by fusing multi-scale convolutional features. MSCN was trained on a global cloud cover validation collection and tested on more than ten types of optical images with different resolutions. Experimental results show that MSCN has obvious advantages in accuracy over the traditional multi-feature combined cloud detection method, especially in snow and other areas covered by bright non-cloud objects. Moreover, MSCN produced more detailed cloud masks than the compared deep cloud detection convolutional network. This effectiveness makes MSCN promising for practical application to multiple kinds of optical imagery.

  20. PKI security in large-scale healthcare networks

    OpenAIRE

    Mantas, G.; Lymberopoulos, D.; Komninos, N.

    2012-01-01

    During the past few years, many PKIs (Public Key Infrastructures) have been proposed for healthcare networks in order to ensure secure communication services and exchange of data among healthcare professionals. However, there is a plethora of challenges in these healthcare PKIs, especially those deployed over large-scale healthcare networks. In this paper, we propose a PKI infrastructure to ensure security in a ...

  1. Dissecting the large-scale galactic conformity

    Science.gov (United States)

    Seo, Seongu

    2018-01-01

    Galactic conformity is the observed phenomenon that galaxies located in the same region have similar properties, such as star formation rate, color, and gas fraction. The conformity was first observed among galaxies within the same halos (“one-halo conformity”). One-halo conformity can be readily explained by mutual interactions among galaxies within a halo. However, recent observations have further revealed a puzzling connection among galaxies with no direct interaction. In particular, galaxies located within a sphere of ~5 Mpc radius tend to show similarities, even though they do not share common halos with each other ("two-halo conformity" or “large-scale conformity”). Using a cosmological hydrodynamic simulation, Illustris, we investigate the physical origin of the two-halo conformity and put forward two scenarios. First, back-splash galaxies are likely responsible for the large-scale conformity: they have evolved into red galaxies due to ram-pressure stripping in a given galaxy cluster and happen to reside now within a ~5 Mpc sphere. Second, galaxies in the strong tidal field induced by large-scale structure also seem to give rise to the large-scale conformity, as the strong tides suppress star formation in these galaxies. We discuss the importance of the large-scale conformity in the context of galaxy evolution.

  2. Multi-scale habitat selection modeling: A review and outlook

    Science.gov (United States)

    Kevin McGarigal; Ho Yi Wan; Kathy A. Zeller; Brad C. Timm; Samuel A. Cushman

    2016-01-01

    Scale is the lens that focuses ecological relationships. Organisms select habitat at multiple hierarchical levels and at different spatial and/or temporal scales within each level. Failure to properly address scale dependence can result in incorrect inferences in multi-scale habitat selection modeling studies.

  3. A variational multi-scale method with spectral approximation of the sub-scales: Application to the 1D advection-diffusion equations

    KAUST Repository

    Chacón Rebollo, Tomás; Dia, Ben Mansour

    2015-01-01

    This paper introduces a variational multi-scale method in which the sub-grid scales are computed by spectral approximations. It is based on an extension of the spectral theorem to not necessarily self-adjoint elliptic operators that have an associated basis of eigenfunctions orthonormal in weighted L2 spaces. This allows the sub-grid scales to be calculated element-wise by means of the associated spectral expansion. We propose a feasible VMS-spectral method by truncating this spectral expansion to a finite number of modes. We apply this general framework to the convection-diffusion equation by analytically computing the family of eigenfunctions. We perform a convergence and error analysis, and we also present some numerical tests that show the stability of the method for an odd number of spectral modes and an improvement of accuracy in the large resolved scales due to the addition of the sub-grid spectral scales.

  4. A variational multi-scale method with spectral approximation of the sub-scales: Application to the 1D advection-diffusion equations

    KAUST Repository

    Chacón Rebollo, Tomás

    2015-03-01

    This paper introduces a variational multi-scale method where the sub-grid scales are computed by spectral approximations. It is based upon an extension of the spectral theorem to not necessarily self-adjoint elliptic operators that have an associated basis of eigenfunctions which are orthonormal in weighted L2 spaces. This allows the sub-grid scales to be calculated element-wise by means of the associated spectral expansion. We propose a feasible VMS-spectral method by truncation of this spectral expansion to a finite number of modes. We apply this general framework to the convection-diffusion equation, by analytically computing the family of eigenfunctions. We perform a convergence and error analysis. We also present some numerical tests that show the stability of the method for an odd number of spectral modes, and an improvement of accuracy in the large resolved scales, due to the addition of the sub-grid spectral scales.

  5. Architecture for large-scale automatic web accessibility evaluation based on the UWEM methodology

    DEFF Research Database (Denmark)

    Ulltveit-Moe, Nils; Olsen, Morten Goodwin; Pillai, Anand B.

    2008-01-01

    The European Internet Accessibility project (EIAO) has developed an Observatory for performing large scale automatic web accessibility evaluations of public sector web sites in Europe. The architecture includes a distributed web crawler that crawls web sites for links until either a given budget … of web pages have been identified or the web site has been crawled exhaustively. Subsequently, a uniform random subset of the crawled web pages is sampled and sent for accessibility evaluation and the evaluation results are stored in a Resource Description Format (RDF) database that is later loaded … challenges that the project faced and the solutions developed towards building a system capable of regular large-scale accessibility evaluations with sufficient capacity and stability. It also outlines some possible future architectural improvements…
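
    The uniform random sampling of crawled pages described above can be done in a single pass without knowing the total number of pages in advance. Below is a minimal sketch using reservoir sampling (Algorithm R); the record does not describe the Observatory's actual sampler, so the function and URL scheme here are illustrative:

```python
import random

def reservoir_sample(stream, k, rng=None):
    """Keep a uniform random sample of k items from a stream of unknown
    length (Algorithm R): once the reservoir is full, item i replaces a
    random slot with probability k/i, so every item ends up in the
    sample with equal probability k/n."""
    rng = rng or random.Random(0)
    reservoir = []
    for i, item in enumerate(stream, start=1):
        if i <= k:
            reservoir.append(item)
        else:
            j = rng.randrange(i)   # uniform in [0, i)
            if j < k:
                reservoir[j] = item
    return reservoir

# E.g. sample 5 URLs from a crawl of a priori unknown length.
urls = (f"http://example.org/page{i}" for i in range(1000))
sample = reservoir_sample(urls, 5)
print(sample)
```

The crawler can feed pages into such a sampler as they are discovered, and the memory cost stays at k items regardless of crawl size.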

  6. Initial condition effects on large scale structure in numerical simulations of plane mixing layers

    Science.gov (United States)

    McMullan, W. A.; Garrett, S. J.

    2016-01-01

    In this paper, Large Eddy Simulations are performed on the spatially developing plane turbulent mixing layer. The simulated mixing layers originate from initially laminar conditions. The focus of this research is on the effect of the nature of the imposed fluctuations on the large-scale spanwise and streamwise structures in the flow. Two simulations are performed; one with low-level three-dimensional inflow fluctuations obtained from pseudo-random numbers, the other with physically correlated fluctuations of the same magnitude obtained from an inflow generation technique. Where white-noise fluctuations provide the inflow disturbances, no spatially stationary streamwise vortex structure is observed, and the large-scale spanwise turbulent vortical structures grow continuously and linearly. These structures are observed to have a three-dimensional internal geometry with branches and dislocations. Where physically correlated fluctuations provide the inflow disturbances, a "streaky" streamwise structure that is spatially stationary is observed, with the large-scale turbulent vortical structures growing with the square-root of time. These large-scale structures are quasi-two-dimensional, on top of which the secondary structure rides. The simulation results are discussed in the context of the varying interpretations of mixing layer growth that have been postulated. Recommendations are made concerning the data required from experiments in order to produce accurate numerical simulation recreations of real flows.
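
    The distinction between white-noise and physically correlated inflow fluctuations can be illustrated numerically. The sketch below uses a Gaussian filter applied to white noise as a stand-in for an inflow generation technique (real methods, e.g. digital-filter generation, also impose temporal correlation and match target Reynolds stresses); the grid size, amplitude and filter width are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 4096           # spanwise sample points (hypothetical)
amplitude = 0.01   # fluctuation level relative to the mean flow

# Case 1: uncorrelated "white noise" disturbances -- an independent
# random value at every point.
white = amplitude * rng.standard_normal(n)

# Case 2: spatially correlated disturbances, obtained by convolving
# white noise with a Gaussian filter kernel of half-width L points.
L = 8
k = np.arange(-2 * L, 2 * L + 1)
b = np.exp(-np.pi * k**2 / (2 * L**2))  # Gaussian filter coefficients
b /= np.sqrt(np.sum(b**2))              # normalise to preserve variance
correlated = amplitude * np.convolve(
    rng.standard_normal(n + 4 * L), b, mode="valid")

def autocorr(u, lag):
    """Sample autocorrelation of u at the given lag."""
    u = u - u.mean()
    return np.dot(u[:-lag], u[lag:]) / np.dot(u, u)

# Same magnitude, but only the filtered signal has a finite
# correlation length.
print(autocorr(white, L), autocorr(correlated, L))
```

The two signals have equal variance, so any difference in the simulated mixing layer comes from the correlation structure alone, which is the point the abstract makes.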

  7. Multilevel method for modeling large-scale networks.

    Energy Technology Data Exchange (ETDEWEB)

    Safro, I. M. (Mathematics and Computer Science)

    2012-02-24

    Understanding the behavior of real complex networks is of great theoretical and practical significance. It includes developing accurate artificial models whose topological properties are similar to those of real networks, generating artificial networks at different scales under special conditions, investigating network dynamics, reconstructing missing data, predicting network response, detecting anomalies, and other tasks. Network generation, reconstruction, and prediction of future topology are central issues of this field. In this project, we address the questions related to the understanding of network modeling, investigating its structure and properties, and generating artificial networks. Most modern network generation methods are based either on various random graph models (reinforced by a set of properties such as power law distribution of node degrees, graph diameter, and number of triangles) or on the principle of replicating an existing model with elements of randomization, such as the R-MAT generator and Kronecker product modeling. Hierarchical models operate at different levels of network hierarchy but with the same finest elements of the network. However, in many cases the methods that include randomization and replication elements on the finest relationships between network nodes, and modeling that addresses the problem of preserving a set of simplified properties, do not fit the real networks accurately enough. Among the unsatisfactory features are numerically inadequate results, instability of algorithms on real (artificial) data that have been tested on artificial (real) data, and incorrect behavior at different scales. One reason is that randomization and replication of existing structures can create conflicts between fine and coarse scales of the real network geometry. Moreover, the randomization and satisfying of some attribute at the same time can abolish those topological attributes that have been undefined or hidden from
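
    Of the generator families mentioned above, Kronecker product modeling is especially compact: edge probabilities are the k-fold Kronecker power of a small initiator matrix. A minimal sketch follows; the 2x2 initiator values are illustrative, not fitted to any real network:

```python
import numpy as np

rng = np.random.default_rng(42)

# Initiator matrix: entry [i, j] is the edge probability for the
# corresponding block of the adjacency matrix (illustrative values).
P1 = np.array([[0.9, 0.5],
               [0.5, 0.1]])

def kronecker_graph(P1, k, rng):
    """Sample an adjacency matrix from the stochastic Kronecker model:
    the edge-probability matrix is the k-fold Kronecker power of P1."""
    Pk = P1
    for _ in range(k - 1):
        Pk = np.kron(Pk, P1)
    return (rng.random(Pk.shape) < Pk).astype(int)

A = kronecker_graph(P1, 8, rng)   # 2^8 = 256 nodes
degrees = A.sum(axis=1)           # row sums = out-degrees
# The model yields a skewed degree distribution: a few hubs,
# many low-degree nodes.
print(A.shape, degrees.max(), np.median(degrees))
```

Recursively multiplying the initiator is what transfers its coarse block structure down to the finest scale, which is also the source of the fine/coarse-scale conflicts the abstract criticizes.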

  8. Evaluation of Large-Scale Wing Vortex Wakes from Multi-Camera PIV Measurements in Free-Flight Laboratory

    Science.gov (United States)

    Carmer, Carl F. v.; Heider, André; Schröder, Andreas; Konrath, Robert; Agocs, Janos; Gilliot, Anne; Monnier, Jean-Claude

    Multiple-vortex systems of aircraft wakes have been investigated experimentally in a unique large-scale laboratory facility, the free-flight B20 catapult bench, ONERA Lille. 2D/2C PIV measurements have been performed in a translating reference frame, which provided time-resolved cross-velocity observations of the vortex systems in a Lagrangian frame normal to the wake axis. A PIV setup using a moving multiple-camera array and a variable double-frame time delay has been employed successfully. The large-scale quasi-2D structures of the wake-vortex system have been identified using the QW criterion based on the 2D velocity gradient tensor ∇_H u, thus illustrating the temporal development of unequal-strength corotating vortex pairs in aircraft wakes for nondimensional times tU0/b ≲ 45.

  9. Large Scale Solar Heating

    DEFF Research Database (Denmark)

    Heller, Alfred

    2001-01-01

    The main objective of the research was to evaluate large-scale solar heating connected to district heating (CSDHP), to build up a simulation tool and to demonstrate the application of the simulation tool for design studies and on a local energy planning case. The evaluation was mainly carried out … model is designed and validated on the Marstal case. Applying the Danish Reference Year, a design tool is presented. The simulation tool is used for proposals for application of alternative designs, including high-performance solar collector types (trough solar collectors, vacuum pipe collectors …). Simulation programs are proposed as a control supporting tool for daily operation and performance prediction of central solar heating plants. Finally the CSDHP technology is put into perspective with respect to alternatives, and a short discussion on the barriers and breakthrough of the technology is given…

  10. A long-term, continuous simulation approach for large-scale flood risk assessments

    Science.gov (United States)

    Falter, Daniela; Schröter, Kai; Viet Dung, Nguyen; Vorogushyn, Sergiy; Hundecha, Yeshewatesfa; Kreibich, Heidi; Apel, Heiko; Merz, Bruno

    2014-05-01

    The Regional Flood Model (RFM) is a process based model cascade developed for flood risk assessments of large-scale basins. RFM consists of four model parts: the rainfall-runoff model SWIM, a 1D channel routing model, a 2D hinterland inundation model and the flood loss estimation model for residential buildings FLEMOps+r. The model cascade recently underwent a proof-of-concept study at the Elbe catchment (Germany) to demonstrate that flood risk assessments based on a continuous simulation approach, including rainfall-runoff, hydrodynamic and damage estimation models, are feasible for large catchments. The results of this study indicated that uncertainties are significant, especially for hydrodynamic simulations. This was basically a consequence of low data quality and disregarding dike breaches. Therefore, RFM was applied with a refined hydraulic model setup for the Elbe tributary Mulde. The study area, the Mulde catchment, comprises about 6,000 km2 and 380 river-km. The inclusion of more reliable information on overbank cross-sections and dikes considerably improved the results. For the application of RFM to flood risk assessments, long-term climate input data are needed to drive the model chain. This model input was provided by a multi-site, multi-variate weather generator that produces sets of synthetic meteorological data reproducing the current climate statistics. The data set comprises 100 realizations of 100 years of meteorological data. With the proposed continuous simulation approach of RFM, we simulated a virtual period of 10,000 years covering the entire flood risk chain including hydrological, 1D/2D hydrodynamic and flood damage estimation models. This provided a record of around 2,000 inundation events affecting the study area, with spatially detailed information on inundation depths and damage to residential buildings at a resolution of 100 m. This serves as the basis for a spatially consistent flood risk assessment for the Mulde catchment presented in

  11. Geo-spatial Cognition on Human's Social Activity Space Based on Multi-scale Grids

    Directory of Open Access Journals (Sweden)

    ZHAI Weixin

    2016-12-01

    Full Text Available Widely applied location-aware devices, including mobile phones and GPS receivers, have provided great convenience for collecting large volumes of individuals' geographical information. Research on human society's activity space has attracted an increasing number of researchers. In our research, based on location-based Flickr data from 2004 to May 2014 in China, we choose five levels of spatial grids to form a multi-scale framework for investigating the correlation between scale and the geo-spatial cognition of human social activity space. The ht-index, a fractality measure inspired by Alexander, is selected to estimate the maturity of the society activity on different scales. The results indicate that the scale characteristics are related to spatial cognition to a certain extent. It is favorable to use the spatial grid as a tool to control scales for geo-spatial cognition of human social activity space.
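
    The ht-index mentioned above can be computed in a few lines. The sketch below follows the head/tail breaks idea (the number of times the above-mean "head" remains a minority under recursive splitting); the 40% head-fraction threshold and the synthetic data are illustrative assumptions:

```python
import numpy as np

def ht_index(values, head_frac_limit=0.4):
    """Count how many times the head (values above the mean) stays a
    minority when the head/tail split is applied recursively; the 40%
    threshold is the conventional, but not the only possible, choice."""
    values = np.asarray(values, dtype=float)
    ht = 1
    while len(values) > 1:
        head = values[values > values.mean()]
        if len(head) == 0 or len(head) / len(values) > head_frac_limit:
            break
        ht += 1
        values = head      # recurse on the head only
    return ht

# A heavy-tailed sample (e.g. photo counts per grid cell) has a larger
# ht-index than a uniform one.
rng = np.random.default_rng(1)
heavy = rng.pareto(1.2, 10_000)
flat = rng.uniform(0.0, 1.0, 10_000)
print(ht_index(heavy), ht_index(flat))
```

Applied per grid level, such a measure lets the scale dependence described in the abstract be quantified rather than judged visually.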

  12. Large-scale perspective as a challenge

    NARCIS (Netherlands)

    Plomp, M.G.A.

    2012-01-01

    1. Scale forms a challenge for chain researchers: when exactly is something ‘large-scale’? What are the underlying factors (e.g. number of parties, data, objects in the chain, complexity) that determine this? It appears to be a continuum between small- and large-scale, where positioning on that

  13. Scale interactions in a mixing layer – the role of the large-scale gradients

    KAUST Repository

    Fiscaletti, D.

    2016-02-15

    © 2016 Cambridge University Press. The interaction between the large and the small scales of turbulence is investigated in a mixing layer, at a Reynolds number based on the Taylor microscale of , via direct numerical simulations. The analysis is performed in physical space, and the local vorticity root-mean-square (r.m.s.) is taken as a measure of the small-scale activity. It is found that positive large-scale velocity fluctuations correspond to large vorticity r.m.s. on the low-speed side of the mixing layer, whereas they correspond to low vorticity r.m.s. on the high-speed side. The relationship between large and small scales thus depends on position if the vorticity r.m.s. is correlated with the large-scale velocity fluctuations. On the contrary, the correlation coefficient is nearly constant throughout the mixing layer and close to unity if the vorticity r.m.s. is correlated with the large-scale velocity gradients. Therefore, the small-scale activity appears closely related to large-scale gradients, while the correlation between the small-scale activity and the large-scale velocity fluctuations is shown to reflect a property of the large scales. Furthermore, the vorticity from unfiltered (small scales) and from low-pass filtered (large scales) velocity fields tend to be aligned when examined within vortical tubes. These results provide evidence for the so-called 'scale invariance' (Meneveau & Katz, Annu. Rev. Fluid Mech., vol. 32, 2000, pp. 1-32), and suggest that some of the large-scale characteristics are not lost at the small scales, at least at the Reynolds number achieved in the present simulation.

  14. A REGION-BASED MULTI-SCALE APPROACH FOR OBJECT-BASED IMAGE ANALYSIS

    Directory of Open Access Journals (Sweden)

    T. Kavzoglu

    2016-06-01

    Full Text Available Within the last two decades, object-based image analysis (OBIA), considering objects (i.e. groups of pixels) instead of pixels, has gained popularity and attracted increasing interest. The most important stage of OBIA is image segmentation, which groups spectrally similar adjacent pixels considering not only spectral features but also spatial and textural features. Although there are several parameters (scale, shape, compactness and band weights) to be set by the analyst, the scale parameter stands out as the most important parameter in the segmentation process. Estimating the optimal scale parameter is crucially important for increasing the classification accuracy, which depends on image resolution, image object size and the characteristics of the study area. In this study, two scale-selection strategies were implemented in the image segmentation process using a pan-sharpened Quickbird-2 image. The first strategy estimates optimal scale parameters for eight sub-regions. For this purpose, the local variance/rate of change (LV-RoC) graphs produced by the ESP-2 tool were analysed to determine fine, moderate and coarse scales for each region. In the second strategy, the image was segmented using the three candidate scale values (fine, moderate, coarse) determined from the LV-RoC graph calculated for the whole image. The nearest neighbour classifier was applied in all segmentation experiments and an equal number of pixels was randomly selected to calculate accuracy metrics (overall accuracy and kappa coefficient). Comparison of region-based and image-based segmentation was carried out on the classified images, and it was found that region-based multi-scale OBIA produced significantly more accurate results than image-based single-scale OBIA. The difference in classification accuracy reached 10% in terms of overall accuracy.
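
    The LV-RoC analysis used for scale selection can be sketched as follows. The LV values below are synthetic stand-ins for the mean object standard deviations that a tool such as ESP-2 would produce at each candidate scale, and the peak-picking rule is a simplification of the analyst's visual reading of the graph:

```python
import numpy as np

# Hypothetical local-variance (LV) curve: mean standard deviation of
# the image objects produced at each candidate scale parameter.
scales = np.arange(10, 160, 10)
lv = np.array([4.1, 5.6, 6.4, 7.3, 7.5, 8.4, 8.6, 8.7, 9.4, 9.5,
               9.6, 10.3, 10.4, 10.5, 10.6])

# Rate of change of LV between consecutive scales:
# RoC_i = 100 * (LV_i - LV_{i-1}) / LV_{i-1}
roc = 100.0 * (lv[1:] - lv[:-1]) / lv[:-1]

# Local peaks in the RoC curve mark candidate scales; the analyst
# typically reads off a fine, a moderate and a coarse one.
peaks = [int(scales[i + 1]) for i in range(1, len(roc) - 1)
         if roc[i] > roc[i - 1] and roc[i] > roc[i + 1]]
print(peaks)  # three candidate scales for this synthetic curve
```

For the synthetic curve above this yields three candidate scales, mirroring the fine/moderate/coarse triplet used in the study's second strategy.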

  15. What is at stake in multi-scale approaches

    International Nuclear Information System (INIS)

    Jamet, Didier

    2008-01-01

    Full text of publication follows: Multi-scale approaches amount to analyzing physical phenomena at small space and time scales in order to model their effects at larger scales. This approach is very general in physics and engineering; one of the best examples of success of this approach is certainly statistical physics that allows to recover classical thermodynamics and to determine the limits of application of classical thermodynamics. Getting access to small scale information aims at reducing the models' uncertainty but it has a cost: fine scale models may be more complex than larger scale models and their resolution may require the development of specific and possibly expensive methods, numerical simulation techniques and experiments. For instance, in applications related to nuclear engineering, the application of computational fluid dynamics instead of cruder models is a formidable engineering challenge because it requires resorting to high performance computing. Likewise, in two-phase flow modeling, the techniques of direct numerical simulation, where all the interfaces are tracked individually and where all turbulence scales are captured, are getting mature enough to be considered for averaged modeling purposes. However, resolving small scale problems is a necessary step but it is not sufficient in a multi-scale approach. An important modeling challenge is to determine how to treat small scale data in order to get relevant information for larger scale models. For some applications, such as single-phase turbulence or transfers in porous media, this up-scaling approach is known and is now used rather routinely. However, in two-phase flow modeling, the up-scaling approach is not as mature and specific issues must be addressed that raise fundamental questions. This will be discussed and illustrated. (author)

  16. Algorithm and Application of Gcp-Independent Block Adjustment for Super Large-Scale Domestic High Resolution Optical Satellite Imagery

    Science.gov (United States)

    Sun, Y. S.; Zhang, L.; Xu, B.; Zhang, Y.

    2018-04-01

    Accurate positioning of optical satellite imagery without ground control is a precondition for remote sensing applications and small/medium scale mapping of large areas abroad or with large numbers of images. In this paper, considering the geometric features of optical satellite imagery, and building on a widely used optimization method for constrained problems, the Alternating Direction Method of Multipliers (ADMM), together with RFM least-squares block adjustment, we propose a GCP-independent block adjustment method for large-scale domestic high resolution optical satellite imagery - GISIBA (GCP-Independent Satellite Imagery Block Adjustment), which is easy to parallelize and highly efficient. In this method, virtual "average" control points are built to solve the rank defect problem and to support qualitative and quantitative analysis in block adjustment without control. The test results show that the horizontal and vertical accuracy of multi-covered and multi-temporal satellite images are better than 10 m and 6 m, respectively. Meanwhile, the mosaicking problem of adjacent areas in large-area DOM production can be solved if public geographic information data are introduced as horizontal and vertical constraints in the block adjustment process. Finally, through experiments using GF-1 and ZY-3 satellite images over several typical test areas, the reliability, accuracy and performance of our developed procedure are presented and studied in this paper.
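
    The ADMM splitting that makes such an adjustment easy to parallelize can be illustrated on a generic problem. Below is a minimal consensus-ADMM sketch for a least-squares system split into blocks, standing in for per-image observation equations; the matrices, block sizes and penalty parameter are illustrative, not the paper's actual RFM adjustment:

```python
import numpy as np

rng = np.random.default_rng(3)

# Synthetic problem: a tall least-squares system split into 4 blocks
# (each block could be solved on a separate worker).
blocks = [(rng.standard_normal((30, 5)), rng.standard_normal(30))
          for _ in range(4)]

def consensus_admm(blocks, rho=1.0, iters=300):
    """Solve min_x sum_i ||A_i x - b_i||^2 by consensus ADMM: each
    block keeps a local copy x_i, a shared variable z enforces
    agreement, and u_i are the scaled dual variables."""
    n = blocks[0][0].shape[1]
    z = np.zeros(n)
    xs = [np.zeros(n) for _ in blocks]
    us = [np.zeros(n) for _ in blocks]
    for _ in range(iters):
        for i, (A, b) in enumerate(blocks):          # parallelizable
            xs[i] = np.linalg.solve(A.T @ A + rho * np.eye(n),
                                    A.T @ b + rho * (z - us[i]))
        z = np.mean([x + u for x, u in zip(xs, us)], axis=0)
        for i in range(len(blocks)):
            us[i] += xs[i] - z
    return z

z = consensus_admm(blocks)
# Agrees with the solution of the stacked (global) least-squares system.
A_all = np.vstack([A for A, _ in blocks])
b_all = np.concatenate([b for _, b in blocks])
x_ref = np.linalg.lstsq(A_all, b_all, rcond=None)[0]
print(np.abs(z - x_ref).max())
```

The per-block x-updates are independent, which is what makes the scheme attractive for very large image blocks.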

  17. RRW: repeated random walks on genome-scale protein networks for local cluster discovery

    Directory of Open Access Journals (Sweden)

    Can Tolga

    2009-09-01

    Full Text Available Abstract Background We propose an efficient and biologically sensitive algorithm based on repeated random walks (RRW) for discovering functional modules, e.g., complexes and pathways, within large-scale protein networks. Compared to existing cluster identification techniques, RRW implicitly makes use of network topology, edge weights, and long range interactions between proteins. Results We apply the proposed technique on a functional network of yeast genes and accurately identify statistically significant clusters of proteins. We validate the biological significance of the results using known complexes in the MIPS complex catalogue database and well-characterized biological processes. We find that 90% of the created clusters have the majority of their catalogued proteins belonging to the same MIPS complex, and about 80% have the majority of their proteins involved in the same biological process. We compare our method to various other clustering techniques, such as the Markov Clustering Algorithm (MCL), and find a significant improvement in the RRW clusters' precision and accuracy values. Conclusion RRW, which is a technique that exploits the topology of the network, is more precise and robust in finding local clusters. In addition, it has the added flexibility of being able to find multi-functional proteins by allowing overlapping clusters.
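
    The core of an RRW-style method is a random walk with restart from a set of seed proteins. Here is a minimal sketch on a toy unweighted network; the restart probability and the toy graph are illustrative, and the paper's cluster expansion and overlap handling are omitted:

```python
import numpy as np

def random_walk_with_restart(A, seed_nodes, restart=0.3, tol=1e-10):
    """Stationary visiting probabilities of a walk that, at each step,
    jumps back to the seed set with probability `restart`. Nodes with
    high probability form the local cluster around the seeds."""
    A = np.asarray(A, dtype=float)
    W = A / A.sum(axis=0, keepdims=True)   # column-stochastic transitions
    p0 = np.zeros(len(A))
    p0[list(seed_nodes)] = 1.0 / len(seed_nodes)
    p = p0.copy()
    while True:
        p_next = (1 - restart) * W @ p + restart * p0
        if np.abs(p_next - p).sum() < tol:
            return p_next
        p = p_next

# Toy network: two triangles {0,1,2} and {3,4,5} joined by edge 2-3.
A = np.zeros((6, 6))
for i, j in [(0, 1), (1, 2), (0, 2), (3, 4), (4, 5), (3, 5), (2, 3)]:
    A[i, j] = A[j, i] = 1.0
p = random_walk_with_restart(A, seed_nodes=[0])
# The walk concentrates on the seed's own triangle.
print(np.round(p, 3))
```

Repeating such walks from many seeds and keeping the highest-probability neighborhoods is the "repeated" part of RRW; edge weights enter simply by using a weighted adjacency matrix.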

  18. Multi-Scale Multi-physics Methods Development for the Calculation of Hot-Spots in the NGNP

    International Nuclear Information System (INIS)

    Downar, Thomas; Seker, Volkan

    2013-01-01

    Radioactive gaseous fission products are released out of the fuel element at a significantly higher rate when the fuel temperature exceeds 1600°C in high-temperature gas-cooled reactors (HTGRs). Therefore, it is of paramount importance to accurately predict the peak fuel temperature during all operational and design-basis accident conditions. The current methods used to predict the peak fuel temperature in HTGRs, such as the Next-Generation Nuclear Plant (NGNP), estimate the average fuel temperature in a computational mesh modeling hundreds of fuel pebbles or a fuel assembly in a pebble-bed reactor (PBR) or prismatic block type reactor (PMR), respectively. Experiments conducted in operating HTGRs indicate considerable uncertainty in the current methods and correlations used to predict actual temperatures. The objective of this project is to improve the accuracy in the prediction of local 'hot' spots by developing multi-scale, multi-physics methods and implementing them within the framework of established codes used for NGNP analysis. The multi-scale approach which this project will implement begins with defining suitable scales for a physical and mathematical model and then deriving and applying the appropriate boundary conditions between scales. The macro scale is the greatest length that describes the entire reactor, whereas the meso scale models only a fuel block in a prismatic reactor and ten to hundreds of pebbles in a pebble bed reactor. The smallest scale is the micro scale--the level of a fuel kernel of the pebble in a PBR and fuel compact in a PMR--which needs to be resolved in order to calculate the peak temperature in a fuel kernel.

  19. Multi-Scale Multi-physics Methods Development for the Calculation of Hot-Spots in the NGNP

    Energy Technology Data Exchange (ETDEWEB)

    Downar, Thomas [Univ. of Michigan, Ann Arbor, MI (United States); Seker, Volkan [Univ. of Michigan, Ann Arbor, MI (United States)

    2013-04-30

    Radioactive gaseous fission products are released out of the fuel element at a significantly higher rate when the fuel temperature exceeds 1600°C in high-temperature gas-cooled reactors (HTGRs). Therefore, it is of paramount importance to accurately predict the peak fuel temperature during all operational and design-basis accident conditions. The current methods used to predict the peak fuel temperature in HTGRs, such as the Next-Generation Nuclear Plant (NGNP), estimate the average fuel temperature in a computational mesh modeling hundreds of fuel pebbles or a fuel assembly in a pebble-bed reactor (PBR) or prismatic block type reactor (PMR), respectively. Experiments conducted in operating HTGRs indicate considerable uncertainty in the current methods and correlations used to predict actual temperatures. The objective of this project is to improve the accuracy in the prediction of local "hot" spots by developing multi-scale, multi-physics methods and implementing them within the framework of established codes used for NGNP analysis. The multi-scale approach which this project will implement begins with defining suitable scales for a physical and mathematical model and then deriving and applying the appropriate boundary conditions between scales. The macro scale is the greatest length that describes the entire reactor, whereas the meso scale models only a fuel block in a prismatic reactor and ten to hundreds of pebbles in a pebble bed reactor. The smallest scale is the micro scale--the level of a fuel kernel of the pebble in a PBR and fuel compact in a PMR--which needs to be resolved in order to calculate the peak temperature in a fuel kernel.

  20. Large-scale matrix-handling subroutines 'ATLAS'

    International Nuclear Information System (INIS)

    Tsunematsu, Toshihide; Takeda, Tatsuoki; Fujita, Keiichi; Matsuura, Toshihiko; Tahara, Nobuo

    1978-03-01

    Subroutine package 'ATLAS' has been developed for handling large-scale matrices. The package is composed of four kinds of subroutines: basic arithmetic routines, routines for solving linear simultaneous equations, routines for solving general eigenvalue problems, and utility routines. The subroutines are useful in large scale plasma-fluid simulations. (auth.)

  1. Large-scale solar heat

    Energy Technology Data Exchange (ETDEWEB)

    Tolonen, J.; Konttinen, P.; Lund, P. [Helsinki Univ. of Technology, Otaniemi (Finland). Dept. of Engineering Physics and Mathematics

    1998-12-31

    In this project a large domestic solar heating system was built and a solar district heating system was modelled and simulated. Objectives were to improve the performance and reduce costs of a large-scale solar heating system. As a result of the project the benefit/cost ratio can be increased by 40 % through dimensioning and optimising the system at the designing stage. (orig.)

  2. Effects of multi-stakeholder platforms on multi-stakeholder innovation networks: Implications for research for development interventions targeting innovations at scale

    Science.gov (United States)

    Schut, Marc; Hermans, Frans; van Asten, Piet; Leeuwis, Cees

    2018-01-01

    Multi-stakeholder platforms (MSPs) have been playing an increasing role in interventions aiming to generate and scale innovations in agricultural systems. However, the contribution of MSPs in achieving innovations and scaling has been varied, and many factors have been reported to be important for their performance. This paper aims to provide evidence on the contribution of MSPs to innovation and scaling by focusing on three developing country cases in Burundi, Democratic Republic of Congo, and Rwanda. Through social network analysis and logistic models, the paper studies the changes in the characteristics of multi-stakeholder innovation networks targeted by MSPs and identifies factors that play significant roles in triggering these changes. The results demonstrate that MSPs do not necessarily expand and decentralize innovation networks but can lead to contraction and centralization in the initial years of implementation. They show that some of the intended next users of interventions with MSPs–local-level actors–left the innovation networks, whereas the lead organization controlling resource allocation in the MSPs substantially increased its centrality. They also indicate that not all the factors of change in innovation networks are country specific. Initial conditions of innovation networks and funding provided by the MSPs are common factors explaining changes in innovation networks across countries and across different network functions. The study argues that investigating multi-stakeholder innovation network characteristics targeted by the MSP using a network approach in early implementation can contribute to better performance in generating and scaling innovations, and that funding can be an effective implementation tool in developing country contexts. PMID:29870559

  3. Effects of multi-stakeholder platforms on multi-stakeholder innovation networks: Implications for research for development interventions targeting innovations at scale.

    Science.gov (United States)

    Sartas, Murat; Schut, Marc; Hermans, Frans; Asten, Piet van; Leeuwis, Cees

    2018-01-01

    Multi-stakeholder platforms (MSPs) have been playing an increasing role in interventions aiming to generate and scale innovations in agricultural systems. However, the contribution of MSPs in achieving innovations and scaling has been varied, and many factors have been reported to be important for their performance. This paper aims to provide evidence on the contribution of MSPs to innovation and scaling by focusing on three developing country cases in Burundi, Democratic Republic of Congo, and Rwanda. Through social network analysis and logistic models, the paper studies the changes in the characteristics of multi-stakeholder innovation networks targeted by MSPs and identifies factors that play significant roles in triggering these changes. The results demonstrate that MSPs do not necessarily expand and decentralize innovation networks but can lead to contraction and centralization in the initial years of implementation. They show that some of the intended next users of interventions with MSPs-local-level actors-left the innovation networks, whereas the lead organization controlling resource allocation in the MSPs substantially increased its centrality. They also indicate that not all the factors of change in innovation networks are country specific. Initial conditions of innovation networks and funding provided by the MSPs are common factors explaining changes in innovation networks across countries and across different network functions. The study argues that investigating multi-stakeholder innovation network characteristics targeted by the MSP using a network approach in early implementation can contribute to better performance in generating and scaling innovations, and that funding can be an effective implementation tool in developing country contexts.

  4. Genomic-Enabled Prediction Kernel Models with Random Intercepts for Multi-environment Trials

    Science.gov (United States)

    Cuevas, Jaime; Granato, Italo; Fritsche-Neto, Roberto; Montesinos-Lopez, Osval A.; Burgueño, Juan; Bandeira e Sousa, Massaine; Crossa, José

    2018-01-01

    In this study, we compared the prediction accuracy of the main genotypic effect model (MM) without G×E interactions, the multi-environment single variance G×E deviation model (MDs), and the multi-environment environment-specific variance G×E deviation model (MDe), where the random genetic effects of the lines are modeled with the markers (or pedigree). With the objective of further modeling the genetic residual of the lines, we incorporated the random intercepts of the lines (l) and generated another three models. Each of these 6 models was fitted with a linear kernel method (Genomic Best Linear Unbiased Predictor, GB) and a Gaussian kernel (GK) method. We compared these 12 model-method combinations with another two multi-environment G×E interaction models with unstructured variance-covariances (MUC) using GB and GK kernels (4 model-method combinations). Thus, we compared the genomic-enabled prediction accuracy of a total of 16 model-method combinations on two maize data sets with positive phenotypic correlations among environments, and on two wheat data sets with complex G×E that includes some negative and close-to-zero phenotypic correlations among environments. The two models (MDs and MDe with the random intercepts of the lines and the GK method) were computationally efficient and gave high prediction accuracy in the two maize data sets. Regarding the more complex G×E wheat data sets, the model-method combinations with G×E, MDs and MDe, including the random intercepts of the lines with the GK method, had important savings in computing time as compared with the G×E interaction multi-environment models with unstructured variance-covariances, but with lower genomic prediction accuracy. PMID:29476023
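
    The two kernels compared above differ only in how line-to-line similarity is computed from markers. Below is a minimal sketch of building a GBLUP-style linear kernel and a Gaussian kernel from a marker matrix; the marker coding, bandwidth rule and dimensions are illustrative choices, not the exact settings of the study:

```python
import numpy as np

rng = np.random.default_rng(7)
n_lines, n_markers = 20, 100
# Hypothetical standardised marker matrix (lines x markers).
X = rng.choice([-1.0, 0.0, 1.0], size=(n_lines, n_markers))

# Linear kernel of GBLUP: G = X X' / p.
G = X @ X.T / n_markers

# Gaussian kernel: K_ij = exp(-d_ij^2 / s), where d_ij^2 is the squared
# Euclidean distance between marker profiles; scaling s by the median
# off-diagonal distance is one common convention, used here as an
# illustrative choice.
d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(axis=-1)
s = np.median(d2[np.triu_indices(n_lines, k=1)])
K = np.exp(-d2 / s)

# Both are symmetric similarity matrices; K has maximal self-similarity
# (exactly 1) on its diagonal.
print(G.shape, K.shape, K.diagonal().min())
```

Either matrix can then serve as the covariance of the random line effects; the Gaussian kernel captures non-additive similarity that the linear kernel misses, which is the practical reason GK often predicts better under complex G×E.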

  5. Heuristic Relative Entropy Principles with Complex Measures: Large-Degree Asymptotics of a Family of Multi-variate Normal Random Polynomials

    Science.gov (United States)

    Kiessling, Michael Karl-Heinz

    2017-10-01

    Let z ∈ ℂ, let σ² > 0 be a variance, and for N ∈ ℕ let E_N(z;σ) denote the expected value of the polynomial P_N(z) = ∏_{1≤n≤N}(X_n² + z²), whose 2N zeros ±iX_k, k = 1,…,N, are generated by N identically distributed multi-variate mean-zero normal random variables {X_k}_{k=1}^N with co-variance Cov_N(X_k, X_l) = (1 + (σ² − 1)/N)δ_{k,l} + ((σ² − 1)/N)(1 − δ_{k,l}); for N = 1 this is E_1(z;σ) = (1/σ)∫_ℝ (x² + z²) e^{−x²/(2σ²)} dx/√(2π). The E_N(z;σ) are polynomials in z², explicitly computable for arbitrary N, yet a list of the first three E_N(z;σ) shows that the expressions become unwieldy already for moderate N, unless σ = 1, in which case E_N(z;1) = (1 + z²)^N for all z ∈ ℂ and N ∈ ℕ. (Incidentally, commonly available computer algebra evaluates the integrals E_N(z;σ) only for N up to a dozen, due to memory constraints.) Asymptotic evaluations are needed for the large-N regime. For general complex z these have traditionally been limited to analytic expansion techniques; several rigorous results are proved for complex z near 0. Yet if z ∈ ℝ one can also compute this "infinite-degree" limit with the help of the familiar relative entropy principle for probability measures; a rigorous proof of this fact is supplied. Computer-algebra-generated evidence is presented in support of a conjecture that a generalization of the relative entropy principle to signed or complex measures governs the N → ∞ asymptotics of the regime iz ∈ ℝ. Potential generalizations, in particular to point-vortex ensembles and the prescribed Gauss curvature problem, and to random matrix ensembles, are emphasized.
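
    The σ = 1 identity is easy to sanity-check numerically: at σ = 1 the stated covariance collapses to the identity, so the X_k are independent standard normals and the expectation factorizes as E_N(z;1) = (E[X²] + z²)^N = (1 + z²)^N. A minimal Monte Carlo sketch (N, z and sample size are illustrative choices, not from the paper):

```python
import numpy as np

# At sigma = 1, Cov_N(X_k, X_l) = delta_{k,l}: the X_k are iid N(0,1),
# so E[prod_n (X_n^2 + z^2)] = (1 + z^2)^N.
rng = np.random.default_rng(42)
N, z, samples = 3, 0.5, 400_000
X = rng.standard_normal((samples, N))
mc = np.prod(X**2 + z**2, axis=1).mean()
exact = (1 + z**2) ** N
```

    With these settings the Monte Carlo estimate agrees with the closed form to well under a percent.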

  6. CLASS: The Cosmology Large Angular Scale Surveyor

    Science.gov (United States)

    Essinger-Hileman, Thomas; Ali, Aamir; Amiri, Mandana; Appel, John W.; Araujo, Derek; Bennett, Charles L.; Boone, Fletcher; Chan, Manwei; Cho, Hsiao-Mei; Chuss, David T.; et al.

    2014-01-01

    The Cosmology Large Angular Scale Surveyor (CLASS) is an experiment to measure the signature of a gravitational wave background from inflation in the polarization of the cosmic microwave background (CMB). CLASS is a multi-frequency array of four telescopes operating from a high-altitude site in the Atacama Desert in Chile. CLASS will survey 70% of the sky in four frequency bands centered at 38, 93, 148, and 217 GHz, which are chosen to straddle the Galactic-foreground minimum while avoiding strong atmospheric emission lines. This broad frequency coverage ensures that CLASS can distinguish Galactic emission from the CMB. The sky fraction of the CLASS survey will allow the full shape of the primordial B-mode power spectrum to be characterized, including the signal from reionization at low multipole ℓ. Its unique combination of large sky coverage, control of systematic errors, and high sensitivity will allow CLASS to measure or place upper limits on the tensor-to-scalar ratio at a level of r = 0.01 and make a cosmic-variance-limited measurement of the optical depth to the surface of last scattering, τ. (c) (2014) COPYRIGHT Society of Photo-Optical Instrumentation Engineers (SPIE). Downloading of the abstract is permitted for personal use only.

  7. Implicit solvers for large-scale nonlinear problems

    International Nuclear Information System (INIS)

    Keyes, David E; Reynolds, Daniel R; Woodward, Carol S

    2006-01-01

    Computational scientists are grappling with increasingly complex, multi-rate applications that couple such physical phenomena as fluid dynamics, electromagnetics, radiation transport, chemical and nuclear reactions, and wave and material propagation in inhomogeneous media. Parallel computers with large storage capacities are paving the way for high-resolution simulations of coupled problems; however, hardware improvements alone will not prove enough to enable simulations based on brute-force algorithmic approaches. To accurately capture nonlinear couplings between dynamically relevant phenomena, often while stepping over rapid adjustments to quasi-equilibria, simulation scientists are increasingly turning to implicit formulations that require a discrete nonlinear system to be solved for each time step or steady state solution. Recent advances in iterative methods have made fully implicit formulations a viable option for solution of these large-scale problems. In this paper, we overview one of the most effective iterative methods, Newton-Krylov, for nonlinear systems and point to software packages with its implementation. We illustrate the method with an example from magnetically confined plasma fusion and briefly survey other areas in which implicit methods have bestowed important advantages, such as allowing high-order temporal integration and providing a pathway to sensitivity analyses and optimization. Lastly, we overview algorithm extensions under development motivated by current SciDAC applications

  8. Analytical Assessment of the Relationship between 100MWp Large-scale Grid-connected Photovoltaic Plant Performance and Meteorological Parameters

    Science.gov (United States)

    Sheng, Jie; Zhu, Qiaoming; Cao, Shijie; You, Yang

    2017-05-01

    This paper studies the relationship between the photovoltaic power generation of a large-scale "fishing and PV complementary" grid-tied photovoltaic system and meteorological parameters, using multi-time-scale power data from the photovoltaic power station and meteorological data over the same period of a whole year. The results indicate that PV power generation correlates most strongly with global solar irradiation, followed by diurnal temperature range, sunshine hours, daily maximum temperature and daily average temperature. Across months, the maximum monthly average power generation appears in August, which is related to the greater global solar irradiation and longer sunshine hours in that month. However, the maximum daily average power generation appears in October, because the drop in temperature improves the efficiency of the PV panels. A comparison of monthly average performance ratio (PR) with monthly average temperature shows that the larger values of monthly average PR appear in April and October, while PR is smaller in summer, when temperatures are higher. The results indicate that temperature has a great influence on the performance ratio of a large-scale grid-tied PV power system, and that it is important to adopt effective measures to keep the temperature of the PV plant down.
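
    The performance ratio discussed above is, in the usual IEC 61724-style definition, the final yield (energy per kWp of capacity) divided by the reference yield (in-plane irradiation per kW/m² of reference irradiance). A sketch with made-up numbers, not taken from the paper:

```python
def performance_ratio(e_ac_kwh, p_rated_kwp, irradiation_kwh_m2, g_stc=1.0):
    """PR = final yield / reference yield (IEC 61724-style definition).

    e_ac_kwh:           AC energy delivered over the period (kWh)
    p_rated_kwp:        nameplate DC capacity (kWp)
    irradiation_kwh_m2: in-plane irradiation over the period (kWh/m^2)
    g_stc:              reference irradiance, 1 kW/m^2 at STC
    """
    y_final = e_ac_kwh / p_rated_kwp            # kWh produced per kWp
    y_reference = irradiation_kwh_m2 / g_stc    # equivalent full-sun hours
    return y_final / y_reference

# Illustrative only: a 100 MWp plant delivering 13.5 GWh in a month
# with 165 kWh/m^2 of in-plane irradiation.
pr = performance_ratio(13_500_000, 100_000, 165.0)
```

    A summer month with the same irradiation but hotter modules would deliver less energy and hence a lower PR, which is exactly the seasonal pattern the abstract reports.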

  9. Data fusion of multi-scale representations for structural damage detection

    Science.gov (United States)

    Guo, Tian; Xu, Zili

    2018-01-01

    Despite extensive research into structural health monitoring (SHM) in the past decades, few methods can detect multiple instances of slight damage in noisy environments. Here, we introduce a new hybrid method that utilizes multi-scale space theory and a data fusion approach for multiple-damage detection in beams and plates. A cascade filtering approach provides a multi-scale space for noisy mode shapes and filters out the fluctuations caused by measurement noise. In multi-scale space, a series of amplification and data fusion algorithms is used to search for damage features across all possible scales. We verify the effectiveness of the method by numerical simulation using damaged beams and plates with various types of boundary conditions. Monte Carlo simulations are conducted to illustrate the effectiveness and noise immunity of the proposed method. The applicability is further validated via laboratory case studies focusing on different damage scenarios. Both sets of results demonstrate that the proposed method has superior noise tolerance as well as damage sensitivity, without requiring knowledge of material properties or boundary conditions.
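
    The abstract does not give the authors' algorithm, but the underlying idea, smoothing a noisy mode shape at several scales and differencing curvature-based features so that noise and the global mode-shape curvature cancel while a local anomaly survives, can be sketched on a toy beam. The kink location, amplitudes and filter widths below are hypothetical:

```python
import numpy as np

def smooth(u, w):
    """Moving-average filter: one rung of a crude multi-scale 'scale space'."""
    return np.convolve(u, np.ones(w) / w, mode="same")

def curvature(u, h):
    """Central second difference, aligned with the interior grid points."""
    return (u[2:] - 2 * u[1:-1] + u[:-2]) / h**2

rng = np.random.default_rng(1)
n, h = 501, 1 / 500
x = np.linspace(0.0, 1.0, n)

# First bending mode of a pinned beam plus a small local kink at x = 0.4
# (a crude stand-in for a stiffness loss) and measurement noise.
mode = np.sin(np.pi * x)
kink = 0.01 * np.clip(0.05 - np.abs(x - 0.4), 0.0, None) / 0.05
u = mode + kink + rng.normal(0.0, 2e-5, n)

# Damage feature: curvature at a fine scale minus curvature at a coarse
# scale. The smooth global curvature cancels; the local kink survives.
feat = np.abs(curvature(smooth(u, 5), h) - curvature(smooth(u, 41), h))
interior = slice(50, n - 52)            # ignore filter edge artifacts
x_damage = x[1:-1][interior][np.argmax(feat[interior])]
```

    The fused feature peaks near the kink despite the noise; the paper's method fuses many more scales and handles plates, but the scale-space intuition is the same.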

  10. Multi-scale sampling to evaluate assemblage dynamics in an oceanic marine reserve.

    Directory of Open Access Journals (Sweden)

    Andrew R Thompson

    Full Text Available To resolve the capacity of Marine Protected Areas (MPAs) to enhance fish productivity, it is first necessary to understand how environmental conditions affect the distribution and abundance of fishes independent of potential reserve effects. Baseline fish production was examined from 2002-2004 through ichthyoplankton sampling in a large (10,878 km²) Southern Californian oceanic marine reserve, the Cowcod Conservation Area (CCA), which was established in 2001, and in the Southern California Bight as a whole (the 238,000 km² CalCOFI sampling domain). The CCA assemblage changed through time as the importance of oceanic-pelagic species decreased between 2002 (La Niña) and 2003 (El Niño) and then increased in 2004 (El Niño), while oceanic species and rockfishes displayed the opposite pattern. By contrast, the CalCOFI assemblage was relatively stable through time. Depth, temperature, and zooplankton explained more of the variability in assemblage structure at the CalCOFI scale than they did at the CCA scale. CalCOFI sampling revealed that oceanic species impinged upon the CCA between 2002 and 2003 in association with warmer offshore waters, thus explaining the increased influence of these species in the CCA during the El Niño years. Multi-scale, spatially explicit sampling and analysis were necessary to interpret assemblage dynamics in the CCA and likely will be needed to evaluate other focal oceanic marine reserves throughout the world.

  11. Langevin dynamics simulations of large frustrated Josephson junction arrays

    International Nuclear Information System (INIS)

    Groenbech-Jensen, N.; Bishop, A.R.; Lomdahl, P.S.

    1991-01-01

    Long-time Langevin dynamics simulations of large (N × N, N = 128) 2-dimensional arrays of Josephson junctions in a uniformly frustrating external magnetic field are reported. The results demonstrate: (1) relaxation from an initially random flux configuration as a universal fit to a glassy stretched-exponential type of relaxation for intermediate temperatures (0.3 T_c ≲ T ≲ 0.7 T_c), and an activated dynamic behavior for T ∼ T_c; (2) a glassy (multi-time, multi-length scale) voltage response to an applied current. Intrinsic dynamical symmetry breaking induced by boundaries as nucleation sites for flux-lattice defects gives rise to a transverse and noisy voltage response.
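
    The array simulations themselves are beyond a short snippet, but the basic machinery, Euler-Maruyama integration of an overdamped Langevin equation, is easy to show on the simplest related system: a single resistively-shunted Josephson junction driven into the running (voltage) state. All parameters below are illustrative, not from the paper:

```python
import math
import numpy as np

# Overdamped Langevin equation for one junction phase (dimensionless RSJ
# model): dphi/dt = i_b - sin(phi) + sqrt(2T) * eta(t).  For bias i_b > 1
# the washboard potential has no minima and the phase winds steadily.
rng = np.random.default_rng(7)
dt, steps, i_b, T = 1e-3, 200_000, 1.5, 0.1
kick = math.sqrt(2 * T * dt)           # Euler-Maruyama noise amplitude
phi = 0.0
for _ in range(steps):
    phi += dt * (i_b - math.sin(phi)) + kick * rng.standard_normal()

# Mean voltage is proportional to the winding rate of the phase; the
# noise-free value here would be sqrt(i_b^2 - 1) ~ 1.12.
v_mean = phi / (steps * dt)
```

    An array simulation replaces the scalar phase by one phase per island, couples neighbours through Josephson currents, and adds the frustrating field, but the per-step update is the same stochastic Euler scheme.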

  13. Multiresolution comparison of precipitation datasets for large-scale models

    Science.gov (United States)

    Chun, K. P.; Sapriza Azuri, G.; Davison, B.; DeBeer, C. M.; Wheater, H. S.

    2014-12-01

    Gridded precipitation datasets are crucial for driving the large-scale models used in weather forecasting and climate research. However, the quality of precipitation products is usually validated individually. Comparing gridded precipitation products against one another, along with ground observations, provides another avenue for investigating how precipitation uncertainty affects the performance of large-scale models. In this study, using data from a set of precipitation gauges over British Columbia and Alberta, we evaluate several widely used North American gridded products, including the Canadian Gridded Precipitation Anomalies (CANGRD), the National Center for Environmental Prediction (NCEP) reanalysis, the Water and Global Change (WATCH) project, the thin-plate spline smoothing algorithm (ANUSPLIN) and the Canadian Precipitation Analysis (CaPA). Based on verification criteria at various temporal and spatial scales, the results provide an assessment of possible applications for the various precipitation datasets. For long-term climate variation studies (~100 years), CANGRD, NCEP, WATCH and ANUSPLIN have different comparative advantages in terms of resolution and accuracy. For synoptic and mesoscale precipitation patterns, CaPA offers appealing spatial coherence. In addition to the product comparison, various downscaling methods are also surveyed to explore new verification and bias-reduction methods for improving gridded precipitation outputs for large-scale models.
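
    Gauge-based verification criteria of the kind referred to above typically reduce to a few summary statistics comparing the gridded value at a gauge location with the gauge record. A minimal sketch (the station values are hypothetical):

```python
import numpy as np

def verification_scores(product, gauge):
    """Basic gauge-based verification of a gridded product:
    mean bias, RMSE, and Pearson correlation."""
    product, gauge = np.asarray(product, float), np.asarray(gauge, float)
    err = product - gauge
    return {
        "bias": err.mean(),
        "rmse": np.sqrt((err**2).mean()),
        "corr": np.corrcoef(product, gauge)[0, 1],
    }

# Hypothetical monthly precipitation totals (mm) at one gauge location.
gauge   = [32.0, 50.0, 41.0, 12.0, 5.0, 18.0]
gridded = [35.0, 44.0, 46.0, 15.0, 9.0, 14.0]
scores = verification_scores(gridded, gauge)
```

    Ranking products by such scores at several temporal aggregations (daily, monthly, annual) is one simple way to make the "different comparative advantages" in the abstract concrete.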

  14. Probes of large-scale structure in the Universe

    International Nuclear Information System (INIS)

    Suto, Yasushi; Gorski, K.; Juszkiewicz, R.; Silk, J.

    1988-01-01

    Recent progress in observational techniques has made it possible to confront quantitatively various models for the large-scale structure of the Universe with detailed observational data. We develop a general formalism to show that the gravitational instability theory for the origin of large-scale structure is now capable of critically confronting observational results on cosmic microwave background radiation angular anisotropies, large-scale bulk motions and large-scale clumpiness in the galaxy counts. (author)

  15. A Multi-Component Day-Camp Weight-Loss Program Is Effective in Reducing BMI in Children after One Year: A Randomized Controlled Trial.

    Directory of Open Access Journals (Sweden)

    Kristian Traberg Larsen

    Full Text Available The objective of the present study was to evaluate the effectiveness of a one-year multi-component immersive day-camp weight-loss intervention for children with overweight and obesity. The study design was a parallel-group randomized controlled trial. One hundred fifteen 11-13-year-old children with overweight and obesity were randomized into either a six-week day-camp intervention arm focusing on increased physical activity and healthy diet, followed by a subsequent one-year family-based intervention, or a standard intervention arm consisting of one weekly exercise session for six weeks. Body mass index (BMI) was the primary outcome. BMI z-score, clustered cardiovascular risk z-score, and body composition were secondary outcomes. All outcomes were measured at baseline, six-week, and 52-week follow-up. After six weeks, children from the day-camp intervention arm had improved their BMI (-2.2 kg/m2 (95% CI -2.6 to -1.7), P<0.001) and all secondary outcomes when compared to the children from the standard intervention arm. After 52 weeks, the day-camp intervention arm had a lower BMI (-1.2 kg/m2 (95% CI -1.8 to -0.5), P = 0.001), BMI z-score (-0.20 (95% CI -0.35 to -0.05), P = 0.008), and clustered cardiovascular risk z-score (-0.23 (95% CI -0.37 to -0.08), P = 0.002) compared to the standard intervention arm. No group differences were detected in body composition after 52 weeks. This study shows that the day-camp intervention is effective in reducing BMI and improving the metabolic health of children with overweight and obesity. However, the effects seem to diminish over time.

  16. Multi-time scale Climate Informed Stochastic Hybrid Simulation-Optimization Model (McISH model) for Multi-Purpose Reservoir System

    Science.gov (United States)

    Lu, M.; Lall, U.

    2013-12-01

    In order to mitigate the impacts of climate change, proactive management strategies to operate reservoirs and dams are needed. A multi-time scale climate-informed stochastic model is developed to optimize the operations of a multi-purpose single reservoir by simulating decadal, interannual, seasonal and sub-seasonal variability. We apply the model to a setting motivated by the largest multi-purpose dam in northern India, the Bhakhra reservoir on the Sutlej River, a tributary of the Indus. This leads to a focus on the timing and amplitude of the flows for the monsoon and snowmelt periods. The flow simulations are constrained by multiple sources of historical data and GCM future projections that are being developed through an NSF-funded project titled 'Decadal Prediction and Stochastic Simulation of Hydroclimate Over Monsoon Asia'. The model presented is a multilevel, nonlinear programming model that aims to optimize the reservoir operating policy on a decadal horizon and the operation strategy on an updated annual basis. The model is hierarchical: two optimization models designated for different time scales are nested like matryoshka dolls. The two optimization models have similar mathematical formulations, with some modifications to meet the constraints within each time frame. The first level of the model provides an optimization solution for policy makers to determine contracted annual releases to different uses with a prescribed reliability; the second level is a within-the-period (e.g., year) operation optimization scheme that allocates the contracted annual releases on a subperiod (e.g., monthly) basis, with additional benefit for extra release and penalty for failure. The model maximizes the net benefit of irrigation, hydropower generation and flood control in each of the periods. The model design thus facilitates the consistent application of weather and climate forecasts to improve operations of reservoir systems.

  17. Toward a global multi-scale heliophysics observatory

    Science.gov (United States)

    Semeter, J. L.

    2017-12-01

    We live within the only known stellar-planetary system that supports life. What we learn about this system is relevant not only to human society and its expanding reach beyond Earth's surface, but also to our understanding of the origins and evolution of life in the universe. Heliophysics is focused on solar-terrestrial interactions mediated by the magnetic and plasma environment surrounding the planet. A defining feature of energy flow through this environment is interaction across physical scales. A solar disturbance aimed at Earth can excite geospace variability on scales ranging from thousands of kilometers (e.g., global convection, region 1 and 2 currents, electrojet intensifications) to tens of meters (e.g., equatorial spread-F, dispersive Alfven waves, plasma instabilities). Most "geospace observatory" concepts are focused on a single modality (e.g., HF/UHF radar, magnetometer, optical), providing a limited parameter set at a particular spatiotemporal resolution. Data assimilation methods have been developed to couple heterogeneous and distributed observations, but resolution has typically been prescribed a priori and according to physical assumptions. This paper develops a conceptual framework for the next-generation multi-scale heliophysics observatory, capable of revealing and quantifying the complete spectrum of cross-scale interactions occurring globally within the geospace system. The envisioned concept leverages existing assets, enlists citizen scientists, and exploits low-cost access to the geospace environment. Examples are presented where distributed multi-scale observations have resulted in substantial new insight into the inner workings of our stellar-planetary system.

  18. Results of a large-scale randomized behavior change intervention on road safety in Kenya.

    Science.gov (United States)

    Habyarimana, James; Jack, William

    2015-08-25

    Road accidents kill 1.3 million people each year, most in the developing world. We test the efficacy of evocative messages, delivered on stickers placed inside Kenyan matatus, or minibuses, in reducing road accidents. We randomize the intervention, which nudges passengers to complain to their drivers directly, across 12,000 vehicles and find that on average it reduces insurance claims rates of matatus by between one-quarter and one-third and is associated with 140 fewer road accidents per year than predicted. Messages promoting collective action are especially effective, and evocative images are an important motivator. Average maximum speeds and average moving speeds are 1-2 km/h lower in vehicles assigned to treatment. We cannot reject the null hypothesis of no placebo effect. We were unable to discern any impact of a complementary radio campaign on insurance claims. Finally, the sticker intervention is inexpensive: we estimate the cost-effectiveness of the most impactful stickers to be between $10 and $45 per disability-adjusted life-year saved.

  19. Design study on sodium-cooled large-scale reactor

    International Nuclear Information System (INIS)

    Shimakawa, Yoshio; Nibe, Nobuaki; Hori, Toru

    2002-05-01

    In Phase 1 of the 'Feasibility Study on Commercialized Fast Reactor Cycle Systems (F/S)', an advanced loop-type reactor was selected as a promising concept for a sodium-cooled large-scale reactor with the potential to fulfill the design requirements of the F/S. In Phase 2 of the F/S, it is planned to proceed with a preliminary conceptual design of a sodium-cooled large-scale reactor based on the design of the advanced loop-type reactor. Through the design study, it is intended to construct a plant concept that can demonstrate its attractiveness and competitiveness as a commercialized reactor. This report summarizes the results of the design study on the sodium-cooled large-scale reactor performed in JFY2001, the first year of Phase 2. In the JFY2001 design study, a plant concept was constructed based on the design of the advanced loop-type reactor, and fundamental specifications of the main systems and components were set. Furthermore, critical subjects related to safety, structural integrity, thermal hydraulics, operability, maintainability and economy were examined and evaluated. As a result of this study, a plant concept for the sodium-cooled large-scale reactor has been constructed that shows promise of satisfying the economic goal (construction cost: less than 200,000 yen/kWe, etc.) and of resolving the critical subjects. From now on, reflecting the results of elemental experiments, the preliminary conceptual design of this plant will proceed toward the selection for narrowing down candidate concepts at the end of Phase 2. (author)

  20. Iterative learning-based decentralized adaptive tracker for large-scale systems: a digital redesign approach.

    Science.gov (United States)

    Tsai, Jason Sheng-Hong; Du, Yan-Yi; Huang, Pei-Hsiang; Guo, Shu-Mei; Shieh, Leang-San; Chen, Yuhua

    2011-07-01

    In this paper, a digital redesign methodology for an iterative learning-based decentralized adaptive tracker is proposed to improve the dynamic performance of sampled-data linear large-scale control systems consisting of N interconnected multi-input multi-output subsystems, so that the system output will follow any trajectory, even one not initially represented by the analytic reference model. To overcome the interference among subsystems and simplify the controller design, the proposed model reference decentralized adaptive control scheme first constructs a decoupled, well-designed reference model. Then, based on this model, the paper develops a digital decentralized adaptive tracker using optimal analog control and a prediction-based digital redesign technique for the sampled-data large-scale coupled system. In order to enhance the tracking performance of the digital tracker at specified sampling instants, we apply iterative learning control (ILC) to train the control input via continual learning. As a result, the proposed iterative learning-based decentralized adaptive tracker not only has a robust closed-loop decoupling property but also possesses good tracking performance in both the transient and the steady state. In addition, evolutionary programming is applied to search for a good learning gain to speed up the learning process of the ILC. Copyright © 2011 ISA. Published by Elsevier Ltd. All rights reserved.
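
    The ILC ingredient can be illustrated in isolation. Below is a generic P-type learning update on a toy first-order sampled plant (not the paper's large-scale tracker; the plant and gains are illustrative): the same finite-horizon task is repeated, and after each trial the stored input is corrected with the shifted trial error.

```python
import numpy as np

# Toy sampled plant x[t+1] = a x[t] + b u[t], y[t] = x[t].
a, b, gamma, T = 0.3, 0.5, 1.0, 50
t = np.arange(T)
y_ref = np.sin(2 * np.pi * t / T)      # trajectory to be tracked

def run_trial(u):
    x, y = 0.0, np.zeros(T)
    for k in range(T):
        y[k] = x
        x = a * x + b * u[k]
    return y

u = np.zeros(T)
errors = []
for _ in range(30):                    # repeated trials of the same task
    e = y_ref - run_trial(u)
    errors.append(np.linalg.norm(e))
    # P-type ILC update, shifted one step because u[t] first affects y[t+1];
    # it converges here since |1 - gamma*b| < 1.
    u[:-1] += gamma * e[1:]
```

    The trial-to-trial error norm contracts geometrically, which is the "training the control input via continual learning" effect the abstract describes; the paper additionally tunes the learning gain by evolutionary programming.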

  1. Recent Advances in Understanding Large Scale Vapour Explosions

    International Nuclear Information System (INIS)

    Board, S.J.; Hall, R.W.

    1976-01-01

    In foundries, violent explosions occur occasionally when molten metal comes into contact with water. If similar explosions can occur with other materials, hazardous situations may arise, for example in LNG marine transportation accidents, or in liquid-cooled reactor incidents when molten UO2 contacts water or sodium coolant. Over the last 10 years a large body of experimental data has been obtained on the behaviour of small quantities of hot material in contact with a vaporisable coolant. Such experiments generally give low energy yields, despite producing fine fragmentation of the molten material. These events have been interpreted in terms of a wide range of phenomena such as violent boiling, liquid entrainment, bubble collapse, superheat, surface cracking and many others. Many of these studies have been aimed at understanding the small-scale behaviour of the particular materials of interest. However, understanding the nature of the energetic events which were the original cause for concern may also be necessary to give confidence that violent events cannot occur for these materials in large-scale situations. More recently, there has been a trend towards larger experiments and some of these have produced explosions of moderately high efficiency. Although the occurrence of such large-scale explosions can depend rather critically on initial conditions in a way which is not fully understood, there are signs that the interpretation of these events may be more straightforward than that of the single-drop experiments. In the last two years several theoretical models for large-scale explosions have appeared which attempt a self-contained explanation of at least some stages of such high-yield events: these have as their common feature a description of how a propagating breakdown of an initially quasi-stable distribution of materials is induced by the pressure and flow field caused by the energy release in adjacent regions. These models have led to the idea that for a full…

  2. Designing adaptive operating rules for a large multi-purpose reservoir

    Science.gov (United States)

    Geressu, Robel; Rougé, Charles; Harou, Julien

    2017-04-01

    Reservoirs whose live storage capacity is large compared with annual inflow have "memory", i.e., their storage levels contain information about past inflows and reservoir operations. Such "long-memory" reservoirs can be found in basins in dry regions such as the Nile River Basin in Africa, the Colorado River Basin in the US, or river basins in Western and Central Asia. There, the effects of a dry year can impact reservoir levels and downstream releases for several subsequent years, prompting tensions in transboundary basins. Yet current operating rules for those reservoirs do not reflect this: past climate history and release decisions are not among the factors that influence operating decisions. This work proposes and demonstrates an adaptive reservoir operating rule that explicitly accounts for the recent history of release decisions, and not only the current storage level and near-term inflow forecasts. This implies adding long-term (e.g., multi-year) objectives to the existing short-term (e.g., annual) ones. We apply these operating rules to the Grand Ethiopian Renaissance Dam, a large reservoir under construction on the Blue Nile River. Energy generation has to be balanced against the imperative of releasing enough water in low-flow years (e.g., the minimum 1-, 2- or 3-year cumulative flow) to avoid tensions with the downstream countries, Sudan and Egypt. Maximizing the minimum multi-year releases could be of interest in the Nile context to minimize the impact on the performance of the large High Aswan Dam in Egypt. Objectives include maximizing the average and minimum annual energy generation and maximizing the minimum annual, two-year and three-year cumulative releases. The system model is tested using 30 stochastically generated streamflow series. One can then derive adaptive release rules that depend on the values of the one- and two-year total releases with respect to thresholds, giving three sets of release rules for the reservoir.
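
    A threshold-based rule of the kind described, in which the current release is conditioned on the trailing release history as well as on storage, might look like the following sketch. The function name, threshold values and units are hypothetical, not from the paper:

```python
def adaptive_release(storage, last_year_release, demand,
                     one_yr_min=30.0, two_yr_min=65.0):
    """Hypothetical adaptive rule: release depends not only on current
    storage and demand but on last year's release, topping releases up
    when a trailing multi-year minimum commitment is at risk.
    Units are arbitrary volumes per year."""
    # floor implied by the one-year and trailing two-year commitments
    needed = max(one_yr_min, two_yr_min - last_year_release)
    # serve demand if possible, raise to the commitment floor,
    # but never release more than is in storage
    return min(storage, max(min(demand, storage), needed))

r1 = adaptive_release(100.0, 20.0, 25.0)   # dry prior year: rule tops up
r2 = adaptive_release(100.0, 40.0, 25.0)   # wet prior year: only annual floor binds
r3 = adaptive_release(10.0, 20.0, 25.0)    # storage-limited: commitment unmet
```

    The point of the sketch is the memory term: two states with the same storage and demand release different volumes depending on the previous year, which is exactly what static storage-based rule curves cannot do.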

  3. Multi-scale evaluations of submarine groundwater discharge

    Directory of Open Access Journals (Sweden)

    M. Taniguchi

    2015-03-01

    Full Text Available Multi-scale evaluations of submarine groundwater discharge (SGD) have been made in Saijo, Ehime Prefecture, Shikoku Island, Japan, using seepage meters at the point scale, the 222Rn tracer at the point and coastal scales, and a numerical groundwater model (SEAWAT) at the coastal and basin scales. Daily temporal changes in SGD were evaluated by continuous seepage-meter and 222Rn mooring measurements, and depend on sea-level changes. Spatial evaluations of SGD were also made with 222Rn along the coast in July 2010 and November 2011. The area with the larger 222Rn concentration during both seasons agreed well with the area with the larger SGD calculated by 3D groundwater numerical simulations.

  4. Topological relics of symmetry breaking: winding numbers and scaling tilts from random vortex–antivortex pairs

    International Nuclear Information System (INIS)

    Zurek, W H

    2013-01-01

    I show that random distributions of vortex–antivortex pairs (rather than of individual vortices) lead to scaling of typical winding numbers W trapped inside a loop of circumference C with the square root of that circumference, W ∼ √C, when the expected winding numbers are large, |W| ≫ 1. Such scaling is consistent with the Kibble–Zurek mechanism (KZM), with ⟨W²⟩ inversely proportional to ξ̂, the typical size of the domain that can break symmetry in unison. (The dependence of ξ̂ on quench rate is predicted by KZM from critical exponents of the phase transition.) Thus, according to KZM, the dispersion √⟨W²⟩ scales as √(C/ξ̂) for large W. By contrast, a distribution of individual vortices with randomly assigned topological charges would result in the dispersion scaling with the square root of the area inside C (i.e., √⟨W²⟩ ∼ C). Scaling of the dispersion of W, as well as of the probability of detection of non-zero W, with C and ξ̂ can also be studied for loops so small that non-zero windings are rare. In this case I show that the dispersion varies not as 1/√ξ̂, but as 1/ξ̂, which results in a doubling of the scaling of the dispersion with the quench rate when compared to the large-|W| regime. Moreover, the probability of trapping of non-zero W becomes approximately equal to ⟨W²⟩, and scales as 1/ξ̂². This quadruples, as compared with √⟨W²⟩ ≃ √(C/ξ̂) valid for large W, the exponent in the power-law dependence of the frequency of trapping of |W| = 1 on ξ̂ when the probability of |W| > 1 is negligible. This change of the power-law exponent by a factor of four, from 1/√ξ̂ for the dispersion of large W to 1/ξ̂² for the frequency of non-zero W when |W| > 1 is negligibly rare, is of paramount importance for experimental tests of KZM. (paper)
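
    The perimeter-law versus area-law contrast can be checked with a small Monte Carlo experiment: scatter vortex-antivortex pairs at random, count the net charge inside circular loops of two sizes, and compare how <W^2> grows. Only pairs straddling the loop boundary contribute, so <W^2> tracks the circumference, not the area. Geometry, densities and the pair separation below are illustrative choices, not from the paper:

```python
import numpy as np

rng = np.random.default_rng(3)
n_pairs, sep, trials = 20_000, 0.01, 300   # pairs in [0,1]^2, small separation

def winding_pairs(radius):
    """Net charge W inside a circle (centre 0.5,0.5) from random +/- pairs."""
    c = rng.random((n_pairs, 2))                       # pair centres
    th = rng.random(n_pairs) * 2 * np.pi               # pair orientations
    off = 0.5 * sep * np.column_stack((np.cos(th), np.sin(th)))

    def inside(p):
        return ((p - 0.5) ** 2).sum(axis=1) < radius**2

    # + endpoint counts +1 if inside, - endpoint counts -1 if inside;
    # pairs fully inside or fully outside cancel exactly.
    return (inside(c + off).astype(int) - inside(c - off).astype(int)).sum()

w2 = {r: np.mean([winding_pairs(r) ** 2 for _ in range(trials)])
      for r in (0.15, 0.30)}
ratio = w2[0.30] / w2[0.15]   # circumference ratio is 2, area ratio is 4
```

    Doubling the loop radius roughly doubles <W^2> (perimeter law); independently charged vortices would instead quadruple it (area law), which is the distinction the abstract builds on.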

  5. Heat and mass transfer intensification and shape optimization a multi-scale approach

    CERN Document Server

    2013-01-01

    Is the heat and mass transfer intensification defined as a new paradigm of process engineering, or is it just a common and old idea, renamed and given the current taste? Where might intensification occur? How to achieve intensification? How the shape optimization of thermal and fluidic devices leads to intensified heat and mass transfers? To answer these questions, Heat & Mass Transfer Intensification and Shape Optimization: A Multi-scale Approach clarifies  the definition of the intensification by highlighting the potential role of the multi-scale structures, the specific interfacial area, the distribution of driving force, the modes of energy supply and the temporal aspects of processes.   A reflection on the methods of process intensification or heat and mass transfer enhancement in multi-scale structures is provided, including porous media, heat exchangers, fluid distributors, mixers and reactors. A multi-scale approach to achieve intensification and shape optimization is developed and clearly expla...

  6. Ion beam analysis techniques applied to large scale pollution studies

    Energy Technology Data Exchange (ETDEWEB)

    Cohen, D D; Bailey, G; Martin, J; Garton, D; Noorman, H; Stelcer, E; Johnson, P [Australian Nuclear Science and Technology Organisation, Lucas Heights, NSW (Australia)

    1994-12-31

    Ion Beam Analysis (IBA) techniques are ideally suited to analysing the thousands of filter papers a year that may originate from a large scale aerosol sampling network. They are fast, multi-elemental and, for the most part, non-destructive, so other analytical methods such as neutron activation and ion chromatography can be performed afterwards. ANSTO, in collaboration with the NSW EPA, Pacific Power and the Universities of NSW and Macquarie, has established a large area fine aerosol sampling network covering nearly 80,000 square kilometres of NSW with 25 fine particle samplers. This network, known as ASP, was funded by the Energy Research and Development Corporation (ERDC) and commenced sampling on 1 July 1991. The cyclone sampler at each site has a 2.5 µm particle diameter cut-off and runs for 24 hours every Sunday and Wednesday, using one Gillman 25 mm diameter stretched Teflon filter for each day. These filters are ideal targets for ion beam analysis work. Currently ANSTO receives 300 filters per month from this network for analysis using its accelerator-based ion beam techniques on the 3 MV Van de Graaff accelerator. One week a month of accelerator time is dedicated to this analysis. Four simultaneous accelerator-based IBA techniques are used at ANSTO to analyse for the following 24 elements: H, C, N, O, F, Na, Al, Si, P, S, Cl, K, Ca, Ti, V, Cr, Mn, Fe, Cu, Ni, Co, Zn, Br and Pb. The IBA techniques proved invaluable in identifying sources of fine particles and their spatial and seasonal variations across the large area sampled by the ASP network. 3 figs.

  7. Ion beam analysis techniques applied to large scale pollution studies

    Energy Technology Data Exchange (ETDEWEB)

    Cohen, D.D.; Bailey, G.; Martin, J.; Garton, D.; Noorman, H.; Stelcer, E.; Johnson, P. [Australian Nuclear Science and Technology Organisation, Lucas Heights, NSW (Australia)

    1993-12-31

    Ion Beam Analysis (IBA) techniques are ideally suited to analysing the thousands of filter papers a year that may originate from a large scale aerosol sampling network. They are fast, multi-elemental and, for the most part, non-destructive, so other analytical methods such as neutron activation and ion chromatography can be performed afterwards. ANSTO, in collaboration with the NSW EPA, Pacific Power and the Universities of NSW and Macquarie, has established a large area fine aerosol sampling network covering nearly 80,000 square kilometres of NSW with 25 fine particle samplers. This network, known as ASP, was funded by the Energy Research and Development Corporation (ERDC) and commenced sampling on 1 July 1991. The cyclone sampler at each site has a 2.5 µm particle diameter cut-off and runs for 24 hours every Sunday and Wednesday, using one Gillman 25 mm diameter stretched Teflon filter for each day. These filters are ideal targets for ion beam analysis work. Currently ANSTO receives 300 filters per month from this network for analysis using its accelerator-based ion beam techniques on the 3 MV Van de Graaff accelerator. One week a month of accelerator time is dedicated to this analysis. Four simultaneous accelerator-based IBA techniques are used at ANSTO to analyse for the following 24 elements: H, C, N, O, F, Na, Al, Si, P, S, Cl, K, Ca, Ti, V, Cr, Mn, Fe, Cu, Ni, Co, Zn, Br and Pb. The IBA techniques proved invaluable in identifying sources of fine particles and their spatial and seasonal variations across the large area sampled by the ASP network. 3 figs.

  8. Japanese large-scale interferometers

    CERN Document Server

    Kuroda, K; Miyoki, S; Ishizuka, H; Taylor, C T; Yamamoto, K; Miyakawa, O; Fujimoto, M K; Kawamura, S; Takahashi, R; Yamazaki, T; Arai, K; Tatsumi, D; Ueda, A; Fukushima, M; Sato, S; Shintomi, T; Yamamoto, A; Suzuki, T; Saitô, Y; Haruyama, T; Sato, N; Higashi, Y; Uchiyama, T; Tomaru, T; Tsubono, K; Ando, M; Takamori, A; Numata, K; Ueda, K I; Yoneda, H; Nakagawa, K; Musha, M; Mio, N; Moriwaki, S; Somiya, K; Araya, A; Kanda, N; Telada, S; Sasaki, M; Tagoshi, H; Nakamura, T; Tanaka, T; Ohara, K

    2002-01-01

    The objective of the TAMA 300 interferometer was to develop advanced technologies for kilometre scale interferometers and to observe gravitational wave events in nearby galaxies. It was designed as a power-recycled Fabry-Perot-Michelson interferometer and was intended as a step towards a final interferometer in Japan. The current successful status of TAMA is presented. TAMA forms a basis for LCGT (large-scale cryogenic gravitational wave telescope), a 3 km scale cryogenic interferometer to be built in the Kamioka mine in Japan, implementing cryogenic mirror techniques. The plan of LCGT is schematically described along with its associated R&D.

  9. Large-scale and Long-duration Simulation of a Multi-stage Eruptive Solar Event

    Science.gov (United States)

    Jiang, Chaowei; Hu, Qiang; Wu, S. T.

    2015-04-01

    We employ a data-driven 3D MHD active region evolution model by using the Conservation Element and Solution Element (CESE) numerical method. This newly developed model retains the full MHD effects, allowing time-dependent boundary conditions and time evolution studies. The time-dependent simulation is driven by measured vector magnetograms and the method of MHD characteristics on the bottom boundary. We have applied the model to investigate the coronal magnetic field evolution of AR11283 which was characterized by a pre-existing sigmoid structure in the core region and multiple eruptions, both in relatively small and large scales. We have succeeded in producing the core magnetic field structure and the subsequent eruptions of flux-rope structures (see https://dl.dropboxusercontent.com/u/96898685/large.mp4 for an animation) as the measured vector magnetograms on the bottom boundary evolve in time with constant flux emergence. The whole process, lasting for about an hour in real time, compares well with the corresponding SDO/AIA and coronagraph imaging observations. From these results, we show the capability of the model, largely data-driven, that is able to simulate complex, topological, and highly dynamic active region evolutions. (We acknowledge partial support of NSF grants AGS 1153323 and AGS 1062050, and data support from SDO/HMI and AIA teams).

  10. A large-scale multi-objective flights conflict avoidance approach supporting 4D trajectory operation

    OpenAIRE

    Guan, Xiangmin; Zhang, Xuejun; Lv, Renli; Chen, Jun; Weiszer, Michal

    2017-01-01

    Recently, the long-term conflict avoidance approaches based on large-scale flights scheduling have attracted much attention due to their ability to provide solutions from a global point of view. However, the current approaches which focus only on a single objective with the aim of minimizing the total delay and the number of conflicts, cannot provide the controllers with variety of optional solutions, representing different trade-offs. Furthermore, the flight track error is often overlooked i...

  11. Multi-scale theoretical investigation of hydrogen storage in covalent organic frameworks.

    Science.gov (United States)

    Tylianakis, Emmanuel; Klontzas, Emmanouel; Froudakis, George E

    2011-03-01

    The quest for efficient hydrogen storage materials has been the limiting step towards the commercialization of hydrogen as an energy carrier and has attracted a lot of attention from the scientific community. Sophisticated multi-scale theoretical techniques have been considered a valuable tool for the prediction of materials' storage properties. Such techniques have also been used for the investigation of hydrogen storage in a novel category of porous materials known as Covalent Organic Frameworks (COFs). These framework materials consist of light elements and are characterized by exceptional physicochemical properties such as large surface areas and pore volumes. Combinations of ab initio, Molecular Dynamics (MD) and Grand Canonical Monte-Carlo (GCMC) calculations have been performed to investigate hydrogen adsorption in these ultra-light materials. The purpose of the present review is to summarize the theoretical hydrogen storage studies that have been published since the discovery of COFs. Experimental and theoretical studies have proven that COFs have comparable or better hydrogen storage abilities than other competitive materials such as MOFs. The key factors that can lead to the improvement of the hydrogen storage properties of COFs are highlighted, accompanied by some recently presented theoretical multi-scale studies concerning these factors.

  12. Large-scale analysis of phosphorylation site occupancy in eukaryotic proteins

    DEFF Research Database (Denmark)

    Rao, R Shyama Prasad; Møller, Ian Max

    2012-01-01

    in proteins is currently lacking. We have therefore analyzed the occurrence and occupancy of phosphorylated sites (~ 100,281) in a large set of eukaryotic proteins (~ 22,995). Phosphorylation probability was found to be much higher in both the  termini of protein sequences and this is much pronounced...... maximum randomness. An analysis of phosphorylation motifs indicated that just 40 motifs and a much lower number of associated kinases might account for nearly 50% of the known phosphorylations in eukaryotic proteins. Our results provide a broad picture of the phosphorylation sites in eukaryotic proteins.......Many recent high throughput technologies have enabled large-scale discoveries of new phosphorylation sites and phosphoproteins. Although they have provided a number of insights into protein phosphorylation and the related processes, an inclusive analysis on the nature of phosphorylated sites...

  13. Constructing sites on a large scale

    DEFF Research Database (Denmark)

    Braae, Ellen Marie; Tietjen, Anne

    2011-01-01

    Since the 1990s, the regional scale has regained importance in urban and landscape design. In parallel, the focus in design tasks has shifted from master plans for urban extension to strategic urban transformation projects. A prominent example of a contemporary spatial development approach...... for setting the design brief in a large scale urban landscape in Norway, the Jaeren region around the city of Stavanger. In this paper, we first outline the methodological challenges and then present and discuss the proposed method based on our teaching experiences. On this basis, we discuss aspects...... is the IBA Emscher Park in the Ruhr area in Germany. Over a 10 years period (1988-1998), more than a 100 local transformation projects contributed to the transformation from an industrial to a post-industrial region. The current paradigm of planning by projects reinforces the role of the design disciplines...

  14. SOMAR-LES: A framework for multi-scale modeling of turbulent stratified oceanic flows

    Science.gov (United States)

    Chalamalla, Vamsi K.; Santilli, Edward; Scotti, Alberto; Jalali, Masoud; Sarkar, Sutanu

    2017-12-01

    A new multi-scale modeling technique, SOMAR-LES, is presented in this paper. Localized grid refinement gives SOMAR (the Stratified Ocean Model with Adaptive Resolution) access to small scales of the flow which are normally inaccessible to general circulation models (GCMs). SOMAR-LES drives a LES (Large Eddy Simulation) on SOMAR's finest grids, forced with large scale forcing from the coarser grids. Three-dimensional simulations of internal tide generation, propagation and scattering are performed to demonstrate this multi-scale modeling technique. In the case of internal tide generation at a two-dimensional bathymetry, SOMAR-LES is able to balance the baroclinic energy budget and accurately model turbulence losses at only 10% of the computational cost required by a non-adaptive solver running at SOMAR-LES's fine grid resolution. This relative cost is significantly reduced in situations with intermittent turbulence or where the location of the turbulence is not known a priori because SOMAR-LES does not require persistent, global, high resolution. To illustrate this point, we consider a three-dimensional bathymetry with grids adaptively refined along the tidally generated internal waves to capture remote mixing in regions of wave focusing. The computational cost in this case is found to be nearly 25 times smaller than that of a non-adaptive solver at comparable resolution. In the final test case, we consider the scattering of a mode-1 internal wave at an isolated two-dimensional and three-dimensional topography, and we compare the results with the numerical experiments of Legg (2014). We find good agreement with theoretical estimates. SOMAR-LES is less dissipative than the closure scheme employed by Legg (2014) near the bathymetry. Depending on the flow configuration and resolution employed, a reduction of more than an order of magnitude in computational costs is expected, relative to traditional existing solvers.

  15. Regional-scale assessment of soil salinity in the Red River Valley using multi-year MODIS EVI and NDVI.

    Science.gov (United States)

    Lobell, D B; Lesch, S M; Corwin, D L; Ulmer, M G; Anderson, K A; Potts, D J; Doolittle, J A; Matos, M R; Baltes, M J

    2010-01-01

    The ability to inventory and map soil salinity at regional scales remains a significant challenge to scientists concerned with the salinization of agricultural soils throughout the world. Previous attempts to use satellite or aerial imagery to assess soil salinity have found limited success in part because of the inability of methods to isolate the effects of soil salinity on vegetative growth from other factors. This study evaluated the use of Moderate Resolution Imaging Spectroradiometer (MODIS) imagery in conjunction with directed soil sampling to assess and map soil salinity at a regional scale (i.e., 10 to 10^5 km^2) in a parsimonious manner. Correlations with three soil salinity ground truth datasets differing in scale were made in Kittson County within the Red River Valley (RRV) of North Dakota and Minnesota, an area where soil salinity assessment is a top priority for the Natural Resource Conservation Service (NRCS). Multi-year MODIS imagery was used to mitigate the influence of temporally dynamic factors such as weather, pests, disease, and management influences. The average of the MODIS enhanced vegetation index (EVI) for a 7-yr period exhibited a strong relationship with soil salinity in all three datasets, and outperformed the normalized difference vegetation index (NDVI). One-third to one-half of the spatial variability in soil salinity could be captured by measuring average MODIS EVI and whether the land qualified for the Conservation Reserve Program (a USDA program that sets aside marginally productive land based on conservation principles). The approach has the practical simplicity to allow broad application in areas where limited resources are available for salinity assessment.
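
The core statistical idea of this record, averaging a vegetation index across years so that weather, pest, and management noise cancels while the persistent salinity signal remains, can be sketched on synthetic data. All numbers below are illustrative assumptions, not the study's data:

```python
import random

def pearson(xs, ys):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

random.seed(0)
n_sites, n_years = 100, 7
salinity = [random.uniform(0.0, 10.0) for _ in range(n_sites)]  # synthetic
# Yearly EVI = a persistent (negative) salinity effect plus large
# year-to-year noise standing in for weather, pests and management.
evi = [[0.6 - 0.03 * s + random.gauss(0.0, 0.15) for _ in range(n_years)]
       for s in salinity]

single_year = [site[0] for site in evi]
multi_year = [sum(site) / n_years for site in evi]
print(f"r(salinity, 1-yr EVI)  = {pearson(salinity, single_year):.2f}")
print(f"r(salinity, 7-yr mean) = {pearson(salinity, multi_year):.2f}")
```

The 7-year mean recovers a much stronger (negative) correlation than any single year, mirroring the paper's finding that multi-year average EVI captured one-third to one-half of the spatial salinity variability.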

  16. Meteorological impact assessment of possible large scale irrigation in Southwest Saudi Arabia

    NARCIS (Netherlands)

    Maat, ter H.W.; Hutjes, R.W.A.; Ohba, R.; Ueda, H.; Bisselink, B.; Bauer, T.

    2006-01-01

    On continental to regional scales, feedbacks between land use and land cover change and climate have been widely documented over the past 10-15 years. In the present study we explore the possibility that vegetation changes over much smaller areas may also affect local precipitation regimes. Large scale

  17. Multi-scale Modelling of Segmentation

    DEFF Research Database (Denmark)

    Hartmann, Martin; Lartillot, Olivier; Toiviainen, Petri

    2016-01-01

    pieces. In a second experiment on non-real-time segmentation, musicians indicated boundaries and their strength for six examples. Kernel density estimation was used to develop multi-scale segmentation models. Contrary to previous research, no relationship was found between boundary strength and boundary......While listening to music, people often unwittingly break down musical pieces into constituent chunks such as verses and choruses. Music segmentation studies have suggested that some consensus regarding boundary perception exists, despite individual differences. However, neither the effects...

  18. Numerical simulation of multi-directional random wave transformation in a yacht port

    Science.gov (United States)

    Ji, Qiaoling; Dong, Sheng; Zhao, Xizeng; Zhang, Guowei

    2012-09-01

    This paper extends a prediction model for multi-directional random wave transformation based on an energy balance equation by Mase with the consideration of wave shoaling, refraction, diffraction, reflection and breaking. This numerical model is improved by 1) introducing Wen's frequency spectrum and Mitsuyasu's directional function, which are more suitable to the coastal area of China; 2) considering energy dissipation caused by bottom friction, which ensures more accurate results for large-scale and shallow water areas; 3) taking into account a non-linear dispersion relation. Predictions using the extended wave model are carried out to study the feasibility of constructing the Ai Hua yacht port in Qingdao, China, with a comparison between two port layouts in design. Wave fields inside the port for different incident wave directions, water levels and return periods are simulated, and then two kinds of parameters are calculated to evaluate the wave conditions for the two layouts. Analyses show that Layout I is better than Layout II. Calculation results also show that the harbor will be calm for different wave directions under the design water level. On the contrary, the wave conditions do not wholly meet the requirements of a yacht port for ship berthing under the extreme water level. For safety consideration, the elevation of the breakwater might need to be properly increased to prevent wave overtopping under such water level. The extended numerical simulation model may provide an effective approach to computing wave heights in a harbor.

  19. Multi-scale Food Energy and Water Dynamics in the Blue Nile Highlands

    Science.gov (United States)

    Zaitchik, B. F.; Simane, B.; Block, P. J.; Foltz, J.; Mueller-Mahn, D.; Gilioli, G.; Sciarretta, A.

    2017-12-01

    The Ethiopian highlands are often called the "water tower of Africa," giving rise to major transboundary rivers. Rapid hydropower development is quickly transforming these highlands into the "power plant of Africa" as well. For local people, however, they are first and foremost a land of small farms, devoted primarily to subsistence agriculture. Under changing climate, rapid national economic growth, and steadily increasing population and land pressures, these mountains and their inhabitants have become the focal point of a multi-scale food-energy-water nexus with significant implications across East Africa. Here we examine coupled natural-human system dynamics that emerge when basin and nation scale resource development strategies are superimposed on a local economy that is largely subsistence based. Sensitivity to local and remote climate shocks are considered, as is the role of Earth Observation in understanding and informing management of food-energy-water resources across scales.

  20. Large scale model testing

    International Nuclear Information System (INIS)

    Brumovsky, M.; Filip, R.; Polachova, H.; Stepanek, S.

    1989-01-01

    Fracture mechanics and fatigue calculations for WWER reactor pressure vessels were checked by large scale model testing performed using large testing machine ZZ 8000 (with a maximum load of 80 MN) at the SKODA WORKS. The results are described from testing the material resistance to fracture (non-ductile). The testing included the base materials and welded joints. The rated specimen thickness was 150 mm with defects of a depth between 15 and 100 mm. The results are also presented of nozzles of 850 mm inner diameter in a scale of 1:3; static, cyclic, and dynamic tests were performed without and with surface defects (15, 30 and 45 mm deep). During cyclic tests the crack growth rate in the elastic-plastic region was also determined. (author). 6 figs., 2 tabs., 5 refs

  1. Finite element analysis of multi-material models using a balancing domain decomposition method combined with the diagonal scaling preconditioner

    International Nuclear Information System (INIS)

    Ogino, Masao

    2016-01-01

    Actual problems in science and industrial applications are modeled by multi-materials and large-scale unstructured meshes, and the finite element method has been widely used to solve such problems on parallel computers. However, for large-scale problems, iterative solvers for the linear finite element equations can suffer from slow convergence or fail to converge. Therefore, numerical methods having both robust convergence and scalable parallel efficiency are in great demand. The domain decomposition method is well known as an iterative substructuring method, and is an efficient approach for parallel finite element methods. Moreover, the balancing preconditioner achieves robust convergence. However, for problems consisting of very different materials, convergence deteriorates. Some research has addressed this issue, but the proposed remedies are not suitable for complex shapes and composite materials. In this study, to improve the convergence of the balancing preconditioner for multi-materials, a balancing preconditioner combined with the diagonal scaling preconditioner, called the Scaled-BDD method, is proposed. Some numerical results are included which indicate that the proposed method converges robustly with respect to the number of subdomains and shows high performance compared with the original balancing preconditioner. (author)
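
The effect this record exploits, diagonal scaling compensating large material-coefficient jumps, can be illustrated with a generic preconditioned conjugate-gradient sketch. This is a toy 1D two-material problem, not the Scaled-BDD method or its finite element setting, and all names are illustrative:

```python
def pcg(A, b, M_inv=None, tol=1e-6, max_iter=1000):
    """Conjugate gradients for a dense SPD matrix A (lists of lists),
    optionally preconditioned by the diagonal matrix with entries M_inv."""
    n = len(b)
    x = [0.0] * n
    r = list(b)
    z = [M_inv[i] * r[i] for i in range(n)] if M_inv else list(r)
    p = list(z)
    rz = sum(r[i] * z[i] for i in range(n))
    for it in range(1, max_iter + 1):
        Ap = [sum(A[i][j] * p[j] for j in range(n)) for i in range(n)]
        alpha = rz / sum(p[i] * Ap[i] for i in range(n))
        x = [x[i] + alpha * p[i] for i in range(n)]
        r = [r[i] - alpha * Ap[i] for i in range(n)]
        if sum(ri * ri for ri in r) ** 0.5 < tol:
            return x, it
        z = [M_inv[i] * r[i] for i in range(n)] if M_inv else list(r)
        rz_new = sum(r[i] * z[i] for i in range(n))
        p = [z[i] + (rz_new / rz) * p[i] for i in range(n)]
        rz = rz_new
    return x, max_iter

# 1D Poisson-type problem with a 1000:1 jump in the material coefficient,
# a crude stand-in for a multi-material finite element system.
n = 40
k = [1.0 if i < n // 2 else 1000.0 for i in range(n + 1)]  # interface coeffs
A = [[0.0] * n for _ in range(n)]
for i in range(n):
    A[i][i] = k[i] + k[i + 1]
    if i > 0:
        A[i][i - 1] = -k[i]
    if i < n - 1:
        A[i][i + 1] = -k[i + 1]
b = [1.0] * n
jacobi = [1.0 / A[i][i] for i in range(n)]  # diagonal scaling preconditioner

x_plain, it_plain = pcg(A, b)
x_prec, it_prec = pcg(A, b, jacobi)
print(f"plain CG iterations: {it_plain}, diagonally scaled: {it_prec}")
```

With the coefficient jump present, the diagonally scaled run typically needs noticeably fewer iterations, which is the behaviour the Scaled-BDD work builds on at much larger scale.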

  2. Distributed large-scale dimensional metrology new insights

    CERN Document Server

    Franceschini, Fiorenzo; Maisano, Domenico

    2011-01-01

    Focuses on the latest insights into and challenges of distributed large scale dimensional metrology. Enables practitioners to study distributed large scale dimensional metrology independently. Includes specific examples of the development of new system prototypes.

  3. Multi-scale high-performance fluid flow: Simulations through porous media

    KAUST Repository

    Perović, Nevena

    2016-08-03

    Computational fluid dynamic (CFD) calculations on geometrically complex domains such as porous media require high geometric discretisation for accurately capturing the tested physical phenomena. Moreover, when considering a large area and analysing local effects, it is necessary to deploy a multi-scale approach that is both memory-intensive and time-consuming. Hence, this type of analysis must be conducted on a high-performance parallel computing infrastructure. In this paper, the coupling of two different scales based on the Navier–Stokes equations and Darcy's law is described, followed by the generation of complex geometries, and their discretisation and numerical treatment. Subsequently, the necessary parallelisation techniques and a rather specific tool, which is capable of retrieving data from the supercomputing servers and visualising them during the computation runtime (i.e. in situ) are described. All advantages and possible drawbacks of this approach, together with the preliminary results and sensitivity analyses are discussed in detail.
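
The coarse scale of the coupling rests on Darcy's law, q = -(k/μ)∇p. A minimal sketch of the generic textbook relations, not the paper's solver, showing how layered fine-scale permeabilities upscale into one effective Darcy coefficient:

```python
def darcy_flux(k, dp, length, mu=1.0):
    """Darcy velocity through a homogeneous block:
    q = k * dp / (mu * L), for a pressure drop dp over length L."""
    return k * dp / (mu * length)

def effective_k_series(layers):
    """Effective permeability of layers in series (flow normal to the
    layering): the harmonic mean  L_total / sum(L_i / k_i)."""
    total = sum(length for length, _k in layers)
    return total / sum(length / k for length, k in layers)

# Two equal layers, k = 1 and k = 3: the harmonic mean (1.5) is pulled
# towards the low-permeability layer, which is why fine-scale detail
# matters when coupling Darcy flow to a resolved Navier-Stokes region.
layers = [(0.5, 1.0), (0.5, 3.0)]
k_eff = effective_k_series(layers)
print(k_eff, darcy_flux(k_eff, dp=10.0, length=1.0))
```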

  4. Multi-scale high-performance fluid flow: Simulations through porous media

    KAUST Repository

    Perović, Nevena; Frisch, Jérôme; Salama, Amgad; Sun, Shuyu; Rank, Ernst; Mundani, Ralf Peter

    2016-01-01

    Computational fluid dynamic (CFD) calculations on geometrically complex domains such as porous media require high geometric discretisation for accurately capturing the tested physical phenomena. Moreover, when considering a large area and analysing local effects, it is necessary to deploy a multi-scale approach that is both memory-intensive and time-consuming. Hence, this type of analysis must be conducted on a high-performance parallel computing infrastructure. In this paper, the coupling of two different scales based on the Navier–Stokes equations and Darcy's law is described, followed by the generation of complex geometries, and their discretisation and numerical treatment. Subsequently, the necessary parallelisation techniques and a rather specific tool, which is capable of retrieving data from the supercomputing servers and visualising them during the computation runtime (i.e. in situ) are described. All advantages and possible drawbacks of this approach, together with the preliminary results and sensitivity analyses are discussed in detail.

  5. LARGE-SCALE TOPOLOGICAL PROPERTIES OF MOLECULAR NETWORKS.

    Energy Technology Data Exchange (ETDEWEB)

    Maslov, S.; Sneppen, K.

    2003-11-17

    Bio-molecular networks lack top-down design. Instead, selective forces of biological evolution shape them from raw material provided by random events such as gene duplications and single-gene mutations. As a result, individual connections in these networks are characterized by a large degree of randomness. One may wonder which connectivity patterns are indeed random, and which arose due to network growth, evolution, and/or fundamental design principles and limitations. Here we introduce a general method allowing one to construct a random null-model version of a given network while preserving the desired set of its low-level topological features, such as, e.g., the number of neighbors of individual nodes, the average level of modularity, or preferential connections between particular groups of nodes. Such a null-model network can then be used to detect and quantify the non-random topological patterns present in large networks. In particular, we measured correlations between the degrees of interacting nodes in protein interaction and regulatory networks in yeast. It was found that in both of these networks, links between highly connected proteins are systematically suppressed. This effect decreases the likelihood of cross-talk between different functional modules of the cell, and increases the overall robustness of a network by localizing the effects of deleterious perturbations. It also teaches us about the overall computational architecture of such networks and points at the origin of the large differences in the number of neighbors of individual nodes.
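
The null-model construction this record describes, randomizing connections while preserving each node's number of neighbors, is commonly implemented with double-edge swaps. A minimal sketch of that generic algorithm (not the authors' code):

```python
import random

def degree_preserving_null_model(edges, n_swaps, seed=0):
    """Randomize an undirected simple graph with repeated double-edge swaps:
    pick edges (a,b) and (c,d) and rewire them to (a,d) and (c,b). Every
    node keeps its degree, so the null model's degree sequence is exact."""
    rng = random.Random(seed)
    edges = [tuple(e) for e in edges]
    present = {frozenset(e) for e in edges}
    done = 0
    for _ in range(100 * n_swaps):  # cap attempts to guarantee termination
        if done == n_swaps:
            break
        i, j = rng.sample(range(len(edges)), 2)
        a, b = edges[i]
        c, d = edges[j]
        if rng.random() < 0.5:  # both pairings of the four endpoints occur
            c, d = d, c
        if len({a, b, c, d}) < 4:
            continue  # swap would create a self-loop
        if frozenset((a, d)) in present or frozenset((c, b)) in present:
            continue  # swap would create a parallel edge
        present -= {frozenset((a, b)), frozenset((c, d))}
        present |= {frozenset((a, d)), frozenset((c, b))}
        edges[i], edges[j] = (a, d), (c, b)
        done += 1
    return edges

def degrees(edge_list):
    """Degree of every node appearing in the edge list."""
    deg = {}
    for u, v in edge_list:
        deg[u] = deg.get(u, 0) + 1
        deg[v] = deg.get(v, 0) + 1
    return deg

# Example: randomize a small random graph; degrees are unchanged even
# though the wiring is shuffled.
rng = random.Random(7)
g = [(u, v) for u in range(30) for v in range(u + 1, 30) if rng.random() < 0.2]
null = degree_preserving_null_model(g, n_swaps=5 * len(g))
print(degrees(g) == degrees(null))
```

Comparing a measured pattern (e.g. degree-degree correlations) against an ensemble of such rewired networks is what lets the authors call the suppression of hub-hub links non-random.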

  6. Literature Review: Herbal Medicine Treatment after Large-Scale Disasters.

    Science.gov (United States)

    Takayama, Shin; Kaneko, Soichiro; Numata, Takehiro; Kamiya, Tetsuharu; Arita, Ryutaro; Saito, Natsumi; Kikuchi, Akiko; Ohsawa, Minoru; Kohayagawa, Yoshitaka; Ishii, Tadashi

    2017-01-01

    Large-scale natural disasters, such as earthquakes, tsunamis, volcanic eruptions, and typhoons, occur worldwide. After the Great East Japan earthquake and tsunami, our medical support operation's experiences suggested that traditional medicine might be useful for treating the various symptoms of the survivors. However, little information is available regarding herbal medicine treatment in such situations. Considering that further disasters will occur, we performed a literature review and summarized the traditional medicine approaches for treatment after large-scale disasters. We searched PubMed and Cochrane Library for articles written in English, and Ichushi for those written in Japanese. Articles published before 31 March 2016 were included. Keywords "disaster" and "herbal medicine" were used in our search. Among studies involving herbal medicine after a disaster, we found two randomized controlled trials investigating post-traumatic stress disorder (PTSD), three retrospective investigations of trauma or common diseases, and seven case series or case reports of dizziness, pain, and psychosomatic symptoms. In conclusion, herbal medicine has been used to treat trauma, PTSD, and other symptoms after disasters. However, few articles have been published, likely due to the difficulty in designing high quality studies in such situations. Further study will be needed to clarify the usefulness of herbal medicine after disasters.

  7. Comparison of single- and multi-scale models for the prediction of the Culicoides biting midge distribution in Germany

    Directory of Open Access Journals (Sweden)

    Renke Lühken

    2016-05-01

    Full Text Available This study analysed Culicoides presence-absence data from 46 sampling sites in Germany, where monitoring was carried out from April 2007 until May 2008. Culicoides presence-absence data were analysed in relation to land cover data, in order to study whether the prevalence of biting midges is correlated with the land cover around the trapping sites. We differentiated eight scales, i.e. buffer zones with radii of 0.5, 1, 2, 3, 4, 5, 7.5 and 10 km around each site, and chose several land cover variables. For each species, we built eight single-scale models (i.e. predictor variables from one of the eight scales for each model) based on averaged, generalised linear models, and two multi-scale models (i.e. predictor variables from all of the eight scales) based on averaged, generalised linear models and generalised linear models with random forest variable selection. There were no significant differences between the performance indicators of models built with land cover data from different buffer zones around the trapping sites. However, the overall performance of the multi-scale models was higher than that of the alternatives. Furthermore, these models mostly achieved the best performance for the different species according to the area under the receiver operating characteristic curve. However, as also shown in this study, the relevance of the different variables can differ significantly between scales, both in the number of species affected and in the direction (positive or negative) of the effect. This problem is even more severe for multi-scale models, in which a single model can include the same variable at different scales with opposite directions. Nevertheless, multi-scale modelling is a promising approach to model the distribution of Culicoides species, accounting much more for the ecology of biting midges, which use different resources (breeding sites, hosts, etc. at

  8. visPIG--a web tool for producing multi-region, multi-track, multi-scale plots of genetic data.

    Directory of Open Access Journals (Sweden)

    Matthew Scales

    Full Text Available We present VISual Plotting Interface for Genetics (visPIG; http://vispig.icr.ac.uk), a web application to produce multi-track, multi-scale, multi-region plots of genetic data. visPIG has been designed to allow users not well versed with mathematical software packages and/or programming languages such as R, Matlab®, Python, etc., to integrate data from multiple sources for interpretation and to easily create publication-ready figures. While web tools such as the UCSC Genome Browser or the WashU Epigenome Browser allow custom data uploads, such tools are primarily designed for data exploration. This is also true for the desktop-run Integrative Genomics Viewer (IGV). Other locally run data visualisation software such as Circos requires significant computer skills of the user. The visPIG web application is a menu-based interface that allows users to upload custom data tracks and set track-specific parameters. Figures can be downloaded as PDF or PNG files. For sensitive data, the underlying R code can also be downloaded and run locally. visPIG is multi-track: it can display many different data types (e.g. association, functional annotation, intensity, interaction and heat map data). It also allows annotation of genes and other custom features in the plotted region(s). Data tracks can be plotted individually or on a single figure. visPIG is multi-region: it supports plotting multiple regions, be they kilo- or megabases apart or even on different chromosomes. Finally, visPIG is multi-scale: a sub-region of particular interest can be 'zoomed' in. We describe the various features of visPIG and illustrate its utility with examples. visPIG is freely available through http://vispig.icr.ac.uk under a GNU General Public License (GPLv3).

  9. The anomalous scaling exponents of turbulence in general dimension from random geometry

    Energy Technology Data Exchange (ETDEWEB)

    Eling, Christopher [Rudolf Peierls Centre for Theoretical Physics, University of Oxford, 1 Keble Road, Oxford OX1 3NP (United Kingdom); Oz, Yaron [Raymond and Beverly Sackler School of Physics and Astronomy, Tel Aviv University, Tel Aviv 69978 (Israel)

    2015-09-22

    We propose an analytical formula for the anomalous scaling exponents of inertial range structure functions in incompressible fluid turbulence. The formula is a Knizhnik-Polyakov-Zamolodchikov (KPZ)-type relation and is valid in any number of space dimensions. It incorporates intermittency in a novel way by dressing the Kolmogorov linear scaling via a coupling to a lognormal random geometry. The formula has one real parameter γ that depends on the number of space dimensions. The scaling exponents satisfy the convexity inequality, and the supersonic bound constraint. They agree with the experimental and numerical data in two and three space dimensions, and with numerical data in four space dimensions. Intermittency increases with γ, and in the infinite γ limit the scaling exponents approach the value one, as in Burgers turbulence. At large n the nth order exponent scales as √n. We discuss the relation between fluid flows and black hole geometry that inspired our proposal.
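
For context, the Kolmogorov (K41) baseline that the proposed formula dresses, together with the limiting behaviours stated above, can be summarised as follows (this restates standard K41 scaling and the abstract's stated limits, not the paper's KPZ-type formula itself):

```latex
\[
\zeta_n^{\mathrm{K41}} = \frac{n}{3}, \qquad
\zeta_n \sim \sqrt{n} \quad (n \to \infty), \qquad
\zeta_n \to 1 \quad (\gamma \to \infty),
\]
```

where the last limit reproduces the Burgers-turbulence value quoted in the abstract.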

  10. Design of a school-based randomized trial to reduce smoking among 13 to 15-year olds, the X:IT study.

    Science.gov (United States)

    Andersen, Anette; Bast, Lotus Sofie; Ringgaard, Lene Winther; Wohllebe, Louise; Jensen, Poul Dengsøe; Svendsen, Maria; Dalum, Peter; Due, Pernille

    2014-05-28

    Adolescent smoking is still highly prevalent in Denmark. One in four 13-year-olds indicates that they have tried to smoke, and one in four 15-year-olds answers that they smoke regularly. Smoking is more prevalent in socioeconomically disadvantaged populations in Denmark, as in most Western countries. Previous school-based programs to prevent smoking have shown contrasting results internationally. In Denmark, previous programs have shown limited or no effect. This indicates a need for a well-designed, comprehensive, multi-component intervention aimed at Danish schools, with careful implementation and thorough evaluation. This paper describes X:IT, a study including 1) the development of a 3-year school-based multi-component intervention and 2) the randomized trial investigating the effect of the intervention. The study aims at reducing the prevalence of smoking among 13 to 15-year-olds by 25%. The X:IT study is based on the Theory of Triadic Influences. The theory organizes factors influencing adolescent smoking into three streams: cultural environment, social situation, and personal factors. We added a fourth stream, community aspects. The X:IT program comprises three main components: 1) smoke-free school premises, 2) parental involvement, including smoke-free dialogues and smoke-free contracts between students and parents, and 3) a curricular component. The study encompasses process and effect evaluations as well as health economic analyses. Ninety-four schools in 17 municipalities were randomly allocated to the intervention (51 schools) or control (43 schools) group. At baseline in September 2010, 4,468 year 7 students were eligible, of which 4,167 answered the baseline questionnaire (response rate = 93.3%). The X:IT study is a large, randomized controlled trial evaluating the effect of an intervention based on components proven to be efficient in other Nordic settings. The X:IT study directs students, their parents, and smoking

  11. SCALE INTERACTION IN A MIXING LAYER. THE ROLE OF THE LARGE-SCALE GRADIENTS

    KAUST Repository

    Fiscaletti, Daniele; Attili, Antonio; Bisetti, Fabrizio; Elsinga, Gerrit E.

    2015-01-01

    From physical considerations we would expect the scales to interact in a qualitatively similar way within the flow and across different turbulent flows. Therefore, instead of the large-scale fluctuations, the modulation of the small scales by the large-scale gradients has additionally been investigated.

  12. A Randomized Central Limit Theorem

    International Nuclear Information System (INIS)

    Eliazar, Iddo; Klafter, Joseph

    2010-01-01

    The Central Limit Theorem (CLT), one of the most elemental pillars of Probability Theory and Statistical Physics, asserts that: the universal probability law of large aggregates of independent and identically distributed random summands with zero mean and finite variance, scaled by the square root of the aggregate-size (√(n)), is Gaussian. The scaling scheme of the CLT is deterministic and uniform - scaling all aggregate-summands by the common and deterministic factor √(n). This Letter considers scaling schemes which are stochastic and non-uniform, and presents a 'Randomized Central Limit Theorem' (RCLT): we establish a class of random scaling schemes which yields universal probability laws of large aggregates of independent and identically distributed random summands. The RCLT universal probability laws, in turn, are the one-sided and the symmetric Levy laws.
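
The deterministic √n scaling described above is easy to check numerically. Below is a minimal stdlib Python sketch of the classical (non-randomized) CLT scaling scheme; the function name and the Uniform(-1, 1) summands are illustrative choices, not from the Letter:

```python
import math
import random
import statistics

def clt_scaled_sums(n, m, seed=0):
    """Draw m aggregates of n iid Uniform(-1, 1) summands, each sum
    scaled by the common deterministic factor sqrt(n)."""
    rng = random.Random(seed)
    return [sum(rng.uniform(-1.0, 1.0) for _ in range(n)) / math.sqrt(n)
            for _ in range(m)]

# Uniform(-1, 1) has variance 1/3, so the scaled sums approach a
# Gaussian with mean 0 and variance 1/3 as n grows.
samples = clt_scaled_sums(n=100, m=10000)
mean, var = statistics.fmean(samples), statistics.variance(samples)
```

The RCLT replaces the common factor √n by stochastic, non-uniform scaling schemes, under which the limit laws become the one-sided and symmetric Lévy laws instead of the Gaussian.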

  13. Multi-resolution and multi-scale simulation of the thermal hydraulics in fast neutron reactor assemblies

    International Nuclear Information System (INIS)

    Angeli, P.-E.

    2011-01-01

    The present work is devoted to the multi-scale numerical simulation of a fast neutron reactor assembly. In spite of the rapid growth of computer power, a complete fine-scale CFD simulation of such a system remains out of reach in a research and development context. After determining the thermal-hydraulic behaviour of the assembly at the macroscopic scale, we propose to carry out a local reconstruction of the fine-scale information. The complete approach requires a much lower CPU time than CFD of the entire structure. The macro-scale description is obtained using either the volume-averaging formalism for porous media or an alternative modelling approach historically developed for the study of fast neutron reactor assemblies. It provides information used to constrain a down-scaling problem through a penalization technique applied to the local conservation equations. This problem leans on the periodic nature of the structure by integrating periodic boundary conditions for the required micro-scale fields or their spatial deviation. After validating the methodologies on some model applications, we apply them to 'industrial' configurations, which demonstrates the viability of this multi-scale approach. (author) [fr

  14. Multi Scale Models for Flexure Deformation in Sheet Metal Forming

    Directory of Open Access Journals (Sweden)

    Di Pasquale Edmondo

    2016-01-01

    Full Text Available This paper presents the application of multi-scale techniques to the simulation of sheet metal forming using the one-step method. When a blank flows over the die radius, it undergoes a complex cycle of bending and unbending. First, we describe an original model for the prediction of residual plastic deformation and stresses in the blank section. This model, working on a scale about one hundred times smaller than the element size, has been implemented in SIMEX, a one-step sheet metal forming simulation code. The use of this multi-scale modeling technique greatly improves the accuracy of the solution. Finally, we discuss the implications of this analysis for the prediction of springback in metal forming.

  15. Natural ventilation of large multi-span greenhouses

    NARCIS (Netherlands)

    Jong, de T.

    1990-01-01

    In this thesis the ventilation of large multi-span greenhouses caused by wind and temperature effects is studied. Quantification of the ventilation is important to improve the control of the greenhouse climate.

    Knowledge of the flow characteristics of the one-side-mounted windows of

  16. Multi-scale Modeling of Plasticity in Tantalum.

    Energy Technology Data Exchange (ETDEWEB)

    Lim, Hojun [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Battaile, Corbett Chandler. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Carroll, Jay [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Buchheit, Thomas E. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Boyce, Brad [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Weinberger, Christopher [Drexel Univ., Philadelphia, PA (United States)

    2015-12-01

    In this report, we present a multi-scale computational model to simulate plastic deformation of tantalum, together with validating experiments. At the atomistic/dislocation level, dislocation kink-pair theory is used to formulate temperature- and strain-rate-dependent constitutive equations. The kink-pair theory is calibrated to available data from single crystal experiments to produce accurate and convenient constitutive laws. The model is then implemented into a BCC crystal plasticity finite element method (CP-FEM) model to predict temperature- and strain-rate-dependent yield stresses of single and polycrystalline tantalum, which are compared with existing experimental data from the literature. Furthermore, classical continuum constitutive models describing temperature- and strain-rate-dependent flow behavior are fit to the yield stresses obtained from the CP-FEM polycrystal predictions. The model is then used to conduct hydrodynamic simulations of Taylor cylinder impact tests and compared with experiments. In order to validate the proposed tantalum CP-FEM model against experiments, we introduce a method for quantitative comparison of CP-FEM models with various experimental techniques. To mitigate the effects of unknown subsurface microstructure, tantalum tensile specimens with a pseudo-two-dimensional grain structure and grain sizes on the order of millimeters are used. A technique combining electron backscatter diffraction (EBSD) and high-resolution digital image correlation (HR-DIC) is used to measure the texture and sub-grain strain fields upon uniaxial tensile loading at various applied strains. Deformed specimens are also analyzed with optical profilometry measurements to obtain out-of-plane strain fields. These high-resolution measurements are directly compared with large-scale CP-FEM predictions. This computational method directly links fundamental dislocation physics to plastic deformation at the grain scale and to engineering-scale applications. Furthermore, direct

  17. Controls on Arctic sea ice from first-year and multi-year survival rates

    Energy Technology Data Exchange (ETDEWEB)

    Hunke, Jes [Los Alamos National Laboratory

    2009-01-01

    The recent decrease in Arctic sea ice cover has been accompanied by a significant loss of multi-year ice. The transition to an Arctic that is populated by thinner first-year sea ice has important implications for future trends in area and volume. Here we develop a reduced model for Arctic sea ice with which we investigate how the survival rates of first-year and multi-year ice control the mean state, variability, and trends in ice area and volume.

  18. Algebraic mesh generation for large scale viscous-compressible aerodynamic simulation

    International Nuclear Information System (INIS)

    Smith, R.E.

    1984-01-01

    Viscous-compressible aerodynamic simulation is the numerical solution of the compressible Navier-Stokes equations and associated boundary conditions. Boundary-fitted coordinate systems are well suited for the application of finite difference techniques to the Navier-Stokes equations. An algebraic approach to boundary-fitted coordinate systems is one in which an explicit functional relation describes a mesh on which a solution is obtained. This approach has the advantage of rapid and precise mesh control. The basic mathematical structure of three algebraic mesh generation techniques is described: transfinite interpolation, the multi-surface method, and the two-boundary technique. The Navier-Stokes equations are transformed to a computational coordinate system where boundary-fitted coordinates can be applied. Large-scale computation implies that there is a large number of mesh points in the coordinate system. Computation of viscous compressible flow using boundary-fitted coordinate systems and the application of this computational philosophy on a vector computer are presented

  19. Tile-Based Semisupervised Classification of Large-Scale VHR Remote Sensing Images

    Directory of Open Access Journals (Sweden)

    Haikel Alhichri

    2018-01-01

    Full Text Available This paper deals with the problem of the classification of large-scale very high-resolution (VHR) remote sensing (RS) images in a semisupervised scenario, where we have a limited training set (less than ten training samples per class). Typical pixel-based classification methods are unfeasible for large-scale VHR images. Thus, as a practical and efficient solution, we propose to subdivide the large image into a grid of tiles and then classify the tiles instead of classifying pixels. Our proposed method uses the power of a pretrained convolutional neural network (CNN) to first extract descriptive features from each tile. Next, a neural network classifier (composed of two fully connected layers) is trained in a semisupervised fashion and used to classify all remaining tiles in the image. This provides a coarse classification of the image, which is sufficient for many RS applications. The second contribution deals with the employment of semisupervised learning to improve the classification accuracy. We present a novel semisupervised approach which exploits both the spectral and spatial relationships embedded in the remaining unlabelled tiles. In particular, we embed a spectral graph Laplacian in the hidden layer of the neural network. In addition, we apply regularization of the output labels using a spatial graph Laplacian and the random walker algorithm. Experimental results obtained by testing the method on two large-scale images acquired by the IKONOS2 sensor reveal promising capabilities of this method in terms of classification accuracy even with less than ten training samples per class.
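
The tile-grid subdivision step, independent of the CNN feature extractor, can be sketched in a few lines of stdlib Python; `tile_grid` and its box convention are illustrative assumptions, not the paper's implementation:

```python
def tile_grid(width, height, tile):
    """Partition a width x height image into a grid of tile x tile
    boxes, returned as (x0, y0, x1, y1) pixel coordinates; the last
    row/column of tiles may be smaller at the image border."""
    boxes = []
    for y in range(0, height, tile):
        for x in range(0, width, tile):
            boxes.append((x, y, min(x + tile, width), min(y + tile, height)))
    return boxes

# A 1000 x 600 image with 256-pixel tiles yields a 4 x 3 grid.
boxes = tile_grid(1000, 600, 256)
```

Each box would then be cropped and passed through the pretrained CNN to obtain one feature vector per tile.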

  20. Trends in large-scale testing of reactor structures

    International Nuclear Information System (INIS)

    Blejwas, T.E.

    2003-01-01

    Large-scale tests of reactor structures have been conducted at Sandia National Laboratories since the late 1970s. This paper describes a number of different large-scale impact tests, pressurization tests of models of containment structures, and thermal-pressure tests of models of reactor pressure vessels. The advantages of large-scale testing are evident, but cost in particular limits its use. As computer models have grown in size (e.g. in the number of degrees of freedom), the advent of computer graphics has made possible very realistic representations of results - results that may not accurately represent reality. A necessary condition for avoiding this pitfall is the validation of the analytical methods and underlying physical representations. Ironically, the immensely larger computer models sometimes increase the need for large-scale testing, because the modeling is applied to increasingly complex structural systems and/or more complex physical phenomena. Unfortunately, the cost of large-scale tests is a disadvantage that will likely severely limit similar testing in the future. International collaborations may provide the best mechanism for funding future programs with large-scale tests. (author)

  1. Multi-scale modeling of dispersed gas-liquid two-phase flow

    NARCIS (Netherlands)

    Deen, N.G.; Sint Annaland, van M.; Kuipers, J.A.M.

    2004-01-01

    In this work the concept of multi-scale modeling is demonstrated. The idea of this approach is to use different levels of modeling, each developed to study phenomena at a certain length scale. Information obtained at the level of small length scales can be used to provide closure information at the

  2. Numerical Modeling of Large-Scale Rocky Coastline Evolution

    Science.gov (United States)

    Limber, P.; Murray, A. B.; Littlewood, R.; Valvo, L.

    2008-12-01

    Seventy-five percent of the world's ocean coastline is rocky. On large scales (i.e. greater than a kilometer), many intertwined processes drive rocky coastline evolution, including coastal erosion and sediment transport, tectonics, antecedent topography, and variations in sea cliff lithology. In areas such as California, an additional aspect of rocky coastline evolution involves submarine canyons that cut across the continental shelf and extend into the nearshore zone. These types of canyons intercept alongshore sediment transport and flush sand to abyssal depths during periodic turbidity currents, thereby delineating coastal sediment transport pathways and affecting shoreline evolution over large spatial and time scales. How tectonic, sediment transport, and canyon processes interact with inherited topographic and lithologic settings to shape rocky coastlines remains an unanswered, and largely unexplored, question. We will present numerical model results of rocky coastline evolution that starts with an immature fractal coastline. The initial shape is modified by headland erosion, wave-driven alongshore sediment transport, and submarine canyon placement. Our previous model results have shown that, as expected, an initial sediment-free irregularly shaped rocky coastline with homogeneous lithology will undergo smoothing in response to wave attack; headlands erode and mobile sediment is swept into bays, forming isolated pocket beaches. As this diffusive process continues, pocket beaches coalesce, and a continuous sediment transport pathway results. However, when a randomly placed submarine canyon is introduced to the system as a sediment sink, the end results are wholly different: sediment cover is reduced, which in turn increases weathering and erosion rates and causes the entire shoreline to move landward more rapidly. The canyon's alongshore position also affects coastline morphology. 
When placed offshore of a headland, the submarine canyon captures local sediment

  3. Stochastic multi-scale analysis of homogenised properties considering uncertainties in cellular solid microstructures using a first-order perturbation

    Directory of Open Access Journals (Sweden)

    Khairul Salleh Basaruddin

    Full Text Available Randomness in the microstructure due to variations in microscopic properties and geometrical information is used to predict the stochastically homogenised properties of cellular media. Two stochastic problems at the micro-scale level that commonly occur due to fabrication inaccuracies, degradation mechanisms or natural heterogeneity were analysed using a stochastic homogenisation method based on a first-order perturbation. First, the influence of Young's modulus variation in an adhesive on the macroscopic properties of an aluminium-adhesive honeycomb structure was investigated. The fluctuations in the microscopic properties were then combined by varying the microstructure periodicity in a corrugated-core sandwich plate to obtain the variation of the homogenised property. The numerical results show that the uncertainties in the microstructure affect the dispersion of the homogenised property. These results indicate the importance of the presented stochastic multi-scale analysis for the design and fabrication of cellular solids when considering microscopic random variation.
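
A first-order perturbation propagates microscopic variance through the homogenisation map via its first derivative: Var[f(X)] ≈ (f′(μ))²σ². A minimal stdlib Python sketch, assuming a simple two-phase Reuss (series) laminate as the homogenisation map and illustrative moduli, not the honeycomb or corrugated-core models of the paper:

```python
def reuss_modulus(e1, e2, f2=0.5):
    """Effective modulus of a two-phase laminate loaded in series
    (Reuss average) with phase-2 volume fraction f2."""
    return 1.0 / ((1.0 - f2) / e1 + f2 / e2)

def first_order_stats(f, mu, sigma, h=1e-6):
    """First-order perturbation estimate of the mean and variance of
    f(X) when X has mean mu and standard deviation sigma:
    E[f] ~ f(mu),  Var[f] ~ (f'(mu) * sigma)**2.
    The derivative is taken by central finite difference."""
    dfdx = (f(mu + h) - f(mu - h)) / (2.0 * h)
    return f(mu), (dfdx * sigma) ** 2

# Illustrative numbers: stiff facings (70 GPa) bonded by an adhesive
# whose Young's modulus fluctuates around 3 GPa with std 0.3 GPa.
mean_hom, var_hom = first_order_stats(lambda e2: reuss_modulus(70.0, e2), 3.0, 0.3)
```

The dispersion of the homogenised modulus (`var_hom`) thus follows directly from the microscopic scatter, which is the effect the study quantifies for its cellular microstructures.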

  4. Dose monitoring in large-scale flowing aqueous media

    International Nuclear Information System (INIS)

    Kuruca, C.N.

    1995-01-01

    The Miami Electron Beam Research Facility (EBRF) has been in operation for six years. The EBRF houses a 1.5 MV, 75 KW DC scanned electron beam. Experiments have been conducted to evaluate the effectiveness of high-energy electron irradiation in the removal of toxic organic chemicals from contaminated water and the disinfection of various wastewater streams. The large-scale plant operates at approximately 450 L/min (120 gal/min). The radiation dose absorbed by the flowing aqueous streams is estimated by measuring the difference in water temperature before and after it passes in front of the beam. Temperature measurements are made using resistance temperature devices (RTDs) and recorded by computer along with other operating parameters. Estimated dose is obtained from the measured temperature differences using the specific heat of water. This presentation will discuss experience with this measurement system, its application to different water presentation devices, sources of error, and the advantages and disadvantages of its use in large-scale process applications

  5. Decentralized formation of random regular graphs for robust multi-agent networks

    KAUST Repository

    Yazicioglu, A. Yasin

    2014-12-15

    Multi-agent networks are often modeled via interaction graphs, where the nodes represent the agents and the edges denote direct interactions between the corresponding agents. Interaction graphs have significant impact on the robustness of networked systems. One family of robust graphs is the random regular graphs. In this paper, we present a locally applicable reconfiguration scheme to build random regular graphs through self-organization. For any connected initial graph, the proposed scheme maintains connectivity and the average degree while minimizing the degree differences and randomizing the links. As such, if the average degree of the initial graph is an integer, then connected regular graphs are realized uniformly at random as time goes to infinity.
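
The paper's scheme is decentralized and self-organizing; as a simple centralized analogue of its key invariant (randomizing links while preserving node degrees), the classical double edge swap can be sketched in stdlib Python. The function and example graph below are illustrative assumptions, not the proposed algorithm:

```python
import random

def double_edge_swap(edges, n_swaps, seed=0):
    """Randomize a simple undirected graph by degree-preserving double
    edge swaps: pick edges (a, b) and (c, d) and rewire them to
    (a, d) and (c, b), rejecting swaps that would create a self-loop
    or a duplicate edge."""
    rng = random.Random(seed)
    edge_list = [tuple(e) for e in edges]
    edge_set = {frozenset(e) for e in edge_list}
    done = 0
    while done < n_swaps:
        i, j = rng.sample(range(len(edge_list)), 2)
        a, b = edge_list[i]
        c, d = edge_list[j]
        if len({a, b, c, d}) < 4:                # shared endpoint: self-loop risk
            continue
        if frozenset((a, d)) in edge_set or frozenset((c, b)) in edge_set:
            continue                             # would duplicate an edge
        edge_set -= {frozenset((a, b)), frozenset((c, d))}
        edge_set |= {frozenset((a, d)), frozenset((c, b))}
        edge_list[i], edge_list[j] = (a, d), (c, b)
        done += 1
    return edge_list

# An 8-node cycle (every node has degree 2) randomized by 20 swaps.
cycle = [(i, (i + 1) % 8) for i in range(8)]
randomized = double_edge_swap(cycle, n_swaps=20)
```

Unlike the paper's scheme, this sketch does not guarantee connectivity is maintained; it only preserves the degree sequence while randomizing the links.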

  6. Large-Scale, Multi-Temporal Remote Sensing of Palaeo-River Networks: A Case Study from Northwest India and its Implications for the Indus Civilisation

    Directory of Open Access Journals (Sweden)

    Hector A. Orengo

    2017-07-01

    Full Text Available Remote sensing has considerable potential to contribute to the identification and reconstruction of lost hydrological systems and networks. Remote sensing-based reconstructions of palaeo-river networks have commonly employed single or limited time-span imagery, which limits their capacity to identify features in complex and varied landscape contexts. This paper presents a seasonal multi-temporal approach to the detection of palaeo-rivers over large areas based on long-term vegetation dynamics and spectral decomposition techniques. Twenty-eight years of Landsat 5 data, a total of 1711 multi-spectral images, have been bulk processed using the Google Earth Engine© Code Editor and cloud computing infrastructure. The use of multi-temporal data has allowed us to overcome seasonal cultivation patterns and long-term visibility issues related to recent crop selection, extensive irrigation and land-use patterns. The application of this approach to the Sutlej-Yamuna interfluve (northwest India), a core area of the Bronze Age Indus Civilisation, has enabled the reconstruction of an unexpectedly complex palaeo-river network comprising more than 8000 km of palaeo-channels. It has also enabled the definition of the morphology of these relict courses, which provides insights into the environmental conditions in which they operated. These new data will contribute to a better understanding of the settlement distribution and the environmental settings in which this, often considered riverine, civilisation operated.

  7. Effectiveness of influenza vaccination for children in Japan: Four-year observational study using a large-scale claims database.

    Science.gov (United States)

    Shibata, Natsumi; Kimura, Shinya; Hoshino, Takahiro; Takeuchi, Masato; Urushihara, Hisashi

    2018-05-11

    To date, few large-scale comparative effectiveness studies of influenza vaccination have been conducted in Japan, since marketing authorization for influenza vaccines in Japan has been granted based only on seroconversion and safety results in small populations during clinical trial phases, not on vaccine effectiveness. We evaluated the clinical effectiveness of influenza vaccination for children aged 1-15 years in Japan throughout four influenza seasons from 2010 to 2014 in a real-world setting. We conducted a cohort study using a large-scale claims database for employee health care insurance plans covering more than 3 million people, including enrollees and their dependents. Vaccination status was identified using plan records of influenza vaccination subsidies. The effectiveness of influenza vaccination in preventing influenza and its complications was evaluated. To control confounding related to influenza vaccination, odds ratios (ORs) were calculated by applying a doubly robust method using the propensity score for vaccination. The total study population throughout the four consecutive influenza seasons was over 116,000. The vaccination rate was higher in younger children and in the more recent influenza seasons. Throughout the four seasons, the estimated ORs for influenza onset were statistically significant and ranged from 0.797 to 0.894 after doubly robust adjustment. On age stratification, significant ORs were observed in younger children. Additionally, ORs for influenza complication outcomes, such as pneumonia, hospitalization with influenza and respiratory tract diseases, were significantly reduced, except for hospitalization with influenza in the 2010/2011 and 2012/2013 seasons. We confirmed the clinical effectiveness of influenza vaccination in children aged 1-15 years from the 2010/2011 to 2013/2014 influenza seasons. Influenza vaccine significantly prevented the onset of influenza and was effective in reducing its secondary complications.
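
The reported effect measure is the odds ratio; the study's doubly robust, propensity-score-adjusted estimation is involved, but the unadjusted OR from a 2×2 table is simply (a·d)/(b·c). A minimal stdlib Python sketch with hypothetical counts, not study data:

```python
def odds_ratio(a, b, c, d):
    """Unadjusted odds ratio of a 2x2 table:
                  outcome+   outcome-
      exposed        a          b
      unexposed      c          d
    """
    return (a * d) / (b * c)

# Hypothetical counts: 80 of 1000 vaccinated children with influenza
# onset versus 100 of 1000 unvaccinated (illustrative only).
or_vaccinated = odds_ratio(80, 920, 100, 900)
```

An OR below 1, as in the study's adjusted range of 0.797-0.894, indicates lower odds of influenza onset among vaccinated children.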

  8. Multi-scale Analysis of High Resolution Topography: Feature Extraction and Identification of Landscape Characteristic Scales

    Science.gov (United States)

    Passalacqua, P.; Sangireddy, H.; Stark, C. P.

    2015-12-01

    With the advent of digital terrain data, detailed information on terrain characteristics and on scale and location of geomorphic features is available over extended areas. Our ability to observe landscapes and quantify topographic patterns has greatly improved, including the estimation of fluxes of mass and energy across landscapes. Challenges still remain in the analysis of high resolution topography data; the presence of features such as roads, for example, challenges classic methods for feature extraction and large data volumes require computationally efficient extraction and analysis methods. Moreover, opportunities exist to define new robust metrics of landscape characterization for landscape comparison and model validation. In this presentation we cover recent research in multi-scale and objective analysis of high resolution topography data. We show how the analysis of the probability density function of topographic attributes such as slope, curvature, and topographic index contains useful information for feature localization and extraction. The analysis of how the distributions change across scales, quantified by the behavior of modal values and interquartile range, allows the identification of landscape characteristic scales, such as terrain roughness. The methods are introduced on synthetic signals in one and two dimensions and then applied to a variety of landscapes of different characteristics. Validation of the methods includes the analysis of modeled landscapes where the noise distribution is known and features of interest easily measured.
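
The idea of tracking how the distribution of a topographic attribute tightens across smoothing scales can be sketched in stdlib Python on a synthetic 1-D profile; the moving-average smoother and the profile below are illustrative stand-ins for the paper's 2-D methods:

```python
import random
import statistics

def moving_average(z, w):
    """Centered moving average of list z with (odd) window w;
    windows shrink at the profile boundaries."""
    h = w // 2
    return [statistics.fmean(z[max(0, i - h):i + h + 1]) for i in range(len(z))]

def slope_iqr(z, dx=1.0):
    """Interquartile range of the finite-difference slopes of profile z."""
    slopes = [(z[i + 1] - z[i]) / dx for i in range(len(z) - 1)]
    q1, _, q3 = statistics.quantiles(slopes, n=4)
    return q3 - q1

# Synthetic 1-D 'elevation profile': a gentle trend plus unit-variance roughness.
rng = random.Random(1)
profile = [0.01 * i + rng.gauss(0.0, 1.0) for i in range(2000)]

# The slope distribution tightens as the smoothing scale grows.
iqrs = [slope_iqr(moving_average(profile, w)) for w in (1, 9, 41)]
```

Watching how such interquartile ranges (and modal values) change with scale is the kind of behaviour the authors use to identify landscape characteristic scales such as terrain roughness.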

  9. Multi-scale response of runoff to climate fluctuation in the headwater region of Kaidu River in Xinjiang of China

    Science.gov (United States)

    Bai, Ling; Chen, Zhongsheng; Xu, Jianhua; Li, Weihong

    2016-08-01

    Based on hydrological and meteorological data from the headwater region of the Kaidu River during 1960-2009, the multi-scale characteristics of runoff variability were analysed using the ensemble empirical mode decomposition (EEMD) method, with the aim of investigating the oscillation mode structure of runoff change and its response to climate fluctuation at different time scales. Results indicated that over the past 50 years, the overall runoff of the Kaidu River in Xinjiang has shown a significant nonlinear upward trend, and its changes clearly exhibit inter-annual (quasi-3- and quasi-6-year) and inter-decadal (quasi-10- and quasi-25-year) scales. The variance contribution rates of each component showed that inter-decadal change plays the more important role in the overall runoff change of the Kaidu River, and the reconstructed inter-annual variation could describe the fluctuation of the original runoff anomaly during the study period. The reconstructed inter-decadal variability effectively revealed how the runoff of the Kaidu River has changed over the years, namely that states of abundant and low flow appear alternately. In addition, we found that runoff correlates positively with precipitation and temperature at different time scales, but most significantly at the inter-decadal scale, indicating that the inter-decadal scale is most suitable for investigating the response of runoff dynamics to climate fluctuation. The results also suggest that EEMD is an effective method for analysing the multi-scale characteristics of nonlinear and non-stationary signals.
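
A full EEMD implementation is beyond a short example, but the underlying idea of separating inter-annual from inter-decadal components can be illustrated with a moving-average decomposition on a synthetic series (an explicitly simpler stand-in for EEMD; the data are synthetic, not Kaidu River records):

```python
import math
import statistics

def moving_average(x, w):
    """Centered moving average with (odd) window w; boundary windows shrink."""
    h = w // 2
    return [statistics.fmean(x[max(0, i - h):i + h + 1]) for i in range(len(x))]

# Synthetic 50-year runoff anomaly: quasi-3-year and quasi-25-year
# oscillations plus a weak upward trend (illustrative only).
runoff = [math.sin(2 * math.pi * t / 3.0)
          + 0.5 * math.sin(2 * math.pi * t / 25.0)
          + 0.02 * t
          for t in range(50)]

# An 11-year moving average retains the inter-decadal component (and trend);
# the residual carries the inter-annual oscillation.
decadal = moving_average(runoff, 11)
interannual = [r - d for r, d in zip(runoff, decadal)]
```

EEMD extracts such components adaptively as intrinsic mode functions rather than by fixed-window averaging, but the two-scale picture, and the exact reconstruction of the signal from its components, is the same.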

  10. A topological analysis of large-scale structure, studied using the CMASS sample of SDSS-III

    International Nuclear Information System (INIS)

    Parihar, Prachi; Gott, J. Richard III; Vogeley, Michael S.; Choi, Yun-Young; Kim, Juhan; Kim, Sungsoo S.; Speare, Robert; Brownstein, Joel R.; Brinkmann, J.

    2014-01-01

    We study the three-dimensional genus topology of large-scale structure using the northern region of the CMASS Data Release 10 (DR10) sample of the SDSS-III Baryon Oscillation Spectroscopic Survey. We select galaxies with redshift 0.452 < z < 0.625 and with a stellar mass M_stellar > 10^11.56 M_☉. We study the topology at two smoothing lengths: R_G = 21 h^-1 Mpc and R_G = 34 h^-1 Mpc. The genus topology studied at the R_G = 21 h^-1 Mpc scale results in the highest genus amplitude observed to date. The CMASS sample yields a genus curve that is characteristic of one produced by Gaussian random phase initial conditions. The data thus support the standard model of inflation, where random quantum fluctuations in the early universe produced Gaussian random phase initial conditions. Modest deviations in the observed genus from random phase are as expected from shot noise effects and the nonlinear evolution of structure. We suggest the use of a fitting formula motivated by perturbation theory to characterize the shift and asymmetries in the observed genus curve with a single parameter. We construct 54 mock SDSS CMASS surveys along the past light cone from the Horizon Run 3 (HR3) N-body simulations, where gravitationally bound dark matter subhalos are identified as the sites of galaxy formation. We study the genus topology of the HR3 mock surveys with the same geometry and sampling density as the observational sample and find the observed genus topology to be consistent with ΛCDM as simulated by the HR3 mock samples. We conclude that the topology of the large-scale structure in the SDSS CMASS sample is consistent with cosmological models having primordial Gaussian density fluctuations growing in accordance with general relativity to form galaxies in massive dark matter halos.

  11. Advanced computational workflow for the multi-scale modeling of the bone metabolic processes.

    Science.gov (United States)

    Dao, Tien Tuan

    2017-06-01

    Multi-scale modeling of the musculoskeletal system plays an essential role in the deep understanding of complex mechanisms underlying biological phenomena and processes such as bone metabolic processes. Current multi-scale models suffer from the isolation of sub-models at each anatomical scale. The objective of the present work was to develop a new fully integrated computational workflow for simulating bone metabolic processes at multiple scales. The organ-level model employs multi-body dynamics to estimate body boundary and loading conditions from body kinematics. The tissue-level model uses the finite element method to estimate tissue deformation and mechanical loading under body loading conditions. Finally, the cell-level model includes the bone remodeling mechanism through an agent-based simulation under tissue loading. A case study on the bone remodeling process in the human jaw was performed and presented. The developed multi-scale model of the human jaw was validated using literature-based data at each anatomical level. Simulation outcomes fall within literature-based ranges of values for estimated muscle force, tissue loading and cell dynamics during the bone remodeling process. This study opens perspectives for accurately simulating bone metabolic processes using a fully integrated computational workflow, leading to a better understanding of musculoskeletal system function across multiple length scales as well as providing new informative data for clinical decision support and industrial applications.

  12. Identifying gene-environment interactions in schizophrenia: contemporary challenges for integrated, large-scale investigations.

    Science.gov (United States)

    van Os, Jim; Rutten, Bart P; Myin-Germeys, Inez; Delespaul, Philippe; Viechtbauer, Wolfgang; van Zelst, Catherine; Bruggeman, Richard; Reininghaus, Ulrich; Morgan, Craig; Murray, Robin M; Di Forti, Marta; McGuire, Philip; Valmaggia, Lucia R; Kempton, Matthew J; Gayer-Anderson, Charlotte; Hubbard, Kathryn; Beards, Stephanie; Stilo, Simona A; Onyejiaka, Adanna; Bourque, Francois; Modinos, Gemma; Tognin, Stefania; Calem, Maria; O'Donovan, Michael C; Owen, Michael J; Holmans, Peter; Williams, Nigel; Craddock, Nicholas; Richards, Alexander; Humphreys, Isla; Meyer-Lindenberg, Andreas; Leweke, F Markus; Tost, Heike; Akdeniz, Ceren; Rohleder, Cathrin; Bumb, J Malte; Schwarz, Emanuel; Alptekin, Köksal; Üçok, Alp; Saka, Meram Can; Atbaşoğlu, E Cem; Gülöksüz, Sinan; Gumus-Akay, Guvem; Cihan, Burçin; Karadağ, Hasan; Soygür, Haldan; Cankurtaran, Eylem Şahin; Ulusoy, Semra; Akdede, Berna; Binbay, Tolga; Ayer, Ahmet; Noyan, Handan; Karadayı, Gülşah; Akturan, Elçin; Ulaş, Halis; Arango, Celso; Parellada, Mara; Bernardo, Miguel; Sanjuán, Julio; Bobes, Julio; Arrojo, Manuel; Santos, Jose Luis; Cuadrado, Pedro; Rodríguez Solano, José Juan; Carracedo, Angel; García Bernardo, Enrique; Roldán, Laura; López, Gonzalo; Cabrera, Bibiana; Cruz, Sabrina; Díaz Mesa, Eva Ma; Pouso, María; Jiménez, Estela; Sánchez, Teresa; Rapado, Marta; González, Emiliano; Martínez, Covadonga; Sánchez, Emilio; Olmeda, Ma Soledad; de Haan, Lieuwe; Velthorst, Eva; van der Gaag, Mark; Selten, Jean-Paul; van Dam, Daniella; van der Ven, Elsje; van der Meer, Floor; Messchaert, Elles; Kraan, Tamar; Burger, Nadine; Leboyer, Marion; Szoke, Andrei; Schürhoff, Franck; Llorca, Pierre-Michel; Jamain, Stéphane; Tortelli, Andrea; Frijda, Flora; Vilain, Jeanne; Galliot, Anne-Marie; Baudin, Grégoire; Ferchiou, Aziz; Richard, Jean-Romain; Bulzacka, Ewa; Charpeaud, Thomas; Tronche, Anne-Marie; De Hert, Marc; van Winkel, Ruud; Decoster, Jeroen; Derom, Catherine; Thiery, Evert; Stefanis, Nikos C; Sachs, Gabriele; Aschauer, 
Harald; Lasser, Iris; Winklbaur, Bernadette; Schlögelhofer, Monika; Riecher-Rössler, Anita; Borgwardt, Stefan; Walter, Anna; Harrisberger, Fabienne; Smieskova, Renata; Rapp, Charlotte; Ittig, Sarah; Soguel-dit-Piquard, Fabienne; Studerus, Erich; Klosterkötter, Joachim; Ruhrmann, Stephan; Paruch, Julia; Julkowski, Dominika; Hilboll, Desiree; Sham, Pak C; Cherny, Stacey S; Chen, Eric Y H; Campbell, Desmond D; Li, Miaoxin; Romeo-Casabona, Carlos María; Emaldi Cirión, Aitziber; Urruela Mora, Asier; Jones, Peter; Kirkbride, James; Cannon, Mary; Rujescu, Dan; Tarricone, Ilaria; Berardi, Domenico; Bonora, Elena; Seri, Marco; Marcacci, Thomas; Chiri, Luigi; Chierzi, Federico; Storbini, Viviana; Braca, Mauro; Minenna, Maria Gabriella; Donegani, Ivonne; Fioritti, Angelo; La Barbera, Daniele; La Cascia, Caterina Erika; Mulè, Alice; Sideli, Lucia; Sartorio, Rachele; Ferraro, Laura; Tripoli, Giada; Seminerio, Fabio; Marinaro, Anna Maria; McGorry, Patrick; Nelson, Barnaby; Amminger, G Paul; Pantelis, Christos; Menezes, Paulo R; Del-Ben, Cristina M; Gallo Tenan, Silvia H; Shuhama, Rosana; Ruggeri, Mirella; Tosato, Sarah; Lasalvia, Antonio; Bonetto, Chiara; Ira, Elisa; Nordentoft, Merete; Krebs, Marie-Odile; Barrantes-Vidal, Neus; Cristóbal, Paula; Kwapil, Thomas R; Brietzke, Elisa; Bressan, Rodrigo A; Gadelha, Ary; Maric, Nadja P; Andric, Sanja; Mihaljevic, Marina; Mirjanic, Tijana

    2014-07-01

    Recent years have seen considerable progress in epidemiological and molecular genetic research into environmental and genetic factors in schizophrenia, but methodological uncertainties remain with regard to validating environmental exposures, and the population risk conferred by individual molecular genetic variants is small. A limited number of studies have now investigated molecular genetic candidate gene-environment interactions (G × E); so far, however, thorough replication of findings is rare, and G × E research still faces several conceptual and methodological challenges. In this article, we aim to review these recent developments and illustrate how integrated, large-scale investigations may overcome contemporary challenges in G × E research, drawing on the example of a large, international, multi-center study into the identification and translational application of G × E in schizophrenia. While such investigations are now well underway, new challenges for G × E research emerge from late-breaking evidence that genetic variation and environmental exposures are, to a significant degree, shared across a range of psychiatric disorders, with potential overlap in phenotype. © The Author 2014. Published by Oxford University Press on behalf of the Maryland Psychiatric Research Center.

  13. Large Scale Computations in Air Pollution Modelling

    DEFF Research Database (Denmark)

    Zlatev, Z.; Brandt, J.; Builtjes, P. J. H.

    Proceedings of the NATO Advanced Research Workshop on Large Scale Computations in Air Pollution Modelling, Sofia, Bulgaria, 6-10 July 1998.

  14. A highly efficient multi-core algorithm for clustering extremely large datasets

    Directory of Open Access Journals (Sweden)

    Kraus Johann M

    2010-04-01

    Background: In recent years, the demand for computational power in computational biology has increased due to rapidly growing data sets from microarray and other high-throughput technologies, and this demand is likely to keep increasing. Standard algorithms for analyzing data, such as cluster algorithms, need to be parallelized for fast processing. Unfortunately, most approaches to parallelizing algorithms rely largely on network communication protocols connecting, and requiring, multiple computers. One answer to this problem is to utilize the intrinsic capabilities of current multi-core hardware to distribute tasks among the cores of a single computer. Results: We introduce a multi-core parallelization of the k-means and k-modes cluster algorithms, based on the design principles of transactional memory, for clustering gene expression microarray data and categorical SNP data. Our new shared-memory parallel algorithms prove highly efficient. We demonstrate their computational power and show their utility in cluster stability and sensitivity analysis employing repeated runs with slightly changed parameters. Computation speed of our Java-based algorithm was increased by a factor of 10 for large data sets while preserving computational accuracy, compared to single-core implementations and a recently published network-based parallelization. Conclusions: Most desktop computers, and even notebooks, provide at least dual-core processors. Our multi-core algorithms show that, using modern algorithmic concepts, parallelization makes it possible to perform even such laborious tasks as cluster sensitivity and cluster number estimation on a laboratory computer.
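    The shared-memory parallelization pattern described above, splitting the assignment step of k-means across cores while the update step stays serial, can be illustrated with a minimal sketch. This is a plain Python analogue using process-based workers, not the authors' transactional-memory Java implementation:

```python
from concurrent.futures import ProcessPoolExecutor

def nearest_centroid(point, centroids):
    """Index of the centroid nearest to `point` (squared Euclidean distance)."""
    return min(range(len(centroids)),
               key=lambda k: sum((p - c) ** 2 for p, c in zip(point, centroids[k])))

def _assign_chunk(args):
    """Assignment step for one chunk of points (runs in a worker process)."""
    chunk, centroids = args
    return [nearest_centroid(p, centroids) for p in chunk]

def kmeans(points, centroids, iters=10, workers=1):
    """Lloyd's k-means; the assignment step is split into `workers` chunks.
    The chunks share nothing, so the workers need no locking at all."""
    centroids = [list(c) for c in centroids]
    labels = [0] * len(points)
    for _ in range(iters):
        chunks = [points[i::workers] for i in range(workers)]
        jobs = [(chunk, centroids) for chunk in chunks]
        if workers > 1:
            with ProcessPoolExecutor(max_workers=workers) as pool:
                parts = list(pool.map(_assign_chunk, jobs))
        else:
            parts = [_assign_chunk(job) for job in jobs]
        for i, part in enumerate(parts):          # undo the striped chunking
            for j, lab in enumerate(part):
                labels[i + j * workers] = lab
        for k in range(len(centroids)):           # update step (serial)
            members = [p for p, lab in zip(points, labels) if lab == k]
            if members:
                centroids[k] = [sum(col) / len(members) for col in zip(*members)]
    return centroids, labels

if __name__ == "__main__":
    pts = [(0.0, 0.0), (0.2, 0.1), (5.0, 5.0), (5.1, 4.9)]
    # labels come out [0, 0, 1, 1]; centroids move to the two cluster means
    print(kmeans(pts, [(0.0, 0.0), (5.0, 5.0)], workers=2))
```

    Because each worker reads the shared centroid list and writes only its own chunk of labels, the assignment step scales with core count without contention, which is the property the abstract's transactional-memory design also exploits.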

  15. The topology of large-scale structure. III. Analysis of observations

    International Nuclear Information System (INIS)

    Gott, J.R. III; Weinberg, D.H.; Miller, J.; Thuan, T.X.; Schneider, S.E.

    1989-01-01

    A recently developed algorithm for quantitatively measuring the topology of large-scale structures in the universe was applied to a number of important observational data sets. The data sets included an Abell (1958) cluster sample out to Vmax = 22,600 km/sec, the Giovanelli and Haynes (1985) sample out to Vmax = 11,800 km/sec, the CfA sample out to Vmax = 5000 km/sec, the Thuan and Schneider (1988) dwarf sample out to Vmax = 3000 km/sec, and the Tully (1987) sample out to Vmax = 3000 km/sec. It was found that, when the topology is studied on smoothing scales significantly larger than the correlation length (i.e., smoothing length, lambda, not below 1200 km/sec), the topology is spongelike and is consistent with the standard model in which the structure seen today has grown from small fluctuations caused by random noise in the early universe. When the topology is studied on the scale of lambda of about 600 km/sec, a small shift is observed in the genus curve in the direction of a meatball topology. 66 refs

  16. The topology of large-scale structure. III - Analysis of observations

    Science.gov (United States)

    Gott, J. Richard, III; Miller, John; Thuan, Trinh X.; Schneider, Stephen E.; Weinberg, David H.; Gammie, Charles; Polk, Kevin; Vogeley, Michael; Jeffrey, Scott; Bhavsar, Suketu P.; Melott, Adrian L.; Giovanelli, Riccardo; Haynes, Martha P.; Tully, R. Brent; Hamilton, Andrew J. S.

    1989-05-01

    A recently developed algorithm for quantitatively measuring the topology of large-scale structures in the universe was applied to a number of important observational data sets. The data sets included an Abell (1958) cluster sample out to Vmax = 22,600 km/sec, the Giovanelli and Haynes (1985) sample out to Vmax = 11,800 km/sec, the CfA sample out to Vmax = 5000 km/sec, the Thuan and Schneider (1988) dwarf sample out to Vmax = 3000 km/sec, and the Tully (1987) sample out to Vmax = 3000 km/sec. It was found that, when the topology is studied on smoothing scales significantly larger than the correlation length (i.e., smoothing length, lambda, not below 1200 km/sec), the topology is spongelike and is consistent with the standard model in which the structure seen today has grown from small fluctuations caused by random noise in the early universe. When the topology is studied on the scale of lambda of about 600 km/sec, a small shift is observed in the genus curve in the direction of a 'meatball' topology.

  17. Multi-scale modeling strategies in materials science—The ...

    Indian Academy of Sciences (India)

    Unknown

    Multi-scale models; quasicontinuum method; finite elements.

  18. Comparison of prestellar core elongations and large-scale molecular cloud structures in the Lupus I region

    Energy Technology Data Exchange (ETDEWEB)

    Poidevin, Frédérick [UCL, KLB, Department of Physics and Astronomy, Gower Place, London WC1E 6BT (United Kingdom); Ade, Peter A. R.; Hargrave, Peter C.; Nutter, David [School of Physics and Astronomy, Cardiff University, Queens Buildings, The Parade, Cardiff CF24 3AA (United Kingdom); Angile, Francesco E.; Devlin, Mark J.; Klein, Jeffrey [Department of Physics and Astronomy, University of Pennsylvania, 209 South 33rd Street, Philadelphia, PA 19104 (United States); Benton, Steven J.; Netterfield, Calvin B. [Department of Physics, University of Toronto, 60 St. George Street, Toronto, ON M5S 1A7 (Canada); Chapin, Edward L. [XMM SOC, ESAC, Apartado 78, E-28691 Villanueva de la Canãda, Madrid (Spain); Fissel, Laura M.; Gandilo, Natalie N. [Department of Astronomy and Astrophysics, University of Toronto, 50 St. George Street, Toronto, ON M5S 3H4 (Canada); Fukui, Yasuo [Department of Physics, Nagoya University, Chikusa-ku, Nagoya, Aichi 464-8601 (Japan); Gundersen, Joshua O. [Department of Physics, University of Miami, 1320 Campo Sano Drive, Coral Gables, FL 33146 (United States); Korotkov, Andrei L. [Department of Physics, Brown University, 182 Hope Street, Providence, RI 02912 (United States); Matthews, Tristan G.; Novak, Giles [Department of Physics and Astronomy, Northwestern University, 2145 Sheridan Road, Evanston, IL 60208 (United States); Moncelsi, Lorenzo; Mroczkowski, Tony K. [California Institute of Technology, 1200 East California Boulevard, Pasadena, CA 91125 (United States); Olmi, Luca, E-mail: fpoidevin@iac.es [Physics Department, University of Puerto Rico, Rio Piedras Campus, Box 23343, UPR station, San Juan, PR 00931 (United States); and others

    2014-08-10

    Turbulence and magnetic fields are expected to be important for regulating molecular cloud formation and evolution. However, their effects on sub-parsec to 100 parsec scales, leading to the formation of starless cores, are not well understood. We investigate the prestellar core structure morphologies obtained from analysis of the Herschel-SPIRE 350 μm maps of the Lupus I cloud. This distribution is first compared on a statistical basis to the large-scale shape of the main filament. We find the distribution of the elongation position angle of the cores to be consistent with a random distribution, which means no specific orientation of the morphology of the cores is observed with respect to the mean orientation of the large-scale filament in Lupus I, nor relative to a large-scale bent filament model. This distribution is also compared to the mean orientation of the large-scale magnetic fields probed at 350 μm with the Balloon-borne Large Aperture Telescope for Polarimetry during its 2010 campaign. Here again we do not find any correlation between the core morphology distribution and the average orientation of the magnetic fields on parsec scales. Our main conclusion is that the local filament dynamics—including secondary filaments that often run orthogonally to the primary filament—and possibly small-scale variations in the local magnetic field direction, could be the dominant factors for explaining the final orientation of each core.
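    The "consistent with a random distribution" comparison above can be sketched with a Rayleigh-type uniformity test for axial data; this is a generic illustration, not necessarily the statistical test used in the study. Position angles are defined modulo 180°, so they are doubled to map axes onto the full circle before testing:

```python
import math

def rayleigh_axial(pa_degrees):
    """Rayleigh test for uniformity of axial data (position angles mod 180).
    Returns the mean resultant length R of the doubled angles and the
    first-order approximate p-value p ~= exp(-n * R^2); a small p rejects
    the hypothesis of randomly oriented axes."""
    n = len(pa_degrees)
    angles = [math.radians(2.0 * pa) for pa in pa_degrees]
    c = sum(math.cos(a) for a in angles) / n
    s = sum(math.sin(a) for a in angles) / n
    r = math.hypot(c, s)
    return r, math.exp(-n * r * r)

# Spread-out position angles: R near 0, p near 1 (consistent with random).
print(rayleigh_axial([0.0, 22.5, 45.0, 67.5, 90.0, 112.5, 135.0, 157.5]))
# Tightly aligned position angles: R near 1, tiny p (not random).
print(rayleigh_axial([88, 90, 92, 89, 91, 90, 87, 93]))
```

    A core sample whose elongation angles pass such a test, as in Lupus I, shows no preferred orientation relative to the filament or field direction.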

  19. The development of a multi-dimensional gambling accessibility scale.

    Science.gov (United States)

    Hing, Nerilee; Haw, John

    2009-12-01

    The aim of the current study was to develop a scale of gambling accessibility that would have theoretical significance for exposure theory and would also serve to highlight the accessibility risk factors for problem gambling. Scale items were generated from the recommendations of the Productivity Commission's report (Australia's Gambling Industries: Report No. 10, AusInfo, Canberra, 1999) and tested on a group with high exposure to the gambling environment. In total, 533 gaming venue employees (aged 18-70 years; 67% women) completed a questionnaire that included six 13-item scales measuring accessibility across a range of gambling forms (gaming machines, keno, casino table games, lotteries, horse and dog racing, sports betting). Also included in the questionnaire was the Problem Gambling Severity Index (PGSI), along with measures of gambling frequency and expenditure. Principal components analysis indicated that a common three-factor structure existed across all forms of gambling; the factors were labelled social accessibility, physical accessibility and cognitive accessibility. However, convergent validity was not demonstrated, with inconsistent correlations between each subscale and measures of gambling behaviour. These results are discussed in light of exposure theory and the further development of a multi-dimensional measure of gambling accessibility.
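    The factor-retention logic behind such a principal components analysis can be sketched by extracting eigenvalues of the item correlation matrix and counting those above 1 (the Kaiser criterion). The block-structured correlation matrix below is synthetic, purely to illustrate the mechanics, and not the gambling data:

```python
import math
import random

_rng = random.Random(0)  # seeded so the example is reproducible

def top_eigen(mat, iters=200):
    """Leading eigenvalue/eigenvector of a symmetric PSD matrix, by power
    iteration from a random start vector."""
    n = len(mat)
    v = [_rng.random() + 0.5 for _ in range(n)]
    lam = 0.0
    for _ in range(iters):
        w = [sum(mat[i][j] * v[j] for j in range(n)) for i in range(n)]
        lam = math.sqrt(sum(x * x for x in w))
        if lam < 1e-12:            # matrix is (numerically) zero
            return 0.0, v
        v = [x / lam for x in w]
    return lam, v

def kaiser_count(corr):
    """Number of components with eigenvalue > 1, found by repeatedly
    extracting the leading component and deflating the matrix."""
    mat = [row[:] for row in corr]
    n = len(mat)
    count = 0
    for _ in range(n):
        lam, v = top_eigen(mat)
        if lam <= 1.0:
            break
        count += 1
        for i in range(n):         # deflate: subtract lam * v * v^T
            for j in range(n):
                mat[i][j] -= lam * v[i] * v[j]
    return count

def block_corr(n_blocks=3, block_size=4, r=0.6):
    """Correlation matrix with `n_blocks` blocks of correlated items."""
    n = n_blocks * block_size
    return [[1.0 if i == j else
             (r if i // block_size == j // block_size else 0.0)
             for j in range(n)] for i in range(n)]

print(kaiser_count(block_corr()))  # three correlated item blocks -> 3
```

    On real questionnaire data the correlation matrix would be estimated from responses, and retained components would then be interpreted and labelled, as the three accessibility factors were above.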

  20. Multi-Year Program Plan 2011-2015

    Energy Technology Data Exchange (ETDEWEB)

    none,

    2010-12-01

    The Vehicle Technologies Multi-Year Program Plan, FY 2011-2015, outlines the scientific research and technology development that must be undertaken over the five-year timeframe (beyond the FY 2010 base year) to help meet the Administration's goals for reductions in oil consumption and carbon emissions from the ground transport vehicle sector of the economy.