WorldWideScience

Sample records for large-scale application due

  1. Superconducting materials for large scale applications

    International Nuclear Information System (INIS)

    Dew-Hughes, D.

    1975-01-01

    Applications of superconductors capable of carrying large current densities in large-scale electrical devices are examined. Discussions are included on critical current density, superconducting materials available, and future prospects for improved superconducting materials. (JRD)

  2. Large-scale multimedia modeling applications

    International Nuclear Information System (INIS)

    Droppo, J.G. Jr.; Buck, J.W.; Whelan, G.; Strenge, D.L.; Castleton, K.J.; Gelston, G.M.

    1995-08-01

    Over the past decade, the US Department of Energy (DOE) and other agencies have faced increasing scrutiny over a wide range of environmental issues related to past and current practices. A number of applications have been undertaken that required analysis of large numbers of potential environmental issues over a wide range of environmental conditions and contaminants. Several of these applications, referred to here as large-scale applications, have addressed long-term public health risks using a holistic approach for assessing impacts from potential waterborne and airborne transport pathways. Multimedia models such as the Multimedia Environmental Pollutant Assessment System (MEPAS) were designed for use in such applications. MEPAS integrates radioactive and hazardous contaminant impact computations for major exposure routes via air, surface water, ground water, and overland flow transport. A number of large-scale applications of MEPAS have been conducted to assess various endpoints for environmental and human health impacts. These applications are described in terms of lessons learned in the development of an effective approach for large-scale applications.

  3. Emerging large-scale solar heating applications

    International Nuclear Information System (INIS)

    Wong, W.P.; McClung, J.L.

    2009-01-01

    Currently the market for solar heating applications in Canada is dominated by outdoor swimming pool heating, make-up air pre-heating and domestic water heating in homes, commercial and institutional buildings. All of these involve relatively small systems, except for a few air pre-heating systems on very large buildings. Together these applications make up well over 90% of the solar thermal collectors installed in Canada during 2007. These three applications, along with the recent re-emergence of large-scale concentrated solar thermal for generating electricity, also dominate the world markets. This paper examines some emerging markets for large scale solar heating applications, with a focus on the Canadian climate and market. (author)

  4. Emerging large-scale solar heating applications

    Energy Technology Data Exchange (ETDEWEB)

    Wong, W.P.; McClung, J.L. [Science Applications International Corporation (SAIC Canada), Ottawa, Ontario (Canada)

    2009-07-01

    Currently the market for solar heating applications in Canada is dominated by outdoor swimming pool heating, make-up air pre-heating and domestic water heating in homes, commercial and institutional buildings. All of these involve relatively small systems, except for a few air pre-heating systems on very large buildings. Together these applications make up well over 90% of the solar thermal collectors installed in Canada during 2007. These three applications, along with the recent re-emergence of large-scale concentrated solar thermal for generating electricity, also dominate the world markets. This paper examines some emerging markets for large scale solar heating applications, with a focus on the Canadian climate and market. (author)

  5. Mitigation of Ground Vibration due to Collapse of a Large-Scale Cooling Tower with Novel Application of Materials as Cushions

    Directory of Open Access Journals (Sweden)

    Feng Lin

    2017-01-01

    Full Text Available Ground vibration induced by the collapse of large-scale cooling towers in nuclear power plants (NPPs) has recently been recognized as a potential secondary disaster to adjacent nuclear-related facilities, creating a demand for vibration mitigation. The previous design concept, which treated cooling towers and the nuclear-related facilities operating in a containment as isolated components of an NPP, is inappropriate for sites with limited area, which is the case for inland NPPs in China. This paper presents a numerical study on the mitigation of ground vibration in a “cooling tower-soil-containment” system via a novel application of two materials acting as cushions underneath cooling towers, namely foamed concrete and a “tube assembly.” Comprehensive “cooling tower-cushion-soil” models were built with reasonable cushion material models. Computational cases were performed to demonstrate the effect of vibration mitigation using seven earthquake waves. The results show that collapse-induced ground vibrations at a point 300 m away were reduced on average by 91%, 79%, and 92% in the radial, tangential, and vertical directions when foamed concrete was used, and by 53%, 32%, and 59%, respectively, when the “tube assembly” was applied. Remarkable vibration mitigation was therefore achieved in both cases, enhancing the resilience of the “cooling tower-soil-containment” system against the secondary disaster.

  6. Superconducting materials for large scale applications

    International Nuclear Information System (INIS)

    Scanlan, Ronald M.; Malozemoff, Alexis P.; Larbalestier, David C.

    2004-01-01

    Significant improvements in the properties of superconducting materials have occurred recently. These improvements are being incorporated into the latest generation of wires, cables, and tapes that are being used in a broad range of prototype devices. These devices include new, high field accelerator and NMR magnets, magnets for fusion power experiments, motors, generators, and power transmission lines. These prototype magnets are joining a wide array of existing applications that utilize the unique capabilities of superconducting magnets: accelerators such as the Large Hadron Collider, fusion experiments such as ITER, 930 MHz NMR, and 4 Tesla MRI. In addition, promising new materials such as MgB2 have been discovered and are being studied in order to assess their potential for new applications. In this paper, we will review the key developments that are leading to these new applications for superconducting materials. In some cases, the key factor is improved understanding or development of materials with significantly improved properties. An example of the former is the development of Nb3Sn for use in high field magnets for accelerators. In other cases, the development is being driven by the application. The aggressive effort to develop HTS tapes is being driven primarily by the need for materials that can operate at temperatures of 50 K and higher. The implications of these two drivers for further developments will be discussed. Finally, we will discuss the areas where further improvements are needed in order for new applications to be realized.

  7. Superconducting materials for large scale applications

    Energy Technology Data Exchange (ETDEWEB)

    Scanlan, Ronald M.; Malozemoff, Alexis P.; Larbalestier, David C.

    2004-05-06

    Significant improvements in the properties of superconducting materials have occurred recently. These improvements are being incorporated into the latest generation of wires, cables, and tapes that are being used in a broad range of prototype devices. These devices include new, high field accelerator and NMR magnets, magnets for fusion power experiments, motors, generators, and power transmission lines. These prototype magnets are joining a wide array of existing applications that utilize the unique capabilities of superconducting magnets: accelerators such as the Large Hadron Collider, fusion experiments such as ITER, 930 MHz NMR, and 4 Tesla MRI. In addition, promising new materials such as MgB2 have been discovered and are being studied in order to assess their potential for new applications. In this paper, we will review the key developments that are leading to these new applications for superconducting materials. In some cases, the key factor is improved understanding or development of materials with significantly improved properties. An example of the former is the development of Nb3Sn for use in high field magnets for accelerators. In other cases, the development is being driven by the application. The aggressive effort to develop HTS tapes is being driven primarily by the need for materials that can operate at temperatures of 50 K and higher. The implications of these two drivers for further developments will be discussed. Finally, we will discuss the areas where further improvements are needed in order for new applications to be realized.

  8. Development of tsunami fragility evaluation methods by large scale experiments. Part 2. Validation of the applicability of evaluation methods of impact force due to tsunami floating debris

    International Nuclear Information System (INIS)

    Takabatake, Daisuke; Kihara, Naoto; Kaida, Hideki; Miyagawa, Yoshinori; Ikeno, Masaaki; Shibayama, Atsushi

    2015-01-01

    In order to examine the applicability of existing equations for estimating the impact force due to tsunami floating debris, collision tests are carried out using logs and a full-scale light car. In this report, two types of existing equations are considered: one based on the Young's modulus of the debris (Eq.A) and the other based on the stiffness of the debris (Eq.B). The impact forces estimated using Eq.A with the log's Young's modulus obtained from a material test agree with the forces measured in the collision tests. However, Eq.A is not applicable to a car, because the Young's modulus of a car is not easy to determine. On the other hand, the impact forces estimated using Eq.B with the car's stiffness obtained from a static loading test agree with the forces measured in the collision tests. This indicates that Eq.B enables us to estimate the impact force of floating debris such as cars, provided the stiffness of the debris is determined. (author)

  9. New Visions for Large Scale Networks: Research and Applications

    Data.gov (United States)

    Networking and Information Technology Research and Development, Executive Office of the President — This paper documents the findings of the March 12-14, 2001 Workshop on New Visions for Large-Scale Networks: Research and Applications. The workshop's objectives were...

  10. Bio-inspired wooden actuators for large scale applications.

    Directory of Open Access Journals (Sweden)

    Markus Rüggeberg

    Full Text Available Implementing programmable actuation into materials and structures is a major topic in the field of smart materials. In particular the bilayer principle has been employed to develop actuators that respond to various kinds of stimuli. A multitude of small scale applications down to micrometer size have been developed, but up-scaling remains challenging due to either limitations in mechanical stiffness of the material or in the manufacturing processes. Here, we demonstrate the actuation of wooden bilayers in response to changes in relative humidity, making use of the high material stiffness and a good machinability to reach large scale actuation and application. Amplitude and response time of the actuation were measured and can be predicted and controlled by adapting the geometry and the constitution of the bilayers. Field tests in full weathering conditions revealed long-term stability of the actuation. The potential of the concept is shown by a first demonstrator. With the sensor and actuator intrinsically incorporated in the wooden bilayers, the daily change in relative humidity is exploited for an autonomous and solar powered movement of a tracker for solar modules.

  11. Bio-inspired wooden actuators for large scale applications.

    Science.gov (United States)

    Rüggeberg, Markus; Burgert, Ingo

    2015-01-01

    Implementing programmable actuation into materials and structures is a major topic in the field of smart materials. In particular the bilayer principle has been employed to develop actuators that respond to various kinds of stimuli. A multitude of small scale applications down to micrometer size have been developed, but up-scaling remains challenging due to either limitations in mechanical stiffness of the material or in the manufacturing processes. Here, we demonstrate the actuation of wooden bilayers in response to changes in relative humidity, making use of the high material stiffness and a good machinability to reach large scale actuation and application. Amplitude and response time of the actuation were measured and can be predicted and controlled by adapting the geometry and the constitution of the bilayers. Field tests in full weathering conditions revealed long-term stability of the actuation. The potential of the concept is shown by a first demonstrator. With the sensor and actuator intrinsically incorporated in the wooden bilayers, the daily change in relative humidity is exploited for an autonomous and solar powered movement of a tracker for solar modules.

  12. Environmental Impacts of Large Scale Biochar Application Through Spatial Modeling

    Science.gov (United States)

    Huber, I.; Archontoulis, S.

    2017-12-01

    In an effort to study the environmental (emissions, soil quality) and production (yield) impacts of biochar application at regional scales, we coupled the APSIM-Biochar model with the pSIMS parallel platform. So far the majority of biochar research has concentrated on lab-to-field studies to advance scientific knowledge; regional-scale assessments are highly needed to assist decision making. The overall objective of this simulation study was to identify areas in the USA that would gain the most environmentally from biochar application, as well as areas where our model predicts a notable yield increase due to the addition of biochar. We present the modifications in both the APSIM biochar and pSIMS components that were necessary to facilitate these large-scale model runs across several regions in the United States at a resolution of 5 arcminutes. This study uses the AgMERRA global climate data set (1980-2010) and the Global Soil Dataset for Earth Systems modeling as a basis for creating its simulations, as well as local management operations for maize and soybean cropping systems and different biochar application rates. The regional-scale simulation analysis is in progress. Preliminary results showed that the model predicts that high quality soils (particularly those common to Iowa cropping systems) do not receive much, if any, production benefit from biochar. However, soils with low soil organic matter (0.5%) do get a noteworthy yield increase of around 5-10% in the best cases. We also found N2O emissions to be spatially and temporally specific, increasing in some areas and decreasing in others due to biochar application. In contrast, we found increases in soil organic carbon and plant available water in all soils (top 30 cm) due to biochar application. The magnitude of these increases (% change from the control) was larger in soils with low organic matter (below 1.5%) and smaller in soils with high organic matter (above 3%), and also dependent on the biochar application rate.

  13. Programs on large scale applications of superconductivity in Japan

    International Nuclear Information System (INIS)

    Yasukochi, K.; Ogasawara, T.

    1974-01-01

    The history of large-scale applications of superconductivity in Japan is reviewed. Experimental work on superconducting magnet systems for high energy physics has just begun. The programs are described in five categories: 1) MHD power generation systems, 2) superconducting rotating machines, 3) cryogenic power transmission systems, 4) magnetically levitated transportation, and 5) applications to high energy physics experiments. The development of a large superconducting magnet for a 1,000 kW class generator was set as the target of the first seven-year plan, which ended in 1972; the work was continued for three years from 1973 with a budget of 900 million yen. In the second phase plan, a prototype MHD generator is under discussion. Fuji Electric Co. is contemplating a plan to develop a synchronous generator with an inner rotating field. The total budget for the future plans of the superconducting power transmission system amounts to 20 billion yen for a first period of 8 to 9 years. In JNR's research and development efforts, several characteristic points stand out: 1) linear motor drive with the active side on the ground, 2) a loop track, and 3) combined test runs of maglev and LSM. A field test at a speed of 500 km/hr on a 7 km track is scheduled for 1975, with revenue operation targeted for 1985. A 12 GeV proton synchrotron is now under construction for the study of high energy physics. A three-ring intersecting storage accelerator is discussed as a future plan. (Iwakiri, K.)

  14. Large scale electromechanical transistor with application in mass sensing

    Energy Technology Data Exchange (ETDEWEB)

    Jin, Leisheng; Li, Lijie, E-mail: L.Li@swansea.ac.uk [Multidisciplinary Nanotechnology Centre, College of Engineering, Swansea University, Swansea SA2 8PP (United Kingdom)

    2014-12-07

    The nanomechanical transistor (NMT) has evolved from the single electron transistor, a device that operates by shuttling electrons with a self-excited central conductor. The unfavoured aspects of the NMT are the complexity of the fabrication process and of its signal processing unit, which could potentially be overcome by designing much larger devices. This paper reports a new design of large scale electromechanical transistor (LSEMT), still taking advantage of the principle of shuttling electrons. However, because of the large size, nonlinear electrostatic forces induced by the transistor itself are not sufficient to drive the mechanical member into vibration; an external force has to be used. In this paper, an LSEMT device is modelled, and its new application in mass sensing is postulated using two coupled mechanical cantilevers, one of them embedded in the transistor. The sensor is capable of detecting added mass by the eigenstate-shift method, reading the change of electrical current from the transistor, which has much higher sensitivity than the conventional eigenfrequency-shift approach used in classical cantilever-based mass sensors. Numerical simulations are conducted to investigate the performance of the mass sensor.
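    The eigenstate-shift readout described above lends itself to a small numerical illustration. The sketch below is our own minimal model, not the authors' simulation: the two coupled cantilevers are idealized as spring-coupled harmonic oscillators, and all stiffness, mass, and coupling values are invented.

```python
import numpy as np
from scipy.linalg import eigh

# Two coupled cantilevers idealized as spring-coupled harmonic oscillators.
# All parameter values are illustrative, not taken from the paper.
k, kc = 1.0, 0.05          # cantilever stiffness and coupling stiffness
dm = 0.01                  # small mass adsorbed on cantilever 2

def modes(m2):
    K = np.array([[k + kc, -kc],
                  [-kc,    k + kc]])   # stiffness matrix
    M = np.diag([1.0, m2])             # mass matrix
    w2, V = eigh(K, M)                 # generalized eigenproblem K v = w^2 M v
    return np.sqrt(w2), V

w0, V0 = modes(1.0)
w1, V1 = modes(1.0 + dm)

# Conventional readout: eigenfrequency shift.
# Eigenstate readout: change in the mode shapes themselves.
print("frequency shifts :", w1 - w0)
print("mode-shape change:", np.linalg.norm(np.abs(V1) - np.abs(V0)))
```

    With nearly identical, weakly coupled resonators, a tiny added mass redistributes the mode shapes far more strongly than it shifts the eigenfrequencies, which is consistent with the sensitivity advantage the abstract claims for the eigenstate-shift readout.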

  15. Large-scale HTS bulks for magnetic application

    International Nuclear Information System (INIS)

    Werfel, Frank N.; Floegel-Delor, Uta; Riedel, Thomas; Goebel, Bernd; Rothfeld, Rolf; Schirrmeister, Peter; Wippich, Dieter

    2013-01-01

    Highlights: ► ATZ Company has constructed about 130 HTS magnet systems. ► Multi-seeded YBCO bulks point the way for large-scale application. ► Levitation platforms demonstrated “superconductivity” to a great public audience (100 years anniversary). ► HTS magnetic bearings show forces up to 1 t. ► Modular HTS maglev vacuum cryostats are tested for train demonstrators in Brazil, China and Germany. -- Abstract: ATZ Company has constructed about 130 HTS magnet systems using high-Tc bulk magnets. A key feature in scaling up is the fabrication of melt-textured, multi-seeded large YBCO bulks with three to eight seeds. Besides levitation, magnetization, trapped field and hysteresis, we review system engineering parameters of HTS magnetic linear and rotational bearings, such as compactness, cryogenics, power density, efficiency and robust construction. We examine mobile compact YBCO bulk magnet platforms cooled with LN2 and a Stirling cryo-cooler for demonstrator use. Compact cryostats for Maglev train operation contain 24 pieces of 3-seed bulks and can levitate 2500–3000 N at 10 mm above a permanent magnet (PM) track. The effective magnetic distance of the thermally insulated bulks is only 2 mm; the stored 2.5 l of LN2 allows more than 24 h of operation without refilling. 34 HTS Maglev vacuum cryostats have been manufactured, tested, and operated in Germany, China and Brazil. The magnetic levitation load-to-weight ratio is more than 15, and by assembling groups of HTS cryostats under vehicles, total levitated loads of up to 5 t above a magnetic track are achieved.

  16. Large-scale HTS bulks for magnetic application

    Energy Technology Data Exchange (ETDEWEB)

    Werfel, Frank N., E-mail: werfel@t-online.de [Adelwitz Technologiezentrum GmbH (ATZ), Rittergut Adelwitz 16, 04886 Arzberg-Adelwitz (Germany); Floegel-Delor, Uta; Riedel, Thomas; Goebel, Bernd; Rothfeld, Rolf; Schirrmeister, Peter; Wippich, Dieter [Adelwitz Technologiezentrum GmbH (ATZ), Rittergut Adelwitz 16, 04886 Arzberg-Adelwitz (Germany)

    2013-01-15

    Highlights: ► ATZ Company has constructed about 130 HTS magnet systems. ► Multi-seeded YBCO bulks point the way for large-scale application. ► Levitation platforms demonstrated “superconductivity” to a great public audience (100 years anniversary). ► HTS magnetic bearings show forces up to 1 t. ► Modular HTS maglev vacuum cryostats are tested for train demonstrators in Brazil, China and Germany. -- Abstract: ATZ Company has constructed about 130 HTS magnet systems using high-Tc bulk magnets. A key feature in scaling up is the fabrication of melt-textured, multi-seeded large YBCO bulks with three to eight seeds. Besides levitation, magnetization, trapped field and hysteresis, we review system engineering parameters of HTS magnetic linear and rotational bearings, such as compactness, cryogenics, power density, efficiency and robust construction. We examine mobile compact YBCO bulk magnet platforms cooled with LN2 and a Stirling cryo-cooler for demonstrator use. Compact cryostats for Maglev train operation contain 24 pieces of 3-seed bulks and can levitate 2500–3000 N at 10 mm above a permanent magnet (PM) track. The effective magnetic distance of the thermally insulated bulks is only 2 mm; the stored 2.5 l of LN2 allows more than 24 h of operation without refilling. 34 HTS Maglev vacuum cryostats have been manufactured, tested, and operated in Germany, China and Brazil. The magnetic levitation load-to-weight ratio is more than 15, and by assembling groups of HTS cryostats under vehicles, total levitated loads of up to 5 t above a magnetic track are achieved.

  17. Large-scale applications of superconductivity in the United States: an overview. Metallurgy, fabrication, and applications

    International Nuclear Information System (INIS)

    Hein, R.A.; Gubser, D.U.

    1981-01-01

    This report presents an overview of ongoing development efforts in the USA concerned with large-scale applications of superconductivity. These applications are grouped according to magnetic field regime: low field, intermediate field, and high field. In the low field regime, two diverse areas of large-scale application are identified: superconducting power transmission lines for electric utilities, and RF cavities for particle accelerators for high energy physics research. Activity in the intermediate regime has increased significantly due to Fermilab's energy doubler (Tevatron) project and BNL's ISABELLE project. Rotating electrical machines, such as DC acyclic (homopolar) motors and generators, as well as energy storage magnets, are also studied. In the high field regime, magnetohydrodynamics (MHD) and magnetically confined fusion in tokamaks are examined. In each regime, all current work is summarized according to key person, research topic, type of program, funding, status, and future outlook.

  18. Nuclear-pumped lasers for large-scale applications

    International Nuclear Information System (INIS)

    Anderson, R.E.; Leonard, E.M.; Shea, R.F.; Berggren, R.R.

    1989-05-01

    Efficient initiation of large-volume chemical lasers may be achieved by neutron induced reactions which produce charged particles in the final state. When a burst mode nuclear reactor is used as the neutron source, both a sufficiently intense neutron flux and a sufficiently short initiation pulse may be possible. Proof-of-principle experiments are planned to demonstrate lasing in a direct nuclear-pumped large-volume system; to study the effects of various neutron absorbing materials on laser performance; to study the effects of long initiation pulse lengths; to demonstrate the performance of large-scale optics and the beam quality that may be obtained; and to assess the performance of alternative designs of burst systems that increase the neutron output and burst repetition rate. 21 refs., 8 figs., 5 tabs

  19. Traffic assignment models in large-scale applications

    DEFF Research Database (Denmark)

    Rasmussen, Thomas Kjær

    ... the potential of the method proposed and the possibility to use individual-based GPS units for travel surveys in real-life large-scale multi-modal networks. Congestion is known to highly influence the way we act in the transportation network (and organise our lives) because of longer travel times, but the reliability of the travel time also has a large impact on our travel choices. Consequently, in order to improve the realism of transport models, correct understanding and representation of two values that are related to the value of time (VoT) are essential: (i) the value of congestion (VoC), as the Vo... Observations of actual behaviour are used to obtain estimates of the (monetary) value of different travel time components, thereby increasing the behavioural realism of large-scale models. The generation of choice sets is a vital component in route choice models. This is, however, not a straightforward task in real...

  20. Applicability of vector processing to large-scale nuclear codes

    International Nuclear Information System (INIS)

    Ishiguro, Misako; Harada, Hiroo; Matsuura, Toshihiko; Okuda, Motoi; Ohta, Fumio; Umeya, Makoto.

    1982-03-01

    To meet the growing trend of computational requirements in JAERI, introduction of a high-speed computer with vector processing capability (a vector processor) is desirable in the near future. To make effective use of a vector processor, appropriate optimization of nuclear codes to the pipelined-vector architecture is vital, which will pose new problems concerning code development and maintenance. In this report, vector processing efficiency is assessed with respect to large-scale nuclear codes by examining the following items: 1) The present profile of computational load in JAERI is analyzed by compiling computer utilization statistics. 2) Vector processing efficiency is estimated for ten heavily-used nuclear codes by analyzing their dynamic behavior when run on a scalar machine. 3) Vector processing efficiency is measured for another five nuclear codes by using current vector processors, the FACOM 230-75 APU and the CRAY-1. 4) The effectiveness of applying a high-speed vector processor to nuclear codes is evaluated by taking into account the characteristics of JAERI jobs. Problems of vector processors are also discussed from the viewpoints of code performance and ease of use. (author)
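    As a present-day analogue of the scalar-versus-vector comparison made in this report, the sketch below times the same kernel written as an element-by-element loop and as a single whole-array (vector) operation. The kernel and array sizes are arbitrary stand-ins, not taken from the JAERI codes.

```python
import time
import numpy as np

# The same kernel in scalar-loop form and in whole-array (vector) form.
n = 1_000_000
a = np.random.rand(n)
b = np.random.rand(n)

t0 = time.perf_counter()
c = np.empty(n)
for i in range(n):          # scalar style: one element per iteration
    c[i] = 2.0 * a[i] + b[i]
t_scalar = time.perf_counter() - t0

t0 = time.perf_counter()
c_vec = 2.0 * a + b         # vector style: one pipelined whole-array op
t_vector = time.perf_counter() - t0

assert np.allclose(c, c_vec)
print(f"scalar loop: {t_scalar:.3f}s, vectorized: {t_vector:.3f}s")
```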

  1. Advanced I/O for large-scale scientific applications

    International Nuclear Information System (INIS)

    Klasky, Scott; Schwan, Karsten; Oldfield, Ron A.; Lofstead, Gerald F. II

    2010-01-01

    As scientific simulations scale to use petascale machines and beyond, the data volumes generated pose a dual problem. First, with increasing machine sizes, the careful tuning of IO routines becomes more and more important to keep the time spent in IO acceptable. It is not uncommon, for instance, to have 20% of an application's runtime spent performing IO in a 'tuned' system. Careful management of the IO routines can move that to 5% or even less in some cases. Second, the data volumes are so large, on the order of 10s to 100s of TB, that trying to discover the scientifically valid contributions requires assistance at runtime to both organize and annotate the data. Waiting for offline processing is not feasible due both to the impact on the IO system and the time required. To reduce this load and improve the ability of scientists to use the large amounts of data being produced, new techniques for data management are required. First, there is a need for techniques for efficient movement of data from the compute space to storage. These techniques should understand the underlying system infrastructure and adapt to changing system conditions. Technologies include aggregation networks, data staging nodes for a closer parity to the IO subsystem, and autonomic IO routines that can detect system bottlenecks and choose different approaches, such as splitting the output into multiple targets, staggering output processes. Such methods must be end-to-end, meaning that even with properly managed asynchronous techniques, it is still essential to properly manage the later synchronous interaction with the storage system to maintain acceptable performance. Second, for the data being generated, annotations and other metadata must be incorporated to help the scientist understand output data for the simulation run as a whole, to select data and data features without concern for what files or other storage technologies were employed. All of these features should be attained while
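    The data-staging idea described above can be illustrated in a few lines. The following is a minimal, hypothetical sketch (the compute kernel, buffer sizes, and file name are all invented): compute hands buffers to a bounded queue, and a background writer drains them to storage so that output overlaps computation.

```python
import queue
import threading
import numpy as np

# Bounded queue: applies back-pressure on compute if the IO side lags.
stage = queue.Queue(maxsize=8)

def compute_step(step):
    # Stand-in for one simulation step's output (hypothetical kernel)
    return np.full(1024, float(step))

def writer():
    # Background staging thread: the only place that touches storage
    with open("dump.bin", "ab") as f:
        while True:
            buf = stage.get()
            if buf is None:          # sentinel: simulation finished
                return
            f.write(buf)             # synchronous interaction with storage

t = threading.Thread(target=writer)
t.start()

for step in range(100):
    data = compute_step(step)        # compute ...
    stage.put(data.tobytes())        # ... enqueue output, resume at once

stage.put(None)                      # signal completion and drain
t.join()
```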

  2. Power-aware load balancing of large scale MPI applications

    OpenAIRE

    Etinski, Maja; Corbalán González, Julita; Labarta Mancho, Jesús José; Valero Cortés, Mateo; Veidenbaum, Alex

    2009-01-01

    Power consumption is a very important issue for the HPC community, both at the level of a single application and at the level of the whole workload. The load imbalance of an MPI application can be exploited to save CPU energy without penalizing the execution time. An application is load imbalanced when some nodes are assigned more computation than others. The nodes with less computation can be run at a lower frequency, since otherwise they have to wait for the nodes with more computation, blocked in MPI calls. A te...
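    The frequency-selection logic implied above is compact enough to sketch. The example below is our illustration with invented workloads, assuming runtime scales inversely with frequency; real hardware restricts the choice to discrete P-states.

```python
# Ranks with less work can run at a lower DVFS frequency and still reach
# the MPI synchronization point no later than the most loaded rank.
F_MAX = 2.4e9                       # nominal core frequency (Hz), made up
work = [1.0, 0.6, 0.8, 0.5]         # per-rank compute time at F_MAX (s)

t_critical = max(work)              # the bottleneck rank sets the pace
for rank, t in enumerate(work):
    # Assuming runtime ~ 1/f, the lowest frequency that avoids delay is:
    f = F_MAX * t / t_critical
    print(f"rank {rank}: target frequency {f / 1e9:.2f} GHz")
```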

  3. Final Report: Migration Mechanisms for Large-scale Parallel Applications

    Energy Technology Data Exchange (ETDEWEB)

    Jason Nieh

    2009-10-30

    Process migration is the ability to transfer a process from one machine to another. It is a useful facility in distributed computing environments, especially as computing devices become more pervasive and Internet access becomes more ubiquitous. The potential benefits of process migration, among others, are fault resilience by migrating processes off of faulty hosts, data access locality by migrating processes closer to the data, better system response time by migrating processes closer to users, dynamic load balancing by migrating processes to less loaded hosts, and improved service availability and administration by migrating processes before host maintenance so that applications can continue to run with minimal downtime. Although process migration provides substantial potential benefits and many approaches have been considered, achieving transparent process migration functionality has been difficult in practice. To address this problem, our work has designed, implemented, and evaluated new and powerful transparent process checkpoint-restart and migration mechanisms for desktop, server, and parallel applications that operate across heterogeneous cluster and mobile computing environments. A key aspect of this work has been to introduce lightweight operating system virtualization to provide processes with private, virtual namespaces that decouple and isolate processes from dependencies on the host operating system instance. This decoupling enables processes to be transparently checkpointed and migrated without modifying, recompiling, or relinking applications or the operating system. Building on this lightweight operating system virtualization approach, we have developed novel technologies that enable (1) coordinated, consistent checkpoint-restart and migration of multiple processes, (2) fast checkpointing of process and file system state to enable restart of multiple parallel execution environments and time travel, (3) process migration across heterogeneous

  4. Leverage hadoop framework for large scale clinical informatics applications.

    Science.gov (United States)

    Dong, Xiao; Bahroos, Neil; Sadhu, Eugene; Jackson, Tommie; Chukhman, Morris; Johnson, Robert; Boyd, Andrew; Hynes, Denise

    2013-01-01

    In this manuscript, we present our experiences using the Apache Hadoop framework for high data volume and computationally intensive applications, and discuss some best practice guidelines in a clinical informatics setting. There are three main aspects in our approach: (a) process and integrate diverse, heterogeneous data sources using standard Hadoop programming tools and customized MapReduce programs; (b) after fine-grained aggregate results are obtained, perform data analysis using the Mahout data mining library; (c) leverage the column oriented features in HBase for patient centric modeling and complex temporal reasoning. This framework provides a scalable solution to meet the rapidly increasing, imperative "Big Data" needs of clinical and translational research. The intrinsic advantage of fault tolerance, high availability and scalability of Hadoop platform makes these applications readily deployable at the enterprise level cluster environment.
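    As a toy illustration of the MapReduce pattern in point (a), the sketch below counts diagnosis codes per patient in plain Python. The record format and field layout are invented, and a real deployment would run the equivalent map and reduce functions as Hadoop jobs over HDFS rather than in-process.

```python
from collections import defaultdict

# Invented record format: patient id | date | diagnosis code
records = [
    "p001|2013-01-02|I10",
    "p001|2013-02-11|E11",
    "p002|2013-01-15|I10",
]

def map_fn(line):
    patient, _date, dx_code = line.split("|")
    yield (patient, dx_code), 1          # emit key-value pairs

def reduce_fn(pairs):
    counts = defaultdict(int)
    for key, value in pairs:             # shuffle/sort is implicit here
        counts[key] += value
    return counts

print(reduce_fn(kv for line in records for kv in map_fn(line)))
```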

  5. Designing a large-scale video chat application

    OpenAIRE

    Scholl, Jeremiah; Parnes, Peter; McCarthy, John D.; Sasse, Angela

    2005-01-01

    Studies of video conferencing systems generally focus on scenarios where users communicate using an audio channel. However, text chat serves users in a wide variety of contexts, and is commonly included in multimedia conferencing systems as a complement to the audio channel. This paper introduces a prototype application which integrates video and text communication, and describes a formative evaluation of the prototype with 53 users in a social setting. We focus the evaluation on bandwidth an...

  6. Assessment of renewable energy resources potential for large scale and standalone applications in Ethiopia

    NARCIS (Netherlands)

    Tucho, Gudina Terefe; Weesie, Peter D.M.; Nonhebel, Sanderine

    2014-01-01

    This study aims to determine the contribution of renewable energy to large scale and standalone application in Ethiopia. The assessment starts by determining the present energy system and the available potentials. Subsequently, the contribution of the available potentials for large scale and

  7. Evaluation of Eigenvalue Routines for Large Scale Applications

    Directory of Open Access Journals (Sweden)

    V.A. Tischler

    1994-01-01

    Full Text Available The NASA structural analysis (NASTRAN) program is one of the most extensively used engineering applications software in the world. It contains a wealth of matrix operations and numerical solution techniques, and they were used to construct efficient eigenvalue routines. The purpose of this article is to examine the current eigenvalue routines in NASTRAN and to make efficiency comparisons with a more recent implementation of the block Lanczos algorithm. This eigenvalue routine is now available in several mathematics libraries as well as in several commercial versions of NASTRAN. In addition, the CRAY library maintains a modified version of this routine on their network. Several example problems, with a varying number of degrees of freedom, were selected primarily for efficiency benchmarking. Accuracy is not an issue, because they all gave comparable results. The block Lanczos algorithm was found to be extremely efficient, particularly for very large problems.
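    Readers who want to reproduce this kind of benchmark today can reach a Lanczos-type solver through SciPy, whose eigsh routine wraps ARPACK's implicitly restarted Lanczos method. The sketch below applies it to a stand-in tridiagonal stiffness matrix; it is not NASTRAN's block Lanczos implementation.

```python
import numpy as np
from scipy.sparse import diags
from scipy.sparse.linalg import eigsh

# Stand-in sparse symmetric (stiffness-like) matrix, not a NASTRAN model.
n = 100_000
K = diags([-1.0, 2.0, -1.0], offsets=[-1, 0, 1], shape=(n, n), format="csr")

# Lowest eigenpairs via ARPACK's Lanczos; shift-invert about sigma=0
# accelerates convergence to the smallest eigenvalues.
vals, vecs = eigsh(K, k=6, sigma=0.0, which="LM")
print(np.sort(vals))
```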

  8. Large-scale budget applications of mathematical programming in the Forest Service

    Science.gov (United States)

    Malcolm Kirby

    1978-01-01

    Mathematical programming applications in the Forest Service, U.S. Department of Agriculture, are growing. They are being used for widely varying problems: budgeting, land use planning, timber transport, road maintenance and timber harvest planning. Large-scale applications are being made in budgeting. The model that is described can be used by developing economies...

  9. Large-scale dynamo action due to α fluctuations in a linear shear flow

    Science.gov (United States)

    Sridhar, S.; Singh, Nishant K.

    2014-12-01

    We present a model of large-scale dynamo action in a shear flow that has stochastic, zero-mean fluctuations of the α parameter. This is based on a minimal extension of the Kraichnan-Moffatt model to include a background linear shear and Galilean-invariant α-statistics. Using the first-order smoothing approximation we derive a linear integro-differential equation for the large-scale magnetic field, which is non-perturbative in the shearing rate S and the α-correlation time τ_α. The white-noise case, τ_α = 0, is solved exactly, and it is concluded that the necessary condition for dynamo action is identical to that of the Kraichnan-Moffatt model without shear; this is because white noise does not allow for memory effects, whereas shear needs time to act. To explore memory effects we reduce the integro-differential equation to a partial differential equation, valid for slowly varying fields when τ_α is small but non-zero. Seeking exponential modal solutions, we solve the modal dispersion relation and obtain an explicit expression for the growth rate as a function of the six independent parameters of the problem. A non-zero τ_α gives rise to new physical scales, and dynamo action is completely different from the white-noise case; e.g. even weak α fluctuations can give rise to a dynamo. We argue that, at any wavenumber, both Moffatt drift and shear always contribute to increasing the growth rate. Two examples are presented: (a) a Moffatt-drift dynamo in the absence of shear and (b) a shear dynamo in the absence of Moffatt drift.
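    For concreteness, the starting point of such a model can be written out. The equation below is our reconstruction from the abstract, not copied from the paper: the induction equation for the magnetic field in a background linear shear flow with a stochastic, zero-mean α term, from which the paper's closure and dispersion relation would be derived.

```latex
% Induction equation for B in a linear shear flow U = S x \hat{y} with a
% stochastic, zero-mean alpha(x, t); standard form, reconstructed from
% the abstract rather than taken from the paper itself.
\begin{equation}
  \frac{\partial \mathbf{B}}{\partial t}
  = \nabla \times \bigl( \mathbf{U} \times \mathbf{B}
    + \alpha(\mathbf{x},t)\,\mathbf{B} \bigr)
    + \eta \nabla^{2} \mathbf{B},
  \qquad
  \mathbf{U} = S x\,\hat{\mathbf{y}},
  \qquad
  \langle \alpha \rangle = 0 .
\end{equation}
```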

  10. Large-scale Labeled Datasets to Fuel Earth Science Deep Learning Applications

    Science.gov (United States)

    Maskey, M.; Ramachandran, R.; Miller, J.

    2017-12-01

    Deep learning has revolutionized computer vision and natural language processing with various algorithms scaled using high-performance computing. However, generic large-scale labeled datasets such as the ImageNet are the fuel that drives the impressive accuracy of deep learning results. Large-scale labeled datasets already exist in domains such as medical science, but creating them in the Earth science domain is a challenge. While there are ways to apply deep learning using limited labeled datasets, there is a need in the Earth sciences for creating large-scale labeled datasets for benchmarking and scaling deep learning applications. At the NASA Marshall Space Flight Center, we are using deep learning for a variety of Earth science applications where we have encountered the need for large-scale labeled datasets. We will discuss our approaches for creating such datasets and why these datasets are just as valuable as deep learning algorithms. We will also describe successful usage of these large-scale labeled datasets with our deep learning based applications.

  11. Large-scale structuring of a rotating plasma due to plasma macroinstabilities

    International Nuclear Information System (INIS)

    Kikuchi, Toshinori; Ikehata, Takashi; Sato, Naoyuki; Watahiki, Takeshi; Tanabe, Toshio; Mase, Hiroshi

    1995-01-01

    The formation of coherent structures during plasma macroinstabilities has been of interest in view of nonlinear plasma physics. In the present paper, we have investigated in detail the mechanism and specific features of the large-scale structuring of a rotating plasma. In the case of a weak magnetic field, the plasma ejected from a plasma gun has a high beta value (β > 1), so that it expands rapidly across the magnetic field, excluding the magnetic flux from its interior. The boundary between the expanding plasma and the magnetic field then becomes unstable against the Rayleigh-Taylor instability. This instability has a higher growth rate at shorter wavelengths, and the mode appears as flutes. These features of the instability are confirmed by the observation of radial plasma jets with azimuthal mode numbers m=20-40 in the early phase of the plasma expansion. In the case of a strong magnetic field, on the other hand, the plasma expands little and rotates at twice the ion sound speed. In particular, we observe spiral jets of m=2 instead of short-wavelength radial jets. This mode appears only when a glass target is installed or a dense neutral gas is introduced around the plasma to exert a frictional force on the plasma. From these results, and with reference to the theory of plasma instabilities, the centrifugal instability caused by a combination of velocity shear and centrifugal force is concluded to be responsible for the formation of the spiral jets. (author)

  12. A simple large-scale synthesis of mesoporous In_2O_3 for gas sensing applications

    International Nuclear Information System (INIS)

    Zhang, Su; Song, Peng; Yan, Huihui; Yang, Zhongxi; Wang, Qi

    2016-01-01

    Graphical abstract: Large-scale mesoporous In2O3 nanostructures for gas-sensing applications were successfully fabricated via a facile Lewis-acid-catalyzed furfural alcohol resin template route. - Highlights: • Mesoporous In2O3 nanostructures have been successfully fabricated in high yield via a facile strategy. • The microstructure and formation mechanism of the mesoporous In2O3 nanostructures are discussed based on the experimental results. • The as-prepared In2O3 samples exhibited high response, short response-recovery times and good selectivity to ethanol gas. - Abstract: In this paper, large-scale mesoporous In2O3 nanostructures were synthesized in high yield by a facile Lewis-acid-catalyzed furfural alcohol resin (FAR) template route. Their morphology and structure were characterized by X-ray diffraction (XRD), scanning electron microscopy (SEM), transmission electron microscopy (TEM), differential scanning calorimetry and thermogravimetric analysis (DSC-TG), and the Brunauer-Emmett-Teller (BET) approach. The as-obtained mesoporous In2O3 nanostructures possess an excellent mesoporous network structure, which increases the contact area with the gases and is conducive to the adsorption-desorption of gas on the In2O3 surface. The In2O3 particles and pores were both about 15 nm in size and very uniform. In gas-sensing measurements with target gases, the sensor based on the mesoporous In2O3 nanostructures showed a good response, short response-recovery times, and good selectivity and stability towards ethanol. These properties are due to the large specific surface area of the mesoporous structure. This synthetic method could serve as a new design concept for functional mesoporous nanomaterials and for mass production.

  13. Large scale inverse problems computational methods and applications in the earth sciences

    CERN Document Server

    Scheichl, Robert; Freitag, Melina A; Kindermann, Stefan

    2013-01-01

    This book is the second volume of a three-volume series recording the "Radon Special Semester 2011 on Multiscale Simulation & Analysis in Energy and the Environment" taking place in Linz, Austria, October 3-7, 2011. The volume addresses the common ground in the mathematical and computational procedures required for large-scale inverse problems and data assimilation in forefront applications.

  14. Large-Scale Fabrication of Silicon Nanowires for Solar Energy Applications.

    Science.gov (United States)

    Zhang, Bingchang; Jie, Jiansheng; Zhang, Xiujuan; Ou, Xuemei; Zhang, Xiaohong

    2017-10-11

    The development of silicon (Si) materials during the past decades has boosted the prosperity of the modern semiconductor industry. In comparison with bulk-Si materials, Si nanowires (SiNWs) possess superior structural, optical, and electrical properties and have attracted increasing attention for solar energy applications. To achieve practical applications of SiNWs, both large-scale synthesis of SiNWs at low cost and rational design of energy conversion devices with high efficiency are prerequisites. This review focuses on recent progress in the large-scale production of SiNWs, as well as the construction of high-efficiency SiNW-based solar energy conversion devices, including photovoltaic devices and photo-electrochemical cells. Finally, the outlook and challenges in this emerging field are presented.

  15. Large-Scale Partial-Duplicate Image Retrieval and Its Applications

    Science.gov (United States)

    2016-04-23

    Final Report: Large-Scale Partial-Duplicate Image Retrieval and Its Applications (reporting period 23-Jan-2012 to 22-Jan-2016). ... tree based image retrieval, a semantic-aware co-indexing algorithm is proposed to jointly embed two strong cues into the inverted indexes: 1) local ...

  16. Multi-level discriminative dictionary learning with application to large scale image classification.

    Science.gov (United States)

    Shen, Li; Sun, Gang; Huang, Qingming; Wang, Shuhui; Lin, Zhouchen; Wu, Enhua

    2015-10-01

    The sparse coding technique has shown flexibility and capability in image representation and analysis. It is a powerful tool in many visual applications. Some recent work has shown that incorporating the properties of the task (such as discrimination for classification tasks) into dictionary learning is effective for improving accuracy. However, traditional supervised dictionary learning methods suffer from high computational complexity when dealing with a large number of categories, making them less satisfactory in large-scale applications. In this paper, we propose a novel multi-level discriminative dictionary learning method and apply it to large-scale image classification. Our method takes advantage of hierarchical category correlation to encode multi-level discriminative information. Each internal node of the category hierarchy is associated with a discriminative dictionary and a classification model. The dictionaries at different layers are learnt to capture information at different scales. Moreover, each node at the lower layers also inherits the dictionary of its parent, so that the categories at lower layers can be described with multi-scale information. The learning of the dictionaries and the associated classification models is jointly conducted by minimizing an overall tree loss. Experimental results on challenging data sets demonstrate that our approach achieves excellent accuracy and competitive computation cost compared with other sparse coding methods for large-scale image classification.

  17. On the Large-Scaling Issues of Cloud-based Applications for Earth Science Data

    Science.gov (United States)

    Hua, H.

    2016-12-01

    Next generation science data systems are needed to address the incoming flood of data from new missions such as NASA's SWOT and NISAR, whose SAR data volumes and data throughput rates are orders of magnitude larger than those of present day missions. Existing missions, such as OCO-2, may also require rapid turn-around for processing different science scenarios, where on-premise and even traditional HPC computing environments may not meet the processing needs. Additionally, traditional means of procuring hardware on-premise are already limited by facility capacity constraints for these new missions. Experience has shown that embracing efficient cloud computing approaches for large-scale science data systems requires more than just moving existing code to cloud environments. At large cloud scales, we need to deal with scaling and cost issues. We present our experiences deploying multiple instances of our hybrid-cloud computing science data system (HySDS) to support large-scale processing of Earth Science data products. We will explore optimization approaches to getting the best performance out of hybrid-cloud computing, as well as common issues that arise when dealing with large-scale computing. Novel approaches were utilized to do processing on Amazon's spot market, which can potentially offer 75%-90% cost savings but with an unpredictable computing environment driven by market forces.

  18. A Combined Eulerian-Lagrangian Data Representation for Large-Scale Applications.

    Science.gov (United States)

    Sauer, Franz; Xie, Jinrong; Ma, Kwan-Liu

    2017-10-01

    The Eulerian and Lagrangian reference frames each provide a unique perspective when studying and visualizing results from scientific systems. As a result, many large-scale simulations produce data in both formats, and analysis tasks that simultaneously utilize information from both representations are becoming increasingly popular. However, due to their fundamentally different nature, drawing correlations between these data formats is a computationally difficult task, especially in a large-scale setting. In this work, we present a new data representation which combines both reference frames into a joint Eulerian-Lagrangian format. By reorganizing Lagrangian information according to the Eulerian simulation grid into a "unit cell" based approach, we can provide an efficient out-of-core means of sampling, querying, and operating with both representations simultaneously. We also extend this design to generate multi-resolution subsets of the full data to suit the viewer's needs and provide a fast flow-aware trajectory construction scheme. We demonstrate the effectiveness of our method using three large-scale real world scientific datasets and provide insight into the types of performance gains that can be achieved.
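    The "unit cell" reorganization can be pictured as binning particles by their containing Eulerian cell and sorting once, so that each cell's particles become a contiguous, independently loadable slice. The sketch below is our reading of that idea with invented grid and particle data, not the authors' implementation.

```python
import numpy as np

# Invented uniform grid and particle data for illustration only.
lo, hi, ncell = 0.0, 1.0, 64                 # grid extent and cells per axis
pos = np.random.rand(1_000_000, 3)           # Lagrangian particle positions
vel = np.random.randn(1_000_000, 3)          # per-particle attributes

# Flat cell index of each particle (row-major over an ncell^3 grid)
ijk = np.minimum((pos - lo) / (hi - lo) * ncell, ncell - 1).astype(np.int64)
cell = (ijk[:, 0] * ncell + ijk[:, 1]) * ncell + ijk[:, 2]

# Sort particles by cell once; afterwards each cell's particles form a
# contiguous slice, which is what makes out-of-core queries cheap.
order = np.argsort(cell)
pos, vel, cell = pos[order], vel[order], cell[order]
starts = np.searchsorted(cell, np.arange(ncell**3))  # slice boundaries

def particles_in_cell(c):
    end = starts[c + 1] if c + 1 < ncell**3 else len(cell)
    return pos[starts[c]:end], vel[starts[c]:end]
```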

  19. Performance modeling of hybrid MPI/OpenMP scientific applications on large-scale multicore supercomputers

    KAUST Repository

    Wu, Xingfu; Taylor, Valerie

    2013-01-01

    In this paper, we present a performance modeling framework based on memory bandwidth contention time and a parameterized communication model to predict the performance of OpenMP, MPI and hybrid applications with weak scaling on three large-scale multicore supercomputers: IBM POWER4, POWER5+ and BlueGene/P, and analyze the performance of these MPI, OpenMP and hybrid applications. We use STREAM memory benchmarks and Intel's MPI benchmarks to provide initial performance analysis and model validation of MPI and OpenMP applications on these multicore supercomputers because the measured sustained memory bandwidth can provide insight into the memory bandwidth that a system should sustain on scientific applications with the same amount of workload per core. In addition to using these benchmarks, we also use a weak-scaling hybrid MPI/OpenMP large-scale scientific application: Gyrokinetic Toroidal Code (GTC) in magnetic fusion to validate our performance model of the hybrid application on these multicore supercomputers. The validation results for our performance modeling method show less than 7.77% error rate in predicting the performance of hybrid MPI/OpenMP GTC on up to 512 cores on these multicore supercomputers. © 2013 Elsevier Inc.
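    In outline, a model of this family predicts runtime as compute time plus a memory-bandwidth contention term plus a parameterized (latency/bandwidth) communication term. The toy function below only shows the shape of such a model; every constant is invented, whereas the paper calibrates its terms per machine with STREAM and Intel MPI benchmarks.

```python
# Toy runtime predictor in the spirit of the framework described above.
# All constants are invented placeholders.
def predict_runtime(flops, bytes_moved, msgs, msg_bytes,
                    peak_flops=6.0e9,       # per-core peak (flop/s)
                    node_bw=10.0e9,         # shared memory bandwidth (B/s)
                    active_cores=4,         # cores contending per node
                    latency=2e-6,           # per-message latency (s)
                    inv_bw=1 / 1.0e9):      # inverse network bandwidth (s/B)
    t_comp = flops / peak_flops
    # Contention: cores on a node share the memory bus, so effective
    # per-core bandwidth shrinks with the number of active cores.
    t_mem = bytes_moved / (node_bw / active_cores)
    # Classic latency/bandwidth (Hockney-style) communication model
    t_comm = msgs * (latency + msg_bytes * inv_bw)
    return t_comp + t_mem + t_comm

# Per-core workload under weak scaling (illustrative numbers)
print(predict_runtime(flops=1e12, bytes_moved=4e11,
                      msgs=2000, msg_bytes=65536))
```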

  20. Performance modeling of hybrid MPI/OpenMP scientific applications on large-scale multicore supercomputers

    KAUST Repository

    Wu, Xingfu

    2013-12-01

    In this paper, we present a performance modeling framework based on memory bandwidth contention time and a parameterized communication model to predict the performance of OpenMP, MPI and hybrid applications with weak scaling on three large-scale multicore supercomputers: IBM POWER4, POWER5+ and BlueGene/P, and analyze the performance of these MPI, OpenMP and hybrid applications. We use STREAM memory benchmarks and Intel's MPI benchmarks to provide initial performance analysis and model validation of MPI and OpenMP applications on these multicore supercomputers because the measured sustained memory bandwidth can provide insight into the memory bandwidth that a system should sustain on scientific applications with the same amount of workload per core. In addition to using these benchmarks, we also use a weak-scaling hybrid MPI/OpenMP large-scale scientific application: Gyrokinetic Toroidal Code (GTC) in magnetic fusion to validate our performance model of the hybrid application on these multicore supercomputers. The validation results for our performance modeling method show less than 7.77% error rate in predicting the performance of hybrid MPI/OpenMP GTC on up to 512 cores on these multicore supercomputers. © 2013 Elsevier Inc.

  1. Network Partitioning Domain Knowledge Multiobjective Application Mapping for Large-Scale Network-on-Chip

    Directory of Open Access Journals (Sweden)

    Yin Zhen Tei

    2014-01-01

    Full Text Available This paper proposes a multiobjective application mapping technique targeted at large-scale network-on-chip (NoC). As the number of intellectual property (IP) cores in multiprocessor systems-on-chip (MPSoCs) increases, NoC application mapping to find the optimum core-to-topology mapping becomes more challenging. Besides, the conflicting cost and performance trade-off makes multiobjective application mapping techniques even more complex. This paper proposes an application mapping technique that incorporates domain knowledge into a genetic algorithm (GA). The initial population of the GA is initialized with network partitioning (NP), while the crossover operator is guided with knowledge on communication demands. NP reduces the complexity of large-scale application mapping and provides the GA with a promising mapping search space. The proposed genetic operator is compared with state-of-the-art genetic operators in terms of solution quality. In this work, multiobjective optimization of energy and thermal balance is considered. Through simulation, knowledge-based initial mapping shows significant improvement in the Pareto front compared to the widely used random initial mapping. The proposed knowledge-based crossover also shows a better Pareto front compared to state-of-the-art knowledge-based crossover.
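    To make the knowledge-seeded initialization concrete, the sketch below builds initial GA individuals by laying each communication-heavy partition onto contiguous mesh tiles and shuffling only within partitions. The partition contents and mesh size are toy values; a real flow would obtain the partitions from a graph partitioner.

```python
import random

MESH = 4                                   # toy 4x4 NoC mesh, 16 tiles
partitions = [[0, 1, 2, 3], [4, 5, 6, 7],  # cores grouped by heavy
              [8, 9, 10, 11], [12, 13, 14, 15]]  # communication demand

def seeded_individual():
    mapping = [None] * (MESH * MESH)       # tile index -> core id
    tile = 0
    for group in partitions:
        cores = group[:]
        random.shuffle(cores)              # diversity within each region
        for core in cores:                 # contiguous tiles per group,
            mapping[tile] = core           # keeping heavy talkers close
            tile += 1
    return mapping

# Knowledge-seeded initial population instead of fully random mappings
population = [seeded_individual() for _ in range(50)]
print(population[0])
```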

  2. Modified stress intensity factor as a crack growth parameter applicable under large scale yielding conditions

    International Nuclear Information System (INIS)

    Yasuoka, Tetsuo; Mizutani, Yoshihiro; Todoroki, Akira

    2014-01-01

    High-temperature water stress corrosion cracking has high tensile stress sensitivity, and its growth rate has been evaluated using the stress intensity factor, a linear fracture mechanics parameter. Stress corrosion cracking mainly occurs and propagates around welded metals or heat-affected zones. These regions have complex residual stress distributions and yield strength distributions because of heat input effects. The authors previously reported that the stress intensity factor becomes inapplicable when steep residual stress distributions or yield strength distributions occur along the crack propagation path, because small-scale yielding conditions break down around those distributions. When the stress intensity factor is modified to account for these distributions, the modified stress intensity factor may be used for crack growth evaluation under large-scale yielding. The authors previously proposed a modified stress intensity factor incorporating the stress distribution or yield strength distribution in front of the crack, using the rate of change of the stress intensity factor and the yield strength. However, the applicable range of the modified stress intensity factor under large-scale yielding was not clarified. In this study, the range was investigated analytically by comparison with the J-integral solution. A three-point bending specimen with a parallel surface crack was adopted as the analytical model, and the stress intensity factor, the modified stress intensity factor, and the equivalent stress intensity factor derived from the J-integral were calculated and compared under large-scale yielding conditions. The modified stress intensity factor was closer to the equivalent stress intensity factor than the unmodified stress intensity factor. If deviation from the J-integral solution is acceptable up to 2%, the modified stress intensity factor is applicable up to 30% of the J-integral limit, while the stress intensity factor is applicable up to 10%. These results showed that
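    For reference, the "equivalent stress intensity factor derived from the J-integral" used as the benchmark above is conventionally obtained from the small-scale-yielding relation between J and K. The expression below is that textbook conversion, not the authors' modified-K definition.

```latex
% Textbook conversion between the J-integral and an equivalent stress
% intensity factor (linear-elastic limit), shown for reference only.
\begin{equation}
  K_J = \sqrt{E' J}, \qquad
  E' = \frac{E}{1-\nu^{2}} \ \text{(plane strain)}, \qquad
  E' = E \ \text{(plane stress)} .
\end{equation}
```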

  3. Effect of low dose gamma irradiation on onion yield: Large scale application

    International Nuclear Information System (INIS)

    Al-Oudat, M.

    1993-01-01

    Large-scale application of presowing gamma irradiation of onion seeds, bulblets and bulbs was performed in 1989, using doses of 10 Gy for seeds and 1 Gy for bulblets and bulbs. The doses were chosen on the basis of previous experiments. Reliable yield increases were obtained for seeds (19.3%), bulblets (18.9%) and bulbs (31.4%) of the red variety, and of 22.3% and 23.4% for seeds and bulbs of the white variety. (author). 2 tabs

  4. Generating descriptive visual words and visual phrases for large-scale image applications.

    Science.gov (United States)

    Zhang, Shiliang; Tian, Qi; Hua, Gang; Huang, Qingming; Gao, Wen

    2011-09-01

    Bag-of-visual-words (BoWs) representation has been applied to various problems in multimedia and computer vision. The basic idea is to represent images as visual documents composed of repeatable and distinctive visual elements, which are comparable to text words. Notwithstanding its great success and wide adoption, the visual vocabulary created from single-image local descriptors is often not as effective as desired. In this paper, descriptive visual words (DVWs) and descriptive visual phrases (DVPs) are proposed as the visual correspondences to text words and phrases, where visual phrases refer to frequently co-occurring visual word pairs. Since images are the carriers of visual objects and scenes, a descriptive visual element set can be composed of the visual words and their combinations that are effective in representing certain visual objects or scenes. Based on this idea, a general framework is proposed for generating DVWs and DVPs for image applications. In a large-scale image database containing 1,506 object and scene categories, the visual words and visual word pairs descriptive of certain objects or scenes are identified and collected as the DVWs and DVPs. Experiments show that the DVWs and DVPs are informative and descriptive and, thus, more comparable with text words than classic visual words. We apply the identified DVWs and DVPs in several applications, including large-scale near-duplicate image retrieval, image search re-ranking, and object recognition. The combination of DVWs and DVPs performs better than the state of the art in large-scale near-duplicate image retrieval in terms of accuracy, efficiency and memory consumption. The proposed image search re-ranking algorithm, DWPRank, outperforms the state-of-the-art algorithm by 12.4% in mean average precision while being about 11 times faster.
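
    A hedged sketch of the underlying idea (not the paper's DVW/DVP selection procedure): quantize local descriptors into visual words, then count word pairs that co-occur within a spatial radius as visual phrase candidates. The descriptor data, vocabulary size and radius below are placeholders.

```python
import numpy as np
from itertools import combinations
from collections import Counter
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
descriptors = rng.normal(size=(500, 128))       # stand-in for SIFT descriptors
positions = rng.uniform(0, 256, size=(500, 2))  # keypoint locations

# Visual words: k-means quantization of the local descriptors.
words = KMeans(n_clusters=50, n_init=4, random_state=0).fit_predict(descriptors)

# Visual phrase candidates: word pairs co-occurring within a spatial radius.
RADIUS = 20.0
phrase_counts = Counter()
for i, j in combinations(range(len(words)), 2):
    if np.linalg.norm(positions[i] - positions[j]) < RADIUS:
        phrase_counts[tuple(sorted((int(words[i]), int(words[j]))))] += 1

print(phrase_counts.most_common(5))   # frequent pairs ~ candidate phrases
```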

  5. Assessment of the technology required to develop photovoltaic power system for large scale national energy applications

    Science.gov (United States)

    Lutwack, R.

    1974-01-01

    A technical assessment of a program to develop photovoltaic power system technology for large-scale national energy applications was made by analyzing and judging the alternative candidate photovoltaic systems and development tasks. A program plan was constructed based on achieving the 10-year objective of establishing the practicability of large-scale terrestrial power installations using photovoltaic conversion arrays costing less than $0.50/peak W. Guidelines for the tasks of a 5-year program were derived from a set of 5-year objectives deduced from the 10-year objective. This report indicates the need for an early emphasis on the development of the single-crystal Si photovoltaic system for commercial utilization; a production goal of 5 x 10^8 peak W/year of $0.50 cells was projected for the year 1985. Development of the other photovoltaic conversion systems was assigned a longer-range role. The status of the technology developments and the applicability of solar arrays in particular power installations, ranging from houses to central power plants, were scheduled to be verified in a series of demonstration projects. The budget recommended for the first 5-year phase of the program is $268.5M.

  6. Applicability of laboratory data to large scale tests under dynamic loading conditions

    International Nuclear Information System (INIS)

    Kussmaul, K.; Klenk, A.

    1993-01-01

    The analysis of dynamic loading and subsequent fracture must be based on reliable data for the loading and deformation history. This paper describes an investigation into the applicability of parameters determined by means of small-scale laboratory tests to large-scale tests. The following steps were carried out: (1) Determination of crack initiation by means of strain gauges applied in the crack tip field of compact tension specimens. (2) Determination of dynamic crack resistance curves of CT specimens using a modified key-curve technique, with the key curves determined by dynamic finite element analyses. (3) Determination of strain-rate-dependent stress-strain relationships for the finite element simulation of small-scale and large-scale tests. (4) Analysis of the loading history for small-scale tests with the aid of experimental data and finite element calculations. (5) Testing of dynamically loaded tensile specimens taken as strips from ferritic steel pipes with thicknesses of 13 mm and 18 mm, respectively; the strips contained slits and surface cracks. (6) Fracture mechanics analyses of the above-mentioned tests and of wide plate tests. The wide plates (960 x 608 x 40 mm^3) had been tested in a propellant-driven 12 MN dynamic testing facility. For calculating the fracture mechanics parameters of both tests, a dynamic finite element simulation considering the dynamic material behaviour was employed. The finite element analyses showed good agreement with the simulated tests, a prerequisite for obtaining valid critical J-integral values. Generally, the results of the large-scale tests were conservative. 19 refs., 20 figs., 4 tabs

  7. Symbiotic Sensing for Energy-Intensive Tasks in Large-Scale Mobile Sensing Applications.

    Science.gov (United States)

    Le, Duc V; Nguyen, Thuong; Scholten, Hans; Havinga, Paul J M

    2017-11-29

    Energy consumption is a critical performance and user-experience metric when developing mobile sensing applications, especially given the significant growth in the number of sensing applications in recent years. Conventional sensing paradigms such as opportunistic sensing and participatory sensing were proposed a decade ago, when mobile applications were not yet popular and most mobile operating systems were single-tasking; they do not exploit the relationships among concurrent applications for energy-intensive tasks. In this paper, inspired by social relationships among living creatures in nature, we propose a symbiotic sensing paradigm that can conserve energy while maintaining performance equivalent to existing paradigms. The key idea is that sensing applications should cooperatively perform common tasks to avoid acquiring the same resources multiple times. By doing so, this sensing paradigm executes sensing tasks with very little extra resource consumption and, consequently, extends battery life. To evaluate and compare the symbiotic sensing paradigm with the existing ones, we develop mathematical models of the completion probability and estimated energy consumption. Quantitative evaluation using various parameters obtained from real datasets indicates that symbiotic sensing outperforms opportunistic sensing and participatory sensing in large-scale sensing applications, such as road condition monitoring, air pollution monitoring, and city noise monitoring.
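
    A back-of-envelope rendering of the energy argument with assumed numbers (the paper's models of completion probability and energy are more detailed): n applications each need a sensor sample per period; symbiotic sensing acquires once and shares the result instead of sampling n times.

```python
def energy_per_period(n_apps, e_sample=120.0, e_share=2.0, symbiotic=False):
    """Energy (mJ) spent per sensing period; all constants are assumptions."""
    if symbiotic:
        return e_sample + n_apps * e_share   # one acquisition, shared result
    return n_apps * e_sample                 # each app samples independently

for n in (1, 3, 5):
    a = energy_per_period(n)
    s = energy_per_period(n, symbiotic=True)
    print(f"{n} apps: independent {a:.0f} mJ, symbiotic {s:.0f} mJ "
          f"({a / s:.1f}x saving)")
```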

  8. Symbiotic Sensing for Energy-Intensive Tasks in Large-Scale Mobile Sensing Applications

    Directory of Open Access Journals (Sweden)

    Duc V. Le

    2017-11-01

    Full Text Available Energy consumption is a critical performance and user-experience metric when developing mobile sensing applications, especially given the significant growth in the number of sensing applications in recent years. Conventional sensing paradigms such as opportunistic sensing and participatory sensing were proposed a decade ago, when mobile applications were not yet popular and most mobile operating systems were single-tasking; they do not exploit the relationships among concurrent applications for energy-intensive tasks. In this paper, inspired by social relationships among living creatures in nature, we propose a symbiotic sensing paradigm that can conserve energy while maintaining performance equivalent to existing paradigms. The key idea is that sensing applications should cooperatively perform common tasks to avoid acquiring the same resources multiple times. By doing so, this sensing paradigm executes sensing tasks with very little extra resource consumption and, consequently, extends battery life. To evaluate and compare the symbiotic sensing paradigm with the existing ones, we develop mathematical models of the completion probability and estimated energy consumption. Quantitative evaluation using various parameters obtained from real datasets indicates that symbiotic sensing outperforms opportunistic sensing and participatory sensing in large-scale sensing applications, such as road condition monitoring, air pollution monitoring, and city noise monitoring.

  9. Symbiotic Sensing for Energy-Intensive Tasks in Large-Scale Mobile Sensing Applications

    Science.gov (United States)

    Scholten, Hans; Havinga, Paul J. M.

    2017-01-01

    Energy consumption is a critical performance and user-experience metric when developing mobile sensing applications, especially given the significant growth in the number of sensing applications in recent years. Conventional sensing paradigms such as opportunistic sensing and participatory sensing were proposed a decade ago, when mobile applications were not yet popular and most mobile operating systems were single-tasking; they do not exploit the relationships among concurrent applications for energy-intensive tasks. In this paper, inspired by social relationships among living creatures in nature, we propose a symbiotic sensing paradigm that can conserve energy while maintaining performance equivalent to existing paradigms. The key idea is that sensing applications should cooperatively perform common tasks to avoid acquiring the same resources multiple times. By doing so, this sensing paradigm executes sensing tasks with very little extra resource consumption and, consequently, extends battery life. To evaluate and compare the symbiotic sensing paradigm with the existing ones, we develop mathematical models of the completion probability and estimated energy consumption. Quantitative evaluation using various parameters obtained from real datasets indicates that symbiotic sensing outperforms opportunistic sensing and participatory sensing in large-scale sensing applications, such as road condition monitoring, air pollution monitoring, and city noise monitoring. PMID:29186037

  10. Fast Bound Methods for Large Scale Simulation with Application for Engineering Optimization

    Science.gov (United States)

    Patera, Anthony T.; Peraire, Jaime; Zang, Thomas A. (Technical Monitor)

    2002-01-01

    In this work, we have focused on fast bound methods for large-scale simulation with application to engineering optimization. The emphasis is on the development of techniques that provide both very fast turnaround and a certificate of fidelity; these attributes ensure that the results are indeed relevant to - and trustworthy within - the engineering context. The bound methodology which underlies this work has many different instantiations: finite element approximation, iterative solution techniques, and reduced-basis (parameter) approximation. In this grant we have, in fact, treated all three, but most of our effort has been concentrated on the first and third. We describe these below briefly - but with a pointer to an Appendix which describes, in some detail, the current "state of the art."

  11. Networking for large-scale science: infrastructure, provisioning, transport and application mapping

    International Nuclear Information System (INIS)

    Rao, Nageswara S; Carter, Steven M; Wu Qishi; Wing, William R; Zhu Mengxia; Mezzacappa, Anthony; Veeraraghavan, Malathi; Blondin, John M

    2005-01-01

    Large-scale science computations and experiments require unprecedented network capabilities in the form of large bandwidth and dynamically stable connections to support data transfers, interactive visualizations, and monitoring and steering operations. A number of component technologies dealing with the infrastructure, provisioning, transport and application mappings must be developed and/or optimized to achieve these capabilities. We present a brief account of the following technologies that contribute toward achieving these network capabilities: (a) DOE UltraScienceNet and NSF CHEETAH network testbeds that provide on-demand and scheduled dedicated network connections; (b) experimental results on transport protocols that achieve close to 100% utilization on dedicated 1 Gbps wide-area channels; (c) a scheme for optimally mapping a visualization pipeline onto a network to minimize the end-to-end delays; and (d) interconnect configurations and protocols that provide multiple Gbps flows from a Cray X1 to external hosts
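
    Item (c) can be made concrete with a toy model (illustrative numbers only; the paper's scheme is more general): pipeline stage i runs on node a[i] of a network path, assignments are monotone along the path, and moving data between stages costs volume/bandwidth per hop.

```python
from itertools import combinations_with_replacement

compute = [[4, 2, 9], [6, 3, 1], [5, 5, 2]]   # compute[stage][node], ms
volume = [10.0, 4.0]                          # MB between consecutive stages
bandwidth = 2.0                               # MB/ms per hop

def end_to_end_delay(assign):
    total = sum(compute[s][assign[s]] for s in range(len(assign)))
    for s in range(len(assign) - 1):
        hops = assign[s + 1] - assign[s]      # monotone along the path
        total += hops * volume[s] / bandwidth
    return total

# Enumerate all monotone stage-to-node assignments and pick the fastest.
best = min(combinations_with_replacement(range(3), 3), key=end_to_end_delay)
print(best, end_to_end_delay(best))
```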

  12. Networking for large-scale science: infrastructure, provisioning, transport and application mapping

    Energy Technology Data Exchange (ETDEWEB)

    Rao, Nageswara S [Computer Science and Mathematics Division, Oak Ridge National Laboratory, Oak Ridge, TN 37831 (United States); Carter, Steven M [Computer Science and Mathematics Division, Oak Ridge National Laboratory, Oak Ridge, TN 37831 (United States); Wu Qishi [Computer Science and Mathematics Division, Oak Ridge National Laboratory, Oak Ridge, TN 37831 (United States); Wing, William R [Computer Science and Mathematics Division, Oak Ridge National Laboratory, Oak Ridge, TN 37831 (United States); Zhu Mengxia [Department of Computer Science, Louisiana State University, Baton Rouge, LA 70803 (United States); Mezzacappa, Anthony [Physics Division, Oak Ridge National Laboratory, Oak Ridge, TN 37831 (United States); Veeraraghavan, Malathi [Department of Computer Science, University of Virginia, Charlottesville, VA 22904 (United States); Blondin, John M [Department of Physics, North Carolina State University, Raleigh, NC 27695 (United States)

    2005-01-01

    Large-scale science computations and experiments require unprecedented network capabilities in the form of large bandwidth and dynamically stable connections to support data transfers, interactive visualizations, and monitoring and steering operations. A number of component technologies dealing with the infrastructure, provisioning, transport and application mappings must be developed and/or optimized to achieve these capabilities. We present a brief account of the following technologies that contribute toward achieving these network capabilities: (a) DOE UltraScienceNet and NSF CHEETAH network testbeds that provide on-demand and scheduled dedicated network connections; (b) experimental results on transport protocols that achieve close to 100% utilization on dedicated 1 Gbps wide-area channels; (c) a scheme for optimally mapping a visualization pipeline onto a network to minimize the end-to-end delays; and (d) interconnect configurations and protocols that provide multiple Gbps flows from a Cray X1 to external hosts.

  13. Application of spectral Lanczos decomposition method to large scale problems arising in geophysics

    Energy Technology Data Exchange (ETDEWEB)

    Tamarchenko, T. [Western Atlas Logging Services, Houston, TX (United States)

    1996-12-31

    This paper presents an application of the Spectral Lanczos Decomposition Method (SLDM) to the numerical modeling of electromagnetic diffusion and elastic wave propagation in inhomogeneous media. SLDM approximates the action of a matrix function as a linear combination of basis vectors in a Krylov subspace. I applied the method to model electromagnetic fields in three dimensions and elastic waves in two dimensions. The finite-difference approximation of the spatial part of the differential operator reduces the initial boundary-value problem to a system of ordinary differential equations with respect to time. The solution to this system requires calculating exponential and sine/cosine functions of the stiffness matrices. Large-scale numerical examples are in good agreement with the theoretical error bounds and stability estimates given by Druskin and Knizhnerman (1987).
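
    A minimal sketch of the SLDM idea for a symmetric operator, assuming a small SPD test matrix (the paper's operators and functions are more involved): build a Krylov basis with the Lanczos recurrence, then evaluate the matrix function on the small tridiagonal projection.

```python
import numpy as np
from scipy.linalg import expm, norm

def lanczos_expm(A, b, m=30, t=1.0):
    """Approximate expm(-t*A) @ b from an m-dimensional Krylov subspace."""
    n = len(b)
    V = np.zeros((n, m))
    alpha, beta = np.zeros(m), np.zeros(m - 1)
    V[:, 0] = b / np.linalg.norm(b)
    for j in range(m):                      # three-term Lanczos recurrence
        w = A @ V[:, j]
        if j > 0:
            w -= beta[j - 1] * V[:, j - 1]
        alpha[j] = V[:, j] @ w
        w -= alpha[j] * V[:, j]
        if j < m - 1:
            beta[j] = np.linalg.norm(w)
            V[:, j + 1] = w / beta[j]
    T = np.diag(alpha) + np.diag(beta, 1) + np.diag(beta, -1)
    e1 = np.zeros(m); e1[0] = 1.0
    # f(A)b ~ ||b|| * V f(T) e1, with f evaluated on the small matrix T only.
    return np.linalg.norm(b) * (V @ (expm(-t * T) @ e1))

rng = np.random.default_rng(1)
M = rng.normal(size=(200, 200)); A = M @ M.T / 200   # SPD test matrix
b = rng.normal(size=200)
exact = expm(-A) @ b
print(norm(lanczos_expm(A, b) - exact) / norm(exact))  # small relative error
```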

  14. Application of parallel computing techniques to a large-scale reservoir simulation

    International Nuclear Information System (INIS)

    Zhang, Keni; Wu, Yu-Shu; Ding, Chris; Pruess, Karsten

    2001-01-01

    Even with the continual advances made in both computational algorithms and the computer hardware used in reservoir modeling studies, large-scale simulation of fluid and heat flow in heterogeneous reservoirs remains a challenge. The problem commonly arises from the intensive computational requirements of detailed modeling investigations of real-world reservoirs. This paper presents the application of a massively parallel version of the TOUGH2 code developed for performing large-scale field simulations. As an application example, the parallelized TOUGH2 code is applied to develop a three-dimensional unsaturated-zone numerical model simulating the flow of moisture, gas, and heat in the unsaturated zone of Yucca Mountain, Nevada, a potential repository for high-level radioactive waste. The modeling approach employs refined spatial discretization to represent the heterogeneous fractured tuffs of the system, using more than a million 3-D gridblocks. The problem of two-phase flow and heat transfer within the model domain leads to a total of 3,226,566 linear equations to be solved per Newton iteration. The simulation is conducted on a Cray T3E-900, a distributed-memory massively parallel computer. Simulation results indicate that the parallel computing technique, as implemented in the TOUGH2 code, is very efficient. The reliability and accuracy of the model results have been demonstrated by comparing them to those of small-scale (coarse-grid) models. These comparisons show that simulation results obtained with the refined grid provide more detailed predictions of future flow conditions at the site, aiding in the assessment of proposed repository performance

  15. Large-scale commercial applications of the in situ vitrification remediation technology

    International Nuclear Information System (INIS)

    Campbell, B.E.; Hansen, J.E.; McElroy, J.L.; Thompson, L.E.; Timmerman, C.L.

    1994-01-01

    The first large-scale commercial application of the innovative In Situ Vitrification (ISV) remediation technology was completed at the Parsons Chemical/ETM Enterprises Superfund site in Michigan in mid-1994. This project involved treating 4,800 tons of pesticide- and mercury-contaminated soil, and also included performance of the USEPA SITE Program demonstration test for the ISV technology. The Parsons project involved consolidation and staging of contaminated soil from widespread locations on and near the site. This paper presents a brief description of the ISV technology along with case-study-type information on these two sites and the performance of the ISV technology on them. The paper also reviews other remediation projects where ISV has been identified as a preferred remedy and where ISV is currently planned for use. These sites include soils contaminated with pesticides, dioxin, PCP, paint wastes, and a variety of heavy metals. This review of additional sites also includes a description of a planned remediation project in Australia involving radioactive mixed waste containing large amounts of plutonium, uranium, lead, beryllium, and metallic and other debris buried in limestone and dolomitic soil burial pits. Initial test work has been completed on this application, and preparations are now underway for pilot testing in Australia. This project will demonstrate the applicability of the ISV technology to the challenging application of buried mixed wastes

  16. Segmentation and fragmentation of melt jets due to generation of large-scale structures. Observation in low subcooling conditions

    International Nuclear Information System (INIS)

    Sugiyama, Ken-ichiro; Yamada, Tsuyoshi

    1999-01-01

    In order to clarify a mechanism of melt-jet breakup and fragmentation entirely different from the stripping mechanism, a series of experiments was carried out using molten tin jets of 100 grams with initial temperatures from 250 °C to 900 °C. Molten tin jets, with a small kinematic viscosity and a large thermal diffusivity, were used to observe breakup and fragmentation of melt jets enhanced both thermally and hydrodynamically. We observed jet columns with second-stage large-scale structures generated by the coalescence of the large-scale structures recognized in the field of fluid mechanics. At greater depth, segmentation of the jet columns between second-stage large-scale structures and fragmentation of the segmented jet columns were observed. It is reasonable to consider that the segmentation and fragmentation of the jet columns are caused by the boiling of water hydrodynamically entrained within the second-stage large-scale structures. (author)

  17. Canary in the coal mine: Historical oxygen decline in the Gulf of St. Lawrence due to large scale climate changes

    Science.gov (United States)

    Claret, M.; Galbraith, E. D.; Palter, J. B.; Gilbert, D.; Bianchi, D.; Dunne, J. P.

    2016-02-01

    The regional signature of anthropogenic climate change on the atmosphere and upper ocean is often difficult to discern from observational time series, dominated as they are by decadal climate variability. Here we argue that a long-term decline of dissolved oxygen concentrations observed in the Gulf of St. Lawrence (GoSL) is consistent with anthropogenic climate change. Oxygen concentrations in the GoSL have declined markedly since 1930, due primarily to an increase of oxygen-poor North Atlantic Central Waters relative to Labrador Current Waters (Gilbert et al. 2005). We compare these observations to a climate warming simulation using a very high-resolution global coupled ocean-atmosphere climate model. The numerical model (CM2.6), developed by the Geophysical Fluid Dynamics Laboratory, is strongly eddying and includes a biogeochemical module with dissolved oxygen. The warming scenario shows that oxygen in the GoSL decreases, and that the decrease is associated with changes in western boundary currents and wind patterns in the North Atlantic. We speculate that the large-scale changes behind the simulated decrease in GoSL oxygen have also been at play in the real world over the past century, although they are difficult to resolve in noisy atmospheric data.

  18. A large-scale application of the Kalman alignment algorithm to the CMS tracker

    International Nuclear Information System (INIS)

    Widl, E; Fruehwirth, R

    2008-01-01

    The Kalman alignment algorithm has been specifically developed to cope with the demands that arise from the specifications of the CMS Tracker. The algorithmic concept is based on the Kalman filter formalism and is designed to avoid the inversion of large matrices. Most notably, the algorithm strikes a balance between conventional global and local track-based alignment algorithms by restricting the computation of alignment parameters not only to alignable objects hit by the same track, but also to all other alignable objects that are significantly correlated. Nevertheless, this feature also comes with trade-offs: mechanisms are needed that determine which alignable objects are significantly correlated and that keep track of these correlations. Due to the large number of alignable objects involved at each update (at least compared to local alignment algorithms), the time spent retrieving and writing alignment parameters, as well as the required user memory, becomes a significant factor. The large-scale test presented here applies the Kalman alignment algorithm to the (misaligned) CMS Tracker barrel and demonstrates the feasibility of the algorithm in a realistic scenario. It is shown that both the computation time and the amount of required user memory are within reasonable bounds, given the available computing resources, and that the obtained results are satisfactory
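
    The building block of such an algorithm is the standard Kalman measurement update; the schematic below (not the CMS implementation) shows why only a small matrix is inverted per update: the residual covariance S has the dimension of the per-track measurement, not of the full parameter set.

```python
import numpy as np

def kalman_update(x, C, H, m, V):
    """x: alignment parameters, C: their covariance, H: derivatives,
    m: measurements from one track, V: measurement covariance."""
    r = m - H @ x                     # track residuals
    S = H @ C @ H.T + V               # residual covariance (small matrix)
    K = C @ H.T @ np.linalg.inv(S)    # gain: only the small S is inverted
    return x + K @ r, C - K @ H @ C

rng = np.random.default_rng(42)
true_misalignment = np.full(6, 0.1)   # toy 6-parameter alignment problem
x, C = np.zeros(6), np.eye(6)
for _ in range(50):                   # one update per simulated track
    H = rng.normal(size=(4, 6))
    m = H @ true_misalignment + 1e-3 * rng.normal(size=4)
    x, C = kalman_update(x, C, H, m, 1e-6 * np.eye(4))
print(x)                              # approaches 0.1 for each parameter
```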

  19. A simple large-scale synthesis of mesoporous In2O3 for gas sensing applications

    Science.gov (United States)

    Zhang, Su; Song, Peng; Yan, Huihui; Yang, Zhongxi; Wang, Qi

    2016-08-01

    In this paper, large-scale mesoporous In2O3 nanostructures were synthesized in high yield by a facile Lewis acid catalyzed furfural alcohol resin (FAR) template route. Their morphology and structure were characterized by X-ray diffraction (XRD), scanning electron microscopy (SEM), transmission electron microscopy (TEM), differential thermal and thermogravimetric analysis (DSC-TG) and the Brunauer-Emmett-Teller (BET) approach. The as-obtained mesoporous In2O3 nanostructures possess an excellent mesoporous network structure, which increases the contact area with the gases and is conducive to adsorption-desorption of gas on the In2O3 surface. The In2O3 particles and pores were both about 15 nm and very uniform. In gas-sensing measurements with target gases, the gas sensor based on the mesoporous In2O3 nanostructures showed a good response, short response-recovery times, and good selectivity and stability toward ethanol. These properties are attributed to the large specific surface area of the mesoporous structure. This synthetic method could serve as a new design concept for functional mesoporous nanomaterials and for mass production.

  20. Vedic division methodology for high-speed very large scale integration applications

    Directory of Open Access Journals (Sweden)

    Prabir Saha

    2014-02-01

    Full Text Available Transistor-level implementation of a division methodology using ancient Vedic mathematics is reported in this Letter. The potential of the 'Dhvajanka' (on top of the flag) formula was adopted from Vedic mathematics to implement such a divider for practical very large scale integration applications. The division methodology was implemented through half of the divisor bits instead of the actual divisor, together with subtraction and a little multiplication. The propagation delay and dynamic power consumption of the divider circuitry were minimised significantly by stage reduction through the Vedic division methodology. The functionality of the division algorithm was checked, and performance parameters such as propagation delay and dynamic power consumption were calculated, through SPICE (Spectre) simulations with 90 nm complementary metal oxide semiconductor technology. The propagation delay of the resulting (32 ÷ 16)-bit divider circuitry was only ∼300 ns, and it consumed ∼32.5 mW power for a layout area of 17.39 mm^2. By combining Boolean arithmetic with ancient Vedic mathematics, a substantial number of iterations were eliminated, yielding reductions of ∼47, ∼38 and ∼34% in delay and ∼34, ∼21 and ∼18% in power compared with the most commonly used architectures (e.g. digit-recurrence, Newton-Raphson and Goldschmidt).
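
    For reference, one of the baseline architectures named above, Newton-Raphson division, reduces to a short reciprocal iteration; a software sketch (not the Letter's transistor-level Vedic design):

```python
def nr_divide(n, d, iterations=5):
    """Approximate n/d for a divisor d scaled into (0.5, 1]."""
    x = 2.8235 - 1.8823 * d        # common linear seed for 1/d on (0.5, 1]
    for _ in range(iterations):
        x = x * (2.0 - d * x)      # quadratic convergence to 1/d
    return n * x

print(nr_divide(0.7, 0.9), 0.7 / 0.9)   # agree to double precision
```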

  1. Online Censoring for Large-Scale Regressions with Application to Streaming Big Data.

    Science.gov (United States)

    Berberidis, Dimitris; Kekatos, Vassilis; Giannakis, Georgios B

    2016-08-01

    On par with data-intensive applications, the sheer size of modern linear regression problems creates an ever-growing demand for efficient solvers. Fortunately, a significant percentage of the data accrued can be omitted while maintaining a certain quality of statistical inference with an affordable computational budget. This work introduces means of identifying and omitting less informative observations in an online and data-adaptive fashion. Given streaming data, the related maximum-likelihood estimator is sequentially found using first- and second-order stochastic approximation algorithms. These schemes are well suited when data are inherently censored or when the aim is to save communication overhead in decentralized learning setups. In a different operational scenario, the task of joint censoring and estimation is put forth to solve large-scale linear regressions in a centralized setup. Novel online algorithms are developed enjoying simple closed-form updates and provable (non)asymptotic convergence guarantees. To attain desired censoring patterns and levels of dimensionality reduction, thresholding rules are investigated too. Numerical tests on real and synthetic datasets corroborate the efficacy of the proposed data-adaptive methods compared to data-agnostic random projection-based alternatives.
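
    A schematic of the censoring idea for online least squares, with an assumed rule (skip an observation when its normalized innovation is small); the paper's first- and second-order schemes and their guarantees are more refined.

```python
import numpy as np

rng = np.random.default_rng(0)
p, tau, total = 5, 0.5, 10000
theta_true = rng.normal(size=p)
theta, P = np.zeros(p), 100.0 * np.eye(p)   # recursive least squares state
used = 0

for _ in range(total):
    x = rng.normal(size=p)
    y = x @ theta_true + 0.1 * rng.normal()
    innov = y - x @ theta
    scale = np.sqrt(x @ P @ x + 0.01)       # innovation scale (heuristic)
    if abs(innov) / scale < tau:
        continue                            # censor: barely informative
    used += 1
    k = P @ x / (1.0 + x @ P @ x)           # RLS gain
    theta = theta + k * innov
    P = P - np.outer(k, x @ P)

print(f"updated on {used / total:.0%} of the stream, "
      f"error {np.linalg.norm(theta - theta_true):.4f}")
```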

  2. Commercial applications of large-scale Research and Development computer simulation technologies

    International Nuclear Information System (INIS)

    Kuok Mee Ling; Pascal Chen; Wen Ho Lee

    1998-01-01

    The potential commercial applications of two large-scale R and D computer simulation technologies are presented. One technology is based on the numerical solution of the hydrodynamics equations and is embodied in the two-dimensional Eulerian code EULE2D, which solves the hydrodynamic equations with various models for the equation of state (EOS), constitutive relations and fracture mechanics. EULE2D is an R and D code originally developed to design and analyze conventional munitions for anti-armor penetration, such as shaped charges, explosively formed projectiles, and kinetic energy rods. Simulated results agree very well with actual experiments. A commercial application presented here is the design and simulation of shaped charges for oil and gas well bore perforation. The other R and D simulation technology is based on the numerical solution of Maxwell's partial differential equations of electromagnetics in space and time, and is implemented in the three-dimensional code FDTD-SPICE, which solves Maxwell's equations in the time domain with finite differences in the three spatial dimensions and calls SPICE for information when nonlinear active devices are involved. The FDTD method has been used in radar cross-section modeling of military aircraft and many other electromagnetic applications. The coupling of the FDTD method with SPICE, a popular circuit and device simulation program, provides a powerful tool for the simulation and design of microwave and millimeter-wave circuits containing nonlinear active semiconductor devices. A commercial application of FDTD-SPICE presented here is the simulation of a two-element active antenna system. The simulation results and the experimental measurements are in excellent agreement. (Author)
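
    The FDTD core is a leapfrog update of E and H on a staggered grid; a textbook one-dimensional sketch in normalized units (not the FDTD-SPICE code itself):

```python
import numpy as np

nz, steps = 400, 600
Ex, Hy = np.zeros(nz), np.zeros(nz)
S = 0.5                                    # Courant number (c*dt/dz)

for n in range(steps):
    Hy[:-1] += S * (Ex[1:] - Ex[:-1])      # update H from the curl of E
    Ex[1:] += S * (Hy[1:] - Hy[:-1])       # update E from the curl of H
    Ex[nz // 4] += np.exp(-((n - 60) / 20.0) ** 2)   # soft Gaussian source

print(float(np.max(np.abs(Ex))))           # pulse propagating on the grid
```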

  3. Supervised Outlier Detection in Large-Scale MVS Point Clouds for 3D City Modeling Applications

    Science.gov (United States)

    Stucker, C.; Richard, A.; Wegner, J. D.; Schindler, K.

    2018-05-01

    We propose to use a discriminative classifier for outlier detection in large-scale point clouds of cities generated via multi-view stereo (MVS) from densely acquired images. What makes outlier removal hard are the varying distributions of inliers and outliers across a scene. Heuristic outlier removal using a specific feature that encodes point distribution often delivers unsatisfying results: although most outliers can be identified correctly (high recall), many inliers are erroneously removed too (low precision). This aggravates 3D object reconstruction due to missing data. We thus propose to discriminatively learn class-specific distributions directly from the data to achieve high precision. We apply a standard Random Forest classifier that infers a binary label (inlier or outlier) for each 3D point in the raw, unfiltered point cloud, and test two approaches for training. In the first, non-semantic approach, features are extracted without considering the semantic interpretation of the 3D points; the trained model approximates the average distribution of inliers and outliers across all semantic classes. Second, semantic interpretation is incorporated into the learning process, i.e. we train separate inlier-outlier classifiers per semantic class (building facades, roof, ground, vegetation, fields, and water). The performance of learned filtering is evaluated on several large SfM point clouds of cities. The results confirm our underlying assumption that discriminatively learning inlier-outlier distributions improves precision over global heuristics by up to ≈12 percentage points. Moreover, semantically informed filtering that models class-specific distributions further improves precision by up to ≈10 percentage points, removing very isolated building, roof, and water points while preserving inliers on building facades and vegetation.
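
    A sketch of the supervised filtering step with scikit-learn; the feature set and labels below are synthetic placeholders, whereas the paper extracts per-point geometric features from the MVS cloud and trains per-class models.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 5000
features = rng.normal(size=(n, 8))     # e.g. local density, planarity, color
labels = (features[:, 0] + 0.5 * rng.normal(size=n) > 1.2).astype(int)  # 1 = outlier

X_tr, X_te, y_tr, y_te = train_test_split(features, labels, random_state=0)
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_tr, y_tr)

keep = clf.predict(X_te) == 0          # mask of points kept as inliers
print("held-out accuracy:", clf.score(X_te, y_te))
```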

  4. A novel artificial fish swarm algorithm for solving large-scale reliability-redundancy application problem.

    Science.gov (United States)

    He, Qiang; Hu, Xiangtao; Ren, Hong; Zhang, Hongqi

    2015-11-01

    A novel artificial fish swarm algorithm (NAFSA) is proposed for solving the large-scale reliability-redundancy allocation problem (RAP). In NAFSA, the social behaviors of the fish swarm are classified into three types: foraging behavior, reproductive behavior, and random behavior. The foraging behavior employs two position-updating strategies, while selection and crossover operators define the reproductive ability of an artificial fish. For the random behavior, which is essentially a mutation strategy, a basic cloud generator is used as the mutation operator. Finally, numerical results for four benchmark problems and a large-scale RAP are reported and compared. NAFSA shows good performance in terms of computational accuracy and computational efficiency for large-scale RAPs. Copyright © 2015 ISA. Published by Elsevier Ltd. All rights reserved.
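
    A highly simplified rendering of the three behaviors on a toy continuous maximization problem (the actual NAFSA adds the cloud-model mutation operator and the RAP's reliability constraints):

```python
import random
random.seed(0)

def fitness(x):                  # toy objective standing in for reliability
    return -sum((xi - 0.7) ** 2 for xi in x)

def forage(fish, visual=0.2):    # try a nearby point, move only if better
    trial = [xi + random.uniform(-visual, visual) for xi in fish]
    return trial if fitness(trial) > fitness(fish) else fish

def reproduce(a, b):             # crossover of two good fish
    return [random.choice(pair) for pair in zip(a, b)]

def random_move(fish, scale=0.05):   # mutation-like random behavior
    return [xi + random.gauss(0, scale) for xi in fish]

swarm = [[random.random() for _ in range(4)] for _ in range(20)]
for _ in range(200):
    swarm = [forage(f) for f in swarm]
    swarm.sort(key=fitness, reverse=True)
    swarm[-1] = reproduce(swarm[0], swarm[1])   # replace the worst fish
    swarm[-2] = random_move(swarm[0])
print(max(fitness(f) for f in swarm))           # approaches 0 (the optimum)
```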

  5. Application of a CFD based containment model to different large-scale hydrogen distribution experiments

    International Nuclear Information System (INIS)

    Visser, D.C.; Siccama, N.B.; Jayaraju, S.T.; Komen, E.M.J.

    2014-01-01

    Highlights: • A CFD based model developed in ANSYS-FLUENT for simulating the distribution of hydrogen in the containment of a nuclear power plant during a severe accident is validated against four large-scale experiments. • The successive formation and mixing of a stratified gas-layer in experiments performed in the THAI and PANDA facilities are predicted well by the CFD model. • The pressure evolution and related condensation rate during different mixed convection flow conditions in the TOSQAN facility are predicted well by the CFD model. • The results give confidence in the general applicability of the CFD model and model settings. - Abstract: In the event of core degradation during a severe accident in water-cooled nuclear power plants (NPPs), large amounts of hydrogen are generated that may be released into the reactor containment. As the hydrogen mixes with the air in the containment, it can form a flammable mixture. Upon ignition it can damage relevant safety systems and put the integrity of the containment at risk. Despite the installation of mitigation measures, it has been recognized that the temporary existence of combustible or explosive gas clouds cannot be fully excluded during certain postulated accident scenarios. The distribution of hydrogen in the containment and mitigation of the risk are, therefore, important safety issues for NPPs. Complementary to lumped parameter code modelling, Computational Fluid Dynamics (CFD) modelling is needed for the detailed assessment of the hydrogen risk in the containment and for the optimal design of hydrogen mitigation systems in order to reduce this risk as far as possible. The CFD model applied by NRG makes use of the well-developed basic features of the commercial CFD package ANSYS-FLUENT. This general purpose CFD package is complemented with specific user-defined sub-models required to capture the relevant thermal-hydraulic phenomena in the containment during a severe accident as well as the effect of

  6. Application of a CFD based containment model to different large-scale hydrogen distribution experiments

    Energy Technology Data Exchange (ETDEWEB)

    Visser, D.C., E-mail: visser@nrg.eu; Siccama, N.B.; Jayaraju, S.T.; Komen, E.M.J.

    2014-10-15

    Highlights: • A CFD based model developed in ANSYS-FLUENT for simulating the distribution of hydrogen in the containment of a nuclear power plant during a severe accident is validated against four large-scale experiments. • The successive formation and mixing of a stratified gas-layer in experiments performed in the THAI and PANDA facilities are predicted well by the CFD model. • The pressure evolution and related condensation rate during different mixed convection flow conditions in the TOSQAN facility are predicted well by the CFD model. • The results give confidence in the general applicability of the CFD model and model settings. - Abstract: In the event of core degradation during a severe accident in water-cooled nuclear power plants (NPPs), large amounts of hydrogen are generated that may be released into the reactor containment. As the hydrogen mixes with the air in the containment, it can form a flammable mixture. Upon ignition it can damage relevant safety systems and put the integrity of the containment at risk. Despite the installation of mitigation measures, it has been recognized that the temporary existence of combustible or explosive gas clouds cannot be fully excluded during certain postulated accident scenarios. The distribution of hydrogen in the containment and mitigation of the risk are, therefore, important safety issues for NPPs. Complementary to lumped parameter code modelling, Computational Fluid Dynamics (CFD) modelling is needed for the detailed assessment of the hydrogen risk in the containment and for the optimal design of hydrogen mitigation systems in order to reduce this risk as far as possible. The CFD model applied by NRG makes use of the well-developed basic features of the commercial CFD package ANSYS-FLUENT. This general purpose CFD package is complemented with specific user-defined sub-models required to capture the relevant thermal-hydraulic phenomena in the containment during a severe accident as well as the effect of

  7. Application of Anisotropy of Magnetic Susceptibility to large-scale fault kinematics: an evaluation

    Science.gov (United States)

    Casas, Antonio M.; Roman-Berdiel, Teresa; Marcén, Marcos; Oliva-Urcia, Belen; Soto, Ruth; Garcia-Lasanta, Cristina; Calvin, Pablo; Pocovi, Andres; Gil-Imaz, Andres; Pueyo-Anchuela, Oscar; Izquierdo-Llavall, Esther; Vernet, Eva; Santolaria, Pablo; Osacar, Cinta; Santanach, Pere; Corrado, Sveva; Invernizzi, Chiara; Aldega, Luca; Caricchi, Chiara; Villalain, Juan Jose

    2017-04-01

    be observed within the same fault zone, depending on the proximity to the core zone. The transition between them is usually defined by oblate fabrics, with the long and intermediate axes contained within the main foliation plane in SC-like structures. The faults studied in this work are located in Northeast Iberia; most of them were formed during the Late-Variscan fracturing stage and constitute first-order structures controlling the Mesozoic and Cenozoic evolution of the Iberian plate. They include (i) large-scale (Cameros-Demanda) and plurikilometric (Monroyo, Rastraculos) thrusts resulting from basement thrusting and Mesozoic basin inversion, and (ii) strike-slip to transpressional structures in the Iberian Chain (Río Grío and Daroca faults, Aragonian Branch) and the Catalonian Range (Vallès fault). Application of AMS in combination with structural analysis has given us deeper insight into the kinematics of these fault zones, namely allowing us to (i) accurately define the transport direction of Cenozoic thrusts (NNW to NE-SW for the studied E-W segments) and the flow directions of décollements, and to evaluate the representativity of small-scale structures linked to thrusting; (ii) assess the transpressional character of deformation of the main NW-SE and NE-SW Late-Variscan faults in NE Iberia during the Cenozoic (horizontal to intermediate-plunging transport directions); and (iii) define the strain partitioning between different thrust sheets and strike-slip faults, to finally establish the pattern of displacements in this intra-plate setting.

  8. Model of large scale man-machine systems with an application to vessel traffic control

    NARCIS (Netherlands)

    Wewerinke, P.H.; van der Ent, W.I.; ten Hove, D.

    1989-01-01

    Mathematical models are discussed to deal with complex large-scale man-machine systems such as vessel (air, road) traffic and process control systems. Only interrelationships between subsystems are assumed. Each subsystem is controlled by a corresponding human operator (HO). Because of the

  9. Maps4Science - National Roadmap for Large-Scale Research Facilities 2011 (NWO Application form)

    NARCIS (Netherlands)

    Van Oosterom, P.J.M.; Van der Wal, T.; De By, R.A.

    2011-01-01

    The Netherlands is historically known as one of the world's best-measured countries. It is continuing this tradition today with unequalled new datasets, such as the nationwide large-scale topographic map and our unique digital height map (nationwide coverage; ten very accurate 3D points for every Dutch m²)

  10. Unlocking biomarker discovery: large scale application of aptamer proteomic technology for early detection of lung cancer.

    Directory of Open Access Journals (Sweden)

    Rachel M Ostroff

    Full Text Available BACKGROUND: Lung cancer is the leading cause of cancer deaths worldwide. New diagnostics are needed to detect early-stage lung cancer because it may be cured with surgery; however, most cases are diagnosed too late for curative surgery. Here we present a comprehensive clinical biomarker study of lung cancer and the first large-scale clinical application of a new aptamer-based proteomic technology to discover blood protein biomarkers in disease. METHODOLOGY/PRINCIPAL FINDINGS: We conducted a multi-center case-control study in archived serum samples from 1,326 subjects from four independent studies of non-small cell lung cancer (NSCLC) in long-term tobacco-exposed populations. Sera were collected and processed under uniform protocols. Case sera were collected from 291 patients within 8 weeks of the first biopsy-proven lung cancer and prior to tumor removal by surgery. Control sera were collected from 1,035 asymptomatic study participants with ≥ 10 pack-years of cigarette smoking. We measured 813 proteins in each sample with a new aptamer-based proteomic technology, identified 44 candidate biomarkers, and developed a 12-protein panel (cadherin-1, CD30 ligand, endostatin, HSP90α, LRIG3, MIP-4, pleiotrophin, PRKCI, RGM-C, SCF-sR, sL-selectin, and YES) that discriminates NSCLC from controls with 91% sensitivity and 84% specificity in cross-validated training, and 89% sensitivity and 83% specificity in a separate verification set, with similar performance for early- and late-stage NSCLC. CONCLUSIONS/SIGNIFICANCE: This study is a significant advance in clinical proteomics in an area of high unmet clinical need. Our analysis exceeds the breadth and dynamic range of the proteome interrogated in previously published clinical studies of broad serum proteome profiling platforms, including mass spectrometry, antibody arrays, and autoantibody arrays. The sensitivity and specificity of our 12-biomarker panel improve upon published protein and gene expression panels.
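
    The panel-evaluation arithmetic can be illustrated with synthetic data of the same shape as the cohort (the real study uses the 12 measured proteins listed above, not simulated values):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_predict

rng = np.random.default_rng(0)
n_cases, n_controls, n_proteins = 291, 1035, 12
X = np.vstack([rng.normal(0.4, 1.0, (n_cases, n_proteins)),     # cases
               rng.normal(0.0, 1.0, (n_controls, n_proteins))]) # controls
y = np.r_[np.ones(n_cases), np.zeros(n_controls)]

# Cross-validated predictions for the whole cohort, then sensitivity
# (true-positive rate) and specificity (true-negative rate).
pred = cross_val_predict(LogisticRegression(max_iter=1000), X, y, cv=10)
sens = ((pred == 1) & (y == 1)).sum() / (y == 1).sum()
spec = ((pred == 0) & (y == 0)).sum() / (y == 0).sum()
print(f"sensitivity {sens:.2f}, specificity {spec:.2f}")
```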

  11. A simple large-scale synthesis of mesoporous In{sub 2}O{sub 3} for gas sensing applications

    Energy Technology Data Exchange (ETDEWEB)

    Zhang, Su; Song, Peng, E-mail: mse_songp@ujn.edu.cn; Yan, Huihui; Yang, Zhongxi; Wang, Qi, E-mail: mse_wangq@ujn.edu.cn

    2016-08-15

    Graphical abstract: Large-scale mesoporous In{sub 2}O{sub 3} nanostructures for gas-sensing applications were successfully fabricated via a facile Lewis acid catalyzed furfural alcohol resin template route. - Highlights: • Mesoporous In{sub 2}O{sub 3} nanostructures have been successfully fabricated in high yield via a facile strategy. • The microstructure and formation mechanism of the mesoporous In{sub 2}O{sub 3} nanostructures are discussed based on the experimental results. • The as-prepared In{sub 2}O{sub 3} samples exhibited high response, short response-recovery times and good selectivity to ethanol gas. - Abstract: In this paper, large-scale mesoporous In{sub 2}O{sub 3} nanostructures were synthesized in high yield by a facile Lewis acid catalyzed furfural alcohol resin (FAR) template route. Their morphology and structure were characterized by X-ray diffraction (XRD), scanning electron microscopy (SEM), transmission electron microscopy (TEM), differential thermal and thermogravimetric analysis (DSC-TG) and the Brunauer-Emmett-Teller (BET) approach. The as-obtained mesoporous In{sub 2}O{sub 3} nanostructures possess an excellent mesoporous network structure, which increases the contact area with the gases and is conducive to adsorption-desorption of gas on the In{sub 2}O{sub 3} surface. The In{sub 2}O{sub 3} particles and pores were both about 15 nm and very uniform. In gas-sensing measurements with target gases, the gas sensor based on the mesoporous In{sub 2}O{sub 3} nanostructures showed a good response, short response-recovery times, and good selectivity and stability toward ethanol. These properties are attributed to the large specific surface area of the mesoporous structure. This synthetic method could serve as a new design concept for functional mesoporous nanomaterials and for mass production.

  12. RELAP5 choked flow model and application to a large scale flow test

    International Nuclear Information System (INIS)

    Ransom, V.H.; Trapp, J.A.

    1980-01-01

    The RELAP5 code was used to simulate a large-scale choked flow test. The fluid system used in the test was modeled in RELAP5 using a uniform, but coarse, nodalization. The choked mass discharge rate was calculated using the RELAP5 choked flow model. The calculations were in good agreement with the test data, and the flow was calculated to be near thermal equilibrium

  13. Application of simplified models to CO2 migration and immobilization in large-scale geological systems

    KAUST Repository

    Gasda, Sarah E.

    2012-07-01

    Long-term stabilization of injected carbon dioxide (CO2) is an essential component of risk management for geological carbon sequestration operations. However, migration and trapping phenomena are inherently complex, involving processes that act over multiple spatial and temporal scales. One example involves centimeter-scale density instabilities in the dissolved CO2 region leading to large-scale convective mixing that can be a significant driver for CO2 dissolution. Another example is the potentially important effect of capillary forces, in addition to buoyancy and viscous forces, on the evolution of mobile CO2. Local capillary effects lead to a capillary transition zone, or capillary fringe, where both fluids are present in the mobile state. This small-scale effect may have a significant impact on large-scale plume migration as well as long-term residual and dissolution trapping. Computational models that can capture both large and small-scale effects are essential to predict the role of these processes on the long-term storage security of CO2 sequestration operations. Conventional modeling tools are unable to resolve sufficiently all of these relevant processes when modeling CO2 migration in large-scale geological systems. Herein, we present a vertically-integrated approach to CO2 modeling that employs upscaled representations of these subgrid processes. We apply the model to the Johansen formation, a prospective site for sequestration of Norwegian CO2 emissions, and explore the sensitivity of CO2 migration and trapping to subscale physics. Model results show the relative importance of different physical processes in large-scale simulations. The ability of models such as this to capture the relevant physical processes at large spatial and temporal scales is important for prediction and analysis of CO2 storage sites. © 2012 Elsevier Ltd.

  14. Incipient multiple fault diagnosis in real time with applications to large-scale systems

    International Nuclear Information System (INIS)

    Chung, H.Y.; Bien, Z.; Park, J.H.; Seon, P.H.

    1994-01-01

    By using a modified signed directed graph (SDG) together with distributed artificial neural networks and a knowledge-based system, a method of incipient multi-fault diagnosis is presented for large-scale physical systems with complex pipes and instrumentation such as valves, actuators, sensors, and controllers. The proposed method is designed so as to (1) make real-time incipient fault diagnosis possible for large-scale systems, (2) perform the fault diagnosis not only in the steady-state case but also in the transient case by using a concept of fault propagation time, which is newly adopted in the SDG model, (3) provide highly reliable diagnosis results together with an explanation capability for the diagnosed faults, as in an expert system, and (4) diagnose pipe damage such as leaking, breaks, or throttling. The method is applied to the diagnosis of a pressurizer in the Kori Nuclear Power Plant (NPP) unit 2 in Korea under a transient condition, and the reported results show satisfactory performance of the method for incipient multi-fault diagnosis of such a large-scale system in real time
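
    The SDG part of the method can be pictured as sign propagation along directed edges, with candidate root causes checked for consistency against observed deviations; a minimal sketch with invented plant variables (the paper adds fault propagation times, neural networks and a rule base):

```python
# Edge (u, v, s): a deviation of variable u drives variable v with sign s.
edges = [("leak", "pressure", -1),
         ("pressure", "level_reading", +1),
         ("heater_on", "pressure", +1)]

def propagate(root, root_sign=+1):
    """Predict the sign pattern caused by a single root-cause deviation."""
    signs, changed = {root: root_sign}, True
    while changed:
        changed = False
        for u, v, s in edges:
            if u in signs and v not in signs:
                signs[v] = signs[u] * s
                changed = True
    return signs

observed = {"pressure": -1, "level_reading": -1}
for cause in ("leak", "heater_on"):
    pred = propagate(cause)
    ok = all(pred.get(k) == v for k, v in observed.items())
    print(cause, "consistent" if ok else "inconsistent")  # leak is consistent
```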

  15. Analysis of the applicability of fracture mechanics on the basis of large scale specimen testing

    International Nuclear Information System (INIS)

    Brumovsky, M.; Polachova, H.; Sulc, J.; Anikovskij, V.; Dragunov, Y.; Rivkin, E.; Filatov, V.

    1988-01-01

    The paper deals with the verification of fracture mechanics calculations for WWER reactor pressure vessels by large-scale model testing performed on the large ZZ 8000 testing machine (maximum load of 80 MN) at the Skoda Concern. The results of testing a large set of large-scale test specimens with surface crack-type defects are presented. The nominal thickness of the specimens was 150 mm, with defect depths between 15 and 100 mm and testing temperatures between -30 and +80 °C (i.e., in the temperature interval of T_k0 ± 50 °C). Specimens at scales of 1:8 and 1:12 were also tested, as well as standard (CT and TPB) specimens. Comparison of the test results with calculations suggests some conservatism of the calculations (especially for small defects) based on linear elastic fracture mechanics according to the nuclear reactor pressure vessel codes, which use fracture mechanics values from J_IC testing. On the basis of the large-scale tests, the ''Defect Analysis Diagram'' was constructed and recommended for the brittle fracture assessment of reactor pressure vessels. (author). 7 figs., 2 tabs., 3 refs

  16. Applications of Data Assimilation to Analysis of the Ocean on Large Scales

    Science.gov (United States)

    Miller, Robert N.; Busalacchi, Antonio J.; Hackert, Eric C.

    1997-01-01

    It is commonplace to begin talks on this topic by noting that oceanographic data are too scarce and sparse to provide complete initial and boundary conditions for large-scale ocean models. Even considering the availability of remotely-sensed data such as radar altimetry from the TOPEX and ERS-1 satellites, a glance at a map of available subsurface data should convince most observers that this is still the case. Data are still too sparse for comprehensive treatment of interannual to interdecadal climate change through the use of models, since the new data sets have not been around for very long. In view of the dearth of data, we must note that the overall picture is changing rapidly. Recently, there have been a number of large scale ocean analysis and prediction efforts, some of which now run on an operational or at least quasi-operational basis, most notably the model based analyses of the tropical oceans. These programs are modeled on numerical weather prediction. Aside from the success of the global tide models, assimilation of data in the tropics, in support of prediction and analysis of seasonal to interannual climate change, is probably the area of large scale ocean modeling and data assimilation in which the most progress has been made. Climate change is a problem which is particularly suited to advanced data assimilation methods. Linear models are useful, and the linear theory can be exploited. For the most part, the data are sufficiently sparse that implementation of advanced methods is worthwhile. As an example of a large scale data assimilation experiment with a recent extensive data set, we present results of a tropical ocean experiment in which the Kalman filter was used to assimilate three years of altimetric data from Geosat into a coarsely resolved linearized long wave shallow water model. Since nonlinear processes dominate the local dynamic signal outside the tropics, subsurface dynamical quantities cannot be reliably inferred from surface height

  17. Designing and developing portable large-scale JavaScript web applications within the Experiment Dashboard framework

    CERN Multimedia

    CERN. Geneva

    2012-01-01

    Improvements in web browser performance and web standards compliance, as well as the availability of comprehensive JavaScript libraries, provide an opportunity to develop functionally rich yet intuitive web applications that allow users to access, render and analyse data in novel ways. However, the development of such large-scale JavaScript web applications presents new challenges, in particular with regard to code sustainability and team-based work. We present an approach that meets the challenges of large-scale JavaScript web application design and development, including client-side model-view-controller architecture, design patterns, and JavaScript libraries. Furthermore, we show how the approach leads naturally to the encapsulation of the data source as a web API, allowing applications to be easily ported to new data sources. The Experiment Dashboard framework is used for the development of applications for monitoring the distributed computing activities of virtual organisations on the Worldwide LHC Co...

  18. Designing and developing portable large-scale JavaScript web applications within the Experiment Dashboard framework

    CERN Document Server

    Andreeva, J; Karavakis, E; Kokoszkiewicz, L; Nowotka, M; Saiz, P; Tuckett, D

    2012-01-01

    Improvements in web browser performance and web standards compliance, as well as the availability of comprehensive JavaScript libraries, provide an opportunity to develop functionally rich yet intuitive web applications that allow users to access, render and analyse data in novel ways. However, the development of such large-scale JavaScript web applications presents new challenges, in particular with regard to code sustainability and team-based work. We present an approach that meets the challenges of large-scale JavaScript web application design and development, including client-side model-view-controller architecture, design patterns, and JavaScript libraries. Furthermore, we show how the approach leads naturally to the encapsulation of the data source as a web API, allowing applications to be easily ported to new data sources. The Experiment Dashboard framework is used for the development of applications for monitoring the distributed computing activities of virtual organisations on the Worldwide LHC Comp...

  19. Designing and developing portable large-scale JavaScript web applications within the Experiment Dashboard framework

    International Nuclear Information System (INIS)

    Andreeva, J; Dzhunov, I; Karavakis, E; Kokoszkiewicz, L; Nowotka, M; Saiz, P; Tuckett, D

    2012-01-01

    Improvements in web browser performance and web standards compliance, as well as the availability of comprehensive JavaScript libraries, provide an opportunity to develop functionally rich yet intuitive web applications that allow users to access, render and analyse data in novel ways. However, the development of such large-scale JavaScript web applications presents new challenges, in particular with regard to code sustainability and team-based work. We present an approach that meets the challenges of large-scale JavaScript web application design and development, including client-side model-view-controller architecture, design patterns, and JavaScript libraries. Furthermore, we show how the approach leads naturally to the encapsulation of the data source as a web API, allowing applications to be easily ported to new data sources. The Experiment Dashboard framework is used for the development of applications for monitoring the distributed computing activities of virtual organisations on the Worldwide LHC Computing Grid. We demonstrate the benefits of the approach for large-scale JavaScript web applications in this context by examining the design of several Experiment Dashboard applications for data processing, data transfer and site status monitoring, and by showing how they have been ported for different virtual organisations and technologies.

  20. Designing and developing portable large-scale JavaScript web applications within the Experiment Dashboard framework

    Science.gov (United States)

    Andreeva, J.; Dzhunov, I.; Karavakis, E.; Kokoszkiewicz, L.; Nowotka, M.; Saiz, P.; Tuckett, D.

    2012-12-01

    Improvements in web browser performance and web standards compliance, as well as the availability of comprehensive JavaScript libraries, provide an opportunity to develop functionally rich yet intuitive web applications that allow users to access, render and analyse data in novel ways. However, the development of such large-scale JavaScript web applications presents new challenges, in particular with regard to code sustainability and team-based work. We present an approach that meets the challenges of large-scale JavaScript web application design and development, including client-side model-view-controller architecture, design patterns, and JavaScript libraries. Furthermore, we show how the approach leads naturally to the encapsulation of the data source as a web API, allowing applications to be easily ported to new data sources. The Experiment Dashboard framework is used for the development of applications for monitoring the distributed computing activities of virtual organisations on the Worldwide LHC Computing Grid. We demonstrate the benefits of the approach for large-scale JavaScript web applications in this context by examining the design of several Experiment Dashboard applications for data processing, data transfer and site status monitoring, and by showing how they have been ported for different virtual organisations and technologies.
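
    The portability claim in the record above rests on a simple structural idea: the application talks to its data source only through a narrow client abstraction, so porting to a new data source means writing one adapter. A minimal sketch of that pattern follows, in Python rather than JavaScript purely to keep one language across the sketches in this section; the class names and endpoint are hypothetical, not the Experiment Dashboard's actual interfaces:

        from abc import ABC, abstractmethod
        import json
        from urllib.request import urlopen

        class DataSource(ABC):
            """Narrow interface the application is written against."""
            @abstractmethod
            def fetch_jobs(self, site: str) -> list:
                ...

        class HttpApiSource(DataSource):
            """Adapter for one concrete web API (hypothetical endpoint)."""
            def __init__(self, base_url: str):
                self.base_url = base_url
            def fetch_jobs(self, site: str) -> list:
                with urlopen(f"{self.base_url}/jobs?site={site}") as resp:
                    return json.load(resp)

        class FixtureSource(DataSource):
            """Drop-in replacement for tests or a different back-end."""
            def __init__(self, records: list):
                self.records = records
            def fetch_jobs(self, site: str) -> list:
                return [r for r in self.records if r.get("site") == site]

        def render_summary(source: DataSource, site: str) -> str:
            # The view/controller layer never knows which back-end it uses.
            jobs = source.fetch_jobs(site)
            return f"{site}: {len(jobs)} jobs"

        if __name__ == "__main__":
            demo = FixtureSource([{"site": "CERN", "id": 1}, {"site": "CERN", "id": 2}])
            print(render_summary(demo, "CERN"))  # -> "CERN: 2 jobs"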

  1. Large-Scale Demonstration of Liquid Hydrogen Storage with Zero Boiloff for In-Space Applications

    Science.gov (United States)

    Hastings, L. J.; Bryant, C. B.; Flachbart, R. H.; Holt, K. A.; Johnson, E.; Hedayat, A.; Hipp, B.; Plachta, D. W.

    2010-01-01

    Cryocooler and passive insulation technology advances have substantially improved prospects for zero-boiloff cryogenic storage. Therefore, a cooperative effort by NASA's Ames Research Center, Glenn Research Center, and Marshall Space Flight Center (MSFC) was implemented to develop zero-boiloff concepts for in-space cryogenic storage. Described herein is one program element: a large-scale, zero-boiloff demonstration using the MSFC multipurpose hydrogen test bed (MHTB). A commercial cryocooler was interfaced with an existing MHTB spray bar mixer and insulation system in a manner that enabled a balance between incoming and extracted thermal energy.

  2. Role of large-scale permeability measurements in fractured rock and their application at Stripa

    International Nuclear Information System (INIS)

    Witherspoon, P.A.; Wilson, C.R.; Long, J.C.S.; DuBois, A.O.; Gale, J.E.; McPherson, M.

    1979-10-01

    Completion of the macropermeability experiment will provide: (i) a direct, in situ measurement of the permeability of 10^5 to 10^6 m^3 of rock; (ii) a potential method for confirming the analysis of a series of small scale permeability tests performed in surface and underground boreholes; (iii) a better understanding of the effect of open borehole zone length on pressure measurement; (iv) increased volume in fractured rock; (v) a basis for evaluating the ventilation technique for flow measurement in large scale testing of low permeability rocks

  3. The application of liquid air energy storage for large scale long duration solutions to grid balancing

    Science.gov (United States)

    Brett, Gareth; Barnett, Matthew

    2014-12-01

    Liquid Air Energy Storage (LAES) provides large scale, long duration energy storage at the point of demand in the 5 MW/20 MWh to 100 MW/1,000 MWh range. LAES combines mature components from the industrial gas and electricity industries assembled in a novel process and is one of the few storage technologies that can be delivered at large scale, with no geographical constraints. The system uses no exotic materials or scarce resources and all major components have a proven lifetime of 25+ years. The system can also integrate low grade waste heat to increase power output. Founded in 2005, Highview Power Storage, is a UK based developer of LAES. The company has taken the concept from academic analysis, through laboratory testing, and in 2011 commissioned the world's first fully integrated system at pilot plant scale (300 kW/2.5 MWh) hosted at SSE's (Scottish & Southern Energy) 80 MW Biomass Plant in Greater London which was partly funded by a Department of Energy and Climate Change (DECC) grant. Highview is now working with commercial customers to deploy multi MW commercial reference plants in the UK and abroad.

  4. Large-Scale medical image analytics: Recent methodologies, applications and Future directions.

    Science.gov (United States)

    Zhang, Shaoting; Metaxas, Dimitris

    2016-10-01

    Despite the ever-increasing amount and complexity of annotated medical image data, the development of large-scale medical image analysis algorithms has not kept pace with the need for methods that bridge the semantic gap between images and diagnoses. The goal of this position paper is to discuss and explore innovative and large-scale data science techniques in medical image analytics, which will benefit clinical decision-making and facilitate efficient medical data management. In particular, we advocate that the scale of image retrieval systems should be increased significantly, to the point at which interactive systems become effective for knowledge discovery in potentially large databases of medical images. For clinical relevance, such systems should return results in real-time, incorporate expert feedback, and be able to cope with the size, quality, and variety of the medical images and their associated metadata for a particular domain. The design, development, and testing of such a framework can significantly impact interactive mining in medical image databases that are growing rapidly in size and complexity, and enable novel methods of analysis at much larger scales in an efficient, integrated fashion.

  5. Topology assisted self-organization of colloidal nanoparticles: application to 2D large-scale nanomastering

    Directory of Open Access Journals (Sweden)

    Hind Kadiri

    2014-08-01

    Our aim was to elaborate a novel method for fully controllable large-scale nanopatterning. We investigated the influence of the surface topology, i.e., a pre-pattern of hydrogen silsesquioxane (HSQ) posts, on the self-organization of polystyrene (PS) beads dispersed over a large surface. Depending on the post size and spacing, long-range ordering of the self-organized polystyrene beads is observed where guide posts are used, leading to a single-crystal structure. Topology-assisted self-organization has proved to be one of the solutions for obtaining large-scale ordering. Besides post size and spacing, the colloidal concentration and the nature of the solvent were found to have a significant effect on the self-organization of the PS beads. Scanning electron microscopy and associated Fourier transform analysis were used to characterize the morphology of the ordered surfaces. Finally, the production of silicon molds is demonstrated by using the beads as a template for dry etching.

  6. Caught you: threats to confidentiality due to the public release of large-scale genetic data sets.

    Science.gov (United States)

    Wjst, Matthias

    2010-12-29

    Large-scale genetic data sets are frequently shared with other research groups and even released on the Internet to allow for secondary analysis. Study participants are usually not informed about such data sharing because data sets are assumed to be anonymous after stripping off personal identifiers. The assumption of anonymity of genetic data sets, however, is tenuous because genetic data are intrinsically self-identifying. Two types of re-identification are possible: the "Netflix" type and the "profiling" type. The "Netflix" type needs another small genetic data set, usually with fewer than 100 SNPs but including a personal identifier. This second data set might originate from another clinical examination, a study of leftover samples or forensic testing. When merged with the primary, unidentified set it will re-identify all samples of that individual. Even with no second data set at hand, a "profiling" strategy can be developed to extract as much information as possible from a sample collection. Starting with the identification of ethnic subgroups along with predictions of body characteristics and diseases, the asthma kids case is used as a real-life example to illustrate that approach. Depending on the degree of supplemental information, there is a good chance that at least a few individuals can be identified from an anonymized data set. Any re-identification, however, may potentially harm study participants because it will release individual genetic disease risks to the public.
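
    Computationally, the "Netflix"-type attack described above is plain record linkage: a small identified SNP panel is scored for genotype concordance against every sample in the anonymized set. A toy sketch with made-up data follows (the panel size and threshold are illustrative, not taken from the paper):

        # Toy illustration of "Netflix"-type re-identification by SNP concordance.
        anonymized = {                      # study data, identifiers stripped
            "sample_01": [0, 1, 2, 0, 1, 2, 2, 0],
            "sample_02": [2, 2, 0, 1, 0, 1, 0, 2],
        }
        identified = ("Jane Doe", [0, 1, 2, 0, 1, 2, 2, 0])  # e.g. a forensic record

        def concordance(g1, g2):
            """Fraction of loci with identical genotype calls."""
            return sum(a == b for a, b in zip(g1, g2)) / len(g1)

        name, panel = identified
        best = max(anonymized, key=lambda s: concordance(anonymized[s], panel))
        if concordance(anonymized[best], panel) > 0.95:   # illustrative threshold
            print(f"{name} links to {best}")              # -> Jane Doe links to sample_01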

  7. Caught you: threats to confidentiality due to the public release of large-scale genetic data sets

    Directory of Open Access Journals (Sweden)

    Wjst Matthias

    2010-12-01

    Background: Large-scale genetic data sets are frequently shared with other research groups and even released on the Internet to allow for secondary analysis. Study participants are usually not informed about such data sharing because data sets are assumed to be anonymous after stripping off personal identifiers. Discussion: The assumption of anonymity of genetic data sets, however, is tenuous because genetic data are intrinsically self-identifying. Two types of re-identification are possible: the "Netflix" type and the "profiling" type. The "Netflix" type needs another small genetic data set, usually with fewer than 100 SNPs but including a personal identifier. This second data set might originate from another clinical examination, a study of leftover samples or forensic testing. When merged with the primary, unidentified set it will re-identify all samples of that individual. Even with no second data set at hand, a "profiling" strategy can be developed to extract as much information as possible from a sample collection. Starting with the identification of ethnic subgroups along with predictions of body characteristics and diseases, the asthma kids case is used as a real-life example to illustrate that approach. Summary: Depending on the degree of supplemental information, there is a good chance that at least a few individuals can be identified from an anonymized data set. Any re-identification, however, may potentially harm study participants because it will release individual genetic disease risks to the public.

  8. Application of bamboo laminates in large-scale wind turbine blade design

    Institute of Scientific and Technical Information of China (English)

    Long WANG; Hui LI; Tongguang WANG

    2016-01-01

    From the viewpoint of material and structure in the design of bamboo blades for large-scale wind turbines, a series of mechanical property tests of bamboo laminates as the major enhancement materials for blades is presented. The basic mechanical characteristics needed in the design of bamboo blades are briefly introduced. Based on these data, the aerodynamic-structural integrated design of a 1.5 MW wind turbine bamboo blade relying on a conventional platform of upwind, variable speed, variable pitch, and doubly-fed generator is carried out. The process of the structural layer design of bamboo blades is documented in detail. The structural strength and fatigue life of the designed wind turbine blades are certified. The technical issues raised by the design are discussed. Key problems and directions for future study are also summarized.

  9. Progresses in application of computational fluid dynamic methods to large scale wind turbine aerodynamics

    Institute of Scientific and Technical Information of China (English)

    Zhenyu ZHANG; Ning ZHAO; Wei ZHONG; Long WANG; Bofeng XU

    2016-01-01

    The computational fluid dynamics (CFD) methods are applied to aerodynamic problems for large scale wind turbines. The progresses, including the aerodynamic analyses of wind turbine profiles, numerical flow simulation of wind turbine blades, evaluation of aerodynamic performance, and multi-objective blade optimization, are discussed. Based on the CFD methods, significant improvements are obtained to predict two/three-dimensional aerodynamic characteristics of wind turbine airfoils and blades, and the vortical structure in their wake flows is accurately captured. Combining with a multi-objective genetic algorithm, a 1.5 MW NH-1500 optimized blade is designed with high efficiency in wind energy conversion.

  10. Application of the PSA method to decay heat removal systems in a large scale FBR design

    International Nuclear Information System (INIS)

    Kotake, S.; Satoh, K.; Matsumoto, H.; Sugawara, M.; Sakata, K.; Okabe, A.

    1993-01-01

    The Probabilistic Safety Assessment (PSA) method is applied to a large scale loop-type FBR in its conceptual design stage in order to establish a well-balanced safety design. Both the reactor shutdown and decay heat removal systems are designed to be highly reliable, e.g. 10^-7/d. In this paper the results of several reliability analyses concerning the DHRS are discussed, where the effects of the analytical assumptions, design options, and accident management measures on the reliability are examined. The failure probability is evaluated to be sufficiently small, since the DRACS consists of four independent loops with sufficient heat removal capacity and is designed for both forced and natural circulation capabilities. It is found that common mode failures of the active components in the DRACS dominate the reliability. Design diversity for these components can be effective for improvement, and accident management measures on the BOP are also possible by making use of the long grace period in the FBR. (author)

  11. Application of the PSA method to decay heat removal systems in a large scale FBR design

    Energy Technology Data Exchange (ETDEWEB)

    Kotake, S; Satoh, K [Japan Atomic Power Company, Otemachi, Chiyoda-ku, Tokyo (Japan); Matsumoto, H; Sugawara, M [Toshiba Corporation (Japan); Sakata, K [Mitsubishi Atomic Power Industries Inc. (Japan); Okabe, A [Hitachi Engineering Co., Ltd. (Japan)

    1993-02-01

    The Probabilistic Safety Assessment (PSA) method is applied to a large scale loop-type FBR in its conceptual design stage in order to establish a well-balanced safety design. Both the reactor shutdown and decay heat removal systems are designed to be highly reliable, e.g. 10^-7/d. In this paper the results of several reliability analyses concerning the DHRS are discussed, where the effects of the analytical assumptions, design options, and accident management measures on the reliability are examined. The failure probability is evaluated to be sufficiently small, since the DRACS consists of four independent loops with sufficient heat removal capacity and is designed for both forced and natural circulation capabilities. It is found that common mode failures of the active components in the DRACS dominate the reliability. Design diversity for these components can be effective for improvement, and accident management measures on the BOP are also possible by making use of the long grace period in the FBR. (author)
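
    To see why common-cause failures dominate a four-loop decay heat removal system, consider the standard beta-factor model, in which a fraction beta of each train's failure probability q is common to all redundant trains (illustrative numbers, not the study's):

        \[
          Q_{\mathrm{sys}} \approx \beta q + \bigl[(1-\beta)\,q\bigr]^{4},
          \qquad q = 10^{-2},\ \beta = 0.1
          \;\Rightarrow\;
          Q_{\mathrm{sys}} \approx 10^{-3} + 6.6\times10^{-9} \approx 10^{-3}.
        \]

    Even a modest common-cause fraction swamps the fourth-power independent term, which is why the abstract points to design diversity as the effective countermeasure.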

  12. Models of large-scale magnetic fields in stellar interiors. Application to solar and ap stars

    International Nuclear Information System (INIS)

    Duez, Vincent

    2009-01-01

    Stellar astrophysics today needs new models of large-scale magnetic fields, which are observed through spectropolarimetry at the surface of Ap/Bp stars and thought to be an explanation for the uniform rotation of the solar radiation zone deduced from helioseismic inversions. During my PhD, I focused on describing the possible magnetic equilibria in stellar interiors. The configurations found are mixed poloidal-toroidal and minimize the energy for a given helicity, in analogy with Taylor states encountered in spheromaks. Taking into account self-gravity leads us to the 'non force-free' equilibria family, which will thus influence the stellar structure. I derived all the physical quantities associated with the magnetic field; I then evaluated the perturbations they induce on gravity and on thermodynamic as well as energetic quantities, for a solar model and an Ap star. 3D MHD simulations allowed me to show that these equilibria form a first family of stable states, the generalization of such states remaining an open question. It has been shown that a large-scale magnetic field confined in the solar radiation zone can induce an oblateness comparable to a high core rotation law. I also studied the secular interaction between the magnetic field, the differential rotation and the meridional circulation with the aim of implementing their effects in a next generation stellar evolution code. The influence of magnetism on convection has also been studied. Finally, hydrodynamic processes responsible for the mixing have been compared with diffusion and a change of convective efficiency in the case of a CoRoT target star. (author)

  13. Disclosure Control using Partially Synthetic Data for Large-Scale Health Surveys, with Applications to CanCORS

    OpenAIRE

    Loong, Bronwyn; Zaslavsky, Alan M.; He, Yulei; Harrington, David P.

    2013-01-01

    Statistical agencies have begun to partially synthesize public-use data for major surveys to protect the confidentiality of respondents’ identities and sensitive attributes, by replacing high disclosure risk and sensitive variables with multiple imputations. To date, there are few applications of synthetic data techniques to large-scale healthcare survey data. Here, we describe partial synthesis of survey data collected by CanCORS, a comprehensive observational study of the experiences, treat...

  14. Coordinated Multi-layer Multi-domain Optical Network (COMMON) for Large-Scale Science Applications

    Energy Technology Data Exchange (ETDEWEB)

    Vokkarane, Vinod [University of Massachusetts

    2013-09-01

    We intend to implement a Coordinated Multi-layer Multi-domain Optical Network (COMMON) Framework for Large-scale Science Applications. In the COMMON project, specific problems to be addressed include (1) anycast/multicast/manycast request provisioning, (2) deployable OSCARS enhancements, (3) multi-layer, multi-domain quality of service (QoS), and (4) multi-layer, multi-domain path survivability. In what follows, we outline the progress in the above categories (Year 1, 2, and 3 deliverables).

  15. Application of plant metabonomics in quality assessment for large-scale production of traditional Chinese medicine.

    Science.gov (United States)

    Ning, Zhangchi; Lu, Cheng; Zhang, Yuxin; Zhao, Siyu; Liu, Baoqin; Xu, Xuegong; Liu, Yuanyan

    2013-07-01

    The curative effects of traditional Chinese medicines are principally based on the synergistic effect of their multi-targeting, multi-ingredient preparations, in contrast to modern pharmacology and drug development that often focus on a single chemical entity. Therefore, methods employing a few markers or pharmacologically active constituents to assess the quality and authenticity of the complex preparations face a number of severe challenges. Metabonomics can provide an effective platform for complex sample analysis, and it has also been applied to the quality analysis of traditional Chinese medicine. Metabonomics enables comprehensive assessment of complex traditional Chinese medicines or herbal remedies and the classification of samples of diverse biological status, origin, or quality by means of chemometrics. Identification, processing, and pharmaceutical preparation are the main procedures in the large-scale production of Chinese medicinal preparations. Through complete scans, plant metabonomics addresses some of the shortfalls of single analyses and presents considerable potential to become a sharp tool for traditional Chinese medicine quality assessment.

  16. Large-scale application of highly-diluted bacteria for Leptospirosis epidemic control.

    Science.gov (United States)

    Bracho, Gustavo; Varela, Enrique; Fernández, Rolando; Ordaz, Barbara; Marzoa, Natalia; Menéndez, Jorge; García, Luis; Gilling, Esperanza; Leyva, Richard; Rufín, Reynaldo; de la Torre, Rubén; Solis, Rosa L; Batista, Niurka; Borrero, Reinier; Campa, Concepción

    2010-07-01

    Leptospirosis is a zoonotic disease of major importance in the tropics, where the incidence peaks in rainy seasons. Natural disasters represent a big challenge to Leptospirosis prevention strategies, especially in endemic regions. Vaccination is an effective option but of reduced effectiveness in emergency situations. Homeoprophylactic interventions might help to control epidemics by using highly-diluted pathogens to induce protection on a short time scale. We report the results of a very large-scale homeoprophylaxis (HP) intervention against Leptospirosis in a dangerous epidemic situation in three provinces of Cuba in 2007. Forecast models were used to estimate possible trends of disease incidence. A homeoprophylactic formulation was prepared from dilutions of four circulating strains of Leptospirosis. This formulation was administered orally to 2.3 million persons at high risk in an epidemic in a region affected by natural disasters. The data from surveillance were used to measure the impact of the intervention by comparing with historical trends and non-intervention regions. After the homeoprophylactic intervention a significant decrease in disease incidence was observed in the intervention regions. No such modifications were observed in non-intervention regions. In the intervention region the incidence of Leptospirosis fell below the historic median. This observation was independent of rainfall. The homeoprophylactic approach was associated with a large reduction of disease incidence and control of the epidemic. The results suggest the use of HP as a feasible tool for epidemic control; further research is warranted.

  17. Development and application of a computer model for large-scale flame acceleration experiments

    International Nuclear Information System (INIS)

    Marx, K.D.

    1987-07-01

    A new computational model for large-scale premixed flames is developed and applied to the simulation of flame acceleration experiments. The primary objective is to circumvent the necessity for resolving turbulent flame fronts; this is imperative because of the relatively coarse computational grids which must be used in engineering calculations. The essence of the model is to artificially thicken the flame by increasing the appropriate diffusivities and decreasing the combustion rate, but to do this in such a way that the burn velocity varies with pressure, temperature, and turbulence intensity according to prespecified phenomenological characteristics. The model is particularly aimed at implementation in computer codes which simulate compressible flows. To this end, it is applied to the two-dimensional simulation of hydrogen-air flame acceleration experiments in which the flame speeds and gas flow velocities attain or exceed the speed of sound in the gas. It is shown that many of the features of the flame trajectories and pressure histories in the experiments are simulated quite well by the model. Using the comparison of experimental and computational results as a guide, some insight is developed into the processes which occur in such experiments. 34 refs., 25 figs., 4 tabs
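
    The thickening trick described above can be checked against classical laminar flame scaling, in which the flame speed varies as the square root of the product of diffusivity and reaction rate, while the thickness varies as the square root of their ratio. A sketch of the standard argument (textbook scaling consistent with the approach described, not equations quoted from the report):

        \[
          s_L \propto \sqrt{D\,\dot\omega}, \qquad \delta \propto \sqrt{D/\dot\omega};
          \qquad D \to FD,\ \dot\omega \to \dot\omega/F
          \;\Rightarrow\; s_L \to s_L,\quad \delta \to F\delta .
        \]

    Multiplying the diffusivity by a factor F and dividing the combustion rate by F therefore leaves the burn velocity unchanged while thickening the flame F-fold, which is what lets a coarse engineering grid resolve the front.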

  18. Towards Development of Clustering Applications for Large-Scale Comparative Genotyping and Kinship Analysis Using Y-Short Tandem Repeats.

    Science.gov (United States)

    Seman, Ali; Sapawi, Azizian Mohd; Salleh, Mohd Zaki

    2015-06-01

    Y-chromosome short tandem repeats (Y-STRs) are genetic markers with practical applications in human identification. However, where mass identification is required (e.g., in the aftermath of disasters with significant fatalities), the efficiency of the process could be improved with new statistical approaches. Clustering applications are relatively new tools for large-scale comparative genotyping, and the k-Approximate Modal Haplotype (k-AMH), an efficient algorithm for clustering large-scale Y-STR data, represents a promising method for developing these tools. In this study we improved the k-AMH and produced three new algorithms: the Nk-AMH I (including a new initial cluster center selection), the Nk-AMH II (including a new dominant weighting value), and the Nk-AMH III (combining I and II). The Nk-AMH III was the superior algorithm, with mean clustering accuracy that increased in four out of six datasets and remained at 100% in the other two. Additionally, the Nk-AMH III achieved a 2% higher overall mean clustering accuracy score than the k-AMH, as well as optimal accuracy for all datasets (0.84-1.00). With inclusion of the two new methods, the Nk-AMH III produced an optimal solution for clustering Y-STR data; thus, the algorithm has potential for further development towards fully automatic clustering of any large-scale genotypic data.
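
    For readers unfamiliar with modal-haplotype clustering, the family of algorithms the k-AMH belongs to can be illustrated with a plain k-modes baseline over categorical Y-STR profiles: assign each haplotype to the nearest mode under simple matching dissimilarity, then recompute each cluster's mode. A minimal sketch follows (this is the generic k-modes idea, not the k-AMH or Nk-AMH algorithms themselves):

        from collections import Counter
        import random

        def dissim(a, b):
            """Simple matching dissimilarity: number of mismatching loci."""
            return sum(x != y for x, y in zip(a, b))

        def mode_of(cluster):
            """Per-locus most frequent allele across a cluster's haplotypes."""
            return tuple(Counter(col).most_common(1)[0][0] for col in zip(*cluster))

        def k_modes(haplotypes, k, iters=10, seed=0):
            random.seed(seed)
            modes = random.sample(haplotypes, k)
            for _ in range(iters):
                clusters = [[] for _ in range(k)]
                for h in haplotypes:
                    clusters[min(range(k), key=lambda i: dissim(h, modes[i]))].append(h)
                modes = [mode_of(c) if c else modes[i] for i, c in enumerate(clusters)]
            return modes, clusters

        # Toy Y-STR repeat counts at four loci:
        data = [(13, 29, 24, 11), (13, 29, 24, 12), (14, 30, 25, 11), (14, 31, 25, 11)]
        modes, clusters = k_modes(data, k=2)
        print(modes)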

  19. High-accuracy single-pass InSAR DEM for large-scale flood hazard applications

    Science.gov (United States)

    Schumann, G.; Faherty, D.; Moller, D.

    2017-12-01

    In this study, we used the unique opportunity of a GLISTIN-A (a NASA airborne mission designed to characterize the cryosphere) transit to Greenland to acquire a high-resolution InSAR DEM of a large area in the Red River of the North Basin (north of Grand Forks, ND, USA), a very flood-vulnerable valley, particularly in springtime, due to soil moisture content near saturation and/or, as is typical for this region, snowmelt. Having an InSAR DEM that meets flood inundation modeling and mapping requirements comparable to LiDAR would demonstrate great application potential of new radar technology for national agencies with an operational flood forecasting mandate and also local state governments active in flood event prediction, disaster response and mitigation. Specifically, we derived a bare-earth DEM in SAR geometry by first removing the inherent far-range bias related to airborne operation, which at the more typical large-scale DEM resolution of 30 m yields a sensor accuracy of plus or minus 2.5 cm. Subsequently, an intelligent classifier based on informed relationships between InSAR height, intensity and correlation was used to distinguish between bare earth, roads or embankments, buildings and tall vegetation in order to facilitate the creation of a bare-earth DEM that meets the requirements for accurate floodplain inundation mapping. Using state-of-the-art LiDAR terrain data, we demonstrate that capability by achieving a root mean squared error of approximately 25 cm and further illustrate its applicability to flood modeling.

  20. Large scale and long term application of bioslurping: the case of a Greek petroleum refinery site.

    Science.gov (United States)

    Gidarakos, E; Aivalioti, M

    2007-11-19

    This paper presents the course and the remediation results of a 4-year application of bioslurping technology on the subsurface of a Greek petroleum refinery, which is still in full operation and has important and complicated subsurface contamination problems, mainly due to the presence of light non-aqueous phase liquids (LNAPL). About 55 wells are connected to the central bioslurping unit, while a mobile bioslurping unit is also used whenever and wherever necessary. Moreover, there are about 120 additional wells for monitoring the subsurface of the facilities, which cover a total area of 1,000,000 m^2. An integrated monitoring program has also been developed and applied on the site, including frequent LNAPL layer depth and thickness measurements, conduction of bail-down and recovery tests, and sampling and chemical analysis of the free oil phase, so as to evaluate the remediation technique's efficiency and ensure prompt tracing of any new potential leak. Despite the occurrence of new leaks within the last 4 years and the observed entrapment of LNAPL in the vadose zone, bioslurping has managed to greatly restrict the original plume to certain, relatively small parts of the refinery facilities.

  1. Efficient Key Management System for Large-scale Smart RFID Applications

    Directory of Open Access Journals (Sweden)

    Mohammad Fal Sadikin

    2015-08-01

    Due to its low cost and practicality, the integration of RFID tags with sensor nodes, called smart RFID, has become a prominent solution in various fields including industrial applications. Nevertheless, the constrained nature of smart RFID systems introduces tremendous security and privacy problems. One of them is the problem of key management: it is not feasible to recall all RFID tags in order to update their security properties (e.g. to update their private keys). On the other hand, a common key management solution like standard TLS/SSL is too heavyweight and can drain and overload the limited resources. Furthermore, most existing solutions are highly susceptible to various threats, ranging from privacy threats and physical attacks to various techniques of man-in-the-middle attack. This paper introduces a novel key management system tailored to the limited resources of smart RFID systems. It proposes lightweight mutual authentication and identity protection to mitigate the existing threats.
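
    To make "lightweight mutual authentication" concrete: schemes in this class commonly replace certificate-based handshakes with a symmetric challenge-response exchange in which each side proves knowledge of a pre-shared key via a keyed hash. The sketch below is a generic HMAC-based illustration of that idea, not the authors' protocol; key provisioning, counters and replay protection are omitted:

        import hashlib
        import hmac
        import os

        SHARED_KEY = os.urandom(16)  # pre-shared per-tag key (provisioning not shown)

        def respond(key: bytes, challenge: bytes) -> bytes:
            """Prove knowledge of the key without transmitting it."""
            return hmac.new(key, challenge, hashlib.sha256).digest()

        # Reader authenticates tag:
        reader_nonce = os.urandom(8)
        tag_proof = respond(SHARED_KEY, reader_nonce)            # computed on the tag
        assert hmac.compare_digest(tag_proof, respond(SHARED_KEY, reader_nonce))

        # Tag authenticates reader (the mutual direction):
        tag_nonce = os.urandom(8)
        reader_proof = respond(SHARED_KEY, tag_nonce)            # computed on the reader
        assert hmac.compare_digest(reader_proof, respond(SHARED_KEY, tag_nonce))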

  2. Centralized manure digestion. Selection of locations and estimation of costs of large-scale manure storage application

    International Nuclear Information System (INIS)

    1995-03-01

    A study to assess the possibilities and the consequences of using existing Dutch large scale manure silos at centralised anaerobic digestion plants (CAD-plants) for manure and energy-rich organic wastes was carried out. Reconstruction of these large scale manure silos into digesters for a CAD-plant is not self-evident due to the high height/diameter ratio of these silos and the extra investments that have to be made for additional facilities for roofing, insulation, mixing and heating. From the results of an inventory and selection of large scale manure silos with a storage capacity above 1,500 m^3 it appeared that there are 21 locations in The Netherlands that qualify for realisation of a CAD-plant with a processing capacity of 100 m^3 of biomass (80% manure, 20% additives) per day. These locations are found in particular in the 'shortage areas' for manure fertilisation in the Dutch provinces of Groningen and Drenthe. Three of these 21 locations with large scale silos are considered the most suitable for realisation of a large scale CAD-plant. The selection is based on an optimal scale for a CAD-plant of 300 m^3 of material (80% manure, 20% additives) to be processed per day and the most suitable consuming markets for the biogas produced at the CAD-plant. The three locations are Middelharnis, Veendam, and Klazinaveen. Applying the conditions used in this study and accounting for all costs for transport of manure, additives and end-product, including the costs for the storage facilities, a break-even operation might be realised at a minimum income for the additives of approximately 50 Dutch guilders per m^3 (including TAV). This income price is considerably lower than the prevailing costs for tipping or processing of organic wastes in The Netherlands. This study revealed that a break-even exploitation of a large scale CAD-plant for the processing of manure with energy-rich additives is possible. (Abstract Truncated)

  3. Improvement of methods for large scale sequencing; application to human Xq28

    Energy Technology Data Exchange (ETDEWEB)

    Gibbs, R.A.; Andersson, B.; Wentland, M.A. [Baylor College of Medicine, Houston, TX (United States)] [and others

    1994-09-01

    Sequencing of a one-megabase region of Xq28, spanning the FRAXA and IDS loci, has been undertaken in order to investigate the practicality of the shotgun approach for large scale sequencing and as a platform to develop improved methods. The efficiency of several steps in the shotgun sequencing strategy has been increased using PCR-based approaches. An improved method for preparation of M13 libraries has been developed. This protocol combines a previously described adaptor-based protocol with the uracil DNA glycosylase (UDG) cloning procedure. The efficiency of this procedure has been found to be up to 100-fold higher than that of previously used protocols. In addition, the novel protocol is more reliable and thus easy to establish in a laboratory. The method has also been adapted for the simultaneous shotgun sequencing of multiple short fragments by concentrating them before library construction. This protocol is suitable for rapid characterization of cDNA clones. A library was constructed from 15 PCR-amplified and concentrated human cDNA inserts; the insert sequences could easily be identified as separate contigs during the assembly process, and the sequence coverage was even along each fragment. Using this strategy, the fine structures of the FRAXA and IDS loci have been revealed and several EST homologies indicating novel expressed sequences have been identified. The use of PCR to close repetitive regions that are difficult to clone was tested by determining the sequence of a cosmid mapping to DXS455 in Xq28, containing a polymorphic VNTR. The region containing the VNTR was not represented in the shotgun library, but by designing PCR primers in the sequences flanking the gap and by cloning and sequencing the PCR product, the fine structure of the VNTR was determined. It was found to be an AT-rich VNTR with a repeated 25-mer at the center.

  4. LUMINOUS RED GALAXY HALO DENSITY FIELD RECONSTRUCTION AND APPLICATION TO LARGE-SCALE STRUCTURE MEASUREMENTS

    International Nuclear Information System (INIS)

    Reid, Beth A.; Spergel, David N.; Bode, Paul

    2009-01-01

    The nontrivial relationship between observations of galaxy positions in redshift space and the underlying matter field complicates our ability to determine the linear theory power spectrum and extract cosmological information from galaxy surveys. The Sloan Digital Sky Survey (SDSS) luminous red galaxy (LRG) catalog has the potential to place powerful constraints on cosmological parameters. LRGs are bright, highly biased tracers of large-scale structure. However, because they are highly biased, the nonlinear contribution of satellite galaxies to the galaxy power spectrum is large and fingers-of-God (FOGs) are significant. The combination of these effects leads to a ~10% correction in the underlying power spectrum at k = 0.1 h Mpc^-1 and ~40% correction at k = 0.2 h Mpc^-1 in the LRG P(k) analysis of Tegmark et al., thereby compromising the cosmological constraints when this potentially large correction is left as a free parameter. We propose an alternative approach to recovering the matter field from galaxy observations. Our approach is to use halos rather than galaxies to trace the underlying mass distribution. We identify FOGs and replace each FOG with a single halo object. This removes the nonlinear contribution of satellite galaxies, the one-halo term. We test our method on a large set of high-fidelity mock SDSS LRG catalogs and find that the power spectrum of the reconstructed halo density field deviates from the underlying matter power spectrum at the <=1% level for k <= 0.1 h Mpc^-1 and <=4% at k = 0.2 h Mpc^-1. The reconstructed halo density field also removes the bias in the measurement of the redshift space distortion parameter β induced by the FOG smearing of the linear redshift space distortions.

  5. On the Fidelity of Semi-distributed Hydrologic Model Simulations for Large Scale Catchment Applications

    Science.gov (United States)

    Ajami, H.; Sharma, A.; Lakshmi, V.

    2017-12-01

    Application of semi-distributed hydrologic modeling frameworks is a viable alternative to fully distributed hyper-resolution hydrologic models, offering computational efficiency while resolving the fine-scale spatial structure of hydrologic fluxes and states. However, the fidelity of semi-distributed model simulations is impacted by (1) the formulation of hydrologic response units (HRUs), and (2) the aggregation of catchment properties for formulating simulation elements. Here, we evaluate the performance of a recently developed Soil Moisture and Runoff simulation Toolkit (SMART) for large catchment scale simulations. In SMART, topologically connected HRUs are delineated using thresholds obtained from topographic and geomorphic analysis of a catchment, and simulation elements are equivalent cross sections (ECSs) representative of a hillslope in first order sub-basins. Earlier investigations have shown that formulation of ECSs at the scale of a first order sub-basin reduces computational time significantly without compromising simulation accuracy. However, the implementation of this approach has not been fully explored for catchment scale simulations. To assess SMART performance, we set up the model over the Little Washita watershed in Oklahoma. Model evaluations using in-situ soil moisture observations show satisfactory model performance. In addition, we evaluated the performance of a number of soil moisture disaggregation schemes recently developed to provide spatially explicit soil moisture outputs at fine scale resolution. Our results illustrate that the statistical disaggregation scheme performs significantly better than the methods based on topographic data. Future work is focused on assessing the performance of SMART using remotely sensed soil moisture observations and spatially based model evaluation metrics.

  6. Performance Modeling of Hybrid MPI/OpenMP Scientific Applications on Large-scale Multicore Cluster Systems

    KAUST Repository

    Wu, Xingfu

    2011-08-01

    In this paper, we present a performance modeling framework based on memory bandwidth contention time and a parameterized communication model to predict the performance of OpenMP, MPI and hybrid applications with weak scaling on three large-scale multicore clusters: IBM POWER4, POWER5+ and Blue Gene/P, and analyze the performance of these MPI, OpenMP and hybrid applications. We use STREAM memory benchmarks to provide initial performance analysis and model validation of MPI and OpenMP applications on these multicore clusters because the measured sustained memory bandwidth can provide insight into the memory bandwidth that a system should sustain on scientific applications with the same amount of workload per core. In addition to using these benchmarks, we also use a weak-scaling hybrid MPI/OpenMP large-scale scientific application, the Gyrokinetic Toroidal Code (GTC) in magnetic fusion, to validate our performance model of the hybrid application on these multicore clusters. The validation results for our performance modeling method show less than 7.77% error rate in predicting the performance of hybrid MPI/OpenMP GTC on up to 512 cores on these multicore clusters.

  7. Performance Modeling of Hybrid MPI/OpenMP Scientific Applications on Large-scale Multicore Cluster Systems

    KAUST Repository

    Wu, Xingfu; Taylor, Valerie

    2011-01-01

    In this paper, we present a performance modeling framework based on memory bandwidth contention time and a parameterized communication model to predict the performance of OpenMP, MPI and hybrid applications with weak scaling on three large-scale multicore clusters: IBM POWER4, POWER5+ and Blue Gene/P, and analyze the performance of these MPI, OpenMP and hybrid applications. We use STREAM memory benchmarks to provide initial performance analysis and model validation of MPI and OpenMP applications on these multicore clusters because the measured sustained memory bandwidth can provide insight into the memory bandwidth that a system should sustain on scientific applications with the same amount of workload per core. In addition to using these benchmarks, we also use a weak-scaling hybrid MPI/OpenMP large-scale scientific application, the Gyrokinetic Toroidal Code (GTC) in magnetic fusion, to validate our performance model of the hybrid application on these multicore clusters. The validation results for our performance modeling method show less than 7.77% error rate in predicting the performance of hybrid MPI/OpenMP GTC on up to 512 cores on these multicore clusters.
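
    The framework in these two records decomposes per-core runtime into computation inflated by memory-bandwidth contention plus a parameterized communication term. A generic schematic of that decomposition (the symbols are illustrative; the paper's exact parameterization is not reproduced here):

        \[
          T_{\mathrm{total}} \approx T_{\mathrm{CPU}} + T_{\mathrm{mem}}(c) + \bigl(\alpha + m/\beta\bigr),
        \]

    where T_mem(c) grows with the number of cores c contending for a shared memory bus (calibrated in the paper via STREAM-style measurements), alpha is the per-message latency, m the message volume, and beta the attainable bandwidth.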

  8. APPLICATIONS OF CFD METHOD TO GAS MIXING ANALYSIS IN A LARGE-SCALED TANK

    International Nuclear Information System (INIS)

    Lee, S; Richard Dimenna, R

    2007-01-01

    The computational fluid dynamics (CFD) modeling technique was applied to the estimation of the maximum benzene concentration in the vapor space of a large-scale, high-level radioactive waste tank at the Savannah River Site (SRS). The objective of the work was to calculate the benzene mixing behavior in the vapor space of Tank 48 and its impact on the local concentration of benzene. The calculations were used to evaluate the degree to which purge air mixes with benzene evolving from the liquid surface and its ability to prevent an unacceptable concentration of benzene from forming. The analysis focused on changing the tank operating conditions to establish internal recirculation and changing the benzene evolution rate from the liquid surface. The model used three-dimensional momentum equations coupled with multi-species transport. The calculations included potential operating conditions for air inlet and exhaust flows, recirculation flow rate, and benzene evolution rate with prototypic tank geometry. The flow conditions are assumed to be fully turbulent, since Reynolds numbers for typical operating conditions are in the range of 20,000 to 70,000 based on the inlet conditions of the air purge system. A standard two-equation turbulence model was used. Modeling results for typical gas mixing problems available in the literature were verified through comparison with test results. The benchmarking results showed that the predictions are in good agreement with the analytical solutions and literature data. Additional sensitivity calculations included a reduced benzene evolution rate, reduced air inlet and exhaust flow, and forced internal recirculation. The modeling results showed that the vapor space was fairly well mixed and that benzene concentrations were relatively low when forced recirculation and 72 cfm ventilation air through the tank boundary were imposed. For the same 72 cfm air inlet flow but without forced recirculation

  9. Direct Satellite Data Acquisition and its Application for Large -scale Monitoring Projects in Russia

    Science.gov (United States)

    Gershenzon, O.

    2011-12-01

    ScanEx RDC created an infrastructure (a ground station network) to acquire and process remote sensing data from different satellites: Terra, Aqua, Landsat, IRS-P5/P6, SPOT 4/5, FORMOSAT-2, EROS A/B, RADARSAT-1/2, ENVISAT-1. It owns image archives from these satellites as well as from SPOT-2 and CARTOSAT-2. ScanEx RDC builds and delivers remote sensing ground stations (working with up to 15 satellites) and operates its own ground station network to acquire data for Russia and the surrounding territory. ScanEx stations are the basic component of departmental remote sensing data acquisition networks for different state authorities (Roshydromet, Ministry of Natural Resources, Emercom) and of university-based remote sensing data acquisition and processing centers in Russia and abroad. ScanEx performs large-scale projects in collaboration with government agencies to monitor forests, floods, fires, sea surface pollution, and ice conditions in Northern Russia. During 2010-2011 ScanEx conducted daily monitoring of wild fires in Russia, detecting and registering thermal anomalies using data from the Terra, Aqua, Landsat and SPOT satellites. Detailed SPOT 4/5 data were used to analyze burnt areas and to assess damage caused by fire. Satellite data, along with other information about the fire situation in Russia, were updated daily and published via a free-access Internet geoportal. A few projects ScanEx conducted together with environmental NGOs. The project "Satellite monitoring of Especially Protected Natural Areas of Russia", with visualization of its results on a geoportal, was conducted in cooperation with the NGO "Transparent World". The project's goal was to observe natural phenomena and economic activity, including illegal activity, by means of Earth remote sensing data. Monitoring is based on multi-temporal optical space imagery of different spatial resolutions. Project results include detection of anthropogenic objects that appeared in the vicinity or even within the borders of natural territories that have never been

  10. Fiber Optic Rosette Strain Gauge Development and Application on a Large-Scale Composite Structure

    Science.gov (United States)

    Moore, Jason P.; Przekop, Adam; Juarez, Peter D.; Roth, Mark C.

    2015-01-01

    A detailed description of the construction, application, and measurement of 196 FO rosette strain gauges that measured multi-axis strain across the outside upper surface of the forward bulkhead component of a multi-bay composite fuselage test article is presented. Background on the FO strain gauge and the FO measurement system as utilized in this application is given, and results for the higher load cases of the testing sequence are shown.
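
    For readers unfamiliar with rosette gauges: three directional readings recover the full in-plane strain state. Assuming the common rectangular (0/45/90 degree) rosette, used here purely as an illustration since the abstract does not specify the gauge geometry, the strain components and principal strains follow from:

        \[
          \varepsilon_x = \varepsilon_{0}, \qquad
          \varepsilon_y = \varepsilon_{90}, \qquad
          \gamma_{xy} = 2\varepsilon_{45} - \varepsilon_{0} - \varepsilon_{90},
        \]
        \[
          \varepsilon_{1,2} = \frac{\varepsilon_x + \varepsilon_y}{2}
          \pm \sqrt{\Bigl(\frac{\varepsilon_x - \varepsilon_y}{2}\Bigr)^{2}
          + \Bigl(\frac{\gamma_{xy}}{2}\Bigr)^{2}} .
        \]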

  11. State of the art and prospective of large scale applications of YBCO thick films grown on metallic substrates

    International Nuclear Information System (INIS)

    Boffa, Vincenzo

    1997-09-01

    In the framework of high temperature superconducting materials, YBa2Cu3O7 (YBCO) shows very interesting intrinsic superconducting transport properties at temperatures above the liquid nitrogen temperature. These properties are very important for large scale applications: transport of energy, magnets, transformers, etc. Unfortunately the potential of this material cannot be realised today, since it is very difficult to manufacture YBCO-based tapes or cables. In recent years several groups have tried to overcome the problems with new fabrication techniques. In the present report the state of the art and the prospects in the field of YBCO film fabrication on metallic substrates are presented

  12. Large scale, highly conductive and patterned transparent films of silver nanowires on arbitrary substrates and their application in touch screens

    International Nuclear Information System (INIS)

    Madaria, Anuj R; Kumar, Akshay; Zhou Chongwu

    2011-01-01

    The application of silver nanowire films as transparent conductive electrodes has shown promising results recently. In this paper, we demonstrate the application of a simple spray coating technique to obtain large scale, highly uniform and conductive silver nanowire films on arbitrary substrates. We also integrated a polydimethylsiloxane (PDMS)-assisted contact transfer technique with spray coating, which allowed us to obtain large scale high quality patterned films of silver nanowires. The transparency and conductivity of the films was controlled by the volume of the dispersion used in spraying and the substrate area. We note that the optoelectrical property, σ_DC/σ_Op, for the various films fabricated was in the range 75-350, which is extremely high for a transparent thin film compared to other candidate alternatives to doped metal oxide film. Using this method, we obtain silver nanowire films on a flexible polyethylene terephthalate (PET) substrate with a transparency of 85% and sheet resistance of 33 Ω/sq, which is comparable to that of tin-doped indium oxide (ITO) on flexible substrates. In-depth analysis of the film shows a high performance using another commonly used figure-of-merit, Φ_TE. Also, the Ag nanowire film/PET shows good mechanical flexibility, and the application of such a conductive silver nanowire film as an electrode in a touch panel has been demonstrated.

  13. Large scale, highly conductive and patterned transparent films of silver nanowires on arbitrary substrates and their application in touch screens.

    Science.gov (United States)

    Madaria, Anuj R; Kumar, Akshay; Zhou, Chongwu

    2011-06-17

    The application of silver nanowire films as transparent conductive electrodes has shown promising results recently. In this paper, we demonstrate the application of a simple spray coating technique to obtain large scale, highly uniform and conductive silver nanowire films on arbitrary substrates. We also integrated a polydimethylsiloxane (PDMS)-assisted contact transfer technique with spray coating, which allowed us to obtain large scale high quality patterned films of silver nanowires. The transparency and conductivity of the films was controlled by the volume of the dispersion used in spraying and the substrate area. We note that the optoelectrical property, σ_DC/σ_Op, for the various films fabricated was in the range 75-350, which is extremely high for a transparent thin film compared to other candidate alternatives to doped metal oxide film. Using this method, we obtain silver nanowire films on a flexible polyethylene terephthalate (PET) substrate with a transparency of 85% and sheet resistance of 33 Ω/sq, which is comparable to that of tin-doped indium oxide (ITO) on flexible substrates. In-depth analysis of the film shows a high performance using another commonly used figure-of-merit, Φ_TE. Also, the Ag nanowire film/PET shows good mechanical flexibility, and the application of such a conductive silver nanowire film as an electrode in a touch panel has been demonstrated.
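
    The ratio σ_DC/σ_Op quoted in both records relates measured transparency T and sheet resistance R_s through the standard thin-film expression (Z_0 = 377 Ω is the impedance of free space); the Haacke figure of merit Φ_TE is an alternative often quoted alongside it. Plugging in the reported T = 0.85 and R_s = 33 Ω/sq lands near the lower end of the reported 75-350 range (a rough consistency check, since σ_Op is wavelength dependent):

        \[
          T = \Bigl(1 + \frac{Z_0}{2R_s}\,\frac{\sigma_{Op}}{\sigma_{DC}}\Bigr)^{-2}
          \;\Rightarrow\;
          \frac{\sigma_{DC}}{\sigma_{Op}} = \frac{Z_0}{2R_s\,(T^{-1/2}-1)}
          \approx \frac{377}{2\cdot 33\cdot 0.085} \approx 67,
          \qquad
          \Phi_{TE} = \frac{T^{10}}{R_s} \approx \frac{0.85^{10}}{33}
          \approx 6\times10^{-3}\ \Omega^{-1}.
        \]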

  14. Application of large-scale sequencing to marker discovery in plants

    Indian Academy of Sciences (India)

    2012-10-15

    Oct 15, 2012 ... mate-pair libraries (large insert libraries), RNA-Seq data, reduced ... range of different applications for SGS have been developed and applied to marker ..... duced by human selection for desirable grain qualities. A total of 399 ...

  15. Real-time nonlinear MPC and MHE for a large-scale mechatronic application

    DEFF Research Database (Denmark)

    Vukov, Milan; Gros, S.; Horn, G.

    2015-01-01

    Progress in optimization algorithms and in computational hardware has made the deployment of Nonlinear Model Predictive Control (NMPC) and Moving Horizon Estimation (MHE) possible for mechatronic applications. This paper aims to assess the computational performance of NMPC and MHE for rotational start-up ...

  16. Neighborhood communication paradigm to increase scalability in large-scale dynamic scientific applications

    KAUST Repository

    Ovcharenko, Aleksandr; Ibanez, Daniel; Delalondre, Fabien; Sahni, Onkar; Jansen, Kenneth E.; Carothers, Christopher D.; Shephard, Mark S.

    2012-01-01

    packing to manage message flow control and reduce the number and time of communication calls. The test application demonstrated is parallel unstructured mesh adaptation. Results on IBM Blue Gene/P and Cray XE6 computers show that the use of neighborhood

  17. Application of the actor model to large scale NDE data analysis

    Science.gov (United States)

    Coughlin, Chris

    2018-03-01

    The Actor model of concurrent computation discretizes a problem into a series of independent units or actors that interact only through the exchange of messages. Without direct coupling between individual components, an Actor-based system is inherently concurrent and fault-tolerant. These traits lend themselves to so-called "Big Data" applications in which the volume of data to analyze requires a distributed multi-system design. For a practical demonstration of the Actor computational model, a system was developed to assist with the automated analysis of Nondestructive Evaluation (NDE) datasets using the open source Myriad Data Reduction Framework. A machine learning model trained to detect damage in two-dimensional slices of C-Scan data was deployed in a streaming data processing pipeline. To demonstrate the flexibility of the Actor model, the pipeline was deployed on a local system and re-deployed as a distributed system without recompiling, reconfiguring, or restarting the running application.
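
    The mechanics described above, isolated state plus interaction only through messages, fit in a few lines using a thread and a mailbox queue. A minimal Python sketch of the computational model (the Myriad framework itself is not shown; the names and damage-count logic are illustrative):

        import queue
        import threading

        class Actor:
            """Isolated state plus a mailbox; interacts only through messages."""
            def __init__(self):
                self.mailbox = queue.Queue()
                self.damaged = 0  # private state: slices flagged as damaged
                self._thread = threading.Thread(target=self._run)
                self._thread.start()

            def send(self, msg):
                self.mailbox.put(msg)  # the only way to influence the actor

            def _run(self):
                while True:
                    msg = self.mailbox.get()
                    if msg is None:          # poison pill: shut down cleanly
                        return
                    if msg.get("damaged"):   # e.g. verdict for one C-scan slice
                        self.damaged += 1

            def join(self):
                self._thread.join()

        analyzer = Actor()
        for verdict in [{"damaged": True}, {"damaged": False}, {"damaged": True}]:
            analyzer.send(verdict)
        analyzer.send(None)
        analyzer.join()
        print(analyzer.damaged)  # -> 2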

  18. MageComet—web application for harmonizing existing large-scale experiment descriptions

    OpenAIRE

    Xue, Vincent; Burdett, Tony; Lukk, Margus; Taylor, Julie; Brazma, Alvis; Parkinson, Helen

    2012-01-01

    Motivation: Meta-analysis of large gene expression datasets obtained from public repositories requires consistently annotated data. Curation of such experiments, however, is an expert activity which involves repetitive manipulation of text. Existing tools for automated curation are few, which bottleneck the analysis pipeline. Results: We present MageComet, a web application for biologists and annotators that facilitates the re-annotation of gene expression experiments in MAGE-TAB format. It i...

  19. Performance of large-scale scientific applications on the IBM ASCI Blue-Pacific system

    International Nuclear Information System (INIS)

    Mirin, A.

    1998-01-01

    The IBM ASCI Blue-Pacific System is a scalable, distributed/shared memory architecture designed to reach multi-teraflop performance. The IBM SP pieces together a large number of nodes, each having a modest number of processors. The system is designed to accommodate a mixed programming model as well as a pure message-passing paradigm. We examine a number of applications on this architecture and evaluate their performance and scalability

  20. LDRD final report : robust analysis of large-scale combinatorial applications.

    Energy Technology Data Exchange (ETDEWEB)

    Carr, Robert D.; Morrison, Todd (University of Colorado, Denver, CO); Hart, William Eugene; Benavides, Nicolas L. (Santa Clara University, Santa Clara, CA); Greenberg, Harvey J. (University of Colorado, Denver, CO); Watson, Jean-Paul; Phillips, Cynthia Ann

    2007-09-01

    Discrete models of large, complex systems like national infrastructures and complex logistics frameworks naturally incorporate many modeling uncertainties. Consequently, there is a clear need for optimization techniques that can robustly account for risks associated with modeling uncertainties. This report summarizes the progress of the Late-Start LDRD 'Robust Analysis of Large-scale Combinatorial Applications'. This project developed new heuristics for solving robust optimization models, and developed new robust optimization models for describing uncertainty scenarios.

  1. Front-End Intelligence for Large-Scale Application-Oriented Internet-of-Things

    KAUST Repository

    Bader, Ahmed; Ghazzai, Hakim; Kadri, Abdullah; Alouini, Mohamed-Slim

    2016-01-01

    The Internet-of-things (IoT) refers to the massive integration of electronic devices, vehicles, buildings, and other objects to collect and exchange data. It is the enabling technology for a plethora of applications touching various aspects of our lives such as healthcare, wearables, surveillance, home automation, smart manufacturing, and intelligent automotive systems. Existing IoT architectures are highly centralized and heavily rely on a back-end core network for all decision-making processes. This may lead to inefficiencies in terms of latency, network traffic management, computational processing, and power consumption. In this paper, we advocate the empowerment of front-end IoT devices to support the back-end network in fulfilling end-user application requirements, mainly by means of improved connectivity and efficient network management. A novel conceptual framework is presented for a new generation of IoT devices that will enable multiple new features for both IoT administrators and end users. Exploiting the recent emergence of software-defined architecture, these smart IoT devices will allow fast, reliable, and intelligent management of diverse IoT-based applications. After highlighting relevant shortcomings of the existing IoT architectures, we outline some key design perspectives to enable front-end intelligence while shedding light on promising future research directions.

  2. Neighborhood communication paradigm to increase scalability in large-scale dynamic scientific applications

    KAUST Repository

    Ovcharenko, Aleksandr

    2012-03-01

    This paper introduces a general-purpose communication package built on top of MPI which is aimed at improving inter-processor communications independently of the supercomputer architecture being considered. The package is developed to support parallel applications that rely on computation characterized by a large number of messages of various sizes, often small, that are focused within processor neighborhoods. In some cases, such as solvers having static mesh partitions, the number and size of messages are known a priori. However, in other cases, such as mesh adaptation, the messages evolve and vary in number and size and include the dynamic movement of partition objects. The current package provides a utility for dynamic applications based on two key attributes: (i) explicit consideration of the neighborhood communication pattern to avoid many-to-many calls and also to reduce the number of collective calls to a minimum, and (ii) use of non-blocking MPI functions along with message packing to manage message flow control and reduce the number and time of communication calls. The test application demonstrated is parallel unstructured mesh adaptation. Results on IBM Blue Gene/P and Cray XE6 computers show that the use of neighborhood-based communication control leads to scalable results when executing generally imbalanced mesh adaptation runs. © 2011 Elsevier B.V. All rights reserved.
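
    To make the communication pattern concrete, the following is a minimal sketch, assuming mpi4py is available, of a neighborhood-restricted, non-blocking exchange with per-neighbor message packing. The function name, tag, and ring neighborhood are illustrative stand-ins, not the package's actual API.

        # Hypothetical sketch of neighborhood-limited, non-blocking exchange
        # with message packing, in the spirit of the package described above.
        from mpi4py import MPI

        def exchange_with_neighbors(comm, neighbors, outgoing):
            # Pack the many small messages destined for each neighbor into a
            # single object per rank, so each neighbor costs one send/receive.
            recv_reqs = [comm.irecv(source=r, tag=7) for r in neighbors]
            send_reqs = [comm.isend(outgoing.get(r, []), dest=r, tag=7) for r in neighbors]
            incoming = {r: req.wait() for r, req in zip(neighbors, recv_reqs)}
            MPI.Request.Waitall(send_reqs)
            return incoming

        if __name__ == "__main__":
            comm = MPI.COMM_WORLD
            rank, size = comm.Get_rank(), comm.Get_size()
            neighbors = [(rank - 1) % size, (rank + 1) % size]  # ring stands in for a mesh graph
            data = {n: ["update from rank %d" % rank] for n in neighbors}
            print(rank, exchange_with_neighbors(comm, neighbors, data))

    Posting the receives before the sends and waiting on all requests at once is what keeps the exchange deadlock-free without any collective call.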

  3. Application of the RELAP5 to the analysis of large scale integral experiments

    International Nuclear Information System (INIS)

    D'Auria, F.; Galassi, G. M.

    2000-01-01

    The present paper discusses the application of the code-nodalisation to the analysis of experiments performed in the UPTF and the PANDA facility, available in Germany and in Switzerland, respectively. The UPTF simulates all the internals of the reactor pressure vessel of a Pressurized Water Reactor at 1/1 scale of the geometric dimensions (diameters and lengths). The PANDA simulates the containment system of a Simplified Boiling Water Reactor and the vessel of the reactor. The considered experimental data base includes the occurrence of three-dimensional phenomena that are relevant to the refill phase of a large break Loss of Coolant Accident in a PWR and to the coupling between primary system and containment performance in a SBWR. The application of the code also required adapting and extending the methodology for nodalisation development. The results are satisfactory from the qualitative point of view as far as the performance of the code-nodalisation is concerned and do not show any important code limitation. They also confirm the suitability of the code for applications in nuclear technology. However, this conclusion should be considered preliminary because of the lack of independent proofs. (author)

  4. Front-End Intelligence for Large-Scale Application-Oriented Internet-of-Things

    KAUST Repository

    Bader, Ahmed

    2016-06-14

    The Internet-of-things (IoT) refers to the massive integration of electronic devices, vehicles, buildings, and other objects to collect and exchange data. It is the enabling technology for a plethora of applications touching various aspects of our lives, such as healthcare, wearables, surveillance, home automation, smart manufacturing, and intelligent automotive systems. Existing IoT architectures are highly centralized and rely heavily on a back-end core network for all decision-making processes. This may lead to inefficiencies in terms of latency, network traffic management, computational processing, and power consumption. In this paper, we advocate the empowerment of front-end IoT devices to support the back-end network in fulfilling end-user application requirements, mainly by means of improved connectivity and efficient network management. A novel conceptual framework is presented for a new generation of IoT devices that will enable multiple new features for both IoT administrators and end users. Exploiting the recent emergence of software-defined architecture, these smart IoT devices will allow fast, reliable, and intelligent management of diverse IoT-based applications. After highlighting relevant shortcomings of the existing IoT architectures, we outline some key design perspectives to enable front-end intelligence, while shedding light on promising future research directions.

  5. Thermal energy harvesting for large-scale applications using MWCNT-grafted glass fibers and polycarbonate-MWCNT nanocomposites

    Energy Technology Data Exchange (ETDEWEB)

    Tzounis, L., E-mail: ltzounis@physics.auth.gr [Leibniz-Institut für Polymerforschung Dresden e.V., IPF, Hohe Str. 6, D-01069 Dresden (Germany); Technische Universität Dresden, Helmholtzstraße 10, 01069 Dresden (Germany); Laboratory for Thin Films-Nanosystems and Nanometrolo (Greece); Liebscher, M.; Stamm, M. [Leibniz-Institut für Polymerforschung Dresden e.V., IPF, Hohe Str. 6, D-01069 Dresden, Germany and Technische Universität Dresden, Helmholtzstraße 10, 01069 Dresden (Germany); Mäder, E.; Pötschke, P. [Leibniz-Institut für Polymerforschung Dresden e.V., IPF, Hohe Str. 6, D-01069 Dresden (Germany); Logothetidis, S., E-mail: logot@auth.gr [Laboratory for Thin Films-Nanosystems and Nanometrology (LTFN), Physics Department, Aristotle University of Thessaloniki, GR-54124 Thessaloniki (Greece)

    2015-02-17

    ... materials and PC/MWCNT nanocomposites are ideal candidates for large-scale thermal energy harvesting. However, the thermoelectric values are still too low for commercial applications and in the future could be enhanced as will be discussed in this work.

  6. Thermal energy harvesting for large-scale applications using MWCNT-grafted glass fibers and polycarbonate-MWCNT nanocomposites

    International Nuclear Information System (INIS)

    Tzounis, L.; Liebscher, M.; Stamm, M.; Mäder, E.; Pötschke, P.; Logothetidis, S.

    2015-01-01

    ... and PC/MWCNT nanocomposites are ideal candidates for large-scale thermal energy harvesting. However, the thermoelectric values are still too low for commercial applications and in the future could be enhanced as will be discussed in this work

  7. The Potential and Utilization of Unused Energy Sources for Large-Scale Horticulture Facility Applications under Korean Climatic Conditions

    Directory of Open Access Journals (Sweden)

    In Tak Hyun

    2014-07-01

    As the use of fossil fuel has increased, not only in construction but also in agriculture, due to the drastic industrial development of recent times, the problems of heating costs and global warming are getting worse. The introduction of more reliable and environmentally-friendly alternative energy sources has therefore become urgent, and the same trend is found in large-scale horticulture facilities. In this study, among many alternative energy sources, we investigated the reserves and the potential of various unused energy sources which hold substantial promise but are currently wasted due to limitations in their utilization. In addition, we utilized available unused energy as a heat source for a heat pump in a large-scale horticulture facility and analyzed its feasibility through EnergyPlus simulation modeling. Accordingly, the discharge flow rate from the Fan Coil Unit (FCU) in the horticulture facility, the discharge air temperature, and the return temperature were analyzed. The performance and heat consumption of each heat source were compared with those of conventional boilers. The results showed that the power load of the heat pump decreased, and thus the heat efficiency increased, as the temperature of the heat source was increased. Among the analyzed heat sources, power plant waste heat, which had the highest heat source temperature, consumed the least electric energy and showed the highest efficiency.

  8. Large scale applicability of a Fully Adaptive Non-Intrusive Spectral Projection technique: Sensitivity and uncertainty analysis of a transient

    International Nuclear Information System (INIS)

    Perkó, Zoltán; Lathouwers, Danny; Kloosterman, Jan Leen; Hagen, Tim van der

    2014-01-01

    Highlights: • Grid- and basis-adaptive Polynomial Chaos techniques are presented for S and U analysis. • Dimensionality reduction and incremental polynomial order reduce computational costs. • An unprotected loss of flow transient is investigated in a Gas Cooled Fast Reactor. • S and U analysis is performed with MC and adaptive PC methods, for 42 input parameters. • PC accurately estimates means, variances, PDFs, sensitivities and uncertainties. - Abstract: Since the early years of reactor physics, the most prominent sensitivity and uncertainty (S and U) analysis methods in the nuclear community have been adjoint-based techniques. While these are very effective for pure neutronics problems due to the linearity of the transport equation, they become complicated when coupled non-linear systems are involved. With the continuous increase in computational power, such complicated multi-physics problems are becoming progressively tractable; hence affordable and easily applicable S and U analysis tools also have to be developed in parallel. For reactor physics problems for which adjoint methods are prohibitive, Polynomial Chaos (PC) techniques offer an attractive alternative to traditional random-sampling-based approaches. At TU Delft such PC methods have been studied for a number of years, and this paper presents a large-scale application of our Fully Adaptive Non-Intrusive Spectral Projection (FANISP) algorithm for performing the sensitivity and uncertainty analysis of a Gas Cooled Fast Reactor (GFR) Unprotected Loss Of Flow (ULOF) transient. The transient was simulated using the Cathare 2 code system and a fully detailed model of the GFR2400 reactor design that was investigated in the European FP7 GoFastR project. Several sources of uncertainty were taken into account, amounting to an unusually high number of stochastic input parameters (42), and numerous output quantities were investigated. The results show consistently good performance of the applied adaptive PC ...
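
    As an illustration of the underlying projection step, the following is a toy one-dimensional non-intrusive spectral projection in Python, assuming a standard-normal input and probabilists' Hermite polynomials; the FANISP algorithm itself is adaptive and multi-dimensional, and the model function here is a stand-in for an expensive code run.

        # Toy non-intrusive spectral projection in one stochastic dimension:
        # project a "model" onto probabilists' Hermite polynomials by Gauss
        # quadrature, then read mean and variance off the coefficients.
        from math import factorial, sqrt, pi
        import numpy as np
        from numpy.polynomial.hermite_e import hermegauss, hermeval

        def model(x):
            return np.exp(0.3 * x)  # stand-in for an expensive code run

        order, nquad = 4, 12
        nodes, weights = hermegauss(nquad)      # quadrature for weight exp(-x^2/2)
        weights = weights / sqrt(2.0 * pi)      # normalize to the N(0,1) density

        coeffs = []
        for k in range(order + 1):
            basis_k = np.zeros(order + 1)
            basis_k[k] = 1.0
            inner = np.sum(weights * model(nodes) * hermeval(nodes, basis_k))
            coeffs.append(inner / factorial(k))  # <He_k, He_k>_N(0,1) = k!

        mean = coeffs[0]
        variance = sum(factorial(k) * coeffs[k] ** 2 for k in range(1, order + 1))
        print(mean, variance)  # exact: exp(0.045) and exp(0.09)*(exp(0.09)-1)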

  9. Parallel real-time visualization system for large-scale simulation. Application to WSPEEDI

    International Nuclear Information System (INIS)

    Muramatsu, Kazuhiro; Otani, Takayuki; Kitabata, Hideyuki; Matsumoto, Hideki; Takei, Toshifumi; Doi, Shun

    2000-01-01

    The real-time visualization system, PATRAS (PArallel TRAcking Steering system) has been developed on parallel computing servers. The system performs almost all of the visualization tasks on a parallel computing server, and uses image data compression technique for efficient communication between the server and the client terminal. Therefore, the system realizes high performance concurrent visualization in an internet computing environment. The experience in applying PATRAS to WSPEEDI (Worldwide version of System for Prediction Environmental Emergency Dose Information) is reported. The application of PATRAS to WSPEEDI enables users to understand behaviours of radioactive tracers from different release points easily and quickly. (author)

  10. Prospects and strategy for large scale utility applications of photovoltaic power systems

    International Nuclear Information System (INIS)

    Vigotti, R.; Lysen, E.; Cole, A.

    1996-01-01

    The status and prospects of photovoltaic (PV) power systems are reviewed. The market diffusion strategy for the application of PV systems by utilities is described, and the mission and objectives of the collaboration programme launched among 18 industrialized countries under the framework of the International Energy Agency are highlighted, with particular reference to technology transfer to developing countries. Future sales of PV systems are expected to grow in the short and medium term, mainly in the sector of isolated systems. (R.P.)

  11. Signal transmission techniques for large-scale nuclear fuel reprocessing applications

    International Nuclear Information System (INIS)

    Herndon, J.N.; Bible, D.W.

    1985-01-01

    The RCE is currently developing a prototypic microwave-based signal transmission system for reprocessing cell applications. This system, being developed for use in the Advanced Integrated Maintenance System (AIMS), will operate in the 10-GHz frequency range. Provisions are being made for five real-time video channels, three bidirectional data channels at a one-megabaud data rate each, and two audio channels. The basic utility of the concept has been proven in a laboratory demonstration using gallium arsenide Gunn diode transmitter/receivers with horn antennas. Unidirectional transmission of one real-time video channel over a distance of 200 ft was demonstrated. No evidence of multipath interference was detected, even when the transmission path was surrounded by metallic reflectors. The microwave signal transmission system for the AIMS application is in final design. Fabrication in the ORNL instrument shops will begin in October 1985, and the system should be operational in the Maintenance Systems Test Area (MSTA) at ORNL in the latter half of 1986

  12. The restricted stochastic user equilibrium with threshold model: Large-scale application and parameter testing

    DEFF Research Database (Denmark)

    Rasmussen, Thomas Kjær; Nielsen, Otto Anker; Watling, David P.

    2017-01-01

    ... Equilibrium model (DUE), by combining the strengths of the Boundedly Rational User Equilibrium model and the Restricted Stochastic User Equilibrium model (RSUE). Thereby, the RSUET model reaches an equilibrated solution in which the flow is distributed according to Random Utility Theory among a consistently ... model improves the behavioural realism, especially for high congestion cases. Also, fast and well-behaved convergence to equilibrated solutions among non-universal choice sets is observed across different congestion levels, choice model scale parameters, and algorithm step sizes. Clearly, the results ... highlight that the RSUET outperforms the MNP SUE in terms of convergence, calculation time and behavioural realism. The choice set composition is validated by using 16,618 observed route choices collected by GPS devices in the same network and observing their reproduction within the equilibrated choice sets ...

  13. The application of sensitivity analysis to models of large scale physiological systems

    Science.gov (United States)

    Leonard, J. I.

    1974-01-01

    A survey of the literature of sensitivity analysis as it applies to biological systems is reported, along with a brief development of sensitivity theory. A simple population model and a more complex thermoregulatory model illustrate the investigatory techniques and interpretation of parameter sensitivity analysis. The role of sensitivity analysis in validating and verifying models, in identifying relative parameter influence, and in estimating errors in model behavior due to uncertainty in input data is presented. This analysis is valuable to the simulationist and the experimentalist in allocating resources for data collection. A method for reducing highly complex, nonlinear models to simple linear algebraic models that could be useful for making rapid, first-order calculations of system behavior is presented.
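
    The following is a minimal sketch of one such technique: normalized parameter sensitivities computed by central differences on a logistic population model. The parameter names and values are illustrative, not taken from the surveyed models.

        # Hedged sketch: dimensionless sensitivity S = (p/y) * dy/dp, evaluated
        # along the trajectory of a logistic population model by central
        # differences (all names and values are illustrative).
        import numpy as np

        def population(p, t=np.linspace(0.0, 10.0, 101)):
            r, K, y0 = p  # growth rate, carrying capacity, initial size
            return K * y0 * np.exp(r * t) / (K + y0 * (np.exp(r * t) - 1.0))

        def normalized_sensitivity(f, p, j, rel_step=1e-4):
            p_hi, p_lo = p.copy(), p.copy()
            h = rel_step * p[j]
            p_hi[j] += h
            p_lo[j] -= h
            dydp = (f(p_hi) - f(p_lo)) / (2.0 * h)  # central difference
            return p[j] * dydp / f(p)               # dimensionless sensitivity

        p = np.array([0.8, 1000.0, 10.0])           # r, K, y0
        for j, name in enumerate(["r", "K", "y0"]):
            s = normalized_sensitivity(population, p, j)
            print("max |S_%s| over the trajectory: %.3f" % (name, np.abs(s).max()))

    Ranking parameters by the magnitude of such coefficients is what lets the simulationist decide where better input data would pay off most.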

  14. The large scale use of strippable coatings in preventative, tie-down and decontamination applications

    International Nuclear Information System (INIS)

    Sanders, M.J.; Pengelly, M.G.A.

    1985-05-01

    The use of strippable coatings both to remove radioactive contamination from equipment and to prevent it is discussed. Details of application by brush, roller, and conventional (air) and airless spray are given. The use of strippable coatings to prevent the components of a reusable temporary containment system from becoming contaminated is described, and results are given of simple tests in which the coatings were used to remove plutonium dioxide contamination from a number of different surfaces in a Pressurised Suit Area. It is concluded that strippable coatings are particularly useful in contamination prevention and tie-down roles, but test results indicate that they do not possess overwhelming advantages when used as a decontamination technique. The products used in the work reported here are water based. (author)

  15. Prospects and strategy for large scale utility applications of photovoltaic power systems

    International Nuclear Information System (INIS)

    Cole, A.; Vigotti, R.; Lysen, E.

    1995-01-01

    The paper reviews the status and prospects of photovoltaic power systems and the R and D trends (silicon performance, thin films, balance-of-system components), and describes the market diffusion strategy for the application of PV systems: in the short and medium term, isolated systems for rural electricity supply in IEA member countries and decentralized energy supply (remote users and village power) in developing countries; in the medium and long term, decentralized building integration in urban and rural areas, and power stations for peak power and local grid support. The objectives of the IEA collaboration programme launched among 18 industrialized countries are summarized, with particular reference to technology transfer to developing countries. 4 figs

  16. How Gamification Affects Physical Activity: Large-scale Analysis of Walking Challenges in a Mobile Application.

    Science.gov (United States)

    Shameli, Ali; Althoff, Tim; Saberi, Amin; Leskovec, Jure

    2017-04-01

    Gamification represents an effective way to incentivize user behavior across a number of computing applications. However, despite the fact that physical activity is essential for a healthy lifestyle, surprisingly little is known about how gamification, and in particular competitions, shape human physical activity. Here we study how competitions affect physical activity. We focus on walking challenges in a mobile activity tracking application where multiple users compete over who takes the most steps over a predefined number of days. We synthesize our findings in a series of game and app design implications. In particular, we analyze nearly 2,500 physical activity competitions over a period of one year, capturing more than 800,000 person-days of activity tracking. We observe that during walking competitions, the average user increases physical activity by 23%. Furthermore, there are large increases in activity for both men and women, across all ages and weight statuses, and even for users that were previously fairly inactive. We also find that the composition of participants greatly affects the dynamics of the game. In particular, if highly unequal participants get matched to each other, then competition suffers and the overall effect on physical activity drops significantly. Furthermore, competitions with an equal mix of both men and women are more effective in increasing the level of activity. We leverage these insights to develop a statistical model to predict whether or not a competition will be particularly engaging, with significant accuracy. Our models can serve as a guideline to help design more engaging competitions that lead to the most beneficial behavioral changes.
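
    A minimal sketch of such a predictive model, assuming a logistic regression on synthetic competition-composition features; the authors' exact model and dataset are not reproduced here.

        # Hypothetical sketch: predict whether a walking competition will be
        # "engaging" from composition features, echoing the finding that
        # unequal matchups hurt engagement. Data are synthetic stand-ins.
        import numpy as np
        from sklearn.linear_model import LogisticRegression

        rng = np.random.default_rng(42)
        n = 2500
        inequality = rng.uniform(0.0, 1.0, n)  # spread of baseline step counts
        gender_mix = rng.uniform(0.0, 0.5, n)  # 0 = single-gender, 0.5 = even mix
        logit = 1.0 - 3.0 * inequality + 2.0 * gender_mix
        engaging = (rng.uniform(size=n) < 1.0 / (1.0 + np.exp(-logit))).astype(int)

        X = np.column_stack([inequality, gender_mix])
        clf = LogisticRegression().fit(X, engaging)
        print(clf.coef_)                        # negative for inequality, positive for mix
        print(clf.predict_proba([[0.1, 0.5]]))  # balanced, mixed group: high engagement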

  17. An accurate and efficient method for large-scale SSR genotyping and applications.

    Science.gov (United States)

    Li, Lun; Fang, Zhiwei; Zhou, Junfei; Chen, Hong; Hu, Zhangfeng; Gao, Lifen; Chen, Lihong; Ren, Sheng; Ma, Hongyu; Lu, Long; Zhang, Weixiong; Peng, Hai

    2017-06-02

    Accurate and efficient genotyping of simple sequence repeats (SSRs) constitutes the basis of SSRs as effective genetic markers with various applications. However, the existing methods for SSR genotyping suffer from low sensitivity, low accuracy, low efficiency and high cost. In order to fully exploit the potential of SSRs as genetic markers, we developed a novel method for SSR genotyping, named AmpSeq-SSR, which combines multiplexing polymerase chain reaction (PCR), targeted deep sequencing and comprehensive analysis. AmpSeq-SSR is able to genotype potentially more than a million SSRs at once using current sequencing techniques. In the current study, we simultaneously genotyped 3105 SSRs in eight rice varieties, which were further validated experimentally. The results showed that the accuracies of AmpSeq-SSR were nearly 100% and 94%, with single-base resolution, for homozygous and heterozygous samples, respectively. To demonstrate the power of AmpSeq-SSR, we adopted it in two applications. The first was to construct discriminative fingerprints of the rice varieties using 3105 SSRs, which offer much greater discriminative power than the 48 SSRs commonly used for rice. The second was to map Xa21, a gene that confers persistent resistance to rice bacterial blight. We demonstrated that genome-scale fingerprints of an organism can be efficiently constructed and candidate genes, such as Xa21 in rice, can be accurately and efficiently mapped using an innovative strategy consisting of multiplexing PCR, targeted sequencing and computational analysis. While the work we present focused on rice, AmpSeq-SSR can be readily extended to animals and micro-organisms. © The Author(s) 2017. Published by Oxford University Press on behalf of Nucleic Acids Research.

  18. Development and application of a massively parallel KKR Green function method for large scale systems

    Energy Technology Data Exchange (ETDEWEB)

    Thiess, Alexander R.

    2011-12-19

    In this thesis we present the development of the self-consistent, full-potential Korringa-Kohn-Rostoker (KKR) Green function method KKRnano for calculating the electronic properties, magnetic interactions, and total energy, including all electrons, on the basis of density functional theory (DFT) on high-end massively parallelized high-performance computers, for supercells containing thousands of atoms without sacrifice of accuracy. KKRnano was used for the following two applications. The first application is centered in the field of dilute magnetic semiconductors, where a new promising material combination was identified: gadolinium-doped gallium nitride, which shows ferromagnetic ordering of colossal magnetic moments above room temperature. It quickly turned out that additional extrinsic defects induce the striking properties. However, the question of which kind of extrinsic defects are present in experimental samples is still unresolved. In order to shed light on this open question, we perform extensive studies of the most promising candidates: interstitial nitrogen and oxygen, as well as gallium vacancies. By analyzing the pairwise magnetic coupling among defects, it is shown that nitrogen and oxygen interstitials cannot support thermally stable ferromagnetic order. Gallium vacancies, on the other hand, facilitate an important coupling mechanism. The vacancies are found to induce large magnetic moments on all surrounding nitrogen sites, which then couple ferromagnetically both among themselves and with the gadolinium dopants. Based on a statistical evaluation, it can be concluded that already small concentrations of gallium vacancies can lead to distinct long-range ferromagnetic ordering. Beyond this important finding, we present further indications from which we infer that gallium vacancies likely cause the striking ferromagnetic coupling of colossal magnetic moments in GaN:Gd. The second application deals with the phase-change material germanium ...

  19. Development and application of a massively parallel KKR Green function method for large scale systems

    International Nuclear Information System (INIS)

    Thiess, Alexander R.

    2011-01-01

    In this thesis we present the development of the self-consistent, full-potential Korringa-Kohn-Rostoker (KKR) Green function method KKRnano for calculating the electronic properties, magnetic interactions, and total energy, including all electrons, on the basis of density functional theory (DFT) on high-end massively parallelized high-performance computers, for supercells containing thousands of atoms without sacrifice of accuracy. KKRnano was used for the following two applications. The first application is centered in the field of dilute magnetic semiconductors, where a new promising material combination was identified: gadolinium-doped gallium nitride, which shows ferromagnetic ordering of colossal magnetic moments above room temperature. It quickly turned out that additional extrinsic defects induce the striking properties. However, the question of which kind of extrinsic defects are present in experimental samples is still unresolved. In order to shed light on this open question, we perform extensive studies of the most promising candidates: interstitial nitrogen and oxygen, as well as gallium vacancies. By analyzing the pairwise magnetic coupling among defects, it is shown that nitrogen and oxygen interstitials cannot support thermally stable ferromagnetic order. Gallium vacancies, on the other hand, facilitate an important coupling mechanism. The vacancies are found to induce large magnetic moments on all surrounding nitrogen sites, which then couple ferromagnetically both among themselves and with the gadolinium dopants. Based on a statistical evaluation, it can be concluded that already small concentrations of gallium vacancies can lead to distinct long-range ferromagnetic ordering. Beyond this important finding, we present further indications from which we infer that gallium vacancies likely cause the striking ferromagnetic coupling of colossal magnetic moments in GaN:Gd. The second application deals with the phase-change material germanium ...

  20. Application of EPR retrospective dosimetry for large-scale accidental situation

    International Nuclear Information System (INIS)

    Skvortsov, V.G.; Ivannikov, A.I.; Stepanenko, V.F.; Tsyb, A.F.; Khamidova, L.G.; Kondrashov, A.E.; Tikunov, D.D.

    2000-01-01

    More than 3000 tooth enamel samples, collected from the population of radioactively contaminated territories after the Chernobyl accident, from Chernobyl liquidators, from retired military personnel at high radiation risk, and from the population of control, radiation-free territories, were investigated by the EPR spectroscopy method in order to obtain accumulated individual exposure doses. Results of the EPR spectra measurements are stored in a data bank; enamel samples are also stored, in order to provide the possibility of repeating the measurements in the future. Statistical analysis of the results allowed detection of the contributions to the EPR signal in tooth enamel due to the action of natural background radiation and to the radioactive contamination of the territory. In general, the average doses of external exposure of the population obtained with EPR spectroscopy of tooth enamel are consistent with results based on other methods of direct and retrospective dosimetry. Essential exceedance of individual doses above the average level within the population groups was observed for some persons. This made it possible to identify overexposed individuals, who were included into groups for medical monitoring

  1. Application and further development of ion implantation for very large scale integration. Pt. 2

    International Nuclear Information System (INIS)

    Haberger, K.; Ryssel, H.; Hoffmann, K.

    1982-08-01

    Ion implantation, used as a dopant technology, provides very well-controlled doping but depends on the usual masking techniques. For purposes of pattern generation, it would be desirable to utilize the digital controllability of a finely-focused ion beam. In this report, the suitability of a finely-focused ion beam for direct-writing implantation has been investigated. For this study, an ion accelerator was equipped with a computer-controlled fine-focusing system. Using this system it was possible to implant Van der Pauw test structures, resistors, and bipolar transistors, which were then electrically measured. The smallest line width was approx. 1 μm. A disadvantage is the long implantation time resulting from present ion sources. Another VLSI-relevant area of application for this finely-focused ion-beam-writing system is photoresist exposure, as an alternative to electron-beam lithography, making possible the realization of very small structures without proximity effects and with a significantly higher resist sensitivity. (orig.)

  2. Application of soil venting at a large scale: A data and modeling analysis

    Energy Technology Data Exchange (ETDEWEB)

    Walton, J.C.; Baca, R.G.; Sisson, J.B.; Wood, T.R.

    1990-02-27

    Soil venting will be applied at a demonstration scale to a site at the Idaho National Engineering Laboratory which is contaminated with carbon tetrachloride and other organic vapors. The application of soil venting at the site is unique in several aspects, including scale, geology, and data collection. The contaminated portion of the site has a surface area of over 47,000 square meters (12 acres), and the depth to the water table is approximately 180 meters. Migration of contaminants through the entire depth of the vadose zone is evidenced by measured levels of chlorinated solvents in the underlying aquifer. The geology of the site consists of a series of layered basalt flows interspersed with sedimentary interbeds. The depth of the vadose zone, the nature of the fractured basalt flows, and the degree of contamination all tend to make drilling difficult and expensive. Because of the scale of the site, the extent of contamination, and the expense of drilling, a computer model has been developed to simulate the migration of the chlorinated solvents during plume growth and cleanup. The demonstration soil venting operation has been designed to collect pressure-drop and plume-migration data to assist with calibration of the transport model. The model will then be used to help design a cost-effective system for site cleanup which will minimize the drilling required. This paper discusses mathematical models which have been developed to estimate the growth and eventual cleanup of the site. 12 refs., 4 figs.

  3. Congestion management in power systems. Long-term modeling framework and large-scale application

    Energy Technology Data Exchange (ETDEWEB)

    Bertsch, Joachim; Hagspiel, Simeon; Just, Lisa

    2015-06-15

    In liberalized power systems, generation and transmission services are unbundled but remain tightly interlinked. Congestion management in the transmission network is of crucial importance for the efficiency of these inter-linkages. Different regulatory designs have been suggested, analyzed and followed, such as uniform zonal pricing with redispatch or nodal pricing. However, the literature has focused either on the short-term efficiency of congestion management or on specific issues of investment timing. In contrast, this paper presents a generalized and flexible economic modeling framework based on a decomposed inter-temporal equilibrium model including generation, transmission, and their inter-linkages. Short- and long-term effects of different congestion management designs can hence be analyzed. Specifically, we are able to identify and isolate implicit frictions and sources of inefficiency in the different regulatory designs, and to provide a comparative analysis including a benchmark against a first-best welfare-optimal result. To demonstrate the applicability of our framework, we calibrate and numerically solve our model for a detailed representation of the Central Western European (CWE) region, consisting of 70 nodes and 174 power lines. Analyzing six different congestion management designs until 2030, we show that compared to the first-best benchmark, i.e., nodal pricing, inefficiencies of up to 4.6% arise. Inefficiencies are mainly driven by the approach of determining cross-border capacities as well as the coordination of transmission system operators' activities.

  4. Algorithm and Application of Gcp-Independent Block Adjustment for Super Large-Scale Domestic High Resolution Optical Satellite Imagery

    Science.gov (United States)

    Sun, Y. S.; Zhang, L.; Xu, B.; Zhang, Y.

    2018-04-01

    Accurate positioning of optical satellite imagery without ground control is a precondition for remote sensing applications and for small/medium-scale mapping of large areas abroad or with large volumes of images. In this paper, considering the geometric features of optical satellite imagery, and building on a widely used optimization method for constrained problems called the Alternating Direction Method of Multipliers (ADMM) and on RFM least-squares block adjustment, we propose a GCP-independent block adjustment method for large-scale domestic high-resolution optical satellite imagery - GISIBA (GCP-Independent Satellite Imagery Block Adjustment) - which is easy to parallelize and highly efficient. In this method, virtual "average" control points are built to solve the rank-defect problem and to support qualitative and quantitative analysis in block adjustment without ground control. The test results prove that the horizontal and vertical accuracies of multi-covered and multi-temporal satellite images are better than 10 m and 6 m, respectively. Meanwhile, the mosaicking problem of adjacent areas in large-area DOM production can be solved if public geographic information data are introduced as horizontal and vertical constraints in the block adjustment process. Finally, through experiments using GF-1 and ZY-3 satellite images over several typical test areas, the reliability, accuracy and performance of the developed procedure are presented and studied.
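
    As a sketch of the kind of splitting that ADMM enables, the following solves a block-decomposed least-squares problem by consensus ADMM; the random matrices stand in for the adjustment equations and are not satellite geometry.

        # Consensus ADMM sketch for minimize sum_i ||A_i x - b_i||^2, split
        # into blocks that could be solved in parallel (illustrative only).
        import numpy as np

        def consensus_admm(blocks, rho=1.0, iters=100):
            n = blocks[0][0].shape[1]
            x = [np.zeros(n) for _ in blocks]  # local copies of the parameters
            u = [np.zeros(n) for _ in blocks]  # scaled dual variables
            z = np.zeros(n)                    # consensus (global) parameters
            for _ in range(iters):
                for i, (A, b) in enumerate(blocks):
                    # Local step: regularized least squares for block i.
                    x[i] = np.linalg.solve(A.T @ A + rho * np.eye(n),
                                           A.T @ b + rho * (z - u[i]))
                z = np.mean([xi + ui for xi, ui in zip(x, u)], axis=0)  # consensus step
                u = [ui + xi - z for xi, ui in zip(x, u)]               # dual update
            return z

        rng = np.random.default_rng(0)
        truth = rng.normal(size=5)
        blocks = []
        for _ in range(4):
            A = rng.normal(size=(20, 5))
            blocks.append((A, A @ truth + 0.01 * rng.normal(size=20)))
        print(np.round(consensus_admm(blocks) - truth, 3))  # near zero after convergence

    Because each block update touches only its own equations, the scheme parallelizes naturally, which is the property the paper exploits at super-large scale.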

  5. Large-scale nanofabrication of periodic nanostructures using nanosphere-related techniques for green technology applications (Conference Presentation)

    Science.gov (United States)

    Yen, Chen-Chung; Wu, Jyun-De; Chien, Yi-Hsin; Wang, Chang-Han; Liu, Chi-Ching; Ku, Chen-Ta; Chen, Yen-Jon; Chou, Meng-Cheng; Chang, Yun-Chorng

    2016-09-01

    Nanotechnology has been developed for decades and many interesting optical properties have been demonstrated. However, the major hurdle for the further development of nanotechnology is finding economic ways to fabricate such nanostructures at large scale. Here, we demonstrate how to achieve low-cost fabrication using nanosphere-related techniques, such as Nanosphere Lithography (NSL) and Nanospherical-Lens Lithography (NLL). NSL is a low-cost nano-fabrication technique that has the ability to fabricate nano-triangle arrays covering a very large area. NLL is a very similar technique that uses polystyrene nanospheres to focus the incoming ultraviolet light and expose the underlying photoresist (PR) layer. PR hole arrays form after developing. Metal nanodisk arrays can be fabricated following metal evaporation and lift-off processes. Nanodisk or nano-ellipse arrays with various sizes and aspect ratios are routinely fabricated in our research group. We also demonstrate that we can fabricate more complicated nanostructures, such as nanodisk oligomers, by combining several other key technologies, such as angled exposure and deposition, to modify these methods and obtain various metallic nanostructures. The metallic structures are of high fidelity and large scale. The metallic nanostructures can be transformed into semiconductor nanostructures and be used in several green technology applications.

  6. Model-Data Fusion and Adaptive Sensing for Large Scale Systems: Applications to Atmospheric Release Incidents

    Science.gov (United States)

    Madankan, Reza

    ... and observed data, given a set of kinetic constraints on mobile sensors. A Dynamic Programming method has been utilized to solve the resulting optimal control problem. To complete the loop of the source characterization process, two different estimation techniques, a minimum variance estimation framework and a Bayesian inference method, have been developed to fuse model forecasts with measurement data. Incomplete information regarding the distribution of the noise associated with measurement data is another major challenge in the source characterization of plume dispersion incidents. This frequently happens in the assimilation of atmospheric data obtained from satellite imagery, since satellite imagery data can be polluted with noise depending on weather conditions, clouds, humidity, etc. Unfortunately, there is no accurate procedure to quantify the error in recorded satellite data; hence, using classical data assimilation methods in this situation is not straightforward. In this dissertation, the basic idea of a novel approach has been proposed to tackle these types of real-world problems with more accuracy and robustness. A simple example demonstrating the real-world scenario is presented to validate the developed methodology.
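
    A minimal scalar sketch of minimum-variance fusion of a model forecast with a measurement, assuming the error variances are known; the dissertation's point is precisely how to proceed when they are not.

        # Minimum-variance (Kalman-style) fusion of a forecast and a
        # measurement in the scalar case, with assumed known variances.
        def fuse(forecast, var_f, measurement, var_m):
            gain = var_f / (var_f + var_m)            # weight on the innovation
            estimate = forecast + gain * (measurement - forecast)
            variance = (1.0 - gain) * var_f           # reduced posterior uncertainty
            return estimate, variance

        # Model predicts 120 (ppm) with variance 25; sensor reads 100 with variance 9.
        print(fuse(120.0, 25.0, 100.0, 9.0))          # ~ (105.3, 6.6)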

  7. Multi-Sensing system for outdoor thermal monitoring: Application to large scale civil engineering components

    Science.gov (United States)

    Crinière, Antoine; Dumoulin, Jean; Manceau, Jean-Luc; Perez, Laetitia; Bourquin, Frederic

    2014-05-01

    ... and a backup system. All the components of the system are connected to the IrLaW software through an IP network. The monitoring system has been fully autonomous since August 2013 and provides data at 0. Hz sampling frequency. First results obtained by data post-processing are addressed. Finally, experimental feedback and the main outcomes of several months of measurement in outdoor conditions are discussed. REFERENCES: [1] Proto, M. et al., 2010. Transport infrastructure surveillance and monitoring by electromagnetic sensing: the ISTIMES project. Sensors, 10, 10620-10639, doi:10.3390/s101210620. [2] J. Dumoulin, R. Averty, "Development of an infrared system coupled with a weather station for real time atmospheric corrections using GPU computing: Application to bridge monitoring", in Proc. of the 11th International Conference on Quantitative InfraRed Thermography, Naples, Italy, 2012. [3] J. Dumoulin, A. Crinière, R. Averty, "Detection and thermal characterization of the inner structure of the "Musmeci" bridge deck by infrared thermography monitoring", Journal of Geophysics and Engineering, Volume 10, Number 2, November 2013, IOP Science, doi:10.1088/1742-2132/10/6/064003. [4] I. Catapano, R. Di Napoli, F. Soldovieri, M. Bavusi, A. Loperte and J. Dumoulin, "Structural monitoring via microwave tomography-enhanced GPR: the Montagnole test site", Journal of Geophysics and Engineering, Volume 9, Number 4, August 2012, pp 100-107, IOP Science, doi:10.1088/1742-2132/9/4/S100.

  8. Co-evolution of intelligent socio-technical systems modelling and applications in large scale emergency and transport domains

    CERN Document Server

    2013-01-01

    As the interconnectivity between humans through technical devices is becoming ubiquitous, the next step is already in the making: ambient intelligence, i.e. smart (technical) environments, which will eventually play the same active role in communication as the human players, leading to a co-evolution in all domains where real-time communication is essential. This topical volume, based on the findings of the Socionical European research project, gives equal attention to two highly relevant application domains: transport, specifically traffic dynamics from the viewpoint of socio-technical interaction, and evacuation scenarios for large-scale emergency situations. Care was taken to investigate the limits of scalability as far as possible and to combine modeling based on complex systems science approaches with relevant data analysis.

  9. Third generation participatory design in health informatics--making user participation applicable to large-scale information system projects.

    Science.gov (United States)

    Pilemalm, Sofie; Timpka, Toomas

    2008-04-01

    Participatory Design (PD) methods in the field of health informatics have mainly been applied to the development of small-scale systems with homogeneous user groups in local settings. Meanwhile, health service organizations are becoming increasingly large and complex in character, making it necessary to extend the scope of the systems that are used for managing data, information and knowledge. This study reports participatory action research on the development of a PD framework for large-scale system design. The research was conducted in a public health informatics project aimed at developing a system for 175,000 users. A renewed PD framework was developed in response to six major limitations experienced with the existing methods. The resulting framework preserves the theoretical grounding, but extends the toolbox to suit applications in networked health service organizations. Future research should involve evaluations of the framework in other health service settings where comprehensive HISs are developed.

  10. Computing in Large-Scale Dynamic Systems

    NARCIS (Netherlands)

    Pruteanu, A.S.

    2013-01-01

    Software applications for large-scale systems have always been difficult to develop due to problems caused by the large number of computing devices involved. Above a certain network size (roughly one hundred), necessary services such as code updating, topology discovery and data ...

  11. Large scale electrolysers

    International Nuclear Information System (INIS)

    B Bello; M Junker

    2006-01-01

    Hydrogen production by water electrolysis represents nearly 4% of world hydrogen production. Future development of hydrogen vehicles will require large quantities of hydrogen, and installation of large-scale hydrogen production plants will be needed. In this context, development of low-cost large-scale electrolysers that could use 'clean power' seems necessary. ALPHEA HYDROGEN, a European network and centre of expertise on hydrogen and fuel cells, performed a study for its members in 2005 to evaluate the potential of large-scale electrolysers to produce hydrogen in the future. The different electrolysis technologies were compared, and a state of the art of the electrolysis modules currently available was made. A review of the large-scale electrolysis plants that have been installed in the world was also carried out, and the main projects related to large-scale electrolysis were listed. The economics of large-scale electrolysers are discussed, and the influence of energy prices on the hydrogen production cost by large-scale electrolysis was evaluated. (authors)

  12. Large Scale Solar Heating

    DEFF Research Database (Denmark)

    Heller, Alfred

    2001-01-01

    The main objective of the research was to evaluate large-scale solar heating connected to district heating (CSDHP), to build up a simulation tool and to demonstrate the application of the simulation tool for design studies and on a local energy planning case. The evaluation was mainly carried out ... model is designed and validated on the Marstal case. Applying the Danish Reference Year, a design tool is presented. The simulation tool is used for proposals for application of alternative designs, including high-performance solar collector types (trough solar collectors, vacuum pipe collectors ...). Simulation programs are proposed as a control supporting tool for daily operation and performance prediction of central solar heating plants. Finally, the CSDHP technology is put into perspective with respect to alternatives, and a short discussion on the barriers and breakthrough of the technology is given ...

  13. Dynamics of soil carbon stocks due to large-scale land use changes across the former Soviet Union during the 20th century

    Science.gov (United States)

    Kurganova, Irina; Prishchepov, Alexander V.; Schierhorn, Florian; Lopes de Gerenyu, Valentin; Müller, Daniel; Kuzyakov, Yakov

    2016-04-01

    Land use change (LUC) is a major driver of land-atmosphere carbon (C) fluxes. The largest net C fluxes caused by LUC are attributed to the conversion of native unmanaged ecosystems to croplands and vice versa. Here, we present the changes of soil organic carbon (SOC) stocks in response to large-scale land use changes in the former Soviet Union from 1953 to 2012. Widespread and rapid conversion of native ecosystems to croplands occurred in the course of the Virgin Lands Campaign (VLC) between 1954 and 1963 in the Soviet Union, when more than 45 million hectares (Mha) were ploughed in south-eastern Russia and northern Kazakhstan in order to expand domestic food production. After 1991, the collapse of the Soviet Union triggered the abandonment of around 75 Mha across the post-Soviet states. To assess SOC dynamics, we generated a static cropland mask for 2009 based on three global cropland maps. We used the cropland mask to spatially disaggregate annual sown-area statistics at province level based on the suitability of each plot for crop production, which yielded land use maps for each year from 1954 to 2012 for all post-Soviet states. To estimate the SOC dynamics due to the VLC and post-Soviet cropland abandonment, we used available experimental data, our own field measurements, and soil maps. A bookkeeping approach was applied to assess the total changes in SOC stocks in response to large-scale land use changes in the former Soviet Union. The massive cropland expansion during the VLC resulted in a substantial loss of SOC - 611±47 Mt C and 241±11 Mt C from the upper 0-50 cm soil layer during the first 20 years of cultivation for Russia and Kazakhstan, respectively. These magnitudes are similar to the C losses due to the plowing up of the prairies in the USA in the mid-1930s. The total SOC sequestration due to post-Soviet cropland abandonment was estimated at 72.2±6.0 Mt C per year from 1991 to 2010. This amount of carbon equals about 40% of the current fossil fuel emission for this ...
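
    As a toy illustration of a bookkeeping calculation, the following applies an assumed exponential per-hectare SOC response curve to annual conversion areas; the curve shape and all numbers are illustrative, not the study's factors.

        # Toy bookkeeping sketch: total SOC change = sum over conversion years
        # of (area converted) x (per-hectare cumulative loss since conversion).
        import numpy as np

        def soc_change(areas_mha, per_ha_loss_tc, k=0.15, horizon=20):
            # per_ha_loss_tc: assumed eventual 0-50 cm SOC loss per hectare [t C/ha]
            years = np.arange(horizon)
            curve = per_ha_loss_tc * (1.0 - np.exp(-k * years))  # cumulative loss per ha
            total = 0.0
            for start, area in enumerate(areas_mha):
                elapsed = min(horizon - 1, len(areas_mha) - 1 - start)
                total += area * 1e6 * curve[elapsed]  # Mha -> ha
            return total / 1e6  # t C -> Mt C

        # e.g. 4.5 Mha ploughed per year over ten years, ~15 t C/ha eventual loss
        print(soc_change([4.5] * 10, 15.0))  # total loss in Mt C (illustrative)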

  14. Large Scale Applications Using FBG Sensors: Determination of In-Flight Loads and Shape of a Composite Aircraft Wing

    Directory of Open Access Journals (Sweden)

    Matthew J. Nicolas

    2016-06-01

    Technological advances have enabled the development of a number of optical fiber sensing methods over the last few years. The most prevalent optical technique involves the use of fiber Bragg grating (FBG) sensors. These small, lightweight sensors have many attributes that enable their use for a number of measurement applications. Although much literature is available regarding the use of FBGs for laboratory-level testing, few publications in the public domain exist on their use at the operational level. Therefore, this paper gives an overview of the implementation of FBG sensors for large-scale structures and applications. For demonstration, a case study is presented in which FBGs were used to determine the deflected wing shape and the out-of-plane loads of a 5.5-m carbon-composite wing of an ultralight aerial vehicle. The in-plane strains from the 780 FBG sensors were used to obtain the out-of-plane loads as well as the wing shape at various load levels. The calculated out-of-plane displacements and loads were within 4.2% of the measured data. This study demonstrates a practical method in which direct measurements are used to obtain critical parameters from the high distribution of FBG sensors. This procedure can be used to obtain information for structural health monitoring applications to quantify healthy vs. unhealthy structures.
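
    A hedged sketch of the classical beam-theory step behind such shape reconstruction, in the spirit of strain-based displacement methods used with dense FBG arrays (not necessarily the authors' exact formulation): surface strain over the section half-depth gives curvature, which is integrated twice from a clamped root.

        # Beam-theory shape reconstruction from surface strain (illustrative):
        # curvature kappa(x) = strain(x) / c, deflection by double integration.
        import numpy as np

        def cumtrapz0(y, x):
            # Cumulative trapezoidal integral with a zero initial condition.
            return np.concatenate(([0.0], np.cumsum(0.5 * (y[1:] + y[:-1]) * np.diff(x))))

        def deflection_from_strain(x, strain, c):
            curvature = strain / c           # c: distance from neutral axis [m]
            slope = cumtrapz0(curvature, x)  # w'(x); clamped root: w'(0) = 0
            return cumtrapz0(slope, x)       # w(x);  clamped root: w(0) = 0

        x = np.linspace(0.0, 5.5, 56)        # 5.5 m span, 0.1 m station spacing
        eps = 1e-3 * (1.0 - x / 5.5)         # illustrative strain distribution
        print(deflection_from_strain(x, eps, c=0.05)[-1])  # tip deflection [m]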

  15. Combined FVTD/PSTD Schemes with Enhanced Spectral Accuracy for the Design of Large-Scale EMC Applications

    Directory of Open Access Journals (Sweden)

    N. V. Kantartzis

    2012-10-01

    A generalized conformal time-domain method with adjustable spectral accuracy is introduced in this paper for the consistent analysis of large-scale electromagnetic compatibility problems. The novel 3-D hybrid schemes blend a stencil-optimized finite-volume time-domain and a multimodal Fourier-Chebyshev pseudo-spectral time-domain algorithm that split the overall space into smaller and flexible areas. A key asset is that both techniques are updated independently and interconnected by robust boundary conditions. Also, combining a family of spatial derivative approximators with controllable precision in general curvilinear coordinates, the proposed method launches a conformal field flux formulation to derive electromagnetic quantities in regions with fine details. For advanced grid reliability at dissimilar media interfaces, dispersion-reduced adaptive operators, which assign the proper weights to each spatial increment, are developed. Thus, the resulting discretization yields highly rigorous and computationally affordable simulations, devoid of lattice errors. Numerical results, addressing detailed comparisons of various realistic applications with reference or measurement data, verify our methodology and reveal its significant applicability.

  16. Disclosure control using partially synthetic data for large-scale health surveys, with applications to CanCORS.

    Science.gov (United States)

    Loong, Bronwyn; Zaslavsky, Alan M; He, Yulei; Harrington, David P

    2013-10-30

    Statistical agencies have begun to partially synthesize public-use data for major surveys to protect the confidentiality of respondents' identities and sensitive attributes, by replacing high-disclosure-risk and sensitive variables with multiple imputations. To date, there are few applications of synthetic data techniques to large-scale healthcare survey data. Here, we describe partial synthesis of survey data collected by the Cancer Care Outcomes Research and Surveillance (CanCORS) project, a comprehensive observational study of the experiences, treatments, and outcomes of patients with lung or colorectal cancer in the USA. We review inferential methods for partially synthetic data and discuss the selection of high-disclosure-risk variables for synthesis, the specification of imputation models, and identification disclosure risk assessment. We evaluate data utility by replicating published analyses and comparing results using original and synthetic data, and we discuss practical issues in preserving inferential conclusions. We found that important subgroup relationships must be included in the synthetic data imputation model to preserve the data utility of the observed data for a given analysis procedure. We conclude that synthetic CanCORS data are suited best for preliminary data analysis purposes. These methods address the requirement to share data in clinical research without compromising confidentiality. Copyright © 2013 John Wiley & Sons, Ltd.
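
    A minimal sketch of partial synthesis and the standard combining rule for partially synthetic data (point estimate = mean of the m per-dataset estimates; variance = mean within-imputation variance plus between-imputation variance over m). The data, model, and variable names are illustrative stand-ins, not CanCORS.

        # Partial synthesis sketch: replace one sensitive variable with m
        # imputations drawn from a model fit to the confidential data, then
        # combine the m analyses (illustrative data and model).
        import numpy as np

        rng = np.random.default_rng(1)
        n, m = 500, 10
        age = rng.uniform(30, 80, n)                     # low-risk covariate, released as-is
        income = 20.0 + 0.5 * age + rng.normal(0, 5, n)  # sensitive variable to synthesize

        # Fit a simple linear model income ~ age on the confidential data.
        X = np.column_stack([np.ones(n), age])
        beta, *_ = np.linalg.lstsq(X, income, rcond=None)
        sigma = np.std(income - X @ beta)

        estimates, variances = [], []
        for _ in range(m):
            synthetic_income = X @ beta + rng.normal(0, sigma, n)  # replacement values
            estimates.append(synthetic_income.mean())              # analyst's statistic
            variances.append(synthetic_income.var(ddof=1) / n)     # its sampling variance

        q_bar = np.mean(estimates)
        b = np.var(estimates, ddof=1)   # between-imputation variance
        u_bar = np.mean(variances)      # mean within-imputation variance
        total_var = u_bar + b / m       # partial-synthesis combining rule
        print(q_bar, np.sqrt(total_var), income.mean())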

  17. Large-scale hydrological simulations using the Soil Water Assessment Tool, protocol development, and application in the Danube Basin.

    Science.gov (United States)

    Pagliero, Liliana; Bouraoui, Fayçal; Willems, Patrick; Diels, Jan

    2014-01-01

    The Water Framework Directive of the European Union requires member states to achieve good ecological status of all water bodies. A harmonized pan-European assessment of water resources availability and quality, as affected by various management options, is necessary for a successful implementation of European environmental legislation. In this context, we developed a methodology to predict surface water flow at the pan-European scale using available datasets. Among the hydrological models available, the Soil Water Assessment Tool was selected because its characteristics make it suitable for large-scale applications with limited data requirements. This paper presents the results for the Danube pilot basin. The Danube Basin is one of the largest European watersheds, covering approximately 803,000 km² and portions of 14 countries. The modeling data used included land use and management information, a detailed soil parameters map, and high-resolution climate data. The Danube Basin was divided into 4663 subwatersheds of an average size of 179 km². A modeling protocol is proposed to cope with the problems of hydrological regionalization from gauged to ungauged watersheds and of overparameterization and identifiability, which are usually present during calibration. The protocol involves a cluster analysis for the determination of hydrological regions and multiobjective calibration using a combination of manual and automated calibration. The proposed protocol was successfully implemented, with the modeled discharges capturing well the overall hydrological behavior of the basin. Copyright © by the American Society of Agronomy, Crop Science Society of America, and Soil Science Society of America, Inc.
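
    A hedged sketch of the regionalization step, assuming scikit-learn is available: subwatersheds are clustered on physiographic and climatic descriptors, and ungauged catchments inherit parameters calibrated on gauged members of the same cluster. The feature names and values are illustrative, not the study's dataset.

        # Regionalization sketch: cluster subwatersheds, then transfer
        # calibrated parameters from gauged to ungauged cluster members.
        import numpy as np
        from sklearn.cluster import KMeans
        from sklearn.preprocessing import StandardScaler

        rng = np.random.default_rng(7)
        n = 200
        # columns: mean elevation [m], annual precipitation [mm], forest fraction [-]
        features = np.column_stack([
            rng.uniform(50, 2500, n),
            rng.uniform(400, 1600, n),
            rng.uniform(0.0, 1.0, n),
        ])
        regions = KMeans(n_clusters=5, n_init=10, random_state=0).fit_predict(
            StandardScaler().fit_transform(features))

        gauged = rng.choice(n, size=40, replace=False)  # catchments with discharge data
        calibrated = {i: {"CN2": rng.uniform(-0.2, 0.2)} for i in gauged}  # toy parameter

        def parameters_for(i):
            # Ungauged: average the calibrated parameters of gauged peers in the region.
            peers = [g for g in gauged if regions[g] == regions[i]]
            if not peers:
                return {"CN2": 0.0}  # fall back to the default parameterization
            return {"CN2": float(np.mean([calibrated[g]["CN2"] for g in peers]))}

        print(parameters_for(0))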

  18. Large-scale manufacture and characterization of a lentiviral vector produced for clinical ex vivo gene therapy application.

    Science.gov (United States)

    Merten, Otto-Wilhelm; Charrier, Sabine; Laroudie, Nicolas; Fauchille, Sylvain; Dugué, Céline; Jenny, Christine; Audit, Muriel; Zanta-Boussif, Maria-Antonietta; Chautard, Hélène; Radrizzani, Marina; Vallanti, Giuliana; Naldini, Luigi; Noguiez-Hellin, Patricia; Galy, Anne

    2011-03-01

    From the perspective of a pilot clinical gene therapy trial for Wiskott-Aldrich syndrome (WAS), we implemented a process to produce a lentiviral vector under good manufacturing practices (GMP). The process is based on the transient transfection of 293T cells in Cell Factory stacks, scaled up to harvest 50 liters of viral stock per batch, followed by purification of the vesicular stomatitis virus glycoprotein-pseudotyped particles through several membrane-based and chromatographic steps. The process leads to a 200-fold volume concentration and an approximately 3-log reduction in protein and DNA contaminants. An average yield of 13% of infectious particles was obtained in six full-scale preparations. The final product contained low levels of contaminants such as simian virus 40 large T antigen or E1A sequences originating from producer cells. Titers as high as 2 × 10⁹ infectious particles per milliliter were obtained, generating up to 6 × 10¹¹ infectious particles per batch. The purified WAS vector was biologically active, efficiently expressing the genetic insert in WAS protein-deficient B cell lines and transducing CD34+ cells. The vector introduced 0.3-1 vector copy per cell on average in CD34+ cells when used at the concentration of 10⁸ infectious particles per milliliter, which is comparable to preclinical preparations. There was no evidence of cellular toxicity. These results show the implementation of large-scale GMP production, purification, and control of advanced HIV-1-derived lentiviral technology. Results obtained with the WAS vector provide the initial manufacturing and quality control benchmarking that should be helpful to further development and clinical applications.

  19. Evolution and application of a pseudo-multi-zone model for the prediction of NOx emissions from large-scale diesel engines at various operating conditions

    International Nuclear Information System (INIS)

    Savva, Nicholas S.; Hountalas, Dimitrios T.

    2014-01-01

    Highlights: • Development of a simplified simulation model for NOx formation during combustion. • Application of the proposed model on large-scale two- and four-stroke diesel engines. • Experimental data from stationary and ship main and auxiliary engines were used. • The model captures the trend of NOx as engine power and fuel injection timing vary. • The model is recommended for research and practical use in the maritime and power industry. - Abstract: Emissions regulations for heavy-duty diesel units used in maritime and power generation applications have become very strict in recent years. Hence, the industry is forced to limit specific gaseous and particulate emissions (NOx, SOx, COx, PM and HC) depending on the regulations. Among numerous methods, simulation models are extensively used to support the development of techniques for the control of emitted pollutants. This is very important for large-scale engines due to the extremely high cost of experimental investigation, resulting from the size of the engines and the test equipment involved. Beyond this, simulation models can also be used to support NOx monitoring, since on-board verification techniques are to become mandatory for the marine industry in the near future. Last but not least, simulation models can be used for model-based control applications to support the operation of both in-cylinder and after-treatment techniques. Currently, the major controlled pollutant for both marine and stationary applications is NOx. For this reason, in the present work the authors focus on the development and application of a simplified NOx model, with special emphasis on its ability to predict the effect of operating conditions on NOx for both two- and four-stroke diesel engines. To accomplish this, an existing well-validated simplified NOx model has been modified to enhance its physical background and applied to 16 different large-scale diesel engines utilizing 18 different sets of ...

  20. Water hammer and column separation due to accidental simultaneous closure of control valves in a large scale two-phase flow experimental test rig

    NARCIS (Netherlands)

    Bergant, A.; Westende, van 't J.M.C.; Koppel, T.; Gale, J.; Hou, Q.; Pandula, Z.; Tijsseling, A.S.

    2010-01-01

    A large-scale pipeline test rig at Deltares, Delft, The Netherlands has been used for filling and emptying experiments. Tests have been conducted in a horizontal 250 mm diameter PVC pipe of 258 m length with control valves at the downstream and upstream ends. This paper investigates the accidental
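
    As background for this truncated record (our addition, not from the source): the classical first estimate of the pressure surge from rapid valve closure is the Joukowsky relation, and column separation occurs where the transient drops the local pressure to the liquid's vapor pressure.

    ```latex
    % Joukowsky water-hammer relation (background sketch):
    % rho = liquid density, a = pressure-wave speed, Delta v = flow velocity change
    \Delta p = \rho\, a\, \Delta v
    ```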

  1. Large-scale parallel configuration interaction. II. Two- and four-component double-group general active space implementation with application to BiH

    DEFF Research Database (Denmark)

    Knecht, Stefan; Jensen, Hans Jørgen Aagaard; Fleig, Timo

    2010-01-01

    We present a parallel implementation of a large-scale relativistic double-group configuration interaction (CI) program. It is applicable with a large variety of two- and four-component Hamiltonians. The parallel algorithm is based on a distributed data model in combination with a static load balanci...

  2. Large-scale membrane transfer process: its application to single-crystal-silicon continuous membrane deformable mirror

    International Nuclear Information System (INIS)

    Wu, Tong; Sasaki, Takashi; Hane, Kazuhiro; Akiyama, Masayuki

    2013-01-01

    This paper describes a large-scale membrane transfer process developed for the construction of large-scale membrane devices via the transfer of continuous single-crystal-silicon membranes from one substrate to another. This technique is applied to fabricating a large-stroke deformable mirror. A bimorph spring array is used to generate a large air gap between the mirror membrane and the electrode. A 1.9 mm × 1.9 mm × 2 µm single-crystal-silicon membrane is successfully transferred to the electrode substrate by Au–Si eutectic bonding and the subsequent all-dry release process. This process provides an effective approach for transferring a free-standing large continuous single-crystal-silicon membrane onto a flexible suspension spring array with a large air gap. (paper)

  3. Application of small-incision extracapsular cataract extraction in a large-scale vision recovery action in Shaanxi Province

    Directory of Open Access Journals (Sweden)

    Juan Zhang

    2014-09-01

    AIM: To investigate the characteristics of large-scale cataract operations and the effects and experiences of small-incision extracapsular cataract extraction with intraocular lens (IOL) implantation in a large-scale vision recovery action. METHODS: A total of 4892 cases (4892 eyes) of cataract treated by small-incision non-phacoemulsification cataract extraction from March 2010 to November 2011 in our hospital (Fuming No.1 surgery car of Shaanxi Province) were retrospectively analyzed. Visual acuity, intraoperative and postoperative complications, and the recovery of postoperative inflammation were observed. RESULTS: Visual acuity reached 0.3 or better in 4521 eyes (92.42%) at 1 day after the operation and in 4571 eyes (93.44%) at 3 days after the operation; 4887 eyes received IOL implantation, an implantation rate of 99.90%. All cases had few intraoperative and postoperative complications, and postoperative inflammation resolved quickly. CONCLUSION: Small-incision extracapsular cataract extraction with IOL implantation is simple, effective, economical, safe, and well suited to large-scale vision recovery actions.

  4. Urban Freight Management with Stochastic Time-Dependent Travel Times and Application to Large-Scale Transportation Networks

    Directory of Open Access Journals (Sweden)

    Shichao Sun

    2015-01-01

    This paper addressed the vehicle routing problem (VRP) in large-scale urban transportation networks with stochastic time-dependent (STD) travel times. The subproblem of finding the optimal path connecting any pair of customer nodes in an STD network was solved through a robust approach that does not require the probability distributions of link travel times. Based on that, the proposed STD-VRP model can be converted into a normal time-dependent VRP (TD-VRP), and algorithms for such TD-VRPs can be introduced to obtain the solution. Numerical experiments were conducted to address STD-VRPTW instances of practical size on a real-world urban network, demonstrated here on the road network of Shenzhen, China. The stochastic time-dependent link travel times of the network were calibrated from historical floating car data. A route construction algorithm was applied to solve the STD problem in 4 delivery scenarios efficiently. The computational results showed that the proposed STD-VRPTW model can improve the level of customer service by satisfying the time-window constraint under any circumstances. The improvement can be especially significant for large-scale network delivery tasks, with no further increase in cost or environmental impact.
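
    The record does not detail the authors' robust path algorithm; as a minimal illustration of the time-dependent routing subproblem, the sketch below runs label-setting Dijkstra with time-dependent link travel times (valid for FIFO networks). The graph encoding and travel-time functions are our own illustrative assumptions, standing in for whatever robust cost the paper actually optimizes.

    ```python
    import heapq

    def td_dijkstra(graph, source, target, t0):
        """Earliest-arrival path search with time-dependent link travel times.
        graph[u] is a list of (v, tau) pairs, where tau(t) is the travel time
        of link u->v when it is entered at time t (FIFO assumed)."""
        best = {source: t0}
        pq = [(t0, source)]
        while pq:
            t, u = heapq.heappop(pq)
            if u == target:
                return t                          # earliest arrival at target
            if t > best.get(u, float("inf")):
                continue                          # stale queue entry
            for v, tau in graph.get(u, []):
                arrival = t + tau(t)
                if arrival < best.get(v, float("inf")):
                    best[v] = arrival
                    heapq.heappush(pq, (arrival, v))
        return float("inf")

    # Usage sketch: free-flow time plus a morning-peak surcharge on one link.
    graph = {"depot": [("customer", lambda t: 10 + (5 if 7 <= t <= 9 else 0))],
             "customer": []}
    print(td_dijkstra(graph, "depot", "customer", t0=8.0))   # -> 23.0
    ```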

  5. Large scale reflood test

    International Nuclear Information System (INIS)

    Hirano, Kemmei; Murao, Yoshio

    1980-01-01

    The large-scale reflood test, with a view to ensuring the safety of light water reactors, was started in fiscal 1976 under the special account act for power source development promotion measures, by entrustment from the Science and Technology Agency. Thereafter, to establish the safety of PWRs in loss-of-coolant accidents by joint international efforts, the Japan-West Germany-U.S. research cooperation program was started in April 1980, and the large-scale reflood test is now included in this program. It consists of two tests using a cylindrical core testing apparatus for examining the overall system effect and a plate core testing apparatus for testing individual effects. Each apparatus is composed of mock-ups of the pressure vessel, primary loop, containment vessel and ECCS. The testing method, the test results and the research cooperation program are described. (J.P.N.)

  6. Application of Large-Scale Database-Based Online Modeling to Plant State Long-Term Estimation

    Science.gov (United States)

    Ogawa, Masatoshi; Ogai, Harutoshi

    Recently, attention has been drawn to a new class of local modeling techniques called “Just-In-Time (JIT) modeling”. To apply “JIT modeling” online to a large database, “Large-scale database-based Online Modeling (LOM)” has been proposed. LOM is a technique that makes the retrieval of neighboring data more efficient by using both “stepwise selection” and quantization. In order to predict the long-term state of the plant without using future data of manipulated variables, an Extended Sequential Prediction method of LOM (ESP-LOM) has been proposed. In this paper, the LOM and the ESP-LOM are introduced.
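
    As a hedged sketch of the “Just-In-Time” idea summarized above (retrieve neighboring records from a large database at query time and fit a purely local model on them), assuming NumPy arrays; LOM's stepwise selection and quantization for fast retrieval are deliberately omitted:

    ```python
    import numpy as np

    def jit_predict(X_db, y_db, query, k=50):
        """Just-In-Time local modeling sketch: retrieve the k nearest stored
        samples and fit a local affine model on them only."""
        d = np.linalg.norm(X_db - query, axis=1)            # distance to every record
        idx = np.argsort(d)[:k]                             # k nearest neighbors
        A = np.hstack([X_db[idx], np.ones((len(idx), 1))])  # affine design matrix
        coef, *_ = np.linalg.lstsq(A, y_db[idx], rcond=None)
        return np.append(query, 1.0) @ coef                 # local prediction
    ```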

  7. Conditional sampling technique to test the applicability of the Taylor hypothesis for the large-scale coherent structures

    Science.gov (United States)

    Hussain, A. K. M. F.

    1980-01-01

    Comparisons of the distributions of large scale structures in turbulent flow with distributions based on time dependent signals from stationary probes and the Taylor hypothesis are presented. The study investigated the near field of a 7.62 cm circular air jet at a Re of 32,000, in which coherent structures were organized through small-amplitude controlled excitation and stable vortex pairing in the jet column mode. Hot-wire and X-wire anemometry were employed to establish phase-averaged spatial distributions of longitudinal and lateral velocities, coherent Reynolds stress and vorticity, background turbulent intensities, streamlines and pseudo-stream functions. The Taylor hypothesis was used to calculate spatial distributions of the phase-averaged properties, with results indicating that use of the local time-average velocity or the streamwise velocity produces large distortions.
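
    For reference, the Taylor hypothesis being tested converts a time signal at a fixed probe into spatial structure through a convection velocity U_c:

    ```latex
    % Taylor's (frozen-turbulence) hypothesis: structures convect past the
    % probe essentially unchanged, so spatial gradients follow from time signals
    \frac{\partial}{\partial x} \;\approx\; -\frac{1}{U_c}\,\frac{\partial}{\partial t}
    ```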

  8. Application of cooperative and non-cooperative games in large-scale water quantity and quality management: a case study.

    Science.gov (United States)

    Mahjouri, Najmeh; Ardestani, Mojtaba

    2011-01-01

    In this paper, cooperative and non-cooperative methodologies are developed for a large-scale water allocation problem in Southern Iran. The water shares of the users and their net benefits are determined using optimization models with economic objectives, subject to the physical and environmental constraints of the system. The results of the two methodologies are compared on the basis of total economic benefit, and the role of cooperation in utilizing a shared water resource is demonstrated. In both cases, the water quality in the rivers satisfies the standards. Comparing the results of the two approaches shows the importance of acting cooperatively to achieve maximum revenue in utilizing a surface water resource while the river water quantity and quality issues are addressed.
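
    The record omits the model formulation; a generic form of such an allocation problem, under our own assumed notation (B_i the net-benefit function of user i, x_i its water share, W the available water, q_j the river quality indicators), is:

    ```latex
    \max_{x}\ \sum_{i} B_i(x_i)
    \quad \text{s.t.} \quad \sum_{i} x_i \le W,
    \qquad q_j(x) \ge q_j^{\mathrm{std}} \ \ \forall j
    ```

    In the cooperative case the joint sum is maximized and the benefit reallocated; in the non-cooperative case each user optimizes its own B_i subject to the shared constraints.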

  9. Facile preparation of monodisperse, impurity-free, and antioxidation copper nanoparticles on a large scale for application in conductive ink.

    Science.gov (United States)

    Zhang, Yu; Zhu, Pengli; Li, Gang; Zhao, Tao; Fu, Xianzhu; Sun, Rong; Zhou, Feng; Wong, Ching-ping

    2014-01-08

    Monodisperse copper nanoparticles with high purity and antioxidation properties are synthesized quickly (only 5 min) on a large scale (multigram amounts) by a modified polyol process using slightly soluble Cu(OH)2 as the precursor, L-ascorbic acid as the reductant, and PEG-2000 as the protectant. The resulting copper nanoparticles have a size distribution of 135 ± 30 nm and do not suffer significant oxidation even after being stored for 30 days under ambient conditions. The copper nanoparticles can be well-dispersed in an oil-based ink, which can be silk-screen printed onto flexible substrates and then converted into conductive patterns after heat treatment. An optimal electrical resistivity of 15.8 μΩ cm is achieved, which is only 10 times larger than that of bulk copper. The synthesized copper nanoparticles could be considered as a cheap and effective material for printed electronics.

  10. Expanded Large-Scale Forcing Properties Derived from the Multiscale Data Assimilation System and Its Application to Single-Column Models

    Science.gov (United States)

    Feng, S.; Li, Z.; Liu, Y.; Lin, W.; Toto, T.; Vogelmann, A. M.; Fridlind, A. M.

    2013-12-01

    We present an approach to derive large-scale forcing that is used to drive single-column models (SCMs) and cloud resolving models (CRMs)/large eddy simulations (LES) for evaluating fast physics parameterizations in climate models. The forcing fields are derived by use of a newly developed multi-scale data assimilation (MS-DA) system. This DA system is developed on top of the NCEP Gridpoint Statistical Interpolation (GSI) system and is implemented in the Weather Research and Forecasting (WRF) model at a cloud resolving resolution of 2 km. This approach has been applied to the generation of large-scale forcing for a set of Intensive Operation Periods (IOPs) over the Atmospheric Radiation Measurement (ARM) Climate Research Facility's Southern Great Plains (SGP) site. The dense ARM in-situ observations and high-resolution satellite data effectively constrain the WRF model. The evaluation shows that the derived forcing displays accuracies comparable to the existing continuous forcing product and, overall, a better dynamic consistency with observed cloud and precipitation. One important application of this approach is to derive large-scale hydrometeor forcing and multiscale forcing, which are not provided in the existing continuous forcing product. It is shown that the hydrometeor forcing has an appreciable impact on cloud and precipitation fields in the single-column model simulations. The large-scale forcing exhibits a significant dependency on the domain size, which represents the SCM grid size. Subgrid processes often contribute a significant component to the large-scale forcing, and this contribution is sensitive to the grid size and cloud regime.

  11. Evaluation of the Potential Environmental Impacts from Large-Scale Use and Production of Hydrogen in Energy and Transportation Applications

    Energy Technology Data Exchange (ETDEWEB)

    Wuebbles, D.J.; Dubey, M.K., Edmonds, J.; Layzell, D.; Olsen, S.; Rahn, T.; Rocket, A.; Wang, D.; Jia, W.

    2010-06-01

    The purpose of this project is to systematically identify and examine possible near and long-term ecological and environmental effects from the production of hydrogen from various energy sources based on the DOE hydrogen production strategy and the use of that hydrogen in transportation applications. This project uses state-of-the-art numerical modeling tools of the environment and energy system emissions in combination with relevant new and prior measurements and other analyses to assess the understanding of the potential ecological and environmental impacts from hydrogen market penetration. H2 technology options and market penetration scenarios will be evaluated using energy-technology-economics models as well as atmospheric trace gas projections based on the IPCC SRES scenarios, including the decline in halocarbons due to the Montreal Protocol. Specifically, we investigate the impact of hydrogen releases on the oxidative capacity of the atmosphere, the long-term stability of the ozone layer due to changes in hydrogen emissions, the impact of hydrogen emissions and resulting concentrations on climate, the impact on microbial ecosystems involved in hydrogen uptake, and criteria pollutants emitted from distributed and centralized hydrogen production pathways and their impacts on human health, air quality, ecosystems, and structures under different penetration scenarios.

  12. Non-smooth optimization methods for large-scale problems: applications to mid-term power generation planning

    International Nuclear Information System (INIS)

    Emiel, G.

    2008-01-01

    This manuscript deals with large-scale non-smooth optimization that may typically arise when performing Lagrangian relaxation of difficult problems. This technique is commonly used to tackle mixed-integer linear programming problems or large-scale convex problems. For example, a classical approach when dealing with power generation planning problems in a stochastic environment is to perform a Lagrangian relaxation of the coupling constraints of demand. In this approach, a master problem coordinates local subproblems, specific to each generation unit. The master problem deals with a separable non-smooth dual function which can be maximized with, for example, bundle algorithms. In chapter 2, we introduce basic tools of non-smooth analysis and some recent results regarding incremental or inexact instances of non-smooth algorithms. However, in some situations, the dual problem may still be very hard to solve. For instance, when the number of dualized constraints is very large (exponential in the dimension of the primal problem), explicit dualization may no longer be possible or the update of dual variables may fail. In order to reduce the dual dimension, different heuristics were proposed. They involve a separation procedure to dynamically select a restricted set of constraints to be dualized along the iterations. This relax-and-cut type approach has shown its numerical efficiency in many combinatorial problems. In chapter 3, we show primal-dual convergence of such a strategy when using an adapted subgradient method for the dual step and under minimal assumptions on the separation procedure. Another limit of Lagrangian relaxation may appear when the dual function is separable into highly numerous or complex sub-functions. In such situations, the computational burden of solving all local subproblems may be preponderant in the whole iterative process. A natural strategy would be here to take full advantage of the dual separable structure, performing a dual iteration after having
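
    To make the setup concrete (standard notation, not necessarily the thesis's exact formulation): dualizing coupling constraints Ax = b of min f(x) yields a concave, typically nonsmooth dual function, maximized by bundle or subgradient methods using the subgradient available at each dual iterate:

    ```latex
    \theta(\lambda) \;=\; \min_{x}\ f(x) + \lambda^{\top}(Ax - b),
    \qquad A\,x(\lambda) - b \;\in\; \partial\theta(\lambda)
    ```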

  13. Numerical Analysis of Soil Settlement Prediction and Its Application In Large-Scale Marine Reclamation Artificial Island Project

    Directory of Open Access Journals (Sweden)

    Zhao Jie

    2017-11-01

    In an artificial island construction project based on large-scale marine reclamation, soil settlement is key to the long-term safe operation of the whole field. To analyze the factors driving soil settlement in a marine reclamation project, the SEM method of soil micro-structural analysis was used to test six soil samples, including the representative silt, mucky silty clay, silty clay and clay in the area. The structural characteristics that affect soil settlement were obtained by observing SEM images at different depths. By combining the numerical methods of Terzaghi's one-dimensional and Biot's two-dimensional consolidation theories, one-dimensional and two-dimensional creep models were established, and the numerical results of the two consolidation theories were compared in order to predict the maximum settlement of the soils 100 years after completion. The analysis indicates that the micro-structural characteristics are the essential factor affecting settlement in this area. Based on the one-dimensional and two-dimensional settlement analyses, the settlement laws and trends obtained by the two numerical methods are similar. The analysis in this paper can provide reference and guidance for projects related to marine reclamation.
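
    For reference, Terzaghi's one-dimensional consolidation theory mentioned above reduces to a diffusion equation for the excess pore pressure u(z, t), with c_v the coefficient of consolidation; Biot's theory couples the pore pressure to the two-dimensional deformation field:

    ```latex
    \frac{\partial u}{\partial t} \;=\; c_v\,\frac{\partial^{2} u}{\partial z^{2}}
    ```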

  14. Large-scale one-dimensional BixOyIz nanostructures: synthesis, characterization, and photocatalytic applications

    Science.gov (United States)

    Liu, Chaohong; Zhang, Dun

    2015-03-01

    The performances of BixOyIz photofunctional materials are very sensitive to their composition and microstructures; however, the morphology evolution and crystallization process of one-dimensional BixOyIz nanostructures, the roles of experimental factors, and the related reaction mechanisms remain poorly understood. In this work, large-scale one-dimensional BixOyIz nanostructures were fabricated using a simple inorganic iodine source. By combining the results of X-ray diffraction and scanning electron microscopy, the effects of the volume ratio of water and ethanol, the concentration of NaOH, and the reaction time on the morphologies and crystal phases of BixOyIz were elaborated. On the basis of these characterizations, a possible process for the growth of Bi5O7I nanobelts was proposed. The optical performances of the BixOyIz nanostructures were evaluated by ultraviolet-visible-near infrared diffuse reflectance spectra as well as photocatalytic degradation of an organic dye and corrosive bacteria. The as-prepared Bi5O7I/Bi2O2CO3/BiOI composite showed excellent photocatalytic activity against malachite green under visible light irradiation, which was deduced to be closely related to its heterojunction structures.

  15. Modified Principal Component Analysis for Identifying Key Environmental Indicators and Application to a Large-Scale Tidal Flat Reclamation

    Directory of Open Access Journals (Sweden)

    Kejian Chu

    2018-01-01

    Identification of the key environmental indicators (KEIs) from a large number of environmental variables is important for environmental management in tidal flat reclamation areas. In this study, a modified principal component analysis approach (MPCA) has been developed for determining the KEIs. The MPCA accounts for two important attributes of the environmental variables: pollution status and temporal variation, in addition to the commonly considered numerical divergence attribute. It also incorporates the distance correlation (dCor) to replace Pearson’s correlation to measure the nonlinear interrelationship between the variables. The proposed method was applied to the Tiaozini sand shoal, a large-scale tidal flat reclamation region in China. Five KEIs were identified as dissolved inorganic nitrogen, Cd, petroleum in the water column, Hg, and total organic carbon in the sediment. The identified KEIs were shown to respond well to the biodiversity of phytoplankton. This demonstrated that the identified KEIs adequately represent the environmental condition in the coastal marine system. Therefore, the MPCA is a practicable method for extracting effective indicators that have key roles in the coastal and marine environment.
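
    A minimal sketch of the distance correlation that the MPCA substitutes for Pearson's correlation (univariate samples, following the Székely-Rizzo-Bakirov definition; the function name is ours):

    ```python
    import numpy as np

    def distance_correlation(x, y):
        """Distance correlation of two univariate samples: double-center the
        pairwise-distance matrices, then normalize their covariance."""
        x = np.asarray(x, dtype=float)[:, None]
        y = np.asarray(y, dtype=float)[:, None]
        a = np.abs(x - x.T)                      # pairwise distances in x
        b = np.abs(y - y.T)                      # pairwise distances in y
        A = a - a.mean(axis=0) - a.mean(axis=1)[:, None] + a.mean()
        B = b - b.mean(axis=0) - b.mean(axis=1)[:, None] + b.mean()
        dcov2 = (A * B).mean()                   # squared distance covariance
        dvar_x = (A * A).mean()
        dvar_y = (B * B).mean()
        denom = np.sqrt(dvar_x * dvar_y)
        return np.sqrt(dcov2 / denom) if denom > 0 else 0.0
    ```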

  16. Free energies of binding from large-scale first-principles quantum mechanical calculations: application to ligand hydration energies.

    Science.gov (United States)

    Fox, Stephen J; Pittock, Chris; Tautermann, Christofer S; Fox, Thomas; Christ, Clara; Malcolm, N O J; Essex, Jonathan W; Skylaris, Chris-Kriton

    2013-08-15

    Schemes of increasing sophistication for obtaining free energies of binding have been developed over the years, where configurational sampling is used to include the all-important entropic contributions to the free energies. However, the quality of the results will also depend on the accuracy with which the intermolecular interactions are computed at each molecular configuration. In this context, the energy change associated with the rearrangement of electrons (electronic polarization and charge transfer) upon binding is a very important effect. Classical molecular mechanics force fields do not take this effect into account explicitly, and polarizable force fields and semiempirical quantum or hybrid quantum-classical (QM/MM) calculations are increasingly employed (at higher computational cost) to compute intermolecular interactions in free-energy schemes. In this work, we investigate the use of large-scale quantum mechanical calculations from first-principles as a way of fully taking into account electronic effects in free-energy calculations. We employ a one-step free-energy perturbation (FEP) scheme from a molecular mechanical (MM) potential to a quantum mechanical (QM) potential as a correction to thermodynamic integration calculations within the MM potential. We use this approach to calculate relative free energies of hydration of small aromatic molecules. Our quantum calculations are performed on multiple configurations from classical molecular dynamics simulations. The quantum energy of each configuration is obtained from density functional theory calculations with a near-complete psinc basis set on over 600 atoms using the ONETEP program.
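
    The one-step MM-to-QM free-energy perturbation described above is the Zwanzig exponential-averaging formula, evaluated over configurations sampled in the MM ensemble:

    ```latex
    \Delta A_{\mathrm{MM}\rightarrow\mathrm{QM}}
      \;=\; -\,k_{B}T\,\ln\left\langle
      \exp\!\left[-\bigl(E_{\mathrm{QM}}-E_{\mathrm{MM}}\bigr)/k_{B}T\right]
      \right\rangle_{\mathrm{MM}}
    ```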

  17. Application of soft x-ray laser interferometry to study large-scale-length, high-density plasmas

    International Nuclear Information System (INIS)

    Wan, A.S.; Barbee, T.W., Jr.; Cauble, R.

    1996-01-01

    We have employed a Mach-Zehnder interferometer, using a Ne-like Y x-ray laser at 155 Angstrom as the probe source, to study large-scale-length, high-density colliding plasmas and exploding foils. The measured density profile of counter-streaming high-density colliding plasmas falls in between the profiles calculated using collisionless and fluid approximations with the radiation hydrodynamic code LASNEX. We have also performed simultaneous measurements of the local gain and electron density of a Y x-ray laser amplifier. Measured gains in the amplifier were found to be between 10 and 20 cm⁻¹, similar to predictions and indicating that refraction is the major cause of signal loss in long line-focus lasers. Images showed that high gain was produced in spots with dimensions of ∼ 10 μm, which we believe is caused by intensity variations in the optical drive laser. Measured density variations were smooth on the 10-μm scale, so that temperature variations were likely the cause of the localized gain regions. We are now using the interferometry technique as a mechanism to validate and benchmark our numerical codes used for the design and analysis of high-energy-density physics experiments. 11 refs., 6 figs

  18. Rectangular coordination polymer nanoplates: large-scale, rapid synthesis and their application as a fluorescent sensing platform for DNA detection.

    Directory of Open Access Journals (Sweden)

    Yingwei Zhang

    In this paper, we report on the large-scale, rapid synthesis of uniform rectangular coordination polymer nanoplates (RCPNs) assembled from Cu(II) and 4,4'-bipyridine for the first time. We further demonstrate that such RCPNs can be used as a very effective fluorescent sensing platform for multiple DNA detection with a detection limit as low as 30 pM and a high selectivity down to single-base mismatch. The DNA detection is accomplished by the following two steps: (1) RCPN binds a dye-labeled single-stranded DNA (ssDNA) probe, which brings dye and RCPN into close proximity, leading to fluorescence quenching; (2) specific hybridization of the probe with its target generates a double-stranded DNA (dsDNA), which detaches from the RCPN, leading to fluorescence recovery. It suggests that this sensing system can well discriminate complementary and mismatched DNA sequences. The exact mechanism of fluorescence quenching involved is elucidated experimentally, and its use in a human blood serum system is also demonstrated successfully.

  19. MELAS and Kearns–Sayre overlap syndrome due to the mtDNA m.A3243G mutation and large-scale mtDNA deletions

    Directory of Open Access Journals (Sweden)

    Nian Yu

    2016-09-01

    This paper reported an unusual manifestation in a 19-year-old Chinese male patient, who presented with a complex phenotype of mitochondrial encephalomyopathy, lactic acidosis and stroke-like episodes (MELAS) syndrome and Kearns–Sayre syndrome (KSS). He was admitted to our hospital with the chief complaint of “acute fever, headache and slow reaction for 21 days” and was initially misdiagnosed with “viral encephalitis”. This man, with a significant past medical history of fatigue intolerance, presented with paroxysmal neurobehavioral attacks that had started about 10 years earlier; during this span, 3 or 4 attack clusters were described, during which several attacks occurred over a few days. Further examination found that the hallmark signs in this patient included progressive myoclonus epilepsy, cerebellar ataxia, hearing loss, myopathic weakness, ophthalmoparesis, pigmentary retinopathy and bifascicular heart block (Wolff–Parkinson–White syndrome). By young adulthood the disease progression was characterized by the addition of migraine, vomiting, and stroke-like episodes, symptoms of MELAS expression, which completed the MELAS/KSS overlap syndrome. The m.A3243G mitochondrial DNA mutation and single large-scale mtDNA deletions were found in this patient. This mutation has been reported in MELAS, KSS, myopathy, deafness and mental disorder with cognitive impairment. This is the first description of a MELAS/KSS overlap syndrome in a Chinese patient.

  20. Application of Large-Scale, Multi-Resolution Watershed Modeling Framework Using the Hydrologic and Water Quality System (HAWQS)

    Directory of Open Access Journals (Sweden)

    Haw Yen

    2016-04-01

    In recent years, large-scale watershed modeling has been implemented broadly in the field of water resources planning and management. Complex hydrological, sediment, and nutrient processes can be simulated by sophisticated watershed simulation models for important issues such as water resources allocation, sediment transport, and pollution control. Among commonly adopted models, the Soil and Water Assessment Tool (SWAT) has been demonstrated to provide superior performance with a large amount of referencing databases. However, it is cumbersome to perform tedious initialization steps such as preparing inputs and developing a model with each changing targeted study area. In this study, the Hydrologic and Water Quality System (HAWQS) is introduced to serve as a national-scale Decision Support System (DSS) to conduct challenging watershed modeling tasks. HAWQS is a web-based DSS developed and maintained by Texas A&M University, and supported by the U.S. Environmental Protection Agency. Three different spatial resolutions of Hydrologic Unit Code (HUC8, HUC10, and HUC12) and three temporal scales (daily, monthly, and annual time steps) are available as alternatives for general users. In addition, users can specify preferred values of model parameters instead of using the pre-defined sets. With the aid of HAWQS, users can generate a preliminarily calibrated SWAT project within a few minutes by only providing the ending HUC number of the targeted watershed and the simulation period. In the case study, HAWQS was implemented on the Illinois River Basin, USA, with graphical demonstrations and associated analytical results. Scientists and/or decision-makers can take advantage of the HAWQS framework when investigating relevant topics or policies in the future.

  1. The resource curse: Analysis of the applicability to the large-scale export of electricity from renewable resources

    International Nuclear Information System (INIS)

    Eisgruber, Lasse

    2013-01-01

    The “resource curse” has been analyzed extensively in the context of non-renewable resources such as oil and gas. More recently, commentators have expressed concerns that renewable electricity exports can also have adverse economic impacts on exporting countries. My paper analyzes to what extent the resource curse applies in the case of large-scale renewable electricity exports. I develop a “comprehensive model” that integrates previous works and provides a consolidated view of how non-renewable resource abundance impacts economic growth. Deploying this model, I analyze, through case studies of Laos, Mongolia, and the MENA region, to what extent exporters of renewable electricity run the danger of the resource curse. I find that renewable electricity exports avoid some disadvantages of non-renewable resource exports, including (i) shocks after resource depletion; (ii) macroeconomic fluctuations; and (iii) competition for a fixed amount of resources. Nevertheless, renewable electricity exports bear some of the same risks as conventional resource exports, including (i) crowding-out of the manufacturing sector; (ii) incentives for corruption; and (iii) reduced government accountability. I conclude with recommendations for managing such risks. - Highlights: ► Study analyzes whether the resource curse applies to renewable electricity export. ► I develop a “comprehensive model of the resource curse” and use cases for the analysis. ► Renewable electricity export avoids some disadvantages compared to other resources. ► Renewable electricity bears some of the same risks as conventional resources. ► Study concludes with recommendations for managing such risks

  2. Large scale model testing

    International Nuclear Information System (INIS)

    Brumovsky, M.; Filip, R.; Polachova, H.; Stepanek, S.

    1989-01-01

    Fracture mechanics and fatigue calculations for WWER reactor pressure vessels were checked by large-scale model testing performed using the large testing machine ZZ 8000 (with a maximum load of 80 MN) at the SKODA WORKS. Results are described from tests of the material resistance to non-ductile fracture, covering both base materials and welded joints. The rated specimen thickness was 150 mm, with defects of a depth between 15 and 100 mm. Results are also presented for nozzles of 850 mm inner diameter at a scale of 1:3; static, cyclic, and dynamic tests were performed without and with surface defects (15, 30 and 45 mm deep). During cyclic tests the crack growth rate in the elastic-plastic region was also determined. (author). 6 figs., 2 tabs., 5 refs

  3. Large scale simulations of the mechanical properties of layered transition metal ternary compounds for fossil energy power system applications

    Energy Technology Data Exchange (ETDEWEB)

    Ching, Wai-Yim [Univ. of Missouri, Kansas City, MO (United States)

    2014-12-31

    Advanced materials with applications in extreme conditions such as high temperature, high pressure, and corrosive environments play a critical role in the development of new technologies to significantly improve the performance of different types of power plants. Materials that are currently employed in fossil energy conversion systems are typically the Ni-based alloys and stainless steels that have already reached their ultimate performance limits. Incremental improvements are unlikely to meet the more stringent requirements aimed at increasing efficiency and reducing risks while addressing environmental concerns and keeping costs low. Computational studies can lead the way in the search for novel materials or for significant improvements in existing materials that can meet such requirements. Detailed computational studies with sufficient predictive power can provide an atomistic-level understanding of the key characteristics that lead to desirable properties. This project focuses on the comprehensive study of a new class of materials called MAX phases, or Mn+1AXn (M = a transition metal, A = Al or other group III, IV, and V elements, X = C or N). The MAX phases are layered transition metal carbides or nitrides with a rare combination of metallic and ceramic properties. Due to their unique structural arrangements and special types of bonding, these thermodynamically stable alloys possess some of the most outstanding properties. We used a genomic approach in screening a large number of potential MAX phases and established a database of the structural, mechanical and electronic properties of 665 viable MAX compounds, and investigated the correlations between them. This database is then used as a tool for materials informatics for further exploration of this class of intermetallic compounds.

  4. Application of Satellite Solar-Induced Chlorophyll Fluorescence to Understanding Large-Scale Variations in Vegetation Phenology and Function Over Northern High Latitude Forests

    Science.gov (United States)

    Jeong, Su-Jong; Schimel, David; Frankenberg, Christian; Drewry, Darren T.; Fisher, Joshua B.; Verma, Manish; Berry, Joseph A.; Lee, Jung-Eun; Joiner, Joanna

    2016-01-01

    This study evaluates the large-scale seasonal phenology and physiology of vegetation over northern high latitude forests (40 deg - 55 deg N) during spring and fall by using remote sensing of solar-induced chlorophyll fluorescence (SIF), normalized difference vegetation index (NDVI) and an observation-based estimate of gross primary productivity (GPP) from 2009 to 2011. Based on phenology estimated from GPP, the growing season determined by the SIF time series is shorter than the growing season length determined solely using NDVI. This is mainly due to an extended period of high NDVI values, as compared to SIF, of about 46 days (+/-11 days), indicating a large-scale seasonal decoupling of physiological activity and changes in greenness in the fall. In addition to phenological timing, mean seasonal NDVI and SIF have different responses to temperature changes throughout the growing season. We observed that both NDVI and SIF increased linearly with temperature increases throughout the spring. However, in the fall, although NDVI responded linearly to temperature increases, SIF and GPP did not increase linearly with temperature increases, implying a seasonal hysteresis of SIF and GPP in response to temperature changes across boreal ecosystems throughout their growing season. Seasonal hysteresis of vegetation at large scales is consistent with the known phenomenon that light limits boreal forest ecosystem productivity in the fall. Our results suggest that continuing measurements from satellite remote sensing of both SIF and NDVI can help us to understand the differences between, and the information carried by, seasonal variations in vegetation structure and greenness and in physiology at large scales across the critical boreal regions.

  5. Theory and algorithms for solving large-scale numerical problems. Application to the management of electricity production

    International Nuclear Information System (INIS)

    Chiche, A.

    2012-01-01

    This manuscript deals with large-scale optimization problems, and more specifically with solving the electricity unit commitment problem arising at EDF. First, we focused on the augmented Lagrangian algorithm. The behavior of that algorithm on an infeasible convex quadratic optimization problem is analyzed. It is shown that the algorithm finds a point that satisfies the shifted constraints with the smallest possible shift in the sense of the Euclidean norm and that it minimizes the objective on the corresponding shifted constrained set. The convergence to such a point is realized at a global linear rate, which depends explicitly on the augmentation parameter. This suggests a rule for determining the augmentation parameter to control the speed of convergence of the shifted constraint norm to zero. This rule has the advantage of generating bounded augmentation parameters even when the problem is infeasible. As a by-product, the algorithm computes the smallest translation in the Euclidean norm that makes the constraints feasible. Furthermore, this work provides solution methods for stochastic optimization industrial problems decomposed on a scenario tree, based on the progressive hedging algorithm introduced by [Rockafellar et Wets, 1991]. We also focus on the convergence of that algorithm. On the one hand, we offer a counter-example showing that the algorithm could diverge if its augmentation parameter is iteratively updated. On the other hand, we show how to recover the multipliers associated with the non-dualized constraints defined on the scenario tree from those associated with the corresponding constraints of the scenario subproblems. Their convergence is also analyzed for convex problems. The practical interest of these solution techniques is corroborated by numerical experiments performed on the electric production management problem. We apply the progressive hedging algorithm to a realistic industrial problem. More precisely, we solve the French medium
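
    For context, the augmented Lagrangian studied in the first part has the standard form below for equality constraints c(x) = 0, with augmentation parameter r and the classical multiplier update (our notation):

    ```latex
    L_{r}(x,\lambda) \;=\; f(x) \;+\; \lambda^{\top}c(x) \;+\; \frac{r}{2}\,\lVert c(x)\rVert^{2},
    \qquad \lambda^{+} \;=\; \lambda + r\,c(x)
    ```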

  6. Soil carbon sequestration due to post-Soviet cropland abandonment: estimates from a large-scale soil organic carbon field inventory.

    Science.gov (United States)

    Wertebach, Tim-Martin; Hölzel, Norbert; Kämpf, Immo; Yurtaev, Andrey; Tupitsin, Sergey; Kiehl, Kathrin; Kamp, Johannes; Kleinebecker, Till

    2017-09-01

    The break-up of the Soviet Union in 1991 triggered cropland abandonment on a continental scale, which in turn led to carbon accumulation on abandoned land across Eurasia. Previous studies have estimated carbon accumulation rates across Russia based on large-scale modelling. Studies that assess carbon sequestration on abandoned land based on robust field sampling are rare. We investigated soil organic carbon (SOC) stocks using a randomized sampling design along a climatic gradient from forest steppe to Sub-Taiga in Western Siberia (Tyumen Province). In total, SOC contents were sampled on 470 plots across different soil and land-use types. The effect of land use on changes in SOC stock was evaluated, and carbon sequestration rates were calculated for different age stages of abandoned cropland. While land-use type had an effect on carbon accumulation in the topsoil (0-5 cm), no independent land-use effects were found for deeper SOC stocks. Topsoil carbon stocks of grasslands and forests were significantly higher than those of soils managed for crops and under abandoned cropland. SOC increased significantly with time since abandonment. The average carbon sequestration rate for soils of abandoned cropland was 0.66 Mg C ha⁻¹ yr⁻¹ (1-20 years old, 0-5 cm soil depth), which is at the lower end of published estimates for Russia and Siberia. There was a tendency towards SOC saturation on abandoned land, as sequestration rates were much higher for recently abandoned (1-10 years old, 1.04 Mg C ha⁻¹ yr⁻¹) compared to earlier abandoned crop fields (11-20 years old, 0.26 Mg C ha⁻¹ yr⁻¹). Our study confirms the global significance of abandoned cropland in Russia for carbon sequestration. Our findings also suggest that robust regional surveys based on a large number of samples advance model-based continent-wide SOC prediction. © 2017 John Wiley & Sons Ltd.
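
    As a quick consistency check on the stage-wise rates quoted above (our arithmetic, not from the source), the early and late rates average to the reported overall 1-20 year rate:

    ```python
    # Stage-wise sequestration rates reported in the abstract (Mg C / ha / yr).
    early = 1.04   # fields abandoned 1-10 years
    late = 0.26    # fields abandoned 11-20 years
    # Averaging the two decade-long stages reproduces the overall 1-20 yr rate.
    print((early * 10 + late * 10) / 20)   # -> 0.65, matching the reported 0.66
    ```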

  7. Large scale tracking algorithms

    Energy Technology Data Exchange (ETDEWEB)

    Hansen, Ross L. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Love, Joshua Alan [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Melgaard, David Kennett [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Karelitz, David B. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Pitts, Todd Alan [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Zollweg, Joshua David [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Anderson, Dylan Z. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Nandy, Prabal [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Whitlow, Gary L. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Bender, Daniel A. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Byrne, Raymond Harry [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)

    2015-01-01

    Low signal-to-noise data processing algorithms for improved detection, tracking, discrimination and situational threat assessment are a key research challenge. As sensor technologies progress, the number of pixels will increase significantly. This will result in increased resolution, which could improve object discrimination, but unfortunately will also result in a significant increase in the number of potential targets to track. Many tracking techniques, like multi-hypothesis trackers, suffer from a combinatorial explosion as the number of potential targets increases. As the resolution increases, the phenomenology applied towards detection algorithms also changes. For low resolution sensors, "blob" tracking is the norm. For higher resolution data, additional information may be employed in the detection and classification steps. The most challenging scenarios are those where the targets cannot be fully resolved, yet must be tracked and distinguished from neighboring closely spaced objects. Tracking vehicles in an urban environment is an example of such a challenging scenario. This report evaluates several potential tracking algorithms for large-scale tracking in an urban environment.

  8. Automation of Survey Data Processing, Documentation and Dissemination: An Application to Large-Scale Self-Reported Educational Survey.

    Science.gov (United States)

    Shim, Eunjae; Shim, Minsuk K.; Felner, Robert D.

    Automation of the survey process has proved successful in many industries, yet it is still underused in educational research. This is largely due to the facts (1) that number crunching is usually carried out using software that was developed before information technology existed, and (2) that the educational research is to a great extent trapped…

  9. Hot-spot application of biocontrol agents to replace pesticides in large scale commercial rose farms in Kenya

    DEFF Research Database (Denmark)

    Gacheri, Catherine; Kigen, Thomas; Sigsgaard, Lene

    2015-01-01

    Rose (Rosa hybrida L.) is the most important ornamental crop in Kenya, with huge investments in pest management. We provide the first full-scale, replicated experiment comparing cost and yield of conventional two-spotted spider mite (Tetranychus urticae Koch) control with hot-spot applications of...

  10. Application and comparison of large-scale solution-based DNA capture-enrichment methods on ancient DNA

    DEFF Research Database (Denmark)

    Avila Arcos, Maria del Carmen; Cappellini, Enrico; Romero-Navarro, J. Alberto

    2011-01-01

    The development of second-generation sequencing technologies has greatly benefitted the field of ancient DNA (aDNA). Its application can be further exploited by the use of targeted capture-enrichment methods to overcome restrictions posed by low endogenous and contaminating DNA in ancient samples...

  11. Application of large-scale pre-cast components for the construction of water intake for a nuclear power plant

    International Nuclear Information System (INIS)

    Topolnicki, M.

    1976-01-01

    The problem of constructing the water intake for a 4000 MW nuclear power plant located at the seashore is solved. The advantages of applying large-size pre-cast components are presented. The constructional solutions and proposed technologies are described in detail. (A.S.)

  12. The Large-Scale Synthesis of Vinyl-Functionalized Silicon Quantum Dot and Its Application in Miniemulsion Polymerization

    Directory of Open Access Journals (Sweden)

    Xuan-Dung Mai

    2016-01-01

    Stable luminescence, size-tunability, and biocompatibility encourage the deployment of Cd-free nanoparticles in diverse biological applications. Here we report the one-pot synthesis of blue-emitting and polymerizable silicon quantum dots (Si QDs) from which water-soluble Si QD-embedded polystyrene nanoparticles (SiQD@PS NPs) were prepared using a miniemulsion polymerization approach. The hydrodynamic size of the NPs was controlled by the KOH to oleic acid molar ratio. Studies of the photoluminescence properties of the SiQD@PS NPs in different conditions reveal that they exhibit two-photon luminescence and high stability against pH and UV exposure. These NPs add a new size regime to Si QD-based luminescent markers for bioimaging and therapy applications.

  13. Large-Scale Uncertainty and Error Analysis for Time-dependent Fluid/Structure Interactions in Wind Turbine Applications

    Energy Technology Data Exchange (ETDEWEB)

    Alonso, Juan J. [Stanford University; Iaccarino, Gianluca [Stanford University

    2013-08-25

    solution to the long-time integration problem of spectral chaos approaches; 4. A rigorous methodology to account for aleatory and epistemic uncertainties, to emphasize the most important variables via dimension reduction and dimension-adaptive refinement, and to support fusion with experimental data using Bayesian inference; 5. The application of novel methodologies to time-dependent reliability studies in wind turbine applications including a number of efforts relating to the uncertainty quantification in vertical-axis wind turbine applications. In this report, we summarize all accomplishments in the project (during the time period specified) focusing on advances in UQ algorithms and deployment efforts to the wind turbine application area. Detailed publications in each of these areas have also been completed and are available from the respective conference proceedings and journals as detailed in a later section.

  14. Genome Partitioner: A web tool for multi-level partitioning of large-scale DNA constructs for synthetic biology applications.

    Science.gov (United States)

    Christen, Matthias; Del Medico, Luca; Christen, Heinz; Christen, Beat

    2017-01-01

    Recent advances in lower-cost DNA synthesis techniques have enabled new innovations in the field of synthetic biology. Still, efficient design and higher-order assembly of genome-scale DNA constructs remains a labor-intensive process. Given the complexity, computer assisted design tools that fragment large DNA sequences into fabricable DNA blocks are needed to pave the way towards streamlined assembly of biological systems. Here, we present the Genome Partitioner software implemented as a web-based interface that permits multi-level partitioning of genome-scale DNA designs. Without the need for specialized computing skills, biologists can submit their DNA designs to a fully automated pipeline that generates the optimal retrosynthetic route for higher-order DNA assembly. To test the algorithm, we partitioned a 783 kb Caulobacter crescentus genome design. We validated the partitioning strategy by assembling a 20 kb test segment encompassing a difficult to synthesize DNA sequence. Successful assembly from 1 kb subblocks into the 20 kb segment highlights the effectiveness of the Genome Partitioner for reducing synthesis costs and timelines for higher-order DNA assembly. The Genome Partitioner is broadly applicable to translate DNA designs into ready to order sequences that can be assembled with standardized protocols, thus offering new opportunities to harness the diversity of microbial genomes for synthetic biology applications. The Genome Partitioner web tool can be accessed at https://christenlab.ethz.ch/GenomePartitioner.

  15. Large-scale wind energy application. Transporting wind energy over long distances using an HVDC transmission line, in combination with hydro energy or biomass energy

    International Nuclear Information System (INIS)

    Coelingh, J.P.; Van Wijk, A.J.M.; Betcke, J.W.H.; Geuzendam, C.; Gilijamse, W.; Westra, C.A.; Curvers, A.P.W.M.; Beurskens, H.J.M.

    1995-08-01

    The main objective of the study on the title subject is to assess the long-term prospects for large-scale application of wind energy, in combination with hydro energy in Norway and in combination with biomass energy in Scotland. These countries have high wind resource areas; however, they are located far away from load centres. The development of new transmission technologies such as High Voltage Direct Current (HVDC) transmission lines, in combination with highly suitable places for wind energy in Norway and Scotland, forms the driving force behind this study. The following two cases are considered: (1) a large-scale wind farm (1,000 MW) in Norway from which electricity is transmitted to The Netherlands by using an HVDC transmission line, in combination with hydro energy. Hydro energy already makes a large contribution to the energy supply of Norway. Wind farms can contribute to the electricity production, save hydro-generated electricity and make the export of electricity profitable; and (2) a large-scale wind farm (1,000 MW) in Scotland from which electricity is transmitted to The Netherlands by using an HVDC transmission line, in combination with biomass energy. Scotland has a large potential for biomass production such as energy crops and forestry. Poplars and willows cultivated on set-aside land can be gasified and fed into modern combined-cycle plants to generate electricity. In Scotland the usable potential of wind energy may be limited in the short and medium term by the capacity of the grid. New connections can overcome this constraint and allow wind energy to be treated as a European Union resource rather than as a national resource. Thus, the concept of this study is to look at the possibilities of making a 1,000 MW link from The Netherlands to Norway or to Scotland, in order to supply electricity at competitive costs generated with renewable energy sources. 16 figs., 24 tabs., 80 refs

  16. Forecasting in strategic international marketing. Conception for and application to the construction of large-scale pollution abatement installations

    International Nuclear Information System (INIS)

    Tressin, J.M.

    1992-01-01

    Division of labour continues to progress worldwide, particularly in the European economy, where it is an accompaniment to the steady growth of transnational trade relations. In this situation forecasts attain great importance for companies facing structural decisions. This holds not only for large international concerns but also increasingly for medium-sized companies. The methodological approaches adopted by such companies are primarily aimed at reducing the risk of wrong decisions by gathering information and incorporating forecasts in their planning and decision-making processes. The present article describes various forecasting methods, exemplifying them for the area of commercial-sized building projects. Due to the great complexity of such installations and discontinuities in developments in international markets, quantitative forecasting methods and models are only of limited value here. In contrast, qualitative methods such as those provided by expert systems and contemporary dp tools for data collection and compression have attained great importance for the design of practicable and reliable forecasting systems. The present study thus lays bare the interface between international marketing and corporate planning and decision-making processes. (orig.) [de]

  17. Parameterization of disorder predictors for large-scale applications requiring high specificity by using an extended benchmark dataset

    Directory of Open Access Journals (Sweden)

    Eisenhaber Frank

    2010-02-01

    Background: Algorithms designed to predict protein disorder play an important role in structural and functional genomics, as disordered regions have been reported to participate in important cellular processes. Consequently, several methods with different underlying principles for disorder prediction have been independently developed by various groups. For assessing their usability in automated workflows, we are interested in identifying parameter settings and threshold selections, under which the performance of these predictors becomes directly comparable. Results: First, we derived a new benchmark set that accounts for different flavours of disorder, complemented with a similar amount of order annotation derived for the same protein set. We show that, using the recommended default parameters, the programs tested produce a wide range of predictions at different levels of specificity and sensitivity. We identify settings, in which the different predictors have the same false positive rate. We assess conditions when sets of predictors can be run together to derive consensus or complementary predictions. This is useful in the framework of proteome-wide applications where high specificity is required, such as in our in-house sequence analysis pipeline and the ANNIE webserver. Conclusions: This work identifies parameter settings and thresholds for a selection of disorder predictors to produce comparable results at a desired level of specificity over a newly derived benchmark dataset that accounts equally for ordered and disordered regions of different lengths.
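
    A minimal sketch of the kind of calibration this benchmark enables (our code, not from the paper): given per-residue disorder scores and order/disorder labels, choose each predictor's threshold so that all predictors operate at the same false positive rate:

    ```python
    import numpy as np

    def threshold_for_fpr(scores, labels, target_fpr=0.05):
        """Return the decision threshold at which the false positive rate on
        ordered residues (labels == 0) equals target_fpr: the threshold is
        the (1 - target_fpr) quantile of the negative-class scores."""
        negatives = np.asarray(scores)[np.asarray(labels) == 0]
        return np.quantile(negatives, 1.0 - target_fpr)

    # Residues scoring above the returned threshold are called disordered;
    # applying this per predictor makes their specificities directly comparable.
    ```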

  18. GEnomes Management Application (GEM.app): a new software tool for large-scale collaborative genome analysis.

    Science.gov (United States)

    Gonzalez, Michael A; Lebrigio, Rafael F Acosta; Van Booven, Derek; Ulloa, Rick H; Powell, Eric; Speziani, Fiorella; Tekin, Mustafa; Schüle, Rebecca; Züchner, Stephan

    2013-06-01

    Novel genes are now identified at a rapid pace for many Mendelian disorders, and increasingly, for genetically complex phenotypes. However, new challenges have also become evident: (1) effectively managing larger exome and/or genome datasets, especially for smaller labs; (2) direct hands-on analysis and contextual interpretation of variant data in large genomic datasets; and (3) many small and medium-sized clinical and research-based investigative teams around the world are generating data that, if combined and shared, will significantly increase the opportunities for the entire community to identify new genes. To address these challenges, we have developed GEnomes Management Application (GEM.app), a software tool to annotate, manage, visualize, and analyze large genomic datasets (https://genomics.med.miami.edu/). GEM.app currently contains ∼1,600 whole exomes from 50 different phenotypes studied by 40 principal investigators from 15 different countries. The focus of GEM.app is on user-friendly analysis for nonbioinformaticians to make next-generation sequencing data directly accessible. Yet, GEM.app provides powerful and flexible filter options, including single family filtering, across family/phenotype queries, nested filtering, and evaluation of segregation in families. In addition, the system is fast, obtaining results within 4 sec across ∼1,200 exomes. We believe that this system will further enhance identification of genetic causes of human disease. © 2013 Wiley Periodicals, Inc.

  20. Application of the Regional Water Mass Variations from GRACE Satellite Gravimetry to Large-Scale Water Management in Africa

    Directory of Open Access Journals (Sweden)

    Guillaume Ramillien

    2014-08-01

    Full Text Available Time series of regional 2° × 2° Gravity Recovery and Climate Experiment (GRACE) solutions of surface water mass change have been computed over Africa from 2003 to 2012 with a 10-day resolution by using a new regional approach. These regional maps are used to describe and quantify water mass change. The contribution of African hydrology to actual sea level rise is negative and small in magnitude (i.e., −0.1 mm/y of equivalent sea level (ESL)), mainly explained by the water retained in the Zambezi River basin. Analysis of the regional water mass maps is used to distinguish different zones of important water mass variations, with the exception of the dominant seasonal cycle of the African monsoon in the Sahel and Central Africa. The analysis of the regional solutions reveals the accumulation in the Okavango swamp and South Niger. It confirms the continuous depletion of water in the North Sahara aquifer at a rate of −2.3 km3/y, with a decrease in early 2008. Synergistic use of altimetry-based lake water volume with total water storage (TWS) from GRACE permits continuous monitoring of sub-surface water storage for large lake drainage areas. These different applications demonstrate the potential of the GRACE mission for the management of water resources at the regional scale.
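
    A minimal sketch of how a linear storage trend can be separated from the seasonal cycle in a 10-day GRACE-type series, by co-fitting trend and annual harmonics with least squares (the series below is synthetic and the units arbitrary; this is not the paper's regional inversion):

        import numpy as np

        t = np.arange(0, 10, 10 / 365.25)            # decimal years, 10-day sampling
        rng = np.random.default_rng(1)
        tws = -2.3 * t + 5.0 * np.sin(2 * np.pi * t) + rng.normal(0, 0.5, t.size)

        # Design matrix: constant, trend, annual sine/cosine.
        A = np.column_stack([np.ones_like(t), t,
                             np.sin(2 * np.pi * t), np.cos(2 * np.pi * t)])
        coeffs, *_ = np.linalg.lstsq(A, tws, rcond=None)
        print(f"fitted trend: {coeffs[1]:.2f} per year")   # recovers ~ -2.3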

  1. Large-scale, rapid synthesis and application in surface-enhanced Raman spectroscopy of sub-micrometer polyhedral gold nanocrystals

    International Nuclear Information System (INIS)

    Guo Shaojun; Wang Yuling; Wang Erkang

    2007-01-01

    Macromolecule-protected sub-micrometer polyhedral gold nanocrystals have been facilely prepared by heating an aqueous solution containing poly(N-vinyl-2-pyrrolidone) (PVP) and HAuCl4 without adding other reducing agents. Scanning electron microscopy (SEM), energy-dispersive x-ray spectroscopy (EDX), ultraviolet-visible-near-infrared spectroscopy (UV-vis-NIR), and x-ray diffraction (XRD) were employed to characterize the obtained polyhedral gold nanocrystals. It is found that the 10:1 molar ratio of PVP to gold is a key factor for obtaining quasi-monodisperse polyhedral gold nanocrystals. Furthermore, the application of polyhedral gold nanocrystals in surface-enhanced Raman scattering (SERS) was investigated by using 4-aminothiophenol (4-ATP) as a probe molecule. The results indicated that the sub-micrometer polyhedral gold nanocrystals modified on the ITO substrate exhibited higher SERS activity compared to the traditional gold nanoparticle modified film. The enhancement factor (EF) on polyhedral gold nanocrystals was about six times larger than that obtained on aggregated gold nanoparticles (∼25 nm).
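
    For readers unfamiliar with how such enhancement factors are estimated, a sketch of the standard substrate EF formula, EF = (I_SERS/N_surf)/(I_ref/N_bulk); the intensities and molecule counts below are placeholders, not values from the paper:

        # Hypothetical inputs: SERS signal from N_surf adsorbed probe molecules
        # versus normal Raman signal from N_bulk molecules in the laser volume.
        I_sers, N_surf = 4.2e4, 1.0e6
        I_ref, N_bulk = 1.0e3, 1.0e11
        EF = (I_sers / N_surf) / (I_ref / N_bulk)
        print(f"EF = {EF:.2e}")   # -> 4.20e+06 for these placeholder numbers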

  2. [Measures against Radiation Exposure Due to Large-Scale Nuclear Accident in Distant Place--Radioactive Materials in Nagasaki from Fukushima Daiichi Nuclear Power Plant].

    Science.gov (United States)

    Yuan, Jun; Sera, Koichiro; Takatsuji, Toshihiro

    2015-01-01

    This study investigates the human health effects of radiation exposure due to possible future nuclear accidents in distant places, together with various other findings from analysis of the radioactive materials that contaminated the atmosphere of Nagasaki due to the Fukushima Daiichi Nuclear Power Plant accident. The concentrations of radioactive materials in aerosols in the atmosphere of Nagasaki were measured using a germanium semiconductor detector from March 2011 to March 2013. Internal exposure dose was calculated in accordance with ICRP Publ. 72. Air trajectories were analyzed using the NOAA and METEX web-based systems. (134)Cs and (137)Cs were repeatedly detected. The air trajectory analysis showed that (134)Cs and (137)Cs flew directly from the Fukushima Daiichi Nuclear Power Plant from March to April 2011. However, direct air trajectories were rarely detected after this period, even when (134)Cs and (137)Cs were detected. The activity ratios ((134)Cs/(137)Cs) of almost all the samples, converted to their values in March 2011, were about unity. This strongly suggests that the (134)Cs and (137)Cs detected mainly originated from the Fukushima Daiichi Nuclear Power Plant accident in March 2011. Although the (134)Cs and (137)Cs concentrations per air volume were very low and the human health effects of internal exposure via inhalation are expected to be negligible, the specific activities (concentrations per aerosol mass) were relatively high. It was found that possible future nuclear accidents may cause severe radioactive contamination, which may require radiation exposure control of farm goods at distances of more than 1000 km from the accident site.
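
    The decay-correction step that converts a measured (134)Cs/(137)Cs activity ratio back to its March 2011 value can be sketched as follows, using the standard half-lives of the two isotopes; the measured ratio and elapsed time are illustrative:

        import math

        T_HALF_CS134 = 2.065   # years
        T_HALF_CS137 = 30.17   # years

        def ratio_at_release(measured_ratio, years_since_release):
            """Back-correct a 134Cs/137Cs activity ratio to the release date."""
            decay134 = math.exp(-math.log(2) * years_since_release / T_HALF_CS134)
            decay137 = math.exp(-math.log(2) * years_since_release / T_HALF_CS137)
            return measured_ratio * decay137 / decay134

        # A sample measured 1.5 years after March 2011 with a ratio of 0.63:
        print(round(ratio_at_release(0.63, 1.5), 2))   # ~1.0, i.e. unity at release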

  3. Performance Characteristics of Hybrid MPI/OpenMP Scientific Applications on a Large-Scale Multithreaded BlueGene/Q Supercomputer

    KAUST Repository

    Wu, Xingfu; Taylor, Valerie

    2013-01-01

    In this paper, we investigate the performance characteristics of five hybrid MPI/OpenMP scientific applications (two NAS Parallel Benchmarks Multi-Zone codes, SP-MZ and BT-MZ; an earthquake simulation, PEQdyna; an aerospace application, PMLB; and a 3D particle-in-cell application, GTC) on a large-scale multithreaded Blue Gene/Q supercomputer at Argonne National Laboratory, and quantify the performance gap resulting from using different numbers of threads per node. We use performance tools and MPI profile and trace libraries available on the supercomputer to analyze and compare the performance of these hybrid scientific applications as the number of OpenMP threads per node increases, and find that increasing the number of threads beyond a certain point saturates or worsens the performance of these hybrid applications. For the strong-scaling hybrid scientific applications such as SP-MZ, BT-MZ, PEQdyna and PMLB, using 32 threads per node results in much better application efficiency than using 64 threads per node; as the number of threads per node increases, the FPU (Floating Point Unit) percentage decreases, while the MPI percentage (except for PMLB) and IPC (Instructions Per Cycle) per core (except for BT-MZ) increase. For the weak-scaling hybrid scientific application GTC, the performance trend (relative speedup) is very similar with increasing numbers of threads per node no matter how many nodes (32, 128, 512) are used. © 2013 IEEE.

  4. Performance Characteristics of Hybrid MPI/OpenMP Scientific Applications on a Large-Scale Multithreaded BlueGene/Q Supercomputer

    KAUST Repository

    Wu, Xingfu

    2013-07-01

    In this paper, we investigate the performance characteristics of five hybrid MPI/OpenMP scientific applications (two NAS Parallel Benchmarks Multi-Zone codes, SP-MZ and BT-MZ; an earthquake simulation, PEQdyna; an aerospace application, PMLB; and a 3D particle-in-cell application, GTC) on a large-scale multithreaded Blue Gene/Q supercomputer at Argonne National Laboratory, and quantify the performance gap resulting from using different numbers of threads per node. We use performance tools and MPI profile and trace libraries available on the supercomputer to analyze and compare the performance of these hybrid scientific applications as the number of OpenMP threads per node increases, and find that increasing the number of threads beyond a certain point saturates or worsens the performance of these hybrid applications. For the strong-scaling hybrid scientific applications such as SP-MZ, BT-MZ, PEQdyna and PMLB, using 32 threads per node results in much better application efficiency than using 64 threads per node; as the number of threads per node increases, the FPU (Floating Point Unit) percentage decreases, while the MPI percentage (except for PMLB) and IPC (Instructions Per Cycle) per core (except for BT-MZ) increase. For the weak-scaling hybrid scientific application GTC, the performance trend (relative speedup) is very similar with increasing numbers of threads per node no matter how many nodes (32, 128, 512) are used. © 2013 IEEE.

  5. Conference on Large Scale Optimization

    CERN Document Server

    Hearn, D; Pardalos, P

    1994-01-01

    On February 15-17, 1993, a conference on Large Scale Optimization, hosted by the Center for Applied Optimization, was held at the University of Florida. The conference was supported by the National Science Foundation, the U.S. Army Research Office, and the University of Florida, with endorsements from SIAM, MPS, ORSA and IMACS. Forty-one invited speakers presented papers on mathematical programming and optimal control topics with an emphasis on algorithm development, real world applications and numerical results. Participants from Canada, Japan, Sweden, The Netherlands, Germany, Belgium, Greece, and Denmark gave the meeting an important international component. Attendees also included representatives from IBM, American Airlines, US Air, United Parcel Service, AT&T Bell Labs, Thinking Machines, Army High Performance Computing Research Center, and Argonne National Laboratory. In addition, the NSF sponsored attendance of thirteen graduate students from universities in the United States and abroad.

  6. Large-scale solar purchasing

    International Nuclear Information System (INIS)

    1999-01-01

    The principal objective of the project was to participate in the definition of a new IEA task concerning solar procurement ("the Task") and to assess whether involvement in the task would be in the interest of the UK active solar heating industry. The project also aimed to assess the importance of large scale solar purchasing to UK active solar heating market development and to evaluate the level of interest in large scale solar purchasing amongst potential large scale purchasers (in particular housing associations and housing developers). A further aim of the project was to consider means of stimulating large scale active solar heating purchasing activity within the UK. (author)

  7. Improved L-BFGS diagonal preconditioners for a large-scale 4D-Var inversion system: application to CO2 flux constraints and analysis error calculation

    Science.gov (United States)

    Bousserez, Nicolas; Henze, Daven; Bowman, Kevin; Liu, Junjie; Jones, Dylan; Keller, Martin; Deng, Feng

    2013-04-01

    This work presents improved analysis error estimates for 4D-Var systems. From operational NWP models to top-down constraints on trace gas emissions, many of today's data assimilation and inversion systems in atmospheric science rely on variational approaches. This success is due to both the mathematical clarity of these formulations and the availability of computationally efficient minimization algorithms. However, unlike Kalman Filter-based algorithms, these methods do not provide an estimate of the analysis or forecast error covariance matrices, these error statistics being propagated only implicitly by the system. From both a practical (cycling assimilation) and scientific perspective, assessing uncertainties in the solution of the variational problem is critical. For large-scale linear systems, deterministic or randomization approaches can be considered based on the equivalence between the inverse Hessian of the cost function and the covariance matrix of analysis error. For perfectly quadratic systems, like incremental 4D-Var, Lanczos/Conjugate-Gradient algorithms have proven to be most efficient in generating low-rank approximations of the Hessian matrix during the minimization. For weakly non-linear systems though, the Limited-memory Broyden-Fletcher-Goldfarb-Shanno (L-BFGS), a quasi-Newton descent algorithm, is usually considered the best method for the minimization. Suitable for large-scale optimization, this method allows one to generate an approximation to the inverse Hessian using the latest m vector/gradient pairs generated during the minimization, m depending upon the available core memory. At each iteration, an initial low-rank approximation to the inverse Hessian has to be provided, which is called preconditioning. The ability of the preconditioner to retain useful information from previous iterations largely determines the efficiency of the algorithm. Here we assess the performance of different preconditioners to estimate the inverse Hessian of a
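
    The L-BFGS machinery the abstract refers to can be sketched compactly: the two-loop recursion applies the low-rank inverse-Hessian approximation to a gradient, with a diagonal initial matrix H0 acting as the preconditioner. A minimal, self-contained Python version, illustrative rather than the authors' implementation:

        import numpy as np

        def lbfgs_direction(grad, s_list, y_list, h0_diag):
            """Apply the L-BFGS inverse-Hessian approximation to grad.
            s_list/y_list hold the stored step / gradient-difference pairs;
            h0_diag is the diagonal initial approximation (the preconditioner)."""
            q = grad.copy()
            history = []
            for s, y in zip(reversed(s_list), reversed(y_list)):   # newest first
                rho = 1.0 / y.dot(s)
                alpha = rho * s.dot(q)
                history.append((alpha, rho, s, y))
                q -= alpha * y
            r = h0_diag * q                                        # preconditioning step
            for alpha, rho, s, y in reversed(history):             # oldest first
                beta = rho * y.dot(r)
                r += (alpha - beta) * s
            return -r                                              # descent direction

        # One stored pair on a 3-D problem, purely to show the call:
        g = np.array([1.0, -2.0, 0.5])
        s_hist = [np.array([0.1, 0.0, 0.0])]
        y_hist = [np.array([0.2, 0.0, 0.1])]
        print(lbfgs_direction(g, s_hist, y_hist, h0_diag=np.ones(3)))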

  8. Applications of Neutron Scattering in the Chemical Industry: Proton Dynamics of Highly Dispersed Materials, Characterization of Fuel Cell Catalysts, and Catalysts from Large-Scale Chemical Processes

    Science.gov (United States)

    Albers, Peter W.; Parker, Stewart F.

    The attractiveness of neutron scattering techniques for the detailed characterization of materials of high degrees of dispersity and structural complexity as encountered in the chemical industry is discussed. Neutron scattering picks up where other analytical methods leave off because of the physico-chemical properties of finely divided products and materials whose absorption behavior toward electromagnetic radiation and electrical conductivity causes serious problems. This is demonstrated by presenting typical applications from large-scale production technology and industrial catalysis. These include the determination of the proton-related surface chemistry of advanced materials that are used as reinforcing fillers in the manufacture of tires, where interrelations between surface chemistry, rheological properties, improved safety, and significant reduction of fuel consumption are the focus of recent developments. Neutron scattering allows surface science studies of the dissociative adsorption of hydrogen on nanodispersed, supported precious metal particles of fuel cell catalysts under in situ loading at realistic gas pressures of about 1 bar. Insight into the occupation of catalytically relevant surface sites provides valuable information about the catalyst in the working state and supplies essential scientific input for tailoring better catalysts by technologists. The impact of deactivation phenomena on industrial catalysts by coke deposition, chemical transformation of carbonaceous deposits, and other processes in catalytic hydrogenation processes that result in significant shortening of the time of useful operation in large-scale plants can often be traced back in detail to surface or bulk properties of catalysts or materials of catalytic relevance. A better understanding of avoidable or unavoidable aspects of catalyst deactivation phenomena under certain in-process conditions and the development of effective means for reducing deactivation leads to more energy

  9. Development of large-scale manufacturing of adipose-derived stromal cells for clinical applications using bioreactors and human platelet lysate.

    Science.gov (United States)

    Haack-Sørensen, Mandana; Juhl, Morten; Follin, Bjarke; Harary Søndergaard, Rebekka; Kirchhoff, Maria; Kastrup, Jens; Ekblond, Annette

    2018-04-17

    In vitro expanded adipose-derived stromal cells (ASCs) are a useful resource for tissue regeneration. Translation of small-scale autologous cell production into a large-scale, allogeneic production process for clinical applications necessitates well-chosen raw materials and a cell culture platform. We compare the use of clinical-grade human platelet lysate (hPL) and fetal bovine serum (FBS) as growth supplements for ASC expansion in the automated, closed hollow-fibre quantum cell expansion system (bioreactor). Stromal vascular fractions were isolated from human subcutaneous abdominal fat. On average, 95 × 10⁶ cells were suspended in 10% FBS or 5% hPL medium and loaded into a bioreactor coated with cryoprecipitate. ASCs (P0) were harvested, and 30 × 10⁶ ASCs were reloaded for continued expansion (P1). Feeding rate and time of harvest were guided by metabolic monitoring. Viability, sterility, purity, differentiation capacity, and genomic stability of ASCs at P1 were determined. Cultivation of SVF in hPL medium for nine days on average yielded 546 × 10⁶ ASCs, compared to 111 × 10⁶ ASCs after 17 days in FBS medium. ASC P1 yields averaged 605 × 10⁶ ASCs (PD [population doublings]: 4.65) after six days in hPL medium, compared to 119 × 10⁶ ASCs (PD: 2.45) after 21 days in FBS medium. ASCs fulfilled ISCT criteria and demonstrated genomic stability and sterility. The use of hPL as a growth supplement for ASC expansion in the quantum cell expansion system provides a more efficient expansion process than FBS, while maintaining cell quality appropriate for clinical use. The described process is an obvious choice for manufacturing of large-scale allogeneic ASC products.
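
    The population-doubling figures can be checked with the standard formula PD = log2(harvested/seeded); a small sketch using the cell counts from the abstract (the reported PDs are slightly higher than this naive calculation, presumably because the study corrects for the fraction of seeded cells that actually attach and proliferate):

        import math

        def population_doublings(seeded, harvested):
            """PD = log2(cells harvested / cells seeded)."""
            return math.log2(harvested / seeded)

        # 30e6 ASCs reloaded at P1; 605e6 (hPL) vs 119e6 (FBS) harvested.
        print(round(population_doublings(30e6, 605e6), 2))   # ~4.33 (reported: 4.65)
        print(round(population_doublings(30e6, 119e6), 2))   # ~1.99 (reported: 2.45)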

  10. Large-scale data analytics

    CERN Document Server

    Gkoulalas-Divanis, Aris

    2014-01-01

    Provides cutting-edge research in large-scale data analytics from diverse scientific areas. Surveys varied subject areas and reports on individual results of research in the field. Shares many tips and insights into large-scale data analytics from authors and editors with long-term experience and specialization in the field.

  11. Engineering management of large scale systems

    Science.gov (United States)

    Sanders, Serita; Gill, Tepper L.; Paul, Arthur S.

    1989-01-01

    The organization of high technology and engineering problem solving has given rise to an emerging concept. Reasoning principles for integrating traditional engineering problem solving with system theory, management sciences, behavioral decision theory, and planning and design approaches can be incorporated into a methodological approach to solving problems with a long range perspective. Long range planning has a great potential to improve productivity by using a systematic and organized approach. Thus, efficiency and cost effectiveness are the driving forces in promoting the organization of engineering problems. Aspects of systems engineering that provide an understanding of management of large scale systems are broadly covered here. Due to the focus and application of the research, other significant factors (e.g., human behavior, decision making, etc.) are not emphasized but are considered.

  12. Economic analysis of a new class of vanadium redox-flow battery for medium- and large-scale energy storage in commercial applications with renewable energy

    International Nuclear Information System (INIS)

    Li, Ming-Jia; Zhao, Wei; Chen, Xi; Tao, Wen-Quan

    2017-01-01

    Highlights: • A new class of vanadium redox-flow battery (VRB) is developed. • The new class of VRB is more economical, with a simple process that is easy to scale up. • There are three levels of cell stacks and electrolytes with different qualities. • An economic analysis of the VRB system for renewable energy bases is carried out. • Related policies and suggestions based on the results are provided. - Abstract: Interest in the implementation of the vanadium redox-flow battery (VRB) for energy storage is growing; it is widely applicable to large-scale renewable energy (e.g. wind energy and solar photo-voltaics), to developing distributed generation, and to lowering imbalance and increasing the usage of electricity. However, a comprehensive economic analysis of the VRB for energy storage across various commercial applications has been lacking, yet it is fundamental for implementation of the VRB in commercial electricity markets. In this study, based on a new class of the VRB developed by our team, a comprehensive economic analysis of the VRB for large-scale energy storage is carried out. The results illustrate the economics of VRB applications for three typical energy systems: (1) the VRB storage system replacing the conventional lead-acid battery as the uninterrupted power supply (UPS) battery for office buildings and hospitals; (2) application of the vanadium battery in household distributed photo-voltaic power generation systems; (3) wind power and solar power stations equipped with VRB storage systems. The economic perspectives and cost-benefit analysis of the VRB storage systems may underpin optimisation for maximum profitability. Two findings are drawn. First, with a fixed capacity power or fixed discharging time, a greater profit ratio is generated from a longer discharging time or a larger capacity power. Second, when the profit ratio, discharging time and capacity power are all variables, it is necessary to find the best optimisation

  13. Towards a Scalable and Adaptive Application Support Platform for Large-Scale Distributed E-Sciences in High-Performance Network Environments

    Energy Technology Data Exchange (ETDEWEB)

    Wu, Chase Qishi [New Jersey Inst. of Technology, Newark, NJ (United States); Univ. of Memphis, TN (United States); Zhu, Michelle Mengxia [Southern Illinois Univ., Carbondale, IL (United States)

    2016-06-06

    The advent of large-scale collaborative scientific applications has demonstrated the potential for broad scientific communities to pool globally distributed resources to produce unprecedented data acquisition, movement, and analysis. System resources including supercomputers, data repositories, computing facilities, network infrastructures, storage systems, and display devices have been increasingly deployed at national laboratories and academic institutes. These resources are typically shared by large communities of users over the Internet or dedicated networks and hence exhibit an inherent dynamic nature in their availability, accessibility, capacity, and stability. Scientific applications using either experimental facilities or computation-based simulations with various physical, chemical, climatic, and biological models feature diverse scientific workflows, as simple as linear pipelines or as complex as directed acyclic graphs, which must be executed and supported over wide-area networks with massively distributed resources. Application users oftentimes need to manually configure their computing tasks over networks in an ad hoc manner, significantly limiting the productivity of scientists and constraining the utilization of resources. The success of these large-scale distributed applications requires a highly adaptive and massively scalable workflow platform that provides automated and optimized computing and networking services. This project is to design and develop a generic Scientific Workflow Automation and Management Platform (SWAMP), which contains a web-based user interface specially tailored for a target application, a set of user libraries, and several easy-to-use computing and networking toolkits for application scientists to conveniently assemble, execute, monitor, and control complex computing workflows in heterogeneous high-performance network environments. SWAMP will enable the automation and management of the entire process of scientific

  14. Reviving large-scale projects

    International Nuclear Information System (INIS)

    Desiront, A.

    2003-01-01

    For the past decade, most large-scale hydro development projects in northern Quebec have been put on hold due to land disputes with First Nations. Hydroelectric projects have recently been revived following an agreement signed with Aboriginal communities in the province who recognized the need to find new sources of revenue for future generations. Many Cree are working on the project to harness the waters of the Eastmain River located in the middle of their territory. The work involves building an 890 foot long dam, 30 dikes enclosing a 603 square-km reservoir, a spillway, and a power house with 3 generating units with a total capacity of 480 MW of power for start-up in 2007. The project will require the use of 2,400 workers in total. The Cree Construction and Development Company is working on relations between Quebec's 14,000 Crees and the James Bay Energy Corporation, the subsidiary of Hydro-Quebec which is developing the project. Approximately 10 per cent of the $735-million project has been designated for the environmental component. Inspectors ensure that the project complies fully with environmental protection guidelines. Total development costs for Eastmain-1 are in the order of $2 billion of which $735 million will cover work on site and the remainder will cover generating units, transportation and financial charges. Under the treaty known as the Peace of the Braves, signed in February 2002, the Quebec government and Hydro-Quebec will pay the Cree $70 million annually for 50 years for the right to exploit hydro, mining and forest resources within their territory. The project comes at a time when electricity export volumes to the New England states are down due to growth in Quebec's domestic demand. Hydropower is a renewable and non-polluting source of energy that is one of the most acceptable forms of energy where the Kyoto Protocol is concerned. It was emphasized that large-scale hydro-electric projects are needed to provide sufficient energy to meet both

  15. From Large-Scale Synthesis to Lighting Device Applications of Ternary I-III-VI Semiconductor Nanocrystals: Inspiring Greener Material Emitters.

    Science.gov (United States)

    Chen, Bingkun; Pradhan, Narayan; Zhong, Haizheng

    2018-01-18

    Quantum dots with fabulous size-dependent and color-tunable emissions have remained one of the most exciting inventions in nanomaterials for the last three decades. Even though a large number of such dot nanocrystals have been developed, CdSe has remained the unbeaten and most trusted lighting nanocrystal. Beyond these, the ternary I-III-VI family of nanocrystals has emerged as the most widely accepted greener material, with efficient emissions tunable in the visible as well as NIR spectral windows. This brings a high possibility of their implementation as lighting materials acceptable to the community and also to the environment. Keeping these in mind, in this Perspective, the latest developments of ternary I-III-VI nanocrystals, from their large-scale synthesis to device applications, are presented. Incorporating ZnS, tuning the composition, mixing with other nanocrystals, and doping with Mn ions, light-emitting devices of single color as well as devices generating white light emissions are also discussed. In addition, the future prospects of these materials in lighting applications are proposed.

  16. Vacuum isostatic micro/macro molding of PTFE materials for laser beam shaping in environmental applications: large scale UV laser water purification

    Science.gov (United States)

    Lizotte, Todd; Ohar, Orest

    2009-08-01

    Accessibility to fresh clean water has determined the location and survival of civilizations throughout the ages [1]. The tangible economic value of water is demonstrated by industry's need for water in fields such as semiconductor, food and pharmaceutical manufacturing. Economic stability for all sectors of industry depends on access to reliable volumes of good quality water. A nation's economy is seriously affected by water shortages through drought or mismanagement, and water resources must therefore be managed both for the public interest and the economic future. For over 50 years, ultraviolet water purification has been the mainstay technology for water treatment, killing potential microbiological agents in settings ranging from water for leisure activities such as swimming pools to large-scale waste water treatment facilities, where the UV light photo-oxidizes various pollutants and contaminants. Well tailored to the task, UV provides a cost-effective way to reduce the use of chemicals in sanitization and anti-biological applications. Predominantly based on low-pressure Hg UV discharge lamps, such systems are plagued with lifetime issues (~1 year of normal operation); the last ten years have shown that the technology continues to advance, and larger-scale systems are turning to more advanced lamp designs and evaluating solid-state UV light sources and more powerful laser sources. One of the issues facing the treatment of water with UV lasers is an appropriate means of delivering laser light efficiently over larger volumes or cross sections of water. This paper examines the potential advantages of laser beam shaping components made by isostatically micro-molding microstructured PTFE materials for integration into large-scale water purification and sterilization systems, for both lamp and laser sources. Applying a unique patented fabrication method, engineers can form micro- and macro-scale diffractive, holographic and faceted reflective structures

  17. Large-scale grid management

    International Nuclear Information System (INIS)

    Langdal, Bjoern Inge; Eggen, Arnt Ove

    2003-01-01

    The network companies in the Norwegian electricity industry now have to establish large-scale network management, a concept essentially characterized by (1) broader focus (Broad Band, Multi Utility, ...) and (2) bigger units with large networks and more customers. Research done by SINTEF Energy Research shows so far that the approaches within large-scale network management may be structured according to three main challenges: centralization, decentralization and outsourcing. The article is part of a planned series

  18. Large-scale pool fires

    Directory of Open Access Journals (Sweden)

    Steinhaus Thomas

    2007-01-01

    Full Text Available A review of research into the burning behavior of large pool fires and fuel spill fires is presented. The features which distinguish such fires from smaller pool fires are mainly associated with the fire dynamics at low source Froude numbers and the radiative interaction with the fire source. In hydrocarbon fires, higher soot levels at increased diameters result in radiation blockage effects around the perimeter of large fire plumes; this yields lower emissive powers and a drastic reduction in the radiative loss fraction; whilst there are simplifying factors with these phenomena, arising from the fact that soot yield can saturate, there are other complications deriving from the intermittency of the behavior, with luminous regions of efficient combustion appearing randomly on the outer surface of the fire according to the turbulent fluctuations in the fire plume. Knowledge of the fluid flow instabilities, which lead to the formation of large eddies, is also key to understanding the behavior of large-scale fires. Here modeling tools can be effectively exploited in order to investigate the fluid flow phenomena, including RANS- and LES-based computational fluid dynamics codes. The latter are well-suited to representation of the turbulent motions, but a number of challenges remain with their practical application. Massively-parallel computational resources are likely to be necessary in order to be able to adequately address the complex coupled phenomena to the level of detail that is necessary.

  19. Large-scale field application of RNAi technology reducing Israeli acute paralysis virus disease in honey bees (Apis mellifera, Hymenoptera: Apidae).

    Directory of Open Access Journals (Sweden)

    Wayne Hunter

    Full Text Available The importance of honey bees to the world economy far surpasses their contribution in terms of honey production; they are responsible for up to 30% of the world's food production through pollination of crops. Since fall 2006, honey bees in the U.S. have faced a serious population decline, due in part to a phenomenon called Colony Collapse Disorder (CCD), which is a disease syndrome that is likely caused by several factors. Data from an initial study in which investigators compared pathogens in honey bees affected by CCD suggested a putative role for Israeli Acute Paralysis Virus, IAPV. This is a single-stranded RNA virus with no DNA stage, placed taxonomically within the family Dicistroviridae. Although subsequent studies have failed to find IAPV in all CCD-diagnosed colonies, IAPV has been shown to cause honey bee mortality. RNA interference technology (RNAi) has been used successfully to silence endogenous insect (including honey bee) genes, both by injection and feeding. Moreover, RNAi was shown to prevent bees from succumbing to infection from IAPV under laboratory conditions. In the current study, IAPV-specific homologous dsRNA was used in the field, under natural beekeeping conditions, in order to prevent mortality and improve the overall health of bees infected with IAPV. This controlled study included a total of 160 honey bee hives in two discrete climates, seasons and geographical locations (Florida and Pennsylvania). To our knowledge, this is the first successful large-scale real-world use of RNAi for disease control.

  20. Large-scale fabrication of superhydrophobic polyurethane/nano-Al2O3 coatings by suspension flame spraying for anti-corrosion applications

    Science.gov (United States)

    Chen, Xiuyong; Yuan, Jianhui; Huang, Jing; Ren, Kun; Liu, Yi; Lu, Shaoyang; Li, Hua

    2014-08-01

    This study aims to further enhance the anti-corrosion performance of Al coatings by constructing superhydrophobic surfaces. The Al coatings were initially arc-sprayed onto steel substrates, followed by deposition of polyurethane (PU)/nano-Al2O3 composites by a suspension flame spraying process. Large-scale corrosion-resistant superhydrophobic PU/nano-Al2O3-Al coatings were successfully fabricated. The coatings showed tunable superhydrophilicity/superhydrophobicity, achieved by changing the concentration of PU in the starting suspension. The layer containing 2.0 wt.% PU displayed excellent hydrophobicity, with a contact angle of ∼151° and a sliding angle of ∼6.5° for water droplets. The constructed superhydrophobic coatings showed markedly improved anti-corrosion performance as assessed by electrochemical corrosion testing carried out in 3.5 wt.% NaCl solution. The PU/nano-Al2O3-Al coatings with superhydrophobicity and competitive anti-corrosion performance could potentially be used as protective layers for marine infrastructures. This study presents a promising approach for fabricating superhydrophobic coatings for corrosion-resistant applications.

  1. Coupling a basin erosion and river sediment transport model into a large scale hydrological model: an application in the Amazon basin

    Science.gov (United States)

    Buarque, D. C.; Collischonn, W.; Paiva, R. C. D.

    2012-04-01

    This study presents the first application and preliminary results of the large-scale hydrodynamic/hydrological model MGB-IPH with a new module to predict the spatial distribution of basin erosion and river sediment transport at a daily time step. The MGB-IPH is a large-scale, distributed and process-based hydrological model that uses a catchment-based discretization and the Hydrological Response Units (HRU) approach. It uses physically based equations to simulate the hydrological processes, such as the Penman-Monteith model for evapotranspiration, and uses the Muskingum-Cunge approach and a full 1D hydrodynamic model for river routing, including backwater effects and seasonal flooding. The sediment module of the MGB-IPH model is divided into two components: 1) prediction of erosion over the basin and sediment yield to the river network; 2) sediment transport along the river channels. Both MGB-IPH and the sediment module use GIS tools to display relevant maps and to extract parameters from the SRTM DEM (a 15" resolution was adopted). Using the catchment discretization, the sediment module applies the Modified Universal Soil Loss Equation to predict soil loss from each HRU, considering three sediment classes defined according to the soil texture: sand, silt and clay. The effects of topography on soil erosion are estimated by a two-dimensional slope length (LS) factor using the contributing area approach and a local slope steepness (S) factor, both estimated for each DEM pixel using GIS algorithms. The amount of sediment released to the catchment river reach each day is calculated using a linear reservoir. Once the sediments reach the river, they are transported along the river channel using an advection equation for silt and clay and a sediment continuity equation for sand. A sediment balance based on the Yang sediment transport capacity, which allows computing the amount of erosion and deposition along the rivers, is performed for sand particles as bed load, whilst no
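
    The two building blocks of the erosion component can be sketched in a few lines: a MUSLE-type event soil loss estimate followed by a linear-reservoir release of stored sediment to the reach. The coefficients and inputs below are placeholders, not calibrated Amazon values, and the MUSLE form shown is the common SWAT-style variant:

        import math

        def musle_soil_loss(runoff_mm, peak_flow_m3s, area_ha, K, C, P, LS):
            """Modified USLE: event sediment yield (t), driven by runoff
            energy rather than rainfall erosivity."""
            return 11.8 * (runoff_mm * peak_flow_m3s * area_ha) ** 0.56 * K * C * P * LS

        def linear_reservoir_release(storage_t, k_days):
            """Fraction of stored sediment delivered to the river reach in one day."""
            return storage_t * (1.0 - math.exp(-1.0 / k_days))

        yield_t = musle_soil_loss(runoff_mm=25.0, peak_flow_m3s=3.0, area_ha=1200.0,
                                  K=0.03, C=0.2, P=1.0, LS=1.4)
        print(round(yield_t, 1), "t of soil loss enters the storage reservoir")
        print(round(linear_reservoir_release(yield_t, k_days=5.0), 1),
              "t reach the river on day 1")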

  2. Dissecting the large-scale galactic conformity

    Science.gov (United States)

    Seo, Seongu

    2018-01-01

    Galactic conformity is an observed phenomenon whereby galaxies located in the same region have similar properties, such as star formation rate, color, gas fraction, and so on. The conformity was first observed among galaxies within the same halos (“one-halo conformity”). The one-halo conformity can be readily explained by mutual interactions among galaxies within a halo. Recent observations, however, further witnessed a puzzling connection among galaxies with no direct interaction. In particular, galaxies located within a sphere of ~5 Mpc radius tend to show similarities, even though the galaxies do not share common halos with each other ("two-halo conformity" or “large-scale conformity”). Using a cosmological hydrodynamic simulation, Illustris, we investigate the physical origin of the two-halo conformity and put forward two scenarios. First, back-splash galaxies are likely responsible for the large-scale conformity. They have evolved into red galaxies due to ram-pressure stripping in a given galaxy cluster and happen to reside now within a ~5 Mpc sphere. Second, galaxies in the strong tidal field induced by large-scale structure also seem to give rise to the large-scale conformity. The strong tides suppress star formation in the galaxies. We discuss the importance of the large-scale conformity in the context of galaxy evolution.

  3. Managing large-scale models: DBS

    International Nuclear Information System (INIS)

    1981-05-01

    A set of fundamental management tools for developing and operating a large scale model and data base system is presented. Based on experience in operating and developing a large scale computerized system, the only reasonable way to gain strong management control of such a system is to implement appropriate controls and procedures. Chapter I discusses the purpose of the book. Chapter II classifies a broad range of generic management problems into three groups: documentation, operations, and maintenance. First, system problems are identified, then solutions for gaining management control are discussed. Chapters III, IV, and V present practical methods for dealing with these problems. These methods were developed for managing SEAS but have general application for large scale models and data bases

  4. Large scale structure and baryogenesis

    International Nuclear Information System (INIS)

    Kirilova, D.P.; Chizhov, M.V.

    2001-08-01

    We discuss a possible connection between large scale structure formation and baryogenesis in the universe. An updated review of the observational indications for the presence of a very large scale of 120h⁻¹ Mpc in the distribution of the visible matter of the universe is provided. The possibility of generating a periodic distribution with the characteristic scale 120h⁻¹ Mpc through a mechanism producing quasi-periodic baryon density perturbations during the inflationary stage is discussed. The evolution of the baryon charge density distribution is explored in the framework of a low temperature boson condensate baryogenesis scenario. Both the observed very large scale of the visible matter distribution in the universe and the observed baryon asymmetry value could naturally appear as a result of the evolution of a complex scalar field condensate formed at the inflationary stage. Moreover, for some model parameters a natural separation of matter superclusters from antimatter ones can be achieved. (author)

  5. Applying Taguchi design and large-scale strategy for mycosynthesis of nano-silver from endophytic Trichoderma harzianum SYA.F4 and its application against phytopathogens

    Science.gov (United States)

    El-Moslamy, Shahira H.; Elkady, Marwa F.; Rezk, Ahmed H.; Abdel-Fattah, Yasser R.

    2017-03-01

    Development of a reliable and low-cost process for large-scale eco-friendly biogenic synthesis of metallic nanoparticles is an important step towards industrial applications of bionanotechnology. In the present study, the mycosynthesis of spherical nano-Ag (12.7 ± 0.8 nm) from the extracellular filtrate of the local endophytic T. harzianum SYA.F4 strain, which contains a mixture of bioactive metabolites (alkaloids, flavonoids, tannins, phenols, nitrate reductase (320 nmol/hr/ml), carbohydrate (25 μg/μl) and total protein (2.5 g/l)), is reported. Industrial mycosynthesis of nano-Ag with different characteristics can be induced depending on the fungal cultivation and physical conditions. Taguchi design was applied to improve the physicochemical conditions for nano-Ag production, and the optimum conditions, which increased its mass yield to three times that obtained under basal conditions, were as follows: AgNO3 (0.01 M), diluted reductant (10 v/v, pH 5), incubated at 30 °C and 200 rpm for 24 hr. Kinetic conversion rates in submerged batch cultivation in a 7 L stirred tank bioreactor using a semi-defined cultivation medium were as follows: the maximum biomass production (Xmax) and maximum nano-Ag mass yield (Pmax) were 60.5 g/l and 78.4 g/l, respectively. The nano-Ag concentration forming the largest inhibition zones was 100 μg/ml, observed against A. alternata (43 mm), followed by Helminthosporium sp. (35 mm), Botrytis sp. (32 mm) and P. arenaria (28 mm).
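
    The Taguchi analysis behind such an optimisation can be sketched with a small orthogonal array and "larger-is-better" signal-to-noise ratios; the factor names follow the abstract, but the array assignment and replicate yields are invented for illustration:

        import numpy as np

        L4 = [(0, 0, 0), (0, 1, 1), (1, 0, 1), (1, 1, 0)]   # L4(2^3): 3 factors, 2 levels
        factors = ["AgNO3 conc.", "reductant dilution", "temperature"]
        yields = [np.array([18.0, 20.0]),    # replicate nano-Ag yields per run (g/l)
                  np.array([30.0, 33.0]),
                  np.array([24.0, 22.0]),
                  np.array([55.0, 60.0])]

        def sn_larger_is_better(y):
            """Taguchi S/N ratio for responses that should be maximised."""
            return -10.0 * np.log10(np.mean(1.0 / y ** 2))

        sn = np.array([sn_larger_is_better(y) for y in yields])
        for f, name in enumerate(factors):
            means = [sn[[run for run, lv in enumerate(L4) if lv[f] == level]].mean()
                     for level in (0, 1)]
            best = int(np.argmax(means))
            print(f"{name}: best level {best} (mean S/N {means[best]:.1f} dB)")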

  6. Fluid-structure interaction simulation of floating structures interacting with complex, large-scale ocean waves and atmospheric turbulence with application to floating offshore wind turbines

    Science.gov (United States)

    Calderer, Antoni; Guo, Xin; Shen, Lian; Sotiropoulos, Fotis

    2018-02-01

    We develop a numerical method for simulating coupled interactions of complex floating structures with large-scale ocean waves and atmospheric turbulence. We employ an efficient large-scale model to develop offshore wind and wave environmental conditions, which are then incorporated into a high resolution two-phase flow solver with fluid-structure interaction (FSI). The large-scale wind-wave interaction model is based on a two-fluid dynamically-coupled approach that employs a high-order spectral method for simulating the water motion and a viscous solver with undulatory boundaries for the air motion. The two-phase flow FSI solver is based on the level set method and is capable of simulating the coupled dynamic interaction of arbitrarily complex bodies with airflow and waves. The large-scale wave field solver is coupled with the near-field FSI solver with a one-way coupling approach by feeding into the latter waves via a pressure-forcing method combined with the level set method. We validate the model for both simple wave trains and three-dimensional directional waves and compare the results with experimental and theoretical solutions. Finally, we demonstrate the capabilities of the new computational framework by carrying out large-eddy simulation of a floating offshore wind turbine interacting with realistic ocean wind and waves.

  7. Large-scale solar heat

    Energy Technology Data Exchange (ETDEWEB)

    Tolonen, J.; Konttinen, P.; Lund, P. [Helsinki Univ. of Technology, Otaniemi (Finland). Dept. of Engineering Physics and Mathematics

    1998-12-31

    In this project a large domestic solar heating system was built and a solar district heating system was modelled and simulated. The objectives were to improve the performance and reduce the costs of a large-scale solar heating system. As a result of the project, the benefit/cost ratio can be increased by 40% through dimensioning and optimising the system at the design stage. (orig.)

  8. Report on a workshop on the application of thermoluminescence dosimetry to large scale individual monitoring, Ispra, 11-13 September 1985

    International Nuclear Information System (INIS)

    Barthe, J.R.; Christensen, P.; Driscoll, C.M.H.; Marshall, T.O.; Harvey, J.R.; Julius, H.W.; Marshall, M.; Oberhoffer, M.

    1987-01-01

    The workshop was held for the benefit of those actually involved in the operation of large scale automatic thermoluminescence dosimetry systems. It was organised around three overall themes subdivided into 13 subject headings: External constraints: User requirements, Quantities and Units, Legal requirements, Testing, Intercomparisons; Dosimetry systems: Materials, Dosemeter design, Reading systems, Annealing procedures, Rogue readings; Management: Distribution and organisation, Reporting and record keeping, Financial aspects. (author)

  9. Application of GRA method, dynamic analysis and fuzzy set theory in evaluation and selection of emergency treatment technology for large scale phenol spill incidents

    Science.gov (United States)

    Zhao, Jingjing; Yu, Lean; Li, Lian

    2017-05-01

    Selecting an appropriate technology in an emergency response is a very important issue, with various kinds of chemical contingency spills frequently taking place. Due to the complexity, fuzziness and uncertainties of chemical contingency spills, the GRA (grey relational analysis) method and dynamic analysis, combined with fuzzy set theory, are applied to the selection and evaluation of emergency treatment technology. Finally, an emergency phenol spill incident that occurred on a highway is provided to illustrate the applicability and feasibility of the proposed methods.
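
    The core grey relational analysis (GRA) computation the paper builds on can be sketched as follows: normalise the decision matrix, compute grey relational coefficients against the ideal alternative, and rank by grey relational grade. The scoring matrix for candidate treatment technologies is invented:

        import numpy as np

        scores = np.array([[0.8, 0.6, 0.9],      # rows: candidate technologies
                           [0.5, 0.9, 0.7],      # cols: benefit-type criteria
                           [0.9, 0.7, 0.6]])

        norm = (scores - scores.min(0)) / (scores.max(0) - scores.min(0))
        delta = np.abs(norm.max(0) - norm)       # distance to the ideal alternative
        rho = 0.5                                # distinguishing coefficient
        coeff = (delta.min() + rho * delta.max()) / (delta + rho * delta.max())
        grades = coeff.mean(axis=1)              # equal criterion weights assumed
        print("ranking (best first):", np.argsort(-grades))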

  10. Japanese large-scale interferometers

    CERN Document Server

    Kuroda, K; Miyoki, S; Ishizuka, H; Taylor, C T; Yamamoto, K; Miyakawa, O; Fujimoto, M K; Kawamura, S; Takahashi, R; Yamazaki, T; Arai, K; Tatsumi, D; Ueda, A; Fukushima, M; Sato, S; Shintomi, T; Yamamoto, A; Suzuki, T; Saitô, Y; Haruyama, T; Sato, N; Higashi, Y; Uchiyama, T; Tomaru, T; Tsubono, K; Ando, M; Takamori, A; Numata, K; Ueda, K I; Yoneda, H; Nakagawa, K; Musha, M; Mio, N; Moriwaki, S; Somiya, K; Araya, A; Kanda, N; Telada, S; Sasaki, M; Tagoshi, H; Nakamura, T; Tanaka, T; Ohara, K

    2002-01-01

    The objective of the TAMA 300 interferometer was to develop advanced technologies for kilometre scale interferometers and to observe gravitational wave events in nearby galaxies. It was designed as a power-recycled Fabry-Perot-Michelson interferometer and was intended as a step towards a final interferometer in Japan. The present successful status of TAMA is presented. TAMA forms a basis for LCGT (large-scale cryogenic gravitational wave telescope), a 3 km scale cryogenic interferometer to be built in the Kamioka mine in Japan, implementing cryogenic mirror techniques. The plan of LCGT is schematically described along with its associated R and D.

  11. The application of two-step linear temperature program to thermal analysis for monitoring the lipid induction of Nostoc sp. KNUA003 in large scale cultivation.

    Science.gov (United States)

    Kang, Bongmun; Yoon, Ho-Sung

    2015-02-01

    Recently, microalgae have been considered as a renewable energy source for fuel production because their production is nonseasonal and may take place on nonarable land. Despite all of these advantages, microalgal oil production is significantly affected by environmental factors. Furthermore, large variability remains an important problem in the measurement of algal productivity and compositional analysis, especially of the total lipid content. Thus, there is considerable interest in the accurate determination of total lipid content during the biotechnological process. For these reasons, various high-throughput technologies have been suggested for accurate measurement of the total lipids contained in microorganisms, especially oleaginous microalgae. In addition, more advanced technologies have been employed to quantify the total lipids of microalgae without pretreatment. However, these methods are difficult to apply to the measurement of total lipid content in wet-form microalgae obtained from large-scale production. In the present study, thermal analysis performed with a two-step linear temperature program was applied to measure the heat evolved in the temperature range from 310 to 351 °C by Nostoc sp. KNUA003 obtained from large-scale cultivation. We then examined the relationship between the heat evolved in 310-351 °C (HE) and the total lipid content of the wet Nostoc cells cultivated in a raceway. As a result, a linear relationship was found between the HE value and the total lipid content of Nostoc sp. KNUA003; in particular, the linear correlation between the HE value and the total lipid content of the tested microorganism was 98%. Based on this relationship, the total lipid content converted from the heat evolved by wet Nostoc sp. KNUA003 could be used for monitoring its lipid induction in large-scale cultivation. Copyright © 2014 Elsevier Inc. All rights reserved.
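
    The calibration itself is ordinary least squares: fit lipid content against HE, then invert the fit for monitoring. A sketch with illustrative paired values, not the study's data:

        import numpy as np

        he = np.array([12.0, 15.5, 19.0, 22.5, 26.0])      # heat evolved, hypothetical units
        lipid = np.array([8.0, 10.4, 12.9, 15.2, 17.8])    # % dry weight, hypothetical

        slope, intercept = np.polyfit(he, lipid, 1)
        r = np.corrcoef(he, lipid)[0, 1]
        print(f"lipid% = {slope:.2f} * HE + {intercept:.2f}, r^2 = {r**2:.3f}")
        print("predicted lipid at HE=20:", round(slope * 20 + intercept, 1), "%")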

  12. Modelling of large-scale structures arising under developed turbulent convection in a horizontal fluid layer (with application to the problem of tropical cyclone origination)

    Directory of Open Access Journals (Sweden)

    G. V. Levina

    2000-01-01

    Full Text Available The work is concerned with the results of theoretical and laboratory modelling of the processes of large-scale structure generation under turbulent convection in a rotating horizontal layer of an incompressible fluid with unstable stratification. The theoretical model describes three alternative ways of creating unstable stratification: heating the layer from below, volumetric heating of a fluid with internal heat sources, and a combination of both factors. The analysis of the model equations shows that under conditions of high intensity of the small-scale convection and a low level of heat loss through the horizontal layer boundaries, a long-wave instability may arise. The condition for the existence of the instability and a criterion identifying the threshold of its initiation have been determined. The principle of action of the discovered instability mechanism has been described. Theoretical predictions have been verified by a series of experiments on a laboratory model. The horizontal dimensions of the experimentally obtained long-lived vortices are 4-6 times larger than the thickness of the fluid layer. This work presents a description of the laboratory setup and experimental procedure. From the geophysical viewpoint, the examined mechanism of the long-wave instability is supposed to be adequate to describe the initial step in the evolution of such large-scale vortices as tropical cyclones - a transition from the small-scale cumulus clouds to a state of the atmosphere involving cloud clusters (the stage of initial tropical perturbation).

  13. Multimode Resource-Constrained Multiple Project Scheduling Problem under Fuzzy Random Environment and Its Application to a Large Scale Hydropower Construction Project

    Science.gov (United States)

    Xu, Jiuping

    2014-01-01

    This paper presents an extension of the multimode resource-constrained project scheduling problem for a large scale construction project where multiple parallel projects and a fuzzy random environment are considered. By taking into account the most typical goals in project management, a cost/weighted makespan/quality trade-off optimization model is constructed. To deal with the uncertainties, a hybrid crisp approach is used to transform the fuzzy random parameters into fuzzy variables that are subsequently defuzzified using an expected value operator with an optimistic-pessimistic index. Then a combinatorial-priority-based hybrid particle swarm optimization algorithm is developed to solve the proposed model, where the combinatorial particle swarm optimization and priority-based particle swarm optimization are designed to assign modes to activities and to schedule activities, respectively. Finally, the results and analysis of a practical example at a large scale hydropower construction project are presented to demonstrate the practicality and efficiency of the proposed model and optimization method. PMID:24550708

  14. Multimode resource-constrained multiple project scheduling problem under fuzzy random environment and its application to a large scale hydropower construction project.

    Science.gov (United States)

    Xu, Jiuping; Feng, Cuiying

    2014-01-01

    This paper presents an extension of the multimode resource-constrained project scheduling problem for a large scale construction project where multiple parallel projects and a fuzzy random environment are considered. By taking into account the most typical goals in project management, a cost/weighted makespan/quality trade-off optimization model is constructed. To deal with the uncertainties, a hybrid crisp approach is used to transform the fuzzy random parameters into fuzzy variables that are subsequently defuzzified using an expected value operator with an optimistic-pessimistic index. Then a combinatorial-priority-based hybrid particle swarm optimization algorithm is developed to solve the proposed model, where the combinatorial particle swarm optimization and priority-based particle swarm optimization are designed to assign modes to activities and to schedule activities, respectively. Finally, the results and analysis of a practical example at a large scale hydropower construction project are presented to demonstrate the practicality and efficiency of the proposed model and optimization method.
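
    The defuzzification step both records describe can be sketched for a triangular fuzzy duration: an expected value operator weighted by an optimistic-pessimistic index reduces it to a crisp number before scheduling. The operator form below is a common choice in this literature, not necessarily the authors' exact definition, and the durations are invented:

        def expected_value(tri, lam=0.5):
            """Crisp expected value of a triangular fuzzy number (a, m, b).
            lam = 1 is fully optimistic, lam = 0 fully pessimistic."""
            a, m, b = tri
            return lam * (m + b) / 2.0 + (1.0 - lam) * (a + m) / 2.0

        fuzzy_duration = (10.0, 14.0, 20.0)   # days: (lower, modal, upper)
        for lam in (0.0, 0.5, 1.0):
            print(lam, expected_value(fuzzy_duration, lam))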

  15. Assessing the Challenges in the Application of Potential Probiotic Lactic Acid Bacteria in the Large-Scale Fermentation of Spanish-Style Table Olives

    Directory of Open Access Journals (Sweden)

    Francisco Rodríguez-Gómez

    2017-05-01

    Full Text Available This work studies the inoculation conditions for allowing the survival/predominance of a potential probiotic strain (Lactobacillus pentosus TOMC-LAB2) when used as a starter culture in large-scale fermentations of green Spanish-style olives. The study was performed in two successive seasons (2011/2012 and 2012/2013), using about 150 tons of olives. Inoculation immediately after brining (to prevent wild initial microbiota growth) followed by re-inoculation 24 h later (to improve competitiveness) was essential for inoculum predominance. Processing early in the season (September) showed a favorable effect on fermentation and strain predominance on olives (particularly when using acidified brines containing 25 L HCl/vessel) but caused the disappearance of the target strain from both brines and olives during the storage phase. On the contrary, processing in October slightly reduced the target strain predominance on olives (70–90%) but allowed longer survival. The type of inoculum used (laboratory vs. industry pre-adapted) never had significant effects. Thus, this investigation discloses key issues for the survival and predominance of starter cultures in large-scale industrial fermentations of green Spanish-style olives. Results can be of interest for producing probiotic table olives and open new research challenges on the causes of inoculum vanishing during the storage phase.

  16. Development of fine-resolution analyses and expanded large-scale forcing properties: 2. Scale awareness and application to single-column model experiments

    Science.gov (United States)

    Feng, Sha; Li, Zhijin; Liu, Yangang; Lin, Wuyin; Zhang, Minghua; Toto, Tami; Vogelmann, Andrew M.; Endo, Satoshi

    2015-01-01

    Fine-resolution three-dimensional fields have been produced using the Community Gridpoint Statistical Interpolation (GSI) data assimilation system for the U.S. Department of Energy's Atmospheric Radiation Measurement Program (ARM) Southern Great Plains region. The GSI system is implemented in a multiscale data assimilation framework using the Weather Research and Forecasting model at a cloud-resolving resolution of 2 km. From the fine-resolution three-dimensional fields, large-scale forcing is derived explicitly at grid-scale resolution; a subgrid-scale dynamic component is derived separately, representing subgrid-scale horizontal dynamic processes. Analyses show that the subgrid-scale dynamic component is often a major component relative to the large-scale forcing for grid scales larger than 200 km. The single-column model (SCM) of the Community Atmospheric Model version 5 is used to examine the impact of the grid-scale and subgrid-scale dynamic components on simulated precipitation and cloud fields associated with a mesoscale convective system. It is found that grid size impacts simulated precipitation, resulting in an overestimation for grid scales of about 200 km but an underestimation for smaller grids. The subgrid-scale dynamic component has an appreciable impact on the simulations, suggesting that grid-scale and subgrid-scale dynamic components should be considered in the interpretation of SCM simulations.
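
    The decomposition the analysis rests on can be sketched by averaging a cloud-resolving field over a chosen SCM grid box, with the residual eddy flux standing in for the subgrid-scale dynamic component; the 2-D fields here are synthetic:

        import numpy as np

        rng = np.random.default_rng(2)
        field = rng.normal(300.0, 2.0, size=(100, 100))   # e.g. a 200 km box at 2 km

        grid_mean = field.mean()                          # grid-scale value
        anomaly = field - grid_mean                       # subgrid-scale departures
        w_anom = rng.normal(0.0, 0.1, size=field.shape)   # vertical-velocity anomaly
        subgrid_flux = (w_anom * anomaly).mean()          # eddy flux, <w'phi'>
        print(round(grid_mean, 2), round(subgrid_flux, 4))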

  17. Large scale chromatographic separations using continuous displacement chromatography (CDC)

    International Nuclear Information System (INIS)

    Taniguchi, V.T.; Doty, A.W.; Byers, C.H.

    1988-01-01

    A process for large-scale chromatographic separations using a continuous chromatography technique is described. The process combines the advantages of large-scale batch fixed-column displacement chromatography with conventional analytical or elution continuous annular chromatography (CAC) to enable large-scale displacement chromatography to be performed on a continuous basis (CDC). Such large-scale, continuous displacement chromatography separations have not previously been reported in the literature. The process is demonstrated with the ion exchange separation of a binary lanthanide (Nd/Pr) mixture. The process is, however, applicable to any displacement chromatography separation that can be performed using conventional batch, fixed-column chromatography.

  18. Large scale biomimetic membrane arrays

    DEFF Research Database (Denmark)

    Hansen, Jesper Søndergaard; Perry, Mark; Vogel, Jörg

    2009-01-01

    To establish planar biomimetic membranes across large-scale partition aperture arrays, we created a disposable single-use horizontal chamber design that supports combined optical-electrical measurements. Functional lipid bilayers could easily and efficiently be established across CO2 laser micro......-structured 8 x 8 aperture partition arrays with average aperture diameters of 301 ± 5 μm. We addressed the electro-physical properties of the lipid bilayers established across the micro-structured scaffold arrays by controllable reconstitution of biotechnologically and physiologically relevant membrane...... peptides and proteins. Next, we tested the scalability of the biomimetic membrane design by establishing lipid bilayers in rectangular 24 x 24 and hexagonal 24 x 27 aperture arrays, respectively. The results presented show that the design is suitable for further developments of sensitive biosensor assays...

  19. Large scale nuclear structure studies

    International Nuclear Information System (INIS)

    Faessler, A.

    1985-01-01

    Results of large-scale nuclear structure studies are reported. The starting point is the Hartree-Fock-Bogoliubov solution with angular momentum and proton and neutron number projection after variation. This model for number- and spin-projected two-quasiparticle excitations with realistic forces yields results in sd-shell nuclei that are nearly as good as those of the 'exact' shell-model calculations. Here the authors present results for the pf-shell nucleus 46Ti and for the A=130 mass region, where they studied 58 different nuclei with the same single-particle energies and the same effective force derived from a meson exchange potential. They carried out a Hartree-Fock-Bogoliubov variation after mean-field projection in realistic model spaces. In this way, they determine for each yrast state the optimal mean Hartree-Fock-Bogoliubov field. They apply this method to 130Ce and 128Ba using the same effective nucleon-nucleon interaction. (Auth.)

  20. Large-scale river regulation

    International Nuclear Information System (INIS)

    Petts, G.

    1994-01-01

    Recent concern over human impacts on the environment has tended to focus on climatic change, desertification, destruction of tropical rain forests, and pollution. Yet large-scale water projects such as dams, reservoirs, and inter-basin transfers are among the most dramatic and extensive ways in which our environment has been, and continues to be, transformed by human action. Water running to the sea is perceived as a lost resource, floods are viewed as major hazards, and wetlands are seen as wastelands. River regulation, involving the redistribution of water in time and space, is a key concept in socio-economic development. To achieve water and food security, to develop drylands, and to prevent desertification and drought are primary aims for many countries. A second key concept is ecological sustainability. Yet the ecology of rivers and their floodplains is dependent on the natural hydrological regime, and its related biochemical and geomorphological dynamics. (Author)

  1. Fires in large scale ventilation systems

    International Nuclear Information System (INIS)

    Gregory, W.S.; Martin, R.A.; White, B.W.; Nichols, B.D.; Smith, P.R.; Leslie, I.H.; Fenton, D.L.; Gunaji, M.V.; Blythe, J.P.

    1991-01-01

    This paper summarizes the experience gained simulating fires in large-scale ventilation systems patterned after ventilation systems found in nuclear fuel cycle facilities. The series of experiments discussed included: (1) combustion aerosol loading of 0.61x0.61 m HEPA filters with the combustion products of two organic fuels, polystyrene and polymethylmethacrylate; (2) gas dynamics and heat transport through a large-scale ventilation system consisting of a 0.61x0.61 m duct 90 m in length, with dampers, HEPA filters, blowers, etc.; (3) gas dynamics and simultaneous transport of heat and solid particulate (glass beads with a mean aerodynamic diameter of 10 μm) through the large-scale ventilation system; and (4) the transport of heat and soot, generated by kerosene pool fires, through the large-scale ventilation system. The FIRAC computer code, designed to predict fire-induced transients in nuclear fuel cycle facility ventilation systems, was used to predict the results of experiments (2) through (4). In general, the predictions were satisfactory: the code predictions for the gas dynamics, heat transport, and particulate transport and deposition were within 10% of the experimentally measured values. However, the code was less successful in predicting the amount of soot generated by kerosene pool fires, probably because the fire module of the code is a one-dimensional zone model, whereas the experiments revealed a complicated three-dimensional combustion pattern within the fire room of the ventilation system. Further refinement of the fire module within FIRAC is needed. (orig.)

  2. Economically viable large-scale hydrogen liquefaction

    Science.gov (United States)

    Cardella, U.; Decker, L.; Klein, H.

    2017-02-01

    The liquid hydrogen demand, particularly driven by clean energy applications, will rise in the near future. As industrial large-scale liquefiers will play a major role within the hydrogen supply chain, production capacity will have to increase by a multiple of today's typical sizes. The main goal is to reduce the total cost of ownership for these plants by increasing energy efficiency with innovative, simple process designs optimized for capital expenditure. New concepts must ensure manageable plant complexity and flexible operability. In the phase of process development and selection, a dimensioning of key equipment for large-scale liquefiers, such as turbines, compressors and heat exchangers, must be performed iteratively to ensure technological feasibility and maturity. Further critical aspects related to hydrogen liquefaction, e.g. fluid properties, ortho-para hydrogen conversion, and coldbox configuration, must be analysed in detail. This paper provides an overview of the approach, challenges and preliminary results in the development of efficient as well as economically viable concepts for large-scale hydrogen liquefaction.

  3. Large-Scale Off-Target Identification Using Fast and Accurate Dual Regularized One-Class Collaborative Filtering and Its Application to Drug Repurposing.

    Directory of Open Access Journals (Sweden)

    Hansaim Lim

    2016-10-01

    Full Text Available Target-based screening is one of the major approaches in drug discovery. Besides the intended target, unexpected drug off-target interactions often occur, and many of them have not been recognized and characterized. The off-target interactions can be responsible for either therapeutic or side effects. Thus, identifying the genome-wide off-targets of lead compounds or existing drugs will be critical for designing effective and safe drugs, and will provide new opportunities for drug repurposing. Although many computational methods have been developed to predict drug-target interactions, they are either less accurate than the one that we are proposing here or computationally too intensive, thereby limiting their capability for large-scale off-target identification. In addition, the performance of most machine-learning-based algorithms has mainly been evaluated on predicting off-target interactions within the same gene family for hundreds of chemicals; it is not clear how these algorithms perform in detecting off-targets across gene families on a proteome scale. Here, we present a fast and accurate off-target prediction method, REMAP, which is based on a dual regularized one-class collaborative filtering algorithm, to explore continuous chemical space, protein space, and their interactome on a large scale. When tested in a reliable, extensive, and cross-gene family benchmark, REMAP outperforms the state-of-the-art methods. Furthermore, REMAP is highly scalable: it can screen a dataset of 200 thousand chemicals against 20 thousand proteins within 2 hours. Using the reconstructed genome-wide target profile as the fingerprint of a chemical compound, we predicted that seven FDA-approved drugs can be repurposed as novel anti-cancer therapies. The anti-cancer activity of six of them is supported by experimental evidence. Thus, REMAP is a valuable addition to the existing in silico toolbox for drug target identification, drug repurposing

  4. Large-Scale Off-Target Identification Using Fast and Accurate Dual Regularized One-Class Collaborative Filtering and Its Application to Drug Repurposing.

    Science.gov (United States)

    Lim, Hansaim; Poleksic, Aleksandar; Yao, Yuan; Tong, Hanghang; He, Di; Zhuang, Luke; Meng, Patrick; Xie, Lei

    2016-10-01

    Target-based screening is one of the major approaches in drug discovery. Besides the intended target, unexpected drug off-target interactions often occur, and many of them have not been recognized and characterized. The off-target interactions can be responsible for either therapeutic or side effects. Thus, identifying the genome-wide off-targets of lead compounds or existing drugs will be critical for designing effective and safe drugs, and will provide new opportunities for drug repurposing. Although many computational methods have been developed to predict drug-target interactions, they are either less accurate than the one that we are proposing here or computationally too intensive, thereby limiting their capability for large-scale off-target identification. In addition, the performance of most machine-learning-based algorithms has mainly been evaluated on predicting off-target interactions within the same gene family for hundreds of chemicals; it is not clear how these algorithms perform in detecting off-targets across gene families on a proteome scale. Here, we present a fast and accurate off-target prediction method, REMAP, which is based on a dual regularized one-class collaborative filtering algorithm, to explore continuous chemical space, protein space, and their interactome on a large scale. When tested in a reliable, extensive, and cross-gene family benchmark, REMAP outperforms the state-of-the-art methods. Furthermore, REMAP is highly scalable: it can screen a dataset of 200 thousand chemicals against 20 thousand proteins within 2 hours. Using the reconstructed genome-wide target profile as the fingerprint of a chemical compound, we predicted that seven FDA-approved drugs can be repurposed as novel anti-cancer therapies. The anti-cancer activity of six of them is supported by experimental evidence. Thus, REMAP is a valuable addition to the existing in silico toolbox for drug target identification, drug repurposing, phenotypic screening, and
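
    A toy numpy version of the underlying idea, weighted one-class matrix factorization with graph (similarity) regularization on both sides, may help make the method concrete. This is a generic sketch, not the actual REMAP solver; all names and hyperparameters are illustrative:

        import numpy as np

        def one_class_cf(R, Sc, Sp, rank=10, w_neg=0.1, lam=0.1, mu=0.1,
                         lr=0.01, iters=500):
            """R: binary chemical-protein matrix; Sc, Sp: similarity matrices."""
            W = np.where(R > 0, 1.0, w_neg)      # unobserved pairs get low weight
            Lc = np.diag(Sc.sum(1)) - Sc         # chemical graph Laplacian
            Lp = np.diag(Sp.sum(1)) - Sp         # protein graph Laplacian
            U = 0.1 * np.random.randn(R.shape[0], rank)
            V = 0.1 * np.random.randn(R.shape[1], rank)
            for _ in range(iters):
                E = W * (U @ V.T - R)            # weighted reconstruction error
                U -= lr * (E @ V + lam * U + mu * Lc @ U)
                V -= lr * (E.T @ U + lam * V + mu * Lp @ V)
            # high scores among the zeros of R are off-target candidates
            return U @ V.T

    The low rank keeps each sweep to a few dense matrix products, which is the kind of structure that makes proteome-scale screening tractable.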

  5. Large Scale Glazed Concrete Panels

    DEFF Research Database (Denmark)

    Bache, Anja Margrethe

    2010-01-01

    Today, there is a lot of focus on the aesthetic potential of concrete surfaces, both globally and locally. World-famous architects such as Herzog & de Meuron, Zaha Hadid, Richard Meier and David Chipperfield challenge the exposure of concrete in their architecture. At home, this trend can be seen...... in the crinkly façade of DR-Byen (the domicile of the Danish Broadcasting Company) by architect Jean Nouvel and in Zaha Hadid's black, curved, smooth concrete surfaces at Ordrupgård. Furthermore, one can point to initiatives such as "Synlig beton" (visible concrete), which can be seen on the website www.......synligbeton.dk, and spæncom's aesthetic relief effects by the designer Line Kramhøft (www.spaencom.com). It is my hope that the research and development project "Lasting large scale glazed concrete formwork," which I am working on at DTU's Department of Architectural Engineering, will be able to complement these. It is a project where I

  6. Large scale cluster computing workshop

    International Nuclear Information System (INIS)

    Dane Skow; Alan Silverman

    2002-01-01

    Recent revolutions in computer hardware and software technologies have paved the way for the large-scale deployment of clusters of commodity computers to address problems heretofore the domain of tightly coupled SMP processors. Near-term projects within High Energy Physics and other computing communities will deploy clusters on the scale of 1000s of processors, used by 100s to 1000s of independent users. This will expand the reach in both dimensions by an order of magnitude from the current successful production facilities. The goals of this workshop were: (1) to determine what tools exist which can scale up to the cluster sizes foreseen for the next generation of HENP experiments (several thousand nodes) and, by implication, to identify areas where some investment of money or effort is likely to be needed; (2) to compare and record experiences gained with such tools; (3) to produce a practical guide to all stages of planning, installing, building and operating a large computing cluster in HENP; (4) to identify and connect groups with similar interests within HENP and the larger clustering community

  7. Large scale cross hole testing

    International Nuclear Information System (INIS)

    Ball, J.K.; Black, J.H.; Doe, T.

    1991-05-01

    As part of the Site Characterisation and Validation programme, the results of the large-scale cross hole testing have been used to document hydraulic connections across the SCV block, to test conceptual models of fracture zones and to obtain hydrogeological properties of the major hydrogeological features. The SCV block is highly heterogeneous, and this heterogeneity is not smoothed out even over scales of hundreds of meters. Results of the interpretation validate the hypothesis of the major fracture zones, A, B and H; not much evidence of minor fracture zones is found. The uncertainty in the flow path through the fractured rock causes severe problems in interpretation. Derived values of hydraulic conductivity were found to lie within a range of two to three orders of magnitude. The test design did not allow fracture zones to be tested individually; this could be improved by specifically testing the high hydraulic conductivity regions. The Piezomac and single hole equipment worked well. Few, if any, of the tests ran long enough to approach equilibrium. Many observation boreholes showed no response; this could be either because there is no hydraulic connection, or because there is a connection but the response is not seen within the time scale of the pumping test. The fractional dimension analysis yielded credible results, and the sinusoidal testing procedure provided an effective means of identifying the dominant hydraulic connections. (10 refs.) (au)

  8. Large-scale synthesis of macroporous SnO2 with/without carbon and their application as anode materials for lithium-ion batteries

    International Nuclear Information System (INIS)

    Wang Fei; Yao Gang; Xu Minwei; Zhao Mingshu; Sun Zhanbo; Song Xiaoping

    2011-01-01

    Highlights: → A new hard template prepared from glucose was used to synthesize macroporous SnO2. → SnO2 and SnO2/C were prepared by a simple and large-scale synthetic method. → The approach combines nanostructure design with the active/inactive nanocomposite concept. → The obtained SnO2/C composite exhibited superior cycling performance. - Abstract: The macroporous SnO2 is prepared using a close-packed carbonaceous sphere template synthesized from glucose by a hydrothermal method. The structure and morphology of the macroporous SnO2 are evaluated by XRD and FE-SEM. The average pore size of the macroporous SnO2 is about 190 nm and its wall thickness is less than 10 nm. When the macroporous SnO2 filled with carbon is used as an anode material for lithium-ion batteries, the capacity is about 380 mAh g-1 after 70 cycles. The improved cyclability is attributed to the carbon matrix, which acts as an effective physical buffer preventing the collapse of the well-dispersed macroporous SnO2.

  9. Large-scale synthesis of macroporous SnO{sub 2} with/without carbon and their application as anode materials for lithium-ion batteries

    Energy Technology Data Exchange (ETDEWEB)

    Wang Fei; Yao Gang; Xu Minwei [MOE Key Laboratory for Non-equilibrium Synthesis and Modulation of Condensed Matter, School of Science, Xi' an Jiaotong University, Shaan Xi 710049 (China); Zhao Mingshu, E-mail: zhaomshu@mail.xjtu.edu.cn [MOE Key Laboratory for Non-equilibrium Synthesis and Modulation of Condensed Matter, School of Science, Xi' an Jiaotong University, Shaan Xi 710049 (China); Sun Zhanbo [MOE Key Laboratory for Non-equilibrium Synthesis and Modulation of Condensed Matter, School of Science, Xi' an Jiaotong University, Shaan Xi 710049 (China); Song Xiaoping, E-mail: xpsong@mail.xjtu.edu.cn [MOE Key Laboratory for Non-equilibrium Synthesis and Modulation of Condensed Matter, School of Science, Xi' an Jiaotong University, Shaan Xi 710049 (China)

    2011-05-19

    Highlights: > A new hard template prepared from glucose was used to synthesize macroporous SnO{sub 2}. > SnO{sub 2} and SnO{sub 2}/C were prepared by a simple and large-scale synthetic method. > The approach combines nanostructure design with the active/inactive nanocomposite concept. > The obtained SnO{sub 2}/C composite exhibited superior cycling performance. - Abstract: The macroporous SnO{sub 2} is prepared using a close-packed carbonaceous sphere template synthesized from glucose by a hydrothermal method. The structure and morphology of the macroporous SnO{sub 2} are evaluated by XRD and FE-SEM. The average pore size of the macroporous SnO{sub 2} is about 190 nm and its wall thickness is less than 10 nm. When the macroporous SnO{sub 2} filled with carbon is used as an anode material for lithium-ion batteries, the capacity is about 380 mAh g{sup -1} after 70 cycles. The improved cyclability is attributed to the carbon matrix, which acts as an effective physical buffer preventing the collapse of the well-dispersed macroporous SnO{sub 2}.

  10. A large-scale mutant panel in wheat developed using heavy-ion beam mutagenesis and its application to genetic research

    Energy Technology Data Exchange (ETDEWEB)

    Murai, Koji, E-mail: murai@fpu.ac.jp [Department of Bioscience, Fukui Prefectural University, 4-1-1 Matsuoka-Kenjojima, Eiheiji-cho, Yoshida-gun, Fukui 910-1195 (Japan); Nishiura, Aiko [Department of Bioscience, Fukui Prefectural University, 4-1-1 Matsuoka-Kenjojima, Eiheiji-cho, Yoshida-gun, Fukui 910-1195 (Japan); Kazama, Yusuke [RIKEN, Innovation Center, 2-1 Hirosawa, Wako, Saitama 351-0198 (Japan); Abe, Tomoko [RIKEN, Innovation Center, 2-1 Hirosawa, Wako, Saitama 351-0198 (Japan); RIKEN, Nishina Center, 2-1 Hirosawa, Wako, Saitama 351-0198 (Japan)

    2013-11-01

    Mutation analysis is a powerful tool for studying gene function. Heavy-ion beam mutagenesis is a comparatively new approach to inducing mutations in plants and is particularly efficient because of its high linear energy transfer (LET). High LET radiation induces a higher rate of DNA double-strand breaks than other mutagenic methods. Over the last 12 years, we have constructed a large-scale mutant panel in diploid einkorn wheat (Triticum monococcum) using heavy-ion beam mutagenesis. Einkorn wheat seeds were exposed to a heavy-ion beam and then sown in the field. Selfed seeds from each spike of M{sub 1} plants were used to generate M{sub 2} lines. Every year, we obtained approximately 1000 M{sub 2} lines and eventually developed a mutant panel with 10,000 M{sub 2} lines in total. This mutant panel is being systematically screened for mutations affecting reproductive growth, and especially for flowering-time mutants. To date, we have identified several flowering-time mutants of great interest: non-flowering mutants (mvp: maintained vegetative phase), late-flowering mutants, and early-flowering mutants. These novel mutations will be of value for investigations of the genetic mechanism of flowering in wheat.

  11. Economic benefits of large-scale remediation of contaminated marine sediments. A literature review and an application to the Grenland fjords in Norway

    Energy Technology Data Exchange (ETDEWEB)

    Barton, David Nicholas [Norwegian Inst. for Water Research (NIVA) / Norwegian Inst. Nature Research, Oslo (Norway); Navrud, Staale; Bjoerkeslett, Heid; Lilleby, Ingrid [Dept. of Economics and Resource Management, Norwegian Univ. of Life Sciences (UMB), As (Norway)

    2010-03-15

    Purpose: As input to a cost-benefit analysis of large-scale remediation measures for contaminated sediments in the Grenland fjords in Norway, we conducted a contingent valuation (CV) survey of a representative sample of households from municipalities adjacent to these fjords. Materials and methods: The CV survey aimed at valuing the benefits households perceive from removing the dietary health advisories on seafood consumption currently in place throughout the fjords. Results: Mean household willingness-to-pay (WTP) per year over a 10-year period was found to decrease with increased distance from the Grenland fjords and was somewhat higher than in a similar study conducted 10 years earlier. Aggregating mean WTP over all households in municipalities neighbouring the fjords resulted in total economic benefits of the same magnitude as the total remediation costs. The WTP results thus suggest that the high costs of remediating contaminated marine sediments can be defended by the large economic benefits generated for households around the fjord. The research was financed by the local environmental authorities and the local industry that had caused the contaminated sediments; the WTP results were strongly contested by the industry upon completion of the study. Conclusion: The paper addresses the industry's critiques of this particular CV study and discusses how to better inform local stakeholders about the potential and limitations of the CV method and how to improve the communication of economic valuation results. (orig.)

  12. Successive and large-scale synthesis of InP/ZnS quantum dots in a hybrid reactor and their application to white LEDs

    International Nuclear Information System (INIS)

    Kim, Kyungnam; Jeong, Sohee; Woo, Ju Yeon; Han, Chang-Soo

    2012-01-01

    We report successive and large-scale synthesis of InP/ZnS core/shell nanocrystal quantum dots (QDs) using a customized hybrid flow reactor, which is based on a serial combination of a batch-type mixer and a flow-type furnace. InP cores and InP/ZnS core/shell QDs were successively synthesized in the hybrid reactor in a simple one-step process. In this reactor, the flow rate of the solutions was typically 1 ml min−1, 100 times larger than that of conventional microfluidic reactors. In order to synthesize high-quality InP/ZnS QDs, we controlled both the flow rate and the crystal growth temperature. Finally, we obtained high-quality InP/ZnS QDs in colors from bluish green to red, and we demonstrated that these core/shell QDs could be incorporated into white-light-emitting diode (LED) devices to improve color rendering performance. (paper)

  13. Successive and large-scale synthesis of InP/ZnS quantum dots in a hybrid reactor and their application to white LEDs

    Science.gov (United States)

    Kim, Kyungnam; Jeong, Sohee; Woo, Ju Yeon; Han, Chang-Soo

    2012-02-01

    We report successive and large-scale synthesis of InP/ZnS core/shell nanocrystal quantum dots (QDs) using a customized hybrid flow reactor, which is based on a serial combination of a batch-type mixer and a flow-type furnace. InP cores and InP/ZnS core/shell QDs were successively synthesized in the hybrid reactor in a simple one-step process. In this reactor, the flow rate of the solutions was typically 1 ml min-1, 100 times larger than that of conventional microfluidic reactors. In order to synthesize high-quality InP/ZnS QDs, we controlled both the flow rate and the crystal growth temperature. Finally, we obtained high-quality InP/ZnS QDs in colors from bluish green to red, and we demonstrated that these core/shell QDs could be incorporated into white-light-emitting diode (LED) devices to improve color rendering performance.

  14. The Schroedinger-Poisson equations as the large-N limit of the Newtonian N-body system. Applications to the large scale dark matter dynamics

    Energy Technology Data Exchange (ETDEWEB)

    Briscese, Fabio [Northumbria University, Department of Mathematics, Physics and Electrical Engineering, Newcastle upon Tyne (United Kingdom); Citta Universitaria, Istituto Nazionale di Alta Matematica Francesco Severi, Gruppo Nazionale di Fisica Matematica, Rome (Italy)

    2017-09-15

    In this paper it is argued how the dynamics of the classical Newtonian N-body system can be described in terms of the Schroedinger-Poisson equations in the large-N limit. This result is based on the stochastic quantization introduced by Nelson and on the Calogero conjecture. According to the Calogero conjecture, the emerging effective Planck constant is computed in terms of the parameters of the N-body system as ℎ ∝ M{sup 5/3}G{sup 1/2}(N/⟨ρ⟩){sup 1/6}, where G is the gravitational constant, N and M are the number and the mass of the bodies, and ⟨ρ⟩ is their average density. The relevance of this result in the context of large-scale structure formation is discussed. In particular, this finding gives a further argument in support of the validity of the Schroedinger method as a numerical double of the N-body simulations of dark matter dynamics at large cosmological scales. (orig.)
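
    A quick dimensional check confirms that this combination indeed carries the units of an action, as an effective Planck constant must (N is dimensionless):

        \left[ M^{5/3}\, G^{1/2}\, \langle\rho\rangle^{-1/6} \right]
          = \mathrm{kg}^{5/3}\,\bigl(\mathrm{m^3\,kg^{-1}\,s^{-2}}\bigr)^{1/2}\,\bigl(\mathrm{kg\,m^{-3}}\bigr)^{-1/6}
          = \mathrm{kg}^{5/3-1/2-1/6}\,\mathrm{m}^{3/2+1/2}\,\mathrm{s}^{-1}
          = \mathrm{kg\,m^2\,s^{-1}} = [\hbar]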

  15. Application of stochastic models in identification and apportionment of heavy metal pollution sources in the surface soils of a large-scale region.

    Science.gov (United States)

    Hu, Yuanan; Cheng, Hefa

    2013-04-16

    As heavy metals occur naturally in soils at measurable concentrations and their natural background contents have significant spatial variations, identification and apportionment of heavy metal pollution sources across large-scale regions is a challenging task. Stochastic models, including the recently developed conditional inference tree (CIT) and the finite mixture distribution model (FMDM), were applied to identify the sources of heavy metals found in the surface soils of the Pearl River Delta, China, and to apportion the contributions from natural background and human activities. Regression trees were successfully developed for the concentrations of Cd, Cu, Zn, Pb, Cr, Ni, As, and Hg in 227 soil samples from a region of over 7.2 × 10^4 km^2, based on seven specific predictors relevant to the source and behavior of heavy metals: land use, soil type, soil organic carbon content, population density, gross domestic product per capita, and the lengths and classes of the roads surrounding the sampling sites. The CIT and FMDM results consistently indicate that Cd, Zn, Cu, Pb, and Cr in the surface soils of the PRD were contributed largely by anthropogenic sources, whereas As, Ni, and Hg in the surface soils mostly originated from the soil parent materials.
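
    As a rough illustration of the two model classes named above, the sketch below uses sklearn's CART regression tree as a stand-in for the conditional inference tree, and a two-component Gaussian mixture as a minimal finite mixture distribution model; the data and all column meanings are fake:

        import numpy as np
        from sklearn.tree import DecisionTreeRegressor
        from sklearn.mixture import GaussianMixture

        X = np.random.rand(227, 7)         # 7 predictors (land use, soil type, ...)
        y = np.random.lognormal(size=227)  # e.g. Cd concentration, mg/kg (fake)

        # tree: which predictors partition the region into concentration regimes
        tree = DecisionTreeRegressor(max_depth=4, min_samples_leaf=10).fit(X, y)

        # mixture: log-concentrations as background + anthropogenic components;
        # the low-mean component is read as the natural background level
        gmm = GaussianMixture(n_components=2, random_state=0).fit(
            np.log(y).reshape(-1, 1))
        background = float(np.exp(gmm.means_.min()))

    Checking that the tree's splits and the mixture's component assignments point to the same sources mirrors the consistency check reported in the abstract.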

  16. Application of the extreme value theory to beam loss estimates in the SPIRAL2 linac based on large scale Monte Carlo computations

    Directory of Open Access Journals (Sweden)

    R. Duperrier

    2006-04-01

    Full Text Available The influence of random perturbations of high intensity accelerator elements on the beam losses is considered. This paper presents the error sensitivity study which has been performed for the SPIRAL2 linac in order to define the tolerances for the construction. The proposed driver aims to accelerate a 5 mA deuteron beam up to 20 A MeV and a 1 mA ion beam with q/A=1/3 up to 14.5 A MeV. It is a continuous-wave linac, designed for maximum efficiency in the transmission of intense beams and a tunable energy. It consists of an injector (two ECR sources + LEBTs, with the possibility to inject from several sources, + a radio frequency quadrupole) followed by a superconducting section based on an array of independently phased cavities, in which the transverse focusing is performed with warm quadrupoles. The correction scheme and the expected losses are described. The extreme value theory is used to estimate the expected beam losses; the described method couples it with large-scale computations to obtain probability distribution functions. The bootstrap technique is used to provide confidence intervals associated with the beam loss predictions. With such a method, it is possible to quantify the risk of losing a few watts in this high-power linac (up to 200 kW).
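
    The statistical machinery can be sketched in a few lines: fit an extreme value distribution to the per-run maximum losses from the Monte Carlo runs, then bootstrap the fit to get a confidence interval on a rare-loss quantile. The sketch below uses synthetic data and a generalized extreme value fit; the paper's actual estimator details may differ:

        import numpy as np
        from scipy.stats import genextreme

        rng = np.random.default_rng(0)
        max_loss = rng.lognormal(0.0, 0.5, size=500)   # W; one maximum per MC run

        def loss_quantile(sample, p=0.999):
            c, loc, scale = genextreme.fit(sample)
            return genextreme.ppf(p, c, loc=loc, scale=scale)

        boot = [loss_quantile(rng.choice(max_loss, size=max_loss.size))
                for _ in range(200)]
        lo, hi = np.percentile(boot, [2.5, 97.5])      # 95% CI on the loss level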

  17. Large-scale galaxy bias

    Science.gov (United States)

    Desjacques, Vincent; Jeong, Donghui; Schmidt, Fabian

    2018-02-01

    This review presents a comprehensive overview of galaxy bias, that is, the statistical relation between the distribution of galaxies and matter. We focus on large scales where cosmic density fields are quasi-linear. On these scales, the clustering of galaxies can be described by a perturbative bias expansion, and the complicated physics of galaxy formation is absorbed by a finite set of coefficients of the expansion, called bias parameters. The review begins with a detailed derivation of this very important result, which forms the basis of the rigorous perturbative description of galaxy clustering, under the assumptions of General Relativity and Gaussian, adiabatic initial conditions. Key components of the bias expansion are all leading local gravitational observables, which include the matter density but also tidal fields and their time derivatives. We hence expand the definition of local bias to encompass all these contributions. This derivation is followed by a presentation of the peak-background split in its general form, which elucidates the physical meaning of the bias parameters, and a detailed description of the connection between bias parameters and galaxy statistics. We then review the excursion-set formalism and peak theory which provide predictions for the values of the bias parameters. In the remainder of the review, we consider the generalizations of galaxy bias required in the presence of various types of cosmological physics that go beyond pressureless matter with adiabatic, Gaussian initial conditions: primordial non-Gaussianity, massive neutrinos, baryon-CDM isocurvature perturbations, dark energy, and modified gravity. Finally, we discuss how the description of galaxy bias in the galaxies' rest frame is related to clustering statistics measured from the observed angular positions and redshifts in actual galaxy catalogs.
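
    For orientation, the lowest orders of the bias expansion discussed here read schematically (in the review's conventions, with δ the matter overdensity, K_ij the tidal field, and ε a stochastic term):

        \delta_g(\mathbf{x},\tau) = b_1\,\delta + \frac{b_2}{2}\,\delta^2
            + b_{K^2}\,(K_{ij})^2 + b_{\nabla^2\delta}\,\nabla^2\delta
            + \epsilon + \dots

    Each coefficient b_O absorbs the small-scale galaxy formation physics associated with the operator O.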

  18. Large-scale galaxy bias

    Science.gov (United States)

    Jeong, Donghui; Desjacques, Vincent; Schmidt, Fabian

    2018-01-01

    Here, we briefly introduce the key results of the recent review (arXiv:1611.09787), whose abstract is as follows. This review presents a comprehensive overview of galaxy bias, that is, the statistical relation between the distribution of galaxies and matter. We focus on large scales where cosmic density fields are quasi-linear. On these scales, the clustering of galaxies can be described by a perturbative bias expansion, and the complicated physics of galaxy formation is absorbed by a finite set of coefficients of the expansion, called bias parameters. The review begins with a detailed derivation of this very important result, which forms the basis of the rigorous perturbative description of galaxy clustering, under the assumptions of General Relativity and Gaussian, adiabatic initial conditions. Key components of the bias expansion are all leading local gravitational observables, which include the matter density but also tidal fields and their time derivatives. We hence expand the definition of local bias to encompass all these contributions. This derivation is followed by a presentation of the peak-background split in its general form, which elucidates the physical meaning of the bias parameters, and a detailed description of the connection between bias parameters and galaxy (or halo) statistics. We then review the excursion set formalism and peak theory which provide predictions for the values of the bias parameters. In the remainder of the review, we consider the generalizations of galaxy bias required in the presence of various types of cosmological physics that go beyond pressureless matter with adiabatic, Gaussian initial conditions: primordial non-Gaussianity, massive neutrinos, baryon-CDM isocurvature perturbations, dark energy, and modified gravity. Finally, we discuss how the description of galaxy bias in the galaxies' rest frame is related to clustering statistics measured from the observed angular positions and redshifts in actual galaxy catalogs.

  19. Emissions from waste combustion. An application of statistical experimental design in a laboratory-scale boiler and an investigation from large-scale incineration plants

    Energy Technology Data Exchange (ETDEWEB)

    Xiaojing, Zhang

    1997-05-01

    The aim of this thesis is a study of the emissions from the combustion of household refuse. The experiments were performed both on a laboratory-scale boiler and on full-scale incineration plants. In the laboratory, an artificial household refuse with known composition was fed into a pilot boiler with a stationary grate. Combustion was under non-optimum conditions. Direct sampling with a Tenax adsorbent was used to measure a range of VOCs. Measurements were also made of incompletely burnt hydrocarbons, carbon monoxide, carbon dioxide, oxygen and flue gas temperature. Combustion and emission parameters were recorded continuously by a multi-point data logger. VOCs were analysed by gas chromatography and mass spectrometry (GC/MS). The full-scale tests were performed at seven Swedish incineration plants. The data were used to evaluate the emissions from large-scale incineration plants with various types of fuels and incinerators, and were also compared with the laboratory results. The response surface model developed from the laboratory experiments was also validated. This thesis also includes studies on the gasification of household refuse pellets, estimations of particulate and soot emissions, and a thermodynamic analysis of PAHs from combustion flue gas. For pellet gasification, experiments were performed on single, well characterised refuse pellets under carefully controlled conditions. The aim was to see if the effects of pellets were different from those of untreated household refuse. The results from both laboratory and full-scale tests showed that the main contributions to emissions from household refuse are plastics and moisture. 142 refs, 82 figs, 51 tabs

  20. A flexible and cost-effective compensation method for leveling using large-scale coordinate measuring machines and its application in aircraft digital assembly

    Science.gov (United States)

    Deng, Zhengping; Li, Shuanggao; Huang, Xiang

    2018-06-01

    In the assembly process of large-size aerospace products, the leveling and horizontal alignment of large components are essential prior to the installation of an inertial navigation system (INS) and the final quality inspection. In general, the inherent coordinate systems of large-scale coordinate measuring devices do not coincide with the geodetic horizontal system, and a dual-axis compensation system is commonly required for the measurement of height differences. At present, these compensation systems are expensive and must be designed specifically for each device. Considering that a large-size assembly site usually needs more than one measuring device, a compensation approach which is versatile across different devices would be a more convenient and economic choice for manufacturers. In this paper, a flexible and cost-effective compensation method is proposed. Firstly, an auxiliary measuring device called a versatile compensation fixture (VCF) is designed, which mainly comprises reference points for coordinate transformation and a dual-axis inclinometer, and network tightening points (NTPs) are introduced and temporarily deployed in the large measuring space to further reduce transformation error. Secondly, the measuring principle for height differences is studied, based on coordinate transformation theory and trigonometry while considering the effects of earth curvature, and the coordinate transformation parameters are derived by least-squares adjustment. Thirdly, the analytical solution for the leveling uncertainty is derived, based on which the key parameters of the VCF and the proper deployment of NTPs are determined according to the leveling accuracy requirement. Furthermore, the proposed method is practically applied to the assembly of a large helicopter by developing an automatic leveling and alignment system. By measuring four NTPs, the leveling uncertainty (2σ) is reduced by 29.4% to about 0.12 mm, compared with that without NTPs.
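
    The core coordinate-transformation step can be illustrated with a least-squares rigid transform (Kabsch-style) that maps points measured in the device frame into the leveled, inclinometer-defined frame. The real VCF processing additionally corrects for earth curvature; all names below are illustrative:

        import numpy as np

        def rigid_transform(P, Q):
            """Least-squares R, t such that Q ~ R @ P + t; P, Q are 3xN arrays."""
            cP, cQ = P.mean(axis=1, keepdims=True), Q.mean(axis=1, keepdims=True)
            U, _, Vt = np.linalg.svd((Q - cQ) @ (P - cP).T)
            D = np.diag([1.0, 1.0, np.linalg.det(U @ Vt)])  # exclude reflections
            R = U @ D @ Vt
            return R, cQ - R @ cP

        # P: reference points in the tracker frame; Q: the same points in the
        # leveled frame. Height differences are then read from transformed z:
        # R, t = rigid_transform(P, Q)
        # z_leveled = (R @ measured_points + t)[2]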

  1. Programmed Nanomaterial Assemblies in Large Scales: Applications of Synthetic and Genetically- Engineered Peptides to Bridge Nano-Assemblies and Macro-Assemblies

    Energy Technology Data Exchange (ETDEWEB)

    Matsui, Hiroshi

    2014-09-09

    Work is reported in these areas: Large-scale and reconfigurable 3D structures of precise nanoparticle assemblies in self-assembled collagen peptide grids; Binary QD-Au NP 3D superlattices assembled with collagen-like peptides and energy transfer between QD and Au NP in 3D peptide frameworks; Catalytic peptides discovered by a new hydrogel-based combinatorial phage display approach and their enzyme-mimicking 2D assembly; New autonomous motors of metal-organic frameworks (MOFs) powered by reorganization of self-assembled peptides at interfaces; Biomimetic assembly of proteins into microcapsules on oil-in-water droplets with structural reinforcement via biomolecular recognition-based cross-linking of surface peptides; and Biomimetic fabrication of strong freestanding genetically-engineered collagen peptide films reinforced by quantum dot joints. We gained broad knowledge about biomimetic material assembly from nanoscale to microscale ranges by coassembling peptides and NPs via biomolecular recognition. We discovered that: Genetically-engineered collagen-like peptides can be self-assembled with Au NPs to generate 3D superlattices in large volumes (> μm{sup 3}); The assembly of the 3D peptide-Au NP superstructures is dynamic, and the interparticle distance changes with assembly time as the reconfiguration of the structure is triggered by pH change; QDs/NPs can be assembled with the peptide frameworks to generate 3D superlattices, and these QDs/NPs can be electronically coupled for efficient energy transfer; The controlled assembly of catalytic peptides mimicking the catalytic pocket of enzymes can catalyze chemical reactions with high selectivity; and, For the bacteria-mimicking swimmer fabrication, peptide-MOF superlattices can power translational and propellant motions by the reconfiguration of the peptide assembly at the MOF-liquid interface.

  2. State of the art and prospective of large scale applications of YBCO thick films grown on metallic substrates; Possibilità applicative a larga scala dei film spessi di YBCO su substrati metallici: Stato dell'arte e prospettive

    Energy Technology Data Exchange (ETDEWEB)

    Boffa, Vincenzo [ENEA, Centro Ricerche Frascati, Rome (Italy). Dipt. Energia

    1997-09-01

    In the framework of high temperature superconducting materials, YBa{sub 2}Cu{sub 3}O{sub 7} (YBCO) shows very interesting intrinsic superconducting transport properties at temperatures higher than the liquid nitrogen temperature. These properties are very important in large-scale applications: transport of energy, magnets, transformers, etc. Unfortunately, the potential of this material cannot be realized today, since it is very difficult to manufacture YBCO-based tapes or cables. In recent years several groups have tried to overcome the problems with new fabrication techniques. In the present report, the state of the art and the prospects in the field of YBCO film fabrication on metallic substrates are presented.

  3. Facile large scale synthesis of Bi{sub 2}S{sub 3} nano rods–graphene composite for photocatalytic photoelectrochemical and supercapacitor application

    Energy Technology Data Exchange (ETDEWEB)

    Vadivel, S. [Electrochemical Engineering Laboratory, Department of Chemical Engineering, C. Tech Campus, Anna University, Chennai-600 025 (India); Naveen, A. Nirmalesh [Department of Physics, Anna University, Chennai, Tamil Nadu 600025 (India); Kamalakannan, V.P. [Electrochemical Engineering Laboratory, Department of Chemical Engineering, C. Tech Campus, Anna University, Chennai-600 025 (India); Cao, P. [Department of Chemistry and Materials Engineering, The University of Auckland, PB 92019, Auckland 1142 (New Zealand); Balasubramanian, N., E-mail: nbsbala@annauniv.edu [Electrochemical Engineering Laboratory, Department of Chemical Engineering, C. Tech Campus, Anna University, Chennai-600 025 (India)

    2015-10-01

    Graphical abstract: - Highlights: • A Bi{sub 2}S{sub 3}/RGO composite was synthesized by a one-pot precipitation method. • The synthesized Bi{sub 2}S{sub 3}/RGO composite exhibits rod-like morphology. • The as-synthesized composite was applied to malachite green degradation. • The Bi{sub 2}S{sub 3}/RGO composite exhibits a specific capacitance of 290 F g{sup −1} at a current density of 1 A g{sup −1}. • The photocatalytic and supercapacitor properties of Bi{sub 2}S{sub 3} were enhanced mainly due to effective graphene incorporation. - Abstract: A Bi{sub 2}S{sub 3} nano rod–graphene (BG) composite material was synthesized by a simple one-step precipitation method. The crystallinity and the structural and morphological properties were studied by X-ray diffraction (XRD), field-emission scanning electron microscopy (FESEM), high-resolution transmission electron microscopy (HRTEM), X-ray photoelectron spectroscopy (XPS) and Raman spectroscopy. The photocatalytic activity of BG was evaluated by the photocatalytic degradation of malachite green (MG) dye in aqueous solution under visible light irradiation. The effect of graphene content on the photoelectrochemical properties of the Bi{sub 2}S{sub 3} nano rods was also studied. The enhancement of the photocurrent and photocatalytic properties of the BG composite is attributed to the synergistic effect between the Bi{sub 2}S{sub 3} nano rods and the graphene sheets, which improves the charge separation efficiency in the Bi{sub 2}S{sub 3} nano rods. The supercapacitor behavior was studied using cyclic voltammetry and galvanostatic charge-discharge studies. The BG composite exhibits a maximum specific capacitance of 290 F g{sup −1} at a current density of 1 A g{sup −1}. The present study may provide a new approach to improving the performance of the BG composite in supercapacitor, solar cell and photocatalytic applications.

  4. Performance Health Monitoring of Large-Scale Systems

    Energy Technology Data Exchange (ETDEWEB)

    Rajamony, Ram [IBM Research, Austin, TX (United States)

    2014-11-20

    This report details the progress made on the ASCR-funded project Performance Health Monitoring for Large Scale Systems. A large-scale application may not achieve its full performance potential due to degraded performance of even a single subsystem. Detecting performance faults, isolating them, and taking remedial action is critical for the scale of systems on the horizon. PHM aims to develop techniques and tools that can be used to identify and mitigate such performance problems. We accomplish this through two main aspects. The PHM framework encompasses diagnostics, system monitoring, fault isolation, and performance evaluation capabilities that indicate when a performance fault has been detected, either due to an anomaly present in the system itself or due to contention for shared resources between concurrently executing jobs. Software components called the PHM Control system then build upon the capabilities provided by the PHM framework to mitigate degradation caused by performance problems.

  5. On combination of strict Bayesian principles with model reduction technique or how stochastic model calibration can become feasible for large-scale applications

    Science.gov (United States)

    Oladyshkin, S.; Schroeder, P.; Class, H.; Nowak, W.

    2013-12-01

    Predicting underground carbon dioxide (CO2) storage represents a challenging problem in a complex dynamic system. Due to lacking information about reservoir parameters, quantification of uncertainties may become the dominant question in risk assessment. Calibration on past observed data from a pilot-scale test injection can improve the predictive power of the involved geological, flow, and transport models. The current work performs history matching against pressure time series from a pilot storage site operated in Europe, recorded during an injection period. Simulation of compressible two-phase flow and transport (CO2/brine) in the considered site is computationally very demanding, requiring about 12 days of CPU time for an individual model run. For that reason, brute-force approaches to calibration are not feasible. In the current work, we explore an advanced framework for history matching based on the arbitrary polynomial chaos expansion (aPC) and strict Bayesian principles. The aPC [1] offers a drastic but accurate stochastic model reduction. Unlike many previous chaos expansions, it can handle arbitrary probability distribution shapes of uncertain parameters, and can therefore directly handle the statistical information appearing during the matching procedure. In our study we keep the spatial heterogeneity suggested by geophysical methods, but consider uncertainty in the magnitude of permeability through zone-wise permeability multipliers, and we capture the dependence of model output on these multipliers with the expansion-based reduced model. We then combined the aPC with bootstrap filtering (a brute-force but fully accurate Bayesian updating mechanism) in order to perform the matching. In comparison to (Ensemble) Kalman Filters, our method accounts for higher-order statistical moments and for the non-linearity of both the forward model and the inversion, and thus allows a rigorous quantification of calibrated model uncertainty. The usually high computational costs of
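
    The two ingredients can be caricatured in a few lines of Python: a cheap polynomial surrogate fitted to a handful of expensive model runs, then bootstrap filtering (likelihood weighting plus resampling) of a large prior sample against the observation. The real aPC constructs its orthogonal basis from the parameters' own distribution; plain monomials and synthetic numbers are used below:

        import numpy as np

        rng = np.random.default_rng(1)
        theta = rng.uniform(0.5, 2.0, 50)          # permeability multipliers
        y_runs = 3.0 / theta + 0.05 * rng.standard_normal(50)  # "12-day" runs

        # degree-2 surrogate: y(theta) ~ c0 + c1*theta + c2*theta**2
        A = np.vander(theta, 3, increasing=True)
        c, *_ = np.linalg.lstsq(A, y_runs, rcond=None)
        surrogate = lambda t: c[0] + c[1] * t + c[2] * t ** 2

        # bootstrap filter: weight prior samples by the data likelihood, resample
        prior = rng.uniform(0.5, 2.0, 100_000)
        y_obs, sigma = 2.0, 0.1                    # observed pressure (synthetic)
        w = np.exp(-0.5 * ((surrogate(prior) - y_obs) / sigma) ** 2)
        posterior = rng.choice(prior, size=10_000, p=w / w.sum())

    Because the surrogate is evaluated instead of the 12-day forward model, the hundred thousand likelihood evaluations above cost essentially nothing.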

  6. A CMOS-compatible large-scale monolithic integration of heterogeneous multi-sensors on flexible silicon for IoT applications

    KAUST Repository

    Nassar, Joanna M.

    2017-02-07

    We report CMOS-technology-enabled fabrication and system-level integration of a flexible bulk silicon (100) based multi-sensor platform which can simultaneously sense pressure, temperature, strain and humidity under various physical deformations. We also show an advanced wearable version for monitoring body vitals, which can enable advanced healthcare for IoT applications.

  7. A CMOS-compatible large-scale monolithic integration of heterogeneous multi-sensors on flexible silicon for IoT applications

    KAUST Repository

    Nassar, Joanna M.; Sevilla, Galo T.; Velling, Seneca J.; Cordero, Marlon D.; Hussain, Muhammad Mustafa

    2017-01-01

    We report CMOS-technology-enabled fabrication and system-level integration of a flexible bulk silicon (100) based multi-sensor platform which can simultaneously sense pressure, temperature, strain and humidity under various physical deformations. We also show an advanced wearable version for monitoring body vitals, which can enable advanced healthcare for IoT applications.

  8. Bayesian hierarchical model for large-scale covariance matrix estimation.

    Science.gov (United States)

    Zhu, Dongxiao; Hero, Alfred O

    2007-12-01

    Many bioinformatics problems implicitly depend on estimating a large-scale covariance matrix. The traditional approaches tend to give rise to high variance and low accuracy due to "overfitting." We cast the large-scale covariance matrix estimation problem into the Bayesian hierarchical model framework, and introduce dependency between covariance parameters. We demonstrate the advantages of our approach over the traditional approaches using simulations and OMICS data analysis.
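
    The problem and the remedy fit in a few lines. Here Ledoit-Wolf shrinkage serves as an easily available stand-in for the paper's Bayesian hierarchical estimator; both pull the noisy sample covariance toward a structured target:

        import numpy as np
        from sklearn.covariance import LedoitWolf, empirical_covariance

        X = np.random.randn(30, 200)        # 30 samples, 200 genes: p >> n
        S = empirical_covariance(X)         # rank-deficient, high-variance
        S_shrunk = LedoitWolf().fit(X).covariance_  # well-conditioned estimate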

  9. On the Phenomenology of an Accelerated Large-Scale Universe

    Directory of Open Access Journals (Sweden)

    Martiros Khurshudyan

    2016-10-01

    Full Text Available In this review paper, several new results towards the explanation of the accelerated expansion of the large-scale universe are discussed. Inflation is the early-time accelerated era, so the universe is in a sense symmetric with respect to accelerated expansion. The accelerated expansion is one of the long-standing problems in modern cosmology, and in physics in general. There are several well-defined approaches to this problem. One of them is the assumption that dark energy exists in the recent universe. It is believed that dark energy is responsible for antigravity, while dark matter has a gravitational nature and is responsible, in general, for structure formation. A different approach is an appropriate modification of general relativity, including, for instance, f(R) and f(T) theories of gravity. On the other hand, attempts to build theories of quantum gravity and assumptions about the existence of extra dimensions, possible variability of the gravitational constant and the speed of light (among others) provide interesting modifications of general relativity applicable to problems of modern cosmology, too. In particular, two groups of cosmological models are discussed here. In the first group, the problem of the accelerated expansion of the large-scale universe is addressed using a new idea, named varying ghost dark energy. The second group contains cosmological models addressing the same problem through either new parameterizations of the equation-of-state parameter of dark energy (like the varying polytropic gas) or nonlinear interactions between dark energy and dark matter. Moreover, for cosmological models involving varying ghost dark energy, massless particle creation in an appropriate radiation-dominated universe (when the background dynamics is due to general relativity) is demonstrated as well. Exploring the nature of the accelerated expansion of the large-scale universe involving generalized

  10. Ethics of large-scale change

    OpenAIRE

    Arler, Finn

    2006-01-01

    The subject of this paper is long-term, large-scale changes in human society. Some very significant examples of large-scale change are presented: human population growth, human appropriation of land and primary production, the human use of fossil fuels, and climate change. The question is posed of which kind of attitude is appropriate, from an ethical point of view, when dealing with large-scale changes like these. Three kinds of approaches are discussed: Aldo Leopold's mountain thinking, th...

  11. Patterning of self-assembled monolayers by phase-shifting mask and its applications in large-scale assembly of nanowires

    Energy Technology Data Exchange (ETDEWEB)

    Gao, Fan; Zhang, Dakuan; Wang, Jianyu; Sheng, Yun; Wang, Xinran; Chen, Kunji; Zhou, Minmin [Key Laboratory of Advanced Photonic and Electronic Materials and School of Electronic Science and Engineering, Nanjing University, Nanjing 210093 (China); Yan, Shancheng [Key Laboratory of Advanced Photonic and Electronic Materials and School of Electronic Science and Engineering, Nanjing University, Nanjing 210093 (China); School of Geography and Biological Information, Nanjing University of Posts and Telecommunications, Nanjing 210046 (China); Shen, Jiancang; Pan, Lijia; Shi, Yi, E-mail: yshi@nju.edu.cn [Key Laboratory of Advanced Photonic and Electronic Materials and School of Electronic Science and Engineering, Nanjing University, Nanjing 210093 (China); Collaborative Innovation Center of Advanced Micro-structures, Nanjing University, Nanjing 210093 (China)

    2015-01-26

    A nonselective micropatterning method for self-assembled monolayers (SAMs) based on a laser and a phase-shifting mask (PSM) is demonstrated. The laser beam is spatially modulated by a PSM, and periodic SAM patterns are generated sequentially through thermal desorption. Patterned wettability is achieved with alternating hydrophilic/hydrophobic stripes on octadecyltrichlorosilane monolayers. The substrate is then used to assemble CdS semiconductor nanowires (NWs) from a solution, obtaining well-aligned NWs in one step. Our results demonstrate the application potential of this technique in engineering SAMs for the integration of functional devices.

  12. Creating Large Scale Database Servers

    International Nuclear Information System (INIS)

    Becla, Jacek

    2001-01-01

    The BaBar experiment at the Stanford Linear Accelerator Center (SLAC) is designed to perform a high precision investigation of the decays of the B-meson produced from electron-positron interactions. The experiment, started in May 1999, will generate approximately 300TB/year of data for 10 years. All of the data will reside in Objectivity databases accessible via the Advanced Multi-threaded Server (AMS). To date, over 70TB of data have been placed in Objectivity/DB, making it one of the largest databases in the world. Providing access to such a large quantity of data through a database server is a daunting task. A full-scale testbed environment had to be developed to tune various software parameters and a fundamental change had to occur in the AMS architecture to allow it to scale past several hundred terabytes of data. Additionally, several protocol extensions had to be implemented to provide practical access to large quantities of data. This paper will describe the design of the database and the changes that we needed to make in the AMS for scalability reasons and how the lessons we learned would be applicable to virtually any kind of database server seeking to operate in the Petabyte region

  13. Creating Large Scale Database Servers

    Energy Technology Data Exchange (ETDEWEB)

    Becla, Jacek

    2001-12-14

    The BaBar experiment at the Stanford Linear Accelerator Center (SLAC) is designed to perform a high precision investigation of the decays of the B-meson produced from electron-positron interactions. The experiment, started in May 1999, will generate approximately 300TB/year of data for 10 years. All of the data will reside in Objectivity databases accessible via the Advanced Multi-threaded Server (AMS). To date, over 70TB of data have been placed in Objectivity/DB, making it one of the largest databases in the world. Providing access to such a large quantity of data through a database server is a daunting task. A full-scale testbed environment had to be developed to tune various software parameters and a fundamental change had to occur in the AMS architecture to allow it to scale past several hundred terabytes of data. Additionally, several protocol extensions had to be implemented to provide practical access to large quantities of data. This paper will describe the design of the database and the changes that we needed to make in the AMS for scalability reasons and how the lessons we learned would be applicable to virtually any kind of database server seeking to operate in the Petabyte region.

  14. Amplification of large-scale magnetic field in nonhelical magnetohydrodynamics

    KAUST Repository

    Kumar, Rohit

    2017-08-11

    It is typically assumed that the kinetic and magnetic helicities play a crucial role in the growth of a large-scale dynamo. In this paper, we demonstrate that helicity is not essential for the amplification of a large-scale magnetic field. For this purpose, we perform a nonhelical magnetohydrodynamic (MHD) simulation and show that the large-scale magnetic field can grow in nonhelical MHD when random external forcing is employed at a scale of 1/10 of the box size. The energy fluxes and shell-to-shell transfer rates computed from the numerical data show that the large-scale magnetic energy grows due to energy transfers from the velocity field at the forcing scales.
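
    The shell-to-shell transfer rates mentioned above are usually computed from the Fourier modes of the fields. For orientation, a sketch in LaTeX of one common mode-to-mode convention (an assumption here, not necessarily the exact definitions used in the paper): the energy transfer from a velocity mode u(p) to a magnetic mode b(k), with b(q) as mediator, and its sum over wavenumber shells,

        \[
        S^{bu}(\mathbf{k}\,|\,\mathbf{p}\,|\,\mathbf{q})
          = \mathrm{Im}\!\left[\,\{\mathbf{k}\cdot\mathbf{b}(\mathbf{q})\}\,
            \{\mathbf{u}(\mathbf{p})\cdot\mathbf{b}^{*}(\mathbf{k})\}\,\right],
        \qquad \mathbf{k}=\mathbf{p}+\mathbf{q},
        \]
        \[
        T^{bu}_{nm} = \sum_{\mathbf{k}\in\,\text{shell }n}\,
                      \sum_{\mathbf{p}\in\,\text{shell }m}
                      S^{bu}(\mathbf{k}\,|\,\mathbf{p}\,|\,\mathbf{q}),
        \]

    where T^{bu}_{nm} is the rate at which velocity shell m feeds magnetic shell n; a positive value at the largest scales is the signature of the growth reported in the abstract.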

  15. Prediction of etching-shape anomaly due to distortion of ion sheath around a large-scale three-dimensional structure by means of on-wafer monitoring technique and computer simulation

    International Nuclear Information System (INIS)

    Kubota, Tomohiro; Ohtake, Hiroto; Araki, Ryosuke; Yanagisawa, Yuuki; Samukawa, Seiji; Iwasaki, Takuya; Ono, Kohei; Miwa, Kazuhiro

    2013-01-01

    A system for predicting distortion of a profile during plasma etching was developed. The system combines measurement and simulation. An 'on-wafer sheath-shape sensor' for measuring the plasma-sheath parameters (sheath potential and thickness) on the stage of the plasma etcher was developed. The sensor has numerous small electrodes for measuring sheath potential and saturation ion-current density, from which sheath thickness can be calculated. The measurement results show reasonable dependence on source power, bias power and pressure. Based on a self-consistent calculation of the potential distribution and the ion- and electron-density distributions, a simulation of the sheath potential distribution around an arbitrary 3D structure and of the trajectories of ions incident from the plasma onto the structure was developed. To confirm the validity of the distortion prediction by comparing it with experimentally measured distortion, silicon trench etching under chlorine inductively coupled plasma (ICP) was performed using a sample with a vertical step. It was found that the etched trench was distorted when the distance from the step was several millimetres or less. The distortion angle was about 20° at maximum. Measurement was performed with the on-wafer sheath-shape sensor under the same plasma conditions as the etching. The ion incident angle, calculated as a function of distance from the step, successfully reproduced the experimentally measured angle, indicating that the combination of measurement by the on-wafer sheath-shape sensor and simulation can predict distortion of an etched structure. This prediction system will be useful for designing devices with large-scale 3D structures (such as those in MEMS) and determining the optimum etching conditions to obtain the desired profiles. (paper)

  16. Large-Scale 3D Printing: The Way Forward

    Science.gov (United States)

    Jassmi, Hamad Al; Najjar, Fady Al; Ismail Mourad, Abdel-Hamid

    2018-03-01

    Research on small-scale 3D printing has rapidly evolved, and numerous industrial products have been tested and successfully applied. Nonetheless, research on large-scale 3D printing, directed to large-scale applications such as construction and automotive manufacturing, still demands a great deal of effort. Large-scale 3D printing is an interdisciplinary topic and requires establishing a blended knowledge base from numerous research fields, including structural engineering, materials science, mechatronics, software engineering, artificial intelligence and architectural engineering. This review article summarizes key topics of relevance to new research trends in large-scale 3D printing, particularly pertaining to (1) technological solutions for additive construction (i.e. the 3D printers themselves), (2) materials science challenges, and (3) new design opportunities.

  17. Genome-wide development and use of microsatellite markers for large-scale genotyping applications in foxtail millet [Setaria italica (L.)].

    Science.gov (United States)

    Pandey, Garima; Misra, Gopal; Kumari, Kajal; Gupta, Sarika; Parida, Swarup Kumar; Chattopadhyay, Debasis; Prasad, Manoj

    2013-04-01

    The availability of well-validated, informative, co-dominant microsatellite markers and saturated genetic linkage maps has been limited in foxtail millet (Setaria italica L.). In view of this, we conducted a genome-wide analysis and identified 28 342 microsatellite repeat-motifs spanning 405.3 Mb of the foxtail millet genome. Trinucleotide repeats (∼48%) were more prevalent than dinucleotide repeats (∼46%). Of the 28 342 microsatellites, primer pairs were successfully designed for 21 294 (∼75%), and a total of 15 573 markers were physically mapped onto the 9 chromosomes of foxtail millet. About 159 markers were validated successfully in 8 accessions of Setaria sp. with ∼67% polymorphic potential. The high percentage (89.3%) of cross-genera transferability across millet and non-millet species, with higher transferability in bioenergy grasses (∼79% for switchgrass and ∼93% for pearl millet), signifies their importance in studying the bioenergy grasses. In silico comparative mapping of the 15 573 foxtail millet microsatellite markers against the mapping data of sorghum (16.9%), maize (14.5%) and rice (6.4%) indicated syntenic relationships among the chromosomes of foxtail millet and the target species. The results thus demonstrate the broad applicability of the developed microsatellite markers in germplasm characterization, phylogenetics, construction of genetic linkage maps for gene/quantitative trait loci discovery, and comparative mapping in foxtail millet, other millets and bioenergy grass species.
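
    The genome-wide repeat scan described above can be illustrated with a minimal sketch. The function name and the copy-number thresholds (six tandem copies for dinucleotides, five for trinucleotides) are illustrative assumptions, not the paper's actual mining criteria or pipeline:

        import re

        def find_ssrs(seq, min_repeats=((2, 6), (3, 5))):
            """Report (position, motif, copies) for simple tandem repeats.

            min_repeats pairs motif length with the minimum tandem copy
            number; the thresholds are illustrative assumptions.
            """
            hits = []
            for k, n in min_repeats:
                # a k-mer captured in group 1, then repeated n-1 or more times
                pattern = re.compile(r"([ACGT]{%d})\1{%d,}" % (k, n - 1))
                for m in pattern.finditer(seq.upper()):
                    hits.append((m.start(), m.group(1), len(m.group(0)) // k))
            return hits

        print(find_ssrs("TTATATATATATATGGGCAGCAGCAGCAGCAGTT"))
        # -> [(1, 'TA', 6), (16, 'GCA', 5)]; the reported frame may shift,
        # as in any naive regex-based SSR finder

    A production pipeline would additionally design flanking primers for each hit, which is the step that yields mappable markers.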

  18. The Nonmydriatic Fundus Camera in Diabetic Retinopathy Screening: A Cost-Effective Study with Evaluation for Future Large-Scale Application

    Directory of Open Access Journals (Sweden)

    Giuseppe Scarpa

    2016-01-01

    Aims. The study aimed to present the experience of a screening programme for early detection of diabetic retinopathy (DR) using a nonmydriatic fundus camera, evaluating the feasibility in terms of validity, resource absorption, and future advantages of a potential application in an Italian local health authority. Methods. Diabetic patients living in the town of Ponzano, Veneto Region (Northern Italy), were invited to enrol in the screening programme. The 'no prevention strategy', including the estimated blindness-related costs, was compared with the screening costs in order to evaluate a future extensive and feasible implementation of the procedure, through a budget impact approach. Results. Of the 498 eligible diabetic patients, 80% were enrolled in the screening programme. 115 patients (34%) were referred to an ophthalmologist, and 9 cases required prompt treatment for either proliferative DR or macular edema. Based on the pilot data, an extensive use of the investigated screening programme within the Greater Treviso area could prevent 6 cases of blindness every year, resulting in a saving of €271,543.32 (−13.71%). Conclusions. Fundus images obtained with a nonmydriatic fundus camera can be considered an effective, cost-sparing, and feasible screening tool for the early detection of DR, preventing blindness as a result of diabetes.
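
    The budget-impact comparison can be sketched with back-of-the-envelope arithmetic. The enrolment figures and the six prevented blindness cases come from the abstract; the unit costs below are hypothetical placeholders, not the study's figures:

        # Back-of-the-envelope budget impact; screening figures from the
        # abstract, unit costs below are hypothetical placeholders.
        eligible = 498
        enrolled = round(0.80 * eligible)          # 80% uptake reported

        cost_per_screen = 25.0                     # EUR, assumed
        societal_cost_per_blind_year = 20_000.0    # EUR, assumed
        blindness_prevented_per_year = 6           # extrapolated in the paper

        screening_cost = enrolled * cost_per_screen
        avoided_cost = blindness_prevented_per_year * societal_cost_per_blind_year
        print(f"net yearly saving: EUR {avoided_cost - screening_cost:,.2f}")

    The study's actual −13.71% figure would follow from its own cost inputs; the point of the sketch is only the structure of the comparison.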

  19. Large-scale sequential quadratic programming algorithms

    Energy Technology Data Exchange (ETDEWEB)

    Eldersveld, S.K.

    1992-09-01

    The problem addressed is the general nonlinear programming problem: finding a local minimizer for a nonlinear function subject to a mixture of nonlinear equality and inequality constraints. The methods studied are in the class of sequential quadratic programming (SQP) algorithms, which have previously proved successful for problems of moderate size. Our goal is to devise an SQP algorithm that is applicable to large-scale optimization problems, using sparse data structures and storing less curvature information but maintaining the property of superlinear convergence. The main features are: 1. The use of a quasi-Newton approximation to the reduced Hessian of the Lagrangian function. Only an estimate of the reduced Hessian matrix is required by our algorithm. The impact of not having available the full Hessian approximation is studied and alternative estimates are constructed. 2. The use of a transformation matrix Q. This allows the QP gradient to be computed easily when only the reduced Hessian approximation is maintained. 3. The use of a reduced-gradient form of the basis for the null space of the working set. This choice of basis is more practical than an orthogonal null-space basis for large-scale problems. The continuity condition for this choice is proven. 4. The use of incomplete solutions of quadratic programming subproblems. Certain iterates generated by an active-set method for the QP subproblem are used in place of the QP minimizer to define the search direction for the nonlinear problem. An implementation of the new algorithm has been obtained by modifying the code MINOS. Results and comparisons with MINOS and NPSOL are given for the new algorithm on a set of 92 test problems.
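
    The reduced-gradient null-space basis (feature 3 above) is easy to illustrate: given a working-set Jacobian A = [B S] with nonsingular basis B, the columns of Z = [-B^{-1}S; I] span the null space of A, so only the reduced gradient Z'g and a quasi-Newton approximation to the reduced Hessian Z'HZ need be maintained. A minimal numpy sketch (a dense toy, whereas the algorithm itself uses sparse data structures):

        import numpy as np

        rng = np.random.default_rng(0)
        m, n = 3, 7                       # constraints, variables
        A = rng.standard_normal((m, n))   # working-set Jacobian A = [B  S]
        B, S = A[:, :m], A[:, m:]         # B assumed nonsingular

        # reduced-gradient null-space basis: Z = [-B^{-1} S; I]
        Z = np.vstack([-np.linalg.solve(B, S), np.eye(n - m)])
        assert np.allclose(A @ Z, 0.0)    # columns of Z span null(A)

        # the algorithm then maintains the reduced gradient Z.T @ g and a
        # quasi-Newton approximation to the reduced Hessian Z.T @ H @ Z,
        # never forming the full Hessian H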

  20. Large scale PV plants - also in Denmark. Project report

    Energy Technology Data Exchange (ETDEWEB)

    Ahm, P [PA Energy, Malling (Denmark); Vedde, J [SiCon. Silicon and PV consulting, Birkeroed (Denmark)

    2011-04-15

    Large scale PV (LPV) plants, i.e. plants with a capacity of more than 200 kW, have since 2007 constituted an increasing share of global PV installations. In 2009, large scale PV plants with a cumulative power of more than 1.3 GWp were connected to the grid. The necessary design data for LPV plants in Denmark are available or can be found, although irradiance data could be improved. There seem to be very few institutional barriers for LPV projects, but as no real LPV projects have been processed so far, these findings have to be regarded as preliminary. The fast growing number of very large scale solar thermal plants for district heating applications supports these findings. It has further been investigated how to optimize the layout of LPV plants. Under Danish irradiance conditions, with several winter months of very low solar height, PV installations on flat surfaces will have to balance the requirements of physical space - and cost - against the loss of electricity production due to shadowing effects. The potential for LPV plants in Denmark is found in three main categories: PV installations on the flat roofs of large commercial buildings, PV installations on other large scale infrastructure such as noise barriers, and ground mounted PV installations. The technical potential for all three categories is found to be significant, in the range of 50-250 km2. In terms of energy harvest, PV plants under Danish conditions exhibit an overall efficiency of about 10% in converting the energy content of the light, compared to about 0.3% for biomass. The theoretical ground area needed to produce the present annual electricity consumption of Denmark, at 33-35 TWh, is about 300 km2. The Danish grid codes and the electricity safety regulations mention very little about PV and nothing about LPV plants. It is expected that LPV plants will be treated similarly to big wind turbines. A number of LPV plant scenarios have been investigated in detail based on real commercial offers and

  1. Large Scale Computations in Air Pollution Modelling

    DEFF Research Database (Denmark)

    Zlatev, Z.; Brandt, J.; Builtjes, P. J. H.

    Proceedings of the NATO Advanced Research Workshop on Large Scale Computations in Air Pollution Modelling, Sofia, Bulgaria, 6-10 July 1998.

  2. Automating large-scale reactor systems

    International Nuclear Information System (INIS)

    Kisner, R.A.

    1985-01-01

    This paper conveys a philosophy for developing automated large-scale control systems that behave in an integrated, intelligent, flexible manner. Methods for operating large-scale systems under varying degrees of equipment degradation are discussed, and a design approach that separates the effort into phases is suggested. 5 refs., 1 fig

  3. Decentralized Large-Scale Power Balancing

    DEFF Research Database (Denmark)

    Halvgaard, Rasmus; Jørgensen, John Bagterp; Poulsen, Niels Kjølstad

    2013-01-01

    problem is formulated as a centralized large-scale optimization problem but is then decomposed into smaller subproblems that are solved locally by each unit connected to an aggregator. For large-scale systems the method is faster than solving the full problem and can be distributed to include an arbitrary...

  4. Monitoring and Information Fusion for Search and Rescue Operations in Large-Scale Disasters

    National Research Council Canada - National Science Library

    Nardi, Daniele

    2002-01-01

    ... for information fusion with application to search-and-rescue and large scale disaster relief. The objective is to develop and to deploy tools to support the monitoring activities in an intervention caused by a large-scale disaster...

  5. Large Scale GW Calculations on the Cori System

    Science.gov (United States)

    Deslippe, Jack; Del Ben, Mauro; da Jornada, Felipe; Canning, Andrew; Louie, Steven

    The NERSC Cori system, powered by 9000+ Intel Xeon-Phi processors, represents one of the largest HPC systems for open-science in the United States and the world. We discuss the optimization of the GW methodology for this system, including both node level and system-scale optimizations. We highlight multiple large scale (thousands of atoms) case studies and discuss both absolute application performance and comparison to calculations on more traditional HPC architectures. We find that the GW method is particularly well suited for many-core architectures due to the ability to exploit a large amount of parallelism across many layers of the system. This work was supported by the U.S. Department of Energy, Office of Science, Basic Energy Sciences, Materials Sciences and Engineering Division, as part of the Computational Materials Sciences Program.

  6. Large-scale assembly of colloidal particles

    Science.gov (United States)

    Yang, Hongta

    This study reports a simple, roll-to-roll compatible coating technology for producing three-dimensional highly ordered colloidal crystal-polymer composites, colloidal crystals, and macroporous polymer membranes. A vertically beveled doctor blade is utilized to shear align silica microsphere-monomer suspensions to form large-area composites in a single step. The polymer matrix and the silica microspheres can be selectively removed to create colloidal crystals and self-standing macroporous polymer membranes. The thickness of the shear-aligned crystal is correlated with the viscosity of the colloidal suspension and the coating speed, and the correlations can be qualitatively explained by adapting the mechanisms developed for conventional doctor blade coating. Five important research topics related to the application of large-scale three-dimensional highly ordered macroporous films made by doctor blade coating are covered in this study. The first topic describes an invention in large-area and low-cost color reflective displays. This invention is inspired by heat pipe technology. The self-standing macroporous polymer films exhibit brilliant colors which originate from the Bragg diffraction of visible light from the three-dimensional highly ordered air cavities. The colors can easily be changed by tuning the size of the air cavities to cover the whole visible spectrum. When the air cavities are filled with a solvent which has the same refractive index as that of the polymer, the macroporous polymer films become completely transparent due to the index matching. When the solvent trapped in the cavities is evaporated by in-situ heating, the sample color changes back to brilliant color. This process is highly reversible and reproducible for thousands of cycles. The second topic reports the achievement of rapid and reversible vapor detection by using 3-D macroporous photonic crystals. Capillary condensation of a condensable vapor in the interconnected macropores leads to the

  7. Large scale network-centric distributed systems

    CERN Document Server

    Sarbazi-Azad, Hamid

    2014-01-01

    A highly accessible reference offering a broad range of topics and insights on large scale network-centric distributed systems Evolving from the fields of high-performance computing and networking, large scale network-centric distributed systems continues to grow as one of the most important topics in computing and communication and many interdisciplinary areas. Dealing with both wired and wireless networks, this book focuses on the design and performance issues of such systems. Large Scale Network-Centric Distributed Systems provides in-depth coverage ranging from ground-level hardware issu

  8. Newton Methods for Large Scale Problems in Machine Learning

    Science.gov (United States)

    Hansen, Samantha Leigh

    2014-01-01

    The focus of this thesis is on practical ways of designing optimization algorithms for minimizing large-scale nonlinear functions with applications in machine learning. Chapter 1 introduces the overarching ideas in the thesis. Chapters 2 and 3 are geared towards supervised machine learning applications that involve minimizing a sum of loss…

  9. Large-scale numerical simulations of plasmas

    International Nuclear Information System (INIS)

    Hamaguchi, Satoshi

    2004-01-01

    Recent trends in large-scale simulations of fusion plasmas and processing plasmas are briefly summarized. Many advanced simulation techniques have been developed for fusion plasmas, and some of these techniques are now applied to analyses of processing plasmas. (author)

  10. Large-scale computing with Quantum Espresso

    International Nuclear Information System (INIS)

    Giannozzi, P.; Cavazzoni, C.

    2009-01-01

    This paper gives a short introduction to Quantum Espresso: a distribution of software for atomistic simulations in condensed-matter physics, chemical physics, materials science, and to its usage in large-scale parallel computing.

  11. Large-scale regions of antimatter

    International Nuclear Information System (INIS)

    Grobov, A. V.; Rubin, S. G.

    2015-01-01

    A modified mechanism for the formation of large-scale antimatter regions is proposed. Antimatter appears owing to fluctuations of a complex scalar field that carries a baryon charge in the inflation era.

  12. Large-scale regions of antimatter

    Energy Technology Data Exchange (ETDEWEB)

    Grobov, A. V., E-mail: alexey.grobov@gmail.com; Rubin, S. G., E-mail: sgrubin@mephi.ru [National Research Nuclear University MEPhI (Russian Federation)

    2015-07-15

    A modified mechanism for the formation of large-scale antimatter regions is proposed. Antimatter appears owing to fluctuations of a complex scalar field that carries a baryon charge in the inflation era.

  13. Large-scale grid management; Storskala Nettforvaltning

    Energy Technology Data Exchange (ETDEWEB)

    Langdal, Bjoern Inge; Eggen, Arnt Ove

    2003-07-01

    The network companies in the Norwegian electricity industry now have to establish large-scale network management, a concept essentially characterized by (1) a broader focus (Broad Band, Multi Utility, ...) and (2) bigger units with large networks and more customers. Research done so far by SINTEF Energy Research shows that the approaches to large-scale network management may be structured according to three main challenges: centralization, decentralization and outsourcing. The article is part of a planned series.

  14. Large scale molecular simulations of nanotoxicity.

    Science.gov (United States)

    Jimenez-Cruz, Camilo A; Kang, Seung-gu; Zhou, Ruhong

    2014-01-01

    The widespread use of nanomaterials in biomedical applications has been accompanied by an increasing interest in understanding their interactions with tissues, cells, and biomolecules, and in particular, in how they might affect the integrity of cell membranes and proteins. In this mini-review, we present a summary of some of the recent studies on this important subject, especially from the point of view of large-scale molecular simulations. The carbon-based nanomaterials and noble metal nanoparticles are the main focus, with additional discussions on quantum dots and other nanoparticles as well. The driving forces for adsorption of fullerenes, carbon nanotubes, and graphene nanosheets onto proteins or cell membranes are found to be mainly hydrophobic interactions and the so-called π-π stacking (between aromatic rings), while for the noble metal nanoparticles the long-range electrostatic interactions play a bigger role. More interestingly, there is also growing evidence showing that nanotoxicity can have implications for the de novo design of nanomedicine. For example, the endohedral metallofullerenol Gd@C₈₂(OH)₂₂ is shown to inhibit tumor growth and metastasis by inhibiting the enzyme MMP-9, and graphene is illustrated to disrupt bacteria cell membranes by insertion/cutting as well as destructive extraction of lipid molecules. These recent findings have provided a better understanding of nanotoxicity at the molecular level and also suggested therapeutic potential in using the cytotoxicity of nanoparticles against cancer or bacteria cells. © 2014 Wiley Periodicals, Inc.

  15. Political consultation and large-scale research

    International Nuclear Information System (INIS)

    Bechmann, G.; Folkers, H.

    1977-01-01

    Large-scale research and policy consulting occupy an intermediary position between sociological sub-systems. While large-scale research coordinates science, policy, and production, policy consulting coordinates science, policy and the political sphere. In this position, large-scale research and policy consulting lack the institutional guarantees and rational background guarantees that are characteristic of their sociological environment. Large-scale research can neither deal with the production of innovative goods under considerations of profitability, nor can it hope for full recognition by the basis-oriented scientific community. Policy consulting neither has the competence assigned to the political system to make decisions, nor can it judge successfully by the critical standards of the established social sciences, at least as far as the present situation is concerned. This intermediary position of large-scale research and policy consulting supports, in three points, the thesis that this is a new form of institutionalization of science: (1) external control, (2) the organization form, and (3) the theoretical conception of large-scale research and policy consulting. (orig.) [de]

  16. Better use of the potential offered by large-scale heat-pumps - Planning, applications, client's opinion; Potenziale von Gross-Waermepumpen besser nutzen. Konzeption, Anwendungen, Kundensicht

    Energy Technology Data Exchange (ETDEWEB)

    Ehrbar, M.; Rognon, F. (eds.)

    2006-07-01

    These proceedings published by the Swiss Federal Office of Energy (SFOE) include the contributions presented at the 13{sup th} Conference of the Research Programme on Ambient Heat, Combined Heat and Power Systems and Cold-generation that was held at the University of Applied Sciences in Burgdorf, Switzerland in 2006. At the conference, ten papers were presented that covered technical and political aspects of the use of large-scale heat-pumps for heating and cooling applications. As an introduction, Fabrice Rognon, head of the SFOE programme, took a look at the relevance of large heat-pumps in Swiss energy policy. Peter Hubacher discussed the advantages and disadvantages offered by centralised and decentralised heat-pump systems from the energy and economics points of view. Bernhard Eggen took a look at heat-source concepts for large heat-pumps while Rolf Loehrer discussed meeting temperature requirements when extracting heat. Patrice Anstett presented a paper on the measurement of the parameters of an air/water heat pump with CO{sub 2} as a working fluid used for hot water preparation. A paper on a combined heating and cooling system for a food warehouse complex in southern Switzerland was presented by Vinicio Curti. Beat Wellig described ways to avoid excessive power consumption in building air-conditioning systems using exergy analysis. Jean-Philippe Borel took a further look at heat and cold generation using heat-pumps that use geothermal probes as a thermal source. The use of large-scale heat-pump systems in contracting installations was examined by Georg Dubacher. Finally, reinsurance expert Primo Bianchi discussed if ecology and economy are inconsistent with each other. Two of these contributions, those of Peter Hubacher, who discussed the energy and economics of centralised and decentralised heat-pump systems and Jean-Philippe Borel, who examined heat and cold generation using geothermal probes as a thermal source, are also covered in two separately

  17. Fast Simulation of Large-Scale Floods Based on GPU Parallel Computing

    Directory of Open Access Journals (Sweden)

    Qiang Liu

    2018-05-01

    Computing speed is a significant issue for large-scale flood simulations intended for real-time response in disaster prevention and mitigation. Even today, most large-scale flood simulations are generally run on supercomputers due to the massive amounts of data and computations necessary. In this work, a two-dimensional shallow water model based on an unstructured Godunov-type finite volume scheme was proposed for flood simulation. To realize fast simulation of large-scale floods on a personal computer, a Graphics Processing Unit (GPU)-based high-performance computing method using OpenACC was adopted to parallelize the shallow water model. An unstructured data management method was presented to control the data transport between the GPU and CPU (Central Processing Unit) with minimum overhead, and then both computation and data were offloaded from the CPU to the GPU, exploiting the computational capability of the GPU as much as possible. The parallel model was validated using various benchmarks and real-world case studies. The results demonstrate that speed-ups of up to one order of magnitude can be achieved in comparison with the serial model. The proposed parallel model provides a fast and reliable tool with which to quickly assess flood hazards in large-scale areas and, thus, has a bright application prospect for dynamic inundation risk identification and disaster assessment.
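
    For orientation, a serial toy version of one Godunov-type finite-volume update, here a 1-D Rusanov scheme rather than the paper's 2-D unstructured solver; in the paper, the analogous cell and edge loops are the parts offloaded to the GPU with OpenACC directives:

        import numpy as np

        g = 9.81

        def rusanov_step(h, hu, dx, dt):
            # one first-order finite-volume step for 1-D shallow water;
            # a stand-in for the 2-D unstructured scheme of the paper
            u = hu / h
            F = np.stack([hu, hu * u + 0.5 * g * h**2])   # physical flux
            c = np.abs(u) + np.sqrt(g * h)                # local wave speed
            a = np.maximum(c[:-1], c[1:])                 # interface speed
            U = np.stack([h, hu])
            # Rusanov (local Lax-Friedrichs) numerical flux at interfaces
            Fhat = 0.5 * (F[:, :-1] + F[:, 1:]) - 0.5 * a * (U[:, 1:] - U[:, :-1])
            U[:, 1:-1] -= dt / dx * (Fhat[:, 1:] - Fhat[:, :-1])
            return U[0], U[1]             # boundary cells held fixed (toy BC)

        h = np.where(np.arange(200) < 100, 2.0, 1.0)      # dam-break state
        hu = np.zeros_like(h)
        for _ in range(100):                              # CFL ~ 0.09 here
            h, hu = rusanov_step(h, hu, dx=1.0, dt=0.02)

    Because each cell update reads only neighbouring interface fluxes, the loops are embarrassingly parallel, which is what makes the GPU offload in the paper effective.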

  18. Outbreaks of Pox Disease Due to Canarypox-Like and Fowlpox-Like Viruses in Large-Scale Houbara Bustard Captive-Breeding Programmes, in Morocco and the United Arab Emirates.

    Science.gov (United States)

    Le Loc'h, G; Paul, M C; Camus-Bouclainville, C; Bertagnoli, S

    2016-12-01

    Infectious diseases can be serious threats for the success of reinforcement programmes of endangered species. Houbara Bustard species (Chlamydotis undulata and Chlamydotis macqueenii), whose populations declined in the last decades, have been captive-bred for conservation purposes for more than 15 years in North Africa and the Middle East. Field observations show that pox disease, caused by avipoxviruses (APV), regularly emerges in conservation projects of Houbara Bustard, despite a very strict implementation of both vaccination and biosecurity. Data collected from captive flocks of Houbara Bustard in Morocco from 2006 through 2013 and in the United Arab Emirates from 2011 through 2013 were analysed, and molecular investigations were carried out to define the virus strains involved. Pox cases (n = 2311) were observed during more than half of the year (88% of the months in Morocco, 54% in the United Arab Emirates). Monthly morbidity rates showed strong variations across the time periods considered, species and study sites: Four outbreaks were described during the study period on both sites. Molecular typing revealed that infections were mostly due to canarypox-like viruses in Morocco while fowlpox-like viruses were predominant in the United Arab Emirates. This study highlights that APV remain a major threat to consider in bird conservation initiatives. © 2015 Blackwell Verlag GmbH.

  19. Penalized Estimation in Large-Scale Generalized Linear Array Models

    DEFF Research Database (Denmark)

    Lund, Adam; Vincent, Martin; Hansen, Niels Richard

    2017-01-01

    Large-scale generalized linear array models (GLAMs) can be challenging to fit. Computation and storage of its tensor product design matrix can be impossible due to time and memory constraints, and previously considered design matrix free algorithms do not scale well with the dimension...

  20. ability in Large Scale Land Acquisitions in Kenya

    International Development Research Centre (IDRC) Digital Library (Canada)

    Corey Piccioni

    Kenya's national planning strategy, Vision 2030. Agriculture, natural resource exploitation, and infrastruc- ... sitions due to high levels of poverty and unclear or insecure land tenure rights in Kenya. Inadequate social ... lease to a private company over the expansive Yala Swamp to undertake large-scale irrigation farming.

  1. Chirping for large-scale maritime archaeological survey

    DEFF Research Database (Denmark)

    Grøn, Ole; Boldreel, Lars Ole

    2014-01-01

    Archaeological wrecks exposed on the sea floor are mapped using side-scan and multibeam techniques, whereas the detection of submerged archaeological sites, such as Stone Age settlements, and wrecks, partially or wholly embedded in sea-floor sediments, requires the application of high-resolution ...... the present state of this technology, it appears well suited to large-scale maritime archaeological mapping....

  2. Large-scale networks in engineering and life sciences

    CERN Document Server

    Findeisen, Rolf; Flockerzi, Dietrich; Reichl, Udo; Sundmacher, Kai

    2014-01-01

    This edited volume provides insights into and tools for the modeling, analysis, optimization, and control of large-scale networks in the life sciences and in engineering. Large-scale systems are often the result of networked interactions between a large number of subsystems, and their analysis and control are becoming increasingly important. The chapters of this book present the basic concepts and theoretical foundations of network theory and discuss its applications in different scientific areas such as biochemical reactions, chemical production processes, systems biology, electrical circuits, and mobile agents. The aim is to identify common concepts, to understand the underlying mathematical ideas, and to inspire discussions across the borders of the various disciplines.  The book originates from the interdisciplinary summer school “Large Scale Networks in Engineering and Life Sciences” hosted by the International Max Planck Research School Magdeburg, September 26-30, 2011, and will therefore be of int...

  3. Growth Limits in Large Scale Networks

    DEFF Research Database (Denmark)

    Knudsen, Thomas Phillip

    The subject of large scale networks is approached from the perspective of the network planner. An analysis of the long term planning problems is presented with the main focus on the changing requirements for large scale networks and the potential problems in meeting these requirements. The problems ... the fundamental technological resources in network technologies are analysed for scalability. Here several technological limits to continued growth are presented. The third step involves a survey of major problems in managing large scale networks given the growth of user requirements and the technological limitations. The rising complexity of network management with the convergence of communications platforms is shown as problematic for both automatic management feasibility and for manpower resource management. In the fourth step the scope is extended to include the present society with the DDN project as its...

  4. Accelerating sustainability in large-scale facilities

    CERN Multimedia

    Marina Giampietro

    2011-01-01

    Scientific research centres and large-scale facilities are intrinsically energy intensive, but how can big science improve its energy management and eventually contribute to the environmental cause with new cleantech? CERN’s commitment to providing tangible answers to these questions was sealed at the first workshop on energy management for large scale scientific infrastructures held in Lund, Sweden, on 13-14 October.   Participants at the energy management for large scale scientific infrastructures workshop. The workshop, co-organised with the European Spallation Source (ESS) and the European Association of National Research Facilities (ERF), tackled a recognised need for addressing energy issues in relation to science and technology policies. It brought together more than 150 representatives of Research Infrastructures (RIs) and energy experts from Europe and North America. “Without compromising our scientific projects, we can ...

  5. Successful application of FTA Classic Card technology and use of bacteriophage phi29 DNA polymerase for large-scale field sampling and cloning of complete maize streak virus genomes.

    Science.gov (United States)

    Owor, Betty E; Shepherd, Dionne N; Taylor, Nigel J; Edema, Richard; Monjane, Adérito L; Thomson, Jennifer A; Martin, Darren P; Varsani, Arvind

    2007-03-01

    Leaf samples from 155 maize streak virus (MSV)-infected maize plants were collected from 155 farmers' fields in 23 districts in Uganda in May/June 2005 by leaf-pressing infected samples onto FTA Classic Cards. Viral DNA was successfully extracted from cards stored at room temperature for 9 months. The diversity of 127 MSV isolates was analysed by PCR-generated RFLPs. Six representative isolates having different RFLP patterns and causing either severe, moderate or mild disease symptoms, were chosen for amplification from FTA cards by bacteriophage phi29 DNA polymerase using the TempliPhi system. Full-length genomes were inserted into a cloning vector using a unique restriction enzyme site, and sequenced. The 1.3-kb PCR product amplified directly from FTA-eluted DNA and used for RFLP analysis was also cloned and sequenced. Comparison of cloned whole genome sequences with those of the original PCR products indicated that the correct virus genome had been cloned and that no errors were introduced by the phi29 polymerase. This is the first successful large-scale application of FTA card technology to the field, and illustrates the ease with which large numbers of infected samples can be collected and stored for downstream molecular applications such as diversity analysis and cloning of potentially new virus genomes.

  6. Large-Scale Analysis of Art Proportions

    DEFF Research Database (Denmark)

    Jensen, Karl Kristoffer

    2014-01-01

    While literature often tries to impute mathematical constants into art, this large-scale study (11 databases of paintings and photos, around 200,000 items) shows a different truth. The analysis, consisting of the width/height proportions, shows a value of rarely if ever one (square) and with majo...

  7. The Expanded Large Scale Gap Test

    Science.gov (United States)

    1987-03-01

    NSWC TR 86-32, The Expanded Large Scale Gap Test, by T. P. Liddiard and D. Price, Research and Technology Department, March 1987. Approved for public ... arises, to reduce the spread in the LSGT 50% gap value.) The worst charges, such as those with the highest or lowest densities, the largest re-pressed...

  8. A fast approach to generate large-scale topographic maps based on new Chinese vehicle-borne Lidar system

    International Nuclear Information System (INIS)

    Youmei, Han; Bogang, Yang

    2014-01-01

    Large-scale topographic maps are important basic information for city and regional planning and management. Traditional large-scale mapping methods are mostly based on manual mapping and photogrammetry. Manual mapping is inefficient and limited by the environment, while photogrammetric methods (such as low-altitude aerial mapping) are an economical and effective way to map wide, regular areas at large scale but do not work well in small areas due to the high cost in manpower and resources. In recent years, vehicle-borne LIDAR technology has developed rapidly, and its application in surveying and mapping is becoming a new topic. The main objective of this investigation is to explore the potential of vehicle-borne LIDAR technology for fast mapping of large scale topographic maps, based on a new Chinese vehicle-borne LIDAR system. The study examined how to use this system's measurement technology to map large scale topographic maps. After field data capture, maps can be produced in the office from the LIDAR data (point cloud) by software we programmed ourselves. In addition, the detailed process and an accuracy analysis are presented through an actual case. The results show that this new technology provides a fast method to generate large scale topographic maps, with high efficiency and accuracy compared to traditional methods

  9. An interactive display system for large-scale 3D models

    Science.gov (United States)

    Liu, Zijian; Sun, Kun; Tao, Wenbing; Liu, Liman

    2018-04-01

    With the improvement of 3D reconstruction theory and the rapid development of computer hardware technology, reconstructed 3D models are growing in scale and complexity. Models with tens of thousands of 3D points or triangular meshes are common in practical applications. Due to storage and computing power limitations, it is difficult for common 3D display software, such as MeshLab, to achieve real-time display of and interaction with large scale 3D models. In this paper, we propose a display system for large-scale 3D scene models. We construct the LOD (Levels of Detail) model of the reconstructed 3D scene in advance, and then use an out-of-core, view-dependent, multi-resolution rendering scheme to realize real-time display of the large-scale 3D model. With the proposed method, our display system is able to render in real time while roaming in the reconstructed scene, and 3D camera poses can also be displayed. Furthermore, memory consumption can be significantly decreased via an internal and external memory exchange mechanism, so that it is possible to display a large scale reconstructed scene with millions of 3D points or triangular meshes on a regular PC with only 4GB RAM.
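
    The view-dependent part of such a renderer reduces to a screen-space error test per node of the LOD hierarchy. A minimal sketch (the function name, thresholds and per-level errors are illustrative assumptions, not the paper's implementation):

        import math

        def pick_lod(level_errors, distance, fov_deg=60.0,
                     screen_h=1080, max_err_px=1.5):
            # choose the coarsest level whose geometric error, projected
            # to the screen, stays below a pixel threshold; real systems
            # apply this per node of the hierarchy and stream the chosen
            # level from external memory on demand
            k = screen_h / (2.0 * math.tan(math.radians(fov_deg) / 2.0))
            for level, err in enumerate(level_errors):    # coarse -> fine
                if k * err / max(distance, 1e-6) <= max_err_px:
                    return level
            return len(level_errors) - 1                  # finest fallback

        print(pick_lod([8.0, 2.0, 0.5, 0.1], distance=300.0))   # -> 3

    Nodes that fail the test are refined; nodes that pass keep their coarse geometry, which is what bounds both the triangle count and the resident memory.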

  10. Configuration management in large scale infrastructure development

    NARCIS (Netherlands)

    Rijn, T.P.J. van; Belt, H. van de; Los, R.H.

    2000-01-01

    Large Scale Infrastructure (LSI) development projects, such as the construction of roads, railways and other civil engineering (water)works, are tendered differently today than a decade ago. The traditional workflow requested quotes from construction companies for construction works where the works to be

  11. Large-scale Motion of Solar Filaments

    Indian Academy of Sciences (India)

    tribpo

    Large-scale Motion of Solar Filaments. Pavel Ambrož, Astronomical Institute of the Acad. Sci. of the Czech Republic, CZ-25165 Ondrejov, The Czech Republic. e-mail: pambroz@asu.cas.cz. Alfred Schroll, Kanzelhöhe Solar Observatory of the University of Graz, A-9521 Treffen, Austria. e-mail: schroll@solobskh.ac.at.

  12. Sensitivity analysis for large-scale problems

    Science.gov (United States)

    Noor, Ahmed K.; Whitworth, Sandra L.

    1987-01-01

    The development of efficient techniques for calculating sensitivity derivatives is studied. The objective is to present a computational procedure for calculating sensitivity derivatives as part of performing structural reanalysis for large-scale problems. The scope is limited to framed type structures. Both linear static analysis and free-vibration eigenvalue problems are considered.
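
    For the linear static case, the sensitivity derivative follows by differentiating K(p)u = f, giving du/dp = -K^{-1}(dK/dp)u, so reanalysis can reuse the factorized stiffness matrix. A minimal numpy sketch on a toy 2-DOF spring model (the model and numbers are illustrative, not from the paper):

        import numpy as np

        # static analysis K(p) u = f; differentiating gives
        # du/dp = -K^{-1} (dK/dp) u, reusing the factorization of K
        f = np.array([1.0, 0.0])

        def K(p):                      # toy 2-DOF spring model, p = spring 1
            return np.array([[p + 2.0, -2.0],
                             [-2.0,     2.0]])

        p = 3.0
        u = np.linalg.solve(K(p), f)   # displacements, here [1/3, 1/3]

        dK_dp = np.array([[1.0, 0.0],
                          [0.0, 0.0]])
        du_dp = np.linalg.solve(K(p), -dK_dp @ u)     # analytic sensitivity

        eps = 1e-6                     # finite-difference check
        u_fd = (np.linalg.solve(K(p + eps), f) - u) / eps
        print(np.allclose(du_dp, u_fd, atol=1e-5))    # True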

  13. Ethics of large-scale change

    DEFF Research Database (Denmark)

    Arler, Finn

    2006-01-01

    , which kind of attitude is appropriate when dealing with large-scale changes like these from an ethical point of view. Three kinds of approaches are discussed: Aldo Leopold's mountain thinking, the neoclassical economists' approach, and finally the so-called Concentric Circle Theories approach...

  14. The origin of large scale cosmic structure

    International Nuclear Information System (INIS)

    Jones, B.J.T.; Palmer, P.L.

    1985-01-01

    The paper concerns the origin of large scale cosmic structure. The evolution of density perturbations, the nonlinear regime (Zel'dovich's solution and others), the Gott and Rees clustering hierarchy, the spectrum of condensations, and biassed galaxy formation, are all discussed. (UK)

  15. Large-scale perspective as a challenge

    NARCIS (Netherlands)

    Plomp, M.G.A.

    2012-01-01

    1. Scale forms a challenge for chain researchers: when exactly is something ‘large-scale’? What are the underlying factors (e.g. number of parties, data, objects in the chain, complexity) that determine this? It appears to be a continuum between small- and large-scale, where positioning on that

  16. Learning from large scale neural simulations

    DEFF Research Database (Denmark)

    Serban, Maria

    2017-01-01

    Large-scale neural simulations have the marks of a distinct methodology which can be fruitfully deployed to advance scientific understanding of the human brain. Computer simulation studies can be used to produce surrogate observational data for better conceptual models and new how...

  17. Large-Scale Outflows in Seyfert Galaxies

    Science.gov (United States)

    Colbert, E. J. M.; Baum, S. A.

    1995-12-01

    Highly collimated outflows extend out to Mpc scales in many radio-loud active galaxies. In Seyfert galaxies, which are radio-quiet, the outflows extend out to kpc scales and do not appear to be as highly collimated. In order to study the nature of large-scale (≳1 kpc) outflows in Seyferts, we have conducted optical, radio and X-ray surveys of a distance-limited sample of 22 edge-on Seyfert galaxies. Results of the optical emission-line imaging and spectroscopic survey imply that large-scale outflows are present in ≳1/4 of all Seyferts. The radio (VLA) and X-ray (ROSAT) surveys show that large-scale radio and X-ray emission is present at about the same frequency. Kinetic luminosities of the outflows in Seyferts are comparable to those in starburst-driven superwinds. Large-scale radio sources in Seyferts appear diffuse, but do not resemble radio halos found in some edge-on starburst galaxies (e.g. M82). We discuss the feasibility of the outflows being powered by the active nucleus (e.g. a jet) or a circumnuclear starburst.

  18. Stability of large scale interconnected dynamical systems

    International Nuclear Information System (INIS)

    Akpan, E.P.

    1993-07-01

    Large scale systems modelled by a system of ordinary differential equations are considered and necessary and sufficient conditions are obtained for the uniform asymptotic connective stability of the systems using the method of cone-valued Lyapunov functions. It is shown that this model significantly improves the existing models. (author). 9 refs
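
    For orientation, a sketch in LaTeX of the classical vector-Lyapunov test for interconnected systems that the cone-valued approach generalizes (this scalar comparison form is an assumption for illustration, not the paper's exact conditions): with aggregate functions V_i for the subsystems satisfying

        \[
        \dot{V}_i(x_i) \;\le\; -\alpha_i V_i(x_i) \;+\; \sum_{j\neq i}\beta_{ij} V_j(x_j),
        \qquad i=1,\dots,N,
        \]

    connective asymptotic stability follows when the N x N test matrix W, with w_{ii} = \alpha_i and w_{ij} = -\beta_{ij} for i \neq j, is an M-matrix, i.e. all its leading principal minors are positive. Replacing the scalar comparison functions with cone-valued ones is what lets the cited paper weaken these conditions.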

  19. Large-scale structure of the Universe

    International Nuclear Information System (INIS)

    Doroshkevich, A.G.

    1978-01-01

    The problems discussed at the 'Large-scale Structure of the Universe' symposium are considered at a popular level. Described are the cell structure of the galaxy distribution in the Universe and the principles of mathematically modelling the galaxy distribution. Images of cell structures obtained after computer reprocessing are given. Three hypotheses (vortical, entropic and adiabatic), suggesting various processes of galaxy and galaxy-cluster origin, are discussed, and a considerable advantage of the adiabatic hypothesis is recognized. The relict radiation is considered as a method of directly studying the processes taking place in the Universe. The large-scale peculiarities and small-scale fluctuations of the relict radiation temperature enable one to estimate the disturbance properties at the pre-galaxy stage. The discussion of problems pertaining to the study of the hot gas contained in galaxy clusters, and of the interactions within galaxy clusters and with the inter-galaxy medium, is recognized to be a notable contribution to the development of theoretical and observational cosmology
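
    The evolution of density perturbations mentioned in the abstract is governed, in the linear regime, by the textbook growth equation, and Zel'dovich's solution extends it into the weakly nonlinear regime; for orientation (standard results, not specific to this paper):

        \[
        \ddot{\delta} + 2H\dot{\delta} - 4\pi G\,\bar{\rho}\,\delta = 0,
        \qquad \delta \equiv \frac{\delta\rho}{\bar{\rho}},
        \]
        \[
        \mathbf{x}(\mathbf{q},t) = \mathbf{q} - b(t)\,\nabla_{\mathbf{q}}\Psi(\mathbf{q}),
        \]

    where the first equation gives the growing mode δ ∝ t^{2/3} ∝ a in an Einstein-de Sitter universe, and the second (Zel'dovich's solution) maps initial Lagrangian positions q to Eulerian positions x; caustics of this map mark the pancake-like elements of the cell structure described above.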

  20. Challenges for Large Scale Structure Theory

    CERN Multimedia

    CERN. Geneva

    2018-01-01

    I will describe some of the outstanding questions in Cosmology where answers could be provided by observations of the Large Scale Structure of the Universe at late times. I will discuss some of the theoretical challenges which will have to be overcome to extract this information from the observations. I will describe some of the theoretical tools that might be useful to achieve this goal.

  1. Methods for Large-Scale Nonlinear Optimization.

    Science.gov (United States)

    1980-05-01

    METHODS FOR LARGE-SCALE NONLINEAR OPTIMIZATION, by Philip E. Gill, Walter Murray, Michael A. Saunders, and Margaret H. Wright (Stanford, California 94305). ... A typical iteration can be partitioned so that ... where B is an m × m basis matrix. This partition effectively divides the variables into three classes. ... attention is given to the standard of the coding or the documentation. A much better way of obtaining mathematical software is from a software library

  2. Large scale inhomogeneities and the cosmological principle

    International Nuclear Information System (INIS)

    Lukacs, B.; Meszaros, A.

    1984-12-01

    The compatibility of cosmological principles and possible large-scale inhomogeneities of the Universe is discussed. It seems that the strongest symmetry principle which is still compatible with reasonable inhomogeneities is a full conformal symmetry in the 3-space defined by the cosmological velocity field; but even in such a case, the standard model is isolated from the inhomogeneous ones when the whole evolution is considered. (author)

  3. Large-scale Complex IT Systems

    OpenAIRE

    Sommerville, Ian; Cliff, Dave; Calinescu, Radu; Keen, Justin; Kelly, Tim; Kwiatkowska, Marta; McDermid, John; Paige, Richard

    2011-01-01

    This paper explores the issues around the construction of large-scale complex systems which are built as 'systems of systems' and suggests that there are fundamental reasons, derived from the inherent complexity in these systems, why our current software engineering methods and techniques cannot be scaled up to cope with the engineering challenges of constructing such systems. It then goes on to propose a research and education agenda for software engineering that identifies the major challen...

  4. Large-scale complex IT systems

    OpenAIRE

    Sommerville, Ian; Cliff, Dave; Calinescu, Radu; Keen, Justin; Kelly, Tim; Kwiatkowska, Marta; McDermid, John; Paige, Richard

    2012-01-01

    12 pages, 2 figures. This paper explores the issues around the construction of large-scale complex systems which are built as 'systems of systems' and suggests that there are fundamental reasons, derived from the inherent complexity in these systems, why our current software engineering methods and techniques cannot be scaled up to cope with the engineering challenges of constructing such systems. It then goes on to propose a research and education agenda for software engineering that ident...

  5. LAVA: Large scale Automated Vulnerability Addition

    Science.gov (United States)

    2016-05-23

    LAVA: Large-scale Automated Vulnerability Addition, by Brendan Dolan-Gavitt, Patrick Hulin, Tim Leek, Fredrich Ulrich, and Ryan Whelan. ... released, and thus rapidly become stale. We can expect tools to have been trained to detect bugs that have been released. Given the commercial price tag ... (low TCN) and dead (low liveness) program data is a powerful one for vulnerability injection. The DUAs it identifies are internal program quantities

  6. Large-Scale Transit Signal Priority Implementation

    OpenAIRE

    Lee, Kevin S.; Lozner, Bailey

    2018-01-01

    In 2016, the District Department of Transportation (DDOT) deployed Transit Signal Priority (TSP) at 195 intersections in highly urbanized areas of Washington, DC. In collaboration with a broader regional implementation, and in partnership with the Washington Metropolitan Area Transit Authority (WMATA), DDOT set out to apply a systems engineering–driven process to identify, design, test, and accept a large-scale TSP system. This presentation will highlight project successes and lessons learned.

  7. Results of large scale thyroid dose reconstruction in Ukraine

    International Nuclear Information System (INIS)

    Likhtarev, I.; Sobolev, B.; Kairo, I.; Tabachny, L.; Jacob, P.; Proehl, G.; Goulko, G.

    1996-01-01

    In 1993, the Ukrainian Ministry on Chernobyl Affairs initiated a large scale reconstruction of thyroid exposures to radioiodine after the Chernobyl accident. The objective was to provide the state policy on social compensation with a scientific background. About 7000 settlements from five contaminated regions have received certificates of thyroid exposure since then. The certificates contain estimates of the average thyroid dose from 131 I for seven age groups. The primary dose estimates used about 150000 direct measurements of the 131 I activity in the thyroid glands of inhabitants of the Chernigiv, Kiev, Zhytomyr, and also Vinnytsa regions. Parameters of the assumed intake function were related to environmental and questionnaire data. The dose reconstruction for the remaining territory was based on empirical relations between the intake function parameters and the 137 Cs deposition. The relationship was specified by the distance and the direction to the Chernobyl Nuclear Power Plant. The relations were first derived for territories with direct measurements and then extended to other areas using daily iodine releases and atmospheric transport routes. The results of the dose reconstruction made it possible to mark zones on the territory of Ukraine according to the average levels of thyroid exposure. These zones underlie a policy of post-accident health care and social compensations. Another important application of the thyroid dose reconstruction is the radiation risk assessment of thyroid cancer among people exposed during childhood due to the Chernobyl accident

  8. Rotation invariant fast features for large-scale recognition

    Science.gov (United States)

    Takacs, Gabriel; Chandrasekhar, Vijay; Tsai, Sam; Chen, David; Grzeszczuk, Radek; Girod, Bernd

    2012-10-01

    We present an end-to-end feature description pipeline which uses a novel interest point detector and Rotation-Invariant Fast Feature (RIFF) descriptors. The proposed RIFF algorithm is 15× faster than SURF [1] while producing large-scale retrieval results that are comparable to SIFT [2]. Such high-speed features benefit a range of applications from Mobile Augmented Reality (MAR) to web-scale image retrieval and analysis.

  9. On a Game of Large-Scale Projects Competition

    Science.gov (United States)

    Nikonov, Oleg I.; Medvedeva, Marina A.

    2009-09-01

    The paper is devoted to game-theoretical control problems motivated by economic decision-making situations arising in the realization of large-scale projects, such as designing and putting into operation new gas or oil pipelines. A non-cooperative two-player game is considered with payoff functions of a special type, for which standard existence theorems and algorithms for searching for Nash equilibrium solutions are not applicable. The paper is based on and develops the results obtained in [1]-[5].

  10. Large scale particle image velocimetry with helium filled soap bubbles

    Energy Technology Data Exchange (ETDEWEB)

    Bosbach, Johannes; Kuehn, Matthias; Wagner, Claus [German Aerospace Center (DLR), Institute of Aerodynamics and Flow Technology, Goettingen (Germany)

    2009-03-15

    The application of particle image velocimetry (PIV) to measurement of flows on large scales is a challenging necessity especially for the investigation of convective air flows. Combining helium filled soap bubbles as tracer particles with high power quality switched solid state lasers as light sources allows conducting PIV on scales of the order of several square meters. The technique was applied to mixed convection in a full scale double aisle aircraft cabin mock-up for validation of computational fluid dynamics simulations. (orig.)
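
    Whatever the tracer, the evaluation core of PIV is the cross-correlation of interrogation windows between two frames; large-scale PIV with helium-filled soap bubbles differs mainly in seeding and illumination, not in this step. A minimal FFT-based sketch (function name and window size are illustrative):

        import numpy as np

        def window_displacement(w1, w2):
            # mean particle displacement between two interrogation windows
            # via FFT-based circular cross-correlation: the core PIV step
            a = w1 - w1.mean()
            b = w2 - w2.mean()
            corr = np.fft.irfft2(np.conj(np.fft.rfft2(a)) * np.fft.rfft2(b),
                                 s=a.shape)
            peak = np.unravel_index(np.argmax(corr), corr.shape)
            # wrap circular indices to signed pixel shifts
            return tuple(p if p <= s // 2 else p - s
                         for p, s in zip(peak, a.shape))

        w1 = np.zeros((32, 32)); w1[10, 12] = 1.0     # toy "particle"
        w2 = np.roll(np.roll(w1, 3, axis=0), -2, axis=1)
        print(window_displacement(w1, w2))            # (3, -2)

    Dividing the pixel shift by the interframe time and the magnification yields the local velocity vector; repeating over a grid of windows gives the vector field.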

  11. Large scale particle image velocimetry with helium filled soap bubbles

    Science.gov (United States)

    Bosbach, Johannes; Kühn, Matthias; Wagner, Claus

    2009-03-01

    The application of Particle Image Velocimetry (PIV) to measurement of flows on large scales is a challenging necessity especially for the investigation of convective air flows. Combining helium filled soap bubbles as tracer particles with high power quality switched solid state lasers as light sources allows conducting PIV on scales of the order of several square meters. The technique was applied to mixed convection in a full scale double aisle aircraft cabin mock-up for validation of Computational Fluid Dynamics simulations.

  12. Photorealistic large-scale urban city model reconstruction.

    Science.gov (United States)

    Poullis, Charalambos; You, Suya

    2009-01-01

    The rapid and efficient creation of virtual environments has become a crucial part of virtual reality applications. In particular, civil and defense applications often require and employ detailed models of operations areas for training, simulations of different scenarios, planning for natural or man-made events, monitoring, surveillance, games, and films. A realistic representation of the large-scale environments is therefore imperative for the success of such applications, since it increases the immersive experience of its users and helps reduce the difference between physical and virtual reality. However, the task of creating such large-scale virtual environments remains time-consuming, manual work. In this work, we propose a novel method for the rapid reconstruction of photorealistic large-scale virtual environments. First, a novel, extendible, parameterized geometric primitive is presented for the automatic building identification and reconstruction of building structures. In addition, buildings with complex roofs containing complex linear and nonlinear surfaces are reconstructed interactively using a linear polygonal and a nonlinear primitive, respectively. Second, we present a rendering pipeline for the composition of photorealistic textures, which, unlike existing techniques, can recover missing or occluded texture information by integrating multiple information captured from different optical sensors (ground, aerial, and satellite).

  13. The Software Reliability of Large Scale Integration Circuit and Very Large Scale Integration Circuit

    OpenAIRE

    Artem Ganiyev; Jan Vitasek

    2010-01-01

    This article describes an evaluation method for the faultless function of large scale integration circuits (LSI) and very large scale integration circuits (VLSI). The article presents a comparative analysis of the factors which determine the faultlessness of integrated circuits, an analysis of existing methods, and a model for evaluating the faultless function of LSI and VLSI. The main part describes a proposed algorithm and a program for the analysis of fault rates in LSI and VLSI circuits.

  14. Large scale injection test (LASGIT) modelling

    International Nuclear Information System (INIS)

    Arnedo, D.; Olivella, S.; Alonso, E.E.

    2010-01-01

    Document available in extended abstract form only. With the objective of understanding the gas flow processes through clay barriers in schemes of radioactive waste disposal, the Lasgit in situ experiment was planned and is currently in progress. The modelling of the experiment will permit a better understanding of the responses, confirm hypotheses about mechanisms and processes, and provide lessons for the design of future experiments. The experiment and modelling activities are included in the project FORGE (FP7). The in situ large scale injection test Lasgit is currently being performed at the Aespoe Hard Rock Laboratory by SKB and BGS. A schematic layout of the test is shown. The deposition hole follows the KBS3 scheme. A copper canister is installed on the axis of the deposition hole, surrounded by blocks of highly compacted MX-80 bentonite. A concrete plug is placed at the top of the buffer. A metallic lid anchored to the surrounding host rock is included in order to prevent vertical movements of the whole system during gas injection stages (high gas injection pressures are expected to be reached). Hydration of the buffer material is achieved by injecting water through filter mats, two placed at the rock walls and two at the interfaces between bentonite blocks. Water is also injected through the 12 canister filters. Gas injection stages are performed by injecting gas into some of the canister injection filters. Since the water pressure and the stresses (swelling pressure development) will be high during gas injection, it is necessary to inject at high gas pressures. This implies mechanical couplings, as gas penetrates after the gas entry pressure is reached and may produce deformations which in turn lead to permeability increments. A 3D hydro-mechanical numerical model of the test using CODE-BRIGHT is presented. The domain considered for the modelling is shown. The materials considered in the simulation are the MX-80 bentonite blocks (cylinders and rings), the concrete plug

  15. RESTRUCTURING OF THE LARGE-SCALE SPRINKLERS

    Directory of Open Access Journals (Sweden)

    Paweł Kozaczyk

    2016-09-01

    One of the best ways for agriculture to become independent from shortages of precipitation is irrigation. In the 1970s and 1980s a number of large-scale sprinkler systems were built in Wielkopolska. At the end of the 1970s, 67 sprinkler systems with a total area of 6400 ha were installed in the Poznan province; the average size of a system reached 95 ha. In 1989 there were 98 systems, covering more than 10 130 ha. The study was conducted on 7 large systems with areas ranging from 230 to 520 hectares in 1986-1998. After the introduction of the market economy in the early 1990s and ownership changes in agriculture, the large-scale sprinkler systems underwent significant or total devastation. Land belonging to the State Farms of the State Agricultural Property Agency was leased or sold, and the new owners used the existing sprinklers to a very small extent. This was accompanied by changes in crop structure and demand structure, an increase in operating costs, and a threefold increase in electricity prices. In practice, the operation of large-scale irrigation encountered all kinds of barriers and limitations: system-design constraints, supply difficulties, and high levels of equipment failure, none of which encouraged rational use of the available sprinklers. A field survey of the area documented the current status of the remaining irrigation infrastructure. The scheme adopted for the restructuring of Polish agriculture was not the best solution, causing massive destruction of assets previously invested in the sprinkler systems.

  16. Optical interconnect for large-scale systems

    Science.gov (United States)

    Dress, William

    2013-02-01

    This paper presents a switchless, optical interconnect module that serves as a node in a network of identical distribution modules for large-scale systems. Thousands to millions of hosts or endpoints may be interconnected by a network of such modules, avoiding the need for multi-level switches. Several common network topologies are reviewed and their scaling properties assessed. The concept of message-flow routing is discussed in conjunction with the unique properties enabled by the optical distribution module where it is shown how top-down software control (global routing tables, spanning-tree algorithms) may be avoided.

  17. Adaptive visualization for large-scale graph

    International Nuclear Information System (INIS)

    Nakamura, Hiroko; Shinano, Yuji; Ohzahata, Satoshi

    2010-01-01

    We propose an adaptive visualization technique for representing a large-scale hierarchical dataset within limited display space. A hierarchical dataset has nodes and links showing the parent-child relationship between the nodes. These nodes and links are described using graphics primitives. When the number of these primitives is large, it is difficult to recognize the structure of the hierarchical data because many primitives are overlapped within a limited region. To overcome this difficulty, we propose an adaptive visualization technique for hierarchical datasets. The proposed technique selects an appropriate graph style according to the nodal density in each area. (author)

  18. Neutrinos and large-scale structure

    International Nuclear Information System (INIS)

    Eisenstein, Daniel J.

    2015-01-01

    I review the use of cosmological large-scale structure to measure properties of neutrinos and other relic populations of light relativistic particles. With experiments to measure the anisotropies of the cosmic microwave background and the clustering of matter at low redshift, we have now securely measured a relativistic background with density appropriate to the cosmic neutrino background. Our limits on the mass of the neutrino continue to shrink. Experiments coming in the next decade will greatly improve the available precision on searches for the energy density of novel relativistic backgrounds and the mass of neutrinos.

  19. Puzzles of large scale structure and gravitation

    International Nuclear Information System (INIS)

    Sidharth, B.G.

    2006-01-01

    We consider the puzzle of cosmic voids bounded by two-dimensional structures of galactic clusters, as well as a puzzle pointed out by Weinberg: How can the mass of a typical elementary particle depend on a cosmic parameter like the Hubble constant? An answer to the first puzzle is proposed in terms of 'scaled' quantum-mechanics-like behaviour which appears at large scales. The second puzzle can be answered by showing that the gravitational mass of an elementary particle has a Machian character (see Ahmed N. Cantorian small world, Mach's principle and the universal mass network. Chaos, Solitons and Fractals 2004;21(4)).

  20. Neutrinos and large-scale structure

    Energy Technology Data Exchange (ETDEWEB)

    Eisenstein, Daniel J. [Daniel J. Eisenstein, Harvard-Smithsonian Center for Astrophysics, 60 Garden St., MS #20, Cambridge, MA 02138 (United States)

    2015-07-15

    I review the use of cosmological large-scale structure to measure properties of neutrinos and other relic populations of light relativistic particles. With experiments to measure the anisotropies of the cosmic microwave background and the clustering of matter at low redshift, we have now securely measured a relativistic background with density appropriate to the cosmic neutrino background. Our limits on the mass of the neutrino continue to shrink. Experiments coming in the next decade will greatly improve the available precision on searches for the energy density of novel relativistic backgrounds and the mass of neutrinos.

  1. Concepts for Large Scale Hydrogen Production

    OpenAIRE

    Jakobsen, Daniel; Åtland, Vegar

    2016-01-01

    The objective of this thesis is to perform a techno-economic analysis of large-scale, carbon-lean hydrogen production in Norway, in order to evaluate various production methods and estimate a breakeven price level. Norway possesses vast energy resources and the export of oil and gas is vital to the country's economy. The results of this thesis indicate that hydrogen represents a viable, carbon-lean opportunity to utilize these resources, which can prove key in the future of Norwegian energy e...

  2. Stabilization Algorithms for Large-Scale Problems

    DEFF Research Database (Denmark)

    Jensen, Toke Koldborg

    2006-01-01

    The focus of the project is on stabilization of large-scale inverse problems where structured models and iterative algorithms are necessary for computing approximate solutions. For this purpose, we study various iterative Krylov methods and their abilities to produce regularized solutions. Some......-curve. This heuristic is implemented as a part of a larger algorithm which is developed in collaboration with G. Rodriguez and P. C. Hansen. Last, but not least, a large part of the project has, in different ways, revolved around the object-oriented Matlab toolbox MOORe Tools developed by PhD Michael Jacobsen. New...

  3. Large scale phononic metamaterials for seismic isolation

    International Nuclear Information System (INIS)

    Aravantinos-Zafiris, N.; Sigalas, M. M.

    2015-01-01

    In this work, we numerically examine structures that could be characterized as large scale phononic metamaterials. These novel structures could have band gaps in the frequency spectrum of seismic waves when their dimensions are chosen appropriately, suggesting that they could be serious candidates for seismic isolation structures. Different, easy-to-fabricate structures made from construction materials such as concrete and steel were examined. The well-known finite difference time domain method is used in our calculations in order to calculate the band structures of the proposed metamaterials.

  4. Novel algorithm of large-scale simultaneous linear equations

    International Nuclear Information System (INIS)

    Fujiwara, T; Hoshi, T; Yamamoto, S; Sogabe, T; Zhang, S-L

    2010-01-01

    We review our recently developed methods for solving large-scale simultaneous linear equations and their applications to electronic structure calculations, both in one-electron theory and many-electron theory. This is the shifted COCG (conjugate orthogonal conjugate gradient) method based on the Krylov subspace; the most important issues for applications are the shift equation and the seed-switching method, which greatly reduce the computational cost. Applications to nano-scale Si crystals and the double orbital extended Hubbard model are presented.
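
    A minimal sketch of the structure such shifted Krylov solvers exploit (standard identities, not equations taken from the paper itself): the whole family of shifted systems

        (A + \sigma_k I)\, x^{(k)} = b, \qquad k = 1, \dots, m,

    shares a single Krylov subspace, because

        \mathcal{K}_n(A + \sigma I, b) = \mathrm{span}\{ b, A b, \dots, A^{n-1} b \} = \mathcal{K}_n(A, b),

    so the matrix-vector products generated for one "seed" system can be reused for every other shift, which is the source of the cost reduction.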

  5. [A large-scale accident in Alpine terrain].

    Science.gov (United States)

    Wildner, M; Paal, P

    2015-02-01

    Due to the geographical conditions, large-scale accidents amounting to mass casualty incidents (MCI) in Alpine terrain regularly present rescue teams with huge challenges. Using an example incident, specific conditions and typical problems associated with such a situation are presented. The first rescue team members to arrive have the elementary tasks of qualified triage and communication to the control room, which is required to dispatch the necessary additional support. Only with a clear "concept", to which all have to adhere, can the subsequent chaos phase be limited. In this respect, the time factor, confounded by adverse weather conditions or darkness, creates enormous pressure. Additional hazards are frostbite and hypothermia. If priorities can be established in terms of urgency, treatment and procedure algorithms have proven successful. For evacuation of casualties, helicopter transport should be sought. Due to the low density of hospitals in Alpine regions, it is often necessary to distribute the patients over a wide area. Rescue operations in Alpine terrain have to be performed according to the particular conditions and require rescue teams to have specific knowledge and expertise. The possibility of a large-scale accident should be considered when planning events. With respect to optimization of rescue measures, regular training and exercises are rational, as is the analysis of previous large-scale Alpine accidents.

  6. Large-scale influences in near-wall turbulence.

    Science.gov (United States)

    Hutchins, Nicholas; Marusic, Ivan

    2007-03-15

    Hot-wire data acquired in a high Reynolds number facility are used to illustrate the need for adequate scale separation when considering the coherent structure in wall-bounded turbulence. It is found that a large-scale motion in the log region becomes increasingly comparable in energy to the near-wall cycle as the Reynolds number increases. Through decomposition of fluctuating velocity signals, it is shown that this large-scale motion has a distinct modulating influence on the small-scale energy (akin to amplitude modulation). Reassessment of DNS data, in light of these results, shows similar trends, with the rate and intensity of production due to the near-wall cycle subject to a modulating influence from the largest-scale motions.
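
    One common way to quantify such amplitude modulation is to low-pass filter the velocity signal, take the Hilbert envelope of the small-scale remainder, and correlate the two. The sketch below follows that generic recipe; it is not the authors' code, and the function name, the fourth-order Butterworth filter and the cutoff frequency f_cut are illustrative assumptions.

        import numpy as np
        from scipy.signal import butter, filtfilt, hilbert

        def amplitude_modulation_coefficient(u, fs, f_cut):
            """Correlate the large-scale velocity component with the
            low-passed envelope of the small-scale fluctuations."""
            b, a = butter(4, f_cut / (fs / 2), btype="low")
            u = u - u.mean()
            u_large = filtfilt(b, a, u)           # large-scale component
            u_small = u - u_large                 # small-scale remainder
            envelope = np.abs(hilbert(u_small))   # small-scale amplitude envelope
            env_large = filtfilt(b, a, envelope)  # keep only its slow variation
            return np.corrcoef(u_large, env_large - env_large.mean())[0, 1]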

  7. Dipolar modulation of Large-Scale Structure

    Science.gov (United States)

    Yoon, Mijin

    For the last two decades, we have seen a drastic development of modern cosmology based on various observations such as the cosmic microwave background (CMB), type Ia supernovae, and baryonic acoustic oscillations (BAO). These observational evidences have led us to a great deal of consensus on the cosmological model, the so-called LambdaCDM model, and to tight constraints on the cosmological parameters constituting it. On the other hand, the advancement in cosmology relies on the cosmological principle: the universe is isotropic and homogeneous on large scales. Testing these fundamental assumptions is crucial and will soon become possible given the planned observations ahead. Dipolar modulation is the largest angular anisotropy of the sky, which is quantified by its direction and amplitude. We have measured a huge dipolar modulation in the CMB, which mainly originates from our solar system's motion relative to the CMB rest frame. However, we have not yet acquired consistent measurements of dipolar modulations in large-scale structure (LSS), as they require large sky coverage and a number of well-identified objects. In this thesis, we explore measurement of dipolar modulation in number counts of LSS objects as a test of statistical isotropy. This thesis is based on two papers that were published in peer-reviewed journals. In Chapter 2 [Yoon et al., 2014], we measured a dipolar modulation in number counts of WISE sources matched with 2MASS. In Chapter 3 [Yoon & Huterer, 2015], we investigated requirements for detection of the kinematic dipole in future surveys.

  8. Internationalization Measures in Large Scale Research Projects

    Science.gov (United States)

    Soeding, Emanuel; Smith, Nancy

    2017-04-01

    Large scale research projects (LSRP) often serve as flagships used by universities or research institutions to demonstrate their performance and capability to stakeholders and other interested parties. As the global competition among universities for the recruitment of the brightest brains has increased, effective internationalization measures have become hot topics for universities and LSRP alike. Nevertheless, most projects and universities have little experience of how to conduct these measures and make internationalization a cost-efficient and useful activity. Furthermore, such undertakings permanently have to be justified to the project PIs as important, valuable tools to improve the capacity of the project and the research location. There are a variety of measures suited to supporting universities in international recruitment. These include, e.g., institutional partnerships, research marketing, a welcome culture, support for science mobility and an effective alumni strategy. These activities, although often conducted by different university entities, are interlocked and can be very powerful measures if interfaced in an effective way. On this poster we display a number of internationalization measures for various target groups and identify interfaces between project management, university administration, researchers and international partners for working together, exchanging information and improving processes, in order to be able to recruit, support and keep the brightest heads for a project.

  9. Large scale integration of photovoltaics in cities

    International Nuclear Information System (INIS)

    Strzalka, Aneta; Alam, Nazmul; Duminil, Eric; Coors, Volker; Eicker, Ursula

    2012-01-01

    Highlights: ► We implement the photovoltaics on a large scale. ► We use three-dimensional modelling for accurate photovoltaic simulations. ► We consider the shadowing effect in the photovoltaic simulation. ► We validate the simulated results using detailed hourly measured data. - Abstract: For a large scale implementation of photovoltaics (PV) in the urban environment, building integration is a major issue. This includes installations on roof or facade surfaces with orientations that are not ideal for maximum energy production. To evaluate the performance of PV systems in urban settings and compare it with the building user’s electricity consumption, three-dimensional geometry modelling was combined with photovoltaic system simulations. As an example, the modern residential district of Scharnhauser Park (SHP) near Stuttgart/Germany was used to calculate the potential of photovoltaic energy and to evaluate the local own consumption of the energy produced. For most buildings of the district only annual electrical consumption data was available and only selected buildings have electronic metering equipment. The available roof area for one of these multi-family case study buildings was used for a detailed hourly simulation of the PV power production, which was then compared to the hourly measured electricity consumption. The results were extrapolated to all buildings of the analyzed area by normalizing them to the annual consumption data. The PV systems can produce 35% of the quarter’s total electricity consumption and half of this generated electricity is directly used within the buildings.
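
    As a minimal illustration of the own-consumption comparison described above, the sketch below computes the fraction of hourly PV production used directly in the building; the function and variable names are hypothetical, and no storage or grid feed-in logic is modelled.

        import numpy as np

        def own_consumption_share(pv_kwh, load_kwh):
            """Fraction of hourly PV production consumed on site."""
            pv = np.asarray(pv_kwh, dtype=float)
            load = np.asarray(load_kwh, dtype=float)
            direct_use = np.minimum(pv, load)  # hourly energy used directly
            return direct_use.sum() / pv.sum()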

  10. Status: Large-scale subatmospheric cryogenic systems

    International Nuclear Information System (INIS)

    Peterson, T.

    1989-01-01

    In the late 1960's and early 1970's an interest in testing and operating RF cavities at 1.8K motivated the development and construction of four large (300 Watt) 1.8K refrigeration systems. In the past decade, development of successful superconducting RF cavities and interest in obtaining higher magnetic fields with the improved Niobium-Titanium superconductors has once again created interest in large-scale 1.8K refrigeration systems. The L'Air Liquide plant for Tore Supra is a recently commissioned 300 Watt 1.8K system which incorporates new technology, cold compressors, to obtain the low vapor pressure for low temperature cooling. CEBAF proposes to use cold compressors to obtain 5 kW at 2.0K. Magnetic refrigerators of 10 Watt capacity or higher at 1.8K are now being developed. The state of the art of large-scale refrigeration in the range under 4K is reviewed. 28 refs., 4 figs., 7 tabs

  11. Large-scale Intelligent Transportation Systems simulation

    Energy Technology Data Exchange (ETDEWEB)

    Ewing, T.; Canfield, T.; Hannebutte, U.; Levine, D.; Tentner, A.

    1995-06-01

    A prototype computer system has been developed which defines a high-level architecture for a large-scale, comprehensive, scalable simulation of an Intelligent Transportation System (ITS) capable of running on massively parallel computers and distributed (networked) computer systems. The prototype includes the modelling of instrumented "smart" vehicles with in-vehicle navigation units capable of optimal route planning, and Traffic Management Centers (TMC). The TMC has probe vehicle tracking capabilities (displaying the position and attributes of instrumented vehicles) and can provide two-way interaction with traffic to provide advisories and link times. Both the in-vehicle navigation module and the TMC feature detailed graphical user interfaces to support human-factors studies. The prototype has been developed on a distributed system of networked UNIX computers but is designed to run on ANL's IBM SP-X parallel computer system for large scale problems. A novel feature of our design is that vehicles are represented by autonomous computer processes, each with a behavior model which performs independent route selection and reacts to external traffic events much like real vehicles. With this approach, one will be able to take advantage of emerging massively parallel processor (MPP) systems.

  12. Large-scale fracture mechanics testing -- requirements and possibilities

    International Nuclear Information System (INIS)

    Brumovsky, M.

    1993-01-01

    Application of fracture mechanics to very important and/or complicated structures, like reactor pressure vessels, also raises questions about the reliability and precision of such calculations. These problems become more pronounced in cases of elastic-plastic loading conditions and/or in parts with non-homogeneous materials (base metal and austenitic cladding, property gradient changes through material thickness) or with non-homogeneous stress fields (nozzles, bolt threads, residual stresses etc.). For such special cases some verification by large-scale testing is necessary and valuable. This paper discusses problems connected with the planning of such experiments with respect to their limitations and the requirements for a good transfer of the results to an actual vessel. At the same time, the possibilities of small-scale model experiments are analysed, mostly in connection with the transfer of results between standard, small-scale and large-scale experiments. Experience from 30 years of large-scale testing at SKODA is used as an example to support this analysis. 1 fig

  13. Cosmic ray acceleration by large scale galactic shocks

    International Nuclear Information System (INIS)

    Cesarsky, C.J.; Lagage, P.O.

    1987-01-01

    The mechanism of diffusive shock acceleration may account for the existence of galactic cosmic rays; detailed applications to stellar wind shocks and especially to supernova shocks have been developed. Existing models can usually deal with the energetics or the spectral slope, but the observed energy range of cosmic rays is not explained. It therefore seems worthwhile to examine the effect that large scale, long-lived galactic shocks may have on galactic cosmic rays, in the frame of the diffusive shock acceleration mechanism. Large scale fast shocks can only be expected to exist in the galactic halo. We consider three situations where they may arise: expansion of a supernova shock in the halo, galactic wind, and galactic infall; and discuss the possible existence of these shocks and their role in accelerating cosmic rays.
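
    For reference, the standard test-particle result of diffusive shock acceleration, which ties the spectral slope to the shock compression ratio alone (a textbook relation, not a result specific to this paper):

        f(p) \propto p^{-q}, \qquad q = \frac{3r}{r - 1}, \qquad r = \frac{u_1}{u_2},

    so a strong adiabatic shock with r = 4 yields q = 4, i.e. an energy spectrum N(E) \propto E^{-2}; the open question addressed above is whether large scale halo shocks can extend the energy range over which such spectra apply.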

  14. Efficient algorithms for collaborative decision making for large scale settings

    DEFF Research Database (Denmark)

    Assent, Ira

    2011-01-01

    Collaborative decision making is a successful approach in settings where data analysis and querying can be done interactively. In large scale systems with huge data volumes or many users, collaboration is often hindered by impractical runtimes. Existing work on improving collaboration focuses on avoiding redundancy for users working on the same task. While this improves the effectiveness of the user work process, the underlying query processing engine is typically considered a "black box" and left unchanged. Research in multiple query processing, on the other hand, ignores the application ... to bring about more effective and more efficient retrieval systems that support the users' decision making process. We sketch promising research directions for more efficient algorithms for collaborative decision making, especially for large scale systems.

  15. Lagrangian space consistency relation for large scale structure

    International Nuclear Information System (INIS)

    Horn, Bart; Hui, Lam; Xiao, Xiao

    2015-01-01

    Consistency relations, which relate the squeezed limit of an (N+1)-point correlation function to an N-point function, are non-perturbative symmetry statements that hold even if the associated high momentum modes are deep in the nonlinear regime and astrophysically complex. Recently, Kehagias and Riotto and Peloso and Pietroni discovered a consistency relation applicable to large scale structure. We show that this can be recast into a simple physical statement in Lagrangian space: that the squeezed correlation function (suitably normalized) vanishes. This holds regardless of whether the correlation observables are at the same time or not, and regardless of whether multiple-streaming is present. The simplicity of this statement suggests that an analytic understanding of large scale structure in the nonlinear regime may be particularly promising in Lagrangian space

  16. Robust large-scale parallel nonlinear solvers for simulations.

    Energy Technology Data Exchange (ETDEWEB)

    Bader, Brett William; Pawlowski, Roger Patrick; Kolda, Tamara Gibson (Sandia National Laboratories, Livermore, CA)

    2005-11-01

    This report documents research to develop robust and efficient solution techniques for solving large-scale systems of nonlinear equations. The most widely used method for solving systems of nonlinear equations is Newton's method. While much research has been devoted to augmenting Newton-based solvers (usually with globalization techniques), little has been devoted to exploring the application of different models. Our research has been directed at evaluating techniques using different models than Newton's method: a lower order model, Broyden's method, and a higher order model, the tensor method. We have developed large-scale versions of each of these models and have demonstrated their use in important applications at Sandia. Broyden's method replaces the Jacobian with an approximation, allowing codes that cannot evaluate a Jacobian or have an inaccurate Jacobian to converge to a solution. Limited-memory methods, which have been successful in optimization, allow us to extend this approach to large-scale problems. We compare the robustness and efficiency of Newton's method, modified Newton's method, Jacobian-free Newton-Krylov method, and our limited-memory Broyden method. Comparisons are carried out for large-scale applications of fluid flow simulations and electronic circuit simulations. Results show that, in cases where the Jacobian was inaccurate or could not be computed, Broyden's method converged in some cases where Newton's method failed to converge. We identify conditions where Broyden's method can be more efficient than Newton's method. We also present modifications to a large-scale tensor method, originally proposed by Bouaricha, for greater efficiency, better robustness, and wider applicability. Tensor methods are an alternative to Newton-based methods and are based on computing a step based on a local quadratic model rather than a linear model. The advantage of Bouaricha's method is that it can use any
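
    For concreteness, a compact sketch of the basic rank-1 Broyden update that such solvers build on; the report's limited-memory variant is more elaborate, so this is only the textbook scheme with illustrative names.

        import numpy as np

        def broyden(F, x0, J0=None, tol=1e-10, max_iter=100):
            """Broyden's method: secant-style rank-1 Jacobian updates, so the
            true Jacobian of F is never evaluated after the initial guess."""
            x = np.asarray(x0, dtype=float)
            J = np.eye(x.size) if J0 is None else np.asarray(J0, dtype=float)
            f = F(x)
            for _ in range(max_iter):
                dx = np.linalg.solve(J, -f)  # quasi-Newton step
                x_new = x + dx
                f_new = F(x_new)
                if np.linalg.norm(f_new) < tol:
                    return x_new
                # rank-1 update enforcing the secant condition J_new @ dx = f_new - f
                J += np.outer((f_new - f) - J @ dx, dx) / (dx @ dx)
                x, f = x_new, f_new
            return x

    Because the Jacobian is only ever corrected by outer products of iterates, the scheme suits codes that cannot evaluate an exact Jacobian, the situation highlighted in the report.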

  17. Radiations: large scale monitoring in Japan

    International Nuclear Information System (INIS)

    Linton, M.; Khalatbari, A.

    2011-01-01

    As the consequences of radioactive leaks on their health are a matter of concern for Japanese people, a large scale epidemiological study has been launched by the Fukushima medical university. It concerns the two million inhabitants of the Fukushima Prefecture. On the national level and with the support of public funds, medical care and follow-up, as well as systematic controls, are foreseen, notably to check the thyroids of 360,000 young people under 18 years old and of 20,000 pregnant women in the Fukushima Prefecture. Some measurements have already been performed on young children. Despite the sometimes rather low readings, and because they know that some parts of the area are at least as contaminated as the area around Chernobyl was, some people are reluctant to go back home

  18. Large-scale digitizer system, analog converters

    International Nuclear Information System (INIS)

    Althaus, R.F.; Lee, K.L.; Kirsten, F.A.; Wagner, L.J.

    1976-10-01

    Analog to digital converter circuits that are based on the sharing of common resources, including those which are critical to the linearity and stability of the individual channels, are described. Simplicity of circuit composition is valued over other more costly approaches. These are intended to be applied in a large-scale processing and digitizing system for use with high-energy physics detectors such as drift-chambers or phototube-scintillator arrays. Signal distribution techniques are of paramount importance in maintaining adequate signal-to-noise ratio. Noise in both amplitude and time-jitter senses is held sufficiently low so that conversions with 10-bit charge resolution and 12-bit time resolution are achieved

  19. Grid sensitivity capability for large scale structures

    Science.gov (United States)

    Nagendra, Gopal K.; Wallerstein, David V.

    1989-01-01

    The considerations and the resultant approach used to implement design sensitivity capability for grids into a large scale, general purpose finite element system (MSC/NASTRAN) are presented. The design variables are grid perturbations with a rather general linking capability. Moreover, shape and sizing variables may be linked together. The design is general enough to facilitate geometric modeling techniques for generating design variable linking schemes in an easy and straightforward manner. Test cases have been run and validated by comparison with the overall finite difference method. The linking of a design sensitivity capability for shape variables in MSC/NASTRAN with an optimizer would give designers a powerful, automated tool to carry out practical optimization design of real life, complicated structures.

  20. Large-scale Rectangular Ruler Automated Verification Device

    Science.gov (United States)

    Chen, Hao; Chang, Luping; Xing, Minjian; Xie, Xie

    2018-03-01

    This paper introduces a large-scale rectangular ruler automated verification device, which consists of a photoelectric autocollimator, a self-designed mechanical drive car and an automatic data acquisition system. The mechanical design of the device covers the optical axis, the drive, the fixture device and the wheels. The control system design covers hardware and software: the hardware is based on a single-chip microcontroller, and the software implements the photoelectric autocollimator readout and the automatic data acquisition process. The device can acquire vertical measurement data automatically. The reliability of the device is verified by experimental comparison. The conclusion meets the requirement of the right angle test procedure.

  1. Large Scale Landform Mapping Using Lidar DEM

    Directory of Open Access Journals (Sweden)

    Türkay Gökgöz

    2015-08-01

    In this study, LIDAR DEM data was used to obtain a primary landform map in accordance with a well-known methodology. This primary landform map was generalized using the Focal Statistics tool (Majority), considering the minimum area condition in cartographic generalization, in order to obtain landform maps at 1:1000 and 1:5000 scales. Both the primary and the generalized landform maps were verified visually against a hillshaded DEM and an orthophoto. As a result, these maps provide satisfactory visuals of the landforms. In order to show the effect of generalization, the area of each landform in both the primary and the generalized maps was computed. Consequently, landform maps at large scales could be obtained with the proposed methodology, including generalization, using LIDAR DEM.
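
    The generalization step described above can be imitated on a categorical raster with an ordinary majority (mode) filter. A minimal sketch, assuming integer class codes; the function name and the 5x5 window are illustrative, and this stands in for the Focal Statistics (Majority) tool rather than reproducing it.

        import numpy as np
        from scipy import ndimage

        def majority_generalize(landform, size=5):
            """Replace each cell of a categorical landform raster with the
            most frequent class in its size-by-size neighbourhood."""
            def local_mode(window):
                return np.bincount(window.astype(int)).argmax()
            return ndimage.generic_filter(landform, local_mode, size=size)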

  2. Constructing sites on a large scale

    DEFF Research Database (Denmark)

    Braae, Ellen Marie; Tietjen, Anne

    2011-01-01

    Since the 1990s, the regional scale has regained importance in urban and landscape design. In parallel, the focus in design tasks has shifted from master plans for urban extension to strategic urban transformation projects. A prominent example of a contemporary spatial development approach is the IBA Emscher Park in the Ruhr area in Germany. Over a 10-year period (1988-1998), more than 100 local transformation projects contributed to the transformation from an industrial to a post-industrial region. The current paradigm of planning by projects reinforces the role of the design disciplines ... for setting the design brief in a large scale urban landscape in Norway, the Jaeren region around the city of Stavanger. In this paper, we first outline the methodological challenges and then present and discuss the proposed method based on our teaching experiences. On this basis, we discuss aspects ...

  3. Large scale study of tooth enamel

    International Nuclear Information System (INIS)

    Bodart, F.; Deconninck, G.; Martin, M.T.

    Human tooth enamel contains traces of foreign elements. The presence of these elements is related to the history and the environment of the human body and can be considered as the signature of perturbations which occur during the growth of a tooth. A map of the distribution of these traces in a large scale sample of the population will constitute a reference for further investigations of environmental effects. One hundred eighty samples of teeth were first analyzed using PIXE, backscattering and nuclear reaction techniques. The results were analyzed using statistical methods. Correlations between O, F, Na, P, Ca, Mn, Fe, Cu, Zn, Pb and Sr were observed, and cluster analysis is in progress. The techniques described in the present work have been developed in order to establish a method for the exploration of very large samples of the Belgian population. (author)

  4. Testing Einstein's Gravity on Large Scales

    Science.gov (United States)

    Prescod-Weinstein, Chandra

    2011-01-01

    A little over a decade has passed since two teams studying high redshift Type Ia supernovae announced the discovery that the expansion of the universe was accelerating. After all this time, we're still not sure how cosmic acceleration fits into the theory that tells us about the large-scale universe: General Relativity (GR). As part of our search for answers, we have been forced to question GR itself. But how will we test our ideas? We are fortunate enough to be entering the era of precision cosmology, where the standard model of gravity can be subjected to more rigorous testing. Various techniques will be employed over the next decade or two in the effort to better understand cosmic acceleration and the theory behind it. In this talk, I will describe cosmic acceleration, current proposals to explain it, and weak gravitational lensing, an observational effect that allows us to do the necessary precision cosmology.

  5. Large-Scale Astrophysical Visualization on Smartphones

    Science.gov (United States)

    Becciani, U.; Massimino, P.; Costa, A.; Gheller, C.; Grillo, A.; Krokos, M.; Petta, C.

    2011-07-01

    Nowadays digital sky surveys and long-duration, high-resolution numerical simulations using high performance computing and grid systems produce multidimensional astrophysical datasets in the order of several Petabytes. Sharing visualizations of such datasets within communities and collaborating research groups is of paramount importance for disseminating results and advancing astrophysical research. Moreover educational and public outreach programs can benefit greatly from novel ways of presenting these datasets by promoting understanding of complex astrophysical processes, e.g., formation of stars and galaxies. We have previously developed VisIVO Server, a grid-enabled platform for high-performance large-scale astrophysical visualization. This article reviews the latest developments on VisIVO Web, a custom designed web portal wrapped around VisIVO Server, then introduces VisIVO Smartphone, a gateway connecting VisIVO Web and data repositories for mobile astrophysical visualization. We discuss current work and summarize future developments.

  6. The (in)effectiveness of Global Land Policies on Large-Scale Land Acquisition

    NARCIS (Netherlands)

    Verhoog, S.M.

    2014-01-01

    Due to current crises, large-scale land acquisition (LSLA) is becoming a topic of growing concern. Public data from the ‘Land Matrix Global Observatory’ project (Land Matrix 2014a) demonstrates that since 2000, 1,664 large-scale land transactions in low- and middle-income countries were reported,

  7. Homogenization of Large-Scale Movement Models in Ecology

    Science.gov (United States)

    Garlick, M.J.; Powell, J.A.; Hooten, M.B.; McFarlane, L.R.

    2011-01-01

    A difficulty in using diffusion models to predict large scale animal population dispersal is that individuals move differently based on local information (as opposed to gradients) in differing habitat types. This can be accommodated by using ecological diffusion. However, real environments are often spatially complex, limiting application of a direct approach. Homogenization for partial differential equations has long been applied to Fickian diffusion (in which average individual movement is organized along gradients of habitat and population density). We derive a homogenization procedure for ecological diffusion and apply it to a simple model for chronic wasting disease in mule deer. Homogenization allows us to determine the impact of small scale (10-100 m) habitat variability on large scale (10-100 km) movement. The procedure generates asymptotic equations for solutions on the large scale with parameters defined by small-scale variation. The simplicity of this homogenization procedure is striking when compared to the multi-dimensional homogenization procedure for Fickian diffusion, and the method will be equally straightforward for more complex models. © 2010 Society for Mathematical Biology.
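
    The distinction the abstract draws between gradient-driven and locally informed movement is easiest to see by writing the two operators side by side (standard forms, not equations copied from the paper):

        Fickian diffusion:     \partial_t u = \nabla \cdot \big( \mu(x) \nabla u \big),
        ecological diffusion:  \partial_t u = \nabla^2 \big( \mu(x)\, u \big) = \nabla \cdot \big( \mu \nabla u + u \nabla \mu \big).

    The extra drift term u ∇μ makes individuals accumulate where the motility μ(x) is low, so fine-grained habitat structure feeds directly into large-scale dispersal; the homogenization procedure replaces μ(x) by an effective large-scale coefficient.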

  8. Multiresolution comparison of precipitation datasets for large-scale models

    Science.gov (United States)

    Chun, K. P.; Sapriza Azuri, G.; Davison, B.; DeBeer, C. M.; Wheater, H. S.

    2014-12-01

    Gridded precipitation datasets are crucial for driving large-scale models used in weather forecasting and climate research. However, the quality of precipitation products is usually validated individually. Comparisons between gridded precipitation products, along with ground observations, provide another avenue for investigating how precipitation uncertainty affects the performance of large-scale models. In this study, using data from a set of precipitation gauges over British Columbia and Alberta, we evaluate several widely used North American gridded products, including the Canadian Gridded Precipitation Anomalies (CANGRD), the National Center for Environmental Prediction (NCEP) reanalysis, the Water and Global Change (WATCH) project, the thin plate spline smoothing algorithms (ANUSPLIN) and the Canadian Precipitation Analysis (CaPA). Based on verification criteria for various temporal and spatial scales, the results provide an assessment of possible applications for the various precipitation datasets. For long-term climate variation studies (~100 years), CANGRD, NCEP, WATCH and ANUSPLIN have different comparative advantages in terms of their resolution and accuracy. For synoptic and mesoscale precipitation patterns, CaPA provides appealing spatial coherence. In addition to the products comparison, various downscaling methods are also surveyed to explore new verification and bias-reduction methods for improving gridded precipitation outputs for large-scale models.

  9. BILGO: Bilateral greedy optimization for large scale semidefinite programming

    KAUST Repository

    Hao, Zhifeng

    2013-10-03

    Many machine learning tasks (e.g. metric and manifold learning problems) can be formulated as convex semidefinite programs. To enable the application of these tasks on a large-scale, scalability and computational efficiency are considered as desirable properties for a practical semidefinite programming algorithm. In this paper, we theoretically analyze a new bilateral greedy optimization (denoted BILGO) strategy in solving general semidefinite programs on large-scale datasets. As compared to existing methods, BILGO employs a bilateral search strategy during each optimization iteration. In such an iteration, the current semidefinite matrix solution is updated as a bilateral linear combination of the previous solution and a suitable rank-1 matrix, which can be efficiently computed from the leading eigenvector of the descent direction at this iteration. By optimizing for the coefficients of the bilateral combination, BILGO reduces the cost function in every iteration until the KKT conditions are fully satisfied, thus, it tends to converge to a global optimum. In fact, we prove that BILGO converges to the global optimal solution at a rate of O(1/k), where k is the iteration counter. The algorithm thus successfully combines the efficiency of conventional rank-1 update algorithms and the effectiveness of gradient descent. Moreover, BILGO can be easily extended to handle low rank constraints. To validate the effectiveness and efficiency of BILGO, we apply it to two important machine learning tasks, namely Mahalanobis metric learning and maximum variance unfolding. Extensive experimental results clearly demonstrate that BILGO can solve large-scale semidefinite programs efficiently.
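
    A schematic of the bilateral rank-1 update as described above; this is not the authors' code, `grad` stands for the (symmetric) gradient matrix of the cost at the current iterate, and `line_search` is a placeholder for the coefficient optimization.

        import numpy as np
        from scipy.sparse.linalg import eigsh

        def bilgo_step(X, grad, line_search):
            """One bilateral update: mix the current PSD iterate with a rank-1
            matrix built from the leading eigenvector of the descent direction."""
            w, V = eigsh(-grad, k=1, which="LA")  # leading eigenpair of -grad
            v = V[:, 0]
            R = np.outer(v, v)                    # rank-1 candidate direction
            alpha, beta = line_search(X, R)       # optimize the two coefficients
            return alpha * X + beta * R           # remains PSD for alpha, beta >= 0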

  10. BILGO: Bilateral greedy optimization for large scale semidefinite programming

    KAUST Repository

    Hao, Zhifeng; Yuan, Ganzhao; Ghanem, Bernard

    2013-01-01

    Many machine learning tasks (e.g. metric and manifold learning problems) can be formulated as convex semidefinite programs. To enable the application of these tasks on a large-scale, scalability and computational efficiency are considered as desirable properties for a practical semidefinite programming algorithm. In this paper, we theoretically analyze a new bilateral greedy optimization (denoted BILGO) strategy in solving general semidefinite programs on large-scale datasets. As compared to existing methods, BILGO employs a bilateral search strategy during each optimization iteration. In such an iteration, the current semidefinite matrix solution is updated as a bilateral linear combination of the previous solution and a suitable rank-1 matrix, which can be efficiently computed from the leading eigenvector of the descent direction at this iteration. By optimizing for the coefficients of the bilateral combination, BILGO reduces the cost function in every iteration until the KKT conditions are fully satisfied, thus, it tends to converge to a global optimum. In fact, we prove that BILGO converges to the global optimal solution at a rate of O(1/k), where k is the iteration counter. The algorithm thus successfully combines the efficiency of conventional rank-1 update algorithms and the effectiveness of gradient descent. Moreover, BILGO can be easily extended to handle low rank constraints. To validate the effectiveness and efficiency of BILGO, we apply it to two important machine learning tasks, namely Mahalanobis metric learning and maximum variance unfolding. Extensive experimental results clearly demonstrate that BILGO can solve large-scale semidefinite programs efficiently.

  11. Utilization of Large Scale Surface Models for Detailed Visibility Analyses

    Science.gov (United States)

    Caha, J.; Kačmařík, M.

    2017-11-01

    This article demonstrates the utilization of large scale surface models with small spatial resolution and high accuracy, acquired from Unmanned Aerial Vehicle scanning, for visibility analyses. The importance of large scale data for visibility analyses on the local scale, where the detail of the surface model is the most defining factor, is described. The focus is not only on classic Boolean visibility, which is usually determined within a GIS, but also on so-called extended viewsheds that aim to provide more information about visibility. A case study with examples of visibility analyses was performed on the river Opava, near the city of Ostrava (Czech Republic). The multiple Boolean viewshed analysis and the global horizon viewshed were calculated to determine the most prominent features and visibility barriers of the surface. Besides that, an extended viewshed showing the angle difference above the local horizon, which describes the angular height of the target area above the barrier, is shown. The case study proved that large scale models are an appropriate data source for visibility analyses at the local level. The discussion summarizes possible future applications and further development directions of visibility analyses.
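
    At the core of every viewshed variant mentioned above is a line-of-sight test against the surface model. A minimal sketch of that Boolean test on a gridded DEM; the nearest-neighbour sampling, the observer eye height and the names are illustrative simplifications, not the authors' implementation.

        import numpy as np

        def line_of_sight(dem, cellsize, observer, target, eye_height=1.6):
            """True if the target cell is visible from the observer cell,
            i.e. no intermediate sample rises above the sight line."""
            (r0, c0), (r1, c1) = observer, target
            n = int(max(abs(r1 - r0), abs(c1 - c0)))
            if n == 0:
                return True
            rows = np.linspace(r0, r1, n + 1)
            cols = np.linspace(c0, c1, n + 1)
            z = dem[rows.round().astype(int), cols.round().astype(int)]
            d = np.hypot(rows - r0, cols - c0) * cellsize
            tangents = (z[1:] - (z[0] + eye_height)) / d[1:]  # elevation angles
            return tangents[-1] >= tangents[:-1].max(initial=-np.inf)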

  12. Image-based Exploration of Large-Scale Pathline Fields

    KAUST Repository

    Nagoor, Omniah H.

    2014-05-27

    While real-time applications are nowadays routinely used in visualizing large numerical simulations and volumes, handling these large-scale datasets requires high-end graphics clusters or supercomputers to process and visualize them. However, not all users have access to powerful clusters. Therefore, it is challenging to come up with a visualization approach that provides insight into large-scale datasets on a single computer. Explorable images (EI) is one of the methods that allows users to handle large data on a single workstation. Although it is a view-dependent method, it combines both exploration and modification of visual aspects without re-accessing the original huge data. In this thesis, we propose a novel image-based method that applies the concept of EI in visualizing large flow-field pathline data. The goal of our work is to provide an optimized image-based method which scales well with the dataset size. Our approach is based on constructing a per-pixel linked list data structure in which each pixel contains a list of pathline segments. With this view-dependent method it is possible to filter, color-code and explore large-scale flow data in real-time. In addition, optimization techniques such as early-ray termination and deferred shading are applied, which further improves the performance and scalability of our approach.

  13. Large-scale compositional heterogeneity in the Earth's mantle

    Science.gov (United States)

    Ballmer, M.

    2017-12-01

    Seismic imaging of subducted Farallon and Tethys lithosphere in the lower mantle has been taken as evidence for whole-mantle convection, and efficient mantle mixing. However, cosmochemical constraints point to a lower-mantle composition that has a lower Mg/Si compared to upper-mantle pyrolite. Moreover, geochemical signatures of magmatic rocks indicate the long-term persistence of primordial reservoirs somewhere in the mantle. In this presentation, I establish geodynamic mechanisms for sustaining large-scale (primordial) heterogeneity in the Earth's mantle using numerical models. Mantle flow is controlled by rock density and viscosity. Variations in intrinsic rock density, such as due to heterogeneity in basalt or iron content, can induce layering or partial layering in the mantle. Layering can be sustained in the presence of persistent whole mantle convection due to active "unmixing" of heterogeneity in low-viscosity domains, e.g. in the transition zone or near the core-mantle boundary [1]. On the other hand, lateral variations in intrinsic rock viscosity, such as due to heterogeneity in Mg/Si, can strongly affect the mixing timescales of the mantle. In the extreme case, intrinsically strong rocks may remain unmixed through the age of the Earth, and persist as large-scale domains in the mid-mantle due to focusing of deformation along weak conveyor belts [2]. That large-scale lateral heterogeneity and/or layering can persist in the presence of whole-mantle convection can explain the stagnation of some slabs, as well as the deflection of some plumes, in the mid-mantle. These findings indeed motivate new seismic studies for rigorous testing of model predictions. [1] Ballmer, M. D., N. C. Schmerr, T. Nakagawa, and J. Ritsema (2015), Science Advances, doi:10.1126/sciadv.1500815. [2] Ballmer, M. D., C. Houser, J. W. Hernlund, R. Wentzcovitch, and K. Hirose (2017), Nature Geoscience, doi:10.1038/ngeo2898.

  14. Estimating GHG emission mitigation supply curves of large-scale biomass use on a country level

    International Nuclear Information System (INIS)

    Dornburg, Veronika; Dam, Jinke van; Faaij, Andre

    2007-01-01

    This study evaluates the possible influences of a large-scale introduction of biomass material and energy systems and their market volumes on land, material and energy market prices, and the feedback of these prices to greenhouse gas (GHG) emission mitigation costs. GHG emission mitigation supply curves for large-scale biomass use were compiled using a methodology that combines a bottom-up analysis of biomass applications, biomass cost supply curves and market prices of land, biomaterials and bioenergy carriers. These market prices depend on the scale of biomass use and the market volume of materials and energy carriers, and were estimated using own-price elasticities of demand. The methodology was demonstrated for a case study of Poland in the year 2015, applying different scenarios of economic development and trade in Europe. For the key technologies considered, i.e. medium density fibreboard, polylactic acid, electricity and methanol production, GHG emission mitigation costs increase strongly with the scale of biomass production. Large-scale introduction of biomass use decreases the GHG emission reduction potential at costs below 50 Euro/Mg CO2-eq by about 13-70%, depending on the scenario. Biomaterial production accounts for only a small part of this GHG emission reduction potential, due to the relatively small material markets and the consequent strong decrease of biomaterial market prices at large scales of production. GHG emission mitigation costs depend strongly on the biomass supply curves, the own-price elasticity of land and the market volumes of bioenergy carriers. The analysis shows that these influences should be taken into account when developing biomass implementation strategies
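
    The price feedback at the heart of the method follows from an own-price elasticity of demand; in generic constant-elasticity form (symbols chosen here for illustration, not taken from the paper):

        \varepsilon = \frac{dQ/Q}{dP/P} \quad \Longrightarrow \quad P(Q) = P_0 \left( \frac{Q}{Q_0} \right)^{1/\varepsilon},

    so with a demand elasticity ε < 0, pushing a large biomaterial volume Q into a small market depresses the price P steeply when |ε| is small, which is why the cheap biomaterial mitigation potential shrinks at large scales of production.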

  15. Exploiting Data Sparsity for Large-Scale Matrix Computations

    KAUST Repository

    Akbudak, Kadir; Ltaief, Hatem; Mikhalev, Aleksandr; Charara, Ali; Keyes, David E.

    2018-01-01

    Exploiting data sparsity in dense matrices is an algorithmic bridge between architectures that are increasingly memory-austere on a per-core basis and extreme-scale applications. The Hierarchical matrix Computations on Manycore Architectures (HiCMA) library tackles this challenging problem by achieving significant reductions in time to solution and memory footprint, while preserving a specified accuracy requirement of the application. HiCMA provides a high-performance implementation on distributed-memory systems of one of the most widely used matrix factorization in large-scale scientific applications, i.e., the Cholesky factorization. It employs the tile low-rank data format to compress the dense data-sparse off-diagonal tiles of the matrix. It then decomposes the matrix computations into interdependent tasks and relies on the dynamic runtime system StarPU for asynchronous out-of-order scheduling, while allowing high user-productivity. Performance comparisons and memory footprint on matrix dimensions up to eleven million show a performance gain and memory saving of more than an order of magnitude for both metrics on thousands of cores, against state-of-the-art open-source and vendor optimized numerical libraries. This represents an important milestone in enabling large-scale matrix computations toward solving big data problems in geospatial statistics for climate/weather forecasting applications.

  16. Exploiting Data Sparsity for Large-Scale Matrix Computations

    KAUST Repository

    Akbudak, Kadir

    2018-02-24

    Exploiting data sparsity in dense matrices is an algorithmic bridge between architectures that are increasingly memory-austere on a per-core basis and extreme-scale applications. The Hierarchical matrix Computations on Manycore Architectures (HiCMA) library tackles this challenging problem by achieving significant reductions in time to solution and memory footprint, while preserving a specified accuracy requirement of the application. HiCMA provides a high-performance implementation on distributed-memory systems of one of the most widely used matrix factorization in large-scale scientific applications, i.e., the Cholesky factorization. It employs the tile low-rank data format to compress the dense data-sparse off-diagonal tiles of the matrix. It then decomposes the matrix computations into interdependent tasks and relies on the dynamic runtime system StarPU for asynchronous out-of-order scheduling, while allowing high user-productivity. Performance comparisons and memory footprint on matrix dimensions up to eleven million show a performance gain and memory saving of more than an order of magnitude for both metrics on thousands of cores, against state-of-the-art open-source and vendor optimized numerical libraries. This represents an important milestone in enabling large-scale matrix computations toward solving big data problems in geospatial statistics for climate/weather forecasting applications.

  17. Large-Scale Analysis of Network Bistability for Human Cancers

    Science.gov (United States)

    Shiraishi, Tetsuya; Matsuyama, Shinako; Kitano, Hiroaki

    2010-01-01

    Protein–protein interaction and gene regulatory networks are likely to be locked in a state corresponding to a disease by the behavior of one or more bistable circuits exhibiting switch-like behavior. Sets of genes could be over-expressed or repressed when anomalies due to disease appear, and the circuits responsible for this over- or under-expression might persist for as long as the disease state continues. This paper shows how a large-scale analysis of network bistability for various human cancers can identify genes that can potentially serve as drug targets or diagnosis biomarkers. PMID:20628618

  18. Large-scale stochasticity in Hamiltonian systems

    International Nuclear Information System (INIS)

    Escande, D.F.

    1982-01-01

    Large scale stochasticity (L.S.S.) in Hamiltonian systems is defined on the paradigm Hamiltonian H(v,x,t) = v²/2 − M cos x − P cos k(x−t), which describes the motion of one particle in two electrostatic waves. A renormalization transformation T_r is described which acts as a microscope that focusses on a given KAM (Kolmogorov-Arnold-Moser) torus in phase space. Though approximate, T_r yields the threshold of L.S.S. in H with an error of 5-10%. The universal behaviour of KAM tori is predicted: for instance the scale invariance of KAM tori and the critical exponent of the Lyapunov exponent of Cantori. The Fourier expansion of KAM tori is computed and several conjectures by L. Kadanoff and S. Shenker are proved. Chirikov's standard mapping for stochastic layers is derived in a simpler way and the width of the layers is computed. A simpler renormalization scheme for these layers is defined. A Mathieu equation describing the stability of a discrete family of cycles is derived. When combined with T_r, it allows one to prove the link between KAM tori and nearby cycles, conjectured by J. Greene, and, in particular, to compute the mean residue of a torus. The fractal diagrams defined by G. Schmidt are computed. A sketch of a methodology for computing the L.S.S. threshold in any two-degree-of-freedom Hamiltonian system is given. (Auth.)
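
    For reference, the Chirikov standard mapping mentioned above, in its usual form (a textbook statement, not reproduced from the paper):

        p_{n+1} = p_n + K \sin \theta_n, \qquad \theta_{n+1} = \theta_n + p_{n+1} \pmod{2\pi},

    whose last rotational KAM torus (golden-mean winding number) breaks up near the critical stochasticity parameter K_c ≈ 0.9716; this is the kind of large-scale stochasticity threshold that the renormalization transformation T_r estimates for H.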

  19. Large-scale tides in general relativity

    Energy Technology Data Exchange (ETDEWEB)

    Ip, Hiu Yan; Schmidt, Fabian, E-mail: iphys@mpa-garching.mpg.de, E-mail: fabians@mpa-garching.mpg.de [Max-Planck-Institut für Astrophysik, Karl-Schwarzschild-Str. 1, 85741 Garching (Germany)

    2017-02-01

    Density perturbations in cosmology, i.e. spherically symmetric adiabatic perturbations of a Friedmann-Lemaître-Robertson-Walker (FLRW) spacetime, are locally exactly equivalent to a different FLRW solution, as long as their wavelength is much larger than the sound horizon of all fluid components. This fact is known as the 'separate universe' paradigm. However, no such relation is known for anisotropic adiabatic perturbations, which correspond to an FLRW spacetime with large-scale tidal fields. Here, we provide a closed, fully relativistic set of evolutionary equations for the nonlinear evolution of such modes, based on the conformal Fermi (CFC) frame. We show explicitly that the tidal effects are encoded by the Weyl tensor, and are hence entirely different from an anisotropic Bianchi I spacetime, where the anisotropy is sourced by the Ricci tensor. In order to close the system, certain higher derivative terms have to be dropped. We show that this approximation is equivalent to the local tidal approximation of Hui and Bertschinger [1]. We also show that this very simple set of equations matches the exact evolution of the density field at second order, but fails at third and higher order. This provides a useful, easy-to-use framework for computing the fully relativistic growth of structure at second order.

  20. Food appropriation through large scale land acquisitions

    International Nuclear Information System (INIS)

    Cristina Rulli, Maria; D’Odorico, Paolo

    2014-01-01

    The increasing demand for agricultural products and the uncertainty of international food markets have recently drawn the attention of governments and agribusiness firms toward investments in productive agricultural land, mostly in the developing world. The targeted countries are typically located in regions that have remained only marginally utilized because of a lack of modern technology. It is expected that in the long run large-scale land acquisitions (LSLAs) for commercial farming will bring the technology required to close the existing crop yield gaps. While the extent of the acquired land and the associated appropriation of freshwater resources have been investigated in detail, the amount of food this land can produce and the number of people it could feed still need to be quantified. Here we use a unique dataset of land deals to provide a global quantitative assessment of the rates of crop and food appropriation potentially associated with LSLAs. We show that up to 300–550 million people could be fed by crops grown on the acquired land, should these investments in agriculture improve crop production and close the yield gap. In contrast, about 190–370 million people could be supported by this land without closing the yield gap. These numbers raise some concern because the food produced on the acquired land is typically exported to other regions, while the target countries exhibit high levels of malnourishment. Conversely, if used for domestic consumption, the crops harvested on the acquired land could ensure food security for the local populations. (letter)

  1. Large Scale EOF Analysis of Climate Data

    Science.gov (United States)

    Prabhat, M.; Gittens, A.; Kashinath, K.; Cavanaugh, N. R.; Mahoney, M.

    2016-12-01

    We present a distributed approach for extracting EOFs from 3D climate data. We implement the method in Apache Spark and process multi-TB sized datasets on O(1000-10,000) cores. We apply this method to latitude-weighted ocean temperature data from CFSR, a 2.2 terabyte data set comprising ocean and subsurface reanalysis measurements collected at 41 levels in the ocean, at 6 hour intervals over 31 years. We extract the first 100 EOFs of this full data set and compare to the EOFs computed on the surface temperature field alone. Our analyses provide evidence of Kelvin and Rossby waves and components of large-scale modes of oscillation, including the ENSO and PDO, that are not visible in the usual SST EOFs. Further, they provide information on the most influential parts of the ocean, such as the thermocline, that exist below the surface. Work is ongoing to understand the factors determining the depth-varying spatial patterns observed in the EOFs. We will experiment with weighting schemes to appropriately account for the differing depths of the observations. We also plan to apply the same distributed approach to the analysis of 3D atmospheric climate data sets, including multiple variables. Because the atmosphere changes on a quicker time-scale than the ocean, we expect that the results will demonstrate an even greater advantage to computing 3D EOFs in lieu of 2D EOFs.
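
    At its core, EOF extraction is a truncated SVD of the latitude-weighted anomaly matrix; the Spark implementation distributes exactly this computation. A single-machine sketch with synthetic data (the array shapes and the square-root-of-cosine weighting are common conventions, assumed here rather than taken from the paper):

```python
# Single-machine sketch of EOF analysis: SVD of the latitude-weighted
# anomaly matrix. Synthetic data; shapes and weighting are assumptions.
import numpy as np

rng = np.random.default_rng(0)
data = rng.standard_normal((500, 2000))   # 500 time steps x 2000 grid cells
lat = np.linspace(-80.0, 80.0, 2000)      # hypothetical latitude per cell

anom = data - data.mean(axis=0)                       # remove the time mean
weighted = anom * np.sqrt(np.cos(np.deg2rad(lat)))    # latitude weighting

# Rows of vt are the EOF spatial patterns; u*s gives the
# corresponding principal-component time series.
u, s, vt = np.linalg.svd(weighted, full_matrices=False)
eofs = vt[:100]                                       # first 100 EOFs
explained = s**2 / (s**2).sum()
print("EOF matrix shape:", eofs.shape)
print(f"variance explained by EOF 1: {explained[0]:.1%}")
```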

  2. Mirror dark matter and large scale structure

    International Nuclear Information System (INIS)

    Ignatiev, A.Yu.; Volkas, R.R.

    2003-01-01

    Mirror matter is a dark matter candidate. In this paper, we reexamine the linear regime of density perturbation growth in a universe containing mirror dark matter. Taking adiabatic scale-invariant perturbations as the input, we confirm that the resulting processed power spectrum is richer than for the more familiar cases of cold, warm and hot dark matter. The new features include a maximum at a certain scale λ_max, collisional damping below a smaller characteristic scale λ_S′, and oscillatory perturbations between the two. These scales are functions of the fundamental parameters of the theory. In particular, they decrease with decreasing x, the ratio of the mirror plasma temperature to that of the ordinary plasma. For x ∼ 0.2, the scale λ_max becomes galactic. Mirror dark matter therefore leads to bottom-up large-scale structure formation, similar to conventional cold dark matter, for x ≲ 0.2. Indeed, the smaller the value of x, the closer mirror dark matter resembles standard cold dark matter during the linear regime. The differences pertain to scales smaller than λ_S′ in the linear regime, and generally in the nonlinear regime because mirror dark matter is chemically complex and to some extent dissipative. Lyman-α forest data and the early reionization epoch established by WMAP may hold the key to distinguishing mirror dark matter from WIMP-style cold dark matter.

  3. Large-scale computing techniques for complex system simulations

    CERN Document Server

    Dubitzky, Werner; Schott, Bernard

    2012-01-01

    Complex systems modeling and simulation approaches are being adopted in a growing number of sectors, including finance, economics, biology, astronomy, and many more. Technologies ranging from distributed computing to specialized hardware are explored and developed to address the computational requirements arising in complex systems simulations. The aim of this book is to present a representative overview of contemporary large-scale computing technologies in the context of complex systems simulation applications. The intention is to identify new research directions in this field and

  4. Enabling Large-Scale Biomedical Analysis in the Cloud

    Directory of Open Access Journals (Sweden)

    Ying-Chih Lin

    2013-01-01

    Full Text Available Recent progress in high-throughput instrumentation has led to an astonishing growth in both the volume and complexity of biomedical data collected from various sources. This planet-scale data brings serious challenges for storage and computing technologies. Cloud computing is a promising alternative because it jointly addresses storage and high-performance computing for large-scale data. This work briefly introduces data-intensive computing systems and summarizes existing cloud-based resources in bioinformatics. These developments and applications should facilitate biomedical research by making the vast amount of diverse data meaningful and usable.

  5. Generation of large-scale vorticity in rotating stratified turbulence with inhomogeneous helicity: mean-field theory

    Science.gov (United States)

    Kleeorin, N.

    2018-06-01

    We discuss a mean-field theory of the generation of large-scale vorticity in rotating, density-stratified, developed turbulence with inhomogeneous kinetic helicity. We show that a large-scale non-uniform flow is produced either by the combined action of density-stratified rotating turbulence and uniform kinetic helicity, or by the combined effect of rotating incompressible turbulence and inhomogeneous kinetic helicity. These effects result in the formation of a large-scale shear; in turn, its interaction with the small-scale turbulence causes an excitation of a large-scale instability (known as a vorticity dynamo) due to a combined effect of the large-scale shear and the Reynolds-stress-induced generation of mean vorticity. The latter is due to the effect of the large-scale shear on the Reynolds stress. Fast rotation suppresses this large-scale instability.

  6. Sensitivity technologies for large scale simulation

    International Nuclear Information System (INIS)

    Collis, Samuel Scott; Bartlett, Roscoe Ainsworth; Smith, Thomas Michael; Heinkenschloss, Matthias; Wilcox, Lucas C.; Hill, Judith C.; Ghattas, Omar; Berggren, Martin Olof; Akcelik, Volkan; Ober, Curtis Curry; van Bloemen Waanders, Bart Gustaaf; Keiter, Eric Richard

    2005-01-01

    Sensitivity analysis is critically important to numerous analysis algorithms, including large-scale optimization, uncertainty quantification, reduced-order modeling, and error estimation. Our research focused on developing tools, algorithms, and standard interfaces to facilitate the implementation of sensitivity-type analysis into existing codes; equally important, the work focused on ways to increase the visibility of sensitivity analysis. We attempt to accomplish the first objective through the development of hybrid automatic differentiation tools, standard linear algebra interfaces for numerical algorithms, time domain decomposition algorithms, and two-level Newton methods. We attempt to accomplish the second goal by presenting the results of several case studies in which direct sensitivities and adjoint methods have been effectively applied, in addition to an investigation of h-p adaptivity using adjoint-based a posteriori error estimation. A mathematical overview is provided of direct sensitivities and adjoint methods for both steady-state and transient simulations. Two case studies are presented to demonstrate the utility of these methods. A direct sensitivity method is implemented to solve a source inversion problem for steady-state internal flows subject to convection-diffusion. Real-time performance is achieved using a novel decomposition into offline and online calculations. Adjoint methods are used to reconstruct initial conditions of a contamination event in an external flow. We demonstrate an adjoint-based transient solution. In addition, we investigated time domain decomposition algorithms in an attempt to improve the efficiency of transient simulations. Because derivative calculations are at the root of sensitivity calculations, we have developed hybrid automatic differentiation methods and implemented this approach for shape optimization for gas dynamics using the Euler equations. The hybrid automatic differentiation method was applied to a first
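
    The practical appeal of the adjoint methods discussed above is that a single extra linear solve yields the sensitivity of a scalar objective with respect to all parameters at once. A hedged sketch for a steady-state linear model (the small dense system stands in for a discretized PDE; none of this is the authors' code):

```python
# Adjoint sensitivity for a steady-state linear model A u = B p with
# objective J = c.u: one adjoint solve gives dJ/dp for all parameters.
# The random dense system is an illustrative stand-in for a PDE solve.
import numpy as np

rng = np.random.default_rng(1)
n, m = 50, 10                       # state size, number of parameters
A = np.eye(n) + 0.1 * rng.standard_normal((n, n))
B = rng.standard_normal((n, m))     # parameter-to-source map, b = B p
c = rng.standard_normal(n)          # J(u) = c @ u, a linear observable
p = rng.standard_normal(m)

u = np.linalg.solve(A, B @ p)       # one forward solve
lam = np.linalg.solve(A.T, c)       # one adjoint solve: A^T lam = c
grad = B.T @ lam                    # dJ/dp, all m components at once

# Sanity check of one component against a finite difference:
eps, k = 1e-6, 3
dp = np.zeros(m)
dp[k] = eps
u2 = np.linalg.solve(A, B @ (p + dp))
print(grad[k], (c @ u2 - c @ u) / eps)   # the two numbers should agree
```

    A direct (forward) sensitivity method would instead need one extra solve per parameter, which is why adjoints win when parameters outnumber objectives.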

  7. GPU-based large-scale visualization

    KAUST Repository

    Hadwiger, Markus

    2013-11-19

    Recent advances in image and volume acquisition as well as computational advances in simulation have led to an explosion of the amount of data that must be visualized and analyzed. Modern techniques combine the parallel processing power of GPUs with out-of-core methods and data streaming to enable the interactive visualization of giga- and terabytes of image and volume data. A major enabler for interactivity is making both the computational and the visualization effort proportional to the amount of data that is actually visible on screen, decoupling it from the full data size. This leads to powerful display-aware multi-resolution techniques that enable the visualization of data of almost arbitrary size. The course consists of two major parts: An introductory part that progresses from fundamentals to modern techniques, and a more advanced part that discusses details of ray-guided volume rendering, novel data structures for display-aware visualization and processing, and the remote visualization of large online data collections. You will learn how to develop efficient GPU data structures and large-scale visualizations, implement out-of-core strategies and concepts such as virtual texturing that have only been employed recently, as well as how to use modern multi-resolution representations. These approaches reduce the GPU memory requirements of extremely large data to a working set size that fits into current GPUs. You will learn how to perform ray-casting of volume data of almost arbitrary size and how to render and process gigapixel images using scalable, display-aware techniques. We will describe custom virtual texturing architectures as well as recent hardware developments in this area. We will also describe client/server systems for distributed visualization, on-demand data processing and streaming, and remote visualization. We will describe implementations using OpenGL as well as CUDA, exploiting parallelism on GPUs combined with additional asynchronous
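
    The display-aware multi-resolution idea at the heart of the course can be caricatured in a few lines: precompute a downsampling pyramid, then fetch only the level whose resolution matches the pixels actually on screen. A NumPy stand-in for the GPU and out-of-core machinery (sizes and the level-selection rule are illustrative assumptions):

```python
# Toy sketch of display-aware multi-resolution selection: build a
# downsampling pyramid, then pick the coarsest level that still offers
# at least one voxel per screen pixel. Pure NumPy stand-in for the
# GPU/out-of-core techniques described in the course.
import numpy as np

def build_pyramid(volume, levels):
    """Each level halves every axis by averaging 2x2x2 blocks."""
    pyramid = [volume]
    for _ in range(levels - 1):
        v = pyramid[-1]
        x, y, z = (s // 2 for s in v.shape)
        v = v[:2*x, :2*y, :2*z].reshape(x, 2, y, 2, z, 2).mean(axis=(1, 3, 5))
        pyramid.append(v)
    return pyramid

def pick_level(pyramid, pixels_on_screen):
    """Coarsest level with >= one voxel per screen pixel."""
    for level, v in enumerate(reversed(pyramid)):
        if min(v.shape) >= pixels_on_screen:
            return len(pyramid) - 1 - level
    return 0  # fall back to full resolution

vol = np.random.rand(256, 256, 256).astype(np.float32)
pyr = build_pyramid(vol, levels=4)
lvl = pick_level(pyr, pixels_on_screen=100)
print("render from level", lvl, "with shape", pyr[lvl].shape)
```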

  8. Large Scale Self-Organizing Information Distribution System

    National Research Council Canada - National Science Library

    Low, Steven

    2005-01-01

    This project investigates issues in "large-scale" networks. Here "large-scale" refers to networks with a large number of high-capacity nodes and transmission links, shared by a large number of users...

  9. Distributed large-scale dimensional metrology new insights

    CERN Document Server

    Franceschini, Fiorenzo; Maisano, Domenico

    2011-01-01

    Focuses on the latest insights into, and challenges of, distributed large-scale dimensional metrology. Enables practitioners to study distributed large-scale dimensional metrology independently. Includes specific examples of the development of new system prototypes.

  10. Unified Access Architecture for Large-Scale Scientific Datasets

    Science.gov (United States)

    Karna, Risav

    2014-05-01

    Data-intensive sciences have to deploy diverse large-scale database technologies for data analytics, as scientists are now dealing with much larger data volumes than ever before. While array databases have bridged many gaps between the needs of data-intensive research fields and DBMS technologies (Zhang 2011), invocation of the other big data tools accompanying these databases is still manual and separate from the database management interface. We identify this as an architectural challenge that will increasingly complicate the user's workflow owing to the growing number of useful but isolated and niche database tools. Such use of data analysis tools in effect leaves the burden on the user's end to synchronize the results from other data analysis tools with the database management system. To this end, we propose a unified access interface for using big data tools within a large-scale scientific array database, using the database queries themselves to embed foreign routines belonging to the big data tools. Such an invocation of foreign data manipulation routines inside a query can be made possible through a user-defined function (UDF). UDFs that allow such levels of freedom as to call modules from another language and interface back and forth between the query body and the side-loaded functions are needed for this purpose. For this research we attempt coupling of four widely used tools, Hadoop (hadoop1), Matlab (matlab1), R (r1) and ScaLAPACK (scalapack1), with the UDF feature of rasdaman (Baumann 98), an array-based data manager, to investigate this concept. The native array data model used by an array-based data manager provides compact data storage and high-performance operations on ordered data such as spatial data, temporal data, and matrix-based data for linear algebra operations (scidbusr1). Performance issues arising due to the coupling of tools with different paradigms, niche functionalities, separate processes and output
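
    rasdaman's UDF interface is its own; as a self-contained stand-in for the pattern described, a query body interfacing back and forth with a side-loaded routine, the same idea is shown below with SQLite's Python driver (the table, function name and data are hypothetical):

```python
# The abstract's key mechanism is a user-defined function (UDF) that
# lets a database query call out to an external analysis routine.
# rasdaman's own UDF interface differs; SQLite is used here only as a
# self-contained illustration of the same pattern.
import math
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE samples (value REAL)")
con.executemany("INSERT INTO samples VALUES (?)",
                [(float(v),) for v in range(1, 6)])

# Register a "foreign routine" (plain Python here; it could equally
# hand data to R, MATLAB, or a ScaLAPACK job) under a SQL-callable name.
con.create_function("log_transform", 1, lambda x: math.log(x))

# The query body and the side-loaded function interface back and forth:
for value, logged in con.execute(
        "SELECT value, log_transform(value) FROM samples"):
    print(value, round(logged, 3))
```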

  11. Probes of large-scale structure in the Universe

    International Nuclear Information System (INIS)

    Suto, Yasushi; Gorski, K.; Juszkiewicz, R.; Silk, J.

    1988-01-01

    Recent progress in observational techniques has made it possible to confront quantitatively various models for the large-scale structure of the Universe with detailed observational data. We develop a general formalism to show that the gravitational instability theory for the origin of large-scale structure is now capable of critically confronting observational results on cosmic microwave background radiation angular anisotropies, large-scale bulk motions and large-scale clumpiness in the galaxy counts. (author)

  12. Dose monitoring in large-scale flowing aqueous media

    International Nuclear Information System (INIS)

    Kuruca, C.N.

    1995-01-01

    The Miami Electron Beam Research Facility (EBRF) has been in operation for six years. The EBRF houses a 1.5 MV, 75 kW DC scanned electron beam. Experiments have been conducted to evaluate the effectiveness of high-energy electron irradiation in the removal of toxic organic chemicals from contaminated water and in the disinfection of various wastewater streams. The large-scale plant operates at approximately 450 L/min (120 gal/min). The radiation dose absorbed by the flowing aqueous streams is estimated by measuring the difference in water temperature before and after it passes in front of the beam. Temperature measurements are made using resistance temperature devices (RTDs) and recorded by computer along with other operating parameters. The estimated dose is obtained from the measured temperature differences using the specific heat of water. This presentation will discuss experience with this measurement system, its application to different water presentation devices, sources of error, and the advantages and disadvantages of its use in large-scale process applications
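
    The dose estimate described above follows directly from the specific heat of water: for a flowing stream, the absorbed dose in gray (J/kg) is the specific heat times the temperature rise. A back-of-envelope sketch (the temperatures are illustrative, not measured values):

```python
# Back-of-envelope version of the calorimetric dose estimate: absorbed
# dose in gray (J/kg) = specific heat of water * temperature rise.
# Temperatures below are illustrative, not measured values.
C_P_WATER = 4186.0  # J/(kg*K)

def dose_from_delta_t(t_in_c, t_out_c):
    """Absorbed dose (Gy) from inlet/outlet water temperatures in deg C."""
    return C_P_WATER * (t_out_c - t_in_c)

# A 0.2 K rise across the beam corresponds to roughly 0.84 kGy:
print(f"{dose_from_delta_t(20.0, 20.2):.0f} Gy")
```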

  13. Solving large scale structure in ten easy steps with COLA

    Energy Technology Data Exchange (ETDEWEB)

    Tassev, Svetlin [Department of Astrophysical Sciences, Princeton University, 4 Ivy Lane, Princeton, NJ 08544 (United States); Zaldarriaga, Matias [School of Natural Sciences, Institute for Advanced Study, Olden Lane, Princeton, NJ 08540 (United States); Eisenstein, Daniel J., E-mail: stassev@cfa.harvard.edu, E-mail: matiasz@ias.edu, E-mail: deisenstein@cfa.harvard.edu [Center for Astrophysics, Harvard University, 60 Garden Street, Cambridge, MA 02138 (United States)

    2013-06-01

    We present the COmoving Lagrangian Acceleration (COLA) method: an N-body method for solving for Large Scale Structure (LSS) in a frame that is comoving with observers following trajectories calculated in Lagrangian Perturbation Theory (LPT). Unlike standard N-body methods, the COLA method can straightforwardly trade accuracy at small scales in order to gain computational speed without sacrificing accuracy at large scales. This is especially useful for cheaply generating large ensembles of accurate mock halo catalogs required to study galaxy clustering and weak lensing, as those catalogs are essential for performing detailed error analysis for ongoing and future surveys of LSS. As an illustration, we ran a COLA-based N-body code on a box of size 100 Mpc/h with particles of mass ≈ 5 × 10^9 M_sun/h. Running the code with only 10 timesteps was sufficient to obtain an accurate description of halo statistics down to halo masses of at least 10^11 M_sun/h. This is only at a modest speed penalty when compared to mocks obtained with LPT. A standard detailed N-body run is orders of magnitude slower than our COLA-based code. The speed-up we obtain with COLA is due to the fact that we calculate the large-scale dynamics exactly using LPT, while letting the N-body code solve for the small scales, without requiring it to capture exactly the internal dynamics of halos. Achieving a similar level of accuracy in halo statistics without the COLA method requires at least 3 times more timesteps than when COLA is employed.
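
    The LPT frame that gives COLA its speed-up can be illustrated with the Zel'dovich (first-order LPT) approximation, in which particles follow trajectories x(q, t) = q + D(t) ψ(q); COLA then only has to integrate the residual displacement around these trajectories. A 1D toy sketch (single mode, arbitrary units, not the authors' code):

```python
# 1D toy of the Zel'dovich (first-order LPT) trajectories that COLA
# uses as its comoving frame: x(q, t) = q + D(t) * psi(q). One sine
# mode, periodic unit box, arbitrary units -- purely illustrative.
import numpy as np

n = 64
q = np.linspace(0.0, 1.0, n, endpoint=False)   # Lagrangian grid
psi = 0.05 * np.sin(2.0 * np.pi * q)           # initial displacement field

for D in (0.1, 0.5, 1.0):                      # growth factor snapshots
    x = (q + D * psi) % 1.0                    # Zel'dovich positions
    rho, _ = np.histogram(x, bins=16, range=(0.0, 1.0))  # crude density
    print(f"D={D}: particles per bin, max/min = {rho.max()}/{rho.min()}")
```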

  14. GPU-Accelerated Sparse Matrix Solvers for Large-Scale Simulations, Phase I

    Data.gov (United States)

    National Aeronautics and Space Administration — Many large-scale numerical simulations can be broken down into common mathematical routines. While the applications may differ, the need to perform functions such as...

  15. PANDA: a Large Scale Multi-Purpose Test Facility for LWR Safety Research

    Energy Technology Data Exchange (ETDEWEB)

    Dreier, Joerg; Paladino, Domenico; Huggenberger, Max; Andreani, Michele [Laboratory for Thermal-Hydraulics, Nuclear Energy and Safety Research Department, Paul Scherrer Institut (PSI), CH-5232 Villigen PSI (Switzerland); Yadigaroglu, George [ETH Zuerich, Technoparkstrasse 1, Einstein 22- CH-8005 Zuerich (Switzerland)

    2008-07-01

    PANDA is a large-scale, multi-purpose thermal-hydraulics test facility built and operated by PSI. Due to its modular structure, PANDA provides flexibility for a variety of applications, ranging from integral containment system investigations, primary system tests, and component experiments to large-scale separate-effects tests. For many applications the experimental results are directly used, for example for concept demonstrations or for the characterisation of phenomena or components, but all the experimental data generated in the various test campaigns is unique and was and/or will be widely used for the validation and improvement of a variety of computer codes, including codes with 3D capabilities, for reactor safety analysis. The paper provides an overview of the completed and ongoing research programs performed in the PANDA facility in the different areas of application, including the main results and conclusions of the investigations. In particular, the advanced passive containment cooling system concept investigations of the SBWR and ESBWR, as well as of the SWR1000, in relation to various aspects are presented and the main findings are summarised. Finally, the goals, planned investigations and expected results of the ongoing OECD project SETH-2 are presented. (authors)

  16. PANDA: a Large Scale Multi-Purpose Test Facility for LWR Safety Research

    International Nuclear Information System (INIS)

    Dreier, Joerg; Paladino, Domenico; Huggenberger, Max; Andreani, Michele; Yadigaroglu, George

    2008-01-01

    PANDA is a large-scale, multi-purpose thermal-hydraulics test facility built and operated by PSI. Due to its modular structure, PANDA provides flexibility for a variety of applications, ranging from integral containment system investigations, primary system tests, and component experiments to large-scale separate-effects tests. For many applications the experimental results are directly used, for example for concept demonstrations or for the characterisation of phenomena or components, but all the experimental data generated in the various test campaigns is unique and was and/or will be widely used for the validation and improvement of a variety of computer codes, including codes with 3D capabilities, for reactor safety analysis. The paper provides an overview of the completed and ongoing research programs performed in the PANDA facility in the different areas of application, including the main results and conclusions of the investigations. In particular, the advanced passive containment cooling system concept investigations of the SBWR and ESBWR, as well as of the SWR1000, in relation to various aspects are presented and the main findings are summarised. Finally, the goals, planned investigations and expected results of the ongoing OECD project SETH-2 are presented. (authors)

  17. Large scale dynamics of protoplanetary discs

    Science.gov (United States)

    Béthune, William

    2017-08-01

    Planets form in the gaseous and dusty disks orbiting young stars. These protoplanetary disks are dispersed in a few million years, being accreted onto the central star or evaporated into the interstellar medium. To explain the observed accretion rates, it is commonly assumed that matter is transported through the disk by turbulence, although the mechanism sustaining turbulence is uncertain. On the other hand, irradiation by the central star could heat up the disk surface and trigger a photoevaporative wind, but thermal effects cannot account for the observed acceleration and collimation of the wind into a narrow jet perpendicular to the disk plane. Both issues can be resolved if the disk is sensitive to magnetic fields. Weak fields lead to the magnetorotational instability, whose outcome is a state of sustained turbulence. Strong fields can slow down the disk, causing it to accrete while launching a collimated wind. However, the coupling between the magnetic field and the disk gas is mediated by electric charges, each of which is outnumbered by several billion neutral molecules. This imperfect coupling between the magnetic field and the neutral gas is described in terms of "non-ideal" effects, introducing new dynamical behaviors. This thesis is devoted to the transport processes happening inside weakly ionized and weakly magnetized accretion disks; the role of microphysical effects on the large-scale dynamics of the disk is of primary importance. As a first step, I exclude the wind and examine the impact of non-ideal effects on the turbulent properties near the disk midplane. I show that the flow can spontaneously organize itself if the ionization fraction is low enough; in this case, accretion is halted and the disk exhibits axisymmetric structures, with possible consequences for planetary formation. As a second step, I study the launching of disk winds via a global model of a stratified disk embedded in a warm atmosphere. This model is the first to compute non-ideal effects from

  18. Large-Scale Spacecraft Fire Safety Tests

    Science.gov (United States)

    Urban, David; Ruff, Gary A.; Ferkul, Paul V.; Olson, Sandra; Fernandez-Pello, A. Carlos; T'ien, James S.; Torero, Jose L.; Cowlard, Adam J.; Rouvreau, Sebastien; Minster, Olivier; hide

    2014-01-01

    An international collaborative program is underway to address open issues in spacecraft fire safety. Because of limited access to long-term low-gravity conditions and the small volume generally allotted for these experiments, there have been relatively few experiments that directly study spacecraft fire safety under low-gravity conditions. Furthermore, none of these experiments have studied sample sizes and environment conditions typical of those expected in a spacecraft fire. The major constraint has been the size of the sample, with prior experiments limited to samples of the order of 10 cm in length and width or smaller. This lack of experimental data forces spacecraft designers to base their designs and safety precautions on a 1-g understanding of flame spread, fire detection, and suppression. However, low-gravity combustion research has demonstrated substantial differences in flame behavior in low gravity. This, combined with the differences caused by the confined spacecraft environment, necessitates practical-scale spacecraft fire safety research to mitigate risks for future space missions. To address this issue, a large-scale spacecraft fire experiment is under development by NASA and an international team of investigators. This poster presents the objectives, status, and concept of this collaborative international project (Saffire). The project plan is to conduct fire safety experiments on three sequential flights of an unmanned ISS re-supply spacecraft (the Orbital Cygnus vehicle) after they have completed their delivery of cargo to the ISS and have begun their return journeys to Earth. On two flights (Saffire-1 and Saffire-3), the experiment will consist of a flame spread test involving a meter-scale sample ignited in the pressurized volume of the spacecraft and allowed to burn to completion while measurements are made. On one of the flights (Saffire-2), 9 smaller (5 x 30 cm) samples will be tested to evaluate NASA's material flammability screening tests

  19. Large-scale fuel cycle centres

    International Nuclear Information System (INIS)

    Smiley, S.H.; Black, K.M.

    1977-01-01

    The US Nuclear Regulatory Commission (NRC) has considered the nuclear energy centre concept for fuel cycle plants in the Nuclear Energy Centre Site Survey 1975 (NECSS-75) Rep. No. NUREG-0001, an important study mandated by the US Congress in the Energy Reorganization Act of 1974 which created the NRC. For this study, the NRC defined fuel cycle centres as consisting of fuel reprocessing and mixed-oxide fuel fabrication plants, and optional high-level waste and transuranic waste management facilities. A range of fuel cycle centre sizes corresponded to the fuel throughput of power plants with a total capacity of 50,000-300,000 MW(e). The types of fuel cycle facilities located at the fuel cycle centre permit the assessment of the role of fuel cycle centres in enhancing the safeguarding of strategic special nuclear materials - plutonium and mixed oxides. Siting fuel cycle centres presents a smaller problem than siting reactors. A single reprocessing plant of the scale projected for use in the USA (1500-2000 t/a) can reprocess fuel from reactors producing 50,000-65,000 MW(e). Only two or three fuel cycle centres of the upper-limit size considered in the NECSS-75 would be required in the USA by the year 2000. The NECSS-75 fuel cycle centre evaluation showed that large-scale fuel cycle centres present no real technical siting difficulties from a radiological effluent and safety standpoint. Some construction economies may be achievable with fuel cycle centres, which offer opportunities to improve waste-management systems. Combined centres consisting of reactors and fuel reprocessing and mixed-oxide fuel fabrication plants were also studied in the NECSS. Such centres can eliminate shipment not only of Pu but also of mixed-oxide fuel. Increased fuel cycle costs result from implementation of combined centres unless the fuel reprocessing plants are commercial-sized. Development of Pu-burning reactors could reduce any economic penalties of combined centres. The need for effective fissile

  20. Synthesis of Large-Scale Single-Crystalline Monolayer WS2 Using a Semi-Sealed Method

    Directory of Open Access Journals (Sweden)

    Feifei Lan

    2018-02-01

    Full Text Available As a two-dimensional semiconductor, WS2 has attracted great attention due to its rich physical properties and potential applications. However, it is still difficult to synthesize single-crystalline monolayer WS2 at large scale. Here, we report the growth of large-scale triangular single-crystalline WS2 in a semi-sealed installation by chemical vapor deposition (CVD). Through this method, triangular single-crystalline WS2 with an average length of more than 300 µm was obtained, the largest being about 405 µm in length. WS2 triangles with different sizes and thicknesses were analyzed by optical microscopy and atomic force microscopy (AFM). Their optical properties were evaluated by Raman and photoluminescence (PL) spectra. This report paves the way to fabricating large-scale single-crystalline monolayer WS2, which is useful for the growth of high-quality WS2 and its potential applications in the future.

  1. Manufacturing test of large scale hollow capsule and long length cladding in the large scale oxide dispersion strengthened (ODS) martensitic steel

    International Nuclear Information System (INIS)

    Narita, Takeshi; Ukai, Shigeharu; Kaito, Takeji; Ohtsuka, Satoshi; Fujiwara, Masayuki

    2004-04-01

    Mass production capability of oxide dispersion strengthened (ODS) martensitic steel cladding (9Cr) has been evaluated in Phase II of the Feasibility Studies on Commercialized Fast Reactor Cycle System. The cost of manufacturing the mother tube (raw material powder production, mechanical alloying (MA) by ball mill, canning, hot extrusion, and machining) is a dominant factor in the total cost of manufacturing ODS ferritic steel cladding. In this study, a large-scale 9Cr-ODS martensitic steel mother tube, made with a large-scale hollow capsule, and long-length claddings were manufactured, and the applicability of these processes was evaluated. The following results were obtained. (1) Manufacturing of a large-scale mother tube with dimensions of 32 mm OD, 21 mm ID, and 2 m length was successfully carried out using a large-scale hollow capsule. This mother tube has a high degree of dimensional accuracy. (2) The chemical composition and microstructure of the manufactured mother tube are similar to those of existing mother tubes manufactured with small-scale cans, and no remarkable difference between the bottom and top ends of the manufactured mother tube was observed. (3) Long-length cladding was successfully manufactured from the large-scale mother tube made using the large-scale hollow capsule. (4) For reducing the manufacturing cost of ODS steel claddings, manufacturing mother tubes using large-scale hollow capsules is promising. (author)

  2. Large-scale fuel cycle centers

    International Nuclear Information System (INIS)

    Smiley, S.H.; Black, K.M.

    1977-01-01

    The United States Nuclear Regulatory Commission (NRC) has considered the nuclear energy center concept for fuel cycle plants in the Nuclear Energy Center Site Survey - 1975 (NECSS-75) -- an important study mandated by the U.S. Congress in the Energy Reorganization Act of 1974 which created the NRC. For the study, NRC defined fuel cycle centers to consist of fuel reprocessing and mixed oxide fuel fabrication plants, and optional high-level waste and transuranic waste management facilities. A range of fuel cycle center sizes corresponded to the fuel throughput of power plants with a total capacity of 50,000-300,000 MWe. The types of fuel cycle facilities located at the fuel cycle center permit the assessment of the role of fuel cycle centers in enhancing the safeguarding of strategic special nuclear materials -- plutonium and mixed oxides. Siting of fuel cycle centers presents a considerably smaller problem than the siting of reactors. A single reprocessing plant of the scale projected for use in the United States (1500-2000 MT/yr) can reprocess the fuel from reactors producing 50,000-65,000 MWe. Only two or three fuel cycle centers of the upper-limit size considered in the NECSS-75 would be required in the United States by the year 2000. The NECSS-75 fuel cycle center evaluations showed that large-scale fuel cycle centers present no real technical difficulties in siting from a radiological effluent and safety standpoint. Some construction economies may be attainable with fuel cycle centers; such centers offer opportunities for improved waste management systems. Combined centers consisting of reactors and fuel reprocessing and mixed oxide fuel fabrication plants were also studied in the NECSS. Such centers can eliminate shipment not only of plutonium but also of mixed oxide fuel. Increased fuel cycle costs result from implementation of combined centers unless the fuel reprocessing plants are commercial-sized. Development of plutonium-burning reactors could reduce any

  3. Fuel pin integrity assessment under large scale transients

    International Nuclear Information System (INIS)

    Dutta, B.K.

    2006-01-01

    The integrity of fuel rods under normal, abnormal and accident conditions is an important consideration during fuel design of advanced nuclear reactors. The fuel matrix and the sheath form the first barrier preventing the release of radioactive materials into the primary coolant. An understanding of fuel and clad behaviour under different reactor conditions, particularly under beyond-design-basis accident scenarios leading to large-scale transients, is always desirable to assess the inherent safety margins in fuel pin design and to plan for the mitigation of the consequences of accidents, if any. Severe accident conditions are typically characterized by energy deposition rates far exceeding the heat removal capability of the reactor coolant system. This may lead to clad failure due to fission gas pressure at high temperature, large-scale pellet-clad interaction and clad melting. Fuel rod performance is affected by many interdependent complex phenomena involving extremely complex material behaviour. The versatile experimental database available in this area has led to the development of powerful analytical tools to characterize fuel under extreme scenarios.

  4. Large Scale Community Detection Using a Small World Model

    Directory of Open Access Journals (Sweden)

    Ranjan Kumar Behera

    2017-11-01

    Full Text Available In a social network, small or large communities within the network play a major role in deciding the functionalities of the network. Despite diverse definitions, communities in a network may be defined as groups of nodes that are more densely connected to each other than to nodes outside the group. Revealing such hidden communities is a challenging research problem. Real-world social networks exhibit the small-world phenomenon, which indicates that any two social entities can be reached in a small number of steps. In this paper, nodes are mapped into communities based on random walks in the network. However, uncovering communities in large-scale networks is a challenging task due to the unprecedented growth in the size of social networks. A good number of community detection algorithms based on random walks exist in the literature; when large-scale social networks are considered, however, these algorithms are observed to take considerably longer time. In this work, with the objective of improving efficiency, a parallel programming framework like Map-Reduce has been considered for uncovering the hidden communities in social networks. The proposed approach has been compared with some standard existing community detection algorithms for both synthetic and real-world datasets in order to examine its performance, and it is observed that the proposed algorithm is more efficient than the existing ones.
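
    The paper's own method couples a random-walk community measure with Map-Reduce and is not reproduced here. As a hedged stand-in for the kind of iterative relabeling such frameworks distribute, here is deterministic label propagation on a toy graph of two triangles joined by a bridge:

```python
# Deterministic label propagation on a toy graph -- a stand-in for the
# paper's random-walk/Map-Reduce method, shown only to illustrate the
# iterative relabeling step that such frameworks parallelize.
from collections import Counter, defaultdict

# Two triangles joined by one bridge edge (3-4): two obvious communities.
edges = [(1, 2), (2, 3), (1, 3), (3, 4), (4, 5), (5, 6), (4, 6)]
adj = defaultdict(set)
for u, v in edges:
    adj[u].add(v)
    adj[v].add(u)

labels = {n: n for n in adj}           # every node starts in its own community
for _ in range(5):                     # a few sweeps suffice on this toy graph
    for node in sorted(adj):
        counts = Counter(labels[nb] for nb in adj[node])
        best = max(counts.values())
        candidates = {lab for lab, c in counts.items() if c == best}
        if labels[node] not in candidates:
            labels[node] = max(candidates)   # deterministic tie-break
print(labels)  # nodes 1-3 end up sharing one label, nodes 4-6 another
```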

  5. Literature Review: Herbal Medicine Treatment after Large-Scale Disasters.

    Science.gov (United States)

    Takayama, Shin; Kaneko, Soichiro; Numata, Takehiro; Kamiya, Tetsuharu; Arita, Ryutaro; Saito, Natsumi; Kikuchi, Akiko; Ohsawa, Minoru; Kohayagawa, Yoshitaka; Ishii, Tadashi

    2017-01-01

    Large-scale natural disasters, such as earthquakes, tsunamis, volcanic eruptions, and typhoons, occur worldwide. After the Great East Japan earthquake and tsunami, our medical support operation's experiences suggested that traditional medicine might be useful for treating the various symptoms of the survivors. However, little information is available regarding herbal medicine treatment in such situations. Considering that further disasters will occur, we performed a literature review and summarized the traditional medicine approaches for treatment after large-scale disasters. We searched PubMed and Cochrane Library for articles written in English, and Ichushi for those written in Japanese. Articles published before 31 March 2016 were included. Keywords "disaster" and "herbal medicine" were used in our search. Among studies involving herbal medicine after a disaster, we found two randomized controlled trials investigating post-traumatic stress disorder (PTSD), three retrospective investigations of trauma or common diseases, and seven case series or case reports of dizziness, pain, and psychosomatic symptoms. In conclusion, herbal medicine has been used to treat trauma, PTSD, and other symptoms after disasters. However, few articles have been published, likely due to the difficulty in designing high quality studies in such situations. Further study will be needed to clarify the usefulness of herbal medicine after disasters.

  6. DEMNUni: massive neutrinos and the bispectrum of large scale structures

    Science.gov (United States)

    Ruggeri, Rossana; Castorina, Emanuele; Carbone, Carmelita; Sefusatti, Emiliano

    2018-03-01

    The main effect of massive neutrinos on the large-scale structure consists in a few percent suppression of matter perturbations on all scales below their free-streaming scale. Such an effect is of particular importance as it allows one to constrain the value of the sum of the neutrino masses from measurements of the galaxy power spectrum. In this work, we present the first measurements of the next higher-order correlation function, the bispectrum, from N-body simulations that include massive neutrinos as particles. This is the simplest statistic characterising the non-Gaussian properties of the matter and dark matter halo distributions. We investigate, in the first place, the suppression due to massive neutrinos on the matter bispectrum, comparing our measurements with the simplest perturbation theory predictions, and find that the approximation in which neutrinos contribute at quadratic order in perturbation theory provides a good fit to the measurements in the simulations. On the other hand, as expected, a linear approximation for neutrino perturbations would lead to O(f_ν) errors on the total matter bispectrum at large scales. We then attempt an extension of previous results on the universality of linear halo bias in neutrino cosmologies to non-linear and non-local corrections, finding results consistent with the power spectrum analysis.
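
    For intuition, a bispectrum estimator averages products of three Fourier modes over closed triangles, i.e. B(k1, k2, k3) with k1 + k2 + k3 = 0. A toy 1D sketch follows (the field, wavenumbers and realization count are illustrative; the simulation-scale estimator used in the paper is far more involved):

```python
# Toy bispectrum estimate for a periodic 1D field via the closed
# triangle k3 = -(k1 + k2). A squared-Gaussian field is used because
# it has a nonzero bispectrum; all choices here are illustrative.
import numpy as np

rng = np.random.default_rng(2)
n, n_real = 256, 2000
k1, k2 = 3, 5                        # integer wavenumbers of the triangle

acc = 0.0
for _ in range(n_real):
    g = rng.standard_normal(n)
    delta = g**2 - 1.0               # zero-mean, non-Gaussian test field
    d = np.fft.fft(delta) / n
    acc += (d[k1] * d[k2] * d[-(k1 + k2) % n]).real
print("B(k1, k2, -k1-k2) estimate:", acc / n_real)  # ~ 8 / n**2 here
```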

  7. Towards large-scale plasma-assisted synthesis of nanowires

    Science.gov (United States)

    Cvelbar, U.

    2011-05-01

    Large quantities of nanomaterials, e.g. nanowires (NWs), are needed to overcome their high market price and make nanotechnology widely available for general public use and application in numerous devices. There is therefore an enormous need for new methods or routes for the synthesis of such nanostructures. Here, plasma technologies for the synthesis of NWs, nanotubes, nanoparticles or other nanostructures may play a key role in the near future. This paper presents large-scale synthesis as a three-dimensional problem connecting the time, quantity and quality of the nanostructures. Four different plasma methods for NW synthesis are presented and contrasted with other methods, e.g. thermal processes, chemical vapour deposition and wet chemical processes. The pros and cons are discussed in detail for the case of two metal oxides: iron oxide and zinc oxide NWs, which are important for many applications.

  8. Automatic Installation and Configuration for Large Scale Farms

    CERN Document Server

    Novák, J

    2005-01-01

    Since the early appearance of commodity hardware, the utilization of computers has risen rapidly, and they have become essential in all areas of life. It was soon realized that nodes are able to work cooperatively in order to solve new, more complex tasks. This concept materialized in coherent aggregations of computers called farms and clusters. Collective application of nodes, being efficient and economical, was adopted in education, research and industry before long. But maintenance, especially at large scale, emerged as a problem to be resolved. New challenges needed new methods and tools. Development work was started to build farm management applications and frameworks. In the first part of the thesis, these systems are introduced. After a general description of the matter, a comparative analysis of different approaches and tools illustrates the practical aspects of the theoretical discussion. CERN, the European Organization for Nuclear Research, is the largest particle physics laboratory in the world....

  9. USE OF RFID AT LARGE-SCALE EVENTS

    Directory of Open Access Journals (Sweden)

    Yuusuke KAWAKITA

    2005-01-01

    Full Text Available Radio Frequency Identification (RFID devices and related technologies have received a great deal of attention for their ability to perform non-contact object identification. Systems incorporating RFID have been evaluated from a variety of perspectives. The authors constructed a networked RFID system to support event management at NetWorld+Interop 2004 Tokyo, an event that received 150,000 visitors. The system used multiple RFID readers installed at the venue and RFID tags carried by each visitor to provide a platform for running various management and visitor support applications. This paper presents the results of this field trial of RFID readability rates. It further addresses the applicability of RFID systems to visitor management, a problematic aspect of large-scale events.

  10. Evaluating Unmanned Aerial Platforms for Cultural Heritage Large Scale Mapping

    Science.gov (United States)

    Georgopoulos, A.; Oikonomou, C.; Adamopoulos, E.; Stathopoulou, E. K.

    2016-06-01

    When it comes to large-scale mapping of limited areas, especially cultural heritage sites, things become critical. Optical and non-optical sensors have been developed to such sizes and weights that they can be lifted by such platforms, e.g. LiDAR units. At the same time there is an increasing emphasis on solutions that enable users to get access to 3D information faster and cheaper. Considering the multitude of platforms and cameras, and the advancement of algorithms in conjunction with the increase in available computing power, this challenge should be, and indeed is, further investigated. In this paper a short review of UAS technologies today is attempted. A discussion follows as to their applicability and advantages, depending on their specifications, which vary immensely. The on-board cameras available are also compared and evaluated for large-scale mapping. Furthermore, a thorough analysis, review of, and experimentation with different software implementations of Structure from Motion and Multiple View Stereo algorithms, able to process such dense and mostly unordered sequences of digital images, is also conducted and presented. As a test data set, we use a rich optical and thermal data set from both fixed-wing and multi-rotor platforms over an archaeological excavation with adverse height variations, using different cameras. Dense 3D point clouds, digital terrain models and orthophotos have been produced and evaluated for their radiometric as well as metric qualities.

  11. The combustion behavior of large scale lithium titanate battery

    Science.gov (United States)

    Huang, Peifeng; Wang, Qingsong; Li, Ke; Ping, Ping; Sun, Jinhua

    2015-01-01

    Safety problems remain a big obstacle for lithium batteries on the way to large-scale application, yet knowledge of battery combustion behavior is limited. To investigate the combustion behavior of large-scale lithium batteries, three 50 Ah Li(NixCoyMnz)O2/Li4Ti5O12 batteries at different states of charge (SOC) were heated until they caught fire. The variation in flame size is depicted to analyze the combustion behavior directly. The mass loss rate, temperature and heat release rate are used to analyze the combustion behavior in terms of the underlying reactions. Based on the observed phenomena, the combustion process is divided into three basic stages, becoming more complicated at higher SOC, with sudden ejections of smoke. The reason is that a phase change occurs in the Li(NixCoyMnz)O2 material from a layered structure to a spinel structure. The critical ignition temperatures are 112–121°C on the anode tab and 139–147°C on the upper surface for all cells, but the heating time and combustion time become shorter with ascending SOC. The results indicate that the battery fire hazard increases with SOC. Analysis suggests that internal short circuits and the Li+ distribution are the main causes of this difference. PMID:25586064

  12. A novel bonding method for large scale poly(methyl methacrylate) micro- and nanofluidic chip fabrication

    Science.gov (United States)

    Qu, Xingtian; Li, Jinlai; Yin, Zhifu

    2018-04-01

    Micro- and nanofluidic chips are of increasing significance for biological and medical applications. Future advances in micro- and nanofluidics and their utilization in commercial applications depend on the development and fabrication of low-cost, high-fidelity, large-scale plastic micro- and nanofluidic chips. However, the majority of present fabrication methods suffer from a low bonding rate of the chip during the thermal bonding process, due to air trapped between the substrate and the cover plate. In the present work, a novel bonding technique based on Ar plasma and water treatment is proposed to fully bond large-scale micro- and nanofluidic chips. The influence of the Ar plasma parameters on the water contact angle, and the effect of the bonding conditions on the bonding rate and bonding strength of the chip, were studied. Fluorescence tests demonstrate that a 5 × 5 cm2 poly(methyl methacrylate) chip with 180 nm wide and 180 nm deep nanochannels can be fabricated without any blockage or leakage by the newly developed method.

  13. Fast Simulation of Large-Scale Floods Based on GPU Parallel Computing

    OpenAIRE

    Qiang Liu; Yi Qin; Guodong Li

    2018-01-01

    Computing speed is a significant issue in large-scale flood simulations for real-time response to disaster prevention and mitigation. Even today, most large-scale flood simulations are run on supercomputers due to the massive amounts of data and computation necessary. In this work, a two-dimensional shallow water model based on an unstructured Godunov-type finite volume scheme was proposed for flood simulation. To realize a fast simulation of large-scale floods on a personal...

  14. Pro website development and operations streamlining DevOps for large-scale websites

    CERN Document Server

    Sacks, Matthew

    2012-01-01

    Pro Website Development and Operations gives you the experience you need to create and operate a large-scale production website. Large-scale websites have their own unique set of problems regarding their design, problems that can get worse when agile methodologies are adopted for rapid results. Managing large-scale websites, deploying applications, and ensuring they are performing well often requires a full-scale team involving the development and operations sides of the company: two departments that don't always see eye to eye. When departments struggle with each other, it adds unnecessary comp

  15. Thermal power generation projects ``Large Scale Solar Heating``; EU-Thermie-Projekte ``Large Scale Solar Heating``

    Energy Technology Data Exchange (ETDEWEB)

    Kuebler, R.; Fisch, M.N. [Steinbeis-Transferzentrum Energie-, Gebaeude- und Solartechnik, Stuttgart (Germany)

    1998-12-31

    The aim of this project is the preparation of the ``Large Scale Solar Heating`` programme for a Europe-wide development of this technology. The subsequent demonstration programme was judged favourably by the experts but was not immediately (1996) accepted for financial support. In November 1997 the EU Commission provided 1.5 million ECU, which allowed an updated project proposal to be realised. Already by mid-1997 a smaller project had been approved, which had been requested under the lead of Chalmers Industriteknik (CIT) in Sweden and mainly serves technology transfer. (orig.)

  16. Analysis using large-scale ringing data

    Directory of Open Access Journals (Sweden)

    Baillie, S. R.

    2004-06-01

    survival and recruitment estimates from the French CES scheme to assess the relative contributions of survival and recruitment to overall population changes. He develops a novel approach to modelling survival rates from such multi-site data by using within-year recaptures to provide a covariate of between-year recapture rates. This provided parsimonious models of variation in recapture probabilities between sites and years. The approach provides promising results for the four species investigated and can potentially be extended to similar data from other CES/MAPS schemes. The final paper, by Blandine Doligez, David Thomson and Arie van Noordwijk (Doligez et al., 2004), illustrates how large-scale studies of population dynamics can be important for evaluating the effects of conservation measures. Their study is concerned with the reintroduction of White Stork populations to the Netherlands, where a re-introduction programme started in 1969 had resulted in a breeding population of 396 pairs by 2000. They demonstrate the need to consider a wide range of models in order to account for potential age, time, cohort and "trap-happiness" effects. As the data are based on resightings, such trap-happiness must reflect some form of heterogeneity in resighting probabilities. Perhaps surprisingly, the provision of supplementary food did not influence survival, but it may have had an indirect effect via the alteration of migratory behaviour. Spatially explicit modelling of data gathered at many sites inevitably results in starting models with very large numbers of parameters. The problem is often complicated further by having relatively sparse data at each site, even where the total amount of data gathered is very large. Both Julliard (2004) and Doligez et al. (2004) give explicit examples of problems caused by needing to handle very large numbers of parameters and show how they overcame them for their particular data sets. Such problems involve both the choice of appropriate

  17. Large-Scale Aerosol Modeling and Analysis

    Science.gov (United States)

    2008-09-30

    ...intrusion that occurred over the Iberian Peninsula (IP) during the last few decades. NAAPS simulations were used to investigate the origin and... (citation fragment: Torres, R. Rodrigo, J. de la Rosa, and A. M. De Frutos, "Strongest desert dust intrusion mixed with smoke over the Iberian Peninsula registered with...") ...impact cloud processes globally. With increasing dust storms due to climate change and land use changes in desert regions, the impact of the

  18. The Effect of Large Scale Salinity Gradient on Langmuir Turbulence

    Science.gov (United States)

    Fan, Y.; Jarosz, E.; Yu, Z.; Jensen, T.; Sullivan, P. P.; Liang, J.

    2017-12-01

    Langmuir circulation (LC) is believed to be one of the leading-order causes of turbulent mixing in the upper ocean. It is important for momentum and heat exchange across the mixed layer (ML) and directly impacts the dynamics and thermodynamics in the upper ocean and lower atmosphere, including the vertical distributions of chemical, biological, optical, and acoustic properties. Based on the theory of Craik and Leibovich (1976), large eddy simulation (LES) models have been developed to simulate LC in the upper ocean, yielding new insights that could not be obtained from field observations and turbulence closure models. Due to its high computational cost, LES is usually limited to small domain sizes and cannot resolve large-scale flows. Furthermore, most LES models used in LC simulations employ periodic boundary conditions in the horizontal directions, which assumes the physical properties (i.e. temperature and salinity) and expected flow patterns in the area of interest are of a periodically repeating nature, so that the limited small LES domain is representative of the larger area. Using periodic boundary conditions can significantly reduce computational effort, and they are a good assumption for isotropic shear turbulence. However, LC is anisotropic (McWilliams et al. 1997) and was observed to be modulated by crosswind tidal currents (Kukulka et al. 2011). Using symmetrical domains, idealized LES studies also indicate that LC could interact with oceanic fronts (Hamlington et al. 2014) and standing internal waves (Chini and Leibovich, 2005). The present study expands our previous LES modeling investigations of Langmuir turbulence to real ocean conditions with large-scale environmental motion featuring fresh-water inflow into the study region. Large-scale gradient forcing is introduced into the NCAR LES model through scale-separation analysis. The model is applied to a field observation in the Gulf of Mexico in July 2016, when the measurement site was impacted by

  19. Planning under uncertainty solving large-scale stochastic linear programs

    Energy Technology Data Exchange (ETDEWEB)

    Infanger, G. [Stanford Univ., CA (United States). Dept. of Operations Research]|[Technische Univ., Vienna (Austria). Inst. fuer Energiewirtschaft

    1992-12-01

    For many practical problems, solutions obtained from deterministic models are unsatisfactory because they fail to hedge against certain contingencies that may occur in the future. Stochastic models address this shortcoming, but until recently seemed intractable due to their size. Recent advances both in solution algorithms and in computer technology now allow us to solve important and general classes of practical stochastic problems. We show how large-scale stochastic linear programs can be efficiently solved by combining classical decomposition and Monte Carlo (importance) sampling techniques. We discuss the methodology for solving two-stage stochastic linear programs with recourse, present numerical results for large problems with numerous stochastic parameters, show how to efficiently implement the methodology on a parallel multi-computer, and derive the theory for solving a general class of multi-stage problems with dependency of the stochastic parameters within a stage and between different stages.
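
    The sampling side of the methodology can be shown at toy scale: draw scenarios for the uncertain data and solve the resulting extensive-form LP. A newsvendor-style sketch with SciPy (the problem data are invented, and the authors combine sampling with decomposition rather than solving the extensive form directly):

```python
# Two-stage stochastic LP solved in extensive form over sampled
# scenarios -- a toy stand-in for the decomposition + Monte Carlo
# sampling approach described above. All problem data are invented.
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(3)
S = 200                                  # sampled demand scenarios
demand = rng.uniform(50.0, 150.0, S)     # random second-stage data
cost, price = 1.0, 1.5

# Variables: x (first-stage order), y_s (units sold in scenario s).
# Minimize cost*x - (price/S) * sum_s y_s  s.t.  y_s <= x,  0 <= y_s <= d_s.
c = np.concatenate(([cost], -(price / S) * np.ones(S)))
A_ub = np.hstack([-np.ones((S, 1)), np.eye(S)])   # y_s - x <= 0
b_ub = np.zeros(S)
bounds = [(0, None)] + [(0, d) for d in demand]

res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds)
print(f"order quantity: {res.x[0]:.1f}, expected profit: {-res.fun:.1f}")
```

    The extensive form grows linearly with the number of scenarios, which is exactly why the paper resorts to decomposition and importance sampling for problems of realistic size.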

  20. Risk Management Challenges in Large-scale Energy PSS

    DEFF Research Database (Denmark)

    Tegeltija, Miroslava; Oehmen, Josef; Kozin, Igor

    2017-01-01

    Probabilistic risk management approaches have a long tradition in engineering. A large variety of tools and techniques based on the probabilistic view of risk is available and applied in PSS practice. However, uncertainties that arise due to lack of knowledge and information are still missing adequate representations. We focus on a large-scale energy company in Denmark as one case of current product/service-system risk management best practices. We analyze their risk management process and investigate the tools they use in order to support decision making processes within the company. First, we identify the following challenges in the current risk management practices that are in line with the literature: (1) current methods are not appropriate for situations dominated by weak knowledge and information; (2) the quality of traditional models in such situations is open to debate; (3) the quality of input ...

  1. Large-Scale Structure Behind The Milky Way with ALFAZOA

    Science.gov (United States)

    Sanchez Barrantes, Monica; Henning, Patricia A.; Momjian, Emmanuel; McIntyre, Travis; Minchin, Robert F.

    2018-06-01

    The region of the sky behind the Milky Way (the Zone of Avoidance; ZOA) is not well studied due to high obscuration from gas and dust in our galaxy, as well as stellar confusion, which results in a low detection rate of galaxies in this region. Because of this, little is known about the distribution of galaxies in the ZOA, and other all-sky redshift surveys have incomplete maps (e.g. the 2MASS Redshift Survey in the NIR has a gap of 5-8 deg around the Galactic plane). There is still controversy about the dipole anisotropy calculated from the comparison between the CMB and galaxy redshift surveys, in part due to the incomplete sky mapping and redshift depth of these surveys. Fortunately, there is no ZOA at radio wavelengths, because such wavelengths pass unimpeded through dust and are not affected by stellar confusion. Therefore, we can detect and map the distribution of obscured galaxies through the 21 cm neutral hydrogen emission line, and trace the large-scale structure across the Galactic plane. The Arecibo L-Band Feed Array Zone of Avoidance (ALFAZOA) survey is a blind HI survey for galaxies behind the Milky Way that covers more than 1000 square degrees of the sky, conducted in two phases: shallow (completed) and deep (ongoing). We present results from the finished shallow phase, which mapped the region of galactic longitude l = 30-75 deg and latitude |b| < 10 deg and detected 418 galaxies out to about 12,000 km/s, including galaxy properties and the mapped large-scale structure. We do the same for new results from the ongoing deep phase, which covers 30 < l < 75 deg and |b| < 2 deg for the inner galaxy, and 175 < l < 207 deg with -2 < b < 1 deg for the outer galaxy.

  2. Self-* and Adaptive Mechanisms for Large Scale Distributed Systems

    Science.gov (United States)

    Fragopoulou, P.; Mastroianni, C.; Montero, R.; Andrjezak, A.; Kondo, D.

    Large-scale distributed computing systems and infrastructures, such as Grids, P2P systems and desktop Grid platforms, are decentralized, pervasive, and composed of a large number of autonomous entities. The complexity of these systems is such that human administration is nearly impossible and centralized or hierarchical control is highly inefficient. These systems need to run in highly dynamic environments, where content, network topologies and workloads are continuously changing. Moreover, they are characterized by a high degree of volatility of their components and the need to provide efficient service management and to handle large amounts of data efficiently. This paper describes some of the areas for which adaptation emerges as a key feature, namely, the management of computational Grids, the self-management of desktop Grid platforms, and the monitoring and healing of complex applications. It also elaborates on the use of bio-inspired algorithms to achieve self-management. Related future trends and challenges are described.

  3. Large-scale additive manufacturing with bioinspired cellulosic materials.

    Science.gov (United States)

    Sanandiya, Naresh D; Vijay, Yadunund; Dimopoulou, Marina; Dritsas, Stylianos; Fernandez, Javier G

    2018-06-05

    Cellulose is the most abundant and broadly distributed organic compound and industrial by-product on Earth. However, despite decades of extensive research, the bottom-up use of cellulose to fabricate 3D objects is still plagued by problems that restrict its practical application: derivatives with vast polluting effects, use in combination with plastics, lack of scalability, and high production cost. Here we demonstrate the general use of cellulose to manufacture large 3D objects. Our approach diverges from the common association of cellulose with green plants and is inspired by the wall of the fungus-like oomycetes, which we reproduce by introducing small amounts of chitin between cellulose fibers. The resulting fungal-like adhesive materials (FLAM) are strong, lightweight and inexpensive, and can be molded or processed using woodworking techniques. We believe this first large-scale additive manufacture with ubiquitous biological polymers will be the catalyst for the transition to environmentally benign and circular manufacturing models.

  4. Structural Quality of Service in Large-Scale Networks

    DEFF Research Database (Denmark)

    Pedersen, Jens Myrup

    Digitalization has created the base for co-existence and convergence in communications, leading to an increasing use of multi-service networks. This is for example seen in the Fiber To The Home implementations, where a single fiber is used for virtually all means of communication, including TV, telephony and data. To meet the requirements of the different applications, and to handle the increased vulnerability to failures, the ability to design robust networks providing good Quality of Service is crucial. However, most planning of large-scale networks today is ad-hoc based, leading to highly complex networks lacking predictability and global structural properties. The thesis applies the concept of Structural Quality of Service to formulate desirable global properties, and it shows how regular graph structures can be used to obtain such properties.

  5. Hydrogen-combustion analyses of large-scale tests

    International Nuclear Information System (INIS)

    Gido, R.G.; Koestel, A.

    1986-01-01

    This report uses results of the large-scale tests with turbulence performed by the Electric Power Research Institute at the Nevada Test Site to evaluate hydrogen burn-analysis procedures based on lumped-parameter codes like COMPARE-H2 and associated burn-parameter models. The test results: (1) confirmed, in a general way, the procedures for application to pulsed burning, (2) increased significantly our understanding of the burn phenomenon by demonstrating that continuous burning can occur, and (3) indicated that steam can terminate continuous burning. Future actions recommended include: (1) modification of the code to perform continuous-burn analyses, which is demonstrated, (2) analyses to determine the type of burning (pulsed or continuous) that will exist in nuclear containments and the stable location if the burning is continuous, and (3) changes to the models for estimating burn parameters

  7. Deep Feature Learning and Cascaded Classifier for Large Scale Data

    DEFF Research Database (Denmark)

    Prasoon, Adhish

    This thesis focuses on voxel/pixel classification based approaches for image segmentation. The main application is segmentation of articular cartilage in knee MRIs. The first major contribution of the thesis deals with large scale machine learning problems. Many medical imaging problems need huge amounts of training data to cover sufficient biological variability. Learning methods that scale badly with the number of training data points cannot be used in such scenarios. This may restrict the usage of many powerful classifiers having excellent generalization ability. We propose a cascaded classifier which ... [learns features] from data rather than having a predefined feature set. We explore a deep learning approach, convolutional neural networks (CNNs), for segmenting three-dimensional medical images, and propose a novel system integrating three 2D CNNs, which have a one-to-one association with the xy, yz and zx planes of the 3D ...

  8. Policy Driven Development: Flexible Policy Insertion for Large Scale Systems.

    Science.gov (United States)

    Demchak, Barry; Krüger, Ingolf

    2012-07-01

    The success of a software system depends critically on how well it reflects and adapts to stakeholder requirements. Traditional development methods often frustrate stakeholders by creating long latencies between requirement articulation and system deployment, especially in large scale systems. One source of latency is the maintenance of policy decisions encoded directly into system workflows at development time, including those involving access control and feature set selection. We created the Policy Driven Development (PDD) methodology to address these development latencies by enabling the flexible injection of decision points into existing workflows at runtime, thus enabling policy composition that integrates requirements furnished by multiple, oblivious stakeholder groups. Using PDD, we designed and implemented a production cyberinfrastructure that demonstrates policy and workflow injection that quickly implements stakeholder requirements, including features not contemplated in the original system design. PDD provides a path to quickly and cost-effectively evolve such applications over a long lifetime.
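
    The core PDD idea of injecting decision points into existing workflows at runtime can be sketched in a few lines. The snippet below is a hypothetical, minimal illustration (the names add_policy and decide are not PDD's actual API): two oblivious stakeholder groups register policies independently, and the workflow consults the composed policy chain at its decision point.

    ```python
    # Minimal sketch of runtime policy injection; names are illustrative only.
    from typing import Callable, Dict, List

    policies: Dict[str, List[Callable]] = {}   # decision point -> policy chain

    def add_policy(point: str, policy: Callable) -> None:
        policies.setdefault(point, []).append(policy)   # injected at runtime

    def decide(point: str, request: dict) -> bool:
        # Compose all injected policies; the request passes only if all agree
        return all(p(request) for p in policies.get(point, []))

    # Two stakeholder groups contribute access policies independently:
    add_policy("access", lambda r: r["role"] in {"admin", "analyst"})
    add_policy("access", lambda r: not r.get("embargoed", False))

    print(decide("access", {"role": "analyst"}))   # True
    print(decide("access", {"role": "guest"}))     # False
    ```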

  9. Multidimensional quantum entanglement with large-scale integrated optics.

    Science.gov (United States)

    Wang, Jianwei; Paesani, Stefano; Ding, Yunhong; Santagati, Raffaele; Skrzypczyk, Paul; Salavrakos, Alexia; Tura, Jordi; Augusiak, Remigiusz; Mančinska, Laura; Bacco, Davide; Bonneau, Damien; Silverstone, Joshua W; Gong, Qihuang; Acín, Antonio; Rottwitt, Karsten; Oxenløwe, Leif K; O'Brien, Jeremy L; Laing, Anthony; Thompson, Mark G

    2018-04-20

    The ability to control multidimensional quantum systems is central to the development of advanced quantum technologies. We demonstrate a multidimensional integrated quantum photonic platform able to generate, control, and analyze high-dimensional entanglement. A programmable bipartite entangled system is realized with dimensions up to 15 × 15 on a large-scale silicon photonics quantum circuit. The device integrates more than 550 photonic components on a single chip, including 16 identical photon-pair sources. We verify the high precision, generality, and controllability of our multidimensional technology, and further exploit these abilities to demonstrate previously unexplored quantum applications, such as quantum randomness expansion and self-testing on multidimensional states. Our work provides an experimental platform for the development of multidimensional quantum technologies.

  10. Bonus algorithm for large scale stochastic nonlinear programming problems

    CERN Document Server

    Diwekar, Urmila

    2015-01-01

    This book presents the details of the BONUS algorithm and its real world applications in areas like sensor placement in large scale drinking water networks, sensor placement in advanced power systems, water management in power systems, and capacity expansion of energy systems. A generalized method for stochastic nonlinear programming based on a sampling based approach for uncertainty analysis and statistical reweighting to obtain probability information is demonstrated in this book. Stochastic optimization problems are difficult to solve since they involve dealing with optimization and uncertainty loops. There are two fundamental approaches to solving such problems: the first uses decomposition techniques, and the second identifies problem-specific structures and transforms the problem into a deterministic nonlinear programming problem. These techniques have significant limitations on either the objective function type or the underlying distributions for the uncertain variables. Moreover, these ...

  11. Electric vehicles and large-scale integration of wind power

    DEFF Research Database (Denmark)

    Liu, Wen; Hu, Weihao; Lund, Henrik

    2013-01-01

    ... with this imbalance and to reduce its high dependence on oil production. For this reason, it is interesting to analyse the extent to which transport electrification can further the renewable energy integration. This paper quantifies this issue in Inner Mongolia, where the share of wind power in the electricity supply was 6.5% in 2009 and which has the plan to develop large-scale wind power. The results show that electric vehicles (EVs) have the ability to balance the electricity demand and supply and to further the wind power integration. In the best case, the energy system with EV can increase wind power integration by 8%. The application of EVs benefits from saving both energy system cost and fuel cost. However, the negative consequences of decreasing energy system efficiency and increasing the CO2 emission should be noted when applying the hydrogen fuel cell vehicle (HFCV). The results also indicate ...

  12. Segmentation by Large Scale Hypothesis Testing - Segmentation as Outlier Detection

    DEFF Research Database (Denmark)

    Darkner, Sune; Dahl, Anders Lindbjerg; Larsen, Rasmus

    2010-01-01

    We propose a novel and efficient way of performing local image segmentation. For many applications a threshold of pixel intensities is sufficient, but determining the appropriate threshold value can be difficult. In cases with large global intensity variation the threshold value has to be adapted locally. We propose a method based on large scale hypothesis testing with a consistent method for selecting an appropriate threshold for the given data. By estimating the background distribution we characterize the segment of interest as a set of outliers with a certain probability based on the estimated ... a microscope, and we show how the method can handle transparent particles with significant glare points. The method generalizes to other problems. This is illustrated by applying the method to camera calibration images and MRI of the midsagittal plane for gray and white matter separation and segmentation ...
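
    The segmentation-as-outlier-detection recipe in this record translates directly into a short sketch: estimate the background distribution robustly, then flag pixels whose one-sided p-values survive a correction for the large number of simultaneous tests. The example below uses a synthetic image; the robust estimator and the Bonferroni correction are plausible choices for illustration, not necessarily the authors' exact thresholding rule.

    ```python
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(5)
    # Toy image: Gaussian background plus a small bright segment of interest
    img = rng.normal(100.0, 10.0, (64, 64))
    img[20:28, 30:38] += 60.0

    # Robust background estimate (median and MAD), then large-scale testing:
    mu = np.median(img)
    sigma = stats.median_abs_deviation(img, axis=None, scale="normal")
    p = stats.norm.sf(img, loc=mu, scale=sigma)   # one-sided p-value per pixel
    alpha = 0.05 / img.size                       # Bonferroni-corrected level
    segment = p < alpha                           # outliers = segment of interest
    print("segmented pixels:", int(segment.sum()), "of", img.size)
    ```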

  13. Exploiting large-scale correlations to detect continuous gravitational waves.

    Science.gov (United States)

    Pletsch, Holger J; Allen, Bruce

    2009-10-30

    Fully coherent searches (over realistic ranges of parameter space and year-long observation times) for unknown sources of continuous gravitational waves are computationally prohibitive. Less expensive hierarchical searches divide the data into shorter segments which are analyzed coherently, then detection statistics from different segments are combined incoherently. The novel method presented here solves the long-standing problem of how best to do the incoherent combination. The optimal solution exploits large-scale parameter-space correlations in the coherent detection statistic. Application to simulated data shows dramatic sensitivity improvements compared with previously available (ad hoc) methods, increasing the spatial volume probed by more than 2 orders of magnitude at lower computational cost.
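
    A minimal numerical illustration of the coherent-segment/incoherent-combination idea is given below as a toy stack-slide search (not the correlation-exploiting method of the record): each segment is analyzed coherently with a periodogram, and the per-segment powers are then summed incoherently to recover a weak sinusoid. All signal and noise parameters are invented.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)
    fs, n_seg = 64.0, 10            # sample rate (Hz), number of segments
    n = 2048                        # samples per segment (32 s each)
    t = np.arange(n_seg * n) / fs
    f0 = 5.37                       # (unknown) signal frequency, toy value
    x = 0.1 * np.sin(2 * np.pi * f0 * t) + rng.standard_normal(t.size)

    # Coherent step: periodogram of each segment.
    # Incoherent step: sum the per-segment powers bin by bin.
    power = np.abs(np.fft.rfft(x.reshape(n_seg, n), axis=1)) ** 2
    stat = power.sum(axis=0)
    freqs = np.fft.rfftfreq(n, 1.0 / fs)
    peak = stat[1:].argmax() + 1    # skip the DC bin
    print(f"recovered frequency ~ {freqs[peak]:.3f} Hz (true {f0} Hz)")
    ```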

  14. Large scale computing in theoretical physics: Example QCD

    International Nuclear Information System (INIS)

    Schilling, K.

    1986-01-01

    The limitations of the classical mathematical analysis of Newton and Leibniz appear to be more and more overcome by the power of modern computers. Large scale computing techniques - which closely resemble the methods used in simulations within statistical mechanics - allow one to treat nonlinear systems with many degrees of freedom, such as field theories in nonperturbative situations, where analytical methods fail. The computation of the hadron spectrum within the framework of lattice QCD sets a demanding goal for the application of supercomputers in basic science. It requires both big computer capacities and clever algorithms to fight all the numerical evils that one encounters in the Euclidean world. The talk will attempt to describe both the computer aspects and the present state of the art of spectrum calculations within lattice QCD. (orig.)

  15. Optimization of large-scale fabrication of dielectric elastomer transducers

    DEFF Research Database (Denmark)

    Hassouneh, Suzan Sager

    Dielectric elastomers (DEs) have gained substantial ground in many different applications, such as wave energy harvesting, valves and loudspeakers. For DE technology to be commercially viable, it is necessary that any large-scale production operation is nondestructive, efficient and cheap. Danfoss ... -strength laminates to perform as monolithic elements. For the front-to-back and front-to-front configurations, conductive elastomers were utilised. One approach involved adding the cheap and conductive filler, exfoliated graphite (EG), to a PDMS matrix to increase dielectric permittivity. The results showed that even ... as conductive adhesives were rejected. Dielectric properties below the percolation threshold were subsequently investigated, in order to conclude the study. In order to avoid destroying the network structure, carbon nanotubes (CNTs) were used as fillers during the preparation of the conductive elastomers ...

  16. Engineering large-scale agent-based systems with consensus

    Science.gov (United States)

    Bokma, A.; Slade, A.; Kerridge, S.; Johnson, K.

    1994-01-01

    The paper presents the consensus method for the development of large-scale agent-based systems. Systems can be developed as networks of knowledge based agents (KBA) which engage in a collaborative problem solving effort. The method provides a comprehensive and integrated approach to the development of this type of system. This includes a systematic analysis of user requirements as well as a structured approach to generating a system design which exhibits the desired functionality. There is a direct correspondence between system requirements and design components. The benefit of this approach is that requirements are traceable into design components and code, thus facilitating verification. The use of the consensus method with two major test applications showed it to be successful and also provided valuable insight into problems typically associated with the development of large systems.

  17. Automatic management software for large-scale cluster system

    International Nuclear Information System (INIS)

    Weng Yunjian; Chinese Academy of Sciences, Beijing; Sun Gongxing

    2007-01-01

    At present, large-scale cluster systems are difficult to manage: administrators carry a heavy workload, and much time is spent on management and maintenance. With thousands of nodes housed in large machine rooms, the nodes easily fall into disorder, and administrators can easily confuse one machine with another. How can a large-scale cluster system be managed accurately and effectively? This article introduces ELFms for large-scale cluster systems and proposes how to realize automatic management of such systems. (authors)

  18. Small scale models equal large scale savings

    International Nuclear Information System (INIS)

    Lee, R.; Segroves, R.

    1994-01-01

    A physical scale model of a reactor is a tool which can be used to reduce the time spent by workers in the containment during an outage and thus to reduce the radiation dose and save money. The model can be used for worker orientation, and for planning maintenance, modifications, manpower deployment and outage activities. Examples of the use of models are presented. These were for the La Salle 2 and Dresden 1 and 2 BWRs. In each case cost-effectiveness and exposure reduction due to the use of a scale model is demonstrated. (UK)

  19. Large Scale Experiments on Spacecraft Fire Safety

    DEFF Research Database (Denmark)

    Urban, David L.; Ruff, Gary A.; Minster, Olivier

    2012-01-01

    Full scale fire testing complemented by computer modelling has provided significant knowhow about the risk, prevention and suppression of fire in terrestrial systems (cars, ships, planes, buildings, mines, and tunnels). In comparison, no such testing has been carried out for manned spacecraft due to the complexity, cost and risk associated with operating a long duration fire safety experiment of a relevant size in microgravity. Therefore, there is currently a gap in knowledge of fire behaviour in spacecraft. The entire body of low-gravity fire research has either been conducted in short duration ground-based microgravity facilities or has been limited to very small fuel samples. Still, the work conducted to date has shown that fire behaviour in low-gravity is very different from that in normal-gravity, with differences observed for flammability limits, ignition delay, flame spread behaviour, flame colour and flame structure ...

  20. JAERI femtosecond pulsed and tens-kilowatts average-powered free-electron lasers and their applications of large-scaled non-thermal manufacturing in nuclear energy industry

    International Nuclear Information System (INIS)

    Minehara, Eisuke J.

    2004-01-01

    applied stress during a much shorter term operation. Most people had thought it unavoidable that the Japanese economy would be damaged more or less, because most of the nuclear power plants near Tokyo were shut down during the last summer because of the CWSCC failures. As it was fortunately a cold summer last year owing to the global climate disorder, Japan and the Japanese domestic electric power companies had a narrow escape from a large-scale blackout in many prefectures near Tokyo and in the Tokyo metropolitan area.

  1. The viability of balancing wind generation with large scale energy storage

    International Nuclear Information System (INIS)

    Nyamdash, Batsaikhan; Denny, Eleanor; O'Malley, Mark

    2010-01-01

    This paper studies the impact of combining wind generation and dedicated large-scale energy storage on the conventional thermal plant mix and the CO2 emissions of a power system. Different strategies are proposed in order to explore the best operational strategy for the wind and storage system in terms of its effect on the net load. Furthermore, the economic viability of combining wind and large-scale storage is studied. The empirical application, using data for the Irish power system, shows that combined wind and storage reduces the participation of mid-merit plants and increases the participation of base-load plants. Moreover, storage negates some of the CO2 emissions reduction of the wind generation. It was also found that the wind and storage output can significantly reduce the variability of the net load under certain operational strategies, and that the optimal strategy depends on the installed wind capacity. However, in the absence of any supporting mechanism, none of the storage devices were economically viable when combined with wind generation on the Irish power system. - Research Highlights: → Energy storage would displace generation from peaking and mid-merit plants with generation from base-load plants. → Energy storage may negate the CO2 emissions reduction that is due to increased wind generation. → Energy storage reduces the variation of the net load. → Under certain market conditions, merchant-type energy storage is not viable.

  2. Large scale features and energetics of the hybrid subtropical low `Duck' over the Tasman Sea

    Science.gov (United States)

    Pezza, Alexandre Bernardes; Garde, Luke Andrew; Veiga, José Augusto Paixão; Simmonds, Ian

    2014-01-01

    New aspects of the genesis and partial tropical transition of a rare hybrid subtropical cyclone on the eastern Australian coast are presented. The `Duck' (March 2001) attracted renewed attention due to its underlying genesis mechanisms being remarkably similar to those of the first South Atlantic hurricane (March 2004). Here we put this cyclone in climate perspective, showing that, as a function of its thermal evolution, it belongs to a class within the lowest 1% frequency percentile in the Southern Hemisphere. A large scale analysis reveals a combined influence from an existing tropical cyclone and a persistent mid-latitude block. A Lagrangian tracer showed that the upper level air parcels arriving at the cyclone's center had been modified by the blocking. Lorenz energetics is used to identify connections with both tropical and extratropical processes, and to reveal how these create the large scale environment conducive to the development of the vortex. The results reveal that the blocking exerted the most important influence, with a strong peak in barotropic generation of kinetic energy over a large area traversed by the air parcels just before genesis. A secondary peak coincided with the first time the cyclone developed an upper level warm core, but with insufficient amplitude to allow a full tropical transition. The applications of this technique are numerous and promising, particularly the use of global climate models to infer changes in environmental parameters associated with severe storms.

  3. Implementation of highly parallel and large scale GW calculations within the OpenAtom software

    Science.gov (United States)

    Ismail-Beigi, Sohrab

    The need to describe electronic excitations with better accuracy than provided by the band structures produced by Density Functional Theory (DFT) has been a long-term enterprise for the computational condensed matter and materials theory communities. In some cases, appropriate theoretical frameworks have existed for some time but have been difficult to apply widely due to computational cost. For example, the GW approximation incorporates a great deal of important non-local and dynamical electronic interaction effects but has been too computationally expensive for routine use in large materials simulations. OpenAtom is an open source massively parallel ab initio density functional software package based on plane waves and pseudopotentials (http://charm.cs.uiuc.edu/OpenAtom/) that takes advantage of the Charm++ parallel framework. At present, it is developed via a three-way collaboration, funded by an NSF SI2-SSI grant (ACI-1339804), between Yale (Ismail-Beigi), IBM T. J. Watson (Glenn Martyna) and the University of Illinois at Urbana-Champaign (Laxmikant Kale). We will describe the project and our current approach towards implementing large scale GW calculations with OpenAtom. Potential applications of large scale parallel GW software for problems involving electronic excitations in semiconductor and/or metal oxide systems will also be pointed out.

  4. Directed partial correlation: inferring large-scale gene regulatory network through induced topology disruptions.

    Directory of Open Access Journals (Sweden)

    Yinyin Yuan

    Inferring regulatory relationships among many genes based on their temporal variation in transcript abundance has been a popular research topic. Due to the nature of microarray experiments, classical tools for time series analysis lose power since the number of variables far exceeds the number of samples. In this paper, we describe some of the existing multivariate inference techniques that are applicable to hundreds of variables and show the potential challenges for small-sample, large-scale data. We propose a directed partial correlation (DPC) method as an efficient and effective solution to regulatory network inference using these data. Specifically for genomic data, the proposed method is designed to deal with large-scale datasets. It combines the efficiency of partial correlation for setting up network topology by testing conditional independence, and the concept of Granger causality to assess topology change with induced interruptions. The idea is that when a transcription factor is induced artificially within a gene network, the disruption of the network by the induction signifies the gene's role in transcriptional regulation. The benchmarking results using GeneNetWeaver, the simulator for the DREAM challenges, provide strong evidence of the outstanding performance of the proposed DPC method. When applied to real biological data, the inferred starch metabolism network in Arabidopsis reveals many biologically meaningful network modules worthy of further investigation. These results collectively suggest DPC is a versatile tool for genomics research. The R package DPC is available for download (http://code.google.com/p/dpcnet/).
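
    Half of the DPC recipe, the partial-correlation step used to set up network topology, can be demonstrated with the standard precision-matrix identity rho_ij = -P_ij / sqrt(P_ii * P_jj). The sketch below applies it to synthetic expression data in which one gene is driven by two regulators; the data and the use of plain matrix inversion are illustrative assumptions, and the Granger-causality half of DPC is not shown.

    ```python
    import numpy as np

    rng = np.random.default_rng(3)
    # Toy expression data: 200 samples x 4 genes, g2 driven by g0 and g1
    n = 200
    g0, g1 = rng.standard_normal(n), rng.standard_normal(n)
    g2 = 0.7 * g0 + 0.7 * g1 + 0.3 * rng.standard_normal(n)
    g3 = rng.standard_normal(n)
    X = np.column_stack([g0, g1, g2, g3])

    # Partial correlations from the inverse covariance (precision) matrix:
    # rho_ij = -P_ij / sqrt(P_ii * P_jj); the true edges g0-g2 and g1-g2
    # stand out while the isolated gene g3 shows near-zero entries.
    P = np.linalg.inv(np.cov(X, rowvar=False))
    d = np.sqrt(np.diag(P))
    pcor = -P / np.outer(d, d)
    np.fill_diagonal(pcor, 1.0)
    print(np.round(pcor, 2))
    ```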

  5. Feasibility study of a large-scale tuned mass damper with eddy current damping mechanism

    Science.gov (United States)

    Wang, Zhihao; Chen, Zhengqing; Wang, Jianhui

    2012-09-01

    Tuned mass dampers (TMDs) have been widely used in recent years to mitigate structural vibration. However, the damping mechanisms employed in the TMDs are mostly based on viscous dampers, which have several well-known disadvantages, such as oil leakage and difficult adjustment of damping ratio for an operating TMD. Alternatively, eddy current damping (ECD) that does not require any contact with the main structure is a potential solution. This paper discusses the design, analysis, manufacture and testing of a large-scale horizontal TMD based on ECD. First, the theoretical model of ECD is formulated, then one large-scale horizontal TMD using ECD is constructed, and finally performance tests of the TMD are conducted. The test results show that the proposed TMD has a very low intrinsic damping ratio, while the damping ratio due to ECD is the dominant damping source, which can be as large as 15% in a proper configuration. In addition, the damping ratios estimated with the theoretical model are roughly consistent with those identified from the test results, and the source of this error is investigated. Moreover, it is demonstrated that the damping ratio in the proposed TMD can be easily adjusted by varying the air gap between permanent magnets and conductive plates. In view of practical applications, possible improvements and feasibility considerations for the proposed TMD are then discussed. It is confirmed that the proposed TMD with ECD is reliable and feasible for use in structural vibration control.
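
    The air-gap tunability reported here can be illustrated with the common first-order ECD model c ≈ σδB²A, where σ is the plate conductivity, δ its thickness, B the flux density at the plate and A the pole area; the damping ratio then follows as ζ = c/(2mω). All numbers below, including the assumed exponential decay of B with the gap, are invented for illustration and are not the paper's design values.

    ```python
    import numpy as np

    # Toy estimate of an eddy-current TMD damping ratio (all values assumed)
    m, k = 1000.0, 39478.0                 # TMD mass (kg), spring stiffness (N/m)
    omega = np.sqrt(k / m)                 # ~6.28 rad/s, i.e. a 1 Hz absorber
    sigma, delta, A = 3.5e7, 0.01, 0.04    # conductivity (S/m), thickness (m), area (m^2)

    for gap_mm in (5, 10, 20):
        B = 0.5 * np.exp(-gap_mm / 15.0)   # assumed field decay with air gap (T)
        c = sigma * delta * B**2 * A       # eddy-current damping coefficient (N*s/m)
        zeta = c / (2.0 * m * omega)       # damping ratio
        print(f"gap {gap_mm:2d} mm -> zeta ~ {zeta:.1%}")
    ```

    With these assumed numbers the damping ratio falls from roughly 14% at a 5 mm gap to about 2% at 20 mm, consistent with the record's point that the ratio is easily adjusted by varying the air gap.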

  6. Large scale structure from viscous dark matter

    CERN Document Server

    Blas, Diego; Garny, Mathias; Tetradis, Nikolaos; Wiedemann, Urs Achim

    2015-01-01

    Cosmological perturbations of sufficiently long wavelength admit a fluid dynamic description. We consider modes with wavevectors below a scale $k_m$ for which the dynamics is only mildly non-linear. The leading effect of modes above that scale can be accounted for by effective non-equilibrium viscosity and pressure terms. For mildly non-linear scales, these mainly arise from momentum transport within the ideal and cold but inhomogeneous fluid, while momentum transport due to more microscopic degrees of freedom is suppressed. As a consequence, concrete expressions with no free parameters, except the matching scale $k_m$, can be derived from matching evolution equations to standard cosmological perturbation theory. Two-loop calculations of the matter power spectrum in the viscous theory lead to excellent agreement with $N$-body simulations up to scales $k=0.2 \\, h/$Mpc. The convergence properties in the ultraviolet are better than for standard perturbation theory and the results are robust with respect to varia...

  7. Large-scale modelling of neuronal systems

    International Nuclear Information System (INIS)

    Castellani, G.; Verondini, E.; Giampieri, E.; Bersani, F.; Remondini, D.; Milanesi, L.; Zironi, I.

    2009-01-01

    The brain is, without any doubt, the most complex system of the human body. Its complexity is also due to the extremely high number of neurons, as well as the huge number of synapses connecting them. Each neuron is capable of performing complex tasks, like learning and memorizing a large class of patterns. The simulation of large neuronal systems is challenging for both technological and computational reasons, and can open new perspectives for the comprehension of brain functioning. A well-known and widely accepted model of bidirectional synaptic plasticity, the BCM model, is stated by a differential equation approach based on bistability and selectivity properties. We have modified the BCM model, extending it from a single-neuron to a whole-network model. This new model is capable of generating interesting network topologies starting from a small number of local parameters describing the interaction between incoming and outgoing links from each neuron. We have characterized this model in terms of complex network theory, showing how this learning rule can support network generation.

  8. Large scale analysis of signal reachability.

    Science.gov (United States)

    Todor, Andrei; Gabr, Haitham; Dobra, Alin; Kahveci, Tamer

    2014-06-15

    Major disorders, such as leukemia, have been shown to alter the transcription of genes. Understanding how gene regulation is affected by such aberrations is of utmost importance. One promising strategy toward this objective is to compute whether signals can reach the transcription factors through the transcription regulatory network (TRN). Due to the uncertainty of the regulatory interactions, this is a #P-complete problem, and thus solving it for very large TRNs remains a challenge. We develop a novel and scalable method to compute the probability that a signal originating at any given set of source genes can arrive at any given set of target genes (i.e., transcription factors) when the topology of the underlying signaling network is uncertain. Our method tackles this problem for large networks while providing a provably accurate result. Our method follows a divide-and-conquer strategy. We break down the given network into a sequence of non-overlapping subnetworks such that reachability can be computed autonomously and sequentially on each subnetwork. We represent each interaction using a small polynomial. The product of these polynomials expresses the different scenarios in which a signal can or cannot reach the target genes from the source genes. We introduce polynomial collapsing operators for each subnetwork. These operators reduce the size of the resulting polynomial and thus the computational complexity dramatically. We show that our method scales to entire human regulatory networks in only seconds, while the existing methods fail beyond a few tens of genes and interactions. We demonstrate that our method can successfully characterize key reachability characteristics of the entire transcription regulatory networks of patients affected by eight different subtypes of leukemia, as well as those from healthy control samples. All the datasets and code used in this article are available at bioinformatics.cise.ufl.edu/PReach/scalable.htm.
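
    For intuition about why reachability under uncertain interactions is expensive, the sketch below computes the exact reach probability of a five-edge toy network by enumerating all 2^|E| edge realizations, which is exactly the exponential blow-up that the record's divide-and-conquer polynomial method avoids. The edge probabilities are invented.

    ```python
    from itertools import product

    # Toy uncertain TRN: edge -> probability of being present (hypothetical)
    edges = {("sig", "g1"): 0.9, ("g1", "g2"): 0.6, ("sig", "g3"): 0.5,
             ("g3", "g2"): 0.7, ("g2", "tf"): 0.8}

    def reaches(present, src="sig", dst="tf"):
        # BFS over the edges realized as present in this scenario
        frontier, seen = {src}, {src}
        while frontier:
            nxt = {v for (u, v) in present if u in frontier and v not in seen}
            seen |= nxt
            frontier = nxt
        return dst in seen

    # Exact reach probability by brute-force enumeration: 2^|E| scenarios
    items = list(edges.items())
    prob = 0.0
    for mask in product([0, 1], repeat=len(items)):
        p, present = 1.0, set()
        for bit, (e, pe) in zip(mask, items):
            p *= pe if bit else (1.0 - pe)
            if bit:
                present.add(e)
        prob += p * reaches(present)
    print(f"P(signal reaches TF) = {prob:.4f}")
    ```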

  9. Responses in large-scale structure

    Energy Technology Data Exchange (ETDEWEB)

    Barreira, Alexandre; Schmidt, Fabian, E-mail: barreira@MPA-Garching.MPG.DE, E-mail: fabians@MPA-Garching.MPG.DE [Max-Planck-Institut für Astrophysik, Karl-Schwarzschild-Str. 1, 85741 Garching (Germany)

    2017-06-01

    We introduce a rigorous definition of general power-spectrum responses as resummed vertices with two hard and n soft momenta in cosmological perturbation theory. These responses measure the impact of long-wavelength perturbations on the local small-scale power spectrum. The kinematic structure of the responses (i.e., their angular dependence) can be decomposed unambiguously through a "bias" expansion of the local power spectrum, with a fixed number of physical response coefficients, which are only a function of the hard wavenumber k. Further, the responses up to n-th order completely describe the (n+2)-point function in the squeezed limit, i.e. with two hard and n soft modes, which one can use to derive the response coefficients. This generalizes previous results, which relate the angle-averaged squeezed limit to isotropic response coefficients. We derive the complete expression of first- and second-order responses at leading order in perturbation theory, and present extrapolations to nonlinear scales based on simulation measurements of the isotropic response coefficients. As an application, we use these results to predict the non-Gaussian part of the angle-averaged matter power spectrum covariance Cov^NG_{ℓ=0}(k_1, k_2), in the limit where one of the modes, say k_2, is much smaller than the other. Without any free parameters, our model results are in very good agreement with simulations for k_2 ≲ 0.06 h Mpc^{-1}, and for any k_1 ≳ 2 k_2. The well-defined kinematic structure of the power spectrum response also permits a quick evaluation of the angular dependence of the covariance matrix. While we focus on the matter density field, the formalism presented here can be generalized to generic tracers such as galaxies.

  10. Autonomous Sensors for Large Scale Data Collection

    Science.gov (United States)

    Noto, J.; Kerr, R.; Riccobono, J.; Kapali, S.; Migliozzi, M. A.; Goenka, C.

    2017-12-01

    Presented here is a novel implementation of a "Doppler imager" which remotely measures winds and temperatures of the neutral background atmosphere at ionospheric altitudes of 87-300 km and possibly above, incorporating recent optical manufacturing developments, modern network awareness, and machine learning techniques for intelligent self-monitoring and data classification. This system achieves cost savings in manufacturing, deployment and lifetime operation. Deployed in both ground- and space-based modalities, this cost-disruptive technology will allow computer models of ionospheric variability and other space weather models to operate with higher precision. Other sensors can be folded into the data collection and analysis architecture, easily creating autonomous virtual observatories. A prototype version of this sensor has recently been deployed in Trivandrum, India for the Indian Government. This Doppler imager is capable of operation even within the restricted CubeSat environment. The CubeSat bus offers a very challenging environment, even for small instruments: the tight SWaP (size, weight, and power) budget and the challenging thermal environment demand the development of a new generation of instruments, and the Doppler imager presented is well suited to this environment. Concurrent with this CubeSat development is the development and construction of ground-based arrays of inexpensive sensors using the proposed technology. This instrument could be flown inexpensively on one or more CubeSats to provide valuable data to space weather forecasters and ionospheric scientists. Arrays of magnetometers have been deployed for the last 20 years [Alabi, 2005]. Other examples of ground-based arrays include an array of white-light all-sky imagers (THEMIS) deployed across Canada [Donovan et al., 2006], ocean sensors on buoys [McPhaden et al., 2010], and arrays of seismic sensors [Schweitzer et al., 2002]. A comparable array of Doppler imagers can be constructed and deployed on the

  12. Large Scale Experiments on Spacecraft Fire Safety

    Science.gov (United States)

    Urban, David; Ruff, Gary A.; Minster, Olivier; Fernandez-Pello, A. Carlos; Tien, James S.; Torero, Jose L.; Legros, Guillaume; Eigenbrod, Christian; Smirnov, Nickolay; Fujita, Osamu; et al.

    2012-01-01

    Full scale fire testing complemented by computer modelling has provided significant knowhow about the risk, prevention and suppression of fire in terrestrial systems (cars, ships, planes, buildings, mines, and tunnels). In comparison, no such testing has been carried out for manned spacecraft due to the complexity, cost and risk associated with operating a long duration fire safety experiment of a relevant size in microgravity. Therefore, there is currently a gap in knowledge of fire behaviour in spacecraft. The entire body of low-gravity fire research has either been conducted in short duration ground-based microgravity facilities or has been limited to very small fuel samples. Still, the work conducted to date has shown that fire behaviour in low-gravity is very different from that in normal gravity, with differences observed for flammability limits, ignition delay, flame spread behaviour, flame colour and flame structure. As a result, the prediction of the behaviour of fires in reduced gravity is at present not validated. To address this gap in knowledge, a collaborative international project, Spacecraft Fire Safety, has been established with its cornerstone being the development of an experiment (Fire Safety 1) to be conducted on an ISS resupply vehicle, such as the Automated Transfer Vehicle (ATV) or Orbital Cygnus after it leaves the ISS and before it enters the atmosphere. A computer modelling effort will complement the experimental effort. Although the experiment will need to meet rigorous safety requirements to ensure the carrier vehicle does not sustain damage, the absence of a crew removes the need for strict containment of combustion products. This will facilitate the possibility of examining fire behaviour on a scale that is relevant to spacecraft fire safety and will provide unique data for fire model validation. This unprecedented opportunity will expand the understanding of the fundamentals of fire behaviour in spacecraft. The experiment is being

  13. Oligopolistic competition in wholesale electricity markets: Large-scale simulation and policy analysis using complementarity models

    Science.gov (United States)

    Helman, E. Udi

    This dissertation conducts research into the large-scale simulation of oligopolistic competition in wholesale electricity markets. The dissertation has two parts. Part I is an examination of the structure and properties of several spatial, or network, equilibrium models of oligopolistic electricity markets formulated as mixed linear complementarity problems (LCP). Part II is a large-scale application of such models to the electricity system that encompasses most of the United States east of the Rocky Mountains, the Eastern Interconnection. Part I consists of Chapters 1 to 6. The models developed in this part continue research into mixed LCP models of oligopolistic electricity markets initiated by Hobbs [67] and subsequently developed by Metzler [87] and Metzler, Hobbs and Pang [88]. Hobbs' central contribution is a network market model with Cournot competition in generation and a price-taking spatial arbitrage firm that eliminates spatial price discrimination by the Cournot firms. In one variant, the solution to this model is shown to be equivalent to the "no arbitrage" condition in a "pool" market, in which a Regional Transmission Operator optimizes spot sales such that the congestion price between two locations is exactly equivalent to the difference in the energy prices at those locations (commonly known as locational marginal pricing). Extensions to this model are presented in Chapters 5 and 6. One of these is a market model with a profit-maximizing arbitrage firm. This model is structured as a mathematical program with equilibrium constraints (MPEC), but due to the linearity of its constraints, can be solved as a mixed LCP. Part II consists of Chapters 7 to 12. The core of these chapters is a large-scale simulation of the U.S. Eastern Interconnection applying one of the Cournot competition with arbitrage models. This is the first oligopolistic equilibrium market model to encompass the full Eastern Interconnection with a realistic network representation (using
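
    The Cournot building block underlying these network LCP models can be illustrated with the simplest possible case: two generators at a single node facing linear inverse demand, solved by best-response iteration. The numbers are invented, and real models of this kind are formulated as complementarity problems over a transmission network rather than iterated like this.

    ```python
    # Toy Cournot equilibrium by best-response iteration (one node, two firms).
    # Inverse demand p(Q) = a - b*Q; firm i maximizes (p(Q) - c_i) * q_i,
    # giving the best response q_i = max(0, (a - c_i - b*q_j) / (2b)).
    a, b = 100.0, 1.0          # demand intercept and slope (illustrative)
    c = [10.0, 20.0]           # marginal costs of the two generators
    q = [0.0, 0.0]
    for _ in range(50):        # iterate best responses until they settle
        q[0] = max(0.0, (a - c[0] - b * q[1]) / (2.0 * b))
        q[1] = max(0.0, (a - c[1] - b * q[0]) / (2.0 * b))
    price = a - b * sum(q)
    print(f"q = ({q[0]:.1f}, {q[1]:.1f}), price = {price:.1f}")
    ```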

  14. Prospects for large scale electricity storage in Denmark

    DEFF Research Database (Denmark)

    Krog Ekman, Claus; Jensen, Søren Højgaard

    2010-01-01

    In a future power systems with additional wind power capacity there will be an increased need for large scale power management as well as reliable balancing and reserve capabilities. Different technologies for large scale electricity storage provide solutions to the different challenges arising w...

  15. Large-scale matrix-handling subroutines 'ATLAS'

    International Nuclear Information System (INIS)

    Tsunematsu, Toshihide; Takeda, Tatsuoki; Fujita, Keiichi; Matsuura, Toshihiko; Tahara, Nobuo

    1978-03-01

    Subroutine package "ATLAS" has been developed for handling large-scale matrices. The package is composed of four kinds of subroutines, i.e., basic arithmetic routines, routines for solving linear simultaneous equations, routines for solving general eigenvalue problems, and utility routines. The subroutines are useful in large scale plasma-fluid simulations. (auth.)

  16. Large-scale Agricultural Land Acquisitions in West Africa | IDRC ...

    International Development Research Centre (IDRC) Digital Library (Canada)

    This project will examine large-scale agricultural land acquisitions in nine West African countries -Burkina Faso, Guinea-Bissau, Guinea, Benin, Mali, Togo, Senegal, Niger, and Côte d'Ivoire. ... They will use the results to increase public awareness and knowledge about the consequences of large-scale land acquisitions.

  17. Large-scale synthesis of YSZ nanopowder by Pechini method

    Indian Academy of Sciences (India)

    Administrator

    structure and chemical purity of 99.1% by inductively coupled plasma optical emission spectroscopy on a large scale. Keywords: sol-gel; yttria-stabilized zirconia; large scale; nanopowder; Pechini method.

  18. An innovative large scale integration of silicon nanowire-based field effect transistors

    Science.gov (United States)

    Legallais, M.; Nguyen, T. T. T.; Mouis, M.; Salem, B.; Robin, E.; Chenevier, P.; Ternon, C.

    2018-05-01

    Since the early 2000s, silicon nanowire field effect transistors have been emerging as ultrasensitive biosensors offering label-free, portable and rapid detection. Nevertheless, their large scale production remains an ongoing challenge due to time consuming, complex and costly technology. In order to bypass these issues, we report here on the first integration of silicon nanowire networks, called nanonets, into long channel field effect transistors using a standard microelectronic process. Special attention is paid to the silicidation of the contacts, which involves a large number of SiNWs. The electrical characteristics of these FETs constituted by randomly oriented silicon nanowires are also studied. Compatible integration on the back-end of CMOS readout and promising electrical performance open new opportunities for sensing applications.

  19. Large-scale transmission-type multifunctional anisotropic coding metasurfaces in millimeter-wave frequencies

    Science.gov (United States)

    Cui, Tie Jun; Wu, Rui Yuan; Wu, Wei; Shi, Chuan Bo; Li, Yun Bo

    2017-10-01

    We propose fast and accurate designs for large-scale and low-profile transmission-type anisotropic coding metasurfaces with multiple functions in the millimeter-wave frequencies, based on the antenna-array method. The numerical simulation of an anisotropic coding metasurface with a size of 30λ × 30λ by the proposed method takes only 20 min, a task that commercial software cannot handle due to the huge memory usage on personal computers. To inspect the performance of coding metasurfaces in the millimeter-wave band, the working frequency is chosen as 60 GHz. Based on convolution operations and holographic theory, the proposed multifunctional anisotropic coding metasurface exhibits different effects under y-polarized and x-polarized incidences. This study extends the frequency range of coding metasurfaces, filling the gap between the microwave and terahertz bands, and implying promising applications in millimeter-wave communication and imaging.

  20. Algorithm 896: LSA: Algorithms for Large-Scale Optimization

    Czech Academy of Sciences Publication Activity Database

    Lukšan, Ladislav; Matonoha, Ctirad; Vlček, Jan

    2009-01-01

    Roč. 36, č. 3 (2009), 16-1-16-29 ISSN 0098-3500 R&D Projects: GA AV ČR IAA1030405; GA ČR GP201/06/P397 Institutional research plan: CEZ:AV0Z10300504 Keywords: algorithms * design * large-scale optimization * large-scale nonsmooth optimization * large-scale nonlinear least squares * large-scale nonlinear minimax * large-scale systems of nonlinear equations * sparse problems * partially separable problems * limited-memory methods * discrete Newton methods * quasi-Newton methods * primal interior-point methods Subject RIV: BB - Applied Statistics, Operational Research Impact factor: 1.904, year: 2009

  1. Large-scale parallel genome assembler over cloud computing environment.

    Science.gov (United States)

    Das, Arghya Kusum; Koppa, Praveen Kumar; Goswami, Sayan; Platania, Richard; Park, Seung-Jong

    2017-06-01

    The size of high throughput DNA sequencing data has already reached the terabyte scale. To manage this huge volume of data, many downstream sequencing applications started using locality-based computing over different cloud infrastructures to take advantage of elastic (pay as you go) resources at a lower cost. However, the locality-based programming model (e.g. MapReduce) is relatively new. Consequently, developing scalable data-intensive bioinformatics applications using this model and understanding the hardware environment that these applications require for good performance, both require further research. In this paper, we present a de Bruijn graph oriented Parallel Giraph-based Genome Assembler (GiGA), as well as the hardware platform required for its optimal performance. GiGA uses the power of Hadoop (MapReduce) and Giraph (large-scale graph analysis) to achieve high scalability over hundreds of compute nodes by collocating the computation and data. GiGA achieves significantly higher scalability with competitive assembly quality compared to contemporary parallel assemblers (e.g. ABySS and Contrail) over traditional HPC cluster. Moreover, we show that the performance of GiGA is significantly improved by using an SSD-based private cloud infrastructure over traditional HPC cluster. We observe that the performance of GiGA on 256 cores of this SSD-based cloud infrastructure closely matches that of 512 cores of traditional HPC cluster.
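
    The de Bruijn graph at the heart of assemblers like GiGA is easy to sketch in isolation: nodes are (k-1)-mers and each k-mer in a read contributes an edge. The toy below is a single-process illustration with made-up reads; GiGA's contribution is distributing precisely this kind of graph and its traversal across Hadoop/Giraph compute nodes.

    ```python
    from collections import defaultdict

    # Toy de Bruijn graph: nodes are (k-1)-mers, edges come from k-mers.
    def de_bruijn(reads, k=4):
        graph = defaultdict(list)
        for read in reads:
            for i in range(len(read) - k + 1):
                kmer = read[i:i + k]
                graph[kmer[:-1]].append(kmer[1:])   # edge prefix -> suffix
        return graph

    reads = ["ATGGCGT", "GGCGTGC", "GTGCAAT"]       # invented reads
    for node, succs in de_bruijn(reads).items():
        print(node, "->", ", ".join(succs))
    ```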

  2. Large scale solar district heating. Evaluation, modelling and designing

    Energy Technology Data Exchange (ETDEWEB)

    Heller, A.

    2000-07-01

    The main objective of the research was to evaluate large-scale solar heating connected to district heating (CSDHP), to build up a simulation tool and to demonstrate the application of the tool for design studies and on a local energy planning case. The evaluation of the central solar heating technology is based on measurements on the case plant in Marstal, Denmark, and on published and unpublished data for other, mainly Danish, CSDHP plants. Evaluations of the thermal, economic and environmental performance are reported, based on the experiences from the last decade. The measurements from the Marstal case are analysed, experiences extracted and minor improvements to the plant design proposed. For the detailed designing and energy planning of CSDHPs, a computer simulation model is developed and validated on the measurements from the Marstal case. The final model is then generalised to a 'generic' model for CSDHPs in general. The meteorological reference data, Danish Reference Year, is applied to find the mean performance for the plant designs. To find the expectable variation in the thermal performance of such plants, a method is proposed where data from a year with poor solar irradiation and a year with strong solar irradiation are applied. Equipped with a simulation tool, design studies are carried out, ranging from parameter analyses, over energy planning for a new settlement, to a proposal for combining plane solar collectors with high-performance solar collectors, exemplified by a trough solar collector. The methodology of utilising computer simulation proved to be a cheap and relevant tool in the design of future solar heating plants. The thesis also exposed the need to develop computer models for the more advanced solar collector designs and especially for the control operation of CSHPs. In the final chapter the CSHP technology is put into perspective with respect to other possible technologies to find the relevance of the application

  3. Detection of large-scale concentric gravity waves from a Chinese airglow imager network

    Science.gov (United States)

    Lai, Chang; Yue, Jia; Xu, Jiyao; Yuan, Wei; Li, Qinzeng; Liu, Xiao

    2018-06-01

    Concentric gravity waves (CGWs) contain a broad spectrum of horizontal wavelengths and periods due to their instantaneous localized sources (e.g., deep convection, volcanic eruptions, or earthquakes). However, it is difficult to observe large-scale gravity waves of >100 km wavelength from the ground, owing to the limited field of view of a single camera and local bad weather. Previously, complete large-scale CGW imagery could only be captured by satellite observations. In the present study, we developed a novel method that assembles separate images and applies low-pass filtering to obtain temporal and spatial information about complete large-scale CGWs from a network of all-sky airglow imagers. Coordinated observations from five all-sky airglow imagers in Northern China were assembled and processed to study large-scale CGWs over a wide area (1800 km × 1400 km), focusing on the same two CGW events as Xu et al. (2015). Our algorithms yielded images of large-scale CGWs by filtering out the small-scale CGWs. The wavelengths, wave speeds, and periods of the CGWs were measured from a sequence of consecutive assembled images. Overall, the assembling and low-pass filtering algorithms can expand the airglow imager network to its full capacity regarding the detection of large-scale gravity waves.
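
    The low-pass filtering step can be illustrated on a synthetic "assembled" mosaic: a Gaussian blur attenuates short horizontal wavelengths far more strongly than long ones, isolating the large-scale concentric pattern. The wave parameters and filter width below are assumptions for illustration, not the values used by the authors.

    ```python
    import numpy as np
    from scipy.ndimage import gaussian_filter

    rng = np.random.default_rng(2)
    ny, nx = 256, 256
    y, x = np.mgrid[0:ny, 0:nx]
    r = np.hypot(x - 300.0, y - 300.0)             # distance from an off-image source

    mosaic = (np.sin(2 * np.pi * r / 80.0)          # large-scale concentric wave (80 px)
              + 0.8 * np.sin(2 * np.pi * x / 8.0)   # small-scale contamination (8 px)
              + 0.3 * rng.standard_normal((ny, nx)))

    # Gaussian low-pass: attenuation ~ exp(-2*pi^2*sigma^2 / lambda^2), so sigma=4
    # keeps the 80 px wave (~0.95) but nearly removes the 8 px wave (~0.007)
    large_scale = gaussian_filter(mosaic, sigma=4.0)
    print(f"variance before/after filtering: {mosaic.var():.2f} / {large_scale.var():.2f}")
    ```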

  4. Large-scale weakly supervised object localization via latent category learning.

    Science.gov (United States)

    Chong Wang; Kaiqi Huang; Weiqiang Ren; Junge Zhang; Maybank, Steve

    2015-04-01

    Localizing objects in cluttered backgrounds is challenging under large-scale weakly supervised conditions. Due to the cluttered image conditions, objects usually have large ambiguity with backgrounds. Besides, there is a lack of effective algorithms for large-scale weakly supervised localization in cluttered backgrounds. However, backgrounds contain useful latent information, e.g., the sky in the aeroplane class. If this latent information can be learned, object-background ambiguity can be largely reduced and background can be suppressed effectively. In this paper, we propose latent category learning (LCL) for large-scale cluttered conditions. LCL is an unsupervised learning method which requires only image-level class labels. First, we use latent semantic analysis with a semantic object representation to learn the latent categories, which represent objects, object parts or backgrounds. Second, to determine which category contains the target object, we propose a category selection strategy that evaluates each category's discrimination. Finally, we propose online LCL for use in large-scale conditions. Evaluation on the challenging PASCAL Visual Object Class (VOC) 2007 and the ImageNet Large Scale Visual Recognition Challenge 2013 detection data sets shows that the method can improve the annotation precision by 10% over previous methods. More importantly, we achieve a detection precision which outperforms previous results by a large margin and is competitive with the supervised deformable part model 5.0 baseline on both data sets.
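
    A hedged, toy rendering of the latent-category idea: run latent semantic analysis over image-level features and rank the latent dimensions by a simple discrimination score computed from image-level labels only. Matrix sizes, features and the score below are illustrative stand-ins, not the paper's actual pipeline:

    ```python
    # Toy sketch of latent category learning under weak supervision.
    import numpy as np
    from sklearn.decomposition import TruncatedSVD

    rng = np.random.default_rng(1)
    X = rng.random((500, 1000))      # hypothetical image-by-visual-word matrix
    y = rng.integers(0, 2, 500)      # image-level labels only (weak supervision)

    # LSA: latent dimensions play the role of "categories"
    # (objects, object parts, or backgrounds such as sky).
    lsa = TruncatedSVD(n_components=10, random_state=0)
    Z = lsa.fit_transform(X)

    # Category selection stand-in: score each latent dimension by how well
    # it separates positive from negative images.
    scores = [abs(Z[y == 1, k].mean() - Z[y == 0, k].mean()) / (Z[:, k].std() + 1e-9)
              for k in range(Z.shape[1])]
    best_category = int(np.argmax(scores))
    ```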

  5. Low-Complexity Transmit Antenna Selection and Beamforming for Large-Scale MIMO Communications

    Directory of Open Access Journals (Sweden)

    Kun Qian

    2014-01-01

    Full Text Available Transmit antenna selection plays an important role in large-scale multiple-input multiple-output (MIMO) communications, but optimal large-scale MIMO antenna selection is a technical challenge. Exhaustive search is often employed in antenna selection, but it cannot be efficiently implemented in large-scale MIMO communication systems due to its prohibitively high computational complexity. This paper proposes a low-complexity interactive multiple-parameter optimization method for joint transmit antenna selection and beamforming in large-scale MIMO communication systems. The objective is to jointly maximize the channel outage capacity and signal-to-noise ratio (SNR) performance and minimize the mean square error in transmit antenna selection and minimum variance distortionless response (MVDR) beamforming without exhaustive search. The effectiveness of all the proposed methods is verified by extensive simulation results. It is shown that the antenna selection processing time of the proposed method does not increase with the number of selected antennas, whereas the computational complexity of the conventional exhaustive search method increases significantly when large-scale antenna arrays are employed. This is particularly useful in antenna selection for large-scale MIMO communication systems.
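
    To make the complexity argument concrete: a greedy capacity-maximizing selection touches each candidate antenna once per selection step (on the order of k·n_tx determinant evaluations) instead of enumerating all C(n_tx, k) subsets as exhaustive search does. The sketch below is an illustrative stand-in, not the paper's joint selection/MVDR method:

    ```python
    # Toy greedy transmit antenna selection: at each step add the antenna
    # that most increases log det(I + (SNR/k) H_S H_S^H).
    import numpy as np

    def greedy_antenna_selection(H, k, snr=10.0):
        """H: (n_rx, n_tx) channel matrix; select k transmit antennas."""
        n_rx, n_tx = H.shape
        chosen = []
        for _ in range(k):
            best_gain, best_j = -np.inf, None
            for j in range(n_tx):
                if j in chosen:
                    continue
                Hs = H[:, chosen + [j]]
                # log-capacity of the candidate subset with equal power split
                gain = np.linalg.slogdet(np.eye(n_rx) +
                                         (snr / (len(chosen) + 1)) * Hs @ Hs.conj().T)[1]
                if gain > best_gain:
                    best_gain, best_j = gain, j
            chosen.append(best_j)
        return chosen

    H = (np.random.randn(4, 64) + 1j * np.random.randn(4, 64)) / np.sqrt(2)
    print(greedy_antenna_selection(H, k=8))
    ```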

  6. The role of large-scale, extratropical dynamics in climate change

    International Nuclear Information System (INIS)

    Shepherd, T.G.

    1994-02-01

    The climate modeling community has focused recently on improving our understanding of certain processes, such as cloud feedbacks and ocean circulation, that are deemed critical to climate-change prediction. Although attention to such processes is warranted, emphasis on these areas has diminished a general appreciation of the role played by the large-scale dynamics of the extratropical atmosphere. Lack of interest in extratropical dynamics may reflect the assumption that these dynamical processes are a non-problem as far as climate modeling is concerned, since general circulation models (GCMs) calculate motions on this scale from first principles. Nevertheless, serious shortcomings in our ability to understand and simulate large-scale dynamics exist. Partly due to a paucity of standard GCM diagnostic calculations of large-scale motions and their transports of heat, momentum, potential vorticity, and moisture, a comprehensive understanding of the role of large-scale dynamics in GCM climate simulations has not been developed. Uncertainties remain in our understanding and simulation of large-scale extratropical dynamics and their interaction with other climatic processes, such as cloud feedbacks, large-scale ocean circulation, moist convection, air-sea interaction and land-surface processes. To address some of these issues, the 17th Stanstead Seminar was convened at Bishop's University in Lennoxville, Quebec. The purpose of the Seminar was to promote discussion of the role of large-scale extratropical dynamics in global climate change. Abstracts of the talks are included in this volume. On the basis of these talks, several key issues emerged concerning large-scale extratropical dynamics and their climatic role. Individual records are indexed separately for the database

  7. The role of large-scale, extratropical dynamics in climate change

    Energy Technology Data Exchange (ETDEWEB)

    Shepherd, T.G. [ed.]

    1994-02-01

    The climate modeling community has focused recently on improving our understanding of certain processes, such as cloud feedbacks and ocean circulation, that are deemed critical to climate-change prediction. Although attention to such processes is warranted, emphasis on these areas has diminished a general appreciation of the role played by the large-scale dynamics of the extratropical atmosphere. Lack of interest in extratropical dynamics may reflect the assumption that these dynamical processes are a non-problem as far as climate modeling is concerned, since general circulation models (GCMs) calculate motions on this scale from first principles. Nevertheless, serious shortcomings in our ability to understand and simulate large-scale dynamics exist. Partly due to a paucity of standard GCM diagnostic calculations of large-scale motions and their transports of heat, momentum, potential vorticity, and moisture, a comprehensive understanding of the role of large-scale dynamics in GCM climate simulations has not been developed. Uncertainties remain in our understanding and simulation of large-scale extratropical dynamics and their interaction with other climatic processes, such as cloud feedbacks, large-scale ocean circulation, moist convection, air-sea interaction and land-surface processes. To address some of these issues, the 17th Stanstead Seminar was convened at Bishop's University in Lennoxville, Quebec. The purpose of the Seminar was to promote discussion of the role of large-scale extratropical dynamics in global climate change. Abstracts of the talks are included in this volume. On the basis of these talks, several key issues emerged concerning large-scale extratropical dynamics and their climatic role. Individual records are indexed separately for the database.

  8. Cooling pipeline disposing structure for large-scale cryogenic structure

    International Nuclear Information System (INIS)

    Takahashi, Hiroyuki.

    1996-01-01

    The present invention concerns an electromagnetic force supporting structure for superconductive coils. As the size of a cryogenic structure increases, cooling takes longer, the temperature difference between the cooling pipelines and the cryogenic structure grows over a wide range, and the difference in heat shrinkage increases, raising thermal stresses. In the cooling pipelines for a large-scale cryogenic structure, therefore, the pipelines and the structure are connected by way of a thin metal plate made of a material whose heat conductivity is higher than that of the structure material by an order of magnitude or more, and the thin metal plate is bent. The displacement between the cryogenic structure and the cooling pipelines caused by heat shrinkage is absorbed by the elongation/shrinkage of the bent thin metal plate, and the thermal stresses due to the displacement are reduced. In addition, the heat of the cryogenic structure is transferred by way of the thin metal plate. The cooling pipelines can thus be secured to the cryogenic structure so that cooling by heat transfer remains possible while absorbing the large, three-dimensional displacements caused by differences in the temperature distribution between the enlarged, three-dimensionally shaped cryogenic structure and the cooling pipelines. (N.H.)

  9. Parallel Framework for Dimensionality Reduction of Large-Scale Datasets

    Directory of Open Access Journals (Sweden)

    Sai Kiranmayee Samudrala

    2015-01-01

    Full Text Available Dimensionality reduction refers to a set of mathematical techniques used to reduce the complexity of original high-dimensional data while preserving its selected properties. Improvements in simulation strategies and experimental data collection methods are resulting in a deluge of heterogeneous and high-dimensional data, which often makes dimensionality reduction the only viable way to gain qualitative and quantitative understanding of the data. However, existing dimensionality reduction software often does not scale to datasets arising in real-life applications, which may consist of thousands of points with millions of dimensions. In this paper, we propose a parallel framework for dimensionality reduction of large-scale data. We identify key components underlying the spectral dimensionality reduction techniques, and propose their efficient parallel implementation. We show that the resulting framework can be used to process datasets consisting of millions of points when executed on a 16,000-core cluster, which is beyond the reach of currently available methods. To further demonstrate the applicability of our framework we perform dimensionality reduction of 75,000 images representing morphology evolution during manufacturing of organic solar cells in order to identify how processing parameters affect morphology evolution.
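
    The spectral core that such frameworks parallelize can be sketched on a single node: build a k-nearest-neighbor graph, form its normalized Laplacian, and take the eigenvectors with the smallest nonzero eigenvalues as embedding coordinates. A minimal sketch with hypothetical data (the paper's contribution is the distributed implementation, not this math):

    ```python
    # Single-node sketch of a spectral dimensionality reduction pipeline
    # (Laplacian-eigenmaps style) on synthetic stand-in data.
    import numpy as np
    from scipy.sparse import csgraph
    from scipy.sparse.linalg import eigsh
    from sklearn.neighbors import kneighbors_graph

    X = np.random.rand(2000, 50)                       # hypothetical high-dim data
    W = kneighbors_graph(X, n_neighbors=10, mode='connectivity')
    W = 0.5 * (W + W.T)                                # symmetrize the adjacency
    L = csgraph.laplacian(W, normed=True)

    # Eigenvectors with the smallest eigenvalues give the embedding;
    # the first (constant) eigenvector is dropped.
    vals, vecs = eigsh(L, k=3, which='SM')
    embedding = vecs[:, 1:]                            # 2-D coordinates per point
    ```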

  10. Large-scale solvothermal synthesis of fluorescent carbon nanoparticles

    International Nuclear Information System (INIS)

    Ku, Kahoe; Park, Jinwoo; Kim, Nayon; Kim, Woong; Lee, Seung-Wook; Chung, Haegeun; Han, Chi-Hwan

    2014-01-01

    The large-scale production of high-quality carbon nanomaterials is highly desirable for a variety of applications. We demonstrate a novel synthetic route to the production of fluorescent carbon nanoparticles (CNPs) in large quantities via a single-step reaction. The simple heating of a mixture of benzaldehyde, ethanol and graphite oxide (GO) with residual sulfuric acid in an autoclave produced 7 g of CNPs with a quantum yield of 20%. The CNPs can be dispersed in various organic solvents; hence, they are easily incorporated into polymer composites in forms such as nanofibers and thin films. Additionally, we observed that the GO present during the CNP synthesis was reduced. The reduced GO (RGO) was sufficiently conductive (σ ≈ 282 S m⁻¹) such that it could be used as an electrode material in a supercapacitor; in addition, it can provide excellent capacitive behavior and high-rate capability. This work will contribute greatly to the development of efficient synthetic routes to diverse carbon nanomaterials, including CNPs and RGO, that are suitable for a wide range of applications. (paper)

  11. Implicit solvers for large-scale nonlinear problems

    International Nuclear Information System (INIS)

    Keyes, David E; Reynolds, Daniel R; Woodward, Carol S

    2006-01-01

    Computational scientists are grappling with increasingly complex, multi-rate applications that couple such physical phenomena as fluid dynamics, electromagnetics, radiation transport, chemical and nuclear reactions, and wave and material propagation in inhomogeneous media. Parallel computers with large storage capacities are paving the way for high-resolution simulations of coupled problems; however, hardware improvements alone will not prove enough to enable simulations based on brute-force algorithmic approaches. To accurately capture nonlinear couplings between dynamically relevant phenomena, often while stepping over rapid adjustments to quasi-equilibria, simulation scientists are increasingly turning to implicit formulations that require a discrete nonlinear system to be solved for each time step or steady state solution. Recent advances in iterative methods have made fully implicit formulations a viable option for solution of these large-scale problems. In this paper, we overview one of the most effective iterative methods, Newton-Krylov, for nonlinear systems and point to software packages with its implementation. We illustrate the method with an example from magnetically confined plasma fusion and briefly survey other areas in which implicit methods have bestowed important advantages, such as allowing high-order temporal integration and providing a pathway to sensitivity analyses and optimization. Lastly, we overview algorithm extensions under development motivated by current SciDAC applications
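
    As a concrete instance of the Newton-Krylov method the authors survey, SciPy exposes a Jacobian-free implementation. A minimal sketch on a toy 1D nonlinear boundary-value problem (the problem choice is illustrative, not taken from the paper):

    ```python
    # Jacobian-free Newton-Krylov on a toy nonlinear system:
    # the steady reaction-diffusion equation u'' = exp(u), u(0) = u(1) = 0,
    # discretized by second-order finite differences.
    import numpy as np
    from scipy.optimize import newton_krylov

    n = 100
    h = 1.0 / (n + 1)

    def residual(u):
        # Discrete residual F(u) = u'' - exp(u) on the interior grid,
        # with homogeneous Dirichlet boundary values folded in.
        d2u = np.empty_like(u)
        d2u[1:-1] = (u[2:] - 2 * u[1:-1] + u[:-2]) / h**2
        d2u[0] = (u[1] - 2 * u[0]) / h**2        # boundary u(0) = 0
        d2u[-1] = (-2 * u[-1] + u[-2]) / h**2    # boundary u(1) = 0
        return d2u - np.exp(u)

    u0 = np.zeros(n)                              # initial guess
    sol = newton_krylov(residual, u0, method='lgmres')
    print("max |F(u)| at solution:", np.abs(residual(sol)).max())
    ```

    The inner Krylov solver (here LGMRES) only needs matrix-vector products, which is what makes the approach attractive for the large coupled systems described above.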

  12. Reconstructing Information in Large-Scale Structure via Logarithmic Mapping

    Science.gov (United States)

    Szapudi, Istvan

    We propose to develop a new method to extract information from large-scale structure data combining two-point statistics and non-linear transformations; before, this information was available only with substantially more complex higher-order statistical methods. Initially, most of the cosmological information in large-scale structure lies in two-point statistics. With non-linear evolution, some of that useful information leaks into higher-order statistics. The PI and group have shown in a series of theoretical investigations how that leakage occurs, and explained the Fisher information plateau at smaller scales. This plateau means that even as more modes are added to the measurement of the power spectrum, the total cumulative information (loosely speaking, the inverse error bar) is not increasing. Recently we have shown in Neyrinck et al. (2009, 2010) that a logarithmic (and a related Gaussianization or Box-Cox) transformation on the non-linear dark matter or galaxy field reconstructs a surprisingly large fraction of this missing Fisher information of the initial conditions. This was predicted by the earlier wave mechanical formulation of gravitational dynamics by Szapudi & Kaiser (2003). The present proposal is focused on working out the theoretical underpinning of the method to a point that it can be used in practice to analyze data. In particular, one needs to deal with the usual real-life issues of galaxy surveys, such as complex geometry, discrete sampling (Poisson or sub-Poisson noise), bias (linear or non-linear, deterministic or stochastic), redshift distortions, projection effects for 2D samples, and the effects of photometric redshift errors. We will develop methods for weak lensing and Sunyaev-Zeldovich power spectra as well, the latter specifically targeting Planck. In addition, we plan to investigate the question of residual higher-order information after the non-linear mapping, and possible applications for cosmology. Our aim will be to work out
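
    The core operation is simple enough to sketch: apply log(1 + δ) to the overdensity field and compare two-point statistics before and after. The toy below uses a synthetic lognormal field; the box size, grid and binning are arbitrary stand-ins, not survey parameters:

    ```python
    # Toy demonstration of the logarithmic mapping: compare the power
    # spectrum of a non-Gaussian overdensity field delta with that of
    # log(1 + delta), which re-Gaussianizes a lognormal field exactly.
    import numpy as np

    def power_spectrum(field, boxsize, nbins=16):
        # Isotropically binned power spectrum of a periodic 3D field.
        n = field.shape[0]
        fk = np.fft.rfftn(field) * (boxsize / n) ** 3
        pk3d = np.abs(fk) ** 2 / boxsize ** 3
        kpar = 2 * np.pi * np.fft.fftfreq(n, d=boxsize / n)
        kz = 2 * np.pi * np.fft.rfftfreq(n, d=boxsize / n)
        kmag = np.sqrt(kpar[:, None, None] ** 2 +
                       kpar[None, :, None] ** 2 +
                       kz[None, None, :] ** 2)
        edges = np.linspace(kmag[kmag > 0].min(), kmag.max(), nbins + 1)
        which = np.digitize(kmag.ravel(), edges)
        pk = np.array([pk3d.ravel()[which == i].mean() for i in range(1, nbins + 1)])
        return 0.5 * (edges[1:] + edges[:-1]), pk

    rng = np.random.default_rng(0)
    g = rng.normal(size=(64, 64, 64))
    delta = np.exp(g - g.var() / 2) - 1.0     # mock lognormal overdensity, mean ~0
    mapped = np.log(1.0 + delta)              # the logarithmic mapping
    k, pk_delta = power_spectrum(delta, boxsize=256.0)
    k, pk_log = power_spectrum(mapped, boxsize=256.0)
    ```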

  13. A Novel Architecture of Large-scale Communication in IOT

    Science.gov (United States)

    Ma, Wubin; Deng, Su; Huang, Hongbin

    2018-03-01

    In recent years, many scholars have done a great deal of research on the development of the Internet of Things and networked physical systems. However, few have provided a detailed view of the large-scale communication architecture of the IOT. In fact, the non-uniform technologies between IPv6 and access points have led to a lack of broad principles for large-scale communication architectures. Therefore, this paper presents the Uni-IPv6 Access and Information Exchange Method (UAIEM), a new architecture and algorithm that addresses large-scale communication in the IOT.

  14. Large scale and big data processing and management

    CERN Document Server

    Sakr, Sherif

    2014-01-01

    Large Scale and Big Data: Processing and Management provides readers with a central source of reference on the data management techniques currently available for large-scale data processing. Presenting chapters written by leading researchers, academics, and practitioners, it addresses the fundamental challenges associated with Big Data processing tools and techniques across a range of computing environments. The book begins by discussing the basic concepts and tools of large-scale Big Data processing and cloud computing. It also provides an overview of different programming models and cloud-bas

  15. Comparison Between Overtopping Discharge in Small and Large Scale Models

    DEFF Research Database (Denmark)

    Helgason, Einar; Burcharth, Hans F.

    2006-01-01

    The present paper presents overtopping measurements from small scale model tests performed at the Hydraulic & Coastal Engineering Laboratory, Aalborg University, Denmark and large scale model tests performed at the Large Wave Channel, Hannover, Germany. Comparison between results obtained from... small and large scale model tests shows no clear evidence of scale effects for overtopping above a threshold value. In the large scale model no overtopping was measured for wave heights below Hs = 0.5 m as the water sank into the voids between the stones on the crest. For low overtopping scale effects

  16. Large-scale groundwater modeling using global datasets: A test case for the Rhine-Meuse basin

    NARCIS (Netherlands)

    Sutanudjaja, E.H.; Beek, L.P.H. van; Jong, S.M. de; Geer, F.C. van; Bierkens, M.F.P.

    2011-01-01

    Large-scale groundwater models involving aquifers and basins of multiple countries are still rare due to a lack of hydrogeological data which are usually only available in developed countries. In this study, we propose a novel approach to construct large-scale groundwater models by using global

  17. Large-scale groundwater modeling using global datasets: a test case for the Rhine-Meuse basin

    NARCIS (Netherlands)

    Sutanudjaja, E.H.; Beek, L.P.H. van; Jong, S.M. de; Geer, F.C. van; Bierkens, M.F.P.

    2011-01-01

    The current generation of large-scale hydrological models does not include a groundwater flow component. Large-scale groundwater models, involving aquifers and basins of multiple countries, are still rare mainly due to a lack of hydro-geological data which are usually only available in

  18. Large-scale groundwater modeling using global datasets: A test case for the Rhine-Meuse basin

    NARCIS (Netherlands)

    Sutanudjaja, E.H.; Beek, L.P.H. van; Jong, S.M. de; Geer, F.C. van; Bierkens, M.F.P.

    2011-01-01

    The current generation of large-scale hydrological models does not include a groundwater flow component. Large-scale groundwater models, involving aquifers and basins of multiple countries, are still rare mainly due to a lack of hydro-geological data which are usually only available in developed

  19. Higher Education Teachers' Descriptions of Their Own Learning: A Large-Scale Study of Finnish Universities of Applied Sciences

    Science.gov (United States)

    Töytäri, Aija; Piirainen, Arja; Tynjälä, Päivi; Vanhanen-Nuutinen, Liisa; Mäki, Kimmo; Ilves, Vesa

    2016-01-01

    In this large-scale study, higher education teachers' descriptions of their own learning were examined through qualitative analysis applying the principles of phenomenographic research. The study is unusual in using large-scale data in a qualitative design. The data were collected through an e-mail survey sent to 5960 teachers…

  20. Application of visible/near infrared spectroscopy to quality control of fresh fruits and vegetables in large-scale mass distribution channels: a preliminary test on carrots and tomatoes.

    Science.gov (United States)

    Beghi, Roberto; Giovenzana, Valentina; Tugnolo, Alessio; Guidetti, Riccardo

    2018-05-01

    The market for fruits and vegetables is mainly controlled by the mass distribution channel (MDC). MDC buyers do not have useful instruments to rapidly evaluate the quality of the products. Decisions by the buyers are driven primarily by pricing strategies rather than product quality. Simple, rapid and easy-to-use methods for objectively evaluating the quality of postharvest products are needed. The present study aimed to use visible and near-infrared (vis/NIR) spectroscopy to estimate some qualitative parameters of two low-price products (carrots and tomatoes) of various brands, as well as evaluate the applicability of this technique for use in stores. A non-destructive optical system (vis/NIR spectrophotometer with a reflection probe, spectral range 450-1650 nm) was tested. The differences in quality among carrots and tomatoes purchased from 13 stores on various dates were examined. The reference quality parameters (firmness, water content, soluble solids content, pH and colour) were correlated with the spectral readings. The models derived from the optical data gave positive results, in particular for the prediction of the soluble solids content and the colour, with better results for tomatoes than for carrots. The application of optical techniques may help MDC buyers to monitor the quality of postharvest products, leading to an effective optimization of the entire supply chain. © 2017 Society of Chemical Industry.
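
    Where the record reports correlating spectra with reference parameters, a typical chemometrics workflow is partial least squares (PLS) regression. Below is a minimal, hedged sketch with synthetic stand-in data (the abstract does not specify the authors' modeling details); only the 450-1650 nm range matches the instrument described:

    ```python
    # Hedged PLS sketch: regress a quality parameter (e.g. soluble solids
    # content) on vis/NIR spectra. All data here are synthetic stand-ins.
    import numpy as np
    from sklearn.cross_decomposition import PLSRegression
    from sklearn.model_selection import cross_val_score

    rng = np.random.default_rng(0)
    wavelengths = np.arange(450, 1651, 2)          # 450-1650 nm, as in the study
    X = rng.random((120, wavelengths.size))        # 120 hypothetical spectra
    y = X[:, 300] * 10 + rng.normal(0, 0.3, 120)   # synthetic SSC reference values

    pls = PLSRegression(n_components=8)
    r2 = cross_val_score(pls, X, y, cv=5, scoring='r2')
    print("cross-validated R^2:", r2.mean())
    ```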

  1. State of the Art in Large-Scale Soil Moisture Monitoring

    Science.gov (United States)

    Ochsner, Tyson E.; Cosh, Michael Harold; Cuenca, Richard H.; Dorigo, Wouter; Draper, Clara S.; Hagimoto, Yutaka; Kerr, Yan H.; Larson, Kristine M.; Njoku, Eni Gerald; Small, Eric E.

    2013-01-01

    Soil moisture is an essential climate variable influencing land-atmosphere interactions, an essential hydrologic variable impacting rainfall-runoff processes, an essential ecological variable regulating net ecosystem exchange, and an essential agricultural variable constraining food security. Large-scale soil moisture monitoring has advanced in recent years, creating opportunities to transform scientific understanding of soil moisture and related processes. These advances are being driven by researchers from a broad range of disciplines, but this complicates collaboration and communication. For some applications, the science required to utilize large-scale soil moisture data is poorly developed. In this review, we describe the state of the art in large-scale soil moisture monitoring and identify some critical needs for research to optimize the use of increasingly available soil moisture data. We review representative examples of 1) emerging in situ and proximal sensing techniques, 2) dedicated soil moisture remote sensing missions, 3) soil moisture monitoring networks, and 4) applications of large-scale soil moisture measurements. Significant near-term progress seems possible in the use of large-scale soil moisture data for drought monitoring. Assimilation of soil moisture data for meteorological or hydrologic forecasting also shows promise, but significant challenges related to model structures and model errors remain. Little progress has been made yet in the use of large-scale soil moisture observations within the context of ecological or agricultural modeling. Opportunities abound to advance the science and practice of large-scale soil moisture monitoring for the sake of improved Earth system monitoring, modeling, and forecasting.

  2. Large-scale land transformations in Indonesia: The role of ...

    International Development Research Centre (IDRC) Digital Library (Canada)

    ... enable timely responses to the impacts of large-scale land transformations in Central Kalimantan ... In partnership with UNESCO's Organization for Women in Science for the ... New funding opportunity for gender equality and climate change.

  3. Resolute large scale mining company contribution to health services of

    African Journals Online (AJOL)

    Resolute large scale mining company contribution to health services of Lusu ... in terms of socio-economic, health, education, employment, safe drinking water, ... The data were analyzed using the Statistical Package for the Social Sciences (SPSS).

  4. Personalized Opportunistic Computing for CMS at Large Scale

    CERN Multimedia

    CERN. Geneva

    2015-01-01

    **Douglas Thain** is an Associate Professor of Computer Science and Engineering at the University of Notre Dame, where he designs large scale distributed computing systems to power the needs of advanced science and...

  5. Bottom-Up Accountability Initiatives and Large-Scale Land ...

    International Development Research Centre (IDRC) Digital Library (Canada)

    Corey Piccioni

    fuel/energy, climate, and finance has occurred and one of the most ... this wave of large-scale land acquisitions. In fact, esti- ... Environmental Rights Action/Friends of the Earth,. Nigeria ... map the differentiated impacts (gender, ethnicity,.

  6. Large-scale linear programs in planning and prediction.

    Science.gov (United States)

    2017-06-01

    Large-scale linear programs are at the core of many traffic-related optimization problems in both planning and prediction. Moreover, many of these involve significant uncertainty, and hence are modeled using either chance constraints, or robust optim...

  7. Bottom-Up Accountability Initiatives and Large-Scale Land ...

    International Development Research Centre (IDRC) Digital Library (Canada)

    ... Security can help increase accountability for large-scale land acquisitions in ... to build decent economic livelihoods and participate meaningfully in decisions ... its 2017 call for proposals to establish Cyber Policy Centres in the Global South.

  8. Needs, opportunities, and options for large scale systems research

    Energy Technology Data Exchange (ETDEWEB)

    Thompson, G.L.

    1984-10-01

    The Office of Energy Research was recently asked to perform a study of large scale systems in order to facilitate the development of a true large systems theory. It was decided to ask experts in the fields of electrical engineering, chemical engineering and manufacturing/operations research for their ideas concerning large scale systems research. The author was asked to distribute a questionnaire among these experts to find out their opinions concerning recent accomplishments and future research directions in large scale systems research. He was also requested to convene a conference with three experts in each area as panel members to discuss the general area of large scale systems research. The conference was held on March 26-27, 1984, in Pittsburgh with nine panel members and 15 other attendees. The present report is a summary of the ideas presented and the recommendations proposed by the attendees.

  9. Large-scale and patternable graphene: direct transformation of amorphous carbon film into graphene/graphite on insulators via Cu mediation engineering and its application to all-carbon based devices

    Science.gov (United States)

    Chen, Yu-Ze; Medina, Henry; Lin, Hung-Chiao; Tsai, Hung-Wei; Su, Teng-Yu; Chueh, Yu-Lun

    2015-01-01

    Chemical vapour deposition has been the preferred way to synthesize graphene for multiple applications. However, several problems related to transfer processes, such as wrinkles, cleanliness and scratches, have limited its application at the industrial scale. Intense research has therefore been devoted to developing alternative synthesis methods to deposit graphene directly on insulators at low cost with high uniformity and large area. In this work, we demonstrate a new concept for the direct growth of graphene on non-metal substrates. By exposing an amorphous carbon (a-C) film to Cu gaseous molecules after annealing at 850 °C, the a-C film undergoes a noticeable transformation to crystalline graphene. Furthermore, the thickness of the graphene can be controlled through the thickness of the pre-deposited a-C film. The transformation mechanism is investigated and explained in detail. This approach enables a one-step process for fabricating electrical devices made entirely of carbon, highlighting the uniqueness of the novel approach for developing graphene electronic devices. Interestingly, the carbon electrodes made directly on the graphene layer by our approach offer a good ohmic contact compared with the Schottky barriers usually observed in graphene devices using metals as electrodes.

  10. Cloud-enabled large-scale land surface model simulations with the NASA Land Information System

    Science.gov (United States)

    Duffy, D.; Vaughan, G.; Clark, M. P.; Peters-Lidard, C. D.; Nijssen, B.; Nearing, G. S.; Rheingrover, S.; Kumar, S.; Geiger, J. V.

    2017-12-01

    Developed by the Hydrological Sciences Laboratory at NASA Goddard Space Flight Center (GSFC), the Land Information System (LIS) is a high-performance software framework for terrestrial hydrology modeling and data assimilation. LIS provides the ability to integrate satellite and ground-based observational products and advanced modeling algorithms to extract land surface states and fluxes. Through a partnership with the National Center for Atmospheric Research (NCAR) and the University of Washington, the LIS model is currently being extended to include the Structure for Unifying Multiple Modeling Alternatives (SUMMA). With the addition of SUMMA, LIS will enable meaningful simulations containing a large multi-model ensemble and can provide advanced probabilistic continental-domain modeling capabilities at spatial scales relevant for water managers. The resulting LIS/SUMMA application framework is difficult for non-experts to install due to the large number of dependencies on specific versions of operating systems, libraries, and compilers. This has created a significant barrier to entry for domain scientists interested in using the software on their own systems or in the cloud. In addition, the requirement to support multiple runtime environments across the LIS community has created a significant burden on the NASA team. To overcome these challenges, LIS/SUMMA has been deployed using Linux containers, which allow an entire software package along with all dependencies to be installed within a working runtime environment, and Kubernetes, which orchestrates the deployment of a cluster of containers. Within a cloud environment, users can now easily create a cluster of virtual machines and run large-scale LIS/SUMMA simulations. Installations that once took weeks or months can now be performed in minutes. This presentation will discuss the steps required to create a cloud-enabled large-scale simulation, present examples of its use, and

  11. GMP-compliant isolation and large-scale expansion of bone marrow-derived MSC.

    Directory of Open Access Journals (Sweden)

    Natalie Fekete

    Full Text Available BACKGROUND: Mesenchymal stromal cells (MSC) have gained importance in tissue repair, tissue engineering and immunosuppressive therapy during the last years. Due to the limited availability of MSC in the bone marrow, ex vivo amplification prior to clinical application is required to obtain therapeutically applicable cell doses. Translation of preclinical into clinical-grade large-scale MSC expansion necessitates precise definition and standardization of all procedural parameters, including cell seeding density, culture medium and cultivation devices. While xenogeneic additives such as fetal calf serum are still widely used for cell culture, their use in the clinical context is associated with many risks, such as prion and viral transmission or adverse immunological reactions against xenogeneic components. METHODS AND FINDINGS: We established animal-free expansion protocols using platelet lysate as a medium supplement and thereby could confirm its safety and feasibility for large-scale MSC isolation and expansion. Five different GMP-compliant standardized protocols designed for the safe, reliable, efficient and economical isolation and expansion of MSC were performed, and the MSC obtained were analyzed for differentiation capacity by qPCR and histochemistry. Expression of standard MSC markers as defined by the International Society for Cellular Therapy, as well as expression of additional MSC markers and of various chemokine and cytokine receptors, was analysed by flow cytometry. Changes of metabolic markers and cytokines in the medium were addressed using the LUMINEX platform. CONCLUSIONS: The five different systems for isolation and expansion of MSC described in this study are all suitable to produce at least 100 million MSC, which is commonly regarded as a single clinical dose. The final products are equal according to the minimal criteria for MSC defined by the ISCT. We showed that the chemokine and integrin receptors analyzed had the same expression pattern

  12. Imprint of non-linear effects on HI intensity mapping on large scales

    Energy Technology Data Exchange (ETDEWEB)

    Umeh, Obinna, E-mail: umeobinna@gmail.com [Department of Physics and Astronomy, University of the Western Cape, Cape Town 7535 (South Africa)

    2017-06-01

    Intensity mapping of the HI brightness temperature provides a unique way of tracing the large-scale structure of the Universe up to the largest possible scales. This is achieved by using low angular resolution radio telescopes to detect the emission line from cosmic neutral hydrogen in the post-reionization Universe. We use general relativistic perturbation theory techniques to derive for the first time the full expression for the HI brightness temperature up to third order in perturbation theory without making any plane-parallel approximation. We use this result and the renormalization prescription for biased tracers to study the impact of nonlinear effects on the power spectrum of the HI brightness temperature both in real and redshift space. We show how mode coupling at nonlinear order, due to nonlinear bias parameters and redshift space distortion terms, modulates the power spectrum on large scales. The large-scale modulation may be understood as arising from an effective bias parameter and an effective shot noise.
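
    For orientation, the tree-level (linear) redshift-space power spectrum that these nonlinear corrections modulate is the standard Kaiser-type expression (a reference sketch, not a result derived in this record), with T̄_b the mean brightness temperature, b_HI the HI bias, f the growth rate and μ the line-of-sight cosine:

    ```latex
    % Linear-order reference point; the record's "effective bias" and
    % "effective shot noise" appear as loop-level corrections to b_HI and P_shot.
    P_{\Delta T}(k,\mu) \simeq \bar{T}_b^{\,2}\,\bigl(b_{\rm HI} + f\mu^{2}\bigr)^{2} P_m(k) + P_{\rm shot}
    ```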

  13. No Large Scale Curvature Perturbations during Waterfall of Hybrid Inflation

    OpenAIRE

    Abolhasani, Ali Akbar; Firouzjahi, Hassan

    2010-01-01

    In this paper the possibility of generating large scale curvature perturbations induced from the entropic perturbations during the waterfall phase transition of the standard hybrid inflation model is studied. We show that whether or not appreciable amounts of large scale curvature perturbations are produced during the waterfall phase transition depends crucially on the competition between the classical and the quantum mechanical back-reactions to terminate inflation. If one considers only the clas...

  14. Benefits of transactive memory systems in large-scale development

    OpenAIRE

    Aivars, Sablis

    2016-01-01

    Context. Large-scale software development projects are those consisting of a large number of teams, maybe even spread across multiple locations, and working on large and complex software tasks. That means that neither a team member individually nor an entire team holds all the knowledge about the software being developed and teams have to communicate and coordinate their knowledge. Therefore, teams and team members in large-scale software development projects must acquire and manage expertise...

  15. Capabilities of the Large-Scale Sediment Transport Facility

    Science.gov (United States)

    2016-04-01

    This technical note describes the Large-Scale Sediment Transport Facility (LSTF) and recent upgrades to the measurement systems, including pump flow meters, sediment trap weigh tanks, and beach-profiling lidar. The purpose of these upgrades was to increase... A detailed discussion of the original LSTF features and capabilities can be... (ERDC/CHL CHETN-I-88, April 2016; approved for public release, distribution unlimited.)

  16. Comparative Analysis of Different Protocols to Manage Large Scale Networks

    OpenAIRE

    Anil Rao Pimplapure; Dr Jayant Dubey; Prashant Sen

    2013-01-01

    In recent years, the number, complexity and size of large-scale networks have increased. The best example of a large-scale network is the Internet; more recent examples are data centers in cloud environments. In this setting, management tasks such as traffic monitoring, security and performance optimization place a heavy burden on the network administrator. This research studies different protocols for managing large-scale networks, i.e., conventional protocols like the Simple Network Management Protocol and newer gossip-bas...

  17. How large-scale subsidence affects stratocumulus transitions

    Directory of Open Access Journals (Sweden)

    J. J. van der Dussen

    2016-01-01

    Full Text Available Some climate modeling results suggest that the Hadley circulation might weaken in a future climate, causing a subsequent reduction in the large-scale subsidence velocity in the subtropics. In this study we analyze the cloud liquid water path (LWP) budget from large-eddy simulation (LES) results of three idealized stratocumulus transition cases, each with a different subsidence rate. As shown in previous studies, reduced subsidence leads to a deeper stratocumulus-topped boundary layer, an enhanced cloud-top entrainment rate and a delay in the transition of stratocumulus clouds into shallow cumulus clouds during their equatorward advection by the prevailing trade winds. The effect of a reduction of the subsidence rate can be summarized as follows. The initial deepening of the stratocumulus layer is partly counteracted by an enhanced absorption of solar radiation. After some hours the deepening of the boundary layer is accelerated by an enhancement of the entrainment rate. Because this is accompanied by a change in the cloud-base turbulent fluxes of moisture and heat, the net change in the LWP due to changes in the turbulent flux profiles is negligibly small.

  18. Silver nanoparticles: Large scale solvothermal synthesis and optical properties

    Energy Technology Data Exchange (ETDEWEB)

    Wani, Irshad A.; Khatoon, Sarvari [Nanochemistry Laboratory, Department of Chemistry, Jamia Millia Islamia, New Delhi 110025 (India); Ganguly, Aparna [Nanochemistry Laboratory, Department of Chemistry, Jamia Millia Islamia, New Delhi 110025 (India); Department of Chemistry, Indian Institute of Technology, Hauz Khas, New Delhi 110016 (India); Ahmed, Jahangeer; Ganguli, Ashok K. [Department of Chemistry, Indian Institute of Technology, Hauz Khas, New Delhi 110016 (India); Ahmad, Tokeer, E-mail: tokeer.ch@jmi.ac.in [Nanochemistry Laboratory, Department of Chemistry, Jamia Millia Islamia, New Delhi 110025 (India)

    2010-08-15

    Silver nanoparticles have been successfully synthesized by a simple, modified solvothermal method at large scale using ethanol as the refluxing solvent and NaBH₄ as the reducing agent. The nanopowder was investigated by means of X-ray diffraction (XRD), transmission electron microscopy (TEM), dynamic light scattering (DLS), UV-visible spectroscopy and BET surface area studies. XRD studies reveal the monophasic nature of these highly crystalline silver nanoparticles. Transmission electron microscopic studies show monodisperse, highly uniform silver nanoparticles with a particle size of 5 nm; the size found by dynamic light scattering is 7 nm, in good agreement with the TEM and X-ray line-broadening studies. The surface area was found to be 34.5 m²/g. UV-visible studies show the absorption band at ≈425 nm due to surface plasmon resonance. The yield of silver nanoparticles was as high as 98.5%.

  19. Research on large-scale wind farm modeling

    Science.gov (United States)

    Ma, Longfei; Zhang, Baoqun; Gong, Cheng; Jiao, Ran; Shi, Rui; Chi, Zhongjun; Ding, Yifeng

    2017-01-01

    Due to the intermittent and fluctuating nature of wind energy, a large-scale wind farm connected to the grid affects the power system in ways that differ from traditional power plants. It is therefore necessary to establish an effective wind farm model to simulate and analyze the influence wind farms have on the grid, as well as the transient characteristics of the wind turbines when the grid is at fault. This first requires an effective wind turbine generator (WTG) model. As the doubly-fed VSCF wind turbine has become the mainstream wind turbine type, this article first reviews the research progress on doubly-fed VSCF wind turbines and then describes the detailed model-building process. It then surveys common wind farm modeling methods and points out the problems encountered. As WAMS is widely used in the power system, online parameter identification of the wind farm model based on the measured output characteristics of the wind farm becomes possible; the article focuses on interpreting this new idea of identification-based modeling of large wind farms, which can be realized by two concrete methods.

  20. Comprehensive large-scale assessment of intrinsic protein disorder.

    Science.gov (United States)

    Walsh, Ian; Giollo, Manuel; Di Domenico, Tomás; Ferrari, Carlo; Zimmermann, Olav; Tosatto, Silvio C E

    2015-01-15

    Intrinsically disordered regions are key for the function of numerous proteins. Due to the difficulties in experimental disorder characterization, many computational predictors have been developed, with various disorder flavors. Their performance is generally measured on small sets drawn mainly from experimentally solved structures, e.g. Protein Data Bank (PDB) chains. MobiDB has only recently started to collect disorder annotations from multiple experimental structures. MobiDB annotates disorder for UniProt sequences, allowing us to conduct the first large-scale assessment of fast disorder predictors on 25 833 different sequences with X-ray crystallographic structures. In addition to a comprehensive ranking of predictors, this analysis produced the following interesting observations. (i) The predictors cluster according to their disorder definition, with a consensus giving more confidence. (ii) Previous assessments appear over-reliant on data annotated at the PDB chain level, and performance is lower on entire UniProt sequences. (iii) Long disordered regions are harder to predict. (iv) Depending on the structural and functional types of the proteins, differences in prediction performance of up to 10% are observed. The datasets are available at http://mobidb.bio.unipd.it/lsd. Supplementary data are available at Bioinformatics online. © The Author 2014. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.