WorldWideScience

Sample records for large-scale performance assessments

  1. Assessing the Performance of Large Scale Green Roofs and Their Impact on the Urban Microclimate

    Science.gov (United States)

    Smalls-Mantey, L.; Foti, R.; Montalto, F. A.

    2015-12-01

    In ultra-urban environments, green roofs offer a feasible way to add green infrastructure (GI) in neighborhoods where space is limited. Green roofs offer the typical advantages of urban GI, such as stormwater reduction and management, while providing direct benefits to the buildings on which they are installed through thermal protection and mitigation of temperature fluctuations. At 6.8 acres, the Jacob K. Javits Convention Center (JJCC) in New York City hosts the second largest green roof in the United States. Since its installation in August 2013, the Sustainable Water Resource (SWRE) Laboratory at Drexel University has monitored the climate on and around the green roof by means of four weather stations situated at various roof and ground locations. Using two years of fine-scale climatic data collected at the JJCC, this study explores the energy balance of a large scale green roof system. Temperature, radiation, evapotranspiration and wind profiles pre- and post-installation of the JJCC green roof were analyzed and compared across monitored locations, with the goal of identifying the impact of the green roof on the building and the urban microclimate. Our findings indicate that the presence of the green roof not only altered the climatic conditions above the JJCC, but also had a measurable impact on the climatic profile of the areas immediately surrounding it. Furthermore, as a result of the mitigation of roof temperature fluctuations and of the cooling provided during warmer months, an improvement in the building's thermal efficiency was also observed. These findings support the installation of GI as an effective practice in urban settings and as important to the discussion of key issues including energy conservation measures, carbon emission reductions and the mitigation of urban heat islands.
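
    The abstract does not state the energy balance it evaluates; for orientation, a standard surface energy balance for a vegetated roof (an assumption here, not the study's stated model) is

    $$ R_n = H + LE + G + \Delta S $$

    where $R_n$ is net radiation, $H$ the sensible heat flux, $LE$ the latent heat flux associated with evapotranspiration, $G$ the conductive flux into the roof assembly, and $\Delta S$ the change in substrate heat storage. On this reading, the reported damping of roof temperature fluctuations corresponds to the green roof shifting energy from $H$ and $G$ toward $LE$ and $\Delta S$.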

  2. Analytical Assessment of the Relationship between 100MWp Large-scale Grid-connected Photovoltaic Plant Performance and Meteorological Parameters

    Science.gov (United States)

    Sheng, Jie; Zhu, Qiaoming; Cao, Shijie; You, Yang

    2017-05-01

    This paper studies the relationship between the power generation of a large scale “fishing and PV complementary” grid-tied photovoltaic system and meteorological parameters, using multi-time-scale power data from the photovoltaic power station and meteorological data for the same one-year period. The results indicate that PV power generation correlates most significantly with global solar irradiation, followed by diurnal temperature range, sunshine hours, daily maximum temperature and daily average temperature. Across months, the maximum monthly average power generation appears in August, which is related to the greater global solar irradiation and longer sunshine hours in that month. However, the maximum daily average power generation appears in October, because the drop in temperature improves the efficiency of the PV panels. A comparison of monthly average performance ratio (PR) with monthly average temperature shows that the larger values of monthly average PR appear in April and October, while PR is smaller in the hotter summer months. The results indicate that temperature has a strong influence on the performance ratio of a large scale grid-tied PV power system, and that effective measures to cool the PV plant are therefore worthwhile.
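
    The abstract uses the performance ratio without defining it; the conventional definition (IEC 61724) normalizes the AC energy yield by the yield expected from the in-plane irradiation at rated capacity:

    $$ \mathrm{PR} = \frac{E_{\mathrm{AC}}/P_{0}}{H_{\mathrm{POA}}/G_{\mathrm{STC}}} $$

    where $E_{\mathrm{AC}}$ is the AC energy delivered over the period, $P_{0}$ the rated capacity (here 100 MWp), $H_{\mathrm{POA}}$ the in-plane irradiation, and $G_{\mathrm{STC}} = 1\,\mathrm{kW/m^2}$. Because module efficiency falls as cell temperature rises, PR defined this way dips in hot months, which is consistent with the April and October maxima reported above.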

  3. A large-scale examination of the effectiveness of anonymous marking in reducing group performance differences in higher education assessment.

    Directory of Open Access Journals (Sweden)

    Daniel P Hinton

    The present research aims to more fully explore the issues of performance differences in higher education assessment, particularly in the context of a common measure taken to address them. The rationale for the study is that, while performance differences in written examinations are relatively well researched, few studies have examined the efficacy of anonymous marking in reducing these performance differences, particularly in modern student populations. By examining a large archive (N = 30674) of assessment data spanning a twelve-year period, the relationship between assessment marks and factors such as ethnic group, gender and socio-environmental background was investigated. In particular, analysis focused on the impact that the implementation of anonymous marking for assessment of written examinations and coursework has had on the magnitude of mean score differences between demographic groups of students. While group differences were found to be pervasive in higher education assessment, these differences were observed to be relatively small in practical terms. Further, it appears that the introduction of anonymous marking has had a negligible effect in reducing them. The implications of these results are discussed, focusing on two issues, firstly a defence of examinations as a fair and legitimate form of assessment in Higher Education, and, secondly, a call for the re-examination of the efficacy of anonymous marking in reducing group performance differences.

  4. A large-scale examination of the effectiveness of anonymous marking in reducing group performance differences in higher education assessment.

    Science.gov (United States)

    Hinton, Daniel P; Higson, Helen

    2017-01-01

    The present research aims to more fully explore the issues of performance differences in higher education assessment, particularly in the context of a common measure taken to address them. The rationale for the study is that, while performance differences in written examinations are relatively well researched, few studies have examined the efficacy of anonymous marking in reducing these performance differences, particularly in modern student populations. By examining a large archive (N = 30674) of assessment data spanning a twelve-year period, the relationship between assessment marks and factors such as ethnic group, gender and socio-environmental background was investigated. In particular, analysis focused on the impact that the implementation of anonymous marking for assessment of written examinations and coursework has had on the magnitude of mean score differences between demographic groups of students. While group differences were found to be pervasive in higher education assessment, these differences were observed to be relatively small in practical terms. Further, it appears that the introduction of anonymous marking has had a negligible effect in reducing them. The implications of these results are discussed, focusing on two issues, firstly a defence of examinations as a fair and legitimate form of assessment in Higher Education, and, secondly, a call for the re-examination of the efficacy of anonymous marking in reducing group performance differences.
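
    The paper reports group differences that are "relatively small in practical terms"; such magnitudes are usually expressed as standardized mean differences. A minimal sketch with simulated marks (illustrative numbers only, not the study's data):

    ```python
    import numpy as np

    def cohens_d(a: np.ndarray, b: np.ndarray) -> float:
        """Standardized mean difference between two groups (pooled SD)."""
        na, nb = len(a), len(b)
        pooled_var = ((na - 1) * a.var(ddof=1) + (nb - 1) * b.var(ddof=1)) / (na + nb - 2)
        return (a.mean() - b.mean()) / np.sqrt(pooled_var)

    # Simulated exam marks for two demographic groups -- hypothetical values.
    rng = np.random.default_rng(0)
    group_a = rng.normal(62.0, 10.0, size=15000)
    group_b = rng.normal(60.5, 10.0, size=15000)
    print(f"d = {cohens_d(group_a, group_b):.2f}")  # ~0.15: pervasive but practically small
    ```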

  5. Energetic and Economic Assessment of Pipe Network Effects on Unused Energy Source System Performance in Large-Scale Horticulture Facilities

    Directory of Open Access Journals (Sweden)

    Jae Ho Lee

    2015-04-01

    As fossil fuel use has increased, not only in construction but also in agriculture, owing to rapid industrial development in recent times, the problems of heating costs and global warming are getting worse. The introduction of more reliable and environmentally-friendly alternative energy sources has therefore become urgent, and the same trend is found in large-scale horticulture facilities. In this study, among many alternative energy sources, we investigated the reserves and potential of various unused energy sources which are nowadays wasted due to limitations in their utilization. This study investigated the effects of the distance between the greenhouse and the actual heat source by taking into account the heat transfer taking place inside the pipe network, and considered CO2 emissions and economic aspects to determine the optimal heat source. Payback period analysis against initial investment cost shows that a heat pump based on a power plant's waste heat has the shortest payback period, 7.69 years, at a distance of 0 km. At a distance of 5 km, however, a heat pump based on geothermal heat has the shortest payback period, 10.17 years, indicating that heat pumps utilizing geothermal heat are the most effective model once the heat transfer inside the pipe network between the greenhouse and the actual heat source is taken into account.
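
    The payback figures quoted above are simple (undiscounted) paybacks, i.e. the initial investment divided by the annual operating cost saving. A minimal sketch, with hypothetical costs chosen only to echo the reported 7.69- and 10.17-year results:

    ```python
    def simple_payback_years(initial_cost: float, annual_saving: float) -> float:
        """Undiscounted payback period in years."""
        return initial_cost / annual_saving

    # Hypothetical investment and savings per option (not the study's figures).
    print(round(simple_payback_years(1_000_000, 130_000), 2))  # waste-heat pump at 0 km -> ~7.69
    print(round(simple_payback_years(1_000_000,  98_300), 2))  # geothermal pump at 5 km -> ~10.17
    ```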

  6. Performance assessment of mass flow rate measurement capability in a large scale transient two-phase flow test system

    International Nuclear Information System (INIS)

    Nalezny, C.L.; Chapman, R.L.; Martinell, J.S.; Riordon, R.P.; Solbrig, C.W.

    1979-01-01

    Mass flow is an important measured variable in the Loss-of-Fluid Test (LOFT) Program. Large uncertainties in mass flow measurements in the LOFT piping during LOFT coolant experiments require instrument testing in a transient two-phase flow loop that simulates the geometry of the LOFT piping. To satisfy this need, a transient two-phase flow loop has been designed and built. The load cell weighing system, which provides reference mass flow measurements, has been analyzed to assess its capability to provide the measurements. The analysis consisted of first performing a thermal-hydraulic analysis using RELAP4 to compute the mass inventory and pressure fluctuations in the system and the mass flow rate at the instrument location. The RELAP4 output was used as input to the structural analysis code SAPIV, which was used to determine the load cell response. The computed load cell response was then smoothed and differentiated to compute the mass flow rate out of the system. Comparison between the computed mass flow rate at the instrument location and the mass flow rate out of the system computed from the load cell output was used to evaluate the mass flow measurement capability of the load cell weighing system. Results of the analysis indicate that the load cell weighing system will provide reference mass flows more accurately than the instruments now in LOFT.
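
    The report's smoothing scheme is not described in the abstract; the general approach, smoothing the load-cell mass signal and differentiating it to obtain the efflux rate, can be sketched as follows (Savitzky-Golay filtering and all signal parameters are assumptions):

    ```python
    import numpy as np
    from scipy.signal import savgol_filter

    dt = 0.01                      # assumed sampling interval, s
    t = np.arange(0.0, 10.0, dt)
    rng = np.random.default_rng(1)
    # Synthetic load-cell reading: draining mass inventory (kg) plus sensor noise.
    mass = 500.0 * np.exp(-t / 4.0) + rng.normal(0.0, 0.5, t.size)

    # Smooth and differentiate in one pass; -dm/dt is the mass flow leaving the system.
    dm_dt = savgol_filter(mass, window_length=201, polyorder=3, deriv=1, delta=dt)
    mass_flow_out = -dm_dt          # kg/s
    ```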

  7. Performance regression manager for large scale systems

    Science.gov (United States)

    Faraj, Daniel A.

    2017-08-01

    System and computer program product to perform an operation comprising generating, based on a first output generated by a first execution instance of a command, a first output file specifying a value of at least one performance metric, wherein the first output file is formatted according to a predefined format, comparing the value of the at least one performance metric in the first output file to a value of the performance metric in a second output file, the second output file having been generated based on a second output generated by a second execution instance of the command, and outputting for display an indication of a result of the comparison of the value of the at least one performance metric of the first output file to the value of the at least one performance metric of the second output file.
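
    The claim describes comparing a metric value across two runs of the same command. A hedged sketch of that comparison, assuming a simple name=value file layout for the "predefined format" (which is not specified here):

    ```python
    def read_metrics(path: str) -> dict[str, float]:
        """Parse an output file of 'metric=value' lines (assumed format)."""
        metrics = {}
        with open(path) as fh:
            for line in fh:
                name, sep, value = line.strip().partition("=")
                if sep:
                    metrics[name] = float(value)
        return metrics

    def compare_runs(first: str, second: str, tolerance: float = 0.05) -> None:
        """Report the change in each shared metric; flag drops beyond `tolerance`."""
        a, b = read_metrics(first), read_metrics(second)
        for name in sorted(a.keys() & b.keys()):
            change = (b[name] - a[name]) / a[name]
            status = "REGRESSION" if change < -tolerance else "ok"
            print(f"{name}: {a[name]:.3f} -> {b[name]:.3f} ({change:+.1%}) {status}")
    ```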

  8. Linking Large-Scale Reading Assessments: Comment

    Science.gov (United States)

    Hanushek, Eric A.

    2016-01-01

    E. A. Hanushek points out in this commentary that applied researchers in education have only recently begun to appreciate the value of international assessments, even though there are now 50 years of experience with these. Until recently, these assessments have been stand-alone surveys that have not been linked, and analysis has largely focused on…

  9. Fuel pin integrity assessment under large scale transients

    International Nuclear Information System (INIS)

    Dutta, B.K.

    2006-01-01

    The integrity of fuel rods under normal, abnormal and accident conditions is an important consideration during fuel design of advanced nuclear reactors. The fuel matrix and the sheath form the first barrier preventing the release of radioactive materials into the primary coolant. An understanding of fuel and clad behaviour under different reactor conditions, particularly under beyond-design-basis accident scenarios leading to large scale transients, is always desirable to assess the inherent safety margins in fuel pin design and to plan for the mitigation of the consequences of accidents, if any. Severe accident conditions are typically characterized by energy deposition rates far exceeding the heat removal capability of the reactor coolant system. This may lead to clad failure due to fission gas pressure at high temperature, large-scale pellet-clad interaction and clad melting. Fuel rod performance is affected by many interdependent complex phenomena involving extremely complex material behaviour. The versatile experimental database available in this area has led to the development of powerful analytical tools to characterize fuel under extreme scenarios.

  10. Comprehensive large-scale assessment of intrinsic protein disorder.

    Science.gov (United States)

    Walsh, Ian; Giollo, Manuel; Di Domenico, Tomás; Ferrari, Carlo; Zimmermann, Olav; Tosatto, Silvio C E

    2015-01-15

    Intrinsically disordered regions are key for the function of numerous proteins. Due to the difficulties in experimental disorder characterization, many computational predictors have been developed with various disorder flavors. Their performance is generally measured on small sets mainly from experimentally solved structures, e.g. Protein Data Bank (PDB) chains. MobiDB has only recently started to collect disorder annotations from multiple experimental structures. MobiDB annotates disorder for UniProt sequences, allowing us to conduct the first large-scale assessment of fast disorder predictors on 25 833 different sequences with X-ray crystallographic structures. In addition to a comprehensive ranking of predictors, this analysis produced the following interesting observations. (i) The predictors cluster according to their disorder definition, with a consensus giving more confidence. (ii) Previous assessments appear over-reliant on data annotated at the PDB chain level, and performance is lower on entire UniProt sequences. (iii) Long disordered regions are harder to predict. (iv) Depending on the structural and functional types of the proteins, differences in prediction performance of up to 10% are observed. The datasets are available at http://mobidb.bio.unipd.it/lsd. Supplementary data are available at Bioinformatics online.

  11. Performance Health Monitoring of Large-Scale Systems

    Energy Technology Data Exchange (ETDEWEB)

    Rajamony, Ram [IBM Research, Austin, TX (United States)

    2014-11-20

    This report details the progress made on the ASCR-funded project Performance Health Monitoring (PHM) for Large Scale Systems. A large-scale application may not achieve its full performance potential due to degraded performance of even a single subsystem. Detecting performance faults, isolating them, and taking remedial action is critical for the scale of systems on the horizon. PHM aims to develop techniques and tools that can be used to identify and mitigate such performance problems, and accomplishes this through two main components. The PHM framework encompasses diagnostics, system monitoring, fault isolation, and performance evaluation capabilities that indicate when a performance fault has been detected, whether due to an anomaly in the system itself or due to contention for shared resources between concurrently executing jobs. Software components called the PHM Control system then build upon the capabilities provided by the PHM framework to mitigate degradation caused by performance problems.
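
    The report excerpt does not give PHM's detection algorithm; as a minimal stand-in, performance-fault detection is often bootstrapped from a rolling z-score over a metric stream, flagging values that depart sharply from recent history:

    ```python
    from collections import deque
    import statistics

    class RollingFaultDetector:
        """Illustrative detector only -- not the PHM project's algorithm."""

        def __init__(self, window: int = 100, threshold: float = 3.0):
            self.history = deque(maxlen=window)
            self.threshold = threshold

        def observe(self, value: float) -> bool:
            """Return True when `value` deviates > threshold sigmas from recent history."""
            anomalous = False
            if len(self.history) >= 10:
                mean = statistics.fmean(self.history)
                sd = statistics.stdev(self.history)
                anomalous = sd > 0 and abs(value - mean) / sd > self.threshold
            if not anomalous:
                self.history.append(value)  # keep the baseline free of faulty samples
            return anomalous
    ```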

  12. NASA: Assessments of Selected Large-Scale Projects

    Science.gov (United States)

    2011-03-01

    …probes designed to explore the Martian surface, to satellites equipped with advanced sensors to study the Earth, to telescopes intended to explore the…

  13. Evolutionary leap in large-scale flood risk assessment needed

    OpenAIRE

    Vorogushyn, Sergiy; Bates, Paul D.; de Bruijn, Karin; Castellarin, Attilio; Kreibich, Heidi; Priest, Sally J.; Schröter, Kai; Bagli, Stefano; Blöschl, Günter; Domeneghetti, Alessio; Gouldby, Ben; Klijn, Frans; Lammersen, Rita; Neal, Jeffrey C.; Ridder, Nina

    2018-01-01

    Current approaches for assessing large-scale flood risks contravene the fundamental principles of the flood risk system functioning because they largely ignore basic interactions and feedbacks between atmosphere, catchments, river-floodplain systems and socio-economic processes. As a consequence, risk analyses are uncertain and might be biased. However, reliable risk estimates are required for prioritizing national investments in flood risk mitigation or for appraisal and management of insura...

  14. Large scale and performance tests of the ATLAS online software

    International Nuclear Information System (INIS)

    Alexandrov; Kotov, V.; Mineev, M.; Roumiantsev, V.; Wolters, H.; Amorim, A.; Pedro, L.; Ribeiro, A.; Badescu, E.; Caprini, M.; Burckhart-Chromek, D.; Dobson, M.; Jones, R.; Kazarov, A.; Kolos, S.; Liko, D.; Lucio, L.; Mapelli, L.; Nassiakou, M.; Schweiger, D.; Soloviev, I.; Hart, R.; Ryabov, Y.; Moneta, L.

    2001-01-01

    One of the sub-systems of the Trigger/DAQ system of the future ATLAS experiment is the Online Software system. It encompasses the functionality needed to configure, control and monitor the DAQ. Its architecture is based on a component structure described in the ATLAS Trigger/DAQ technical proposal. Regular integration tests ensure its smooth operation in test beam setups during its evolutionary development towards the final ATLAS online system, and feedback is returned into the development process. Studies of the system behaviour have been performed on a set of up to 111 PCs, a configuration approaching the final size. Large scale and performance tests of the integrated system were performed on this setup, with emphasis on investigating the inter-dependence of the components and the performance of the communication software. Of particular interest were the run control state transitions in various configurations of the run control hierarchy. For the purpose of the tests, the software from other Trigger/DAQ sub-systems was emulated. The authors present a brief overview of the online system structure, its components and the large scale integration tests and their results.

  15. Technical Design Report for large-scale neutrino detectors prototyping and phased performance assessment in view of a long-baseline oscillation experiment

    CERN Document Server

    De Bonis, I.; Duchesneau, D.; Pessard, H.; Bordoni, S.; Ieva, M.; Lux, T.; Sanchez, F.; Jipa, A.; Lazanu, I.; Calin, M.; Esanu, T.; Ristea, O.; Ristea, C.; Nita, L.; Efthymiopoulos, I.; Nessi, M.; Asfandiyarov, R.; Blondel, A.; Bravar, A.; Cadoux, F.; Haesler, A.; Karadzhov, Y.; Korzenev, A.; Martin, C.; Noah, E.; Ravonel, M.; Rayner, M.; Scantamburlo, E.; Bayes, R.; Soler, F.J.P.; Nuijten, G.A.; Loo, K.; Maalampi, J.; Slupecki, M.; Trzaska, W.H.; Campanelli, M.; Blebea-Apostu, A.M.; Chesneanu, D.; Gomoiu, M.C; Mitrica, B.; Margineanu, R.M.; Stanca, D.L.; Colino, N.; Gil-Botella, I.; Novella, P.; Palomares, C.; Santorelli, R.; Verdugo, A.; Karpikov, I.; Khotjantsev, A.; Kudenko, Y.; Mefodiev, A.; Mineev, O.; Ovsiannikova, T.; Yershov, N.; Enqvist, T.; Kuusiniemi, P.; De La Taille, C.; Dulucq, F.; Martin-Chassard, G.; Andrieu, B.; Dumarchez, J.; Giganti, C.; Levy, J.-M.; Popov, B.; Robert, A.; Agostino, L.; Buizza-Avanzini, M.; Dawson, J.; Franco, D.; Gorodetzky, P.; Kryn, D.; Patzak, T.; Tonazzo, A.; Vannucci, F.; Bésida, O.; Bolognesi, S.; Delbart, A.; Emery, S.; Galymov, V.; Mazzucato, E.; Vasseur, G.; Zito, M.; Bogomilov, M.; Tsenov, R.; Vankova-Kirilova, G.; Friend, M.; Hasegawa, T.; Nakadaira, T.; Sakashita, K.; Zambelli, L.; Autiero, D.; Caiulo, D.; Chaussard, L.; Déclais, Y.; Franco, D.; Marteau, J.; Pennacchio, E.; Bay, F.; Cantini, C.; Crivelli, P.; Epprecht, L.; Gendotti, A.; Di Luise, S.; Horikawa, S.; Murphy, S.; Nikolics, K.; Periale, L.; Regenfus, C.; Rubbia, A.; Sgalaberna, D.; Viant, T.; Wu, S.; Sergiampietri, F.; CERN. Geneva. SPS and PS Experiments Committee; SPSC

    2014-01-01

    In June 2012, an Expression of Interest for a long-baseline experiment (LBNO, CERN-SPSC-EOI-007) was submitted to the CERN SPSC and is presently under review. LBNO considers three types of neutrino detector technologies: a double-phase liquid argon (LAr) TPC and a magnetised iron detector as far detectors. For the near detector, a high-pressure gas TPC embedded in a calorimeter and a magnet is the baseline design. A mandatory milestone in view of any future long baseline experiment is a concrete prototyping effort towards the envisioned large-scale detectors, and an accompanying campaign of measurements aimed at assessing the systematic errors that will affect their intended physics programme. Following encouraging feedback from the 108th SPSC on the technology choices, we have defined as priority the construction and operation of a $6\times 6\times 6$m$^3$ (active volume) double-phase liquid argon (DLAr) demonstrator, and a parallel development of the technologies necessary for large magnetised MIN...

  16. Assessing large-scale wildlife responses to human infrastructure development.

    Science.gov (United States)

    Torres, Aurora; Jaeger, Jochen A G; Alonso, Juan Carlos

    2016-07-26

    Habitat loss and deterioration represent the main threats to wildlife species, and are closely linked to the expansion of roads and human settlements. Unfortunately, large-scale effects of these structures remain generally overlooked. Here, we analyzed the European transportation infrastructure network and found that 50% of the continent is within 1.5 km of transportation infrastructure. We present a method for assessing the impacts from infrastructure on wildlife, based on functional response curves describing density reductions in birds and mammals (e.g., road-effect zones), and apply it to Spain as a case study. The imprint of infrastructure extends over most of the country (55.5% in the case of birds and 97.9% for mammals), with moderate declines predicted for birds (22.6% of individuals) and severe declines predicted for mammals (46.6%). Despite certain limitations, we suggest the approach proposed is widely applicable to the evaluation of effects of planned infrastructure developments under multiple scenarios, and propose an internationally coordinated strategy to update and improve it in the future.
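
    The functional response curves themselves are not reproduced in the abstract; their general shape, density recovering with distance from infrastructure, can be sketched with a logistic curve (all parameter values below are hypothetical):

    ```python
    import numpy as np

    def relative_density(distance_km: np.ndarray, d_half: float = 1.0,
                         steepness: float = 3.0) -> np.ndarray:
        """Fraction of undisturbed population density retained at a given distance.
        d_half: distance (km) of half recovery; both parameters are hypothetical."""
        return 1.0 / (1.0 + np.exp(-steepness * (distance_km - d_half)))

    distances = np.array([0.1, 0.5, 1.0, 2.0, 5.0])
    print(relative_density(distances).round(2))  # low near roads, near 1.0 far away
    ```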

  17. How the Internet Will Help Large-Scale Assessment Reinvent Itself

    Directory of Open Access Journals (Sweden)

    Randy Elliot Bennett

    2001-02-01

    Large-scale assessment in the United States is undergoing enormous pressure to change. That pressure stems from many causes. Depending upon the type of test, the issues precipitating change include an outmoded cognitive-scientific basis for test design; a mismatch with curriculum; the differential performance of population groups; a lack of information to help individuals improve; and inefficiency. These issues provide a strong motivation to reconceptualize both the substance and the business of large-scale assessment. At the same time, advances in technology, measurement, and cognitive science are providing the means to make that reconceptualization a reality. The thesis of this paper is that the largest facilitating factor will be technological, in particular the Internet. In the same way that it is already helping to revolutionize commerce, education, and even social interaction, the Internet will help revolutionize the business and substance of large-scale assessment.

  18. Assessment of climate change impacts on rainfall using large scale ...

    Indian Academy of Sciences (India)

    Many of the applied techniques in water resources management can be directly or indirectly influenced by ... is based on large scale climate signals data around the world. In order ... predictand relationships are often very complex. ... constraints to solve the optimization problem. ... social, and environmental sustainability.

  19. Performance of mushroom fruiting for large scale commercial production

    International Nuclear Information System (INIS)

    Mat Rosol Awang; Rosnani Abdul Rashid; Hassan Hamdani Mutaat; Mohd Meswan Maskom

    2012-01-01

    The paper describes the determination of mushroom fruiting yield, which is vital to the economics of mushroom production. Consistent mushroom yields enable revenues, and hence profitability, to be estimated. Many growers have reported large variations in mushroom yields across different production runs. To assess such claims, we ran four batches of mushroom fruiting; the fruiting body production performance is presented.

  20. Enabling High Performance Large Scale Dense Problems through KBLAS

    KAUST Repository

    Abdelfattah, Ahmad

    2014-05-04

    KBLAS (KAUST BLAS) is a small library that provides highly optimized BLAS routines on systems accelerated with GPUs. KBLAS is entirely written in CUDA C, and targets NVIDIA GPUs with compute capability 2.0 (Fermi) or higher. The current focus is on level-2 BLAS routines, namely the general matrix-vector multiplication (GEMV) kernel and the symmetric/hermitian matrix-vector multiplication (SYMV/HEMV) kernel. KBLAS provides these two kernels in all four precisions (s, d, c, and z), with support for multi-GPU systems. Through advanced optimization techniques that target latency hiding and pushing memory bandwidth to the limit, KBLAS outperforms state-of-the-art kernels by 20-90%. Competitors include CUBLAS-5.5, MAGMABLAS-1.4.0, and CULA R17. The SYMV/HEMV kernel from KBLAS has been adopted by NVIDIA, and should appear in CUBLAS-6.0. KBLAS has been used in large scale simulations of multi-object adaptive optics.

  1. Matrix Sampling of Items in Large-Scale Assessments

    Directory of Open Access Journals (Sweden)

    Ruth A. Childs

    2003-07-01

    Matrix sampling of items, that is, division of a set of items into different versions of a test form, is used by several large-scale testing programs. Like other test designs, matrixed designs have both advantages and disadvantages. For example, testing time per student is less than if each student received all the items, but the comparability of student scores may decrease. Also, curriculum coverage is maintained, but reporting of scores becomes more complex. In this paper, matrixed designs are compared with more traditional designs in nine categories of costs: development costs, materials costs, administration costs, educational costs, scoring costs, reliability costs, comparability costs, validity costs, and reporting costs. In choosing among test designs, a testing program should examine the costs in light of its mandate(s), the content of the tests, and the financial resources available, among other considerations.
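
    A minimal sketch of the matrix-sampling idea described above: the item pool is divided into disjoint forms so each student answers only a fraction of the items while the program still covers the whole pool (round-robin assignment is an illustrative choice):

    ```python
    def matrix_sample(items: list[str], n_forms: int) -> list[list[str]]:
        """Divide an item pool into n_forms disjoint test forms, round-robin."""
        forms: list[list[str]] = [[] for _ in range(n_forms)]
        for i, item in enumerate(items):
            forms[i % n_forms].append(item)
        return forms

    pool = [f"item_{i:02d}" for i in range(1, 31)]   # a 30-item pool (illustrative)
    for k, form in enumerate(matrix_sample(pool, 3), start=1):
        print(f"Form {k}: {len(form)} items")         # each student sees 10 of 30 items
    ```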

  2. Stereotype Threat, Inquiring about Test Takers' Race and Gender, and Performance on Low-Stakes Tests in a Large-Scale Assessment. Research Report. ETS RR-15-02

    Science.gov (United States)

    Stricker, Lawrence J.; Rock, Donald A.; Bridgeman, Brent

    2015-01-01

    This study explores stereotype threat on low-stakes tests used in a large-scale assessment, math and reading tests in the Education Longitudinal Study of 2002 (ELS). Issues identified in laboratory research (though not observed in studies of high-stakes tests) were assessed: whether inquiring about their race and gender is related to the…

  3. Toward Instructional Leadership: Principals' Perceptions of Large-Scale Assessment in Schools

    Science.gov (United States)

    Prytula, Michelle; Noonan, Brian; Hellsten, Laurie

    2013-01-01

    This paper describes a study of the perceptions that Saskatchewan school principals have regarding large-scale assessment reform and their perceptions of how assessment reform has affected their roles as principals. The findings revealed that large-scale assessments, especially provincial assessments, have affected the principal in Saskatchewan…

  4. Ownership and firm performance after large-scale privatization

    Czech Academy of Sciences Publication Activity Database

    Kočenda, Evžen; Švejnar, Jan

    No. 4143 (2003), pp. 1-36. ISSN 0265-8003. Institutional research plan: CEZ:AV0Z7085904. Keywords: industrial organization; ownership; performance and privatization. Subject RIV: AH - Economics. www.cepr.org/pubs/dps/DP4143.asp

  5. Assessment of present and future large-scale semiconductor detector systems

    International Nuclear Information System (INIS)

    Spieler, H.G.; Haller, E.E.

    1984-11-01

    The performance of large-scale semiconductor detector systems is assessed with respect to their theoretical potential and to the practical limitations imposed by processing techniques, readout electronics and radiation damage. In addition to devices which detect reaction products directly, the analysis includes photodetectors for scintillator arrays. Beyond present technology, we also examine currently evolving structures and techniques which show potential for producing practical devices in the foreseeable future.

  6. Large-Scale Assessment, Rationality, and Scientific Management: The Case of No Child Left Behind

    Science.gov (United States)

    Roach, Andrew T.; Frank, Jennifer

    2007-01-01

    This article examines the ways in which NCLB and the movement towards large-scale assessment systems are based on Weber's concept of formal rationality and the tradition of scientific management. Building on these ideas, the authors use Ritzer's McDonaldization thesis to examine some of the core features of large-scale assessment and accountability…

  7. Quality Control Charts in Large-Scale Assessment Programs

    Science.gov (United States)

    Schafer, William D.; Coverdale, Bradley J.; Luxenberg, Harlan; Jin, Ying

    2011-01-01

    There are relatively few examples of quantitative approaches to quality control in educational assessment and accountability contexts. Among the several techniques that are used in other fields, Shewart charts have been found in a few instances to be applicable in educational settings. This paper describes Shewart charts and gives examples of how…

  8. Accuracy assessment of planimetric large-scale map data for decision-making

    Directory of Open Access Journals (Sweden)

    Doskocz Adam

    2016-06-01

    This paper presents decision-making risk estimation based on planimetric large-scale map data, i.e. data sets or databases useful for creating planimetric maps at scales of 1:5,000 or larger. The studies were conducted on four data sets of large-scale map data. Errors of map data were used for a risk assessment of decision-making about the localization of objects, e.g. for land-use planning in the realization of investments. An analysis was performed for a large statistical sample set of shift vectors of control points, which were identified with the position errors of these points (errors of map data).
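
    Treating control-point shift vectors as position errors suggests summary statistics such as the planimetric RMSE; a minimal sketch with simulated shift components (the paper's actual data and error model are not reproduced):

    ```python
    import numpy as np

    def positional_rmse(dx: np.ndarray, dy: np.ndarray) -> float:
        """Root-mean-square planimetric error from control-point shift vectors (m)."""
        return float(np.sqrt(np.mean(dx ** 2 + dy ** 2)))

    # Simulated shift vectors between mapped and surveyed positions -- hypothetical values.
    rng = np.random.default_rng(2)
    dx = rng.normal(0.0, 0.3, size=200)
    dy = rng.normal(0.0, 0.3, size=200)
    print(f"planimetric RMSE = {positional_rmse(dx, dy):.2f} m")
    ```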

  9. The use of test scores from large-scale assessment surveys: psychometric and statistical considerations

    Directory of Open Access Journals (Sweden)

    Henry Braun

    2017-11-01

    Background: Economists are making increasing use of measures of student achievement obtained through large-scale survey assessments such as NAEP, TIMSS, and PISA. The construction of these measures, employing plausible value (PV) methodology, is quite different from that of the more familiar test scores associated with assessments such as the SAT or ACT. These differences have important implications both for utilization and interpretation. Although much has been written about PVs, it appears that there are still misconceptions about whether and how to employ them in secondary analyses. Methods: We address a range of technical issues, including those raised in a recent article that was written to inform economists using these databases. First, an extensive review of the relevant literature was conducted, with particular attention to key publications that describe the derivation and psychometric characteristics of such achievement measures. Second, a simulation study was carried out to compare the statistical properties of estimates based on the use of PVs with those based on other, commonly used methods. Results: It is shown, through both theoretical analysis and simulation, that under fairly general conditions appropriate use of PVs yields approximately unbiased estimates of model parameters in regression analyses of large scale survey data. The superiority of the PV methodology is particularly evident when measures of student achievement are employed as explanatory variables. Conclusions: The PV methodology used to report student test performance in large scale surveys remains the state of the art for secondary analyses of these databases.
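
    The core of appropriate PV use is re-running the analysis once per plausible value and pooling with Rubin's combining rules; a minimal sketch (the estimates below are illustrative, and survey replicate weights are omitted):

    ```python
    import numpy as np

    def pool_plausible_values(estimates, sampling_vars):
        """Rubin's rules: pool one estimate per plausible value into a single
        estimate and standard error that reflects measurement uncertainty."""
        m = len(estimates)
        q_bar = np.mean(estimates)               # pooled point estimate
        u_bar = np.mean(sampling_vars)           # average within-PV sampling variance
        b = np.var(estimates, ddof=1)            # between-PV (imputation) variance
        total_var = u_bar + (1.0 + 1.0 / m) * b
        return float(q_bar), float(np.sqrt(total_var))

    # E.g. a regression coefficient estimated once for each of five PVs (made-up numbers).
    estimate, se = pool_plausible_values([0.42, 0.45, 0.40, 0.44, 0.43],
                                         [0.010, 0.011, 0.010, 0.012, 0.010])
    print(estimate, se)
    ```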

  10. DECOVALEX III/BENCHPAR PROJECTS. Approaches to Upscaling Thermal-Hydro-Mechanical Processes in a Fractured Rock Mass and its Significance for Large-Scale Repository Performance Assessment. Summary of Findings. Report of BMT2/WP3

    International Nuclear Information System (INIS)

    Andersson, Johan; Staub, Isabelle; Knight, Les

    2005-02-01

    The Benchmark Test 2 of DECOVALEX III and Work Package 3 of BENCHPAR concern the upscaling of Thermal (T), Hydrological (H) and Mechanical (M) processes in a fractured rock mass and its significance for large-scale repository performance assessment. The work is primarily concerned with the extent to which various thermo-hydro-mechanical couplings in a fractured rock mass adjacent to a repository are significant in terms of the solute transport typically calculated in large-scale repository performance assessments. Since the presence of even quite small fractures may control the hydraulic, mechanical and coupled hydromechanical behaviour of the rock mass, a key aim of the work has been to explore the extent to which these can be upscaled and represented by 'equivalent' continuum properties appropriate for PA calculations. From these general aims the BMT was set up as a numerical study of a large scale reference problem. Analysing this reference problem should: help explore how different means of simplifying the geometrical detail of a site, with their implications for model parameters ('upscaling'), impact model predictions of relevance to repository performance; explore to what extent the THM coupling needs to be considered in relation to PA measures; and compare the uncertainties in upscaling (both uncertainty in how to upscale and uncertainty that arises from the upscaling process) and in the consideration of THM couplings with the inherent uncertainty and spatial variability of the site-specific data. Furthermore, it has been an essential component of the work that individual teams not only produce numerical results but are forced to make their own judgements and to provide the proper justification for their conclusions based on their analysis. It should also be understood that conclusions drawn will partly be specific to the problem analysed, in particular as it mainly concerns a 2D application. This means that specific conclusions may have limited applicability to real problems in

  11. DECOVALEX III/BENCHPAR PROJECTS. Approaches to Upscaling Thermal-Hydro-Mechanical Processes in a Fractured Rock Mass and its Significance for Large-Scale Repository Performance Assessment. Summary of Findings. Report of BMT2/WP3

    Energy Technology Data Exchange (ETDEWEB)

    Andersson, Johan (comp.) [JA Streamflow AB, Aelvsjoe (Sweden); Staub, Isabelle (comp.) [Golder Associates AB, Stockholm (Sweden); Knight, Les (comp.) [Nirex UK Ltd, Oxon (United Kingdom)

    2005-02-15

    The Benchmark Test 2 of DECOVALEX III and Work Package 3 of BENCHPAR concern the upscaling of Thermal (T), Hydrological (H) and Mechanical (M) processes in a fractured rock mass and its significance for large-scale repository performance assessment. The work is primarily concerned with the extent to which various thermo-hydro-mechanical couplings in a fractured rock mass adjacent to a repository are significant in terms of the solute transport typically calculated in large-scale repository performance assessments. Since the presence of even quite small fractures may control the hydraulic, mechanical and coupled hydromechanical behaviour of the rock mass, a key aim of the work has been to explore the extent to which these can be upscaled and represented by 'equivalent' continuum properties appropriate for PA calculations. From these general aims the BMT was set up as a numerical study of a large scale reference problem. Analysing this reference problem should: help explore how different means of simplifying the geometrical detail of a site, with their implications for model parameters ('upscaling'), impact model predictions of relevance to repository performance; explore to what extent the THM coupling needs to be considered in relation to PA measures; and compare the uncertainties in upscaling (both uncertainty in how to upscale and uncertainty that arises from the upscaling process) and in the consideration of THM couplings with the inherent uncertainty and spatial variability of the site-specific data. Furthermore, it has been an essential component of the work that individual teams not only produce numerical results but are forced to make their own judgements and to provide the proper justification for their conclusions based on their analysis. It should also be understood that conclusions drawn will partly be specific to the problem analysed, in particular as it mainly concerns a 2D application. This means that specific conclusions may have limited applicability

  12. Assessment of renewable energy resources potential for large scale and standalone applications in Ethiopia

    NARCIS (Netherlands)

    Tucho, Gudina Terefe; Weesie, Peter D.M.; Nonhebel, Sanderine

    2014-01-01

    This study aims to determine the contribution of renewable energy to large scale and standalone applications in Ethiopia. The assessment starts by determining the present energy system and the available potentials. Subsequently, the contribution of the available potentials for large scale and

  13. Overview of large scale experiments performed within the LBB project in the Czech Republic

    Energy Technology Data Exchange (ETDEWEB)

    Kadecka, P.; Lauerova, D. [Nuclear Research Institute, Rez (Czechoslovakia)

    1997-04-01

    In recent years, NRI Rez has performed LBB analyses of safety-significant primary circuit piping of NPPs in the Czech and Slovak Republics. The analyses covered NPPs with WWER 440 Type 230 and 213 and WWER 1000 Type 320 reactors. Within the relevant LBB projects, undertaken with the aim of proving fulfilment of the LBB requirements, a series of large scale experiments were performed. The goal of these experiments was to verify the properties of the selected components and to prove the quality and/or conservatism of the assessments used in the LBB analyses. In this poster, a brief overview of the experiments performed in the Czech Republic under the guidance of NRI Rez is presented.

  14. Environmental performance evaluation of large-scale municipal solid waste incinerators using data envelopment analysis

    International Nuclear Information System (INIS)

    Chen, H.-W.; Chang, N.-B.; Chen, J.-C.; Tsai, S.-J.

    2010-01-01

    Given insufficient land resources, incinerators are considered in many countries, such as Japan and Germany, to be the major technology in a waste management scheme capable of dealing with the increasing demand for municipal and industrial solid waste treatment in urban regions. The evaluation of these municipal incinerators in terms of secondary pollution potential, cost-effectiveness, and operational efficiency has become a new focus in the highly interdisciplinary area of production economics, systems analysis, and waste management. This paper aims to demonstrate the application of data envelopment analysis (DEA), a production economics tool, to evaluate performance-based efficiencies of 19 large-scale municipal incinerators in Taiwan with different operational conditions. A 4-year operational data set from 2002 to 2005 was collected in support of DEA modeling, using Monte Carlo simulation to outline the possibility distributions of the operational efficiency of these incinerators. Uncertainty analysis using the Monte Carlo simulation provides a balance between the simplifications of our analysis and the soundness of capturing the essential random features that complicate solid waste management systems. To cope with future challenges, efforts in the DEA modeling, systems analysis, and prediction of the performance of large-scale municipal solid waste incinerators under normal operation and special conditions were directed toward generating a compromise assessment procedure. Our research findings will eventually lead to the identification of the optimal management strategies for promoting the quality of solid waste incineration, not only in Taiwan, but also elsewhere in the world.

  15. Environmental performance evaluation of large-scale municipal solid waste incinerators using data envelopment analysis.

    Science.gov (United States)

    Chen, Ho-Wen; Chang, Ni-Bin; Chen, Jeng-Chung; Tsai, Shu-Ju

    2010-07-01

    Given insufficient land resources, incinerators are considered in many countries, such as Japan and Germany, to be the major technology in a waste management scheme capable of dealing with the increasing demand for municipal and industrial solid waste treatment in urban regions. The evaluation of these municipal incinerators in terms of secondary pollution potential, cost-effectiveness, and operational efficiency has become a new focus in the highly interdisciplinary area of production economics, systems analysis, and waste management. This paper aims to demonstrate the application of data envelopment analysis (DEA), a production economics tool, to evaluate performance-based efficiencies of 19 large-scale municipal incinerators in Taiwan with different operational conditions. A 4-year operational data set from 2002 to 2005 was collected in support of DEA modeling, using Monte Carlo simulation to outline the possibility distributions of the operational efficiency of these incinerators. Uncertainty analysis using the Monte Carlo simulation provides a balance between the simplifications of our analysis and the soundness of capturing the essential random features that complicate solid waste management systems. To cope with future challenges, efforts in the DEA modeling, systems analysis, and prediction of the performance of large-scale municipal solid waste incinerators under normal operation and special conditions were directed toward generating a compromise assessment procedure. Our research findings will eventually lead to the identification of the optimal management strategies for promoting the quality of solid waste incineration, not only in Taiwan, but also elsewhere in the world.
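
    Neither record spells out the DEA formulation; below is a minimal sketch of the classic input-oriented CCR envelopment model, solved as one linear program per incinerator. The data are made up, and the paper's Monte Carlo possibility distributions are omitted:

    ```python
    import numpy as np
    from scipy.optimize import linprog

    def dea_ccr_input(X: np.ndarray, Y: np.ndarray) -> np.ndarray:
        """Input-oriented CCR efficiency per DMU. X: (n, m) inputs; Y: (n, s) outputs."""
        n, m = X.shape
        s = Y.shape[1]
        scores = np.empty(n)
        for o in range(n):
            # Decision variables: [theta, lambda_1, ..., lambda_n]; minimise theta.
            c = np.zeros(1 + n)
            c[0] = 1.0
            A_inputs = np.hstack([-X[o].reshape(m, 1), X.T])    # sum(lam*x) <= theta*x_o
            A_outputs = np.hstack([np.zeros((s, 1)), -Y.T])     # sum(lam*y) >= y_o
            res = linprog(c,
                          A_ub=np.vstack([A_inputs, A_outputs]),
                          b_ub=np.concatenate([np.zeros(m), -Y[o]]),
                          bounds=[(0, None)] * (1 + n), method="highs")
            scores[o] = res.x[0]                                # 1.0 means efficient
        return scores

    # Made-up data: 5 incinerators, inputs [operating cost, labour], output [waste treated].
    X = np.array([[100, 20], [120, 25], [90, 18], [150, 30], [110, 22]], dtype=float)
    Y = np.array([[500], [520], [480], [600], [530]], dtype=float)
    print(dea_ccr_input(X, Y).round(3))
    ```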

  16. Optimizing Prediction Using Bayesian Model Averaging: Examples Using Large-Scale Educational Assessments.

    Science.gov (United States)

    Kaplan, David; Lee, Chansoon

    2018-01-01

    This article provides a review of Bayesian model averaging as a means of optimizing the predictive performance of common statistical models applied to large-scale educational assessments. The Bayesian framework recognizes that in addition to parameter uncertainty, there is uncertainty in the choice of models themselves. A Bayesian approach to addressing the problem of model uncertainty is the method of Bayesian model averaging. Bayesian model averaging searches the space of possible models for a set of submodels that satisfy certain scientific principles and then averages the coefficients across these submodels weighted by each model's posterior model probability (PMP). Using the weighted coefficients for prediction has been shown to yield optimal predictive performance according to certain scoring rules. We demonstrate the utility of Bayesian model averaging for prediction in education research with three examples: Bayesian regression analysis, Bayesian logistic regression, and a recently developed approach for Bayesian structural equation modeling. In each case, the model-averaged estimates are shown to yield better prediction of the outcome of interest than any submodel based on predictive coverage and the log-score rule. Implications for the design of large-scale assessments when the goal is optimal prediction in a policy context are discussed.
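
    A minimal sketch of the averaging step at the heart of BMA, using the common BIC approximation to posterior model probabilities (the article's search over model space, e.g. Occam's window, is not reproduced):

    ```python
    import numpy as np

    def posterior_model_probs(bics: np.ndarray) -> np.ndarray:
        """Approximate PMPs from BIC: PMP_k proportional to exp(-BIC_k / 2)."""
        delta = bics - bics.min()        # subtract the minimum for numerical stability
        weights = np.exp(-0.5 * delta)
        return weights / weights.sum()

    # Three candidate regressions: illustrative BICs and coefficients for one predictor.
    bics = np.array([1012.3, 1010.1, 1015.8])
    coefs = np.array([0.31, 0.28, 0.35])
    pmp = posterior_model_probs(bics)
    print(pmp.round(3), float(pmp @ coefs))   # model-averaged coefficient
    ```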

  17. Performance of automatic generation control mechanisms with large-scale wind power

    Energy Technology Data Exchange (ETDEWEB)

    Ummels, B.C.; Gibescu, M.; Paap, G.C. [Delft Univ. of Technology (Netherlands); Kling, W.L. [Transmission Operations Department of TenneT bv (Netherlands)

    2007-11-15

    The unpredictability and variability of wind power increasingly challenges real-time balancing of supply and demand in electric power systems. In liberalised markets, balancing is a responsibility jointly held by the TSO (real-time power balancing) and PRPs (energy programs). In this paper, a procedure is developed for the simulation of power system balancing and the assessment of AGC performance in the presence of large-scale wind power, using the Dutch control zone as a case study. The simulation results show that the performance of existing AGC-mechanisms is adequate for keeping ACE within acceptable bounds. At higher wind power penetrations, however, the capabilities of the generation mix are increasingly challenged and additional reserves are required at the same level.
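
    For reference, the area control error (ACE) that AGC regulates is conventionally defined as

    $$ \mathrm{ACE} = \Delta P_{\mathrm{tie}} + B\,\Delta f $$

    where $\Delta P_{\mathrm{tie}}$ is the deviation of net tie-line interchange from its schedule, $\Delta f$ the frequency deviation, and $B$ the control area's frequency bias (MW/Hz). Wind forecast errors enter through $\Delta P_{\mathrm{tie}}$ and $\Delta f$, which is why higher penetrations demand additional reserves.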

  18. Secondary Analysis of Large-Scale Assessment Data: An Alternative to Variable-Centred Analysis

    Science.gov (United States)

    Chow, Kui Foon; Kennedy, Kerry John

    2014-01-01

    International large-scale assessments are now part of the educational landscape in many countries and often feed into major policy decisions. Yet, such assessments also provide data sets for secondary analysis that can address key issues of concern to educators and policymakers alike. Traditionally, such secondary analyses have been based on a…

  19. Analysis of environmental impact assessment for large-scale X-ray medical equipments

    International Nuclear Information System (INIS)

    Fu Jin; Pei Chengkai

    2011-01-01

    Based on an Environmental Impact Assessment (EIA) project, this paper sets out the essential points of analysis in the EIA of sales projects for large-scale X-ray medical equipment, and provides the environmental impact analysis procedure and dose estimation methods under normal and accident conditions. The key points of EIA for such sales projects include determining the pollution factors and management limit values according to the project's actual situation, and using various assessment and prediction methods, such as analogy, actual measurement and calculation, to analyze, monitor, calculate and predict the pollution under normal and accident conditions.

  20. Understanding water delivery performance in a large-scale irrigation system in Peru

    NARCIS (Netherlands)

    Vos, J.M.C.

    2005-01-01

    During a two-year field study, the performance of water delivery was evaluated in a large-scale irrigation system on the north coast of Peru. Flow measurements were carried out along the main canals, along two secondary canals, and in two tertiary blocks in the Chancay-Lambayeque irrigation

  1. Explore the Usefulness of Person-Fit Analysis on Large-Scale Assessment

    Science.gov (United States)

    Cui, Ying; Mousavi, Amin

    2015-01-01

    The current study applied the person-fit statistic, l_z, to data from a Canadian provincial achievement test to explore the usefulness of conducting person-fit analysis on large-scale assessments. Item parameter estimates were compared before and after the misfitting student responses, as identified by l_z, were removed. The…
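
    A minimal sketch of the l_z statistic (the standardized log-likelihood person-fit index of Drasgow, Levine, and Williams), computed from a response vector and model-implied success probabilities; the probabilities below are illustrative:

    ```python
    import numpy as np

    def lz_statistic(u: np.ndarray, p: np.ndarray) -> float:
        """Standardized log-likelihood person-fit statistic l_z.
        u: 0/1 item responses; p: model-implied P(correct) for each item."""
        log_lik = np.sum(u * np.log(p) + (1 - u) * np.log(1 - p))
        expectation = np.sum(p * np.log(p) + (1 - p) * np.log(1 - p))
        variance = np.sum(p * (1 - p) * np.log(p / (1 - p)) ** 2)
        return float((log_lik - expectation) / np.sqrt(variance))

    # An examinee who misses easy items but answers hard ones: strongly negative l_z.
    p = np.array([0.90, 0.85, 0.70, 0.50, 0.30, 0.20])
    u = np.array([0, 0, 1, 0, 1, 1])
    print(round(lz_statistic(u, p), 2))   # large negative values flag misfit
    ```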

  2. Balancing Tensions in Educational Policy Reforms: Large-Scale Implementation of Assessment for Learning in Norway

    Science.gov (United States)

    Hopfenbeck, Therese N.; Flórez Petour, María Teresa; Tolo, Astrid

    2015-01-01

    This study investigates how different stakeholders in Norway experienced a government-initiated, large-scale policy implementation programme on "Assessment for Learning" ("AfL"). Data were collected through 58 interviews with stakeholders in charge of the policy; Ministers of Education and members of the Directorate of…

  3. How to Measure and Explain Achievement Change in Large-Scale Assessments: A Rejoinder

    Science.gov (United States)

    Hickendorff, Marian; Heiser, Willem J.; van Putten, Cornelis M.; Verhelst, Norman D.

    2009-01-01

    In this rejoinder, we discuss substantive and methodological validity issues of large-scale assessments of trends in student achievement, commenting on the discussion paper by Van den Heuvel-Panhuizen, Robitzsch, Treffers, and Koller (2009). We focus on methodological challenges in deciding what to measure, how to measure it, and how to foster…

  4. Estimating the Effectiveness of Special Education Using Large-Scale Assessment Data

    Science.gov (United States)

    Ewing, Katherine Anne

    2009-01-01

    The inclusion of students with disabilities in large scale assessment and accountability programs has provided new opportunities to examine the impact of special education services on student achievement. Hanushek, Kain, and Rivkin (1998, 2002) evaluated the effectiveness of special education programs by examining students' gains on a large-scale…

  5. A conceptual analysis of standard setting in large-scale assessments

    NARCIS (Netherlands)

    van der Linden, Willem J.

    1994-01-01

    Elements of arbitrariness in the standard setting process are explored, and an alternative to the use of cut scores is presented. The first part of the paper analyzes the use of cut scores in large-scale assessments, discussing three different functions: (1) cut scores define the qualifications used

  6. Large-scale hydrology in Europe : observed patterns and model performance

    Energy Technology Data Exchange (ETDEWEB)

    Gudmundsson, Lukas

    2011-06-15

    In a changing climate, terrestrial water storages are of great interest as water availability impacts key aspects of ecosystem functioning. Thus, a better understanding of the variations of wet and dry periods will contribute to fully grasping processes of the earth system such as nutrient cycling and vegetation dynamics. Currently, river runoff from small, nearly natural, catchments is one of the few variables of the terrestrial water balance that is regularly monitored with detailed spatial and temporal coverage on large scales. River runoff, therefore, provides a foundation to approach European hydrology with respect to observed patterns on large scales, and with regard to the ability of models to capture these. The analysis of observed river flow from small catchments focused on the identification and description of spatial patterns of simultaneous temporal variations of runoff. These are dominated by large-scale variations of climatic variables but also altered by catchment processes. It was shown that time series of annual low, mean and high flows follow the same atmospheric drivers. The observation that high flows are more closely coupled to large scale atmospheric drivers than low flows indicates the increasing influence of catchment properties on runoff under dry conditions. Further, it was shown that the low-frequency variability of European runoff is dominated by two opposing centres of simultaneous variations, such that dry years in the north are accompanied by wet years in the south. Large-scale hydrological models are simplified representations of our current perception of the terrestrial water balance on large scales. Quantification of the models' strengths and weaknesses is the prerequisite for a reliable interpretation of simulation results. Model evaluations may also detect shortcomings in model assumptions and thus enable a refinement of the current perception of hydrological systems. The ability of a multi-model ensemble of nine large-scale

  7. Towards large scale stochastic rainfall models for flood risk assessment in trans-national basins

    Science.gov (United States)

    Serinaldi, F.; Kilsby, C. G.

    2012-04-01

    While extensive research has been devoted to rainfall-runoff modelling for risk assessment in small and medium size watersheds, less attention has been paid, so far, to large scale trans-national basins, where flood events have severe societal and economic impacts with magnitudes quantified in billions of Euros. As an example, in the April 2006 flood events along the Danube basin at least 10 people lost their lives and up to 30 000 people were displaced, with overall damages estimated at more than half a billion Euros. In this context, refined analytical methods are fundamental to improve the risk assessment and, in turn, the design of structural and non-structural measures of protection, such as hydraulic works and insurance/reinsurance policies. Since flood events are mainly driven by exceptional rainfall events, suitable characterization and modelling of the space-time properties of rainfall fields is a key issue in performing a reliable flood risk analysis based on alternative precipitation scenarios to be fed into a new generation of large scale rainfall-runoff models. Ultimately, this approach should be extended to a global flood risk model. However, as the need for rainfall models able to account for and simulate the spatio-temporal properties of rainfall fields over large areas is rather new, the development of new rainfall simulation frameworks is a challenging task that faces the problem of overcoming the drawbacks of the existing modelling schemes (devised for smaller spatial scales) while keeping their desirable properties. In this study, we critically summarize the most widely used approaches for rainfall simulation. Focusing on stochastic approaches, we stress the importance of introducing suitable climate forcings in these simulation schemes in order to account for the physical coherence of rainfall fields over wide areas. Based on preliminary considerations, we suggest a modelling framework relying on the Generalized Additive Models for Location, Scale

  8. Google Street View as an alternative method to car surveys in large-scale vegetation assessments.

    Science.gov (United States)

    Deus, Ernesto; Silva, Joaquim S; Catry, Filipe X; Rocha, Miguel; Moreira, Francisco

    2015-10-01

    Car surveys (CS) are a common method for assessing the distribution of alien invasive plants. Google Street View (GSV), a free-access web technology where users may experience a virtual travel along roads, has been suggested as a cost-effective alternative to car surveys. We tested if we could replicate the results from a countrywide survey conducted by car in Portugal using GSV as a remote sensing tool, aiming at assessing the distribution of Eucalyptus globulus Labill. wildlings on roadsides adjacent to eucalypt stands. Georeferenced points gathered along CS were used to create road transects visible as lines overlapping the road in GSV environment, allowing surveying the same sampling areas using both methods. This paper presents the results of the comparison between the two methods. Both methods produced similar models of plant abundance, selecting the same explanatory variables, in the same hierarchical order of importance and depicting a similar influence on plant abundance. Even though the GSV model had a lower performance and the GSV survey detected fewer plants, additional variables collected exclusively with GSV improved model performance and provided a new insight into additional factors influencing plant abundance. The survey using GSV required ca. 9 % of the funds and 62 % of the time needed to accomplish the CS. We conclude that GSV may be a cost-effective alternative to CS. We discuss some advantages and limitations of GSV as a survey method. We forecast that GSV may become a widespread tool in road ecology, particularly in large-scale vegetation assessments.

  9. Performance modeling of hybrid MPI/OpenMP scientific applications on large-scale multicore supercomputers

    KAUST Repository

    Wu, Xingfu; Taylor, Valerie

    2013-01-01

    In this paper, we present a performance modeling framework based on memory bandwidth contention time and a parameterized communication model to predict the performance of OpenMP, MPI and hybrid applications with weak scaling on three large-scale multicore supercomputers: IBM POWER4, POWER5+ and BlueGene/P, and we analyze the performance of these MPI, OpenMP and hybrid applications. We use STREAM memory benchmarks and Intel's MPI benchmarks to provide initial performance analysis and model validation of MPI and OpenMP applications on these multicore supercomputers, because the measured sustained memory bandwidth can provide insight into the memory bandwidth that a system should sustain on scientific applications with the same amount of workload per core. In addition to using these benchmarks, we also use a weak-scaling hybrid MPI/OpenMP large-scale scientific application, the Gyrokinetic Toroidal Code (GTC) in magnetic fusion, to validate our performance model of the hybrid application on these multicore supercomputers. The validation results for our performance modeling method show an error rate of less than 7.77% in predicting the performance of hybrid MPI/OpenMP GTC on up to 512 cores on these multicore supercomputers. © 2013 Elsevier Inc.
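
    As a minimal sketch of the kind of model described above — runtime decomposed into computation, a memory-bandwidth contention term and a parameterized communication term — consider the following; the functional form and all parameter names are illustrative assumptions, not the authors' calibrated model:

      # Hedged sketch of a bandwidth-contention performance model (Python).
      def predicted_runtime(t_compute, bytes_moved, cores_per_node,
                            stream_bw_per_core, n_messages, latency,
                            msg_bytes, link_bw):
          """Per-process runtime of a weak-scaling hybrid MPI/OpenMP code."""
          # Memory contention: sustained (e.g. STREAM-measured) bandwidth per
          # core shrinks as more cores share a node's memory subsystem.
          t_memory = bytes_moved / stream_bw_per_core(cores_per_node)
          # Parameterized communication: latency plus bandwidth cost per message.
          t_comm = n_messages * (latency + msg_bytes / link_bw)
          return t_compute + t_memory + t_comm

      # e.g. with a measured contention curve: stream_bw_per_core = lambda c: 4e9 / c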

  10. Performance modeling of hybrid MPI/OpenMP scientific applications on large-scale multicore supercomputers

    KAUST Repository

    Wu, Xingfu

    2013-12-01

    In this paper, we present a performance modeling framework based on memory bandwidth contention time and a parameterized communication model to predict the performance of OpenMP, MPI and hybrid applications with weak scaling on three large-scale multicore supercomputers: IBM POWER4, POWER5+ and BlueGene/P, and we analyze the performance of these MPI, OpenMP and hybrid applications. We use STREAM memory benchmarks and Intel's MPI benchmarks to provide initial performance analysis and model validation of MPI and OpenMP applications on these multicore supercomputers, because the measured sustained memory bandwidth can provide insight into the memory bandwidth that a system should sustain on scientific applications with the same amount of workload per core. In addition to using these benchmarks, we also use a weak-scaling hybrid MPI/OpenMP large-scale scientific application, the Gyrokinetic Toroidal Code (GTC) in magnetic fusion, to validate our performance model of the hybrid application on these multicore supercomputers. The validation results for our performance modeling method show an error rate of less than 7.77% in predicting the performance of hybrid MPI/OpenMP GTC on up to 512 cores on these multicore supercomputers. © 2013 Elsevier Inc.

  11. Feasibility Assessment of Using Power Plant Waste Heat in Large Scale Horticulture Facility Energy Supply Systems

    Directory of Open Access Journals (Sweden)

    Min Gyung Yu

    2016-02-01

    Recently, the Korean government has been carrying out projects to construct several large-scale horticulture facilities. However, it is difficult for an energy supply to operate stably and economically with only a conventional fossil-fuel boiler system. For this reason, several unused energy sources have become attractive, and power plant waste heat was found to have the greatest potential for application in this scenario. In this study, we performed a feasibility assessment of power plant waste heat as an energy source for horticulture facilities. As a result, it was confirmed that the energy potential is sufficient for waste heat to supply the assumed areas. In Dangjin, a horticultural area of 500 ha could be constructed by utilizing 20% of the energy reserves. In Hadong, a horticulture facility of 260 ha could be set up with 7.4% of the energy reserves. In Youngdong, an assumed area of 65 ha could be built utilizing about 19% of the energy reserves. Furthermore, the payback period was calculated in order to evaluate the economic feasibility compared with a conventional system. The initial investment costs can be recovered through the approximately 83% reduction in annual operating costs.

  12. Transfrontier consequences to the population of Greece of large scale nuclear accidents: a preliminary assessment

    International Nuclear Information System (INIS)

    Kollas, J.G.; Catsaros, Nicolas.

    1985-06-01

    In this report the consequences to the population of Greece from hypothetical large-scale nuclear accidents at the Kozlodui (Bulgaria) nuclear power station are estimated under some simplifying assumptions. Three different hypothetical accident scenarios - the most serious for pressurized water reactors - are examined. The analysis is performed with the current Greek version of the CRAC2 code and includes health and economic consequences to the population of Greece. (author)

  13. Zone modelling of the thermal performances of a large-scale bloom reheating furnace

    International Nuclear Information System (INIS)

    Tan, Chee-Keong; Jenkins, Joana; Ward, John; Broughton, Jonathan; Heeley, Andy

    2013-01-01

    This paper describes the development and comparison of two- (2D) and three-dimensional (3D) mathematical models, based on the zone method of radiation analysis, to simulate the thermal performance of a large bloom reheating furnace. The modelling approach adopted in the current paper differs from previous work in that it takes into account the net radiation interchanges between the top and bottom firing sections of the furnace and also allows for enthalpy exchange due to the flows of combustion products between these sections. The models were initially validated at two different furnace throughput rates using experimental and plant model data supplied by Tata Steel. The results to date demonstrate that the model predictions are in good agreement with the measured heating profiles of the blooms encountered in the actual furnace. No significant differences were found between the predictions of the 2D and 3D models. Following the validation, the 2D model was then used to assess the furnace response to changing throughput rate. It was found that the furnace response to a changing throughput rate influences the settling time of the furnace to the next steady-state operation. Overall, the current work demonstrates the feasibility and practicality of zone modelling and its potential for incorporation into a model-based furnace control system. Highlights: 2D and 3D zone models of a large-scale bloom reheating furnace. The models were validated with experimental and plant model data. The transient furnace response to changing throughput rates was examined. No significant differences were found between the predictions of the 2D and 3D models.

  14. Fault Transient Analysis and Protection Performance Evaluation within a Large-scale PV Power Plant

    Directory of Open Access Journals (Sweden)

    Wen Jinghua

    2016-01-01

    In this paper, a short-circuit test within a large-scale PV power plant with a total capacity of 850 MWp is discussed. The fault currents supplied by the PV generation units are presented and analysed. Based on the observed fault behaviour, the existing protection coordination principles within the plant are reviewed and their performance is evaluated. Moreover, these protections are examined on a simulation platform under different operating situations. A simple communication-based measure is proposed to address a foreseeable problem with the current protection scheme in the PV power plant.

  15. Improving Large-scale Storage System Performance via Topology-aware and Balanced Data Placement

    Energy Technology Data Exchange (ETDEWEB)

    Wang, Feiyi [ORNL; Oral, H Sarp [ORNL; Vazhkudai, Sudharshan S [ORNL

    2014-01-01

    With the advent of big data, the I/O subsystems of large-scale compute clusters are becoming a center of focus, with more applications putting greater demands on end-to-end I/O performance. These subsystems are often complex in design, comprising multiple hardware and software layers to cope with the increasing capacity, capability and scalability requirements of data-intensive applications. The shared nature of storage resources and the intrinsic interactions across these layers make it a great challenge to realize user-level, end-to-end performance gains. We propose a topology-aware resource load balancing strategy to improve per-application I/O performance. We demonstrate the effectiveness of our algorithm on an extreme-scale compute cluster, Titan, at the Oak Ridge Leadership Computing Facility (OLCF). Our experiments with both synthetic benchmarks and a real-world application show that, even under congestion, the proposed algorithm can improve large-scale application I/O performance significantly, resulting in both reduced application run times and higher-resolution simulation runs.
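
    A toy sketch (our own construction, not the paper's algorithm) of what topology-aware, balanced placement can look like: for each stripe of a file, pick the least-loaded storage target, breaking ties in favour of targets topologically closer to the client.

      def place_stripes(n_stripes, targets, load, hops):
          """targets: list of target ids; load: {target: pending ops};
          hops: {target: network distance from the client}."""
          placement = []
          for _ in range(n_stripes):
              # Least-loaded first, nearest on the interconnect as tie-breaker.
              chosen = min(targets, key=lambda t: (load[t], hops[t]))
              load[chosen] += 1  # account for the stripe just placed
              placement.append(chosen)
          return placement

      # place_stripes(4, ["ost1", "ost2"], {"ost1": 3, "ost2": 0},
      #               {"ost1": 1, "ost2": 2}) -> ['ost2', 'ost2', 'ost2', 'ost1']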

  16. Towards Portable Large-Scale Image Processing with High-Performance Computing.

    Science.gov (United States)

    Huo, Yuankai; Blaber, Justin; Damon, Stephen M; Boyd, Brian D; Bao, Shunxing; Parvathaneni, Prasanna; Noguera, Camilo Bermudez; Chaganti, Shikha; Nath, Vishwesh; Greer, Jasmine M; Lyu, Ilwoo; French, William R; Newton, Allen T; Rogers, Baxter P; Landman, Bennett A

    2018-05-03

    High-throughput, large-scale medical image computing demands tight integration of high-performance computing (HPC) infrastructure for data storage, job distribution, and image processing. The Vanderbilt University Institute for Imaging Science (VUIIS) Center for Computational Imaging (CCI) has constructed a large-scale image storage and processing infrastructure that is composed of (1) a large-scale image database using the eXtensible Neuroimaging Archive Toolkit (XNAT), (2) a content-aware job scheduling platform using the Distributed Automation for XNAT pipeline automation tool (DAX), and (3) a wide variety of encapsulated image processing pipelines called "spiders." The VUIIS CCI medical image data storage and processing infrastructure has housed and processed nearly half a million medical image volumes with the Vanderbilt Advanced Computing Center for Research and Education (ACCRE), the HPC facility at Vanderbilt University. The initial deployment was natively deployed (i.e., direct installations on a bare-metal server) within the ACCRE hardware and software environments, which led to issues of portability and sustainability. First, it could be laborious to deploy the entire VUIIS CCI medical image data storage and processing infrastructure to another HPC center with varying hardware infrastructure, library availability, and software permission policies. Second, the spiders were not developed in an isolated manner, which has led to software dependency issues during system upgrades or remote software installation. To address such issues, herein, we describe recent innovations using containerization techniques with XNAT/DAX which are used to isolate the VUIIS CCI medical image data storage and processing infrastructure from the underlying hardware and software environments. The newly presented XNAT/DAX solution has the following new features: (1) multi-level portability from system level to the application level, (2) flexible and dynamic software

  17. An integrated model for assessing both crop productivity and agricultural water resources at a large scale

    Science.gov (United States)

    Okada, M.; Sakurai, G.; Iizumi, T.; Yokozawa, M.

    2012-12-01

    Agricultural production utilizes regional resources (e.g. river water and groundwater) as well as local resources (e.g. temperature, rainfall, solar energy). Future climate change and increasing demand due to population growth and economic development will strongly affect the availability of water resources for agricultural production. While many studies have assessed the impacts of climate change on agriculture, few studies dynamically account for changes in both water resources and crop production. This study proposes an integrated model for assessing both crop productivity and agricultural water resources at a large scale. Moreover, irrigation management in response to subseasonal variability in weather and crop growth differs for each region and each crop. To deal with such variations, we used the Markov Chain Monte Carlo technique to quantify region-specific parameters associated with crop growth and irrigation water estimation. We coupled a large-scale crop model (Sakurai et al. 2012) with a global water resources model, H08 (Hanasaki et al. 2008). The integrated model consists of five sub-models for the following processes: land surface, crop growth, river routing, reservoir operation, and anthropogenic water withdrawal. The land surface sub-model is based on a watershed hydrology model, SWAT (Neitsch et al. 2009). Surface and subsurface runoff simulated by the land surface sub-model is input to the river routing sub-model of the H08 model. The part of regional water resources available for agriculture, simulated by the H08 model, is input as irrigation water to the land surface sub-model. The timing and amount of irrigation water are simulated at a daily step. The integrated model reproduced the observed streamflow in an individual watershed and accurately reproduced the trends and interannual variations of crop yields. To demonstrate the usefulness of the integrated model, we compared two types of impact assessment of
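
    The abstract does not give the calibration details, but a minimal Metropolis sampler of the kind usually behind such Markov Chain Monte Carlo parameter estimation looks like this (the likelihood and priors are application-specific and omitted here):

      import numpy as np

      def metropolis(log_post, theta0, n_iter=10000, step=0.1, seed=0):
          """Random-walk Metropolis over region-specific parameters."""
          rng = np.random.default_rng(seed)
          theta = np.asarray(theta0, dtype=float)
          lp = log_post(theta)
          chain = []
          for _ in range(n_iter):
              prop = theta + step * rng.standard_normal(theta.shape)
              lp_prop = log_post(prop)
              # Accept with probability min(1, posterior ratio).
              if np.log(rng.random()) < lp_prop - lp:
                  theta, lp = prop, lp_prop
              chain.append(theta.copy())
          return np.array(chain)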

  18. Do large-scale assessments measure students' ability to integrate scientific knowledge?

    Science.gov (United States)

    Lee, Hee-Sun

    2010-03-01

    Large-scale assessments are used as a means to diagnose the current status of student achievement in science and to compare students across schools, states, and countries. For efficiency, multiple-choice items and dichotomously scored open-ended items are used pervasively in large-scale assessments such as the Trends in International Mathematics and Science Study (TIMSS). This study investigated how well these items measure secondary school students' ability to integrate scientific knowledge. It collected the responses of 8400 students to 116 multiple-choice and 84 open-ended items and applied an Item Response Theory analysis based on the Rasch Partial Credit Model. Results indicate that most multiple-choice items and dichotomously scored open-ended items can be used to determine whether students hold normative ideas about science topics, but cannot measure whether students integrate multiple pieces of relevant science ideas. Only when the scoring rubric is redesigned to capture the subtle nuances of students' open-ended responses do open-ended items become a valid and reliable tool for assessing students' knowledge integration ability.
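
    For reference, the Rasch Partial Credit Model used in the analysis gives the probability that an examinee with ability \theta responds in category x of item i as (standard formulation, our notation)

      P(X_i = x \mid \theta) = \frac{\exp \sum_{k=0}^{x} (\theta - \delta_{ik})}{\sum_{h=0}^{m_i} \exp \sum_{k=0}^{h} (\theta - \delta_{ik})}, \qquad \sum_{k=0}^{0} (\theta - \delta_{ik}) \equiv 0,

    where \delta_{ik} are the item step difficulties and m_i is the maximum score of item i; dichotomously scored items are the special case m_i = 1.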

  19. A feasibility assessment for incorporating of passive RHRS into large scale active PWR

    Energy Technology Data Exchange (ETDEWEB)

    Kim, S O; Sub, S Y; Kim, Y S; Chang, M H; Park, J K [Korea Atomic Energy Research Inst., Taejon (Korea, Republic of)

    1996-12-01

    A feasibility study was carried out on the possible incorporation of a passive RHRS (Residual Heat Removal System) into a large-scale active PWR plant. Four kinds of system configurations were considered. For each case, its performance and impacts on plant safety, cost, licensing, operation and maintenance were evaluated. The evaluation identified a PRHRS with a gravity feed tank as the most promising design concept. However, considering the rearrangement of structures and pipe routing inside and outside containment, it is concluded that implementation of the PRHRS concept into well-developed active plants is not desirable at present. (author). 6 refs, 7 figs, 1 tab.

  20. Assessing Programming Costs of Explicit Memory Localization on a Large Scale Shared Memory Multiprocessor

    Directory of Open Access Journals (Sweden)

    Silvio Picano

    1992-01-01

    We present detailed experimental work involving a commercially available large-scale shared memory multiple instruction stream-multiple data stream (MIMD) parallel computer having a software-controlled cache coherence mechanism. To make effective use of such an architecture, the programmer is responsible for designing the program's structure to match the underlying multiprocessor's capabilities. We describe the techniques used to exploit our multiprocessor (the BBN TC2000) on a network simulation program, showing the resulting performance gains and the associated programming costs. We show that an efficient implementation relies heavily on the user's ability to explicitly manage the memory system.

  1. A feasibility assessment for incorporating of passive RHRS into large scale active PWR

    International Nuclear Information System (INIS)

    Kim, S.O.; Sub, S.Y.; Kim, Y.S.; Chang, M.H.; Park, J.K.

    1996-01-01

    A feasibility study was carried out on the possible incorporation of a passive RHRS (Residual Heat Removal System) into a large-scale active PWR plant. Four kinds of system configurations were considered. For each case, its performance and impacts on plant safety, cost, licensing, operation and maintenance were evaluated. The evaluation identified a PRHRS with a gravity feed tank as the most promising design concept. However, considering the rearrangement of structures and pipe routing inside and outside containment, it is concluded that implementation of the PRHRS concept into well-developed active plants is not desirable at present. (author). 6 refs, 7 figs, 1 tab

  2. Large scale electrolysers

    International Nuclear Information System (INIS)

    B Bello; M Junker

    2006-01-01

    Hydrogen production by water electrolysis represents nearly 4% of world hydrogen production. Future development of hydrogen vehicles will require large quantities of hydrogen, so the installation of large-scale hydrogen production plants will be needed. In this context, the development of low-cost, large-scale electrolysers that could use 'clean power' seems necessary. ALPHEA HYDROGEN, a European network and center of expertise on hydrogen and fuel cells, performed a study in 2005 for its members to evaluate the potential of large-scale electrolysers to produce hydrogen in the future. The different electrolysis technologies were compared, and a state-of-the-art review of the electrolysis modules currently available was made. A review of the large-scale electrolysis plants that have been installed around the world was also carried out, and the main projects related to large-scale electrolysis were listed. The economics of large-scale electrolysers are discussed, and the influence of energy prices on the cost of hydrogen production by large-scale electrolysis was evaluated. (authors)
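
    As a back-of-envelope illustration of why electricity prices dominate: the electricity share of the hydrogen production cost is simply the specific energy consumption times the power price. The 50 kWh/kg figure below is a typical ballpark for alkaline electrolysers, not a number from the study.

      def h2_electricity_cost(elec_price_per_kwh, specific_energy_kwh_per_kg=50.0):
          """Electricity share of the hydrogen cost, per kg of H2."""
          return elec_price_per_kwh * specific_energy_kwh_per_kg

      # e.g. at 0.05 EUR/kWh: 0.05 * 50 = 2.5 EUR per kg of hydrogen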

  3. Performance evaluation of the DCMD desalination process under bench scale and large scale module operating conditions

    KAUST Repository

    Francis, Lijo; Ghaffour, NorEddine; Alsaadi, Ahmad Salem; Nunes, Suzana Pereira; Amy, Gary L.

    2014-01-01

    The flux performance of different hydrophobic microporous flat-sheet commercial membranes made of polytetrafluoroethylene (PTFE) and polypropylene (PP) was tested for Red Sea water desalination using the direct contact membrane distillation (DCMD) process, under bench-scale (high ΔT) and large-scale module (low ΔT) operating conditions. Membranes were characterized for their surface morphology, water contact angle, thickness, porosity, pore size and pore size distribution. The DCMD process performance was optimized using a locally designed and fabricated module aiming to maximize the flux at different levels of operating parameters, mainly feed water and coolant inlet temperatures at different temperature differences across the membrane (ΔT). A water vapor flux of 88.8 kg/m²h was obtained using a PTFE membrane at high ΔT (60°C). In addition, the flux performance was compared to the first generation of a new locally synthesized and fabricated membrane made of a different class of polymer under the same conditions. A total salt rejection of 99.99% and a boron rejection of 99.41% were achieved under extreme operating conditions. On the other hand, a detailed water characterization revealed that low molecular weight non-ionic molecules (at the ppb level) were transported with the water vapor molecules through the membrane structure. The membrane which provided the highest flux was then tested under large-scale module operating conditions. The average flux of the latter study (low ΔT) was found to be eight times lower than that under the bench-scale (high ΔT) operating conditions.
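
    The strong sensitivity to ΔT is expected, since the DCMD driving force is the water vapour pressure difference across the membrane, which grows steeply with temperature. A quick sketch using the Antoine equation for water (a standard correlation, not part of the study):

      def p_sat_mmHg(t_celsius):
          # Antoine equation for water, valid roughly 1-100 °C.
          return 10 ** (8.07131 - 1730.63 / (233.426 + t_celsius))

      def dcmd_driving_force(t_feed, t_coolant):
          """Vapour pressure difference (mmHg) across the membrane."""
          return p_sat_mmHg(t_feed) - p_sat_mmHg(t_coolant)

      # dcmd_driving_force(80, 20) is roughly 14x dcmd_driving_force(40, 30),
      # consistent with much lower fluxes under low-ΔT module conditions.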

  4. Performance evaluation of the DCMD desalination process under bench scale and large scale module operating conditions

    KAUST Repository

    Francis, Lijo

    2014-04-01

    The flux performance of different hydrophobic microporous flat-sheet commercial membranes made of polytetrafluoroethylene (PTFE) and polypropylene (PP) was tested for Red Sea water desalination using the direct contact membrane distillation (DCMD) process, under bench-scale (high ΔT) and large-scale module (low ΔT) operating conditions. Membranes were characterized for their surface morphology, water contact angle, thickness, porosity, pore size and pore size distribution. The DCMD process performance was optimized using a locally designed and fabricated module aiming to maximize the flux at different levels of operating parameters, mainly feed water and coolant inlet temperatures at different temperature differences across the membrane (ΔT). A water vapor flux of 88.8 kg/m²h was obtained using a PTFE membrane at high ΔT (60°C). In addition, the flux performance was compared to the first generation of a new locally synthesized and fabricated membrane made of a different class of polymer under the same conditions. A total salt rejection of 99.99% and a boron rejection of 99.41% were achieved under extreme operating conditions. On the other hand, a detailed water characterization revealed that low molecular weight non-ionic molecules (at the ppb level) were transported with the water vapor molecules through the membrane structure. The membrane which provided the highest flux was then tested under large-scale module operating conditions. The average flux of the latter study (low ΔT) was found to be eight times lower than that under the bench-scale (high ΔT) operating conditions.

  5. Coverage of the migrant population in large-scale assessment surveys. Experiences from PIAAC in Germany

    Directory of Open Access Journals (Sweden)

    Débora B. Maehler

    2017-03-01

    Background: European countries, and especially Germany, are currently very much affected by human migration flows, with the result that the task of integration has become a challenge. Only very little empirical evidence on topics such as labor market participation and the social integration processes of migrant subpopulations is available to date from large-scale population surveys. The present paper provides an overview of the representation of the migrant population in the German Programme for the International Assessment of Adult Competencies (PIAAC) sample and evaluates reasons for the under-coverage of this population. Methods: We examine outcome rates and reasons for nonresponse among the migrant population based on sampling frame data, and we also examine paradata from the interviewers' contact protocols to evaluate time patterns for the successful contacting of migrants. Results and Conclusions: This is the first time that results of this kind have been presented for a large-scale assessment in educational research. These results are also discussed in the context of future PIAAC cycles. Overall, they confirm the expectations in the literature that factors such as language problems result in lower contact and response rates among migrants.

  6. High performance nanostructured Silicon heterojunction for water splitting on large scales

    KAUST Repository

    Bonifazi, Marcella

    2017-11-02

    In recent years the global demand for energy has been increasing steeply, as has the awareness that new sources of clean energy are essential. Photo-electrochemical (PEC) devices for water splitting applications have stirred great interest, and different approaches have been explored to improve the efficiency of these devices and to avoid optical losses at the interfaces with water. These include engineering materials and nanostructuring the device's surfaces [1]-[2]. Despite promising initial results, there are still many drawbacks that need to be overcome to reach large-scale production with optimized performance [3]. We present a new device that relies on the optimization of a nanostructuring process exploiting suitably disordered surfaces. Additionally, this device can harvest light on both sides to efficiently gain and store the energy needed to keep the photocatalytic reaction active.

  7. Performance of the first Japanese large-scale facility for radon inhalation experiments with small animals

    International Nuclear Information System (INIS)

    Ishimori, Y.; Mitsunobu, F.; Yamaoka, K.; Tanaka, H.; Kataoka, T.; Sakoda, A.

    2011-01-01

    A radon test facility for small animals was developed in order to increase the statistical validity of differences in the biological response under various radon environments. This paper describes the performance of this facility, the first large-scale facility of its kind in Japan. The facility has the capacity to conduct approximately 150 mouse-scale tests at the same time. The apparatus for exposing small animals to radon has six animal chamber groups with five independent cages each, and a different radon concentration is available in each animal chamber group. Because the first target of this study is to examine the in vivo behaviour of radon and its effects, the major functions for controlling radon and eliminating thoron were examined experimentally. Additionally, radon progeny concentrations and their particle size distributions in the cages were also examined experimentally for consideration in future projects. (authors)

  8. High performance nanostructured Silicon heterojunction for water splitting on large scales

    KAUST Repository

    Bonifazi, Marcella; Fu, Hui-chun; He, Jr-Hau; Fratalocchi, Andrea

    2017-01-01

    In recent years the global demand for energy has been increasing steeply, as has the awareness that new sources of clean energy are essential. Photo-electrochemical (PEC) devices for water splitting applications have stirred great interest, and different approaches have been explored to improve the efficiency of these devices and to avoid optical losses at the interfaces with water. These include engineering materials and nanostructuring the device's surfaces [1]-[2]. Despite promising initial results, there are still many drawbacks that need to be overcome to reach large-scale production with optimized performance [3]. We present a new device that relies on the optimization of a nanostructuring process exploiting suitably disordered surfaces. Additionally, this device can harvest light on both sides to efficiently gain and store the energy needed to keep the photocatalytic reaction active.

  9. The Convergence of High Performance Computing and Large Scale Data Analytics

    Science.gov (United States)

    Duffy, D.; Bowen, M. K.; Thompson, J. H.; Yang, C. P.; Hu, F.; Wills, B.

    2015-12-01

    As the combinations of remote sensing observations and model outputs have grown, scientists are increasingly burdened with both the necessity and the complexity of large-scale data analysis. Scientists are increasingly applying traditional high performance computing (HPC) solutions to solve their "Big Data" problems. While this approach has the benefit of limiting data movement, the HPC system is not optimized to run analytics, which can create problems that permeate throughout the HPC environment. To solve these issues and to alleviate some of the strain on the HPC environment, the NASA Center for Climate Simulation (NCCS) has created the Advanced Data Analytics Platform (ADAPT), which combines both HPC and cloud technologies to create an agile system designed for analytics. Large, commonly used data sets, such as Landsat, MODIS, MERRA, and NGA, are stored in this system in a write-once/read-many file system. High-performance virtual machines are deployed and scaled according to individual scientists' requirements specifically for data analysis. On the software side, the NCCS and GMU are working with emerging commercial technologies and applying them to structured, binary scientific data in order to expose the data in new ways. Native NetCDF data is stored within a Hadoop Distributed File System (HDFS), enabling storage-proximal processing through MapReduce while continuing to provide accessibility of the data to traditional applications. Once the data is stored within HDFS, an additional indexing scheme is built on top of the data and placed into a relational database. This spatiotemporal index enables extremely fast mapping of queries to data locations, dramatically speeding up analytics. These are some of the first steps toward a single unified platform that optimizes for both HPC and large-scale data analysis, and this presentation will elucidate the necessary exascale architectures required for future systems.
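
    A minimal sketch of such a spatiotemporal index (the table layout and names are our assumptions, not the NCCS schema): a relational table maps a variable and a space-time bounding box to data locations, so a query touches only the relevant blocks.

      import sqlite3

      conn = sqlite3.connect(":memory:")
      conn.execute("""CREATE TABLE chunk_index (
          variable TEXT, t_start TEXT, t_end TEXT,
          lat_min REAL, lat_max REAL, lon_min REAL, lon_max REAL,
          location TEXT, byte_offset INTEGER, byte_length INTEGER)""")

      def locate(variable, t0, t1, lat0, lat1, lon0, lon1):
          """Return data locations overlapping a space-time query window."""
          return conn.execute(
              """SELECT location, byte_offset, byte_length FROM chunk_index
                 WHERE variable = ? AND t_start <= ? AND t_end >= ?
                   AND lat_min <= ? AND lat_max >= ?
                   AND lon_min <= ? AND lon_max >= ?""",
              (variable, t1, t0, lat1, lat0, lon1, lon0)).fetchall()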

  10. Assessment of the technology required to develop photovoltaic power system for large scale national energy applications

    Science.gov (United States)

    Lutwack, R.

    1974-01-01

    A technical assessment of a program to develop photovoltaic power system technology for large-scale national energy applications was made by analyzing and judging the alternative candidate photovoltaic systems and development tasks. A program plan was constructed based on achieving the 10-year objective of establishing the practicability of large-scale terrestrial power installations using photovoltaic conversion arrays costing less than $0.50/peak W. Guidelines for the tasks of a 5-year program were derived from a set of 5-year objectives deduced from the 10-year objective. This report indicates the need for an early emphasis on the development of the single-crystal Si photovoltaic system for commercial utilization; a production goal of 5 × 10^8 peak W/year of $0.50 cells was projected for the year 1985. The development of other photovoltaic conversion systems was assigned to longer-range development roles. The status of the technology developments and the applicability of solar arrays to particular power installations, ranging from houses to central power plants, was scheduled to be verified in a series of demonstration projects. The budget recommended for the first 5-year phase of the program is $268.5M.

  11. Assessment of clean development mechanism potential of large-scale energy efficiency measures in heavy industries

    International Nuclear Information System (INIS)

    Hayashi, Daisuke; Krey, Matthias

    2007-01-01

    This paper assesses the clean development mechanism (CDM) potential of large-scale energy efficiency measures in selected heavy industries (iron and steel, cement, aluminium, pulp and paper, and ammonia), taking India and Brazil as examples of CDM project host countries. We have chosen two criteria for identifying the CDM potential of each energy efficiency measure: (i) the volume of emission reductions (in CO2e) that can be expected from the measure and (ii) the likelihood of the measure passing the additionality test of the CDM Executive Board (EB) when submitted as a proposed CDM project activity. The paper shows that the CDM potential of large-scale energy efficiency measures strongly depends on the project-specific and country-specific context. In particular, technologies for the iron and steel industry (coke dry quenching (CDQ), top pressure recovery turbine (TRT), and basic oxygen furnace (BOF) gas recovery), the aluminium industry (point feeder prebake (PFPB) smelters), and the pulp and paper industry (continuous digester technology) offer promising CDM potential.

  12. High-Performance Monitoring Architecture for Large-Scale Distributed Systems Using Event Filtering

    Science.gov (United States)

    Maly, K.

    1998-01-01

    Monitoring is an essential process for observing and improving the reliability and performance of large-scale distributed (LSD) systems. In an LSD environment, a large number of events are generated by the system components during execution or interaction with external objects (e.g. users or processes). Monitoring such events is necessary for observing the run-time behavior of LSD systems and providing the status information required for debugging, tuning and managing such applications. However, correlated events are generated concurrently and may be distributed across various locations in the application environment, which complicates the management decision process and thereby makes monitoring LSD systems an intricate task. We propose a scalable, high-performance monitoring architecture for LSD systems that detects and classifies interesting local and global events and disseminates the monitoring information to the corresponding end-point management applications, such as debugging and reactive control tools, to improve application performance and reliability. A large volume of events may be generated due to the extensive demands of the monitoring applications and the high interaction of LSD systems. The monitoring architecture therefore employs a high-performance event filtering mechanism to efficiently process the large volume of event traffic generated by LSD systems and to minimize the intrusiveness of the monitoring process by reducing the event traffic flow in the system and distributing the monitoring computation. Our architecture also supports dynamic and flexible reconfiguration of the monitoring mechanism via its instrumentation and subscription components. As a case study, we show how our monitoring architecture can be utilized to improve the reliability and performance of the Interactive Remote Instruction (IRI) system, a large-scale distributed system for collaborative distance learning. The filtering mechanism represents an intrinsic component integrated
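
    A minimal sketch of subscription-based event filtering of the kind this architecture relies on (class and method names are our own, purely illustrative):

      class EventFilter:
          def __init__(self):
              self.subscriptions = []  # (predicate, callback) pairs

          def subscribe(self, predicate, callback):
              self.subscriptions.append((predicate, callback))

          def publish(self, event):
              # Forward an event only to subscribers whose predicate matches,
              # cutting the monitoring traffic that reaches each management tool.
              for predicate, callback in self.subscriptions:
                  if predicate(event):
                      callback(event)

      f = EventFilter()
      f.subscribe(lambda e: e.get("severity") == "error",
                  lambda e: print("debugger notified:", e))
      f.publish({"severity": "error", "node": 17, "msg": "timeout"})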

  13. Performance Evaluation of Hadoop-based Large-scale Network Traffic Analysis Cluster

    Directory of Open Access Journals (Sweden)

    Tao Ran

    2016-01-01

    As Hadoop has gained popularity in the big data era, it is widely used in various fields. Our self-designed and self-developed large-scale network traffic analysis cluster works well based on Hadoop, with off-line applications running on it to analyze massive network traffic data. For the purpose of scientifically and reasonably evaluating the performance of the analysis cluster, we propose a performance evaluation system. First, we take the execution times of three benchmark applications as the performance benchmark and pick 40 metrics of customized statistical resource data. Then we identify the relationship between the resource data and the execution times by a statistical modeling approach composed of principal component analysis and multiple linear regression. After training the models on historical data, we can predict the execution times from current resource data. Finally, we evaluate the performance of the analysis cluster via the validated prediction of execution times. Experimental results show that the execution times predicted by the trained models are within an acceptable error range, and the performance evaluation results are accurate and reliable.
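
    The modeling pipeline the abstract describes — principal component analysis followed by multiple linear regression — can be sketched with scikit-learn as follows; the data here are random placeholders for the 40 resource metrics and the measured execution times:

      import numpy as np
      from sklearn.pipeline import make_pipeline
      from sklearn.preprocessing import StandardScaler
      from sklearn.decomposition import PCA
      from sklearn.linear_model import LinearRegression

      rng = np.random.default_rng(0)
      X = rng.random((200, 40))                 # 40 resource metrics per run
      y = X[:, :3] @ [5.0, 3.0, 2.0] + rng.normal(0, 0.1, 200)  # execution times

      model = make_pipeline(StandardScaler(), PCA(n_components=8),
                            LinearRegression())
      model.fit(X[:150], y[:150])               # train on historical runs
      predicted = model.predict(X[150:])        # predict from current resource data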

  14. The effect of various parameters of large scale radio propagation models on improving performance mobile communications

    Science.gov (United States)

    Pinem, M.; Fauzi, R.

    2018-02-01

    One technique for ensuring the continuity of wireless communication services and keeping transitions smooth on mobile communication networks is soft handover. In the Soft Handover (SHO) technique, the addition and removal of a Base Station from the active set is determined by initiation triggers, one of which is based on the strength of the received signal. In this paper we observed the influence of the parameters of large-scale radio propagation models on improving the performance of mobile communications. The parameters used for characterizing the performance of the mobile system are the drop call rate, the radio link degradation rate and the average size of the Active Set (AS). The simulation results show that increasing the heights of the Base Station (BS) and Mobile Station (MS) antennas contributes to an improvement in the received signal power level, which improves radio link quality, increases the average size of the Active Set and reduces the average drop call rate. It was also found that Hata's propagation model contributed significantly to improvements in system performance parameters compared to Okumura's and Lee's propagation models.
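
    For concreteness, the Hata model singled out above, in its standard urban small/medium-city form (valid for roughly 150-1500 MHz, base station heights of 30-200 m, mobile heights of 1-10 m and distances of 1-20 km):

      import math

      def hata_path_loss_db(f_mhz, h_base_m, h_mobile_m, d_km):
          # Mobile antenna height correction for a small/medium city.
          a_hm = ((1.1 * math.log10(f_mhz) - 0.7) * h_mobile_m
                  - (1.56 * math.log10(f_mhz) - 0.8))
          return (69.55 + 26.16 * math.log10(f_mhz)
                  - 13.82 * math.log10(h_base_m) - a_hm
                  + (44.9 - 6.55 * math.log10(h_base_m)) * math.log10(d_km))

      # Raising the BS antenna from 30 m to 60 m (900 MHz, 5 km) lowers the
      # predicted path loss by about 5.5 dB, in line with the finding above.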

  15. Co-Cure-Ply Resins for High Performance, Large-Scale Structures

    Data.gov (United States)

    National Aeronautics and Space Administration — Large-scale composite structures are commonly joined by secondary bonding of molded-and-cured thermoset components. This approach may result in unpredictable joint...

  16. A probabilistic assessment of large scale wind power development for long-term energy resource planning

    Science.gov (United States)

    Kennedy, Scott Warren

    A steady decline in the cost of wind turbines and increased experience in their successful operation have brought this technology to the forefront of viable alternatives for large-scale power generation. Methodologies for understanding the costs and benefits of large-scale wind power development, however, are currently limited. In this thesis, a new and widely applicable technique for estimating the social benefit of large-scale wind power production is presented. The social benefit is based upon wind power's energy and capacity services and the avoidance of environmental damages. The approach uses probabilistic modeling techniques to account for the stochastic interaction between wind power availability, electricity demand, and conventional generator dispatch. A method for including the spatial smoothing effect of geographically dispersed wind farms is also introduced. The model has been used to analyze potential offshore wind power development to the south of Long Island, NY. If natural gas combined cycle (NGCC) and integrated gasifier combined cycle (IGCC) plants are the alternative generation sources, wind power exhibits a negative social benefit due to its high capacity cost and the relatively low emissions of these advanced fossil-fuel technologies. Environmental benefits increase significantly if charges for CO2 emissions are included. The results also reveal a diminishing social benefit as wind power penetration increases. The dependence of wind power benefits on natural gas and coal prices is also discussed. In power systems with a high penetration of wind-generated electricity, the intermittent availability of wind power may influence hourly spot prices. A price-responsive electricity demand model is introduced that shows a small increase in wind power value when consumers react to hourly spot prices. The effectiveness of this mechanism depends heavily on estimates of the own- and cross-price elasticities of aggregate electricity demand. This work makes a valuable

  17. Large-scale Assessment Yields Evidence of Minimal Use of Reasoning Skills in Traditionally Taught Classes

    Science.gov (United States)

    Thacker, Beth

    2017-01-01

    Large-scale assessment data from Texas Tech University yielded evidence that most students taught traditionally in large lecture classes with online homework and predominantly multiple-choice exams, when asked to answer free-response (FR) questions, did not support their answers with logical arguments grounded in physics concepts. In addition to a lack of conceptual understanding, incorrect and partially correct answers lacked evidence of the ability to apply even lower-level reasoning skills to solve a problem. Correct answers, however, did show evidence of at least lower-level thinking skills, as coded using a rubric based on Bloom's taxonomy. With the introduction of evidence-based instruction into the labs and recitations of the large courses, and in a small, completely laboratory-based, hands-on course, the percentage of correct answers with correct explanations increased. The FR format, unlike other assessment formats, allowed assessment of both conceptual understanding and the application of thinking skills, clearly pointing out weaknesses not revealed by other assessment instruments and providing data on skills beyond conceptual understanding for course and program assessment. Supported by National Institutes of Health (NIH) Challenge grant #1RC1GM090897-01.

  18. Integrating adaptive behaviour in large-scale flood risk assessments: an Agent-Based Modelling approach

    Science.gov (United States)

    Haer, Toon; Aerts, Jeroen

    2015-04-01

    Between 1998 and 2009, Europe suffered over 213 major damaging floods, causing 1126 deaths and displacing around half a million people. In this period, floods caused at least 52 billion euro in insured economic losses, making floods the most costly natural hazard faced in Europe. In many low-lying areas, the main strategy for coping with floods is to reduce the risk of the hazard through flood defence structures, like dikes and levees. However, it has been suggested that part of the responsibility for flood protection needs to shift to households and businesses in areas at risk, and that governments and insurers can effectively stimulate the implementation of individual protective measures. Yet adaptive behaviour towards flood risk reduction, and the interaction between governments, insurers, and individuals, has hardly been studied in large-scale flood risk assessments. In this study, a European Agent-Based Model is developed that includes agent representatives for the administrative stakeholders of European Member States, insurer and reinsurer markets, and individuals following complex behaviour models. The Agent-Based Modelling approach allows for an in-depth analysis of the interaction between heterogeneous autonomous agents and the resulting (non-)adaptive behaviour. Existing flood damage models are part of the European Agent-Based Model to allow for a dynamic response of both the agents and the environment to changing flood risk and protective efforts. By following an Agent-Based Modelling approach, this study is a first contribution towards overcoming the limitations of traditional large-scale flood risk models, in which the influence of individual adaptive behaviour towards flood risk reduction is often lacking.
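
    A toy sketch (our construction, far simpler than the paper's behaviour models) of the household-level adaptive decision such an agent-based model represents: invest in protection when the expected avoided damage, possibly nudged by an insurance premium discount, exceeds the cost of the measure.

      class Household:
          def __init__(self, flood_prob, damage, measure_cost, effectiveness):
              self.flood_prob = flood_prob        # annual flood probability
              self.damage = damage                # damage if flooded, EUR
              self.measure_cost = measure_cost    # annualized protection cost
              self.effectiveness = effectiveness  # fraction of damage avoided
              self.protected = False

          def decide(self, insurance_discount=0.0):
              expected_benefit = (self.flood_prob * self.damage
                                  * self.effectiveness + insurance_discount)
              if not self.protected and expected_benefit > self.measure_cost:
                  self.protected = True           # adapt
              return self.protected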

  19. Large scale debris-flow hazard assessment: a geotechnical approach and GIS modelling

    Directory of Open Access Journals (Sweden)

    G. Delmonaco

    2003-01-01

    A deterministic distributed model has been developed for large-scale debris-flow hazard analysis in the basin of the River Vezza (Tuscany Region, Italy). This area (51.6 km²) was affected by over 250 landslides, classified as debris/earth flows, mainly involving the metamorphic geological formations outcropping in the area and triggered by the pluviometric event of 19 June 1996. In recent decades, landslide hazard and risk analysis has been favoured by the development of GIS techniques permitting the generalisation, synthesis and modelling of stability conditions at a large-scale investigation (>1:10 000). In this work, the main results derived from the application of a geotechnical model coupled with a hydrological model for debris-flow hazard analysis are reported. The analysis was developed through the following steps: a landslide inventory map derived from aerial photo interpretation and direct field survey; generation of a database and digital maps; elaboration of a DTM and derived themes (i.e. slope angle map); definition of a superficial soil thickness map; geotechnical soil characterisation through back-analysis on test slopes and laboratory tests; inference of the influence of precipitation, for distinct return times, on ponding time and pore pressure generation; implementation of a slope stability model (infinite slope model); and generalisation of the safety factor for estimated rainfall events with different return times. This approach allowed the identification of potential source areas of debris-flow triggering for precipitation events with estimated return times of 10, 50, 75 and 100 years. The model shows a dramatic decrease in safety conditions for the simulation related to a 75-year return time rainfall event, corresponding to an estimated cumulated daily intensity of 280–330 mm. This value can be considered the hydrological triggering
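
    For reference, the infinite slope model reduces, for a planar failure surface at depth z with a slope-parallel water table, to the classical factor-of-safety expression (standard form, our notation):

      FS = \frac{c' + (\gamma z - \gamma_w z_w) \cos^2\beta \tan\phi'}{\gamma z \sin\beta \cos\beta}

    where c' and \phi' are the effective cohesion and friction angle, \beta the slope angle, \gamma and \gamma_w the unit weights of soil and water, and z_w the water table height above the failure surface. Rainfall with a given return time enters through the pore pressure term \gamma_w z_w, and potential debris-flow source areas are those where FS drops below 1.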

  20. Large scale gas injection test (Lasgit) performed at the Aespoe Hard Rock Laboratory. Summary report 2008

    International Nuclear Information System (INIS)

    Cuss, R.J.; Harrington, J.F.; Noy, D.J.

    2010-02-01

    This report describes the set-up, operation and observations from the first 1,385 days (3.8 years) of the large scale gas injection test (Lasgit) experiment conducted at the Aespoe Hard Rock Laboratory. During this time the bentonite buffer has been artificially hydrated, giving new insight into the evolution of the buffer. After 2 years (849 days) of artificial hydration, a canister filter was selected for a series of hydraulic and gas tests, a stage that lasted 268 days. The results from the gas tests showed that the full-scale bentonite buffer behaved in a similar way to previous laboratory experiments. This confirms the up-scaling of laboratory observations and adds considerable information on the stress responses throughout the deposition hole. During the gas testing stage, the buffer continued to be artificially hydrated. Hydraulic results, from controlled and uncontrolled events, show that the buffer continues to mature and has yet to reach full maturation. Lasgit has yielded high-quality data on the hydration of the bentonite and the evolution of hydrogeological properties adjacent to the deposition hole. The initial hydraulic and gas injection tests confirm the correct working of all control and data acquisition systems. Lasgit has been in successful operation for in excess of 1,385 days.

  1. Large scale gas injection test (Lasgit) performed at the Aespoe Hard Rock Laboratory. Summary report 2008

    Energy Technology Data Exchange (ETDEWEB)

    Cuss, R.J.; Harrington, J.F.; Noy, D.J. (British Geological Survey (United Kingdom))

    2010-02-15

    This report describes the set-up, operation and observations from the first 1,385 days (3.8 years) of the large scale gas injection test (Lasgit) experiment conducted at the Aespoe Hard Rock Laboratory. During this time the bentonite buffer has been artificially hydrated, giving new insight into the evolution of the buffer. After 2 years (849 days) of artificial hydration, a canister filter was selected for a series of hydraulic and gas tests, a stage that lasted 268 days. The results from the gas tests showed that the full-scale bentonite buffer behaved in a similar way to previous laboratory experiments. This confirms the up-scaling of laboratory observations and adds considerable information on the stress responses throughout the deposition hole. During the gas testing stage, the buffer continued to be artificially hydrated. Hydraulic results, from controlled and uncontrolled events, show that the buffer continues to mature and has yet to reach full maturation. Lasgit has yielded high-quality data on the hydration of the bentonite and the evolution of hydrogeological properties adjacent to the deposition hole. The initial hydraulic and gas injection tests confirm the correct working of all control and data acquisition systems. Lasgit has been in successful operation for in excess of 1,385 days

  2. High-Performance Carbon Dioxide Electrocatalytic Reduction by Easily Fabricated Large-Scale Silver Nanowire Arrays.

    Science.gov (United States)

    Luan, Chuhao; Shao, Yang; Lu, Qi; Gao, Shenghan; Huang, Kai; Wu, Hui; Yao, Kefu

    2018-05-17

    An efficient and selective catalyst is urgently needed for carbon dioxide electroreduction, and silver is one of the promising candidates at an affordable cost. Here we fabricated large-scale, vertically standing Ag nanowire arrays with high crystallinity and electrical conductivity as carbon dioxide electroreduction catalysts, using a simple nanomolding method that was usually considered infeasible for crystalline metallic materials. A great enhancement of current density and selectivity for CO at moderate potentials was achieved. The current density for CO (j_CO) of the Ag nanowire array with 200 nm diameter was more than 2500 times larger than that of Ag foil at an overpotential of 0.49 V, with an efficiency over 90%. The enhanced performance is attributed to a greatly increased electrochemically active surface area (ECSA) and a higher intrinsic activity compared to those of polycrystalline Ag foil. More low-coordinated sites on the nanowires, which can better stabilize the CO2 intermediate, are responsible for the high intrinsic activity. In addition, the impact of surface morphology, which induces limited mass transport, on the reaction selectivity and efficiency of nanowire arrays with different diameters is also discussed.

  3. Application of plant metabonomics in quality assessment for large-scale production of traditional Chinese medicine.

    Science.gov (United States)

    Ning, Zhangchi; Lu, Cheng; Zhang, Yuxin; Zhao, Siyu; Liu, Baoqin; Xu, Xuegong; Liu, Yuanyan

    2013-07-01

    The curative effects of traditional Chinese medicines are principally based on the synergistic effect of their multi-target, multi-ingredient preparations, in contrast to modern pharmacology and drug development, which often focus on a single chemical entity. Therefore, methods employing a few markers or pharmacologically active constituents to assess the quality and authenticity of these complex preparations face a number of severe challenges. Metabonomics can provide an effective platform for complex sample analysis and has also been applied to the quality analysis of traditional Chinese medicine. Metabonomics enables comprehensive assessment of complex traditional Chinese medicines or herbal remedies and, by means of chemometrics, the classification of samples of diverse biological status, origin, or quality. Identification, processing, and pharmaceutical preparation are the main procedures in the large-scale production of Chinese medicinal preparations. Through complete scans, plant metabonomics addresses some of the shortfalls of single-marker analyses and has considerable potential to become a sharp tool for traditional Chinese medicine quality assessment. Georg Thieme Verlag KG Stuttgart · New York.

  4. Guidance for Large-scale Implementation of Alternate Wetting and Drying: A Biophysical Suitability Assessment

    Science.gov (United States)

    Sander, B. O.; Wassmann, R.; Nelson, A.; Palao, L.; Wollenberg, E.; Ishitani, M.

    2014-12-01

    The alternate wetting and drying (AWD) technology for rice production not only saves 15-30% of irrigation water, it also reduces methane emissions by up to 70%. AWD is defined by the periodic drying and re-flooding of a rice field. Due to its high mitigation potential and the simplicity of executing the practice, AWD has gained a lot of attention in recent years. The Climate and Clean Air Coalition (CCAC) has put AWD high on its agenda and funds a project to guide implementation of this technology in Vietnam, Bangladesh and Colombia. One crucial activity is a biophysical suitability assessment for AWD in the three countries. For this, we analyzed rainfall and soil data as well as potential evapotranspiration to assess whether the water balance allows practicing AWD or whether precipitation is too high for rice fields to fall dry. In my talk I will outline key factors for a successful large-scale implementation of AWD, with a focus on the biophysical suitability assessment. The seasonal suitability maps that we generated highlight priority areas for AWD implementation and guide policy makers towards informed decisions about meaningful investments in infrastructure and extension work.
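
    The screening logic can be illustrated with a deliberately simple water balance check; the 5 mm/day threshold is an arbitrary placeholder, not the project's calibrated criterion:

      def awd_suitable(daily_rain_mm, daily_pet_mm, surplus_threshold_mm=5.0):
          """daily_rain_mm, daily_pet_mm: same-length daily series for a season.
          Returns False where the climate is too wet for fields to fall dry."""
          surplus = [r - e for r, e in zip(daily_rain_mm, daily_pet_mm)]
          mean_surplus = sum(surplus) / len(surplus)
          return mean_surplus < surplus_threshold_mm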

  5. Using Procedure Based on Item Response Theory to Evaluate Classification Consistency Indices in the Practice of Large-Scale Assessment

    Directory of Open Access Journals (Sweden)

    Shanshan Zhang

    2017-09-01

    In spite of the growing interest in methods for evaluating classification consistency (CC) indices, few studies are available on applying these methods in the practice of large-scale educational assessment. In addition, few studies have considered the influence of practical factors, for example the examinee ability distribution, the cut score location and the score scale, on the performance of CC indices. Using Lee's newly developed procedure based on item response theory (IRT), the main purpose of this study is to investigate the performance of CC indices when practical factors are taken into consideration. A simulation study and an empirical study were conducted under comprehensive conditions. Results suggested that with a negatively skewed ability distribution, the CC indices were larger than with other distributions. Interactions occurred among ability distribution, cut score location, and score scale. Consequently, Lee's IRT procedure can reliably be used in the field of large-scale educational assessment, but the reported indices should be treated with caution, as testing conditions may vary considerably.

  6. Towards Building a High Performance Spatial Query System for Large Scale Medical Imaging Data.

    Science.gov (United States)

    Aji, Ablimit; Wang, Fusheng; Saltz, Joel H

    2012-11-06

    Support for high-performance queries on large volumes of scientific spatial data is becoming increasingly important in many applications. This growth is driven not only by geospatial problems in numerous fields, but also by emerging scientific applications that are increasingly data- and compute-intensive. For example, digital pathology imaging has become an emerging field during the past decade, where the examination of high-resolution images of human tissue specimens enables more effective diagnosis, prediction and treatment of diseases. Systematic analysis of large-scale pathology images generates tremendous amounts of spatially derived quantifications of micro-anatomic objects, such as nuclei, blood vessels, and tissue regions. Analytical pathology imaging has high potential to support image-based computer-aided diagnosis. One major requirement for this is effective querying of such an enormous amount of data with fast response, which faces two major challenges: the "big data" challenge and high computational complexity. In this paper, we present our work towards building a high-performance spatial query system for querying massive spatial data on MapReduce. Our framework takes an on-demand index building approach for processing spatial queries and a partition-merge approach for building parallel spatial query pipelines, which fits nicely with the computing model of MapReduce. We demonstrate our framework on supporting multi-way spatial joins for algorithm evaluation and nearest neighbor queries for micro-anatomic objects. To reduce query response time, we propose cost-based query optimization to mitigate the effect of data skew. Our experiments show that the framework can efficiently support complex analytical spatial queries on MapReduce.
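
    A stripped-down sketch of the partition-merge pattern for a spatial join (our simplification of the general idea, not the paper's implementation): tile the space into a grid, join locally per tile and its neighbours, and let the merge step deduplicate pairs found from more than one tile.

      from collections import defaultdict

      def tile_of(x, y, size=100.0):
          return (int(x // size), int(y // size))

      def spatial_join(points_a, points_b, radius=1.0, size=100.0):
          buckets = defaultdict(list)
          for p in points_b:                    # "map": partition by tile
              buckets[tile_of(p[0], p[1], size)].append(p)
          results = set()                       # set() deduplicates on merge
          for (x, y) in points_a:               # local join per tile + neighbours
              tx, ty = tile_of(x, y, size)
              for dx in (-1, 0, 1):
                  for dy in (-1, 0, 1):
                      for (bx, by) in buckets.get((tx + dx, ty + dy), ()):
                          if (x - bx) ** 2 + (y - by) ** 2 <= radius ** 2:
                              results.add(((x, y), (bx, by)))
          return results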

  7. Modeling Student Motivation and Students’ Ability Estimates From a Large-Scale Assessment of Mathematics

    Directory of Open Access Journals (Sweden)

    Carlos Zerpa

    2011-09-01

    When large-scale assessments (LSA) do not hold personal stakes for students, students may not put forth their best effort. Low-effort examinee behaviors (e.g., guessing, omitting items) result in an underestimate of examinee abilities, which is a concern when using the results of LSA to inform educational policy and planning. The purpose of this study was to explore the relationship between examinee motivation as defined by expectancy-value theory, student effort, and examinee mathematics abilities. A principal components analysis was used to examine data from Grade 9 students (n = 43,562) who responded to a self-report questionnaire on their attitudes and practices related to mathematics. The results suggested a two-component model in which the components were interpreted as task values in mathematics and student effort. Next, a hierarchical linear model was implemented to examine the relationship between examinee component scores and estimated ability on an LSA. The results of this study provide evidence that motivation, as defined by expectancy-value theory, and student effort partially explain student ability estimates, and may have implications for the information transferred to testing organizations, school boards, and teachers when assessing students' Grade 9 mathematics learning.

  8. A long-term, continuous simulation approach for large-scale flood risk assessments

    Science.gov (United States)

    Falter, Daniela; Schröter, Kai; Viet Dung, Nguyen; Vorogushyn, Sergiy; Hundecha, Yeshewatesfa; Kreibich, Heidi; Apel, Heiko; Merz, Bruno

    2014-05-01

    The Regional Flood Model (RFM) is a process-based model cascade developed for flood risk assessments of large-scale basins. RFM consists of four model parts: the rainfall-runoff model SWIM, a 1D channel routing model, a 2D hinterland inundation model and the flood loss estimation model for residential buildings FLEMOps+r. The model cascade recently underwent a proof-of-concept study on the Elbe catchment (Germany) to demonstrate that flood risk assessments based on a continuous simulation approach, including rainfall-runoff, hydrodynamic and damage estimation models, are feasible for large catchments. The results of this study indicated that uncertainties are significant, especially for hydrodynamic simulations. This was basically a consequence of low data quality and the disregard of dike breaches. Therefore, RFM was applied with a refined hydraulic model setup for the Elbe tributary Mulde. The study area, the Mulde catchment, comprises about 6,000 km2 and 380 river-km. The inclusion of more reliable information on overbank cross-sections and dikes considerably improved the results. For the application of RFM to flood risk assessments, long-term climate input data are needed to drive the model chain. This model input was provided by a multi-site, multi-variate weather generator that produces sets of synthetic meteorological data reproducing the current climate statistics. The data set comprises 100 realizations of 100 years of meteorological data. With the proposed continuous simulation approach of RFM, we simulated a virtual period of 10,000 years covering the entire flood risk chain, including hydrological, 1D/2D hydrodynamic and flood damage estimation models. This provided a record of around 2,000 inundation events affecting the study area, with spatially detailed information on inundation depths and damage to residential buildings at a resolution of 100 m. This serves as basis for a spatially consistent flood risk assessment for the Mulde catchment presented in
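
    The payoff of a continuous 10,000-year simulation is that return periods can be read directly from the simulated record rather than extrapolated from short observed series. A minimal sketch, assuming a synthetic series of annual maximum damages and using Weibull plotting positions T = (n + 1)/rank:

    ```python
    import numpy as np

    rng = np.random.default_rng(42)
    # Stand-in for 100 realizations x 100 years of simulated annual flood damages
    annual_damage = rng.gumbel(loc=1.0, scale=2.0, size=10_000).clip(min=0)

    # Empirical return periods via Weibull plotting positions: T = (n + 1) / rank
    n = len(annual_damage)
    order = np.sort(annual_damage)[::-1]          # largest event first
    ranks = np.arange(1, n + 1)
    return_period = (n + 1) / ranks

    for T in (10, 100, 1000):
        idx = np.argmin(np.abs(return_period - T))
        print(f"~{T}-year damage: {order[idx]:.2f}")
    ```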

  9. Very Large-Scale Neighborhoods with Performance Guarantees for Minimizing Makespan on Parallel Machines

    NARCIS (Netherlands)

    Brueggemann, T.; Hurink, Johann L.; Vredeveld, T.; Woeginger, Gerhard

    2006-01-01

    We study the problem of minimizing the makespan on m parallel machines. We introduce a very large-scale neighborhood of exponential size (in the number of machines) that is based on a matching in a complete graph. The idea is to partition the jobs assigned to the same machine into two sets. This

  10. Lessons from a large-scale assessment: Results from conceptual inventories

    Directory of Open Access Journals (Sweden)

    Beth Thacker

    2014-07-01

    We report conceptual inventory results of a large-scale assessment project at a large university. We studied the introduction of materials and instructional methods informed by physics education research (PER) into a department where most instruction has previously been traditional and a significant number of faculty are hesitant, ambivalent, or even resistant to the introduction of such reforms. Data were collected in all of the sections of both the large algebra- and calculus-based introductory courses for a number of years, employing commonly used conceptual inventories. Results from a small PER-informed, inquiry-based, laboratory-based class are also reported. Results suggest that when PER-informed materials are introduced in the labs and recitations, independent of the lecture style, there is an increase in students' conceptual inventory gains. There is also an increase in the results on conceptual inventories if PER-informed instruction is used in the lecture. The highest conceptual inventory gains were achieved by the combination of PER-informed lectures and laboratories in large class settings and by the hands-on, laboratory-based, inquiry-based course taught in a small class setting.
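
    The record does not say how gains were computed; in physics education research they are commonly summarized as Hake's normalized gain, the fraction of the possible pre-to-post improvement actually realized. A sketch under that assumption:

    ```python
    def normalized_gain(pre, post, max_score=100.0):
        """Hake's normalized gain: fraction of the possible improvement realized."""
        return (post - pre) / (max_score - pre)

    # Hypothetical class averages on a conceptual inventory (percent correct)
    print(normalized_gain(pre=40.0, post=65.0))   # ~0.42, a "medium-gain" course
    ```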

  11. Large-scale assessment of olfactory preferences and learning in Drosophila melanogaster: behavioral and genetic components

    Directory of Open Access Journals (Sweden)

    Elisabetta Versace

    2015-09-01

    In the Evolve and Resequence method (E&R), experimental evolution and genomics are combined to investigate evolutionary dynamics and the genotype-phenotype link. Like other genomic approaches, this method requires many replicates with large population sizes, which imposes severe restrictions on the analysis of behavioral phenotypes. Aiming to use E&R for investigating the evolution of behavior in Drosophila, we have developed a simple and effective method to assess spontaneous olfactory preferences and learning in large samples of fruit flies using a T-maze. We tested this procedure on (a) a large wild-caught population and (b) 11 isofemale lines of Drosophila melanogaster. Compared to previous methods, this procedure reduces environmental noise and allows for the analysis of large population samples. Consistent with previous results, we show that flies have a preference for orange vs. apple odor. With our procedure, wild-derived flies exhibit olfactory learning in the absence of previous laboratory selection. Furthermore, we find genetic differences in olfactory learning with relatively high heritability. We propose this large-scale method as an effective tool for E&R and genome-wide association studies on olfactory preferences and learning.
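
    T-maze assays of this kind are typically scored with a preference index plus a test against indifference; the scoring below is a generic sketch, not necessarily the authors' exact statistic, and the counts are hypothetical:

    ```python
    from scipy.stats import binomtest

    def preference_index(n_orange, n_apple):
        """PI in [-1, 1]: +1 = all flies chose orange, 0 = indifference."""
        return (n_orange - n_apple) / (n_orange + n_apple)

    # Hypothetical counts from one T-maze run
    n_orange, n_apple = 132, 88
    print(preference_index(n_orange, n_apple))                  # 0.2
    print(binomtest(n_orange, n_orange + n_apple, 0.5).pvalue)  # vs. no preference
    ```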

  12. Use of large-scale acoustic monitoring to assess anthropogenic pressures on Orthoptera communities.

    Science.gov (United States)

    Penone, Caterina; Le Viol, Isabelle; Pellissier, Vincent; Julien, Jean-François; Bas, Yves; Kerbiriou, Christian

    2013-10-01

    Biodiversity monitoring at large spatial and temporal scales is greatly needed in the context of global changes. Although insects are a species-rich group and are important for ecosystem functioning, they have been largely neglected in conservation studies and policies, mainly due to technical and methodological constraints. Sound detection, a nondestructive method, is easily applied within a citizen-science framework and could be an interesting solution for insect monitoring. However, it has not yet been tested at a large scale. We assessed the value of a citizen-science program in which Orthoptera species (Tettigoniidae) were monitored acoustically along roads. We used Bayesian model-averaging analyses to test whether we could detect widely known patterns of anthropogenic effects on insects, such as the negative effects of urbanization or intensive agriculture on Orthoptera populations and communities. We also examined site-abundance correlations between years and estimated the biases in species detection to evaluate and improve the protocol. Urbanization and intensive agricultural landscapes negatively affected Orthoptera species richness, diversity, and abundance. This finding is consistent with results of previous studies of Orthoptera, vertebrates, carabids, and butterflies. The average mass of communities decreased as urbanization increased. The dispersal ability of communities increased as the percentage of agricultural land and, to a lesser extent, urban area increased. Despite changes in abundances over time, we found significant correlations between yearly abundances. We identified biases linked to the protocol (e.g., car speed or temperature) that can easily be accounted for in analyses. We argue that acoustic monitoring of Orthoptera along roads offers several advantages for assessing Orthoptera biodiversity over large spatial and temporal extents, particularly in a citizen-science framework. © 2013 Society for Conservation Biology.

  13. Using GRACE Satellite Gravimetry for Assessing Large-Scale Hydrologic Extremes

    Directory of Open Access Journals (Sweden)

    Alexander Y. Sun

    2017-12-01

    Global assessment of the spatiotemporal variability in terrestrial total water storage anomalies (TWSA) in response to hydrologic extremes is critical for water resources management. Using TWSA derived from the Gravity Recovery and Climate Experiment (GRACE) satellites, this study systematically assessed the skill of the TWSA-climatology (TC) approach and the breakpoint (BP) detection method for identifying large-scale hydrologic extremes. The TC approach calculates standardized anomalies by using the mean and standard deviation of the GRACE TWSA corresponding to each month. In the BP detection method, empirical mode decomposition (EMD) is first applied to identify the mean return period of TWSA extremes, and then a statistical procedure is used to identify the actual occurrence times of abrupt changes (i.e., BPs) in TWSA. Both detection methods were demonstrated on basin-averaged TWSA time series for the world's 35 largest river basins. A nonlinear event coincidence analysis measure was applied to cross-examine abrupt changes detected by these methods with those detected by the Standardized Precipitation Index (SPI). Results show that our EMD-assisted BP procedure is a promising tool for identifying hydrologic extremes using GRACE TWSA data. Abrupt changes detected by the BP method coincide well with the SPI anomalies and with documented hydrologic extreme events. Event timings obtained by the TC method were ambiguous for a number of the river basins studied, probably because the GRACE data record is too short to derive a long-term climatology at this time. The BP approach demonstrates a robust wet-dry anomaly detection capability, which will be important for applications with the upcoming GRACE Follow-On mission.
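
    The TC approach as described reduces to standardizing each observation against the climatology of its calendar month. A minimal sketch on a synthetic monthly TWSA series, flagging |z| > 2 as a candidate extreme (the threshold is an assumption, not taken from the record):

    ```python
    import numpy as np

    def tc_anomalies(twsa, months):
        """Standardize TWSA against the climatology of its calendar month."""
        twsa, months = np.asarray(twsa, float), np.asarray(months)
        z = np.empty_like(twsa)
        for m in range(1, 13):
            sel = months == m
            z[sel] = (twsa[sel] - twsa[sel].mean()) / twsa[sel].std(ddof=1)
        return z

    # Hypothetical 14-year monthly basin-averaged TWSA (cm equivalent water height)
    rng = np.random.default_rng(7)
    months = np.tile(np.arange(1, 13), 14)
    seasonal = 5 * np.sin(2 * np.pi * (months - 3) / 12)
    twsa = seasonal + rng.normal(0, 2, months.size)
    z = tc_anomalies(twsa, months)
    print("flagged extremes:", np.where(np.abs(z) > 2)[0])
    ```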

  14. Integrating weather and geotechnical monitoring data for assessing the stability of large scale surface mining operations

    Directory of Open Access Journals (Sweden)

    Steiakakis Chrysanthos

    2016-01-01

    The geotechnical challenges for safe slope design in large-scale surface mining operations are enormous. Sometimes one degree of slope inclination can significantly reduce the overburden-to-ore ratio and therefore dramatically improve the economics of the operation, while large-scale slope failures may have a significant impact on human lives. Furthermore, adverse weather conditions, such as high precipitation rates, may unfavorably affect the already delicate balance between operations and safety. Geotechnical, weather and production parameters should be systematically monitored and evaluated in order to safely operate such pits. Appropriate data management, processing and storage are critical to ensure timely and informed decisions.

  15. Large Scale Evapotranspiration Estimates: An Important Component in Regional Water Balances to Assess Water Availability

    Science.gov (United States)

    Garatuza-Payan, J.; Yepez, E. A.; Watts, C.; Rodriguez, J. C.; Valdez-Torres, L. C.; Robles-Morua, A.

    2013-05-01

    Water security can be defined as the reliable supply, in quantity and quality, of water to help sustain future populations and maintain ecosystem health and productivity. Water security is rapidly declining in many parts of the world due to population growth, drought, climate change, salinity, pollution, land use change, over-allocation and over-utilization, among other issues. Governmental offices (such as the Comision Nacional del Agua in Mexico, CONAGUA) require and conduct studies to estimate reliable water balances at regional or continental scales in order to provide reasonable assessments of the amount of water that can be provided (from surface or ground water sources) to supply all human needs while maintaining natural vegetation, on an operational basis and, more importantly, under disturbances such as droughts. Large-scale estimates of evapotranspiration (ET), a critical component of the water cycle, are needed for a better comprehension of the hydrological cycle at large scales; in most water balances ET is left as the residual. For operational purposes, such water balance estimates cannot rely on ET measurements, since these do not exist; they should be simple and require the least ground information possible, information that is often scarce or does not exist at all. Given this limitation, the use of remotely sensed data to estimate ET could supplement the lack of ground information, particularly in remote regions. In this study, a simple method based on the Makkink equation is used to estimate ET for large areas at high spatial resolution (1 km). The Makkink model used here is forced with three remotely sensed datasets. First, the model uses solar radiation estimates obtained from the Geostationary Operational Environmental Satellite (GOES); second, the model uses an Enhanced Vegetation Index (EVI) obtained from the Moderate-resolution Imaging Spectroradiometer (MODIS), normalized to get an estimate of vegetation amount and land use, which was
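
    The Makkink model estimates reference ET from incoming solar radiation and air temperature alone, which is what makes it attractive when only satellite forcing is available. A sketch using textbook FAO-style constants (the 0.65 coefficient and constants below are standard values, not taken from this record):

    ```python
    import numpy as np

    def makkink_et(rs, t_air):
        """Reference ET (mm/day) from solar radiation Rs (MJ m-2 day-1)
        and air temperature (deg C), Makkink (1957) formulation."""
        es = 0.6108 * np.exp(17.27 * t_air / (t_air + 237.3))  # sat. vapor pressure, kPa
        delta = 4098.0 * es / (t_air + 237.3) ** 2             # slope of es curve, kPa/degC
        gamma = 0.066                                          # psychrometric const., kPa/degC
        lam = 2.45                                             # latent heat, MJ/kg
        return 0.65 * delta / (delta + gamma) * rs / lam

    # GOES-style radiation forcing and a warm-season temperature (hypothetical values)
    print(makkink_et(rs=22.0, t_air=30.0))   # ~4.6 mm/day
    ```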

  16. Contributions to large scale and performance tests of the ATLAS online software

    International Nuclear Information System (INIS)

    Badescu, E.; Caprini, M.

    2003-01-01

    : once all processes are started and the controllers are in the Initial state, go to the Running state; lukewarm stop: reverse of the lukewarm start phase; warm start: once all processes are alive and all controllers are in the Configured state, go to the Running state; warm stop: reverse of the warm start phase. It was shown that the online system is capable of running on 111 PCs controlling a 3- or 4-level hierarchy of up to 111 run controllers. Furthermore, parallel partitions with a 2-level hierarchy of 11 run controllers were run successfully, demonstrating the principle of partition independence. The set of incremental configurations was run sequentially to study the system behaviour with increasing numbers of controllers and PCs. Aspects of inter-operability and correct system behaviour at a large scale were verified with the partition containing 111 controllers, which represents more than a factor of 10 in size compared to its current use in the test beam. In order to start studies of the online system for the next order of magnitude, the 4-level super partitions with 300 and 1000 crate controllers were exercised. Limits were found on the level of communication and state transition coordination which will be investigated further. (authors)

  17. A Heuristic Approach to Author Name Disambiguation in Bibliometrics Databases for Large-scale Research Assessments

    NARCIS (Netherlands)

    D'Angelo, C.A.; Giuffrida, C.; Abramo, G.

    2011-01-01

    National exercises for the evaluation of research activity by universities are becoming regular practice in ever more countries. These exercises have mainly been conducted through the application of peer-review methods. Bibliometrics has not been able to offer a valid large-scale alternative because

  18. The use of soil moisture - remote sensing products for large-scale groundwater modeling and assessment

    NARCIS (Netherlands)

    Sutanudjaja, E.H.

    2012-01-01

    In this thesis, the possibilities of using spaceborne remote sensing for large-scale groundwater modeling are explored. We focus on a soil moisture product called European Remote Sensing Soil Water Index (ERS SWI, Wagner et al., 1999) - representing the upper profile soil moisture. As a test-bed, we

  19. Meteorological impact assessment of possible large scale irrigation in Southwest Saudi Arabia

    NARCIS (Netherlands)

    Maat, ter H.W.; Hutjes, R.W.A.; Ohba, R.; Ueda, H.; Bisselink, B.; Bauer, T.

    2006-01-01

    On continental to regional scales, feedbacks between land use and land cover change and climate have been widely documented over the past 10-15 years. In the present study we explore the possibility that vegetation changes over much smaller areas may also affect local precipitation regimes. Large scale

  20. Assessment of economically optimal water management and geospatial potential for large-scale water storage

    Science.gov (United States)

    Weerasinghe, Harshi; Schneider, Uwe A.

    2010-05-01

    Water is an essential but limited and vulnerable resource for all socio-economic development and for maintaining healthy ecosystems. Water scarcity, accelerated by population expansion, improved living standards, and rapid growth in economic activities, has profound environmental and social implications. These include severe environmental degradation, declining groundwater levels, and increasing problems of water conflicts. Water scarcity is predicted to be one of the key factors limiting development in the 21st century. Climate scientists have projected spatial and temporal changes in precipitation and changes in the probability of intense floods and droughts in the future. As scarcity of accessible and usable water increases, demand for efficient water management and adaptation strategies increases as well. Addressing water scarcity requires an intersectoral and multidisciplinary approach to managing water resources. This would in turn safeguard social welfare and economic benefit at their optimal balance without compromising the sustainability of ecosystems. This paper presents a geographically explicit method to assess the potential for water storage with reservoirs and a dynamic model that identifies the dimensions and material requirements under an economically optimal water management plan. The methodology is applied to the Elbe and Nile river basins. Input data for geospatial analysis at the watershed level are taken from global data repositories and include data on elevation, rainfall, soil texture, soil depth, drainage, land use and land cover, which are then downscaled to 1 km spatial resolution. Runoff potential for different combinations of land use and hydraulic soil groups and for mean annual precipitation levels is derived by the SCS-CN method. Using the overlay and decision tree algorithms
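
    The SCS-CN runoff calculation mentioned above is compact enough to show in full: potential retention S is derived from the curve number, and direct runoff follows the standard quadratic relation once rainfall exceeds the initial abstraction. The curve number and storm depth below are hypothetical:

    ```python
    def scs_runoff(p_mm, cn):
        """Direct runoff depth (mm) from event rainfall via the SCS curve number method."""
        s = 25400.0 / cn - 254.0    # potential maximum retention, mm
        ia = 0.2 * s                # initial abstraction (standard assumption)
        if p_mm <= ia:
            return 0.0
        return (p_mm - ia) ** 2 / (p_mm + 0.8 * s)

    # Hypothetical: 60 mm storm on cropland over hydrologic soil group C (CN ~ 82)
    print(scs_runoff(60.0, 82))   # ~23 mm of runoff
    ```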

  1. Large-scale model-based assessment of deer-vehicle collision risk.

    Directory of Open Access Journals (Sweden)

    Torsten Hothorn

    Ungulates, in particular the Central European roe deer Capreolus capreolus and the North American white-tailed deer Odocoileus virginianus, are economically and ecologically important. The two species are risk factors for deer-vehicle collisions and, as browsers of palatable trees, have implications for forest regeneration. However, no large-scale management systems for ungulates have been implemented, mainly because of the high efforts and costs associated with attempts to estimate population sizes of free-living ungulates living in a complex landscape. Attempts to directly estimate population sizes of deer are problematic owing to poor data quality and lack of spatial representation at larger scales. We used data on >74,000 deer-vehicle collisions observed in 2006 and 2009 in Bavaria, Germany, to model the local risk of deer-vehicle collisions and to investigate the relationship between deer-vehicle collisions and both environmental conditions and browsing intensities. An innovative modelling approach for the number of deer-vehicle collisions, which allows nonlinear environment-deer relationships and assessment of spatial heterogeneity, was the basis for estimating the local risk of collisions for specific road types on the scale of Bavarian municipalities. Based on this risk model, we propose a new "deer-vehicle collision index" for deer management. We show that the risk of deer-vehicle collisions is positively correlated with browsing intensity and with harvest numbers. Overall, our results demonstrate that the number of deer-vehicle collisions can be predicted with high precision on the scale of municipalities. In the densely populated and intensively used landscapes of Central Europe and North America, a model-based risk assessment for deer-vehicle collisions provides a cost-efficient instrument for deer management on the landscape scale. The measures derived from our model provide valuable information for planning road protection and defining

  2. Age-related differences in the relations between individualised HRM and organisational performance: a large-scale employer survey

    NARCIS (Netherlands)

    Bal, P.M.; Dorenbosch, L.

    2015-01-01

    The current study aimed to investigate the relationship between individualised HRM practices and several measures of organisational performance, including the moderating role of employee age in these relationships. A large-scale representative study among 4,591 organisations in the Netherlands

  3. Study on sandstorm PM10 exposure assessment in the large-scale region: a case study in Inner Mongolia.

    Science.gov (United States)

    Wang, Hongmei; Lv, Shihai; Diao, Zhaoyan; Wang, Baolu; Zhang, Han; Yu, Caihong

    2018-04-12

    The current exposure-effect curves describing sandstorm PM10 exposure and its health effects are drawn roughly from the outdoor concentration (OC), which ignores the exposure levels at people's actual activity sites. The main objective of this work is to develop a novel approach to quantify human PM10 exposure weighted by socio-categorized micro-environment activity times (SCMEATW) during strong sandstorm periods, which can be used to assess exposure profiles over a large-scale region. Types of people's SCMEATW were obtained by questionnaire investigation. Representatives of the different types were tracked and recorded during a major sandstorm. The average exposure levels were estimated by SCMEATW. Furthermore, a geographic information system (GIS) technique was used not only to simulate the outdoor concentration spatially but also to create human exposure outlines in a visualized map, which could help in understanding the risk to different types of people. Additionally, exposure-response curves describing the odds of acute outpatient visits due to sandstorms were formed by SCMEATW, and the differences between SCMEATW and OC were compared. Results indicated that acute outpatient rate odds were related to PM10 exposure from SCMEATW, at a level less than that of OC. Some types of people, such as herdsmen and people walking outdoors during a strong sandstorm, are at greater risk than office workers. Our findings provide more understanding of the effect of people's actual activities on their exposure levels; in particular, they provide a tool for understanding sandstorm PM10 exposure spatially at large scale, which might help in performing regional risk assessments for different categories of the population.
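
    The essence of SCMEATW is a time-activity-weighted average: total dose (concentration times hours summed across micro-environments) divided by total time. A sketch with fabricated daily profiles, illustrating why an outdoor occupation yields a much higher exposure than an office routine even under the same outdoor concentration:

    ```python
    def time_weighted_exposure(segments):
        """Average PM10 exposure (ug/m3) weighted by time in each micro-environment.
        segments: iterable of (concentration_ug_m3, hours)."""
        total_dose = sum(c * t for c, t in segments)
        total_time = sum(t for _, t in segments)
        return total_dose / total_time

    # Hypothetical daily profiles during a sandstorm episode
    office_worker = [(900.0, 1.0), (150.0, 9.0), (80.0, 14.0)]   # commute, office, home
    herdsman = [(900.0, 8.0), (200.0, 4.0), (80.0, 12.0)]        # mostly outdoors
    print(time_weighted_exposure(office_worker))   # ~140 ug/m3
    print(time_weighted_exposure(herdsman))        # ~373 ug/m3
    ```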

  4. Performance analysis on a large scale borehole ground source heat pump in Tianjin cultural centre

    Science.gov (United States)

    Yin, Baoquan; Wu, Xiaoting

    2018-02-01

    In this paper, the temperature distribution of the geothermal field for a vertical borehole ground-coupled heat pump was tested and analysed. Besides the borehole ground-coupled heat pump, the system comprised ice storage, a heat supply network and a cooling tower. According to the operation data for nearly three years, the temperature-constant zone lies at a ground depth of 40 m-120 m, with a temperature gradient of about 3.0°C/100 m. The temperature of the soil dropped significantly in the heating season, increased significantly in the cooling season, and recovered in the transitional season. With the energy-balanced design of heating and cooling and the thermal inertia of the soil, the soil temperature stayed within a relatively stable range and the ground source heat pump system operated with relatively high efficiency. The ground source heat pump was shown to be applicable for large-scale utilization.

  5. International Large-Scale Assessment Studies and Educational Policy-Making in Chile: Contexts and Dimensions of Influence

    Science.gov (United States)

    Cox, Cristián; Meckes, Lorena

    2016-01-01

    Since the 1990s, Chile has participated in all major international large-scale assessment studies (ILSAs) of the IEA and OECD, as well as the regional ones conducted by UNESCO in Latin America, after it had been involved in the very first international Science Study in 1970-1971. This article examines the various ways in which these studies have…

  6. The Limits and Possibilities of International Large-Scale Assessments. Education Policy Brief. Volume 9, Number 2, Spring 2011

    Science.gov (United States)

    Rutkowski, David J.; Prusinski, Ellen L.

    2011-01-01

    The staff of the Center for Evaluation & Education Policy (CEEP) at Indiana University is often asked about how international large-scale assessments influence U.S. educational policy. This policy brief is designed to provide answers to some of the most frequently asked questions encountered by CEEP researchers concerning the three most popular…

  7. How International Large-Scale Skills Assessments Engage with National Actors: Mobilising Networks through Policy, Media and Public Knowledge

    Science.gov (United States)

    Hamilton, Mary

    2017-01-01

    This paper examines how international, large-scale skills assessments (ILSAs) engage with the broader societies they seek to serve and improve. It looks particularly at the discursive work that is done by different interest groups and the media through which the findings become part of public conversations and are translated into usable form in…

  8. The Contribution of International Large-Scale Assessments to Educational Research: Combining Individual and Institutional Data Sources

    Science.gov (United States)

    Strietholt, Rolf; Scherer, Ronny

    2018-01-01

    The present paper aims to discuss how data from international large-scale assessments (ILSAs) can be utilized and combined, even with other existing data sources, in order to monitor educational outcomes and study the effectiveness of educational systems. We consider different purposes of linking data, namely, extending outcomes measures,…

  9. Control protocol: large scale implementation at the CERN PS complex - a first assessment

    International Nuclear Information System (INIS)

    Abie, H.; Benincasa, G.; Coudert, G.; Davydenko, Y.; Dehavay, C.; Gavaggio, R.; Gelato, G.; Heinze, W.; Legras, M.; Lustig, H.; Merard, L.; Pearson, T.; Strubin, P.; Tedesco, J.

    1994-01-01

    The Control Protocol is a model-based, uniform access procedure from a control system to accelerator equipment. It was proposed at CERN about 5 years ago and prototypes were developed in the following years. More recently, this procedure has been finalized and implemented at a large scale in the PS Complex. More than 300 pieces of equipment are now using this protocol in normal operation and another 300 are under implementation. These include power converters, vacuum systems, beam instrumentation devices, RF equipment, etc. This paper describes how the single general procedure is applied to the different kinds of equipment. The advantages obtained are also discussed. ((orig.))

  10. Assessing the value of storage services in large-scale multireservoir systems

    Science.gov (United States)

    Tilmant, A.; Arjoon, D.; Guilherme, G. F.

    2012-12-01

    both countries, the highly contrasted hydrologic regime of the Euphrates river could only be dealt with through storage. However, due to political tensions, those projects were carried out without much cooperation and coordination among riparian countries. The development started in the late 1960s with the construction of the head reservoir in Turkey (Keban dam) and the most downstream reservoir in Syria (Tabqa dam). Thirty years later, five other dams in both countries had been commissioned, changing the economy of this region through the export of hydroelectric power (7812 MW) and agricultural products (cotton and cereals). The operating policies and marginal water values of this multipurpose multireservoir system are determined using Stochastic Dual Dynamic Programming, an optimization algorithm that can handle large-scale reservoir operation problems while keeping an individual representation of the hydraulic infrastructure and the demand sites. The analysis of the simulation results reveals that the average value of storage for the entire cascade of reservoirs is around 420 million US$/a, which is 18% of the annual short-run benefits of the system (2.26 billion US$/a).

  11. Integrating weather and geotechnical monitoring data for assessing the stability of large scale surface mining operations

    Science.gov (United States)

    Steiakakis, Chrysanthos; Agioutantis, Zacharias; Apostolou, Evangelia; Papavgeri, Georgia; Tripolitsiotis, Achilles

    2016-01-01

    The geotechnical challenges for safe slope design in large-scale surface mining operations are enormous. Sometimes one degree of slope inclination can significantly reduce the overburden-to-ore ratio and therefore dramatically improve the economics of the operation, while large-scale slope failures may have a significant impact on human lives. Furthermore, adverse weather conditions, such as high precipitation rates, may unfavorably affect the already delicate balance between operations and safety. Geotechnical, weather and production parameters should be systematically monitored and evaluated in order to safely operate such pits. Appropriate data management, processing and storage are critical to ensure timely and informed decisions. This paper presents an integrated data management system which was developed over a number of years, as well as its advantages, through a specific application. The presented case study illustrates how the high production slopes of a mine that exceed depths of 100-120 m were successfully mined with an average displacement rate of 10-20 mm/day, approaching a slow to moderate landslide velocity. Monitoring data of the past four years are included in the database and can be analyzed to produce valuable results. Time-series correlations of movements, precipitation records, etc. are evaluated and presented in this case study. The results can be used to successfully manage mine operations and ensure the safety of the mine and the workforce.
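
    One example of the time-series correlations such a monitoring database supports: a lagged cross-correlation between daily rainfall and slope displacement rate, which shows how quickly movement responds to precipitation. The data and the injected response lag below are synthetic:

    ```python
    import numpy as np

    def lagged_correlation(rain, disp, max_lag=10):
        """Correlation of displacement rate with rainfall shifted by 0..max_lag days."""
        n = len(rain)
        out = []
        for lag in range(max_lag + 1):
            out.append((lag, np.corrcoef(rain[: n - lag], disp[lag:])[0, 1]))
        return out

    rng = np.random.default_rng(3)
    rain = rng.gamma(0.5, 8.0, 365)                      # hypothetical daily rainfall, mm
    disp = 12 + 0.4 * np.convolve(rain, np.ones(3), "full")[:365] \
             + rng.normal(0, 2, 365)                     # mm/day, responds over ~3 days
    for lag, r in lagged_correlation(rain, disp, 5):
        print(f"lag {lag} d: r = {r:.2f}")
    ```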

  12. Performance on large-scale science tests: Item attributes that may impact achievement scores

    Science.gov (United States)

    Gordon, Janet Victoria

    Significant differences in achievement among ethnic groups persist on the eighth-grade science Washington Assessment of Student Learning (WASL). The WASL measures academic performance in science using both scenario and stand-alone question types. Previous research suggests that presenting target items connected to an authentic context, like scenario question types, can increase science achievement scores especially in underrepresented groups and thus help to close the achievement gap. The purpose of this study was to identify significant differences in performance between gender and ethnic subgroups by question type on the 2005 eighth-grade science WASL. MANOVA and ANOVA were used to examine relationships between gender and ethnic subgroups as independent variables with achievement scores on scenario and stand-alone question types as dependent variables. MANOVA revealed no significant effects for gender, suggesting that the 2005 eighth-grade science WASL was gender neutral. However, there were significant effects for ethnicity. ANOVA revealed significant effects for ethnicity and ethnicity by gender interaction in both question types. Effect sizes were negligible for the ethnicity by gender interaction. Large effect sizes between ethnicities on scenario question types became moderate to small effect sizes on stand-alone question types. This indicates the score advantage the higher performing subgroups had over the lower performing subgroups was not as large on stand-alone question types compared to scenario question types. A further comparison examined performance on multiple-choice items only within both question types. Similar achievement patterns between ethnicities emerged; however, achievement patterns between genders changed in boys' favor. Scenario question types appeared to register differences between ethnic groups to a greater degree than stand-alone question types. These differences may be attributable to individual differences in cognition

  13. Methodology for a GIS-based damage assessment for researchers following large scale disasters

    Science.gov (United States)

    Crawford, Patrick Shane

    research field. Along with visually mapping the data, geometric calculations can be conducted on the data to give the viewer more information about the damage. In Chapter 4, a tornado damage contour for Moore, Oklahoma following the May 20, 2013 tornado is shown. This damage contour was created in GIS based on the Enhanced Fujita (EF) damage scale, and gives the viewer an easily understood picture of the extent and distribution of the tornado damage. This thesis aims to describe a foundational groundwork for the activities performed in the GIS-based damage assessment procedure, to provide uses for the damage assessment, and to describe research being conducted on how to use the data collected from these assessments. This will allow researchers to conduct highly adaptable, rapid GIS-based damage assessments of their own.

  14. Differences Across Levels in the Language of Agency and Ability in Rating Scales for Large-Scale Second Language Writing Assessments

    OpenAIRE

    Anderson Salena Sampson

    2017-01-01

    While large-scale language and writing assessments benefit from a wealth of literature on the reliability and validity of specific tests and rating procedures, there is comparatively less literature that explores the specific language of second language writing rubrics. This paper provides an analysis of the language of performance descriptors for the public versions of the TOEFL and IELTS writing assessment rubrics, with a focus on linguistic agency encoded by agentive verbs and language of ...

  15. Performance Analysis of an Updraft Tower System for Dry Cooling in Large-Scale Power Plants

    Directory of Open Access Journals (Sweden)

    Haotian Liu

    2017-11-01

    An updraft tower cooling system is assessed for eliminating the water use associated with power plant heat rejection. Heat rejected from the power plant condenser is used to warm the air at the base of an updraft tower; buoyancy-driven air flows through a recuperative turbine inside the tower. The secondary loop, which couples the power plant condenser to a heat exchanger at the tower base, can be configured either as a constant-pressure pump cycle or as a vapor compression cycle. The novel use of a compressor can elevate the air temperature at the tower base to increase the turbine power recovery and decrease the power plant condensing temperature. The system's feasibility is evaluated by comparing the net power needed to operate the system against alternative dry cooling schemes. A thermodynamic model coupling all system components is developed for parametric studies and system performance evaluation. The model predicts that the constant-pressure pump cycle consumes less power than using a compressor; the extra compression power required for the temperature lift is much larger than the gain in turbine power output. The updraft tower system with a pumped secondary loop can thus allow dry cooling with less power plant efficiency penalty compared to air-cooled condensers.
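
    The feasibility question boils down to a net-power balance: turbine recovery from the chimney draft minus the power drawn by the secondary loop. A rough order-of-magnitude sketch, with every parameter value an illustrative assumption rather than design data from the paper:

    ```python
    def updraft_net_power(q_reject_mw, dT_air, height_m, eta_turbine=0.6,
                          pump_power_mw=0.5, t_ambient_k=300.0):
        """Rough net power (MW) of a pumped-loop updraft tower.
        All parameter values are illustrative assumptions, not design data."""
        g, cp = 9.81, 1005.0                       # gravity, air heat capacity (J/kg/K)
        m_dot = q_reject_mw * 1e6 / (cp * dT_air)  # air mass flow heated by dT_air (kg/s)
        rho = 1.18                                 # ambient air density (kg/m3)
        # Chimney draft: pressure head from the density deficit of heated air
        dp = rho * g * height_m * dT_air / t_ambient_k
        turbine_mw = eta_turbine * dp * m_dot / rho / 1e6
        return turbine_mw - pump_power_mw

    # 500 MW of rejected heat, 15 K air temperature rise, 200 m tower
    print(updraft_net_power(500.0, 15.0, 200.0))   # ~1.5 MW net recovery
    ```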

  16. Decision support for large-scale remediation strategies by fused urban metabolism and life cycle assessment

    DEFF Research Database (Denmark)

    Ohms, Pernille; Andersen, Camilla; Landgren, Mathilde

    2018-01-01

    Purpose: This paper seeks to identify the most environmentally friendly way of conducting a refurbishment of Broendby Strand, with a focus on PCB remediation. The identification is conducted by comparing four remediation techniques using urban metabolism fused with life cycle assessment (UM-LCA), in combination with information on the cost and efficiency of the compared techniques. The methodological goal of our paper is to test UM-LCA as a decision support tool and discuss application of the method in relation to large refurbishment projects. Methods: To assess the environmental performance of PCB-remediation techniques, the UM-LCA method was applied. By combining UM and LCA methodologies, the total environmental impact potentials of the remediation techniques were calculated. To build an inventory for each technique, we contacted and interviewed experts and studied existing literature, cases, and projects...

  17. Assessing large-scale weekly cycles in meteorological variables: a review

    Directory of Open Access Journals (Sweden)

    A. Sanchez-Lorenzo

    2012-07-01

    Several studies have claimed to have found significant weekly cycles of meteorological variables appearing over large domains, which can hardly be related to urban effects exclusively. Nevertheless, there is still an ongoing scientific debate about whether these large-scale weekly cycles exist, and some other studies fail to reproduce them with statistical significance. In addition to the lack of positive proof for the existence of these cycles, their possible physical explanations have been controversially discussed during the last years. In this work we review the main results on this topic published during the last two decades, including a summary of the existence or non-existence of significant weekly weather cycles across different regions of the world, mainly over the US, Europe and Asia. In addition, some shortcomings of common statistical methods for analyzing weekly cycles are listed. Finally, a brief summary of supposed causes of the weekly cycles is presented, focusing on aerosol-cloud-radiation interactions and their impact on meteorological variables as a result of the weekly cycles of anthropogenic activities, together with possible directions for future research.
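
    A common first test for a weekly cycle is a one-way ANOVA across days of the week; the shortcomings the review lists (autocorrelation, multiple testing) mean the naive version below can easily over-detect, so treat it as illustrative only. Data are synthetic, with a small injected midweek anomaly:

    ```python
    import numpy as np
    from scipy.stats import f_oneway

    rng = np.random.default_rng(11)
    days = np.arange(3650) % 7                 # 10 hypothetical years of daily data
    temperature = 15 + rng.normal(0, 3, 3650)
    temperature[days == 2] += 0.4              # inject a small midweek anomaly

    groups = [temperature[days == d] for d in range(7)]
    stat, p = f_oneway(*groups)
    print(f"ANOVA across weekdays: F = {stat:.2f}, p = {p:.4f}")
    # A small p-value suggests a weekly cycle, but autocorrelation and
    # multiple testing must be handled before drawing conclusions.
    ```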

  18. Performance Modeling of Hybrid MPI/OpenMP Scientific Applications on Large-scale Multicore Cluster Systems

    KAUST Repository

    Wu, Xingfu

    2011-08-01

    In this paper, we present a performance modeling framework based on memory bandwidth contention time and a parameterized communication model to predict the performance of OpenMP, MPI and hybrid applications with weak scaling on three large-scale multicore clusters: IBM POWER4, POWER5+ and Blue Gene/P, and analyze the performance of these MPI, OpenMP and hybrid applications. We use STREAM memory benchmarks to provide initial performance analysis and model validation of MPI and OpenMP applications on these multicore clusters, because the measured sustained memory bandwidth can provide insight into the memory bandwidth that a system should sustain on scientific applications with the same amount of workload per core. In addition to these benchmarks, we also use a weak-scaling hybrid MPI/OpenMP large-scale scientific application, the Gyrokinetic Toroidal Code (GTC) for magnetic fusion, to validate our performance model of the hybrid application on these multicore clusters. The validation results for our performance modeling method show less than a 7.77% error rate in predicting the performance of hybrid MPI/OpenMP GTC on up to 512 cores on these multicore clusters. © 2011 IEEE.
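
    The framework can be caricatured as an additive runtime model: per-core computation, memory-bandwidth contention shared by the cores on a node, and a parameterized communication term. The sketch below is a schematic of this style of model with made-up constants, not the paper's calibrated model:

    ```python
    def predicted_runtime(work_flops, flops_per_core, mem_bytes, node_bw_gbs,
                          cores_per_node, msg_bytes, latency_us, net_bw_gbs, n_msgs):
        """Hybrid MPI/OpenMP runtime: compute + memory contention + communication.
        A schematic of the modeling approach, not the paper's calibrated model."""
        t_comp = work_flops / flops_per_core
        # All cores on a node contend for the measured sustainable memory bandwidth
        t_mem = mem_bytes * cores_per_node / (node_bw_gbs * 1e9)
        t_comm = n_msgs * (latency_us * 1e-6 + msg_bytes / (net_bw_gbs * 1e9))
        return t_comp + t_mem + t_comm

    # Weak scaling: per-core work is fixed, so only the communication term grows
    for nodes in (1, 4, 16):
        print(nodes, predicted_runtime(1e10, 2e9, 4e9, 10.0, 4,
                                       1e5, 5.0, 1.0, 100 * nodes))
    ```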

  19. Performance Modeling of Hybrid MPI/OpenMP Scientific Applications on Large-scale Multicore Cluster Systems

    KAUST Repository

    Wu, Xingfu; Taylor, Valerie

    2011-01-01

    In this paper, we present a performance modeling framework based on memory bandwidth contention time and a parameterized communication model to predict the performance of OpenMP, MPI and hybrid applications with weak scaling on three large-scale multicore clusters: IBM POWER4, POWER5+ and Blue Gene/P, and analyze the performance of these MPI, OpenMP and hybrid applications. We use STREAM memory benchmarks to provide initial performance analysis and model validation of MPI and OpenMP applications on these multicore clusters, because the measured sustained memory bandwidth can provide insight into the memory bandwidth that a system should sustain on scientific applications with the same amount of workload per core. In addition to these benchmarks, we also use a weak-scaling hybrid MPI/OpenMP large-scale scientific application, the Gyrokinetic Toroidal Code (GTC) for magnetic fusion, to validate our performance model of the hybrid application on these multicore clusters. The validation results for our performance modeling method show less than a 7.77% error rate in predicting the performance of hybrid MPI/OpenMP GTC on up to 512 cores on these multicore clusters. © 2011 IEEE.

  20. Simulation of buoyancy induced gas mixing tests performed in a large scale containment facility using GOTHIC code

    Energy Technology Data Exchange (ETDEWEB)

    Liang, Z.; Chin, Y.S. [Atomic Energy of Canada Limited, Chalk River, ON (Canada)

    2014-07-01

    This paper compares containment thermal-hydraulics simulations performed using GOTHIC against a set of past large-scale buoyancy-induced helium-air-steam mixing experiments that had been performed at AECL's Chalk River Laboratories. A number of typical post-accident containment phenomena, including thermal/gas stratification, natural convection, cool air entrainment, steam condensation on concrete walls and active local air coolers, were covered. The results provide useful insights into hydrogen gas mixing behaviour following a loss-of-coolant accident and demonstrate GOTHIC's capability in simulating these phenomena. (author)

  1. Simulation of buoyancy induced gas mixing tests performed in a large scale containment facility using GOTHIC code

    International Nuclear Information System (INIS)

    Liang, Z.; Chin, Y.S.

    2014-01-01

    This paper compares containment thermal-hydraulics simulations performed using GOTHIC against a set of past large-scale buoyancy-induced helium-air-steam mixing experiments that had been performed at AECL's Chalk River Laboratories. A number of typical post-accident containment phenomena, including thermal/gas stratification, natural convection, cool air entrainment, steam condensation on concrete walls and active local air coolers, were covered. The results provide useful insights into hydrogen gas mixing behaviour following a loss-of-coolant accident and demonstrate GOTHIC's capability in simulating these phenomena. (author)

  2. Performance analysis of large-scale applications based on wavefront algorithms

    International Nuclear Information System (INIS)

    Hoisie, A.; Lubeck, O.; Wasserman, H.

    1998-01-01

    The authors introduced a performance model for parallel, multidimensional, wavefront calculations with machine performance characterized using the LogGP framework. The model accounts for overlap in the communication and computation components. The agreement with experimental data is very good under a variety of model sizes, data partitioning, blocking strategies, and on three different parallel architectures. Using the model, the authors analyzed performance of a deterministic transport code on a hypothetical 100 Tflops future parallel system of interest to ASCI
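
    Wavefront performance models of this kind typically combine a pipeline-fill term with a per-step cost of computation plus one LogGP-priced message. A schematic sketch with illustrative parameters (not the paper's calibrated values):

    ```python
    def wavefront_sweep_time(px, py, n_steps, t_cell, cells_per_step,
                             L=5e-6, o=1e-6, G=1e-9, msg_bytes=8192):
        """LogGP-flavored model of a 2D wavefront (KBA-style) sweep, in seconds.
        Illustrative parameter values; not the calibrated model from the paper."""
        t_msg = L + 2 * o + (msg_bytes - 1) * G    # LogGP cost of one message
        t_step = cells_per_step * t_cell + t_msg   # one pipeline stage
        pipeline_fill = px + py - 2                # steps before the far corner starts
        return (pipeline_fill + n_steps) * t_step

    # 16x16 processor grid, 1000 wavefront steps, 100 cells per step per processor
    print(wavefront_sweep_time(16, 16, 1000, t_cell=50e-9, cells_per_step=100))
    ```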

  3. Assessment of Vehicle Sizing, Energy Consumption and Cost Through Large Scale Simulation of Advanced Vehicle Technologies

    Energy Technology Data Exchange (ETDEWEB)

    Moawad, Ayman [Argonne National Lab. (ANL), Argonne, IL (United States); Kim, Namdoo [Argonne National Lab. (ANL), Argonne, IL (United States); Shidore, Neeraj [Argonne National Lab. (ANL), Argonne, IL (United States); Rousseau, Aymeric [Argonne National Lab. (ANL), Argonne, IL (United States)

    2016-01-01

    The U.S. Department of Energy (DOE) Vehicle Technologies Office (VTO) has been developing more energy-efficient and environmentally friendly highway transportation technologies that will enable America to use less petroleum. The long-term aim is to develop "leapfrog" technologies that will provide Americans with greater freedom of mobility and energy security, while lowering costs and reducing impacts on the environment. This report reviews the results of the DOE VTO. It gives an assessment of the fuel and light-duty vehicle technologies that are most likely to be established, developed, and eventually commercialized during the next 30 years (up to 2045). Because of the rapid evolution of component technologies, this study is performed every two years to continuously update the results based on the latest state-of-the-art technologies.

  4. Assessing Human Modifications to Floodplains using Large-Scale Hydrogeomorphic Floodplain Modeling

    Science.gov (United States)

    Morrison, R. R.; Scheel, K.; Nardi, F.; Annis, A.

    2017-12-01

    Human modifications to floodplains for water resource and flood management purposes have significantly transformed river-floodplain connectivity dynamics in many watersheds. Bridges, levees, reservoirs, shifts in land use, and other hydraulic engineering works have altered flow patterns and caused changes in the timing and extent of floodplain inundation processes. These hydrogeomorphic changes have likely resulted in negative impacts on aquatic habitat and ecological processes. The availability of large-scale, high-resolution topographic datasets provides an opportunity for detecting anthropogenic impacts by means of geomorphic mapping. We have developed and are implementing a methodology for comparing a hydrogeomorphic floodplain mapping technique to hydraulically modeled floodplain boundaries to estimate floodplain loss due to human activities. Our hydrogeomorphic mapping methodology assumes that river valley morphology intrinsically includes information on flood-driven erosion and depositional phenomena. We use a digital elevation model-based algorithm to identify the floodplain as the area of the fluvial corridor lying below water reference levels, which are estimated using a simplified hydrologic model. Results from our hydrogeomorphic method are compared to hydraulically derived flood zone maps and spatial datasets of levee-protected areas to explore where water management features, such as levees, have changed floodplain dynamics and landscape features. Parameters associated with commonly used F-index functions are quantified and analyzed to better understand how floodplain areas have been reduced within a basin. Preliminary results indicate that the hydrogeomorphic floodplain model is useful for quickly delineating floodplains at large watershed scales, but further analyses are needed to understand the caveats of using the model to determine floodplain loss due to levees. We plan to continue this work by exploring the spatial dependencies of the F

  5. Assessment of Future Whole-System Value of Large-Scale Pumped Storage Plants in Europe

    Directory of Open Access Journals (Sweden)

    Fei Teng

    2018-01-01

    This paper analyses the impacts and benefits of the pumped storage plant (PSP) and its upgrade to variable speed on generation and transmission capacity requirements, capital costs, system operating costs and carbon emissions in the future European electricity system. The combination of a deterministic system planning tool, the Whole-electricity System Investment Model (WeSIM), and a stochastic system operation optimisation tool, Advanced Stochastic Unit Commitment (ASUC), is used to analyse the whole-system value of PSP technology and to quantify the impact of European balancing market integration and other competing flexible technologies on the value of the PSP. Case studies on the pan-European system demonstrate that PSPs can reduce the total system cost by up to €13 billion per annum by 2050 in a scenario with a high share of renewables. Upgrading the PSP to a variable-speed drive enhances its long-term benefits by 10-20%. On the other hand, balancing market integration across Europe may potentially reduce the overall value of the variable-speed PSP, although the effect can vary across different European regions. The results also suggest that large-scale deployment of demand-side response (DSR) leads to a significant reduction in the value of PSPs, while the value of PSPs increases by circa 18% when the total European interconnection capacity is halved. The benefit of PSPs in reducing emissions is relatively negligible by 2030 but constitutes around 6-10% of total annual carbon emissions from the European power sector by 2050.

  6. A new method for large-scale assessment of change in ecosystem functioning in relation to land degradation

    Science.gov (United States)

    Horion, Stephanie; Ivits, Eva; Verzandvoort, Simone; Fensholt, Rasmus

    2017-04-01

    Ongoing pressures on European land are manifold, with extreme climate events and non-sustainable use of land resources being amongst the most important drivers altering the functioning of ecosystems. The protection and conservation of European natural capital is one of the key objectives of the 7th Environmental Action Plan (EAP). The EAP stipulates that European land must be managed in a sustainable way by 2020, and the UN Sustainable Development Goals define a land-degradation-neutral world as one of their targets. This implies that land degradation (LD) assessment of European ecosystems must be performed repeatedly, allowing for the assessment of the current state of LD as well as changes compared to a baseline adopted by the UNCCD for the objective of land degradation neutrality. However, scientifically robust methods are still lacking for large-scale assessment of LD and repeated, consistent mapping of the state of terrestrial ecosystems. Historical land degradation assessments based on various methods exist, but the methods are generally non-replicable or difficult to apply at continental scale (Allan et al. 2007). The current lack of research methods applicable at large spatial scales is notably caused by the non-robust definition of LD, the scarcity of field data on LD, and the complex interplay of the processes driving LD (Vogt et al., 2011). Moreover, the link between LD and changes in land use (how land use change relates to changes in vegetation productivity and ecosystem functioning) is not straightforward. In this study we used the segmented trend method developed by Horion et al. (2016) for large-scale systematic assessment of hotspots of change in ecosystem functioning in relation to LD. This method alleviates the shortcomings of the widely used linear trend model, which does not account for abrupt change, nor adequately captures actual changes in ecosystem functioning (de Jong et al. 2013; Horion et al. 2016). Here we present a new methodology for

  7. Performance of large-scale scientific applications on the IBM ASCI Blue-Pacific system

    International Nuclear Information System (INIS)

    Mirin, A.

    1998-01-01

    The IBM ASCI Blue-Pacific System is a scalable, distributed/shared memory architecture designed to reach multi-teraflop performance. The IBM SP pieces together a large number of nodes, each having a modest number of processors. The system is designed to accommodate a mixed programming model as well as a pure message-passing paradigm. We examine a number of applications on this architecture and evaluate their performance and scalability

  8. A visual analytics system for optimizing the performance of large-scale networks in supercomputing systems

    Directory of Open Access Journals (Sweden)

    Takanori Fujiwara

    2018-03-01

    The overall efficiency of an extreme-scale supercomputer largely relies on the performance of its network interconnects. Several state-of-the-art supercomputers use networks based on the increasingly popular Dragonfly topology. It is crucial to study the behavior and performance of different parallel applications running on Dragonfly networks in order to make optimal system configurations and design choices, such as job scheduling and routing strategies. However, in order to study this temporal network behavior, we need a tool to analyze and correlate the numerous sets of multivariate time-series data collected from the Dragonfly's multi-level hierarchies. This paper presents such a tool: a visual analytics system for investigating the temporal behavior and optimizing the communication performance of a supercomputer that uses a Dragonfly network. We coupled interactive visualization with time-series analysis methods to help reveal hidden patterns in the network behavior with respect to different parallel applications and system configurations. Our system also provides multiple coordinated views for connecting behaviors observed at different levels of the network hierarchies, which effectively supports visual analysis tasks. We demonstrate the effectiveness of the system with a set of case studies. Our system and findings can help improve not only the communication performance of supercomputing applications, but also the network performance of next-generation supercomputers. Keywords: Supercomputing, Parallel communication network, Dragonfly networks, Time-series data, Performance analysis, Visual analytics

  9. Large-scale renewable energy project barriers: Environmental impact assessment streamlining efforts in Japan and the EU

    International Nuclear Information System (INIS)

    Schumacher, Kim

    2017-01-01

    Environmental Impact Assessment (EIA) procedures have been identified as a major barrier to renewable energy (RE) development with regard to large-scale projects (LS-RE). However, EIA laws have also been neglected by many decision-makers, who have been underestimating their impact on RE development and the stifling potential they possess. As a consequence, apart from acknowledging the shortcomings of the systems currently in place, few governments currently have concrete plans to reform their EIA laws. By looking at recent EIA streamlining efforts in two industrialized regions that underwent major transformations in their energy sectors, this paper assesses how such reform efforts can help balance environmental protection and climate change mitigation with socio-economic challenges. The paper fills this gap by identifying the strengths and weaknesses of the Japanese EIA law, contrasting it with the recently revised EIA Directive of the European Union (EU). This enables the identification of the regulatory provisions that impact RE development the most and the determination of how structured EIA law reforms would affect domestic RE project development. The main focus lies on the evaluation of regulatory streamlining efforts in the Japanese and EU contexts through a mixed-methods approach, consisting of in-depth literature and legal reviews, followed by a comparative analysis and a series of semi-structured interviews. Highlighting several legal inconsistencies, in combination with the views of EIA professionals, academics and law- and policymakers, allowed for a more comprehensive assessment of which streamlining elements of the reformed EU EIA Directive and the proposed Japanese EIA framework modifications could either promote or stifle further RE deployment. - Highlights: •Performs an in-depth review of EIA reforms in OECD territories •First paper to compare Japan and the European

  10. Non-destructive screening method for radiation hardened performance of large scale integration

    International Nuclear Information System (INIS)

    Zhou Dong; Xi Shanbin; Guo Qi; Ren Diyuan; Li Yudong; Sun Jing; Wen Lin

    2013-01-01

    The space radiation environment can induce radiation damage in electronic devices. As the performance of commercial devices is generally superior to that of radiation-hardened devices, it is worthwhile to screen out the commercial devices with good radiation-hardened performance; applying such devices in space systems could improve system reliability. Combining mathematical regression analysis with different physical stressing experiments, we investigated a non-destructive screening method for the radiation-hardened performance of integrated circuits. The relationship between the change of typical parameters and the radiation performance of the circuit was discussed. The irradiation-sensitive parameters were identified, and a multiple linear regression equation for predicting the radiation performance was established. Finally, the regression equations under stress conditions were verified by actual irradiation. The results show that the reliability and accuracy of the non-destructive screening method can be improved by combining mathematical regression analysis with practical stressing experiments. (authors)
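
    The screening idea reduces to fitting a multiple linear regression of radiation-induced degradation on pre-irradiation stress measurements, then accepting parts whose predicted degradation falls below a threshold. The sketch below uses fabricated measurements and an arbitrary acceptance threshold:

    ```python
    import numpy as np

    rng = np.random.default_rng(5)
    n = 60
    # Hypothetical pre-irradiation stress measurements for 60 commercial parts,
    # e.g., supply-current shift and threshold drift after burn-in style stressing
    x1 = rng.normal(1.0, 0.2, n)
    x2 = rng.normal(0.5, 0.1, n)
    # Hypothetical post-irradiation degradation used to fit the screening equation
    y = 2.0 + 1.5 * x1 + 3.0 * x2 + rng.normal(0, 0.1, n)

    # Multiple linear regression: y ~ b0 + b1*x1 + b2*x2, fit by least squares
    A = np.column_stack([np.ones(n), x1, x2])
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    print("fitted screening equation:", coef)

    # Screening: predict degradation for a new part, keep it if below a threshold
    new_part = np.array([1.0, 0.9, 0.45])
    predicted = new_part @ coef
    print("predicted degradation:", predicted, "accept:", predicted < 5.0)
    ```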

  11. Modeling electrochemical performance in large scale proton exchange membrane fuel cell stacks

    Energy Technology Data Exchange (ETDEWEB)

    Lee, J H [Los Alamos National Lab., NM (United States); Lalk, T R [Texas A and M Univ., College Station, TX (United States). Dept. of Mechanical Engineering; Appleby, A J [Center for Electrochemical Studies and Hydrogen Research, Texas Engineering Experimentation Station, Texas A and M Univ., College Station, TX (United States)

    1998-02-01

    The processes, losses, and electrical characteristics of a Membrane-Electrode Assembly (MEA) of a Proton Exchange Membrane Fuel Cell (PEMFC) are described. In addition, a technique for numerically modeling the electrochemical performance of an MEA, developed specifically to be implemented as part of a numerical model of a complete fuel cell stack, is presented. The technique was demonstrated by modeling the MEA of a 350 cm{sup 2}, 125 cell PEMFC and combining it with a dynamic fuel cell stack model developed by the authors. Results from the demonstration that pertain to the MEA sub-model are given and described. These include plots of the temperature, pressure, humidity, and oxygen partial pressure distributions for the middle MEA of the modeled stack, as well as the corresponding current produced by that MEA. The demonstration showed that models developed using this technique produce results that are reasonable when compared to established performance expectations and experimental results. (orig.)
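
    To make the modeling idea concrete, the following minimal sketch composes a stack estimate from an empirical per-cell polarization curve. The functional form V = E0 - b*ln(i) - R*i - m*exp(n*i), covering activation, ohmic, and mass-transport losses, is a common empirical fit rather than the paper's model; all constants are illustrative assumptions, and only the 350 cm{sup 2} area and 125-cell count come from the abstract.

```python
# Minimal per-MEA polarization sketch of the kind a stack model composes
# cell by cell. All fit constants below are illustrative placeholders.
import numpy as np

def cell_voltage(i, E0=1.0, b=0.05, R=2.5e-4, m=2.1e-5, n=8e-3):
    """i: current density [mA/cm^2]; returns cell voltage [V]."""
    return E0 - b * np.log(i) - R * i - m * np.exp(n * i)

i = np.linspace(1, 1000, 200)            # current density sweep [mA/cm^2]
v = cell_voltage(i)

area_cm2, n_cells = 350.0, 125           # geometry quoted in the abstract
stack_power = n_cells * v * i * area_cm2 / 1e6   # mA*V*cm^2 -> kW
print(f"peak modeled stack power: {stack_power.max():.1f} kW")
```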

  12. Large-scale performance and design for construction activity erosion control best management practices.

    Science.gov (United States)

    Faucette, L B; Scholl, B; Beighley, R E; Governo, J

    2009-01-01

    The National Pollutant Discharge Elimination System (NPDES) Phase II requires construction activities to have erosion and sediment control best management practices (BMPs) designed and installed for site storm water management. Although BMPs are specified on storm water pollution prevention plans (SWPPPs) as part of the construction general permit (GP), there is little evidence in the research literature as to how BMPs perform or should be designed. The objectives of this study were to: (i) comparatively evaluate the performance of common construction activity erosion control BMPs under a standardized test method, (ii) evaluate the effect of compost erosion control blanket thickness, (iii) evaluate the performance of compost erosion control blankets (CECBs) on a variety of slope angles, and (iv) determine Universal Soil Loss Equation (USLE) cover management factors (C factors) for these BMPs to assist site designers and engineers. Twenty-three erosion control BMPs were evaluated using American Society for Testing and Materials (ASTM) D-6459, the standard test method for determining ECB performance in protecting hill slopes from rainfall-induced erosion, on 4:1 (H:V), 3:1, and 2:1 slopes. Soil loss reduction for treatments exposed to 5 cm of rainfall on a 2:1 slope ranged from -7 to 99%. For rainfall exposure of 10 cm, treatment soil loss reduction ranged from 8 to 99%. The 2.5 and 5 cm CECBs significantly reduced erosion on slopes up to 2:1, while CECBs thinner than 2.5 cm should be limited to slopes of 4:1 or flatter when rainfall totals reach 5 cm. Based on the soil loss results, USLE C factors ranged from 0.01 to 0.9. These performance and design criteria should aid site planners and designers in decision-making processes.
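
    For readers applying the reported C factors, the sketch below shows how they enter the Universal Soil Loss Equation, A = R*K*LS*C*P. Only the C-factor range (0.01 to 0.9) comes from the study; the other inputs are hypothetical site values.

```python
# Hedged USLE example: only the C-factor endpoints come from the abstract;
# R, K, and LS are invented site parameters.
def usle_soil_loss(R, K, LS, C, P=1.0):
    """Annual soil loss A from rainfall erosivity R, soil erodibility K,
    slope length-steepness LS, cover factor C, and practice factor P."""
    return R * K * LS * C * P

R, K, LS = 3000.0, 0.03, 2.5   # hypothetical site parameters
for label, C in [("best-performing BMP", 0.01), ("worst-performing BMP", 0.9)]:
    print(f"{label}: A = {usle_soil_loss(R, K, LS, C):.1f} (soil loss units)")
```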

  13. Multiple Skills Underlie Arithmetic Performance: A Large-Scale Structural Equation Modeling Analysis

    Directory of Open Access Journals (Sweden)

    Sarit Ashkenazi

    2017-12-01

    Current theoretical approaches point to the importance of several cognitive skills not specific to mathematics for the etiology of mathematics disorders (MD). In the current study, we examined the role of many of these skills, specifically rapid automatized naming, attention, reading, and visual perception, on mathematics performance among a large group of college students (N = 1,322) with a wide range of arithmetic proficiency. Using factor analysis, we discovered that our data clustered into four latent variables: (1) mathematics, (2) perception speed, (3) attention, and (4) reading. In subsequent structural equation modeling, we found that the latent variable perception speed had a strong and meaningful effect on mathematics performance. Moreover, sustained attention, independent of the effect of the latent variable perception speed, had a meaningful, direct effect on arithmetic fact retrieval and procedural knowledge. The latent variable reading had a modest effect on mathematics performance. Specifically, reading comprehension, independent of the effect of the latent variable reading, had a meaningful direct effect on mathematics, and particularly on number line knowledge. Attention, tested by the attention network test, had no effect on mathematics, reading, or perception speed. These results indicate that multiple factors can affect mathematics performance, supporting a heterogeneous approach to mathematics. These results have meaningful implications for the diagnosis of and intervention in pure and comorbid learning disorders.
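
    A minimal sketch of the first analytic step, factor extraction, is given below on simulated scores. The study's actual pipeline continued with structural equation modeling; the task count and loadings here are invented, and only the sample size (N = 1,322) and the four-factor structure come from the abstract.

```python
# Illustrative factor extraction on simulated task scores; not the study's
# code or data. Four latent factors stand in for mathematics, perception
# speed, attention, and reading.
import numpy as np
from sklearn.decomposition import FactorAnalysis

rng = np.random.default_rng(1)
n_students, n_tasks = 1322, 12           # N from the abstract; task count assumed
latent = rng.normal(size=(n_students, 4))
loadings = rng.normal(scale=0.8, size=(4, n_tasks))
scores = latent @ loadings + rng.normal(scale=0.5, size=(n_students, n_tasks))

fa = FactorAnalysis(n_components=4, rotation="varimax").fit(scores)
print("estimated loadings (tasks x factors):")
print(np.round(fa.components_.T, 2))
```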

  14. How brain asymmetry relates to performance – a large-scale dichotic listening study

    Directory of Open Access Journals (Sweden)

    Marco Hirnstein

    2014-01-01

    All major mental functions, including language, spatial, and emotional processing, are lateralized, but how strongly and to which hemisphere is subject to inter- and intraindividual variation. Relatively little, however, is known about how the degree and direction of lateralization affect how well the functions are carried out, i.e., how lateralization and task performance are related. The present study therefore examined the relationship between lateralization and performance in a dichotic listening (DL) task for which we had data available from 1,839 participants. In this task, consonant-vowel syllables are presented simultaneously to the left and right ear, such that each ear receives a different syllable. When asked which of the two they heard best, participants typically report more syllables from the right ear, which is a marker of left-hemispheric speech dominance. We calculated the degree of lateralization (based on the difference between correct left and right ear reports) and correlated it with overall response accuracy (left plus right ear reports). In addition, we used reference models to control for statistical interdependency between left and right ear reports. The results revealed a u-shaped relationship between degree of lateralization and overall accuracy: the stronger the left or right ear advantage, the better the overall accuracy. This u-shaped asymmetry-performance relationship consistently emerged in males, females, right-/non-right-handers, and different age groups. Taken together, the present study demonstrates that performance on lateralized language functions depends on how strongly these functions are lateralized. The present study further stresses the importance of controlling for statistical interdependency when examining asymmetry-performance relationships in general.
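
    The core quantities are easy to reproduce. The sketch below (simulated data; the trial count is assumed, only the sample size comes from the abstract) computes a conventional laterality index from left/right ear reports and tests for the u-shape by correlating its absolute value with overall accuracy.

```python
# Simulated asymmetry-performance analysis; not the study's data or its
# reference-model correction for statistical interdependency.
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(2)
n = 1839                                  # sample size from the abstract
trials = 36                               # hypothetical trial count
right = rng.binomial(trials, rng.uniform(0.3, 0.7, size=n))  # right-ear reports
left = rng.binomial(trials, rng.uniform(0.2, 0.5, size=n))   # left-ear reports

li = 100 * (right - left) / (right + left)    # common laterality index
accuracy = (right + left) / (2 * trials)      # normalized overall reports

# Correlating accuracy with the *absolute* degree of lateralization probes
# the u-shape: stronger asymmetry in either direction, better performance.
r, p = pearsonr(np.abs(li), accuracy)
print(f"|LI| vs accuracy: r = {r:.3f}, p = {p:.3g}")
```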

  15. A Parametric Genetic Algorithm Approach to Assess Complementary Options of Large Scale Wind-solar Coupling

    Institute of Scientific and Technical Information of China (English)

    Tim Mareda; Ludovic Gaudard; Franco Romerio

    2017-01-01

    The transitional path towards a highly renewable power system based on wind and solar energy sources is investigated considering their intermittent and spatially distributed characteristics. Using an extensive weather-driven simulation of hourly power mismatches between generation and load, we explore the interplay between geographical resource complementarity and energy storage strategies. Solar and wind resources are considered at variable spatial scales across Europe and related to the Swiss load curve, which serves as a typical demand-side reference. The optimal spatial distribution of renewable units is further assessed through a parameterized optimization method based on a genetic algorithm. This allows us to explore systematically the effective potential of combined integration strategies depending on the sizing of the system, with a focus on how overall performance is affected by the definition of network boundaries. Upper bounds on integration schemes are provided considering both renewable penetration and the needed reserve power capacity. The quantitative trade-off between grid extension, storage, and the optimal wind-solar mix is highlighted. The paper also brings insights into how the optimal geographical distribution of renewable units evolves as a function of renewable penetration and grid extent.
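
    As a toy analogue of the parameterized optimization, the sketch below runs a bare-bones genetic algorithm over a single parameter, the wind share of installed capacity, to minimize the mean hourly mismatch against a synthetic load curve. The weather series, load shape, and GA settings are all stand-ins for the paper's weather-driven simulation.

```python
# Toy single-parameter GA; illustrates the search principle only.
import numpy as np

rng = np.random.default_rng(3)
hours = 24 * 365
t = np.arange(hours)
solar = np.clip(np.sin(2 * np.pi * (t % 24 - 6) / 24), 0, None)   # diurnal proxy
wind = np.clip(rng.normal(0.4, 0.25, hours), 0, 1)                # stochastic proxy
load = 1.0 + 0.2 * np.sin(2 * np.pi * t / 24)                     # demand proxy

def fitness(alpha):
    """Negative mean absolute mismatch for wind share alpha (solar gets
    1 - alpha), with generation scaled to meet average load."""
    gen = alpha * wind + (1 - alpha) * solar
    gen *= load.mean() / gen.mean()
    return -np.mean(np.abs(gen - load))

pop = rng.uniform(0, 1, 30)                        # initial population
for _ in range(50):                                # generations
    fit = np.array([fitness(a) for a in pop])
    parents = pop[np.argsort(fit)][-10:]           # truncation selection
    children = rng.choice(parents, 30) + rng.normal(0, 0.05, 30)  # mutation
    pop = np.clip(children, 0, 1)

best = pop[np.argmax([fitness(a) for a in pop])]
print(f"optimal wind share ~ {best:.2f}")
```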

  16. A Hybrid Testbed for Performance Evaluation of Large-Scale Datacenter Networks

    DEFF Research Database (Denmark)

    Pilimon, Artur; Ruepp, Sarah Renée

    2018-01-01

    Datacenters (DC) as well as their network interconnects are growing in scale and complexity. They are constantly being challenged in terms of energy and resource utilization efficiency, scalability, availability, reliability and performance requirements. Therefore, these resource-intensive environments must be properly tested and analyzed in order to make timely upgrades and transformations. However, a limited number of academic institutions and Research and Development companies have access to production-scale DC Network (DCN) testing facilities, and resource-limited studies can produce misleading or inaccurate results. To address this problem, we introduce an alternative solution, which forms a solid base for a more realistic and comprehensive performance evaluation of different aspects of DCNs. It is based on the System-in-the-loop (SITL) concept, where real commercial DCN equipment...

  17. Strategic Planning Tools for Large-Scale Technology-Based Assessments

    Science.gov (United States)

    Koomen, Marten; Zoanetti, Nathan

    2018-01-01

    Education systems are increasingly being called upon to implement new technology-based assessment systems that generate efficiencies, better meet changing stakeholder expectations, or fulfil new assessment purposes. These assessment systems require coordinated organisational effort to implement and can be expensive in time, skill and other…

  18. High performance graphics processor based computed tomography reconstruction algorithms for nuclear and other large scale applications.

    Energy Technology Data Exchange (ETDEWEB)

    Jimenez, Edward S. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Orr, Laurel J. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Thompson, Kyle R. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)

    2013-09-01

    The goal of this work is to develop a fast computed tomography (CT) reconstruction algorithm based on graphics processing units (GPU) that achieves significant improvement over traditional central processing unit (CPU) based implementations. The main challenge in developing a CT algorithm that is capable of handling very large datasets is parallelizing the algorithm in such a way that data transfer does not hinder performance of the reconstruction algorithm. General Purpose Graphics Processing (GPGPU) is a new technology that the Science and Technology (S&T) community is starting to adopt in many fields where CPU-based computing is the norm. GPGPU programming requires a new approach to algorithm development that utilizes massively multi-threaded environments. Multi-threaded algorithms in general are difficult to optimize, since performance bottlenecks such as memory latencies occur that are non-existent in single-threaded algorithms. If an efficient GPU-based CT reconstruction algorithm can be developed, computational times could be improved by a factor of 20. Additionally, cost benefits will be realized as commodity graphics hardware could potentially replace expensive supercomputers and high-end workstations. This project will take advantage of the CUDA programming environment and attempt to parallelize the task in such a way that multiple slices of the reconstruction volume are computed simultaneously. This work will also take advantage of the GPU memory by utilizing asynchronous memory transfers, GPU texture memory, and (when possible) pinned host memory so that the memory transfer bottleneck inherent to GPGPU is amortized. Additionally, this work will take advantage of GPU-specific hardware (i.e. fast texture memory, pixel-pipelines, hardware interpolators, and varying memory hierarchy) that will allow for additional performance improvements.
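
    The decomposition that makes the problem GPU-friendly can be shown in miniature: each z-slice reconstructs independently, so slices map naturally onto separate thread blocks and asynchronous transfer streams. The NumPy sketch below uses a simplified parallel-beam, unfiltered backprojection as a stand-in for the project's CUDA kernels; geometry and data are invented.

```python
# CPU/NumPy illustration of the slice-parallel structure exploited on GPUs.
# Simplified parallel-beam geometry, unfiltered backprojection, random data.
import numpy as np

def backproject_slice(sinogram, angles, size):
    """Unfiltered backprojection of one 2-D slice from its sinogram."""
    recon = np.zeros((size, size))
    xs = np.arange(size) - size / 2
    X, Y = np.meshgrid(xs, xs)
    for sino_row, theta in zip(sinogram, angles):
        # Detector coordinate of every voxel for this view angle.
        s = X * np.cos(theta) + Y * np.sin(theta) + size / 2
        recon += np.interp(s.ravel(), np.arange(size), sino_row).reshape(size, size)
    return recon / len(angles)

size, n_views = 64, 90
angles = np.linspace(0, np.pi, n_views, endpoint=False)
sinograms = np.random.rand(8, n_views, size)   # 8 independent slices

# Each slice is an independent task: on a GPU these iterations would run
# concurrently, overlapped with asynchronous host-device transfers.
volume = np.stack([backproject_slice(s, angles, size) for s in sinograms])
print("reconstructed volume shape:", volume.shape)   # (slices, y, x)
```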

  19. Performance of large-scale helium refrigerators subjected to pulsed heat load from fusion devices

    Energy Technology Data Exchange (ETDEWEB)

    Dutta, R.; Ghosh, P.; Chowdhury, K. [Cryogenic Engineering Centre, Indian Institute of Technology, Kharagpur (India)

    2012-07-01

    The immediate effect of a pulsed heat load from fusion devices on helium refrigerators is a wide variation in the mass flow rate of the low-pressure stream returning to the cold-box. In this paper, a four-expander modified Claude cycle is analyzed in quasi-steady and dynamic simulations using Aspen HYSYS to identify the critical equipment that may be affected by such flow rate fluctuations in the return stream and to characterize its transient performance. Additional constraints on process parameters beyond the steady-state design have been identified. Suitable techniques for mitigating the fluctuation of the return stream have also been explored. (author)

  20. Performance of large-scale helium refrigerators subjected to pulsed heat load from fusion devices

    International Nuclear Information System (INIS)

    Dutta, R.; Ghosh, P.; Chowdhury, K.

    2012-01-01

    The immediate effect of a pulsed heat load from fusion devices on helium refrigerators is a wide variation in the mass flow rate of the low-pressure stream returning to the cold-box. In this paper, a four-expander modified Claude cycle is analyzed in quasi-steady and dynamic simulations using Aspen HYSYS to identify the critical equipment that may be affected by such flow rate fluctuations in the return stream and to characterize its transient performance. Additional constraints on process parameters beyond the steady-state design have been identified. Suitable techniques for mitigating the fluctuation of the return stream have also been explored. (author)
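
    As a back-of-envelope illustration of one mitigation idea, the sketch below models a buffer on the return stream as a first-order lag that damps the flow swing seen by the cold-box. The pulse shape, amplitude, and time constant are assumptions, not values from the study.

```python
# First-order-lag smoothing of a pulsed return flow; all numbers assumed.
import numpy as np

dt, t_end = 0.5, 600.0                   # time step and horizon [s]
t = np.arange(0, t_end, dt)
# Square-wave heat pulses -> fluctuating return mass flow [kg/s], hypothetical.
m_return = 0.30 + 0.15 * (np.sin(2 * np.pi * t / 120) > 0)

tau = 60.0                               # buffer time constant [s], assumed
m_coldbox = np.empty_like(m_return)
m_coldbox[0] = m_return[0]
for k in range(1, len(t)):               # explicit Euler on dm/dt = (m_in - m)/tau
    m_coldbox[k] = m_coldbox[k-1] + dt * (m_return[k-1] - m_coldbox[k-1]) / tau

print(f"flow swing: raw {np.ptp(m_return):.2f} kg/s, "
      f"buffered {np.ptp(m_coldbox):.2f} kg/s")
```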

  1. Evaluation of creep-fatigue crack growth for large-scale FBR reactor vessel and NDE assessment

    Energy Technology Data Exchange (ETDEWEB)

    Joo, Young Sang; Kim, Jong Bum; Kim, Seok Hun; Yoo, Bong

    2001-03-01

    Creep-fatigue crack growth contributes to the failure of FBR reactor vessels under high temperature conditions. In the design stage of a reactor vessel, crack growth evaluation is very important to ensure structural safety and to set up the in-service inspection strategy. In this study, a creep-fatigue crack growth evaluation has been performed for semi-elliptical surface cracks subjected to thermal loading. A thermal stress analysis of a large-scale FBR reactor vessel has been carried out for the load conditions. The distributions of axial, radial, hoop, and Von Mises stresses were obtained for the loading conditions. At the maximum points of the axial and hoop stress, longitudinal and circumferential surface cracks (i.e., a PTS crack, an NDE short crack, and a shallow long crack) were postulated. Using the maximum and minimum values of the stresses, the creep-fatigue growth of the postulated cracks was simulated. The crack growth rate of the circumferential cracks is greater than that of the longitudinal cracks. The total growth of the largest PTS crack is very small after 427 cycles, so the structural integrity of a large-scale reactor can be maintained over the plant life. The depth growth of the shallow long crack is faster than that of the NDE short crack. In the ISI of a large-scale FBR reactor vessel, ultrasonic inspection is therefore beneficial for detecting shallow circumferential cracks.
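
    For orientation, the sketch below reproduces the fatigue part of such a cycle-by-cycle evaluation with a generic Paris-type law, da/dN = C*(dK)^m with dK = Y*dSigma*sqrt(pi*a). The constants are placeholders rather than the FBR vessel data (the study additionally accounts for creep during hold times); only the 427-cycle count comes from the abstract.

```python
# Simplified fatigue-only crack growth bookkeeping; all material and load
# constants are generic placeholders.
import numpy as np

C, m_exp = 1e-12, 3.0        # Paris constants (MPa*sqrt(m) units), assumed
Y, d_sigma = 1.12, 150.0     # geometry factor and stress range [MPa], assumed
a = 0.002                    # initial crack depth [m] (e.g., an NDE-sized flaw)

for cycle in range(427):     # cycle count taken from the abstract
    dK = Y * d_sigma * np.sqrt(np.pi * a)   # stress intensity factor range
    a += C * dK**m_exp                      # per-cycle growth increment

print(f"crack depth after 427 cycles: {a*1000:.4f} mm")
```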

  2. Assessing the Probability that a Finding Is Genuine for Large-Scale Genetic Association Studies.

    Science.gov (United States)

    Kuo, Chia-Ling; Vsevolozhskaya, Olga A; Zaykin, Dmitri V

    2015-01-01

    Genetic association studies routinely involve massive numbers of statistical tests accompanied by P-values. Whole genome sequencing technologies have increased the potential number of tested variants to tens of millions. The more tests are performed, the smaller the P-value required to be deemed significant. However, a small P-value is not equivalent to a small chance of a spurious finding, and significance thresholds may fail to serve as efficient filters against false results. While the Bayesian approach can provide a direct assessment of the probability that a finding is spurious, its adoption in association studies has been slow, due in part to the ubiquity of P-values and the automated way they are, as a rule, produced by software packages. Attempts to design simple ways to convert an association P-value into the probability that a finding is spurious have met with difficulties. The False Positive Report Probability (FPRP) method has gained increasing popularity. However, FPRP is not designed to estimate the probability for a particular finding, because it is defined for an entire region of hypothetical findings with P-values at least as small as the one observed for that finding. Here we propose a method that lets researchers extract the probability that a finding is spurious directly from a P-value. Considering the counterpart of that probability, we term this method POFIG: the Probability that a Finding is Genuine. Our approach shares FPRP's simplicity, but gives a valid probability that a finding is spurious given a P-value. In addition to straightforward interpretation, POFIG has desirable statistical properties. The POFIG average across a set of tentative associations provides an estimated proportion of false discoveries in that set. POFIGs are easily combined across studies and are immune to multiple testing and selection bias. We illustrate an application of the POFIG method via an analysis of GWAS associations with Crohn's disease.
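
    For context, the FPRP that POFIG is positioned against is straightforward to compute, which partly explains its popularity. The sketch below uses the standard formula FPRP = alpha*pi0 / (alpha*pi0 + power*(1 - pi0)), where pi0 is the prior probability that the null is true; the numbers are illustrative, not from the paper.

```python
# False Positive Report Probability (FPRP) sketch; POFIG itself is defined
# differently (per-finding, from the observed P-value).
def fprp(alpha, power, pi0):
    """Probability that a result significant at level alpha is spurious."""
    return alpha * pi0 / (alpha * pi0 + power * (1.0 - pi0))

# With 1 in 100,000 tested variants truly associated, even a genome-wide
# significance threshold (5e-8) leaves a noticeable false-positive
# probability when power is low.
for power in (0.9, 0.5, 0.1):
    print(f"power={power:.1f}: FPRP = {fprp(5e-8, power, pi0=1 - 1e-5):.4f}")
```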

  3. Experimental performance evaluation of software defined networking (SDN) based data communication networks for large scale flexi-grid optical networks.

    Science.gov (United States)

    Zhao, Yongli; He, Ruiying; Chen, Haoran; Zhang, Jie; Ji, Yuefeng; Zheng, Haomian; Lin, Yi; Wang, Xinbo

    2014-04-21

    Software defined networking (SDN) has become a focus of the current information and communication technology area because of its flexibility and programmability. It has been introduced into various network scenarios, such as datacenter networks, carrier networks, and wireless networks. The optical transport network is also regarded as an important application scenario for SDN, where SDN is adopted as the enabling technology of the data communication network (DCN) instead of generalized multi-protocol label switching (GMPLS). However, the practical performance of SDN-based DCNs for large-scale optical networks, which is very important for technology selection in future optical network deployments, has not been evaluated until now. In this paper we build a large-scale flexi-grid optical network testbed with 1000 virtual optical transport nodes to evaluate the performance of an SDN-based DCN, including network scalability, DCN bandwidth limitation, and restoration time. A series of network performance parameters, including blocking probability, bandwidth utilization, average lightpath provisioning time, and failure restoration time, have been demonstrated under various network environments, such as different traffic loads and different DCN bandwidths. The demonstration in this work can be taken as a proof point for future network deployment.
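
    As a hedged aside, blocking probabilities of the kind reported here can be sanity-checked against the classical Erlang-B loss formula for a link with c wavelength channels offered a Erlangs of traffic. This is a textbook reference model, not the testbed's traffic engine.

```python
# Erlang-B blocking probability via the numerically stable recursion
# B(0) = 1, B(k) = a*B(k-1) / (k + a*B(k-1)).
def erlang_b(a, c):
    """Blocking probability for Poisson arrivals, c circuits, load a."""
    b = 1.0
    for k in range(1, c + 1):
        b = a * b / (k + a * b)
    return b

# Hypothetical link with 40 wavelength channels under increasing load.
for load in (20, 30, 40):
    print(f"offered load {load} Erl on 40 channels: "
          f"blocking = {erlang_b(load, 40):.4f}")
```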

  4. Improved technique that allows the performance of large-scale SNP genotyping on DNA immobilized by FTA technology.

    Science.gov (United States)

    He, Hongbin; Argiro, Laurent; Dessein, Helia; Chevillard, Christophe

    2007-01-01

    FTA technology is a novel method designed to simplify the collection, shipment, archiving and purification of nucleic acids from a wide variety of biological sources. The number of punches that can normally be obtained from a single specimen card is often, however, insufficient for testing the large numbers of loci required to identify genetic factors that control human susceptibility or resistance to multifactorial diseases. In this study, we propose an improved technique for performing large-scale SNP genotyping. We applied a whole genome amplification method to amplify DNA from buccal cell samples stabilized using FTA technology. The results show that, using the improved technique, it is possible to perform up to 15,000 genotypes from one buccal cell sample. Furthermore, the procedure is simple. We consider this improved technique a promising method for performing large-scale SNP genotyping, because FTA technology simplifies the collection, shipment, archiving and purification of DNA, while whole genome amplification of FTA card-bound DNA produces sufficient material for the determination of thousands of SNP genotypes.

  5. On the network protocol performance evaluation for large scale communication system of nuclear plant

    International Nuclear Information System (INIS)

    Song, K. S.; Lee, T. H.; Kim, H. R.; Kim, D. H.; Ku, I. S.

    1998-01-01

    Computer technology has advanced dramatically, and it is now natural to apply digital network technology in nuclear plants. The communication architecture for a nuclear plant defines the coordination of reactor safety control, balance of plant, subsystem utilities, and plant monitoring functions, how they are connected, and their user interfaces, so as to guarantee plant performance and safety requirements. Implementing a digital network for the control and monitoring systems of an advanced nuclear plant therefore requires systematic design and evaluation procedures because of the responsive, hard real-time process characteristics of nuclear plants. In this paper, we evaluate several digital network protocols in terms of network delay and the effects of link failures on hard real-time requirements under full-scale traffic

  6. Performance Analysis of a Wind Turbine Driven Swash Plate Pump for Large Scale Offshore Applications

    International Nuclear Information System (INIS)

    Buhagiar, D; Sant, T

    2014-01-01

    This paper deals with the performance modelling and analysis of offshore wind turbine-driven hydraulic pumps. The concept consists of an open loop hydraulic system with the rotor main shaft directly coupled to a swash plate pump to supply pressurised sea water. A mathematical model is derived to capture the steady state behaviour of the entire system. A simplified model for the pump is implemented together with different control scheme options for regulating the rotor shaft power. A new control scheme is investigated, based on the combined use of hydraulic pressure and pitch control. Using a steady-state analysis, the study shows how the adoption of alternative control schemes in the wind turbine-hydraulic pump system may result in higher energy yields than those from a conventional system with an electrical generator and standard pitch control for power regulation. This is in particular the case with the new control scheme investigated in this study, which is based on the combined use of pressure and rotor blade pitch control
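
    The steady-state backbone of such a model reduces to a few pump relations. The sketch below computes delivered flow, shaft torque, and hydraulic power for a directly coupled swash plate pump; the rotor speed, displacement, pressure, and efficiencies are illustrative assumptions, not the paper's values.

```python
# First-order steady-state relations for a shaft-driven swash plate pump.
# All numbers are hypothetical.
import numpy as np

def pump_operating_point(omega, displacement, delta_p, eta_vol=0.95, eta_mech=0.92):
    """Flow [m^3/s], shaft torque [N*m], and hydraulic power [W] for shaft
    speed omega [rad/s], displacement [m^3/rev], and pressure rise [Pa]."""
    q = eta_vol * displacement * omega / (2 * np.pi)        # delivered flow
    torque = displacement * delta_p / (2 * np.pi * eta_mech)  # shaft torque
    p_hydraulic = q * delta_p                               # hydraulic power
    return q, torque, p_hydraulic

# Hypothetical multi-MW rotor: 12 rpm direct drive, 0.8 m^3/rev pump,
# 30 MPa pressurised sea-water line.
omega = 12 * 2 * np.pi / 60
q, torque, p_hyd = pump_operating_point(omega, 0.8, 30e6)
print(f"flow {q*1000:.0f} L/s, torque {torque/1e3:.0f} kN*m, "
      f"hydraulic power {p_hyd/1e6:.2f} MW")
```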

  7. Performance Prediction for Large-Scale Nuclear Waste Repositories: Final Report

    International Nuclear Information System (INIS)

    Glassley, W E; Nitao, J J; Grant, W; Boulos, T N; Gokoffski, M O; Johnson, J W; Kercher, J R; Levatin, J A; Steefel, C I

    2001-01-01

    The goal of this project was development of a software package capable of utilizing terascale computational platforms for solving subsurface flow and transport problems important for disposal of high level nuclear waste materials, as well as for DOE-complex clean-up and stewardship efforts. We sought to develop a tool that would diminish reliance on abstracted models, and realistically represent the coupling between subsurface fluid flow, thermal effects and chemical reactions that both modify the physical framework of the rock materials and which change the rock mineralogy and chemistry of the migrating fluid. Providing such a capability would enhance realism in models and increase confidence in long-term predictions of performance. Achieving this goal also allows more cost-effective design and execution of monitoring programs needed to evaluate model results. This goal was successfully accomplished through the development of a new simulation tool (NUFT-C). This capability allows high resolution modeling of complex coupled thermal-hydrological-geochemical processes in the saturated and unsaturated zones of the Earth's crust. The code allows consideration of virtually an unlimited number of chemical species and minerals in a multi-phase, non-isothermal environment. Because the code is constructed to utilize the computational power of the tera-scale IBM ASCI computers, simulations that encompass large rock volumes and complex chemical systems can now be done without sacrificing spatial or temporal resolution. The code is capable of doing one-, two-, and three-dimensional simulations, allowing unprecedented evaluation of the evolution of rock properties and mineralogical and chemical change as a function of time. The code has been validated by comparing results of simulations to laboratory-scale experiments, other benchmark codes, field scale experiments, and observations in natural systems. The results of these exercises demonstrate that the physics and chemistry

  8. Verification of the analytical fracture assessments methods by a large scale pressure vessel test

    Energy Technology Data Exchange (ETDEWEB)

    Keinanen, H; Oberg, T; Rintamaa, R; Wallin, K

    1988-12-31

    This document deals with the use of fracture mechanics for the assessment of reactor pressure vessels. Tests have been carried out to verify the analytical fracture assessment methods. The analysis focuses on flaw dimensions and the scatter band of material characteristics. Analytical results are provided and compared with experimental ones. (TEC).

  9. How much is too much assessment? Insight into assessment-driven student learning gains in large-scale undergraduate microbiology courses.

    Science.gov (United States)

    Wang, Jack T H; Schembri, Mark A; Hall, Roy A

    2013-01-01

    Designing and implementing assessment tasks in large-scale undergraduate science courses is a labor-intensive process subject to increasing scrutiny from students and quality assurance authorities alike. Recent pedagogical research has provided conceptual frameworks for teaching introductory undergraduate microbiology, but has yet to define best-practice assessment guidelines. This study assessed the applicability of Biggs' theory of constructive alignment in designing consistent learning objectives, activities, and assessment items that aligned with the American Society for Microbiology's concept-based microbiology curriculum in MICR2000, an introductory microbiology course offered at the University of Queensland, Australia. By improving the internal consistency in assessment criteria and increasing the number of assessment items explicitly aligned to the course learning objectives, the teaching team was able to efficiently provide adequate feedback on numerous assessment tasks throughout the semester, which contributed to improved student performance and learning gains. When comparing the constructively aligned 2011 offering of MICR2000 with its 2010 counterpart, students obtained higher marks in both coursework assignments and examinations as the semester progressed. Students also valued the additional feedback provided, as student rankings for course feedback provision increased in 2011 and assessment and feedback was identified as a key strength of MICR2000. By designing MICR2000 using constructive alignment and iterative assessment tasks that followed a common set of learning outcomes, the teaching team was able to effectively deliver detailed and timely feedback in a large introductory microbiology course. This study serves as a case study for how constructive alignment can be integrated into modern teaching practices for large-scale courses.

  10. Large-scale solar purchasing

    International Nuclear Information System (INIS)

    1999-01-01

    The principal objective of the project was to participate in the definition of a new IEA task concerning solar procurement ("the Task") and to assess whether involvement in the task would be in the interest of the UK active solar heating industry. The project also aimed to assess the importance of large scale solar purchasing to UK active solar heating market development and to evaluate the level of interest in large scale solar purchasing amongst potential large scale purchasers (in particular housing associations and housing developers). A further aim of the project was to consider means of stimulating large scale active solar heating purchasing activity within the UK. (author)

  11. A concurrent visualization system for large-scale unsteady simulations. Parallel vector performance on an NEC SX-4

    International Nuclear Information System (INIS)

    Takei, Toshifumi; Doi, Shun; Matsumoto, Hideki; Muramatsu, Kazuhiro

    2000-01-01

    We have developed a concurrent visualization system RVSLIB (Real-time Visual Simulation Library). This paper shows the effectiveness of the system when it is applied to large-scale unsteady simulations, for which the conventional post-processing approach may no longer work, on high-performance parallel vector supercomputers. The system performs almost all of the visualization tasks on a computation server and uses compressed visualized image data for efficient communication between the server and the user terminal. We have introduced several techniques, including vectorization and parallelization, into the system to minimize the computational costs of the visualization tools. The performance of RVSLIB was evaluated by using an actual CFD code on an NEC SX-4. The computational time increase due to the concurrent visualization was at most 3% for a smaller (1.6 million) grid and less than 1% for a larger (6.2 million) one. (author)

  12. Model development to acceptability-assessment of large scale power plants for electricity generation

    International Nuclear Information System (INIS)

    Schubert, Katharina

    2013-01-01

    An approach to the specific assessment of large power plants is presented. This approach is intended to support the decision as to which kind of nuclear, fossil, or renewable plant operation minimizes unacceptable consequences for the environment, economy, and society. The tool ACCEPPT, which is currently under development for this purpose, allows a comprehensible and quantitative assessment of the reasonableness of unintended side-effects of different power plant types. The flexible design of the tool's elements, frame conditions and system technology, supports a dynamic acceptability assessment under consideration of the particular context and plant configuration. Thus, current conditions can be used for the evaluation as well as development scenarios. Finally, the comprehensible acceptability results are intended to contribute to overcoming acceptance problems in society. (orig.)

  13. The Use of Illustrations in Large-Scale Science Assessment: A Comparative Study

    Science.gov (United States)

    Wang, Chao

    2012-01-01

    This dissertation addresses the complexity of test illustrations design across cultures. More specifically, it examines how the characteristics of illustrations used in science test items vary across content areas, assessment programs, and cultural origins. It compares a total of 416 Grade 8 illustrated items from the areas of earth science, life…

  14. Advancing effects analysis for integrated, large-scale wildfire risk assessment

    Science.gov (United States)

    Matthew P. Thompson; David E. Calkin; Julie W. Gilbertson-Day; Alan A. Ager

    2011-01-01

    In this article, we describe the design and development of a quantitative, geospatial risk assessment tool intended to facilitate monitoring trends in wildfire risk over time and to provide information useful in prioritizing fuels treatments and mitigation measures. The research effort is designed to develop, from a strategic view, a first approximation of how both...

  15. On-line transient stability assessment of large-scale power systems by using ball vector machines

    International Nuclear Information System (INIS)

    Mohammadi, M.; Gharehpetian, G.B.

    2010-01-01

    In this paper the ball vector machine (BVM) is used for on-line transient stability assessment of large-scale power systems. To classify the transient security status of the system, a BVM has been trained for all contingencies. The proposed BVM-based security assessment algorithm requires very little training time and memory in comparison with artificial neural networks (ANN), support vector machines (SVM), and other machine-learning-based algorithms. In addition, the proposed algorithm uses fewer support vectors (SV) and is therefore faster than existing algorithms for on-line applications. A key step in applying any machine learning method is feature selection. In this paper, a new Decision Tree (DT) based feature selection technique is presented. The proposed BVM-based algorithm has been applied to the New England 39-bus power system. The simulation results show the effectiveness and stability of the proposed method for on-line transient stability assessment of large-scale power systems. The proposed feature selection algorithm has been compared with different feature selection algorithms, and the simulation results demonstrate its effectiveness.
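
    The BVM itself is a specialized minimum-enclosing-ball learner without a standard library implementation, so the sketch below instead reproduces the comparison pipeline with the SVM baseline and a decision-tree feature-ranking step on synthetic security labels; it illustrates the workflow, not the paper's classifier or data.

```python
# Synthetic stand-in for the security-classification pipeline: decision-tree
# feature ranking followed by an SVM (the paper's baseline, not its BVM).
import numpy as np
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(4)
X = rng.normal(size=(2000, 20))                  # candidate system features
y = (X[:, 0] + 0.5 * X[:, 3] + rng.normal(scale=0.3, size=2000) > 0).astype(int)

# Decision-tree feature ranking, echoing the paper's DT-based selection step.
dt = DecisionTreeClassifier(random_state=0).fit(X, y)
top = np.argsort(dt.feature_importances_)[-5:]   # keep 5 most informative

X_tr, X_te, y_tr, y_te = train_test_split(X[:, top], y, random_state=0)
clf = SVC(kernel="rbf").fit(X_tr, y_tr)
print(f"test accuracy on synthetic security labels: {clf.score(X_te, y_te):.3f}")
```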

  16. AUSERA: Large-Scale Automated Security Risk Assessment of Global Mobile Banking Apps

    OpenAIRE

    Chen, Sen; Meng, Guozhu; Su, Ting; Fan, Lingling; Xue, Yinxing; Liu, Yang; Xu, Lihua; Xue, Minhui; Li, Bo; Hao, Shuang

    2018-01-01

    Contemporary financial technology (FinTech) that enables cashless mobile payment has been widely adopted by financial institutions, such as banks, due to its convenience and efficiency. However, FinTech has also made massive and dynamic transactions susceptible to security risks. Given large financial losses caused by such vulnerabilities, regulatory technology (RegTech) has been developed, but more comprehensive security risk assessment is specifically desired to develop robust, scalable, an...

  17. LBB assessment on ferrite piping structure of large-scale FBR

    OpenAIRE

    兪 淵植

    2002-01-01

    These days, interest in LBB (Leak Before Break) design is rising from the viewpoint of cost reduction and structural integrity for the commercialization of FBR plants. LBB design enables plants to be shut down safely before unstable fracture occurs, by detecting leak rates even if a crack initiates and penetrates the wall thickness. It is necessary to assess crack growth and penetration behavior considering in-service conditions under operating temperature, leak re...

  18. On the Use of Educational Numbers: Comparative Constructions of Hierarchies by Means of Large-Scale Assessments

    Directory of Open Access Journals (Sweden)

    Daniel Pettersson

    2016-01-01

    later the growing importance of transnational agencies and international, regional and national assessments.

  19. DC-DC Converter Topology Assessment for Large Scale Distributed Photovoltaic Plant Architectures

    Energy Technology Data Exchange (ETDEWEB)

    Agamy, Mohammed S; Harfman-Todorovic, Maja; Elasser, Ahmed; Sabate, Juan A; Steigerwald, Robert L; Jiang, Yan; Essakiappan, Somasundaram

    2011-07-01

    Distributed photovoltaic (PV) plant architectures are emerging as a replacement for the classical central inverter based systems. However, power converters of smaller ratings may have a negative impact on system efficiency, reliability and cost. Therefore, it is necessary to design converters with very high efficiency and simpler topologies in order not to offset the benefits gained by using distributed PV systems. In this paper an evaluation of the selection criteria for dc-dc converters for distributed PV systems is performed; this evaluation includes efficiency, simplicity of design, reliability and cost. Based on this evaluation, recommendations can be made as to which class of converters best fits this application.
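
    One concrete way converter classes are compared on efficiency is a weighted average over a standard load profile. The sketch below uses the common European-efficiency weighting convention (loads of 5/10/20/30/50/100%); the per-topology efficiency numbers are invented for illustration and do not come from this paper.

```python
# Weighted-efficiency comparison of two hypothetical dc-dc stages using the
# European-efficiency load weights.
EURO_WEIGHTS = {0.05: 0.03, 0.10: 0.06, 0.20: 0.13, 0.30: 0.10, 0.50: 0.48, 1.00: 0.20}

def weighted_efficiency(eff_at_load):
    """eff_at_load: dict mapping fractional load -> measured efficiency."""
    return sum(w * eff_at_load[load] for load, w in EURO_WEIGHTS.items())

# Invented efficiency curves for two candidate topologies.
boost = {0.05: 0.92, 0.10: 0.95, 0.20: 0.97, 0.30: 0.975, 0.50: 0.98, 1.00: 0.97}
resonant = {0.05: 0.90, 0.10: 0.94, 0.20: 0.965, 0.30: 0.975, 0.50: 0.985, 1.00: 0.98}

for name, eff in [("boost", boost), ("resonant", resonant)]:
    print(f"{name}: weighted efficiency = {weighted_efficiency(eff):.4f}")
```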

  20. A new approach to ductile tearing assessment of pipelines under large-scale yielding

    Energy Technology Data Exchange (ETDEWEB)

    Ostby, Erling [SINTEF Materials and Chemistry, N-7465, Trondheim (Norway)]. E-mail: Erling.Obstby@sintef.no; Thaulow, Christian [Norwegian University of Science and Technology, N-7491, Trondheim (Norway); Nyhus, Bard [SINTEF Materials and Chemistry, N-7465, Trondheim (Norway)

    2007-06-15

    In this paper we focus on the issue of ductile tearing assessment for cases with global plasticity, relevant, for example, to strain-based design of pipelines. A proposed set of simplified strain-based driving force equations is used as a basis for the calculation of ductile tearing. We compare the traditional approach, which uses the tangency criterion to predict unstable tearing, with a new alternative approach to ductile tearing calculations. A criterion to determine the CTOD at maximum load-carrying capacity in the crack ligament is proposed and used as the failure criterion in the new approach. Compared to numerical reference simulations, the tangency criterion predicts conservative results with regard to strain capacity. The new approach yields results in better agreement with the reference numerical simulations.

  1. Failure Impact Assessment for Large-Scale Landslides Located Near Human Settlement: Case Study in Southern Taiwan

    Directory of Open Access Journals (Sweden)

    Ming-Chien Chung

    2018-05-01

    In 2009, Typhoon Morakot caused over 680 deaths and more than 20,000 landslides in Taiwan. From 2010 to 2015, the Central Geological Survey of the Ministry of Economic Affairs identified 1047 potential large-scale landslides in Taiwan, of which 103 may have affected human settlements. This paper presents an analytical procedure that can be applied to assess the possible impact of a landslide collapse on nearby settlements. In this paper, existing technologies, including interpretation of remote sensing images, hydrogeological investigation, and numerical analysis, are integrated to evaluate potential failure scenarios and the landslide scale of a specific case: the Xinzhuang landslide. GeoStudio and RAMMS analysis modes and hazard classification produced the following results: (1) evaluation of the failure mechanisms and the influence zones of large-scale landslides; (2) assessment of the migration and accumulation of the landslide mass after failure; and (3) a landslide hazard and evacuation map. The results of the case study show that this analytical procedure can quantitatively estimate potential threats to human settlements. Furthermore, it can be applied to other villages and used as a reference in disaster prevention and evacuation planning.

  2. Large-scale assessment of Mediterranean marine protected areas effects on fish assemblages.

    Directory of Open Access Journals (Sweden)

    Paolo Guidetti

    Marine protected areas (MPAs) were acknowledged globally as effective tools to mitigate the threats to oceans caused by fishing. Several studies assessed the effectiveness of individual MPAs in protecting fish assemblages, but regional assessments of multiple MPAs are scarce. Moreover, empirical evidence on the role of MPAs in contrasting the propagation of non-indigenous species (NIS) and thermophilic species (ThS) is missing. We simultaneously investigated here the role of MPAs in reversing the effects of overfishing and in limiting the spread of NIS and ThS. The Mediterranean Sea was selected as study area as it is a region where (1) MPAs are numerous, (2) fishing has affected species and ecosystems, and (3) the arrival of NIS and the northward expansion of ThS took place. Fish surveys were done in well-enforced no-take MPAs (HP), partially-protected MPAs (IP) and fished areas (F) at 30 locations across the Mediterranean. Significantly higher fish biomass was found in HP compared to IP MPAs and F. Along a recovery trajectory from F to HP MPAs, IP were similar to F, showing that only well-enforced MPAs trigger an effective recovery. Within HP MPAs, the trophic structure of fish assemblages resembled a top-heavy biomass pyramid. Although the functional structure of fish assemblages was consistent among HP MPAs, the species driving the recovery in HP MPAs differed among locations: this suggests that the recovery trajectories in HP MPAs are likely to be functionally similar (i.e., represented by predictable changes in trophic groups, especially fish predators), but the specific composition of the resulting assemblages may depend on local conditions. Our study did not show any effect of MPAs on NIS and ThS. These results may help provide more robust expectations, at the proper regional scale, about the effects of new MPAs that may be established in the Mediterranean Sea and other ecoregions worldwide.

  3. Large-Scale Assessment of Mediterranean Marine Protected Areas Effects on Fish Assemblages

    Science.gov (United States)

    Guidetti, Paolo; Baiata, Pasquale; Ballesteros, Enric; Di Franco, Antonio; Hereu, Bernat; Macpherson, Enrique; Micheli, Fiorenza; Pais, Antonio; Panzalis, Pieraugusto; Rosenberg, Andrew A.; Zabala, Mikel; Sala, Enric

    2014-01-01

    Marine protected areas (MPAs) were acknowledged globally as effective tools to mitigate the threats to oceans caused by fishing. Several studies assessed the effectiveness of individual MPAs in protecting fish assemblages, but regional assessments of multiple MPAs are scarce. Moreover, empirical evidence on the role of MPAs in contrasting the propagation of non-indigenous-species (NIS) and thermophilic species (ThS) is missing. We simultaneously investigated here the role of MPAs in reversing the effects of overfishing and in limiting the spread of NIS and ThS. The Mediterranean Sea was selected as study area as it is a region where 1) MPAs are numerous, 2) fishing has affected species and ecosystems, and 3) the arrival of NIS and the northward expansion of ThS took place. Fish surveys were done in well-enforced no-take MPAs (HP), partially-protected MPAs (IP) and fished areas (F) at 30 locations across the Mediterranean. Significantly higher fish biomass was found in HP compared to IP MPAs and F. Along a recovery trajectory from F to HP MPAs, IP were similar to F, showing that only well-enforced MPAs trigger an effective recovery. Within HP MPAs, trophic structure of fish assemblages resembled a top-heavy biomass pyramid. Although the functional structure of fish assemblages was consistent among HP MPAs, species driving the recovery in HP MPAs differed among locations: this suggests that the recovery trajectories in HP MPAs are likely to be functionally similar (i.e., represented by predictable changes in trophic groups, especially fish predators), but the specific composition of the resulting assemblages may depend on local conditions. Our study did not show any effect of MPAs on NIS and ThS. These results may help provide more robust expectations, at proper regional scale, about the effects of new MPAs that may be established in the Mediterranean Sea and other ecoregions worldwide. PMID:24740479

  4. Large-scale control site selection for population monitoring: an example assessing Sage-grouse trends

    Science.gov (United States)

    Fedy, Bradley C.; O'Donnell, Michael; Bowen, Zachary H.

    2015-01-01

    Human impacts on wildlife populations are widespread and prolific, and understanding wildlife responses to human impacts is a fundamental component of wildlife management. The first step to understanding wildlife responses is the documentation of changes in wildlife population parameters, such as population size. Meaningful assessment of population changes in potentially impacted sites requires the establishment of monitoring at similar, nonimpacted, control sites. However, it is often difficult to identify appropriate control sites in wildlife populations. We demonstrated use of Geographic Information System (GIS) data across large spatial scales to select biologically relevant control sites for population monitoring. Greater sage-grouse (Centrocercus urophasianus; hereafter, sage-grouse) are negatively affected by energy development, and monitoring of sage-grouse populations within energy development areas is necessary to detect population-level responses. We used population data (1995–2012) from an energy development area in Wyoming, USA, the Atlantic Rim Project Area (ARPA), and GIS data to identify control sites that were not impacted by energy development for population monitoring. Control sites were surrounded by similar habitat and were within similar climate areas to the ARPA. We developed nonlinear trend models for both the ARPA and control sites and compared long-term trends from the 2 areas. We found little difference between the ARPA and control-site trends over time. This research demonstrated an approach for control site selection across large landscapes and can be used as a template for similar impact-monitoring studies. It is important to note that identification of changes in population parameters between control and treatment sites is only the first step in understanding the mechanisms that underlie those changes. Published 2015. This article is a U.S. Government work and is in the public domain in the USA.

  5. A large-scale field assessment of carbon stocks in human-modified tropical forests.

    Science.gov (United States)

    Berenguer, Erika; Ferreira, Joice; Gardner, Toby Alan; Aragão, Luiz Eduardo Oliveira Cruz; De Camargo, Plínio Barbosa; Cerri, Carlos Eduardo; Durigan, Mariana; Cosme De Oliveira Junior, Raimundo; Vieira, Ima Célia Guimarães; Barlow, Jos

    2014-12-01

    Tropical rainforests store enormous amounts of carbon, the protection of which represents a vital component of efforts to mitigate global climate change. Currently, tropical forest conservation, science, policies, and climate mitigation actions focus predominantly on reducing carbon emissions from deforestation alone. However, every year vast areas of the humid tropics are disturbed by selective logging, understory fires, and habitat fragmentation. There is an urgent need to understand the effect of such disturbances on carbon stocks, and how stocks in disturbed forests compare to those found in undisturbed primary forests as well as in regenerating secondary forests. Here, we present the results of the largest field study to date on the impacts of human disturbances on above and belowground carbon stocks in tropical forests. Live vegetation, the largest carbon pool, was extremely sensitive to disturbance: forests that experienced both selective logging and understory fires stored, on average, 40% less aboveground carbon than undisturbed forests and were structurally similar to secondary forests. Edge effects also played an important role in explaining variability in aboveground carbon stocks of disturbed forests. Results indicate a potential rapid recovery of the dead wood and litter carbon pools, while soil stocks (0-30 cm) appeared to be resistant to the effects of logging and fire. Carbon loss and subsequent emissions due to human disturbances remain largely unaccounted for in greenhouse gas inventories, but by comparing our estimates of depleted carbon stocks in disturbed forests with Brazilian government assessments of the total forest area annually disturbed in the Amazon, we show that these emissions could represent up to 40% of the carbon loss from deforestation in the region. We conclude that conservation programs aiming to ensure the long-term permanence of forest carbon stocks, such as REDD+, will remain limited in their success unless they effectively

  6. Assessing the Cost of Large-Scale Power Outages to Residential Customers.

    Science.gov (United States)

    Baik, Sunhee; Davis, Alexander L; Morgan, M Granger

    2018-02-01

    Residents in developed economies depend heavily on electric services. While distributed resources and a variety of new smart technologies can increase the reliability of that service, adopting them involves costs, necessitating tradeoffs between cost and reliability. An important input to making such tradeoffs is an estimate of the value customers place on reliable electric services. We develop an elicitation framework that helps individuals think systematically about the value they attach to reliable electric service. Our approach employs a detailed and realistic blackout scenario, full or partial (20 A) backup service, questions about willingness to pay (WTP) using a multiple bounded discrete choice method, information regarding inconveniences and economic losses, and checks for bias and consistency. We applied this method to a convenience sample of residents in Allegheny County, Pennsylvania, finding that respondents valued a kWh for backup services they assessed to be high priority more than services that were seen as low priority ($0.75/kWh vs. $0.51/kWh). As more information about the consequences of a blackout was provided, this difference increased ($1.2/kWh vs. $0.35/kWh), and respondents' uncertainty about the backup services decreased (Full: $11 to $9.0, Partial: $13 to $11). There was no evidence that the respondents were anchored by their previous WTP statements, but they demonstrated only weak scope sensitivity. In sum, the consumer surplus associated with providing a partial electric backup service during a blackout may justify the costs of such service, but measurement of that surplus depends on the public having accurate information about blackouts and their consequences. © 2017 Society for Risk Analysis.

  7. The Construction of the Malaysian Educators Selection Inventory (MEdSI: A Large Scale Assessment Initiative

    Directory of Open Access Journals (Sweden)

    Joharry Othman

    2008-06-01

    The crucial role that teachers and schools play in the development of a nation's human resources is undeniable. In Malaysia, teaching has always been perceived by many as a financially secure and relatively easy job, resulting in mass applications for entry into teacher education programmes. Many of those who aspire and opt to go into the teaching profession, however, do so regardless of their personal interests, potential, and values. Pursuing a programme that does not fit a person's personality and interests, despite initially good academic credentials and excellent co-curricular involvement in school, may result in unsatisfactory academic performance, frustration, change of programme, and even withdrawal at college level. Hence, in the quest to select suitable teacher trainee candidates, a psychometrically sound instrument known as the Malaysian Educators Selection Inventory (MEdSI) was developed as a screening measure to filter the large number of teaching hopefuls. This paper describes the theoretical basis and the constructs of the instrument developed.

  8. Collaborative-Large scale Engineering Assessment Networks for Environmental Research: The Overview

    Science.gov (United States)

    Moo-Young, H.

    2004-05-01

    A networked infrastructure for engineering solutions and policy alternatives is necessary to assess, manage, and protect complex, anthropogenically stressed environmental resources effectively. Reductionist and discrete disciplinary methodologies are no longer adequate to evaluate and model complex environmental systems and anthropogenic stresses. While the reductionist approach provides important information regarding individual mechanisms, it cannot provide complete information about how multiple processes are related. Therefore, it is not possible to make accurate predictions about system responses to engineering interventions and the effectiveness of policy options. For example, experts cannot agree on best management strategies for contaminated sediments in riverine and estuarine systems. This is due, in part, to the fact that existing models do not accurately capture integrated system dynamics. In addition, infrastructure is not available for investigators to exchange and archive data, to collaborate on new investigative methods, and to synthesize these results to develop engineering solutions and policy alternatives. Our vision for the future is to create a network comprising field facilities and a collaboration of engineers, scientists, policy makers, and community groups. This will allow integration across disciplines, across different temporal and spatial scales, surface and subsurface geographies, and air sheds and watersheds. Benefits include fast response to changes in system health, real-time decision making, and continuous data collection that can be used to anticipate future problems and to develop sound engineering solutions and management decisions. CLEANER encompasses four general aspects: 1) A Network of environmental field facilities instrumented for the acquisition and analysis of environmental data; 2) A Virtual Repository of Data and information technology for engineering modeling, analysis and visualization of data, i.e. an environmental

  9. 'Oorja' in India: Assessing a large-scale commercial distribution of advanced biomass stoves to households.

    Science.gov (United States)

    Thurber, Mark C; Phadke, Himani; Nagavarapu, Sriniketh; Shrimali, Gireesh; Zerriffi, Hisham

    2014-04-01

    Replacing traditional stoves with advanced alternatives that burn more cleanly has the potential to ameliorate major health problems associated with indoor air pollution in developing countries. With a few exceptions, large government and charitable programs to distribute advanced stoves have not had the desired impact. Commercially-based distributions that seek cost recovery and even profits might plausibly do better, both because they encourage distributors to supply and promote products that people want and because they are based around properly-incentivized supply chains that could be more scalable, sustainable, and replicable. The sale in India of over 400,000 "Oorja" stoves to households from 2006 onwards represents the largest commercially-based distribution of a gasification-type advanced biomass stove. BP's Emerging Consumer Markets (ECM) division and then successor company First Energy sold this stove and the pelletized biomass fuel on which it operates. We assess the success of this effort and the role its commercial aspect played in outcomes using a survey of 998 households in areas of Maharashtra and Karnataka where the stove was sold as well as detailed interviews with BP and First Energy staff. Statistical models based on this data indicate that Oorja purchase rates were significantly influenced by the intensity of Oorja marketing in a region as well as by pre-existing stove mix among households. The highest rate of adoption came from LPG-using households for which Oorja's pelletized biomass fuel reduced costs. Smoke- and health-related messages from Oorja marketing did not significantly influence the purchase decision, although they did appear to affect household perceptions about smoke. By the time of our survey, only 9% of households that purchased Oorja were still using the stove, the result in large part of difficulties First Energy encountered in developing a viable supply chain around low-cost procurement of "agricultural waste" to make

  10. The assessment of the readiness of five countries to implement child maltreatment prevention programs on a large scale.

    Science.gov (United States)

    Mikton, Christopher; Power, Mick; Raleva, Marija; Makoae, Mokhantso; Al Eissa, Majid; Cheah, Irene; Cardia, Nancy; Choo, Claire; Almuneef, Maha

    2013-12-01

    This study aimed to systematically assess the readiness of five countries - Brazil, the Former Yugoslav Republic of Macedonia, Malaysia, Saudi Arabia, and South Africa - to implement evidence-based child maltreatment prevention programs on a large scale. To this end, it applied a recently developed method called Readiness Assessment for the Prevention of Child Maltreatment, based on two parallel 100-item instruments. The first measures the knowledge, attitudes, and beliefs concerning child maltreatment prevention of key informants; the second, completed by child maltreatment prevention experts using all available data in the country, produces a more objective assessment of readiness. The instruments cover all of the main aspects of readiness including, for instance, availability of scientific data on the problem, legislation and policies, will to address the problem, and material resources. Key informant scores ranged from 31.2 (Brazil) to 45.8/100 (the Former Yugoslav Republic of Macedonia) and expert scores, from 35.2 (Brazil) to 56/100 (Malaysia). Major gaps identified in almost all countries included a lack of professionals with the skills, knowledge, and expertise to implement evidence-based child maltreatment prevention programs and of institutions to train them; inadequate funding, infrastructure, and equipment; extreme rarity of outcome evaluations of prevention programs; and lack of national prevalence surveys of child maltreatment. In sum, the five countries are in a low to moderate state of readiness to implement evidence-based child maltreatment prevention programs on a large scale. Such an assessment of readiness - the first of its kind - allows gaps to be identified and then addressed to increase the likelihood of program success. Copyright © 2013 Elsevier Ltd. All rights reserved.

  11. Infrastructure for large-scale quality-improvement projects: early lessons from North Carolina Improving Performance in Practice.

    Science.gov (United States)

    Newton, Warren P; Lefebvre, Ann; Donahue, Katrina E; Bacon, Thomas; Dobson, Allen

    2010-01-01

    Little is known regarding how to accomplish large-scale health care improvement. Our goal is to improve the quality of chronic disease care in all primary care practices throughout North Carolina. Methods for improvement include (1) common quality measures and a shared data system; (2) rapid cycle improvement principles; (3) quality-improvement consultants (QICs), or practice facilitators; (4) learning networks; and (5) alignment of incentives. We emphasized a community-based strategy and developing a statewide infrastructure. Results are reported from the first 2 years of the North Carolina Improving Performance in Practice (IPIP) project. A coalition was formed to include professional societies, North Carolina AHEC, Community Care of North Carolina, insurers, and other organizations. Wave One started with 18 practices in 2 of 9 regions of the state. Quality-improvement consultants recruited practices. Over 80 percent of practices attended all quarterly regional meetings. In 9 months, almost all diabetes measures improved, and a bundled asthma measure improved from 33 to 58 percent. Overall, the magnitude of improvement was clinically and statistically significant (P = .001). Quality improvements were maintained on review 1 year later. Wave Two has spread to 103 practices in all 9 regions of the state, with 42 additional practices beginning the enrollment process. Large-scale health care quality improvement is feasible when broadly supported by statewide leadership and community infrastructure. Practice-collected data and the lack of a control group are limitations of the study design. Future priorities include sustaining improvements for practices and communities. Our long-term goal is to transform all 2000 primary-care practices in our state.

  12. Performing a Large-Scale Modal Test on the B2 Stand Crane at NASA's Stennis Space Center

    Science.gov (United States)

    Stasiunas, Eric C.; Parks, Russel A.; Sontag, Brendan D.

    2018-01-01

    A modal test of NASA's Space Launch System (SLS) Core Stage is scheduled to occur at the Stennis Space Center B2 test stand. A derrick crane with a 150-ft long boom, located at the top of the stand, will be used to suspend the Core Stage in order to achieve defined boundary conditions. During this suspended modal test, it is expected that dynamic coupling will occur between the crane and the Core Stage. Therefore, a separate modal test was performed on the B2 crane itself, in order to evaluate the varying dynamic characteristics and correlate math models of the crane. Performing a modal test on such a massive structure was challenging and required creative test setup and procedures, including implementing both AC and DC accelerometers, and performing both classical hammer and operational modal analysis. This paper describes the logistics required to perform this large-scale test, as well as details of the test setup, the modal test methods used, and an overview and application of the results.

  13. Embedded Electro-Optic Sensor Network for the On-Site Calibration and Real-Time Performance Monitoring of Large-Scale Phased Arrays

    National Research Council Canada - National Science Library

    Yang, Kyoung

    2005-01-01

    This final report summarizes the progress during the Phase I SBIR project entitled "Embedded Electro-Optic Sensor Network for the On-Site Calibration and Real-Time Performance Monitoring of Large-Scale Phased Arrays...

  14. Methods for assessing the socioeconomic impacts of large-scale resource developments: implications for nuclear repository siting

    International Nuclear Information System (INIS)

    Murdock, S.H.; Leistritz, F.L.

    1983-03-01

    This report provides an overview of the major methods presently available for assessing the socioeconomic impacts of large-scale resource developments and discusses the implications and applications of such methods for nuclear-waste-repository siting. The report: (1) summarizes conceptual approaches underlying, and methodological alternatives for, the conduct of impact assessments in each substantive area, and then enumerates advantages and disadvantages of each alternative; (2) describes factors related to the impact-assessment process, impact events, and the characteristics of rural areas that affect the magnitude and distribution of impacts and the assessment of impacts in each area; (3) provides a detailed review of those methodologies actually used in impact assessment for each area, describes advantages and problems encountered in the use of each method, and identifies the frequency of use and the general level of acceptance of each technique; and (4) summarizes the implications of each area of projection for the repository-siting process and the applicability of the methods for each area to the special and standard features of repositories, and makes general recommendations concerning specific methods and procedures that should be incorporated in assessments for siting areas.

  15. Spatiotemporally enhancing time-series DMSP/OLS nighttime light imagery for assessing large-scale urban dynamics

    Science.gov (United States)

    Xie, Yanhua; Weng, Qihao

    2017-06-01

    Accurate, up-to-date, and consistent information on urban extents is vital for numerous applications central to urban planning, ecosystem management, and environmental assessment and monitoring. However, current large-scale urban extent products are not uniform with respect to definition, spatial resolution, temporal frequency, and thematic representation. This study aimed to enhance, spatiotemporally, time-series DMSP/OLS nighttime light (NTL) data for detecting large-scale urban changes. The enhanced NTL time series from 1992 to 2013 were first generated by implementing global inter-calibration, vegetation-based spatial adjustment, and urban archetype-based temporal modification. The dataset was then used for updating and backdating urban changes for the contiguous U.S.A. (CONUS) and China by using the Object-based Urban Thresholding method (i.e., NTL-OUT method, Xie and Weng, 2016b). The results showed that the updated urban extents were reasonably accurate, with city-scale RMSE (root mean square error) of 27 km² and Kappa of 0.65 for CONUS, and 55 km² and 0.59 for China, respectively. The backdated urban extents yielded similar accuracy, with RMSE of 23 km² and Kappa of 0.63 in CONUS, and 60 km² and 0.60 in China. The accuracy assessment further revealed that the spatial enhancement greatly improved the accuracy of urban updating and backdating by significantly reducing RMSE and slightly increasing Kappa values. The temporal enhancement also reduced RMSE, and improved the spatial consistency between estimated and reference urban extents. Although the utilization of enhanced NTL data successfully detected urban size change, relatively low locational accuracy of the detected urban changes was observed. It is suggested that the proposed methodology would be more effective for updating and backdating global urban maps if further fusion of NTL data with higher spatial resolution imagery were implemented.
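
    In the DMSP/OLS literature, the global inter-calibration step is commonly done by regressing each satellite-year composite against a reference composite over a region assumed radiometrically stable. A minimal Python sketch of that idea follows; the second-order polynomial form and the stable-region approach are assumptions in the spirit of standard practice, not necessarily the authors' exact procedure.

        import numpy as np

        def fit_intercalibration(dn_stable, dn_ref_stable):
            # Fit a second-order polynomial mapping this satellite-year's
            # digital numbers (DN) onto the reference year, using only pixels
            # from a region assumed temporally stable (e.g., a mature urban core).
            return np.polyfit(dn_stable.ravel(), dn_ref_stable.ravel(), deg=2)

        def apply_intercalibration(coeffs, dn_image):
            # Apply the fitted mapping to the full annual composite and cap
            # values at 63, the saturation level of the 6-bit OLS sensor.
            return np.clip(np.polyval(coeffs, dn_image), 0.0, 63.0)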

  16. Differences Across Levels in the Language of Agency and Ability in Rating Scales for Large-Scale Second Language Writing Assessments

    Directory of Open Access Journals (Sweden)

    Anderson Salena Sampson

    2017-12-01

    While large-scale language and writing assessments benefit from a wealth of literature on the reliability and validity of specific tests and rating procedures, there is comparatively less literature that explores the specific language of second language writing rubrics. This paper provides an analysis of the language of performance descriptors for the public versions of the TOEFL and IELTS writing assessment rubrics, with a focus on linguistic agency encoded by agentive verbs and language of ability encoded by modal verbs can and cannot. While the IELTS rubrics feature more agentive verbs than the TOEFL rubrics, both pairs of rubrics feature uneven syntax across the band or score descriptors with either more agentive verbs for the highest scores, more nominalization for the lowest scores, or language of ability exclusively in the lowest scores. These patterns mirror similar patterns in the language of college-level classroom-based writing rubrics, but they differ from patterns seen in performance descriptors for some large-scale admissions tests. It is argued that the lack of syntactic congruity across performance descriptors in the IELTS and TOEFL rubrics may reflect a bias in how actual student performances at different levels are characterized.
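
    Because the analysis rests on counting specific lexical categories across band descriptors, a tiny text-processing sketch can make the method concrete. The descriptor strings below are invented stand-ins, not actual TOEFL or IELTS rubric text, and the inventory is reduced to the two modal verbs the paper names.

        import re

        # Toy band descriptors (invented); real descriptors come from the
        # public TOEFL/IELTS rubric documents.
        bands = {
            9: "The writer organizes ideas logically and develops an argument fully.",
            3: "Can produce only simple sentences; cannot sustain a coherent message.",
        }

        for band, text in sorted(bands.items(), reverse=True):
            # Count ability language encoded by the modal verbs can/cannot.
            modals = len(re.findall(r"\b(?:can|cannot)\b", text, flags=re.IGNORECASE))
            print(f"band {band}: {modals} ability-modal token(s)")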

  17. Performance of lap splices in large-scale column specimens affected by ASR and/or DEF-extension phase.

    Science.gov (United States)

    2015-03-01

    A large experimental program, consisting of the design, construction, curing, exposure, and structural load testing of 16 large-scale column specimens with a critical lap splice region that were influenced by varying stages of alkali-silica react...

  18. Hierarchical ZnO microspheres built by sheet-like network: Large-scale synthesis and structurally enhanced catalytic performances

    International Nuclear Information System (INIS)

    Zhu Guoxing; Liu Yuanjun; Ji Zhenyuan; Bai Song; Shen Xiaoping; Xu Zheng

    2012-01-01

    Highlights: ► Hierarchical ZnO microspheres were prepared through a facile precursor procedure in the absence of self-assembled templates, organic additives, or matrices. ► The building blocks of microspheres, sheet-like ZnO networks, are porous mesocrystal terminated with (0 1 −1 0) crystal planes. ► The hierarchical ZnO microsphere catalyst exhibits structure-induced enhancement of catalytic performance and a strong durability. - Abstract: Large-scale novel hierarchical ZnO microspheres were fabricated by a facile precursor procedure in the absence of self-assembled templates, organic additives, or matrices. A field emission scanning electron microscopy (FESEM) image reveals that the ZnO microspheres with diameter of 5–18 μm are built by sheet-like ZnO networks with average thickness of 40 nm and length of several microns. High resolution transmission electron microscopy (HRTEM) image indicates that the building blocks, sheet-like ZnO networks, are porous mesocrystal terminated with {0 1 −1 0} crystal planes. A potential application of the ZnO microspheres as a catalyst in the synthesis of 5-substituted 1H-tetrazoles was investigated. It was found that the hierarchical ZnO microsphere catalyst exhibits structure-induced enhancement of catalytic performance and a strong durability.

  19. The Rights and Responsibility of Test Takers When Large-Scale Testing Is Used for Classroom Assessment

    Science.gov (United States)

    van Barneveld, Christina; Brinson, Karieann

    2017-01-01

    The purpose of this research was to identify conflicts in the rights and responsibility of Grade 9 test takers when some parts of a large-scale test are marked by teachers and used in the calculation of students' class marks. Data from teachers' questionnaires and students' questionnaires from a 2009-10 administration of a large-scale test of…

  20. Performance of the improved version of Monte Carlo code A3MCNP for large-scale shielding problems

    International Nuclear Information System (INIS)

    Omura, M.; Miyake, Y.; Hasegawa, T.; Ueki, K.; Sato, O.; Haghighat, A.; Sjoden, G. E.

    2005-01-01

    A3MCNP (Automatic Adjoint Accelerated MCNP) is a revised version of the MCNP Monte Carlo code, which automatically prepares variance reduction parameters for the CADIS (Consistent Adjoint Driven Importance Sampling) methodology. Using a deterministic 'importance' (or adjoint) function, CADIS performs source and transport biasing within the weight-window technique. The current version of A3MCNP uses the three-dimensional (3-D) Sn transport code TORT to determine a 3-D importance function distribution. Based on simulation of several real-life problems, it is demonstrated that A3MCNP provides precise calculation results with a remarkably short computation time by using proper and objective variance reduction parameters. However, since the first version of A3MCNP provided only a point source configuration option for large-scale shielding problems, such as spent-fuel transport casks, a large amount of memory may be necessary to store enough points to properly represent the source. Hence, we have developed an improved version of A3MCNP (referred to as A3MCNPV) which has a volumetric source configuration option. This paper describes the successful use of A3MCNPV for a concrete cask neutron and gamma-ray shielding problem, and a PWR dosimetry problem. (authors)
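
    The CADIS relations themselves are compact enough to sketch. Assuming a deterministic adjoint (importance) function has already been computed on a mesh, the biased source and weight-window centers follow directly; this Python fragment is a schematic of the published formulas, not code from A3MCNP.

        import numpy as np

        def cadis_parameters(source, adjoint_flux, cell_volumes):
            # Estimated detector response R = sum(q * phi_adj * V) over cells.
            response = np.sum(source * adjoint_flux * cell_volumes)
            # Biased source: sample source cells in proportion to q * phi_adj,
            # so particles are born preferentially where they matter most.
            biased_source = source * adjoint_flux * cell_volumes / response
            # Weight-window centers w = R / phi_adj, keeping the product of
            # particle weight and importance roughly constant during transport.
            ww_centers = response / np.maximum(adjoint_flux, 1e-30)
            return biased_source, ww_centers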

  1. Low-Temperature Soft-Cover Deposition of Uniform Large-Scale Perovskite Films for High-Performance Solar Cells.

    Science.gov (United States)

    Ye, Fei; Tang, Wentao; Xie, Fengxian; Yin, Maoshu; He, Jinjin; Wang, Yanbo; Chen, Han; Qiang, Yinghuai; Yang, Xudong; Han, Liyuan

    2017-09-01

    Large-scale high-quality perovskite thin films are crucial to produce high-performance perovskite solar cells. However, for perovskite films fabricated by solvent-rich processes, film uniformity can be compromised by convection during thermal evaporation of the solvent. Here, a scalable low-temperature soft-cover deposition (LT-SCD) method is presented, in which the thermal convection-induced defects in perovskite films are eliminated through a strategy of surface tension relaxation. Compact, homogeneous perovskite films free of convection-induced defects are obtained on an area of 12 cm², which enables a power conversion efficiency (PCE) of 15.5% on a solar cell with an area of 5 cm². This is the highest efficiency reported at such a large cell area. A PCE of 15.3% is also obtained on a flexible perovskite solar cell deposited on a polyethylene terephthalate substrate, owing to the advantage of the presented low-temperature processing. Hence, the present LT-SCD technology provides a new non-spin-coating route to the deposition of large-area uniform perovskite films for both rigid and flexible perovskite devices. © 2017 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  2. Advances in compact manufacturing for shape and performance controllability of large-scale components-a review

    Science.gov (United States)

    Qin, Fangcheng; Li, Yongtang; Qi, Huiping; Ju, Li

    2017-01-01

    Research on compact manufacturing technology for shape and performance controllability of metallic components can simplify the manufacturing process and improve its reliability while satisfying macro/micro-structure requirements. It is not only a key path to improving performance, saving material and energy, and achieving green manufacturing of components used in major equipment, but also a challenging subject at the frontiers of advanced plastic forming. Providing a novel horizon for the manufacturing of such critical components is therefore significant. Focusing on high-performance large-scale components such as bearing rings, flanges, railway wheels, and thick-walled pipes, the conventional processes and their state of development are summarized. The existing problems, including multi-pass heating, waste of material and energy, high cost and high emissions, are discussed, and the inability of present approaches to meet the demands of manufacturing high-quality components is also pointed out. Thus, new techniques related to casting-rolling compound precise forming of rings, compact manufacturing of duplex-metal composite rings, compact manufacturing of railway wheels, and casting-extruding continuous forming of thick-walled pipes are introduced in detail. The corresponding research contents, such as casting ring blanks, hot ring rolling, near-solid-state pressure forming, and hot extruding, are elaborated. Some findings on through-thickness microstructure evolution and mechanical properties are also presented. The components produced by the new techniques are mainly characterized by fine and homogeneous grains. Moreover, possible directions for further development of these techniques are suggested. Finally, the key scientific problems are proposed. All of these results and conclusions have reference value and guiding significance for the integrated control of shape and performance in advanced compact manufacturing.

  3. New Possibilities for High-Resolution, Large-Scale Ecosystem Assessment of the World's Semi-Arid Regions

    Science.gov (United States)

    Burney, J. A.; Goldblatt, R.

    2016-12-01

    Understanding drivers of land use change - and in particular, levels of ecosystem degradation - in semi-arid regions is of critical importance because these agroecosystems (1) are home to the world's poorest populations, almost all of whom depend on agriculture for their livelihoods, (2) play a critical role in the global carbon and climate cycles, and (3) have in many cases seen dramatic changes in temperature and precipitation, relative to global averages, over the past several decades. However, assessing ecosystem health (or, conversely, degradation) presents a difficult measurement problem. Established methods are very labor intensive and rest on detailed questionnaires and field assessments. High-resolution satellite imagery has a unique role in semi-arid ecosystem assessment in that it can be used for rapid (or repeated) and very simple measurements of tree and shrub density, an excellent overall indicator of dryland ecosystem health. Because trees and large shrubs are sparser in semi-arid regions, sub-meter resolution imagery in conjunction with automated image analysis can be used to assess density differences at high spatial resolution without expensive and time-consuming ground-truthing. This could be used down to the farm level, for example, to better assess the larger-scale ecosystem impacts of different management practices, to assess compliance with REDD+ carbon offset protocols, or to evaluate implementation of conservation goals. Here we present results comparing spatial and spectral remote sensing methods for semi-arid ecosystem assessment across new data sources, using the Brazilian Sertão as an example, and discuss the implications for large-scale use in semi-arid ecosystem science.

  4. Large-scale, high-performance and cloud-enabled multi-model analytics experiments in the context of the Earth System Grid Federation

    Science.gov (United States)

    Fiore, S.; Płóciennik, M.; Doutriaux, C.; Blanquer, I.; Barbera, R.; Williams, D. N.; Anantharaj, V. G.; Evans, B. J. K.; Salomoni, D.; Aloisio, G.

    2017-12-01

    The increasing resolution of comprehensive Earth System Models is rapidly leading to very large volumes of climate simulation output that pose significant scientific data management challenges in terms of data sharing, processing, analysis, visualization, preservation, curation, and archiving. Large-scale global experiments for Climate Model Intercomparison Projects (CMIP) have led to the development of the Earth System Grid Federation (ESGF), a federated data infrastructure which has been serving the CMIP5 experiment, providing access to 2 PB of data for the IPCC Assessment Reports. In such a context, running a multi-model data analysis experiment is very challenging, as it requires the availability of a large amount of data related to multiple climate model simulations and scientific data management tools for large-scale data analytics. To address these challenges, a case study on climate model intercomparison data analysis has been defined and implemented in the context of the EU H2020 INDIGO-DataCloud project. The case study has been tested and validated on CMIP5 datasets, in the context of a large-scale, international testbed involving several ESGF sites (LLNL, ORNL and CMCC), one orchestrator site (PSNC) and one more hosting INDIGO PaaS services (UPV). Additional ESGF sites, such as NCI (Australia) and a couple more in Europe, are also joining the testbed. The added value of the proposed solution is summarized in the following: it implements a server-side paradigm which limits data movement; it relies on a High-Performance Data Analytics (HPDA) stack to address performance; it exploits the INDIGO PaaS layer to support flexible, dynamic and automated deployment of software components; it provides user-friendly web access based on the INDIGO Future Gateway; and finally it integrates, complements and extends the support currently available through ESGF. Overall it provides a new "tool" for climate scientists to run multi-model experiments. At the

  5. Large Scale Solar Heating

    DEFF Research Database (Denmark)

    Heller, Alfred

    2001-01-01

    The main objective of the research was to evaluate large-scale solar heating connected to district heating (CSDHP), to build up a simulation tool and to demonstrate the application of the simulation tool for design studies and on a local energy planning case. The evaluation was mainly carried out...... model is designed and validated on the Marstal case. Applying the Danish Reference Year, a design tool is presented. The simulation tool is used for proposals for application of alternative designs, including high-performance solar collector types (trough solar collectors, vacuum pipe collectors......). Simulation programs are proposed as a control supporting tool for daily operation and performance prediction of central solar heating plants. Finally the CSHP technology is put into perspective with respect to alternatives and a short discussion on the barriers and breakthrough of the technology is given....

  6. Large scale model testing

    International Nuclear Information System (INIS)

    Brumovsky, M.; Filip, R.; Polachova, H.; Stepanek, S.

    1989-01-01

    Fracture mechanics and fatigue calculations for WWER reactor pressure vessels were checked by large-scale model testing performed using the large testing machine ZZ 8000 (with a maximum load of 80 MN) at the SKODA WORKS. Results are described from testing of the material resistance to non-ductile fracture. The testing included the base materials and welded joints. The rated specimen thickness was 150 mm with defects of a depth between 15 and 100 mm. Results are also presented for nozzles of 850 mm inner diameter at a scale of 1:3; static, cyclic, and dynamic tests were performed without and with surface defects (15, 30 and 45 mm deep). During cyclic tests the crack growth rate in the elastic-plastic region was also determined. (author). 6 figs., 2 tabs., 5 refs

  7. Assessing sandy beach macrofaunal patterns along large-scale environmental gradients: A Fuzzy Naïve Bayes approach

    Science.gov (United States)

    Bozzeda, Fabio; Zangrilli, Maria Paola; Defeo, Omar

    2016-06-01

    A Fuzzy Naïve Bayes (FNB) classifier was developed to assess large-scale variations in abundance, species richness and diversity of the macrofauna inhabiting fifteen Uruguayan sandy beaches shaped by beach morphodynamics and the estuarine gradient generated by the Rio de la Plata. Information from six beaches was used to estimate FNB parameters, while abiotic data from the remaining nine beaches were used to forecast abundance, species richness and diversity. FNB simulations reproduced the general increasing trend of the target variables from inner estuarine reflective beaches to marine dissipative ones. The FNB model also identified a threshold value of salinity range beyond which diversity markedly increased towards marine beaches. Salinity range is therefore suggested as an ecological master factor governing distributional patterns in sandy beach macrofauna. However, the model: 1) underestimated abundance and species richness at the innermost estuarine beach, with the lowest salinity, and 2) overestimated species richness in marine beaches with a reflective morphodynamic state, which is strongly linked to low abundance, species richness and diversity. Therefore, future modeling efforts should be refined by weighting differently the gradients defined by estuarine (estuarine beaches) and morphodynamic (marine beaches) variables, which could improve predictions of the target variables. Our modeling approach could be applied to a wide spectrum of issues, ranging from basic ecology to social-ecological systems. This approach seems relevant, given the current challenge to develop predictive methodologies to assess the simultaneous and nonlinear effects of anthropogenic and natural impacts in coastal ecosystems.
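
    To make the classifier concrete, here is a minimal Python sketch of the fuzzy variant of naive Bayes: continuous predictors are fuzzified into linguistic terms via membership functions, and crisp per-term likelihoods are combined with those memberships. The membership break points, term names, and class structure are illustrative assumptions, not the fitted Uruguayan-beach model.

        import numpy as np

        def tri(x, a, b, c):
            # Triangular membership function on [a, c], peaking at b.
            return max(min((x - a) / (b - a), (c - x) / (c - b)), 0.0)

        def fuzzify_salinity_range(x):
            # Hypothetical linguistic terms for the salinity-range predictor.
            return np.array([tri(x, -5, 0, 12),    # "low"
                             tri(x, 0, 12, 24),    # "medium"
                             tri(x, 12, 24, 36)])  # "high"

        def fnb_scores(memberships_per_feature, priors, likelihoods):
            # Naive Bayes with fuzzy evidence: for each feature f, the crisp
            # term observation is replaced by sum_t mu_f(t) * P(term t | class).
            scores = priors.astype(float).copy()
            for f, mu in enumerate(memberships_per_feature):
                scores *= likelihoods[f] @ mu   # likelihoods[f]: classes x terms
            return scores / scores.sum()

        # Example: two diversity classes ("low", "high"), one fuzzified feature.
        priors = np.array([0.5, 0.5])
        likelihoods = [np.array([[0.6, 0.3, 0.1],    # P(term | low diversity)
                                 [0.1, 0.3, 0.6]])]  # P(term | high diversity)
        print(fnb_scores([fuzzify_salinity_range(18.0)], priors, likelihoods))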

  8. Large-scale hydrological simulations using the soil water assessment tool, protocol development, and application in the danube basin.

    Science.gov (United States)

    Pagliero, Liliana; Bouraoui, Fayçal; Willems, Patrick; Diels, Jan

    2014-01-01

    The Water Framework Directive of the European Union requires member states to achieve good ecological status of all water bodies. A harmonized pan-European assessment of water resources availability and quality, as affected by various management options, is necessary for a successful implementation of European environmental legislation. In this context, we developed a methodology to predict surface water flow at the pan-European scale using available datasets. Among the hydrological models available, the Soil Water Assessment Tool was selected because its characteristics make it suitable for large-scale applications with limited data requirements. This paper presents the results for the Danube pilot basin. The Danube Basin is one of the largest European watersheds, covering approximately 803,000 km² and portions of 14 countries. The modeling data used included land use and management information, a detailed soil parameters map, and high-resolution climate data. The Danube Basin was divided into 4663 subwatersheds of an average size of 179 km². A modeling protocol is proposed to cope with the problems of hydrological regionalization from gauged to ungauged watersheds and of overparameterization and identifiability, which are usually present during calibration. The protocol involves a cluster analysis for the determination of hydrological regions and multiobjective calibration using a combination of manual and automated calibration. The proposed protocol was successfully implemented, with the modeled discharges capturing well the overall hydrological behavior of the basin. Copyright © by the American Society of Agronomy, Crop Science Society of America, and Soil Science Society of America, Inc.
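
    The regionalization step of the protocol (clustering subwatersheds into hydrological regions so that calibrated parameters can be transferred from gauged to ungauged members) can be sketched with standard tools. The attribute file, descriptor set, and cluster count below are assumptions for illustration; the paper's own cluster analysis may differ in algorithm and inputs.

        import numpy as np
        from sklearn.cluster import KMeans
        from sklearn.preprocessing import StandardScaler

        # Hypothetical attribute table: one row per subwatershed with columns
        # such as mean elevation, slope, annual rainfall, and soil/land-cover
        # fractions (the file name is an assumption).
        attributes = np.loadtxt("subbasin_attributes.csv", delimiter=",", skiprows=1)

        # Standardize so every descriptor contributes comparably, then group
        # subwatersheds into candidate hydrological regions.
        X = StandardScaler().fit_transform(attributes)
        regions = KMeans(n_clusters=8, n_init=10, random_state=0).fit_predict(X)

        # Gauged basins within a region donate calibrated parameter sets to
        # the ungauged basins sharing that region label.
        print(np.bincount(regions))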

  9. Energy performance strategies for the large scale introduction of geothermal energy in residential and industrial buildings: The GEO.POWER project

    International Nuclear Information System (INIS)

    Giambastiani, B.M.S.; Tinti, F.; Mendrinos, D.; Mastrocicco, M.

    2014-01-01

    Use of shallow geothermal energy, in terms of ground coupled heat pumps (GCHP) for heating and cooling purposes, is an environmentally-friendly and cost-effective alternative with potential to replace fossil fuels and help mitigate global warming. Focusing on the recent results of the GEO.POWER project, this paper aims at examining the energy performance strategies and the future regional and national financial instruments for large-scale introduction of geothermal energy and GCHP systems in both residential and industrial buildings. After a transferability assessment to evaluate the reproducibility of some outstanding examples of systems currently existing in Europe for the utilisation of shallow geothermal energy, a set of regulatory, economic and technical actions is proposed to encourage GCHP market development and support geothermal energy investments in the frame of the existing European normative platforms. This analysis shows that many European GCHP markets are changing from new markets to growth markets. However, some interventions are still required, such as incentives, regulatory frameworks, certification schemes and training activities, in order to accelerate market uptake and achieve the main European energy and climate targets. - Highlights: • Potentiality of geothermal applications for heating and cooling in buildings. • Description of the GEO.POWER project and its results. • Local strategies for the large scale introduction of GCHPs

  10. The Climate Potentials and Side-Effects of Large-Scale terrestrial CO2 Removal - Insights from Quantitative Model Assessments

    Science.gov (United States)

    Boysen, L.; Heck, V.; Lucht, W.; Gerten, D.

    2015-12-01

    Terrestrial carbon dioxide removal (tCDR) through dedicated biomass plantations is considered one climate engineering (CE) option if implemented at large scale. While the risks and costs are supposed to be small, the effectiveness depends strongly on the spatial and temporal scales of implementation. Based on simulations with a dynamic global vegetation model (LPJmL) we comprehensively assess the effectiveness, biogeochemical side-effects and tradeoffs from an earth system-analytic perspective. We analyzed systematic land-use scenarios in which all, 25%, or 10% of natural and/or agricultural areas are converted to tCDR plantations, including the assumption that biomass plantations are established once the 2°C target is crossed in a business-as-usual climate change trajectory. The resulting tCDR potentials in year 2100 include the net accumulated annual biomass harvests and changes in all land carbon pools. We find that only the most spatially excessive, and thus undesirable, scenario would be capable of restoring the 2°C target by 2100 under continuing high emissions (with a cooling of 3.02°C). Large-scale biomass plantations covering areas between 1.1 - 4.2 Gha would produce a warming reduction potential of 0.8 - 1.4°C. tCDR plantations at smaller scales do not build up enough biomass over the considered period, and the potentials to achieve global warming reductions are substantially lowered to no more than 0.5-0.6°C. Finally, we demonstrate that the (non-economic) costs for the Earth system include negative impacts on the water cycle and on ecosystems, which are already under pressure due to both land use change and climate change. Overall, tCDR may lead to a further transgression of land- and water-related planetary boundaries while not being able to set back the crossing of the planetary boundary for climate change. tCDR could still be considered in the near-future mitigation portfolio if implemented on small scales on wisely chosen areas.

  11. Large-Scale Assessment of a Fully Automatic Co-Adaptive Motor Imagery-Based Brain Computer Interface.

    Directory of Open Access Journals (Sweden)

    Laura Acqualagna

    In recent years Brain Computer Interface (BCI) technology has benefited from the development of sophisticated machine learning methods that let the user operate the BCI after a few trials of calibration. One remarkable example is the recent development of co-adaptive techniques that proved to extend the use of BCIs also to people not able to achieve successful control with the standard BCI procedure. Especially for BCIs based on the modulation of the Sensorimotor Rhythm (SMR) these improvements are essential, since a non-negligible percentage of users is unable to operate SMR-BCIs efficiently. In this study we evaluated for the first time a fully automatic co-adaptive BCI system on a large scale. A pool of 168 participants naive to BCIs operated the co-adaptive SMR-BCI in one single session. Different psychological interventions were performed prior to the BCI session in order to investigate how motor coordination training and relaxation could influence BCI performance. A neurophysiological indicator based on the Power Spectral Density (PSD) was extracted from the recording of a few minutes of resting-state brain activity and tested as a predictor of BCI performance. Results show that high accuracies in operating the BCI could be reached by the majority of the participants before the end of the session. BCI performance could be significantly predicted by the neurophysiological indicator, consolidating the validity of the model previously developed. Nevertheless, we still found about 22% of users with performance significantly lower than the threshold of efficient BCI control at the end of the session. As inter-subject variability is still the major problem of BCI technology, we point out crucial issues for those who did not achieve sufficient control. Finally, we propose developments to move a step forward toward the applicability of the promising co-adaptive methods.
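
    The resting-state PSD predictor can be sketched briefly. The published indicator is based on the SMR peak over the 1/f noise floor; the fragment below substitutes a simpler band average of Welch PSD estimates, so treat the band edges and the averaging as simplifying assumptions.

        import numpy as np
        from scipy.signal import welch

        def smr_predictor(resting_eeg, fs=1000.0, band=(8.0, 15.0)):
            # resting_eeg: array of shape (channels, samples) from a few
            # minutes of resting-state EEG.
            freqs, psd = welch(resting_eeg, fs=fs, nperseg=int(2 * fs))
            mask = (freqs >= band[0]) & (freqs <= band[1])
            # Mean mu/SMR-band power across channels as a crude predictor of
            # later feedback accuracy (stronger idle rhythm, better prognosis).
            return float(psd[:, mask].mean())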

  12. An integrated assessment of a large-scale biodiesel production in Italy: Killing several birds with one stone?

    International Nuclear Information System (INIS)

    Russi, Daniela

    2008-01-01

    Biofuels are often presented as a contribution towards the solution of the problems related to our strong dependency on fossil fuels, i.e. greenhouse effect, energy dependency, urban pollution, besides being a way to support rural development. In this paper, an integrated assessment approach is employed to discuss the social desirability of a large-scale biodiesel production in Italy, taking into account social, environmental and economic factors. The conclusion is that the advantages in terms of reduction of greenhouse gas emissions, energy dependency and urban pollution would be very modest. The small benefits would not be enough to offset the huge costs in terms of land requirement: if the target of the European Directive 2003/30/EC were reached (5.75% of the energy used for transport by 2010) the equivalent of about one-third of the Italian agricultural land would be needed. The consequences would be a considerable increase in food imports and large environmental impacts in the agricultural phase. Also, since biodiesel must be de-taxed in order to make it competitive with oil-derived diesel, the Italian energy revenues would be reduced. In the end, rural development remains the only sound reason to promote biodiesel, but even for this objective other strategies look more advisable, like supporting organic agriculture. (author)

  13. Environmental implications of large-scale adoption of wind power: a scenario-based life cycle assessment

    International Nuclear Information System (INIS)

    Arvesen, Anders; Hertwich, Edgar G

    2011-01-01

    We investigate the potential environmental impacts of a large-scale adoption of wind power to meet up to 22% of the world's growing electricity demand. The analysis builds on life cycle assessments of generic onshore and offshore wind farms, meant to represent average conditions for global deployment of wind power. We scale unit-based findings to estimate aggregated emissions of building, operating and decommissioning wind farms toward 2050, taking into account changes in the electricity mix in manufacturing. The energy scenarios investigated are the International Energy Agency's BLUE scenarios. For global wind power in 2007–50 we estimate impact category indicators of 1.7–2.6 Gt CO2-eq for climate change, 2.1–3.2 Mt N-eq for marine eutrophication, 9.2–14 Mt NMVOC for photochemical oxidant formation, and 9.5–15 Mt SO2-eq for terrestrial acidification. Assuming lifetimes 5 yr longer than the reference, the total climate change indicator values are reduced by 8%. In the BLUE Map scenario, construction of new capacity contributes 64%, and repowering of existing capacity 38%, to total cumulative greenhouse gas emissions. The total emissions of wind electricity range between 4% and 14% of the direct emissions of the replaced fossil-fueled power plants. For all impact categories, the indirect emissions of displaced fossil power are larger than the total emissions caused by wind power.
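
    The scaling from unit-based LCA results to fleet-level totals is simple arithmetic, shown here with round illustrative numbers that are not the study's values.

        # Back-of-envelope scaling of a unit-based LCA intensity to one year of
        # fleet output (all inputs are assumptions for illustration).
        capacity_added_gw = 25.0        # new wind capacity in a given year, GW
        output_per_gw_twh = 2.6         # annual generation per GW at ~30% capacity factor
        ghg_intensity_g_kwh = 20.0      # life cycle emissions, g CO2-eq per kWh

        annual_output_twh = capacity_added_gw * output_per_gw_twh
        # TWh -> kWh is a factor of 1e9; g -> Mt is a factor of 1e-12.
        ghg_mt = annual_output_twh * 1e9 * ghg_intensity_g_kwh * 1e-12
        print(f"{ghg_mt:.2f} Mt CO2-eq from that year's wind generation")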

  14. Infrastructure for Large-Scale Quality-Improvement Projects: Early Lessons from North Carolina Improving Performance in Practice

    Science.gov (United States)

    Newton, Warren P.; Lefebvre, Ann; Donahue, Katrina E.; Bacon, Thomas; Dobson, Allen

    2010-01-01

    Introduction: Little is known regarding how to accomplish large-scale health care improvement. Our goal is to improve the quality of chronic disease care in all primary care practices throughout North Carolina. Methods: Methods for improvement include (1) common quality measures and shared data system; (2) rapid cycle improvement principles; (3)…

  15. Magma viscosity estimation based on analysis of erupted products. Potential assessment for large-scale pyroclastic eruptions

    International Nuclear Information System (INIS)

    Takeuchi, Shingo

    2010-01-01

    After the formulation of guidelines for volcanic hazards in site evaluation for nuclear installations (e.g. JEAG4625-2009), it is required to establish appropriate methods to assess the potential for large-scale pyroclastic eruptions at long-dormant volcanoes, which are among the volcanic phenomena most hazardous to the safety of the installations. In considering volcanic dormancy, magma eruptability is an important concept. Magma eruptability is dominantly controlled by magma viscosity, which can be estimated from petrological analysis of erupted materials. Therefore, viscosity estimation of magmas erupted in past eruptions should provide important information to assess future activities at hazardous volcanoes. In order to show the importance of magma viscosity in the concept of magma eruptability, this report overviews dike propagation processes from a magma chamber and the nature of magma viscosity. Magma viscosities at pre-eruptive conditions of magma chambers were compiled based on previous petrological studies of past eruptions in Japan. There are only 16 examples of eruptions at 9 volcanoes satisfying the data requirements for magma viscosity estimation. Estimated magma viscosities range from 10² to 10⁷ Pa·s for basaltic to rhyolitic magmas. Most of the examples fall below the dike propagation limit of magma viscosity (ca. 10⁶ Pa·s) estimated based on a dike propagation model. Magmas more viscous (ca. 10⁷ Pa·s) than the dike propagation limit are considered to lose eruptability, i.e. the ability to form dikes and initiate eruptions. However, in some cases, small precursory eruptions of less viscous magmas commonly occurred just before climactic eruptions of the highly viscous magmas, suggesting that the precursory dike propagation by the less viscous magmas induced the following eruptions of the highly viscous magmas (ca. 10⁷ Pa·s). (author)

  16. Performance analysis of a large-scale helium Brayton cryo-refrigerator with static gas bearing turboexpander

    International Nuclear Information System (INIS)

    Zhang, Yu; Li, Qiang; Wu, Jihao; Li, Qing; Lu, Wenhai; Xiong, Lianyou; Liu, Liqiang; Xu, Xiangdong; Sun, Lijia; Sun, Yu; Xie, Xiujuan; Wang, Bingming; Qiu, Yinan; Zhang, Peng

    2015-01-01

    Highlights: • A 2 kW at 20.0 K helium Brayton cryo-refrigerator is built in China. • A series of tests has been systematically conducted to investigate the performance of the cryo-refrigerator. • Maximum heat conductance proportion (90.7%) appears in the heat exchangers of the cold box rather than those of the heat reservoirs. • A model of the helium Brayton cryo-refrigerator/cycle is presented according to finite-time thermodynamics. - Abstract: Large-scale helium cryo-refrigerators are widely used in superconducting systems, nuclear fusion engineering, and scientific research; however, their energy efficiency is quite low. First, a 2 kW at 20.0 K helium Brayton cryo-refrigerator is built, and a series of tests has been systematically conducted to investigate its performance. It is found that the maximum heat conductance proportion (90.7%) appears in the heat exchangers of the cold box rather than those of the heat reservoirs, which is the main characteristic distinguishing the helium Brayton cryo-refrigerator/cycle from the air Brayton refrigerator/cycle. Three other characteristics lie in the configuration of the refrigerant helium bypass, the internal purifier and the non-linearity of the specific heat of helium. Second, a model of the helium Brayton cryo-refrigerator/cycle is presented according to finite-time thermodynamics. An assumption named internal purification temperature depth (PTD) is introduced, and the heat capacity rate of the whole cycle is divided into three different regions in accordance with the PTD: the room temperature region, the upper internal purification temperature region and the lower one. Analytical expressions for cooling capacity and COP are obtained, and we find that the expressions are piecewise functions. Further, comparison between the model and the experimental results for the cooling capacity of the helium cryo-refrigerator shows that the error is less than 7.6%. The PTD not only helps to achieve the analytical formulae but also indicates the working

  17. On the use of Cloud Computing and Machine Learning for Large-Scale SAR Science Data Processing and Quality Assessment Analysis

    Science.gov (United States)

    Hua, H.

    2016-12-01

    Geodetic imaging is revolutionizing geophysics, but the scope of discovery has been limited by the labor-intensive technological implementation of the analyses. The Advanced Rapid Imaging and Analysis (ARIA) project has proven capability to automate SAR data processing and analysis, and existing and upcoming SAR missions such as Sentinel-1A/B and NISAR are expected to generate massive amounts of SAR data. This has brought to the forefront the need for analytical tools for SAR quality assessment (QA) on large volumes of SAR data, a critical step before higher-level time series and velocity products can be reliably generated. Initially, an advanced hybrid-cloud computing science data system was leveraged for large-scale processing, and machine learning approaches were augmented for automated analysis of various quality metrics. Machine learning capabilities (user training of features, cross-validation, and prediction models) were integrated into our cloud-based science data processing flow to enable large-scale and high-throughput QA analytics, enabling improvements to the production quality of geodetic data products.
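
    As a concrete illustration of the QA-prediction step, the following sketch trains a cross-validated classifier on per-product quality metrics. The file names, the metric set, and the choice of a random forest are assumptions; the abstract does not specify the model family.

        import numpy as np
        from sklearn.ensemble import RandomForestClassifier
        from sklearn.model_selection import cross_val_score

        # Hypothetical inputs: each SAR product summarized by QA metrics
        # (e.g., mean coherence, unwrapping residue count, perpendicular
        # baseline) plus analyst-assigned usable / not-usable labels.
        features = np.load("qa_metrics.npy")   # shape (n_products, n_metrics)
        labels = np.load("qa_labels.npy")      # shape (n_products,)

        model = RandomForestClassifier(n_estimators=200, random_state=0)
        scores = cross_val_score(model, features, labels, cv=5)
        print("cross-validated QA accuracy:", scores.mean())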

  18. Water surface assisted synthesis of large-scale carbon nanotube film for high-performance and stretchable supercapacitors.

    Science.gov (United States)

    Yu, Minghao; Zhang, Yangfan; Zeng, Yinxiang; Balogun, Muhammad-Sadeeq; Mai, Kancheng; Zhang, Zishou; Lu, Xihong; Tong, Yexiang

    2014-07-16

    A multiwalled carbon-nanotube (MWCNT)/polydimethylsiloxane (PDMS) film with excellent conductivity and mechanical properties is developed using a facile and large-scale water surface assisted synthesis method. The film can act as a conductive support for electrochemically active PANI nanofibers. A device based on these PANI/MWCNT/PDMS electrodes shows good and stable capacitive behavior, even under static and dynamic stretching conditions. © 2014 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  19. High Performance Simulation of Large-Scale Red Sea Ocean Bottom Seismic Data on the Supercomputer Shaheen II

    KAUST Repository

    Tonellot, Thierry; Etienne, Vincent; Gashawbeza, Ewenet; Curiel, Emesto Sandoval; Khan, Azizur; Feki, Saber; Kortas, Samuel

    2017-01-01

    A combination of both shallow and deep water, plus islands and coral reefs, are some of the main features contributing to the complexity of subsalt seismic exploration in the Red Sea transition zone. These features often result in degrading effects on seismic images. State-of-the-art ocean bottom acquisition technologies are therefore required to record seismic data with optimal fold and offset, as well as advanced processing and imaging techniques. Numerical simulations of such complex seismic data can help improve acquisition design and also help in customizing, validating and benchmarking the processing and imaging workflows that will be applied to the field data. Consequently, realistic simulation of wave propagation is a computationally intensive process requiring a realistic model and an efficient 3D wave equation solver. Large-scale computing resources are also required to meet turnaround times compatible with a production time frame. In this work, we present the numerical simulation of an ocean bottom seismic survey to be acquired in the Red Sea transition zone starting in summer 2016. The survey's acquisition geometry comprises nearly 300,000 unique shot locations and 21,000 unique receiver locations, covering about 760 km². Using well log measurements and legacy 2D seismic lines in this area, a 3D P-wave velocity model was built, with a maximum depth of 7 km. The model was sampled at 10 m in each direction, resulting in more than 5 billion cells. Wave propagation in this model was performed using a 3D finite difference solver in the time domain based on a staggered grid velocity-pressure formulation of acoustodynamics. To ensure that the resulting data could be generated sufficiently fast, the King Abdullah University of Science and Technology (KAUST) supercomputer Shaheen II Cray XC40 was used. A total of 21,000 three-component (pressure and vertical and horizontal velocity) common receiver gathers with a 50 Hz maximum frequency were computed in less than
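
    The staggered-grid velocity-pressure scheme the abstract names can be illustrated in two dimensions. This Python sketch is a second-order toy version of that formulation (the production solver is 3-D, higher order, and parallel); the grid size, medium, and wavelet parameters are illustrative only.

        import numpy as np

        n, h, dt, nt = 301, 10.0, 1.0e-3, 800       # grid, spacing (m), step (s)
        vp = np.full((n, n), 2500.0)                # P-wave velocity (m/s)
        rho = np.full((n, n), 1000.0)               # density (kg/m^3)
        kappa = rho * vp ** 2                       # bulk modulus
        rho_x = 0.5 * (rho[:, 1:] + rho[:, :-1])    # density averaged to vx nodes
        rho_z = 0.5 * (rho[1:, :] + rho[:-1, :])    # density averaged to vz nodes
        p = np.zeros((n, n))                        # pressure on integer nodes
        vx = np.zeros((n, n - 1))                   # velocities on half-offset nodes
        vz = np.zeros((n - 1, n))

        def ricker(t, f0=25.0, t0=0.06):            # 25 Hz Ricker source wavelet
            a = (np.pi * f0 * (t - t0)) ** 2
            return (1.0 - 2.0 * a) * np.exp(-a)

        for it in range(nt):
            # Velocity update: dv/dt = -(1/rho) grad p.
            vx -= dt / rho_x * (p[:, 1:] - p[:, :-1]) / h
            vz -= dt / rho_z * (p[1:, :] - p[:-1, :]) / h
            # Pressure update: dp/dt = -kappa div v.
            div = np.zeros_like(p)
            div[:, 1:-1] += (vx[:, 1:] - vx[:, :-1]) / h
            div[1:-1, :] += (vz[1:, :] - vz[:-1, :]) / h
            p -= dt * kappa * div
            p[n // 2, n // 2] += ricker(it * dt)    # inject point source at center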

  20. High Performance Simulation of Large-Scale Red Sea Ocean Bottom Seismic Data on the Supercomputer Shaheen II

    KAUST Repository

    Tonellot, Thierry

    2017-02-27

    A combination of both shallow and deep water, plus islands and coral reefs, are some of the main features contributing to the complexity of subsalt seismic exploration in the Red Sea transition zone. These features often result in degrading effects on seismic images. State-of-the-art ocean bottom acquisition technologies are therefore required to record seismic data with optimal fold and offset, as well as advanced processing and imaging techniques. Numerical simulations of such complex seismic data can help improve acquisition design and also help in customizing, validating and benchmarking the processing and imaging workflows that will be applied to the field data. Consequently, realistic simulation of wave propagation is a computationally intensive process requiring a realistic model and an efficient 3D wave equation solver. Large-scale computing resources are also required to meet turnaround times compatible with a production time frame. In this work, we present the numerical simulation of an ocean bottom seismic survey to be acquired in the Red Sea transition zone starting in summer 2016. The survey's acquisition geometry comprises nearly 300,000 unique shot locations and 21,000 unique receiver locations, covering about 760 km². Using well log measurements and legacy 2D seismic lines in this area, a 3D P-wave velocity model was built, with a maximum depth of 7 km. The model was sampled at 10 m in each direction, resulting in more than 5 billion cells. Wave propagation in this model was performed using a 3D finite difference solver in the time domain based on a staggered grid velocity-pressure formulation of acoustodynamics. To ensure that the resulting data could be generated sufficiently fast, the King Abdullah University of Science and Technology (KAUST) supercomputer Shaheen II Cray XC40 was used. A total of 21,000 three-component (pressure and vertical and horizontal velocity) common receiver gathers with a 50 Hz maximum frequency were computed in less

  1. Is Bhutan destined for 100% organic? Assessing the economy-wide effects of a large-scale conversion policy.

    Science.gov (United States)

    Feuerbacher, Arndt; Luckmann, Jonas; Boysen, Ole; Zikeli, Sabine; Grethe, Harald

    2018-01-01

    Organic agriculture (OA) is considered a strategy to make agriculture more sustainable. Bhutan has embraced the ambitious goal of becoming the world's first 100% organic nation. By analysing recent on-farm data in Bhutan, we found organic crop yields on average to be 24% lower than conventional yields. Based on these yield gaps, we assess the effects of the 100% organic conversion policy by employing an economy-wide computable general equilibrium (CGE) model with detailed representation of Bhutan's agricultural sector incorporating agroecological zones, crop nutrients, and field operations. Despite a low dependency on agrochemicals from the onset of this initiative, we find a considerable reduction in Bhutan's GDP, substantial welfare losses, particularly for non-agricultural households, and adverse impacts on food security. The yield gap is the main driver for a strong decline in domestic agricultural production, which is largely compensated by increased food imports, resulting in a weakening of the country's cereal self-sufficiency. Current organic by default farming practices in Bhutan are still underdeveloped and do not apply the systems approach of organic farming as defined in the IFOAM organic farming standards. This is reflected in the strong decline of nitrogen (N) availability to crops in our simulation and bears potential for increased yields in OA. Improvement of soil-fertility practices, e.g., the adoption of N-fixing crops, improved animal husbandry systems with increased provision of animal manure and access to markets with price premium for organic products could help to lower the economic cost of the large-scale conversion.

  2. Development and Large-Scale Validation of an Instrument to Assess Arabic-Speaking Students' Attitudes Toward Science

    Science.gov (United States)

    Abd-El-Khalick, Fouad; Summers, Ryan; Said, Ziad; Wang, Shuai; Culbertson, Michael

    2015-11-01

    This study is part of a large-scale project focused on 'Qatari students' Interest in, and Attitudes toward, Science' (QIAS). QIAS aimed to gauge Qatari student attitudes toward science in grades 3-12, examine factors that impact these attitudes, and assess the relationship between student attitudes and prevailing modes of science teaching in Qatari schools. This report details the development and validation of the 'Arabic-Speaking Students' Attitudes toward Science Survey' (ASSASS), which was specifically developed for the purposes of the QIAS project. The theories of reasoned action and planned behavior (TRAPB) [Ajzen, I., & Fishbein, M. (2005). The influence of attitudes on behavior. In D. Albarracín, B. T. Johnson, & M. P. Zanna (Eds.), The handbook of attitudes (pp. 173-221). Mahwah, NJ: Erlbaum] guided the instrument development. Development and validation of the ASSASS proceeded in 3 phases. First, a 10-member expert panel examined an initial pool of 74 items, which were revised and consolidated into a 60-item version of the instrument. This version was piloted with 369 Qatari students from the target schools and grade levels. Analyses of pilot data resulted in a refined version of the ASSASS, which was administered to a national probability sample of 3027 participants representing all students enrolled in grades 3-12 in the various types of schools in Qatar. Of the latter, 1978 students completed the Arabic version of the instrument. Analyses supported a robust, 5-factor model for the instrument, which is consistent with the TRAPB framework. The factors were: Attitudes toward science and school science, unfavorable outlook on science, control beliefs about ability in science, behavioral beliefs about the consequences of engaging with science, and intentions to pursue science.

  3. Large-scale assessment of soil erosion in Africa: satellites help to jointly account for dynamic rainfall and vegetation cover

    Science.gov (United States)

    Vrieling, Anton; Hoedjes, Joost C. B.; van der Velde, Marijn

    2015-04-01

    Efforts to map and monitor soil erosion need to account for the erratic nature of the soil erosion process. Soil erosion by water occurs on sloped terrain when erosive rainfall and the consequent surface runoff impact soils that are not well protected by vegetation or other soil protective measures. Both rainfall erosivity and vegetation cover are highly variable through space and time. Due to data paucity and the relative ease of spatially overlaying geographical data layers into existing models like the USLE (Universal Soil Loss Equation), many studies and mapping efforts merely use average annual values for erosivity and vegetation cover as input. We first show that rainfall erosivity can be estimated from satellite precipitation data. We obtained average annual erosivity estimates from 15 yr of 3-hourly TRMM Multi-satellite Precipitation Analysis (TMPA) data (1998-2012) using intensity-erosivity relationships. Our estimates showed a positive correlation (r = 0.84) with long-term annual erosivity values of 37 stations obtained from the literature. Using these TMPA erosivity retrievals, we demonstrate the large interannual variability, with maximum annual erosivity often exceeding two to three times the mean value, especially in semi-arid areas. We then calculate erosivity at a 10-day time step and combine this with vegetation cover development for selected locations in Africa using NDVI - normalized difference vegetation index - time series from SPOT VEGETATION. Although we do not integrate the data at this point, the joint analysis of both variables stresses the need to jointly account for erosivity and vegetation cover in large-scale erosion assessment and monitoring.
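
    A simplified version of the intensity-erosivity computation can be sketched as follows. It applies the widely used Brown and Foster unit kinetic energy expression to 3-hourly rainfall depths and approximates the 30-minute intensity by the 3-hourly mean intensity; both are coarse assumptions for illustration, not the calibrated relationships used in the study.

        import numpy as np

        def annual_erosivity(precip_3h_mm):
            # precip_3h_mm: one year of 3-hourly rainfall depths (mm),
            # 2920 values. Returns erosivity in MJ mm / (ha h yr).
            intensity = precip_3h_mm / 3.0                        # mean mm/h
            # Unit kinetic energy per mm of rain (Brown & Foster form).
            e = 0.29 * (1.0 - 0.72 * np.exp(-0.05 * intensity))   # MJ/(ha mm)
            # Sum of energy times rainfall times the intensity proxy (for I30).
            return float(np.sum(e * precip_3h_mm * intensity))

        # Synthetic demo year: mostly dry 3-h steps with occasional storms.
        rain = np.random.default_rng(1).gamma(0.05, 8.0, size=2920)
        print(annual_erosivity(rain))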

  4. Information Power Grid: Distributed High-Performance Computing and Large-Scale Data Management for Science and Engineering

    Science.gov (United States)

    Johnston, William E.; Gannon, Dennis; Nitzberg, Bill

    2000-01-01

    We use the term "Grid" to refer to distributed, high performance computing and data handling infrastructure that incorporates geographically and organizationally dispersed, heterogeneous resources that are persistent and supported. This infrastructure includes: (1) Tools for constructing collaborative, application oriented Problem Solving Environments / Frameworks (the primary user interfaces for Grids); (2) Programming environments, tools, and services providing various approaches for building applications that use aggregated computing and storage resources, and federated data sources; (3) A comprehensive and consistent set of location independent tools and services for accessing and managing dynamic collections of widely distributed resources: heterogeneous computing systems, storage systems, real-time data sources and instruments, human collaborators, and communications systems; (4) Operational infrastructure including management tools for distributed systems and distributed resources, user services, accounting and auditing, strong and location independent user authentication and authorization, and overall system security services. The vision for NASA's Information Power Grid - a computing and data Grid - is that it will provide significant new capabilities to scientists and engineers by facilitating the routine construction of information based problem solving environments / frameworks. Such Grids will knit together widely distributed computing, data, instrument, and human resources into just-in-time systems that can address complex and large-scale computing and data analysis problems. Examples of these problems include: (1) Coupled, multidisciplinary simulations too large for single systems (e.g., multi-component NPSS turbomachine simulation); (2) Use of widely distributed, federated data archives (e.g., simultaneous access to meteorological, topological, aircraft performance, and flight path scheduling databases supporting a National Air Space Simulation system); (3

  5. Strategic Environmental Assessment and Environmental Auditing in Large-scale Public Infrastructure Construction: the case of Qinghai-Tibet Railway

    NARCIS (Netherlands)

    He, G.; Zhang, L.; Lu, Y.

    2009-01-01

    Large-scale public infrastructure projects have featured in China’s modernization course since the early 1980s. During the early stages of China’s rapid economic development, public attention focused on the economic and social impact of high-profile construction projects. In recent years, however,

  6. Recent Developments in Language Assessment and the Case of Four Large-Scale Tests of ESOL Ability

    Science.gov (United States)

    Stoynoff, Stephen

    2009-01-01

    This review article surveys recent developments and validation activities related to four large-scale tests of L2 English ability: the iBT TOEFL, the IELTS, the FCE, and the TOEIC. In addition to describing recent changes to these tests, the paper reports on validation activities that were conducted on the measures. The results of this research…

  7. Standard Errors for National Trends in International Large-Scale Assessments in the Case of Cross-National Differential Item Functioning

    Science.gov (United States)

    Sachse, Karoline A.; Haag, Nicole

    2017-01-01

    Standard errors computed according to the operational practices of international large-scale assessment studies such as the Programme for International Student Assessment's (PISA) or the Trends in International Mathematics and Science Study (TIMSS) may be biased when cross-national differential item functioning (DIF) and item parameter drift are…

  8. Multilevel Latent Class Analysis for Large-Scale Educational Assessment Data: Exploring the Relation between the Curriculum and Students' Mathematical Strategies

    Science.gov (United States)

    Fagginger Auer, Marije F.; Hickendorff, Marian; Van Putten, Cornelis M.; Béguin, Anton A.; Heiser, Willem J.

    2016-01-01

    A first application of multilevel latent class analysis (MLCA) to educational large-scale assessment data is demonstrated. This statistical technique addresses several of the challenges that assessment data offers. Importantly, MLCA allows modeling of the often ignored teacher effects and of the joint influence of teacher and student variables.…

  9. Large scale tracking algorithms

    Energy Technology Data Exchange (ETDEWEB)

    Hansen, Ross L. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Love, Joshua Alan [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Melgaard, David Kennett [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Karelitz, David B. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Pitts, Todd Alan [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Zollweg, Joshua David [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Anderson, Dylan Z. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Nandy, Prabal [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Whitlow, Gary L. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Bender, Daniel A. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Byrne, Raymond Harry [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)

    2015-01-01

    Low signal-to-noise data processing algorithms for improved detection, tracking, discrimination and situational threat assessment are a key research challenge. As sensor technologies progress, the number of pixels will increase significantly. This will result in increased resolution, which could improve object discrimination, but unfortunately will also result in a significant increase in the number of potential targets to track. Many tracking techniques, like multi-hypothesis trackers, suffer from a combinatorial explosion as the number of potential targets increases. As the resolution increases, the phenomenology applied towards detection algorithms also changes. For low resolution sensors, "blob" tracking is the norm. For higher resolution data, additional information may be employed in the detection and classification steps. The most challenging scenarios are those where the targets cannot be fully resolved, yet must be tracked and distinguished from neighboring closely spaced objects. Tracking vehicles in an urban environment is an example of such a challenging scenario. This report evaluates several potential tracking algorithms for large-scale tracking in an urban environment.

  10. Metric matters : the performance and organisation of volumetric water control in large-scale irrigation in the North Coast of Peru

    NARCIS (Netherlands)

    Vos, J.M.C.

    2002-01-01

    This thesis describes the organisation and performance of two large-scale irrigation systems in the North Coast of Peru. Good water management is important in this area because water is scarce and irrigated agriculture provides a livelihood to many small and middle-sized farmers. Water in

  11. An integrated, indicator framework for assessing large-scale variations and change in seasonal timing and phenology (Invited)

    Science.gov (United States)

    Betancourt, J. L.; Weltzin, J. F.

    2013-12-01

    As part of an effort to develop an Indicator System for the National Climate Assessment (NCA), the Seasonality and Phenology Indicators Technical Team (SPITT) proposed an integrated, continental-scale framework for understanding and tracking seasonal timing in physical and biological systems. The framework shares several metrics with the EPA's National Climate Change Indicators. The SPITT framework includes a comprehensive suite of national indicators to track conditions, anticipate vulnerabilities, and facilitate intervention or adaptation to the extent possible. Observed, modeled, and forecasted seasonal timing metrics can inform a wide spectrum of decisions on federal, state, and private lands in the U.S., and will be pivotal for international mitigation and adaptation efforts. Humans use calendars both to understand the natural world and to plan their lives. Although the seasons are familiar concepts, we lack a comprehensive understanding of how variability arises in the timing of seasonal transitions in the atmosphere, and how variability and change translate and propagate through hydrological, ecological and human systems. For example, the contributions of greenhouse warming and natural variability to secular trends in seasonal timing are difficult to disentangle, including earlier spring transitions from winter (strong westerlies) to summer (weak easterlies) patterns of atmospheric circulation; shifts in the annual phasing of daily temperature means and extremes; advanced timing of snow and ice melt and soil thaw at higher latitudes and elevations; and earlier start and longer duration of the growing and fire seasons. The SPITT framework aims to relate spatiotemporal variability in surface climate to (1) large-scale modes of natural climate variability and greenhouse gas-driven climatic change, and (2) spatiotemporal variability in hydrological, ecological and human responses and impacts. The hierarchical framework relies on ground and satellite observations.

  12. Assessing Effects of Joining Common Currency Area with Large-Scale DSGE model: A Case of Poland

    OpenAIRE

    Maciej Bukowski; Sebastian Dyrda; Paweł Kowal

    2008-01-01

    In this paper we present a large-scale dynamic stochastic general equilibrium model in order to analyze and simulate the effects of Euro introduction in Poland. The presented framework is based on a two-country open economy model, where the foreign economy acts as the Eurozone and the home economy as a candidate country. We have implemented various types of structural frictions in the open economy block that generate empirically observable deviations from the purchasing power parity rule. We consider such mechanisms as a d...

  13. Process evaluation and assessment of use of a large scale water filter and cookstove program in Rwanda

    Directory of Open Access Journals (Sweden)

    Christina K. Barstow

    2016-07-01

    financed, public health intervention can achieve high levels of initial adoption and usage of household level water filtration and improved cookstoves at a large scale.

  14. Using Learning and Motivation Theories to Coherently Link Formative Assessment, Grading Practices, and Large-Scale Assessment

    Science.gov (United States)

    Shepard, L. A.; Penuel, W. R.; Pellegrino, J. W.

    2018-01-01

    To support equitable and ambitious teaching practices, classroom assessment design must be grounded in a research-based theory of learning. Compared to other theories, sociocultural theory offers a more powerful, integrative account of how motivational aspects of learning--such as self-regulation, self-efficacy, sense of belonging, and…

  15. Understanding Business Interests in International Large-Scale Student Assessments: A Media Analysis of "The Economist," "Financial Times," and "Wall Street Journal"

    Science.gov (United States)

    Steiner-Khamsi, Gita; Appleton, Margaret; Vellani, Shezleen

    2018-01-01

    The media analysis is situated in the larger body of studies that explore the varied reasons why different policy actors advocate for international large-scale student assessments (ILSAs) and adds to the research on the fast advance of the global education industry. The analysis of "The Economist," "Financial Times," and…

  16. On Matrix Sampling and Imputation of Context Questionnaires with Implications for the Generation of Plausible Values in Large-Scale Assessments

    Science.gov (United States)

    Kaplan, David; Su, Dan

    2016-01-01

    This article presents findings on the consequences of matrix sampling of context questionnaires for the generation of plausible values in large-scale assessments. Three studies are conducted. Study 1 uses data from PISA 2012 to examine several different forms of missing data imputation within the chained equations framework: predictive mean…

  17. Large-Scale Assessment of Change in Student Achievement: Dutch Primary School Students' Results on Written Division in 1997 and 2004 as an Example

    Science.gov (United States)

    van den Heuvel-Panhuizen, Marja; Robitzsch, Alexander; Treffers, Adri; Koller, Olaf

    2009-01-01

    This article discusses large-scale assessment of change in student achievement and takes the study by Hickendorff, Heiser, Van Putten, and Verhelst (2009) as an example. This study compared the achievement of students in the Netherlands in 1997 and 2004 on written division problems. Based on this comparison, they claim that there is a performance…

  18. Comparing the Effectiveness of Self-Paced and Collaborative Frame-of-Reference Training on Rater Accuracy in a Large-Scale Writing Assessment

    Science.gov (United States)

    Raczynski, Kevin R.; Cohen, Allan S.; Engelhard, George, Jr.; Lu, Zhenqiu

    2015-01-01

    There is a large body of research on the effectiveness of rater training methods in the industrial and organizational psychology literature. Less has been reported in the measurement literature on large-scale writing assessments. This study compared the effectiveness of two widely used rater training methods--self-paced and collaborative…

  19. Revising the potential of large-scale Jatropha oil production in Tanzania: An economic land evaluation assessment

    International Nuclear Information System (INIS)

    Segerstedt, Anna; Bobert, Jans

    2013-01-01

    Following up on the rather sobering results of the biofuels boom in Tanzania, we analyze the preconditions that would make large-scale oil production from the feedstock Jatropha curcas viable. We do this by employing an economic land evaluation approach: first, we estimate the physical land suitability and the inputs necessary to reach certain yield levels. Subsequently, we estimate costs and benefits for different input-output levels. Finally, to incorporate the increased awareness of sustainability in the export sector, we also introduce certification criteria. Using data from an experimental farm in Kilosa, we find that high yields are crucial for economic feasibility and that they can only be obtained on good soils at high input rates. Costs of compliance with certification criteria depend on site-specific characteristics such as land suitability and precipitation. In general, both domestic production and (certified) exports are too expensive to compete with conventional diesel/rapeseed oil from the EU. Even though the crop may have potential for large-scale production as a niche product, there is still a lot of risk involved and more experimental research is needed. - Highlights: ► We use an economic land evaluation analysis to reassess the potential of large-scale Jatropha oil. ► High yields are possible only at high input rates and for good soil qualities. ► Production costs are still too high to break even on the domestic and export market. ► More research is needed to stabilize yields and improve the oil content. ► Focus should be on broadening our knowledge base rather than promoting new Jatropha investments

  20. Metric matters : the performance and organisation of volumetric water control in large-scale irrigation in the North Coast of Peru

    OpenAIRE

    Vos, J.M.C.

    2002-01-01

    This thesis describes the organisation and performance of two large-scale irrigation systems in the North Coast of Peru. Good water management is important in this area because water is scarce and irrigated agriculture provides a livelihood to many small and middle-sized farmers. Water in the coast of Peru is considered to be badly managed; however, this study shows that performance is better than critics assume. Apart from the relevance in the local water management discussion,...

  1. Green Routing Fuel Saving Opportunity Assessment: A Case Study on California Large-Scale Real-World Travel Data

    Energy Technology Data Exchange (ETDEWEB)

    Zhu, Lei [National Renewable Energy Laboratory (NREL), Golden, CO (United States); Holden, Jacob [National Renewable Energy Laboratory (NREL), Golden, CO (United States); Gonder, Jeffrey D [National Renewable Energy Laboratory (NREL), Golden, CO (United States); Wood, Eric W [National Renewable Energy Laboratory (NREL), Golden, CO (United States)

    2017-07-31

    New technologies, such as connected and automated vehicles, have attracted growing research attention aimed at improving the energy efficiency and environmental impact of current transportation systems. The green routing strategy instructs a vehicle to select the most fuel-efficient route before it departs, offering the transportation system a fuel-saving opportunity by identifying the greenest route. This paper introduces an evaluation framework for estimating the benefits of green routing based on large-scale, real-world travel data. The framework can quantify fuel savings by estimating the fuel consumption of actual routes and comparing it to that of routes produced by navigation systems. A route-based fuel consumption estimation model, considering road traffic conditions, functional class, and road grade, is proposed and used in the framework. An experiment using a large-scale data set from the California Household Travel Survey global positioning system trajectory database indicates that 31% of actual routes have fuel-savings potential, with a cumulative estimated fuel savings of 12%.
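
    To make the framework's savings arithmetic concrete, here is a minimal sketch of how per-route fuel estimates could be aggregated into the two reported statistics: the share of trips with savings potential and the cumulative savings fraction. The fuel model coefficients and route attributes below are invented placeholders, not the paper's calibrated model.

        from dataclasses import dataclass

        @dataclass
        class Route:
            distance_km: float
            mean_speed_kmh: float
            grade_pct: float  # average road grade, percent

        def fuel_litres(r: Route) -> float:
            # Toy route-based fuel estimate; the paper's model also
            # accounts for traffic conditions and functional class.
            base = 0.08                                    # L/km baseline
            speed = 0.0005 * abs(r.mean_speed_kmh - 70.0)  # off-optimum penalty
            grade = 0.004 * max(r.grade_pct, 0.0)          # uphill penalty
            return r.distance_km * (base + speed + grade)

        def savings_stats(actual, greenest):
            # Fraction of trips where the greenest route beats the
            # actual route, and the fleet-level cumulative savings.
            fa = [fuel_litres(a) for a in actual]
            fg = [fuel_litres(g) for g in greenest]
            share = sum(a > g for a, g in zip(fa, fg)) / len(fa)
            cumulative = 1.0 - sum(min(a, g) for a, g in zip(fa, fg)) / sum(fa)
            return share, cumulative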

  2. Green Routing Fuel Saving Opportunity Assessment: A Case Study on California Large-Scale Real-World Travel Data: Preprint

    Energy Technology Data Exchange (ETDEWEB)

    Zhu, Lei; Holden, Jacob; Gonder, Jeff; Wood, Eric

    2017-07-13

    New technologies, such as connected and automated vehicles, have attracted growing research attention aimed at improving the energy efficiency and environmental impact of current transportation systems. The green routing strategy instructs a vehicle to select the most fuel-efficient route before it departs, offering the transportation system a fuel-saving opportunity by identifying the greenest route. This paper introduces an evaluation framework for estimating the benefits of green routing based on large-scale, real-world travel data. The framework can quantify fuel savings by estimating the fuel consumption of actual routes and comparing it to that of routes produced by navigation systems. A route-based fuel consumption estimation model, considering road traffic conditions, functional class, and road grade, is proposed and used in the framework. An experiment using a large-scale data set from the California Household Travel Survey global positioning system trajectory database indicates that 31% of actual routes have fuel-savings potential, with a cumulative estimated fuel savings of 12%.

  3. Large scale reflood test

    International Nuclear Information System (INIS)

    Hirano, Kemmei; Murao, Yoshio

    1980-01-01

    The large-scale reflood test, aimed at ensuring the safety of light water reactors, was started in fiscal 1976 under the special account act for power source development promotion measures, by entrustment from the Science and Technology Agency. Thereafter, to establish the safety of PWRs in loss-of-coolant accidents through joint international efforts, the Japan-West Germany-U.S. research cooperation program was started in April 1980, and the large-scale reflood test is now included in this program. It consists of two tests, using a cylindrical core testing apparatus for examining the overall system effect and a plate core testing apparatus for testing individual effects. Each apparatus is composed of mock-ups of the pressure vessel, primary loop, containment vessel and ECCS. The testing method, the test results and the research cooperation program are described. (J.P.N.)

  4. Remote Sensing Contributions to Prediction and Risk Assessment of Natural Disasters Caused by Large Scale Rift Valley Fever Outbreaks

    Science.gov (United States)

    Anyamba, Assaf; Linthicum, Kenneth J.; Small, Jennifer; Britch, S. C.; Tucker, C. J.

    2012-01-01

    Remotely sensed vegetation measurements for the last 30 years combined with other climate data sets such as rainfall and sea surface temperatures have come to play an important role in the study of the ecology of arthropod-borne diseases. We show that epidemics and epizootics of previously unpredictable Rift Valley fever are directly influenced by large scale flooding associated with the El Niño/Southern Oscillation. This flooding affects the ecology of disease transmitting arthropod vectors through vegetation development and other bioclimatic factors. This information is now utilized to monitor, model, and map areas of potential Rift Valley fever outbreaks and is used as an early warning system for risk reduction of outbreaks to human and animal health, trade, and associated economic impacts. The continuation of such satellite measurements is critical to anticipating, preventing, and managing disease epidemics and epizootics and other climate-related disasters.

  5. Performance of the IAEA transport regulations in controlling doses and risks from a large-scale radioactive waste transport system

    International Nuclear Information System (INIS)

    Hutchinson, D.; Miles, R.; White, I.

    2004-01-01

    The role of United Kingdom Nirex Limited is to provide the UK with safe, environmentally sound and publicly acceptable options for the long-term management of radioactive materials generated by the UK's commercial, medical, research and defence activities. An important part of this role is to set standards and specifications for waste packaging. Waste producers in the UK are currently developing processes for packaging many different types of intermediate-level waste (ILW), and also those forms of low-level waste that will require similar management to ILW. When packaging processes are at the proposal stage, the waste producers consult Nirex about the suitability of the resulting packages for all future aspects of waste management. The response that Nirex provides is based on detailed assessments of the proposed packages, their compliance with Nirex standards and specifications, and their predicted performance through the successive phases of waste management. One of those phases is transport through the public domain. This paper draws on experience gained from more than 200 separate transport safety assessments, which have cumulatively covered a wide range of waste types, waste packages and transport packages.

  6. Free Global Dsm Assessment on Large Scale Areas Exploiting the Potentialities of the Innovative Google Earth Engine Platform

    Science.gov (United States)

    Nascetti, A.; Di Rita, M.; Ravanelli, R.; Amicuzi, M.; Esposito, S.; Crespi, M.

    2017-05-01

    The high-performance cloud-computing platform Google Earth Engine has been developed for global-scale analysis based on Earth observation data. In particular, in this work, the geometric accuracy of the two most used nearly-global free DSMs (SRTM and ASTER) has been evaluated on the territories of four American States (Colorado, Michigan, Nevada, Utah) and one Italian Region (Trentino Alto-Adige, Northern Italy) exploiting the potentiality of this platform. These are large areas characterized by different terrain morphology, land covers and slopes. The assessment has been performed using two different reference DSMs: the USGS National Elevation Dataset (NED) and a LiDAR acquisition. The DSMs' accuracy has been evaluated through computation of standard statistical parameters, both at global scale (considering the whole State/Region) and as a function of terrain morphology using several slope classes. The geometric accuracy in terms of standard deviation and NMAD for SRTM ranges from 2-3 meters in the first slope class to about 45 meters in the last one, whereas for ASTER the values range from 5-6 to 30 meters. In general, the performed analysis shows a better accuracy for SRTM in the flat areas, whereas the ASTER GDEM is more reliable in the steep areas, where the slopes increase. These preliminary results highlight the GEE potentialities for performing DSM assessment on a global scale.
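
    For readers unfamiliar with the NMAD statistic used above, the sketch below shows one plausible way to compute the standard deviation and NMAD of DSM-minus-reference elevation differences per slope class. The slope bins are illustrative assumptions; the study's slope classes may differ.

        import numpy as np

        def nmad(dh):
            # Normalized median absolute deviation: a robust spread
            # estimate (~standard deviation for Gaussian errors).
            dh = np.asarray(dh, dtype=float)
            return 1.4826 * np.median(np.abs(dh - np.median(dh)))

        def accuracy_by_slope(dsm, ref, slope_deg,
                              bins=(0, 5, 10, 20, 35, 90)):
            # Std and NMAD of elevation differences per slope class.
            dh = np.asarray(dsm, dtype=float) - np.asarray(ref, dtype=float)
            slope = np.asarray(slope_deg, dtype=float)
            out = {}
            for lo, hi in zip(bins[:-1], bins[1:]):
                sel = (slope >= lo) & (slope < hi)
                if np.any(sel):
                    out[f"{lo}-{hi} deg"] = (float(dh[sel].std()),
                                             float(nmad(dh[sel])))
            return out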

  7. FREE GLOBAL DSM ASSESSMENT ON LARGE SCALE AREAS EXPLOITING THE POTENTIALITIES OF THE INNOVATIVE GOOGLE EARTH ENGINE PLATFORM

    Directory of Open Access Journals (Sweden)

    A. Nascetti

    2017-05-01

    Full Text Available The high-performance cloud-computing platform Google Earth Engine has been developed for global-scale analysis based on Earth observation data. In particular, in this work, the geometric accuracy of the two most used nearly-global free DSMs (SRTM and ASTER) has been evaluated on the territories of four American States (Colorado, Michigan, Nevada, Utah) and one Italian Region (Trentino Alto-Adige, Northern Italy) exploiting the potentiality of this platform. These are large areas characterized by different terrain morphology, land covers and slopes. The assessment has been performed using two different reference DSMs: the USGS National Elevation Dataset (NED) and a LiDAR acquisition. The DSMs' accuracy has been evaluated through computation of standard statistical parameters, both at global scale (considering the whole State/Region) and as a function of terrain morphology using several slope classes. The geometric accuracy in terms of standard deviation and NMAD for SRTM ranges from 2-3 meters in the first slope class to about 45 meters in the last one, whereas for ASTER the values range from 5-6 to 30 meters. In general, the performed analysis shows a better accuracy for SRTM in the flat areas, whereas the ASTER GDEM is more reliable in the steep areas, where the slopes increase. These preliminary results highlight the GEE potentialities for performing DSM assessment on a global scale.

  8. Large-scale risk assessment of polycyclic aromatic hydrocarbons in shoreline sediments from Saudi Arabia: Environmental legacy after twelve years of the Gulf war oil spill

    Energy Technology Data Exchange (ETDEWEB)

    Bejarano, Adriana C., E-mail: ABejarano@researchplanning.co [Research Planning Inc., 1121 Park St., Columbia, SC 29201 (United States); Michel, Jacqueline [Research Planning Inc., 1121 Park St., Columbia, SC 29201 (United States)

    2010-05-15

    A large-scale assessment of polycyclic aromatic hydrocarbons (PAHs) from the 1991 Gulf War oil spill was performed for 2002-2003 sediment samples (n = 1679) collected from habitats along the shoreline of Saudi Arabia. Benthic sediment toxicity was characterized using the Equilibrium Partitioning Sediment Benchmark Toxic Unit approach for 43 PAHs (ESBTU(FCV,43)). Samples were assigned to risk categories according to ESBTU(FCV,43) values: no-risk (≤1), low (>1-≤2), low-medium (>2-≤3), medium (>3-≤5) and high-risk (>5). Sixty-seven percent of samples had ESBTU(FCV,43) > 1, indicating potential adverse ecological effects. Sediments from the 0-30 cm layer from tidal flats, and the >30-<60 cm layer from heavily oiled halophytes and mangroves, had a high frequency of high-risk samples. No-risk samples were characterized by chrysene enrichment and depletion of lighter molecular weight PAHs, while high-risk samples showed little oil weathering and PAH patterns similar to 1993 samples. Sediments north of Safaniya were not likely to pose adverse ecological effects, in contrast to sediments south of Tanaqib. Landscape and geomorphology have played a role in the distribution and persistence in sediments of oil from the Gulf War. - Risk assessment of PAHs in shoreline sediments 12 years after the Gulf War oil spill.
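
    The toxic-unit classification described here is simple to express in code. Below is a minimal sketch that sums concentration-to-benchmark ratios over the measured PAHs and applies the risk thresholds given in the abstract; the example concentrations and benchmark values are placeholders, not the study's data.

        def esbtu(conc_oc, fcv_oc):
            # Sum of toxic units: each PAH's organic-carbon-normalized
            # concentration divided by its final chronic value benchmark.
            return sum(c / f for c, f in zip(conc_oc, fcv_oc))

        def risk_category(tu):
            # Thresholds as reported in the abstract.
            if tu <= 1: return "no-risk"
            if tu <= 2: return "low"
            if tu <= 3: return "low-medium"
            if tu <= 5: return "medium"
            return "high-risk"

        # Hypothetical 3-PAH example (the real assessment uses 43 PAHs)
        print(risk_category(esbtu([12.0, 4.0, 1.5], [9.0, 8.0, 6.0])))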

  9. Large-scale risk assessment of polycyclic aromatic hydrocarbons in shoreline sediments from Saudi Arabia: Environmental legacy after twelve years of the Gulf war oil spill

    International Nuclear Information System (INIS)

    Bejarano, Adriana C.; Michel, Jacqueline

    2010-01-01

    A large-scale assessment of polycyclic aromatic hydrocarbons (PAHs) from the 1991 Gulf War oil spill was performed for 2002-2003 sediment samples (n = 1679) collected from habitats along the shoreline of Saudi Arabia. Benthic sediment toxicity was characterized using the Equilibrium Partitioning Sediment Benchmark Toxic Unit approach for 43 PAHs (ESBTU(FCV,43)). Samples were assigned to risk categories according to ESBTU(FCV,43) values: no-risk (≤1), low (>1-≤2), low-medium (>2-≤3), medium (>3-≤5) and high-risk (>5). Sixty-seven percent of samples had ESBTU(FCV,43) > 1, indicating potential adverse ecological effects. Sediments from the 0-30 cm layer from tidal flats, and the >30-<60 cm layer from heavily oiled halophytes and mangroves, had a high frequency of high-risk samples. No-risk samples were characterized by chrysene enrichment and depletion of lighter molecular weight PAHs, while high-risk samples showed little oil weathering and PAH patterns similar to 1993 samples. Sediments north of Safaniya were not likely to pose adverse ecological effects, in contrast to sediments south of Tanaqib. Landscape and geomorphology have played a role in the distribution and persistence in sediments of oil from the Gulf War. - Risk assessment of PAHs in shoreline sediments 12 years after the Gulf War oil spill.

  10. Study on large scale knowledge base with real time operation for autonomous nuclear power plant. 1. Basic concept and expecting performance

    International Nuclear Information System (INIS)

    Ozaki, Yoshihiko; Suda, Kazunori; Yoshikawa, Shinji; Ozawa, Kenji

    1996-04-01

    Since it is desired to enhance availability and safety of nuclear power plants operation and maintenance by removing human factor, there are many researches and developments for intelligent operation or diagnosis using artificial intelligence (AI) technique. We have been developing an autonomous operation and maintenance system for nuclear power plants by substituting AI's and intelligent robots. It is indispensable to use various and large scale knowledge relative to plant design, operation, and maintenance, that is, whole life cycle data of the plant for the autonomous nuclear power plant. These knowledge must be given to AI system or intelligent robots adequately and opportunely. Moreover, it is necessary to insure real time operation using the large scale knowledge base for plant control and diagnosis performance. We have been studying on the large scale and real time knowledge base system for autonomous plant. In the report, we would like to present the basic concept and expecting performance of the knowledge base for autonomous plant, especially, autonomous control and diagnosis system. (author)

  11. Large-scale assessment of myxomatosis prevalence in European wild rabbits (Oryctolagus cuniculus) 60 years after first outbreak in Spain.

    Science.gov (United States)

    Villafuerte, Rafael; Castro, Francisca; Ramírez, Esther; Cotilla, Irene; Parra, Francisco; Delibes-Mateos, Miguel; Recuerda, Pilar; Rouco, Carlos

    2017-10-01

    Myxomatosis is a viral disease that affects European rabbits (Oryctolagus cuniculus) worldwide. In Spain, populations of wild rabbits drastically decreased in the 1950s after the first outbreak of myxomatosis. Since that first appearance, myxomatosis has recurred as an annual epizootic in Spain, with periodic outbreaks predominantly in summer and autumn. Taking into account rabbit population structure, abundance, and genetic lineage, this paper attempts a large-scale characterization of myxomatosis seroprevalence based on the immune status of 29 rabbit populations distributed throughout Spain, where O. cuniculus cuniculus and O. c. algirus, the two known rabbit subspecies, naturally occur. A total of 654 samples were collected between 2003 and 2009, and seroprevalence of antibodies against Myxoma virus (MYXV) was determined. Overall, our results revealed that 53% of the rabbit samples were positive for antibodies against MYXV. Newborn and juvenile rabbits were the animals most susceptible to the virus, with 19% and 16% seropositivity for newborns and juveniles, respectively, while adult rabbits were the most protected, with 65% seropositive samples. This suggests that prevalence is negatively related to the proportion of newborn and juvenile rabbits in a population. Our results also showed that seroprevalence against MYXV tended to be higher in high-abundance populations. In contrast, no differences in seroprevalence were detected between rabbit subspecies. This study confirms that, more than 60 years after the first outbreak, myxomatosis is an endemic disease in Spain. Based on the results, the establishment of a myxomatosis surveillance protocol is proposed. Copyright © 2017 Elsevier Ltd. All rights reserved.

  12. Large-scale risk assessment of polycyclic aromatic hydrocarbons in shoreline sediments from Saudi Arabia: environmental legacy after twelve years of the Gulf war oil spill.

    Science.gov (United States)

    Bejarano, Adriana C; Michel, Jacqueline

    2010-05-01

    A large-scale assessment of polycyclic aromatic hydrocarbons (PAHs) from the 1991 Gulf War oil spill was performed for 2002-2003 sediment samples (n = 1679) collected from habitats along the shoreline of Saudi Arabia. Benthic sediment toxicity was characterized using the Equilibrium Partitioning Sediment Benchmark Toxic Unit approach for 43 PAHs (ESBTU(FCV,43)). Samples were assigned to risk categories according to ESBTU(FCV,43) values: no-risk (≤1), low (>1-≤2), low-medium (>2-≤3), medium (>3-≤5) and high-risk (>5). Sixty-seven percent of samples had ESBTU(FCV,43) > 1, indicating potential adverse ecological effects. Sediments from the 0-30 cm layer from tidal flats, and the >30-<60 cm layer from heavily oiled halophytes and mangroves, had a high frequency of high-risk samples. No-risk samples were characterized by chrysene enrichment and depletion of lighter molecular weight PAHs, while high-risk samples showed little oil weathering and PAH patterns similar to 1993 samples. Sediments north of Safaniya were not likely to pose adverse ecological effects, in contrast to sediments south of Tanaqib. Landscape and geomorphology have played a role in the distribution and persistence in sediments of oil from the Gulf War. Copyright 2009 Elsevier Ltd. All rights reserved.

  13. Large-scale solar heat

    Energy Technology Data Exchange (ETDEWEB)

    Tolonen, J.; Konttinen, P.; Lund, P. [Helsinki Univ. of Technology, Otaniemi (Finland). Dept. of Engineering Physics and Mathematics

    1998-12-31

    In this project a large domestic solar heating system was built and a solar district heating system was modelled and simulated. The objectives were to improve the performance and reduce the costs of a large-scale solar heating system. As a result of the project, the benefit/cost ratio can be increased by 40% through dimensioning and optimising the system at the design stage. (orig.)

  14. Environmental impact assessment and environmental audit in large-scale public infrastructure construction: the case of the Qinghai-Tibet Railway.

    Science.gov (United States)

    He, Guizhen; Zhang, Lei; Lu, Yonglong

    2009-09-01

    Large-scale public infrastructure projects have featured in China's modernization course since the early 1980s. During the early stages of China's rapid economic development, public attention focused on the economic and social impact of high-profile construction projects. In recent years, however, we have seen a shift in public concern toward the environmental and ecological effects of such projects, and today governments are required to provide valid environmental impact assessments prior to allowing large-scale construction. The official requirement for the monitoring of environmental conditions has led to an increased number of debates in recent years regarding the effectiveness of Environmental Impact Assessments (EIAs) and Governmental Environmental Audits (GEAs) as environmental safeguards in instances of large-scale construction. Although EIA and GEA are conducted by different institutions and have different goals and enforcement potential, these two practices can be closely related in terms of methodology. This article cites the construction of the Qinghai-Tibet Railway as an instance in which EIA and GEA offer complementary approaches to environmental impact management. This study concludes that the GEA approach can serve as an effective follow-up to the EIA and establishes that the EIA lays a base for conducting future GEAs. The relationship that emerges through a study of the Railway's construction calls for more deliberate institutional arrangements and cooperation if the two practices are to be used in concert to optimal effect.

  15. Coding task performance in early adolescence: A large-scale controlled study into boy-girl differences

    Directory of Open Access Journals (Sweden)

    Sanne Dekker

    2013-08-01

    Full Text Available This study examined differences between boys and girls regarding efficiency of information processing in early adolescence. 306 healthy adolescents (50.3% boys) in grades 7 and 9 (aged 13 and 15, respectively) performed a coding task based on over-learned symbols. An age effect was revealed: subjects in grade 9 performed better than subjects in grade 7. Main effects for sex were found in favor of girls. The 25% best-performing students comprised twice as many girls as boys; the opposite pattern was found for the worst-performing 25%. In addition, a main effect was found for educational track in favor of the highest track. No interaction effects were found. School grades did not explain additional variance in LDST performance, indicating that cognitive performance is relatively independent of school performance. Student characteristics such as age, sex and education level were more important for efficiency of information processing than school performance. The findings imply that after age 13, efficiency of information processing is still developing and that girls outperform boys in this respect. The findings provide new information on the mechanisms underlying boy-girl differences in scholastic performance.

  16. Large-scale assessment of commensalistic–mutualistic associations between African birds and herbivorous mammals using internet photos

    Science.gov (United States)

    Hadrava, Jiří; Albrecht, Tomáš; Tryjanowski, Piotr

    2018-01-01

    Birds sitting or feeding on live large African herbivorous mammals are a visible, yet quite neglected, type of commensalistic–mutualistic association. Here, we investigate general patterns in such relationships at large spatial and taxonomic scales. To obtain large-scale data, an extensive internet-based search for photos was carried out on Google Images. To characterize patterns of the structural organization of commensalistic–mutualistic associations between African birds and herbivorous mammals, we used a network analysis approach. We then employed phylogenetically-informed comparative analysis to explore whether features of bird visitation of mammals, i.e., their mean number, mass and species richness per mammal species, are shaped by a combination of host mammal (body mass and herd size) and environmental (habitat openness) characteristics. We found that the association web structure was only weakly nested for commensalistic as well as for mutualistic birds (oxpeckers Buphagus spp.) and African mammals. Moreover, except for oxpeckers, nestedness did not differ significantly from a null model indicating that birds do not prefer mammal species which are visited by a large number of bird species. In oxpeckers, however, a nested structure suggests a non-random assignment of birds to their mammal hosts. We also identified some new or rare associations between birds and mammals, but we failed to find several previously described associations. Furthermore, we found that mammal body mass positively influenced the number and mass of birds observed sitting on them in the full set of species (i.e., taking oxpeckers together with other bird species). We also found a positive correlation between mammal body mass and mass of non-oxpecker species as well as oxpeckers. Mammal herd size was associated with a higher mass of birds in the full set of species as well as in non-oxpecker species, and mammal species living in larger herds also attracted more bird species in the

  17. Large-Scale Evaluation of Quality of Care in 6 Countries of Eastern Europe and Central Asia Using Clinical Performance and Value Vignettes.

    Science.gov (United States)

    Peabody, John W; DeMaria, Lisa; Smith, Owen; Hoth, Angela; Dragoti, Edmond; Luck, Jeff

    2017-09-27

    challenged by poor performance as measured by clinical care vignettes, but there is potential for provision of high-quality care by a sizable proportion of providers. Large-scale assessments of quality of care have been hampered by the lack of effective measurement tools that provide generalizable and reliable results across diverse economic, cultural, and social settings. The feasibility of quality measurement using CPV vignettes in these 6 countries and the ability to combine results with individual feedback could significantly enhance strategies to improve quality of care, and ultimately population health. © Peabody et al.

  18. Self-beliefs mediate mathematical performance between primary and lower secondary school: A large scale longitudinal cohort study

    NARCIS (Netherlands)

    Reed, Helen; Kirschner, Paul A.; Jolles, Jelle

    2016-01-01

    It is often argued that enhancement of self-beliefs should be one of the key goals of education. However, very little is known about the relation between self-beliefs and performance when students move from primary to secondary school in highly differentiated educational systems with early tracking.

  19. The LHC Cryomagnet Supports in Glass-Fiber Reinforced Epoxy A Large Scale Industrial Production with High Reproducibility in Performance

    CERN Document Server

    Poncet, A; Trigo, J; Parma, V

    2008-01-01

    The approximately 1700 LHC main ring superconducting magnets are supported within their cryostats on 4700 low heat in-leak column-type supports. The supports were designed to ensure a precise and stable positioning of the heavy dipole and quadrupole magnets while keeping thermal conduction heat loads within budget. A trade-off between mechanical and thermal properties, as well as cost considerations, led to the choice of glass fibre reinforced epoxy (GFRE). Resin Transfer Moulding (RTM), featuring a high level of automation and control, was the manufacturing process retained to ensure the reproducibility of the performance of the supports throughout the large production run. The Spanish aerospace company EADS-CASA Espacio developed the specific RTM process and produced the total quantity of supports between 2001 and 2004. This paper describes the development and the production of the supports, and presents the production experience and the achieved performance.

  20. THE LHC CRYOMAGNET SUPPORTS IN GLASS-FIBER REINFORCED EPOXY: A LARGE SCALE INDUSTRIAL PRODUCTION WITH HIGH REPRODUCIBILITY IN PERFORMANCE

    International Nuclear Information System (INIS)

    Poncet, A.; Struik, M.; Parma, V.; Trigo, J.

    2008-01-01

    The approximately 1700 LHC main ring superconducting magnets are supported within their cryostats on 4700 low heat in-leak column-type supports. The supports were designed to ensure a precise and stable positioning of the heavy dipole and quadrupole magnets while keeping thermal conduction heat loads within budget. A trade-off between mechanical and thermal properties, as well as cost considerations, led to the choice of glass fibre reinforced epoxy (GFRE). Resin Transfer Moulding (RTM), featuring a high level of automation and control, was the manufacturing process retained to ensure the reproducibility of the performance of the supports throughout the large production run. The Spanish aerospace company EADS-CASA Espacio developed the specific RTM process and produced the total quantity of supports between 2001 and 2004. This paper describes the development and the production of the supports, and presents the production experience and the achieved performance.

  1. Assessing the impact of large-scale computing on the size and complexity of first-principles electromagnetic models

    International Nuclear Information System (INIS)

    Miller, E.K.

    1990-01-01

    There is a growing need to determine the electromagnetic performance of increasingly complex systems at ever higher frequencies. The ideal approach would be some appropriate combination of measurement, analysis, and computation so that system design and assessment can be achieved to a needed degree of accuracy at some acceptable cost. Both measurement and computation benefit from the continuing growth in computer power that, since the early 1950s, has increased by a factor of more than a million in speed and storage. For example, a CRAY2 has an effective throughput (not the clock rate) of about 10^11 floating-point operations (FLOPs) per hour compared with the approximately 10^5 provided by the UNIVAC-1. The purpose of this discussion is to illustrate the computational complexity of modeling large (in wavelengths) electromagnetic problems. In particular the author makes the point that simply relying on faster computers for increasing the size and complexity of problems that can be modeled is less effective than might be anticipated from this raw increase in computer throughput. He suggests that rather than depending on faster computers alone, various analytical and numerical alternatives need development for reducing the overall FLOP count required to acquire the information desired. One approach is to decrease the operation count of the basic model computation itself, by reducing the order of the frequency dependence of the various numerical operations or their multiplying coefficients. Another is to decrease the number of model evaluations that are needed, an example being the number of frequency samples required to define a wideband response, by using an auxiliary model of the expected behavior. 11 refs., 5 figs., 2 tabs

  2. High performance architecture design for large scale fibre-optic sensor arrays using distributed EDFAs and hybrid TDM/DWDM

    Science.gov (United States)

    Liao, Yi; Austin, Ed; Nash, Philip J.; Kingsley, Stuart A.; Richardson, David J.

    2013-09-01

    A distributed amplified dense wavelength division multiplexing (DWDM) array architecture is presented for interferometric fibre-optic sensor array systems. This architecture employs a distributed erbium-doped fibre amplifier (EDFA) scheme to decrease the array insertion loss, and employs time division multiplexing (TDM) at each wavelength to increase the number of sensors that can be supported. The first experimental demonstration of this system is reported, including results that show the potential for multiplexing and interrogating up to 4096 sensors over a single telemetry fibre pair with good system performance. The number can be increased to 8192 by using dual pump sources.
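
    The sensor counts follow directly from the multiplexing arithmetic. The abstract gives only the totals (4096 per telemetry fibre pair, 8192 with dual pump sources), so the 64 x 64 wavelength/time-slot split below is an assumed illustration, not the reported channel plan.

        # Assumed channel plan: 64 DWDM wavelengths x 64 TDM slots each
        # (illustrative split; only the products match the abstract).
        n_dwdm = 64
        n_tdm = 64
        sensors_per_pair = n_dwdm * n_tdm            # 4096 sensors
        sensors_dual_pump = 2 * sensors_per_pair     # 8192 sensors
        print(sensors_per_pair, sensors_dual_pump)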

  3. Dynamic performance investigation of once-through-type steam generator for NPP using a large-scale model

    International Nuclear Information System (INIS)

    Kats, F.M.; Ostrovskij, L.A.; Ehskin, N.B.

    1985-01-01

    An experimental bench is described, together with the results of an investigation of the dynamic mass- and heat-transfer performance of a once-through steam generator for an NPP with weak superheat. The coolant in both the primary and secondary circuits is water. Under the investigated conditions, provision was made for changing the primary and secondary circuit temperatures, as well as the primary circuit flow rate and the secondary circuit pressure. Transients for different operating conditions are considered. The possibility of constructing an automatic control system for the steam generator is discussed.

  4. History of large scale maintenance operations performed by EDF on the steam generators of its nuclear power plants

    International Nuclear Information System (INIS)

    2010-01-01

    After a first part that describes the role of steam generators in nuclear reactors, highlights their importance for reactor safety, and briefly presents their maintenance, this report describes the new types of degradation that have been observed and how they are being addressed. It describes the clogging of cracked tubes, comments on its impact on safety, describes the available inspection means, and discusses the use of chemical cleaning and the ongoing work on this topic. It discusses the risk of fatigue cracking of tubes in abnormal support positions. It comments on the holding in position of plugs used during maintenance. It describes and discusses the corrosion phenomena, and the corrective actions performed and requested.

  5. Overview of the OGAP Formative Assessment Project and CPRE's Large-Scale Experimental Study of Implementation and Impacts

    Science.gov (United States)

    Supovitz, Jonathan

    2016-01-01

    In this brief abstracted report of a presentation, the author describes an ongoing partnership with the Philadelphia School District (PSD) to implement and research the Ongoing Assessment Project (OGAP). OGAP is a systematic, intentional and iterative formative assessment system grounded in the research on how students learn…

  6. Traditional methods v. new technologies – dilemmas for dietary assessment in large-scale nutrition surveys and studies

    DEFF Research Database (Denmark)

    Amoutzopoulos, B.; Steer, T.; Roberts, C.

    2018-01-01

    The aim of the present paper is to summarise current and future applications of dietary assessment technologies in nutrition surveys in developed countries. It includes the discussion of key points and highlights of subsequent developments from a panel discussion to address strengths and weaknesses of traditional dietary assessment methods (food records, FFQ, 24 h recalls, diet history with interviewer-assisted data collection) v. new technology-based dietary assessment methods (web-based and mobile device applications). The panel discussion ‘Traditional methods v. new technologies: dilemmas for dietary assessment in population surveys’ was held at the 9th International Conference on Diet and Activity Methods (ICDAM9), Brisbane, September 2015. Despite respondent and researcher burden, traditional methods have been most commonly used in nutrition surveys. However, dietary assessment technologies offer…

  7. WImpiBLAST: web interface for mpiBLAST to help biologists perform large-scale annotation using high performance computing.

    Directory of Open Access Journals (Sweden)

    Parichit Sharma

    Full Text Available The function of a newly sequenced gene can be discovered by determining its sequence homology with known proteins. BLAST is the most extensively used sequence analysis program for sequence similarity search in large databases of sequences. With the advent of next generation sequencing technologies it has now become possible to study genes and their expression at a genome-wide scale through RNA-seq and metagenome sequencing experiments. Functional annotation of all the genes is done by sequence similarity search against multiple protein databases. This annotation task is computationally very intensive and can take days to obtain complete results. The program mpiBLAST, an open-source parallelization of BLAST that achieves superlinear speedup, can be used to accelerate large-scale annotation by using supercomputers and high performance computing (HPC) clusters. Although many parallel bioinformatics applications using the Message Passing Interface (MPI) are available in the public domain, researchers are reluctant to use them due to lack of expertise in the Linux command line and relevant programming experience. With these limitations, it becomes difficult for biologists to use mpiBLAST for accelerating annotation. No web interface is available in the open-source domain for mpiBLAST. We have developed WImpiBLAST, a user-friendly open-source web interface for parallel BLAST searches. It is implemented in Struts 1.3 using a Java backbone and runs atop the open-source Apache Tomcat Server. WImpiBLAST supports script creation and job submission features and also provides a robust job management interface for system administrators. It combines script creation and modification features with job monitoring and management through the Torque resource manager on a Linux-based HPC cluster. Use case information highlights the acceleration of annotation analysis achieved by using WImpiBLAST. Here, we describe the WImpiBLAST web interface features and architecture

  8. WImpiBLAST: web interface for mpiBLAST to help biologists perform large-scale annotation using high performance computing.

    Science.gov (United States)

    Sharma, Parichit; Mantri, Shrikant S

    2014-01-01

    The function of a newly sequenced gene can be discovered by determining its sequence homology with known proteins. BLAST is the most extensively used sequence analysis program for sequence similarity search in large databases of sequences. With the advent of next generation sequencing technologies it has now become possible to study genes and their expression at a genome-wide scale through RNA-seq and metagenome sequencing experiments. Functional annotation of all the genes is done by sequence similarity search against multiple protein databases. This annotation task is computationally very intensive and can take days to obtain complete results. The program mpiBLAST, an open-source parallelization of BLAST that achieves superlinear speedup, can be used to accelerate large-scale annotation by using supercomputers and high performance computing (HPC) clusters. Although many parallel bioinformatics applications using the Message Passing Interface (MPI) are available in the public domain, researchers are reluctant to use them due to lack of expertise in the Linux command line and relevant programming experience. With these limitations, it becomes difficult for biologists to use mpiBLAST for accelerating annotation. No web interface is available in the open-source domain for mpiBLAST. We have developed WImpiBLAST, a user-friendly open-source web interface for parallel BLAST searches. It is implemented in Struts 1.3 using a Java backbone and runs atop the open-source Apache Tomcat Server. WImpiBLAST supports script creation and job submission features and also provides a robust job management interface for system administrators. It combines script creation and modification features with job monitoring and management through the Torque resource manager on a Linux-based HPC cluster. Use case information highlights the acceleration of annotation analysis achieved by using WImpiBLAST. Here, we describe the WImpiBLAST web interface features and architecture, explain design
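
    As a rough sketch of what such an interface automates behind the scenes, the snippet below writes a Torque/PBS job script that launches mpiBLAST and submits it with qsub. The queue setup, database name, paths and resource requests are site-specific assumptions, not WImpiBLAST's actual generated script.

        import subprocess
        import textwrap

        def submit_mpiblast(query_fasta, db_name, nodes=2, ppn=8, job="annot"):
            # Compose a minimal Torque job script for a parallel BLAST
            # search (blastall-style flags, which mpiBLAST mirrors).
            np_total = nodes * ppn
            script = textwrap.dedent(f"""\
                #!/bin/bash
                #PBS -N {job}
                #PBS -l nodes={nodes}:ppn={ppn}
                cd $PBS_O_WORKDIR
                mpirun -np {np_total} mpiblast -p blastp -d {db_name} \\
                    -i {query_fasta} -o {job}.out
                """)
            path = f"{job}.pbs"
            with open(path, "w") as fh:
                fh.write(script)
            # Hand the script to the Torque resource manager.
            subprocess.run(["qsub", path], check=True)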

  9. Automatically assessing properties of dynamic cameras for camera selection and rapid deployment of video content analysis tasks in large-scale ad-hoc networks

    Science.gov (United States)

    den Hollander, Richard J. M.; Bouma, Henri; van Rest, Jeroen H. C.; ten Hove, Johan-Martijn; ter Haar, Frank B.; Burghouts, Gertjan J.

    2017-10-01

    Video analytics is essential for managing large quantities of raw data that are produced by video surveillance systems (VSS) for the prevention, repression and investigation of crime and terrorism. Analytics is highly sensitive to changes in the scene and to changes in the optical chain, so a VSS with analytics needs careful configuration and prompt maintenance to avoid false alarms. However, there is a trend from static VSS consisting of fixed CCTV cameras towards more dynamic VSS deployments over public/private multi-organization networks, consisting of a wider variety of visual sensors, including pan-tilt-zoom (PTZ) cameras, body-worn cameras and cameras on moving platforms. This trend will lead to more dynamic scenes and more frequent changes in the optical chain, creating structural problems for analytics. If these problems are not adequately addressed, analytics will not be able to continue to meet end users' developing needs. In this paper, we present a three-part solution for managing the performance of complex analytics deployments. The first part is a register containing metadata describing relevant properties of the optical chain, such as intrinsic and extrinsic calibration, and parameters of the scene such as lighting conditions or measures of scene complexity (e.g. number of people). A second part frequently assesses these parameters in the deployed VSS, stores changes in the register, and signals relevant changes in the setup to the VSS administrator. A third part uses the information in the register to dynamically configure analytics tasks based on VSS operator input. In order to support the feasibility of this solution, we give an overview of related state-of-the-art technologies for autocalibration (self-calibration), scene recognition and lighting estimation in relation to person detection. The presented solution allows for rapid and robust deployment of Video Content Analysis (VCA) tasks in large-scale ad-hoc networks.
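
    A minimal sketch of the first two parts of this solution, with illustrative field names (the paper does not specify a schema): a register entry holding optical-chain and scene metadata, and an update step that signals a relevant change to the administrator.

        from dataclasses import dataclass, field
        import time

        @dataclass
        class CameraRecord:
            camera_id: str
            intrinsics: dict        # e.g. focal length, distortion
            extrinsics: dict        # pose in a common reference frame
            lighting_lux: float     # estimated scene illumination
            people_count: int       # a simple scene-complexity measure
            updated: float = field(default_factory=time.time)

        class Register:
            def __init__(self):
                self.records = {}

            def update(self, rec, notify=print):
                # Store the latest assessment; flag changes that may
                # require reconfiguring the analytics (part two).
                old = self.records.get(rec.camera_id)
                if old and abs(old.lighting_lux - rec.lighting_lux) > 100.0:
                    notify(f"{rec.camera_id}: lighting changed; re-check VCA")
                self.records[rec.camera_id] = rec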

  10. Extrinsic Motivation for Large-Scale Assessments: A Case Study of a Student Achievement Program at One Urban High School

    Science.gov (United States)

    Emmett, Joshua; McGee, Dean

    2013-01-01

    The purpose of this case study was to discover the critical attributes of a student achievement program, known as "Think Gold," implemented at one urban comprehensive high school as part of the improvement process. Student achievement on state assessments improved during the period under study. The study draws upon perspectives on…

  11. Large-scale assessment of flood risk and the effects of mitigation measures along the Elbe River

    NARCIS (Netherlands)

    de Kok, Jean-Luc; Grossmann, M.

    2010-01-01

    The downstream effects of flood risk mitigation measures and the necessity to develop flood risk management strategies that are effective on a basin scale call for a flood risk assessment methodology that can be applied at the scale of a large river. We present an example of a rapid flood risk

  12. 77 FR 14011 - Assessment of Potential Large-Scale Mining on the Bristol Bay Watershed of Alaska: Nomination of...

    Science.gov (United States)

    2012-03-08

    ... affect wildlife and human populations in the region. Additional information describing the assessment...) ecotoxicology, (8) wildlife ecology, and/or (9) indigenous Alaskan cultures. Selection Criteria: Selection...) absence of financial conflicts of interest; (5) no actual conflicts of interest or the appearance of bias...

  13. Large-scale assessment of benthic communities across multiple marine protected areas using an autonomous underwater vehicle.

    Science.gov (United States)

    Ferrari, Renata; Marzinelli, Ezequiel M; Ayroza, Camila Rezende; Jordan, Alan; Figueira, Will F; Byrne, Maria; Malcolm, Hamish A; Williams, Stefan B; Steinberg, Peter D

    2018-01-01

    Marine protected areas (MPAs) are designed to reduce threats to biodiversity and ecosystem functioning from anthropogenic activities. Assessment of MPA effectiveness requires synchronous sampling of protected and non-protected areas at multiple spatial and temporal scales. We used an autonomous underwater vehicle to map benthic communities in replicate 'no-take' and 'general-use' (fishing allowed) zones within three MPAs along 7° of latitude. We recorded 92 taxa and 38 morpho-groups across three large MPAs. We found that important habitat-forming biota (e.g. massive sponges) were more prevalent and abundant in no-take zones, while short ephemeral algae were more abundant in general-use zones, suggesting potential short-term effects of zoning (5-10 years). Yet, short-term effects of zoning were not detected at the community level (community structure or composition), while community structure varied significantly among MPAs. We conclude that by allowing rapid, simultaneous assessments at multiple spatial scales, autonomous underwater vehicles are useful to document changes in marine communities and identify adequate scales to manage them. This study advanced knowledge of marine benthic communities and their conservation in three ways. First, we quantified benthic biodiversity and abundance, generating the first baseline of these benthic communities against which the effectiveness of three large MPAs can be assessed. Second, we identified the taxonomic resolution necessary to assess both short- and long-term effects of MPAs, concluding that coarse taxonomic resolution is sufficient given that analyses of community structure at different taxonomic levels were generally consistent; yet, because the observed differences were taxa-specific and may not have been evident under our broader taxonomic classifications, a classification of mid to high taxonomic resolution may be necessary to determine zoning effects on key taxa. Third, we provide an example of statistical analyses and…

  14. Data and performance profiles applying an adaptive truncation criterion, within linesearch-based truncated Newton methods, in large scale nonconvex optimization

    Directory of Open Access Journals (Sweden)

    Andrea Caliciotti

    2018-04-01

    In this paper, we report data and experiments related to the research article entitled "An adaptive truncation criterion, for linesearch-based truncated Newton methods in large scale nonconvex optimization" by Caliciotti et al. [1]. In particular, in Caliciotti et al. [1], large-scale unconstrained optimization problems are considered by applying linesearch-based truncated Newton methods. In this framework, a key point is the reduction of the number of inner iterations needed, at each outer iteration, to approximately solve the Newton equation. A novel adaptive truncation criterion is introduced in Caliciotti et al. [1] to this aim. Here, we report the details concerning numerical experiences over a commonly used test set, namely CUTEst (Gould et al., 2015 [2]). Moreover, comparisons are reported in terms of performance profiles (Dolan and Moré, 2002 [3]), adopting different parameter settings. Finally, our linesearch-based scheme is compared with a renowned trust region method, namely TRON (Lin and Moré, 1999 [4]).
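
    Performance profiles in the sense of Dolan and Moré [3] compare solvers through the distribution of per-problem performance ratios r_{p,s} = t_{p,s} / min_s t_{p,s}. A small sketch of the computation, on synthetic timings rather than the CUTEst results of the article:

```python
import numpy as np

def performance_profile(times, taus):
    """times: (n_problems, n_solvers) array of costs (np.inf marks a failure).
    Returns rho with shape (len(taus), n_solvers): the fraction of problems
    each solver completes within a factor tau of the best solver."""
    best = times.min(axis=1, keepdims=True)          # best cost per problem
    ratios = times / best                            # r_{p,s} = t_{p,s} / min_s t_{p,s}
    return np.array([(ratios <= tau).mean(axis=0) for tau in taus])

# Synthetic example: 3 problems, 2 solvers (e.g. inner-iteration counts)
times = np.array([[10.0, 12.0],
                  [50.0, 35.0],
                  [np.inf, 90.0]])                   # solver 0 fails on problem 3
rho = performance_profile(times, taus=[1.0, 1.5, 2.0])
print(rho)   # rho[0] is the fraction of problems on which each solver is best
```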

  15. Your choice MATor(s) : large-scale quantitative anonymity assessment of Tor path selection algorithms against structural attacks

    OpenAIRE

    Backes, Michael; Meiser, Sebastian; Slowik, Marcin

    2015-01-01

    In this paper, we present a rigorous methodology for quantifying the anonymity provided by Tor against a variety of structural attacks, i.e., adversaries that compromise Tor nodes and thereby perform eavesdropping attacks to deanonymize Tor users. First, we provide an algorithmic approach for computing the anonymity impact of such structural attacks against Tor. The algorithm is parametric in the considered path selection algorithm and is, hence, capable of reasoning about variants of Tor and...

  16. TRAC code assessment using data from SCTF Core-III, a large-scale 2D/3D facility

    International Nuclear Information System (INIS)

    Boyack, B.E.; Shire, P.R.; Harmony, S.C.; Rhee, G.

    1988-01-01

    Nine tests from the SCTF Core-III configuration have been analyzed using TRAC-PF1/MOD1. The objectives of these assessment activities were to obtain a better understanding of the phenomena occurring during the refill and reflood phases of a large-break loss-of-coolant accident, to determine the accuracy to which key parameters are calculated, and to identify deficiencies in key code correlations and models that provide closure for the differential equations defining thermal-hydraulic phenomena in pressurized water reactors. Overall, the agreement between calculated and measured values of peak cladding temperature is reasonable. In addition, TRAC adequately predicts many of the trends observed in both the integral-effect and separate-effect tests conducted in SCTF Core-III. The importance of assessment activities that consider potential contributors to discrepancies between measured and calculated results is described; these contributors arise from three sources: (1) knowledge about the facility configuration and operation, (2) facility modeling for code input, and (3) deficiencies in code correlations and models. An example is provided. 8 refs., 7 figs., 2 tabs

  17. Water shortage risk assessment considering large-scale regional transfers: a copula-based uncertainty case study in Lunan, China.

    Science.gov (United States)

    Gao, Xueping; Liu, Yinzhu; Sun, Bowen

    2018-06-05

    The risk of water shortage caused by uncertainties such as frequent drought, varied precipitation, multiple water resources, and different water demands brings new challenges to water transfer projects. Uncertainties exist for both transferring water and local surface water; therefore, the relationship between them should be thoroughly studied to prevent water shortage. For more effective water management, an uncertainty-based water shortage risk assessment model (UWSRAM) is developed to study the combined effect of multiple water resources and analyze the shortage degree under uncertainty. The UWSRAM combines copula-based Monte Carlo stochastic simulation with a chance-constrained programming-stochastic multiobjective optimization model, using the Lunan water-receiving area in China as an example. Statistical copula functions are employed to estimate the joint probability of available transferring water and local surface water and to sample from the multivariate probability distribution; these samples are used as inputs for the optimization model. The approach reveals the distribution of water shortage, emphasizes the importance of improving and updating transferring-water and local surface-water management, and examines their combined influence on water shortage risk assessment. The possible available water and shortages can be calculated by applying the UWSRAM, together with the corresponding allocation measures under different water availability levels and violation probabilities. The UWSRAM is valuable for understanding the overall multi-water-resource situation and water shortage degree, adapting to the uncertainty surrounding water resources, establishing effective water resource planning policies for managers and achieving sustainable development.
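
    The copula step of the UWSRAM can be illustrated with a Gaussian copula joining two marginal availability distributions, from which Monte Carlo samples are drawn and fed to a shortage calculation. The marginals, parameters and correlation below are placeholders, not the Lunan calibration:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)

def gaussian_copula_samples(n, rho, marg_transfer, marg_local):
    """Draw n joint samples of (transferring water, local surface water).

    rho: correlation of the underlying Gaussian copula.
    marg_*: frozen scipy.stats marginal distributions.
    """
    cov = np.array([[1.0, rho], [rho, 1.0]])
    z = rng.multivariate_normal(mean=[0.0, 0.0], cov=cov, size=n)
    u = stats.norm.cdf(z)                      # map to uniform margins
    transfer = marg_transfer.ppf(u[:, 0])      # invert the marginal CDFs
    local = marg_local.ppf(u[:, 1])
    return transfer, local

# Illustrative marginals and demand; all parameters are assumptions
transfer, local = gaussian_copula_samples(
    n=10_000, rho=0.4,
    marg_transfer=stats.gamma(a=4.0, scale=0.5),
    marg_local=stats.gamma(a=2.5, scale=1.2))
demand = 5.0
shortage = np.clip(demand - (transfer + local), 0.0, None)
print("P(shortage) =", (shortage > 0).mean())
```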

  18. Assessment of biological effects resulting from large scale applications of coal power plant wastes in building technology in Poland

    International Nuclear Information System (INIS)

    Pensko, J.; Geisler, J.

    1980-01-01

    Some of the building materials commonly used in Poland contain natural radioactive elements and some contain radioactive industrial wastes. It has been shown that these building materials could induce additional annual doses to the inhabitants of the order of 0.4 mGy gamma radiation to the whole body and about 13 mSv alpha radiation to the critical tissues of the respiratory tract. On the basis of these dosimetric data and demographic and forecasting data, the number of severe genetic effects and cancer deaths caused by the additional radiation doses in dwellings were assessed for the population of Poland for the period 1951-2010. It was estimated that additional somatic effects in six consecutive decades will result in approximately 31,200 cancer deaths, including about 26,300 deaths caused by lung cancer. The expected number of severe genetic effects resulting from additional doses of ionizing radiation absorbed by parents indoors will amount to about 260 cases in the first generation and about 7500 cases in succeeding generations. (H.K.)

  19. Comprehensive large-scale investigation and assessment of trace metal in the coastal sediments of Bohai Sea.

    Science.gov (United States)

    Li, Hongjun; Gao, Xuelu; Gu, Yanbin; Wang, Ruirui; Xie, Pengfei; Liang, Miao; Ming, Hongxia; Su, Jie

    2018-04-01

    The Bohai Sea is a semi-closed sea with limited water exchange, which has been regarded as one of the most contaminated regions in China and has attracted public attention over the past decades. In recent years, rapid industrialization and urbanization in the coastal region have resulted in severe pollution pressure on the Bohai Sea. Although efforts have been made by the government and scientific experts to protect and restore the marine ecosystem, satisfactory results have not been achieved, and some coastal areas in the Bohai Sea seemingly remain heavily polluted. In this study, we focused on five coastal regions around the Bohai Sea to study the spatial distribution pattern of trace elements in the sediments and their ecological risk. A total of 108 sediment samples were analyzed to determine the contamination degree of trace elements (Cu, Cd, As, Pb, Zn, Cr, and Hg). The contamination factor (CF), pollution load index (PLI), geoaccumulation index (Igeo), and potential ecological risk index (PERI) were used to assess the pollution extent of these metals. Spatial distribution patterns revealed that the sedimentary environments of the coastal Bohai Sea were in good condition, except Jinzhou Bay, according to the Marine Sediment Quality of China. The concentrations of Hg and Cd were considerably higher than the average upper crust values and presented high and considerable potential ecological risk, respectively. The overall environmental quality of the coastal Bohai Sea does not appear to pose an extremely serious threat in terms of metal pollution. Thus, the government should continue implementing pollution control programs in the Bohai Sea. Copyright © 2018 Elsevier Ltd. All rights reserved.
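
    All four indices are simple functions of measured and background concentrations. A sketch of their computation follows; the background values are placeholders, and the toxic-response factors are the commonly used Hakanson values rather than values quoted from this study:

```python
import math

# Hakanson toxic-response factors commonly used for PERI (assumed, not from this study)
TR = {"Hg": 40, "Cd": 30, "As": 10, "Pb": 5, "Cu": 5, "Cr": 2, "Zn": 1}

def indices(measured, background):
    """measured/background: dicts of sediment concentrations (mg/kg) per metal."""
    cf = {m: measured[m] / background[m] for m in measured}           # contamination factor
    pli = math.prod(cf.values()) ** (1.0 / len(cf))                   # pollution load index
    igeo = {m: math.log2(measured[m] / (1.5 * background[m]))         # geoaccumulation index
            for m in measured}
    er = {m: TR[m] * cf[m] for m in measured}                         # per-metal ecological risk
    peri = sum(er.values())                                           # risk index (PERI)
    return cf, pli, igeo, peri

# Placeholder concentrations, not the Bohai data:
measured   = {"Hg": 0.08, "Cd": 0.30, "Pb": 25.0}
background = {"Hg": 0.02, "Cd": 0.10, "Pb": 20.0}
print(indices(measured, background))
```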

  20. ‘Oorja’ in India: Assessing a large-scale commercial distribution of advanced biomass stoves to households

    Science.gov (United States)

    Thurber, Mark C.; Phadke, Himani; Nagavarapu, Sriniketh; Shrimali, Gireesh; Zerriffi, Hisham

    2015-01-01

    Replacing traditional stoves with advanced alternatives that burn more cleanly has the potential to ameliorate major health problems associated with indoor air pollution in developing countries. With a few exceptions, large government and charitable programs to distribute advanced stoves have not had the desired impact. Commercially-based distributions that seek cost recovery and even profits might plausibly do better, both because they encourage distributors to supply and promote products that people want and because they are based around properly-incentivized supply chains that could be more scalable, sustainable, and replicable. The sale in India of over 400,000 "Oorja" stoves to households from 2006 onwards represents the largest commercially-based distribution of a gasification-type advanced biomass stove. BP's Emerging Consumer Markets (ECM) division and then successor company First Energy sold this stove and the pelletized biomass fuel on which it operates. We assess the success of this effort and the role its commercial aspect played in outcomes, using a survey of 998 households in areas of Maharashtra and Karnataka where the stove was sold as well as detailed interviews with BP and First Energy staff. Statistical models based on these data indicate that Oorja purchase rates were significantly influenced by the intensity of Oorja marketing in a region as well as by the pre-existing stove mix among households. The highest rate of adoption came from LPG-using households for which Oorja's pelletized biomass fuel reduced costs. Smoke- and health-related messages from Oorja marketing did not significantly influence the purchase decision, although they did appear to affect household perceptions about smoke. By the time of our survey, only 9% of households that purchased Oorja were still using the stove, in large part the result of difficulties First Energy encountered in developing a viable supply chain around low-cost procurement of "agricultural waste" to…

  1. Coupled Large Scale Hydro-mechanical Modelling for cap-rock Failure Risk Assessment of CO2 Storage in Deep Saline Aquifers

    International Nuclear Information System (INIS)

    Rohmer, J.; Seyedi, D.M.

    2010-01-01

    This work presents a numerical strategy of large-scale hydro-mechanical simulations to assess the risk of damage in cap-rock formations during a CO2 injection process. The proposed methodology is based on the development of a sequential coupling between a multiphase fluid flow code (TOUGH2) and a hydro-mechanical calculation code (Code-Aster) that enables us to perform coupled hydro-mechanical simulation at a regional scale. The likelihood of different cap-rock damage mechanisms can then be evaluated based on the results of the coupled simulations. A scenario-based approach is proposed to take into account the effect of the uncertainty of model parameters on damage likelihood. The developed methodology is applied to the cap-rock failure analysis of the deep aquifer of the Dogger formation in the context of the Paris basin multilayered geological system as a demonstration example. The simulation is carried out at a regional scale (100 km) considering an industrial mass injection rate of CO2 of 10 Mt/y. The assessment of the stress state after 10 years of injection is conducted through the developed sequential coupling. Two failure mechanisms are taken into account, namely tensile fracturing and shear-slip reactivation of pre-existing fractures. To deal with the large uncertainties due to sparse data on the layer formations, a scenario-based strategy is undertaken. It consists of defining a first reference modelling scenario considering the mean values of the hydro-mechanical properties for each layer. A sensitivity analysis is then carried out and shows the importance of both the initial stress state and the reservoir hydraulic properties on the cap-rock failure tendency. On this basis, a second scenario denoted 'critical' is defined so that the most influential model parameters are taken in their worst configuration. Neither of these failure criteria is activated for the considered conditions. At a phenomenological level, this study points out three key…

  2. Large scale network-centric distributed systems

    CERN Document Server

    Sarbazi-Azad, Hamid

    2014-01-01

    A highly accessible reference offering a broad range of topics and insights on large-scale network-centric distributed systems. Evolving from the fields of high-performance computing and networking, large-scale network-centric distributed systems continue to grow as one of the most important topics in computing and communication and many interdisciplinary areas. Dealing with both wired and wireless networks, this book focuses on the design and performance issues of such systems. Large Scale Network-Centric Distributed Systems provides in-depth coverage ranging from ground-level hardware issues…

  3. Assessing the Challenges in the Application of Potential Probiotic Lactic Acid Bacteria in the Large-Scale Fermentation of Spanish-Style Table Olives

    Directory of Open Access Journals (Sweden)

    Francisco Rodríguez-Gómez

    2017-05-01

    This work studies the inoculation conditions for allowing the survival/predominance of a potential probiotic strain (Lactobacillus pentosus TOMC-LAB2) when used as a starter culture in large-scale fermentations of green Spanish-style olives. The study was performed in two successive seasons (2011/2012 and 2012/2013), using about 150 tons of olives. Inoculation immediately after brining (to prevent growth of the wild initial microbiota) followed by re-inoculation 24 h later (to improve competitiveness) was essential for inoculum predominance. Processing early in the season (September) showed a favorable effect on fermentation and strain predominance on olives (particularly when using acidified brines containing 25 L HCl/vessel) but caused the disappearance of the target strain from both brines and olives during the storage phase. On the contrary, processing in October slightly reduced the target strain predominance on olives (70-90%) but allowed longer survival. The type of inoculum used (laboratory vs. industry pre-adapted) never had significant effects. Thus, this investigation discloses key issues for the survival and predominance of starter cultures in large-scale industrial fermentations of green Spanish-style olives. The results can be of interest for producing probiotic table olives and open new research challenges on the causes of inoculum vanishing during the storage phase.

  4. Performance Characteristics of Hybrid MPI/OpenMP Scientific Applications on a Large-Scale Multithreaded BlueGene/Q Supercomputer

    KAUST Repository

    Wu, Xingfu; Taylor, Valerie

    2013-01-01

    In this paper, we investigate the performance characteristics of five hybrid MPI/OpenMP scientific applications (two NAS Parallel Benchmarks Multi-Zone SP-MZ and BT-MZ, an earthquake simulation PEQdyna, an aerospace application PMLB and a 3D particle-in-cell application GTC) on a large-scale multithreaded Blue Gene/Q supercomputer at Argonne National Laboratory, and quantify the performance gap resulting from using different numbers of threads per node. We use performance tools and MPI profile and trace libraries available on the supercomputer to analyze and compare the performance of these hybrid scientific applications with an increasing number of OpenMP threads per node, and find that increasing the number of threads beyond some point saturates or worsens performance of these hybrid applications. For the strong-scaling hybrid scientific applications SP-MZ, BT-MZ, PEQdyna and PMLB, using 32 threads per node results in much better application efficiency than using 64 threads per node; as the number of threads per node increases, the FPU (Floating Point Unit) percentage decreases, while the MPI percentage (except for PMLB) and IPC (instructions per cycle) per core (except for BT-MZ) increase. For the weak-scaling hybrid scientific application GTC, the performance trend (relative speedup) is very similar with increasing numbers of threads per node no matter how many nodes (32, 128, 512) are used. © 2013 IEEE.

  5. Performance Characteristics of Hybrid MPI/OpenMP Scientific Applications on a Large-Scale Multithreaded BlueGene/Q Supercomputer

    KAUST Repository

    Wu, Xingfu

    2013-07-01

    In this paper, we investigate the performance characteristics of five hybrid MPI/OpenMP scientific applications (two NAS Parallel Benchmarks Multi-Zone SP-MZ and BT-MZ, an earthquake simulation PEQdyna, an aerospace application PMLB and a 3D particle-in-cell application GTC) on a large-scale multithreaded Blue Gene/Q supercomputer at Argonne National Laboratory, and quantify the performance gap resulting from using different numbers of threads per node. We use performance tools and MPI profile and trace libraries available on the supercomputer to analyze and compare the performance of these hybrid scientific applications with an increasing number of OpenMP threads per node, and find that increasing the number of threads beyond some point saturates or worsens performance of these hybrid applications. For the strong-scaling hybrid scientific applications SP-MZ, BT-MZ, PEQdyna and PMLB, using 32 threads per node results in much better application efficiency than using 64 threads per node; as the number of threads per node increases, the FPU (Floating Point Unit) percentage decreases, while the MPI percentage (except for PMLB) and IPC (instructions per cycle) per core (except for BT-MZ) increase. For the weak-scaling hybrid scientific application GTC, the performance trend (relative speedup) is very similar with increasing numbers of threads per node no matter how many nodes (32, 128, 512) are used. © 2013 IEEE.
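
    The saturation effect described in this record can be quantified with a simple per-node thread-scaling efficiency computed from runtimes. The sketch below uses invented timings purely to illustrate the calculation:

```python
# Runtime (seconds) of a hypothetical hybrid code on a fixed problem,
# varying OpenMP threads per node; all numbers are illustrative only.
runtimes = {1: 640.0, 8: 95.0, 16: 55.0, 32: 38.0, 64: 41.0}

base_threads = 1
base_time = runtimes[base_threads]
for threads, t in sorted(runtimes.items()):
    speedup = base_time / t
    efficiency = speedup / (threads / base_threads)
    print(f"{threads:3d} threads: speedup {speedup:6.2f}, efficiency {efficiency:5.2f}")
# Efficiency collapsing between 32 and 64 threads reproduces the pattern
# reported above: beyond some thread count, performance saturates or degrades.
```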

  6. Large Scale System Defense

    Science.gov (United States)

    2008-10-01

    …appealing because of the need to modify source code. Since source-level annotations serve as a vestigial policy, we articulated a way to augment self…

  7. A method for the assessment of the visual impact caused by the large-scale deployment of renewable-energy facilities

    International Nuclear Information System (INIS)

    Rodrigues, Marcos; Montanes, Carlos; Fueyo, Norberto

    2010-01-01

    The production of energy from renewable sources requires significantly more territory than conventional (fossil and nuclear) sources. For large penetrations of renewable technologies, such as wind power, the overall visual impact at the national level can be substantial and may prompt public reaction. This study develops a methodology for the assessment of visual impact that can be used to measure and report the level of impact caused by several renewable technologies (wind farms, solar photovoltaic plants or solar thermal ones), at both the local and regional (e.g. national) scales. Applications are shown for several large-scale, hypothetical scenarios of wind and solar-energy penetration in Spain, and also for the vicinity of an actual, single wind farm.

  8. Impact of tissue atrophy on high-pass filtered MRI signal phase-based assessment in large-scale group-comparison studies: A simulation study

    Science.gov (United States)

    Schweser, Ferdinand; Dwyer, Michael G.; Deistung, Andreas; Reichenbach, Jürgen R.; Zivadinov, Robert

    2013-10-01

    The assessment of abnormal accumulation of tissue iron in the basal ganglia nuclei and in white matter plaques using the gradient-echo magnetic resonance signal phase has become a research focus in many neurodegenerative diseases such as multiple sclerosis or Parkinson’s disease. A common and natural approach is to calculate the mean high-pass-filtered phase of previously delineated brain structures. Unfortunately, the interpretation of such an analysis requires caution: in this paper we demonstrate that regional gray matter atrophy, which is concomitant with many neurodegenerative diseases, may itself directly result in a phase shift seemingly indicative of increased iron concentration, even without any real change in the tissue iron concentration. Although this effect is relatively small, results of large-scale group comparisons may be driven by anatomical changes rather than by changes in the iron concentration.

  9. Performance characteristics of hybrid MPI/OpenMP implementations of NAS parallel benchmarks SP and BT on large-scale multicore supercomputers

    KAUST Repository

    Wu, Xingfu; Taylor, Valerie

    2011-01-01

    The NAS Parallel Benchmarks (NPB) are well-known applications with fixed algorithms for evaluating parallel systems and tools. Multicore supercomputers provide a natural programming paradigm for hybrid programs, whereby OpenMP can be used for data sharing among the cores that comprise a node and MPI can be used for communication between nodes. In this paper, we use the SP and BT benchmarks of MPI NPB 3.3 as a basis for a comparative approach to implement hybrid MPI/OpenMP versions of SP and BT. In particular, we compare the performance of the hybrid SP and BT with their MPI counterparts on large-scale multicore supercomputers. Our performance results indicate that the hybrid SP outperforms the MPI SP by up to 20.76%, and the hybrid BT outperforms the MPI BT by up to 8.58%, on up to 10,000 cores on BlueGene/P at Argonne National Laboratory and Jaguar (Cray XT4/5) at Oak Ridge National Laboratory. We also use performance tools and MPI trace libraries available on these supercomputers to further investigate the performance characteristics of the hybrid SP and BT.

  10. Performance Characteristics of Hybrid MPI/OpenMP Implementations of NAS Parallel Benchmarks SP and BT on Large-Scale Multicore Clusters

    KAUST Repository

    Wu, X.; Taylor, V.

    2011-01-01

    The NAS Parallel Benchmarks (NPB) are well-known applications with fixed algorithms for evaluating parallel systems and tools. Multicore clusters provide a natural programming paradigm for hybrid programs, whereby OpenMP can be used for data sharing among the cores that comprise a node, and MPI can be used for communication between nodes. In this paper, we use the Scalar Pentadiagonal (SP) and Block Tridiagonal (BT) benchmarks of MPI NPB 3.3 as a basis for a comparative approach to implement hybrid MPI/OpenMP versions of SP and BT. In particular, we compare the performance of the hybrid SP and BT with their MPI counterparts on large-scale multicore clusters: Intrepid (BlueGene/P) at Argonne National Laboratory and Jaguar (Cray XT4/5) at Oak Ridge National Laboratory. Our performance results indicate that the hybrid SP outperforms the MPI SP by up to 20.76%, and the hybrid BT outperforms the MPI BT by up to 8.58%, on up to 10,000 cores on Intrepid and Jaguar. We also use performance tools and MPI trace libraries available on these clusters to further investigate the performance characteristics of the hybrid SP and BT. © 2011 The Author. Published by Oxford University Press on behalf of The British Computer Society. All rights reserved.

  11. Performance characteristics of hybrid MPI/OpenMP implementations of NAS parallel benchmarks SP and BT on large-scale multicore supercomputers

    KAUST Repository

    Wu, Xingfu

    2011-03-29

    The NAS Parallel Benchmarks (NPB) are well-known applications with fixed algorithms for evaluating parallel systems and tools. Multicore supercomputers provide a natural programming paradigm for hybrid programs, whereby OpenMP can be used for data sharing among the cores that comprise a node and MPI can be used for communication between nodes. In this paper, we use the SP and BT benchmarks of MPI NPB 3.3 as a basis for a comparative approach to implement hybrid MPI/OpenMP versions of SP and BT. In particular, we compare the performance of the hybrid SP and BT with their MPI counterparts on large-scale multicore supercomputers. Our performance results indicate that the hybrid SP outperforms the MPI SP by up to 20.76%, and the hybrid BT outperforms the MPI BT by up to 8.58%, on up to 10,000 cores on BlueGene/P at Argonne National Laboratory and Jaguar (Cray XT4/5) at Oak Ridge National Laboratory. We also use performance tools and MPI trace libraries available on these supercomputers to further investigate the performance characteristics of the hybrid SP and BT.

  12. RGCA: A Reliable GPU Cluster Architecture for Large-Scale Internet of Things Computing Based on Effective Performance-Energy Optimization.

    Science.gov (United States)

    Fang, Yuling; Chen, Qingkui; Xiong, Neal N; Zhao, Deyu; Wang, Jingjuan

    2017-08-04

    This paper aims to develop a low-cost, high-performance and high-reliability computing system to process large-scale data using common data mining algorithms in the Internet of Things (IoT) computing environment. Considering the characteristics of IoT data processing, and similar to mainstream high-performance computing, we use a GPU (Graphics Processing Unit) cluster to achieve better IoT services. First, we present an energy consumption calculation method (ECCM) based on WSNs. Then, using the CUDA (Compute Unified Device Architecture) programming model, we propose a Two-level Parallel Optimization Model (TLPOM) which exploits reasonable resource planning and common compiler optimization techniques to obtain the best blocks and threads configuration considering the resource constraints of each node. The key to this part is dynamically coupling Thread-Level Parallelism (TLP) and Instruction-Level Parallelism (ILP) to improve the performance of the algorithms without additional energy consumption. Finally, combining the ECCM and the TLPOM, we use the Reliable GPU Cluster Architecture (RGCA) to obtain a high-reliability computing system considering the nodes' diversity, algorithm characteristics, etc. The results show that the performance of the algorithms increased significantly, by 34.1%, 33.96% and 24.07% on average for Fermi, Kepler and Maxwell, respectively, with the TLPOM, and the RGCA ensures that our IoT computing system provides low-cost and high-reliability services.
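
    The TLPOM's search for the best blocks-and-threads configuration under per-node resource constraints can be illustrated with a toy occupancy calculation: for each candidate block size, the number of resident blocks per multiprocessor is bounded by registers, shared memory and hardware caps. All device limits and kernel footprints below are illustrative assumptions, not the paper's model:

```python
def best_block_size(candidates, regs_per_thread, smem_per_block,
                    regs_per_sm=65536, smem_per_sm=49152,
                    max_threads_per_sm=2048, max_blocks_per_sm=16):
    """Pick the block size maximizing resident threads per multiprocessor
    (a simple occupancy proxy); all hardware numbers here are illustrative."""
    best = None
    for tpb in candidates:
        blocks = min(
            regs_per_sm // (regs_per_thread * tpb),   # register limit
            smem_per_sm // smem_per_block,            # shared-memory limit
            max_threads_per_sm // tpb,                # thread-count limit
            max_blocks_per_sm,                        # hardware block limit
        )
        resident = blocks * tpb
        if best is None or resident > best[1]:
            best = (tpb, resident)
    return best

# Example: a hypothetical kernel using 32 registers/thread and 4 KB shared memory/block
print(best_block_size([64, 128, 256, 512],
                      regs_per_thread=32, smem_per_block=4096))
```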

  13. Performance Characteristics of Hybrid MPI/OpenMP Implementations of NAS Parallel Benchmarks SP and BT on Large-Scale Multicore Clusters

    KAUST Repository

    Wu, X.

    2011-07-18

    The NAS Parallel Benchmarks (NPB) are well-known applications with fixed algorithms for evaluating parallel systems and tools. Multicore clusters provide a natural programming paradigm for hybrid programs, whereby OpenMP can be used with the data sharing with the multicores that comprise a node, and MPI can be used with the communication between nodes. In this paper, we use Scalar Pentadiagonal (SP) and Block Tridiagonal (BT) benchmarks of MPI NPB 3.3 as a basis for a comparative approach to implement hybrid MPI/OpenMP versions of SP and BT. In particular, we can compare the performance of the hybrid SP and BT with the MPI counterparts on large-scale multicore clusters, Intrepid (BlueGene/P) at Argonne National Laboratory and Jaguar (Cray XT4/5) at Oak Ridge National Laboratory. Our performance results indicate that the hybrid SP outperforms the MPI SP by up to 20.76 %, and the hybrid BT outperforms the MPI BT by up to 8.58 % on up to 10 000 cores on Intrepid and Jaguar. We also use performance tools and MPI trace libraries available on these clusters to further investigate the performance characteristics of the hybrid SP and BT. © 2011 The Author. Published by Oxford University Press on behalf of The British Computer Society. All rights reserved.

  14. Large scale GW calculations

    International Nuclear Information System (INIS)

    Govoni, Marco; Argonne National Lab., Argonne, IL; Galli, Giulia; Argonne National Lab., Argonne, IL

    2015-01-01

    We present GW calculations of molecules, ordered and disordered solids, and interfaces, which employ an efficient contour-deformation technique for frequency integration and require neither the explicit evaluation of virtual electronic states nor the inversion of dielectric matrices. We also present a parallel implementation of the algorithm, which takes advantage of separable expressions of both the single-particle Green's function and the screened Coulomb interaction. The method can be used starting from density functional theory calculations performed with semilocal or hybrid functionals. The newly developed technique was applied to GW calculations of systems of unprecedented size, including water/semiconductor interfaces with thousands of electrons.

  15. Conference on Large Scale Optimization

    CERN Document Server

    Hearn, D; Pardalos, P

    1994-01-01

    On February 15-17, 1993, a conference on Large Scale Optimization, hosted by the Center for Applied Optimization, was held at the University of Florida. The conference was supported by the National Science Foundation, the U.S. Army Research Office, and the University of Florida, with endorsements from SIAM, MPS, ORSA and IMACS. Forty-one invited speakers presented papers on mathematical programming and optimal control topics with an emphasis on algorithm development, real-world applications and numerical results. Participants from Canada, Japan, Sweden, The Netherlands, Germany, Belgium, Greece, and Denmark gave the meeting an important international component. Attendees also included representatives from IBM, American Airlines, US Air, United Parcel Service, AT&T Bell Labs, Thinking Machines, Army High Performance Computing Research Center, and Argonne National Laboratory. In addition, the NSF sponsored attendance of thirteen graduate students from universities in the United States and abroad…

  16. Large-scale performance studies of the Resistive Plate Chamber fast tracker for the ATLAS 1st-level muon trigger

    CERN Document Server

    Cattani, G; The ATLAS collaboration

    2009-01-01

    In the ATLAS experiment, Resistive Plate Chambers (RPCs) provide the first-level muon trigger and bunch-crossing identification over a large area of the barrel region, as well as serving as a very fast 2D tracker. To achieve these goals, a system of about 4000 gas gaps operating in avalanche mode was built (resulting in a total readout surface of about 16000 m2 segmented into 350000 strips) and is now fully operational in the ATLAS pit, where its functionality has so far been widely tested using cosmic rays. Such a large-scale system allows the performance of RPCs to be studied (both from the point of view of gas gaps and readout electronics) with unprecedented sensitivity to rare effects, as well as providing the means to correlate (in a statistically significant way) characteristics at production sites with performance during operation. Calibrating such a system means fine-tuning thousands of parameters (involving both front-end electronics and gap voltage), as well as constantly monitoring performance and environmental…

  17. A Numeric Scorecard Assessing the Mental Health Preparedness for Large-Scale Crises at College and University Campuses: A Delphi Study

    Science.gov (United States)

    Burgin, Rick A.

    2012-01-01

    Large-scale crises continue to surprise, overwhelm, and shatter college and university campuses. While the devastation to physical plants and persons is often evident and is addressed with crisis management plans, the number of emotional casualties left in the wake of these large-scale crises may not be apparent and are often not addressed with…

  18. Large-scale data analytics

    CERN Document Server

    Gkoulalas-Divanis, Aris

    2014-01-01

    Provides cutting-edge research in large-scale data analytics from diverse scientific areas. Surveys varied subject areas and reports on individual results of research in the field. Shares many tips and insights into large-scale data analytics from authors and editors with long-term experience and specialization in the field.

  19. A pioneering healthcare model applying large-scale production concepts: Principles and performance after more than 11,000 transplants at Hospital do Rim

    Directory of Open Access Journals (Sweden)

    José Medina Pestana

    Summary: The kidney transplant program at Hospital do Rim (hrim) is a unique healthcare model that applies the same principles of repetition of processes used in industrial production. This model, devised by Frederick Taylor, is founded on principles of scientific management that involve planning, rational execution of work, and distribution of responsibilities. The expected result is increased efficiency, improvement of results and optimization of resources. This model, almost completely subsidized by the Unified Health System (SUS, in the Portuguese acronym), has been used at the hrim in more than 11,000 transplants over the last 18 years. The hrim model consists of eight interconnected modules: organ procurement organization, preparation for the transplant, admission for transplant, surgical procedure, post-operative period, outpatient clinic, support units, and coordination and quality control. The flow of medical activities enables organized and systematic care of all patients. The improvement of the activities in each module is constant, with full monitoring of various administrative, health care, and performance indicators. The continuous improvement in clinical results confirms the efficiency of the program. Between 1998 and 2015, an increase was noted in graft survival (77.4 vs. 90.4%, p<0.001) and patient survival (90.5 vs. 95.1%, p=0.001). The high productivity, efficiency, and progressive improvement of the results obtained with this model suggest that it could be applied to other therapeutic areas that require large-scale care, preserving the humanistic characteristic of providing health care activity.

  20. A pioneering healthcare model applying large-scale production concepts: Principles and performance after more than 11,000 transplants at Hospital do Rim.

    Science.gov (United States)

    Pestana, José Medina

    2016-10-01

    The kidney transplant program at Hospital do Rim (hrim) is a unique healthcare model that applies the same principles of repetition of processes used in industrial production. This model, devised by Frederick Taylor, is founded on principles of scientific management that involve planning, rational execution of work, and distribution of responsibilities. The expected result is increased efficiency, improvement of results and optimization of resources. This model, almost completely subsidized by the Unified Health System (SUS, in the Portuguese acronym), has been used at the hrim in more than 11,000 transplants over the last 18 years. The hrim model consists of eight interconnected modules: organ procurement organization, preparation for the transplant, admission for transplant, surgical procedure, post-operative period, outpatient clinic, support units, and coordination and quality control. The flow of medical activities enables organized and systematic care of all patients. The improvement of the activities in each module is constant, with full monitoring of various administrative, health care, and performance indicators. The continuous improvement in clinical results confirms the efficiency of the program. Between 1998 and 2015, an increase was noted in graft survival (77.4 vs. 90.4%, p<0.001) and patient survival (90.5 vs. 95.1%, p=0.001). The high productivity, efficiency, and progressive improvement of the results obtained with this model suggest that it could be applied to other therapeutic areas that require large-scale care, preserving the humanistic characteristic of providing health care activity.

  1. Towards a Scalable and Adaptive Application Support Platform for Large-Scale Distributed E-Sciences in High-Performance Network Environments

    Energy Technology Data Exchange (ETDEWEB)

    Wu, Chase Qishi [New Jersey Inst. of Technology, Newark, NJ (United States); Univ. of Memphis, TN (United States); Zhu, Michelle Mengxia [Southern Illinois Univ., Carbondale, IL (United States)

    2016-06-06

    The advent of large-scale collaborative scientific applications has demonstrated the potential for broad scientific communities to pool globally distributed resources to produce unprecedented data acquisition, movement, and analysis. System resources including supercomputers, data repositories, computing facilities, network infrastructures, storage systems, and display devices have been increasingly deployed at national laboratories and academic institutes. These resources are typically shared by large communities of users over the Internet or dedicated networks and hence exhibit an inherent dynamic nature in their availability, accessibility, capacity, and stability. Scientific applications using either experimental facilities or computation-based simulations with various physical, chemical, climatic, and biological models feature diverse scientific workflows, as simple as linear pipelines or as complex as directed acyclic graphs, which must be executed and supported over wide-area networks with massively distributed resources. Application users oftentimes need to manually configure their computing tasks over networks in an ad hoc manner, significantly limiting the productivity of scientists and constraining the utilization of resources. The success of these large-scale distributed applications requires a highly adaptive and massively scalable workflow platform that provides automated and optimized computing and networking services. This project designs and develops a generic Scientific Workflow Automation and Management Platform (SWAMP), which contains a web-based user interface specially tailored for a target application, a set of user libraries, and several easy-to-use computing and networking toolkits for application scientists to conveniently assemble, execute, monitor, and control complex computing workflows in heterogeneous high-performance network environments. SWAMP will enable the automation and management of the entire process of scientific…
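
    At its core, the workflow support described here is execution of tasks in dependency order over a directed acyclic graph. A minimal, generic sketch of that scheduling step follows (task names are hypothetical; this is not SWAMP's actual API):

```python
from graphlib import TopologicalSorter

# A toy scientific workflow: acquire -> (filter, calibrate) -> analyze
workflow = {
    "filter":    {"acquire"},
    "calibrate": {"acquire"},
    "analyze":   {"filter", "calibrate"},
    "acquire":   set(),
}

def run_task(name):
    # Stand-in for dispatching a job to a remote compute resource
    print(f"running {name}")

ts = TopologicalSorter(workflow)
ts.prepare()
while ts.is_active():
    ready = ts.get_ready()        # tasks whose dependencies are all satisfied
    for task in ready:            # these could be dispatched concurrently
        run_task(task)
        ts.done(task)
```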

  2. Seismic safety in conducting large-scale blasts

    Science.gov (United States)

    Mashukov, I. V.; Chaplygin, V. V.; Domanov, V. P.; Semin, A. A.; Klimkin, M. A.

    2017-09-01

    In mining enterprises, a drilling-and-blasting method is used to prepare hard rock for excavation. As mining operations approach settlements, the negative effects of large-scale blasts increase. To assess the level of seismic impact of large-scale blasts, the scientific staff of Siberian State Industrial University carried out assessments for coal mines and iron ore enterprises. The magnitude of surface seismic vibrations caused by mass explosions was determined using seismic receivers and an analog-to-digital converter recording to a laptop. The results of registering surface seismic vibrations during more than 280 large-scale blasts at 17 mining enterprises in 22 settlements are presented. The maximum velocity values of the Earth’s surface vibrations are determined. The safety evaluation of the seismic effect was carried out against the permissible value of vibration velocity. For cases exceeding permissible values, recommendations were developed to reduce the level of seismic impact.

  3. Large-scale grid management

    International Nuclear Information System (INIS)

    Langdal, Bjoern Inge; Eggen, Arnt Ove

    2003-01-01

    The network companies in the Norwegian electricity industry now have to establish large-scale network management, a concept essentially characterized by (1) a broader focus (Broad Band, Multi Utility, ...) and (2) bigger units with large networks and more customers. Research by SINTEF Energy Research shows that, so far, the approaches within large-scale network management may be structured according to three main challenges: centralization, decentralization and outsourcing. The article is part of a planned series.

  4. Performance of granular activated carbon to remove micropollutants from municipal wastewater-A meta-analysis of pilot- and large-scale studies.

    Science.gov (United States)

    Benstoem, Frank; Nahrstedt, Andreas; Boehler, Marc; Knopp, Gregor; Montag, David; Siegrist, Hansruedi; Pinnekamp, Johannes

    2017-10-01

    For reducing organic micropollutants (MPs) in municipal wastewater effluents, granular activated carbon (GAC) has been tested in various studies. We performed a systematic literature search and found 44 studies dealing with the adsorption of MPs (carbamazepine, diclofenac, sulfamethoxazole) from municipal wastewater on GAC in pilot- and large-scale plants. Within our meta-analysis, we plot the bed volumes (BV [m3 water/m3 GAC]) until the breakthrough criterion of MP-BV20% was reached, as a function of potentially relevant parameters (empty bed contact time EBCT, influent DOC (DOC0) and manufacturing method). Moreover, we performed statistical tests (ANOVAs) to check the results for significance. The operating time of single adsorbers until breakthrough of diclofenac-BV20% differed by up to 2500% (800-20,000 BV). There was still elimination of the "very well/well" adsorbable MPs such as carbamazepine and diclofenac even when the equilibrium of DOC had already been reached. No strong statistical significance of EBCT and DOC0 on MP-BV20% could be found, due to lack of data and the high heterogeneity of the studies, which used GAC of different qualities. In further studies, adsorbers should be operated ≫20,000 BV for exact calculation of breakthrough curves, and the following parameters should be recorded: selected MPs; DOC0; UVA254; EBCT; product name, manufacturing method and raw material of GAC; suspended solids (TSS); backwash interval; backwash program; and pressure drop within the adsorber. Based on our investigations, we generally recommend using reactivated GAC to reduce the environmental impact, and carrying out tests at pilot scale to collect reliable data for process design. Copyright © 2017 Elsevier Ltd. All rights reserved.
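
    The two operating quantities the meta-analysis is built on, EBCT and BV, follow directly from adsorber volume and flow: EBCT = V_GAC/Q and BV = V_treated/V_GAC. A sketch of the arithmetic with illustrative numbers, not data from the 44 studies:

```python
def ebct_minutes(gac_volume_m3, flow_m3_per_h):
    """Empty bed contact time: EBCT = V_GAC / Q, converted to minutes."""
    return gac_volume_m3 / flow_m3_per_h * 60.0

def bed_volumes(treated_water_m3, gac_volume_m3):
    """BV = cumulative treated water volume per volume of GAC."""
    return treated_water_m3 / gac_volume_m3

# Illustrative adsorber: 10 m3 GAC, 40 m3/h flow, 300 days of operation
gac, flow = 10.0, 40.0
print("EBCT:", ebct_minutes(gac, flow), "min")               # 15.0 min
print("BV after 300 d:", bed_volumes(flow * 24 * 300, gac))  # 28,800 BV
```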

  5. Effects of cognitive design principles on user’s performance and preference: A large scale evaluation of a soccer stats display

    NARCIS (Netherlands)

    Westerbeek, Hans; van Amelsvoort, Marije; Maes, Fons; Swerts, Marc

    2014-01-01

    We present an analytic and a large-scale experimental comparison of two informationally equivalent information displays of soccer statistics. Both displays were presented by the BBC during the 2010 FIFA World Cup. The displays mainly differ in terms of the number and types of cognitively natural…

  6. Large scale structure and baryogenesis

    International Nuclear Information System (INIS)

    Kirilova, D.P.; Chizhov, M.V.

    2001-08-01

    We discuss a possible connection between large-scale structure formation and baryogenesis in the universe. An updated review of the observational indications for the presence of a very large scale of 120 h^-1 Mpc in the distribution of the visible matter of the universe is provided. The possibility of generating a periodic distribution with the characteristic scale of 120 h^-1 Mpc through a mechanism producing quasi-periodic baryon density perturbations during the inflationary stage is discussed. The evolution of the baryon charge density distribution is explored in the framework of a low-temperature boson condensate baryogenesis scenario. Both the observed very large scale of the visible matter distribution in the universe and the observed baryon asymmetry value could naturally appear as a result of the evolution of a complex scalar field condensate formed at the inflationary stage. Moreover, for some model parameters, a natural separation of matter superclusters from antimatter ones can be achieved. (author)

  7. Large-scale carbon stock assessment of woody vegetation in tropical dry deciduous forest of Sathanur reserve forest, Eastern Ghats, India.

    Science.gov (United States)

    Gandhi, Durai Sanjay; Sundarapandian, Somaiah

    2017-04-01

    …them in terms of biomass and carbon stocks, which could be attributed to variation in anthropogenic pressures among the plots as well as to changes in tree density across landscapes. Total basal area of woody vegetation showed a significant positive relationship (R² = 0.978; P = 0.000) with carbon storage, while juvenile tree basal area showed a negative relationship (R² = 0.4804; P = 0.000) with woody carbon storage. The present study generates large-scale baseline data on dry deciduous forest carbon stock, which would facilitate carbon stock assessment at a national level as well as understanding of its contribution on a global scale.
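
    The reported relationships are ordinary least-squares fits, with strength quoted as the coefficient of determination R². A sketch of how such a fit is obtained, on synthetic data rather than the Sathanur plots:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
basal_area = rng.uniform(5, 40, size=30)               # m2/ha, synthetic
carbon = 3.2 * basal_area + rng.normal(0, 8, size=30)  # Mg C/ha, synthetic

fit = stats.linregress(basal_area, carbon)             # ordinary least squares
print(f"R^2 = {fit.rvalue**2:.3f}, P = {fit.pvalue:.4f}")
```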

  8. Modelling bark beetle disturbances in a large scale forest scenario model to assess climate change impacts and evaluate adaptive management strategies

    NARCIS (Netherlands)

    Seidl, R.; Schelhaas, M.J.; Lindner, M.; Lexer, M.J.

    2009-01-01

    To study potential consequences of climate-induced changes in the biotic disturbance regime at regional to national scale, we integrated a model of Ips typographus (L. Scol. Col.) damages into the large-scale forest scenario model EFISCEN. A two-stage multivariate statistical meta-model was used to…

  9. Performance assessment

    International Nuclear Information System (INIS)

    Doe, T.

    1985-01-01

    The purpose of performance assessment is to show that the repository is expected to serve its stated function - disposing of radioactive waste safely both during operation and for the postclosure period. Performance assessment is a straightforward concept, but its application may be very complicated. The concept of performance assessment has been clarified by the Nuclear Regulatory Commission (NRC) in their Draft Generic Technical Position on Licensing Assessment Methodology for High-Level Waste Geologic Repositories (NRC, 1984). This document has gone a long way toward defining the criteria that the NRC will use to determine whether or not information from site characterization is adequate to meet the regulations of the Nuclear Regulatory Commission and the Environmental Protection Agency (EPA). A favorable determination is required for issuance of a construction authorization, which is the first major regulatory requirement for developing a working repository. It is, therefore, essential that a research program be developed that not only resolves the outstanding technical issues, but also does it in such a way that the results are clearly applicable to the formal performance assessment and licensing procedures. The definitions of performance assessment are reviewed and the current NRC thinking is summarized

  10. Large-scale multimedia modeling applications

    International Nuclear Information System (INIS)

    Droppo, J.G. Jr.; Buck, J.W.; Whelan, G.; Strenge, D.L.; Castleton, K.J.; Gelston, G.M.

    1995-08-01

    Over the past decade, the US Department of Energy (DOE) and other agencies have faced increasing scrutiny for a wide range of environmental issues related to past and current practices. A number of large-scale applications have been undertaken that required analysis of large numbers of potential environmental issues over a wide range of environmental conditions and contaminants. Several of these applications, referred to here as large-scale applications, have addressed long-term public health risks using a holistic approach for assessing impacts from potential waterborne and airborne transport pathways. Multimedia models such as the Multimedia Environmental Pollutant Assessment System (MEPAS) were designed for use in such applications. MEPAS integrates radioactive and hazardous contaminants impact computations for major exposure routes via air, surface water, ground water, and overland flow transport. A number of large-scale applications of MEPAS have been conducted to assess various endpoints for environmental and human health impacts. These applications are described in terms of lessons learned in the development of an effective approach for large-scale applications

  11. Sensitivity analysis for large-scale problems

    Science.gov (United States)

    Noor, Ahmed K.; Whitworth, Sandra L.

    1987-01-01

    The development of efficient techniques for calculating sensitivity derivatives is studied. The objective is to present a computational procedure for calculating sensitivity derivatives as part of performing structural reanalysis for large-scale problems. The scope is limited to framed type structures. Both linear static analysis and free-vibration eigenvalue problems are considered.
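
    A common baseline for such sensitivity techniques is the finite-difference derivative: perturb a design variable, reanalyze, and difference the responses. The sketch below uses a cantilever tip deflection as a stand-in structural response; the structure and numbers are illustrative, not the paper's test cases:

```python
def tip_deflection(E, I, L, P):
    """Cantilever tip deflection under an end load: w = P L^3 / (3 E I)."""
    return P * L**3 / (3.0 * E * I)

def central_difference_sensitivity(f, x, h):
    """df/dx via central differences; a baseline for analytic derivatives."""
    return (f(x + h) - f(x - h)) / (2.0 * h)

E, I, L, P = 210e9, 8.0e-6, 3.0, 10e3       # steel beam, SI units (illustrative)
dw_dI = central_difference_sensitivity(
    lambda I_: tip_deflection(E, I_, L, P), I, h=1e-9)
analytic = -P * L**3 / (3.0 * E * I**2)     # exact derivative, for comparison
print(dw_dI, analytic)
```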

  12. Large-Scale Survey Findings Inform Patients’ Experiences in Using Secure Messaging to Engage in Patient-Provider Communication and Self-Care Management: A Quantitative Assessment

    Science.gov (United States)

    Patel, Nitin R; Lind, Jason D; Antinori, Nicole

    2015-01-01

    Background: Secure email messaging is part of a national transformation initiative in the United States to promote new models of care that support enhanced patient-provider communication. To date, only a limited number of large-scale studies have evaluated users’ experiences in using secure email messaging. Objective: To quantitatively assess veteran patients’ experiences in using secure email messaging in a large patient sample. Methods: A cross-sectional mail-delivered paper-and-pencil survey study was conducted with a sample of respondents identified as registered for the Veterans Health Administration’s Web-based patient portal (My HealtheVet) who had opted to use secure messaging. The survey collected demographic data and assessed computer and health literacy and secure messaging use. Analyses conducted on the survey data include frequencies and proportions, chi-square tests, and one-way analysis of variance. Results: The majority of respondents (N=819) reported using secure messaging for 6 months or longer (n=499, 60.9%). They reported secure messaging to be helpful for completing medication refills (n=546, 66.7%), managing appointments (n=343, 41.9%), looking up test results (n=350, 42.7%), and asking health-related questions (n=340, 41.5%). Notably, some respondents reported using secure messaging to address sensitive health topics (n=67, 8.2%). Survey responses indicated that younger age (P=.039) and higher levels of education (P=.025) and income (P=.003) were associated with more frequent use of secure messaging. Females were more likely to report using secure messaging more often than their male counterparts (P=.098). Minorities were more likely to report using secure messaging more often, at least once a month, than nonminorities (P=.086). Individuals with higher levels of health literacy reported more frequent use of secure messaging (P=.007), greater satisfaction (P=.002), and indicated that secure messaging is a useful (P=.002) and easy…

  13. Large-Scale Survey Findings Inform Patients' Experiences in Using Secure Messaging to Engage in Patient-Provider Communication and Self-Care Management: A Quantitative Assessment.

    Science.gov (United States)

    Haun, Jolie N; Patel, Nitin R; Lind, Jason D; Antinori, Nicole

    2015-12-21

    Secure email messaging is part of a national transformation initiative in the United States to promote new models of care that support enhanced patient-provider communication. To date, only a limited number of large-scale studies have evaluated users' experiences in using secure email messaging. To quantitatively assess veteran patients' experiences in using secure email messaging in a large patient sample, a cross-sectional mail-delivered paper-and-pencil survey study was conducted with a sample of respondents identified as registered for the Veterans Health Administration's Web-based patient portal (My HealtheVet) who had opted to use secure messaging. The survey collected demographic data and assessed computer and health literacy and secure messaging use. Analyses conducted on the survey data include frequencies and proportions, chi-square tests, and one-way analysis of variance. The majority of respondents (N=819) reported using secure messaging for 6 months or longer (n=499, 60.9%). They reported secure messaging to be helpful for completing medication refills (n=546, 66.7%), managing appointments (n=343, 41.9%), looking up test results (n=350, 42.7%), and asking health-related questions (n=340, 41.5%). Notably, some respondents reported using secure messaging to address sensitive health topics (n=67, 8.2%). Survey responses indicated that younger age (P=.039) and higher levels of education (P=.025) and income (P=.003) were associated with more frequent use of secure messaging. Females were more likely to report using secure messaging more often than their male counterparts (P=.098). Minorities were more likely to report using secure messaging more often, at least once a month, than nonminorities (P=.086). Individuals with higher levels of health literacy reported more frequent use of secure messaging (P=.007), greater satisfaction (P=.002), and indicated that secure messaging is a useful (P=.002) and easy-to-use (P≤.001) communication tool, compared…
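
    The analyses named, frequencies and proportions, chi-square tests and one-way ANOVA, map directly onto standard statistical library calls. A sketch with made-up survey counts, not the study's data:

```python
import numpy as np
from scipy import stats

# Hypothetical contingency table: rows = age group, cols = messaging frequency
table = np.array([[120, 80, 40],    # < 55 years
                  [90, 110, 60]])   # >= 55 years
chi2, p, dof, expected = stats.chi2_contingency(table)
print(f"chi-square test: chi2={chi2:.2f}, p={p:.3f}")

# Hypothetical satisfaction scores grouped by health-literacy level
low = [3.1, 2.8, 3.4, 3.0]
mid = [3.6, 3.9, 3.5, 3.8]
high = [4.2, 4.0, 4.5, 4.1]
f_stat, p = stats.f_oneway(low, mid, high)
print(f"one-way ANOVA: F={f_stat:.2f}, p={p:.4f}")
```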

  14. Wind-tunnel investigation of the thrust augmentor performance of a large-scale swept wing model. [in the Ames 40 by 80 foot wind tunnel

    Science.gov (United States)

    Koenig, D. G.; Falarski, M. D.

    1979-01-01

    Tests were made in the Ames 40- by 80-foot wind tunnel to determine the forward-speed effects on wing-mounted thrust augmentors. The large-scale model was powered by the compressor output of J-85 driven Viper compressors. The flap settings used were 15 deg and 30 deg, with 0 deg, 15 deg, and 30 deg aileron settings. The maximum duct pressure and wind tunnel dynamic pressure were 66 cmHg (26 in Hg) and 1190 N/sq m (25 lb/sq ft), respectively. All tests were made at zero sideslip. Test results are presented without analysis.

  15. Assessment of the burning behavior of protected and unprotected cables and cable trays in nuclear installations using small- and large-scale experiments

    Energy Technology Data Exchange (ETDEWEB)

    Siemon, Matthias; Riese, Olaf; Zehfuss, Jochen [Technische Univ. Braunschweig (Germany). Inst. fuer Baustoffe, Massivbau und Brandschutz (iBMB)]

    2015-12-15

    Electrical installations and cables are a major fire risk in industrial buildings and power plants. In general, cables and cable systems are associated with three hazards: flash-over phenomena caused by the pyrolysis of fuel gases induced by the heat of an adjacent fire; fire spread along cable trays, affecting areas beyond the fire origin; and acting as an ignition source in case of malfunction. If burning, cables can emit large amounts of smoke and toxic products, affecting occupants as well as the long-term functionality of the structure and its installations. Attention to these risks has led to the development of fire-retardant, non-corrosive (non-halogenated) cables designed to reduce some or all of the risks mentioned. For existing installations in industrial buildings and power plants with halogenated cables, different protection measures are available and widely applied retroactively. Important protective measures are intumescent or ablative coatings, cable casings, and bindings. To qualify the effects of these protection measures, small-scale tests investigating a single cable specimen as well as large-scale cable tray test setups have been developed and carried out over the last 20 years at iBMB. In this paper, these test results are analysed with regard to their effects on heat release, ignition time, and fire spread over cable trays. Furthermore, national and international research projects have investigated the burning behaviour of different cable types, tray installations, tray loading and spacing, and ventilation conditions. In conclusion, the main outcomes of past research are summarized. Influence factors which have not yet been accounted for in detail (e.g., pre-heating due to high power utilization, influence of cable aging) are emphasized. The modelling of unprotected cables has been studied internationally in recent years. For future applications, the question of applicability of recently developed sub-models on the fire behaviour of protected

  16. VESPA: Very large-scale Evolutionary and Selective Pressure Analyses

    Directory of Open Access Journals (Sweden)

    Andrew E. Webb

    2017-06-01

    Background: Large-scale molecular evolutionary analyses of protein-coding sequences require a number of inter-related preparatory steps, from finding gene families to generating alignments and phylogenetic trees and assessing selective pressure variation. Each phase of these analyses can present significant challenges, particularly when working with entire proteomes (all protein-coding sequences in a genome) from a large number of species. Methods: We present VESPA, software capable of automating a selective pressure analysis using codeML in addition to the preparatory analyses and summary statistics. VESPA is written in Python and Perl and is designed to run within a UNIX environment. Results: We have benchmarked VESPA, and our results show that the method is consistent, performs well on both large-scale and smaller-scale datasets, and produces results in line with previously published datasets. Discussion: Large-scale gene family identification, sequence alignment, and phylogeny reconstruction are all important aspects of large-scale molecular evolutionary analyses. VESPA provides flexible software for simplifying these processes along with downstream selective pressure variation analyses. The software automatically interprets results from codeML and produces simplified summary files to assist the user in better understanding the results. VESPA may be found at the following website: http://www.mol-evol.org/VESPA.

  17. Japanese large-scale interferometers

    CERN Document Server

    Kuroda, K; Miyoki, S; Ishizuka, H; Taylor, C T; Yamamoto, K; Miyakawa, O; Fujimoto, M K; Kawamura, S; Takahashi, R; Yamazaki, T; Arai, K; Tatsumi, D; Ueda, A; Fukushima, M; Sato, S; Shintomi, T; Yamamoto, A; Suzuki, T; Saitô, Y; Haruyama, T; Sato, N; Higashi, Y; Uchiyama, T; Tomaru, T; Tsubono, K; Ando, M; Takamori, A; Numata, K; Ueda, K I; Yoneda, H; Nakagawa, K; Musha, M; Mio, N; Moriwaki, S; Somiya, K; Araya, A; Kanda, N; Telada, S; Sasaki, M; Tagoshi, H; Nakamura, T; Tanaka, T; Ohara, K

    2002-01-01

    The objective of the TAMA 300 interferometer was to develop advanced technologies for kilometre scale interferometers and to observe gravitational wave events in nearby galaxies. It was designed as a power-recycled Fabry-Perot-Michelson interferometer and was intended as a step towards a final interferometer in Japan. The present successful status of TAMA is presented. TAMA forms a basis for LCGT (large-scale cryogenic gravitational wave telescope), a 3 km scale cryogenic interferometer to be built in the Kamioka mine in Japan, implementing cryogenic mirror techniques. The plan of LCGT is schematically described along with its associated R and D.

  18. The Large-scale Coronal Structure of the 2017 August 21 Great American Eclipse: An Assessment of Solar Surface Flux Transport Model Enabled Predictions and Observations

    Science.gov (United States)

    Nandy, Dibyendu; Bhowmik, Prantika; Yeates, Anthony R.; Panda, Suman; Tarafder, Rajashik; Dash, Soumyaranjan

    2018-01-01

    On 2017 August 21, a total solar eclipse swept across the contiguous United States, providing excellent opportunities for diagnostics of the Sun’s corona. The Sun’s coronal structure is notoriously difficult to observe except during solar eclipses; thus, theoretical models must be relied upon for inferring the underlying magnetic structure of the Sun’s outer atmosphere. These models are necessary for understanding the role of magnetic fields in the heating of the corona to a million degrees and the generation of severe space weather. Here we present a methodology for predicting the structure of the coronal field based on model forward runs of a solar surface flux transport model, whose predicted surface field is utilized to extrapolate future coronal magnetic field structures. This prescription was applied to the 2017 August 21 solar eclipse. A post-eclipse analysis shows good agreement between model-simulated and observed coronal structures and their locations on the limb. We demonstrate that slow changes in the Sun’s surface magnetic field distribution, driven by long-term flux emergence and its evolution, govern large-scale coronal structures with a (plausibly cycle-phase dependent) dynamical memory timescale on the order of a few solar rotations, opening up the possibility of large-scale, global corona predictions at least a month in advance.

  19. Assessing the Capacity of the US Health Care System to Use Additional Mechanical Ventilators During a Large-Scale Public Health Emergency.

    Science.gov (United States)

    Ajao, Adebola; Nystrom, Scott V; Koonin, Lisa M; Patel, Anita; Howell, David R; Baccam, Prasith; Lant, Tim; Malatino, Eileen; Chamberlin, Margaret; Meltzer, Martin I

    2015-12-01

    A large-scale public health emergency, such as a severe influenza pandemic, can generate large numbers of critically ill patients in a short time. We modeled the number of mechanical ventilators that could be used in addition to the number of hospital-based ventilators currently in use. We identified key components of the health care system needed to deliver ventilation therapy, quantified the maximum number of additional ventilators that each key component could support at various capacity levels (ie, conventional, contingency, and crisis), and determined the constraining key component at each capacity level. Our study results showed that US hospitals could absorb between 26,200 and 56,300 additional ventilators at the peak of a national influenza pandemic outbreak with robust pre-pandemic planning. The current US health care system may have limited capacity to use additional mechanical ventilators during a large-scale public health emergency. Emergency planners need to understand their health care systems' capability to absorb additional resources and expand care. This methodology could be adapted by emergency planners to determine stockpiling goals for critical resources or to identify alternatives to manage overwhelming critical care need.
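
    The constraining-component logic described above reduces to taking, at each capacity level, the minimum over the key components of the number of additional ventilators each could support. A minimal sketch of that calculation; the component names and figures are hypothetical placeholders, not the study's inputs:

```python
# Sketch of the constraining-component calculation described above.
# Component capacities are hypothetical placeholders, not study inputs.

# Additional ventilators each key component could support per capacity level
capacity = {
    "conventional": {"respiratory_therapists": 30000, "icu_beds": 26200, "oxygen_supply": 45000},
    "contingency":  {"respiratory_therapists": 50000, "icu_beds": 56300, "oxygen_supply": 60000},
}

for level, components in capacity.items():
    constraint = min(components, key=components.get)
    print(f"{level}: max additional ventilators = {components[constraint]} "
          f"(constrained by {constraint})")
```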

  20. Large scale chromatographic separations using continuous displacement chromatography (CDC)

    International Nuclear Information System (INIS)

    Taniguchi, V.T.; Doty, A.W.; Byers, C.H.

    1988-01-01

    A process for large-scale chromatographic separations using a continuous chromatography technique is described. The process combines the advantages of large-scale batch fixed-column displacement chromatography with conventional analytical or elution continuous annular chromatography (CAC) to enable large-scale displacement chromatography to be performed on a continuous basis (CDC). Such large-scale, continuous displacement chromatography separations have not previously been reported in the literature. The process is demonstrated with the ion exchange separation of a binary lanthanide (Nd/Pr) mixture. The process is, however, applicable to any displacement chromatography separation that can be performed using conventional batch, fixed-column chromatography.

  1. Assessing the capacity of the healthcare system to use additional mechanical ventilators during a large-scale public health emergency (PHE)

    Science.gov (United States)

    Ajao, Adebola; Nystrom, Scott V.; Koonin, Lisa M.; Patel, Anita; Howell, David R.; Baccam, Prasith; Lant, Tim; Malatino, Eileen; Chamberlin, Margaret; Meltzer, Martin I.

    2015-01-01

    A large-scale public health emergency (PHE), like a severe influenza pandemic, can generate large numbers of critically ill patients in a short time. We modeled the number of mechanical ventilators that could be used in addition to the number of hospital-based ventilators currently in use. We identified key components of the healthcare system needed to deliver ventilation therapy, quantified the maximum number of additional ventilators that each key component could support at various capacity levels (i.e., conventional, contingency, and crisis), and determined the constraining key component at each capacity level. Our study results showed that U.S. hospitals could absorb between 26,200 and 56,300 additional ventilators at the peak of a national influenza pandemic outbreak with robust pre-pandemic planning. This methodology could be adapted by emergency planners to determine stockpiling goals for critical resources or to identify alternatives to manage overwhelming critical care need. PMID:26450633

  2. Assessment of long-term and large-scale even-odd license plate controlled plan effects on urban air quality and its implication

    Science.gov (United States)

    Zhao, Suping; Yu, Ye; Qin, Dahe; Yin, Daiying; He, Jianjun

    2017-12-01

    To ease traffic congestion and improve urban air quality, a long-term, large-scale even-odd license plate control plan was implemented by the local government from 20 November to 26 December 2016 in urban Lanzhou, a semi-arid valley city in northwest China. The traffic control measures provided an invaluable opportunity to evaluate their effects on urban air quality in the less developed cities of northwest China. Based on simultaneously measured air pollutants and meteorological parameters, the abatement of traffic-related pollutants such as CO, PM2.5, and PM10 (particulate matter with diameter less than 2.5 μm and 10 μm, respectively) induced by the control measures was first quantified by comparing air quality data in urban areas with those in rural areas (uncontrolled zones). The concentrations of CO and NO2 from motor vehicles and of fine particulate matter (PM2.5) showed significant decreases of 15%-23% during the traffic control period relative to those measured before it, with hourly maximum CO, PM2.5, and NO2/SO2 reductions of 43%, 35%, and 141.4%, respectively. The influence of the control measures on the AQI (air quality index) and ozone was smaller than their effect on the other air pollutants. Therefore, to alleviate severe winter haze pollution in China and protect human health, stringent long-term, large-scale even-odd license plate control plans should be implemented intermittently in urban areas, especially during periods with poor diffusion conditions.
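
    One simple way to express the urban-versus-rural quantification described above is a control-adjusted (ratio-of-ratios) reduction, in which the change at controlled urban sites is normalized by the change at uncontrolled rural sites over the same period. A hedged sketch; the concentration values are illustrative placeholders, not the study's measurements:

```python
# Sketch of a control-adjusted pollutant reduction estimate: the change at
# urban (controlled) sites is normalized by the change at rural
# (uncontrolled) sites over the same period. Values are placeholders.
def adjusted_reduction(urban_before, urban_during, rural_before, rural_during):
    urban_ratio = urban_during / urban_before
    rural_ratio = rural_during / rural_before
    return 1.0 - urban_ratio / rural_ratio

# Hypothetical mean PM2.5 concentrations (ug/m3)
print(f"PM2.5 reduction: {adjusted_reduction(95.0, 78.0, 60.0, 62.0):.1%}")
```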

  3. Navigation API Route Fuel Saving Opportunity Assessment on Large-Scale Real-World Travel Data for Conventional Vehicles and Hybrid Electric Vehicles: Preprint

    Energy Technology Data Exchange (ETDEWEB)

    Zhu, Lei [National Renewable Energy Laboratory (NREL), Golden, CO (United States)]; Holden, Jacob [National Renewable Energy Laboratory (NREL), Golden, CO (United States)]; Gonder, Jeffrey D [National Renewable Energy Laboratory (NREL), Golden, CO (United States)]

    2017-12-06

    A green routing strategy, which instructs a vehicle to select a fuel-efficient route, benefits the current transportation system with fuel-saving opportunities. This paper introduces a navigation API route fuel-saving evaluation framework for estimating the fuel advantages of alternative API routes based on large-scale, real-world travel data for conventional vehicles (CVs) and hybrid electric vehicles (HEVs). Navigation APIs, such as the Google Directions API, integrate traffic conditions and provide feasible alternative routes for origin-destination pairs. This paper develops two link-based fuel-consumption models stratified by link-level speed, road grade, and functional class (local/non-local), one for CVs and the other for HEVs. The link-based fuel-consumption models are built by assigning travel from a large number of GPS driving traces to the links of the TomTom MultiNet road network layer, with road grade derived from a U.S. Geological Survey elevation data set. Fuel consumption on a link is calculated by the proposed fuel-consumption model. This paper envisions two kinds of applications: 1) identifying alternative routes that save fuel, and 2) quantifying the potential fuel savings for large amounts of travel. An experiment based on a large-scale California Household Travel Survey GPS trajectory data set is conducted. The fuel consumption and savings of CVs and HEVs are investigated. At the same time, the trade-off between fuel savings and time savings when choosing different routes is also examined for both powertrains.
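
    The core of such a framework is summing a per-link fuel model over each candidate route and comparing the totals. A minimal sketch of that route-comparison logic; the rate function and its coefficients are made-up stand-ins for the paper's fitted models, and the example links are hypothetical:

```python
# Sketch of link-based route fuel comparison. The rate table is a made-up
# stand-in for the paper's fitted models (stratified by speed, grade, class).
def link_fuel_rate(speed_kmh, grade, is_local, powertrain):
    """Fuel rate in liters/km; placeholder coefficients, not fitted values."""
    base = 0.085 if powertrain == "CV" else 0.055   # HEVs assumed more efficient
    speed_penalty = 0.0008 * abs(speed_kmh - 60)    # off-optimum speed penalty
    grade_penalty = max(0.0, 0.5 * grade)           # uphill grade penalty
    local_penalty = 0.01 if is_local else 0.0       # stop-and-go on local roads
    return base + speed_penalty + grade_penalty + local_penalty

def route_fuel(links, powertrain):
    # Each link: (length_km, speed_kmh, grade, is_local)
    return sum(L * link_fuel_rate(v, g, loc, powertrain) for L, v, g, loc in links)

route_a = [(5.0, 50, 0.01, True), (12.0, 100, 0.00, False)]
route_b = [(4.0, 30, 0.02, True), (15.0, 90, -0.01, False)]
for pt in ("CV", "HEV"):
    print(pt, f"route A: {route_fuel(route_a, pt):.2f} L,",
          f"route B: {route_fuel(route_b, pt):.2f} L")
```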

  4. Large scale biomimetic membrane arrays

    DEFF Research Database (Denmark)

    Hansen, Jesper Søndergaard; Perry, Mark; Vogel, Jörg

    2009-01-01

    To establish planar biomimetic membranes across large-scale partition aperture arrays, we created a disposable single-use horizontal chamber design that supports combined optical-electrical measurements. Functional lipid bilayers could easily and efficiently be established across CO2 laser micro-structured 8 x 8 aperture partition arrays with average aperture diameters of 301 +/- 5 mu m. We addressed the electro-physical properties of the lipid bilayers established across the micro-structured scaffold arrays by controllable reconstitution of biotechnologically and physiologically relevant membrane peptides and proteins. Next, we tested the scalability of the biomimetic membrane design by establishing lipid bilayers in rectangular 24 x 24 and hexagonal 24 x 27 aperture arrays, respectively. The results presented show that the design is suitable for further development of sensitive biosensor assays.

  5. Large scale nuclear structure studies

    International Nuclear Information System (INIS)

    Faessler, A.

    1985-01-01

    Results of large-scale nuclear structure studies are reported. The starting point is the Hartree-Fock-Bogoliubov solution with angular momentum and proton and neutron number projection after variation. This model for number- and spin-projected two-quasiparticle excitations with realistic forces yields, in sd-shell nuclei, results as good as the 'exact' shell-model calculations. Here the authors present results for the pf-shell nucleus 46Ti and for the A=130 mass region, where they studied 58 different nuclei with the same single-particle energies and the same effective force derived from a meson exchange potential. They carried out a Hartree-Fock-Bogoliubov variation after mean-field projection in realistic model spaces. In this way, they determine for each yrast state the optimal mean Hartree-Fock-Bogoliubov field. They apply this method to 130Ce and 128Ba using the same effective nucleon-nucleon interaction. (Auth.)

  6. Large-scale river regulation

    International Nuclear Information System (INIS)

    Petts, G.

    1994-01-01

    Recent concern over human impacts on the environment has tended to focus on climatic change, desertification, destruction of tropical rain forests, and pollution. Yet large-scale water projects such as dams, reservoirs, and inter-basin transfers are among the most dramatic and extensive ways in which our environment has been, and continues to be, transformed by human action. Water running to the sea is perceived as a lost resource, floods are viewed as major hazards, and wetlands are seen as wastelands. River regulation, involving the redistribution of water in time and space, is a key concept in socio-economic development. To achieve water and food security, to develop drylands, and to prevent desertification and drought are primary aims for many countries. A second key concept is ecological sustainability. Yet the ecology of rivers and their floodplains is dependent on the natural hydrological regime, and its related biochemical and geomorphological dynamics. (Author)

  7. Multiresolution comparison of precipitation datasets for large-scale models

    Science.gov (United States)

    Chun, K. P.; Sapriza Azuri, G.; Davison, B.; DeBeer, C. M.; Wheater, H. S.

    2014-12-01

    Gridded precipitation datasets are crucial for driving large-scale models used in weather forecasting and climate research. However, the quality of precipitation products is usually validated individually. Comparing gridded precipitation products against ground observations provides another avenue for investigating how precipitation uncertainty affects the performance of large-scale models. In this study, using data from a set of precipitation gauges over British Columbia and Alberta, we evaluate several widely used North American gridded products, including the Canadian Gridded Precipitation Anomalies (CANGRD), the National Center for Environmental Prediction (NCEP) reanalysis, the Water and Global Change (WATCH) project, the thin-plate spline smoothing algorithm (ANUSPLIN), and the Canadian Precipitation Analysis (CaPA). Based on verification criteria at various temporal and spatial scales, the results provide an assessment of possible applications for the various precipitation datasets. For long-term climate variation studies (~100 years), CANGRD, NCEP, WATCH, and ANUSPLIN have different comparative advantages in terms of resolution and accuracy. For synoptic and mesoscale precipitation patterns, CaPA offers appealing spatial coherence. In addition to the product comparison, various downscaling methods are also surveyed to explore new verification and bias-reduction methods for improving gridded precipitation outputs for large-scale models.
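
    Gauge-based verification of a gridded product typically reduces to pairing each gauge with the co-located grid value and computing bias and error statistics. A minimal sketch of those point-to-grid metrics; the arrays are synthetic, not the study's data:

```python
# Sketch of point-to-grid verification statistics for a precipitation
# product against gauges. Arrays are synthetic, not the study's data.
import numpy as np

gauge   = np.array([2.0, 0.0, 5.5, 12.1, 0.3, 7.8])   # observed (mm/day)
product = np.array([1.6, 0.2, 6.1, 10.0, 0.0, 9.0])   # co-located grid values

bias = np.mean(product - gauge)
rmse = np.sqrt(np.mean((product - gauge) ** 2))
corr = np.corrcoef(product, gauge)[0, 1]
print(f"bias = {bias:+.2f} mm/day, RMSE = {rmse:.2f} mm/day, r = {corr:.2f}")
```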

  8. Economically viable large-scale hydrogen liquefaction

    Science.gov (United States)

    Cardella, U.; Decker, L.; Klein, H.

    2017-02-01

    The liquid hydrogen demand, particularly driven by clean energy applications, will rise in the near future. As industrial large scale liquefiers will play a major role within the hydrogen supply chain, production capacity will have to increase by a multiple of today’s typical sizes. The main goal is to reduce the total cost of ownership for these plants by increasing energy efficiency with innovative and simple process designs, optimized in capital expenditure. New concepts must ensure a manageable plant complexity and flexible operability. In the phase of process development and selection, a dimensioning of key equipment for large scale liquefiers, such as turbines and compressors as well as heat exchangers, must be performed iteratively to ensure technological feasibility and maturity. Further critical aspects related to hydrogen liquefaction, e.g. fluid properties, ortho-para hydrogen conversion, and coldbox configuration, must be analysed in detail. This paper provides an overview on the approach, challenges and preliminary results in the development of efficient as well as economically viable concepts for large-scale hydrogen liquefaction.

  9. Reviving large-scale projects

    International Nuclear Information System (INIS)

    Desiront, A.

    2003-01-01

    For the past decade, most large-scale hydro development projects in northern Quebec have been put on hold due to land disputes with First Nations. Hydroelectric projects have recently been revived following an agreement signed with Aboriginal communities in the province, who recognized the need to find new sources of revenue for future generations. Many Cree are working on the project to harness the waters of the Eastmain River, located in the middle of their territory. The work involves building an 890-foot-long dam, 30 dikes enclosing a 603 square-km reservoir, a spillway, and a power house with 3 generating units with a total capacity of 480 MW of power, for start-up in 2007. The project will require the use of 2,400 workers in total. The Cree Construction and Development Company is working on relations between Quebec's 14,000 Crees and the James Bay Energy Corporation, the subsidiary of Hydro-Quebec which is developing the project. Approximately 10 per cent of the $735-million project has been designated for the environmental component. Inspectors ensure that the project complies fully with environmental protection guidelines. Total development costs for Eastmain-1 are in the order of $2 billion, of which $735 million will cover work on site and the remainder will cover generating units, transportation, and financial charges. Under the treaty known as the Peace of the Braves, signed in February 2002, the Quebec government and Hydro-Quebec will pay the Cree $70 million annually for 50 years for the right to exploit hydro, mining and forest resources within their territory. The project comes at a time when electricity export volumes to the New England states are down due to growth in Quebec's domestic demand. Hydropower is a renewable and non-polluting source of energy that is one of the most acceptable forms of energy where the Kyoto Protocol is concerned. It was emphasized that large-scale hydro-electric projects are needed to provide sufficient energy to meet both

  10. Experiments performed on a man-made crack in the flat low-permeability basement as a basis for large-scale technical extraction of terrestrial heat

    Energy Technology Data Exchange (ETDEWEB)

    Kappelmeyer, O.; Jung, R.; Rummel, F.

    1984-01-01

    Research work is being performed at an in-situ experimental field site in the crystalline subsoil near Falkenberg in East Bavaria, intended to help develop new technologies for exploiting geothermal energy. The aim is to make terrestrial heat available for technical utilization even in relatively normal geological settings, i.e., far away from volcanoes and outside layers carrying water or steam. To achieve this objective, artificial heat exchange systems were produced by hydraulic fracturing of crystalline rock at a depth of 250 m. The geometric positions of these cracks were located by means of seismic and geo-electric methods. Seismic observations allowed a crack model to be derived, which helped with penetrating the man-made crack by sectional drilling. The circulation system, consisting of the production borehole, the crack system, and the intersecting borehole, was studied for its hydraulic parameters (e.g., flow resistance) and thermal efficiency at various pressure levels in the crack. Crack width was measured at different pressure stages for the first time. Thermal model calculations allow the results gained from the shallow, relatively cool basement to be transferred to basement areas of elevated temperature. A number of rock parameters relevant for assessing whether the subsoil is suitable for creating artificial heat exchange systems were examined on-site and at bench scale.

  11. Incorporation of Spatial Interactions in Location Networks to Identify Critical Geo-Referenced Routes for Assessing Disease Control Measures on a Large-Scale Campus

    Directory of Open Access Journals (Sweden)

    Tzai-Hung Wen

    2015-04-01

    Respiratory diseases mainly spread through interpersonal contact. Class suspension is the most direct strategy for preventing the spread of disease through elementary or secondary schools, as it blocks the contact network. However, university students usually attend courses in different buildings, so the daily contact patterns on a university campus are complicated, and once disease clusters have occurred, suspending classes is far from an efficient strategy for controlling disease spread. The purpose of this study is to propose a methodological framework for generating campus location networks from a routine administrative database, analyzing the community structure of the network, and identifying the critical links and nodes for blocking respiratory disease transmission. The data come from the student enrollment records of a major comprehensive university in Taiwan. We combined social network analysis and a spatial interaction model to establish a geo-referenced community structure among the classroom buildings. We also identified the critical links among the communities that act as contact bridges and explored the changes in the location network after the sequential removal of the high-risk buildings. Instead of conducting a questionnaire survey, the study established a standard procedure for constructing a location network on a large-scale campus from a routine curriculum database. We also show how a campus location network structure can be used to target the high-risk buildings that bridge communities, in order to block disease transmission.

  12. Impacts of Changing Climatic Drivers and Land use features on Future Stormwater Runoff in the Northwest Florida Basin: A Large-Scale Hydrologic Modeling Assessment

    Science.gov (United States)

    Khan, M.; Abdul-Aziz, O. I.

    2017-12-01

    Potential changes in climatic drivers and land cover features can significantly influence the stormwater budget of the Northwest Florida Basin. We investigated the hydro-climatic and land use sensitivities of stormwater runoff by developing a large-scale, process-based rainfall-runoff model for the basin using the EPA Storm Water Management Model (SWMM 5.1). Climatic and hydrologic variables, as well as land use/cover features, were incorporated into the model to account for the key processes of coastal hydrology and its dynamic interactions with groundwater and sea levels. We calibrated and validated the model against historical daily streamflow observations during 2009-2012 at four major rivers in the basin. Downscaled climatic drivers (precipitation, temperature, solar radiation) projected by twenty GCM-RCM combinations under CMIP5, along with projected future land use/cover features, were also incorporated into the model. The basin storm runoff was then simulated for the historical period (2000s = 1976-2005) and two future periods (2050s = 2030-2059 and 2080s = 2070-2099). Comparative evaluation of the historical and future scenarios leads to important guidelines for stormwater management in Northwest Florida and similar regions under a changing climate and environment.

  13. Large Scale Glazed Concrete Panels

    DEFF Research Database (Denmark)

    Bache, Anja Margrethe

    2010-01-01

    Today, there is a lot of focus on the aesthetic potential of concrete surfaces, both globally and locally. World-famous architects such as Herzog & de Meuron, Zaha Hadid, Richard Meier, and David Chipperfield challenge the exposure of concrete in their architecture. At home, this trend can be seen in the crinkly façade of DR-Byen (the domicile of the Danish Broadcasting Company) by architect Jean Nouvel, and in the black, curved, smooth concrete surfaces of Zaha Hadid's Ordrupgård. Furthermore, one can point to initiatives such as "Synlig beton" (visible concrete), presented on the website www.synligbeton.dk, and Spæncom's aesthetic relief effects by the designer Line Kramhøft (www.spaencom.com). It is my hope that the research-development project "Lasting large scale glazed concrete formwork," which I am working on at DTU's Department of Architectural Engineering, will be able to complement these. It is a project where I

  14. Large scale cluster computing workshop

    International Nuclear Information System (INIS)

    Dane Skow; Alan Silverman

    2002-01-01

    Recent revolutions in computer hardware and software technologies have paved the way for the large-scale deployment of clusters of commodity computers to address problems heretofore the domain of tightly coupled SMP processors. Near-term projects within High Energy Physics and other computing communities will deploy clusters of thousands of processors, to be used by hundreds to thousands of independent users. This will expand the reach in both dimensions by an order of magnitude over the current successful production facilities. The goals of this workshop were: (1) to determine which tools exist that can scale up to the cluster sizes foreseen for the next generation of HENP experiments (several thousand nodes) and, by implication, to identify areas where some investment of money or effort is likely to be needed; (2) to compare and record experiences gained with such tools; (3) to produce a practical guide to all stages of planning, installing, building, and operating a large computing cluster in HENP; (4) to identify and connect groups with similar interests within HENP and the larger clustering community.

  15. Large scale cross hole testing

    International Nuclear Information System (INIS)

    Ball, J.K.; Black, J.H.; Doe, T.

    1991-05-01

    As part of the Site Characterisation and Validation programme, the results of the large-scale cross hole testing have been used to document hydraulic connections across the SCV block, to test conceptual models of fracture zones, and to obtain hydrogeological properties of the major hydrogeological features. The SCV block is highly heterogeneous, and this heterogeneity is not smoothed out even over scales of hundreds of metres. Results of the interpretation validate the hypothesis of the major fracture zones A, B and H; not much evidence of minor fracture zones is found. The uncertainty in the flow path through the fractured rock causes severe problems in interpretation. Derived values of hydraulic conductivity were found to lie in a narrow range of two to three orders of magnitude. The test design did not allow fracture zones to be tested individually; this could be improved by specifically testing the high hydraulic conductivity regions. The Piezomac and single hole equipment worked well. Few, if any, of the tests ran long enough to approach equilibrium. Many observation boreholes showed no response. This could be either because there is no hydraulic connection, or because there is a connection but a response is not seen within the time scale of the pumping test. The fractional dimension analysis yielded credible results, and the sinusoidal testing procedure provided an effective means of identifying the dominant hydraulic connections. (10 refs.) (au)

  16. Large-scale pool fires

    Directory of Open Access Journals (Sweden)

    Steinhaus Thomas

    2007-01-01

    A review of research into the burning behavior of large pool fires and fuel spill fires is presented. The features which distinguish such fires from smaller pool fires are mainly associated with the fire dynamics at low source Froude numbers and the radiative interaction with the fire source. In hydrocarbon fires, higher soot levels at increased diameters result in radiation blockage effects around the perimeter of large fire plumes; this yields lower emissive powers and a drastic reduction in the radiative loss fraction. Whilst there are simplifying factors with these phenomena, arising from the fact that soot yield can saturate, there are other complications deriving from the intermittency of the behavior, with luminous regions of efficient combustion appearing randomly on the outer surface of the fire according to the turbulent fluctuations in the fire plume. Knowledge of the fluid flow instabilities which lead to the formation of large eddies is also key to understanding the behavior of large-scale fires. Here, modeling tools can be effectively exploited to investigate the fluid flow phenomena, including RANS- and LES-based computational fluid dynamics codes. The latter are well suited to representation of the turbulent motions, but a number of challenges remain in their practical application. Massively parallel computational resources are likely to be necessary in order to adequately address the complex coupled phenomena to the level of detail that is necessary.

  17. The solar noise barrier project: 1. Effect of incident light orientation on the performance of a large-scale luminescent solar concentrator noise barrier

    NARCIS (Netherlands)

    Kanellis, M.; de Jong, M.; Slooff, L.H.; Debije, M.G.

    2017-01-01

    In this work we describe the relative performance of the largest luminescent solar concentrator (LSC) constructed to date. Comparisons are made of the performance of North/South- and East/West-facing panels during a sunny day. It is shown that the East/West panels display much more varied performance

  18. Optical interconnect for large-scale systems

    Science.gov (United States)

    Dress, William

    2013-02-01

    This paper presents a switchless, optical interconnect module that serves as a node in a network of identical distribution modules for large-scale systems. Thousands to millions of hosts or endpoints may be interconnected by a network of such modules, avoiding the need for multi-level switches. Several common network topologies are reviewed and their scaling properties assessed. The concept of message-flow routing is discussed in conjunction with the unique properties enabled by the optical distribution module where it is shown how top-down software control (global routing tables, spanning-tree algorithms) may be avoided.

  19. Assessing outcomes of large-scale public health interventions in the absence of baseline data using a mixture of Cox and binomial regressions

    Science.gov (United States)

    2014-01-01

    Background Large-scale public health interventions with rapid scale-up are increasingly being implemented worldwide. Such implementation allows for a large target population to be reached in a short period of time. But when the time comes to investigate the effectiveness of these interventions, the rapid scale-up creates several methodological challenges, such as the lack of baseline data and the absence of control groups. One example of such an intervention is Avahan, the India HIV/AIDS initiative of the Bill & Melinda Gates Foundation. One question of interest is the effect of Avahan on condom use by female sex workers with their clients. By retrospectively reconstructing condom use and sex work history from survey data, it is possible to estimate how condom use rates evolve over time. However formal inference about how this rate changes at a given point in calendar time remains challenging. Methods We propose a new statistical procedure based on a mixture of binomial regression and Cox regression. We compare this new method to an existing approach based on generalized estimating equations through simulations and application to Indian data. Results Both methods are unbiased, but the proposed method is more powerful than the existing method, especially when initial condom use is high. When applied to the Indian data, the new method mostly agrees with the existing method, but seems to have corrected some implausible results of the latter in a few districts. We also show how the new method can be used to analyze the data of all districts combined. Conclusions The use of both methods can be recommended for exploratory data analysis. However for formal statistical inference, the new method has better power. PMID:24397563
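
    The proposed mixture combines two standard building blocks; the mixture estimator itself is the paper's contribution and is not reproduced here. As a hedged illustration only, the two components can be fit separately on synthetic data using lifelines (Cox regression) and statsmodels (binomial regression); none of this is the authors' code, and the covariate and parameters are invented:

```python
# The two building blocks of the proposed mixture, fit separately on
# synthetic data. This is NOT the authors' mixture estimator, only an
# illustration of its components (lifelines for Cox, statsmodels for binomial).
import numpy as np
import pandas as pd
import statsmodels.api as sm
from lifelines import CoxPHFitter

rng = np.random.default_rng(0)
n = 500
x = rng.binomial(1, 0.5, n)                   # e.g., exposure to the program

# Cox component: time until adopting consistent condom use
t = rng.exponential(1.0 / (0.5 + 0.5 * x))
event = rng.binomial(1, 0.8, n)
CoxPHFitter().fit(pd.DataFrame({"T": t, "E": event, "x": x}),
                  duration_col="T", event_col="E").print_summary()

# Binomial component: probability of condom use at a given point in time
y = rng.binomial(1, 1 / (1 + np.exp(-(-0.5 + 1.0 * x))))
print(sm.GLM(y, sm.add_constant(x), family=sm.families.Binomial()).fit().summary())
```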

  20. Large-scale galaxy bias

    Science.gov (United States)

    Desjacques, Vincent; Jeong, Donghui; Schmidt, Fabian

    2018-02-01

    This review presents a comprehensive overview of galaxy bias, that is, the statistical relation between the distribution of galaxies and matter. We focus on large scales where cosmic density fields are quasi-linear. On these scales, the clustering of galaxies can be described by a perturbative bias expansion, and the complicated physics of galaxy formation is absorbed by a finite set of coefficients of the expansion, called bias parameters. The review begins with a detailed derivation of this very important result, which forms the basis of the rigorous perturbative description of galaxy clustering, under the assumptions of General Relativity and Gaussian, adiabatic initial conditions. Key components of the bias expansion are all leading local gravitational observables, which include the matter density but also tidal fields and their time derivatives. We hence expand the definition of local bias to encompass all these contributions. This derivation is followed by a presentation of the peak-background split in its general form, which elucidates the physical meaning of the bias parameters, and a detailed description of the connection between bias parameters and galaxy statistics. We then review the excursion-set formalism and peak theory which provide predictions for the values of the bias parameters. In the remainder of the review, we consider the generalizations of galaxy bias required in the presence of various types of cosmological physics that go beyond pressureless matter with adiabatic, Gaussian initial conditions: primordial non-Gaussianity, massive neutrinos, baryon-CDM isocurvature perturbations, dark energy, and modified gravity. Finally, we discuss how the description of galaxy bias in the galaxies' rest frame is related to clustering statistics measured from the observed angular positions and redshifts in actual galaxy catalogs.

  1. Large-scale galaxy bias

    Science.gov (United States)

    Jeong, Donghui; Desjacques, Vincent; Schmidt, Fabian

    2018-01-01

    Here, we briefly introduce the key results of the recent review (arXiv:1611.09787), whose abstract is as follows. This review presents a comprehensive overview of galaxy bias, that is, the statistical relation between the distribution of galaxies and matter. We focus on large scales where cosmic density fields are quasi-linear. On these scales, the clustering of galaxies can be described by a perturbative bias expansion, and the complicated physics of galaxy formation is absorbed by a finite set of coefficients of the expansion, called bias parameters. The review begins with a detailed derivation of this very important result, which forms the basis of the rigorous perturbative description of galaxy clustering, under the assumptions of General Relativity and Gaussian, adiabatic initial conditions. Key components of the bias expansion are all leading local gravitational observables, which include the matter density but also tidal fields and their time derivatives. We hence expand the definition of local bias to encompass all these contributions. This derivation is followed by a presentation of the peak-background split in its general form, which elucidates the physical meaning of the bias parameters, and a detailed description of the connection between bias parameters and galaxy (or halo) statistics. We then review the excursion set formalism and peak theory which provide predictions for the values of the bias parameters. In the remainder of the review, we consider the generalizations of galaxy bias required in the presence of various types of cosmological physics that go beyond pressureless matter with adiabatic, Gaussian initial conditions: primordial non-Gaussianity, massive neutrinos, baryon-CDM isocurvature perturbations, dark energy, and modified gravity. Finally, we discuss how the description of galaxy bias in the galaxies' rest frame is related to clustering statistics measured from the observed angular positions and redshifts in actual galaxy catalogs.

  2. Concepts for Large Scale Hydrogen Production

    OpenAIRE

    Jakobsen, Daniel; Åtland, Vegar

    2016-01-01

    The objective of this thesis is to perform a techno-economic analysis of large-scale, carbon-lean hydrogen production in Norway, in order to evaluate various production methods and estimate a breakeven price level. Norway possesses vast energy resources and the export of oil and gas is vital to the country's economy. The results of this thesis indicate that hydrogen represents a viable, carbon-lean opportunity to utilize these resources, which can prove key in the future of Norwegian energy e...

  3. The solar noise barrier project : 2. The effect of street art on performance of a large scale luminescent solar concentrator prototype

    NARCIS (Netherlands)

    Debije, M.G.; Tzikas, C.; Rajkumar, V.A.; de Jong, M.

    2017-01-01

    Noise barriers have been used worldwide to reduce the impact of sound generated by traffic on nearby areas. A common feature appearing on these noise barriers is all manner of graffiti and street art. In this work we describe the relative performance of a large-area luminescent solar concentrator

  4. Computational Typologies of Multidimensional End-of-Primary-School Performance Profiles from an Educational Perspective of Large-Scale TIMSS and PIRLS Surveys

    Science.gov (United States)

    Unlu, Ali; Schurig, Michael

    2015-01-01

    Recently, performance profiles in reading, mathematics and science were created using the data collectively available in the Trends in International Mathematics and Science Study (TIMSS) and the Progress in International Reading Literacy Study (PIRLS) 2011. In addition, a classification of children to the end of their primary school years was…

  5. Foster Wheeler's Solutions for Large Scale CFB Boiler Technology: Features and Operational Performance of Łagisza 460 MWe CFB Boiler

    Science.gov (United States)

    Hotta, Arto

    During recent years, once-through supercritical (OTSC) CFB technology has been developed, enabling CFB technology to proceed to medium-scale (500 MWe) utility projects such as the Łagisza Power Plant in Poland, owned by Poludniowy Koncern Energetyczny SA (PKE), with a net efficiency of nearly 44%. The Łagisza power plant is currently being commissioned and reached full-load operation in March 2009. The initial operation shows very good performance and confirms that the CFB process has no problems with scaling up to this size. The once-through steam cycle utilizing Siemens' vertical-tube Benson technology has also performed as predicted in the CFB process. Foster Wheeler has developed the CFB design further, up to 800 MWe with a net efficiency of ≥45%.

  6. A Third-Order Item Response Theory Model for Modeling the Effects of Domains and Subdomains in Large-Scale Educational Assessment Surveys

    Science.gov (United States)

    Rijmen, Frank; Jeon, Minjeong; von Davier, Matthias; Rabe-Hesketh, Sophia

    2014-01-01

    Second-order item response theory models have been used for assessments consisting of several domains, such as content areas. We extend the second-order model to a third-order model for assessments that include subdomains nested in domains. Using a graphical model framework, it is shown how the model does not suffer from the curse of…
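
    As a generic sketch of the nesting described above (an illustrative parameterization, not necessarily the authors' exact model): in a third-order model the latent variables form a chain from a general ability through domains to subdomains, with items loading on the subdomain factors.

```latex
% Generic third-order structure (illustrative parameterization only):
% item j loads on subdomain s, subdomain s is nested in domain d(s).
\begin{align*}
  \theta_{d} &= \gamma_{d}\,\theta_{\text{general}} + \zeta_{d}, \\
  \theta_{s} &= \lambda_{s}\,\theta_{d(s)} + \zeta_{s}, \\
  \Pr(Y_{js} = 1 \mid \theta_{s}) &= \operatorname{logit}^{-1}\!\bigl(a_{j}\theta_{s} - b_{j}\bigr).
\end{align*}
```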

  7. SMILE: experimental results of the WP4 PTS large scale test performed on a component in terms of cracked cylinder involving warm pre-stress

    International Nuclear Information System (INIS)

    Kerkhof, K.; Bezdikian, G.; Moinereau, D.; Dahl, A.; Wadier, Y.; Gilles, P.; Keim, E.; Chapuliot, S.; Taylor, N.; Lidbury, D.; Sharples, J.; Budden, P.; Siegele, D.; Nagel, G.; Bass, R.; Emond, D.

    2005-01-01

    The reactor pressure vessel (RPV) is an essential component, which is liable to limit the lifetime of PWR plants. Assessments of defects in RPVs subjected to pressurized thermal shock (PTS) transients made at the European level generally do not consider the beneficial effect of the load history (warm pre-stress, WPS). The SMILE project - Structural Margin Improvements in aged embrittled RPV with Load history Effects - aims to provide sufficient elements to demonstrate, model, and validate the beneficial WPS effect. It also aims to harmonize the different approaches in the national codes and standards regarding the inclusion of the WPS effect in RPV structural integrity assessments. The project includes significant experimental work on WPS-type experiments with C(T) specimens and a PTS-type transient experiment on a large component. This paper deals with the results of the PTS-type transient experiment on a component-like specimen subjected to WPS loading, the so-called validation test, carried out within the framework of work package WP4. The test specimen is a thick-walled cylinder with a wall thickness of 40 mm and an outer diameter of 160 mm, provided with an internal, fully circumferential crack with a depth of about 15 mm. The specified load path is of the Load-Cool-Unload-Fracture (LCUF) type. No crack initiation occurred during cooling (thermal shock loading), although the loading path crossed the fracture toughness curve in the transition region. The benefit of the WPS effect was shown clearly by the final re-loading up to fracture in the lower-shelf region: the corresponding fracture load during reloading was significantly higher than the crack initiation values of the original material in the lower-shelf region. The post-test fractographic evaluation showed that the fracture mode was predominantly cleavage fracture, with some secondary cracks emanating from the major crack. (authors)

  8. Direct large-scale synthesis of 3D hierarchical mesoporous NiO microspheres as high-performance anode materials for lithium ion batteries.

    Science.gov (United States)

    Bai, Zhongchao; Ju, Zhicheng; Guo, Chunli; Qian, Yitai; Tang, Bin; Xiong, Shenglin

    2014-03-21

    Hierarchically porous materials are an ideal platform for constructing high-performance Li-ion batteries (LIBs), offering advantages such as a large contact area between the electrode and the electrolyte, fast and flexible transport pathways for the electrolyte ions, and space for buffering the strain caused by repeated Li insertion/extraction. In this work, NiO microspheres with hierarchically porous structures have been synthesized via a facile thermal decomposition method using only a simple precursor. The superstructures are composed of nanocrystals with high specific surface area, large pore volume, and broad pore size distribution. The electrochemical properties of the 3D hierarchical mesoporous NiO microspheres were examined by cyclic voltammetry and galvanostatic charge-discharge studies. The results demonstrate that the as-prepared NiO microspheres are excellent electrode materials for LIBs, with high specific capacity and good retention and rate performance. The 3D hierarchical mesoporous NiO microspheres retain a reversible capacity of 800.2 mA h g⁻¹ after 100 cycles at a high current density of 500 mA g⁻¹.

  9. Description of the person-environment interaction: methodological issues and empirical results of an Italian large-scale disability assessment study using an ICF-based protocol.

    Science.gov (United States)

    Francescutti, Carlo; Gongolo, Francesco; Simoncello, Andrea; Frattura, Lucilla

    2011-05-31

    There is a connection between the definition of disability in a person-environment framework, the development of appropriate assessment strategies and instruments, and the logic underpinning the organization of benefits and services to confront disability. The Italian Ministry of Health and Ministry of Labor and Social Policies supported a three-year project for the definition of a common framework and a standardised protocol for disability evaluation based on the ICF. The research agenda of the project identified six phases: 1) adoption of a definition of disability; 2) analytical breakdown of the contents of the disability definition, so as to indicate as clearly as possible the core information essential to guide the evaluation process; 3) definition of a data collection protocol; 4) national implementation of the protocol and collection of approximately 1,000 profiles; 5) proposal of a profile analysis and definition of groups of cases with similar functioning profiles; 6) trial of the proposal with the collected data. The data were analyzed in different ways: descriptive analysis, application of the person-environment interactions classification tree, and cluster analysis. A sample of 1,051 persons from 8 Italian regions was collected, representing different functioning conditions in all phases of the life cycle. The aggregate result of the person-environment interactions was summarized. The majority of activities presented no problems in any of the A&P chapters. Nearly 50,000 facilitator codes were recorded. The most frequent facilitators were family members, health and social professionals, assistive devices, and health and social systems, services, and policies. The focus of the person-environment interaction evaluation was on the A&P domains, differentiating those in which performance presented limitations and restrictions from those in which performance had no or light limitations and restrictions. Communication (d3) and Learning and Applying Knowledge

  10. Performance Analysis and Scaling Behavior of the Terrestrial Systems Modeling Platform TerrSysMP in Large-Scale Supercomputing Environments

    Science.gov (United States)

    Kollet, S. J.; Goergen, K.; Gasper, F.; Shresta, P.; Sulis, M.; Rihani, J.; Simmer, C.; Vereecken, H.

    2013-12-01

    In studies of the terrestrial hydrologic, energy, and biogeochemical cycles, integrated multi-physics simulation platforms take a central role in characterizing non-linear interactions, variances, and uncertainties of system states and fluxes in reciprocity with observations. Recently developed integrated simulation platforms attempt to honor the complexity of the terrestrial system across multiple time and space scales, from the deeper subsurface, including groundwater dynamics, into the atmosphere. Technically, this requires the coupling of atmospheric, land surface, and subsurface-surface flow models in supercomputing environments, while ensuring a high degree of efficiency in the utilization of, e.g., standard Linux clusters and massively parallel resources. A systematic performance analysis, including profiling and tracing, is crucial in such an application for understanding the runtime behavior, identifying optimum model settings, and detecting potential parallel deficiencies. On sophisticated leadership-class supercomputers, such as the 28-rack 5.9 petaFLOP IBM Blue Gene/Q 'JUQUEEN' of the Jülich Supercomputing Centre (JSC), this is a challenging task, but it is all the more important when complex coupled component models are to be analysed. Here we present our experience with coupling, application tuning (e.g., a 5-times speedup through compiler optimizations), parallel scaling, and performance monitoring of the parallel Terrestrial Systems Modeling Platform TerrSysMP. The modeling platform consists of the weather prediction system COSMO of the German Weather Service, the Community Land Model (CLM) of NCAR, and the variably saturated surface-subsurface flow code ParFlow. The model system relies on the Multiple Program Multiple Data (MPMD) execution model, in which the external Ocean-Atmosphere-Sea-Ice-Soil coupler (OASIS3) links the component models. TerrSysMP has been instrumented with the performance analysis tool Scalasca and analyzed
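
    Scaling analyses of this kind ultimately reduce to speedup and parallel efficiency computed from measured runtimes at increasing core counts. A minimal sketch of that strong-scaling calculation; the timings are placeholders, not TerrSysMP/JUQUEEN measurements:

```python
# Sketch of strong-scaling speedup and parallel efficiency from runtimes.
# The timings are placeholders, not TerrSysMP/JUQUEEN measurements.
runs = [(64, 1000.0), (128, 520.0), (256, 280.0), (512, 160.0)]  # (cores, seconds)
base_cores, base_time = runs[0]

for cores, seconds in runs:
    speedup = base_time / seconds
    efficiency = speedup / (cores / base_cores)
    print(f"{cores:4d} cores: speedup {speedup:5.2f}, efficiency {efficiency:6.1%}")
```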

  11. Comparison Between Overtopping Discharge in Small and Large Scale Models

    DEFF Research Database (Denmark)

    Helgason, Einar; Burcharth, Hans F.

    2006-01-01

    The present paper presents overtopping measurements from small-scale model tests performed at the Hydraulics & Coastal Engineering Laboratory, Aalborg University, Denmark, and large-scale model tests performed at the Large Wave Channel, Hannover, Germany. Comparison between results obtained from small- and large-scale model tests shows no clear evidence of scale effects for overtopping above a threshold value. In the large-scale model, no overtopping was measured for wave heights below Hs = 0.5 m, as the water sank into the voids between the stones on the crest. For low overtopping scale effects
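
    To compare small- and large-scale overtopping results, measurements are commonly brought to a common scale using Froude similitude, under which mean discharge per metre of crest scales with the geometric scale ratio to the power 3/2. A minimal sketch of that conversion; the numbers are illustrative, not the paper's measurements:

```python
# Sketch of Froude scaling of mean overtopping discharge q (m^3/s per m
# of crest): q_prototype = q_model * lambda**1.5, where lambda is the
# geometric scale ratio. Numbers are illustrative, not the paper's data.
def froude_scale_discharge(q_model, length_scale_ratio):
    return q_model * length_scale_ratio ** 1.5

q_model = 2.0e-5   # discharge measured in a hypothetical 1:30 model test
lam = 30.0         # prototype length / model length
print(f"prototype q = {froude_scale_discharge(q_model, lam):.3e} m^3/s/m")
```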

  12. Technology for Large-Scale Translation of Clinical Practice Guidelines: A Pilot Study of the Performance of a Hybrid Human and Computer-Assisted Approach.

    Science.gov (United States)

    Van de Velde, Stijn; Macken, Lieve; Vanneste, Koen; Goossens, Martine; Vanschoenbeek, Jan; Aertgeerts, Bert; Vanopstal, Klaar; Vander Stichele, Robert; Buysschaert, Joost

    2015-10-09

    The construction of EBMPracticeNet, a national electronic point-of-care information platform in Belgium, began in 2011 to optimize quality of care by promoting evidence-based decision making. The project involved, among other tasks, the translation of 940 EBM Guidelines of Duodecim Medical Publications from English into Dutch and French. Considering the scale of the translation process, it was decided to make use of computer-aided translation performed by certificated translators with limited expertise in medical translation. Our consortium used a hybrid approach, involving a human translator supported by a translation memory (using SDL Trados Studio), terminology recognition (using SDL MultiTerm terminology databases) from medical terminology databases, and support from online machine translation. This resulted in a validated translation memory, which is now in use for the translation of new and updated guidelines. The objective of this experiment was to evaluate the performance of the hybrid human and computer-assisted approach in comparison with translation unsupported by translation memory and terminology recognition. A comparison was also made with the translation efficiency of an expert medical translator. We conducted a pilot study in which two sets of 30 new and 30 updated guidelines were randomized to one of three groups. Comparable guidelines were translated (1) by certificated junior translators without medical specialization using the hybrid method, (2) by an experienced medical translator without this support, and (3) by the same junior translators without the support of the validated translation memory. A medical proofreader, who was blinded to the translation procedure, evaluated the translated guidelines for acceptability and adequacy. Translation speed was measured by recording translation and post-editing time. The human translation edit rate was calculated as a metric to evaluate the quality of the translation. A further evaluation was made of
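
    The human translation edit rate mentioned above is, in essence, the minimum number of word-level edits needed to turn a draft translation into the post-edited reference, divided by the reference length. A minimal word-level sketch (a simplification of the full (H)TER definition, which additionally models block shifts):

```python
# Simplified word-level translation edit rate: Levenshtein edits between
# the draft translation and the post-edited reference, divided by the
# reference length. (Full (H)TER also models block shifts; omitted here.)
def edit_rate(draft, reference):
    a, b = draft.split(), reference.split()
    # Classic dynamic-programming edit distance over words
    prev = list(range(len(b) + 1))
    for i, wa in enumerate(a, 1):
        cur = [i]
        for j, wb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                 # deletion
                           cur[j - 1] + 1,              # insertion
                           prev[j - 1] + (wa != wb)))   # substitution
        prev = cur
    return prev[-1] / len(b)

print(edit_rate("neem dagelijks een tablet in",
                "neem elke dag een tablet in"))  # 2 edits / 6 words = 0.33
```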

  13. Internationalization Measures in Large Scale Research Projects

    Science.gov (United States)

    Soeding, Emanuel; Smith, Nancy

    2017-04-01

    Internationalization measures in Large Scale Research Projects Large scale research projects (LSRP) often serve as flagships used by universities or research institutions to demonstrate their performance and capability to stakeholders and other interested parties. As the global competition among universities for the recruitment of the brightest brains has increased, effective internationalization measures have become hot topics for universities and LSRP alike. Nevertheless, most projects and universities are challenged with little experience on how to conduct these measures and make internationalization an cost efficient and useful activity. Furthermore, those undertakings permanently have to be justified with the Project PIs as important, valuable tools to improve the capacity of the project and the research location. There are a variety of measures, suited to support universities in international recruitment. These include e.g. institutional partnerships, research marketing, a welcome culture, support for science mobility and an effective alumni strategy. These activities, although often conducted by different university entities, are interlocked and can be very powerful measures if interfaced in an effective way. On this poster we display a number of internationalization measures for various target groups, identify interfaces between project management, university administration, researchers and international partners to work together, exchange information and improve processes in order to be able to recruit, support and keep the brightest heads to your project.

  14. Large scale integration of photovoltaics in cities

    International Nuclear Information System (INIS)

    Strzalka, Aneta; Alam, Nazmul; Duminil, Eric; Coors, Volker; Eicker, Ursula

    2012-01-01

    Highlights: ► We implement photovoltaics on a large scale. ► We use three-dimensional modelling for accurate photovoltaic simulations. ► We consider the shadowing effect in the photovoltaic simulation. ► We validate the simulated results using detailed hourly measured data. - Abstract: For a large scale implementation of photovoltaics (PV) in the urban environment, building integration is a major issue. This includes installations on roof or facade surfaces with orientations that are not ideal for maximum energy production. To evaluate the performance of PV systems in urban settings and compare it with the building users’ electricity consumption, three-dimensional geometry modelling was combined with photovoltaic system simulations. As an example, the modern residential district of Scharnhauser Park (SHP) near Stuttgart, Germany was used to calculate the potential of photovoltaic energy and to evaluate the local own consumption of the energy produced. For most buildings of the district only annual electrical consumption data were available, and only selected buildings have electronic metering equipment. The available roof area for one of these multi-family case study buildings was used for a detailed hourly simulation of the PV power production, which was then compared to the hourly measured electricity consumption. The results were extrapolated to all buildings of the analyzed area by normalizing them to the annual consumption data. The PV systems can produce 35% of the quarter’s total electricity consumption, and half of this generated electricity is directly used within the buildings.
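
    The hourly comparison of PV production and consumption described above reduces to an element-wise minimum of the two series. The following sketch, with fabricated 24-hour profiles rather than the Scharnhauser Park data, shows how the own-consumption share and the load-coverage share can be computed.

```python
# Fabricated 24-hour profiles (kW); not Scharnhauser Park data.
import numpy as np

hours = np.arange(24)
pv = np.maximum(0.0, 40 * np.sin((hours - 6) / 12 * np.pi))  # daylight bell
load = 15 + 5 * np.sin((hours - 12) / 24 * 2 * np.pi)        # evening peak

direct_use = np.minimum(pv, load)              # PV power consumed on site
own_consumption = direct_use.sum() / pv.sum()  # share of PV used directly
coverage = direct_use.sum() / load.sum()       # share of load met by PV
print(f"own consumption: {own_consumption:.0%}, load coverage: {coverage:.0%}")
```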

  15. Large-scale Intelligent Transportation Systems simulation

    Energy Technology Data Exchange (ETDEWEB)

    Ewing, T.; Canfield, T.; Hannebutte, U.; Levine, D.; Tentner, A.

    1995-06-01

    A prototype computer system has been developed which defines a high-level architecture for a large-scale, comprehensive, scalable simulation of an Intelligent Transportation System (ITS) capable of running on massively parallel computers and distributed (networked) computer systems. The prototype includes the modelling of instrumented "smart" vehicles with in-vehicle navigation units capable of optimal route planning, and Traffic Management Centers (TMC). The TMC has probe vehicle tracking capabilities (displaying the position and attributes of instrumented vehicles) and can provide two-way interaction with traffic to provide advisories and link times. Both the in-vehicle navigation module and the TMC feature detailed graphical user interfaces to support human-factors studies. The prototype has been developed on a distributed system of networked UNIX computers but is designed to run on ANL's IBM SP-X parallel computer system for large scale problems. A novel feature of our design is that vehicles will be represented by autonomous computer processes, each with a behavior model which performs independent route selection and reacts to external traffic events much like real vehicles. With this approach, one will be able to take advantage of emerging massively parallel processor (MPP) systems.
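
    A minimal sketch of the "vehicles as autonomous processes" design follows, assuming a fabricated three-link network and a single advisory; the real prototype uses detailed behavior models and two-way TMC interaction rather than this toy broadcast.

```python
# Toy network: each vehicle is an autonomous OS process that plans its own
# route, dropping links flagged by a TMC-style advisory (all fabricated).
import multiprocessing as mp

CONGESTED = {"B"}  # hypothetical TMC advisory: link B is congested

def vehicle(vid: int, congested: set) -> None:
    # Independent route selection: keep only links not under advisory.
    route = [link for link in ("A", "B", "C") if link not in congested]
    print(f"vehicle {vid} selected route: {route}")

if __name__ == "__main__":
    procs = [mp.Process(target=vehicle, args=(i, CONGESTED)) for i in range(3)]
    for p in procs:
        p.start()
    for p in procs:
        p.join()
```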

  16. Psychometric Features of the General Aptitude Test-Verbal Part (GAT-V): A Large-Scale Assessment of High School Graduates in Saudi Arabia

    Science.gov (United States)

    Dimitrov, Dimiter M.; Shamrani, Abdul Rahman

    2015-01-01

    This study examines the psychometric features of a General Aptitude Test-Verbal Part, which is used with assessments of high school graduates in Saudi Arabia. The data supported a bifactor model, with one general factor and three content domains (Analogy, Sentence Completion, and Reading Comprehension) as latent aspects of verbal aptitude.

  17. Large-Scale Astrophysical Visualization on Smartphones

    Science.gov (United States)

    Becciani, U.; Massimino, P.; Costa, A.; Gheller, C.; Grillo, A.; Krokos, M.; Petta, C.

    2011-07-01

    Nowadays digital sky surveys and long-duration, high-resolution numerical simulations using high performance computing and grid systems produce multidimensional astrophysical datasets in the order of several Petabytes. Sharing visualizations of such datasets within communities and collaborating research groups is of paramount importance for disseminating results and advancing astrophysical research. Moreover educational and public outreach programs can benefit greatly from novel ways of presenting these datasets by promoting understanding of complex astrophysical processes, e.g., formation of stars and galaxies. We have previously developed VisIVO Server, a grid-enabled platform for high-performance large-scale astrophysical visualization. This article reviews the latest developments on VisIVO Web, a custom designed web portal wrapped around VisIVO Server, then introduces VisIVO Smartphone, a gateway connecting VisIVO Web and data repositories for mobile astrophysical visualization. We discuss current work and summarize future developments.

  18. Ethics of large-scale change

    OpenAIRE

    Arler, Finn

    2006-01-01

      The subject of this paper is long-term large-scale changes in human society. Some very significant examples of large-scale change are presented: human population growth, human appropriation of land and primary production, the human use of fossil fuels, and climate change. The question is posed, which kind of attitude is appropriate when dealing with large-scale changes like these from an ethical point of view. Three kinds of approaches are discussed: Aldo Leopold's mountain thinking, th...

  19. Exploiting Growing Stock Volume Maps for Large Scale Forest Resource Assessment: Cross-Comparisons of ASAR- and PALSAR-Based GSV Estimates with Forest Inventory in Central Siberia

    Directory of Open Access Journals (Sweden)

    Christian Hüttich

    2014-07-01

    Full Text Available Growing stock volume is an important biophysical parameter describing the state and dynamics of the Boreal zone. Validation of growing stock volume (GSV maps based on satellite remote sensing is challenging due to the lack of consistent ground reference data. The monitoring and assessment of the remote Russian forest resources of Siberia can only be done by integrating remote sensing techniques and interdisciplinary collaboration. In this paper, we assess the information content of GSV estimates in Central Siberian forests obtained at 25 m from ALOS-PALSAR and 1 km from ENVISAT-ASAR backscatter data. The estimates have been cross-compared with respect to forest inventory data showing 34% relative RMSE for the ASAR-based GSV retrievals and 39.4% for the PALSAR-based estimates of GSV. Fragmentation analyses using a MODIS-based land cover dataset revealed an increase of retrieval error with increasing fragmentation of the landscape. Cross-comparisons of multiple SAR-based GSV estimates helped to detect inconsistencies in the forest inventory data and can support an update of outdated forest inventory stands.
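
    The cross-comparison metric used in this record, relative RMSE against forest inventory reference values, can be written in a few lines; the arrays below are fabricated stand-ins for retrieval and inventory data.

```python
# Fabricated stand-ins for inventory reference and SAR-retrieved GSV (m^3/ha).
import numpy as np

inventory = np.array([120.0, 210.0, 90.0, 300.0, 180.0])
sar_gsv = np.array([100.0, 240.0, 110.0, 260.0, 200.0])

rmse = np.sqrt(np.mean((sar_gsv - inventory) ** 2))
rel_rmse = rmse / inventory.mean()  # relative RMSE, as reported in the record
print(f"RMSE = {rmse:.1f} m^3/ha, relative RMSE = {rel_rmse:.1%}")
```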

  20. Radiations: large scale monitoring in Japan

    International Nuclear Information System (INIS)

    Linton, M.; Khalatbari, A.

    2011-01-01

    As the consequences of radioactive leaks on their health are a matter of concern for Japanese people, a large scale epidemiological study has been launched by Fukushima Medical University. It concerns the two million inhabitants of the Fukushima Prefecture. On the national level and with the support of public funds, medical care and follow-up, as well as systematic controls, are foreseen, notably to check the thyroids of 360,000 young people under 18 years old and of 20,000 pregnant women in the Fukushima Prefecture. Some measurements have already been performed on young children. Despite the sometimes rather low measured values, and because they know that some parts of the area are at least as contaminated as the surroundings of Chernobyl, some people are reluctant to go back home.

  1. What Types of Pornography Do People Find Arousing and Do They Cluster? Assessing Types and Categories of Pornography in a Large-Scale Online Sample.

    Science.gov (United States)

    Hald, Gert Martin; Štulhofer, Aleksandar

    2016-09-01

    Previous research on exposure to different types of pornography has primarily relied on analyses of millions of search terms and histories or on user exposure patterns within a given time period rather than the self-reported frequency of consumption. Further, previous research has almost exclusively relied on theoretical or ad hoc overarching categorizations of different types of pornography, when investigating patterns of pornography exposure, rather than latent structure analyses of these exposure patterns. In contrast, using a large sample of 18- to 40-year-old heterosexual and nonheterosexual Croatian men and women, this study investigated the self-reported frequency of using 27 different types of pornography and statistically explored their latent structures. The results showed substantial differences in consumption patterns across gender and sexual orientation. However, latent structure analyses of the 27 different types of pornography assessed suggested that although several categories of consumption were gender and sexual orientation specific, common categories across the different types of pornography could be established. Based on this finding, a five-item scale was proposed to indicate the use of nonmainstream (paraphilic) pornographic content, as this type of pornography has often been targeted in previous research. To the best of our knowledge, no similar measurement tool has been proposed before.

  2. Environmental life cycle assessment of a large-scale grid-connected PV power plant. Case study Moura 62 MW PV power plant

    Energy Technology Data Exchange (ETDEWEB)

    Suomalainen, Kiti

    2006-01-15

    An environmental life cycle assessment has been conducted for a 62 MW grid-connected photovoltaic installation to study the role of BOS components in the total environmental load. The influence of the current electricity supply has also been investigated. As an alternative, a net output approach has been used, in which all electricity requirements are supplied by the photovoltaic installation itself. The components taken into account are monocrystalline silicon cells in frameless modules, steel support structures in concrete foundations, inverters, transformers, cables, transports and the construction of roads and buildings. For stationary inert products without intrinsic energy requirements, such as cables, inverters, support structures etc., only raw material acquisition and processing are taken into account, since they are considered the most dominant stages in the life cycle. The results confirm a minor environmental load from BOS components compared to the module life cycle, showing an impact of approximately ten to twenty percent of the total. Uncertainties lie in the approximations for electronic devices as well as in the emissions from silicon processing. Concerning the electricity supply, the results differ considerably depending on which system perspective is used. In the net output approach the impacts decrease by approximately ninety percent compared to the traditional approach. Some increases are also shown in toxicity categories due to the increased module production needed for the enlargement of the installation.

  3. Using large-scale data analysis to assess life history and behavioural traits: the case of the reintroduced White stork Ciconia ciconia population in the Netherlands

    Directory of Open Access Journals (Sweden)

    Doligez, B.

    2004-06-01

    Full Text Available The White stork Ciconia ciconia has been the object of several successful reintroduction programmes in the last decades. As a consequence, populations have been monitored over large spatial scales. Despite these intense efforts, very few reliable estimates of life history traits are available for this species. Such general knowledge however constitutes a prerequisite for investigating the consequences of conservation measures. Using the large-scale and long-term ringing and resighting data set of White storks in the Netherlands, we investigated the variation of survival and resighting rates with age, time and previous individual resighting history, and in a second step supplementary feeding, using capture-recapture models. Providing food did not seem to affect survival directly, but may have an indirect effect via the alteration of migratory behaviour. Large-scale population monitoring is important in obtaining precise and reliable estimates of life history traits and assessing the consequences of conservation measures on these traits, which will prove useful for managers to take adequate measures in future conservation strategies.

  4. Large-scale population assessment informs conservation management for seabirds in Antarctica and the Southern Ocean: A case study of Adélie penguins

    Directory of Open Access Journals (Sweden)

    Colin Southwell

    2017-01-01

    Full Text Available Antarctica and the Southern Ocean are increasingly affected by fisheries, climate change and human presence. Antarctic seabirds are vulnerable to all these threats because they depend on terrestrial and marine environments to breed and forage. We assess the current distribution and total abundance of Adélie penguins in East Antarctica and find there are 3.5 (95% CI 2.9–4.2) million individuals of breeding age along the East Antarctic coastline and 5.9 (4.2–7.7) million individuals foraging in the adjacent ocean after the breeding season. One third of the breeding population, numbering over 1 million individuals, breeds within 10 km of research stations, highlighting the potential for human activities to impact Adélie penguin populations despite their current high abundance. The 16 Antarctic Specially Protected Areas currently designated in East Antarctica offer protection to breeding populations close to stations in four of six regional populations. The East Antarctic breeding population consumes an average of 193,500 tonnes of krill and 18,800 tonnes of fish during a breeding season, with consumption peaking at the end of the breeding season. These findings can inform future conservation management decisions in the terrestrial environment under the Protocol on Environmental Protection to develop a systematic network of protected areas, and in the marine environment under the Convention for the Conservation of Antarctic Marine Living Resources to allow the consumption needs of Adélie penguins to be taken into account when setting fishery catch limits. Extending this work to other penguin, flying seabird, seal and whale species is a priority for conservation management in Antarctica and the Southern Ocean.

  5. Nuclear-pumped lasers for large-scale applications

    International Nuclear Information System (INIS)

    Anderson, R.E.; Leonard, E.M.; Shea, R.F.; Berggren, R.R.

    1989-05-01

    Efficient initiation of large-volume chemical lasers may be achieved by neutron induced reactions which produce charged particles in the final state. When a burst mode nuclear reactor is used as the neutron source, both a sufficiently intense neutron flux and a sufficiently short initiation pulse may be possible. Proof-of-principle experiments are planned to demonstrate lasing in a direct nuclear-pumped large-volume system; to study the effects of various neutron absorbing materials on laser performance; to study the effects of long initiation pulse lengths; to demonstrate the performance of large-scale optics and the beam quality that may be obtained; and to assess the performance of alternative designs of burst systems that increase the neutron output and burst repetition rate. 21 refs., 8 figs., 5 tabs

  6. Assessing the variability of glacier lake bathymetries and potential peak discharge based on large-scale measurements in the Cordillera Blanca, Peru

    Science.gov (United States)

    Cochachin, Alejo; Huggel, Christian; Salazar, Cesar; Haeberli, Wilfried; Frey, Holger

    2015-04-01

    Over timescales of hundreds to thousands of years, ice masses in mountains have eroded bedrock and subglacial sediment, forming overdeepenings and large moraine dams that now serve as basins for glacial lakes. Satellite-based studies found a total of 8355 glacial lakes in Peru, of which 830 lakes were observed in the Cordillera Blanca. Some of them have caused major disasters due to glacial lake outburst floods in the past decades. On the other hand, in view of shrinking glaciers, changing water resources, and the formation of new lakes, glacial lakes could function as water reservoirs in the future. Here we present unprecedented bathymetric studies of 124 glacial lakes in the Cordillera Blanca, Huallanca, Huayhuash and Raura in the regions of Ancash, Huanuco and Lima. Measurements were carried out using a boat equipped with GPS, a total station and an echo sounder to measure the depth of the lakes. AutoCAD Civil 3D Land and ArcGIS were used to process the data, generate digital topographies of the lake bathymetries, and analyze parameters such as lake area, length, width, depth and volume. Based on that, we calculated empirical equations for mean depth as related to (1) area, (2) maximum length, and (3) maximum width. We then applied these three equations to all 830 glacial lakes of the Cordillera Blanca to estimate their volumes. Eventually we used three relations from the literature to assess the peak discharge of potential lake outburst floods, based on lake volumes, resulting in 3 x 3 peak discharge estimates. In terms of lake topography and geomorphology, results indicate that the maximum depth is located in the center part for bedrock lakes and in the back part for lakes in moraine material. Best correlations are found for mean depth and maximum width; however, all three empirical relations show a large spread, reflecting the wide range of natural lake bathymetries. Volumes of the 124 lakes with bathymetries amount to 0
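
    A sketch of the volume and peak-discharge workflow described above follows, with illustrative placeholder coefficients (the study's fitted coefficients are not reproduced here): a power-law mean-depth relation yields volume as area times mean depth, and an assumed empirical relation converts volume to an outburst peak discharge.

```python
# All coefficients below are illustrative placeholders, not the fitted values.
def mean_depth_from_area(area_m2: float, a: float = 0.10, b: float = 0.45) -> float:
    """Hypothetical power law D_mean = a * A**b for mean lake depth (m)."""
    return a * area_m2 ** b

def lake_volume(area_m2: float) -> float:
    """Volume as surface area times empirical mean depth (m^3)."""
    return area_m2 * mean_depth_from_area(area_m2)

def peak_discharge(volume_m3: float, c: float = 0.0048, d: float = 0.896) -> float:
    """Assumed empirical GLOF relation Q_max = c * V**d (m^3/s)."""
    return c * volume_m3 ** d

area = 2.5e5  # m^2, example lake
vol = lake_volume(area)
print(f"V = {vol:.2e} m^3, Q_max = {peak_discharge(vol):.0f} m^3/s")
```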

  7. First large-scale DNA barcoding assessment of reptiles in the biodiversity hotspot of Madagascar, based on newly designed COI primers.

    Science.gov (United States)

    Nagy, Zoltán T; Sonet, Gontran; Glaw, Frank; Vences, Miguel

    2012-01-01

    DNA barcoding of non-avian reptiles based on the cytochrome oxidase subunit I (COI) gene is still in a very early stage, mainly due to technical problems. Using a newly developed set of reptile-specific primers for COI we present the first comprehensive study targeting the entire reptile fauna of the fourth-largest island in the world, the biodiversity hotspot of Madagascar. Representatives of the majority of Madagascan non-avian reptile species (including Squamata and Testudines) were sampled and successfully DNA barcoded. The new primer pair achieved a constantly high success rate (72.7-100%) for most squamates. More than 250 species of reptiles (out of the 393 described ones; representing around 64% of the known diversity of species) were barcoded. The average interspecific genetic distance within families ranged from a low of 13.4% in the Boidae to a high of 29.8% in the Gekkonidae. Using the average genetic divergence between sister species as a threshold, 41-48 new candidate (undescribed) species were identified. Simulations were used to evaluate the performance of DNA barcoding as a function of completeness of taxon sampling and fragment length. Compared with available multi-gene phylogenies, DNA barcoding correctly assigned most samples to species, genus and family with high confidence and the analysis of fewer taxa resulted in an increased number of well supported lineages. Shorter marker-lengths generally decreased the number of well supported nodes, but even mini-barcodes of 100 bp correctly assigned many samples to genus and family. The new protocols might help to promote DNA barcoding of reptiles and the established library of reference DNA barcodes will facilitate the molecular identification of Madagascan reptiles. Our results might be useful to easily recognize undescribed diversity (i.e. novel taxa), to resolve taxonomic problems, and to monitor the international pet trade without specialized expert knowledge.
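
    The candidate-species screening in this record relies on pairwise genetic distances exceeding a divergence threshold. The sketch below computes uncorrected p-distances on fabricated toy sequences with an illustrative 10% threshold; the study itself derives its threshold from the average divergence between sister species.

```python
# Toy sequences and a 10% threshold, both illustrative.
from itertools import combinations

def p_distance(s1: str, s2: str) -> float:
    """Proportion of differing sites over compared (gap-free) positions."""
    pairs = [(a, b) for a, b in zip(s1, s2) if a != "-" and b != "-"]
    return sum(a != b for a, b in pairs) / len(pairs)

samples = {
    "sample_a": "ACGTACGTACGTACGTACGT",
    "sample_b": "ACGTACGTACGAACGTACGT",  # one difference: conspecific
    "sample_c": "ACCTATGTACGAATGTAAGT",  # several differences: candidate split
}
THRESHOLD = 0.10
for (n1, s1), (n2, s2) in combinations(samples.items(), 2):
    d = p_distance(s1, s2)
    verdict = "candidate new species" if d > THRESHOLD else "same species"
    print(f"{n1} vs {n2}: d = {d:.2f} -> {verdict}")
```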

  8. First Large-Scale DNA Barcoding Assessment of Reptiles in the Biodiversity Hotspot of Madagascar, Based on Newly Designed COI Primers

    Science.gov (United States)

    Nagy, Zoltán T.; Sonet, Gontran; Glaw, Frank; Vences, Miguel

    2012-01-01

    Background DNA barcoding of non-avian reptiles based on the cytochrome oxidase subunit I (COI) gene is still in a very early stage, mainly due to technical problems. Using a newly developed set of reptile-specific primers for COI we present the first comprehensive study targeting the entire reptile fauna of the fourth-largest island in the world, the biodiversity hotspot of Madagascar. Methodology/Principal Findings Representatives of the majority of Madagascan non-avian reptile species (including Squamata and Testudines) were sampled and successfully DNA barcoded. The new primer pair achieved a constantly high success rate (72.7–100%) for most squamates. More than 250 species of reptiles (out of the 393 described ones; representing around 64% of the known diversity of species) were barcoded. The average interspecific genetic distance within families ranged from a low of 13.4% in the Boidae to a high of 29.8% in the Gekkonidae. Using the average genetic divergence between sister species as a threshold, 41–48 new candidate (undescribed) species were identified. Simulations were used to evaluate the performance of DNA barcoding as a function of completeness of taxon sampling and fragment length. Compared with available multi-gene phylogenies, DNA barcoding correctly assigned most samples to species, genus and family with high confidence and the analysis of fewer taxa resulted in an increased number of well supported lineages. Shorter marker-lengths generally decreased the number of well supported nodes, but even mini-barcodes of 100 bp correctly assigned many samples to genus and family. Conclusions/Significance The new protocols might help to promote DNA barcoding of reptiles and the established library of reference DNA barcodes will facilitate the molecular identification of Madagascan reptiles. Our results might be useful to easily recognize undescribed diversity (i.e. novel taxa), to resolve taxonomic problems, and to monitor the international pet trade

  9. Creating Large Scale Database Servers

    International Nuclear Information System (INIS)

    Becla, Jacek

    2001-01-01

    The BaBar experiment at the Stanford Linear Accelerator Center (SLAC) is designed to perform a high precision investigation of the decays of the B-meson produced from electron-positron interactions. The experiment, started in May 1999, will generate approximately 300TB/year of data for 10 years. All of the data will reside in Objectivity databases accessible via the Advanced Multi-threaded Server (AMS). To date, over 70TB of data have been placed in Objectivity/DB, making it one of the largest databases in the world. Providing access to such a large quantity of data through a database server is a daunting task. A full-scale testbed environment had to be developed to tune various software parameters and a fundamental change had to occur in the AMS architecture to allow it to scale past several hundred terabytes of data. Additionally, several protocol extensions had to be implemented to provide practical access to large quantities of data. This paper will describe the design of the database and the changes that we needed to make in the AMS for scalability reasons and how the lessons we learned would be applicable to virtually any kind of database server seeking to operate in the Petabyte region

  10. Creating Large Scale Database Servers

    Energy Technology Data Exchange (ETDEWEB)

    Becla, Jacek

    2001-12-14

    The BaBar experiment at the Stanford Linear Accelerator Center (SLAC) is designed to perform a high precision investigation of the decays of the B-meson produced from electron-positron interactions. The experiment, started in May 1999, will generate approximately 300TB/year of data for 10 years. All of the data will reside in Objectivity databases accessible via the Advanced Multi-threaded Server (AMS). To date, over 70TB of data have been placed in Objectivity/DB, making it one of the largest databases in the world. Providing access to such a large quantity of data through a database server is a daunting task. A full-scale testbed environment had to be developed to tune various software parameters and a fundamental change had to occur in the AMS architecture to allow it to scale past several hundred terabytes of data. Additionally, several protocol extensions had to be implemented to provide practical access to large quantities of data. This paper will describe the design of the database and the changes that we needed to make in the AMS for scalability reasons and how the lessons we learned would be applicable to virtually any kind of database server seeking to operate in the Petabyte region.

  11. Comparative Analysis of Different Protocols to Manage Large Scale Networks

    OpenAIRE

    Anil Rao Pimplapure; Dr Jayant Dubey; Prashant Sen

    2013-01-01

    In recent years, the number, complexity and size of large-scale networks have increased. The best example of a large-scale network is the Internet, and more recently the data centers in cloud environments. In this context, handling several management tasks such as traffic monitoring, security and performance optimization is a big task for the network administrator. This research studies different protocols, i.e. conventional protocols like the Simple Network Management Protocol and the newer Gossip-bas...

  12. Large Scale Computations in Air Pollution Modelling

    DEFF Research Database (Denmark)

    Zlatev, Z.; Brandt, J.; Builtjes, P. J. H.

    Proceedings of the NATO Advanced Research Workshop on Large Scale Computations in Air Pollution Modelling, Sofia, Bulgaria, 6-10 July 1998......Proceedings of the NATO Advanced Research Workshop on Large Scale Computations in Air Pollution Modelling, Sofia, Bulgaria, 6-10 July 1998...

  13. Automating large-scale reactor systems

    International Nuclear Information System (INIS)

    Kisner, R.A.

    1985-01-01

    This paper conveys a philosophy for developing automated large-scale control systems that behave in an integrated, intelligent, flexible manner. Methods for operating large-scale systems under varying degrees of equipment degradation are discussed, and a design approach that separates the effort into phases is suggested. 5 refs., 1 fig

  14. Decentralized Large-Scale Power Balancing

    DEFF Research Database (Denmark)

    Halvgaard, Rasmus; Jørgensen, John Bagterp; Poulsen, Niels Kjølstad

    2013-01-01

    problem is formulated as a centralized large-scale optimization problem but is then decomposed into smaller subproblems that are solved locally by each unit connected to an aggregator. For large-scale systems the method is faster than solving the full problem and can be distributed to include an arbitrary...

  15. Amplification of large-scale magnetic field in nonhelical magnetohydrodynamics

    KAUST Repository

    Kumar, Rohit

    2017-08-11

    It is typically assumed that the kinetic and magnetic helicities play a crucial role in the growth of large-scale dynamos. In this paper, we demonstrate that helicity is not essential for the amplification of large-scale magnetic fields. For this purpose, we perform a nonhelical magnetohydrodynamic (MHD) simulation, and show that the large-scale magnetic field can grow in nonhelical MHD when random external forcing is employed at a scale 1/10 the box size. The energy fluxes and shell-to-shell transfer rates computed using the numerical data show that the large-scale magnetic energy grows due to energy transfers from the velocity field at the forcing scales.
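
    Growth of large-scale magnetic energy in such simulations is typically diagnosed from shell-binned spectra. As a hedged illustration, the sketch below bins the Fourier energy of a fabricated 2D field into wavenumber shells; the study's actual diagnostics are 3D energy fluxes and shell-to-shell transfer rates.

```python
# Fabricated 2D field standing in for one magnetic-field component.
import numpy as np

n = 64
rng = np.random.default_rng(0)
b = rng.standard_normal((n, n))

bk = np.fft.fftshift(np.fft.fft2(b)) / n**2          # normalized Fourier modes
kx, ky = np.meshgrid(np.arange(n) - n // 2, np.arange(n) - n // 2)
shell = np.rint(np.sqrt(kx**2 + ky**2)).astype(int)  # integer wavenumber shell

# Magnetic energy summed over each shell; growth at low k in a time series of
# such spectra would signal large-scale field amplification.
spectrum = np.array([(0.5 * np.abs(bk[shell == s]) ** 2).sum()
                     for s in range(n // 2)])
print("E_B in the five largest-scale shells:", spectrum[:5])
```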

  16. Needs, opportunities, and options for large scale systems research

    Energy Technology Data Exchange (ETDEWEB)

    Thompson, G.L.

    1984-10-01

    The Office of Energy Research was recently asked to perform a study of Large Scale Systems in order to facilitate the development of a true large systems theory. It was decided to ask experts in the fields of electrical engineering, chemical engineering and manufacturing/operations research for their ideas concerning large scale systems research. The author was asked to distribute a questionnaire among these experts to find out their opinions concerning recent accomplishments and future research directions in large scale systems research. He was also requested to convene a conference which included three experts in each area as panel members to discuss the general area of large scale systems research. The conference was held on March 26-27, 1984 in Pittsburgh with nine panel members, and 15 other attendees. The present report is a summary of the ideas presented and the recommendations proposed by the attendees.

  17. Ultra-large scale synthesis of high electrochemical performance SnO₂ quantum dots within 5 min at room temperature following a growth self-termination mechanism

    Energy Technology Data Exchange (ETDEWEB)

    Cui, Hongtao, E-mail: htcui@ytu.edu.cn; Xue, Junying; Ren, Wanzhong; Wang, Minmin

    2015-10-05

    Highlights: • SnO₂ quantum dots were prepared at an ultra-large scale at room temperature within 5 min. • Grinding SnCl₂·2H₂O and ammonium persulphate with morpholine produces quantum dots. • The reactions were self-terminated through the rapid consumption of water. • The obtained SnO₂ quantum dots exhibit high electrochemical performance. - Abstract: SnO₂ quantum dots are prepared at an ultra-large scale by a productive synthetic procedure without using any organic ligand. Grinding a solid mixture of SnCl₂·2H₂O and ammonium persulphate with morpholine in a mortar at room temperature produces 1.2 nm SnO₂ quantum dots within 5 min. The formation of SnO₂ is initiated by the reaction between tin ions and hydroxyl groups generated from the hydrolysis of morpholine in the hydrate water released from SnCl₂·2H₂O. It is considered that, as water is rapidly consumed by the hydrolysis reaction of morpholine, the growth process of the particles is self-terminated immediately after their transitory period of nucleation and growth. As a result of the simple procedure and its high tolerance to scaling up, at least 50 g of SnO₂ quantum dots can be produced in one batch in our laboratory. The as-prepared quantum dots present high electrochemical performance due to the effective faradaic reaction and the alternative trapping of electrons and holes.

  18. Large-scale numerical simulations of plasmas

    International Nuclear Information System (INIS)

    Hamaguchi, Satoshi

    2004-01-01

    The recent trend toward large-scale simulations of fusion plasmas and processing plasmas is briefly summarized. Many advanced simulation techniques have been developed for fusion plasmas, and some of these techniques are now applied to analyses of processing plasmas. (author)

  19. Superconducting materials for large scale applications

    International Nuclear Information System (INIS)

    Dew-Hughes, D.

    1975-01-01

    Applications of superconductors capable of carrying large current densities in large-scale electrical devices are examined. Discussions are included on critical current density, superconducting materials available, and future prospects for improved superconducting materials. (JRD)

  20. Large-scale computing with Quantum Espresso

    International Nuclear Information System (INIS)

    Giannozzi, P.; Cavazzoni, C.

    2009-01-01

    This paper gives a short introduction to Quantum Espresso: a distribution of software for atomistic simulations in condensed-matter physics, chemical physics, materials science, and to its usage in large-scale parallel computing.

  1. Large-scale regions of antimatter

    International Nuclear Information System (INIS)

    Grobov, A. V.; Rubin, S. G.

    2015-01-01

    A modified mechanism of the formation of large-scale antimatter regions is proposed. Antimatter appears owing to fluctuations of a complex scalar field that carries a baryon charge in the inflation era

  2. Large-scale regions of antimatter

    Energy Technology Data Exchange (ETDEWEB)

    Grobov, A. V., E-mail: alexey.grobov@gmail.com; Rubin, S. G., E-mail: sgrubin@mephi.ru [National Research Nuclear University MEPhI (Russian Federation)

    2015-07-15

    A modified mechanism of the formation of large-scale antimatter regions is proposed. Antimatter appears owing to fluctuations of a complex scalar field that carries a baryon charge in the inflation era.

  3. A large scale study of the assessment of the social environment of middle and secondary schools: the validity and utility of teachers' ratings of school climate, cultural pluralism, and safety problems for understanding school effects and school improvement.

    Science.gov (United States)

    Brand, Stephen; Felner, Robert D; Seitsinger, Anne; Burns, Amy; Bolton, Natalie

    2008-10-01

    Due to changes in state and federal policies, as well as logistical and fiscal limitations, researchers must increasingly rely on teachers' reports of school climate dimensions in order to investigate the developmental impact of these dimensions, and to evaluate efforts to enhance the impact of school environments on the development of young adolescents. Teachers' climate ratings exhibited a robust dimensional structure, high levels of internal consistency, and moderate levels of stability over 1- and 2-year time spans. Teachers' climate ratings were also found to be consistently related to students' ratings. In three large-scale samples of schools, teachers' climate ratings were associated significantly and consistently with students' performance on standardized tests of academic achievement, and with indexes of their academic, behavioral, and socio-emotional adjustment.
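
    Internal consistency of multi-item climate scales, as reported here, is commonly summarized with Cronbach's alpha. The sketch below computes it from a small fabricated teacher-by-item ratings matrix; the formula is standard, but the data are illustrative only.

```python
# Fabricated ratings matrix: rows = teachers, columns = scale items.
import numpy as np

ratings = np.array([
    [4, 5, 4, 4],
    [3, 3, 4, 3],
    [5, 5, 5, 4],
    [2, 3, 2, 3],
    [4, 4, 5, 5],
], dtype=float)

k = ratings.shape[1]                          # number of items
item_var = ratings.var(axis=0, ddof=1).sum()  # sum of item variances
total_var = ratings.sum(axis=1).var(ddof=1)   # variance of scale totals
alpha = k / (k - 1) * (1 - item_var / total_var)
print(f"Cronbach's alpha = {alpha:.2f}")      # ~0.91 for this toy matrix
```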

  4. Large-scale grid management; Storskala Nettforvaltning

    Energy Technology Data Exchange (ETDEWEB)

    Langdal, Bjoern Inge; Eggen, Arnt Ove

    2003-07-01

    The network companies in the Norwegian electricity industry now have to establish large-scale network management, a concept essentially characterized by (1) a broader focus (Broad Band, Multi Utility, ...) and (2) bigger units with large networks and more customers. Research by SINTEF Energy Research shows that, so far, the approaches within large-scale network management may be structured according to three main challenges: centralization, decentralization and outsourcing. The article is part of a planned series.

  5. Political consultation and large-scale research

    International Nuclear Information System (INIS)

    Bechmann, G.; Folkers, H.

    1977-01-01

    Large-scale research and policy consulting occupy an intermediary position between sociological subsystems. While large-scale research coordinates science, policy, and production, policy consulting coordinates science, policy, and the political sphere. In this very position, large-scale research and policy consulting lack the institutional guarantees and rational background that are characteristic of their sociological environment. Large-scale research can neither deal with the production of innovative goods under consideration of profitability, nor can it hope for full recognition by the basis-oriented scientific community. Policy consulting neither holds the political system's competence to make decisions, nor can it be judged successfully by the critical standards of the established social sciences, at least as far as the present situation is concerned. This intermediary position of large-scale research and policy consulting supports, in three points, the thesis that this is a new form of institutionalization of science: 1) external control, 2) the organizational form, 3) the theoretical conception of large-scale research and policy consulting. (orig.) [de]

  6. Large-scale dynamic compaction of natural salt

    International Nuclear Information System (INIS)

    Hansen, F.D.; Ahrens, E.H.

    1996-01-01

    A large-scale dynamic compaction demonstration of natural salt was successfully completed. About 40 m³ of salt were compacted in three 2-m lifts by dropping a 9,000-kg weight from a height of 15 m in a systematic pattern to achieve the desired compaction energy. To enhance compaction, 1 wt% water was added to the relatively dry mine-run salt. The average compacted mass fractional density was 0.90 of natural intact salt, and in situ nitrogen permeabilities averaged 9×10⁻¹⁴ m². This established the viability of dynamic compaction for placing salt shaft seal components. The demonstration also provided compacted salt parameters needed for shaft seal system design and performance assessments of the Waste Isolation Pilot Plant
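
    The stated drop parameters allow a quick back-of-envelope check of the delivered energy. The sketch below computes the potential energy per drop and, under an assumed (hypothetical) target specific compaction energy, the number of drops per lift.

```python
# Energy per drop from the stated mass and height; the target specific
# compaction energy below is a hypothetical illustration, not a WIPP value.
g = 9.81                     # m/s^2
m, h = 9_000, 15.0           # kg, m
energy_per_drop = m * g * h  # J; ~1.32 MJ
print(f"energy per drop: {energy_per_drop / 1e6:.2f} MJ")

lift_volume = 40 / 3         # m^3 per lift (40 m^3 compacted in three lifts)
target = 1.0e6               # J/m^3, assumed target specific energy
drops = target * lift_volume / energy_per_drop
print(f"drops per lift for the assumed target: {drops:.0f}")
```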

  7. Preliminary design study of a large scale graphite oxidation loop

    International Nuclear Information System (INIS)

    Epel, L.G.; Majeski, S.J.; Schweitzer, D.G.; Sheehan, T.V.

    1979-08-01

    A preliminary design study of a large-scale graphite oxidation loop was performed in order to assess feasibility and to estimate capital costs. The nominal design operates at 50 atmospheres of helium and 1800°F with a graphite specimen 30 inches long and 10 inches in diameter. It was determined that a simple single-walled design was not practical at this time because of a lack of commercially available thick-walled high-temperature alloys. Two alternative concepts, at reduced operating pressure, were investigated. Both were found to be readily fabricable to operate at 1800°F, and capital cost estimates for these are included. A design concept which is outside the scope of this study was briefly considered

  8. Food appropriation through large scale land acquisitions

    International Nuclear Information System (INIS)

    Cristina Rulli, Maria; D’Odorico, Paolo

    2014-01-01

    The increasing demand for agricultural products and the uncertainty of international food markets have recently drawn the attention of governments and agribusiness firms toward investments in productive agricultural land, mostly in the developing world. The targeted countries are typically located in regions that have remained only marginally utilized because of a lack of modern technology. It is expected that in the long run large scale land acquisitions (LSLAs) for commercial farming will bring the technology required to close the existing crop yield gaps. While the extent of the acquired land and the associated appropriation of freshwater resources have been investigated in detail, the amount of food this land can produce and the number of people it could feed still need to be quantified. Here we use a unique dataset of land deals to provide a global quantitative assessment of the rates of crop and food appropriation potentially associated with LSLAs. We show how up to 300–550 million people could be fed by crops grown in the acquired land, should these investments in agriculture improve crop production and close the yield gap. In contrast, about 190–370 million people could be supported by this land without closing the yield gap. These numbers raise some concern because the food produced in the acquired land is typically exported to other regions, while the target countries exhibit high levels of malnourishment. Conversely, if used for domestic consumption, the crops harvested in the acquired land could ensure food security to the local populations. (letter)
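
    The people-fed estimates in this record follow from a simple calories-produced over calories-needed ratio. The sketch below reproduces that arithmetic with illustrative placeholder values, not the paper's land-deal dataset.

```python
# All inputs are illustrative placeholders, not values from the land-deal dataset.
AREA_HA = 30e6            # hypothetical acquired cropland, hectares
KCAL_PER_HA = 10e6        # calorie output per hectare-year at current yields
GAP_CLOSED_FACTOR = 1.6   # assumed production boost if yield gaps are closed
NEED = 2500 * 365         # kcal per person per year

def people_fed(boost: float = 1.0) -> float:
    return AREA_HA * KCAL_PER_HA * boost / NEED

print(f"current yields:   {people_fed() / 1e6:.0f} million people")
print(f"yield gap closed: {people_fed(GAP_CLOSED_FACTOR) / 1e6:.0f} million people")
```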

  9. Dissecting the large-scale galactic conformity

    Science.gov (United States)

    Seo, Seongu

    2018-01-01

    Galactic conformity is an observed phenomenon whereby galaxies located in the same region have similar properties such as star formation rate, color, gas fraction, and so on. The conformity was first observed among galaxies within the same halos (“one-halo conformity”). The one-halo conformity can be readily explained by mutual interactions among galaxies within a halo. However, recent observations have further witnessed a puzzling connection among galaxies with no direct interaction. In particular, galaxies located within a sphere of ~5 Mpc radius tend to show similarities, even though the galaxies do not share common halos with each other ("two-halo conformity" or “large-scale conformity”). Using a cosmological hydrodynamic simulation, Illustris, we investigate the physical origin of the two-halo conformity and put forward two scenarios. First, back-splash galaxies are likely responsible for the large-scale conformity. They have evolved into red galaxies due to ram-pressure stripping in a given galaxy cluster and happen to reside now within a ~5 Mpc sphere. Second, galaxies in the strong tidal field induced by large-scale structure also seem to give rise to the large-scale conformity. The strong tides suppress star formation in the galaxies. We discuss the importance of the large-scale conformity in the context of galaxy evolution.

  10. Growth Limits in Large Scale Networks

    DEFF Research Database (Denmark)

    Knudsen, Thomas Phillip

    limitations. The rising complexity of network management with the convergence of communications platforms is shown as problematic for both automatic management feasibility and for manpower resource management. In the fourth step the scope is extended to include the present society with the DDN project as its......The Subject of large scale networks is approached from the perspective of the network planner. An analysis of the long term planning problems is presented with the main focus on the changing requirements for large scale networks and the potential problems in meeting these requirements. The problems...... the fundamental technological resources in network technologies are analysed for scalability. Here several technological limits to continued growth are presented. The third step involves a survey of major problems in managing large scale networks given the growth of user requirements and the technological...

  11. Managing large-scale models: DBS

    International Nuclear Information System (INIS)

    1981-05-01

    A set of fundamental management tools for developing and operating a large scale model and data base system is presented. Based on experience in operating and developing a large scale computerized system, the only reasonable way to gain strong management control of such a system is to implement appropriate controls and procedures. Chapter I discusses the purpose of the book. Chapter II classifies a broad range of generic management problems into three groups: documentation, operations, and maintenance. First, system problems are identified, then solutions for gaining management control are discussed. Chapters III, IV, and V present practical methods for dealing with these problems. These methods were developed for managing SEAS but have general application for large scale models and data bases

  12. Accelerating sustainability in large-scale facilities

    CERN Multimedia

    Marina Giampietro

    2011-01-01

    Scientific research centres and large-scale facilities are intrinsically energy intensive, but how can big science improve its energy management and eventually contribute to the environmental cause with new cleantech? CERN’s commitment to providing tangible answers to these questions was sealed at the first workshop on energy management for large scale scientific infrastructures, held in Lund, Sweden, on 13-14 October.   Participants at the energy management for large scale scientific infrastructures workshop. The workshop, co-organised with the European Spallation Source (ESS) and the European Association of National Research Facilities (ERF), tackled a recognised need to address energy issues in relation to science and technology policies. It brought together more than 150 representatives of Research Infrastructures (RIs) and energy experts from Europe and North America. “Without compromising our scientific projects, we can ...

  13. Large-Scale Analysis of Art Proportions

    DEFF Research Database (Denmark)

    Jensen, Karl Kristoffer

    2014-01-01

    While literature often tries to impute mathematical constants into art, this large-scale study (11 databases of paintings and photos, around 200.000 items) shows a different truth. The analysis, consisting of the width/height proportions, shows a value of rarely if ever one (square) and with majo......While literature often tries to impute mathematical constants into art, this large-scale study (11 databases of paintings and photos, around 200.000 items) shows a different truth. The analysis, consisting of the width/height proportions, shows a value of rarely if ever one (square...

  14. The Expanded Large Scale Gap Test

    Science.gov (United States)

    1987-03-01

    NSWC TR 86-32: The Expanded Large Scale Gap Test, by T. P. Liddiard and D. Price, Research and Technology Department, March 1987. Approved for public release. Only fragments of the abstract are recoverable: ...arises, to reduce the spread in the LSGT 50% gap value. The worst charges, such as those with the highest or lowest densities, the largest re-pressed...

  15. LARGE-SCALE COMMERCIAL INVESTMENTS IN LAND: SEEKING ...

    African Journals Online (AJOL)

    extent of large-scale investment in land or to assess its impact on the people in recipient countries. .... favorable lease terms, apparently based on a belief that this is necessary to .... Harm to the rights of local occupiers of land can result from a dearth ..... applies to a self-identified group based on the group's traditions.

  16. Sensitivity technologies for large scale simulation

    International Nuclear Information System (INIS)

    Collis, Samuel Scott; Bartlett, Roscoe Ainsworth; Smith, Thomas Michael; Heinkenschloss, Matthias; Wilcox, Lucas C.; Hill, Judith C.; Ghattas, Omar; Berggren, Martin Olof; Akcelik, Volkan; Ober, Curtis Curry; van Bloemen Waanders, Bart Gustaaf; Keiter, Eric Richard

    2005-01-01

    Sensitivity analysis is critically important to numerous analysis algorithms, including large scale optimization, uncertainty quantification, reduced order modeling, and error estimation. Our research focused on developing tools, algorithms and standard interfaces to facilitate the implementation of sensitivity type analysis into existing code and, equally important, on ways to increase the visibility of sensitivity analysis. We attempt to accomplish the first objective through the development of hybrid automatic differentiation tools, standard linear algebra interfaces for numerical algorithms, time domain decomposition algorithms and two level Newton methods. We attempt to accomplish the second goal by presenting the results of several case studies in which direct sensitivities and adjoint methods have been effectively applied, in addition to an investigation of h-p adaptivity using adjoint based a posteriori error estimation. A mathematical overview is provided of direct sensitivities and adjoint methods for both steady state and transient simulations. Two case studies are presented to demonstrate the utility of these methods. A direct sensitivity method is implemented to solve a source inversion problem for steady state internal flows subject to convection diffusion. Real time performance is achieved using a novel decomposition into offline and online calculations. Adjoint methods are used to reconstruct initial conditions of a contamination event in an external flow. We demonstrate an adjoint based transient solution. In addition, we investigated time domain decomposition algorithms in an attempt to improve the efficiency of transient simulations. Because derivative calculations are at the root of sensitivity calculations, we have developed hybrid automatic differentiation methods and implemented this approach for shape optimization for gas dynamics using the Euler equations. The hybrid automatic differentiation method was applied to a first

  17. GPU-based large-scale visualization

    KAUST Repository

    Hadwiger, Markus

    2013-11-19

    Recent advances in image and volume acquisition as well as computational advances in simulation have led to an explosion of the amount of data that must be visualized and analyzed. Modern techniques combine the parallel processing power of GPUs with out-of-core methods and data streaming to enable the interactive visualization of giga- and terabytes of image and volume data. A major enabler for interactivity is making both the computational and the visualization effort proportional to the amount of data that is actually visible on screen, decoupling it from the full data size. This leads to powerful display-aware multi-resolution techniques that enable the visualization of data of almost arbitrary size. The course consists of two major parts: An introductory part that progresses from fundamentals to modern techniques, and a more advanced part that discusses details of ray-guided volume rendering, novel data structures for display-aware visualization and processing, and the remote visualization of large online data collections. You will learn how to develop efficient GPU data structures and large-scale visualizations, implement out-of-core strategies and concepts such as virtual texturing that have only been employed recently, as well as how to use modern multi-resolution representations. These approaches reduce the GPU memory requirements of extremely large data to a working set size that fits into current GPUs. You will learn how to perform ray-casting of volume data of almost arbitrary size and how to render and process gigapixel images using scalable, display-aware techniques. We will describe custom virtual texturing architectures as well as recent hardware developments in this area. We will also describe client/server systems for distributed visualization, on-demand data processing and streaming, and remote visualization. We will describe implementations using OpenGL as well as CUDA, exploiting parallelism on GPUs combined with additional asynchronous

  18. Configuration management in large scale infrastructure development

    NARCIS (Netherlands)

    Rijn, T.P.J. van; Belt, H. van de; Los, R.H.

    2000-01-01

    Large Scale Infrastructure (LSI) development projects such as the construction of roads, railways and other civil engineering (water)works are tendered differently today than a decade ago. Traditional workflow requested quotes from construction companies for construction works where the works to be

  19. Large-scale Motion of Solar Filaments

    Indian Academy of Sciences (India)

    tribpo

    Large-scale Motion of Solar Filaments. Pavel Ambrož, Astronomical Institute of the Acad. Sci. of the Czech Republic, CZ-25165 Ondřejov, The Czech Republic. e-mail: pambroz@asu.cas.cz. Alfred Schroll, Kanzelhöhe Solar Observatory of the University of Graz, A-9521 Treffen, Austria. e-mail: schroll@solobskh.ac.at.

  20. Ethics of large-scale change

    DEFF Research Database (Denmark)

    Arler, Finn

    2006-01-01

    , which kind of attitude is appropriate when dealing with large-scale changes like these from an ethical point of view. Three kinds of approaches are discussed: Aldo Leopold's mountain thinking, the neoclassical economists' approach, and finally the so-called Concentric Circle Theories approach...

  1. The origin of large scale cosmic structure

    International Nuclear Information System (INIS)

    Jones, B.J.T.; Palmer, P.L.

    1985-01-01

    The paper concerns the origin of large scale cosmic structure. The evolution of density perturbations, the nonlinear regime (Zel'dovich's solution and others), the Gott and Rees clustering hierarchy, the spectrum of condensations, and biassed galaxy formation, are all discussed. (UK)

  2. Large-scale perspective as a challenge

    NARCIS (Netherlands)

    Plomp, M.G.A.

    2012-01-01

    1. Scale forms a challenge for chain researchers: when exactly is something ‘large-scale’? What are the underlying factors (e.g. number of parties, data, objects in the chain, complexity) that determine this? It appears to be a continuum between small- and large-scale, where positioning on that

  3. Computing in Large-Scale Dynamic Systems

    NARCIS (Netherlands)

    Pruteanu, A.S.

    2013-01-01

    Software applications developed for large-scale systems have always been difficult to develop due to problems caused by the large number of computing devices involved. Above a certain network size (roughly one hundred), necessary services such as code updating, topology discovery and data

  4. Learning from large scale neural simulations

    DEFF Research Database (Denmark)

    Serban, Maria

    2017-01-01

    Large-scale neural simulations have the marks of a distinct methodology which can be fruitfully deployed to advance scientific understanding of the human brain. Computer simulation studies can be used to produce surrogate observational data for better conceptual models and new how...

  5. Large-Scale Outflows in Seyfert Galaxies

    Science.gov (United States)

    Colbert, E. J. M.; Baum, S. A.

    1995-12-01

    \\catcode`\\@=11 \\ialign{m @th#1hfil ##hfil \\crcr#2\\crcr\\sim\\crcr}}} \\catcode`\\@=12 Highly collimated outflows extend out to Mpc scales in many radio-loud active galaxies. In Seyfert galaxies, which are radio-quiet, the outflows extend out to kpc scales and do not appear to be as highly collimated. In order to study the nature of large-scale (>~1 kpc) outflows in Seyferts, we have conducted optical, radio and X-ray surveys of a distance-limited sample of 22 edge-on Seyfert galaxies. Results of the optical emission-line imaging and spectroscopic survey imply that large-scale outflows are present in >~{{1} /{4}} of all Seyferts. The radio (VLA) and X-ray (ROSAT) surveys show that large-scale radio and X-ray emission is present at about the same frequency. Kinetic luminosities of the outflows in Seyferts are comparable to those in starburst-driven superwinds. Large-scale radio sources in Seyferts appear diffuse, but do not resemble radio halos found in some edge-on starburst galaxies (e.g. M82). We discuss the feasibility of the outflows being powered by the active nucleus (e.g. a jet) or a circumnuclear starburst.

  6. Stability of large scale interconnected dynamical systems

    International Nuclear Information System (INIS)

    Akpan, E.P.

    1993-07-01

    Large scale systems modelled by a system of ordinary differential equations are considered and necessary and sufficient conditions are obtained for the uniform asymptotic connective stability of the systems using the method of cone-valued Lyapunov functions. It is shown that this model significantly improves the existing models. (author). 9 refs

  7. Hydrogen combustion modelling in large-scale geometries

    International Nuclear Information System (INIS)

    Studer, E.; Beccantini, A.; Kudriakov, S.; Velikorodny, A.

    2014-01-01

    Hydrogen risk mitigation based on catalytic recombiners cannot exclude the formation of flammable clouds during the course of a severe accident in a Nuclear Power Plant. Consequences of combustion processes have to be assessed based on existing knowledge and the state of the art in CFD combustion modelling. The Fukushima accidents have also revealed the need for taking into account the hydrogen explosion phenomena in risk management. Thus combustion modelling in a large-scale geometry is one of the remaining severe accident safety issues. At present, no combustion model exists which can accurately describe a combustion process inside a geometrical configuration typical of the Nuclear Power Plant (NPP) environment. Therefore the major attention in model development has to be paid to the adoption of existing approaches or the creation of new ones capable of reliably predicting the possibility of flame acceleration in geometries of that type. A set of experiments performed previously in the RUT facility and the Heiss Dampf Reactor (HDR) facility is used as a validation database for the development of a three-dimensional gas dynamic model for the simulation of hydrogen-air-steam combustion in large-scale geometries. The combustion regimes include slow deflagration, fast deflagration, and detonation. Modelling is based on the Reactive Discrete Equation Method (RDEM), where the flame is represented as an interface separating reactants and combustion products. The transport of the progress variable is governed by different flame surface wrinkling factors. The results of numerical simulation are presented together with comparisons, critical discussions and conclusions. (authors)

  8. A large-scale assessment of two-way SNP interactions in breast cancer susceptibility using 46,450 cases and 42,461 controls from the breast cancer association consortium.

    Science.gov (United States)

    Milne, Roger L; Herranz, Jesús; Michailidou, Kyriaki; Dennis, Joe; Tyrer, Jonathan P; Zamora, M Pilar; Arias-Perez, José Ignacio; González-Neira, Anna; Pita, Guillermo; Alonso, M Rosario; Wang, Qin; Bolla, Manjeet K; Czene, Kamila; Eriksson, Mikael; Humphreys, Keith; Darabi, Hatef; Li, Jingmei; Anton-Culver, Hoda; Neuhausen, Susan L; Ziogas, Argyrios; Clarke, Christina A; Hopper, John L; Dite, Gillian S; Apicella, Carmel; Southey, Melissa C; Chenevix-Trench, Georgia; Swerdlow, Anthony; Ashworth, Alan; Orr, Nicholas; Schoemaker, Minouk; Jakubowska, Anna; Lubinski, Jan; Jaworska-Bieniek, Katarzyna; Durda, Katarzyna; Andrulis, Irene L; Knight, Julia A; Glendon, Gord; Mulligan, Anna Marie; Bojesen, Stig E; Nordestgaard, Børge G; Flyger, Henrik; Nevanlinna, Heli; Muranen, Taru A; Aittomäki, Kristiina; Blomqvist, Carl; Chang-Claude, Jenny; Rudolph, Anja; Seibold, Petra; Flesch-Janys, Dieter; Wang, Xianshu; Olson, Janet E; Vachon, Celine; Purrington, Kristen; Winqvist, Robert; Pylkäs, Katri; Jukkola-Vuorinen, Arja; Grip, Mervi; Dunning, Alison M; Shah, Mitul; Guénel, Pascal; Truong, Thérèse; Sanchez, Marie; Mulot, Claire; Brenner, Hermann; Dieffenbach, Aida Karina; Arndt, Volker; Stegmaier, Christa; Lindblom, Annika; Margolin, Sara; Hooning, Maartje J; Hollestelle, Antoinette; Collée, J Margriet; Jager, Agnes; Cox, Angela; Brock, Ian W; Reed, Malcolm W R; Devilee, Peter; Tollenaar, Robert A E M; Seynaeve, Caroline; Haiman, Christopher A; Henderson, Brian E; Schumacher, Fredrick; Le Marchand, Loic; Simard, Jacques; Dumont, Martine; Soucy, Penny; Dörk, Thilo; Bogdanova, Natalia V; Hamann, Ute; Försti, Asta; Rüdiger, Thomas; Ulmer, Hans-Ulrich; Fasching, Peter A; Häberle, Lothar; Ekici, Arif B; Beckmann, Matthias W; Fletcher, Olivia; Johnson, Nichola; dos Santos Silva, Isabel; Peto, Julian; Radice, Paolo; Peterlongo, Paolo; Peissel, Bernard; Mariani, Paolo; Giles, Graham G; Severi, Gianluca; Baglietto, Laura; Sawyer, Elinor; Tomlinson, Ian; Kerin, Michael; Miller, Nicola; Marme, Federik; Burwinkel, Barbara; Mannermaa, Arto; Kataja, Vesa; Kosma, Veli-Matti; Hartikainen, Jaana M; Lambrechts, Diether; Yesilyurt, Betul T; Floris, Giuseppe; Leunen, Karin; Alnæs, Grethe Grenaker; Kristensen, Vessela; Børresen-Dale, Anne-Lise; García-Closas, Montserrat; Chanock, Stephen J; Lissowska, Jolanta; Figueroa, Jonine D; Schmidt, Marjanka K; Broeks, Annegien; Verhoef, Senno; Rutgers, Emiel J; Brauch, Hiltrud; Brüning, Thomas; Ko, Yon-Dschun; Couch, Fergus J; Toland, Amanda E; Yannoukakos, Drakoulis; Pharoah, Paul D P; Hall, Per; Benítez, Javier; Malats, Núria; Easton, Douglas F

    2014-04-01

    Part of the substantial unexplained familial aggregation of breast cancer may be due to interactions between common variants, but few studies have had adequate statistical power to detect interactions of realistic magnitude. We aimed to assess all two-way interactions in breast cancer susceptibility between 70,917 single nucleotide polymorphisms (SNPs) selected primarily based on prior evidence of a marginal effect. Thirty-eight international studies contributed data for 46,450 breast cancer cases and 42,461 controls of European origin as part of a multi-consortium project (COGS). First, SNPs were preselected based on evidence (P 10(-10)). In summary, we observed little evidence of two-way SNP interactions in breast cancer susceptibility, despite the large number of SNPs with potential marginal effects considered and the very large sample size. This finding may have important implications for risk prediction, simplifying the modelling required. Further comprehensive, large-scale genome-wide interaction studies may identify novel interacting loci if the inherent logistic and computational challenges can be overcome.
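
    The two-way test at the heart of this record can be illustrated with a minimal sketch: a logistic case-control model with a SNP-by-SNP product term, compared against a marginal-effects-only model via a 1-df likelihood-ratio test. The simulated genotypes, effect sizes, and variable names below are invented for illustration; this is not the COGS analysis pipeline.

```python
# Minimal sketch of a two-way SNP interaction test via logistic regression.
# Simulated data, not the COGS pipeline. Requires numpy, scipy, statsmodels.
import numpy as np
import statsmodels.api as sm
from scipy.stats import chi2

rng = np.random.default_rng(0)
n = 5000
snp1 = rng.binomial(2, 0.3, n)             # genotypes coded 0/1/2
snp2 = rng.binomial(2, 0.2, n)
logit = -1.0 + 0.15 * snp1 + 0.10 * snp2   # marginal effects only
y = rng.binomial(1, 1 / (1 + np.exp(-logit)))

# Full model: marginal effects plus a multiplicative interaction term
X_full = sm.add_constant(np.column_stack([snp1, snp2, snp1 * snp2]))
fit_full = sm.Logit(y, X_full).fit(disp=0)

# Reduced model: marginal effects only
X_red = sm.add_constant(np.column_stack([snp1, snp2]))
fit_red = sm.Logit(y, X_red).fit(disp=0)

# 1-df likelihood-ratio test for the interaction term
lr = 2 * (fit_full.llf - fit_red.llf)
print("interaction LR p-value:", chi2.sf(lr, df=1))
```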

  9. Fast Simulation of Large-Scale Floods Based on GPU Parallel Computing

    Directory of Open Access Journals (Sweden)

    Qiang Liu

    2018-05-01

    Full Text Available Computing speed is a significant issue of large-scale flood simulations for real-time response to disaster prevention and mitigation. Even today, most of the large-scale flood simulations are generally run on supercomputers due to the massive amounts of data and computations necessary. In this work, a two-dimensional shallow water model based on an unstructured Godunov-type finite volume scheme was proposed for flood simulation. To realize a fast simulation of large-scale floods on a personal computer, a Graphics Processing Unit (GPU)-based, high-performance computing method using the OpenACC application was adopted to parallelize the shallow water model. An unstructured data management method was presented to control the data transportation between the GPU and CPU (Central Processing Unit) with minimum overhead, and then both computation and data were offloaded from the CPU to the GPU, which exploited the computational capability of the GPU as much as possible. The parallel model was validated using various benchmarks and real-world case studies. The results demonstrate that speed-ups of up to one order of magnitude can be achieved in comparison with the serial model. The proposed parallel model provides a fast and reliable tool with which to quickly assess flood hazards in large-scale areas and, thus, has a bright application prospect for dynamic inundation risk identification and disaster assessment.
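
    As a rough illustration of the Godunov-type finite-volume update underlying such models, the sketch below solves a 1D dam-break problem with a Rusanov (local Lax-Friedrichs) flux in NumPy. The paper's model is two-dimensional, unstructured, and OpenACC-parallelized; none of that is reproduced here, and the grid size, time step, and initial state are arbitrary.

```python
# 1D illustration of a Godunov-type finite-volume shallow-water update
# with a Rusanov flux. Shows only the flux/update structure, not the
# paper's 2D unstructured GPU model. Requires numpy.
import numpy as np

g = 9.81
nx, dx, dt = 200, 1.0, 0.05
h = np.where(np.arange(nx) < nx // 2, 2.0, 1.0)  # dam-break initial state
hu = np.zeros(nx)

def flux(h, hu):
    u = np.where(h > 1e-8, hu / np.maximum(h, 1e-8), 0.0)
    return np.array([hu, hu * u + 0.5 * g * h * h])

for _ in range(100):
    U = np.array([h, hu])
    F = flux(h, hu)
    c = np.abs(np.where(h > 1e-8, hu / np.maximum(h, 1e-8), 0.0)) + np.sqrt(g * h)
    a = np.maximum(c[:-1], c[1:])          # local max wave speed per interface
    # Rusanov interface flux between cells i and i+1
    Fi = 0.5 * (F[:, :-1] + F[:, 1:]) - 0.5 * a * (U[:, 1:] - U[:, :-1])
    U[:, 1:-1] -= dt / dx * (Fi[:, 1:] - Fi[:, :-1])
    h, hu = U[0], U[1]

print("final depth range:", h.min(), h.max())
```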

  10. A large-scale peer teaching programme - acceptance and benefit.

    Science.gov (United States)

    Schuetz, Elisabeth; Obirei, Barbara; Salat, Daniela; Scholz, Julia; Hann, Dagmar; Dethleffsen, Kathrin

    2017-08-01

    performance in first assessments. 94% of the students participating in tutorials offered in the study year 2013/14 rated the tutorials as "excellent" or "good". An objective benefit has been shown by a significant increase in re-assessment scores with an effect size between the medium and large magnitudes for participants of tutorials compared to non-participants in the years 2012, 2013 and 2014. In addition, significantly higher pass rates of re-assessments could be observed. Acceptance, utilisation and benefit of the assessed peer teaching programme are high. Beyond the support of students, a contribution to the individualisation of studies and teaching is made. Further studies are necessary to investigate possible influences of large-scale peer teaching programmes, for example on the reduction of study length and drop-off rates, as well as additional effects on academic achievements. Copyright © 2017. Published by Elsevier GmbH.

  11. Large-scale fuel cycle centres

    International Nuclear Information System (INIS)

    Smiley, S.H.; Black, K.M.

    1977-01-01

    The US Nuclear Regulatory Commission (NRC) has considered the nuclear energy centre concept for fuel cycle plants in the Nuclear Energy Centre Site Survey 1975 (NECSS-75) Rep. No. NUREG-0001, an important study mandated by the US Congress in the Energy Reorganization Act of 1974 which created the NRC. For this study, the NRC defined fuel cycle centres as consisting of fuel reprocessing and mixed-oxide fuel fabrication plants, and optional high-level waste and transuranic waste management facilities. A range of fuel cycle centre sizes corresponded to the fuel throughput of power plants with a total capacity of 50,000-300,000MW(e). The types of fuel cycle facilities located at the fuel cycle centre permit the assessment of the role of fuel cycle centres in enhancing the safeguard of strategic special nuclear materials - plutonium and mixed oxides. Siting fuel cycle centres presents a smaller problem than siting reactors. A single reprocessing plant of the scale projected for use in the USA (1500-2000t/a) can reprocess fuel from reactors producing 50,000-65,000MW(e). Only two or three fuel cycle centres of the upper limit size considered in the NECSS-75 would be required in the USA by the year 2000. The NECSS-75 fuel cycle centre evaluation showed that large-scale fuel cycle centres present no real technical siting difficulties from a radiological effluent and safety standpoint. Some construction economies may be achievable with fuel cycle centres, which offer opportunities to improve waste-management systems. Combined centres consisting of reactors and fuel reprocessing and mixed-oxide fuel fabrication plants were also studied in the NECSS. Such centres can eliminate shipment not only of Pu but also mixed-oxide fuel. Increased fuel cycle costs result from implementation of combined centres unless the fuel reprocessing plants are commercial-sized. Development of Pu-burning reactors could reduce any economic penalties of combined centres. The need for effective fissile

  12. Large-scale structure of the Universe

    International Nuclear Information System (INIS)

    Doroshkevich, A.G.

    1978-01-01

    The problems discussed at the ''Large-scale Structure of the Universe'' symposium are considered on a popular level. Described are the cell structure of galaxy distribution in the Universe and principles of mathematical galaxy distribution modelling. The images of cell structures, obtained after reprocessing with the computer, are given. Discussed are three hypotheses - vortical, entropic, adiabatic - suggesting various processes of galaxy and galaxy cluster origin. A considerable advantage of the adiabatic hypothesis is recognized. The relict radiation, as a method of directly studying the processes taking place in the Universe, is considered. The large-scale peculiarities and small-scale fluctuations of the relict radiation temperature enable one to estimate the perturbation properties at the pre-galaxy stage. The discussion of problems pertaining to studying the hot gas contained in galaxy clusters, the interactions within galaxy clusters and with the inter-galaxy medium, is recognized to be a notable contribution to the development of theoretical and observational cosmology

  13. Emerging large-scale solar heating applications

    International Nuclear Information System (INIS)

    Wong, W.P.; McClung, J.L.

    2009-01-01

    Currently the market for solar heating applications in Canada is dominated by outdoor swimming pool heating, make-up air pre-heating and domestic water heating in homes, commercial and institutional buildings. All of these involve relatively small systems, except for a few air pre-heating systems on very large buildings. Together these applications make up well over 90% of the solar thermal collectors installed in Canada during 2007. These three applications, along with the recent re-emergence of large-scale concentrated solar thermal for generating electricity, also dominate the world markets. This paper examines some emerging markets for large scale solar heating applications, with a focus on the Canadian climate and market. (author)

  14. Emerging large-scale solar heating applications

    Energy Technology Data Exchange (ETDEWEB)

    Wong, W.P.; McClung, J.L. [Science Applications International Corporation (SAIC Canada), Ottawa, Ontario (Canada)

    2009-07-01

    Currently the market for solar heating applications in Canada is dominated by outdoor swimming pool heating, make-up air pre-heating and domestic water heating in homes, commercial and institutional buildings. All of these involve relatively small systems, except for a few air pre-heating systems on very large buildings. Together these applications make up well over 90% of the solar thermal collectors installed in Canada during 2007. These three applications, along with the recent re-emergence of large-scale concentrated solar thermal for generating electricity, also dominate the world markets. This paper examines some emerging markets for large scale solar heating applications, with a focus on the Canadian climate and market. (author)

  15. A new asynchronous parallel algorithm for inferring large-scale gene regulatory networks.

    Directory of Open Access Journals (Sweden)

    Xiangyun Xiao

    Full Text Available The reconstruction of gene regulatory networks (GRNs) from high-throughput experimental data has been considered one of the most important issues in systems biology research. With the development of high-throughput technology and the complexity of biological problems, we need to reconstruct GRNs that contain thousands of genes. However, when many existing algorithms are used to handle these large-scale problems, they will encounter two important issues: low accuracy and high computational cost. To overcome these difficulties, the main goal of this study is to design an effective parallel algorithm to infer large-scale GRNs based on high-performance parallel computing environments. In this study, we proposed a novel asynchronous parallel framework to improve the accuracy and lower the time complexity of large-scale GRN inference by combining splitting technology and ordinary differential equation (ODE)-based optimization. The presented algorithm uses the sparsity and modularity of GRNs to split whole large-scale GRNs into many small-scale modular subnetworks. Through the ODE-based optimization of all subnetworks in parallel and their asynchronous communications, we can easily obtain the parameters of the whole network. To test the performance of the proposed approach, we used well-known benchmark datasets from Dialogue for Reverse Engineering Assessments and Methods challenge (DREAM), experimentally determined GRN of Escherichia coli and one published dataset that contains more than 10 thousand genes to compare the proposed approach with several popular algorithms on the same high-performance computing environments in terms of both accuracy and time complexity. The numerical results demonstrate that our parallel algorithm exhibits obvious superiority in inferring large-scale GRNs.

  16. A new asynchronous parallel algorithm for inferring large-scale gene regulatory networks.

    Science.gov (United States)

    Xiao, Xiangyun; Zhang, Wei; Zou, Xiufen

    2015-01-01

    The reconstruction of gene regulatory networks (GRNs) from high-throughput experimental data has been considered one of the most important issues in systems biology research. With the development of high-throughput technology and the complexity of biological problems, we need to reconstruct GRNs that contain thousands of genes. However, when many existing algorithms are used to handle these large-scale problems, they will encounter two important issues: low accuracy and high computational cost. To overcome these difficulties, the main goal of this study is to design an effective parallel algorithm to infer large-scale GRNs based on high-performance parallel computing environments. In this study, we proposed a novel asynchronous parallel framework to improve the accuracy and lower the time complexity of large-scale GRN inference by combining splitting technology and ordinary differential equation (ODE)-based optimization. The presented algorithm uses the sparsity and modularity of GRNs to split whole large-scale GRNs into many small-scale modular subnetworks. Through the ODE-based optimization of all subnetworks in parallel and their asynchronous communications, we can easily obtain the parameters of the whole network. To test the performance of the proposed approach, we used well-known benchmark datasets from Dialogue for Reverse Engineering Assessments and Methods challenge (DREAM), experimentally determined GRN of Escherichia coli and one published dataset that contains more than 10 thousand genes to compare the proposed approach with several popular algorithms on the same high-performance computing environments in terms of both accuracy and time complexity. The numerical results demonstrate that our parallel algorithm exhibits obvious superiority in inferring large-scale GRNs.
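
    A toy version of the split-and-parallelize idea can be sketched as follows: each modular subnetwork is fitted independently in a process pool by least-squares matching of a gene-expression ODE. The linear ODE dx/dt = Wx, the gradient-based derivative estimate, and the random data are placeholder simplifications, not the authors' algorithm, which also handles asynchronous communication between subnetworks.

```python
# Sketch of fitting ODE parameters for each modular subnetwork in a
# process pool. Linear ODE and random data are placeholders only.
import numpy as np
from multiprocessing import Pool
from scipy.optimize import least_squares

def fit_subnetwork(args):
    t, X = args                       # X: (timepoints, genes in module)
    n = X.shape[1]
    dXdt = np.gradient(X, t, axis=0)  # crude derivative estimate

    def residual(w):
        W = w.reshape(n, n)
        return (dXdt - X @ W.T).ravel()

    sol = least_squares(residual, np.zeros(n * n))
    return sol.x.reshape(n, n)

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    t = np.linspace(0, 1, 20)
    modules = [(t, rng.standard_normal((20, 5))) for _ in range(8)]
    with Pool(4) as pool:             # subnetworks fitted in parallel
        weights = pool.map(fit_subnetwork, modules)
    print(len(weights), weights[0].shape)
```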

  17. Challenges for Large Scale Structure Theory

    CERN Multimedia

    CERN. Geneva

    2018-01-01

    I will describe some of the outstanding questions in Cosmology where answers could be provided by observations of the Large Scale Structure of the Universe at late times. I will discuss some of the theoretical challenges which will have to be overcome to extract this information from the observations. I will describe some of the theoretical tools that might be useful to achieve this goal.

  18. Methods for Large-Scale Nonlinear Optimization.

    Science.gov (United States)

    1980-05-01

    STANFORD, CALIFORNIA 94305 METHODS FOR LARGE-SCALE NONLINEAR OPTIMIZATION by Philip E. Gill, Walter Murray, Michael A. Saunders, and Margaret H. Wright...typical iteration can be partitioned so that where B is an m × m basis matrix. This partition effectively divides the variables into three classes... attention is given to the standard of the coding or the documentation. A much better way of obtaining mathematical software is from a software library

  19. Large scale inhomogeneities and the cosmological principle

    International Nuclear Information System (INIS)

    Lukacs, B.; Meszaros, A.

    1984-12-01

    The compatibility of cosmological principles with possible large-scale inhomogeneities of the Universe is discussed. It seems that the strongest symmetry principle which is still compatible with reasonable inhomogeneities is a full conformal symmetry in the 3-space defined by the cosmological velocity field; but even in such a case, the standard model is isolated from the inhomogeneous ones when the whole evolution is considered. (author)

  20. Fires in large scale ventilation systems

    International Nuclear Information System (INIS)

    Gregory, W.S.; Martin, R.A.; White, B.W.; Nichols, B.D.; Smith, P.R.; Leslie, I.H.; Fenton, D.L.; Gunaji, M.V.; Blythe, J.P.

    1991-01-01

    This paper summarizes the experience gained simulating fires in large scale ventilation systems patterned after ventilation systems found in nuclear fuel cycle facilities. The series of experiments discussed included: (1) combustion aerosol loading of 0.61x0.61 m HEPA filters with the combustion products of two organic fuels, polystyrene and polymethylmethacrylate; (2) gas dynamic and heat transport through a large scale ventilation system consisting of a 0.61x0.61 m duct 90 m in length, with dampers, HEPA filters, blowers, etc.; (3) gas dynamic and simultaneous transport of heat and solid particulate (consisting of glass beads with a mean aerodynamic diameter of 10 μm) through the large scale ventilation system; and (4) the transport of heat and soot, generated by kerosene pool fires, through the large scale ventilation system. The FIRAC computer code, designed to predict fire-induced transients in nuclear fuel cycle facility ventilation systems, was used to predict the results of experiments (2) through (4). In general, the results of the predictions were satisfactory. The code predictions for the gas dynamics, heat transport, and particulate transport and deposition were within 10% of the experimentally measured values. However, the code was less successful in predicting the amount of soot generation from kerosene pool fires, probably due to the fire module of the code being a one-dimensional zone model. The experiments revealed a complicated three-dimensional combustion pattern within the fire room of the ventilation system. Further refinement of the fire module within FIRAC is needed. (orig.)

  1. Large-scale Complex IT Systems

    OpenAIRE

    Sommerville, Ian; Cliff, Dave; Calinescu, Radu; Keen, Justin; Kelly, Tim; Kwiatkowska, Marta; McDermid, John; Paige, Richard

    2011-01-01

    This paper explores the issues around the construction of large-scale complex systems which are built as 'systems of systems' and suggests that there are fundamental reasons, derived from the inherent complexity in these systems, why our current software engineering methods and techniques cannot be scaled up to cope with the engineering challenges of constructing such systems. It then goes on to propose a research and education agenda for software engineering that identifies the major challen...

  2. Large-scale complex IT systems

    OpenAIRE

    Sommerville, Ian; Cliff, Dave; Calinescu, Radu; Keen, Justin; Kelly, Tim; Kwiatkowska, Marta; McDermid, John; Paige, Richard

    2012-01-01

    12 pages, 2 figures. This paper explores the issues around the construction of large-scale complex systems which are built as 'systems of systems' and suggests that there are fundamental reasons, derived from the inherent complexity in these systems, why our current software engineering methods and techniques cannot be scaled up to cope with the engineering challenges of constructing such systems. It then goes on to propose a research and education agenda for software engineering that ident...

  3. LAVA: Large scale Automated Vulnerability Addition

    Science.gov (United States)

    2016-05-23

    LAVA: Large-scale Automated Vulnerability Addition Brendan Dolan-Gavitt*, Patrick Hulin†, Tim Leek†, Fredrich Ulrich†, Ryan Whelan† (Authors listed...released, and thus rapidly become stale. We can expect tools to have been trained to detect bugs that have been released. Given the commercial price tag...low TCN) and dead (low liveness) program data is a powerful one for vulnerability injection. The DUAs it identifies are internal program quantities

  4. Large-Scale Transit Signal Priority Implementation

    OpenAIRE

    Lee, Kevin S.; Lozner, Bailey

    2018-01-01

    In 2016, the District Department of Transportation (DDOT) deployed Transit Signal Priority (TSP) at 195 intersections in highly urbanized areas of Washington, DC. In collaboration with a broader regional implementation, and in partnership with the Washington Metropolitan Area Transit Authority (WMATA), DDOT set out to apply a systems engineering–driven process to identify, design, test, and accept a large-scale TSP system. This presentation will highlight project successes and lessons learned.

  5. Sizing and scaling requirements of a large-scale physical model for code validation

    International Nuclear Information System (INIS)

    Khaleel, R.; Legore, T.

    1990-01-01

    Model validation is an important consideration in application of a code for performance assessment and therefore in assessing the long-term behavior of the engineered and natural barriers of a geologic repository. Scaling considerations relevant to porous media flow are reviewed. An analysis approach is presented for determining the sizing requirements of a large-scale, hydrology physical model. The physical model will be used to validate performance assessment codes that evaluate the long-term behavior of the repository isolation system. Numerical simulation results for sizing requirements are presented for a porous medium model in which the media properties are spatially uncorrelated

  6. Open TG-GATEs: a large-scale toxicogenomics database

    Science.gov (United States)

    Igarashi, Yoshinobu; Nakatsu, Noriyuki; Yamashita, Tomoya; Ono, Atsushi; Ohno, Yasuo; Urushidani, Tetsuro; Yamada, Hiroshi

    2015-01-01

    Toxicogenomics focuses on assessing the safety of compounds using gene expression profiles. Gene expression signatures from large toxicogenomics databases are expected to perform better than small databases in identifying biomarkers for the prediction and evaluation of drug safety based on a compound's toxicological mechanisms in animal target organs. Over the past 10 years, the Japanese Toxicogenomics Project consortium (TGP) has been developing a large-scale toxicogenomics database consisting of data from 170 compounds (mostly drugs) with the aim of improving and enhancing drug safety assessment. Most of the data generated by the project (e.g. gene expression, pathology, lot number) are freely available to the public via Open TG-GATEs (Toxicogenomics Project-Genomics Assisted Toxicity Evaluation System). Here, we provide a comprehensive overview of the database, including both gene expression data and metadata, with a description of experimental conditions and procedures used to generate the database. Open TG-GATEs is available from http://toxico.nibio.go.jp/english/index.html. PMID:25313160

  7. The Neglected Situation: Assessment Performance and Interaction in Context

    Science.gov (United States)

    Maddox, Bryan

    2015-01-01

    Informed by Goffman's influential essay on "The neglected situation" this paper examines the contextual and interactive dimensions of performance in large-scale educational assessments. The paper applies Goffman's participation framework and associated theory in linguistic anthropology to examine how testing situations are framed and…

  8. The Software Reliability of Large Scale Integration Circuit and Very Large Scale Integration Circuit

    OpenAIRE

    Artem Ganiyev; Jan Vitasek

    2010-01-01

    This article describes an evaluation method for the faultless functioning of large scale integration circuits (LSI) and very large scale integration circuits (VLSI). The article presents a comparative analysis of the factors which determine the faultless functioning of integrated circuits, an analysis of existing methods, and a model for evaluating the faultless functioning of LSI and VLSI. The main part describes a proposed algorithm and program for the analysis of fault rate in LSI and VLSI circuits.

  9. Modeling Change in Large-Scale Longitudinal Studies of Educational Growth: Four Decades of Contributions to the Assessment of Educational Growth. ETS R&D Scientific and Policy Contributions Series. ETS SPC-12-01. Research Report No. RR-12-04

    Science.gov (United States)

    Rock, Donald A.

    2012-01-01

    This paper provides a history of ETS's role in developing assessment instruments and psychometric procedures for measuring change in large-scale national assessments funded by the Longitudinal Studies branch of the National Center for Education Statistics. It documents the innovations developed during more than 30 years of working with…

  10. Modeling Change in Large-Scale Longitudinal Studies of Educational Growth: Four Decades of Contributions to the Assessment of Educational Growth. Research Report. ETS RR-12-04. ETS R&D Scientific and Policy Contributions Series. ETS SPC-12-01

    Science.gov (United States)

    Rock, Donald A.

    2012-01-01

    This paper provides a history of ETS's role in developing assessment instruments and psychometric procedures for measuring change in large-scale national assessments funded by the Longitudinal Studies branch of the National Center for Education Statistics. It documents the innovations developed during more than 30 years of working with…

  11. Large scale injection test (LASGIT) modelling

    International Nuclear Information System (INIS)

    Arnedo, D.; Olivella, S.; Alonso, E.E.

    2010-01-01

    Document available in extended abstract form only. With the objective of understanding gas flow processes through clay barriers in radioactive waste disposal schemes, the Lasgit in situ experiment was planned and is currently in progress. Modelling of the experiment will permit a better understanding of the responses, confirm hypotheses about mechanisms and processes, and provide lessons for the design of future experiments. The experiment and modelling activities are included in the project FORGE (FP7). The in situ large scale injection test Lasgit is currently being performed at the Aespoe Hard Rock Laboratory by SKB and BGS. A schematic layout of the test is shown. The deposition hole follows the KBS3 scheme. A copper canister is installed on the axis of the deposition hole, surrounded by blocks of highly compacted MX-80 bentonite. A concrete plug is placed at the top of the buffer. A metallic lid anchored to the surrounding host rock is included in order to prevent vertical movements of the whole system during gas injection stages (high gas injection pressures are expected to be reached). Hydration of the buffer material is achieved by injecting water through filter mats, two placed at the rock walls and two at the interfaces between bentonite blocks. Water is also injected through the 12 canister filters. Gas injection stages are performed by injecting gas into some of the canister injection filters. Since the water pressure and the stresses (swelling pressure development) will be high during gas injection, it is necessary to inject at high gas pressures. This implies mechanical couplings, as gas penetrates once the gas entry pressure is reached and may produce deformations which in turn lead to permeability increments. A 3D hydro-mechanical numerical model of the test using CODE-BRIGHT is presented. The domain considered for the modelling is shown. The materials considered in the simulation are the MX-80 bentonite blocks (cylinders and rings), the concrete plug

  12. RESTRUCTURING OF THE LARGE-SCALE SPRINKLERS

    Directory of Open Access Journals (Sweden)

    Paweł Kozaczyk

    2016-09-01

    Full Text Available One of the best ways for agriculture to become independent from shortages of precipitation is irrigation. In the seventies and eighties of the last century a number of large-scale sprinklers were built in Wielkopolska. At the end of the 1970s, 67 sprinklers with a total area of 6400 ha were installed in the Poznan province. The average size of a sprinkler reached 95 ha. In 1989 there were 98 sprinklers, and the area equipped with them was more than 10 130 ha. The study was conducted on 7 large sprinklers with areas ranging from 230 to 520 hectares between 1986 and 1998. After the introduction of the market economy in the early 1990s and the ownership changes in agriculture, large-scale sprinklers underwent significant or total devastation. Land of the State Farms of the State Agricultural Property Agency was leased or sold, and the new owners used the existing sprinklers to a very small extent. This involved a change in crop structure and demand structure and an increase in operating costs. There has also been a threefold increase in electricity prices. In practice, the operation of large-scale irrigation encountered barriers of all kinds: limitations of system solutions, supply difficulties, and high levels of equipment failure, none of which encouraged rational use of the available sprinklers. A site inspection of the local area showed the current status of the remaining irrigation infrastructure. The scheme adopted for the restructuring of Polish agriculture was not the best solution, causing massive destruction of the assets previously invested in the sprinkler system.

  13. Adaptive visualization for large-scale graph

    International Nuclear Information System (INIS)

    Nakamura, Hiroko; Shinano, Yuji; Ohzahata, Satoshi

    2010-01-01

    We propose an adaptive visualization technique for representing a large-scale hierarchical dataset within limited display space. A hierarchical dataset has nodes and links showing the parent-child relationship between the nodes. These nodes and links are described using graphics primitives. When the number of these primitives is large, it is difficult to recognize the structure of the hierarchical data because many primitives are overlapped within a limited region. To overcome this difficulty, we propose an adaptive visualization technique for hierarchical datasets. The proposed technique selects an appropriate graph style according to the nodal density in each area. (author)
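
    A minimal sketch of the density-adaptive idea: count nodes per screen tile and switch to an aggregated drawing style where tiles become crowded. The tiling, the threshold, and the two style labels below are invented for illustration; the paper's style-selection rules are not reproduced.

```python
# Sketch of density-adaptive rendering: count nodes per screen tile and
# switch drawing style where tiles get crowded. Thresholds are invented.
import numpy as np

def choose_styles(xy, extent=1.0, tiles=8, dense=50):
    """Return a tiles x tiles array of style labels for node positions xy."""
    H, _, _ = np.histogram2d(xy[:, 0], xy[:, 1],
                             bins=tiles, range=[[0, extent], [0, extent]])
    # Sparse tiles keep full node/link primitives; dense tiles collapse
    # to an aggregated glyph so overlap stays readable.
    return np.where(H > dense, "aggregate", "full-detail")

rng = np.random.default_rng(2)
xy = rng.random((2000, 2))     # 2000 node positions in the unit square
print(choose_styles(xy))
```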

  14. Neutrinos and large-scale structure

    International Nuclear Information System (INIS)

    Eisenstein, Daniel J.

    2015-01-01

    I review the use of cosmological large-scale structure to measure properties of neutrinos and other relic populations of light relativistic particles. With experiments to measure the anisotropies of the cosmic microwave background and the clustering of matter at low redshift, we now have securely measured a relativistic background with density appropriate to the cosmic neutrino background. Our limits on the mass of the neutrino continue to shrink. Experiments coming in the next decade will greatly improve the available precision on searches for the energy density of novel relativistic backgrounds and the mass of neutrinos

  15. Puzzles of large scale structure and gravitation

    International Nuclear Information System (INIS)

    Sidharth, B.G.

    2006-01-01

    We consider the puzzle of cosmic voids bounded by two-dimensional structures of galactic clusters, as well as a puzzle pointed out by Weinberg: How can the mass of a typical elementary particle depend on a cosmic parameter like the Hubble constant? An answer to the first puzzle is proposed in terms of 'Scaled' Quantum Mechanical like behaviour which appears at large scales. The second puzzle can be answered by showing that the gravitational mass of an elementary particle has a Machian character (see Ahmed N., Cantorian small world, Mach's principle and the universal mass network. Chaos, Solitons and Fractals 2004;21(4))

  16. Neutrinos and large-scale structure

    Energy Technology Data Exchange (ETDEWEB)

    Eisenstein, Daniel J. [Daniel J. Eisenstein, Harvard-Smithsonian Center for Astrophysics, 60 Garden St., MS #20, Cambridge, MA 02138 (United States)

    2015-07-15

    I review the use of cosmological large-scale structure to measure properties of neutrinos and other relic populations of light relativistic particles. With experiments to measure the anisotropies of the cosmic microwave background and the clustering of matter at low redshift, we now have securely measured a relativistic background with density appropriate to the cosmic neutrino background. Our limits on the mass of the neutrino continue to shrink. Experiments coming in the next decade will greatly improve the available precision on searches for the energy density of novel relativistic backgrounds and the mass of neutrinos.

  17. Stabilization Algorithms for Large-Scale Problems

    DEFF Research Database (Denmark)

    Jensen, Toke Koldborg

    2006-01-01

    The focus of the project is on stabilization of large-scale inverse problems where structured models and iterative algorithms are necessary for computing approximate solutions. For this purpose, we study various iterative Krylov methods and their abilities to produce regularized solutions. Some......-curve. This heuristic is implemented as a part of a larger algorithm which is developed in collaboration with G. Rodriguez and P. C. Hansen. Last, but not least, a large part of the project has, in different ways, revolved around the object-oriented Matlab toolbox MOORe Tools developed by PhD Michael Jacobsen. New...
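
    As a minimal illustration of the kind of stabilization studied here, the sketch below applies Tikhonov regularization to a small ill-posed least-squares problem and scans the regularization parameter, printing the residual and solution norms that an L-curve-style heuristic would trade off. The thesis targets large structured problems solved with iterative Krylov methods, which this dense toy example does not attempt; the problem data are synthetic.

```python
# Minimal sketch of Tikhonov stabilization:
#   x_lam = argmin ||A x - b||^2 + lam^2 ||x||^2,
# scanning lam as an L-curve-style heuristic would. Requires numpy.
import numpy as np

rng = np.random.default_rng(3)
A = rng.standard_normal((100, 80)) / 10
x_true = np.zeros(80); x_true[10:20] = 1.0
b = A @ x_true + 1e-3 * rng.standard_normal(100)   # noisy data

for lam in [1e-4, 1e-3, 1e-2, 1e-1]:
    # Solve the regularized normal equations (A^T A + lam^2 I) x = A^T b
    x = np.linalg.solve(A.T @ A + lam**2 * np.eye(80), A.T @ b)
    print(f"lam={lam:.0e}  residual={np.linalg.norm(A @ x - b):.3e}"
          f"  solution norm={np.linalg.norm(x):.3e}")
```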

  18. Large scale phononic metamaterials for seismic isolation

    International Nuclear Information System (INIS)

    Aravantinos-Zafiris, N.; Sigalas, M. M.

    2015-01-01

    In this work, we numerically examine structures that could be characterized as large scale phononic metamaterials. These novel structures could have band gaps in the frequency spectrum of seismic waves when their dimensions are chosen appropriately, thus raising the belief that they could be serious candidates for seismic isolation structures. Different and easy to fabricate structures were examined made from construction materials such as concrete and steel. The well-known finite difference time domain method is used in our calculations in order to calculate the band structures of the proposed metamaterials

  19. Large-scale fuel cycle centers

    International Nuclear Information System (INIS)

    Smiley, S.H.; Black, K.M.

    1977-01-01

    The United States Nuclear Regulatory Commission (NRC) has considered the nuclear energy center concept for fuel cycle plants in the Nuclear Energy Center Site Survey - 1975 (NECSS-75) -- an important study mandated by the U.S. Congress in the Energy Reorganization Act of 1974 which created the NRC. For the study, NRC defined fuel cycle centers to consist of fuel reprocessing and mixed oxide fuel fabrication plants, and optional high-level waste and transuranic waste management facilities. A range of fuel cycle center sizes corresponded to the fuel throughput of power plants with a total capacity of 50,000 - 300,000 MWe. The types of fuel cycle facilities located at the fuel cycle center permit the assessment of the role of fuel cycle centers in enhancing safeguarding of strategic special nuclear materials -- plutonium and mixed oxides. Siting of fuel cycle centers presents a considerably smaller problem than the siting of reactors. A single reprocessing plant of the scale projected for use in the United States (1500-2000 MT/yr) can reprocess the fuel from reactors producing 50,000-65,000 MWe. Only two or three fuel cycle centers of the upper limit size considered in the NECSS-75 would be required in the United States by the year 2000 . The NECSS-75 fuel cycle center evaluations showed that large scale fuel cycle centers present no real technical difficulties in siting from a radiological effluent and safety standpoint. Some construction economies may be attainable with fuel cycle centers; such centers offer opportunities for improved waste management systems. Combined centers consisting of reactors and fuel reprocessing and mixed oxide fuel fabrication plants were also studied in the NECSS. Such centers can eliminate not only shipment of plutonium, but also mixed oxide fuel. Increased fuel cycle costs result from implementation of combined centers unless the fuel reprocessing plants are commercial-sized. Development of plutonium-burning reactors could reduce any

  20. Prototype Vector Machine for Large Scale Semi-Supervised Learning

    Energy Technology Data Exchange (ETDEWEB)

    Zhang, Kai; Kwok, James T.; Parvin, Bahram

    2009-04-29

    Practical data mining rarely falls exactly into the supervised learning scenario. Rather, the growing amount of unlabeled data poses a big challenge to large-scale semi-supervised learning (SSL). We note that the computational intensiveness of graph-based SSL arises largely from the manifold or graph regularization, which in turn leads to large models that are difficult to handle. To alleviate this, we proposed the prototype vector machine (PVM), a highly scalable, graph-based algorithm for large-scale SSL. Our key innovation is the use of "prototype vectors" for efficient approximation on both the graph-based regularizer and model representation. The choice of prototypes is grounded upon two important criteria: they not only perform effective low-rank approximation of the kernel matrix, but also span a model suffering the minimum information loss compared with the complete model. We demonstrate encouraging performance and appealing scaling properties of the PVM on a number of machine learning benchmark data sets.
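
    The prototype idea can be sketched with a Nystroem-style low-rank kernel approximation: an n x m cross-kernel against m prototypes replaces the full n x n kernel matrix. This shows only the approximation step, not the full PVM training; the RBF kernel, prototype count, and random data below are illustrative.

```python
# Sketch: approximate an n x n RBF kernel matrix with a Nystroem-style
# low-rank factorization built from m << n prototype vectors. Requires numpy.
import numpy as np

def rbf(X, Y, gamma=0.5):
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

rng = np.random.default_rng(4)
X = rng.standard_normal((1000, 10))
P = X[rng.choice(1000, size=50, replace=False)]   # 50 prototypes

Knm = rbf(X, P)                                   # n x m cross-kernel
Kmm = rbf(P, P)                                   # m x m prototype kernel
K_approx = Knm @ np.linalg.pinv(Kmm) @ Knm.T      # rank-m approximation

K_exact = rbf(X, X)
err = np.linalg.norm(K_exact - K_approx) / np.linalg.norm(K_exact)
print(f"relative approximation error: {err:.3f}")
```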

  1. Dipolar modulation of Large-Scale Structure

    Science.gov (United States)

    Yoon, Mijin

    For the last two decades, we have seen a drastic development of modern cosmology based on various observations such as the cosmic microwave background (CMB), type Ia supernovae, and baryonic acoustic oscillations (BAO). This observational evidence has led us to a great deal of consensus on the cosmological model so-called LambdaCDM and tight constraints on the cosmological parameters constituting the model. On the other hand, the advancement in cosmology relies on the cosmological principle: the universe is isotropic and homogeneous on large scales. Testing these fundamental assumptions is crucial and will soon become possible given the planned observations ahead. Dipolar modulation is the largest angular anisotropy of the sky, which is quantified by its direction and amplitude. We measured a huge dipolar modulation in the CMB, which mainly originated from our solar system's motion relative to the CMB rest frame. However, we have not yet acquired consistent measurements of dipolar modulations in large-scale structure (LSS), as they require large sky coverage and a number of well-identified objects. In this thesis, we explore measurement of dipolar modulation in number counts of LSS objects as a test of statistical isotropy. This thesis is based on two papers that were published in peer-reviewed journals. In Chapter 2 [Yoon et al., 2014], we measured a dipolar modulation in number counts of WISE matched with 2MASS sources. In Chapter 3 [Yoon & Huterer, 2015], we investigated requirements for detection of kinematic dipole in future surveys.
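
    A minimal sketch of measuring such a dipole in number counts: model the counts per sky cell as N = N0(1 + d . n) and fit by linear least squares. The random cell directions and injected dipole below are synthetic stand-ins; real analyses work with masked HEALPix maps and survey selection functions.

```python
# Sketch of estimating a dipolar modulation N(n) = N0 * (1 + d . n) from
# source counts in sky cells by linear least squares. Synthetic data only.
import numpy as np

rng = np.random.default_rng(5)
ncell = 3000
n_hat = rng.standard_normal((ncell, 3))
n_hat /= np.linalg.norm(n_hat, axis=1, keepdims=True)   # random directions

d_true = np.array([0.005, 0.0, 0.01])                   # injected dipole
counts = rng.poisson(200 * (1 + n_hat @ d_true))        # Poisson cell counts

# Model counts = a0 + a . n_hat; the dipole vector is a / a0
M = np.column_stack([np.ones(ncell), n_hat])
coef, *_ = np.linalg.lstsq(M, counts, rcond=None)
d_est = coef[1:] / coef[0]
print("recovered dipole:", d_est)
```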

  2. Status: Large-scale subatmospheric cryogenic systems

    International Nuclear Information System (INIS)

    Peterson, T.

    1989-01-01

    In the late 1960's and early 1970's an interest in testing and operating RF cavities at 1.8K motivated the development and construction of four large (300 Watt) 1.8K refrigeration systems. In the past decade, development of successful superconducting RF cavities and interest in obtaining higher magnetic fields with the improved Niobium-Titanium superconductors has once again created interest in large-scale 1.8K refrigeration systems. The L'Air Liquide plant for Tore Supra is a recently commissioned 300 Watt 1.8K system which incorporates new technology, cold compressors, to obtain the low vapor pressure for low temperature cooling. CEBAF proposes to use cold compressors to obtain 5KW at 2.0K. Magnetic refrigerators of 10 Watt capacity or higher at 1.8K are now being developed. The state of the art of large-scale refrigeration in the range under 4K will be reviewed. 28 refs., 4 figs., 7 tabs

  3. Applicability of vector processing to large-scale nuclear codes

    International Nuclear Information System (INIS)

    Ishiguro, Misako; Harada, Hiroo; Matsuura, Toshihiko; Okuda, Motoi; Ohta, Fumio; Umeya, Makoto.

    1982-03-01

    To meet the growing trend of computational requirements in JAERI, introduction of a high-speed computer with vector processing capability (a vector processor) is desirable in the near future. To make effective use of a vector processor, appropriate optimization of nuclear codes to pipelined-vector architecture is vital, which will pose new problems concerning code development and maintenance. In this report, vector processing efficiency is assessed with respect to large-scale nuclear codes by examining the following items: 1) The present feature of computational load in JAERI is analyzed by compiling the computer utilization statistics. 2) Vector processing efficiency is estimated for the ten heavily-used nuclear codes by analyzing their dynamic behaviors run on a scalar machine. 3) Vector processing efficiency is measured for the other five nuclear codes by using the current vector processors, FACOM 230-75 APU and CRAY-1. 4) Effectiveness of applying a high-speed vector processor to nuclear codes is evaluated by taking account of the characteristics in JAERI jobs. Problems of vector processors are also discussed from the viewpoints of code performance and ease of use. (author)

  4. Large-scale derived flood frequency analysis based on continuous simulation

    Science.gov (United States)

    Dung Nguyen, Viet; Hundecha, Yeshewatesfa; Guse, Björn; Vorogushyn, Sergiy; Merz, Bruno

    2016-04-01

    There is an increasing need for spatially consistent flood risk assessments at the regional scale (several 100,000 km2), in particular in the insurance industry and for national risk reduction strategies. However, most large-scale flood risk assessments are composed of smaller-scale assessments and show spatial inconsistencies. To overcome this deficit, a large-scale flood model composed of a weather generator and catchment models was developed, reflecting the spatially inherent heterogeneity. The weather generator is a multisite and multivariate stochastic model capable of generating synthetic meteorological fields (precipitation, temperature, etc.) at daily resolution for the regional scale. These fields respect the observed autocorrelation, spatial correlation and co-variance between the variables. They are used as input into catchment models. A long-term simulation of this combined system enables very long discharge series to be derived at many catchment locations, serving as a basis for spatially consistent flood risk estimates at the regional scale. This combined model was set up and validated for major river catchments in Germany. The weather generator was trained on 53 years of observation data at 528 stations covering not only the whole of Germany but also parts of France, Switzerland, the Czech Republic and Austria, with an aggregated spatial scale of 443,931 km2. 10,000 years of daily meteorological fields for the study area were generated. Likewise, rainfall-runoff simulations with SWIM were performed for the entire Elbe, Rhine, Weser, Donau and Ems catchments. The validation results illustrate a good performance of the combined system, as the simulated flood magnitudes and frequencies agree well with the observed flood data. Based on continuous simulation, this model chain is then used to estimate flood quantiles for the whole of Germany, including upstream headwater catchments in neighbouring countries. This continuous large scale approach overcomes the several
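
    The final step of such a continuous-simulation chain can be sketched simply: given a very long synthetic daily discharge series, extract annual maxima and read off empirical flood quantiles. The gamma-distributed series below is a placeholder for the weather-generator plus SWIM output; only the quantile derivation is illustrated.

```python
# Sketch of derived flood frequency analysis: annual maxima from a long
# synthetic daily discharge series, then empirical T-year quantiles.
# The gamma series is a stand-in for a real model chain. Requires numpy.
import numpy as np

rng = np.random.default_rng(6)
years = 10_000
daily = rng.gamma(shape=2.0, scale=50.0, size=(years, 365))  # m^3/s
annual_max = daily.max(axis=1)

for T in (10, 100, 1000):
    # Empirical T-year quantile: non-exceedance probability 1 - 1/T
    q = np.quantile(annual_max, 1 - 1 / T)
    print(f"{T:>5}-year flood: {q:7.1f} m^3/s")
```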

  5. Process of performance assessment

    International Nuclear Information System (INIS)

    King, C.M.; Halford, D.K.

    1987-01-01

    Performance assessment is the process used to evaluate the environmental consequences of disposal of radioactive waste in the biosphere. An introductory review of the subject is presented. Emphasis is placed on the process of performance assessment from the standpoint of defining the process. Performance assessment, from evolving experience at DOE sites, has short-term and long-term subprograms, the components of which are discussed. The role of mathematical modeling in performance assessment is addressed including the pros and cons of current approaches. Finally, the system/site/technology issues as the focal point of this symposium are reviewed

  6. The Phoenix series large scale LNG pool fire experiments.

    Energy Technology Data Exchange (ETDEWEB)

    Simpson, Richard B.; Jensen, Richard Pearson; Demosthenous, Byron; Luketa, Anay Josephine; Ricks, Allen Joseph; Hightower, Marion Michael; Blanchat, Thomas K.; Helmick, Paul H.; Tieszen, Sheldon Robert; Deola, Regina Anne; Mercier, Jeffrey Alan; Suo-Anttila, Jill Marie; Miller, Timothy J.

    2010-12-01

    The increasing demand for natural gas could increase the number and frequency of Liquefied Natural Gas (LNG) tanker deliveries to ports across the United States. Because of the increasing number of shipments and the number of possible new facilities, concerns about the safety of the public and property from accidental, and even more importantly intentional, spills have increased. While improvements have been made over the past decade in assessing hazards from LNG spills, the existing experimental data is much smaller in size and scale than many postulated large accidental and intentional spills. Since the physics and hazards from a fire change with fire size, there are concerns about the adequacy of current hazard prediction techniques for large LNG spills and fires. To address these concerns, Congress funded the Department of Energy (DOE) in 2008 to conduct a series of laboratory and large-scale LNG pool fire experiments at Sandia National Laboratories (Sandia) in Albuquerque, New Mexico. This report presents the test data and results of both sets of fire experiments. A series of five reduced-scale (gas burner) tests (yielding 27 sets of data) were conducted in 2007 and 2008 at Sandia's Thermal Test Complex (TTC) to assess flame height to fire diameter ratios as a function of nondimensional heat release rates for extrapolation to large-scale LNG fires. The large-scale LNG pool fire experiments were conducted in a 120 m diameter pond specially designed and constructed in Sandia's Area III large-scale test complex. Two fire tests of LNG spills of 21 and 81 m in diameter were conducted in 2009 to improve the understanding of flame height, smoke production, and burn rate and therefore the physics and hazards of large LNG spills and fires.

  7. A Diagnostic Comparison of Turkish and Korean Students’ Mathematics Performances on the TIMSS 2011 Assessment

    Directory of Open Access Journals (Sweden)

    Sedat Şen

    2015-11-01

    Full Text Available The purpose of the present study was to analyze an international large-scale data set using a cognitive assessment approach. Although some researchers question the usefulness of international large-scale assessments (e.g., TIMSS), participating countries have continued to use the results from these large-scale assessments to improve their curricula and teaching methods. Although the common single-score reporting practice in these large-scale assessments gives useful insights into students' overall performances, it still lacks diagnostic information. Cognitive diagnosis models (CDMs) were developed to provide more feedback on students' cognitive strengths and weaknesses. This study retrofitted the TIMSS 2011 eighth grade mathematics assessment by applying a specific CDM called the DINA (deterministic inputs, noisy "and" gate) model to data from South Korea and Turkey. Results of the DINA model were used to make a detailed comparison between students of these two countries.
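
    The DINA model itself is compact enough to sketch: an examinee answers item j correctly with probability (1 - s_j) if they master every attribute the Q-matrix requires for item j, and with guessing probability g_j otherwise. The Q-matrix, slip, and guess values below are invented for illustration; the study's TIMSS calibration is not reproduced.

```python
# Sketch of the DINA item response function:
#   P(correct) = (1 - slip_j)^eta_j * guess_j^(1 - eta_j),
# where eta_j = 1 iff the examinee masters all attributes required by
# the Q-matrix for item j. Illustrative values only. Requires numpy.
import numpy as np

Q = np.array([[1, 0, 0],        # item 1 requires attribute 1
              [1, 1, 0],        # item 2 requires attributes 1 and 2
              [0, 1, 1]])       # item 3 requires attributes 2 and 3
slip = np.array([0.1, 0.2, 0.15])
guess = np.array([0.2, 0.1, 0.25])

alpha = np.array([1, 1, 0])     # examinee masters attributes 1 and 2 only

eta = np.all(alpha >= Q, axis=1).astype(float)
p_correct = (1 - slip) ** eta * guess ** (1 - eta)
print(p_correct)                # [0.9, 0.8, 0.25]
```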

  8. Real-time simulation of large-scale floods

    Science.gov (United States)

    Liu, Q.; Qin, Y.; Li, G. D.; Liu, Z.; Cheng, D. J.; Zhao, Y. H.

    2016-08-01

    Given the complexity of real-time water situations, real-time simulation of large-scale floods is very important for flood prevention practice. Model robustness and running efficiency are two critical factors in successful real-time flood simulation. This paper proposed a robust, two-dimensional, shallow water model based on the unstructured Godunov-type finite volume method. A robust wet/dry front method is used to enhance the numerical stability. An adaptive method is proposed to improve the running efficiency. The proposed model is used for large-scale flood simulation on real topography. Results compared to those of MIKE21 show the strong performance of the proposed model.
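
    Two of the robustness ingredients named above can be sketched briefly: a depth threshold that suppresses spurious velocities at wet/dry fronts, and an adaptive time step from the CFL condition. The threshold and CFL number below are illustrative, not the paper's calibrated values.

```python
# Sketch of two robustness ingredients: a wet/dry depth threshold and an
# adaptive CFL time step. Illustrative values only. Requires numpy.
import numpy as np

g, h_dry, cfl, dx = 9.81, 1e-6, 0.9, 10.0

def velocities(h, hu):
    # Suppress spurious velocities in near-dry cells
    return np.where(h > h_dry, hu / np.maximum(h, h_dry), 0.0)

def adaptive_dt(h, hu):
    # Largest stable step from the CFL condition: dt <= cfl * dx / max(|u|+c)
    u = velocities(h, hu)
    wave = np.abs(u) + np.sqrt(g * np.maximum(h, 0.0))
    return cfl * dx / max(wave.max(), 1e-12)

h = np.array([2.0, 1.0, 1e-9, 0.0])   # last two cells effectively dry
hu = np.array([1.0, 0.5, 1e-3, 0.0])
print(velocities(h, hu), adaptive_dt(h, hu))
```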

  9. Large-scale digitizer system, analog converters

    International Nuclear Information System (INIS)

    Althaus, R.F.; Lee, K.L.; Kirsten, F.A.; Wagner, L.J.

    1976-10-01

    Analog to digital converter circuits that are based on the sharing of common resources, including those which are critical to the linearity and stability of the individual channels, are described. Simplicity of circuit composition is valued over other more costly approaches. These are intended to be applied in a large-scale processing and digitizing system for use with high-energy physics detectors such as drift-chambers or phototube-scintillator arrays. Signal distribution techniques are of paramount importance in maintaining adequate signal-to-noise ratio. Noise in both amplitude and time-jitter senses is held sufficiently low so that conversions with 10-bit charge resolution and 12-bit time resolution are achieved

  10. Engineering management of large scale systems

    Science.gov (United States)

    Sanders, Serita; Gill, Tepper L.; Paul, Arthur S.

    1989-01-01

    The organization of high technology and engineering problem solving has given rise to an emerging concept. Reasoning principles for integrating traditional engineering problem solving with system theory, management sciences, behavioral decision theory, and planning and design approaches can be incorporated into a methodological approach to solving problems with a long range perspective. Long range planning has a great potential to improve productivity by using a systematic and organized approach. Thus, efficiency and cost effectiveness are the driving forces in promoting the organization of engineering problems. Aspects of systems engineering that provide an understanding of management of large scale systems are broadly covered here. Due to the focus and application of research, other significant factors (e.g., human behavior, decision making, etc.) are not emphasized but are considered.

  11. Grid sensitivity capability for large scale structures

    Science.gov (United States)

    Nagendra, Gopal K.; Wallerstein, David V.

    1989-01-01

    The considerations and the resultant approach used to implement design sensitivity capability for grids into a large scale, general purpose finite element system (MSC/NASTRAN) are presented. The design variables are grid perturbations with a rather general linking capability. Moreover, shape and sizing variables may be linked together. The design is general enough to facilitate geometric modeling techniques for generating design variable linking schemes in an easy and straightforward manner. Test cases have been run and validated by comparison with the overall finite difference method. The linking of a design sensitivity capability for shape variables in MSC/NASTRAN with an optimizer would give designers a powerful, automated tool to carry out practical optimization design of real life, complicated structures.

  12. Large-scale Rectangular Ruler Automated Verification Device

    Science.gov (United States)

    Chen, Hao; Chang, Luping; Xing, Minjian; Xie, Xie

    2018-03-01

    This paper introduces a large-scale rectangular ruler automated verification device, which consists of a photoelectric autocollimator, a self-designed mechanical drive car, and an automatic data acquisition system. The design of the mechanical structure covers the optical axis, the drive, the fixture device, and the wheels. The design of the control system covers hardware and software: the hardware is mainly a single-chip system, and the software handles the photoelectric autocollimator and the automatic data acquisition process. The device can acquire verticality measurement data automatically. The reliability of the device is verified by experimental comparison. The results meet the requirements of the right angle test procedure.

  13. Large Scale Landform Mapping Using Lidar DEM

    Directory of Open Access Journals (Sweden)

    Türkay Gökgöz

    2015-08-01

    In this study, LIDAR DEM data were used to obtain a primary landform map in accordance with a well-known methodology. This primary landform map was generalized using the Focal Statistics (Majority) tool, taking into account the minimum-area condition of cartographic generalization, in order to obtain landform maps at the 1:1000 and 1:5000 scales. Both the primary and the generalized landform maps were verified visually against a hillshaded DEM and an orthophoto. The resulting maps provide satisfactory depictions of the landforms. To show the effect of generalization, the area of each landform in both the primary and the generalized maps was computed. The study demonstrates that landform maps at large scales can be obtained with the proposed methodology, including generalization, using LIDAR DEM.
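
    The focal-majority generalization described above can be approximated outside a GIS with a moving-window mode filter. The following sketch, assuming an integer-coded landform raster and a 5x5 window (illustrative values, not those of the study), uses scipy.ndimage.generic_filter in Python:

        import numpy as np
        from scipy import ndimage
        from scipy.stats import mode

        # Integer-coded landform raster (classes 0..3); a synthetic stand-in
        # for the LIDAR-derived primary landform map.
        rng = np.random.default_rng(0)
        landforms = rng.integers(0, 4, size=(100, 100))

        def focal_majority(window):
            # Most frequent class within the moving window (ties broken by smallest code).
            return mode(window, keepdims=False).mode

        # 5x5 focal-majority generalization, analogous to the Focal Statistics (Majority) tool.
        generalized = ndimage.generic_filter(landforms, focal_majority, size=5)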

  14. Constructing sites on a large scale

    DEFF Research Database (Denmark)

    Braae, Ellen Marie; Tietjen, Anne

    2011-01-01

    Since the 1990s, the regional scale has regained importance in urban and landscape design. In parallel, the focus in design tasks has shifted from master plans for urban extension to strategic urban transformation projects. A prominent example of a contemporary spatial development approach...... is the IBA Emscher Park in the Ruhr area in Germany. Over a 10-year period (1988-1998), more than 100 local transformation projects contributed to the transformation from an industrial to a post-industrial region. The current paradigm of planning by projects reinforces the role of the design disciplines...... for setting the design brief in a large-scale urban landscape in Norway, the Jaeren region around the city of Stavanger. In this paper, we first outline the methodological challenges and then present and discuss the proposed method based on our teaching experiences. On this basis, we discuss aspects......

  15. Large scale study of tooth enamel

    International Nuclear Information System (INIS)

    Bodart, F.; Deconninck, G.; Martin, M.T.

    Human tooth enamel contains traces of foreign elements. The presence of these elements is related to the history and the environment of the human body and can be considered the signature of perturbations which occur during the growth of a tooth. A map of the distribution of these traces over a large-scale sample of the population will constitute a reference for further investigations of environmental effects. One hundred eighty samples of teeth were first analyzed using PIXE, backscattering and nuclear reaction techniques. The results were analyzed using statistical methods. Correlations between O, F, Na, P, Ca, Mn, Fe, Cu, Zn, Pb and Sr were observed, and cluster analysis is in progress. The techniques described in the present work have been developed in order to establish a method for the exploration of very large samples of the Belgian population. (author)

  16. Testing Einstein's Gravity on Large Scales

    Science.gov (United States)

    Prescod-Weinstein, Chandra

    2011-01-01

    A little over a decade has passed since two teams studying high redshift Type Ia supernovae announced the discovery that the expansion of the universe was accelerating. After all this time, we're still not sure how cosmic acceleration fits into the theory that tells us about the large-scale universe: General Relativity (GR). As part of our search for answers, we have been forced to question GR itself. But how will we test our ideas? We are fortunate enough to be entering the era of precision cosmology, where the standard model of gravity can be subjected to more rigorous testing. Various techniques will be employed over the next decade or two in the effort to better understand cosmic acceleration and the theory behind it. In this talk, I will describe cosmic acceleration, current proposals to explain it, and weak gravitational lensing, an observational effect that allows us to do the necessary precision cosmology.

  17. Large-scale preparation of hollow graphitic carbon nanospheres

    International Nuclear Information System (INIS)

    Feng, Jun; Li, Fu; Bai, Yu-Jun; Han, Fu-Dong; Qi, Yong-Xin; Lun, Ning; Lu, Xi-Feng

    2013-01-01

    Hollow graphitic carbon nanospheres (HGCNSs) were synthesized on a large scale by a simple reaction between glucose and Mg at 550 °C in an autoclave. Characterization by X-ray diffraction, Raman spectroscopy and transmission electron microscopy demonstrates the formation of HGCNSs with an average diameter of about 10 nm and a wall thickness of a few graphene layers. The HGCNSs exhibit a reversible capacity of 391 mAh g⁻¹ after 60 cycles when used as anode materials for Li-ion batteries. -- Graphical abstract: Hollow graphitic carbon nanospheres could be prepared on a large scale by the simple reaction between glucose and Mg at 550 °C, and they exhibit electrochemical performance superior to that of graphite. Highlights: ► Hollow graphitic carbon nanospheres (HGCNSs) were prepared on a large scale at 550 °C. ► The preparation is simple, effective and eco-friendly. ► The in situ yielded MgO nanocrystals promote the graphitization. ► The HGCNSs exhibit electrochemical performance superior to that of graphite.

  18. Large-scale preparation of hollow graphitic carbon nanospheres

    Energy Technology Data Exchange (ETDEWEB)

    Feng, Jun; Li, Fu [Key Laboratory for Liquid-Solid Structural Evolution and Processing of Materials, Ministry of Education, Shandong University, Jinan 250061 (China); Bai, Yu-Jun, E-mail: byj97@126.com [Key Laboratory for Liquid-Solid Structural Evolution and Processing of Materials, Ministry of Education, Shandong University, Jinan 250061 (China); State Key laboratory of Crystal Materials, Shandong University, Jinan 250100 (China); Han, Fu-Dong; Qi, Yong-Xin; Lun, Ning [Key Laboratory for Liquid-Solid Structural Evolution and Processing of Materials, Ministry of Education, Shandong University, Jinan 250061 (China); Lu, Xi-Feng [Lunan Institute of Coal Chemical Engineering, Jining 272000 (China)

    2013-01-15

    Hollow graphitic carbon nanospheres (HGCNSs) were synthesized on a large scale by a simple reaction between glucose and Mg at 550 °C in an autoclave. Characterization by X-ray diffraction, Raman spectroscopy and transmission electron microscopy demonstrates the formation of HGCNSs with an average diameter of about 10 nm and a wall thickness of a few graphene layers. The HGCNSs exhibit a reversible capacity of 391 mAh g⁻¹ after 60 cycles when used as anode materials for Li-ion batteries. -- Graphical abstract: Hollow graphitic carbon nanospheres could be prepared on a large scale by the simple reaction between glucose and Mg at 550 °C, which exhibit electrochemical performance superior to that of graphite. Highlights: ► Hollow graphitic carbon nanospheres (HGCNSs) were prepared on a large scale at 550 °C. ► The preparation is simple, effective and eco-friendly. ► The in situ yielded MgO nanocrystals promote the graphitization. ► The HGCNSs exhibit electrochemical performance superior to that of graphite.

  19. Large-scale sequential quadratic programming algorithms

    Energy Technology Data Exchange (ETDEWEB)

    Eldersveld, S.K.

    1992-09-01

    The problem addressed is the general nonlinear programming problem: finding a local minimizer of a nonlinear function subject to a mixture of nonlinear equality and inequality constraints. The methods studied are in the class of sequential quadratic programming (SQP) algorithms, which have previously proved successful for problems of moderate size. Our goal is to devise an SQP algorithm that is applicable to large-scale optimization problems, using sparse data structures and storing less curvature information while maintaining the property of superlinear convergence. The main features are: 1. The use of a quasi-Newton approximation to the reduced Hessian of the Lagrangian function; only an estimate of the reduced Hessian matrix is required by our algorithm, the impact of not having the full Hessian approximation available is studied, and alternative estimates are constructed. 2. The use of a transformation matrix Q, which allows the QP gradient to be computed easily when only the reduced Hessian approximation is maintained. 3. The use of a reduced-gradient form of the basis for the null space of the working set; this choice of basis is more practical than an orthogonal null-space basis for large-scale problems, and the continuity condition for this choice is proven. 4. The use of incomplete solutions of quadratic programming subproblems: certain iterates generated by an active-set method for the QP subproblem are used in place of the QP minimizer to define the search direction for the nonlinear problem. An implementation of the new algorithm has been obtained by modifying the code MINOS. Results and comparisons with MINOS and NPSOL are given for the new algorithm on a set of 92 test problems.
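
    For readers who want to experiment with SQP-type methods without the modified MINOS code described above, a readily available stand-in is the SLSQP implementation in SciPy. The sketch below solves a small constrained problem of the general class targeted by SQP; it illustrates the problem formulation only and is not the author's large-scale algorithm:

        import numpy as np
        from scipy.optimize import minimize

        # Minimize a nonlinear objective subject to one equality and one inequality
        # constraint, the general problem class targeted by SQP methods.
        def objective(x):
            return (x[0] - 1.0) ** 2 + (x[1] - 2.5) ** 2

        constraints = [
            {"type": "eq",   "fun": lambda x: x[0] + x[1] - 3.0},  # x0 + x1 = 3
            {"type": "ineq", "fun": lambda x: x[0] - 0.5},         # x0 >= 0.5
        ]
        result = minimize(objective, x0=np.array([2.0, 0.0]),
                          method="SLSQP", constraints=constraints)
        print(result.x, result.fun)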

  20. Design study on sodium cooled large-scale reactor

    International Nuclear Information System (INIS)

    Murakami, Tsutomu; Hishida, Masahiko; Kisohara, Naoyuki

    2004-07-01

    In Phase 1 of the 'Feasibility Studies on Commercialized Fast Reactor Cycle Systems (F/S)', an advanced loop-type reactor was selected as a promising concept for a sodium-cooled large-scale reactor with the potential to fulfill the design requirements of the F/S. In Phase 2, design improvement aimed at further cost reduction and establishment of the plant concept has been performed. This report summarizes the results of the design study on the sodium-cooled large-scale reactor performed in JFY2003, the third year of Phase 2. In the JFY2003 design study, critical subjects related to safety, structural integrity and thermal hydraulics identified in the previous fiscal year were examined and the plant concept was modified accordingly. Furthermore, fundamental specifications of the main systems and components were set and the economics were evaluated. In addition, for the interim evaluation of the candidate concepts of the FBR fuel cycle, cost effectiveness and achievability of the development goal were evaluated and the data for the three large-scale reactor candidate concepts were prepared. As a result of this study, a plant concept for the sodium-cooled large-scale reactor has been constructed which has a prospect of satisfying the economic goal (construction cost: less than 200,000 yen/kWe, etc.) and of resolving the critical subjects. From now on, reflecting the results of elemental experiments, the preliminary conceptual design of this plant will proceed toward the selection for narrowing down candidate concepts at the end of Phase 2. (author)

  1. NRC performance assessment program

    International Nuclear Information System (INIS)

    Coplan, S.M.

    1986-01-01

    The U.S. Nuclear Regulatory Commission's (NRC) performance assessment program includes the development of guidance to the U.S. Department of Energy (DOE) on preparing a license application and on conducting the studies to support a license application. The nature of the licensing requirements of 10 CFR Part 60 creates a need for performance assessments by the DOE. The NRC and DOE staffs each have specific roles in assuring the adequacy of those assessments. Performance allocation is an approach for determining what testing and analysis will be needed during site characterization to assure that an adequate data base is available to support the necessary performance assessments. From the standpoint of establishing an implementable methodology, the most challenging performance assessment needed for licensing is the one that will be used to determine compliance with the U.S. Environmental Protection Agency's (EPA) containment requirement.

  2. Large-scale motions in the universe: a review

    International Nuclear Information System (INIS)

    Burstein, D.

    1990-01-01

    The expansion of the universe can be retarded in localised regions within the universe both by the presence of gravity and by non-gravitational motions generated in the post-recombination universe. The motions of galaxies thus generated are called 'peculiar motions', and the amplitudes, size scales and coherence of these peculiar motions are among the most direct records of the structure of the universe. As such, measurements of these properties of the present-day universe provide some of the severest tests of cosmological theories. This is a review of the current evidence for large-scale motions of galaxies out to a distance of ∼5000 km s⁻¹ (in an expanding universe, distance is proportional to radial velocity). 'Large-scale' in this context refers to motions that are correlated over size scales larger than the typical sizes of groups of galaxies, up to and including the size of the volume surveyed. To orient the reader into this relatively new field of study, a short modern history is given together with an explanation of the terminology. Careful consideration is given to the data used to measure the distances, and hence the peculiar motions, of galaxies. The evidence for large-scale motions is presented in a graphical fashion, using only the most reliable data for galaxies spanning a wide range in optical properties and over the complete range of galactic environments. The kinds of systematic errors that can affect this analysis are discussed, and the reliability of these motions is assessed. The predictions of two models of large-scale motion are compared to the observations, and special emphasis is placed on those motions in which our own Galaxy directly partakes. (author)

  3. Analysis using large-scale ringing data

    Directory of Open Access Journals (Sweden)

    Baillie, S. R.

    2004-06-01

    survival and recruitment estimates from the French CES scheme to assess the relative contributions of survival and recruitment to overall population changes. He develops a novel approach to modelling survival rates from such multi-site data by using within-year recaptures to provide a covariate of between-year recapture rates. This provided parsimonious models of variation in recapture probabilities between sites and years. The approach provides promising results for the four species investigated and can potentially be extended to similar data from other CES/MAPS schemes. The final paper by Blandine Doligez, David Thomson and Arie van Noordwijk (Doligez et al., 2004) illustrates how large-scale studies of population dynamics can be important for evaluating the effects of conservation measures. Their study is concerned with the reintroduction of White Stork populations to the Netherlands, where a re-introduction programme started in 1969 had resulted in a breeding population of 396 pairs by 2000. They demonstrate the need to consider a wide range of models in order to account for potential age, time, cohort and "trap-happiness" effects. As the data are based on resightings, such trap-happiness must reflect some form of heterogeneity in resighting probabilities. Perhaps surprisingly, the provision of supplementary food did not influence survival, but it may have had an indirect effect via the alteration of migratory behaviour. Spatially explicit modelling of data gathered at many sites inevitably results in starting models with very large numbers of parameters. The problem is often complicated further by having relatively sparse data at each site, even where the total amount of data gathered is very large. Both Julliard (2004) and Doligez et al. (2004) give explicit examples of problems caused by needing to handle very large numbers of parameters and show how they overcame them for their particular data sets. Such problems involve both the choice of appropriate

  4. A Decentralized Multivariable Robust Adaptive Voltage and Speed Regulator for Large-Scale Power Systems

    Science.gov (United States)

    Okou, Francis A.; Akhrif, Ouassima; Dessaint, Louis A.; Bouchard, Derrick

    2013-05-01

    This paper introduces a decentralized multivariable robust adaptive voltage and frequency regulator to ensure the stability of large-scale interconnected generators. Interconnection parameters (i.e., load, line and transformer parameters) are assumed to be unknown. The proposed design approach requires the reformulation of conventional power system models into a multivariable model with generator terminal voltages as state variables, and excitation and turbine valve inputs as control signals. This model, while suitable for the application of modern control methods, introduces problems with regard to current design techniques for large-scale systems: the interconnection terms, which are treated as perturbations, do not meet the common matching-condition assumption. A new adaptive method for a certain class of large-scale systems is therefore introduced that does not require the matching condition. The proposed controller consists of nonlinear inputs that cancel some nonlinearities of the model. Auxiliary controls with linear and nonlinear components are used to stabilize the system. They compensate for unknown parameters of the model by updating both the nonlinear component gains and the excitation parameters. The adaptation algorithms involve the sigma-modification approach for the auxiliary control gains and the projection approach for the excitation parameters, to prevent estimation drift. The computation of the matrix gain of the controller's linear component requires the solution of an algebraic Riccati equation and helps to solve the perturbation-mismatching problem. A realistic power system is used to assess the proposed controller's performance. The results show that both stability and transient performance are considerably improved following a severe contingency.
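
    The algebraic Riccati equation mentioned above is a standard ingredient of linear-quadratic control, and off-the-shelf solvers exist. The sketch below computes a stabilizing state-feedback gain from the continuous-time algebraic Riccati equation with SciPy; the system matrices are arbitrary illustrative values, not the power-system model of the paper:

        import numpy as np
        from scipy.linalg import solve_continuous_are

        # Toy linear system dx/dt = A x + B u; the values are illustrative only.
        A = np.array([[0.0, 1.0],
                      [-2.0, -0.5]])
        B = np.array([[0.0],
                      [1.0]])
        Q = np.eye(2)          # state weighting
        R = np.array([[1.0]])  # input weighting

        # Solve A'P + PA - P B R^{-1} B' P + Q = 0, then form the gain K = R^{-1} B' P.
        P = solve_continuous_are(A, B, Q, R)
        K = np.linalg.solve(R, B.T @ P)
        print("feedback gain K =", K)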

  5. Parallel clustering algorithm for large-scale biological data sets.

    Science.gov (United States)

    Wang, Minchao; Zhang, Wu; Ding, Wang; Dai, Dongbo; Zhang, Huiran; Xie, Hao; Chen, Luonan; Guo, Yike; Xie, Jiang

    2014-01-01

    The recent explosion of biological data poses a great challenge for traditional clustering algorithms: with the increasing scale of data sets, much larger memory and longer runtimes are required for cluster identification. The affinity propagation algorithm outperforms many other classical clustering algorithms and is widely applied in biological research. However, its time and space complexity become a serious bottleneck when handling large-scale data sets. Moreover, the similarity matrix, whose construction takes a long runtime, is required before running the affinity propagation algorithm, since the algorithm clusters data sets based on the similarities between data pairs. Two types of parallel architectures are proposed in this paper to accelerate the similarity-matrix construction and the affinity propagation algorithm. A shared-memory architecture is used to construct the similarity matrix, and a distributed system is used for the affinity propagation algorithm, because of its large memory size and great computing capacity. An appropriate scheme for data partition and reduction is designed in our method in order to minimize the global communication cost among processes. A speedup of 100 is gained with 128 cores. The runtime is reduced from several hours to a few seconds, which indicates that the parallel algorithm is capable of handling large-scale data sets effectively. The parallel affinity propagation also achieves good performance when clustering large-scale gene data (microarray) and detecting families in large protein superfamilies.
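
    As a single-machine baseline for the parallel scheme described above, serial affinity propagation is available in scikit-learn and can likewise be run on a precomputed similarity matrix. The sketch below uses the negative squared Euclidean distance as the similarity, a common choice; the data are synthetic and the example is illustrative, not the authors' parallel implementation:

        import numpy as np
        from sklearn.cluster import AffinityPropagation

        # Synthetic data standing in for expression profiles: three groups.
        rng = np.random.default_rng(0)
        X = np.vstack([rng.normal(loc=c, scale=0.3, size=(50, 10)) for c in (0.0, 2.0, 4.0)])

        # Precomputed similarity: negative squared Euclidean distance between all pairs.
        diff = X[:, None, :] - X[None, :, :]
        S = -np.einsum("ijk,ijk->ij", diff, diff)

        ap = AffinityPropagation(affinity="precomputed", random_state=0).fit(S)
        print("number of clusters:", len(ap.cluster_centers_indices_))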

  6. Accelerating large-scale phase-field simulations with GPU

    Directory of Open Access Journals (Sweden)

    Xiaoming Shi

    2017-10-01

    A new package for accelerating large-scale phase-field simulations was developed using GPUs, based on the semi-implicit Fourier method. The package can solve a variety of equilibrium equations with different inhomogeneities, including long-range elastic, magnetostatic, and electrostatic interactions. Using algorithms specific to the Compute Unified Device Architecture (CUDA), the Fourier spectral iterative perturbation method was integrated into the GPU package. The Allen-Cahn equation, the Cahn-Hilliard equation, and a phase-field model with long-range interaction were solved on the GPU to test the performance of the package. A comparison of the results between the solver executed on a single CPU and the one on the GPU showed that the GPU version is up to 50 times faster. The present study therefore contributes to the acceleration of large-scale phase-field simulations and provides guidance for experiments to design large-scale functional devices.
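
    The semi-implicit Fourier method underlying the package treats the stiff linear (gradient) term implicitly in Fourier space while evaluating the nonlinear term explicitly. The CPU sketch below applies one such scheme to the 2-D Allen-Cahn equation with NumPy FFTs; the mobility, gradient coefficient and time step are assumed illustrative values, and the CUDA/GPU layer of the package is not reproduced:

        import numpy as np

        # 2-D Allen-Cahn: du/dt = -M * (u**3 - u - kappa * laplacian(u)),
        # advanced with a semi-implicit Fourier scheme (linear term implicit).
        N, dx, dt, M, kappa = 128, 1.0, 0.1, 1.0, 1.0
        k = 2.0 * np.pi * np.fft.fftfreq(N, d=dx)
        k2 = k[:, None] ** 2 + k[None, :] ** 2        # |k|^2 on the grid

        rng = np.random.default_rng(0)
        u = 0.1 * rng.standard_normal((N, N))         # small random initial condition

        for _ in range(1000):
            nonlinear = u ** 3 - u                    # explicit bulk driving force
            u_hat = (np.fft.fft2(u) - dt * M * np.fft.fft2(nonlinear)) \
                    / (1.0 + dt * M * kappa * k2)     # implicit gradient term
            u = np.real(np.fft.ifft2(u_hat))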

  7. Utilization of Large Scale Surface Models for Detailed Visibility Analyses

    Science.gov (United States)

    Caha, J.; Kačmařík, M.

    2017-11-01

    This article demonstrates the utilization of large-scale surface models with small spatial resolution and high accuracy, acquired from Unmanned Aerial Vehicle scanning, for visibility analyses. The importance of large-scale data for visibility analyses at the local scale, where the detail of the surface model is the most defining factor, is described. The focus is not only on classic Boolean visibility, as usually determined within a GIS, but also on so-called extended viewsheds that aim to provide more information about visibility. A case study with examples of visibility analyses was performed on the river Opava, near the city of Ostrava (Czech Republic). A multiple Boolean viewshed analysis and a global horizon viewshed were calculated to determine the most prominent features and visibility barriers of the surface. In addition, an extended viewshed showing the angle difference above the local horizon, which describes the angular height of the target area above the barrier, is presented. The case study proved that large-scale models are an appropriate data source for visibility analyses at the local level. The discussion summarizes possible future applications and further development directions for visibility analyses.
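
    Boolean visibility between two cells of a surface model reduces to a line-of-sight test: sample the terrain along the ray and check whether any intermediate point rises above the sight line. The sketch below is a minimal single-target test on a NumPy height grid; the observer height, sampling density and toy grid are illustrative assumptions, and production viewshed tools use faster sweep algorithms:

        import numpy as np

        def line_of_sight(dem, obs, tgt, obs_height=1.7, samples=200):
            """Boolean visibility from cell obs to cell tgt on a regular height grid."""
            (r0, c0), (r1, c1) = obs, tgt
            z0 = dem[r0, c0] + obs_height            # eye level above the surface
            z1 = dem[r1, c1]
            for t in np.linspace(0.0, 1.0, samples)[1:-1]:
                r = int(round(r0 + t * (r1 - r0)))   # nearest-cell sampling along the ray
                c = int(round(c0 + t * (c1 - c0)))
                sight_z = z0 + t * (z1 - z0)         # height of the sight line at t
                if dem[r, c] > sight_z:              # terrain blocks the ray
                    return False
            return True

        dem = np.abs(np.subtract.outer(np.arange(50), np.arange(50))).astype(float)
        print(line_of_sight(dem, (0, 0), (49, 49)))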

  8. Image-based Exploration of Large-Scale Pathline Fields

    KAUST Repository

    Nagoor, Omniah H.

    2014-05-27

    While real-time applications are nowadays routinely used in visualizing large numerical simulations and volumes, handling these large-scale datasets requires high-end graphics clusters or supercomputers to process and visualize them. However, not all users have access to powerful clusters. Therefore, it is challenging to come up with a visualization approach that provides insight into large-scale datasets on a single computer. Explorable images (EI) is one of the methods that allows users to handle large data on a single workstation. Although it is a view-dependent method, it combines both exploration and modification of visual aspects without re-accessing the original huge data. In this thesis, we propose a novel image-based method that applies the concept of EI in visualizing large flow-field pathlines data. The goal of our work is to provide an optimized image-based method, which scales well with the dataset size. Our approach is based on constructing a per-pixel linked list data structure in which each pixel contains a list of pathline segments. With this view-dependent method it is possible to filter, color-code and explore large-scale flow data in real-time. In addition, optimization techniques such as early-ray termination and deferred shading are applied, which further improves the performance and scalability of our approach.

  9. [A large-scale accident in Alpine terrain].

    Science.gov (United States)

    Wildner, M; Paal, P

    2015-02-01

    Due to the geographical conditions, large-scale accidents amounting to mass casualty incidents (MCI) in Alpine terrain regularly present rescue teams with huge challenges. Using an example incident, the specific conditions and typical problems associated with such a situation are presented. The first rescue team members to arrive have the elementary tasks of qualified triage and communication with the control room, which is required to dispatch the necessary additional support. Only with a clear concept, to which all have to adhere, can the subsequent chaos phase be limited. In this respect, the time factor, compounded by adverse weather conditions or darkness, creates enormous pressure. Additional hazards are frostbite and hypothermia. If priorities can be established in terms of urgency, treatment and procedure algorithms have proven successful. For the evacuation of casualties, a helicopter should be sought. Due to the low density of hospitals in Alpine regions, it is often necessary to distribute the patients over a wide area. Rescue operations in Alpine terrain have to be performed according to the particular conditions and require rescue teams to have specific knowledge and expertise. The possibility of a large-scale accident should be considered when planning events. With respect to the optimization of rescue measures, regular training and exercises are advisable, as is the analysis of previous large-scale Alpine accidents.

  10. Design study on sodium-cooled large-scale reactor

    International Nuclear Information System (INIS)

    Shimakawa, Yoshio; Nibe, Nobuaki; Hori, Toru

    2002-05-01

    In Phase 1 of the 'Feasibility Study on Commercialized Fast Reactor Cycle Systems (F/S)', an advanced loop-type reactor was selected as a promising concept for a sodium-cooled large-scale reactor with the potential to fulfill the design requirements of the F/S. In Phase 2 of the F/S, it is planned to proceed with a preliminary conceptual design of a sodium-cooled large-scale reactor based on the design of the advanced loop-type reactor. Through the design study, it is intended to construct a plant concept that can demonstrate its attractiveness and competitiveness as a commercialized reactor. This report summarizes the results of the design study on the sodium-cooled large-scale reactor performed in JFY2001, the first year of Phase 2. In the JFY2001 design study, a plant concept was constructed based on the design of the advanced loop-type reactor, and fundamental specifications of the main systems and components were set. Furthermore, critical subjects related to safety, structural integrity, thermal hydraulics, operability, maintainability and economy were examined and evaluated. As a result of this study, a plant concept for the sodium-cooled large-scale reactor has been constructed which has a prospect of satisfying the economic goal (construction cost: less than 200,000 yen/kWe, etc.) and of resolving the critical subjects. From now on, reflecting the results of elemental experiments, the preliminary conceptual design of this plant will proceed toward the selection for narrowing down candidate concepts at the end of Phase 2. (author)

  11. A large scale field experiment in the Amazon basin (LAMBADA/BATERISTA)

    NARCIS (Netherlands)

    Dolman, A.J.; Kabat, P.; Gash, J.H.C.; Noilhan, J.; Jochum, A.M.; Nobre, C.

    1995-01-01

    A description is given of a large-scale field experiment planned in the Amazon basin, aimed at assessing the large-scale balances of energy, water and carbon dioxide. The embedding of this experiment in global change programmes is described, viz. the Biospheric Aspects of the Hydrological Cycle

  12. Combustion of biodiesel in a large-scale laboratory furnace

    International Nuclear Information System (INIS)

    Pereira, Caio; Wang, Gongliang; Costa, Mário

    2014-01-01

    Combustion tests in a large-scale laboratory furnace were carried out to assess the feasibility of using biodiesel as a fuel in industrial furnaces. For comparison purposes, petroleum-based diesel was also used as a fuel. Initially, the performance of the commercial air-assisted atomizer used in the combustion tests was scrutinized under non-reacting conditions. Subsequently, flue gas data, including PM (particulate matter), were obtained for various flame conditions to quantify the effects of atomization quality and excess air on combustion performance. The combustion data were complemented with in-flame temperature measurements for two representative furnace operating conditions. The results reveal that (i) CO emissions from biodiesel and diesel combustion are rather similar and not affected by atomization quality; (ii) NOx emissions increase slightly as spray quality improves for both liquid fuels, but NOx emissions from biodiesel combustion are always lower than those from diesel combustion; (iii) CO emissions decrease rapidly for both liquid fuels as the excess air level increases, up to an O2 concentration in the flue gas of 2%, beyond which they remain unchanged; (iv) NOx emissions increase with the excess air level for both liquid fuels; (v) the quality of the atomization has a significant impact on PM emissions, with diesel combustion yielding significantly higher PM emissions than biodiesel combustion; and (vi) diesel combustion produces PM containing elements such as Cr, Na, Ni and Pb, while biodiesel combustion produces PM containing elements such as Ca, Mg and Fe. - Highlights: • CO emissions from the biodiesel and diesel tested are similar. • NOx emissions from the biodiesel tested are lower than those from the diesel tested. • The diesel tested yields significantly higher PM (particulate matter) emissions than the biodiesel tested. • The diesel tested produces PM with Cr, Na, Ni and Pb, while the biodiesel tested produces PM with Ca, Mg and Fe.

  13. Signatures of non-universal large scales in conditional structure functions from various turbulent flows

    International Nuclear Information System (INIS)

    Blum, Daniel B; Voth, Greg A; Bewley, Gregory P; Bodenschatz, Eberhard; Gibert, Mathieu; Xu Haitao; Gylfason, Ármann; Mydlarski, Laurent; Yeung, P K

    2011-01-01

    We present a systematic comparison of conditional structure functions in nine turbulent flows. The flows studied include forced isotropic turbulence simulated on a periodic domain, passive grid wind tunnel turbulence in air and in pressurized SF6, active grid wind tunnel turbulence (in both synchronous and random driving modes), the flow between counter-rotating discs, oscillating grid turbulence and the flow in the Lagrangian exploration module (in both constant and random driving modes). We compare longitudinal Eulerian second-order structure functions conditioned on the instantaneous large-scale velocity in each flow to assess the ways in which the large scales affect the small scales in a variety of turbulent flows. Structure functions are shown to have larger values when the large-scale velocity significantly deviates from the mean in most flows, suggesting that dependence on the large scales is typical in many turbulent flows. The effects of the large-scale velocity on the structure functions can be quite strong, with the structure function varying by up to a factor of 2 when the large-scale velocity deviates from the mean by ±2 standard deviations. In several flows, the effects of the large-scale velocity are similar at all the length scales we measured, indicating that the large-scale effects are scale independent. In a few flows, the effects of the large-scale velocity are larger on the smallest length scales. (paper)

  14. Large-scale stochasticity in Hamiltonian systems

    International Nuclear Information System (INIS)

    Escande, D.F.

    1982-01-01

    Large-scale stochasticity (L.S.S.) in Hamiltonian systems is defined on the paradigm Hamiltonian H(v,x,t) = v²/2 − M cos x − P cos k(x−t), which describes the motion of one particle in two electrostatic waves. A renormalization transformation T_r is described which acts as a microscope that focuses on a given KAM (Kolmogorov-Arnold-Moser) torus in phase space. Though approximate, T_r yields the threshold of L.S.S. in H with an error of 5-10%. The universal behaviour of KAM tori is predicted: for instance, the scale invariance of KAM tori and the critical exponent of the Lyapunov exponent of Cantori. The Fourier expansion of KAM tori is computed and several conjectures by L. Kadanoff and S. Shenker are proved. Chirikov's standard mapping for stochastic layers is derived in a simpler way and the width of the layers is computed. A simpler renormalization scheme for these layers is defined. A Mathieu equation describing the stability of a discrete family of cycles is derived. When combined with T_r, it allows one to prove the link between KAM tori and nearby cycles, conjectured by J. Greene, and in particular to compute the mean residue of a torus. The fractal diagrams defined by G. Schmidt are computed. A sketch of a methodology for computing the L.S.S. threshold in any two-degree-of-freedom Hamiltonian system is given. (Auth.)

  15. Large scale molecular simulations of nanotoxicity.

    Science.gov (United States)

    Jimenez-Cruz, Camilo A; Kang, Seung-gu; Zhou, Ruhong

    2014-01-01

    The widespread use of nanomaterials in biomedical applications has been accompanied by an increasing interest in understanding their interactions with tissues, cells, and biomolecules, and in particular, how they might affect the integrity of cell membranes and proteins. In this mini-review, we present a summary of some of the recent studies on this important subject, especially from the point of view of large-scale molecular simulations. Carbon-based nanomaterials and noble metal nanoparticles are the main focus, with additional discussion of quantum dots and other nanoparticles. The driving forces for the adsorption of fullerenes, carbon nanotubes, and graphene nanosheets onto proteins or cell membranes are found to be mainly hydrophobic interactions and the so-called π-π stacking (between aromatic rings), while for noble metal nanoparticles the long-range electrostatic interactions play a bigger role. More interestingly, there is also growing evidence showing that nanotoxicity can have implications for the de novo design of nanomedicine. For example, the endohedral metallofullerenol Gd@C₈₂(OH)₂₂ is shown to inhibit tumor growth and metastasis by inhibiting the enzyme MMP-9, and graphene is illustrated to disrupt bacterial cell membranes by insertion/cutting as well as destructive extraction of lipid molecules. These recent findings have provided a better understanding of nanotoxicity at the molecular level and also suggested therapeutic potential in using the cytotoxicity of nanoparticles against cancer or bacterial cells. © 2014 Wiley Periodicals, Inc.

  16. Large-scale tides in general relativity

    Energy Technology Data Exchange (ETDEWEB)

    Ip, Hiu Yan; Schmidt, Fabian, E-mail: iphys@mpa-garching.mpg.de, E-mail: fabians@mpa-garching.mpg.de [Max-Planck-Institut für Astrophysik, Karl-Schwarzschild-Str. 1, 85741 Garching (Germany)

    2017-02-01

    Density perturbations in cosmology, i.e. spherically symmetric adiabatic perturbations of a Friedmann-Lemaître-Robertson-Walker (FLRW) spacetime, are locally exactly equivalent to a different FLRW solution, as long as their wavelength is much larger than the sound horizon of all fluid components. This fact is known as the 'separate universe' paradigm. However, no such relation is known for anisotropic adiabatic perturbations, which correspond to an FLRW spacetime with large-scale tidal fields. Here, we provide a closed, fully relativistic set of evolutionary equations for the nonlinear evolution of such modes, based on the conformal Fermi (CFC) frame. We show explicitly that the tidal effects are encoded by the Weyl tensor, and are hence entirely different from an anisotropic Bianchi I spacetime, where the anisotropy is sourced by the Ricci tensor. In order to close the system, certain higher derivative terms have to be dropped. We show that this approximation is equivalent to the local tidal approximation of Hui and Bertschinger [1]. We also show that this very simple set of equations matches the exact evolution of the density field at second order, but fails at third and higher order. This provides a useful, easy-to-use framework for computing the fully relativistic growth of structure at second order.

  17. Large Scale EOF Analysis of Climate Data

    Science.gov (United States)

    Prabhat, M.; Gittens, A.; Kashinath, K.; Cavanaugh, N. R.; Mahoney, M.

    2016-12-01

    We present a distributed approach to extracting EOFs from 3D climate data. We implement the method in Apache Spark and process multi-TB datasets on O(1000-10,000) cores. We apply this method to latitude-weighted ocean temperature data from CSFR, a 2.2-terabyte data set comprising ocean and subsurface reanalysis measurements collected at 41 levels in the ocean at 6-hour intervals over 31 years. We extract the first 100 EOFs of this full data set and compare them to the EOFs computed on the surface temperature field alone. Our analyses provide evidence of Kelvin and Rossby waves and components of large-scale modes of oscillation, including the ENSO and PDO, that are not visible in the usual SST EOFs. Further, they provide information on the most influential parts of the ocean, such as the thermocline, that exist below the surface. Work is ongoing to understand the factors determining the depth-varying spatial patterns observed in the EOFs. We will experiment with weighting schemes to appropriately account for the differing depths of the observations. We also plan to apply the same distributed approach to the analysis of 3D atmospheric climate data sets, including multiple variables. Because the atmosphere changes on a quicker time scale than the ocean, we expect the results to demonstrate an even greater advantage to computing 3D EOFs in lieu of 2D EOFs.
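
    On a single machine, EOFs are conventionally obtained from the singular value decomposition of the space-time data matrix after removing the time mean and applying latitude weights. The sketch below shows that standard computation with NumPy on a synthetic field; the grid, weighting and data are illustrative assumptions, not the Spark implementation or the reanalysis data described above:

        import numpy as np

        # Synthetic anomaly field: n_time samples over an (n_lat x n_lon) grid.
        n_time, n_lat, n_lon = 500, 30, 60
        rng = np.random.default_rng(0)
        field = rng.standard_normal((n_time, n_lat, n_lon))

        lats = np.linspace(-89.0, 89.0, n_lat)
        weights = np.sqrt(np.cos(np.deg2rad(lats)))              # latitude (area) weighting

        X = (field - field.mean(axis=0)) * weights[None, :, None]  # demean, then weight
        X = X.reshape(n_time, -1)                                   # time x space matrix

        # EOFs are the right singular vectors; PCs are the projections onto them.
        U, s, Vt = np.linalg.svd(X, full_matrices=False)
        eofs = Vt[:100].reshape(-1, n_lat, n_lon)   # first 100 spatial patterns
        pcs = U[:, :100] * s[:100]                  # corresponding principal components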

  18. Mirror dark matter and large scale structure

    International Nuclear Information System (INIS)

    Ignatiev, A.Yu.; Volkas, R.R.

    2003-01-01

    Mirror matter is a dark matter candidate. In this paper, we reexamine the linear regime of density perturbation growth in a universe containing mirror dark matter. Taking adiabatic scale-invariant perturbations as the input, we confirm that the resulting processed power spectrum is richer than for the more familiar cases of cold, warm and hot dark matter. The new features include a maximum at a certain scale λ_max, collisional damping below a smaller characteristic scale λ_S', with oscillatory perturbations between the two. These scales are functions of the fundamental parameters of the theory. In particular, they decrease for decreasing x, the ratio of the mirror plasma temperature to that of the ordinary. For x ∼ 0.2, the scale λ_max becomes galactic. Mirror dark matter therefore leads to bottom-up large-scale structure formation, similar to conventional cold dark matter, for x ≲ 0.2. Indeed, the smaller the value of x, the closer mirror dark matter resembles standard cold dark matter during the linear regime. The differences pertain to scales smaller than λ_S' in the linear regime, and generally in the nonlinear regime because mirror dark matter is chemically complex and to some extent dissipative. Lyman-α forest data and the early reionization epoch established by WMAP may hold the key to distinguishing mirror dark matter from WIMP-style cold dark matter.

  19. Analysis of the applicability of fracture mechanics on the basis of large scale specimen testing

    International Nuclear Information System (INIS)

    Brumovsky, M.; Polachova, H.; Sulc, J.; Anikovskij, V.; Dragunov, Y.; Rivkin, E.; Filatov, V.

    1988-01-01

    The paper deals with the verification of fracture mechanics calculations for WWER reactor pressure vessels by large-scale model testing performed on the large testing machine ZZ 8000 (maximum load of 80 MN) at the Skoda Concern. The results of testing a large set of large-scale test specimens with surface crack-type defects are presented. The nominal thickness of the specimens was 150 mm, with defect depths between 15 and 100 mm and testing temperatures between -30 and +80 °C (i.e., in the temperature interval of T_ko ± 50 °C). Specimens at scales of 1:8 and 1:12 were also tested, as well as standard (CT and TPB) specimens. Comparison of the test results with calculations suggests some conservatism in the calculations (especially for small defects) based on Linear Elastic Fracture Mechanics according to the nuclear reactor pressure vessel codes, which use fracture mechanics values from J_IC testing. On the basis of the large-scale tests, a 'Defect Analysis Diagram' was constructed and recommended for the brittle fracture assessment of reactor pressure vessels. (author). 7 figs., 2 tabs., 3 refs

  20. Properties of large-scale methane/hydrogen jet fires

    Energy Technology Data Exchange (ETDEWEB)

    Studer, E. [CEA Saclay, DEN, LTMF Heat Transfer and Fluid Mech Lab, 91 - Gif-sur-Yvette (France); Jamois, D.; Leroy, G.; Hebrard, J. [INERIS, F-60150 Verneuil En Halatte (France); Jallais, S. [Air Liquide, F-78350 Jouy En Josas (France); Blanchetiere, V. [GDF SUEZ, 93 - La Plaine St Denis (France)

    2009-12-15

    A future economy based on the reduction of carbon-based fuels for power generation and transportation may consider hydrogen as a possible energy carrier. Extensive and widespread use of hydrogen might require a pipeline network. The alternatives might be the use of the existing natural gas network or the design of a dedicated network. Whatever the solution, mixing hydrogen with natural gas will substantially modify the consequences of accidents. The French National Research Agency (ANR) funded project HYDROMEL focuses on these critical questions. Within this project, large-scale jet fires have been studied experimentally and numerically. The main characteristics of these flames, including visible length, radiation fluxes and blowout, have been assessed. (authors)

  1. Status of large scale wind turbine technology development abroad

    Institute of Scientific and Technical Information of China (English)

    Ye LI; Lei DUAN

    2016-01-01

    To facilitate the large scale (multi-megawatt) wind turbine development in China, the foreign efforts and achievements in the area are reviewed and summarized. Not only the popular horizontal axis wind turbines on land but also the offshore wind turbines, vertical axis wind turbines, airborne wind turbines, and shrouded wind turbines are discussed. The purpose of this review is to provide a comprehensive commentary and assessment of the basic working principles, economic aspects, and environmental impacts of these turbines.

  2. Large Scale Self-Organizing Information Distribution System

    National Research Council Canada - National Science Library

    Low, Steven

    2005-01-01

    This project investigates issues in "large-scale" networks. Here "large-scale" refers to networks with a large number of high-capacity nodes and transmission links, shared by a large number of users...

  3. Distributed large-scale dimensional metrology new insights

    CERN Document Server

    Franceschini, Fiorenzo; Maisano, Domenico

    2011-01-01

    Focuses on the latest insights into and challenges of distributed large-scale dimensional metrology. Enables practitioners to study distributed large-scale dimensional metrology independently. Includes specific examples of the development of new system prototypes.

  4. Probes of large-scale structure in the Universe

    International Nuclear Information System (INIS)

    Suto, Yasushi; Gorski, K.; Juszkiewicz, R.; Silk, J.

    1988-01-01

    Recent progress in observational techniques has made it possible to confront quantitatively various models for the large-scale structure of the Universe with detailed observational data. We develop a general formalism to show that the gravitational instability theory for the origin of large-scale structure is now capable of critically confronting observational results on cosmic microwave background radiation angular anisotropies, large-scale bulk motions and large-scale clumpiness in the galaxy counts. (author)

  5. Large scale obscuration and related climate effects open literature bibliography

    International Nuclear Information System (INIS)

    Russell, N.A.; Geitgey, J.; Behl, Y.K.; Zak, B.D.

    1994-05-01

    Large scale obscuration and related climate effects of nuclear detonations first became a matter of concern in connection with the so-called ''Nuclear Winter Controversy'' in the early 1980s. Since then, the world has changed. Nevertheless, concern remains about the atmospheric effects of nuclear detonations, but the source of concern has shifted. Now it focuses less on global, and more on regional effects and their resulting impacts on the performance of electro-optical and other defense-related systems. This bibliography reflects the modified interest.

  6. Enabling Large-Scale Biomedical Analysis in the Cloud

    Directory of Open Access Journals (Sweden)

    Ying-Chih Lin

    2013-01-01

    Recent progress in high-throughput instrumentation has led to an astonishing growth in both the volume and the complexity of biomedical data collected from various sources. This planet-size data brings serious challenges to storage and computing technologies. Cloud computing is an attractive alternative because it jointly addresses the storage of, and high-performance computing on, large-scale data. This work briefly introduces data-intensive computing systems and summarizes existing cloud-based resources in bioinformatics. These developments and applications will help biomedical research make its vast amount of diverse data meaningful and usable.

  7. Large scale obscuration and related climate effects open literature bibliography

    Energy Technology Data Exchange (ETDEWEB)

    Russell, N.A.; Geitgey, J.; Behl, Y.K.; Zak, B.D.

    1994-05-01

    Large scale obscuration and related climate effects of nuclear detonations first became a matter of concern in connection with the so-called ''Nuclear Winter Controversy'' in the early 1980s. Since then, the world has changed. Nevertheless, concern remains about the atmospheric effects of nuclear detonations, but the source of concern has shifted. Now it focuses less on global, and more on regional effects and their resulting impacts on the performance of electro-optical and other defense-related systems. This bibliography reflects the modified interest.

  8. Less is more: regularization perspectives on large scale machine learning

    CERN Multimedia

    CERN. Geneva

    2017-01-01

    Deep learning based techniques provide a possible solution at the expense of theoretical guidance and, especially, of computational requirements. It is then a key challenge for large-scale machine learning to devise approaches guaranteed to be accurate and yet computationally efficient. In this talk, we will consider a regularization perspective on machine learning, appealing to classical ideas in linear algebra and inverse problems to dramatically scale up nonparametric methods such as kernel methods, often dismissed because of prohibitive costs. Our analysis derives optimal theoretical guarantees while providing experimental results on par with or outperforming state-of-the-art approaches.
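
    One classical way to scale up kernel methods in the spirit described above is to replace the full n-by-n kernel matrix with a low-rank Nystroem approximation built from a random subset of landmark points, then fit a linear model in the approximate feature space. The sketch below does this with scikit-learn; the data, kernel and number of components are illustrative assumptions, not the specific estimator analyzed in the talk:

        import numpy as np
        from sklearn.kernel_approximation import Nystroem
        from sklearn.linear_model import Ridge
        from sklearn.pipeline import make_pipeline

        # Synthetic regression data standing in for a large-scale problem.
        rng = np.random.default_rng(0)
        X = rng.uniform(-3, 3, size=(5000, 5))
        y = np.sin(X[:, 0]) + 0.1 * rng.standard_normal(5000)

        # Nystroem maps inputs to an m-dimensional approximate kernel feature space,
        # reducing the cost from O(n^2) kernel entries to O(n*m) features.
        model = make_pipeline(
            Nystroem(kernel="rbf", gamma=0.5, n_components=200, random_state=0),
            Ridge(alpha=1.0),
        )
        model.fit(X, y)
        print("training R^2:", model.score(X, y))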

  9. Large scale dynamics of protoplanetary discs

    Science.gov (United States)

    Béthune, William

    2017-08-01

    Planets form in the gaseous and dusty disks orbiting young stars. These protoplanetary disks are dispersed in a few million years, being accreted onto the central star or evaporated into the interstellar medium. To explain the observed accretion rates, it is commonly assumed that matter is transported through the disk by turbulence, although the mechanism sustaining turbulence is uncertain. On the other hand, irradiation by the central star could heat up the disk surface and trigger a photoevaporative wind, but thermal effects cannot account for the observed acceleration and collimation of the wind into a narrow jet perpendicular to the disk plane. Both issues can be solved if the disk is sensitive to magnetic fields. Weak fields lead to the magnetorotational instability, whose outcome is a state of sustained turbulence. Strong fields can slow down the disk, causing it to accrete while launching a collimated wind. However, the coupling between the magnetic field and the gas is mediated by electric charges, each of which is outnumbered by several billion neutral molecules. This imperfect coupling between the magnetic field and the neutral gas is described in terms of "non-ideal" effects, introducing new dynamical behaviors. This thesis is devoted to the transport processes happening inside weakly ionized and weakly magnetized accretion disks; the role of microphysical effects on the large-scale dynamics of the disk is of primary importance. As a first step, I exclude the wind and examine the impact of non-ideal effects on the turbulent properties near the disk midplane. I show that the flow can spontaneously organize itself if the ionization fraction is low enough; in this case, accretion is halted and the disk exhibits axisymmetric structures, with possible consequences for planetary formation. As a second step, I study the launching of disk winds via a global model of a stratified disk embedded in a warm atmosphere. This model is the first to compute non-ideal effects from

  10. Large-Scale Spacecraft Fire Safety Tests

    Science.gov (United States)

    Urban, David; Ruff, Gary A.; Ferkul, Paul V.; Olson, Sandra; Fernandez-Pello, A. Carlos; T'ien, James S.; Torero, Jose L.; Cowlard, Adam J.; Rouvreau, Sebastien; Minster, Olivier

    2014-01-01

    An international collaborative program is underway to address open issues in spacecraft fire safety. Because of limited access to long-term low-gravity conditions and the small volume generally allotted for these experiments, there have been relatively few experiments that directly study spacecraft fire safety under low-gravity conditions. Furthermore, none of these experiments have studied sample sizes and environment conditions typical of those expected in a spacecraft fire. The major constraint has been the size of the sample, with prior experiments limited to samples of the order of 10 cm in length and width or smaller. This lack of experimental data forces spacecraft designers to base their designs and safety precautions on a 1-g understanding of flame spread, fire detection, and suppression. However, low-gravity combustion research has demonstrated substantial differences in flame behavior in low gravity. This, combined with the differences caused by the confined spacecraft environment, necessitates practical-scale spacecraft fire safety research to mitigate risks for future space missions. To address this issue, a large-scale spacecraft fire experiment is under development by NASA and an international team of investigators. This poster presents the objectives, status, and concept of this collaborative international project (Saffire). The project plan is to conduct fire safety experiments on three sequential flights of an unmanned ISS re-supply spacecraft (the Orbital Cygnus vehicle) after they have completed their delivery of cargo to the ISS and have begun their return journeys to earth. On two flights (Saffire-1 and Saffire-3), the experiment will consist of a flame spread test involving a meter-scale sample ignited in the pressurized volume of the spacecraft and allowed to burn to completion while measurements are made. On one of the flights (Saffire-2), 9 smaller (5 x 30 cm) samples will be tested to evaluate NASA's material flammability screening tests

  11. Detecting differential protein expression in large-scale population proteomics

    Energy Technology Data Exchange (ETDEWEB)

    Ryu, Soyoung; Qian, Weijun; Camp, David G.; Smith, Richard D.; Tompkins, Ronald G.; Davis, Ronald W.; Xiao, Wenzhong

    2014-06-17

    Mass spectrometry-based high-throughput quantitative proteomics shows great potential in clinical biomarker studies, identifying and quantifying thousands of proteins in biological samples. However, methods are needed to appropriately handle issues unique to mass spectrometry data in order to detect as many biomarker proteins as possible. One issue is that different mass spectrometry experiments generate quite different total numbers of quantified peptides, which can result in more missing peptide abundances in an experiment with a smaller total number of quantified peptides. Another issue is that the quantification of peptides is sometimes absent, especially for less abundant peptides, and such missing values carry information about the peptide abundance. Here, we propose a Significance Analysis for Large-scale Proteomics Studies (SALPS) that handles missing peptide intensity values caused by the two mechanisms mentioned above. Our model shows robust performance on both simulated data and proteomics data from a large clinical study. Because variation in patient sample quality and instrument performance is unavoidable in clinical studies performed over the course of several years, we believe that our approach will be useful for analyzing large-scale clinical proteomics data.

  12. Exploiting Data Sparsity for Large-Scale Matrix Computations

    KAUST Repository

    Akbudak, Kadir; Ltaief, Hatem; Mikhalev, Aleksandr; Charara, Ali; Keyes, David E.

    2018-01-01

    Exploiting data sparsity in dense matrices is an algorithmic bridge between architectures that are increasingly memory-austere on a per-core basis and extreme-scale applications. The Hierarchical matrix Computations on Manycore Architectures (HiCMA) library tackles this challenging problem by achieving significant reductions in time to solution and memory footprint, while preserving a specified accuracy requirement of the application. HiCMA provides a high-performance implementation on distributed-memory systems of one of the most widely used matrix factorization in large-scale scientific applications, i.e., the Cholesky factorization. It employs the tile low-rank data format to compress the dense data-sparse off-diagonal tiles of the matrix. It then decomposes the matrix computations into interdependent tasks and relies on the dynamic runtime system StarPU for asynchronous out-of-order scheduling, while allowing high user-productivity. Performance comparisons and memory footprint on matrix dimensions up to eleven million show a performance gain and memory saving of more than an order of magnitude for both metrics on thousands of cores, against state-of-the-art open-source and vendor optimized numerical libraries. This represents an important milestone in enabling large-scale matrix computations toward solving big data problems in geospatial statistics for climate/weather forecasting applications.
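
    The dense building block that HiCMA accelerates is the tile (blocked) Cholesky factorization, in which the matrix is processed as a grid of tiles through POTRF, TRSM and SYRK/GEMM-like updates, with off-diagonal tiles additionally compressed to low rank. The sketch below shows only the plain tile Cholesky loop structure in NumPy, without the low-rank compression or the StarPU task scheduling; the tile size and test matrix are illustrative assumptions:

        import numpy as np

        def tile_cholesky(A, nb):
            """Lower-triangular Cholesky factor of an SPD matrix, computed tile by tile."""
            n = A.shape[0]
            L = A.copy()
            for k in range(0, n, nb):
                kk = slice(k, k + nb)
                L[kk, kk] = np.linalg.cholesky(L[kk, kk])      # POTRF: factor diagonal tile
                for i in range(k + nb, n, nb):
                    ii = slice(i, i + nb)
                    # TRSM: L[ii,kk] <- A[ii,kk] * inv(L[kk,kk])^T
                    L[ii, kk] = np.linalg.solve(L[kk, kk], L[ii, kk].T).T
                for i in range(k + nb, n, nb):
                    ii = slice(i, i + nb)
                    for j in range(k + nb, i + nb, nb):
                        jj = slice(j, j + nb)
                        # SYRK/GEMM: trailing update of tile (ii, jj)
                        L[ii, jj] -= L[ii, kk] @ L[jj, kk].T
            return np.tril(L)

        rng = np.random.default_rng(0)
        A = rng.standard_normal((8, 8))
        A = A @ A.T + 8.0 * np.eye(8)          # make the test matrix SPD
        L = tile_cholesky(A, nb=4)
        print(np.allclose(L @ L.T, A))         # True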

  13. Solving large scale structure in ten easy steps with COLA

    Energy Technology Data Exchange (ETDEWEB)

    Tassev, Svetlin [Department of Astrophysical Sciences, Princeton University, 4 Ivy Lane, Princeton, NJ 08544 (United States)], E-mail: stassev@cfa.harvard.edu; Zaldarriaga, Matias [School of Natural Sciences, Institute for Advanced Study, Olden Lane, Princeton, NJ 08540 (United States)], E-mail: matiasz@ias.edu; Eisenstein, Daniel J. [Center for Astrophysics, Harvard University, 60 Garden Street, Cambridge, MA 02138 (United States)], E-mail: deisenstein@cfa.harvard.edu

    2013-06-01

    We present the COmoving Lagrangian Acceleration (COLA) method: an N-body method for solving for Large Scale Structure (LSS) in a frame that is comoving with observers following trajectories calculated in Lagrangian Perturbation Theory (LPT). Unlike standard N-body methods, the COLA method can straightforwardly trade accuracy at small scales in order to gain computational speed without sacrificing accuracy at large scales. This is especially useful for cheaply generating the large ensembles of accurate mock halo catalogs required to study galaxy clustering and weak lensing, as those catalogs are essential for performing detailed error analysis for ongoing and future surveys of LSS. As an illustration, we ran a COLA-based N-body code on a box of size 100 Mpc/h with particles of mass ≈ 5 × 10^9 M_sun/h. Running the code with only 10 timesteps was sufficient to obtain an accurate description of halo statistics down to halo masses of at least 10^11 M_sun/h, at only a modest speed penalty compared to mocks obtained with LPT. A standard detailed N-body run is orders of magnitude slower than our COLA-based code. The speed-up we obtain with COLA is due to the fact that we calculate the large-scale dynamics exactly using LPT, while letting the N-body code solve for the small scales, without requiring it to capture exactly the internal dynamics of halos. Achieving a similar level of accuracy in halo statistics without the COLA method requires at least 3 times more timesteps than when COLA is employed.
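
    The core trick, integrating only the residual motion about an analytically known approximate trajectory, can be demonstrated in one dimension. The sketch below is a hedged toy, with a pendulum standing in for gravity and its linearization standing in for LPT; it is not the paper's cosmological code.

        # COLA-style residual integration: kick/drift act on the residual
        # displacement dx about the analytic trajectory x_apx(t), so few,
        # large timesteps still track the solution well.
        import numpy as np

        def force(x):                 # toy nonlinear force (stands in for N-body gravity)
            return -np.sin(x)

        def x_apx(t, x0, v0):         # linearized trajectory (stands in for LPT)
            return x0 * np.cos(t) + v0 * np.sin(t)

        def a_apx(t, x0, v0):         # its acceleration, subtracted from the full force
            return -x_apx(t, x0, v0)

        def cola_leapfrog(x0, v0, t_end, nsteps):
            dt = t_end / nsteps
            dx = dv = t = 0.0                     # residual displacement/velocity
            for _ in range(nsteps):
                x = x_apx(t, x0, v0) + dx
                dv += 0.5 * dt * (force(x) - a_apx(t, x0, v0))    # residual kick
                dx += dt * dv                                      # residual drift
                t += dt
                x = x_apx(t, x0, v0) + dx
                dv += 0.5 * dt * (force(x) - a_apx(t, x0, v0))    # residual kick
            return x_apx(t_end, x0, v0) + dx

        print(cola_leapfrog(x0=0.5, v0=0.0, t_end=5.0, nsteps=10))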

  14. Assessing Scientific Performance.

    Science.gov (United States)

    Weiner, John M.; And Others

    1984-01-01

    A method for assessing scientific performance based on relationships displayed numerically in published documents is proposed and illustrated using published documents in pediatric oncology for the period 1979-1982. Contributions of a major clinical investigations group, the Childrens Cancer Study Group, are analyzed. Twenty-nine references are…

  15. Large-scale assembly of colloidal particles

    Science.gov (United States)

    Yang, Hongta

    This study reports a simple, roll-to-roll compatible coating technology for producing three-dimensional highly ordered colloidal crystal-polymer composites, colloidal crystals, and macroporous polymer membranes. A vertically beveled doctor blade is utilized to shear-align silica microsphere-monomer suspensions to form large-area composites in a single step. The polymer matrix and the silica microspheres can be selectively removed to create colloidal crystals and self-standing macroporous polymer membranes. The thickness of the shear-aligned crystal is correlated with the viscosity of the colloidal suspension and the coating speed, and the correlations can be qualitatively explained by adapting the mechanisms developed for conventional doctor blade coating. Five important research topics related to the application of large-scale three-dimensional highly ordered macroporous films made by doctor blade coating are covered in this study. The first topic describes an invention in large-area, low-cost color reflective displays, inspired by heat pipe technology. The self-standing macroporous polymer films exhibit brilliant colors which originate from the Bragg diffraction of visible light from the three-dimensional highly ordered air cavities. The colors can be easily changed by tuning the size of the air cavities to cover the whole visible spectrum. When the air cavities are filled with a solvent which has the same refractive index as that of the polymer, the macroporous polymer films become completely transparent due to the index matching. When the solvent trapped in the cavities is evaporated by in-situ heating, the brilliant color returns. This process is highly reversible and reproducible for thousands of cycles. The second topic reports the achievement of rapid and reversible vapor detection by using 3-D macroporous photonic crystals. Capillary condensation of a condensable vapor in the interconnected macropores leads to the

  16. Potential Impact of Large Scale Abstraction on the Quality of Shallow ...

    African Journals Online (AJOL)

    PRO

    Significant increase in crop production would not, however, be ... sounding) using Geonics EM34-3 and Abem SAS300C Terrameter to determine the aquifer (fresh water lens) ..... Final report on environmental impact assessment of large scale.

  17. Performance assessment calculational exercises

    International Nuclear Information System (INIS)

    Barnard, R.W.; Dockery, H.A.

    1990-01-01

    The Performance Assessment Calculational Exercises (PACE) are an ongoing effort coordinated by the Yucca Mountain Project Office. The objectives of the fiscal year 1990 work, termed PACE-90, as outlined in the Department of Energy Performance Assessment (PA) Implementation Plan, were to develop PA capabilities among Yucca Mountain Project (YMP) participants by calculating the performance of a Yucca Mountain (YM) repository under "expected" and "disturbed" conditions, to identify critical elements and processes necessary to assess the performance of YM, and to perform sensitivity studies on key parameters. It was expected that the PACE problems would aid in the development of conceptual models and the eventual evaluation of site data. The PACE-90 participants calculated the transport of a selected set of radionuclides through a portion of Yucca Mountain for a period of 100,000 years. Results include analyses of fluid-flow profiles, development of a source term for radionuclide release, and simulations of contaminant transport in the fluid-flow field. Later work included the development of a problem definition for perturbations to the originally modeled conditions and for some parametric sensitivity studies. 3 refs

  18. Context for performance assessment

    International Nuclear Information System (INIS)

    Kocher, D.C.

    1997-01-01

    In developing its recommendations on performance assessment for disposal of low-level radioactive waste, Scientific Committee 87-3 of the National Council on Radiation Protection and Measurements (NCRP) has considered a number of topics that provide a context for the development of suitable approaches to performance assessment. This paper summarizes the Committee's discussions of these topics, including (1) the definition of low-level waste and its sources and properties, as they affect the variety of wastes that must be considered, (2) fundamental objectives and principles of radioactive waste disposal and their application to low-level waste, (3) current performance objectives for low-level waste disposal in the US, with particular emphasis on such unresolved issues of importance to performance assessment as the time frame for compliance, requirements for protection of groundwater and surface water, inclusion of doses from radon, demonstration of compliance with fixed performance objectives using highly uncertain model projections, and application of the principle that releases to the environment should be maintained as low as reasonably achievable (ALARA), (4) the role of active and passive institutional controls over disposal sites, (5) the role of the inadvertent human intruder in low-level waste disposal, (6) model validation and confidence in model outcomes, and (7) the concept of reasonable assurance of compliance.

  19. Texas' performance assessment work

    International Nuclear Information System (INIS)

    Charbeneau, R.J.; Hertel, N.E.; Pollard, C.G.

    1990-01-01

    The Texas Low-Level Radioactive Waste Disposal Authority is completing two years of detailed on-site suitability studies of a potential low-level radioactive waste disposal site in Hudspeth County, Texas. The data from these studies have been used to estimate the site-specific parameters needed for a performance assessment of the site. The radiological impacts of the site have been analyzed as required for a license application. The approach adopted for the performance assessment was to use simplified yet conservative assumptions with regard to releases, radionuclide transport, and dose calculations. The methodologies employed in the performance assessment are reviewed in the paper. Rather than rely on a single computer code, a modular approach to the performance assessment was selected. The HELP code was used to calculate the infiltration rate through the trench covers and the amount of leachate released from this arid site. Individual pathway analyses used spreadsheet calculations. These calculations were compared with those from other computer models, including CRRIS, INGDOS, PATHRAE, and MICROSHIELD, and were found to yield conservative estimates of the effective whole-body dose. The greatest difficulty in performing the radiological assessment of the site was the selection of reasonable source terms for release into the environment. A surface-water pathway is unreasonable for the site. Though also unlikely, the groundwater pathway with exposure through a site-boundary well was found to yield the largest calculated dose. The more likely pathway, in which leachate from the facility travels through the unsaturated zone and returns to the ground surface, yields small doses. All calculated doses associated with normal releases of radioactivity are below the regulatory limits.
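
    A spreadsheet-style pathway calculation of the kind described above reduces to a product of a few factors. The numbers below are hypothetical placeholders, not values from the Texas assessment.

        # Drinking-water (well) pathway: dose = concentration x intake x DCF.
        water_conc_bq_per_l = 0.5        # well-water concentration (placeholder)
        ingestion_l_per_yr = 730.0       # ~2 L/day drinking-water intake
        dcf_sv_per_bq = 2.8e-10          # ingestion dose conversion factor (placeholder)

        annual_dose_sv = water_conc_bq_per_l * ingestion_l_per_yr * dcf_sv_per_bq
        print(f"annual effective dose: {annual_dose_sv:.2e} Sv")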

  1. Large-scale hydrogen production using nuclear reactors

    Energy Technology Data Exchange (ETDEWEB)

    Ryland, D.; Stolberg, L.; Kettner, A.; Gnanapragasam, N.; Suppiah, S. [Atomic Energy of Canada Limited, Chalk River, ON (Canada)

    2014-07-01

    For many years, Atomic Energy of Canada Limited (AECL) has been studying the feasibility of using nuclear reactors, such as the Supercritical Water-cooled Reactor, as an energy source for large scale hydrogen production processes such as High Temperature Steam Electrolysis and the Copper-Chlorine thermochemical cycle. Recent progress includes the augmentation of AECL's experimental capabilities by the construction of experimental systems to test high temperature steam electrolysis button cells at ambient pressure and temperatures up to 850 °C and CuCl/HCl electrolysis cells at pressures up to 7 bar and temperatures up to 100 °C. In parallel, detailed models of solid oxide electrolysis cells and the CuCl/HCl electrolysis cell are being refined and validated using experimental data. Process models are also under development to assess options for economic integration of these hydrogen production processes with nuclear reactors. Options for large-scale energy storage, including hydrogen storage, are also under study. (author)

  2. Large scale laboratory diffusion experiments in clay rocks

    International Nuclear Information System (INIS)

    Garcia-Gutierrez, M.; Missana, T.; Mingarro, M.; Martin, P.L.; Cormenzana, J.L.

    2005-01-01

    Full text of publication follows: Clay formations are potential host rocks for high-level radioactive waste repositories. In clay materials, radionuclide diffusion is the main transport mechanism; thus, understanding the diffusion processes and determining diffusion parameters under conditions as similar as possible to the real ones are critical for the performance assessment of a deep geological repository. Diffusion coefficients are mainly measured in the laboratory using small samples prepared to fit the diffusion cell. In addition, a few field tests are usually performed to confirm laboratory results and analyse scale effects. In field or 'in situ' tests, the experimental set-up usually includes the injection of a tracer diluted in reconstituted formation water into a packed-off section of a borehole. Both experimental systems may produce artefacts in the determination of diffusion coefficients. In the laboratory, sample preparation can generate structural changes, especially if the consolidated clay has a layered fabric; in field tests, the introduction of water can modify the properties of the saturated clay in the first few centimetres, just where radionuclide diffusion is expected to take place. In this work, a large scale laboratory diffusion experiment is proposed, using a large cylindrical sample of consolidated clay that can overcome the above-mentioned problems. The tracers used were mixed with clay obtained by drilling a central hole, re-compacted into the hole at approximately the same density as the consolidated block, and finally sealed. Neither additional treatment of the sample nor external monitoring is needed. After the experimental time needed for diffusion to take place (estimated by scoping calculations), the block was sampled to obtain a 3D distribution of the tracer concentration and the results were modelled. An additional advantage of the proposed configuration is that it could be used in 'in situ
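
    As an illustration of the scoping calculations mentioned above, the sketch below solves Fick's second law in one dimension with explicit finite differences for a tracer pulse diffusing into clay; the diffusion coefficient, geometry, and time span are assumed placeholders, not the experiment's values.

        # 1-D diffusion of an instantaneous tracer pulse: dc/dt = Da * d2c/dx2.
        import numpy as np

        Da = 1e-11                     # apparent diffusion coefficient, m^2/s (assumed)
        L, nx = 0.2, 201               # 20 cm domain, grid resolution
        dx = L / (nx - 1)
        dt = 0.4 * dx**2 / Da          # below the explicit stability limit dx^2/(2*Da)

        c = np.zeros(nx)
        c[0] = 1.0                     # unit tracer pulse at the source plane
        t_end = 180 * 24 * 3600.0      # ~6 months of diffusion
        for _ in range(int(t_end / dt)):
            c[1:-1] += Da * dt / dx**2 * (c[2:] - 2.0 * c[1:-1] + c[:-2])
            c[0] = c[1]                # zero-flux boundary at the source plane

        depth = dx * np.argmax(c < 0.01 * c.max())
        print(f"penetration depth (c > 1% of max): {depth:.3f} m")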

  3. Thermal power generation projects "Large Scale Solar Heating" (EU THERMIE projects)

    Energy Technology Data Exchange (ETDEWEB)

    Kuebler, R.; Fisch, M.N. [Steinbeis-Transferzentrum Energie-, Gebaeude- und Solartechnik, Stuttgart (Germany)

    1998-12-31

    The aim of this project is the preparation of the "Large Scale Solar Heating" programme for a Europe-wide development of this technology. The demonstration programme developed from it was judged favourably by the reviewers but was not immediately (1996) accepted for funding. In November 1997 the EU Commission then provided 1.5 million ECU at short notice, which allowed an updated project proposal to be realised. A smaller project, applied for under the lead of Chalmers Industriteknik (CIT) in Sweden and serving mainly technology transfer, had already been approved by mid-1997. (orig.)

  4. IP over optical multicasting for large-scale video delivery

    Science.gov (United States)

    Jin, Yaohui; Hu, Weisheng; Sun, Weiqiang; Guo, Wei

    2007-11-01

    In IPTV systems, multicasting will play a crucial role in the delivery of high-quality video services, as it can significantly improve bandwidth efficiency. However, the scalability and signal quality of current IPTV can barely compete with existing broadcast digital TV systems, since it is difficult to implement large-scale multicasting with end-to-end guaranteed quality of service (QoS) in a packet-switched IP network. The China 3TNet project aimed to build a high-performance broadband trial network to support large-scale concurrent streaming media and interactive multimedia services. The innovative idea of 3TNet is that an automatically switched optical network (ASON) with the capability of dynamic point-to-multipoint (P2MP) connections replaces the conventional IP multicasting network in the transport core, while the edge remains an IP multicasting network. In this paper, we introduce the network architecture and discuss the challenges of such IP over optical multicasting for video delivery.

  5. State-of-the-art of large scale biogas plants

    International Nuclear Information System (INIS)

    Prisum, J.M.; Noergaard, P.

    1992-01-01

    A survey of the technological state of large scale biogas plants in Europe treating manure is given. 83 plants are in operation at present. Of these, 16 are centralised digestion plants. Transport costs at centralised digestion plants amount to between 25 and 40 percent of the total operational costs. Various transport equipment is used. Most large scale digesters are CSTRs, but serial, contact, 2-step, and plug-flow digesters are also found. Construction materials are mostly steel and concrete. Mesophilic digestion is most common (56%); thermophilic digestion is used in 17% of the plants, and combined mesophilic and thermophilic digestion in 28% of the centralised plants. Mixing of the digester content is performed with gas injection, propellers, and gas-liquid displacement. Heating is carried out using external or internal heat exchangers. Heat recovery is only used in Denmark. Gas purification equipment is commonplace, but not often needed. Several plants use separation of the digested manure, often as part of a post-treatment/purification process or for the production of 'compost'. Screens, sieve belt separators, centrifuges and filter presses are employed. The use of biogas varies considerably: in some cases, combined heat and power stations supply the grid and district heating systems; other plants use only the electricity or only the heat. (au)

  6. Large Scale Community Detection Using a Small World Model

    Directory of Open Access Journals (Sweden)

    Ranjan Kumar Behera

    2017-11-01

    In a social network, small or large communities within the network play a major role in deciding the functionalities of the network. Despite diverse definitions, communities in a network may be defined as groups of nodes that are more densely connected to each other than to nodes outside the group. Revealing such hidden communities is a challenging research problem. Real-world social networks exhibit the small-world phenomenon, in which any two social entities are reachable from one another in a small number of steps. In this paper, nodes are mapped into communities based on random walks in the network. Uncovering communities in large-scale networks is difficult, however, because of the unprecedented growth in the size of social networks, and although a good number of community detection algorithms based on random walks exist in the literature, they take considerably longer on large-scale networks. In this work, with the objective of improving efficiency, a parallel programming framework, MapReduce, is used to uncover hidden communities in a social network. The proposed approach has been compared with standard existing community detection algorithms on both synthetic and real-world datasets to examine its performance, and it is observed that the proposed algorithm is more efficient than the existing ones.
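
    A serial Python sketch of the random-walk idea follows (the paper distributes the walk generation with MapReduce, which is not reproduced here); the toy graph, walk length, and mutual top-k grouping rule are illustrative assumptions.

        # Nodes that short random walks keep co-visiting are grouped together.
        import random
        from collections import Counter

        def walk_visits(adj, walk_len=6, walks=300, seed=1):
            rng = random.Random(seed)
            visits = {u: Counter() for u in adj}
            for u in adj:
                for _ in range(walks):
                    v = u
                    for _ in range(walk_len):
                        v = rng.choice(adj[v])
                        visits[u][v] += 1
            return visits

        def communities(adj, k=3):
            visits = walk_visits(adj)
            top = {u: {v for v, _ in visits[u].most_common(k)} for u in adj}
            # Keep an edge only when each endpoint is among the other's k
            # most-visited nodes, then take connected components.
            comp, seen = [], set()
            for s in adj:
                if s in seen:
                    continue
                stack, group = [s], set()
                while stack:
                    u = stack.pop()
                    if u in group:
                        continue
                    group.add(u)
                    stack += [v for v in adj[u] if v in top[u] and u in top[v]]
                seen |= group
                comp.append(sorted(group))
            return comp

        # Two 4-cliques joined by the single edge 3-4.
        K = {0: [1, 2, 3], 1: [0, 2, 3], 2: [0, 1, 3], 3: [0, 1, 2, 4],
             4: [3, 5, 6, 7], 5: [4, 6, 7], 6: [4, 5, 7], 7: [4, 5, 6]}
        print(communities(K))          # expected: one community per clique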

  7. Literature Review: Herbal Medicine Treatment after Large-Scale Disasters.

    Science.gov (United States)

    Takayama, Shin; Kaneko, Soichiro; Numata, Takehiro; Kamiya, Tetsuharu; Arita, Ryutaro; Saito, Natsumi; Kikuchi, Akiko; Ohsawa, Minoru; Kohayagawa, Yoshitaka; Ishii, Tadashi

    2017-01-01

    Large-scale natural disasters, such as earthquakes, tsunamis, volcanic eruptions, and typhoons, occur worldwide. After the Great East Japan earthquake and tsunami, our medical support operation's experiences suggested that traditional medicine might be useful for treating the various symptoms of the survivors. However, little information is available regarding herbal medicine treatment in such situations. Considering that further disasters will occur, we performed a literature review and summarized the traditional medicine approaches for treatment after large-scale disasters. We searched PubMed and Cochrane Library for articles written in English, and Ichushi for those written in Japanese. Articles published before 31 March 2016 were included. Keywords "disaster" and "herbal medicine" were used in our search. Among studies involving herbal medicine after a disaster, we found two randomized controlled trials investigating post-traumatic stress disorder (PTSD), three retrospective investigations of trauma or common diseases, and seven case series or case reports of dizziness, pain, and psychosomatic symptoms. In conclusion, herbal medicine has been used to treat trauma, PTSD, and other symptoms after disasters. However, few articles have been published, likely due to the difficulty in designing high quality studies in such situations. Further study will be needed to clarify the usefulness of herbal medicine after disasters.

  8. GPU-Accelerated Sparse Matrix Solvers for Large-Scale Simulations, Phase I

    Data.gov (United States)

    National Aeronautics and Space Administration — Many large-scale numerical simulations can be broken down into common mathematical routines. While the applications may differ, the need to perform functions such as...

  9. Blood lipid profiles and factors associated with dyslipidemia assessed by a point-of-care testing device in an outpatient setting: A large-scale cross-sectional study in Southern China.

    Science.gov (United States)

    Zhang, Pei-dong; He, Lin-yun; Guo, Yang; Liu, Peng; Li, Gong-xin; Wang, Li-zi; Liu, Ying-feng

    2015-06-01

    To promote the concept of point-of-care testing (POCT) and to investigate dyslipidemia in Guangzhou, China, we performed a study examining blood lipids assessed by POCT and report the factors associated with dyslipidemia. This multicenter, cross-sectional study enrolled outpatients from 9 Guangzhou hospitals from May through September 2013. After informed consent was obtained, the following information was collected: age; gender; the presence of diabetes mellitus, obesity, and hypertension; and current use of cigarettes or alcohol. Patients were asked to fast for 8 h before the blood examination, which was performed on a POCT device, the CardioChek PA. Of 4012 patients enrolled (1544 males, 2468 females; mean age 60.35±9.41 years), 1993 (49.7%) had dyslipidemia, but only 101 (5.1%) took statins. Multivariate tests of the associations between demographic variables, comorbidities, and the risk of having dyslipidemia found that the significant predictors of dyslipidemia were male gender, age ≥60 years, being a current smoker or alcohol drinker, and hypertension. Most dyslipidemia patients in Guangzhou remain untreated. POCT in China is feasible, and its widespread use might improve dyslipidemia awareness, treatment and control. Copyright © 2015 The Canadian Society of Clinical Chemists. Published by Elsevier Inc. All rights reserved.
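
    For readers unfamiliar with the modelling behind such predictor lists, the sketch below fits a multivariate logistic regression on synthetic data and reports odds ratios; the data and coefficients are fabricated placeholders, not the study's.

        # Multivariate logistic regression of dyslipidemia on binary risk factors.
        import numpy as np
        import statsmodels.api as sm

        rng = np.random.default_rng(0)
        n = 500
        X = np.column_stack([rng.integers(0, 2, n),    # male gender
                             rng.integers(0, 2, n),    # age >= 60 years
                             rng.integers(0, 2, n),    # smoker or alcohol drinker
                             rng.integers(0, 2, n)])   # hypertension
        logit = -0.8 + X @ np.array([0.5, 0.4, 0.6, 0.5])
        y = (rng.random(n) < 1.0 / (1.0 + np.exp(-logit))).astype(float)

        fit = sm.Logit(y, sm.add_constant(X)).fit(disp=0)
        print(np.exp(fit.params[1:]))   # odds ratios for the four risk factors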

  10. Energy performance assessment methodology

    Energy Technology Data Exchange (ETDEWEB)

    Platzer, W.J. [Fraunhofer Inst. for Solar Energy Systems, Freiburg (Germany)

    2006-01-15

    The energy performance of buildings is intimately connected to the energy performance of building envelopes. The better we understand the relation between the quality of the envelope and the energy consumption of the building, the better we can improve both. We have to consider not only heating but all service energies related to human comfort in the building, such as cooling, ventilation and lighting. The complexity arising from this embracing approach is not to be underestimated. It is less and less possible to relate simple characteristic performance indicators of building envelopes (such as the U-value) to the overall energy performance. Firstly, many more parameters (e.g. light transmittance) come into the picture, so product quality has to be assessed in a multidimensional world. Secondly, buildings increasingly have to operate within a narrow optimum: for an old, badly insulated building, all solar gains are useful; for a high-performance building with very good insulation and heat recovery in the ventilation, overheating becomes more likely. Thus we have to control the solar gains: sometimes we need high gains, sometimes low ones. And thirdly, the technology within the building as well as user patterns and interactions influence the performance of a building envelope. The aim of this project within IEA Task 27 was to improve our knowledge of this complex situation and to give a principled approach for assessing the performance of the building envelope. The participants have contributed to this aim without pretending that we have reached the end. (au)

  11. Hydrogen-combustion analyses of large-scale tests

    International Nuclear Information System (INIS)

    Gido, R.G.; Koestel, A.

    1986-01-01

    This report uses results of the large-scale tests with turbulence performed by the Electric Power Research Institute at the Nevada Test Site to evaluate hydrogen burn-analysis procedures based on lumped-parameter codes like COMPARE-H2 and associated burn-parameter models. The test results: (1) confirmed, in a general way, the procedures for application to pulsed burning, (2) increased significantly our understanding of the burn phenomenon by demonstrating that continuous burning can occur, and (3) indicated that steam can terminate continuous burning. Future actions recommended include: (1) modification of the code to perform continuous-burn analyses, which is demonstrated, (2) analyses to determine the type of burning (pulsed or continuous) that will exist in nuclear containments and the stable location if the burning is continuous, and (3) changes to the models for estimating burn parameters

  12. Coordinated SLNR based Precoding in Large-Scale Heterogeneous Networks

    KAUST Repository

    Boukhedimi, Ikram; Kammoun, Abla; Alouini, Mohamed-Slim

    2017-01-01

    This work focuses on the downlink of large-scale two-tier heterogeneous networks composed of a macro-cell overlaid by micro-cell networks. Our interest is in the design of coordinated beamforming techniques that allow mitigation of the inter-cell interference. In particular, we consider the case in which the coordinating base stations (BSs) have imperfect knowledge of the channel state information. Under this setting, we propose a regularized SLNR-based precoding design in which the regularization factor is used to allow better resilience to channel estimation errors. Based on tools from random matrix theory, we provide an analytical analysis of the SINR and SLNR performance. These results are then exploited to propose a proper setting of the regularization factor. Simulation results are finally provided to validate our findings and to confirm the performance of the proposed precoding scheme.
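
    The sketch below implements the generic regularized SLNR precoder that this line of work builds on; it is a textbook form, and the regularization value is an arbitrary assumption rather than the paper's random-matrix-optimized setting.

        # Regularized SLNR precoding: w_k ~ (sum_{j!=k} h_j h_j^H + (N0+alpha) I)^-1 h_k.
        import numpy as np

        def slnr_precoders(H, noise_var, alpha):
            """H: K x M matrix whose k-th row is h_k^H. Returns M x K precoders."""
            K, M = H.shape
            gram = H.conj().T @ H                    # sum_j h_j h_j^H
            W = np.zeros((M, K), dtype=complex)
            for k in range(K):
                hk = H[k].conj()                     # h_k as a column vector
                A = gram - np.outer(hk, H[k]) + (noise_var + alpha) * np.eye(M)
                wk = np.linalg.solve(A, hk)          # leakage-suppressing direction
                W[:, k] = wk / np.linalg.norm(wk)    # unit-power beamformer
            return W

        rng = np.random.default_rng(0)
        K, M = 4, 8
        H = (rng.normal(size=(K, M)) + 1j * rng.normal(size=(K, M))) / np.sqrt(2)
        W = slnr_precoders(H, noise_var=0.1, alpha=0.05)
        # Strong diagonal = desired signal; small off-diagonal = leakage.
        print(np.round(np.abs(H @ W), 2))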

  13. Large scale gas chromatographic demonstration system for hydrogen isotope separation

    International Nuclear Information System (INIS)

    Cheh, C.H.

    1988-01-01

    A large scale demonstration system was designed for a throughput of 3 mol/day of an equimolar mixture of H, D, and T. The demonstration system was assembled and an experimental program carried out. This project was funded by Kernforschungszentrum Karlsruhe, the Canadian Fusion Fuels Technology Project and Ontario Hydro Research Division. Several major design innovations were successfully implemented in the demonstration system and are discussed in detail. Many experiments were carried out in the demonstration system to study its ability to separate hydrogen isotopes at high throughput. Various temperature programming schemes were tested, heart-cutting operation was evaluated, and very large (up to 138 NL/injection) samples were separated in the system. The results of the experiments showed that the specially designed column performed well as a chromatographic column and that good separation could be achieved even when a 138 NL sample was injected.

  14. Automatic management software for large-scale cluster system

    International Nuclear Information System (INIS)

    Weng Yunjian; Chinese Academy of Sciences, Beijing; Sun Gongxing

    2007-01-01

    At present, large-scale cluster systems are difficult to manage: administrators carry a heavy workload, and much time must be spent on management and maintenance. The nodes of a large-scale cluster system easily fall into disorder; with thousands of nodes housed in large machine rooms, it is easy for administrators to confuse one machine with another. How, then, can a large-scale cluster system be managed accurately and effectively? This article introduces ELFms for large-scale cluster systems and proposes a way to realize their automatic management. (authors)

  15. Large scale processing of dielectric electroactive polymers

    DEFF Research Database (Denmark)

    Vudayagiri, Sindhu

    Efficient processing techniques are vital to the success of any manufacturing industry. They determine the quality of the products and thus, to a large extent, the performance and reliability of what is manufactured. The dielectric electroactive polymer (DEAP

  16. Pro website development and operations streamlining DevOps for large-scale websites

    CERN Document Server

    Sacks, Matthew

    2012-01-01

    Pro Website Development and Operations gives you the experience you need to create and operate a large-scale production website. Large-scale websites have their own unique set of design problems, problems that can get worse when agile methodologies are adopted for rapid results. Managing large-scale websites, deploying applications, and ensuring they are performing well often requires a full-scale team involving the development and operations sides of the company, two departments that don't always see eye to eye. When departments struggle with each other, it adds unnecessary comp

  17. Large-Scale Brain Network Coupling Predicts Total Sleep Deprivation Effects on Cognitive Capacity.

    Directory of Open Access Journals (Sweden)

    Yu Lei

    Interactions between large-scale brain networks have received considerable attention in the study of cognitive dysfunction of the human brain. In this paper, we aimed to test the hypothesis that the coupling strength of large-scale brain networks reflects the pressure for sleep and predicts cognitive performance; we refer to this measure as the sleep pressure index (SPI). Fourteen healthy subjects underwent this within-subject functional magnetic resonance imaging (fMRI) study during rested wakefulness (RW) and after 36 h of total sleep deprivation (TSD). Self-reported sleepiness scores were higher for TSD than for RW. A subsequent working memory (WM) task showed that WM performance was lower after 36 h of TSD. Moreover, the SPI was developed based on the coupling strength of the salience network (SN) and the default mode network (DMN). A significant increase in SPI was observed after 36 h of TSD, suggesting stronger pressure for sleep. In addition, the SPI was significantly correlated with both the visual analogue scale score of sleepiness and WM performance. These results show that alterations in SN-DMN coupling may be critical in the cognitive alterations that underlie lapses after TSD. Further studies may validate the SPI as a potential clinical biomarker for assessing the impact of sleep deprivation.
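
    A network-coupling index in the spirit of the SPI can be sketched as the correlation between mean network time courses; the synthetic signals and the index definition below are illustrative assumptions, not the authors' exact pipeline.

        # SN-DMN coupling as the Pearson correlation of mean region time courses.
        import numpy as np

        def network_coupling(ts_a, ts_b):
            return np.corrcoef(ts_a.mean(axis=1), ts_b.mean(axis=1))[0, 1]

        rng = np.random.default_rng(0)
        t = 200                                     # fMRI volumes
        shared = rng.normal(size=t)                 # fluctuation shared by both networks
        ts_sn = 0.8 * shared[:, None] + rng.normal(size=(t, 5))    # 5 SN regions
        ts_dmn = 0.8 * shared[:, None] + rng.normal(size=(t, 6))   # 6 DMN regions
        print(f"SN-DMN coupling: {network_coupling(ts_sn, ts_dmn):.2f}")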

  18. Large scale particle simulations in a virtual memory computer

    International Nuclear Information System (INIS)

    Gray, P.C.; Million, R.; Wagner, J.S.; Tajima, T.

    1983-01-01

    Virtual memory computers are capable of executing large-scale particle simulations even when the memory requirements exceed the computer core size. The required address space is automatically mapped onto slow disc memory by the operating system. When the simulation size is very large, frequent random accesses to slow memory occur during the charge accumulation and particle pushing processes. Accesses to slow memory significantly reduce the execution rate of the simulation. We demonstrate in this paper that, with the proper choice of sorting algorithm, a nominal amount of sorting to keep physically adjacent particles near particles with neighboring array indices can reduce random accesses to slow memory, increase the efficiency of the I/O system, and hence reduce the required computing time. (orig.)
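
    A minimal NumPy sketch of that sorting strategy follows; the cell count and the nearest-grid-point deposit are assumptions for illustration, not the paper's code.

        # Sorting particles by grid-cell index makes the charge-accumulation
        # sweep touch memory near-sequentially, so a paged (virtual-memory)
        # machine streams from disc instead of faulting randomly.
        import numpy as np

        rng = np.random.default_rng(0)
        n, ncell = 1_000_000, 1024
        x = rng.random(n)                          # particle positions in [0, 1)
        v = rng.normal(size=n)                     # particle velocities

        cell = (x * ncell).astype(np.int64)        # cell index of each particle
        order = np.argsort(cell, kind="stable")    # the "nominal amount of sorting"
        x, v, cell = x[order], v[order], cell[order]

        rho = np.zeros(ncell)
        np.add.at(rho, cell, 1.0)                  # nearest-grid-point charge deposit
        print(rho[:5])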