WorldWideScience

Sample records for long-scale model prognoses

  1. Prognose: SF den store vinder af kommunalvalget

    DEFF Research Database (Denmark)

    Thomsen, Søren Risbjerg

    2009-01-01

    KV-09: According to Altinget.dk's prognosis, SF will surge into the city councils and, not least, the regional councils at the November election, while the Social Liberals (Radikale) will be halved...

  2. Dispersion prognoses and consequences in the environment. A Nordic development and harmonization effort

    International Nuclear Information System (INIS)

    Tveten, U.

    1994-01-01

    The project 'BER-1, Dispersion prognoses and environmental consequences' is described, covering the work performed and the results obtained. The bulk of the report concerns the first subject area, atmospheric dispersion models: the worldwide status of long-range atmospheric dispersion models at the start of the project period, the models in use at the Nordic meteorological institutes, and the validation/verification and intercomparison efforts performed within the project. The main results of this work have been published separately. All aspects of the environmental impact of releases to the atmosphere have been treated, and the end product of this part of the project is a computerized 'handbook' giving easy access to data on e.g. deposition, shielding, filtering, weathering, and radionuclide transfer via all possible exposure pathways. (au)

  3. Prognose van luchtkwaliteit: signalering van fotochemische smogepisoden

    NARCIS (Netherlands)

    van Rheineck Leyssius HJ; de Leeuw FAAM

    1990-01-01

    A description of two procedures which are used at RIVM in forecasting photochemical smog episodes is presented. Both procedures lead to the estimation of daily average oxidant (O3 + NO2) concentrations. The procedure OXPRO provides prognoses for the concentrations of "today" and

  4. Follow-up and prognosis of thyroid carcinoma; Nachsorge und Prognose der Schilddruesenkarzinome

    Energy Technology Data Exchange (ETDEWEB)

    Sandrock, D.; Munz, D.L. [Humboldt-Universitaet, Berlin (Germany). Medizinische Fakultaet Charite]

    2000-03-01

    The follow-up of patients with differentiated thyroid carcinoma is based on guidelines of the relevant scientific societies, such as the German Society of Nuclear Medicine (DGN). After (successful) radioiodine treatment, follow-up examinations are performed at regular intervals (every half year or year, life-long), including ultrasound and serum levels of thyroglobulin, TSH, (f)T3, and (f)T4. If recurrent or metastatic disease is suspected, further targeted investigations follow (radioiodine scan, CT, MRI, ...). The 5- and 10-year overall survival rates of papillary carcinoma are in the range of 90% and 80%, those of follicular carcinoma in the range of 80% and 70%, respectively. Individual prognosis can be assessed with various scoring systems that include factors such as age, sex, tumour size, grading, and extent of disease. Medullary carcinoma has an intermediate prognosis and similar follow-up intervals, focusing on the tumor markers calcitonin and CEA and, if necessary, a variety of imaging methods. Anaplastic carcinoma, with its poor prognosis, requires individualized (palliative) symptomatic follow-up. (orig.)

  5. Encephalic hemodynamic phases in subarachnoid hemorrhage: how to improve the protective effect in patient prognoses

    Directory of Open Access Journals (Sweden)

    Marcelo de Lima Oliveira

    2015-01-01

    Subarachnoid hemorrhage is frequently associated with poor prognoses. Three different hemodynamic phases were identified during subarachnoid hemorrhage: oligemia, hyperemia, and vasospasm. Each phase is associated with brain metabolic changes. In this review, we correlated the hemodynamic phases with brain metabolism and potential treatment options in the hopes of improving patient prognoses.

  6. Demographic Prognoses for Some Seats in the Ostrava Region

    Czech Academy of Sciences Publication Activity Database

    Vaishar, Antonín

    2006-01-01

    Roč. 14, č. 2 (2006), s. 16-26 ISSN 1210-8812 R&D Projects: GA AV ČR IBS3086005 Institutional research plan: CEZ:AV0Z30860518 Keywords : economic restructuring * settlement system * population development * demographic prognoses * unemployment * Ostrava region Subject RIV: DE - Earth Magnetism, Geodesy, Geography

  7. Etiologie en prognose van de perifere facialis verlamming. Een virologisch en electrodiagnostisch onderzoek

    NARCIS (Netherlands)

    Mulkens, Paulus Servatius Johannes Zoltanus

    1980-01-01

    This thesis presents a clinical study of peripheral facial paralysis that sought to establish, on the one hand, the pathogenesis of the idiopathic form of the paralysis, Bell's palsy, and, on the other, how the prognosis can be determined quickly and reliably. ...

  8. Propagation prognoses on rivers Rhine, Neckar, Main and Moselle based on 3HHO tracer dispersion investigations

    International Nuclear Information System (INIS)

    Krause, W.J.

    1998-01-01

    Intermittent discharges of 3HHO (tritiated water) from nuclear installations have been used to determine flow times, flow velocities, and values characterizing the longitudinal dispersion of soluble substances under natural conditions. The data and knowledge thus gained form the basis for developing propagation prognoses. In the event of an accidental input of radioactive or inactive water-soluble substances into the river, the flow and propagation behaviour in the contaminated river sections can be described, which also permits an estimation of the radiological or toxic effects. The formal interrelation of the values required for a tabular presentation of prognoses on the longitudinal dispersion of these noxious substances is described and exemplified. (orig.)
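    Tabular propagation prognoses of this kind build on the classical one-dimensional advection-dispersion description of an instantaneous soluble release. As a generic illustration (not the report's own parameterization; all numerical values below are invented), the cross-section-averaged concentration can be sketched as:

```python
import math

def concentration(x_m, t_s, mass_kg, area_m2, u_ms, dl_m2s):
    """Cross-section-averaged concentration (kg/m^3) at distance x and time t
    after an instantaneous release of `mass_kg` into a river of cross-section
    `area_m2`, for mean velocity u and longitudinal dispersion coefficient D_L
    (1-D advection-dispersion, Gaussian solution)."""
    spread = math.sqrt(4.0 * math.pi * dl_m2s * t_s)
    return (mass_kg / (area_m2 * spread)) * math.exp(
        -(x_m - u_ms * t_s) ** 2 / (4.0 * dl_m2s * t_s))

# The concentration peak travels with the mean flow: at u = 1 m/s it passes
# x = 10 km after about 10 000 s (hypothetical river parameters).
peak = concentration(10_000, 10_000, 100.0, 500.0, 1.0, 50.0)
```

Flow time and dispersion coefficient are exactly the quantities the tracer experiments determine; with them, such a formula yields the tabulated arrival times and peak concentrations.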

  9. True and apparent scaling: The proximity of the Markov-switching multifractal model to long-range dependence

    Science.gov (United States)

    Liu, Ruipeng; Di Matteo, T.; Lux, Thomas

    2007-09-01

    In this paper, we consider daily financial data of a collection of different stock market indices, exchange rates, and interest rates, and we analyze their multi-scaling properties by estimating a simple specification of the Markov-switching multifractal (MSM) model. In order to see how well the estimated model captures the temporal dependence of the data, we estimate and compare the scaling exponents H(q) (for q=1,2) for both empirical data and simulated data of the MSM model. In most cases the multifractal model appears to generate ‘apparent’ long memory in agreement with the empirical scaling laws.
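    The scaling exponents H(q) are commonly estimated from the power-law growth of the q-th absolute moment of increments with the time lag. A generic sketch of such an estimator (not necessarily the authors' exact procedure) is:

```python
import numpy as np

def generalized_hurst(x, q=2, taus=range(1, 20)):
    """Estimate H(q) from the scaling E|x(t+tau)-x(t)|^q ~ tau^(q*H(q)),
    via a log-log least-squares fit over the lags in `taus`."""
    x = np.asarray(x, dtype=float)
    log_tau, log_mom = [], []
    for tau in taus:
        inc = np.abs(x[tau:] - x[:-tau]) ** q
        log_tau.append(np.log(tau))
        log_mom.append(np.log(inc.mean()))
    slope = np.polyfit(log_tau, log_mom, 1)[0]
    return slope / q

# Sanity check: ordinary Brownian motion should give H(2) close to 0.5
rng = np.random.default_rng(0)
bm = np.cumsum(rng.standard_normal(20_000))
h2 = generalized_hurst(bm, q=2)
```

A multifractal series shows H(q) varying with q, whereas a unifractal one gives a constant H(q); comparing the two across empirical and MSM-simulated data is the comparison the abstract describes.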

  10. The influence of climate changes on carbon cycle in the russian forests. Data inventory and long-scale model prognoses

    Energy Technology Data Exchange (ETDEWEB)

    Kokorin, A.O.; Nazarov, I.M.; Lelakin, A.L. [Inst. Global Climate and Ecology, Moscow (Russian Federation)

    1995-12-31

    Ongoing climate change raises the question of how forests will react. Forests cover 770 Mha in Russia and constitute a giant carbon reservoir. Climate change unbalances the carbon budget, giving rise to additional CO{sub 2} exchange between forests and the atmosphere. The aim of this work is to estimate these fluxes. The problem is directly connected with the GHG inventory and the vulnerability and mitigation assessments required for future Russian reports to the UN FCCC. The work includes the following steps: (1) collection of literature data and processing of experimental data on the influence of climate change on forests, (2) calculation of the carbon budget as a basis for calculating CO{sub 2} fluxes, (3) development of a new version of the CCBF (Carbon and Climate in Boreal Forests) model, and (4) model estimates of current and future CO{sub 2} fluxes caused by climate change, forest cutting, fires, and reforestation
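    A carbon-budget calculation of this kind can be caricatured as a balance between production and temperature-sensitive decomposition. The Q10 response and all values below are illustrative assumptions, not the CCBF model itself:

```python
def net_co2_flux(npp, resp0, temp_anomaly, q10=2.0):
    """Net forest-atmosphere CO2 flux (positive = uptake by the forest).
    Heterotrophic respiration is scaled from its baseline `resp0` by a
    Q10 temperature response: resp = resp0 * q10**(dT/10)."""
    resp = resp0 * q10 ** (temp_anomaly / 10.0)
    return npp - resp

# Hypothetical balanced budget at zero anomaly; a 2 K warming raises
# decomposition above production and turns the forest into a net source.
flux_0 = net_co2_flux(1.0, 1.0, 0.0)
flux_2 = net_co2_flux(1.0, 1.0, 2.0)
```

The point of the sketch is the sign logic only: a climate-driven imbalance between uptake and release terms is what produces the "additional CO2 exchange" the abstract refers to.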

  11. The influence of climate changes on carbon cycle in the russian forests. Data inventory and long-scale model prognoses

    Energy Technology Data Exchange (ETDEWEB)

    Kokorin, A O; Nazarov, I M; Lelakin, A L [Inst. Global Climate and Ecology, Moscow (Russian Federation)

    1996-12-31

    Ongoing climate change raises the question of how forests will react. Forests cover 770 Mha in Russia and constitute a giant carbon reservoir. Climate change unbalances the carbon budget, giving rise to additional CO{sub 2} exchange between forests and the atmosphere. The aim of this work is to estimate these fluxes. The problem is directly connected with the GHG inventory and the vulnerability and mitigation assessments required for future Russian reports to the UN FCCC. The work includes the following steps: (1) collection of literature data and processing of experimental data on the influence of climate change on forests, (2) calculation of the carbon budget as a basis for calculating CO{sub 2} fluxes, (3) development of a new version of the CCBF (Carbon and Climate in Boreal Forests) model, and (4) model estimates of current and future CO{sub 2} fluxes caused by climate change, forest cutting, fires, and reforestation

  12. Development of a Watershed-Scale Long-Term Hydrologic Impact Assessment Model with the Asymptotic Curve Number Regression Equation

    Directory of Open Access Journals (Sweden)

    Jichul Ryu

    2016-04-01

    In this study, 52 asymptotic Curve Number (CN) regression equations were developed for combinations of representative land covers and hydrologic soil groups. In addition, to overcome the limitations of the original Long-term Hydrologic Impact Assessment (L-THIA) model when applied to larger watersheds, a watershed-scale L-THIA Asymptotic CN (ACN) regression equation model (watershed-scale L-THIA ACN model) was developed by integrating the asymptotic CN regressions with modules for direct runoff, baseflow, and channel routing. The model was applied to four watersheds in South Korea to evaluate the accuracy of its streamflow predictions. The coefficient of determination (R²) and Nash-Sutcliffe Efficiency (NSE) values for observed versus simulated streamflow over eight-day intervals were greater than 0.6 for all four watersheds. The watershed-scale L-THIA ACN model, including the asymptotic CN regression equation method, can simulate long-term streamflow sufficiently well with the ten parameters added for the characterization of streamflow.
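    Asymptotic CN behaviour is commonly written in Hawkins' standard-response form, CN(P) = CN∞ + (100 − CN∞)·exp(−kP): the CN inferred from small storms is near 100 and decays toward a stable value CN∞ as rainfall depth P grows. A sketch combining this with the SCS runoff equation, using hypothetical parameter values rather than any of the paper's 52 fitted equations:

```python
import math

def asymptotic_cn(p_mm, cn_inf, k):
    """Standard-response asymptotic CN: observed CN declines from 100
    toward a stable value cn_inf as rainfall depth P (mm) increases."""
    return cn_inf + (100.0 - cn_inf) * math.exp(-k * p_mm)

def runoff_scs(p_mm, cn):
    """SCS curve-number direct runoff (mm); S in mm, Ia = 0.2*S."""
    s = 25400.0 / cn - 254.0
    ia = 0.2 * s
    if p_mm <= ia:
        return 0.0
    return (p_mm - ia) ** 2 / (p_mm - ia + s)

# Hypothetical fit: CN_inf = 70, k = 0.05 1/mm
cn_small = asymptotic_cn(5.0, 70.0, 0.05)    # near 100 for small storms
cn_large = asymptotic_cn(200.0, 70.0, 0.05)  # approaches CN_inf
q = runoff_scs(50.0, cn_large)               # direct runoff for a 50 mm storm
```

Using the depth-dependent CN instead of a single tabulated value is what lets a long-term model reproduce runoff across both small and large events.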

  13. Hydrodynamics of long-scale-length plasmas. Summary

    International Nuclear Information System (INIS)

    Craxton, R.S.

    1984-01-01

    A summary is given of the importance of long-scale-length plasmas to laser fusion, and some experiments in which long-scale-length plasmas have been produced and studied are listed. This talk presents SAGE simulations of most of these experiments, with the emphasis placed on understanding the hydrodynamic conditions rather than the parametric/plasma-physics processes themselves, which are not modeled by SAGE. However, interpretation of the experiments can often depend on a good understanding of the hydrodynamics, including optical ray tracing

  14. Stochastic models for structured populations scaling limits and long time behavior

    CERN Document Server

    Meleard, Sylvie

    2015-01-01

    In this contribution, several probabilistic tools to study population dynamics are developed. The focus is on scaling limits of qualitatively different stochastic individual-based models and the long-time behavior of some classes of limiting processes. Structured population dynamics are modeled by measure-valued processes describing the individual behaviors and taking into account the demographic and mutational parameters, and possible interactions between individuals. Many quantitative parameters appear in these models and several relevant normalizations are considered, leading to infinite-dimensional deterministic or stochastic large-population approximations. Biologically relevant questions are considered, such as extinction criteria, the effect of large birth events, the impact of environmental catastrophes, the mutation-selection trade-off, recovery criteria in parasite infections, and genealogical properties of a sample of individuals. These notes originated from a lecture series on Structured P...

  15. CT findings and prognoses of anoxic brain damage due to near-drowning in children

    Energy Technology Data Exchange (ETDEWEB)

    Asano, Jun-ichi; Ieshima, Atsushi (Tottori Univ., Yonago (Japan). School of Medicine); Kisa, Toshiro; Ohtani, Kyouichi

    1991-05-01

    We investigated the relationship between serial cranial CT findings and prognoses in 11 children after near-drowning. These patients were rescued after cardiac arrest lasting more than 10 minutes and were all comatose on admission. CT scans were performed within 2 weeks, at 3 weeks-1 month, at 2-4 months, and more than 5 months after admission. CT findings and prognoses fell into four groups. Group 1: low-density areas in the thalami, basal ganglia, and cortical white matter within 2 weeks (three cases; one died, two became vegetative). Group 2: enlargement of the third ventricle at 3 weeks-1 month and atrophy of the pons at 2-4 months (three cases; severe quadriplegia and mental retardation). Group 3: enlargement of the third ventricle at 3 weeks-1 month, but no atrophy of the pons at 2-4 months (three cases; mild motor disabilities and mild mental retardation). Group 4: no enlargement of the third ventricle at 3 weeks-1 month (two cases; neither paralysis nor mental retardation). (author).

  16. CT findings and prognoses of anoxic brain damage due to near-drowning in children

    International Nuclear Information System (INIS)

    Asano, Jun-ichi; Ieshima, Atsushi; Kisa, Toshiro; Ohtani, Kyouichi.

    1991-01-01

    We investigated the relationship between serial cranial CT findings and prognoses in 11 children after near-drowning. These patients were rescued after cardiac arrest lasting more than 10 minutes and were all comatose on admission. CT scans were performed within 2 weeks, at 3 weeks-1 month, at 2-4 months, and more than 5 months after admission. CT findings and prognoses fell into four groups. Group 1: low-density areas in the thalami, basal ganglia, and cortical white matter within 2 weeks (three cases; one died, two became vegetative). Group 2: enlargement of the third ventricle at 3 weeks-1 month and atrophy of the pons at 2-4 months (three cases; severe quadriplegia and mental retardation). Group 3: enlargement of the third ventricle at 3 weeks-1 month, but no atrophy of the pons at 2-4 months (three cases; mild motor disabilities and mild mental retardation). Group 4: no enlargement of the third ventricle at 3 weeks-1 month (two cases; neither paralysis nor mental retardation). (author)

  17. Infrastructure of radiotherapy in the Netherlands: evaluation of prognoses and introduction of a new model for determining the needs

    International Nuclear Information System (INIS)

    Slotman, Ben J.; Leer, Jan W.H.

    2003-01-01

    Background and purpose: In the Netherlands, the radiotherapy infrastructure is regulated by a governmental license system. This requires timely and realistic prognostication of the need for radiotherapy. In the present study, the latest prognoses (1993) and the realized changes in infrastructure are evaluated, and a new prognosis for the period until 2010, calculated using a new model, is presented. Materials and methods: Data on cancer incidence and the use of radiotherapy were obtained from various published national reports and from a survey of all radiotherapy departments. Results: The cancer incidence over the period 1993-1997 was about 10% higher than predicted. In 1996 and 1997, the percentage of new cancer patients treated with radiotherapy was 45.6 and 48.2%, respectively. The absolute number of newly irradiated patients was about 10% higher than foreseen in the prognosis. The need for radiotherapy infrastructure depends not only on epidemiological data and changes in the indications for radiotherapy, but also on changes in the types of treatment, which carry different workloads. A new model, which uses four categories for teletherapy and four for brachytherapy, is described, and a new prognosis for the required number of linear accelerators and staff up to the year 2010 is presented. Conclusion: The original prognosis considerably underestimated the actual cancer incidence and number of radiotherapy patients. The new model-based prognosis, which accounts not only for the increase in the number of patients but also for changes in treatment techniques, is expected to predict the required radiotherapy capacity more accurately
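    The arithmetic of a category-based capacity model can be sketched as follows; the category shares, fractionation schemes, and machine throughput below are invented for illustration and are not the values of the Dutch model:

```python
def linacs_needed(new_patients, rt_fraction, category_shares,
                  fractions_per_course, slots_per_linac_year):
    """Estimate required accelerators from per-category workload:
    irradiated patients are split over treatment categories with different
    numbers of fractions per course, and the total fraction count is divided
    by the annual fraction throughput of one machine."""
    irradiated = new_patients * rt_fraction
    total_fractions = sum(irradiated * share * frx
                          for share, frx in zip(category_shares,
                                                fractions_per_course))
    return total_fractions / slots_per_linac_year

# Hypothetical: 70 000 new cancer patients, 48% irradiated, four teletherapy
# categories with different course lengths, 9 000 fractions per linac-year.
n = linacs_needed(70_000, 0.48, [0.3, 0.4, 0.2, 0.1], [5, 15, 25, 35], 9_000)
```

Because the category mix enters multiplicatively, a shift toward longer fractionation schemes raises the required capacity even at constant patient numbers, which is exactly the effect the category-based model was built to capture.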

  18. Application of Large-Scale Database-Based Online Modeling to Plant State Long-Term Estimation

    Science.gov (United States)

    Ogawa, Masatoshi; Ogai, Harutoshi

    Recently, attention has been drawn to local modeling techniques based on a new idea called “Just-In-Time (JIT) modeling”. To apply JIT modeling online to a large database, “Large-scale database-based Online Modeling (LOM)” has been proposed. LOM is a technique that makes the retrieval of neighboring data more efficient by using both “stepwise selection” and quantization. In order to predict the long-term state of the plant without using future data of the manipulated variables, an Extended Sequential Prediction method of LOM (ESP-LOM) has been proposed. In this paper, the LOM and the ESP-LOM are introduced.
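    The heart of JIT modeling, deferring model building until a query arrives and then fitting a local model to retrieved neighbours, can be sketched generically (this is not the LOM stepwise-selection or quantization code itself):

```python
import numpy as np

def jit_predict(X, y, x_query, k=10):
    """Just-In-Time (lazy, local) prediction: retrieve the k stored samples
    nearest to the query point and fit a local linear model to them only."""
    X, y = np.asarray(X, float), np.asarray(y, float)
    d = np.linalg.norm(X - x_query, axis=1)       # neighbour retrieval
    idx = np.argsort(d)[:k]
    A = np.hstack([X[idx], np.ones((k, 1))])      # affine design matrix
    coef, *_ = np.linalg.lstsq(A, y[idx], rcond=None)
    return np.append(x_query, 1.0) @ coef

# Toy database: 500 samples of a smooth nonlinear plant response
rng = np.random.default_rng(1)
X = rng.uniform(-1, 1, size=(500, 2))
y = np.sin(X[:, 0]) + 0.5 * X[:, 1]
pred = jit_predict(X, y, np.array([0.2, -0.3]))
```

The design trade-off is that no global model is ever fitted; accuracy hinges on fast neighbour retrieval over the full database, which is what LOM's stepwise selection and quantization are there to accelerate.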

  19. The "Grey System" theory and its application to target cost forecasting at Chinese collieries; Die Theorie des grauen Systems und ihre Anwendung bei der Prognose der Zielkosten des gesamten Bergwerks

    Energy Technology Data Exchange (ETDEWEB)

    Ding Rijia; Yi Maosheng; Wang Shiyuan

    1997-09-01

    A production-cost forecasting system has been developed for use at Chinese collieries. Known as the "Grey System", the theory and model-based process distinguishes between "white" information, which is human-controlled; "black" information, which cannot be controlled by human actions; and "grey" information, which forms the majority and comprises information that can be collected only partly or incompletely. A grey system is therefore one in which necessary information is absent. Future developments of the system can be forecast by analysing the regularity of the time-series values of the technical and economic characteristics of previous periods. The system employs the GM(1,1) prognostic model, a "grey" forecasting model with one variable and one first-order linear differential equation. It can be applied to systems that are affected simultaneously by several factors with no overriding main factor, or in which no marked regularity exists among the controlling factors. The mathematical model constructed on the basis of the Grey theory is designed for large-scale economic systems affected by several control factors. A bigger system entails a greater element of randomness, which in turn lessens the influence of subjective interference factors and increases the accuracy of the prognosis. (orig./MSK)
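    The GM(1,1) model named above has a well-known recipe: accumulate the series, fit a single first-order linear differential equation to the accumulation by least squares, then difference the fitted accumulation to forecast. A minimal sketch with an invented cost series:

```python
import numpy as np

def gm11_forecast(x0, steps=1):
    """GM(1,1) grey forecasting: one variable, one first-order equation.
    Fits dx1/dt + a*x1 = b on the accumulated series x1 = cumsum(x0)."""
    x0 = np.asarray(x0, float)
    x1 = np.cumsum(x0)                          # accumulated generating series
    z1 = 0.5 * (x1[1:] + x1[:-1])               # background (mean) values
    B = np.column_stack([-z1, np.ones_like(z1)])
    a, b = np.linalg.lstsq(B, x0[1:], rcond=None)[0]
    n = len(x0)
    k = np.arange(n + steps)
    x1_hat = (x0[0] - b / a) * np.exp(-a * k) + b / a
    x0_hat = np.diff(x1_hat, prepend=0.0)       # de-accumulate
    x0_hat[0] = x0[0]
    return x0_hat[-steps:]

# Invented cost series growing 5% per period; GM(1,1) recovers the trend
series = 100.0 * 1.05 ** np.arange(8)
next_val = gm11_forecast(series, steps=1)[0]
```

GM(1,1) effectively fits an exponential trend through the accumulated data, which is why it performs well on short, monotone series and poorly on oscillatory ones.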

  20. Test results on the long models and full scale prototypes of the second generation LHC arc dipoles

    CERN Document Server

    Billan, J; Bottura, L; Leroy, D; Pagano, O; Perin, R; Perini, D; Savary, F; Siemko, A; Sievers, P; Spigo, G; Vlogaert, J; Walckiers, L; Wyss, C; Rossi, L

    1999-01-01

    With the test of the first full-scale prototype in June-July 1998, the R&D on the long superconducting dipoles based on the 1993-95 LHC design has come to an end. This second generation of long magnets has a 56 mm coil aperture and is wound with 15 mm wide cable arranged in a 5-coil-block layout. The series includes four 10 m long model dipoles, whose coils were wound and collared in industry and whose cold masses were assembled and cryostated at CERN, as well as one 15 m long dipole prototype manufactured entirely in industry in the framework of a CERN-INFN collaboration for the LHC. After a brief description of particular features of the design and manufacturing, test results are reported and compared with expectations. One magnet reached the record field for long model dipoles of 9.8 T, but the results have not been well reproducible from magnet to magnet. Guidelines for modifications that will appear in the next generation of long magnets, based on a six-block coil design, are indicated in the concl...

  1. Sizing and scaling requirements of a large-scale physical model for code validation

    International Nuclear Information System (INIS)

    Khaleel, R.; Legore, T.

    1990-01-01

    Model validation is an important consideration in the application of a code for performance assessment, and therefore in assessing the long-term behavior of the engineered and natural barriers of a geologic repository. Scaling considerations relevant to porous-media flow are reviewed. An analysis approach is presented for determining the sizing requirements of a large-scale physical hydrology model. The physical model will be used to validate performance assessment codes that evaluate the long-term behavior of the repository isolation system. Numerical simulation results for the sizing requirements are presented for a porous-medium model in which the media properties are spatially uncorrelated

  2. Modeling and simulation of nuclear fuel in scenarios with long time scales

    Energy Technology Data Exchange (ETDEWEB)

    Espinosa, Carlos E.; Bodmann, Bardo E.J., E-mail: eduardo.espinosa@ufrgs.br, E-mail: bardo.bodmann@ufrgs.br [Universidade Federal do Rio Grande do Sul (DENUC/PROMEC/UFRGS), Porto Alegre, RS (Brazil). Departamento de Engenharia Nuclear. Programa de Pos Graduacao em Engenharia Mecanica

    2015-07-01

    Nuclear reactors play a key role in defining the energy matrix. A study by the Fraunhofer Society shows the distribution of energy sources on different time scales over long periods; regardless of the scale, the use of nuclear energy is practically constant. In these scenarios, the behavior of the nuclear fuel over time is of interest. For kinetics on long time scales, the changing chemical composition of the fuel is significant, so it is appropriate to consider the fission products called neutron poisons. Such products are of interest in a nuclear reactor because they become parasitic neutron absorbers and act as long-lived heat sources. The objective of this work is to solve the kinetics system coupled to the neutron poison products. To solve this system, we use ideas similar to the Adomian decomposition method. Initially, one separates the system of equations into the sum of a linear part and a non-linear part in order to solve a recursive system; the nonlinearity is treated via Adomian polynomials. We present numerical results for the effects of changing the power of a reactor in scenarios such as start-up and shut-down. For these results we consider time-dependent reactivities: linear, quadratic polynomial, and oscillatory. With these results one can simulate the chemical composition of the fuel due to the reuse of spent fuel in subsequent cycles. (author)
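    The recursion described above can be illustrated on a toy problem with a quadratic nonlinearity, y' = −y + y², whose Adomian polynomials are simply Cauchy products: A_n = Σ_{i+j=n} y_i·y_j. This is a simplified stand-in for the poison-coupled kinetics system, not the authors' code:

```python
import numpy as np

def adomian_logistic(y0=0.5, terms=12, deg=24):
    """Adomian decomposition for y' = -y + y^2, y(0) = y0.
    Each component y_n is stored as ascending power-series coefficients
    in t; y_{n+1}(t) = integral_0^t (-y_n + A_n) ds with A_n the Adomian
    polynomials of N(y) = y^2 (Cauchy-product terms)."""
    c0 = np.zeros(deg)
    c0[0] = y0
    comps = [c0]
    for n in range(terms - 1):
        A = np.zeros(deg)
        for i in range(n + 1):                       # A_n = sum conv(y_i, y_{n-i})
            A += np.convolve(comps[i], comps[n - i])[:deg]
        rhs = -comps[n] + A
        nxt = np.zeros(deg)
        nxt[1:] = rhs[:deg - 1] / np.arange(1, deg)  # term-wise integration
        comps.append(nxt)
    return np.sum(comps, axis=0)                     # truncated series solution

coef = adomian_logistic()
t = 0.5
y_approx = np.polynomial.polynomial.polyval(t, coef)
y_exact = 1.0 / (1.0 + np.exp(t))   # closed-form solution for y0 = 0.5
```

As in the paper's scheme, the linear part is handled by direct integration and the nonlinearity appears only through the recursively computed Adomian polynomials, so each component is obtained from the previous ones without solving a nonlinear system.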

  3. Modeling and simulation of nuclear fuel in scenarios with long time scales

    International Nuclear Information System (INIS)

    Espinosa, Carlos E.; Bodmann, Bardo E.J.

    2015-01-01

    Nuclear reactors play a key role in defining the energy matrix. A study by the Fraunhofer Society shows the distribution of energy sources on different time scales over long periods; regardless of the scale, the use of nuclear energy is practically constant. In these scenarios, the behavior of the nuclear fuel over time is of interest. For kinetics on long time scales, the changing chemical composition of the fuel is significant, so it is appropriate to consider the fission products called neutron poisons. Such products are of interest in a nuclear reactor because they become parasitic neutron absorbers and act as long-lived heat sources. The objective of this work is to solve the kinetics system coupled to the neutron poison products. To solve this system, we use ideas similar to the Adomian decomposition method. Initially, one separates the system of equations into the sum of a linear part and a non-linear part in order to solve a recursive system; the nonlinearity is treated via Adomian polynomials. We present numerical results for the effects of changing the power of a reactor in scenarios such as start-up and shut-down. For these results we consider time-dependent reactivities: linear, quadratic polynomial, and oscillatory. With these results one can simulate the chemical composition of the fuel due to the reuse of spent fuel in subsequent cycles. (author)

  4. Forest landscape models, a tool for understanding the effect of the large-scale and long-term landscape processes

    Science.gov (United States)

    Hong S. He; Robert E. Keane; Louis R. Iverson

    2008-01-01

    Forest landscape models have become important tools for understanding large-scale and long-term landscape (spatial) processes such as climate change, fire, windthrow, seed dispersal, insect outbreak, disease propagation, forest harvest, and fuel treatment, because controlled field experiments designed to study the effects of these processes are often not possible (...

  5. Long-term modelling of Carbon Capture and Storage, Nuclear Fusion, and large-scale District Heating

    DEFF Research Database (Denmark)

    Grohnheit, Poul Erik; Korsholm, Søren Bang; Lüthje, Mikael

    2011-01-01

    before 2050. The modelling tools developed by the International Energy Agency (IEA) Implementing Agreement ETSAP include both multi-regional global and long-term energy models till 2100, as well as national or regional models with shorter time horizons. Examples are the EFDA-TIMES model, focusing...... on nuclear fusion and the Pan European TIMES model, respectively. In the next decades CCS can be a driver for the development and expansion of large-scale district heating systems, which are currently widespread in Europe, Korea and China, and with large potentials in North America. If fusion will replace...... fossil fuel power plants with CCS in the second half of the century, the same infrastructure for heat distribution can be used which will support the penetration of both technologies. This paper will address the issue of infrastructure development and the use of CCS and fusion technologies using...

  6. Long term socio-ecological research across temporal and spatial scales

    Science.gov (United States)

    Singh, S. J.; Haberl, H.

    2012-04-01

    Understanding trajectories of change in coupled socio-ecological (or human-environment) systems requires monitoring and analysis at several spatial and temporal scales. Long-term ecosystem research (LTER) is a strand of research coupled with observation systems and infrastructures (LTER sites) aimed at understanding how global change affects ecosystems around the world. In recent years it has been increasingly recognized that sustainability concerns require extending this approach to long-term socio-ecological research, i.e. a more integrated perspective that focuses on interaction processes between society and ecosystems over longer time periods. Thus, Long-Term Socio-Ecological Research, abbreviated LTSER, aims at observing, analyzing, understanding and modelling of changes in coupled socio-ecological systems over long periods of time. Indeed, the magnitude of the problems we now face is an outcome of a much longer process, accelerated by industrialisation since the nineteenth century. The paper will provide an overview of a book (in press) on LTSER with particular emphasis on 'socio-ecological transitions' in terms of material, energy and land use dynamics across temporal and spatial scales.

  7. The ability of intensive care unit physicians to estimate long-term prognosis in survivors of critical illness.

    Science.gov (United States)

    Soliman, Ivo W; Cremer, Olaf L; de Lange, Dylan W; Slooter, Arjen J C; van Delden, Johannes Hans J M; van Dijk, Diederik; Peelen, Linda M

    2018-02-01

    To assess the reliability of physicians' prognoses for intensive care unit (ICU) survivors with respect to long-term survival and health-related quality of life (HRQoL), we performed an observational cohort study in a single mixed tertiary ICU in The Netherlands. ICU survivors with a length of stay >48 h were included. At ICU discharge, the one-year prognosis was estimated by physicians using the four-option Sabadell score to record their expectations. The outcome of interest was poor outcome, defined as dying within one-year follow-up or surviving with an EuroQoL-5D-3L index <0.4. Among 1399 ICU survivors, 1068 (76%) subjects were expected to have a good outcome; 243 (18%) a poor long-term prognosis; 43 (3%) a poor short-term prognosis; and 45 (3%) were expected to die in hospital (i.e. the Sabadell score levels). Poor outcome was observed in 38%, 55%, 86%, and 100% of these groups, respectively (concomitant c-index: 0.61). The expected prognosis did not match the observed outcome in 365 (36%) patients, almost exclusively (99%) due to overoptimism. Physician experience did not affect the results. In conclusion, prognoses estimated by physicians incorrectly predicted long-term survival and HRQoL in one-third of ICU survivors, and inaccurate prognoses were generally the result of overoptimistic expectations of outcome.
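    The c-index reported above measures how often a higher (worse) ordinal score accompanies the worse outcome across all pairs with discordant outcomes, counting ties as one half. A sketch with invented scores (not the study's data):

```python
def c_index(scores, outcomes):
    """Concordance index for an ordinal risk score against a binary outcome:
    over all (poor-outcome, good-outcome) patient pairs, the fraction in
    which the poor-outcome patient received the higher score; ties = 0.5."""
    poor = [s for s, o in zip(scores, outcomes) if o]
    good = [s for s, o in zip(scores, outcomes) if not o]
    if not poor or not good:
        raise ValueError("need both outcome classes")
    wins = ties = 0
    for p in poor:
        for g in good:
            if p > g:
                wins += 1
            elif p == g:
                ties += 1
    return (wins + 0.5 * ties) / (len(poor) * len(good))

# Hypothetical Sabadell-style levels (0 = good expected ... 3 = expected death)
scores = [0, 0, 1, 1, 2, 3, 0, 2]
outcomes = [0, 1, 0, 1, 1, 1, 0, 0]
c = c_index(scores, outcomes)
```

A value of 0.5 means the score discriminates no better than chance and 1.0 means perfect ranking, which puts the study's 0.61 in context.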

  8. Model for low temperature oxidation during long term interim storage

    Energy Technology Data Exchange (ETDEWEB)

    Desgranges, Clara; Bertrand, Nathalie; Gauvain, Danielle; Terlain, Anne [Service de la Corrosion et du Comportement des Materiaux dans leur Environnement, CEA/Saclay - 91191 Gif-sur-Yvette Cedex (France); Poquillon, Dominique; Monceau, Daniel [CIRIMAT UMR 5085, ENSIACET-INPT, 31077 Toulouse Cedex 4 (France)

    2004-07-01

    For high-level nuclear waste containers in long-term interim storage, dry oxidation will be the first and main degradation mode for about a century. The metal lost to dry oxidation over such a long period must be evaluated with good reliability. To achieve this, modelling of oxide scale growth is necessary, and this is the aim of the dry oxidation studies performed within the framework of the COCON program. An advanced model based on the description of the elementary mechanisms involved in scale growth at low temperatures, such as partial interfacial control of the oxidation kinetics and/or grain-boundary diffusion, is being developed to increase the reliability of long-term extrapolations compared with basic models derived from short-time experiments. Since few experimental data on dry oxidation are available in the temperature range of interest, experiments have also been performed to evaluate the relevant model input parameters, such as the grain size of the oxide scale, using iron as a simplified material. (authors)
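The extrapolation risk the COCON work addresses can be made concrete: growth laws that fit the same short-time data diverge strongly at the century scale. A hedged numerical illustration (the rate constants are invented, not COCON values):

```python
import math

def parabolic(t, kp):
    """x**2 = kp * t: diffusion-controlled scale growth."""
    return math.sqrt(kp * t)

def logarithmic(t, a, b):
    """x = a * ln(1 + b*t): interface-controlled low-temperature growth."""
    return a * math.log(1.0 + b * t)

# invented constants chosen so both laws roughly agree at one year
kp, a, b = 1.0, 0.27, 40.0
one_year, century = 1.0, 100.0
print(parabolic(one_year, kp), logarithmic(one_year, a, b))
print(parabolic(century, kp), logarithmic(century, a, b))
```

Here both laws give a similar scale thickness at one year yet differ by roughly a factor of four after a century, which is why mechanistic input (interfacial control, grain-boundary diffusion, oxide grain size) matters for long-term prognoses.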

  9. Model for low temperature oxidation during long term interim storage

    International Nuclear Information System (INIS)

    Desgranges, Clara; Bertrand, Nathalie; Gauvain, Danielle; Terlain, Anne; Poquillon, Dominique; Monceau, Daniel

    2004-01-01

    For high-level nuclear waste containers in long-term interim storage, dry oxidation will be the first and main degradation mode for about a century. The metal lost to dry oxidation over such a long period must be evaluated with good reliability. To achieve this, modelling of oxide scale growth is necessary, and this is the aim of the dry oxidation studies performed within the framework of the COCON program. An advanced model based on the description of the elementary mechanisms involved in scale growth at low temperatures, such as partial interfacial control of the oxidation kinetics and/or grain-boundary diffusion, is being developed to increase the reliability of long-term extrapolations compared with basic models derived from short-time experiments. Since few experimental data on dry oxidation are available in the temperature range of interest, experiments have also been performed to evaluate the relevant model input parameters, such as the grain size of the oxide scale, using iron as a simplified material. (authors)

  10. Predicting the durability of mineral external plaster using accelerated ageing tests; Prognose der Dauerhaftigkeit mineralischer Aussenputze mit Hilfe von beschleunigten Alterungstests

    Energy Technology Data Exchange (ETDEWEB)

    Bochen, Jerzy; Nowak, Henryk A. [Schlesische Technische Universitaet, Lehrstuhl Bauprozesse, Gliwice (Poland)

    2004-08-01

    This article presents a technique for accelerated ageing tests of different plasters. Two models were developed to describe durability: both describe changes in selected characteristics caused by frost damage and leaching, the first based on changes in mass and the second on changes in strength. The models consider the main characteristics that influence durability, such as strength, porosity, permeability, adhesive tensile strength, and aggressive environmental influences. Both models enable the durability of external plaster to be predicted.
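In the simplest reading, either durability model amounts to tracking a property (mass or strength) across ageing cycles until it crosses an acceptance threshold. A toy sketch under that assumption (the per-cycle loss rate and threshold are invented, not the authors' fitted values):

```python
def cycles_to_failure(p0, loss_per_cycle, threshold):
    """Number of ageing cycles until a property (mass or strength)
    falls below an acceptance threshold, assuming a constant
    fractional loss per freeze-thaw/leaching cycle."""
    p, n = p0, 0
    while p > threshold:
        p *= 1.0 - loss_per_cycle
        n += 1
    return n

# invented: 2% strength loss per accelerated cycle, 80% acceptance limit
print(cycles_to_failure(100.0, 0.02, 80.0))
```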

  11. De essentie van de daling in het aantal verkeersdoden : ontwikkelingen in 2004 en 2005, en nieuwe prognoses voor 2010 en 2020.

    NARCIS (Netherlands)

    Stipdonk, H.L.; Aarts, L.T.; Schoon, C.C. & Wesemann, P.

    2006-01-01

    The essence of the decrease in the number of road deaths; Developments in 2004 and 2005, and new prognoses for 2010 and 2020. During the last 15 to 20 years there was a 2.5% annual decrease in the number of road deaths. This decrease is attributed to all kinds of important, gradual improvements of

  12. Microphysics in Multi-scale Modeling System with Unified Physics

    Science.gov (United States)

    Tao, Wei-Kuo

    2012-01-01

    Recently, a multi-scale modeling system with unified physics was developed at NASA Goddard. It consists of (1) a cloud-resolving model (the Goddard Cumulus Ensemble model, GCE), (2) a regional-scale model (the NASA-unified Weather Research and Forecasting model, WRF), (3) a coupled CRM and global model (the Goddard Multi-scale Modeling Framework, MMF), and (4) a land modeling system. The same microphysical processes, long- and short-wave radiative transfer, and land processes, together with explicit cloud-radiation and cloud-land-surface interactions, are applied throughout this multi-scale modeling system. The modeling system has been coupled with a multi-satellite simulator so that NASA high-resolution satellite data can be used to identify the strengths and weaknesses of the cloud and precipitation processes simulated by the model. This talk reviews developments and applications of the multi-scale modeling system, with particular attention to the microphysics development and its performance.

  13. Mechanistically-Based Field-Scale Models of Uranium Biogeochemistry from Upscaling Pore-Scale Experiments and Models

    International Nuclear Information System (INIS)

    Tim Scheibe; Alexandre Tartakovsky; Brian Wood; Joe Seymour

    2007-01-01

    generally not be applicable to other broad classes of problems, we believe that this approach (if applied over time to many types of problems) offers greater potential for long-term progress than attempts to discover a universal solution or theory. We are developing and testing this approach using porous media and model reaction systems that can be both experimentally measured and quantitatively simulated at the pore scale, specifically biofilm development and metal reduction in granular porous media. The general approach used in this research comprises the following steps: (1) Perform pore-scale characterization of pore geometry and biofilm development in selected porous media systems. (2) Simulate selected reactive transport processes at the pore scale in experimentally measured pore geometries. (3) Validate pore-scale models against laboratory-scale experiments. (4) Perform upscaling to derive continuum-scale (local Darcy scale) process descriptions and effective parameters. (5) Use upscaled models and parameters to simulate reactive transport at the continuum scale in a macroscopically heterogeneous medium.

  14. Mechanistically-Based Field-Scale Models of Uranium Biogeochemistry from Upscaling Pore-Scale Experiments and Models

    Energy Technology Data Exchange (ETDEWEB)

    Tim Scheibe; Alexandre Tartakovsky; Brian Wood; Joe Seymour

    2007-04-19

    generally not be applicable to other broad classes of problems, we believe that this approach (if applied over time to many types of problems) offers greater potential for long-term progress than attempts to discover a universal solution or theory. We are developing and testing this approach using porous media and model reaction systems that can be both experimentally measured and quantitatively simulated at the pore scale, specifically biofilm development and metal reduction in granular porous media. The general approach used in this research comprises the following steps: (1) Perform pore-scale characterization of pore geometry and biofilm development in selected porous media systems. (2) Simulate selected reactive transport processes at the pore scale in experimentally measured pore geometries. (3) Validate pore-scale models against laboratory-scale experiments. (4) Perform upscaling to derive continuum-scale (local Darcy scale) process descriptions and effective parameters. (5) Use upscaled models and parameters to simulate reactive transport at the continuum scale in a macroscopically heterogeneous medium.

  15. Finite-range-scaling analysis of metastability in an Ising model with long-range interactions

    International Nuclear Information System (INIS)

    Gorman, B.M.; Rikvold, P.A.; Novotny, M.A.

    1994-01-01

    We apply both a scalar field theory and a recently developed transfer-matrix method to study the stationary properties of metastability in a two-state model with weak, long-range interactions: the N×∞ quasi-one-dimensional Ising model. Using the field theory, we find the analytic continuation f of the free energy across the first-order transition, assuming that the system escapes the metastable state by the nucleation of noninteracting droplets. We find that corrections to the field dependence are substantial and, by solving the Euler-Lagrange equation for the model numerically, we have verified the form of the free-energy cost of nucleation, including the first correction. In the transfer-matrix method, we associate with the subdominant eigenvectors of the transfer matrix a complex-valued "constrained" free-energy density f_α computed directly from the matrix. For the eigenvector with an associated magnetization most strongly opposed to the applied magnetic field, f_α exhibits finite-range scaling behavior in agreement with f over a wide range of temperatures and fields, extending nearly to the classical spinodal. Some implications of these results for numerical studies of metastability are discussed.

  16. A Decade-Long European-Scale Convection-Resolving Climate Simulation on GPUs

    Science.gov (United States)

    Leutwyler, D.; Fuhrer, O.; Ban, N.; Lapillonne, X.; Lüthi, D.; Schar, C.

    2016-12-01

    Convection-resolving models have proven to be very useful tools in numerical weather prediction and in climate research. However, due to their extremely demanding computational requirements, they have so far been limited to short simulations and/or small computational domains. Innovations in the supercomputing domain have led to new supercomputer designs that combine conventional multi-core CPUs with accelerators such as graphics processing units (GPUs). One of the first atmospheric models to be fully ported to GPUs is the Consortium for Small-Scale Modeling weather and climate model COSMO. This new version allows us to expand the simulation domain to areas spanning continents and the time period up to one decade. We present results from a decade-long, convection-resolving climate simulation over Europe using the GPU-enabled COSMO version on a computational domain with 1536×1536×60 grid points. The simulation is driven by the ERA-Interim reanalysis. The results illustrate how the approach allows for the representation of interactions between synoptic-scale and meso-scale atmospheric circulations at scales ranging from 1000 to 10 km. We discuss some of the advantages and prospects of using GPUs and focus on the performance of the convection-resolving modeling approach on the European scale. Specifically, we investigate the organization of convective clouds and validate hourly rainfall distributions against various high-resolution data sets.

  17. Modelling the Long-term Periglacial Imprint on Mountain Landscapes

    DEFF Research Database (Denmark)

    Andersen, Jane Lund; Egholm, David Lundbek; Knudsen, Mads Faurschou

    Studies of periglacial processes usually focus on small-scale, isolated phenomena, leaving less explored questions of how such processes shape vast areas of Earth’s surface. Here we use numerical surface process modelling to better understand how periglacial processes drive large-scale, long-term...

  18. Global scale groundwater flow model

    Science.gov (United States)

    Sutanudjaja, Edwin; de Graaf, Inge; van Beek, Ludovicus; Bierkens, Marc

    2013-04-01

    As the world's largest accessible source of freshwater, groundwater plays a vital role in satisfying the basic needs of human society. It serves as a primary source of drinking water and supplies water for agricultural and industrial activities. During times of drought, groundwater sustains water flows in streams, rivers, lakes and wetlands, and thus supports ecosystem habitat and biodiversity, while its large natural storage provides a buffer against water shortages. Yet the current generation of global-scale hydrological models does not include a groundwater flow component, which is a crucial part of the hydrological cycle and allows the simulation of groundwater head dynamics. In this study we present a steady-state MODFLOW (McDonald and Harbaugh, 1988) groundwater model on the global scale at 5 arc-minute resolution. Aquifer schematization and properties of this groundwater model were developed from available global lithological models (e.g. Dürr et al., 2005; Gleeson et al., 2010; Hartmann and Moorsdorff, in press). We force the groundwater model with output from the large-scale hydrological model PCR-GLOBWB (van Beek et al., 2011), specifically the long-term net groundwater recharge and average surface water levels derived from routed channel discharge. We validated the calculated groundwater heads and depths against available head observations from different regions, including North and South America and Western Europe. Our results show that it is feasible to build a relatively simple global-scale groundwater model using existing information and to estimate water table depths within acceptable accuracy in many parts of the world.
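The coupling described (recharge from the hydrological model driving a steady-state groundwater solver) can be illustrated in one dimension with a tridiagonal finite-difference solve; this is a toy analogue of the approach, not the MODFLOW/PCR-GLOBWB configuration:

```python
def steady_heads(n, dx, T, recharge, h_left, h_right):
    """Solve T*h'' + R = 0 on n interior nodes with fixed-head boundaries,
    using the Thomas algorithm for the tridiagonal system."""
    # discretised: T*(h[i-1] - 2h[i] + h[i+1])/dx**2 = -R
    a = [1.0] * n                          # sub-diagonal
    b = [-2.0] * n                         # diagonal
    c = [1.0] * n                          # super-diagonal
    d = [-recharge * dx * dx / T] * n      # right-hand side
    d[0] -= h_left                         # fold boundary heads into RHS
    d[-1] -= h_right
    for i in range(1, n):                  # forward elimination
        m = a[i] / b[i - 1]
        b[i] -= m * c[i - 1]
        d[i] -= m * d[i - 1]
    h = [0.0] * n                          # back substitution
    h[-1] = d[-1] / b[-1]
    for i in range(n - 2, -1, -1):
        h[i] = (d[i] - c[i] * h[i + 1]) / b[i]
    return h

# invented setup: 1 km aquifer strip, uniform recharge, fixed heads at both ends
heads = steady_heads(n=99, dx=10.0, T=100.0, recharge=1e-3,
                     h_left=0.0, h_right=0.0)
print(max(heads))
```

For constant recharge the exact solution is the parabola h = R·x·(L − x)/(2T), so the discrete solver can be checked against h_max = R·L²/(8T).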

  19. How to correct for long-term externalities of large-scale wind power development by a capacity mechanism?

    International Nuclear Information System (INIS)

    Cepeda, Mauricio; Finon, Dominique

    2013-01-01

    This paper deals with the practical problems related to long-term security of supply in electricity markets in the presence of large-scale wind power development. The success of recent renewable promotion schemes adds a new dimension to ensuring long-term security of supply: it necessitates designing second-best policies to prevent large-scale wind power development from distorting long-run equilibrium prices and investments in conventional generation, in particular in peaking units. We rely upon a long-term simulation model which simulates electricity market players' investment decisions in a market regime and incorporates large-scale wind power development under either subsidised or market-driven development scenarios. We test the use of capacity mechanisms to compensate for the long-term effects of large-scale wind power development on prices and reliability of supply. The first finding is that capacity mechanisms can help to reduce the social cost of large-scale wind power development in terms of a decrease of the loss-of-load probability. The second finding is that, in a market-based wind power deployment without subsidy, wind generators are penalised for insufficient contribution to the long-term system reliability. - Highlights: • We model power market players' investment decisions incorporating wind power. • We examine two market designs: an energy-only market and a capacity mechanism. • We test two types of wind power development paths: subsidised and market-driven. • Capacity mechanisms compensate for the externalities of wind power developments.
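The reliability metric behind both findings, loss-of-load probability, is simple to compute once hourly load and available capacity are given; a toy sketch with invented numbers (not the paper's simulation model):

```python
import random

def lolp(loads, thermal_capacity, wind_capacity, wind_factors):
    """Loss-of-load probability: share of hours in which load exceeds
    dispatchable capacity plus that hour's available wind output."""
    short = sum(1 for load, cf in zip(loads, wind_factors)
                if load > thermal_capacity + wind_capacity * cf)
    return short / len(loads)

random.seed(7)
hours = 8760
loads = [70.0 + 25.0 * random.random() for _ in range(hours)]  # GW, invented
wind_cf = [random.random() ** 2 for _ in range(hours)]         # skewed to low output

only_thermal = lolp(loads, thermal_capacity=90.0, wind_capacity=0.0,
                    wind_factors=wind_cf)
wind_heavy = lolp(loads, thermal_capacity=85.0, wind_capacity=20.0,
                  wind_factors=wind_cf)
print(only_thermal, wind_heavy)
```

A capacity mechanism would enter this sketch as extra dispatchable capacity procured against the hours in which wind output is low.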

  20. The Goddard multi-scale modeling system with unified physics

    Directory of Open Access Journals (Sweden)

    W.-K. Tao

    2009-08-01

    Recently, a multi-scale modeling system with unified physics was developed at NASA Goddard. It consists of (1) a cloud-resolving model (CRM), (2) a regional-scale model, the NASA unified Weather Research and Forecasting Model (WRF), and (3) a coupled CRM-GCM (general circulation model), known as the Goddard Multi-scale Modeling Framework or MMF. The same cloud-microphysical processes, long- and short-wave radiative transfer and land-surface processes are applied in all of the models to study explicit cloud-radiation and cloud-surface interactive processes in this multi-scale modeling system. This modeling system has been coupled with a multi-satellite simulator for comparison and validation with NASA high-resolution satellite data.

    This paper reviews the development and presents some applications of the multi-scale modeling system, including results from using the multi-scale modeling system to study the interactions between clouds, precipitation, and aerosols. In addition, use of the multi-satellite simulator to identify the strengths and weaknesses of the model-simulated precipitation processes will be discussed as well as future model developments and applications.

  1. Modeling the Long-term Transport and Accumulation of Radionuclides in the Landscape for Derivation of Dose Conversion Factors

    International Nuclear Information System (INIS)

    Avila, Rodolfo Moreno; Ekstroem, Per-Anders; Kautsky, Ulrik

    2006-01-01

    To evaluate the radiological impact of potential releases to the biosphere from a geological repository for spent nuclear fuel, it is necessary to assess the long-term dynamics of the distribution of radionuclides in the environment. In this paper, we propose an approach for making prognoses of the distribution and fluxes of radionuclides released from the geosphere, in discharges of contaminated groundwater, to an evolving landscape. The biosphere changes during the temperate part (spanning approximately 20,000 years) of an interglacial period are handled by building biosphere models for the projected succession of situations. Radionuclide transport in the landscape is modeled dynamically with a series of interconnected radioecological models of those ecosystem types (sea, lake, running water, mire, agricultural land and forest) that occur at present, and are projected to occur in the future, in a candidate area for a geological repository in Sweden. The transformation between ecosystems is modeled as discrete events occurring every thousand years by substituting one model by another. Examples of predictions of the radionuclide distribution in the landscape are presented for several scenarios with discharge locations varying in time and space. The article also outlines an approach for estimating the exposure of man resulting from all possible reasonable uses of a potentially contaminated landscape, which was used for derivation of Landscape Dose Factors
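The substitution of one radioecological model for another at discrete times can be mimicked with a single-compartment toy model whose loss rate is swapped every millennium; all parameter values below are invented for illustration and are not the study's:

```python
def simulate(release_flux, k_by_ecosystem, epoch_years, dt, t_end):
    """One-compartment landscape model: constant radionuclide flux from the
    geosphere, first-order loss whose rate constant is swapped whenever the
    ecosystem type changes (every epoch_years); explicit Euler in time."""
    inventory, t, series = 0.0, 0.0, []
    while t < t_end:
        epoch = int(t // epoch_years) % len(k_by_ecosystem)
        k = k_by_ecosystem[epoch]
        inventory += (release_flux - k * inventory) * dt
        t += dt
        series.append(inventory)
    return series

# invented loss rates: lake (fast flushing) -> mire (slow) -> agricultural land
series = simulate(release_flux=1.0,
                  k_by_ecosystem=[0.05, 0.002, 0.01],
                  epoch_years=1000.0, dt=1.0, t_end=3000.0)
print(series[999], series[1999], series[-1])
```

The inventory relaxes toward a different quasi-steady state in each epoch, which is the qualitative behaviour the interconnected landscape models capture in far greater detail.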

  2. How to correct long-term system externality of large scale wind power development by a capacity mechanism?

    International Nuclear Information System (INIS)

    Cepeda, Mauricio; Finon, Dominique

    2013-04-01

    This paper deals with the practical problems related to long-term security of supply in electricity markets in the presence of large-scale wind power development. The success of renewable promotion schemes adds a new dimension to ensuring long-term security of supply. It necessitates designing second-best policies to prevent large-scale wind power development from distorting long-run equilibrium prices and investments in conventional generation, in particular in peaking units. We rely upon a long-term simulation model which simulates electricity market players' investment decisions in a market regime and incorporates large-scale wind power development, either subsidised or market-driven. We test the use of capacity mechanisms to compensate for the long-term effects of large-scale wind power development on system reliability. The first finding is that capacity mechanisms can help to reduce the social cost of large-scale wind power development in terms of a decrease of the loss-of-load probability. The second finding is that, in a market-based wind power deployment without subsidy, wind generators are penalized for insufficient contribution to the long-term system reliability. (authors)

  3. Cross-Scale Modelling of Subduction from Minute to Million of Years Time Scale

    Science.gov (United States)

    Sobolev, S. V.; Muldashev, I. A.

    2015-12-01

    Subduction is an essentially multi-scale process, with time scales spanning from the geological to the earthquake scale, with the seismic cycle in between. Modelling such a process constitutes one of the largest challenges in geodynamic modelling today. Here we present a cross-scale thermomechanical model capable of simulating the entire subduction process from rupture (1 min) to geological time (millions of years) that employs elasticity, mineral-physics-constrained non-linear transient viscous rheology, and rate-and-state friction plasticity. The model generates spontaneous earthquake sequences. The adaptive time-step algorithm recognizes the moment of instability and drops the integration time step to its minimum value of 40 s during the earthquake. The time step is then gradually increased to its maximal value of 5 yr, following the decreasing displacement rates during postseismic relaxation. Efficient implementation of numerical techniques allows long-term simulations with a total time of millions of years. This technique makes it possible to follow the deformation process in detail during an entire seismic cycle and over multiple seismic cycles. We observe various deformation patterns during the modelled seismic cycle that are consistent with surface GPS observations and demonstrate that, contrary to conventional ideas, postseismic deformation may be controlled by viscoelastic relaxation in the mantle wedge, starting within only a few hours after great (M>9) earthquakes. Interestingly, in our model the average slip velocity at the fault closely follows a hyperbolic decay law. In natural observations such deformation is interpreted as afterslip, while in our model it is caused by viscoelastic relaxation of the mantle wedge, whose viscosity varies strongly with time. We demonstrate that our results are consistent with the postseismic surface displacement after the Great Tohoku Earthquake for the day-to-year time range.
    We will also present results of the modeling of deformation of the
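The adaptive time-step logic described in this record (collapse to tens of seconds at instability, geometric regrowth toward years) is easy to sketch; the slip-rate threshold and growth factor below are invented for illustration, not the authors' values:

```python
MIN_DT = 40.0                      # seconds: co-seismic resolution
MAX_DT = 5.0 * 365.25 * 86400.0    # about 5 years: inter-seismic resolution

def next_dt(dt, slip_rate, unstable_rate=1e-3, growth=1.5):
    """Drop the step to MIN_DT when the fault slip rate signals instability,
    otherwise grow it geometrically, capped at MAX_DT."""
    if slip_rate > unstable_rate:
        return MIN_DT
    return min(dt * growth, MAX_DT)

dt = MAX_DT
history = []
for rate in [1e-9, 1e-9, 0.5, 0.2, 1e-4, 1e-6, 1e-9]:  # toy slip rates (m/s)
    dt = next_dt(dt, rate)
    history.append(dt)
print(history)
```

The step stays at its ceiling through the inter-seismic phase, snaps to 40 s when the rupture starts, and then relaxes back as the postseismic displacement rates decay.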

  4. Variable Renewable Energy in Long-Term Planning Models: A Multi-Model Perspective

    Energy Technology Data Exchange (ETDEWEB)

    Cole, Wesley [National Renewable Energy Lab. (NREL), Golden, CO (United States); Frew, Bethany [National Renewable Energy Lab. (NREL), Golden, CO (United States); Mai, Trieu [National Renewable Energy Lab. (NREL), Golden, CO (United States); Sun, Yinong [National Renewable Energy Lab. (NREL), Golden, CO (United States); Bistline, John [Electric Power Research Inst. (EPRI), Knoxville, TN (United States); Blanford, Geoffrey [Electric Power Research Inst. (EPRI), Knoxville, TN (United States); Young, David [Electric Power Research Inst. (EPRI), Knoxville, TN (United States); Marcy, Cara [U.S. Energy Information Administration, Washington, DC (United States); Namovicz, Chris [U.S. Energy Information Administration, Washington, DC (United States); Edelman, Risa [US Environmental Protection Agency (EPA), Washington, DC (United States); Meroney, Bill [US Environmental Protection Agency (EPA), Washington, DC (United States); Sims, Ryan [US Environmental Protection Agency (EPA), Washington, DC (United States); Stenhouse, Jeb [US Environmental Protection Agency (EPA), Washington, DC (United States); Donohoo-Vallett, Paul [Dept. of Energy (DOE), Washington DC (United States)

    2017-11-01

    Long-term capacity expansion models of the U.S. electricity sector have long been used to inform electric sector stakeholders and decision-makers. With the recent surge in variable renewable energy (VRE) generators — primarily wind and solar photovoltaics — the need to appropriately represent VRE generators in these long-term models has increased. VRE generators are especially difficult to represent for a variety of reasons, including their variability, uncertainty, and spatial diversity. This report summarizes the analyses and model experiments that were conducted as part of two workshops on modeling VRE for national-scale capacity expansion models. It discusses the various methods for treating VRE among four modeling teams from the Electric Power Research Institute (EPRI), the U.S. Energy Information Administration (EIA), the U.S. Environmental Protection Agency (EPA), and the National Renewable Energy Laboratory (NREL). The report reviews the findings from the two workshops and emphasizes the areas where there is still need for additional research and development on analysis tools to incorporate VRE into long-term planning and decision-making. This research is intended to inform the energy modeling community on the modeling of variable renewable resources, and is not intended to advocate for or against any particular energy technologies, resources, or policies.

  5. Multi-scale variability and long-range memory in indoor Radon concentrations from Coimbra, Portugal

    Science.gov (United States)

    Donner, Reik V.; Potirakis, Stelios; Barbosa, Susana

    2014-05-01

    The presence or absence of long-range correlations in the variations of indoor radon concentrations has recently attracted considerable interest. Since radon is a radioactive gas naturally emitted from the ground in certain geological settings, understanding the environmental factors controlling radon concentrations and their dynamics is important for estimating its effect on human health and the efficiency of possible measures for reducing the corresponding exposure. In this work, we re-analyze two high-resolution records of indoor radon concentrations from Coimbra, Portugal, each of which spans several months of continuous measurements. In order to evaluate the presence of long-range correlations and fractal scaling, we utilize a set of complementary methods, including power spectral analysis, ARFIMA modeling, classical and multi-fractal detrended fluctuation analysis, and two different estimators of the signals' fractal dimensions. Power spectra and fluctuation functions reveal complex behavior with qualitatively different properties on different time scales: white noise in the high-frequency part, indications of a long-range correlated process dominating time scales of several hours to days, and pronounced low-frequency variability associated with tidal and/or meteorological forcing. In order to further decompose these different scales of variability, we apply two different approaches. On the one hand, multi-resolution analysis based on the discrete wavelet transform allows separately studying contributions on different time scales and characterizing their specific correlation and scaling properties. On the other hand, singular system analysis (SSA) provides a reconstruction of the essential modes of variability.
Specifically, by considering only the first leading SSA modes, we achieve an efficient de-noising of our environmental signals, highlighting the low-frequency variations together with some distinct scaling on sub-daily time-scales resembling
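Of the methods listed, detrended fluctuation analysis is compact enough to sketch from scratch; the following DFA-1 implementation and white-noise check are our own illustration, not the authors' code:

```python
import math
import random

def dfa1(signal, window_sizes):
    """DFA-1: RMS fluctuation of the linearly detrended cumulative profile
    for each window size n; a log-log slope near 0.5 indicates white noise."""
    mean = sum(signal) / len(signal)
    profile, s = [], 0.0
    for x in signal:
        s += x - mean
        profile.append(s)
    flucts = []
    for n in window_sizes:
        sq, count = 0.0, 0
        for start in range(0, len(profile) - n + 1, n):
            seg = profile[start:start + n]
            # least-squares straight-line fit within the window
            tm, ym = (n - 1) / 2.0, sum(seg) / n
            num = sum((t - tm) * (y - ym) for t, y in enumerate(seg))
            den = sum((t - tm) ** 2 for t in range(n))
            beta = num / den
            alpha0 = ym - beta * tm
            sq += sum((y - (alpha0 + beta * t)) ** 2 for t, y in enumerate(seg))
            count += n
        flucts.append((sq / count) ** 0.5)
    return flucts

random.seed(1)
white = [random.gauss(0.0, 1.0) for _ in range(4096)]
sizes = [8, 16, 32, 64]
f = dfa1(white, sizes)
slope = (math.log(f[-1]) - math.log(f[0])) / (math.log(sizes[-1]) - math.log(sizes[0]))
print(round(slope, 2))
```

For an uncorrelated signal the fluctuation function grows roughly as n^0.5, matching the white-noise regime the record reports in the high-frequency part of the radon series.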

  6. Homogenization of Large-Scale Movement Models in Ecology

    Science.gov (United States)

    Garlick, M.J.; Powell, J.A.; Hooten, M.B.; McFarlane, L.R.

    2011-01-01

    A difficulty in using diffusion models to predict large-scale animal population dispersal is that individuals move differently based on local information (as opposed to gradients) in differing habitat types. This can be accommodated by using ecological diffusion. However, real environments are often spatially complex, limiting application of a direct approach. Homogenization for partial differential equations has long been applied to Fickian diffusion (in which average individual movement is organized along gradients of habitat and population density). We derive a homogenization procedure for ecological diffusion and apply it to a simple model for chronic wasting disease in mule deer. Homogenization allows us to determine the impact of small-scale (10-100 m) habitat variability on large-scale (10-100 km) movement. The procedure generates asymptotic equations for solutions on the large scale with parameters defined by small-scale variation. The simplicity of this homogenization procedure is striking when compared to the multi-dimensional homogenization procedure for Fickian diffusion, and the method will be equally straightforward for more complex models.
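A useful intuition for this kind of upscaling is that the effective large-scale coefficient is a harmonic-type average of the small-scale motility, so slow habitat patches dominate large-scale spread; compare the two averages (habitat values invented for illustration):

```python
def arithmetic_mean(vals):
    return sum(vals) / len(vals)

def harmonic_mean(vals):
    return len(vals) / sum(1.0 / v for v in vals)

# motility in two habitat types crossed equally often (units arbitrary, invented)
mu = [100.0, 1.0]
print(arithmetic_mean(mu), harmonic_mean(mu))
```

With motilities 100 and 1, the arithmetic mean is 50.5 but the harmonic mean is about 1.98: time spent resident in the slow habitat controls the effective rate, which is why small-scale (10-100 m) variability matters for 10-100 km movement.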

  7. Model abstraction addressing long-term simulations of chemical degradation of large-scale concrete structures

    International Nuclear Information System (INIS)

    Jacques, D.; Perko, J.; Seetharam, S.; Mallants, D.

    2012-01-01

    This paper presents a methodology to assess the spatial-temporal evolution of chemical degradation fronts in real-size concrete structures typical of a near-surface radioactive waste disposal facility. The methodology consists of the abstraction of a so-called full (complicated) model, accounting for the multicomponent, multi-scale nature of concrete, to an abstracted (simplified) model which simulates chemical concrete degradation based on a single component in the aqueous and solid phases. The abstracted model is verified against chemical degradation fronts simulated with the full model under both diffusive and advective transport conditions. Implementation in the multi-physics simulation tool COMSOL allows simulation of the spatial-temporal evolution of chemical degradation fronts in large-scale concrete structures. (authors)

  8. Replica scale modelling of long rod tank penetrators

    NARCIS (Netherlands)

    Diederen, A.M.; Hoeneveld, J.C.

    2001-01-01

    Experiments and simulations have been conducted using scale-size tungsten-alloy penetrators at ordnance velocity against an oblique plate array consisting of an inert sandwich and a base armour. The penetrators are made from two types of tungsten alloy with different tensile strengths. Two scale sizes

  9. Nonlinearities in Drug Release Process from Polymeric Microparticles: Long-Time-Scale Behaviour

    Directory of Open Access Journals (Sweden)

    Elena Simona Bacaita

    2012-01-01

    A theoretical model of the drug release process from polymeric microparticles (a particular type of polymer matrix) is built through a dispersive fractal approximation of motion. As a result, the drug release process takes place through cnoidal oscillation modes of a normalized concentration field. This indicates that, in long-time-scale evolutions, the drug particles assemble into a lattice of nonlinear oscillators, which manifests macroscopically as variations of drug concentration. The model is validated by experimental results.

  10. Large-scale multimedia modeling applications

    International Nuclear Information System (INIS)

    Droppo, J.G. Jr.; Buck, J.W.; Whelan, G.; Strenge, D.L.; Castleton, K.J.; Gelston, G.M.

    1995-08-01

    Over the past decade, the US Department of Energy (DOE) and other agencies have faced increasing scrutiny for a wide range of environmental issues related to past and current practices. A number of large-scale applications have been undertaken that required analysis of large numbers of potential environmental issues over a wide range of environmental conditions and contaminants. Several of these applications, referred to here as large-scale applications, have addressed long-term public health risks using a holistic approach for assessing impacts from potential waterborne and airborne transport pathways. Multimedia models such as the Multimedia Environmental Pollutant Assessment System (MEPAS) were designed for use in such applications. MEPAS integrates radioactive and hazardous contaminants impact computations for major exposure routes via air, surface water, ground water, and overland flow transport. A number of large-scale applications of MEPAS have been conducted to assess various endpoints for environmental and human health impacts. These applications are described in terms of lessons learned in the development of an effective approach for large-scale applications

  11. Synaptic scaling enables dynamically distinct short- and long-term memory formation.

    Directory of Open Access Journals (Sweden)

    Christian Tetzlaff

    2013-10-01

    Full Text Available Memory storage in the brain relies on mechanisms acting on time scales from minutes, for long-term synaptic potentiation, to days, for memory consolidation. During such processes, neural circuits distinguish synapses relevant for forming a long-term storage, which are consolidated, from synapses of short-term storage, which fade. How time scale integration and synaptic differentiation are simultaneously achieved remains unclear. Here we show that synaptic scaling - a slow process usually associated with the maintenance of activity homeostasis - combined with synaptic plasticity may simultaneously achieve both, thereby providing a natural separation of short- from long-term storage. The interaction between plasticity and scaling also provides an explanation for an established paradox where memory consolidation critically depends on the exact order of learning and recall. These results indicate that scaling may be fundamental for stabilizing memories, providing a dynamic link between early and late memory formation processes.
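
The mechanism sketched in this abstract, fast Hebbian plasticity combined with slow multiplicative synaptic scaling, can be illustrated with a toy two-synapse rate model. All equations, rates, and parameters below are invented for illustration; they are not the paper's equations.

```python
import numpy as np

# Toy two-synapse model (illustrative only; not the paper's exact equations).
# Both synapses converge on one neuron. Hebbian plasticity (fast) potentiates
# whichever synapse is active together with the neuron; synaptic scaling
# (slow, multiplicative) pulls the total activity back toward a homeostatic
# set point and thereby shrinks un-reinforced weights.
def simulate(T=3000.0, dt=0.1, stop_stim_2=100.0):
    w = np.array([0.5, 0.5])            # synaptic weights
    mu, gamma = 0.005, 0.0005           # fast plasticity vs. slow scaling rate
    v_target = 1.0                      # homeostatic activity set point
    for step in range(int(T / dt)):
        t = step * dt
        # Synapse 1 is driven throughout; synapse 2 only briefly.
        pre = np.array([1.0, 1.0 if t < stop_stim_2 else 0.0])
        v = float(w @ pre)                  # crude postsynaptic rate
        dw = mu * pre * v                   # Hebbian term (active inputs only)
        dw += gamma * (v_target - v) * w    # multiplicative scaling (all synapses)
        w = np.clip(w + dt * dw, 0.0, None)
    return w

w_final = simulate()
# The persistently driven synapse consolidates at a high weight, while the
# briefly driven one is scaled down: a separation of long- from short-term storage.
```

Running this, `w_final[0]` saturates well above its initial value while `w_final[1]` decays toward zero, reproducing qualitatively the short-/long-term separation the abstract describes.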

  12. Synaptic scaling enables dynamically distinct short- and long-term memory formation.

    Science.gov (United States)

    Tetzlaff, Christian; Kolodziejski, Christoph; Timme, Marc; Tsodyks, Misha; Wörgötter, Florentin

    2013-10-01

    Memory storage in the brain relies on mechanisms acting on time scales from minutes, for long-term synaptic potentiation, to days, for memory consolidation. During such processes, neural circuits distinguish synapses relevant for forming a long-term storage, which are consolidated, from synapses of short-term storage, which fade. How time scale integration and synaptic differentiation are simultaneously achieved remains unclear. Here we show that synaptic scaling - a slow process usually associated with the maintenance of activity homeostasis - combined with synaptic plasticity may simultaneously achieve both, thereby providing a natural separation of short- from long-term storage. The interaction between plasticity and scaling also provides an explanation for an established paradox where memory consolidation critically depends on the exact order of learning and recall. These results indicate that scaling may be fundamental for stabilizing memories, providing a dynamic link between early and late memory formation processes.

  13. Markov dynamic models for long-timescale protein motion.

    KAUST Repository

    Chiang, Tsung-Han

    2010-06-01

    Molecular dynamics (MD) simulation is a well-established method for studying protein motion at the atomic scale. However, it is computationally intensive and generates massive amounts of data. One way of addressing the dual challenges of computation efficiency and data analysis is to construct simplified models of long-timescale protein motion from MD simulation data. In this direction, we propose to use Markov models with hidden states, in which the Markovian states represent potentially overlapping probabilistic distributions over protein conformations. We also propose a principled criterion for evaluating the quality of a model by its ability to predict long-timescale protein motions. Our method was tested on 2D synthetic energy landscapes and two extensively studied peptides, alanine dipeptide and the villin headpiece subdomain (HP-35 NleNle). One interesting finding is that although a widely accepted model of alanine dipeptide contains six states, a simpler model with only three states is equally good for predicting long-timescale motions. We also used the constructed Markov models to estimate important kinetic and dynamic quantities for protein folding, in particular, mean first-passage time. The results are consistent with available experimental measurements.
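
One of the kinetic quantities this record mentions, mean first-passage time (MFPT), is straightforward to compute once a Markov state model's transition matrix is in hand. The 3-state matrix below is hypothetical (a real application would estimate it by counting transitions in MD trajectory data), but the linear-system solve is the standard calculation:

```python
import numpy as np

# Mean first-passage time to a target state from a Markov state model.
# P is a hypothetical row-stochastic 3-state transition matrix at lag time tau.
P = np.array([[0.90, 0.08, 0.02],
              [0.10, 0.80, 0.10],
              [0.02, 0.08, 0.90]])
tau = 1.0      # lag time per step (arbitrary units)
target = 2     # the "folded" state

# For i != target:  m_i = tau + sum_j P_ij * m_j,  with m_target = 0.
# Restrict to non-target states and solve (I - P_sub) m = tau * 1.
keep = [i for i in range(P.shape[0]) if i != target]
P_sub = P[np.ix_(keep, keep)]
m = np.linalg.solve(np.eye(len(keep)) - P_sub, tau * np.ones(len(keep)))
mfpt = dict(zip(keep, m))   # expected time to reach `target` from each state
```

With these numbers the state farthest from the target (state 0) has the longest expected passage time, as one would expect from the weak direct coupling P[0,2].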

  14. Markov dynamic models for long-timescale protein motion.

    KAUST Repository

    Chiang, Tsung-Han; Hsu, David; Latombe, Jean-Claude

    2010-01-01

    Molecular dynamics (MD) simulation is a well-established method for studying protein motion at the atomic scale. However, it is computationally intensive and generates massive amounts of data. One way of addressing the dual challenges of computation efficiency and data analysis is to construct simplified models of long-timescale protein motion from MD simulation data. In this direction, we propose to use Markov models with hidden states, in which the Markovian states represent potentially overlapping probabilistic distributions over protein conformations. We also propose a principled criterion for evaluating the quality of a model by its ability to predict long-timescale protein motions. Our method was tested on 2D synthetic energy landscapes and two extensively studied peptides, alanine dipeptide and the villin headpiece subdomain (HP-35 NleNle). One interesting finding is that although a widely accepted model of alanine dipeptide contains six states, a simpler model with only three states is equally good for predicting long-timescale motions. We also used the constructed Markov models to estimate important kinetic and dynamic quantities for protein folding, in particular, mean first-passage time. The results are consistent with available experimental measurements.

  15. Conformal invariance in the long-range Ising model

    Directory of Open Access Journals (Sweden)

    Miguel F. Paulos

    2016-01-01

    Full Text Available We consider the question of conformal invariance of the long-range Ising model at the critical point. The continuum description is given in terms of a nonlocal field theory, and the absence of a stress tensor invalidates all of the standard arguments for the enhancement of scale invariance to conformal invariance. We however show that several correlation functions, computed to second order in the epsilon expansion, are nontrivially consistent with conformal invariance. We proceed to give a proof of conformal invariance to all orders in the epsilon expansion, based on the description of the long-range Ising model as a defect theory in an auxiliary higher-dimensional space. A detailed review of conformal invariance in the d-dimensional short-range Ising model is also included and may be of independent interest.
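
For concreteness, the long-range Ising model discussed here has couplings decaying as a power of distance, J(r) ~ r^-(d+sigma). The following is a minimal Metropolis sketch in d = 1 with invented parameters; it only illustrates the model's definition, not the critical-point epsilon expansion the abstract analyzes.

```python
import numpy as np

# Minimal Metropolis Monte Carlo for the 1D long-range Ising model with
# couplings J(r) = r^{-(1+sigma)}. Chain length, sigma, beta and sweep count
# are arbitrary illustrative choices.
rng = np.random.default_rng(0)
N, sigma, beta = 64, 0.6, 0.5
r = np.abs(np.arange(N)[:, None] - np.arange(N)[None, :])
with np.errstate(divide="ignore"):
    J = np.where(r > 0, r.astype(float) ** -(1.0 + sigma), 0.0)

s = rng.choice([-1, 1], size=N)
for sweep in range(1000):
    for _ in range(N):
        i = rng.integers(N)
        dE = 2.0 * s[i] * (J[i] @ s)        # energy cost of flipping spin i
        if dE <= 0 or rng.random() < np.exp(-beta * dE):
            s[i] = -s[i]
m = abs(s.mean())                            # magnetization magnitude
```

Because every spin couples to every other with a slowly decaying weight, even this short chain orders readily at moderate beta, which is the qualitative signature of long-range interactions.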

  16. Conformal Invariance in the Long-Range Ising Model

    CERN Document Server

    Paulos, Miguel F; van Rees, Balt C; Zan, Bernardo

    2016-01-01

    We consider the question of conformal invariance of the long-range Ising model at the critical point. The continuum description is given in terms of a nonlocal field theory, and the absence of a stress tensor invalidates all of the standard arguments for the enhancement of scale invariance to conformal invariance. We however show that several correlation functions, computed to second order in the epsilon expansion, are nontrivially consistent with conformal invariance. We proceed to give a proof of conformal invariance to all orders in the epsilon expansion, based on the description of the long-range Ising model as a defect theory in an auxiliary higher-dimensional space. A detailed review of conformal invariance in the d-dimensional short-range Ising model is also included and may be of independent interest.

  17. Conformal invariance in the long-range Ising model

    Energy Technology Data Exchange (ETDEWEB)

    Paulos, Miguel F. [CERN, Theory Group, Geneva (Switzerland); Rychkov, Slava, E-mail: slava.rychkov@lpt.ens.fr [CERN, Theory Group, Geneva (Switzerland); Laboratoire de Physique Théorique de l' École Normale Supérieure (LPTENS), Paris (France); Faculté de Physique, Université Pierre et Marie Curie (UPMC), Paris (France); Rees, Balt C. van [CERN, Theory Group, Geneva (Switzerland); Zan, Bernardo [Institute of Physics, Universiteit van Amsterdam, Amsterdam (Netherlands)

    2016-01-15

    We consider the question of conformal invariance of the long-range Ising model at the critical point. The continuum description is given in terms of a nonlocal field theory, and the absence of a stress tensor invalidates all of the standard arguments for the enhancement of scale invariance to conformal invariance. We however show that several correlation functions, computed to second order in the epsilon expansion, are nontrivially consistent with conformal invariance. We proceed to give a proof of conformal invariance to all orders in the epsilon expansion, based on the description of the long-range Ising model as a defect theory in an auxiliary higher-dimensional space. A detailed review of conformal invariance in the d-dimensional short-range Ising model is also included and may be of independent interest.

  18. Perspective: Markov models for long-timescale biomolecular dynamics

    International Nuclear Information System (INIS)

    Schwantes, C. R.; McGibbon, R. T.; Pande, V. S.

    2014-01-01

    Molecular dynamics simulations have the potential to provide atomic-level detail and insight to important questions in chemical physics that cannot be observed in typical experiments. However, simply generating a long trajectory is insufficient, as researchers must be able to transform the data in a simulation trajectory into specific scientific insights. Although this analysis step has often been taken for granted, it deserves further attention as large-scale simulations become increasingly routine. In this perspective, we discuss the application of Markov models to the analysis of large-scale biomolecular simulations. We draw attention to recent improvements in the construction of these models as well as several important open issues. In addition, we highlight recent theoretical advances that pave the way for a new generation of models of molecular kinetics.

  19. Perspective: Markov models for long-timescale biomolecular dynamics

    Energy Technology Data Exchange (ETDEWEB)

    Schwantes, C. R.; McGibbon, R. T. [Department of Chemistry, Stanford University, Stanford, California 94305 (United States); Pande, V. S., E-mail: pande@stanford.edu [Department of Chemistry, Stanford University, Stanford, California 94305 (United States); Department of Computer Science, Stanford University, Stanford, California 94305 (United States); Department of Structural Biology, Stanford University, Stanford, California 94305 (United States); Biophysics Program, Stanford University, Stanford, California 94305 (United States)

    2014-09-07

    Molecular dynamics simulations have the potential to provide atomic-level detail and insight to important questions in chemical physics that cannot be observed in typical experiments. However, simply generating a long trajectory is insufficient, as researchers must be able to transform the data in a simulation trajectory into specific scientific insights. Although this analysis step has often been taken for granted, it deserves further attention as large-scale simulations become increasingly routine. In this perspective, we discuss the application of Markov models to the analysis of large-scale biomolecular simulations. We draw attention to recent improvements in the construction of these models as well as several important open issues. In addition, we highlight recent theoretical advances that pave the way for a new generation of models of molecular kinetics.

  20. Biointerface dynamics--Multi scale modeling considerations.

    Science.gov (United States)

    Pajic-Lijakovic, Ivana; Levic, Steva; Nedovic, Viktor; Bugarski, Branko

    2015-08-01

    The irreversible nature of matrix structural changes around immobilized cell aggregates caused by cell expansion is considered within Ca-alginate microbeads. It is related to various effects: (1) cell-bulk surface effects (cell-polymer mechanical interactions) and cell surface-polymer surface effects (cell-polymer electrostatic interactions) at the bio-interface, (2) polymer-bulk volume effects (polymer-polymer mechanical and electrostatic interactions) within the perturbed boundary layers around the cell aggregates, (3) cumulative surface and volume effects within parts of the microbead, and (4) macroscopic effects within the microbead as a whole, based on multi-scale modeling approaches. All modeling levels are discussed at two time scales, i.e. a long time scale (cell growth time) and a short time scale (cell rearrangement time). Matrix structural changes result in resistance stress generation, which feeds back on: (1) single and collective cell migration, (2) cell deformation and orientation, (3) the decrease of cell-to-cell separation distances, and (4) cell growth. Herein, an attempt is made to discuss and connect the various multi-scale modeling approaches, spanning a range of time and space scales, that have been proposed in the literature, in order to shed further light on this complex cause-consequence phenomenon, which induces the anomalous nature of energy dissipation during the structural changes of cell aggregates and matrix, quantified by the damping coefficients (the orders of the fractional derivatives). Deeper insight into the partial disintegration of the matrix within the boundary layers is useful for understanding and minimizing resistance stress generation within the interface and, on that basis, optimizing cell growth. Copyright © 2015 Elsevier B.V. All rights reserved.

  1. Application of simplified models to CO2 migration and immobilization in large-scale geological systems

    KAUST Repository

    Gasda, Sarah E.

    2012-07-01

    Long-term stabilization of injected carbon dioxide (CO2) is an essential component of risk management for geological carbon sequestration operations. However, migration and trapping phenomena are inherently complex, involving processes that act over multiple spatial and temporal scales. One example involves centimeter-scale density instabilities in the dissolved CO2 region leading to large-scale convective mixing that can be a significant driver for CO2 dissolution. Another example is the potentially important effect of capillary forces, in addition to buoyancy and viscous forces, on the evolution of mobile CO2. Local capillary effects lead to a capillary transition zone, or capillary fringe, where both fluids are present in the mobile state. This small-scale effect may have a significant impact on large-scale plume migration as well as long-term residual and dissolution trapping. Computational models that can capture both large and small-scale effects are essential to predict the role of these processes on the long-term storage security of CO2 sequestration operations. Conventional modeling tools are unable to resolve sufficiently all of these relevant processes when modeling CO2 migration in large-scale geological systems. Herein, we present a vertically-integrated approach to CO2 modeling that employs upscaled representations of these subgrid processes. We apply the model to the Johansen formation, a prospective site for sequestration of Norwegian CO2 emissions, and explore the sensitivity of CO2 migration and trapping to subscale physics. Model results show the relative importance of different physical processes in large-scale simulations. The ability of models such as this to capture the relevant physical processes at large spatial and temporal scales is important for prediction and analysis of CO2 storage sites. © 2012 Elsevier Ltd.

  2. Progress in Long Scale Length Laser-Plasma Interactions

    International Nuclear Information System (INIS)

    Glenzer, S H; Arnold, P; Bardsley, G; Berger, R L; Bonanno, G; Borger, T; Bower, D E; Bowers, M; Bryant, R; Buckman, S.; Burkhart, S C; Campbell, K; Chrisp, M P; Cohen, B I; Constantin, G; Cooper, F; Cox, J; Dewald, E; Divol, L; Dixit, S; Duncan, J; Eder, D; Edwards, J; Erbert, G; Felker, B; Fornes, J; Frieders, G; Froula, D H; Gardner, S D; Gates, C; Gonzalez, M; Grace, S; Gregori, G; Greenwood, A; Griffith, R; Hall, T; Hammel, B A; Haynam, C; Heestand, G; Henesian, M; Hermes, G; Hinkel, D; Holder, J; Holdner, F; Holtmeier, G; Hsing, W; Huber, S; James, T; Johnson, S; Jones, O S; Kalantar, D; Kamperschroer, J H; Kauffman, R; Kelleher, T; Knight, J; Kirkwood, R K; Kruer, W L; Labiak, W; Landen, O L; Langdon, A B; Langer, S; Latray, D; Lee, A; Lee, F D; Lund, D; MacGowan, B; Marshall, S; McBride, J; McCarville, T; McGrew, L; Mackinnon, A J; Mahavandi, S; Manes, K; Marshall, C; Mertens, E; Meezan, N; Miller, G; Montelongo, S; Moody, J D; Moses, E; Munro, D; Murray, J; Neumann, J; Newton, M; Ng, E; Niemann, C; Nikitin, A; Opsahl, P; Padilla, E; Parham, T; Parrish, G; Petty, C; Polk, M; Powell, C; Reinbachs, I; Rekow, V; Rinnert, R; Riordan, B; Rhodes, M.

    2003-01-01

    The first experiments on the National Ignition Facility (NIF) have employed the first four beams to measure propagation and laser backscattering losses in large ignition-size plasmas. Gas-filled targets between 2 mm and 7 mm in length have been heated from one side by overlapping the focal spots of the four beams from one quad operated at 351 nm (3ω) with a total intensity of 2 x 10^15 W cm^-2. The targets were filled with 1 atm of CO2, producing up to 7 mm long homogeneously heated plasmas with densities of n_e = 6 x 10^20 cm^-3 and temperatures of T_e = 2 keV. The high energy in a NIF quad of beams, 16 kJ, illuminating the target from one direction creates unique conditions for the study of laser-plasma interactions at scale lengths not previously accessible. The propagation through the large-scale plasma was measured with a gated x-ray imager filtered for 3.5 keV x rays. These data indicate that the beams interact with the full length of this ignition-scale plasma during the last ∼1 ns of the experiment. During that time, full-aperture measurements of stimulated Brillouin scattering and stimulated Raman scattering show scattering into the four focusing lenses of 6% for the smallest length (∼2 mm), increasing to 12% for ∼7 mm. These results demonstrate the NIF experimental capabilities and further provide a benchmark for three-dimensional modeling of laser-plasma interactions at ignition-size scale lengths.

  3. Self-Organized Criticality in a Simple Neuron Model Based on Scale-Free Networks

    International Nuclear Information System (INIS)

    Lin Min; Wang Gang; Chen Tianlun

    2006-01-01

    A simple model for a set of interacting idealized neurons in scale-free networks is introduced. The basic elements of the model are endowed with the main features of a neuron function. We find that our model displays power-law behavior of avalanche sizes and generates long-range temporal correlation. More importantly, we find different dynamical behavior for nodes with different connectivity in the scale-free networks.

  4. Variable Renewable Energy in Long-Term Planning Models: A Multi-Model Perspective

    Energy Technology Data Exchange (ETDEWEB)

    Cole, Wesley J. [National Renewable Energy Laboratory (NREL), Golden, CO (United States); Frew, Bethany A. [National Renewable Energy Laboratory (NREL), Golden, CO (United States); Mai, Trieu T. [National Renewable Energy Laboratory (NREL), Golden, CO (United States); Sun, Yinong [National Renewable Energy Laboratory (NREL), Golden, CO (United States); Bistline, John [Electric Power Research Inst., Palo Alto, CA (United States); Blanford, Geoffrey [Electric Power Research Inst., Palo Alto, CA (United States); Young, David [Electric Power Research Inst., Palo Alto, CA (United States); Marcy, Cara [Energy Information Administration, Washington, DC (United States); Namovicz, Chris [Energy Information Administration, Washington, DC (United States); Edelman, Risa [Environmental Protection Agency, Washington, DC (United States); Meroney, Bill [Environmental Protection Agency; Sims, Ryan [Environmental Protection Agency; Stenhouse, Jeb [Environmental Protection Agency; Donohoo-Vallett, Paul [U.S. Department of Energy

    2017-11-03

    Long-term capacity expansion models of the U.S. electricity sector have long been used to inform electric sector stakeholders and decision makers. With the recent surge in variable renewable energy (VRE) generators - primarily wind and solar photovoltaics - the need to appropriately represent VRE generators in these long-term models has increased. VRE generators are especially difficult to represent for a variety of reasons, including their variability, uncertainty, and spatial diversity. To assess current best practices, share methods and data, and identify future research needs for VRE representation in capacity expansion models, four capacity expansion modeling teams from the Electric Power Research Institute, the U.S. Energy Information Administration, the U.S. Environmental Protection Agency, and the National Renewable Energy Laboratory conducted two workshops on VRE modeling for national-scale capacity expansion models. The workshops covered a wide range of VRE topics, including transmission and VRE resource data, VRE capacity value, dispatch and operational modeling, distributed generation, and temporal and spatial resolution. The objectives of the workshops were both to better understand these topics and to improve the representation of VRE across the suite of models. Given these goals, each team incorporated model updates and performed additional analyses between the first and second workshops. This report summarizes the analyses and model 'experiments' that were conducted as part of these workshops as well as the various methods for treating VRE among the four modeling teams. The report also reviews the findings and learnings from the two workshops. We emphasize the areas where there is still need for additional research and development on analysis tools to incorporate VRE into long-term planning and decision-making.

  5. Long time scale hard X-ray variability in Seyfert 1 galaxies

    Science.gov (United States)

    Markowitz, Alex Gary

    This dissertation examines the relationship between long-term X-ray variability characteristics, black hole mass, and luminosity of Seyfert 1 Active Galactic Nuclei. High dynamic range power spectral density functions (PSDs) have been constructed for six Seyfert 1 galaxies. These PSDs show "breaks" or characteristic time scales, typically on the order of a few days. There is resemblance to PSDs of lower-mass Galactic X-ray binaries (XRBs), with the ratios of putative black hole masses and variability time scales approximately the same (10^6--10^7) between the two classes of objects. The data are consistent with a linear correlation between Seyfert PSD break time scale and black hole mass estimate; the relation extrapolates reasonably well over 6--7 orders of magnitude to XRBs. All of this strengthens the case for a physical similarity between Seyfert galaxies and XRBs. The first six years of RXTE monitoring of Seyfert 1s have been systematically analyzed to probe hard X-ray variability on multiple time scales in a total of 19 Seyfert 1s, expanding the survey of Markowitz & Edelson (2001). Correlations between variability amplitude, luminosity, and black hole mass are explored; the data support the model of PSD movement with black hole mass suggested by the PSD survey. All of the continuum variability results are consistent with relatively more massive black holes hosting larger X-ray emission regions, resulting in 'slower' observed variability. Nearly all sources in the sample exhibit stronger variability towards softer energies, consistent with softening as they brighten. Direct time-resolved spectral fitting has been performed on continuous RXTE monitoring of seven Seyfert 1s to study long-term spectral variability and Fe Kalpha variability characteristics. The Fe Kalpha line displays a wide range of behavior but varies less strongly than the broadband continuum. Overall, however, there is no strong evidence for correlated variability between the line and

  6. Application of physical scaling towards downscaling climate model precipitation data

    Science.gov (United States)

    Gaur, Abhishek; Simonovic, Slobodan P.

    2018-04-01

    The physical scaling (SP) method downscales climate model data to local or regional scales, taking into consideration physical characteristics of the area under analysis. In this study, multiple SP-based models are tested for their effectiveness in downscaling North American regional reanalysis (NARR) daily precipitation data. Model performance is compared with two state-of-the-art downscaling methods: the statistical downscaling model (SDSM) and generalized linear modeling (GLM). The downscaled precipitation is evaluated with reference to recorded precipitation at 57 gauging stations located within the study region. The spatial and temporal robustness of the downscaling methods is evaluated using seven precipitation-based indices. Results indicate that the SP-based models perform best in downscaling precipitation, followed by GLM and then the SDSM model. The best performing models are thereafter used to downscale future precipitation projected by three global circulation models (GCMs) under two emission scenarios, representative concentration pathway (RCP) 2.6 and RCP 8.5, over the twenty-first century. The downscaled future precipitation projections indicate an increase in mean and maximum precipitation intensity as well as a decrease in the total number of dry days. Further, an increase in the frequency of short (1-day), moderately long (2-4 day), and long (more than 5-day) precipitation events is projected.
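
As a concrete illustration of the kind of distributional correction that downscaling methods must perform, the sketch below implements empirical quantile mapping on synthetic data. This is not the SP, SDSM, or GLM method of this record; it is a common baseline technique for the same task, shown with invented gamma-distributed "gauge" and "model" series.

```python
import numpy as np

# Empirical quantile mapping: map model values through the model CDF to a
# probability, then through the observed CDF to a corrected value.
def quantile_map(model_hist, obs_hist, model_future, n_q=100):
    q = np.linspace(0.0, 1.0, n_q)
    mq = np.quantile(model_hist, q)     # model climatology quantiles
    oq = np.quantile(obs_hist, q)       # observed climatology quantiles
    probs = np.interp(model_future, mq, q)
    return np.interp(probs, q, oq)

rng = np.random.default_rng(1)
obs = rng.gamma(0.7, 8.0, size=5000)    # synthetic "gauge" record
model = rng.gamma(0.7, 5.0, size=5000)  # drier synthetic "model" record
corrected = quantile_map(model, obs, model)
# The corrected series now follows the observed distribution much more
# closely than the raw model output does.
```

Note that `np.interp` clamps at the endpoints, so corrected values never leave the observed range; more elaborate schemes extrapolate the tails instead.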

  7. Spatial modeling of agricultural land use change at global scale

    Science.gov (United States)

    Meiyappan, P.; Dalton, M.; O'Neill, B. C.; Jain, A. K.

    2014-11-01

    Long-term modeling of agricultural land use is central in global scale assessments of climate change, food security, biodiversity, and climate adaptation and mitigation policies. We present a global-scale dynamic land use allocation model and show that it can reproduce the broad spatial features of the past 100 years of evolution of cropland and pastureland patterns. The modeling approach integrates economic theory, observed land use history, and data on both socioeconomic and biophysical determinants of land use change, and estimates relationships using long-term historical data, thereby making it suitable for long-term projections. The underlying economic motivation is maximization of expected profits by hypothesized landowners within each grid cell. The model predicts fractional land use for cropland and pastureland within each grid cell based on socioeconomic and biophysical driving factors that change with time. The model explicitly incorporates the following key features: (1) land use competition, (2) spatial heterogeneity in the nature of driving factors across geographic regions, (3) spatial heterogeneity in the relative importance of driving factors and previous land use patterns in determining land use allocation, and (4) spatial and temporal autocorrelation in land use patterns. We show that land use allocation approaches based solely on previous land use history (but disregarding the impact of driving factors), or those accounting for both land use history and driving factors by mechanistically fitting models for the spatial processes of land use change, do not reproduce well long-term historical land use patterns. With an example application to the terrestrial carbon cycle, we show that such inaccuracies in land use allocation can translate into significant implications for global environmental assessments. The modeling approach and its evaluation provide an example that can be useful to the land use, Integrated Assessment, and the Earth system modeling
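
A toy version of the profit-driven fractional allocation described in this record can be written as a logit (softmax) rule: each grid cell splits its area among uses in proportion to the exponentiated expected profit of each use. The profit numbers, uses, and temperature parameter below are all invented for illustration; the actual model estimates its relationships from historical data.

```python
import numpy as np

# Allocate each grid cell's area among land uses by a softmax of
# expected profits (hypothetical values; illustration only).
def allocate(profits, temperature=1.0):
    z = np.asarray(profits, dtype=float) / temperature
    z = z - z.max(axis=-1, keepdims=True)   # numerical stability
    w = np.exp(z)
    return w / w.sum(axis=-1, keepdims=True)

# Three cells x three uses (cropland, pasture, other), arbitrary profits.
profits = np.array([[1.2, 0.4, 0.1],
                    [0.2, 0.9, 0.3],
                    [0.1, 0.1, 0.8]])
fractions = allocate(profits)   # each row sums to 1; highest profit dominates
```

Lowering `temperature` sharpens the allocation toward the single most profitable use; raising it spreads area more evenly, which is one simple way to encode allocation inertia.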

  8. Multi-Scale Modelling of the Gamma Radiolysis of Nitrate Solutions

    OpenAIRE

    Horne, Gregory; Donoclift, Thomas; Sims, Howard E.; M. Orr, Robin; Pimblott, Simon

    2016-01-01

    A multi-scale modelling approach has been developed for the extended-timescale, long-term radiolysis of aqueous systems. The approach uses a combination of stochastic track structure and track chemistry as well as deterministic homogeneous chemistry techniques, and involves four key stages: radiation track structure simulation, the subsequent physicochemical processes, nonhomogeneous diffusion-reaction kinetic evolution, and homogeneous bulk chemistry modelling. The first three components model...

  9. Multiple Scale Analysis of the Dynamic State Index (DSI)

    Science.gov (United States)

    Müller, A.; Névir, P.

    2016-12-01

    The Dynamic State Index (DSI) is a novel parameter that indicates local deviations of the atmospheric flow field from a stationary, inviscid and adiabatic solution of the primitive equations of fluid mechanics. This is in contrast to classical methods, which often diagnose deviations from temporal or spatial mean states. We show some applications of the DSI to atmospheric flow phenomena on different scales. The DSI is derived from Energy-Vorticity Theory (EVT), which is based on two globally conserved quantities, the total energy and Ertel's potential enstrophy. Locally, these global quantities lead to the Bernoulli function and the potential vorticity (PV), which together with the potential temperature build the DSI. If the Bernoulli function and the PV are balanced, the DSI vanishes and the basic state is obtained. Deviations from the basic state provide an indication of diabatic and non-stationary weather events. Therefore, the DSI offers a tool to diagnose and even prognose different atmospheric events on different scales. On the synoptic scale, the DSI can help to diagnose storms and hurricanes, where the dipole structure of the DSI also plays an important role. Within the scope of the collaborative research center "Scaling Cascades in Complex Systems" we show high correlations between the DSI and precipitation on the convective scale. Moreover, we compare the results with reduced models and different spatial resolutions.
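
For orientation, the DSI is commonly written in the EVT literature as the Jacobian determinant of the three fields named in the abstract (potential temperature θ, Ertel's potential vorticity Π, and the Bernoulli function B), normalized by density; the form below is a sketch from that literature, not a quotation from this record:

```latex
% DSI as the Jacobian of potential temperature, PV, and the Bernoulli function
\mathrm{DSI} \;=\; \frac{1}{\rho}\,
  \frac{\partial(\theta,\,\Pi,\,B)}{\partial(x,\,y,\,z)}
\;=\; \frac{1}{\rho}\,\nabla\theta \cdot \left(\nabla\Pi \times \nabla B\right)
```

In the stationary, inviscid, adiabatic basic state the three gradients are coplanar, so the determinant vanishes, which is why a nonzero DSI flags diabatic or non-stationary flow.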

  10. Growth and yield prognosis model and economic evaluation of several management regimes for Pinus taeda L. [Modelo para prognose do crescimento e da produção e análise econômica de regimes de manejo para Pinus taeda L.]

    Directory of Open Access Journals (Sweden)

    Fausto Weimar Acerbi Jr.

    2002-11-01

    Full Text Available The objectives of this study were to develop a growth and yield prognosis system for Pinus taeda L. in order to simulate and economically evaluate several management regimes aimed at producing knot-free wood (clearwood) and wood for multiple uses, and to analyze the profitability of the management regimes under various site conditions, spacings, discount rates, and timber prices, considering plantations established on company-owned land and on leased land. The model developed is based on the concept of basal-area compatibility between the whole-stand model and the diameter-class model. It uses the Weibull distribution, which together with the stand attributes allows prognosis for different strata and desired ages. A thinning simulator is then applied to obtain the desired residual stand; from this, a new prognosis is made up to the next desired age, and the thinning simulator is applied again. This procedure is repeated until the final harvest, using the software SPPpinus (Sistema de Prognose da Produção para Pinus sp.). In the economic analysis, two scenarios were tested, with various numbers, timings, and intensities of thinning, starting from different initial planting densities and considering several productivity levels. A sensitivity analysis of the profitability of the generated management regimes was carried out, considering three discount rates, two timber price levels, and the options of planting Pinus sp. on leased land or on company-owned land, made possible by integrating SPPpinus with the investment analysis software Invest. It was concluded that the growth and yield model developed showed no bias in its estimates, being therefore an accurate system; management regimes with one pre-commercial thinning followed by two commercial thinnings and pruning should be
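
The diameter-class step this record describes (a Weibull diameter distribution plus a thinning simulator) can be sketched as follows. The Weibull parameters, stem count, and thinning fraction are invented; SPPpinus recovers its parameters from projected stand attributes rather than fixing them by hand.

```python
import numpy as np

# Draw a stand's DBH (diameter at breast height) distribution from a
# two-parameter Weibull, then simulate a low thinning (removal from below).
rng = np.random.default_rng(7)
shape, scale = 2.2, 18.0      # Weibull parameters in cm (hypothetical)
n_trees = 1000                # stems per hectare (hypothetical)
dbh = scale * rng.weibull(shape, size=n_trees)

def basal_area(d_cm):
    """Stand basal area in m^2/ha from DBH values in cm."""
    return np.sum(np.pi * (d_cm / 200.0) ** 2)

def thin_from_below(d_cm, fraction):
    """Remove the smallest `fraction` of stems (a low thinning)."""
    cut = int(len(d_cm) * fraction)
    return np.sort(d_cm)[cut:]

remaining = thin_from_below(dbh, 0.30)
removed_ba = basal_area(dbh) - basal_area(remaining)
# A low thinning removes many stems but proportionally less basal area,
# since the removed stems are the smallest.
```

The same residual distribution would then feed the next prognosis step, mirroring the project-thin-project cycle the abstract describes.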

  11. Long-term Observations of Intense Precipitation Small-scale Spatial Variability in a Semi-arid Catchment

    Science.gov (United States)

    Cropp, E. L.; Hazenberg, P.; Castro, C. L.; Demaria, E. M.

    2017-12-01

    In the southwestern US, the summertime North American Monsoon (NAM) provides about 60% of the region's annual precipitation. Recent research using high-resolution atmospheric model simulations and retrospective predictions has shown that since the 1950s, and more markedly in the last few decades, mean daily precipitation in the southwestern U.S. during the NAM has followed a decreasing trend, while days with more extreme precipitation have intensified. The current work focuses on the impact of these long-term changes on the observed small-scale spatial variability of intense precipitation. Since few long-term high-resolution observational datasets exist to support such climatologically induced spatial changes in precipitation frequency and intensity, the current work utilizes observations from the USDA-ARS Walnut Gulch Experimental Watershed (WGEW) in southeastern Arizona. Within this 150 km^2 catchment, over 90 rain gauges have been operating since the 1950s, measuring at sub-hourly resolution. We have applied geospatial analyses and the kriging interpolation technique to identify long-term changes in the spatial and temporal correlation and anisotropy of intense precipitation. The observed results will be compared with the previous model-simulated results, and related to large-scale variations in climate patterns such as the El Niño Southern Oscillation (ENSO) and the Pacific Decadal Oscillation (PDO).

  12. Correlated continuous time random walks: combining scale-invariance with long-range memory for spatial and temporal dynamics

    International Nuclear Information System (INIS)

    Schulz, Johannes H P; Chechkin, Aleksei V; Metzler, Ralf

    2013-01-01

    Standard continuous time random walk (CTRW) models are renewal processes in the sense that at each jump a new, independent pair of jump length and waiting time is chosen. Globally, anomalous diffusion emerges through scale-free forms of the jump length and/or waiting time distributions by virtue of the generalized central limit theorem. Here we present a modified version of recently proposed correlated CTRW processes, where we incorporate a power-law correlated noise on the level of both jump length and waiting time dynamics. We obtain a very general stochastic model that encompasses key features of several paradigmatic models of anomalous diffusion: discontinuous, scale-free displacements as in Lévy flights, scale-free waiting times as in subdiffusive CTRWs, and the long-range temporal correlations of fractional Brownian motion (FBM). We derive the exact solutions for the single-time probability density functions and extract the scaling behaviours. Interestingly, we find that different combinations of the model parameters lead to indistinguishable shapes of the emerging probability density functions and identical scaling laws. Our model will be useful for describing recent experimental single-particle tracking data that feature a combination of CTRW and FBM properties. (paper)
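For contrast with the correlated model of the abstract, the standard renewal CTRW it builds on can be simulated in a few lines: each step draws an independent scale-free (Pareto) waiting time and a Gaussian jump. Parameters are illustrative; simulating the power-law-correlated version would additionally require generating long-range-correlated noise:

```python
import random

def ctrw(n_steps, alpha=0.7, seed=1):
    """Standard (uncorrelated) renewal CTRW: at each renewal draw an
    independent waiting time (Pareto tail exponent alpha < 1 gives
    subdiffusion) and an independent Gaussian jump length."""
    rng = random.Random(seed)
    t, x = 0.0, 0.0
    times, positions = [0.0], [0.0]
    for _ in range(n_steps):
        wait = rng.paretovariate(alpha)  # scale-free waiting time, > 1
        jump = rng.gauss(0.0, 1.0)       # finite-variance jump length
        t += wait
        x += jump
        times.append(t)
        positions.append(x)
    return times, positions

times, positions = ctrw(500)
```

Because the pairs are drawn independently at every renewal, this baseline has none of the long-range memory that the correlated model introduces.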

  13. ELBE - Validation and improvement of load prognoses - Phase 1; ELBE - Validierung und Verbesserung von Lastprognosen (Projektphase 1) - Zwischenbericht

    Energy Technology Data Exchange (ETDEWEB)

    Kronig, P.; Hoeckel, M.

    2008-07-01

    This comprehensive interim report for the Swiss Federal Office of Energy (SFOE) reviews work done at the Bernese University of Applied Sciences on various methods for making grid-load prognoses in a liberalised electricity market with an increasing supply of power from renewable resources. The liberalised Swiss electricity market is reviewed and production-planning instruments are examined, as are the factors that influence such planning. Examples of load profiles are presented and discussed, as are the social, technical and meteorological factors that influence demand. The accuracy of various prognosis methods is reviewed. Simple methods using Excel spreadsheets and MATLAB are compared and discussed. Commercially available systems are also briefly examined, and work to be done in a second phase of the project is outlined.
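As a concrete example of the "simple methods" class of load prognoses, a weekly-persistence baseline (forecast each hour with the load 168 h earlier) can be scored with MAPE. This is an illustrative baseline only, not one of the project's actual methods:

```python
def weekly_persistence_forecast(hourly_load, horizon=24):
    """Forecast the next `horizon` hours as the load one week (168 h) earlier."""
    week = 168
    if len(hourly_load) < week:
        raise ValueError("need at least one week of history")
    return hourly_load[-week:-week + horizon]

def mape(actual, forecast):
    """Mean absolute percentage error, in percent (actual values must be nonzero)."""
    return 100.0 * sum(abs(a - f) / a for a, f in zip(actual, forecast)) / len(actual)

# Two weeks of a perfectly weekly-periodic load (illustrative)
history = [100 + (h % 168) for h in range(336)]
forecast = weekly_persistence_forecast(history)
```

Any candidate prognosis method should at least beat such a persistence baseline before its extra complexity is justified.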

  14. Modelling of long term nitrogen retention in surface waters

    Science.gov (United States)

    Halbfaß, S.; Gebel, M.; Bürger, S.

    2010-12-01

    In order to derive measures to reduce nutrient loadings into waters in Saxony, we calculated nitrogen inputs with the model STOFFBILANZ on the regional scale. To do so, we have to compare our modelling results with measured loadings at the river-basin outlets, taking long-term nutrient retention in surface waters into account. The most important mechanism of nitrogen retention is denitrification in the contact zone of water and sediment, which is controlled by hydraulic and micro-biological processes. Retention capacity is derived on the basis of the nutrient-spiralling concept, using water residence time (hydraulic aspect) and time-specific N uptake by microorganisms (biological aspect). Short-term processes of mobilization and immobilization are neglected, because they are of minor importance for the derivation of measures on the regional scale.
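The residence-time/uptake combination can be cast in a first-order form: the exponential expression below is a common textbook formulation and an assumption here, not necessarily the exact STOFFBILANZ equation:

```python
import math

def nitrogen_retention(residence_time_d, uptake_rate_per_d):
    """Fraction of the N load removed in transit (mostly by denitrification at
    the water-sediment interface), assuming first-order uptake:
    R = 1 - exp(-U * tau), with residence time tau (d) and uptake rate U (1/d)."""
    return 1.0 - math.exp(-uptake_rate_per_d * residence_time_d)
```

The form makes the two controls explicit: longer residence times (the hydraulic aspect) and higher uptake rates (the biological aspect) both push retention toward 1.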

  15. Studies on correlation of positive surgical margin with clinicopathological factors and prognoses in breast conserving surgery

    International Nuclear Information System (INIS)

    Nishimura, Reiki; Nagao, Kazuharu; Miyayama, Haruhiko

    1999-01-01

    Out of 484 cases of breast-conserving surgery between April 1989 and March 1999, the surgical procedure was converted to total mastectomy in 34 cases because of positive surgical margins. In this study we evaluated the clinical significance of the surgical margin in relation to clinicopathological factors and prognoses. Ninety-nine cases (20.5%) had positive margins, judged positive when cancer cells were present within 5 mm of the margin. In multivariate analysis of factors associated with the surgical margin, EIC-comedo status, lymphatic invasion (ly), tumor site, proliferative activity, and age were significant independent factors. Regarding local recurrence, positive margin, age, ER status and proliferative activity were significant factors in multivariate analysis, especially in cases not receiving postoperative radiation therapy; radiation therapy may therefore benefit patients with a positive surgical margin. Patients with recurrence confined to the breast had significantly higher survival rates. It is therefore suggested that the surgical margin may not reflect survival, although it is a significant factor for local recurrence. (author)

  16. Studies on correlation of positive surgical margin with clinicopathological factors and prognoses in breast conserving surgery

    Energy Technology Data Exchange (ETDEWEB)

    Nishimura, Reiki; Nagao, Kazuharu; Miyayama, Haruhiko [Kumamoto City Hospital (Japan)

    1999-09-01

    Out of 484 cases of breast-conserving surgery between April 1989 and March 1999, the surgical procedure was converted to total mastectomy in 34 cases because of positive surgical margins. In this study we evaluated the clinical significance of the surgical margin in relation to clinicopathological factors and prognoses. Ninety-nine cases (20.5%) had positive margins, judged positive when cancer cells were present within 5 mm of the margin. In multivariate analysis of factors associated with the surgical margin, EIC-comedo status, lymphatic invasion (ly), tumor site, proliferative activity, and age were significant independent factors. Regarding local recurrence, positive margin, age, ER status and proliferative activity were significant factors in multivariate analysis, especially in cases not receiving postoperative radiation therapy; radiation therapy may therefore benefit patients with a positive surgical margin. Patients with recurrence confined to the breast had significantly higher survival rates. It is therefore suggested that the surgical margin may not reflect survival, although it is a significant factor for local recurrence. (author)

  17. Upscaling of Long-Term U(VI) Desorption from Pore Scale Kinetics to Field-Scale Reactive Transport Models

    Energy Technology Data Exchange (ETDEWEB)

    Andy Miller

    2009-01-25

    Environmental systems exhibit a range of complexities which exist at a range of length and mass scales. Within the realm of radionuclide fate and transport, much work has been focused on understanding pore scale processes where complexity can be reduced to a simplified system. In describing larger scale behavior, the results from these simplified systems must be combined to create a theory of the whole. This process can be quite complex, and lead to models which lack transparency. The underlying assumption of this approach is that complex systems will exhibit complex behavior, requiring a complex system of equations to describe behavior. This assumption has never been tested. The goal of the experiments presented is to ask the question: Do increasingly complex systems show increasingly complex behavior? Three experimental tanks at the intermediate scale (Tank 1: 2.4m x 1.2m x 7.6cm, Tank 2: 2.4m x 0.61m x 7.6cm, Tank 3: 2.4m x 0.61m x 0.61m (LxHxW)) have been completed. These tanks were packed with various physical orientations of different particle sizes of a uranium contaminated sediment from a former uranium mill near Naturita, Colorado. Steady state water flow was induced across the tanks using constant head boundaries. Pore water was removed from within the flow domain through sampling ports/wells; effluent samples were also taken. Each sample was analyzed for a variety of analytes relating to the solubility and transport of uranium. Flow fields were characterized using inert tracers and direct measurements of pressure head. The results show that although there is a wide range of chemical variability within the flow domain of the tank, the effluent uranium behavior is simple enough to be described using a variety of conceptual models. Thus, although there is a wide range in variability caused by pore scale behaviors, these behaviors appear to be smoothed out as uranium is transported through the tank. This smoothing of uranium transport behavior transcends

  18. Modelling long-distance seed dispersal in heterogeneous landscapes.

    Energy Technology Data Exchange (ETDEWEB)

    Levey, Douglas, J.; Tewlsbury, Joshua, J.; Bolker, Benjamin, M.

    2008-01-01

    results suggest that long-distance dispersal events can be predicted using spatially explicit modelling to scale-up local movements, placing them in a landscape context. Similar techniques are commonly used by landscape ecologists to model other types of movement; they offer much promise to the study of seed dispersal.

  19. A high-resolution global-scale groundwater model

    Science.gov (United States)

    de Graaf, I. E. M.; Sutanudjaja, E. H.; van Beek, L. P. H.; Bierkens, M. F. P.

    2015-02-01

    Groundwater is the world's largest accessible source of fresh water. It plays a vital role in satisfying basic needs for drinking water, agriculture and industrial activities. During times of drought groundwater sustains baseflow to rivers and wetlands, thereby supporting ecosystems. Most global-scale hydrological models (GHMs) do not include a groundwater flow component, mainly due to a lack of geohydrological data at the global scale. For the simulation of lateral flow and groundwater head dynamics, a realistic physical representation of the groundwater system is needed, especially for GHMs that run at finer resolutions. In this study we present a global-scale groundwater model (run at 6' resolution) using MODFLOW to construct an equilibrium water table in its natural state as the result of long-term climatic forcing. The aquifer schematization and properties used are based on available global data sets of lithology and transmissivities, combined with the estimated thickness of an upper, unconfined aquifer. The model is forced with outputs from the land-surface PCRaster Global Water Balance (PCR-GLOBWB) model, specifically net recharge and surface water levels. A sensitivity analysis, in which the model was run with various parameter settings, showed that variation in saturated conductivity has the largest impact on the simulated groundwater levels. Validation against observed groundwater heads showed that heads are reasonably well simulated for many regions of the world, especially for sediment basins (R2 = 0.95). The simulated regional-scale groundwater patterns and flow paths demonstrate the relevance of lateral groundwater flow in GHMs. Inter-basin groundwater flows can be a significant part of a basin's water budget and help to sustain river baseflows, especially during droughts. Water availability of larger aquifer systems can also be positively affected by additional recharge from inter-basin groundwater flows.
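The idea of an equilibrium water table under long-term recharge can be illustrated on a 1-D cross-section with the classical Dupuit solution for an unconfined aquifer between two fixed heads (a textbook result, not the MODFLOW schematization used in the study):

```python
import math

def dupuit_head(x, L, h1, h2, recharge, K):
    """Steady unconfined head between boundary heads h1 and h2 a distance L
    apart, with uniform recharge N and hydraulic conductivity K (Dupuit
    assumptions): h(x)^2 = h1^2 + (h2^2 - h1^2) * x/L + (N/K) * x * (L - x)."""
    h_sq = h1 ** 2 + (h2 ** 2 - h1 ** 2) * x / L + (recharge / K) * x * (L - x)
    return math.sqrt(h_sq)
```

With positive recharge the water table bulges above the line joining the boundary heads, which is the 1-D analogue of the recharge-driven equilibrium table the global model constructs.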

  20. Energy forecasts, perspectives and methods

    Energy Technology Data Exchange (ETDEWEB)

    Svensson, J E; Mogren, A

    1984-01-01

    The authors have analyzed different methods for long-term energy prognoses, in particular energy-consumption forecasts. Energy supply and price prognoses are also treated, but in less detail. After defining and discussing the various methods/models used in forecasts, a generalized discussion is given of how prognoses are influenced by the perspectives (background factors, world view, norms, ideology) of the prognosis makers. Some basic formal demands that should be made of any rational forecast are formulated and discussed. The authors conclude that different forecasting methodologies complement each other. There is no single best method; forecasts should be accepted as views of the future from differing perspectives. The primary prognostic problem is to show the possible futures; selecting the wanted future is a matter of political process.

  1. A multi scale model for small scale plasticity

    International Nuclear Information System (INIS)

    Zbib, Hussein M.

    2002-01-01

    Full text. A framework for investigating size-dependent small-scale plasticity phenomena and related material instabilities at various length scales ranging from the nano-microscale to the mesoscale is presented. The model is based on fundamental physical laws that govern dislocation motion and their interaction with various defects and interfaces. Particularly, a multi-scale model is developed merging two scales: the nano-microscale, where plasticity is determined by explicit three-dimensional dislocation dynamics analysis providing the material length-scale, and the continuum scale, where energy transport is based on basic continuum mechanics laws. The result is a hybrid simulation model coupling discrete dislocation dynamics with finite element analyses. With this hybrid approach, one can address complex size-dependent problems, including dislocation boundaries, dislocations in heterogeneous structures, dislocation interaction with interfaces and associated shape changes and lattice rotations, as well as deformation in nano-structured materials, localized deformation and shear band

  2. Multi-scale modelling for HEDP experiments on Orion

    Science.gov (United States)

    Sircombe, N. J.; Ramsay, M. G.; Hughes, S. J.; Hoarty, D. J.

    2016-05-01

    The Orion laser at AWE couples high energy long-pulse lasers with high intensity short-pulses, allowing material to be compressed beyond solid density and heated isochorically. This experimental capability has been demonstrated as a platform for conducting High Energy Density Physics material properties experiments. A clear understanding of the physics in experiments at this scale, combined with a robust, flexible and predictive modelling capability, is an important step towards more complex experimental platforms and ICF schemes which rely on high power lasers to achieve ignition. These experiments present a significant modelling challenge, the system is characterised by hydrodynamic effects over nanoseconds, driven by long-pulse lasers or the pre-pulse of the petawatt beams, and fast electron generation, transport, and heating effects over picoseconds, driven by short-pulse high intensity lasers. We describe the approach taken at AWE; to integrate a number of codes which capture the detailed physics for each spatial and temporal scale. Simulations of the heating of buried aluminium microdot targets are discussed and we consider the role such tools can play in understanding the impact of changes to the laser parameters, such as frequency and pre-pulse, as well as understanding effects which are difficult to observe experimentally.

  3. 1/3-scale model testing program

    International Nuclear Information System (INIS)

    Yoshimura, H.R.; Attaway, S.W.; Bronowski, D.R.; Uncapher, W.L.; Huerta, M.; Abbott, D.G.

    1989-01-01

    This paper describes the drop testing of a one-third-scale model transport cask system. Two casks were supplied by Transnuclear, Inc. (TN) to demonstrate dual-purpose shipping/storage casks. These casks will be used to ship spent fuel from DOE's West Valley demonstration project in New York to the Idaho National Engineering Laboratory (INEL) for a long-term spent-fuel dry-storage demonstration. As part of the certification process, one-third-scale model tests were performed to obtain experimental data. Two 9-m (30-ft) drop tests were conducted on a mass model of the cask body and scaled balsa- and redwood-filled impact limiters. In the first test, the cask system was tested in an end-on configuration. In the second test, the system was tested in a slap-down configuration in which the axis of the cask was oriented at a 10-degree angle to the horizontal. Slap-down occurs for shallow-angle drops, where the primary impact at one end of the cask is followed by a secondary impact at the other end. The objectives of the testing program were to (1) obtain deceleration and displacement information for the cask and impact-limiter system, (2) obtain dynamic force-displacement data for the impact limiters, (3) verify the integrity of the impact-limiter retention system, and (4) examine the crush behavior of the limiters. This paper describes both test results in terms of measured deceleration, post-test deformation measurements, and the general structural response of the system

  4. Small scale models equal large scale savings

    International Nuclear Information System (INIS)

    Lee, R.; Segroves, R.

    1994-01-01

    A physical scale model of a reactor is a tool which can be used to reduce the time spent by workers in the containment during an outage and thus to reduce the radiation dose and save money. The model can be used for worker orientation, and for planning maintenance, modifications, manpower deployment and outage activities. Examples of the use of models are presented. These were for the La Salle 2 and Dresden 1 and 2 BWRs. In each case cost-effectiveness and exposure reduction due to the use of a scale model is demonstrated. (UK)

  5. Improving Shade Modelling in a Regional River Temperature Model Using Fine-Scale LIDAR Data

    Science.gov (United States)

    Hannah, D. M.; Loicq, P.; Moatar, F.; Beaufort, A.; Melin, E.; Jullian, Y.

    2015-12-01

    Air temperature is often used as a proxy for stream temperature when modelling the distribution areas of aquatic species, because water temperature is not available at the regional scale. To simulate water temperature at a regional scale (10^5 km²), a physically based model using the equilibrium temperature concept and including upstream-downstream propagation of the thermal signal was developed and applied to the entire Loire basin (Beaufort et al., submitted). This model, called T-NET (Temperature-NETwork), is based on a hydrographic network topology. Computations are made hourly on 52,000 reaches averaging 1.7 km in length in the Loire drainage basin. The model gives a median root mean square error of 1.8°C at the hourly time step against 128 water-temperature stations (2008-2012). In that version of the model, tree shading is modelled by a constant factor proportional to the vegetation cover within 10 m on each side of the river reaches. According to a sensitivity analysis, improving the shade representation would enhance T-NET accuracy, especially for the daily maximum temperatures, which are currently not well reproduced. This study evaluates the most efficient way (accuracy versus computing time) to improve the shade model using the 1-m resolution LIDAR data available on a tributary of the Loire River (317 km long, with a drainage area of 8280 km²). Two methods are tested and compared: the first is a spatially explicit computation of the cast shadow for every LIDAR pixel; the second is based on averaged vegetation-cover characteristics of buffers and reaches of variable size. Validation of the water-temperature model is made against 4 temperature sensors well spread along the stream, as well as two airborne thermal-infrared imageries acquired in summer 2014 and winter 2015 over an 80-km reach. The poster will present the optimal lengthwise and crosswise scales for characterizing the vegetation from LIDAR data.
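The spatially explicit cast-shadow method reduces, per pixel, to simple solar geometry. A minimal sketch (the sun is assumed perpendicular to the bank here, a simplification of the full azimuthal projection a per-pixel implementation would use):

```python
import math

def shadow_length(tree_height_m, solar_elevation_deg):
    """Horizontal length of the shadow cast by an object of the given height."""
    return tree_height_m / math.tan(math.radians(solar_elevation_deg))

def shades_river(tree_height_m, dist_to_bank_m, solar_elevation_deg):
    """True if the cast shadow reaches the river (sun assumed to point
    straight from the tree toward the bank; azimuth is ignored)."""
    return shadow_length(tree_height_m, solar_elevation_deg) >= dist_to_bank_m
```

Repeating this test over every canopy pixel of a LIDAR height model, for each sun position through the day, is what makes the explicit method accurate but computationally heavy compared with buffer-averaged vegetation cover.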

  6. Development and evaluation of a watershed-scale hybrid hydrologic model

    OpenAIRE

    Cho, Younghyun

    2016-01-01

    A watershed-scale hybrid hydrologic model (Distributed-Clark), which is a lumped conceptual and distributed feature model, was developed to predict spatially distributed short- and long-term rainfall runoff generation and routing using relatively simple methodologies and state-of-the-art spatial data in a GIS environment. In Distributed-Clark, spatially distributed excess rainfall estimated with the SCS curve number method and a GIS-based set of separated unit hydrographs (spatially distribut...

  7. Evolution of sorption properties in large-scale concrete structures accounting for long-term physical-chemical concrete degradation - 59297

    International Nuclear Information System (INIS)

    Perko, Janez; Jacques, Diederik; Mallants, Dirk; Seetharam, Suresh

    2012-01-01

    Long-term safety of radioactive waste disposal facilities relies on the longevity of natural or engineered barriers designed to minimize the migration of contaminants from the facility into the environment. Especially near surface disposal facilities, such as planned by ONDRAF/NIRAS for the Dessel site in Belgium, long-term safety relies almost exclusively on the containment ability of the engineered barriers (EB) with concrete being the most important EB material used. Concrete is preferred over other materials mainly due to its favourable chemical properties resulting in a high chemical retention capacity, and owing to its good hydraulic isolation properties. However, due to the long time frames typically involved in safety assessment, the chemical, physical and mechanical properties of concrete evolve in time. The alterations in concrete mineralogy also cause changes in pH and sorption behaviour for many radionuclides during chemical degradation processes. Application of dynamic sorption of concrete requires an adequate knowledge of long-term concrete degradation processes, knowledge of the effect of changing mineralogy to sorption of radionuclides and knowledge of large-scale system behaviour over time. Moreover, when applied to safety assessment models, special attention is required to assure robustness and transparency of the implementation. The discussion in this paper focuses on the sorption properties of concrete; selection of data, rescaling issues and on the hypotheses used to build a robust and yet transparent dynamic model for large-scale concrete structures for assessing the long-term performance. In this paper we summarize the steps required for the appropriate use of sorption values for large-scale cementitious components accounting for long-term concrete degradation in safety assessment studies. Four steps were recognized through the safety assessment in the framework of the license application for the near-surface disposal facility in Dessel

  8. Multiresolution comparison of precipitation datasets for large-scale models

    Science.gov (United States)

    Chun, K. P.; Sapriza Azuri, G.; Davison, B.; DeBeer, C. M.; Wheater, H. S.

    2014-12-01

    Gridded precipitation datasets are crucial for driving the large-scale models used in weather forecasting and climate research. However, the quality of precipitation products is usually validated individually. Comparing gridded precipitation products with one another and with ground observations provides another avenue for investigating how precipitation uncertainty affects the performance of large-scale models. In this study, using data from a set of precipitation gauges over British Columbia and Alberta, we evaluate several widely used North American gridded products, including the Canadian Gridded Precipitation Anomalies (CANGRD), the National Center for Environmental Prediction (NCEP) reanalysis, the Water and Global Change (WATCH) project, the thin-plate-spline smoothing algorithms (ANUSPLIN) and the Canadian Precipitation Analysis (CaPA). Based on verification criteria at various temporal and spatial scales, the results provide an assessment of possible applications for the various precipitation datasets. For long-term climate-variation studies (~100 years), CANGRD, NCEP, WATCH and ANUSPLIN have different comparative advantages in terms of resolution and accuracy. For synoptic and mesoscale precipitation patterns, CaPA offers appealing spatial coherence. In addition to the product comparison, various downscaling methods are also surveyed to explore new verification and bias-reduction methods for improving gridded precipitation outputs for large-scale models.
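Typical verification criteria against gauges are bias and RMSE on space-time-matched pairs; a minimal sketch (the study's exact criteria are not spelled out in the abstract):

```python
import math

def verification_scores(gauge, gridded):
    """Bias and RMSE of a gridded precipitation product against gauge values
    paired in space and time. Positive bias means the product is too wet."""
    n = len(gauge)
    bias = sum(p - g for g, p in zip(gauge, gridded)) / n
    rmse = math.sqrt(sum((p - g) ** 2 for g, p in zip(gauge, gridded)) / n)
    return bias, rmse
```

Evaluating these scores after aggregating the pairs to daily, monthly and annual totals (and to coarser grids) gives the multi-scale comparison the study describes.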

  9. Hydrodynamic simulations of long-scale-length two-plasmon–decay experiments at the Omega Laser Facility

    International Nuclear Information System (INIS)

    Hu, S. X.; Michel, D. T.; Edgell, D. H.; Froula, D. H.; Follett, R. K.; Goncharov, V. N.; Myatt, J. F.; Skupsky, S.; Yaakobi, B.

    2013-01-01

    Direct-drive–ignition designs with plastic CH ablators create plasmas of long density scale lengths (L_n ≥ 500 μm) at the quarter-critical density (N_qc) region of the driving laser. The two-plasmon–decay (TPD) instability can exceed its threshold in such long-scale-length plasmas (LSPs). To investigate the scaling of TPD-induced hot electrons with laser intensity and plasma conditions, a series of planar experiments has been conducted at the Omega Laser Facility with 2-ns square pulses at the maximum laser energies available on OMEGA and OMEGA EP. Radiation–hydrodynamic simulations have been performed for these LSP experiments using the two-dimensional hydrocode draco. The simulated hydrodynamic evolution of such long-scale-length plasmas has been validated with time-resolved full-aperture backscattering and Thomson-scattering measurements. draco simulations for the CH ablator indicate that (1) ignition-relevant long-scale-length plasmas with L_n approaching ~400 μm have been created; (2) the density scale length at N_qc scales as L_n(μm) ≃ R_DPP × I^{1/4}/2; and (3) the electron temperature T_e at N_qc scales as T_e(keV) ≃ 0.95 × √I, with the incident intensity I measured in 10^14 W/cm², for plasmas created in both the OMEGA and OMEGA EP configurations with different-sized (R_DPP) distributed phase plates. These intensity scalings are in good agreement with the self-similar model predictions. The measured fraction of laser energy converted into hot electrons, f_hot, is found to behave similarly in both configurations: a rapid growth [f_hot ≃ f_c × (G_c/4)^6] for G_c < 4 followed by a slower growth [f_hot ≃ f_c × (G_c/4)^1.2] for G_c ≥ 4, where the common-wave gain is defined as G_c = 3 × 10^-2 × I_qc L_n λ_0 / T_e, with the laser intensity contributing to the common-wave gain I_qc, L_n, T_e at N_qc, and the laser wavelength λ_0 measured, respectively, in [10^14 W/cm²], [μm], [keV], and [μm]. The saturation level f_c is observed to be f_c ≃ 10^-2 at around
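The quoted scalings can be combined into a small calculator; the fits below are taken directly from the abstract, with R_DPP assumed to be in μm:

```python
import math

def density_scale_length_um(R_DPP_um, I_14):
    """L_n(um) ~ R_DPP * I^(1/4) / 2, with I in units of 1e14 W/cm^2."""
    return R_DPP_um * I_14 ** 0.25 / 2.0

def electron_temp_keV(I_14):
    """T_e(keV) ~ 0.95 * sqrt(I), with I in units of 1e14 W/cm^2."""
    return 0.95 * math.sqrt(I_14)

def common_wave_gain(I_qc_14, L_n_um, lambda0_um, T_e_keV):
    """G_c = 3e-2 * I_qc * L_n * lambda0 / T_e, units as quoted in the abstract
    ([1e14 W/cm^2], [um], [keV], [um])."""
    return 3e-2 * I_qc_14 * L_n_um * lambda0_um / T_e_keV
```

For example, I = 1 (i.e. 10^14 W/cm²) with an 800-μm phase plate gives L_n = 400 μm and T_e = 0.95 keV; at λ_0 = 0.351 μm this puts G_c just above the value of 4 that separates the rapid- and slow-growth regimes for f_hot.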

  10. Pilot-Scale Field Validation Of The Long Electrode Electrical Resistivity Tomography Method

    International Nuclear Information System (INIS)

    Glaser, D.R.; Rucker, D.F.; Crook, N.; Loke, M.H.

    2011-01-01

    Field validation for the long electrode electrical resistivity tomography (LE-ERT) method was attempted in order to demonstrate the performance of the technique in imaging a simple buried target. The experiment was an approximately 1/17 scale mock-up of a region encompassing a buried nuclear waste tank on the Hanford site. The target of focus was constructed by manually forming a simulated plume within the vadose zone using a tank waste simulant. The LE-ERT results were compared to ERT using conventional point electrodes on the surface and buried within the survey domain. Using a pole-pole array, both point and long electrode imaging techniques identified the lateral extents of the pre-formed plume with reasonable fidelity, but the LE-ERT was handicapped in reconstructing the vertical boundaries. The pole-dipole and dipole-dipole arrays were also tested with the LE-ERT method and were shown to have the least favorable target properties, including the position of the reconstructed plume relative to the known plume and the intensity of false positive targets. The poor performance of the pole-dipole and dipole-dipole arrays was attributed to an inexhaustive and non-optimal coverage of data at key electrodes, as well as an increased noise for electrode combinations with high geometric factors. However, when comparing the model resolution matrix among the different acquisition strategies, the pole-dipole and dipole-dipole arrays using long electrodes were shown to have significantly higher average and maximum values than any pole-pole array. The model resolution describes how well the inversion model resolves the subsurface. Given the model resolution performance of the pole-dipole and dipole-dipole arrays, it may be worth investing in tools to understand the optimum subset of randomly distributed electrode pairs to produce maximum performance from the inversion model.

  11. PILOT-SCALE FIELD VALIDATION OF THE LONG ELECTRODE ELECTRICAL RESISTIVITY TOMOGRAPHY METHOD

    Energy Technology Data Exchange (ETDEWEB)

    GLASER DR; RUCKER DF; CROOK N; LOKE MH

    2011-07-14

    Field validation for the long electrode electrical resistivity tomography (LE-ERT) method was attempted in order to demonstrate the performance of the technique in imaging a simple buried target. The experiment was an approximately 1/17 scale mock-up of a region encompassing a buried nuclear waste tank on the Hanford site. The target of focus was constructed by manually forming a simulated plume within the vadose zone using a tank waste simulant. The LE-ERT results were compared to ERT using conventional point electrodes on the surface and buried within the survey domain. Using a pole-pole array, both point and long electrode imaging techniques identified the lateral extents of the pre-formed plume with reasonable fidelity, but the LE-ERT was handicapped in reconstructing the vertical boundaries. The pole-dipole and dipole-dipole arrays were also tested with the LE-ERT method and were shown to have the least favorable target properties, including the position of the reconstructed plume relative to the known plume and the intensity of false positive targets. The poor performance of the pole-dipole and dipole-dipole arrays was attributed to an inexhaustive and non-optimal coverage of data at key electrodes, as well as an increased noise for electrode combinations with high geometric factors. However, when comparing the model resolution matrix among the different acquisition strategies, the pole-dipole and dipole-dipole arrays using long electrodes were shown to have significantly higher average and maximum values than any pole-pole array. The model resolution describes how well the inversion model resolves the subsurface. Given the model resolution performance of the pole-dipole and dipole-dipole arrays, it may be worth investing in tools to understand the optimum subset of randomly distributed electrode pairs to produce maximum performance from the inversion model.
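The "high geometric factors" blamed above for noisy dipole-dipole data are a standard array property: for point electrodes on the surface of a homogeneous half-space, K = π n(n+1)(n+2) a for a dipole-dipole array with dipole length a and separation factor n (a textbook formula; the factors for long electrodes differ):

```python
import math

def dipole_dipole_k(a_m, n):
    """Geometric factor of a surface dipole-dipole array (point electrodes,
    homogeneous half-space): K = pi * n * (n + 1) * (n + 2) * a."""
    return math.pi * n * (n + 1) * (n + 2) * a_m

def apparent_resistivity(delta_v, current, a_m, n):
    """rho_a = K * dV / I; a large K multiplies voltage noise into rho_a,
    which is why wide-separation dipole-dipole readings are noise-prone."""
    return dipole_dipole_k(a_m, n) * delta_v / current
```

Going from n = 1 to n = 6 raises K by a factor of 56, so the same voltage noise produces a much larger apparent-resistivity error, consistent with the increased noise the study reports for high-geometric-factor combinations.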

  12. SITE-94. Discrete-feature modelling of the Aespoe site: 2. Development of the integrated site-scale model

    International Nuclear Information System (INIS)

    Geier, J.E.

    1996-12-01

A 3-dimensional, discrete-feature hydrological model is developed. The model integrates structural and hydrologic data for the Aespoe site, on scales ranging from semi-regional fracture zones to individual fractures in the vicinity of the nuclear waste canisters. Hydrologic properties of the large-scale structures are initially estimated from cross-hole hydrologic test data, then automatically calibrated by numerical simulation of network flow and comparison with undisturbed heads and observed drawdowns in selected cross-hole tests. The calibrated model is combined with a separately derived fracture network model to yield the integrated model. This model is partly validated by simulation of transient responses to a long-term pumping test and a convergent tracer test, based on the LPT2 experiment at Aespoe. The integrated model predicts that discharge from the SITE-94 repository is predominantly via fracture zones along the eastern shore of Aespoe. Similar discharge loci are produced by numerous model variants that explore uncertainty with regard to effective semi-regional boundary conditions, hydrologic properties of the site-scale structures, and alternative structural/hydrological interpretations. 32 refs.

  14. A Multi-scale Modeling System with Unified Physics to Study Precipitation Processes

    Science.gov (United States)

    Tao, W. K.

    2017-12-01

In recent years, exponentially increasing computer power has extended Cloud Resolving Model (CRM) integrations from hours to months and the number of computational grid points from less than a thousand to close to ten million. Three-dimensional models are now more prevalent. Much attention is devoted to precipitating cloud systems, where the crucial 1-km scales are resolved in horizontal domains as large as 10,000 km in two dimensions and 1,000 x 1,000 km2 in three dimensions. Cloud-resolving models now provide statistical information useful for developing more realistic, physically based parameterizations for climate models and numerical weather prediction (NWP) models. It is also expected that NWP and mesoscale models can be run at grid sizes similar to cloud-resolving models through nesting techniques. Recently, a multi-scale modeling system with unified physics was developed at NASA Goddard. It consists of (1) a cloud-resolving model (the Goddard Cumulus Ensemble model, GCE), (2) a regional-scale model (the NASA-unified Weather Research and Forecasting model, WRF), and (3) a coupled CRM and global model (the Goddard Multi-scale Modeling Framework, MMF). The same microphysical processes, long- and short-wave radiative transfer, and land processes, together with explicit cloud-radiation and cloud-land-surface interactions, are applied across this multi-scale modeling system. The modeling system has been coupled with a multi-satellite simulator that uses NASA high-resolution satellite data to identify the strengths and weaknesses of the cloud and precipitation processes simulated by the model. In this talk, a review of developments and applications of the multi-scale modeling system will be presented. In particular, results from using the multi-scale modeling system to study precipitation processes and their sensitivity to model resolution and microphysics schemes will be shown, and the use of the multi-satellite simulator to improve precipitation processes will be discussed.

  15. Upscaling of Long-Term U(VI) Desorption from Pore Scale Kinetics to Field-Scale Reactive Transport Models. Final report

    International Nuclear Information System (INIS)

    Miller, Andy

    2009-01-01

    Environmental systems exhibit a range of complexities which exist at a range of length and mass scales. Within the realm of radionuclide fate and transport, much work has been focused on understanding pore scale processes where complexity can be reduced to a simplified system. In describing larger scale behavior, the results from these simplified systems must be combined to create a theory of the whole. This process can be quite complex, and lead to models which lack transparency. The underlying assumption of this approach is that complex systems will exhibit complex behavior, requiring a complex system of equations to describe behavior. This assumption has never been tested. The goal of the experiments presented is to ask the question: Do increasingly complex systems show increasingly complex behavior? Three experimental tanks at the intermediate scale (Tank 1: 2.4m x 1.2m x 7.6cm, Tank 2: 2.4m x 0.61m x 7.6cm, Tank 3: 2.4m x 0.61m x 0.61m (LxHxW)) have been completed. These tanks were packed with various physical orientations of different particle sizes of a uranium contaminated sediment from a former uranium mill near Naturita, Colorado. Steady state water flow was induced across the tanks using constant head boundaries. Pore water was removed from within the flow domain through sampling ports/wells; effluent samples were also taken. Each sample was analyzed for a variety of analytes relating to the solubility and transport of uranium. Flow fields were characterized using inert tracers and direct measurements of pressure head. The results show that although there is a wide range of chemical variability within the flow domain of the tank, the effluent uranium behavior is simple enough to be described using a variety of conceptual models. Thus, although there is a wide range in variability caused by pore scale behaviors, these behaviors appear to be smoothed out as uranium is transported through the tank. This smoothing of uranium transport behavior transcends

  16. The nutritional index 'CONUT' is useful for predicting long-term prognosis of patients with end-stage liver diseases.

    Science.gov (United States)

    Fukushima, Koji; Ueno, Yoshiyuki; Kawagishi, Naoki; Kondo, Yasuteru; Inoue, Jun; Kakazu, Eiji; Ninomiya, Masashi; Wakui, Yuta; Saito, Naoko; Satomi, Susumu; Shimosegawa, Tooru

    2011-07-01

Organ allocation in Japan remains difficult due to the shortage of deceased-donor livers. The screening tool for controlling nutritional status (CONUT) has been considered an established assessment model for evaluating nutritional aspects in surgical patients. However, the application of CONUT for evaluating the prognosis of patients with end-stage liver diseases had not been evaluated. We assessed the predictability of the prognoses of 58 patients with end-stage liver disease using various prognostic models. The patients registered at the transplantation center of Tohoku University Hospital on the waiting list of the Japan Organ Transplant Network for liver transplantation were retrospectively analyzed. The prognoses of the patients were evaluated using the following 5 models: CONUT, the model for end-stage liver disease with incorporation of sodium (MELD-Na), the Child-Turcotte-Pugh score (CTP), the prognostic nutritional index (Onodera: PNI-O), and the Japan Medical Urgency criteria of the liver (JMU). Cox's proportional hazard model, the log-rank test, and the concordance (c)-statistic were used for the statistics. The indices were 17.74 ± 5.80 for MELD-Na, 9.21 ± 2.19 for CTP, 33.92 ± 11.16 for PNI-O, and 7.57 ± 3.09 for CONUT. Univariate analysis revealed the significance of CONUT (p = 0.017, odds ratio: 1.325) but not MELD-Na, CTP, JMU or PNI-O for prediction. The cumulative survival rate was clearly discriminated at a CONUT score of 7. The c-statistic for CONUT was 0.081 for the 6-month (M) survival rate, 0.172 for 12 M, 0.517 for 36 M, 0.821 for 48 M, and 0.938 for 60 M. In conclusion, CONUT showed the best predictability for the distant prognoses of patients with end-stage liver disease.
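For reference, the CONUT score itself is a simple sum of three lab-value sub-scores. The threshold bands below follow the commonly cited definition of the screening tool; they are not taken from this paper, so verify them against the original publication before any clinical use:

```python
def conut_score(albumin_g_dl, lymphocytes_per_mm3, cholesterol_mg_dl):
    """Total CONUT score (0-12) from serum albumin, total lymphocyte
    count and total cholesterol.  Bands follow the commonly cited
    CONUT definition (an assumption here, not this paper's text)."""
    if albumin_g_dl >= 3.5:   alb = 0
    elif albumin_g_dl >= 3.0: alb = 2
    elif albumin_g_dl >= 2.5: alb = 4
    else:                     alb = 6

    if lymphocytes_per_mm3 >= 1600:   lym = 0
    elif lymphocytes_per_mm3 >= 1200: lym = 1
    elif lymphocytes_per_mm3 >= 800:  lym = 2
    else:                             lym = 3

    if cholesterol_mg_dl >= 180:   cho = 0
    elif cholesterol_mg_dl >= 140: cho = 1
    elif cholesterol_mg_dl >= 100: cho = 2
    else:                          cho = 3

    return alb + lym + cho

# With the paper's survival split at a CONUT score of 7, this
# hypothetical patient would fall on the higher-risk side:
print(conut_score(2.8, 900, 130))  # 4 + 2 + 2 = 8
```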

  17. LongLine: Visual Analytics System for Large-scale Audit Logs

    Directory of Open Access Journals (Sweden)

    Seunghoon Yoo

    2018-03-01

Audit logs are different from other software logs in that they record the most primitive events (i.e., system calls) in modern operating systems. Audit logs contain a detailed trace of an operating system, and thus have received great attention from security experts and system administrators. However, the complexity and size of audit logs, which grow in real time, have hindered analysts from understanding and analyzing them. In this paper, we present a novel visual analytics system, LongLine, which enables interactive visual analysis of large-scale audit logs. LongLine lowers the interpretation barrier of audit logs by employing human-understandable representations (e.g., file paths and commands) instead of abstract operating-system indicators (e.g., file descriptors), and by revealing the temporal patterns of the logs in a multi-scale fashion with meaningful granularities of time in mind (e.g., hourly, daily, and weekly). LongLine also streamlines comparative analysis between interesting subsets of logs, which is essential in detecting anomalous system behavior. In addition, LongLine allows analysts to monitor the system state in a streaming fashion, keeping the latency between log creation and visualization under one minute. Finally, we evaluate our system through a case study and a scenario analysis with security experts.

  18. Including investment risk in large-scale power market models

    DEFF Research Database (Denmark)

    Lemming, Jørgen Kjærgaard; Meibom, P.

    2003-01-01

Long-term energy market models can be used to examine investments in production technologies; however, with market liberalisation it is crucial that such models include investment risks and investor behaviour. This paper analyses how the effect of investment risk on production technology selection can be included in large-scale partial equilibrium models of the power market. The analyses are divided into a part about risk measures appropriate for power market investors and a more technical part about the combination of a risk-adjustment model and a partial-equilibrium model. To illustrate the analyses quantitatively, a framework based on an iterative interaction between the equilibrium model and a separate risk-adjustment module was constructed. To illustrate the features of the proposed modelling approach we examined how uncertainty in demand and variable costs affects the optimal choice...

  19. International Symposia on Scale Modeling

    CERN Document Server

    Ito, Akihiko; Nakamura, Yuji; Kuwana, Kazunori

    2015-01-01

This volume thoroughly covers scale modeling and serves as the definitive source of information on scale modeling as a powerful simplifying and clarifying tool used by scientists and engineers across many disciplines. The book elucidates techniques used when it would be too expensive, or too difficult, to test a system of interest in the field. Topics addressed in the current edition include scale modeling to study weather systems, diffusion of pollution in air or water, chemical processes in 3-D turbulent flow, multiphase combustion, flame propagation, biological systems, behavior of materials at nano- and micro-scales, and many more. This is an ideal book for students, both graduate and undergraduate, as well as engineers and scientists interested in the latest developments in scale modeling. This book also: enables readers to evaluate essential and salient aspects of profoundly complex systems, mechanisms, and phenomena at scale; offers engineers and designers a new point of view, liberating creative and inno...

  20. Multi-Scale Models for the Scale Interaction of Organized Tropical Convection

    Science.gov (United States)

    Yang, Qiu

    Assessing the upscale impact of organized tropical convection from small spatial and temporal scales is a research imperative, not only for having a better understanding of the multi-scale structures of dynamical and convective fields in the tropics, but also for eventually helping in the design of new parameterization strategies to improve the next-generation global climate models. Here self-consistent multi-scale models are derived systematically by following the multi-scale asymptotic methods and used to describe the hierarchical structures of tropical atmospheric flows. The advantages of using these multi-scale models lie in isolating the essential components of multi-scale interaction and providing assessment of the upscale impact of the small-scale fluctuations onto the large-scale mean flow through eddy flux divergences of momentum and temperature in a transparent fashion. Specifically, this thesis includes three research projects about multi-scale interaction of organized tropical convection, involving tropical flows at different scaling regimes and utilizing different multi-scale models correspondingly. Inspired by the observed variability of tropical convection on multiple temporal scales, including daily and intraseasonal time scales, the goal of the first project is to assess the intraseasonal impact of the diurnal cycle on the planetary-scale circulation such as the Hadley cell. As an extension of the first project, the goal of the second project is to assess the intraseasonal impact of the diurnal cycle over the Maritime Continent on the Madden-Julian Oscillation. In the third project, the goals are to simulate the baroclinic aspects of the ITCZ breakdown and assess its upscale impact on the planetary-scale circulation over the eastern Pacific. These simple multi-scale models should be useful to understand the scale interaction of organized tropical convection and help improve the parameterization of unresolved processes in global climate models.

  1. A model for allometric scaling of mammalian metabolism with ambient heat loss

    KAUST Repository

    Kwak, Ho Sang

    2016-02-02

Background: Allometric scaling, which describes how a biological trait or process depends on body size, is a long-standing subject in biological science. However, no previous study has considered heat loss to the ambient together with an insulation layer representing mammalian skin and fur in deriving the scaling law of metabolism. Methods: A simple heat transfer model is proposed to analyze the allometry of mammalian metabolism. The present model extends existing studies by incorporating various external heat transfer parameters and additional insulation layers. The model equations were solved numerically and by an analytic heat-balance approach. Results: A general observation is that the present heat transfer model predicted the 2/3 surface scaling law, which is primarily attributed to the dependence of the surface area on the body mass. External heat transfer effects introduced deviations in the scaling law, mainly due to natural convection heat transfer, which becomes more prominent at smaller mass. These deviations resulted in a slight modification of the scaling exponent to a value smaller than 2/3. Conclusion: The finding that additional radiative heat loss and the consideration of an outer insulating fur layer attenuate these deviations and render the scaling law closer to 2/3 provides in silico evidence for a functional impact of the heat transfer mode on the allometric scaling law of mammalian metabolism.
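The surface-area origin of the 2/3 exponent can be sketched numerically: if steady metabolic heat production B balances surface heat loss, B = h·A·ΔT, and area scales as A = c·M^(2/3), a log-log regression recovers the exponent. The constants below are illustrative placeholders, not values from the paper:

```python
import math

# Minimal sketch of the surface scaling argument (not the paper's full
# model): steady metabolic heat production B balances surface heat
# loss, B = h * A * dT, with surface area A = c * M**(2/3).
# h, c and dT are illustrative placeholder constants.
h, c, dT = 10.0, 0.1, 20.0
masses = [0.02, 0.2, 2.0, 20.0, 200.0]               # body mass (kg)
rates = [h * c * m ** (2.0 / 3.0) * dT for m in masses]

# Recover the scaling exponent as the least-squares slope in log-log space.
xs = [math.log(m) for m in masses]
ys = [math.log(r) for r in rates]
n = len(xs)
mx, my = sum(xs) / n, sum(ys) / n
slope = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
print(round(slope, 3))  # 0.667: the 2/3 surface scaling law
```

The paper's point is precisely that real external heat transfer (e.g., natural convection) perturbs this idealized balance and nudges the fitted exponent slightly below 2/3.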

  2. Sensitivity, Error and Uncertainty Quantification: Interfacing Models at Different Scales

    International Nuclear Information System (INIS)

    Krstic, Predrag S.

    2014-01-01

Discussion of the accuracy of AMO data to be used in plasma modeling codes for astrophysics and nuclear fusion applications, including plasma-material interfaces (PMI), involves many orders of magnitude in energy, spatial, and temporal scales. Thus, energies run from tens of K to hundreds of millions of K, while temporal and spatial scales go from fs to years and from nm to m and beyond, respectively. The key challenge for theory and simulation in this field is the consistent integration of all processes and scales, i.e., an “integrated AMO science” (IAMO). The principal goal of IAMO science is to enable accurate studies of interactions of electrons, atoms, molecules, and photons in a many-body environment, including the complex collision physics of plasma-material interfaces, leading to the best decisions and predictions. However, the accuracy requirement for particular data strongly depends on the sensitivity of the respective plasma modeling applications to these data, which stresses the need for immediate sensitivity-analysis feedback from the plasma modeling and material design communities. Thus, data provision to the plasma modeling community is a “two-way road” as far as the accuracy of the data is concerned, requiring close interactions of the AMO and plasma modeling communities.

  3. Modelling Long-Term Evolution of Cementitious Materials Used in Waste Disposal

    International Nuclear Information System (INIS)

    Jacques, D.; Perko, J.; Seetharam, S.; Govaerts, J.; Mallants, D.

    2013-01-01

    This report summarizes the latest developments at SCK-CEN in modelling long-term evolution of cementitious materials used as engineered barriers in waste disposal. In a first section chemical degradation of concrete during leaching with rain and soil water types is discussed. The geochemical evolution of concrete thus obtained forms the basis for all further modelling. Next we show how the leaching model is coupled with a reactive transport module to determine leaching of cement minerals under diffusive or advective boundary conditions. The module also contains a simplified microstructural model from which hydraulic and transport properties of concrete may be calculated dynamically. This coupled model is simplified, i.e. abstracted prior to being applied to large-scale concrete structures typical of a near-surface repository. Both the original and simplified models are then used to calculate the evolution of hydraulic, transport, and chemical properties of concrete. Characteristic degradation states of concrete are further linked to distribution ratios that describe sorption onto hardened cement via a linear and reversible sorption process. As concrete degrades and pH drops the distribution ratios are continuously updated. We have thus integrated all major chemical and physical concrete degradation processes into one simulator for a particular scale of interest. Two simulators are used: one that can operate at relatively small spatial scales using all process details and another one which simulates concrete degradation at the scale of the repository but with a simplified cement model representation. (author)

  4. Nonstationary modeling of a long record of rainfall and temperature over Rome

    Science.gov (United States)

    Villarini, Gabriele; Smith, James A.; Napolitano, Francesco

    2010-10-01

    A long record (1862-2004) of seasonal rainfall and temperature from the Rome observatory of Collegio Romano are modeled in a nonstationary framework by means of the Generalized Additive Models in Location, Scale and Shape (GAMLSS). Modeling analyses are used to characterize nonstationarities in rainfall and related climate variables. It is shown that the GAMLSS models are able to represent the magnitude and spread in the seasonal time series with parameters which are a smooth function of time. Covariate analyses highlight the role of seasonal and interannual variability of large-scale climate forcing, as reflected in three teleconnection indexes (Atlantic Multidecadal Oscillation, North Atlantic Oscillation, and Mediterranean Index), for modeling seasonal rainfall and temperature over Rome. In particular, the North Atlantic Oscillation is a significant predictor during the winter, while the Mediterranean Index is a significant predictor for almost all seasons.
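The core idea, that both the location and the scale of the distribution may drift with time, can be imitated with a crude moment-based sketch. GAMLSS proper fits location, scale, and shape by penalized likelihood; everything below, including the synthetic series, is illustrative only and not the Collegio Romano data:

```python
import math
import random

# Crude moment-based sketch of a nonstationary location/scale fit.
# The synthetic "rainfall" series has a drifting mean and a growing
# spread, the two features GAMLSS would model with smooth functions.
random.seed(42)
n = 1000
t = list(range(n))
y = [5.0 + 0.01 * ti                                     # drifting location
     + math.exp(-0.5 + 0.001 * ti) * random.gauss(0, 1)  # growing scale
     for ti in t]

def ols_slope(x, z):
    mx, mz = sum(x) / len(x), sum(z) / len(z)
    return sum((a - mx) * (b - mz) for a, b in zip(x, z)) / \
           sum((a - mx) ** 2 for a in x)

b1 = ols_slope(t, y)                       # trend in the location mu(t)
a1 = sum(y) / n - b1 * sum(t) / n
resid = [yi - (a1 + b1 * ti) for yi, ti in zip(y, t)]

# The trend of log|residual| tracks the trend of log sigma(t)
# (up to the constant offset E[log|N(0,1)|]).
d1 = ols_slope(t, [math.log(abs(r) + 1e-12) for r in resid])

print(round(b1, 3))   # close to the true location trend 0.01
print(round(d1, 4))   # close to the true log-scale trend 0.001
```

A real GAMLSS fit would additionally let covariates such as the NAO or Mediterranean Index enter the location and scale functions, as the abstract describes.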

  5. Long-term Simulation of Photo-oxidants and Particulate Matter Over Europe With The Eurad Modeling System

    Science.gov (United States)

    Memmesheimer, M.; Friese, E.; Jakobs, H. J.; Feldmann, H.; Ebel, A.; Kerschgens, M. J.

During recent years the interest in long-term applications of air quality modeling systems (AQMS) has strongly increased. Most of these models were developed during the last decade for application to photo-oxidant episodes. In this contribution, a long-term application of the EURAD modeling system to the year 1997 is presented. Atmospheric particles are included using the Modal Aerosol Dynamics model for Europe (MADE). Meteorological fields are simulated by the mesoscale meteorological model MM5, and gas-phase chemistry has been treated with the RACM mechanism. The nesting option is used to zoom in on areas of specific interest. Horizontal grid sizes are 125 km for the regional scale and 5 km for the local scale covering the area of North Rhine-Westphalia (NRW). The results have been compared to observations from the air quality network of the environmental agency of NRW for the year 1997, and the model results have been evaluated using the data quality objectives of EU directive 99/30. Further improvement for the application of regional-scale air quality models is needed with respect to emission databases, coupling to global models to improve the boundary values, interaction between aerosols and clouds, and multiphase modeling.

  6. Wave-processing of long-scale information by neuronal chains.

    Directory of Open Access Journals (Sweden)

    José Antonio Villacorta-Atienza

Investigation of the mechanisms of information handling in neural assemblies involved in computational and cognitive tasks is a challenging problem. Synergetic cooperation of neurons in the time domain, through synchronized firing of multiple spatially distant neurons, has become the main paradigm. Complementarily, the brain may also employ information coding and processing in the spatial dimension; the result of a computation then depends also on the spatial distribution of long-scale information. This bi-dimensional alternative is notably less explored in the literature. Here, we propose and theoretically illustrate a concept of spatiotemporal representation and processing of long-scale information in laminar neural structures. We argue that relevant information may be hidden in self-sustained traveling waves of neuronal activity, and that their nonlinear interaction yields efficient wave-processing of spatiotemporal information. Using as a testbed a chain of FitzHugh-Nagumo neurons, we show that wave-processing can be achieved by incorporating into the single-neuron dynamics an additional voltage-gated membrane current. This local mechanism provides a chain of such neurons with new emergent network properties. In particular, nonlinear waves as carriers of long-scale information exhibit a variety of functionally different regimes of interaction: from complete or asymmetric annihilation to transparent crossing. Thus neuronal chains can work as computational units performing different operations over spatiotemporal information. Exploiting complexity resonance, these composite units can discard stimuli of too high or too low frequency, while selectively compressing those in the natural frequency range. We also show how neuronal chains can contextually interpret raw wave information.
The same stimulus can be processed differently or identically according to the context set by a periodic wave train injected at the opposite end of the
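The annihilation regime described in the abstract can be reproduced with a plain diffusively coupled FitzHugh-Nagumo chain. This is a minimal sketch with standard textbook parameters; the paper's model adds an extra voltage-gated membrane current that is not included here:

```python
# Minimal sketch: a chain of diffusively coupled FitzHugh-Nagumo
# neurons in the excitable regime, with standard textbook parameters
# (the paper's model adds an extra voltage-gated membrane current,
# which is NOT included here).  Pulses launched at both ends travel
# toward each other, collide, and annihilate.
N, D, dt, steps = 60, 1.0, 0.04, 15000
a, b, eps = 0.7, 0.8, 0.08
v_rest, w_rest = -1.1994, -0.6243           # resting fixed point

v = [v_rest] * N
w = [w_rest] * N
for i in (0, 1, 2, N - 3, N - 2, N - 1):    # suprathreshold kick at both ends
    v[i] = 1.5

mid_fired = False
for _ in range(steps):
    dv = []
    for i in range(N):
        left = v[i - 1] if i > 0 else v[i]          # no-flux boundaries
        right = v[i + 1] if i < N - 1 else v[i]
        dv.append(v[i] - v[i] ** 3 / 3.0 - w[i]
                  + D * (left + right - 2.0 * v[i]))
    for i in range(N):
        v[i] += dt * dv[i]
        w[i] += dt * eps * (v[i] + a - b * w[i])
    if v[N // 2] > 0.0:
        mid_fired = True                            # a wave reached the centre

quiet = max(abs(vi - v_rest) for vi in v) < 0.1     # chain back at rest
print(mid_fired, quiet)   # True True: waves crossed the chain, then annihilated
```

Because each pulse leaves a refractory wake, the colliding waves cannot pass through each other here; the paper's added membrane current is what opens up the other regimes, such as transparent crossing.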

  7. Applying the Nominal Response Model within a Longitudinal Framework to Construct the Positive Family Relationships Scale

    Science.gov (United States)

    Preston, Kathleen Suzanne Johnson; Parral, Skye N.; Gottfried, Allen W.; Oliver, Pamella H.; Gottfried, Adele Eskeles; Ibrahim, Sirena M.; Delany, Danielle

    2015-01-01

    A psychometric analysis was conducted using the nominal response model under the item response theory framework to construct the Positive Family Relationships scale. Using data from the Fullerton Longitudinal Study, this scale was constructed within a long-term longitudinal framework spanning middle childhood through adolescence. Items tapping…

  8. Using Scaling to Understand, Model and Predict Global Scale Anthropogenic and Natural Climate Change

    Science.gov (United States)

    Lovejoy, S.; del Rio Amador, L.

    2014-12-01

The atmosphere is variable over twenty orders of magnitude in time (≈10^-3 to 10^17 s) and almost all of the variance is in the spectral "background", which we show can be divided into five scaling regimes: weather, macroweather, climate, macroclimate and megaclimate. We illustrate this with instrumental and paleo data. Based on the signs of the fluctuation exponent H, we argue that while the weather is "what you get" (H>0: fluctuations increasing with scale), it is macroweather (H<0: fluctuations decreasing with scale) - not climate - "that you expect". The conventional framework, which treats the background as close to white noise and focuses on quasi-periodic variability, assumes a spectrum that is in error by a factor of a quadrillion (≈10^15). Using this scaling framework, we can quantify the natural variability, distinguish it from anthropogenic variability, test various statistical hypotheses and make stochastic climate forecasts. For example, we estimate that the probability that the warming is simply a giant century-long natural fluctuation is less than 1%, most likely less than 0.1%, and estimate return periods for natural warming events of different strengths and durations, including the slowdown ("pause") in the warming since 1998. The return period for the pause was found to be 20-50 years, i.e. not very unusual; however, it immediately follows a 6-year "pre-pause" warming event of almost the same magnitude with a similar return period (30-40 years). To improve on these unconditional estimates, we can use scaling models to exploit the long-range memory of the climate process to make accurate stochastic forecasts of the climate, including the pause. We illustrate stochastic forecasts on monthly and annual scale series of global and northern hemisphere surface temperatures. We obtain forecast skill nearly as high as the theoretical (scaling) predictability limits allow: for example, using hindcasts we find that at 10 year forecast horizons we can still explain ≈ 15% of the

  9. How uncertainty in socio-economic variables affects large-scale transport model forecasts

    DEFF Research Database (Denmark)

    Manzo, Stefano; Nielsen, Otto Anker; Prato, Carlo Giacomo

    2015-01-01

A strategic task assigned to large-scale transport models is to forecast the demand for transport over long periods of time in order to assess transport projects. However, by modelling complex systems, transport models have an inherent uncertainty which increases over time. As a consequence, the longer the period forecasted, the less reliable is the forecasted model output. Describing uncertainty propagation patterns over time is therefore important in order to provide complete information to the decision makers. Among the existing literature, only a few studies analyze uncertainty propagation patterns over...

  10. Multi-scale modeling of composites

    DEFF Research Database (Denmark)

    Azizi, Reza

A general method to obtain the homogenized response of metal-matrix composites is developed. It is assumed that the microscopic scale is sufficiently small compared to the macroscopic scale such that the macro response does not affect the micromechanical model. Therefore, the microscopic scale... Hill-Mandel's energy principle is used to find macroscopic operators based on micro-mechanical analyses using the finite element method under generalized plane strain conditions. A phenomenological macroscopic model for metal matrix composites is developed based on constitutive operators describing the elastic... to plastic deformation. The macroscopic operators found can be used to model metal matrix composites on the macroscopic scale using a hierarchical multi-scale approach. Finally, decohesion under tension and shear loading is studied using a cohesive law for the interface between matrix and fiber.

  11. Multi-scale modeling and analysis of convective boiling: towards the prediction of CHF in rod bundles

    International Nuclear Information System (INIS)

    Niceno, B.; Sato, Y.; Badillo, A.; Andreani, M.

    2010-01-01

In this paper we describe current activities in the Multi-Scale Modeling and Analysis of convective boiling (MSMA) project, conducted jointly by the Paul Scherrer Institute (PSI) and the Swiss Nuclear Utilities (Swissnuclear). The long-term aim of the MSMA project is to formulate improved closure laws for Computational Fluid Dynamics (CFD) simulations for the prediction of convective boiling and, eventually, of the Critical Heat Flux (CHF). As boiling is controlled by the competition of numerous phenomena at various length and time scales, a multi-scale approach is employed to tackle the problem at different scales. In the MSMA project, the scales on which we focus range from the CFD scale (macro-scale) and the bubble-size scale (meso-scale) down to the liquid micro-layer and triple-interline scale (micro-scale) and the molecular scale (nano-scale). The current focus of the project is on micro- and meso-scale modeling. The numerical framework comprises a highly efficient, parallel DNS solver, the PSI-BOIL code. The code incorporates an Immersed Boundary Method (IBM) to tackle complex geometries. For simulation of meso-scales (bubbles), we use the Constrained Interpolation Profile method: Conservative Semi-Lagrangian 2nd order (CIP-CSL2). The phase change is described either by applying conventional jump conditions at the interface or by using the Phase Field (PF) approach. In this work, we present selected results for flows in complex geometry using the IBM, selected bubbly flow simulations using the CIP-CSL2 method, and results for phase change using the PF approach. In the subsequent stage of the project, the importance of nano-scale processes for the global boiling heat transfer will be evaluated. To validate the models, more experimental information will be needed in the future, so it is expected that the MSMA project will become the seed for a long-term, combined theoretical and experimental program.

  12. The long-range correlation and evolution law of centennial-scale temperatures in Northeast China.

    Science.gov (United States)

    Zheng, Xiaohui; Lian, Yi; Wang, Qiguang

    2018-01-01

    This paper applies the detrended fluctuation analysis (DFA) method to investigate the long-range correlation of monthly mean temperatures from three typical measurement stations at Harbin, Changchun, and Shenyang in Northeast China from 1909 to 2014. The results reveal the memory characteristics of the climate system in this region. By comparing the temperatures from different time periods and investigating the variations of its scaling exponents at the three stations during these different time periods, we found that the monthly mean temperature has long-range correlation, which indicates that the temperature in Northeast China has long-term memory and good predictability. The monthly time series of temperatures over the past 106 years also shows good long-range correlation characteristics. These characteristics are also obviously observed in the annual mean temperature time series. Finally, we separated the centennial-length temperature time series into two time periods. These results reveal that the long-range correlations at the Harbin station over these two time periods have large variations, whereas no obvious variations are observed at the other two stations. This indicates that warming affects the regional climate system's predictability differently at different time periods. The research results can provide a quantitative reference point for regional climate predictability assessment and future climate model evaluation.
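The detrended fluctuation analysis (DFA) used in the record above follows a standard recipe: integrate the mean-subtracted series, detrend it piecewise in windows of increasing size, and read the scaling exponent off the log-log slope of the fluctuation function; an exponent above 0.5 indicates long-range correlation. A minimal sketch on synthetic data (the function name, window sizes, and test series are our own illustration, not taken from the paper):

```python
import numpy as np

def dfa(x, scales, order=1):
    """Detrended fluctuation analysis; returns the scaling exponent alpha."""
    x = np.asarray(x, dtype=float)
    y = np.cumsum(x - x.mean())               # integrated profile
    flucts = []
    for n in scales:
        n_seg = len(y) // n
        segs = y[: n_seg * n].reshape(n_seg, n)
        t = np.arange(n)
        rms = []
        for seg in segs:                      # polynomial detrending per window
            coef = np.polyfit(t, seg, order)
            rms.append(np.mean((seg - np.polyval(coef, t)) ** 2))
        flucts.append(np.sqrt(np.mean(rms)))
    # slope of log F(n) versus log n is the DFA exponent
    return np.polyfit(np.log(scales), np.log(flucts), 1)[0]

rng = np.random.default_rng(0)
white = rng.standard_normal(4096)
print(round(dfa(white, [16, 32, 64, 128, 256]), 2))   # close to 0.5: no memory
```

For a series with long-term memory (e.g. a temperature record like those studied here), the same estimator would return a value above 0.5.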

  13. Large-scale, long-term silvicultural experiments in the United States: historical overview and contemporary examples.

    Science.gov (United States)

    R. S. Seymour; J. Guldin; D. Marshall; B. Palik

    2006-01-01

    This paper provides a synopsis of large-scale, long-term silviculture experiments in the United States. Large-scale in a silvicultural context means that experimental treatment units encompass entire stands (5 to 30 ha); long-term means that results are intended to be monitored over many cutting cycles or an entire rotation, typically for many decades. Such studies...

  14. Spatial scale separation in regional climate modelling

    Energy Technology Data Exchange (ETDEWEB)

    Feser, F.

    2005-07-01

    In this thesis the concept of scale separation is introduced as a tool, first, for improving regional climate model simulations and, second, for explicitly detecting and describing the added value obtained by regional modelling. The basic idea behind this is that global and regional climate models perform best at different spatial scales; therefore the regional model should not alter the global model's results at large scales. The concept of nudging of large scales, designed for this purpose, controls the large scales within the regional model domain and keeps them close to the global forcing model, while the regional scales are left unchanged. For ensemble simulations, nudging of large scales strongly reduces the divergence of the individual simulations compared to the standard-approach ensemble, which occasionally shows large differences between individual realisations. For climate hindcasts this method leads to results that are on average closer to observed states than the standard approach. The analysis of regional climate model simulations can also be improved by separating the results into different spatial domains. This was done by developing and applying digital filters that perform the scale separation effectively and without great computational effort. The separation of the results into different spatial scales simplifies model validation and process studies. The search for 'added value' can be conducted on the spatial scales the regional climate model was designed for, giving clearer results than the analysis of unfiltered meteorological fields. To examine the skill of the different simulations, pattern correlation coefficients were calculated between the global reanalyses, the regional climate model simulations and, as a reference, an operational regional weather analysis. The regional climate model simulation driven with large-scale constraints achieved a high increase in similarity to the operational analyses for medium-scale 2 meter

  15. Review of Dynamic Modeling and Simulation of Large Scale Belt Conveyor System

    Science.gov (United States)

    He, Qing; Li, Hong

    The belt conveyor is one of the most important devices for transporting bulk solid material over long distances. Dynamic analysis is the key to deciding whether a design is technically rational, safe and reliable in operation, and economically feasible. Studying dynamic properties is very important for improving efficiency and productivity and for guaranteeing safe, reliable, and stable conveyor operation. The dynamic research on, and applications of, large-scale belt conveyors are discussed, and the main research topics and the state of the art of dynamic research on belt conveyors are analyzed. Future work should focus on dynamic analysis, modeling, and simulation of the main components and the whole system, as well as on nonlinear modeling, simulation, and vibration analysis of large-scale conveyor systems.

  16. Incorporating interspecific competition into species-distribution mapping by upward scaling of small-scale model projections to the landscape.

    Directory of Open Access Journals (Sweden)

    Mark Baah-Acheamfour

    Full Text Available There are a number of overarching questions and debates in the scientific community concerning the importance of biotic interactions in species distribution models at large spatial scales. In this paper, we present a framework for revising the potential distribution of tree species native to the Western Ecoregion of Nova Scotia, Canada, by integrating the long-term effects of interspecific competition into an existing abiotic-factor-based definition of potential species distribution (PSD). The PSD model is developed by combining spatially explicit data of individualistic species' responses to normalized incident photosynthetically active radiation, soil water content, and growing degree days. A revised PSD model adds biomass output simulated over a 100-year timeframe with a robust forest gap model and scaled up to the landscape using a forestland classification technique. To demonstrate the method, we applied the calculation to the natural range of 16 target tree species as found in 1,240 provincial forest-inventory plots. The revised PSD model, with the long-term effects of interspecific competition accounted for, predicted that eastern hemlock (Tsuga canadensis), American beech (Fagus grandifolia), white birch (Betula papyrifera), red oak (Quercus rubra), sugar maple (Acer saccharum), and trembling aspen (Populus tremuloides) would experience a significant decline in their original distribution compared with balsam fir (Abies balsamea), black spruce (Picea mariana), red spruce (Picea rubens), red maple (Acer rubrum L.), and yellow birch (Betula alleghaniensis). True model accuracy improved from 64.2% with the original PSD evaluations to 81.7% with the revised PSD. Kappa statistics increased slightly, from 0.26 (fair) to 0.41 (moderate), for the original and revised PSDs, respectively.

  17. Using Multi-Scale Modeling Systems and Satellite Data to Study the Precipitation Processes

    Science.gov (United States)

    Tao, Wei-Kuo; Chern, J.; Lamg, S.; Matsui, T.; Shen, B.; Zeng, X.; Shi, R.

    2011-01-01

    In recent years, exponentially increasing computer power has extended Cloud Resolving Model (CRM) integrations from hours to months and the number of computational grid points from less than a thousand to close to ten million. Three-dimensional models are now more prevalent. Much attention is devoted to precipitating cloud systems where the crucial 1-km scales are resolved in horizontal domains as large as 10,000 km in two dimensions, and 1,000 x 1,000 km2 in three dimensions. Cloud resolving models now provide statistical information useful for developing more realistic physically based parameterizations for climate models and numerical weather prediction models. It is also expected that NWP and mesoscale models can be run at grid sizes similar to cloud resolving models through nesting techniques. Recently, a multi-scale modeling system with unified physics was developed at NASA Goddard. It consists of (1) a cloud-resolving model (the Goddard Cumulus Ensemble model, GCE model), (2) a regional-scale model (a NASA unified weather research and forecast model, WRF), (3) a coupled CRM and global model (the Goddard Multi-scale Modeling Framework, MMF), and (4) a land modeling system. The same microphysical processes, long- and short-wave radiative transfer, land processes, and explicit cloud-radiation and cloud-land-surface interactive processes are applied in this multi-scale modeling system. This modeling system has been coupled with a multi-satellite simulator to use NASA high-resolution satellite data to identify the strengths and weaknesses of cloud and precipitation processes simulated by the model. In this talk, the recent developments and applications of the multi-scale modeling system will be presented. In particular, results from using the multi-scale modeling system to study precipitating systems and hurricanes/typhoons will be presented. High-resolution spatial and temporal visualization will be utilized to show the evolution of precipitation processes.

  18. Fluctuations and pseudo long range dependence in network flows: A non-stationary Poisson process model

    International Nuclear Information System (INIS)

    Yu-Dong, Chen; Li, Li; Yi, Zhang; Jian-Ming, Hu

    2009-01-01

    In the study of complex networks (systems), the scaling phenomenon of flow fluctuations refers to a power law between the mean flux (activity) ⟨F_i⟩ of the i-th node and its dispersion σ_i, of the form σ_i ∝ ⟨F_i⟩^α. Such scaling laws are found to be prevalent both in natural and man-made network systems, but the understanding of their origins still remains limited. This paper proposes a non-stationary Poisson process model to give an analytical explanation of the non-universal scaling phenomenon: the exponent α varies between 1/2 and 1 depending on the size of the sampling time window and the relative strength of the external/internal driving forces of the systems. The crossover behaviour and the relation of fluctuation scaling with pseudo long range dependence are also accounted for by the model. Numerical experiments show that the proposed model can recover the multi-scaling phenomenon. (general)
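The two limiting exponents in the σ_i ∝ ⟨F_i⟩^α law can be reproduced with a toy Poisson experiment (a sketch with invented node rates, not the authors' model): purely internal Poisson fluctuations give α = 1/2, while a common external modulation of all node rates pushes α toward 1.

```python
import numpy as np

rng = np.random.default_rng(1)
means = np.logspace(0, 3, 30)                 # node-specific mean fluxes

def scaling_exponent(samples):
    # slope of log(std) versus log(mean) across nodes
    return np.polyfit(np.log(samples.mean(axis=0)),
                      np.log(samples.std(axis=0)), 1)[0]

# internal dynamics: independent Poisson counts at each node
internal = rng.poisson(means, size=(2000, len(means)))

# external dynamics: one common random factor modulates every rate per step
drive = rng.lognormal(0.0, 0.5, size=(2000, 1))
external = rng.poisson(drive * means)

print(round(scaling_exponent(internal), 2))   # close to 0.5
print(round(scaling_exponent(external), 2))   # approaches 1 for strong driving
```

This mirrors the crossover described in the abstract: the measured exponent depends on how strongly an external force modulates all nodes relative to their internal Poisson noise.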

  19. Photometric survey, modelling, and scaling of long-period and low-amplitude asteroids

    Science.gov (United States)

    Marciniak, A.; Bartczak, P.; Müller, T.; Sanabria, J. J.; Alí-Lagoa, V.; Antonini, P.; Behrend, R.; Bernasconi, L.; Bronikowska, M.; Butkiewicz-Bąk, M.; Cikota, A.; Crippa, R.; Ditteon, R.; Dudziński, G.; Duffard, R.; Dziadura, K.; Fauvaud, S.; Geier, S.; Hirsch, R.; Horbowicz, J.; Hren, M.; Jerosimic, L.; Kamiński, K.; Kankiewicz, P.; Konstanciak, I.; Korlevic, P.; Kosturkiewicz, E.; Kudak, V.; Manzini, F.; Morales, N.; Murawiecka, M.; Ogłoza, W.; Oszkiewicz, D.; Pilcher, F.; Polakis, T.; Poncy, R.; Santana-Ros, T.; Siwak, M.; Skiff, B.; Sobkowiak, K.; Stoss, R.; Żejmo, M.; Żukowski, K.

    2018-02-01

    Context. The available set of asteroids with spin and shape models is strongly biased against slowly rotating targets and those with low lightcurve amplitudes, due to observing selection effects. As a consequence, the current picture of the asteroid spin-axis distribution, rotation rates, radiometric properties, or aspects related to the objects' internal structure might be affected too. Aims: To counteract these selection effects, we are running a photometric campaign of a large sample of main-belt asteroids omitted in most previous studies. Using least-chi-squared fitting we determined synodic rotation periods and verified previous determinations. When the dataset for a given target was sufficiently large and varied, we performed spin and shape modelling with two different methods to compare their performance. Methods: We used the convex inversion method and the non-convex SAGE algorithm, applied to the same datasets of dense lightcurves. Both methods search for the lowest deviations between observed and modelled lightcurves, though using different approaches. Unlike convex inversion, the SAGE method allows for the existence of valleys and indentations on the shapes based only on lightcurves. Results: We obtain detailed spin and shape models for the first five targets of our sample: (159) Aemilia, (227) Philosophia, (329) Svea, (478) Tergeste, and (487) Venetia. When compared to stellar occultation chords, our models obtained an absolute size scale, and major topographic features of the shape models were also confirmed. When applied to thermophysical modelling (TPM), they provided a very good fit to the infrared data and allowed the size, albedo, and thermal inertia to be determined. Conclusions: Convex and non-convex shape models provide comparable fits to lightcurves. However, some non-convex models fit notably better to stellar occultation chords and to infrared data in sophisticated thermophysical modelling (TPM). In some cases TPM showed strong

  20. Ground-water solute transport modeling using a three-dimensional scaled model

    International Nuclear Information System (INIS)

    Crider, S.S.

    1987-01-01

    Scaled models are used extensively in current hydraulic research on sediment transport and solute dispersion in free surface flows (rivers, estuaries), but are neglected in current ground-water model research. Thus, an investigation was conducted to test the efficacy of a three-dimensional scaled model of solute transport in ground water. No previous results from such a model have been reported. Experiments performed on uniform scaled models indicated that some historical problems (e.g., construction and scaling difficulties; disproportionate capillary rise in model) were partly overcome by using simple model materials (sand, cement and water), by restricting model application to selective classes of problems, and by physically controlling the effect of the model capillary zone. Results from these tests were compared with mathematical models. Model scaling laws were derived for ground-water solute transport and used to build a three-dimensional scaled model of a ground-water tritium plume in a prototype aquifer on the Savannah River Plant near Aiken, South Carolina. Model results compared favorably with field data and with a numerical model. Scaled models are recommended as a useful additional tool for prediction of ground-water solute transport

  1. Molecular scale modeling of polymer imprint nanolithography.

    Science.gov (United States)

    Chandross, Michael; Grest, Gary S

    2012-01-10

    We present the results of large-scale molecular dynamics simulations of two different nanolithographic processes, step-flash imprint lithography (SFIL), and hot embossing. We insert rigid stamps into an entangled bead-spring polymer melt above the glass transition temperature. After equilibration, the polymer is then hardened in one of two ways, depending on the specific process to be modeled. For SFIL, we cross-link the polymer chains by introducing bonds between neighboring beads. To model hot embossing, we instead cool the melt to below the glass transition temperature. We then study the ability of these methods to retain features by removing the stamps, both with a zero-stress removal process in which stamp atoms are instantaneously deleted from the system as well as a more physical process in which the stamp is pulled from the hardened polymer at fixed velocity. We find that it is necessary to coat the stamp with an antifriction coating to achieve clean removal of the stamp. We further find that a high density of cross-links is necessary for good feature retention in the SFIL process. The hot embossing process results in good feature retention at all length scales studied as long as coated, low surface energy stamps are used.

  2. Development and validation of logistic prognostic models by predefined SAS-macros

    Directory of Open Access Journals (Sweden)

    Ziegler, Christoph

    2006-02-01

    Full Text Available In medical decision making about therapies or diagnostic procedures in the treatment of patients, prognoses of the course or of the magnitude of a disease play a relevant role. Besides the subjective judgement of the clinician, mathematical models can help in providing such prognoses. Such models are mostly multivariate regression models; in the case of a dichotomous outcome, the logistic model is applied as the standard model. In this paper we describe SAS macros for the development of such a model, for the examination of its prognostic performance, and for model validation. The rationale for this approach to prognostic modelling and the description of the macros can only be given briefly in this paper; much more detail is given in the reference. These 14 SAS macros are a tool for setting up the whole process of deriving a prognostic model. In particular, the possibility of validating the model with a standardized software tool offers an opportunity that is generally not exploited in published prognostic models. Therefore, this can help in the development of new models with good prognostic performance for use in medical applications.

  3. Scaling laws for modeling nuclear reactor systems

    International Nuclear Information System (INIS)

    Nahavandi, A.N.; Castellana, F.S.; Moradkhanian, E.N.

    1979-01-01

    Scale models are used to predict the behavior of nuclear reactor systems during normal and abnormal operation as well as under accident conditions. Three types of scaling procedures are considered: time-reducing, time-preserving volumetric, and time-preserving idealized model/prototype. The necessary relations between the model and the full-scale unit are developed for each scaling type. Based on these relationships, it is shown that scaling procedures can lead to distortion in certain areas, which are discussed. It is advised that, depending on the specific unit to be scaled, a suitable procedure be chosen to minimize model-prototype distortion.

  4. A probabilistic assessment of large scale wind power development for long-term energy resource planning

    Science.gov (United States)

    Kennedy, Scott Warren

    A steady decline in the cost of wind turbines and increased experience in their successful operation have brought this technology to the forefront of viable alternatives for large-scale power generation. Methodologies for understanding the costs and benefits of large-scale wind power development, however, are currently limited. In this thesis, a new and widely applicable technique for estimating the social benefit of large-scale wind power production is presented. The social benefit is based upon wind power's energy and capacity services and the avoidance of environmental damages. The approach uses probabilistic modeling techniques to account for the stochastic interaction between wind power availability, electricity demand, and conventional generator dispatch. A method for including the spatial smoothing effect of geographically dispersed wind farms is also introduced. The model has been used to analyze potential offshore wind power development to the south of Long Island, NY. If natural gas combined cycle (NGCC) and integrated gasifier combined cycle (IGCC) are the alternative generation sources, wind power exhibits a negative social benefit due to its high capacity cost and the relatively low emissions of these advanced fossil-fuel technologies. Environmental benefits increase significantly if charges for CO2 emissions are included. Results also reveal a diminishing social benefit as wind power penetration increases. The dependence of wind power benefits on natural gas and coal prices is also discussed. In power systems with a high penetration of wind generated electricity, the intermittent availability of wind power may influence hourly spot prices. A price responsive electricity demand model is introduced that shows a small increase in wind power value when consumers react to hourly spot prices. The effectiveness of this mechanism depends heavily on estimates of the own- and cross-price elasticities of aggregate electricity demand. 

  5. Graviton production in the scaling of a long-cosmic-string network

    International Nuclear Information System (INIS)

    Kleidis, Kostas; Kuiroukidis, Apostolos; Papadopoulos, Demetrios B.; Verdaguer, Enric

    2011-01-01

    In a previous paper [K. Kleidis, D. B. Papadopoulos, E. Verdaguer, and L. Vlahos, Phys. Rev. D 78, 024027 (2008).] we considered the possibility that (within the early-radiation epoch) there has been (also) a short period of a significant presence of cosmic strings. During this radiation-plus-strings stage the Universe matter-energy content can be modeled by a two-component fluid, consisting of radiation (dominant) and a cosmic-string fluid (subdominant). It was found that, during this stage, the cosmological gravitational waves--that had been produced in an earlier (inflationary) epoch--with comoving wave numbers below a critical value (which depends on the physics of the cosmic-string network) were filtered, leading to a distortion in the expected (scale-invariant) cosmological gravitational wave power spectrum. In any case, the cosmological evolution gradually results in the scaling of any long-cosmic-string network and, hence, after a short time interval, the Universe enters into the late-radiation era. However, along the transition from an early-radiation epoch to the late-radiation era through the radiation-plus-strings stage, the time dependence of the cosmological scale factor is modified, something that leads to a discontinuous change of the corresponding scalar curvature, which, in turn, triggers the quantum-mechanical creation of gravitons. In this paper we discuss several aspects of such a process, and, in particular, the observational consequences for the expected gravitational-wave power spectrum.

  6. Looking for a relevant potential evapotranspiration model at the watershed scale

    Science.gov (United States)

    Oudin, L.; Hervieu, F.; Michel, C.; Perrin, C.; Anctil, F.; Andréassian, V.

    2003-04-01

    In this paper, we try to identify the most relevant approach for calculating Potential Evapotranspiration (PET) for use in a daily watershed model, in order to answer the following question: "how can we use commonly available atmospheric parameters to represent the evaporative demand at the catchment scale?". Hydrologists generally see the Penman model as the ideal model, owing to its good agreement with lysimeter measurements and its physically based formulation. However, in real-world engineering situations, where meteorological stations are scarce, hydrologists are often constrained to use other PET formulae with lower data requirements and/or long-term averages of PET values (the rationale being that PET is an inherently conservative variable). We chose to test 28 commonly used PET models coupled with 4 different daily watershed models. For each test, we compare both PET input options: actual data and long-term average data. The comparison is made in terms of streamflow simulation efficiency over a large sample of 308 watersheds. The watersheds are located in France, Australia, and the United States of America and represent varied climates. Strikingly, we find no systematic improvement of watershed model efficiency when using actual PET series instead of long-term averages. This suggests either that watershed models may not conveniently use the climatic information contained in PET values or that the formulae are only awkward indicators of the real PET which watershed models need.
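The "long-term average" PET option compared above simply replaces each day's PET with the multi-year mean for that calendar day (the regime curve). A minimal sketch of how such an input is built, on a synthetic PET series whose seasonal-cycle parameters are invented for illustration:

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(3)
idx = pd.date_range("2000-01-01", "2009-12-31", freq="D")
doy = idx.dayofyear.to_numpy()

# hypothetical daily PET (mm/day): seasonal cycle plus day-to-day weather noise
pet = 3.0 + 2.5 * np.sin(2 * np.pi * (doy - 100) / 365.25) \
      + rng.normal(0, 0.8, len(idx))
series = pd.Series(pet.clip(0), index=idx)

# regime curve: multi-year mean PET for each calendar day
regime = series.groupby(series.index.dayofyear).mean()

# long-term-average input: each day replaced by its day-of-year mean
pet_avg = series.index.dayofyear.map(regime)
```

Feeding `pet_avg` instead of `series` to a daily watershed model reproduces the paper's second input option; the surprising result reported above is that this smoothing rarely degrades streamflow simulation efficiency.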

  7. Modeling Seismic Cycles of Great Megathrust Earthquakes Across the Scales With Focus at Postseismic Phase

    Science.gov (United States)

    Sobolev, Stephan V.; Muldashev, Iskander A.

    2017-12-01

    Subduction is a substantially multiscale process in which stresses are built up by long-term tectonic motions, modified by sudden jerky deformations during earthquakes, and then restored by multiple subsequent relaxation processes. Here we develop a cross-scale thermomechanical model aimed at simulating the subduction process across time scales from one minute to millions of years. The model employs elasticity, nonlinear transient viscous rheology, and rate-and-state friction. It generates spontaneous earthquake sequences and, by using an adaptive time-step algorithm, recreates the deformation process as observed naturally during single and multiple seismic cycles. The model predicts that viscosity in the mantle wedge drops by more than three orders of magnitude during a great earthquake with a magnitude above 9. As a result, the surface velocities just an hour or a day after the earthquake are controlled by viscoelastic relaxation in the several hundred km of mantle landward of the trench and not by afterslip localized at the fault, as is currently believed. Our model replicates the centuries-long seismic cycles exhibited by the greatest earthquakes and is consistent with the postseismic surface displacements recorded after the Great Tohoku Earthquake. We demonstrate that there is no contradiction between the extremely low mechanical coupling at the subduction megathrust in South Chile inferred from long-term geodynamic models and the occurrence of the largest earthquakes, like the Great Chile 1960 Earthquake.

  8. Visualization experimental investigation on long stripe coherent structure in small-scale rectangular channel

    International Nuclear Information System (INIS)

    Su Jiqiang; Sun Zhongning; Fan Guangming; Wang Shiming

    2013-01-01

    The long stripe coherent structure of the turbulent boundary layer in a small-scale vertical rectangular channel was observed using the hydrogen-bubble flow-tracing visualization technique. The statistical properties of the long stripes in the boundary layer of the experimental channel were compared with those in a smooth flat-plate boundary layer. The pitch characteristics were explained by the formation mechanism of the long stripes, and the effect of changes in y+ on the distribution of the long stripes was analyzed. In addition, the frequency characteristics of the long stripes were investigated, and a correlation for the long stripe frequency in such a flow channel was obtained. (authors)

  9. Modeling and simulation with operator scaling

    OpenAIRE

    Cohen, Serge; Meerschaert, Mark M.; Rosiński, Jan

    2010-01-01

    Self-similar processes are useful in modeling diverse phenomena that exhibit scaling properties. Operator scaling allows a different scale factor in each coordinate. This paper develops practical methods for modeling and simulating stochastic processes with operator scaling. A simulation method for operator stable Levy processes is developed, based on a series representation, along with a Gaussian approximation of the small jumps. Several examples are given to illustrate practical application...

  10. State-of-the-Art Report on Multi-scale Modelling of Nuclear Fuels

    International Nuclear Information System (INIS)

    Bartel, T.J.; Dingreville, R.; Littlewood, D.; Tikare, V.; Bertolus, M.; Blanc, V.; Bouineau, V.; Carlot, G.; Desgranges, C.; Dorado, B.; Dumas, J.C.; Freyss, M.; Garcia, P.; Gatt, J.M.; Gueneau, C.; Julien, J.; Maillard, S.; Martin, G.; Masson, R.; Michel, B.; Piron, J.P.; Sabathier, C.; Skorek, R.; Toffolon, C.; Valot, C.; Van Brutzel, L.; Besmann, Theodore M.; Chernatynskiy, A.; Clarno, K.; Gorti, S.B.; Radhakrishnan, B.; Devanathan, R.; Dumont, M.; Maugis, P.; El-Azab, A.; Iglesias, F.C.; Lewis, B.J.; Krack, M.; Yun, Y.; Kurata, M.; Kurosaki, K.; Largenton, R.; Lebensohn, R.A.; Malerba, L.; Oh, J.Y.; Phillpot, S.R.; Tulenko, J. S.; Rachid, J.; Stan, M.; Sundman, B.; Tonks, M.R.; Williamson, R.; Van Uffelen, P.; Welland, M.J.; Valot, Carole; Stan, Marius; Massara, Simone; Tarsi, Reka

    2015-10-01

    The aim of this state-of-the-art report on the multi-scale modelling of nuclear fuels is to document the development of multi-scale modelling approaches for fuels in support of current fuel optimisation programmes and innovative fuel designs. The objectives of the effort are: - assess international multi-scale modelling approaches devoted to nuclear fuels from the atomic to the macroscopic scale in order to share and promote such approaches; - address all types of fuels: both current (mainly oxide fuels) and advanced fuels (such as minor actinide containing oxide, carbide, nitride, or metal fuels); - address key engineering issues associated with each type of fuel; - assess the quality of existing links between the various scales and list needs for strengthening multi-scale modelling approaches; - identify the most relevant experimental data or experimental characterisation techniques that are missing for validation of fuel multi-scale modelling; - promote exchange between the actors involved at various scales; - promote exchange between multi-scale modelling experts and experimentalists; - exchange information with other expert groups of the WPMM. This report is organised as follows: - Part I lays out the different classes of phenomena relevant to nuclear fuel behaviour. Each chapter is further divided into topics relevant for each class of phenomena. - Part II is devoted to a description of the techniques used to obtain material properties necessary for describing the phenomena and their assessment. - Part III covers details relative to the principles and limits behind each modelling/computational technique as a reference for more detailed information. Included within the appropriate sections are critical analyses of the mid- and long-term challenges for the future (i.e., approximations, methods, scales, key experimental data, characterisation techniques missing or to be strengthened).

  11. One-scale supersymmetric inflationary models

    International Nuclear Information System (INIS)

    Bertolami, O.; Ross, G.G.

    1986-01-01

    The reheating phase is studied in a class of supergravity inflationary models involving a two-component hidden sector in which the scale of supersymmetry breaking and the scale generating inflation are related. It is shown that these models have an ''entropy crisis'' in which a large entropy release after nucleosynthesis leads to unacceptably low nuclear abundances. (orig.)

  12. Modelling of rate effects at multiple scales

    DEFF Research Database (Denmark)

    Pedersen, R.R.; Simone, A.; Sluys, L. J.

    2008-01-01

    At the macro- and meso-scales a rate-dependent constitutive model is used in which visco-elasticity is coupled to visco-plasticity and damage. A viscous length scale effect is introduced to control the size of the fracture process zone. By comparison of the widths of the fracture process zone, the length scales in the meso-model and the macro-model can be coupled. In this fashion, a bridging of length scales can be established. A computational analysis of a Split Hopkinson bar test at medium and high impact load is carried out at macro-scale and meso-scale, including information from the micro-scale.

  13. Time-series modeling of long-term weight self-monitoring data.

    Science.gov (United States)

    Helander, Elina; Pavel, Misha; Jimison, Holly; Korhonen, Ilkka

    2015-08-01

    Long-term self-monitoring of weight is beneficial for weight maintenance, especially after weight loss. Connected weight scales accumulate time-series information over the long term and hence enable time-series analysis of the data. Such analysis can reveal individual patterns, provide more sensitive detection of significant weight trends, and enable more accurate and timely prediction of weight outcomes. However, long-term self-weighing data presents several challenges that complicate the analysis; in particular, irregular sampling, missing data, and periodic (e.g. diurnal and weekly) patterns are common. In this study, we apply a time-series modeling approach to daily weight time series from two individuals and describe the information that can be extracted from this kind of data. We study the properties of weight time-series data, missing data and its link to individual behavior, periodic patterns, and weight-series segmentation. Understanding behavior through weight data and giving relevant feedback is expected to lead to positive interventions on health behaviors.
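The preprocessing challenges listed in the record above (irregular sampling, missing days, weekly patterns) can be handled with ordinary time-series tooling before any modeling; a minimal sketch on simulated self-weighing data, with all numbers invented for illustration:

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(2)
days = pd.date_range("2024-01-01", periods=120, freq="D")
trend = 80.0 - 0.02 * np.arange(120)          # slow weight loss, kg/day
weekend = 0.3 * (days.dayofweek >= 5)         # weekly pattern: weekend bump
weight = trend + weekend + rng.normal(0, 0.4, 120)
series = pd.Series(weight, index=days)

# irregular self-weighing: only ~70% of days are actually observed
observed = series.sample(frac=0.7, random_state=0).sort_index()

# regular daily grid; gaps filled by linear interpolation
daily = observed.resample("D").mean().interpolate()

# weekly trend from a linear fit; weekday pattern from grouping
slope = np.polyfit(np.arange(len(daily)), daily.to_numpy(), 1)[0]
weekday_mean = daily.groupby(daily.index.dayofweek).mean()
print(round(slope * 7, 2))                    # negative: ongoing weight loss
```

More sensitive trend detection, as the abstract describes, would replace the plain linear fit with a state-space or segmentation model fitted on this regularized series.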

  14. A hands-on approach for fitting long-term survival models under the GAMLSS framework.

    Science.gov (United States)

    de Castro, Mário; Cancho, Vicente G; Rodrigues, Josemar

    2010-02-01

    In many data sets from clinical studies there are patients insusceptible to the occurrence of the event of interest. Survival models which ignore this fact are generally inadequate. The main goal of this paper is to describe an application of the generalized additive models for location, scale, and shape (GAMLSS) framework to the fitting of long-term survival models. In this work the number of competing causes of the event of interest follows the negative binomial distribution. In this way, some well known models found in the literature are characterized as particular cases of our proposal. The model is conveniently parameterized in terms of the cured fraction, which is then linked to covariates. We explore the use of the gamlss package in R as a powerful tool for inference in long-term survival models. The procedure is illustrated with a numerical example. Copyright 2009 Elsevier Ireland Ltd. All rights reserved.
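When the number of competing causes is negative binomial, the population survival function has a closed form that makes the cured fraction explicit. A minimal sketch follows; the parameterization (dispersion `eta`, mean `theta`) is one common convention assumed here, not taken from the paper.

```python
import math

def pop_survival(F_t, theta, eta):
    """Population survival when the number of competing causes is
    negative binomial with mean theta and dispersion eta, and each
    cause has time-to-event CDF F_t (assumed parameterization)."""
    if eta == 0.0:
        # Poisson limit: the promotion-time cure model
        return math.exp(-theta * F_t)
    return (1.0 + eta * theta * F_t) ** (-1.0 / eta)

def cure_fraction(theta, eta):
    """Long-term survivors: the limit of pop_survival as F_t -> 1."""
    return pop_survival(1.0, theta, eta)
```

Linking `cure_fraction` to covariates, as the abstract describes, would then amount to modelling `theta` (or the fraction directly) via a regression structure.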

  15. Fission time-scale in experiments and in multiple initiation model

    Energy Technology Data Exchange (ETDEWEB)

    Karamian, S. A., E-mail: karamian@nrmail.jinr.ru [Joint Institute for Nuclear Research (Russian Federation)

    2011-12-15

The fission rate of highly excited nuclei is affected by the viscous character of the system motion in deformation coordinates, as has been reported for very heavy nuclei with Z{sub C} > 90. The long time-scale of fission can be described by a model of 'fission by diffusion' that assumes overdamped diabatic motion. The fission-to-spallation ratio at intermediate proton energy could be influenced by the viscosity as well. Within the novel approach of the present work, cross-examination of the fission probability, time-scales, and pre-fission neutron multiplicities results in a consistent interpretation of the whole set of observables. Earlier, different aspects could only be reproduced in partial simulations without careful coordination.

  16. Fractional-order leaky integrate-and-fire model with long-term memory and power law dynamics.

    Science.gov (United States)

    Teka, Wondimu W; Upadhyay, Ranjit Kumar; Mondal, Argha

    2017-09-01

Pyramidal neurons produce different spiking patterns to process information, communicate with each other and transform information. These spiking patterns have complex and multiple time scale dynamics that have been described with the fractional-order leaky integrate-and-fire (FLIF) model. Models with fractional (non-integer) order differentiation that generalize power law dynamics can be used to describe complex temporal voltage dynamics. The main characteristic of the FLIF model is that it depends on all past values of the voltage, which causes long-term memory. The model produces spikes with high interspike interval variability and displays several spiking properties such as upward spike-frequency adaptation and long spike latency in response to a constant stimulus. We show that the subthreshold voltage and the firing rate of the fractional-order model make transitions from exponential to power law dynamics when the fractional order α decreases from 1 to smaller values. The firing rate displays different types of spike timing adaptation caused by changes in initial values. We also show that the voltage-memory trace and the fractional coefficient are the causes of these different types of spiking properties. The voltage-memory trace, which represents the long-term memory, has a feedback regulatory mechanism and affects spiking activity. The results suggest that fractional-order models might be appropriate for understanding multiple time scale neuronal dynamics. Overall, a neuron with fractional dynamics displays history-dependent activity that might be very useful and powerful for effective information processing. Copyright © 2017 Elsevier Ltd. All rights reserved.
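The dependence on all past voltages can be made concrete with a Grünwald-Letnikov discretization of the fractional derivative, where a weighted sum over the voltage history is the "memory trace". The sketch below is illustrative; the parameter values and the leak formulation are assumptions, not the cited paper's settings.

```python
def gl_weights(alpha, n):
    """Grünwald-Letnikov binomial weights c_k for order alpha."""
    c = [1.0]
    for k in range(1, n + 1):
        c.append(c[-1] * (1.0 - (alpha + 1.0) / k))
    return c

def simulate_flif(alpha, I, t_end=100.0, dt=0.1,
                  tau=10.0, v_rest=-70.0, v_th=-50.0, v_reset=-70.0):
    """Fractional leaky integrate-and-fire sketch (times in ms, voltages
    in mV; values illustrative). Returns the voltage trace and spike
    times. The history sum is the voltage-memory trace; alpha = 1
    recovers the ordinary forward-Euler LIF."""
    n_steps = int(t_end / dt)
    c = gl_weights(alpha, n_steps)
    v = [v_rest]
    spikes = []
    for n in range(1, n_steps + 1):
        drift = (-(v[-1] - v_rest) + tau * I) / tau   # leaky dynamics
        memory = sum(c[k] * v[n - k] for k in range(1, n + 1))
        v_new = dt ** alpha * drift - memory
        if v_new >= v_th:
            spikes.append(n * dt)
            v_new = v_reset
        v.append(v_new)
    return v, spikes
```

For alpha < 1 the weights decay as a power law, so distant history keeps contributing, which is exactly the long-term memory the abstract describes.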

  17. On very-large-scale motions (VLSMs) and long-wavelength patterns in turbine wakes

    Science.gov (United States)

    Önder, Asim; Meyers, Johan

    2017-11-01

It is now widely accepted that very-large-scale motions (VLSMs) are a prominent feature of thermally neutral atmospheric boundary layers (ABLs). To date, the influence of these very long active motions on wind-energy harvesting has not been sufficiently explored; this work is an effort in that direction. We perform large-eddy simulation of a turbine row operating under neutral conditions. The ABL data are produced separately in a very long domain of 240 δ. VLSMs are isolated from smaller-scale ABL and wake motions using a spectral cutoff at streamwise wavelength λx = 3.125 δ. Reynolds-averaging of low-pass filtered fields shows that the interaction of VLSMs and turbines produces very-long-wavelength motions in the wake region, which contain about 20% of the Reynolds shear stress and 30% of the streamwise kinetic energy. A conditional analysis of filtered fields further reveals that these long-wavelength wakes are produced by modification of very long velocity streaks in the ABL. In particular, the turbine row acts as a sharp boundary between low- and high-velocity streaks, while the accompanying roller structures remain relatively unaffected. This reorganization creates a two-way flux towards the wake region, which explains why the side-way component dominates the turbulent transport. The authors acknowledge funding from ERC Grant No. 306471.

  18. Drift-Scale THC Seepage Model

    International Nuclear Information System (INIS)

    C.R. Bryan

    2005-01-01

    The purpose of this report (REV04) is to document the thermal-hydrologic-chemical (THC) seepage model, which simulates the composition of waters that could potentially seep into emplacement drifts, and the composition of the gas phase. The THC seepage model is processed and abstracted for use in the total system performance assessment (TSPA) for the license application (LA). This report has been developed in accordance with ''Technical Work Plan for: Near-Field Environment and Transport: Coupled Processes (Mountain-Scale TH/THC/THM, Drift-Scale THC Seepage, and Post-Processing Analysis for THC Seepage) Report Integration'' (BSC 2005 [DIRS 172761]). The technical work plan (TWP) describes planning information pertaining to the technical scope, content, and management of this report. The plan for validation of the models documented in this report is given in Section 2.2.2, ''Model Validation for the DS THC Seepage Model,'' of the TWP. The TWP (Section 3.2.2) identifies Acceptance Criteria 1 to 4 for ''Quantity and Chemistry of Water Contacting Engineered Barriers and Waste Forms'' (NRC 2003 [DIRS 163274]) as being applicable to this report; however, in variance to the TWP, Acceptance Criterion 5 has also been determined to be applicable, and is addressed, along with the other Acceptance Criteria, in Section 4.2 of this report. Also, three FEPS not listed in the TWP (2.2.10.01.0A, 2.2.10.06.0A, and 2.2.11.02.0A) are partially addressed in this report, and have been added to the list of excluded FEPS in Table 6.1-2. This report has been developed in accordance with LP-SIII.10Q-BSC, ''Models''. This report documents the THC seepage model and a derivative used for validation, the Drift Scale Test (DST) THC submodel. The THC seepage model is a drift-scale process model for predicting the composition of gas and water that could enter waste emplacement drifts and the effects of mineral alteration on flow in rocks surrounding drifts. 
The DST THC submodel uses a drift-scale

  19. Multi-scale modeling for sustainable chemical production.

    Science.gov (United States)

    Zhuang, Kai; Bakshi, Bhavik R; Herrgård, Markus J

    2013-09-01

    With recent advances in metabolic engineering, it is now technically possible to produce a wide portfolio of existing petrochemical products from biomass feedstock. In recent years, a number of modeling approaches have been developed to support the engineering and decision-making processes associated with the development and implementation of a sustainable biochemical industry. The temporal and spatial scales of modeling approaches for sustainable chemical production vary greatly, ranging from metabolic models that aid the design of fermentative microbial strains to material and monetary flow models that explore the ecological impacts of all economic activities. Research efforts that attempt to connect the models at different scales have been limited. Here, we review a number of existing modeling approaches and their applications at the scales of metabolism, bioreactor, overall process, chemical industry, economy, and ecosystem. In addition, we propose a multi-scale approach for integrating the existing models into a cohesive framework. The major benefit of this proposed framework is that the design and decision-making at each scale can be informed, guided, and constrained by simulations and predictions at every other scale. In addition, the development of this multi-scale framework would promote cohesive collaborations across multiple traditionally disconnected modeling disciplines to achieve sustainable chemical production. Copyright © 2013 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  20. Multi-scale individual-based model of microbial and bioconversion dynamics in aerobic granular sludge.

    Science.gov (United States)

    Xavier, Joao B; De Kreuk, Merle K; Picioreanu, Cristian; Van Loosdrecht, Mark C M

    2007-09-15

    Aerobic granular sludge is a novel compact biological wastewater treatment technology for integrated removal of COD (chemical oxygen demand), nitrogen, and phosphate charges. We present here a multiscale model of aerobic granular sludge sequencing batch reactors (GSBR) describing the complex dynamics of populations and nutrient removal. The macro scale describes bulk concentrations and effluent composition in six solutes (oxygen, acetate, ammonium, nitrite, nitrate, and phosphate). A finer scale, the scale of one granule (1.1 mm of diameter), describes the two-dimensional spatial arrangement of four bacterial groups--heterotrophs, ammonium oxidizers, nitrite oxidizers, and phosphate accumulating organisms (PAO)--using individual based modeling (IbM) with species-specific kinetic models. The model for PAO includes three internal storage compounds: polyhydroxyalkanoates (PHA), poly phosphate, and glycogen. Simulations of long-term reactor operation show how the microbial population and activity depends on the operating conditions. Short-term dynamics of solute bulk concentrations are also generated with results comparable to experimental data from lab scale reactors. Our results suggest that N-removal in GSBR occurs mostly via alternating nitrification/denitrification rather than simultaneous nitrification/denitrification, supporting an alternative strategy to improve N-removal in this promising wastewater treatment process.

  1. The Grand Challenge of Basin-Scale Groundwater Quality Management Modelling

    Science.gov (United States)

    Fogg, G. E.

    2017-12-01

The last 50+ years of agricultural, urban and industrial land and water use practices have accelerated the degradation of groundwater quality in the upper portions of many major aquifer systems upon which much of the world relies for water supply. In the deepest and most extensive systems (e.g., sedimentary basins) that typically have the largest groundwater production rates and hold fresh groundwaters on decadal to millennial time scales, most of the groundwater is not yet contaminated. Predicting the long-term future groundwater quality in such basins is a grand scientific challenge. Moreover, determining what changes in land and water use practices would avert future, irreversible degradation of these massive freshwater stores is a grand challenge both scientifically and societally. It is naïve to think that the problem can be solved by eliminating or reducing enough of the contaminant sources, for human exploitation of land and water resources will likely always result in some contamination. The key lies in both reducing the contaminant sources and more proactively managing recharge in terms of both quantity and quality, such that the net influx of contaminants is sufficiently moderate and appropriately distributed in space and time to reverse ongoing groundwater quality degradation. Just as sustainable groundwater quantity management is greatly facilitated with groundwater flow management models, sustainable groundwater quality management will require the use of groundwater quality management models. This is a new genre of hydrologic model that does not yet exist, partly because of the lack of modeling tools and of the supporting research needed to model non-reactive as well as reactive transport on large space and time scales. It is essential that the contaminant hydrogeology community, which has heretofore focused almost entirely on point-source, plume-scale problems, direct its efforts toward the development of process-based transport modeling tools and analyses capable

  2. Scale modelling in LMFBR safety

    International Nuclear Information System (INIS)

    Cagliostro, D.J.; Florence, A.L.; Abrahamson, G.R.

    1979-01-01

This paper reviews scale modelling techniques used in studying the structural response of LMFBR vessels to HCDA loads. The geometric, material, and dynamic similarity parameters are presented and identified using the methods of dimensional analysis. Complete similarity of the structural response requires that each similarity parameter be the same in the model as in the prototype. The paper then focuses on the methods, limitations, and problems of duplicating these parameters in scale models and mentions an experimental technique for verifying the scaling. Geometric similarity requires that all linear dimensions of the prototype be reduced in proportion to the ratio of a characteristic dimension of the model to that of the prototype. The overall size of the model depends on the structural detail required, the size of instrumentation, and the costs of machining and assembling the model. Material similarity requires that the ratios of the density, bulk modulus, and constitutive relations for the structure and fluid be the same in the model as in the prototype. A practical choice of material for the model is one with the same density and the same stress-strain relationship, at the operating temperature, as the prototype material. Ni-200 and water are good simulant materials for the 304 SS vessel and the liquid sodium coolant, respectively. Scaling of the strain-rate sensitivity and fracture toughness of materials is very difficult, but may not be required if these effects do not influence the structural response of the reactor components. Dynamic similarity requires that the characteristic pressure of a simulant source equal that of the prototype HCDA for geometrically similar volume changes. The energy source is calibrated in the geometry and environment in which it will be used, to assure that heat transfer between high-temperature loading sources and the coolant simulant, and non-equilibrium effects in two-phase sources, are accounted for. For the geometry and flow conditions of interest, the
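When model and prototype share the same simulant density and stress-strain behaviour, as the abstract recommends, the similarity requirements collapse to the standard replica-scaling rules: stresses, pressures and velocities carry over one-to-one, lengths and times scale with the geometric ratio, and energies with its cube. The sketch below states those standard rules as an assumption, not as content of the paper.

```python
def replica_scale(prototype, lam):
    """Replica scaling with identical model/prototype material:
    lam = (model characteristic length) / (prototype length).
    Pressures, stresses and velocities are unchanged; lengths and
    times scale with lam; energies with lam**3."""
    return {
        "length": prototype["length"] * lam,
        "time": prototype["time"] * lam,
        "pressure": prototype["pressure"],   # unchanged
        "velocity": prototype["velocity"],   # unchanged
        "energy": prototype["energy"] * lam ** 3,
    }
```

For example, a 1/30-scale vessel model requires an energy source 27,000 times smaller than the prototype HCDA, at the same characteristic pressure.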

  3. Defect evolution in cosmology and condensed matter quantitative analysis with the velocity-dependent one-scale model

    CERN Document Server

    Martins, C J A P

    2016-01-01

    This book sheds new light on topological defects in widely differing systems, using the Velocity-Dependent One-Scale Model to better understand their evolution. Topological defects – cosmic strings, monopoles, domain walls or others - necessarily form at cosmological (and condensed matter) phase transitions. If they are stable and long-lived they will be fossil relics of higher-energy physics. Understanding their behaviour and consequences is a key part of any serious attempt to understand the universe, and this requires modelling their evolution. The velocity-dependent one-scale model is the only fully quantitative model of defect network evolution, and the canonical model in the field. This book provides a review of the model, explaining its physical content and describing its broad range of applicability.

  4. Long term modelling in a second rank world: application to climate policies

    International Nuclear Information System (INIS)

    Crassous, R.

    2008-11-01

This research aims to identify the reasons for dissatisfaction with existing climate models, to design an innovative modelling architecture that responds to these dissatisfactions, and to propose pathways for climate policy assessment. The author gives a critical assessment of modelling activity in the field of climate policies, and notes that the large number and scattering of existing long-term scenarios hide a weak control of uncertainties and of the internal consistency of the produced paths, as well as the very small number of modelling paradigms. After an in-depth analysis of modelling practices, the author presents the IMACLIM-R modelling architecture, which is defined on a world scale, comprises 12 regions and 12 sectors, and allows the simulation of evolutions up to 2050, and even 2100, with a one-year time step. The author describes a scenario without any climate policy, and highlights how the innovations in IMACLIM-R allow economic trajectories to be reassessed so as to stabilise greenhouse gas concentrations in the long term. He outlines possibilities for adjusting and refining climate policies that would robustly limit the risk of transition costs.

  5. A neuro-fuzzy model to predict the inflow to the guardialfiera multipurpose dam (Southern Italy at medium-long time scales

    Directory of Open Access Journals (Sweden)

    L.F. Termite

    2013-09-01

Intelligent computing tools based on fuzzy logic and artificial neural networks have been successfully applied to various problems with superior performance. A new approach combining these two powerful tools, known as neuro-fuzzy systems, has increasingly attracted scientists in different fields, but few studies have evaluated their performance in hydrologic modeling. The available applications are mostly rainfall-runoff models at very short time scales (hourly, daily, or event-based) for real-time flood forecasting, with precipitation and past runoff (i.e., inflow rate) as inputs, and in a few cases models for predicting monthly inflows to a dam from past inflows. This study presents an application of an Adaptive Network-based Fuzzy Inference System (ANFIS), a neuro-fuzzy computational technique, to forecasting the inflow to the Guardialfiera multipurpose dam (CB, Italy) at weekly and monthly time scales. The monthly forecast has been performed both directly at the monthly scale (monthly input data) and by iterating the weekly model. Twenty-nine years of rainfall, temperature, reservoir water level, and releases to the different uses were available. In all simulations meteorological input data were used, and in some cases also the past inflows. The performance of the ANFIS models was assessed with different efficiency and correlation indices. The results at the weekly time scale can be considered good, with a Nash-Sutcliffe efficiency index E = 0.724 in the testing phase. At the monthly time scale, satisfactory results were obtained by iterating the weekly model to predict the incoming volume up to 3 weeks ahead (E = 0.574), while the direct simulation of monthly inflows gave barely satisfactory results (E = 0.502). The greatest difficulties encountered in the analysis were related to the reliability of the available data. The results of this study demonstrate the promising
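The Nash-Sutcliffe efficiency index E reported above has a simple definition: one minus the ratio of the model's squared error to the variance of the observations, so E = 1 is a perfect fit and E = 0 is no better than predicting the mean. A minimal implementation:

```python
def nash_sutcliffe(obs, sim):
    """Nash-Sutcliffe efficiency:
    E = 1 - sum((o - s)^2) / sum((o - mean(o))^2)."""
    mean_o = sum(obs) / len(obs)
    num = sum((o - s) ** 2 for o, s in zip(obs, sim))
    den = sum((o - mean_o) ** 2 for o in obs)
    return 1.0 - num / den
```

By this metric, E can be negative when the model performs worse than the mean of the observed inflows, which is why values such as 0.502 count as only barely satisfactory.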

  6. Multi-scale dynamic modeling of atmospheric pollution in urban environment

    International Nuclear Information System (INIS)

    Thouron, Laetitia

    2017-01-01

Urban air pollution has been identified as an important cause of health impacts, including premature deaths. In particular, ambient concentrations of gaseous pollutants such as nitrogen dioxide (NO2) and of particulate matter (PM10 and PM2.5) are regulated, which means that emission reduction strategies must be put in place to lower these concentrations where the corresponding regulations are not met. Air pollution can also contribute to the contamination of other media, for example through the contribution of atmospheric deposition to runoff contamination. The multifactorial and multi-scale aspects of urban pollution make its sources difficult to identify. Indeed, the urban environment is a heterogeneous space characterized by complex architectural structures (old buildings alongside more modern ones; residential, commercial, and industrial zones; roads; etc.) and non-uniform atmospheric pollutant emissions, so the population's exposure to pollution varies in space and time. Modeling urban air pollution aims to understand the origin of pollutants, their spatial extent, and their concentration and deposition levels. Some pollutants have long residence times and can stay several weeks in the atmosphere (PM2.5) and therefore be transported over long distances, while others are more local (NOx in the vicinity of traffic). The spatial distribution of a pollutant will therefore depend on several factors, and in particular on the surfaces encountered. Air quality depends strongly on weather, buildings (street canyons), and emissions. The aim of this thesis is to address some of these aspects by modeling: (1) urban background pollution with a chemistry-transport model (Polyphemus/POLAIR3D), which makes it possible to estimate atmospheric pollutants by type of urban surface (roofs, walls, and roadways); (2) street-level pollution by explicitly integrating the effects of buildings in three dimensions with a multi-scale model of

  7. On scaling of human body models

    Directory of Open Access Journals (Sweden)

    Hynčík L.

    2007-10-01

The human body is not unique: from the point of view of anthropometry and mechanical characteristics everyone is different, which means that dividing the human population into categories like 5th-, 50th-, and 95th-percentile is not sufficient from the application point of view. On the other hand, developing a particular human body model for each of us is not possible. That is why scaling and morphing algorithms have started to be developed. The current work describes the development of a tool for scaling human models. The idea is to have one (or a couple of) standard model(s) as a base and to create other models from these basic models. One has to choose adequate anthropometric and biomechanical parameters that describe the given group of humans among which to scale and morph.

  8. Multi-Scale Analysis for Characterizing Near-Field Constituent Concentrations in the Context of a Macro-Scale Semi-Lagrangian Numerical Model

    Science.gov (United States)

    Yearsley, J. R.

    2017-12-01

The semi-Lagrangian numerical scheme employed by RBM, a model for simulating time-dependent, one-dimensional water-quality constituents in advection-dominated rivers, is highly scalable in both time and space. Although the model has been used at length scales of 150 meters and time scales of three hours, the majority of applications have been at length scales of 1/16th degree latitude/longitude (about 5 km) or greater and time scales of one day. Application of the method at these scales has proven successful for characterizing the impacts of climate change on water temperatures in global rivers and the vulnerability of thermoelectric power plants to changes in cooling-water temperatures in large river systems. However, local effects can be very important in terms of ecosystem impacts, particularly in the case of developing mixing zones for wastewater discharges with pollutant loadings limited by regulations imposed under the Federal Water Pollution Control Act (FWPCA). Mixing-zone analyses have usually been decoupled from large-scale watershed influences by developing scenarios that represent critical external conditions of streamflow and weather. By taking advantage of the particle-tracking character of the numerical scheme, RBM can provide results at any point in time within the model domain. We develop a proof of concept for locations in the river network where local impacts such as mixing zones may be important. Simulated results from the semi-Lagrangian scheme are treated as input to a finite-difference model of the two-dimensional diffusion equation for water-quality constituents such as water temperature or toxic substances. Simulations provide time-dependent, two-dimensional constituent concentrations in the near field in response to long-term, basin-wide processes. These results could provide decision support to water-quality managers evaluating mixing-zone characteristics.
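The near-field component described above rests on the 2D diffusion equation; a minimal explicit finite-difference sketch is given below. It is illustrative of the numerical building block only, not the RBM coupling itself, and the zero-gradient boundary choice is an assumption.

```python
def diffuse_2d(c, D, dx, dt, steps):
    """Explicit finite-difference integration of dc/dt = D * laplacian(c)
    on a uniform grid with zero-gradient (reflective) boundaries.
    Stability requires D * dt / dx**2 <= 0.25."""
    ny, nx = len(c), len(c[0])
    r = D * dt / dx ** 2
    for _ in range(steps):
        new = [row[:] for row in c]
        for i in range(ny):
            for j in range(nx):
                up = c[i - 1][j] if i > 0 else c[i][j]
                dn = c[i + 1][j] if i < ny - 1 else c[i][j]
                lf = c[i][j - 1] if j > 0 else c[i][j]
                rt = c[i][j + 1] if j < nx - 1 else c[i][j]
                new[i][j] = c[i][j] + r * (up + dn + lf + rt - 4 * c[i][j])
        c = new
    return c
```

In the proof-of-concept setting, the semi-Lagrangian model would supply the boundary and source concentrations, and a scheme like this would resolve the cross-channel spreading within the mixing zone.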

  9. Manufacturing test of large scale hollow capsule and long length cladding in the large scale oxide dispersion strengthened (ODS) martensitic steel

    International Nuclear Information System (INIS)

    Narita, Takeshi; Ukai, Shigeharu; Kaito, Takeji; Ohtsuka, Satoshi; Fujiwara, Masayuki

    2004-04-01

Mass-production capability of oxide dispersion strengthened (ODS) martensitic steel cladding (9Cr) has been evaluated in Phase II of the Feasibility Studies on Commercialized Fast Reactor Cycle System. The cost of manufacturing the mother tube (raw material powder production, mechanical alloying (MA) by ball mill, canning, hot extrusion, and machining) is a dominant factor in the total cost of manufacturing ODS ferritic steel cladding. In this study, a large-scale 9Cr-ODS martensitic steel mother tube, made with a large-scale hollow capsule, and long-length claddings were manufactured, and the applicability of these processes was evaluated. The following results were obtained. (1) A large-scale mother tube with dimensions of 32 mm OD, 21 mm ID, and 2 m length was successfully manufactured using a large-scale hollow capsule; this mother tube has a high degree of dimensional accuracy. (2) The chemical composition and microstructure of the manufactured mother tube are similar to those of the existing mother tube manufactured with a small-scale can, and no remarkable difference between the bottom and top ends of the manufactured mother tube was observed. (3) Long-length cladding was successfully manufactured from the large-scale mother tube made using a large-scale hollow capsule. (4) For reducing the manufacturing cost of ODS steel claddings, manufacturing mother tubes using large-scale hollow capsules is promising. (author)

  10. Congestion management in power systems. Long-term modeling framework and large-scale application

    Energy Technology Data Exchange (ETDEWEB)

    Bertsch, Joachim; Hagspiel, Simeon; Just, Lisa

    2015-06-15

    In liberalized power systems, generation and transmission services are unbundled, but remain tightly interlinked. Congestion management in the transmission network is of crucial importance for the efficiency of these inter-linkages. Different regulatory designs have been suggested, analyzed and followed, such as uniform zonal pricing with redispatch or nodal pricing. However, the literature has either focused on the short-term efficiency of congestion management or specific issues of timing investments. In contrast, this paper presents a generalized and flexible economic modeling framework based on a decomposed inter-temporal equilibrium model including generation, transmission, as well as their inter-linkages. Short and long-term effects of different congestion management designs can hence be analyzed. Specifically, we are able to identify and isolate implicit frictions and sources of inefficiencies in the different regulatory designs, and to provide a comparative analysis including a benchmark against a first-best welfare-optimal result. To demonstrate the applicability of our framework, we calibrate and numerically solve our model for a detailed representation of the Central Western European (CWE) region, consisting of 70 nodes and 174 power lines. Analyzing six different congestion management designs until 2030, we show that compared to the first-best benchmark, i.e., nodal pricing, inefficiencies of up to 4.6% arise. Inefficiencies are mainly driven by the approach of determining cross-border capacities as well as the coordination of transmission system operators' activities.

  11. Drift-Scale THC Seepage Model

    Energy Technology Data Exchange (ETDEWEB)

    C.R. Bryan

    2005-02-17

    The purpose of this report (REV04) is to document the thermal-hydrologic-chemical (THC) seepage model, which simulates the composition of waters that could potentially seep into emplacement drifts, and the composition of the gas phase. The THC seepage model is processed and abstracted for use in the total system performance assessment (TSPA) for the license application (LA). This report has been developed in accordance with ''Technical Work Plan for: Near-Field Environment and Transport: Coupled Processes (Mountain-Scale TH/THC/THM, Drift-Scale THC Seepage, and Post-Processing Analysis for THC Seepage) Report Integration'' (BSC 2005 [DIRS 172761]). The technical work plan (TWP) describes planning information pertaining to the technical scope, content, and management of this report. The plan for validation of the models documented in this report is given in Section 2.2.2, ''Model Validation for the DS THC Seepage Model,'' of the TWP. The TWP (Section 3.2.2) identifies Acceptance Criteria 1 to 4 for ''Quantity and Chemistry of Water Contacting Engineered Barriers and Waste Forms'' (NRC 2003 [DIRS 163274]) as being applicable to this report; however, in variance to the TWP, Acceptance Criterion 5 has also been determined to be applicable, and is addressed, along with the other Acceptance Criteria, in Section 4.2 of this report. Also, three FEPS not listed in the TWP (2.2.10.01.0A, 2.2.10.06.0A, and 2.2.11.02.0A) are partially addressed in this report, and have been added to the list of excluded FEPS in Table 6.1-2. This report has been developed in accordance with LP-SIII.10Q-BSC, ''Models''. This report documents the THC seepage model and a derivative used for validation, the Drift Scale Test (DST) THC submodel. The THC seepage model is a drift-scale process model for predicting the composition of gas and water that could enter waste emplacement drifts and the effects of mineral

  12. Islands Climatology at Local Scale. Downscaling with CIELO model

    Science.gov (United States)

    Azevedo, Eduardo; Reis, Francisco; Tomé, Ricardo; Rodrigues, Conceição

    2016-04-01

Islands with horizontal scales of the order of tens of km, as is the case for the Atlantic islands of Macaronesia, are subscale orographic features for global climate models (GCMs), since the horizontal resolution of these models is too coarse to give a detailed representation of the islands' topography. Even regional climate models (RCMs) reveal limitations when forced to reproduce the climate of small islands, mainly because they flatten and lower the elevation of the islands, reducing the models' capacity to reproduce important local mechanisms that lead to very deep local climate differentiation. Important local thermodynamic mechanisms, such as the Foehn effect or the influence of topography on the radiation balance, have a prominent role in the spatial differentiation of climate. Advective transport of air, and the adiabatic cooling it induces over orography, transform the state parameters of the air and shape the spatial configuration of the pressure, temperature, and humidity fields. The same mechanism is at the origin of the orographic cloud cover which, besides its direct role as a water source through the reinforcement of precipitation, acts as a filter for direct solar radiation and as a source of long-wave radiation that affects the local energy balance. Also, the saturation (or near-saturation) conditions that these clouds provide constitute a barrier to water-vapour diffusion in the mechanisms of evapotranspiration. Topographic factors like slope, aspect, and orographic masking also have significant importance in the local energy balance. Therefore, simulating the local-scale climate (past, present, and future) in these archipelagos requires downscaling techniques to locally adjust outputs obtained at larger scales. This presentation will discuss and analyse the evolution of the CIELO model (acronym for Clima Insular à Escala LOcal), a statistical/dynamical technique developed at the University of the Azores

  13. The improved long-term prognoses of surface waters contamination after Chernobyl accident for the territories of Bryansk Region of Russia

    International Nuclear Information System (INIS)

    Novitsky, M.A.

    2004-01-01

    The accuracy of the information on the density of contamination of Russian territory by long-lived radionuclides was improved repeatedly after the Chernobyl accident, and much new information appeared on the processes of radionuclide migration in the surface soil layer. Using the modified complex of models with an updated set of parameters, prognostic calculations of the annual concentrations of radionuclides in the rivers and lakes of the south-western areas of the Bryansk region were performed for up to twenty years ahead. These calculations show that the annual concentrations of Chernobyl-derived radionuclides in the surface waters of the south-western territories of the Bryansk region can only be expected to decrease with time relative to current levels. Yet special attention should be given to reservoirs with weak or no outflow, in which contamination levels are somewhat higher than the tolerable levels. On this basis, guidelines on water use in the areas under consideration and on the implementation of further operations have been prepared. (author)

  14. Generalized thick strip modelling for vortex-induced vibration of long flexible cylinders

    International Nuclear Information System (INIS)

    Bao, Y.; Palacios, R.; Graham, M.; Sherwin, S.

    2016-01-01

    We propose a generalized strip modelling method that is computationally efficient for the VIV prediction of long flexible cylinders in three-dimensional incompressible flow. In order to overcome the shortcomings of conventional strip-theory-based 2D models, the fluid domain is divided into “thick” strips, which are sufficiently thick to locally resolve the small-scale turbulence effects and three-dimensionality of the flow around the cylinder. An attractive feature of the model is that we independently construct a three-dimensional scale-resolving model for individual strips, which have a local spanwise extent along the cylinder's axial direction and are only coupled through the structural model of the cylinder. Therefore, this approach is able to cover the full spectrum from fully resolved 3D modelling to 2D strip theory; in the limit, a single “thick” strip would fill the full 3D domain. The connection between the strips is achieved through the solution of a tensioned beam equation, which is used to represent the dynamics of the flexible body. A parallel Fourier spectral/hp element method is employed to solve the 3D flow dynamics in the strip domains, and the VIV response prediction is then obtained through the strip–structure interactions. Numerical tests on both laminar and turbulent flows, as well as a comparison against fully resolved DNS, are presented to demonstrate the applicability of this approach.
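The structural coupling described above, in which the strips interact only through a tensioned beam representing the flexible cylinder, can be sketched as a simple explicit finite-difference update. This is an illustrative sketch only; the discretisation and all parameter values are assumptions, not those of the paper.

```python
import numpy as np

def step_tensioned_beam(y, y_prev, f, dx, dt, m=1.0, T=10.0, EI=0.1, c=0.05):
    """Advance the tensioned-beam equation
        m*y_tt + c*y_t = T*y_xx - EI*y_xxxx + f(x, t)
    one explicit time step (central differences, pinned ends).

    y, y_prev : displacement at the current and previous steps
    f         : hydrodynamic load per unit length, one value per strip
    """
    y_xx = np.zeros_like(y)
    y_xx[1:-1] = (y[2:] - 2.0*y[1:-1] + y[:-2]) / dx**2
    y_xxxx = np.zeros_like(y)
    y_xxxx[2:-2] = (y[4:] - 4.0*y[3:-1] + 6.0*y[2:-2]
                    - 4.0*y[1:-3] + y[:-4]) / dx**4
    accel = (T*y_xx - EI*y_xxxx + f - c*(y - y_prev)/dt) / m
    y_new = 2.0*y - y_prev + dt**2 * accel
    y_new[0] = y_new[-1] = 0.0   # pinned ends
    return y_new
```

Each "thick" strip would supply its local sectional load f[i]; the updated displacement is fed back to the strips as a moving-boundary condition, which is the only coupling between them.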

  15. Generalized thick strip modelling for vortex-induced vibration of long flexible cylinders

    Energy Technology Data Exchange (ETDEWEB)

    Bao, Y., E-mail: ybao@sjtu.edu.cn [Department of Civil Engineering, School of Naval Architecture, Ocean and Civil Engineering, Shanghai Jiaotong University, No. 800 Dongchuan Road, Shanghai (China); Department of Aeronautics, Imperial College London, South Kensington Campus, London (United Kingdom); Palacios, R., E-mail: r.palacios@imperial.ac.uk [Department of Aeronautics, Imperial College London, South Kensington Campus, London (United Kingdom); Graham, M., E-mail: m.graham@imperial.ac.uk [Department of Aeronautics, Imperial College London, South Kensington Campus, London (United Kingdom); Sherwin, S., E-mail: s.sherwin@imperial.ac.uk [Department of Aeronautics, Imperial College London, South Kensington Campus, London (United Kingdom)

    2016-09-15

    We propose a generalized strip modelling method that is computationally efficient for the VIV prediction of long flexible cylinders in three-dimensional incompressible flow. In order to overcome the shortcomings of conventional strip-theory-based 2D models, the fluid domain is divided into “thick” strips, which are sufficiently thick to locally resolve the small-scale turbulence effects and three-dimensionality of the flow around the cylinder. An attractive feature of the model is that we independently construct a three-dimensional scale-resolving model for individual strips, which have a local spanwise extent along the cylinder's axial direction and are only coupled through the structural model of the cylinder. Therefore, this approach is able to cover the full spectrum from fully resolved 3D modelling to 2D strip theory; in the limit, a single “thick” strip would fill the full 3D domain. The connection between the strips is achieved through the solution of a tensioned beam equation, which is used to represent the dynamics of the flexible body. A parallel Fourier spectral/hp element method is employed to solve the 3D flow dynamics in the strip domains, and the VIV response prediction is then obtained through the strip–structure interactions. Numerical tests on both laminar and turbulent flows, as well as a comparison against fully resolved DNS, are presented to demonstrate the applicability of this approach.

  16. Observations and 3D hydrodynamics-based modeling of decadal-scale shoreline change along the Outer Banks, North Carolina

    Science.gov (United States)

    Safak, Ilgar; List, Jeffrey; Warner, John C.; Kumar, Nirnimesh

    2017-01-01

    Long-term decadal-scale shoreline change is an important parameter for quantifying the stability of coastal systems. Decadal-scale coastal change is controlled both by processes that occur on short time scales (such as storms) and by long-term processes (such as prevailing waves). The ability to predict decadal-scale shoreline change is not well established, and the fundamental physical processes controlling this change are not well understood. Here we investigate the processes that create large-scale, long-term shoreline change along the Outer Banks of North Carolina, an uninterrupted 60 km stretch of coastline, using both observations and a numerical modeling approach. Shoreline positions for a 24-yr period were derived from aerial photographs of the Outer Banks. Analysis of the shoreline position data showed that, although variable, the shoreline eroded an average of 1.5 m/yr throughout this period. The modeling approach uses a three-dimensional hydrodynamics-based numerical model coupled to a spectral wave model and simulates the full 24-yr period on a spatial grid with a short (second-scale) time step to compute the sediment transport patterns. The observations and the model results show similar magnitudes (O(10^5 m^3/yr)) and patterns of alongshore sediment fluxes. Both the observed and the modeled alongshore sediment transport rates change more rapidly in the northern part of our study section, due to the continuously curving coastline and possible effects of alongshore variations in shelf bathymetry. The southern section, with its relatively uniform orientation, shows less rapid changes in transport rate. Alongshore gradients of the modeled sediment fluxes are translated into shoreline change rates that agree with observations in some locations but differ in others. Differences between observations and model results are potentially influenced by geologic framework processes not included in the model. Both the observations and the model results show higher rates of
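The step of translating alongshore flux gradients into shoreline change rates can be illustrated with a one-line sediment conservation model. This is a generic sketch, not the study's 3D model; the closure depth of 8 m is an assumed value.

```python
import numpy as np

def shoreline_change_rate(Q, dx, closure_depth=8.0):
    """Shoreline change rate [m/yr] from the alongshore sediment flux
    Q [m^3/yr] via one-line sediment conservation over an active
    profile of height `closure_depth` [m]:
        d(eta)/dt = -(1 / D) * dQ/dx
    A positive alongshore gradient in Q (flux divergence) erodes the
    shoreline; a negative gradient (convergence) accretes it."""
    dQdx = np.gradient(Q, dx)        # m^3/yr per m of coastline
    return -dQdx / closure_depth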

  17. The swan song in context: long-time-scale X-ray variability of NGC 4051

    Science.gov (United States)

    Uttley, P.; McHardy, I. M.; Papadakis, I. E.; Guainazzi, M.; Fruscione, A.

    1999-07-01

    On 1998 May 9-11, the highly variable, low-luminosity Seyfert 1 galaxy NGC 4051 was observed in an unusual low-flux state by BeppoSAX, RXTE and EUVE. We present fits of the 4-15 keV RXTE spectrum and the BeppoSAX MECS spectrum obtained during this observation, which are consistent with the interpretation that the source had switched off, leaving only the spectrum of pure reflection from distant cold matter. We place this result in context by showing the X-ray light curve of NGC 4051 obtained by our RXTE monitoring campaign over the past two and a half years, which shows that the low state lasted for ~150 d before the May observations (implying that the reflecting material is >10^17 cm from the continuum source) and forms part of a light curve showing distinct variations in long-term average flux over time-scales of months or longer. We show that the long-time-scale component of the X-ray variability is intrinsic to the primary continuum and is probably distinct from the variability at shorter time-scales. The long-time-scale component of the variability may be associated with variations in the accretion flow of matter on to the central black hole. As the source approaches the low state, the variability process becomes non-linear. NGC 4051 may represent a microcosm of all X-ray variability in radio-quiet active galactic nuclei (AGNs), displaying in a few years a variety of flux states and variability properties which more luminous AGNs may pass through on time-scales of decades to thousands of years.

  18. Downscaling modelling system for multi-scale air quality forecasting

    Science.gov (United States)

    Nuterman, R.; Baklanov, A.; Mahura, A.; Amstrup, B.; Weismann, J.

    2010-09-01

    Urban modelling for real meteorological situations generally considers only a small part of the urban area in a micro-meteorological model, while urban heterogeneities outside the modelling domain still affect micro-scale processes. It is therefore important to build a chain of models of different scales, with higher-resolution models nested into larger-scale, lower-resolution models. Usually, the up-scaled city- or meso-scale models rely on parameterisations of urban effects or statistical descriptions of the urban morphology, whereas the micro-scale (street canyon) models are obstacle-resolving and consider the detailed geometry of the buildings and the urban canopy. The developed system consists of meso-, urban- and street-scale models. Its first component is the Numerical Weather Prediction model (HIgh Resolution Limited Area Model) combined with an Atmospheric Chemistry Transport model (the Comprehensive Air quality Model with extensions). Several levels of urban parameterisation are considered, chosen according to the selected scales and resolutions: for the regional scale, the urban parameterisation is based on the roughness and flux corrections approach; for the urban scale, on a building-effects parameterisation. Modern methods of computational fluid dynamics make it possible to solve environmental problems connected with the atmospheric transport of pollutants within the urban canopy in the presence of penetrable (vegetation) and impenetrable (buildings) obstacles. For local- and micro-scale nesting, the Micro-scale Model for Urban Environment is applied. This is a comprehensive obstacle-resolving urban wind-flow and dispersion model based on the Reynolds-averaged Navier-Stokes approach and several turbulence closures, i.e. the k-ε linear eddy-viscosity model, the k-ε non-linear eddy-viscosity model and a Reynolds stress model. Boundary and initial conditions for the micro-scale model are taken from the up-scaled models with a corresponding mass-conserving interpolation. For the boundaries a
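As a minimal illustration of the simplest closure listed above, the linear k-ε model derives an eddy viscosity from two transported quantities, the turbulent kinetic energy k and its dissipation rate ε. The helper below is a generic textbook sketch, not code from the described system.

```python
def eddy_viscosity_k_epsilon(k, eps, C_mu=0.09):
    """Eddy (turbulent) viscosity of the standard linear k-epsilon
    closure: nu_t = C_mu * k**2 / eps, with the usual model constant
    C_mu = 0.09. Inputs: turbulent kinetic energy k [m^2/s^2] and its
    dissipation rate eps [m^2/s^3]; returns nu_t [m^2/s]."""
    return C_mu * k**2 / eps
```

The non-linear eddy-viscosity and Reynolds stress closures mentioned in the abstract replace this single scalar viscosity with anisotropic stress-strain relations.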

  19. High resolution remote sensing for reducing uncertainties in urban forest carbon offset life cycle assessments.

    Science.gov (United States)

    Tigges, Jan; Lakes, Tobia

    2017-10-04

    Urban forests reduce greenhouse gas emissions by storing and sequestering considerable amounts of carbon. However, few studies have considered the local scale of urban forests to effectively evaluate their potential long-term carbon offset. The lack of precise, consistent and up-to-date forest details is challenging for long-term prognoses. Therefore, this review aims to identify uncertainties in urban forest carbon offset assessment and discuss the extent to which such uncertainties can be reduced by recent progress in high resolution remote sensing. We do this by performing an extensive literature review and a case study combining remote sensing and life cycle assessment of urban forest carbon offset in Berlin, Germany. Recent progress in high resolution remote sensing and methods is adequate for delivering more precise details on the urban tree canopy, individual tree metrics, species, and age structures compared to conventional land use/cover class approaches. These area-wide consistent details can update life cycle inventories for more precise future prognoses. Additional improvements in classification accuracy can be achieved by a higher number of features derived from remote sensing data of increasing resolution, but first studies on this subject indicated that a smart selection of features already provides sufficient data, avoids redundancies and enables more efficient data processing. Our case study from Berlin could use remotely sensed individual tree species as a consistent inventory for a life cycle assessment. However, a lack of growth, mortality and planting data forced us to make assumptions, thereby creating uncertainty in the long-term prognoses. Regarding temporal changes and reliable long-term estimates, more attention is required to detect changes of gradual growth, pruning and abrupt changes in tree planting and mortality. As such, precise long-term urban ecological monitoring using high resolution remote sensing should be intensified

  20. Multi-scale Modeling of Arctic Clouds

    Science.gov (United States)

    Hillman, B. R.; Roesler, E. L.; Dexheimer, D.

    2017-12-01

    The presence and properties of clouds are critically important to the radiative budget in the Arctic, but clouds are notoriously difficult to represent in global climate models (GCMs). The challenge stems partly from a disconnect between the scales at which these models are formulated and the scale of the physical processes important to the formation of clouds (e.g., convection and turbulence). Because of this, these processes are parameterized in large-scale models. Over the past decades, new approaches have been explored in which a cloud system resolving model (CSRM), or in the extreme a large eddy simulation (LES), is embedded into each grid cell of a traditional GCM to replace the cloud and convective parameterizations and explicitly simulate more of these important processes. This approach is attractive in that it allows for more explicit simulation of small-scale processes while also allowing for interaction between the small- and large-scale processes. The goal of this study is to quantify the performance of this framework in simulating Arctic clouds relative to a traditional global model, and to explore the limitations of such a framework using coordinated high-resolution (eddy-resolving) simulations. Simulations from the global model are compared with satellite retrievals of cloud fraction partitioned by cloud phase from CALIPSO, and limited-area LES simulations are compared with ground-based and tethered-balloon measurements from the ARM Barrow and Oliktok Point measurement facilities.

  1. Cross-scale modelling of the climate-change mitigation potential of biochar systems: Global implications of nano-scale processes

    Science.gov (United States)

    Woolf, Dominic; Lehmann, Johannes

    2014-05-01

    production, land use, thermochemical conversion (to both biochar and energy products), climate, economics, and also the interactions between these components. Early efforts to model the life-cycle impacts of biochar systems have typically used simple empirical estimates of the strength of various feedback mechanisms, such as the impact of biochar on crop-growth, soil GHG fluxes, and native soil organic carbon. However, an environmental management perspective demands consideration of impacts over a longer time-scale and in broader agroecological situations than can be reliably extrapolated from simple empirical relationships derived from trials and experiments of inevitably limited scope and duration. Therefore, reliable quantification of long-term and large-scale impacts demands an understanding of the fundamental underlying mechanisms. Here, a systems-modelling approach that incorporates mechanistic assumptions will be described, and used to examine how uncertainties in the biogeochemical processes which drive the biochar-plant-soil interactions (particularly those responsible for priming, crop-growth and soil GHG emissions) translate into sensitivities of large scale and long-term impacts. This approach elucidates the aspects of process-level biochar biogeochemistry most critical to determining the large-scale GHG and economic impacts, and thus provides a useful guide to future model-led research.

  2. Improvement of long-distance atmospheric transfer models Post-Chernobyl action

    International Nuclear Information System (INIS)

    Sinnaeve, J.

    1991-01-01

    The Chernobyl accident, although a tragedy in human terms, provided a valuable opportunity to examine our ability to model the dispersion and deposition of pollutants released into the atmosphere as they are transported over long distances by the wind. Models of long-range pollutant transport have a variety of uses in the context of accidental releases of radioactivity: in the early stages after or during an incident, they would assist in providing an indication of when and where contamination might be expected to appear in subsequent days and what its severity would be for a postulated (or known) release magnitude. As measurements of contamination become available, models can play a further role in emergency response: if the characteristics of the release, particularly the amounts of various radionuclides, are not known, they could be used to work back from measurements to properties of the release. They also provide a tool for an intelligent interpolation or extrapolation from the measurements to estimates of contamination levels in areas having no data. On a longer time-scale after an accident, they could assist in forming a total view of the situation and in assessing how important various phenomena were in determining the final contamination patterns

  3. CO2 emissions and economic activity: Short- and long-run economic determinants of scale, energy intensity and carbon intensity

    International Nuclear Information System (INIS)

    Andersson, Fredrik N.G.; Karpestam, Peter

    2013-01-01

    We analyze the short-term and long-term determinants of energy intensity, carbon intensity and scale effects for eight developed economies and two emerging economies from 1973 to 2007. Our results show that the short-term and long-term results differ, and that climate policies are more likely to affect emissions over the long term than over the short term. Climate policies should therefore be aimed at a time horizon of at least 8 years, and year-on-year changes in emissions contain little information about the trend path of emissions. In the long run, capital accumulation is the main driver of emissions. Productivity growth reduces the energy intensity, while the real oil price reduces both the energy intensity and the carbon intensity. The real oil price effect suggests that a global carbon tax is an important policy tool for reducing emissions, but our results also suggest that a carbon tax is likely to be insufficient to decouple emissions from economic growth. Such a decoupling is likely to require a structural transformation of the economy. The key policy challenge is thus to build new economic structures in which investments in green technologies are more profitable. - Highlights: • We model determinants of scale, energy intensity and carbon intensity. • Using band spectrum regressions, we separate short- and long-run effects. • Different economic variables affect emissions in the short and long run. • CO2-reducing policies should have a long-run horizon of at least 8 years. • A low-carbon society requires a structural transformation of the economy
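The three determinants named in the title obey a multiplicative identity, CO2 = GDP × (E/GDP) × (CO2/E), so log-changes decompose additively. The sketch below is a generic Kaya-style decomposition for illustration, not the band spectrum regression used in the paper.

```python
import math

def decompose_emissions(gdp0, e0, co2_0, gdp1, e1, co2_1):
    """Split CO2 emission growth between two periods into the three
    determinants via the identity CO2 = GDP * (E/GDP) * (CO2/E):
    the log-changes of scale, energy intensity and carbon intensity
    add up exactly to the log-change of emissions."""
    scale = math.log(gdp1 / gdp0)                       # activity
    energy_int = math.log((e1 / gdp1) / (e0 / gdp0))    # E/GDP
    carbon_int = math.log((co2_1 / e1) / (co2_0 / e0))  # CO2/E
    return scale, energy_int, carbon_int
```

By construction the three terms sum to log(co2_1/co2_0), so each determinant's contribution to emission growth can be read off directly.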

  4. Long Memory Models to Generate Synthetic Hydrological Series

    Directory of Open Access Journals (Sweden)

    Guilherme Armando de Almeida Pereira

    2014-01-01

    Full Text Available In Brazil, much of the energy production comes from hydroelectric plants, whose planning is not trivial due to the strong dependence on rainfall regimes. This planning is accomplished through optimization models that use inputs such as synthetic hydrologic series generated from the statistical model PAR(p) (periodic autoregressive). Recently, Brazil began the search for alternative models able to capture effects that the traditional PAR(p) model does not incorporate, such as long memory effects. Long memory in a time series can be defined as a significant dependence between lags separated by a long period of time. Thus, this research studies the effects of long dependence in the series of streamflow natural energy in the South subsystem, in order to estimate a long memory model capable of generating synthetic hydrologic series.
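The long-memory building block of ARFIMA-type models is the fractional difference operator (1 - B)^d, whose weights decay hyperbolically rather than geometrically, producing the dependence between distant lags described above. A minimal sketch (illustrative only; the paper's model specifics may differ):

```python
import numpy as np

def frac_diff_weights(d, n):
    """First n weights of the fractional difference operator (1 - B)**d
    used in ARFIMA-type long-memory models, from the recursion
        w[0] = 1,  w[k] = w[k-1] * (k - 1 - d) / k
    For 0 < d < 0.5 the weights decay hyperbolically (not
    geometrically), which is what keeps distant lags correlated."""
    w = np.empty(n)
    w[0] = 1.0
    for k in range(1, n):
        w[k] = w[k - 1] * (k - 1 - d) / k
    return w
```

Convolving white noise with the inverse filter (d replaced by -d) gives a simple way to generate a synthetic long-memory series.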

  5. Modelling atmospheric dispersion of mercury, lead and cadmium at european scale

    International Nuclear Information System (INIS)

    Roustan, Yelva

    2005-01-01

    Lead, mercury and cadmium are identified as the most worrying heavy metals in the framework of long-range air pollution. Understanding and modelling their transport and fate allows effective decisions to be made to reduce their impact on people and their environment. The first two parts of this thesis relate to the modelling of these trace pollutants for impact studies at the European scale. While mercury is mainly present in gaseous form and is likely to react chemically, the other heavy metals are primarily carried by fine particles and considered as inert. The third part of this thesis presents a methodological development based on an adjoint approach. It has been used to perform a sensitivity analysis of the model and to carry out inverse modelling to improve the boundary conditions, which are crucial for a restricted-area model. (author) [fr

  6. Design of scaled down structural models

    Science.gov (United States)

    Simitses, George J.

    1994-07-01

    In the aircraft industry, full scale and large component testing is a very necessary, time consuming, and expensive process. It is essential to find ways by which this process can be minimized without loss of reliability. One possible alternative is the use of scaled down models in testing and use of the model test results in order to predict the behavior of the larger system, referred to herein as prototype. This viewgraph presentation provides justifications and motivation for the research study, and it describes the necessary conditions (similarity conditions) for two structural systems to be structurally similar with similar behavioral response. Similarity conditions provide the relationship between a scaled down model and its prototype. Thus, scaled down models can be used to predict the behavior of the prototype by extrapolating their experimental data. Since satisfying all similarity conditions simultaneously is in most cases impractical, distorted models with partial similarity can be employed. Establishment of similarity conditions, based on the direct use of the governing equations, is discussed and their use in the design of models is presented. Examples include the use of models for the analysis of cylindrical bending of orthotropic laminated beam plates, of buckling of symmetric laminated rectangular plates subjected to uniform uniaxial compression and shear, applied individually, and of vibrational response of the same rectangular plates. Extensions and future tasks are also described.
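As a toy example of extrapolating model test data through similarity conditions, consider the buckling of a uniaxially compressed plate, for which the critical load per unit width scales as E t^3 / b^2. The helper below is hypothetical and not taken from the presentation; it illustrates how a distorted model (separate thickness and width scale factors) is handled.

```python
def prototype_buckling_load(N_model, lam_b, lam_t, lam_E=1.0):
    """Extrapolate the critical buckling load per unit width of a
    prototype plate from a test on a (possibly distorted) scale model.

    For uniaxial compression of a thin elastic plate, N_cr is
    proportional to E * t**3 / b**2, so with scale factors
    lam_b = b_p/b_m, lam_t = t_p/t_m and lam_E = E_p/E_m the
    similarity condition gives
        N_cr,p = N_cr,m * lam_E * lam_t**3 / lam_b**2
    (Illustrative only: a complete similarity analysis also requires
    matching Poisson's ratio, layup and boundary conditions.)"""
    return N_model * lam_E * lam_t**3 / lam_b**2
```

With complete similarity (lam_t == lam_b == lam and the same material) the prototype load simply scales linearly with lam.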

  7. Validation of a plant-wide phosphorus modelling approach with minerals precipitation in a full-scale WWTP

    DEFF Research Database (Denmark)

    Mbamba, Christian Kazadi; Flores Alsina, Xavier; Batstone, Damien John

    2016-01-01

    The focus of modelling in wastewater treatment is shifting from single unit to plant-wide scale. Plant-wide modelling approaches provide opportunities to study the dynamics and interactions of different transformations in water and sludge streams. Towards developing more general and robust ... approach describing ion speciation and ion pairing with kinetic multiple-minerals precipitation. Model performance is evaluated against data sets from a full-scale wastewater treatment plant, assessing the capability to describe water and sludge lines across the treatment process under steady-state operation ... plant. Dynamic influent profiles were generated using a calibrated influent generator and were used to study the effect of long-term influent dynamics on plant performance. Model-based analysis shows that minerals precipitation strongly influences composition in the anaerobic digesters, but also impacts ...

  8. NEON: Contributing continental-scale long-term environmental data for the benefit of society

    Science.gov (United States)

    Wee, B.; Aulenbach, S.

    2011-12-01

    natural-human systems cannot be understood in the absence of data about the human dimension. Another essential element is the community of tool and platform developers who create the infrastructure for scientists, educators, resource managers, and policy analysts to discover, analyze, and collaborate on problems using the diverse data that are required to address emerging large-scale environmental challenges. These challenges are very unlikely to be problems confined to this generation: they are urgent, compelling, and long-term problems that require a sustained effort to generate and curate data and information from observations, models, and experiments. NEON's long-term national physical and information infrastructure for environmental observation is one of the cornerstones of a framework that transforms science and information for the benefit of society.

  9. Simplified models of dark matter with a long-lived co-annihilation partner

    Science.gov (United States)

    Khoze, Valentin V.; Plascencia, Alexis D.; Sakurai, Kazuki

    2017-06-01

    We introduce a new set of simplified models to address the effects of 3-point interactions between the dark matter particle, its dark co-annihilation partner, and the Standard Model degree of freedom, which we take to be the tau lepton. The contributions from dark matter co-annihilation channels are highly relevant for a determination of the correct relic abundance. We investigate these effects as well as the discovery potential for dark matter co-annihilation partners at the LHC. A small mass splitting between the dark matter and its partner is preferred by the co-annihilation mechanism and suggests that the co-annihilation partners may be long-lived (stable or meta-stable) at collider scales. It is argued that such long-lived electrically charged particles can be looked for at the LHC in searches for anomalous charged tracks. This approach and the underlying models provide an alternative and a complement to the mono-jet and multi-jet based dark matter searches widely used in the context of simplified models with s-channel mediators. We consider four types of simplified models with different particle spins and coupling structures. Some of these models are manifestly gauge invariant and renormalizable; others would ultimately require a UV completion. These can be realised in terms of supersymmetric models in the neutralino-stau co-annihilation regime, as well as models with extra dimensions or composite models.

  10. Managing large-scale models: DBS

    International Nuclear Information System (INIS)

    1981-05-01

    A set of fundamental management tools for developing and operating a large-scale model and data base system is presented. Based on experience in operating and developing such a system, the only reasonable way to gain strong management control is to implement appropriate controls and procedures. Chapter I discusses the purpose of the book. Chapter II classifies a broad range of generic management problems into three groups: documentation, operations, and maintenance. First, system problems are identified; then solutions for gaining management control are discussed. Chapters III, IV, and V present practical methods for dealing with these problems. These methods were developed for managing SEAS but have general application for large-scale models and data bases

  11. Site-Scale Saturated Zone Flow Model

    International Nuclear Information System (INIS)

    G. Zyvoloski

    2003-01-01

    The purpose of this model report is to document the components of the site-scale saturated-zone flow model at Yucca Mountain, Nevada, in accordance with administrative procedure (AP)-SIII.10Q, ''Models''. This report provides validation and confidence in the flow model that was developed for site recommendation (SR) and will be used to provide flow fields in support of the Total Systems Performance Assessment (TSPA) for the License Application. The output from this report provides the flow model used in the ''Site-Scale Saturated Zone Transport'', MDL-NBS-HS-000010 Rev 01 (BSC 2003 [162419]). The Site-Scale Saturated Zone Transport model then provides output to the SZ Transport Abstraction Model (BSC 2003 [164870]). In particular, the output from the SZ site-scale flow model is used to simulate the groundwater flow pathways and radionuclide transport to the accessible environment for use in the TSPA calculations. Since the development and calibration of the saturated-zone flow model, more data have been gathered for use in model validation and confidence building, including new water-level data from Nye County wells, single- and multiple-well hydraulic testing data, and new hydrochemistry data. In addition, a new hydrogeologic framework model (HFM), which incorporates Nye County well lithology, also provides geologic data for corroboration and confidence in the flow model. The intended use of this work is to provide a flow model that generates flow fields to simulate radionuclide transport in saturated porous rock and alluvium under natural or forced gradient flow conditions. The flow model simulations are completed using the three-dimensional (3-D), finite-element, flow, heat, and transport computer code, FEHM Version (V) 2.20 (software tracking number (STN): 10086-2.20-00; LANL 2003 [161725]). Concurrently, a process-level transport model and methodology for calculating radionuclide transport in the saturated zone at Yucca Mountain using FEHM V 2.20 are being

  12. Non steady-state model for dry oxidation of nuclear wastes metallic containers in long term interim storage conditions

    International Nuclear Information System (INIS)

    Bertrand, Nathalie; Desgranges, Clara; Poquillon, Dominique; Monceau, Daniel

    2006-01-01

    For high-level nuclear waste containers in long-term interim storage, dry oxidation will be the first and the main degradation mode, because for this kind of waste the temperature at the surface of the containers will be high enough to prevent any condensation phenomena for several years. Even though the scale growth kinetics is expected to be very slow, since the temperature will be moderate at the beginning of storage (around 300 deg. C) and will keep decreasing, the metal thickness lost to dry oxidation over such a long period must be evaluated with good reliability. To achieve this goal, modelling of the oxide scale growth is necessary; this is the aim of the dry oxidation studies performed in the frame of the COCON programme. All existing oxidation models are based on the two main oxidation theories: that developed by Wagner between the 1930s and 1970s on the one hand, and that of Cabrera and Mott in the 1960s, later extended by Fromhold, on the other. The former is usually associated with high-temperature behaviour and the latter with low-temperature behaviour, although it is certainly more relevant to consider their ranges of application in terms of oxide scale thickness rather than temperature. The question is which theory an appropriate model should rely on. The oxide scale can be expected to have a thickness ranging from a few tens of nanometres up to several tens of micrometres, depending on the temperature and the class of alloy chosen. At present, low-alloy steels and carbon steels are considered candidate materials for high-level nuclear waste containers in long-term interim storage. For this type of alloy, the scale formed during the dry oxidation stage will 'rapidly' be thick enough for the Mott field to be neglected. Hence, in a first step, some basic models based on a parabolic rate assumption, that is to say Wagner's model, have been derived from experimental data on iron and on low-alloy steel
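A Wagner-type parabolic rate law can be integrated over a prescribed cooling history to estimate the scale thickness, assuming an Arrhenius temperature dependence of the parabolic constant. In the sketch below, k0 and Ea are illustrative placeholders, not measured values for any container steel.

```python
import numpy as np

def oxide_thickness(times_yr, temps_K, k0=1.0e-4, Ea=150.0e3):
    """Oxide scale thickness under Wagner-type parabolic kinetics,
        x**2 = integral of kp(T(t)) dt,  kp = k0 * exp(-Ea / (R*T)),
    integrated (trapezoidal rule) over a prescribed temperature
    history, e.g. the slowly cooling surface of a storage container.
    times_yr in years, temps_K in kelvin; returns thickness in metres.
    k0 [m^2/s] and Ea [J/mol] are illustrative placeholders."""
    R = 8.314
    t_s = np.asarray(times_yr, dtype=float) * 3.155e7   # years -> s
    kp = k0 * np.exp(-Ea / (R * np.asarray(temps_K, dtype=float)))
    x2 = np.concatenate(([0.0],
                         np.cumsum(0.5 * (kp[1:] + kp[:-1]) * np.diff(t_s))))
    return np.sqrt(x2)
```

At constant temperature this reduces to the classical parabolic law x = sqrt(kp * t); a decreasing temperature history makes the growth slow down even faster than parabolically.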

  13. Cross-scale intercomparison of climate change impacts simulated by regional and global hydrological models in eleven large river basins

    Energy Technology Data Exchange (ETDEWEB)

    Hattermann, F. F.; Krysanova, V.; Gosling, S. N.; Dankers, R.; Daggupati, P.; Donnelly, C.; Flörke, M.; Huang, S.; Motovilov, Y.; Buda, S.; Yang, T.; Müller, C.; Leng, G.; Tang, Q.; Portmann, F. T.; Hagemann, S.; Gerten, D.; Wada, Y.; Masaki, Y.; Alemayehu, T.; Satoh, Y.; Samaniego, L.

    2017-01-04

    Ideally, the results from models operating at different scales should agree in the trend direction and magnitude of impacts under climate change. However, this implies that the sensitivity to climate variability and change of impact models designed for either scale is comparable. In this study, we compare hydrological changes simulated by 9 global and 9 regional hydrological models (HMs) for 11 large river basins on all continents under reference and scenario conditions. The foci are model validation runs, the sensitivity of annual discharge to climate variability in the reference period, and the sensitivity of the long-term average monthly seasonal dynamics to climate change. One major result is that the global models, mostly not calibrated against observations, often show a considerable bias in mean monthly discharge, whereas the regional models reproduce reference conditions much better. However, the sensitivity of the two HM ensembles to climate variability is in general similar. The simulated climate change impacts in terms of long-term average monthly dynamics, evaluated for HM ensemble medians and spreads, show that the medians are comparable in some cases, with distinct differences in others, and that the spreads related to the global models are mostly notably larger. In summary, this implies that global HMs are useful tools when looking at large-scale impacts of climate change and variability, but whenever impacts for a specific river basin or region are of interest, e.g. for complex water management applications, regional-scale models validated against observed discharge should be used.

  14. MOUNTAIN-SCALE COUPLED PROCESSES (TH/THC/THM) MODELS

    International Nuclear Information System (INIS)

    Y.S. Wu

    2005-01-01

    This report documents the development and validation of the mountain-scale thermal-hydrologic (TH), thermal-hydrologic-chemical (THC), and thermal-hydrologic-mechanical (THM) models. These models provide technical support for screening of features, events, and processes (FEPs) related to the effects of coupled TH/THC/THM processes on mountain-scale unsaturated zone (UZ) and saturated zone (SZ) flow at Yucca Mountain, Nevada (BSC 2005 [DIRS 174842], Section 2.1.1.1). The purpose and validation criteria for these models are specified in ''Technical Work Plan for: Near-Field Environment and Transport: Coupled Processes (Mountain-Scale TH/THC/THM, Drift-Scale THC Seepage, and Drift-Scale Abstraction) Model Report Integration'' (BSC 2005 [DIRS 174842]). Model results are used to support exclusion of certain FEPs from the total system performance assessment for the license application (TSPA-LA) model on the basis of low consequence, consistent with the requirements of 10 CFR 63.342 [DIRS 173273]. Outputs from this report are not direct feeds to the TSPA-LA. All the FEPs related to the effects of coupled TH/THC/THM processes on mountain-scale UZ and SZ flow are discussed in Sections 6 and 7 of this report. The mountain-scale coupled TH/THC/THM processes models numerically simulate the impact of nuclear waste heat release on the natural hydrogeological system, including a representation of heat-driven processes occurring in the far field. The mountain-scale TH simulations provide predictions for thermally affected liquid saturation, gas- and liquid-phase fluxes, and water and rock temperature (together called the flow fields). The main focus of the TH model is to predict the changes in water flux driven by evaporation/condensation processes, and drainage between drifts. The TH model captures mountain-scale three-dimensional flow effects, including lateral diversion and mountain-scale flow patterns. The mountain-scale THC model evaluates TH effects on water and gas

  15. MOUNTAIN-SCALE COUPLED PROCESSES (TH/THC/THM) MODELS

    Energy Technology Data Exchange (ETDEWEB)

    Y.S. Wu

    2005-08-24

    This report documents the development and validation of the mountain-scale thermal-hydrologic (TH), thermal-hydrologic-chemical (THC), and thermal-hydrologic-mechanical (THM) models. These models provide technical support for screening of features, events, and processes (FEPs) related to the effects of coupled TH/THC/THM processes on mountain-scale unsaturated zone (UZ) and saturated zone (SZ) flow at Yucca Mountain, Nevada (BSC 2005 [DIRS 174842], Section 2.1.1.1). The purpose and validation criteria for these models are specified in ''Technical Work Plan for: Near-Field Environment and Transport: Coupled Processes (Mountain-Scale TH/THC/THM, Drift-Scale THC Seepage, and Drift-Scale Abstraction) Model Report Integration'' (BSC 2005 [DIRS 174842]). Model results are used to support exclusion of certain FEPs from the total system performance assessment for the license application (TSPA-LA) model on the basis of low consequence, consistent with the requirements of 10 CFR 63.342 [DIRS 173273]. Outputs from this report are not direct feeds to the TSPA-LA. All the FEPs related to the effects of coupled TH/THC/THM processes on mountain-scale UZ and SZ flow are discussed in Sections 6 and 7 of this report. The mountain-scale coupled TH/THC/THM processes models numerically simulate the impact of nuclear waste heat release on the natural hydrogeological system, including a representation of heat-driven processes occurring in the far field. The mountain-scale TH simulations provide predictions for thermally affected liquid saturation, gas- and liquid-phase fluxes, and water and rock temperature (together called the flow fields). The main focus of the TH model is to predict the changes in water flux driven by evaporation/condensation processes, and drainage between drifts. The TH model captures mountain-scale three-dimensional flow effects, including lateral diversion and mountain-scale flow patterns. The mountain-scale THC model evaluates TH effects on

  16. A Simple Laboratory Scale Model of Iceberg Dynamics and its Role in Undergraduate Education

    Science.gov (United States)

    Burton, J. C.; MacAyeal, D. R.; Nakamura, N.

    2011-12-01

    Lab-scale models of geophysical phenomena have a long history in research and education. For example, at the University of Chicago, Dave Fultz developed laboratory-scale models of atmospheric flows. The results from his laboratory were so stimulating that similar laboratories were subsequently established at a number of other institutions. Today, the Dave Fultz Memorial Laboratory for Hydrodynamics (http://geosci.uchicago.edu/~nnn/LAB/) teaches general circulation of the atmosphere and oceans to hundreds of students each year. Following this tradition, we have constructed a lab model of iceberg-capsize dynamics for use in the Fultz Laboratory, which focuses on the interface between glaciology and physical oceanography. The experiment consists of a 2.5 meter long wave tank containing water and plastic "icebergs". The motion of the icebergs is tracked using digital video. Movies can be found at: http://geosci.uchicago.edu/research/glaciology_files/tsunamigenesis_research.shtml. We have had 3 successful undergraduate interns with backgrounds in mathematics, engineering, and geosciences perform experiments, analyze data, and interpret results. In addition to iceberg dynamics, the wave-tank has served as a teaching tool in undergraduate classes studying dam-breaking and tsunami run-up. Motivated by the relatively inexpensive cost of our apparatus (~1K-2K dollars) and positive experiences of undergraduate students, we hope to serve as a model for undergraduate research and education that other universities may follow.

  17. Mapping using the Tsyganenko long magnetospheric model and its relationship to Viking auroral images

    International Nuclear Information System (INIS)

    Elphinstone, R.D.; Hearn, D.; Murphree, J.S.; Cogger, L.L.

    1991-01-01

    The Tsyganenko long magnetospheric model (1987) has been used in conjunction with ultra-violet images taken by the Viking spacecraft to investigate the relationship of the auroral distribution to different magnetospheric regions. The model describes the large-scale structure of the magnetosphere reasonably well for dipole tilt angles near zero, but it appears to break down at higher tilt angles. Even so, a wide variety of auroral configurations can be accurately described by the model. It appears that the open-closed field line boundary is a poor indicator of auroral arc systems with the possible exception of high-latitude polar arcs. The auroral distribution typically called the oval maps to a region in the equatorial plane quite close to the Earth and can be approximately located by mapping the model current density maximum from the equatorial plane into the ionosphere. Although the model may break down along the flanks of the magnetotail, the large-scale auroral distribution generally reflects variations in the near-Earth region and can be modeled quite effectively

  18. Integrated multi-scale modelling and simulation of nuclear fuels

    International Nuclear Information System (INIS)

    Valot, C.; Bertolus, M.; Masson, R.; Malerba, L.; Rachid, J.; Besmann, T.; Phillpot, S.; Stan, M.

    2015-01-01

    This chapter aims at discussing the objectives, implementation and integration of multi-scale modelling approaches applied to nuclear fuel materials. We will first show why the multi-scale modelling approach is required, due to the nature of the materials and to the phenomena involved under irradiation. We will then present the multiple facets of the multi-scale modelling approach, while giving some recommendations with regard to its application. We will also show that multi-scale modelling must be coupled with appropriate multi-scale experiments and characterisation. Finally, we will demonstrate how multi-scale modelling can contribute to solving technology issues. (authors)

  19. Modelling across bioreactor scales: methods, challenges and limitations

    DEFF Research Database (Denmark)

    Gernaey, Krist

    Scale-up and scale-down of bioreactors are very important in industrial biotechnology, especially with the currently available knowledge on the occurrence of gradients in industrial-scale bioreactors. Moreover, it becomes increasingly appealing to model such industrial-scale systems, considering that it is challenging and expensive to acquire experimental data of good quality that can be used for characterizing gradients occurring inside a large industrial-scale bioreactor. But which model building methods are available? And how can one ensure that the parameters in such a model are properly estimated? And what...

  20. Real-world-time simulation of memory consolidation in a large-scale cerebellar model

    Directory of Open Access Journals (Sweden)

    Masato Gosui

    2016-03-01

    We report development of a large-scale spiking network model of the cerebellum composed of more than 1 million neurons. The model is implemented on graphics processing units (GPUs), which are dedicated hardware for parallel computing. Using 4 GPUs simultaneously, we achieve real-time simulation, in which computer simulation of cerebellar activity for 1 sec completes within 1 sec of real-world time, with a temporal resolution of 1 msec. This allows us to carry out a very long-term computer simulation of cerebellar activity in a practical time with millisecond temporal resolution. Using the model, we carry out computer simulation of long-term gain adaptation of optokinetic response (OKR) eye movements for 5 days, aimed at studying the neural mechanisms of posttraining memory consolidation. The simulation results are consistent with animal experiments and our theory of posttraining memory consolidation. These results suggest that real-time computing provides a useful means to study very slow neural processes such as memory consolidation in the brain.

  1. On the random cascading model study of anomalous scaling in multiparticle production with continuously diminishing scale

    International Nuclear Information System (INIS)

    Liu Lianshou; Zhang Yang; Wu Yuanfang

    1996-01-01

    The anomalous scaling of factorial moments with continuously diminishing scale is studied using a random cascading model. It is shown that the models currently used have the property of anomalous scaling only for discrete values of the elementary cell size. A revised model is proposed which also gives good scaling properties for a continuously varying scale. It turns out that the strip integral has good scaling properties provided the integration regions are chosen correctly, and that this property is insensitive to the concrete way of self-similar subdivision of phase space in the models. (orig.)
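The scaled factorial moments studied above can be computed directly from per-bin multiplicities. The sketch below is an illustration with synthetic Poisson counts (for which F_q tends to 1, since Poisson noise carries no dynamical fluctuations), not the authors' code:

```python
import numpy as np

def factorial_moment(counts: np.ndarray, q: int) -> float:
    """Horizontally averaged scaled factorial moment F_q.

    counts: array of shape (events, bins) holding multiplicities n per bin.
    F_q = <n(n-1)...(n-q+1)> / <n>^q, averaged over events and bins.
    """
    n = counts.astype(float)
    prod = np.ones_like(n)
    for i in range(q):
        prod *= n - i          # falling factorial n(n-1)...(n-q+1)
    return prod.mean() / (n.mean() ** q)

rng = np.random.default_rng(0)
# Pure Poisson multiplicities: no intermittency, so F_2 should be close to 1.
poisson_counts = rng.poisson(lam=5.0, size=(20000, 16))
f2 = factorial_moment(poisson_counts, 2)
```

Anomalous scaling would show up as a power-law rise of F_q as the bin (cell) size is diminished; here the Poisson baseline simply checks that the estimator removes statistical noise.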

  2. Multi-scale modeling for sustainable chemical production

    DEFF Research Database (Denmark)

    Zhuang, Kai; Bakshi, Bhavik R.; Herrgard, Markus

    2013-01-01

    With recent advances in metabolic engineering, it is now technically possible to produce a wide portfolio of existing petrochemical products from biomass feedstock. In recent years, a number of modeling approaches have been developed to support the engineering and decision-making processes associated with the development and implementation of a sustainable biochemical industry. The temporal and spatial scales of modeling approaches for sustainable chemical production vary greatly, ranging from metabolic models that aid the design of fermentative microbial strains to material and monetary flow models that explore the ecological impacts of all economic activities. Research efforts that attempt to connect the models at different scales have been limited. Here, we review a number of existing modeling approaches and their applications at the scales of metabolism, bioreactor, overall process...

  3. Bottom friction models for shallow water equations: Manning’s roughness coefficient and small-scale bottom heterogeneity

    Science.gov (United States)

    Dyakonova, Tatyana; Khoperskov, Alexander

    2018-03-01

    The correct description of surface water dynamics in a shallow water model requires accounting for friction. To simulate channel flow in the Chezy model, a constant Manning roughness coefficient is frequently used. The Manning coefficient n_M is an integral parameter which accounts for a large number of physical factors determining the braking of the flow. We used computational simulations in a shallow water model to determine the relationship between the Manning coefficient and the parameters of small-scale perturbations of the bottom in a long channel. Comparing the transverse water velocity profiles in the channel obtained in models with a perturbed bottom and no bottom friction against models with bottom friction on a smooth bottom, we constructed the dependence of n_M on the amplitude and spatial scale of the bottom relief perturbation.
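The Manning and Chezy friction terms discussed above can be written down explicitly. This is a minimal sketch assuming a wide channel (hydraulic radius approximated by the depth h); the numerical values are illustrative, not from the paper:

```python
def manning_friction_slope(u: float, h: float, n_m: float) -> float:
    """Friction slope S_f = n_M^2 * u * |u| / h^(4/3) (SI units, wide channel)."""
    return n_m ** 2 * u * abs(u) / h ** (4.0 / 3.0)

def chezy_coefficient(h: float, n_m: float) -> float:
    """Chezy coefficient C = h^(1/6) / n_M for a wide channel (R ~ h)."""
    return h ** (1.0 / 6.0) / n_m

# Illustrative values: 1 m deep channel, 0.5 m/s flow, n_M = 0.03 s/m^(1/3).
sf = manning_friction_slope(0.5, 1.0, 0.03)
c = chezy_coefficient(1.0, 0.03)
```

In a shallow water solver the momentum equation then carries a sink term -g * S_f, which is how an effective n_M fitted to small-scale bottom perturbations feeds back into the flow.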

  4. Corrections to scaling in random resistor networks and diluted continuous spin models near the percolation threshold.

    Science.gov (United States)

    Janssen, Hans-Karl; Stenull, Olaf

    2004-02-01

    We investigate corrections to scaling induced by irrelevant operators in randomly diluted systems near the percolation threshold. The specific systems that we consider are the random resistor network and a class of continuous spin systems, such as the x-y model. We focus on a family of least irrelevant operators and determine the corrections to scaling that originate from this family. Our field theoretic analysis carefully takes into account that irrelevant operators mix under renormalization. It turns out that long-standing results on corrections to scaling are respectively incorrect (random resistor networks) or incomplete (continuous spin systems).

  5. Relationship between long working hours and depression in two working populations: a structural equation model approach.

    Science.gov (United States)

    Amagasa, Takashi; Nakayama, Takeo

    2012-07-01

    To test the hypothesis that the relationship reported between long working hours and depression was inconsistent in previous studies because job demand was treated as a confounder. Structural equation modeling was used to construct five models, using work-related factors and a depressive mood scale obtained from 218 clerical workers, to test for goodness of fit; the models were externally validated with data obtained from 1160 sales workers. Multiple logistic regression analysis was also performed. The model in which long working hours increased depression risk when job demand was regarded as an intermediate variable was the best fitted model (goodness-of-fit index/root-mean-square error of approximation: 0.981 to 0.996/0.042 to 0.044). The odds ratio for depression risk with high-demand work of 60 hours or more per week was estimated at 2 to 4 versus low-demand work of less than 60 hours per week. Long working hours increased depression risk, with job demand being an intermediate variable.
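The odds ratio quoted above can be illustrated with a simple 2x2 contingency computation. The counts below are invented for illustration and are not the study's data:

```python
def odds_ratio(exposed_cases: int, exposed_noncases: int,
               unexposed_cases: int, unexposed_noncases: int) -> float:
    """Odds ratio from a 2x2 table: (a/b) / (c/d) = (a*d) / (b*c)."""
    return (exposed_cases * unexposed_noncases) / (exposed_noncases * unexposed_cases)

# Hypothetical counts: "exposed" = high-demand work of >= 60 h/week,
# "unexposed" = low-demand work of < 60 h/week; cases = depressed.
or_depression = odds_ratio(30, 70, 15, 85)
```

A multiple logistic regression, as used in the study, estimates an adjusted version of this quantity: the exponentiated coefficient of the exposure variable is the odds ratio controlling for the other covariates.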

  6. Large Scale Computations in Air Pollution Modelling

    DEFF Research Database (Denmark)

    Zlatev, Z.; Brandt, J.; Builtjes, P. J. H.

    Proceedings of the NATO Advanced Research Workshop on Large Scale Computations in Air Pollution Modelling, Sofia, Bulgaria, 6-10 July 1998

  7. A rate-dependent multi-scale crack model for concrete

    NARCIS (Netherlands)

    Karamnejad, A.; Nguyen, V.P.; Sluys, L.J.

    2013-01-01

    A multi-scale numerical approach for modeling cracking in heterogeneous quasi-brittle materials under dynamic loading is presented. In the model, a discontinuous crack model is used at macro-scale to simulate fracture and a gradient-enhanced damage model has been used at meso-scale to simulate

  8. Regional Scale Modelling for Exploring Energy Strategies for Africa

    International Nuclear Information System (INIS)

    Welsch, M.

    2015-01-01

    KTH Royal Institute of Technology was founded in 1827 and is the largest technical university in Sweden, with five campuses and around 15,000 students. KTH-dESA combines outstanding knowledge in the field of energy systems analysis, as demonstrated by its successful collaborations with many (UN) organizations. Regional-scale modelling for exploring energy strategies for Africa includes: assessing renewable energy potentials; analysing investment strategies; assessing climate resilience; comparing electrification options; providing web-based decision support; and quantifying energy access. It is concluded that strategies are required to ensure a robust and flexible energy system (-> no-regret choices); capacity investments should be in line with national and regional strategies; climate change is important to consider, as it may strongly influence the energy flows in a region; and long-term models can help identify robust energy investment strategies and pathways, and can help assess future markets and the profitability of individual projects.

  9. Scale Issues in Modeling the Water Resources Sector in National Economic Models: A Case study of China

    Science.gov (United States)

    Strzepek, K. M.; Kirshen, P.; Yohe, G.

    2001-05-01

    The fundamental theme of this research was to investigate tradeoffs in model resolution for modeling water resources in the context of national economic development and capital investment decisions. Based on a case study of China, the research team developed water resource models at relatively fine scales, then investigated how they can be aggregated to regional or national scales for use in national-level planning decisions or global-scale integrated assessment models of food and/or environmental change issues. The team developed regional water supply and water demand functions. Simplifying and aggregating the supply and demand functions will provide reduced-form functions of the water sector for inclusion in large-scale national economic models. Water supply cost functions were developed for both surface water and groundwater supplies. Surface water: long time series of flows at the mouths of the 36 major river sub-basins in China are used in conjunction with different basin reservoir storage quantities to obtain storage-yield curves. These are then combined with reservoir and transmission cost data to obtain yield-cost or surface water demand curves. The methodology to obtain the long time series of flows for each basin is to fit a simple abcd water balance model to each basin. The costs of reservoir storage have been estimated using a methodology developed in the USA that relates marginal storage costs to existing storage, slope and geological conditions; the USA cost functions were then adjusted to Chinese costs. The costs of some actual dams in China were used to "ground-truth" the methodology. Groundwater: the purpose of the groundwater work is to estimate the recharge in each basin, and the depths and water quality of aquifers. A byproduct of the application of the abcd water balance model is the recharge.
Depths and quality of aquifers are being taken from many separate reports on groundwater in different parts of China; we have been
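The abcd water balance model mentioned above is commonly attributed to Thomas (1981). A minimal monthly time-step sketch, with invented parameter and forcing values (the study's calibrated values are not given here), might look like this:

```python
import math

def abcd_step(P, PET, S_prev, G_prev, a, b, c, d):
    """One monthly step of the abcd water balance model (Thomas, 1981).

    P: precipitation, PET: potential evapotranspiration (same units, e.g. mm).
    S: soil moisture store, G: groundwater store.
    Returns (streamflow Q, updated S, updated G).
    """
    W = P + S_prev                              # available water
    wb = (W + b) / (2.0 * a)
    Y = wb - math.sqrt(wb * wb - W * b / a)     # evapotranspiration opportunity
    S = Y * math.exp(-PET / b)                  # end-of-month soil moisture
    avail = W - Y                               # water leaving the soil zone
    G = (G_prev + c * avail) / (1.0 + d)        # groundwater after recharge c*(W-Y)
    Q = (1.0 - c) * avail + d * G               # direct runoff + baseflow
    return Q, S, G

# Illustrative parameters only (a close to 1; b, c, d are basin-specific).
Q, S, G = abcd_step(P=100.0, PET=60.0, S_prev=150.0, G_prev=50.0,
                    a=0.98, b=250.0, c=0.4, d=0.1)
```

Running such a step over a long precipitation/PET record yields the monthly flow series from which storage-yield curves can be built, and the recharge term c*(W-Y) is the groundwater byproduct the abstract refers to.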

  10. Scaling forecast models for wind turbulence and wind turbine power intermittency

    Science.gov (United States)

    Duran Medina, Olmo; Schmitt, Francois G.; Calif, Rudy

    2017-04-01

    The intermittency of wind turbine power remains an important issue for the massive development of this renewable energy. The energy peaks injected into the electric grid create difficulties for energy distribution management. Hence, a correct forecast of wind power in the short and middle term is needed, owing to the high unpredictability of the intermittency phenomenon. We adopt a statistical approach through the analysis and characterization of stochastic fluctuations. The theoretical framework is the multifractal modelling of wind velocity fluctuations. Here, we consider data from three wind turbines, two of which use direct-drive technology. These turbines produce energy under real operating conditions and allow us to test our models for forecasting power production at different time horizons. Two forecast models were developed, based on two physical properties observed in the wind and power time series: the scaling properties on the one hand, and the intermittency of the wind power increments on the other. The first tool addresses the intermittency through a multifractal lognormal fit of the power fluctuations. The second tool is based on an analogy between the power scaling properties and fractional Brownian motion; indeed, long-term memory is found in both time series. Both models show encouraging results, since the overall tendency of the signal is respected over different time scales. These tools are first steps in the search for efficient forecasting approaches for adapting the grid to wind energy fluctuations.
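The fractional-Brownian-motion analogy rests on structure-function scaling, S_q(tau) = <|x(t+tau)-x(t)|^q> ~ tau^(q*H). The sketch below estimates H from a time series by a log-log fit; it is tested on ordinary Brownian motion (where H = 0.5) and does not reproduce the paper's turbine data or fitting choices:

```python
import numpy as np

def structure_function_exponent(x: np.ndarray, q: float, lags) -> float:
    """Fit zeta(q) in S_q(tau) = <|x(t+tau) - x(t)|^q> ~ tau^zeta(q).

    For fractional Brownian motion zeta(q) = q * H, so H = zeta(2) / 2.
    """
    log_sq = []
    for tau in lags:
        inc = np.abs(x[tau:] - x[:-tau])        # increments at lag tau
        log_sq.append(np.log((inc ** q).mean()))
    slope, _ = np.polyfit(np.log(np.asarray(lags, dtype=float)), log_sq, 1)
    return slope

rng = np.random.default_rng(1)
bm = np.cumsum(rng.standard_normal(200_000))     # ordinary Brownian motion, H = 0.5
H = structure_function_exponent(bm, q=2.0, lags=[1, 2, 4, 8, 16, 32]) / 2.0
```

For a real power series, curvature of zeta(q) away from the straight line q*H is the signature of multifractal intermittency that the lognormal fit in the paper is designed to capture.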

  11. Long-wave forcing for regional atmospheric modelling

    Energy Technology Data Exchange (ETDEWEB)

    Storch, H. von; Langenberg, H.; Feser, F. [GKSS-Forschungszentrum Geesthacht GmbH (Germany). Inst. fuer Hydrophysik

    1999-07-01

    A new method, named 'spectral nudging', of linking a regional model to the driving large-scale state simulated or analyzed by a global model is proposed and tested. Spectral nudging is based on the idea that regional-scale climate statistics are conditioned by the interplay between continental-scale atmospheric conditions and such regional features as marginal seas and mountain ranges. Following this 'downscaling' idea, the regional model is forced to satisfy not only boundary conditions, possibly in a boundary sponge region, but also large-scale flow conditions inside the integration area. We demonstrate that spectral nudging succeeds in keeping the simulated state close to the driving state at large scales, while generating smaller-scale features. We also show that the standard boundary forcing technique in current use allows the regional model to develop internal states conflicting with the large-scale state. It is concluded that spectral nudging may be seen as a suboptimal and indirect data assimilation technique. (orig.)
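The spectral nudging idea can be illustrated on a 1-D periodic toy field: relax only the low-wavenumber part of the regional field toward the driving field, leaving the small scales free. This is a schematic sketch, not the implementation described in the paper; the cutoff wavenumber and relaxation strength are assumptions:

```python
import numpy as np

def low_pass(field: np.ndarray, k_max: int) -> np.ndarray:
    """Keep only wavenumbers |k| <= k_max of a periodic 1-D field."""
    spec = np.fft.rfft(field)
    spec[k_max + 1:] = 0.0
    return np.fft.irfft(spec, n=field.size)

def spectral_nudge(regional, driving, k_max, alpha):
    """Relax the large-scale part of the regional field toward the driving field.

    alpha = 1 replaces the large scales entirely; smaller alpha nudges gradually.
    """
    return regional + alpha * (low_pass(driving, k_max) - low_pass(regional, k_max))

x = np.linspace(0.0, 2.0 * np.pi, 256, endpoint=False)
driving = np.sin(x)                                   # large-scale driving state (k = 1)
regional = 0.5 * np.sin(x) + 0.2 * np.sin(20.0 * x)   # drifted large scale + small scale
nudged = spectral_nudge(regional, driving, k_max=4, alpha=1.0)
```

After nudging, the large-scale part of the regional field matches the driving state while the regional small-scale feature (k = 20) is untouched, which is exactly the behaviour the abstract claims for the method.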

  12. Magnetic hysteresis at the domain scale of a multi-scale material model for magneto-elastic behaviour

    Energy Technology Data Exchange (ETDEWEB)

    Vanoost, D., E-mail: dries.vanoost@kuleuven-kulak.be [KU Leuven Technology Campus Ostend, ReMI Research Group, Oostende B-8400 (Belgium); KU Leuven Kulak, Wave Propagation and Signal Processing Research Group, Kortrijk B-8500 (Belgium); Steentjes, S. [Institute of Electrical Machines, RWTH Aachen University, Aachen D-52062 (Germany); Peuteman, J. [KU Leuven Technology Campus Ostend, ReMI Research Group, Oostende B-8400 (Belgium); KU Leuven, Department of Electrical Engineering, Electrical Energy and Computer Architecture, Heverlee B-3001 (Belgium); Gielen, G. [KU Leuven, Department of Electrical Engineering, Microelectronics and Sensors, Heverlee B-3001 (Belgium); De Gersem, H. [KU Leuven Kulak, Wave Propagation and Signal Processing Research Group, Kortrijk B-8500 (Belgium); TU Darmstadt, Institut für Theorie Elektromagnetischer Felder, Darmstadt D-64289 (Germany); Pissoort, D. [KU Leuven Technology Campus Ostend, ReMI Research Group, Oostende B-8400 (Belgium); KU Leuven, Department of Electrical Engineering, Microelectronics and Sensors, Heverlee B-3001 (Belgium); Hameyer, K. [Institute of Electrical Machines, RWTH Aachen University, Aachen D-52062 (Germany)

    2016-09-15

    This paper proposes a multi-scale energy-based material model for poly-crystalline materials. Describing the behaviour of poly-crystalline materials at three spatial scales of dominating physical mechanisms allows accounting for the heterogeneity and multi-axiality of the material behaviour. The three spatial scales are the poly-crystalline, grain and domain scale. Together with appropriate scale transition rules and models for local magnetic behaviour at each scale, the model is able to describe the magneto-elastic behaviour (magnetostriction and hysteresis) at the macroscale, although the data input is merely based on a set of physical constants. Introducing a new energy density function that describes the demagnetisation field, the anhysteretic multi-scale energy-based material model is extended to the hysteretic case. The hysteresis behaviour is included at the domain scale according to the micro-magnetic domain theory while preserving a valid description for the magneto-elastic coupling. The model is verified using existing measurement data for different mechanical stress levels. - Highlights: • A ferromagnetic hysteretic energy-based multi-scale material model is proposed. • The hysteresis is obtained by a newly proposed hysteresis energy density function. • The model avoids tedious parameter identification.

  13. Developing Tools to Test the Thermo-Mechanical Models, Examples at Crustal and Upper Mantle Scale

    Science.gov (United States)

    Le Pourhiet, L.; Yamato, P.; Burov, E.; Gurnis, M.

    2005-12-01

    Testing geodynamical models is never an easy task. Depending on the spatio-temporal scale of the model, different testable predictions are needed, and no magic recipe exists. This contribution first presents different methods that have been used to test thermo-mechanical modelling results at upper crustal, lithospheric and upper mantle scales, using three geodynamical examples: the Gulf of Corinth (Greece), the Western Alps, and the Sierra Nevada. At short spatio-temporal scales (e.g. the Gulf of Corinth), the resolution of the numerical models is usually sufficient to capture the timing and kinematics of the faults precisely enough to be tested by tectono-stratigraphic arguments. In actively deforming areas, microseismicity can be compared to the effective rheology, and the P and T axes of the focal mechanisms can be compared with the local orientation of the major component of the stress tensor. At lithospheric scale, the resolution of the models no longer permits constraining the models by direct observations (i.e. structural data from the field or seismic reflection). Instead, synthetic P-T-t paths may be computed and compared to natural ones in terms of exhumation rates for ancient orogens. Topography may also help, but on continents it mainly depends on erosion laws that are complicated to constrain. Deeper in the mantle, the only available constraints are long-wavelength topographic data and tomographic 'data'. The major problem to overcome now, at lithospheric and upper mantle scales, is that the so-called 'data' actually result from inverse models of the real data, and those inverse models are themselves based on synthetic models. Post-processing P and S wave velocities is not sufficient to make testable predictions at upper mantle scale. Instead, direct wave propagation models must be computed. This allows checking whether the differences between two models constitute a testable prediction or not.
In the longer term, we may be able to use those synthetic models to reduce the residue

  14. Nonpointlike-parton model with asymptotic scaling and with scaling violationat moderate Q2 values

    International Nuclear Information System (INIS)

    Chen, C.K.

    1981-01-01

    A nonpointlike-parton model is formulated on the basis of the assumption of energy-independent total cross sections of partons and the current-algebra sum rules. No specific strong-interaction Lagrangian density is introduced in this approach. This model predicts asymptotic scaling for the inelastic structure functions of nucleons on the one hand and scaling violation at moderate Q 2 values on the other hand. The predicted scaling-violation patterns at moderate Q 2 values are consistent with the observed scaling-violation patterns. A numerical fit of F 2 functions is performed in order to demonstrate that the predicted scaling-violation patterns of this model at moderate Q 2 values fit the data, and to see how the predicted asymptotic scaling behavior sets in at various x values. Explicit analytic forms of F 2 functions are obtained from this numerical fit, and are compared in detail with the analytic forms of F 2 functions obtained from the numerical fit of the quantum-chromodynamics (QCD) parton model. This comparison shows that this nonpointlike-parton model fits the data better than the QCD parton model, especially at large and small x values. Nachtman moments are computed from the F 2 functions of this model and are shown to agree with data well. It is also shown that the two-dimensional plot of the logarithm of a nonsinglet moment versus the logarithm of another such moment is not a good way to distinguish this nonpointlike-parton model from the QCD parton model

  15. Wetting at the nanometer scale: effects of long-range forces and substrate heterogeneities

    International Nuclear Information System (INIS)

    Checco, Antonio

    2003-01-01

    Wetting phenomena on the nano-scale remain poorly understood in spite of their growing theoretical and practical interest. In this context, the present work aimed at studying partial wetting of nanometer-sized alkane droplets on 'model' surfaces built by self-assembly of organic monolayers. For this purpose a novel technique, based on 'noncontact' Atomic Force Microscopy (AFM), has been developed to image, with minimal artefacts, drops of adjustable size directly condensed on solid surfaces. We have thus shown that the contact angle of alkanes wetting a weakly heterogeneous, silanized substrate noticeably decreases from its macroscopic value for droplet sizes in the submicron range. The line tension, arising in this case from purely dispersive long-range interactions between the liquid and the substrate, is theoretically too weak to be responsible for the observed effect. Therefore we have supposed that the contact angle is affected by mesoscopic chemical heterogeneities of the substrate whenever the droplet size becomes sufficiently small. This scenario has been supported by numerical simulations based on a simplified model of the spatial distribution of surface defects. Similar experiments, performed on different substrates (monolayers made of alkane-thiols self-assembled on gold and of alkyl chains covalently bound onto a silicon surface), have also shown that wetting on small scales is strongly affected by minimal physical and chemical surface heterogeneities. Finally, to provide further examples of the potential of the above mentioned AFM technique, we have studied the wettability of nano-structured surfaces and the local wetting properties of hair. (author)

  16. Asymptotic solution for the El Niño time delay sea-air oscillator model

    International Nuclear Information System (INIS)

    Mo Jia-Qi; Lin Wan-Tao; Lin Yi-Hua

    2011-01-01

    A sea-air oscillator model is studied using time delay theory. The aim is to find an asymptotic solving method for the El Niño-Southern Oscillation (ENSO) model. Employing the perturbation method, an asymptotic solution of the corresponding problem is obtained. Thus we can obtain prognoses of the sea surface temperature (SST) anomaly and the related physical quantities. (general)
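The delay mechanism in such sea-air oscillator models can be sketched numerically. The snippet below integrates a Suarez-Schopf-type delayed oscillator with illustrative parameter values; it is a generic stand-in for this class of models, not the authors' exact equations or their asymptotic perturbation solution:

```python
import numpy as np

def delayed_oscillator(alpha=0.75, delay=6.0, dt=0.01, t_max=50.0, T0=0.1):
    """Euler integration of a Suarez-Schopf-type delayed oscillator for the
    SST anomaly T:  dT/dt = T - T**3 - alpha * T(t - delay)."""
    n = int(round(t_max / dt))
    lag = int(round(delay / dt))
    T = np.empty(n + 1)
    T[0] = T0
    for i in range(n):
        T_delayed = T[i - lag] if i >= lag else T0  # constant pre-history
        T[i + 1] = T[i] + dt * (T[i] - T[i]**3 - alpha * T_delayed)
    return np.linspace(0.0, t_max, n + 1), T

t, T = delayed_oscillator()
```

The delayed negative feedback term is what allows the SST anomaly to overshoot and oscillate; the cubic damping keeps the trajectory bounded.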

  17. FINE-SCALE STRUCTURE OF THE QUASAR 3C 279 MEASURED WITH 1.3 mm VERY LONG BASELINE INTERFEROMETRY

    Energy Technology Data Exchange (ETDEWEB)

    Lu Rusen; Fish, Vincent L.; Doeleman, Sheperd S.; Crew, Geoffrey; Cappallo, Roger J. [Massachusetts Institute of Technology, Haystack Observatory, Route 40, Westford, MA 01886 (United States); Akiyama, Kazunori; Honma, Mareki [National Astronomical Observatory of Japan, Osawa 2-21-1, Mitaka, Tokyo 181-8588 (Japan); Algaba, Juan C.; Ho, Paul T. P.; Inoue, Makoto [Institute of Astronomy and Astrophysics, Academia Sinica, P.O. Box 23-141, Taipei 10617, Taiwan, R.O.C. (China); Bower, Geoffrey C.; Dexter, Matt [Department of Astronomy, Radio Astronomy Laboratory, University of California Berkeley, 601 Campbell, Berkeley, CA 94720-3411 (United States); Brinkerink, Christiaan [Department of Astrophysics, IMAPP, Radboud University Nijmegen, P.O. Box 9010, 6500-GL Nijmegen (Netherlands); Chamberlin, Richard [Caltech Submillimeter Observatory, 111 Nowelo Street, Hilo, HI 96720 (United States); Freund, Robert [Arizona Radio Observatory, Steward Observatory, University of Arizona, 933 North Cherry Avenue, Tucson, AZ 85721-0065 (United States); Friberg, Per [James Clerk Maxwell Telescope, Joint Astronomy Centre, 660 North A' ohoku Place, University Park, Hilo, HI 96720 (United States); Gurwell, Mark A. [Harvard-Smithsonian Center for Astrophysics, 60 Garden Street, Cambridge, MA 02138 (United States); Jorstad, Svetlana G. [Institute for Astrophysical Research, Boston University, Boston, MA 02215 (United States); Krichbaum, Thomas P. [Max-Planck-Institut fuer Radioastronomie, Auf dem Huegel 69, D-53121 Bonn (Germany); Loinard, Laurent, E-mail: rslu@haystack.mit.edu [Centro de Radiostronomia y Astrofisica, Universidad Nacional Autonoma de Mexico, 58089 Morelia, Michoacan (Mexico); and others

    2013-07-20

    We report results from five day very long baseline interferometry observations of the well-known quasar 3C 279 at 1.3 mm (230 GHz) in 2011. The measured nonzero closure phases on triangles including stations in Arizona, California, and Hawaii indicate that the source structure is spatially resolved. We find an unusual inner jet direction at scales of ∼1 pc extending along the northwest-southeast direction (P.A. = 127° ± 3°), as opposed to other (previously) reported measurements on scales of a few parsecs showing inner jet direction extending to the southwest. The 1.3 mm structure corresponds closely with that observed in the central region of quasi-simultaneous super-resolution Very Long Baseline Array images at 7 mm. The closure phase changed significantly on the last day when compared with the rest of observations, indicating that the inner jet structure may be variable on daily timescales. The observed new direction of the inner jet shows inconsistency with the prediction of a class of jet precession models. Our observations indicate a brightness temperature of ∼8 × 10¹⁰ K in the 1.3 mm core, much lower than that at centimeter wavelengths. Observations with better uv coverage and sensitivity in the coming years will allow the discrimination between different structure models and will provide direct images of the inner regions of the jet with 20-30 μas (5-7 light months) resolution.

  18. FINE-SCALE STRUCTURE OF THE QUASAR 3C 279 MEASURED WITH 1.3 mm VERY LONG BASELINE INTERFEROMETRY

    International Nuclear Information System (INIS)

    Lu Rusen; Fish, Vincent L.; Doeleman, Sheperd S.; Crew, Geoffrey; Cappallo, Roger J.; Akiyama, Kazunori; Honma, Mareki; Algaba, Juan C.; Ho, Paul T. P.; Inoue, Makoto; Bower, Geoffrey C.; Dexter, Matt; Brinkerink, Christiaan; Chamberlin, Richard; Freund, Robert; Friberg, Per; Gurwell, Mark A.; Jorstad, Svetlana G.; Krichbaum, Thomas P.; Loinard, Laurent

    2013-01-01

    We report results from five day very long baseline interferometry observations of the well-known quasar 3C 279 at 1.3 mm (230 GHz) in 2011. The measured nonzero closure phases on triangles including stations in Arizona, California, and Hawaii indicate that the source structure is spatially resolved. We find an unusual inner jet direction at scales of ∼1 pc extending along the northwest-southeast direction (P.A. = 127° ± 3°), as opposed to other (previously) reported measurements on scales of a few parsecs showing inner jet direction extending to the southwest. The 1.3 mm structure corresponds closely with that observed in the central region of quasi-simultaneous super-resolution Very Long Baseline Array images at 7 mm. The closure phase changed significantly on the last day when compared with the rest of observations, indicating that the inner jet structure may be variable on daily timescales. The observed new direction of the inner jet shows inconsistency with the prediction of a class of jet precession models. Our observations indicate a brightness temperature of ∼8 × 10¹⁰ K in the 1.3 mm core, much lower than that at centimeter wavelengths. Observations with better uv coverage and sensitivity in the coming years will allow the discrimination between different structure models and will provide direct images of the inner regions of the jet with 20-30 μas (5-7 light months) resolution.
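The role of the closure phase mentioned above can be illustrated with a few lines of arithmetic: summing the observed baseline phases around a station triangle cancels station-based phase errors exactly, so a nonzero closure phase must come from source structure. The phase values below are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(3)

# Assumed "true" visibility phases (radians) on baselines AB, BC, CA
# of a three-station triangle; values are purely illustrative.
phi_ab, phi_bc, phi_ca = 0.4, -1.1, 0.9

# Large station-based phase errors (atmosphere, instrument) at A, B, C
err_a, err_b, err_c = rng.normal(scale=2.0, size=3)
obs_ab = phi_ab + err_a - err_b
obs_bc = phi_bc + err_b - err_c
obs_ca = phi_ca + err_c - err_a

# Closure phase: sum of observed phases around the triangle.
# The station terms cancel, leaving only source-structure information.
closure = obs_ab + obs_bc + obs_ca
```

For a point source all true phases are zero, so any measured nonzero closure phase implies resolved structure, as exploited in the observations above.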

  19. Murine model of long term obstructive jaundice

    Science.gov (United States)

    Aoki, Hiroaki; Aoki, Masayo; Yang, Jing; Katsuta, Eriko; Mukhopadhyay, Partha; Ramanathan, Rajesh; Woelfel, Ingrid A.; Wang, Xuan; Spiegel, Sarah; Zhou, Huiping; Takabe, Kazuaki

    2016-01-01

    Background With the recent emergence of conjugated bile acids as signaling molecules in cancer, a murine model of obstructive jaundice by cholestasis with long-term survival is in need. Here, we investigated the characteristics of 3 murine models of obstructive jaundice. Methods C57BL/6J mice were used for total ligation of the common bile duct (tCL), partial common bile duct ligation (pCL), and ligation of the left and median hepatic bile ducts with gallbladder removal (LMHL) models. Survival was assessed by the Kaplan-Meier method. Fibrotic change was determined by Masson-Trichrome staining and Collagen expression. Results 70% (7/10) of tCL mice died by Day 7, whereas the majority, 67% (10/15), of pCL mice survived with loss of jaundice. 19% (3/16) of LMHL mice died; however, jaundice continued beyond Day 14, with survival of more than a month. Compensatory enlargement of the right lobe was observed in both pCL and LMHL models. The pCL model demonstrated acute inflammation due to obstructive jaundice 3 days after ligation but jaundice rapidly decreased by Day 7. The LMHL group developed portal hypertension as well as severe fibrosis by Day 14 in addition to prolonged jaundice. Conclusion The standard tCL model is too unstable with high mortality for long-term studies. pCL may be an appropriate model for acute inflammation with obstructive jaundice but long-term survivors are no longer jaundiced. The LMHL model was identified to be the most feasible model to study the effect of long-term obstructive jaundice. PMID:27916350

  20. Loss models for long Josephson junctions

    DEFF Research Database (Denmark)

    Olsen, O. H.; Samuelsen, Mogens Rugholm

    1984-01-01

    A general model for loss mechanisms in long Josephson junctions is presented. An expression for the zero-field step is found for a junction of overlap type by means of a perturbation method. Comparison between analytic solution and perturbation result shows good agreement.

  1. Scale model test on a novel 400 kV double-circuit composite pylon

    DEFF Research Database (Denmark)

    Wang, Qian; Bak, Claus Leth; Silva, Filipe Miguel Faria da

    This paper investigates the lightning shielding performance of a novel 400 kV double-circuit composite pylon using a scale model test. Lightning strikes to overhead lines were simulated by long-gap discharges between a high-voltage electrode carrying an impulse voltage and equivalent conductors...... around the pylon is discussed. Combining the test results with the striking-distance equation of the electro-geometric model, the approximate maximum lightning current that can lead to shielding failure is calculated. Test results verify that the unusual negative shielding angle of -60° in the composite pylon meets...... requirement and that the shielding wires provide acceptable protection from lightning strikes....

  2. Geo-Semantic Framework for Integrating Long-Tail Data and Model Resources for Advancing Earth System Science

    Science.gov (United States)

    Elag, M.; Kumar, P.

    2014-12-01

    Often, scientists and small research groups collect data that target specific issues and have limited geographic or temporal range. A large number of such collections together constitute a large database that is of immense value to Earth Science studies. The complexity of integrating these data includes heterogeneity in dimensions, coordinate systems, scales, variables, providers, users, and contexts. They have been defined as long-tail data. Similarly, we use "long-tail models" to characterize a heterogeneous collection of models and/or modules developed for targeted problems by individuals and small groups, which together provide a large valuable collection. The complexity of integrating across these models includes differing variable names and units for the same concept, model runs at different time steps and spatial resolutions, use of differing naming and reference conventions, etc. The ability to "integrate long-tail models and data" will provide an opportunity for the interoperability and reusability of communities' resources, where not only can models be combined in a workflow, but each model will be able to discover and (re)use data in an application-specific context of space, time, and questions. This capability is essential to represent, understand, predict, and manage heterogeneous and interconnected processes and activities by harnessing the complex, heterogeneous, and extensive set of distributed resources. Because of the staggering production rate of long-tail models and data resulting from advances in computational, sensing, and information technologies, an important challenge arises: how can geoinformatics bring together these resources seamlessly, given the inherent complexity among model and data resources that span various domains? We will present a semantic-based framework to support integration of "long-tail" models and data.
This builds on existing technologies including: (i) SEAD (Sustainable Environmental Actionable Data), which supports curation

  3. Comments on intermediate-scale models

    International Nuclear Information System (INIS)

    Ellis, J.; Enqvist, K.; Nanopoulos, D.V.; Olive, K.

    1987-01-01

    Some superstring-inspired models employ intermediate scales m_I of gauge symmetry breaking. Such scales should exceed 10¹⁶ GeV in order to avoid prima facie problems with baryon decay through heavy particles and non-perturbative behaviour of the gauge couplings above m_I. However, the intermediate-scale phase transition does not occur until the temperature of the Universe falls below O(m_W), after which an enormous excess of entropy is generated. Moreover, gauge symmetry breaking by renormalization group-improved radiative corrections is inapplicable because the symmetry-breaking field has no renormalizable interactions at scales below m_I. We also comment on the danger of baryon and lepton number violation in the effective low-energy theory. (orig.)

  4. A long-term, continuous simulation approach for large-scale flood risk assessments

    Science.gov (United States)

    Falter, Daniela; Schröter, Kai; Viet Dung, Nguyen; Vorogushyn, Sergiy; Hundecha, Yeshewatesfa; Kreibich, Heidi; Apel, Heiko; Merz, Bruno

    2014-05-01

    The Regional Flood Model (RFM) is a process-based model cascade developed for flood risk assessments of large-scale basins. RFM consists of four model parts: the rainfall-runoff model SWIM, a 1D channel routing model, a 2D hinterland inundation model, and the flood loss estimation model for residential buildings FLEMOps+r. The model cascade recently underwent a proof-of-concept study in the Elbe catchment (Germany) to demonstrate that flood risk assessments based on a continuous simulation approach, including rainfall-runoff, hydrodynamic, and damage estimation models, are feasible for large catchments. The results of this study indicated that uncertainties are significant, especially for hydrodynamic simulations. This was basically a consequence of low data quality and disregarding dike breaches. Therefore, RFM was applied with a refined hydraulic model setup for the Elbe tributary Mulde. The study area, the Mulde catchment, comprises about 6,000 km2 and 380 river-km. The inclusion of more reliable information on overbank cross-sections and dikes considerably improved the results. For the application of RFM to flood risk assessments, long-term climate input data are needed to drive the model chain. This model input was provided by a multi-site, multi-variate weather generator that produces sets of synthetic meteorological data reproducing the current climate statistics. The data set comprises 100 realizations of 100 years of meteorological data. With the proposed continuous simulation approach of RFM, we simulated a virtual period of 10,000 years covering the entire flood risk chain, including hydrological, 1D/2D hydrodynamic, and flood damage estimation models. This provided a record of around 2,000 inundation events affecting the study area, with spatially detailed information on inundation depths and damage to residential buildings at a resolution of 100 m. This serves as the basis for a spatially consistent flood risk assessment for the Mulde catchment presented in

  5. Virtual Models of Long-Term Care

    Science.gov (United States)

    Phenice, Lillian A.; Griffore, Robert J.

    2012-01-01

    Nursing homes, assisted living facilities, and home-care organizations use web sites to describe their services to potential consumers. This virtual ethnographic study developed models representing how potential consumers may understand this information, using data from the web sites of 69 long-term-care providers. The content of long-term-care web…

  6. Beyond the Young-Laplace model for cluster growth during dewetting of thin films: effective coarsening exponents and the role of long range dewetting interactions.

    Science.gov (United States)

    Constantinescu, Adi; Golubović, Leonardo; Levandovsky, Artem

    2013-09-01

    Long-range dewetting forces acting across thin films, such as the fundamental van der Waals interactions, may drive the formation of large clusters (tall multilayer islands) and pits, observed in thin films of diverse materials such as polymers, liquid crystals, and metals. In this study we further develop the methodology of the nonequilibrium statistical mechanics of thin-film coarsening within a continuum interface-dynamics model incorporating long-range dewetting interactions. The theoretical test-bench model considered here is a generalization of the classical Mullins model for the dynamics of solid film surfaces. By analytic arguments and simulations of the model, we study the coarsening growth laws of clusters formed in thin films due to the dewetting interactions. The ultimate cluster growth scaling laws at long times are strongly universal: short- and long-range dewetting interactions yield the same coarsening exponents. However, long-range dewetting interactions, such as the van der Waals forces, introduce a distinct long-lasting early-time scaling behavior characterized by a slow growth of the cluster height/lateral size aspect ratio (i.e., a time-dependent Young angle) and by effective coarsening exponents that depend on cluster size. In this study, we develop a theory capable of analytically calculating these effective size-dependent coarsening exponents characterizing the cluster growth in the early-time regime. Such a pronounced early-time scaling behavior has indeed been seen in experiments; however, its physical origin has remained elusive to this date. Our theory attributes these observed phenomena to ubiquitous long-range dewetting interactions acting across thin solid and liquid films. Our results are also applicable to cluster growth in initially very thin fluid films, formed by depositing a few monolayers or by a submonolayer deposition. Under this condition, the dominant coarsening mechanism is diffusive intercluster mass transport while the

  7. Risk assessment of flood disaster and forewarning model at different spatial-temporal scales

    Science.gov (United States)

    Zhao, Jun; Jin, Juliang; Xu, Jinchao; Guo, Qizhong; Hang, Qingfeng; Chen, Yaqian

    2018-05-01

    Aiming at reducing losses from flood disasters, risk assessment of flood disaster and a forewarning model are studied. The model is built upon risk indices in the flood disaster system, proceeding from the whole structure and its parts at different spatial-temporal scales. In this study, on the one hand, a long-term forewarning model for the surface area is established, with three levels of prediction, evaluation, and forewarning. A structure-adaptive back-propagation neural network for peak identification is used to simulate indices in the prediction sub-model. Set pair analysis is employed to calculate the connection degrees of a single index, a comprehensive index, and systematic risk through the multivariate connection number, and the comprehensive assessment is made by assessment matrixes in the evaluation sub-model. The comparison judging method is adopted to divide the warning degree of flood disaster on the risk assessment comprehensive index with forewarning standards in the forewarning sub-model, yielding the long-term local conditions for proposing planning schemes. On the other hand, a real-time forewarning model for the spot is set up, which introduces the real-time correction technique of the Kalman filter based on a hydrological model with a forewarning index, yielding the real-time local conditions for presenting an emergency plan. This study takes Tunxi area, Huangshan City of China, as an example. After risk assessment and forewarning model establishment and application for flood disaster at different spatial-temporal scales between the actual and simulated data from 1989 to 2008, forewarning results show that the development trend for flood disaster risk declines on the whole from 2009 to 2013, despite a rise in 2011. At the macroscopic level, project and non-project measures are advanced, while at the microcosmic level, the time, place, and method are listed. It suggests that the proposed model is feasible in theory and application, thus

  8. Predicting the influence of long-range molecular interactions on macroscopic-scale diffusion by homogenization of the Smoluchowski equation

    Energy Technology Data Exchange (ETDEWEB)

    Kekenes-Huskey, P. M., E-mail: pkekeneshuskey@ucsd.edu [Department of Pharmacology, University of California San Diego, La Jolla, California 92093-0636 (United States); Gillette, A. K. [Department of Mathematics, University of Arizona, Tucson, Arizona 85721-0089 (United States); McCammon, J. A. [Department of Pharmacology, University of California San Diego, La Jolla, California 92093-0636 (United States); Department of Chemistry, Howard Hughes Medical Institute, University of California San Diego, La Jolla, California 92093-0636 (United States)

    2014-05-07

    The macroscopic diffusion constant for a charged diffuser is in part dependent on (1) the volume excluded by solute “obstacles” and (2) long-range interactions between those obstacles and the diffuser. Increasing excluded volume reduces transport of the diffuser, while long-range interactions can either increase or decrease diffusivity, depending on the nature of the potential. We previously demonstrated [P. M. Kekenes-Huskey et al., Biophys. J. 105, 2130 (2013)] using homogenization theory that the configuration of molecular-scale obstacles can both hinder diffusion and induce diffusional anisotropy for small ions. As the density of molecular obstacles increases, van der Waals (vdW) and electrostatic interactions between obstacle and a diffuser become significant and can strongly influence the latter's diffusivity, which was neglected in our original model. Here, we extend this methodology to include a fixed (time-independent) potential of mean force, through homogenization of the Smoluchowski equation. We consider the diffusion of ions in crowded, hydrophilic environments at physiological ionic strengths and find that electrostatic and vdW interactions can enhance or depress effective diffusion rates for attractive or repulsive forces, respectively. Additionally, we show that the observed diffusion rate may be reduced independent of non-specific electrostatic and vdW interactions by treating obstacles that exhibit specific binding interactions as “buffers” that absorb free diffusers. Finally, we demonstrate that effective diffusion rates are sensitive to distribution of surface charge on a globular protein, Troponin C, suggesting that the use of molecular structures with atomistic-scale resolution can account for electrostatic influences on substrate transport. This approach offers new insight into the influence of molecular-scale, long-range interactions on transport of charged species, particularly for diffusion-influenced signaling events
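In one dimension, homogenizing the Smoluchowski equation over a periodic potential yields the classical Lifson-Jackson formula, a compact illustration of how an interaction potential depresses the effective diffusivity. The sketch below assumes a cosine potential with a 1 kT barrier; it illustrates the homogenization idea only, not the authors' three-dimensional treatment:

```python
import numpy as np

def lifson_jackson_deff(U, D0=1.0):
    """Effective diffusion coefficient for 1D diffusion in a periodic
    potential U(x) (in units of kT), from homogenizing the Smoluchowski
    equation (Lifson-Jackson): D_eff = D0 / (<exp(U)> * <exp(-U)>),
    with averages taken over one spatial period."""
    return D0 / (np.exp(U).mean() * np.exp(-U).mean())

# Assumed cosine potential with a 1 kT barrier, sampled over one period
x = np.linspace(0.0, 2.0 * np.pi, 1000, endpoint=False)
U = np.cos(x)
Deff = lifson_jackson_deff(U)   # < D0: the potential hinders transport
```

For U = A cos(x) the two period averages are each the modified Bessel function I0(A), so D_eff = D0 / I0(A)², which the numerical average reproduces.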

  9. Scaling of Precipitation Extremes Modelled by Generalized Pareto Distribution

    Science.gov (United States)

    Rajulapati, C. R.; Mujumdar, P. P.

    2017-12-01

    Precipitation extremes are often modelled with data from annual maximum series or peaks over threshold series. The Generalized Pareto Distribution (GPD) is commonly used to fit the peaks over threshold series. Scaling of precipitation extremes from larger time scales to smaller time scales when the extremes are modelled with the GPD is burdened with difficulties arising from varying thresholds for different durations. In this study, the scale invariance theory is used to develop a disaggregation model for precipitation extremes exceeding specified thresholds. A scaling relationship is developed for a range of thresholds obtained from a set of quantiles of non-zero precipitation of different durations. The GPD parameters and exceedance rate parameters are modelled by the Bayesian approach and the uncertainty in scaling exponent is quantified. A quantile based modification in the scaling relationship is proposed for obtaining the varying thresholds and exceedance rate parameters for shorter durations. The disaggregation model is applied to precipitation datasets of Berlin City, Germany and Bangalore City, India. From both the applications, it is observed that the uncertainty in the scaling exponent has a considerable effect on uncertainty in scaled parameters and return levels of shorter durations.
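The peaks-over-threshold workflow described above can be sketched briefly. The snippet below uses synthetic gamma-distributed "precipitation" and simple method-of-moments GPD estimators rather than the Bayesian fitting of the study; all numeric choices are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(42)
# Synthetic "hourly precipitation" record; the gamma law is an assumption
# for illustration, not a fit to any real gauge.
precip = rng.gamma(shape=0.6, scale=4.0, size=20_000)

# Peaks-over-threshold: excesses above a high quantile of non-zero values
threshold = np.quantile(precip[precip > 0], 0.95)
excess = precip[precip > threshold] - threshold

# Method-of-moments GPD estimators (valid when the shape is below 1/2):
# mean = sigma/(1-xi), var = sigma^2 / ((1-xi)^2 (1-2 xi))
m, v = excess.mean(), excess.var()
shape = 0.5 * (1.0 - m * m / v)   # xi
scale = m * (1.0 - shape)         # sigma

def gpd_quantile(p, xi, sigma):
    """Quantile of the GPD excess distribution at non-exceedance prob. p."""
    if abs(xi) < 1e-9:
        return -sigma * np.log(1.0 - p)
    return sigma / xi * ((1.0 - p) ** (-xi) - 1.0)

# Return level: threshold plus the 99th-percentile excess
return_level = threshold + gpd_quantile(0.99, shape, scale)
```

Repeating the fit for thresholds taken from quantiles of different durations gives the duration-dependent parameters that the scaling relationship in the abstract connects.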

  10. SDG and qualitative trend based model multiple scale validation

    Science.gov (United States)

    Gao, Dong; Xu, Xin; Yin, Jianjin; Zhang, Hongyu; Zhang, Beike

    2017-09-01

    Verification, Validation and Accreditation (VV&A) is a key technology of simulation and modelling. Traditional model validation methods suffer from weak completeness: validation is carried out at a single scale and depends on human experience. An SDG (Signed Directed Graph) and qualitative-trend-based multiple-scale validation method is proposed. First, the SDG model is built and qualitative trends are added to the model. Then, complete testing scenarios are produced by positive inference. The multiple-scale validation is carried out by comparing the testing scenarios with outputs of the simulation model at different scales. Finally, the effectiveness is proved by carrying out validation for a reactor model.
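A minimal sketch of positive inference on a signed directed graph is shown below; the process topology (a hypothetical tank with feed, outflow, level, and pressure) and the clamped qualitative states are illustrative assumptions, not the paper's reactor model:

```python
def propagate(states, edges):
    """One positive-inference step over a signed directed graph:
    each edge (src, dst, sign) pushes sign * trend(src) onto dst,
    with qualitative trends clamped to {-1, 0, +1}."""
    new = dict(states)
    for src, dst, sign in edges:
        if states.get(src, 0) != 0:
            new[dst] = max(-1, min(1, new.get(dst, 0) + sign * states[src]))
    return new

# Hypothetical tank process: feed raises level, outflow lowers it,
# level raises pressure (names and topology are illustrative only)
edges = [("feed", "level", +1), ("outflow", "level", -1), ("level", "pressure", +1)]
states = {"feed": +1, "outflow": 0, "level": 0, "pressure": 0}
s1 = propagate(states, edges)   # feed's upward trend reaches level
s2 = propagate(s1, edges)       # and propagates on to pressure
```

Iterating the propagation until a fixed point enumerates the qualitative trend scenarios that the simulation outputs can then be checked against at each scale.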

  11. Murine model of long-term obstructive jaundice.

    Science.gov (United States)

    Aoki, Hiroaki; Aoki, Masayo; Yang, Jing; Katsuta, Eriko; Mukhopadhyay, Partha; Ramanathan, Rajesh; Woelfel, Ingrid A; Wang, Xuan; Spiegel, Sarah; Zhou, Huiping; Takabe, Kazuaki

    2016-11-01

    With the recent emergence of conjugated bile acids as signaling molecules in cancer, a murine model of obstructive jaundice by cholestasis with long-term survival is in need. Here, we investigated the characteristics of three murine models of obstructive jaundice. C57BL/6J mice were used for total ligation of the common bile duct (tCL), partial common bile duct ligation (pCL), and ligation of the left and median hepatic bile ducts with gallbladder removal (LMHL) models. Survival was assessed by the Kaplan-Meier method. Fibrotic change was determined by Masson-Trichrome staining and Collagen expression. Overall, 70% (7 of 10) of tCL mice died by day 7, whereas the majority, 67% (10 of 15), of pCL mice survived with loss of jaundice. A total of 19% (3 of 16) of LMHL mice died; however, jaundice continued beyond day 14, with survival of more than a month. Compensatory enlargement of the right lobe was observed in both the pCL and LMHL models. The pCL model demonstrated acute inflammation due to obstructive jaundice 3 d after ligation, but jaundice rapidly decreased by day 7. The LMHL group developed portal hypertension and severe fibrosis by day 14 in addition to prolonged jaundice. The standard tCL model is too unstable, with high mortality, for long-term studies. pCL may be an appropriate model for acute inflammation with obstructive jaundice, but long-term survivors are no longer jaundiced. The LMHL model was identified to be the most feasible model to study the effect of long-term obstructive jaundice. Copyright © 2016 Elsevier Inc. All rights reserved.
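The Kaplan-Meier estimator used for the survival comparison can be computed directly. The sketch below feeds it the tCL group's reported counts (7 of 10 deaths by day 7), treating the three survivors as censored at day 7, which is an assumption for illustration:

```python
import numpy as np

def kaplan_meier(times, events):
    """Kaplan-Meier survival curve. times: follow-up (days);
    events: 1 = death observed, 0 = censored. Deaths are processed
    before censorings at tied times, as is conventional."""
    times = np.asarray(times, dtype=float)
    events = np.asarray(events, dtype=int)
    order = np.argsort(times)
    times, events = times[order], events[order]
    n_at_risk = len(times)
    surv, curve = 1.0, []
    for t in np.unique(times):
        at_t = times == t
        deaths = int(events[at_t].sum())
        if deaths:
            surv *= 1.0 - deaths / n_at_risk
        curve.append((float(t), surv))
        n_at_risk -= int(at_t.sum())
    return curve

# tCL group as reported in the abstract: 7 of 10 mice died by day 7
tcl_curve = kaplan_meier([7] * 10, [1] * 7 + [0] * 3)
```

The final survival estimate, 1 - 7/10 = 0.3 at day 7, matches the 70% mortality reported for the tCL model.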

  12. Validation of the Spanish versions of the long (26 items) and short (12 items) forms of the Self-Compassion Scale (SCS).

    Science.gov (United States)

    Garcia-Campayo, Javier; Navarro-Gil, Mayte; Andrés, Eva; Montero-Marin, Jesús; López-Artal, Lorena; Demarzo, Marcelo Marcos Piva

    2014-01-10

    Self-compassion is a key psychological construct for assessing clinical outcomes in mindfulness-based interventions. The aim of this study was to validate the Spanish versions of the long (26 item) and short (12 item) forms of the Self-Compassion Scale (SCS). The translated Spanish versions of both subscales were administered to two independent samples: Sample 1 was comprised of university students (n = 268) who were recruited to validate the long form, and Sample 2 was comprised of Aragon Health Service workers (n = 271) who were recruited to validate the short form. In addition to SCS, the Mindful Attention Awareness Scale (MAAS), the State-Trait Anxiety Inventory-Trait (STAI-T), the Beck Depression Inventory (BDI) and the Perceived Stress Questionnaire (PSQ) were administered. Construct validity, internal consistency, test-retest reliability and convergent validity were tested. The Confirmatory Factor Analysis (CFA) of the long and short forms of the SCS confirmed the original six-factor model in both scales, showing goodness of fit. Cronbach's α for the 26 item SCS was 0.87 (95% CI = 0.85-0.90) and ranged between 0.72 and 0.79 for the 6 subscales. Cronbach's α for the 12-item SCS was 0.85 (95% CI = 0.81-0.88) and ranged between 0.71 and 0.77 for the 6 subscales. The long (26-item) form of the SCS showed a test-retest coefficient of 0.92 (95% CI = 0.89-0.94). The Intraclass Correlation (ICC) for the 6 subscales ranged from 0.84 to 0.93. The short (12-item) form of the SCS showed a test-retest coefficient of 0.89 (95% CI: 0.87-0.93). The ICC for the 6 subscales ranged from 0.79 to 0.91. The long and short forms of the SCS exhibited a significant negative correlation with the BDI, the STAI and the PSQ, and a significant positive correlation with the MAAS. The correlation between the total score of the long and short SCS form was r = 0.92. The Spanish versions of the long (26-item) and short (12-item) forms of the SCS are valid and
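Cronbach's α, the internal-consistency coefficient reported above, is a short computation on an item-score matrix. The data below are synthetic, generated from an assumed single-factor model rather than the SCS samples:

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for an (n_respondents, n_items) score matrix:
    alpha = k/(k-1) * (1 - sum of item variances / variance of total)."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_var_sum = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1.0 - item_var_sum / total_var)

# Synthetic 12-item scale driven by one latent factor (assumed data):
# strong inter-item correlation should give a high alpha.
rng = np.random.default_rng(0)
latent = rng.normal(size=(200, 1))
scores = latent + 0.5 * rng.normal(size=(200, 12))
alpha = cronbach_alpha(scores)
```

With item noise half the size of the latent signal, α lands well above the 0.85 reported for the 12-item SCS; weaker inter-item correlation would lower it.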

  13. Large-scale modeling of rain fields from a rain cell deterministic model

    Science.gov (United States)

    FéRal, Laurent; Sauvageot, Henri; Castanet, Laurent; Lemorton, JoëL.; Cornet, FréDéRic; Leconte, Katia

    2006-04-01

    A methodology to simulate two-dimensional rain rate fields at large scale (1000 × 1000 km2, the scale of a satellite telecommunication beam or a terrestrial fixed broadband wireless access network) is proposed. It relies on a rain rate field cellular decomposition. At small scale (˜20 × 20 km2), the rain field is split up into its macroscopic components, the rain cells, described by the Hybrid Cell (HYCELL) cellular model. At midscale (˜150 × 150 km2), the rain field results from the conglomeration of rain cells modeled by HYCELL. To account for the rain cell spatial distribution at midscale, the latter is modeled by a doubly aggregative isotropic random walk, the optimal parameterization of which is derived from radar observations at midscale. The extension of the simulation area from the midscale to the large scale (1000 × 1000 km2) requires the modeling of the weather frontal area. The latter is first modeled by a Gaussian field with anisotropic covariance function. The Gaussian field is then turned into a binary field, giving the large-scale locations over which it is raining. This transformation requires the definition of the rain occupation rate over large-scale areas. Its probability distribution is determined from observations by the French operational radar network ARAMIS. The coupling with the rain field modeling at midscale is immediate whenever the large-scale field is split up into midscale subareas. The rain field thus generated accounts for the local CDF at each point, defining a structure spatially correlated at small scale, midscale, and large scale. It is then suggested that this approach be used by system designers to evaluate diversity gain, terrestrial path attenuation, or slant path attenuation for different azimuth and elevation angle directions.
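The step of converting a correlated Gaussian field into a binary rain-occupancy mask can be sketched as follows; the grid size, correlation length, and rain occupation rate are assumed values, and the isotropic spectral filter is a generic stand-in for the anisotropic covariance used in the study:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 128                 # grid cells per side (illustrative domain)
corr_len = 10.0         # correlation length in cells (assumed value)
occupation_rate = 0.2   # target fraction of raining cells (assumed)

# Correlated Gaussian field: low-pass filter white noise in Fourier space
white = rng.normal(size=(n, n))
k = np.fft.fftfreq(n)
kx, ky = np.meshgrid(k, k, indexing="ij")
kernel = np.exp(-2.0 * (np.pi * corr_len) ** 2 * (kx ** 2 + ky ** 2))
field = np.real(np.fft.ifft2(np.fft.fft2(white) * kernel))
field = (field - field.mean()) / field.std()

# Threshold the Gaussian field so the binary mask matches the target
# large-scale rain occupation rate
thresh = np.quantile(field, 1.0 - occupation_rate)
rain_mask = field > thresh
```

Each connected raining region of the mask would then be populated with mid-scale rain cells (e.g., HYCELL-type profiles) in the full methodology.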

  14. Land-Atmosphere Coupling in the Multi-Scale Modelling Framework

    Science.gov (United States)

    Kraus, P. M.; Denning, S.

    2015-12-01

    The Multi-Scale Modeling Framework (MMF), in which cloud-resolving models (CRMs) are embedded within general circulation model (GCM) gridcells to serve as the model's cloud parameterization, has offered a number of benefits to GCM simulations. The coupling of these cloud-resolving models directly to land surface model instances, rather than passing averaged atmospheric variables to a single instance of a land surface model, is the logical next step in model development and has recently been accomplished. This new configuration offers conspicuous improvements to estimates of precipitation and canopy through-fall, but overall the model exhibits warm surface temperature biases and low productivity. This work presents modifications to a land-surface model that take advantage of the new multi-scale modeling framework and accommodate the change in spatial scale from a typical GCM range of ~200 km to the CRM grid scale of 4 km. A parameterization is introduced to apportion modeled surface radiation into direct-beam and diffuse components. The diffuse component is then distributed among the land-surface model instances within each GCM cell domain. This substantially reduces the number of excessively low light values provided to the land-surface model when cloudy conditions are modeled in the CRM, a consequence of its 1-D radiation scheme. The small spatial scale of the CRM, ~4 km, as compared with the typical ~200 km GCM scale, provides much more realistic estimates of precipitation intensity; this permits the elimination of a model parameterization of canopy through-fall. However, runoff at such scales can no longer be considered an immediate flow to the ocean. Allowing sub-surface water flow between land-surface instances within the GCM domain affords better realism and also reduces temperature and productivity biases. The MMF affords a number of opportunities to land-surface modelers, providing both the advantages of direct simulation at the 4 km scale and a much reduced

  15. Transdisciplinary application of the cross-scale resilience model

    Science.gov (United States)

    Sundstrom, Shana M.; Angeler, David G.; Garmestani, Ahjond S.; Garcia, Jorge H.; Allen, Craig R.

    2014-01-01

    The cross-scale resilience model was developed in ecology to explain the emergence of resilience from the distribution of ecological functions within and across scales, and as a tool to assess resilience. We propose that the model and the underlying discontinuity hypothesis are relevant to other complex adaptive systems, and can be used to identify and track changes in system parameters related to resilience. We explain the theory behind the cross-scale resilience model, review the cases where it has been applied to non-ecological systems, and discuss some examples of social-ecological, archaeological/ anthropological, and economic systems where a cross-scale resilience analysis could add a quantitative dimension to our current understanding of system dynamics and resilience. We argue that the scaling and diversity parameters suitable for a resilience analysis of ecological systems are appropriate for a broad suite of systems where non-normative quantitative assessments of resilience are desired. Our planet is currently characterized by fast environmental and social change, and the cross-scale resilience model has the potential to quantify resilience across many types of complex adaptive systems.

  16. Comments on intermediate-scale models

    Energy Technology Data Exchange (ETDEWEB)

    Ellis, J.; Enqvist, K.; Nanopoulos, D.V.; Olive, K.

    1987-04-23

    Some superstring-inspired models employ intermediate scales m_I of gauge symmetry breaking. Such scales should exceed 10^16 GeV in order to avoid prima facie problems with baryon decay through heavy particles and non-perturbative behaviour of the gauge couplings above m_I. However, the intermediate-scale phase transition does not occur until the temperature of the Universe falls below O(m_W), after which an enormous excess of entropy is generated. Moreover, gauge symmetry breaking by renormalization group-improved radiative corrections is inapplicable because the symmetry-breaking field has no renormalizable interactions at scales below m_I. We also comment on the danger of baryon and lepton number violation in the effective low-energy theory.

  17. [Antibodies against TSH receptors (TRAb) as indicators in prognosing the effectiveness of Tiamazol therapy for Grave's Disease].

    Science.gov (United States)

    Bojarska-Szmygin, Anna; Ciechanek, Roman

    2003-01-01

    The aim of the study was to evaluate the usefulness of TRAb determinations in prognosing and monitoring the efficacy of conservative treatment in Graves' disease. The examinations were performed in 54 patients. During the 18-month observation all the patients were treated with Tiamazol. The control group consisted of 20 healthy volunteers. The TRAb levels were determined before as well as 12 and 18 months after thyrostatic treatment. Simultaneously, the levels of TSH and FT4 were analysed. Moreover, all the patients underwent ultrasound examinations to assess the size of the thyroid gland. The findings of the 18-month follow-up showed that in 31 patients (57%) the thyroid function became normal (group I, euthyreosis), while in 23 patients (43%) hyperactivity persisted (group II, hyperthyreosis). The TRAb levels were analysed in both groups of patients. An increased initial level of TRAb was found in the hyperactivity group (mean 54.39 +/- 31.21 U/l), which was statistically significantly different from the TRAb levels in the euthyreosis group (mean 29.13 +/- 19.14 U/l) and in controls (mean 2.75 +/- 2.06 U/l) (p Graves' disease. High initial levels of antibodies are poor prognostic factors. The TRAb determinations are of prognostic value not only before but also 12 months after the onset of therapy. The lack of antibody level normalization during treatment is connected with persisting hyperactivity. The TRAb concentration correlates with the thyroid size.

  18. Design and validation of a clinical-scale bioreactor for long-term isolated lung culture.

    Science.gov (United States)

    Charest, Jonathan M; Okamoto, Tatsuya; Kitano, Kentaro; Yasuda, Atsushi; Gilpin, Sarah E; Mathisen, Douglas J; Ott, Harald C

    2015-06-01

    The primary treatment for end-stage lung disease is lung transplantation. However, donor organ shortage remains a major barrier for many patients. In recent years, techniques for maintaining lungs ex vivo for evaluation and short-term (advance to more complex interventions for lung repair and regeneration, the need for a long-term organ culture system becomes apparent. Herein we describe a novel clinical-scale bioreactor capable of maintaining functional porcine and human lungs for at least 72 h in isolated lung culture (ILC). The fully automated, computer-controlled, sterile, closed-circuit system enables physiologic pulsatile perfusion and negative pressure ventilation, while gas exchange function and metabolism can be evaluated. Creation of this stable, biomimetic long-term culture environment will enable advanced interventions in both donor lungs and engineered grafts of human scale.

  19. Modeling Long-Term Fluvial Incision : Shall we Care for the Details of Short-Term Fluvial Dynamics?

    Science.gov (United States)

    Lague, D.; Davy, P.

    2008-12-01

    Fluvial incision laws used in numerical models of coupled climate, erosion and tectonics systems are mainly based on the family of stream power laws, for which the rate of local erosion E is a power function of the topographic slope S and the local mean discharge Q: E = K Q^m S^n. The exponents m and n are generally taken as (0.35, 0.7) or (0.5, 1), and K is chosen such that the predicted topographic elevation, given the prevailing rates of precipitation and tectonics, stays within realistic values. The resulting topographies are reasonably realistic, and the coupled system behaves broadly as expected: more precipitation induces increased erosion and localization of the deformation. Yet, if we now focus on smaller-scale fluvial dynamics (the reach scale), recent advances have suggested that discharge variability, channel width dynamics or sediment flux effects may play a significant role in controlling incision rates. These are not factored into the simple stream power law model. In this work, we study how these short-term details propagate into long-term incision dynamics within the framework of coupled surface/tectonics numerical models. To upscale the short-term dynamics to geological timescales, we use a numerical model of a trapezoidal river in which vertical and lateral incision processes are computed from fluid shear stress at a daily timescale; sediment transport and protection effects are factored in, as well as a variable discharge. We show that the stream power law model might still be a valid model, but that as soon as realistic effects are included, such as a threshold for sediment transport, variable discharge and dynamic width, the resulting exponents m and n can be as high as 2 and 4. This high non-linearity has a profound consequence on the sensitivity of fluvial relief to incision rate. We also show that additional complexity does not systematically translate into more non-linear behaviour. For instance, considering only a dynamical width
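    The sensitivity claim can be illustrated with the steady-state form of the stream power law quoted in the abstract: setting the erosion rate E equal to an uplift rate U gives S = (U / (K Q^m))^(1/n), so the response of channel slope (and hence relief) to uplift weakens sharply as n grows. A small sketch with illustrative values for U, K and Q (not taken from the paper):

```python
def steady_slope(U, K, Q, m, n):
    """Steady-state channel slope from E = K * Q**m * S**n with E = U (uplift)."""
    return (U / (K * Q ** m)) ** (1.0 / n)

# classic exponent pair (m, n) = (0.5, 1) vs. the high-nonlinearity
# pair (2, 4) suggested by the reach-scale model
s1 = steady_slope(U=2e-4, K=1e-5, Q=10.0, m=0.5, n=1.0)
s2 = steady_slope(U=4e-4, K=1e-5, Q=10.0, m=0.5, n=1.0)
s3 = steady_slope(U=2e-4, K=1e-5, Q=10.0, m=2.0, n=4.0)
s4 = steady_slope(U=4e-4, K=1e-5, Q=10.0, m=2.0, n=4.0)

ratio_linear = s2 / s1   # doubling uplift doubles slope when n = 1
ratio_nonlin = s4 / s3   # only a 2**(1/4) increase when n = 4
```

    With n = 4, doubling the uplift rate raises the steady-state slope by only about 19%, which is the "profound consequence" for fluvial relief mentioned above.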

  20. Simultaneous nested modeling from the synoptic scale to the LES scale for wind energy applications

    DEFF Research Database (Denmark)

    Liu, Yubao; Warner, Tom; Liu, Yuewei

    2011-01-01

    This paper describes an advanced multi-scale weather modeling system, WRF–RTFDDA–LES, designed to simulate synoptic scale (~2000 km) to small- and micro-scale (~100 m) circulations of real weather in wind farms on simultaneous nested grids. This modeling system is built upon the National Center f...

  1. Urban scale air quality modelling using detailed traffic emissions estimates

    Science.gov (United States)

    Borrego, C.; Amorim, J. H.; Tchepel, O.; Dias, D.; Rafael, S.; Sá, E.; Pimentel, C.; Fontes, T.; Fernandes, P.; Pereira, S. R.; Bandeira, J. M.; Coelho, M. C.

    2016-04-01

    The atmospheric dispersion of NOx and PM10 was simulated with a second-generation Gaussian model over a medium-size south-European city. Microscopic traffic models calibrated with GPS data were used to derive typical driving cycles for each road link, while instantaneous emissions were estimated applying a combined Vehicle Specific Power/Co-operative Programme for Monitoring and Evaluation of the Long-range Transmission of Air Pollutants in Europe (VSP/EMEP) methodology. Site-specific background concentrations were estimated using time series analysis and a low-pass filter applied to local observations. Air quality modelling results are compared against measurements at two locations for a 1-week period. 78% of the results are within a factor of two of the observations for 1-h average concentrations, increasing to 94% for daily averages. Correlation improves significantly when background is added, with an average of 0.89 for the 24-h record. The results highlight the potential of detailed traffic and instantaneous exhaust emission estimates, together with filtered urban background, to provide accurate input data to Gaussian models applied at the urban scale.
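    As a rough illustration of the Gaussian-model step, the sketch below evaluates the textbook ground-level Gaussian plume formula for a point source. The power-law dispersion coefficients and all parameter values are illustrative stand-ins, not the second-generation model or the emissions data used in the study:

```python
import numpy as np

def gaussian_plume_ground(Q, u, x, y, H, a=0.08, b=0.06):
    """Ground-level concentration downwind of a continuous point source.
    Q: emission rate (g/s), u: wind speed (m/s), x: downwind distance (m),
    y: crosswind distance (m), H: effective release height (m).
    sigma_y, sigma_z follow a simple power-law stand-in for the usual
    stability-class dispersion curves."""
    sigma_y = a * x ** 0.9
    sigma_z = b * x ** 0.85
    return (Q / (np.pi * u * sigma_y * sigma_z)
            * np.exp(-y**2 / (2 * sigma_y**2))
            * np.exp(-H**2 / (2 * sigma_z**2)))

# plume centerline concentration 500 m downwind of a 30 m stack
c = gaussian_plume_ground(Q=10.0, u=4.0, x=500.0, y=0.0, H=30.0)
```

    The concentration is maximal on the centerline (y = 0) and symmetric in the crosswind direction, as the two exponential terms make explicit.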

  2. Modelling cross-scale relationships between climate, hydrology, and individual animals: Generating scenarios for stream salamanders

    Directory of Open Access Journals (Sweden)

    Philippe eGirard

    2015-07-01

    Full Text Available Hybrid modelling provides a unique opportunity to study cross-scale relationships in environmental systems by linking together models of global, regional, landscape, and local-scale processes, yet the approach is rarely applied to address conservation and management questions. Here, we demonstrate how a hybrid modelling approach can be used to assess the effect of cross-scale interactions on the survival of the Allegheny Mountain Dusky Salamander (Desmognathus ochrophaeus) in response to changes in temperature and water availability induced by climate change at the northern limits of its distribution. To do so, we combine regional climate modelling with a landscape-scale integrated surface-groundwater flow model and an individual-based model of stream salamanders. On average, climate scenarios depict a warmer and wetter environment for the 2050 horizon. The increase in average annual temperature and extended hydrological activity time series in the future, combined with a better synchronization with the salamanders' reproduction period, result in a significant increase in the long-term population viability of the salamanders. This indicates that climate change may not necessarily limit the survivability of small, stream-dwelling animals in headwater basins located in cold and humid regions. This new knowledge suggests that habitat conservation initiatives for amphibians with large latitudinal distributions in Eastern North America should be prioritized at the northern limits of their ranges to facilitate species migration and persistence in the face of climate change. This example demonstrates how hybrid models can serve as powerful tools for informing management and conservation decisions.

  3. Quantification of structural uncertainties in multi-scale models; case study of the Lublin Basin, Poland

    Science.gov (United States)

    Małolepszy, Zbigniew; Szynkaruk, Ewa

    2015-04-01

    same degrees of generalization shall be applied to uncertainties. However, the approach to uncertainty assessment and quantification may vary depending on the scale of the model. In small-scale regional and sub-regional models, deterministic modelling methods are used, while stochastic algorithms can be applied for uncertainty modelling in large-scale multi-prospect and field models. We believe that 3D multiscale modelling describing the geological architecture with quantified structural uncertainties, presented on standard deviation maps and grids, will allow us to outline exploration opportunities as well as to refine existing conceptual models and build new ones. As the tectonic setting of the area is the subject of long-term dispute, a model depicting both structures and gaps in geological knowledge at different resolutions should allow us to confirm some of the concepts related to the geological history of the Lublin Basin and to reject or modify the others.

  4. The Scaling LInear Macroweather model (SLIM): using scaling to forecast global scale macroweather from months to decades

    Science.gov (United States)

    Lovejoy, S.; del Rio Amador, L.; Hébert, R.

    2015-03-01

    At scales of ≈ 10 days (the lifetime of planetary scale structures), there is a drastic transition from high frequency weather to low frequency macroweather. This scale is close to the predictability limits of deterministic atmospheric models, so that in GCM macroweather forecasts the weather is a high frequency noise. But neither the GCM noise nor the GCM climate is fully realistic. In this paper we show how simple stochastic models can be developed that use empirical data to force the statistics and climate to be realistic, so that even a two-parameter model can outperform GCMs for annual global temperature forecasts. The key is to exploit the scaling of the dynamics and the enormous stochastic memories that it implies. Since macroweather intermittency is low, we propose using the simplest model based on fractional Gaussian noise (fGn): the Scaling LInear Macroweather model (SLIM). SLIM is based on a stochastic ordinary differential equation, differing from usual linear stochastic models (such as Linear Inverse Modelling, LIM) in that it is of fractional rather than integer order. Whereas LIM implicitly assumes there is no low frequency memory, SLIM has a huge memory that can be exploited. Although the basic mathematical forecast problem for fGn has been solved, we approach the problem in an original manner, notably using the method of innovations to obtain simpler results on forecast skill and on the size of the effective system memory. A key to successful forecasts of natural macroweather variability is to first remove the low frequency anthropogenic component; a previous attempt to use fGn for forecasts had poor results because this was not done. We validate our theory using hindcasts of global and Northern Hemisphere temperatures at monthly and annual resolutions. Several nondimensional measures of forecast skill - with no adjustable parameters - show excellent agreement with hindcasts, and these show some skill even at decadal scales. We also compare
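    The "enormous stochastic memory" of fGn can be made concrete: from the standard fGn autocovariance one can build the optimal linear one-step predictor and observe that the regression weights decay only slowly with lag. The sketch below is a minimal illustration with an assumed Hurst exponent and window length, not the SLIM innovations method itself:

```python
import numpy as np

def fgn_autocov(k, H):
    """Autocovariance gamma(k) of unit-variance fractional Gaussian noise:
    gamma(k) = 0.5 * (|k+1|^2H - 2|k|^2H + |k-1|^2H)."""
    k = np.abs(np.asarray(k, dtype=float))
    return 0.5 * ((k + 1) ** (2 * H) - 2 * k ** (2 * H) + np.abs(k - 1) ** (2 * H))

def one_step_weights(H, n):
    """Optimal linear weights for predicting x[t] from x[t-1], ..., x[t-n],
    i.e. the solution of the Yule-Walker system Gamma w = gamma."""
    lags = np.arange(n)
    Gamma = fgn_autocov(lags[:, None] - lags[None, :], H)  # Toeplitz covariance
    gamma = fgn_autocov(lags + 1, H)                       # cov(x[t], past values)
    return np.linalg.solve(Gamma, gamma)

w = one_step_weights(H=0.9, n=50)
skill = float(fgn_autocov(np.arange(1, 51), 0.9) @ w)  # variance fraction explained
```

    For H close to 1 (persistent macroweather), distant lags still carry usable weight, which is the memory that SLIM exploits and that integer-order models like LIM discard.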

  5. Autotransplantation of immature third molars using a computer-aided rapid prototyping model: a report of 4 cases.

    Science.gov (United States)

    Jang, Ji-Hyun; Lee, Seung-Jong; Kim, Euiseong

    2013-11-01

    Autotransplantation of immature teeth can be an option for premature tooth loss in young patients as an alternative to immediately replacing teeth with fixed or implant-supported prostheses. The present case series reports 4 successful autotransplantation cases using computer-aided rapid prototyping (CARP) models with immature third molars. The compromised upper and lower molars (n = 4) of patients aged 15-21 years were replaced by transplanted third molars using CARP models. Postoperatively, pulp vitality and the development of the roots were examined clinically and radiographically. The follow-up period was 2-7.5 years after surgery. The long-term follow-up showed that all of the transplants were asymptomatic and functional. Radiographic examination indicated that the apices developed continuously and the root length and thickness increased. The final follow-up examination revealed that all of the transplants retained vitality, and the apices were fully developed with normal periodontal ligaments and trabecular bony patterns. Based on long-term follow-up observations, our 4 cases of autotransplantation of immature teeth using CARP models resulted in favorable prognoses. The CARP model assisted in minimizing the extraoral time and possible Hertwig epithelial root sheath injury of the transplanted tooth.

  6. Analysis of chromosome aberration data by hybrid-scale models

    International Nuclear Information System (INIS)

    Indrawati, Iwiq; Kumazawa, Shigeru

    2000-02-01

    This paper presents a new methodology for analyzing data on chromosome aberrations, which is useful for understanding the characteristics of dose-response relationships and for constructing calibration curves for biological dosimetry. The hybrid scale of linear and logarithmic scales yields a particular plotting paper on which the normal section paper, two types of semi-log papers and the log-log paper are continuously connected. The hybrid-hybrid plotting paper may contain nine kinds of linear relationships, conveniently called hybrid-scale models. One can systematically select the best-fit model among the nine by checking the conditions for a straight line through the data points. A biological interpretation is possible with some hybrid-scale models. In this report, the hybrid-scale models were applied to separately reported data on chromosome aberrations in human lymphocytes as well as on chromosome breaks in Tradescantia. The results proved that the proposed models fit the data better than the linear-quadratic model, despite the demerit of an increased number of model parameters. We showed that the hybrid-hybrid model (both dose and response variables on the hybrid scale) provides the best-fit straight lines to be used as reliable and readable calibration curves of chromosome aberrations. (author)
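    The model-selection idea, choosing among straight-line relationships on differently scaled axes, can be sketched as follows. This simplified stand-in compares only the four pure lin/log axis combinations by R² on the transformed scales, rather than the full nine hybrid-scale models of the paper:

```python
import numpy as np

def best_scale_model(x, y):
    """Fit y' = a + b*x' under the four lin/log axis combinations and
    return the best-fitting one by R^2 on the transformed scales."""
    transforms = {
        "lin-lin": (x, y),
        "log-lin": (np.log(x), y),
        "lin-log": (x, np.log(y)),
        "log-log": (np.log(x), np.log(y)),
    }
    best_name, best_r2 = None, -np.inf
    for name, (xt, yt) in transforms.items():
        slope, intercept = np.polyfit(xt, yt, 1)
        resid = yt - (intercept + slope * xt)
        r2 = 1.0 - resid.var() / yt.var()
        if r2 > best_r2:
            best_name, best_r2 = name, r2
    return best_name, best_r2

x = np.linspace(1.0, 10.0, 50)
name, r2 = best_scale_model(x, 3.0 * x ** 1.7)  # pure power-law data
```

    Power-law data plot as an exact straight line on log-log axes, so that combination wins; dose-response data would be screened the same way across the hybrid-scale family.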

  7. Comparison Between Overtopping Discharge in Small and Large Scale Models

    DEFF Research Database (Denmark)

    Helgason, Einar; Burcharth, Hans F.

    2006-01-01

    The present paper presents overtopping measurements from small scale model tests performed at the Hydraulic & Coastal Engineering Laboratory, Aalborg University, Denmark and large scale model tests performed at the Large Wave Channel, Hannover, Germany. Comparison between results obtained from...... small and large scale model tests show no clear evidence of scale effects for overtopping above a threshold value. In the large scale model no overtopping was measured for wave heights below Hs = 0.5m as the water sunk into the voids between the stones on the crest. For low overtopping scale effects...

  8. Long-Term Calculations with Large Air Pollution Models

    DEFF Research Database (Denmark)

    Ambelas Skjøth, C.; Bastrup-Birk, A.; Brandt, J.

    1999-01-01

    Proceedings of the NATO Advanced Research Workshop on Large Scale Computations in Air Pollution Modelling, Sofia, Bulgaria, 6-10 July 1998.

  9. Advances in Large-Scale Solar Heating and Long Term Storage in Denmark

    DEFF Research Database (Denmark)

    Heller, Alfred

    2000-01-01

    According to information from the European Large-Scale Solar Heating Network (see http://www.hvac.chalmers.se/cshp/), the area of installed solar collectors for large-scale applications in Europe is approximately 8 million m2, corresponding to about 4000 MW of thermal power. The 11 plants...... the last 10 years, and the corresponding cost per collector area for the final installed plant has been kept constant even as the solar production increased. Unfortunately, large-scale seasonal storage was not able to keep up with the advances in solar technology, at least for pit water and gravel storage...... of the total 51 plants are equipped with long-term storage. In Denmark, 7 plants are installed, comprising approx. 18,000 m2 of collector area, with new plants planned. The development of these plants and the involved technologies will be presented in this paper, with a focus on the improvements for Danish...

  10. Long-distance entanglement and quantum teleportation in XX spin chains

    International Nuclear Information System (INIS)

    Campos Venuti, L.; Giampaolo, S. M.; Illuminati, F.; Zanardi, P.

    2007-01-01

    Isotropic XX models of one-dimensional spin-1/2 chains are investigated with the aim of elucidating the formal structure and the physical properties that allow these systems to act as channels for long-distance, high-fidelity quantum teleportation. We introduce two types of models: (i) open, dimerized XX chains, and (ii) open XX chains with small end bonds. For both models we obtain the exact expressions for the end-to-end correlations and the scaling of the energy gap with the length of the chain. We determine the end-to-end concurrence and show that model (i) supports true long-distance entanglement at zero temperature, while model (ii) supports 'quasi-long-distance' entanglement that slowly falls off with the size of the chain. Due to the different scalings of the gaps, exponential for model (i) and algebraic for model (ii), we demonstrate that the latter allows for efficient qubit teleportation with high fidelity in sufficiently long chains, even at moderately low temperatures.
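    The contrasting gap scalings can be checked numerically: under the Jordan-Wigner mapping the open XX chain is a free-fermion problem, so the many-body gap follows from diagonalizing a tridiagonal single-particle hopping matrix. The sketch below uses illustrative couplings (a uniform chain versus a simple alternating-bond pattern standing in for the dimerized model (i)); it is not the exact calculation of the paper:

```python
import numpy as np

def xx_gap(couplings):
    """Many-body gap of an open spin-1/2 XX chain at zero magnetization.
    Via Jordan-Wigner the chain maps to free fermions whose single-particle
    energies are the eigenvalues of a tridiagonal matrix with off-diagonal
    entries J_i/2; the gap is the level spacing across the half-filled
    Fermi level."""
    J = np.asarray(couplings, dtype=float)
    n = len(J) + 1                              # number of sites
    h = np.diag(J / 2, 1) + np.diag(J / 2, -1)  # hopping matrix
    eps = np.sort(np.linalg.eigvalsh(h))
    return eps[n // 2] - eps[n // 2 - 1]

# uniform open chain: the gap closes algebraically (~1/N)
g_uniform = [xx_gap(np.ones(n - 1)) for n in (20, 40, 80)]
# alternating strong/weak bonds: a finite band gap survives as N grows
dimer = lambda n: np.array([1.0 if i % 2 == 0 else 0.5 for i in range(n - 1)])
g_dimer = [xx_gap(dimer(n)) for n in (20, 40, 80)]
```

    The uniform-chain gap shrinks steadily with chain length while the dimerized gap saturates, mirroring the algebraic versus exponential distinction that controls teleportation efficiency in the abstract.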

  11. Multi-scale climate modelling over Southern Africa using a variable-resolution global model

    CSIR Research Space (South Africa)

    Engelbrecht, FA

    2011-12-01

    Full Text Available Keywords: multi-scale climate modelling, variable-resolution atmospheric model. Dynamic climate models have become the primary tools for the projection of future climate change, at both the global and regional scales.

  12. Parameter study on dynamic behavior of ITER tokamak scaled model

    International Nuclear Information System (INIS)

    Nakahira, Masataka; Takeda, Nobukazu

    2004-12-01

    This report summarizes a study of the dynamic behavior of the ITER tokamak scaled model under a parametric analysis of base plate thickness, performed in order to find a reasonable solution that gives sufficient rigidity without affecting the dynamic behavior. For this purpose, modal analyses were performed changing the base plate thickness from the present design of 55 mm to 100 mm, 150 mm and 190 mm. Using these results, a modification plan for the plate thickness was studied. It was found that a thickness of 150 mm brings the 1st natural frequency to about 90% of the ideal rigid case. A modification study was therefore performed to find an adequate plate thickness. Considering material availability, transportation and weldability, it was found that a thickness of 300 mm would be the limit. The analysis of the 300 mm case showed the 1st natural frequency at 97% of the ideal rigid case. It was, however, found that the bolt length became too long and introduced an additional twisting mode. As a result, it was concluded that a base plate thickness of 150 mm or 190 mm gives sufficient rigidity for the dynamic behavior of the scaled model. (author)

  13. Modelling accelerated degradation data using Wiener diffusion with a time scale transformation.

    Science.gov (United States)

    Whitmore, G A; Schenkelberg, F

    1997-01-01

    Engineering degradation tests allow industry to assess the potential life span of long-life products that do not fail readily under accelerated conditions in life tests. A general statistical model is presented here for performance degradation of an item of equipment. The degradation process in the model is taken to be a Wiener diffusion process with a time scale transformation. The model incorporates Arrhenius extrapolation for high stress testing. The lifetime of an item is defined as the time until performance deteriorates to a specified failure threshold. The model can be used to predict the lifetime of an item or the extent of degradation of an item at a specified future time. Inference methods for the model parameters, based on accelerated degradation test data, are presented. The model and inference methods are illustrated with a case application involving self-regulating heating cables. The paper also discusses a number of practical issues encountered in applications.
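    The first-passage definition of lifetime can be sketched by Monte Carlo. The snippet below simulates a Wiener degradation process under a power-law time transform tau(t) = t^c (an assumed form; the paper's transformation, Arrhenius acceleration and parameter values may differ) and collects the times at which paths first reach the failure threshold:

```python
import numpy as np

def simulate_lifetimes(mu, sigma, c, threshold,
                       t_max=50.0, dt=0.05, n_paths=500, seed=1):
    """Monte Carlo first-passage times for X(t) = mu*tau(t) + sigma*B(tau(t))
    with tau(t) = t**c; lifetime = first t with X(t) >= threshold."""
    rng = np.random.default_rng(seed)
    t = np.arange(dt, t_max + dt, dt)
    tau = t ** c
    dtau = np.diff(np.concatenate(([0.0], tau)))
    # Brownian increments have variance proportional to the tau increments
    steps = mu * dtau + sigma * np.sqrt(dtau) * rng.standard_normal((n_paths, len(t)))
    paths = np.cumsum(steps, axis=1)
    crossed = paths >= threshold
    first = np.where(crossed.any(axis=1), crossed.argmax(axis=1), len(t) - 1)
    return t[first]

life = simulate_lifetimes(mu=1.0, sigma=0.5, c=0.8, threshold=5.0)
# the median lifetime sits near the deterministic crossing (threshold/mu)**(1/c)
```

    In τ-time the first-passage distribution is the classical inverse Gaussian; the time transform simply maps it back to the operational time axis.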

  14. Factor Structure, Reliability and Measurement Invariance of the Alberta Context Tool and the Conceptual Research Utilization Scale, for German Residential Long Term Care

    Science.gov (United States)

    Hoben, Matthias; Estabrooks, Carole A.; Squires, Janet E.; Behrens, Johann

    2016-01-01

    We translated the Canadian residential long term care versions of the Alberta Context Tool (ACT) and the Conceptual Research Utilization (CRU) Scale into German, to study the association between organizational context factors and research utilization in German nursing homes. The rigorous translation process was based on best practice guidelines for tool translation, and we previously published methods and results of this process in two papers. Both instruments are self-report questionnaires used with care providers working in nursing homes. The aim of this study was to assess the factor structure, reliability, and measurement invariance (MI) between care provider groups responding to these instruments. In a stratified random sample of 38 nursing homes in one German region (Metropolregion Rhein-Neckar), we collected questionnaires from 273 care aides, 196 regulated nurses, 152 allied health providers, 6 quality improvement specialists, 129 clinical leaders, and 65 nursing students. The factor structure was assessed using confirmatory factor models. The first model included all 10 ACT concepts. We also decided a priori to run two separate models for the scale-based and the count-based ACT concepts, as suggested by the instrument developers. The fourth model included the five CRU Scale items. Reliability scores were calculated based on the parameters of the best-fitting factor models. Multiple-group confirmatory factor models were used to assess MI between provider groups. Rather than the hypothesized ten-factor structure of the ACT, confirmatory factor models suggested 13 factors. The one-factor solution of the CRU Scale was confirmed. The reliability was acceptable (>0.7 in the entire sample and in all provider groups) for 10 of 13 ACT concepts, and high (0.90-0.96) for the CRU Scale. We could demonstrate partial strong MI for both ACT models and partial strict MI for the CRU Scale. Our results suggest that the scores of the German ACT and the CRU Scale for nursing

  15. Hysteresis-controlled instability waves in a scale-free driven current sheet model

    Directory of Open Access Journals (Sweden)

    V. M. Uritsky

    2005-01-01

    Full Text Available Magnetospheric dynamics is a complex multiscale process whose statistical features can be successfully reproduced using high-dimensional numerical transport models exhibiting the phenomenon of self-organized criticality (SOC). Along this line of research, a 2-dimensional driven current sheet (DCS) model has recently been developed that incorporates an idealized current-driven instability with a resistive MHD plasma system (Klimas et al., 2004a, b). The dynamics of the DCS model is dominated by the scale-free diffusive energy transport characterized by a set of broadband power-law distribution functions similar to those governing the evolution of multiscale precipitation regions of energetic particles in the nighttime sector of aurora (Uritsky et al., 2002b). The scale-free DCS behavior is supported by localized current-driven instabilities that can communicate in an avalanche fashion over arbitrarily long distances, thus producing current sheet waves (CSW). In this paper, we derive the analytical expression for CSW speed as a function of the plasma parameters controlling local anomalous resistivity dynamics. The obtained relation indicates that CSW propagation requires sufficiently high initial current densities, and predicts a deceleration of CSWs moving from inner plasma sheet regions toward its northern and southern boundaries. We also show that the shape of the time-averaged current density profile in the DCS model is in agreement with the steady-state spatial configuration of critical avalanching models as described by the singular diffusion theory of SOC. Over shorter time scales, SOC dynamics is associated with rather complex spatial patterns and, in particular, can produce bifurcated current sheets often seen in multi-satellite observations.

  16. Long-Term Monitoring of Utility-Scale Solar Energy Development and Application of Remote Sensing Technologies: Summary Report

    Energy Technology Data Exchange (ETDEWEB)

    Hamada, Yuki [Argonne National Lab. (ANL), Argonne, IL (United States). Environmental Science Division; Grippo, Mark A. [Argonne National Lab. (ANL), Argonne, IL (United States). Environmental Science Division; Smith, Karen P. [Argonne National Lab. (ANL), Argonne, IL (United States). Environmental Science Division

    2014-09-30

    In anticipation of increased utility-scale solar energy development over the next 20 to 50 years, federal agencies and other organizations have identified a need to develop comprehensive long-term monitoring programs specific to solar energy development. Increasingly, stakeholders are requesting that federal agencies, such as the U.S. Department of the Interior Bureau of Land Management (BLM), develop rigorous and comprehensive long-term monitoring programs. Argonne National Laboratory (Argonne) is assisting the BLM in developing an effective long-term monitoring plan as required by the BLM Solar Energy Program to study the environmental effects of solar energy development. The monitoring data can be used to protect land resources from harmful development practices while at the same time reducing restrictions on utility-scale solar energy development that are determined to be unnecessary. The development of a long-term monitoring plan that incorporates regional datasets, prioritizes requirements in the context of landscape-scale conditions and trends, and integrates cost-effective data collection methods (such as remote sensing technologies) will translate into lower monitoring costs and increased certainty for solar developers regarding requirements for developing projects on public lands. This outcome will support U.S. Department of Energy (DOE) Sunshot Program goals. For this reason, the DOE provided funding for the work presented in this report.

  17. Evaluation of radiological processes in the Ternopil region by the box model method

    Directory of Open Access Journals (Sweden)

    І.В. Матвєєва

    2006-02-01

    Full Text Available Results of analyses of Sr-90 radionuclide flows in the ecosystem of Kotsubinchiky village, Ternopil oblast, are presented. A block scheme of the ecosystem and its mathematical model were constructed using the box model method. This made it possible to evaluate how internal irradiation dose loads form for different population groups (working people, retirees, children) and to forecast the dynamics of these loads over the years following the Chernobyl accident.
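    The box-model method referred to above can be sketched in a few lines. The three compartments, transfer rates, and unit initial deposit below are illustrative assumptions, not the calibrated values of this study.

```python
import math

# Hypothetical three-box model (soil -> vegetation -> population) of Sr-90 flows.
# Transfer rates k_sv, k_vp and the unit initial deposit are illustrative.
T_HALF = 28.8               # Sr-90 half-life, years
LAM = math.log(2) / T_HALF  # radioactive decay constant, 1/years

def simulate(years=20.0, dt=0.01, k_sv=0.05, k_vp=0.01):
    """Euler integration of the coupled box inventories."""
    soil, veg, pop = 1.0, 0.0, 0.0   # normalised initial deposit in soil
    for _ in range(int(years / dt)):
        f_sv = k_sv * soil           # soil -> vegetation transfer flux
        f_vp = k_vp * veg            # vegetation -> population transfer flux
        soil += dt * (-f_sv - LAM * soil)
        veg  += dt * (f_sv - f_vp - LAM * veg)
        pop  += dt * (f_vp - LAM * pop)
    return soil, veg, pop

soil, veg, pop = simulate()
```

    Replacing the Euler loop with an ODE solver and adding intake/excretion boxes per population group would bring the sketch closer to the dose-assessment setting described in the abstract.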

  18. Empirical spatial econometric modelling of small scale neighbourhood

    Science.gov (United States)

    Gerkman, Linda

    2012-07-01

    The aim of the paper is to model small-scale neighbourhood effects in a house price model by implementing the newest methodology in spatial econometrics. A common problem when modelling house prices is that in practice it is seldom possible to obtain all the desired variables; variables capturing small-scale neighbourhood conditions are especially hard to find. If important explanatory variables are missing from the model, the omitted variables are spatially autocorrelated, and they are correlated with the explanatory variables included in the model, it can be shown that a spatial Durbin model is motivated. In the empirical application to new house price data from Helsinki, Finland, we find motivation for a spatial Durbin model, estimate it, and interpret the estimates of the summary measures of impacts. The analysis shows that the model structure makes it possible to capture small-scale neighbourhood effects when we know they exist but lack proper variables to measure them.
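    For reference, the spatial Durbin model invoked above is conventionally specified as follows (standard textbook form, not reproduced from this paper):

```latex
% y: house prices, X: regressors, W: spatial weight matrix,
% \rho: spatial autoregressive parameter, W X \theta: spatially lagged regressors
y = \rho W y + X \beta + W X \theta + \varepsilon,
\qquad \varepsilon \sim N(0, \sigma^2 I_n)
```

    The spatially lagged regressor term WXθ is what lets neighbouring observations stand in for the missing small-scale neighbourhood variables.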

  19. Heating of field-reversed plasma rings estimated with two scaling models

    Energy Technology Data Exchange (ETDEWEB)

    Shearer, J.W.

    1978-05-18

    Scaling calculations are presented for the one-temperature heating of a field-reversed plasma ring. Two sharp-boundary models of the ring are considered: the long-thin approximation and a pinch model. Isobaric, adiabatic, and isovolumetric cases are considered, corresponding to various ways of heating the plasma in a real experiment, such as injecting neutral beams or raising the magnetic field. It is found that the shape of the plasma changes markedly with heating. The least sensitive shape change (as a function of temperature) is found for the isovolumetric heating case, which can be achieved by combining neutral beam heating with compression. The complications introduced by this heating problem suggest that it is desirable, if possible, to create a field-reversed ring which is already quite hot, rather than cold.

  20. Genome-scale biological models for industrial microbial systems.

    Science.gov (United States)

    Xu, Nan; Ye, Chao; Liu, Liming

    2018-04-01

    The primary aims and challenges associated with microbial fermentation include achieving faster cell growth, higher productivity, and more robust production processes. Genome-scale biological models, which predict the formation of and interactions among genetic material, enzymes, and metabolites, constitute a systematic and comprehensive platform for analyzing and optimizing microbial growth and the production of biological products. Genome-scale biological models can help optimize microbial growth-associated traits by simulating biomass formation, predicting growth rates, and identifying the requirements for cell growth. With regard to microbial product biosynthesis, genome-scale biological models can be used to design product biosynthetic pathways, accelerate production efficiency, and reduce metabolic side effects, leading to improved production performance. The present review discusses the development of microbial genome-scale biological models since their emergence and emphasizes their application in improving the industrial microbial fermentation of biological products.
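    The steady-state constraint at the core of genome-scale metabolic models, S v = 0, can be illustrated with a toy network; the two-metabolite, three-reaction system below is invented for this sketch and is not from the review.

```python
import numpy as np

# Toy network  R1: -> A,  R2: A -> B,  R3: B ->
# Rows of S balance each metabolite; columns are reactions.
S = np.array([[1.0, -1.0,  0.0],   # metabolite A
              [0.0,  1.0, -1.0]])  # metabolite B

# For this full-row-rank S, the last right-singular vector spans the null space,
# i.e. the flux distributions v satisfying the steady-state condition S v = 0.
_, _, Vt = np.linalg.svd(S)
v = Vt[-1]
v = v / v[0]       # normalise so the uptake flux R1 equals 1
print(v)           # the single steady-state mode: equal flux through R1, R2, R3
```

    Genome-scale models add reaction bounds and a biomass objective on top of this constraint and solve the resulting linear program (flux balance analysis).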

  1. Dynamically Scaled Model Experiment of a Mooring Cable

    Directory of Open Access Journals (Sweden)

    Lars Bergdahl

    2016-01-01

    Full Text Available The dynamic response of mooring cables for marine structures is scale-dependent, and perfect dynamic similitude between full-scale prototypes and small-scale physical model tests is difficult to achieve. The best possible scaling is here sought by means of a specific set of dimensionless parameters, and the model accuracy is also evaluated by two alternative sets of dimensionless parameters. A special feature of the presented experiment is that a chain was scaled to have correct propagation celerity for longitudinal elastic waves, thus providing perfect geometrical and dynamic scaling in vacuum, which is unique. The scaling error due to incorrect Reynolds number seemed to be of minor importance. The 33 m experimental chain could then be considered a scaled 76 mm stud chain with the length 1240 m, i.e., at the length scale of 1:37.6. Due to the correct elastic scale, the physical model was able to reproduce the effect of snatch loads giving rise to tensional shock waves propagating along the cable. The results from the experiment were used to validate the newly developed cable-dynamics code, MooDy, which utilises a discontinuous Galerkin FEM formulation. The validation of MooDy proved to be successful for the presented experiments. The experimental data is made available here for validation of other numerical codes by publishing digitised time series of two of the experiments.
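    As a worked example of the quoted 1:37.6 scale, the usual Froude-similitude ratios reproduce the reported chain length. Only the length scale and prototype length come from the record; the other ratios are standard similitude relations, and the paper's special contribution (correctly scaled elastic wave celerity) is not captured here.

```python
import math

LAMBDA = 37.6                       # prototype-to-model length ratio (1:37.6)

model_length = 1240.0 / LAMBDA      # 1240 m prototype cable -> model length, m
velocity_ratio = math.sqrt(LAMBDA)  # Froude scaling: velocities ~ sqrt(length)
time_ratio = math.sqrt(LAMBDA)      # periods scale like velocities
force_ratio = LAMBDA ** 3           # forces scale with displaced volume

print(round(model_length, 1))       # ~33 m, the length of the experimental chain
```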

  2. Up-scaling of multi-variable flood loss models from objects to land use units at the meso-scale

    Science.gov (United States)

    Kreibich, Heidi; Schröter, Kai; Merz, Bruno

    2016-05-01

    Flood risk management increasingly relies on risk analyses, including loss modelling. Most of the flood loss models usually applied in standard practice have in common that complex damaging processes are described by simple approaches like stage-damage functions. Novel multi-variable models significantly improve loss estimation on the micro-scale and may also be advantageous for large-scale applications. However, more input parameters also reveal additional uncertainty, even more in upscaling procedures for meso-scale applications, where the parameters need to be estimated on a regional area-wide basis. To gain more knowledge about challenges associated with the up-scaling of multi-variable flood loss models the following approach is applied: Single- and multi-variable micro-scale flood loss models are up-scaled and applied on the meso-scale, namely on the basis of ATKIS land-use units. Application and validation are undertaken in 19 municipalities, which were affected during the 2002 flood by the River Mulde in Saxony, Germany, by comparison to official loss data provided by the Saxon Relief Bank (SAB). In the meso-scale case study based model validation, most multi-variable models show smaller errors than the uni-variable stage-damage functions. The results show the suitability of the up-scaling approach, and, in accordance with micro-scale validation studies, that multi-variable models are an improvement in flood loss modelling also on the meso-scale. However, uncertainties remain high, stressing the importance of uncertainty quantification. Thus, the development of probabilistic loss models, like BT-FLEMO used in this study, which inherently provide uncertainty information, is the way forward.
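    The contrast between a uni-variable stage-damage function and a multi-variable loss model can be sketched as below; both functional forms and all coefficients are invented placeholders, not the calibrated FLEMO models.

```python
def stage_damage(depth_m):
    """Uni-variable loss ratio from water depth alone (root-type curve, capped at 1)."""
    return min(1.0, 0.27 * depth_m ** 0.5)

def multi_variable(depth_m, contaminated, precaution):
    """Loss ratio adjusted by contamination and private-precaution indicators."""
    loss = stage_damage(depth_m)
    loss *= 1.3 if contaminated else 1.0    # contamination worsens damage
    loss *= 0.8 if precaution else 1.0      # precaution mitigates damage
    return min(1.0, loss)

# Same 2 m inundation depth, different circumstances -> different loss estimates
print(stage_damage(2.0))
print(multi_variable(2.0, contaminated=True, precaution=False))
print(multi_variable(2.0, contaminated=False, precaution=True))
```

    The up-scaling challenge described above is precisely that the extra predictors (contamination, precaution, building type, ...) must be estimated area-wide per land-use unit rather than observed per building.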

  3. Modelling of large-scale structures arising under developed turbulent convection in a horizontal fluid layer (with application to the problem of tropical cyclone origination)

    Directory of Open Access Journals (Sweden)

    G. V. Levina

    2000-01-01

    Full Text Available The work is concerned with the results of theoretical and laboratory modelling of the processes of large-scale structure generation under turbulent convection in a rotating horizontal layer of an incompressible fluid with unstable stratification. The theoretical model describes three alternative ways of creating unstable stratification: heating the layer from below, volumetric heating of the fluid by internal heat sources, and a combination of both factors. Analysis of the model equations shows that, under conditions of high-intensity small-scale convection and a low level of heat loss through the horizontal layer boundaries, a long-wave instability may arise. The condition for the existence of the instability and a criterion identifying the threshold of its initiation have been determined. The principle of action of the discovered instability mechanism is described. Theoretical predictions have been verified by a series of experiments on a laboratory model. The horizontal dimensions of the experimentally obtained long-lived vortices are 4-6 times larger than the thickness of the fluid layer. This work presents a description of the laboratory setup and experimental procedure. From the geophysical viewpoint, the examined long-wave instability mechanism is thought to be adequate for describing the initial step in the evolution of large-scale vortices such as tropical cyclones: the transition from small-scale cumulus clouds to a state of the atmosphere involving cloud clusters (the stage of the initial tropical perturbation).

  4. Post Audit of a Field Scale Reactive Transport Model of Uranium at a Former Mill Site

    Science.gov (United States)

    Curtis, G. P.

    2015-12-01

    Reactive transport of hexavalent uranium (U(VI)) in a shallow alluvial aquifer at a former uranium mill tailings site near Naturita, CO, has been monitored for nearly 30 years by the US Department of Energy and the US Geological Survey. Groundwater at the site has high concentrations of chloride, alkalinity, and U(VI) owing to ore processing at the site from 1941 to 1974. We previously calibrated a multicomponent reactive transport model to data collected at the site from 1986 to 2001. A two-dimensional nonreactive transport model used a uniform hydraulic conductivity estimated from observed chloride concentrations and tritium-helium age dates. A reactive transport model for the 2-km-long site was developed by including an equilibrium U(VI) surface complexation model calibrated to laboratory data and calcite equilibrium. The calibrated model reproduced both the nonreactive tracers and the observed U(VI), pH, and alkalinity. Forward simulations for the period 2002-2015 conducted with the calibrated model predict significantly faster natural attenuation of U(VI) concentrations than the persistent high U(VI) concentrations observed at the site. Alternative modeling approaches are being evaluated using recent data to determine whether the persistence can be explained by multirate mass transfer models developed from experimental observations at the column scale (~0.2 m), the laboratory tank scale (~2 m), the field tracer test scale (~1-4 m), or the geophysical observation scale (~1-5 m). Results of this comparison should provide insight into the persistence of U(VI) plumes and improved management options.

  5. New phenomena in the standard no-scale supergravity model

    CERN Document Server

    Kelley, S; Nanopoulos, Dimitri V; Zichichi, Antonino; Kelley, S; Lopez, J L; Nanopoulos, D V; Zichichi, A

    1994-01-01

    We revisit the no-scale mechanism in the context of the simplest no-scale supergravity extension of the Standard Model. This model has the usual five-dimensional parameter space plus an additional parameter \xi_{3/2} \equiv m_{3/2}/m_{1/2}. We show how predictions of the model may be extracted over the whole parameter space. A necessary condition for the potential to be stable is Str M^4 > 0, which is satisfied if m_{3/2} \lesssim 2 m_{\tilde q}. Order-of-magnitude calculations reveal a no-lose theorem guaranteeing interesting and potentially observable new phenomena in the neutral scalar sector of the theory which would constitute a "smoking gun" of the no-scale mechanism. This new phenomenology is model-independent and divides into three scenarios, depending on the ratio of the weak scale to the vev at the minimum of the no-scale direction. We also calculate the residual vacuum energy at the unification scale (C_0 m^4_{3/2}), and find that in typical models one must require C_0 > 10. Such constrai...

  6. Extra-Tropical Cyclones at Climate Scales: Comparing Models to Observations

    Science.gov (United States)

    Tselioudis, G.; Bauer, M.; Rossow, W.

    2009-04-01

    Climate is often defined as the accumulation of weather, and weather is not the concern of climate models. Justification for the latter sentiment has long been hidden behind coarse model resolutions and blunt validation tools based on climatological maps. The spatio-temporal resolutions of today's climate models and observations are converging onto meteorological scales, however, which means that with the correct tools we can test the largely unproven assumption that climate model weather is correct enough that its accumulation results in a robust climate simulation. Towards this effort we introduce a new tool for extracting detailed cyclone statistics from observations and climate model output. These include the usual cyclone characteristics (centers, tracks), but also adaptive cyclone-centric composites. We have created a novel dataset, the MAP Climatology of Mid-latitude Storminess (MCMS), which provides a detailed 6-hourly assessment of the areas under the influence of mid-latitude cyclones, using a search algorithm that delimits the boundaries of each system from the outermost closed SLP contour. Using this, we then extract composites of cloud, radiation, and precipitation properties from sources such as ISCCP and GPCP to create a large comparative dataset for climate model validation. A demonstration of the potential usefulness of these tools in process-based climate model evaluation studies will be shown.
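    The first step of such a cyclone-statistics tool, detecting candidate cyclone centers as local minima of sea-level pressure, might look as follows. The grid and pressure field are synthetic, and the outermost-closed-contour search used by MCMS is deliberately omitted.

```python
import numpy as np

def find_centers(slp):
    """Return (i, j) grid points that are strict local minima of sea-level pressure."""
    centers = []
    for i in range(1, slp.shape[0] - 1):
        for j in range(1, slp.shape[1] - 1):
            window = slp[i - 1:i + 2, j - 1:j + 2]
            if (window > slp[i, j]).sum() == 8:   # all 8 neighbours strictly higher
                centers.append((i, j))
    return centers

# Synthetic SLP field (hPa): a uniform 1015 hPa background with one deep low
y, x = np.mgrid[0:20, 0:30]
slp = 1015.0 - 30.0 * np.exp(-((y - 10.0) ** 2 + (x - 15.0) ** 2) / 20.0)
print(find_centers(slp))   # the single low at grid point (10, 15)
```

    A full tracker would then link centers across 6-hourly time steps and delimit each system's area of influence from its closed pressure contours.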

  7. The ScaLIng Macroweather Model (SLIMM): using scaling to forecast global-scale macroweather from months to decades

    Science.gov (United States)

    Lovejoy, S.; del Rio Amador, L.; Hébert, R.

    2015-09-01

    On scales of ≈ 10 days (the lifetime of planetary-scale structures), there is a drastic transition from high-frequency weather to low-frequency macroweather. This scale is close to the predictability limits of deterministic atmospheric models; thus, in GCM (general circulation model) macroweather forecasts, the weather is high-frequency noise. However, neither the GCM noise nor the GCM climate is fully realistic. In this paper we show how simple stochastic models can be developed that use empirical data to force the statistics and climate to be realistic, so that even a two-parameter model can perform as well as GCMs for annual global temperature forecasts. The key is to exploit the scaling of the dynamics and the large stochastic memories that we quantify. Since macroweather temporal (but not spatial) intermittency is low, we propose using the simplest model based on fractional Gaussian noise (fGn): the ScaLIng Macroweather Model (SLIMM). SLIMM is based on a stochastic ordinary differential equation, differing from the usual linear stochastic models (such as linear inverse modelling, LIM) in that it is of fractional rather than integer order. Whereas LIM implicitly assumes that there is no low-frequency memory, SLIMM has a huge memory that can be exploited. Although the basic mathematical forecast problem for fGn has been solved, we approach the problem in an original manner, notably using the method of innovations to obtain simpler results on forecast skill and on the size of the effective system memory. A key to successful stochastic forecasts of natural macroweather variability is to first remove the low-frequency anthropogenic component; a previous attempt to use fGn for forecasts had disappointing results because this was not done. We validate our theory using hindcasts of global and Northern Hemisphere temperatures at monthly and annual resolutions. Several nondimensional measures of forecast skill - with no adjustable parameters - show excellent
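    The long-memory idea behind SLIMM can be illustrated with the optimal linear one-step predictor for fractional Gaussian noise; the Hurst exponents and history length below are illustrative choices, not SLIMM's fitted parameters, and this is not the innovations method used in the paper.

```python
import numpy as np

def fgn_autocov(k, H):
    """Autocovariance of unit-variance fractional Gaussian noise at lag k."""
    k = abs(k)
    return 0.5 * ((k + 1) ** (2 * H) - 2 * k ** (2 * H) + abs(k - 1) ** (2 * H))

def one_step_weights(n, H):
    """Weights of the optimal linear predictor of the next value from the last n values."""
    Gamma = np.array([[fgn_autocov(i - j, H) for j in range(n)] for i in range(n)])
    gamma = np.array([fgn_autocov(i + 1, H) for i in range(n)])
    return np.linalg.solve(Gamma, gamma)

# H = 0.5 is memoryless white noise: the past carries no predictive weight.
# H closer to 1 (persistent macroweather) gives non-trivial weights on the past.
print(one_step_weights(5, H=0.5))
print(one_step_weights(5, H=0.9))
```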

  8. Integrating an agent-based model into a large-scale hydrological model for evaluating drought management in California

    Science.gov (United States)

    Sheffield, J.; He, X.; Wada, Y.; Burek, P.; Kahil, M.; Wood, E. F.; Oppenheimer, M.

    2017-12-01

    California has endured record-breaking drought since winter 2011 and will likely experience more severe and persistent droughts in the coming decades under a changing climate. At the same time, human water management practices can also affect drought frequency and intensity, which underscores the importance of human behaviour in effective drought adaptation and mitigation. Currently, although a few large-scale hydrological and water resources models (e.g., PCR-GLOBWB) consider human water use and management practices (e.g., irrigation, reservoir operation, groundwater pumping), none of them includes the dynamic feedback between local human behaviors/decisions and the natural hydrological system. It is, therefore, vital to integrate social and behavioral dimensions into current hydrological modeling frameworks. This study applies the agent-based modeling (ABM) approach and couples it with a large-scale hydrological model (the Community Water Model, CWatM) in order to achieve a balanced representation of social, environmental, and economic factors and a more realistic representation of the bi-directional interactions and feedbacks in coupled human and natural systems. We focus on drought management in California and consider two types of agents, (groups of) farmers and state management authorities, whose objectives are assumed to be maximizing net crop profit and maintaining sufficient water supply, respectively. Farmers' behaviors are linked with local agricultural practices such as cropping patterns and deficit irrigation. More precisely, farmers' decisions are incorporated into CWatM across different time scales in terms of daily irrigation amounts, seasonal/annual decisions on crop types and irrigated area, as well as long-term investment in irrigation infrastructure.
This simulation-based optimization framework is further applied by performing different sets of scenarios to investigate and evaluate the effectiveness
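    A deliberately minimal version of the farmer agent's seasonal decision described above might look like this; the crops, profits, and water demands are invented numbers, not CWatM or study parameters.

```python
# Crop options: profit per hectare and irrigation water demand per hectare.
# All numbers are invented for the sketch.
CROPS = {
    "alfalfa":    (800.0, 8.0),     # (profit $/ha, water demand units/ha)
    "vegetables": (2500.0, 14.0),
    "fallow":     (0.0, 0.0),
}

def choose_crop(water_quota):
    """Greedy farmer agent: pick the most profitable crop whose demand fits the quota."""
    feasible = [(profit, name) for name, (profit, demand) in CROPS.items()
                if demand <= water_quota]
    return max(feasible)[1]

print(choose_crop(15.0))   # ample water
print(choose_crop(10.0))   # drought quota
print(choose_crop(3.0))    # severe curtailment
```

    In the coupled system, the quota would come from the hydrological model and the authority agent, and the chosen crop mix would feed back into CWatM's irrigation demand.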

  9. Up-scaling of multi-variable flood loss models from objects to land use units at the meso-scale

    Directory of Open Access Journals (Sweden)

    H. Kreibich

    2016-05-01

    Full Text Available Flood risk management increasingly relies on risk analyses, including loss modelling. Most of the flood loss models usually applied in standard practice have in common that complex damaging processes are described by simple approaches like stage-damage functions. Novel multi-variable models significantly improve loss estimation on the micro-scale and may also be advantageous for large-scale applications. However, more input parameters also reveal additional uncertainty, even more in upscaling procedures for meso-scale applications, where the parameters need to be estimated on a regional area-wide basis. To gain more knowledge about challenges associated with the up-scaling of multi-variable flood loss models the following approach is applied: Single- and multi-variable micro-scale flood loss models are up-scaled and applied on the meso-scale, namely on the basis of ATKIS land-use units. Application and validation are undertaken in 19 municipalities, which were affected during the 2002 flood by the River Mulde in Saxony, Germany, by comparison to official loss data provided by the Saxon Relief Bank (SAB). In the meso-scale case study based model validation, most multi-variable models show smaller errors than the uni-variable stage-damage functions. The results show the suitability of the up-scaling approach, and, in accordance with micro-scale validation studies, that multi-variable models are an improvement in flood loss modelling also on the meso-scale. However, uncertainties remain high, stressing the importance of uncertainty quantification. Thus, the development of probabilistic loss models, like BT-FLEMO used in this study, which inherently provide uncertainty information, is the way forward.

  10. Site-scale groundwater flow modelling of Aberg

    Energy Technology Data Exchange (ETDEWEB)

    Walker, D. [Duke Engineering and Services (United States); Gylling, B. [Kemakta Konsult AB, Stockholm (Sweden)

    1998-12-01

    The Swedish Nuclear Fuel and Waste Management Company (SKB) SR 97 study is a comprehensive performance assessment illustrating the results for three hypothetical repositories in Sweden. In support of SR 97, this study examines the hydrogeologic modelling of the hypothetical site called Aberg, which adopts input parameters from the Aespoe Hard Rock Laboratory in southern Sweden. This study uses a nested modelling approach, with a deterministic regional model providing boundary conditions to a site-scale stochastic continuum model. The model is run in Monte Carlo fashion to propagate the variability of the hydraulic conductivity to the advective travel paths from representative canister locations. A series of variant cases addresses uncertainties in the inference of parameters and the boundary conditions. The study uses HYDRASTAR, the SKB stochastic continuum groundwater modelling program, to compute the heads, Darcy velocities at each representative canister position, and the advective travel times and paths through the geosphere. The nested modelling approach and the scale dependency of hydraulic conductivity raise a number of questions regarding the regional to site-scale mass balance and the method's self-consistency. The transfer of regional heads via constant head boundaries preserves the regional pattern of recharge and discharge in the site-scale model, and the regional to site-scale mass balance is thought to be adequate. The upscaling method appears to be approximately self-consistent with respect to the median performance measures at various grid scales. A series of variant cases indicates that the study results are insensitive to alternative methods of transferring boundary conditions from the regional model to the site-scale model. The flow paths, travel times and simulated heads appear to be consistent with on-site observations and simple scoping calculations. The variabilities of the performance measures are quite high for the Base Case, but the

  11. Site-scale groundwater flow modelling of Aberg

    International Nuclear Information System (INIS)

    Walker, D.; Gylling, B.

    1998-12-01

    The Swedish Nuclear Fuel and Waste Management Company (SKB) SR 97 study is a comprehensive performance assessment illustrating the results for three hypothetical repositories in Sweden. In support of SR 97, this study examines the hydrogeologic modelling of the hypothetical site called Aberg, which adopts input parameters from the Aespoe Hard Rock Laboratory in southern Sweden. This study uses a nested modelling approach, with a deterministic regional model providing boundary conditions to a site-scale stochastic continuum model. The model is run in Monte Carlo fashion to propagate the variability of the hydraulic conductivity to the advective travel paths from representative canister locations. A series of variant cases addresses uncertainties in the inference of parameters and the boundary conditions. The study uses HYDRASTAR, the SKB stochastic continuum groundwater modelling program, to compute the heads, Darcy velocities at each representative canister position, and the advective travel times and paths through the geosphere. The nested modelling approach and the scale dependency of hydraulic conductivity raise a number of questions regarding the regional to site-scale mass balance and the method's self-consistency. The transfer of regional heads via constant head boundaries preserves the regional pattern of recharge and discharge in the site-scale model, and the regional to site-scale mass balance is thought to be adequate. The upscaling method appears to be approximately self-consistent with respect to the median performance measures at various grid scales. A series of variant cases indicates that the study results are insensitive to alternative methods of transferring boundary conditions from the regional model to the site-scale model. The flow paths, travel times and simulated heads appear to be consistent with on-site observations and simple scoping calculations. The variabilities of the performance measures are quite high for the Base Case, but the

  12. Global long-term ozone trends derived from different observed and modelled data sets

    Science.gov (United States)

    Coldewey-Egbers, M.; Loyola, D.; Zimmer, W.; van Roozendael, M.; Lerot, C.; Dameris, M.; Garny, H.; Braesicke, P.; Koukouli, M.; Balis, D.

    2012-04-01

    The long-term behaviour of stratospheric ozone amounts during the past three decades is investigated on a global scale using different observed and modelled data sets. Data from three European satellite sensors, GOME/ERS-2, SCIAMACHY/ENVISAT, and GOME-2/METOP, are combined, and a merged global monthly mean total ozone product has been prepared using an inter-satellite calibration approach. The data set covers the 16-year period from June 1995 to June 2011 and exhibits the excellent long-term stability required for such trend studies. A multiple linear least-squares regression algorithm using different explanatory variables is applied to the time series, and statistically significant positive trends are detected in the northern mid-latitudes and subtropics. Global trends are also estimated using a second satellite-based Merged Ozone Data set (MOD) provided by NASA. For a few selected geographical regions, ozone trends are additionally calculated using well-maintained measurements from individual Dobson/Brewer ground-based instruments. A reasonable agreement in the spatial patterns of the trends is found amongst the European satellite, NASA satellite, and ground-based observations. Furthermore, two long-term simulations obtained with the chemistry-climate models E39C-A (German Aerospace Center) and UMUKCA-UCAM (University of Cambridge) are analysed.
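    The trend-fitting step can be sketched as an ordinary least-squares fit of a linear trend plus an annual harmonic; the synthetic series below stands in for the merged satellite record, and real analyses add further explanatory variables (solar flux, QBO, ENSO) as extra design-matrix columns.

```python
import numpy as np

rng = np.random.default_rng(0)
t = np.arange(192) / 12.0                  # 16 years of monthly time steps, in years
TRUE_TREND = 0.5                           # DU per year, invented for the sketch
y = TRUE_TREND * t + 5.0 * np.sin(2 * np.pi * t) + rng.normal(0.0, 1.0, t.size)

# Design matrix: intercept, linear trend, and an annual sine/cosine pair
X = np.column_stack([np.ones_like(t), t,
                     np.sin(2 * np.pi * t), np.cos(2 * np.pi * t)])
coef, *_ = np.linalg.lstsq(X, y, rcond=None)
trend = coef[1]                            # recovered trend, DU per year
print(round(trend, 2))                     # close to the true 0.5 DU/yr
```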

  13. Mouse models of long QT syndrome

    Science.gov (United States)

    Salama, Guy; London, Barry

    2007-01-01

    Congenital long QT syndrome is a rare inherited condition characterized by prolongation of action potential duration (APD) in cardiac myocytes, prolongation of the QT interval on the surface electrocardiogram (ECG), and an increased risk of syncope and sudden death due to ventricular tachyarrhythmias. Mutations of cardiac ion channel genes that affect repolarization cause the majority of the congenital cases. Despite detailed characterizations of the mutated ion channels at the molecular level, a complete understanding of the mechanisms by which individual mutations may lead to arrhythmias and sudden death requires study of the intact heart and its modulation by the autonomic nervous system. Here, we will review studies of molecularly engineered mice with mutations in the genes (a) known to cause long QT syndrome in humans and (b) specific to cardiac repolarization in the mouse. Our goal is to provide the reader with a comprehensive overview of mouse models with long QT syndrome and to emphasize the advantages and limitations of these models. PMID:17038432

  14. Physically representative atomistic modeling of atomic-scale friction

    Science.gov (United States)

    Dong, Yalin

    Nanotribology is a research field that studies friction, adhesion, wear, and lubrication between two sliding interfaces at the nanoscale. This study is motivated by the demand for miniaturized mechanical components in Micro Electro Mechanical Systems (MEMS), improved durability in magnetic storage systems, and other industrial applications. Overcoming tribological failure and finding ways to control friction at small scales have become keys to commercializing MEMS with sliding components, as well as to stimulating the technological innovation associated with the development of MEMS. Beyond the industrial applications, such research is also scientifically fascinating because it opens a door to understanding macroscopic friction from the atomic level up, and therefore serves as a bridge between science and engineering. This thesis focuses on solid/solid atomic friction and its associated energy dissipation through theoretical analysis, atomistic simulation, transition state theory, and close collaboration with experimentalists. Reduced-order models have many advantages owing to their simplicity and their capacity to simulate long-time events. We apply Prandtl-Tomlinson models and their extensions to interpret dry atomic-scale friction. We begin with the fundamental equations and build on them step by step, from the simple quasistatic one-spring, one-mass model for predicting transitions between friction regimes to the two-dimensional and multi-atom models for describing the effect of contact area. Theoretical analysis, numerical implementation, and predicted physical phenomena are all discussed. In the process, we demonstrate the significant potential for this approach to yield new fundamental understanding of atomic-scale friction. Atomistic modeling can never be overemphasized in the investigation of atomic friction, in which each single atom could play a significant role but is hard to capture experimentally. In atomic friction, the
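    The quasistatic one-spring, one-mass Prandtl-Tomlinson model mentioned above can be sketched as follows. All parameter values are illustrative, and the relaxation is done by plain damped gradient descent rather than by the thesis's methods.

```python
import math

U0 = 0.3    # corrugation amplitude of the surface potential (eV), illustrative
A = 0.25    # lattice constant (nm)
K = 5.0     # lateral spring stiffness (eV/nm^2)

def relax(support, x0):
    """Damped gradient descent to a local minimum of
    U(x) = -(U0/2) cos(2 pi x / A) + (K/2) (x - support)^2."""
    x = x0
    for _ in range(3000):
        force = -(math.pi * U0 / A) * math.sin(2 * math.pi * x / A) - K * (x - support)
        x += 2e-3 * force
    return x

# Drag the support slowly; the tip sticks, then slips one lattice site at a time,
# producing the sawtooth lateral-force trace characteristic of atomic stick-slip.
lateral_force, x = [], 0.0
for step in range(400):
    support = step * 0.005             # support position (nm)
    x = relax(support, x)
    lateral_force.append(K * (support - x))
```

    Stick-slip appears here because the corrugation dominates the spring stiffness (2π²U0/KA² > 1); a stiffer spring would give smooth, nearly dissipationless sliding.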

  15. THE ANALYSIS OF THE COMMODITY PRICE FORECASTING SUCCESS CONSIDERING DIFFERENT LENGTHS OF THE INITIAL CONDITION DRIFT

    Directory of Open Access Journals (Sweden)

    Marcela Lascsáková

    2015-09-01

    Full Text Available In the paper, a numerical model based on exponential approximation of commodity stock exchange prices is derived. Price prognoses for aluminium on the London Metal Exchange were determined as the numerical solution of a Cauchy initial value problem for a first-order ordinary differential equation. To make the numerical model more accurate, the initial condition value was modified ("drifted") using observed stock exchange prices. By analyzing the forecasting success of the chosen initial condition drift types, the drift providing the most accurate prognoses of commodity price movements was determined. The suggested modification of the original model made the commodity price prognoses more accurate.
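    A stripped-down version of the initial-condition-drift idea might look like this; the exponential growth rate, the toy price series, and the reset rule are invented for illustration, not the calibrated aluminium model.

```python
def forecast(prices, growth, reset_every):
    """Euler solution of y' = growth * y, re-initialised to the latest observed
    price every `reset_every` steps (the initial condition drift)."""
    y = prices[0]
    out = []
    for t in range(1, len(prices)):
        if t % reset_every == 0:
            y = prices[t - 1]      # drift the initial condition to market data
        y += growth * y            # one Euler step, dt = 1
        out.append(y)
    return out

observed = [100.0, 102.0, 103.0, 107.0, 110.0, 115.0]
with_drift = forecast(observed, growth=0.02, reset_every=2)
no_drift = forecast(observed, growth=0.02, reset_every=10 ** 9)
```

    Resetting the initial condition to the latest quoted price keeps the exponential solution anchored to the market, which is the mechanism the paper credits for the improved accuracy.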

  16. The Multi-Scale Model Approach to Thermohydrology at Yucca Mountain

    International Nuclear Information System (INIS)

    Glascoe, L; Buscheck, T A; Gansemer, J; Sun, Y

    2002-01-01

    The Multi-Scale Thermo-Hydrologic (MSTH) process model is a modeling abstraction of the thermal hydrology (TH) of the potential Yucca Mountain repository at multiple spatial scales. The MSTH model as described herein was used for the Supplemental Science and Performance Analyses (BSC, 2001) and is documented in detail in CRWMS M and O (2000) and Glascoe et al. (2002). The model has been validated against a nested grid model in Buscheck et al. (In Review). The MSTH approach is necessary for modeling thermal hydrology at Yucca Mountain for two reasons: (1) varying levels of detail are necessary at different spatial scales to capture important TH processes, and (2) a fully coupled TH model of the repository that includes the necessary spatial detail is computationally prohibitive. The MSTH model consists of six ''submodels'' which are combined in a manner that reduces the complexity of modeling where appropriate. The coupling of these models allows for appropriate consideration of mountain-scale thermal hydrology along with the thermal hydrology of drift-scale discrete waste packages of varying heat load. Two stages are involved in the MSTH approach: first, the execution of submodels, and second, the assembly of submodels using the Multi-scale Thermohydrology Abstraction Code (MSTHAC). MSTHAC assembles the submodels in a five-step process culminating in the TH model output of discrete waste packages including a mountain-scale influence

  17. Modeling the effects of LID practices on streams health at watershed scale

    Science.gov (United States)

    Shannak, S.; Jaber, F. H.

    2013-12-01

    Increasing impervious cover due to urbanization leads to increased runoff volumes and eventually increased flooding. Stream channels adjust by widening and eroding stream banks, which negatively impacts downstream property (Chin and Gregory, 2001). Also, urban runoff drains into sediment bank areas known as riparian zones and constricts stream channels (Walsh, 2009). Both physical and chemical factors associated with urbanization, such as high peak flows and low water quality, further stress aquatic life and contribute to the overall biological condition of urban streams (Maxted et al., 1995). While LID practices have been mentioned and studied in the literature for stormwater management, they have not been studied with respect to reducing potential impacts on stream health. To evaluate the performance and effectiveness of LID practices at a watershed scale, a sustainable detention pond, bioretention, and permeable pavement will be modeled at watershed scale. These measures affect storm peak flows and base flow patterns over long periods, and there is a need to characterize their effect on stream bank and bed erosion and on aquatic life. These measures will create a linkage between urban watershed development and stream conditions, specifically biological health. The first phase of this study is to design and construct LID practices at the Texas A&M AgriLife Research and Extension Center-Dallas, TX, to collect field data about the performance of these practices on a smaller scale. The second phase consists of simulating the performance of LID practices on a watershed scale. This simulation presents a long-term (23-year) model using SWAT to evaluate the potential impacts of these practices on potential stream bank and bed erosion and on aquatic life in the Blunn Watershed located in Austin, TX. Sub-daily time step model simulations will be developed to simulate the effectiveness of the three LID practices with respect to reducing

  18. Scaled Experimental Modeling of VHTR Plenum Flows

    Energy Technology Data Exchange (ETDEWEB)

    ICONE 15

    2007-04-01

    The Very High Temperature Reactor (VHTR) is the leading candidate for the Next Generation Nuclear Plant (NGNP) Project in the U.S., which has the goal of demonstrating the production of emissions-free electricity and hydrogen by 2015. Various scaled heated gas and water flow facilities were investigated for modeling VHTR upper and lower plenum flows during the decay heat portion of a pressurized conduction-cooldown scenario and for modeling thermal mixing and stratification (“thermal striping”) in the lower plenum during normal operation. It was concluded, based on phenomena scaling, instrumentation and other practical considerations, that a heated water flow scale model facility is preferable to a heated gas flow facility and to unheated facilities which use fluids with a range of densities to simulate the density effect of heating. For a heated water flow lower plenum model, both the Richardson numbers and Reynolds numbers may be approximately matched for conduction-cooldown natural circulation conditions. Thermal mixing during normal operation may be simulated, but at lower, still fully turbulent, Reynolds numbers than in the prototype. Natural circulation flows in the upper plenum may also be simulated in a separate heated water flow facility that uses the same plumbing as the lower plenum model. However, Reynolds number scaling distortions will occur at matching Richardson numbers, due primarily to the necessity of using fewer channels connected to the plenum than in the prototype (which has approximately 11,000 core channels connected to the upper plenum) in an otherwise geometrically scaled model. Experiments conducted in either or both facilities will meet the objectives of providing benchmark data for the validation of codes proposed for NGNP designs and safety studies, as well as providing a better understanding of the complex flow phenomena in the plenums.
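The Richardson/Reynolds matching argument above can be illustrated numerically: a reduced-scale heated-water model can match the prototype Richardson number Ri = gβΔT·L/U² exactly by choosing the model velocity, while its Reynolds number Re = U·L/ν falls but can remain fully turbulent. All property values below are illustrative assumptions, not NGNP design data.

```python
import math

def richardson(beta, dT, L, U, g=9.81):
    """Ri = g * beta * dT * L / U**2 (buoyancy vs. inertia)."""
    return g * beta * dT * L / U**2

def reynolds(U, L, nu):
    """Re = U * L / nu (inertia vs. viscosity)."""
    return U * L / nu

# Illustrative gas-prototype values -- assumptions, not design data.
Ri_p = richardson(beta=3.3e-3, dT=50.0, L=1.0, U=0.4)

# Quarter-scale water model: choose U_m so that Ri matches the prototype.
beta_w, dT_w, L_m, nu_w = 3.0e-4, 30.0, 0.25, 1.0e-6
U_m = math.sqrt(9.81 * beta_w * dT_w * L_m / Ri_p)

print(round(richardson(beta_w, dT_w, L_m, U_m) / Ri_p, 6))  # 1.0: Ri matched
print(reynolds(U_m, L_m, nu_w) > 4000)                      # True: still turbulent
```

The distortion the abstract mentions is visible here: Ri is matched exactly by construction, while the model Re is orders of magnitude below the prototype's yet above the turbulence threshold.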

  19. Long term modelling in a second rank world: application to climate policies; Modeliser le long terme dans un monde de second rang: application aux politiques climatiques

    Energy Technology Data Exchange (ETDEWEB)

    Crassous, R

    2008-11-15

    This research aims to identify the sources of dissatisfaction with existing climate models, to design an innovative modelling architecture that addresses them, and to propose pathways for climate policy assessment. The author gives a critical assessment of modelling activity in the field of climate policies, and points out that the large number and scattering of existing long-term scenarios hide weak control of uncertainties and of the internal consistency of the produced paths, as well as the very small number of modelling paradigms. After an in-depth analysis of modelling practices, the author presents the IMACLIM-R modelling architecture, which operates on a world scale, comprises 12 regions and 12 sectors, and simulates evolutions to 2050, and even 2100, with a one-year time step. The author describes a scenario without any climate policy and highlights how the innovations of IMACLIM-R allow reassessing economic trajectories that would stabilise greenhouse gas concentrations in the long term. He outlines possibilities for adjusting and refining climate policies so as to robustly limit transition cost risks.

  20. Exploiting the atmosphere's memory for monthly, seasonal and interannual temperature forecasting using Scaling LInear Macroweather Model (SLIMM)

    Science.gov (United States)

    Del Rio Amador, Lenin; Lovejoy, Shaun

    2016-04-01

    Traditionally, most models for predicting the behavior of the atmosphere in the macroweather and climate regimes follow a deterministic approach. However, modern ensemble forecasting systems using stochastic parameterizations are in fact deterministic/stochastic hybrids that combine both elements to yield a statistical distribution of future atmospheric states. Nevertheless, the result is highly complex, both numerically and theoretically, as well as theoretically eclectic. In principle, it should be advantageous to exploit higher-level turbulence-type scaling laws. Concretely, in the case of general circulation models (GCMs), due to sensitive dependence on initial conditions, there is a deterministic predictability limit of the order of 10 days. When these models are coupled with ocean, cryosphere and other process models to make long-range climate forecasts, the high-frequency "weather" is treated as a driving noise in the integration of the modelling equations. Following Hasselmann (1976), this has led to stochastic models that directly generate the noise and model the low frequencies using systems of integer-ordered linear ordinary differential equations, the best known of which are the Linear Inverse Models (LIMs). For annual, global-scale forecasts, they are somewhat superior to the GCMs and have been presented as a benchmark for surface temperature forecasts with horizons up to decades. A key limitation of the LIM approach is that it assumes the temperature has only short-range (exponential) decorrelations. In contrast, an increasing body of evidence shows that - as with the models - the atmosphere respects a scale-invariance symmetry, leading to power laws with potentially enormous memories, so that LIM greatly underestimates the memory of the system.
In this talk we show that, due to the relatively low macroweather intermittency, the simplest scaling models - fractional Gaussian noise - can be used for making greatly improved forecasts
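The contrast between exponential and power-law memory can be made concrete with the minimum-variance linear predictor for fractional Gaussian noise, whose autocovariance is γ(k) = ½(|k+1|^2H − 2|k|^2H + |k−1|^2H). This is only an illustration of the memory-exploitation principle, not the SLIMM implementation itself.

```python
import numpy as np

def fgn_acov(k, H):
    """Autocovariance of unit-variance fractional Gaussian noise at lag k."""
    k = abs(k)
    return 0.5 * ((k + 1)**(2*H) - 2*k**(2*H) + abs(k - 1)**(2*H))

def predictor_weights(n, H):
    """Weights of the minimum-variance linear predictor of X_{n+1}
    from X_1..X_n (solve Gamma w = c, the Yule-Walker/kriging system)."""
    Gamma = np.array([[fgn_acov(i - j, H) for j in range(n)] for i in range(n)])
    c = np.array([fgn_acov(n - i, H) for i in range(n)])  # cov(X_{n+1}, X_i)
    return np.linalg.solve(Gamma, c)

w_white = predictor_weights(20, H=0.50)   # H = 0.5: white noise, no memory
w_long  = predictor_weights(20, H=0.90)   # H = 0.9: strong persistence

print(np.abs(w_white).max() < 1e-10)      # True: nothing is predictable
print(w_long[-1] > w_long[0] > 0)         # True: all past values carry weight
```

For H = 0.5 the covariance matrix is the identity and every weight vanishes, while for H near 1 even the oldest observations retain positive weight, which is exactly the long memory that LIM-type exponential models discard.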

  1. Long-term stability of the Wechsler Intelligence Scale for Children--Fourth Edition.

    Science.gov (United States)

    Watkins, Marley W; Smith, Lourdes G

    2013-06-01

    Long-term stability of the Wechsler Intelligence Scale for Children-Fourth Edition (WISC-IV; Wechsler, 2003) was investigated with a sample of 344 students from 2 school districts twice evaluated for special education eligibility at an average interval of 2.84 years. Test-retest reliability coefficients for the Verbal Comprehension Index (VCI), Perceptual Reasoning Index (PRI), Working Memory Index (WMI), Processing Speed Index (PSI), and the Full Scale IQ (FSIQ) were .72, .76, .66, .65, and .82, respectively. As predicted, the test-retest reliability coefficients for the subtests (Mdn = .56) were generally lower than the index scores (Mdn = .69) and the FSIQ (.82). On average, subtest scores did not differ by more than 1 point, and index scores did not differ by more than 2 points across the test-retest interval. However, 25% of the students earned FSIQ scores that differed by 10 or more points, and 29%, 39%, 37%, and 44% of the students earned VCI, PRI, WMI, and PSI scores, respectively, that varied by 10 or more points. Given this variability, it cannot be assumed that WISC-IV scores will be consistent across long test-retest intervals for individual students.
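The two quantities reported above, the test-retest (Pearson) coefficient and the share of examinees whose scores shift by 10 or more points, can be computed as follows. The scores below are synthetic, generated under an assumed true-score-plus-error model, not the WISC-IV data.

```python
import numpy as np

def retest_stats(score1, score2, threshold=10):
    """Pearson test-retest coefficient and the fraction of cases whose
    scores differ by `threshold` or more between administrations."""
    s1, s2 = np.asarray(score1, float), np.asarray(score2, float)
    r = np.corrcoef(s1, s2)[0, 1]
    unstable = np.mean(np.abs(s1 - s2) >= threshold)
    return r, unstable

rng = np.random.default_rng(0)
true_ability = rng.normal(100, 15, size=344)        # IQ-like metric, n = 344
test = true_ability + rng.normal(0, 6, size=344)    # measurement error, time 1
retest = true_ability + rng.normal(0, 6, size=344)  # measurement error, time 2

r, frac = retest_stats(test, retest)
print(r > 0.75, 0.0 < frac < 1.0)
```

Note how a respectable group-level coefficient coexists with a non-trivial fraction of individuals shifting by 10+ points, which is the article's central caution.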

  2. Development of lichen response indexes using a regional gradient modeling approach for large-scale monitoring of forests

    Science.gov (United States)

    Susan Will-Wolf; Peter Neitlich

    2010-01-01

    Development of a regional lichen gradient model from community data is a powerful tool to derive lichen indexes of response to environmental factors for large-scale and long-term monitoring of forest ecosystems. The Forest Inventory and Analysis (FIA) Program of the U.S. Department of Agriculture Forest Service includes lichens in its national inventory of forests of...

  3. Long-range correlations from colour confinement

    International Nuclear Information System (INIS)

    Jurkiewicz, J.; Zenczykowski, P.

    1979-01-01

    A class of independent parton emission models is generalized by the introduction of colour degrees of freedom. In the proposed models colour confinement enforces strong long-range forward-backward correlations, the rise of the one-particle inclusive distribution and KNO scaling. It leads to analytically calculable, definite asymptotic predictions for the D/ ratio, which depend only on the choice of the colour group. The multiplicity distribution develops a remarkably long tail. (author)

  4. Study of the interaction of a 10 TW femtosecond laser with a high-density long-scale pulsed gas jet

    International Nuclear Information System (INIS)

    Monot, P.; D'Oliveira, P.; Hulin, S.; Faenov, A.Ya.; Dobosz, S.; Auguste, T.; Pikuz, T.A.; Magunov, A.I.; Skobelev, I.Yu.; Rosmej, F.; Andreev, N.E.; Lefebvre, E.

    2001-01-01

    A study of the interaction of a 10 TW, 60 fs Ti:sapphire laser with a high-density, long-scale pulsed nitrogen gas jet is reported. Experimental data on the laser propagation are analyzed with the help of a ray-tracing model. The plasma dynamics is investigated by means of time-resolved shadowgraphy and time-integrated high-resolution x-ray spectroscopy. Shadowgrams show that the plasma does not expand during the first 55 ps, while x-ray spectra exhibit an unusual continuum-like structure attributed to hollow atoms produced by a charge-exchange process between bare nuclei expelled from the plasma and molecules of the surrounding gas. The interpretation of the results is supported by particle-in-cell simulations. The question of x-ray lasing is also examined using a hydrodynamic code to simulate the long-lasting regime of recombination

  5. Optimizing the design of large-scale ground-coupled heat pump systems using groundwater and heat transport modeling

    Energy Technology Data Exchange (ETDEWEB)

    Fujii, H.; Itoi, R.; Fujii, J. [Kyushu University, Fukuoka (Japan). Faculty of Engineering, Department of Earth Resources Engineering; Uchida, Y. [Geological Survey of Japan, Tsukuba (Japan)

    2005-06-01

    In order to predict the long-term performance of large-scale ground-coupled heat pump (GCHP) systems, it is necessary to take into consideration well-to-well interference, especially in the presence of groundwater flow. A mass and heat transport model was developed to simulate the behavior of this type of system in the Akita Plain, northern Japan. The model was used to investigate different operational schemes and to maximize the heat extraction rate from the GCHP system. (author)

  6. Biology meets Physics: Reductionism and Multi-scale Modeling of Morphogenesis

    DEFF Research Database (Denmark)

    Green, Sara; Batterman, Robert

    2017-01-01

    A common reductionist assumption is that macro-scale behaviors can be described "bottom-up" if only sufficient details about lower-scale processes are available. The view that an "ideal" or "fundamental" physics would be sufficient to explain all macro-scale phenomena has been met with criticism from philosophers of biology. Specifically, scholars have pointed to the impossibility of deducing biological explanations from physical ones, and to the irreducible nature of distinctively biological processes such as gene regulation and evolution. This paper takes a step back in asking whether bottom-up modeling is feasible even when modeling simple physical systems across scales. By comparing examples of multi-scale modeling in physics and biology, we argue that the “tyranny of scales” problem presents a challenge to reductive explanations in both physics and biology. The problem refers to the scale-dependence that becomes apparent in multi-scale modeling in developmental biology. In such contexts, the relation between models at different scales and from different disciplines is neither reductive nor completely autonomous, but interdependent.

  7. Modeling Wettability Variation during Long-Term Water Flooding

    Directory of Open Access Journals (Sweden)

    Renyi Cao

    2015-01-01

    Surface properties of rock affect oil recovery during water flooding. Oil-wet polar substances adsorbed on the surface of the rock are gradually desorbed during water flooding, and the original reservoir wettability changes towards water-wet; this change reduces the residual oil saturation and improves the oil displacement efficiency. However, there is a lack of an accurate description of the wettability alteration model during long-term water flooding, which leads to difficulties in history matching and unreliable forecasts using reservoir simulators. This paper summarizes the mechanism of wettability variation, characterizes the adsorption of polar substances during long-term water flooding from injected water or the aquifer, and relates the residual oil saturation and relative permeability to the polar substances adsorbed on clay and to the pore volumes of flooding water. A mathematical model is presented to simulate long-term water flooding and the model is validated with experimental results. The simulation results of long-term water flooding are also discussed.

  8. A testing facility for large scale models at 100 bar and 300°C to 1000°C

    International Nuclear Information System (INIS)

    Zemann, H.

    1978-07-01

    A testing facility for large scale model tests is under construction with the support of Austrian industry. It will contain a Prestressed Concrete Pressure Vessel (PCPV) with a hot liner (300 °C at 100 bar), an electrical heating system (1.2 MW, 1000 °C), a gas supply system, and a cooling system for the testing space. The components themselves are models for advanced high temperature applications. The first main component to be tested successfully was the PCPV. Basic investigations of the building materials, improvements of concrete gauges, large scale model tests, and measurements within the structural concrete and on the liner have been made from the beginning of construction, through the period of prestressing and the period of stabilization, to the final pressurizing tests. On the basis of these investigations a computer controlled safety surveillance system for long term high pressure, high temperature tests has been developed. (author)

  9. Secondary clarifier hybrid model calibration in full scale pulp and paper activated sludge wastewater treatment

    Energy Technology Data Exchange (ETDEWEB)

    Sreckovic, G.; Hall, E.R. [British Columbia Univ., Dept. of Civil Engineering, Vancouver, BC (Canada); Thibault, J. [Laval Univ., Dept. of Chemical Engineering, Ste-Foy, PQ (Canada); Savic, D. [Exeter Univ., School of Engineering, Exeter (United Kingdom)

    1999-05-01

    The issue of proper model calibration techniques applied to mechanistic mathematical models of activated sludge systems was discussed. Such calibrations are complex because of the non-linearity and multi-modal objective functions of the process. This paper presents a hybrid model which was developed using two techniques to model and calibrate the secondary clarifier part of an activated sludge system. Genetic algorithms were used to successfully calibrate the settler mechanistic model, and neural networks were used to reduce the error between the mechanistic model output and real-world data. Results of the modelling study show that the long-term response of a one-dimensional settler mechanistic model calibrated by genetic algorithms and compared to full scale plant data can be improved by coupling the calibrated mechanistic model to a black-box model, such as a neural network. 11 refs., 2 figs.
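The genetic-algorithm calibration step can be sketched in miniature. Below, a minimal GA (truncation selection plus annealed Gaussian mutation) fits the two parameters of the Vesilind hindered-settling law, a common choice in one-dimensional clarifier models; the population scheme, bounds and data are illustrative assumptions, not the authors' setup.

```python
import numpy as np

def vesilind(X, v0, k):
    """Vesilind hindered-settling law: v = v0 * exp(-k * X)."""
    return v0 * np.exp(-k * X)

def ga_calibrate(X, v_obs, gens=80, pop=40, seed=3):
    """Minimal genetic algorithm: keep the best quarter each generation and
    refill with mutated copies, with a decaying mutation step size."""
    rng = np.random.default_rng(seed)
    P = rng.uniform([1.0, 0.1], [15.0, 1.5], size=(pop, 2))   # (v0, k) bounds
    for g in range(gens):
        err = ((vesilind(X[None, :], P[:, :1], P[:, 1:]) - v_obs)**2).sum(axis=1)
        elite = P[np.argsort(err)[: pop // 4]]                # truncation selection
        sigma = 0.5 * 0.95**g                                 # annealed mutation
        children = elite[rng.integers(0, len(elite), pop - len(elite))]
        P = np.vstack([elite, children + rng.normal(0.0, sigma, children.shape)])
    err = ((vesilind(X[None, :], P[:, :1], P[:, 1:]) - v_obs)**2).sum(axis=1)
    return P[err.argmin()]

X = np.linspace(0.5, 8.0, 20)        # sludge concentration (kg/m^3), synthetic
v_obs = vesilind(X, 7.0, 0.55)       # synthetic "plant data"
v0, k = ga_calibrate(X, v_obs)
print(round(v0, 2), round(k, 2))
```

In the hybrid scheme of the paper, the residual between this calibrated mechanistic model and plant data would then be handed to a black-box (neural network) corrector.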

  10. Non-linear scaling of a musculoskeletal model of the lower limb using statistical shape models.

    Science.gov (United States)

    Nolte, Daniel; Tsang, Chui Kit; Zhang, Kai Yu; Ding, Ziyun; Kedgley, Angela E; Bull, Anthony M J

    2016-10-03

    Accurate muscle geometry for musculoskeletal models is important to enable accurate subject-specific simulations. Commonly, linear scaling is used to obtain individualised muscle geometry. More advanced methods include non-linear scaling using segmented bone surfaces and manual or semi-automatic digitisation of muscle paths from medical images. In this study, a new scaling method combining non-linear scaling with reconstructions of bone surfaces using statistical shape modelling is presented. Statistical Shape Models (SSMs) of the femur and tibia/fibula were used to reconstruct bone surfaces of nine subjects. Reference models were created by morphing manually digitised muscle paths to the mean shapes of the SSMs using non-linear transformations, and inter-subject variability was calculated. Subject-specific models of muscle attachment and via points were created from three reference models. The accuracy was evaluated by calculating the differences between the scaled and manually digitised models. The points defining the muscle paths showed large inter-subject variability at the thigh and shank - up to 26 mm; this was found to limit the accuracy of all studied scaling methods. Errors in the subject-specific muscle point reconstructions of the thigh could be decreased by 9% to 20% by using non-linear scaling instead of a typical linear scaling method. We conclude that the proposed non-linear scaling method is more accurate than linear scaling methods. Thus, when combined with the ability to reconstruct bone surfaces from incomplete or scattered geometry data using statistical shape models, our proposed method is an alternative to linear scaling methods.
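A simplified stand-in for the registration step underlying such scaling is the classical least-squares similarity fit (orthogonal Procrustes / Umeyama): align template bone landmarks to a subject's landmarks, then carry muscle points through the same transform. The study's actual method uses SSM reconstructions and non-linear warps; everything below is an illustrative linear sketch with synthetic landmarks.

```python
import numpy as np

def similarity_transform(src, dst):
    """Least-squares similarity transform (Umeyama): dst ~ scale * R @ src + t."""
    mu_s, mu_d = src.mean(0), dst.mean(0)
    S, D = src - mu_s, dst - mu_d
    M = D.T @ S                                 # 3x3 cross-covariance
    U, sig, Vt = np.linalg.svd(M)
    d = np.sign(np.linalg.det(U @ Vt))          # guard against reflections
    R = U @ np.diag([1.0, 1.0, d]) @ Vt
    scale = (sig * [1.0, 1.0, d]).sum() / (S**2).sum()
    t = mu_d - scale * R @ mu_s
    return scale, R, t

rng = np.random.default_rng(4)
ref_bone = rng.normal(size=(12, 3))                    # template bone landmarks
theta = np.pi / 6
Rz = np.array([[np.cos(theta), -np.sin(theta), 0.0],
               [np.sin(theta),  np.cos(theta), 0.0],
               [0.0, 0.0, 1.0]])
subj_bone = 1.2 * ref_bone @ Rz.T + [5.0, -2.0, 0.5]   # subject's bone

s, R, t = similarity_transform(ref_bone, subj_bone)
muscle_ref = np.array([0.3, -0.1, 0.8])                # template muscle point
muscle_subj = s * R @ muscle_ref + t                   # carried to the subject
print(np.allclose(s, 1.2), np.allclose(R, Rz))
```

Muscle-point variability beyond what a similarity transform can express (the up-to-26 mm spread reported above) is precisely what motivates the non-linear, SSM-based scaling.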

  11. Long-Period Oscillations of Hydraulic Fractures: Attenuation, Scaling Relationships, and Flow Stability

    Science.gov (United States)

    Lipovsky, B.; Dunham, E. M.

    2013-12-01

    Long-period seismicity due to the excitation of hydraulic fracture normal modes is thought to occur in many geological systems, including volcanoes, glaciers and ice sheets, and hydrocarbon reservoirs. To better quantify the physical dimensions of fluid-filled cracks and properties of the fluid within them, we study wave motion along a thin hydraulic fracture waveguide. We present a linearized analysis that accounts for quasi-dynamic elasticity of the fracture wall, as well as fluid drag, inertia, and compressibility. We consider symmetric perturbations and neglect the effects of stratification and gravity. In the long-wavelength or thin-fracture limit, dispersive guided waves known as crack waves propagate with phase velocity cw = √(G* |k| w / ρ), where G* = G/(1-υ) for shear modulus G and Poisson ratio υ, w is the crack half-width, k is the wavenumber, and ρ is the fluid density. Restoring forces from elastic wall deformation drive wave motions. In the opposite, short-wavelength limit, guided waves are simply sound waves within the fluid and little seismic excitation occurs due to minimal fluid-solid coupling. We focus on long-wavelength crack waves, which, in the form of standing wave modes in finite-length cracks, are thought to be a common mechanism for long-period seismicity. The dispersive nature of crack waves implies several basic scaling relations that might be useful when interpreting statistics of long-period events. Seismic observations may constrain a characteristic frequency f0 and seismic moment M0 ~ G δw R^2, where δw is the change in crack width and R is the crack dimension. Resonant modes of a fluid-filled crack have associated frequencies f ~ cw/R. Linear elasticity provides a link between pressure changes δp in the crack and the induced opening δw: δp ~ G δw/R. Combining these, and assuming that pressure changes have no variation with crack dimension, leads to the scaling law relating seismic moment and oscillation frequency, M0 ~ (G* w δp/ρ) f0^(-2)
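The scaling relations above can be chained numerically: with k ~ 2π/R the phase velocity gives f ~ cw/R ∝ R^(-3/2), and with M0 ~ δp R³ (from M0 ~ G δw R² and δp ~ G δw/R) the product M0·f² is independent of crack size, i.e. M0 ∝ f^(-2). The property values below are illustrative assumptions, not constraints from the talk.

```python
import math

def resonance_frequency(G_star, w, rho, R):
    """f ~ c_w / R with wavenumber k ~ 2*pi/R (long-wavelength crack wave)."""
    k = 2.0 * math.pi / R
    c_w = math.sqrt(G_star * k * w / rho)
    return c_w / R

def seismic_moment(dp, R):
    """M0 ~ dp * R**3 (from M0 ~ G*dw*R^2 and dp ~ G*dw/R)."""
    return dp * R**3

# Illustrative values: G* = 10 GPa, w = 1 cm, water-filled, dp = 0.1 MPa.
G_star, w, rho, dp = 1e10, 1e-2, 1000.0, 1e5

for R in (10.0, 20.0, 40.0):
    f = resonance_frequency(G_star, w, rho, R)
    print(round(seismic_moment(dp, R) * f**2 / 1e12, 3))  # constant: M0 ∝ f^-2
```

The printed product is the same for every crack size, which is the content of the M0-f0 scaling law.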

  12. Properties of Brownian Image Models in Scale-Space

    DEFF Research Database (Denmark)

    Pedersen, Kim Steenstrup

    2003-01-01

    In this paper it is argued that the Brownian image model is the least committed, scale invariant, statistical image model which describes the second order statistics of natural images. Various properties of three different types of Gaussian image models (white noise, Brownian and fractional Brownian images) will be discussed in relation to linear scale-space theory, and it will be shown empirically that the second order statistics of natural images mapped into jet space may, within some scale interval, be modeled by the Brownian image model. This is consistent with the 1/f^2 power spectrum law that apparently governs natural images. Furthermore, the distribution of Brownian images mapped into jet space is Gaussian and an analytical expression can be derived for the covariance matrix of Brownian images in jet space. This matrix is also a good approximation of the covariance matrix
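The 1/f² power-spectrum claim is easy to check empirically in one dimension: generate many random walks (discrete Brownian signals), average their periodograms, and fit a log-log slope, which comes out close to −2. This is an illustrative verification, not the paper's jet-space analysis.

```python
import numpy as np

def mean_periodogram(n=2048, trials=200, seed=1):
    """Average periodogram of `trials` discrete Brownian signals of length n."""
    rng = np.random.default_rng(seed)
    acc = np.zeros(n // 2)
    for _ in range(trials):
        walk = np.cumsum(rng.standard_normal(n))   # 1-D Brownian signal
        spec = np.abs(np.fft.rfft(walk))**2
        acc += spec[1:n // 2 + 1]                  # drop the DC bin
    return acc / trials

spec = mean_periodogram()
f = np.arange(1, len(spec) + 1)
lo = slice(2, 100)                                 # fit well below Nyquist
slope, _ = np.polyfit(np.log(f[lo]), np.log(spec[lo]), 1)
print(-2.1 < slope < -1.85)                        # True: close to the 1/f^2 law
```

Averaging over realizations tames the heavy fluctuations of individual periodograms; the fitted exponent sits near −2, consistent with the Brownian (and natural-image) spectrum.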

  13. Spatiotemporal exploratory models for broad-scale survey data.

    Science.gov (United States)

    Fink, Daniel; Hochachka, Wesley M; Zuckerberg, Benjamin; Winkler, David W; Shaby, Ben; Munson, M Arthur; Hooker, Giles; Riedewald, Mirek; Sheldon, Daniel; Kelling, Steve

    2010-12-01

    The distributions of animal populations change and evolve through time. Migratory species exploit different habitats at different times of the year. Biotic and abiotic features that determine where a species lives vary due to natural and anthropogenic factors. This spatiotemporal variation needs to be accounted for in any modeling of species' distributions. In this paper we introduce a semiparametric model that provides a flexible framework for analyzing dynamic patterns of species occurrence and abundance from broad-scale survey data. The spatiotemporal exploratory model (STEM) adds essential spatiotemporal structure to existing techniques for developing species distribution models through a simple parametric structure without requiring a detailed understanding of the underlying dynamic processes. STEMs use a multi-scale strategy to differentiate between local and global-scale spatiotemporal structure. A user-specified species distribution model accounts for spatial and temporal patterning at the local level. These local patterns are then allowed to "scale up" via ensemble averaging to larger scales. This makes STEMs especially well suited for exploring distributional dynamics arising from a variety of processes. Using data from eBird, an online citizen science bird-monitoring project, we demonstrate that monthly changes in distribution of a migratory species, the Tree Swallow (Tachycineta bicolor), can be more accurately described with a STEM than a conventional bagged decision tree model in which spatiotemporal structure has not been imposed. We also demonstrate that there is no loss of model predictive power when a STEM is used to describe a spatiotemporal distribution with very little spatiotemporal variation, as in the case of a nonmigratory species, the Northern Cardinal (Cardinalis cardinalis).
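The "local models, then scale up via ensemble averaging" idea can be sketched in a toy one-dimensional setting: fit many simple models on random overlapping "spatial" blocks, and predict at any location by averaging every local model whose block covers it. This is a deliberately simplified analogue of STEM, not the actual implementation; all names and values are illustrative.

```python
import numpy as np

def stem_predict(x_obs, y_obs, x_new, n_models=50, width=0.3, seed=0):
    """Toy STEM: local linear models on random overlapping 1-D blocks;
    a prediction is the average over all local models covering the point."""
    rng = np.random.default_rng(seed)
    preds = np.full((n_models, len(x_new)), np.nan)
    for m in range(n_models):
        left = rng.uniform(0.0, 1.0 - width)
        inside = (x_obs >= left) & (x_obs <= left + width)
        if inside.sum() < 5:
            continue                                    # too few points to fit
        coef = np.polyfit(x_obs[inside], y_obs[inside], 1)   # local linear model
        covered = (x_new >= left) & (x_new <= left + width)
        preds[m, covered] = np.polyval(coef, x_new[covered])
    return np.nanmean(preds, axis=0)    # ensemble ("scale up") average

rng = np.random.default_rng(1)
x = rng.uniform(0.0, 1.0, 500)
y = np.sin(2 * np.pi * x) + rng.normal(0.0, 0.1, 500)   # nonstationary pattern
x_new = np.array([0.25, 0.5, 0.75])
print(np.round(stem_predict(x, y, x_new), 1))
```

Each local model is simple (linear here), yet the ensemble tracks a globally nonlinear pattern, which is the essence of letting local structure "scale up".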

  14. Logarithmic corrections to scaling in the two-dimensional XY-model

    International Nuclear Information System (INIS)

    Kenna, R.; Irving, A.C.

    1995-01-01

    We study the distribution of partition function zeroes for the XY-model in two dimensions. In particular we find the scaling behaviour of the end of the distribution of zeroes in the complex external magnetic field plane in the thermodynamic limit (the Yang-Lee edge) and the form for the density of these zeroes. Assuming that finite-size scaling holds, we show that there have to exist logarithmic corrections to the leading scaling behaviour of thermodynamic quantities in this model. These logarithmic corrections are also manifest in the finite-size scaling formulae and we identify them numerically. The method presented here can be used to check the compatibility of scaling behaviour of odd and even thermodynamic functions in other models too. (orig.)

  15. Multi-scale Modeling of Plasticity in Tantalum.

    Energy Technology Data Exchange (ETDEWEB)

    Lim, Hojun [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Battaile, Corbett Chandler. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Carroll, Jay [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Buchheit, Thomas E. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Boyce, Brad [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Weinberger, Christopher [Drexel Univ., Philadelphia, PA (United States)

    2015-12-01

    In this report, we present a multi-scale computational model to simulate the plastic deformation of tantalum, together with validating experiments. At the atomistic/dislocation level, dislocation kink-pair theory is used to formulate temperature- and strain-rate-dependent constitutive equations. The kink-pair theory is calibrated to available data from single crystal experiments to produce accurate and convenient constitutive laws. The model is then implemented into a BCC crystal plasticity finite element method (CP-FEM) model to predict temperature- and strain-rate-dependent yield stresses of single and polycrystalline tantalum, which are compared with existing experimental data from the literature. Furthermore, classical continuum constitutive models describing temperature- and strain-rate-dependent flow behaviors are fit to the yield stresses obtained from the CP-FEM polycrystal predictions. The model is then used to conduct hydrodynamic simulations of Taylor cylinder impact tests and compared with experiments. In order to validate the proposed tantalum CP-FEM model with experiments, we introduce a method for quantitative comparison of CP-FEM models with various experimental techniques. To mitigate the effects of unknown subsurface microstructure, tantalum tensile specimens with a pseudo-two-dimensional grain structure and grain sizes on the order of millimeters are used. A technique combining electron backscatter diffraction (EBSD) and high-resolution digital image correlation (HR-DIC) is used to measure the texture and sub-grain strain fields upon uniaxial tensile loading at various applied strains. Deformed specimens are also analyzed with optical profilometry measurements to obtain out-of-plane strain fields. These high-resolution measurements are directly compared with large-scale CP-FEM predictions. This computational method directly links fundamental dislocation physics to plastic deformation at the grain scale and to engineering-scale applications.
Furthermore, direct

  16. Modeling long-term dynamics of electricity markets

    International Nuclear Information System (INIS)

    Olsina, Fernando; Garces, Francisco; Haubrich, H.-J.

    2006-01-01

    In the last decade, many countries have restructured their electricity industries by introducing competition in their power generation sectors. Although some restructurings have been regarded as successful, the short experience accumulated with liberalized power markets does not yet allow well-founded assertions about their long-term behavior. Long-term prices and long-term supply reliability are now at the center of interest. This concerns firms considering investments in generation capacity as well as regulatory authorities interested in assuring long-term supply adequacy and the stability of power markets. In order to gain significant insight into the long-term behavior of liberalized power markets, this paper proposes a simulation model based on system dynamics and extensively discusses the underlying mathematical formulations. Unlike classical market models based on the assumption that market outcomes replicate the results of a centrally made optimization, the approach presented here focuses on replicating the system structure of power markets and the logic of relationships among system components in order to derive their dynamical response. The simulations suggest that there might be serious problems in adjusting, early enough, the generation capacity necessary to maintain stable reserve margins and, consequently, stable long-term price levels. Because of feedback loops embedded in the structure of power markets and the existence of time lags, long-term market development might exhibit quite volatile behavior. By varying some exogenous inputs, a sensitivity analysis is carried out to assess the influence of these factors on the long-run market dynamics
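The feedback-plus-delay mechanism blamed above for volatile capacity cycles can be sketched as a toy system-dynamics loop: price responds to the reserve margin, investment responds to price, and new capacity arrives only after a construction lag. Every coefficient below is invented for illustration and bears no relation to the paper's calibration.

```python
import numpy as np

def simulate_market(years=40, lag=4):
    """Toy system-dynamics loop: the construction lag between the investment
    decision and commissioning is what produces boom-and-bust cycles."""
    demand, capacity = 100.0, 130.0
    pipeline = [0.0] * lag                 # capacity under construction
    prices, margins = [], []
    for _ in range(years):
        margin = (capacity - demand) / demand
        price = 60.0 * np.exp(-8.0 * (margin - 0.2))   # scarcity pricing
        build = max(0.0, 0.4 * (price - 60.0))         # investment decision
        pipeline.append(build)
        capacity += pipeline.pop(0)                    # lagged commissioning
        demand *= 1.02                                 # exogenous demand growth
        prices.append(price); margins.append(margin)
    return np.array(prices), np.array(margins)

prices, margins = simulate_market()
print(prices.std() > 0, margins.min() < 0.2 < margins.max())
```

Even this minimal loop drifts away from the "comfortable" 20% reserve margin and then overshoots once delayed capacity arrives, the qualitative behavior the abstract attributes to real liberalized markets.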

  17. Probabilistic, meso-scale flood loss modelling

    Science.gov (United States)

    Kreibich, Heidi; Botto, Anna; Schröter, Kai; Merz, Bruno

    2016-04-01

    Flood risk analyses are an important basis for decisions on flood risk management and adaptation. However, such analyses are associated with significant uncertainty, all the more so if changes in risk due to global change are expected. Although uncertainty analysis and probabilistic approaches have received increased attention during the last years, they are still not standard practice for flood risk assessments, and even less so for flood loss modelling. The state of the art in flood loss modelling is still the use of simple, deterministic approaches like stage-damage functions. Novel probabilistic, multi-variate flood loss models have been developed and validated at the micro-scale using a data-mining approach, namely bagging decision trees (Merz et al. 2013). In this presentation we demonstrate and evaluate the upscaling of the approach to the meso-scale, namely on the basis of land-use units. The model is applied in 19 municipalities which were affected during the 2002 flood by the River Mulde in Saxony, Germany (Botto et al., submitted). The application of bagging decision tree based loss models provides a probability distribution of estimated loss per municipality. Validation is undertaken, on the one hand, via a comparison with eight deterministic loss models, including stage-damage functions as well as multi-variate models; on the other hand, the results are compared with official loss data provided by the Saxon Relief Bank (SAB). The results show that uncertainties of loss estimation remain high. Thus, the significant advantage of this probabilistic flood loss estimation approach is that it inherently provides quantitative information about the uncertainty of the prediction. References: Merz, B.; Kreibich, H.; Lall, U. (2013): Multi-variate flood damage assessment: a tree-based data-mining approach. NHESS, 13(1), 53-64. Botto, A.; Kreibich, H.; Merz, B.; Schröter, K. (submitted): Probabilistic, multi-variable flood loss modelling on the meso-scale with BT-FLEMO. Risk Analysis.
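The core idea, bagging decision trees so that the spread of per-tree predictions yields a loss distribution rather than a point estimate, can be sketched with bootstrap-aggregated depth-1 trees (stumps) on a single feature. This is a simplification for illustration: the actual BT-FLEMO models are multi-variate, and the synthetic depth/loss data below are invented.

```python
import numpy as np

def fit_stump(x, y):
    """Best single-split (depth-1) regression tree on one feature."""
    best = (np.inf, None)
    for s in np.unique(x)[:-1]:
        left, right = y[x <= s], y[x > s]
        sse = ((left - left.mean())**2).sum() + ((right - right.mean())**2).sum()
        if sse < best[0]:
            best = (sse, (s, left.mean(), right.mean()))
    return best[1]

def bagged_predict(x, y, x_new, n_trees=100, seed=0):
    """Bootstrap-aggregated stumps: the spread of per-tree predictions is a
    (rough) probability distribution for the loss estimate."""
    rng = np.random.default_rng(seed)
    preds = np.empty(n_trees)
    for t in range(n_trees):
        idx = rng.integers(0, len(x), len(x))      # bootstrap resample
        s, lo, hi = fit_stump(x[idx], y[idx])
        preds[t] = lo if x_new <= s else hi
    return preds

rng = np.random.default_rng(2)
depth = rng.uniform(0.0, 3.0, 300)                 # water depth (m), synthetic
loss = np.where(depth > 1.5, 80.0, 20.0) + rng.normal(0.0, 5.0, 300)
dist = bagged_predict(depth, loss, x_new=2.5)
print(75 < dist.mean() < 85)                       # True: distribution centered near 80
```

Rather than a single damage figure, `dist` carries the estimation uncertainty explicitly, which is the advantage claimed for the probabilistic approach.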

  18. Probing Mantle Heterogeneity Across Spatial Scales

    Science.gov (United States)

    Hariharan, A.; Moulik, P.; Lekic, V.

    2017-12-01

    Inferences of mantle heterogeneity in terms of temperature, composition, grain size, melt and crystal structure may vary across local, regional and global scales. Probing these scale-dependent effects requires quantitative comparisons and reconciliation of tomographic models that vary in their regional scope, parameterization, regularization and observational constraints. While a range of techniques like radial correlation functions and spherical harmonic analyses have revealed global features like the dominance of long-wavelength variations in mantle heterogeneity, they have limited applicability for specific regions of interest like subduction zones and continental cratons. Moreover, issues like discrepant 1-D reference Earth models and related baseline corrections have impeded the reconciliation of heterogeneity between various regional and global models. We implement a new wavelet-based approach that allows structure to be filtered simultaneously in both the spectral and spatial domains, allowing us to characterize heterogeneity on a range of scales and in different geographical regions. Our algorithm extends a recent method that expanded lateral variations into the wavelet domain constructed on a cubed sphere. The isolation of reference velocities in the wavelet scaling function facilitates comparisons between models constructed with arbitrary 1-D reference Earth models. The wavelet transformation allows us to quantify the scale-dependent consistency between tomographic models in a region of interest and investigate the fits to data afforded by heterogeneity at various dominant wavelengths. We find substantial and spatially varying differences in the spectrum of heterogeneity between two representative global Vp models constructed using different data and methodologies. Applying the orthonormality of the wavelet expansion, we isolate detailed variations in velocity from models and evaluate additional fits to data afforded by adding such complexities to long
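    A minimal 1-D analogue of this scale-dependent comparison can be built with a hand-rolled orthonormal Haar transform (the study itself uses wavelets on a cubed sphere; the two "models" below are synthetic profiles that share long wavelengths but differ at short ones):

```python
import numpy as np

def haar_levels(signal):
    """Decompose a length-2^n signal into Haar detail coefficients per scale."""
    s = np.asarray(signal, dtype=float)
    levels = []
    while len(s) > 1:
        avg = (s[0::2] + s[1::2]) / np.sqrt(2)
        diff = (s[0::2] - s[1::2]) / np.sqrt(2)
        levels.append(diff)          # detail coefficients, finest scale first
        s = avg
    levels.append(s)                 # final scaling coefficient (reference mean)
    return levels

# Two synthetic "tomographic profiles": same long wavelengths,
# different small-scale structure
x = np.linspace(0, 2 * np.pi, 64, endpoint=False)
m1 = np.sin(x)
m2 = np.sin(x) + 0.3 * np.sin(16 * x)

for lvl, (d1, d2) in enumerate(zip(haar_levels(m1), haar_levels(m2))):
    if len(d1) > 1:
        # per-scale correlation quantifies where the models (dis)agree
        r = np.corrcoef(d1, d2)[0, 1]
        print(f"scale level {lvl}: corr = {r:+.2f}")
```

    Because the Haar basis is orthonormal, the coefficients partition the signal's energy by scale, which is the property exploited above to isolate detailed variations.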

  19. Two-dimensional divertor modeling and scaling laws

    International Nuclear Information System (INIS)

    Catto, P.J.; Connor, J.W.; Knoll, D.A.

    1996-01-01

    Two-dimensional numerical models of divertors contain large numbers of dimensionless parameters that must be varied to investigate all operating regimes of interest. To simplify the task and gain insight into divertor operation, we employ similarity techniques to investigate whether model systems of equations plus boundary conditions in the steady state admit scaling transformations that lead to useful divertor similarity scaling laws. A short mean free path neutral-plasma model of the divertor region below the x-point is adopted in which all perpendicular transport is due to the neutrals. We illustrate how the results can be used to benchmark large computer simulations by employing a modified version of UEDGE which contains a neutral fluid model. (orig.)

  20. ELMO model predicts the price of electric power

    International Nuclear Information System (INIS)

    Antila, H.

    2001-01-01

    Electrowatt-Ekono has developed a new model with which it is possible to make long-term prognoses of the development of electricity prices in the Nordic countries. The ELMO model can be used as an analysis service for the electricity markets and for estimating the profitability of long-term power distribution contracts under different scenarios. It can also be applied to calculate the technical and economic fundamentals of new power plants, and to estimate the effects of different taxation models on the emissions of power generation. The model describes the whole power generation system, as well as power and heat consumption and transmission. The Finnish power generation system is described on the basis of Electrowatt-Ekono's boiler database by combining different data elements. Calculation is based on the assumption that the Nordic power generation system is used optimally and that production costs are minimised. In practice, effectively operated electricity markets ensure the optimal use of the production system. The market area to be described consists of Finland and Sweden, where spot prices have long been the same; Norway is treated as a separate market area. The most probable power generation system, power consumption and power transmission system are assumed for the target year under a normal rainfall situation, and the basic scenario is calculated on the basis of these preconditions. The calculation is carried out on an hourly basis, which enables estimation of the variation of the electricity price between times of day and between seasons. The system optimises power generation on the basis of electricity and heat consumption curves and fuel prices. The result is an hourly limit price for electric power. Estimates are presented as standard-form reports. Prices are presented as annual averages, on a seasonal basis, and on an hourly or daily basis for different seasons
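    The pricing logic described here, a cost-minimising dispatch whose outcome is an hourly limit price, can be illustrated with a toy merit-order calculation. The plant capacities and marginal costs below are invented for illustration, not ELMO data:

```python
# Toy merit-order dispatch: plants sorted by marginal cost are run until
# hourly demand is met; the system price is the marginal cost of the last
# plant dispatched.

def merit_order_price(plants, demand_mw):
    """plants: list of (capacity_MW, marginal_cost_per_MWh) tuples."""
    remaining = demand_mw
    for capacity, cost in sorted(plants, key=lambda p: p[1]):
        remaining -= capacity
        if remaining <= 0:
            return cost            # price is set by the marginal plant
    raise ValueError("demand exceeds total capacity")

plants = [(2000, 5.0),   # hydro
          (1500, 12.0),  # nuclear
          (1000, 25.0),  # CHP / coal
          (800, 40.0)]   # gas peaker

print(merit_order_price(plants, 3000))  # -> 12.0 (nuclear is marginal)
print(merit_order_price(plants, 4700))  # -> 40.0 (peaker is marginal)
```

    Repeating this for every hour of a year, with demand curves and fuel prices as inputs, yields exactly the kind of hourly, seasonal and annual price profiles the abstract describes.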

  1. 3-3-1 models at electroweak scale

    International Nuclear Information System (INIS)

    Dias, Alex G.; Montero, J.C.; Pleitez, V.

    2006-01-01

    We show that in 3-3-1 models there exists a natural relation among the SU(3)_L coupling constant g, the electroweak mixing angle θ_W, the mass of the W, and one of the vacuum expectation values, which implies that those models can be realized at low energy scales and, in particular, even at the electroweak scale. Thus, if those symmetries are realized in Nature, new physics may really be just around the corner

  2. Validating a continental-scale groundwater diffuse pollution model using regional datasets.

    Science.gov (United States)

    Ouedraogo, Issoufou; Defourny, Pierre; Vanclooster, Marnik

    2017-12-11

    In this study, we assess the validity of an African-scale groundwater pollution model for nitrates. In a previous study, we identified a statistical continental-scale groundwater pollution model for nitrate. The model was identified using a pan-African meta-analysis of available nitrate groundwater pollution studies, and was implemented in both Random Forest (RF) and multiple regression formats. For both approaches, we collected as predictors a comprehensive GIS database of 13 spatial attributes related to land use, soil type, hydrogeology, topography, climatology, region typology, nitrogen fertiliser application rate, and population density. In this paper, we validate the continental-scale model of groundwater contamination using a nitrate measurement dataset from three African countries. We discuss data availability, data quality and scale issues as challenges in validation. Although the modelling procedure exhibited very good success using a continental-scale dataset (e.g. R² = 0.97 in the RF format using a cross-validation approach), the continental-scale model could not be used without recalibration to predict nitrate pollution at the country scale using regional data. In addition, when recalibrating the model using country-scale datasets, the order of the model's explanatory factors changes. This suggests that the structure and the parameters of a statistical spatially distributed groundwater degradation model for the African continent are strongly scale dependent.
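    The RF-format workflow with a cross-validated R² can be sketched as follows; the 13 predictors and the nitrate response below are synthetic stand-ins for the real GIS attributes and measurements:

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

# Synthetic stand-ins for the 13 spatial predictors (land use, soil type,
# hydrogeology, ...) and for observed nitrate concentration (mg/L).
X = rng.uniform(size=(300, 13))
y = 40.0 * X[:, 0] + 15.0 * X[:, 5] + rng.normal(0.0, 2.0, 300)

rf = RandomForestRegressor(n_estimators=200, random_state=0)
scores = cross_val_score(rf, X, y, cv=5, scoring="r2")
print(scores.mean())   # cross-validated R² on the training-domain data
```

    A high cross-validated score on one dataset, as in the continental-scale case, says nothing by itself about transferability to another scale, which is exactly the validation issue the abstract raises.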

  3. Understanding Long-term, Large-scale Shoreline Change and the Sediment Budget on Fire Island, NY, using a 3D hydrodynamics-based model

    Science.gov (United States)

    List, J. H.; Safak, I.; Warner, J. C.; Schwab, W. C.; Hapke, C. J.; Lentz, E. E.

    2016-02-01

    The processes responsible for long-term (decadal) shoreline change and the related imbalance in the sediment budget on Fire Island, a 50 km long barrier island on the south coast of Long Island, NY, have been the subject of debate. The estimated net rate of sediment leaving the barrier at the west end of the island is approximately double the estimated net rate of sediment entering in the east, but the island-wide average sediment volume change associated with shoreline change is near zero and cannot account for this deficit. A long-held hypothesis is that onshore sediment flux from the inner continental shelf within the western half of the island is responsible for balancing the sediment budget. To investigate this possibility, we use a nested, 3-D, hydrodynamics-based modeling system (COAWST) to simulate the island-wide alongshore and cross-shore transport, in combination with shoreline change observations. The modeled net alongshore transport gradients in the nearshore predict that the central part of Fire Island should be erosional, yet shoreline change observations show this area to be accretionary. We compare the model-predicted alongshore transport gradients with the flux gradients that would be required to generate the observed shoreline change, which gives the pattern of sediment volume gains or losses that cannot be explained by the modeled alongshore transport gradients. Results show that the western 30 km of coast requires an input of sediment, supporting the hypothesis of onshore flux in this area. The modeled cross-shore flux of sediment between the shoreface and inner shelf is consistent with these results, with onshore-directed bottom currents creating an environment more conducive to onshore sediment flux in the western 30 km of the island than in the eastern 20 km. We conclude that the cross-shore flux of sediment can explain the shoreline change observations and is an integral component of Fire Island's sediment budget.
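    The budget argument, comparing modeled alongshore flux gradients with the gradients required by observed shoreline change, reduces to a simple residual calculation. The flux profile and observations below are invented for illustration:

```python
import numpy as np

# Toy 1-D sediment budget: if dV/dt ≈ -dQ/dx + q_cross, then the cross-shore
# supply required to explain the observed volume change is the residual
# between that change and the modeled alongshore-gradient term.
x = np.linspace(0.0, 50e3, 51)               # alongshore position (m)
Q_model = 2.0e5 + 1.0e3 * (x / 1e3)          # modeled alongshore flux (m3/yr)
dV_obs = np.zeros_like(x)                    # observed: stable shoreline

dQdx = np.gradient(Q_model, x)               # alongshore flux divergence
q_cross_required = dV_obs + dQdx             # m3/yr per m of coast
print(q_cross_required[:3])                  # positive -> onshore supply needed
```

    Here the flux increases westward while the shoreline is stable, so a uniform onshore supply of 1 m³/yr per metre of coast is required to close the budget; the same residual logic, applied to the COAWST gradients and observed change, identifies where Fire Island needs onshore input.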

  4. Physical modelling of granular flows at multiple-scales and stress levels

    Science.gov (United States)

    Take, Andy; Bowman, Elisabeth; Bryant, Sarah

    2015-04-01

    The rheology of dry granular flows is an area of significant focus within the granular physics, geoscience, and geotechnical engineering research communities. Studies performed to better understand granular flows in manufacturing, materials processing or bulk handling applications have typically focused on the behavior of steady, continuous flows. As a result, much of the research on relating the fundamental interaction of particles to the rheological or constitutive behaviour of granular flows has been performed under (usually) steady-state conditions and low stress levels. However, landslides, which are the primary focus of the geoscience and geotechnical engineering communities, are by nature unsteady flows defined by a finite source volume and at flow depths much larger than typically possible in laboratory experiments. The objective of this paper is to report initial findings of experimental studies currently being conducted using a new large-scale landslide flume (8 m long, 2 m wide slope inclined at 30° with a 35 m long horizontal base section) and at elevated particle self-weight in a 10 m diameter geotechnical centrifuge to investigate the granular flow behavior at multiple-scales and stress levels. The transparent sidewalls of the two flumes used in the experimental investigation permit the combination of observations of particle-scale interaction (using high-speed imaging through transparent vertical sidewalls at over 1000 frames per second) with observations of the distal reach of the landslide debris. These observations are used to investigate the applicability of rheological models developed for steady state flows (e.g. the dimensionless inertial number) in landslide applications and the robustness of depth-averaged approaches to modelling dry granular flow at multiple scales. These observations indicate that the dimensionless inertial number calculated for the flow may be of limited utility except perhaps to define a general state (e.g. liquid
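    The dimensionless inertial number referred to above is I = γ̇·d/√(P/ρ_s), where γ̇ is the shear rate, d the grain diameter, P the confining pressure and ρ_s the grain density. A small helper makes the regime classification concrete (the numbers are illustrative, not flume measurements):

```python
import math

def inertial_number(shear_rate, grain_diameter, pressure, grain_density):
    """Dimensionless inertial number I = gamma_dot * d / sqrt(P / rho_s)."""
    return shear_rate * grain_diameter / math.sqrt(pressure / grain_density)

# Illustrative values: 1 mm sand grains sheared at 50 1/s under 500 Pa
I = inertial_number(shear_rate=50.0, grain_diameter=1e-3,
                    pressure=500.0, grain_density=2500.0)
print(I)  # ~0.11, usually classed as the dense "liquid" regime
```

    Conventionally, I « 1 indicates quasi-static flow, I of order 0.1 a dense liquid-like regime, and larger values an increasingly collisional regime; the flume observations above suggest this single number may not suffice for unsteady landslide flows.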

  5. Drift Scale Modeling: Study of Unsaturated Flow into a Drift Using a Stochastic Continuum Model

    International Nuclear Information System (INIS)

    Birkholzer, J.T.; Tsang, C.F.; Tsang, Y.W.; Wang, J.S.

    1996-01-01

    Unsaturated flow in heterogeneous fractured porous rock was simulated using a stochastic continuum model (SCM). In this model, both the more conductive fractures and the less permeable matrix are generated within the framework of a single-continuum stochastic approach, based on non-parametric indicator statistics. High-permeability fracture zones are distinguished from low-permeability matrix zones in that they are assigned a long-range correlation structure in prescribed directions. The SCM was applied to study small-scale flow in the vicinity of an access tunnel which is currently being drilled in the unsaturated fractured tuff formations at Yucca Mountain, Nevada. Extensive underground testing is underway in this tunnel to investigate the suitability of Yucca Mountain as an underground nuclear waste repository. Different flow scenarios were studied in the present paper, considering the flow conditions before and after the tunnel emplacement, and assuming steady-state net infiltration as well as episodic pulse infiltration. Although the capability of the stochastic continuum model has not yet been fully explored, it has been demonstrated that the SCM is a feasible alternative model for describing heterogeneous flow processes in unsaturated fractured tuff at Yucca Mountain

  6. Process based modelling of soil organic carbon redistribution on landscape scale

    Science.gov (United States)

    Schindewolf, Marcus; Seher, Wiebke; Amorim, Amorim S. S.; Maeso, Daniel L.; Schmidt, Jürgen

    2014-05-01

    Recent studies have pointed out the great importance of erosion processes in global carbon cycling. Continuous erosion leads to a massive loss of topsoil, including the loss of organic carbon accumulated over long times in the soil humus fraction. Lal (2003) estimates that 20% of the organic carbon eroded with topsoil is emitted into the atmosphere, due to aggregate breakdown and carbon mineralization during transport by surface runoff. Furthermore, soil erosion causes a progressive decrease of natural soil fertility, since cation exchange capacity is associated with organic colloids. As a consequence, the ability of soils to accumulate organic carbon is reduced proportionately to the drop in soil productivity. Colluvial organic carbon might be protected from further degradation, depending on the depth of the colluvial cover and local decomposing conditions, and some colluvial sites can act as long-term sinks for organic carbon. The erosional transport of organic carbon may have an effect on the global carbon budget; however, it is uncertain whether erosion is a sink or a source for carbon in the atmosphere. Another part of the eroded soil and organic carbon will enter surface water bodies, might be transported over long distances, and might be deposited in the riparian zones of river networks. For the most part, erosional losses of organic carbon will not pass into the atmosphere, but soil erosion substantially limits the potential of soils to sequester atmospheric CO2 by generating humus. The present study concerns lateral carbon flux modelling on the landscape scale using the process-based EROSION 3D soil loss simulation model with existing parameter values. The selective nature of soil erosion results in preferential transport of fine particles, while larger, less carbon-rich particles remain on site. Consequently, organic carbon is enriched in the eroded sediment compared to the original soil. For this reason it is essential that EROSION 3D provides the

  7. Two-time scale subordination in physical processes with long-term memory

    International Nuclear Information System (INIS)

    Stanislavsky, Aleksander; Weron, Karina

    2008-01-01

    We describe dynamical processes in continuous media with a long-term memory. Our consideration is based on a stochastic subordination idea and concerns two physical examples in detail. First we study a temporal evolution of the species concentration in a trapping reaction in which a diffusing reactant is surrounded by a sea of randomly moving traps. The analysis uses the random-variable formalism of anomalous diffusive processes. We find that the empirical trapping-reaction law, according to which the reactant concentration decreases in time as a product of an exponential and a stretched exponential function, can be explained by a two-time scale subordination of random processes. Another example is connected with a state equation for continuous media with memory. If the pressure and the density of a medium are subordinated in two different random processes, then the ordinary state equation becomes fractional with two-time scales. This allows one to arrive at the Bagley-Torvik type of state equation

  8. Long Term Large Scale river nutrient changes across the UK

    Science.gov (United States)

    Bell, Victoria; Naden, Pam; Tipping, Ed; Davies, Helen; Davies, Jessica; Dragosits, Ulli; Muhammed, Shibu; Quinton, John; Stuart, Marianne; Whitmore, Andy; Wu, Lianhai

    2017-04-01

    During recent decades and centuries, pools and fluxes of Carbon, Nitrogen and Phosphorus (C, N and P) in UK rivers and ecosystems have been transformed by the spread and fertiliser-based intensification of agriculture (necessary to sustain human populations), by atmospheric pollution, by human waste (rising in line with population growth), and now by climate change. The principal objective of the UK's NERC-funded Macronutrients LTLS research project has been to account for observable terrestrial and aquatic pools, concentrations and fluxes of C, N and P on the basis of past inputs, biotic and abiotic interactions, and transport processes. More specifically, over the last 200 years, what have been the temporal responses of plant and soil nutrient pools in different UK catchments to nutrient enrichment, and what have been the consequent effects on nutrient transfers from land to the atmosphere, freshwaters and estuaries? The work described here addresses the second question by providing an integrated quantitative description of the interlinked land and water pools and annual fluxes of C, N and P for UK catchments over time. A national-scale modelling environment has been developed, combining simple physically-based gridded models that can be parameterised using recent observations before application to long timescales. The LTLS Integrated Model (LTLS-IM) uses readily-available driving data (climate, land-use, nutrient inputs, topography), and model estimates of both terrestrial and freshwater nutrient loads have been compared with measurements from sites across the UK. Here, the focus is on the freshwater nutrient component of the LTLS-IM, but the terrestrial nutrient inputs required for this are provided by models of nutrient processes in semi-natural and agricultural systems, and from simple models of nutrients arising from human waste. In the freshwater model, lateral routing of dissolved and particulate nutrients and within-river processing such as

  9. Multi-scale Drivers of Variations in Atmospheric Evaporative Demand Based on Observations and Physically-based Modeling

    Science.gov (United States)

    Peng, L.; Sheffield, J.; Li, D.

    2015-12-01

    Evapotranspiration (ET) is a key link between the availability of water resources and climate change and climate variability. Variability of ET has important environmental and socioeconomic implications for managing hydrological hazards, food and energy production. Although there have been many observational and modeling studies of ET, how ET has varied and the drivers of the variations at different temporal scales remain elusive. Much of the uncertainty comes from the atmospheric evaporative demand (AED), which is the combined effect of radiative and aerodynamic controls. The inconsistencies among modeled AED estimates and the limited observational data may originate from multiple sources including the limited time span and uncertainties in the data. To fully investigate and untangle the intertwined drivers of AED, we present a spectrum analysis to identify key controls of AED across multiple temporal scales. We use long-term records of observed pan evaporation for 1961-2006 from 317 weather stations across China and physically-based model estimates of potential evapotranspiration (PET). The model estimates are based on surface meteorology and radiation derived from reanalysis, satellite retrievals and station data. Our analyses show that temperature plays a dominant role in regulating variability of AED at the inter-annual scale. At the monthly and seasonal scales, the primary control of AED shifts from radiation in humid regions to humidity in dry regions. Unlike many studies focusing on the spatial pattern of ET drivers based on a traditional supply and demand framework, this study underlines the importance of temporal scales when discussing controls of ET variations.

  10. Analysis, scale modeling, and full-scale tests of low-level nuclear-waste-drum response to accident environments

    International Nuclear Information System (INIS)

    Huerta, M.; Lamoreaux, G.H.; Romesberg, L.E.; Yoshimura, H.R.; Joseph, B.J.; May, R.A.

    1983-01-01

    This report describes extensive full-scale and scale-model testing of 55-gallon drums used for shipping low-level radioactive waste materials. The tests conducted include static crush, single-can impact tests, and side impact tests of eight stacked drums. Static crush forces were measured and crush energies calculated. The tests were performed in full-, quarter-, and eighth-scale with different types of waste materials. The full-scale drums were modeled with standard food product cans. The response of the containers is reported in terms of drum deformations and lid behavior. The results of the scale model tests are correlated to the results of the full-scale drums. Two computer techniques for calculating the response of drum stacks are presented. 83 figures, 9 tables
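    Correlating scale-model and full-scale results relies on standard replica-model similitude: with the same material, deformations scale with the geometric factor, forces with its square, and crush energies with its cube. A sketch under the assumption that this conventional scaling applies to the drum tests:

```python
# Replica-model similitude (same material, geometric scale 1/s):
# lengths scale by s, forces by s^2, energies by s^3 when converting
# model measurements back to full scale.

def full_scale_from_model(scale, deformation_m, force_n, energy_j):
    s = scale  # e.g. 4 for a quarter-scale model
    return {"deformation_m": deformation_m * s,
            "force_n": force_n * s ** 2,
            "energy_j": energy_j * s ** 3}

# A hypothetical quarter-scale crush of 2 cm at 10 kN absorbing 200 J
print(full_scale_from_model(4, deformation_m=0.02, force_n=1.0e4, energy_j=200.0))
```

    Under these rules the hypothetical quarter-scale result corresponds to 8 cm of deformation, 160 kN of force and 12.8 kJ of crush energy at full scale.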

  11. Scale changes in air quality modelling and assessment of associated uncertainties

    International Nuclear Information System (INIS)

    Korsakissok, Irene

    2009-01-01

    After an introduction of issues related to a scale change in the field of air quality (existing scales for emissions, transport, turbulence and loss processes, hierarchy of data and models, methods of scale change), the author first presents Gaussian models which have been implemented within the Polyphemus modelling platform. These models are assessed by comparison with experimental observations and with other commonly used Gaussian models. The second part reports the coupling of the puff-based Gaussian model with the Eulerian Polair3D model for the sub-mesh processing of point sources. This coupling is assessed at the continental scale for a passive tracer, and at the regional scale for photochemistry. Different statistical methods are assessed
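    The standard Gaussian plume formula underlying models of this kind gives the time-averaged concentration downwind of a continuous point source; the dispersion parameters σ_y and σ_z below are assumed illustrative values, not ones taken from Polyphemus:

```python
import math

def gaussian_plume(q, u, y, z, sigma_y, sigma_z, h):
    """Standard Gaussian plume concentration with total ground reflection.

    q: emission rate (g/s), u: wind speed (m/s), h: effective source height (m),
    y, z: crosswind and vertical receptor coordinates (m).
    """
    lateral = math.exp(-y ** 2 / (2 * sigma_y ** 2))
    vertical = (math.exp(-(z - h) ** 2 / (2 * sigma_z ** 2)) +
                math.exp(-(z + h) ** 2 / (2 * sigma_z ** 2)))
    return q / (2 * math.pi * u * sigma_y * sigma_z) * lateral * vertical

# Centerline, ground-level concentration with assumed dispersion parameters
c = gaussian_plume(q=100.0, u=5.0, y=0.0, z=0.0,
                   sigma_y=80.0, sigma_z=40.0, h=50.0)
print(c)   # g/m3
```

    Puff models evaluate essentially the same kernel per released puff, which is what makes coupling them to an Eulerian grid model for subgrid point sources natural.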

  12. Two general models that generate long range correlation

    Science.gov (United States)

    Gan, Xiaocong; Han, Zhangang

    2012-06-01

    In this paper we study two models that generate sequences with LRC (long-range correlation). For the IFT (inverse Fourier transform) model, our conclusion is that the low-frequency part leads to LRC, while the high-frequency part tends to eliminate it. Therefore, a typical method to generate a sequence with LRC is to multiply the spectrum of a white noise sequence by a decaying function. A special case is analyzed: the linear combination of a smooth curve and a white noise sequence, in which the DFA plot consists of two line segments. For the patch model, our conclusion is that long subsequences lead to LRC, while short subsequences tend to eliminate it. Therefore, we can generate a sequence with LRC by using a fat-tailed PDF (probability distribution function) for the length of the subsequences. A special case is also analyzed: if a patch model with long subsequences is mixed with a white noise sequence, the DFA plot will consist of two line segments. We have checked known models and actual data, and found they are all consistent with this study.
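    The IFT recipe, multiplying a white-noise spectrum by a decaying function, takes only a few lines of NumPy; β = 1 below gives 1/f ("pink") noise:

```python
import numpy as np

def lrc_sequence(n, beta, seed=0):
    """Generate a long-range-correlated sequence by shaping the spectrum of
    white noise with a power-law decay f**(-beta/2) (the IFT model)."""
    rng = np.random.default_rng(seed)
    white = rng.normal(size=n)
    spectrum = np.fft.rfft(white)
    freqs = np.fft.rfftfreq(n)
    freqs[0] = freqs[1]                 # avoid division by zero at DC
    spectrum *= freqs ** (-beta / 2)    # boost low frequencies
    return np.fft.irfft(spectrum, n)

x = lrc_sequence(4096, beta=1.0)        # 1/f ("pink") noise
# The low-frequency half of the spectrum now carries most of the power:
p = np.abs(np.fft.rfft(x)) ** 2
print(p[:len(p) // 2].sum() / p.sum())
```

    The dominance of low-frequency power is exactly the mechanism the paper identifies as the source of LRC; a DFA of `x` would show the corresponding scaling exponent.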

  13. A Comparative Study of Glasgow Coma Scale and Full Outline of Unresponsiveness Scores for Predicting Long-Term Outcome After Brain Injury.

    Science.gov (United States)

    McNett, Molly M; Amato, Shelly; Philippbar, Sue Ann

    2016-01-01

    The aim of this study was to compare the predictive ability of hospital Glasgow Coma Scale (GCS) scores and scores obtained using a novel coma scoring tool (the Full Outline of Unresponsiveness [FOUR] scale) on long-term outcomes among patients with traumatic brain injury. Preliminary research on the FOUR scale suggests that it is comparable with the GCS for predicting mortality and functional outcome at hospital discharge. No research has investigated relationships between coma scores and outcome 12 months postinjury. This is a prospective cohort study. Data were gathered on adult patients with traumatic brain injury admitted to an urban level I trauma center. GCS and FOUR scores were assigned at 24 and 72 hours and at hospital discharge. Glasgow Outcome Scale scores were assigned at 6 and 12 months. The sample size was n = 107. Mean age was 53.5 (SD = ±21, range = 18-91) years. Spearman correlations were comparable and strongest between discharge GCS and FOUR scores and 12-month outcome (r = .73). Discharge coma scores performed best for both tools, with GCS discharge scores predictive in multivariate models.

  14. Evaluation of remotely sensed actual evapotranspiration data for modeling small scale irrigation in Ethiopia.

    Science.gov (United States)

    Taddele, Y. D.; Ayana, E.; Worqlul, A. W.; Srinivasan, R.; Gerik, T.; Clarke, N.

    2017-12-01

    The research presented in this paper was conducted in Ethiopia, located in the Horn of Africa. The Ethiopian economy largely depends on rainfed agriculture, which employs 80% of the labor force. Rainfed agriculture is frequently affected by droughts and dry spells, and small-scale irrigation is considered a lifeline for the livelihoods of smallholder farmers in Ethiopia. Biophysical models are widely used to determine the agricultural production, environmental sustainability, and socio-economic outcomes of small-scale irrigation in Ethiopia. However, detailed spatially explicit data are not adequately available to calibrate and validate simulations from biophysical models. The Soil and Water Assessment Tool (SWAT) model was set up using finer-resolution spatial and temporal data. The actual evapotranspiration (AET) estimate from the SWAT model was compared with two remotely sensed datasets, namely the Advanced Very High Resolution Radiometer (AVHRR) and the Moderate Resolution Imaging Spectroradiometer (MODIS). The performance of the monthly satellite data was evaluated with the coefficient of determination (R²) over the different land-use groups. The results indicated that, over the long term and at monthly time steps, the AVHRR AET captures the pattern of SWAT-simulated AET reasonably well, especially on agriculture-dominated landscapes. A comparison between SWAT-simulated AET and AVHRR AET produced mixed results on grassland-dominated landscapes and poor agreement on forest-dominated landscapes. The AVHRR AET products showed better agreement with the SWAT-simulated AET than MODIS AET. This suggests that remotely sensed products can be a valuable tool in properly modeling small-scale irrigation.

  15. Containing Terrorism: A Dynamic Model

    Directory of Open Access Journals (Sweden)

    Giti Zahedzadeh

    2017-06-01

    The strategic interplay between counterterror measures and terror activity is complex. Herein, we propose a dynamic model to depict this interaction. The model generates stylized prognoses: (i) under conditions of inefficient counterterror measures, terror groups enjoy a longer period of activity, but only if recruitment into terror groups remains low; high recruitment shortens the period of terror activity; (ii) highly efficient counterterror measures effectively contain terror activity, but only if recruitment remains low. Thus, highly efficient counterterror measures can effectively contain terrorism if recruitment remains restrained. We conclude that the trajectory of the dynamics between counterterror measures and terror activity is heavily altered by recruitment.
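    The qualitative prognosis that efficient counterterror measures contain activity only when recruitment stays low can be reproduced by a deliberately minimal discrete-time sketch. This toy model and its parameters are our own illustration, not the authors' equations: activity grows with a recruitment rate and shrinks with a counterterror-efficiency rate.

```python
# Minimal (hypothetical) sketch: terror activity grows in proportion to
# recruitment and is removed in proportion to counterterror efficiency.

def simulate(recruitment, efficiency, steps=200, a0=1.0):
    activity, history = a0, []
    for _ in range(steps):
        activity += recruitment * activity - efficiency * activity
        history.append(activity)
    return history

low_r = simulate(recruitment=0.02, efficiency=0.10)    # contained
high_r = simulate(recruitment=0.15, efficiency=0.10)   # sustained growth
print(low_r[-1] < 1e-3, high_r[-1] > 1.0)
```

    With the same counterterror efficiency, the sign of (recruitment − efficiency) alone decides between decay and growth, mirroring the abstract's claim that recruitment heavily alters the trajectory.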

  16. Scaling of musculoskeletal models from static and dynamic trials

    DEFF Research Database (Denmark)

    Lund, Morten Enemark; Andersen, Michael Skipper; de Zee, Mark

    2015-01-01

    Subject-specific scaling of cadaver-based musculoskeletal models is important for accurate musculoskeletal analysis within multiple areas such as ergonomics, orthopaedics and occupational health. We present two procedures to scale ‘generic’ musculoskeletal models to match segment lengths and joint...... three scaling methods to an inverse dynamics-based musculoskeletal model and compared predicted knee joint contact forces to those measured with an instrumented prosthesis during gait. Additionally, a Monte Carlo study was used to investigate the sensitivity of the knee joint contact force to random...

  17. Multilevel method for modeling large-scale networks.

    Energy Technology Data Exchange (ETDEWEB)

    Safro, I. M. (Mathematics and Computer Science)

    2012-02-24

    Understanding the behavior of real complex networks is of great theoretical and practical significance. It includes developing accurate artificial models whose topological properties are similar to those of real networks, generating artificial networks at different scales under special conditions, investigating network dynamics, reconstructing missing data, predicting network response, detecting anomalies, and other tasks. Network generation, reconstruction, and prediction of future topology are central issues of this field. In this project, we address questions related to understanding network modeling, investigating network structure and properties, and generating artificial networks. Most modern network generation methods are based either on various random graph models (reinforced by a set of properties such as a power-law distribution of node degrees, graph diameter, and number of triangles) or on the principle of replicating an existing model with elements of randomization, such as the R-MAT generator and Kronecker product modeling. Hierarchical models operate at different levels of the network hierarchy but with the same finest elements of the network. However, in many cases methods that include randomization and replication elements at the finest relationships between network nodes, and modeling that addresses the problem of preserving a set of simplified properties, do not fit the real networks accurately enough. Among the unsatisfactory features are numerically inadequate results, instability of algorithms on real (artificial) data that have been tested on artificial (real) data, and incorrect behavior at different scales. One reason is that randomization and replication of existing structures can create conflicts between fine and coarse scales of the real network geometry. Moreover, randomization and satisfying some attribute at the same time can abolish topological attributes that have been undefined or hidden from
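    Kronecker product modeling, mentioned above, builds a large edge-probability matrix as the k-fold Kronecker power of a small initiator and then samples edges from it. A compact sketch (the initiator values are illustrative, chosen in the spirit of R-MAT parameters):

```python
import numpy as np

def kronecker_graph(initiator, k, seed=0):
    """Sample an adjacency matrix from a stochastic Kronecker model:
    edges are drawn from the k-fold Kronecker power of the initiator."""
    rng = np.random.default_rng(seed)
    probs = initiator.copy()
    for _ in range(k - 1):
        probs = np.kron(probs, initiator)   # self-similar probability matrix
    return (rng.random(probs.shape) < probs).astype(int)

initiator = np.array([[0.9, 0.5],
                      [0.5, 0.1]])          # illustrative 2x2 initiator
adj = kronecker_graph(initiator, k=8)
print(adj.shape, adj.sum())                 # 256x256 adjacency matrix
```

    The self-similar construction is exactly what makes such generators behave consistently across coarse scales yet, as noted above, potentially conflict with the fine-scale geometry of real networks.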

  18. Holographic models with anisotropic scaling

    Science.gov (United States)

    Brynjolfsson, E. J.; Danielsson, U. H.; Thorlacius, L.; Zingg, T.

    2013-12-01

    We consider gravity duals to d+1 dimensional quantum critical points with anisotropic scaling. The primary motivation comes from strongly correlated electron systems in condensed matter theory but the main focus of the present paper is on the gravity models in their own right. Physics at finite temperature and fixed charge density is described in terms of charged black branes. Some exact solutions are known and can be used to obtain a maximally extended spacetime geometry, which has a null curvature singularity inside a single non-degenerate horizon, but generic black brane solutions in the model can only be obtained numerically. Charged matter gives rise to black branes with hair that are dual to the superconducting phase of a holographic superconductor. Our numerical results indicate that holographic superconductors with anisotropic scaling have vanishing zero temperature entropy when the back reaction of the hair on the brane geometry is taken into account.

  19. Scale problems in assessment of hydrogeological parameters of groundwater flow models

    Science.gov (United States)

    Nawalany, Marek; Sinicyn, Grzegorz

    2015-09-01

    An overview is presented of scale problems in groundwater flow, with emphasis on upscaling of hydraulic conductivity, being a brief summary of the conventional upscaling approach with some attention paid to recently emerged approaches. The focus is on essential aspects which may be an advantage in comparison to the occasionally extremely extensive summaries presented in the literature. In the present paper the concept of scale is introduced as an indispensable part of system analysis applied to hydrogeology. The concept is illustrated with a simple hydrogeological system for which definitions of four major ingredients of scale are presented: (i) spatial extent and geometry of hydrogeological system, (ii) spatial continuity and granularity of both natural and man-made objects within the system, (iii) duration of the system and (iv) continuity/granularity of natural and man-related variables of groundwater flow system. Scales used in hydrogeology are categorised into five classes: micro-scale - scale of pores, meso-scale - scale of laboratory sample, macro-scale - scale of typical blocks in numerical models of groundwater flow, local-scale - scale of an aquifer/aquitard and regional-scale - scale of series of aquifers and aquitards. Variables, parameters and groundwater flow equations for the three lowest scales, i.e., pore-scale, sample-scale and (numerical) block-scale, are discussed in detail, with the aim to justify physically deterministic procedures of upscaling from finer to coarser scales (stochastic issues of upscaling are not discussed here). Since the procedure of transition from sample-scale to block-scale is physically well based, it is a good candidate for upscaling block-scale models to local-scale models and likewise for upscaling local-scale models to regional-scale models. Also the latest results in downscaling from block-scale to sample scale are briefly referred to.
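    The conventional upscaling of hydraulic conductivity summarized above is commonly anchored by the Wiener bounds: a thickness-weighted harmonic mean for flow normal to layering and an arithmetic mean for flow parallel to it. A short sketch with hypothetical layer values (illustrative only, not from the paper):

```python
import numpy as np

def upscale_layered_K(k_layers, thicknesses):
    """Effective hydraulic conductivity bounds for a layered block.

    Flow parallel to layering -> thickness-weighted arithmetic mean.
    Flow normal to layering   -> thickness-weighted harmonic mean.
    Any admissible upscaled K lies between these two (Wiener) bounds.
    """
    k = np.asarray(k_layers, dtype=float)
    w = np.asarray(thicknesses, dtype=float)
    w = w / w.sum()                      # normalize layer thicknesses
    k_parallel = float(np.sum(w * k))    # arithmetic mean
    k_normal = float(1.0 / np.sum(w / k))  # harmonic mean
    return k_normal, k_parallel

# Two equally thick layers: sand-like (1e-4 m/s) over silt-like (1e-6 m/s)
k_lo, k_hi = upscale_layered_K([1e-4, 1e-6], [1.0, 1.0])
```

The strong gap between the two bounds for contrasting layers is precisely why upscaling from sample-scale to block-scale must account for flow direction and internal structure.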

  20. Scale problems in assessment of hydrogeological parameters of groundwater flow models

    Directory of Open Access Journals (Sweden)

    Nawalany Marek

    2015-09-01

    Full Text Available An overview is presented of scale problems in groundwater flow, with emphasis on upscaling of hydraulic conductivity, being a brief summary of the conventional upscaling approach with some attention paid to recently emerged approaches. The focus is on essential aspects which may be an advantage in comparison to the occasionally extremely extensive summaries presented in the literature. In the present paper the concept of scale is introduced as an indispensable part of system analysis applied to hydrogeology. The concept is illustrated with a simple hydrogeological system for which definitions of four major ingredients of scale are presented: (i) spatial extent and geometry of hydrogeological system, (ii) spatial continuity and granularity of both natural and man-made objects within the system, (iii) duration of the system and (iv) continuity/granularity of natural and man-related variables of groundwater flow system. Scales used in hydrogeology are categorised into five classes: micro-scale - scale of pores, meso-scale - scale of laboratory sample, macro-scale - scale of typical blocks in numerical models of groundwater flow, local-scale - scale of an aquifer/aquitard and regional-scale - scale of series of aquifers and aquitards. Variables, parameters and groundwater flow equations for the three lowest scales, i.e., pore-scale, sample-scale and (numerical) block-scale, are discussed in detail, with the aim to justify physically deterministic procedures of upscaling from finer to coarser scales (stochastic issues of upscaling are not discussed here). Since the procedure of transition from sample-scale to block-scale is physically well based, it is a good candidate for upscaling block-scale models to local-scale models and likewise for upscaling local-scale models to regional-scale models. Also the latest results in downscaling from block-scale to sample scale are briefly referred to.

  1. Allometric Scaling and Resource Limitations Model of Total Aboveground Biomass in Forest Stands: Site-scale Test of Model

    Science.gov (United States)

    CHOI, S.; Shi, Y.; Ni, X.; Simard, M.; Myneni, R. B.

    2013-12-01

    Sparseness in in-situ observations has precluded the spatially explicit and accurate mapping of forest biomass. The need for large-scale maps has raised various approaches implementing conjugations between forest biomass and geospatial predictors such as climate, forest type, soil property, and topography. Despite the improved modeling techniques (e.g., machine learning and spatial statistics), a common limitation is that biophysical mechanisms governing tree growth are neglected in these black-box type models. The absence of a priori knowledge may lead to false interpretation of modeled results or unexplainable shifts in outputs due to the inconsistent training samples or study sites. Here, we present a gray-box approach combining known biophysical processes and geospatial predictors through parametric optimizations (inversion of reference measures). Total aboveground biomass in forest stands is estimated by incorporating the Forest Inventory and Analysis (FIA) and Parameter-elevation Regressions on Independent Slopes Model (PRISM). Two main premises of this research are: (a) The Allometric Scaling and Resource Limitations (ASRL) theory can provide a relationship between tree geometry and local resource availability constrained by environmental conditions; and (b) The zeroth order theory (size-frequency distribution) can expand individual tree allometry into total aboveground biomass at the forest stand level. In addition to the FIA estimates, two reference maps from the National Biomass and Carbon Dataset (NBCD) and U.S. Forest Service (USFS) were produced to evaluate the model. This research focuses on a site-scale test of the biomass model to explore the robustness of predictors, and to potentially improve models using additional geospatial predictors such as climatic variables, vegetation indices, soil properties, and lidar-/radar-derived altimetry products (or existing forest canopy height maps). As results, the optimized ASRL estimates satisfactorily

  2. Efficient coupling of 527 nm laser beam power to a long scale-length plasma

    International Nuclear Information System (INIS)

    Moody, J.D.; Divol, L.; Glenzer, S.H.; MacKinnon, A.J.; Froula, D.H.; Gregori, G.; Kruer, W.L.; Meezan, N.B.; Suter, L.J.; Williams, E.A.; Bahr, R.; Seka, W.

    2006-01-01

    We experimentally demonstrate that application of laser smoothing schemes including smoothing by spectral dispersion (SSD) and polarization smoothing (PS) increases the intensity range for efficient coupling of frequency doubled (527 nm) laser light to a long scale-length plasma with n_e/n_cr = 0.14 and T_e = 2 keV. (authors)

  3. Quantum critical scaling of fidelity in BCS-like model

    International Nuclear Information System (INIS)

    Adamski, Mariusz; Jedrzejewski, Janusz; Krokhmalskii, Taras

    2013-01-01

    We study scaling of the ground-state fidelity in neighborhoods of quantum critical points in a model of interacting spinful fermions—a BCS-like model. Due to the exact diagonalizability of the model, in one and higher dimensions, scaling of the ground-state fidelity can be analyzed numerically with great accuracy, not only for small systems but also for macroscopic ones, together with the crossover region between them. Additionally, in the one-dimensional case we have been able to derive a number of analytical formulas for fidelity and show that they accurately fit our numerical results; these results are reported in the paper. Besides regular critical points and their neighborhoods, where well-known scaling laws are obeyed, there is the multicritical point and critical points in its proximity where anomalous scaling behavior is found. We also consider scaling of fidelity in neighborhoods of critical points where fidelity oscillates strongly as the system size or the chemical potential is varied. Our results for a one-dimensional version of a BCS-like model are compared with those obtained recently by Rams and Damski in similar studies of a quantum spin chain—an anisotropic XY model in a transverse magnetic field. (paper)
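    For exactly diagonalizable fermionic chains of this kind, the ground-state fidelity factorizes over momentum modes. A sketch for the closely related transverse-field Ising chain, used here as a stand-in for the paper's BCS-like model (the per-mode overlap cos(Δθ_k/2) in terms of the Bogoliubov angle is the standard textbook result, not the authors' code):

```python
import numpy as np

def fidelity_ising_chain(h1, h2, N):
    """Ground-state fidelity F = |<psi(h1)|psi(h2)>| of a transverse-field
    Ising chain of N sites (antiperiodic fermion sector).

    The ground state factorizes over momentum modes, so the fidelity is a
    product of mode overlaps cos((theta_k(h1) - theta_k(h2)) / 2), with the
    Bogoliubov angle defined by tan(theta_k) = sin k / (h - cos k).
    """
    k = (2 * np.arange(N // 2) + 1) * np.pi / N    # momenta k > 0
    th1 = np.arctan2(np.sin(k), h1 - np.cos(k))
    th2 = np.arctan2(np.sin(k), h2 - np.cos(k))
    return float(np.prod(np.cos((th1 - th2) / 2.0)))

# Same field separation, away from vs straddling the critical point h = 1:
F_away = fidelity_ising_chain(1.50, 1.52, 400)
F_near = fidelity_ising_chain(0.99, 1.01, 400)
```

The sharp drop of F_near relative to F_away is the fidelity signature of quantum criticality whose finite-size scaling the paper analyzes.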

  4. Large scale model testing

    International Nuclear Information System (INIS)

    Brumovsky, M.; Filip, R.; Polachova, H.; Stepanek, S.

    1989-01-01

    Fracture mechanics and fatigue calculations for WWER reactor pressure vessels were checked by large scale model testing performed using large testing machine ZZ 8000 (with a maximum load of 80 MN) at the SKODA WORKS. The results are described from testing the material resistance to fracture (non-ductile). The testing included the base materials and welded joints. The rated specimen thickness was 150 mm with defects of a depth between 15 and 100 mm. The results are also presented of nozzles of 850 mm inner diameter in a scale of 1:3; static, cyclic, and dynamic tests were performed without and with surface defects (15, 30 and 45 mm deep). During cyclic tests the crack growth rate in the elastic-plastic region was also determined. (author). 6 figs., 2 tabs., 5 refs

  5. A hybrid procedure for MSW generation forecasting at multiple time scales in Xiamen City, China

    International Nuclear Information System (INIS)

    Xu, Lilai; Gao, Peiqing; Cui, Shenghui; Liu, Chun

    2013-01-01

    Highlights: ► We propose a hybrid model that combines seasonal SARIMA model and grey system theory. ► The model is robust at multiple time scales with the anticipated accuracy. ► At month-scale, the SARIMA model shows good representation for monthly MSW generation. ► At medium-term time scale, grey relational analysis could yield the MSW generation. ► At long-term time scale, GM (1, 1) provides a basic scenario of MSW generation. - Abstract: Accurate forecasting of municipal solid waste (MSW) generation is crucial and fundamental for the planning, operation and optimization of any MSW management system. Comprehensive information on waste generation for month-scale, medium-term and long-term time scales is especially needed, considering the necessity of MSW management upgrade facing many developing countries. Several existing models are available but of little use in forecasting MSW generation at multiple time scales. The goal of this study is to propose a hybrid model that combines the seasonal autoregressive integrated moving average (SARIMA) model and grey system theory to forecast MSW generation at multiple time scales without needing to consider other variables such as demographics and socioeconomic factors. To demonstrate its applicability, a case study of Xiamen City, China was performed. Results show that the model is robust enough to fit and forecast seasonal and annual dynamics of MSW generation at month-scale, medium- and long-term time scales with the desired accuracy. In the month-scale, MSW generation in Xiamen City will peak at 132.2 thousand tonnes in July 2015 – 1.5 times the volume in July 2010. In the medium term, annual MSW generation will increase to 1518.1 thousand tonnes by 2015 at an average growth rate of 10%. In the long term, a large volume of MSW will be output annually and will increase to 2486.3 thousand tonnes by 2020 – 2.5 times the value for 2010. The hybrid model proposed in this paper can enable decision makers to
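    The GM(1, 1) component of the hybrid model admits a compact implementation: accumulate the series, fit the grey differential equation by least squares, forecast, and de-accumulate. An illustrative sketch with made-up data (not the Xiamen series):

```python
import numpy as np

def gm11_forecast(x, n_ahead):
    """Grey GM(1, 1) forecast.

    Fits dx1/dt + a * x1 = b to the accumulated series x1 = cumsum(x)
    by least squares, forecasts x1, then de-accumulates.
    """
    x = np.asarray(x, dtype=float)
    x1 = np.cumsum(x)                              # accumulated generating operation
    z = 0.5 * (x1[1:] + x1[:-1])                   # background (mean) values
    B = np.column_stack([-z, np.ones(len(z))])
    a, b = np.linalg.lstsq(B, x[1:], rcond=None)[0]
    k = np.arange(len(x) + n_ahead)
    x1_hat = (x[0] - b / a) * np.exp(-a * k) + b / a
    x_hat = np.concatenate([[x1_hat[0]], np.diff(x1_hat)])  # de-accumulate
    return x_hat[len(x):]

# Hypothetical annual waste series growing ~10% per year (thousand tonnes)
hist = [100.0, 110.0, 121.0, 133.1, 146.4]
fc = gm11_forecast(hist, 2)
```

Because GM(1, 1) fits an exponential trend to sparse data, it suits the long-term scenario role the abstract assigns it, while the SARIMA component captures the monthly seasonality.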

  6. A hybrid procedure for MSW generation forecasting at multiple time scales in Xiamen City, China

    Energy Technology Data Exchange (ETDEWEB)

    Xu, Lilai, E-mail: llxu@iue.ac.cn [Key Lab of Urban Environment and Health, Institute of Urban Environment, Chinese Academy of Sciences, 1799 Jimei Road, Xiamen 361021 (China); Xiamen Key Lab of Urban Metabolism, Xiamen 361021 (China); Gao, Peiqing, E-mail: peiqing15@yahoo.com.cn [Xiamen City Appearance and Environmental Sanitation Management Office, 51 Hexiangxi Road, Xiamen 361004 (China); Cui, Shenghui, E-mail: shcui@iue.ac.cn [Key Lab of Urban Environment and Health, Institute of Urban Environment, Chinese Academy of Sciences, 1799 Jimei Road, Xiamen 361021 (China); Xiamen Key Lab of Urban Metabolism, Xiamen 361021 (China); Liu, Chun, E-mail: xmhwlc@yahoo.com.cn [Xiamen City Appearance and Environmental Sanitation Management Office, 51 Hexiangxi Road, Xiamen 361004 (China)

    2013-06-15

    Highlights: ► We propose a hybrid model that combines seasonal SARIMA model and grey system theory. ► The model is robust at multiple time scales with the anticipated accuracy. ► At month-scale, the SARIMA model shows good representation for monthly MSW generation. ► At medium-term time scale, grey relational analysis could yield the MSW generation. ► At long-term time scale, GM (1, 1) provides a basic scenario of MSW generation. - Abstract: Accurate forecasting of municipal solid waste (MSW) generation is crucial and fundamental for the planning, operation and optimization of any MSW management system. Comprehensive information on waste generation for month-scale, medium-term and long-term time scales is especially needed, considering the necessity of MSW management upgrade facing many developing countries. Several existing models are available but of little use in forecasting MSW generation at multiple time scales. The goal of this study is to propose a hybrid model that combines the seasonal autoregressive integrated moving average (SARIMA) model and grey system theory to forecast MSW generation at multiple time scales without needing to consider other variables such as demographics and socioeconomic factors. To demonstrate its applicability, a case study of Xiamen City, China was performed. Results show that the model is robust enough to fit and forecast seasonal and annual dynamics of MSW generation at month-scale, medium- and long-term time scales with the desired accuracy. In the month-scale, MSW generation in Xiamen City will peak at 132.2 thousand tonnes in July 2015 – 1.5 times the volume in July 2010. In the medium term, annual MSW generation will increase to 1518.1 thousand tonnes by 2015 at an average growth rate of 10%. In the long term, a large volume of MSW will be output annually and will increase to 2486.3 thousand tonnes by 2020 – 2.5 times the value for 2010. The hybrid model proposed in this paper can enable decision makers to

  7. Modeling generation expansion in the context of a security of supply mechanism based on long-term auctions. Application to the Colombian case

    International Nuclear Information System (INIS)

    Rodilla, P.; Batlle, C.; Salazar, J.; Sanchez, J.J.

    2011-01-01

    In an attempt to provide electricity generation investors with appropriate economic incentives so as to maintain quality of supply at socially optimal levels, a growing number of electricity market regulators have opted for implementing a security of supply mechanism based on long-term auctions. In this context, the ability to analyze long-term investment dynamics is a key issue not only for market agents, but also for regulators. This paper describes a model developed to serve this purpose. A general system-dynamics-inspired methodology has been designed to be able to simulate these long-term auction mechanisms in the formats presently in place. A full-scale simulation based on the Colombian system was conducted to illustrate model capabilities. (author)

  8. Modeling generation expansion in the context of a security of supply mechanism based on long-term auctions. Application to the Colombian case

    Energy Technology Data Exchange (ETDEWEB)

    Rodilla, P.; Batlle, C. [Institute for Research in Technology, University Pontificia Comillas, Sta. Cruz de Marcenado 26, 28015 Madrid (Spain); Salazar, J. [Empresas Publicas de Medellin, Carrera 58 No. 42-125 Edificio Inteligente, Medellin (Colombia); Sanchez, J.J. [Secretaria de Estado de Cambio Climatico, Ministerio de Medio Ambiente, Rural y Marino. Plaza San Juan de la Cruz, 28071 Madrid (Spain)

    2011-01-15

    In an attempt to provide electricity generation investors with appropriate economic incentives so as to maintain quality of supply at socially optimal levels, a growing number of electricity market regulators have opted for implementing a security of supply mechanism based on long-term auctions. In this context, the ability to analyze long-term investment dynamics is a key issue not only for market agents, but also for regulators. This paper describes a model developed to serve this purpose. A general system-dynamics-inspired methodology has been designed to be able to simulate these long-term auction mechanisms in the formats presently in place. A full-scale simulation based on the Colombian system was conducted to illustrate model capabilities. (author)

  9. Modelling solute dispersion in periodic heterogeneous porous media: Model benchmarking against intermediate scale experiments

    Science.gov (United States)

    Majdalani, Samer; Guinot, Vincent; Delenne, Carole; Gebran, Hicham

    2018-06-01

    This paper is devoted to theoretical and experimental investigations of solute dispersion in heterogeneous porous media. Dispersion in heterogeneous porous media has been reported to be scale-dependent, a likely indication that the proposed dispersion models are incompletely formulated. A high quality experimental data set of breakthrough curves in periodic model heterogeneous porous media is presented. In contrast with most previously published experiments, the present experiments involve numerous replicates. This allows the statistical variability of experimental data to be accounted for. Several models are benchmarked against the data set: the Fickian-based advection-dispersion, mobile-immobile, multirate, multiple region advection dispersion models, and a newly proposed transport model based on pure advection. A salient property of the latter model is that its solutions exhibit a ballistic behaviour for small times, while tending to the Fickian behaviour for large time scales. Model performance is assessed using a novel objective function accounting for the statistical variability of the experimental data set, while putting equal emphasis on both small and large time scale behaviours. Besides being as accurate as the other models, the new purely advective model has the advantages that (i) it does not exhibit the undesirable effects associated with the usual Fickian operator (namely the infinite solute front propagation speed), and (ii) it allows dispersive transport to be simulated on every heterogeneity scale using scale-independent parameters.
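    The Fickian advection-dispersion model used as one of the benchmarks has the classical Ogata-Banks solution for a continuous injection in 1-D. A sketch with assumed parameter values (illustrative, not the paper's fitted values):

```python
import numpy as np
from scipy.special import erfc

def ade_breakthrough(x, t, v, D):
    """Ogata-Banks solution of the 1-D advection-dispersion equation:
    relative concentration C/C0 at distance x and time t for a continuous
    injection, with pore velocity v and dispersion coefficient D.
    """
    t = np.asarray(t, dtype=float)
    s = 2.0 * np.sqrt(D * t)
    return 0.5 * (erfc((x - v * t) / s)
                  + np.exp(v * x / D) * erfc((x + v * t) / s))

# Breakthrough curve at x = 0.5 m for v = 1e-4 m/s, D = 1e-6 m2/s (assumed)
t = np.linspace(60.0, 2.0e4, 500)
c = ade_breakthrough(0.5, t, 1e-4, 1e-6)
```

The early-time tail of this solution is diffusive (instantaneous front propagation), which is exactly the behaviour the paper's purely advective model avoids with its small-time ballistic regime.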

  10. On Modeling Large-Scale Multi-Agent Systems with Parallel, Sequential and Genuinely Asynchronous Cellular Automata

    International Nuclear Information System (INIS)

    Tosic, P.T.

    2011-01-01

    We study certain types of Cellular Automata (CA) viewed as an abstraction of large-scale Multi-Agent Systems (MAS). We argue that the classical CA model needs to be modified in several important respects, in order to become a relevant and sufficiently general model for the large-scale MAS, and so that thus generalized model can capture many important MAS properties at the level of agent ensembles and their long-term collective behavior patterns. We specifically focus on the issue of inter-agent communication in CA, and propose sequential cellular automata (SCA) as the first step, and genuinely Asynchronous Cellular Automata (ACA) as the ultimate deterministic CA-based abstract models for large-scale MAS made of simple reactive agents. We first formulate deterministic and nondeterministic versions of sequential CA, and then summarize some interesting configuration space properties (i.e., possible behaviors) of a restricted class of sequential CA. In particular, we compare and contrast those properties of sequential CA with the corresponding properties of the classical (that is, parallel and perfectly synchronous) CA with the same restricted class of update rules. We analytically demonstrate failure of the studied sequential CA models to simulate all possible behaviors of perfectly synchronous parallel CA, even for a very restricted class of non-linear totalistic node update rules. The lesson learned is that the interleaving semantics of concurrency, when applied to sequential CA, is not refined enough to adequately capture the perfect synchrony of parallel CA updates. Last but not least, we outline what would be an appropriate CA-like abstraction for large-scale distributed computing insofar as the inter-agent communication model is concerned, and in that context we propose genuinely asynchronous CA. (author)
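    The difference between perfectly synchronous and sequential (interleaving) updates is easy to exhibit on a toy totalistic rule; the rule and configuration below are our own illustration, not taken from the paper:

```python
def step_parallel(state, rule):
    """Synchronous CA update on a ring: every cell reads the old configuration."""
    n = len(state)
    return [rule(state[(i - 1) % n], state[i], state[(i + 1) % n]) for i in range(n)]

def step_sequential(state, rule):
    """Sequential left-to-right update: each cell sees its left neighbour's
    already-updated value (interleaving semantics)."""
    s = list(state)
    n = len(s)
    for i in range(n):
        s[i] = rule(s[(i - 1) % n], s[i], s[(i + 1) % n])
    return s

# Totalistic rule: a cell becomes 1 iff at least two of the three cells
# in its neighbourhood are 1.
rule = lambda l, c, r: 1 if l + c + r >= 2 else 0

start = [1, 0, 1, 0, 0, 1]
```

One step from the same configuration already yields different results under the two update disciplines, a small instance of the paper's point that interleaving semantics cannot reproduce perfect synchrony.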

  11. Multi-Scale Modeling of the Gamma Radiolysis of Nitrate Solutions.

    Science.gov (United States)

    Horne, Gregory P; Donoclift, Thomas A; Sims, Howard E; Orr, Robin M; Pimblott, Simon M

    2016-11-17

    A multiscale modeling approach has been developed for the extended time scale long-term radiolysis of aqueous systems. The approach uses a combination of stochastic track structure and track chemistry as well as deterministic homogeneous chemistry techniques and involves four key stages: radiation track structure simulation, the subsequent physicochemical processes, nonhomogeneous diffusion-reaction kinetic evolution, and homogeneous bulk chemistry modeling. The first three components model the physical and chemical evolution of an isolated radiation chemical track and provide radiolysis yields, within the extremely low dose isolated track paradigm, as the input parameters for a bulk deterministic chemistry model. This approach to radiation chemical modeling has been tested by comparison with the experimentally observed yield of nitrite from the gamma radiolysis of sodium nitrate solutions. This is a complex radiation chemical system which is strongly dependent on secondary reaction processes. The concentration of nitrite is not just dependent upon the evolution of radiation track chemistry and the scavenging of the hydrated electron and its precursors but also on the subsequent reactions of the products of these scavenging reactions with other water radiolysis products. Without the inclusion of intratrack chemistry, the deterministic component of the multiscale model is unable to correctly predict experimental data, highlighting the importance of intratrack radiation chemistry in the chemical evolution of the irradiated system.
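    As a toy stand-in for the deterministic bulk-chemistry stage (the real reaction set is far larger), consider hydrated electrons produced at a constant radiolytic rate and scavenged by nitrate to yield nitrite; the production rate below is an assumed dose-rate surrogate, and the linear kinetics are solved analytically:

```python
import numpy as np

K_SCAV = 9.7e9   # e_aq- + NO3- rate constant, M^-1 s^-1 (literature value)
P = 1.0e-7       # radiolytic e_aq- production, M/s (assumed dose rate)

def concentrations(t, no3):
    """Solve e' = P - lam * e (lam = K_SCAV * [NO3-]) analytically and get
    nitrite from the mass balance e_aq + NO2- = P * t."""
    lam = K_SCAV * no3
    e_aq = (P / lam) * (1.0 - np.exp(-lam * t))
    no2 = P * t - e_aq
    return e_aq, no2

# 10 s of irradiation of a 0.01 M nitrate solution (hypothetical conditions)
e_aq, no2 = concentrations(10.0, 0.01)
```

The electron population reaches a tiny steady state almost instantly, so essentially the whole radiolytic yield ends up as nitrite; the paper's point is that this bulk balance is wrong unless the intratrack yields feeding P are simulated stochastically first.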

  12. Phenomenological aspects of no-scale inflation models

    Energy Technology Data Exchange (ETDEWEB)

    Ellis, John [Theoretical Particle Physics and Cosmology Group, Department of Physics, King's College London, WC2R 2LS London (United Kingdom); Garcia, Marcos A.G.; Olive, Keith A. [William I. Fine Theoretical Physics Institute, School of Physics and Astronomy, University of Minnesota, 116 Church Street SE, Minneapolis, MN 55455 (United States); Nanopoulos, Dimitri V., E-mail: john.ellis@cern.ch, E-mail: garciagarcia@physics.umn.edu, E-mail: dimitri@physics.tamu.edu, E-mail: olive@physics.umn.edu [George P. and Cynthia W. Mitchell Institute for Fundamental Physics and Astronomy, Texas A and M University, College Station, 77843 Texas (United States)

    2015-10-01

    We discuss phenomenological aspects of inflationary models with a no-scale supergravity Kähler potential motivated by compactified string models, in which the inflaton may be identified either as a Kähler modulus or an untwisted matter field, focusing on models that make predictions for the scalar spectral index n{sub s} and the tensor-to-scalar ratio r that are similar to the Starobinsky model. We discuss possible patterns of soft supersymmetry breaking, exhibiting examples of the pure no-scale type m{sub 0} = B{sub 0} = A{sub 0} = 0, of the CMSSM type with universal A{sub 0} and m{sub 0} ≠ 0 at a high scale, and of the mSUGRA type with A{sub 0} = B{sub 0} + m{sub 0} boundary conditions at the high input scale. These may be combined with a non-trivial gauge kinetic function that generates gaugino masses m{sub 1/2} ≠ 0, or one may have a pure gravity mediation scenario where trilinear terms and gaugino masses are generated through anomalies. We also discuss inflaton decays and reheating, showing possible decay channels for the inflaton when it is either an untwisted matter field or a Kähler modulus. Reheating is very efficient if a matter field inflaton is directly coupled to MSSM fields, and both candidates lead to sufficient reheating in the presence of a non-trivial gauge kinetic function.

  13. Phenomenological aspects of no-scale inflation models

    Energy Technology Data Exchange (ETDEWEB)

    Ellis, John [Theoretical Particle Physics and Cosmology Group, Department of Physics, King's College London, WC2R 2LS London (United Kingdom); Theory Division, CERN, CH-1211 Geneva 23 (Switzerland); Garcia, Marcos A.G. [William I. Fine Theoretical Physics Institute, School of Physics and Astronomy, University of Minnesota, 116 Church Street SE, Minneapolis, MN 55455 (United States); Nanopoulos, Dimitri V. [George P. and Cynthia W. Mitchell Institute for Fundamental Physics and Astronomy, Texas A&M University, College Station, 77843 Texas (United States); Astroparticle Physics Group, Houston Advanced Research Center (HARC), Mitchell Campus, Woodlands, 77381 Texas (United States); Academy of Athens, Division of Natural Sciences, 28 Panepistimiou Avenue, 10679 Athens (Greece); Olive, Keith A. [William I. Fine Theoretical Physics Institute, School of Physics and Astronomy, University of Minnesota, 116 Church Street SE, Minneapolis, MN 55455 (United States)

    2015-10-01

    We discuss phenomenological aspects of inflationary models with a no-scale supergravity Kähler potential motivated by compactified string models, in which the inflaton may be identified either as a Kähler modulus or an untwisted matter field, focusing on models that make predictions for the scalar spectral index n{sub s} and the tensor-to-scalar ratio r that are similar to the Starobinsky model. We discuss possible patterns of soft supersymmetry breaking, exhibiting examples of the pure no-scale type m{sub 0}=B{sub 0}=A{sub 0}=0, of the CMSSM type with universal A{sub 0} and m{sub 0}≠0 at a high scale, and of the mSUGRA type with A{sub 0}=B{sub 0}+m{sub 0} boundary conditions at the high input scale. These may be combined with a non-trivial gauge kinetic function that generates gaugino masses m{sub 1/2}≠0, or one may have a pure gravity mediation scenario where trilinear terms and gaugino masses are generated through anomalies. We also discuss inflaton decays and reheating, showing possible decay channels for the inflaton when it is either an untwisted matter field or a Kähler modulus. Reheating is very efficient if a matter field inflaton is directly coupled to MSSM fields, and both candidates lead to sufficient reheating in the presence of a non-trivial gauge kinetic function.

  14. Scale effect challenges in urban hydrology highlighted with a distributed hydrological model

    Science.gov (United States)

    Ichiba, Abdellah; Gires, Auguste; Tchiguirinskaia, Ioulia; Schertzer, Daniel; Bompard, Philippe; Ten Veldhuis, Marie-Claire

    2018-01-01

    Hydrological models are extensively used in urban water management, development and evaluation of future scenarios and research activities. There is a growing interest in the development of fully distributed and grid-based models. However, some complex questions related to scale effects are not yet fully understood and still remain open issues in urban hydrology. In this paper we propose a two-step investigation framework to illustrate the extent of scale effects in urban hydrology. First, fractal tools are used to highlight the scale dependence observed within distributed data input into urban hydrological models. Then an intensive multi-scale modelling work is carried out to understand scale effects on hydrological model performance. Investigations are conducted using a fully distributed and physically based model, Multi-Hydro, developed at Ecole des Ponts ParisTech. The model is implemented at 17 spatial resolutions ranging from 100 to 5 m. Results clearly exhibit scale effect challenges in urban hydrology modelling. The applicability of fractal concepts highlights the scale dependence observed within distributed data. Patterns of geophysical data change when the size of the observation pixel changes. The multi-scale modelling investigation confirms scale effects on hydrological model performance. Results are analysed over three ranges of scales identified in the fractal analysis and confirmed through modelling. This work also discusses some remaining issues in urban hydrology modelling related to the availability of high-quality data at high resolutions, and model numerical instabilities as well as the computation time requirements. The main findings of this paper enable a replacement of traditional methods of model calibration by innovative methods of model resolution alteration based on the spatial data variability and scaling of flows in urban hydrology.
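    The fractal tools used in the first step of the framework typically reduce to a box-counting estimate over the gridded input data. An illustrative implementation (our own sketch, not the authors' code):

```python
import numpy as np

def box_counting_dimension(mask, box_sizes):
    """Box-counting estimate of the fractal dimension of a binary raster
    (e.g. a distributed land-use or imperviousness map).

    Counts occupied boxes N(s) at each box size s and fits
    log N(s) ~ -D log s by least squares.
    """
    counts = []
    for s in box_sizes:
        h, w = mask.shape
        m = mask[: h - h % s, : w - w % s]        # trim so s x s boxes tile exactly
        blocks = m.reshape(m.shape[0] // s, s, m.shape[1] // s, s)
        counts.append(np.count_nonzero(blocks.any(axis=(1, 3))))
    slope, _ = np.polyfit(np.log(box_sizes), np.log(counts), 1)
    return -slope

# Sanity check: a completely filled raster should give dimension close to 2
full = np.ones((256, 256), dtype=bool)
D = box_counting_dimension(full, [2, 4, 8, 16, 32])
```

Breaks in the slope of log N(s) versus log s identify the distinct scaling ranges over which the paper then analyses model performance.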

  15. Scaling considerations for modeling the in situ vitrification process

    International Nuclear Information System (INIS)

    Langerman, M.A.; MacKinnon, R.J.

    1990-09-01

    Scaling relationships for modeling the in situ vitrification waste remediation process are documented based upon similarity considerations derived from fundamental principles. Requirements for maintaining temperature and electric potential field similarity between the model and the prototype are determined as well as requirements for maintaining similarity in off-gas generation rates. A scaling rationale for designing reduced-scale experiments is presented and the results are assessed numerically. 9 refs., 6 figs

  16. Nucleon electric dipole moments in high-scale supersymmetric models

    International Nuclear Information System (INIS)

    Hisano, Junji; Kobayashi, Daiki; Kuramoto, Wataru; Kuwahara, Takumi

    2015-01-01

    The electric dipole moments (EDMs) of electron and nucleons are promising probes of the new physics. In generic high-scale supersymmetric (SUSY) scenarios such as models based on mixture of the anomaly and gauge mediations, gluino has an additional contribution to the nucleon EDMs. In this paper, we studied the effect of the CP-violating gluon Weinberg operator induced by the gluino chromoelectric dipole moment in the high-scale SUSY scenarios, and we evaluated the nucleon and electron EDMs in the scenarios. We found that in the generic high-scale SUSY models, the nucleon EDMs may receive the sizable contribution from the Weinberg operator. Thus, it is important to compare the nucleon EDMs with the electron one in order to discriminate among the high-scale SUSY models.

  17. Nucleon electric dipole moments in high-scale supersymmetric models

    Energy Technology Data Exchange (ETDEWEB)

    Hisano, Junji [Kobayashi-Maskawa Institute for the Origin of Particles and the Universe (KMI), Nagoya University, Nagoya 464-8602 (Japan); Department of Physics, Nagoya University, Nagoya 464-8602 (Japan); Kavli IPMU (WPI), UTIAS, University of Tokyo, Kashiwa, Chiba 277-8584 (Japan); Kobayashi, Daiki; Kuramoto, Wataru; Kuwahara, Takumi [Department of Physics, Nagoya University, Nagoya 464-8602 (Japan)

    2015-11-12

    The electric dipole moments (EDMs) of electron and nucleons are promising probes of the new physics. In generic high-scale supersymmetric (SUSY) scenarios such as models based on mixture of the anomaly and gauge mediations, gluino has an additional contribution to the nucleon EDMs. In this paper, we studied the effect of the CP-violating gluon Weinberg operator induced by the gluino chromoelectric dipole moment in the high-scale SUSY scenarios, and we evaluated the nucleon and electron EDMs in the scenarios. We found that in the generic high-scale SUSY models, the nucleon EDMs may receive the sizable contribution from the Weinberg operator. Thus, it is important to compare the nucleon EDMs with the electron one in order to discriminate among the high-scale SUSY models.

  18. Wind and Photovoltaic Large-Scale Regional Models for hourly production evaluation

    DEFF Research Database (Denmark)

    Marinelli, Mattia; Maule, Petr; Hahmann, Andrea N.

    2015-01-01

    This work presents two large-scale regional models used for the evaluation of normalized power output from wind turbines and photovoltaic power plants on a European regional scale. The models give an estimate of renewable production on a regional scale with 1 h resolution, starting from a mesoscale … of the transmission system, especially regarding the cross-border power flows. The tuning of these regional models is done using historical meteorological data acquired on a per-country basis and using publicly available data on installed capacity.

  19. Damage Modeling Of Injection-Molded Short- And Long-Fiber Thermoplastics

    International Nuclear Information System (INIS)

    Nguyen, Ba Nghiep; Kunc, Vlastimil; Bapanapalli, Satish K.; Phelps, Jay; Tucker, Charles L. III

    2009-01-01

    This article applies the recent anisotropic rotary diffusion - reduced strain closure (ARD-RSC) model for predicting fiber orientation and a new damage model for injection-molded long-fiber thermoplastics (LFTs) to analyze progressive damage leading to total failure of injection-molded long-glass-fiber/polypropylene (PP) specimens. The ARD-RSC model was implemented in a research version of the Autodesk Moldflow Plastics Insight (MPI) processing code, and it has been used to simulate injection-molding of a long-glass-fiber/PP plaque. The damage model combines micromechanical modeling with a continuum damage mechanics description to predict the nonlinear behavior due to plasticity coupled with damage in LFTs. This model has been implemented in the ABAQUS finite element code via user-subroutines and has been used in the damage analyses of tensile specimens removed from the injection-molded long-glass-fiber/PP plaques. Experimental characterization and mechanical testing were performed to provide input data to support and validate both process modeling and damage analyses. The predictions are in agreement with the experimental results.

  20. Multi-scale modeling of the environmental impact and energy performance of open-loop groundwater heat pumps in urban areas

    International Nuclear Information System (INIS)

    Sciacovelli, A.; Guelpa, E.; Verda, V.

    2014-01-01

    Groundwater heat pumps are expected to play a major role in future energy scenarios. Proliferation of such systems in urban areas may generate issues related to possible interference between installations. These issues are associated with the thermal plume produced by heat pumps during operation and are particularly evident in the case of groundwater flow, because of the advection heat transfer. In this paper, the impact of an installation is investigated through a thermo-fluid dynamic model of the subsurface which considers fluid flow in the saturated unit and heat transfer in both the saturated and unsaturated units. Due to the large extension of the affected area, a multiscale numerical model that combines a three-dimensional CFD model and a network model is proposed. The thermal request of the user and the heat pump performances are considered in the multi-scale numerical model through appropriate boundary conditions imposed at the wells. Various scenarios corresponding to different operating modes of the heat pump are considered. - Highlights: • A groundwater heat pump of a skyscraper under construction is considered. • The thermal plume induced in the groundwater is evaluated using a multi-scale model. • The multi-scale model combines a full 3D model and a network model. • The multi-scale approach permits studying large domains over long times at low computational cost. • In some cases the thermal plume can reduce the COP of other heat pumps by 20%
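    The advective growth of such a thermal plume can be estimated on the back of an envelope: with groundwater flow, the thermal front travels at the Darcy flux scaled by the ratio of water to bulk volumetric heat capacity. This standard thermal-retardation estimate is only a sketch; the flux and heat-capacity values below are illustrative and not taken from the paper.

```python
def thermal_front_velocity(darcy_flux, c_water=4.2e6, c_bulk=2.8e6):
    """Thermal front velocity: Darcy flux scaled by the ratio of water to
    bulk volumetric heat capacity (J m^-3 K^-1); the heat front lags the
    pore-water velocity."""
    return darcy_flux * c_water / c_bulk

def plume_length(darcy_flux, years):
    """Down-gradient thermal plume extent after a given operation time."""
    seconds = years * 365.25 * 24.0 * 3600.0
    return thermal_front_velocity(darcy_flux) * seconds

q = 0.1 / 86400.0                # illustrative Darcy flux: 0.1 m/day in m/s
L1 = plume_length(q, years=1.0)  # plume extent after one year of operation
```

With these numbers the plume advances roughly 55 m per year, which is why interference between neighbouring installations becomes a concern in urban areas.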

  1. Scaling, soil moisture and evapotranspiration in runoff models

    Science.gov (United States)

    Wood, Eric F.

    1993-01-01

    The effects of small-scale heterogeneity in land surface characteristics on the large-scale fluxes of water and energy in the land-atmosphere system have become a central focus of many climatology research experiments. The acquisition of high-resolution land surface data through remote sensing and intensive land-climatology field experiments (like HAPEX and FIFE) has provided data to investigate the interactions between microscale land-atmosphere interactions and macroscale models. One essential research question is how to account for the small-scale heterogeneities and whether 'effective' parameters can be used in the macroscale models. To address this question of scaling, the probability distribution for evaporation is derived, which illustrates the conditions under which scaling should work. A correction algorithm that may be appropriate for the land parameterization of a GCM is derived using a 2nd-order linearization scheme. The performance of the algorithm is evaluated.
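    The scaling question above can be made concrete: for a nonlinear flux law, the areal average of the flux differs from the flux evaluated at the areal-average state, and a second-order Taylor correction recovers most of the gap. The quadratic "evaporation efficiency" curve below is purely illustrative, not the paper's parameterization.

```python
import numpy as np

def grid_average_flux(f, x):
    """'True' areal average: evaluate the flux at every sub-grid point, then average."""
    return np.mean(f(x))

def effective_parameter_flux(f, x):
    """The 'effective parameter' shortcut: evaluate the flux at the grid-mean state."""
    return f(np.mean(x))

def second_order_flux(f, x, h=1e-4):
    """2nd-order correction: f(xbar) + 0.5 * f''(xbar) * Var(x)."""
    xb = np.mean(x)
    f2 = (f(xb + h) - 2.0 * f(xb) + f(xb - h)) / h**2  # finite-difference f''
    return f(xb) + 0.5 * f2 * np.var(x)

beta = lambda s: s**2  # illustrative nonlinear evaporation-efficiency curve
rng = np.random.default_rng(0)
s = rng.uniform(0.2, 0.8, 10_000)  # sub-grid soil-moisture heterogeneity

truth = grid_average_flux(beta, s)
naive = effective_parameter_flux(beta, s)
corrected = second_order_flux(beta, s)
```

For the quadratic curve the second-order correction is exact, which is precisely the situation in which an "effective parameter" plus a variance-based correction can stand in for explicit sub-grid integration.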

  2. Long-Term Outcome of Steroid-Resistant Nephrotic Syndrome in Children.

    Science.gov (United States)

    Trautmann, Agnes; Schnaidt, Sven; Lipska-Ziętkiewicz, Beata S; Bodria, Monica; Ozaltin, Fatih; Emma, Francesco; Anarat, Ali; Melk, Anette; Azocar, Marta; Oh, Jun; Saeed, Bassam; Gheisari, Alaleh; Caliskan, Salim; Gellermann, Jutta; Higuita, Lina Maria Serna; Jankauskiene, Augustina; Drozdz, Dorota; Mir, Sevgi; Balat, Ayse; Szczepanska, Maria; Paripovic, Dusan; Zurowska, Alexandra; Bogdanovic, Radovan; Yilmaz, Alev; Ranchin, Bruno; Baskin, Esra; Erdogan, Ozlem; Remuzzi, Giuseppe; Firszt-Adamczyk, Agnieszka; Kuzma-Mroczkowska, Elzbieta; Litwin, Mieczyslaw; Murer, Luisa; Tkaczyk, Marcin; Jardim, Helena; Wasilewska, Anna; Printza, Nikoleta; Fidan, Kibriya; Simkova, Eva; Borzecka, Halina; Staude, Hagen; Hees, Katharina; Schaefer, Franz

    2017-10-01

    We investigated the value of genetic, histopathologic, and early treatment response information in prognosing long-term renal outcome in children with primary steroid-resistant nephrotic syndrome. From the PodoNet Registry, we obtained longitudinal clinical information for 1354 patients (disease onset at >3 months and children, respectively, with the highest remission rates achieved with calcineurin inhibitor-based protocols. Ten-year ESRD-free survival rates were 43%, 94%, and 72% in children with IIS resistance, complete remission, and partial remission, respectively; 27% in children with a genetic diagnosis; and 79% and 52% in children with histopathologic findings of minimal change glomerulopathy and FSGS, respectively. Five-year ESRD-free survival rate was 21% for diffuse mesangial sclerosis. IIS responsiveness, presence of a genetic diagnosis, and FSGS or diffuse mesangial sclerosis on initial biopsy as well as age, serum albumin concentration, and CKD stage at onset affected ESRD risk. Our findings suggest that responsiveness to initial IIS and detection of a hereditary podocytopathy are prognostic indicators of favorable and poor long-term outcome, respectively, in children with steroid-resistant nephrotic syndrome. Children with multidrug-resistant sporadic disease show better renal survival than those with genetic disease. Furthermore, histopathologic findings may retain prognostic relevance when a genetic diagnosis is established. Copyright © 2017 by the American Society of Nephrology.

  3. Does quasi-long-range order in the two-dimensional XY model really survive weak random phase fluctuations?

    International Nuclear Information System (INIS)

    Mudry, Christopher; Wen Xiaogang

    1999-01-01

    Effective theories for random critical points are usually non-unitary, and thus may contain relevant operators with negative scaling dimensions. To study the consequences of the existence of negative-dimensional operators, we consider the random-bond XY model. It has been argued that the XY model on a square lattice, when weakly perturbed by random phases, has a quasi-long-range ordered phase (the random spin wave phase) at sufficiently low temperatures. We show that infinitely many relevant perturbations to the proposed critical action for the random spin wave phase were omitted in all previous treatments. The physical origin of these perturbations is intimately related to the existence of broadly distributed correlation functions. We find that those relevant perturbations do enter the Renormalization Group equations, and affect critical behavior. This raises the possibility that the random XY model has no quasi-long-range ordered phase and no Kosterlitz-Thouless (KT) phase transition

  4. Exploiting multi-scale parallelism for large scale numerical modelling of laser wakefield accelerators

    International Nuclear Information System (INIS)

    Fonseca, R A; Vieira, J; Silva, L O; Fiuza, F; Davidson, A; Tsung, F S; Mori, W B

    2013-01-01

    A new generation of laser wakefield accelerators (LWFA), supported by the extreme accelerating fields generated in the interaction of PW-Class lasers and underdense targets, promises the production of high quality electron beams in short distances for multiple applications. Achieving this goal will rely heavily on numerical modelling to further understand the underlying physics and identify optimal regimes, but large scale modelling of these scenarios is computationally heavy and requires the efficient use of state-of-the-art petascale supercomputing systems. We discuss the main difficulties involved in running these simulations and the new developments implemented in the OSIRIS framework to address these issues, ranging from multi-dimensional dynamic load balancing and hybrid distributed/shared memory parallelism to the vectorization of the PIC algorithm. We present the results of the OASCR Joule Metric program on the issue of large scale modelling of LWFA, demonstrating speedups of over 1 order of magnitude on the same hardware. Finally, scalability to over ∼10⁶ cores and sustained performance over ∼2 PFlops is demonstrated, opening the way for large scale modelling of LWFA scenarios. (paper)

  5. Groundwater development stress: Global-scale indices compared to regional modeling

    Science.gov (United States)

    Alley, William; Clark, Brian R.; Ely, Matt; Faunt, Claudia

    2018-01-01

    The increased availability of global datasets and technologies such as global hydrologic models and the Gravity Recovery and Climate Experiment (GRACE) satellites have resulted in a growing number of global-scale assessments of water availability using simple indices of water stress. Developed initially for surface water, such indices are increasingly used to evaluate global groundwater resources. We compare indices of groundwater development stress for three major agricultural areas of the United States to information available from regional water budgets developed from detailed groundwater modeling. These comparisons illustrate the potential value of regional-scale analyses to supplement global hydrological models and GRACE analyses of groundwater depletion. Regional-scale analyses allow assessments of water stress that better account for scale effects, the dynamics of groundwater flow systems, the complexities of irrigated agricultural systems, and the laws, regulations, engineering, and socioeconomic factors that govern groundwater use. Strategic use of regional-scale models with global-scale analyses would greatly enhance knowledge of the global groundwater depletion problem.
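    The simplest of the stress indices contrasted here with regional modeling is an abstraction-to-recharge ratio. The sketch below shows that form; the class thresholds are illustrative and not taken from any specific global assessment.

```python
def groundwater_stress_index(withdrawal, recharge):
    """Development stress as the ratio of groundwater abstraction to recharge
    (both in the same units, e.g. mm/yr over the aquifer area)."""
    if recharge <= 0:
        raise ValueError("recharge must be positive")
    return withdrawal / recharge

def classify(stress, thresholds=(0.1, 0.2, 0.4)):
    """Map the stress ratio to a qualitative class; thresholds are illustrative."""
    labels = ("low", "moderate", "high", "very high")
    for t, label in zip(thresholds, labels):
        if stress < t:
            return label
    return labels[-1]

ratio = groundwater_stress_index(45.0, 150.0)  # e.g. 45 mm/yr pumped vs 150 mm/yr recharge
level = classify(ratio)
```

The paper's point is that such a single ratio hides exactly what regional models resolve: storage dynamics, return flows, and the spatial structure of pumping.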

  6. Solar radiation transmissivity of a single-span greenhouse through measurements on scale models

    International Nuclear Information System (INIS)

    Papadakis, G.; Manolakos, D.; Kyritsis, S.

    1998-01-01

    The solar transmissivity of a single-span greenhouse has been investigated experimentally using a scale model, of dimensions 40 cm width and 80 cm length. The solar transmissivity was measured at 48 positions on the “ground” surface of the scale model using 48 small silicon solar cells. The greenhouse model was positioned horizontally on a specially made goniometric mechanism. In this way, the greenhouse azimuth could be changed so that typical days of the year could be simulated using different combinations of greenhouse azimuth and the position of the sun in the sky. The measured solar transmissivity distribution at the “ground” surface and the average greenhouse solar transmissivity are presented and analysed, for characteristic days of the year, for winter and summer for a latitude of 37°58′ (Athens, Greece). It is shown that for the latitude of 37°58′ N during winter, the E–W orientation is preferable to the N–S one. The side walls, and especially the East and West ones for the E–W orientation, reduce considerably the greenhouse transmissivity at areas close to the walls for long periods of the day when the angle of incidence of the solar rays to these walls is large. (author)

  7. Optimization of large-scale heterogeneous system-of-systems models.

    Energy Technology Data Exchange (ETDEWEB)

    Parekh, Ojas; Watson, Jean-Paul; Phillips, Cynthia Ann; Siirola, John; Swiler, Laura Painton; Hough, Patricia Diane (Sandia National Laboratories, Livermore, CA); Lee, Herbert K. H. (University of California, Santa Cruz, Santa Cruz, CA); Hart, William Eugene; Gray, Genetha Anne (Sandia National Laboratories, Livermore, CA); Woodruff, David L. (University of California, Davis, Davis, CA)

    2012-01-01

    Decision makers increasingly rely on large-scale computational models to simulate and analyze complex man-made systems. For example, computational models of national infrastructures are being used to inform government policy, assess economic and national security risks, evaluate infrastructure interdependencies, and plan for the growth and evolution of infrastructure capabilities. A major challenge for decision makers is the analysis of national-scale models that are composed of interacting systems: effective integration of system models is difficult, there are many parameters to analyze in these systems, and fundamental modeling uncertainties complicate analysis. This project is developing optimization methods to effectively represent and analyze large-scale heterogeneous system of systems (HSoS) models, which have emerged as a promising approach for describing such complex man-made systems. These optimization methods enable decision makers to predict future system behavior, manage system risk, assess tradeoffs between system criteria, and identify critical modeling uncertainties.

  8. [Unfolding item response model using best-worst scaling].

    Science.gov (United States)

    Ikehara, Kazuya

    2015-02-01

    In attitude measurement and sensory tests, the unfolding model is typically used. In this model, response probability is formulated by the distance between the person and the stimulus. In this study, we proposed an unfolding item response model using best-worst scaling (BWU model), in which a person chooses the best and worst stimulus among repeatedly presented subsets of stimuli. We also formulated an unfolding model using best scaling (BU model), and compared the accuracy of estimates between the BU and BWU models. A simulation experiment showed that the BWU model performed much better than the BU model in terms of bias and root mean square errors of estimates. With reference to Usami (2011), the proposed models were applied to actual data to measure attitudes toward tardiness. Results indicated high similarity between stimuli estimates generated with the proposed models and those of Usami (2011).
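    The core of such a model can be sketched in a few lines. The squared-distance utility and the softmax best/worst choice rule below are one common MaxDiff-style formulation, not necessarily the exact parameterization of the proposed BWU model.

```python
import math

def utilities(theta, locations):
    """Unfolding utilities: preference falls with squared person-stimulus distance."""
    return [-(theta - b) ** 2 for b in locations]

def softmax(xs):
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def best_worst_probs(theta, locations):
    """Choice probabilities for the 'best' and 'worst' picks from one subset:
    P(best) = softmax(u), P(worst) = softmax(-u)."""
    u = utilities(theta, locations)
    return softmax(u), softmax([-x for x in u])

# person at theta = 0 judging three stimuli located at -2, 0.5 and 3
p_best, p_worst = best_worst_probs(0.0, [-2.0, 0.5, 3.0])
```

The closest stimulus is the most likely "best" pick and the farthest the most likely "worst" pick, which is the unfolding intuition the BWU model exploits to extract more information per presented subset than best-only scaling.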

  9. Building long-term and high spatio-temporal resolution precipitation and air temperature reanalyses by mixing local observations and global atmospheric reanalyses: the ANATEM model

    Directory of Open Access Journals (Sweden)

    A. Kuentz

    2015-06-01

    The ANATEM model has also been evaluated at the regional scale against independent long-term time series and was able to capture regional low-frequency variability over more than a century (1883–2010).

  10. Accounting for small scale heterogeneity in ecohydrologic watershed models

    Science.gov (United States)

    Burke, W.; Tague, C.

    2017-12-01

    Spatially distributed ecohydrologic models are inherently constrained by the spatial resolution of their smallest units, below which land and processes are assumed to be homogeneous. At coarse scales, heterogeneity is often accounted for by computing stores and fluxes of interest over a distribution of land cover types (or other sources of heterogeneity) within spatially explicit modeling units. However, this approach ignores spatial organization and the lateral transfer of water and materials downslope. The challenge is to account both for the role of flow network topology and for fine-scale heterogeneity. We present a new approach that defines two levels of spatial aggregation and integrates a spatially explicit network approach with a flexible representation of finer-scale aspatial heterogeneity. Critically, this solution does not simply increase the resolution of the smallest spatial unit, and so, by comparison, results in improved computational efficiency. The approach is demonstrated by adapting the Regional Hydro-Ecologic Simulation System (RHESSys), an ecohydrologic model widely used to simulate climate, land use, and land management impacts. We illustrate the utility of our approach by showing how the model can be used to better characterize forest thinning impacts on ecohydrology. Forest thinning is typically done at the scale of individual trees, and yet management responses of interest include impacts on watershed-scale hydrology and on downslope riparian vegetation. Our approach allows us to characterize the variability in tree size/carbon reduction and water transfers between neighboring trees while still capturing hillslope- to watershed-scale effects. Our illustrative example demonstrates that accounting for these fine-scale effects can substantially alter model estimates, in some cases shifting the impacts of thinning on downslope water availability from increases to decreases. We conclude by describing other use cases that may benefit from this approach.
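    The two levels of aggregation can be sketched with a toy routing chain: patches are spatially explicit and route water downslope, while cover types inside a patch are aspatial area fractions. The structure and runoff coefficients below are hypothetical, not RHESSys code.

```python
def patch_outflow(inflow, rain, types):
    """Water leaving one patch: the fraction-weighted response of its
    aspatial cover types, each a (area_fraction, runoff_coefficient) pair."""
    return sum(frac * coeff * (inflow + rain) for frac, coeff in types)

def route_downslope(patches, rain):
    """Route lateral flow through a chain of patches, upslope to downslope."""
    inflow, outflows = 0.0, []
    for types in patches:
        inflow = patch_outflow(inflow, rain, types)
        outflows.append(inflow)
    return outflows

# three patches; "thinning" the middle one shifts 40% of its area from dense
# canopy (runoff coefficient 0.3) to gaps with less interception (0.5)
unthinned = [[(1.0, 0.3)], [(1.0, 0.3)], [(1.0, 0.3)]]
thinned = [[(1.0, 0.3)], [(0.6, 0.3), (0.4, 0.5)], [(1.0, 0.3)]]
q_un = route_downslope(unthinned, rain=10.0)[-1]
q_th = route_downslope(thinned, rain=10.0)[-1]
```

Even this toy shows the key property: a sub-patch (aspatial) change propagates through the explicit flow network and alters the downslope outlet flux, without the model ever resolving individual trees as spatial units.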

  11. Effects of coarse-graining on the scaling behavior of long-range correlated and anti-correlated signals

    OpenAIRE

    Xu, Yinlin; Ma, Qianli D.Y.; Schmitt, Daniel T.; Bernaola-Galván, Pedro; Ivanov, Plamen Ch.

    2011-01-01

    We investigate how various coarse-graining methods affect the scaling properties of long-range power-law correlated and anti-correlated signals, quantified by the detrended fluctuation analysis. Specifically, for coarse-graining in the magnitude of a signal, we consider (i) the Floor, (ii) the Symmetry and (iii) the Centro-Symmetry coarse-graining methods. We find that for anti-correlated signals coarse-graining in the magnitude leads to a crossover to random behavior at large scales, and th...
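    The Floor method, the simplest of the three magnitude coarse-graining schemes named above, amounts to amplitude quantization: each sample is replaced by the lower edge of its amplitude bin, discarding within-bin detail (the Symmetry and Centro-Symmetry variants arrange the bins differently around zero; only Floor is sketched here).

```python
import numpy as np

def floor_coarse_grain(x, delta):
    """Floor coarse-graining: replace each sample by the lower edge of its
    amplitude bin of width delta."""
    return delta * np.floor(x / delta)

rng = np.random.default_rng(1)
x = rng.standard_normal(1000)   # stand-in for a long-range correlated signal
y = floor_coarse_grain(x, delta=0.5)
err = x - y                     # discarded within-bin detail, in [0, delta)
```

It is this discarded fine-amplitude detail, acting like added noise, that produces the crossover to random behavior the paper reports for anti-correlated signals.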

  12. Short- and Long-Term Feedbacks on Vegetation Water Use: Unifying Evidence from Observations and Modeling

    Science.gov (United States)

    Mackay, D. S.

    2001-05-01

    Recent efforts to measure and model the interacting influences of climate, soil, and vegetation on soil water and nutrient dynamics have identified numerous important feedbacks that produce nonlinear responses. In particular, plant physiological factors that control rates of transpiration respond to soil water deficits and vapor pressure deficits (VPD) in the short-term, and to climate, nutrient cycling and disturbance in the long-term. The starting point of this presentation is the observation that in many systems, in particular forest ecosystems, conservative water use emerges as a result of short-term closure of stomata in response to high evaporative demand, and long-term vegetative canopy development under nutrient limiting conditions. Evidence for important short-term controls is presented from sap flux measurements of stand transpiration, remote sensing, and modeling of transpiration through a combination of physically-based modeling and Monte Carlo analysis. A common result is a strong association between stomatal conductance (gs) and the negative evaporative gain (∂ gs/∂ VPD) associated with the sensitivity of stomatal closure to rates of water loss. The importance of this association from the standpoint of modeling transpiration depends on the degree of canopy-atmosphere coupling. This suggests possible simplifications to future canopy component models for use in watershed and larger-scale hydrologic models for short-term processes. However, further results are presented from theoretical modeling, which suggest that feedbacks between hydrology and vegetation in current long-term (inter-annual to century) models may be too simple, as they do not capture the spatially variable nature of slow nutrient cycling in response to soil water dynamics and site history. Memory effects in the soil nutrient pools can leave lasting effects on more rapid processes associated with soil, vegetation, atmosphere coupling.
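    The association between gs and its VPD sensitivity described above is often summarized with the empirical form of Oren et al. (1999), gs = gs_ref - m*ln(VPD), where the sensitivity m is commonly reported to be about 0.6 times gs_ref. The sketch below uses that published form with illustrative numbers; it is not the presentation's own model.

```python
import math

def stomatal_conductance(vpd, gs_ref, m):
    """Empirical response gs = gs_ref - m*ln(VPD); gs_ref is the conductance
    at VPD = 1 kPa and m the sensitivity -dgs/dlnVPD (illustrative units)."""
    return max(gs_ref - m * math.log(vpd), 0.0)

def sensitivity(vpd, gs_ref, m):
    """Negative evaporative gain -dgs/dVPD at a given VPD (positive number)."""
    return m / vpd if stomatal_conductance(vpd, gs_ref, m) > 0 else 0.0

gs_ref = 100.0      # mmol m^-2 s^-1 at 1 kPa (illustrative)
m = 0.6 * gs_ref    # the commonly reported proportionality m ~ 0.6 * gs_ref
gs_low = stomatal_conductance(1.0, gs_ref, m)   # humid conditions
gs_high = stomatal_conductance(3.0, gs_ref, m)  # high evaporative demand
```

The proportionality between m and gs_ref is exactly the kind of strong gs-to-sensitivity association that motivates the simplified canopy components discussed above.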

  13. Site-scale groundwater flow modelling of Beberg

    International Nuclear Information System (INIS)

    Gylling, B.; Walker, D.; Hartley, L.

    1999-08-01

    The Swedish Nuclear Fuel and Waste Management Company (SKB) Safety Report for 1997 (SR 97) study is a comprehensive performance assessment illustrating the results for three hypothetical repositories in Sweden. In support of SR 97, this study examines the hydrogeologic modelling of the hypothetical site called Beberg, which adopts input parameters from the SKB study site near Finnsjoen, in central Sweden. This study uses a nested modelling approach, with a deterministic regional model providing boundary conditions to a site-scale stochastic continuum model. The model is run in Monte Carlo fashion to propagate the variability of the hydraulic conductivity to the advective travel paths from representative canister positions. A series of variant cases addresses uncertainties in the inference of parameters and the boundary conditions. The study uses HYDRASTAR, the SKB stochastic continuum (SC) groundwater modelling program, to compute the heads, the Darcy velocities at each representative canister position, and the advective travel times and paths through the geosphere. The Base Case simulation takes its constant-head boundary conditions from a modified version of the deterministic regional-scale model of Hartley et al. The flow balance between the regional and site-scale models suggests that the nested modelling conserves mass only in a general sense, and that the upscaling is only approximately valid. The results for 100 realisations of 120 starting positions, a flow porosity of ε_f = 10⁻⁴, and a flow-wetted surface of a_r = 1.0 m²/(m³ rock) suggest the following statistics for the Base Case: the median travel time is 56 years; the median canister flux is 1.2 × 10⁻³ m/year; the median F-ratio is 5.6 × 10⁵ year/m. The travel times, flow paths and exit locations were compatible with the observations on site, approximate scoping calculations and the results of related modelling studies. Variability within realisations indicates that the change in hydraulic gradient
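    The travel-time and F-ratio statistics quoted above are path integrals along each advective trajectory: travel time accumulates porosity times length over flux, and the F-ratio accumulates flow-wetted surface times length over flux. The three-segment path below is hypothetical (made-up lengths and Darcy fluxes), but uses the Base Case flow porosity and flow-wetted surface.

```python
def travel_time(segments, porosity):
    """Advective travel time: sum of porosity * length / Darcy-flux per segment."""
    return sum(porosity * dl / q for dl, q in segments)

def f_ratio(segments, a_r):
    """Transport resistance (F-ratio): sum of a_r * length / Darcy-flux."""
    return sum(a_r * dl / q for dl, q in segments)

# hypothetical 3-segment path: (length in m, Darcy flux in m/year) pairs
path = [(100.0, 1.2e-3), (200.0, 2.0e-3), (150.0, 0.8e-3)]
t_adv = travel_time(path, porosity=1.0e-4)  # years
F = f_ratio(path, a_r=1.0)                  # year/m
```

With these inputs the sketch yields a travel time of tens of years and an F-ratio of a few times 10⁵ year/m, the same orders of magnitude as the Base Case medians.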

  14. LOSCAR: Long-term Ocean-atmosphere-Sediment CArbon cycle Reservoir Model v2.0.4

    Directory of Open Access Journals (Sweden)

    R. E. Zeebe

    2012-01-01

    The LOSCAR model is designed to efficiently compute the partitioning of carbon between ocean, atmosphere, and sediments on time scales ranging from centuries to millions of years. While a variety of computationally inexpensive carbon cycle models are already available, many are missing a critical sediment component, which is indispensable for long-term integrations. One of LOSCAR's strengths is the coupling of ocean-atmosphere routines to a computationally efficient sediment module. This allows, for instance, adequate computation of CaCO3 dissolution, calcite compensation, and long-term carbon cycle fluxes, including weathering of carbonate and silicate rocks. The ocean component includes various biogeochemical tracers such as total carbon, alkalinity, phosphate, oxygen, and stable carbon isotopes. LOSCAR's configuration of ocean geometry is flexible and allows for easy switching between modern and paleo-versions. We have previously published applications of the model tackling future projections of ocean chemistry and weathering, pCO2 sensitivity to carbon cycle perturbations throughout the Cenozoic, and carbon/calcium cycling during the Paleocene-Eocene Thermal Maximum. The focus of the present contribution is the detailed description of the model including numerical architecture, processes and parameterizations, tuning, and examples of input and output. Typical CPU integration times of LOSCAR are of order seconds for several thousand model years on current standard desktop machines. The LOSCAR source code in C can be obtained from the author by sending a request to loscar.model@gmail.com.
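    A radically stripped-down analogue of the ocean-atmosphere part, two reservoirs relaxing toward a fixed equilibrium partitioning, conveys how such box models integrate carbon fluxes over long times. This is a toy sketch, not LOSCAR's actual equations; the reservoir sizes are round pre-industrial figures and the rate constant is arbitrary.

```python
def simulate_two_box(c_atm, c_ocn, k_exchange, ratio_eq, dt, steps):
    """Toy two-reservoir exchange: the atmosphere relaxes toward a fixed
    equilibrium partitioning with the ocean; total carbon is conserved
    because every flux leaves one box and enters the other."""
    for _ in range(steps):
        flux = k_exchange * (c_atm - ratio_eq * c_ocn)  # >0: atmosphere -> ocean
        c_atm -= flux * dt
        c_ocn += flux * dt
    return c_atm, c_ocn

# a 1000 Gt C pulse on top of a 600/38000 Gt C pre-industrial partitioning
a, o = simulate_two_box(c_atm=1600.0, c_ocn=38000.0,
                        k_exchange=0.1, ratio_eq=600.0 / 38000.0,
                        dt=1.0, steps=2000)
```

Most of the pulse ends up in the large ocean reservoir, illustrating why the sediment module matters in the real model: on still longer time scales, carbonate dissolution and weathering shift the equilibrium itself.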

  15. Multi-scale modeling of dispersed gas-liquid two-phase flow

    NARCIS (Netherlands)

    Deen, N.G.; Sint Annaland, van M.; Kuipers, J.A.M.

    2004-01-01

    In this work the concept of multi-scale modeling is demonstrated. The idea of this approach is to use different levels of modeling, each developed to study phenomena at a certain length scale. Information obtained at the level of small length scales can be used to provide closure information at the

  16. Dynamic subgrid scale model of large eddy simulation of cross bundle flows

    International Nuclear Information System (INIS)

    Hassan, Y.A.; Barsamian, H.R.

    1996-01-01

    The dynamic subgrid scale closure model of Germano et al. (1991) is used in the large eddy simulation code GUST for incompressible isothermal flows. Tube bundle geometries of staggered and non-staggered arrays are considered in deep bundle simulations. The advantage of the dynamic subgrid scale model is the exclusion of an input model coefficient. The model coefficient is evaluated dynamically for each nodal location in the flow domain. Dynamic subgrid scale results are obtained in the form of power spectral densities and flow visualization of turbulent characteristics. Comparisons are performed among the dynamic subgrid scale model, the Smagorinsky eddy viscosity model (which is used as the base model for the dynamic subgrid scale model) and available experimental data. Spectral results of the dynamic subgrid scale model correlate better with experimental data. Satisfactory turbulence characteristics are observed through flow visualization
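    The defining step of the dynamic procedure is Lilly's least-squares evaluation of the model coefficient from the Germano identity, C = (L_ij M_ij)/(M_ij M_ij), which replaces the fixed input constant of the base Smagorinsky model. The sketch below shows that step in isolation, with C absorbing the square of the usual Smagorinsky constant; it is an illustration of the formula, not GUST code.

```python
import numpy as np

def dynamic_coefficient(L, M):
    """Pointwise least-squares coefficient C = (L_ij M_ij) / (M_ij M_ij).
    L is the resolved (Leonard) stress from the Germano identity, M the
    difference of modelled stresses between test and grid filters;
    both have shape (..., 3, 3)."""
    num = np.einsum('...ij,...ij->...', L, M)
    den = np.einsum('...ij,...ij->...', M, M)
    return num / np.maximum(den, 1e-30)

def smagorinsky_viscosity(C, delta, S):
    """Eddy viscosity nu_t = C * delta**2 * |S|, with |S| = sqrt(2 S_ij S_ij)
    and C playing the role of the squared Smagorinsky constant."""
    Smag = np.sqrt(2.0 * np.einsum('...ij,...ij->...', S, S))
    return C * delta**2 * Smag

# consistency check: if L is exactly proportional to M, C recovers the ratio
rng = np.random.default_rng(2)
M = rng.standard_normal((4, 4, 3, 3))
C = dynamic_coefficient(0.0256 * M, M)   # 0.0256 = 0.16**2
```

Because C is computed pointwise, the coefficient can vanish near walls or in laminar regions, which is the behavior the abstract credits for the improved spectra.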

  17. Multi-scales modeling of reactive transport mechanisms. Impact on petrophysical properties during CO2 storage

    International Nuclear Information System (INIS)

    Varloteaux, C.

    2012-01-01

    The geo-sequestration of carbon dioxide (CO₂) is an attractive option to reduce the emission of greenhouse gases. Within carbonate reservoirs, acidification of the brine in place can occur during CO₂ injection. This acidification leads to mineral dissolution, which can modify the transport properties of a solute in porous media. The aim of this study is to quantify the impact of reactive transport on solute distribution and on the structural modification induced by the reaction, from the pore to the reservoir scale. This study is focused on the reactive transport problem in the case of single-phase flow in the limit of long times. To do so, we used a multi-scale up-scaling method that takes into account (i) the local scale, where flow, reaction and transport are known; (ii) the pore scale, where reactive transport is addressed by using averaged formulations of the local equations; (iii) the Darcy scale (also called the core scale), where the structure of the rock is taken into account by using a three-dimensional network of pore-bodies connected by pore-throats; and (iv) the reservoir scale, where physical phenomena within each cell of the reservoir model are taken into account by introducing macroscopic coefficients deduced from the study of these phenomena at the Darcy scale, such as the permeability, the apparent reaction rate, and the solute apparent velocity and dispersion. (author)
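    At the reservoir scale, the structural feedback of dissolution on flow is often summarized by a porosity-permeability closure. The Kozeny-Carman form below is one common choice of such a macroscopic coefficient; the thesis's pore-network approach computes this evolution structurally instead, so this is only an illustrative stand-in with made-up values.

```python
def kozeny_carman_update(k0, phi0, phi):
    """Porosity-permeability closure k/k0 = (phi/phi0)**3 * ((1-phi0)/(1-phi))**2:
    dissolution raises porosity and, more strongly, permeability."""
    return k0 * (phi / phi0) ** 3 * ((1.0 - phi0) / (1.0 - phi)) ** 2

# dissolution raises porosity from 0.15 to 0.18 (illustrative values)
k_new = kozeny_carman_update(k0=1.0e-13, phi0=0.15, phi=0.18)
```

A 20% porosity increase nearly doubles the permeability here, which is why resolving where dissolution occurs (wormholing versus uniform dissolution) matters for the upscaled coefficients.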

  18. Incorporating Protein Biosynthesis into the Saccharomyces cerevisiae Genome-scale Metabolic Model

    DEFF Research Database (Denmark)

    Olivares Hernandez, Roberto

    Based on the stoichiometric biochemical equations that occur in the cell, genome-scale metabolic models can quantify the metabolic fluxes, which are regarded as the final representation of the physiological state of the cell. For Saccharomyces cerevisiae the genome-scale model has been construc...

  19. Modelling Changes in the Unconditional Variance of Long Stock Return Series

    DEFF Research Database (Denmark)

    Amado, Cristina; Teräsvirta, Timo

    In this paper we develop a testing and modelling procedure for describing the long-term volatility movements over very long return series. For the purpose, we assume that volatility is multiplicatively decomposed into a conditional and an unconditional component as in Amado and Teräsvirta (2011)… show that the long-memory property in volatility may be explained by ignored changes in the unconditional variance of the long series. Finally, based on a formal statistical test we find evidence of the superiority of volatility forecast accuracy of the new model over the GJR-GARCH model at all… horizons for a subset of the long return series…

  20. Modelling changes in the unconditional variance of long stock return series

    DEFF Research Database (Denmark)

    Amado, Cristina; Teräsvirta, Timo

    2014-01-01

    In this paper we develop a testing and modelling procedure for describing the long-term volatility movements over very long daily return series. For this purpose we assume that volatility is multiplicatively decomposed into a conditional and an unconditional component as in Amado and Teräsvirta… that the apparent long memory property in volatility may be interpreted as changes in the unconditional variance of the long series. Finally, based on a formal statistical test we find evidence of the superiority of volatility forecasting accuracy of the new model over the GJR-GARCH model at all horizons for eight… subsets of the long return series…
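    The multiplicative decomposition can be illustrated with a small simulator: sigma_t^2 = h_t * g_t, where h_t is a standard GARCH(1,1) recursion driven by shocks rescaled by g_t, and g_t is a deterministic smooth transition in rescaled time. The logistic form of g and all parameter values below are illustrative choices, not the paper's estimates.

```python
import math
import random

def g_unconditional(t, T, baseline=1.0, shift=1.5, midpoint=0.5, speed=10.0):
    """Deterministic unconditional component: a logistic transition in
    rescaled time t/T (one convenient smooth-transition form)."""
    x = t / T
    return baseline + shift / (1.0 + math.exp(-speed * (x - midpoint)))

def simulate(T, omega=0.02, alpha=0.09, beta=0.89, seed=3):
    """Returns with sigma_t^2 = h_t * g_t: a GARCH(1,1) conditional component
    h_t driven by rescaled shocks, multiplied by the deterministic g_t."""
    rng = random.Random(seed)
    h, phi_prev, returns = 1.0, 0.0, []
    for t in range(T):
        h = omega + alpha * phi_prev ** 2 + beta * h
        g = g_unconditional(t, T)
        r = math.sqrt(h * g) * rng.gauss(0.0, 1.0)
        phi_prev = r / math.sqrt(g)   # the conditional recursion sees r/sqrt(g)
        returns.append(r)
    return returns

rs = simulate(500)
```

Fitting a plain GARCH model to such a series, while ignoring the slow drift in g_t, produces the spuriously high persistence that the paper reinterprets as changes in the unconditional variance.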

  1. [Modeling continuous scaling of NDVI based on fractal theory].

    Science.gov (United States)

    Luan, Hai-Jun; Tian, Qing-Jiu; Yu, Tao; Hu, Xin-Li; Huang, Yan; Du, Ling-Tong; Zhao, Li-Min; Wei, Xi; Han, Jie; Zhang, Zhou-Wei; Li, Shao-Peng

    2013-07-01

    Scale effect is one of the most important scientific problems of remote sensing. The scale effect in quantitative remote sensing can be used to study the relationship between retrievals from images of different resolutions, and its study has become an effective way to confront challenges such as the validation of quantitative remote sensing products. Traditional up-scaling methods cannot describe the scale-changing features of retrievals across an entire series of scales; meanwhile, they face serious parameter-correction issues because of the variation of imaging parameters among different sensors, such as geometrical correction, spectral correction, etc. Utilizing a single-sensor image, fractal methodology was employed to solve these problems. Taking NDVI (computed from land surface radiance) as an example and based on an Enhanced Thematic Mapper Plus (ETM+) image, a scheme was proposed to model the continuous scaling of retrievals. The experimental results indicated that: (1) for NDVI, a scale effect exists, and it can be described by a fractal model of continuous scaling; (2) the fractal method is suitable for the validation of NDVI. All of this proved that fractal analysis is an effective methodology for studying the scaling of quantitative remote sensing.
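    The origin of the NDVI scale effect is the nonlinearity of the index in the underlying radiances: aggregating the red and NIR bands first and then computing NDVI does not equal aggregating fine-scale NDVI. The sketch below demonstrates that discrepancy on synthetic reflectances; the fractal model of the paper then describes how it varies continuously with scale.

```python
import numpy as np

def ndvi(nir, red):
    """Normalized difference vegetation index."""
    return (nir - red) / (nir + red)

def block_mean(a, k):
    """Aggregate a 2-D field by k x k block averaging (a coarser sensor pixel)."""
    h, w = a.shape
    return a[:h - h % k, :w - w % k].reshape(h // k, k, -1, k).mean(axis=(1, 3))

rng = np.random.default_rng(4)
red = rng.uniform(0.05, 0.30, (64, 64))   # synthetic red reflectance
nir = rng.uniform(0.30, 0.60, (64, 64))   # synthetic NIR reflectance

scales, scale_effect = [1, 2, 4, 8, 16], []
for k in scales:
    coarse = ndvi(block_mean(nir, k), block_mean(red, k)).mean()  # aggregate bands first
    fine = block_mean(ndvi(nir, red), k).mean()                   # aggregate NDVI itself
    scale_effect.append(coarse - fine)
```

At the native scale the two routes agree exactly; at every coarser scale they diverge, which is the scale effect that a continuous-scaling model must capture.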

  2. Evaluating cloud processes in large-scale models: Of idealized case studies, parameterization testbeds and single-column modelling on climate time-scales

    Science.gov (United States)

    Neggers, Roel

    2016-04-01

    Boundary-layer schemes have always formed an integral part of General Circulation Models (GCMs) used for numerical weather and climate prediction. The spatial and temporal scales associated with boundary-layer processes and clouds are typically much smaller than those at which GCMs are discretized, which makes their representation through parameterization a necessity. The need for generally applicable boundary-layer parameterizations has motivated many scientific studies, which in effect has created its own active research field in the atmospheric sciences. Of particular interest has been the evaluation of boundary-layer schemes at "process level". This means that parameterized physics are studied in isolation from the larger-scale circulation, using prescribed forcings and excluding any upscale interaction. Although feedbacks are thus prevented, the benefit is an enhanced model transparency, which might aid an investigator in identifying model errors and understanding model behavior. The popularity and success of the process-level approach is demonstrated by the many past and ongoing model intercomparison studies that have been organized by initiatives such as GCSS/GASS. A common thread in the results of these studies is that although most schemes somehow manage to capture first-order aspects of boundary-layer cloud fields, there certainly remains room for improvement in many areas. Only too often are boundary-layer parameterizations still found to be at the heart of problems in large-scale models, negatively affecting the forecast skill of NWP models or causing uncertainty in numerical predictions of future climate. How to break this parameterization "deadlock" remains an open problem. This presentation attempts to give an overview of the various existing methods for the process-level evaluation of boundary-layer physics in large-scale models. This includes i) idealized case studies, ii) longer-term evaluation at permanent meteorological sites (the testbed approach

  3. 0-d energetics scaling models for Z-pinch-driven hohlraums

    International Nuclear Information System (INIS)

    CUNEO, MICHAEL E.; VESEY, ROGER A.; HAMMER, J.H.; PORTER, JOHN L.

    2000-01-01

    Wire array Z-pinches on the Z accelerator provide the most intense laboratory source of soft x-rays in the world. The unique combination of a highly Planckian radiation source with high x-ray production efficiency (15% wall plug), large x-ray powers and energies (>150 TW, ≥1 MJ in 7 ns), large characteristic hohlraum volumes (0.5 to >10 cm³), and long pulse-lengths (5 to 20 ns) may make Z-pinches a good match to the requirements for driving high-yield-scale ICF capsules with adequate radiation symmetry and margin. The Z-pinch-driven hohlraum approach of Hammer and Porter [Phys. Plasmas 6, 2129 (1999)] may provide a conservative and robust solution to the requirements for high yield, and is currently being studied on the Z accelerator. This paper describes a multiple-region, 0-d hohlraum energetics model for Z-pinch-driven hohlraums in four configurations. The authors observe consistency between the models and the measured x-ray powers and hohlraum wall temperatures to within ±20% in flux for the four configurations.

  4. Multi-Scale Modelling of Deformation and Fracture in a Biomimetic Apatite-Protein Composite: Molecular-Scale Processes Lead to Resilience at the μm-Scale.

    Directory of Open Access Journals (Sweden)

    Dirk Zahn

    Fracture mechanisms of an enamel-like hydroxyapatite-collagen composite model are elaborated by means of molecular and coarse-grained dynamics simulation. Using fully atomistic models, we uncover molecular-scale plastic deformation and fracture processes initiated at the organic-inorganic interface. Furthermore, coarse-grained models are developed to investigate fracture patterns at the μm-scale. At the meso-scale, micro-fractures are shown to reduce local stress and thus prevent material failure after loading beyond the elastic limit. On the basis of our multi-scale simulation approach, we provide a molecular scale rationalization of this phenomenon, which seems key to the resilience of hierarchical biominerals, including teeth and bone.

  5. Multi-scale modeling in morphogenesis: a critical analysis of the cellular Potts model.

    Directory of Open Access Journals (Sweden)

    Anja Voss-Böhme

    Cellular Potts models (CPMs) are used as a modeling framework to elucidate mechanisms of biological development. They allow a spatial resolution below the cellular scale and are applied particularly when problems are studied where multiple spatial and temporal scales are involved. Despite the increasing usage of CPMs in theoretical biology, this model class has received little attention from mathematical theory. To narrow this gap, the CPMs are subjected to a theoretical study here. It is asked to which extent the updating rules establish an appropriate dynamical model of intercellular interactions and what characterizes the principal behavior at different time scales. It is shown that the long-time behavior of a CPM is degenerate in the sense that the cells consecutively die out, independent of the specific interdependence structure that characterizes the model. While CPMs are naturally defined on finite, spatially bounded lattices, possible extensions to spatially unbounded systems are explored to assess to which extent spatio-temporal limit procedures can be applied to describe the emergent behavior at the tissue scale. To elucidate the mechanistic structure of CPMs, the model class is integrated into a general multiscale framework. It is shown that the central role of the surface fluctuations, which subsume several cellular and intercellular factors, entails substantial limitations for a CPM's exploitation both as a mechanistic and as a phenomenological model.

  6. Multiphysics pore-scale model for the rehydration of porous foods

    NARCIS (Netherlands)

    Sman, van der R.G.M.; Vergeldt, F.J.; As, van H.; Dalen, van G.; Voda, A.; Duynhoven, van J.P.M.

    2014-01-01

    In this paper we present a pore-scale model describing the multiphysics occurring during the rehydration of freeze-dried vegetables. This pore-scale model is part of a multiscale simulation model, which should explain the effect of microstructure and pre-treatments on the rehydration rate.

  7. Feasibility, reliability, and validity of the Pediatric Quality of Life Inventory ™ generic core scales, cancer module, and multidimensional fatigue scale in long-term adult survivors of pediatric cancer.

    Science.gov (United States)

    Robert, Rhonda S; Paxton, Raheem J; Palla, Shana L; Yang, Grace; Askins, Martha A; Joy, Shaini E; Ater, Joann L

    2012-10-01

    Most health-related quality of life assessments are designed for either children or adults and have not been evaluated for adolescent and young adult survivors of pediatric cancer. The objective of this study was to examine the feasibility, reliability, and validity of the Pediatric Quality of Life Inventory (PedsQL™) Generic Core Scales, Cancer Module, and Multidimensional Fatigue Scale in adult survivors of pediatric cancer. Adult survivors (n = 64; mean age 35 years; >2 years after treatment) completed the PedsQL™ Generic Core Scales, Cancer Module, and Multidimensional Fatigue Scale. Feasibility was examined with floor and ceiling effects, and internal consistency was determined by Cronbach's coefficient alpha calculations. Inter-factor correlations were also assessed. Significant ceiling effects were observed for the scales of social function, nausea, procedural anxiety, treatment anxiety, and communication. Internal consistency for all subscales was within the recommended ranges (α ≥ 0.70). Moderate to strong correlations were observed between most Cancer Module and Generic Core Scales (r = 0.25 to r = 0.76) and between the Multidimensional Fatigue Scale and Generic Core Scales (r = 0.37 to r = 0.73). The PedsQL™ Generic Core Scales, Cancer Module, and Multidimensional Fatigue Scale appear to be feasible for an older population of pediatric cancer survivors; however, some of the Cancer Module scales (nausea, procedural/treatment anxiety, and communication) were deemed not relevant for long-term survivors. More information is needed to determine whether the issues addressed by these modules are meaningful to long-term adult survivors of pediatric cancers. Copyright © 2012 Wiley Periodicals, Inc.
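
The reliability statistics reported above follow standard psychometric formulas; here is a minimal sketch of Cronbach's alpha and a ceiling-effect rate (an illustration, not the study's code):

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's coefficient alpha for an (n_respondents, n_items) matrix:
    alpha = k/(k-1) * (1 - sum(item variances) / variance(total score))."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1.0 - item_vars.sum() / total_var)

def ceiling_rate(scores, max_score):
    """Fraction of respondents scoring at the scale maximum (ceiling effect)."""
    return float(np.mean(np.asarray(scores) == max_score))
```

Perfectly correlated items give alpha of 1; values at or above 0.70, as in the study, are conventionally taken as acceptable internal consistency.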

  8. 'Time is costly': modelling the macroeconomic impact of scaling-up antiretroviral treatment in sub-Saharan Africa.

    Science.gov (United States)

    Ventelou, Bruno; Moatti, Jean-Paul; Videau, Yann; Kazatchkine, Michel

    2008-01-02

    Macroeconomic policy requirements may limit the capacity of national and international policy-makers to allocate sufficient resources for scaling-up access to HIV care and treatment in developing countries. An endogenous growth model, which takes into account the evolution of society's human capital, was used to assess the macroeconomic impact of policies aimed at scaling-up access to HIV/AIDS treatment in six African countries (Angola, Benin, Cameroon, Central African Republic, Ivory Coast and Zimbabwe). The model results showed that scaling-up access to treatment in the affected population would limit gross domestic product losses due to AIDS, although to a different degree from country to country. In our simulated scenarios of access to antiretroviral therapy, only 10.3% of the AIDS shock is counterbalanced in Zimbabwe, against 85.2% in Angola and even 100.0% in Benin (a total recovery). For four of the six countries (Angola, Benin, Cameroon, Ivory Coast), the macroeconomic gains of scaling-up would become potentially superior to the associated costs in 2010. Despite the variability of HIV prevalence rates between countries, macroeconomic estimates strongly suggest that a massive investment in scaling-up access to HIV treatment may efficiently counteract the detrimental long-term impact of the HIV pandemic on economic growth, to the extent that the AIDS shock has not already driven the economy beyond an irreversible 'no-development epidemiological trap'.

  9. Calibration of the Site-Scale Saturated Zone Flow Model

    International Nuclear Information System (INIS)

    Zyvoloski, G. A.

    2001-01-01

    The purpose of the flow calibration analysis work is to provide Performance Assessment (PA) with the calibrated site-scale saturated zone (SZ) flow model that will be used to make radionuclide transport calculations. As such, it is one of the most important models developed in the Yucca Mountain project. This model will be a culmination of much of our knowledge of the SZ flow system. The objective of this study is to provide a defensible site-scale SZ flow and transport model that can be used for assessing total system performance. A defensible model would include the geologic and hydrologic data that are used to form the hydrogeologic framework model; it would also include hydrochemical information to infer transport pathways, in-situ permeability measurements, and water level and head measurements. In addition, the model should include information on major model sensitivities, especially those that affect calibration, the direction of transport pathways, and travel times. Finally, if warranted, alternative calibrations representing different conceptual models should be included. To obtain a defensible model, all available data should be used (or at least considered) to obtain a calibrated model. The site-scale SZ model was calibrated using measured and model-generated water levels and hydraulic head data, specific discharge calculations, and flux comparisons along several of the boundaries. Model validity was established by comparing model-generated permeabilities with the permeability data from field and laboratory tests; by comparing fluid pathlines obtained from the SZ flow model with those inferred from hydrochemical data; and by comparing the upward gradient generated with the model with that observed in the field. This analysis is governed by the Office of Civilian Radioactive Waste Management (OCRWM) Analysis and Modeling Report (AMR) Development Plan "Calibration of the Site-Scale Saturated Zone Flow Model" (CRWMS M&O 1999a).

  10. A comparison of large scale changes in surface humidity over land in observations and CMIP3 general circulation models

    International Nuclear Information System (INIS)

    Willett, Katharine M; Thorne, Peter W; Jones, Philip D; Gillett, Nathan P

    2010-01-01

    Observed changes in the HadCRUH global land surface specific humidity and CRUTEM3 surface temperature from 1973 to 1999 are compared to CMIP3 archive climate model simulations with 20th Century forcings. Observed humidity increases are proportionately largest in the Northern Hemisphere, especially in winter. At the largest spatio-temporal scales moistening is close to the Clausius-Clapeyron scaling of the saturated specific humidity (∼7% K⁻¹). At smaller scales in water-limited regions, changes in specific humidity are strongly inversely correlated with total changes in temperature. Conversely, in some regions increases are faster than implied by the Clausius-Clapeyron relation. The range of climate model specific humidity seasonal climatology and variance encompasses the observations. The models also reproduce the magnitude of observed interannual variance over all large regions. Observed and modelled trends and temperature-humidity relationships are comparable except for the extratropical Southern Hemisphere where observations exhibit no trend but models exhibit moistening. This may arise from: long-term biases remaining in the observations; the relative paucity of observational coverage; or common model errors. The overall degree of consistency of anthropogenically forced models with the observations is further evidence for anthropogenic influence on the climate of the late 20th century.
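
The quoted Clausius-Clapeyron rate of roughly 7% K⁻¹ can be reproduced with the standard Magnus approximation for saturation vapour pressure (an illustration of the scaling; the paper does not use this code):

```python
import math

def e_sat(t_c):
    """Magnus approximation for saturation vapour pressure over water (hPa),
    with temperature in degrees Celsius."""
    return 6.112 * math.exp(17.62 * t_c / (243.12 + t_c))

def cc_rate(t_c, dt=1.0):
    """Fractional increase of saturation vapour pressure per kelvin of
    warming, evaluated as a finite difference around t_c."""
    return (e_sat(t_c + dt) - e_sat(t_c)) / e_sat(t_c)
```

Around typical surface temperatures the rate comes out near 6-7% per kelvin, and it is slightly larger at colder temperatures, consistent with the ∼7% K⁻¹ figure cited above.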

  11. Modelling the long-run supply of coal

    International Nuclear Information System (INIS)

    Steenblik, R.P.

    1992-01-01

    There are many issues facing policy-makers in the fields of energy and the environment that require knowledge of coal supply and cost. Such questions arise in relation to decisions concerning, for example, the discontinuation of subsidies, or the effects of new environmental laws. The very complexity of these questions makes them suitable for analysis by models. Indeed, models have been used for analysing the behaviour of coal markets and the effects of public policies on them for many years. For estimating short-term responses econometric models are the most suitable. For estimating the supply of coal over the longer term, however - i.e., coal that would come from mines as yet not developed - depletion has to be taken into account. Underlying the normal supply curve relating cost to the rate of production is a curve that increases with cumulative production - what mineral economists refer to as the potential supply curve. To derive such a curve requires at some point in the analysis using process-oriented modelling techniques. Because coal supply curves can convey so succinctly information about the resource's long-run supply potential and costs, they have been influential in several major public debates on energy policy. And, within the coal industry itself, they have proved to be powerful tools for undertaking market research and long-range planning. The purpose of this paper is to describe in brief the various approaches that have been used to model long-run coal supply, to highlight their strengths, and to identify areas in which further progress is needed. (author)
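
The notion of a potential supply curve, in which unit cost rises with cumulative rather than annual production, can be sketched numerically (the functional form and all numbers here are hypothetical illustrations, not from any coal model):

```python
import numpy as np

def potential_supply_cost(cumulative_q, c0=20.0, depletion=0.015):
    """Illustrative potential supply curve: unit cost rises with cumulative
    production Q to represent depletion, c(Q) = c0 * exp(depletion * Q).
    c0 is the initial unit cost; depletion sets how fast costs escalate."""
    return c0 * np.exp(depletion * np.asarray(cumulative_q, dtype=float))
```

This captures the distinction drawn in the abstract: an ordinary supply curve relates cost to the annual production rate, while the potential supply curve is monotone in cumulative extraction.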

  12. A Pareto scale-inflated outlier model and its Bayesian analysis

    OpenAIRE

    Scollnik, David P. M.

    2016-01-01

    This paper develops a Pareto scale-inflated outlier model. This model is intended for use when data from some standard Pareto distribution of interest is suspected to have been contaminated with a relatively small number of outliers from a Pareto distribution with the same shape parameter but with an inflated scale parameter. The Bayesian analysis of this Pareto scale-inflated outlier model is considered and its implementation using the Gibbs sampler is discussed. The paper contains three wor...
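
The data-generating process of such a model can be sketched by mixture sampling (a hypothetical illustration; the paper's Bayesian analysis via the Gibbs sampler is not reproduced here):

```python
import numpy as np

def sample_scale_inflated_pareto(n, shape, scale, inflated_scale, p_outlier, rng):
    """Draw n observations: with probability p_outlier from a Pareto with the
    inflated scale, otherwise from the standard Pareto (same shape).
    Inverse-CDF sampling: X = scale * U**(-1/shape) with U ~ Uniform(0, 1),
    so each draw lies in [scale, infinity)."""
    outlier = rng.random(n) < p_outlier
    scales = np.where(outlier, inflated_scale, scale)
    u = rng.random(n)
    return scales * u ** (-1.0 / shape), outlier
```

Because both components share the shape parameter, contamination shows up only as a shifted support for the outliers, which is what the inflated scale parameter is meant to capture.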

  13. Numerical modelling of disintegration of basin-scale internal waves in a tank filled with stratified water

    Directory of Open Access Journals (Sweden)

    N. Stashchuk

    2005-01-01

    We present the results of numerical experiments performed with a fully non-linear, non-hydrostatic numerical model to study the baroclinic response of a long narrow tank filled with stratified water to an initially tilted interface. Upon release, the system starts to oscillate with an eigenfrequency corresponding to basin-scale baroclinic gravitational seiches. Field observations suggest that the disintegration of basin-scale internal waves into packets of solitary waves, shear instabilities, billows and spots of mixed water are important mechanisms for the transfer of energy within stratified lakes. Laboratory experiments performed by D. A. Horn, J. Imberger and G. N. Ivey (JFM, 2001) reproduced several regimes, which include damped linear waves and solitary waves. The generation of billows and shear instabilities induced by the basin-scale wave was, however, not sufficiently studied. The developed numerical model computes a variety of flows which were not observed with the experimental set-up. In particular, the model results showed that under conditions of low dissipation, the regimes of billows and supercritical flows may transform into a solitary wave regime. The obtained results can help in the interpretation of numerous observations of mixing processes in real lakes.

  14. Long-term predictive capability of erosion models

    Science.gov (United States)

    Veerabhadra, P.; Buckley, D. H.

    1983-01-01

    A brief overview is presented of long-term cavitation and liquid-impingement erosion and of the modeling methods proposed by different investigators, including the curve-fit approach. A table highlights the number of variables each model requires in order to compute the erosion-versus-time curves. A power-law relation based on the average erosion rate is suggested, which may solve several modeling problems.
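
A power-law relation between cumulative erosion and time can be fitted in log-log space; the following is a minimal sketch of that idea, not the authors' specific method:

```python
import numpy as np

def fit_erosion_power_law(t, erosion):
    """Fit cumulative erosion E(t) = a * t**b by least squares in log-log
    space; b < 1 means the average erosion rate E(t)/t decays over time."""
    b, log_a = np.polyfit(np.log(t), np.log(erosion), 1)
    return np.exp(log_a), b
```

Given exposure times and measured cumulative erosion, the fitted pair (a, b) can then be used to extrapolate the erosion-versus-time curve beyond the test duration.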

  15. A Lagrangian dynamic subgrid-scale model of turbulence

    Science.gov (United States)

    Meneveau, C.; Lund, T. S.; Cabot, W.

    1994-01-01

    A new formulation of the dynamic subgrid-scale model is tested in which the error associated with the Germano identity is minimized over flow pathlines rather than over directions of statistical homogeneity. This procedure allows the application of the dynamic model with averaging to flows in complex geometries that do not possess homogeneous directions. The characteristic Lagrangian time scale over which the averaging is performed is chosen such that the model is purely dissipative, guaranteeing numerical stability when coupled with the Smagorinsky model. The formulation is tested successfully in forced and decaying isotropic turbulence and in fully developed and transitional channel flow. In homogeneous flows, the results are similar to those of the volume-averaged dynamic model, while in channel flow, the predictions are superior to those of the plane-averaged dynamic model. The relationship between the averaged terms in the model and vortical structures (worms) that appear in the LES is investigated. Computational overhead is kept small (about 10 percent above the CPU requirements of the volume or plane-averaged dynamic model) by using an approximate scheme to advance the Lagrangian tracking through first-order Euler time integration and linear interpolation in space.
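
The Lagrangian averaging described above (first-order Euler tracking plus linear interpolation) can be illustrated with a minimal 1-D periodic sketch; this is an illustration of the averaging idea only, not the authors' LES code:

```python
import numpy as np

def lagrangian_average_step(avg_prev, local_values, u, dt, dx, t_scale):
    """One first-order step of exponentially weighted averaging along fluid
    pathlines: advect the running average backward to the departure points
    with linear interpolation, then blend in the newly computed local values
    with a weight set by the chosen averaging time scale."""
    n = avg_prev.size
    x = np.arange(n) * dx
    period = n * dx
    x_departure = (x - u * dt) % period          # backward Euler tracking
    advected = np.interp(x_departure, x, avg_prev, period=period)
    eps = (dt / t_scale) / (1.0 + dt / t_scale)  # blending weight in (0, 1)
    return eps * local_values + (1.0 - eps) * advected
```

In the actual model the averaged quantities are the Germano-identity terms, and the time scale is chosen so the resulting eddy viscosity stays dissipative.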

  16. Two-scale modelling for hydro-mechanical damage

    International Nuclear Information System (INIS)

    Frey, J.; Chambon, R.; Dascalu, C.

    2010-01-01

    Document available in extended abstract form only. Excavation works for underground storage create a damage zone in the nearby rock and affect its hydraulic properties. This degradation, already observed in laboratory tests, can create a leading path for fluids. The micro-fracture phenomena, which occur at a smaller scale and affect the rock permeability, must be fully understood in order to minimize the transfer process. Many methods can be used to take the microstructure of heterogeneous materials into account. Among them, a method has been developed recently in which, instead of using a constitutive equation obtained by phenomenological considerations or by some homogenization technique, the representative elementary volume (R.E.V.) is modelled as a structure and the links between a prescribed kinematics and the corresponding dual forces are deduced numerically. This yields the so-called Finite Element squared (FE²) method. From a numerical point of view, a finite element model is used at the macroscopic level, and for each Gauss point, computations on the microstructure give the usual results of a constitutive law. This numerical approach is now classical for properly modelling materials such as composites; the efficiency of such a numerical homogenization process has been shown, and it allows numerical modelling of deformation processes associated with various microstructural changes. The aim of this work is to describe, through such a method, the damage of the rock with a two-scale hydro-mechanical model. The rock damage at the macroscopic scale is directly linked with an analysis of the microstructure. At the macroscopic scale, a two-phase problem is studied: a solid skeleton is filled by a filtrating fluid. It is necessary to enforce two balance equations and two mass conservation equations. A classical way to deal with such a problem is to work with the balance equation of the whole mixture, and the mass fluid conservation written in a weak form, the mass

  17. BLEVE overpressure: multi-scale comparison of blast wave modeling

    International Nuclear Information System (INIS)

    Laboureur, D.; Buchlin, J.M.; Rambaud, P.; Heymes, F.; Lapebie, E.

    2014-01-01

    BLEVE overpressure modeling has already been widely studied, but only few validations including the scale effect have been made. After a short overview of the main models available in the literature, a comparison is made with measurements at different scales, taken from previous studies or from experiments performed in the framework of this research project. A discussion of the best model to use in different cases is finally proposed. (authors)

  18. New time scale based k-epsilon model for near-wall turbulence

    Science.gov (United States)

    Yang, Z.; Shih, T. H.

    1993-01-01

    A k-epsilon model is proposed for wall-bounded turbulent flows. In this model, the eddy viscosity is characterized by a turbulent velocity scale and a turbulent time scale. The time scale is bounded from below by the Kolmogorov time scale. The dissipation equation is reformulated using this time scale, and no singularity exists at the wall. The damping function used in the eddy viscosity is chosen to be a function of R_y = k^(1/2)y/ν instead of y⁺. Hence, the model can be used for flows with separation. The model constants used are the same as in the high-Reynolds-number standard k-epsilon model; thus, the proposed model is also suitable for flows far from the wall. Turbulent channel flows at different Reynolds numbers and turbulent boundary-layer flows with and without pressure gradient are calculated. Results show that the model predictions are in good agreement with direct numerical simulation and experimental data.
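
A minimal sketch of an eddy viscosity built from a velocity scale and a bounded time scale follows; the additive bound used here is one simple choice for illustration and may differ from the published model's exact form:

```python
import math

C_MU = 0.09  # standard k-epsilon model constant

def eddy_viscosity(k, eps, nu, f_mu=1.0):
    """Eddy viscosity from a velocity scale sqrt(k) and a turbulent time
    scale k/eps bounded below by the Kolmogorov time scale sqrt(nu/eps),
    so the formulation stays regular as k -> 0 at the wall. f_mu stands in
    for the near-wall damping function (a function of R_y in the paper)."""
    t_scale = k / eps + math.sqrt(nu / eps)  # never below the Kolmogorov scale
    return C_MU * f_mu * k * t_scale
```

Far from the wall k/eps dominates and the expression reduces to the familiar C_mu k²/eps, while at the wall it vanishes smoothly with k instead of becoming singular.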

  19. Numerical Modelling of a Bidirectional Long Ring Raman Fiber Laser Dynamics

    Science.gov (United States)

    Sukhanov, S. V.; Melnikov, L. A.; Mazhirina, Yu A.

    2017-11-01

    The numerical model for the simulation of the dynamics of a bidirectional long ring Raman fiber laser is proposed. The model is based on the transport equations and Courant-Isaacson-Rees method. Different regimes of a bidirectional long ring Raman fiber laser and long time-domain realizations are investigated.

  20. Bioclim Deliverable D8b: development of the physical/statistical down-scaling methodology and application to climate model Climber for BIOCLIM Work-package 3

    International Nuclear Information System (INIS)

    2003-01-01

    The overall aim of BIOCLIM is to assess the possible long term impacts due to climate change on the safety of radioactive waste repositories in deep formations. The main aim of this deliverable is to provide time series of climatic variables at the high resolution as needed by performance assessment (PA) of radioactive waste repositories, on the basis of coarse output from the CLIMBER-GREMLINS climate model. The climatological variables studied here are long-term (monthly) mean temperature and precipitation, as these are the main variables of interest for performance assessment. CLIMBER-GREMLINS is an earth-system model of intermediate complexity (EMIC), designed for long climate simulations (glacial cycles). Thus, this model has a coarse resolution (about 50 degrees in longitude) and other limitations which are sketched in this report. For the purpose of performance assessment, the climatological variables are required at scales pertinent for the knowledge of the conditions at the depository site. In this work, the final resolution is that of the best available global gridded present-day climatology, which is 1/6 degree in both longitude and latitude. To obtain climate-change information at this high resolution on the basis of the climate model outputs, a 2-step down-scaling method is designed. First, physical considerations are used to define variables which are expected to have links which climatological values; secondly a statistical model is used to find the links between these variables and the high-resolution climatology of temperature and precipitation. Thus the method is termed as 'physical/statistical': it involves physically based assumptions to compute predictors from model variables and then relies on statistics to find empirical links between these predictors and the climatology. 
    The simple connection of coarse model results to regional values cannot be done in a purely empirical way because the model does not provide enough information - it is both
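
The second, statistical step of such a down-scaling scheme can be illustrated as a linear least-squares link between physically based coarse-model predictors and the high-resolution climatology (a generic sketch, not the BIOCLIM code):

```python
import numpy as np

def fit_downscaling(predictors, high_res_climatology):
    """Least-squares link between predictors derived from coarse model
    output, shape (n_sites, n_predictors), and a high-resolution
    climatology, shape (n_sites,). Returns an intercept plus one weight
    per predictor."""
    X = np.column_stack([np.ones(len(high_res_climatology)), predictors])
    coeffs, *_ = np.linalg.lstsq(X, high_res_climatology, rcond=None)
    return coeffs

def apply_downscaling(coeffs, predictors):
    """Predict the high-resolution field from new coarse-model predictors."""
    X = np.column_stack([np.ones(predictors.shape[0]), predictors])
    return X @ coeffs
```

Fitted against the present-day 1/6-degree climatology, the same coefficients can then be applied to predictors taken from a climate-change simulation to produce high-resolution scenario fields.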

  1. Development and testing of watershed-scale models for poorly drained soils

    Science.gov (United States)

    Glenn P. Fernandez; George M. Chescheir; R. Wayne Skaggs; Devendra M. Amatya

    2005-01-01

    Watershed-scale hydrology and water quality models were used to evaluate the cumulative impacts of land use and management practices on downstream hydrology and nitrogen loading of poorly drained watersheds. Field-scale hydrology and nutrient dynamics are predicted by DRAINMOD in both models. In the first model (DRAINMOD-DUFLOW), field-scale predictions are coupled...

  2. Coulomb-gas scaling, superfluid films, and the XY model

    International Nuclear Information System (INIS)

    Minnhagen, P.; Nylen, M.

    1985-01-01

    Coulomb-gas-scaling ideas are invoked as a link between the superfluid density of two-dimensional ⁴He films and the XY model; the Coulomb-gas-scaling function epsilon(X) is extracted from experiments and is compared with Monte Carlo simulations of the XY model. The agreement is found to be excellent.

  3. Model of cosmology and particle physics at an intermediate scale

    International Nuclear Information System (INIS)

    Bastero-Gil, M.; Di Clemente, V.; King, S. F.

    2005-01-01

    We propose a model of cosmology and particle physics in which all relevant scales arise in a natural way from an intermediate string scale. We are led to assign the string scale to the intermediate scale M* ∼ 10¹³ GeV by four independent pieces of physics: electroweak symmetry breaking; the μ parameter; the axion scale; and the neutrino mass scale. The model involves hybrid inflation with the waterfall field N being responsible for generating the μ term, the right-handed neutrino mass scale, and the Peccei-Quinn symmetry breaking scale. The large scale structure of the Universe is generated by the lightest right-handed sneutrino playing the role of a coupled curvaton. We show that the correct curvature perturbations may be successfully generated providing the lightest right-handed neutrino is weakly coupled in the seesaw mechanism, consistent with sequential dominance.

  4. Geometrical scaling vs factorizable eikonal models

    CERN Document Server

    Kiang, D

    1975-01-01

    Among the various theoretical explanations or interpretations of the experimental data on the differential cross-sections of elastic proton-proton scattering at the CERN ISR, the following two seem most remarkable: A) the excellent agreement of the Chou-Yang model prediction of dσ/dt with data at √s = 53 GeV; B) the general manifestation of geometrical scaling (GS). The paper confronts GS with eikonal models with factorizable opaqueness, with special emphasis on the Chou-Yang model. (12 refs).

  5. Interest Rates with Long Memory: A Generalized Affine Term-Structure Model

    DEFF Research Database (Denmark)

    Osterrieder, Daniela

    We propose a model for the term structure of interest rates that is a generalization of the discrete-time, Gaussian, affine yield-curve model. Compared to standard affine models, our model allows for general linear dynamics in the vector of state variables. In an application to real yields of U.S. government bonds, we model the time series of the state vector by means of a co-fractional vector autoregressive model. The implication is that yields of all maturities exhibit nonstationary, yet mean-reverting, long-memory behavior of the order d ≈ 0.87. The long-run dynamics of the state vector are driven … forecasts that outperform several benchmark models, especially at long forecasting horizons.

  6. Modelling rapid subsurface flow at the hillslope scale with explicit representation of preferential flow paths

    Science.gov (United States)

    Wienhöfer, J.; Zehe, E.

    2012-04-01

    produced acceptable matches to the observed behaviour. These setups were selected for long-term simulation, the results of which were compared against water level measurements at two piezometers along the hillslope and the integral discharge response of the spring to reject some non-behavioural model setups and further reduce equifinality. The results of this study indicate that process-based modelling can provide a means to distinguish preferential flow networks on the hillslope scale when complementary measurements to constrain the range of behavioural model setups are available. These models can further be employed as a virtual reality to investigate the characteristics of flow path architectures and explore effective parameterisations for larger scale applications.

  7. Multi-scale modeling of inter-granular fracture in UO2

    Energy Technology Data Exchange (ETDEWEB)

    Chakraborty, Pritam [Idaho National Lab. (INL), Idaho Falls, ID (United States); Zhang, Yongfeng [Idaho National Lab. (INL), Idaho Falls, ID (United States); Tonks, Michael R. [Idaho National Lab. (INL), Idaho Falls, ID (United States); Biner, S. Bulent [Idaho National Lab. (INL), Idaho Falls, ID (United States)

    2015-03-01

    A hierarchical multi-scale approach is pursued in this work to investigate the influence of porosity and of pore and grain size on intergranular brittle fracture in UO2. In this approach, molecular dynamics simulations are performed to obtain the fracture properties of different grain boundary types. A phase-field model is then utilized to perform intergranular fracture simulations of representative microstructures with different porosities, pore sizes and grain sizes. In these simulations, the grain boundary fracture properties obtained from the molecular dynamics simulations are used. The responses from the phase-field fracture simulations are then fitted with a stress-based brittle fracture model usable at the engineering scale. This approach encapsulates three different length and time scales, and allows the development of a microstructurally informed engineering-scale model from properties evaluated at the atomistic scale.

  8. Psychosocial deprivation in women with gestational diabetes mellitus is associated with poor fetomaternal prognoses: an observational study.

    Science.gov (United States)

    Cosson, Emmanuel; Bihan, Hélène; Reach, Gérard; Vittaz, Laurence; Carbillon, Lionel; Valensi, Paul

    2015-03-06

    To evaluate the prognoses associated with psychosocial deprivation in women with gestational diabetes mellitus (GDM). Observational study considering the 1498 multiethnic women with GDM who gave birth between January 2009 and February 2012. Four largest maternity units in the northeastern suburban area of Paris. The 994 women who completed the Evaluation of Precarity and Inequalities in Health Examination Centers (EPICES) questionnaire. Main complications of GDM (large-for-gestational-age (LGA) infant, shoulder dystocia, caesarean section, pre-eclampsia). Psychosocial deprivation (EPICES score ≥30.17) affected 577 women (56%) and was positively associated with overweight/obesity, parity and non-European origin, and negatively associated with family history of diabetes, fruit and vegetable consumption and working status. The psychosocially deprived women were diagnosed with GDM earlier, received insulin treatment during pregnancy more often and were more likely to have LGA infants (15.1% vs 10.6%, OR=1.5 (95% CI 1.02 to 2.2), p<0.05) and shoulder dystocia (3.1% vs 1.2%, OR=2.7 (0.97 to 7.2), p<0.05). In addition to psychosocial deprivation, LGA was associated with greater parity, obesity, history of GDM, ethnicity, excessive gestational weight gain and insulin therapy. A multivariate analysis using these covariates revealed that the EPICES score was independently associated with LGA infants (per 10 units, OR=1.12 (1.03 to 1.20), p<0.01). In our area, psychosocial deprivation is common in women with GDM and is associated with earlier GDM diagnoses and greater insulin treatment, an increased likelihood of shoulder dystocia and, independently of obesity, gestational weight gain and other confounders, with LGA infants.
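    The reported unadjusted odds ratio for LGA (1.5, 95% CI 1.02 to 2.2) can be reproduced from the group sizes and rates in the abstract. The sketch below assumes the non-deprived group is the remaining 417 of the 994 respondents:

```python
import math

def odds_ratio_ci(p1, n1, p0, n0, z=1.96):
    """Odds ratio of the exposed (p1, n1) vs unexposed (p0, n0) group,
    with a Wald confidence interval on the log-odds-ratio scale."""
    a, b = p1 * n1, (1 - p1) * n1        # events / non-events, deprived
    c, d = p0 * n0, (1 - p0) * n0        # events / non-events, non-deprived
    or_ = (a / b) / (c / d)
    se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
    lo = math.exp(math.log(or_) - z * se)
    hi = math.exp(math.log(or_) + z * se)
    return or_, lo, hi

# LGA rates from the abstract: 15.1% of 577 deprived vs 10.6% of 417 others
or_, lo, hi = odds_ratio_ci(0.151, 577, 0.106, 417)
print(round(or_, 2), round(lo, 2), round(hi, 2))
```

    The result matches the quoted OR=1.5 (1.02 to 2.2) to two decimals.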

  9. Large-scale hydrology in Europe : observed patterns and model performance

    Energy Technology Data Exchange (ETDEWEB)

    Gudmundsson, Lukas

    2011-06-15

    In a changing climate, terrestrial water storages are of great interest as water availability impacts key aspects of ecosystem functioning. Thus, a better understanding of the variations of wet and dry periods will contribute to fully grasping processes of the earth system such as nutrient cycling and vegetation dynamics. Currently, river runoff from small, nearly natural catchments is one of the few variables of the terrestrial water balance that is regularly monitored with detailed spatial and temporal coverage on large scales. River runoff therefore provides a foundation for approaching European hydrology both with respect to observed large-scale patterns and with regard to the ability of models to capture them. The analysis of observed river flow from small catchments focused on the identification and description of spatial patterns of simultaneous temporal variations of runoff. These are dominated by large-scale variations of climatic variables but are also altered by catchment processes. It was shown that time series of annual low, mean and high flows follow the same atmospheric drivers. The observation that high flows are more closely coupled to large-scale atmospheric drivers than low flows indicates the increasing influence of catchment properties on runoff under dry conditions. Further, it was shown that the low-frequency variability of European runoff is dominated by two opposing centres of simultaneous variations, such that dry years in the north are accompanied by wet years in the south. Large-scale hydrological models are simplified representations of our current perception of the terrestrial water balance on large scales. Quantification of a model's strengths and weaknesses is the prerequisite for a reliable interpretation of simulation results. Model evaluations may also help detect shortcomings in model assumptions and thus enable a refinement of the current perception of hydrological systems. The ability of a multi-model ensemble of nine large-scale

  10. Classically scale-invariant B–L model and conformal gravity

    International Nuclear Information System (INIS)

    Oda, Ichiro

    2013-01-01

    We consider a coupling of conformal gravity to the classically scale-invariant B–L extended standard model which has been recently proposed as a phenomenologically viable model realizing the Coleman–Weinberg mechanism of breakdown of the electroweak symmetry. As in a globally scale-invariant dilaton gravity, it is also shown in a locally scale-invariant conformal gravity that without recourse to the Coleman–Weinberg mechanism, the B–L gauge symmetry is broken in the process of spontaneous symmetry breakdown of the local scale invariance (Weyl invariance) at the tree level and as a result the B–L gauge field becomes massive via the Higgs mechanism. As a bonus of conformal gravity, the massless dilaton field does not appear and the parameters in front of the non-minimal coupling of gravity are completely fixed in the present model. This observation clearly shows that the conformal gravity has a practical application even if the scalar field does not possess any dynamical degree of freedom owing to the local scale symmetry

  11. Scaling limit for the Dereziński-Gérard model

    OpenAIRE

    OHKUBO, Atsushi

    2010-01-01

    We consider a scaling limit for the Dereziński-Gérard model. We derive an effective potential by taking a scaling limit of the total Hamiltonian of the Dereziński-Gérard model. Our method of deriving an effective potential is independent of whether or not the quantum field has a nonnegative mass. As an application of the theory developed in the present paper, we derive an effective potential of the Nelson model.

  12. Using LISREL to Evaluate Measurement Models and Scale Reliability.

    Science.gov (United States)

    Fleishman, John; Benson, Jeri

    1987-01-01

    The LISREL program was used to examine measurement model assumptions and to assess the reliability of the Coopersmith Self-Esteem Inventory for Children, Form B. Data on 722 third- through sixth-graders from over 70 schools in a large urban school district were used. The LISREL program assessed (1) the nature of the basic measurement model for the scale, (2) scale invariance across…
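    A flavour of the scale-reliability assessment that a measurement model enables can be given with McDonald's composite reliability (omega), computed from standardized factor loadings. The loadings below are hypothetical, not estimates from the inventory:

```python
def composite_reliability(loadings):
    """McDonald's omega for a congeneric one-factor model:
    omega = (sum lambda)^2 / ((sum lambda)^2 + sum theta),
    assuming standardized items so theta_i = 1 - lambda_i**2."""
    s = sum(loadings)
    theta = sum(1.0 - l * l for l in loadings)
    return s * s / (s * s + theta)

# Hypothetical standardized loadings for a 5-item self-esteem subscale
lam = [0.6, 0.7, 0.5, 0.65, 0.55]
print(round(composite_reliability(lam), 3))
```

    Unlike coefficient alpha, this estimate does not assume equal loadings across items.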

  13. Long-term performance and fouling analysis of full-scale direct nanofiltration (NF) installations treating anoxic groundwater

    NARCIS (Netherlands)

    Beyer, F.; Rietman, B.M.; Zwijnenburg, A.; Brink, van den P.; Vrouwenvelder, J.S.; Jarzembowska, M.; Laurinonyte, J.; Stams, A.J.M.; Plugge, C.M.

    2014-01-01

    Long-term performance and fouling behavior of four full-scale nanofiltration (NF) plants, treating anoxic groundwater at 80% recovery for drinking water production, were characterized and compared with oxic NF and reverse osmosis systems. Plant operating times varied between 6 and 10 years and

  14. Site-scale groundwater flow modelling of Beberg

    Energy Technology Data Exchange (ETDEWEB)

    Gylling, B. [Kemakta Konsult AB, Stockholm (Sweden); Walker, D. [Duke Engineering and Services (United States); Hartley, L. [AEA Technology, Harwell (United Kingdom)

    1999-08-01

    The Swedish Nuclear Fuel and Waste Management Company (SKB) Safety Report for 1997 (SR 97) study is a comprehensive performance assessment illustrating the results for three hypothetical repositories in Sweden. In support of SR 97, this study examines the hydrogeologic modelling of the hypothetical site called Beberg, which adopts input parameters from the SKB study site near Finnsjoen, in central Sweden. This study uses a nested modelling approach, with a deterministic regional model providing boundary conditions to a site-scale stochastic continuum model. The model is run in Monte Carlo fashion to propagate the variability of the hydraulic conductivity to the advective travel paths from representative canister positions. A series of variant cases addresses uncertainties in the inference of parameters and the boundary conditions. The study uses HYDRASTAR, the SKB stochastic continuum (SC) groundwater modelling program, to compute the heads, Darcy velocities at each representative canister position, and the advective travel times and paths through the geosphere. The Base Case simulation takes its constant head boundary conditions from a modified version of the deterministic regional scale model of Hartley et al. The flow balance between the regional and site-scale models suggests that the nested modelling conserves mass only in a general sense, and that the upscaling is only approximately valid. The results for 100 realisations of 120 starting positions, a flow porosity of ε_f = 10⁻⁴, and a flow-wetted surface of a_r = 1.0 m²/(m³ rock) suggest the following statistics for the Base Case: The median travel time is 56 years. The median canister flux is 1.2 × 10⁻³ m/year. The median F-ratio is 5.6 × 10⁵ year/m. The travel times, flow paths and exit locations were compatible with the observations on site, approximate scoping calculations and the results of related modelling studies.
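    The Monte Carlo propagation of conductivity variability to travel-time statistics can be sketched with a toy advection calculation. The path length, gradient and lognormal conductivity parameters below are illustrative assumptions, not the SR 97 inputs (only the flow porosity matches the quoted value):

```python
import math
import random
import statistics

random.seed(42)

# Illustrative parameters: path length L (m), hydraulic gradient i,
# flow porosity eps_f (matches the quoted 1e-4 of the abstract).
L, i, eps_f = 500.0, 0.003, 1e-4
mu_lnK, sigma_lnK = math.log(1e-8), 1.0   # lognormal conductivity, median 1e-8 m/s

def travel_time_years():
    K = random.lognormvariate(mu_lnK, sigma_lnK)   # one realisation of K (m/s)
    v = K * i / eps_f                              # advective velocity (m/s)
    return L / v / (3600.0 * 24 * 365)             # seconds -> years

times = [travel_time_years() for _ in range(1000)]
print(f"median travel time ~ {statistics.median(times):.0f} years")
```

    The median is robust to the heavy upper tail of the lognormal distribution, which is one reason such studies report medians rather than means.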
Variability within realisations indicates

  15. Use of natural analogues to support radionuclide transport models for deep geological repositories for long lived radioactive wastes

    International Nuclear Information System (INIS)

    1999-10-01

    Plans to dispose of high-level and long-lived radioactive wastes in deep geological repositories have raised a number of unique problems, mainly due to the very long time-scales which have to be considered. An important way to help evaluate performance and provide confidence in the assessment of long-term safety is to carry out natural analogue studies. Natural analogues can be regarded as long-term natural experiments whose results or outcome can be observed, but which, by definition, are uncontrolled by humans. Studies of natural analogues have been carried out for more than two decades, although the application of information from them has only relatively recently become scientifically well ordered. This report is part of the IAEA's programme on radioactive waste management dealing with disposal system technology for high-level and long-lived radioactive waste. It presents the current status of natural analogue information for evaluating models of radionuclide transport by groundwater. In particular, emphasis is given to the most useful aspects of quantitative applications for model development and testing (geochemistry and coupled transport models). The report provides an overview of various natural analogues as a reference for those planning to develop a research programme in this field. Recommendations are given on the use of natural analogues to engender confidence in the safety of disposal systems. This report is a follow-up to Technical Reports Series No. 304, Natural Analogues in Performance Assessments for the Disposal of Long Lived Radioactive Waste (1989).

  16. Optogenetic stimulation of a meso-scale human cortical model

    Science.gov (United States)

    Selvaraj, Prashanth; Szeri, Andrew; Sleigh, Jamie; Kirsch, Heidi

    2015-03-01

    Neurological phenomena like sleep and seizures depend not only on the activity of individual neurons, but on the dynamics of neuron populations as well. Meso-scale models of cortical activity provide a means to study neural dynamics at the level of neuron populations. Additionally, they offer a safe and economical way to test the effects and efficacy of stimulation techniques on the dynamics of the cortex. Here, we use a physiologically relevant meso-scale model of the cortex to study the hypersynchronous activity of neuron populations during epileptic seizures. The model consists of a set of stochastic, highly non-linear partial differential equations. Next, we use optogenetic stimulation to control seizures in a hyperexcited cortex, and to induce seizures in a normally functioning cortex. The high spatial and temporal resolution this method offers makes a strong case for the use of optogenetics in treating meso-scale cortical disorders such as epileptic seizures. We use bifurcation analysis to investigate the effect of optogenetic stimulation in the meso-scale model, and its efficacy in suppressing the non-linear dynamics of seizures.
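    The seizure-suppression idea can be illustrated with a far simpler, deterministic one-population firing-rate model rather than the stochastic PDEs of the abstract. All parameters below are illustrative; the recurrent excitation is tuned to make the population bistable, and an inhibitory "optogenetic" drive pushes it off the high-activity branch:

```python
import math

def simulate(I_opt, h0, T=5.0, dt=0.001):
    """Euler integration of a mean-field firing rate h with sigmoidal
    recurrent excitation and an inhibitory optogenetic drive I_opt
    (all parameters illustrative, chosen to give bistability)."""
    S = lambda x: 1.0 / (1.0 + math.exp(-(x - 3.0)))   # population firing function
    w = 6.0                                            # recurrent excitation
    h = h0
    for _ in range(int(T / dt)):
        h += dt * (-h + S(w * h - I_opt))
    return h

seizure = simulate(0.0, h0=0.9)     # hypersynchronous (high-activity) state persists
suppressed = simulate(2.0, h0=0.9)  # same state under inhibitory stimulation
print(round(seizure, 2), round(suppressed, 2))
```

    With the drive on, the high fixed point disappears and the population settles onto the low-activity branch, a toy version of the bifurcation-based argument in the abstract.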

  17. Drift-Scale Coupled Processes (DST and THC Seepage) Models

    International Nuclear Information System (INIS)

    Dixon, P.

    2004-01-01

    The purpose of this Model Report (REV02) is to document the unsaturated zone (UZ) models used to evaluate the potential effects of coupled thermal-hydrological-chemical (THC) processes on UZ flow and transport. This Model Report has been developed in accordance with the ''Technical Work Plan for: Performance Assessment Unsaturated Zone'' (Bechtel SAIC Company, LLC (BSC) 2002 [160819]). The technical work plan (TWP) describes planning information pertaining to the technical scope, content, and management of this Model Report in Section 1.12, Work Package AUZM08, ''Coupled Effects on Flow and Seepage''. The plan for validation of the models documented in this Model Report is given in Attachment I, Model Validation Plans, Section I-3-4, of the TWP. Except for variations in acceptance criteria (Section 4.2), there were no deviations from this TWP. This report was developed in accordance with AP-SIII.10Q, ''Models''. This Model Report documents the THC Seepage Model and the Drift Scale Test (DST) THC Model. The THC Seepage Model is a drift-scale process model for predicting the composition of gas and water that could enter waste emplacement drifts and the effects of mineral alteration on flow in rocks surrounding drifts. The DST THC model is a drift-scale process model relying on the same conceptual model and much of the same input data (i.e., physical, hydrological, thermodynamic, and kinetic) as the THC Seepage Model. The DST THC Model is the primary method for validating the THC Seepage Model. The DST THC Model compares predicted water and gas compositions, as well as mineral alteration patterns, with observed data from the DST. These models provide the framework to evaluate THC coupled processes at the drift scale, predict flow and transport behavior for specified thermal-loading conditions, and predict the evolution of mineral alteration and fluid chemistry around potential waste emplacement drifts. The DST THC Model is used solely for the validation of the THC

  18. A Model Study of Small-Scale World Map Generalization

    Science.gov (United States)

    Cheng, Y.; Yin, Y.; Li, C. M.; Wu, W.; Guo, P. P.; Ma, X. L.; Hu, F. M.

    2018-04-01

    With globalization and rapid development, every field is taking an increasing interest in physical geography and human economics, and there is a surging demand worldwide for small-scale world maps in large formats. Advancing automated mapping technology, in particular the production of small-scale maps with global coverage, is a key problem the cartographic field needs to solve. In light of this, this paper adopts an improved model (with map and data separated) for map generalization, mainly comprising a cross-platform symbol library and an automatic map-making knowledge engine. In the cross-platform symbol library, the symbols and the physical symbols in the geographic information are configured at all scale levels. The automatic map-making knowledge engine consists of 97 types, 1086 subtypes, 21845 basic algorithms and over 2500 related functional modules. To evaluate the accuracy and visual effect of our model for topographic and thematic maps, we take world map generalization at small scale as an example. After the generalization process, combining and simplifying the scattered islands makes the map more explicit at the 1:2.1 billion scale, and the map features more complete and accurate. The model not only enhances map generalization at various scales significantly, but also achieves integration among map-making at various scales, suggesting that it provides a reference for cartographic generalization across scales.
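    Generalization of the kind described (simplifying features for a smaller scale) is commonly built on line-simplification algorithms. A minimal Douglas-Peucker sketch, unrelated to the paper's specific knowledge engine, keeps only vertices that deviate from a trend line by more than a tolerance:

```python
import math

def douglas_peucker(points, tol):
    """Recursively simplify a polyline: keep the endpoints, split at the
    vertex farthest from the chord if it exceeds tol, else drop interiors."""
    if len(points) < 3:
        return list(points)
    (x1, y1), (x2, y2) = points[0], points[-1]
    dx, dy = x2 - x1, y2 - y1
    norm = math.hypot(dx, dy) or 1.0
    dmax, idx = 0.0, 0
    for k in range(1, len(points) - 1):
        x, y = points[k]
        d = abs(dy * (x - x1) - dx * (y - y1)) / norm   # perpendicular distance
        if d > dmax:
            dmax, idx = d, k
    if dmax <= tol:
        return [points[0], points[-1]]
    left = douglas_peucker(points[: idx + 1], tol)
    right = douglas_peucker(points[idx:], tol)
    return left[:-1] + right   # avoid duplicating the split vertex

line = [(0, 0), (1, 0.1), (2, -0.1), (3, 5), (4, 6), (5, 7), (6, 8.1), (7, 9)]
print(douglas_peucker(line, 1.0))
```

    A larger tolerance produces a coarser (smaller-scale) representation of the same feature.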

  19. Long-term responses of rainforest erosional systems at different spatial scales to selective logging and climatic change

    Science.gov (United States)

    Walsh, R. P. D.; Bidin, K.; Blake, W. H.; Chappell, N. A.; Clarke, M. A.; Douglas, I.; Ghazali, R.; Sayer, A. M.; Suhaimi, J.; Tych, W.; Annammala, K. V.

    2011-01-01

    Long-term (21–30 years) erosional responses of rainforest terrain in the Upper Segama catchment, Sabah, to selective logging are assessed at slope, small-catchment and large-catchment scales. In the 0.44 km² Baru catchment, slope erosion measurements over 1990–2010 and sediment fingerprinting indicate that sediment sources 21 years after logging in 1989 are mainly road-linked, including fresh landslips and gullying of scars and toe deposits of 1994–1996 landslides. Analysis and modelling of 5–15 min stream-suspended sediment and discharge data demonstrate a reduction in storm-sediment response between 1996 and 2009, but not yet to pre-logging levels. An unmixing model using bed-sediment geochemical data indicates that 49 per cent of the 2009 sediment yield of 216 t km⁻² a⁻¹ comes from the 10 per cent of the catchment area affected by road-linked landslides. Fallout ²¹⁰Pb and ¹³⁷Cs values from a lateral bench core indicate that sedimentation rates in the 721 km² Upper Segama catchment less than doubled with initially highly selective, low-slope logging in the 1980s, but rose 7–13 times when steep terrain was logged in 1992–1993 and 1999–2000. The need to keep steeplands under forest is emphasized if landsliding associated with current and predicted rises in extreme rainstorm magnitude-frequency is to be reduced in scale. PMID:22006973
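    A two-source geochemical unmixing model of the kind behind the 49 per cent estimate can be sketched as a one-parameter least-squares problem. The tracer values below are invented for illustration, not the study's measurements:

```python
import numpy as np

# Hypothetical tracer signatures (e.g. element concentrations) for two
# sources and the downstream bed-sediment mixture -- illustrative only.
landslide = np.array([12.0, 40.0, 3.0])   # road-linked landslide source
hillslope = np.array([4.0, 10.0, 9.0])    # general hillslope source
mixture = np.array([7.9, 24.7, 6.1])

# One-parameter unmixing: mixture ~ f*landslide + (1-f)*hillslope.
# The least-squares f has a closed form via projection onto the difference.
diff = landslide - hillslope
f = float(np.dot(mixture - hillslope, diff) / np.dot(diff, diff))
print(f"landslide fraction ~ {f:.2f}")
```

    Real fingerprinting studies use more sources and tracers, with constraints that fractions lie in [0, 1] and sum to one; this collapses to the projection above in the two-source case.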

  20. Pelamis wave energy converter. Verification of full-scale control using a 7th scale model

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    2005-07-01

    The Pelamis Wave Energy Converter is a new concept for converting wave energy for several applications including generation of electric power. The machine is flexibly moored and swings to meet the water waves head-on. The system is semi-submerged and consists of cylindrical sections linked by hinges. The mechanical operation is described in outline. A one-seventh scale model was built and tested and the outcome was sufficiently successful to warrant the building of a full-scale prototype. In addition, a one-twentieth scale model was built and has contributed much to the research programme. The work is supported financially by the DTI.

  1. An interactive display system for large-scale 3D models

    Science.gov (United States)

    Liu, Zijian; Sun, Kun; Tao, Wenbing; Liu, Liman

    2018-04-01

    With the improvement of 3D reconstruction theory and the rapid development of computer hardware technology, reconstructed 3D models are growing in scale and complexity. Models with tens of thousands of 3D points or triangular meshes are common in practical applications. Due to storage and computing power limitations, it is difficult for common 3D display software, such as MeshLab, to achieve real-time display of and interaction with large-scale 3D models. In this paper, we propose a display system for large-scale 3D scene models. We construct the LOD (Levels of Detail) model of the reconstructed 3D scene in advance, and then use an out-of-core, view-dependent multi-resolution rendering scheme to realize real-time display of the large-scale 3D model. With the proposed method, our display system is able to render in real time while roaming in the reconstructed scene, and 3D camera poses can also be displayed. Furthermore, memory consumption can be significantly decreased via an internal and external memory exchange mechanism, so that it is possible to display a large-scale reconstructed scene with millions of 3D points or triangular meshes on a regular PC with only 4 GB RAM.
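    The core of a view-dependent LOD scheme is choosing, per node, the coarsest level whose projected geometric error stays below a screen-space tolerance. A minimal sketch, assuming each coarser level doubles the geometric error (the constants are illustrative, not the paper's):

```python
def select_lod(distance, e0=0.01, levels=8, screen_tol=1.0, fov_px=1000.0):
    """Pick the coarsest level whose screen-space error (pixels) stays
    under screen_tol; level k has geometric error e0 * 2**k (world units),
    projected roughly as error * fov_px / distance."""
    best = 0
    for k in range(levels):
        if (e0 * 2 ** k) * fov_px / distance <= screen_tol:
            best = k     # still acceptable -> allow coarser geometry
        else:
            break        # error grows with k, so stop at first failure
    return best

for dist in (10.0, 100.0, 1000.0):
    print(dist, select_lod(dist))
```

    Nearby geometry gets the finest level, distant geometry a coarse one, which is what bounds the working set an out-of-core renderer must keep in RAM.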

  2. Scaling and percolation in the small-world network model

    Energy Technology Data Exchange (ETDEWEB)

    Newman, M. E. J. [Santa Fe Institute, 1399 Hyde Park Road, Santa Fe, New Mexico 87501 (United States); Watts, D. J. [Santa Fe Institute, 1399 Hyde Park Road, Santa Fe, New Mexico 87501 (United States)

    1999-12-01

    In this paper we study the small-world network model of Watts and Strogatz, which mimics some aspects of the structure of networks of social interactions. We argue that there is one nontrivial length-scale in the model, analogous to the correlation length in other systems, which is well-defined in the limit of infinite system size and which diverges continuously as the randomness in the network tends to zero, giving a normal critical point in this limit. This length-scale governs the crossover from large- to small-world behavior in the model, as well as the number of vertices in a neighborhood of given radius on the network. We derive the value of the single critical exponent controlling behavior in the critical region and the finite size scaling form for the average vertex-vertex distance on the network, and, using series expansion and Pade approximants, find an approximate analytic form for the scaling function. We calculate the effective dimension of small-world graphs and show that this dimension varies as a function of the length-scale on which it is measured, in a manner reminiscent of multifractals. We also study the problem of site percolation on small-world networks as a simple model of disease propagation, and derive an approximate expression for the percolation probability at which a giant component of connected vertices first forms (in epidemiological terms, the point at which an epidemic occurs). The typical cluster radius satisfies the expected finite size scaling form with a cluster size exponent close to that for a random graph. All our analytic results are confirmed by extensive numerical simulations of the model. (c) 1999 The American Physical Society.
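    The crossover from large- to small-world behaviour described above is easy to observe numerically. A minimal sketch of the Watts-Strogatz construction and the mean vertex-vertex distance follows; the sizes and rewiring probability are chosen for speed, not taken from the paper:

```python
import random
from collections import deque

random.seed(1)

def watts_strogatz(n, k, p):
    """Ring of n vertices, each joined to its k nearest neighbours per side,
    with each edge rewired to a uniformly random vertex with probability p."""
    edges = set()
    for v in range(n):
        for j in range(1, k + 1):
            w = (v + j) % n
            if random.random() < p:
                w = random.randrange(n)
                while w == v:
                    w = random.randrange(n)
            edges.add((min(v, w), max(v, w)))
    adj = {v: set() for v in range(n)}
    for a, b in edges:
        adj[a].add(b)
        adj[b].add(a)
    return adj

def mean_distance(adj, n):
    """Average shortest-path length over all reachable pairs, via BFS."""
    total, pairs = 0, 0
    for s in range(n):
        dist = {s: 0}
        q = deque([s])
        while q:
            u = q.popleft()
            for v in adj[u]:
                if v not in dist:
                    dist[v] = dist[u] + 1
                    q.append(v)
        total += sum(dist.values())
        pairs += len(dist) - 1
    return total / pairs

ring = mean_distance(watts_strogatz(200, 2, 0.0), 200)   # no shortcuts
sw = mean_distance(watts_strogatz(200, 2, 0.1), 200)     # 10% rewiring
print(round(ring, 1), round(sw, 1))
```

    Even a small density of shortcuts collapses the average distance, the numerical signature of the length-scale crossover analysed in the paper.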

  3. Scaling and percolation in the small-world network model

    International Nuclear Information System (INIS)

    Newman, M. E. J.; Watts, D. J.

    1999-01-01

    In this paper we study the small-world network model of Watts and Strogatz, which mimics some aspects of the structure of networks of social interactions. We argue that there is one nontrivial length-scale in the model, analogous to the correlation length in other systems, which is well-defined in the limit of infinite system size and which diverges continuously as the randomness in the network tends to zero, giving a normal critical point in this limit. This length-scale governs the crossover from large- to small-world behavior in the model, as well as the number of vertices in a neighborhood of given radius on the network. We derive the value of the single critical exponent controlling behavior in the critical region and the finite size scaling form for the average vertex-vertex distance on the network, and, using series expansion and Pade approximants, find an approximate analytic form for the scaling function. We calculate the effective dimension of small-world graphs and show that this dimension varies as a function of the length-scale on which it is measured, in a manner reminiscent of multifractals. We also study the problem of site percolation on small-world networks as a simple model of disease propagation, and derive an approximate expression for the percolation probability at which a giant component of connected vertices first forms (in epidemiological terms, the point at which an epidemic occurs). The typical cluster radius satisfies the expected finite size scaling form with a cluster size exponent close to that for a random graph. All our analytic results are confirmed by extensive numerical simulations of the model. (c) 1999 The American Physical Society

  4. Dry oxidation behaviour of metallic containers during long term interim storages

    International Nuclear Information System (INIS)

    Desgranges, C.; Terlain, A.; Bertrand, N.; Gauvain, D.

    2004-01-01

    Low-alloyed steels or carbon steels are considered candidate materials for the fabrication of some nuclear waste package containers for long-term interim storage. The containers are required to remain retrievable for centuries. One factor limiting their performance on this time scale is corrosion. The estimation of the metal thickness lost by dry oxidation over such long periods requires the construction of reliable models from short-time experimental data. Two complementary approaches for modelling dry oxidation have been considered. First, basic models following simple analytical laws from classical oxidation theories have been fitted to the apparent activation energy of oxidation deduced from experimental data. Their extrapolation to long oxidation periods confirms that the expected damage due to dry oxidation could be small. Second, a numerical model able to take into consideration several mechanisms controlling oxide scale growth is under development. Several preliminary results are presented. (authors)
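    Extrapolating short-time data with a basic analytical law can be sketched with a parabolic rate law and Arrhenius temperature dependence. The rate constants below are invented for illustration, not the experimental values of the study:

```python
import math

def metal_loss_um(T_kelvin, years, k0=1e6, Ea=120e3):
    """Parabolic oxidation x = sqrt(kp * t) with kp = k0 * exp(-Ea / (R*T)),
    kp in um^2/h -- purely illustrative constants, not measured data."""
    R = 8.314                      # J/(mol K)
    kp = k0 * math.exp(-Ea / (R * T_kelvin))
    t_h = years * 365 * 24         # years -> hours
    return math.sqrt(kp * t_h)     # oxide-limited metal loss, um

for years in (1, 100, 300):
    print(years, metal_loss_um(423.15, years))   # a 150 C storage scenario
```

    The square-root time dependence is why a tripling of the storage period increases the predicted loss only by a factor of sqrt(3), supporting the "small expected damage" conclusion if the parabolic regime holds.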

  5. A Multi-Scale Energy Food Systems Modeling Framework For Climate Adaptation

    Science.gov (United States)

    Siddiqui, S.; Bakker, C.; Zaitchik, B. F.; Hobbs, B. F.; Broaddus, E.; Neff, R.; Haskett, J.; Parker, C.

    2016-12-01

    Our goal is to understand coupled system dynamics across scales in a manner that allows us to quantify the sensitivity of critical human outcomes (nutritional satisfaction, household economic well-being) to development strategies and to climate or market induced shocks in sub-Saharan Africa. We adopt both bottom-up and top-down multi-scale modeling approaches focusing our efforts on food, energy, water (FEW) dynamics to define, parameterize, and evaluate modeled processes nationally as well as across climate zones and communities. Our framework comprises three complementary modeling techniques spanning local, sub-national and national scales to capture interdependencies between sectors, across time scales, and on multiple levels of geographic aggregation. At the center is a multi-player micro-economic (MME) partial equilibrium model for the production, consumption, storage, and transportation of food, energy, and fuels, which is the focus of this presentation. We show why such models can be very useful for linking and integrating across time and spatial scales, as well as a wide variety of models including an agent-based model applied to rural villages and larger population centers, an optimization-based electricity infrastructure model at a regional scale, and a computable general equilibrium model, which is applied to understand FEW resources and economic patterns at national scale. The MME is based on aggregating individual optimization problems for relevant players in an energy, electricity, or food market and captures important food supply chain components of trade and food distribution accounting for infrastructure and geography. Second, our model considers food access and utilization by modeling food waste and disaggregating consumption by income and age. Third, the model is set up to evaluate the effects of seasonality and system shocks on supply, demand, infrastructure, and transportation in both energy and food.
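    The market-clearing core of a partial-equilibrium model like the MME can be sketched for a single commodity. The linear demand and supply curves below are toy assumptions, not calibrated to any of the study's markets:

```python
def clear_market(demand, supply, lo=0.0, hi=1000.0):
    """Find the price where excess demand vanishes by bisection, assuming
    demand is decreasing and supply increasing in price on [lo, hi]."""
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if demand(mid) - supply(mid) > 0:
            lo = mid    # excess demand -> price must rise
        else:
            hi = mid    # excess supply -> price must fall
    return 0.5 * (lo + hi)

# Toy single-commodity food market (illustrative functional forms)
demand = lambda p: max(0.0, 100.0 - 2.0 * p)
supply = lambda p: 10.0 + 1.0 * p
p_star = clear_market(demand, supply)
print(round(p_star, 2), round(demand(p_star), 2))
```

    A full equilibrium model stacks many such conditions (one per commodity, region and period) and solves them jointly, but each reduces to this balancing of supply against demand.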

  6. Ecosystem function in complex mountain terrain: Combining models and long-term observations to advance process-based understanding

    Science.gov (United States)

    Wieder, William R.; Knowles, John F.; Blanken, Peter D.; Swenson, Sean C.; Suding, Katharine N.

    2017-04-01

    Abiotic factors structure plant community composition and ecosystem function across many different spatial scales. Often, such variation is considered at regional or global scales, but here we ask whether ecosystem-scale simulations can be used to better understand landscape-level variation that might be particularly important in complex terrain, such as high-elevation mountains. We performed ecosystem-scale simulations by using the Community Land Model (CLM) version 4.5 to better understand how the increased length of growing seasons may impact carbon, water, and energy fluxes in an alpine tundra landscape. The model was forced with meteorological data and validated with observations from the Niwot Ridge Long Term Ecological Research Program site. Our results demonstrate that CLM is capable of reproducing the observed carbon, water, and energy fluxes for discrete vegetation patches across this heterogeneous ecosystem. We subsequently accelerated snowmelt and increased spring and summer air temperatures in order to simulate potential effects of climate change in this region. We found that vegetation communities that were characterized by different snow accumulation dynamics showed divergent biogeochemical responses to a longer growing season. Contrary to expectations, wet meadow ecosystems showed the strongest decreases in plant productivity under extended summer scenarios because of disruptions in hydrologic connectivity. These findings illustrate how Earth system models such as CLM can be used to generate testable hypotheses about the shifting nature of energy, water, and nutrient limitations across space and through time in heterogeneous landscapes; these hypotheses may ultimately guide further experimental work and model development.

  7. Long-range forecasting of intermittent streamflow

    OpenAIRE

    F. F. van Ogtrop; R. W. Vervoort; G. Z. Heller; D. M. Stasinopoulos; R. A. Rigby

    2011-01-01

    Long-range forecasting of intermittent streamflow in semi-arid Australia poses a number of major challenges. One of the challenges relates to modelling zero, skewed, non-stationary, and non-linear data. To address this, a statistical model to forecast streamflow up to 12 months ahead is applied to five semi-arid catchments in South Western Queensland. The model uses logistic regression through Generalised Additive Models for Location, Scale and Shape (GAMLSS) to determine th...
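    GAMLSS fits far richer distributional models, but the logistic-regression component for zero versus non-zero flow can be sketched in a few lines. The predictor (an SOI-like climate index) and the data below are synthetic:

```python
import math
import random

random.seed(0)

def fit_logistic(x, y, lr=0.1, epochs=2000):
    """Logistic regression P(flow > 0 | x) = sigmoid(b0 + b1*x),
    fitted by plain gradient ascent on the average log-likelihood."""
    b0 = b1 = 0.0
    n = len(x)
    for _ in range(epochs):
        g0 = g1 = 0.0
        for xi, yi in zip(x, y):
            p = 1.0 / (1.0 + math.exp(-(b0 + b1 * xi)))
            g0 += (yi - p) / n          # gradient w.r.t. intercept
            g1 += (yi - p) * xi / n     # gradient w.r.t. slope
        b0 += lr * g0
        b1 += lr * g1
    return b0, b1

# Synthetic climate index and zero/non-zero flow occurrence (true coefs 0.5, 2)
x = [random.gauss(0, 1) for _ in range(400)]
y = [1 if random.random() < 1 / (1 + math.exp(-(0.5 + 2 * xi))) else 0 for xi in x]
b0, b1 = fit_logistic(x, y)
print(round(b0, 1), round(b1, 1))
```

    In the two-stage approach, this occurrence model is then paired with a conditional model for the flow magnitude given that flow occurs.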

  8. Long-range forecasting of intermittent streamflow

    OpenAIRE

    F. F. van Ogtrop; R. W. Vervoort; G. Z. Heller; D. M. Stasinopoulos; R. A. Rigby

    2011-01-01

    Long-range forecasting of intermittent streamflow in semi-arid Australia poses a number of major challenges. One of the challenges relates to modelling zero, skewed, non-stationary, and non-linear data. To address this, a probabilistic statistical model to forecast streamflow 12 months ahead is applied to five semi-arid catchments in South Western Queensland. The model uses logistic regression through Generalised Additive Models for Location, Scale and Shape (GAMLSS) to determine the probabil...

  9. Brain Metabolite Diffusion from Ultra-Short to Ultra-Long Time Scales: What Do We Learn, Where Should We Go?

    Directory of Open Access Journals (Sweden)

    Julien Valette

    2018-01-01

    In vivo diffusion-weighted MR spectroscopy (DW-MRS) allows measuring diffusion properties of brain metabolites. Unlike water, most metabolites are confined within cells. Hence, their diffusion is expected to purely reflect intracellular properties, opening unique possibilities to use metabolites as specific probes to explore cellular organization and structure. However, interpretation and modeling of DW-MRS, and more generally of intracellular diffusion, remain difficult. In this perspective paper, we will focus on the study of the time-dependency of the brain metabolite apparent diffusion coefficient (ADC). We will see how measuring ADC over several orders of magnitude of diffusion times, from less than 1 ms to more than 1 s, allows clarifying our understanding of brain metabolite diffusion, by firmly establishing that metabolites are neither massively transported by active mechanisms nor massively confined in subcellular compartments or cell bodies. Metabolites appear to be instead diffusing in long fibers typical of neurons and glial cells such as astrocytes. Furthermore, we will evoke modeling of ADC time-dependency to evaluate the effect of, and possibly quantify, some structural parameters at various spatial scales, departing from a simple model of hollow cylinders and introducing additional complexity, either short-ranged (such as dendritic spines) or long-ranged (such as cellular fiber ramification). Finally, we will discuss the experimental feasibility and expected benefits of extending the range of diffusion times toward even shorter and longer values.
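    One classical handle on ADC time-dependence at short diffusion times is the Mitra surface-to-volume expansion (a standard result, not this paper's model). The diffusivity and fiber radius below are illustrative, and the expansion is only valid while the correction term stays small:

```python
import math

def mitra_adc(D0, t, S_over_V):
    """Leading-order short-time Mitra expansion (3-D):
    D(t)/D0 = 1 - (4 / (9*sqrt(pi))) * (S/V) * sqrt(D0*t).
    Valid only for sqrt(D0*t) small compared with the pore scale V/S."""
    return D0 * (1.0 - 4.0 / (9.0 * math.sqrt(math.pi)) * S_over_V * math.sqrt(D0 * t))

# Metabolite-like free diffusivity (um^2/ms) inside a fiber of radius 1 um
D0, radius = 0.5, 1.0
S_over_V = 2.0 / radius          # surface-to-volume ratio of a long cylinder
for t in (0.05, 0.2, 0.8):       # diffusion times in ms
    print(t, round(mitra_adc(D0, t, S_over_V), 3))
```

    The early decrease of ADC with diffusion time encodes the surface-to-volume ratio, which is one way short-time measurements constrain fiber geometry.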

  10. COMBINING LONG MEMORY AND NONLINEAR MODEL OUTPUTS FOR INFLATION FORECAST

    OpenAIRE

    Heri Kuswanto; Irhamah Alimuhajin; Laylia Afidah

    2014-01-01

    Long memory and nonlinearity have been shown to be easily mistaken for one another. In other words, nonlinearity is a strong candidate for spurious long memory, since it can introduce an apparent degree of fractional integration that lies in the region of long memory, whereas a nonlinear process in fact belongs to short memory with zero integration order. The idea of forecasting is to obtain the future condition with minimum error. Some researchers argue that no matter what the model is, the important thi...
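    A common way to combine the outputs of competing models (here, a long-memory and a nonlinear forecast) is to weight them inversely to each model's historical mean squared error. This is a standard combination heuristic offered as context, not the paper's specific scheme.

```python
def combine_forecasts(forecasts, mse):
    """Combine competing point forecasts with weights proportional to
    1/MSE, normalised to sum to one. 'forecasts' and 'mse' are
    parallel lists, one entry per candidate model."""
    w = [1.0 / m for m in mse]
    s = sum(w)
    w = [x / s for x in w]
    return sum(wi * fi for wi, fi in zip(w, forecasts))
```

For equal historical errors the combination reduces to a simple average; a model with three times the MSE receives a quarter of the weight.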

  11. Past Holocene detritism quantification and modeling from lacustrine archives in order to deconvoluate human-climate interactions on natural ecosystem over long time-scale

    Science.gov (United States)

    Simonneau, Anaëlle; Chapron, Emmanuel; Di Giovanni, Christian; Galop, Didier; Darboux, Frédéric

    2014-05-01

    Water budget is one of the main challenges to paleoclimate researchers in relation to present-day global warming and its consequences for human societies. Associated soil degradation and erosion are thereby becoming a major concern in many parts of the world, and more particularly in the Alps. Moreover, humans have acted as geomorphological agents for a few thousand years, and it is now recognized that their impact on natural ecosystems has profoundly modified soil properties as well as aquatic ecosystem dynamics over long time periods. The quantification of such influence over long time-scales is therefore essential to establish new policies to reduce mechanical soil erosion, which is one of the dominant processes in Europe, and to anticipate the potential consequences of future climate change on hydric erosion. The mechanical erosion of continental surfaces results from climatic forcing, but can be amplified by the anthropogenic one. We therefore suggest that quantifying and modelling soil erosion processes within comparable Holocene lacustrine archives allows us to estimate which past human activities have had an impact on soil fluxes over the last 10,000 years, and to date when. Based on the present-day geomorphology of the surrounding watershed and the evolution of the vegetation cover during the Holocene, we develop an interdisciplinary approach combining quantitative organic petrography (i.e. optical characterization and quantification of soil particles within lake sediments) with high-resolution seismic profiling, age-depth models on lacustrine sediment cores and soil erosional susceptibility modeling, in order to estimate the annual volume of soil eroded over the last 10,000 years, and ultimately to quantify the volume of human-induced soil erosion during the Holocene period. 
This method is applied to close but contrasted mountainous lacustrine environments from the western French Alps: lakes Blanc Huez and Paladru, sensitive to same climatic influences but where past

  12. Evaluating the Long-term Water Cycle Trends at a Global-scale using Satellite and Assimilation Datasets

    Science.gov (United States)

    Kim, H.; Lakshmi, V.

    2017-12-01

    Global-scale soil moisture and rainfall products retrieved from remotely sensed and assimilation datasets provide an effective way to monitor near-surface soil moisture content and precipitation with sub-daily temporal resolution. In the present study, we employed the concept of the stored precipitation fraction Fp(f) in order to examine the long-term water cycle trends at a global scale. The Fp(f) trends were analysed with respect to various geophysical aspects such as climate zone, land use classification, amount of vegetation, and soil properties. Furthermore, we compared global-scale Fp(f) using different microwave-based satellite soil moisture datasets. Fp(f) is calculated using surface soil moisture datasets from the Soil Moisture Active Passive (SMAP) mission, Soil Moisture and Ocean Salinity, the Advanced Scatterometer, the Advanced Microwave Scanning Radiometer 2, and precipitation information from the Global Precipitation Measurement Mission and the Global Land Data Assimilation System. The different microwave-based soil moisture datasets showed discordant results, particularly over arid and highly vegetated regions. The results of this study provide new insights into the long-term water cycle trends over different land surface areas, thereby highlighting the advantages of the recently available GPM and SMAP datasets for use in various hydrometeorological applications.
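    One schematic reading of a "stored precipitation fraction" is the share of rainfall that shows up as a positive change in surface soil moisture over each observation interval. The sketch below uses that reading purely for illustration; the study's exact operational definition of Fp(f) may differ.

```python
def stored_precipitation_fraction(soil_moisture_mm, precipitation_mm):
    """Fraction of precipitation 'stored' as positive soil-moisture
    increments. Both series are in mm of water and aligned in time;
    intervals with no rain are ignored. Returns NaN if no rain fell."""
    stored = 0.0
    total = 0.0
    for i in range(1, len(soil_moisture_mm)):
        p = precipitation_mm[i]
        if p > 0.0:
            total += p
            stored += max(0.0, soil_moisture_mm[i] - soil_moisture_mm[i - 1])
    return stored / total if total > 0.0 else float("nan")
```

Running this on collocated satellite soil moisture and precipitation series, interval by interval, would yield the kind of per-pixel fraction whose long-term trend the record examines.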

  13. Reference Priors for the General Location-Scale Model

    NARCIS (Netherlands)

    Fernández, C.; Steel, M.F.J.

    1997-01-01

    The reference prior algorithm (Berger and Bernardo 1992) is applied to multivariate location-scale models with any regular sampling density, where we establish the irrelevance of the usual assumption of Normal sampling if our interest is in either the location or the scale. This result immediately

  14. Model Scaling of Hydrokinetic Ocean Renewable Energy Systems

    Science.gov (United States)

    von Ellenrieder, Karl; Valentine, William

    2013-11-01

    Numerical simulations are performed to validate a non-dimensional dynamic scaling procedure that can be applied to subsurface and deeply moored systems, such as hydrokinetic ocean renewable energy devices. The prototype systems are moored in water 400 m deep and include: subsurface spherical buoys moored in a shear current and excited by waves; an ocean current turbine excited by waves; and a deeply submerged spherical buoy in a shear current excited by strong current fluctuations. The corresponding model systems, which are scaled based on relative water depths of 10 m and 40 m, are also studied. For each case examined, the response of the model system closely matches the scaled response of the corresponding full-sized prototype system. The results suggest that laboratory-scale testing of complete ocean current renewable energy systems moored in a current is possible. This work was supported by the U.S. Southeast National Marine Renewable Energy Center (SNMREC).
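    The record mentions models scaled to 10 m and 40 m water depths from a 400 m prototype, i.e. geometric ratios of 1/40 and 1/10. For free-surface, wave-excited systems, the standard non-dimensional scaling is Froude similitude; the sketch below lists its textbook ratios as context, without claiming it is the exact procedure validated in the paper.

```python
def froude_scale_factors(length_ratio):
    """Froude-similitude ratios (model/prototype) for a geometric
    scale length_ratio = L_model / L_prototype: time and velocity
    scale with the square root of length, force with its cube."""
    lam = length_ratio
    return {
        "length": lam,
        "time": lam ** 0.5,
        "velocity": lam ** 0.5,
        "force": lam ** 3,
    }
```

For the 10 m model depth, `froude_scale_factors(10.0 / 400.0)` gives a time ratio of 1/sqrt(40), so the measured model response periods must be stretched by sqrt(40) before comparison with the full-sized prototype.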

  15. Scale-free, axisymmetric galaxy models with little angular momentum

    International Nuclear Information System (INIS)

    Richstone, D.O.

    1980-01-01

    Two scale-free models of elliptical galaxies are constructed using a self-consistent field approach developed by Schwarzschild. Both models have concentric, oblate spheroidal, equipotential surfaces, with a logarithmic dependence of the potential on central distance. The axial ratio of the equipotential surfaces is 4:3, and the corresponding ratio for the density level surfaces is 2.5:1 (corresponding to an E6 galaxy). Each model satisfies the Poisson and steady-state Boltzmann equations for time scales of order 100 galactic years.

  16. A Network Contention Model for the Extreme-scale Simulator

    Energy Technology Data Exchange (ETDEWEB)

    Engelmann, Christian [ORNL; Naughton III, Thomas J [ORNL

    2015-01-01

    The Extreme-scale Simulator (xSim) is a performance investigation toolkit for high-performance computing (HPC) hardware/software co-design. It permits running an HPC application with millions of concurrent execution threads, while observing its performance in a simulated extreme-scale system. This paper details a newly developed network modeling feature for xSim that eliminates the shortcomings of the existing network modeling capabilities. The approach takes a different path to implementing network contention and bandwidth capacity modeling, using a less synchronous but sufficiently accurate model design. With the new network modeling feature, xSim is able to simulate on-chip and on-node networks with reasonable accuracy and overhead.

  17. Macro-scale turbulence modelling for flows in porous media

    International Nuclear Information System (INIS)

    Pinson, F.

    2006-03-01

    This work deals with the macroscopic modeling of turbulence in porous media. It concerns heat exchangers, nuclear reactors as well as urban flows, etc. The objective of this study is to describe, in a homogenized way by means of a spatial average operator, turbulent flows in a solid matrix. In addition to this first operator, the use of a statistical average operator makes it possible to handle the pseudo-random character of turbulence. The successive application of both operators allows us to derive the balance equations of the kinds of flows under study. Two major issues are then highlighted: the modeling of dispersion induced by the solid matrix, and turbulence modeling at a macroscopic scale (Reynolds tensor and turbulent dispersion). To this aim, we draw on the local modeling of turbulence and more precisely on the k - ε RANS models. The methodology for studying dispersion, derived from the volume averaging theory, is extended to turbulent flows. Its application includes the simulation, at a microscopic scale, of turbulent flows within a representative elementary volume of the porous medium. Applied to channel flows, this analysis shows that even within the turbulent regime, dispersion remains one of the dominating phenomena within the macro-scale modeling framework. A two-scale analysis of the flow allows us to understand the dominating role of the drag force in the kinetic energy transfers between scales. Transfers between the mean part and the turbulent part of the flow are formally derived. This description significantly improves our understanding of the issue of macroscopic modeling of turbulence and leads us to define the sub-filter production and the wake dissipation. A macroscopic k - ε - ε_w model is derived. It is based on three balance equations, for the turbulent kinetic energy, the viscous dissipation and the wake dissipation. Furthermore, a dynamical predictor for the friction coefficient is proposed. 
This model is then successfully applied to the study of

  18. Application and comparison of the SCS-CN-based rainfall-runoff model in meso-scale watershed and field scale

    Science.gov (United States)

    Luo, L.; Wang, Z.

    2010-12-01

    The Soil Conservation Service Curve Number (SCS-CN) based hydrologic model has been widely used for agricultural watersheds in recent years. However, relative errors arise when it is applied across differing geographical and climatological conditions. This paper introduces a more adaptable and transferable model based on the modified SCS-CN method, specialized for two research regions of different scale. Considering the typical conditions of the Zhanghe irrigation district in the southern part of China, such as its hydrometeorological and surface conditions, SCS-CN based models were established. The Xinbu-Qiao River basin (area = 1207 km2) and the Tuanlin runoff test area (area = 2.87 km2) were taken as the basin-scale and field-scale study areas in the Zhanghe irrigation district. Applications were extended from an ordinary meso-scale watershed to field scale in the paddy-field-dominated irrigated area of Zhanghe. Based on measured data on land use, soil classification, hydrology and meteorology, quantitative evaluation of and modifications to two coefficients, i.e. the preceding loss and the runoff curve number, were proposed with corresponding models, along with a table of CN values for different land uses and an AMC (antecedent moisture condition) grading standard fitting the research cases. The simulation precision was increased by putting forward a 12 h unit hydrograph for the field area, which was then simplified. Comparison between the scales shows that it is more effective to use the SCS-CN model at field scale after the parameters are calibrated at basin scale. These results can help reveal the rainfall-runoff behaviour in the district. Differences in the established SCS-CN model's parameters between the two study regions are also considered. Varied forms of land use and the impacts of human activities were important factors affecting the rainfall-runoff relations in the Zhanghe irrigation district.
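    The core SCS-CN relation behind this record is standard: potential retention S is derived from the curve number, an initial abstraction Ia is subtracted from storm rainfall, and the remainder partitions into runoff. A minimal metric-unit sketch (the paper's modified coefficients are not reproduced here):

```python
def scs_cn_runoff(p_mm, cn, lam=0.2):
    """Direct runoff depth Q (mm) from storm rainfall P (mm) via the
    standard SCS-CN relation: S = 25400/CN - 254 (mm), Ia = lam*S,
    and Q = (P - Ia)^2 / (P - Ia + S) when P > Ia, else 0."""
    s = 25400.0 / cn - 254.0
    ia = lam * s
    if p_mm <= ia:
        return 0.0
    return (p_mm - ia) ** 2 / (p_mm - ia + s)
```

The modifications described in the record amount to recalibrating CN (via the land-use table and AMC grading) and the initial-abstraction coefficient `lam` for the local conditions.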

  19. Modeling Forest Biomass and Growth: Coupling Long-Term Inventory and Lidar Data

    Science.gov (United States)

    Babcock, Chad; Finley, Andrew O.; Cook, Bruce D.; Weiskittel, Andrew; Woodall, Christopher W.

    2016-01-01

    Combining spatially-explicit long-term forest inventory and remotely sensed information from Light Detection and Ranging (LiDAR) datasets through statistical models can be a powerful tool for predicting and mapping above-ground biomass (AGB) at a range of geographic scales. We present and examine a novel modeling approach to improve prediction of AGB and estimate AGB growth using LiDAR data. The proposed model accommodates temporal misalignment between field measurements and remotely sensed data, a problem pervasive in such settings, by including multiple time-indexed measurements at plot locations to estimate AGB growth. We pursue a Bayesian modeling framework that allows for appropriately complex parameter associations and uncertainty propagation through to prediction. Specifically, we identify a space-varying coefficients model to predict and map AGB and its associated growth simultaneously. The proposed model is assessed using LiDAR data acquired from NASA Goddard's LiDAR, Hyper-spectral & Thermal imager and field inventory data from the Penobscot Experimental Forest in Bradley, Maine. The proposed model outperformed the time-invariant counterpart models in predictive performance as indicated by a substantial reduction in root mean squared error. The proposed model adequately accounts for temporal misalignment through the estimation of forest AGB growth and accommodates residual spatial dependence. Results from this analysis suggest that future AGB models informed using remotely sensed data, such as LiDAR, may be improved by adapting traditional modeling frameworks to account for temporal misalignment and spatial dependence using random effects.

  20. Multi-scale habitat selection modeling: A review and outlook

    Science.gov (United States)

    Kevin McGarigal; Ho Yi Wan; Kathy A. Zeller; Brad C. Timm; Samuel A. Cushman

    2016-01-01

    Scale is the lens that focuses ecological relationships. Organisms select habitat at multiple hierarchical levels and at different spatial and/or temporal scales within each level. Failure to properly address scale dependence can result in incorrect inferences in multi-scale habitat selection modeling studies.

  1. Long-range spatial dependence in fractured rock. Empirical evidence and implications for tracer transport

    International Nuclear Information System (INIS)

    Painter, S.

    1999-02-01

    Nonclassical stochastic continuum models incorporating long-range spatial dependence are evaluated as models for fractured crystalline rock. Open fractures and fracture zones are not modeled explicitly in this approach. The fracture zones and intact rock are modeled as a single stochastic continuum. The large contrasts between the fracture zones and unfractured rock are accounted for by making use of random field models specifically designed for highly variable systems. Hydraulic conductivity data derived from packer tests in the vicinity of the Aespoe Hard Rock Laboratory form the basis for the evaluation. The Aespoe log K data were found to be consistent with a fractal scaling model based on bounded fractional Levy motion (bfLm), a model that has been used previously to model highly variable sedimentary formations. However, the data are not sufficient to choose between this model, a fractional Brownian motion model for the normal-score transform of log K, and a conventional geostatistical model. Stochastic simulations conditioned by the Aespoe data coupled with flow and tracer transport calculations demonstrate that the models with long-range dependence predict earlier arrival times for contaminants. This demonstrates the need to evaluate this class of models when assessing the performance of proposed waste repositories. The relationship between intermediate-scale and large-scale transport properties in media with long-range dependence is also addressed. A new Monte Carlo method for stochastic upscaling of intermediate-scale field data is proposed
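    The record compares a bounded fractional Levy model against a fractional Brownian motion (fBm) model for the transformed log K field. The fBm ingredient is standard: its covariance is fixed by the Hurst exponent H, and conditional or unconditional realizations can be drawn from the Cholesky factor of that covariance. A self-contained sketch (small n, pure Python; real simulations would use a fast library routine):

```python
def fbm_covariance(n, hurst, dt=1.0):
    """Covariance matrix of fBm sampled at t_i = (i+1)*dt:
    C[i][j] = 0.5 * (t_i^2H + t_j^2H - |t_i - t_j|^2H)."""
    t = [(i + 1) * dt for i in range(n)]
    h2 = 2.0 * hurst
    return [[0.5 * (t[i] ** h2 + t[j] ** h2 - abs(t[i] - t[j]) ** h2)
             for j in range(n)] for i in range(n)]

def cholesky(a):
    """Lower-triangular Cholesky factor L with L L^T = a (no pivoting).
    Multiplying L by a vector of iid standard normals yields one
    correlated fBm sample path."""
    n = len(a)
    l = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(i + 1):
            s = sum(l[i][k] * l[j][k] for k in range(j))
            if i == j:
                l[i][j] = (a[i][i] - s) ** 0.5
            else:
                l[i][j] = (a[i][j] - s) / l[j][j]
    return l
```

For H = 0.5 the covariance collapses to min(t_i, t_j), ordinary Brownian motion; H > 0.5 produces the long-range dependence that, per the record, leads to earlier contaminant arrival times.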

  2. Test results of full-scale high temperature superconductors cable models destined for a 36 kV, 2 kA(rms) utility demonstration

    DEFF Research Database (Denmark)

    Daumling, M.; Rasmussen, C.N.; Hansen, F.

    2001-01-01

    Power cable systems using high temperature superconductors (HTS) are nearing technical feasibility. This presentation summarises the advancements and status of a project aimed at demonstrating a 36 kV, 2 kA(rms) AC cable system by installing a 30 m long full-scale functional model in a power...

  3. Dry corrosion prediction of radioactive waste containers in long term interim storage: mechanisms of low temperature oxidation of pure iron and numerical simulation of an oxide scale growth

    International Nuclear Information System (INIS)

    Bertrand, N.

    2006-10-01

    In the framework of research on the long-term behaviour of radioactive waste containers, this work consists, on the one hand, of a study of the low-temperature oxidation of iron and, on the other hand, of the development of a numerical model of oxide scale growth. Isothermal oxidation experiments are performed on pure iron at 300 and 400 C in dry and humid air at atmospheric pressure. The oxide scales formed in these conditions are characterized. They are composed of a duplex magnetite scale under a thin hematite scale. The inner layer of the duplex scale is thinner than the outer one. Both are composed of columnar grains, which are smaller in the inner part. The outer hematite layer is made of very small equiaxed grains. Marker and tracer experiments show that part of the scale grows at the metal/oxide interface via short-circuit diffusion of oxygen. A model for iron oxide scale growth at low temperature is then deduced. Besides this experimental study, the numerical model EKINOX (Estimation Kinetics Oxidation) is developed. It allows simulating the growth of an oxide scale controlled by mixed mechanisms, such as the diffusion of anionic and cationic vacancies through the scale, as well as metal transfer at the metal/oxide interface. It is based on the calculation of concentration profiles of chemical species and point defects in the oxide scale and in the substrate. This numerical model does not use the classical quasi-steady-state approximation and calculates the fate of cationic vacancies at the metal/oxide interface. Indeed, these point defects can either be eliminated by interface motion or injected into the substrate, where they can be annihilated at sinks such as climbing dislocations. Hence, the influence of substrate cold-work can be investigated. The EKINOX model is validated under the conditions of Wagner's theory and is compared with experimental results through its application to the high-temperature oxidation of nickel. (author)
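    EKINOX, as described above, resolves full vacancy concentration profiles; in the Wagner limit used for its validation, diffusion-controlled growth reduces to the classical parabolic law. A minimal sketch of that limiting behaviour (the constants are generic, not values from the thesis):

```python
def oxide_thickness(kp, t):
    """Parabolic (Wagner-type) growth of a diffusion-controlled oxide
    scale: x(t) = sqrt(2 * kp * t), with kp the parabolic rate
    constant (thickness^2 / time units)."""
    return (2.0 * kp * t) ** 0.5

def parabolic_rate_constant(x, t):
    """Recover kp from one measured (thickness, time) pair."""
    return x * x / (2.0 * t)
```

Doubling the oxidation time therefore thickens the scale by only sqrt(2), the slowdown characteristic of growth limited by transport across the existing scale.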

  4. The use of scale models in impact testing

    International Nuclear Information System (INIS)

    Donelan, P.J.; Dowling, A.R.

    1985-01-01

    Theoretical analysis, component testing and model flask testing are employed to investigate the validity of scale models for demonstrating the behaviour of Magnox flasks under impact conditions. Model testing is shown to be a powerful and convenient tool provided adequate care is taken with detail design and manufacture of models and with experimental control. (author)

  5. Estimating and Forecasting Generalized Fractional Long Memory Stochastic Volatility Models

    Directory of Open Access Journals (Sweden)

    Shelton Peiris

    2017-12-01

    Full Text Available This paper considers a flexible class of time series models generated by Gegenbauer polynomials incorporating long memory in stochastic volatility (SV) components in order to develop the General Long Memory SV (GLMSV) model. We examine the corresponding statistical properties of this model, discuss the spectral likelihood estimation and investigate the finite sample properties via Monte Carlo experiments. We provide empirical evidence by applying the GLMSV model to three exchange rate return series and conjecture that the results of out-of-sample forecasts adequately confirm the use of the GLMSV model in certain financial applications.
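    The Gegenbauer polynomials behind such models supply the coefficients of the expansion (1 - 2uB + B^2)^(-d) = sum_n C_n^(d)(u) B^n, which generates the long-memory filter. The standard three-term recursion computes them directly (a generic property of these models, not code from the paper):

```python
def gegenbauer_coeffs(n_max, d, u):
    """Coefficients C_n^(d)(u), n = 0..n_max, of the Gegenbauer
    expansion of (1 - 2uB + B^2)^(-d), via the recursion
    n*C_n = 2u*(n + d - 1)*C_{n-1} - (n + 2d - 2)*C_{n-2},
    with C_0 = 1 and C_1 = 2*d*u."""
    c = [1.0, 2.0 * d * u]
    for n in range(2, n_max + 1):
        c.append((2.0 * u * (n + d - 1.0) * c[n - 1]
                  - (n + 2.0 * d - 2.0) * c[n - 2]) / n)
    return c[:n_max + 1]
```

With u = cos(nu), the resulting weights produce the seasonal/cyclical long-memory spectral peak at frequency nu that distinguishes Gegenbauer models from ordinary fractional integration.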

  6. Large-scale Modeling of the Greenland Ice Sheet on Long Timescales

    DEFF Research Database (Denmark)

    Solgaard, Anne Munck

    is investigated as well as its early history. The studies are performed using an ice-sheet model in combination with relevant forcing from observed and modeled climate. Changes in ice-sheet geometry influence atmospheric flow (and vice versa), thereby changing the forcing patterns. Changes in the overall climate...... and climate model is included shows, however, that a Föhn effect is activated, thereby increasing temperatures inland and inhibiting further ice-sheet expansion into the interior. This indicates that colder than present temperatures are needed in order for the ice sheet to regrow to the current geometry....... According to this hypothesis, two stages of uplift since the Late Miocene lead to the present-day topography. The results of the ice-sheet simulations show geometries in line with geologic observations through the period, and it is found that the uplift events enhance the effect of the climatic deterioration...

  7. A model-based framework for incremental scale-up of wastewater treatment processes

    DEFF Research Database (Denmark)

    Mauricio Iglesias, Miguel; Sin, Gürkan

    Scale-up is traditionally done following specific ratios or rules of thumb, which do not lead to optimal results. We present a generic framework to assist in the scale-up of wastewater treatment processes based on multiscale modelling, multiobjective optimisation and a validation of the model at the new...... large scale. The framework is illustrated by the scale-up of a complete autotrophic nitrogen removal process. The model-based multiobjective scale-up offers a promising improvement compared to rule-of-thumb-based empirical scale-up rules...

  8. Long-term interactions of full-scale cemented waste simulates with salt brines

    Energy Technology Data Exchange (ETDEWEB)

    Kienzler, B.; Borkel, C.; Metz, V.; Schlieker, M.

    2016-07-01

    Since 1967 radioactive wastes have been disposed of in the Asse II salt mine in Northern Germany. A significant part of these wastes originated from the pilot reprocessing plant WAK in Karlsruhe and consisted of cemented NaNO{sub 3} solutions bearing fission products, actinides, as well as process chemicals. With respect to the long-term behavior of these wastes, the licensing authorities requested leaching experiments with full scale samples in relevant salt solutions which were performed since 1979. The experiments aimed at demonstrating the transferability of results obtained with laboratory samples to real waste forms and at the investigation of the effects of the industrial cementation process on the properties of the waste forms. This research program lasted until 2013. The corroding salt solutions were sampled several times and after termination of the experiments, the solid materials were analyzed by various methods. The results presented in this report cover the evolution of the solutions and the chemical and mineralogical characterization of the solids including radionuclides and waste components, and the paragenesis of solid phases (corrosion products). The outcome is compared to the results of model calculations. For safety analysis, conclusions are drawn on radionuclide retention, evolution of the geochemical environment, evolution of the density of solutions, and effects of temperature and porosity of the cement waste simulates on cesium mobilization.

  9. Long-term interactions of full-scale cemented waste simulates with salt brines

    International Nuclear Information System (INIS)

    Kienzler, B.; Borkel, C.; Metz, V.; Schlieker, M.

    2016-01-01

    Since 1967 radioactive wastes have been disposed of in the Asse II salt mine in Northern Germany. A significant part of these wastes originated from the pilot reprocessing plant WAK in Karlsruhe and consisted of cemented NaNO 3 solutions bearing fission products, actinides, as well as process chemicals. With respect to the long-term behavior of these wastes, the licensing authorities requested leaching experiments with full scale samples in relevant salt solutions which were performed since 1979. The experiments aimed at demonstrating the transferability of results obtained with laboratory samples to real waste forms and at the investigation of the effects of the industrial cementation process on the properties of the waste forms. This research program lasted until 2013. The corroding salt solutions were sampled several times and after termination of the experiments, the solid materials were analyzed by various methods. The results presented in this report cover the evolution of the solutions and the chemical and mineralogical characterization of the solids including radionuclides and waste components, and the paragenesis of solid phases (corrosion products). The outcome is compared to the results of model calculations. For safety analysis, conclusions are drawn on radionuclide retention, evolution of the geochemical environment, evolution of the density of solutions, and effects of temperature and porosity of the cement waste simulates on cesium mobilization.

  10. Complex scaling in the cluster model

    International Nuclear Information System (INIS)

    Kruppa, A.T.; Lovas, R.G.; Gyarmati, B.

    1987-01-01

    To find the positions and widths of resonances, a complex scaling of the intercluster relative coordinate is introduced into the resonating-group model. In the generator-coordinate technique used to solve the resonating-group equation, the complex scaling requires only minor changes in the formulae and code. Finding the resonances does not need any preliminary guess or explicit reference to any asymptotic prescription. The procedure is applied to the resonances in the relative motion of two ground-state α clusters in 8Be, but is appropriate for any system consisting of two clusters. (author) 23 refs.; 5 figs
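    The textbook mechanism behind complex scaling, which this record applies to the cluster model, is that the transformation x -> x*exp(i*theta) rotates the continuum spectrum by -2*theta in the complex energy plane, while resonance poles stay fixed and become square-integrable. A minimal numerical statement of the continuum rotation (hbar = m = 1; this is the general property, not the 8Be calculation itself):

```python
import cmath

def rotated_continuum_energy(k, theta):
    """Free-particle kinetic energy E = k^2/2 after complex scaling
    x -> x*exp(i*theta): E(theta) = exp(-2i*theta) * k^2 / 2, so every
    continuum energy acquires the same phase -2*theta."""
    return cmath.exp(-2j * theta) * k * k / 2.0
```

Because the whole continuum swings down by the angle 2*theta while resonance eigenvalues do not move, the resonance positions and widths can be read off as the theta-stable complex eigenvalues, with no preliminary guess, exactly as the abstract notes.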

  11. Ares I Scale Model Acoustic Test Instrumentation for Acoustic and Pressure Measurements

    Science.gov (United States)

    Vargas, Magda B.; Counter, Douglas

    2011-01-01

    Ares I Scale Model Acoustic Test (ASMAT) is a 5% scale model test of the Ares I vehicle, launch pad and support structures, conducted at MSFC to verify acoustic and ignition environments and evaluate water suppression systems. Test design considerations: the 5% measurements must be scaled to full scale, requiring high-frequency measurements, and users had different frequencies of interest. For acoustics, 200 - 2,000 Hz full scale equals 4,000 - 40,000 Hz model scale; for the ignition transient, 0 - 100 Hz full scale equals 0 - 2,000 Hz model scale. Environment exposure included weather (heat, humidity, thunderstorms, rain, cold and snow) and test environments (plume impingement heat and pressure, and water deluge impingement). Several types of sensors were used to measure the environments, and different instrument mounts were used according to the location and exposure to the environment. This presentation addresses the observed effects of the selected sensors and mount design on the acoustic and pressure measurements.
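    The frequency bands quoted in this record follow from simple geometric scaling: for a subscale acoustic model, frequencies scale by the inverse of the geometric ratio, so a 5% model shifts everything up by a factor of 20.

```python
def model_frequency(full_scale_hz, scale=0.05):
    """Map a full-scale frequency of interest to the subscale-model
    band: f_model = f_full / scale (factor 20 for a 5% model)."""
    return full_scale_hz / scale
```

This reproduces the record's numbers: 200 - 2,000 Hz full scale maps to 4,000 - 40,000 Hz at model scale, and the 0 - 100 Hz ignition transient maps to 0 - 2,000 Hz, which is why high-frequency instrumentation was required.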

  12. The regional climate model as a tool for long-term planning of Quebec water resources

    International Nuclear Information System (INIS)

    Frigon, A.

    2008-01-01

    'Full text': In recent years, important progress has been made in downscaling GCM (Global Climate Model) projections to a resolution where hydrological studies become feasible. Climate change simulations performed with RCMs (Regional Climate Models) have reached a level of confidence that allows us to take advantage of this information in the long-term planning of water resources. The RCMs' main advantage consists in their construction on balanced water and energy budgets over both land and atmosphere, and in their inclusion of feedbacks between the surface and the atmosphere. Such models therefore generate sequences of weather events, providing long time series of hydro-climatic variables that are internally consistent and allowing the analysis of hydrologic regimes. At OURANOS, special attention is placed on the hydrological cycle, given its key role in socioeconomic activities. The Canadian Regional Climate Model (CRCM) was developed as a potential tool to provide climate projections at the watershed scale. Various analyses performed over small basins in Quebec provide information on the level of confidence we have in the CRCM for use in hydrological studies. Even though this approach is not free of uncertainty, it was found useful by some water resource managers, and hence this information should be considered. One of the keys to retaining usefulness despite the associated uncertainties is to make use of more than a single regional climate projection. This approach allows for the evaluation of the climate change signal and its associated level of confidence. Such a methodology is already applied by Hydro-Quebec in the long-term planning of its water resources for hydroelectric generation over the Quebec territory. (author)

  13. Drift-Scale Coupled Processes (DST and THC Seepage) Models

    Energy Technology Data Exchange (ETDEWEB)

    E. Sonnenthal; N. Spycher

    2001-02-05

    The purpose of this Analysis/Model Report (AMR) is to document the Near-Field Environment (NFE) and Unsaturated Zone (UZ) models used to evaluate the potential effects of coupled thermal-hydrologic-chemical (THC) processes on unsaturated zone flow and transport. This is in accordance with the ''Technical Work Plan (TWP) for Unsaturated Zone Flow and Transport Process Model Report'', Addendum D, Attachment D-4 (Civilian Radioactive Waste Management System (CRWMS) Management and Operating Contractor (M and O) 2000 [153447]) and ''Technical Work Plan for Nearfield Environment Thermal Analyses and Testing'' (CRWMS M and O 2000 [153309]). These models include the Drift Scale Test (DST) THC Model and several THC seepage models. These models provide the framework to evaluate THC coupled processes at the drift scale, predict flow and transport behavior for specified thermal loading conditions, and predict the chemistry of waters and gases entering potential waste-emplacement drifts. The intended use of this AMR is to provide input for the following: (1) Performance Assessment (PA); (2) Abstraction of Drift-Scale Coupled Processes AMR (ANL-NBS-HS-000029); (3) UZ Flow and Transport Process Model Report (PMR); and (4) Near-Field Environment (NFE) PMR. The work scope for this activity is presented in the TWPs cited above, and summarized as follows: continue development of the repository drift-scale THC seepage model used in support of the TSPA in-drift geochemical model; incorporate heterogeneous fracture property realizations; study sensitivity of results to changes in input data and mineral assemblage; validate the DST model by comparison with field data; perform simulations to predict mineral dissolution and precipitation and their effects on fracture properties and chemistry of water (but not flow rates) that may seep into drifts; submit modeling results to the TDMS and document the models. The model development, input data

  14. Drift-Scale Coupled Processes (DST and THC Seepage) Models

    International Nuclear Information System (INIS)

    Sonnenthal, E.

    2001-01-01

    The purpose of this Analysis/Model Report (AMR) is to document the Near-Field Environment (NFE) and Unsaturated Zone (UZ) models used to evaluate the potential effects of coupled thermal-hydrologic-chemical (THC) processes on unsaturated zone flow and transport. This is in accordance with the ''Technical Work Plan (TWP) for Unsaturated Zone Flow and Transport Process Model Report'', Addendum D, Attachment D-4 (Civilian Radioactive Waste Management System (CRWMS) Management and Operating Contractor (M and O) 2000 [1534471]) and ''Technical Work Plan for Nearfield Environment Thermal Analyses and Testing'' (CRWMS M and O 2000 [153309]). These models include the Drift Scale Test (DST) THC Model and several THC seepage models. These models provide the framework to evaluate THC coupled processes at the drift scale, predict flow and transport behavior for specified thermal loading conditions, and predict the chemistry of waters and gases entering potential waste-emplacement drifts. The intended use of this AMR is to provide input for the following: Performance Assessment (PA); Near-Field Environment (NFE) PMR; Abstraction of Drift-Scale Coupled Processes AMR (ANL-NBS-HS-000029); and UZ Flow and Transport Process Model Report (PMR). The work scope for this activity is presented in the TWPs cited above, and summarized as follows: Continue development of the repository drift-scale THC seepage model used in support of the TSPA in-drift geochemical model; incorporate heterogeneous fracture property realizations; study sensitivity of results to changes in input data and mineral assemblage; validate the DST model by comparison with field data; perform simulations to predict mineral dissolution and precipitation and their effects on fracture properties and chemistry of water (but not flow rates) that may seep into drifts; submit modeling results to the TDMS and document the models. The model development, input data, sensitivity and validation studies described in this AMR are

  15. A rainfall disaggregation scheme for sub-hourly time scales: Coupling a Bartlett-Lewis based model with adjusting procedures

    Science.gov (United States)

    Kossieris, Panagiotis; Makropoulos, Christos; Onof, Christian; Koutsoyiannis, Demetris

    2018-01-01

    Many hydrological applications, such as flood studies, require the use of long rainfall data at fine time scales varying from daily down to 1 min time step. However, in the real world there is limited availability of data at sub-hourly scales. To cope with this issue, stochastic disaggregation techniques are typically employed to produce possible, statistically consistent, rainfall events that aggregate up to the field data collected at coarser scales. A methodology for the stochastic disaggregation of rainfall at fine time scales was recently introduced, combining the Bartlett-Lewis process to generate rainfall events along with adjusting procedures to modify the lower-level variables (i.e., hourly) so as to be consistent with the higher-level one (i.e., daily). In the present paper, we extend the aforementioned scheme, initially designed and tested for the disaggregation of daily rainfall into hourly depths, to any sub-hourly time scale. In addition, we take advantage of recent developments in Poisson-cluster processes, incorporating into the methodology a Bartlett-Lewis model variant that introduces dependence between cell intensity and duration in order to capture the variability of rainfall at sub-hourly time scales. The disaggregation scheme is implemented in an R package, named HyetosMinute, to support disaggregation from daily down to 1-min time scale. The applicability of the methodology was assessed on 5-min rainfall records collected in Bochum, Germany, comparing the performance of the above-mentioned model variant against the original Bartlett-Lewis process (non-random with 5 parameters). The analysis shows that the disaggregation process adequately reproduces the most important statistical characteristics of rainfall at a wide range of time scales, while the introduction of the model with dependent intensity-duration results in a better performance in terms of skewness, rainfall extremes and dry proportions.
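
    The adjusting step mentioned above (making lower-level depths consistent with the higher-level total) can be illustrated with the simplest member of the family, a proportional adjusting procedure. This is a generic sketch, not code from the HyetosMinute package, and the depths are invented:

```python
# A minimal sketch of a proportional adjusting procedure: synthetic
# fine-scale depths (e.g. from a Bartlett-Lewis run) are rescaled so
# that they sum exactly to the observed coarse-scale depth. The
# published scheme also offers other adjusting procedures; this is
# only the simplest case.
def proportional_adjust(fine_depths, coarse_total):
    s = sum(fine_depths)
    if s == 0.0:
        return list(fine_depths)  # nothing to rescale in a dry period
    return [d * coarse_total / s for d in fine_depths]

# e.g. four synthetic 15-min depths adjusted to a 2.0 mm hourly observation
adjusted = proportional_adjust([0.2, 0.0, 0.5, 0.3], 2.0)
print(adjusted)  # rescaled values summing to 2.0
```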

  16. Validity of thermally-driven small-scale ventilated filling box models

    Science.gov (United States)

    Partridge, Jamie L.; Linden, P. F.

    2013-11-01

    Most previous work studying building ventilation flows at laboratory scale has used saline plumes in water. The production of buoyancy forces using salinity variations in water allows dynamic similarity between the small-scale models and the full-scale flows. However, in some situations, such as including the effects of non-adiabatic boundaries, the use of a thermal plume is desirable. The efficacy of using temperature differences to produce buoyancy-driven flows representing natural ventilation of a building in a small-scale model is examined here, with comparisons between previous theoretical results and new, heat-based experiments.

  17. Modeling Hydrodynamics on the Wave Group Scale in Topographically Complex Reef Environments

    Science.gov (United States)

    Reyns, J.; Becker, J. M.; Merrifield, M. A.; Roelvink, J. A.

    2016-02-01

    The knowledge of the characteristics of waves and the associated wave-driven currents is important for sediment transport and morphodynamics, nutrient dynamics and larval dispersion within coral reef ecosystems. Reef-lined coasts differ from sandy beaches in that they have a steep offshore slope, that the non-sandy bottom topography is very rough, and that the distance between the point of maximum short wave dissipation and the actual coastline is usually large. At this short wave breakpoint, long waves are released, and these infragravity (IG) scale motions account for the bulk of the water level variance on the reef flat, the lagoon and, eventually, the sandy beaches fronting the coast through run-up. These IG-energy-dominated water level motions are reinforced during extreme events such as cyclones or swells through larger incident band wave heights and low frequency wave resonance on the reef. Recently, a number of hydro(-morpho)dynamic models that have the capability to model these IG waves have successfully been applied to morphologically differing reef environments. One of these models is the XBeach model, which is curvilinear in nature. This poses serious problems when trying to model an entire atoll, for example, as it is extremely difficult to build curvilinear grids that are optimal for the simulation of hydrodynamic processes while maintaining the topology in the grid. One solution to this grid-connectivity problem is the use of unstructured grids. We present an implementation of the wave action balance on the wave group scale with feedback to the flow momentum balance, which is the foundation of XBeach, within the framework of the unstructured Delft3D Flexible Mesh model. The model can be run in stationary as well as in instationary mode, and it can be forced by regular waves, time series or wave spectra. We show how the code is capable of modeling the wave-generated flow at a number of topographically complex reef sites and for a number of

  18. Assessment of applications of transport models on regional scale solute transport

    Science.gov (United States)

    Guo, Z.; Fogg, G. E.; Henri, C.; Pauloo, R.

    2017-12-01

    Regional scale transport models are needed to support the long-term evaluation of groundwater quality and to develop management strategies aiming to prevent serious groundwater degradation. The purpose of this study is to evaluate the capacity of previously-developed upscaling approaches to accurately describe main solute transport processes, including the capture of late-time tails under changing boundary conditions. Advective-dispersive contaminant transport in a 3D heterogeneous domain was simulated and used as a reference solution. Equivalent transport under homogeneous flow conditions was then evaluated by applying the Multi-Rate Mass Transfer (MRMT) model. The random walk particle tracking method was used for both heterogeneous and homogeneous-MRMT scenarios under steady-state and transient conditions. The results indicate that the MRMT model can capture the tails satisfactorily for a plume transported in an ambient steady-state flow field. However, when boundary conditions change, the mass transfer model calibrated for transport under steady-state conditions cannot accurately reproduce the tailing effect observed for the heterogeneous scenario. The deteriorating impact of transient boundary conditions on the upscaled model is more significant for regions where flow fields are dramatically affected, highlighting the poor applicability of the MRMT approach for complex field settings. Accurately simulating mass in both mobile and immobile zones is critical to represent the transport process under transient flow conditions and will be the future focus of our study.
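
    The simplest member of the MRMT family is single-rate mass transfer: first-order exchange between a mobile and an immobile concentration. The sketch below is illustrative only (the rate, capacity ratio and time step are invented, not values from the study); it shows the two defining properties of the exchange, mass conservation and equilibration of the two zones:

```python
# Single-rate mobile-immobile exchange, the simplest MRMT member.
# Cm, Cim: mobile and immobile concentrations; alpha: first-order
# mass-transfer rate; beta: immobile/mobile capacity ratio.
# All parameter values here are invented for illustration.
def mrmt_step(Cm, Cim, alpha=0.2, beta=1.0, dt=0.01):
    dCim = alpha * (Cm - Cim) * dt
    return Cm - beta * dCim, Cim + dCim

Cm, Cim = 1.0, 0.0          # all mass starts in the mobile zone
for _ in range(10000):
    Cm, Cim = mrmt_step(Cm, Cim)

# total mass Cm + beta*Cim is conserved; the zones equilibrate (Cm ~ Cim),
# which is what produces the late-time tailing in breakthrough curves
print(Cm, Cim, Cm + 1.0 * Cim)
```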

  19. Severity Stages in Essential Tremor: A Long-Term Retrospective Study Using the Glass Scale

    Directory of Open Access Journals (Sweden)

    Alexandre Gironell

    2015-03-01

    Full Text Available Background: Few prospective studies have attempted to estimate the rate of decline of essential tremor (ET) and these were over a relatively short time period (less than 10 years). We performed a long-term study of severity stages in ET using the Glass Scale scoring system. Methods: Fifty consecutive patients with severe ET were included. We retrospectively obtained Glass Scale scores throughout the patient’s life. Common milestone events were used to help recall changes in tremor severity. Results: According to the Glass Scale, the age distributions were as follows: score I, 40±17 years, score II, 55±12 years, score III, 64±9 years, and score IV, 69±7 years. A significant negative correlation between age at first symptom and rate of progression was found (r=−0.669, p<0.001). The rate of progression was significantly different (p<0.001) when the first symptom appeared at a younger age (under 40 years of age) compared with older age (40 years or older). Discussion: Our results support the progressive nature of ET. Age at onset was a prognostic factor. The Glass Scale may be a useful tool to determine severity stages during the course of ET in a manner similar to the Hoehn and Yahr Scale for Parkinson’s disease.

  20. Severity Stages in Essential Tremor: A Long-Term Retrospective Study Using the Glass Scale

    Science.gov (United States)

    Gironell, Alexandre; Ribosa-Nogué, Roser; Gich, Ignasi; Marin-Lahoz, Juan; Pascual-Sedano, Berta

    2015-01-01

    Background Few prospective studies have attempted to estimate the rate of decline of essential tremor (ET) and these were over a relatively short time period (less than 10 years). We performed a long-term study of severity stages in ET using the Glass Scale scoring system. Methods Fifty consecutive patients with severe ET were included. We retrospectively obtained Glass Scale scores throughout the patient's life. Common milestone events were used to help recall changes in tremor severity. Results According to the Glass Scale, the age distributions were as follows: score I, 40±17 years, score II, 55±12 years, score III, 64±9 years, and score IV, 69±7 years. A significant negative correlation between age at first symptom and rate of progression was found (r = −0.669, p<0.001). The rate of progression was significantly different (p<0.001) when the first symptom appeared at a younger age (under 40 years of age) compared with older age (40 years or older). Discussion Our results support the progressive nature of ET. Age at onset was a prognostic factor. The Glass Scale may be a useful tool to determine severity stages during the course of ET in a manner similar to the Hoehn and Yahr Scale for Parkinson's disease. PMID:25793146

  1. Investigations of model polymers: Dynamics of melts and statics of a long chain in a dilute melt of shorter chains

    International Nuclear Information System (INIS)

    Bishop, M.; Ceperley, D.; Frisch, H.L.; Kalos, M.H.

    1982-01-01

    We report additional results on a simple model of polymers, namely the diffusion in concentrated polymer systems and the static properties of one long chain in a dilute melt of shorter chains. It is found, for the polymer sizes and time scales amenable to our computer calculations, that there is as yet no evidence for a "reptation" regime in a melt. There is some indication of reptation in the case of a single chain moving through fixed obstacles. No statistically significant effect of the change from excluded-volume behavior of the long chain to ideal behavior as the shorter chains grow is observed.

  2. Statistical modeling of the long-range-dependent structure of barrier island framework geology and surface geomorphology

    Directory of Open Access Journals (Sweden)

    B. A. Weymer

    2018-06-01

    Full Text Available Shorelines exhibit long-range dependence (LRD) and have been shown in some environments to be described in the wave number domain by a power-law characteristic of scale independence. Recent evidence suggests that the geomorphology of barrier islands can, however, exhibit scale dependence as a result of systematic variations in the underlying framework geology. The LRD of framework geology, which influences island geomorphology and its response to storms and sea level rise, has not been previously examined. Electromagnetic induction (EMI) surveys conducted along Padre Island National Seashore (PAIS), Texas, United States, reveal that the EMI apparent conductivity (σa) signal and, by inference, the framework geology exhibits LRD at scales of up to 10^1 to 10^2 km. Our study demonstrates the utility of describing EMI σa and lidar spatial series by a fractional autoregressive integrated moving average (ARIMA) process that specifically models LRD. This method offers a robust and compact way of quantifying the geological variations along a barrier island shoreline using three statistical parameters (p, d, q). We discuss how ARIMA models that use a single parameter d provide a quantitative measure for determining free and forced barrier island evolutionary behavior across different scales. Statistical analyses at regional, intermediate, and local scales suggest that the geologic framework within an area of paleo-channels exhibits a first-order control on dune height. The exchange of sediment amongst nearshore, beach, and dune in areas outside this region is scale independent, implying that barrier islands like PAIS exhibit a combination of free and forced behaviors that affect the response of the island to sea level rise.

  3. Statistical modeling of the long-range-dependent structure of barrier island framework geology and surface geomorphology

    Science.gov (United States)

    Weymer, Bradley A.; Wernette, Phillipe; Everett, Mark E.; Houser, Chris

    2018-06-01

    Shorelines exhibit long-range dependence (LRD) and have been shown in some environments to be described in the wave number domain by a power-law characteristic of scale independence. Recent evidence suggests that the geomorphology of barrier islands can, however, exhibit scale dependence as a result of systematic variations in the underlying framework geology. The LRD of framework geology, which influences island geomorphology and its response to storms and sea level rise, has not been previously examined. Electromagnetic induction (EMI) surveys conducted along Padre Island National Seashore (PAIS), Texas, United States, reveal that the EMI apparent conductivity (σa) signal and, by inference, the framework geology exhibits LRD at scales of up to 10^1 to 10^2 km. Our study demonstrates the utility of describing EMI σa and lidar spatial series by a fractional autoregressive integrated moving average (ARIMA) process that specifically models LRD. This method offers a robust and compact way of quantifying the geological variations along a barrier island shoreline using three statistical parameters (p, d, q). We discuss how ARIMA models that use a single parameter d provide a quantitative measure for determining free and forced barrier island evolutionary behavior across different scales. Statistical analyses at regional, intermediate, and local scales suggest that the geologic framework within an area of paleo-channels exhibits a first-order control on dune height. The exchange of sediment amongst nearshore, beach, and dune in areas outside this region is scale independent, implying that barrier islands like PAIS exhibit a combination of free and forced behaviors that affect the response of the island to sea level rise.
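
    The fractional ARIMA (p, d, q) machinery these records rely on can be illustrated by the fractional differencing filter itself: for non-integer d, the operator (1 − B)^d expands into an infinite sequence of binomial weights, and a d in (0, 0.5) is what lets the model capture LRD. This is a generic sketch of the standard recursion, not code from the study:

```python
import numpy as np

# Weights of the fractional differencing operator (1 - B)^d, via the
# standard recursion w_0 = 1, w_k = w_{k-1} * (k - 1 - d) / k.
# Non-integer d gives slowly decaying weights, i.e. long memory.
def fracdiff_weights(d, n):
    w = [1.0]
    for k in range(1, n):
        w.append(w[-1] * (k - 1 - d) / k)
    return np.array(w)

print(fracdiff_weights(0.4, 5))  # ~ [1, -0.4, -0.12, -0.064, -0.0416]
```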

  4. Multi-scale modeling of carbon capture systems

    Energy Technology Data Exchange (ETDEWEB)

    Kress, Joel David [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2017-03-03

    The development and scale up of cost effective carbon capture processes is of paramount importance to enable the widespread deployment of these technologies to significantly reduce greenhouse gas emissions. The U.S. Department of Energy initiated the Carbon Capture Simulation Initiative (CCSI) in 2011 with the goal of developing a computational toolset that would enable industry to more effectively identify, design, scale up, operate, and optimize promising concepts. The first half of the presentation will introduce the CCSI Toolset consisting of basic data submodels, steady-state and dynamic process models, process optimization and uncertainty quantification tools, an advanced dynamic process control framework, and high-resolution filtered computationalfluid- dynamics (CFD) submodels. The second half of the presentation will describe a high-fidelity model of a mesoporous silica supported, polyethylenimine (PEI)-impregnated solid sorbent for CO2 capture. The sorbent model includes a detailed treatment of transport and amine-CO2- H2O interactions based on quantum chemistry calculations. Using a Bayesian approach for uncertainty quantification, we calibrate the sorbent model to Thermogravimetric (TGA) data.

  5. A functional model for characterizing long-distance movement behaviour

    Science.gov (United States)

    Buderman, Frances E.; Hooten, Mevin B.; Ivan, Jacob S.; Shenk, Tanya M.

    2016-01-01

    Advancements in wildlife telemetry techniques have made it possible to collect large data sets of highly accurate animal locations at a fine temporal resolution. These data sets have prompted the development of a number of statistical methodologies for modelling animal movement. Telemetry data sets are often collected for purposes other than fine-scale movement analysis. These data sets may differ substantially from those that are collected with technologies suitable for fine-scale movement modelling and may consist of locations that are irregular in time, are temporally coarse or have large measurement error. These data sets are time-consuming and costly to collect but may still provide valuable information about movement behaviour. We developed a Bayesian movement model that accounts for error from multiple data sources as well as movement behaviour at different temporal scales. The Bayesian framework allows us to calculate derived quantities that describe temporally varying movement behaviour, such as residence time, speed and persistence in direction. The model is flexible, easy to implement and computationally efficient. We apply this model to data from Colorado Canada lynx (Lynx canadensis) and use derived quantities to identify changes in movement behaviour.
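
    Two of the derived quantities named above, speed and persistence in direction, can be computed directly from a sequence of telemetry fixes. The coordinates and time stamps below are invented, and this is only a point-estimate illustration of what the paper's Bayesian model derives with full uncertainty:

```python
import numpy as np

# Invented track: four (x, y) fixes at times t (units arbitrary).
xy = np.array([[0.0, 0.0], [1.0, 0.0], [2.0, 0.1], [2.0, 1.1]])
t = np.array([0.0, 1.0, 2.0, 3.0])

steps = np.diff(xy, axis=0)                       # displacement per interval
dt = np.diff(t)
speed = np.linalg.norm(steps, axis=1) / dt        # distance / elapsed time

headings = np.arctan2(steps[:, 1], steps[:, 0])   # direction of each step
turn = np.diff(headings)
persistence = np.cos(turn)    # 1 = straight ahead, 0 = right angle, -1 = reversal
print(speed, persistence)
```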

  6. Beyond the first episode: candidate factors for a risk prediction model of schizophrenia.

    Science.gov (United States)

    Murphy, Brendan P

    2010-01-01

    Many early psychosis services are financially compromised and cannot offer a full tenure of care to all patients. To maintain viability of services it is important that those with schizophrenia are identified early to maximize long-term outcomes, as are those with better prognoses who can be discharged early. The duration of untreated psychosis remains the mainstay in determining those who will benefit from extended care, yet its ability to inform on prognosis is modest in both the short and medium term. There are a number of known or putative genetic and environmental risk factors that have the potential to improve prognostication, though a multivariate risk prediction model combining them with clinical characteristics has yet to be developed. Candidate risk factors for such a model are presented, with an emphasis on environmental risk factors. More work is needed to corroborate many putative factors and to determine which of the established factors are salient and which are merely proxy measures. Future research should help clarify how gene-environment and environment-environment interactions occur and whether risk factors are dose-dependent, or if they act additively or synergistically, or are redundant in the presence (or absence) of other factors.

  7. A hybrid procedure for MSW generation forecasting at multiple time scales in Xiamen City, China.

    Science.gov (United States)

    Xu, Lilai; Gao, Peiqing; Cui, Shenghui; Liu, Chun

    2013-06-01

    Accurate forecasting of municipal solid waste (MSW) generation is crucial and fundamental for the planning, operation and optimization of any MSW management system. Comprehensive information on waste generation for month-scale, medium-term and long-term time scales is especially needed, considering the necessity of MSW management upgrade facing many developing countries. Several existing models are available but of little use in forecasting MSW generation at multiple time scales. The goal of this study is to propose a hybrid model that combines the seasonal autoregressive integrated moving average (SARIMA) model and grey system theory to forecast MSW generation at multiple time scales without needing to consider other variables such as demographics and socioeconomic factors. To demonstrate its applicability, a case study of Xiamen City, China was performed. Results show that the model is robust enough to fit and forecast seasonal and annual dynamics of MSW generation at month-scale, medium- and long-term time scales with the desired accuracy. In the month-scale, MSW generation in Xiamen City will peak at 132.2 thousand tonnes in July 2015 - 1.5 times the volume in July 2010. In the medium term, annual MSW generation will increase to 1518.1 thousand tonnes by 2015 at an average growth rate of 10%. In the long term, a large volume of MSW will be output annually and will increase to 2486.3 thousand tonnes by 2020 - 2.5 times the value for 2010. The hybrid model proposed in this paper can enable decision makers to develop integrated policies and measures for waste management over the long term. Copyright © 2013 Elsevier Ltd. All rights reserved.
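
    The grey-system half of such a SARIMA/grey hybrid is commonly a GM(1,1) model: the series is accumulated, a first-order grey differential equation is fitted by least squares, and forecasts are differenced back. The sketch below is a generic GM(1,1), not the authors' implementation, and the input series is invented rather than the Xiamen data:

```python
import numpy as np

# Generic GM(1,1) grey forecast: fit dx1/dt + a*x1 = b to the
# accumulated series x1, then forecast and difference back.
def gm11_forecast(x, steps):
    x = np.asarray(x, dtype=float)
    x1 = np.cumsum(x)                              # accumulated series
    z = 0.5 * (x1[1:] + x1[:-1])                   # background values
    B = np.column_stack([-z, np.ones(len(z))])
    a, b = np.linalg.lstsq(B, x[1:], rcond=None)[0]
    k = np.arange(len(x) + steps)
    x1_hat = (x[0] - b / a) * np.exp(-a * k) + b / a
    x_hat = np.diff(x1_hat, prepend=0.0)           # back from cumulative
    return x_hat[len(x):]                          # forecast horizon only

waste = [100 * 1.1 ** k for k in range(6)]         # invented annual tonnages
print(gm11_forecast(waste, 2))                     # next two years' estimates
```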

  8. Analysis of the Professional Choice Self-Efficacy Scale Using the Rasch-Andrich Rating Scale Model

    Science.gov (United States)

    Ambiel, Rodolfo A. M.; Noronha, Ana Paula Porto; de Francisco Carvalho, Lucas

    2015-01-01

    The aim of this research was to analyze the psychometrics properties of the professional choice self-efficacy scale (PCSES), using the Rasch-Andrich rating scale model. The PCSES assesses four factors: self-appraisal, gathering occupational information, practical professional information search and future planning. Participants were 883 Brazilian…

  9. Roadmap for Scaling and Multifractals in Geosciences: still a long way to go ?

    Science.gov (United States)

    Schertzer, Daniel; Lovejoy, Shaun

    2010-05-01

    The interest in scale symmetries (scaling) in Geosciences has never lessened since the first pioneering EGS session on chaos and fractals 22 years ago. The corresponding NP activities have been steadily increasing, covering a wider and wider diversity of geophysical phenomena and range of space-time scales. Whereas interest was initially largely focused on atmospheric turbulence, rain and clouds at small scales, it has quickly broadened to much larger scales and to much wider scale ranges, to include ocean sciences, solid earth and space physics. Indeed, the scale problem being ubiquitous in Geosciences, it is indispensable to share the efforts and the resulting knowledge as much as possible. There have been numerous achievements, which have followed from the exploration of larger and larger datasets with finer and finer resolutions, and from both modelling and theoretical discussions, particularly on formalisms for intermittency, anisotropy and scale symmetry, and on multiple scaling (multifractals) vs. simple scaling. We are now way beyond the early pioneering but tentative attempts using crude estimates of unique scaling exponents to bring some credence to the fact that scale symmetries are key to most nonlinear geoscience problems. Nowadays, we need to better demonstrate that scaling brings effective solutions to geosciences and therefore to society. A large part of the answer corresponds to our capacity to create much more universal and flexible tools to analyse complex and complicated systems such as the climate in a straightforward and reliable multifractal manner. Preliminary steps in this direction are already quite encouraging: they show that such approaches both explain the difficulty classical techniques have in finding trends in climate scenarios (particularly for extremes) and resolve it with the help of scaling estimators. The question of the reliability and accuracy of these methods is not trivial. After discussing these important but rather short-term issues

  10. A long term model of circulation. [human body

    Science.gov (United States)

    White, R. J.

    1974-01-01

    A quantitative approach to modeling human physiological function, with a view toward ultimate application to long duration space flight experiments, was undertaken. Data was obtained on the effect of weightlessness on certain aspects of human physiological function during 1-3 month periods. Modifications in the Guyton model are reviewed. Design considerations for bilateral interface models are discussed. Construction of a functioning whole body model was studied, as well as the testing of the model versus available data.

  11. ELMO model predicts the price of electric power; ELMO-malli saehkoen hinnan ennustamiseksi

    Energy Technology Data Exchange (ETDEWEB)

    Antila, H. [Electrowatt-Ekono Oy, Helsinki (Finland)

    2001-07-01

    Electrowatt-Ekono has developed a new model with which long-term prognoses can be made of the development of electricity prices in the Nordic countries. The ELMO model can be used for analysis of the electricity markets and for estimating the profitability of long-term power distribution contracts under different scenarios. It can also be applied to calculate the technical and economic fundamentals of new power plants, and to estimate the effects of different taxation models on the emissions of power generation. The model describes the whole power generation system, the power and heat consumption, and transmission. The Finnish power generation system is described on the basis of Electrowatt-Ekono's boiler database by combining different data elements. Calculation is based on the assumption that the Nordic power generation system is used optimally and that production costs are minimised. In practice, effectively operating electricity markets ensure the optimal use of the production system. The market area modelled consists of Finland and Sweden, whose spot prices have long been the same; Norway is treated as a separate market area. The most likely power generation system, power consumption and power transmission system are assumed for the target year under normal rainfall conditions, and the basic scenario is calculated from these preconditions. The calculation is carried out on an hourly basis, which enables estimation of the variation of electricity prices between times of day and between seasons. The system optimises power generation on the basis of electricity and heat consumption curves and fuel prices. The result is an hourly limit price for electric power. Estimates are presented as standard-form reports. Prices are presented as annual averages, on a seasonal basis, and on an hourly or daily basis for different seasons.
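
    The core of such a cost-minimising dispatch (optimal use of the production system, with the hourly limit price set by the marginal plant) is a merit-order calculation. The sketch below is a toy illustration with invented plant names, capacities and costs, not ELMO's actual plant data or algorithm:

```python
# Toy merit-order dispatch: plants are used in order of increasing
# marginal cost until hourly demand is met; the marginal plant sets
# the hourly price. All names and numbers are invented.
plants = [
    ("hydro",   2000,  5.0),   # (name, capacity MW, marginal cost EUR/MWh)
    ("nuclear", 2600, 10.0),
    ("chp",     3000, 18.0),
    ("coal",    2000, 25.0),
]

def hourly_price(demand_mw):
    supplied = 0.0
    for name, capacity, cost in sorted(plants, key=lambda p: p[2]):
        supplied += capacity
        if supplied >= demand_mw:
            return cost            # marginal plant sets the price
    raise ValueError("demand exceeds total capacity")

print(hourly_price(7000))  # CHP is the marginal plant here -> 18.0
```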

  12. Long-term follow-up on affinity maturation and memory B-cell generation in patients with common variable immunodeficiency

    DEFF Research Database (Denmark)

    Ballegaard, Vibe Cecilie Diederich; Permin, H; Katzenstein, T L

    2013-01-01

    Common variable immunodeficiency (CVID) comprises a heterogeneous group of primary immunodeficiency disorders. Immunophenotyping of memory B cells at the time of diagnosis is increasingly used for the classification of patients into subgroups with different clinical prognoses. The EUROclass...

  13. Fixing the EW scale in supersymmetric models after the Higgs discovery

    CERN Document Server

    Ghilencea, D M

    2013-01-01

    TeV-scale supersymmetry was originally introduced to solve the hierarchy problem and therefore fix the electroweak (EW) scale in the presence of quantum corrections. Numerical methods testing the SUSY models often report a good likelihood L (or chi^2 = -2 ln L) to fit the data, including the EW scale itself (m_Z^0), with a simultaneously large fine-tuning, i.e. a large variation of this scale under a small variation of the SUSY parameters. We argue that this is inconsistent and we identify the origin of this problem. Our claim is that the likelihood (or chi^2) to fit the data that is usually reported in such models does not account for the chi^2 cost of fixing the EW scale. When this constraint is implemented, the likelihood (or chi^2) receives a significant correction (delta_chi^2) that worsens the current data fits of SUSY models. We estimate this correction for the models: constrained MSSM (CMSSM), models with non-universal gaugino masses (NUGM) or Higgs soft masses (NUHM1, NUHM2), the NMSSM and the ...

  14. Anomalous scaling in an age-dependent branching model

    OpenAIRE

    Keller-Schmidt, Stephanie; Tugrul, Murat; Eguiluz, Victor M.; Hernandez-Garcia, Emilio; Klemm, Konstantin

    2010-01-01

    We introduce a one-parametric family of tree growth models, in which branching probabilities decrease with branch age $\tau$ as $\tau^{-\alpha}$. Depending on the exponent $\alpha$, the scaling of tree depth with tree size $n$ displays a transition between the logarithmic scaling of random trees and an algebraic growth. At the transition ($\alpha=1$) tree depth grows as $(\log n)^2$. This anomalous scaling is in good agreement with the trend observed in evolution of biological species, thus p...
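
    A toy simulation makes the model concrete: each new leaf attaches to an existing node chosen with a weight that decays with that node's age as age^(-alpha). The exact weighting and tie-breaking below are assumptions for illustration, not the paper's specification:

```python
import random

# Grow a random tree of n nodes. New node at step s attaches to an
# existing node born at step b with weight (s - b + 1) ** (-alpha)
# (the "+1" avoids a zero age and is an assumption of this sketch).
def grow_tree(n, alpha, seed=0):
    rng = random.Random(seed)
    birth = [0]          # step at which each node appeared
    depth = [0]          # distance from the root
    for step in range(1, n):
        weights = [(step - b + 1) ** (-alpha) for b in birth]
        total = sum(weights)
        r = rng.uniform(0.0, total)
        acc, parent = 0.0, 0
        for i, w in enumerate(weights):
            acc += w
            if r <= acc:
                parent = i
                break
        birth.append(step)
        depth.append(depth[parent] + 1)
    return depth

depths = grow_tree(200, alpha=1.0)
print(len(depths), max(depths))   # tree size and depth
```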

  15. A Bayesian method for construction of Markov models to describe dynamics on various time-scales.

    Science.gov (United States)

    Rains, Emily K; Andersen, Hans C

    2010-10-14

    The dynamics of many biological processes of interest, such as the folding of a protein, are slow and complicated enough that a single molecular dynamics simulation trajectory of the entire process is difficult to obtain in any reasonable amount of time. Moreover, one such simulation may not be sufficient to develop an understanding of the mechanism of the process, and multiple simulations may be necessary. One approach to circumvent this computational barrier is the use of Markov state models. These models are useful because they can be constructed using data from a large number of shorter simulations instead of a single long simulation. This paper presents a new Bayesian method for the construction of Markov models from simulation data. A Markov model is specified by (τ,P,T), where τ is the mesoscopic time step, P is a partition of configuration space into mesostates, and T is an N(P)×N(P) transition rate matrix for transitions between the mesostates in one mesoscopic time step, where N(P) is the number of mesostates in P. The method presented here is different from previous Bayesian methods in several ways. (1) The method uses Bayesian analysis to determine the partition as well as the transition probabilities. (2) The method allows the construction of a Markov model for any chosen mesoscopic time-scale τ. (3) It constructs Markov models for which the diagonal elements of T are all equal to or greater than 0.5. Such a model will be called a "consistent mesoscopic Markov model" (CMMM). Such models have important advantages for providing an understanding of the dynamics on a mesoscopic time-scale. The Bayesian method uses simulation data to find a posterior probability distribution for (P,T) for any chosen τ. This distribution can be regarded as the Bayesian probability that the kinetics observed in the atomistic simulation data on the mesoscopic time-scale τ was generated by the CMMM specified by (P,T). An optimization algorithm is used to find the most
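
    The Bayesian treatment of the transition matrix T can be sketched for a fixed partition P: with a uniform Dirichlet prior on each row, the posterior mean transition probability is (count + 1) / (row total + N). This is a generic illustration of that standard estimate, not the paper's full method (which also infers the partition); the trajectory and state labels are invented:

```python
import numpy as np

# Posterior-mean transition matrix for a discrete mesostate trajectory,
# observed at lag tau, under a Dirichlet(1, ..., 1) prior on each row.
def posterior_mean_T(traj, n_states, tau=1):
    counts = np.zeros((n_states, n_states))
    for a, b in zip(traj[:-tau], traj[tau:]):
        counts[a, b] += 1
    return (counts + 1) / (counts.sum(axis=1, keepdims=True) + n_states)

traj = [0, 0, 1, 1, 0, 1, 0, 0]      # invented two-state trajectory
T = posterior_mean_T(traj, 2)
print(T)                              # each row sums to 1
```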

  16. Thermal stability of the French nuclear waste glass - long term behavior modeling

    International Nuclear Information System (INIS)

    Orlhac, X.

    2000-01-01

    The thermal stability of the French nuclear waste glass was investigated experimentally and by modeling to predict its long-term evolution at low temperature. The crystallization mechanisms were analyzed by studying devitrification in the supercooled liquid. Three main crystalline phases were characterized (CaMoO4, CeO2, ZnCr2O4). Their crystallization was limited to 4.24 wt%, due to the low concentration of the constituent elements. The nucleation and growth curves showed that platinoid elements catalysed nucleation but did not affect growth, which was governed by volume diffusion. The criteria of classical nucleation theory were applied to determine the thermodynamic and diffusional activation energies. Viscosity measurements illustrate the analogy between the activation energies of viscous flow and diffusion, indicating control of crystallization by viscous flow phenomena. The combined action of nucleation and growth was assessed by TTT plots, revealing a crystallization equilibrium line that enables the crystallized fractions to be predicted over the long term. The authors show that heterogeneities catalyze the transformation without modifying the maximum crystallized fraction. A kinetic model was developed to describe devitrification in the glass based on the nucleation and growth curves alone. The authors show that the low-temperature growth exhibits scale behavior (between time and temperature) similar to thermo-rheological simplicity. The analogy between the resulting activation energy and that of the viscosity was used to model growth on the basis of viscosity. After validation with a simplified (BaO·2SiO2) glass, the model was applied to the containment glass. The result indicated that the glass remains completely vitreous after a cooling scenario like the one measured at the glass core. Under isothermal conditions, several million years would be required to reach the maximum theoretical crystallization fraction. (author)
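
    The combined nucleation-and-growth kinetics behind TTT diagrams is commonly summarised by the Avrami (JMAK) relation, X(t) = 1 − exp(−(k·t)^n). The sketch below illustrates that generic relation only; the rate constant and exponent are invented, not values fitted in the study:

```python
import math

# Avrami (JMAK) crystallized fraction: X(t) = 1 - exp(-(k*t)**n).
# k lumps nucleation and growth rates; n reflects the growth geometry.
# Parameter values are illustrative.
def crystallized_fraction(t, k, n):
    return 1.0 - math.exp(-((k * t) ** n))

X = crystallized_fraction(2.0, k=0.5, n=3)
print(X)   # (0.5*2)^3 = 1, so X = 1 - exp(-1) ~ 0.632
```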

  17. Large transverse momentum processes in a non-scaling parton model

    International Nuclear Information System (INIS)

    Stirling, W.J.

    1977-01-01

    The production of large transverse momentum mesons in hadronic collisions by the quark fusion mechanism is discussed in a parton model which gives logarithmic corrections to Bjorken scaling. It is found that the moments of the large transverse momentum structure function exhibit a simple scale breaking behaviour similar to the behaviour of the Drell-Yan and deep inelastic structure functions of the model. An estimate of corresponding experimental consequences is made and the extent to which analogous results can be expected in an asymptotically free gauge theory is discussed. A simple set of rules is presented for incorporating the logarithmic corrections to scaling into all covariant parton model calculations. (Auth.)

  18. Real-Time Observation of Apathy in Long-Term Care Residents With Dementia: Reliability of the Person-Environment Apathy Rating Scale.

    Science.gov (United States)

    Jao, Ying-Ling; Mogle, Jacqueline; Williams, Kristine; McDermott, Caroline; Behrens, Liza

    2018-04-01

    Apathy is prevalent in individuals with dementia. Lack of responsiveness to environmental stimulation is a key characteristic of apathy. The Person-Environment Apathy Rating (PEAR) scale consists of environment and apathy subscales, which allow for examination of environmental impact on apathy. The interrater reliability of the PEAR scale was examined via real-time observation. The current study included 45 observations of 15 long-term care residents with dementia. Each participant was observed at three time points for 10 minutes each. Two raters observed the participant and surrounding environment and independently rated the participant's apathy and environmental stimulation using the PEAR scale. Weighted Kappa was 0.5 to 0.82 for the PEAR-Environment subscale and 0.5 to 0.8 for the PEAR-Apathy subscale. Overall, with the exception of three items with relatively weak reliability (0.50 to 0.56), the PEAR scale showed moderate to strong interrater reliability (0.63 to 0.82). The results support the use of the PEAR scale to measure environmental stimulation and apathy via real-time observation in long-term care residents with dementia. [Journal of Gerontological Nursing, 44(4), 23-28.]. Copyright 2018, SLACK Incorporated.
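
    The agreement statistic reported above, weighted kappa, can be computed directly. The sketch below implements linearly weighted Cohen's kappa for two raters on an ordinal scale; the ratings are made-up values standing in for PEAR item scores.

```python
from collections import Counter

def linear_weighted_kappa(rater1, rater2, categories):
    """Linearly weighted Cohen's kappa for two raters on an ordinal scale.

    Disagreement weight w[i][j] = |i - j| / (k - 1); kappa = 1 - sum(w*O)/sum(w*E),
    where O holds observed and E chance-expected cell proportions.
    """
    k = len(categories)
    index = {c: i for i, c in enumerate(categories)}
    n = len(rater1)
    obs = Counter(zip(rater1, rater2))
    m1, m2 = Counter(rater1), Counter(rater2)
    num = den = 0.0
    for a in categories:
        for b in categories:
            w = abs(index[a] - index[b]) / (k - 1)
            num += w * obs.get((a, b), 0) / n
            den += w * (m1.get(a, 0) / n) * (m2.get(b, 0) / n)
    return 1.0 - num / den

# Two raters scoring the same observations on a 1-4 ordinal scale (made-up data).
r1 = [1, 2, 2, 3, 4, 4, 3, 2, 1, 3]
r2 = [1, 2, 3, 3, 4, 3, 3, 2, 1, 3]
print(round(linear_weighted_kappa(r1, r2, [1, 2, 3, 4]), 3))
```

Perfect agreement yields kappa = 1; near-misses on adjacent categories are penalized less than distant disagreements, which is why weighted kappa suits ordinal rating scales like the PEAR.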

  19. Scaling for deuteron structure functions in a relativistic light-front model

    International Nuclear Information System (INIS)

    Polyzou, W.N.; Gloeckle, W.

    1996-01-01

    Scaling limits of the structure functions [B.D. Keister, Phys. Rev. C 37, 1765 (1988)], W1 and W2, are studied in a relativistic model of the two-nucleon system. The relativistic model is defined by a unitary representation, U(Λ,a), of the Poincaré group which acts on the Hilbert space of two spinless nucleons. The representation is in Dirac's [P.A.M. Dirac, Rev. Mod. Phys. 21, 392 (1949)] light-front formulation of relativistic quantum mechanics and is designed to give the experimental deuteron mass and n-p scattering length. A model hadronic current operator that is conserved and covariant with respect to this representation is used to define the structure tensor. This work is the first step in a relativistic extension of the results of Hueber, Gloeckle, and Boemelburg [D. Hueber et al., Phys. Rev. C 42, 2342 (1990)]. The nonrelativistic limit of the model is shown to be consistent with the nonrelativistic model of Hueber, Gloeckle, and Boemelburg. The relativistic and nonrelativistic scaling limits, for both Bjorken and y scaling, are compared. The interpretation of y scaling in the relativistic model is studied critically. The standard interpretation of y scaling requires a soft wave function which is not realized in this model. The scaling limits in both the relativistic and nonrelativistic case are related to probability distributions associated with the target deuteron. copyright 1996 The American Physical Society

  20. Long Island Solar Farm

    Energy Technology Data Exchange (ETDEWEB)

    Anders, R.

    2013-05-01

    The Long Island Solar Farm (LISF) is a remarkable success story, whereby very different interest groups found a way to capitalize on unusual circumstances to develop a mutually beneficial source of renewable energy. The uniqueness of the circumstances that were necessary to develop the Long Island Solar Farm make it very difficult to replicate. The project is, however, an unparalleled resource for solar energy research, which will greatly inform large-scale PV solar development in the East. Lastly, the LISF is a superb model for the process by which the project developed and the innovation and leadership shown by the different players.

  1. Economic Model Predictive Control for Large-Scale and Distributed Energy Systems

    DEFF Research Database (Denmark)

    Standardi, Laura

    In this thesis, we consider control strategies for large and distributed energy systems that are important for the implementation of smart grid technologies. An electrical grid has to ensure reliability and avoid long-term interruptions in the power supply. Moreover, the share of Renewable Energy Sources (RESs) in the smart grids is increasing. These energy sources bring uncertainty to the production due to their fluctuations. Hence, smart grids need suitable control systems that are able to continuously balance power production and consumption. We apply the Economic Model Predictive Control (EMPC) strategy to optimise the economic performances of the energy systems and to balance the power production and consumption. In the case of large-scale energy systems, the electrical grid connects a high number of power units. Because of this, the related control problem involves a high number of variables...

  2. Modeling High Frequency Data with Long Memory and Structural Change: A-HYEGARCH Model

    Directory of Open Access Journals (Sweden)

    Yanlin Shi

    2018-03-01

    Full Text Available In this paper, we propose an Adaptive Hyperbolic EGARCH (A-HYEGARCH model to estimate the long memory of high frequency time series with potential structural breaks. Based on the original HYGARCH model, we use the logarithm transformation to ensure the positivity of conditional variance. The structural change is further allowed via a flexible time-dependent intercept in the conditional variance equation. To demonstrate its effectiveness, we perform a range of Monte Carlo studies considering various data generating processes with and without structural changes. Empirical testing of the A-HYEGARCH model is also conducted using high frequency returns of S&P 500, FTSE 100, ASX 200 and Nikkei 225. Our simulation and empirical evidence demonstrate that the proposed A-HYEGARCH model outperforms various competing specifications and can effectively control for structural breaks. Therefore, our model may provide more reliable estimates of long memory and could be a widely useful tool for modelling financial volatility in other contexts.
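
    A minimal sketch of the logarithmic variance recursion behind the positivity guarantee mentioned above. This is the plain EGARCH(1,1) form, not the full A-HYEGARCH (which adds hyperbolic long memory and a time-dependent intercept for structural change), and the parameter values are illustrative.

```python
import math
import random

def egarch_variance_path(shocks, omega=-0.1, alpha=0.1, gamma=-0.05, beta=0.95):
    """EGARCH(1,1) conditional-variance recursion on the log scale:
        ln(s2_t) = omega + beta*ln(s2_{t-1}) + alpha*(|z_{t-1}| - E|z|) + gamma*z_{t-1}
    with z the standardized shock and E|z| = sqrt(2/pi) under normality.
    Modeling ln(s2) keeps the conditional variance positive for any parameters.
    """
    e_abs = math.sqrt(2.0 / math.pi)
    log_s2 = omega / (1.0 - beta)  # start from the unconditional log-variance
    path = []
    for z in shocks:
        log_s2 = omega + beta * log_s2 + alpha * (abs(z) - e_abs) + gamma * z
        path.append(math.exp(log_s2))
    return path

random.seed(1)
zs = [random.gauss(0.0, 1.0) for _ in range(500)]
sig2 = egarch_variance_path(zs)
print(min(sig2), max(sig2))
```

Because the recursion is written for ln(s2), no parameter restrictions are needed to keep the variance positive, which is the property the A-HYEGARCH construction carries over from EGARCH.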

  3. Dementia Rating Scale psychometric study and its applicability in long term care institutions in Brazil

    OpenAIRE

    Alessandro Ferrari Jacinto; Ana Cristina Procópio de Oliveira Aguiar; Fabio Gazelato de Melo Franco; Miriam Ikeda Ribeiro; Vanessa de Albuquerque Citero

    2012-01-01

    Objective: To evaluate the diagnostic sensitivity, specificity, and agreement of the Dementia Rating Scale with clinical diagnosis of cognitive impairment and to compare its psychometric measures with those from the Mini Mental State Examination. Methods: Eighty-six elders from a long-term care institution were invited to participate in a study, and fifty-eight agreed to participate. The global health assessment protocol applied to these elders contained the Mini Mental State Examination and the Dementia Rating...

  4. Scale Economies and Industry Agglomeration Externalities: A Dynamic Cost Function Approach

    OpenAIRE

    Donald S. Siegel; Catherine J. Morrison Paul

    1999-01-01

    Scale economies and agglomeration externalities are alleged to be important determinants of economic growth. To assess these effects, the authors outline and estimate a microfoundations model based on a dynamic cost function specification. This model provides for the separate identification of the impacts of externalities and cyclical utilization on short- and long-run scale economies and input substitution patterns. The authors find that scale economies are prevalent in U.S. manufacturing; co...

  5. Toward micro-scale spatial modeling of gentrification

    Science.gov (United States)

    O'Sullivan, David

    A simple preliminary model of gentrification is presented. The model is based on an irregular cellular automaton architecture drawing on the concept of proximal space, which is well suited to the spatial externalities present in housing markets at the local scale. The rent gap hypothesis on which the model's cell transition rules are based is discussed. The model's transition rules are described in detail. Practical difficulties in configuring and initializing the model are described and its typical behavior reported. Prospects for further development of the model are discussed. The current model structure, while inadequate, is well suited to further elaboration and the incorporation of other interesting and relevant effects.
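
    The rent-gap transition rule described above can be caricatured in a few lines: a cell redevelops when the gap between potential and capitalized rent, nudged by the mean gap of its proximal neighborhood, crosses a threshold. The rules, weights and values below are illustrative assumptions, not the paper's actual transition rules.

```python
def step(capitalized, potential, neighbors, threshold=0.3, closure=0.8):
    """One synchronous update of a toy rent-gap cellular automaton.

    A cell redevelops when a weighted mix of its own rent gap
    (potential - capitalized rent) and its neighborhood's mean gap
    exceeds the threshold; reinvestment then closes most of the gap.
    """
    new = capitalized[:]
    for i in range(len(capitalized)):
        gap = potential[i] - capitalized[i]
        nbr_gap = sum(potential[j] - capitalized[j] for j in neighbors[i]) / len(neighbors[i])
        if 0.7 * gap + 0.3 * nbr_gap > threshold:
            new[i] = capitalized[i] + closure * gap
    return new

# A 5-cell strip with a disinvested core; proximal neighbors are adjacent cells.
potential = [1.0] * 5
capitalized = [0.9, 0.5, 0.4, 0.5, 0.9]
neighbors = [[1], [0, 2], [1, 3], [2, 4], [3]]
for _ in range(3):
    capitalized = step(capitalized, potential, neighbors)
print([round(c, 3) for c in capitalized])
```

The disinvested middle cells gentrify (their gaps exceed the threshold) while the already well-capitalized edge cells stay put, a crude analogue of the spatially selective reinvestment the rent gap hypothesis predicts.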

  6. Scale Model Thruster Acoustic Measurement Results

    Science.gov (United States)

    Vargas, Magda; Kenny, R. Jeremy

    2013-01-01

    The Space Launch System (SLS) Scale Model Acoustic Test (SMAT) is a 5% scale representation of the SLS vehicle, mobile launcher, tower, and launch pad trench. The SLS launch propulsion system will be comprised of the Rocket Assisted Take-Off (RATO) motors representing the solid boosters and 4 Gas Hydrogen (GH2) thrusters representing the core engines. The GH2 thrusters were tested in a horizontal configuration in order to characterize their performance. In Phase 1, a single thruster was fired to determine the engine performance parameters necessary for scaling a single engine. A cluster configuration, consisting of the 4 thrusters, was tested in Phase 2 to integrate the system and determine their combined performance. Acoustic and overpressure data were collected during both test phases in order to characterize the system's acoustic performance. The results from the single-thruster and 4-thruster systems are discussed and compared.

  7. Temperature dependence of fluctuation time scales in spin glasses

    DEFF Research Database (Denmark)

    Kenning, Gregory G.; Bowen, J.; Sibani, Paolo

    2010-01-01

    Using a series of fast cooling protocols we have probed aging effects in the spin glass state as a function of temperature. Analyzing the logarithmic decay found at very long time scales within a simple phenomenological barrier model, leads to the extraction of the fluctuation time scale of the s...

  8. Building a Shared Definitional Model of Long Duration Human Spaceflight

    Science.gov (United States)

    Orr, M.; Whitmire, A.; Sandoval, L.; Leveton, L.; Arias, D.

    2011-01-01

    In 1956, on the eve of human space travel, Strughold first proposed a simple classification of the present and future stages of manned flight that identified key factors, risks and developmental stages for the evolutionary journey ahead. As we look to optimize the potential of the ISS as a gateway to new destinations, we need a current shared working definitional model of long duration human space flight to help guide our path. An initial search of formal and grey literature was augmented by liaison with subject matter experts. The search strategy focused on the terms "long duration mission" and "long duration spaceflight", as well as on broader related current and historical definitions and classification models of spaceflight. The related sea and air travel literature was also subsequently explored with a view to identifying analogous models or classification systems. There are multiple different definitions and classification systems for spaceflight, including phase and type of mission, craft and payload, and related risk management models. However, the frequently used concepts of long duration mission and long duration spaceflight are infrequently operationally defined by authors, and no commonly referenced classical or gold standard definition or model of these terms emerged from the search. The categorization (Cat) system for sailing was found to be of potential analogous utility, with its focus on understanding the need for crew and craft autonomy at various levels of potential adversity and inability to gain outside support or return to a safe location, due to factors of time, distance and location.

  9. Distinguishing globally-driven changes from regional- and local-scale impacts: The case for long-term and broad-scale studies of recovery from pollution.

    Science.gov (United States)

    Hawkins, S J; Evans, A J; Mieszkowska, N; Adams, L C; Bray, S; Burrows, M T; Firth, L B; Genner, M J; Leung, K M Y; Moore, P J; Pack, K; Schuster, H; Sims, D W; Whittington, M; Southward, E C

    2017-11-30

    Marine ecosystems are subject to anthropogenic change at global, regional and local scales. Global drivers interact with regional- and local-scale impacts of both a chronic and acute nature. Natural fluctuations and those driven by climate change need to be understood to diagnose local- and regional-scale impacts, and to inform assessments of recovery. Three case studies are used to illustrate the need for long-term studies: (i) separation of the influence of fishing pressure from climate change on bottom fish in the English Channel; (ii) recovery of rocky shore assemblages from the Torrey Canyon oil spill in the southwest of England; (iii) interaction of climate change and chronic Tributyltin pollution affecting recovery of rocky shore populations following the Torrey Canyon oil spill. We emphasize that "baselines" or "reference states" are better viewed as envelopes that are dependent on the time window of observation. Recommendations are made for adaptive management in a rapidly changing world. Copyright © 2017. Published by Elsevier Ltd.

  10. Atomic scale modelling of materials of the nuclear fuel cycle

    International Nuclear Information System (INIS)

    Bertolus, M.

    2011-10-01

    This document, written to obtain the French accreditation to supervise research, presents the research I conducted at CEA Cadarache since 1999 on the atomic scale modelling of non-metallic materials involved in the nuclear fuel cycle: host materials for radionuclides from nuclear waste (apatites), fuel (in particular uranium dioxide) and ceramic cladding materials (silicon carbide). These are complex materials at the frontier of modelling capabilities since they contain heavy elements (rare earths or actinides), exhibit complex structures or chemical compositions and/or are subjected to irradiation effects: creation of point defects and fission products, amorphization. The objective of my studies is to bring further insight into the physics and chemistry of the elementary processes involved using atomic scale modelling and its coupling with higher scale models and experimental studies. This work is organised in two parts: on the one hand the development, adaptation and implementation of atomic scale modelling methods and validation of the approximations used; on the other hand the application of these methods to the investigation of nuclear materials under irradiation. This document contains a synthesis of the studies performed, orientations for future research, a detailed resume and a list of publications and communications. (author)

  11. Transport simulations TFTR: Theoretically-based transport models and current scaling

    International Nuclear Information System (INIS)

    Redi, M.H.; Cummings, J.C.; Bush, C.E.; Fredrickson, E.; Grek, B.; Hahm, T.S.; Hill, K.W.; Johnson, D.W.; Mansfield, D.K.; Park, H.; Scott, S.D.; Stratton, B.C.; Synakowski, E.J.; Tang, W.M.; Taylor, G.

    1991-12-01

    In order to study the microscopic physics underlying observed L-mode current scaling, 1-1/2-d BALDUR has been used to simulate density and temperature profiles for high and low current, neutral beam heated discharges on TFTR with several semi-empirical, theoretically-based models previously compared for TFTR, including several versions of trapped electron drift wave driven transport. Experiments at TFTR, JET and DIII-D show that Ip scaling of τE does not arise from edge modes as previously thought, and is most likely to arise from nonlocal processes or from the Ip-dependence of local plasma core transport. Consistent with this, it is found that strong current scaling does not arise from any of several edge models of resistive ballooning. Simulations with the profile-consistent drift wave model and with a new model for toroidal collisionless trapped electron mode core transport in a multimode formalism lead to strong current scaling of τE for the L-mode cases on TFTR. None of the theoretically-based models succeeded in simulating the measured temperature and density profiles for both high and low current experiments

  12. Small-Scale Testing Rig for Long-Term Cyclically Loaded Monopiles in Cohesionless Soil

    DEFF Research Database (Denmark)

    Roesen, Hanne Ravn; Ibsen, Lars Bo; Andersen, Lars Vabbersgaard

    2012-01-01

    The behaviour of cyclically loaded monopiles depends on characteristics of the loading such as the mean level, amplitude, number of cycles, and the period of the cyclic loading. However, the design guidance on these issues is limited. Thus, in order to investigate the pile behaviour for cyclically long-term loaded monopiles, a test setup for small-scale tests in saturated dense cohesionless soil is constructed and presented here. The cyclic loading is applied mechanically by means of a testing rig, where the important input parameters (mean level, amplitude, number of cycles, and period of the loading) can be varied. The results from a monotonic and a cyclic loading test on an open-ended aluminium pile with diameter = 100 mm and embedded length = 600 mm prove that the test setup is capable of applying the cyclic long-term loading. The plastic deformations during loading depend not only on the loading applied but also on the relative density of the soil and, thus, the tests are carried out with relative densities of 77-88%, i.e. similar...

  13. Development and analysis of prognostic equations for mesoscale kinetic energy and mesoscale (subgrid scale) fluxes for large-scale atmospheric models

    Science.gov (United States)

    Avissar, Roni; Chen, Fei

    1993-01-01

    Generated by landscape discontinuities (e.g., sea breezes), mesoscale circulation processes are not represented in large-scale atmospheric models (e.g., general circulation models), which have an inappropriate grid-scale resolution. With the assumption that atmospheric variables can be separated into large scale, mesoscale, and turbulent scale, a set of prognostic equations applicable in large-scale atmospheric models for momentum, temperature, moisture, and any other gaseous or aerosol material, which includes both mesoscale and turbulent fluxes, is developed. Prognostic equations are also developed for these mesoscale fluxes, which indicate a closure problem and, therefore, require a parameterization. For this purpose, the mean mesoscale kinetic energy (MKE) per unit of mass is used, defined as Ẽ = 0.5⟨u′ᵢ²⟩, where the u′ᵢ represent the three Cartesian components of a mesoscale circulation, the angle brackets denote the grid-scale, horizontal averaging operator in the large-scale model, and a tilde indicates a corresponding large-scale mean value. A prognostic equation is developed for Ẽ, and an analysis of the different terms of this equation indicates that the mesoscale vertical heat flux, the mesoscale pressure correlation, and the interaction between turbulence and mesoscale perturbations are the major terms that affect the time tendency of Ẽ. A state-of-the-art mesoscale atmospheric model is used to investigate the relationship between MKE, landscape discontinuities (as characterized by the spatial distribution of heat fluxes at the earth's surface), and mesoscale sensible and latent heat fluxes in the atmosphere. MKE is compared with turbulence kinetic energy to illustrate the importance of mesoscale processes as compared to turbulent processes. This analysis emphasizes the potential use of MKE to bridge between landscape discontinuities and mesoscale fluxes and, therefore, to parameterize mesoscale fluxes
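
    The MKE definition above can be sketched numerically: decompose the winds in one large-scale grid cell into their cell average and mesoscale departures, then average half the sum of the squared departures. The wind values below are toy numbers, not model output.

```python
from statistics import fmean

def mke(u, v, w):
    """Mean mesoscale kinetic energy per unit mass over one large-scale grid cell:
    E = 0.5 * <u_i' u_i'>, where the primes are departures of the
    mesoscale-resolved winds from the grid-cell (large-scale) average.
    """
    total = 0.0
    for comp in (u, v, w):
        mean = fmean(comp)
        total += fmean([(x - mean) ** 2 for x in comp])
    return 0.5 * total

# A toy sea-breeze-like circulation inside one GCM grid cell (values in m/s).
u = [3.0, 1.0, -1.0, -3.0, -1.0, 1.0]
v = [0.5, -0.5, 0.5, -0.5, 0.5, -0.5]
w = [0.1, 0.2, -0.1, -0.2, 0.1, -0.1]
print(mke(u, v, w))
```

A uniform wind field has zero MKE: only sub-grid (mesoscale) structure contributes, which is what makes Ẽ a natural bridge between landscape discontinuities and the mesoscale fluxes to be parameterized.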

  14. On the sub-model errors of a generalized one-way coupling scheme for linking models at different scales

    Science.gov (United States)

    Zeng, Jicai; Zha, Yuanyuan; Zhang, Yonggen; Shi, Liangsheng; Zhu, Yan; Yang, Jinzhong

    2017-11-01

    Multi-scale modeling of localized groundwater flow problems in a large-scale aquifer has been extensively investigated in the context of the cost-benefit controversy. An alternative is to couple the parent and child models with different spatial and temporal scales, which may result in non-trivial sub-model errors in the local areas of interest. Basically, such errors in the child models originate from the deficiency in the coupling methods, as well as from the inadequacy in the spatial and temporal discretizations of the parent and child models. In this study, we investigate the sub-model errors within a generalized one-way coupling scheme, given its numerical stability and efficiency, which enables more flexibility in choosing sub-models. To couple the models at different scales, the head solution at parent scale is delivered downward onto the child boundary nodes by means of spatial and temporal head interpolation approaches. The efficiency of the coupling model is improved either by refining the grid or time step size in the parent and child models, or by carefully locating the sub-model boundary nodes. The temporal truncation errors in the sub-models can be significantly reduced by the adaptive local time-stepping scheme. The generalized one-way coupling scheme is promising for handling multi-scale groundwater flow problems with complex stresses and heterogeneity.
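
    The downward delivery of the parent head onto child boundary nodes can be sketched as linear interpolation in space followed by linear interpolation in time. The transect, time steps and head values below are hypothetical, not from the study.

```python
def interp1(x, xs, ys):
    """Piecewise-linear interpolation of ys(xs) at x (xs ascending); clamps at the ends."""
    if x <= xs[0]:
        return ys[0]
    if x >= xs[-1]:
        return ys[-1]
    for i in range(1, len(xs)):
        if x <= xs[i]:
            t = (x - xs[i - 1]) / (xs[i] - xs[i - 1])
            return ys[i - 1] + t * (ys[i] - ys[i - 1])

def child_boundary_head(x_node, t, parent_xs, parent_ts, parent_heads):
    """One-way coupling: interpolate the parent head solution in space, then in
    time, to supply a boundary value at a child-model node.
    parent_heads[k] lists the heads along parent_xs at time parent_ts[k].
    """
    heads_at_node = [interp1(x_node, parent_xs, h) for h in parent_heads]
    return interp1(t, parent_ts, heads_at_node)

# Coarse parent solution (hypothetical heads in meters) on a 3-node transect at
# two time steps; the child boundary node falls between parent nodes and times.
parent_xs = [0.0, 100.0, 200.0]
parent_ts = [0.0, 10.0]
parent_heads = [[5.0, 4.0, 3.0], [4.8, 3.9, 3.0]]
print(child_boundary_head(150.0, 5.0, parent_xs, parent_ts, parent_heads))
```

Refining the parent grid or time step, or placing child boundary nodes where the parent solution is smooth, reduces the interpolation error this scheme introduces, matching the remedies the abstract lists.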

  15. Long-term dimensional stability and longitudinal uniformity of line scales made of glass ceramics

    International Nuclear Information System (INIS)

    Takahashi, Akira

    2010-01-01

    Line scales are commonly used as a working standard of length for the calibration of optical measuring instruments such as profile projectors, measuring microscopes and video measuring systems. For high-precision calibration, line scales with low thermal expansion are commonly used. Glass ceramics have a very low coefficient of thermal expansion (CTE) and are widely used for precision line scales. From a previous study, it is known that glass ceramics decrease in length from the time of production or heat treatment. The line scale measurement method can evaluate more than one section of the line scale and is capable of the evaluation of the longitudinal uniformity of the secular change of glass ceramics. In this paper, an arithmetic model of the secular change of a line scale and its longitudinal uniformity is proposed. Six line scales made of Zerodur®, Clearceram® and synthetic quartz were manufactured at the same time. The dimensional changes of the six line scales were experimentally evaluated over 2 years using a line scale calibration system

  16. PSA modeling of long-term accident sequences

    International Nuclear Information System (INIS)

    Georgescu, Gabriel; Corenwinder, Francois; Lanore, Jeanne-Marie

    2014-01-01

    In the context of the extension of PSA scope to include external hazards in France, both the operator (EDF) and IRSN work on the improvement of methods to better take into account in the PSA the accident sequences induced by initiators which affect a whole site containing several nuclear units (reactors, fuel pools, ...). These methodological improvements represent an essential prerequisite for the development of external hazards PSA. However, it has to be noted that in French PSA, even before Fukushima, long term accident sequences were taken into account: many insights were therefore used, as complementary information, to enhance the safety level of the plants. IRSN proposed an external events PSA development program. One of the first steps of the program is the development of methods to model in the PSA the long term accident sequences, based on the experience gained. In the short term, IRSN intends to enhance the modeling of the 'long term' accident sequences induced by the loss of the heat sink and/or the loss of external power supply. The experience gained by IRSN and EDF from the development of several probabilistic studies treating long term accident sequences shows that simply extending the mission time of the mitigation systems from 24 hours to longer times is not sufficient to realistically quantify the risk and to obtain a correct ranking of the risk contributions, and that treatment of recoveries is also necessary. IRSN intends to develop a generic study which can be used as a general methodology for the assessment of the long term accident sequences, mainly generated by external hazards and their combinations. A first attempt to develop this generic study allowed the identification of some aspects, whether hazards (or combinations of hazards) or initial boundary conditions, which should be taken into account in further developments. (authors)

  17. Energy-Water Modeling and Impacts at Urban and Infrastructure Scales

    Science.gov (United States)

    Saleh, F.; Pullen, J. D.; Schoonen, M. A.; Gonzalez, J.; Bhatt, V.; Fellows, J. D.

    2017-12-01

    We converge multi-disciplinary, multi-sectoral modeling and data analysis tools on an urban watershed to examine the feedbacks of concentrated and connected infrastructure on the environment. Our focus area is the Lower Hudson River Basin (LHRB). The LHRB captures long-term and short-term energy/water stressors as it represents: 1) a coastal environment subject to sea level rise that is among the fastest in the East, impacted by a wide array of storms; 2) one of the steepest gradients in population density in the US, with Manhattan the most densely populated coastal county in the nation; 3) energy/water infrastructure serving the largest metropolitan area in the US; 4) a history of environmental impacts, ranging from heatwaves to hurricanes, that can be used to hindcast; and 5) a wealth of historic and real-time data, extensive monitoring facilities and existing sector-specific models that can be leveraged. We detail two case studies on "water infrastructure and stressors" and "heatwaves and energy-water demands." The impact of a hypothetical failure of Oradell Dam (on the Hackensack River, a tributary of the Hudson River) coincident with a hurricane, and urban power demands under current and future heat waves, are examined with high-resolution (meter to km scale) earth system models to illustrate energy-water nexus issues where detailed predictions can shape response and mitigation strategies.

  18. Value of river discharge data for global-scale hydrological modeling

    Directory of Open Access Journals (Sweden)

    M. Hunger

    2008-05-01

    Full Text Available This paper investigates the value of observed river discharge data for global-scale hydrological modeling of a number of flow characteristics that are e.g. required for assessing water resources, flood risk and habitat alteration of aquatic ecosystems. An improved version of the WaterGAP Global Hydrology Model (WGHM) was tuned against measured discharge using either the 724-station dataset (V1), against which former model versions were tuned, or an extended dataset (V2) of 1235 stations. WGHM is tuned by adjusting one model parameter (γ) that affects runoff generation from land areas in order to fit simulated and observed long-term average discharge at tuning stations. In basins where γ does not suffice to tune the model, two correction factors are applied successively: the areal correction factor corrects local runoff in a basin and the station correction factor adjusts discharge directly at the gauge. Using station correction is unfavorable, as it makes discharge discontinuous at the gauge and inconsistent with runoff in the upstream basin. The study results are as follows. (1) Comparing V2 to V1, the global land area covered by tuning basins increases by 5% and the area where the model can be tuned by only adjusting γ increases by 8%. However, the area where a station correction factor (and not only an areal correction factor) has to be applied more than doubles. (2) The value of additional discharge information for representing the spatial distribution of long-term average discharge (and thus renewable water resources) with WGHM is high, particularly for river basins outside of the V1 tuning area and in regions where the refined dataset provides a significant subdivision of formerly extended tuning basins (average V2 basin size less than half the V1 basin size). If the additional discharge information were not used for tuning, simulated long-term average discharge would differ from the observed one by a factor of, on average, 1.8 in the formerly
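
    The tuning step described above, adjusting a single parameter γ until simulated long-term average discharge matches the observed value, with a correction factor as a fallback, can be sketched with a bisection search. The runoff relation used here is a made-up monotone stand-in for the WGHM runoff scheme, not its actual equations.

```python
def calibrate_gamma(simulate, q_obs, lo=0.1, hi=5.0, iters=60):
    """Tune the runoff parameter gamma by bisection so that simulated long-term
    average discharge matches the observed value q_obs. simulate(gamma) is
    assumed monotone in gamma. If q_obs cannot be reached within [lo, hi],
    return the closest bounding gamma plus an areal correction factor
    (q_obs / q_sim), mimicking the fallback used when gamma alone cannot
    tune a basin.
    """
    q_lo, q_hi = simulate(lo), simulate(hi)
    if not min(q_lo, q_hi) <= q_obs <= max(q_lo, q_hi):
        gamma = lo if abs(q_lo - q_obs) <= abs(q_hi - q_obs) else hi
        return gamma, q_obs / simulate(gamma)
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if (simulate(mid) >= q_obs) == (q_lo >= q_obs):
            lo, q_lo = mid, simulate(mid)
        else:
            hi = mid
    return 0.5 * (lo + hi), 1.0

# Hypothetical basin: runoff fraction shrinks as gamma grows (illustrative only).
precip = 800.0  # long-term mean precipitation, mm/yr
simulate = lambda g: precip * 0.9 ** g  # stand-in for the actual runoff scheme
gamma, acf = calibrate_gamma(simulate, q_obs=600.0)
print(round(gamma, 3), acf)
```

When the observed discharge lies inside the attainable range, γ alone fits the basin (correction factor 1.0); otherwise the returned areal correction factor scales runoff the way the paper's first fallback does.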

  19. Extension of landscape-based population viability models to ecoregional scales for conservation planning

    Science.gov (United States)

    Thomas W. Bonnot; Frank R. III Thompson; Joshua Millspaugh

    2011-01-01

    Landscape-based population models are potentially valuable tools in facilitating conservation planning and actions at large scales. However, such models have rarely been applied at ecoregional scales. We extended landscape-based population models to ecoregional scales for three species of concern in the Central Hardwoods Bird Conservation Region and compared model...

  20. Impact of Age and Hearing Impairment on Work Performance during Long Working Hours

    Directory of Open Access Journals (Sweden)

    Verena Wagner-Hartl

    2018-01-01

    Full Text Available Based on demographic prognoses, it must be assumed that a greater number of older workers will be found in the future labor market. How to deal with their possible age-related impairments of sensory functions, like hearing impairment and work performance during extended working time, has not been addressed explicitly until now. The study addresses this interplay. The study was performed on two consecutive days after normal working hours. The 55 participants had to "work" in the study at least three additional hours to simulate a situation of long working hours. The tested measures for (job) performance were: general attention, long-term selective attention, concentration, and reaction time. All of the investigated variables were taken at both days of the study (2 × 2 × 2 repeated measurement design). The results show effects for age, the interaction of hearing impairment and time of measurement, and effects of the measurement time. Older participants reacted slower than younger participants did. Furthermore, younger participants reacted more frequently in a correct way. Hearing impairment seems to have a negative impact especially on measures of false reactions, and therefore especially on measurement time 1. The results can be interpreted in a way that hearing-impaired participants are able to compensate their deficits over time.

  1. Impact of Age and Hearing Impairment on Work Performance during Long Working Hours.

    Science.gov (United States)

    Wagner-Hartl, Verena; Grossi, Nina R; Kallus, K Wolfgang

    2018-01-09

    Based on demographic prognoses, it must be assumed that a greater number of older workers will be found in the future labor market. How to deal with their possible age-related impairments of sensory functions, like hearing impairment and work performance during extended working time, has not been addressed explicitly until now. The study addresses this interplay. The study was performed on two consecutive days after normal working hours. The 55 participants had to "work" in the study at least three additional hours to simulate a situation of long working hours. The tested measures for (job) performance were: general attention, long-term selective attention, concentration, and reaction time. All of the investigated variables were taken at both days of the study (2 × 2 × 2 repeated measurement design). The results show effects for age, the interaction of hearing impairment and time of measurement, and effects of the measurement time. Older participants reacted slower than younger participants did. Furthermore, younger participants reacted more frequently in a correct way. Hearing impairment seems to have a negative impact especially on measures of false reactions, and therefore especially on measurement time 1. The results can be interpreted in a way that hearing-impaired participants are able to compensate their deficits over time.

  2. RESOLVING NEIGHBORHOOD-SCALE AIR TOXICS MODELING: A CASE STUDY IN WILMINGTON, CALIFORNIA

    Science.gov (United States)

    Air quality modeling is useful for characterizing exposures to air pollutants. While models typically provide results on regional scales, there is a need for refined modeling approaches capable of resolving concentrations on the scale of tens of meters, across modeling domains 1...

  3. Multi Scale Models for Flexure Deformation in Sheet Metal Forming

    Directory of Open Access Journals (Sweden)

    Di Pasquale Edmondo

    2016-01-01

    Full Text Available This paper presents the application of multi-scale techniques to the simulation of sheet metal forming using the one-step method. When a blank flows over the die radius, it undergoes a complex cycle of bending and unbending. First, we describe an original model for the prediction of residual plastic deformation and stresses in the blank section. This model, working on a scale about one hundred times smaller than the element size, has been implemented in SIMEX, a one-step sheet metal forming simulation code. The use of this multi-scale modelling technique greatly improves the accuracy of the solution. Finally, we discuss the implications of this analysis for the prediction of springback in metal forming.

  4. Modeling sediment yield in small catchments at event scale: Model comparison, development and evaluation

    Science.gov (United States)

    Tan, Z.; Leung, L. R.; Li, H. Y.; Tesfa, T. K.

    2017-12-01

    Sediment yield (SY) has significant impacts on river biogeochemistry and aquatic ecosystems, but it is rarely represented in Earth System Models (ESMs). Existing SY models focus on estimating SY from large river basins or individual catchments, so it is not clear how well they would simulate SY in ESMs at larger spatial scales and globally. In this study, we compare the strengths and weaknesses of eight well-known SY models in simulating annual mean SY at about 400 small catchments ranging in size from 0.22 to 200 km2 in the US, Canada and Puerto Rico. In addition, we also investigate the performance of these models in simulating event-scale SY at six catchments in the US using high-quality hydrological inputs. The model comparison shows that none of the models can reproduce the SY at large spatial scales, but the Morgan model performs better than the others despite its simplicity. In all model simulations, large underestimates occur in catchments with very high SY. A possible pathway to reduce the discrepancies is to incorporate sediment detachment by landsliding, which is currently not included in the models being evaluated. We propose a new SY model that is based on the Morgan model but includes a landsliding soil detachment scheme that is being developed. Along with the results of the model comparison and evaluation, preliminary findings from the revised Morgan model will be presented.

  5. Genome scale metabolic modeling of cancer

    DEFF Research Database (Denmark)

    Nilsson, Avlant; Nielsen, Jens

    2017-01-01

    Cancer cells reprogram metabolism to support rapid proliferation and survival. Energy metabolism is particularly important for growth and genes encoding enzymes involved in energy metabolism are frequently altered in cancer cells. A genome scale metabolic model (GEM) is a mathematical formalization of metabolism which allows simulation and hypotheses testing of metabolic strategies. It has successfully been applied to many microorganisms and is now used to study cancer metabolism. Generic models of human metabolism have been reconstructed based on the existence of metabolic genes in the human genome…

  6. European Continental Scale Hydrological Model, Limitations and Challenges

    Science.gov (United States)

    Rouholahnejad, E.; Abbaspour, K.

    2014-12-01

    The pressures on water resources due to increasing levels of societal demand, increasing conflicts of interest and uncertainties with regard to freshwater availability create challenges for water managers and policymakers in many parts of Europe. At the same time, climate change adds a new level of pressure and uncertainty with regard to freshwater supplies. On the other hand, the small-scale sectoral structure of water management is now reaching its limits. The integrated management of water in basins requires a new level of consideration, where water bodies are viewed in the context of the whole river system and managed as a unit within their basins. In this research we present the limitations and challenges of modelling the hydrology of the European continent. The challenges include: data availability at continental scale and the use of globally available data, stream-gauge data quality and its misleading impact on model calibration, calibration of a large-scale distributed model, uncertainty quantification, and computation time. We describe how to avoid over-parameterization in the calibration process and introduce a parallel processing scheme to overcome long computation times. We used the Soil and Water Assessment Tool (SWAT) program as an integrated hydrology and crop growth simulator to model the water resources of the European continent. Different components of water resources are simulated, and crop yield and water quality are considered at the Hydrological Response Unit (HRU) level. The water resources are quantified at subbasin level with monthly time intervals for the period 1970-2006. The use of a large-scale, high-resolution water resources model enables consistent and comprehensive examination of integrated system behavior through physically based, data-driven simulation and provides an overall picture of the temporal and spatial distribution of water resources across the continent. The calibrated model and results provide information support to the European Water

  7. Scale-free models for the structure of business firm networks.

    Science.gov (United States)

    Kitsak, Maksim; Riccaboni, Massimo; Havlin, Shlomo; Pammolli, Fabio; Stanley, H Eugene

    2010-03-01

    We study firm collaborations in the life sciences and the information and communication technology sectors. We propose an approach to characterize industrial leadership using k-shell decomposition, with top-ranking firms in terms of market value in higher k-shell layers. We find that the life sciences industry network consists of three distinct components: a "nucleus," which is a small well-connected subgraph, "tendrils," which are small subgraphs consisting of small-degree nodes connected exclusively to the nucleus, and a "bulk body," which consists of the majority of nodes. Industrial leaders, i.e., the largest companies in terms of market value, are in the highest k-shells of both networks. The nucleus of the life sciences sector is very stable: once a firm enters the nucleus, it is likely to stay there for a long time. At the same time we do not observe the above three components in the information and communication technology sector. We also conduct a systematic study of these three components in random scale-free networks. Our results suggest that the sizes of the nucleus and the tendrils in scale-free networks decrease as the exponent of the power-law degree distribution λ increases, and disappear for λ ≥ 3. We compare the k-shell structure of random scale-free model networks with two real-world business firm networks in the life sciences and in the information and communication technology sectors. We argue that the observed behavior of the k-shell structure in the two industries is consistent with the coexistence of both preferential and random agreements in the evolution of industrial networks.
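
The k-shell decomposition used in this record assigns each node the index k of the deepest k-core it survives in. A generic peeling sketch (toy graph and implementation for illustration only, not the authors' code):

```python
from collections import defaultdict

def k_shell_decomposition(edges):
    """Assign each node its k-shell index via standard k-core peeling:
    repeatedly remove nodes of degree <= k, increasing k as needed."""
    adj = defaultdict(set)
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)
    shell, k = {}, 0
    while adj:
        k = max(k, min(len(nbrs) for nbrs in adj.values()))
        peel = [n for n, nbrs in adj.items() if len(nbrs) <= k]
        while peel:
            n = peel.pop()
            if n not in adj:
                continue          # already peeled via a duplicate entry
            shell[n] = k
            for m in adj.pop(n):  # remaining neighbours are still live
                adj[m].discard(n)
                if len(adj[m]) <= k:
                    peel.append(m)
    return shell

# toy graph: a 4-clique (a "nucleus") with a two-node chain (a "tendril")
edges = [(1, 2), (1, 3), (1, 4), (2, 3), (2, 4), (3, 4), (4, 5), (5, 6)]
shells = k_shell_decomposition(edges)
```

On this toy graph the clique nodes end up in shell 3 and the pendant chain in shell 1, mirroring the nucleus/tendril distinction the abstract describes.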

  8. Scaling analysis and model estimation of solar corona index

    Science.gov (United States)

    Ray, Samujjwal; Ray, Rajdeep; Khondekar, Mofazzal Hossain; Ghosh, Koushik

    2018-04-01

    A monthly average solar green coronal index time series for the period from January 1939 to December 2008, collected from NOAA (the National Oceanic and Atmospheric Administration), has been analysed in this paper from the perspective of scaling analysis and modelling. Smoothing and de-noising were carried out using a suitable mother wavelet as a prerequisite. The Finite Variance Scaling Method (FVSM), the Higuchi method, rescaled range (R/S) analysis and a generalized method have been applied to calculate the scaling exponents and fractal dimensions of the time series. The autocorrelation function (ACF) has been used to identify an autoregressive (AR) process, and the partial autocorrelation function (PACF) to determine the order of the AR model. Finally, a best-fit model has been proposed using the Yule-Walker method, with supporting results on goodness of fit and the wavelet spectrum. The results reveal an anti-persistent, Short Range Dependent (SRD), self-similar property with signatures of non-causality, non-stationarity and nonlinearity in the data series. The model shows the best fit to the data under observation.
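
The Yule-Walker fitting step described in this abstract can be sketched with NumPy (a generic illustration, not the authors' code; the AR(1) toy process and its coefficient 0.6 are invented for the check):

```python
import numpy as np

def yule_walker(x, order):
    """Estimate AR(order) coefficients and innovation variance from the
    sample autocovariances via the Yule-Walker equations."""
    x = np.asarray(x, dtype=float) - np.mean(x)
    n = len(x)
    # biased sample autocovariances r[0..order]
    r = np.array([np.dot(x[: n - k], x[k:]) / n for k in range(order + 1)])
    # Toeplitz system  R phi = r[1:]
    R = np.array([[r[abs(i - j)] for j in range(order)] for i in range(order)])
    phi = np.linalg.solve(R, r[1:])          # AR coefficients
    sigma2 = r[0] - phi @ r[1:]              # innovation variance
    return phi, sigma2

# toy check on a simulated AR(1) process  x_t = 0.6 x_{t-1} + e_t
rng = np.random.default_rng(0)
e = rng.standard_normal(20000)
x = np.zeros_like(e)
for t in range(1, len(e)):
    x[t] = 0.6 * x[t - 1] + e[t]
phi, sigma2 = yule_walker(x, 1)              # phi[0] close to 0.6
```
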

  9. Evaluation of drought propagation in an ensemble mean of large-scale hydrological models

    NARCIS (Netherlands)

    Loon, van A.F.; Huijgevoort, van M.H.J.; Lanen, van H.A.J.

    2012-01-01

    Hydrological drought is increasingly studied using large-scale models. It is, however, not certain whether large-scale models reproduce the development of hydrological drought correctly. The pressing question is how well do large-scale models simulate the propagation from meteorological to hydrological

  10. Assessment of long-term channel changes in the Mekong River using remote sensing and a channel-evolution model

    Science.gov (United States)

    Miyazawa, N.

    2011-12-01

    River-channel changes are a key factor affecting physical, ecological and management issues in the fluvial environment. In this study, long-term channel changes in the Mekong River were assessed using remote sensing and a channel-evolution model. A channel-evolution model for calculating long-term channel changes of a meandering river was developed from a previous fluid-dynamic model [Zolezzi and Seminara, 2001], and was applied in order to quantify channel changes of two meandering reaches in the Mekong River. Few attempts have been made so far to combine remote-sensing observation of meandering planform change with the application of channel-evolution models within relatively small-scale gravel-bed systems in humid temperate regions. The novel point of the present work is to link a state-of-the-art meandering planform evolution model with observed morphological changes in large-scale sand-bed rivers with higher banks in tropical monsoonal climate regions, which are highly dynamic systems, and to assess its performance. Unstable extents of the reaches could be identified historically using remote-sensing techniques. The instability caused (i) bank erosion and accretion of meander bends and (ii) movement or development of bars and changes in the flow around the bars. The remote-sensing measurements indicate that maximum erosion occurred downstream of the maximum curvature of the river-center line in both reaches. The model simulations indicate that, under the mean annual peak discharge, the maximum excess longitudinal velocity near the banks occurs downstream of the maximum curvature in both reaches. The channel-migration coefficients of the reaches were calibrated by comparing remote-sensing measurements and model simulations. The difference in the migration coefficients between the two reaches depends on the difference in bank height rather than on the geotechnical properties of floodplain sediments. Possible eroded floodplain areas and accreted floodplain

  11. Advanced modeling to accelerate the scale up of carbon capture technologies

    Energy Technology Data Exchange (ETDEWEB)

    Miller, David C.; Sun, XIN; Storlie, Curtis B.; Bhattacharyya, Debangsu

    2015-06-01

    In order to help meet the goals of the DOE carbon capture program, the Carbon Capture Simulation Initiative (CCSI) was launched in early 2011 to develop, demonstrate, and deploy advanced computational tools and validated multi-scale models to reduce the time required to develop and scale-up new carbon capture technologies. This article focuses on essential elements related to the development and validation of multi-scale models in order to help minimize risk and maximize learning as new technologies progress from pilot to demonstration scale.

  12. Scale modeling of reinforced concrete structures subjected to seismic loading

    International Nuclear Information System (INIS)

    Dove, R.C.

    1983-01-01

    Reinforced concrete Category I structures are so large that the possibility of seismically testing the prototype structures under controlled conditions is essentially nonexistent. However, experimental data, from which important structural properties can be determined and existing and new methods of seismic analysis benchmarked, are badly needed. As a result, seismic experiments on scaled models are of considerable interest. In this paper, the scaling laws are developed in some detail so that assumptions and choices based on judgement can be clearly recognized and their effects discussed. The scaling laws developed are then used to design a reinforced concrete model of a Category I structure. Finally, how scaling is affected by various types of damping (viscous, structural, and Coulomb) is discussed.

  13. A Multi-Stage Maturity Model for Long-Term IT Outsourcing Relationship Success

    Science.gov (United States)

    Luong, Ming; Stevens, Jeff

    2015-01-01

    The Multi-Stage Maturity Model for Long-Term IT Outsourcing Relationship Success, a theoretical stages-of-growth model, explains long-term success in IT outsourcing relationships. Research showed the IT outsourcing relationship life cycle consists of four distinct, sequential stages: contract, transition, support, and partnership. The model was…

  14. A Two-Scale Reduced Model for Darcy Flow in Fractured Porous Media

    KAUST Repository

    Chen, Huangxin

    2016-06-01

    In this paper, we develop a two-scale reduced model for simulating the Darcy flow in two-dimensional porous media with conductive fractures. We apply the approach motivated by the embedded fracture model (EFM) to simulate the flow on the coarse scale, and the effect of fractures on each coarse-scale grid cell intersecting with fractures is represented by the discrete fracture model (DFM) on the fine scale. In the DFM used on the fine scale, the matrix-fracture system is resolved on an unstructured grid which represents the fractures accurately, while in the EFM used on the coarse scale, the flux interaction between fractures and matrix is treated as a source term, and the matrix-fracture system can be resolved on a structured grid. The Raviart-Thomas mixed finite element methods are used for the solution of the coupled flows in the matrix and the fractures on both fine and coarse scales. Numerical results are presented to demonstrate the efficiency of the proposed model for simulation of flow in fractured porous media.
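
The EFM idea of treating the matrix-fracture exchange as a coarse-grid source term can be sketched in one dimension (a hypothetical finite-difference toy, not the paper's Raviart-Thomas mixed FEM; all coefficients are invented for illustration):

```python
import numpy as np

# 1-D sketch: the fracture is not meshed on the coarse grid; its exchange
# with the matrix enters the pressure equation as a source term in the
# single cell it crosses.
n, L = 50, 1.0                 # coarse cells, domain length
h = L / n
Km = 1.0                       # matrix permeability (invented)
p_frac = 2.0                   # prescribed fracture pressure (invented)
ci = 25                        # index of the cell the fracture crosses
Tf = 100.0                     # matrix-fracture transfer coefficient (invented)

A = np.zeros((n, n))
b = np.zeros(n)
for i in range(n):             # standard 3-point Laplacian, p = 0 at both ends
    A[i, i] = 2 * Km / h**2
    if i > 0:
        A[i, i - 1] = -Km / h**2
    if i < n - 1:
        A[i, i + 1] = -Km / h**2
A[ci, ci] += Tf                # exchange term  Tf * (p_frac - p_ci) ...
b[ci] += Tf * p_frac           # ... split into matrix and right-hand side

p = np.linalg.solve(A, b)      # pressure peaks in the fracture cell
```

By the discrete maximum principle the solution peaks in the fracture cell and stays strictly below the fracture pressure, which is the qualitative behavior the source-term coupling is meant to capture.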

  15. Fractionally Integrated Flux model and Scaling Laws in Weather and Climate

    Science.gov (United States)

    Schertzer, Daniel; Lovejoy, Shaun

    2013-04-01

    The Fractionally Integrated Flux model (FIF) has been extensively used to model intermittent observables, like the velocity field, by defining them with the help of a fractional integration of a conservative (i.e. strictly scale invariant) flux, such as the turbulent energy flux. It indeed corresponds to a well-defined modelling approach that yields the observed scaling laws. Generalised Scale Invariance (GSI) enables FIF to deal with anisotropic fractional integrations and has been rather successful in defining and modelling a unique regime of scaling anisotropic turbulence up to planetary scales. This turbulence has an effective dimension of 23/9 = 2.55... instead of the classically hypothesised 2D and 3D turbulent regimes, respectively for large and small spatial scales. It therefore theoretically eliminates an implausible "dimension transition" between these two regimes and the resulting requirement of a turbulent energy "mesoscale gap", whose empirical evidence has been brought more and more into question. More recently, GSI-FIF was used to analyse climate, therefore at much larger time scales. Indeed, the 23/9-dimensional regime necessarily breaks up at the outer spatial scales. The corresponding transition range, which can be called "macroweather", seems to have many interesting properties; e.g. it rather corresponds to a fractional differentiation in time with a roughly flat frequency spectrum. Furthermore, this transition opens the possibility of scaling space-time climate fluctuations at much larger time scales, with a much stronger scaling anisotropy between time and space. Lovejoy, S. and D. Schertzer (2013). The Weather and Climate: Emergent Laws and Multifractal Cascades. Cambridge Press (in press). Schertzer, D. et al. (1997). Fractals 5(3): 427-471. Schertzer, D. and S. Lovejoy (2011). International Journal of Bifurcation and Chaos 21(12): 3417-3456.
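
The FIF construction summarized above is usually written as a fractional integration of order H of a power of the conservative flux; the form below is the standard one from the Schertzer-Lovejoy literature and is given here as an illustrative sketch (notation: flux ε, flux exponent a, integration order H, spatial dimension D):

```latex
% observable v obtained by fractional integration (order H) of the flux
v_\lambda(x) \;=\; \epsilon_\lambda^{\,a} \ast |x|^{\,H-D}
           \;=\; \int \epsilon_\lambda^{\,a}(x')\,|x - x'|^{\,H-D}\,\mathrm{d}^{D}x'
% e.g. for the turbulent velocity field, a = 1/3 and H = 1/3 recover
% Kolmogorov scaling  \Delta v \sim \epsilon^{1/3}\,\Delta x^{1/3}
```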

  16. Evaluation of drought propagation in an ensemble mean of large-scale hydrological models

    Directory of Open Access Journals (Sweden)

    A. F. Van Loon

    2012-11-01

    Full Text Available Hydrological drought is increasingly studied using large-scale models. It is, however, not certain whether large-scale models reproduce the development of hydrological drought correctly. The pressing question is how well do large-scale models simulate the propagation from meteorological to hydrological drought? To answer this question, we evaluated the simulation of drought propagation in an ensemble mean of ten large-scale models, both land-surface models and global hydrological models, that participated in the model intercomparison project of WATCH (WaterMIP). For a selection of case study areas, we studied drought characteristics (number of droughts, duration, severity), drought propagation features (pooling, attenuation, lag, lengthening), and hydrological drought typology (classical rainfall deficit drought, rain-to-snow-season drought, wet-to-dry-season drought, cold snow season drought, warm snow season drought, composite drought).

    Drought characteristics simulated by large-scale models clearly reflected drought propagation; i.e. drought events became fewer and longer when moving through the hydrological cycle. However, more differentiation was expected between fast and slowly responding systems, with slowly responding systems having fewer and longer droughts in runoff than fast responding systems. This was not found using large-scale models. Drought propagation features were poorly reproduced by the large-scale models, because runoff reacted immediately to precipitation, in all case study areas. This fast reaction to precipitation, even in cold climates in winter and in semi-arid climates in summer, also greatly influenced the hydrological drought typology as identified by the large-scale models. In general, the large-scale models had the correct representation of drought types, but the percentages of occurrence had some important mismatches, e.g. an overestimation of classical rainfall deficit droughts, and an
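
The drought characteristics discussed above (number of events, duration, severity) are typically derived with a threshold method: an event is an uninterrupted run below a threshold, and its severity is the cumulative deficit. A minimal sketch (with a fixed, invented threshold rather than the variable monthly thresholds such studies often use):

```python
import numpy as np

def drought_events(flow, threshold):
    """Identify droughts as uninterrupted runs below a threshold; return
    (duration, severity) per event, severity being the cumulative deficit."""
    events, duration, deficit = [], 0, 0.0
    for q in flow:
        if q < threshold:
            duration += 1
            deficit += threshold - q
        elif duration:
            events.append((duration, deficit))
            duration, deficit = 0, 0.0
    if duration:                       # close an event running to the end
        events.append((duration, deficit))
    return events

# toy monthly runoff series with an invented fixed threshold of 2.0
flow = np.array([5, 4, 1, 1, 2, 6, 7, 1, 5, 6, 0, 1], dtype=float)
events = drought_events(flow, 2.0)    # three events below the threshold
```

Pooling (merging events separated by short wet spells) and lagging, as studied in the paper, would be additional post-processing steps on the event list.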

  17. Effects of coarse-graining on the scaling behavior of long-range correlated and anti-correlated signals.

    Science.gov (United States)

    Xu, Yinlin; Ma, Qianli D Y; Schmitt, Daniel T; Bernaola-Galván, Pedro; Ivanov, Plamen Ch

    2011-11-01

    We investigate how various coarse-graining (signal quantization) methods affect the scaling properties of long-range power-law correlated and anti-correlated signals, quantified by the detrended fluctuation analysis. Specifically, for coarse-graining in the magnitude of a signal, we consider (i) the Floor, (ii) the Symmetry and (iii) the Centro-Symmetry coarse-graining methods. We find that for anti-correlated signals coarse-graining in the magnitude leads to a crossover to random behavior at large scales, and that with increasing width of the coarse-graining partition interval Δ, this crossover moves to intermediate and small scales. In contrast, the scaling of positively correlated signals is less affected by the coarse-graining, with no observable changes for Δ < 1, while for Δ > 1 a crossover appears at small scales and moves to intermediate and large scales with increasing Δ. For very rough coarse-graining (Δ > 3) based on the Floor and Symmetry methods, the position of the crossover stabilizes, in contrast to the Centro-Symmetry method, where the crossover continuously moves across scales and leads to random behavior at all scales, thus indicating a much stronger effect of the Centro-Symmetry method compared to the Floor and Symmetry methods. For coarse-graining in time, where data points are averaged in non-overlapping time windows, we find that the scaling for both anti-correlated and positively correlated signals is practically preserved. The results of our simulations are useful for the correct interpretation of the correlation and scaling properties of symbolic sequences.
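
The two kinds of coarse-graining compared above can be sketched as follows (a plausible reading of the Floor method, mapping each value to the lower edge of its partition interval of width Δ; the paper's exact convention may differ):

```python
import numpy as np

def floor_coarse_grain(x, delta):
    """Floor-style magnitude coarse-graining: replace each value by the
    lower edge of the partition interval of width delta containing it."""
    return delta * np.floor(np.asarray(x, dtype=float) / delta)

def time_coarse_grain(x, w):
    """Coarse-graining in time: average non-overlapping windows of length w."""
    x = np.asarray(x, dtype=float)
    m = (len(x) // w) * w
    return x[:m].reshape(-1, w).mean(axis=1)

x = np.array([0.12, -0.37, 1.94, 0.51])
mag = floor_coarse_grain(x, 0.5)   # [0.0, -0.5, 1.5, 0.5]
avg = time_coarse_grain(x, 2)      # [-0.125, 1.225]
```

The scaling comparison in the paper would then apply detrended fluctuation analysis to the original and coarse-grained series.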

  18. Tuneable resolution as a systems biology approach for multi-scale, multi-compartment computational models.

    Science.gov (United States)

    Kirschner, Denise E; Hunt, C Anthony; Marino, Simeone; Fallahi-Sichani, Mohammad; Linderman, Jennifer J

    2014-01-01

    The use of multi-scale mathematical and computational models to study complex biological processes is becoming increasingly productive. Multi-scale models span a range of spatial and/or temporal scales and can encompass multi-compartment (e.g., multi-organ) models. Modeling advances are enabling virtual experiments to explore and answer questions that are problematic to address in the wet-lab. Wet-lab experimental technologies now allow scientists to observe, measure, record, and analyze experiments focusing on different system aspects at a variety of biological scales. We need the technical ability to mirror that same flexibility in virtual experiments using multi-scale models. Here we present a new approach, tuneable resolution, which can begin providing that flexibility. Tuneable resolution involves fine- or coarse-graining existing multi-scale models at the user's discretion, allowing adjustment of the level of resolution specific to a question, an experiment, or a scale of interest. Tuneable resolution expands options for revising and validating mechanistic multi-scale models, can extend the longevity of multi-scale models, and may increase computational efficiency. The tuneable resolution approach can be applied to many model types, including differential equation, agent-based, and hybrid models. We demonstrate our tuneable resolution ideas with examples relevant to infectious disease modeling, illustrating key principles at work. © 2014 The Authors. WIREs Systems Biology and Medicine published by Wiley Periodicals, Inc.

  19. Isometric Scaling in Developing Long Bones Is Achieved by an Optimal Epiphyseal Growth Balance.

    Science.gov (United States)

    Stern, Tomer; Aviram, Rona; Rot, Chagai; Galili, Tal; Sharir, Amnon; Kalish Achrai, Noga; Keller, Yosi; Shahar, Ron; Zelzer, Elazar

    2015-08-01

    One of the major challenges that developing organs face is scaling, that is, the adjustment of physical proportions during the massive increase in size. Although organ scaling is fundamental for development and function, little is known about the mechanisms that regulate it. Bone superstructures are projections that typically serve for tendon and ligament insertion or articulation and, therefore, their position along the bone is crucial for musculoskeletal functionality. As bones are rigid structures that elongate only from their ends, it is unclear how superstructure positions are regulated during growth to end up in the right locations. Here, we document the process of longitudinal scaling in developing mouse long bones and uncover the mechanism that regulates it. To that end, we performed a computational analysis of hundreds of three-dimensional micro-CT images, using a newly developed method for recovering the morphogenetic sequence of developing bones. Strikingly, analysis revealed that the relative position of all superstructures along the bone is highly preserved during more than a 5-fold increase in length, indicating isometric scaling. It has been suggested that during development, bone superstructures are continuously reconstructed and relocated along the shaft, a process known as drift. Surprisingly, our results showed that most superstructures did not drift at all. Instead, we identified a novel mechanism for bone scaling, whereby each bone exhibits a specific and unique balance between proximal and distal growth rates, which accurately maintains the relative position of its superstructures. Moreover, we show mathematically that this mechanism minimizes the cumulative drift of all superstructures, thereby optimizing the scaling process. Our study reveals a general mechanism for the scaling of developing bones. More broadly, these findings suggest an evolutionary mechanism that facilitates variability in bone morphology by controlling the activity of
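
The growth-balance mechanism described in this abstract can be checked with a few lines of arithmetic: if a superstructure sits at distance x from the proximal end of a bone of length L, preserving the relative position x/L requires the proximal-to-distal growth ratio g_p/g_d = x/(L - x). A numeric sketch (illustrative values only, not the authors' data):

```python
# If the proximal end adds g_p and the distal end adds g_d per step, the
# superstructure's distance from the proximal end becomes x + g_p and the
# length becomes L + g_p + g_d. Requiring
#   (x + g_p) / (L + g_p + g_d) = x / L   gives   g_p / g_d = x / (L - x).
L, x = 10.0, 3.0            # bone length and superstructure position (illustrative)
total = 2.0                 # total elongation per growth step
ratio = x / (L - x)         # required proximal/distal growth balance
g_p = total * ratio / (1 + ratio)
g_d = total - g_p
for _ in range(20):         # a 5-fold increase in length, as in the study
    x, L = x + g_p, L + g_p + g_d
rel = x / L                 # relative position stays at 0.3
```
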

  1. EPOS Multi-Scale Laboratory platform: a long-term reference tool for experimental Earth Sciences

    Science.gov (United States)

    Trippanera, Daniele; Tesei, Telemaco; Funiciello, Francesca; Sagnotti, Leonardo; Scarlato, Piergiorgio; Rosenau, Matthias; Elger, Kirsten; Ulbricht, Damian; Lange, Otto; Calignano, Elisa; Spiers, Chris; Drury, Martin; Willingshofer, Ernst; Winkler, Aldo

    2017-04-01

    With continuous progress in scientific research, a large amount of data has been and will be produced. Access to and sharing of these data, along with their storage and homogenization within a unique and coherent framework, is a new challenge for the whole scientific community. This is particularly true for geo-scientific laboratories, which encompass the most diverse Earth Science disciplines and data typologies. To this aim, the "Multiscale Laboratories" Work Package (WP16), operating in the framework of the European Plate Observing System (EPOS), is developing a virtual platform of geo-scientific data and services for the worldwide community of laboratories. This long-term project aims at merging top-class multidisciplinary laboratories in Geoscience into a coherent and collaborative network, facilitating the standardization of virtual access to data, data products and software. This will help our community evolve beyond the stage in which most data produced by the different laboratories are available only within the related scholarly publications (often as print versions only) or remain unpublished and inaccessible on local devices. The EPOS multi-scale laboratory platform will provide the possibility to easily share and discover data by means of open-access, DOI-referenced, online data publication, including long-term storage, managing and curation services, and to set up a cohesive community of laboratories. WP16 is starting with three pilot-case laboratory types: (1) rock physics, (2) palaeomagnetism, and (3) analogue modelling. As a proof of concept, the first analogue modelling datasets have been published via GFZ Data Services (http://doidb.wdc-terra.org/search/public/ui?&sort=updated+desc&q=epos). The datasets include rock-analogue material properties (e.g. friction data, rheology data, SEM imagery), as well as supplementary figures, images and movies from experiments on tectonic processes. A metadata catalogue tailored to the specific communities

  2. Wind Farm Wake Models From Full Scale Data

    DEFF Research Database (Denmark)

    Knudsen, Torben; Bak, Thomas

    2012-01-01

    This investigation is part of the EU FP7 project “Distributed Control of Large-Scale Offshore Wind Farms”. The overall goal in this project is to develop wind farm controllers giving power set points to individual turbines in the farm in order to minimise mechanical loads and optimise power. One...... on real full scale data. The modelling is based on so called effective wind speed. It is shown that there is a wake for a wind direction range of up to 20 degrees. Further, when accounting for the wind direction it is shown that the two model structures considered can both fit the experimental data...

  3. Preparing the Model for Prediction Across Scales (MPAS) for global retrospective air quality modeling

    Science.gov (United States)

    The US EPA has a plan to leverage recent advances in meteorological modeling to develop a "Next-Generation" air quality modeling system that will allow consistent modeling of problems from global to local scale. The meteorological model of choice is the Model for Predic...

  4. Coarse-graining to the meso and continuum scales with molecular-dynamics-like models

    Science.gov (United States)

    Plimpton, Steve

    Many engineering-scale problems that industry or the national labs try to address with particle-based simulations occur at length and time scales well beyond the most optimistic hopes of traditional coarse-graining methods for molecular dynamics (MD), which typically start at the atomic scale and build upward. However classical MD can be viewed as an engine for simulating particles at literally any length or time scale, depending on the models used for individual particles and their interactions. To illustrate I'll highlight several coarse-grained (CG) materials models, some of which are likely familiar to molecular-scale modelers, but others probably not. These include models for water droplet freezing on surfaces, dissipative particle dynamics (DPD) models of explosives where particles have internal state, CG models of nano or colloidal particles in solution, models for aspherical particles, Peridynamics models for fracture, and models of granular materials at the scale of industrial processing. All of these can be implemented as MD-style models for either soft or hard materials; in fact they are all part of our LAMMPS MD package, added either by our group or contributed by collaborators. Unlike most all-atom MD simulations, CG simulations at these scales often involve highly non-uniform particle densities. So I'll also discuss a load-balancing method we've implemented for these kinds of models, which can improve parallel efficiencies. From the physics point-of-view, these models may be viewed as non-traditional or ad hoc. But because they are MD-style simulations, there's an opportunity for physicists to add statistical mechanics rigor to individual models. Or, in keeping with a theme of this session, to devise methods that more accurately bridge models from one scale to the next.

  5. Use of genome-scale microbial models for metabolic engineering

    DEFF Research Database (Denmark)

    Patil, Kiran Raosaheb; Åkesson, M.; Nielsen, Jens

    2004-01-01

    Metabolic engineering serves as an integrated approach to design new cell factories by providing rational design procedures and valuable mathematical and experimental tools. Mathematical models have an important role for phenotypic analysis, but can also be used for the design of optimal metaboli...... network structures. The major challenge for metabolic engineering in the post-genomic era is to broaden its design methodologies to incorporate genome-scale biological data. Genome-scale stoichiometric models of microorganisms represent a first step in this direction....

  6. Penalized Estimation in Large-Scale Generalized Linear Array Models

    DEFF Research Database (Denmark)

    Lund, Adam; Vincent, Martin; Hansen, Niels Richard

    2017-01-01

    Large-scale generalized linear array models (GLAMs) can be challenging to fit. Computation and storage of the tensor product design matrix can be impossible due to time and memory constraints, and previously considered design-matrix-free algorithms do not scale well with the dimension...
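
    The storage problem GLAMs face comes from the Kronecker-structured design matrix, which never needs to be formed explicitly: the standard identity (X1 ⊗ X2) vec(Θ) = vec(X2 Θ X1ᵀ) does the multiplication with the small marginal matrices only. A minimal sketch with toy dimensions and random data (not the authors' algorithm):

```python
import numpy as np

rng = np.random.default_rng(0)
X1 = rng.standard_normal((6, 4))     # marginal design matrix, dimension 1
X2 = rng.standard_normal((5, 3))     # marginal design matrix, dimension 2
Theta = rng.standard_normal((3, 4))  # coefficient array

# Naive route: materialize the full Kronecker design matrix (infeasible at scale).
full = np.kron(X1, X2) @ Theta.ravel(order="F")

# Array route: (X1 kron X2) vec(Theta) = vec(X2 @ Theta @ X1.T),
# computed without ever forming the Kronecker product.
fast = (X2 @ Theta @ X1.T).ravel(order="F")
```

    With d marginal matrices of size n×p each, the array route costs O(n p^(d-1) · d) flops per multiply instead of O((np)^d) storage for the full matrix.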

  7. Scaling-up spatially-explicit ecological models using graphics processors

    NARCIS (Netherlands)

    Koppel, Johan van de; Gupta, Rohit; Vuik, Cornelis

    2011-01-01

    How the properties of ecosystems relate to spatial scale is a prominent topic in current ecosystem research. Despite this, spatially explicit models typically include only a limited range of spatial scales, mostly because of computing limitations. Here, we describe the use of graphics processors to
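
    The kind of spatially explicit computation that maps well onto graphics processors can be sketched as a vectorized reaction-diffusion step: every grid cell is updated by the same local rule, which is exactly the data-parallel pattern GPUs accelerate. The logistic reaction term and all parameters below are illustrative assumptions, not the models used in the paper.

```python
import numpy as np

def laplacian(u):
    """Periodic five-point Laplacian via whole-array shifts; this
    data-parallel form is what ports directly to a GPU kernel."""
    return (np.roll(u, 1, 0) + np.roll(u, -1, 0) +
            np.roll(u, 1, 1) + np.roll(u, -1, 1) - 4.0 * u)

def step(u, D=0.1, r=1.0, dt=0.01):
    """One explicit Euler step of du/dt = D*lap(u) + r*u*(1 - u)."""
    return u + dt * (D * laplacian(u) + r * u * (1.0 - u))

u = np.full((64, 64), 0.5)
for _ in range(100):   # integrate to t = 1; density grows toward carrying capacity
    u = step(u)
```

    On a GPU the same code pattern applies, with the array operations executed across thousands of threads; increasing the grid from 64² to 4096² changes only the array size, not the algorithm.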

  8. The Impact of Long-Term Climate Change on Nitrogen Runoff at the Watershed Scale.

    Science.gov (United States)

    Dorley, J.; Duffy, C.; Arenas Amado, A.

    2017-12-01

    The impact of agricultural runoff is a major concern for the water quality of Midwestern streams, largely due to excessive use of agricultural fertilizer, a major source of nutrients in many Midwestern watersheds. Understanding the long-term trends in nutrient concentration and discharge is therefore an important step toward improving water quality in these watersheds. This study analyzes the role of long-term temperature and precipitation on nitrate runoff in an agriculturally dominated watershed in Iowa. The approach establishes the concentration-discharge (C-Q) signature for the watershed using time series analysis, frequency analysis and model simulation. The climate data are from the Intergovernmental Panel on Climate Change (IPCC) model GFDL-CM3 (Geophysical Fluid Dynamics Laboratory Coupled Model 3). The historical water quality data were made available by IIHR-Hydroscience & Engineering at the University of Iowa for the Clear Creek Watershed (CCW). The CCW, located in east-central Iowa, is representative of many Midwestern watersheds, with a humid-continental climate and predominantly agricultural land use. The study shows how long-term changes in temperature and precipitation affect the C-Q dynamics, and how a relatively simple approach to data analysis and model projections can be applied to best management practices at the site.
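
    A C-Q signature of the kind mentioned above is commonly summarized by a power law C = aQ^b fitted in log-log space. A sketch on synthetic data (all numbers illustrative, not the CCW record):

```python
import numpy as np

rng = np.random.default_rng(1)
Q = rng.lognormal(mean=1.0, sigma=0.8, size=500)      # synthetic daily discharge
C = 2.0 * Q**0.4 * rng.lognormal(0.0, 0.1, size=500)  # synthetic nitrate concentration

# Fit log C = log a + b log Q by least squares; the slope b characterizes the
# export regime (b < 0 dilution, b near 0 chemostatic, b > 0 enrichment).
b, log_a = np.polyfit(np.log(Q), np.log(C), 1)
```

    Shifts in the fitted slope b between climate periods are one simple way to quantify how changing temperature and precipitation alter the C-Q dynamics.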

  9. Preparatory hydrogeological calculations for site scale models of Aberg, Beberg and Ceberg

    International Nuclear Information System (INIS)

    Gylling, B.; Lindgren, M.; Widen, H.

    1999-03-01

    The purpose of the study is to evaluate the basis for site scale models of the three sites Aberg, Beberg and Ceberg in terms of: extent and position of site scale model domains; numerical implementation of the geologic structural model; and systematic review of structural data and control of compatibility in data sets. Some of the hydrogeological features of each site are briefly described, and a summary of the results from the regional modelling exercises for Aberg, Beberg and Ceberg is given. The results from the regional models may be used as a base for determining the location and size of the site scale models and to provide such models with boundary conditions; they may also indicate suitable locations for repositories. The resulting locations and sizes for site scale models are presented in figures, which also show that the structural models interpreted by HYDRASTAR do not conflict with the repository tunnels. This has in addition been verified with TRAZON, a modified version of HYDRASTAR that checks starting positions and reveals any conflicts between starting positions and fracture zones.

  10. Modeling Lactococcus lactis using a genome-scale flux model

    Directory of Open Access Journals (Sweden)

    Nielsen Jens

    2005-06-01

    Full Text Available Abstract Background Genome-scale flux models are useful tools to represent and analyze microbial metabolism. In this work we reconstructed the metabolic network of the lactic acid bacterium Lactococcus lactis and developed a genome-scale flux model able to simulate and analyze network capabilities and whole-cell function under aerobic and anaerobic continuous cultures. Flux balance analysis (FBA) and minimization of metabolic adjustment (MOMA) were used as modeling frameworks. Results The metabolic network was reconstructed using the annotated genome sequence from L. lactis ssp. lactis IL1403 together with physiological and biochemical information. The established network comprised a total of 621 reactions and 509 metabolites, representing the overall metabolism of L. lactis. Experimental data reported in the literature were used to fit the model to phenotypic observations. Regulatory constraints had to be included to simulate certain metabolic features, such as the shift from homo- to heterolactic fermentation. A minimal medium for in silico growth was identified, indicating the requirement of four amino acids in addition to a sugar. Remarkably, de novo biosynthesis of four other amino acids was observed even when all amino acids were supplied, which is in good agreement with experimental observations. Additionally, enhanced metabolic engineering strategies for improved diacetyl producing strains were designed. Conclusion The L. lactis metabolic network can now be used for a better understanding of lactococcal metabolic capabilities and potential, for the design of enhanced metabolic engineering strategies and for integration with other types of 'omic' data, to assist in finding new information on cellular organization and function.
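
    As a hedged illustration of the FBA framework used above, the toy network below (three hypothetical reactions, not the 621-reaction L. lactis model) maximizes a biomass flux subject to steady-state mass balance S·v = 0 and flux bounds:

```python
import numpy as np
from scipy.optimize import linprog

# Toy stoichiometric matrix: rows = metabolites (A, B), columns = reactions
#   v0: (uptake) -> A    v1: A -> 2 B    v2: B -> (biomass)
S = np.array([[1.0, -1.0,  0.0],
              [0.0,  2.0, -1.0]])

c = np.array([0.0, 0.0, -1.0])                    # maximize v2 (linprog minimizes)
bounds = [(0.0, 10.0), (0.0, None), (0.0, None)]  # substrate uptake capped at 10

# FBA: maximize biomass flux subject to steady state S v = 0 and flux bounds.
res = linprog(c, A_eq=S, b_eq=np.zeros(2), bounds=bounds)
```

    The same linear program, with a genome-scale S and a measured uptake bound, is what FBA solves for a real reconstruction; MOMA instead minimizes the quadratic distance to a reference flux distribution.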

  11. Bridging time scales in cellular decision making with a stochastic bistable switch

    Directory of Open Access Journals (Sweden)

    Waldherr Steffen

    2010-08-01

    Full Text Available Abstract Background Cellular transformations which involve a significant phenotypical change of the cell's state use bistable biochemical switches as underlying decision systems. Some of these transformations act over a very long time scale on the cell population level, up to the entire lifespan of the organism. Results In this work, we aim at linking cellular decisions taking place on a time scale of years to decades with the biochemical dynamics in signal transduction and gene regulation, occurring on a time scale of minutes to hours. We show that a stochastic bistable switch forms a viable biochemical mechanism to implement decision processes on long time scales. As a case study, the mechanism is applied to model the initiation of follicle growth in mammalian ovaries, where the physiological time scale of follicle pool depletion is on the order of the organism's lifespan. We construct a simple mathematical model for this process based on experimental evidence for the involved genetic mechanisms. Conclusions Despite the underlying stochasticity, the proposed mechanism turns out to yield reliable behavior in large populations of cells subject to the considered decision process. Our model explains how the physiological time constant may emerge from the intrinsic stochasticity of the underlying gene regulatory network. Apart from ovarian follicles, the proposed mechanism may also be of relevance for other physiological systems where cells take binary decisions over a long time scale.
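
    The time-scale-bridging idea above can be sketched with a standard Gillespie simulation of a generic self-activating gene. The Hill-type rate law and all parameter values below are illustrative assumptions, not the paper's follicle model; the point is only that a stochastic bistable switch spends long stretches near each stable state.

```python
import numpy as np

def gillespie_switch(x0=0, t_end=200.0, k0=4.0, k1=40.0, K=25.0, n=4, g=1.0, seed=0):
    """Gillespie simulation of a self-activating gene: protein is produced at
    rate k0 + k1*x^n/(K^n + x^n) and degraded at rate g*x. Positive feedback
    makes a low and a high expression state metastable, so state switching
    happens on a much longer time scale than individual reactions."""
    rng = np.random.default_rng(seed)
    t, x, traj = 0.0, x0, [(0.0, x0)]
    while t < t_end:
        birth = k0 + k1 * x**n / (K**n + x**n)
        death = g * x
        total = birth + death            # total >= k0 > 0, so time always advances
        t += rng.exponential(1.0 / total)
        x += 1 if rng.random() < birth / total else -1
        traj.append((t, x))
    return traj

traj = gillespie_switch()
```

    Averaging many such trajectories gives the smooth population-level depletion curve, with the switching rate setting the emergent physiological time constant.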

  12. Wildland Fire Behaviour Case Studies and Fuel Models for Landscape-Scale Fire Modeling

    Directory of Open Access Journals (Sweden)

    Paul-Antoine Santoni

    2011-01-01

    Full Text Available This work presents the extension of a physical model for the spreading of surface fire at landscape scale. In previous work, the model was validated at laboratory scale for fire spreading across litters. The model was then modified to consider the structure of actual vegetation and was included in the wildland fire calculation system Forefire that allows converting the two-dimensional model of fire spread to three dimensions, taking into account spatial information. Two wildland fire behavior case studies were developed and used as a basis to test the simulator. Both fires were reconstructed, paying attention to the vegetation mapping, fire history, and meteorological data. The local calibration of the simulator required the development of appropriate fuel models for shrubland vegetation (maquis) for use with the model of fire spread. This study showed the capabilities of the simulator during the typical drought season characterizing the Mediterranean climate, when most wildfires occur.

  13. Atmospheric dispersion modelling over complex terrain at small scale

    Science.gov (United States)

    Nosek, S.; Janour, Z.; Kukacka, L.; Jurcakova, K.; Kellnerova, R.; Gulikova, E.

    2014-03-01

    A previous study, concerned with qualitative modelling of neutrally stratified flow over an open-cut coal mine and important surrounding topography at meso-scale (1:9000), revealed an important area for quantitative modelling of atmospheric dispersion at small scale (1:3300). The selected area includes a necessary part of the coal mine topography with respect to its future expansion, as well as surrounding populated areas. At this small scale, simultaneous measurements of velocity components and concentrations at specified points of vertical and horizontal planes were performed by two-dimensional Laser Doppler Anemometry (LDA) and a Fast-Response Flame Ionization Detector (FFID), respectively. The impact of the complex terrain on passive pollutant dispersion with respect to the prevailing wind direction was observed, and the prediction of the air quality at populated areas is discussed. The measured data will be used for comparison with another model taking into account the future coal mine transformation, so that the impact of the coal mine transformation on pollutant dispersion can be assessed.
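
    For orientation, the textbook Gaussian plume formula that flat-terrain dispersion estimates rest on (and that complex-terrain wind-tunnel studies like this one go beyond) can be sketched as follows. The linear sigma parameterization and all numbers are illustrative assumptions, not the wind-tunnel model:

```python
import numpy as np

def gaussian_plume(x, y, z, Q=1.0, u=5.0, H=10.0, a=0.08, b=0.06):
    """Ground-reflected Gaussian plume concentration for a point source of
    strength Q (g/s) at stack height H (m), wind speed u (m/s) along x.
    sigma_y = a*x, sigma_z = b*x is a crude neutral-stability assumption."""
    sy, sz = a * x, b * x
    return (Q / (2.0 * np.pi * u * sy * sz)
            * np.exp(-y**2 / (2.0 * sy**2))
            * (np.exp(-(z - H)**2 / (2.0 * sz**2))
               + np.exp(-(z + H)**2 / (2.0 * sz**2))))

# Ground-level concentration 500 m downwind on the plume centerline.
c0 = gaussian_plume(500.0, 0.0, 0.0)
```

    Terrain effects of the kind measured here (channelling, recirculation inside the mine pit) are exactly what breaks the symmetric-Gaussian assumption, which is why physical modelling is needed.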

  14. Development of fine-resolution analyses and expanded large-scale forcing properties: 2. Scale awareness and application to single-column model experiments

    Science.gov (United States)

    Feng, Sha; Li, Zhijin; Liu, Yangang; Lin, Wuyin; Zhang, Minghua; Toto, Tami; Vogelmann, Andrew M.; Endo, Satoshi

    2015-01-01

    Three-dimensional fields have been produced using the Community Gridpoint Statistical Interpolation (GSI) data assimilation system for the U.S. Department of Energy's Atmospheric Radiation Measurement Program (ARM) Southern Great Plains region. The GSI system is implemented in a multiscale data assimilation framework using the Weather Research and Forecasting model at a cloud-resolving resolution of 2 km. From the fine-resolution three-dimensional fields, large-scale forcing is derived explicitly at grid-scale resolution; a subgrid-scale dynamic component is derived separately, representing subgrid-scale horizontal dynamic processes. Analyses show that the subgrid-scale dynamic component is often a major component of the large-scale forcing for grid scales larger than 200 km. The single-column model (SCM) of the Community Atmospheric Model version 5 is used to examine the impact of the grid-scale and subgrid-scale dynamic components on simulated precipitation and cloud fields associated with a mesoscale convective system. It is found that grid size impacts simulated precipitation, resulting in an overestimation for grid scales of about 200 km but an underestimation for smaller grids. The subgrid-scale dynamic component has an appreciable impact on the simulations, suggesting that grid-scale and subgrid-scale dynamic components should both be considered in the interpretation of SCM simulations.
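
    The grid-scale/subgrid-scale split described above can be illustrated with a simple block-averaging decomposition of a 2-D field. This is a schematic sketch of the idea, not the GSI multiscale framework itself:

```python
import numpy as np

def decompose(field, block):
    """Split a 2-D field into a grid-scale part (block means, repeated back
    to full resolution) and a subgrid-scale residual; the two parts sum
    exactly to the original field."""
    ny, nx = field.shape
    coarse = field.reshape(ny // block, block, nx // block, block).mean(axis=(1, 3))
    grid_scale = np.kron(coarse, np.ones((block, block)))
    return grid_scale, field - grid_scale

rng = np.random.default_rng(2)
f = rng.standard_normal((64, 64))
grid_part, subgrid_part = decompose(f, 16)
```

    By construction the residual averages to zero over every block, which is the sense in which it represents purely subgrid variability.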

  15. Business class scenarios in Switzerland; Branchenszenarien Schweiz. Langfristszenarien zur Entwicklung der Wirtschaftsbranchen mit einem rekursiv-dynamischen Gleichgewichtsmodell (SWISSGEM)

    Energy Technology Data Exchange (ETDEWEB)

    Mueller, A.; Nieuwkoop, R. van

    2005-03-15

    This work report for the Swiss Federal Office of Energy (SFOE) examines long-term scenarios for the development of various commercial sectors in Switzerland using a recursive-dynamic equilibrium model. The scenarios are to provide input for sectorial bottom-up models of individual business sectors and for overall economic models. The aims of the analysis are the development of trade scenarios for the period up to 2035, compatibility with the Swiss federal scenarios, and compatibility with the sectorial models of the Swiss energy perspectives. The equilibrium models used in making the prognoses are examined and discussed, as are the data used. The results of simulations are presented and discussed, with detailed results given in tabular form.

  16. Description of Muzzle Blast by Modified Ideal Scaling Models

    Directory of Open Access Journals (Sweden)

    Kevin S. Fansler

    1998-01-01

    Full Text Available Gun blast data from a large variety of weapons are scaled and presented for both the instantaneous energy release and the constant energy deposition rate models. For both ideal explosion models, similar amounts of data scatter occur for the peak overpressure but the instantaneous energy release model correlated the impulse data significantly better, particularly for the region in front of the gun. Two parameters that characterize gun blast are used in conjunction with the ideal scaling models to improve the data correlation. The gun-emptying parameter works particularly well with the instantaneous energy release model to improve data correlation. In particular, the impulse, especially in the forward direction of the gun, is correlated significantly better using the instantaneous energy release model coupled with the use of the gun-emptying parameter. The use of the Mach disc location parameter improves the correlation only marginally. A predictive model is obtained from the modified instantaneous energy release correlation.
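
    The instantaneous-energy-release scaling that correlates the data above rests on cube-root (Sachs/Hopkinson-type) scaling of distance with released energy; a minimal sketch, with illustrative numbers:

```python
def scaled_distance(R, E):
    """Energy-scaled (cube-root) distance Z = R / E**(1/3). Under the
    instantaneous energy release model, blasts of different energy produce
    equal peak overpressure at equal scaled distance Z."""
    return R / E ** (1.0 / 3.0)

# An 8x energy ratio matches at 2x the distance (identical scaled distance).
z_small = scaled_distance(10.0, 1.0e6)
z_large = scaled_distance(20.0, 8.0e6)
```

    The paper's modification amounts to adding weapon-specific parameters (the gun-emptying parameter in particular) on top of this ideal collapse to tighten the impulse correlation.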

  17. New Models and Methods for the Electroweak Scale

    Energy Technology Data Exchange (ETDEWEB)

    Carpenter, Linda [The Ohio State Univ., Columbus, OH (United States). Dept. of Physics

    2017-09-26

    This is the Final Technical Report to the US Department of Energy for grant DE-SC0013529, New Models and Methods for the Electroweak Scale, covering the period April 1, 2015 to March 31, 2017. The goal of this project was to maximize the understanding of fundamental weak-scale physics in light of current experiments, mainly the ongoing run of the Large Hadron Collider and the space-based satellite experiments searching for signals of Dark Matter annihilation or decay. This research program focused on the phenomenology of supersymmetry, Higgs physics, and Dark Matter. The properties of the Higgs boson are currently being measured by the Large Hadron Collider, and could be a sensitive window into new physics at the weak scale. Supersymmetry is the leading theoretical candidate to explain the naturalness of the electroweak theory; however, new model space must be explored, as the Large Hadron Collider has disfavored much minimal-model parameter space. In addition, the nature of Dark Matter, the mysterious particle that makes up 25% of the mass of the universe, is still unknown. This project sought to address measurements of the Higgs boson couplings to the Standard Model particles, new LHC discovery scenarios for supersymmetric particles, and new measurements of Dark Matter interactions with the Standard Model, both in collider production and in annihilation in space. Accomplishments include creating new tools for analyses of models in which Dark Matter annihilates into multiple Standard Model particles, including new visualizations of bounds for models with various Dark Matter branching ratios; benchmark studies of new Dark Matter discovery scenarios at the Large Hadron Collider for Higgs-Dark Matter and gauge boson-Dark Matter interactions; new target analyses to detect direct decays of the Higgs boson into challenging final states like pairs of light jets; and new phenomenological analysis of non-minimal supersymmetric models, namely the set of Dirac

  18. Seismic analysis of long tunnels: A review of simplified and unified methods

    Directory of Open Access Journals (Sweden)

    Haitao Yu

    2017-06-01

    Full Text Available Seismic analysis of long tunnels is important for safety evaluation of the tunnel structure during earthquakes. Simplified models of long tunnels are commonly adopted in seismic design by practitioners, in which the tunnel is usually assumed as a beam supported by the ground. These models can be conveniently used to obtain the overall response of the tunnel structure subjected to seismic loading. However, simplified methods are limited due to the assumptions that need to be made to reach the solution, e.g. shield tunnels are assembled with segments and bolts to form a lining ring and such structural details may not be included in the simplified model. In most cases, the design will require a numerical method that does not have the shortcomings of the analytical solutions, as it can consider the structural details, non-linear behavior, etc. Furthermore, long tunnels have significant length and pass through different strata. All of these would require large-scale seismic analysis of long tunnels with three-dimensional models, which is difficult due to the lack of available computing power. This paper introduces two types of methods for seismic analysis of long tunnels, namely simplified and unified methods. Several models, including the mass-spring-beam model, and the beam-spring model and its analytical solution are presented as examples of the simplified method. The unified method is based on a multiscale framework for long tunnels, with coarse and refined finite element meshes, or with the discrete element method and the finite difference method to compute the overall seismic response of the tunnel while including detailed dynamic response at positions of potential damage or of interest. A bridging scale term is introduced in the framework so that compatibility of dynamic behavior between the macro- and meso-scale subdomains is enforced. Examples are presented to demonstrate the applicability of the simplified and the unified methods.
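
    The beam-spring idealization reviewed above treats the tunnel as a beam on an elastic (Winkler) foundation, EI·u'''' + k·u = q. A minimal static finite-difference sketch; the values of EI, k, q and the boundary treatment are illustrative assumptions, not the paper's models:

```python
import numpy as np

def beam_on_springs(n=201, L=100.0, EI=1.0e6, k=100.0, q=1.0):
    """Static beam-spring (Winkler) model EI*u'''' + k*u = q with simply
    supported ends, discretized with a five-point finite-difference stencil
    for the fourth derivative."""
    h = L / (n - 1)
    A = np.zeros((n, n))
    b = np.full(n, q)
    for i in range(2, n - 2):
        A[i, i - 2:i + 3] = EI / h**4 * np.array([1.0, -4.0, 6.0, -4.0, 1.0])
        A[i, i] += k
    # Boundary conditions: zero deflection and (approximately) zero moment.
    A[0, 0] = 1.0; b[0] = 0.0
    A[n - 1, n - 1] = 1.0; b[n - 1] = 0.0
    A[1, 0:3] = [1.0, -2.0, 1.0]; b[1] = 0.0
    A[n - 2, n - 3:n] = [1.0, -2.0, 1.0]; b[n - 2] = 0.0
    return np.linalg.solve(A, b)

u = beam_on_springs()   # away from the ends, u approaches q/k
```

    Away from the supports the deflection settles to q/k over a characteristic length (4EI/k)^(1/4), which is the simplification the unified multiscale methods in the paper refine with detailed segment-and-bolt models where damage is expected.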

  19. Anomalous Scaling Behaviors in a Rice-Pile Model with Two Different Driving Mechanisms

    International Nuclear Information System (INIS)

    Zhang Duanming; Sun Hongzhang; Li Zhihua; Pan Guijun; Yu Boming; Li Rui; Yin Yanping

    2005-01-01

    The moment analysis is applied to perform large-scale simulations of the rice-pile model. We find that this model shows different scaling behavior depending on the driving mechanism used. With noisy driving, the rice-pile model violates the finite-size scaling hypothesis, whereas, with fixed driving, it shows well-defined avalanche exponents and displays good finite-size scaling behavior for the avalanche size and time duration distributions.
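
    A minimal Oslo-type rice-pile sketch with fixed driving at the left boundary; thresholds, sizes and boundary rules below are the standard textbook choices and may differ in detail from the model simulated in the paper:

```python
import numpy as np

def oslo(n=16, grains=1000, seed=0):
    """Oslo rice-pile model: grains are dropped on site 0 (fixed driving); a
    site topples when its local slope exceeds a critical value drawn from
    {1, 2}, and the threshold is redrawn after every toppling. Returns the
    avalanche size (total topplings) triggered by each added grain."""
    rng = np.random.default_rng(seed)
    z = np.zeros(n, dtype=int)           # local slopes
    zc = rng.integers(1, 3, size=n)      # critical slopes in {1, 2}
    sizes = np.zeros(grains, dtype=int)
    for g in range(grains):
        z[0] += 1
        unstable = True
        while unstable:
            unstable = False
            for i in range(n):
                if z[i] > zc[i]:
                    if i == n - 1:
                        z[i] -= 1        # grain leaves at the open boundary
                    else:
                        z[i] -= 2
                        z[i + 1] += 1
                    if i > 0:
                        z[i - 1] += 1
                    zc[i] = rng.integers(1, 3)
                    sizes[g] += 1
                    unstable = True
    return sizes

sizes = oslo()
```

    Moment analysis then studies how the moments ⟨s^q⟩ of these avalanche sizes scale with system size n, which is how the avalanche exponents and the quality of finite-size scaling are extracted.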

  20. ClimaDat: A long-term network to study at different scales climatic processes and interactions between climatic compartments

    Science.gov (United States)

    Morgui, Josep Anton; Agueda, Alba; Batet, Oscar; Curcoll, Roger; Ealo, Marina; Grossi, Claudia; Occhipinti, Paola; Sánchez-García, Laura; Arias, Rosa; Rodó, Xavi

    2013-04-01

    ClimaDat (www.climadat.es) is a pioneering project of the Institut Català de Ciències del Clima (IC3), in collaboration with and funded by the "la Caixa" Foundation. This project aims at studying the interactions between climate and ecosystems at different spatial and temporal scales. The ClimaDat project consists of a network of eight long-term observatory stations distributed over Spain, installed in natural and remote areas, and covering different climatic domains (e.g. Mediterranean, Atlantic, subtropics) and natural systems (e.g. deltas, karsts, high mountain areas). Data obtained in the ClimaDat network will help us to understand how ecosystems are influenced by, and might eventually feed back on, different processes in the climate system. These studies will take regional and local conditions into account to understand global-scale climatic events. The data gathered will be used to study the behavior of the global element cycles and associated greenhouse gas emissions. The network is expected to offer near real-time (NRT) data free to the scientific community. Instrumentation installed at these stations mainly consists of: CO2, CH4, H2O, CO, N2O, SF6 and 222Rn analyzers; isotopic CO2, CH4 and H2O analyzers; meteorological sensors; eddy covariance equipment; four-component radiometers; soil moisture and temperature sensors; and sap flow meters. Each station may have a more focused subset of all this equipment, depending on the specific characteristics of the site. Instrumentation selected for this network has been chosen to comply with standards established in international research infrastructure projects, such as ICOS (http://www.icos-infrastructure.eu/home) or InGOS (http://www.ingos-infrastructure.eu/). Preliminary time series of greenhouse gas concentrations and meteorological variables are presented in this study for three currently operational ClimaDat stations: the Natural Park of the Ebre Delta (lat 40.75° N - long 0.79° E), the