WorldWideScience

Sample records for model sutra version

  1. SUTRA model used to evaluate the role of uplift and erosion in the persistence of saline groundwater in the shallow subsurface

    Data.gov (United States)

    U.S. Geological Survey, Department of the Interior — A modified version of the three-dimensional, variable-density solute-transport model (SUTRA) was developed to simulate the effects of erosion and uplift on the...

  2. SUTRA model used to evaluate the freshwater flow system on Roi-Namur, Kwajalein Atoll, Republic of the Marshall Islands

    Data.gov (United States)

    U.S. Geological Survey, Department of the Interior — A three-dimensional, groundwater model (SUTRA) was developed to understand the effects of seawater washover on the freshwater lens of Roi-Namur, Kwajalein Atoll,...

  3. SUTRA: A model for 2D or 3D saturated-unsaturated, variable-density ground-water flow with solute or energy transport

    Science.gov (United States)

    Voss, Clifford I.; Provost, A.M.

    2002-01-01

    SUTRA (Saturated-Unsaturated Transport) is a computer program that simulates fluid movement and the transport of either energy or dissolved substances in a subsurface environment. This upgraded version of SUTRA adds the capability for three-dimensional simulation to the former code (Voss, 1984), which allowed only two-dimensional simulation. The code employs a two- or three-dimensional finite-element and finite-difference method to approximate the governing equations that describe the two interdependent processes that are simulated: 1) fluid density-dependent saturated or unsaturated ground-water flow; and 2) either (a) transport of a solute in the ground water, in which the solute may be subject to: equilibrium adsorption on the porous matrix, and both first-order and zero-order production or decay; or (b) transport of thermal energy in the ground water and solid matrix of the aquifer. SUTRA may also be used to simulate simpler subsets of the above processes. A flow-direction-dependent dispersion process for anisotropic media is also provided by the code and is introduced in this report. As the primary calculated result, SUTRA provides fluid pressures and either solute concentrations or temperatures, as they vary with time, everywhere in the simulated subsurface system. SUTRA flow simulation may be employed for two-dimensional (2D) areal, cross sectional and three-dimensional (3D) modeling of saturated ground-water flow systems, and for cross sectional and 3D modeling of unsaturated zone flow. Solute-transport simulation using SUTRA may be employed to model natural or man-induced chemical-species transport including processes of solute sorption, production, and decay. For example, it may be applied to analyze ground-water contaminant transport problems and aquifer restoration designs. In addition, solute-transport simulation with SUTRA may be used for modeling of variable-density leachate movement, and for cross sectional modeling of saltwater intrusion in
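    For orientation, the density-dependent flow that SUTRA simulates can be summarized by the standard variable-density form of Darcy's law and the fluid mass balance (conventional symbols, paraphrased rather than copied from the report): with permeability k, relative permeability k_r, porosity ε, saturation S_w, viscosity μ, pressure p, and fluid density ρ,

```latex
% Darcy velocity for variable-density, saturated-unsaturated flow
\mathbf{v} = -\frac{\mathbf{k}\,k_r}{\varepsilon S_w \mu}\left(\nabla p - \rho\,\mathbf{g}\right)
% fluid mass balance with fluid mass source Q_p
\frac{\partial\left(\varepsilon S_w \rho\right)}{\partial t}
  = -\,\nabla\cdot\left(\varepsilon S_w \rho\,\mathbf{v}\right) + Q_p
```

    Coupling enters through ρ, which depends on the transported solute concentration or temperature; the transport equation supplies that dependence.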

  4. AMS radiocarbon dating of ancient Japanese sutras

    CERN Document Server

    Oda, H; Nakamura, T; Fujita, K

    2000-01-01

    Radiocarbon ages of ancient Japanese sutras whose historical ages were known paleographically were measured by accelerator mass spectrometry (AMS). The calibrated radiocarbon ages of five samples were consistent with the corresponding historical ages, so the 'old wood effect' is negligible for ancient Japanese sutras. Japanese paper was made from fresh branches grown within a few years, and the interval from trimming the branches to writing the sutra on the paper is within one year. The good agreement between the calibrated radiocarbon ages and the historical ages is explained by these characteristics of Japanese paper. This study indicates that Japanese sutras are suitable samples for radiocarbon dating in the historic period, because the 'old wood effect' introduces little offset.

  5. UNIQUE ILLUSTRATIONS IN TIBETAN BUDDHIST SUTRAS

    Institute of Scientific and Technical Information of China (English)

    2007-01-01

    The illustrations in Tibetan sutras are coloured in two ways: in black and white, or in colours. The monotone illustrations accompany the Tibetan text and are usually engraved on woodblocks. The illustrations often appear on the cover pages or on the two sides of the head pages of a sutra; they are frequently displayed in two frames and in the middle of the end pages. In this paper, I am going to introduce the...

  6. On the Idea of Worshiping Buddha by Benevolence and Filial Piety in Japanese Sutras from the Todaiji Sutras Collection

    Institute of Scientific and Technical Information of China (English)

    王晓平

    2009-01-01

    Most of the votive texts in the Todaiji collection and the Dunhuang collection were not written for one specific patron or occasion; rather, they are model texts, and they can be studied comparatively within the category of folk religious literature. The Todaiji collection was influenced in many respects by the Chinese votive texts transmitted to Japan, which promote worship of the Buddha as the most fundamental way of repaying parents' kindness; the Confucian ideas of benevolence and filial piety were thereby turned entirely into instruments of Buddhism, reflecting how, at the beginning of the Heian period, Buddhism was moving from the court aristocracy to the common people. Compared with the transmitted votive texts, these contain fewer elements of Buddhist philosophy, while their lyrical and narrative qualities are more pronounced, which opened the way for the later memorial texts of literati to develop into a 'sentimental lyric literature'.

  7. SUTRA (Saturated-Unsaturated Transport). A Finite-Element Simulation Model for Saturated-Unsaturated, Fluid-Density-Dependent Ground-Water Flow with Energy Transport or Chemically-Reactive Single-Species Solute Transport.

    Science.gov (United States)

    2014-09-26

    Copies of this report can be purchased from the U.S. Geological Survey, Open-File Services Section, or requested from the originating office at the following address: Chief Hydrologist - SUTRA, U.S. Geological Survey, 431 National Center, Reston, VA 22092.

  8. OA03.11. A comparative study of guggulu chitrak kshar – sutra and snuhi apamarg kshar – sutra in the management of fistula in ano

    Science.gov (United States)

    Gond, Pushpa; Kumar, Ashok; Rajeshwari, PN; Choudhary, PC; Kumar, Jitendra

    2013-01-01

    Purpose: Fistula in ano is a condition recognized as a difficult surgical disease in all the ancient and modern medical sciences of the world. In Ayurvedic texts, fistula in ano is described as Bhagandar. The disease is recurrent in nature, which makes it more difficult to treat, and it interferes with routine life. Kshar Sutra has proved to be a major advance in the treatment of fistula in ano, and further research is needed to obtain a more effective Kshar Sutra. Method: The present study was a clinical, randomised, single-blind trial in which Guggulu Chitraka Kshar Sutra was compared with Snuhi Apamarga Kshar Sutra. Thirty patients with fistula in ano were selected from the OPD/IPD of the Shalya Tantra department of the National Institute of Ayurveda, Jaipur, and divided into two equal groups. The patients of group A were treated with Snuhi Apamarga Kshar Sutra and the patients of group B with Guggulu Chitraka Kshar Sutra. Result: The effect of Guggulu Chitraka Kshar Sutra was found to be better for pain, itching, pus discharge, tenderness and burning sensation, while its Unit Cutting Time was slightly higher than that of Snuhi Apamarga Kshar Sutra. Conclusion: Though the U.C.T. of Guggulu Chitraka Kshar Sutra is slightly higher than that of Snuhi Apamarga Kshar Sutra, Guggulu Chitraka Kshar Sutra showed significant results on the assessment parameters. With Guggulu Chitraka Kshar Sutra, post-ligation complications such as hypertrophied scar are not seen, and it is easily available and cost-effective.

  10. A Functional Version of the ARCH Model

    CERN Document Server

    Hormann, Siegfried; Reeder, Ron

    2011-01-01

    Improvements in data acquisition and processing techniques have led to an almost continuous flow of information for financial data. High-resolution tick data are available and can be conveniently described by a continuous-time process. It is therefore natural to ask for possible extensions of financial time series models to a functional setup. In this paper we propose a functional version of the popular ARCH model. We establish conditions for the existence of a strictly stationary solution, derive weak dependence and moment conditions, show consistency of the estimators, and perform a small empirical study demonstrating how our model matches real data.
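    For orientation, the scalar ARCH(1) recursion that the paper lifts to the functional setting can be simulated in a few lines (the parameter values below are illustrative assumptions, not taken from the paper):

```python
import random

def simulate_arch1(n, omega=0.1, alpha=0.5, seed=42):
    """Simulate a scalar ARCH(1) process: y_t = sigma_t * eps_t with
    sigma_t^2 = omega + alpha * y_{t-1}^2 and iid standard normal eps_t.
    The functional version replaces the scalars by curves and the
    coefficient alpha by an integral operator."""
    rng = random.Random(seed)
    y, path = 0.0, []
    for _ in range(n):
        sigma = (omega + alpha * y * y) ** 0.5   # conditional volatility
        y = sigma * rng.gauss(0.0, 1.0)
        path.append(y)
    return path

series = simulate_arch1(1000)
# with alpha < 1 the process has a stationary variance omega / (1 - alpha)
```

    The stationarity condition in the functional case plays the role of alpha < 1 here, but is stated in terms of the norm of the integral operator.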

  11. A version management model of PDM system and its realization

    Institute of Scientific and Technical Information of China (English)

    ZHONG Shi-sheng; LI Tao

    2008-01-01

    Based on the key function of version management in a PDM system, this paper discusses the function and realization of version management and the transitions of version states within a workflow. A directed acyclic graph is used to describe the version model. Three storage modes for the directed acyclic graph version model (in the database, in the bumping block, and in the PDM working memory) are presented, and the conversion principle between these modes is given. The study indicates that building a dynamic product structure configuration model based on versions is the key to resolving the problem. Thus a version model of a single product object is built, the version management model for product structure configuration is built on it, and the application of version management in a PDM system is presented as a case.
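    As a hypothetical illustration of the version model described above, a version node in a directed acyclic graph can carry a state and a list of predecessor versions; the names and states below are invented, not taken from the paper:

```python
from dataclasses import dataclass, field

@dataclass
class Version:
    """One node of the version DAG (identifiers and states are invented)."""
    vid: str
    state: str = "working"          # e.g. working -> checked-in -> released
    predecessors: list = field(default_factory=list)

def derive(parent: Version, vid: str) -> Version:
    """Create a successor version; the edge points back to its predecessor."""
    return Version(vid, predecessors=[parent])

def merge(parents: list, vid: str) -> Version:
    """A merge node has several incoming edges; allowing these is what
    makes the version history a DAG rather than a tree."""
    return Version(vid, predecessors=list(parents))

v1 = Version("A.1", state="released")
v2a, v2b = derive(v1, "A.2"), derive(v1, "A.2-alt")   # branch
v3 = merge([v2a, v2b], "A.3")                         # merge
assert v3.predecessors == [v2a, v2b]
```

    A workflow then drives the state transitions, while the predecessor edges record the derivation history independently of state.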

  12. Forsmark - site descriptive model version 0

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    2002-10-01

    During 2002, the Swedish Nuclear Fuel and Waste Management Company (SKB) is starting investigations at two potential sites for a deep repository in the Precambrian basement of the Fennoscandian Shield. The present report concerns one of those sites, Forsmark, which lies in the municipality of Oesthammar, on the east coast of Sweden, about 150 kilometres north of Stockholm. The site description should present all collected data and interpreted parameters of importance for the overall scientific understanding of the site, for the technical design and environmental impact assessment of the deep repository, and for the assessment of long-term safety. The site description will have two main components: a written synthesis of the site, summarising the current state of knowledge, as documented in the databases containing the primary data from the site investigations, and one or several site descriptive models, in which the collected information is interpreted and presented in a form which can be used in numerical models for rock engineering, environmental impact and long-term safety assessments. The site descriptive models are devised and stepwise updated as the site investigations proceed. The point of departure for this process is the regional site descriptive model, version 0, which is the subject of the present report. Version 0 is developed out of the information available at the start of the site investigation. This information, with the exception of data from tunnels and drill holes at the sites of the Forsmark nuclear reactors and the underground low-middle active radioactive waste storage facility, SFR, is mainly 2D in nature (surface data), and is general and regional, rather than site-specific, in content. For this reason, the Forsmark site descriptive model, version 0, as detailed in the present report, has been developed at a regional scale. 
It covers a rectangular area, 15 km in a southwest-northeast and 11 km in a northwest-southeast direction, around the

  13. Simpevarp - site descriptive model version 0

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    2002-11-01

    During 2002, SKB is starting detailed investigations at two potential sites for a deep repository in the Precambrian rocks of the Fennoscandian Shield. The present report concerns one of those sites, Simpevarp, which lies in the municipality of Oskarshamn, on the southeast coast of Sweden, about 250 kilometres south of Stockholm. The site description will have two main components: a written synthesis of the site, summarising the current state of knowledge, as documented in the databases containing the primary data from the site investigations, and one or several site descriptive models, in which the collected information is interpreted and presented in a form which can be used in numerical models for rock engineering, environmental impact and long-term safety assessments. SKB maintains two main databases at the present time, a site characterisation database called SICADA and a geographic information system called SKB GIS. The site descriptive model will be developed and presented with the aid of the SKB GIS capabilities, and with SKBs Rock Visualisation System (RVS), which is also linked to SICADA. The version 0 model forms an important framework for subsequent model versions, which are developed successively, as new information from the site investigations becomes available. Version 0 is developed out of the information available at the start of the site investigation. In the case of Simpevarp, this is essentially the information which was compiled for the Oskarshamn feasibility study, which led to the choice of that area as a favourable object for further study, together with information collected since its completion. This information, with the exception of the extensive data base from the nearby Aespoe Hard Rock Laboratory, is mainly 2D in nature (surface data), and is general and regional, rather than site-specific, in content. 
Against this background, the present report consists of the following components: an overview of the present content of the databases

  14. The Digitalized Protection and Inheritance of the Woodblock Printing Called “Jinling Sutra Printing”

    Directory of Open Access Journals (Sweden)

    Huaidong Ge

    2014-05-01

    Jinling Sutra Publishing House, a protection site for Chinese woodblock printing, is the organization that carries on Chinese woodblock engraving and ink printing of the Chinese Buddhist classics. This paper, taking “Jinling Sutra Printing” as its study object, introduces its carving and printing skills and proposes that, by means of digital acquisition and storage technology, this intangible cultural heritage can be fully documented and presented through text, pictures, audio, video and other media.

  18. ModelMuse: A U.S. Geological Survey Open-Source, Graphical User Interface for Groundwater Models

    Science.gov (United States)

    Winston, R. B.

    2013-12-01

    ModelMuse is a free, publicly available graphical preprocessor used to generate the input and display the output for several groundwater models. It is written in Object Pascal and the source code is available on the USGS software web site. Supported models include the MODFLOW family of models, PHAST (version 1), and SUTRA version 2.2. With MODFLOW and PHAST, the user generates a grid and uses 'objects' (points, lines, and polygons) to define boundary conditions and the spatial variation in aquifer properties. Because the objects define the spatial variation, the grid can be changed without the user needing to re-enter spatial data. The same paradigm is used with SUTRA, except that the user generates a quadrilateral finite-element mesh instead of a rectangular grid. The user interacts with the model in a top view and in a vertical cross section; the cross section can be at any angle or location, and there is also a three-dimensional view of the model. For SUTRA, a new method of visualizing the permeability and related properties has been introduced. In three-dimensional SUTRA models, the user specifies the permeability tensor by specifying permeability in three mutually orthogonal directions that can be oriented in space in any direction. Because it is important for the user to be able to check both the magnitudes and directions of the permeabilities, ModelMuse displays the permeabilities as either a two-dimensional or a three-dimensional vector plot. Color is used to differentiate the maximum, middle, and minimum permeability vectors; the magnitude of a permeability is shown by the vector length, and the vector angle shows its direction. Contour and color plots can also be used to display model input and output data.
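    The permeability specification described above amounts to assembling a tensor from three principal magnitudes and three mutually orthogonal directions. A minimal sketch of that linear algebra (generic, not ModelMuse code) is:

```python
import numpy as np

def permeability_tensor(k_max, k_mid, k_min, directions):
    """Assemble the full 3x3 permeability tensor from three principal
    magnitudes and three mutually orthogonal direction vectors:
    K = R diag(k_max, k_mid, k_min) R^T, where the unit direction
    vectors form the columns of the rotation matrix R."""
    R = np.column_stack([d / np.linalg.norm(d) for d in directions])
    return R @ np.diag([k_max, k_mid, k_min]) @ R.T

# principal axes rotated 30 degrees about the vertical
c, s = np.cos(np.pi / 6), np.sin(np.pi / 6)
dirs = [np.array([c, s, 0.0]), np.array([-s, c, 0.0]), np.array([0.0, 0.0, 1.0])]
K = permeability_tensor(1e-12, 1e-13, 1e-14, dirs)
# the eigenvalues of K recover the three principal permeabilities
assert np.allclose(sorted(np.linalg.eigvalsh(K)),
                   [1e-14, 1e-13, 1e-12], rtol=1e-6, atol=0.0)
```

    The vector plots described in the abstract visualize exactly the columns of R scaled by the corresponding principal magnitudes.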

  19. High speed multiplier using Nikhilam Sutra algorithm of Vedic mathematics

    Science.gov (United States)

    Pradhan, Manoranjan; Panda, Rutuparna

    2014-03-01

    This article presents the design of a new high-speed multiplier architecture using the Nikhilam Sutra of Vedic mathematics. The proposed multiplier architecture computes the complement of each large operand from its nearest base to perform the multiplication. The multiplication of two large operands is thereby reduced to the multiplication of their complements plus an addition. The method is most efficient when the magnitudes of both operands are more than half of their maximum values. A carry-save adder in the multiplier architecture increases the speed of addition of the partial products. The multiplier circuit was synthesised and simulated using Xilinx ISE 10.1 software and implemented on a Spartan 2 FPGA device (XC2S30-5pq208). Output parameters such as propagation delay and device utilisation were calculated from the synthesis results, and the performance in terms of speed and device utilisation was compared with earlier multiplier architectures. The proposed design shows speed improvements over multiplier architectures presented in the literature.
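    The complement-based scheme can be sketched in a few lines. The identity (x - (base - y)) * base + (base - x)(base - y) = x * y holds for any base, which is what the sutra exploits; this is a minimal behavioural illustration, not the paper's hardware design:

```python
def nikhilam_multiply(x: int, y: int, base: int) -> int:
    """Nikhilam-style multiplication: represent each operand by its
    complement (deficiency) from a common base, then assemble the
    product from one subtraction and one small multiplication."""
    dx = base - x              # complement of x
    dy = base - y              # complement of y
    left = x - dy              # equivalently y - dx
    right = dx * dy            # product of the complements
    # (x - (base - y)) * base + (base - x) * (base - y) == x * y
    return left * base + right

# 97 * 96 near base 100: complements are 3 and 4,
# so left = 97 - 4 = 93 and right = 3 * 4 = 12, giving 9312.
assert nikhilam_multiply(97, 96, 100) == 9312
```

    The hardware advantage comes from dx and dy being small when both operands are close to the base, so the remaining multiplication is narrow.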

  20. Meson Properties in a renormalizable version of the NJL model

    CERN Document Server

    Mota, A L; Hiller, B; Walliser, H; Mota, Andre L.; Hiller, Brigitte; Walliser, Hans

    1999-01-01

    In the present paper we implement a non-trivial and renormalizable extension of the NJL model. We discuss the advantages and shortcomings of this extended model compared to a usual effective Pauli-Villars regularized version. We show that both versions become equivalent in the case of a large cutoff. Various relevant mesonic observables are calculated and compared.

  1. Industrial Waste Management Evaluation Model Version 3.1

    Science.gov (United States)

    IWEM is a screening-level ground water model designed to simulate contaminant fate and transport. IWEM v3.1 is the latest version of the IWEM software and includes additional tools to evaluate the beneficial use of industrial materials.

  2. GCFM Users Guide Revision for Model Version 5.0

    Energy Technology Data Exchange (ETDEWEB)

    Keimig, Mark A.; Blake, Coleman

    1981-08-10

    This paper documents alterations made to the MITRE/DOE Geothermal Cash Flow Model (GCFM) between September 1980 and September 1981. Version 4.0 of GCFM was installed on the computer at the DOE San Francisco Operations Office in August 1980; this version has also been distributed to about a dozen geothermal industry firms for examination and potential use. During late 1980 and 1981, a few errors detected in the Version 4.0 code were corrected, resulting in Version 4.1. If you are currently using GCFM Version 4.0, it is suggested that you make the changes to your code described in Section 2.0; the user's manual changes listed in Sections 3.0 and 4.0 should then also be made.

  3. FPGA Implementation of Complex Multiplier Using Urdhva Tiryakbhyam Sutra of Vedic Mathematics

    Directory of Open Access Journals (Sweden)

    Rupa A. Tomaskar

    2014-05-01

    In this work a VHDL implementation of a complex-number multiplier using ancient Vedic mathematics is presented, and an FPGA implementation of a 4-bit complex multiplier using the Vedic sutra is carried out on a Spartan 3 FPGA kit. The idea for the multiplier unit is adopted from the ancient Indian mathematics of the Vedas. The Urdhva Tiryakbhyam sutra (method) was selected for implementation since it is applicable to all cases of multiplication. The feature of this method is that any multi-bit multiplication can be reduced to single-bit multiplications and additions. With this formula, the partial products and sums are generated in one step, which reduces the carry propagation from LSB to MSB. The implementation of Vedic mathematics and its application to the complex multiplier ensure a substantial reduction in propagation delay. Simulation results for 4-bit, 8-bit, 16-bit and 32-bit complex-number multiplication using the Vedic sutra are illustrated. The results show that the Urdhva Tiryakbhyam sutra with fewer bits may be used to implement multipliers efficiently in signal processing algorithms.
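    The vertically-and-crosswise pattern can be illustrated in software: all partial-product columns are formed first, then a single carry pass runs from LSB to MSB. This is a behavioural sketch only; the paper's contribution is the VHDL/FPGA design:

```python
def urdhva_multiply(a: int, b: int, radix: int = 2) -> int:
    """Urdhva Tiryakbhyam (vertically and crosswise) multiplication:
    all partial-product columns are generated in one step, then a
    single carry-propagation pass runs from LSB to MSB."""
    # split operands into digit lists, least-significant digit first
    da, db = [], []
    while a:
        da.append(a % radix); a //= radix
    while b:
        db.append(b % radix); b //= radix
    da, db = da or [0], db or [0]
    # crosswise column sums (no carries yet)
    cols = [0] * (len(da) + len(db) - 1)
    for i, x in enumerate(da):
        for j, y in enumerate(db):
            cols[i + j] += x * y
    # one carry pass, LSB to MSB
    result, carry = 0, 0
    for k, c in enumerate(cols):
        carry, digit = divmod(c + carry, radix)
        result += digit * radix ** k
    return result + carry * radix ** len(cols)

assert urdhva_multiply(13, 11) == 143          # 4-bit binary operands
assert urdhva_multiply(1234, 5678, radix=10) == 1234 * 5678
```

    In hardware, the column sums are produced concurrently by AND gates and compressed by adders, which is why the carry chain is the only sequential part.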

  4. Model Adequacy Analysis of Matching Record Versions in Nosql Databases

    Directory of Open Access Journals (Sweden)

    E. V. Tsviashchenko

    2015-01-01

    The article investigates a model of matching record versions; the goal of this work is to analyse the model's adequacy. The model allows estimating the distribution of a user's processing time for record versions and the distribution of the record-version count. The second variant of the model was used, in which the time a client needs to process record versions depends explicitly on the number of updates performed by other users between sequential updates by the current client. To verify the model's adequacy, a real experiment was conducted on a cloud cluster of 10 virtual nodes provided by DigitalOcean, with Ubuntu Server 14.04 as the operating system. The NoSQL system Riak was chosen for the experiments; Riak 2.0 and later provide the 'dotted version vectors' (DVV) option, an extension of the classic vector clock. Their use guarantees that the number of versions simultaneously stored in the database will not exceed the number of clients operating on a record in parallel, which is very important for the experiments. The application was developed with the Java library provided by Riak, and the processes run directly on the nodes. Two records were used in the experiment: Z, the record whose versions are handled by the clients, and RZ, a service record containing record-update counters. The application algorithm can be briefly described as follows: every client reads the versions of record Z, processes its updates using the RZ counters, and saves the treated record in the database while old versions are deleted. The client then rereads the RZ record and increments the update counters for the other clients. After that, the client rereads the Z record, saves the necessary statistics, and processes the results. In the case of a conflict arising from simultaneous updates of the RZ record, the client obtains all versions of that...
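    For background, the sibling-detection rule that version vectors (and Riak's dotted extension of them) rely on can be sketched as follows. The client names are invented, and this shows the classic vector-clock comparison rather than the DVV internals:

```python
def dominates(vc_a: dict, vc_b: dict) -> bool:
    """True when version vector vc_a has seen every event recorded in vc_b."""
    return all(vc_a.get(node, 0) >= n for node, n in vc_b.items())

def concurrent(vc_a: dict, vc_b: dict) -> bool:
    """Two record versions are conflicting siblings when neither
    version vector dominates the other."""
    return not dominates(vc_a, vc_b) and not dominates(vc_b, vc_a)

a = {"client1": 2, "client2": 1}   # client1 updated after seeing {1, 1}
b = {"client1": 1, "client2": 2}   # client2 did the same, independently
assert concurrent(a, b)            # simultaneous updates -> both versions kept
assert dominates({"client1": 3, "client2": 2}, a)
```

    The DVV refinement additionally tags each stored version with the single event ("dot") that created it, which is what bounds the sibling count by the number of concurrently writing clients.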

  5. Solar Advisor Model User Guide for Version 2.0

    Energy Technology Data Exchange (ETDEWEB)

    Gilman, P.; Blair, N.; Mehos, M.; Christensen, C.; Janzou, S.; Cameron, C.

    2008-08-01

    The Solar Advisor Model (SAM) provides a consistent framework for analyzing and comparing power system costs and performance across the range of solar technologies and markets, from photovoltaic systems for residential and commercial markets to concentrating solar power and large photovoltaic systems for utility markets. This manual describes Version 2.0 of the software, which can model photovoltaic and concentrating solar power technologies for electric applications for several markets. The current version of the Solar Advisor Model does not model solar heating and lighting technologies.

  6. METAPHOR (version 1): Users guide [performability modeling]

    Science.gov (United States)

    Furchtgott, D. G.

    1979-01-01

    General information concerning METAPHOR, an interactive software package to facilitate performability modeling and evaluation, is presented. Example systems are studied and their performabilities are calculated. Each available METAPHOR command and array generator is described. Complete METAPHOR sessions are included.

  7. Renormalized versions of the massless Thirring model

    CERN Document Server

    Casana, R

    2003-01-01

    We present a non-perturbative study of the (1+1)-dimensional massless Thirring model using path integral methods. The model is treated in two versions: one with a local gauge symmetry that is implemented at the quantum level, and one without this symmetry. We make a detailed analysis of their UV divergence structure, and non-perturbative regularization and renormalization processes are proposed.
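    For reference, the massless Thirring model studied here is defined by the Lagrangian density (standard form with coupling constant g; conventions may differ from the paper's):

```latex
\mathcal{L} = \bar{\psi}\, i\gamma^{\mu}\partial_{\mu}\psi
  - \frac{g}{2}\left(\bar{\psi}\gamma^{\mu}\psi\right)\left(\bar{\psi}\gamma_{\mu}\psi\right)
```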

  8. An Open Platform for Processing IFC Model Versions

    Institute of Scientific and Technical Information of China (English)

    Mohamed Nour; Karl Beucke

    2008-01-01

    The IFC initiative of the International Alliance for Interoperability has been developing since the mid-nineties through several versions. This paper addresses the problem of binding the growing number of IFC versions and their EXPRESS definitions to programming environments (Java and .NET). The solution developed in this paper automates the process of generating early-binding classes whenever a new version of the IFC model is released. Furthermore, a runtime instantiation of the generated early-binding classes takes place by importing IFC STEP ISO 10303-P21 models. The user can navigate the IFC STEP model with reference to the defining EXPRESS schema, and can modify, delete, and create new instances. These functionalities are considered to be a basis for any IFC-based implementation, and they enable researchers to experiment with the IFC model independently of any software application.

  9. Correction, improvement and model verification of CARE 3, version 3

    Science.gov (United States)

    Rose, D. M.; Manke, J. W.; Altschul, R. E.; Nelson, D. L.

    1987-01-01

An independent verification of the CARE 3 mathematical model and computer code was conducted and reported in NASA Contractor Report 166096, Review and Verification of CARE 3 Mathematical Model and Code: Interim Report. The study uncovered some implementation errors that were corrected and are reported in this document. The corrected CARE 3 program is called version 4. This document, Correction, Improvement, and Model Verification of CARE 3, Version 3, was written in April 1984. It is being published now because it has been determined to contain a more accurate representation of CARE 3 than the preceding document of April 1983. This edition supersedes NASA-CR-166122, 'Correction and Improvement of CARE 3, Version 3,' April 1983.

  10. Smart Grid Interoperability Maturity Model Beta Version

    Energy Technology Data Exchange (ETDEWEB)

    Widergren, Steven E.; Drummond, R.; Giroti, Tony; Houseman, Doug; Knight, Mark; Levinson, Alex; Longcore, Wayne; Lowe, Randy; Mater, J.; Oliver, Terry V.; Slack, Phil; Tolk, Andreas; Montgomery, Austin

    2011-12-02

    The GridWise Architecture Council was formed by the U.S. Department of Energy to promote and enable interoperability among the many entities that interact with the electric power system. This balanced team of industry representatives proposes principles for the development of interoperability concepts and standards. The Council provides industry guidance and tools that make it an available resource for smart grid implementations. In the spirit of advancing interoperability of an ecosystem of smart grid devices and systems, this document presents a model for evaluating the maturity of the artifacts and processes that specify the agreement of parties to collaborate across an information exchange interface. You are expected to have a solid understanding of large, complex system integration concepts and experience in dealing with software component interoperation. Those without this technical background should read the Executive Summary for a description of the purpose and contents of the document. Other documents, such as checklists, guides, and whitepapers, exist for targeted purposes and audiences. Please see the www.gridwiseac.org website for more products of the Council that may be of interest to you.

  11. AISIM (Automated Interactive Simulation Modeling System) VAX Version Training Manual.

    Science.gov (United States)

    1985-02-01

This document is the training manual for the Automated Interactive Simulation Modeling System (AISIM), VAX version, prepared by Hughes Aircraft Co., Ground Systems Group, Fullerton, CA. An appendix contains a simulation report for a working example.

  12. IDC Use Case Model Survey Version 1.1.

    Energy Technology Data Exchange (ETDEWEB)

    Harris, James Mark [Sandia National Laboratories (SNL-NM), Albuquerque, NM (United States); Carr, Dorthe B. [Sandia National Laboratories (SNL-NM), Albuquerque, NM (United States)

    2015-02-01

This document contains the brief descriptions for the actors and use cases contained in the IDC Use Case Model.

    REVISIONS:
    Version | Date    | Author/Team                          | Revision Description          | Authorized by
    V1.0    | 12/2014 | SNL IDC Reengineering Project Team   | Initial delivery              | M. Harris
    V1.1    | 2/2015  | SNL IDC Reengineering Project Team   | Iteration I2 Review Comments  | M. Harris

  13. IDC Use Case Model Survey Version 1.0.

    Energy Technology Data Exchange (ETDEWEB)

    Carr, Dorthe B.; Harris, James M.

    2014-12-01

This document contains the brief descriptions for the actors and use cases contained in the IDC Use Case Model Survey.

    REVISIONS:
    Version | Date    | Author/Team                         | Revision Description | Authorized by
    V1.0    | 12/2014 | IDC Re-engineering Project Team     | Initial delivery     | M. Harris

  14. A comparative study of Barron's rubber band ligation with Kshar Sutra ligation in hemorrhoids

    OpenAIRE

    2010-01-01

    Despite a long medical history of identification and treatment, hemorrhoids still pose a challenge to the medical fraternity in terms of finding satisfactory cure of the disease. In this study, Kshar Sutra Ligation (KSL), a modality of treatment described in Ayurveda, was compared with Barron's Rubber Band Ligation (RBL) for grade II and grade III hemorrhoids. This study was conducted in 20 adult patients of either sex with grade II and grade III hemorrhoids at two different hospitals. Patien...

  15. A lemma science of mind: the potential of the Kegon (Flower Ornament) Sutra.

    Science.gov (United States)

    Nakazawa, Shin'ichi

    2017-02-01

The paper argues for a new perspective on the relationship between Buddhism and European psychology, or sciences of the mind, based in the Kegon Sutra, a text that emerged in the early stages of Mahayana Buddhism (3rd-5th century CE). The basis of European science is logos intellection, formalized by Aristotle as following three laws: the law of identity, the law of contradiction and the law of the excluded middle. Logic in the Buddhist tradition, by contrast, is based in lemma (meaning to understand as a whole not with language, but with intuition). Lemma-based science born in the Buddhist tradition shows that rational perception is possible even without the three laws of logos. The Kegon Sutra, which explains what Buddha preached only a week after he attained enlightenment, is unified under the logic of lemma and can be seen as an effort to create a 'lemma science of the mind'. The fundamental teaching of the Kegon Sutra is explored, and its principles are compared with primary process thinking and the unconscious as outlined by Freud and Jung. Jung's research of Eastern texts led him to create a science of the mind that went further than Freud: his concept of synchronicity is given by way of example and can be seen anew within the idea of a lemma-based science. © 2017, The Society of Analytical Psychology.

  16. COPAT - towards a recommended model version of COSMO-CLM

    Science.gov (United States)

    Anders, Ivonne; Brienen, Susanne; Eduardo, Bucchignani; Ferrone, Andrew; Geyer, Beate; Keuler, Klaus; Lüthi, Daniel; Mertens, Mariano; Panitz, Hans-Jürgen; Saeed, Sajjad; Schulz, Jan-Peter; Wouters, Hendrik

    2016-04-01

The regional climate model COSMO-CLM is a community model (www.clm-community.com). In close collaboration with the COSMO consortium, the model is further developed by the community members for climate applications. Among the tasks of the community are recommending a model version and evaluating the model's performance. COPAT (Coordinated Parameter Testing) is a voluntary community effort that allows different institutions to carry out model simulations systematically, in order to test new model options and to find a satisfactory model setup for hydrostatic climate simulations over Europe. We will present the COPAT method used to arrive at the latest recommended model version of COSMO-CLM (COSMO5.0_clm6). The simulations cover the EURO-CORDEX domain at two spatial resolutions, 0.44° and 0.11°, and use ERA-Interim forcing data for the period 1979-2000. The interpolated forcing data were prepared once to ensure that all participating groups used identical forcing. The evaluation of each individual run was performed for the period 1981-2000 using ETOOL and ETOOL-VIS, tools developed within the community to evaluate standard COSMO-CLM output against observations provided by E-OBS and CRU. COPAT was structured in three phases. In Phase 1, all participating institutions performed a reference run on their individual computing platforms and then tested the influence of single model options on the results. Based on the results of Phase 1, the most promising options were used in combination in Phase 2. These first two phases of COPAT comprised more than 100 simulations at a spatial resolution of 0.44°. Based on the best setup identified in Phase 2, a calibration of eight tuning parameters was carried out in Phase 3, following Bellprat et al. (2012). A final simulation with the calibrated parameters was set up at the higher resolution of 0.11°.

  17. ONKALO rock mechanics model (RMM). Version 2.3

    Energy Technology Data Exchange (ETDEWEB)

    Haekkinen, T.; Merjama, S.; Moenkkoenen, H. [WSP Finland, Helsinki (Finland)

    2014-07-15

    The Rock Mechanics Model of the ONKALO rock volume includes the most important rock mechanics features and parameters at the Olkiluoto site. The main objective of the model is to be a tool to predict rock properties, rock quality and hence provide an estimate for the rock stability of the potential repository at Olkiluoto. The model includes a database of rock mechanics raw data and a block model in which the rock mechanics parameters are estimated through block volumes based on spatial rock mechanics raw data. In this version 2.3, special emphasis was placed on refining the estimation of the block model. The model was divided into rock mechanics domains which were used as constraints during the block model estimation. During the modelling process, a display profile and toolbar were developed for the GEOVIA Surpac software to improve visualisation and access to the rock mechanics data for the Olkiluoto area. (orig.)

  18. Model Versions and Fast Algorithms for Network Epidemiology

    Institute of Scientific and Technical Information of China (English)

    Petter Holme

    2014-01-01

Network epidemiology has become a core framework for investigating the role of human contact patterns in the spreading of infectious diseases. In network epidemiology, one represents the contact structure as a network of nodes (individuals) connected by links (sometimes as a temporal network where the links are not continuously active) and the disease as a compartmental model (where individuals are assigned states with respect to the disease and follow certain transition rules between the states). In this paper, we discuss fast algorithms for such simulations and also compare two commonly used versions: one where there is a constant recovery rate (the number of individuals that stop being infectious per unit time is proportional to the number of such people); the other where the duration of the disease is constant. The results show that, for most practical purposes, these versions are qualitatively the same.
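As a rough illustration of the two recovery conventions compared above, here is a minimal discrete-time SIR simulation on a static contact network. This is a sketch only (the paper's event-driven algorithms are considerably faster), and the function name and parameters are invented for illustration.

```python
import random

def simulate_sir(edges, n, beta, recovery, seed=0, constant_duration=False, tmax=10000):
    """Discrete-time SIR on a static contact network.

    constant_duration=False: each infectious node recovers at each step
    with probability `recovery` (geometric duration, the discrete
    analogue of a constant recovery rate).
    constant_duration=True: each node stays infectious for exactly
    round(1/recovery) steps (constant disease duration).
    Returns the final outbreak size (number of ever-infected nodes).
    """
    rng = random.Random(seed)
    neighbors = {i: [] for i in range(n)}
    for u, v in edges:
        neighbors[u].append(v)
        neighbors[v].append(u)
    duration = max(1, round(1 / recovery))
    state = ["S"] * n   # susceptible / infectious / recovered
    clock = [0] * n     # steps spent infectious
    state[0] = "I"      # seed the outbreak at node 0
    for _ in range(tmax):
        infectious = [i for i in range(n) if state[i] == "I"]
        if not infectious:
            break
        for i in infectious:            # transmission
            for j in neighbors[i]:
                if state[j] == "S" and rng.random() < beta:
                    state[j] = "I"
        for i in infectious:            # recovery
            clock[i] += 1
            done = (clock[i] >= duration) if constant_duration \
                   else (rng.random() < recovery)
            if done:
                state[i] = "R"
    return sum(s != "S" for s in state)

# Example: a 10-node ring; with certain transmission and recovery,
# the whole ring is eventually infected under either convention.
ring = [(i, (i + 1) % 10) for i in range(10)]
size_rate = simulate_sir(ring, 10, beta=1.0, recovery=1.0)
size_fixed = simulate_sir(ring, 10, beta=1.0, recovery=1.0, constant_duration=True)
```

Taking a snapshot of the infectious set before transmitting ensures newly infected nodes neither transmit nor recover in the step they are infected, keeping the two conventions comparable.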

  19. H2A Production Model, Version 2 User Guide

    Energy Technology Data Exchange (ETDEWEB)

    Steward, D.; Ramsden, T.; Zuboy, J.

    2008-09-01

    The H2A Production Model analyzes the technical and economic aspects of central and forecourt hydrogen production technologies. Using a standard discounted cash flow rate of return methodology, it determines the minimum hydrogen selling price, including a specified after-tax internal rate of return from the production technology. Users have the option of accepting default technology input values--such as capital costs, operating costs, and capacity factor--from established H2A production technology cases or entering custom values. Users can also modify the model's financial inputs. This new version of the H2A Production Model features enhanced usability and functionality. Input fields are consolidated and simplified. New capabilities include performing sensitivity analyses and scaling analyses to various plant sizes. This User Guide helps users already familiar with the basic tenets of H2A hydrogen production cost analysis get started using the new version of the model. It introduces the basic elements of the model then describes the function and use of each of its worksheets.
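The core of the discounted-cash-flow methodology described above (finding the minimum hydrogen selling price that yields a specified after-tax rate of return) can be sketched as a bisection on project NPV. This is a toy illustration, not the H2A spreadsheet logic; it ignores depreciation, inflation, and construction-period details, and all names are invented.

```python
def minimum_selling_price(capital, annual_om, annual_kg, years, rate, tax=0.0):
    """Bisect for the hydrogen price ($/kg) at which project NPV is
    zero at the target discount rate, i.e. the price delivering exactly
    the specified after-tax internal rate of return.
    Toy model: uniform annual output, no depreciation or inflation."""
    def npv(price):
        cash = (price * annual_kg - annual_om) * (1.0 - tax)
        return -capital + sum(cash / (1.0 + rate) ** t
                              for t in range(1, years + 1))
    lo, hi = 0.0, 1e6
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if npv(mid) < 0.0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# With a zero discount rate the answer reduces to
# (capital / years + annual O&M) / annual output.
price0 = minimum_selling_price(1000.0, 100.0, 100.0, 10, 0.0)
```

Because NPV is monotonically increasing in price, bisection always converges; a positive discount rate raises the required selling price.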

  20. A comparative study of Barron's rubber band ligation with Kshar Sutra ligation in hemorrhoids

    Science.gov (United States)

    Singh, Rakhi; Arya, Ramesh C.; Minhas, Satinder S.; Dutt, Anil

    2010-01-01

Despite a long medical history of identification and treatment, hemorrhoids still pose a challenge to the medical fraternity in terms of finding a satisfactory cure for the disease. In this study, Kshar Sutra Ligation (KSL), a modality of treatment described in Ayurveda, was compared with Barron's Rubber Band Ligation (RBL) for grade II and grade III hemorrhoids. This study was conducted in 20 adult patients of either sex with grade II and grade III hemorrhoids at two different hospitals. Patients were randomly allotted to two groups of 10 patients each. Group I patients underwent RBL, whereas patients of group II underwent KSL. Guggul-based Apamarga Kshar Sutra was prepared according to the principles laid down in ancient Ayurvedic texts and the methodology standardized by IIIM, Jammu and CDRI, Lucknow. Comparative assessment of RBL and KSL was done according to 16 criteria. Although the two procedures were compared on 15 criteria, the treatment outcome of grade II and grade III hemorrhoids was decided chiefly on the basis of the patient satisfaction index (subjective criterion) and the ability of each procedure to deal with prolapse of internal hemorrhoidal masses (objective criterion). Findings in each case were recorded over a follow-up of four weeks (postoperative days 1, 3, 7, 15 and 30). Statistical analysis was done using Student's t test for parametric data and the Chi-square test and Mann-Whitney test for non-parametric data (P > 0.05). Both groups were statistically comparable on all other grounds. Kshar Sutra Ligation is a useful form of treatment for grade II and III internal hemorrhoids. PMID:20814519

  1. Stochastic hyperfine interactions modeling library-Version 2

    Science.gov (United States)

    Zacate, Matthew O.; Evenson, William E.

    2016-02-01

    The stochastic hyperfine interactions modeling library (SHIML) provides a set of routines to assist in the development and application of stochastic models of hyperfine interactions. The library provides routines written in the C programming language that (1) read a text description of a model for fluctuating hyperfine fields, (2) set up the Blume matrix, upon which the evolution operator of the system depends, and (3) find the eigenvalues and eigenvectors of the Blume matrix so that theoretical spectra of experimental techniques that measure hyperfine interactions can be calculated. The optimized vector and matrix operations of the BLAS and LAPACK libraries are utilized. The original version of SHIML constructed and solved Blume matrices for methods that measure hyperfine interactions of nuclear probes in a single spin state. Version 2 provides additional support for methods that measure interactions on two different spin states such as Mössbauer spectroscopy and nuclear resonant scattering of synchrotron radiation. Example codes are provided to illustrate the use of SHIML to (1) generate perturbed angular correlation spectra for the special case of polycrystalline samples when anisotropy terms of higher order than A22 can be neglected and (2) generate Mössbauer spectra for polycrystalline samples for pure dipole or pure quadrupole transitions.

  2. Management of pilonidal sinus by Kshar Sutra, a minimally invasive treatment.

    Science.gov (United States)

    Dwivedi, Amar P

    2010-04-01

A pilonidal sinus (PNS) occurs in the cleavage between the buttocks (natal cleft) and can cause discomfort, embarrassment and absence from work. It is more common in men (as they have more hair) than in women. The most commonly used surgical techniques for this disorder include excision with primary closure and excision with a reconstructive flap. However, the risk of recurrence or of developing a wound infection after the operation is high. Also, the patient requires longer hospitalization, and the procedure is expensive. There is a similarity between the Shalyaj Nadi Vran described in the Sushruta Samhita and pilonidal sinus. Sushruta advocated a minimally invasive para-surgical treatment, viz., the Kshar Sutra procedure, for nadi vran. Hence this therapy was tried in pilonidal sinus, and is described in this case report. Kshar Sutra treatment not only minimizes complications and recurrence but also enables the patient to resume work sooner, with less discomfort, less impact on body image and self-esteem, and reduced cost.

  3. The Lagrangian particle dispersion model FLEXPART version 10

    Science.gov (United States)

    Pisso, Ignacio; Sollum, Espen; Grythe, Henrik; Kristiansen, Nina; Cassiani, Massimo; Eckhardt, Sabine; Thompson, Rona; Groot Zwaaftnik, Christine; Evangeliou, Nikolaos; Hamburger, Thomas; Sodemann, Harald; Haimberger, Leopold; Henne, Stephan; Brunner, Dominik; Burkhart, John; Fouilloux, Anne; Fang, Xuekun; Phillip, Anne; Seibert, Petra; Stohl, Andreas

    2017-04-01

The Lagrangian particle dispersion model FLEXPART was, in its first release in 1998, designed for calculating the long-range and mesoscale dispersion of air pollutants from point sources, such as after an accident at a nuclear power plant. The model has since evolved into a comprehensive tool for atmospheric transport modelling and analysis. Its application fields have been extended to a range of atmospheric transport processes for both atmospheric gases and aerosols, e.g. greenhouse gases, short-lived climate forcers like black carbon, volcanic ash and gases, as well as studies of the water cycle. We present the newest release, FLEXPART version 10. Since the last publication fully describing FLEXPART (version 6.2), the model code has been parallelised to allow for faster computation. A new, more detailed gravitational settling parametrisation for aerosols was implemented, and the wet deposition scheme for aerosols has been heavily modified and updated to provide a more accurate representation of this physical process. In addition, an optional new turbulence scheme for the convective boundary layer is available that considers the skewness of the vertical velocity distribution. Temporal variation and temperature dependence of the OH reaction are also included. Finally, user input files have been updated to a more convenient and user-friendly namelist format, and the option to produce the output files in netCDF format instead of binary format has been implemented. We present these new developments and show recent model applications. Moreover, we also introduce some tools for the preparation of the meteorological input data, as well as for the processing of FLEXPART output data.
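The basic idea of Lagrangian particle dispersion underlying FLEXPART can be illustrated with a zeroth-order random-walk sketch in one dimension. FLEXPART's actual turbulence schemes (including the skewed convective-boundary-layer option mentioned above) are far more sophisticated; all names below are invented for illustration.

```python
import random

def disperse_1d(n_particles, steps, dt, u_mean, sigma_w, seed=0):
    """Zeroth-order Lagrangian random walk in one dimension: each
    particle is advected by a mean wind u_mean and perturbed by a
    Gaussian turbulent velocity with standard deviation sigma_w.
    Returns the list of final particle positions."""
    rng = random.Random(seed)
    xs = [0.0] * n_particles
    for _ in range(steps):
        for i in range(n_particles):
            xs[i] += (u_mean + rng.gauss(0.0, sigma_w)) * dt
    return xs

positions = disperse_1d(2000, steps=10, dt=1.0, u_mean=1.0, sigma_w=1.0)
mean = sum(positions) / len(positions)                 # ~ u_mean * steps * dt
var = sum((x - mean) ** 2 for x in positions) / len(positions)  # ~ steps * (sigma_w * dt)**2
```

The plume centre drifts with the mean wind while its variance grows linearly with the number of steps, the signature of diffusive spreading.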

  4. Software Engineering Designs for Super-Modeling Different Versions of CESM Models using DART

    Science.gov (United States)

    Kluzek, Erik; Duane, Gregory; Tribbia, Joe; Vertenstein, Mariana

    2014-05-01

    The super-modeling approach connects different models together at run time in order to provide run time feedbacks between the models and thus synchronize the models. This method reduces model bias further than after-the-fact averaging of model outputs. We explore different designs to connect different configurations and versions of an IPCC class climate model - the Community Earth System Model (CESM). We build on the Data Assimilation Research Test-bed (DART) software to provide data assimilation from truth as well as to provide a software framework to link different model configurations together. We show a system building on DART that uses a Python script to do simple nudging between three versions of the atmosphere model in CESM (the Community Atmosphere Model (CAM) versions three, four and five).
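The run-time nudging between model versions described above can be caricatured with three toy "models" (linear relaxation equations with different biases) nudged toward their multi-model mean. This is a conceptual sketch, not the DART/CESM implementation; all names and parameter values are invented.

```python
def supermodel_step(states, biases, coupling, dt=0.01):
    """One step of a toy super-model: several versions of a trivially
    simple 'atmosphere' dx/dt = a - x (each version has its own bias a)
    are nudged toward the multi-model mean, mimicking run-time coupling
    between model versions."""
    mean = sum(states) / len(states)
    return [x + dt * ((a - x) + coupling * (mean - x))
            for x, a in zip(states, biases)]

states = [0.0, 1.0, 2.0]
biases = [1.0, 2.0, 3.0]          # three differently biased versions
for _ in range(5000):
    states = supermodel_step(states, biases, coupling=50.0)
# Strong nudging synchronizes the versions near the multi-model
# consensus (here the mean bias, 2.0).
```

At the fixed point each state is (a_i + c*m)/(1 + c) with m the mean bias, so a large coupling c shrinks the inter-model spread without requiring after-the-fact averaging.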

  5. 19-vertex version of the fully frustrated XY model

    Science.gov (United States)

    Knops, Yolanda M. M.; Nienhuis, Bernard; Knops, Hubert J. F.; Blöte, Henk W. J.

    1994-07-01

We investigate a 19-vertex version of the two-dimensional fully frustrated XY (FFXY) model. We construct Yang-Baxter equations for this model and show that there is no solution. Therefore we have chosen a numerical approach based on the transfer matrix. The results show that a coupled XY-Ising model is in the same universality class as the FFXY model. We find that the phase coupling over an Ising wall is irrelevant at criticality. This leads to a correction of earlier determinations of the dimension x*_{h,Is} of the Ising disorder operator. We find x*_{h,Is} = 0.123(5) and a conformal anomaly c = 1.55(5). These results are consistent with the hypothesis that the FFXY model behaves as a superposition of an Ising model and an XY model. However, the dimensions associated with the energy, x_t = 0.77(3), and with the XY magnetization, x_{h,XY} ≈ 0.17, refute this hypothesis.

  6. Looking for the dichromatic version of a colour vision model

    Science.gov (United States)

    Capilla, P.; Luque, M. J.; Díez-Ajenjo, M. A.

    2004-09-01

    Different hypotheses on the sensitivity of photoreceptors and post-receptoral mechanisms were introduced in different colour vision models to derive acceptable dichromatic versions. Models with one (Ingling and T'sou, Guth et al, Boynton) and two linear opponent stages (DeValois and DeValois) and with two non-linear opponent stages (ATD95) were used. The L- and M-cone sensitivities of red-green defectives were either set to zero (cone-loss hypothesis) or replaced by that of a different cone-type (cone-replacement hypothesis), whereas for tritanopes the S-cone sensitivity was always assumed to be zero. The opponent mechanisms were either left unchanged or nulled in one or in all the opponent stages. The dichromatic models obtained have been evaluated according to their performance in three tests: computation of the spectral sensitivity of the dichromatic perceptual mechanisms, prediction of the colour loci describing dichromatic appearance and prediction of the gamut of colours that dichromats perceive as normal subjects do.
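The two hypotheses for red-green defects mentioned above amount to a simple transformation of the LMS cone signal before the opponent stages: under cone loss the affected cone's response is zeroed, under cone replacement it is substituted by another cone class. A hedged sketch (not any specific model's sensitivities or matrices; the function name is invented):

```python
def simulate_dichromat(lms, kind, hypothesis="replacement"):
    """Transform an (L, M, S) cone signal under the two hypotheses in
    the text: 'loss' zeroes the affected cone class, 'replacement'
    substitutes it with the other class (L<->M for red-green defects).
    For tritanopes the S-cone response is always assumed to be zero."""
    L, M, S = lms
    if kind == "protanope":       # L cones affected
        L = 0.0 if hypothesis == "loss" else M
    elif kind == "deuteranope":   # M cones affected
        M = 0.0 if hypothesis == "loss" else L
    elif kind == "tritanope":
        S = 0.0
    else:
        raise ValueError(kind)
    return (L, M, S)
```

The transformed triplet would then be fed to whichever opponent-stage model (linear or non-linear) is being evaluated, with the opponent mechanisms left unchanged or nulled as the text describes.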

  7. The integrated Earth System Model Version 1: formulation and functionality

    Energy Technology Data Exchange (ETDEWEB)

    Collins, William D.; Craig, Anthony P.; Truesdale, John E.; Di Vittorio, Alan; Jones, Andrew D.; Bond-Lamberty, Benjamin; Calvin, Katherine V.; Edmonds, James A.; Kim, Son H.; Thomson, Allison M.; Patel, Pralit L.; Zhou, Yuyu; Mao, Jiafu; Shi, Xiaoying; Thornton, Peter E.; Chini, Louise M.; Hurtt, George C.

    2015-07-23

The integrated Earth System Model (iESM) has been developed as a new tool for projecting the joint human/climate system. The iESM is based upon coupling an Integrated Assessment Model (IAM) and an Earth System Model (ESM) into a common modeling infrastructure. IAMs are the primary tool for describing the human-Earth system, including the sources of global greenhouse gases (GHGs) and short-lived species, land use and land cover change, and other resource-related drivers of anthropogenic climate change. ESMs are the primary scientific tools for examining the physical, chemical, and biogeochemical impacts of human-induced changes to the climate system. The iESM project integrates the economic and human dimension modeling of an IAM and a fully coupled ESM within a single simulation system while maintaining the separability of each model if needed. Both IAM and ESM codes are developed and used by large communities and have been extensively applied in recent national and international climate assessments. By introducing heretofore-omitted feedbacks between natural and societal drivers, we can improve scientific understanding of the human-Earth system dynamics. Potential applications include studies of the interactions and feedbacks leading to the timing, scale, and geographic distribution of emissions trajectories and other human influences, corresponding climate effects, and the subsequent impacts of a changing climate on human and natural systems. This paper describes the formulation, requirements, implementation, testing, and resulting functionality of the first version of the iESM released to the global climate community.

  8. A New Paradigm in Fast BCD Division Using Ancient Indian Vedic Mathematics Sutras

    Directory of Open Access Journals (Sweden)

    Diganta Sengupta

    2013-05-01

For decades, division has been the most time-consuming and expensive procedure in the arithmetic and logic unit of processors. This paper proposes a novel division algorithm based on Ancient Indian Vedic Mathematics Sutras which is much faster compared to conventional division algorithms. Large values for the dividend and the divisor do not adversely affect the speed, as the time estimation of the algorithm depends on the number of normalizations of the remainder rather than on the number of bits in the dividend and the divisor. The algorithm has exhibited remarkable results for conventional midrange processors with numbers of size around 50 bits (15-digit numbers), the upper ceiling of computable numbers for conventional algorithms. In its present form the algorithm can divide numbers of up to 38 digits (127 bits) on a conventional processor; if modified, it can divide even bigger numbers.
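The paper's exact algorithm is not reproduced in the abstract, but the flavor of sutra-based division can be conveyed with the Nikhilam (base-complement) idea: dividing by a number just below a power of ten using its complement, with the remainder repeatedly "normalized" until it is small. A sketch under that assumption (the helper name is mine):

```python
def nikhilam_divide(dividend, divisor):
    """Division via the base-complement idea: with base the next power
    of ten above the divisor and comp = base - divisor, the identity
    N = hi*base + lo = hi*divisor + (hi*comp + lo) lets us accumulate
    hi into the quotient and shrink the remainder each pass.
    Returns (quotient, remainder)."""
    base = 10 ** len(str(divisor))
    comp = base - divisor
    q, r = 0, dividend
    while r >= base:
        hi, lo = divmod(r, base)
        q += hi               # hi whole divisors are peeled off
        r = hi * comp + lo    # normalized remainder, strictly smaller
    while r >= divisor:       # final fix-up for the last partial chunk
        q += 1
        r -= divisor
    return q, r
```

Each pass only multiplies by the (usually small) complement, which is the source of the speed claim: the work scales with the number of normalizations rather than the bit-length of the operands.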

  9. The temporal version of the pediatric sepsis biomarker risk model.

    Directory of Open Access Journals (Sweden)

    Hector R Wong

PERSEVERE is a risk model for estimating mortality probability in pediatric septic shock, using five biomarkers measured within 24 hours of clinical presentation. Here, we derive and test a temporal version of PERSEVERE (tPERSEVERE) that considers biomarker values at the first and third day following presentation to estimate the probability of a "complicated course", defined as persistence of ≥2 organ failures at seven days after meeting criteria for septic shock, or death within 28 days. Biomarkers were measured in the derivation cohort (n = 225) using serum samples obtained during days 1 and 3 of septic shock. Classification and Regression Tree (CART) analysis was used to derive a model to estimate the risk of a complicated course. The derived model was validated in the test cohort (n = 74) and subsequently updated using the combined derivation and test cohorts. A complicated course occurred in 23% of the derivation cohort subjects. The derived model had a sensitivity for a complicated course of 90% (95% CI 78-96), a specificity of 70% (62-77), a positive predictive value of 47% (37-58), and a negative predictive value of 96% (91-99). The area under the receiver operating characteristic curve was 0.85 (0.79-0.90). Similar test characteristics were observed in the test cohort. The updated model had a sensitivity of 91% (81-96), a specificity of 70% (64-76), a positive predictive value of 47% (39-56), and a negative predictive value of 96% (92-99). tPERSEVERE reasonably estimates the probability of a complicated course in children with septic shock and could potentially serve as an adjunct to physiological assessments for monitoring how the risk of poor outcomes changes during early interventions in pediatric septic shock.
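The test characteristics reported above (sensitivity, specificity, positive and negative predictive values) all follow directly from a 2x2 confusion matrix of predicted versus observed complicated courses. A small helper to compute them (illustrative only; the counts below are made up, not the study's data):

```python
def diagnostic_metrics(tp, fp, fn, tn):
    """Sensitivity, specificity, PPV and NPV from a 2x2 confusion
    matrix (true/false positives and negatives)."""
    return {
        "sensitivity": tp / (tp + fn),  # fraction of true cases flagged
        "specificity": tn / (tn + fp),  # fraction of non-cases cleared
        "ppv": tp / (tp + fp),          # positive predictive value
        "npv": tn / (tn + fn),          # negative predictive value
    }

stats = diagnostic_metrics(tp=9, fp=3, fn=1, tn=7)
```

Note that PPV and NPV, unlike sensitivity and specificity, depend on the prevalence of the outcome in the cohort, which is why the 23% event rate matters when interpreting the 47% PPV.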

  10. Preliminary Evaluation of the Community Multiscale Air Quality (CMAQ) Model Version 5.1

    Science.gov (United States)

The AMAD will perform two annual CMAQ model simulations, one with the current publicly available version of the CMAQ model (v5.0.2) and the other with the beta version of the new model (v5.1). The results of each model simulation will then be compared to observations and the pe...

  11. Evaluation of the Community Multiscale Air Quality (CMAQ) Model Version 5.1

    Science.gov (United States)

The AMAD performed two CMAQ model simulations, one with the current publicly available version of the CMAQ model (v5.0.2) and the other with the new version of the CMAQ model (v5.1). The results of each model simulation are compared to observations and the performance of t...

  12. Land-Use Portfolio Modeler, Version 1.0

    Science.gov (United States)

    Taketa, Richard; Hong, Makiko

    2010-01-01

-on-investment. The portfolio model, now known as the Land-Use Portfolio Model (LUPM), provided the framework for the development of the Land-Use Portfolio Modeler, Version 1.0 software (LUPM v1.0). The software provides a geographic information system (GIS)-based modeling tool for evaluating alternative risk-reduction mitigation strategies for specific natural-hazard events. The modeler uses information about a specific natural-hazard event and the features exposed to that event within the targeted study region to derive a measure of a given mitigation strategy's effectiveness. Harnessing the spatial capabilities of a GIS enables the tool to provide a rich, interactive mapping environment in which users can create, analyze, visualize, and compare different

  13. Integrating Cloud Processes in the Community Atmosphere Model, Version 5.

    Energy Technology Data Exchange (ETDEWEB)

    Park, S.; Bretherton, Christopher S.; Rasch, Philip J.

    2014-09-15

This paper provides a description of the parameterizations of the global cloud system in CAM5. Compared to previous versions, the CAM5 cloud parameterization has the following unique characteristics: (1) a transparent cloud macrophysical structure that has horizontally non-overlapped deep cumulus, shallow cumulus and stratus in each grid layer, each of which has its own cloud fraction and mass and number concentrations of cloud liquid droplets and ice crystals; (2) stratus-radiation-turbulence interaction that allows CAM5 to simulate marine stratocumulus solely from grid-mean RH without relying on the stability-based empirical empty stratus; (3) prognostic treatment of the number concentrations of stratus liquid droplets and ice crystals, with activated aerosols and detrained in-cumulus condensates as the main sources and evaporation-sedimentation-precipitation of stratus condensate as the main sinks; and (4) radiatively active cumulus. By imposing consistency between diagnosed stratus fraction and prognosed stratus condensate, CAM5 is free from empty or highly dense stratus at the end of stratus macrophysics. CAM5 also prognoses mass and number concentrations of various aerosol species. Thanks to the aerosol activation and the parameterizations of radiation and stratiform precipitation production as a function of droplet size, CAM5 simulates various aerosol indirect effects associated with stratus as well as direct effects; i.e., aerosol controls both the radiative and hydrological budgets. Detailed analysis of various simulations revealed that CAM5 is much better than CAM3/4 in global performance as well as in physical formulation. However, several problems were also identified, which can be attributed to inappropriate regional tuning, inconsistency between various physics parameterizations, and incomplete model physics. Continuous efforts are underway to further improve CAM5.

  14. Sun Xi-miao in The Biography of The Avatamsaka-Sutra

    Directory of Open Access Journals (Sweden)

    LEE Hyun-Sook

    2005-12-01

This paper introduces and examines the biography of Sun Xi-miao (孫思邈, 581-682) which I found in The Biography of The Avatamsaka-Sutra (華嚴經傳記), written by Fazang (法藏, 643-712) in 692 A.D. This document has been neglected in the study of Sun, the famous medical writer and author of the collection of prescriptions, the Bei ji qian jin yao fang (備急千金要方). His life is otherwise rather well documented, because he has his own biographies in the Jiu Tang shu and Xin Tang shu (新唐書), which drew on the Da Tang sin yu (大唐新語), published in 807. But I found several new pieces of information about Sun in The Biography of The Avatamsaka-Sutra, such as that he served as a military physician in the troops of Li Yuan (李淵), who became the first emperor, Kao Tsu (高祖), of the Tang dynasty and treated Sun with great favour. This document lets us know that the Bei ji qian jin yao fang, known as published in 652 A.D., was dedicated to Kao Tsu. My conclusions are as follows: First, the biography was written by Fazang in 692 A.D., the real founder of the fraternity of the Avatamsaka in China, for the purpose of encouraging the copying of the Avatamsaka-Sutra (華嚴經). According to this biography, Sun made 750 copies to persuade the monks and the people, and that is the reason Fazang wrote his biography. Secondly, it was not conveyed to posterity that Sun was good-looking and tall, used to be the physician of Kao Tsu, and dedicated his medical book to the first emperor. This may have been left out of the official records for the sake of Tai Tsung (太宗), who murdered his brother, the heir apparent to the throne, and became the second emperor himself. On the contrary, it was written in the Da Tang sin yu and the Jiu and Xin Tang shu that Sun predicted that his collection of prescriptions would help a holy man 50 years after Xuan Di (宣帝, 578-579) of the Northern Chou (北周) dynasty. The holy man meant Tai Tsung. It shows that Sun's biographies in the Da Tang sin yu and the Jiu and Xin Tang Shu were

  15. The NDFF-EcoGRID logical data model, version 3. - Document version 1.1

    NARCIS (Netherlands)

    W. Arp; G. van Reenen; R. van Seeters; M. Tentij; L.E. Veen; D. Zoetebier

    2011-01-01

    The National Authority for Data concerning Nature has been appointed by the Ministry of Agriculture, Nature and Food Quality, and has been assigned the task of making available nature data and of promoting its use. The logical data model described here is intended for everyone in The Netherlands (an

  16. A Constrained and Versioned Data Model for TEAM Data

    Science.gov (United States)

    Andelman, S.; Baru, C.; Chandra, S.; Fegraus, E.; Lin, K.

    2009-04-01

    The objective of the Tropical Ecology Assessment and Monitoring Network (www.teamnetwork.org) is "To generate real time data for monitoring long-term trends in tropical biodiversity through a global network of TEAM sites (i.e. field stations in tropical forests), providing an early warning system on the status of biodiversity to effectively guide conservation action". To achieve this, the TEAM Network operates by collecting data via standardized protocols at TEAM Sites. The standardized TEAM protocols include the Climate, Vegetation and Terrestrial Vertebrate Protocols; some sites also implement additional protocols. There are currently 7 TEAM Sites, with plans to grow the network to 15 by June 30, 2009 and to 50 TEAM Sites by the end of 2010. At each TEAM Site, data are gathered as defined by the protocols and according to a predefined sampling schedule. The TEAM data are organized and stored in a database based on the TEAM spatio-temporal data model. This data model is at the core of the TEAM Information System: it executes the spatio-temporal queries and analytical functions performed on TEAM data, and defines the object data types, relationships and operations that maintain database integrity. The TEAM data model contains object types for observation objects (e.g. birds, butterflies and trees), sampling units, persons, roles, protocols and sites, and the relationships among these object types. Each observation data record is a set of attribute values of an observation object and is always associated with a sampling unit, an observation timestamp or time interval, a versioned protocol and data collectors. The operations on the TEAM data model can be classified as read, insert and update operations. A typical operation is get(site, protocol, [sampling unit block, sampling unit,] start time, end time), which returns all data records using the specified protocol and collected at the specified site, block
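The get operation described in the record above can be sketched as a filter over versioned observation records. The Observation class and all field names below are illustrative assumptions, not the actual TEAM schema:

```python
from dataclasses import dataclass
from datetime import datetime

@dataclass
class Observation:
    # Hypothetical record mirroring the abstract: every observation is tied
    # to a site, a sampling unit, a timestamp, a versioned protocol, and
    # the data collectors.
    site: str
    protocol: str
    protocol_version: int
    sampling_unit: str
    timestamp: datetime
    collectors: tuple
    attributes: dict

def get(records, site, protocol, start, end, sampling_unit=None):
    """Return records for one site/protocol within [start, end]; the
    optional sampling_unit mirrors the bracketed arguments in the text."""
    return [r for r in records
            if r.site == site and r.protocol == protocol
            and start <= r.timestamp <= end
            and (sampling_unit is None or r.sampling_unit == sampling_unit)]

# Usage: two climate records, one outside the queried interval.
recs = [
    Observation("VB", "Climate", 1, "CL-1", datetime(2009, 1, 5), ("A",), {"t_c": 24.1}),
    Observation("VB", "Climate", 1, "CL-1", datetime(2008, 6, 1), ("A",), {"t_c": 22.7}),
]
hits = get(recs, "VB", "Climate", datetime(2009, 1, 1), datetime(2009, 12, 31))
```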

  17. Estimating Parameters for the PVsyst Version 6 Photovoltaic Module Performance Model

    Energy Technology Data Exchange (ETDEWEB)

    Hansen, Clifford [Sandia National Laboratories (SNL-NM), Albuquerque, NM (United States)

    2015-10-01

    We present an algorithm to determine parameters for the photovoltaic module performance model encoded in the software package PVsyst(TM) version 6. Our method operates on current-voltage (I-V) measurements taken over a range of irradiance and temperature conditions. We describe the method and illustrate its steps using data for a 36-cell crystalline silicon module. We qualitatively compare our method with one other technique for estimating parameters for the PVsyst(TM) version 6 model.

  18. psychotools - Infrastructure for Psychometric Modeling: Version 0.1-1

    OpenAIRE

    Zeileis, A.; Strobl, Carolin; Wickelmaier, F

    2011-01-01

    Infrastructure for psychometric modeling such as data classes (e.g., for paired comparisons) and basic model fitting functions (e.g., for Rasch and Bradley-Terry models). Intended especially as a common building block for fitting psychometric mixture models in package 'psychomix' and psychometric tree models in package 'psychotree'. License: GPL-2

  19. Implementing an HL7 version 3 modeling tool from an Ecore model.

    Science.gov (United States)

    Bánfai, Balázs; Ulrich, Brandon; Török, Zsolt; Natarajan, Ravi; Ireland, Tim

    2009-01-01

    One of the main challenges of achieving interoperability using the HL7 V3 healthcare standard is the lack of clear definition and supporting tools for modeling, testing, and conformance checking. Currently, the knowledge defining the modeling is scattered around in MIF schemas, tools and specifications or simply with the domain experts. Modeling core HL7 concepts, constraints, and semantic relationships in Ecore/EMF encapsulates the domain-specific knowledge in a transparent way while unifying Java, XML, and UML in an abstract, high-level representation. Moreover, persisting and versioning the core HL7 concepts as a single Ecore context allows modelers and implementers to create, edit and validate message models against a single modeling context. The solution discussed in this paper is implemented in the new HL7 Static Model Designer as an extensible toolset integrated as a standalone Eclipse RCP application.

  20. Calibrating and Updating the Global Forest Products Model (GFPM version 2014 with BPMPD)

    Science.gov (United States)

    Joseph Buongiorno; Shushuai Zhu

    2014-01-01

    The Global Forest Products Model (GFPM) is an economic model of global production, consumption, and trade of forest products. An earlier version of the model is described in Buongiorno et al. (2003). The GFPM 2014 has data and parameters to simulate changes of the forest sector from 2010 to 2030. Buongiorno and Zhu (2014) describe how to use the model for simulation....

  1. Calibrating and updating the Global Forest Products Model (GFPM version 2016 with BPMPD)

    Science.gov (United States)

    Joseph Buongiorno; Shushuai Zhu

    2016-01-01

    The Global Forest Products Model (GFPM) is an economic model of global production, consumption, and trade of forest products. An earlier version of the model is described in Buongiorno et al. (2003). The GFPM 2016 has data and parameters to simulate changes of the forest sector from 2013 to 2030. Buongiorno and Zhu (2015) describe how to use the model for...

  2. Aircraft/Air Traffic Management Functional Analysis Model. Version 2.0; User's Guide

    Science.gov (United States)

    Etheridge, Melvin; Plugge, Joana; Retina, Nusrat

    1998-01-01

    The Aircraft/Air Traffic Management Functional Analysis Model, Version 2.0 (FAM 2.0), is a discrete event simulation model designed to support analysis of alternative concepts in air traffic management and control. FAM 2.0 was developed by the Logistics Management Institute (LMI) under a National Aeronautics and Space Administration (NASA) contract. This document provides a guide for using the model in analysis. Those interested in making enhancements or modifications to the model should consult the companion document, Aircraft/Air Traffic Management Functional Analysis Model, Version 2.0 Technical Description.

  3. Modelling and analysis of Markov reward automata (extended version)

    NARCIS (Netherlands)

    Guck, Dennis; Timmer, Mark; Hatefi, Hassan; Ruijters, Enno; Stoelinga, Mariëlle

    2014-01-01

    Costs and rewards are important ingredients for cyberphysical systems, modelling critical aspects like energy consumption, task completion, repair costs, and memory usage. This paper introduces Markov reward automata, an extension of Markov automata that allows the modelling of systems incorporating

  4. Estimating hybrid choice models with the new version of Biogeme

    OpenAIRE

    Bierlaire, Michel

    2010-01-01

    Hybrid choice models integrate many types of discrete choice modeling methods, including latent classes and latent variables, in order to capture concepts such as perceptions, attitudes, preferences, and motivation (Ben-Akiva et al., 2002). Although they provide an excellent framework to capture complex behavior patterns, their use in applications remains rare in the literature due to the difficulty of estimating the models. In this talk, we provide a short introduction to hybrid choice model...

  5. A hypocentral version of the space-time ETAS model

    Science.gov (United States)

    Guo, Yicun; Zhuang, Jiancang; Zhou, Shiyong

    2015-10-01

    The space-time Epidemic-Type Aftershock Sequence (ETAS) model is extended by incorporating the depth component of earthquake hypocentres. The depths of the direct offspring produced by an earthquake are assumed to be independent of the epicentre locations and to follow a beta distribution, whose shape parameter is determined by the depth of the parent event. This new model is verified by applying it to the Southern California earthquake catalogue. The results show that the new model fits data better than the original epicentre ETAS model and that it provides the potential for modelling and forecasting seismicity with higher resolutions.
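The depth mechanism can be illustrated with a toy sampler. The parameterization below (a beta distribution on [0, max_depth] whose mode is tied to the parent's depth, with a fixed concentration) is an assumption for illustration only; the paper's fitted shape function is not reproduced here:

```python
import random

def sample_offspring_depth(parent_depth, max_depth=30.0, concentration=10.0,
                           rng=random):
    """Draw an offspring depth on [0, max_depth] km from a beta distribution
    whose shape depends on the parent's depth (here: mode at the parent).
    Depths are independent of epicentre locations, as in the abstract."""
    m = parent_depth / max_depth                 # parent depth mapped to (0, 1)
    alpha = 1.0 + concentration * m
    beta = 1.0 + concentration * (1.0 - m)
    return max_depth * rng.betavariate(alpha, beta)

# Usage: offspring of a 15 km deep parent scatter around 15 km.
rng = random.Random(42)
depths = [sample_offspring_depth(15.0, rng=rng) for _ in range(2000)]
```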

  6. Result Summary for the Area 5 Radioactive Waste Management Site Performance Assessment Model Version 4.113

    Energy Technology Data Exchange (ETDEWEB)

    Shott, G. J.

    2012-04-15

    Preliminary results for Version 4.113 of the Nevada National Security Site Area 5 Radioactive Waste Management Site performance assessment model are summarized. Version 4.113 includes the Fiscal Year 2011 inventory estimate.

  7. SSM - SOLID SURFACE MODELER, VERSION 6.0

    Science.gov (United States)

    Goza, S. P.

    1994-01-01

    The Solid Surface Modeler (SSM) is an interactive graphics software application for solid-shaded and wireframe three- dimensional geometric modeling. It enables the user to construct models of real-world objects as simple as boxes or as complex as Space Station Freedom. The program has a versatile user interface that, in many cases, allows mouse input for intuitive operation or keyboard input when accuracy is critical. SSM can be used as a stand-alone model generation and display program and offers high-fidelity still image rendering. Models created in SSM can also be loaded into other software for animation or engineering simulation. (See the information below for the availability of SSM with the Object Orientation Manipulator program, OOM, a graphics software application for three-dimensional rendering and animation.) Models are constructed within SSM using functions of the Create Menu to create, combine, and manipulate basic geometric building blocks called primitives. Among the simpler primitives are boxes, spheres, ellipsoids, cylinders, and plates; among the more complex primitives are tubes, skinned-surface models and surfaces of revolution. SSM also provides several methods for duplicating models. Constructive Solid Geometry (CSG) is one of the most powerful model manipulation tools provided by SSM. The CSG operations implemented in SSM are union, subtraction and intersection. SSM allows the user to transform primitives with respect to each axis, transform the camera (the user's viewpoint) about its origin, apply texture maps and bump maps to model surfaces, and define color properties; to select and combine surface-fill attributes, including wireframe, constant, and smooth; and to specify models' points of origin (the positions about which they rotate). SSM uses Euler angle transformations for calculating the results of translation and rotation operations. The user has complete control over the modeling environment from within the system. 
A variety of file
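The CSG operations SSM implements (union, subtraction, intersection) can be illustrated with signed distance functions, a common representation for solid modeling; this is a generic sketch, not SSM's internal geometry kernel:

```python
# Constructive Solid Geometry on signed distance functions (SDFs):
# a point is inside a solid where its SDF is negative. The classic
# combinators are union = min, intersection = max, subtraction = max(a, -b).
def sphere(cx, cy, cz, r):
    """SDF of a sphere with centre (cx, cy, cz) and radius r."""
    return lambda x, y, z: ((x - cx) ** 2 + (y - cy) ** 2 + (z - cz) ** 2) ** 0.5 - r

def union(a, b):
    return lambda x, y, z: min(a(x, y, z), b(x, y, z))

def intersection(a, b):
    return lambda x, y, z: max(a(x, y, z), b(x, y, z))

def subtraction(a, b):
    return lambda x, y, z: max(a(x, y, z), -b(x, y, z))

# Usage: carve one unit sphere out of another, offset along x.
s1 = sphere(0.0, 0.0, 0.0, 1.0)
s2 = sphere(0.5, 0.0, 0.0, 1.0)
cut = subtraction(s1, s2)          # s1 with s2 removed
```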

  8. Alternative Factor Models and Heritability of the Short Leyton Obsessional Inventory--Children's Version

    Science.gov (United States)

    Moore, Janette; Smith, Gillian W.; Shevlin, Mark; O'Neill, Francis A.

    2010-01-01

    An alternative models framework was used to test three confirmatory factor analytic models for the Short Leyton Obsessional Inventory-Children's Version (Short LOI-CV) in a general population sample of 517 young adolescent twins (11-16 years). A one-factor model as implicit in current classification systems of Obsessive-Compulsive Disorder (OCD),…

  9. The MiniBIOS model (version 1A4) at the RIVM

    NARCIS (Netherlands)

    Uijt de Haag PAM; Laheij GMH

    1993-01-01

    This report is the user's guide of the MiniBIOS model, version 1A4. The model is operational at the Laboratory of Radiation Research of the RIVM. MiniBIOS is a simulation model for calculating the transport of radionuclides in the biosphere and the consequential radiation dose to humans. The

  11. Simulating historical landscape dynamics using the landscape fire succession model LANDSUM version 4.0

    Science.gov (United States)

    Robert E. Keane; Lisa M. Holsinger; Sarah D. Pratt

    2006-01-01

    The range and variation of historical landscape dynamics could provide a useful reference for designing fuel treatments on today's landscapes. Simulation modeling is a vehicle that can be used to estimate the range of conditions experienced on historical landscapes. A landscape fire succession model called LANDSUMv4 (LANDscape SUccession Model version 4.0) is...

  15. Complexity, accuracy and practical applicability of different biogeochemical model versions

    Science.gov (United States)

    Los, F. J.; Blaas, M.

    2010-04-01

    The construction of validated biogeochemical model applications as prognostic tools for the marine environment involves a large number of choices, particularly with respect to the level of detail of the physical, chemical and biological aspects. Generally speaking, enhanced complexity might enhance veracity, accuracy and credibility. However, very complex models are not necessarily effective or efficient forecast tools. In this paper, models of varying degrees of complexity are evaluated with respect to their forecast skill. In total, 11 biogeochemical model variants have been considered, based on four different horizontal grids. The applications vary in spatial resolution, in vertical resolution (2DH versus 3D), in the nature of transport, in turbidity and in the number of phytoplankton species. The included models range from 15-year-old applications with relatively simple physics up to present state-of-the-art 3D models. With all applications the same year, 2003, has been simulated. During the model intercomparison it was noticed that the 'OSPAR' Goodness of Fit cost function (Villars and de Vries, 1998) leads to insufficient discrimination between different models. This results in models obtaining similar scores although closer inspection of the results reveals large differences. In this paper, therefore, we have adopted the target diagram of Jolliff et al. (2008), which provides a concise and more contrasting picture of model skill over the entire model domain and for the entire period of the simulations. Correctness in prediction of the mean and of the variability are separated, which enhances insight into model functioning. Using the target diagrams it is demonstrated that recent models are more consistent and have smaller biases. Graphical inspection of time series confirms this, as the level of variability appears more realistic, also given the multi-annual background statistics of the observations.
Nevertheless, whether the improvements are all genuine for the particular

  16. Efficient Modelling and Generation of Markov Automata (extended version)

    NARCIS (Netherlands)

    Timmer, Mark; Katoen, Joost-Pieter; Pol, van de Jaco; Stoelinga, Mariëlle

    2012-01-01

    This paper introduces a framework for the efficient modelling and generation of Markov automata. It consists of (1) the data-rich process-algebraic language MAPA, allowing concise modelling of systems with nondeterminism, probability and Markovian timing; (2) a restricted form of the language, the M

  17. Modeling the complete Otto cycle: Preliminary version. [computer programming

    Science.gov (United States)

    Zeleznik, F. J.; Mcbride, B. J.

    1977-01-01

    A description is given of the equations and the computer program being developed to model the complete Otto cycle. The program incorporates such important features as: (1) heat transfer, (2) finite combustion rates, (3) complete chemical kinetics in the burned gas, (4) exhaust gas recirculation, and (5) manifold vacuum or supercharging. Changes in thermodynamic, kinetic and transport data as well as model parameters can be made without reprogramming. Preliminary calculations indicate that: (1) chemistry and heat transfer significantly affect composition and performance, (2) there seems to be a strong interaction among model parameters, and (3) a number of cycles must be calculated in order to obtain steady-state conditions.

  18. Flipped version of the supersymmetric strongly coupled preon model

    Science.gov (United States)

    Fajfer, S.; Mileković, M.; Tadić, D.

    1989-12-01

    In the supersymmetric SU(5) [SUSY SU(5)] composite model (which was described in an earlier paper) the fermion mass terms can be easily constructed. The SUSY SU(5)⊗U(1), i.e., flipped, composite model possesses a completely analogous composite-particle spectrum. However, in that model one cannot construct a renormalizable superpotential which would generate fermion mass terms. This contrasts with the standard noncomposite grand unified theories (GUT's) in which both the Georgi-Glashow electrical charge embedding and its flipped counterpart lead to the renormalizable theories.

  19. ONKALO rock mechanics model (RMM) - Version 2.0

    Energy Technology Data Exchange (ETDEWEB)

    Moenkkoenen, H. [WSP Finland Oy, Helsinki (Finland); Hakala, M. [KMS Hakala Oy, Nokia (Finland); Paananen, M.; Laine, E. [Geological Survey of Finland, Espoo (Finland)

    2012-02-15

    The Rock Mechanics Model of the ONKALO rock volume is a description of the significant features and parameters related to rock mechanics. The main objective is to develop a tool to predict the rock properties, quality and hence the potential for stress failure which can then be used for continuing design of the ONKALO and the repository. This is the second implementation of the Rock Mechanics Model and it includes sub-models of the intact rock strength, in situ stress, thermal properties, rock mass quality and properties of the brittle deformation zones. Because of the varying quantities of available data for the different parameters, the types of presentations also vary: some data sets can be presented in the style of a 3D block model but, in other cases, a single distribution represents the whole rock volume hosting the ONKALO. (orig.)

  20. U.S. Coastal Relief Model - Southern California Version 2

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — NGDC's U.S. Coastal Relief Model (CRM) provides a comprehensive view of the U.S. coastal zone integrating offshore bathymetry with land topography into a seamless...

  1. The Oak Ridge Competitive Electricity Dispatch (ORCED) Model Version 9

    Energy Technology Data Exchange (ETDEWEB)

    Hadley, Stanton W. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Baek, Young Sun [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States)

    2016-11-01

    The Oak Ridge Competitive Electricity Dispatch (ORCED) model dispatches power plants in a region to meet the electricity demands for any single given year up to 2030. It uses publicly available sources of data describing electric power units such as the National Energy Modeling System and hourly demands from utility submittals to the Federal Energy Regulatory Commission that are projected to a future year. The model simulates a single region of the country for a given year, matching generation to demands and predefined net exports from the region, assuming no transmission constraints within the region. ORCED can calculate a number of key financial and operating parameters for generating units and regional market outputs including average and marginal prices, air emissions, and generation adequacy. By running the model with and without changes such as generation plants, fuel prices, emission costs, plug-in hybrid electric vehicles, distributed generation, or demand response, the marginal impact of these changes can be found.
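The core step of matching generation to demand can be illustrated with a minimal merit-order dispatch, in which the last unit dispatched sets the marginal price. This is a generic sketch of marginal-price dispatch, not the ORCED algorithm itself:

```python
def dispatch(plants, demand_mw):
    """plants: list of (capacity_mw, marginal_cost) tuples. Dispatch the
    cheapest plants first until demand is met; the cost of the last unit
    used sets the marginal price for the hour."""
    total_cost, marginal_price, remaining = 0.0, 0.0, demand_mw
    for cap, cost in sorted(plants, key=lambda p: p[1]):   # merit order
        if remaining <= 0:
            break
        gen = min(cap, remaining)                          # partial dispatch ok
        total_cost += gen * cost
        marginal_price = cost
        remaining -= gen
    if remaining > 1e-9:
        raise ValueError("demand exceeds available capacity")
    return total_cost, marginal_price

# Usage: 150 MW demand served by 100 MW at $10/MWh plus 50 MW at $20/MWh,
# so the $20 plant is marginal.
cost, price = dispatch([(100, 10.0), (50, 30.0), (80, 20.0)], 150)
```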

  2. Macro System Model (MSM) User Guide, Version 1.3

    Energy Technology Data Exchange (ETDEWEB)

    Ruth, M.; Diakov, V.; Sa, T.; Goldsby, M.

    2011-09-01

    This user guide describes the macro system model (MSM). The MSM has been designed to allow users to analyze the financial, environmental, transitional, geographical, and R&D issues associated with the transition to a hydrogen economy. Basic end users can use the MSM to answer cross-cutting questions that were previously difficult to answer in a consistent and timely manner due to various assumptions and methodologies among different models.

  3. A Systems Engineering Capability Maturity Model, Version 1.1,

    Science.gov (United States)

    1995-11-01

    The Systems Engineering Capability Maturity Model (SE-CMM) was developed as a response to industry requests for assistance in coordinating and publishing a model that would foster improvement

  4. Due Regard Encounter Model Version 1.0

    Science.gov (United States)

    2013-08-19

    Note that no existing model covers encounters between two IFR aircraft in oceanic airspace. The reason for this is that one cannot observe encounters ... encounters between instrument flight rules (IFR) and non-IFR traffic beyond 12 NM. [Table 1, "Encounter model categories", omitted]

  5. Using the Global Forest Products Model (GFPM version 2012)

    Science.gov (United States)

    Joseph Buongiorno; Shushuai Zhu

    2012-01-01

    The purpose of this manual is to enable users of the Global Forest Products Model to:
    • Install and run the GFPM software
    • Understand the input data
    • Change the input data to explore different scenarios
    • Interpret the output
    The GFPM is an economic model of global production, consumption and trade of forest products (Buongiorno et al. 2003). The GFPM2012 has data...

  6. Institutional Transformation Version 2.5 Modeling and Planning.

    Energy Technology Data Exchange (ETDEWEB)

    Villa, Daniel [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Mizner, Jack H. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Passell, Howard D. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Gallegos, Gerald R. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Peplinski, William John [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Vetter, Douglas W. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Evans, Christopher A. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Malczynski, Leonard A. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Addison, Marlin [Arizona State Univ., Mesa, AZ (United States); Schaffer, Matthew A. [Bridgers and Paxton Engineering Firm, Albuquerque, NM (United States); Higgins, Matthew W. [Vibrantcy, Albuquerque, NM (United States)

    2017-02-01

    Reducing the resource consumption and emissions of large institutions is an important step toward a sustainable future. Sandia National Laboratories' (SNL) Institutional Transformation (IX) project vision is to provide tools that enable planners to make well-informed decisions concerning sustainability, resource conservation, and emissions reduction across multiple sectors. The building sector has been the primary focus so far because it is the largest consumer of resources for SNL. The IX building module allows users to define the evolution of many buildings over time. The module has been created so that it can be generally applied to any set of DOE-2 ( http://doe2.com ) building models that have been altered to include parameters and expressions required by energy conservation measures (ECM). Once building models have been appropriately prepared, they are checked into a Microsoft Access (r) database. Each building can be represented by many models. This enables the capability to keep a continuous record of models in the past, which are replaced with different models as changes occur to the building. In addition to this, the building module has the capability to apply climate scenarios through applying different weather files to each simulation year. Once the database has been configured, a user interface in Microsoft Excel (r) is used to create scenarios with one or more ECMs. The capability to include central utility buildings (CUBs) that service more than one building with chilled water has been developed. A utility has been created that joins multiple building models into a single model. After using the utility, several manual steps are required to complete the process. Once this CUB model has been created, the individual contributions of each building are still tracked through meters. Currently, 120 building models from SNL's New Mexico and California campuses have been created. This includes all buildings at SNL greater than 10,000 sq. ft

  7. Production of a Functionally Graded Material (FGM) from Hydroxyapatite-Silk Fiber (Serat Sutra) for Biomaterial Applications Using the Pulse Electric Current Sintering Technique

    Directory of Open Access Journals (Sweden)

    Tjokorda Gde Tirta Nindhia

    2006-01-01

    Full Text Available This research is intended to produce a functionally graded material (FGM) of hydroxyapatite (Hap)-silk fibroin by pulse electric current sintering, to meet needs in biomaterial applications. The sample is built of 4 layers of equal thickness, 0.625 mm each, so that the total sample thickness is 2.5 mm, with a diameter of 15 mm. A carbon die is used to compact the sample. The composition of the lowest layer is 100% silk fibroin, followed by 90% silk fibroin + 10% Hap; the third layer is 80% silk fibroin + 20% Hap, and the top layer 70% silk fibroin + 30% Hap. The properties of the FGM product were characterized by optical microscopy, scanning electron microscopy (SEM), and a three-point-bend single-edge-beam fracture toughness test (KIC). The grading of the FGM material is proven using an electron probe micro analyzer (EPMA). The measured fracture toughness is 0.45 MPa·m^(1/2), and the sample can still support load after the maximum load is reached. Optical micrographs, SEM images and the EPMA results indicate that the Hap-silk fibroin FGM can be produced successfully by the method introduced in this research. Abstract in Bahasa Indonesia (translated): This research aims to produce a functionally graded material (FGM) from hydroxyapatite (Hap)-silk fiber (serat sutra) by the pulse electric current sintering technique, to meet the demand for this class of material in the biomaterial field. The specimen consists of 4 layers of equal thickness, giving a total thickness of 2.5 mm and a diameter of 15 mm. The composition of the bottom layer is 100% silk fiber, then 90% silk fiber + 10% Hap; the third layer is 80% silk fiber + 20% Hap, with 70% silk fiber + 30% Hap for the top layer. The behaviour of this FGM product was characterized by optical microscopy, electron microscopy, and a three-point-bend single-edge fracture toughness test.
    The grading of the FGM is demonstrated with an electron probe micro analyzer (EPMA

  8. Zig-zag version of the Frenkel-Kontorova model

    DEFF Research Database (Denmark)

    Christiansen, Peter Leth; Savin, A.V.; Zolotaryuk, Alexander

    1996-01-01

    We study a generalization of the Frenkel-Kontorova model which describes a zig-zag chain of particles coupled by both the first- and second-neighbor harmonic forces and subjected to a planar substrate with a commensurate potential relief. The particles are supposed to have two degrees of freedom:...

  9. A node-based version of the cellular Potts model.

    Science.gov (United States)

    Scianna, Marco; Preziosi, Luigi

    2016-09-01

    The cellular Potts model (CPM) is a lattice-based Monte Carlo method that uses an energetic formalism to describe the phenomenological mechanisms underlying the biophysical problem of interest. We here propose a CPM-derived framework that relies on a node-based representation of cell-scale elements. This feature has relevant consequences for the overall simulation environment. First, our model can be implemented on any given domain, provided a proper discretization (which can be regular or irregular, fixed or time-evolving). Second, it allows an explicit representation of cell membranes, whose displacements realistically result in cell movement. Finally, our node-based approach can be easily interfaced with continuous mechanics or fluid dynamics models. The proposed computational environment is here applied to some simple biological phenomena, such as cell sorting and chemotactic migration, in part to analyse the performance of the underlying algorithm. The work closes with a critical comparison of the advantages and disadvantages of our model with respect to the traditional CPM and to some similar vertex-based approaches. Copyright © 2016 Elsevier Ltd. All rights reserved.
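The energetic formalism behind CPM-style dynamics can be illustrated with a one-dimensional toy Metropolis update, in which a site copies a neighbour's label and the move is accepted with the Boltzmann rule. The boundary-count energy and all names below are illustrative, not the model of the paper:

```python
import math
import random

def boundary_energy(labels):
    """Adhesion-like energy: the number of unlike neighbouring pairs on a ring."""
    n = len(labels)
    return sum(labels[k] != labels[(k + 1) % n] for k in range(n))

def metropolis_step(labels, energy, T, rng):
    """One trial move: a random site tries to copy a random neighbour's label;
    the change is accepted if it lowers the energy, or with probability
    exp(-dE/T) otherwise. Mutates labels in place; returns True on accept."""
    n = len(labels)
    i = rng.randrange(n)
    j = (i + rng.choice([-1, 1])) % n
    if labels[i] == labels[j]:
        return False
    trial = labels[:]
    trial[i] = labels[j]
    dE = energy(trial) - energy(labels)
    if dE <= 0 or rng.random() < math.exp(-dE / T):
        labels[i] = trial[i]
        return True
    return False

# Usage: at low temperature, like labels coarsen into domains (cell sorting
# in miniature), so the boundary energy drops from its random initial value.
rng = random.Random(0)
cells = [rng.choice([0, 1]) for _ in range(50)]
e_before = boundary_energy(cells)
for _ in range(5000):
    metropolis_step(cells, boundary_energy, T=0.1, rng=rng)
e_after = boundary_energy(cells)
```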

  10. Red Storm usage model: Version 1.12.

    Energy Technology Data Exchange (ETDEWEB)

    Jefferson, Karen L.; Sturtevant, Judith E.

    2005-12-01

    Red Storm is an Advanced Simulation and Computing (ASC) funded massively parallel supercomputer located at Sandia National Laboratories (SNL). The Red Storm Usage Model (RSUM) documents the capabilities and the environment provided for the FY05 Tri-Lab Level II Limited Availability Red Storm User Environment Milestone and the FY05 SNL Level II Limited Availability Red Storm Platform Milestone. This document describes specific capabilities, tools, and procedures to support both local and remote users. The model is focused on the needs of the ASC user working in the secure computing environments at Los Alamos National Laboratory (LANL), Lawrence Livermore National Laboratory (LLNL), and SNL. Additionally, the Red Storm Usage Model maps the provided capabilities to the Tri-Lab ASC Computing Environment (ACE) requirements. The ACE requirements reflect the high performance computing requirements for the ASC community and have been updated in FY05 to reflect the community's needs. For each section of the RSUM, Appendix I maps the ACE requirements to the Limited Availability User Environment capabilities and includes a description of ACE requirements met and those requirements that are not met in that particular section. The Red Storm Usage Model, along with the ACE mappings, has been issued and vetted throughout the Tri-Lab community.

  11. Parameter Estimation in Rainfall-Runoff Modelling Using Distributed Versions of Particle Swarm Optimization Algorithm

    Directory of Open Access Journals (Sweden)

    Michala Jakubcová

    2015-01-01

    Full Text Available The presented paper provides an analysis of selected versions of the particle swarm optimization (PSO) algorithm. The tested versions of the PSO were combined with a shuffling mechanism, which splits the model population into complexes and performs distributed PSO optimization. One of them is a newly proposed PSO modification, APartW, which enhances global exploration and local exploitation in the parameter space during the optimization process through a new updating mechanism applied to the PSO inertia weight. The performance of four selected PSO methods was tested on 11 benchmark optimization problems, which were prepared for the special session on single-objective real-parameter optimization at CEC 2005. The results confirm that the new APartW PSO variant is comparable with the other existing distributed PSO versions, AdaptW and LinTimeVarW. The distributed PSO versions were developed for solving inverse problems related to the estimation of parameters of the hydrological model Bilan. The results of the case study, performed on a set of 30 catchments obtained from the MOPEX database, show that the tested distributed PSO versions provide suitable estimates of the Bilan model parameters and thus can be used for solving related inverse problems during the calibration of the studied water balance hydrological model.
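
The inertia-weight mechanism at the heart of these PSO variants can be illustrated with a minimal global-best PSO. The linearly time-varying weight below corresponds to the generic LinTimeVarW idea; APartW's adaptive update and the shuffled-complex mechanism are described in the paper. The sphere objective and all parameter values here are illustrative assumptions.

```python
import random

def pso_sphere(dim=2, n_particles=10, iters=200, w_max=0.9, w_min=0.4,
               c1=2.0, c2=2.0, seed=0):
    """Minimise the sphere function sum(x_i^2) with a linearly
    time-varying inertia weight."""
    rng = random.Random(seed)
    f = lambda x: sum(v * v for v in x)
    pos = [[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]                 # personal bests
    pbest_f = [f(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_f[i])
    gbest, gbest_f = pbest[g][:], pbest_f[g]    # global best
    for t in range(iters):
        w = w_max - (w_max - w_min) * t / (iters - 1)   # inertia decays over time
        for i in range(n_particles):
            for d in range(dim):
                vel[i][d] = (w * vel[i][d]
                             + c1 * rng.random() * (pbest[i][d] - pos[i][d])
                             + c2 * rng.random() * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            fi = f(pos[i])
            if fi < pbest_f[i]:
                pbest[i], pbest_f[i] = pos[i][:], fi
                if fi < gbest_f:
                    gbest, gbest_f = pos[i][:], fi
    return gbest, gbest_f

best, best_f = pso_sphere()
```

A distributed variant in the paper's sense would partition the swarm into complexes, run this loop within each complex, and periodically shuffle particles between them.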

  12. Incremental Testing of the Community Multiscale Air Quality (CMAQ) Modeling System Version 4.7

    Science.gov (United States)

    This paper describes the scientific and structural updates to the latest release of the Community Multiscale Air Quality (CMAQ) modeling system version 4.7 (v4.7) and points the reader to additional resources for further details. The model updates were evaluated relative to obse...

  13. Connected Equipment Maturity Model Version 1.0

    Energy Technology Data Exchange (ETDEWEB)

    Butzbaugh, Joshua B. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Mayhorn, Ebony T. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Sullivan, Greg [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Whalen, Scott A. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States)

    2017-05-01

    The Connected Equipment Maturity Model (CEMM) evaluates the high-level functionality and characteristics that enable equipment to provide the four categories of energy-related services through communication with other entities (e.g., equipment, third parties, utilities, and users). The CEMM will help the U.S. Department of Energy, industry, energy efficiency organizations, and research institutions benchmark the current state of connected equipment and identify capabilities that may be attained to reach a more advanced, future state.

  14. Development of polygonal-surface version of ICRP reference phantoms: Lymphatic node modeling

    Energy Technology Data Exchange (ETDEWEB)

    Thang, Ngyen Tat; Yeom, Yeon Soo; Han, Min Cheol; Kim, Chan Hyeong [Hanyang University, Seoul (Korea, Republic of)

    2014-04-15

    Among the radiosensitive organs/tissues considered in ICRP Publication 103, lymphatic nodes are numerous small tissues distributed widely throughout the ICRP reference phantoms. It is difficult to directly convert the lymphatic nodes of the ICRP reference voxel phantoms to polygonal surfaces. Furthermore, in the ICRP reference phantoms lymphatic nodes were manually drawn in only six lymphatic node regions, and the reference number of lymphatic nodes reported in ICRP Publication 89 was not considered. To address the aforementioned limitations, the present study developed a new lymphatic node modeling method for the polygonal-surface version of the ICRP reference phantoms. Using the developed method, lymphatic nodes were modeled in the preliminary version of the ICRP male polygonal-surface phantom. Then, lymphatic node dose values were calculated and compared with those of the ICRP reference male voxel phantom to validate the developed modeling method. The present study developed the new lymphatic node modeling method and successfully modeled lymphatic nodes in the preliminary version of the ICRP male polygonal-surface phantom. From the results, it was demonstrated that the developed method can be used to model lymphatic nodes in polygonal-surface versions of the ICRP reference phantoms.

  15. Response Surface Modeling Tool Suite, Version 1.x

    Energy Technology Data Exchange (ETDEWEB)

    2016-07-05

    The Response Surface Modeling (RSM) Tool Suite is a collection of three codes used to generate an empirical interpolation function for a collection of drag coefficient calculations computed with Test Particle Monte Carlo (TPMC) simulations. The first code, "Automated RSM", automates the generation of a drag coefficient RSM for a particular object to a single command. "Automated RSM" first creates a Latin Hypercube Sample (LHS) of 1,000 ensemble members to explore the global parameter space. For each ensemble member, a TPMC simulation is performed and the object drag coefficient is computed. In the next step of the "Automated RSM" code, a Gaussian process is used to fit the TPMC simulations. In the final step, Markov Chain Monte Carlo (MCMC) is used to evaluate the non-analytic probability distribution function from the Gaussian process. The second code, "RSM Area", creates a look-up table for the projected area of the object based on input limits on the minimum and maximum allowed pitch and yaw angles and pitch and yaw angle intervals. The projected area from the look-up table is used to compute the ballistic coefficient of the object based on its pitch and yaw angle. An accurate ballistic coefficient is crucial in accurately computing the drag on an object. The third code, "RSM Cd", uses the RSM generated by the "Automated RSM" code and the projected area look-up table generated by the "RSM Area" code to accurately compute the drag coefficient and ballistic coefficient of the object. The user can modify the object velocity, object surface temperature, the translational temperature of the gas, the species concentrations of the gas, and the pitch and yaw angles of the object. Together, these codes allow for the accurate derivation of an object's drag coefficient and ballistic coefficient under any conditions with only knowledge of the object's geometry and mass.
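
The LHS-then-GP pipeline of the "Automated RSM" step can be sketched compactly. The stand-in response function below is hypothetical (the real code evaluates TPMC drag coefficients), and the kernel length scale and noise level are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def latin_hypercube(n, dim):
    """n samples in [0,1]^dim with one point per stratum in each dimension."""
    u = (rng.random((n, dim)) + np.arange(n)[:, None]) / n
    for d in range(dim):
        u[:, d] = u[rng.permutation(n), d]   # decouple the strata across dims
    return u

def gp_fit_predict(X, y, Xs, length=0.2, noise=1e-6):
    """Zero-mean GP regression with a squared-exponential kernel."""
    def k(A, B):
        d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
        return np.exp(-0.5 * d2 / length**2)
    K = k(X, X) + noise * np.eye(len(X))     # jitter keeps the solve stable
    alpha = np.linalg.solve(K, y)
    return k(Xs, X) @ alpha

# Hypothetical stand-in for one TPMC drag-coefficient run
f = lambda X: 2.2 + 0.5 * np.sin(2 * np.pi * X[:, 0]) + 0.3 * X[:, 1] ** 2

X = latin_hypercube(50, 2)                   # 50 ensemble members, 2 parameters
y = f(X)
Xs = rng.random((5, 2))                      # unseen parameter combinations
pred = gp_fit_predict(X, y, Xs)
err = np.abs(pred - f(Xs)).max()
```

The actual tool additionally runs MCMC over the fitted GP to characterise the non-analytic distribution of the drag coefficient; that stage is omitted here.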

  16. The "Nordic" HBV model. Description and documentation of the model version developed for the project Climate Change and Energy Production

    Energy Technology Data Exchange (ETDEWEB)

    Saelthun, N.R.

    1996-12-31

    The model described in this report is a version of the HBV model developed for the project Climate Change and Energy Production. This was a Nordic project aimed at evaluating the impacts of climate change in the Scandinavian countries, including Greenland, with emphasis on hydropower production. The model incorporates many of the features found in individual versions of the HBV model in use in the Nordic countries, and some new ones. It has catchment subdivision in altitude intervals, a simple vegetation parametrization including interception, temperature-based evapotranspiration calculation, lake evaporation, lake routing, glacier mass balance simulation, special functions for climate change simulations, etc. The user interface is very basic, and the model is primarily intended for research and educational purposes. Commercial versions of the model should be used for operational implementations. 5 refs., 4 figs., 1 tab.
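
The temperature-index snow routine that this family of HBV versions builds on can be sketched as a daily degree-day step. The parameter names (tx, ts, cx) and all values below are illustrative, not the model's calibrated ones.

```python
def snow_step(swe, temp, precip, tx=0.0, ts=0.0, cx=3.5):
    """One daily step of an HBV-type degree-day snow routine.
    swe: snow water equivalent (mm), tx: rain/snow threshold (deg C),
    ts: melt threshold (deg C), cx: degree-day factor (mm / deg C / day)."""
    if temp < tx:
        swe += precip                       # precipitation falls as snow
        rain = 0.0
    else:
        rain = precip                       # precipitation falls as rain
    melt = max(0.0, cx * (temp - ts)) if swe > 0 else 0.0
    melt = min(melt, swe)                   # cannot melt more than is stored
    swe -= melt
    return swe, rain + melt                 # water passed to the soil routine

swe = 0.0
outflow = []
for temp, precip in [(-5, 10), (-2, 5), (1, 0), (4, 0), (6, 2)]:
    swe, q = snow_step(swe, temp, precip)
    outflow.append(q)
# cold, snowy days accumulate the pack; warm days release it as melt
```

In the full model this routine would run separately for each altitude interval of the catchment, with temperature and precipitation lapsed to each zone's elevation.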

  17. COMODI: An ontology to characterise differences in versions of computational models in biology

    OpenAIRE

    Scharm, Martin; Waltemath, Dagmar; Mendes, Pedro; Wolkenhauer, Olaf

    2016-01-01

    Motivation: Open model repositories provide ready-to-reuse computational models of biological systems. Models within those repositories evolve over time, leading to many alternative and subsequent versions. Taken together, the underlying changes reflect a model’s provenance and thus can give valuable insights into the studied biology. Currently, however, changes cannot be semantically interpreted. To improve this situation, we developed an ontology of terms describing changes in computational...

  18. 78 FR 76791 - Availability of Version 4.0 of the Connect America Fund Phase II Cost Model; Adopting Current...

    Science.gov (United States)

    2013-12-19

    ... provide additional protection from harsh weather. This version modifies the prior methodology used for..., which provides more detail on the current model architecture, processing steps, and data sources...

  19. A new version of code Java for 3D simulation of the CCA model

    Science.gov (United States)

    Zhang, Kebo; Xiong, Hailing; Li, Chao

    2016-07-01

    In this paper we present a new version of the program for the CCA model. In order to benefit from the advantages of the latest technologies, we migrated the running environment from JDK1.6 to JDK1.7, and the old program was restructured into a new framework, which improves its extensibility.

  20. All-Ages Lead Model (Aalm) Version 1.05 (External Draft Report)

    Science.gov (United States)

    The All-Ages Lead Model (AALM) Version 1.05, is an external review draft software and guidance manual. EPA released this software and associated documentation for public review and comment beginning September 27, 2005, until October 27, 2005. The public comments will be accepte...

  1. Using the Global Forest Products Model (GFPM version 2016 with BPMPD)

    Science.gov (United States)

    Joseph Buongiorno; Shushuai Zhu

    2016-01-01

     The GFPM is an economic model of global production, consumption and trade of forest products. The original formulation and several applications are described in Buongiorno et al. (2003). However, subsequent versions, including the GFPM 2016 reflect significant changes and extensions. The GFPM 2016 software uses the...

  2. [Psychometric properties of the French version of the Effort-Reward Imbalance model].

    Science.gov (United States)

    Niedhammer, I; Siegrist, J; Landre, M F; Goldberg, M; Leclerc, A

    2000-10-01

    Two main models are currently used to evaluate psychosocial factors at work: the Job Strain model developed by Karasek and the Effort-Reward Imbalance model. A French version of the first model has been validated for the dimensions of psychological demands and decision latitude. As regards the second, which evaluates three dimensions (extrinsic effort, reward, and intrinsic effort), there are several versions in different languages, but until recently there was no validated French version. The objective of this study was to explore the psychometric properties of the French version of the Effort-Reward Imbalance model in terms of internal consistency, factorial validity, and discriminant validity. The present study was based on the GAZEL cohort and included the 10 174 subjects who were working at the French national electric and gas company (EDF-GDF) and answered the questionnaire in 1998. A French version of Effort-Reward Imbalance was included in this questionnaire. This version was obtained by a standard forward/backward translation procedure. Internal consistency was satisfactory for the three scales of extrinsic effort, reward, and intrinsic effort: Cronbach's alpha coefficients higher than 0.7 were observed. A one-factor solution was retained for the factor analysis of the extrinsic effort scale. A three-factor solution was retained for the factor analysis of reward, and these dimensions were interpreted. The factor analysis of intrinsic effort did not support the expected four-dimension structure. The analysis of discriminant validity displayed significant associations between measures of Effort-Reward Imbalance and the variables of sex, age, education level, and occupational grade. This study is the first supporting satisfactory psychometric properties of the French version of the Effort-Reward Imbalance model. However, the factorial validity of intrinsic effort could be questioned. Furthermore, as most previous studies were based on male samples
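
The internal-consistency criterion used above (Cronbach's alpha above 0.7) is straightforward to compute. A minimal sketch with made-up item scores:

```python
def cronbach_alpha(items):
    """items: one row per respondent, each a list of k item scores.
    alpha = k/(k-1) * (1 - sum(item variances) / variance of total score)."""
    k = len(items[0])
    def var(xs):                       # population variance
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / len(xs)
    item_vars = [var([resp[j] for resp in items]) for j in range(k)]
    total_var = var([sum(resp) for resp in items])
    return k / (k - 1) * (1 - sum(item_vars) / total_var)

# Hypothetical scores of 5 respondents on a 3-item scale
scores = [[3, 4, 3], [2, 2, 3], [4, 5, 5], [1, 2, 1], [3, 3, 4]]
alpha = cronbach_alpha(scores)        # above 0.7 counts as satisfactory here
```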

  3. User guide for MODPATH Version 7—A particle-tracking model for MODFLOW

    Science.gov (United States)

    Pollock, David W.

    2016-09-26

    MODPATH is a particle-tracking post-processing program designed to work with MODFLOW, the U.S. Geological Survey (USGS) finite-difference groundwater flow model. MODPATH version 7 is the fourth major release since its original publication. Previous versions were documented in USGS Open-File Reports 89–381 and 94–464 and in USGS Techniques and Methods 6–A41. MODPATH version 7 works with MODFLOW-2005 and MODFLOW–USG. Support for unstructured grids in MODFLOW–USG is limited to smoothed, rectangular-based quadtree and quadpatch grids. A software distribution package containing the computer program and supporting documentation, such as input instructions, output file descriptions, and example problems, is available from the USGS over the Internet (http://water.usgs.gov/ogw/modpath/).
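
MODPATH's tracking rests on Pollock's semi-analytical scheme: within each cell a velocity component varies linearly between the two opposing faces, which yields a closed-form travel time to the exit face. A one-dimensional sketch with illustrative values:

```python
import math

def pollock_exit(x, vx1, vx2, x1, x2):
    """Exit face and travel time for a particle at x in a cell [x1, x2],
    with face-normal velocities vx1 at x1 and vx2 at x2 interpolated
    linearly in between (the basis of Pollock's semi-analytical method)."""
    A = (vx2 - vx1) / (x2 - x1)          # intra-cell velocity gradient
    v = vx1 + A * (x - x1)               # velocity at the particle position
    if v > 0:
        exit_face, v_exit = x2, vx2
    elif v < 0:
        exit_face, v_exit = x1, vx1
    else:
        return None, float("inf")        # stagnant particle
    if v_exit * v <= 0:
        return None, float("inf")        # flow reverses before the face
    if abs(A) < 1e-12:                   # uniform velocity: linear motion
        return exit_face, (exit_face - x) / v
    return exit_face, math.log(v_exit / v) / A   # semi-analytical travel time

face, t = pollock_exit(x=0.25, vx1=1.0, vx2=2.0, x1=0.0, x2=1.0)
```

In three dimensions the same computation is done per axis and the smallest exit time selects the face through which the particle leaves the cell.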

  4. Anal fistula with foot extension—Treated by kshara sutra (medicated seton) therapy: A rare case report

    Science.gov (United States)

    Ramesh, P. Bhat

    2013-01-01

    INTRODUCTION An anal fistula is a track that connects the anal canal or rectum with one or more external openings. Distant communication from the rectum is rare. It is a challenging disease because of its recurrence, especially with high-level and distant communications. Ksharasutra (medicated seton) therapy is practiced in India with a high success rate (recurrence of 3.33%) in the management of complicated anal fistula. PRESENTATION OF CASE A 56-year-old man presented with recurrent boils of the left lower limb at different places from thigh to foot. He had undergone repeated incision and drainage at different hospitals. Examination revealed a sinus with discharge and multiple scars on the left lower limb from thigh to foot. Suspecting an anal fistula, MRI was advised, which revealed a long cutaneous fistula from the rectum to the left lower limb. The patient was treated with Ksharasutra therapy. Within 6 months of treatment the whole tract had healed completely. DISCUSSION Sushrutha (500 BC) was the first to explain the role of surgical excision and the use of kshara sutra in the management of anal fistula. Ksharasutra therapy shows the least recurrence. A fistula from the rectum to the foot is extremely rare. Surgical treatment of anal fistula requires hospitalization and regular post-operative care, and is associated with a significant risk of recurrence (0.7–26.5%) and a high risk of impaired continence (5–40%). CONCLUSION A rectal fistula communicating down to the foot is a very rare presentation in proctology practice. Kshara sutra treatment was useful in treating this condition, with minimal surgical intervention and no recurrence. PMID:23702360

  5. Site investigation SFR. Hydrogeological modelling of SFR. Model version 0.2

    Energy Technology Data Exchange (ETDEWEB)

    Oehman, Johan (Golder Associates AB (Sweden)); Follin, Sven (SF GeoLogic (Sweden))

    2010-01-15

    The Swedish Nuclear Fuel and Waste Management Company (SKB) has conducted site investigations for a planned extension of the existing final repository for short-lived radioactive waste (SFR). A hydrogeological model is developed in three model versions, which will be used for safety assessment and design analyses. This report presents a data analysis of the currently available hydrogeological data from the ongoing Site Investigation SFR (KFR27, KFR101, KFR102A, KFR102B, KFR103, KFR104, and KFR105). The purpose of this work is to develop a preliminary hydrogeological Discrete Fracture Network model (hydro-DFN) parameterisation that can be applied in regional-scale modelling. During this work, the Geologic model had not yet been updated for the new data set. Therefore, all analyses were made to the rock mass outside Possible Deformation Zones, according to Single Hole Interpretation. Owing to this circumstance, it was decided not to perform a complete hydro-DFN calibration at this stage. Instead focus was re-directed to preparatory test cases and conceptual questions with the aim to provide a sound strategy for developing the hydrogeological model SFR v. 1.0. The presented preliminary hydro-DFN consists of five fracture sets and three depth domains. A statistical/geometrical approach (connectivity analysis /Follin et al. 2005/) was performed to estimate the size (i.e. fracture radius) distribution of fractures that are interpreted as Open in geologic mapping of core data. Transmissivity relations were established based on an assumption of a correlation between the size and evaluated specific capacity of geologic features coupled to inflows measured by the Posiva Flow Log device (PFL-f data). The preliminary hydro-DFN was applied in flow simulations in order to test its performance and to explore the role of PFL-f data. Several insights were gained and a few model technical issues were raised. These are summarised in Table 5-1
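
The size-transmissivity assumption of the preliminary hydro-DFN can be sketched generically: fracture radii drawn from a power-law (Pareto) size distribution, with transmissivity tied to radius. The functional form T = a·r^b and every parameter value below are illustrative, not the SFR parameterisation.

```python
import random

def sample_fracture(r0=0.5, k=2.6, a=1e-9, b=2.0, rng=random):
    """Draw one fracture: radius from a Pareto size distribution with
    location r0 and shape k, and a size-correlated transmissivity T = a * r**b.
    All parameter values are illustrative, not calibrated to SFR data."""
    u = 1.0 - rng.random()              # u in (0, 1]
    r = r0 * u ** (-1.0 / k)            # inverse-CDF Pareto sample, r >= r0
    return r, a * r ** b

random.seed(3)
fractures = [sample_fracture() for _ in range(10000)]
radii = [r for r, _ in fractures]
# For a Pareto law, P(r > 2*r0) = 0.5**k, about 0.16 for k = 2.6
frac_large = sum(r > 1.0 for r in radii) / len(radii)
```

A connectivity analysis of the kind cited (/Follin et al. 2005/) would then retain only the fractures that form connected, flowing networks and calibrate k and the T-r relation against PFL-f inflow data.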

  6. a Version-Similarity Based Trust Degree Computation Model for Crowdsourcing Geographic Data

    Science.gov (United States)

    Zhou, Xiaoguang; Zhao, Yijiang

    2016-06-01

    Quality evaluation and control have become the main concern of VGI. In this paper, trust is used as a proxy for VGI quality, and a version-similarity based trust degree computation model for crowdsourcing geographic data is presented. This model is based on the assumptions that the quality of a VGI object is mainly determined by the professional skill and integrity of its contributors (together called reputation in this paper), and that a contributor's reputation is movable. The contributor's reputation is calculated using the degree of similarity among the multiple versions of the same entity state. The trust degree of a VGI object is determined by the trust degree of its previous version, the reputation of the last contributor, and the modification proportion. To verify the presented model, a prototype system for computing the trust degree of VGI objects was developed in Visual C# 2010. The historical data of Berlin from OpenStreetMap (OSM) were employed for experiments. The experimental results demonstrate that the quality of crowdsourcing geographic data is highly positively correlated with its trustworthiness. As the evaluation is based on version similarity, not on direct subjective evaluation among users, the result is objective. Furthermore, as the movability of contributors' reputation is used in the presented method, our method has higher assessment coverage than existing methods.
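
The model's three ingredients, version similarity, contributor reputation derived from it, and a trust update mixing previous trust with the editor's reputation, can be sketched as below. The Jaccard similarity and the linear update rule are illustrative stand-ins, not the paper's exact formulas.

```python
def version_similarity(a, b):
    """Jaccard similarity of two attribute sets describing the same entity
    state (a stand-in for the paper's similarity measure)."""
    a, b = set(a), set(b)
    return len(a & b) / len(a | b) if a | b else 1.0

def contributor_reputation(edit_pairs):
    """Mean similarity between a contributor's versions and the versions
    that followed them: content that survives suggests a reliable editor."""
    sims = [version_similarity(v, w) for v, w in edit_pairs]
    return sum(sims) / len(sims)

def trust_degree(prev_trust, reputation, mod_proportion):
    """Illustrative update rule: the unchanged part keeps its previous
    trust, the modified part is trusted according to the editor."""
    return (1 - mod_proportion) * prev_trust + mod_proportion * reputation

# Hypothetical OSM-style edits: (contributor's version, next version)
rep = contributor_reputation([
    ({"name", "highway", "oneway"}, {"name", "highway", "oneway", "lanes"}),
    ({"name", "surface"}, {"name", "surface"}),
])
t = trust_degree(prev_trust=0.6, reputation=rep, mod_proportion=0.25)
```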

  7. A Fast Version of LASG/IAP Climate System Model and Its 1000-year Control Integration

    Institute of Scientific and Technical Information of China (English)

    ZHOU Tianjun; WU Bo; WEN Xinyu; LI Lijuan; WANG Bin

    2008-01-01

    A fast version of the State Key Laboratory of Numerical Modeling for Atmospheric Sciences and Geophysical Fluid Dynamics (LASG)/Institute of Atmospheric Physics (IAP) climate system model is briefly documented. The fast coupled model employs a low-resolution version of the atmospheric component Grid Atmospheric Model of IAP/LASG (GAMIL), with the other parts of the model, namely the oceanic component LASG/IAP Climate Ocean Model (LICOM), land component Common Land Model (CLM), and sea ice component from the National Center for Atmospheric Research Community Climate System Model (NCAR CCSM2), the same as in the standard version of the LASG/IAP Flexible Global Ocean Atmosphere Land System model (FGOALS_g). The parameterizations of physical and dynamical processes of the atmospheric component in the fast version are identical to the standard version, although some parameter values are different. However, by virtue of the reduced horizontal resolution and increased time-step of the most time-consuming atmospheric component, it runs faster by a factor of 3 and can serve as a useful tool for long-term and large-ensemble integrations. A 1000-year control simulation of the present-day climate has been completed without flux adjustments. The final 600 years of this simulation show virtually no trend in global mean sea surface temperature and are recommended for internal variability studies. Several aspects of the control simulation's mean climate and variability are evaluated against observational or reanalysis data. The strengths and weaknesses of the control simulation are evaluated. The mean atmospheric circulation is well simulated, except in high latitudes. The Asian-Australian monsoonal meridional cell shows realistic features; however, an artificial rainfall center located on the eastern periphery of the Tibetan Plateau persists throughout the year. The mean bias of SST resembles that of the standard version, appearing as a "double ITCZ" (Inter

  8. The Lagrangian particle dispersion model FLEXPART-WRF VERSION 3.1

    Energy Technology Data Exchange (ETDEWEB)

    Brioude, J.; Arnold, D.; Stohl, A.; Cassiani, M.; Morton, Don; Seibert, P.; Angevine, W. M.; Evan, S.; Dingwell, A.; Fast, Jerome D.; Easter, Richard C.; Pisso, I.; Bukhart, J.; Wotawa, G.

    2013-11-01

    The Lagrangian particle dispersion model FLEXPART was originally designed for calculating long-range and mesoscale dispersion of air pollutants from point sources, such as after an accident in a nuclear power plant. In the meantime FLEXPART has evolved into a comprehensive tool for atmospheric transport modeling and analysis at different scales. This multiscale need from the modeler community has encouraged new developments in FLEXPART. In this document, we present a version that works with the Weather Research and Forecasting (WRF) mesoscale meteorological model. Simple procedures on how to run FLEXPART-WRF are presented along with special options and features that differ from its predecessor versions. In addition, test case data, the source code and visualization tools are provided to the reader as supplementary material.
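
The core of any Lagrangian particle dispersion model is mean advection plus a stochastic turbulent displacement each time step. A deliberately simplified one-dimensional sketch follows; FLEXPART's actual turbulence parameterisation is far more elaborate, and all values here are illustrative.

```python
import random

def disperse(n_particles=2000, steps=100, dt=60.0, u=5.0, sigma=1.2, seed=42):
    """Advect particles with a mean wind u (m/s) and add a Gaussian
    random-walk displacement, scaled by sqrt(dt), for turbulent diffusion."""
    rng = random.Random(seed)
    xs = [0.0] * n_particles                # all released at a point source
    for _ in range(steps):
        for i in range(n_particles):
            xs[i] += u * dt + rng.gauss(0.0, sigma) * dt ** 0.5
    return xs

xs = disperse()
mean_x = sum(xs) / len(xs)                  # plume centre near u * dt * steps
```

Concentrations are then obtained by counting particles (weighted by released mass) in output grid cells, which is also how the full model produces its gridded fields.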

  9. Thermal site descriptive model. A strategy for the model development during site investigations - version 2

    Energy Technology Data Exchange (ETDEWEB)

    Back, Paer-Erik; Sundberg, Jan [Geo Innova AB (Sweden)

    2007-09-15

    This report presents a strategy for describing, predicting and visualising the thermal aspects of the site descriptive model. The strategy is an updated version of an earlier strategy applied in all SDM versions during the initial site investigation phase at the Forsmark and Oskarshamn areas. The previous methodology for thermal modelling did not take the spatial correlation fully into account during simulation. The result was that the variability of thermal conductivity in the rock mass was not sufficiently well described. Experience from earlier thermal SDMs indicated that development of the methodology was required in order to describe the spatial distribution of thermal conductivity in the rock mass in a sufficiently reliable way, taking both variability within rock types and between rock types into account. A good description of the thermal conductivity distribution is especially important for the lower tail. This tail is important for the design of a repository because it affects the canister spacing. The presented approach is developed to be used for the final SDM regarding thermal properties, primarily thermal conductivity. Specific objectives for the strategy of thermal stochastic modelling are: Description: statistical description of the thermal conductivity of a rock domain. Prediction: prediction of thermal conductivity in a specific rock volume. Visualisation: visualisation of the spatial distribution of thermal conductivity. The thermal site descriptive model should include the temperature distribution and thermal properties of the rock mass. The temperature is the result of the thermal processes in the repository area. Determination of thermal transport properties can be made using different methods, such as laboratory investigations, field measurements, modelling from mineralogical composition and distribution, modelling from density logging and modelling from temperature logging. The different types of data represent different scales, which has to be

  10. The Hamburg Oceanic Carbon Cycle Circulation Model. Version 1. Version 'HAMOCC2s' for long time integrations

    Energy Technology Data Exchange (ETDEWEB)

    Heinze, C.; Maier-Reimer, E. [Max-Planck-Institut fuer Meteorologie, Hamburg (Germany)

    1999-11-01

    The Hamburg Ocean Carbon Cycle Circulation Model (HAMOCC, configuration HAMOCC2s) predicts the atmospheric carbon dioxide partial pressure (as induced by oceanic processes), production rates of biogenic particulate matter, and geochemical tracer distributions in the water column as well as the bioturbated sediment. Besides the carbon cycle this model version includes also the marine silicon cycle (silicic acid in the water column and the sediment pore waters, biological opal production, opal flux through the water column and opal sediment pore water interaction). The model is based on the grid and geometry of the LSG ocean general circulation model (see the corresponding manual, LSG=Large Scale Geostrophic) and uses a velocity field provided by the LSG-model in 'frozen' state. In contrast to the earlier version of the model (see Report No. 5), the present version includes a multi-layer sediment model of the bioturbated sediment zone, allowing for variable tracer inventories within the complete model system. (orig.)

  11. A one-dimensional material transfer model for HECTR version 1. 5

    Energy Technology Data Exchange (ETDEWEB)

    Geller, A.S.; Wong, C.C.

    1991-08-01

    HECTR (Hydrogen Event Containment Transient Response) is a lumped-parameter computer code developed for calculating the pressure-temperature response to combustion in a nuclear power plant containment building. The code uses a control-volume approach and subscale models to simulate the mass, momentum, and energy transfer occurring in the containment during a loss-of-coolant accident (LOCA). This document describes one-dimensional subscale models for mass and momentum transfer, and the modifications to the code required to implement them. Two problems were analyzed: the first corresponding to a standard problem studied with previous HECTR versions, the second to experiments. The performance of the revised code relative to previous HECTR versions is discussed, as is the ability of the code to model the experiments. 8 refs., 5 figs., 3 tabs.

  12. Description and evaluation of the Model for Ozone and Related chemical Tracers, version 4 (MOZART-4)

    Directory of Open Access Journals (Sweden)

    L. K. Emmons

    2010-01-01

    Full Text Available The Model for Ozone and Related chemical Tracers, version 4 (MOZART-4) is an offline global chemical transport model particularly suited for studies of the troposphere. The updates of the model from its previous version, MOZART-2, are described, including an expansion of the chemical mechanism to include more detailed hydrocarbon chemistry and bulk aerosols. Online calculations of a number of processes, such as dry deposition, emissions of isoprene and monoterpenes, and photolysis frequencies, are now included. Results from an eight-year simulation (2000–2007) are presented and evaluated. The MOZART-4 source code and standard input files are available for download from the NCAR Community Data Portal (http://cdp.ucar.edu).

  13. The global chemistry transport model TM5: description and evaluation of the tropospheric chemistry version 3.0

    NARCIS (Netherlands)

    Huijnen, V.; Williams, J.; van Weele, M.; van Noije, T.; Krol, M.; Dentener, F.; Segers, A.; Houweling, S.; Peters, W.; de Laat, J.; Boersma, F.; Bergamaschi, P.; van Velthoven, P.; Le Sager, P.; Eskes, H.; Alkemade, F.; Scheele, R.; Nédélec, P.; Pätz, H.-W.

    2010-01-01

    We present a comprehensive description and benchmark evaluation of the tropospheric chemistry version of the global chemistry transport model TM5 (Tracer Model 5, version TM5-chem-v3.0). A full description is given concerning the photochemical mechanism, the interaction with aerosol, the treatment o

  14. Richard Francis Burton and the introduction of the Kama Sutra as a sexual manual among the Victorians (England, 1883)

    OpenAIRE

    Felipe Salvador Weissheimer

    2014-01-01

    Among the various "Kama Sutras" circulating on the market, the classic version was written by Vatsyayana (approximately 1st–4th century) and published in England in 1883 by the Hindu Kama Shastra Society. Richard Francis Burton was the most important member of the Hindu Kama Shastra Society: besides promoting its publication, he assisted with the translation, edited the work, and supplied numerous comments throughout it. In his comments, we see that the project of translating and publishing the Kama Sutra aimed to...

  15. New versions of the BDS/GNSS zenith tropospheric delay model IGGtrop

    Science.gov (United States)

    Li, Wei; Yuan, Yunbin; Ou, Jikun; Chai, Yanju; Li, Zishen; Liou, Yuei-An; Wang, Ningbo

    2015-01-01

    The initial IGGtrop model proposed for the Chinese BDS (BeiDou System) is not well suited to BDS/GNSS research and application because of its large data volume, although it shows a global mean accuracy of 4 cm. New versions of the global zenith tropospheric delay (ZTD) model IGGtrop are developed through further investigation of the spatial and temporal characteristics of global ZTD. From global GNSS ZTD observations and weather reanalysis data, new ZTD characteristics are found and discussed in this study, including small and inconsistent seasonal variation in ZTD between and stable seasonal variation outside; weak zonal variation in ZTD at higher latitudes (north of and south of ) and at heights above 6 km, etc. Based on these analyses, new versions of IGGtrop, named , are established by employing corresponding strategies: using a simple algorithm for equatorial ZTD; generating an adaptive spatial grid with lower resolutions in regions where ZTD varies little; and creating a method for optimized storage of model parameters. The new models thus require far fewer parameters than the IGGtrop model, only about 3.1-21.2 % as many. The three new versions are validated against five years of GNSS-derived ZTDs at 125 IGS sites, which shows that: demonstrates the highest ZTD correction performance, similar to IGGtrop; requires the fewest model parameters; and is moderate in both zenith delay prediction performance and number of model parameters. For the model, the biases at those IGS sites are between and 4.3 cm with a mean value of cm, and RMS errors are between 2.1 and 8.5 cm with a mean value of 4.0 cm. Different BDS and other GNSS users can choose a suitable model according to their application and research requirements.

  16. Digital elevation models for site investigation programme in Oskarshamn. Site description version 1.2

    Energy Technology Data Exchange (ETDEWEB)

    Brydsten, Lars; Stroemgren, Maarten [Umeaa Univ. (Sweden). Dept. of Biology and Environmental Science

    2005-06-01

    In the Oskarshamn area, a digital elevation model has been produced using elevation data from many sources on both land and sea. Since many users are interested only in elevation models over land, the model has been designed in three versions: Version 1 describes the land surface, lake water surfaces, and the sea bottom. Version 2 describes the land surface, sediment levels at lake bottoms, and the sea bottom. Version 3 describes the land surface, sediment levels at lake bottoms, and the sea surface. In cases where a source of data was not in point form (such as existing elevation models of land or depth lines from nautical charts), it was converted to point values using GIS software. Because data from some sources often overlap with data from other sources, several tests were conducted to determine whether both sources or only one should be included in the dataset used for the interpolation; the tests resulted in the decision to use only the source judged to be of highest quality for most areas with overlapping data. All data were combined into a database of approximately 3.3 million points unevenly spread over an area of about 800 km². The large number of data points made it difficult to construct the model with a single interpolation procedure, so the area was divided into 28 sub-models that were processed one by one and finally merged into a single model. The software ArcGIS 8.3 and its extension Geostatistical Analyst were used for the interpolation. The Ordinary Kriging method was used, which allows both a cross validation and a validation before the interpolation is conducted. Cross validations with different Kriging parameters were performed, and the model with the most reasonable statistics was chosen. Finally, a validation with the most appropriate Kriging parameters was performed in order to verify that the model fits unmeasured localities. Since both the

  17. A new tool for modeling dune field evolution based on an accessible, GUI version of the Werner dune model

    Science.gov (United States)

    Barchyn, Thomas E.; Hugenholtz, Chris H.

    2012-02-01

    Research into aeolian dune form and dynamics has benefited from simple and abstract cellular automata computer models. Many of these models are based upon a seminal framework proposed by Werner (1995). Unfortunately, most versions of this model are not publicly available or are not provided in a format that promotes widespread use. In our view, this hinders progress in linking model simulations to empirical data (and vice versa). To this end, we introduce an accessible, graphical user interface (GUI) version of the Werner model. The novelty of this contribution is that it provides a simple interface and detailed instructions that encourage widespread use and extension of the Werner dune model for research and training purposes. By lowering barriers for researchers to develop and test hypotheses about aeolian dune and dune field patterns, this release addresses recent calls to improve access to earth surface models.
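
    The core transport rule of the Werner model is compact enough to sketch directly. The slab depth, hop length, and deposition probabilities below are commonly cited illustrative values, and avalanching to the angle of repose is omitted for brevity:

```python
import numpy as np

def werner_step(h, rng, hop=1, p_sand=0.6, p_nosand=0.4):
    # pick a random cell; if it holds sand, erode one slab and carry it
    # downwind until it is deposited (probability depends on sand presence)
    ny, nx = h.shape
    y, x = int(rng.integers(ny)), int(rng.integers(nx))
    if h[y, x] == 0:
        return h
    h[y, x] -= 1
    while True:
        x = (x + hop) % nx                 # periodic downwind boundary
        p = p_sand if h[y, x] > 0 else p_nosand
        if rng.random() < p:
            h[y, x] += 1                   # deposit the slab
            return h
```

    Iterating this rule over a slab grid, together with an avalanching step, is what produces the self-organized dune patterns the GUI version makes accessible.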

  18. Thermal modelling. Preliminary site description. Forsmark area - version 1.2

    Energy Technology Data Exchange (ETDEWEB)

    Sundberg, Jan; Back, Paer-Erik; Bengtsson, Anna; Laendell, Maerta [Geo Innova AB, Linkoeping (Sweden)]

    2005-08-01

    This report presents the thermal site descriptive model for the Forsmark area, version 1.2. The main objective of this report is to present the thermal modelling work, in which data have been identified, quality controlled, evaluated and summarised in order to make upscaling to the lithological domain level possible. The thermal conductivity at canister scale has been modelled for two different lithological domains (RFM029 and RFM012, both dominated by granite to granodiorite (101057)). A main modelling approach has been used to determine the mean value of the thermal conductivity. Two alternative/complementary approaches have been used to evaluate the spatial variability of the thermal conductivity at domain level. The thermal modelling approaches are based on the lithological model for the Forsmark area, version 1.2, together with rock type models constructed from measured and calculated (from mineral composition) thermal conductivities. Results indicate that the mean thermal conductivity is expected to exhibit a small variation between the different domains, from 3.46 W/(m·K) for RFM012 to 3.55 W/(m·K) for RFM029. The spatial distribution of the thermal conductivity does not follow a simple model. Lower and upper 95% confidence limits are based on the modelling results, but have been rounded off to two significant figures. Consequently, the lower limit is 2.9 W/(m·K), while the upper is 3.8 W/(m·K). This is applicable to both the investigated domains. The temperature dependence is rather small, with a decrease in thermal conductivity of 10.0% per 100 deg C increase in temperature for the dominating rock type. There are a number of important uncertainties associated with these results. One concerns the representative scale for the canister. Another is the methodological uncertainty associated with the upscaling of thermal conductivity from cm scale to canister scale. In addition, the representativeness of rock samples is

  19. Incremental testing of the Community Multiscale Air Quality (CMAQ) modeling system version 4.7

    Directory of Open Access Journals (Sweden)

    K. M. Foley

    2010-03-01

    This paper describes the scientific and structural updates to the latest release of the Community Multiscale Air Quality (CMAQ) modeling system version 4.7 (v4.7) and points the reader to additional resources for further details. The model updates were evaluated relative to observations and results from previous model versions in a series of simulations conducted to incrementally assess the effect of each change. The focus of this paper is on five major scientific upgrades: (a) updates to the heterogeneous N2O5 parameterization, (b) improvement in the treatment of secondary organic aerosol (SOA), (c) inclusion of dynamic mass transfer for coarse-mode aerosol, (d) revisions to the cloud model, and (e) new options for the calculation of photolysis rates. Incremental test simulations over the eastern United States during January and August 2006 are evaluated to assess the model response to each scientific improvement, providing explanations of differences in results between v4.7 and previously released CMAQ model versions. Particulate sulfate predictions are improved across all monitoring networks during both seasons due to cloud module updates. Numerous updates to the SOA module improve the simulation of seasonal variability and decrease the bias in organic carbon predictions at urban sites in the winter. Bias in the total mass of fine particulate matter (PM2.5) is dominated by overpredictions of unspeciated PM2.5 (PMother) in the winter and by underpredictions of carbon in the summer. The CMAQ v4.7 model results show slightly worse performance for ozone predictions. However, changes to the meteorological inputs are found to have a much greater impact on ozone predictions compared to changes to the CMAQ modules described here. Model updates had little effect on existing biases in wet deposition predictions.

  1. Thermal site descriptive model. A strategy for the model development during site investigations - version 2

    Energy Technology Data Exchange (ETDEWEB)

    Back, Paer-Erik; Sundberg, Jan [Geo Innova AB (Sweden)]

    2007-09-15

    This report presents a strategy for describing, predicting and visualising the thermal aspects of the site descriptive model. The strategy is an updated version of an earlier strategy applied in all SDM versions during the initial site investigation phase at the Forsmark and Oskarshamn areas. The previous methodology for thermal modelling did not take the spatial correlation fully into account during simulation. As a result, the variability of thermal conductivity in the rock mass was not sufficiently well described. Experience from earlier thermal SDMs indicated that the methodology needed further development in order to describe the spatial distribution of thermal conductivity in the rock mass in a sufficiently reliable way, taking both variability within rock types and variability between rock types into account. A good description of the thermal conductivity distribution is especially important for the lower tail, which matters for repository design because it affects the canister spacing. The presented approach is developed to be used for the final SDM regarding thermal properties, primarily thermal conductivity. Specific objectives for the strategy of thermal stochastic modelling are: Description: statistical description of the thermal conductivity of a rock domain. Prediction: prediction of thermal conductivity in a specific rock volume. Visualisation: visualisation of the spatial distribution of thermal conductivity. The thermal site descriptive model should include the temperature distribution and thermal properties of the rock mass. The temperature is the result of the thermal processes in the repository area. Determination of thermal transport properties can be made using different methods, such as laboratory investigations, field measurements, modelling from mineralogical composition and distribution, modelling from density logging, and modelling from temperature logging. The different types of data represent different scales, which has to be
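
    The stochastic description of domain-scale thermal conductivity, and in particular its design-relevant lower tail, can be illustrated with a short Monte Carlo sketch. The rock-type proportions and lognormal parameters below are invented for illustration, not values from the SDM:

```python
import numpy as np

def simulate_domain_conductivity(rng, n=200_000):
    # mix two hypothetical rock types, each with lognormal variability
    is_a = rng.random(n) < 0.8   # 80% rock type A, 20% rock type B
    return np.where(is_a,
                    rng.lognormal(np.log(3.5), 0.05, n),
                    rng.lognormal(np.log(2.9), 0.10, n))

rng = np.random.default_rng(42)
k = simulate_domain_conductivity(rng)
lower, upper = np.percentile(k, [2.5, 97.5])  # design-relevant tails
```

    Variability both within and between rock types enters the simulated distribution, and the lower percentile is the quantity that ultimately constrains canister spacing.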

  2. User's guide to Model Viewer, a program for three-dimensional visualization of ground-water model results

    Science.gov (United States)

    Hsieh, Paul A.; Winston, Richard B.

    2002-01-01

    Model Viewer is a computer program that displays the results of three-dimensional groundwater models. Scalar data (such as hydraulic head or solute concentration) may be displayed as a solid or a set of isosurfaces, using a red-to-blue color spectrum to represent a range of scalar values. Vector data (such as velocity or specific discharge) are represented by lines oriented to the vector direction and scaled to the vector magnitude. Model Viewer can also display pathlines, cells or nodes that represent model features such as streams and wells, and auxiliary graphic objects such as grid lines and coordinate axes. Users may crop the model grid in different orientations to examine the interior structure of the data. For transient simulations, Model Viewer can animate the time evolution of the simulated quantities. The current version (1.0) of Model Viewer runs on Microsoft Windows 95, 98, NT and 2000 operating systems, and supports the following models: MODFLOW-2000, MODFLOW-2000 with the Ground-Water Transport Process, MODFLOW-96, MOC3D (Version 3.5), MODPATH, MT3DMS, and SUTRA (Version 2D3D.1). Model Viewer is designed to directly read input and output files from these models, thus minimizing the need for additional postprocessing. This report provides an overview of Model Viewer. Complete instructions on how to use the software are provided in the on-line help pages.
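
    Mapping a scalar such as head or concentration onto a red-to-blue spectrum amounts to a linear interpolation in color space; this minimal sketch (an illustration, not Model Viewer's actual code) maps the low end of the range to red and the high end to blue:

```python
def red_blue_color(v, vmin, vmax):
    # clamp the scalar into [vmin, vmax] and interpolate red -> blue
    t = min(max((v - vmin) / (vmax - vmin), 0.0), 1.0)
    return (1.0 - t, 0.0, t)   # (r, g, b) components in [0, 1]
```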

  3. The Lagrangian particle dispersion model FLEXPART-WRF version 3.1

    Science.gov (United States)

    Brioude, J.; Arnold, D.; Stohl, A.; Cassiani, M.; Morton, D.; Seibert, P.; Angevine, W.; Evan, S.; Dingwell, A.; Fast, J. D.; Easter, R. C.; Pisso, I.; Burkhart, J.; Wotawa, G.

    2013-11-01

    The Lagrangian particle dispersion model FLEXPART was originally designed for calculating long-range and mesoscale dispersion of air pollutants from point sources, such as that occurring after an accident in a nuclear power plant. In the meantime, FLEXPART has evolved into a comprehensive tool for atmospheric transport modeling and analysis at different scales. A need for further multiscale modeling and analysis has encouraged new developments in FLEXPART. In this paper, we present a FLEXPART version that works with the Weather Research and Forecasting (WRF) mesoscale meteorological model. We explain how to run this new model and present special options and features that differ from those of the preceding versions. For instance, a novel turbulence scheme for the convective boundary layer has been included that considers both the skewness of turbulence in the vertical velocity as well as the vertical gradient in the air density. To our knowledge, FLEXPART is the first model for which such a scheme has been developed. On a more technical level, FLEXPART-WRF now offers effective parallelization, and details on computational performance are presented here. FLEXPART-WRF output can either be in binary or Network Common Data Form (NetCDF) format, both of which have efficient data compression. In addition, test case data and the source code are provided to the reader as a Supplement. This material and future developments will be accessible at http://www.flexpart.eu.
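
    The basic idea of a Lagrangian particle dispersion model can be sketched with a Langevin random walk for particle vertical velocity. This sketch assumes homogeneous Gaussian turbulence with made-up parameters; FLEXPART-WRF's skewed convective-boundary-layer scheme is considerably richer:

```python
import numpy as np

def langevin_step(z, w, dt, rng, tau=100.0, sigma_w=0.5):
    # Ornstein-Uhlenbeck update of particle vertical velocity, then advection;
    # particles are reflected at the ground (z = 0)
    a = np.exp(-dt / tau)
    w = a * w + np.sqrt(1.0 - a * a) * sigma_w * rng.standard_normal(w.shape)
    z = z + w * dt
    hit = z < 0.0
    z[hit] *= -1.0   # reflect position
    w[hit] *= -1.0   # reflect velocity
    return z, w
```

    Repeating this step for an ensemble of particles, with turbulence statistics taken from the driving meteorological fields, yields the dispersing plume.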

  4. A computationally efficient description of heterogeneous freezing: A simplified version of the Soccer ball model

    Science.gov (United States)

    Niedermeier, Dennis; Ervens, Barbara; Clauss, Tina; Voigtländer, Jens; Wex, Heike; Hartmann, Susan; Stratmann, Frank

    2014-01-01

    In a recent study, the Soccer ball model (SBM) was introduced for modeling and/or parameterizing heterogeneous ice nucleation processes. The model applies classical nucleation theory. It allows for a consistent description of both apparently singular and stochastic ice nucleation behavior, by distributing contact angles over the nucleation sites of a particle population assuming a Gaussian probability density function. The original SBM utilizes the Monte Carlo technique, which hampers its usage in atmospheric models, as fairly time-consuming calculations must be performed to obtain statistically significant results. Thus, we have developed a simplified and computationally more efficient version of the SBM. We successfully used the new SBM to parameterize experimental nucleation data of, e.g., bacterial ice nucleation. Both SBMs give identical results; however, the new model is computationally less expensive as confirmed by cloud parcel simulations. Therefore, it is a suitable tool for describing heterogeneous ice nucleation processes in atmospheric models.
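
    The kind of simplification described, replacing Monte Carlo sampling of contact angles with a deterministic average over the Gaussian distribution, can be sketched with Gaussian quadrature. The nucleation-rate function and all parameters here are placeholders, not the SBM's actual parameterization:

```python
import numpy as np

def frozen_fraction(theta_mu, theta_sigma, rate_fn, t, n_sites, nq=64):
    # average per-site survival exp(-J(theta) * t) over a Gaussian contact-angle
    # distribution using Gauss-Hermite quadrature instead of Monte Carlo draws
    x, wgt = np.polynomial.hermite_e.hermegauss(nq)   # probabilists' Hermite
    theta = theta_mu + theta_sigma * x
    surv_site = (wgt / wgt.sum()) @ np.exp(-rate_fn(theta) * t)
    # a particle freezes as soon as any one of its n_sites sites nucleates
    return 1.0 - surv_site ** n_sites
```

    Because the quadrature is deterministic, no ensemble of particle realizations is needed to obtain a statistically stable frozen fraction.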

  5. Community Land Model Version 3.0 (CLM3.0) Developer's Guide

    Energy Technology Data Exchange (ETDEWEB)

    Hoffman, FM

    2004-12-21

    This document describes the guidelines adopted for software development of the Community Land Model (CLM) and serves as a reference to the entire code base of the released version of the model. The version of the code described here is Version 3.0 which was released in the summer of 2004. This document, the Community Land Model Version 3.0 (CLM3.0) User's Guide (Vertenstein et al., 2004), the Technical Description of the Community Land Model (CLM) (Oleson et al., 2004), and the Community Land Model's Dynamic Global Vegetation Model (CLM-DGVM): Technical Description and User's Guide (Levis et al., 2004) provide the developer, user, or researcher with details of implementation, instructions for using the model, a scientific description of the model, and a scientific description of the Dynamic Global Vegetation Model integrated with CLM respectively. The CLM is a single column (snow-soil-vegetation) biogeophysical model of the land surface which can be run serially (on a laptop or personal computer) or in parallel (using distributed or shared memory processors or both) on both vector and scalar computer architectures. Written in Fortran 90, CLM can be run offline (i.e., run in isolation using stored atmospheric forcing data), coupled to an atmospheric model (e.g., the Community Atmosphere Model (CAM)), or coupled to a climate system model (e.g., the Community Climate System Model Version 3 (CCSM3)) through a flux coupler (e.g., Coupler 6 (CPL6)). When coupled, CLM exchanges fluxes of energy, water, and momentum with the atmosphere. The horizontal land surface heterogeneity is represented by a nested subgrid hierarchy composed of gridcells, landunits, columns, and plant functional types (PFTs). This hierarchical representation is reflected in the data structures used by the model code. Biophysical processes are simulated for each subgrid unit (landunit, column, and PFT) independently, and prognostic variables are maintained for each subgrid unit
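
    The nested subgrid hierarchy can be sketched as nested data structures with area weights at each level; the class and field names below are illustrative, not CLM's actual Fortran derived types:

```python
from dataclasses import dataclass, field
from typing import Callable, List

@dataclass
class PFT:                       # plant functional type
    name: str
    weight: float                # fraction of its parent column

@dataclass
class Column:
    weight: float                # fraction of its parent landunit
    pfts: List[PFT] = field(default_factory=list)

@dataclass
class Landunit:
    kind: str                    # e.g. "vegetated", "glacier", "urban"
    weight: float                # fraction of its parent gridcell
    columns: List[Column] = field(default_factory=list)

@dataclass
class Gridcell:
    landunits: List[Landunit] = field(default_factory=list)

def gridcell_mean(g: Gridcell, value: Callable[[PFT], float]) -> float:
    # aggregate a per-PFT quantity up the hierarchy by nested area weights
    return sum(lu.weight * c.weight * p.weight * value(p)
               for lu in g.landunits for c in lu.columns for p in c.pfts)
```

    Biophysical processes run independently on each leaf of this tree, and weighted sums like the one above carry the results back to the gridcell seen by the atmosphere.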

  6. Statistical model of fractures and deformation zones. Preliminary site description, Laxemar subarea, version 1.2

    Energy Technology Data Exchange (ETDEWEB)

    Hermanson, Jan; Forssberg, Ola [Golder Associates AB, Stockholm (Sweden); Fox, Aaron; La Pointe, Paul [Golder Associates Inc., Redmond, WA (United States)

    2005-10-15

    The goal of this summary report is to document the data sources, software tools, experimental methods, assumptions, and model parameters in the discrete-fracture network (DFN) model for the local model volume in Laxemar, version 1.2. The model parameters presented herein are intended for use by other project modeling teams. Individual modeling teams may elect to simplify or use only a portion of the DFN model, depending on their needs. This model is not intended to be a flow model or a mechanical model; as such, only the geometrical characterization is presented. The derivations of the hydraulic or mechanical properties of the fractures or their subsurface connectivities are not within the scope of this report. This model represents analyses carried out on particular data sets. If additional data are obtained, or values for existing data are changed or excluded, the conclusions reached in this report, and the parameter values calculated, may change as well. The model volume is divided into two subareas; one located on the Simpevarp peninsula adjacent to the power plant (Simpevarp), and one further to the west (Laxemar). The DFN parameters described in this report were determined by analysis of data collected within the local model volume. As such, the final DFN model is only valid within this local model volume and the modeling subareas (Laxemar and Simpevarp) within.

  7. Aerosol specification in single-column Community Atmosphere Model version 5

    Science.gov (United States)

    Lebassi-Habtezion, B.; Caldwell, P. M.

    2015-03-01

    Single-column model (SCM) capability is an important tool for general circulation model development. In this study, the SCM mode of version 5 of the Community Atmosphere Model (CAM5) is shown to handle aerosol initialization and advection improperly, resulting in aerosol, cloud-droplet, and ice crystal concentrations which are typically much lower than observed or simulated by CAM5 in global mode. This deficiency has a major impact on stratiform cloud simulations but has little impact on convective case studies because aerosol is currently not used by CAM5 convective schemes and convective cases are typically longer in duration (so initialization is less important). By imposing fixed aerosol or cloud-droplet and crystal number concentrations, the aerosol issues described above can be avoided. Sensitivity studies using these idealizations suggest that the Meyers et al. (1992) ice nucleation scheme prevents mixed-phase cloud from existing by producing too many ice crystals. Microphysics is shown to strongly deplete cloud water in stratiform cases, indicating problems with sequential splitting in CAM5 and the need for careful interpretation of output from sequentially split climate models. Droplet concentration in the general circulation model (GCM) version of CAM5 is also shown to be far too low (~ 25 cm-3) at the southern Great Plains (SGP) Atmospheric Radiation Measurement (ARM) site.

  8. MESOI Version 2.0: an interactive mesoscale Lagrangian puff dispersion model with deposition and decay

    Energy Technology Data Exchange (ETDEWEB)

    Ramsdell, J.V.; Athey, G.F.; Glantz, C.S.

    1983-11-01

    MESOI Version 2.0 is an interactive Lagrangian puff model for estimating the transport, diffusion, deposition and decay of effluents released to the atmosphere. The model is capable of treating simultaneous releases from as many as four release points, which may be elevated or at ground-level. The puffs are advected by a horizontal wind field that is defined in three dimensions. The wind field may be adjusted for expected topographic effects. The concentration distribution within the puffs is initially assumed to be Gaussian in the horizontal and vertical. However, the vertical concentration distribution is modified by assuming reflection at the ground and the top of the atmospheric mixing layer. Material is deposited on the surface using a source depletion, dry deposition model and a washout coefficient model. The model also treats the decay of a primary effluent species and the ingrowth and decay of a single daughter species using a first order decay process. This report is divided into two parts. The first part discusses the theoretical and mathematical bases upon which MESOI Version 2.0 is based. The second part contains the MESOI computer code. The programs were written in the ANSI standard FORTRAN 77 and were developed on a VAX 11/780 computer. 43 references, 14 figures, 13 tables.
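
    The modified vertical distribution can be written down directly: a Gaussian puff whose vertical term is summed over image sources reflecting at the ground and at the mixing-layer top. Truncating to a few image terms, as below, is an illustrative simplification:

```python
import numpy as np

def puff_concentration(x, y, z, q, cx, cy, cz, sy, sz, h_mix, n_images=2):
    # horizontally Gaussian puff of mass q centred at (cx, cy, cz)
    horiz = np.exp(-((x - cx) ** 2 + (y - cy) ** 2) / (2.0 * sy ** 2)) \
            / (2.0 * np.pi * sy ** 2)
    # vertical Gaussian plus image sources at z = 0 and z = h_mix
    vert = 0.0
    for n in range(-n_images, n_images + 1):
        for zc in (cz + 2.0 * n * h_mix, -cz + 2.0 * n * h_mix):
            vert += np.exp(-(z - zc) ** 2 / (2.0 * sz ** 2))
    vert /= np.sqrt(2.0 * np.pi) * sz
    return q * horiz * vert
```

    Advecting the puff centre with the wind field while growing sy and sz with travel time recovers the basic MESOI transport-and-diffusion picture.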

  9. A p-version embedded model for simulation of concrete temperature fields with cooling pipes

    Directory of Open Access Journals (Sweden)

    Sheng Qiang

    2015-07-01

    Pipe cooling is an effective method of mass concrete temperature control, but its accurate and convenient numerical simulation remains a cumbersome problem. An improved embedded model, considering the water temperature variation along the pipe, was proposed for simulating the temperature field of early-age concrete structures containing cooling pipes. The improved model was verified with an engineering example. Then, the p-version self-adaption algorithm for the improved embedded model was derived, and the initial values and boundary conditions were examined. Comparison of several numerical samples shows that the proposed model provides satisfactory precision and higher efficiency: at the same precision, the analysis efficiency can be doubled, even for a large-scale element. The p-version algorithm can fit grids of different sizes for the temperature field simulation. A further convenience of the proposed algorithm is that more pipe segments can be located in one element, without requiring as regular an element shape as in the explicit model.

  10. The Lagrangian particle dispersion model FLEXPART-WRF version 3.0

    Directory of Open Access Journals (Sweden)

    J. Brioude

    2013-07-01

    The Lagrangian particle dispersion model FLEXPART was originally designed for calculating long-range and mesoscale dispersion of air pollutants from point sources, such as after an accident in a nuclear power plant. In the meantime FLEXPART has evolved into a comprehensive tool for atmospheric transport modeling and analysis at different scales. This multiscale need has encouraged new developments in FLEXPART. In this document, we present a FLEXPART version that works with the Weather Research and Forecasting (WRF) mesoscale meteorological model. We explain how to run this version and present special options and features that differ from its predecessor versions. For instance, a novel turbulence scheme for the convective boundary layer has been included that considers both the skewness of turbulence in the vertical velocity as well as the vertical gradient in the air density. To our knowledge, FLEXPART is the first model for which such a scheme has been developed. On a more technical level, FLEXPART-WRF now offers effective parallelization, and details on computational performance are presented here. FLEXPART-WRF output can be in either binary or Network Common Data Form (NetCDF) format with efficient data compression. In addition, test case data and the source code are provided to the reader as a Supplement. This material and future developments will be accessible at http://www.flexpart.eu.

  12. Technical note: The Lagrangian particle dispersion model FLEXPART version 6.2

    Directory of Open Access Journals (Sweden)

    A. Stohl

    2005-01-01

    The Lagrangian particle dispersion model FLEXPART was originally designed (about 8 years ago) for calculating the long-range and mesoscale dispersion of air pollutants from point sources, such as after an accident in a nuclear power plant. In the meantime FLEXPART has evolved into a comprehensive tool for atmospheric transport modeling and analysis. Its application fields have extended from air pollution studies to other topics where atmospheric transport plays a role (e.g., exchange between the stratosphere and troposphere, or the global water cycle). It has evolved into a true community model that is now being used by at least 25 groups from 14 different countries and is seeing both operational and research applications. A user manual has been kept up to date over the years and is distributed, along with the model's source code, via a web page. In this note we provide a citable technical description of FLEXPART's latest version (6.2).

  13. Muninn: A versioning flash key-value store using an object-based storage model

    OpenAIRE

    Kang, Y.; Pitchumani, R.; Marlette, T.; Miller, E. L.

    2014-01-01

    While non-volatile memory (NVRAM) devices have the potential to alleviate the trade-off between performance, scalability, and energy in storage and memory subsystems, a block interface and storage subsystems designed for slow I/O devices make it difficult to efficiently exploit NVRAMs in a portable and extensible way. We propose an object-based storage model as a way of addressing the shortfalls of the current interfaces. Through the design of Muninn, an object-based versioning key-value st...
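
    The idea of a versioning key-value store can be sketched in a few lines; this toy in-memory version (illustrative only, not Muninn's NVRAM-aware, object-based design) keeps an append-only history per key:

```python
from collections import defaultdict

class VersionedKV:
    """Toy versioning key-value store: append-only history per key."""

    def __init__(self):
        self._log = defaultdict(list)   # key -> [(version, value), ...]
        self._version = 0               # global monotonically increasing version

    def put(self, key, value):
        self._version += 1
        self._log[key].append((self._version, value))
        return self._version

    def get(self, key, version=None):
        history = self._log.get(key, [])
        if not history:
            raise KeyError(key)
        if version is None:
            return history[-1][1]        # latest value
        for v, val in reversed(history):
            if v <= version:
                return val               # newest value at or before `version`
        raise KeyError(key)
```

    Because writes only ever append, old versions remain readable, which maps naturally onto the no-overwrite write behavior of flash and other NVRAMs.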

  14. QMM – A Quarterly Macroeconomic Model of the Icelandic Economy. Version 2.0

    DEFF Research Database (Denmark)

    Ólafsson, Tjörvi

    This paper documents and describes Version 2.0 of the Quarterly Macroeconomic Model of the Central Bank of Iceland (QMM). QMM and the underlying quarterly database have been under construction since 2001 at the Research and Forecasting Division of the Economics Department at the Bank, and QMM was first implemented in the forecasting round for the Monetary Bulletin 2006/1 in March 2006. QMM is used by the Bank for forecasting and various policy simulations and therefore plays a key role as an organisational framework for viewing the medium-term future when formulating monetary policy at the Bank. This paper...

  15. User’s Manual for the Navy Coastal Ocean Model (NCOM) Version 4.0

    Science.gov (United States)

    2009-02-06

    User's Manual for the Navy Coastal Ocean Model (NCOM) Version 4.0. Paul J. Martin, Charlie N. Barron, Lucy F. Smedstad, Timothy J. Campbell, Alan J. Wallcraft, Robert C. Rhodes, Clark Rowley, Tamara L. Townsend, and Suzanne N. Carroll, Naval Research Laboratory.

  18. Update of the Polar SWIFT model for polar stratospheric ozone loss (Polar SWIFT version 2)

    Science.gov (United States)

    Wohltmann, Ingo; Lehmann, Ralph; Rex, Markus

    2017-07-01

    The Polar SWIFT model is a fast scheme for calculating the chemistry of stratospheric ozone depletion in polar winter. It is intended for use in global climate models (GCMs) and Earth system models (ESMs) to enable the simulation of mutual interactions between the ozone layer and climate. To date, climate models often use prescribed ozone fields, since a full stratospheric chemistry scheme is computationally very expensive. Polar SWIFT is based on a set of coupled differential equations, which simulate the polar vortex-averaged mixing ratios of the key species involved in polar ozone depletion on a given vertical level. These species are O3, chemically active chlorine (ClOx), HCl, ClONO2 and HNO3. The only external input parameters that drive the model are the fraction of the polar vortex in sunlight and the fraction of the polar vortex below the temperatures necessary for the formation of polar stratospheric clouds. Here, we present an update of the Polar SWIFT model introducing several improvements over the original model formulation. In particular, the model is now trained on vortex-averaged reaction rates of the ATLAS Chemistry and Transport Model, which enables a detailed look at individual processes and an independent validation of the different parameterizations contained in the differential equations. The training of the original Polar SWIFT model was based on fitting complete model runs to satellite observations and did not allow for this. A revised formulation of the system of differential equations is developed, which closely fits vortex-averaged reaction rates from ATLAS that represent the main chemical processes influencing ozone. In addition, a parameterization for the HNO3 change by denitrification is included. The rates of change of the concentrations of the chemical species of the Polar SWIFT model are purely chemical rates of change in the new version, whereas in the original Polar SWIFT model, they included a transport effect caused by the
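
    The structure of such a scheme, a small set of coupled vortex-averaged equations driven only by sunlight and PSC-temperature fractions, can be illustrated with a toy box model. All coefficients and initial values below are invented for illustration and bear no relation to Polar SWIFT's fitted parameterizations:

```python
def polar_ozone_sketch(days, f_sun, f_psc, k_act=0.05, k_loss=0.02, dt=1.0):
    # toy vortex-averaged box model in the spirit of Polar SWIFT (all
    # coefficients invented): chlorine is activated from HCl on PSCs,
    # and active chlorine (ClOx) destroys ozone in sunlight
    o3, clox, hcl = 3.0, 0.0, 3.0          # illustrative mixing ratios
    for d in range(days):
        act = min(k_act * f_psc(d) * hcl * dt, hcl)
        hcl -= act                          # heterogeneous activation on PSCs
        clox += act
        o3 -= k_loss * f_sun(d) * clox * o3 * dt   # sunlight-driven ozone loss
    return o3, clox, hcl
```

    The external forcings enter only through `f_sun` and `f_psc`, mirroring how Polar SWIFT is driven by the sunlit and PSC-cold fractions of the vortex.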

  19. Flood predictions using the parallel version of distributed numerical physical rainfall-runoff model TOPKAPI

    Science.gov (United States)

    Boyko, Oleksiy; Zheleznyak, Mark

    2015-04-01

    The original numerical code TOPKAPI-IMMS of the distributed rainfall-runoff model TOPKAPI (Todini et al., 1996-2014) has been developed and implemented in Ukraine. A parallel version of the code was developed recently for use on multiprocessor systems: multicore PCs and clusters. The algorithm is based on a binary-tree decomposition of the watershed to balance the amount of computation across processors/cores. The Message Passing Interface (MPI) protocol is used as the parallel computing framework. The numerical efficiency of the parallelization algorithm is demonstrated in case studies of flood prediction for mountain watersheds of the Ukrainian Carpathian region. The modeling results are compared with predictions based on lumped-parameter models.
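
    The binary-tree decomposition idea can be sketched as follows; the greedy split heuristic and the per-subcatchment cell counts are illustrative assumptions, not the TOPKAPI-IMMS implementation:

    ```python
    # Hypothetical sketch of binary-tree load balancing for a distributed
    # rainfall-runoff model: each subcatchment's cell count stands in for
    # its computational workload.

    def split_balanced(workloads):
        """Split indices into two groups with near-equal total workload
        (greedy longest-processing-time heuristic)."""
        order = sorted(range(len(workloads)), key=lambda i: -workloads[i])
        groups, totals = ([], []), [0, 0]
        for i in order:
            g = 0 if totals[0] <= totals[1] else 1
            groups[g].append(i)
            totals[g] += workloads[i]
        return groups

    def decompose(workloads, indices=None, depth=0, max_depth=2):
        """Binary-tree decomposition: recursively halve the watershed
        until there are 2**max_depth leaves (one leaf per MPI rank)."""
        if indices is None:
            indices = list(range(len(workloads)))
        if depth == max_depth:
            return [indices]
        left, right = split_balanced([workloads[i] for i in indices])
        li = [indices[i] for i in left]
        ri = [indices[i] for i in right]
        return (decompose(workloads, li, depth + 1, max_depth) +
                decompose(workloads, ri, depth + 1, max_depth))

    cells = [120, 80, 75, 60, 55, 40, 30, 20]    # cells per subcatchment
    leaves = decompose(cells, max_depth=2)        # 4 ranks
    loads = [sum(cells[i] for i in leaf) for leaf in leaves]
    print(loads)   # → [120, 120, 110, 130]
    ```

    With a total workload of 480 cells, each of the four ranks ends up within about 8% of the ideal load of 120.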

  20. Thick description of the morphological types of chant music in Mongolian-language sutra recitation

    Institute of Scientific and Technical Information of China (English)

    楚高娃

    2012-01-01

    This paper analyzes and gives a thick description of the morphological types of chant music in Mongolian-language sutra recitation, seeking to verify the systematic and systematized character that this music acquired in the course of its development. Through an analysis of the three morphological types of chant music, it also shows how Tibetan Buddhism was assimilated into Mongolian culture.

  1. The Systems Biology Markup Language (SBML) Level 3 Package: Qualitative Models, Version 1, Release 1.

    Science.gov (United States)

    Chaouiya, Claudine; Keating, Sarah M; Berenguier, Duncan; Naldi, Aurélien; Thieffry, Denis; van Iersel, Martijn P; Le Novère, Nicolas; Helikar, Tomáš

    2015-09-04

    Quantitative methods for modelling biological networks require in-depth knowledge of the biochemical reactions and their stoichiometric and kinetic parameters. In many practical cases, this knowledge is missing. This has led to the development of several qualitative modelling methods that use information such as gene expression data from functional genomics experiments. The SBML Level 3 Version 1 Core specification does not provide a mechanism for explicitly encoding qualitative models, but it does provide a mechanism for SBML packages to extend the Core specification and add additional syntactical constructs. The SBML Qualitative Models package for SBML Level 3 adds features so that qualitative models can be directly and explicitly encoded. The approach taken in this package is essentially based on the definition of regulatory or influence graphs. The SBML Qualitative Models package defines the structure and syntax necessary to describe qualitative models that associate discrete levels of activity with entity pools, together with the transitions between states that describe the processes involved. This is particularly suited to logical models (Boolean or multi-valued), and some classes of Petri net models can also be encoded with this approach.
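
    A minimal sketch of the kind of model SBML-qual encodes; this illustrates the underlying logical formalism, not the package's XML syntax, and the three-gene network is invented for the example:

    ```python
    # A Boolean qualitative model: discrete variables on an influence
    # graph, updated synchronously by transition rules.

    def step(state, rules):
        """Synchronous update: apply every transition rule to the state."""
        return {var: rule(state) for var, rule in rules.items()}

    # Three-gene network: A activates B, B activates C, C represses A.
    rules = {
        "A": lambda s: 0 if s["C"] else 1,   # repressed by C
        "B": lambda s: s["A"],               # activated by A
        "C": lambda s: s["B"],               # activated by B
    }

    state = {"A": 1, "B": 0, "C": 0}
    trajectory = [state]
    for _ in range(6):
        state = step(state, rules)
        trajectory.append(state)
    print(trajectory[-1])   # the state returns to the start: a 6-step oscillation
    ```

    The negative feedback loop produces a sustained oscillation, a classic behaviour that logical models capture without any kinetic parameters.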

  2. User guide for MODPATH version 6 - A particle-tracking model for MODFLOW

    Science.gov (United States)

    Pollock, David W.

    2012-01-01

    MODPATH is a particle-tracking post-processing model that computes three-dimensional flow paths using output from groundwater flow simulations based on MODFLOW, the U.S. Geological Survey (USGS) finite-difference groundwater flow model. This report documents MODPATH version 6. Previous versions were documented in USGS Open-File Reports 89-381 and 94-464. The program uses a semianalytical particle-tracking scheme that allows an analytical expression of a particle's flow path to be obtained within each finite-difference grid cell. A particle's path is computed by tracking the particle from one cell to the next until it reaches a boundary, an internal sink/source, or satisfies another termination criterion. Data input to MODPATH consists of a combination of MODFLOW input data files, MODFLOW head and flow output files, and other input files specific to MODPATH. Output from MODPATH consists of several output files, including a number of particle coordinate output files intended to serve as input data for other programs that process, analyze, and display the results in various ways. MODPATH is written in FORTRAN and can be compiled by any FORTRAN compiler that fully supports FORTRAN-2003 or by most commercially available FORTRAN-95 compilers that support the major FORTRAN-2003 language extensions.
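
    The semianalytical scheme can be illustrated for a single cell. The following is a one-axis sketch of Pollock's method under linear velocity interpolation between opposite cell faces, not MODPATH's full implementation:

    ```python
    from math import log, exp

    def exit_time_1d(p, x0, dx, v1, v2):
        """Time for a particle at p to reach the downstream face of the
        cell [x0, x0+dx], with velocity interpolated linearly between the
        face velocities v1 and v2 (Pollock's scheme on one axis)."""
        A = (v2 - v1) / dx
        vp = v1 + A * (p - x0)          # velocity at the particle
        if vp == 0:
            return float("inf")
        v_face = v2 if vp > 0 else v1   # face the particle moves toward
        if A == 0:
            d = (x0 + dx - p) if vp > 0 else (x0 - p)
            return d / vp
        if v_face * vp <= 0:            # flow reverses before the face
            return float("inf")
        return log(v_face / vp) / A

    def position_1d(p, x0, dx, v1, v2, t):
        """Analytic particle position after time t within the cell."""
        A = (v2 - v1) / dx
        vp = v1 + A * (p - x0)
        if A == 0:
            return p + vp * t
        return x0 + (vp * exp(A * t) - v1) / A

    # Particle in a unit cell: accelerating flow in x, uniform flow in y.
    tx = exit_time_1d(0.0, 0.0, 1.0, 1.0, 2.0)    # ln(2) ≈ 0.693
    ty = exit_time_1d(0.25, 0.0, 1.0, 0.5, 0.5)   # 0.75 / 0.5 = 1.5
    t = min(tx, ty)                # particle leaves through the x face first
    y_exit = position_1d(0.25, 0.0, 1.0, 0.5, 0.5, t)
    print(round(t, 3), round(y_exit, 3))   # → 0.693 0.597
    ```

    The exit time on each axis is analytic, so the flow path through the cell needs no time stepping; the cell-to-cell loop simply repeats this calculation in the next cell.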

  3. Landfill Gas Energy Cost Model Version 3.0 (LFGcost-Web V3 ...

    Science.gov (United States)

    To help stakeholders estimate the costs of a landfill gas (LFG) energy project, LMOP developed a cost tool (LFGcost) in 2002. Since then, LMOP has routinely updated the tool to reflect changes in the LFG energy industry. Initially, the model was designed for EPA use, to assist landfills in evaluating the economic and financial feasibility of LFG energy project development. In 2014, LMOP developed a public version of the model, LFGcost-Web (Version 3.0), to allow landfill and industry stakeholders to evaluate project feasibility on their own. LFGcost-Web can analyze costs for 12 energy recovery project types. These project costs can be estimated with or without the costs of a gas collection and control system (GCCS). The EPA used select equations from LFGcost-Web to estimate costs of the regulatory options in the 2015 proposed revisions to the MSW Landfills Standards of Performance (also known as New Source Performance Standards) and the Emission Guidelines (hereinafter referred to collectively as the Landfill Rules). More specifically, equations derived from LFGcost-Web were applied to each landfill expected to be impacted by the Landfill Rules to estimate annualized installed capital costs and annual O&M costs of a gas collection and control system. In addition, after applying the LFGcost-Web equations to the list of landfills expected to require a GCCS in year 2025 as a result of the proposed Landfill Rules, the regulatory analysis evaluated whether electr

  4. LERC-SLAM - THE NASA LEWIS RESEARCH CENTER SATELLITE LINK ATTENUATION MODEL PROGRAM (IBM PC VERSION)

    Science.gov (United States)

    Manning, R. M.

    1994-01-01

    The frequency and intensity of rain attenuation affecting the communication between a satellite and an earth terminal is an important consideration in planning satellite links. The NASA Lewis Research Center Satellite Link Attenuation Model Program (LeRC-SLAM) provides a static and dynamic statistical assessment of the impact of rain attenuation on a communications link established between an earth terminal and a geosynchronous satellite. The program is designed for use in the specification, design and assessment of satellite links for any terminal location in the continental United States. The basis for LeRC-SLAM is the ACTS Rain Attenuation Prediction Model, which uses a log-normal cumulative probability distribution to describe the random process of rain attenuation on satellite links. The derivation of the statistics for the rainrate process at the specified terminal location relies on long term rainfall records compiled by the U.S. Weather Service during time periods of up to 55 years in length. The theory of extreme value statistics is also utilized. The user provides 1) the longitudinal position of the satellite in geosynchronous orbit, 2) the geographical position of the earth terminal in terms of latitude and longitude, 3) the height above sea level of the terminal site, 4) the yearly average rainfall at the terminal site, and 5) the operating frequency of the communications link (within 1 to 1000 GHz, inclusive). Based on the yearly average rainfall at the terminal location, LeRC-SLAM calculates the relevant rain statistics for the site using an internal data base. The program then generates rain attenuation data for the satellite link. This data includes a description of the static (i.e., yearly) attenuation process, an evaluation of the cumulative probability distribution for attenuation effects, and an evaluation of the probability of fades below selected fade depths. In addition, LeRC-SLAM calculates the elevation and azimuth angles of the terminal
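
    The log-normal attenuation statistics described above can be sketched as follows; the distribution parameters are illustrative assumptions, not values produced by the ACTS model:

    ```python
    from math import log, sqrt, erfc

    def exceedance_probability(a, mu, sigma):
        """P(attenuation > a dB) when ln(attenuation) is normal with
        mean mu and standard deviation sigma (log-normal model)."""
        return 0.5 * erfc((log(a) - mu) / (sigma * sqrt(2.0)))

    # Illustrative parameters: median attenuation exp(mu) = 2 dB.
    mu, sigma = log(2.0), 1.0
    p_median = exceedance_probability(2.0, mu, sigma)   # 0.5 by construction
    p_10db = exceedance_probability(10.0, mu, sigma)    # deep-fade probability
    print(round(p_median, 3), round(p_10db, 4))
    ```

    Evaluating the exceedance probability at a chosen fade depth is exactly the "probability of fades below selected fade depths" output described in the abstract, once mu and sigma have been fitted to the site's rain statistics.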

  5. LERC-SLAM - THE NASA LEWIS RESEARCH CENTER SATELLITE LINK ATTENUATION MODEL PROGRAM (MACINTOSH VERSION)

    Science.gov (United States)

    Manning, R. M.

    1994-01-01

    The frequency and intensity of rain attenuation affecting the communication between a satellite and an earth terminal is an important consideration in planning satellite links. The NASA Lewis Research Center Satellite Link Attenuation Model Program (LeRC-SLAM) provides a static and dynamic statistical assessment of the impact of rain attenuation on a communications link established between an earth terminal and a geosynchronous satellite. The program is designed for use in the specification, design and assessment of satellite links for any terminal location in the continental United States. The basis for LeRC-SLAM is the ACTS Rain Attenuation Prediction Model, which uses a log-normal cumulative probability distribution to describe the random process of rain attenuation on satellite links. The derivation of the statistics for the rainrate process at the specified terminal location relies on long term rainfall records compiled by the U.S. Weather Service during time periods of up to 55 years in length. The theory of extreme value statistics is also utilized. The user provides 1) the longitudinal position of the satellite in geosynchronous orbit, 2) the geographical position of the earth terminal in terms of latitude and longitude, 3) the height above sea level of the terminal site, 4) the yearly average rainfall at the terminal site, and 5) the operating frequency of the communications link (within 1 to 1000 GHz, inclusive). Based on the yearly average rainfall at the terminal location, LeRC-SLAM calculates the relevant rain statistics for the site using an internal data base. The program then generates rain attenuation data for the satellite link. This data includes a description of the static (i.e., yearly) attenuation process, an evaluation of the cumulative probability distribution for attenuation effects, and an evaluation of the probability of fades below selected fade depths. In addition, LeRC-SLAM calculates the elevation and azimuth angles of the terminal

  6. On the Relationship between Apocryphal Sutras and Ancient Chinese Literature

    Institute of Scientific and Technical Information of China (English)

    李小荣

    2012-01-01

    In Buddhist literature studies within international sinology, scholars have paid more attention to important individual apocryphal sutras or to apocryphal sutras of a single type, and research in this area is accordingly productive; recognition of the overall character of the apocryphal sutras, however, is lacking. This article therefore discusses the relationship between the apocryphal sutras and ancient Chinese literature from a macroscopic point of view. The relationship is manifested on four levels: (1) in literary history, the apocryphal sutras are themselves an integral part of ancient religious literature; (2) in intellectual history, the apocryphal sutras are the most direct textual expression of the sinicization of Indian Buddhism; (3) in terms of aesthetic taste, the characters shaped by the apocryphal sutras accord more closely with the aesthetic psychology of the Chinese populace than those of the original Buddhist scriptures; (4) in terms of influence, the apocryphal sutras not only provided later writers with abundant material for literary creation, thereby forming fixed motifs, but also contributed to the formation and spread of certain Buddhist ritual assemblies. Conversely, indigenous Chinese literary works also exerted considerable influence on the composition of the apocryphal sutras.

  7. Evaluation of the Snow Simulations from the Community Land Model, Version 4 (CLM4)

    Science.gov (United States)

    Toure, Ally M.; Rodell, Matthew; Yang, Zong-Liang; Beaudoing, Hiroko; Kim, Edward; Zhang, Yongfei; Kwon, Yonghwan

    2015-01-01

    This paper evaluates the simulation of snow by the Community Land Model, version 4 (CLM4), the land model component of the Community Earth System Model, version 1.0.4 (CESM1.0.4). CLM4 was run in an offline mode forced with the corrected land-only replay of the Modern-Era Retrospective Analysis for Research and Applications (MERRA-Land), and the output was evaluated for the period from January 2001 to January 2011 over the Northern Hemisphere poleward of 30 deg N. Simulated snow-cover fraction (SCF), snow depth, and snow water equivalent (SWE) were compared against a set of observations including the Moderate Resolution Imaging Spectroradiometer (MODIS) SCF, the Interactive Multisensor Snow and Ice Mapping System (IMS) snow cover, the Canadian Meteorological Centre (CMC) daily snow analysis products, snow depth from the National Weather Service Cooperative Observer (COOP) program, and Snowpack Telemetry (SNOTEL) SWE observations. CLM4 SCF was converted into snow-cover extent (SCE) to compare with MODIS SCE. It showed good agreement, with a correlation coefficient of 0.91 and an average bias of -1.54 × 10^2 sq km. Overall, CLM4 agreed well with IMS snow cover, with the percentage of correctly modeled snow/no-snow being 94%. CLM4 snow depth and SWE agreed reasonably well with the CMC product, with the average bias (RMSE) of snow depth and SWE being 0.044 m (0.19 m) and -0.010 m (0.04 m), respectively. CLM4 underestimated SNOTEL SWE and COOP snow depth. This study demonstrates the need to improve the CLM4 snow estimates and constitutes a benchmark against which improvement of the model through data assimilation can be measured.

  8. Version 3.0 of code Java for 3D simulation of the CCA model

    Science.gov (United States)

    Zhang, Kebo; Zuo, Junsen; Dou, Yifeng; Li, Chao; Xiong, Hailing

    2016-10-01

    In this paper we provide a new version of the program, replacing the previous one. The frequency of traversing the cluster list was reduced, and some code blocks were optimized; in addition, we appended and revised the source-code comments for some methods and attributes. Comparative experimental results show that the new version has better time efficiency than the previous version.

  9. Igpet software for modeling igneous processes: examples of application using the open educational version

    Science.gov (United States)

    Carr, Michael J.; Gazel, Esteban

    2016-09-01

    We provide here an open version of Igpet software, called t-Igpet to emphasize its application for teaching and research in forward modeling of igneous geochemistry. There are three programs, a norm utility, a petrologic mixing program using least squares and Igpet, a graphics program that includes many forms of numerical modeling. Igpet is a multifaceted tool that provides the following basic capabilities: igneous rock identification using the IUGS (International Union of Geological Sciences) classification and several supplementary diagrams; tectonic discrimination diagrams; pseudo-quaternary projections; least squares fitting of lines, polynomials and hyperbolae; magma mixing using two endmembers, histograms, x-y plots, ternary plots and spider-diagrams. The advanced capabilities of Igpet are multi-element mixing and magma evolution modeling. Mixing models are particularly useful for understanding the isotopic variations in rock suites that evolved by mixing different sources. The important melting models include, batch melting, fractional melting and aggregated fractional melting. Crystallization models include equilibrium and fractional crystallization and AFC (assimilation and fractional crystallization). Theses, reports and proposals concerning igneous petrology are improved by numerical modeling. For reviewed publications some elements of modeling are practically a requirement. Our intention in providing this software is to facilitate improved communication and lower entry barriers to research, especially for students.
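
    The melting models named above follow standard trace-element equations; a brief sketch, with source concentration C0, bulk partition coefficient D, and melt fraction F chosen purely for illustration:

    ```python
    # Standard trace-element melting equations of the kind Igpet evaluates.

    def batch_melt(c0, d, f):
        """Liquid concentration for equilibrium (batch) melting:
        Cl = C0 / (D + F*(1 - D))."""
        return c0 / (d + f * (1.0 - d))

    def aggregated_fractional_melt(c0, d, f):
        """Mean liquid concentration for accumulated fractional melting:
        Cl = (C0/F) * (1 - (1 - F)**(1/D))."""
        return (c0 / f) * (1.0 - (1.0 - f) ** (1.0 / d))

    c0, d = 10.0, 0.01          # a highly incompatible element
    for f in (0.01, 0.05, 0.20):
        print(f,
              round(batch_melt(c0, d, f), 1),
              round(aggregated_fractional_melt(c0, d, f), 1))
    ```

    For small melt fractions the two models diverge strongly for incompatible elements, which is why the choice of melting model matters when inverting observed concentrations for source composition.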

  10. Igpet software for modeling igneous processes: examples of application using the open educational version

    Science.gov (United States)

    Carr, Michael J.; Gazel, Esteban

    2017-04-01

    We provide here an open version of Igpet software, called t-Igpet to emphasize its application for teaching and research in forward modeling of igneous geochemistry. There are three programs, a norm utility, a petrologic mixing program using least squares and Igpet, a graphics program that includes many forms of numerical modeling. Igpet is a multifaceted tool that provides the following basic capabilities: igneous rock identification using the IUGS (International Union of Geological Sciences) classification and several supplementary diagrams; tectonic discrimination diagrams; pseudo-quaternary projections; least squares fitting of lines, polynomials and hyperbolae; magma mixing using two endmembers, histograms, x-y plots, ternary plots and spider-diagrams. The advanced capabilities of Igpet are multi-element mixing and magma evolution modeling. Mixing models are particularly useful for understanding the isotopic variations in rock suites that evolved by mixing different sources. The important melting models include, batch melting, fractional melting and aggregated fractional melting. Crystallization models include equilibrium and fractional crystallization and AFC (assimilation and fractional crystallization). Theses, reports and proposals concerning igneous petrology are improved by numerical modeling. For reviewed publications some elements of modeling are practically a requirement. Our intention in providing this software is to facilitate improved communication and lower entry barriers to research, especially for students.

  11. Ocean Model, Analysis and Prediction System version 3: operational global ocean forecasting

    Science.gov (United States)

    Brassington, Gary; Sandery, Paul; Sakov, Pavel; Freeman, Justin; Divakaran, Prasanth; Beckett, Duan

    2017-04-01

    The Ocean Model, Analysis and Prediction System version 3 (OceanMAPSv3) is a near-global (75S-75N; no sea ice), uniform-horizontal-resolution (0.1°x0.1°), 51-vertical-level ocean forecast system producing daily analyses and 7-day forecasts. The system was declared operational at the Bureau of Meteorology in April 2016, subsequently upgraded to include ACCESS-G APS2 in June 2016, and finally ported to the Bureau's new supercomputer in September 2016. It realises the original vision of the BLUElink projects (2003-2015): to provide global forecasts of ocean geostrophic turbulence (eddies and fronts) in support of naval operations as well as other national services. The analysis system has retained an ensemble-based optimal interpolation method with 144 stationary ensemble members derived from a multi-year hindcast; however, the BODAS code has been upgraded to a new code base, EnKF-C. A new initialisation strategy has been introduced, leading to greater retention of analysis increments and reduced shock. The analysis cycle has been optimised for a 3-cycle system with 3-day observation windows, retaining an advantage as a multi-cycle, time-lagged ensemble. The sea surface temperature and sea surface height anomaly analysis errors in the Australian region are 0.34 degC and 6.2 cm respectively, improvements of 10% and 20% over version 2. In addition, the RMSE of the 7-day forecast is lower than that of the 1-day forecast from the previous system (version 2). International intercomparisons have shown that this system is comparable in performance with the two leading systems and is often the leading performer for surface temperature and upper-ocean temperature. We present an overview of the system, the data assimilation and initialisation, demonstrate the performance and outline future directions.

  12. Integrated Medical Model (IMM) Optimization Version 4.0 Functional Improvements

    Science.gov (United States)

    Arellano, John; Young, M.; Boley, L.; Garcia, Y.; Saile, L.; Walton, M.; Kerstman, E.; Reyes, D.; Goodenow, D. A.; Myers, J. G.

    2016-01-01

    The IMM's ability to assess mission outcome risk levels relative to available resources provides a unique capability to guide the choice of optimal operational medical kit and vehicle resources. Post-processing optimization allows the IMM to optimize essential resources to improve a specific model outcome, such as maximizing the Crew Health Index (CHI) or minimizing the probability of evacuation (EVAC) or of loss of crew life (LOCL). Mass and/or volume constrain the optimized resource set. The IMM's probabilistic simulation uses input data on one hundred medical conditions to simulate medical events that may occur in spaceflight, the resources required to treat those events, and the resulting impact to the mission based on specific crew and mission characteristics. Because IMM version 4.0 provides for partial treatment of medical events, IMM Optimization 4.0 scores resources at the individual resource-unit level, as opposed to the full condition-specific treatment-set level used in version 3.0. This allows the inclusion of as many resources as possible when an entire set of resources called out for treatment cannot satisfy the constraints. IMM Optimization version 4.0 adds capabilities that increase efficiency by creating multiple resource sets based on differing constraints and priorities (CHI, EVAC, or LOCL). It also provides sets of resources that improve mission-related IMM v4.0 outputs, with improved performance compared to the prior optimization. The new optimization represents much-improved fidelity that will increase the utility of IMM 4.0 for decision support.
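
    Selecting resource units under a mass or volume budget is a knapsack-like problem. The sketch below uses a simple greedy benefit-per-mass heuristic with invented items and scores; it is not the IMM's actual optimization algorithm, only an illustration of unit-level scoring under a constraint:

    ```python
    # Hypothetical unit-level kit optimization: maximize a benefit score
    # (e.g., a contribution to CHI) subject to a mass budget in kg.

    def optimize_kit(items, mass_budget):
        """Greedy selection by benefit per kilogram at the unit level."""
        chosen, mass_used = [], 0.0
        for name, benefit, mass in sorted(items, key=lambda it: -it[1] / it[2]):
            if mass_used + mass <= mass_budget:
                chosen.append(name)
                mass_used += mass
        return chosen, mass_used

    # (name, benefit score, mass in kg) -- all values invented
    items = [
        ("analgesic_unit",   8.0, 0.10),
        ("iv_fluid_unit",    6.0, 1.00),
        ("splint",           4.0, 0.50),
        ("defibrillator",   12.0, 3.00),
        ("suture_kit",       5.0, 0.20),
    ]
    kit, mass = optimize_kit(items, mass_budget=2.0)
    print(kit, round(mass, 2))
    ```

    With a 2 kg budget the heavy, high-benefit item is excluded in favour of several light units, mirroring the abstract's point that unit-level scoring packs in as many useful resources as the constraints allow.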

  13. A Study on the Amitayurbhavana Sutra Illustration in Mogao Cave 454

    Institute of Scientific and Technical Information of China (English)

    郭俊叶

    2016-01-01

    Based on a detailed textual study of the content of the Amitayurbhavana Sutra illustration in Mogao Cave 454, this article argues that, in the sixteen-meditation panels of the banner on the left side of the illustration, images of bodhisattva meditation replace the last three meditations (those on the three grades of rebirth), the purpose being to show that all who attain rebirth can achieve the bodhisattva path; this also reflects the aspiration for "rebirth in the highest grade". At the same time, in illustrations of this sutra at Dunhuang from the late Tang, Five Dynasties and Song periods, the nine grades of rebirth are no longer painted in the seven-jeweled pond of the adorned Pure Land but are concentrated within the sixteen meditations, mostly in place of the last three, which are often rendered as bodhisattva meditations to indicate that rebirth culminates in the bodhisattva path.

  14. APPLICATION OF TWO VERSIONS OF A RNG BASED k-ε MODEL TO NUMERICAL SIMULATIONS OF TURBULENT IMPINGING JET FLOW

    Institute of Scientific and Technical Information of China (English)

    Chen Qing-guang; Xu Zhong; Zhang Yong-jian

    2003-01-01

    Two independent versions of the RNG-based k-ε turbulence model, in conjunction with the law of the wall, have been applied to the numerical simulation of an axisymmetric turbulent impinging jet flow field. The two models' predictions are compared with those of the standard k-ε model and with experimental data measured by LDV (Laser Doppler Velocimetry). The results show that the original version of the RNG k-ε model, with the choice Cε1=1.063, cannot yield good results; the predicted turbulent kinetic energy profiles in the vicinity of the stagnation region are even worse than those predicted by the standard k-ε model. The new version of the RNG k-ε model, however, behaves well. This is mainly due to the corrections to the constants Cε1 and Cε2, along with a modification of the production term to account for non-equilibrium strain rates in the flow.

  15. Description of the Earth system model of intermediate complexity LOVECLIM version 1.2

    Directory of Open Access Journals (Sweden)

    H. Goosse

    2010-11-01

    The main characteristics of the new version 1.2 of the three-dimensional Earth system model of intermediate complexity LOVECLIM are briefly described. LOVECLIM 1.2 includes representations of the atmosphere, the ocean and sea ice, the land surface (including vegetation), the ice sheets, the icebergs and the carbon cycle. The atmospheric component is ECBilt2, a T21, 3-level quasi-geostrophic model. The ocean component is CLIO3, which consists of an ocean general circulation model coupled to a comprehensive thermodynamic-dynamic sea-ice model. Its horizontal resolution is 3° by 3°, and there are 20 levels in the ocean. ECBilt-CLIO is coupled to VECODE, a vegetation model that simulates the dynamics of two main terrestrial plant functional types, trees and grasses, as well as desert. VECODE also simulates the evolution of the carbon cycle over land, while the ocean carbon cycle is represented by LOCH, a comprehensive model that takes into account both the solubility and biological pumps. The ice sheet component AGISM is made up of a three-dimensional thermomechanical model of ice sheet flow, a visco-elastic bedrock model and a model of the mass balance at the ice-atmosphere and ice-ocean interfaces. For both the Greenland and Antarctic ice sheets, calculations are made on a 10 km by 10 km resolution grid with 31 sigma levels. LOVECLIM 1.2 reproduces well the major characteristics of the observed climate, both for present-day conditions and for key past periods such as the last millennium, the mid-Holocene and the Last Glacial Maximum. However, despite some improvements compared to earlier versions, some biases are still present in the model. The most serious ones are mainly located at low latitudes: an overestimation of temperature there, a too-symmetric distribution of precipitation between the two hemispheres, and an overestimation of precipitation and vegetation cover in the subtropics. In addition, the atmospheric circulation is

  16. Exact solution for a metapopulation version of Schelling’s model

    Science.gov (United States)

    Durrett, Richard; Zhang, Yuan

    2014-01-01

    In 1971, Schelling introduced a model in which families move if they have too many neighbors of the opposite type. In this paper, we consider a metapopulation version of the model in which a city is divided into N neighborhoods, each of which has L houses. There are ρNL red families and ρNL blue families for some ρ < 1/2; a family is unhappy, and may move, if more than a fraction ρc of the families in its neighborhood are of the opposite type. When ρ exceeds a critical value ρb, a new segregated equilibrium appears; for ρb < ρ < ρd, there is bistability, but when ρ increases past ρd the random state is no longer stable. When ρc is small enough, the random state will again be the stationary distribution when ρ is close to 1/2. If so, this is preceded by a region of bistability. PMID:25225367
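
    A minimal simulation sketch of the metapopulation dynamic; the parameter values and the random-jump move rule below are invented for illustration and do not reproduce the paper's exact process:

    ```python
    import random

    # N neighborhoods of capacity L; an unhappy family (opposite type
    # exceeding a fraction rho_c of its neighborhood) moves to a uniformly
    # chosen neighborhood with a vacancy.

    def unhappy(nbhd, color, rho_c):
        other = nbhd[1 - color]
        total = nbhd[0] + nbhd[1]
        return total > 0 and other > rho_c * total

    def step(nbhds, L, rho_c, rng):
        i = rng.randrange(len(nbhds))
        color = rng.randrange(2)        # 0 = red, 1 = blue
        if nbhds[i][color] == 0 or not unhappy(nbhds[i], color, rho_c):
            return
        j = rng.randrange(len(nbhds))
        if j != i and sum(nbhds[j]) < L:
            nbhds[i][color] -= 1
            nbhds[j][color] += 1

    rng = random.Random(0)
    N, L, rho, rho_c = 10, 20, 0.4, 0.45
    nbhds = [[int(rho * L), int(rho * L)] for _ in range(N)]  # 8 red, 8 blue each
    for _ in range(20000):
        step(nbhds, L, rho_c, rng)
    reds = sum(n[0] for n in nbhds)
    blues = sum(n[1] for n in nbhds)
    print(reds, blues)   # family counts are conserved: 80 80
    ```

    Because moves only relocate families, the totals ρNL of each color are conserved while the per-neighborhood composition drifts toward segregated configurations when many families start unhappy.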

  17. New Source Term Model for the RESRAD-OFFSITE Code Version 3

    Energy Technology Data Exchange (ETDEWEB)

    Yu, Charley [Argonne National Lab. (ANL), Argonne, IL (United States); Gnanapragasam, Emmanuel [Argonne National Lab. (ANL), Argonne, IL (United States); Cheng, Jing-Jy [Argonne National Lab. (ANL), Argonne, IL (United States); Kamboj, Sunita [Argonne National Lab. (ANL), Argonne, IL (United States); Chen, Shih-Yew [Argonne National Lab. (ANL), Argonne, IL (United States)

    2013-06-01

    This report documents the new source term model developed and implemented in Version 3 of the RESRAD-OFFSITE code. This new source term model includes: (1) "first order release with transport" option, in which the release of the radionuclide is proportional to the inventory in the primary contamination and the user-specified leach rate is the proportionality constant, (2) "equilibrium desorption release" option, in which the user specifies the distribution coefficient which quantifies the partitioning of the radionuclide between the solid and aqueous phases, and (3) "uniform release" option, in which the radionuclides are released from a constant fraction of the initially contaminated material during each time interval and the user specifies the duration over which the radionuclides are released.
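
    The first-order and uniform release options can be sketched with simple closed-form rates; this assumes a single radionuclide with no ingrowth, and the numbers are illustrative (the equilibrium-desorption option is a Kd partitioning rather than a rate, so it is not shown):

    ```python
    from math import exp, log

    def first_order_release(i0, leach_rate, decay_const, t):
        """Inventory and instantaneous release rate at time t when release
        is proportional to the remaining inventory, with the leach rate as
        the proportionality constant."""
        inventory = i0 * exp(-(leach_rate + decay_const) * t)
        return inventory, leach_rate * inventory

    def uniform_release(i0, duration, decay_const, t):
        """Release rate when a constant fraction of the initially
        contaminated material is released per unit time over `duration`."""
        if t > duration:
            return 0.0
        return (i0 / duration) * exp(-decay_const * t)

    half_life = 30.0                       # years (illustrative, ~Cs-137)
    lam = log(2.0) / half_life             # radioactive decay constant
    inv, rate = first_order_release(1000.0, leach_rate=0.05,
                                    decay_const=lam, t=10.0)
    print(round(inv, 1), round(rate, 2))
    ```

    After 10 years roughly half the initial inventory remains (decay plus leaching), and the instantaneous release rate is the leach rate times that remaining inventory.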

  18. Validity study of the Beck Anxiety Inventory (Portuguese version by the Rasch Rating Scale model

    Directory of Open Access Journals (Sweden)

    Sónia Quintão

    2013-01-01

    Our objective was to conduct a validation study of the Portuguese version of the Beck Anxiety Inventory (BAI) by means of the Rasch rating scale model, and then to compare it with the anxiety scales most used in Portugal. The sample consisted of 1,160 adults (427 men and 733 women) aged 18-82 years (M=33.39; SD=11.85). The instruments were the Beck Anxiety Inventory, the State-Trait Anxiety Inventory and the Zung Self-Rating Anxiety Scale. The BAI's four-category rating system, the data-model fit, and the person reliability were found to be adequate. The measure can be considered unidimensional. Gender- and age-related differences were not a threat to validity. The BAI correlated significantly with the other anxiety measures. In conclusion, the BAI shows good psychometric quality.
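
    The rating scale model underlying this analysis assigns each response category a probability from the person's ability, the item's difficulty, and a set of thresholds shared across items. A sketch with invented parameter values (the BAI, like this example, uses four response categories):

    ```python
    from math import exp

    # Rasch rating scale model: probability of endorsing category k of an
    # item with difficulty delta, shared thresholds tau, person ability theta.

    def category_probs(theta, delta, taus):
        """P(X = k) for k = 0..m under the rating scale model."""
        logits = [0.0]
        for tau in taus:                 # cumulative sum of category logits
            logits.append(logits[-1] + (theta - delta - tau))
        weights = [exp(l) for l in logits]
        total = sum(weights)
        return [w / total for w in weights]

    # Four categories (three thresholds), illustrative values.
    probs = category_probs(theta=0.5, delta=0.0, taus=[-1.0, 0.0, 1.0])
    print([round(p, 3) for p in probs])   # → [0.058, 0.258, 0.426, 0.258]
    ```

    Fitting such a model to the 1,160 response patterns yields the category-functioning, fit, and reliability diagnostics reported in the abstract.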

  19. An improved version of the consequence analysis model for chemical emergencies, ESCAPE

    Science.gov (United States)

    Kukkonen, J.; Nikmo, J.; Riikonen, K.

    2017-02-01

    We present a refined version of a mathematical model called ESCAPE, "Expert System for Consequence Analysis and Preparing for Emergencies". The model has been designed for evaluating releases of toxic and flammable gases into the atmosphere, their atmospheric dispersion, and their effects on humans and the environment. We describe (i) the mathematical treatments of the model, (ii) a verification and evaluation of the model against selected experimental field data, and (iii) a new operational implementation of the model. The new mathematical treatments include state-of-the-art atmospheric vertical profiles and new submodels for dense-gas and passive atmospheric dispersion. The model's performance was first successfully verified using data from the Thorney Island campaign, and then evaluated against the Desert Tortoise campaign. For the latter campaign, the geometric mean bias was 1.72 (this corresponds to an underprediction of approximately 70%) and 0.71 (overprediction of approximately 30%) for the concentration and the plume half-width, respectively. The new operational implementation runs on computers, tablets and mobile phones, and the predicted results can be post-processed using geographic information systems. The model has already proved to be a useful assessment tool for the needs of emergency response authorities in contingency planning.
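
    The geometric mean bias quoted above is a standard dispersion-model evaluation statistic, computed together with the geometric variance over paired observed and predicted concentrations; the sample values below are invented:

    ```python
    from math import log, exp

    def geometric_stats(observed, predicted):
        """Geometric mean bias MG and geometric variance VG over paired
        observed/predicted values (all values must be positive)."""
        diffs = [log(o) - log(p) for o, p in zip(observed, predicted)]
        mg = exp(sum(diffs) / len(diffs))
        vg = exp(sum(d * d for d in diffs) / len(diffs))
        return mg, vg

    obs = [1.0, 2.0, 4.0, 8.0]     # observed concentrations (invented)
    pred = [0.5, 1.5, 2.5, 5.0]    # model predictions (invented)
    mg, vg = geometric_stats(obs, pred)
    print(round(mg, 2), round(vg, 2))   # MG > 1 indicates underprediction
    ```

    In this convention MG > 1 means the model underpredicts on average, matching the reading of the 1.72 concentration value in the abstract, while VG measures the scatter around that bias.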

  20. VALIDATION OF THE ASTER GLOBAL DIGITAL ELEVATION MODEL VERSION 3 OVER THE CONTERMINOUS UNITED STATES

    Directory of Open Access Journals (Sweden)

    D. Gesch

    2016-06-01

    The ASTER Global Digital Elevation Model Version 3 (GDEM v3) was evaluated over the conterminous United States in a manner similar to the validation conducted for the original GDEM Version 1 (v1) in 2009 and GDEM Version 2 (v2) in 2011. The absolute vertical accuracy of GDEM v3 was calculated by comparison with more than 23,000 independent reference geodetic ground control points from the U.S. National Geodetic Survey. The root mean square error (RMSE) measured for GDEM v3 is 8.52 meters. This compares with the RMSE of 8.68 meters for GDEM v2. Another important descriptor of vertical accuracy is the mean error, or bias, which indicates if a DEM has an overall vertical offset from true ground level. The GDEM v3 mean error of −1.20 meters reflects an overall negative bias in GDEM v3. The absolute vertical accuracy assessment results, both mean error and RMSE, were segmented by land cover type to provide insight into how GDEM v3 performs in various land surface conditions. While the RMSE varies little across cover types (6.92 to 9.25 meters), the mean error (bias) does appear to be affected by land cover type, ranging from −2.99 to +4.16 meters across 14 land cover classes. These results indicate that in areas where built or natural aboveground features are present, GDEM v3 is measuring elevations above the ground level, a condition noted in assessments of previous GDEM versions (v1 and v2) and an expected condition given the type of stereo-optical image data collected by ASTER. GDEM v3 was also evaluated by differencing with the Shuttle Radar Topography Mission (SRTM) dataset. In many forested areas, GDEM v3 has elevations that are higher in the canopy than SRTM. The overall validation effort also included an evaluation of the GDEM v3 water mask. In general, the number of distinct water polygons in GDEM v3 is much lower than the number in a reference land cover dataset, but the total areas compare much more closely.

  1. Validation of the ASTER Global Digital Elevation Model version 3 over the conterminous United States

    Science.gov (United States)

    Gesch, Dean B.; Oimoen, Michael J.; Danielson, Jeffrey J.; Meyer, David

    2016-01-01

    The ASTER Global Digital Elevation Model Version 3 (GDEM v3) was evaluated over the conterminous United States in a manner similar to the validation conducted for the original GDEM Version 1 (v1) in 2009 and GDEM Version 2 (v2) in 2011. The absolute vertical accuracy of GDEM v3 was calculated by comparison with more than 23,000 independent reference geodetic ground control points from the U.S. National Geodetic Survey. The root mean square error (RMSE) measured for GDEM v3 is 8.52 meters. This compares with the RMSE of 8.68 meters for GDEM v2. Another important descriptor of vertical accuracy is the mean error, or bias, which indicates if a DEM has an overall vertical offset from true ground level. The GDEM v3 mean error of −1.20 meters reflects an overall negative bias in GDEM v3. The absolute vertical accuracy assessment results, both mean error and RMSE, were segmented by land cover type to provide insight into how GDEM v3 performs in various land surface conditions. While the RMSE varies little across cover types (6.92 to 9.25 meters), the mean error (bias) does appear to be affected by land cover type, ranging from −2.99 to +4.16 meters across 14 land cover classes. These results indicate that in areas where built or natural aboveground features are present, GDEM v3 is measuring elevations above the ground level, a condition noted in assessments of previous GDEM versions (v1 and v2) and an expected condition given the type of stereo-optical image data collected by ASTER. GDEM v3 was also evaluated by differencing with the Shuttle Radar Topography Mission (SRTM) dataset. In many forested areas, GDEM v3 has elevations that are higher in the canopy than SRTM. The overall validation effort also included an evaluation of the GDEM v3 water mask. In general, the number of distinct water polygons in GDEM v3 is much lower than the number in a reference land cover dataset, but the total areas compare much more closely.
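
    The accuracy metrics reported in this record (RMSE and mean error) are straightforward to compute from DEM-minus-reference elevation differences. A minimal sketch with hypothetical control-point values (not data from the study):

```python
import math

def vertical_accuracy(dem_elev, ref_elev):
    """RMSE and mean error (bias) of DEM elevations against reference control points."""
    errors = [d - r for d, r in zip(dem_elev, ref_elev)]
    n = len(errors)
    mean_error = sum(errors) / n                      # bias: negative => DEM below true ground
    rmse = math.sqrt(sum(e * e for e in errors) / n)  # folds bias and spread into one number
    return mean_error, rmse

# hypothetical control points (meters), for illustration only
dem = [101.2, 250.7, 399.1, 75.3]
ref = [100.0, 252.0, 400.5, 74.0]
bias, rmse = vertical_accuracy(dem, ref)
```

    A negative mean error, as reported for GDEM v3, means the DEM sits below the reference ground level on average, while RMSE measures the combined effect of bias and scatter.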

  2. Validation of the Aster Global Digital Elevation Model Version 3 Over the Conterminous United States

    Science.gov (United States)

    Gesch, D.; Oimoen, M.; Danielson, J.; Meyer, D.

    2016-06-01

    The ASTER Global Digital Elevation Model Version 3 (GDEM v3) was evaluated over the conterminous United States in a manner similar to the validation conducted for the original GDEM Version 1 (v1) in 2009 and GDEM Version 2 (v2) in 2011. The absolute vertical accuracy of GDEM v3 was calculated by comparison with more than 23,000 independent reference geodetic ground control points from the U.S. National Geodetic Survey. The root mean square error (RMSE) measured for GDEM v3 is 8.52 meters. This compares with the RMSE of 8.68 meters for GDEM v2. Another important descriptor of vertical accuracy is the mean error, or bias, which indicates if a DEM has an overall vertical offset from true ground level. The GDEM v3 mean error of -1.20 meters reflects an overall negative bias in GDEM v3. The absolute vertical accuracy assessment results, both mean error and RMSE, were segmented by land cover type to provide insight into how GDEM v3 performs in various land surface conditions. While the RMSE varies little across cover types (6.92 to 9.25 meters), the mean error (bias) does appear to be affected by land cover type, ranging from -2.99 to +4.16 meters across 14 land cover classes. These results indicate that in areas where built or natural aboveground features are present, GDEM v3 is measuring elevations above the ground level, a condition noted in assessments of previous GDEM versions (v1 and v2) and an expected condition given the type of stereo-optical image data collected by ASTER. GDEM v3 was also evaluated by differencing with the Shuttle Radar Topography Mission (SRTM) dataset. In many forested areas, GDEM v3 has elevations that are higher in the canopy than SRTM. The overall validation effort also included an evaluation of the GDEM v3 water mask. In general, the number of distinct water polygons in GDEM v3 is much lower than the number in a reference land cover dataset, but the total areas compare much more closely.

  3. REVIEW OF SOURCE PLANTS OF KSHARA FOR KSHARA SUTRA PREPARATION FOR THE MANAGEMENT OF FISTULA-IN-ANO

    Directory of Open Access Journals (Sweden)

    Rath Sudipt Kumar

    2012-06-01

    Full Text Available Ksharasutra is a successful novel drug delivery system for managing cases of fistula-in-ano. Currently, the ksharasutra is prepared with Apamarga (Achyranthes aspera) kshara. Although this ksharasutra has been a landmark success, it naturally has certain clinical problems associated with it, such as pain, burning sensation and itching. These problems can be attributed to different doshas. Ayurveda also postulates different herbs for different individuals on the basis of their constitution and the doshic involvement of the clinical condition. Sushruta has listed 23 plants as sources of kshara, which have to be used together for kshara preparation. Sushruta has also laid down a principle to take the practically available plants from those listed in a category, whether all or some or even one, for preparing a formulation. Therefore, there is classical support for using one or a few of the source plants for preparing kshara, and a pharmacological possibility that ksharas prepared from different plants will behave differently. The incidence of itching with Apamarga ksharasutra is the least, and this can be related to the predominant kapha shamaka action of Apamarga. Therefore, it is logical to hypothesize that a kshara made from a Vata shamaka plant may cause a lower incidence of pain, and a kshara made from a Pitta shamaka plant may cause a lower incidence of burning sensation. The article critically reviews the classical and contemporary views on kshara and its source plants, and the available information supporting the role of these plants in the healing of fistula-in-ano, with the objective of exploring a specific kshara sutra on the basis of doshic involvement.

  4. Immersion freezing by natural dust based on a soccer ball model with the Community Atmospheric Model version 5: climate effects

    Science.gov (United States)

    Wang, Yong; Liu, Xiaohong

    2014-12-01

    We introduce a simplified version of the soccer ball model (SBM) developed by Niedermeier et al (2014 Geophys. Res. Lett. 41 736-741) into the Community Atmospheric Model version 5 (CAM5). This is the first time the SBM has been used in an atmospheric model to parameterize heterogeneous ice nucleation. The SBM, which was simplified for suitable application in atmospheric models, uses classical nucleation theory to describe immersion/condensation freezing by dust in the mixed-phase cloud regime. Uncertain parameters (mean contact angle, standard deviation of the contact-angle probability distribution, and number of surface sites) in the SBM are constrained by fitting them to recent natural (Saharan) dust datasets. With the SBM in CAM5, we investigate the sensitivity of modeled cloud properties to the SBM parameters, and find significant seasonal and regional differences in the sensitivity among the three SBM parameters. Changes of mean contact angle and the number of surface sites lead to changes of cloud properties in the Arctic in spring, which can be attributed to the transport of dust ice nuclei to this region. In winter, significant changes of cloud properties induced by these two parameters occur mainly in northern hemispheric mid-latitudes (e.g., East Asia). In comparison, no obvious changes of cloud properties caused by changes of the standard deviation are found in any season. These results are valuable for understanding heterogeneous ice nucleation behavior, and useful for guiding future model developments.
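
    The core idea of the soccer ball model is to average a per-site freezing probability over a contact-angle distribution, with each particle carrying several independent nucleation sites. A minimal sketch of that averaging step; the rate function below is a stand-in, not the CNT rate expression or fitted parameters from the paper:

```python
import math

def frozen_fraction(mu_theta, sigma_theta, n_sites, rate_at, dt, n_quad=200):
    """Fraction of droplets frozen after time dt, averaging the per-site survival
    probability over a Gaussian contact-angle distribution (soccer ball model idea:
    each particle carries n_sites independent surface sites)."""
    lo, hi = mu_theta - 4 * sigma_theta, mu_theta + 4 * sigma_theta
    step = (hi - lo) / n_quad
    surv, norm = 0.0, 0.0
    for i in range(n_quad):
        theta = lo + (i + 0.5) * step                 # midpoint quadrature
        w = math.exp(-0.5 * ((theta - mu_theta) / sigma_theta) ** 2)
        surv += w * math.exp(-rate_at(theta) * dt)    # per-site survival probability
        norm += w
    p_site = surv / norm
    return 1.0 - p_site ** n_sites                    # droplet freezes if any site nucleates

# stand-in nucleation rate: smaller contact angle => much faster freezing
# (hypothetical functional form, chosen only to make the averaging visible)
rate = lambda theta: 1e-3 * math.exp(6.0 * (math.pi - theta))
f = frozen_fraction(mu_theta=2.0, sigma_theta=0.2, n_sites=10, rate_at=rate, dt=1.0)
```

    With this structure, shifting the mean contact angle or the number of sites changes the frozen fraction strongly, which is the sensitivity the study explores at the climate-model scale.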

  5. UNSAT-H Version 2. 0: Unsaturated soil water and heat flow model

    Energy Technology Data Exchange (ETDEWEB)

    Fayer, M.J.; Jones, T.L.

    1990-04-01

    This report documents UNSAT-H Version 2.0, a model for calculating water and heat flow in unsaturated media. The documentation includes the bases for the conceptual model and its numerical implementation, benchmark test cases, example simulations involving layered soils and plant transpiration, and the code listing. Waste management practices at the Hanford Site have included disposal of low-level wastes by near-surface burial. Predicting the future long-term performance of any such burial site in terms of migration of contaminants requires a model capable of simulating water flow in the unsaturated soils above the buried waste. The model currently used to meet this need is UNSAT-H. This model was developed at Pacific Northwest Laboratory to assess water dynamics of near-surface, waste-disposal sites at the Hanford Site. The code is primarily used to predict deep drainage as a function of such environmental conditions as climate, soil type, and vegetation. UNSAT-H is also used to simulate the effects of various practices to enhance isolation of wastes. 66 refs., 29 figs., 7 tabs.

  6. A new version of the NeQuick ionosphere electron density model

    Science.gov (United States)

    Nava, B.; Coïsson, P.; Radicella, S. M.

    2008-12-01

    NeQuick is a three-dimensional and time-dependent ionospheric electron density model developed at the Aeronomy and Radiopropagation Laboratory of the Abdus Salam International Centre for Theoretical Physics (ICTP), Trieste, Italy, and at the Institute for Geophysics, Astrophysics and Meteorology of the University of Graz, Austria. It is a quick-run model particularly tailored for trans-ionospheric applications that allows one to calculate the electron concentration at any given location in the ionosphere and thus the total electron content (TEC) along any ground-to-satellite ray path by means of numerical integration. Taking advantage of the increasing amount of available data, the model formulation is continuously updated to improve NeQuick's ability to provide representations of the ionosphere at global scales. Recently, major changes have been introduced in the model's topside formulation, and important modifications have also been made to the bottomside description. In addition, specific revisions have been applied to the computer package associated with NeQuick in order to improve its computational efficiency. It has therefore been considered appropriate to consolidate all these model developments in a new version, NeQuick 2. In the present work the main features of NeQuick 2 are illustrated and some results of validation tests are reported.
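
    TEC along a ray path is obtained by numerically integrating electron density. The sketch below uses an illustrative Chapman-layer profile in place of NeQuick's actual formulation, and a vertical ray for simplicity:

```python
import math

def chapman(h, nmax=1e12, hmax=300.0, scale=60.0):
    """Illustrative Chapman-layer electron density (el/m^3); h in km.
    NOT NeQuick's profile -- just a stand-in integrand with a realistic shape."""
    z = (h - hmax) / scale
    return nmax * math.exp(0.5 * (1.0 - z - math.exp(-z)))

def vertical_tec(ne, h0=80.0, h1=2000.0, n_steps=4000):
    """Total electron content along a vertical ray, trapezoidal rule.
    Returns TEC in TEC units (1 TECU = 1e16 el/m^2)."""
    dh = (h1 - h0) / n_steps
    total = 0.5 * (ne(h0) + ne(h1))
    for i in range(1, n_steps):
        total += ne(h0 + i * dh)
    return total * dh * 1e3 / 1e16   # km -> m, then el/m^2 -> TECU

tec = vertical_tec(chapman)
```

    For a slant ground-to-satellite path the same integration runs along the ray geometry rather than altitude, which is how the model supports trans-ionospheric applications.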

  7. A description of the FAMOUS (version XDBUA) climate model and control run

    Directory of Open Access Journals (Sweden)

    A. Osprey

    2008-12-01

    Full Text Available FAMOUS is an ocean-atmosphere general circulation model of low resolution, capable of simulating approximately 120 years of model climate per wallclock day using current high performance computing facilities. It uses most of the same code as HadCM3, a widely used climate model of higher resolution and computational cost, and has been tuned to reproduce the same climate reasonably well. FAMOUS is useful for climate simulations where the computational cost makes the application of HadCM3 unfeasible, either because of the length of simulation or the size of the ensemble desired. We document a number of scientific and technical improvements to the original version of FAMOUS. These improvements include changes to the parameterisations of ozone and sea-ice which alleviate a significant cold bias from high northern latitudes and the upper troposphere, and the elimination of volume-averaged drifts in ocean tracers. A simple model of the marine carbon cycle has also been included. A particular goal of FAMOUS is to conduct millennial-scale paleoclimate simulations of Quaternary ice ages; to this end, a number of useful changes to the model infrastructure have been made.

  8. Solid Waste Projection Model: Database User's Guide. Version 1.4

    Energy Technology Data Exchange (ETDEWEB)

    Blackburn, C.L.

    1993-10-01

    The Solid Waste Projection Model (SWPM) system is an analytical tool developed by Pacific Northwest Laboratory (PNL) for Westinghouse Hanford Company (WHC) specifically to address Hanford solid waste management issues. This document is one of a set of documents supporting the SWPM system and providing instructions in the use and maintenance of SWPM components. This manual contains instructions for using Version 1.4 of the SWPM database: system requirements and preparation, entering and maintaining data, and performing routine database functions. This document supports only those operations which are specific to SWPM database menus and functions and does not provide instruction in the use of Paradox, the database management system in which the SWPM database is established.

  9. Hydrogeochemical evaluation of the Forsmark site, model version 1.1

    Energy Technology Data Exchange (ETDEWEB)

    Laaksoharju, Marcus (ed.) [GeoPoint AB, Sollentuna (Sweden); Gimeno, Maria; Auque, Luis; Gomez, Javier [Univ. of Zaragoza (Spain). Dept. of Earth Sciences; Smellie, John [Conterra AB, Uppsala (Sweden); Tullborg, Eva-Lena [Terralogica AB, Graabo (Sweden); Gurban, Ioana [3D-Terra, Montreal (Canada)

    2004-01-01

    Siting studies for SKB's programme of deep geological disposal of nuclear fuel waste currently involve the investigation of two locations, Forsmark and Simpevarp, on the eastern coast of Sweden, to determine their geological, geochemical and hydrogeological characteristics. The work completed to date has resulted in model version 1.1, which represents the first evaluation of the available Forsmark groundwater analytical data collected up to May 1, 2003 (i.e. the first 'data freeze'). The HAG group had access to a total of 456 water samples collected mostly from the surface and sub-surface environment (e.g. soil pipes in the overburden, streams and lakes); only a few samples were collected from drilled boreholes. The deepest samples reflected depths down to 200 m. Furthermore, most of the waters sampled (74%) lacked crucial analytical information, which restricted the evaluation. Consequently, model version 1.1 focussed on the processes taking place in the uppermost part of the bedrock rather than at repository levels. The complex groundwater evolution and patterns at Forsmark are a result of many factors, such as: a) the flat topography and closeness to the Baltic Sea, resulting in relatively small hydrogeological driving forces which can preserve old water types from being flushed out; b) changes in hydrogeology related to glaciation/deglaciation and land uplift; c) repeated marine/lake water regressions/transgressions; and d) organic or inorganic alteration of the groundwater caused by microbial processes or water/rock interactions. The sampled groundwaters reflect to varying degrees modern or ancient water/rock interactions and mixing processes. Based on their general geochemical character and apparent age, two major water types occur in Forsmark: fresh-meteoric waters with a bicarbonate imprint and low residence times (tritium values above detection limit), and brackish-marine waters with Cl contents up to 6,000 mg/L and longer residence times (tritium

  10. Thermal modelling. Preliminary site description Simpevarp subarea - version 1.2

    Energy Technology Data Exchange (ETDEWEB)

    Sundberg, Jan; Back, Paer-Erik; Bengtsson, Anna; Laendell, Maerta [Geo Innova AB, Linkoeping (Sweden)

    2005-08-15

    This report presents the thermal site descriptive model for the Simpevarp subarea, version 1.2. The main objective of this report is to present the thermal modelling work, in which data has been identified, quality controlled, evaluated and summarised in order to make upscaling to the lithological domain level possible. The thermal conductivity at possible canister scale has been modelled for four different lithological domains (RSMA01 (Aevroe granite), RSMB01 (Fine-grained dioritoid), RSMC01 (mixture of Aevroe granite and Quartz monzodiorite), and RSMD01 (Quartz monzodiorite)). A main modelling approach has been used to determine the mean value of the thermal conductivity. Three alternative/complementary approaches have been used to evaluate the spatial variability of the thermal conductivity at domain level. The thermal modelling approaches are based on the lithological model for the Simpevarp subarea, version 1.2, together with rock type models constructed from measured and calculated (from mineral composition) thermal conductivities. For one rock type, the Aevroe granite (501044), density loggings within the specific rock type have also been used in the domain modelling in order to consider the spatial variability within the Aevroe granite. This has been possible due to the presented relationship between density and thermal conductivity, valid for the Aevroe granite. Results indicate that the mean of thermal conductivity is expected to exhibit only a small variation between the different domains, from 2.62 W/(m.K) to 2.80 W/(m.K). The standard deviation varies according to the scale considered and for the canister scale it is expected to range from 0.20 to 0.28 W/(m.K). Consequently, the lower confidence limit (95% confidence) for the canister scale is within the range 2.04-2.35 W/(m.K) for the different domains.
The temperature dependence is rather small with a decrease in thermal conductivity of 1.1-3.4% per 100 deg C increase in temperature for the dominating rock
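
    Lower confidence limits of the kind quoted in this record follow directly from a domain mean and standard deviation; under a one-sided normal assumption the 95% limit is mean − 1.645·σ. A sketch with illustrative values, not tied to any specific domain in the report:

```python
# One-sided lower 95% limit under a normal assumption (z-score 1.645).
Z_95 = 1.645

def lower_limit_95(mean_k, std_k):
    """Lower one-sided 95% bound on thermal conductivity, W/(m.K)."""
    return mean_k - Z_95 * std_k

# illustrative: a domain mean of 2.62 W/(m.K) with canister-scale std 0.28 W/(m.K)
k_lo = lower_limit_95(2.62, 0.28)   # about 2.16 W/(m.K)
```

    Because the standard deviation shrinks as the averaging scale grows, the same mean yields a higher lower bound at larger scales, which is why the report ties its variability estimates to a stated scale.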

  11. Thermal modelling. Preliminary site description Laxemar subarea - version 1.2

    Energy Technology Data Exchange (ETDEWEB)

    Sundberg, Jan; Wrafter, John; Back, Paer-Erik; Laendell, Maerta [Geo Innova AB, Linkoeping (Sweden)

    2006-02-15

    This report presents the thermal site descriptive model for the Laxemar subarea, version 1.2. The main objective of this report is to present the thermal modelling work where data has been identified, quality controlled, evaluated and summarised in order to make an upscaling to lithological domain level possible. The thermal conductivity at canister scale has been modelled for five different lithological domains: RSMA (Aevroe granite), RSMBA (mixture of Aevroe granite and fine-grained dioritoid), RSMD (quartz monzodiorite), RSME (diorite/gabbro) and RSMM (mix domain with high frequency of diorite to gabbro). A base modelling approach has been used to determine the mean value of the thermal conductivity. Four alternative/complementary approaches have been used to evaluate the spatial variability of the thermal conductivity at domain level. The thermal modelling approaches are based on the lithological domain model for the Laxemar subarea, version 1.2 together with rock type models based on measured and calculated (from mineral composition) thermal conductivities. For one rock type, Aevroe granite (501044), density loggings have also been used in the domain modelling in order to evaluate the spatial variability within the Aevroe granite. This has been possible due to an established relationship between density and thermal conductivity, valid for the Aevroe granite. Results indicate that the means of thermal conductivity for the various domains are expected to exhibit a variation from 2.45 W/(m.K) to 2.87 W/(m.K). The standard deviation varies according to the scale considered, and for the 0.8 m scale it is expected to range from 0.17 to 0.29 W/(m.K). Estimates of lower tail percentiles for the same scale are presented for all five domains. The temperature dependence is rather small with a decrease in thermal conductivity of 1.1-5.3% per 100 deg C increase in temperature for the dominant rock types. 
There are a number of important uncertainties associated with these

  12. Thermal modelling. Preliminary site description Laxemar subarea - version 1.2

    Energy Technology Data Exchange (ETDEWEB)

    Sundberg, Jan; Wrafter, John; Back, Paer-Erik; Laendell, Maerta [Geo Innova AB, Linkoeping (Sweden)

    2006-02-15

    This report presents the thermal site descriptive model for the Laxemar subarea, version 1.2. The main objective of this report is to present the thermal modelling work where data has been identified, quality controlled, evaluated and summarised in order to make an upscaling to lithological domain level possible. The thermal conductivity at canister scale has been modelled for five different lithological domains: RSMA (Aevroe granite), RSMBA (mixture of Aevroe granite and fine-grained dioritoid), RSMD (quartz monzodiorite), RSME (diorite/gabbro) and RSMM (mix domain with high frequency of diorite to gabbro). A base modelling approach has been used to determine the mean value of the thermal conductivity. Four alternative/complementary approaches have been used to evaluate the spatial variability of the thermal conductivity at domain level. The thermal modelling approaches are based on the lithological domain model for the Laxemar subarea, version 1.2 together with rock type models based on measured and calculated (from mineral composition) thermal conductivities. For one rock type, Aevroe granite (501044), density loggings have also been used in the domain modelling in order to evaluate the spatial variability within the Aevroe granite. This has been possible due to an established relationship between density and thermal conductivity, valid for the Aevroe granite. Results indicate that the means of thermal conductivity for the various domains are expected to exhibit a variation from 2.45 W/(m.K) to 2.87 W/(m.K). The standard deviation varies according to the scale considered, and for the 0.8 m scale it is expected to range from 0.17 to 0.29 W/(m.K). Estimates of lower tail percentiles for the same scale are presented for all five domains. The temperature dependence is rather small with a decrease in thermal conductivity of 1.1-5.3% per 100 deg C increase in temperature for the dominant rock types. 
There are a number of important uncertainties associated with these

  13. Thermal modelling. Preliminary site description Simpevarp subarea - version 1.2

    Energy Technology Data Exchange (ETDEWEB)

    Sundberg, Jan; Back, Paer-Erik; Bengtsson, Anna; Laendell, Maerta [Geo Innova AB, Linkoeping (Sweden)

    2005-08-15

    This report presents the thermal site descriptive model for the Simpevarp subarea, version 1.2. The main objective of this report is to present the thermal modelling work, in which data has been identified, quality controlled, evaluated and summarised in order to make upscaling to the lithological domain level possible. The thermal conductivity at possible canister scale has been modelled for four different lithological domains (RSMA01 (Aevroe granite), RSMB01 (Fine-grained dioritoid), RSMC01 (mixture of Aevroe granite and Quartz monzodiorite), and RSMD01 (Quartz monzodiorite)). A main modelling approach has been used to determine the mean value of the thermal conductivity. Three alternative/complementary approaches have been used to evaluate the spatial variability of the thermal conductivity at domain level. The thermal modelling approaches are based on the lithological model for the Simpevarp subarea, version 1.2, together with rock type models constructed from measured and calculated (from mineral composition) thermal conductivities. For one rock type, the Aevroe granite (501044), density loggings within the specific rock type have also been used in the domain modelling in order to consider the spatial variability within the Aevroe granite. This has been possible due to the presented relationship between density and thermal conductivity, valid for the Aevroe granite. Results indicate that the mean of thermal conductivity is expected to exhibit only a small variation between the different domains, from 2.62 W/(m.K) to 2.80 W/(m.K). The standard deviation varies according to the scale considered and for the canister scale it is expected to range from 0.20 to 0.28 W/(m.K). Consequently, the lower confidence limit (95% confidence) for the canister scale is within the range 2.04-2.35 W/(m.K) for the different domains.
The temperature dependence is rather small with a decrease in thermal conductivity of 1.1-3.4% per 100 deg C increase in temperature for the dominating rock

  14. Technical Note: Chemistry-climate model SOCOL: version 2.0 with improved transport and chemistry/microphysics schemes

    Directory of Open Access Journals (Sweden)

    M. Schraner

    2008-10-01

    Full Text Available We describe version 2.0 of the chemistry-climate model (CCM) SOCOL. The new version includes fundamental changes to the transport scheme, such as transporting all chemical species of the model individually and applying a family-based mass-conservation correction scheme for species of the nitrogen, chlorine and bromine groups; a revised transport scheme for ozone; more detailed halogen reaction and deposition schemes; and a new cirrus parameterisation in the tropical tropopause region. By means of these changes the model manages to overcome or considerably reduce deficiencies recently identified in SOCOL version 1.1 within the CCM Validation activity of SPARC (CCMVal). In particular, as a consequence of these changes, regional mass loss or accumulation artificially caused by the semi-Lagrangian transport scheme can be significantly reduced, leading to much more realistic distributions of the modelled chemical species, most notably of the halogens and ozone.

  15. Systems Biology Markup Language (SBML) Level 2 Version 5: Structures and Facilities for Model Definitions.

    Science.gov (United States)

    Hucka, Michael; Bergmann, Frank T; Dräger, Andreas; Hoops, Stefan; Keating, Sarah M; Le Novère, Nicolas; Myers, Chris J; Olivier, Brett G; Sahle, Sven; Schaff, James C; Smith, Lucian P; Waltemath, Dagmar; Wilkinson, Darren J

    2015-09-04

    Computational models can help researchers to interpret data, understand biological function, and make quantitative predictions. The Systems Biology Markup Language (SBML) is a file format for representing computational models in a declarative form that can be exchanged between different software systems. SBML is oriented towards describing biological processes of the sort common in research on a number of topics, including metabolic pathways, cell signaling pathways, and many others. By supporting SBML as an input/output format, different tools can all operate on an identical representation of a model, removing opportunities for translation errors and assuring a common starting point for analyses and simulations. This document provides the specification for Version 5 of SBML Level 2. The specification defines the data structures prescribed by SBML as well as their encoding in XML, the eXtensible Markup Language. This specification also defines validation rules that determine the validity of an SBML document, and provides many examples of models in SBML form. Other materials and software are available from the SBML project web site, http://sbml.org.
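
    An SBML Level 2 document is plain XML. The skeleton below, built with Python's standard library, shows the general shape of the encoding (model, compartments, species). It is a structural sketch only; a conforming document must satisfy the specification's full set of validation rules and would normally carry units, reactions and kinetic laws:

```python
import xml.etree.ElementTree as ET

# Namespace follows the SBML convention of encoding level and version in the URI.
SBML_NS = "http://www.sbml.org/sbml/level2/version5"

sbml = ET.Element("sbml", {"xmlns": SBML_NS, "level": "2", "version": "5"})
model = ET.SubElement(sbml, "model", {"id": "toy_pathway"})

compartments = ET.SubElement(model, "listOfCompartments")
ET.SubElement(compartments, "compartment", {"id": "cell", "size": "1.0"})

species_list = ET.SubElement(model, "listOfSpecies")
for sid in ("S1", "S2"):
    ET.SubElement(species_list, "species",
                  {"id": sid, "compartment": "cell", "initialConcentration": "1.0"})

doc = ET.tostring(sbml, encoding="unicode")
```

    In practice, tools read and write such documents through SBML libraries rather than raw XML, which is what lets different simulators operate on an identical representation of a model.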

  16. Systems Biology Markup Language (SBML) Level 2 Version 5: Structures and Facilities for Model Definitions

    Science.gov (United States)

    Hucka, Michael; Bergmann, Frank T.; Dräger, Andreas; Hoops, Stefan; Keating, Sarah M.; Le Novère, Nicolas; Myers, Chris J.; Olivier, Brett G.; Sahle, Sven; Schaff, James C.; Smith, Lucian P.; Waltemath, Dagmar; Wilkinson, Darren J.

    2017-01-01

    Computational models can help researchers to interpret data, understand biological function, and make quantitative predictions. The Systems Biology Markup Language (SBML) is a file format for representing computational models in a declarative form that can be exchanged between different software systems. SBML is oriented towards describing biological processes of the sort common in research on a number of topics, including metabolic pathways, cell signaling pathways, and many others. By supporting SBML as an input/output format, different tools can all operate on an identical representation of a model, removing opportunities for translation errors and assuring a common starting point for analyses and simulations. This document provides the specification for Version 5 of SBML Level 2. The specification defines the data structures prescribed by SBML as well as their encoding in XML, the eXtensible Markup Language. This specification also defines validation rules that determine the validity of an SBML document, and provides many examples of models in SBML form. Other materials and software are available from the SBML project web site, http://sbml.org/. PMID:26528569

  17. User manual for GEOCOST: a computer model for geothermal cost analysis. Volume 2. Binary cycle version

    Energy Technology Data Exchange (ETDEWEB)

    Huber, H.D.; Walter, R.A.; Bloomster, C.H.

    1976-03-01

    A computer model called GEOCOST has been developed to simulate the production of electricity from geothermal resources and calculate the potential costs of geothermal power. GEOCOST combines resource characteristics, power recovery technology, tax rates, and financial factors into one systematic model and provides the flexibility to individually or collectively evaluate their impacts on the cost of geothermal power. Both the geothermal reservoir and power plant are simulated to model the complete energy production system. In the version of GEOCOST in this report, geothermal fluid is supplied from wells distributed throughout a hydrothermal reservoir through insulated pipelines to a binary power plant. The power plant is simulated using a binary fluid cycle in which the geothermal fluid is passed through a series of heat exchangers. The thermodynamic state points in basic subcritical and supercritical Rankine cycles are calculated for a variety of working fluids. Working fluids which are now in the model include isobutane, n-butane, R-11, R-12, R-22, R-113, R-114, and ammonia. Thermodynamic properties of the working fluids at the state points are calculated using empirical equations of state. The Starling equation of state is used for hydrocarbons and the Martin-Hou equation of state is used for fluorocarbons and ammonia. Physical properties of working fluids at the state points are calculated.
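
    GEOCOST's end product is a cost of geothermal power. As a much-reduced sketch of that kind of calculation, here is a toy levelized-cost function; the fixed charge rate and plant figures are hypothetical stand-ins for GEOCOST's detailed reservoir, plant, tax and financing simulation:

```python
def levelized_cost(capital, om_per_year, annual_mwh, fixed_charge_rate=0.12):
    """Levelized cost of electricity, $/MWh: annualized capital plus O&M,
    divided by yearly generation.  All inputs here are illustrative."""
    annual_cost = capital * fixed_charge_rate + om_per_year
    return annual_cost / annual_mwh

# hypothetical 50 MW binary plant at 90% capacity factor
mwh = 50 * 8760 * 0.9
lcoe = levelized_cost(capital=150e6, om_per_year=6e6, annual_mwh=mwh)
```

    The value of a systems model like GEOCOST is that resource characteristics, recovery technology and financial factors can be varied individually or together to see their impact on this bottom-line number.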

  18. Modelling waste stabilisation ponds with an extended version of ASM3.

    Science.gov (United States)

    Gehring, T; Silva, J D; Kehl, O; Castilhos, A B; Costa, R H R; Uhlenhut, F; Alex, J; Horn, H; Wichern, M

    2010-01-01

    In this paper an extended version of IWA's Activated Sludge Model No 3 (ASM3) was developed to simulate processes in waste stabilisation ponds (WSP). The model modifications included the integration of algae biomass, gas transfer processes for oxygen, carbon dioxide and ammonia depending on wind velocity, and a simple ionic equilibrium. The model was applied to a pilot-scale WSP system operated in the city of Florianópolis (Brazil). The system was used to treat leachate from a municipal waste landfill. Mean influent concentrations to the facultative pond of 1,456 g COD/m3 and 505 g NH4-N/m3 were measured. Experimental results indicated an ammonia nitrogen removal of 89.5% with negligible rates of nitrification but intensive ammonia stripping to the atmosphere. Measured data were used in the simulations to consider the impact of wind velocity on oxygen input of 11.1 to 14.4 g O2/(m2 d) and of sun radiation on photosynthesis. Good results for pH and ammonia removal were achieved, with mean stripping rates of 18.2 and 4.5 g N/(m2 d) for the facultative and maturation ponds respectively. Based on measured chlorophyll a concentrations, and depending on light intensity and TSS concentration, it was possible to model algae concentrations.
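
    The wind-dependent gas transfer added to ASM3 is, in essence, a surface flux proportional to the concentration departure from saturation, with a transfer velocity that grows with wind speed. A sketch with placeholder coefficients (not the paper's calibration):

```python
def surface_flux(c_bulk, c_sat, wind=3.0, k_base=0.2, wind_coef=0.24):
    """Gas flux across a pond surface, g/(m2 d), two-film style:
    flux = kL * (C_bulk - C_sat), with kL increasing with wind speed.
    Positive flux = loss to the atmosphere (e.g. ammonia stripping);
    negative flux = absorption (e.g. oxygen input to an undersaturated pond).
    Coefficients are hypothetical placeholders."""
    k_l = k_base + wind_coef * wind ** 2 / 10.0   # m/d, assumed wind scaling
    return k_l * (c_bulk - c_sat)

# ammonia stripping: air-side saturation concentration taken as ~0
nh3_flux = surface_flux(c_bulk=50.0, c_sat=0.0, wind=4.0)
```

    The same expression with oxygen concentrations and the sign reversed gives the wind-driven reaeration that supplied the 11.1 to 14.4 g O2/(m2 d) oxygen input reported above.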

  19. VALIDATION OF THE ASTER GLOBAL DIGITAL ELEVATION MODEL VERSION 2 OVER THE CONTERMINOUS UNITED STATES

    Directory of Open Access Journals (Sweden)

    D. Gesch

    2012-07-01

    The ASTER Global Digital Elevation Model Version 2 (GDEM v2) was evaluated over the conterminous United States in a manner similar to the validation conducted for the original GDEM Version 1 (v1) in 2009. The absolute vertical accuracy of GDEM v2 was calculated by comparison with more than 18,000 independent reference geodetic ground control points from the National Geodetic Survey. The root mean square error (RMSE) measured for GDEM v2 is 8.68 meters, compared with an RMSE of 9.34 meters for GDEM v1. Another important descriptor of vertical accuracy is the mean error, or bias, which indicates whether a DEM has an overall vertical offset from true ground level. The GDEM v2 mean error of –0.20 meters is a significant improvement over the GDEM v1 mean error of –3.69 meters. The absolute vertical accuracy assessment results, both mean error and RMSE, were segmented by land cover to examine the effects of cover types on measured errors. The GDEM v2 mean errors by land cover class verify that the presence of aboveground features (tree canopies and built structures) causes a positive elevation bias, as would be expected for an imaging system like ASTER. In open ground classes (little or no vegetation with significant aboveground height), GDEM v2 exhibits a negative bias on the order of 1 meter. GDEM v2 was also evaluated by differencing with the Shuttle Radar Topography Mission (SRTM) dataset. In many forested areas, GDEM v2 has elevations that are higher in the canopy than SRTM.
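
    The two accuracy statistics used in this validation, mean error (bias) and RMSE, follow directly from DEM-minus-reference elevation differences. A minimal sketch, using invented elevations rather than the actual control-point data:

```python
import math

def vertical_accuracy(dem_elev, ref_elev):
    """Mean error (bias) and RMSE of DEM elevations against independent
    geodetic control points. Positive bias: DEM sits above true ground."""
    errors = [d - r for d, r in zip(dem_elev, ref_elev)]
    mean_error = sum(errors) / len(errors)
    rmse = math.sqrt(sum(e * e for e in errors) / len(errors))
    return mean_error, rmse

# Illustrative values only (metres):
dem = [102.1, 98.7, 250.4, 75.9]
ref = [101.5, 99.2, 249.8, 76.4]
bias, rmse = vertical_accuracy(dem, ref)
```

    Segmenting the control points by land cover class before applying the same statistics yields the per-class biases discussed above.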

  20. Accelerator System Model (ASM) user manual with physics and engineering model documentation. ASM version 1.0

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    1993-07-01

    The Accelerator System Model (ASM) is a computer program developed to model proton radiofrequency accelerators and to carry out system-level trade studies. The ASM FORTRAN subroutines are incorporated into an intuitive graphical user interface which provides for the "construction" of the accelerator in a window on the computer screen. The interface is based on the Shell for Particle Accelerator Related Codes (SPARC) software technology, written for the Macintosh operating system in the C programming language. This User Manual describes the operation and use of the ASM application within the SPARC interface. The Appendix provides a detailed description of the physics and engineering models used in ASM. ASM Version 1.0 is a joint project of G. H. Gillespie Associates, Inc. and the Accelerator Technology (AT) Division of the Los Alamos National Laboratory. Neither the ASM Version 1.0 software nor this ASM Documentation may be reproduced without the expressed written consent of both the Los Alamos National Laboratory and G. H. Gillespie Associates, Inc.

  1. A generic method for automatic translation between input models for different versions of simulation codes

    Energy Technology Data Exchange (ETDEWEB)

    Serfontein, Dawid E., E-mail: Dawid.Serfontein@nwu.ac.za [School of Mechanical and Nuclear Engineering, North West University (PUK-Campus), PRIVATE BAG X6001 (Internal Post Box 360), Potchefstroom 2520 (South Africa); Mulder, Eben J. [School of Mechanical and Nuclear Engineering, North West University (South Africa); Reitsma, Frederik [Calvera Consultants (South Africa)

    2014-05-01

    A computer code was developed for the semi-automatic translation of input models for the VSOP-A diffusion neutronics simulation code to the format of the newer VSOP 99/05 code. In this paper, this algorithm is presented as a generic method for producing codes for the automatic translation of input models from the format of one code version to another, or even to that of a completely different code. Normally, such translations are done manually. However, input model files, such as those for the VSOP codes, are often very large and may consist of many thousands of numeric entries that make no particular sense to the human eye. The task of, for instance, nuclear regulators to verify the accuracy of such translated files can therefore be very difficult and cumbersome. Translation errors may consequently go undetected, which may have disastrous consequences later on when a reactor with such a faulty design is built. A generic algorithm for producing such automatic translation codes may therefore ease the translation and verification process to a great extent. It also removes human error from the process, which may significantly enhance the accuracy and reliability of the translation. The developed algorithm also automatically creates a verification log file which permanently records the names and values of each variable used, as well as a list of the meanings of all the possible values. This should greatly facilitate reactor licensing applications.
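
    The translate-and-log idea can be sketched as a table-driven rewrite; the variable names, file format, and meanings below are invented for illustration and are not the actual VSOP input syntax.

```python
# Hypothetical illustration of translating 'NAME VALUE' records between
# two input formats while building a human-readable verification log.
OLD_TO_NEW = {                 # old-format variable name -> new-format name
    "NREG": "N_REGIONS",
    "TEMPIN": "INLET_TEMP_K",
}
MEANINGS = {                   # meanings recorded in the verification log
    "N_REGIONS": "number of spectrum regions",
    "INLET_TEMP_K": "coolant inlet temperature [K]",
}

def translate(old_lines):
    """Return the translated input lines plus a verification log that
    permanently records each variable's name, value, and meaning."""
    new_lines, log = [], []
    for line in old_lines:
        name, value = line.split()
        new_name = OLD_TO_NEW[name]
        new_lines.append(f"{new_name} {value}")
        log.append(f"{name} -> {new_name} = {value} ({MEANINGS[new_name]})")
    return new_lines, log

new_model, log = translate(["NREG 12", "TEMPIN 523.0"])
```

    A regulator checking the translated file would then verify the log line by line instead of comparing thousands of raw numeric entries.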

  2. Description and evaluation of the Community Multiscale Air Quality (CMAQ) modeling system version 5.1

    Science.gov (United States)

    Wyat Appel, K.; Napelenok, Sergey L.; Foley, Kristen M.; Pye, Havala O. T.; Hogrefe, Christian; Luecken, Deborah J.; Bash, Jesse O.; Roselle, Shawn J.; Pleim, Jonathan E.; Foroutan, Hosein; Hutzell, William T.; Pouliot, George A.; Sarwar, Golam; Fahey, Kathleen M.; Gantt, Brett; Gilliam, Robert C.; Heath, Nicholas K.; Kang, Daiwen; Mathur, Rohit; Schwede, Donna B.; Spero, Tanya L.; Wong, David C.; Young, Jeffrey O.

    2017-04-01

    The Community Multiscale Air Quality (CMAQ) model is a comprehensive multipollutant air quality modeling system developed and maintained by the US Environmental Protection Agency's (EPA) Office of Research and Development (ORD). Recently, version 5.1 of the CMAQ model (v5.1) was released to the public, incorporating a large number of science updates and extended capabilities over the previous release version of the model (v5.0.2). These updates include the following: improvements in the meteorological calculations in both CMAQ and the Weather Research and Forecast (WRF) model used to provide meteorological fields to CMAQ, updates to the gas and aerosol chemistry, revisions to the calculations of clouds and photolysis, and improvements to the dry and wet deposition in the model. Sensitivity simulations isolating several of the major updates to the modeling system show that changes to the meteorological calculations result in enhanced afternoon and early evening mixing in the model, periods when the model historically underestimates mixing. This enhanced mixing results in higher ozone (O3) mixing ratios on average due to reduced NO titration, and lower fine particulate matter (PM2.5) concentrations due to greater dilution of primary pollutants (e.g., elemental and organic carbon). Updates to the clouds and photolysis calculations greatly improve consistency between the WRF and CMAQ models and result in generally higher O3 mixing ratios, primarily due to reduced cloudiness and attenuation of photolysis in the model. Updates to the aerosol chemistry result in higher secondary organic aerosol (SOA) concentrations in the summer, thereby reducing summertime PM2.5 bias (PM2.5 is typically underestimated by CMAQ in the summer), while updates to the gas chemistry result in slightly higher O3 and PM2.5 on average in January and July. Overall, the seasonal variation in simulated PM2.5 generally improves in CMAQv5.1 (when considering all model updates), as simulated PM2.5

  3. Evaluating and improving cloud phase in the Community Atmosphere Model version 5 using spaceborne lidar observations

    Science.gov (United States)

    Kay, Jennifer E.; Bourdages, Line; Miller, Nathaniel B.; Morrison, Ariel; Yettella, Vineel; Chepfer, Helene; Eaton, Brian

    2016-04-01

    Spaceborne lidar observations from the Cloud-Aerosol Lidar and Infrared Pathfinder Satellite Observation (CALIPSO) satellite are used to evaluate cloud amount and cloud phase in the Community Atmosphere Model version 5 (CAM5), the atmospheric component of a widely used state-of-the-art global coupled climate model (the Community Earth System Model). By embedding a lidar simulator within CAM5, the idiosyncrasies of spaceborne lidar cloud detection and phase assignment are replicated. As a result, this study makes scale-aware and definition-aware comparisons between model-simulated and observed cloud amount and cloud phase. In the global mean, CAM5 has insufficient liquid cloud and excessive ice cloud when compared to CALIPSO observations. Over the ice-covered Arctic Ocean, CAM5 has insufficient liquid cloud in all seasons. A liquid cloud deficit contributes to a cold bias of 2-3°C for summer daily maximum near-surface air temperatures at Summit, Greenland, which has important implications for projections of future sea level rise. Over the midlatitude storm tracks, CAM5 has excessive ice cloud and insufficient liquid cloud. Storm track cloud phase biases in CAM5 maximize over the Southern Ocean, which also has larger-than-observed seasonal variations in cloud phase. Physical parameter modifications reduce the Southern Ocean cloud phase and shortwave radiation biases in CAM5 and illustrate the power of the CALIPSO observations as an observational constraint. The results also highlight the importance of using a regime-based, as opposed to a geographic-based, model evaluation approach. More generally, the results demonstrate the importance and value of simulator-enabled comparisons of cloud phase in models used for future climate projection.

  4. Overview of the Uniform California Earthquake Rupture Forecast Version 3 (UCERF3) Time-Independent Model

    Science.gov (United States)

    Field, E. H.; Arrowsmith, R.; Biasi, G. P.; Bird, P.; Dawson, T. E.; Felzer, K. R.; Jackson, D. D.; Johnson, K. M.; Jordan, T. H.; Madugo, C. M.; Michael, A. J.; Milner, K. R.; Page, M. T.; Parsons, T.; Powers, P.; Shaw, B. E.; Thatcher, W. R.; Weldon, R. J.; Zeng, Y.

    2013-12-01

    We present the time-independent component of the Uniform California Earthquake Rupture Forecast, Version 3 (UCERF3), where the primary achievements have been to relax fault segmentation and include multi-fault ruptures, both limitations of UCERF2. The rates of all earthquakes are solved for simultaneously, and from a broader range of data, using a system-level 'grand inversion' that is both conceptually simple and extensible. The inverse problem is large and underdetermined, so a range of models is sampled using an efficient simulated annealing algorithm. The approach is more derivative than prescriptive (e.g., magnitude-frequency distributions are no longer assumed), so new analysis tools were developed for exploring solutions. Epistemic uncertainties were also accounted for using 1440 alternative logic tree branches, necessitating access to supercomputers. The most influential uncertainties include alternative deformation models (fault slip rates), a new smoothed seismicity algorithm, alternative values for the total rate of M≥5 events, and different scaling relationships, virtually all of which are new. As a notable first, three deformation models are based on kinematically consistent inversions of geodetic and geologic data, also providing slip-rate constraints on faults previously excluded due to lack of geologic data. The grand inversion constitutes a system-level framework for testing hypotheses and balancing the influence of different experts. For example, we demonstrate serious challenges with the Gutenberg-Richter hypothesis for individual faults. UCERF3 is still an approximation of the system, however, and the range of models is limited (e.g., constrained to stay close to UCERF2). Nevertheless, UCERF3 removes the apparent UCERF2 over-prediction of M6.5-7 earthquake rates, and also includes types of multi-fault ruptures seen in nature. While UCERF3 fits the data better than UCERF2 overall, there may be areas that warrant further site

  5. Energy Integration for 2050 - A Strategic Impact Model (2050 SIM), Version 2.0

    Energy Technology Data Exchange (ETDEWEB)

    John Collins

    2011-09-01

    The United States (U.S.) energy infrastructure is among the most reliable, accessible, and economic in the world. On the other hand, it is also excessively reliant on foreign energy sources, experiences high volatility in energy prices, does not always practice good stewardship of finite indigenous energy resources, and emits significant quantities of greenhouse gas. The U.S. Department of Energy is conducting research and development on advanced nuclear reactor concepts and technologies, including High Temperature Gas Reactor (HTGR) technologies, directed at helping the United States meet its current and future energy challenges. This report discusses the Draft Strategic Impact Model (SIM), an initial version of which was created during the later part of FY-2010. SIM was developed to analyze and depict the benefits of various energy sources in meeting the energy demand and to provide an overall system understanding of the tradeoffs between building and using HTGRs versus other existing technologies for providing energy (heat and electricity) to various energy-use sectors in the United States. This report also provides the assumptions used in the model, the rationale for the methodology, and the references for the source documentation and source data used in developing the SIM.

  6. Energy Integration for 2050 - A Strategic Impact Model (2050 SIM), Version 1.0

    Energy Technology Data Exchange (ETDEWEB)

    2010-10-01

    The United States (U.S.) energy infrastructure is among the most reliable, accessible, and economic in the world. On the other hand, it is also excessively reliant on foreign energy sources, experiences high volatility in energy prices, does not always practice good stewardship of finite indigenous energy resources, and emits significant quantities of greenhouse gas. The U.S. Department of Energy is conducting research and development on advanced nuclear reactor concepts and technologies, including High Temperature Gas Reactor (HTGR) technologies, directed at helping the United States meet its current and future energy challenges. This report discusses the Draft Strategic Impact Model (SIM), an initial version of which was created during the later part of FY-2010. SIM was developed to analyze and depict the benefits of various energy sources in meeting the energy demand and to provide an overall system understanding of the tradeoffs between building and using HTGRs versus other existing technologies for providing energy (heat and electricity) to various energy-use sectors in the United States. This report also provides the assumptions used in the model, the rationale for the methodology, and the references for the source documentation and source data used in developing the SIM.

  7. A multi-sectoral version of the Post-Keynesian growth model

    Directory of Open Access Journals (Sweden)

    Ricardo Azevedo Araujo

    2015-03-01

    With this inquiry, we seek to develop a disaggregated version of the post-Keynesian approach to economic growth, by showing that it can indeed be treated as a particular case of the Pasinettian model of structural change and economic expansion. By relying upon vertical integration, it becomes possible to carry out the analysis initiated by Kaldor (1956) and Robinson (1956, 1962), and followed by Dutt (1984), Rowthorn (1982) and later Bhaduri and Marglin (1990), in a multi-sectoral model in which demand and productivity increase at different paces in each sector. By adopting this approach it is possible to show that the structural economic dynamics is conditioned not only by patterns of evolving demand and the diffusion of technological progress but also by the distributive features of the economy, which can give rise to different regimes of economic growth. Besides, we find it possible to determine the natural rate of profit that keeps the mark-up rate constant over time.

  8. Re-evaluation of Predictive Models in Light of New Data: Sunspot Number Version 2.0

    Science.gov (United States)

    Gkana, A.; Zachilas, L.

    2016-10-01

    The original version of the Zürich sunspot number (Sunspot Number Version 1.0) has been revised by an entirely new series (Sunspot Number Version 2.0). We re-evaluate the performance of our previously proposed models for predicting solar activity in the light of the revised data. We perform new monthly and yearly predictions using the Sunspot Number Version 2.0 as input data and compare them with our original predictions (using the Sunspot Number Version 1.0 series as input data). We show that our previously proposed models are still able to produce quite accurate solar-activity predictions despite the full revision of the Zürich Sunspot Number, indicating that there is no significant degradation in their performance. Extending our new monthly predictions (July 2013 - August 2015) by 50 time-steps (months) ahead in time (from September 2015 to October 2019), we provide evidence that we are heading into a period of dramatically low solar activity. Finally, our new future long-term predictions endorse our previous claim that a prolonged solar activity minimum is expected to occur, lasting up to the year ≈ 2100.

  9. Hydrogeochemical evaluation for Simpevarp model version 1.2. Preliminary site description of the Simpevarp area

    Energy Technology Data Exchange (ETDEWEB)

    Laaksoharju, Marcus (ed.) [Geopoint AB, Stockholm (Sweden)

    2004-12-01

    Siting studies for SKB's programme of deep geological disposal of nuclear fuel waste currently involve the investigation of two locations, Simpevarp and Forsmark, to determine their geological, hydrogeochemical and hydrogeological characteristics. The work completed to date has resulted in Model version 1.2, which represents the second evaluation of the available Simpevarp groundwater analytical data collected up to April 2004. The deepest fracture groundwater samples with sufficient analytical data reflect depths down to 1.7 km. Model version 1.2 focuses on geochemical and mixing processes affecting the groundwater composition in the uppermost part of the bedrock, down to repository levels, and eventually extending to 1000 m depth. The groundwater flow regimes at Laxemar/Simpevarp are considered local and extend down to depths of around 600-1000 m depending on local topography. The marked differences in the groundwater flow regimes between Laxemar and Simpevarp are reflected in the groundwater chemistry, where four major hydrochemical groups of groundwaters (types A-D) have been identified. TYPE A: This type comprises dilute groundwaters (< 1000 mg/L Cl; 0.5-2.0 g/L TDS) of Na-HCO3 type present at shallow (< 200 m) depths at Simpevarp, but at greater depths (0-900 m) at Laxemar. At both localities the groundwaters are marginally oxidising close to the surface, but otherwise reducing. Main reactions involve weathering, ion exchange (Ca, Mg), surface complexation, and dissolution of calcite. Redox reactions include precipitation of Fe-oxyhydroxides and some microbially mediated reactions (SRB). Meteoric recharge water is mainly present at Laxemar, whilst at Simpevarp potential mixing of recharge meteoric water and a modern sea component is observed. Localised mixing of meteoric water with deeper saline groundwaters is indicated at both Laxemar and Simpevarp. TYPE B: This type comprises brackish groundwaters (1000-6000 mg/L Cl; 5-10 g/L TDS) present at

  10. ASTER Global Digital Elevation Model Version 2 - summary of validation results

    Science.gov (United States)

    Tachikawa, Tetushi; Kaku, Manabu; Iwasaki, Akira; Gesch, Dean B.; Oimoen, Michael J.; Zhang, Z.; Danielson, Jeffrey J.; Krieger, Tabatha; Curtis, Bill; Haase, Jeff; Abrams, Michael; Carabajal, C.; Meyer, Dave

    2011-01-01

    On June 29, 2009, NASA and the Ministry of Economy, Trade and Industry (METI) of Japan released a Global Digital Elevation Model (GDEM) to users worldwide at no charge as a contribution to the Global Earth Observing System of Systems (GEOSS). This “version 1” ASTER GDEM (GDEM1) was compiled from over 1.2 million scene-based DEMs covering land surfaces between 83°N and 83°S latitudes. A joint U.S.-Japan validation team, augmented by a team of 20 cooperators, assessed the accuracy of the GDEM1. The GDEM1 was found to have an overall accuracy of around 20 meters at the 95% confidence level. The team also noted several artifacts associated with poor stereo coverage at high latitudes, cloud contamination, water masking issues and the stacking process used to produce the GDEM1 from individual scene-based DEMs (ASTER GDEM Validation Team, 2009). Two independent horizontal resolution studies estimated the effective spatial resolution of the GDEM1 to be on the order of 120 meters.

  11. Uniform California earthquake rupture forecast, version 3 (UCERF3): the time-independent model

    Science.gov (United States)

    Field, Edward H.; Biasi, Glenn P.; Bird, Peter; Dawson, Timothy E.; Felzer, Karen R.; Jackson, David D.; Johnson, Kaj M.; Jordan, Thomas H.; Madden, Christopher; Michael, Andrew J.; Milner, Kevin R.; Page, Morgan T.; Parsons, Thomas; Powers, Peter M.; Shaw, Bruce E.; Thatcher, Wayne R.; Weldon, Ray J.; Zeng, Yuehua; ,

    2013-01-01

    In this report we present the time-independent component of the Uniform California Earthquake Rupture Forecast, Version 3 (UCERF3), which provides authoritative estimates of the magnitude, location, and time-averaged frequency of potentially damaging earthquakes in California. The primary achievements have been to relax fault segmentation assumptions and to include multifault ruptures, both limitations of the previous model (UCERF2). The rates of all earthquakes are solved for simultaneously, and from a broader range of data, using a system-level "grand inversion" that is both conceptually simple and extensible. The inverse problem is large and underdetermined, so a range of models is sampled using an efficient simulated annealing algorithm. The approach is more derivative than prescriptive (for example, magnitude-frequency distributions are no longer assumed), so new analysis tools were developed for exploring solutions. Epistemic uncertainties were also accounted for using 1,440 alternative logic tree branches, necessitating access to supercomputers. The most influential uncertainties include alternative deformation models (fault slip rates), a new smoothed seismicity algorithm, alternative values for the total rate of M≥5 events, and different scaling relationships, virtually all of which are new. As a notable first, three deformation models are based on kinematically consistent inversions of geodetic and geologic data, also providing slip-rate constraints on faults previously excluded because of lack of geologic data. The grand inversion constitutes a system-level framework for testing hypotheses and balancing the influence of different experts. For example, we demonstrate serious challenges with the Gutenberg-Richter hypothesis for individual faults. UCERF3 is still an approximation of the system, however, and the range of models is limited (for example, constrained to stay close to UCERF2). Nevertheless, UCERF3 removes the apparent UCERF2 overprediction of

  12. Atmospheric radionuclide transport model with radon postprocessor and SBG module. Model description version 2.8.0; ARTM. Atmosphaerisches Radionuklid-Transport-Modell mit Radon Postprozessor und SBG-Modul. Modellbeschreibung zu Version 2.8.0

    Energy Technology Data Exchange (ETDEWEB)

    Richter, Cornelia; Sogalla, Martin; Thielen, Harald; Martens, Reinhard

    2015-04-20

    The study on the atmospheric radionuclide transport model with radon postprocessor and SBG module (model description version 2.8.0) covers the following issues: determination of emissions, radioactive decay, atmospheric dispersion calculation for radioactive gases, atmospheric dispersion calculation for radioactive dusts, determination of the gamma cloud radiation (gamma submersion), terrain roughness, effective source height, calculation area and model points, geographic reference systems and coordinate transformations, meteorological data, use of invalid meteorological data sets, consideration of statistical uncertainties, consideration of housings, consideration of bumpiness, consideration of terrain roughness, use of frequency distributions of the hourly dispersion situation, consideration of the vegetation period (summer), the radon post processor radon.exe, the SBG module, modeling of wind fields, shading settings.

  13. RAMS Model for Terrestrial Pathways Version 3. 0 (for microcomputers). Model-Simulation

    Energy Technology Data Exchange (ETDEWEB)

    Niebla, E.

    1989-01-01

    The RAMS Model for Terrestrial Pathways is a computer program for the calculation of numeric criteria for land application and for distribution and marketing of sludges under the sewage-sludge regulations at 40 CFR Part 503. The risk-assessment models covered assume that municipal sludge with specified characteristics is spread across a defined area of ground at a known rate once each year for a given number of years. Risks associated with direct land application of sludge and with sludge applied after distribution and marketing are both calculated. The computer program calculates the maximum annual loading of contaminants that can be land applied and still meet the risk criteria specified as input. Software Description: The program is written in the Turbo Basic programming language for implementation on IBM PC/AT or compatible machines using the DOS 3.0 or higher operating system. Minimum core storage is 512K.
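
    If predicted dose scales linearly with the annual contaminant loading, the maximum allowable loading follows by inverting that relation against the risk criterion supplied as input. The sketch below illustrates that inversion only; the function names and coefficients are invented, not the RAMS pathway equations.

```python
# Hypothetical loading inversion: allowable loading such that the
# predicted dose just meets the risk criterion, assuming dose is
# linear in loading. Coefficients are invented for illustration.
def max_annual_loading(reference_dose, dose_per_unit_loading):
    """Maximum annual loading [kg/ha/yr] meeting the risk criterion."""
    return reference_dose / dose_per_unit_loading

limit = max_annual_loading(reference_dose=0.001,          # mg/kg/day (assumed)
                           dose_per_unit_loading=5e-5)    # per kg/ha/yr (assumed)
```

    In a pathway model, the dose-per-unit-loading slope would itself be the product of transfer factors along each exposure pathway (soil to plant, plant to diet, and so on).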

  14. Planar version of the CPT-even gauge sector of the standard model extension

    Energy Technology Data Exchange (ETDEWEB)

    Ferreira Junior, Manoel M.; Casana, Rodolfo; Gomes, Adalto Rodrigues; Carvalho, Eduardo S. [Universidade Federal do Maranhao (UFMA), Sao Luis, MA (Brazil). Dept. de Fisica

    2011-07-01

    The CPT-even abelian gauge sector of the Standard Model Extension is represented by the Maxwell term supplemented by (K_F)_{μνρσ} F^{μν} F^{ρσ}, where the Lorentz-violating background tensor (K_F)_{μνρσ} possesses the symmetries of the Riemann tensor and a double null trace, which leaves nineteen independent components. Of these, ten components yield birefringence while nine are nonbirefringent. In the present work, we examine the planar version of this theory, obtained by means of a typical dimensional reduction procedure to (1+2) dimensions. We obtain a kind of planar scalar electrodynamics, which is composed of a gauge sector containing six Lorentz-violating coefficients, a scalar field endowed with a noncanonical kinetic term, and a coupling term that links the scalar and gauge sectors. The dispersion relation is exactly determined, revealing that the six parameters related to the pure electromagnetic sector do not yield birefringence at any order. In this model, birefringence may appear only as a second-order effect associated with the coupling tensor linking the gauge and scalar sectors. The equations of motion are written and solved in the stationary regime. The Lorentz-violating parameters do not alter the asymptotic behavior of the fields but induce an angular dependence not observed in the Maxwell planar theory. The energy-momentum tensor was evaluated as well, revealing that the theory presents energy stability. (author)
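
    In conventional notation, the gauge sector described above is the Maxwell Lagrangian plus the CPT-even term, written here with the usual SME normalization (the overall factors follow the standard convention, not a formula quoted from this abstract):

```latex
\mathcal{L} \;=\; -\tfrac{1}{4}\,F_{\mu\nu}F^{\mu\nu}
\;-\;\tfrac{1}{4}\,(K_F)_{\mu\nu\rho\sigma}\,F^{\mu\nu}F^{\rho\sigma}
```

    The Riemann-tensor symmetries and double null trace of (K_F)_{μνρσ} are what reduce its components to the nineteen independent ones counted above.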

  15. A Comparison of Different Versions of the Method of Multiple Scales for an Arbitrary Model of Odd Nonlinearities

    OpenAIRE

    Pakdemirli, Mehmet; Boyacı, Hakan

    1999-01-01

    A general model of cubic and fifth order nonlinearities is considered. The linear part as well as the nonlinearities are expressed in terms of arbitrary operators. Two different versions of the method of multiple scales are used in constructing the general transient and steady-state solutions of the model: Modified Rahman-Burton method and the Reconstitution method. It is found that the usual ordering of reconstitution can be used, if at higher orders of approximation, the time scale correspo...

  16. Scaling and long-range dependence in option pricing III: A fractional version of the Merton model with transaction costs

    Science.gov (United States)

    Wang, Xiao-Tian; Yan, Hai-Gang; Tang, Ming-Ming; Zhu, En-Hui

    2010-02-01

    A fractional version of the Merton option-pricing model, with 'Hurst exponent' H in [1/2, 1), is established with transaction costs. In particular, for H ∈ (1/2, 1) the minimal price Cmin(t, St) of an option under transaction costs is obtained, which shows that the timestep δt and the 'Hurst exponent' H play an important role in option pricing with transaction costs.
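
    The joint role of δt and H can be traced to the standard variance scaling of fractional Brownian motion increments (a textbook property of fBm, not a formula quoted from the paper):

```latex
\mathbb{E}\!\left[\bigl(B_H(t+\delta t)-B_H(t)\bigr)^2\right]
= (\delta t)^{2H},
\qquad \tfrac{1}{2} \le H < 1
```

    Per-rebalancing price moves therefore scale as (δt)^H rather than (δt)^{1/2}, which changes how proportional transaction costs accumulate with the hedging frequency.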

  17. Effects of Lower and Higher Quality Brand Versions on Brand Evaluation: an Opponent-Process Model Plus Differential Brand-Version Weighting

    National Research Council Canada - National Science Library

    Timothy Heath; Devon DelVecchio; Michael McCarthy; Subimal Chatterjee

    2009-01-01

    ...) or lower-quality versions (e.g., Ruby Tuesday's Corner Diner). A brand-quality asymmetry emerges on measures ranging from brand choice to brand attitude to perceptions of brand expertise, innovativeness, and prestige...

  18. Does Diversity Matter In Modeling? Testing A New Version Of The FORMIX3 Growth Model For Madagascar Rainforests

    Science.gov (United States)

    Armstrong, A. H.; Fischer, R.; Shugart, H. H.; Huth, A.

    2012-12-01

    Ecological forecasting has become an essential tool used by ecologists to understand the dynamics of growth and disturbance response in threatened ecosystems such as the rainforests of Madagascar. In the species rich tropics, forest conservation is often eclipsed by anthropogenic factors, resulting in a heightened need for accurate assessment of biomass before these ecosystems disappear. The objective of this study was to test a new Madagascar rainforest specific version of the FORMIX3 growth model (Huth and Ditzer, 2000; Huth et al 1998) to assess how accurately biomass can be simulated in high biodiversity forests using a method of functional type aggregation in an individual-based model framework. Rainforest survey data collected over three growing seasons, including 265 tree species, was aggregated into 12 plant functional types based on size and light requirements. Findings indicated that the forest study site compared best when the simulated forest reached mature successional status. Multiple level comparisons between model simulation data and survey plot data found that though some features, such as the dominance of canopy emergent species and relative absence of small woody treelets are captured by the model, other forest attributes were not well reflected. Overall, the ability to accurately simulate the Madagascar rainforest was slightly diminished by the aggregation of tree species into size and light requirement functional type groupings.

  19. The Digital Astronaut Project Computational Bone Remodeling Model (Beta Version) Bone Summit Summary Report

    Science.gov (United States)

    Pennline, James; Mulugeta, Lealem

    2013-01-01

    Under the conditions of microgravity, astronauts lose bone mass at a rate of 1% to 2% a month, particularly in the lower extremities such as the proximal femur [1-3]. The most commonly used countermeasure against bone loss in microgravity has been prescribed exercise [4]. However, data has shown that existing exercise countermeasures are not as effective as desired for preventing bone loss in long duration, 4 to 6 months, spaceflight [1,3,5,6]. This spaceflight related bone loss may cause early onset of osteoporosis to place the astronauts at greater risk of fracture later in their lives. Consequently, NASA seeks to have improved understanding of the mechanisms of bone demineralization in microgravity in order to appropriately quantify this risk, and to establish appropriate countermeasures [7]. In this light, NASA's Digital Astronaut Project (DAP) is working with the NASA Bone Discipline Lead to implement well-validated computational models to help predict and assess bone loss during spaceflight, and enhance exercise countermeasure development. More specifically, computational modeling is proposed as a way to augment bone research and exercise countermeasure development to target weight-bearing skeletal sites that are most susceptible to bone loss in microgravity, and thus at higher risk for fracture. Given that hip fractures can be debilitating, the initial model development focused on the femoral neck. Future efforts will focus on including other key load bearing bone sites such as the greater trochanter, lower lumbar, proximal femur and calcaneus. The DAP has currently established an initial model (Beta Version) of bone loss due to skeletal unloading in femoral neck region. The model calculates changes in mineralized volume fraction of bone in this segment and relates it to changes in bone mineral density (vBMD) measured by Quantitative Computed Tomography (QCT). 
The model is governed by equations describing changes in bone volume fraction (BVF), and rates of
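
As a rough orientation to how a volume-fraction model of this kind behaves, the sketch below integrates a deliberately simplified relaxation ODE for BVF under unloading. The functional form and every rate constant are invented for illustration; they are not the DAP model's actual equations.

```python
# Hypothetical sketch: first-order bone-volume-fraction (BVF) dynamics under
# skeletal unloading. All parameter values are illustrative assumptions.

def simulate_bvf(bvf0=0.85, bvf_eq=0.70, k=0.015, months=6, dt=0.01):
    """Integrate d(BVF)/dt = -k * (BVF - BVF_eq) with forward Euler.

    bvf0   : initial mineralized volume fraction (loaded equilibrium, assumed)
    bvf_eq : hypothetical unloaded equilibrium
    k      : relaxation rate per month (assumed)
    """
    bvf, t = bvf0, 0.0
    history = [(t, bvf)]
    while t < months:
        bvf += dt * (-k * (bvf - bvf_eq))
        t += dt
        history.append((round(t, 2), bvf))
    return history
```

A change in BVF computed this way could then be mapped to a vBMD change via a calibration against QCT data, which is the step the report's model actually formalizes.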

  20. Software Design Description for the Navy Coastal Ocean Model (NCOM) Version 4.0

    Science.gov (United States)

    2008-12-31

    Excerpt from the subroutine descriptions in the NCOM Version 4.0 Software Design Description (NRL/MR/7320--08-9149), e.g., the strpars calling sequence, strpars(cline, cdelim, nstr, cstr, nsto, ierr), and the COAMPS_UVG2UV data declarations (Integer lenc; Character cstr).

  1. MATILDA Version 2: Rough Earth TIALD Model for Laser Probabilistic Risk Assessment in Hilly Terrain - Part I

    Science.gov (United States)

    2017-03-13

    AFRL-RH-FS-TR-2017-0009, MATILDA Version 2: Rough Earth TIALD Model for Laser Probabilistic Risk Assessment in Hilly Terrain - Part I. Distribution A: approved for public release; distribution unlimited (PA Case No. TSRL-PA-2017-0169).

  2. The Flexible Global Ocean-Atmosphere-Land System Model,Grid-point Version 2:FGOALS-g2

    Institute of Scientific and Technical Information of China (English)

    LI Lijuan; LIN Pengfei; YU Yongqiang; WANG Bin; ZHOU Tianjun; LIU Li; LIU Jiping

    2013-01-01

    This study mainly introduces the development of the Flexible Global Ocean-Atmosphere-Land System Model, Grid-point Version 2 (FGOALS-g2), and preliminary evaluations of its performance based on results from the pre-industrial control run and four members of historical runs according to the fifth phase of the Coupled Model Intercomparison Project (CMIP5) experiment design. The results suggest that many obvious improvements have been achieved by FGOALS-g2 compared with the previous version, FGOALS-g1, including its climatological mean states, climate variability, and 20th-century surface temperature evolution. For example, FGOALS-g2 better simulates the frequency of tropical land precipitation, East Asian monsoon precipitation and its seasonal cycle, the MJO, and ENSO, which are closely related to the updated cumulus parameterization scheme, as well as the alleviation of uncertainties in some key parameters in the shallow and deep convection schemes, cloud fraction, cloud macro/microphysical processes, and the boundary-layer scheme in its atmospheric model. The annual cycle of sea surface temperature along the equator in the Pacific is significantly improved in the new version. The sea ice salinity simulation is one of the unique characteristics of FGOALS-g2, although it is somewhat inconsistent with empirical observations in the Antarctic.

  3. Programs OPTMAN and SHEMMAN Version 6 (1999) - Coupled-Channels optical model and collective nuclear structure calculation -

    Energy Technology Data Exchange (ETDEWEB)

    Chang, Jong Hwa; Lee, Jeong Yeon; Lee, Young Ouk; Sukhovitski, Efrem Sh. [Korea Atomic Energy Research Institute, Taejeon (Korea)

    2000-01-01

    Programs SHEMMAN and OPTMAN (Version 6) have been developed for the determination of nuclear Hamiltonian parameters and for optical model calculations, respectively. The optical model calculations by OPTMAN, with coupling schemes built on wave functions of the non-axial soft rotator, are self-consistent, since the parameters of the nuclear Hamiltonian are determined by adjusting the energies of collective levels to experimental values with SHEMMAN prior to the optical model calculation. The programs have been installed at the Nuclear Data Evaluation Laboratory of KAERI. This report is intended as a brief manual for these codes. 43 refs., 9 figs., 1 tab. (Author)

  4. UNSAT-H Version 3.0: Unsaturated Soil Water and Heat Flow Model Theory, User Manual, and Examples

    Energy Technology Data Exchange (ETDEWEB)

    MJ Fayer

    2000-06-12

    The UNSAT-H model was developed at Pacific Northwest National Laboratory (PNNL) to assess the water dynamics of arid sites and, in particular, estimate recharge fluxes for scenarios pertinent to waste disposal facilities. During the last 4 years, the UNSAT-H model received support from the Immobilized Waste Program (IWP) of the Hanford Site's River Protection Project. This program is designing and assessing the performance of on-site disposal facilities to receive radioactive wastes that are currently stored in single- and double-shell tanks at the Hanford Site (LMHC 1999). The IWP is interested in estimates of recharge rates for current conditions and long-term scenarios involving the vadose zone disposal of tank wastes. Simulation modeling with UNSAT-H is one of the methods being used to provide those estimates (e.g., Rockhold et al. 1995; Fayer et al. 1999). To achieve the above goals for assessing water dynamics and estimating recharge rates, the UNSAT-H model addresses soil water infiltration, redistribution, evaporation, plant transpiration, deep drainage, and soil heat flow as one-dimensional processes. The UNSAT-H model simulates liquid water flow using Richards' equation (Richards 1931), water vapor diffusion using Fick's law, and sensible heat flow using the Fourier equation. This report documents UNSAT-H Version 3.0. The report includes the bases for the conceptual model and its numerical implementation, benchmark test cases, example simulations involving layered soils and plants, and the code manual. Version 3.0 is an enhanced-capability update of UNSAT-H Version 2.0 (Fayer and Jones 1990). New features include hysteresis, an iterative solution of head and temperature, an energy balance check, the modified Picard solution technique, additional hydraulic functions, multiple-year simulation capability, and general enhancements.
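
Of the processes listed, the soil-heat-flow component is the simplest to sketch in isolation. The explicit finite-difference step below solves the Fourier conduction equation on a toy soil column; the grid, diffusivity, and boundary values are invented, and UNSAT-H's coupled Richards/vapor-flow solution is not reproduced.

```python
# Illustrative 1-D soil heat conduction, dT/dt = alpha * d2T/dz2, solved with
# an explicit scheme (stable here since alpha*dt/dz**2 ~ 0.12 < 0.5).
# Fixed-temperature boundaries stand in for surface and deep-soil conditions.

def heat_step(T, alpha, dz, dt):
    """One explicit finite-difference step; end nodes are held fixed."""
    Tn = T[:]
    for i in range(1, len(T) - 1):
        Tn[i] = T[i] + alpha * dt / dz**2 * (T[i+1] - 2*T[i] + T[i-1])
    return Tn

def simulate(n=21, alpha=5e-7, dz=0.05, dt=600.0, steps=1000):
    T = [10.0] * n        # soil column initially at 10 degC (assumed)
    T[0] = 25.0           # heated surface boundary (assumed)
    for _ in range(steps):
        T = heat_step(T, alpha, dz, dt)
    return T
```

After enough steps the warm front diffuses downward, giving a profile that decreases monotonically from the surface to the fixed deep boundary.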

  5. Technical report series on global modeling and data assimilation. Volume 1: Documentation of the Goddard Earth Observing System (GEOS) General Circulation Model, version 1

    Science.gov (United States)

    Suarez, Max J. (Editor); Takacs, Lawrence L.; Molod, Andrea; Wang, Tina

    1994-01-01

    This technical report documents Version 1 of the Goddard Earth Observing System (GEOS) General Circulation Model (GCM). The GEOS-1 GCM is being used by NASA's Data Assimilation Office (DAO) to produce multiyear data sets for climate research. This report provides a documentation of the model components used in the GEOS-1 GCM, a complete description of model diagnostics available, and a User's Guide to facilitate GEOS-1 GCM experiments.

  6. A Fast and Efficient Version of the TwO-Moment Aerosol Sectional (TOMAS) Global Aerosol Microphysics Model

    Science.gov (United States)

    Lee, Yunha; Adams, P. J.

    2012-01-01

    This study develops more computationally efficient versions of the TwO-Moment Aerosol Sectional (TOMAS) microphysics algorithms, collectively called Fast TOMAS. Several methods for speeding up the algorithm were attempted, but only reducing the number of size sections was adopted. Fast TOMAS models, coupled to the GISS GCM II-prime, require a new coagulation algorithm with less restrictive size resolution assumptions but only minor changes in other processes. Fast TOMAS models have been evaluated in a box model against analytical solutions of coagulation and condensation and in a 3-D model against the original TOMAS (TOMAS-30) model. Condensation and coagulation in the Fast TOMAS models agree well with the analytical solution but show slightly more bias than the TOMAS-30 box model. In the 3-D model, errors resulting from decreased size resolution in each process (i.e., emissions, cloud processing/wet deposition, microphysics) are quantified in a series of model sensitivity simulations. Errors resulting from lower size resolution in condensation and coagulation, defined as the microphysics error, affect number and mass concentrations by only a few percent. The microphysics errors in CN70/CN100 (number concentrations of particles larger than 70/100 nm in diameter), proxies for cloud condensation nuclei, range from -5% to 5% in most regions. The largest errors are associated with decreasing the size resolution in the cloud processing/wet deposition calculations, defined as the cloud-processing error, and range from -20% to 15% in most regions for CN70/CN100 concentrations. Overall, the Fast TOMAS models increase the computational speed by 2 to 3 times with only small numerical errors stemming from condensation and coagulation calculations when compared to TOMAS-30. The faster versions of the TOMAS model allow for the longer, multi-year simulations required to assess aerosol effects on cloud lifetime and precipitation.
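
The kind of box-model check described here, numerics against an analytic coagulation solution, can be illustrated with the constant-kernel case, for which the total number concentration has a closed form. K, N0, and the time step below are arbitrary demonstration values, not TOMAS settings.

```python
# Constant-kernel coagulation: dN/dt = -0.5*K*N^2 has the analytic solution
# N(t) = N0 / (1 + 0.5*K*N0*t). A forward-Euler integration is compared
# against it, mirroring (in miniature) the paper's box-model evaluation.

def coagulate(N0=1e4, K=1e-9, dt=1.0, t_end=1e5):
    N, t = N0, 0.0
    while t < t_end:
        N += dt * (-0.5 * K * N * N)
        t += dt
    return N

def analytic(N0=1e4, K=1e-9, t=1e5):
    return N0 / (1.0 + 0.5 * K * N0 * t)
```

A sectional model adds size bins on top of this total-number balance; coarsening the bins is exactly where the "microphysics error" quantified in the abstract comes from.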

  7. Simulations of the mid-Pliocene Warm Period using two versions of the NASA/GISS ModelE2-R Coupled Model

    Directory of Open Access Journals (Sweden)

    M. A. Chandler

    2013-04-01

    The mid-Pliocene Warm Period (mPWP) bears many similarities to aspects of future global warming as projected by the Intergovernmental Panel on Climate Change (IPCC, 2007). Both marine and terrestrial data point to high-latitude temperature amplification, including large decreases in sea ice and land ice, as well as expansion of warmer-climate biomes into higher latitudes. Here we present our most recent simulations of the mid-Pliocene climate using the CMIP5 version of the NASA/GISS Earth System Model (ModelE2-R). We describe the substantial impact associated with a recent correction made in the implementation of the Gent-McWilliams ocean mixing scheme (GM), which has a large effect on the simulation of ocean surface temperatures, particularly in the North Atlantic Ocean. The effect of this correction on the Pliocene climate results would not have been easily determined from examining its impact on the preindustrial runs alone, a useful demonstration of how the consequences of code improvements as seen in modern climate control runs do not necessarily portend the impacts in extreme climates. Both the GM-corrected and GM-uncorrected simulations were contributed to the Pliocene Model Intercomparison Project (PlioMIP) Experiment 2. Many findings presented here corroborate results from other PlioMIP multi-model ensemble papers, but we also emphasise features in the ModelE2-R simulations that are unlike the ensemble means. The corrected version yields results that more closely resemble the ocean core data as well as the PRISM3D reconstructions of the mid-Pliocene, especially the dramatic warming in the North Atlantic and Greenland-Iceland-Norwegian Sea, which in the new simulation appears to be far more realistic than previously found with older versions of the GISS model.
Our belief is that continued development of key physical routines in the atmospheric model, along with higher resolution and recent corrections to mixing parameterisations in the ocean

  8. Land-total and Ocean-total Precipitation and Evaporation from a Community Atmosphere Model version 5 Perturbed Parameter Ensemble

    Energy Technology Data Exchange (ETDEWEB)

    Covey, Curt [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Lucas, Donald D. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Trenberth, Kevin E. [National Center for Atmospheric Research, Boulder, CO (United States)

    2016-03-02

    This document presents the large-scale water budget statistics of a perturbed input-parameter ensemble of atmospheric model runs. The model is Version 5.1.02 of the Community Atmosphere Model (CAM). These runs are the “C-Ensemble” described by Qian et al., “Parametric Sensitivity Analysis of Precipitation at Global and Local Scales in the Community Atmosphere Model CAM5” (Journal of Advances in Modeling the Earth System, 2015). As noted by Qian et al., the simulations are “AMIP type” with temperature and sea ice boundary conditions chosen to match surface observations for the five-year period 2000-2004. There are 1100 ensemble members in addition to one run with default input-parameter values.
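
The bookkeeping behind land-total and ocean-total water budgets reduces to masked, area-weighted sums of precipitation (P) minus evaporation (E). A minimal sketch, with a synthetic grid and land mask standing in for CAM output:

```python
# Land-total and ocean-total P-E from flattened per-cell fields. The grid
# values, cell areas, and mask below are invented stand-ins for model output.

def budget(p, e, area, land):
    """Return (land P-E total, ocean P-E total), area-weighted."""
    land_pe = sum((pi - ei) * a for pi, ei, a, m in zip(p, e, area, land) if m)
    ocean_pe = sum((pi - ei) * a for pi, ei, a, m in zip(p, e, area, land) if not m)
    return land_pe, ocean_pe
```

In a closed global water cycle, land P-E (net runoff to the ocean) should roughly offset ocean P-E, which makes these two totals a convenient consistency check across ensemble members.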

  9. Simple geometrical explanation of Gurtin-Murdoch model of surface elasticity with clarification of its related versions

    Institute of Scientific and Technical Information of China (English)

    2010-01-01

    It is shown that all equations of the linearized Gurtin-Murdoch model of surface elasticity can be derived, in a straightforward way, from a simple second-order expression for the ratio of deformed surface area to initial surface area. This elementary derivation offers a simple explanation for all unique features of the model and its simplified/modified versions, and helps to clarify some misunderstandings of the model that have appeared in the literature. Finally, it is demonstrated that, because the Gurtin-Murdoch model is based on a hybrid formulation combining linearized deformation of the bulk material with second-order finite deformation of the surface, caution is needed when the original form of this model is applied to bending deformation of thin-walled elastic structures with surface stress.
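
For orientation, a second-order area-ratio expansion of the kind the abstract refers to can be written out. The form below is a standard surface-kinematics result quoted here as an illustration, not copied from the paper: with in-plane surface strain $\boldsymbol{\varepsilon}_s$ and normal deflection $w$,

```latex
\frac{\mathrm{d}A}{\mathrm{d}A_0}
\;\approx\;
1
+ \operatorname{tr}\boldsymbol{\varepsilon}_s
+ \tfrac{1}{2}\!\left[(\operatorname{tr}\boldsymbol{\varepsilon}_s)^2
                      - \operatorname{tr}(\boldsymbol{\varepsilon}_s^{2})\right]
+ \tfrac{1}{2}\,\lvert \nabla_s w \rvert^{2}.
```

Truncating after the linear term recovers the usual small-strain area change; it is the retained quadratic terms, second-order in the surface deformation, that make the hybrid bulk/surface formulation delicate in bending problems, the case where the abstract advises caution.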

  10. Evaluation of the tropospheric aerosol number concentrations simulated by two versions of the global model ECHAM5-HAM

    Science.gov (United States)

    Zhang, K.; Kazil, J.; Feichter, J.

    2009-04-01

    Since its first version developed by Stier et al. (2005), the global aerosol-climate model ECHAM5-HAM has gone through further development and updates. The changes in the model include (1) a new time integration scheme for the condensation of the sulfuric acid gas on existing particles, (2) a new aerosol nucleation scheme that takes into account the charged nucleation caused by cosmic rays, and (3) a parameterization scheme explicitly describing the conversion of aerosol particles to cloud nuclei. In this work, simulations performed with the old and new model versions are evaluated against some measurements reported in recent years. The focus is on the aerosol size distribution in the troposphere. Results show that modifications in the parameterizations have led to significant changes in the simulated aerosol concentrations. Vertical profiles of the total particle number concentration (diameter > 3 nm) compiled by Clarke et al. (2002) suggest that, over the Pacific in the upper free troposphere, the tropics are associated with much higher concentrations than the mid-latitude regions. This feature is more reasonably reproduced by the new model version, mainly due to the improved results of the nucleation mode aerosols. In the lower levels (2-5 km above the Earth's surface), the number concentrations of the Aitken mode particles are overestimated compared to both the Pacific data given in Clarke et al. (2002) and the vertical profiles over Europe reported by Petzold et al. (2007). The physical and chemical processes that have led to these changes are identified by sensitivity tests. References: Clarke and Kapustin: A Pacific aerosol survey - part 1: a decade of data on production, transport, evolution and mixing in the troposphere, J. Atmos. Sci., 59, 363-382, 2002. Petzold et al.: Perturbation of the European free troposphere aerosol by North American forest fire plumes during the ICARTT-ITOP experiment in summer 2004, Atmos. Chem. Phys., 7, 5105-5127, 2007.

  11. Update of the Polar SWIFT model for polar stratospheric ozone loss (Polar SWIFT version 2)

    Directory of Open Access Journals (Sweden)

    I. Wohltmann

    2017-07-01

    The Polar SWIFT model is a fast scheme for calculating the chemistry of stratospheric ozone depletion in polar winter. It is intended for use in global climate models (GCMs) and Earth system models (ESMs) to enable the simulation of mutual interactions between the ozone layer and climate. To date, climate models often use prescribed ozone fields, since a full stratospheric chemistry scheme is computationally very expensive. Polar SWIFT is based on a set of coupled differential equations, which simulate the polar vortex-averaged mixing ratios of the key species involved in polar ozone depletion on a given vertical level. These species are O3, chemically active chlorine (ClOx), HCl, ClONO2 and HNO3. The only external input parameters that drive the model are the fraction of the polar vortex in sunlight and the fraction of the polar vortex below the temperatures necessary for the formation of polar stratospheric clouds. Here, we present an update of the Polar SWIFT model introducing several improvements over the original model formulation. In particular, the model is now trained on vortex-averaged reaction rates of the ATLAS Chemistry and Transport Model, which enables a detailed look at individual processes and an independent validation of the different parameterizations contained in the differential equations. The training of the original Polar SWIFT model was based on fitting complete model runs to satellite observations and did not allow for this. A revised formulation of the system of differential equations is developed, which closely fits vortex-averaged reaction rates from ATLAS that represent the main chemical processes influencing ozone. In addition, a parameterization for the HNO3 change by denitrification is included. The rates of change of the concentrations of the chemical species of the Polar SWIFT model are purely chemical rates of change in the new version, whereas in the original Polar SWIFT model, they included a transport effect
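
The structure of such a scheme, a handful of coupled ODEs driven only by the sunlit and PSC-cold vortex fractions, can be sketched as follows. The species set comes from the abstract, but every rate constant and forcing profile below is invented for illustration and bears no relation to the fitted SWIFT coefficients.

```python
# Toy vortex-averaged chemistry: HCl is activated to ClOx below PSC
# temperatures, ClOx slowly deactivates, and sunlight-driven catalytic
# cycles destroy O3. Units and rates are arbitrary demonstration values.

def step(o3, clox, hcl, f_sun, f_psc, dt):
    act = 0.2 * f_psc * hcl          # heterogeneous activation: HCl -> ClOx
    deact = 0.05 * clox              # slow return to reservoir species
    loss = 0.1 * f_sun * clox * o3   # sunlight-driven catalytic ozone loss
    return o3 - dt * loss, clox + dt * (act - deact), hcl - dt * act

def winter(days=120, dt=0.1):
    o3, clox, hcl = 3.0, 0.0, 1.0    # arbitrary mixing-ratio units
    for i in range(int(days / dt)):
        t = i * dt
        f_psc = 1.0 if t < 80 else 0.0               # assumed cold period
        f_sun = min(1.0, max(0.0, (t - 40) / 60.0))  # returning sunlight
        o3, clox, hcl = step(o3, clox, hcl, f_sun, f_psc, dt)
    return o3, clox, hcl
```

Even this caricature reproduces the qualitative sequence the real model captures: dark activation while PSCs are present, then ozone loss once sunlight returns to the still-activated vortex.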

  12. The Sutra Origin of the Ksitigarbha Bodhisattva Belief of Jiuhua Mountain: the Canonical Motif of "Chain-Linked Bones that Ring"

    Institute of Scientific and Technical Information of China (English)

    张总

    2012-01-01

    In the Ksitigarbha (Jin Dizang) belief of Jiuhua Mountain, Fei Guanqing's statement that "the sutras say: a Bodhisattva's joints are chain-linked, and all the bones ring" in fact rests on a substantial scriptural basis, though it has long gone unexamined. The Buddhist canon contains a systematic account of the skeletal structure and strength of beings ranging from ordinary mortals through elephants and lions up to tenth-stage Bodhisattvas and Buddhas, and holds that the sound of the ringing bones reveals one's place among the five destinies and six realms. These scriptural accounts influenced miracle tales and monastic biographies, and played an important role in the Jin Dizang belief.

  13. Application of a short-time version of the Equalization-Cancellation model to speech intelligibility experiments with speech maskers.

    Science.gov (United States)

    Wan, Rui; Durlach, Nathaniel I; Colburn, H Steven

    2014-08-01

    A short-time-processing version of the Equalization-Cancellation (EC) model of binaural processing is described and applied to speech intelligibility tasks in the presence of multiple maskers, including multiple speech maskers. This short-time EC model, called the STEC model, extends the model described by Wan et al. [J. Acoust. Soc. Am. 128, 3678-3690 (2010)] to allow the EC model's equalization parameters τ and α to be adjusted as a function of time, resulting in improved masker cancellation when the dominant masker location varies in time. Using the Speech Intelligibility Index, the STEC model is applied to speech intelligibility with maskers that vary in number, type, and spatial arrangements. Most notably, when maskers are located on opposite sides of the target, this STEC model predicts improved thresholds when the maskers are modulated independently with speech-envelope modulators; this includes the most relevant case of independent speech maskers. The STEC model describes the spatial dependence of the speech reception threshold with speech maskers better than the steady-state model. Predictions are also improved for independently speech-modulated noise maskers but are poorer for reversed-speech maskers. In general, short-term processing is useful, but much remains to be done in the complex task of understanding speech in speech maskers.
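
The EC operation itself is compact: equalize one ear's signal with a delay and gain chosen for the dominant masker, then subtract. The sketch below applies it to synthetic signals with a diotic target and a masker lateralized by an integer-sample delay; the STEC model's time-varying estimation of the parameters τ and α is not reproduced.

```python
# Equalization-cancellation on synthetic two-ear signals. The masker reaches
# the left ear 3 samples late (an assumed interaural delay); equalizing the
# right ear by that delay and subtracting removes the masker exactly, at the
# cost of filtering the target, which is characteristic EC behavior.
import math

def ec_cancel(left, right, delay, alpha=1.0):
    """Delay/scale the right-ear signal, then cancel it from the left."""
    out = []
    for n in range(len(left)):
        r = right[n - delay] if 0 <= n - delay < len(right) else 0.0
        out.append(left[n] - alpha * r)
    return out

n = 256
target = [math.sin(2 * math.pi * 0.01 * i) for i in range(n)]   # diotic target
masker = [math.sin(2 * math.pi * 0.037 * i) for i in range(n)]  # interferer
left = [target[i] + (masker[i - 3] if i >= 3 else 0.0) for i in range(n)]
right = [target[i] + masker[i] for i in range(n)]

residual = ec_cancel(left, right, delay=3)  # masker gone, target comb-filtered
```

In the residual, the masker term cancels sample for sample, leaving target[n] - target[n-3]: the interferer is removed while the target survives in filtered form, which is why EC processing predicts a binaural intelligibility gain.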

  14. Process Definition and Process Modeling Methods Version 01.01.00

    Science.gov (United States)

    1991-09-01

    process model. This generic process model is a state-machine model. It permits progress in software development to be characterized as transitions...e.g., Entry-Task-Validation-Exit (ETVX) diagram, Petri net, two-level state-machine model, state machine, and Structured Analysis and Design

  15. Approaches in highly parameterized inversion—PEST++ Version 3, a Parameter ESTimation and uncertainty analysis software suite optimized for large environmental models

    Science.gov (United States)

    Welter, David E.; White, Jeremy T.; Hunt, Randall J.; Doherty, John E.

    2015-09-18

    The PEST++ Version 1 object-oriented parameter estimation code is here extended to Version 3 to incorporate additional algorithms and tools to further improve support for large and complex environmental modeling problems. PEST++ Version 3 includes the Gauss-Marquardt-Levenberg (GML) algorithm for nonlinear parameter estimation, Tikhonov regularization, integrated linear-based uncertainty quantification, options of integrated TCP/IP based parallel run management or external independent run management by use of a Version 2 update of the GENIE Version 1 software code, and utilities for global sensitivity analyses. The Version 3 code design is consistent with PEST++ Version 1 and continues to be designed to lower the barriers of entry for users as well as developers while providing efficient and optimized algorithms capable of accommodating large, highly parameterized inverse problems. As such, this effort continues the original focus of (1) implementing the most popular and powerful features of the PEST software suite in a fashion that is easy for novice or experienced modelers to use and (2) developing a software framework that is easy to extend.
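
The GML iteration at the heart of such parameter-estimation codes can be illustrated on a toy problem. The model function, synthetic data, damping value, and backtracking safeguard below are all illustrative choices, not PEST++ internals.

```python
# Gauss-Marquardt-Levenberg sketch: damped normal equations with a
# finite-difference Jacobian, fitting y = a*(1 - exp(-b*x)) to synthetic data.
import math

def model(p, xs):
    a, b = p
    return [a * (1.0 - math.exp(-b * x)) for x in xs]

def sse(p, xs, obs):
    return sum((o - m) ** 2 for o, m in zip(obs, model(p, xs)))

def gml_fit(xs, obs, p, lam=1e-2, iters=100):
    for _ in range(iters):
        m0 = model(p, xs)
        r = [o - m for o, m in zip(obs, m0)]
        # finite-difference Jacobian of the residuals w.r.t. the parameters
        J = []
        for j in range(len(p)):
            q = list(p)
            h = 1e-6 * max(1.0, abs(p[j]))
            q[j] += h
            J.append([-(mn - mo) / h for mo, mn in zip(m0, model(q, xs))])
        # damped normal equations (JtJ + lam*I) dp = -Jt r, solved for 2 params
        A = [[sum(J[i][k] * J[j][k] for k in range(len(xs)))
              + (lam if i == j else 0.0) for j in range(2)] for i in range(2)]
        g = [-sum(J[i][k] * r[k] for k in range(len(xs))) for i in range(2)]
        det = A[0][0] * A[1][1] - A[0][1] * A[1][0]
        dp = [(A[1][1] * g[0] - A[0][1] * g[1]) / det,
              (A[0][0] * g[1] - A[1][0] * g[0]) / det]
        # backtracking keeps the damped Gauss-Newton step from overshooting
        step, base = 1.0, sse(p, xs, obs)
        while step > 1e-8 and sse([pi + step * d for pi, d in zip(p, dp)],
                                  xs, obs) > base:
            step *= 0.5
        p = [pi + step * d for pi, d in zip(p, dp)]
    return p
```

Production codes like PEST++ wrap this core in adaptive damping, regularization, and parallel model runs, but the damped normal-equations step is the same idea.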

  17. Assimilation of MODIS Snow Cover Through the Data Assimilation Research Testbed and the Community Land Model Version 4

    Science.gov (United States)

    Zhang, Yong-Fei; Hoar, Tim J.; Yang, Zong-Liang; Anderson, Jeffrey L.; Toure, Ally M.; Rodell, Matthew

    2014-01-01

    To improve snowpack estimates in the Community Land Model version 4 (CLM4), Moderate Resolution Imaging Spectroradiometer (MODIS) snow cover fraction (SCF) observations were assimilated into CLM4 via the Data Assimilation Research Testbed (DART). The interface between CLM4 and DART is a flexible, extensible approach to land surface data assimilation. This data assimilation system has a large ensemble (80-member) atmospheric forcing that facilitates ensemble-based land data assimilation. We use 40 randomly chosen forcing members to drive 40 CLM members as a compromise between computational cost and data assimilation performance. The localization distance, a parameter in DART, was tuned to optimize the data assimilation performance at the global scale. Snow water equivalent (SWE) and snow depth are adjusted via the ensemble adjustment Kalman filter, particularly in regions with large SCF variability. The root-mean-square error of the forecast SCF against MODIS SCF is largely reduced. In DJF (December-January-February), the discrepancy between MODIS and CLM4 is broadly ameliorated in the lower-middle latitudes (23-45N). Only minimal modifications are made in the higher-middle (45-66N) and high latitudes, part of which is due to the agreement between model and observation when snow cover is nearly 100%. In some regions it also reveals that CLM4-modeled snow cover lacks heterogeneous features compared to MODIS. In MAM (March-April-May), adjustments to snow move poleward, mainly due to the northward movement of the snowline (i.e., where the largest SCF uncertainty is and SCF assimilation has the greatest impact). The effectiveness of data assimilation also varies with vegetation types, with mixed performance over forest regions and consistently good performance over grass, which can partly be explained by the linearity of the relationship between SCF and SWE in the model ensembles. The updated snow depth was compared to the Canadian Meteorological
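
Conceptually, the assimilation step adjusts SWE using the ensemble covariance between SWE and SCF. The sketch below is a generic ensemble Kalman update with an invented linear-saturating SCF(SWE) operator; it illustrates the idea but is not the DART ensemble adjustment Kalman filter implementation.

```python
# Generic ensemble Kalman update of an SWE ensemble given one SCF observation.
# The observation operator h (SCF saturates at 50 mm SWE) and all numbers are
# illustrative assumptions, not CLM4/DART internals.

def enkf_update(swe_ens, scf_obs, obs_var, h=lambda s: min(1.0, s / 50.0)):
    n = len(swe_ens)
    scf_ens = [h(s) for s in swe_ens]
    mx = sum(swe_ens) / n
    my = sum(scf_ens) / n
    cov_xy = sum((x - mx) * (y - my)
                 for x, y in zip(swe_ens, scf_ens)) / (n - 1)
    var_y = sum((y - my) ** 2 for y in scf_ens) / (n - 1)
    gain = cov_xy / (var_y + obs_var)          # Kalman gain
    return [x + gain * (scf_obs - y) for x, y in zip(swe_ens, scf_ens)]
```

Note the linearity point made in the abstract: where SCF saturates near 100%, the ensemble SCF variance collapses and the update does little, which is exactly the high-latitude behavior described above.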

  18. Accounting for observation uncertainties in an evaluation metric of low latitude turbulent air-sea fluxes: application to the comparison of a suite of IPSL model versions

    Science.gov (United States)

    Servonnat, Jérôme; Găinuşă-Bogdan, Alina; Braconnot, Pascale

    2017-09-01

    Turbulent momentum and heat (sensible heat and latent heat) fluxes at the air-sea interface are key components of the overall energetics of the Earth's climate. The evaluation of these fluxes in climate models is still difficult because of the large uncertainties associated with the reference products. In this paper we present an objective metric that accounts for reference uncertainties in evaluating the annual cycle of the low-latitude turbulent fluxes of a suite of IPSL climate models. This metric consists of a Hotelling T2 test between the simulated and observed fields in a reduced space characterized by the dominant modes of variability that are common to both the model and the reference, taking into account the observational uncertainty. The test is thus more severe when uncertainties are small, as is the case for sea surface temperature (SST). The results of the test show that for almost all variables and all model versions the model-reference differences are not zero. It is not possible to distinguish between model versions for sensible heat and meridional wind stress, certainly due to the large observational uncertainties. All model versions share similar biases for the different variables. There is no improvement between the reference versions of the IPSL model used for CMIP3 and CMIP5. The test also reveals that the higher horizontal resolution fails to improve the representation of the turbulent surface fluxes compared to the other versions. The representation of the fluxes is further degraded in a version with improved atmospheric physics, with an amplification of some of the biases in the Indian Ocean and in the intertropical convergence zone. The ranking of the model versions for the turbulent fluxes is not correlated with the ranking found for SST. This highlights that despite the fact that SST gradients are important for the large-scale atmospheric circulation patterns, other factors such as wind speed, and air-sea temperature contrast play an
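
The core statistic is easy to state concretely. The sketch below computes a one-sample Hotelling T2 statistic in a two-dimensional reduced space; the samples and dimensionality are synthetic stand-ins, and the paper's actual construction of the reduced space (shared variability modes, observational uncertainty) is not reproduced.

```python
# One-sample Hotelling T^2 in 2-D: T^2 = n * d' S^{-1} d, where d is the
# sample-mean minus reference difference and S the sample covariance.
# The data below are synthetic demonstration values.

def hotelling_t2(samples, mu):
    n = len(samples)
    mean = [sum(s[k] for s in samples) / n for k in (0, 1)]
    d = [mean[0] - mu[0], mean[1] - mu[1]]
    # 2x2 sample covariance matrix
    c = [[sum((s[i] - mean[i]) * (s[j] - mean[j]) for s in samples) / (n - 1)
          for j in (0, 1)] for i in (0, 1)]
    det = c[0][0] * c[1][1] - c[0][1] * c[1][0]
    inv = [[c[1][1] / det, -c[0][1] / det],
           [-c[1][0] / det, c[0][0] / det]]
    quad = sum(d[i] * inv[i][j] * d[j] for i in (0, 1) for j in (0, 1))
    return n * quad
```

Dividing d by the covariance is what makes the test "more severe when uncertainties are small": the same mean difference yields a much larger T2 when the reference spread is tight, as with SST.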

  20. Assessment of two versions of regional climate model in simulating the Indian Summer Monsoon over South Asia CORDEX domain

    Science.gov (United States)

    Pattnayak, K. C.; Panda, S. K.; Saraswat, Vaishali; Dash, S. K.

    2017-07-01

    This study assesses the performance of two versions of the Regional Climate Model (RegCM) in simulating the Indian summer monsoon over South Asia for the period 1998 to 2003, with an aim of conducting future climate change simulations. Two sets of experiments were carried out with two different versions of RegCM (viz. RegCM4.2 and RegCM4.3), with the lateral boundary forcings provided from the European Centre for Medium-Range Weather Forecasts reanalysis (ERA-Interim) at 50 km horizontal resolution. The major updates in RegCM4.3 in comparison to the older version RegCM4.2 are the inclusion of measured solar irradiance in place of a hardcoded solar constant and additional layers in the stratosphere. The analysis shows that the Indian summer monsoon rainfall, moisture flux and surface net downward shortwave flux are better represented in RegCM4.3 than in the RegCM4.2 simulations. Excessive moisture flux in the RegCM4.2 simulation over the northern Arabian Sea and Peninsular India resulted in an overestimation of rainfall over the Western Ghats and the Peninsular region, as a result of which the all-India rainfall has been overestimated. RegCM4.3 has performed well over India as a whole as well as over its four rainfall-homogeneous zones in reproducing the mean monsoon rainfall and the inter-annual variation of rainfall. Further, the monsoon onset, the low-level Somali Jet and the upper-level tropical easterly jet are better represented in RegCM4.3 than in RegCM4.2. Thus, RegCM4.3 has performed better in simulating the mean summer monsoon circulation over South Asia. Hence, RegCM4.3 may be used to study future climate change over South Asia.

  1. Development of a user-friendly interface version of the Salmonella source-attribution model

    DEFF Research Database (Denmark)

    Hald, Tine; Lund, Jan

    of questions, where the use of a classical quantitative risk assessment model (i.e. transmission models) would be impaired due to a lack of data and time limitations. As these models require specialist knowledge, it was requested by EFSA to develop a flexible user-friendly source attribution model for use...... with a user-manual, which is also part of this report. Users of the interface are recommended to read this report before starting using the interface to become familiar with the model principles and the mathematics behind, which is required in order to interpret the model results and assess the validity...

  2. The Marine Virtual Laboratory (version 2.1): enabling efficient ocean model configuration

    Science.gov (United States)

    Oke, Peter R.; Proctor, Roger; Rosebrock, Uwe; Brinkman, Richard; Cahill, Madeleine L.; Coghlan, Ian; Divakaran, Prasanth; Freeman, Justin; Pattiaratchi, Charitha; Roughan, Moninya; Sandery, Paul A.; Schaeffer, Amandine; Wijeratne, Sarath

    2016-09-01

    The technical steps involved in configuring a regional ocean model are analogous for all community models. All require the generation of a model grid and the preparation and interpolation of topography, initial conditions, and forcing fields. Each task in configuring a regional ocean model is straightforward, but the process of downloading and reformatting data can be time-consuming. For an experienced modeller, the configuration of a new model domain can take as little as a few hours; for an inexperienced modeller, it can take much longer. In pursuit of technical efficiency, the Australian ocean modelling community has developed the Web-based MARine Virtual Laboratory (WebMARVL). WebMARVL allows a user to quickly and easily configure an ocean general circulation or wave model through a simple interface, reducing the time to configure a regional model to a few minutes. Through WebMARVL, a user is prompted to define the basic options needed for a model configuration, including the model, run duration, spatial extent, and input data. Once all aspects of the configuration are selected, a series of data extraction, reprocessing, and repackaging services are run, and a "take-away bundle" is prepared for download. Building on the capabilities developed under Australia's Integrated Marine Observing System, WebMARVL also extracts all of the available observations for the chosen time-space domain. The user is able to download the take-away bundle and use it to run the model of his or her choice. Models supported by WebMARVL include three community ocean general circulation models and two community wave models. The model configuration from the take-away bundle is intended to be a starting point for scientific research; the user may subsequently refine the details of the model set-up to improve its performance for the given application. In this study, WebMARVL is described along with a series of results from test cases comparing WebMARVL-configured models to observations

  3. Department of Defense Data Model, Version 1, Fy 1998, Volume 8.

    Science.gov (United States)

    2007-11-02

    Appendix A: IDEF1X Modeling Conventions. 1.0 IDEF1X DATA MODELING CONVENTIONS. Whenever data structures and business rules required to support a functional area need to be specified, it is ... An entity must have an attribute or combination of attributes whose values uniquely identify

  4. Evaluation of the Community Multiscale Air Quality model version 5.1

    Science.gov (United States)

    The Community Multiscale Air Quality model is a state-of-the-science air quality model that simulates the emission, transport and fate of numerous air pollutants, including ozone and particulate matter. The Atmospheric Modeling and Analysis Division (AMAD) of the U.S. Environment...

  5. Technical description of the RIVM/KNMI PUFF dispersion model. Version 4.0

    NARCIS (Netherlands)

    van Pul WAJ

    1992-01-01

    This report provides a technical description of the RIVM/KNMI PUFF model. The model may be used to calculate, given wind and rain field data, the dispersion of components emitted following an accident, emergency or calamity; the model area may be freely chosen to match the area of concern. The re

  6. A cloud feedback emulator (CFE, version 1.0) for an intermediate complexity model

    Science.gov (United States)

    Ullman, David J.; Schmittner, Andreas

    2017-02-01

    The dominant source of inter-model differences in comprehensive global climate models (GCMs) is cloud radiative effects on Earth's energy budget. Intermediate-complexity models, while able to run more efficiently, often lack cloud feedbacks. Here, we describe and evaluate a method for applying GCM-derived shortwave and longwave cloud feedbacks from 4 × CO2 and Last Glacial Maximum experiments to the University of Victoria Earth System Climate Model. The method generally captures the spread in top-of-the-atmosphere radiative feedbacks between the original GCMs, which impacts the magnitude and spatial distribution of surface temperature changes and climate sensitivity. These results suggest that the method is suitable for incorporating multi-model cloud feedback uncertainties in ensemble simulations with a single intermediate-complexity model.

  7. Global assessment of Vegetation Index and Phenology Lab (VIP) and Global Inventory Modeling and Mapping Studies (GIMMS) version 3 products

    Directory of Open Access Journals (Sweden)

    M. Marshall

    2015-06-01

    Full Text Available Earth-observation-based long-term global vegetation index products are used by scientists from a wide range of disciplines concerned with global change. Inter-comparison studies are commonly performed to keep the user community informed on the consistency and accuracy of such records as they evolve. In this study, we compared two new records: (1) Global Inventory Modeling and Mapping Studies (GIMMS) Normalized Difference Vegetation Index Version 3 (NDVI3g) and (2) Vegetation Index and Phenology Lab (VIP) Version 3 NDVI (NDVI3v) and Enhanced Vegetation Index 2 (EVI3v). We evaluated the two records via three experiments that address the primary uses of such records in global change research: (1) prediction of the leaf area index (LAI) used in light-use-efficiency modeling, (2) estimation of vegetation climatology in Soil-Vegetation-Atmosphere Transfer models, and (3) trend analysis of the magnitude and phenology of vegetation productivity. Experiment one, unlike previous inter-comparison studies, was performed with a unique Landsat 30 m spatial resolution and in situ LAI database for major crop types on five continents. Overall, the two records showed a high level of agreement in both direction and magnitude on a monthly basis, though VIP values were higher and more variable and showed lower correlations and higher error with in situ LAI. The records were most consistent at northern latitudes during the primary growing season and at southern latitudes and in the tropics throughout much of the year, while they were less consistent at northern latitudes during green-up and senescence and in the great deserts of the world throughout much of the year. The two records were also highly consistent in terms of trend direction and magnitude, showing a 30+ year increase (decrease) in NDVI over much of the globe (tropical rainforests). The two records were less consistent in timing, owing to the poor correlation of the records during the start and end of the growing season.

  8. Hydrogen Macro System Model User Guide, Version 1.2.1

    Energy Technology Data Exchange (ETDEWEB)

    Ruth, M.; Diakov, V.; Sa, T.; Goldsby, M.; Genung, K.; Hoseley, R.; Smith, A.; Yuzugullu, E.

    2009-07-01

    The Hydrogen Macro System Model (MSM) is a simulation tool that links existing and emerging hydrogen-related models to perform rapid, cross-cutting analysis. It allows analysis of the economics, primary energy-source requirements, and emissions of hydrogen production and delivery pathways.

  9. PhytoSFDM version 1.0.0: Phytoplankton Size and Functional Diversity Model

    Science.gov (United States)

    Acevedo-Trejos, Esteban; Brandt, Gunnar; Smith, S. Lan; Merico, Agostino

    2016-11-01

    Biodiversity is one of the key mechanisms that facilitate the adaptive response of planktonic communities to a fluctuating environment. How to allow for such a flexible response in marine ecosystem models is, however, not entirely clear. One way is to resolve the natural complexity of phytoplankton communities by explicitly incorporating a large number of species or plankton functional types. Alternatively, models of aggregate community properties focus on macroecological quantities such as total biomass, mean trait, and trait variance (or functional trait diversity), thus reducing the observed natural complexity to a few mathematical expressions. We developed the PhytoSFDM modelling tool, which can resolve species discretely and can capture aggregate community properties. The tool also provides a set of methods for treating diversity under realistic oceanographic settings. It is coded in Python and is distributed as open-source software. PhytoSFDM is implemented in a zero-dimensional physical scheme and can be applied to any location in the global ocean. We show that aggregate community models reduce computational complexity while preserving relevant macroecological features of phytoplankton communities. Compared with species-explicit models, aggregate models are more manageable in terms of the number of equations and have faster computation times. Further development of this tool should address the caveats associated with the assumptions of aggregate community models and the implementation in spatially resolved physical settings (one-dimensional and three-dimensional). With PhytoSFDM we embrace the idea of promoting open-source software and encourage scientists to build on this modelling tool to further improve our understanding of the role that biodiversity plays in shaping marine ecosystems.
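    The aggregate community description mentioned in the abstract (total biomass, mean trait, trait variance) can be illustrated by collapsing a species-explicit community into those three macroecological quantities. This is a minimal NumPy sketch of the idea, not PhytoSFDM code.

```python
import numpy as np

def aggregate_properties(biomass, trait):
    """Collapse a species-explicit community into the three macroecological
    quantities an aggregate model tracks: total biomass, biomass-weighted
    mean trait, and trait variance (functional trait diversity)."""
    biomass = np.asarray(biomass, dtype=float)
    trait = np.asarray(trait, dtype=float)
    total = biomass.sum()
    mean_trait = np.sum(biomass * trait) / total
    variance = np.sum(biomass * (trait - mean_trait) ** 2) / total
    return total, mean_trait, variance
```

    An aggregate model then evolves only these three quantities in time, instead of one state variable per species.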

  10. Parameterization Improvements and Functional and Structural Advances in Version 4 of the Community Land Model

    Directory of Open Access Journals (Sweden)

    Andrew G. Slater

    2011-05-01

    Full Text Available The Community Land Model is the land component of the Community Climate System Model. Here, we describe a broad set of model improvements and additions that have been provided through the CLM development community to create CLM4. The model is extended with a carbon-nitrogen (CN) biogeochemical model that is prognostic with respect to vegetation, litter, and soil carbon and nitrogen states and vegetation phenology. An urban canyon model is added, and a transient land cover and land use change (LCLUC) capability, including wood harvest, is introduced, enabling study of historical and future LCLUC effects on energy, water, momentum, carbon, and nitrogen fluxes. The hydrology scheme is modified with a revised numerical solution of the Richards equation and a revised ground evaporation parameterization that accounts for litter and within-canopy stability. The new snow model incorporates the SNow and Ice Aerosol Radiation model (SNICAR), which includes aerosol deposition, grain-size-dependent snow aging, and vertically resolved snowpack heating, as well as new snow cover and snow burial fraction parameterizations. The thermal and hydrologic properties of organic soil are accounted for, and the ground column is extended to ~50 m depth. Several other minor modifications to the land surface types dataset, grass and crop optical properties, atmospheric forcing height, roughness length and displacement height, and the disposition of snow-capped runoff are also incorporated. Taken together, these augmentations to CLM result in improved soil moisture dynamics, drier soils, and stronger soil moisture variability. The new model also exhibits higher snow cover, cooler soil temperatures in organic-rich soils, greater global river discharge, and lower albedos over forests and grasslands, all of which are improvements compared to CLM3.5. When CLM4 is run with CN, the mean biogeophysical simulation is slightly degraded because the vegetation structure is prognostic rather

  11. A Scalable Version of the Navy Operational Global Atmospheric Prediction System Spectral Forecast Model

    Directory of Open Access Journals (Sweden)

    Thomas E. Rosmond

    2000-01-01

    Full Text Available The Navy Operational Global Atmospheric Prediction System (NOGAPS) includes a state-of-the-art spectral forecast model similar to models run at several major operational numerical weather prediction (NWP) centers around the world. The model, developed by the Naval Research Laboratory (NRL) in Monterey, California, has run operationally at the Fleet Numerical Meteorology and Oceanography Center (FNMOC) since 1982, and most recently has been run on a Cray C90 in a multi-tasked configuration. Typically the multi-tasked code runs on 10 to 15 processors with an overall parallel efficiency of about 90%. The operational resolution is T159L30, but other operational and research applications run at significantly lower resolutions. A scalable NOGAPS forecast model has been developed by NRL in anticipation of a FNMOC C90 replacement in about 2001, as well as for current NOGAPS research requirements to run on DOD High-Performance Computing (HPC) scalable systems. The model is designed to run with message passing (MPI). Model design criteria include bit reproducibility for different processor numbers and reasonably efficient performance on fully shared memory, distributed memory, and distributed shared memory systems for a wide range of model resolutions. Results for a wide range of processor numbers, model resolutions, and different vendor architectures are presented. Single-node performance has been disappointing on RISC-based systems, at least compared to vector processor performance. This is a common complaint, and will require careful re-examination of traditional NWP model software design and data organization to fully exploit future scalable architectures.

  12. Statistical analysis of fracture data, adapted for modelling Discrete Fracture Networks-Version 2

    Energy Technology Data Exchange (ETDEWEB)

    Munier, Raymond

    2004-04-01

    The report describes the parameters that are necessary for DFN modelling, the way in which they can be extracted from the database acquired during site investigations, and their assignment to geometrical objects in the geological model. The purpose here is to present a methodology for use in SKB modelling projects. Though the methodology is deliberately tuned to facilitate subsequent DFN modelling with other tools, some of the recommendations presented here are applicable to other aspects of geo-modelling as well. For instance, we recommend a nomenclature to be used within SKB modelling projects, which are truly multidisciplinary, to ease communication between scientific disciplines and avoid misunderstanding of common concepts. This report originally appeared as an appendix to a strategy report for geological modelling (SKB-R--03-07). Strategy reports were intended to be successively updated to include experience gained during site investigations and site modelling. Rather than updating the entire strategy report, we chose to present the update of the appendix as a stand-alone document; this document thus replaces Appendix A2 in SKB-R--03-07. In short, the update consists of the following: the target audience, and as a consequence the purpose of the document, has been broadened; errors found in various formulae have been corrected; all expressions have been rewritten; more worked examples have been included in each section; new sections describing area normalisation, spatial correlation, and anisotropy have been added; and a new chapter describes the expected output from DFN modelling within SKB projects.
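    As a toy illustration of two of the DFN parameters listed above, fracture size and fracture intensity, the sketch below draws radii from a Pareto (power-law) size distribution by inverse-CDF sampling and computes a P32 intensity (fracture area per unit rock volume). The distribution choice and all parameter values are hypothetical and are not taken from the report.

```python
import numpy as np

def sample_powerlaw_radii(n, r_min, k, rng):
    """Draw n fracture radii >= r_min from a Pareto (power-law) size
    distribution with shape exponent k, via inverse-CDF sampling
    (hypothetical parameterization, for illustration only)."""
    u = rng.uniform(size=n)
    return r_min * (1.0 - u) ** (-1.0 / k)

def p32_intensity(radii, volume):
    """P32 fracture intensity: total fracture area per unit rock volume,
    assuming disc-shaped fractures."""
    return (np.pi * radii ** 2).sum() / volume
```

    In a DFN realization, sampled radii are combined with sampled orientations and locations until the target P32 intensity is reached.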

  13. The Model of Emissions of Gases and Aerosols from Nature version 2.1 (MEGAN2.1: an extended and updated framework for modeling biogenic emissions

    Directory of Open Access Journals (Sweden)

    A. B. Guenther

    2012-06-01

    Full Text Available The Model of Emissions of Gases and Aerosols from Nature version 2.1 (MEGAN2.1) is a modeling framework for estimating fluxes of 147 biogenic compounds between terrestrial ecosystems and the atmosphere, using simple mechanistic algorithms to account for the major known processes controlling biogenic emissions. It is available as an offline code and has also been coupled into land surface models and atmospheric chemistry models. MEGAN2.1 is an update of the previous versions, including MEGAN2.0 for isoprene emissions and MEGAN2.04, which estimates emissions of 138 compounds. Isoprene comprises about half of the estimated total global biogenic volatile organic compound (BVOC) emission of 1 Pg (1000 Tg, or 10¹⁵ g). Another 10 compounds, including methanol, ethanol, acetaldehyde, acetone, α-pinene, β-pinene, t-β-ocimene, limonene, ethene, and propene, together contribute another 30% of the estimated emission. An additional 20 compounds (mostly terpenoids) are associated with another 17% of the total emission, with the remaining 3% distributed among 125 compounds. Emissions of 41 monoterpenes and 32 sesquiterpenes together comprise about 15% and 3%, respectively, of the total global BVOC emission. Tropical trees cover about 18% of the global land surface and are estimated to be responsible for 60% of terpenoid emissions and 48% of other VOC emissions. Other trees cover about the same area but are estimated to contribute only about 10% of total emissions. The magnitude of the emissions estimated with MEGAN2.1 is within the range of estimates reported using other approaches, and much of the difference between reported values can be attributed to land cover and meteorological driving variables. The offline version of the MEGAN2.1 source code and driving variables is available from http://acd.ucar.edu/~guenther/MEGAN/MEGAN.htm and the version integrated into the
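    MEGAN estimates each compound's flux as a landscape emission factor scaled by an emission activity factor, F = ε · γ. The sketch below keeps only two of the activity terms (a light-independent exponential temperature response and a linear LAI factor) and omits the light, leaf-age, and soil-moisture factors of the full MEGAN2.1 algorithm; the default parameter values are illustrative.

```python
import math

def megan_emission(epsilon, temperature_k, lai, beta=0.1, t_standard=303.0):
    """Simplified sketch of the MEGAN emission form F = epsilon * gamma.
    Only a light-independent temperature response gamma_T = exp(beta*(T - Ts))
    and a linear LAI factor are retained; the full MEGAN2.1 gamma also includes
    light, leaf-age, and soil-moisture activity factors (omitted here)."""
    gamma_t = math.exp(beta * (temperature_k - t_standard))  # temperature activity
    return epsilon * gamma_t * lai                           # flux, same units as epsilon
```

    At the standard temperature and unit LAI the activity factor is 1, so the flux equals the emission factor ε.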

  14. Description of the Mountain Cloud Chemistry Program version of the PLUVIUS MOD 5. 0 reactive storm simulation model

    Energy Technology Data Exchange (ETDEWEB)

    Luecken, D.J.; Whiteman, C.D.; Chapman, E.G.; Andrews, G.L.; Bader, D.C.

    1987-07-01

    Damage to forest ecosystems on mountains in the eastern United States has prompted a study conducted for the US Environmental Protection Agency's Mountain Cloud Chemistry Program (MCCP). This study has led to the development of a numerical model called MCCP PLUVIUS, which has been used to investigate the chemical transformations and cloud droplet deposition in shallow, nonprecipitating orographic clouds. The MCCP PLUVIUS model was developed as a specialized version of the existing PLUVIUS MOD 5.0 reactive storm model. It is capable of simulating aerosol scavenging, nonreactive gas scavenging, aqueous-phase SO₂ reactions, and cloud water deposition. A description of the new model is provided along with information on model inputs and outputs, as well as suggestions for its further development. The MCCP PLUVIUS incorporates a new method to determine the depth of the layer of air which flows over a mountaintop to produce an orographic cloud event. It provides a new method for calculating hydrogen ion concentrations, and provides updated expressions and values for solubility, dissociation and reaction rate constants.

  15. FAME: Friendly Applied Modelling Environment. Version 2.2 User Manual

    NARCIS (Netherlands)

    Wortelboer FG; Aldenberg T

    1989-01-01

    FAME (Friendly Applied Modelling Environment) is a general modelling environment developed for the dynamic simulation of water-quality models. The models are described as sets of differential equations, using a general notation. No knowledge of a

  16. Technical manual for basic version of the Markov chain nest productivity model (MCnest)

    Science.gov (United States)

    The Markov Chain Nest Productivity Model (or MCnest) integrates existing toxicity information from three standardized avian toxicity tests with information on species life history and the timing of pesticide applications relative to the timing of avian breeding seasons to quantit...

  17. User’s manual for basic version of MCnest Markov chain nest productivity model

    Science.gov (United States)

    The Markov Chain Nest Productivity Model (or MCnest) integrates existing toxicity information from three standardized avian toxicity tests with information on species life history and the timing of pesticide applications relative to the timing of avian breeding seasons to quantit...

  18. Illustrating and homology modeling the proteins of the Zika virus [version 2; referees: 2 approved]

    Directory of Open Access Journals (Sweden)

    Sean Ekins

    2016-09-01

    Full Text Available The Zika virus (ZIKV) is a flavivirus of the family Flaviviridae, similar to dengue virus, yellow fever virus, and West Nile virus. Recent outbreaks in South America, Latin America, the Caribbean, and in particular Brazil have led to concern over the spread of the disease and its potential to cause Guillain-Barré syndrome and microcephaly. Although ZIKV has been known for over 60 years, knowledge of the virus is limited, with few publications and no crystal structures. No antivirals have been tested against it either in vitro or in vivo. ZIKV therefore epitomizes a neglected disease. Several steps have been proposed that could be taken to initiate ZIKV antiviral drug discovery, using both high-throughput screens and structure-based design based on homology models for the key proteins. We now describe preliminary homology models created for NS5, FtsJ, NS4B, NS4A, HELICc, DEXDc, peptidase S7, NS2B, NS2A, NS1, E stem, glycoprotein M, propeptide, capsid, and glycoprotein E using SWISS-MODEL. Eleven of the 15 models pass our model quality criteria for further use. While a ZIKV glycoprotein E homology model was initially described in the immature conformation as a trimer, we now describe the mature dimer conformer, which allowed the construction of an illustration of the complete virion. By comparing illustrations of ZIKV based on this new homology model and the dengue virus crystal structure, we propose potential differences that could be exploited for antiviral and vaccine design. The prediction of glycosylation sites on this protein may also be useful in this regard. While we await a cryo-EM structure of ZIKV and eventual crystal structures of the individual proteins, these homology models provide the community with a starting point for structure-based design of drugs and vaccines as well as for computational virtual screening.

  19. Development and validation of THUMS version 5 with 1D muscle models for active and passive automotive safety research.

    Science.gov (United States)

    Kimpara, Hideyuki; Nakahira, Yuko; Iwamoto, Masami

    2016-08-01

    Accurately predicting occupant kinematics is critical to better understanding injury mechanisms during an automotive crash event. The objectives of this study were to develop and validate a finite element (FE) model of the human body integrated with an active muscle model, called the Total HUman Model for Safety (THUMS) version 5, which has the body size of the 50th-percentile American adult male (AM50). This model is characterized by being able to generate force owing to muscle tone and to predict the occupant response during an automotive crash event. Deformable materials were assigned to all body parts of the THUMS model in order to evaluate injury probabilities. Each muscle was modeled as a Hill-type muscle model with 800 muscle-tendon compartments of 1D truss and seatbelt elements spanning the joints in the neck, thorax, lumbar region, and upper and lower extremities. THUMS was validated against 36 series of post-mortem human surrogate (PMHS) and volunteer tests on frontal, lateral, and rear impacts. The muscle architectural and kinetic properties for the hip, knee, shoulder, and elbow joints were validated in terms of the moment arms and maximum isometric joint torques over a wide range of joint angles. The muscular moment arms and maximum joint torques estimated from the THUMS occupant model with 1D muscles agreed with the experimental data over a wide range of joint angles. Therefore, this model has the potential to predict occupant kinematics and injury outcomes while accounting for the human body motions associated with various postures, such as sitting or standing.
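    A Hill-type muscle model of the kind used for the 1D compartments can be sketched as an active force (activation × maximum isometric force × force-length × force-velocity factors) plus a passive elastic force. The curve shapes and constants below are generic textbook choices, not THUMS parameters.

```python
import math

def hill_force(a, lm, vm, f_max=1000.0):
    """Generic Hill-type muscle sketch (hypothetical constants, not THUMS).

    a:  activation in [0, 1]
    lm: fibre length normalised by optimal fibre length
    vm: shortening velocity normalised by maximum shortening velocity
    """
    fl = math.exp(-((lm - 1.0) / 0.45) ** 2)                   # active force-length (Gaussian)
    fv = (1.0 - vm) / (1.0 + 4.0 * vm) if vm >= 0.0 else 1.3   # force-velocity (lengthening plateau simplified)
    fp = 0.0 if lm <= 1.0 else 2.0 * (lm - 1.0) ** 2           # passive stretch response
    return a * f_max * fl * fv + f_max * fp                    # total force [N]
```

    At optimal length and zero velocity, full activation returns f_max, while a stretched but inactive muscle still produces passive force, the behavior that lets such models represent both active bracing and passive occupant response.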

  20. Geological discrete fracture network model for the Olkiluoto site, Eurajoki, Finland. Version 2.0

    Energy Technology Data Exchange (ETDEWEB)

    Fox, A.; Forchhammer, K.; Pettersson, A. [Golder Associates AB, Stockholm (Sweden); La Pointe, P.; Lim, D-H. [Golder Associates Inc. (Finland)

    2012-06-15

    This report describes the methods, analyses, and conclusions of the modeling team in the production of the 2010 revision to the geological discrete fracture network (DFN) model for the Olkiluoto site in Finland. The geological DFN is a statistical model for stochastically simulating rock fractures and minor faults at scales ranging from approximately 0.05 m to approximately 565 m; deformation zones are expressly excluded from the DFN model. The DFN model is presented as a series of tables summarizing probability distributions for several parameters necessary for fracture modeling: fracture orientation, fracture size, fracture intensity, and associated spatial constraints. The geological DFN is built from data collected during site characterization (SC) activities at Olkiluoto, which has been selected to function as a final deep geological repository for spent fuel and nuclear waste from the Finnish nuclear power program. Data used in the DFN analyses include fracture maps from surface outcrops and trenches, geological and structural data from cored drillholes, and fracture information collected during the construction of the main tunnels and shafts at the ONKALO laboratory. Unlike the initial geological DFN, which was focused on the vicinity of the ONKALO tunnel, the 2010 revision presents a model parameterization for the entire island. Fracture domains are based on the tectonic subdivisions at the site (northern, central, and southern tectonic units) presented in the Geological Site Model (GSM), and are further subdivided along the intersections of major brittle-ductile zones. The rock volume at Olkiluoto is dominated by three distinct fracture sets: subhorizontally dipping fractures striking north-northeast and dipping to the east, subparallel to the mean bedrock foliation direction; a subvertically dipping fracture set striking roughly north-south; and a subvertically dipping fracture set striking approximately east-west. The subhorizontally-dipping fractures

  1. Water, Energy, and Biogeochemical Model (WEBMOD), user’s manual, version 1

    Science.gov (United States)

    Webb, Richard M.T.; Parkhurst, David L.

    2017-02-08

    The Water, Energy, and Biogeochemical Model (WEBMOD) uses the framework of the U.S. Geological Survey (USGS) Modular Modeling System to simulate fluxes of water and solutes through watersheds. WEBMOD divides watersheds into model response units (MRU) where fluxes and reactions are simulated for the following eight hillslope reservoir types: canopy; snowpack; ponding on impervious surfaces; O-horizon; two reservoirs in the unsaturated zone, which represent preferential flow and matrix flow; and two reservoirs in the saturated zone, which also represent preferential flow and matrix flow. The reservoir representing ponding on impervious surfaces, currently not functional (2016), will be implemented once the model is applied to urban areas. MRUs discharge to one or more stream reservoirs that flow to the outlet of the watershed. Hydrologic fluxes in the watershed are simulated by modules derived from the USGS Precipitation Runoff Modeling System; the National Weather Service Hydro-17 snow model; and a topography-driven hydrologic model (TOPMODEL). Modifications to the standard TOPMODEL include the addition of heterogeneous vertical infiltration rates; irrigation; lateral and vertical preferential flows through the unsaturated zone; pipe flow draining the saturated zone; gains and losses to regional aquifer systems; and the option to simulate baseflow discharge by using an exponential, parabolic, or linear decrease in transmissivity. PHREEQC, an aqueous geochemical model, is incorporated to simulate chemical reactions as waters evaporate, mix, and react within the various reservoirs of the model. The reactions that can be specified for a reservoir include equilibrium reactions among water; minerals; surfaces; exchangers; and kinetic reactions such as kinetic mineral dissolution or precipitation, biologically mediated reactions, and radioactive decay. WEBMOD also simulates variations in the concentrations of the stable isotopes deuterium and oxygen-18 as a result of
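    One of the baseflow options mentioned above, an exponential decrease of transmissivity with depth, gives the classic TOPMODEL discharge law Qb = Q0 * exp(-S/m), where S is the mean saturation deficit and m the transmissivity decay parameter. A minimal sketch (parameter values illustrative):

```python
import math

def baseflow(saturation_deficit, q0, m):
    """TOPMODEL-style baseflow for an exponential transmissivity profile:
    Qb = Q0 * exp(-S / m). saturation_deficit (S) and m are in metres of
    water; q0 is the discharge at zero deficit (illustrative values)."""
    return q0 * math.exp(-saturation_deficit / m)
```

    As the catchment dries out (S grows), baseflow recedes exponentially; the parabolic and linear options in WEBMOD substitute different transmissivity profiles in the same framework.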

  2. Representing winter wheat in the Community Land Model (version 4.5)

    Science.gov (United States)

    Lu, Yaqiong; Williams, Ian N.; Bagley, Justin E.; Torn, Margaret S.; Kueppers, Lara M.

    2017-05-01

    Winter wheat is a staple crop for global food security and the dominant vegetation cover for a significant fraction of Earth's croplands. As such, it plays an important role in carbon cycling and land-atmosphere interactions in these key regions. Accurate simulation of winter wheat growth is crucial not only for future yield prediction under a changing climate, but also for accurately predicting the energy and water cycles of winter-wheat-dominated regions. We modified the winter wheat model in the Community Land Model (CLM) to better simulate winter wheat leaf area index, latent heat flux, net ecosystem exchange of CO2, and grain yield. The modifications included schemes to represent vernalization as well as frost tolerance and damage. We calibrated three key parameters (minimum planting temperature, maximum crop growth days, and the initial value of the leaf carbon allocation coefficient) and modified the grain carbon allocation algorithm for simulations at the US Southern Great Plains ARM site (US-ARM), and validated the model performance at eight additional sites across North America. We found that the new winter wheat model improved the prediction of monthly variation in leaf area index and reduced the root-mean-square error (RMSE) of latent heat flux and net ecosystem exchange by 41 and 35 %, respectively, during the spring growing season. The model accurately simulated the interannual variation in yield at the US-ARM site, but underestimated yield by 35 % at sites and in regions (northwestern and southeastern US) with historically greater yields.

  3. Hybrid Model of the Context Dependent Vestibulo-Ocular Reflex: Implications for Vergence-Version Interactions

    Directory of Open Access Journals (Sweden)

    Mina Ranjbaran

    2015-02-01

    Full Text Available The vestibulo-ocular reflex (VOR) is an involuntary eye movement evoked by head movements. It is also influenced by viewing distance. This paper presents a hybrid nonlinear bilateral model for the horizontal angular vestibulo-ocular reflex (AVOR) in the dark. The model is based on known interconnections between saccadic burst circuits in the brainstem and ocular premotor areas in the vestibular nuclei during fast and slow phase intervals of nystagmus. We implemented a viable switching strategy for the timing of nystagmus events to allow emulation of real nystagmus data. The performance of the hybrid model is evaluated with simulations, and the results are consistent with experimental observations. The hybrid model replicates realistic AVOR nystagmus patterns during sinusoidal or step head rotations in the dark and during interactions with vergence, e.g. fixation distance. By simply assigning proper nonlinear neural computations at the premotor level, the model replicates all reported experimental observations. This work sheds light on potential underlying neural mechanisms driving the context-dependent AVOR and explains contradictory results in the literature. Moreover, context-dependent behaviors in more complex motor systems could also rely on local nonlinear neural computations.

  4. Long-term Industrial Energy Forecasting (LIEF) model (18-sector version)

    Energy Technology Data Exchange (ETDEWEB)

    Ross, M.H. (Univ. of Michigan, Ann Arbor, MI (United States). Dept. of Physics); Thimmapuram, P.; Fisher, R.E.; Maciorowski, W. (Argonne National Lab., IL (United States))

    1993-05-01

    The new 18-sector Long-term Industrial Energy Forecasting (LIEF) model is designed for convenient study of future industrial energy consumption, taking into account the composition of production, energy prices, and certain kinds of policy initiatives. Electricity and aggregate fossil fuels are modeled. Changes in energy intensity in each sector are driven by autonomous technological improvement (price-independent trend), the opportunity for energy-price-sensitive improvements, energy price expectations, and investment behavior. Although this decision-making framework involves more variables than the simplest econometric models, it enables direct comparison of an econometric approach with conservation supply curves from detailed engineering analysis. It also permits explicit consideration of a variety of policy approaches other than price manipulation. The model is tested in terms of historical data for nine manufacturing sectors, and parameters are determined for forecasting purposes. Relatively uniform and satisfactory parameters are obtained from this analysis. In this report, LIEF is also applied to create base-case and demand-side management scenarios to briefly illustrate modeling procedures and outputs.
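
    The decision-making framework described above combines an autonomous (price-independent) efficiency trend with price-induced improvements. A toy projection, assuming a simple multiplicative form with constant elasticity and illustrative parameter values (not LIEF's actual equations):

    ```python
    def project_intensity(i0, yearly_prices, autonomous=0.01,
                          elasticity=-0.3, base_price=1.0):
        """Energy intensity path: the autonomous decline compounds over time,
        while the price term is a level response with constant elasticity."""
        path = []
        for t, p in enumerate(yearly_prices, start=1):
            path.append(i0 * (1.0 - autonomous) ** t
                           * (p / base_price) ** elasticity)
        return path
    ```

    With constant prices only the autonomous trend operates; a sustained price rise lowers the whole projected intensity path, which is the qualitative behaviour the model's parameters are calibrated to reproduce.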

  6. Hybrid model of the context dependent vestibulo-ocular reflex: implications for vergence-version interactions.

    Science.gov (United States)

    Ranjbaran, Mina; Galiana, Henrietta L

    2015-01-01

    The vestibulo-ocular reflex (VOR) is an involuntary eye movement evoked by head movements. It is also influenced by viewing distance. This paper presents a hybrid nonlinear bilateral model for the horizontal angular vestibulo-ocular reflex (AVOR) in the dark. The model is based on known interconnections between saccadic burst circuits in the brainstem and ocular premotor areas in the vestibular nuclei during fast and slow phase intervals of nystagmus. We implemented a viable switching strategy for the timing of nystagmus events to allow emulation of real nystagmus data. The performance of the hybrid model is evaluated with simulations, and results are consistent with experimental observations. The hybrid model replicates realistic AVOR nystagmus patterns during sinusoidal or step head rotations in the dark and during interactions with vergence, e.g., fixation distance. By simply assigning proper nonlinear neural computations at the premotor level, the model replicates all reported experimental observations. This work sheds light on potential underlying neural mechanisms driving the context dependent AVOR and explains contradictory results in the literature. Moreover, context-dependent behaviors in more complex motor systems could also rely on local nonlinear neural computations.

  7. A Prototypicality Validation of the Comprehensive Assessment of Psychopathic Personality (CAPP) Model Spanish Version.

    Science.gov (United States)

    Flórez, Gerardo; Casas, Alfonso; Kreis, Mette K F; Forti, Leonello; Martínez, Joaquín; Fernández, Juan; Conde, Manuel; Vázquez-Noguerol, Raúl; Blanco, Tania; Hoff, Helge A; Cooke, David J

    2015-10-01

    The Comprehensive Assessment of Psychopathic Personality (CAPP) is a newly developed, lexically based, conceptual model of psychopathy. The content validity of the Spanish language CAPP model was evaluated using prototypicality analysis. Prototypicality ratings were collected from 187 mental health experts and from samples of 143 health professionals and 282 community residents. Across the samples the majority of CAPP items were rated as highly prototypical of psychopathy. The Self, Dominance, and Attachment domains were evaluated as being more prototypical than the Behavioral and Cognitive domains. These findings are consistent with findings from similar studies in other languages and provide further support for the content validation of the CAPP model across languages and the lexical approach.

  8. An integrated assessment modeling framework for uncertainty studies in global and regional climate change: the MIT IGSM-CAM (version 1.0)

    OpenAIRE

    Monier, E.; Scott, J R; A. P. Sokolov; C. E. Forest; C. A. Schlosser

    2013-01-01

    This paper describes a computationally efficient framework for uncertainty studies in global and regional climate change. In this framework, the Massachusetts Institute of Technology (MIT) Integrated Global System Model (IGSM), an integrated assessment model that couples an Earth system model of intermediate complexity to a human activity model, is linked to the National Center for Atmospheric Research (NCAR) Community Atmosphere Model (CAM). Since the MIT IGSM-CAM framework (version 1.0) inc...

  9. The Flexible Global Ocean-Atmosphere-Land System Model, Spectral Version 2: FGOALS-s2

    Institute of Scientific and Technical Information of China (English)

    BAO Qing; LIN Pengfei; ZHOU Tianjun; LIU Yimin; YU Yongqiang; WU Guoxiong; HE Bian

    2013-01-01

    The Flexible Global Ocean-Atmosphere-Land System Model, Spectral Version 2 (FGOALS-s2) was used to simulate realistic climates and to study anthropogenic influences on climate change. Specifically, FGOALS-s2 was integrated within the Coupled Model Intercomparison Project Phase 5 (CMIP5) to conduct coordinated experiments that will provide valuable scientific information to climate research communities. The performance of FGOALS-s2 was assessed in simulating major climate phenomena, and both the strengths and weaknesses of the model are documented. The results indicate that FGOALS-s2 successfully overcomes climate drift and realistically models global and regional climate characteristics, including SST, precipitation, and atmospheric circulation. In particular, the model accurately captures annual and semi-annual SST cycles in the equatorial Pacific Ocean, and the main characteristic features of the Asian summer monsoon, which include a low-level southwestern jet and five monsoon rainfall centers. The simulated climate variability was further examined in terms of teleconnections, leading modes of global SST (namely, ENSO), the Pacific Decadal Oscillation (PDO), and changes in 19th-20th century climate. The analysis demonstrates that FGOALS-s2 realistically simulates extra-tropical teleconnection patterns of large-scale climate and irregular ENSO periods. The model gives fairly reasonable reconstructions of the spatial patterns of the PDO and of global monsoon changes in the 20th century. However, because the indirect effects of aerosols are not included in the model, the simulated global temperature change during the period 1850-2005 is greater than the observed warming, by 0.6°C. Some other shortcomings of the model are also noted.

  10. Unit testing, model validation, and biological simulation [version 1; referees: 2 approved, 1 approved with reservations

    Directory of Open Access Journals (Sweden)

    Gopal P. Sarma

    2016-08-01

    The growth of the software industry has gone hand in hand with the development of tools and cultural practices for ensuring the reliability of complex pieces of software. These tools and practices are now acknowledged to be essential to the management of modern software. As computational models and methods have become increasingly common in the biological sciences, it is important to examine how these practices can accelerate biological software development and improve research quality. In this article, we give a focused case study of our experience with the practices of unit testing and test-driven development in OpenWorm, an open-science project aimed at modeling Caenorhabditis elegans. We identify and discuss the challenges of incorporating test-driven development into a heterogeneous, data-driven project, as well as the role of model validation tests, a category of tests unique to software that expresses scientific models.
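
    A model validation test, as distinct from a conventional unit test, asserts a scientific property of a model's output rather than a purely software property. A minimal sketch (the membrane-potential function here is a hypothetical stand-in, not OpenWorm code):

    ```python
    import math

    def k_reversal_potential_mv(k_in=140.0, k_out=5.0, temperature_k=295.0):
        """Nernst potential for K+ in millivolts -- stands in for a model output."""
        R, F, z = 8.314, 96485.0, 1
        return 1000.0 * (R * temperature_k) / (z * F) * math.log(k_out / k_in)

    def test_reversal_potential_is_physiological():
        # Validation criterion: for these ion concentrations the K+ reversal
        # potential should be strongly negative, roughly -100 to -70 mV.
        v = k_reversal_potential_mv()
        assert -100.0 < v < -70.0

    test_reversal_potential_is_physiological()
    ```

    A suite of such assertions encodes domain knowledge as executable checks, so a refactoring that silently breaks the science fails the build just as a broken interface would.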

  11. Hydrogeological DFN modelling using structural and hydraulic data from KLX04. Preliminary site description Laxemar subarea - version 1.2

    Energy Technology Data Exchange (ETDEWEB)

    Follin, Sven [SF GeoLogic AB, Taeby (Sweden); Stigsson, Martin [Swedish Nuclear Fuel and Waste Management Co., Stockholm (Sweden); Svensson, Urban [Computer-aided Fluid Engineering AB, Norrkoeping (Sweden)

    2006-04-15

    SKB is conducting site investigations for a high-level nuclear waste repository in fractured crystalline rocks at two coastal areas in Sweden. The two candidate areas are named Forsmark and Simpevarp. The site characterisation work is divided into two phases, an initial site investigation phase (ISI) and a complete site investigation phase (CSI). The results of the ISI phase are used as a basis for deciding on the subsequent CSI phase. On the basis of the CSI investigations a decision is made as to whether detailed characterisation will be performed (including sinking of a shaft). An integrated component in the site characterisation work is the development of site descriptive models. These comprise basic models in three dimensions with an accompanying text description. Central in the modelling work is the geological model which provides the geometrical context in terms of a model of deformation zones and the less fractured rock mass between the zones. Using the geological and geometrical description models as a basis, descriptive models for other disciplines (surface ecosystems, hydrogeology, hydrogeochemistry, rock mechanics, thermal properties and transport properties) will be developed. Great care is taken to arrive at a general consistency in the description of the various models and assessment of uncertainty and possible needs of alternative models. The main objective of this study is to support the development of a hydrogeological DFN model (Discrete Fracture Network) for the Preliminary Site Description of the Laxemar area on a regional-scale (SDM version L1.2). A more specific objective of this study is to assess the propagation of uncertainties in the geological DFN modelling reported for L1.2 into the groundwater flow modelling. An improved understanding is necessary in order to gain credibility for the Site Description in general and the hydrogeological description in particular. The latter will serve as a basis for describing the present

  12. Preliminary site description: Groundwater flow simulations. Simpevarp area (version 1.1) modelled with CONNECTFLOW

    Energy Technology Data Exchange (ETDEWEB)

    Hartley, Lee; Worth, David [Serco Assurance Ltd, Risley (United Kingdom); Gylling, Bjoern; Marsic, Niko [Kemakta Konsult AB, Stockholm (Sweden); Holmen, Johan [Golder Associates, Stockholm (Sweden)

    2004-08-01

    The main objective of this study is to assess the role of known and unknown hydrogeological conditions in the present-day distribution of saline groundwater at the Simpevarp and Laxemar sites. An improved understanding of the paleo-hydrogeology is necessary in order to gain credibility for the Site Descriptive Model in general and the Site Hydrogeological Description in particular. This is to serve as a basis for describing the present hydrogeological conditions as well as for predicting future hydrogeological conditions. This objective implies testing of: geometrical alternatives in the structural geology and bedrock fracturing, variants in the initial and boundary conditions, and parameter uncertainties (i.e. uncertainties in the hydraulic property assignment). This testing is necessary in order to evaluate the impact of the specified components on the groundwater flow field and to motivate proposals for further investigation of the hydrogeological conditions at the site. The general methodology for modelling transient salt transport and groundwater flow using CONNECTFLOW that was developed for Forsmark has also been applied successfully at Simpevarp. Because of time constraints, only a key set of variants was performed, focussing on the influences of DFN model parameters, the kinematic porosity, and the initial condition. The salinity data in deep boreholes available at the time of the project were too limited to allow a good calibration exercise. However, the model predictions are compared below with the available data from KLX01 and KLX02. Once more salinity data are available, it may be possible to draw more definite conclusions based on the differences between variants. For now, the differences should be used only to understand the sensitivity of the models to the various input parameters.
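
    The variable-density coupling at the heart of such salt-transport simulations is commonly a linear equation of state linking fluid density to salt mass fraction. A sketch, using the roughly 700 kg/m³ slope often quoted for seawater-like brines (the coefficients are illustrative defaults, not values taken from the CONNECTFLOW model files):

    ```python
    def fluid_density(c_mass_fraction, rho0=1000.0, drho_dc=700.0):
        """Linear density model: rho = rho0 + (drho/dC) * C, where C is the
        dissolved-solids mass fraction (kg solute per kg fluid)."""
        return rho0 + drho_dc * c_mass_fraction
    ```

    Seawater at C ≈ 0.0357 then comes out near 1025 kg/m³; it is this density contrast that drives the up-coning and flushing behaviour the variants explore.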

  13. Stochastic empirical loading and dilution model (SELDM) version 1.0.0

    Science.gov (United States)

    Granato, Gregory E.

    2013-01-01

    The Stochastic Empirical Loading and Dilution Model (SELDM) is designed to transform complex scientific data into meaningful information about the risk of adverse effects of runoff on receiving waters, the potential need for mitigation measures, and the potential effectiveness of such management measures for reducing these risks. The U.S. Geological Survey developed SELDM in cooperation with the Federal Highway Administration to help develop planning-level estimates of event mean concentrations, flows, and loads in stormwater from a site of interest and from an upstream basin. Planning-level estimates are defined as the results of analyses used to evaluate alternative management measures; planning-level estimates are recognized to include substantial uncertainties (commonly orders of magnitude). SELDM uses information about a highway site, the associated receiving-water basin, precipitation events, stormflow, water quality, and the performance of mitigation measures to produce a stochastic population of runoff-quality variables. SELDM provides input statistics for precipitation, prestorm flow, runoff coefficients, and concentrations of selected water-quality constituents from national datasets. Input statistics may be selected on the basis of the latitude, longitude, and physical characteristics of the site of interest and the upstream basin. The user also may derive and input statistics for each variable that are specific to a given site of interest or a given area. SELDM is a stochastic model because it uses Monte Carlo methods to produce the random combinations of input variable values needed to generate the stochastic population of values for each component variable. SELDM calculates the dilution of runoff in the receiving waters and the resulting downstream event mean concentrations and annual average lake concentrations. Results are ranked, and plotting positions are calculated, to indicate the level of risk of adverse effects caused by runoff concentrations.
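
    The core mass-balance step can be sketched as a small Monte Carlo loop: sample runoff and upstream flows and concentrations, mix them, and rank the results for plotting positions. The lognormal parameters below are placeholders, not SELDM's fitted national statistics:

    ```python
    import random

    def downstream_emc(n=10000, seed=42):
        """Stochastic dilution of runoff in a receiving stream (illustrative).
        Downstream event mean concentration by mass balance:
            C_down = (C_r*Q_r + C_u*Q_u) / (Q_r + Q_u)
        """
        rng = random.Random(seed)
        results = []
        for _ in range(n):
            q_r = rng.lognormvariate(0.0, 1.0)   # runoff flow
            q_u = rng.lognormvariate(1.5, 1.0)   # upstream (prestorm) flow
            c_r = rng.lognormvariate(3.0, 0.8)   # runoff concentration
            c_u = rng.lognormvariate(1.0, 0.8)   # upstream concentration
            results.append((c_r * q_r + c_u * q_u) / (q_r + q_u))
        return sorted(results)                   # ranked for plotting positions
    ```

    The ranked population is exactly what supports risk statements of the form "the downstream concentration exceeds a water-quality criterion in X percent of storms."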

  14. The big challenges in modeling human and environmental well-being [version 1; referees: 3 approved

    Directory of Open Access Journals (Sweden)

    Shripad Tuljapurkar

    2016-04-01

    This article is a selective review of quantitative research, historical and prospective, that is needed to inform sustainable development policy. I start with a simple framework to highlight how demography and productivity shape human well-being. I use that to discuss three sets of issues and corresponding challenges to modeling: first, population prehistory and early human development and their implications for the future; second, the multiple distinct dimensions of human and environmental well-being and the meaning of sustainability; and, third, inequality as a phenomenon triggered by development and models to examine changing inequality and its consequences. I conclude with a few words about other important factors: political, institutional, and cultural.

  15. A new version of variational integrated technology for environmental modeling with assimilation of available data

    Science.gov (United States)

    Penenko, Vladimir; Tsvetova, Elena; Penenko, Aleksey

    2014-05-01

    A modeling technology based on coupled models of atmospheric dynamics and chemistry is presented [1-3]. It is the result of applying variational methods in combination with methods of decomposition and splitting. The idea of Euler's integrating factors combined with the technique of adjoint problems is also used. In online technologies, a significant part of the algorithmic and computational work consists in solving problems of the convection-diffusion-reaction type and in organizing data assimilation techniques based on them. For convection-diffusion equations, the methodology yields unconditionally stable and monotone discrete-analytical schemes within the framework of decomposition and splitting methods. These schemes are exact for locally one-dimensional problems with respect to the spatial variables. For stiff systems of equations describing the transformation of gas and aerosol substances, monotone and stable schemes are also obtained; they are implemented by non-iterative algorithms. By construction, all schemes for the different components of the state function are structurally uniform, and they are coordinated among themselves in the sense of forward and inverse modeling. Variational principles are constructed taking into account the fact that the behavior of the different dynamic and chemical components of the state function is characterized by high variability and uncertainty. Information on the parameters of models, sources, and emission impacts is also not determined precisely. Therefore, to obtain consistent solutions, we construct methods of sensitivity theory that take the influence of uncertainty into account. For this purpose, new methods are proposed for assimilating data on hydrodynamic fields and gas-aerosol substances measured by different observing systems. Optimization criteria for the data assimilation problems are defined so that they include a set of functionals evaluating the total measure of uncertainty. The latter are explicitly introduced into

  16. Business models for renewable energy in the built environment. Updated version

    Energy Technology Data Exchange (ETDEWEB)

    Wuertenberger, L.; Menkveld, M.; Vethman, P.; Van Tilburg, X. [ECN Policy Studies, Amsterdam (Netherlands); Bleyl, J.W. [Energetic Solutions, Graz (Austria)

    2012-04-15

    The RE-BIZZ project aims to give policy makers and market actors insight into how new and innovative business models (and/or policy measures) can stimulate the deployment of renewable energy technologies (RET) and energy efficiency (EE) measures in the built environment. The project was initiated and funded by the IEA Implementing Agreement for Renewable Energy Technology Deployment (IEA-RETD). It analysed ten business models in three categories, among them different types of Energy Service Companies (ESCOs), developing properties certified with a 'green' building label, building owners profiting from rent increases after EE measures, Property Assessed Clean Energy (PACE) financing, on-bill financing, and leasing of RET equipment. For each model the analysis covered the organisational and financial structure, the existing market and policy context, and strengths, weaknesses, opportunities and threats (SWOT). The study concludes with recommendations for policy makers and other market actors.

  17. First implementation of secondary inorganic aerosols in the MOCAGE version R2.15.0 chemistry transport model

    Science.gov (United States)

    Guth, J.; Josse, B.; Marécal, V.; Joly, M.; Hamer, P.

    2016-01-01

    In this study we develop a secondary inorganic aerosol (SIA) module for the MOCAGE chemistry transport model developed at CNRM. The aim is to have a module suitable for running at different model resolutions and for operational applications with reasonable computing times. Based on the ISORROPIA II thermodynamic equilibrium module, the new version of the model is presented and evaluated at both the global and regional scales. The results show high concentrations of secondary inorganic aerosols in the most polluted regions: Europe, Asia and the eastern part of North America. Asia shows higher sulfate concentrations than other regions thanks to emission reductions in Europe and North America. Using two simulations, one with and the other without secondary inorganic aerosol formation, the global model outputs are compared to previous studies, to MODIS AOD retrievals, and also to in situ measurements from the HTAP database. The model shows a better agreement with MODIS AOD retrievals in all geographical regions after introducing the new SIA scheme. It also provides a good statistical agreement with in situ measurements of secondary inorganic aerosol composition: sulfate, nitrate and ammonium. In addition, the simulation with SIA generally gives a better agreement with observations for secondary inorganic aerosol precursors (nitric acid, sulfur dioxide, ammonia), in particular with a reduction of the modified normalized mean bias (MNMB). At the regional scale, over Europe, the model simulation with SIA is compared to the in situ measurements from the EMEP database and shows a good agreement with secondary inorganic aerosol composition. The results at the regional scale are consistent with those obtained from the global simulations. The AIRBASE database was used to compare the model to regulated air quality pollutants: particulate matter, ozone and nitrogen dioxide concentrations. Introduction of the SIA in MOCAGE provides a reduction in the PM2.5 MNMB of 0.44 on a
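
    The modified normalized mean bias used in this evaluation is a standard metric: it is bounded in [−2, 2] and treats over- and under-prediction symmetrically, which is why it is preferred over a simple mean bias for concentration fields.

    ```python
    def mnmb(model, obs):
        """Modified normalized mean bias:
        MNMB = (2/N) * sum_i (m_i - o_i) / (m_i + o_i)."""
        pairs = list(zip(model, obs))
        return (2.0 / len(pairs)) * sum((m - o) / (m + o) for m, o in pairs)
    ```

    A perfect simulation gives MNMB = 0; a model that is everywhere a factor of two high gives 2/3 regardless of the absolute concentration level.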

  18. User’s Guide for COMBIMAN Programs (COMputerized BIomechanical MAN-Model) Version 5

    Science.gov (United States)

    1982-04-01

    …accomplishing this has been to build mock-ups and use an undetermined number of "representative" test pilots to evaluate the work environment and… the "representative" pilots depends on the availability of pilots and the whims of the designers. The COMputerized BIomechanical MAN-model (COMBIMAN… defined with letter S, is the field of stereovision, which is the field visible to both eyes simultaneously. The field defined with letter F

  19. User's guide to the META-Net economic modeling system. Version 1.2

    Energy Technology Data Exchange (ETDEWEB)

    Lamont, A.

    1994-11-24

    In a market economy, demands for commodities are met through various technologies and resources. Markets select the technologies and resources to meet these demands based on their costs. Over time, the competitiveness of different technologies can change due to the exhaustion of resources they depend on, the introduction of newer, more efficient technologies, or even shifts in user demands. As this happens, the structure of the economy changes. The Market Equilibrium and Technology Assessment Network Modelling System, META·Net, has been developed for building and solving multi-period equilibrium models to analyze the shifts in the energy system that may occur as new technologies are introduced and resources are exhausted. META·Net allows a user to build and solve complex economic models. It models a market economy as a network of nodes representing resources, conversion processes, markets, and end-use demands. Commodities flow through this network from resources, through conversion processes and markets, to the end-users. META·Net then finds the multi-period equilibrium prices and quantities. The solution includes the prices and quantities demanded for each commodity, along with the capacity additions (and retirements) for each conversion process and the trajectories of resource extraction. Although the changes in the economy are largely driven by consumers' behavior and the costs of technologies and resources, they are also affected by various government policies. These can include constraints on prices and quantities, and various taxes and constraints on environmental emissions. META·Net can incorporate many of these mechanisms and evaluate their potential impact on the development of the economic system.
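
    The equilibrium idea can be illustrated with a toy single-commodity analogue of the network problem META·Net solves: find the price at which supply meets demand (here by simple bisection, with made-up linear curves).

    ```python
    def equilibrium_price(demand, supply, lo=0.01, hi=100.0, tol=1e-6):
        """Bisect on price until the market clears (toy single-market case)."""
        while hi - lo > tol:
            mid = 0.5 * (lo + hi)
            if supply(mid) < demand(mid):   # excess demand -> price must rise
                lo = mid
            else:
                hi = mid
        return 0.5 * (lo + hi)

    # Example: linear demand 100 - 2p against price-elastic supply 3p.
    p_star = equilibrium_price(lambda p: max(0.0, 100.0 - 2.0 * p),
                               lambda p: 3.0 * p)
    ```

    The full model repeats this clearing logic simultaneously across every node of the network and every time period, which is what makes resource exhaustion and capacity turnover endogenous.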

  20. Modelling turbulent vertical mixing sensitivity using a 1-D version of NEMO

    Directory of Open Access Journals (Sweden)

    G. Reffray

    2014-08-01

    Through two numerical experiments, a 1-D vertical model called NEMO1D was used to investigate physical and numerical turbulent-mixing behaviour. The results show that all the turbulent closures tested (k + l from Blanke and Delecluse, 1993, and two-equation models: Generic Length Scale closures from Umlauf and Burchard, 2003) are able to correctly reproduce the classical test of Kato and Phillips (1969) under favourable numerical conditions, while some solutions may diverge depending on the degradation of the spatial and time discretization. The performance of the turbulence models was then compared with data measured over a one-year period (mid-2010 to mid-2011) at the PAPA station, located in the North Pacific Ocean. The modelled temperature and salinity were in good agreement with the observations, with a maximum temperature error between −2 and 2 °C during the stratified period (June to October). However, the results also depend on the numerical conditions. The vertical RMSE varied, for the different turbulent closures, from 0.1 to 0.3 °C during the stratified period and from 0.03 to 0.15 °C during the homogeneous period. This 1-D configuration at the PAPA station (called PAPA1D) is now available in NEMO as a reference configuration, including the input files and atmospheric forcing set described in this paper. Thus, all the results described can be recovered by downloading and launching PAPA1D. The configuration is described on the NEMO site (http://www.nemo-ocean.eu/Using-NEMO/Configurations/C1D_PAPA). This package is a good starting point for further investigation of vertical processes.
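
    The Kato and Phillips (1969) test mentioned above has a classical empirical target against which closures are judged: the mixed layer deepens roughly as h(t) ≈ 1.05 u* √(t/N₀), Price's fit to the laboratory data (stated here as the commonly used assumption, not a result of this paper).

    ```python
    import math

    def kato_phillips_depth(u_star, n0, t_seconds):
        """Empirical mixed-layer depth for constant wind stress over an
        initially linearly stratified fluid: h = 1.05 * u* * sqrt(t / N0)."""
        return 1.05 * u_star * math.sqrt(t_seconds / n0)
    ```

    With the standard setup u* = 0.01 m/s and N₀ = 0.01 s⁻¹, the layer reaches roughly 30-35 m after 30 hours, which is the curve a well-behaved turbulence closure should track.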

  1. The Everglades Depth Estimation Network (EDEN) surface-water model, version 2

    Science.gov (United States)

    Telis, Pamela A.; Xie, Zhixiao; Liu, Zhongwei; Li, Yingru; Conrads, Paul A.

    2015-01-01

    The Everglades Depth Estimation Network (EDEN) is an integrated network of water-level gages, interpolation models that generate daily water-level and water-depth data, and applications that compute derived hydrologic data across the freshwater part of the greater Everglades landscape. The U.S. Geological Survey Greater Everglades Priority Ecosystems Science provides support for EDEN in order for EDEN to provide quality-assured monitoring data for the U.S. Army Corps of Engineers Comprehensive Everglades Restoration Plan.

  2. The Canadian Defence Input-Output Model DIO Version 4.41

    Science.gov (United States)

    2011-09-01

    …Output models, for instance to study the regional benefits of different large procurement programmes, the data censorship limitation would… [The remainder of this excerpt is residue from the model's commodity index (report DRDC CORA TM 2011-147), listing index codes and commodity names such as "Cocoa and chocolate", "Private hospital services", and "Child care, outside the home".]

  3. Uncorrelated Encounter Model of the National Airspace System, Version 2.0

    Science.gov (United States)

    2013-08-19

    …between two IFR aircraft in oceanic airspace. The reason for this is that one cannot observe encounters of sufficient fidelity in the available data… does not observe a sufficient number of encounters between instrument flight rules (IFR) and non-IFR traffic beyond 12 NM from the shore. [Table 1 of the report, "Encounter model categories," classifies encounters by aircraft of interest versus intruder aircraft, location, and flight rule (IFR, VFR, noncooperative).]

  4. Advanced Propagation Model (APM) Version 2.1.04 Computer Software Configuration Item (CSCI) Documents

    Science.gov (United States)

    2007-02-01

    …Ray Trace (RAYTRACE) SU: using standard ray-trace techniques, a ray is traced from a starting height and range with a specified starting… [The remainder of this excerpt is table-of-contents and reference residue, citing NOSC TD 1015, Feb. 1984, and Horst, M.M., Dyer, F.B., Tuley, M.T., "Radar Sea Clutter Model," IEEE International Conference on Antennas and Propagation.]

  5. System cost model user's manual, version 1.2

    Energy Technology Data Exchange (ETDEWEB)

    Shropshire, D.

    1995-06-01

    The System Cost Model (SCM) was developed by Lockheed Martin Idaho Technologies in Idaho Falls, Idaho and MK-Environmental Services in San Francisco, California to support the Baseline Environmental Management Report sensitivity analysis for the U.S. Department of Energy (DOE). The SCM serves the needs of the entire DOE complex for treatment, storage, and disposal (TSD) of mixed low-level, low-level, and transuranic waste. The model can be used to evaluate total complex costs based on various configuration options or to evaluate site-specific options. The site-specific cost estimates are based on generic assumptions such as waste loads and densities, treatment processing schemes, existing facilities capacities and functions, storage and disposal requirements, schedules, and cost factors. The SCM allows customization of the data for detailed site-specific estimates. There are approximately forty TSD module designs that have been further customized to account for design differences for nonalpha, alpha, remote-handled, and transuranic wastes. The SCM generates cost profiles based on the model default parameters or customized user-defined input and also generates costs for transporting waste from generators to TSD sites.

  6. T2LBM Version 1.0: Landfill bioreactor model for TOUGH2

    Energy Technology Data Exchange (ETDEWEB)

    Oldenburg, Curtis M.

    2001-05-22

    The need to control gas and leachate production and minimize refuse volume in landfills has motivated the development of landfill simulation models that can be used by operators to predict and design optimal treatment processes. T2LBM is a module for the TOUGH2 simulator that implements a Landfill Bioreactor Model to provide simulation capability for the processes of aerobic or anaerobic biodegradation of municipal solid waste and the associated flow and transport of gas and liquid through the refuse mass. T2LBM incorporates a Monod kinetic rate law for the biodegradation of acetic acid in the aqueous phase by either aerobic or anaerobic microbes as controlled by the local oxygen concentration. Acetic acid is considered a proxy for all biodegradable substrates in the refuse. Aerobic and anaerobic microbes are assumed to be immobile and not limited by nutrients in their growth. Methane and carbon dioxide generation due to biodegradation with corresponding thermal effects are modeled. The numerous parameters needed to specify biodegradation are input by the user in the SELEC block of the TOUGH2 input file. Test problems show that good matches to laboratory experiments of biodegradation can be obtained. A landfill test problem demonstrates the capabilities of T2LBM for a hypothetical two-dimensional landfill scenario with permeability heterogeneity and compaction.
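
    The Monod rate law described above can be sketched with a simple forward-Euler integration. The parameter values are illustrative, not T2LBM defaults, and a single substrate/biomass pair stands in for the module's full aerobic/anaerobic chemistry:

    ```python
    def biodegrade(s0, x0, mu_max=0.1, ks=50.0, yield_coeff=0.3,
                   dt=0.01, steps=1000):
        """Monod kinetics for an acetic-acid proxy substrate S consumed by
        immobile biomass X:
            dS/dt = -(mu_max / Y) * S / (Ks + S) * X
            dX/dt =   mu_max     * S / (Ks + S) * X
        """
        s, x = s0, x0
        for _ in range(steps):
            rate = mu_max * s / (ks + s) * x   # biomass growth rate
            s -= (rate / yield_coeff) * dt     # substrate consumption
            x += rate * dt                     # biomass growth
        return s, x
    ```

    By construction, each Euler step conserves the yield relation Y·ΔS = ΔX, so the substrate consumed and the biomass produced stay in strict stoichiometric balance; in the full module the consumed carbon additionally partitions into methane and CO2 with the associated heat release.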

  7. Regional groundwater flow model for a glaciation scenario. Simpevarp subarea - version 1.2

    Energy Technology Data Exchange (ETDEWEB)

Jaquet, O.; Siegel, P. [Colenco Power Engineering Ltd, Baden-Daettwil (Switzerland)]

    2006-10-15

    A groundwater flow model (glaciation model) was developed at a regional scale in order to study long-term transient effects related to a glaciation scenario likely to occur in response to climatic changes. Conceptually the glaciation model was based on the regional model of Simpevarp and was then extended to a mega-regional scale (of several hundred kilometres) in order to account for the effects of the ice sheet. These effects were modelled using transient boundary conditions provided by a dynamic ice sheet model describing the phases of glacial build-up, glacial completeness and glacial retreat needed for the glaciation scenario. The results demonstrate the strong impact of the ice sheet on the flow field, in particular during the phases of the build-up and the retreat of the ice sheet. These phases last for several thousand years and may cause large amounts of melt water to reach the level of the repository and below. The highest fluxes of melt water are located in the vicinity of the ice margin. As the ice sheet approaches the repository location, the advective effects gain dominance over diffusive effects in the flow field. In particular, up-coning effects are likely to occur at the margin of the ice sheet, leading to potential increases in salinity at repository level. For the base case, the entire salinity field of the model is almost completely flushed out at the end of the glaciation period. The flow patterns are strongly governed by the location of the conductive features in the subglacial layer. The influence of these glacial features is essential for the salinity distribution, as is their impact on the flow trajectories and, therefore, on the resulting performance measures. Travel times and the F-factor were calculated using the method of particle tracking. Glacial effects have major consequences for the results. In particular, average travel times from the repository to the surface are below 10 years during phases of glacial build-up and retreat. 
In comparison

  8. Refinement and evaluation of the Massachusetts firm-yield estimator model version 2.0

    Science.gov (United States)

    Levin, Sara B.; Archfield, Stacey A.; Massey, Andrew J.

    2011-01-01

    The firm yield is the maximum average daily withdrawal that can be extracted from a reservoir without risk of failure during an extended drought period. Previously developed procedures for determining the firm yield of a reservoir were refined and applied to 38 reservoir systems in Massachusetts, including 25 single- and multiple-reservoir systems that were examined during previous studies and 13 additional reservoir systems. Changes to the firm-yield model include refinements to the simulation methods and input data, as well as the addition of several scenario-testing capabilities. The simulation procedure was adapted to run at a daily time step over a 44-year simulation period, and daily streamflow and meteorological data were compiled for all the reservoirs for input to the model. Another change to the model-simulation methods is the adjustment of the scaling factor used in estimating groundwater contributions to the reservoir. The scaling factor is used to convert the daily groundwater-flow rate into a volume by multiplying the rate by the length of reservoir shoreline that is hydrologically connected to the aquifer. Previous firm-yield analyses used a constant scaling factor that was estimated from the reservoir surface area at full pool. The use of a constant scaling factor caused groundwater flows during periods when the reservoir stage was very low to be overestimated. The constant groundwater scaling factor used in previous analyses was replaced with a variable scaling factor that is based on daily reservoir stage. This change reduced instability in the groundwater-flow algorithms and produced more realistic groundwater-flow contributions during periods of low storage. Uncertainty in the firm-yield model arises from many sources, including errors in input data. The sensitivity of the model to uncertainty in streamflow input data and uncertainty in the stage-storage relation was examined. 
A series of Monte Carlo simulations was performed on 22 reservoirs
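The stage-dependent scaling described above can be illustrated with a minimal sketch; the linear shoreline-versus-stage relation and all numbers are hypothetical, not the model's actual calibration:

```python
def groundwater_inflow(q_per_length, stage, shoreline_length_at_stage):
    """Convert a groundwater-flow rate per unit shoreline (m^2/d) into a
    volumetric inflow (m^3/d) using a stage-dependent shoreline length."""
    return q_per_length * shoreline_length_at_stage(stage)

# Hypothetical linear model: connected shoreline shrinks as stage drops,
# unlike a constant factor fixed at the full-pool surface area.
full_pool_stage = 10.0        # m
full_pool_shoreline = 5000.0  # m
shoreline = lambda s: full_pool_shoreline * max(s, 0.0) / full_pool_stage

q_full = groundwater_inflow(0.2, 10.0, shoreline)  # full pool
q_low = groundwater_inflow(0.2, 2.0, shoreline)    # drought drawdown
```

A constant scaling factor would return the full-pool value in both cases, which is how the earlier model overestimated inflow at low stage.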

  9. Implementing and Evaluating Variable Soil Thickness in the Community Land Model, Version 4.5 (CLM4.5)

    Energy Technology Data Exchange (ETDEWEB)

    Brunke, Michael A.; Broxton, Patrick; Pelletier, Jon; Gochis, David; Hazenberg, Pieter; Lawrence, David M.; Leung, L. Ruby; Niu, Guo-Yue; Troch, Peter A.; Zeng, Xubin

    2016-05-01

    One of the recognized weaknesses of land surface models as used in weather and climate models is the assumption of constant soil thickness due to the lack of global estimates of bedrock depth. Using a 30 arcsecond global dataset for the thickness of relatively porous, unconsolidated sediments over bedrock, spatial variation in soil thickness is included here in version 4.5 of the Community Land Model (CLM4.5). The number of soil layers for each grid cell is determined from the average soil depth for each 0.9° latitude x 1.25° longitude grid cell. Including variable soil thickness affects the simulations most in regions with shallow bedrock, corresponding predominantly to areas of mountainous terrain. The greatest changes are to baseflow, with the annual minimum generally occurring earlier, while smaller changes are seen in surface fluxes such as latent heat flux and surface runoff, for which only the amplitude of the annual cycle is increased. These changes are tied to soil moisture changes, which are most substantial in locations with shallow bedrock. Total water storage (TWS) anomalies do not change much over most river basins around the globe, since most basins contain mostly deep soils. However, it was found that TWS anomalies differ substantially for a river basin with more mountainous terrain. Additionally, the annual cycle in soil temperature is affected by including realistic soil thicknesses due to changes to heat capacity and thermal conductivity.
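The layer-count idea can be sketched using CLM's standard exponential node-depth formula; the simple cutoff rule and the 15-layer maximum below are illustrative assumptions, not the paper's exact procedure:

```python
import math

def clm_node_depth(j, fs=0.025):
    """CLM exponential soil-layer node depth (m) for layer j (1-based)."""
    return fs * (math.exp(0.5 * (j - 0.5)) - 1.0)

def n_active_layers(bedrock_depth, max_layers=15):
    """Illustrative rule: count layers whose node lies above bedrock."""
    return sum(1 for j in range(1, max_layers + 1)
               if clm_node_depth(j) < bedrock_depth)

# Shallow bedrock leaves far fewer hydrologically active layers
# than a deep sediment column:
shallow = n_active_layers(0.5)   # e.g. mountainous terrain
deep = n_active_layers(40.0)     # e.g. a deep sedimentary basin
```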

  10. Validation of the French version of the marijuana craving questionnaire (MCQ) generates a two-factor model.

    Science.gov (United States)

    Chauchard, Emeline; Goutaudier, Nelly; Heishman, Stephen J; Gorelick, David A; Chabrol, Henri

    2015-01-01

    Craving is a major issue in drug addiction, and a target for drug treatment. The Marijuana Craving Questionnaire-Short Form (MCQ-SF) is a useful tool for assessing cannabis craving in clinical and research settings. To validate the French version of the MCQ-SF (FMCQ-SF). Young adult cannabis users not seeking treatment (n = 679) completed the FMCQ-SF and questionnaires assessing their frequency of cannabis use and craving, cannabis use disorder criteria, and alcohol use. Confirmatory factor analysis of the four-factor FMCQ-SF model did not fit the data well. Exploratory factor analysis suggested a two-factor solution ("pleasure", characterized by planning and expectation of positive effects, and "release of tension", characterized by relief from anxiety, nervousness, or tension) with good psychometric properties. This two-factor model showed good internal and convergent validity and correlated with cannabis abuse and dependence and with frequency of cannabis use and craving. Validation of the FMCQ-SF generated a two-factor model, different from the four-factor solution generated in English language studies. Considering that craving plays an important role in withdrawal and relapse, this questionnaire should be useful for French-language addiction professionals.

  11. Presentation, calibration and validation of the low-order, DCESS Earth System Model (Version 1)

    Directory of Open Access Journals (Sweden)

    J. O. Pepke Pedersen

    2008-11-01

    Full Text Available A new, low-order Earth System Model is described, calibrated and tested against Earth system data. The model features modules for the atmosphere, ocean, ocean sediment, land biosphere and lithosphere and has been designed to simulate global change on time scales of years to millions of years. The atmosphere module considers radiation balance, meridional transport of heat and water vapor between low-mid latitude and high latitude zones, heat and gas exchange with the ocean and sea ice and snow cover. Gases considered are carbon dioxide and methane for all three carbon isotopes, nitrous oxide and oxygen. The ocean module has 100 m vertical resolution, carbonate chemistry and prescribed circulation and mixing. Ocean biogeochemical tracers are phosphate, dissolved oxygen, dissolved inorganic carbon for all three carbon isotopes and alkalinity. Biogenic production of particulate organic matter in the ocean surface layer depends on phosphate availability but with lower efficiency in the high latitude zone, as determined by model fit to ocean data. The calcite to organic carbon rain ratio depends on surface layer temperature. The semi-analytical, ocean sediment module considers calcium carbonate dissolution and oxic and anoxic organic matter remineralisation. The sediment is composed of calcite, non-calcite mineral and reactive organic matter. Sediment porosity profiles are related to sediment composition and a bioturbated layer of 0.1 m thickness is assumed. A sediment segment is ascribed to each ocean layer and segment area stems from observed ocean depth distributions. Sediment burial is calculated from sedimentation velocities at the base of the bioturbated layer. Bioturbation rates and oxic and anoxic remineralisation rates depend on organic carbon rain rates and dissolved oxygen concentrations. The land biosphere module considers leaves, wood, litter and soil. Net primary production depends on atmospheric carbon dioxide concentration and

  12. Regional hydrogeological simulations. Numerical modelling using ConnectFlow. Preliminary site description Simpevarp sub area - version 1.2

    Energy Technology Data Exchange (ETDEWEB)

Hartley, Lee; Hoch, Andrew; Hunter, Fiona; Jackson, Peter [Serco Assurance, Risley (United Kingdom)]; Marsic, Niko [Kemakta Konsult, Stockholm (Sweden)]

    2005-02-01

    The objective of this study is to support the development of a preliminary Site Description of the Simpevarp area on a regional scale based on the available data of August 2004 (Data Freeze S1.2) and the previous Site Description. A more specific objective of this study is to assess the role of known and unknown hydrogeological conditions for the present-day distribution of saline groundwater in the Simpevarp area on a regional scale. An improved understanding of the paleo-hydrogeology is necessary in order to gain credibility for the Site Description in general and the hydrogeological description in particular. This is to serve as a basis for describing the present hydrogeological conditions on a local scale as well as predictions of future hydrogeological conditions. Other key objectives were to identify the model domain required to simulate regional flow and solute transport at the Simpevarp area and to incorporate a new geological model of the deformation zones produced for Version S1.2. Another difference from Version S1.1 is the increased effort invested in conditioning the hydrogeological property models to the fracture boremap and hydraulic data. A new methodology was developed for interpreting the discrete fracture network (DFN) by integrating the geological description of the DFN (GeoDFN) with the hydraulic test data from Posiva Flow-Log and Pipe-String System double-packer techniques to produce a conditioned Hydro-DFN model. This was done in a systematic way that addressed uncertainties associated with the assumptions made in interpreting the data, such as the relationship between fracture transmissivity and length. Consistent hydraulic data were only available for three boreholes, and therefore only relatively simplistic models were proposed, as there is not sufficient data to justify extrapolating the DFN away from the boreholes based on rock domain, for example. 
Significantly, a far greater quantity of hydro-geochemical data was available for calibration in the

  13. Solid Modeling Aerospace Research Tool (SMART) user's guide, version 2.0

    Science.gov (United States)

    Mcmillin, Mark L.; Spangler, Jan L.; Dahmen, Stephen M.; Rehder, John J.

    1993-01-01

    The Solid Modeling Aerospace Research Tool (SMART) software package is used in the conceptual design of aerospace vehicles. It provides a highly interactive and dynamic capability for generating geometries with Bezier cubic patches. Features include automatic generation of commonly used aerospace constructs (e.g., wings and multilobed tanks); cross-section skinning; wireframe and shaded presentation; area, volume, inertia, and center-of-gravity calculations; and interfaces to various aerodynamic and structural analysis programs. A comprehensive description of SMART and how to use it is provided.

  14. Feynman propagator for the planar version of the CPT-even electrodynamics of Standard Model Extension

    Energy Technology Data Exchange (ETDEWEB)

Casana, Rodolfo; Ferreira Junior, Manoel M.; Moreira, Roemir P.M. [Universidade Federal do Maranhao (UFMA), MA (Brazil)]; Gomes, Adalto R. [Instituto Federal de Educacao Ciencia e Tecnologia do Maranhao (IFMA), MA (Brazil)]

    2011-07-01

    Full text: In a recent work, we accomplished the dimensional reduction of the non-birefringent CPT-even gauge sector of the Standard Model Extension. As is well known, the CPT-even gauge sector is composed of nineteen components encoded in the fourth-rank tensor (K_F)_{μνρσ}, of which nine do not yield birefringence. These nine components can be parametrized in terms of the symmetric and traceless tensor k_{μν} = (K_F)^ρ_{μρν}. Starting from this parametrization, and applying the dimensional reduction procedure, we obtain a planar theory corresponding to the non-birefringent sector, composed of mutually coupled gauge and scalar sectors. These sectors possess six and three independent components, respectively. Some interesting properties of this theory, concerning classical stationary solutions, were examined recently. In the present work, we explicitly evaluate the Feynman propagator for this model, in a closed tensorial form, using a set of operators defined in terms of three 3-vectors. We use this propagator to examine the dispersion relations of this theory, and analyze some properties related to its causality, stability, and unitarity. (author)

  15. Representing icebergs in the iLOVECLIM model (version 1.0 – a sensitivity study

    Directory of Open Access Journals (Sweden)

    M. Bügelmayer

    2014-07-01

    Full Text Available Recent modelling studies have indicated that icebergs alter the ocean's state, the thickness of sea ice and the prevailing atmospheric conditions; in short, that they play an active role in the climate system. The icebergs' impact is due to their slowly released melt water which freshens and cools the ocean. The spatial distribution of the icebergs and thus their melt water depends on the forces (atmospheric and oceanic) acting on them as well as on the icebergs' size. The studies conducted so far have in common that the icebergs were moved by reconstructed or modelled forcing fields and that the initial size distribution of the icebergs was prescribed according to present day observations. To address these shortcomings, we used the climate model iLOVECLIM that includes actively coupled ice-sheet and iceberg modules, to conduct 15 sensitivity experiments to analyse (1) the impact of the forcing fields (atmospheric vs. oceanic) on the icebergs' distribution and melt flux, and (2) the effect of the initial iceberg size used on the resulting Northern Hemisphere climate and ice sheet under different climate conditions (pre-industrial, strong/weak radiative forcing). Our results show that, under equilibrated pre-industrial conditions, the oceanic currents cause the bergs to stay close to the Greenland and North American coast, whereas the atmospheric forcing quickly distributes them further away from their calving site. These different characteristics strongly affect the lifetime of icebergs, since the wind-driven icebergs melt up to two years faster as they are quickly distributed into the relatively warm North Atlantic waters. Moreover, we find that local variations in the spatial distribution due to different iceberg sizes do not result in different climate states and Greenland ice sheet volume, independent of the prevailing climate conditions (pre-industrial, warming or cooling climate). 
Therefore, we conclude that local differences in the distribution of their

  16. Unitary version of the single-particle dispersive optical model and single-hole excitations in medium-heavy spherical nuclei

    Science.gov (United States)

    Kolomiytsev, G. V.; Igashov, S. Yu.; Urin, M. H.

    2017-07-01

    A unitary version of the single-particle dispersive optical model was proposed with the aim of applying it to describing high-energy single-hole excitations in medium-heavy mass nuclei. By considering the example of experimentally studied single-hole excitations in the 90Zr and 208Pb parent nuclei, the contribution of the fragmentation effect to the real part of the optical-model potential was estimated quantitatively in the framework of this version. The results obtained in this way were used to predict the properties of such excitations in the 132Sn parent nucleus.

  17. Midlatitude atmospheric responses to Arctic sensible heat flux anomalies in Community Climate Model, Version 4

    Science.gov (United States)

    Mills, Catrin M.; Cassano, John J.; Cassano, Elizabeth N.

    2016-12-01

    Possible linkages between Arctic sea ice loss and midlatitude weather are strongly debated in the literature. We analyze a coupled model simulation to assess the possibility of Arctic ice variability forcing a midlatitude response, ensuring consistency between atmosphere, ocean, and ice components. We work with weekly running mean daily sensible heat fluxes with the self-organizing map technique to identify Arctic sensible heat flux anomaly patterns and the associated atmospheric response, without the need of metrics to define the Arctic forcing or measure the midlatitude response. We find that low-level warm anomalies during autumn can build planetary wave patterns that propagate downstream into the midlatitudes, creating robust surface cold anomalies in the eastern United States.

  18. Programs OPTMAN and SHEMMAN version 5 (1998). Coupled channels optical model and collective nuclear structure calculation

    Energy Technology Data Exchange (ETDEWEB)

Sukhovitskii, E.Sh.; Porodzinskii, Y.V.; Iwamoto, Osamu; Chiba, Satoshi; Shibata, Keiichi [Japan Atomic Energy Research Inst., Tokai, Ibaraki (Japan). Tokai Research Establishment]

    1998-05-01

    Program OPTMAN has been developed as a tool for optical model calculations and is employed in nuclear data evaluation at the Radiation Physics and Chemistry Problems Institute. The code has been continuously improved to incorporate a number of options for more than twenty years. For the last three years it was successfully applied to the evaluation of minor-actinide nuclear data under a contract with the International Science and Technology Center, with Japan as the financing party. This code is now installed on PCs and UNIX workstations by the authors at the Nuclear Data Center of JAERI, as is program SHEMMAN, which is used for the determination of nuclear Hamiltonian parameters. This report is intended as a brief manual of these codes for the users at JAERI. (author)

  19. Offshore Wind Guidance Document: Oceanography and Sediment Stability (Version 1) Development of a Conceptual Site Model.

    Energy Technology Data Exchange (ETDEWEB)

Roberts, Jesse D.; Magalen, Jason; Jones, Craig

    2014-06-01

    This guidance document provides the reader with an overview of the key environmental considerations for a typical offshore wind coastal location and the tools to help guide the reader through a thorough planning process. It will enable readers to identify the key coastal processes relevant to their offshore wind site and perform pertinent analysis to guide siting and layout design, with the goal of minimizing costs associated with planning, permitting, and long-term maintenance. The document highlights site characterization and assessment techniques for evaluating spatial patterns of sediment dynamics in the vicinity of a wind farm under typical, extreme, and storm conditions. Finally, the document describes the assimilation of all of this information into the conceptual site model (CSM) to aid the decision-making processes.

  20. Theoretical modelling of epigenetically modified DNA sequences [version 2; referees: 2 approved

    Directory of Open Access Journals (Sweden)

    Alexandra Teresa Pires Carvalho

    2015-05-01

    Full Text Available We report herein a set of calculations designed to examine the effects of epigenetic modifications on the structure of DNA. The incorporation of methyl, hydroxymethyl, formyl and carboxy substituents at the 5-position of cytosine is shown to hardly affect the geometry of CG base pairs, but to result in rather larger changes to hydrogen-bond and stacking binding energies, as predicted by dispersion-corrected density functional theory (DFT methods. The same modifications within double-stranded GCG and ACA trimers exhibit rather larger structural effects, when including the sugar-phosphate backbone as well as sodium counterions and implicit aqueous solvation. In particular, changes are observed in the buckle and propeller angles within base pairs and the slide and roll values of base pair steps, but these leave the overall helical shape of DNA essentially intact. The structures so obtained are useful as a benchmark of faster methods, including molecular mechanics (MM and hybrid quantum mechanics/molecular mechanics (QM/MM methods. We show that previously developed MM parameters satisfactorily reproduce the trimer structures, as do QM/MM calculations which treat bases with dispersion-corrected DFT and the sugar-phosphate backbone with AMBER. The latter are improved by inclusion of all six bases in the QM region, since a truncated model including only the central CG base pair in the QM region is considerably further from the DFT structure. This QM/MM method is then applied to a set of double-stranded DNA heptamers derived from a recent X-ray crystallographic study, whose size puts a DFT study beyond our current computational resources. These data show that still larger structural changes are observed than in base pairs or trimers, leading us to conclude that it is important to model epigenetic modifications within realistic molecular contexts.

  1. Hybrid2: The hybrid system simulation model, Version 1.0, user manual

    Energy Technology Data Exchange (ETDEWEB)

    Baring-Gould, E.I.

    1996-06-01

    In light of the large-scale demand for energy in remote communities, especially in the developing world, the need for a detailed long-term performance prediction model for hybrid power systems was recognized. To meet this need, engineers from the National Renewable Energy Laboratory (NREL) and the University of Massachusetts (UMass) have spent the last three years developing the Hybrid2 software. The Hybrid2 code provides a means to conduct long-term, detailed simulations of the performance of a large array of hybrid power systems. This work acts as an introduction and user's manual to the Hybrid2 software. The manual describes the Hybrid2 code, what is included with the software, and instructs the user on the structure of the code. It also describes some of the major features of the Hybrid2 code as well as how to create projects and run hybrid system simulations. The Hybrid2 code test program is also discussed. Although every attempt has been made to make the Hybrid2 code easy to understand and use, this manual will allow many organizations to consider the long-term advantages of using hybrid power systems instead of conventional petroleum-based systems for remote power generation.

  2. Sensitivity of precipitation to parameter values in the community atmosphere model version 5

    Energy Technology Data Exchange (ETDEWEB)

    Johannesson, Gardar; Lucas, Donald; Qian, Yun; Swiler, Laura Painton; Wildey, Timothy Michael

    2014-03-01

    One objective of the Climate Science for a Sustainable Energy Future (CSSEF) program is to develop the capability to thoroughly test and understand the uncertainties in the overall climate model and its components as they are being developed. The focus on uncertainties involves sensitivity analysis: the capability to determine which input parameters have a major influence on the output responses of interest. This report presents some initial sensitivity analysis results performed by Lawrence Livermore National Laboratory (LLNL), Sandia National Laboratories (SNL), and Pacific Northwest National Laboratory (PNNL). In the 2011-2012 timeframe, these laboratories worked in collaboration to perform sensitivity analyses of a set of CAM5 2° runs, where the response metrics of interest were precipitation metrics. The three labs performed their sensitivity analysis (SA) studies separately and then compared results. Overall, the results were quite consistent with each other although the methods used were different. This exercise provided a robustness check of the global sensitivity analysis metrics and identified some strongly influential parameters.
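As a toy illustration of the kind of global sensitivity screen described above (not the labs' actual methods), one can sample the input parameters and rank them by correlation with an output metric; the parameter names and the "response" function are entirely hypothetical:

```python
import random

def sensitivity_ranking(model, param_ranges, n_samples=2000, seed=0):
    """Rank parameters by |Pearson correlation| between each uniformly
    sampled input and the model output: a simple global SA screen."""
    rng = random.Random(seed)
    names = list(param_ranges)
    samples = [{k: rng.uniform(*param_ranges[k]) for k in names}
               for _ in range(n_samples)]
    outputs = [model(s) for s in samples]

    def corr(xs, ys):
        n = len(xs)
        mx, my = sum(xs) / n, sum(ys) / n
        cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
        vx = sum((x - mx) ** 2 for x in xs) ** 0.5
        vy = sum((y - my) ** 2 for y in ys) ** 0.5
        return cov / (vx * vy)

    scores = {k: abs(corr([s[k] for s in samples], outputs)) for k in names}
    return sorted(names, key=scores.get, reverse=True)

# Toy "precipitation" response dominated by one parameter:
toy = lambda p: 10.0 * p["entrainment"] + 0.5 * p["ice_fall_speed"]
ranking = sensitivity_ranking(toy, {"entrainment": (0, 1),
                                    "ice_fall_speed": (0, 1)})
```

Variance-based methods (e.g. Sobol indices) refine this idea but follow the same sample-then-attribute pattern.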

  3. Variational assimilation of land surface temperature within the ORCHIDEE Land Surface Model Version 1.2.6

    Science.gov (United States)

    Benavides Pinjosovsky, Hector Simon; Thiria, Sylvie; Ottlé, Catherine; Brajard, Julien; Badran, Fouad; Maugis, Pascal

    2017-01-01

    The SECHIBA module of the ORCHIDEE land surface model describes the exchanges of water and energy between the surface and the atmosphere. In the present paper, the adjoint semi-generator software called YAO was used as a framework to implement a 4D-VAR assimilation scheme of observations in SECHIBA. The objective was to deliver the adjoint model of SECHIBA (SECHIBA-YAO) obtained with YAO to provide an opportunity for scientists and end users to perform their own assimilation. SECHIBA-YAO allows the control of the 11 most influential internal parameters of the soil water content, by observing the land surface temperature or remote sensing data such as the brightness temperature. The paper presents the fundamental principles of the 4D-VAR assimilation, the semi-generator software YAO and a large number of experiments showing the accuracy of the adjoint code in different conditions (sites, PFTs, seasons). In addition, a distributed version is available in the case for which only the land surface temperature is observed.
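The essence of a 4D-VAR scheme, a cost function combining a background term with observation misfits over a time window, minimized using the adjoint gradient, can be sketched for a toy scalar-parameter model; all values are hypothetical and unrelated to SECHIBA-YAO:

```python
def fourdvar_cost_and_grad(a, a_b, sigma_b, obs, sigma_o):
    """Toy 4D-Var cost for a linear model T(t) = a*t with one control
    parameter a: background term plus misfit to windowed observations."""
    j = 0.5 * ((a - a_b) / sigma_b) ** 2
    g = (a - a_b) / sigma_b ** 2
    for t, y in obs:
        r = a * t - y
        j += 0.5 * (r / sigma_o) ** 2
        g += r * t / sigma_o ** 2  # adjoint of the linear model is just *t
    return j, g

# Synthetic observations from a "true" a = 2.0; background guess a_b = 1.0.
obs = [(t, 2.0 * t) for t in (1.0, 2.0, 3.0)]
a = 1.0
for _ in range(200):     # steepest descent on the cost function
    _, g = fourdvar_cost_and_grad(a, 1.0, 10.0, obs, 0.1)
    a -= 0.001 * g
```

The minimization pulls the parameter from the background toward the value that best fits the observations, which is what the assimilation does for SECHIBA's soil parameters.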

  4. Fuel Cell Power Model Version 2: Startup Guide, System Designs, and Case Studies. Modeling Electricity, Heat, and Hydrogen Generation from Fuel Cell-Based Distributed Energy Systems

    Energy Technology Data Exchange (ETDEWEB)

    Steward, D.; Penev, M.; Saur, G.; Becker, W.; Zuboy, J.

    2013-06-01

    This guide helps users get started with the U.S. Department of Energy/National Renewable Energy Laboratory Fuel Cell Power (FCPower) Model Version 2, which is a Microsoft Excel workbook that analyzes the technical and economic aspects of high-temperature fuel cell-based distributed energy systems with the aim of providing consistent, transparent, comparable results. This type of energy system would provide onsite-generated heat and electricity to large end users such as hospitals and office complexes. The hydrogen produced could be used for fueling vehicles or stored for later conversion to electricity.

  5. MIG version 0.0 model interface guidelines: Rules to accelerate installation of numerical models into any compliant parent code

    Energy Technology Data Exchange (ETDEWEB)

    Brannon, R.M.; Wong, M.K.

    1996-08-01

    A set of model interface guidelines, called MIG, is presented as a means by which any compliant numerical material model can be rapidly installed into any parent code without having to modify the model subroutines. Here, "model" usually means a material model such as one that computes stress as a function of strain, though the term may be extended to any numerical operation. "Parent code" means a hydrocode, finite element code, etc. which uses the model and enforces, say, the fundamental laws of motion and thermodynamics. MIG requires the model developer (who creates the model package) to specify model needs in a standardized but flexible way. MIG includes a dictionary of technical terms that allows developers and parent code architects to share a common vocabulary when specifying field variables. For portability, database management is the responsibility of the parent code. Input/output occurs via structured calling arguments. As much model information as possible (such as the lists of required inputs, as well as lists of precharacterized material data and special needs) is supplied by the model developer in an ASCII text file. Every MIG-compliant model also has three required subroutines to check data, to request extra field variables, and to perform model physics. To date, the MIG scheme has proven flexible in beta installations of a simple yield model, plus a more complicated viscodamage yield model, three electromechanical models, and a complicated anisotropic microcrack constitutive model. The MIG yield model has been successfully installed using identical subroutines in three vectorized parent codes and one parallel C++ code, all predicting comparable results. By maintaining one model for many codes, MIG facilitates code-to-code comparisons and reduces duplication of effort, thereby reducing the cost of installing and sharing models in diverse new codes.
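The three required entry points can be sketched as an interface; the method names and the trivial elastic update are illustrative only, not the actual MIG subroutine specifications:

```python
class MIGCompliantModel:
    """Sketch of the three required model subroutines described above.
    All names here are hypothetical, not MIG's actual identifiers."""

    def check_data(self, user_input):
        """Validate and precondition user-supplied material parameters."""
        if user_input["shear_modulus"] <= 0:
            raise ValueError("shear_modulus must be positive")
        return dict(user_input)

    def request_extra_variables(self):
        """Tell the parent code which extra field variables to allocate
        (the parent code owns all database management)."""
        return ["equivalent_plastic_strain"]

    def update_state(self, strain_increment, state):
        """Perform the model physics: here, a trivial elastic update."""
        state["stress"] += 2.0 * state["shear_modulus"] * strain_increment
        return state

# The parent code drives the calls via structured arguments:
model = MIGCompliantModel()
params = model.check_data({"shear_modulus": 1.5})
state = {"stress": 0.0, "shear_modulus": params["shear_modulus"]}
state = model.update_state(0.01, state)
```

Because the parent code only ever touches these fixed entry points, the same model package can be dropped into any compliant host, which is the portability argument the abstract makes.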

  6. Simulating the 2012 High Plains Drought Using Three Single Column Model Versions of the Community Earth System Model (SCM-CESM)

    Science.gov (United States)

    Medina, I. D.; Denning, S.

    2014-12-01

    The impact of changes in the frequency and severity of drought on fresh water sustainability is a great concern for many regions of the world. One such location is the High Plains, where the local economy is primarily driven by fresh water withdrawals from the Ogallala Aquifer, which accounts for approximately 30% of total irrigation withdrawals from all U.S. aquifers combined. Modeling studies that focus on the feedback mechanisms that control the climate and eco-hydrology during times of drought are limited in the sense that they use conventional General Circulation Models (GCMs) with grid length scales ranging from one hundred to several hundred kilometers. Additionally, these models utilize crude statistical parameterizations of cloud processes for estimating sub-grid fluxes of heat and moisture and have a poor representation of land surface heterogeneity. For this research, we focus on the 2012 High Plains drought, and will perform numerical simulations using three single column model versions of the Community Earth System Model (SCM-CESM) at multiple sites overlying the Ogallala Aquifer for the 2010-2012 period. In the first version of SCM-CESM, CESM will be used in standard mode (the Community Atmospheric Model (CAM) coupled to a single instance of the Community Land Model (CLM)); secondly, CESM will be used in Super-Parameterized mode (SP-CESM), where a cloud resolving model (a CRM consisting of 32 atmospheric columns) replaces the standard CAM atmospheric parameterization and is coupled to a single instance of CLM; and thirdly, CESM will be used in "Multi Instance" SP-CESM mode, where an instance of CLM is coupled to each CRM column of SP-CESM (32 CRM columns coupled to 32 instances of CLM). To assess the physical realism of the land-atmosphere feedbacks simulated at each site by all versions of SCM-CESM, differences in simulated energy and moisture fluxes will be computed between years for the 2010-2012 period, and will be compared to differences calculated using

  7. User Manual for Graphical User Interface Version 2.10 with Fire and Smoke Simulation Model (FSSIM) Version 1.2

    Science.gov (United States)

    2010-05-10

    calculations, while fast, have limitations in applicability and large uncertainties in their results. CFD computations have the potential to be accurate...variables or a CFD model that uses a multitude of variables. A network representation allows for maximum physical extent of a simulation with a minimum...are separated; therefore, the floor of the upper deck and the ceiling of the lower deck are highlighted. A vertical surface would only appear as a

  8. Evaluating litter decomposition in earth system models with long-term litterbag experiments: an example using the Community Land Model version 4 (CLM4).

    Science.gov (United States)

    Bonan, Gordon B; Hartman, Melannie D; Parton, William J; Wieder, William R

    2013-03-01

    Decomposition is a large term in the global carbon budget, but models of the earth system that simulate carbon cycle-climate feedbacks are largely untested with respect to litter decomposition. We tested the litter decomposition parameterization of the Community Land Model version 4 (CLM4), the terrestrial component of the Community Earth System Model, with data from the Long-term Intersite Decomposition Experiment Team (LIDET). The LIDET dataset is a 10-year study of litter decomposition at multiple sites across North America and Central America. We performed 10-year litter decomposition simulations comparable with LIDET for 9 litter types and 20 sites in tundra, grassland, and boreal, conifer, deciduous, and tropical forest biomes using the LIDET-provided climatic decomposition index to constrain temperature and moisture effects on decomposition. We performed additional simulations with DAYCENT, a version of the CENTURY model, to ask how well an established ecosystem model matches the observations. The results show a large discrepancy between the laboratory microcosm studies used to parameterize the CLM4 litter decomposition and the LIDET field study. Simulated carbon loss is more rapid than the observations across all sites, and nitrogen immobilization is biased high. Closer agreement with the observations requires much lower decomposition rates, obtained with the assumption that soil mineral nitrogen severely limits decomposition. DAYCENT better replicates the observations, for both carbon mass remaining and nitrogen, independent of nitrogen limitation. CLM4 has low soil carbon in global earth system simulations. These results suggest that this bias arises, in part, from too rapid litter decomposition. More broadly, the terrestrial biogeochemistry of earth system models must be critically tested with observations, and the consequences of particular model choices must be documented. Long-term litter decomposition experiments such as LIDET provide a real

  9. The global aerosol-climate model ECHAM-HAM, version 2: sensitivity to improvements in process representations

    Directory of Open Access Journals (Sweden)

    K. Zhang

    2012-10-01

    This paper introduces and evaluates the second version of the global aerosol-climate model ECHAM-HAM. Major changes have been brought into the model, including new parameterizations for aerosol nucleation and water uptake, an explicit treatment of secondary organic aerosols, modified emission calculations for sea salt and mineral dust, the coupling of aerosol microphysics to a two-moment stratiform cloud microphysics scheme, and alternative wet scavenging parameterizations. These revisions extend the model's capability to represent details of the aerosol lifecycle and its interaction with climate. Nudged simulations of the year 2000 are carried out to compare the aerosol properties and global distribution in HAM1 and HAM2, and to evaluate them against various observations. Sensitivity experiments are performed to help identify the impact of each individual update in model formulation.

    Results indicate that from HAM1 to HAM2 there is a marked weakening of aerosol water uptake in the lower troposphere, reducing the total aerosol water burden from 75 Tg to 51 Tg. The main reason is that the newly introduced κ-Köhler-theory-based water uptake scheme uses a lower value for the maximum relative humidity cutoff. Particulate organic matter loading in HAM2 is considerably higher in the upper troposphere, because the explicit treatment of secondary organic aerosols allows highly volatile oxidation products of the precursors to be vertically transported to regions of very low temperature and to form aerosols there. Sulfate, black carbon, particulate organic matter and mineral dust in HAM2 have longer lifetimes than in HAM1 because of weaker in-cloud scavenging, which is in turn related to lower autoconversion efficiency in the newly introduced two-moment cloud microphysics scheme. Modification in the sea salt emission scheme causes a significant increase in the ratio (from 1.6 to 7.7) between accumulation mode and coarse mode emission fluxes of
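    The κ-Köhler water-uptake relation mentioned above can be sketched in a few lines. This is a hedged illustration of the theory (Kelvin curvature term neglected), not the ECHAM-HAM code; the function name, κ value, and `rh_max` cutoff are illustrative choices, not the model's.

```python
# Hedged sketch of kappa-Koehler hygroscopic growth (Kelvin term neglected);
# not the ECHAM-HAM implementation. `rh_max` mimics a maximum relative
# humidity cutoff of the kind the HAM2 scheme applies.

def growth_factor(kappa, rh, rh_max=0.95):
    """Diameter growth factor D_wet / D_dry at relative humidity rh (0-1)."""
    aw = min(rh, rh_max)  # water activity, capped at the RH cutoff
    return (1.0 + kappa * aw / (1.0 - aw)) ** (1.0 / 3.0)

# A lower cutoff weakens water uptake at the same ambient RH:
gf_98 = growth_factor(0.6, 0.99, rh_max=0.98)  # kappa ~ sulfate-like aerosol
gf_95 = growth_factor(0.6, 0.99, rh_max=0.95)
```

    Because aw/(1-aw) blows up near saturation, where the cutoff sits dominates the simulated water burden, which is why lowering it reduces aerosol water as described above.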

  10. ADVANCED UTILITY SIMULATION MODEL, REPORT OF SENSITIVITY TESTING, CALIBRATION, AND MODEL OUTPUT COMPARISONS (VERSION 3.0)

    Science.gov (United States)

    The report gives results of activities relating to the Advanced Utility Simulation Model (AUSM): sensitivity testing. comparison with a mature electric utility model, and calibration to historical emissions. The activities were aimed at demonstrating AUSM's validity over input va...

  11. Updating sea spray aerosol emissions in the Community Multiscale Air Quality (CMAQ) model version 5.0.2

    Directory of Open Access Journals (Sweden)

    B. Gantt

    2015-05-01

    Sea spray aerosols (SSA) impact the particle mass concentration and gas-particle partitioning in coastal environments, with implications for human and ecosystem health. Despite their importance, the emission magnitude of SSA remains highly uncertain, with global estimates varying by nearly two orders of magnitude. In this study, the Community Multiscale Air Quality (CMAQ) model was updated to enhance fine mode SSA emissions, include sea surface temperature (SST) dependency, and reduce coastally-enhanced emissions. Predictions from the updated CMAQ model and those of the previous release version, CMAQv5.0.2, were evaluated using several regional and national observational datasets in the continental US. The updated emissions generally reduced model underestimates of sodium, chloride, and nitrate surface concentrations for an inland site of the Bay Regional Atmospheric Chemistry Experiment (BRACE) near Tampa, Florida. Including SST dependency in the SSA emission parameterization led to increased sodium concentrations in the southeast US and decreased concentrations along parts of the Pacific coast and northeastern US. The influence of sodium on the gas-particle partitioning of nitrate resulted in higher nitrate particle concentrations in many coastal urban areas due to increased condensation of nitric acid in the updated simulations, potentially affecting the predicted nitrogen deposition in sensitive ecosystems. Application of the updated SSA emissions to the California Research at the Nexus of Air Quality and Climate Change (CalNex) study period resulted in modest improvement in the predicted surface concentration of sodium and nitrate at several central and southern California coastal sites. This SSA emission update enabled a more realistic simulation of the atmospheric chemistry in environments where marine air mixes with urban pollution.

  12. Study of the Eco-Economic Indicators by Means of the New Version of the Merge Integrated Model. Part 1

    Directory of Open Access Journals (Sweden)

    Boris Vadimovich Digas

    2015-12-01

    One of the most relevant issues of the day is the problem of forecasting climatic changes and mitigating their consequences. The official point of view, reflected in the Climate Doctrine of the Russian Federation, consists in recognizing the need to develop a state approach to climatic problems and related issues on the basis of a comprehensive scientific analysis of ecological, economic and social factors. For this purpose, integrated assessment models of an interdisciplinary character are employed. Their functionality makes it possible to construct and test various dynamic scenarios of complex systems. The main purposes of the computing experiments described in the article are a review of the consequences of the hypothetical participation of Russia in greenhouse gas reduction initiatives such as the Kyoto Protocol, and the testing of one method for calculating the green GDP, which represents the efficiency of environmental management, within the model. To implement these goals, the MERGE optimization model is used; its classical version is intended for the quantitative estimation of the results of applying nature protection strategies. The components of the model are the eco-power module, the climatic module and the module of loss estimates. The main attention in this work is paid to the adaptation of the MERGE model to the current state of the world economy in the conditions of a complicated geopolitical situation, and to the introduction of a new component to the model implementing a simplified method for calculating the green GDP. The draft scenario conditions and key macroeconomic forecast parameters of the socio-economic development of Russia for 2016 and the planning period of 2017-2018, prepared by the Ministry of Economic Development of the Russian Federation, are used as the basic source of input data for the analysis of possible trajectories of the economic development of Russia and the

  13. Study of the Eco-Economic Indicators by Means of the New Version of the Merge Integrated Model Part 2

    Directory of Open Access Journals (Sweden)

    Boris Vadimovich Digas

    2016-03-01

    One of the most relevant issues of the day is the problem of forecasting climatic changes and mitigating their consequences. The official point of view, reflected in the Climate Doctrine of the Russian Federation, consists in recognizing the need to develop a state approach to climatic problems and related issues on the basis of a comprehensive scientific analysis of ecological, economic and social factors. For this purpose, integrated assessment models of an interdisciplinary character are employed. Their functionality makes it possible to construct and test various dynamic scenarios of complex systems. The main purposes of the computing experiments described in the article are a review of the consequences of the hypothetical participation of Russia in greenhouse gas reduction initiatives such as the Kyoto Protocol, and the testing of one method for calculating the green gross domestic product, which represents the efficiency of environmental management, within the model. To implement these goals, the MERGE optimization model is used; its classical version is intended for the quantitative estimation of the results of applying nature protection strategies. The components of the model are the eco-power module, the climatic module and the module of loss estimates. The main attention in this work is paid to the adaptation of the MERGE model to the current state of the world economy in the conditions of a complicated geopolitical situation, and to the introduction of a new component to the model implementing a simplified method for calculating the green gross domestic product. The draft scenario conditions and key macroeconomic forecast parameters of the socio-economic development of Russia for 2016 and the planning period of 2017-2018, prepared by the Ministry of Economic Development of the Russian Federation, are used as the basic source of input data for the analysis of possible trajectories of the

  14. VELMA Ecohydrological Model, Version 2.0 -- Analyzing Green Infrastructure Options for Enhancing Water Quality and Ecosystem Service Co-Benefits

    Science.gov (United States)

    This 2-page factsheet describes an enhanced version (2.0) of the VELMA eco-hydrological model. VELMA – Visualizing Ecosystem Land Management Assessments – has been redesigned to assist communities, land managers, policy makers and other decision makers in evaluating the effecti...

  16. Hierarchical linear modeling of California Verbal Learning Test--Children's Version learning curve characteristics following childhood traumatic head injury.

    Science.gov (United States)

    Warschausky, Seth; Kay, Joshua B; Chi, PaoLin; Donders, Jacobus

    2005-03-01

    California Verbal Learning Test-Children's Version (CVLT-C) indices have been shown to be sensitive to the neurocognitive effects of traumatic brain injury (TBI). The effects of TBI on the learning process were examined with a growth curve analysis of CVLT-C raw scores across the 5 learning trials. The sample with history of TBI comprised 86 children, ages 6-16 years, at a mean of 10.0 (SD=19.5) months postinjury; 37.2% had severe injury, 27.9% moderate, and 34.9% mild. The best-fit model for verbal learning was with a quadratic function. Greater TBI severity was associated with lower rate of acquisition and more gradual deceleration in the rate of acquisition. Intelligence test index scores, previously shown to be sensitive to severity of TBI, were positively correlated with rate of acquisition. Results provide evidence that the CVLT-C learning slope is not a simple linear function and further support for specific effects of TBI on verbal learning. ((c) 2005 APA, all rights reserved).
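    The quadratic growth-curve idea above can be illustrated with a least-squares fit over the five learning trials. The trial scores below are invented for illustration, not CVLT-C data; a negative quadratic coefficient captures the decelerating acquisition the study reports.

```python
# Illustrative quadratic growth-curve fit to learning-trial scores.
# Scores are hypothetical, not CVLT-C data.
import numpy as np

trials = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
scores = np.array([5.0, 8.0, 10.0, 11.0, 11.5])  # hypothetical raw scores

# Fit score = a*t^2 + b*t + c; a < 0 gives a decelerating acquisition curve,
# and the instantaneous rate of acquisition at trial t is 2*a*t + b.
a, b, c = np.polyfit(trials, scores, deg=2)
```

    In the study's hierarchical setting, such coefficients are estimated per child and then modeled as a function of injury severity; greater severity corresponds to a smaller rate of acquisition (2*a*t + b) early in learning.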

  17. An interactive code (NETPATH) for modeling NET geochemical reactions along a flow PATH, version 2.0

    Science.gov (United States)

    Plummer, L. Niel; Prestemon, Eric C.; Parkhurst, David L.

    1994-01-01

    NETPATH is an interactive Fortran 77 computer program used to interpret net geochemical mass-balance reactions between an initial and final water along a hydrologic flow path. Alternatively, NETPATH computes the mixing proportions of two to five initial waters and net geochemical reactions that can account for the observed composition of a final water. The program utilizes previously defined chemical and isotopic data for waters from a hydrochemical system. For a set of mineral and (or) gas phases hypothesized to be the reactive phases in the system, NETPATH calculates the mass transfers in every possible combination of the selected phases that accounts for the observed changes in the selected chemical and (or) isotopic compositions observed along the flow path. The calculations are of use in interpreting geochemical reactions, mixing proportions, evaporation and (or) dilution of waters, and mineral mass transfer in the chemical and isotopic evolution of natural and environmental waters. Rayleigh distillation calculations are applied to each mass-balance model that satisfies the constraints to predict carbon, sulfur, nitrogen, and strontium isotopic compositions at the end point, including radiocarbon dating. DB is an interactive Fortran 77 computer program used to enter analytical data into NETPATH, and calculate the distribution of species in aqueous solution. This report describes the types of problems that can be solved, the methods used to solve problems, and the features available in the program to facilitate these solutions. Examples are presented to demonstrate most of the applications and features of NETPATH. The codes DB and NETPATH can be executed in the UNIX or DOS environment. This report replaces U.S. Geological Survey Water-Resources Investigations Report 91-4078, by Plummer and others, which described the original release of NETPATH, version 1.0 (dated December, 1991), and documents revisions and enhancements that are included in version 2.0.
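    The mass-balance calculation at the heart of NETPATH can be sketched as a small linear system: stoichiometric coefficients times unknown phase mass transfers equal the observed change in water chemistry. This toy example (phases, coefficients, and concentration changes are all hypothetical) is not NETPATH's code, which enumerates every phase combination; it shows one such combination being solved.

```python
# Toy net mass-balance in the spirit of NETPATH (hypothetical phases/data):
# solve A @ x = delta for phase mass transfers x, where A holds moles of
# each element per mole of phase and delta is the change in water chemistry
# (final minus initial) along the flow path.
import numpy as np

# columns: calcite CaCO3, dolomite CaMg(CO3)2, CO2 gas, gypsum CaSO4
# rows:    Ca, Mg, C, S (mmol per kg of water)
A = np.array([
    [1.0, 1.0, 0.0, 1.0],  # Ca
    [0.0, 1.0, 0.0, 0.0],  # Mg
    [1.0, 2.0, 1.0, 0.0],  # C
    [0.0, 0.0, 0.0, 1.0],  # S
])
delta = np.array([1.3, 0.2, 2.4, 0.5])  # observed change in the water

# Positive mass transfer = dissolution, negative = precipitation.
x = np.linalg.solve(A, delta)
```

    Each candidate model NETPATH reports is, in essence, one consistent solution of such a system; the Rayleigh isotope calculations are then layered on top of the solved mass transfers.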

  18. EIA model documentation: World oil refining logistics demand model, "WORLD" reference manual. Version 1.1

    Energy Technology Data Exchange (ETDEWEB)

    1994-04-11

    This manual is intended primarily for use as a reference by analysts applying the WORLD model to regional studies. It also provides overview information on WORLD features of potential interest to managers and analysts. Broadly, the manual covers WORLD model features in progressively increasing detail. Section 2 provides an overview of the WORLD model, how it has evolved, what its design goals are, what it produces, and where it can be taken with further enhancements. Section 3 reviews model management, covering data sources, managing over-optimization, calibration and seasonality, checkpoints for case construction, and common errors. Section 4 describes in detail the WORLD system, including: data and program systems in overview; details of mainframe and PC program control and files; model generation, size management, debugging and error analysis; use with different optimizers; and reporting and results analysis. Section 5 provides a detailed description of every WORLD model data table, covering model controls, case and technology data. Section 6 goes into the details of WORLD matrix structure. It provides an overview, describes how regional definitions are controlled, and defines the naming conventions for all model rows, columns, right-hand sides, and bounds. It also includes a discussion of the formulation of product blending and specifications in WORLD. Several appendices supplement the main sections.

  19. Discrete-Element bonded particle Sea Ice model DESIgn, version 1.3 – model description and implementation

    Directory of Open Access Journals (Sweden)

    A. Herman

    2015-07-01

    This paper presents theoretical foundations, numerical implementation and examples of application of a two-dimensional Discrete-Element bonded-particle Sea Ice model DESIgn. In the model, sea ice is represented as an assemblage of objects of two types: disk-shaped "grains", and semi-elastic bonds connecting them. Grains move on the sea surface under the influence of forces from the atmosphere and the ocean, as well as interactions with surrounding grains through a direct contact (Hertzian contact mechanics) and/or through bonds. The model has an option of taking into account quasi-three-dimensional effects related to space- and time-varying curvature of the sea surface, thus enabling simulation of ice breaking due to stresses resulting from bending moments associated with surface waves. Examples of the model's application to simple sea ice deformation and breaking problems are presented, with an analysis of the influence of the basic model parameters ("microscopic" properties of grains and bonds) on the large-scale response of the modeled material. The model is written as a toolbox suitable for usage with the open-source numerical library LIGGGHTS. The code, together with a full technical documentation and example input files, is freely available with this paper and on the Internet.

  20. Discrete-Element bonded-particle Sea Ice model DESIgn, version 1.3a - model description and implementation

    Science.gov (United States)

    Herman, Agnieszka

    2016-04-01

    This paper presents theoretical foundations, numerical implementation and examples of application of the two-dimensional Discrete-Element bonded-particle Sea Ice model - DESIgn. In the model, sea ice is represented as an assemblage of objects of two types: disk-shaped "grains" and semi-elastic bonds connecting them. Grains move on the sea surface under the influence of forces from the atmosphere and the ocean, as well as interactions with surrounding grains through direct contact (Hertzian contact mechanics) and/or through bonds. The model has an experimental option of taking into account quasi-three-dimensional effects related to the space- and time-varying curvature of the sea surface, thus enabling simulation of ice breaking due to stresses resulting from bending moments associated with surface waves. Examples of the model's application to simple sea ice deformation and breaking problems are presented, with an analysis of the influence of the basic model parameters ("microscopic" properties of grains and bonds) on the large-scale response of the modeled material. The model is written as a toolbox suitable for usage with the open-source numerical library LIGGGHTS. The code, together with full technical documentation and example input files, is freely available with this paper and on the Internet.
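    The Hertzian contact mechanics underlying the grain-grain interactions can be illustrated with the classic normal-force formula for two elastic spheres. This is a generic sketch of the contact-law family, not DESIgn's 2-D disk formulation, and every parameter value below is made up.

```python
# Classic Hertzian normal contact between two same-material elastic spheres;
# a generic illustration, not DESIgn's implementation. Values are made up.
import math

def hertz_force(delta, r1, r2, young, poisson):
    """Normal force for overlap delta between spheres of radii r1, r2."""
    r_eff = r1 * r2 / (r1 + r2)                  # effective radius
    e_eff = young / (2.0 * (1.0 - poisson**2))   # effective contact modulus
    return (4.0 / 3.0) * e_eff * math.sqrt(r_eff) * delta**1.5

# Force grows nonlinearly (proportional to delta**1.5) with the overlap:
f = hertz_force(delta=1e-4, r1=1.0, r2=1.0, young=9e9, poisson=0.3)
```

    The delta**1.5 nonlinearity is what distinguishes Hertzian contact from a simple linear spring, and it is one of the "microscopic" properties whose large-scale consequences the paper analyzes.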

  1. Models of intestinal infection by Salmonella enterica: introduction of a new neonate mouse model [version 1; referees: 2 approved]

    Directory of Open Access Journals (Sweden)

    Marc Schulte

    2016-06-01

    Salmonella enterica serovar Typhimurium is a foodborne pathogen causing inflammatory disease in the intestine following diarrhea and is responsible for thousands of deaths worldwide. Many in vitro investigations using cell culture models are available, but these do not represent the real natural environment present in the intestine of infected hosts. Several in vivo animal models have been used to study the host-pathogen interaction and to unravel the immune responses and cellular processes occurring during infection. An animal model for Salmonella-induced intestinal inflammation relies on the pretreatment of mice with streptomycin. This model is of great importance but still shows limitations to investigate the host-pathogen interaction in the small intestine in vivo. Here, we review the use of mouse models for Salmonella infections and focus on a new small animal model using 1-day-old neonate mice. The neonate model enables researchers to observe infection of both the small and large intestine, thereby offering perspectives for new experimental approaches, as well as to analyze the Salmonella-enterocyte interaction in the small intestine in vivo.

  2. Development of an Information Exchange format for the Observations Data Model version 2 using OGC Observations and Measurements

    Science.gov (United States)

    Valentine, D. W., Jr.; Aufdenkampe, A. K.; Horsburgh, J. S.; Hsu, L.; Lehnert, K. A.; Mayorga, E.; Song, L.; Zaslavsky, I.; Whitenack, T.

    2014-12-01

    The Observations Data Model v1 (ODMv1) schema has been utilized as the basis of hydrologic cyberinfrastructures, including the CUAHSI HIS. The first version of ODM focused on time series and ultimately led to the development of OGC "WaterML2 Part 1: Timeseries", which is proposed to be developed into OGC TimeseriesML. Our team has developed an ODMv2 model to address ODMv1 shortcomings and to encompass a wider community of spatially discrete, feature-based earth observations. The development process included collecting requirements from several existing earth observation data systems: HIS, CZOData, the IEDA and EarthChem system, and IOOS. We developed ODM2 as a set of core entities with additional extension components: some for shared functionality (e.g. data quality, provenance), others for specific use cases (e.g. laboratory analysis, equipment). Initially, we closely followed the Observations and Measurements (ISO 19156) conceptual model. After prototyping and reviewing the requirements, we extended the ODMv2 conceptual model to include entities that document ancillary acts that do not always produce a result, differing from O&M, where acts are expected to produce a result. ODMv2 includes the core concept of an "Action", which encapsulates activities performed in the process of making an observation but that may not produce a result. Actions, such as a sample analysis, that observe a property and produce a result are equivalent to O&M observations. But in many use cases, actions have no resulting observation; examples of such actions are a site visit or sample preparation (splitting of a sample). These actions are part of a chain of actions which produces the final observation. Overall, ODMv2 generally follows the O&M conceptual model. The nearly final ODMv2 includes a core and extensions. 
The core entities include actions, feature actions (observations), datasets (groupings), methods (procedures), sampling
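    The "Action" concept described above, an activity that may or may not yield a result and is chained toward a final observation, can be sketched with a few illustrative classes. The class and field names below are invented for illustration and are not the actual ODM2 schema.

```python
# Illustrative sketch of the ODMv2 "Action" idea: every activity is an
# Action, but only some produce a Result (an O&M-style observation).
# Class and field names are hypothetical, not the real ODM2 schema.
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class Result:
    variable: str
    value: float
    unit: str

@dataclass
class Action:
    action_type: str                   # e.g. "siteVisit", "specimenAnalysis"
    method: str
    results: List[Result] = field(default_factory=list)
    parent: Optional["Action"] = None  # previous link in the action chain

visit = Action("siteVisit", "fieldProtocol")                # no result
split = Action("specimenSplit", "splitter", parent=visit)   # no result
analysis = Action("specimenAnalysis", "labMethod",          # has a result
                  results=[Result("Ca", 1.3, "mmol/L")], parent=split)
```

    Only the final analysis is equivalent to an O&M observation; the visit and the sample split are documented Actions with no result, which is exactly the extension beyond O&M the abstract describes.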

  3. Modeling the structure of the attitudes and belief scale 2 using CFA and bifactor approaches: Toward the development of an abbreviated version.

    Science.gov (United States)

    Hyland, Philip; Shevlin, Mark; Adamson, Gary; Boduszek, Daniel

    2014-01-01

    The Attitudes and Belief Scale-2 (ABS-2: DiGiuseppe, Leaf, Exner, & Robin, 1988. The development of a measure of rational/irrational thinking. Paper presented at the World Congress of Behavior Therapy, Edinburgh, Scotland.) is a 72-item self-report measure of evaluative rational and irrational beliefs widely used in Rational Emotive Behavior Therapy research contexts. However, little psychometric evidence exists regarding the measure's underlying factor structure. Furthermore, given the length of the ABS-2, there is a need for an abbreviated version that can be administered when there are time demands on the researcher, such as in clinical settings. This study sought to examine a series of theoretical models hypothesized to represent the latent structure of the ABS-2 within an alternative models framework using traditional confirmatory factor analysis as well as utilizing a bifactor modeling approach. Furthermore, this study also sought to develop a psychometrically sound abbreviated version of the ABS-2. Three hundred and thirteen (N = 313) active emergency service personnel completed the ABS-2. Results indicated that for each model, the application of bifactor modeling procedures improved model fit statistics, and a novel eight-factor intercorrelated solution was identified as the best fitting model of the ABS-2. However, the observed fit indices failed to satisfy commonly accepted standards. A 24-item abbreviated version was thus constructed, and an intercorrelated eight-factor solution yielded satisfactory model fit statistics. Current results support the use of a bifactor modeling approach to determining the factor structure of the ABS-2. Furthermore, results provide empirical support for the psychometric properties of the newly developed abbreviated version.

  4. Lord-Wingersky Algorithm Version 2.0 for Hierarchical Item Factor Models with Applications in Test Scoring, Scale Alignment, and Model Fit Testing.

    Science.gov (United States)

    Cai, Li

    2015-06-01

    Lord and Wingersky's (Appl Psychol Meas 8:453-461, 1984) recursive algorithm for creating summed score based likelihoods and posteriors has a proven track record in unidimensional item response theory (IRT) applications. Extending the recursive algorithm to handle multidimensionality is relatively simple, especially with fixed quadrature because the recursions can be defined on a grid formed by direct products of quadrature points. However, the increase in computational burden remains exponential in the number of dimensions, making the implementation of the recursive algorithm cumbersome for truly high-dimensional models. In this paper, a dimension reduction method that is specific to the Lord-Wingersky recursions is developed. This method can take advantage of the restrictions implied by hierarchical item factor models, e.g., the bifactor model, the testlet model, or the two-tier model, such that a version of the Lord-Wingersky recursive algorithm can operate on a dramatically reduced set of quadrature points. For instance, in a bifactor model, the dimension of integration is always equal to 2, regardless of the number of factors. The new algorithm not only provides an effective mechanism to produce summed score to IRT scaled score translation tables properly adjusted for residual dependence, but leads to new applications in test scoring, linking, and model fit checking as well. Simulated and empirical examples are used to illustrate the new applications.
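    The original unidimensional Lord-Wingersky recursion is short enough to sketch directly: the summed-score likelihood is built item by item at a single ability (quadrature) point. This shows only the classic 1984 algorithm for dichotomous items; the hierarchical dimension-reduction extension the paper develops is not reproduced here.

```python
# Classic unidimensional Lord-Wingersky recursion at one ability point:
# build P(summed score = s) from dichotomous item response probabilities.
# The paper's hierarchical dimension-reduction extension is not shown.

def summed_score_likelihoods(p):
    """p[i] = P(item i correct | theta); returns [P(score 0), ..., P(score n)]."""
    like = [1.0]  # with zero items, score 0 has probability 1
    for pi in p:
        new = [0.0] * (len(like) + 1)
        for s, ls in enumerate(like):
            new[s] += ls * (1.0 - pi)  # item incorrect: score unchanged
            new[s + 1] += ls * pi      # item correct: score + 1
        like = new
    return like

L = summed_score_likelihoods([0.8, 0.6, 0.4])  # three hypothetical items
```

    In practice the recursion is run at each quadrature point and the results are weighted by the ability prior; the paper's contribution is keeping that grid two-dimensional for bifactor-type models regardless of the number of factors.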

  5. Temperature and Humidity Profiles in the TqJoint Data Group of AIRS Version 6 Product for the Climate Model Evaluation

    Science.gov (United States)

    Ding, Feng; Fang, Fan; Hearty, Thomas J.; Theobald, Michael; Vollmer, Bruce; Lynnes, Christopher

    2014-01-01

    The Atmospheric Infrared Sounder (AIRS) mission is entering its 13th year of global observations of the atmospheric state, including temperature and humidity profiles, outgoing long-wave radiation, cloud properties, and trace gases. AIRS data have thus been widely used, among other things, for short-term climate research and as an observational component for model evaluation. One instance is the fifth phase of the Coupled Model Intercomparison Project (CMIP5), which uses AIRS version 5 data in the climate model evaluation. The NASA Goddard Earth Sciences Data and Information Services Center (GES DISC) is the home of processing, archiving, and distribution services for data from the AIRS mission. The GES DISC, in collaboration with the AIRS Project, released data from the version 6 algorithm in early 2013. The new algorithm represents a significant improvement over previous versions in terms of greater stability, yield, and quality of products. The ongoing Earth System Grid for next-generation climate model research project, a collaborative effort of GES DISC and NASA JPL, will bring in temperature and humidity profiles from AIRS version 6. The AIRS version 6 product adds a new "TqJoint" data group, which contains data for a common set of observations across water vapor and temperature at all atmospheric levels and is suitable for climate process studies. How different may the monthly temperature and humidity profiles in the "TqJoint" group be from the "Standard" group, where temperature and water vapor are not always valid at the same time? This study aims to answer the question by comprehensively comparing the temperature and humidity profiles from the "TqJoint" group and the "Standard" group. The comparison includes mean differences at different levels globally and over land and ocean. We are also working on examining the sampling differences between the "TqJoint" and "Standard" groups using MERRA data.

  6. Flipped versions of the universal 3-3-1 and the left-right symmetric models in [SU(3)]³: A comprehensive approach

    Science.gov (United States)

    Rodríguez, Oscar; Benavides, Richard H.; Ponce, William A.; Rojas, Eduardo

    2017-01-01

    By considering the 3-3-1 and the left-right symmetric models as low-energy effective theories of the SU(3)_C ⊗ SU(3)_L ⊗ SU(3)_R (for short [SU(3)]³) gauge group, alternative versions of these models are found. The new neutral gauge bosons of the universal 3-3-1 model and its flipped versions are presented; also, the left-right symmetric model and its flipped variants are studied. Our analysis shows that there are two flipped versions of the universal 3-3-1 model, with the particularity that both of them have the same weak charges. For the left-right symmetric model, we also found two flipped versions; one of them is new in the literature and, unlike those of the 3-3-1, requires a dedicated study of its electroweak properties. For all the models analyzed, the couplings of the Z' bosons to the standard model fermions are reported. The explicit form of the null space of the vector boson mass matrix for an arbitrary Higgs tensor and gauge group is also presented. In the general framework of the [SU(3)]³ gauge group, and by using the LHC experimental results and EW precision data, limits on the Z' mass and the mixing angle between Z and the new gauge bosons Z' are obtained. The general results call for very small mixing angles, in the range of 10⁻³ radians, and M_Z' > 2.5 TeV.
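    The kind of Z-Z' mixing-angle constraint quoted above can be illustrated by diagonalizing a 2x2 neutral-boson mass-squared matrix. The numbers below are placeholders chosen to be consistent with the quoted limits, not values fitted to the [SU(3)]³ models.

```python
# Generic sketch: Z-Z' mixing from a 2x2 mass-squared matrix.
# The matrix entries are illustrative, not derived from the models above.
import numpy as np

# Hypothetical mass-squared matrix in the (Z, Z') gauge basis, in GeV^2;
# the small off-diagonal entry induces Z-Z' mixing.
M2 = np.array([[91.19**2,   25.0**2],
               [25.0**2, 3000.0**2]])

vals = np.linalg.eigvalsh(M2)        # eigenvalues in ascending order
m_z, m_zprime = np.sqrt(vals)        # physical (mass-eigenstate) masses

# Standard 2x2 diagonalization angle: tan(2*theta) = 2*M12 / (M22 - M11).
theta = 0.5 * np.arctan2(2 * M2[0, 1], M2[1, 1] - M2[0, 0])
```

    With a heavy Z' the angle scales roughly as M12/M_Z'^2, which is why pushing M_Z' above a few TeV automatically drives the mixing into the sub-milliradian range the fits require.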

  7. Statistical model of fractures and deformations zones for Forsmark. Preliminary site description Forsmark area - version 1.2

    Energy Technology Data Exchange (ETDEWEB)

    La Pointe, Paul R. [Golder Associate Inc., Redmond, WA (United States); Olofsson, Isabelle; Hermanson, Jan [Golder Associates AB, Uppsala (Sweden)

    2005-04-01

    Compared to version 1.1, a much larger amount of data, especially from boreholes, is available. Both one-hole interpretation and Boremap indicate the presence of high and low fracture intensity intervals in the rock mass. The depth and width of these intervals vary from borehole to borehole, but these constant-intensity intervals are contiguous and show quite sharp transitions. There is no consistent pattern of intervals of high fracture intensity at or near the surface. In many cases, the intervals of highest fracture intensity are considerably below the surface. While some fractures may have occurred or been reactivated in response to surficial stress relief, surficial stress relief does not appear to be a significant explanatory variable for the observed variations in fracture intensity. Data from the high fracture intensity intervals were extracted and statistical analyses were conducted in order to identify common geological factors. Stereoplots of fracture orientation versus depth for the different fracture intensity intervals were also produced for each borehole. Moreover, percussion borehole data were analysed in order to identify the persistence of these intervals throughout the model volume. The main conclusions of these analyses are the following: The fracture intensity is conditioned by the rock domain, but inside a rock domain intervals of high and low fracture intensity are identified. The intervals of high fracture intensity almost always correspond to intervals with distinct fracture orientations (whether a set, most often the NW sub-vertical set, is highly dominant, or some orientation sets are missing). These high fracture intensity intervals are positively correlated with the presence of first- and second-generation minerals (epidote, calcite). No clear correlation for these fracture intensity intervals has been identified between holes. Based on these results the fracture frequency has been calculated in each rock domain for the

  8. Rock mechanics modelling of rock mass properties - summary of primary data. Preliminary site description Laxemar subarea - version 1.2

    Energy Technology Data Exchange (ETDEWEB)

    Lanaro, Flavio [Berg Bygg Konsult AB, Solna (Sweden); Oehman, Johan; Fredriksson, Anders [Golder Associates AB, Uppsala (Sweden)

    2006-05-15

    The results presented in this report are the summary of the primary data for the Laxemar Site Descriptive Modelling version 1.2. At this stage, laboratory tests on intact rock and fracture samples from boreholes KSH01A, KSH02A, KAV01 (already considered in Simpevarp SDM version 1.2) and boreholes KLX02 and KLX04 were available. Concerning the mechanical properties of the intact rock, the rock type 'granite to quartz monzodiorite' or 'Aevroe granite' (code 501044) was tested for the first time within the frame of the site descriptive modelling. The average uniaxial compressive strength and Young's modulus of the granite to quartz monzodiorite are 192 MPa and 72 GPa, respectively. The crack initiation stress is observed to be 0.5 times the uniaxial compressive strength for the same rock type. Non-negligible differences are observed between the statistics of the mechanical properties of the granite to quartz monzodiorite in boreholes KLX02 and KLX04. The available data on rock fractures were analysed to determine the mechanical properties of the different fracture sets at the site (based on tilt test results) and to determine systematic differences between the results obtained with different sample preparation techniques (based on direct shear tests). The tilt tests show that there are no significant differences in the mechanical properties due to fracture orientation. Thus, all fracture sets seem to have the same strength and deformability. The average peak friction angle for the Coulomb criterion of the fracture sets varies between 33.6 deg and 34.1 deg, while the average peak cohesion ranges between 0.46 and 0.52 MPa. The average Coulomb residual friction angle and residual cohesion vary in the ranges 28.0 deg - 29.2 deg and 0.40-0.45 MPa, respectively. The only significant difference is observed in the average cohesion between fracture sets S_A and S_d. The direct shear tests show that the
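
The tilt-test parameters above feed a Coulomb failure criterion; a minimal sketch using the reported average peak and residual values (the normal-stress levels are illustrative):

```python
import math

def coulomb_shear_strength(sigma_n_mpa, cohesion_mpa, friction_deg):
    """Coulomb failure criterion: tau = c + sigma_n * tan(phi)."""
    return cohesion_mpa + sigma_n_mpa * math.tan(math.radians(friction_deg))

# Average peak parameters reported for the Laxemar fracture sets.
c_peak, phi_peak = 0.5, 34.0       # MPa, degrees
# Residual parameters (after sliding has degraded the asperities).
c_res, phi_res = 0.42, 28.6        # MPa, degrees

for sigma_n in (1.0, 5.0, 10.0):   # normal stress in MPa (illustrative)
    tau_p = coulomb_shear_strength(sigma_n, c_peak, phi_peak)
    tau_r = coulomb_shear_strength(sigma_n, c_res, phi_res)
    print(f"sigma_n={sigma_n:5.1f} MPa  peak tau={tau_p:.2f} MPa  residual tau={tau_r:.2f} MPa")
```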

  9. BaP (PAH) air quality modelling exercise over Zaragoza (Spain) using an adapted version of WRF-CMAQ model.

    Science.gov (United States)

    San José, Roberto; Pérez, Juan Luis; Callén, María Soledad; López, José Manuel; Mastral, Ana

    2013-12-01

    Benzo(a)pyrene (BaP) is one of the most dangerous PAHs due to its highly carcinogenic and mutagenic character. For this reason, Directive 2004/107/CE of the European Union establishes a target value of 1 ng/m3 of BaP in the atmosphere. The main aim of this paper is to estimate BaP concentrations in the atmosphere by using a last-generation air quality dispersion model that includes the transport, scavenging and deposition processes for BaP. The degradation of particulate BaP by ozone has been considered. The aerosol-gas partitioning phenomenon in the atmosphere is modelled taking into account the concentrations in the gas and aerosol phases. If the pre-existing organic aerosol concentrations are zero, gas/particle equilibrium is established. The model has been validated at the local scale with data from a sampling campaign carried out in the area of Zaragoza (Spain) over 12 weeks.
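
A common way to express the gas/particle split of a semivolatile species like BaP is a Pankow-type absorptive partitioning coefficient; a hedged sketch in which the Kp value is illustrative and not the paper's parameterization:

```python
def particulate_fraction(kp_m3_per_ug, tsp_ug_m3):
    """Fraction of a semivolatile species in the particle phase for a
    Pankow-type partitioning coefficient Kp and particle loading TSP:
        F_p = Kp*TSP / (1 + Kp*TSP)
    """
    x = kp_m3_per_ug * tsp_ug_m3
    return x / (1.0 + x)

# BaP is strongly particle-bound: a Kp on the order of 1 m^3/ug is used
# here purely for illustration.
for tsp in (10.0, 50.0, 100.0):  # particle loading in ug/m^3
    fp = particulate_fraction(1.0, tsp)
    print(f"TSP={tsp:5.1f} ug/m3 -> particulate fraction {fp:.3f}")
```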

  10. Coupling of the VAMPER permafrost model within the earth system model iLOVECLIM (version 1.0): description and validation

    Directory of Open Access Journals (Sweden)

    D. Kitover

    2014-11-01

    The VAMPER permafrost model has been enhanced for coupling within the iLOVECLIM earth system model of intermediate complexity by including snow thickness and active layer calculations. In addition, the coupling between iLOVECLIM and the VAMPER model includes two spatially variable maps of geothermal heat flux and generalized lithology. A semi-coupled version is validated using the modern-day extent of permafrost along with observed permafrost thickness and subsurface temperatures at selected borehole sites. The model run that does not include the effects of snow cover overestimates the present permafrost extent. However, when the snow component is included, the extent is overall reduced too much. Most of the modeled thickness values and subsurface temperatures fall within a reasonable range of the corresponding observed values. Discrepancies are due to a lack of captured effects from features such as topography and organic soil layers. In addition, some discrepancy is due to disequilibrium with the current climate, meaning that some permafrost is a result of colder past states and therefore cannot be reproduced accurately with the iLOVECLIM preindustrial forcings.
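
For intuition on how a geothermal heat-flux map controls equilibrium permafrost depth, a steady-state 1-D conduction sketch (the conductivity and temperatures are illustrative values, not VAMPER's actual scheme):

```python
def equilibrium_permafrost_thickness(t_surface_c, geothermal_flux_w_m2,
                                     conductivity_w_mk=2.5, t_freeze_c=0.0):
    """Steady-state 1-D conduction gives a linear profile T(z) = Ts + (Q/k)*z,
    so the 0 degC isotherm (the permafrost base) sits at z = k*(Tf - Ts)/Q."""
    if t_surface_c >= t_freeze_c:
        return 0.0  # no permafrost if the surface is at or above freezing
    return conductivity_w_mk * (t_freeze_c - t_surface_c) / geothermal_flux_w_m2

# Illustrative case: a -8 degC mean surface temperature under two different
# geothermal heat fluxes (VAMPER reads flux from a spatially variable map).
for q in (0.04, 0.06):  # W/m^2
    z = equilibrium_permafrost_thickness(-8.0, q)
    print(f"Q={q:.2f} W/m2 -> equilibrium permafrost thickness ~{z:.0f} m")
```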

  11. Modeling herring population dynamics: herring catch-at-age model version 2 = Modelisation de la dynamique des populations de hareng : modele des captures a l'age de harengs, Version 2

    National Research Council Canada - National Science Library

    Christensen, L.B; Haist, V; Schweigert, J

    2010-01-01

    The herring catch-at-age model (HCAM) is an age-structured stock assessment model developed specifically for Pacific herring which is assumed to be a multi-stock population that has experienced periods of significant fishery impact...

  12. [Measuring psychosocial stress at work in Spanish hospital's personnel. Psychometric properties of the Spanish version of Effort-Reward Imbalance model].

    Science.gov (United States)

    Macías Robles, María Dolores; Fernández-López, Juan Antonio; Hernández-Mejía, Radhamés; Cueto-Espinar, Antonio; Rancaño, Iván; Siegrist, Johannes

    2003-05-10

    Two main models are currently used to evaluate psychosocial factors at work: the Demand-Control (or job strain) model developed by Karasek and the Effort-Reward Imbalance model developed by Siegrist. A Spanish version of the first model has been validated, yet so far no validated Spanish version of the second model is available. The objective of this study was to explore the psychometric properties of the Spanish version of the Effort-Reward Imbalance model in terms of internal consistency, factorial validity, and discriminant validity. A cross-sectional study was performed on a representative sample of 298 workers of the Spanish public hospital San Agustin in Asturias. The Spanish version of the Effort-Reward Imbalance Questionnaire (23 items) was obtained by a standard forward/backward translation procedure, and the information was gathered by self-administered application. Exploratory factor analyses were performed to test the dimensional structure of the theoretical model. Cronbach's alpha coefficient was calculated to estimate internal consistency reliability. Information on discriminant validity is given for sex, age and education. Differences were calculated with the t-test for two independent samples or ANOVA, respectively. Internal consistency was satisfactory for two of the scales (reward and intrinsic effort), with Cronbach's alpha coefficients higher than 0.80. The internal consistency of the scale of extrinsic effort was lower (alpha = 0.63). A three-factor solution was retained for the factor analysis of reward, as expected, and these dimensions were interpreted as (a) esteem, (b) job promotion and salary, and (c) job instability. A one-factor solution was retained for the factor analysis of intrinsic effort. The factor analysis of the scale of extrinsic effort did not support the expected one-dimensional structure. The analysis of discriminant validity displayed significant associations between measures of Effort-Reward Imbalance and the
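
Cronbach's alpha, the reliability statistic reported above, can be computed from an item-score matrix with the standard formula; the data below are synthetic, and the sample size matches the study's n = 298 only for illustration:

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for an (n_respondents, n_items) score matrix:
    alpha = k/(k-1) * (1 - sum(item variances) / variance(total score))."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1.0 - item_vars / total_var)

# Synthetic 5-item scale: items driven by one latent factor plus noise,
# so alpha should come out high (the paper reports alpha > 0.80 for the
# reward and intrinsic effort scales).
rng = np.random.default_rng(1)
latent = rng.normal(size=(298, 1))
scores = latent + 0.5 * rng.normal(size=(298, 5))
print(f"Cronbach's alpha: {cronbach_alpha(scores):.2f}")
```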

  13. PVWatts Version 5 Manual

    Energy Technology Data Exchange (ETDEWEB)

    Dobos, A. P.

    2014-09-01

    The NREL PVWatts calculator is a web application developed by the National Renewable Energy Laboratory (NREL) that estimates the electricity production of a grid-connected photovoltaic system based on a few simple inputs. PVWatts combines a number of sub-models to predict overall system performance, and includes several built-in parameters that are hidden from the user. This technical reference describes the sub-models, documents assumptions and hidden parameters, and explains the sequence of calculations that yields the final system performance estimate. This reference is applicable to the significantly revised version of PVWatts released by NREL in 2014.
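
A deliberately minimal PVWatts-style estimate, with made-up loss and efficiency values standing in for the hidden parameters the reference documents:

```python
def pvwatts_like_energy(dc_size_kw, poa_irradiance_kwh_m2,
                        system_losses=0.14, inverter_eff=0.96):
    """Very simplified PVWatts-style estimate: DC energy scales with
    plane-of-array insolation relative to the 1 kW/m^2 rating condition,
    then lumped system losses and inverter efficiency are applied.
    (Loss and efficiency values here are illustrative assumptions.)"""
    dc_kwh = dc_size_kw * poa_irradiance_kwh_m2  # kWh at reference conditions
    return dc_kwh * (1.0 - system_losses) * inverter_eff

# A 4 kW system receiving 1700 kWh/m^2/yr of plane-of-array insolation.
annual_kwh = pvwatts_like_energy(4.0, 1700.0)
print(f"Estimated annual AC output: {annual_kwh:.0f} kWh")
```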

  14. A Linguistic Attempt at Popularizing Buddhist Thought (仏教思想普及のための言語学的試み)

    OpenAIRE

    角岡, 賢一

    2010-01-01

    This paper focuses on the history of the translation of the Buddhist sutras, originally in Chinese, into modern colloquial Japanese. Partly because not all of the sutras have been translated into Japanese, sutras are recited with Chinese sounds at funeral and memorial ceremonies. As Japanese laymen and laywomen are not familiar with such Chinese sounds, the sutras are meaningless to them unless they study them beforehand. Some monks, however, began reciting sutras in Japanese translation in these two ...

  15. The Nexus Land-Use model version 1.0, an approach articulating biophysical potentials and economic dynamics to model competition for land-use

    Directory of Open Access Journals (Sweden)

    F. Souty

    2012-02-01

    Interactions between food demand, biomass energy and forest preservation are driving both food prices and land-use changes, regionally and globally. This study presents a new model called Nexus Land-Use version 1.0 which describes these interactions through a generic representation of agricultural intensification mechanisms. The Nexus Land-Use model equations combine biophysics and economics into a single coherent framework to calculate crop yields, food prices, and resulting pasture and cropland areas within 12 regions inter-connected with each other by international trade. The representation of cropland and livestock production systems in each region relies on three components: (i) a biomass production function derived from the crop yield response function to inputs such as industrial fertilisers; (ii) a detailed representation of the livestock production system subdivided into an intensive and an extensive component; and (iii) a spatially explicit distribution of potential (maximal) crop yields prescribed from the Lund-Potsdam-Jena global vegetation model for managed Land (LPJmL). The economic principles governing decisions about land-use and intensification are adapted from Ricardian rent theory, assuming cost minimisation for farmers. The land-use modelling approach described in this paper entails several advantages. Firstly, it makes it possible to explore interactions among different types of biomass demand for food and animal feed, in a consistent approach, including indirect effects on land-use change resulting from international trade. Secondly, yield variations induced by the possible expansion of croplands on less suitable marginal lands are modelled by using regional land area distributions of potential yields, and a calculated boundary between intensive and extensive production. The model equations and parameter values are first described in detail. Then, idealised scenarios exploring the impact of forest preservation policies or

  16. AERONET Version 3 processing

    Science.gov (United States)

    Holben, B. N.; Slutsker, I.; Giles, D. M.; Eck, T. F.; Smirnov, A.; Sinyuk, A.; Schafer, J.; Rodriguez, J.

    2014-12-01

    The Aerosol Robotic Network (AERONET) database has evolved in measurement accuracy, data quality, products, and availability to the scientific community over the course of 21 years with the support of NASA, PHOTONS and all federated partners. This evolution is periodically manifested as a new data version release, created by carefully reprocessing the entire database with the most current algorithms that fundamentally change the database and ultimately the data products used by the community. The newest processing, Version 3, will be released in 2015 after the entire database is reprocessed and real-time data processing becomes operational. All V3 algorithms have been developed and individually vetted, and represent four main categories: aerosol optical depth (AOD) processing, inversion processing, database management and new products. The primary trigger for the release of V3 lies with cloud screening of the direct-sun observations and computation of AOD, which will fundamentally change all data available for analysis and all subsequent retrieval products. This presentation will illustrate the innovative approach used for cloud screening and assess the elements of V3 AOD relative to the current version. We will also present the advances in inversion product processing, with emphasis on the random and systematic uncertainty estimates. This processing will be applied to the new hybrid measurement scenario intended to provide inversion retrievals for all solar zenith angles. We will introduce automatic quality assurance criteria that will allow near-real-time quality-assured aerosol products necessary for real-time satellite and model validation and assimilation. Lastly, we will introduce the new management structure that will improve access to the database. The current Version 2 will be supported for at least two years after the initial release of V3 to maintain continuity for ongoing investigations.
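
Direct-sun AOD retrievals of the kind reprocessed for V3 rest on the Beer-Lambert law; a simplified sketch in which the calibration constant and optical depths are illustrative, and the real algorithm adds wavelength-dependent Rayleigh and trace-gas corrections:

```python
import math

def total_optical_depth(v_measured, v0_calibration, airmass):
    """Beer-Lambert retrieval used by sun photometers:
    V = V0 * exp(-tau_total * m)  =>  tau_total = ln(V0/V) / m."""
    return math.log(v0_calibration / v_measured) / airmass

def aerosol_optical_depth(v, v0, airmass, tau_rayleigh, tau_gas=0.0):
    """AOD is what remains after subtracting molecular (Rayleigh) scattering
    and trace-gas absorption from the total optical depth."""
    return total_optical_depth(v, v0, airmass) - tau_rayleigh - tau_gas

# Illustrative numbers (not real AERONET calibration constants):
tau_a = aerosol_optical_depth(v=7200.0, v0=12000.0, airmass=1.5,
                              tau_rayleigh=0.14)
print(f"AOD: {tau_a:.3f}")
```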

  17. Development and analysis of some versions of the fractional-order point reactor kinetics model for a nuclear reactor with slab geometry

    Science.gov (United States)

    Vyawahare, Vishwesh A.; Nataraj, P. S. V.

    2013-07-01

    In this paper, we report the development and analysis of some novel versions and approximations of the fractional-order (FO) point reactor kinetics model for a nuclear reactor with slab geometry. A systematic development of the FO Inhour equation, the inverse FO point reactor kinetics model, and fractional-order versions of the constant delayed neutron rate approximation model and the prompt jump approximation model is presented for the first time (for both one delayed group and six delayed groups). These models evolve from the FO point reactor kinetics model, which has been derived from the FO neutron telegraph equation for neutron transport, considering subdiffusive neutron transport. Various observations and analysis results are reported, and the corresponding justifications are addressed using the subdiffusive framework for neutron transport. The FO Inhour equation is found to be a pseudo-polynomial whose degree depends on the order of the fractional derivative in the FO model. The inverse FO point reactor kinetics model is derived and used to find the reactivity variation required to achieve exponential and sinusoidal power variation in the core. The situation of sudden insertion of negative reactivity is analyzed using the FO constant delayed neutron rate approximation. Use of the FO model for representing the prompt jump in reactor power is advocated on the basis of subdiffusion. Comparison with the respective integer-order models is carried out for practical data. Also, it has been shown analytically that integer-order models are a special case of FO models when the order of the time derivative is one. Development of these FO models plays a crucial role in reactor theory and operation as it is the first step towards achieving the FO control-oriented model for a nuclear reactor. The results presented here form an important step in the efforts to establish a step-by-step and systematic theory for the FO modeling of a nuclear reactor.
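
For orientation, the classical one-delayed-group point kinetics equations that these FO models generalize are shown below; the fractional versions replace the integer time derivative of the neutron density with a Caputo derivative of order alpha. This is a schematic form only: the paper's derivation from the FO neutron telegraph equation introduces additional relaxation-time factors.

```latex
% Classical one-delayed-group point reactor kinetics
% (n: neutron density, C: delayed-neutron precursor concentration):
\frac{dn(t)}{dt} = \frac{\rho(t)-\beta}{\Lambda}\,n(t) + \lambda C(t),
\qquad
\frac{dC(t)}{dt} = \frac{\beta}{\Lambda}\,n(t) - \lambda C(t)

% Schematic fractional-order generalization (Caputo derivative of order
% 0 < \alpha \le 1; the model reduces to the classical one at \alpha = 1):
{}^{C}\!D_t^{\alpha}\, n(t) = \frac{\rho(t)-\beta}{\Lambda}\,n(t) + \lambda C(t)
```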

  18. The Nexus Land-Use model version 1.0, an approach articulating biophysical potentials and economic dynamics to model competition for land-use

    Directory of Open Access Journals (Sweden)

    F. Souty

    2012-10-01

    Interactions between food demand, biomass energy and forest preservation are driving both food prices and land-use changes, regionally and globally. This study presents a new model called Nexus Land-Use version 1.0 which describes these interactions through a generic representation of agricultural intensification mechanisms within agricultural lands. The Nexus Land-Use model equations combine biophysics and economics into a single coherent framework to calculate crop yields, food prices, and resulting pasture and cropland areas within 12 regions inter-connected with each other by international trade. The representation of cropland and livestock production systems in each region relies on three components: (i) a biomass production function derived from the crop yield response function to inputs such as industrial fertilisers; (ii) a detailed representation of the livestock production system subdivided into an intensive and an extensive component; and (iii) a spatially explicit distribution of potential (maximal) crop yields prescribed from the Lund-Potsdam-Jena global vegetation model for managed Land (LPJmL). The economic principles governing decisions about land-use and intensification are adapted from Ricardian rent theory, assuming cost minimisation for farmers. In contrast to other land-use models linking economy and biophysics, crops are aggregated as a representative product in calories, and intensification for the representative crop is a non-linear function of chemical inputs. The model equations and parameter values are first described in detail. Then, idealised scenarios exploring the impact of forest preservation policies or rising energy prices on agricultural intensification are described, and their impacts on pasture and cropland areas are investigated.

  19. The Nexus Land-Use model version 1.0, an approach articulating biophysical potentials and economic dynamics to model competition for land-use

    Science.gov (United States)

    Souty, F.; Brunelle, T.; Dumas, P.; Dorin, B.; Ciais, P.; Crassous, R.; Müller, C.; Bondeau, A.

    2012-10-01

    Interactions between food demand, biomass energy and forest preservation are driving both food prices and land-use changes, regionally and globally. This study presents a new model called Nexus Land-Use version 1.0 which describes these interactions through a generic representation of agricultural intensification mechanisms within agricultural lands. The Nexus Land-Use model equations combine biophysics and economics into a single coherent framework to calculate crop yields, food prices, and resulting pasture and cropland areas within 12 regions inter-connected with each other by international trade. The representation of cropland and livestock production systems in each region relies on three components: (i) a biomass production function derived from the crop yield response function to inputs such as industrial fertilisers; (ii) a detailed representation of the livestock production system subdivided into an intensive and an extensive component, and (iii) a spatially explicit distribution of potential (maximal) crop yields prescribed from the Lund-Potsdam-Jena global vegetation model for managed Land (LPJmL). The economic principles governing decisions about land-use and intensification are adapted from Ricardian rent theory, assuming cost minimisation for farmers. In contrast to other land-use models linking economy and biophysics, crops are aggregated as a representative product in calories, and intensification for the representative crop is a non-linear function of chemical inputs. The model equations and parameter values are first described in detail. Then, idealised scenarios exploring the impact of forest preservation policies or rising energy prices on agricultural intensification are described, and their impacts on pasture and cropland areas are investigated.
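
The yield-response and cost-minimisation logic can be caricatured in a few lines; the saturating response curve and all prices below are illustrative stand-ins, not the model's calibrated functions:

```python
def crop_yield(inputs, potential_yield, half_saturation=50.0):
    """Illustrative non-linear yield response to chemical inputs,
    saturating toward an LPJmL-style potential yield."""
    return potential_yield * inputs / (inputs + half_saturation)

def cost_per_calorie(inputs, potential_yield, input_price=1.0, land_rent=20.0):
    """Per-hectare cost (inputs + land rent) divided by yield: the quantity a
    Ricardian cost-minimising farmer drives down by choosing how much to
    intensify. All prices here are invented for illustration."""
    return (input_price * inputs + land_rent) / crop_yield(inputs, potential_yield)

# Scan intensification levels on land with a potential yield of 10 units/ha.
best = min(range(1, 500), key=lambda x: cost_per_calorie(x, 10.0))
print(f"Cost-minimising input level: {best}")
print(f"Yield there: {crop_yield(best, 10.0):.2f} of potential 10.0")
```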

  20. Users' manual for LEHGC: A Lagrangian-Eulerian Finite-Element Model of Hydrogeochemical Transport Through Saturated-Unsaturated Media. Version 1.1

    Energy Technology Data Exchange (ETDEWEB)

    Yeh, Gour-Tsyh [Pennsylvania State Univ., University Park, PA (United States). Dept. of Civil and Environmental Engineering; Carpenter, S.L. [Univ. of New Mexico, Albuquerque, NM (United States). Dept. of Earth and Planetary Sciences; Hopkins, P.L.; Siegel, M.D. [Sandia National Labs., Albuquerque, NM (United States)

    1995-11-01

    The computer program LEHGC is a Hybrid Lagrangian-Eulerian Finite-Element Model of HydroGeo-Chemical (LEHGC) Transport Through Saturated-Unsaturated Media. LEHGC iteratively solves two-dimensional transport and geochemical equilibrium equations and is a descendant of HYDROGEOCHEM, a strictly Eulerian finite-element reactive transport code. The hybrid Lagrangian-Eulerian scheme improves on the Eulerian scheme by allowing larger time steps to be used in the advection-dominant transport calculations. This causes less numerical dispersion and alleviates the problem of calculated negative concentrations at sharp concentration fronts. The code also is more computationally efficient than the strictly Eulerian version. LEHGC is designed for generic application to reactive transport problems associated with contaminant transport in subsurface media. Input to the program includes the geometry of the system, the spatial distribution of finite elements and nodes, the properties of the media, the potential chemical reactions, and the initial and boundary conditions. Output includes the spatial distribution of chemical element concentrations as a function of time and space and the chemical speciation at user-specified nodes. LEHGC Version 1.1 is a modification of LEHGC Version 1.0. The modification includes: (1) devising a tracking algorithm with computational effort proportional to N, where N is the number of computational grid nodes, rather than N^2 as in LEHGC Version 1.0; (2) including multiple adsorbing sites and multiple ion-exchange sites; (3) using four preconditioned conjugate gradient methods for the solution of matrix equations; and (4) providing a model for some features of solute transport by colloids.

  1. Technical report series on global modeling and data assimilation. Volume 5: Documentation of the ARIES/GEOS dynamical core, version 2

    Science.gov (United States)

    Suarez, Max J. (Editor); Takacs, Lawrence L.

    1995-01-01

    A detailed description of the numerical formulation of Version 2 of the ARIES/GEOS 'dynamical core' is presented. This code is a nearly 'plug-compatible' dynamics for use in atmospheric general circulation models (GCMs). It is a finite-difference model on a staggered latitude-longitude C-grid. It uses second-order differences for all terms except the advection of vorticity by the rotational part of the flow, which is done at fourth-order accuracy. This dynamical core is currently being used in the climate (ARIES) and data assimilation (GEOS) GCMs at Goddard.
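
The accuracy gap between the two stencil orders mentioned above is easy to demonstrate on a periodic 1-D grid (a generic sketch, not the C-grid code itself):

```python
import numpy as np

def ddx_2nd(f, dx):
    """Second-order centered difference on a periodic grid."""
    return (np.roll(f, -1) - np.roll(f, 1)) / (2.0 * dx)

def ddx_4th(f, dx):
    """Fourth-order centered difference on a periodic grid."""
    return (8.0 * (np.roll(f, -1) - np.roll(f, 1))
            - (np.roll(f, -2) - np.roll(f, 2))) / (12.0 * dx)

# d/dx sin(x) = cos(x); compare maximum errors on a coarse periodic grid.
n = 64
x = np.linspace(0.0, 2.0 * np.pi, n, endpoint=False)
dx = x[1] - x[0]
err2 = np.abs(ddx_2nd(np.sin(x), dx) - np.cos(x)).max()
err4 = np.abs(ddx_4th(np.sin(x), dx) - np.cos(x)).max()
print(f"2nd-order max error: {err2:.2e}")
print(f"4th-order max error: {err4:.2e}")
```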

  2. Extended-range prediction trials using the global cloud/cloud-system resolving model NICAM and its new ocean-coupled version NICOCO

    Science.gov (United States)

    Miyakawa, Tomoki

    2017-04-01

    The global cloud/cloud-system-resolving model NICAM and its new fully coupled version NICOCO are run on one of the world's top-tier supercomputers, the K computer. NICOCO couples the full-3D ocean component COCO of the general circulation model MIROC using the general-purpose coupler Jcup. We carried out multiple MJO simulations using NICAM and the new ocean-coupled version NICOCO to examine their extended-range MJO prediction skills and the impact of ocean coupling. NICAM performs excellently in terms of MJO prediction, maintaining valid skill up to 27 days after the model is initialized (Miyakawa et al. 2014). As is the case in most global models, ocean coupling frees the model from being anchored by the observed SST and allows the model climate to drift further from reality compared to the atmospheric version of the model. Thus, it is important to evaluate the model bias, and in an initial-value problem such as seasonal extended-range prediction, it is essential to be able to distinguish the actual signal from the early transition of the model from the observed state to its own climatology. Since NICAM is a highly resource-demanding model, evaluation and tuning of the model climatology (on the order of years) is challenging. Here we focus on the initial 100 days to estimate the early drift of the model, and subsequently evaluate the MJO prediction skills of NICOCO. Results show that in the initial 100 days, NICOCO forms a La Nina-like SST bias compared to observations, with a warmer Maritime Continent warm pool and a cooler equatorial central Pacific. The enhanced convection over the Maritime Continent associated with this bias projects onto the real-time multivariate MJO indices (RMM; Wheeler and Hendon 2004) and contaminates the MJO skill score. However, the bias does not appear to severely degrade the MJO signal. The model maintains valid MJO prediction skill up to nearly 4 weeks when evaluated after linearly removing the early-drift component estimated from

  3. Land Boundary Conditions for the Goddard Earth Observing System Model Version 5 (GEOS-5) Climate Modeling System: Recent Updates and Data File Descriptions

    Science.gov (United States)

    Mahanama, Sarith P.; Koster, Randal D.; Walker, Gregory K.; Takacs, Lawrence L.; Reichle, Rolf H.; De Lannoy, Gabrielle; Liu, Qing; Zhao, Bin; Suarez, Max J.

    2015-01-01

    The Earth's land surface boundary conditions in the Goddard Earth Observing System version 5 (GEOS-5) modeling system were updated using recent high spatial and temporal resolution global data products. The updates include: (i) construction of a global 10-arcsec land-ocean-lakes-ice mask; (ii) incorporation of a 10-arcsec Globcover 2009 land cover dataset; (iii) implementation of Level 12 Pfafstetter hydrologic catchments; (iv) use of hybridized SRTM global topography data; (v) construction of the HWSDv1.21-STATSGO2 merged global 30-arcsec soil mineral and carbon data in conjunction with a highly refined soil classification system; (vi) production of diffuse visible and near-infrared 8-day MODIS albedo climatologies at 30-arcsec from the period 2001-2011; and (vii) production of the GEOLAND2 and MODIS merged 8-day LAI climatology at 30-arcsec for GEOS-5. The global data sets were preprocessed and used to construct global raster data files for the software (mkCatchParam) that computes parameters on catchment-tiles for various atmospheric grids. The updates also include a few bug fixes in mkCatchParam, as well as changes (improvements in algorithms, etc.) to mkCatchParam that allow it to produce tile-space parameters efficiently for high-resolution AGCM grids. The update process also includes the construction of data files describing the vegetation type fractions, soil background albedo, nitrogen deposition and mean annual 2 m air temperature to be used with the future Catchment CN model, and the global stream channel network to be used with the future global runoff routing model. This report provides detailed descriptions of the data production process and the data file format of each updated data set.

  4. Improving the WRF model's (version 3.6.1) simulation over sea ice surface through coupling with a complex thermodynamic sea ice model (HIGHTSI)

    Science.gov (United States)

    Yao, Yao; Huang, Jianbin; Luo, Yong; Zhao, Zongci

    2016-06-01

    Sea ice plays an important role in air-ice-ocean interaction, but it is often represented simply in many regional atmospheric models. The Noah sea ice scheme, which is the only option in the current Weather Research and Forecasting (WRF) model (version 3.6.1), has a problem of energy imbalance due to its simplified snow processes and its lack of ablation and accretion processes in ice. Validated against the Surface Heat Budget of the Arctic Ocean (SHEBA) in situ observations, Noah underestimates the sea ice temperature, with a bias that can reach -10 °C in winter. Sensitivity tests show that this bias is mainly attributed to the simulation within the ice when a time-dependent ice thickness is specified. Compared with the Noah sea ice model, the high-resolution thermodynamic snow and ice model (HIGHTSI) uses more realistic thermodynamics for snow and ice. Most importantly, HIGHTSI includes the ablation and accretion processes of sea ice and uses an interpolation method which ensures heat conservation during its integration. These allow HIGHTSI to better resolve the energy balance in the sea ice, and the bias in sea ice temperature is reduced considerably. When HIGHTSI is coupled with the WRF model, the simulation of sea ice temperature by the original Polar WRF is greatly improved. Considering the bias with reference to SHEBA observations, WRF-HIGHTSI improves the simulation of surface temperature, 2 m air temperature and surface upward long-wave radiation flux in winter by 6 °C, 5 °C and 20 W m-2, respectively. A discussion of the impact of specifying sea ice thickness in the WRF model is presented. Consistent with previous research, prescribing the sea ice thickness with observational information results in the best simulation among the available methods. If no observational information is available, we present a new method in which the sea ice thickness is initialized from empirical estimation and its further change is predicted by a complex thermodynamic
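
At its core, the energy balance HIGHTSI resolves reduces to 1-D heat conduction through the snow/ice column; an explicit finite-difference sketch with illustrative parameters (not the HIGHTSI code):

```python
import numpy as np

# 1-D heat conduction through an ice slab:
#   dT/dt = kappa * d2T/dz2, with fixed surface and basal temperatures.
kappa = 1.2e-6          # thermal diffusivity of ice, m^2/s (illustrative)
thickness, n = 2.0, 41  # 2 m of ice on a 41-point grid
dz = thickness / (n - 1)
dt = 0.4 * dz * dz / kappa   # explicit stability limit is 0.5, so 0.4 is safe

T = np.full(n, -5.0)         # initial ice temperature, degC
T[0], T[-1] = -20.0, -1.8    # cold surface, ocean freezing point at the base

for _ in range(20000):       # march well past the diffusive time scale
    T[1:-1] += kappa * dt / dz**2 * (T[2:] - 2.0 * T[1:-1] + T[:-2])

# The steady state of pure conduction is a linear profile between boundaries.
mid_expected = 0.5 * (T[0] + T[-1])
print(f"Mid-slab temperature: {T[n // 2]:.2f} degC (linear profile: {mid_expected:.2f})")
```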

  5. ABEL model: Evaluates corporations' claims of inability to afford penalties and compliance costs (version 3.0.16). Model-simulation

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    1998-11-01

    The easy-to-use ABEL software evaluates for-profit companies' claims of inability to afford penalties, clean-up costs, or compliance costs. Violators raise the issue of inability to pay in most of EPA's enforcement actions, regardless of whether there is any hard evidence supporting those claims. The program enables Federal, State, and local enforcement professionals to determine quickly whether there is any validity to those claims. ABEL promotes quick settlements by performing screening analyses of defendants and potentially responsible parties (PRPs) to determine their financial capacity. After analyzing some basic financial ratios that reflect a company's solvency, ABEL assesses the firm's ability to pay by focusing on projected cash flows. The model explicitly calculates the value of projected, internally generated cash flows from historical tax information and compares these cash flows to the proposed environmental expenditure(s). The software is extremely easy to use. Version 3.0.16 updates the standard values for inflation and the discount rate.
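
The core screening logic described above — project cash flows forward from historical figures, discount them, and compare with the proposed expenditure — can be sketched as follows. The growth rate, horizon, and discount rate below are illustrative assumptions, not EPA's standard values:

```python
# Illustrative sketch of the kind of ability-to-pay screening ABEL performs.
# All numeric parameters here are assumed examples, not ABEL defaults.

def present_value(cash_flows, discount_rate):
    """Present value of annual cash flows received at the end of years 1, 2, ..."""
    return sum(cf / (1 + discount_rate) ** (t + 1)
               for t, cf in enumerate(cash_flows))

def can_afford(historical_cash_flow, growth, years, discount_rate, expenditure):
    """Project cash flows forward from a historical figure and test whether
    their present value covers the proposed environmental expenditure."""
    projected = [historical_cash_flow * (1 + growth) ** (t + 1)
                 for t in range(years)]
    return present_value(projected, discount_rate) >= expenditure

# A firm generating $100k/yr, growing 2%, over a 5-yr horizon at a 7% rate
affordable = can_afford(100_000, 0.02, 5, 0.07, expenditure=350_000)
```

In this toy case the discounted cash flows total roughly $434k, so the hypothetical $350k expenditure passes the screen.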

  6. 敦煌壁画中的经架——兼议莫高窟第156窟前室室顶南侧壁画题材%The Sutra Stand in the Dunhuang Murals and the Subject Matter of the Southern Ceiling Mural of Cave 156

    Institute of Scientific and Technical Information of China (English)

    郭俊叶

    2011-01-01

    In some murals of the Dunhuang Grottoes, eminent monks or laymen are shown reading or preaching sutras placed on a small shelf-like implement. This implement, which should be called a "sutra stand," has not come to the attention of earlier studies. Volume 554 of the Complete Tang Poems preserves a five-character regulated verse by the Tang poet Xiang Si, "Sent to a Monk in Summer Retreat," which mentions it: "The days of the summer retreat run long; I know you, master, keep to the vinaya hall. Often, because of the heat of your tied sash, you recall again the coolness of a shaven head. The color of moss creeps over the sutra stand; the shade of the pines reaches the bamboo mat." Based on her analysis of sutras, Dunhuang documents and murals, and objects in the collections of the Shoso-in in Japan, the author discusses the sutra stand's use and development, and surveys the scenes and motifs featuring it in the Mogao Grottoes. This study is valuable for dating Dunhuang murals and identifying their subject matter.

  7. 生死哲学与魂归祖地——摩经视域下的布依族思想信仰世界%The Philosophy of Life and Death and the Return of Souls to the Ancestral Land: The Religious Beliefs of the Buyi People as Seen in the Mo Sutra

    Institute of Scientific and Technical Information of China (English)

    罗正副

    2012-01-01

    Mo Sutra is the corpus of texts formed by the Buyi people in the course of religious rituals and rites for releasing the souls of the dead, and it is the classic that most fully embodies their religious thought and beliefs. In the Buyi philosophy of life and death, a natural death is preceded by various omens; after a person dies a natural death, the bumo (ritual priest) releases the soul and guides it step by step back to the ancestral land. The soul's return to the ancestral land is the ultimate pursuit and highest state of the Buyi metaphysical world of belief, all of which is narrated in detail and concretely presented in Mo Sutra.

  8. Modeling variably saturated multispecies reactive groundwater solute transport with MODFLOW-UZF and RT3D

    Science.gov (United States)

    Bailey, Ryan T.; Morway, Eric D.; Niswonger, Richard G.; Gates, Timothy K.

    2013-01-01

    A numerical model was developed that is capable of simulating multispecies reactive solute transport in variably saturated porous media. The model consists of a modified version of the reactive transport model RT3D (Reactive Transport in 3 Dimensions) linked to the Unsaturated-Zone Flow (UZF1) package and MODFLOW. Referred to as UZF-RT3D, the model is tested against published analytical benchmarks as well as other published contaminant transport models, including HYDRUS-1D, VS2DT, and SUTRA, and the coupled flow and transport modeling system of CATHY and TRAN3D. Comparisons in one-dimensional, two-dimensional, and three-dimensional variably saturated systems are explored. While several test cases are included to verify the correct implementation of variably saturated transport in UZF-RT3D, other cases are included to demonstrate the usefulness of the code in terms of model run-time and handling the reaction kinetics of multiple interacting species in variably saturated subsurface systems. Because UZF1 relies on a kinematic-wave approximation to unsaturated flow that neglects the diffusive terms in the Richards equation, UZF-RT3D can be used for large-scale aquifer systems for which the UZF1 formulation is reasonable, that is, where capillary-pressure gradients can be neglected and soil parameters can be treated as homogeneous. Decreased model run-time and the ability to include site-specific chemical species and chemical reactions make UZF-RT3D an attractive model for efficient simulation of multispecies reactive transport in variably saturated, large-scale subsurface systems.
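
The kinematic-wave idea that underlies UZF1 can be illustrated in a few lines: with capillary-pressure gradients neglected, the unsaturated flux is gravity-driven, q = K(theta), and wetting fronts travel at the celerity dq/dtheta. The Brooks-Corey-style conductivity function and all parameter values below are assumptions for illustration, not UZF1's actual implementation:

```python
# Sketch of the kinematic-wave approximation (assumed, simplified forms):
# neglecting capillarity, flux q = K(theta) and fronts move at dq/dtheta.

def hydraulic_conductivity(theta, theta_r, theta_s, Ks, eps=3.5):
    """Brooks-Corey-style K(theta); eps is an assumed pore-size exponent."""
    Se = (theta - theta_r) / (theta_s - theta_r)   # effective saturation
    return Ks * Se ** eps

def wave_speed(theta, theta_r, theta_s, Ks, eps=3.5, dtheta=1e-6):
    """Kinematic-wave celerity dq/dtheta, estimated by finite difference."""
    q1 = hydraulic_conductivity(theta + dtheta, theta_r, theta_s, Ks, eps)
    q0 = hydraulic_conductivity(theta, theta_r, theta_s, Ks, eps)
    return (q1 - q0) / dtheta

Ks = 1e-5                                  # m/s, assumed sandy soil
c = wave_speed(0.3, theta_r=0.05, theta_s=0.4, Ks=Ks)   # front celerity, m/s
```

Because the flux depends only on local water content, the scheme reduces to an advection problem, which is what makes UZF1 fast enough for large-scale aquifer systems.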

  9. Reconstructions of f(T) gravity from entropy-corrected holographic and new agegraphic dark energy models in power-law and logarithmic versions

    Energy Technology Data Exchange (ETDEWEB)

    Saha, Pameli; Debnath, Ujjal [Indian Institute of Engineering Science and Technology, Department of Mathematics, Howrah (India)

    2016-09-15

    Here, we peruse cosmological usage of the most promising candidates of dark energy in the framework of f(T) gravity theory, where T represents the torsion scalar of teleparallel gravity. We reconstruct the different f(T) modified gravity models in the spatially flat Friedmann-Robertson-Walker universe according to entropy-corrected versions of the holographic and new agegraphic dark energy models in power-law and logarithmic corrections, which describe an accelerated expansion history of the universe. We conclude that the equation of state parameter of the entropy-corrected models can transit from the quintessence state to the phantom regime as indicated by recent observations or can lie entirely in the phantom region. Also, using these models, we investigate the different areas of stability with the help of the squared speed of sound. (orig.)

  10. Reconstructions of $f(T)$ Gravity from Entropy Corrected Holographic and New Agegraphic Dark Energy Models in Power-law and Logarithmic Versions

    CERN Document Server

    Saha, Pameli

    2016-01-01

    Here, we peruse cosmological usage of the most promising candidates of dark energy in the framework of $f(T)$ gravity theory. We reconstruct the different $f(T)$ modified gravity models in the spatially flat FRW universe according to entropy-corrected versions of the holographic and new agegraphic dark energy models in power-law and logarithmic corrections, which describe an accelerated expansion history of the universe. We conclude that the equation of state parameter of the entropy-corrected models can transit from the quintessence state to the phantom regime as indicated by recent observations or can lie entirely in the phantom region. Also, using these models, we investigate the different areas of stability with the help of the squared speed of sound.

  11. Reconstructions of f( T) gravity from entropy-corrected holographic and new agegraphic dark energy models in power-law and logarithmic versions

    Science.gov (United States)

    Saha, Pameli; Debnath, Ujjal

    2016-09-01

    Here, we peruse cosmological usage of the most promising candidates of dark energy in the framework of f( T) gravity theory, where T represents the torsion scalar of teleparallel gravity. We reconstruct the different f( T) modified gravity models in the spatially flat Friedmann-Robertson-Walker universe according to entropy-corrected versions of the holographic and new agegraphic dark energy models in power-law and logarithmic corrections, which describe an accelerated expansion history of the universe. We conclude that the equation of state parameter of the entropy-corrected models can transit from the quintessence state to the phantom regime as indicated by recent observations or can lie entirely in the phantom region. Also, using these models, we investigate the different areas of stability with the help of the squared speed of sound.

  12. Systematic comparison of barriers for heavy-ion fusion calculated on the basis of the double-folding model by employing two versions of nucleon-nucleon interaction

    Science.gov (United States)

    Gontchar, I. I.; Chushnyakova, M. V.

    2016-07-01

    A systematic calculation of barriers for heavy-ion fusion was performed on the basis of the double-folding model by employing two versions of an effective nucleon-nucleon interaction: the M3Y interaction and the Migdal interaction. Nuclear densities were taken from Hartree-Fock calculations with the SKX coefficients. The calculations reveal that the fusion barrier is higher when the Migdal interaction is employed than when the M3Y interaction is employed. In view of this, the use of the Migdal interaction in describing heavy-ion fusion is questionable.

  13. Field Measurements and Modeling of the Southeast Greenland Firn Aquifer

    Science.gov (United States)

    Miller, O. L.; Solomon, D. K.; Miège, C.; Voss, C. I.; Koenig, L.; Forster, R. R.; Schmerr, N. C.; Montgomery, L. N.; Legchenko, A.; Ligtenberg, S.

    2016-12-01

    An extensive firn aquifer forms in southeast Greenland as surface meltwater percolates through the upper seasonal snow and firn layers to depth and saturates open pore spaces. The firn aquifer is found at depths of about 10 to 35 m below the snow surface in areas with high accumulation rates and high melt rates, and it retains a significant volume of meltwater and heat within the ice sheet. The first-ever hydrologic and geochemical measurements from several boreholes drilled into the aquifer have been made 50 km upstream of the Helheim Glacier terminus in SE Greenland. These field data are used with a version of the SUTRA groundwater simulator that represents the freeze/thaw process to model the hydrologic and thermal conditions of the ice sheet, including aquifer water recharge, lateral flow, and discharge. Meltwater generation during the summer season is modeled using degree-day methods, and meltwater recharge to the aquifer (10-70 cm/year) is calculated using water level fluctuations and volumetric flow measurements (3e-7 to 5e-6 m3/s). Aquifer hydrologic parameters, including hydraulic conductivity (2e-5 to 4e-4 m/s), storativity, and specific discharge (3e-7 to 5e-6 m/s), are estimated from aquifer pumping tests and tracer experiments. In situ measurements were obtained using a novel heated piezometer, which advances downward through the unsaturated and saturated zones of the aquifer by melting the surrounding firn. Innovative modeling approaches blending unsaturated and saturated groundwater flow modeling with ice thermodynamics indicate the importance of surface-topography controls on fluid flow within the aquifer, and forecast the nature and volume of aquifer water discharge into crevasses at the edge of the ice sheet. This pioneering study is crucial to understanding the aquifer's influence on mass balance estimates of the ice sheet.
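
The degree-day melt forcing mentioned above is a simple positive-degree-day sum scaled by a melt factor. A minimal sketch, with an assumed degree-day factor typical for snow (not the value used in this study):

```python
# Degree-day melt sketch of the kind used to drive aquifer recharge.
# The degree-day factor is an assumed, typical snow value, not the study's.

def degree_day_melt(daily_mean_temps_c, ddf_mm_per_degday=4.0):
    """Total melt (mm water equivalent) from daily mean air temperatures:
    melt = DDF * sum of positive degree-days."""
    pdd = sum(t for t in daily_mean_temps_c if t > 0)   # positive degree-days
    return ddf_mm_per_degday * pdd

# A hypothetical warm week at the site (daily means, deg C)
temps = [-2.0, 1.5, 3.0, 0.5, -1.0, 4.0, 2.0]
melt_mm = degree_day_melt(temps)   # 4.0 * (1.5 + 3.0 + 0.5 + 4.0 + 2.0)
```

Summed over a melt season, such estimates can then be compared with the 10-70 cm/yr recharge inferred from the borehole water-level data.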

  14. Developing and validating a tablet version of an illness explanatory model interview for a public health survey in Pune, India.

    Directory of Open Access Journals (Sweden)

    Joseph G Giduthuri

    Full Text Available BACKGROUND: Mobile electronic devices are replacing paper-based instruments and questionnaires for epidemiological and public health research. The elimination of a data-entry step after an interview is a notable advantage over paper, saving investigator time, decreasing the time lags in managing and analyzing data, and potentially improving data quality by removing the error-prone data-entry step. Research has not yet provided adequate evidence, however, to substantiate the claim of fewer errors for computerized interviews. METHODOLOGY: We developed an Android-based illness explanatory interview for influenza vaccine acceptance and tested the instrument in a field study in Pune, India, for feasibility and acceptability. Error rates for tablet and paper were compared with reference to the voice recording of the interview as the gold standard for assessing discrepancies. We also examined whether interviewers preferred the classical paper-based or the electronic version of the interview, and compared the costs of research with both data collection devices. RESULTS: In 95 interviews with household respondents, total error rates with paper and tablet devices were nearly the same (2.01% and 1.99%, respectively). Most interviewers indicated no preference for a particular device, but those with a preference opted for tablets. The initial investment in tablet-based interviews was higher compared with paper, while the recurring costs per interview were lower with the use of tablets. CONCLUSION: An Android-based tablet version of a complex interview was developed and successfully validated. Advantages were not compromised by increased errors, and field research assistants with a preference preferred the Android device. Use of tablets may be more costly than paper for small samples and less costly for large studies.

  15. Accounting for observational uncertainties in the evaluation of low latitude turbulent air-sea fluxes simulated in a suite of IPSL model versions

    Science.gov (United States)

    Servonnat, Jerome; Braconnot, Pascale; Gainusa-Bogdan, Alina

    2015-04-01

    Turbulent momentum and heat (sensible and latent) fluxes at the air-sea interface are key components of the overall energetics of the Earth's climate, and their good representation in climate models is of prime importance. In this work, we use the methodology developed by Braconnot and Frankignoul (1993) to perform a Hotelling T2 test on spatio-temporal fields (annual cycles). This statistic provides a quantitative measure, incorporating an estimate of the observational uncertainty, for the evaluation of low-latitude turbulent air-sea fluxes in a suite of IPSL model versions. The spread within the observational ensemble of turbulent flux data products assembled by Gainusa-Bogdan et al. (submitted) is used as an estimate of the observational uncertainty for the different turbulent fluxes. The methodology rests on the selection of a small number of dominant variability patterns (EOFs) that are common to both the model and the observations; the comparison therefore focuses on large-scale variability patterns and avoids the possibly noisy smaller scales. The results show that different versions of the IPSL coupled model share common large-scale biases, and also that skill on sea surface temperature is not necessarily directly related to skill in representing the different turbulent fluxes. Despite the large error bars on the observations, the test clearly distinguishes the merits of the different model versions. Analyses of the common EOF patterns and related time series provide guidance on the major differences from the observations. This work is a first attempt to use such a statistic for evaluating the spatio-temporal variability of turbulent fluxes while accounting for observational uncertainty, and it represents an efficient tool for systematic evaluation of simulated air-sea fluxes, considering both the fluxes and the related atmospheric variables. References Braconnot, P., and C. Frankignoul (1993), Testing Model
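
After projecting model and observed fields onto a few common EOFs, the comparison reduces to a multivariate mean-difference test. The sketch below shows a generic two-sample Hotelling T2 statistic on a small number of pattern coefficients; it is a textbook form, not the specific Braconnot and Frankignoul (1993) construction:

```python
# Minimal two-sample Hotelling T^2 sketch in a reduced (EOF) space.
# This is the generic textbook statistic, not the paper's exact formulation.
import numpy as np

def hotelling_t2(X, Y):
    """T^2 for two samples (rows = realizations, cols = EOF coefficients),
    using the pooled sample covariance."""
    nx, ny = len(X), len(Y)
    dx = X.mean(axis=0) - Y.mean(axis=0)
    S = ((nx - 1) * np.cov(X, rowvar=False)
         + (ny - 1) * np.cov(Y, rowvar=False)) / (nx + ny - 2)
    return nx * ny / (nx + ny) * dx @ np.linalg.solve(S, dx)

rng = np.random.default_rng(0)
# Same underlying mean: small T^2.  Shifted mean: large T^2.
same = hotelling_t2(rng.normal(0, 1, (40, 3)), rng.normal(0, 1, (40, 3)))
shifted = hotelling_t2(rng.normal(0, 1, (40, 3)), rng.normal(2, 1, (40, 3)))
```

A significance threshold would then come from the F distribution associated with T2, with the "observational uncertainty" entering through the covariance estimate.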

  16. Can we model observed soil carbon changes from a dense inventory? A case study over England and Wales using three versions of the ORCHIDEE ecosystem model (AR5, AR5-PRIM and O-CN)

    Directory of Open Access Journals (Sweden)

    B. Guenet

    2013-07-01

    Full Text Available A widespread decrease in topsoil carbon content was observed over England and Wales during the period 1978–2003 in the National Soil Inventory (NSI), amounting to a carbon loss of 4.44 Tg yr-1 over 141 550 km2. Subsequent modelling studies have shown that changes in temperature and precipitation could account for only a small part of the observed decrease, and therefore that changes in land use and management, and the resulting changes in soil respiration or primary production, were the main causes. So far, none of the models used to reproduce the NSI data has accounted for plant-soil interactions; they were soil-carbon-only models with carbon inputs prescribed from data. Here, we use three different versions of a process-based coupled soil-vegetation model called ORCHIDEE, in order to separate the effect of trends in soil carbon input from that of soil carbon mineralisation induced by climate trends over 1978–2003. The first version of the model (ORCHIDEE-AR5), used for IPCC-AR5 CMIP5 Earth System simulations, is based on three soil carbon pools defined with first-order decomposition kinetics, as in the CENTURY model. The second version (ORCHIDEE-AR5-PRIM), built for this study, includes a relationship between litter carbon and decomposition rates, to reproduce a priming effect on decomposition. The last version (O-CN) takes into account N-related processes. Soil carbon decomposition in O-CN is based on CENTURY, but adds N limitations on litter decomposition. We performed regional gridded simulations with these three versions of the ORCHIDEE model over England and Wales. None of the three model versions was able to reproduce the observed NSI soil carbon trend. This suggests that either climate change is not the main driver of the observed soil carbon losses, or that the ORCHIDEE model, even with priming or N effects on decomposition, lacks the basic mechanisms to explain soil carbon change in response to climate, which would raise a caution flag about the ability of this
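
The CENTURY-style pool structure behind ORCHIDEE-AR5 — a few carbon pools, each decaying at a first-order rate with a fraction transferred down the cascade — can be sketched in a few lines. The rates, transfer fractions, and input flux below are assumed illustrative values, not ORCHIDEE parameters:

```python
# Sketch of CENTURY-like first-order soil-carbon pool dynamics
# (illustrative values; not ORCHIDEE's actual parameters).

def step_pools(pools, inputs, k, transfer, dt):
    """One Euler step of dC_i/dt = input_i - k_i C_i, where each pool
    passes a fixed fraction of its decomposed carbon to the next pool."""
    active, slow, passive = pools
    d_active = inputs - k[0] * active
    d_slow = transfer[0] * k[0] * active - k[1] * slow
    d_passive = transfer[1] * k[1] * slow - k[2] * passive
    return (active + dt * d_active,
            slow + dt * d_slow,
            passive + dt * d_passive)

k = (0.5, 0.05, 0.002)      # 1/yr decay rates, fast to slow (assumed)
transfer = (0.3, 0.1)       # fraction passed down the cascade (assumed)
pools = (0.0, 0.0, 0.0)     # kgC/m2, starting from bare soil
for _ in range(2000):       # 2000 yr at dt = 1 yr: near equilibrium
    pools = step_pools(pools, inputs=1.0, k=k, transfer=transfer, dt=1.0)
```

At equilibrium the pools settle at input/k for each stage (here 2, 6, and about 15 kgC/m2); the priming variant replaces the fixed k values with rates that depend on litter carbon.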

  17. CLMT2 user's guide: A Coupled Model for Simulation of Hydraulic Processes from Canopy to Aquifer, Version 1.0

    Energy Technology Data Exchange (ETDEWEB)

    Pan, Lehua

    2006-07-26

    CLMT2 is designed to simulate the land-surface and subsurface hydrologic response to meteorological forcing. This model combines a state-of-the-art land-surface model, the NCAR Community Land Model version 3 (CLM3), with a variably saturated groundwater model, TOUGH2, through an internal interface that includes flux and state variables shared by the two submodels. Specifically, TOUGH2, in its simulation, uses infiltration, evaporation, and root-uptake rates, calculated by CLM3, as source/sink terms; CLM3, in its simulation, uses saturation and capillary pressure profiles, calculated by TOUGH2, as state variables. This new model, CLMT2, preserves the best aspects of both submodels: the state-of-the-art modeling capability of surface energy and hydrologic processes from CLM3 (including snow, runoff, freezing/melting, evapotranspiration, radiation, and biophysiological processes) and the more realistic physical-process-based modeling capability of subsurface hydrologic processes from TOUGH2 (including heterogeneity, three-dimensional flow, seamless combination of the unsaturated and saturated zones, and the water table). Preliminary simulation results show that the coupled model greatly improved predictions of the water table, evapotranspiration, and surface temperature at a real watershed, as evaluated using 18 years of observed data. The new model is also ready to be coupled with an atmospheric simulation model, making it one of the first models capable of simulating hydraulic processes from the top of the atmosphere to the deep ground.

  18. Phase diagrams of the corner cubic Heisenberg model and its site-diluted version on a triangular lattice: Renormalization-group treatment

    Science.gov (United States)

    Nagai, Kiyoshi

    1985-02-01

    The global phase diagrams of the corner cubic anisotropic discrete-spin Heisenberg (CH) model and its site-diluted version (dCH) on a triangular lattice are investigated through the position-space renormalization-group method of the simple Migdal-Kadanoff type. The two models include many simpler models as their subspaces, and the interrelations among these models are elucidated. The five-dimensional (5D) phase diagram of the dCH model is generated from the 3D one of the CH model by introducing 2D site-dilution operation. The structure of the 5D phase diagram and the effect of site dilution on the CH model are conveniently visualized by introducing the concept of paths in the 3D subspace. The path describes the temperature variation provided that the ratios between the interaction parameters in the original CH model are fixed. The resulting phase diagrams of the dCH model exhibit the typical three-phase coexistence of solid, liquid, and gas, and their qualitative interpretations are summarized.

  19. A new version of the CNRM Chemistry-Climate Model, CNRM-CCM: description and improvements from the CCMVal-2 simulations

    Directory of Open Access Journals (Sweden)

    M. Michou

    2011-10-01

    Full Text Available This paper presents a new version of the Météo-France CNRM Chemistry-Climate Model, referred to as CNRM-CCM. It includes some fundamental changes from the previous version (CNRM-ACM), which was extensively evaluated in the context of the CCMVal-2 validation activity. The most notable changes concern the radiative code of the GCM and the on-line inclusion within the GCM of the detailed stratospheric chemistry of our chemistry-transport model MOCAGE. A 47-yr transient simulation (1960–2006) is the basis of our analysis. CNRM-CCM generates satisfactory dynamical and chemical fields in the stratosphere. Several shortcomings of the CNRM-ACM simulations for CCMVal-2 that resulted from an erroneous representation of the impact of volcanic aerosols, as well as from transport deficiencies, have been eliminated.

    Remaining problems concern the upper stratosphere (5 to 1 hPa), where temperatures are too high and where there are biases in the NO2, N2O5 and O3 mixing ratios. In contrast, temperatures at the tropical tropopause are too cold. These issues are addressed through the implementation of a more accurate radiation scheme at short wavelengths. Despite these problems, we show that this new CNRM CCM is a useful tool for studying chemistry-climate applications.

  20. Technical report series on global modeling and data assimilation. Volume 4: Documentation of the Goddard Earth Observing System (GEOS) data assimilation system, version 1

    Science.gov (United States)

    Suarez, Max J. (Editor); Pfaendtner, James; Bloom, Stephen; Lamich, David; Seablom, Michael; Sienkiewicz, Meta; Stobie, James; Dasilva, Arlindo

    1995-01-01

    This report describes the analysis component of the Goddard Earth Observing System Data Assimilation System, Version 1 (GEOS-1 DAS). The general features of the data assimilation system are outlined, followed by a thorough description of the statistical interpolation algorithm, including the specification of error covariances and the quality control of observations. We conclude with a discussion of the current status of development of the GEOS data assimilation system. The main components of GEOS-1 DAS are an atmospheric general circulation model and an Optimal Interpolation algorithm. The system is cycled using the Incremental Analysis Update (IAU) technique, in which analysis increments are introduced as time-independent forcing terms in a forecast model integration. The system is capable of producing dynamically balanced states without the explicit use of initialization, as well as a time-continuous representation of non-observables such as precipitation and radiational fluxes. This version of the data assimilation system was used in the five-year reanalysis project completed in April 1994 by Goddard's Data Assimilation Office (DAO). Data from this reanalysis are available from the Goddard Distributed Active Archive Center (DAAC), which is part of NASA's Earth Observing System Data and Information System (EOSDIS). For information on how to obtain these data sets, contact the Goddard DAAC at (301) 286-3209, EMAIL daac@gsfc.nasa.gov.
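
The IAU idea — apply the analysis increment gradually as a constant forcing over the assimilation window rather than as an abrupt state replacement — can be shown with a toy scalar model. The dynamics below are an arbitrary assumption purely for illustration:

```python
# Toy sketch of the Incremental Analysis Update (IAU): the increment is
# spread as a constant forcing over the window instead of inserted at once.
# The relaxation dynamics here are an arbitrary illustrative assumption.

def forecast_step(x, dt=1.0, relax=0.1, x_eq=0.0):
    """Assumed toy dynamics: relaxation toward an equilibrium state."""
    return x + dt * relax * (x_eq - x)

def iau_window(x, analysis, n_steps, step=forecast_step):
    """Integrate n_steps, adding 1/n of the analysis increment each step."""
    increment = analysis - x
    for _ in range(n_steps):
        x = step(x) + increment / n_steps
    return x

x0, analysis = 5.0, 3.0
x_iau = iau_window(x0, analysis, n_steps=10)   # smooth drift, no shock
```

With trivial dynamics (step = identity) the window ends exactly on the analysis; with real dynamics the state evolves smoothly under the extra forcing, which is why IAU yields balanced states without explicit initialization.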

  1. Rapidity distribution of protons from the potential version of UrQMD model and the traditional coalescence afterburner

    CERN Document Server

    Li, Qingfeng; Wang, Xiaobao; Shen, Caiwan

    2016-01-01

    Rapidity distributions of both the E895 proton data at AGS energies and the NA49 net-proton data at SPS energies can be described reasonably well with a potential version of UrQMD, in which mean-field potentials for both pre-formed hadrons and confined baryons are considered, together with a traditional coalescence afterburner in which a single parameter set for the relative distance $R_0$ and relative momentum $P_0$, (3.8 fm, 0.3 GeV/c), is used. Because of the large cancellation between the expansion in $R_0$ and the shrinkage in $P_0$ through the Lorentz transformation, the relativistic effect in clusters has little effect on the rapidity distribution of free (net) protons. Using a Woods-Saxon-like function instead of a pure logarithmic function as seen by the FOPI collaboration at SIS energies, one can fit well both the data at SIS energies and the UrQMD calculation results at AGS and SPS energies. Further, it is found that for central Au+Au or Pb+Pb collisions at top SIS, SPS and RHIC energies, the proton fracti...
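
A coalescence afterburner of the kind described binds nucleons into clusters when their relative distance and relative momentum fall below (R0, P0). The sketch below uses a greedy pairwise check in plain Euclidean coordinates, omitting the two-body rest-frame boost that a real afterburner would apply:

```python
# Sketch of a coalescence afterburner with the (R0, P0) cut quoted above.
# Simplified: plain Euclidean distances, no Lorentz boost to the pair frame.
import math

R0, P0 = 3.8, 0.3   # fm, GeV/c (parameter set quoted in the abstract)

def coalesce(nucleons):
    """Greedy clustering of (position, momentum) tuples into clusters,
    returned as lists of nucleon indices."""
    clusters, used = [], set()
    for i, (ri, pi) in enumerate(nucleons):
        if i in used:
            continue
        cluster = [i]
        for j in range(i + 1, len(nucleons)):
            if j in used:
                continue
            rj, pj = nucleons[j]
            if math.dist(ri, rj) < R0 and math.dist(pi, pj) < P0:
                cluster.append(j)
                used.add(j)
        used.add(i)
        clusters.append(cluster)
    return clusters

nucleons = [((0.0, 0, 0), (0, 0, 0.1)),
            ((1.0, 0, 0), (0, 0, 0.2)),    # close in r and p: pairs with 0
            ((20.0, 0, 0), (0, 0, 0.1))]   # far away: remains free
clusters = coalesce(nucleons)
```

Nucleons left in singleton clusters are the free (net) protons whose rapidity distribution is compared with the E895 and NA49 data.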

  2. GENII Version 2 Users’ Guide

    Energy Technology Data Exchange (ETDEWEB)

    Napier, Bruce A.

    2004-03-08

    The GENII Version 2 computer code was developed for the Environmental Protection Agency (EPA) at Pacific Northwest National Laboratory (PNNL) to incorporate the internal dosimetry models recommended by the International Commission on Radiological Protection (ICRP) and the radiological risk estimating procedures of Federal Guidance Report 13 into updated versions of existing environmental pathway analysis models. The resulting environmental dosimetry computer codes are compiled in the GENII Environmental Dosimetry System. The GENII system was developed to provide a state-of-the-art, technically peer-reviewed, documented set of programs for calculating radiation dose and risk from radionuclides released to the environment. The codes were designed with the flexibility to accommodate input parameters for a wide variety of generic sites. Operation of a new version of the codes, GENII Version 2, is described in this report. Two versions of the GENII Version 2 code system are available, a full-featured version and a version specifically designed for demonstrating compliance with the dose limits specified in 40 CFR 61.93(a), the National Emission Standards for Hazardous Air Pollutants (NESHAPS) for radionuclides. The only differences lie in the limitation of the capabilities of the user to change specific parameters in the NESHAPS version. This report describes the data entry, accomplished via interactive, menu-driven user interfaces. Default exposure and consumption parameters are provided for both the average (population) and maximum individual; however, these may be modified by the user. Source term information may be entered as radionuclide release quantities for transport scenarios, or as basic radionuclide concentrations in environmental media (air, water, soil). For input of basic or derived concentrations, decay of parent radionuclides and ingrowth of radioactive decay products prior to the start of the exposure scenario may be considered. A single code run can
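
The decay-and-ingrowth option mentioned above (parent decay with build-up of decay products before the exposure scenario starts) is classically handled with Bateman equations. A two-member chain sketch, with arbitrary illustrative half-lives rather than GENII defaults:

```python
# Two-member Bateman decay/ingrowth sketch (parent -> daughter).
# Half-lives are arbitrary illustrative values, not GENII data.
import math

def bateman_two(n_parent0, lam1, lam2, t):
    """Atom counts at time t for a parent (decay constant lam1) and the
    daughter it feeds (lam2), with the daughter initially absent."""
    n1 = n_parent0 * math.exp(-lam1 * t)
    n2 = n_parent0 * lam1 / (lam2 - lam1) * (
        math.exp(-lam1 * t) - math.exp(-lam2 * t))
    return n1, n2

lam1 = math.log(2) / 8.0    # parent half-life: 8 days
lam2 = math.log(2) / 2.0    # daughter half-life: 2 days
n1, n2 = bateman_two(1e6, lam1, lam2, t=8.0)   # after one parent half-life
```

After one parent half-life, half the parent atoms remain and the daughter has grown in toward its transient equilibrium with the parent.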

  3. Updating sea spray aerosol emissions in the Community Multiscale Air Quality (CMAQ) model version 5.0.2

    Data.gov (United States)

    U.S. Environmental Protection Agency — The uploaded data consists of the BRACE Na aerosol observations paired with CMAQ model output, the updated model's parameterization of sea salt aerosol emission size...

  4. On the Benefits of Latent Variable Modeling for Norming Scales: The Case of the "Supports Intensity Scale-Children's Version"

    Science.gov (United States)

    Seo, Hyojeong; Little, Todd D.; Shogren, Karrie A.; Lang, Kyle M.

    2016-01-01

    Structural equation modeling (SEM) is a powerful and flexible analytic tool to model latent constructs and their relations with observed variables and other constructs. SEM applications offer advantages over classical models in dealing with statistical assumptions and in adjusting for measurement error. So far, however, SEM has not been fully used…

  5. Spatial-temporal reproducibility assessment of global seasonal forecasting system version 5 model for Dam Inflow forecasting

    Science.gov (United States)

    Moon, S.; Suh, A. S.; Soohee, H.

    2016-12-01

    The GloSea5 (Global Seasonal forecasting system version 5) is provided and operated by the KMA (Korea Meteorological Administration). GloSea5 provides forecast (FCST) and hindcast (HCST) data, and its horizontal resolution is about 60 km (0.83° x 0.56°) in the mid-latitudes. In order to use these data in watershed-scale water management, GloSea5 needs spatial-temporal downscaling. As such, statistical downscaling was used to correct for systematic biases in the variables and to improve data reliability. The HCST data are provided in ensemble format, and the highest statistical correlation (R2 = 0.60, RMSE = 88.92, NSE = 0.57) of ensemble precipitation was reported for the Yongdam Dam watershed on the #6 grid. Additionally, the original GloSea5 (600.1 mm) showed the greatest difference (-26.5%) compared with observations (816.1 mm) during the summer flood season, whereas the downscaled GloSea5 showed an error rate of only -3.1%. Most of the underestimation occurred in flood-season precipitation, and the downscaled GloSea5 largely restored these precipitation levels. Per the analysis of spatial autocorrelation using seasonal Moran's I, the spatial distribution was shown to be statistically significant. These results can reduce the uncertainty of the original GloSea5 and substantiate its spatial-temporal accuracy and validity. The spatial-temporal reproducibility assessment will play a very important role as basic data for watershed-scale water management.
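
One common statistical-downscaling step for precipitation bias correction is monthly linear scaling: a multiplicative factor per calendar month derived from a calibration period. The sketch below illustrates that generic technique with made-up numbers; it is not necessarily the method used in this study:

```python
# Linear-scaling bias-correction sketch (a generic downscaling step;
# the numbers are illustrative, not the study's calibration values).

def scaling_factors(obs_monthly, model_monthly):
    """Multiplicative factor per month: observed mean / model mean."""
    return [o / m if m > 0 else 1.0
            for o, m in zip(obs_monthly, model_monthly)]

def correct(model_series, months, factors):
    """Apply each value's month-specific factor (months are 0-based
    indices into the factor list)."""
    return [v * factors[m] for v, m in zip(model_series, months)]

obs = [120.0, 90.0]            # gauge monthly-mean precipitation (mm)
raw = [80.0, 100.0]            # model means over the same period
f = scaling_factors(obs, raw)  # per-month correction factors
corrected = correct([80.0, 100.0], [0, 1], f)
```

By construction the corrected calibration-period means match the observations; skill is then judged on an independent validation period, as with the flood-season totals quoted above.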

  6. Reliability and construct validity of the Bahasa Malaysia version of transtheoretical model (TTM) questionnaire for smoking cessation and relapse among Malaysian adult.

    Science.gov (United States)

    Yasin, Siti Munira; Taib, Khairul Mizan; Zaki, Rafdzah Ahmad

    2011-01-01

    The transtheoretical model (TTM) has been used as one of the major constructs in developing effective cognitive-behavioural interventions for smoking cessation and relapse prevention in Western societies. This study aimed to examine the reliability and construct validity of the translated Bahasa Malaysia version of the TTM questionnaire among adult smokers in the Klang Valley, Malaysia. The sample consisted of 40 smokers from four different worksites in the Klang Valley. A 26-item TTM questionnaire was administered, followed by a similar set one week later. The questionnaire consisted of three measures: decisional balance, temptations, and impact of smoking. Construct validity was measured by factor analysis, and reliability by Cronbach's alpha (internal consistency) and test-retest correlation. Results revealed that the Cronbach's alpha coefficients for the items were: decisional balance (0.84; 0.74) and temptations (0.89; 0.54; 0.85). The values for test-retest correlation were all above 0.4. In addition, factor analysis suggested two meaningful common factors for decisional balance and three for temptations, consistent with the original construct of the TTM questionnaire. Overall, the results demonstrated that construct validity and reliability were acceptable for all items. In conclusion, the Bahasa Malaysia version of the TTM questionnaire is a reliable and valid tool in assessing smoking cessation and relapse among Malaysian adult smokers.
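
The internal-consistency coefficient reported here, Cronbach's alpha, is computed from item variances and the variance of the total score. A self-contained sketch on toy data (4 hypothetical respondents by 3 items; the scores are invented, not the study's data):

```python
# Cronbach's alpha sketch for internal consistency.
# Toy data: 3 items x 4 respondents, invented purely for illustration.

def cronbach_alpha(items):
    """items: list of per-item score lists (respondents in the same order).
    alpha = k/(k-1) * (1 - sum(item variances) / variance of total score)."""
    k = len(items)
    n = len(items[0])
    def var(xs):                       # unbiased sample variance
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)
    totals = [sum(item[i] for item in items) for i in range(n)]
    return k / (k - 1) * (1 - sum(var(it) for it in items) / var(totals))

items = [[3, 4, 2, 5],
         [3, 5, 1, 4],
         [2, 4, 2, 5]]
alpha = cronbach_alpha(items)
```

Values near the 0.84-0.89 range reported for decisional balance and temptations indicate that the items move together across respondents.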

  7. The consistency evaluation of the climate version of the Eta regional forecast model developed for regional climate downscaling

    CERN Document Server

    Pisnichenko, I A

    2007-01-01

    The regional climate model prepared from the Eta WS (workstation) forecast model has been integrated over South America with a horizontal resolution of 40 km for the period 1961-1977. The model was forced at its lateral boundaries by the outputs of HadAMP. The HadAMP data represent a simulation of the modern climate with a resolution of about 150 km. To prepare the regional climate model from the Eta forecast model, new blocks were added and multiple modifications and corrections were made to the original model. The climate Eta model was run on the SX-6 supercomputer. The detailed analysis of the results of the dynamical downscaling experiment includes an investigation of the consistency between the regional model and the AGCM, as well as of the ability of the regional model to resolve important features of climate fields on a finer scale than that resolved by the AGCM. In this work we show the results of our investigation of the consistency of the output fields of the Eta model and HadAMP. We have analysed geo...

  8. The CSIRO Mk3L climate system model version 1.0 – Part 1: Description and evaluation

    Directory of Open Access Journals (Sweden)

    S. J. Phipps

    2011-06-01

    Full Text Available The CSIRO Mk3L climate system model is a coupled general circulation model, designed primarily for millennial-scale climate simulations and palaeoclimate research. Mk3L includes components which describe the atmosphere, ocean, sea ice and land surface, and combines computational efficiency with a stable and realistic control climatology. This paper describes the model physics and software, analyses the control climatology, and evaluates the ability of the model to simulate the modern climate.

    Mk3L incorporates a spectral atmospheric general circulation model, a z-coordinate ocean general circulation model, a dynamic-thermodynamic sea ice model and a land surface scheme with static vegetation. The source code is highly portable, and has no dependence upon proprietary software. The model distribution is freely available to the research community. A 1000-yr climate simulation can be completed in around one-and-a-half months on a typical desktop computer, with greater throughput being possible on high-performance computing facilities.

    Mk3L produces realistic simulations of the larger-scale features of the modern climate, although with some biases on the regional scale. The model also produces reasonable representations of the leading modes of internal climate variability in both the tropics and extratropics. The control state of the model exhibits a high degree of stability, with only a weak cooling trend on millennial timescales. Ongoing development work aims to improve the model climatology and transform Mk3L into a comprehensive earth system model.

  9. ECCO version 4: an integrated framework for non-linear inverse modeling and global ocean state estimation

    Directory of Open Access Journals (Sweden)

    G. Forget

    2015-10-01

    Full Text Available This paper presents the ECCO v4 non-linear inverse modeling framework and its baseline solution for the evolving ocean state over the period 1992–2011. Both components are publicly available and subjected to regular, automated regression tests. The modeling framework includes sets of global conformal grids, a global model setup, implementations of data constraints and control parameters, an interface to algorithmic differentiation, as well as a grid-independent, fully capable Matlab toolbox. The baseline ECCO v4 solution is a dynamically consistent ocean state estimate without unidentified sources of heat and buoyancy, which any interested user will be able to reproduce accurately. The solution is an acceptable fit to most data and has been found to be physically plausible in many respects, as documented here and in related publications. Users are being provided with capabilities to assess model–data misfits for themselves. The synergy between modeling and data synthesis is asserted through the joint presentation of the modeling framework and the state estimate. In particular, the inverse estimate of parameterized physics was instrumental in improving the fit to the observed hydrography, and becomes an integral part of the ocean model setup available for general use. More generally, a first assessment of the relative importance of external, parametric and structural model errors is presented. Parametric and external model uncertainties appear to be of comparable importance and dominate over structural model uncertainty. The results generally underline the importance of including turbulent transport parameters in the inverse problem.
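
    At its core, the state estimation described here minimizes a weighted least-squares model-data misfit. A minimal one-parameter sketch of that idea, using a hypothetical linear model m(p) = p*x rather than the MITgcm machinery:

```python
import numpy as np

def weighted_lsq(x, d, sigma):
    """Fit the scalar parameter p of m(p) = p*x by minimizing the weighted
    misfit J(p) = sum(((p*x_i - d_i) / sigma_i)**2); setting dJ/dp = 0
    gives a closed form for this linear case."""
    x = np.asarray(x, dtype=float)
    d = np.asarray(d, dtype=float)
    w = 1.0 / np.asarray(sigma, dtype=float) ** 2  # inverse-variance weights
    p = np.sum(w * x * d) / np.sum(w * x * x)
    misfit = np.sum(w * (p * x - d) ** 2)
    return p, misfit

# hypothetical observations drawn from d = 2*x with unit uncertainty
p, misfit = weighted_lsq([1.0, 2.0, 3.0], [2.0, 4.0, 6.0], [1.0, 1.0, 1.0])
```

    In ECCO v4 the same principle is applied to a very large control vector, so the gradient of the misfit is obtained via algorithmic differentiation rather than in closed form.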

  10. Interactive lakes in the Canadian Regional Climate Model, version 5: the role of lakes in the regional climate of North America

    Directory of Open Access Journals (Sweden)

    Bernard Dugas

    2012-02-01

    Full Text Available Two one-dimensional (1-D) column lake models have been coupled interactively with a developmental version of the Canadian Regional Climate Model. Multidecadal reanalyses-driven simulations with and without lakes revealed the systematic biases of the model and the impact of lakes on the simulated North American climate. The presence of lakes strongly influences the climate of the lake-rich region of the Canadian Shield. Due to their large thermal inertia, lakes act to dampen the diurnal and seasonal cycle of low-level air temperature. In late autumn and winter, ice-free lakes induce large sensible and latent heat fluxes, resulting in a strong enhancement of precipitation downstream of the Laurentian Great Lakes, which is referred to as the snow belt. The FLake (FL) and Hostetler (HL) lake models perform adequately for small subgrid-scale lakes and for large resolved lakes with shallow depth, located in temperate or warm climatic regions. Both lake models exhibit specific strengths and weaknesses. For example, HL simulates too rapid spring warming and too warm a surface temperature, especially in large and deep lakes; FL tends to damp the diurnal cycle of surface temperature. An adaptation of 1-D lake models might be required for an adequate simulation of large and deep lakes.

  11. Steric Sea Level Change in Twentieth Century Historical Climate Simulation and IPCC-RCP8.5 Scenario Projection: A Comparison of Two Versions of FGOALS Model

    Institute of Scientific and Technical Information of China (English)

    DONG Lu; ZHOU Tianjun

    2013-01-01

    To reveal the steric sea level change in 20th century historical climate simulations and future climate change projections under the IPCC's Representative Concentration Pathway 8.5 (RCP8.5) scenario, the results of two versions of LASG/IAP's Flexible Global Ocean-Atmosphere-Land System model (FGOALS) are analyzed. Both models reasonably reproduce the mean dynamic sea level features, with a spatial pattern correlation coefficient of 0.97 with the observation. Characteristics of steric sea level changes in the 20th century historical climate simulations and RCP8.5 scenario projections are investigated. The results show that, in the 20th century, negative trends covered most parts of the global ocean. Under the RCP8.5 scenario, global-averaged steric sea level exhibits a pronounced rising trend throughout the 21st century, and the general rising trend appears in most parts of the global ocean. The magnitude of the changes in the 21st century is much larger than that in the 20th century. By the year 2100, the global-averaged steric sea level anomaly is 18 cm and 10 cm relative to the year 1850 in the second spectral version of FGOALS (FGOALS-s2) and the second grid-point version of FGOALS (FGOALS-g2), respectively. The separate contribution of the thermosteric and halosteric components from various ocean layers is further evaluated. In the 20th century, the steric sea level changes in FGOALS-s2 (FGOALS-g2) are largely attributed to the thermosteric (halosteric) component relative to the pre-industrial control run. In contrast, in the 21st century, the thermosteric component, mainly from the upper 1000 m, dominates the steric sea level change in both models under the RCP8.5 scenario. In addition, the steric sea level change in the marginal sea of China is attributed to the thermosteric component.
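
    To first order, the thermosteric contribution discussed above is the vertical integral of thermal expansion over warming layers. A minimal sketch with hypothetical values (a constant expansion coefficient and a uniform 0.5 K warming of the upper 1000 m; real model diagnostics use the full equation of state):

```python
def thermosteric_rise(alpha, dT, dz):
    """Thermosteric sea level change (m): sum over layers of
    expansion coefficient (1/K) * temperature change (K) * thickness (m)."""
    return sum(a * t * z for a, t, z in zip(alpha, dT, dz))

# ten 100 m layers, each warmed by 0.5 K, alpha = 2e-4 per K (hypothetical)
layers = 10
rise = thermosteric_rise([2e-4] * layers, [0.5] * layers, [100.0] * layers)
```

    These illustrative numbers give 0.1 m, i.e. the order of magnitude of the 10-18 cm projected by the two FGOALS versions for 2100.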

  12. ECCO version 4: an integrated framework for non-linear inverse modeling and global ocean state estimation

    Directory of Open Access Journals (Sweden)

    G. Forget

    2015-05-01

    Full Text Available This paper presents the ECCO v4 non-linear inverse modeling framework and its baseline solution for the evolving ocean state over the period 1992–2011. Both components are publicly available and highly integrated with the MITgcm. They are both subjected to regular, automated regression tests. The modeling framework includes sets of global conformal grids, a global model setup, implementations of model-data constraints and adjustable control parameters, an interface to algorithmic differentiation, as well as a grid-independent, fully capable Matlab toolbox. The reference ECCO v4 solution is a dynamically consistent ocean state estimate (ECCO-Production, release 1) without unidentified sources of heat and buoyancy, which any interested user will be able to reproduce accurately. The solution is an acceptable fit to most data and has been found physically plausible in many respects, as documented here and in related publications. Users are being provided with capabilities to assess model-data misfits for themselves. The synergy between modeling and data synthesis is asserted through the joint presentation of the modeling framework and the state estimate. In particular, the inverse estimate of parameterized physics was instrumental in improving the fit to the observed hydrography, and becomes an integral part of the ocean model setup available for general use. More generally, a first assessment of the relative importance of external, parametric and structural model errors is presented. Parametric and external model uncertainties appear to be of comparable importance and dominate over structural model uncertainty. The results generally underline the importance of including turbulent transport parameters in the inverse problem.

  13. The new version of the Institute of Numerical Mathematics Sigma Ocean Model (INMSOM) for simulation of Global Ocean circulation and its variability

    Science.gov (United States)

    Gusev, Anatoly; Fomin, Vladimir; Diansky, Nikolay; Korshenko, Evgeniya

    2017-04-01

    In this paper, we present the improved version of the ocean general circulation sigma-model developed at the Institute of Numerical Mathematics of the Russian Academy of Sciences (INM RAS). The previous version, referred to as INMOM (Institute of Numerical Mathematics Ocean Model), is used as the oceanic component of the IPCC climate system model INMCM (Institute of Numerical Mathematics Climate Model; Volodin et al. 2010, 2013). In addition, INMOM was the only sigma-model used for simulations according to the CORE-II scenario (Danabasoglu et al. 2014, 2016; Downes et al. 2015; Farneti et al. 2015). In general, INMOM results are comparable to those of other OGCMs and were used for investigation of climatic variations in the North Atlantic (Gusev and Diansky 2014). However, detailed analysis of some CORE-II INMOM results revealed disadvantages of INMOM leading to considerable errors in reproducing some ocean characteristics. For example, the mass transport in the Antarctic Circumpolar Current (ACC) was overestimated, and there were noticeable errors in reproducing the thermohaline structure of the ocean. After analysing the previous results, the new version of the OGCM was developed. It was decided to entitle it INMSOM (Institute of Numerical Mathematics Sigma Ocean Model). The new title allows one to distinguish the new model, first, from its older version and, second, from another z-model developed at the INM RAS and referred to as INMIO (Institute of Numerical Mathematics and Institute of Oceanology ocean model) (Ushakov et al. 2016). There were numerous modifications in the model, some of which are as follows. 1) Formulation of the ocean circulation problem in terms of a full free surface, taking into account water amount variation. 2) Use of a tensor form of the lateral viscosity operator invariant to rotation. 3) Use of isopycnal diffusion, including Gent-McWilliams mixing. 4) Computation of atmospheric forcing according to the NCAR methodology (Large and Yeager 2009). 5

  14. Technical documentation and user's guide for City-County Allocation Model (CCAM). Version 1. 0

    Energy Technology Data Exchange (ETDEWEB)

    Clark, L.T. Jr.; Scott, M.J.; Hammer, P.

    1986-05-01

    The City-County Allocation Model (CCAM) was developed as part of the Monitored Retrievable Storage (MRS) Program. The CCAM model was designed to allocate population changes forecasted by the MASTER model to specific local communities within commuting distance of the MRS facility, and then to forecast the potential changes in demand for key community services such as housing, police protection, and utilities for these communities. The CCAM model uses a flexible on-line database on demand for community services that is based on a combination of local service levels and state and national service standards. The CCAM model can be used to quickly forecast the potential community-service consequences of economic development for local communities anywhere in the country. The purpose of this manual is to assist the user in understanding and operating the CCAM; it explains the data sources for the model and code modifications, as well as the operational procedures.

  15. SMASS - a simulation model of physical and chemical processes in acid sulphate soils; Version 2.1

    NARCIS (Netherlands)

    Bosch, van den H.; Bronswijk, J.J.B.; Groenenberg, J.E.; Ritsema, C.J.

    1998-01-01

    The Simulation Model for Acid Sulphate Soils (SMASS) has been developed to predict the effects of water management strategies on acidification and de-acidification in areas with acid sulphate soils. It has submodels for solute transport, chemistry, oxygen transport and pyrite oxidation. The model mu

  16. A Multi-Year Plan for Enhancing Turbulence Modeling in Hydra-TH Revised and Updated Version 2.0

    Energy Technology Data Exchange (ETDEWEB)

    Smith, Thomas M. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Berndt, Markus [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Baglietto, Emilio [Massachusetts Inst. of Technology (MIT), Cambridge, MA (United States); Magolan, Ben [Massachusetts Inst. of Technology (MIT), Cambridge, MA (United States)

    2015-10-01

    The purpose of this report is to document a multi-year plan for enhancing turbulence modeling in Hydra-TH for the Consortium for Advanced Simulation of Light Water Reactors (CASL) program. Hydra-TH is being developed to meet the high-fidelity, high-Reynolds-number CFD-based thermal hydraulic simulation needs of the program. This work is being conducted within the thermal hydraulics methods (THM) focus area. This report is an extension of THM CASL milestone L3:THM.CFD.P10.02 [33] (March 2015) and picks up where it left off. It will also serve to meet the requirements of CASL THM level-three milestone L3:THM.CFD.P11.04, scheduled for completion September 30, 2015. The objectives of this plan will be met by maturation of recently added turbulence models, strategic design and development of new models, and systematic and rigorous testing of existing and new models and model extensions. While multi-phase turbulent flow simulations are important to the program, only single-phase modeling will be considered in this report. Large Eddy Simulation (LES) is also an important modeling methodology; however, at least in the first year, the focus is on steady-state Reynolds-Averaged Navier-Stokes (RANS) turbulence modeling.

  17. On the Renormalization of a Bosonized Version of the Chiral Fermion-Meson Model at Finite Temperature

    CERN Document Server

    Caldas, H C G

    2001-01-01

    Feynman's functional formulation of statistical mechanics is used to study the renormalizability of the well known Linear Chiral Sigma Model in the presence of fermionic fields at finite temperature in an alternative way. It is shown that the renormalization conditions coincide with those of the zero temperature model.

  18. APEX user's guide - (Argonne Production, Expansion, and Exchange Model for Electrical Systems), version 3.0

    Energy Technology Data Exchange (ETDEWEB)

    VanKuiken, J.C.; Veselka, T.D.; Guziel, K.A.; Blodgett, D.W.; Hamilton, S.; Kavicky, J.A.; Koritarov, V.S.; North, M.J.; Novickas, A.A.; Paprockas, K.R. [and others]

    1994-11-01

    This report describes operating procedures and background documentation for the Argonne Production, Expansion, and Exchange Model for Electrical Systems (APEX). This modeling system was developed to provide the U.S. Department of Energy, Division of Fossil Energy, Office of Coal and Electricity with in-house capabilities for addressing policy options that affect electrical utilities. To meet this objective, Argonne National Laboratory developed a menu-driven programming package that enables the user to develop and conduct simulations of production costs, system reliability, spot market network flows, and optimal system capacity expansion. The APEX system consists of three basic simulation components, supported by various databases and data management software. The components include (1) the Investigation of Costs and Reliability in Utility Systems (ICARUS) model, (2) the Spot Market Network (SMN) model, and (3) the Production and Capacity Expansion (PACE) model. The ICARUS model provides generating-unit-level production-cost and reliability simulations with explicit recognition of planned and unplanned outages. The SMN model addresses optimal network flows with recognition of marginal costs, wheeling charges, and transmission constraints. The PACE model determines long-term (e.g., longer than 10 years) capacity expansion schedules on the basis of candidate expansion technologies and load growth estimates. In addition, the Automated Data Assembly Package (ADAP) and case management features simplify user-input requirements. The ADAP, ICARUS, and SMN modules are described in detail. The PACE module is expected to be addressed in a future publication.

  19. Persistence and Global Attractivity for a Discretized Version of a General Model of Glucose-Insulin Interaction

    Directory of Open Access Journals (Sweden)

    Huong Dinh Cong

    2016-09-01

    Full Text Available In this paper, we construct a non-standard finite difference scheme for a general model of glucose-insulin interaction. We establish some new sufficient conditions to ensure that the discretized model preserves the persistence and global attractivity of the continuous model. One of the main findings in this paper is that we derive two important propositions (Proposition 3.1 and Proposition 3.2) which are used to prove the global attractivity of the discretized model. Furthermore, when investigating the persistence and, in some cases, the global attractivity of the discretized model, the nonlinear functions f and h are not required to be differentiable. Hence, our results are more realistic because the statistical data of glucose and insulin are collected and reported in discrete time. We also present some numerical examples and their simulations to illustrate our results.
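
    The non-standard finite difference (NSFD) idea used here can be illustrated on the simplest possible case. For dx/dt = -a*x, replacing the step size h by the Mickens denominator function phi(h) = (1 - exp(-a*h))/a yields a scheme that preserves positivity and the long-term behaviour of the continuous model for any step size. This is a generic sketch of the technique, not the paper's glucose-insulin scheme:

```python
import math

def nsfd_decay(x0, a, h, steps):
    """Non-standard scheme (x_{n+1} - x_n)/phi = -a*x_n with
    phi = (1 - exp(-a*h))/a; exact for dx/dt = -a*x, hence always positive."""
    phi = (1.0 - math.exp(-a * h)) / a
    out = [x0]
    for _ in range(steps):
        out.append(out[-1] - a * phi * out[-1])
    return out

def euler_decay(x0, a, h, steps):
    """Standard forward Euler, which turns negative once a*h > 1."""
    out = [x0]
    for _ in range(steps):
        out.append(out[-1] - a * h * out[-1])
    return out

# with a*h = 2, Euler jumps below zero while the NSFD solution stays positive
nsfd = nsfd_decay(1.0, 2.0, 1.0, 3)
euler = euler_decay(1.0, 2.0, 1.0, 3)
```

    The same design principle, choosing denominator functions and nonlocal approximations so that qualitative properties of the continuous system carry over, underlies the glucose-insulin discretization.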

  20. Towards a representation of priming on soil carbon decomposition in the global land biosphere model ORCHIDEE (version 1.9.5.2)

    Science.gov (United States)

    Guenet, Bertrand; Esteban Moyano, Fernando; Peylin, Philippe; Ciais, Philippe; Janssens, Ivan A.

    2016-03-01

    Priming of soil carbon decomposition encompasses different processes through which the decomposition of native (already present) soil organic matter is amplified through the addition of new organic matter, with new inputs typically being more labile than the native soil organic matter. Evidence for priming comes from laboratory and field experiments, but to date there is no estimate of its impact at global scale and under the current anthropogenic perturbation of the carbon cycle. Current soil carbon decomposition models do not include priming mechanisms, thereby introducing uncertainty when extrapolating short-term local observations to ecosystem and regional to global scale. In this study we present a simple conceptual model of decomposition priming, called PRIM, able to reproduce laboratory (incubation) and field (litter manipulation) priming experiments. Parameters for this model were first optimized against data from 20 soil incubation experiments using a Bayesian framework. The optimized parameter values were evaluated against another set of soil incubation data independent from the ones used for calibration and the PRIM model reproduced the soil incubations data better than the original, CENTURY-type soil decomposition model, whose decomposition equations are based only on first-order kinetics. We then compared the PRIM model and the standard first-order decay model incorporated into the global land biosphere model ORCHIDEE (Organising Carbon and Hydrology In Dynamic Ecosystems). A test of both models was performed at ecosystem scale using litter manipulation experiments from five sites. Although both versions were equally able to reproduce observed decay rates of litter, only ORCHIDEE-PRIM could simulate the observed priming (R2 = 0.54) in cases where litter was added or removed. This result suggests that a conceptually simple and numerically tractable representation of priming adapted to global models is able to capture the sign and magnitude of the

  1. An integrated assessment modeling framework for uncertainty studies in global and regional climate change: the MIT IGSM-CAM (version 1.0)

    Science.gov (United States)

    Monier, E.; Scott, J. R.; Sokolov, A. P.; Forest, C. E.; Schlosser, C. A.

    2013-12-01

    This paper describes a computationally efficient framework for uncertainty studies in global and regional climate change. In this framework, the Massachusetts Institute of Technology (MIT) Integrated Global System Model (IGSM), an integrated assessment model that couples an Earth system model of intermediate complexity to a human activity model, is linked to the National Center for Atmospheric Research (NCAR) Community Atmosphere Model (CAM). Since the MIT IGSM-CAM framework (version 1.0) incorporates a human activity model, it is possible to analyze uncertainties in emissions resulting from both uncertainties in the underlying socio-economic characteristics of the economic model and in the choice of climate-related policies. Another major feature is the flexibility to vary key climate parameters controlling the climate system response to changes in greenhouse gases and aerosols concentrations, e.g., climate sensitivity, ocean heat uptake rate, and strength of the aerosol forcing. The IGSM-CAM is not only able to realistically simulate the present-day mean climate and the observed trends at the global and continental scale, but it also simulates ENSO variability with realistic time scales, seasonality and patterns of SST anomalies, albeit with stronger magnitudes than observed. The IGSM-CAM shares the same general strengths and limitations as the Coupled Model Intercomparison Project Phase 3 (CMIP3) models in simulating present-day annual mean surface temperature and precipitation. Over land, the IGSM-CAM shows similar biases to the NCAR Community Climate System Model (CCSM) version 3, which shares the same atmospheric model. This study also presents 21st century simulations based on two emissions scenarios (unconstrained scenario and stabilization scenario at 660 ppm CO2-equivalent) similar to, respectively, the Representative Concentration Pathways RCP8.5 and RCP4.5 scenarios, and three sets of climate parameters. Results of the simulations with the chosen

  2. Version Storage Model of Product Collaborative Design Based on Doubly Linked List

    Institute of Scientific and Technical Information of China (English)

    Liu Guojun; Yang Hongzhi

    2013-01-01

    This study addresses the version storage problem in version management for product collaborative design. Based on an analysis of current incremental and complete version storage techniques, it proposes a new version storage model for product collaborative design that combines complete storage with reverse incremental storage. The storage structure is defined as a doubly linked version list according to the versions' parent-child relationships; when the design process produces a new version, it is stored in the version list structure by repeated iteration and insertion. The goal is to achieve fast storage, save storage space and improve the security of version storage in the product collaborative design process.
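
    The scheme described above, complete storage of the newest version with reverse increments for its ancestors linked in a doubly linked list, can be sketched as follows. The data model (attribute dictionaries, with None as a deletion marker) is a simplification for illustration:

```python
class VersionNode:
    """A node in the doubly linked version list. Only the newest version
    keeps a complete copy; each older node keeps a reverse delta: the
    key/value changes needed to rebuild it from its child."""
    def __init__(self, vid):
        self.vid = vid
        self.full = None           # complete data (head node only)
        self.reverse_delta = None  # {key: old value or None-for-delete}
        self.parent = None
        self.child = None

class VersionList:
    def __init__(self, vid, data):
        self.head = VersionNode(vid)
        self.head.full = dict(data)

    def commit(self, vid, data):
        """Insert a new head; the old head is demoted to a reverse delta."""
        old, new = self.head, VersionNode(vid)
        new.full = dict(data)
        # keep only what must change to turn the new data back into the old
        old.reverse_delta = {k: v for k, v in old.full.items()
                             if data.get(k) != v}
        # keys added in the new version are marked for deletion (None)
        old.reverse_delta.update({k: None for k in data if k not in old.full})
        old.full = None
        old.child, new.parent = new, old
        self.head = new

    def checkout(self, vid):
        """Rebuild any version by walking back and applying reverse deltas."""
        state, node = dict(self.head.full), self.head
        while node.vid != vid:
            node = node.parent
            if node is None:
                raise KeyError(vid)
            for k, v in node.reverse_delta.items():
                if v is None:
                    state.pop(k, None)
                else:
                    state[k] = v
        return state
```

    Checking out the newest version is immediate, which suits collaborative design where the latest version is accessed most often; older versions cost one delta application per step back along the list.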

  3. Distributed Version Control and Library Metadata

    Directory of Open Access Journals (Sweden)

    Galen M. Charlton

    2008-06-01

    Full Text Available Distributed version control systems (DVCSs are effective tools for managing source code and other artifacts produced by software projects with multiple contributors. This article describes DVCSs and compares them with traditional centralized version control systems, then describes extending the DVCS model to improve the exchange of library metadata.

  4. Geothermal Energy Market Study on the Atlantic Coastal Plain. GRITS (Version 9): Model Description and User's Guide

    Energy Technology Data Exchange (ETDEWEB)

    Kroll, Peter; Kane, Sally Minch [eds.]

    1982-04-01

    The Geothermal Resource Interactive Temporal Simulation (GRITS) model calculates the cost and revenue streams for the lifetime of a project that utilizes low to moderate temperature geothermal resources. With these estimates, the net present value of the project is determined. The GRITS model allows preliminary economic evaluations of direct-use applications of geothermal energy under a wide range of resource, demand, and financial conditions, some of which change over the lifetime of the project.
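
    The core GRITS calculation, discounting lifetime cost and revenue streams to a net present value, follows the standard NPV formula. A minimal sketch with hypothetical cash flows (not GRITS itself):

```python
def net_present_value(cash_flows, rate):
    """NPV = sum over years t of CF_t / (1 + rate)**t, year-0 flow first.
    CF_t is the net cash flow (revenue minus cost) in year t."""
    return sum(cf / (1.0 + rate) ** t for t, cf in enumerate(cash_flows))

# hypothetical direct-use project: 1000 (thousand $) up-front well and
# plant cost, then 300/year net revenue for five years, discounted at 10%
npv = net_present_value([-1000.0] + [300.0] * 5, 0.10)
project_viable = npv > 0
```

    A positive NPV under the chosen resource, demand and financial conditions indicates the project more than covers its cost of capital, which is the decision criterion such preliminary economic evaluations feed.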

  5. Versioning Complex Data

    Energy Technology Data Exchange (ETDEWEB)

    Macduff, Matt C.; Lee, Benno; Beus, Sherman J.

    2014-06-29

    Using the history of ARM data files, we designed and demonstrated a feasible data-versioning paradigm. Assigning versions to sets of modified files, under some special assumptions and domain-specific rules, was effective in the case of ARM data, which comprises more than 5000 datastreams and 500 TB of data.

  6. A multi-scale computational model of the effects of TMS on motor cortex [version 3; referees: 2 approved]

    Directory of Open Access Journals (Sweden)

    Hyeon Seo

    2017-05-01

    Full Text Available The detailed biophysical mechanisms through which transcranial magnetic stimulation (TMS activates cortical circuits are still not fully understood. Here we present a multi-scale computational model to describe and explain the activation of different pyramidal cell types in motor cortex due to TMS. Our model determines precise electric fields based on an individual head model derived from magnetic resonance imaging and calculates how these electric fields activate morphologically detailed models of different neuron types. We predict neural activation patterns for different coil orientations consistent with experimental findings. Beyond this, our model allows us to calculate activation thresholds for individual neurons and precise initiation sites of individual action potentials on the neurons’ complex morphologies. Specifically, our model predicts that cortical layer 3 pyramidal neurons are generally easier to stimulate than layer 5 pyramidal neurons, thereby explaining the lower stimulation thresholds observed for I-waves compared to D-waves. It also shows differences in the regions of activated cortical layer 5 and layer 3 pyramidal cells depending on coil orientation. Finally, it predicts that under standard stimulation conditions, action potentials are mostly generated at the axon initial segment of cortical pyramidal cells, with a much less important activation site being the part of a layer 5 pyramidal cell axon where it crosses the boundary between grey matter and white matter. In conclusion, our computational model offers a detailed account of the mechanisms through which TMS activates different cortical pyramidal cell types, paving the way for more targeted application of TMS based on individual brain morphology in clinical and basic research settings.

  7. Machine learning models identify molecules active against the Ebola virus in vitro [version 2; referees: 2 approved]

    Directory of Open Access Journals (Sweden)

    Sean Ekins

    2016-01-01

    Full Text Available The search for small molecule inhibitors of Ebola virus (EBOV) has led to several high-throughput screens over the past 3 years. These have identified a range of FDA-approved active pharmaceutical ingredients (APIs) with anti-EBOV activity in vitro, several of which are also active in a mouse infection model. There are millions of additional commercially available molecules that could be screened for potential activity as anti-EBOV compounds. One way to prioritize compounds for testing is to generate computational models based on the high-throughput screening data and then virtually screen compound libraries. In the current study, we have generated Bayesian machine learning models with viral pseudotype entry assay and EBOV replication assay data. We have validated the models internally and externally. We have also used these models to computationally score the MicroSource library of drugs to select those likely to be potential inhibitors. Three of the highest scoring molecules that were not in the model training sets, quinacrine, pyronaridine and tilorone, were tested in vitro and had EC50 values of 350, 420 and 230 nM, respectively. Pyronaridine is a component of a combination therapy for malaria that was recently approved by the European Medicines Agency, which may make it more readily accessible for clinical testing. Like other known antimalarial drugs active against EBOV, it shares the 4-aminoquinoline scaffold. Tilorone is an investigational antiviral agent that has shown a broad array of biological activities, including cell growth inhibition in cancer cells, antifibrotic properties, α7 nicotinic receptor agonist activity, radioprotective activity and activation of hypoxia-inducible factor-1. Quinacrine is an antimalarial but also has use as an anthelmintic. Our results suggest data sets with less than 1,000 molecules can produce validated machine learning models that can in turn be utilized to identify novel EBOV inhibitors in
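
    The Bayesian modelling step can be illustrated with a Laplace-smoothed Bernoulli naive Bayes classifier over binary molecular fingerprints. This is a generic sketch of the technique (toy two-bit fingerprints, not the study's assay data or descriptors):

```python
import numpy as np

def train_bernoulli_nb(X, y, alpha=1.0):
    """Fit class log-priors and per-bit Bernoulli probabilities with Laplace
    smoothing; X is (n_compounds, n_bits) of 0/1, y is 0 (inactive) or 1
    (active)."""
    X = np.asarray(X, dtype=float)
    y = np.asarray(y)
    priors, cond = {}, {}
    for c in (0, 1):
        Xc = X[y == c]
        priors[c] = np.log(len(Xc) / len(X))
        cond[c] = (Xc.sum(axis=0) + alpha) / (len(Xc) + 2.0 * alpha)
    return priors, cond

def activity_score(priors, cond, x):
    """Log-odds that fingerprint x belongs to the active class."""
    x = np.asarray(x, dtype=float)
    logp = {c: priors[c] + np.sum(x * np.log(cond[c])
                                  + (1 - x) * np.log(1 - cond[c]))
            for c in (0, 1)}
    return logp[1] - logp[0]

# toy training set: bit 0 marks a (hypothetical) activity-linked substructure
priors, cond = train_bernoulli_nb([[1, 0], [1, 1], [0, 0], [0, 1]],
                                  [1, 1, 0, 0])
```

    Ranking a large library by this score and testing the top-scoring compounds is the prioritization strategy described above.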

  8. Machine learning models identify molecules active against the Ebola virus in vitro [version 1; referees: 2 approved]

    Directory of Open Access Journals (Sweden)

    Sean Ekins

    2015-10-01

    Full Text Available The search for small molecule inhibitors of Ebola virus (EBOV) has led to several high-throughput screens over the past 3 years. These have identified a range of FDA-approved active pharmaceutical ingredients (APIs) with anti-EBOV activity in vitro, several of which are also active in a mouse infection model. There are millions of additional commercially available molecules that could be screened for potential activity as anti-EBOV compounds. One way to prioritize compounds for testing is to generate computational models based on the high-throughput screening data and then virtually screen compound libraries. In the current study, we have generated Bayesian machine learning models with viral pseudotype entry assay and EBOV replication assay data. We have validated the models internally and externally. We have also used these models to computationally score the MicroSource library of drugs to select those likely to be potential inhibitors. Three of the highest scoring molecules that were not in the model training sets, quinacrine, pyronaridine and tilorone, were tested in vitro and had EC50 values of 350, 420 and 230 nM, respectively. Pyronaridine is a component of a combination therapy for malaria that was recently approved by the European Medicines Agency, which may make it more readily accessible for clinical testing. Like other known antimalarial drugs active against EBOV, it shares the 4-aminoquinoline scaffold. Tilorone is an investigational antiviral agent that has shown a broad array of biological activities, including cell growth inhibition in cancer cells, antifibrotic properties, α7 nicotinic receptor agonist activity, radioprotective activity and activation of hypoxia-inducible factor-1. Quinacrine is an antimalarial but also has use as an anthelmintic. Our results suggest data sets with less than 1,000 molecules can produce validated machine learning models that can in turn be utilized to identify novel EBOV inhibitors in

  9. Machine learning models identify molecules active against the Ebola virus in vitro [version 3; referees: 2 approved

    Directory of Open Access Journals (Sweden)

    Sean Ekins

    2017-01-01

    Full Text Available The search for small molecule inhibitors of Ebola virus (EBOV) has led to several high throughput screens over the past 3 years. These have identified a range of FDA-approved active pharmaceutical ingredients (APIs) with anti-EBOV activity in vitro, several of which are also active in a mouse infection model. There are millions of additional commercially available molecules that could be screened for potential activities as anti-EBOV compounds. One way to prioritize compounds for testing is to generate computational models based on the high throughput screening data and then virtually screen compound libraries. In the current study, we have generated Bayesian machine learning models with viral pseudotype entry assay and EBOV replication assay data. We have validated the models internally and externally. We have also used these models to computationally score the MicroSource library of drugs to select those likely to be potential inhibitors. Three of the highest scoring molecules that were not in the model training sets, quinacrine, pyronaridine and tilorone, were tested in vitro and had EC50 values of 350, 420 and 230 nM, respectively. Pyronaridine is a component of a combination therapy for malaria that was recently approved by the European Medicines Agency, which may make it more readily accessible for clinical testing. Like other known antimalarial drugs active against EBOV, it shares the 4-aminoquinoline scaffold. Tilorone is an investigational antiviral agent that has shown a broad array of biological activities including cell growth inhibition in cancer cells, antifibrotic properties, α7 nicotinic receptor agonist activity, radioprotective activity and activation of hypoxia-inducible factor-1. Quinacrine is an antimalarial that also has use as an anthelmintic. Our results suggest data sets with fewer than 1,000 molecules can produce validated machine learning models that can in turn be utilized to identify novel EBOV inhibitors in
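
    The scoring workflow the abstract describes (train a Bayesian classifier on screening data, then rank an external library by predicted probability of activity) can be sketched with a Bernoulli naive Bayes model on binary molecular fingerprints. Everything below is synthetic and illustrative: the random 128-bit "fingerprints", the toy activity rule, and the 10-compound "library" are stand-ins, not the descriptors or assay data used in the study.

    ```python
    import numpy as np
    from sklearn.naive_bayes import BernoulliNB
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(0)
    n_mols, n_bits = 500, 128
    # synthetic binary fingerprints; real work would use e.g. circular fingerprints
    X = rng.integers(0, 2, size=(n_mols, n_bits))
    # toy activity rule: "active" if certain substructure bits co-occur
    y = ((X[:, 0] & X[:, 1]) | X[:, 2]).astype(int)

    X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
    clf = BernoulliNB().fit(X_tr, y_tr)

    # score an external "library" and rank compounds by predicted activity
    library = rng.integers(0, 2, size=(10, n_bits))
    scores = clf.predict_proba(library)[:, 1]
    ranked = np.argsort(scores)[::-1]
    print("top-ranked library compounds:", ranked[:3])
    ```

    The ranked indices would then be the compounds prioritized for in vitro testing, which is the role the MicroSource screen plays in the study.
    
    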

  10. Regional hydrogeological simulations for Forsmark - numerical modelling using CONNECTFLOW. Preliminary site description Forsmark area - version 1.2

    Energy Technology Data Exchange (ETDEWEB)

    Hartley, Lee; Cox, Ian; Hunter, Fiona; Jackson, Peter; Joyce, Steve; Swift, Ben [Serco Assurance, Risley (United Kingdom); Gylling, Bjoern; Marsic, Niko [Kemakta Konsult AB, Stockholm (Sweden)

    2005-05-01

    The Swedish Nuclear Fuel and Waste Management Company (SKB) carries out site investigations in two different candidate areas in Sweden with the objective of describing the in-situ conditions for a bedrock repository for spent nuclear fuel. The site characterisation work is divided into two phases, an initial site investigation phase (IPLU) and a complete site investigation phase (KPLU). The results of IPLU are used as a basis for deciding on a subsequent KPLU phase. On the basis of the KPLU investigations a decision is made as to whether detailed characterisation will be performed (including sinking of a shaft). An integrated component in the site characterisation work is the development of site descriptive models. These comprise basic models in three dimensions with an accompanying text description. Central to the modelling work is the geological model, which provides the geometrical context in terms of a model of deformation zones and the rock mass between the zones. Using the geological and geometrical description models as a basis, descriptive models for other geo-disciplines (hydrogeology, hydro-geochemistry, rock mechanics, thermal properties and transport properties) will be developed. Great care is taken to arrive at a general consistency in the description of the various models and an assessment of uncertainty and the possible need for alternative models. Here, a numerical model is developed on a regional scale (hundreds of square kilometres) to understand the zone of influence for groundwater flow that affects the Forsmark area. Transport calculations are then performed by particle tracking from a local-scale release area (a few square kilometres) to identify potential discharge areas for the site, using greater grid resolution. The main objective of this study is to support the development of a preliminary Site Description of the Forsmark area on a regional scale, based on the available data of 30 June 2004 and the previous Site Description.
A more specific

  11. An online trajectory module (version 1.0 for the non-hydrostatic numerical weather prediction model COSMO

    Directory of Open Access Journals (Sweden)

    A. K. Miltenberger

    2013-02-01

    Full Text Available A module to calculate online trajectories has been implemented into the non-hydrostatic limited-area weather prediction and climate model COSMO. Whereas offline trajectories are calculated with wind fields from model output, which is typically available every one to six hours, online trajectories use the simulated wind field at every model time step (typically less than a minute to solve the trajectory equation. As a consequence, online trajectories much better capture the short-term temporal fluctuations of the wind field, which is particularly important for mesoscale flows near topography and convective clouds, and they do not suffer from temporal interpolation errors between model output times. The numerical implementation of online trajectories in the COSMO model is based upon an established offline trajectory tool and takes full account of the horizontal domain decomposition that is used for parallelization of the COSMO model. Although a perfect workload balance cannot be achieved for the trajectory module (due to the fact that trajectory positions are not necessarily equally distributed over the model domain, the additional computational costs are fairly small for high-resolution simulations. Various options have been implemented to initialize online trajectories at different locations and times during the model simulation. As a first application of the new COSMO module an Alpine North Föhn event in summer 1987 has been simulated with horizontal resolutions of 2.2 km, 7 km, and 14 km. It is shown that low-tropospheric trajectories calculated offline with one- to six-hourly wind fields can significantly deviate from trajectories calculated online. Deviations increase with decreasing model grid spacing and are particularly large in regions of deep convection and strong orographic flow distortion. On average, for this particular case study, horizontal and vertical positions between online and offline trajectories differed by 50–190 km and
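
    The core difference between online and offline trajectories is how often the wind is sampled: at every model time step versus interpolated between archived outputs. The 1-D sketch below (a hypothetical wind field, unrelated to COSMO's implementation) shows how hourly output misses fluctuations faster than the output interval and shifts the trajectory endpoint.

    ```python
    import numpy as np

    def wind(t):
        # hypothetical 1-D wind (m/s) with fluctuations faster than hourly output
        return 10.0 + 5.0 * np.sin(2.0 * np.pi * t / 5000.0)

    dt = 30.0                      # model time step (s): "online" sampling
    t_end = 6 * 3600.0
    steps = np.arange(0.0, t_end, dt)

    # online: wind evaluated at every model time step
    x_online = np.sum(wind(steps)) * dt / 1000.0              # km

    # offline: wind stored hourly, linearly interpolated between outputs
    t_out = np.arange(0.0, t_end + 1.0, 3600.0)
    u_out = wind(t_out)
    x_offline = np.sum(np.interp(steps, t_out, u_out)) * dt / 1000.0

    print(f"online  endpoint: {x_online:.1f} km")
    print(f"offline endpoint: {x_offline:.1f} km")
    print(f"difference:       {abs(x_online - x_offline):.1f} km")
    ```

    Because the wind here does not depend on position, the trajectory reduces to a time integral of the wind, which isolates exactly the temporal-interpolation error the paper quantifies.
    
    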

  12. Enhanced Representation of Soil NO Emissions in the Community Multiscale Air Quality (CMAQ) Model Version 5.0.2

    Science.gov (United States)

    Rasool, Quazi Z.; Zhang, Rui; Lash, Benjamin; Cohan, Daniel S.; Cooter, Ellen J.; Bash, Jesse O.; Lamsal, Lok N.

    2016-01-01

    Modeling of soil nitric oxide (NO) emissions is highly uncertain and may misrepresent its spatial and temporal distribution. This study builds upon a recently introduced parameterization to improve the timing and spatial distribution of soil NO emission estimates in the Community Multiscale Air Quality (CMAQ) model. The parameterization considers soil parameters, meteorology, land use, and mineral nitrogen (N) availability to estimate NO emissions. We incorporate daily year-specific fertilizer data from the Environmental Policy Integrated Climate (EPIC) agricultural model to replace the annual generic data of the initial parameterization, and use a 12 km resolution soil biome map over the continental USA. CMAQ modeling for July 2011 shows slight differences in model performance in simulating fine particulate matter and ozone from Interagency Monitoring of Protected Visual Environments (IMPROVE) and Clean Air Status and Trends Network (CASTNET) sites and NO2 columns from Ozone Monitoring Instrument (OMI) satellite retrievals. We also simulate how the change in the soil NO emissions scheme affects the expected O3 response to projected emissions reductions.

  13. Enhanced representation of soil NO emissions in the Community Multiscale Air Quality (CMAQ) model version 5.0.2

    Science.gov (United States)

    Rasool, Quazi Z.; Zhang, Rui; Lash, Benjamin; Cohan, Daniel S.; Cooter, Ellen J.; Bash, Jesse O.; Lamsal, Lok N.

    2016-09-01

    Modeling of soil nitric oxide (NO) emissions is highly uncertain and may misrepresent its spatial and temporal distribution. This study builds upon a recently introduced parameterization to improve the timing and spatial distribution of soil NO emission estimates in the Community Multiscale Air Quality (CMAQ) model. The parameterization considers soil parameters, meteorology, land use, and mineral nitrogen (N) availability to estimate NO emissions. We incorporate daily year-specific fertilizer data from the Environmental Policy Integrated Climate (EPIC) agricultural model to replace the annual generic data of the initial parameterization, and use a 12 km resolution soil biome map over the continental USA. CMAQ modeling for July 2011 shows slight differences in model performance in simulating fine particulate matter and ozone from Interagency Monitoring of Protected Visual Environments (IMPROVE) and Clean Air Status and Trends Network (CASTNET) sites and NO2 columns from Ozone Monitoring Instrument (OMI) satellite retrievals. We also simulate how the change in soil NO emissions scheme affects the expected O3 response to projected emissions reductions.
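
    The abstract names the inputs of the parameterization (soil parameters, meteorology, land use, mineral N) but not its equations, so the sketch below only illustrates the general shape of such a scheme: a biome-specific emission factor scaled by temperature and soil-moisture response functions and by available mineral nitrogen. The functional forms and constants are placeholders, not the CMAQ equations.

    ```python
    import numpy as np

    def soil_no_flux(t_soil_c, wfps, n_avail, a_biome=1.0):
        """Illustrative soil-NO flux (arbitrary units): a biome factor scaled
        by temperature and soil-moisture responses and mineral-N availability.
        All functional forms are placeholders, not the CMAQ parameterization."""
        f_temp = np.exp(0.1 * (t_soil_c - 30.0))       # warmer soils emit more
        f_moist = 4.0 * wfps * (1.0 - wfps)            # peak at intermediate WFPS
        return a_biome * f_temp * np.clip(f_moist, 0.0, 1.0) * n_avail

    # dry vs moderately wet soil at 25 degC with the same mineral-N pool
    print(soil_no_flux(25.0, 0.1, 2.0))
    print(soil_no_flux(25.0, 0.5, 2.0))
    ```

    The point of the structure is that fertilizer input (here `n_avail`) multiplies the whole flux, which is why replacing annual generic fertilizer data with daily EPIC data changes the timing of the emissions.
    
    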

  14. Public participation and rural management of Brazilian waters: an alternative to the deficit model (Portuguese original version

    Directory of Open Access Journals (Sweden)

    Alessandro Luís Piolli

    2008-12-01

    Full Text Available The knowledge deficit model with regard to the public has been severely criticized in the sociology of the public perception of science. However, when dealing with public decisions regarding scientific matters, political and scientific institutions insist on defending the deficit model. The idea that only certified experts, or those with vast experience, should have the right to participate in decisions can bring about problems for the future of democracies. Through a type of "topography of ideas", in which some concepts from the social studies of science are used in order to think about these problems, and through the case study of public participation in the elaboration of the proposal of discounts in the fees charged for rural water use in Brazil, we will try to point out an alternative to the deficit model. This alternative includes a "minimum comprehension" of the scientific matters involved in the decision on the part of the participants, using criteria judged by the public itself.

  15. Modelling of neutron and photon transport in iron and concrete radiation shieldings by the Monte Carlo method - Version 2

    CERN Document Server

    Žukauskaite, A; Plukiene, R; Plukis, A

    2007-01-01

    Particle accelerators and other high energy facilities produce penetrating ionizing radiation (neutrons and γ-rays) that must be shielded. The objective of this work was to model photon and neutron transport in various materials commonly used for shielding, such as concrete, iron or graphite. The Monte Carlo method allows obtaining answers by simulating individual particles and recording some aspects of their average behavior. In this work several nuclear experiments were modeled: AVF 65 – γ-ray beams (1-10 MeV), HIMAC and ISIS-800 – high energy neutrons (20-800 MeV) transport in iron and concrete. The results were then compared with experimental data.
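
    The essence of the Monte Carlo approach described here, simulating individual particle histories and averaging over them, can be shown with the simplest possible case: uncollided photon transmission through a slab, where sampled exponential free paths reproduce the analytic attenuation law. This toy ignores scattering and buildup, which the cited shielding calculations of course do not; the attenuation coefficient is an assumed value.

    ```python
    import numpy as np

    rng = np.random.default_rng(42)

    mu = 0.5          # assumed total attenuation coefficient (1/cm)
    thickness = 5.0   # slab thickness (cm)
    n = 100_000       # photon histories

    # distance to first interaction is exponentially distributed with mean 1/mu
    free_paths = rng.exponential(scale=1.0 / mu, size=n)
    transmitted = np.count_nonzero(free_paths > thickness) / n

    print(f"Monte Carlo : {transmitted:.4f}")
    print(f"Analytic    : {np.exp(-mu * thickness):.4f}")  # exp(-mu*x)
    ```

    With 10^5 histories the sampled transmission agrees with exp(-mu*x) to within statistical noise, which is the basic convergence check any transport code is held to before comparison with experiment.
    
    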

  16. Effect of sex in the MRMT-1 model of cancer-induced bone pain [version 3; referees: 2 approved

    Directory of Open Access Journals (Sweden)

    Sarah Falk

    2015-11-01

    Full Text Available An overwhelming amount of evidence demonstrates sex-induced variation in pain processing, and has thus increased the focus on sex as an essential parameter for optimization of in vivo models in pain research. Mammary cancer cells are often used to model metastatic bone pain in vivo, and are commonly used in both males and females. Here we demonstrate that compared to male rats, female rats have an increased capacity for recovery following inoculation of MRMT-1 mammary cells, thus potentially causing a sex-dependent bias in interpretation of the data.

  17. Reliability Growth Modeling and Optimal Release Policy Under Fuzzy Environment of an N-version Programming System Incorporating the Effect of Fault Removal Efficiency

    Institute of Scientific and Technical Information of China (English)

    2007-01-01

    Failure of a safety-critical system can lead to big losses. Very high software reliability is required for automating the working of systems such as aircraft controller and nuclear reactor controller software systems. Fault-tolerant software is used to increase the overall reliability of software systems. Fault tolerance is achieved using fault-tolerant schemes such as fault recovery (recovery block scheme), fault masking (N-version programming (NVP)) or a combination of both (hybrid scheme). Such software incorporates the ability of the system to survive even on a failure. Many researchers in the field of software engineering have done excellent work to study the reliability of fault-tolerant systems. Most of them consider the stable system reliability. Few attempts have been made in reliability modeling to study the reliability growth for an NVP system. Recently, a model was proposed to analyze the reliability growth of an NVP system incorporating the effect of fault removal efficiency. In this model, a proportion of the number of failures is assumed to be a measure of fault generation, whereas an appropriate measure of fault generation should be the proportion of faults removed. In this paper, we first propose a testing efficiency model incorporating the effect of imperfect fault debugging and error generation. Using this model, a software reliability growth model (SRGM) is developed to model the reliability growth of an NVP system. The proposed model is useful for practical applications and can provide the measures of debugging effectiveness and additional workload or skilled professionals required. It is very important for a developer to determine the optimal release time of the software to improve its performance in terms of competition and cost. 
In this paper, we also formulate the optimal software release time problem for a 3VP system under a fuzzy environment and discuss the fuzzy optimization technique for solving the problem with a numerical illustration.
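
    Software reliability growth models of the kind discussed are typically built on a non-homogeneous Poisson process with a mean value function m(t) for the expected number of failures by time t. The sketch below uses a Goel-Okumoto-style form with a fault-removal-efficiency factor p folded into the detection rate; the functional form and all parameter values are illustrative, not the model proposed in the paper.

    ```python
    import numpy as np

    def expected_failures(t, a=100.0, b=0.05, p=0.9):
        """Goel-Okumoto-style mean value function m(t) = a*(1 - exp(-b*p*t)),
        with fault-removal efficiency p scaling the detection rate b
        (illustrative form; a = total fault content)."""
        return a * (1.0 - np.exp(-b * p * t))

    def reliability(t, dt, **kw):
        # NHPP: probability of no failure in (t, t+dt]
        return np.exp(-(expected_failures(t + dt, **kw) - expected_failures(t, **kw)))

    for t in (10, 50, 100):
        print(t, round(expected_failures(t), 1), round(reliability(t, 1.0), 3))
    ```

    Release-time optimization then trades the cost of further testing against the reliability reached at time t, which is the problem the paper formulates under a fuzzy environment.
    
    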

  18. EnKF and 4D-Var data assimilation with chemical transport model BASCOE (version 05.06)

    Science.gov (United States)

    Skachko, Sergey; Ménard, Richard; Errera, Quentin; Christophe, Yves; Chabrillat, Simon

    2016-08-01

    We compare two optimized chemical data assimilation systems, one based on the ensemble Kalman filter (EnKF) and the other based on four-dimensional variational (4D-Var) data assimilation, using a comprehensive stratospheric chemistry transport model (CTM). This work is an extension of the Belgian Assimilation System for Chemical ObsErvations (BASCOE), initially designed to work with 4D-Var data assimilation. A strict comparison of both methods in the case of chemical tracer transport was done in a previous study and indicated that both methods provide essentially similar results. In the present work, we assimilate observations of ozone, HCl, HNO3, H2O and N2O from EOS Aura-MLS data into the BASCOE CTM with a full description of stratospheric chemistry. Two new issues related to the use of the full chemistry model with the EnKF are taken into account. One issue is the large number of error variance parameters that need to be optimized. We estimate an observation error variance parameter as a function of pressure level for each observed species using the Desroziers method. For comparison purposes, we apply the same estimation procedure in the 4D-Var data assimilation, where both scale factors of the background and observation error covariance matrices are estimated using the Desroziers method. However, in the EnKF the background error covariance is modelled using the full chemistry model and a model error term which is tuned using an adjustable parameter. We found that it is adequate to use the same value of this parameter, based on the chemical tracer formulation, for all observed species. This is an indication that the main source of model error in a chemical transport model is due to the transport. The second issue in the EnKF with comprehensive atmospheric chemistry models is the noise in the cross-covariance between species that occurs when species are weakly chemically related at the same location. These errors need to be filtered out in addition to a
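
    The Desroziers method mentioned for estimating observation error variances rests on the identity E[d_oa * d_ob] = R, where d_ob = y - Hx_b is the background innovation and d_oa = y - Hx_a the analysis residual. A scalar synthetic-data check (not BASCOE code; all variances assumed) shows the diagnostic recovering the true observation error variance:

    ```python
    import numpy as np

    rng = np.random.default_rng(1)
    n = 200_000
    sb2, so2 = 1.5, 0.5                 # true background / observation error variances

    truth = np.zeros(n)
    xb = truth + rng.normal(0.0, np.sqrt(sb2), n)   # background states
    y = truth + rng.normal(0.0, np.sqrt(so2), n)    # observations

    k = sb2 / (sb2 + so2)               # optimal scalar Kalman gain
    xa = xb + k * (y - xb)              # analysis

    d_ob = y - xb                       # background innovations
    d_oa = y - xa                       # analysis residuals

    so2_est = np.mean(d_oa * d_ob)      # Desroziers: E[d_oa * d_ob] = R
    print(f"estimated obs-error variance: {so2_est:.3f} (true {so2})")
    ```

    In practice the identity only holds when the gain is consistent with the true statistics, which is why the method is applied iteratively in real systems.
    
    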

  19. GWSCREEN: A semi-analytical model for assessment of the groundwater pathway from surface or buried contamination: Version 2.0 theory and user's manual

    Energy Technology Data Exchange (ETDEWEB)

    Rood, A.S.

    1993-06-01

    GWSCREEN was developed for assessment of the groundwater pathway from leaching of radioactive and nonradioactive substances from surface or buried sources. The code was designed for implementation in the Track I and Track II assessment of CERCLA (Comprehensive Environmental Response, Compensation and Liability Act) sites identified as low-probability hazards at the Idaho National Engineering Laboratory (DOE, 1992). The code calculates the limiting soil concentration such that, after leaching and transport to the aquifer, regulatory contaminant levels in groundwater are not exceeded. The code uses a mass conservation approach to model three processes: contaminant release from a source volume, contaminant transport in the unsaturated zone, and contaminant transport in the saturated zone. The source model considers the sorptive properties and solubility of the contaminant. Transport in the unsaturated zone is described by a plug flow model. Transport in the saturated zone is calculated with a semi-analytical solution to the advection-dispersion equation in groundwater. In Version 2.0, GWSCREEN has incorporated an additional source model to calculate the impacts to groundwater resulting from releases to percolation ponds. In addition, transport of radioactive progeny has also been incorporated. GWSCREEN has shown comparable results when compared against other codes using similar algorithms and techniques. This code was designed for assessment and screening of the groundwater pathway when field data are limited. It was not intended to be a predictive tool.
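
    A classic semi-analytical solution to the 1-D advection-dispersion equation of the kind GWSCREEN's saturated-zone model builds on is the Ogata-Banks solution for a continuous source. The velocity and dispersion values below are assumed for illustration, and this is a generic textbook form, not GWSCREEN's exact formulation:

    ```python
    import math

    def ogata_banks(x, t, v, D, c0=1.0):
        """Relative concentration C/C0 at distance x and time t for a
        continuous source in 1-D advective-dispersive transport
        (Ogata-Banks). The exp term can overflow for large v*x/D, so it
        is capped; its erfc factor is then effectively zero anyway."""
        a = (x - v * t) / (2.0 * math.sqrt(D * t))
        b = (x + v * t) / (2.0 * math.sqrt(D * t))
        second = math.exp(min(v * x / D, 700.0)) * math.erfc(b)
        return 0.5 * c0 * (math.erfc(a) + second)

    # concentration 50 m downgradient after 1, 5, and 20 years
    v, D = 10.0, 50.0   # m/yr and m^2/yr: assumed aquifer properties
    for years in (1, 5, 20):
        print(years, round(ogata_banks(50.0, years, v, D), 4))
    ```

    The breakthrough curve rises toward C/C0 = 1 as the plume front passes the observation point, which is exactly the behaviour a screening code compares against a regulatory limit.
    
    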

  20. Numerical simulations of oceanic oxygen cycling in the FAMOUS Earth-System model: FAMOUS-ES, version 1.0

    Directory of Open Access Journals (Sweden)

    J. H. T. Williams

    2014-02-01

    Full Text Available Addition and validation of an oxygen cycle in the ocean component of the FAMOUS climate model are described. Surface validation is carried out with respect to HadGEM2-ES, where good agreement is found and where discrepancies are mainly attributed to disagreement in surface temperature structure between the models. The agreement between the models at depth (where observations are also used in the comparison) in the Southern Hemisphere is less encouraging than in the Northern Hemisphere. This is attributed to a combination of excessive surface productivity in FAMOUS' equatorial waters (and its concomitant effect on remineralisation at depth) and its reduced overturning circulation compared to HadGEM2-ES. For the entire Atlantic basin FAMOUS has a circulation strength of 12.7 ± 0.4 Sv compared to 15.0 ± 0.9 Sv for HadGEM2-ES. The HadGEM2-ES data used in this paper were obtained from the online database of the fifth Coupled Model Intercomparison Project, CMIP5 (Taylor et al., 2012).

  1. Enhanced representation of soil NO emissions in the Community Multiscale Air Quality (CMAQ) model version 5.0.2

    Science.gov (United States)

    Modeling of soil nitric oxide (NO) emissions is highly uncertain and may misrepresent its spatial and temporal distribution. This study builds upon a recently introduced parameterization to improve the timing and spatial distribution of soil NO emission estimates in the Community...

  2. The CSIRO Mk3L climate system model version 1.0 – Part 2: Response to external forcings

    Directory of Open Access Journals (Sweden)

    S. J. Phipps

    2012-05-01

    Full Text Available The CSIRO Mk3L climate system model is a coupled general circulation model, designed primarily for millennial-scale climate simulation and palaeoclimate research. Mk3L includes components which describe the atmosphere, ocean, sea ice and land surface, and combines computational efficiency with a stable and realistic control climatology. It is freely available to the research community. This paper evaluates the response of the model to external forcings which correspond to past and future changes in the climate system.

    A simulation of the mid-Holocene climate is performed, in which changes in the seasonal and meridional distribution of incoming solar radiation are imposed. Mk3L correctly simulates increased summer temperatures at northern mid-latitudes and cooling in the tropics. However, it is unable to capture some of the regional-scale features of the mid-Holocene climate, with the precipitation over Northern Africa being deficient. The model simulates a reduction of between 7 and 15% in the amplitude of El Niño-Southern Oscillation, a smaller decrease than that implied by the palaeoclimate record. However, the realism of the simulated ENSO is limited by the model's relatively coarse spatial resolution.

    Transient simulations of the late Holocene climate are then performed. The evolving distribution of insolation is imposed, and an acceleration technique is applied and assessed. The model successfully captures the temperature changes in each hemisphere and the upward trend in ENSO variability. However, the lack of a dynamic vegetation scheme does not allow it to simulate an abrupt desertification of the Sahara.

    To assess the response of Mk3L to other forcings, transient simulations of the last millennium are performed. Changes in solar irradiance, atmospheric greenhouse gas concentrations and volcanic emissions are applied to the model. The model is again broadly successful at simulating larger-scale changes in the

  3. Itzï (version 17.1): an open-source, distributed GIS model for dynamic flood simulation

    Science.gov (United States)

    Guillaume Courty, Laurent; Pedrozo-Acuña, Adrián; Bates, Paul David

    2017-05-01

    Worldwide, floods are acknowledged as one of the most destructive hazards. In human-dominated environments, their negative impacts are ascribed not only to the increase in frequency and intensity of floods but also to a strong feedback between the hydrological cycle and anthropogenic development. In order to advance a more comprehensive understanding of this complex interaction, this paper presents the development of a new open-source tool named Itzï that enables the 2-D numerical modelling of rainfall-runoff processes and surface flows integrated with the open-source geographic information system (GIS) software known as GRASS. It therefore takes advantage of the ability of GIS environments to handle datasets with variations in both temporal and spatial resolutions. Furthermore, the presented numerical tool can handle datasets from different sources with varied spatial resolutions, facilitating the preparation and management of input and forcing data. This ability reduces the preprocessing time usually required by other models. Itzï uses a simplified form of the shallow water equations, the damped partial inertia equation, for the resolution of surface flows, and the Green-Ampt model for infiltration. The source code is now publicly available online, along with complete documentation. The numerical model is verified against three different test cases: firstly, a comparison with an analytic solution of the shallow water equations is introduced; secondly, a hypothetical flooding event in an urban area is implemented, where results are compared to those from an established model using a similar approach; and lastly, the reproduction of a real inundation event that occurred in the city of Kingston upon Hull, UK, in June 2007, is presented. The numerical approach proved its ability to reproduce the analytic and synthetic test cases. Moreover, simulation results of the real flood event showed its suitability for identifying areas affected by flooding
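
    The Green-Ampt infiltration model used by Itzï leads to an implicit equation for cumulative infiltration, F = K*t + psi*dtheta*ln(1 + F/(psi*dtheta)), which is commonly solved by fixed-point iteration. The soil parameters below are generic loam-like values, not taken from the paper:

    ```python
    import math

    def green_ampt_F(t, K, psi, dtheta, tol=1e-8):
        """Cumulative Green-Ampt infiltration F(t) by fixed-point iteration
        on F = K*t + psi*dtheta*ln(1 + F/(psi*dtheta)); converges because
        the iteration map is a contraction for F > 0."""
        pd = psi * dtheta
        F = K * t  # initial guess: gravity-only infiltration
        while True:
            F_new = K * t + pd * math.log(1.0 + F / pd)
            if abs(F_new - F) < tol:
                return F_new
            F = F_new

    # loam-like parameters: K = 0.65 cm/h, psi = 11 cm, dtheta = 0.3
    for hours in (0.5, 1.0, 2.0):
        F = green_ampt_F(hours, 0.65, 11.0, 0.3)
        f = 0.65 * (1.0 + 11.0 * 0.3 / F)  # rate f = K*(1 + psi*dtheta/F)
        print(hours, round(F, 3), round(f, 3))
    ```

    The infiltration rate f decreases as the wetting front deepens, which is the behaviour that controls how much rainfall becomes surface runoff in each model cell.
    
    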

  4. User guide to UTDefect, Version 3: A computer program modelling ultrasonic nondestructive testing of a defect in an isotropic component

    Energy Technology Data Exchange (ETDEWEB)

    Bostroem, A. [Chalmers Univ. of Technology, Goeteborg (Sweden). Dept. of Mechanics

    2000-10-01

    This user guide to the computer program UTDefect should give a reasonable overview of the program and its possibilities and limitations, and it should make it possible to run the program. UTDefect models the ultrasonic nondestructive testing of some simply shaped defects in an isotropic and homogeneous component. Such a model can be useful for educational purposes, for parametric studies, for the development of testing procedures, for the development of signal processing and data inversion procedures, and for the qualification of NDT procedures and personnel. The theories behind UTDefect are all of the type that can be called 'exact', meaning that the full linear elastodynamic wave equations are solved, essentially without any approximations. The basic assumption in UTDefect is that the tested component is homogeneous and isotropic, although viscoelastic losses can be included. The ultrasonic probes are modelled by the traction they exert on the component. The action of the receiving probe is modelled by a reciprocity argument. The various defects are all idealized with smooth surfaces and sharp crack edges, although a model for rough cracks is also included. The wave propagation and scattering are solved for by Fourier transforms, integral equation techniques, the null field approach and separation of variables. The methods are all of the semi-analytical kind and, with enough truncations, number of integration points, etc, give very good accuracy. The models are all three-dimensional and give reasonable execution times in most cases. In comparison, the more general volume discretisation methods like EFIT and FEM still tend to be useful for wave propagation problems mainly in two dimensions. The probe model in UTDefect admits the usual kind of contact probes with arbitrary type, angle and frequency. The effective contact area can be rectangular or elliptic and the contact lubricated or glued. Focused probes are also possible. Two simple types of

  5. Influence of Solar and Thermal Radiation on Future Heat Stress Using CMIP5 Archive Driving the Community Land Model Version 4.5

    Science.gov (United States)

    Buzan, J. R.; Huber, M.

    2015-12-01

    The summer of 2015 has experienced major heat waves on 4 continents, and heat stress left ~4000 people dead in India and Pakistan. Heat stress is caused by a combination of meteorological factors: temperature, humidity, and radiation. The International Organization for Standardization (ISO) uses Wet Bulb Globe Temperature (WBGT), an empirical metric that is calibrated with temperature, humidity, and radiation, for determining labor capacity during heat stress. Unfortunately, most literature studying global heat stress focuses on extreme temperature events, and a limited number of studies use the combination of temperature and humidity. Recent global assessments use WBGT, yet omit the radiation component without recalibrating the metric. Here we explicitly calculate future WBGT within a land surface model, including radiative fluxes as produced by a modeled globe thermometer. We use the Community Land Model version 4.5 (CLM4.5), which is a component model of the Community Earth System Model (CESM) and is maintained by the National Center for Atmospheric Research (NCAR). To drive our CLM4.5 simulations, we use greenhouse gas concentrations from Representative Concentration Pathway 8.5 (business as usual) and atmospheric output from the CMIP5 archive. Humans work in a variety of environments, and we place the modeled globe thermometer in a correspondingly varied set of environments. We modify the CLM4.5 code to calculate solar and thermal radiation fluxes below and above canopy vegetation and in bare ground. To calculate wet bulb temperature, we implemented the HumanIndexMod into CLM4.5. The temperature, wet bulb temperature, and radiation fields are calculated at every model time step and are output four times daily. We use these fields to calculate WBGT and labor capacity for two time slices: 2026-2045 and 2081-2100.
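
    The WBGT combination itself is simple once the component temperatures are available; the hard part, which this study does inside CLM4.5, is modeling the natural wet-bulb and black-globe temperatures in the first place. The weights below are the standard ISO 7243 ones:

    ```python
    def wbgt_outdoor(t_nwb, t_globe, t_air):
        """ISO 7243 outdoor WBGT (deg C): natural wet-bulb, black-globe,
        and dry-bulb air temperatures weighted 0.7 / 0.2 / 0.1."""
        return 0.7 * t_nwb + 0.2 * t_globe + 0.1 * t_air

    def wbgt_indoor(t_nwb, t_globe):
        # without direct solar load the globe term absorbs the air-temperature weight
        return 0.7 * t_nwb + 0.3 * t_globe

    # humid, sunny conditions: the wet-bulb term dominates the index
    print(wbgt_outdoor(28.0, 45.0, 34.0))   # -> 32.0
    ```

    The 0.7 weight on the wet-bulb term is why humid heat, not dry heat, drives labor-capacity limits, and the 0.2 globe term is the radiation component that this study argues cannot simply be dropped.
    
    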

  6. GPU-accelerated atmospheric chemical kinetics in the ECHAM/MESSy (EMAC Earth system model (version 2.52

    Directory of Open Access Journals (Sweden)

    M. Alvanos

    2017-10-01

    Full Text Available This paper presents an application of GPU accelerators in Earth system modeling. We focus on atmospheric chemical kinetics, one of the most computationally intensive tasks in climate-chemistry model simulations. We developed a software package that automatically generates CUDA kernels to numerically integrate atmospheric chemical kinetics in the global climate model ECHAM/MESSy Atmospheric Chemistry (EMAC), used to study climate change and air quality scenarios. A source-to-source compiler outputs a CUDA-compatible kernel by parsing the FORTRAN code generated by the Kinetic PreProcessor (KPP) general analysis tool. All Rosenbrock methods that are available in the KPP numerical library are supported. Performance evaluation, using Fermi and Pascal CUDA-enabled GPU accelerators, shows achieved speed-ups of 4.5× and 20.4×, respectively, in kernel execution time. A node-to-node real-world production performance comparison shows a 1.75× speed-up over the non-accelerated application using the KPP three-stage Rosenbrock solver. We provide a detailed description of the code optimizations used to improve the performance, including memory optimizations, control code simplification, and reduction of idle time. The accuracy and correctness of the accelerated implementation are evaluated by comparing to the CPU-only code of the application. The median relative difference is found to be less than 0.000000001 % when comparing the output of the accelerated kernel with that of the CPU-only code. The approach followed, including the computational workload division, and the developed GPU solver code can potentially be used as the basis for hardware acceleration of numerous geoscientific models that rely on KPP for atmospheric chemical kinetics applications.
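
    The Rosenbrock solvers that KPP generates are linearly implicit: each step solves a linear system built from the chemical Jacobian instead of iterating a nonlinear solver, which is what makes them robust for stiff kinetics. A minimal one-stage analogue on a toy two-species mechanism (rate constants assumed, vastly simpler than EMAC's chemistry) illustrates the idea:

    ```python
    import numpy as np

    K_A, K_B = 1.0e3, 1.0   # toy rate constants: fast A -> B, slow B -> loss

    def f(y):
        # right-hand side of the toy kinetics d[A]/dt, d[B]/dt
        return np.array([-K_A * y[0], K_A * y[0] - K_B * y[1]])

    def jac(y):
        return np.array([[-K_A, 0.0], [K_A, -K_B]])

    def rosenbrock1_step(y, h):
        """One linearly implicit (first-order Rosenbrock) step:
        solve (I - h*J) dy = h*f(y), then y += dy."""
        I = np.eye(len(y))
        dy = np.linalg.solve(I - h * jac(y), h * f(y))
        return y + dy

    y = np.array([1.0, 0.0])
    h = 0.01                # far above the explicit stability limit (~2/K_A)
    for _ in range(500):    # integrate to t = 5
        y = rosenbrock1_step(y, h)
    print(y)                # A is gone; B has mostly decayed
    ```

    On a GPU the same linear solve is done for thousands of grid cells in parallel, one independent chemistry system per cell, which is the workload division the paper exploits.
    
    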

  7. a Detailed Account of Alain CONNES’ Version of the Standard Model in Non-Commutative Differential Geometry III

    Science.gov (United States)

    Kastler, Daniel

    We describe in detail Alain Connes’ last presentation of the (classical level of the) standard model in noncommutative differential geometry, now free of the cumbersome adynamical fields which encumbered the initial treatment. In addition, the theory is presented in a more transparent way by systematic use of the skew tensor-product structure, and of 2×2 matrices with 2×2 matrix entries instead of the previous 4×4 matrices.

  8. Application of a modified Anaerobic Digestion Model 1 version for fermentative hydrogen production from sweet sorghum extract by Ruminococcus albus

    Energy Technology Data Exchange (ETDEWEB)

    Ntaikou, I.; Lyberatos, G. [Department of Chemical Engineering, University of Patras, Karatheodori 1 St., 26500 Patras (Greece); Institute of Chemical Engineering and High Temperature Chemical Processes, 26504 Patras (Greece); Gavala, H.N. [Department of Chemical Engineering, University of Patras, Karatheodori 1 St., 26500 Patras (Greece); Copenhagen Institute of Technology (Aalborg University Copenhagen), Section for Sustainable Biotechnology, Department of Biotechnology, Chemistry and Environmental Engineering, Lautrupvang 15, DK 2750 Ballerup (Denmark)

    2010-04-15

    The aim of the present study was to evaluate the effectiveness of a developed, ADM1-based kinetic model of the hydrogen production process in batch and continuous cultures of the bacterium Ruminococcus albus grown on sweet sorghum extract as the sole carbon source. Although sorghum extract is known to contain at least two different sugars, i.e. sucrose and glucose, no biphasic growth was observed in batch cultures, although such growth has been reported in cultures of R. albus on mixed substrates. Thus, given that the main sugar of sweet sorghum extract is sucrose, batch experiments with different initial concentrations of sucrose were performed in order to estimate the growth kinetics of the bacterium on this substrate. The kinetic parameters concerning the endogenous metabolism of the bacterium, as well as those concerning the effects of pH and hydrogen partial pressure (P_H2), were the same as those estimated in a previous study with glucose as the carbon source. Subsequently, the experimental data from batch and continuous experiments with sweet sorghum extract were simulated with the previously developed, modified ADM1 model, which accounts for the use of sugar-based substrates. The model, although developed on synthetic substrates, adequately described the behavior of the microorganism on a real substrate such as sweet sorghum extract and predicted the experimental results well, with the deviation of the model predictions from the experimental results being 5-18% for the hydrogen yield. (author)
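    The growth kinetics estimated in such studies are of the Monod type. As a hedged sketch — the parameter values below are arbitrary placeholders, not the paper's fitted ADM1 constants, and the pH and hydrogen-partial-pressure inhibition terms are omitted — a batch Monod simulation looks like:

```python
# Generic Monod kinetics for batch growth on a single sugar substrate.
# All parameter values are illustrative placeholders.
mu_max, Ks, Y = 0.5, 0.2, 0.1      # 1/h, g/L, g biomass per g substrate

def step(S, X, dt):
    mu = mu_max * S / (Ks + S)     # specific growth rate (Monod)
    dX = mu * X                    # biomass growth
    dS = -dX / Y                   # substrate consumed per unit growth
    return max(S + dS * dt, 0.0), X + dX * dt

S, X = 10.0, 0.05                  # initial substrate and biomass, g/L
for _ in range(2000):              # 20 h of forward-Euler integration
    S, X = step(S, X, dt=0.01)
# substrate is drawn down to near zero while biomass accumulates,
# bounded by the yield coefficient Y
```

A full ADM1-style model adds inhibition factors multiplying `mu` and separate states for products such as hydrogen; the skeleton above is only the growth/consumption core.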

  9. Regional hydrogeological simulations for Forsmark - numerical modelling using DarcyTools. Preliminary site description Forsmark area version 1.2

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    2005-12-15

    A numerical model is developed on a regional-scale (hundreds of square kilometres) to study the zone of influence for variable-density groundwater flow that affects the Forsmark area. Transport calculations are performed by particle tracking from a local-scale release area (a few square kilometres) to test the sensitivity to different hydrogeological uncertainties and the need for far-field realism. The main objectives of the regional flow modelling were to achieve the following: I. Palaeo-hydrogeological understanding: An improved understanding of the palaeohydrogeological conditions is necessary in order to gain credibility for the site descriptive model in general and the hydrogeological description in particular. This requires modelling of the groundwater flow from the last glaciation up to present-day with comparisons against measured TDS and other hydro-geochemical measures. II. Simulation of flow paths: The simulation and visualisation of flow paths from a tentative repository area is a means for describing the role of the current understanding of the modelled hydrogeological conditions in the target volume, i.e. the conditions of primary interest for Safety Assessment. Of particular interest here is demonstration of the need for detailed far-field realism in the numerical simulations. The motivation for a particular model size (and resolution) and set of boundary conditions for a realistic description of the recharge and discharge connected to the flow at repository depth is an essential part of the groundwater flow path simulations. The numerical modelling was performed by two separate modelling teams, the ConnectFlow Team and the DarcyTools Team. The work presented in this report was based on the computer code DarcyTools developed by Computer-aided Fluid Engineering. DarcyTools is a kind of equivalent porous media (EPM) flow code specifically designed to treat flow and salt transport in sparsely fractured crystalline rock intersected by transmissive

  10. Pharmacokinetic-pharmacodynamic relationship of anesthetic drugs: from modeling to clinical use [version 1; referees: 4 approved

    Directory of Open Access Journals (Sweden)

    Valerie Billard

    2015-11-01

    Anesthesia is a combination of unconsciousness, amnesia, and analgesia, expressed in sleeping patients by limited reaction to noxious stimulation. It is achieved by several classes of drugs acting mainly on the central nervous system. Compared to other therapeutic families, anesthetic drugs, administered by the intravenous or pulmonary route, are quickly distributed in the blood and induce within a few minutes effects that are fully reversible within minutes or hours. These effects change in parallel with the concentration of the drug, and the concentration time course follows, with reasonable precision, mathematical models based on the Fick principle. Understanding the concentration time course therefore allows the dosing scheme to be adjusted in order to control the effects. The purpose of this short review is to describe the basics of pharmacokinetics and modeling, the concentration-effect relationship, and drug-interaction modeling, to offer anesthesiologists and non-anesthesiologists an overview of the rules to follow to optimize anesthetic drug delivery.
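    The concentration time course referred to above can be illustrated with the simplest possible case: a one-compartment intravenous bolus with first-order elimination, linked to a sigmoid Emax (Hill) concentration-effect model. Real anesthetic pharmacokinetics requires multi-compartment, drug-specific models, so every number below is hypothetical:

```python
import math

# One-compartment IV bolus: C(t) = (dose / V) * exp(-k * t),
# with effect given by a sigmoid Emax (Hill) model.
# All parameter values are hypothetical, not drug-specific.
dose, V, k = 100.0, 20.0, 0.1          # mg, L, 1/min
EC50, hill = 2.0, 2.0                  # mg/L, Hill coefficient

def concentration(t):
    """Plasma concentration after a single bolus (mg/L)."""
    return (dose / V) * math.exp(-k * t)

def effect(c):
    """Fraction of maximal effect at concentration c (sigmoid Emax)."""
    return c**hill / (EC50**hill + c**hill)

half_life = math.log(2) / k            # elimination half-life, ~6.9 min
# concentration halves every half-life, while the effect falls
# non-linearly: near EC50 small concentration changes move it most
```

Target-controlled infusion systems invert this kind of model: given a desired concentration, they solve for the infusion scheme that produces it.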

  11. Development of a source oriented version of the WRF/Chem model and its application to the California Regional PM10/PM2.5 Air Quality Study

    Directory of Open Access Journals (Sweden)

    H. Zhang

    2013-06-01

    flux, and primary and secondary particulate matter concentrations relative to the internally mixed version of the model. Downward shortwave radiation predicted by the source-oriented model is enhanced by 1% at ground level, chiefly because diesel engine particles in the source-oriented mixture are not artificially coated with material that increases their absorption efficiency. The extinction coefficient predicted by the source-oriented WRF/Chem model is reduced by an average of ~5–10% in the central valley, with a maximum reduction of ~20%. Particulate matter concentrations predicted by the source-oriented WRF/Chem model are ~5–10% lower than in the internally mixed version of the same model because increased solar radiation at the ground increases atmospheric mixing. All of these results stem from the mixing state of black carbon. The source-oriented model representation with realistic aging processes predicts that hydrophobic diesel engine particles remain largely uncoated over the 7-day simulation period, while the internal-mixture representation predicts significant accumulation of secondary nitrate and water on diesel engine particles. Similar results will likely be found in any air pollution stagnation episode characterized by significant particulate nitrate production.

  12. Cielo Computational Environment Usage Model With Mappings to ACE Requirements for the General Availability User Environment Capabilities Release Version 1.1

    Energy Technology Data Exchange (ETDEWEB)

    Vigil, Benny Manuel [Los Alamos National Laboratory]; Ballance, Robert [SNL]; Haskell, Karen [SNL]

    2012-08-09

    Cielo is a massively parallel supercomputer funded by the DOE/NNSA Advanced Simulation and Computing (ASC) program, and operated by the Alliance for Computing at Extreme Scale (ACES), a partnership between Los Alamos National Laboratory (LANL) and Sandia National Laboratories (SNL). The primary Cielo compute platform is physically located at Los Alamos National Laboratory. This Cielo Computational Environment Usage Model documents the capabilities and the environment to be provided for the Q1 FY12 Level 2 Cielo Capability Computing (CCC) Platform Production Readiness Milestone. This document describes specific capabilities, tools, and procedures to support both local and remote users. The model is focused on the needs of the ASC user working in the secure computing environments at Lawrence Livermore National Laboratory (LLNL), Los Alamos National Laboratory, or Sandia National Laboratories, but also addresses the needs of users working in the unclassified environment. The Cielo Computational Environment Usage Model maps the provided capabilities to the tri-Lab ASC Computing Environment (ACE) Version 8.0 requirements. The ACE requirements reflect the high performance computing requirements for the Production Readiness Milestone user environment capabilities of the ASC community. A description of ACE requirements met, and those requirements that are not met, are included in each section of this document. The Cielo Computing Environment, along with the ACE mappings, has been issued and reviewed throughout the tri-Lab community.

  13. Using Rasch rating scale model to reassess the psychometric properties of the Persian version of the PedsQL™ 4.0 Generic Core Scales in school children

    Directory of Open Access Journals (Sweden)

    Jafari Peyman

    2012-03-01

    Background: Item response theory (IRT) is extensively used to develop adaptive instruments of health-related quality of life (HRQoL). However, each IRT model has its own function for estimating item and category parameters, and hence different results may be found using the same response categories with different IRT models. The present study used the Rasch rating scale model (RSM) to examine and reassess the psychometric properties of the Persian version of the PedsQL™ 4.0 Generic Core Scales. Methods: The PedsQL™ 4.0 Generic Core Scales was completed by 938 Iranian school children and their parents. Convergent, discriminant and construct validity of the instrument were assessed by classical test theory (CTT). The RSM was applied to investigate person and item reliability, item statistics and the ordering of response categories. Results: The CTT method showed that the scaling success rates for convergent and discriminant validity were 100% in all domains, with the exception of physical health in the child self-report. Moreover, confirmatory factor analysis supported a four-factor model similar to the original version. The RSM showed that 22 out of 23 items had acceptable infit and outfit statistics (0.6–1.4), person reliabilities were low, item reliabilities were high, and item difficulty ranged from -1.01 to 0.71 and from -0.68 to 0.43 for child self-report and parent proxy-report, respectively. The RSM also showed that the successive response categories for all items were not located in the expected order. Conclusions: This study revealed that, in all domains, the five response categories did not perform adequately. It is not known whether this problem is a function of the meaning of the response choices in the Persian language or an artifact of a mostly healthy population that did not use the full range of the response categories. The response categories should be evaluated in further validation studies, especially in large samples of chronically ill patients.
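    For readers unfamiliar with the RSM, the model assigns each ordered response category a probability from a person ability, an item difficulty, and a set of category thresholds shared across all items (the feature that distinguishes the RSM from the partial credit model). A minimal sketch with illustrative, not study-estimated, values:

```python
import math

def rsm_probs(theta, delta, taus):
    """Rasch rating scale model: probability of each response category
    0..m for a person of ability theta on an item of difficulty delta,
    with thresholds taus shared across items."""
    logits, s = [0.0], 0.0
    for tau in taus:
        s += theta - (delta + tau)     # cumulative category logit
        logits.append(s)
    exps = [math.exp(v) for v in logits]
    z = sum(exps)
    return [e / z for e in exps]

# five ordered categories => four thresholds; values are illustrative.
# Properly "ordered" categories would have increasing taus; a disordered
# set (as found in the study) means some category is never the most likely.
taus = [-1.5, -0.5, 0.5, 1.5]
p = rsm_probs(theta=0.3, delta=0.0, taus=taus)
# probabilities sum to 1; a higher theta shifts mass to higher categories
```

Plotting these category curves against theta is the standard diagnostic for the threshold-ordering problem the authors report.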

  14. Breeding novel solutions in the brain: a model of Darwinian neurodynamics [version 1; referees: 1 approved, 2 approved with reservations

    Directory of Open Access Journals (Sweden)

    András Szilágyi

    2016-09-01

    Background: The fact that surplus connections and neurons are pruned during development is well established. We complement this selectionist picture with a proof-of-principle model of evolutionary search in the brain that accounts for new variations in theory space. We present a model for Darwinian evolutionary search for candidate solutions in the brain. Methods: We combine known components of the brain – recurrent neural networks (acting as attractors), the action selection loop and implicit working memory – to provide the appropriate Darwinian architecture. We employ a population of attractor networks with palimpsest memory. The action selection loop is employed with winners-share-all dynamics to select candidate solutions that are transiently stored in implicit working memory. Results: We document two processes: selection of stored solutions and evolutionary search for novel solutions. During the replication of candidate solutions, attractor networks occasionally produce recombinant patterns, increasing the variation on which selection can act. Combinatorial search acts on multiplying units (activity patterns) with hereditary variation, and novel variants appear due to (i) noisy recall of patterns from the attractor networks, (ii) noise during transmission of candidate solutions as messages between networks, and (iii) spontaneously generated, untrained patterns in spurious attractors. Conclusions: Attractor dynamics of recurrent neural networks can be used to model Darwinian search. The proposed architecture can be used for fast search among stored solutions (by selection) and for evolutionary search when novel candidate solutions are generated in successive iterations. Since all the suggested components are present in advanced nervous systems, we hypothesize that the brain could implement a truly evolutionary combinatorial search system, capable of generating novel variants.
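    The selection-plus-variation loop described above is, at its core, a generational evolutionary search. A toy sketch of that loop — bit-string patterns and a stand-in fitness function, not the paper's attractor-network implementation — makes the three ingredients (selection, recombination, noisy copying) explicit:

```python
import random

random.seed(0)

# Toy Darwinian search: bit-string "activity patterns", selection of the
# fitter half, recombination, and noisy copying (mutation). Fitness here
# is simply the number of 1-bits -- a stand-in target, not the paper's.
N, L, GENS = 40, 20, 60

def fitness(p):
    return sum(p)

def mutate(p, rate=0.02):
    # noisy recall/transmission: each bit flips with small probability
    return [b ^ (random.random() < rate) for b in p]

def recombine(a, b):
    cut = random.randrange(L)
    return a[:cut] + b[cut:]

pop = [[random.randint(0, 1) for _ in range(L)] for _ in range(N)]
for _ in range(GENS):
    pop.sort(key=fitness, reverse=True)
    parents = pop[: N // 2]                       # selection
    pop = parents + [mutate(recombine(random.choice(parents),
                                      random.choice(parents)))
                     for _ in range(N - len(parents))]
best = max(pop, key=fitness)
# best steadily approaches the all-ones pattern over generations
```

In the paper's architecture the "copying" is pattern recall by attractor networks, so mutation and recombination arise from network noise rather than explicit operators.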

  15. GASP: A Performance Analysis Tool Interface for Global Address Space Programming Models, Version 1.5

    Energy Technology Data Exchange (ETDEWEB)

    Leko, Adam; Bonachea, Dan; Su, Hung-Hsun; Sherburne, Hans; George, Alan D.

    2006-09-14

    Due to the wide range of compilers and the lack of a standardized performance tool interface, writers of performance tools face many challenges when incorporating support for global address space (GAS) programming models such as Unified Parallel C (UPC), Titanium, and Co-Array Fortran (CAF). This document presents a Global Address Space Performance tool interface (GASP) that is flexible enough to be adapted into current global address space compiler and runtime infrastructures with little effort, while allowing performance analysis tools to gather much information about the performance of global address space programs.

  16. From disease modelling to personalised therapy in patients with CEP290 mutations [version 1; referees: 2 approved

    Directory of Open Access Journals (Sweden)

    Elisa Molinari

    2017-05-01

    Mutations that give rise to premature termination codons are a common cause of inherited genetic diseases. When transcripts containing these changes are generated, they are usually rapidly removed by the cell through the process of nonsense-mediated decay. Here we discuss observed changes in transcripts of the centrosomal protein CEP290 resulting not from degradation, but from changes in exon usage. We also comment on a landmark paper (Drivas et al., Sci. Transl. Med., 2015) in which modelling this process of exon usage may be used to predict disease severity in CEP290 ciliopathies, and on how understanding this process may potentially be used for therapeutic benefit in the future.

  17. M3 version 3.0: Verification and validation; Hydrochemical model of ground water at repository site

    Energy Technology Data Exchange (ETDEWEB)

    Gomez, Javier B. (Dept. of Earth Sciences, Univ. of Zaragoza, Zaragoza (Spain)); Laaksoharju, Marcus (Geopoint AB, Sollentuna (Sweden)); Skaarman, Erik (Abscondo, Bromma (Sweden)); Gurban, Ioana (3D-Terra (Canada))

    2009-01-15

    Hydrochemical evaluation is a complex type of work that is carried out by specialists. The outcome of this work is generally presented as qualitative models and process descriptions of a site. To support and help to quantify the processes in an objective way, a multivariate mathematical tool entitled M3 (Multivariate Mixing and Mass balance calculations) has been constructed. The computer code can be used to trace the origin of the groundwater, and to calculate the mixing proportions and mass balances from groundwater data. The M3 code is a groundwater response model, which means that changes in the groundwater chemistry in terms of sources and sinks are traced in relation to an ideal mixing model. The complexity of the measured groundwater data determines the configuration of the ideal mixing model. Deviations from the ideal mixing model are interpreted as being due to reactions. Assumptions concerning important mineral phases altering the groundwater or uncertainties associated with thermodynamic constants do not affect the modelling because the calculations are solely based on the measured groundwater composition. M3 uses the opposite approach to that of many standard hydrochemical models. In M3, mixing is evaluated and calculated first. The constituents that cannot be described by mixing are described by reactions. The M3 model consists of three steps: the first is a standard principal component analysis, followed by mixing and finally mass balance calculations. The measured groundwater composition can be described in terms of mixing proportions (%), while the sinks and sources of an element associated with reactions are reported in mg/L. This report contains a set of verification and validation exercises with the intention of building confidence in the use of the M3 methodology. At the same time, clear answers are given to questions related to the accuracy and the precision of the results, including the inherent uncertainties and the errors that can be made
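    The mixing step at the heart of M3 can be sketched as a constrained least-squares problem: find end-member ("reference water") proportions that sum to one and best reproduce a sample's composition, then read any residual as the signature of reactions. The end-member compositions and names below are invented for illustration, not taken from the report:

```python
import numpy as np

# Hypothetical end-member compositions: rows = reference waters,
# columns = dissolved constituents (mg/L). Values are made up.
E = np.array([[10.0,  1.0, 0.5],    # "glacial"-type water
              [100.0, 20.0, 5.0],   # "brine"-type water
              [30.0,  5.0, 2.0]])   # "meteoric"-type water

# A sample that is an exact 20/50/30 mixture of the three end-members.
sample = 0.2 * E[0] + 0.5 * E[1] + 0.3 * E[2]

# Solve min ||E.T @ p - sample|| subject to sum(p) = 1 by appending the
# constraint as a heavily weighted extra equation.
w = 1e3
A = np.vstack([E.T, w * np.ones((1, 3))])
b = np.concatenate([sample, [w]])
p, *_ = np.linalg.lstsq(A, b, rcond=None)

# Any nonzero residual would be reported, element by element, as
# sources or sinks due to reactions (in mg/L), as M3 does.
residual = E.T @ p - sample
```

M3 additionally runs a principal component analysis first, so that mixing is evaluated in a reduced space chosen by the data rather than directly on raw concentrations.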

  18. Analysing $J/\Psi$ Production in Various RHIC Interactions with a Version of the Sequential Chain Model (SCM)

    CERN Document Server

    Guptaroy, P; Sau, Goutam; Biswas, S K; Bhattacharya, S

    2009-01-01

    We tentatively develop a model for $J/\Psi$ production in p+p, d+Au, Cu+Cu and Au+Au collisions at RHIC energies on the basic ansatz that the results of nucleus-nucleus collisions can be arrived at from nucleon-nucleon (p+p) interactions with the induction of some additional specific features of high-energy nuclear collisions. Based on the proposed new and somewhat unfamiliar model, we have tried (i) to capture the properties of the invariant $p_T$-spectra for $J/\Psi$ meson production; (ii) to study the nature of the centrality dependence of the $p_T$-spectra; (iii) to understand the rapidity distributions; (iv) to obtain the characteristics of the average transverse momentum $\langle p_T \rangle$ and the values of $\langle p_T^2 \rangle$ as well; and (v) to trace the nature of the nuclear modification factor. The alternative approach adopted here describes the data sets on the above-mentioned observables in a fairly satisfactory manner. Finally, the nature of $J/\Psi$ production at Large Hadron Collider (LHC)-energ...

  20. Inspection of the Math Model Tools for On-Orbit Assessment of Impact Damage Report. Version 1.0

    Science.gov (United States)

    Harris, Charles E.; Raju, Ivatury S.; Piascik, Robert S.; Kramer White, Julie; Labbe, Steve G.; Rotter, Hank A.

    2005-01-01

    In Spring of 2005, the NASA Engineering Safety Center (NESC) was engaged by the Space Shuttle Program (SSP) to peer review the suite of analytical tools being developed to support the determination of impact and damage tolerance of the Orbiter Thermal Protection Systems (TPS). The NESC formed an independent review team with the core disciplines of materials, flight sciences, structures, mechanical analysis and thermal analysis. The Math Model Tools reviewed included damage prediction and stress analysis, aeroheating analysis, and thermal analysis tools. Some tools are physics-based and other tools are empirically-derived. Each tool was created for a specific use and timeframe, including certification, real-time pre-launch assessments, and real-time on-orbit assessments. The tools are used together in an integrated strategy for assessing the ramifications of impact damage to tile and RCC. The NESC teams conducted a peer review of the engineering data package for each Math Model Tool. This report contains the summary of the team observations and recommendations from these reviews.

  1. Attribute-Based Signcryption: Signer Privacy, Strong Unforgeability and IND-CCA Security in Adaptive-Predicates Model (Extended Version

    Directory of Open Access Journals (Sweden)

    Tapas Pandit

    2016-08-01

    Attribute-Based Signcryption (ABSC) is a natural extension of Attribute-Based Encryption (ABE) and Attribute-Based Signature (ABS) in which one obtains message confidentiality and authenticity together. Since signer privacy is captured in the security of ABS, it is natural to expect that signer privacy will also be preserved in ABSC. In this paper, we first propose an ABSC scheme that is weakly existentially unforgeable and IND-CCA secure in adaptive-predicates models and achieves signer privacy. Then, by applying a strongly unforgeable one-time signature (OTS), the above scheme is lifted to an ABSC scheme that attains strong existential unforgeability in the adaptive-predicates model. Both ABSC schemes are constructed on a common setup, i.e. the public parameters and keys are the same for both the encryption and signature modules. Our first construction is in the flavor of the CtE&S paradigm, except for one extra component that is computed using both signature components and ciphertext components. The second construction follows a new paradigm, an extension of CtE&S that we call "Commit then Encrypt and Sign then Sign", in which the last signature is generated using a strong OTS scheme. Since non-repudiation is achieved by the CtE&S paradigm, our systems achieve it as well.

  2. caCORE version 3: Implementation of a model driven, service-oriented architecture for semantic interoperability

    Science.gov (United States)

    Komatsoulis, George A.; Warzel, Denise B.; Hartel, Frank W.; Shanbhag, Krishnakant; Chilukuri, Ram; Fragoso, Gilberto; de Coronado, Sherri; Reeves, Dianne M.; Hadfield, Jillaine B.; Ludet, Christophe; Covitz, Peter A.

    2008-01-01

    One of the requirements for a federated information system is interoperability, the ability of one computer system to access and use the resources of another system. This feature is particularly important in biomedical research systems, which need to coordinate a variety of disparate types of data. In order to meet this need, the National Cancer Institute Center for Bioinformatics (NCICB) has created the cancer Common Ontologic Representation Environment (caCORE), an interoperability infrastructure based on Model Driven Architecture. The caCORE infrastructure provides a mechanism to create interoperable biomedical information systems. Systems built using the caCORE paradigm address both aspects of interoperability: the ability to access data (syntactic interoperability) and understand the data once retrieved (semantic interoperability). This infrastructure consists of an integrated set of three major components: a controlled terminology service (Enterprise Vocabulary Services), a standards-based metadata repository (the cancer Data Standards Repository) and an information system with an Application Programming Interface (API) based on Domain Model Driven Architecture. This infrastructure is being leveraged to create a Semantic Service Oriented Architecture (SSOA) for cancer research by the National Cancer Institute’s cancer Biomedical Informatics Grid (caBIG™). PMID:17512259

  3. A model using marginal efficiency of investment to analyze carbon and nitrogen interactions in terrestrial ecosystems (ACONITE Version 1)

    Science.gov (United States)

    Thomas, R. Q.; Williams, M.

    2014-09-01

    Carbon (C) and nitrogen (N) cycles are coupled in terrestrial ecosystems through multiple processes including photosynthesis, tissue allocation, respiration, N fixation, N uptake, and decomposition of litter and soil organic matter. Capturing the constraint of N on terrestrial C uptake and storage has been a focus of the Earth System Modeling community. However, there is little understanding of the trade-offs and sensitivities of allocating C and N to different tissues in order to optimize the productivity of plants. Here we describe a new, simple model of ecosystem C-N cycling and interactions (ACONITE) that builds on theory related to plant economics in order to predict key ecosystem properties (leaf area index, leaf C : N, N fixation, and plant C use efficiency) based on the outcome of assessments of the marginal change in net C or N uptake associated with a change in allocation of C or N to plant tissues. We simulated and evaluated steady-state ecosystem stocks and fluxes in three different forest ecosystems types (tropical evergreen, temperate deciduous, and temperate evergreen). Leaf C : N differed among the three ecosystem types (temperate deciduous demand for N and the marginal return on C investment to acquire N, was an order of magnitude higher in the tropical forest than in the temperate forest, consistent with observations. A sensitivity analysis revealed that parameterization of the relationship between leaf N and leaf respiration had the largest influence on leaf area index and leaf C : N. A parameter governing how photosynthesis scales with day length had the largest influence on total vegetation C, GPP, and NPP. Multiple parameters associated with photosynthesis, respiration, and N uptake influenced the rate of N fixation. Overall, our ability to constrain leaf area index and allow spatially and temporally variable leaf C : N can help address challenges in simulating these properties in ecosystem and Earth System models. Furthermore, the simple

  4. A model using marginal efficiency of investment to analyse carbon and nitrogen interactions in terrestrial ecosystems (ACONITE Version 1)

    Science.gov (United States)

    Thomas, R. Q.; Williams, M.

    2014-04-01

    Carbon (C) and nitrogen (N) cycles are coupled in terrestrial ecosystems through multiple processes including photosynthesis, tissue allocation, respiration, N fixation, N uptake, and decomposition of litter and soil organic matter. Capturing the constraint of N on terrestrial C uptake and storage has been a focus of the Earth System modelling community. However, there is little understanding of the trade-offs and sensitivities of allocating C and N to different tissues in order to optimize the productivity of plants. Here we describe a new, simple model of ecosystem C-N cycling and interactions (ACONITE) that builds on theory related to plant economics in order to predict key ecosystem properties (leaf area index, leaf C : N, N fixation, and plant C use efficiency) using emergent constraints provided by marginal returns on investment for C and/or N allocation. We simulated and evaluated steady-state ecosystem stocks and fluxes in three different forest ecosystems types (tropical evergreen, temperate deciduous, and temperate evergreen). Leaf C : N differed among the three ecosystem types (temperate deciduous demand for N and the marginal return on C investment to acquire N, was an order of magnitude higher in the tropical forest than in the temperate forest, consistent with observations. A sensitivity analysis revealed that parameterization of the relationship between leaf N and leaf respiration had the largest influence on leaf area index and leaf C : N. Also, a widely used linear leaf N-respiration relationship did not yield a realistic leaf C : N, while a more recently reported non-linear relationship performed better. A parameter governing how photosynthesis scales with day length had the largest influence on total vegetation C, GPP, and NPP. Multiple parameters associated with photosynthesis, respiration, and N uptake influenced the rate of N fixation. Overall, our ability to constrain leaf area index and have spatially and temporally variable leaf C : N helps

  5. Ada Compiler Validation Summary Report: Certificate Number: 890119A1. 10032 Alsys AlsyCOMP 019, Version 4.1 Zenith Z-248 Model 50 and Intel isBC 286/12 Single Board Computer

    Science.gov (United States)

    1989-01-19

    Host: Zenith Z-248 Model 50 under MS/DOS, Version 3.2. Target: Intel iSBC 286/12 single board computer. ACVC Version 1.10. Certificate Number: 890119A1.10032. Completion of on-site testing: 19 January 1989.

  6. Midlatitude atmospheric responses to Arctic sensible heat flux anomalies in Community Climate Model, Version 4: Atmospheric Response to Arctic SHFs

    Energy Technology Data Exchange (ETDEWEB)

    Mills, Catrin M. [Atmospheric Sciences and Global Change Division, Pacific Northwest National Laboratory, Richland, Washington, USA; Cooperative Institute for Research in Environmental Sciences, University of Colorado, Boulder, Colorado, USA]; Cassano, John J. [Cooperative Institute for Research in Environmental Sciences and Department of Atmospheric and Oceanic Sciences, University of Colorado Boulder, Boulder, Colorado, USA]; Cassano, Elizabeth N. [Cooperative Institute for Research in Environmental Sciences, University of Colorado, Boulder, Colorado, USA]

    2016-12-10

    Possible linkages between Arctic sea ice loss and midlatitude weather are strongly debated in the literature. We analyze a coupled model simulation to assess the possibility of Arctic ice variability forcing a midlatitude response, ensuring consistency between atmosphere, ocean, and ice components. We apply the self-organizing map technique to weekly running means of daily sensible heat fluxes to identify Arctic sensible heat flux anomaly patterns and the associated atmospheric response, without the need for metrics to define the Arctic forcing or measure the midlatitude response. We find that low-level warm anomalies during autumn can build planetary wave patterns that propagate downstream into the midlatitudes, creating robust surface cold anomalies in the eastern United States.
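    The self-organizing map (SOM) technique mentioned above clusters flux fields into a small set of representative anomaly patterns. The sketch below is a minimal 1-D SOM on synthetic data, not the authors' configuration; the node count, learning-rate schedule, and toy "anomaly patterns" are all illustrative assumptions.

```python
# Minimal 1-D self-organizing map sketch (synthetic data, illustrative only).
import numpy as np

def train_som(data, n_nodes=4, n_iter=500, lr0=0.5, sigma0=1.0, seed=0):
    """Train a 1-D SOM: each node holds a reference pattern."""
    rng = np.random.default_rng(seed)
    nodes = rng.normal(size=(n_nodes, data.shape[1]))
    for t in range(n_iter):
        x = data[rng.integers(len(data))]
        bmu = int(np.argmin(((nodes - x) ** 2).sum(axis=1)))  # best-matching unit
        lr = lr0 * (1 - t / n_iter)                           # decaying learning rate
        sigma = max(sigma0 * (1 - t / n_iter), 0.1)           # shrinking neighbourhood
        dist = np.abs(np.arange(n_nodes) - bmu)               # distance on the SOM grid
        h = np.exp(-dist ** 2 / (2 * sigma ** 2))             # neighbourhood weights
        nodes += lr * h[:, None] * (x - nodes)                # pull nodes toward sample
    return nodes

# Synthetic "anomaly patterns": two well-separated clusters in a 3-point field.
rng = np.random.default_rng(1)
data = np.vstack([rng.normal(+2, 0.3, (50, 3)), rng.normal(-2, 0.3, (50, 3))])
nodes = train_som(data)
# After training, the map's end nodes approximate the two cluster centres.
```

    Classifying each daily field to its best-matching node then yields the frequency-of-occurrence statistics a SOM analysis is built on.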

  7. The Finals System Reflected in the Transliteration of the Chinese Versions of the Lotus Sutra Dharani (汉译《法华经》陀罗尼译音所反映的韵母系统)

    Institute of Scientific and Technical Information of China (English)

    梁慧婧

    2012-01-01

    From the transliterations of the Dharani in three Chinese versions of the Lotus Sutra made at different times, compared against the Sanskrit, we can reconstruct the approximate finals systems of the three periods and draw the following conclusions about Middle Chinese phonetics: 1) the Chongniu (重纽) third-class and fourth-class finals are transliterated differently, and the third-class finals render the Sanskrit retroflex sounds, so they may have had a special pronunciation; 2) the second-division finals pattern with the Chongniu third-class finals in transliteration and may also have had a special pronunciation; 3) some rhymes of the Guangyun (广韵) show a tendency to merge; 4) although the third-division finals transliterate Sanskrit syllables without medials, this does not mean they had lost the medial -i-.

  8. Evaluation of NorESM-OC (versions 1 and 1.2), the ocean carbon-cycle stand-alone configuration of the Norwegian Earth System Model (NorESM1)

    Science.gov (United States)

    Schwinger, Jörg; Goris, Nadine; Tjiputra, Jerry F.; Kriest, Iris; Bentsen, Mats; Bethke, Ingo; Ilicak, Mehmet; Assmann, Karen M.; Heinze, Christoph

    2016-08-01

    Idealised and hindcast simulations performed with the stand-alone ocean carbon-cycle configuration of the Norwegian Earth System Model (NorESM-OC) are described and evaluated. We present simulation results of three different model configurations (two different model versions at different grid resolutions) using two different atmospheric forcing data sets. Model version NorESM-OC1 corresponds to the version that is included in the NorESM-ME1 fully coupled model, which participated in CMIP5. The main update between NorESM-OC1 and NorESM-OC1.2 is the addition of two new options for the treatment of sinking particles. We find that using a constant sinking speed, which has been the standard in NorESM's ocean carbon cycle module HAMOCC (HAMburg Ocean Carbon Cycle model), does not transport enough particulate organic carbon (POC) into the deep ocean below approximately 2000 m depth. The two newly implemented parameterisations, a particle aggregation scheme with prognostic sinking speed, and a simpler scheme that uses a linear increase in the sinking speed with depth, provide better agreement with observed POC fluxes. Additionally, reduced deep ocean biases of oxygen and remineralised phosphate indicate a better performance of the new parameterisations. For model version 1.2, a re-tuning of the ecosystem parameterisation has been performed, which (i) reduces previously too high primary production at high latitudes, (ii) consequently improves model results for surface nutrients, and (iii) reduces alkalinity and dissolved inorganic carbon biases at low latitudes. We use hindcast simulations with prescribed observed and constant (pre-industrial) atmospheric CO2 concentrations to derive the past and contemporary ocean carbon sink. For the period 1990-1999 we find an average ocean carbon uptake ranging from 2.01 to 2.58 Pg C yr-1 depending on model version, grid resolution, and atmospheric forcing data set.

  9. Constraining the strength of the terrestrial CO2 fertilization effect in the Canadian Earth system model version 4.2 (CanESM4.2)

    Science.gov (United States)

    Arora, Vivek K.; Scinocca, John F.

    2016-07-01

    Earth system models (ESMs) explicitly simulate the interactions between the physical climate system components and biogeochemical cycles. Physical and biogeochemical aspects of ESMs are routinely compared against their observation-based counterparts to assess model performance and to evaluate how this performance is affected by ongoing model development. Here, we assess the performance of version 4.2 of the Canadian Earth system model against four land-carbon-cycle-focused, observation-based determinants of the global carbon cycle and the historical global carbon budget over the 1850-2005 period. Our objective is to constrain the strength of the terrestrial CO2 fertilization effect, which is known to be the most uncertain of all carbon-cycle feedbacks. The observation-based determinants include (1) globally averaged atmospheric CO2 concentration, (2) cumulative atmosphere-land CO2 flux, (3) atmosphere-land CO2 flux for the decades of the 1960s, 1970s, 1980s, 1990s, and 2000s, and (4) the amplitude of the globally averaged annual CO2 cycle and its increase over the 1980 to 2005 period. The optimal simulation that satisfies the constraints imposed by the first three determinants yields a net primary productivity (NPP) increase from ~58 Pg C year-1 in 1850 to ~74 Pg C year-1 in 2005; an increase of ~27 % over the 1850-2005 period. The simulated loss in the global soil carbon amount due to anthropogenic land use change (LUC) over the historical period is also broadly consistent with empirical estimates. Yet, it remains possible that these determinants of the global carbon cycle are insufficient to adequately constrain the historical carbon budget, and consequently the strength of the terrestrial CO2 fertilization effect as it is represented in the model, given the large uncertainty associated with LUC emissions over the historical period.

  10. The Role of Circulation Features on Black Carbon Transport into the Arctic in the Community Atmosphere Model Version 5 (CAM5)

    Energy Technology Data Exchange (ETDEWEB)

    Ma, Po-Lun; Rasch, Philip J.; Wang, Hailong; Zhang, Kai; Easter, Richard C.; Tilmes, S.; Fast, Jerome D.; Liu, Xiaohong; Yoon, Jin-Ho; Lamarque, Jean-Francois

    2013-05-28

    Current climate models generally under-predict the surface concentration of black carbon (BC) in the Arctic due to the uncertainties associated with emissions, transport, and removal. This bias is also present in the Community Atmosphere Model Version 5.1 (CAM5). In this study, we investigate the uncertainty of Arctic BC due to transport processes simulated by CAM5 by configuring the model to run in an “offline mode” in which the large-scale circulations are prescribed. We compare the simulated BC transport when the offline model is driven by the meteorology predicted by the standard free-running CAM5 with simulations where the meteorology is constrained to agree with reanalysis products. Some circulation biases are apparent: the free-running CAM5 produces about 50% less transient eddy transport of BC than the reanalysis-driven simulations, which may be attributed to the coarse model resolution being insufficient to represent eddies. Our analysis shows that the free-running CAM5 reasonably captures the essence of the Arctic Oscillation (AO), but some discernible differences in the spatial pattern of the AO between the free-running CAM5 and the reanalysis-driven simulations result in significantly different AO modulation of BC transport over Northeast Asia and Eastern Europe. Nevertheless, we find that the overall climatological circulation patterns simulated by the free-running CAM5 generally resemble those from the reanalysis products, and BC transport is very similar in both simulation sets. Therefore, the simulated circulation features regulating long-range BC transport are unlikely to be the most important cause of the large under-prediction of surface BC concentration in the Arctic.

  11. Abel model: Evaluates claims of inability to afford penalties and compliance costs, version 2.6 (for microcomputers). Software

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    1994-12-01

    The easy-to-use ABEL software evaluates for-profit companies' claims of inability to afford penalties, clean-up costs, or compliance costs. Violators raise the issue of inability to pay in most of EPA's enforcement actions, regardless of whether there is any hard evidence supporting those claims. The program enables Federal, State, and local enforcement professionals to quickly determine whether there is any validity to those claims. ABEL is a tool that promotes quick settlements by performing screening analyses of defendants and potentially responsible parties (PRPs) to determine their financial capacity. If ABEL indicates the firm can afford the full penalty, compliance, or clean-up cost, then EPA makes no adjustments for inability to pay. If it indicates that the firm cannot afford the full amount, it directs enforcement personnel to review other financial reports before making any adjustments. After analyzing some basic financial ratios that reflect a company's solvency, ABEL assesses the firm's ability to pay by focusing on projected cash flows. The model explicitly calculates the value of projected, internally generated cash flows from historical tax information and compares these cash flows to the proposed environmental expenditure(s). The software is extremely easy to use: users are taken through a series of prompts to enter specified data, and on-screen help information is available at any time.
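    The core comparison described above, projected cash flows discounted to present value versus a proposed expenditure, can be sketched as follows. The growth and discount rates, the projection horizon, and the pass/fail rule here are illustrative assumptions, not EPA's actual ABEL parameters.

```python
# Hedged sketch of an ABEL-style ability-to-pay screen: project cash flows
# forward from a recent figure, discount them back, and compare the present
# value to the proposed environmental expenditure. All parameters illustrative.

def present_value_of_cash_flows(recent_cash_flow, years=5, growth=0.03, discount=0.10):
    """Discount a simple projected cash-flow stream back to today."""
    pv = 0.0
    for year in range(1, years + 1):
        projected = recent_cash_flow * (1 + growth) ** year  # grow the base figure
        pv += projected / (1 + discount) ** year             # discount to present
    return pv

def can_afford(recent_cash_flow, proposed_expenditure, **kwargs):
    """Screening verdict: True if projected cash flows cover the expenditure."""
    return present_value_of_cash_flows(recent_cash_flow, **kwargs) >= proposed_expenditure

print(can_afford(100_000, 300_000))  # ~100k yearly flows vs. a 300k penalty -> True
```

    A "False" verdict would, as the abstract notes, only trigger a closer review of other financial reports rather than an automatic penalty reduction.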

  12. Influence of Dust and Black Carbon on the Snow Albedo in the NASA Goddard Earth Observing System Version 5 Land Surface Model

    Science.gov (United States)

    Yasunari, Teppei J.; Koster, Randal D.; Lau, K. M.; Aoki, Teruo; Sud, Yogesh C.; Yamazaki, Takeshi; Motoyoshi, Hiroki; Kodama, Yuji

    2011-01-01

    Present-day land surface models rarely account for the influence of both black carbon and dust in the snow on the snow albedo. Snow impurities increase the absorption of incoming shortwave radiation (particularly in the visible bands), whereby they have major consequences for the evolution of snowmelt and the life cycle of the snowpack. A new parameterization of these snow impurities was included in the catchment-based land surface model used in the National Aeronautics and Space Administration Goddard Earth Observing System version 5. Validation tests against in situ observed data were performed for the winter of 2003-2004 in Sapporo, Japan, for both the new snow albedo parameterization (which explicitly accounts for snow impurities) and the preexisting baseline albedo parameterization (which does not). Validation tests reveal that daily variations of snow depth and snow surface albedo are more realistically simulated with the new parameterization. Reasonable perturbations in the assigned snow impurity concentrations, as inferred from the observational data, produce significant changes in snowpack depth and radiative flux interactions. These findings illustrate the importance of parameterizing the influence of snow impurities on the snow surface albedo for proper simulation of the life cycle of snow cover.

  14. Variable-density groundwater flow simulations and particle tracking. Numerical modelling using DarcyTools. Preliminary site description of the Simpevarp area, version 1.1

    Energy Technology Data Exchange (ETDEWEB)

    Follin, Sven [SF GeoLogic AB, Stockholm (Sweden); Stigsson, Martin; Berglund, Sten [Swedish Nuclear Fuel and Waste Management Co., Stockholm (Sweden); Svensson, Urban [Computer-aided Fluid Engineering AB, Norrkoeping (Sweden)

    2004-12-01

    SKB is conducting site investigations for a high-level nuclear waste repository in fractured crystalline rocks at two coastal areas in Sweden, Forsmark and Simpevarp. The investigations started in 2002 and have been planned since the late 1990s. The work presented here investigates the possibility of using hydrogeochemical measurements in deep boreholes to reduce parameter uncertainty in a regional modelling of groundwater flow in fractured rock. The work was conducted with the aim of improving the palaeohydrogeological understanding of the Simpevarp area and to give recommendations to the preparations of the next version of the Preliminary Site Description (1.2). The study is based on a large number of numerical simulations of transient variable density groundwater flow through a strongly heterogeneous and anisotropic medium. The simulations were conducted with the computer code DarcyTools, the development of which has been funded by SKB. DarcyTools is a flexible porous media code specifically designed to treat groundwater flow and salt transport in sparsely fractured crystalline rock and it is noted that some of the features presented in this report are still under development or subjected to testing and verification. The simulations reveal the sensitivity of the results to different hydrogeological modelling assumptions, e.g. the sensitivity to the initial groundwater conditions at 10,000 BC, the size of the model domain and boundary conditions, and the hydraulic properties of deterministically and stochastically modelled deformation zones. The outcome of these simulations was compared with measured salinities and calculated relative proportions of different water types (mixing proportions) from measurements in two deep core drilled boreholes in the Laxemar subarea. In addition to the flow simulations, the statistics of flow related transport parameters were calculated for particle flowpaths from repository depth to ground surface for two subareas within the

  15. Version pressure feedback mechanisms for speculative versioning caches

    Science.gov (United States)

    Eichenberger, Alexandre E.; Gara, Alan; O'Brien, Kathryn M.; Ohmacht, Martin; Zhuang, Xiaotong

    2013-03-12

    Mechanisms are provided for controlling version pressure on a speculative versioning cache. Raw version pressure data is collected based on one or more threads accessing cache lines of the speculative versioning cache. One or more statistical measures of version pressure are generated based on the collected raw version pressure data. A determination is made as to whether one or more modifications to an operation of a data processing system are to be performed based on the one or more statistical measures of version pressure, the one or more modifications affecting version pressure exerted on the speculative versioning cache. An operation of the data processing system is modified based on the one or more determined modifications, in response to a determination that one or more modifications to the operation of the data processing system are to be performed, to affect the version pressure exerted on the speculative versioning cache.

  16. Development of a source oriented version of the WRF/Chem model and its application to the California regional PM10 / PM2.5 air quality study

    Directory of Open Access Journals (Sweden)

    H. Zhang

    2014-01-01

    A source-oriented version of the Weather Research and Forecasting model with chemistry (hereinafter SOWC) was developed. SOWC separately tracks primary particles with different hygroscopic properties rather than instantaneously combining them into an internal mixture. This approach avoids artificially mixing light-absorbing black and brown carbon particles with materials such as sulfate that would encourage the formation of additional coatings. Source-oriented particles undergo coagulation and gas-particle conversion, but these processes are considered in a dynamic framework that realistically "ages" primary particles over hours and days in the atmosphere. SOWC more realistically predicts radiative feedbacks from anthropogenic aerosols compared to models that make internal-mixing or other artificial mixing assumptions. A three-week stagnation episode (15 December 2000 to 6 January 2001) in the San Joaquin Valley (SJV) during the California Regional PM10 / PM2.5 Air Quality Study (CRPAQS) was chosen for the initial application of the new modeling system. Primary particles emitted from diesel engines, wood smoke, high-sulfur fuel combustion, food cooking, and other anthropogenic sources were tracked separately throughout the simulation as they aged in the atmosphere. Differences were identified between predictions from the source-oriented vs. the internally mixed representation of particles with meteorological feedbacks in WRF/Chem for a number of meteorological parameters: aerosol extinction coefficients, downward shortwave flux, planetary boundary layer depth, and primary and secondary particulate matter concentrations. Comparisons with observations show that SOWC predicts particle scattering coefficients more accurately than the internally mixed model. Downward shortwave radiation predicted by SOWC is enhanced by ~1% at ground level, chiefly because diesel engine particles in the source-oriented mixture are not artificially coated with material that

  17. Enigma Version 12

    Science.gov (United States)

    Shores, David; Goza, Sharon P.; McKeegan, Cheyenne; Easley, Rick; Way, Janet; Everett, Shonn; Guerra, Mark; Kraesig, Ray; Leu, William

    2013-01-01

    Enigma Version 12 software combines model building, animation, and engineering visualization into one concise software package. Enigma employs a versatile user interface to allow average users access to even the most complex pieces of the application. Using Enigma eliminates the need to buy and learn several software packages to create an engineering visualization. Models can be created and/or modified within Enigma down to the polygon level. Textures and materials can be applied for additional realism. Within Enigma, these models can be combined to create systems of models that have a hierarchical relationship to one another, such as a robotic arm. These systems can then be animated within the program or controlled by an external application programming interface (API). In addition, Enigma provides the ability to use plug-ins. Plug-ins allow the user to create custom code for a specific application and access the Enigma model and system data while still using the Enigma drawing functionality. CAD files can be imported into Enigma and combined to create systems of computer graphics models that can be manipulated with constraints. An API is available so that an engineer can write a simulation and drive the computer graphics models with no knowledge of computer graphics. An animation editor allows an engineer to set up sequences of animations generated by simulations or by conceptual trajectories in order to record these to high-quality media for presentation. Enigma Version 12 was developed at Lyndon B. Johnson Space Center, Houston, Texas (NASA Tech Briefs, September 2013).

  18. Online Cake Cutting (published version)

    CERN Document Server

    Walsh, Toby

    2011-01-01

    We propose an online form of the cake cutting problem. This models situations where agents arrive and depart during the process of dividing a resource. We show that well known fair division procedures like cut-and-choose and the Dubins-Spanier moving knife procedure can be adapted to apply to such online problems. We propose some fairness properties that online cake cutting procedures can possess like online forms of proportionality and envy-freeness. We also consider the impact of collusion between agents. Finally, we study theoretically and empirically the competitive ratio of these online cake cutting procedures. Based on its resistance to collusion, and its good performance in practice, our results favour the online version of the cut-and-choose procedure over the online version of the moving knife procedure.
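    The Dubins-Spanier moving-knife procedure that the paper adapts can be sketched on a discretised cake: a knife sweeps left to right, and the first agent whose swept value reaches a 1/n share (by her own measure) shouts and takes the piece. The valuations and the cell discretisation below are synthetic illustrations, not the paper's online variant.

```python
# Sketch of the (offline) Dubins-Spanier moving-knife procedure on a cake
# discretised into cells. valuations[a, c] is agent a's value density on cell c.
import numpy as np

def moving_knife(valuations):
    """Return a proportional allocation as (agent, start_cell, end_cell) pieces."""
    n, m = valuations.shape
    totals = valuations.sum(axis=1)
    remaining = list(range(n))
    start, pieces = 0, []
    for pos in range(m):
        if len(remaining) == 1:
            break
        for a in list(remaining):
            # value of swept piece [start, pos] as a fraction of a's whole cake
            frac = valuations[a, start:pos + 1].sum() / totals[a]
            if frac >= 1.0 / n:          # agent a shouts "cut!"
                pieces.append((a, start, pos))
                remaining.remove(a)
                start = pos + 1
                break
    pieces.append((remaining[0], start, m - 1))  # last agent takes the rest
    return pieces

# Uniform agent vs. an agent who only values the right half of the cake.
vals = np.array([[1.0] * 10, [0.0] * 5 + [1.0] * 5])
print(moving_knife(vals))  # [(0, 0, 4), (1, 5, 9)]
```

    Each agent receives at least 1/n of the cake by her own valuation, which is the proportionality property the online versions try to preserve as agents arrive and depart.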

  19. The Rock-Water-Ice Topographic Gravity Field Model RWI_TOPO_2015 and Its Comparison to a Conventional Rock-Equivalent Version

    Science.gov (United States)

    Grombein, Thomas; Seitz, Kurt; Heck, Bernhard

    2016-09-01

    RWI_TOPO_2015 is a new high-resolution spherical harmonic representation of the Earth's topographic gravitational potential that is based on a refined Rock-Water-Ice (RWI) approach. This method is characterized by a three-layer decomposition of the Earth's topography with respect to its rock, water, and ice masses. To allow a rigorous separate modeling of these masses with variable density values, gravity forward modeling is performed in the space domain using tesseroid mass bodies arranged on an ellipsoidal reference surface. While the predecessor model RWI_TOPO_2012 was based on the 5'× 5' global topographic database DTM2006.0 (Digital Topographic Model 2006.0), the new RWI model uses updated height information of the 1'× 1' Earth2014 topography suite. Moreover, in the case of RWI_TOPO_2015, the representation in spherical harmonics is extended to degree and order 2190 (formerly 1800). Beside a presentation of the used formalism, the processing for RWI_TOPO_2015 is described in detail, and the characteristics of the resulting spherical harmonic coefficients are analyzed in the space and frequency domain. Furthermore, this paper focuses on a comparison of the RWI approach to the conventionally used rock-equivalent method. For this purpose, a consistent rock-equivalent version REQ_TOPO_2015 is generated, in which the heights of water and ice masses are condensed to the constant rock density. When evaluated on the surface of the GRS80 ellipsoid (Geodetic Reference System 1980), the differences of RWI_TOPO_2015 and REQ_TOPO_2015 reach maximum amplitudes of about 1 m, 50 mGal, and 20 mE in terms of height anomaly, gravity disturbance, and the radial-radial gravity gradient, respectively. Although these differences are attenuated with increasing height above the ellipsoid, significant magnitudes can even be detected in the case of the satellite altitudes of current gravity field missions. In order to assess their performance, RWI_TOPO_2015, REQ_TOPO_2015, and RWI

  20. Validation Evidence for the Elementary School Version of the MUSIC® Model of Academic Motivation Inventory (Pruebas de validación para el Modelo MUSIC® de Inventario de Motivación Educativa para Escuela Primaria)

    Science.gov (United States)

    Jones, Brett D.; Sigmon, Miranda L.

    2016-01-01

    Introduction: The purpose of our study was to assess whether the Elementary School version of the MUSIC® Model of Academic Motivation Inventory was valid for use with elementary students in classrooms with regular classroom teachers and student teachers enrolled in a university teacher preparation program. Method: The participants included 535…

  2. Preliminary Thermal Modeling of Hi-Storm 100S-218 Version B Storage Modules at Hope Creek Nuclear Power Station ISFSI

    Energy Technology Data Exchange (ETDEWEB)

    Cuta, Judith M.; Adkins, Harold E.

    2013-08-30

    This report fulfills the M3 milestone M3FT-13PN0810022, “Report on Inspection 1”, under Work Package FT-13PN081002. Thermal analysis is being undertaken at Pacific Northwest National Laboratory (PNNL) in support of inspections of selected storage modules at various locations around the United States, as part of the Used Fuel Disposition Campaign of the U.S. Department of Energy, Office of Nuclear Energy (DOE-NE) Fuel Cycle Research and Development. This report documents pre-inspection predictions of temperatures for four modules at the Hope Creek Nuclear Generating Station ISFSI that have been identified as candidates for inspection in late summer or early fall/winter of 2013. These are HI-STORM 100S-218 Version B modules storing BWR 8x8 fuel in MPC-68 canisters. The temperature predictions reported in this document were obtained with detailed COBRA-SFS models of these four storage systems, with the following boundary conditions and assumptions.

  3. Transfinite Version of Welter's Game

    OpenAIRE

    Abuku, Tomoaki

    2017-01-01

    We study the transfinite version of Welter's Game, a combinatorial game played on a belt divided into squares indexed by general ordinal numbers extending the natural numbers. In particular, we obtain a straightforward solution for the transfinite version based on the solutions of transfinite Nim and of the original Welter's Game.
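    For background, the finite games this result builds on are solved by nim-values: in ordinary Nim the nim-value of a position is the XOR (nim-sum) of the heap sizes, and a position is a loss for the player to move exactly when the nim-sum is 0. (Welter's Game refines this with the more involved Welter function; only plain Nim is sketched here.)

```python
# Nim-value (nim-sum) of a Nim position: XOR of all heap sizes.
from functools import reduce

def nim_value(heaps):
    """Nim-sum of the heap sizes; 0 means a P-position (previous player wins)."""
    return reduce(lambda a, b: a ^ b, heaps, 0)

print(nim_value([1, 2, 3]))  # 1 ^ 2 ^ 3 = 0: losing for the player to move
print(nim_value([2, 4, 5]))  # 2 ^ 4 ^ 5 = 3: the player to move can win
```

    The transfinite versions replace heap sizes by ordinals, with the nim-sum computed on Cantor normal forms rather than binary digits.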

  4. The iron budget in ocean surface waters in the 20th and 21st centuries: projections by the Community Earth System Model version 1

    Directory of Open Access Journals (Sweden)

    K. Misumi

    2013-05-01

    We investigated the simulated iron budget in ocean surface waters in the 1990s and 2090s using the Community Earth System Model version 1 and the Representative Concentration Pathway 8.5 future CO2 emission scenario. We assumed that exogenous iron inputs did not change during the whole simulation period; thus, iron budget changes were attributed solely to changes in ocean circulation and mixing in response to projected global warming. The model simulated the major features of ocean circulation and dissolved iron distribution for the present climate reasonably well. Detailed iron budget analysis revealed that roughly 70% of the iron supplied to surface waters in high-nutrient, low-chlorophyll (HNLC) regions is contributed by ocean circulation and mixing processes, but the dominant supply mechanism differed in each HNLC region: vertical mixing in the Southern Ocean, upwelling in the eastern equatorial Pacific, and deposition of iron-bearing dust in the subarctic North Pacific. In the 2090s, our model projected an increased iron supply to HNLC surface waters, even though enhanced stratification was predicted to reduce iron entrainment from deeper waters. This unexpected result could be attributed largely to changes in the meridional overturning and gyre-scale circulations that intensified the advective supply of iron to surface waters, especially in the eastern equatorial Pacific. The simulated primary and export productions in the 2090s decreased globally by 6% and 13%, respectively, whereas in the HNLC regions, they increased by 11% and 6%, respectively. Roughly half of the elevated production could be attributed to the intensified iron supply. The projected ocean circulation and mixing changes are consistent with recent observations of responses to the warming climate and with other Coupled Model Intercomparison Project model projections. We conclude that future ocean circulation and mixing changes will likely elevate the iron supply to HNLC

  5. Modelling and Refining Structural Detail in a MAESTRO Model for a Bottom-Up or Top-Down Analysis Using the PC Version of MAESTRO/DSA

    Science.gov (United States)

    1997-10-01

    The report includes a wire drawing of the MAESTRO ship model (Figure 1) and a panel fill plot of the model (Figure 2).

  6. Version Control in Project-Based Learning

    Science.gov (United States)

    Milentijevic, Ivan; Ciric, Vladimir; Vojinovic, Oliver

    2008-01-01

    This paper deals with the development of a generalized model for version control systems application as a support in a range of project-based learning methods. The model is given as UML sequence diagram and described in detail. The proposed model encompasses a wide range of different project-based learning approaches by assigning a supervisory…

  7. Constraining the influence of natural variability to improve estimates of global aerosol indirect effects in a nudged version of the Community Atmosphere Model 5

    Science.gov (United States)

    Kooperman, G. J.; Pritchard, M. S.; Ghan, S. J.; Wang, M.; Somerville, R. C.; Russell, L. M.

    2012-12-01

    The newest version of NCAR's Community Atmosphere Model (CAM5) produces a strong global mean aerosol indirect effect of -1.54 W/m2. However, when CAM5 is modified to include resolved scale convective processes in a new multi-scale modeling framework (MMF) the indirect aerosol forcing is reduced by almost half (-0.80 W/m2). In the MMF approach, conventional cloud parameterizations are replaced by embedded cloud-resolving models (CRM) in each grid column of CAM5, and aerosol on the global grid is linked to explicitly resolved CRM scale relative humidity and updraft velocities to determine the number of aerosol particles that activate to form cloud droplets at CRM resolution. However, the increased computational expense incurred by resolving convective processes makes long integrations with the MMF prohibitively expensive. This is a challenge for investigating aerosol indirect effects because it typically requires integrating over long simulations to isolate statistically significant differences in cloud radiative forcing due to anthropogenic aerosol perturbations from natural variability. Here an alternative approach is explored, which implements Newtonian relaxation (nudging) to constrain simulations with both pre-industrial and present-day aerosol emissions toward identical meteorological conditions, thus reducing the influences of natural variability so that the two models can be compared in short simulations. Using this approach in CAM5, we find high pattern correlations between one-year averages of aerosol indirect effect and the pattern of the signal produced in a 100-year average. Estimates of aerosol indirect effects in CAM5 with and without nudging have mean values and 95% confidence intervals of -1.54 ± 0.02 W/m2 and -1.63 ± 0.17 W/m2, respectively. The approach is applied in the MMF to investigate the mechanisms responsible for producing a weaker forcing than CAM5. These include weaker responses in liquid water content and droplet number concentrations
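    The Newtonian relaxation (nudging) used above adds a tendency that pulls the model state toward a reference state on a chosen timescale, so paired simulations share one meteorology. The sketch below shows the relaxation term on a single scalar; the timescale, step size, and state are illustrative numbers, not CAM5's actual nudging configuration.

```python
# Minimal sketch of Newtonian relaxation ("nudging"): dx/dt = -(x - x_ref)/tau,
# relaxing the model state x toward a reference (e.g. reanalysis) state x_ref.

def nudge_step(x, x_ref, dt, tau):
    """One explicit time step of the relaxation equation."""
    return x + dt * (x_ref - x) / tau

x = 10.0      # model state, e.g. a wind component (illustrative)
x_ref = 0.0   # reference state from the constraining dataset
for _ in range(100):
    x = nudge_step(x, x_ref, dt=1.0, tau=10.0)
# x decays toward x_ref roughly as exp(-t/tau): here (1 - 0.1)**100 ~ 2.7e-5 remains
print(x)
```

    In practice the same relaxation tendency is applied to winds and temperature at every grid point, with tau chosen strong enough to suppress meteorological divergence between the pre-industrial and present-day aerosol runs.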

  8. Implementation of the chemistry module MECCA (v2.5) in the modal aerosol version of the Community Atmosphere Model component (v3.6.33) of the Community Earth System Model

    Directory of Open Access Journals (Sweden)

    M. S. Long

    2012-06-01

    Full Text Available A coupled atmospheric chemistry and climate system model was developed using the modal aerosol version of the National Center for Atmospheric Research Community Atmosphere Model (modal-CAM) and the Max Planck Institute for Chemistry's Module Efficiently Calculating the Chemistry of the Atmosphere (MECCA) to provide enhanced resolution of multiphase processes, particularly those involving inorganic halogens, and associated impacts on atmospheric composition and climate. Three Rosenbrock solvers (Ros-2, Ros-3, RODAS-3) were tested in conjunction with the basic load-balancing options available to modal-CAM (1) to establish an optimal configuration of the implicitly-solved multiphase chemistry module that maximizes both computational speed and repeatability of Ros-2 and RODAS-3 results versus Ros-3, and (2) to identify potential implementation strategies for future versions of this and similar coupled systems. RODAS-3 was faster than Ros-2 and Ros-3 with good reproduction of Ros-3 results, while Ros-2 was both slower and substantially less reproducible relative to Ros-3 results. Modal-CAM with MECCA chemistry was a factor of 15 slower than modal-CAM using standard chemistry. MECCA chemistry integration times demonstrated a systematic frequency distribution for all three solvers, and revealed that the change in run-time performance was due to a change in the frequency distribution of chemical integration times; the peak frequency was similar for all solvers. This suggests that efficient chemistry-focused load-balancing schemes can be developed that rely on the parameters of this frequency distribution.
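A chemistry-focused load-balancing scheme of the kind the abstract suggests could, for instance, use each grid column's expected chemistry integration time (drawn from the observed run-time frequency distribution) in a greedy longest-processing-time-first assignment. This is a hypothetical sketch, not MECCA's or modal-CAM's actual scheme; the costs below are invented.

```python
import heapq

def balance_columns(cost_per_column, n_ranks):
    """Greedily assign grid columns to ranks, largest expected
    chemistry cost first, always onto the least-loaded rank."""
    heap = [(0.0, r) for r in range(n_ranks)]  # (total cost, rank)
    heapq.heapify(heap)
    assignment = {r: [] for r in range(n_ranks)}
    for col, cost in sorted(enumerate(cost_per_column), key=lambda kv: -kv[1]):
        load, r = heapq.heappop(heap)
        assignment[r].append(col)
        heapq.heappush(heap, (load + cost, r))
    return assignment

# Example: 6 columns with uneven chemistry costs split over 2 ranks.
costs = [5.0, 1.0, 4.0, 2.0, 3.0, 3.0]
parts = balance_columns(costs, 2)  # each rank ends up with a load of 9.0
```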

  9. Application of TOPSIS and VIKOR improved versions in a multi criteria decision analysis to develop an optimized municipal solid waste management model.

    Science.gov (United States)

    Aghajani Mir, M; Taherei Ghazvinei, P; Sulaiman, N M N; Basri, N E A; Saheri, S; Mahmood, N Z; Jahan, A; Begum, R A; Aghamohammadi, N

    2016-01-15

    Selecting a suitable Multi Criteria Decision Making (MCDM) method is a crucial stage in establishing a Solid Waste Management (SWM) system. The main objective of the current study is to demonstrate and evaluate a proposed method using MCDM methods. An improved version of the Technique for Order of Preference by Similarity to Ideal Solution (TOPSIS) was applied to obtain the best municipal solid waste management method by comparing and ranking the scenarios; applying this method to rank treatment methods is introduced as one contribution of the study. In addition, the Viekriterijumsko Kompromisno Rangiranje (VIKOR) compromise solution method was applied for sensitivity analyses. The proposed method can assist urban decision makers in prioritizing and selecting an optimized Municipal Solid Waste (MSW) treatment system, and a logical and systematic scientific method was proposed to guide appropriate decision-making. A modified TOPSIS methodology, superior to existing methods, was applied to MSW problems for the first time. Next, 11 scenarios of MSW treatment methods were defined and compared environmentally and economically based on the waste management conditions. Results show that integrating a sanitary landfill (18.1%), RDF (3.1%), composting (2%), anaerobic digestion (40.4%), and recycling (36.4%) was an optimized model of integrated waste management. The applied decision-making structure provides the opportunity for optimum decision-making. Therefore, the mix of recycling and anaerobic digestion and a sanitary landfill with Electricity Production (EP) are the preferred options for MSW management.
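For reference, classical TOPSIS (the baseline the paper improves on) ranks alternatives by relative closeness to an ideal solution. The scenarios, weights, and criteria below are invented illustrations, not the study's data or its improved variant.

```python
import numpy as np

def topsis(matrix, weights, benefit):
    """Classical TOPSIS. matrix: alternatives x criteria;
    benefit[j] is True if larger values of criterion j are better."""
    m = np.asarray(matrix, dtype=float)
    # Vector-normalize each criterion column, then apply weights.
    v = m / np.linalg.norm(m, axis=0) * np.asarray(weights, dtype=float)
    ideal = np.where(benefit, v.max(axis=0), v.min(axis=0))
    anti = np.where(benefit, v.min(axis=0), v.max(axis=0))
    d_pos = np.linalg.norm(v - ideal, axis=1)
    d_neg = np.linalg.norm(v - anti, axis=1)
    return d_neg / (d_pos + d_neg)  # relative closeness: higher = better

# Three hypothetical waste-treatment scenarios scored on cost (minimize)
# and energy recovery (maximize).
scores = topsis([[200, 30], [150, 25], [300, 40]],
                weights=[0.5, 0.5], benefit=[False, True])
ranking = np.argsort(-scores)  # indices of scenarios, best first
```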

  10. TK Modeler version 1.0, a Microsoft® Excel®-based modeling software for the prediction of diurnal blood/plasma concentration for toxicokinetic use.

    Science.gov (United States)

    McCoy, Alene T; Bartels, Michael J; Rick, David L; Saghir, Shakil A

    2012-07-01

    TK Modeler 1.0 is a Microsoft® Excel®-based pharmacokinetic (PK) modeling program created to aid in the design of toxicokinetic (TK) studies. TK Modeler 1.0 predicts the diurnal blood/plasma concentrations of a test material after single, multiple bolus or dietary dosing using known PK information. Fluctuations in blood/plasma concentrations based on test material kinetics are calculated using one- or two-compartment PK model equations and the principle of superposition. This information can be utilized for the determination of appropriate dosing regimens based on reaching a specific desired C(max), maintaining steady-state blood/plasma concentrations, or other exposure target. This program can also aid in the selection of sampling times for accurate calculation of AUC(24h) (diurnal area under the blood concentration time curve) using sparse-sampling methodologies (one, two or three samples). This paper describes the construction, use and validation of TK Modeler. TK Modeler accurately predicted blood/plasma concentrations of test materials and provided optimal sampling times for the calculation of AUC(24h) with improved accuracy using sparse-sampling methods. TK Modeler is therefore a validated, unique and simple modeling program that can aid in the design of toxicokinetic studies.
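The principle of superposition mentioned above sums one dose's concentration-time curve over all prior doses. A minimal one-compartment oral-absorption sketch is shown below; the rate constants, dose schedule, and volume are made-up values, not TK Modeler's actual equations or defaults.

```python
import math

def conc_one_compartment(t, doses, ka, ke, v_f):
    """Blood concentration at time t by superposition of one-compartment
    first-order absorption/elimination curves, one per prior dose.
    doses: list of (dose_time, amount); v_f: apparent volume V/F.
    Assumes ka != ke."""
    c = 0.0
    for t0, d in doses:
        dt = t - t0
        if dt <= 0:
            continue  # this dose has not been given yet
        c += (d * ka) / (v_f * (ka - ke)) * (math.exp(-ke * dt) - math.exp(-ka * dt))
    return c

# Diurnal profile for three 10 mg/kg bolus doses at 0, 8 and 16 h.
doses = [(0.0, 10.0), (8.0, 10.0), (16.0, 10.0)]
profile = [conc_one_compartment(t, doses, ka=1.0, ke=0.1, v_f=1.0)
           for t in range(25)]
```

Sampling times for a sparse AUC(24h) estimate would then be chosen around the peaks and troughs of such a predicted profile.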

  11. SHEDS-Multimedia Model Version 3 (a) Technical Manual; (b) User Guide; and (c) Executable File to Launch SAS Program and Install Model

    Science.gov (United States)

    Reliable models for assessing human exposures are important for understanding health risks from chemicals. The Stochastic Human Exposure and Dose Simulation model for multimedia, multi-route/pathway chemicals (SHEDS-Multimedia), developed by EPA’s Office of Research and Developm...

  12. Modeling regional air quality and climate: improving organic aerosol and aerosol activation processes in WRF/Chem version 3.7.1

    Science.gov (United States)

    Yahya, Khairunnisa; Glotfelty, Timothy; Wang, Kai; Zhang, Yang; Nenes, Athanasios

    2017-06-01

    Air quality and climate influence each other through the uncertain processes of aerosol formation and cloud droplet activation. In this study, both processes are improved in the Weather Research and Forecasting model with Chemistry (WRF/Chem) version 3.7.1. The existing Volatility Basis Set (VBS) treatments for organic aerosol (OA) formation in WRF/Chem are improved by considering the following: the secondary OA (SOA) formation from semi-volatile primary organic aerosol (POA), a semi-empirical formulation for the enthalpy of vaporization of SOA, and functionalization and fragmentation reactions for multiple generations of products from the oxidation of VOCs. Over the continental US, 2-month-long simulations (May to June 2010) are conducted and results are evaluated against surface and aircraft observations during the Nexus of Air Quality and Climate Change (CalNex) campaign. Among all the configurations considered, the best performance is found for the simulation with the 2005 Carbon Bond mechanism (CB05) and the VBS SOA module with semivolatile POA treatment, 25 % fragmentation, and the emissions of semi-volatile and intermediate volatile organic compounds being 3 times the original POA emissions. Among the three gas-phase mechanisms (CB05, CB6, and SAPRC07) used, CB05 gives the best performance for surface ozone and PM2.5 concentrations. Differences in SOA predictions are larger for the simulations with different VBS treatments (e.g., nonvolatile POA versus semivolatile POA) compared to the simulations with different gas-phase mechanisms. Compared to the simulation with CB05 and the default SOA module, the simulations with the VBS treatment improve cloud droplet number concentration (CDNC) predictions (normalized mean biases from -40.8 % to a range of -34.6 to -27.7 %), with large differences between CB05-CB6 and SAPRC07 due to large differences in their OH and HO2 predictions. An advanced aerosol activation parameterization based on the Fountoukis and Nenes
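The normalized mean bias quoted above is a standard model-evaluation statistic, conventionally NMB = 100 x sum(model - obs) / sum(obs). A minimal sketch with toy numbers:

```python
def normalized_mean_bias(model, obs):
    """NMB in percent: total model-minus-observation bias,
    normalized by the total of the observations."""
    num = sum(m - o for m, o in zip(model, obs))
    den = sum(obs)
    return 100.0 * num / den

nmb = normalized_mean_bias([80.0, 90.0], [100.0, 100.0])  # -> -15.0
```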

  13. TriBITS lifecycle model. Version 1.0, a lean/agile software lifecycle model for research-based computational science and engineering and applied mathematical software.

    Energy Technology Data Exchange (ETDEWEB)

    Willenbring, James M.; Bartlett, Roscoe Ainsworth (Oak Ridge National Laboratory, Oak Ridge, TN); Heroux, Michael Allen

    2012-01-01

    Software lifecycles are becoming an increasingly important issue for computational science and engineering (CSE) software. The process by which a piece of CSE software begins life as a set of research requirements and then matures into a trusted high-quality capability is both commonplace and extremely challenging. Although an implicit lifecycle is obviously being used in any effort, the challenges of this process - respecting the competing needs of research vs. production - cannot be overstated. Here we describe a proposal for a well-defined software lifecycle process based on modern Lean/Agile software engineering principles. What we propose is appropriate for many CSE software projects that are initially heavily focused on research but also are expected to eventually produce usable high-quality capabilities. The model is related to TriBITS, a build, integration and testing system, which serves as a strong foundation for this lifecycle model, and aspects of this lifecycle model are ingrained in the TriBITS system. Here, we advocate three to four phases or maturity levels that address the appropriate handling of many issues associated with the transition from research to production software. The goals of this lifecycle model are to better communicate maturity levels with customers and to help to identify and promote Software Engineering (SE) practices that will help to improve productivity and produce better software. An important collection of software in this domain is Trilinos, which is used as the motivation and the initial target for this lifecycle model. However, many other related and similar CSE (and non-CSE) software projects can also make good use of this lifecycle model, especially those that use the TriBITS system. Indeed this lifecycle process, if followed, will enable large-scale sustainable integration of many complex CSE software efforts across several institutions.

  14. Development of a Dynamic Biomechanical Model for Load Carriage: Phase III Part C2: Development of a Dynamic Biomechanical Model Version 1 of Human Load Carriage

    Science.gov (United States)

    2005-08-01

    ...torso were defined as frictionless objects with a coefficient of restitution of 0.5. As in the 2D model, the straps...higher damping coefficients to bring about a pack displacement pattern similar to that of the 3D model. In addition, the 3D model behaviour was more...a dominant natural frequency of approximately 5 Hz. Overall, higher damping coefficients had to be used with the 2D model

  15. PENGENALAN MEREK, PERSEPSI KUALITAS, HARAPAN KONSUMEN DAN INOVASI PRODUK TERHADAP KEPUTUSAN MEMBELI DAN DAMPAKNYA PADA LOYALITAS KONSUMEN (Studi Kasus: Produk Batik Sutra Halus Merek Tamina)

    Directory of Open Access Journals (Sweden)

    Tamamudin *

    2013-05-01

    Full Text Available This study aimed to examine the effect of brand awareness, perceived quality, customer expectations and product innovation on buying decisions and on increasing customer loyalty. Structural Equation Modeling (SEM) based on AMOS was employed to test the hypotheses. The results showed that brand awareness has a positive and significant impact on purchasing decisions, perceived quality has a positive and significant impact on purchasing decisions, consumer expectations have a positive and significant impact on purchasing decisions, product innovation has a positive and significant impact on purchasing decisions, and buying decisions have a positive and significant impact on customer loyalty.

  16. Validation of the Danish version of the McGill Ingestive Skills Assessment using classical test theory and the Rasch model

    DEFF Research Database (Denmark)

    Hansen, Tina; Lambert, Heather C; Faber, Jens

    2012-01-01

    Purpose: The study aimed to validate the Danish version of the Canadian "McGill Ingestive Skills Assessment" (MISA-DK) for measuring dysphagia in frail elders. Method: One-hundred and ten consecutive older medical patients were recruited to the study. Reliability was assessed by internal cons...

  17. The Revised Child Anxiety and Depression Scale-Short Version: Scale Reduction via Exploratory Bifactor Modeling of the Broad Anxiety Factor

    Science.gov (United States)

    Ebesutani, Chad; Reise, Steven P.; Chorpita, Bruce F.; Ale, Chelsea; Regan, Jennifer; Young, John; Higa-McMillan, Charmaine; Weisz, John R.

    2012-01-01

    Using a school-based (N = 1,060) and clinic-referred (N = 303) youth sample, the authors developed a 25-item shortened version of the Revised Child Anxiety and Depression Scale (RCADS) using Schmid-Leiman exploratory bifactor analysis to reduce client burden and administration time and thus improve the transportability characteristics of this…

  18. Validation of Malaysian Versions of Perceived Diabetes Self-Management Scale (PDSMS), Medication Understanding and Use Self-Efficacy Scale (MUSE) and 8-Morisky Medication Adherence Scale (MMAS-8) Using Partial Credit Rasch Model.

    Science.gov (United States)

    Al Abboud, Safaa Ahmed; Ahmad, Sohail; Bidin, Mohamed Badrulnizam Long; Ismail, Nahlah Elkudssiah

    2016-11-01

    Diabetes Mellitus (DM) is a common silent epidemic disease with frequent morbidity and mortality. Psychological and psychosocial health factors negatively influence glycaemic control in diabetic patients. Therefore, various questionnaires have been developed to address the psychological and psychosocial well-being of diabetic patients. Most of these questionnaires were first developed in English and then translated into different languages to make them useful for local communities. The main aim of this study was to translate and validate the Malaysian versions of the Perceived Diabetes Self-Management Scale (PDSMS) and the Medication Understanding and Use Self-Efficacy Scale (MUSE), and to revalidate the 8-item Morisky Medication Adherence Scale (MMAS-8), using the Partial Credit Rasch Model (Modern Test Theory). Permission was obtained from the respective authors to translate the English versions of the PDSMS, MUSE and MMAS-8 into the Malay language according to established international translation guidelines. In this cross-sectional study, 62 adult DM patients were recruited from Hospital Kuala Lumpur by purposive sampling. The data were extracted from the self-administered questionnaires and entered manually into the Ministeps (Winsteps) software for the Partial Credit Rasch Model. The item and person reliability, infit/outfit Z-Standard (ZSTD), infit/outfit Mean Square (MNSQ) and point measure correlation (PTMEA Corr) values were analysed for the reliability analyses and construct validation. The translated Malay versions of the PDSMS, MUSE and MMAS-8 were found to be valid and reliable instruments for Malaysian diabetic adults. The instrument showed good overall reliability values of 0.76 and 0.93 for item and person reliability, respectively. The values of infit/outfit ZSTD, infit/outfit MNSQ, and PTMEA Corr were also within the stipulated range of the Rasch Model, proving the valid item constructs of the questionnaire.
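Under the Partial Credit Rasch Model used in the study above, the probability of each response category follows from cumulative step difficulties. The sketch below illustrates the category-probability formula; the ability and step values are hypothetical, not the study's estimates.

```python
import math

def pcm_probs(theta, deltas):
    """Partial Credit Rasch Model: probability of each score category
    for a person of ability theta on an item with step difficulties
    deltas. Category k has log-numerator sum_{j<=k}(theta - delta_j),
    with the empty sum (= 0) for category 0."""
    logits = [0.0]
    for d in deltas:
        logits.append(logits[-1] + (theta - d))
    z = [math.exp(l) for l in logits]
    total = sum(z)
    return [p / total for p in z]

# Hypothetical 3-category item (two steps) for a person with theta = 0.5.
probs = pcm_probs(0.5, [-0.5, 1.0])
```

Fit statistics such as infit/outfit MNSQ then compare observed responses against the expectations implied by these probabilities.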

  19. A comparison of climate simulations for the last glacial maximum with three different versions of the ECHAM model and implications for summer-green tree refugia

    Directory of Open Access Journals (Sweden)

    K. Arpe

    2011-02-01

    Full Text Available Model simulations of the last glacial maximum (21 ± 2 ka) with the ECHAM3 T42 atmosphere-only, ECHAM5-MPIOM T31 atmosphere-ocean coupled and ECHAM5 T106 atmosphere-only models are compared. The topography, land-sea mask and glacier distribution for the ECHAM5 simulations were taken from the Paleoclimate Modelling Intercomparison Project Phase II (PMIP2) data set, while for ECHAM3 they were taken from PMIP1. The ECHAM5-MPIOM T31 model produced its own sea surface temperatures (SSTs), while the ECHAM5 T106 simulations were forced at the boundaries by these coupled-model SSTs corrected for their present-day biases, and the ECHAM3 T42 model was forced with prescribed SSTs provided by the Climate/Long-Range Investigation, Mapping, and Prediction project (CLIMAP).

    The SSTs in the ECHAM5-MPIOM simulation for the last glacial maximum (LGM) were much warmer in the northern Atlantic than those suggested by CLIMAP or the Overview of Glacial Atlantic Ocean Mapping (GLAMAP), while the SSTs were cooler everywhere else. This had a clear effect on the temperatures over Europe: warmer for winters in western Europe and cooler for eastern Europe than the simulation with CLIMAP SSTs.

    Considerable differences in the general circulation patterns were found in the different simulations. A ridge over western Europe for the present climate during winter in the 500 hPa height field remains in both ECHAM5 simulations for the LGM, more so in the T106 version, while the ECHAM3 CLIMAP-SST simulation provided a trough which is consistent with cooler temperatures over western Europe. The zonal wind between 30° W and 10° E shows a southward shift of the polar and subtropical jets in the simulations for the LGM, least obvious in the ECHAM5 T31 one, and an extremely strong polar jet for the ECHAM3 CLIMAP-SST run. The latter can probably be assigned to the much stronger north-south gradient in the CLIMAP SSTs. The southward shift of the polar jet during the LGM is supported by

  20. PCATool-ADULT-BRAZIL: a reduced version

    Directory of Open Access Journals (Sweden)

    Mônica Maria Celestina de Oliveira

    2013-09-01

    Full Text Available The reorganization of the Brazilian health system brings the need for on-going evaluation of the services offered to the population. The Primary Care Assessment Tool (PCATool-Brazil) version for adult users, validated for the Brazilian context, adequately measures the presence and extent of attributes of primary health care (PHC) services. A reduced version of this instrument is required to optimize the process of implementation and use of the results in strategic actions. This article aims to present a reduced version of the PCATool-Brazil for adult users and analyze its suitability. The instrument was applied to 2404 adult residents of areas covered by primary health care (PHC) units in Porto Alegre, Rio Grande do Sul state. Using the two-parameter logistic model of Item Response Theory (ML-2), 23 items that presented discrimination classified as moderate to strong, contemplating the seven attributes of PHC, were selected. As a measure of consistency, the results obtained with this version were compared with the complete version, revealing consistent PHC scores. These findings indicate that the PCATool-Brazil reduced version for adult users presents adequate validity and reliability, and it can be adopted as a rapid assessment tool to evaluate PHC in Brazilian services, permitting decision making guided by evidence in the development of actions to improve the quality of care offered to the population.
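The two-parameter logistic (ML-2) model used for item selection above gives the probability of endorsing an item as a function of the latent trait, an item discrimination, and an item difficulty. The parameter values below are hypothetical illustrations of why high-discrimination items are retained.

```python
import math

def two_pl(theta, a, b):
    """Two-parameter logistic IRT model: probability that a person with
    latent trait theta endorses an item with discrimination a and
    difficulty b."""
    return 1.0 / (1.0 + math.exp(-a * (theta - b)))

# A strongly discriminating item (a = 1.5) separates persons around its
# difficulty more sharply than a weakly discriminating one (a = 0.5).
p_strong = two_pl(1.0, a=1.5, b=0.0)
p_weak = two_pl(1.0, a=0.5, b=0.0)
```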

  1. Embrittlement data base, version 1

    Energy Technology Data Exchange (ETDEWEB)

    Wang, J.A.

    1997-08-01

    The aging and degradation of light-water-reactor (LWR) pressure vessels is of particular concern because of their relevance to plant integrity and the magnitude of the expected irradiation embrittlement. The radiation embrittlement of reactor pressure vessel (RPV) materials depends on many different factors such as flux, fluence, fluence spectrum, irradiation temperature, and preirradiation material history and chemical compositions. These factors must be considered to reliably predict pressure vessel embrittlement and to ensure the safe operation of the reactor. Based on embrittlement predictions, decisions must be made concerning operating parameters and issues such as low-leakage-fuel management, possible life extension, and the need for annealing the pressure vessel. Large amounts of data from surveillance capsules and test reactor experiments, comprising many different materials and different irradiation conditions, are needed to develop generally applicable damage prediction models that can be used for industry standards and regulatory guides. Version 1 of the Embrittlement Data Base (EDB) is such a comprehensive collection of data resulting from merging version 2 of the Power Reactor Embrittlement Data Base (PR-EDB). Fracture toughness data were also integrated into Version 1 of the EDB. For power reactor data, the current EDB lists the 1,029 Charpy transition-temperature shift data points, which include 321 from plates, 125 from forgings, 115 from correlation monitor materials, 246 from welds, and 222 from heat-affected-zone (HAZ) materials that were irradiated in 271 capsules from 101 commercial power reactors. For test reactor data, information is available for 1,308 different irradiated sets (352 from plates, 186 from forgings, 303 from correlation monitor materials, 396 from welds and 71 from HAZs) and 268 different irradiated plus annealed data sets.

  2. Implementation of a Marauding Insect Module (MIM, version 1.0) in the Integrated BIosphere Simulator (IBIS, version 2.6b4) dynamic vegetation-land surface model

    Science.gov (United States)

    Landry, Jean-Sébastien; Price, David T.; Ramankutty, Navin; Parrott, Lael; Damon Matthews, H.

    2016-04-01

    Insects defoliate and kill plants in many ecosystems worldwide. The consequences of these natural processes on terrestrial ecology and nutrient cycling are well established, and their potential climatic effects resulting from modified land-atmosphere exchanges of carbon, energy, and water are increasingly being recognized. We developed a Marauding Insect Module (MIM) to quantify, in the Integrated BIosphere Simulator (IBIS), the consequences of insect activity on biogeochemical and biogeophysical fluxes, also accounting for the effects of altered vegetation dynamics. MIM can simulate damage from three different insect functional types: (1) defoliators on broadleaf deciduous trees, (2) defoliators on needleleaf evergreen trees, and (3) bark beetles on needleleaf evergreen trees, with the resulting impacts being estimated by IBIS based on the new, insect-modified state of the vegetation. MIM further accounts for the physical presence and gradual fall of insect-killed dead standing trees. The design of MIM should facilitate the addition of other insect types besides the ones already included and could guide the development of similar modules for other process-based vegetation models. After describing IBIS-MIM, we illustrate the usefulness of the model by presenting results spanning daily to centennial timescales for vegetation dynamics and cycling of carbon, energy, and water in a simplified setting and for bark beetles only. More precisely, we simulated 100 % mortality events from the mountain pine beetle for three locations in western Canada. We then show that these simulated impacts agree with many previous studies based on field measurements, satellite data, or modelling. MIM and similar tools should therefore be of great value in assessing the wide array of impacts resulting from insect-induced plant damage in the Earth system.

  3. Implementation of the chemistry module MECCA (v2.5) in the modal aerosol version of the Community Atmosphere Model component (v3.6.33) of the Community Earth System Model

    Directory of Open Access Journals (Sweden)

    M. S. Long

    2013-02-01

    Full Text Available A coupled atmospheric chemistry and climate system model was developed using the modal aerosol version of the National Center for Atmospheric Research Community Atmosphere Model (modal-CAM; v3.6.33) and the Max Planck Institute for Chemistry's Module Efficiently Calculating the Chemistry of the Atmosphere (MECCA; v2.5) to provide enhanced resolution of multiphase processes, particularly those involving inorganic halogens, and associated impacts on atmospheric composition and climate. Three Rosenbrock solvers (Ros-2, Ros-3, RODAS-3) were tested in conjunction with the basic load-balancing options available to modal-CAM (1) to establish an optimal configuration of the implicitly-solved multiphase chemistry module that maximizes both computational speed and repeatability of Ros-2 and RODAS-3 results versus Ros-3, and (2) to identify potential implementation strategies for future versions of this and similar coupled systems. RODAS-3 was faster than Ros-2 and Ros-3 with good reproduction of Ros-3 results, while Ros-2 was both slower and substantially less reproducible relative to Ros-3 results. Modal-CAM with MECCA chemistry was a factor of 15 slower than modal-CAM using standard chemistry. MECCA chemistry integration times demonstrated a systematic frequency distribution for all three solvers, and revealed that the change in run-time performance was due to a change in the frequency distribution of chemical integration times; the peak frequency was similar for all solvers. This suggests that efficient chemistry-focused load-balancing schemes can be developed that rely on the parameters of this frequency distribution.

  4. Nuflood, Version 1.x

    Energy Technology Data Exchange (ETDEWEB)

    2016-07-29

    NUFLOOD Version 1.x is a surface-water hydrodynamic package designed for the simulation of overland flow of fluids. It consists of various routines to address a wide range of applications (e.g., rainfall-runoff, tsunami, storm surge) and real-time, interactive visualization tools. NUFLOOD has been designed for general-purpose computers and workstations containing multi-core processors and/or graphics processing units. The software is easy to use and extensible, constructed with instructors, students, and practicing engineers in mind. NUFLOOD is intended to assist the water resource community in planning against water-related natural disasters.

  5. TOGAF version 9

    CERN Document Server

    Group, The Open

    2010-01-01

    This is the official Open Group Pocket Guide for TOGAF Version 9 Enterprise Edition. This pocket guide is published by Van Haren Publishing on behalf of The Open Group. TOGAF, The Open Group Architecture Framework, is a fast-growing, worldwide-accepted standard that can help organisations build their own Enterprise Architecture in a standardised way. This book explains the ins and outs of TOGAF in a concise manner and shows how TOGAF can help in creating an Enterprise Architecture, an approach that can help management to understand this growing complexity.

  6. Expanded Simulation Models "Version 3.0" for Growth of the Submerged Aquatic Plants American Wildcelery, Sago Pondweed, Hydrilla, and Eurasian Watermilfoil

    Science.gov (United States)

    2007-11-01

    descriptions of the vegetation responses to daily changes in current velocity and epiphyte shading, and accommodation of daily changes in water level...in current velocity, and epiphyte shading, or to combinations of factors. Once the vegetation is lost from a given locale, increased sediment...responses to changes in current velocity and light attenuation by epiphytes and allow only annual changes in water level. These versions are available

  7. Spanish version of Colquitt's Organizational Justice Scale.

    Science.gov (United States)

    Díaz-Gracia, Liliana; Barbaranelli, Claudio; Moreno-Jiménez, Bernardo

    2014-01-01

    Organizational justice (OJ) is an important predictor of different work attitudes and behaviors. Colquitt's Organizational Justice Scale (COJS) was designed to assess employees' perceptions of fairness. This scale has four dimensions: distributive, procedural, informational, and interpersonal justice. The objective of this study is to validate it in a Spanish sample. The scale was administered to 460 Spanish employees from the service sector; 40.4% were men and 59.6% women. The Confirmatory Factor Analysis (CFA) supported the four-dimension structure of the Spanish version of the COJS. This model showed a better fit to the data than the other models tested. Cronbach's alphas obtained for the subscales ranged between .88 and .95. Correlations of the Spanish version of the COJS with measures of incivility and job satisfaction were statistically significant and of moderate to high magnitude, indicating a reasonable degree of construct validity. The Spanish version of the COJS has adequate psychometric properties and may be of value in assessing OJ in Spanish settings.
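Cronbach's alpha, the internal-consistency coefficient reported for the subscales above, is computed from the item variances and the variance of the total score. The toy data below are invented for illustration, not the study's responses.

```python
def cronbach_alpha(items):
    """Cronbach's alpha: items is a list of per-item score lists,
    all for the same respondents, same order."""
    k = len(items)
    n = len(items[0])

    def var(xs):  # sample variance (n - 1 denominator)
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

    item_vars = sum(var(it) for it in items)
    totals = [sum(it[i] for it in items) for i in range(n)]
    return k / (k - 1) * (1 - item_vars / var(totals))

# Three parallel items for five respondents.
alpha = cronbach_alpha([[2, 4, 4, 5, 3],
                        [3, 4, 5, 5, 2],
                        [2, 5, 4, 4, 3]])
```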

  8. URGENCES NOUVELLE VERSION

    CERN Multimedia

    Medical Service

    2002-01-01

    The table of emergency numbers that appeared in Bulletin 10/2002 is out of date. The updated version provided by the Medical Service appears on the following page. Please disregard the previous version.
    URGENT NEED OF A DOCTOR - GENEVA
    Patient not fit to be moved: call your family doctor, or SOS MEDECINS (24h/24h) 748 49 50, or ASSOC. OF GENEVA DOCTORS (7h-23h) 322 20 20
    Patient can be moved:
    HOPITAL CANTONAL, 24 Micheli du Crest: 372 33 11 / 382 33 11
    CHILDREN'S HOSPITAL, 6 rue Willy Donzé: 382 68 18 / 382 45 55
    MATERNITY, 24 Micheli du Crest: 382 68 16 / 382 33 11
    OPHTALMOLOGY, 22 Alcide Jentzer: 382 84 00
    HOPITAL DE LA TOUR, Meyrin: 719 61 11
    CENTRE MEDICAL DE MEYRIN, Champs Fréchets: 719 74 00
    EMERGENCIES:
    FIRE BRIGADE: 118
    FIRE BRIGADE CERN: 767 44 44
    URGENT NEED OF AMBULANCE (GENEVA AND VAUD): 144
    POLICE: 117
    ANTI-POISON CENTRE (24h/24h): 01 251 51 510
    EUROPEAN EMERGENCY CALL: 112
    FRANCE
    Patient not fit to be moved: call your family doctor
    Patient can be moved: ST. JULIE...

  9. ABEL model: Evaluates claims of inability to afford penalties and compliance costs (version 2.8) (for microcomputers). Model-Simulation

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    1996-09-01

    The easy-to-use ABEL software evaluates for-profit company claims of inability to afford penalties, clean-up costs, or compliance costs. Violators raise the issue of inability to pay in most of EPA's enforcement actions regardless of whether there is any hard evidence supporting those claims. The program enables Federal, State and local enforcement professionals to quickly determine if there was any validity to those claims. ABEL is a tool that promotes quick settlements by performing screening analyses of defendants and potentially responsible parties (PRPs) to determine their financial capacity. If ABEL indicates the firm can afford the full penalty, compliance or clean-up costs, then EPA makes no adjustments for inability to pay. If it indicates that the firm cannot afford the full amount, it directs the enforcement personnel to review other financial reports before making any adjustments. After analyzing some basic financial ratios that reflect a company's solvency, ABEL assesses the firm's ability to pay by focusing on projected cash flows. The model explicitly calculates the value of projected, internally generated cash flows from historical tax information, and compares these cash flows to the proposed environmental expenditure(s). The software is extremely easy to use. Users are taken through a series of prompts to enter specified data. On-screen 'help' information is available at any time.
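A cash-flow screening of the kind ABEL performs can be sketched as a present-value comparison. Everything below (the discount rate, the cash-flow figures, and the simple threshold rule) is an illustrative assumption, not EPA's actual methodology or ABEL's algorithm.

```python
def affordable(projected_cash_flows, penalty, discount_rate):
    """Screening sketch: compare the present value of projected
    internally generated annual cash flows (year-end convention)
    against a proposed penalty or compliance cost."""
    pv = sum(cf / (1 + discount_rate) ** (t + 1)
             for t, cf in enumerate(projected_cash_flows))
    return pv >= penalty

# Hypothetical firm: five years of projected cash flows vs. a $150k penalty.
ok = affordable([50_000, 52_000, 54_000, 56_000, 58_000], 150_000, 0.10)
```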

  10. Modeling regional air quality and climate: improving organic aerosol and aerosol activation processes in WRF/Chem version 3.7.1

    Directory of Open Access Journals (Sweden)

    K. Yahya

    2017-06-01

    Air quality and climate influence each other through the uncertain processes of aerosol formation and cloud droplet activation. In this study, both processes are improved in the Weather Research and Forecasting model with Chemistry (WRF/Chem) version 3.7.1. The existing Volatility Basis Set (VBS) treatments for organic aerosol (OA) formation in WRF/Chem are improved by considering the following: secondary OA (SOA) formation from semi-volatile primary organic aerosol (POA), a semi-empirical formulation for the enthalpy of vaporization of SOA, and functionalization and fragmentation reactions for multiple generations of products from the oxidation of VOCs. Over the continental US, 2-month-long simulations (May to June 2010) are conducted and results are evaluated against surface and aircraft observations during the Nexus of Air Quality and Climate Change (CalNex) campaign. Among all the configurations considered, the best performance is found for the simulation with the 2005 Carbon Bond mechanism (CB05) and the VBS SOA module with semi-volatile POA treatment, 25 % fragmentation, and the emissions of semi-volatile and intermediate-volatility organic compounds set to 3 times the original POA emissions. Among the three gas-phase mechanisms (CB05, CB6, and SAPRC07) used, CB05 gives the best performance for surface ozone and PM2.5 concentrations. Differences in SOA predictions are larger for the simulations with different VBS treatments (e.g., nonvolatile POA versus semi-volatile POA) compared to the simulations with different gas-phase mechanisms. Compared to the simulation with CB05 and the default SOA module, the simulations with the VBS treatment improve cloud droplet number concentration (CDNC) predictions (normalized mean biases from −40.8 % to a range of −34.6 to −27.7 %), with large differences between CB05-CB6 and SAPRC07 due to large differences in their OH and HO2 predictions. An advanced aerosol activation

  11. Keeping Silence as Speaking: The Vimalakirti Nirdesa Sutra's Interpretation of the Paradoxes of Speaking

    Institute of Scientific and Technical Information of China (English)

    张培高; 李蒙

    2016-01-01

    The relationship between language and the metaphysical is an important philosophical topic in the Vimalakirti Nirdesa Sutra. The sutra holds that the unspeakable, such as the dharma character, can only be expressed through special methods: "speaking according to the thing itself", "speaking with figurative language", "speaking by showing", and "speaking with silence". The first method is not ordinary speech but the unfolding of the dharma itself. The second not only breaks the boundary between phenomenon and ontology so that the two merge, but also mobilizes analytical thinking and imagination to escape conceptual binds. The third differs from indirect statement: listeners need not introspect or imagine, but can directly face the thing itself. Silence is not simply not speaking, since there are two types of silence: being unable to speak, and being able to speak yet choosing silence. Vimalakirti's silence is of the second type: he understands the unspeakable and expresses it through silence.

  12. Self-consciousness Scale: a Brazilian version.

    Science.gov (United States)

    Teixeira, M A; Gomes, W B

    1995-10-01

    The aim of this study was to examine the applicability of a Brazilian version of the Self-consciousness Scale to university students. Factorial structure, subscale intercorrelations, and normative data obtained with 182 subjects are reported. These results suggest that the proposed model of self-consciousness is applicable in the Brazilian culture, although some significant sex differences were found for two of the scales. Reliability tests and the factorial validity of the scale showed that this version still needs refinement to be used as a reliable research tool.

  13. Cryptanalysis of Achterbahn-Version 1 and -Version 2

    Institute of Scientific and Technical Information of China (English)

    Xiao-Li Huang; Chuan-Kun Wu

    2007-01-01

    Achterbahn is one of the candidate stream ciphers submitted to eSTREAM, the ECRYPT Stream Cipher Project. The cipher Achterbahn uses a new structure based on several nonlinear feedback shift registers (NLFSRs) and a nonlinear combining output Boolean function. This paper proposes distinguishing attacks on Achterbahn-Version 1 and -Version 2 in both the reduced mode and the full mode. These distinguishing attacks are based on linear approximations of the output functions. On the basis of these linear approximations and the periods of the registers, parity checks with noticeable biases are found. Distinguishing attacks can then be mounted using these biased parity checks. For Achterbahn-Version 1, all three possible forms of the output function are analyzed. Achterbahn-Version 2, the modified version of Achterbahn-Version 1, is designed to avert attacks based on approximations of the output Boolean function. Our attack on Achterbahn-Version 2, with even much lower complexity, shows that it cannot prevent attacks based on linear approximations.
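
    The principle behind such distinguishers can be illustrated with a toy simulation. This is not Achterbahn's actual parity check (whose taps derive from the NLFSR periods); it only shows why a parity check that holds with probability 1/2 + ε can be detected after on the order of 1/ε² keystream samples.

```python
# Toy distinguisher based on a biased parity check. The check itself and
# its bias eps are placeholders, not Achterbahn's actual parity relation.
import math
import random

def biased_parity_stream(n, eps, rng):
    """Emit n parity-check values that equal 0 with probability 1/2 + eps."""
    return [0 if rng.random() < 0.5 + eps else 1 for _ in range(n)]

def distinguish(samples):
    """Guess 'cipher' if the observed zero-bias exceeds ~2 standard deviations
    of what a truly random sequence would show."""
    bias = samples.count(0) / len(samples) - 0.5
    threshold = 1.0 / math.sqrt(len(samples))  # 2 * sqrt(1/(4n))
    return "cipher" if bias > threshold else "random"

rng = random.Random(1)
eps = 0.1
n = int(10 / eps ** 2)  # sample complexity grows as 1/eps^2
print(distinguish(biased_parity_stream(n, eps, rng)))
```

    The smaller the bias of the best linear approximation, the more keystream the attacker needs, which is why the attack complexities in the paper track the approximation biases.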

  14. Versions Of Care Technology

    Directory of Open Access Journals (Sweden)

    Sampsa Hyysalo

    2007-01-01

    The importance of users for innovation has been increasingly emphasized in the literatures on design and management of technology. However, less attention has been given to how people shape technology-in-use. This paper first provides a review of literature on technology use in the social and cultural studies of technology. It then moves to examine empirically how a novel alarm and monitoring appliance was appropriated in the work of home-care nurses and in the everyday living of elderly people. Analysis shows that even these technically unsavvy users shaped the technology considerably by various, even if mundane, acts of adapting it materially, as well as by attributing different meanings to it. However, the paper goes on to argue that such commonplace phrasing of the findings obscures their significance and interrelations. Consequently, the final section of the paper reframes the key findings of this study using the concepts of practice, enactment, and versions of technology to reach a more adequate description.

  15. Treatment of the Mirror 3H(α, γ) 7Li and 3He(α, γ) 7Be Reactions in the Algebraic Version of the Resonating Group Model

    Science.gov (United States)

    Solovyev, A. S.; Igashov, S. Yu.; Tchuvil'sky, Yu. M.

    2014-12-01

    A unified microscopic approach based on the algebraic version of the resonating group model has been realized for description of the radiative capture reactions 3H(α, γ)7Li and 3He(α, γ)7Be, which play an important role for modern nuclear astrophysics. The astrophysical S-factors of the reactions and branching ratios between capture to the ground and first excited states of the 7Li and 7Be nuclei have been calculated. The comparison with the most recent experimental data demonstrates a good agreement.
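
    For context, the astrophysical S-factor that the record computes is defined, in standard notation (not specific to this paper), by factoring the steep Coulomb-barrier penetration out of the capture cross section:

```latex
S(E) = E \, \sigma(E) \, e^{2\pi\eta}, \qquad
\eta = \frac{Z_1 Z_2 e^2}{\hbar v},
```

    where σ(E) is the capture cross section, η is the Sommerfeld parameter, and v is the relative velocity of the colliding nuclei. Because S(E) varies slowly with energy, it is the quantity extrapolated to the low stellar energies relevant to astrophysics.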

  16. GNU Octave Manual Version 3

    DEFF Research Database (Denmark)

    W. Eaton, John; Bateman, David; Hauberg, Søren

    This manual is the definitive guide to GNU Octave, an interactive environment for numerical computation. The manual covers the new version 3 of GNU Octave.

  17. EOSlib, Version 3

    Energy Technology Data Exchange (ETDEWEB)

    2017-02-03

    Equilibrium thermodynamics underpins many of the technologies used throughout theoretical physics, yet verification of the various theoretical models in the open literature remains challenging. EOSlib provides a single, consistent, verifiable implementation of these models, in a single, easy-to-use software package. It consists of three parts: a software library implementing various published equation-of-state (EOS) models; a database of fitting parameters for various materials for these models; and a number of useful utility functions for simplifying thermodynamic calculations such as computing Hugoniot curves or Riemann problem solutions. Ready availability of this library will enable reliable code-to-code testing of equation-of-state implementations, as well as a starting point for more rigorous verification work. EOSlib also provides a single, consistent API for its analytic and tabular EOS models, which simplifies the process of comparing models for a particular application.
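
    As an example of the kind of utility such a library provides, the Hugoniot curve for an ideal-gas EOS follows in closed form from the Rankine-Hugoniot jump conditions. The function name below is hypothetical and does not reflect EOSlib's actual API.

```python
# Hugoniot curve for an ideal-gas EOS (hypothetical helper, not EOSlib's API).

def ideal_gas_hugoniot_pressure(v_ratio, p0, gamma=1.4):
    """Pressure on the Hugoniot at specific-volume ratio v/v0 for an ideal gas:

        p/p0 = [(gamma+1) - (gamma-1) * v/v0] / [(gamma+1) * v/v0 - (gamma-1)]

    The curve diverges at the strong-shock limit v/v0 -> (gamma-1)/(gamma+1).
    """
    num = (gamma + 1) - (gamma - 1) * v_ratio
    den = (gamma + 1) * v_ratio - (gamma - 1)
    return p0 * num / den

# No compression (v/v0 = 1) recovers the initial pressure.
print(ideal_gas_hugoniot_pressure(1.0, 101325.0))
```

    A common verification check, of the sort EOSlib's consistent API enables across models, is that every EOS's Hugoniot passes through the initial state and steepens monotonically under compression.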

  18. A strategy for representing the effects of convective momentum transport in multiscale models: Evaluation using a new superparameterized version of the Weather Research and Forecast model (SP-WRF)

    Science.gov (United States)

    Tulich, S. N.

    2015-06-01

    This paper describes a general method for the treatment of convective momentum transport (CMT) in large-scale dynamical solvers that use a cyclic, two-dimensional (2-D) cloud-resolving model (CRM) as a "superparameterization" of convective-system-scale processes. The approach is similar in concept to traditional parameterizations of CMT, but with the distinction that both the scalar transport and diagnostic pressure gradient force are calculated using information provided by the 2-D CRM. No assumptions are therefore made concerning the role of convection-induced pressure gradient forces in producing up- or down-gradient CMT. The proposed method is evaluated using a new superparameterized version of the Weather Research and Forecast model (SP-WRF) that is described herein for the first time. Results show that the net effect of the formulation is to modestly reduce the overall strength of the large-scale circulation, via "cumulus friction." This statement holds true for idealized simulations of two types of mesoscale convective systems, a squall line and a tropical cyclone, in addition to real-world global simulations of seasonal (1 June to 31 August) climate. In the case of the latter, inclusion of the formulation is found to improve the depiction of key synoptic modes of tropical wave variability, in addition to some aspects of the simulated time-mean climate. The choice of CRM orientation is also found to importantly affect the simulated time-mean climate, apparently due to changes in the explicit representation of widespread shallow convective regions.

  19. Validation of a five-factor model of a Chinese Mandarin version of the Positive and Negative Syndrome Scale (CMV-PANSS) in a sample of 813 schizophrenia patients.

    Science.gov (United States)

    Wu, Bo-Jian; Lan, Tsuo-Hung; Hu, Tsung-Ming; Lee, Shin-Min; Liou, Jiunn-Ying

    2015-12-01

    The Positive and Negative Syndrome Scale (PANSS) is one of the most widely used instruments for measuring the severity of schizophrenia. However, until now, there has not been a published, validated Chinese Mandarin version of the five-factor model PANSS with confirmatory factor analysis (CFA) for schizophrenic patients in Taiwan. A total of 813 subjects were recruited. Internal consistency was evaluated with Cronbach's alpha coefficient. For test re-test reliability, 57 patients were reassessed and intra-class correlation coefficients were calculated. For validity, exploratory factor analysis (EFA) and confirmatory factor analysis (CFA) using a Structured Equation Model were implemented to identify the factor model. The Cronbach's alpha coefficient was 0.928. The intra-class coefficient was 0.878 (95% CI: 0.79-0.92). The final model was composed of five factors. EFA explained a total of 64.2% of the variance. CFA indicated a good fitting model. Except for the PANSS items G7 (motor retardation), G8 (uncooperativeness), N5 (abstract thinking), and G10 (disorientation), this study found that the items loaded on these factors were similar to the consensus items published in prior studies. In summary, these findings support the Chinese Mandarin version of the PANSS as a reliable and valid instrument for the assessment of the severity of psychopathology in hospitalized, stable patients with schizophrenia. More effective and specific treatment models targeting sub-culture differences are expected to be developed in future studies.
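
    The internal-consistency statistic reported here (Cronbach's alpha of 0.928) has a simple closed form. The following sketch computes it from per-item score columns; the toy data is illustrative and not the study's.

```python
# Cronbach's alpha: alpha = k/(k-1) * (1 - sum(item variances) / total variance).
# Toy respondent data below is made up for illustration.

def cronbach_alpha(items):
    """items: list of per-item score lists, all over the same respondents."""
    k = len(items)

    def var(xs):  # sample variance
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

    totals = [sum(col) for col in zip(*items)]  # each respondent's total score
    return k / (k - 1) * (1 - sum(var(it) for it in items) / var(totals))

items = [[1, 2, 3, 4, 5], [2, 2, 3, 5, 5], [1, 3, 3, 4, 4]]
print(round(cronbach_alpha(items), 3))
```

    Values above about 0.9, as in this study, indicate that the scale items covary strongly enough to be summed into a single severity score.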

  20. brulilo, Version 0.x

    Energy Technology Data Exchange (ETDEWEB)

    2015-04-16

    effectively remove some of the stiffness and allow for efficient explicit integration techniques to be used. The original intent of brulilo was to implement these stiffness-alleviating techniques with explicit integrators and compare the performance to traditional implicit integrations of the full stiff system. This is still underway, as the code is very much in an alpha-release state. Furthermore, explicit integrators are often much easier to parallelize than their implicit counterparts. brulilo will implement parallelization of these techniques, leveraging both the Python implementation of MPI, mpi4py, as well as highly parallelized versions targeted at GPUs with PyOpenCL and/or PyCUDA.

  1. Computer-Aided Structural Engineering (CASE) Project. User’s Guide: Computer-Aided Structural Modeling (CASM). Version 5.00

    Science.gov (United States)

    1994-04-01

    ... transferred to the lateral resistance locations by tributary area or continuous beam model. For rigid diaphragms, lateral loads are transferred to the ... floors and roof planes, a Flexible Diaphragm dialog window will appear (Distribute Loads Based On: Simple Beam Model or Continuous Beam Model). a. Select Simple

  2. A novel assessment of the role of land-use and land-cover change in the global carbon cycle, using a new Dynamic Global Vegetation Model version of the CABLE land surface model

    Science.gov (United States)

    Haverd, Vanessa; Smith, Benjamin; Nieradzik, Lars; Briggs, Peter; Canadell, Josep

    2017-04-01

    In recent decades, terrestrial ecosystems have sequestered around 1.2 PgC y-1, an amount equivalent to 20% of fossil-fuel emissions. This land carbon flux is the net result of the impact of changing climate and CO2 on ecosystem productivity (the CO2-climate-driven land sink) and of deforestation, harvest and secondary forest regrowth (the land-use change (LUC) flux). The future trajectory of the land carbon flux is highly dependent upon the contributions of these processes to the net flux. However, their contributions are highly uncertain, in part because the CO2-climate-driven land sink and LUC components are often estimated independently, when in fact they are coupled. We provide a novel assessment of global land carbon fluxes (1800-2015) that integrates land-use effects with the effects of changing climate and CO2 on ecosystem productivity. For this, we use a new land-use-enabled Dynamic Global Vegetation Model (DGVM) version of the CABLE land surface model, suitable for use in attributing changes in terrestrial carbon balance, and in predicting changes in vegetation cover and associated effects on land-atmosphere exchange. In this model, land-use change is driven by prescribed gross land-use transitions and harvest areas, which are converted to changes in land-use area and transfer of carbon between pools (soil, litter, biomass, harvested wood products and cleared wood pools). A novel aspect is the treatment of secondary woody vegetation via the coupling between the land-use module and the POP (Populations Order Physiology) module for woody demography and disturbance-mediated landscape heterogeneity. Land-use transitions to and from secondary forest tiles modify the patch age distribution within secondary-vegetated tiles, in turn affecting biomass accumulation and turnover rates and hence the magnitude of the secondary forest sink.
The resulting secondary forest patch age distribution also influences the magnitude of the secondary forest harvest and clearance fluxes

  3. Validation of the Korean version of the Pediatric Quality of Life Inventory™ 4.0 (PedsQL™) Generic Core Scales in school children and adolescents using the Rasch model

    Directory of Open Access Journals (Sweden)

    Varni James W

    2008-06-01

    Abstract. Background: The Pediatric Quality of Life Inventory™ (PedsQL™) is a child self-report and parent proxy-report instrument designed to assess health-related quality of life (HRQOL) in healthy and ill children and adolescents. It has been translated into over 70 international languages and proposed as a valid and reliable pediatric HRQOL measure. This study aimed to assess the psychometric properties of the Korean translation of the PedsQL™ 4.0 Generic Core Scales. Methods: Following the guidelines for linguistic validation, the original US English scales were translated into Korean and cognitive interviews were administered. The field-testing responses of 1425 school children and adolescents and 1431 parents to the Korean version of the PedsQL™ 4.0 Generic Core Scales were analyzed utilizing confirmatory factor analysis and the Rasch model. Results: Consistent with studies using the US English instrument and other translation studies, score distributions were skewed toward higher HRQOL in a predominantly healthy population. Confirmatory factor analysis supported a four-factor and a second-order factor model. The analysis using the Rasch model showed that person reliabilities are low, item reliabilities are high, and the majority of items fit the model's expectation. The Rasch rating-scale diagnostics showed that the PedsQL™ 4.0 Generic Core Scales in general have the optimal number of response categories, but category 4 ("almost always a problem") is somewhat problematic for the healthy school sample. The agreement between child self-report and parent proxy-report was moderate. Conclusion: The results demonstrate the feasibility, validity, item reliability, item fit, and agreement between child self-report and parent proxy-report of the Korean version of the PedsQL™ 4.0 Generic Core Scales for school population health research in Korea. However, the utilization of the Korean version of the PedsQL™ 4.0 Generic Core Scales for healthy school
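
    For reference, the Rasch model underlying the item-fit analysis gives, in its standard dichotomous form, the probability of endorsing an item as a logistic function of the difference between person ability θ and item difficulty b (the PedsQL analysis uses the rating-scale extension, which adds category thresholds to b):

```latex
P(X_{ni} = 1 \mid \theta_n, b_i) = \frac{e^{\theta_n - b_i}}{1 + e^{\theta_n - b_i}}
```

    Item and person reliabilities, and the response-category diagnostics discussed above, are all derived from how well observed responses match these modeled probabilities.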

  4. ARROW (Version 2) Commercial Software Validation and Configuration Control

    Energy Technology Data Exchange (ETDEWEB)

    HEARD, F.J.

    2000-02-10

    ARROW (Version 2), a compressible flow piping network modeling and analysis computer program from Applied Flow Technology, was installed for use at the U.S. Department of Energy Hanford Site near Richland, Washington.

  5. Climate Forecast System Version 2 (CFSv2) Operational Forecasts

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — The Climate Forecast System Version 2 (CFSv2) produced by the NOAA National Centers for Environmental Prediction (NCEP) is a fully coupled model representing the...

  6. Climate Forecast System Version 2 (CFSv2) Operational Analysis

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — The Climate Forecast System Version 2 (CFSv2) produced by the NOAA National Centers for Environmental Prediction (NCEP) is a fully coupled model representing the...

  7. TerrSysMP-PDAF (version 1.0): a modular high-performance data assimilation framework for an integrated land surface-subsurface model

    Science.gov (United States)

    Kurtz, Wolfgang; He, Guowei; Kollet, Stefan J.; Maxwell, Reed M.; Vereecken, Harry; Hendricks Franssen, Harrie-Jan

    2016-04-01

    Modelling of terrestrial systems is continuously moving towards more integrated approaches, in which different terrestrial compartment models are combined to realise a more sophisticated physical description of water, energy and carbon fluxes across compartment boundaries and to provide a more integrated view of terrestrial processes. While such models can effectively reduce certain parameterisation errors of single-compartment models, model predictions are still prone to uncertainties regarding model input variables. The resulting uncertainties in model predictions can be effectively tackled by data assimilation techniques, which correct model predictions with observations while taking into account both model and measurement uncertainties. The steadily increasing availability of computational resources now makes it increasingly feasible to perform data assimilation for computationally demanding integrated terrestrial system models. However, as the computational burden of both integrated models and data assimilation techniques is large, there is an increasing need for data assimilation frameworks that can run on, and make efficient use of, massively parallel computational resources. In this paper we present a data assimilation framework for the land surface-subsurface part of the Terrestrial System Modelling Platform (TerrSysMP). TerrSysMP is connected via a memory-based coupling approach with the pre-existing parallel data assimilation library PDAF (Parallel Data Assimilation Framework). This framework provides a fully parallel modular environment for performing data assimilation for the land surface and subsurface compartments. A simple synthetic case study for a land surface-subsurface system (0.8 million unknowns) is used to demonstrate the effects of data assimilation in the integrated model TerrSysMP and to assess the scaling behaviour of the
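
    The analysis step such a framework applies can be illustrated with a minimal ensemble Kalman filter update on a toy state. This scalar-observation sketch omits PDAF's parallel decomposition, localisation, and the actual TerrSysMP state layout; it only shows how observations pull an ensemble of model states toward the data in proportion to the relative uncertainties.

```python
# Minimal stochastic EnKF analysis step (toy sketch, not PDAF's implementation).
import numpy as np

def enkf_update(ensemble, obs, obs_var, H):
    """ensemble: (n_state, n_members); obs, obs_var: (n_obs,);
    H: (n_obs, n_state) linear observation operator."""
    mean = ensemble.mean(axis=1, keepdims=True)
    A = ensemble - mean                      # state anomalies
    HA = H @ A                               # anomalies in observation space
    n = ensemble.shape[1]
    P_HT = A @ HA.T / (n - 1)                # cross-covariance P H^T
    S = HA @ HA.T / (n - 1) + np.diag(np.atleast_1d(obs_var))
    K = P_HT @ np.linalg.inv(S)              # Kalman gain
    rng = np.random.default_rng(0)           # perturbed observations
    perturbed = obs[:, None] + rng.normal(0.0, np.sqrt(obs_var), (len(obs), n))
    return ensemble + K @ (perturbed - H @ ensemble)

ens = np.array([[1.0, 2.0, 3.0, 4.0]])      # 1 state variable, 4 members
H = np.array([[1.0]])
updated = enkf_update(ens, np.array([5.0]), np.array([0.1]), H)
print(updated.mean())                        # pulled toward the observation
```

    Because the observation variance (0.1) is small relative to the ensemble spread, the gain is close to 1 and the updated mean sits near the observation; with a noisier observation it would stay closer to the prior mean.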

  8. LHCf brochure (English version)

    CERN Multimedia

    Lefevre, C

    2012-01-01

    The Earth's upper atmosphere is constantly hit by particles called cosmic rays, producing many secondary particles that collide with nuclei in the atmosphere. LHCf is designed to detect these secondary particles from ultra-high-energy cosmic rays to help confirm the theoretical models that explain what happens when these cosmic rays enter the atmosphere.

  9. AF-GEOSPACE Version 2.1

    Science.gov (United States)

    Hilmer, R. V.; Ginet, G. P.; Hall, T.; Holeman, E.; Madden, D.; Tautz, M.; Roth, C.

    2004-05-01

    AF-GEOSpace is a graphics-intensive software program with space environment models and applications developed and distributed by the Space Weather Center of Excellence at AFRL. A review of current (Version 2.0) and planned (Version 2.1) AF-GEOSpace capabilities will be given. A wide range of physical domains is represented enabling the software to address such things as solar disturbance propagation, radiation belt configuration, and ionospheric auroral particle precipitation and scintillation. The software is currently being used to aid with the design, operation, and simulation of a wide variety of communications, navigation, and surveillance systems. Building on the success of previous releases, AF-GEOSpace has become a platform for the rapid prototyping of automated operational and simulation space weather visualization products and helps with a variety of tasks, including: orbit specification for radiation hazard avoidance; satellite design assessment and post-event anomaly analysis; solar disturbance effects forecasting; frequency and antenna management for radar and HF communications; determination of link outage regions for active ionospheric conditions; scientific model validation and comparison, physics research, and education. Version 2.0 provided a simplified graphical user interface, improved science and application modules, and significantly enhanced graphical performance. Common input data archive sets, application modules, and 1-D, 2-D, and 3-D visualization tools are provided to all models. Dynamic capabilities permit multiple environments to be generated at user-specified time intervals while animation tools enable displays such as satellite orbits and environment data together as a function of time. Building on the existing Version 2.0 software architecture, AF-GEOSpace Version 2.1 is currently under development and will include a host of new modules to provide, for example, geosynchronous charged particle fluxes, neutral atmosphere densities

  10. Using Akaike's information theoretic criterion in mixed-effects modeling of pharmacokinetic data: a simulation study [version 3; referees: 2 approved, 1 approved with reservations

    Directory of Open Access Journals (Sweden)

    Erik Olofsen

    2015-07-01

    Akaike's information theoretic criterion for model discrimination (AIC) is often stated to "overfit", i.e., it selects models with a higher dimension than the dimension of the model that generated the data. However, with experimental pharmacokinetic data it may not be possible to identify the correct model, because of the complexity of the processes governing drug disposition. Instead of trying to find the correct model, a more useful objective might be to minimize the prediction error of drug concentrations in subjects with unknown disposition characteristics. In that case, the AIC might be the selection criterion of choice. We performed Monte Carlo simulations using a model of pharmacokinetic data (a power function of time) with the property that fits with common multi-exponential models can never be perfect, thus resembling the situation with real data. Prespecified models were fitted to simulated data sets, and AIC and AICc (the criterion with a correction for small sample sizes) values were calculated and averaged. The average predictive performances of the models, quantified using simulated validation sets, were compared to the means of the AICs. The data for fits and validation consisted of 11 concentration measurements each obtained in 5 individuals, with three degrees of interindividual variability in the pharmacokinetic volume of distribution. Mean AICc corresponded very well, and better than mean AIC, with mean predictive performance. With increasing interindividual variability, there was a trend towards larger optimal models, with respect to both lowest AICc and best predictive performance. Furthermore, it was observed that the mean square prediction error itself became less suitable as a validation criterion, and that a predictive performance measure should incorporate interindividual variability. This simulation study showed that, at least in a relatively simple mixed-effects modelling context with a set of prespecified models
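
    The two criteria compared in the study have simple standard definitions. The sketch below shows how strongly AICc's small-sample correction penalizes extra parameters at the study's sample size of 11 observations; the log-likelihood value is a placeholder.

```python
# AIC and its small-sample correction AICc (standard definitions).

def aic(log_likelihood, k):
    """AIC = 2k - 2 ln L, for k estimated parameters."""
    return 2 * k - 2 * log_likelihood

def aicc(log_likelihood, k, n):
    """AICc = AIC + 2k(k+1)/(n - k - 1); the correction dominates for small n."""
    return aic(log_likelihood, k) + 2 * k * (k + 1) / (n - k - 1)

# With n = 11 samples, compare the correction for a 2- vs a 4-parameter
# model at the same (placeholder) log-likelihood.
n, logL = 11, -20.0
print(aicc(logL, 2, n) - aic(logL, 2))   # 1.5
print(aicc(logL, 4, n) - aic(logL, 4))   # 40/6, about 6.67
```

    At n = 11 the correction more than quadruples between 2 and 4 parameters, which is consistent with the study's finding that mean AICc tracks predictive performance better than mean AIC in small samples.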

  11. Pinyon, Version 0

    Energy Technology Data Exchange (ETDEWEB)

    2017-02-13

    Pinyon is a tool that stores steps involved in creating a model derived from a collection of data. The main function of Pinyon is to store descriptions of calculations used to analyze or visualize the data in a database, and allow users to view the results of these calculations via a web interface. Additionally, users may also use the web interface to make adjustments to the calculations and rerun the entire collection of analysis steps automatically.

  12. CalTOX (registered trademark), A multimedia total exposure model spreadsheet user's guide. Version 4.0(Beta)

    Energy Technology Data Exchange (ETDEWEB)

    McKone, T.E.; Enoch, K.G.

    2002-08-01

    CalTOX has been developed as a set of spreadsheet models and spreadsheet data sets to assist in assessing human exposures from continuous releases to multiple environmental media, i.e. air, soil, and water. It has also been used for waste classification and for setting soil clean-up levels at uncontrolled hazardous waste sites. The modeling components of CalTOX include a multimedia transport and transformation model, multi-pathway exposure scenario models, and add-ins to quantify and evaluate uncertainty and variability. All parameter values used as inputs to CalTOX are distributions, described in terms of mean values and a coefficient of variation, rather than point estimates or plausible upper values such as most other models employ. This probabilistic approach allows both sensitivity and uncertainty analyses to be directly incorporated into the model operation. This manual provides CalTOX users with a brief overview of the CalTOX spreadsheet model and provides instructions for using the spreadsheet to make deterministic and probabilistic calculations of source-dose-risk relationships.
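
    The mean-plus-coefficient-of-variation input convention can be sketched with a small Monte Carlo exposure calculation. All parameter values, the lognormal choice, and the dose pathway below are hypothetical illustrations of the approach, not CalTOX's actual equations or defaults.

```python
# Monte Carlo sketch of mean/CV distributional inputs (hypothetical values,
# not CalTOX's actual parameters or exposure equations).
import math
import random

def lognormal_from_mean_cv(mean, cv, rng):
    """Sample a lognormal specified by its arithmetic mean and CV."""
    sigma2 = math.log(1 + cv ** 2)
    mu = math.log(mean) - sigma2 / 2
    return rng.lognormvariate(mu, math.sqrt(sigma2))

def dose(rng):
    conc = lognormal_from_mean_cv(2.0, 0.5, rng)    # mg/L in water (hypothetical)
    intake = lognormal_from_mean_cv(1.5, 0.3, rng)  # L/day ingested
    bw = lognormal_from_mean_cv(70.0, 0.2, rng)     # kg body weight
    return conc * intake / bw                       # mg/kg-day

rng = random.Random(42)
samples = sorted(dose(rng) for _ in range(10_000))
print(f"median dose: {samples[5000]:.3f} mg/kg-day")
print(f"95th pct:    {samples[9500]:.3f} mg/kg-day")
```

    Carrying distributions rather than point estimates through the calculation is what lets sensitivity and uncertainty analyses fall directly out of the model run, as the abstract notes.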

  13. CREST Cost of Renewable Energy Spreadsheet Tool: A Model for Developing Cost-based Incentives in the United States. User Manual Version 1

    Energy Technology Data Exchange (ETDEWEB)

    Gifford, Jason S. [Sustainable Energy Advantage, LLC, Framingham, MA (United States); Grace, Robert C. [Sustainable Energy Advantage, LLC, Framingham, MA (United States)

    2011-03-01

    This user manual helps model users understand how to use the CREST model to support renewable energy incentives, feed-in tariffs (FITs), and other renewable energy rate-setting processes. It reviews the spreadsheet tool, including its layout and conventions, offering context on how and why it was created. It also provides instructions on how to populate the model with inputs that are appropriate for a specific jurisdiction’s policymaking objectives and context. And it describes the results and outlines how these results may inform decisions about long-term renewable energy support programs.

  14. Variability of Phenology and Fluxes of Water and Carbon with Observed and Simulated Soil Moisture in the Ent Terrestrial Biosphere Model (Ent TBM Version 1.0.1.0.0)

    Science.gov (United States)

    Kim, Y.; Moorcroft, P. R.; Aleinov, Igor; Puma, M. J.; Kiang, N. Y.

    2015-01-01

    The Ent Terrestrial Biosphere Model (Ent TBM) is a mixed-canopy dynamic global vegetation model developed specifically for coupling with land surface hydrology and general circulation models (GCMs). This study describes the leaf phenology submodel implemented in the Ent TBM version 1.0.1.0.0 coupled to the carbon allocation scheme of the Ecosystem Demography (ED) model. The phenology submodel adopts a combination of responses to temperature (growing degree days and frost hardening), soil moisture (linearity of stress with relative saturation) and radiation (light length). Growth of leaves, sapwood, fine roots, stem wood and coarse roots is updated on a daily basis. We evaluate the performance in reproducing observed leaf seasonal growth as well as water and carbon fluxes for four plant functional types at five Fluxnet sites, with both observed and prognostic hydrology, and observed and prognostic seasonal leaf area index. The phenology submodel is able to capture the timing and magnitude of leaf-out and senescence for temperate broadleaf deciduous forest (Harvard Forest and Morgan-Monroe State Forest, US), C3 annual grassland (Vaira Ranch, US) and California oak savanna (Tonzi Ranch, US). For evergreen needleleaf forest (Hyytiälä, Finland), the phenology submodel captures the effect of frost hardening of photosynthetic capacity on seasonal fluxes and leaf area. We address the importance of customizing parameter se