WorldWideScience

Sample records for models saphire version

  1. Systems analysis programs for Hands-on integrated reliability evaluations (SAPHIRE) Version 5.0: Verification and validation (V&V) manual. Volume 9

    International Nuclear Information System (INIS)

    Jones, J.L.; Calley, M.B.; Capps, E.L.; Zeigler, S.L.; Galyean, W.J.; Novack, S.D.; Smith, C.L.; Wolfram, L.M.

    1995-03-01

    A verification and validation (V&V) process has been performed for the System Analysis Programs for Hands-on Integrated Reliability Evaluation (SAPHIRE) Version 5.0. SAPHIRE is a set of four computer programs that the NRC developed for performing probabilistic risk assessments. They allow an analyst to perform many of the functions necessary to create, quantify, and evaluate the risk associated with a facility or process being analyzed. The programs are the Integrated Reliability and Risk Analysis System (IRRAS), the System Analysis and Risk Assessment (SARA) system, the Models And Results Database (MAR-D), and the Fault tree, Event tree, and Piping and instrumentation diagram (FEP) graphical editor. The intent of this program is to perform a V&V of successive versions of SAPHIRE. Previous efforts included the V&V of SAPHIRE Version 4.0. The SAPHIRE 5.0 V&V plan is based on the SAPHIRE 4.0 V&V plan, with revisions to incorporate lessons learned from the previous effort. Likewise, the SAPHIRE 5.0 vital and nonvital test procedures are based on the SAPHIRE 4.0 test procedures, with revisions to cover the new SAPHIRE 5.0 features and to incorporate lessons learned. Most results from the testing were acceptable; however, some discrepancies between expected code operation and actual code operation were identified. Modifications made to SAPHIRE are identified.

  2. Systems analysis programs for hands-on integrated reliability evaluations (SAPHIRE), Version 5.0

    International Nuclear Information System (INIS)

    Russell, K.D.; Kvarfordt, K.J.; Hoffman, C.L.

    1995-10-01

    The Systems Analysis Programs for Hands-on Integrated Reliability Evaluations (SAPHIRE) refers to a set of several microcomputer programs that were developed to create and analyze probabilistic risk assessments (PRAs), primarily for nuclear power plants. The Graphical Evaluation Module (GEM) is a special application tool designed for evaluation of operational occurrences using the Accident Sequence Precursor (ASP) program methods. GEM provides the capability for an analyst to quickly and easily perform conditional core damage probability (CCDP) calculations. The analyst can then use the CCDP calculations to determine whether the occurrence of an initiating event or a condition adversely impacts safety. GEM uses models and data developed in SAPHIRE specifically for the ASP program; it requires more data than are normally provided in SAPHIRE and will not perform properly with other models or databases. This is the first release of GEM, and the developers welcome user comments and feedback that will generate ideas for improvements to future versions. GEM is designated as version 5.0 to track the GEM code along with the other SAPHIRE codes, as GEM relies on the same shared database structure.
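
    In the spirit of the CCDP calculation described above, the essential bookkeeping can be sketched in Python; the event names, cut sets, and probabilities below are hypothetical illustrations, not SAPHIRE's or GEM's actual implementation. The observed condition is represented by setting the affected basic events to probability 1.0 and requantifying the core damage sequences.

      def sequence_probability(cut_sets, probs):
          """Min-cut-set upper bound for one accident sequence."""
          product = 1.0
          for cut_set in cut_sets:
              q = 1.0
              for event in cut_set:
                  q *= probs[event]
              product *= (1.0 - q)
          return 1.0 - product

      # Nominal basic-event probabilities (hypothetical values).
      nominal = {"IE-LOSP": 1e-2, "DG-A-FAILS": 5e-2, "DG-B-FAILS": 5e-2}

      # Observed condition: the initiating event occurred and diesel generator A failed,
      # so those basic events are set to probability 1.0 before requantifying.
      conditional = dict(nominal, **{"IE-LOSP": 1.0, "DG-A-FAILS": 1.0})

      # One hypothetical core damage sequence with a single cut set.
      sequences = [[["IE-LOSP", "DG-A-FAILS", "DG-B-FAILS"]]]

      ccdp = sum(sequence_probability(cs, conditional) for cs in sequences)
      print(f"CCDP = {ccdp:.2e}")  # 5.00e-02 for these illustrative numbers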

  3. SAPHIRE models and software for ASP evaluations

    International Nuclear Information System (INIS)

    Sattison, M.B.

    1997-01-01

    Over the past three years, the Idaho National Engineering Laboratory (INEL) has created 75 plant-specific Accident Sequence Precursor (ASP) models using the SAPHIRE suite of PRA codes. Along with the new models, the INEL has also developed a new module for SAPHIRE which is tailored specifically to the unique needs of ASP evaluations. These models and software will be the next generation of risk tools for the evaluation of accident precursors by both the U.S. Nuclear Regulatory Commission's (NRC's) Office of Nuclear Reactor Regulation (NRR) and the Office for Analysis and Evaluation of Operational Data (AEOD). This paper presents an overview of the models and software. Key characteristics include: (1) classification of the plant models according to plant response, with a unique set of event trees for each plant class; (2) plant-specific fault trees using supercomponents; (3) generation and retention of all system and sequence cutsets; (4) full flexibility in modifying logic, regenerating cutsets, and requantifying results; and (5) a user interface for streamlined evaluation of ASP events. Future plans for the ASP models are also presented.

  4. Systems Analysis Programs for Hands-on Integrated Reliability Evaluations (SAPHIRE), Version 5.0: Models and Results Database (MAR-D) reference manual. Volume 8

    International Nuclear Information System (INIS)

    Russell, K.D.; Skinner, N.L.

    1994-07-01

    The Systems Analysis Programs for Hands-on Integrated Reliability Evaluations (SAPHIRE) refers to a set of several microcomputer programs that were developed to create and analyze probabilistic risk assessments (PRAs), primarily for nuclear power plants. The primary function of MAR-D is to create a data repository for completed PRAs and Individual Plant Examinations (IPEs) by providing input, conversion, and output capabilities for data used by IRRAS, SARA, SETS, and FRANTIC software. As probabilistic risk assessments and individual plant examinations are submitted to the NRC for review, MAR-D can be used to convert the models and results from the study for use with IRRAS and SARA. Then, these data can be easily accessed by future studies and will be in a form that will enhance the analysis process. This reference manual provides an overview of the functions available within MAR-D and step-by-step operating instructions.

  5. Systems analysis programs for hands-on integrated reliability evaluations (SAPHIRE) version 5.0

    International Nuclear Information System (INIS)

    Russell, K.D.; Kvarfordt, K.J.; Skinner, N.L.; Wood, S.T.

    1994-07-01

    The Systems Analysis Programs for Hands-on Integrated Reliability Evaluations (SAPHIRE) refers to a set of several microcomputer programs that were developed to create and analyze probabilistic risk assessments (PRAs), primarily for nuclear power plants. This volume is the reference manual for the Systems Analysis and Risk Assessment (SARA) System Version 5.0, a microcomputer-based system used to analyze the safety issues of a "family" [i.e., a power plant, a manufacturing facility, any facility on which a probabilistic risk assessment (PRA) might be performed]. The SARA database contains PRA data primarily for the dominant accident sequences of a family and descriptive information about the family, including event trees, fault trees, and system model diagrams. The number of facility databases that can be accessed is limited only by the amount of disk storage available. To simulate changes to family systems, SARA users change the failure rates of initiating and basic events and/or modify the structure of the cut sets that make up the event trees, fault trees, and systems. The user then evaluates the effects of these changes through recalculation of the resultant accident sequence probabilities and importance measures. The results are displayed in tables and graphs that may be printed for reports. A preliminary version of the SARA program was completed in August 1985 and has undergone several updates in response to user suggestions and to maintain compatibility with the other SAPHIRE programs. Version 5.0 of SARA provides the same capability as earlier versions and adds the ability to process unlimited cut sets; display fire, flood, and seismic data; and perform more powerful cut set editing.

  6. Independent Verification and Validation SAPHIRE Version 8 Final Report Project Number: N6423 U.S. Nuclear Regulatory Commission

    Energy Technology Data Exchange (ETDEWEB)

    Kent Norris

    2010-04-01

    This report provides an evaluation of the SAPHIRE version 8 software product. SAPHIRE version 8 is being developed with a phased or cyclic iterative rapid application development methodology. Due to this approach, a similar approach has been taken for the IV&V activities on each vital software object. IV&V and Software Quality Assurance (SQA) activities occur throughout the entire development life cycle and therefore, will be required through the full development of SAPHIRE version 8. Later phases of the software life cycle, the operation and maintenance phases, are not applicable in this effort since the IV&V is being done prior to releasing Version 8.

  7. Verification and validation of the SAPHIRE Version 4.0 PRA software package

    International Nuclear Information System (INIS)

    Bolander, T.W.; Calley, M.B.; Capps, E.L.

    1994-02-01

    A verification and validation (V&V) process has been performed for the System Analysis Programs for Hands-on Integrated Reliability Evaluation (SAPHIRE). SAPHIRE is a set of four computer programs that the Nuclear Regulatory Commission (NRC) developed to perform probabilistic risk assessments (PRAs). These programs allow an analyst to create, quantify, and evaluate the risk associated with a facility or process being analyzed. The programs included in this set are the Integrated Reliability and Risk Analysis System (IRRAS), the System Analysis and Risk Assessment (SARA) system, the Models and Results Database (MAR-D), and the Fault Tree/Event Tree/Piping and Instrumentation Diagram (FEP) graphical editor. The V&V steps included a V&V plan to describe the process and criteria by which the V&V would be performed; a software requirements documentation review to determine the correctness, completeness, and traceability of the requirements; a user survey to determine the usefulness of the user documentation; identification and testing of vital and non-vital features; and documentation of the test results.

  8. Systems Analysis Programs for Hands-on Integrated Reliability Evaluations (SAPHIRE) Version 5.0: Data loading manual. Volume 10

    International Nuclear Information System (INIS)

    VanHorn, R.L.; Wolfram, L.M.; Fowler, R.D.; Beck, S.T.; Smith, C.L.

    1995-04-01

    The Systems Analysis Programs for Hands-on Integrated Reliability Evaluations (SAPHIRE) suite of programs can be used to organize and standardize, in an electronic format, information from probabilistic risk assessments or individual plant examinations. The Models and Results Database (MAR-D) program of the SAPHIRE suite serves as the repository for probabilistic risk assessment and individual plant examination data and information. This report demonstrates by examples the common electronic and manual methods used to load these types of data. It is not a stand-alone document but references documents that contribute information relevant to the data loading process. This document provides more detailed discussion and instructions for using SAPHIRE 5.0 only where another available source does not provide enough information on a specific topic.

  9. SAPHIRE 8 Volume 1 - Overview and Summary

    Energy Technology Data Exchange (ETDEWEB)

    C. L. Smith; S. T. Wood

    2011-03-01

    The Systems Analysis Programs for Hands-on Integrated Reliability Evaluations (SAPHIRE) is a software application developed for performing a complete probabilistic risk assessment (PRA) using a personal computer (PC) running the Microsoft Windows operating system. SAPHIRE Version 8 is funded by the U.S. Nuclear Regulatory Commission (NRC) and developed by the Idaho National Laboratory (INL). INL's primary role in this project is that of software developer and tester. However, INL also plays an important role in technology transfer by interfacing and supporting SAPHIRE users, who constitute a wide range of PRA practitioners from the NRC, national laboratories, the private sector, and foreign countries. SAPHIRE can be used to model a complex system’s response to initiating events and quantify associated consequential outcome frequencies. Specifically, for nuclear power plant applications, SAPHIRE 8 can identify important contributors to core damage (Level 1 PRA) and containment failure during a severe accident which leads to releases (Level 2 PRA). It can be used for a PRA where the reactor is at full power, low power, or at shutdown conditions. Furthermore, it can be used to analyze both internal and external initiating events and has special features for managing models such as flooding and fire. It can also be used in a limited manner to quantify risk in terms of release consequences to the public and environment (Level 3 PRA). In SAPHIRE 8, the act of creating a model has been separated from the analysis of that model in order to improve the quality of both the model (e.g., by avoiding inadvertent changes) and the analysis. Consequently, in SAPHIRE 8, the analysis of models is performed by using what are called Workspaces. Currently, there are Workspaces for three types of analyses: (1) the NRC’s Accident Sequence Precursor program, where the workspace is called “Events and Condition Assessment (ECA);” (2) the NRC’s Significance Determination

  10. Systems Analysis Programs for Hands-on Integrated Reliability Evaluations (SAPHIRE), Version 5.0: Integrated Reliability and Risk Analysis System (IRRAS) reference manual. Volume 2

    International Nuclear Information System (INIS)

    Russell, K.D.; Kvarfordt, K.J.; Skinner, N.L.; Wood, S.T.; Rasmuson, D.M.

    1994-07-01

    The Systems Analysis Programs for Hands-on Integrated Reliability Evaluations (SAPHIRE) refers to a set of several microcomputer programs that were developed to create and analyze probabilistic risk assessments (PRAs), primarily for nuclear power plants. The Integrated Reliability and Risk Analysis System (IRRAS) is a state-of-the-art, microcomputer-based probabilistic risk assessment (PRA) model development and analysis tool used to address key nuclear plant safety issues. IRRAS is an integrated software tool that gives the user the ability to create and analyze fault trees and accident sequences using a microcomputer. This program provides functions that range from graphical fault tree construction to cut set generation and quantification to report generation. Version 1.0 of the IRRAS program was released in February of 1987. Since then, many user comments and enhancements have been incorporated into the program, providing a much more powerful and user-friendly system. This version has been designated IRRAS 5.0 and is the subject of this Reference Manual. Version 5.0 of IRRAS provides the same capabilities as earlier versions and adds the ability to perform location transformations and seismic analysis, and provides enhancements to the user interface as well as improved algorithm performance. Additionally, Version 5.0 contains new alphanumeric fault tree and event tree editors used for event tree rules, recovery rules, and end state partitioning.

  11. SAPHIRE 8 Volume 3 - Users' Guide

    Energy Technology Data Exchange (ETDEWEB)

    C. L. Smith; K. Vedros; K. J. Kvarfordt

    2011-03-01

    The Systems Analysis Programs for Hands-on Integrated Reliability Evaluations (SAPHIRE) is a software application developed for performing a complete probabilistic risk assessment (PRA) using a personal computer. SAPHIRE is funded by the U.S. Nuclear Regulatory Commission (NRC) and developed by the Idaho National Laboratory (INL). The INL's primary role in this project is that of software developer. However, the INL also plays an important role in technology transfer by interfacing and supporting SAPHIRE users comprised of a wide range of PRA practitioners from the NRC, national laboratories, the private sector, and foreign countries. SAPHIRE can be used to model a complex system’s response to initiating events, quantify associated damage outcome frequencies, and identify important contributors to this damage (Level 1 PRA) and to analyze containment performance during a severe accident and quantify radioactive releases (Level 2 PRA). It can be used for a PRA evaluating a variety of operating conditions, for example, for a nuclear reactor at full power, low power, or at shutdown conditions. Furthermore, SAPHIRE can be used to analyze both internal and external initiating events and has special features for transforming models built for internal event analysis to models for external event analysis. It can also be used in a limited manner to quantify risk in terms of release consequences to both the public and the environment (Level 3 PRA). This reference guide will introduce the SAPHIRE Version 8.0 software. A brief discussion of the purpose and history of the software is included along with general information such as installation instructions, starting and stopping the program, and some pointers on how to get around inside the program. Next, database concepts and structure are discussed. Following that discussion are nine sections, one for each of the menu options on the SAPHIRE main menu, wherein the purpose and general capabilities for each option are

  12. Systems analysis programs for hands-on integrated reliability evaluations (SAPHIRE) version 5.0, technical reference manual

    International Nuclear Information System (INIS)

    Russell, K.D.; Atwood, C.L.; Galyean, W.J.; Sattison, M.B.; Rasmuson, D.M.

    1994-07-01

    The Systems Analysis Programs for Hands-on Integrated Reliability Evaluations (SAPHIRE) refers to a set of several microcomputer programs that were developed to create and analyze probabilistic risk assessments (PRAs), primarily for nuclear power plants. This volume provides information on the principles used in the construction and operation of Version 5.0 of the Integrated Reliability and Risk Analysis System (IRRAS) and the System Analysis and Risk Assessment (SARA) system. It summarizes the fundamental mathematical concepts of sets and logic, fault trees, and probability. This volume then describes the algorithms that these programs use to construct a fault tree and to obtain the minimal cut sets. It gives the formulas used to obtain the probability of the top event from the minimal cut sets, and the formulas for probabilities that are appropriate under various assumptions concerning repairability and mission time. It defines the measures of basic event importance that these programs can calculate. This volume gives an overview of uncertainty analysis using simple Monte Carlo sampling or Latin Hypercube sampling, and states the algorithms used by these programs to generate random basic event probabilities from various distributions. Further references are given, and a detailed example of the reduction and quantification of a simple fault tree is provided in an appendix.
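
    As a rough illustration of the quantification step summarized above, the following Python sketch computes a top event probability from minimal cut sets with the min-cut-set upper bound; the cut sets and basic event probabilities are hypothetical, and the formula shown is the standard one rather than a transcription of SAPHIRE's code.

      def cut_set_probability(cut_set, probs):
          q = 1.0
          for event in cut_set:
              q *= probs[event]
          return q

      def top_event_probability(cut_sets, probs):
          # Min-cut-set upper bound: P(top) is approximated by 1 - prod_i (1 - Q_i).
          bound = 1.0
          for cs in cut_sets:
              bound *= (1.0 - cut_set_probability(cs, probs))
          return 1.0 - bound

      probs = {"PUMP-A": 3e-3, "PUMP-B": 3e-3, "VALVE-C": 1e-4}
      cut_sets = [["PUMP-A", "PUMP-B"], ["VALVE-C"]]  # hypothetical minimal cut sets
      print(f"top event probability ~ {top_event_probability(cut_sets, probs):.3e}")

    The rare-event approximation would instead simply sum the cut set probabilities; both are standard simplifications of the exact inclusion-exclusion expansion.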

  13. Systems Analysis Programs for Hands-on Integrated Reliability Evaluations (SAPHIRE), Version 5.0. Volume 5, Systems Analysis and Risk Assessment (SARA) tutorial manual

    International Nuclear Information System (INIS)

    Sattison, M.B.; Russell, K.D.; Skinner, N.L.

    1994-07-01

    The Systems Analysis Programs for Hands-on Integrated Reliability Evaluations (SAPHIRE) refers to a set of several microcomputer programs that were developed to create and analyze probabilistic risk assessments (PRAs), primarily for nuclear power plants. This volume is the tutorial manual for the Systems Analysis and Risk Assessment (SARA) System Version 5.0, a microcomputer-based system used to analyze the safety issues of a "family" [i.e., a power plant, a manufacturing facility, any facility on which a probabilistic risk assessment (PRA) might be performed]. A series of lessons is provided that guides the user through some basic steps common to most analyses performed with SARA. The example problems presented in the lessons build on one another and, in combination, lead the user through all aspects of SARA's sensitivity analysis capabilities.

  14. Systems Analysis Programs for Hands-on Integrated Reliability Evaluations (SAPHIRE) Code Reference Manual

    International Nuclear Information System (INIS)

    C. L. Smith; K. J. Kvarfordt; S. T. Wood

    2006-01-01

    The Systems Analysis Programs for Hands-on Integrated Reliability Evaluations (SAPHIRE) is a software application developed for performing a complete probabilistic risk assessment (PRA) using a personal computer. SAPHIRE is funded by the U.S. Nuclear Regulatory Commission (NRC) and developed by the Idaho National Laboratory (INL). The INL's primary role in this project is that of software developer. However, the INL also plays an important role in technology transfer by interfacing and supporting SAPHIRE users comprised of a wide range of PRA practitioners from the NRC, national laboratories, the private sector, and foreign countries. SAPHIRE can be used to model a complex system's response to initiating events, quantify associated damage outcome frequencies, and identify important contributors to this damage (Level 1 PRA) and to analyze containment performance during a severe accident and quantify radioactive releases (Level 2 PRA). It can be used for a PRA evaluating a variety of operating conditions, for example, for a nuclear reactor at full power, low power, or at shutdown conditions. Furthermore, SAPHIRE can be used to analyze both internal and external initiating events and has special features for transforming models built for internal event analysis to models for external event analysis. It can also be used in a limited manner to quantify risk in terms of release consequences to both the public and the environment (Level 3 PRA). SAPHIRE includes a separate module called the Graphical Evaluation Module (GEM). GEM provides a highly specialized user interface with SAPHIRE that automates SAPHIRE process steps for evaluating operational events at commercial nuclear power plants. Using GEM, an analyst can estimate the risk associated with operational events in a very efficient and expeditious manner. This reference guide will introduce the SAPHIRE Version 7.0 software. A brief discussion of the purpose and history of the software is included along with general

  15. Systems Analysis Programs for Hands-on Integrated Reliability Evaluations (SAPHIRE) Code Reference Manual

    Energy Technology Data Exchange (ETDEWEB)

    C. L. Smith; K. J. Kvarfordt; S. T. Wood

    2008-08-01

    The Systems Analysis Programs for Hands-on Integrated Reliability Evaluations (SAPHIRE) is a software application developed for performing a complete probabilistic risk assessment (PRA) using a personal computer. SAPHIRE is funded by the U.S. Nuclear Regulatory Commission (NRC) and developed by the Idaho National Laboratory (INL). The INL's primary role in this project is that of software developer. However, the INL also plays an important role in technology transfer by interfacing and supporting SAPHIRE users comprised of a wide range of PRA practitioners from the NRC, national laboratories, the private sector, and foreign countries. SAPHIRE can be used to model a complex system’s response to initiating events, quantify associated damage outcome frequencies, and identify important contributors to this damage (Level 1 PRA) and to analyze containment performance during a severe accident and quantify radioactive releases (Level 2 PRA). It can be used for a PRA evaluating a variety of operating conditions, for example, for a nuclear reactor at full power, low power, or at shutdown conditions. Furthermore, SAPHIRE can be used to analyze both internal and external initiating events and has special features for transforming models built for internal event analysis to models for external event analysis. It can also be used in a limited manner to quantify risk in terms of release consequences to both the public and the environment (Level 3 PRA). SAPHIRE includes a separate module called the Graphical Evaluation Module (GEM). GEM provides a highly specialized user interface with SAPHIRE that automates SAPHIRE process steps for evaluating operational events at commercial nuclear power plants. Using GEM, an analyst can estimate the risk associated with operational events in a very efficient and expeditious manner. This reference guide will introduce the SAPHIRE Version 7.0 software. A brief discussion of the purpose and history of the software is included along with

  16. Systems Analysis Programs for Hands-on Integrated Reliability Evaluations (SAPHIRE) Code Reference Manual

    Energy Technology Data Exchange (ETDEWEB)

    C. L. Smith; K. J. Kvarfordt; S. T. Wood

    2006-07-01

    The Systems Analysis Programs for Hands-on Integrated Reliability Evaluations (SAPHIRE) is a software application developed for performing a complete probabilistic risk assessment (PRA) using a personal computer. SAPHIRE is funded by the U.S. Nuclear Regulatory Commission (NRC) and developed by the Idaho National Laboratory (INL). The INL's primary role in this project is that of software developer. However, the INL also plays an important role in technology transfer by interfacing and supporting SAPHIRE users comprised of a wide range of PRA practitioners from the NRC, national laboratories, the private sector, and foreign countries. SAPHIRE can be used to model a complex system’s response to initiating events, quantify associated damage outcome frequencies, and identify important contributors to this damage (Level 1 PRA) and to analyze containment performance during a severe accident and quantify radioactive releases (Level 2 PRA). It can be used for a PRA evaluating a variety of operating conditions, for example, for a nuclear reactor at full power, low power, or at shutdown conditions. Furthermore, SAPHIRE can be used to analyze both internal and external initiating events and has special features for transforming models built for internal event analysis to models for external event analysis. It can also be used in a limited manner to quantify risk in terms of release consequences to both the public and the environment (Level 3 PRA). SAPHIRE includes a separate module called the Graphical Evaluation Module (GEM). GEM provides a highly specialized user interface with SAPHIRE that automates SAPHIRE process steps for evaluating operational events at commercial nuclear power plants. Using GEM, an analyst can estimate the risk associated with operational events in a very efficient and expeditious manner. This reference guide will introduce the SAPHIRE Version 7.0 software. A brief discussion of the purpose and history of the software is included along with

  17. SAPHIRE 8 Volume 6 - Quality Assurance

    Energy Technology Data Exchange (ETDEWEB)

    C. L. Smith; R. Nims; K. J. Kvarfordt

    2011-03-01

    The Systems Analysis Programs for Hands-on Integrated Reliability Evaluations (SAPHIRE) Version 8 is a software application developed for performing a complete probabilistic risk assessment using a personal computer running the Microsoft Windows™ operating system. SAPHIRE 8 is funded by the U.S. Nuclear Regulatory Commission (NRC). The role of the INL in this project is that of software developer and tester. This development takes place using formal software development procedures and is subject to quality assurance (QA) processes. The purpose of this document is to describe how the SAPHIRE software QA is performed for Version 8, what constitutes its parts, and limitations of those processes. In addition, this document describes the Independent Verification and Validation that was conducted for Version 8 as part of an overall QA process.

  18. OVERVIEW OF THE SAPHIRE PROBABILISTIC RISK ANALYSIS SOFTWARE

    Energy Technology Data Exchange (ETDEWEB)

    Smith, Curtis L.; Wood, Ted; Knudsen, James; Ma, Zhegang

    2016-10-01

    The Systems Analysis Programs for Hands-on Integrated Reliability Evaluations (SAPHIRE) is a software application developed for performing a complete probabilistic risk assessment (PRA) using a personal computer (PC) running the Microsoft Windows operating system. SAPHIRE Version 8 is funded by the U.S. Nuclear Regulatory Commission (NRC) and developed by the Idaho National Laboratory (INL). INL's primary role in this project is that of software developer and tester. However, INL also plays an important role in technology transfer by interfacing and supporting SAPHIRE users, who constitute a wide range of PRA practitioners from the NRC, national laboratories, the private sector, and foreign countries. In this paper, we provide an overview of the current technical capabilities found in SAPHIRE Version 8, including the user interface and enhanced solving algorithms.

  19. Key attributes of the SAPHIRE risk and reliability analysis software for risk-informed probabilistic applications

    International Nuclear Information System (INIS)

    Smith, Curtis; Knudsen, James; Kvarfordt, Kellie; Wood, Ted

    2008-01-01

    The Idaho National Laboratory is a primary developer of probabilistic risk and reliability analysis (PRRA) tools, dating back over 35 years. Evolving from mainframe-based software, the current state of the practice has led to the creation of the SAPHIRE software. Currently, agencies such as the Nuclear Regulatory Commission, the National Aeronautics and Space Administration, the Department of Energy, and the Department of Defense use version 7 of the SAPHIRE software for many of their risk-informed activities. In order to better understand and appreciate the power of software as part of risk-informed applications, we need to recall that our current analysis methods and solution methods build upon pioneering work done 30-40 years ago. We contrast this work with the current capabilities in the SAPHIRE analysis package. As part of this discussion, we provide information on both the typical features and the special analysis capabilities that are available. We also present the application and results typically found with state-of-the-practice PRRA models. By providing both a high-level and a detailed look at the SAPHIRE software, we give a snapshot in time of the current use of software tools in a risk-informed decision arena.

  20. SAPHIR, how it ended

    International Nuclear Information System (INIS)

    Brogli, R.; Hammer, J.; Wiezel, L.; Christen, R.; Heyck, H.; Lehmann, E.

    1995-01-01

    On May 16th, 1994, PSI decided to discontinue its efforts to retrofit the SAPHIR reactor for operation at 10 MW. This decision was made because the effort and time for the retrofit work in progress had proven to be more complex than was anticipated. In view of the start-up of the new spallation-neutron source SINQ in 1996, the useful operating time between the eventual restart of SAPHIR and the start-up of SINQ became less than two years, which was regarded by PSI as too short a period to warrant the large retrofit effort. Following the decision of PSI not to re-use SAPHIR as a neutron source, several options for the further utilization of the facility were open. However, none of them appeared promising in comparison with other possibilities; it was therefore decided that SAPHIR should be decommissioned. A concerted effort was initiated to consolidate the nuclear and conventional safety for the post-operational period. (author) 3 figs., 3 tab

  1. SAPHIRE 8 Volume 2 - Technical Reference

    Energy Technology Data Exchange (ETDEWEB)

    C. L. Smith; S. T. Wood; W. J. Galyean; J. A. Schroeder; M. B. Sattison

    2011-03-01

    The Systems Analysis Programs for Hands-on Integrated Reliability Evaluations (SAPHIRE) refers to a set of computer programs that were developed to create and analyze probabilistic risk assessments (PRAs). Herein information is provided on the principles used in the construction and operation of Version 8.0 of the SAPHIRE system. This report summarizes the fundamental mathematical concepts of sets and logic, fault trees, and probability. This volume then describes the algorithms used to construct a fault tree and to obtain the minimal cut sets. It gives the formulas used to obtain the probability of the top event from the minimal cut sets, and the formulas for probabilities that apply for various assumptions concerning repairability and mission time. It defines the measures of basic event importance that SAPHIRE can calculate. This volume gives an overview of uncertainty analysis using simple Monte Carlo sampling or Latin Hypercube sampling, and states the algorithms used by this program to generate random basic event probabilities from various distributions. Also covered are enhanced capabilities such as seismic analysis, Workspace algorithms, cut set "recovery," end state manipulation, and use of "compound events."
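
    The uncertainty analysis mentioned above can be illustrated with a small Latin Hypercube sketch in Python; the lognormal parameters, the two-cut-set model, and the error-factor convention (95th percentile over median) are assumptions made for the example, not SAPHIRE's stored data or algorithms.

      import numpy as np
      from scipy.stats import norm

      rng = np.random.default_rng(0)
      n = 1000

      # Hypothetical lognormal uncertainties: (median, error factor at the 95th percentile).
      events = {"PUMP-A": (3e-3, 5.0), "PUMP-B": (3e-3, 5.0), "VALVE-C": (1e-4, 10.0)}

      def lhs_lognormal(median, error_factor, n):
          u = (np.arange(n) + rng.random(n)) / n  # one uniform draw per equal-probability stratum
          rng.shuffle(u)                          # random pairing of strata across variables
          sigma = np.log(error_factor) / 1.645    # EF = 95th percentile / median for a lognormal
          return np.exp(np.log(median) + sigma * norm.ppf(u))

      samples = {name: lhs_lognormal(m, ef, n) for name, (m, ef) in events.items()}

      # Propagate each sample through a two-cut-set model via the min-cut-set upper bound.
      q1 = samples["PUMP-A"] * samples["PUMP-B"]
      q2 = samples["VALVE-C"]
      top = 1.0 - (1.0 - q1) * (1.0 - q2)
      print(f"mean = {top.mean():.3e}, 95th percentile = {np.percentile(top, 95):.3e}")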

  2. Systems Analysis Programs for Hands-on Integrated Reliability Evaluations (SAPHIRE) Quality Assurance Manual

    Energy Technology Data Exchange (ETDEWEB)

    C. L. Smith; R. Nims; K. J. Kvarfordt; C. Wharton

    2008-08-01

    The Systems Analysis Programs for Hands-on Integrated Reliability Evaluations (SAPHIRE) is a software application developed for performing a complete probabilistic risk assessment using a personal computer running the Microsoft Windows operating system. SAPHIRE is primarily funded by the U.S. Nuclear Regulatory Commission (NRC). The role of the INL in this project is that of software developer and tester. This development takes place using formal software development procedures and is subject to quality assurance (QA) processes. The purpose of this document is to describe how the SAPHIRE software QA is performed for Version 6 and 7, what constitutes its parts, and limitations of those processes.

  3. SAPHIRE 6.64, System Analysis Programs for Hands-on Integrated Reliability

    International Nuclear Information System (INIS)

    2001-01-01

    1 - Description of program or function: SAPHIRE is a collection of programs developed for the purpose of performing those functions necessary to create and analyze a complete Probabilistic Risk Assessment (PRA), primarily for nuclear power plants. The programs included in this suite are the Integrated Reliability and Risk Analysis System (IRRAS), the System Analysis and Risk Assessment (SARA) system, the Models And Results Database (MAR-D) system, and the Fault tree, Event tree, and P&ID (FEP) editors. Previously these programs were released as separate packages. These programs include functions to allow the user to create event trees and fault trees, to define accident sequences and basic event failure data, to solve system and accident sequence fault trees, to quantify cut sets, and to perform uncertainty analysis on the results. Also included in this program are features to allow the analyst to generate reports and displays that can be used to document the results of an analysis. Since this software is a very detailed technical tool, the user of this program should be familiar with PRA concepts and the methods used to perform these analyses. 2 - Methods: SAPHIRE is written in MODULA-2 and uses an integrated commercial graphics package to interactively construct and edit fault trees. The fault tree solving methods used are industry-recognized top-down algorithms. For quantification, the program uses standard methods to propagate the failure information through the generated cut sets. SAPHIRE includes a separate module called the Graphical Evaluation Module (GEM). GEM provides a highly specialized user interface with SAPHIRE which automates the process for evaluating operational events at commercial nuclear power plants. Using GEM, an analyst can estimate the risk associated with operational events (that is, perform a Level 1, Level 2, and Level 3 analysis for operational events) in a very efficient and expeditious manner. This on-line reference guide will
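
    The top-down fault tree solving mentioned above can be illustrated with a very small MOCUS-style expansion in Python; the three-gate tree is hypothetical, and the sketch omits the truncation and optimizations a production solver such as SAPHIRE would apply.

      # A three-gate fault tree given as a dictionary of gates; names are hypothetical.
      tree = {
          "TOP": ("AND", ["G1", "G2"]),
          "G1":  ("OR",  ["A", "B"]),
          "G2":  ("OR",  ["B", "C"]),
      }

      def expand(name):
          if name not in tree:                    # a basic event is its own cut set
              return [frozenset([name])]
          gate, inputs = tree[name]
          if gate == "OR":                        # OR gate: union of the inputs' cut sets
              return [cs for child in inputs for cs in expand(child)]
          result = [frozenset()]                  # AND gate: cross-product of the inputs' cut sets
          for child in inputs:
              child_sets = expand(child)
              result = [cs | ccs for cs in result for ccs in child_sets]
          return result

      def minimize(cut_sets):
          # Discard any cut set that has a proper subset among the others (minimality).
          unique = set(cut_sets)
          return [cs for cs in unique if not any(other < cs for other in unique)]

      print(sorted(minimize(expand("TOP")), key=len))  # minimal cut sets: {B} and {A, C}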

  4. SAPHIRE 8 Software Quality Assurance Plan

    Energy Technology Data Exchange (ETDEWEB)

    Curtis Smith

    2010-02-01

    This Quality Assurance (QA) Plan documents the QA activities that will be managed by the INL related to JCN N6423. The NRC developed the SAPHIRE computer code for performing probabilistic risk assessments (PRAs) using a personal computer (PC) at the Idaho National Laboratory (INL) under Job Code Number (JCN) L1429. SAPHIRE started out as a feasibility study for a PRA code to be run on a desktop PC and evolved through several phases into a state-of-the-art PRA code. The developmental activity of SAPHIRE was the result of two concurrent important events: the tremendous expansion of PC software and hardware capability in the 1990s and the onset of a risk-informed regulation era.

  5. Assimilation of SAPHIR radiance: impact on hyperspectral radiances in 4D-VAR

    Science.gov (United States)

    Indira Rani, S.; Doherty, Amy; Atkinson, Nigel; Bell, William; Newman, Stuart; Renshaw, Richard; George, John P.; Rajagopal, E. N.

    2016-04-01

    Assimilation of a new observation dataset in an NWP system may affect the quality of an existing observation dataset against the model background (short forecast), which in turn influences the use of the existing observations in the NWP system. The effect of the use of one dataset on the use of another can be quantified as positive, negative, or neutral. The impact of adding a new dataset is defined as positive if the number of assimilated observations of an existing observation type increases, and the bias and standard deviation decrease, compared to the control experiment (without the new dataset). Recently a new dataset, Megha-Tropiques SAPHIR radiances, which provides atmospheric humidity information, was added to the Unified Model 4D-VAR assimilation system. In this paper we discuss the impact of SAPHIR on the assimilation of hyperspectral radiances such as AIRS, IASI, and CrIS. Though SAPHIR is a microwave instrument, its impact can be clearly seen in the use of hyperspectral radiances in the 4D-VAR data assimilation system, in addition to other microwave and infrared observations. SAPHIR assimilation decreased the standard deviation in the spectral channels with wavenumbers from 650 to 1600 cm-1 for all three hyperspectral sounders. A similar impact on the hyperspectral radiances can be seen from the assimilation of other microwave radiances, such as those from AMSR2 and the SSMIS imager.
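
    The impact criterion described above compares observation-minus-background (O-B) statistics for an existing instrument with and without the new data. A minimal sketch of that bookkeeping in Python, using synthetic stand-in numbers rather than data from the study:

      import numpy as np

      def ob_statistics(observations, background):
          # Bias and standard deviation of the innovations (O - B).
          departures = observations - background
          return departures.mean(), departures.std(ddof=1)

      rng = np.random.default_rng(1)
      obs = rng.normal(250.0, 2.0, size=500)                  # stand-in brightness temperatures (K)
      bg_control = obs + rng.normal(0.3, 1.0, size=500)       # background from the control run
      bg_with_saphir = obs + rng.normal(0.1, 0.8, size=500)   # background when SAPHIR is also assimilated

      for label, bg in [("control", bg_control), ("with SAPHIR", bg_with_saphir)]:
          bias, std = ob_statistics(obs, bg)
          print(f"{label}: O-B bias = {bias:+.2f} K, std dev = {std:.2f} K")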

  6. Design and construction of the SAPHIR detector

    Energy Technology Data Exchange (ETDEWEB)

    Schwille, W.J.; Bockhorst, M.; Burbach, G.; Burgwinkel, R.; Empt, J.; Guse, B.; Haas, K.M.; Hannappel, J.; Heinloth, K.; Hey, T.; Honscheid, K.; Jahnen, T.; Jakob, H.P.; Joepen, N.; Juengst, H.; Kirch, U.; Klein, F.J. (all: Bonn Univ. (Germany), Physikalisches Inst.)

    1994-05-15

    The design, construction, and performance of the large solid angle magnetic spectrometer SAPHIR are described. It was built for the investigation of photon-induced reactions on nucleons and light nuclei with multi-particle final states up to photon energies of 3.1 GeV. The detector is equipped with a tagged photon beam facility and is operated at the stretcher ring ELSA in Bonn. (orig.)

  7. The Unified Extensional Versioning Model

    DEFF Research Database (Denmark)

    Asklund, U.; Bendix, Lars Gotfred; Christensen, H. B.

    1999-01-01

    Versioning of components in a system is a well-researched field where various adequate techniques have already been established. In this paper, we look at how versioning can be extended to cover also the structural aspects of a system. There exist two basic techniques for versioning - intentional...

  8. Systems Analysis Programs for Hands-On Integrated Reliability Evaluations (SAPHIRE) Technical Reference

    Energy Technology Data Exchange (ETDEWEB)

    C. L. Smith; W. J. Galyean; S. T. Beck

    2008-08-01

    The Systems Analysis Programs for Hands-on Integrated Reliability Evaluations (SAPHIRE) is a software application developed for performing a complete probabilistic risk assessment (PRA) using a personal computer (PC) running the Microsoft Windows operating system. Herein information is provided on the principles used in the construction and operation of Version 6.0 and 7.0 of the SAPHIRE system. This report summarizes the fundamental mathematical concepts of sets and logic, fault trees, and probability. This volume then describes the algorithms used to construct a fault tree and to obtain the minimal cut sets. It gives the formulas used to obtain the probability of the top event from the minimal cut sets, and the formulas for probabilities that apply for various assumptions concerning repairability and mission time. It defines the measures of basic event importance that SAPHIRE can calculate. This volume gives an overview of uncertainty analysis using simple Monte Carlo sampling or Latin Hypercube sampling, and states the algorithms used by this program to generate random basic event probabilities from various distributions. Also covered are enhanced capabilities such as seismic analysis, cut set "recovery," end state manipulation, and use of "compound events."
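
    The basic event importance measures referred to above can be illustrated with the following Python sketch, assuming the conventional definitions of Fussell-Vesely importance, risk achievement worth, and risk reduction worth; the cut sets and probabilities are hypothetical examples, not SAPHIRE data.

      def top_probability(cut_sets, probs):
          bound = 1.0
          for cs in cut_sets:
              q = 1.0
              for e in cs:
                  q *= probs[e]
              bound *= (1.0 - q)
          return 1.0 - bound

      cut_sets = [["PUMP-A", "PUMP-B"], ["VALVE-C"]]             # hypothetical minimal cut sets
      probs = {"PUMP-A": 3e-3, "PUMP-B": 3e-3, "VALVE-C": 1e-4}

      base = top_probability(cut_sets, probs)
      for event in probs:
          p_zero = top_probability(cut_sets, {**probs, event: 0.0})
          p_one = top_probability(cut_sets, {**probs, event: 1.0})
          fv = (base - p_zero) / base   # Fussell-Vesely importance
          raw = p_one / base            # risk achievement worth
          rrw = base / p_zero           # risk reduction worth
          print(f"{event}: FV={fv:.3f}  RAW={raw:.1f}  RRW={rrw:.2f}")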

  9. Systems Analysis Programs for Hands-On Integrated Reliability Evaluations (SAPHIRE) Technical Reference Manual

    Energy Technology Data Exchange (ETDEWEB)

    C. L. Smith; W. J. Galyean; S. T. Beck

    2006-07-01

    The Systems Analysis Programs for Hands-on Integrated Reliability Evaluations (SAPHIRE) is a software application developed for performing a complete probabilistic risk assessment (PRA) using a personal computer (PC) running the Microsoft Windows operating system. Herein information is provided on the principles used in the construction and operation of Version 6.0 and 7.0 of the SAPHIRE system. This report summarizes the fundamental mathematical concepts of sets and logic, fault trees, and probability. This volume then describes the algorithms used to construct a fault tree and to obtain the minimal cut sets. It gives the formulas used to obtain the probability of the top event from the minimal cut sets, and the formulas for probabilities that apply for various assumptions concerning repairability and mission time. It defines the measures of basic event importance that SAPHIRE can calculate. This volume gives an overview of uncertainty analysis using simple Monte Carlo sampling or Latin Hypercube sampling, and states the algorithms used by this program to generate random basic event probabilities from various distributions. Also covered are enhanced capabilities such as seismic analysis, cut set "recovery," end state manipulation, and use of "compound events."

  10. SAPHIR, a simulator for engineering and training on N4-type nuclear power plants

    International Nuclear Information System (INIS)

    Vovan, C.

    1999-01-01

    SAPHIR, the new simulator developed by FRAMATOME, has been designed to be a convenient tool for engineering and training for different types of nuclear power plants. Its first application is for the French 'N4' four-loop 1500 MWe PWR. The basic features of SAPHIR are: (1) use of advanced codes for modelling the primary and secondary systems, including an axial steam generator model, (2) use of a simulation workshop containing different tools for modelling fluid, electrical, and instrumentation and control networks, (3) a Man-Machine Interface designed for easy, user-friendly operation, which can simulate the different computerized control consoles of the 'N4' control room. This paper outlines features and capabilities of this tool, both for engineering and training purposes. (author)

  11. Model-based version management system framework

    International Nuclear Information System (INIS)

    Mehmood, W.

    2016-01-01

    In this paper we present a model-based version management system. A Version Management System (VMS), a branch of software configuration management (SCM), aims to provide a controlling mechanism for the evolution of software artifacts created during the software development process. Controlling the evolution requires performing many activities, such as construction and creation of versions, identification of differences between versions, conflict detection, and merging. Traditional VMS systems are file-based and consider software systems as a set of text files. File-based VMS systems are not adequate for performing software configuration management activities such as version control of software artifacts produced in earlier phases of the software life cycle. New challenges of model differencing, merging, and evolution control arise when models are used as the central artifact. The goal of this work is to present a generic model-based VMS framework which can be used to overcome the problems of traditional file-based VMS systems and provide model versioning services. (author)

  12. SAPHIRE 8 Quality Assurance Software Metrics Report

    Energy Technology Data Exchange (ETDEWEB)

    Kurt G. Vedros

    2011-08-01

    The purpose of this review of software metrics is to examine the quality of the metrics gathered in the 2010 IV&V and to set an outline for results of updated metrics runs to be performed. We find from the review that maintaining the accepted quality standards presented in the SAPHIRE 8 initial Independent Verification and Validation (IV&V) of April 2010 is most easily achieved by continuing to utilize the tools used in that effort while adding a metric for bug tracking and resolution. Recommendations from the final IV&V were to continue periodic measurable metrics, such as McCabe's complexity measure, to ensure quality is maintained. The software tools used to measure quality in the IV&V were CodeHealer, Coverage Validator, Memory Validator, Performance Validator, and Thread Validator. These are evaluated based on their capabilities. We attempted to run their latest revisions with the newer Delphi 2010-based SAPHIRE 8 code that has been developed and were successful with all of the Validator series of tools on small tests. Another recommendation from the IV&V was to incorporate a bug tracking and resolution metric. To improve our capability of producing this metric, we integrated our current web reporting system with the SpiraTest test management software purchased earlier this year to track requirements traceability.
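
    As a brief worked note on the metric named above, McCabe's cyclomatic complexity is conventionally computed from a routine's control-flow graph as M = E - N + 2P, where E is the number of edges, N the number of nodes, and P the number of connected components; for example, a routine whose graph has 9 edges, 8 nodes, and one connected component has M = 9 - 8 + 2 = 3. The numbers here are illustrative, not measurements from the SAPHIRE 8 code.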

  13. Actinometric measurements of NO2 photolysis frequencies in the atmosphere simulation chamber SAPHIR

    Directory of Open Access Journals (Sweden)

    B. Bohn

    2005-01-01

    The simulation chamber SAPHIR at Forschungszentrum Jülich has UV-permeable Teflon walls facilitating atmospheric photochemistry studies under the influence of natural sunlight. Because the internal radiation field is strongly affected by construction elements, we use external radiometric measurements of spectral actinic flux and a model to calculate mean photolysis frequencies for the chamber volume (Bohn04B). In this work we determine NO2 photolysis frequencies j(NO2) within SAPHIR using chemical actinometry, by injecting NO2 and observing the chemical composition during illumination under various external conditions. In addition to a photo-stationary approach, a time-dependent method was developed to analyse the data. These measurements had two purposes: firstly, to check the model predictions with respect to diurnal and seasonal variations in the presence of direct sunlight, and secondly, to obtain an absolute calibration factor for the combined radiometry-model approach. We obtain a linear correlation between calculated and actinometric j(NO2). A calibration factor of 1.34±0.10 is determined, independent of conditions to a good approximation. This factor is in line with expectations and can be rationalised by internal reflections within the chamber. Taking into account the uncertainty of the actinometric j(NO2), an accuracy of 13% is estimated for the determination of j(NO2) in SAPHIR. In separate dark experiments, a rate constant of (1.93±0.12)×10-14 cm3 s-1 was determined for the NO+O3 reaction at 298 K using analytical and numerical methods of data analysis.
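
    The photo-stationary approach mentioned above rests on the steady-state balance between NO2 photolysis and the back-reaction NO + O3 -> NO2, which in its simplest form (chamber-specific corrections omitted) gives j(NO2) = k(NO+O3) [NO][O3] / [NO2]. With the rate constant quoted in the abstract and measured NO, O3, and NO2 concentrations, j(NO2) follows directly; for instance, k = 1.93×10^-14 cm^3 s^-1 with [NO] = 2×10^11 cm^-3, [O3] = 1×10^12 cm^-3, and [NO2] = 5×10^11 cm^-3 would give j(NO2) of roughly 7.7×10^-3 s^-1. The concentrations in this example are illustrative values, not data from the study.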

  14. Modeling report of DYMOND code (DUPIC version)

    Energy Technology Data Exchange (ETDEWEB)

    Park, Joo Hwan [KAERI, Taejon (Korea, Republic of); Yacout, Abdellatif M. [Argonne National Laboratory, Ilinois (United States)

    2003-04-01

    The DYMOND code employs the ITHINK dynamic modeling platform to assess 100-year dynamic evolution scenarios for postulated global nuclear energy parks. DYMOND was originally developed by ANL (Argonne National Laboratory) to perform fuel cycle analysis of LWR once-through and mixed LWR-FBR plants. Because extensive application of the DYMOND code has been requested, the first version has been modified to accommodate the DUPIC, MSR, and RTF fuel cycles. The DYMOND code is composed of three parts (the source language platform, input supply, and output), although these parts are not clearly distinguished. This report describes all the equations modeled in the modified DYMOND code (referred to as the DYMOND-DUPIC version). It is divided into five parts. Part A covers the reactor history model, which includes the amounts of requested fuel and spent fuel. Part B describes the fuel cycle model, covering fuel flow from the beginning to the end of the fuel cycle. Part C covers the reprocessing model, which includes recovery of burned uranium, plutonium, minor actinides, and fission products, as well as the amount of spent fuel in storage and disposal. Part D covers models for other fuel cycles, in particular the thorium fuel cycle for the MSR and RTF reactors. Part E covers the economics model, which provides cost information such as uranium mining cost, reactor operating cost, fuel cost, etc.

  15. Fiscal impacts model documentation. Version 1.0

    Energy Technology Data Exchange (ETDEWEB)

    Beck, S.L.; Scott, M.J.

    1986-05-01

    The Fiscal Impacts (FI) Model, Version 1.0 was developed under Pacific Northwest Laboratory's Monitored Retrievable Storage (MRS) Program to aid in development of the MRS Reference Site Environmental Document (PNL 5476). It computes estimates of 182 fiscal items for state and local government jurisdictions, using input data from the US Census Bureau's 1981 Survey of Governments and local population forecasts. The model can be adapted for any county or group of counties in the United States.

  16. Construction and calibration studies of the SAPHIR scintillation counters

    International Nuclear Information System (INIS)

    Kostrewa, D.

    1988-03-01

    For the scintillation counter system of the SAPHIR detector at the stretcher ring ELSA in Bonn, 50 time-of-flight counters and 12 trigger counters have been built. Each of them has two photomultipliers, one at each side. A laser calibration system, with a pulsed nitrogen laser as the central light source, has been optimized to monitor these photomultipliers. It was used to adjust the photomultipliers and to test their long- and short-term instabilities. (orig.)

  17. Forsmark - site descriptive model version 0

    International Nuclear Information System (INIS)

    2002-10-01

    During 2002, the Swedish Nuclear Fuel and Waste Management Company (SKB) is starting investigations at two potential sites for a deep repository in the Precambrian basement of the Fennoscandian Shield. The present report concerns one of those sites, Forsmark, which lies in the municipality of Oesthammar, on the east coast of Sweden, about 150 kilometres north of Stockholm. The site description should present all collected data and interpreted parameters of importance for the overall scientific understanding of the site, for the technical design and environmental impact assessment of the deep repository, and for the assessment of long-term safety. The site description will have two main components: a written synthesis of the site, summarising the current state of knowledge, as documented in the databases containing the primary data from the site investigations, and one or several site descriptive models, in which the collected information is interpreted and presented in a form which can be used in numerical models for rock engineering, environmental impact and long-term safety assessments. The site descriptive models are devised and stepwise updated as the site investigations proceed. The point of departure for this process is the regional site descriptive model, version 0, which is the subject of the present report. Version 0 is developed out of the information available at the start of the site investigation. This information, with the exception of data from tunnels and drill holes at the sites of the Forsmark nuclear reactors and the underground low-middle active radioactive waste storage facility, SFR, is mainly 2D in nature (surface data), and is general and regional, rather than site-specific, in content. For this reason, the Forsmark site descriptive model, version 0, as detailed in the present report, has been developed at a regional scale. It covers a rectangular area, 15 km in a southwest-northeast and 11 km in a northwest-southeast direction, around the

  18. Systems Analysis Programs for Hands-on Integrated Reliability Evaluations (SAPHIRE) Tutorial

    Energy Technology Data Exchange (ETDEWEB)

    C. L. Smith; S. T. Beck; S. T. Wood

    2008-08-01

    The Systems Analysis Programs for Hands-on Integrated Reliability Evaluations (SAPHIRE) refers to a set of computer programs that were developed to create and analyze probabilistic risk assessment (PRAs). This volume is the tutorial manual for the SAPHIRE system. In this document, a series of lessons are provided that guide the user through basic steps common to most analyses preformed with SAPHIRE. The tutorial is divided into two major sections covering both basic and advanced features. The section covering basic topics contains lessons that lead the reader through development of a probabilistic hypothetical problem involving a vehicle accident, highlighting the program’s most fundamental features. The advanced features section contains additional lessons that expand on fundamental analysis features of SAPHIRE and provide insights into more complex analysis techniques. Together, these two elements provide an overview into the operation and capabilities of the SAPHIRE software.

  19. Forsmark - site descriptive model version 0

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    2002-10-01

    During 2002, the Swedish Nuclear Fuel and Waste Management Company (SKB) is starting investigations at two potential sites for a deep repository in the Precambrian basement of the Fennoscandian Shield. The present report concerns one of those sites, Forsmark, which lies in the municipality of Oesthammar, on the east coast of Sweden, about 150 kilometres north of Stockholm. The site description should present all collected data and interpreted parameters of importance for the overall scientific understanding of the site, for the technical design and environmental impact assessment of the deep repository, and for the assessment of long-term safety. The site description will have two main components: a written synthesis of the site, summarising the current state of knowledge, as documented in the databases containing the primary data from the site investigations, and one or several site descriptive models, in which the collected information is interpreted and presented in a form which can be used in numerical models for rock engineering, environmental impact and long-term safety assessments. The site descriptive models are devised and stepwise updated as the site investigations proceed. The point of departure for this process is the regional site descriptive model, version 0, which is the subject of the present report. Version 0 is developed out of the information available at the start of the site investigation. This information, with the exception of data from tunnels and drill holes at the sites of the Forsmark nuclear reactors and the underground low-middle active radioactive waste storage facility, SFR, is mainly 2D in nature (surface data), and is general and regional, rather than site-specific, in content. For this reason, the Forsmark site descriptive model, version 0, as detailed in the present report, has been developed at a regional scale. It covers a rectangular area, 15 km in a southwest-northeast and 11 km in a northwest-southeast direction, around the

  20. Simpevarp - site descriptive model version 0

    International Nuclear Information System (INIS)

    2002-11-01

    During 2002, SKB is starting detailed investigations at two potential sites for a deep repository in the Precambrian rocks of the Fennoscandian Shield. The present report concerns one of those sites, Simpevarp, which lies in the municipality of Oskarshamn, on the southeast coast of Sweden, about 250 kilometres south of Stockholm. The site description will have two main components: a written synthesis of the site, summarising the current state of knowledge, as documented in the databases containing the primary data from the site investigations, and one or several site descriptive models, in which the collected information is interpreted and presented in a form which can be used in numerical models for rock engineering, environmental impact and long-term safety assessments. SKB maintains two main databases at the present time, a site characterisation database called SICADA and a geographic information system called SKB GIS. The site descriptive model will be developed and presented with the aid of the SKB GIS capabilities, and with SKBs Rock Visualisation System (RVS), which is also linked to SICADA. The version 0 model forms an important framework for subsequent model versions, which are developed successively, as new information from the site investigations becomes available. Version 0 is developed out of the information available at the start of the site investigation. In the case of Simpevarp, this is essentially the information which was compiled for the Oskarshamn feasibility study, which led to the choice of that area as a favourable object for further study, together with information collected since its completion. This information, with the exception of the extensive data base from the nearby Aespoe Hard Rock Laboratory, is mainly 2D in nature (surface data), and is general and regional, rather than site-specific, in content. Against this background, the present report consists of the following components: an overview of the present content of the databases

  1. Simpevarp - site descriptive model version 0

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    2002-11-01

    During 2002, SKB is starting detailed investigations at two potential sites for a deep repository in the Precambrian rocks of the Fennoscandian Shield. The present report concerns one of those sites, Simpevarp, which lies in the municipality of Oskarshamn, on the southeast coast of Sweden, about 250 kilometres south of Stockholm. The site description will have two main components: a written synthesis of the site, summarising the current state of knowledge, as documented in the databases containing the primary data from the site investigations, and one or several site descriptive models, in which the collected information is interpreted and presented in a form which can be used in numerical models for rock engineering, environmental impact and long-term safety assessments. SKB maintains two main databases at the present time, a site characterisation database called SICADA and a geographic information system called SKB GIS. The site descriptive model will be developed and presented with the aid of the SKB GIS capabilities, and with SKBs Rock Visualisation System (RVS), which is also linked to SICADA. The version 0 model forms an important framework for subsequent model versions, which are developed successively, as new information from the site investigations becomes available. Version 0 is developed out of the information available at the start of the site investigation. In the case of Simpevarp, this is essentially the information which was compiled for the Oskarshamn feasibility study, which led to the choice of that area as a favourable object for further study, together with information collected since its completion. This information, with the exception of the extensive data base from the nearby Aespoe Hard Rock Laboratory, is mainly 2D in nature (surface data), and is general and regional, rather than site-specific, in content. Against this background, the present report consists of the following components: an overview of the present content of the databases

  2. Investigation of aromatic compound degradation under atmospheric conditions in the outdoor simulation chamber SAPHIR

    Science.gov (United States)

    Nehr, Sascha; Bohn, Birger; Rohrer, Franz; Tillmann, Ralf; Wegener, Robert; Dorn, Hans-Peter; Häseler, Rolf; Brauers, Theo; Wahner, Andreas

    2010-05-01

    Ozone is produced in the lower troposphere by the OH-initiated photooxidation of volatile organic compounds in the presence of NOx. Aromatic hydrocarbons from anthropogenic sources are a major contributor to the OH-reactivity and thus to ozone formation in urban areas [1]. Moreover, their degradation leads to formation of secondary organic aerosol. Aromatic compounds are therefore important trace constituents with regard to air quality. We will present the results of photooxidation experiments which were conducted in the atmospheric simulation chamber SAPHIR at Forschungszentrum Jülich. The experiments were designed to investigate the degradation mechanisms of benzene and p-xylene, which are among the most abundant aromatics in urban air samples. Benzene and p-xylene were selected because they have high structural symmetry which limits the number of potential isomers of secondary products. The experiments were performed under low-NOx-conditions (≤ 2 ppb). SAPHIR was equipped with instruments for the measurement of the parent aromatics and their major oxidation products, OH radicals, important radical precursors (O3, HONO, HCHO), photolysis frequencies and particulate matter. As shown in previous studies, simulation chamber data from the photooxidation of aromatics cannot be explained satisfactorily with current photochemistry mechanisms. For example the MCMv3.1 tends to overestimate the ozone-concentration and to underestimate the OH-concentration [2]. In this study, we will contrast model calculations with experimental results to check if similar discrepancies can be observed in SAPHIR and how they can be resolved. Based on the results of this preparatory study, further simulation chamber experiments with special emphasis on the radical budget are scheduled in 2010. References: [1] J. G. Calvert, R. Atkinson, K.H. Becker, R.M. Kamens, J.H. Seinfeld, T.J. Wallington, G. Yarwood: The mechanisms of atmospheric oxidation of aromatic hydrocarbons, Oxford University

  3. All-sky radiance simulation of Megha-Tropiques SAPHIR microwave ...

    Indian Academy of Sciences (India)

    All-sky radiance (clear-sky and cloudy-sky) simulation has been performed for the six-channel microwave SAPHIR (Sounder for Atmospheric Profiling of Humidity in the Inter-tropics by Radiometry) sensor of the Megha-Tropiques (MT) satellite.

  4. Version control of pathway models using XML patches.

    Science.gov (United States)

    Saffrey, Peter; Orton, Richard

    2009-03-17

    Computational modelling has become an important tool in understanding biological systems such as signalling pathways. With an increase in the size and complexity of models comes a need for techniques to manage model versions and their relationship to one another. Model version control for pathway models shares some of the features of software version control but has a number of differences that warrant a specific solution. We present a model version control method, along with a prototype implementation, based on XML patches. We show its application to the EGF/RAS/RAF pathway. Our method allows quick and convenient storage of a wide range of model variations and enables a thorough explanation of these variations. Trying to produce these results without such methods results in slow and cumbersome development that is prone to frustration and human error.
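
    As a rough illustration of the idea (not the authors' implementation), a model variant can be stored as a patch against a base XML document and reconstructed on demand; the sketch below uses Python's standard library and a line-based diff as the "patch", whereas the paper works with structured XML patches.

        # Sketch: store a model variant as a delta against a base XML document and
        # reconstruct it on demand (conceptual illustration only; the paper's method
        # uses structured XML patches rather than line-based diffs).
        import difflib

        base_xml = [
            '<model name="pathway">\n',
            '  <parameter id="k1" value="0.1"/>\n',
            '  <parameter id="k2" value="0.05"/>\n',
            '</model>\n',
        ]
        variant_xml = [
            '<model name="pathway">\n',
            '  <parameter id="k1" value="0.2"/>\n',
            '  <parameter id="k2" value="0.05"/>\n',
            '</model>\n',
        ]

        # The stored "patch" for this variant is the delta, not a full copy.
        delta = list(difflib.ndiff(base_xml, variant_xml))

        # The variant can be reconstructed from the base document plus the patch.
        restored = list(difflib.restore(delta, 2))
        assert restored == variant_xml
        print("".join(d for d in delta if not d.startswith("  ")))  # changed lines only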

  5. Industrial Waste Management Evaluation Model Version 3.1

    Science.gov (United States)

    IWEM is a screening level ground water model designed to simulate contaminant fate and transport. IWEM v3.1 is the latest version of the IWEM software, which includes additional tools to evaluate the beneficial use of industrial materials

  6. GCFM Users Guide Revision for Model Version 5.0

    Energy Technology Data Exchange (ETDEWEB)

    Keimig, Mark A.; Blake, Coleman

    1981-08-10

    This paper documents alterations made to the MITRE/DOE Geothermal Cash Flow Model (GCFM) in the period of September 1980 through September 1981. Version 4.0 of GCFM was installed on the computer at the DOE San Francisco Operations Office in August 1980. This Version has also been distributed to about a dozen geothermal industry firms, for examination and potential use. During late 1980 and 1981, a few errors detected in the Version 4.0 code were corrected, resulting in Version 4.1. If you are currently using GCFM Version 4.0, it is suggested that you make the changes to your code that are described in Section 2.0. User's manual changes listed in Section 3.0 and Section 4.0 should then also be made.

  7. Investigation of the β-pinene photooxidation by OH in the atmosphere simulation chamber SAPHIR

    Science.gov (United States)

    Kaminski, Martin; Fuchs, Hendrik; Acir, Ismail-Hakki; Bohn, Birger; Brauers, Theo; Dorn, Hans-Peter; Häseler, Rolf; Hofzumahaus, Andreas; Li, Xin; Lutz, Anna; Nehr, Sascha; Rohrer, Franz; Tillmann, Ralf; Vereecken, Luc; Wegener, Robert; Wahner, Andreas

    2017-06-01

    Besides isoprene, monoterpenes are the non-methane volatile organic compounds (VOCs) with the highest global emission rates. Due to their high reactivity towards OH, monoterpenes can dominate the radical chemistry of the atmosphere in forested areas. In the present study the photochemical degradation mechanism of β-pinene was investigated in the Jülich atmosphere simulation chamber SAPHIR (Simulation of Atmospheric PHotochemistry In a large Reaction Chamber). One focus of this study is on the OH budget in the degradation process. Therefore, the SAPHIR chamber was equipped with instrumentation to measure radicals (OH, HO2, RO2), the total OH reactivity, important OH precursors (O3, HONO, HCHO), the parent VOC β-pinene, its main oxidation products, acetone and nopinone and photolysis frequencies. All experiments were carried out under low-NO conditions ( ≤ 300 ppt) and at atmospheric β-pinene concentrations ( ≤ 5 ppb) with and without addition of ozone. For the investigation of the OH budget, the OH production and destruction rates were calculated from measured quantities. Within the limits of accuracy of the instruments, the OH budget was balanced in all β-pinene oxidation experiments. However, even though the OH budget was closed, simulation results from the Master Chemical Mechanism (MCM) 3.2 showed that the OH production and destruction rates were underestimated by the model. The measured OH and HO2 concentrations were underestimated by up to a factor of 2, whereas the total OH reactivity was slightly overestimated because the model predicted a nopinone mixing ratio which was 3 times higher than measured. A new, theory-derived, first-generation product distribution by Vereecken and Peeters (2012) was able to reproduce the measured nopinone time series and the total OH reactivity. Nevertheless, the measured OH and HO2 concentrations remained underestimated by the numerical simulations. These observations together with the fact that the measured OH budget
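
    The budget analysis described here reduces to comparing two numbers: the OH production rate computed from measured precursors (for example, HONO photolysis and the HO2 + NO reaction) and the destruction rate given by the product of the measured OH concentration and the measured total OH reactivity. A minimal sketch with placeholder values, not data from these experiments:

        # Illustrative OH budget closure check; all numbers are placeholders, not
        # measurements from the SAPHIR experiments. Concentrations in cm^-3,
        # photolysis frequencies in s^-1, rate constants in cm^3 s^-1.
        HONO = 2.5e10          # roughly 1 ppb
        HO2 = 1.0e8
        NO = 7.5e8             # roughly 30 ppt
        OH = 4.0e6
        j_HONO = 1.5e-3
        k_HO2_NO = 8.0e-12
        k_OH_total = 10.0      # total OH reactivity in s^-1, measured directly

        # OH production from the measured precursors (HONO photolysis, HO2 + NO).
        P_OH = j_HONO * HONO + k_HO2_NO * HO2 * NO

        # OH destruction: OH concentration times total OH reactivity.
        D_OH = k_OH_total * OH

        print(f"P(OH) = {P_OH:.2e} cm^-3 s^-1, D(OH) = {D_OH:.2e} cm^-3 s^-1")
        print(f"P/D = {P_OH / D_OH:.2f} (a ratio near 1 means the budget is closed)")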

  8. Model Adequacy Analysis of Matching Record Versions in Nosql Databases

    Directory of Open Access Journals (Sweden)

    E. V. Tsviashchenko

    2015-01-01

    The article investigates a model of matching record versions; the goal of this work is to analyse the adequacy of that model. The model allows estimation of the distribution of a user’s processing time for record versions and of the distribution of the number of record versions. The second variant of the model was used, in which the time a client needs to process record versions depends explicitly on the number of updates performed by other users between the sequential updates performed by the current client. To assess the model’s adequacy, an experiment was conducted in a cloud cluster of 10 virtual nodes provided by DigitalOcean, with Ubuntu Server 14.04 used as the operating system (OS). The NoSQL system Riak was chosen for the experiments. Riak versions 2.0 and later provide the “dotted version vectors” (DVV) option, an extension of the classic vector clock. Their use guarantees that the number of versions stored simultaneously in the database will not exceed the number of clients operating in parallel on a record, which is important when conducting the experiments. The application was developed with the Java library provided by Riak, and the processes run directly on the nodes. Two records were used in the experiment: Z, the record whose versions are handled by the clients, and RZ, a service record containing record update counters. The application algorithm can be briefly described as follows: every client reads the versions of record Z, processes its updates using the RZ record counters, and saves the processed record in the database while the old versions are deleted from the DB. Then the client rereads the RZ record and increments the update counters for the other clients. After that, the client rereads the Z record, saves the necessary statistics, and evaluates the results of processing. If a conflict arises because of simultaneous updates of the RZ record, the client obtains all versions of that
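
    The experiment's client loop can be pictured with a toy, single-process sketch in which a plain dictionary stands in for the Riak bucket; there is no real concurrency, so no sibling versions actually accumulate. Only the record names Z and RZ are taken from the abstract, and the "processing" itself is invented for illustration.

        # Toy sketch of the client loop described above (no real Riak, no real
        # concurrency); record names Z and RZ follow the text.
        clients = ["client0", "client1", "client2"]
        db = {
            "Z": [0],                          # currently stored versions (siblings) of Z
            "RZ": {c: 0 for c in clients},     # per-client counters of others' updates
        }

        def client_step(name):
            versions = db["Z"]                 # read all stored versions of record Z
            pending = db["RZ"][name]           # updates by other clients since our last write
            db["Z"] = [max(versions) + 1]      # save the processed record; old versions dropped
            db["RZ"][name] = 0
            for other in clients:              # tell the other clients one more update happened
                if other != name:
                    db["RZ"][other] += 1
            return pending

        for _ in range(4):
            for c in clients:
                client_step(c)
        print(db)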

  9. USERS MANUAL: LANDFILL GAS EMISSIONS MODEL - VERSION 2.0

    Science.gov (United States)

    The document is a user's guide for a computer model, Version 2.0 of the Landfill Gas Emissions Model (LandGEM), for estimating air pollution emissions from municipal solid waste (MSW) landfills. The model can be used to estimate emission rates for methane, carbon dioxide, nonmet...

  10. Solar Advisor Model User Guide for Version 2.0

    Energy Technology Data Exchange (ETDEWEB)

    Gilman, P.; Blair, N.; Mehos, M.; Christensen, C.; Janzou, S.; Cameron, C.

    2008-08-01

    The Solar Advisor Model (SAM) provides a consistent framework for analyzing and comparing power system costs and performance across the range of solar technologies and markets, from photovoltaic systems for residential and commercial markets to concentrating solar power and large photovoltaic systems for utility markets. This manual describes Version 2.0 of the software, which can model photovoltaic and concentrating solar power technologies for electric applications for several markets. The current version of the Solar Advisor Model does not model solar heating and lighting technologies.

  11. The ONKALO area model. Version 1

    International Nuclear Information System (INIS)

    Kemppainen, K.; Ahokas, T.; Ahokas, H.; Paulamaeki, S.; Paananen, M.; Gehoer, S.; Front, K.

    2007-11-01

    The geological model of the ONKALO area consists of three submodels: the lithological model, the brittle deformation model and the alteration model. The lithological model gives properties of definite rock units that can be defined on the basis of the migmatite structures, textures and modal compositions. The brittle deformation model describes the results of brittle deformation, to which geophysical and hydrogeological results are added. The alteration model describes the occurrence of different alteration types and their possible effects. The rocks of Olkiluoto can be divided into two major classes: (1) supracrustal high-grade metamorphic rocks including various migmatitic gneisses, tonalitic-granodioritic-granitic gneisses, mica gneisses, quartz gneisses and mafic gneisses, and (2) igneous rocks including pegmatitic granites and diabase dykes. The migmatitic gneisses can further be divided into three subgroups in terms of the type of migmatite structure: veined gneisses, stromatic gneisses and diatexitic gneisses. On the basis of refolding and crosscutting relationships, the metamorphic supracrustal rocks have been subject to polyphased ductile deformation, including five stages. In the 3D modelling of the lithological units, an assumption has been made, on the basis of measurements in outcrops, investigation trenches and drill cores, that the pervasive, composite foliation produced as a result of polyphase ductile deformation has a rather constant attitude in the ONKALO area. Consequently, the strike and dip of the foliation have been used as a tool through which the lithologies have been correlated between the drillholes and from the surface to the drillholes. The bedrock at the Olkiluoto site has been subject to extensive hydrothermal alteration, which has taken place at reasonably low temperature conditions, the estimated temperature interval being from slightly over 300 deg C to less than 100 deg C. Two types of alteration can be observed: (1) pervasive (disseminated

  12. Borehole Optical Stratigraphy Modeling, Antarctica, Version 1

    Data.gov (United States)

    National Aeronautics and Space Administration — This data set consists of scripts and code designed for modeling the properties of boreholes in polar ice sheets, under a range of variations in the borehole...

  13. Smart Grid Interoperability Maturity Model Beta Version

    Energy Technology Data Exchange (ETDEWEB)

    Widergren, Steven E.; Drummond, R.; Giroti, Tony; Houseman, Doug; Knight, Mark; Levinson, Alex; longcore, Wayne; Lowe, Randy; Mater, J.; Oliver, Terry V.; Slack, Phil; Tolk, Andreas; Montgomery, Austin

    2011-12-02

    The GridWise Architecture Council was formed by the U.S. Department of Energy to promote and enable interoperability among the many entities that interact with the electric power system. This balanced team of industry representatives proposes principles for the development of interoperability concepts and standards. The Council provides industry guidance and tools that make it an available resource for smart grid implementations. In the spirit of advancing interoperability of an ecosystem of smart grid devices and systems, this document presents a model for evaluating the maturity of the artifacts and processes that specify the agreement of parties to collaborate across an information exchange interface. You are expected to have a solid understanding of large, complex system integration concepts and experience in dealing with software component interoperation. Those without this technical background should read the Executive Summary for a description of the purpose and contents of the document. Other documents, such as checklists, guides, and whitepapers, exist for targeted purposes and audiences. Please see the www.gridwiseac.org website for more products of the Council that may be of interest to you.

  14. Implementation of the FASTBUS data-acquisition system in the readout of the SAPHIR detector

    International Nuclear Information System (INIS)

    Empt, J.

    1993-12-01

    The magnetic detector SAPHIR is laid out to detect multiparticle final states with good accuracy; in particular, it is designed for good photon detection capability. Therefore a large electromagnetic calorimeter was built, consisting of 98 modules covering a detection area of about 16 m² in the forward direction. For this calorimeter a brass-gas sandwich detector was developed, with signal wires perpendicular to the converter planes. For the data acquisition of a major part of this calorimeter a modular FASTBUS system is used. In this report the FASTBUS system and its installation in the SAPHIR Online Program are described. (orig.)

  15. A conceptual model specification language (CMSL Version 2)

    NARCIS (Netherlands)

    Wieringa, Roelf J.

    1992-01-01

    Version 2 of a language (CMSL) to specify conceptual models is defined. CMSL consists of two parts, the value specification language VSL and the object specification language OSL. There is a formal semantics and an inference system for CMSL, but research on this still continues. A method for

  16. IDC Use Case Model Survey Version 1.0.

    Energy Technology Data Exchange (ETDEWEB)

    Carr, Dorthe B.; Harris, James M.

    2014-12-01

    This document contains the brief descriptions for the actors and use cases contained in the IDC Use Case Model Survey. Revisions: V1.0 (12/2014), IDC Re-engineering Project Team, initial delivery, authorized by M. Harris.

  17. IDC Use Case Model Survey Version 1.1.

    Energy Technology Data Exchange (ETDEWEB)

    Harris, James Mark [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Carr, Dorthe B. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)

    2015-02-01

    This document contains the brief descriptions for the actors and use cases contained in the IDC Use Case Model. Revisions: V1.0 (12/2014), SNL IDC Reengineering Project Team, initial delivery, authorized by M. Harris; V1.1 (2/2015), SNL IDC Reengineering Project Team, Iteration I2 review comments, authorized by M. Harris.

  18. Fiscal impacts model documentation. Version 1.0

    International Nuclear Information System (INIS)

    Beck, S.L.; Scott, M.J.

    1986-05-01

    The Fiscal Impacts (FI) Model, Version 1.0 was developed under Pacific Northwest Laboratory's Monitored Retrievable Storage (MRS) Program to aid in development of the MRS Reference Site Environmental Document (PNL 5476). It computes estimates of 182 fiscal items for state and local government jurisdictions, using input data from the US Census Bureau's 1981 Survey of Governments and local population forecasts. The model can be adapted for any county or group of counties in the United States

  19. Characterisation of the photolytic HONO-source in the atmosphere simulation chamber SAPHIR

    Directory of Open Access Journals (Sweden)

    F. Rohrer

    2005-01-01

    HONO formation has been proposed as an important OH radical source in simulation chambers for more than two decades. Besides the heterogeneous HONO formation by the dark reaction of NO2 and adsorbed water, a photolytic source has been proposed to explain the elevated reactivity in simulation chamber experiments. However, the mechanism of the photolytic process is not well understood so far. As expected, production of HONO and NOx was also observed inside the new atmospheric simulation chamber SAPHIR under solar irradiation. This photolytic HONO and NOx formation was studied with a sensitive HONO instrument under reproducible controlled conditions at atmospheric concentrations of other trace gases. It is shown that the photolytic HONO source in the SAPHIR chamber is not caused by NO2 reactions and that it is the only direct NOy source under illuminated conditions. In addition, the photolysis of nitrate which was recently postulated for the observed photolytic HONO formation on snow, ground, and glass surfaces, can be excluded in the chamber. A photolytic HONO source at the surface of the chamber is proposed which is strongly dependent on humidity, on light intensity, and on temperature. An empirical function describes these dependencies and reproduces the observed HONO formation rates to within 10%. It is shown that the photolysis of HONO represents the dominant radical source in the SAPHIR chamber for typical tropospheric O3/H2O concentrations. For these conditions, the HONO concentrations inside SAPHIR are similar to recent observations in ambient air.
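
    As an illustration of what such an empirical parameterization might look like in practice, the sketch below fits a simple product form S = a * J * RH * exp(-b/T) to synthetic data. This functional form and all numbers are assumptions chosen for illustration; they are not the function or coefficients derived in the paper.

        # Illustrative fit of an assumed product-form HONO-source parameterization
        # to synthetic data (not the paper's empirical function).
        import numpy as np
        from scipy.optimize import curve_fit

        def hono_source(X, a, b):
            J, RH, T = X                    # photolysis frequency, relative humidity, temperature
            return a * J * RH * np.exp(-b / T)

        rng = np.random.default_rng(0)
        J = rng.uniform(1e-3, 8e-3, 50)     # s^-1
        RH = rng.uniform(5.0, 60.0, 50)     # %
        T = rng.uniform(280.0, 310.0, 50)   # K

        # Synthetic "observations" generated from known coefficients plus 5 % noise.
        S_obs = 2.0e9 * J * RH * np.exp(-3000.0 / T) * rng.normal(1.0, 0.05, 50)

        (a_fit, b_fit), _ = curve_fit(hono_source, (J, RH, T), S_obs, p0=(1e9, 2500.0))
        print(f"fitted a = {a_fit:.2e}, b = {b_fit:.0f} K")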

  20. Site characterizations around KURT area-Geologic model (Version 1)-

    International Nuclear Information System (INIS)

    Park, Kyung Woo; Kim, Kyung Su; Koh, Yong Kwon; Kim, Geon Young

    2009-08-01

    To characterize the geologic elements around the study area for high-level radioactive waste disposal research in KAERI, several geological investigations, such as geophysical surveys and borehole drilling, have been carried out since 1997. In particular, the KURT (KAERI Underground Research Tunnel) was constructed in 2006 to understand the deep geological environment. Recently, deep boreholes, 500 m deep at the left research module inside the KURT and 1,000 m deep outside the KURT, were drilled around the KURT area to confirm and validate the geological model. The objective of this research is to construct the first version of the geological model around the KURT area from a hydrogeological point of view. The data in this study are based on the surface geological investigation and on boreholes drilled up to 2005. As a result, four geological elements were obtained from the geological analysis: a subsurface weathered zone, low-angled fracture zones, fracture zones and bedrock. The geometries of these elements are also represented in a three-dimensional model. The first version of the geological model built in this study will support construction of the hydrogeological model and the geochemical model

  1. Some Remarks on Stochastic Versions of the Ramsey Growth Model

    Czech Academy of Sciences Publication Activity Database

    Sladký, Karel

    2012-01-01

    Roč. 19, č. 29 (2012), s. 139-152 ISSN 1212-074X R&D Projects: GA ČR GAP402/10/1610; GA ČR GAP402/10/0956; GA ČR GAP402/11/0150 Institutional support: RVO:67985556 Keywords : Economic dynamics * Ramsey growth model with disturbance * stochastic dynamic programming * multistage stochastic programs Subject RIV: BB - Applied Statistics, Operational Research http://library.utia.cas.cz/separaty/2013/E/sladky-some remarks on stochastic versions of the ramsey growth model.pdf

  2. Solid Waste Projection Model: Database (Version 1.3)

    International Nuclear Information System (INIS)

    Blackburn, C.L.

    1991-11-01

    The Solid Waste Projection Model (SWPM) system is an analytical tool developed by Pacific Northwest Laboratory (PNL) for Westinghouse Hanford Company (WHC). The SWPM system provides a modeling and analysis environment that supports decisions in the process of evaluating various solid waste management alternatives. This document, one of a series describing the SWPM system, contains detailed information regarding the software and data structures utilized in developing the SWPM Version 1.3 Database. This document is intended for use by experienced database specialists and supports database maintenance, utility development, and database enhancement

  3. H2A Production Model, Version 2 User Guide

    Energy Technology Data Exchange (ETDEWEB)

    Steward, D.; Ramsden, T.; Zuboy, J.

    2008-09-01

    The H2A Production Model analyzes the technical and economic aspects of central and forecourt hydrogen production technologies. Using a standard discounted cash flow rate of return methodology, it determines the minimum hydrogen selling price, including a specified after-tax internal rate of return from the production technology. Users have the option of accepting default technology input values--such as capital costs, operating costs, and capacity factor--from established H2A production technology cases or entering custom values. Users can also modify the model's financial inputs. This new version of the H2A Production Model features enhanced usability and functionality. Input fields are consolidated and simplified. New capabilities include performing sensitivity analyses and scaling analyses to various plant sizes. This User Guide helps users already familiar with the basic tenets of H2A hydrogen production cost analysis get started using the new version of the model. It introduces the basic elements of the model then describes the function and use of each of its worksheets.
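
    The core calculation can be sketched as a search for the hydrogen price at which the net present value of the after-tax cash flows is zero at the target internal rate of return. The inputs below are placeholders, not H2A default technology cases, and taxes and depreciation are reduced to a single flat effective rate.

        # Sketch of a minimum-selling-price search via discounted cash flow
        # (placeholder inputs; simplified tax treatment).
        capital_cost = 50.0e6        # $, spent in year 0
        annual_o_and_m = 3.0e6       # $ per year
        annual_kg_h2 = 5.0e6         # kg of hydrogen produced per year
        lifetime_years = 20
        irr_target = 0.10            # required after-tax internal rate of return
        tax_rate = 0.25

        def npv_at_price(price):
            npv = -capital_cost
            for year in range(1, lifetime_years + 1):
                pre_tax = price * annual_kg_h2 - annual_o_and_m
                after_tax = pre_tax * (1.0 - tax_rate)
                npv += after_tax / (1.0 + irr_target) ** year
            return npv

        # Bisection on price until the NPV at the target IRR is approximately zero.
        lo, hi = 0.0, 100.0
        for _ in range(60):
            mid = 0.5 * (lo + hi)
            lo, hi = (mid, hi) if npv_at_price(mid) < 0 else (lo, mid)
        print(f"minimum hydrogen selling price ~ ${0.5 * (lo + hi):.2f}/kg")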

  4. Integrated Farm System Model Version 4.3 and Dairy Gas Emissions Model Version 3.3 Software development and distribution

    Science.gov (United States)

    Modeling routines of the Integrated Farm System Model (IFSM version 4.2) and Dairy Gas Emission Model (DairyGEM version 3.2), two whole-farm simulation models developed and maintained by USDA-ARS, were revised with new components for: (1) simulation of ammonia (NH3) and greenhouse gas emissions gene...

  5. What's new in the Atmospheric Model Evaluation Tool (AMET) version 1.3

    Science.gov (United States)

    A new version of the Atmospheric Model Evaluation Tool (AMET) has been released. The new version of AMET, version 1.3 (AMETv1.3), contains a number of updates and changes from the previous version of AMET (v1.2) released in 2012. First, the Perl scripts used in the previous ve...

  6. Interacting vector boson model and other versions of IBM

    International Nuclear Information System (INIS)

    Asherova, R.M.; Fursa, D.V.; Georgieva, A.; Smirnov, Yu.F.

    1991-01-01

    The Dyson mapping of the interacting vector boson model (IVBM) onto the standard IBM with dynamical symmetry U(21) is obtained. This version of the IBM contains S(T=1), D(T=1) and P(T=0) bosons, where T is the isospin of the bosons. From a group-theoretical viewpoint it corresponds to the realization of the Sp(12,R) generators in terms of generators of the HW(21)xU(6) group. The problem of the elimination of spurious states and the Hermitization of this boson representation is discussed. The image of the IVBM Hamiltonian in the space of the above-mentioned S, D, P bosons is found. 22 refs

  7. TOPAS 2 - a high-resolution tagging system at the Bonn SAPHIR detector

    International Nuclear Information System (INIS)

    Rappenecker, G.

    1989-02-01

    For the SAPHIR arrangement in Bonn a high-resolution tagging system has been developed, achieving an energy resolution of 2 MeV and covering the photon energy range (0.94-0.34) E0 for primary electron energies E0 from 1.0 GeV upwards. The chamber gas mixtures, among them ArCH4 and ArC2H6, were investigated with regard to performance, cluster size and coincidence width. (orig.)

  8. Solid Waste Projection Model: Database (Version 1.4)

    International Nuclear Information System (INIS)

    Blackburn, C.; Cillan, T.

    1993-09-01

    The Solid Waste Projection Model (SWPM) system is an analytical tool developed by Pacific Northwest Laboratory (PNL) for Westinghouse Hanford Company (WHC). The SWPM system provides a modeling and analysis environment that supports decisions in the process of evaluating various solid waste management alternatives. This document, one of a series describing the SWPM system, contains detailed information regarding the software and data structures utilized in developing the SWPM Version 1.4 Database. This document is intended for use by experienced database specialists and supports database maintenance, utility development, and database enhancement. Those interested in using the SWPM database should refer to the SWPM Database User's Guide. This document is available from the PNL Task M Project Manager (D. L. Stiles, 509-372-4358), the PNL Task L Project Manager (L. L. Armacost, 509-372-4304), the WHC Restoration Projects Section Manager (509-372-1443), or the WHC Waste Characterization Manager (509-372-1193)

  9. CLPX-Model: Rapid Update Cycle 40km (RUC-40) Model Output Reduced Data, Version 1

    Data.gov (United States)

    National Aeronautics and Space Administration — The Rapid Update Cycle, version 2 at 40km (RUC-2, known to the Cold Land Processes community as RUC40) model is a Mesoscale Analysis and Prediction System (MAPS)...

  10. The integrated Earth System Model Version 1: formulation and functionality

    Energy Technology Data Exchange (ETDEWEB)

    Collins, William D.; Craig, Anthony P.; Truesdale, John E.; Di Vittorio, Alan; Jones, Andrew D.; Bond-Lamberty, Benjamin; Calvin, Katherine V.; Edmonds, James A.; Kim, Son H.; Thomson, Allison M.; Patel, Pralit L.; Zhou, Yuyu; Mao, Jiafu; Shi, Xiaoying; Thornton, Peter E.; Chini, Louise M.; Hurtt, George C.

    2015-07-23

    The integrated Earth System Model (iESM) has been developed as a new tool for projecting the joint human/climate system. The iESM is based upon coupling an Integrated Assessment Model (IAM) and an Earth System Model (ESM) into a common modeling infrastructure. IAMs are the primary tool for describing the human–Earth system, including the sources of global greenhouse gases (GHGs) and short-lived species, land use and land cover change, and other resource-related drivers of anthropogenic climate change. ESMs are the primary scientific tools for examining the physical, chemical, and biogeochemical impacts of human-induced changes to the climate system. The iESM project integrates the economic and human dimension modeling of an IAM and a fully coupled ESM within a single simulation system while maintaining the separability of each model if needed. Both IAM and ESM codes are developed and used by large communities and have been extensively applied in recent national and international climate assessments. By introducing heretofore-omitted feedbacks between natural and societal drivers, we can improve scientific understanding of the human–Earth system dynamics. Potential applications include studies of the interactions and feedbacks leading to the timing, scale, and geographic distribution of emissions trajectories and other human influences, corresponding climate effects, and the subsequent impacts of a changing climate on human and natural systems. This paper describes the formulation, requirements, implementation, testing, and resulting functionality of the first version of the iESM released to the global climate community.

  11. Independent Verification and Validation Of SAPHIRE 8 Software Requirements Project Number: N6423 U.S. Nuclear Regulatory Commission

    Energy Technology Data Exchange (ETDEWEB)

    Kent Norris

    2009-09-01

    The purpose of the Independent Verification and Validation (IV&V) role in the evaluation of the SAPHIRE requirements definition is to assess the activities that result in the specification, documentation, and review of the requirements that the software product must satisfy, including functionality, performance, design constraints, attributes and external interfaces. The IV&V team began this endeavor after the software engineering and software development of SAPHIRE had already begun. IV&V reviewed the requirements specified in the NRC Form 189s to verify that these requirements were included in SAPHIRE’s Software Verification and Validation Plan (SVVP).

  12. Land-Use Portfolio Modeler, Version 1.0

    Science.gov (United States)

    Taketa, Richard; Hong, Makiko

    2010-01-01

    -on-investment. The portfolio model, now known as the Land-Use Portfolio Model (LUPM), provided the framework for the development of the Land-Use Portfolio Modeler, Version 1.0 software (LUPM v1.0). The software provides a geographic information system (GIS)-based modeling tool for evaluating alternative risk-reduction mitigation strategies for specific natural-hazard events. The modeler uses information about a specific natural-hazard event and the features exposed to that event within the targeted study region to derive a measure of a given mitigation strategy's effectiveness. Harnessing the spatial capabilities of a GIS enables the tool to provide a rich, interactive mapping environment in which users can create, analyze, visualize, and compare different

  13. A new plant chamber facility, PLUS, coupled to the atmosphere simulation chamber SAPHIR

    Science.gov (United States)

    Hohaus, T.; Kuhn, U.; Andres, S.; Kaminski, M.; Rohrer, F.; Tillmann, R.; Wahner, A.; Wegener, R.; Yu, Z.; Kiendler-Scharr, A.

    2016-03-01

    A new PLant chamber Unit for Simulation (PLUS) for use with the atmosphere simulation chamber SAPHIR (Simulation of Atmospheric PHotochemistry In a large Reaction Chamber) has been built and characterized at the Forschungszentrum Jülich GmbH, Germany. The PLUS chamber is an environmentally controlled flow-through plant chamber. Inside PLUS the natural blend of biogenic emissions of trees is mixed with synthetic air and transferred to the SAPHIR chamber, where the atmospheric chemistry and the impact of biogenic volatile organic compounds (BVOCs) can be studied in detail. In PLUS all important environmental parameters (e.g., temperature, photosynthetically active radiation (PAR), soil relative humidity (RH)) are well controlled. The gas exchange volume of 9.32 m3 which encloses the stem and the leaves of the plants is constructed such that gases are exposed to only fluorinated ethylene propylene (FEP) Teflon film and other Teflon surfaces to minimize any potential losses of BVOCs in the chamber. Solar radiation is simulated using 15 light-emitting diode (LED) panels, which have an emission strength up to 800 µmol m-2 s-1. Results of the initial characterization experiments are presented in detail. Background concentrations, mixing inside the gas exchange volume, and transfer rate of volatile organic compounds (VOCs) through PLUS under different humidity conditions are explored. Typical plant characteristics such as light- and temperature- dependent BVOC emissions are studied using six Quercus ilex trees and compared to previous studies. Results of an initial ozonolysis experiment of BVOC emissions from Quercus ilex at typical atmospheric concentrations inside SAPHIR are presented to demonstrate a typical experimental setup and the utility of the newly added plant chamber.

  14. A new plant chamber facility PLUS coupled to the atmospheric simulation chamber SAPHIR

    Science.gov (United States)

    Hohaus, T.; Kuhn, U.; Andres, S.; Kaminski, M.; Rohrer, F.; Tillmann, R.; Wahner, A.; Wegener, R.; Yu, Z.; Kiendler-Scharr, A.

    2015-11-01

    A new PLant chamber Unit for Simulation (PLUS) for use with the atmosphere simulation chamber SAPHIR (Simulation of Atmospheric PHotochemistry In a large Reaction Chamber) has been built and characterized at the Forschungszentrum Jülich GmbH, Germany. The PLUS chamber is an environmentally controlled flow-through plant chamber. Inside PLUS the natural blend of biogenic emissions of trees is mixed with synthetic air and transferred to the SAPHIR chamber, where the atmospheric chemistry and the impact of biogenic volatile organic compounds (BVOC) can be studied in detail. In PLUS all important environmental parameters (e.g. temperature, PAR, soil RH etc.) are well controlled. The gas exchange volume of 9.32 m3 which encloses the stem and the leaves of the plants is constructed such that gases are exposed to FEP Teflon film and other Teflon surfaces only, to minimize any potential losses of BVOCs in the chamber. Solar radiation is simulated using 15 LED panels which have an emission strength up to 800 μmol m-2 s-1. Results of the initial characterization experiments are presented in detail. Background concentrations, mixing inside the gas exchange volume, and the transfer rate of volatile organic compounds (VOC) through PLUS under different humidity conditions are explored. Typical plant characteristics such as light- and temperature-dependent BVOC emissions are studied using six Quercus ilex trees and compared to previous studies. Results of an initial ozonolysis experiment of BVOC emissions from Quercus ilex at typical atmospheric concentrations inside SAPHIR are presented to demonstrate a typical experimental setup and the utility of the newly added plant chamber.

  15. BehavePlus fire modeling system, version 5.0: Variables

    Science.gov (United States)

    Patricia L. Andrews

    2009-01-01

    This publication has been revised to reflect updates to version 4.0 of the BehavePlus software. It was originally published as the BehavePlus fire modeling system, version 4.0: Variables in July, 2008.The BehavePlus fire modeling system is a computer program based on mathematical models that describe wildland fire behavior and effects and the...

  16. Computerized transportation model for the NRC Physical Protection Project. Versions I and II

    International Nuclear Information System (INIS)

    Anderson, G.M.

    1978-01-01

    Details on two versions of a computerized model for the transportation system of the NRC Physical Protection Project are presented. The Version I model permits scheduling of all types of transport units associated with a truck fleet, including truck trailers, truck tractors, escort vehicles and crews. A fixed-fleet itinerary construction process is used in which iterations on fleet size are required until the service requirements are satisfied. The Version II model adds an aircraft mode capability and provides for a more efficient non-fixed-fleet itinerary generation process. Test results using both versions are included

  17. Investigation of OH Radical Regeneration from Isoprene Oxidation Across Different NOx Regimes in the Atmosphere Simulation Chamber SAPHIR

    Science.gov (United States)

    Novelli, A.; Bohn, B.; Dorn, H. P.; Häseler, R.; Hofzumahaus, A.; Kaminski, M.; Yu, Z.; Li, X.; Tillmann, R.; Wegener, R.; Fuchs, H.; Kiendler-Scharr, A.; Wahner, A.

    2017-12-01

    The hydroxyl radical (OH) is the dominant daytime oxidant in the troposphere. It starts the degradation of volatile organic compounds (VOC) originating from both anthropogenic and biogenic emissions. Hence, it is a crucial trace species in model simulations as it has a large impact on many reactive trace gases. Many field campaigns performed in isoprene dominated environment in low NOx conditions have shown large discrepancies between the measured and the modelled OH radical concentrations. These results have contributed to the discovery of new regeneration paths for OH radicals from isoprene-OH second generation products with maximum efficiency at low NO. The current chemical models (e.g. MCM 3.3.1) include this novel chemistry allowing for an investigation of the validity of the OH regeneration at different chemical conditions. Over 11 experiments focusing on the OH oxidation of isoprene were performed at the SAPHIR chamber in the Forschungszentrum Jülich. Measurements of VOCs, NOx, O3, HONO were performed together with the measurement of OH radicals (by both LIF-FAGE and DOAS) and OH reactivity. Within the simulation chamber, the NO mixing ratio was varied between 0.05 to 2 ppbv allowing the investigation of both the "new" regeneration path for OH radicals and the well-known NO+HO2 mechanism. A comparison with the MCM 3.3.1 that includes the upgraded LIM1 mechanism showed very good agreement (within 10%) for the OH data at all concentrations of NOx investigated. Comparison with different models, without LIM1 and with updated rates for the OH regeneration, will be presented together with a detailed analysis of the impact of this study on results from previous field campaigns.

  18. GPM SAPHIR on MT1 Common Calibrated Brightness Temperature L1C 1.5 hours 10 km V05 (GPM_1CMT1SAPHIR) at GES DISC

    Data.gov (United States)

    National Aeronautics and Space Administration — Version 5 is the current version of the data set. Version 4 is no longer available and has been superseded by Version 5. 1CAMSR2 contains common calibrated...

  19. ANLECIS-1: Version of ANLECIS Program for Calculations with the Asymetric Rotational Model

    International Nuclear Information System (INIS)

    Lopez Mendez, R.; Garcia Moruarte, F.

    1986-01-01

    A new, modified version of the ANLECIS code is reported. This version allows the cross section of the direct process to be fitted with the asymmetric rotational model simultaneously with the cross section of the compound-nucleus process, treated with the Hauser-Feshbach formalism including modern statistical corrections. Calculations based on this version show a dependence of the compound-nucleus cross section on the asymmetry parameter γ. (author). 19 refs.

  20. CENTURY: Modeling Ecosystem Responses to Climate Change, Version 4 (VEMAP 1995)

    Data.gov (United States)

    National Aeronautics and Space Administration — ABSTRACT: The CENTURY model, Version 4, is a general model of plant-soil nutrient cycling that is being used to simulate carbon and nutrient dynamics for different...

  1. CENTURY: Modeling Ecosystem Responses to Climate Change, Version 4 (VEMAP 1995)

    Data.gov (United States)

    National Aeronautics and Space Administration — The CENTURY model, Version 4, is a general model of plant-soil nutrient cycling that is being used to simulate carbon and nutrient dynamics for different types of...

  2. Particle identification by time-of-flight measurement in the SAPHIR

    International Nuclear Information System (INIS)

    Hoffmann-Rothe, P.

    1993-02-01

    Using photoproduction data measured with the SAPHIR detector for different target materials (solid CH2, liquid H2, liquid D2), a detailed investigation and discussion of the detector's performance in measuring the time of flight of charged particles and in separating particles of different mass has been carried out. A FORTRAN program has been written which provides a calibration of the scintillator panels of the TOF hodoscopes, calculates correction factors for the time-walk effect and finally, by combining the time of flight with the track momentum measurement, determines particle masses. The current configuration of the detector makes it possible to separate protons from pions up to a particle momentum of 1.6 GeV/c. Protons and kaons can be separated up to a momentum of 1.3 GeV/c, kaons and pions up to a momentum of 0.85 GeV/c. (orig.) [de
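
    The mass determination from time of flight and momentum follows from relativistic kinematics: with flight path L and flight time t, beta = L/(c t) and m = p * sqrt(1/beta^2 - 1) when the momentum is expressed in GeV/c. The numbers in the sketch below are illustrative, not SAPHIR calibration data.

        # Worked example of mass reconstruction from time of flight and momentum
        # (illustrative flight path and times, not SAPHIR data).
        import math

        c = 299_792_458.0            # m/s

        def mass_gev(p_gev, path_m, tof_ns):
            beta = path_m / (c * tof_ns * 1e-9)
            return p_gev * math.sqrt(1.0 / beta**2 - 1.0)

        # A particle with p = 1.0 GeV/c over a 3.0 m flight path:
        for name, tof in [("pion-like", 10.1), ("proton-like", 13.7)]:
            print(f"{name}: tof = {tof} ns -> m = {mass_gev(1.0, 3.0, tof):.3f} GeV/c^2")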

  3. Tier I Rice Model - Version 1.0 - Guidance for Estimating Pesticide Concentrations in Rice Paddies

    Science.gov (United States)

    Describes a Tier I Rice Model (Version 1.0) for estimating surface water exposure from the use of pesticides in rice paddies. The concentration calculated can be used for aquatic ecological risk and drinking water exposure assessments.

  4. Estimating Parameters for the PVsyst Version 6 Photovoltaic Module Performance Model

    Energy Technology Data Exchange (ETDEWEB)

    Hansen, Clifford [Sandia National Laboratories (SNL-NM), Albuquerque, NM (United States)

    2015-10-01

    We present an algorithm to determine parameters for the photovoltaic module performance model encoded in the software package PVsyst(TM) version 6. Our method operates on current-voltage (I-V) curves measured over a range of irradiance and temperature conditions. We describe the method and illustrate its steps using data for a 36-cell crystalline silicon module. We qualitatively compare our method with one other technique for estimating parameters for the PVsyst(TM) version 6 model.
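
    As a rough illustration of this kind of parameter estimation (a simplified single-diode equation with the series resistance neglected, which is not the PVsyst version 6 formulation or the report's algorithm), module parameters can be recovered from an I-V curve by nonlinear least squares:

        # Simplified single-diode I-V parameter fit (illustrative only; series
        # resistance neglected so the current is explicit in voltage).
        import numpy as np
        from scipy.optimize import curve_fit

        Ns, Vth = 36, 0.0259                 # cells in series, thermal voltage (V) near 25 C

        def iv_model(V, IL, log10_I0, Rsh, n):
            I0 = 10.0 ** log10_I0            # fit the diode saturation current on a log scale
            return IL - I0 * np.expm1(V / (n * Ns * Vth)) - V / Rsh

        # Synthetic "measured" I-V curve generated from known parameters plus noise.
        rng = np.random.default_rng(1)
        V = np.linspace(0.0, 27.0, 40)
        I_meas = iv_model(V, 5.0, -9.0, 300.0, 1.3) + rng.normal(0.0, 0.01, V.size)

        popt, _ = curve_fit(iv_model, V, I_meas, p0=[4.5, -8.0, 200.0, 1.2],
                            bounds=([0, -12, 1, 0.8], [10, -6, 1e4, 2.0]))
        print({k: round(v, 3) for k, v in zip(["IL", "log10_I0", "Rsh", "n"], popt)})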

  5. Calibrating and Updating the Global Forest Products Model (GFPM version 2014 with BPMPD)

    Science.gov (United States)

    Joseph Buongiorno; Shushuai Zhu

    2014-01-01

    The Global Forest Products Model (GFPM) is an economic model of global production, consumption, and trade of forest products. An earlier version of the model is described in Buongiorno et al. (2003). The GFPM 2014 has data and parameters to simulate changes of the forest sector from 2010 to 2030. Buongiorno and Zhu (2014) describe how to use the model for simulation....

  6. Calibrating and updating the Global Forest Products Model (GFPM version 2016 with BPMPD)

    Science.gov (United States)

    Joseph Buongiorno; Shushuai  Zhu

    2016-01-01

    The Global Forest Products Model (GFPM) is an economic model of global production, consumption, and trade of forest products. An earlier version of the model is described in Buongiorno et al. (2003). The GFPM 2016 has data and parameters to simulate changes of the forest sector from 2013 to 2030. Buongiorno and Zhu (2015) describe how to use the model for...

  7. Integrated Biosphere Simulator Model (IBIS), Version 2.5

    Data.gov (United States)

    National Aeronautics and Space Administration — ABSTRACT: The Integrated Biosphere Simulator (or IBIS) is designed to be a comprehensive model of the terrestrial biosphere. Tthe model represents a wide range of...

  8. Integrated Biosphere Simulator Model (IBIS), Version 2.5

    Data.gov (United States)

    National Aeronautics and Space Administration — The Integrated Biosphere Simulator (or IBIS) is designed to be a comprehensive model of the terrestrial biosphere. Tthe model represents a wide range of processes,...

  9. Integrated Baseline System (IBS) Version 2.0: Models guide

    Energy Technology Data Exchange (ETDEWEB)

    1994-03-01

    The Integrated Baseline System (IBS) is an emergency management planning and analysis tool being developed under the direction of the US Army Nuclear and Chemical Agency. This Models Guide summarizes the IBS use of several computer models for predicting the results of emergency situations. These include models for predicting dispersion/doses of airborne contaminants, traffic evacuation, explosion effects, heat radiation from a fire, and siren sound transmission. The guide references additional technical documentation on the models when such documentation is available from other sources. The audience for this manual is chiefly emergency management planners and analysts, but also data managers and system managers.

  10. The MiniBIOS model (version 1A4) at the RIVM

    NARCIS (Netherlands)

    Uijt de Haag PAM; Laheij GMH

    1993-01-01

    This report is the user's guide of the MiniBIOS model, version 1A4. The model is operational at the Laboratory of Radiation Research of the RIVM. MiniBIOS is a simulation model for calculating the transport of radionuclides in the biosphere and the consequential radiation dose to humans. The

  11. Microsoft Repository Version 2 and the Open Information Model.

    Science.gov (United States)

    Bernstein, Philip A.; Bergstraesser, Thomas; Carlson, Jason; Pal, Shankar; Sanders, Paul; Shutt, David

    1999-01-01

    Describes the programming interface and implementation of the repository engine and the Open Information Model for Microsoft Repository, an object-oriented meta-data management facility that ships in Microsoft Visual Studio and Microsoft SQL Server. Discusses Microsoft's component object model, object manipulation, queries, and information…

  12. Integrated Baseline System (IBS) Version 1.03: Models guide

    Energy Technology Data Exchange (ETDEWEB)

    1993-01-01

    The Integrated Baseline System (IBS), operated by the Federal Emergency Management Agency (FEMA), is a system of computerized tools for emergency planning and analysis. This document is the models guide for the IBS and explains how to use the emergency-related computer models. This document provides information for the experienced system user, and is the primary reference for the computer modeling software supplied with the system. It is designed for emergency managers and planners, and others familiar with the concepts of computer modeling. Although the IBS manual set covers basic and advanced operations, it is not a complete reference document set. Emergency situation modeling software in the IBS is supported by additional technical documents. Some of the other IBS software is commercial software for which more complete documentation is available. The IBS manuals reference such documentation where necessary.

  13. A tantalum strength model using a multiscale approach: version 2

    Energy Technology Data Exchange (ETDEWEB)

    Becker, R; Arsenlis, A; Hommes, G; Marian, J; Rhee, M; Yang, L H

    2009-09-21

    A continuum strength model for tantalum was developed in 2007 using a multiscale approach. This was our first attempt at connecting simulation results from atomistic to continuum length scales, and much was learned that we were not able to incorporate into the model at that time. The tantalum model described in this report represents a second cut at pulling together multiscale simulation results into a continuum model. Insight gained in creating previous multiscale models for tantalum and vanadium was used to guide the model construction and functional relations for the present model. While the basic approach follows that of the vanadium model, there are significant departures. Some of the recommendations from the vanadium report were followed, but not all. Results from several new analysis techniques have not yet been incorporated due to technical difficulties. Molecular dynamics simulations of single dislocation motion at several temperatures suggested that the thermal activation barrier was temperature dependent. This dependency required additional temperature functions be included within the assumed Arrhenius relation. The combination of temperature dependent functions created a complex model with a non unique parameterization and extra model constants. The added complexity had no tangible benefits. The recommendation was to abandon the strict Arrhenius form and create a simpler curve fit to the molecular dynamics data for shear stress versus dislocation velocity. Functions relating dislocation velocity and applied shear stress were constructed vor vanadium for both edge and screw dislocations. However, an attempt to formulate a robust continuum constitutive model for vanadium using both dislocation populations was unsuccessful; the level of coupling achieved was inadequate to constrain the dislocation evolution properly. Since the behavior of BCC materials is typically assumed to be dominated by screw dislocations, the constitutive relations were ultimately

  14. U.S. Coastal Relief Model - Southern California Version 2

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — NGDC's U.S. Coastal Relief Model (CRM) provides a comprehensive view of the U.S. coastal zone integrating offshore bathymetry with land topography into a seamless...

  15. ONKALO rock mechanics model (RMM) - Version 2.0

    International Nuclear Information System (INIS)

    Moenkkoenen, H.; Hakala, M.; Paananen, M.; Laine, E.

    2012-02-01

    The Rock Mechanics Model of the ONKALO rock volume is a description of the significant features and parameters related to rock mechanics. The main objective is to develop a tool to predict the rock properties, quality and hence the potential for stress failure which can then be used for continuing design of the ONKALO and the repository. This is the second implementation of the Rock Mechanics Model and it includes sub-models of the intact rock strength, in situ stress, thermal properties, rock mass quality and properties of the brittle deformation zones. Because of the varying quantities of available data for the different parameters, the types of presentations also vary: some data sets can be presented in the style of a 3D block model but, in other cases, a single distribution represents the whole rock volume hosting the ONKALO. (orig.)

  16. Radarsat Antarctic Mapping Project Digital Elevation Model, Version 2

    Data.gov (United States)

    National Aeronautics and Space Administration — The high-resolution Radarsat Antarctic Mapping Project (RAMP) Digital Elevation Model (DEM) combines topographic data from a variety of sources to provide consistent...

  17. Modeled Daily Thaw Depth and Frozen Ground Depth, Version 1

    Data.gov (United States)

    National Aeronautics and Space Administration — This data set contains modeled daily thaw depth and freezing depth for the Arctic terrestrial drainage basin. Thaw and freezing depths were calculated over the study...

  18. The Oak Ridge Competitive Electricity Dispatch (ORCED) Model Version 9

    Energy Technology Data Exchange (ETDEWEB)

    Hadley, Stanton W. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Baek, Young Sun [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States)

    2016-11-01

    The Oak Ridge Competitive Electricity Dispatch (ORCED) model dispatches power plants in a region to meet the electricity demands for any single given year up to 2030. It uses publicly available sources of data describing electric power units such as the National Energy Modeling System and hourly demands from utility submittals to the Federal Energy Regulatory Commission that are projected to a future year. The model simulates a single region of the country for a given year, matching generation to demands and predefined net exports from the region, assuming no transmission constraints within the region. ORCED can calculate a number of key financial and operating parameters for generating units and regional market outputs including average and marginal prices, air emissions, and generation adequacy. By running the model with and without changes such as generation plants, fuel prices, emission costs, plug-in hybrid electric vehicles, distributed generation, or demand response, the marginal impact of these changes can be found.
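
    The heart of such a dispatch calculation can be sketched in a few lines: sort the available units by marginal cost and load them in merit order until each hour's demand is met, with the last unit dispatched setting the marginal price. The units, costs, and demands below are invented for illustration and are not ORCED inputs.

        # Merit-order dispatch sketch (illustrative units, costs, and demands).
        units = [  # (name, capacity MW, marginal cost $/MWh)
            ("nuclear", 1000, 10.0),
            ("coal", 800, 25.0),
            ("gas_cc", 600, 40.0),
            ("gas_ct", 300, 80.0),
        ]
        hourly_demand = [1500, 2100, 2550, 1800]   # MW for a few example hours

        for hour, demand in enumerate(hourly_demand):
            remaining, marginal_price, dispatch = demand, None, {}
            for name, cap, cost in sorted(units, key=lambda u: u[2]):
                if remaining <= 0:
                    break
                used = min(cap, remaining)
                dispatch[name] = used
                remaining -= used
                marginal_price = cost              # last unit dispatched sets the price
            print(f"hour {hour}: dispatch={dispatch}, marginal price={marginal_price} $/MWh,"
                  f" unserved={max(remaining, 0)} MW")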

  19. Using the Global Forest Products Model (GFPM version 2012)

    Science.gov (United States)

    Joseph Buongiorno; Shushuai Zhu

    2012-01-01

    The purpose of this manual is to enable users of the Global Forest Products Model to: install and run the GFPM software; understand the input data; change the input data to explore different scenarios; and interpret the output. The GFPM is an economic model of global production, consumption and trade of forest products (Buongiorno et al. 2003). The GFPM2012 has data...

  20. Macro System Model (MSM) User Guide, Version 1.3

    Energy Technology Data Exchange (ETDEWEB)

    Ruth, M.; Diakov, V.; Sa, T.; Goldsby, M.

    2011-09-01

    This user guide describes the macro system model (MSM). The MSM has been designed to allow users to analyze the financial, environmental, transitional, geographical, and R&D issues associated with the transition to a hydrogen economy. Basic end users can use the MSM to answer cross-cutting questions that were previously difficult to answer in a consistent and timely manner due to various assumptions and methodologies among different models.

  1. Due Regard Encounter Model Version 1.0

    Science.gov (United States)

    2013-08-19

    Note that no existing model covers encounters between two IFR aircraft in oceanic airspace. The reason for this is that one cannot observe encounters ... encounters between instrument flight rules (IFR) and non-IFR traffic beyond 12 NM. [Table 1, Encounter model categories: aircraft-of-interest location (CONUS, Offshore) and flight rule (IFR, VFR) versus intruder aircraft category (IFR, VFR, noncooperative conventional, noncooperative unconventional), with entries C, U, or X; the table is truncated in the source record.]

  2. Institutional Transformation Version 2.5 Modeling and Planning.

    Energy Technology Data Exchange (ETDEWEB)

    Villa, Daniel [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Mizner, Jack H. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Passell, Howard D. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Gallegos, Gerald R. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Peplinski, William John [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Vetter, Douglas W. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Evans, Christopher A. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Malczynski, Leonard A. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Addison, Marlin [Arizona State Univ., Mesa, AZ (United States); Schaffer, Matthew A. [Bridgers and Paxton Engineering Firm, Albuquerque, NM (United States); Higgins, Matthew W. [Vibrantcy, Albuquerque, NM (United States)

    2017-02-01

    Reducing the resource consumption and emissions of large institutions is an important step toward a sustainable future. Sandia National Laboratories' (SNL) Institutional Transformation (IX) project vision is to provide tools that enable planners to make well-informed decisions concerning sustainability, resource conservation, and emissions reduction across multiple sectors. The building sector has been the primary focus so far because it is the largest consumer of resources for SNL. The IX building module allows users to define the evolution of many buildings over time. The module has been created so that it can be generally applied to any set of DOE-2 ( http://doe2.com ) building models that have been altered to include parameters and expressions required by energy conservation measures (ECM). Once building models have been appropriately prepared, they are checked into a Microsoft Access (r) database. Each building can be represented by many models. This enables the capability to keep a continuous record of models in the past, which are replaced with different models as changes occur to the building. In addition to this, the building module has the capability to apply climate scenarios through applying different weather files to each simulation year. Once the database has been configured, a user interface in Microsoft Excel (r) is used to create scenarios with one or more ECMs. The capability to include central utility buildings (CUBs) that service more than one building with chilled water has been developed. A utility has been created that joins multiple building models into a single model. After using the utility, several manual steps are required to complete the process. Once this CUB model has been created, the individual contributions of each building are still tracked through meters. Currently, 120 building models from SNL's New Mexico and California campuses have been created. This includes all buildings at SNL greater than 10,000 sq. ft

  3. Comparison of OH concentration measurements by DOAS and LIF during SAPHIR chamber experiments at high OH reactivity and low NO concentration

    Directory of Open Access Journals (Sweden)

    H. Fuchs

    2012-07-01

    During recent field campaigns, hydroxyl radical (OH) concentrations that were measured by laser-induced fluorescence (LIF) were up to a factor of ten larger than predicted by current chemical models for conditions of high OH reactivity and low NO concentration. These discrepancies, which were observed in forests and urban-influenced rural environments, are so far not entirely understood. In summer 2011, a series of experiments was carried out in the atmosphere simulation chamber SAPHIR in Jülich, Germany, in order to investigate the photochemical degradation of isoprene, methyl-vinyl ketone (MVK), methacrolein (MACR) and aromatic compounds by OH. Conditions were similar to those experienced during the PRIDE-PRD2006 campaign in the Pearl River Delta (PRD), China, in 2006, where a large difference between OH measurements and model predictions was found. During experiments in SAPHIR, OH was simultaneously detected by two independent instruments: LIF and differential optical absorption spectroscopy (DOAS). Because DOAS is an inherently calibration-free technique, DOAS measurements are regarded as a reference standard. The comparison of the two techniques was used to investigate potential artifacts in the LIF measurements for PRD-like conditions of OH reactivities of 10 to 30 s^-1 and NO mixing ratios of 0.1 to 0.3 ppbv. The analysis of twenty experiment days shows good agreement. The linear regression of the combined data set (averaged to the DOAS time resolution, 2495 data points) yields a slope of 1.02 ± 0.01 with an intercept of (0.10 ± 0.03) × 10^6 cm^-3 and a linear correlation coefficient of R^2 = 0.86. This indicates that the sensitivity of the LIF instrument is well-defined by its calibration procedure. No hints for artifacts are observed for isoprene, MACR, and different aromatic compounds. LIF measurements were approximately 30–40% (median) larger than those by DOAS after MVK (20 ppbv and
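    The reported agreement is expressed through an ordinary linear regression of one instrument's OH against the other's (here LIF against DOAS). The snippet below shows the form of that calculation on placeholder data; the arrays are not the SAPHIR data set, only the shape of the analysis (slope, intercept, and R^2) matches the description above.

```python
# Sketch of the reported comparison statistics: a linear regression of LIF OH
# against DOAS OH (both in molecules cm^-3). The arrays are placeholders; the
# paper reports slope 1.02 +/- 0.01, intercept (0.10 +/- 0.03)e6 cm^-3, R^2 = 0.86.
import numpy as np

doas_oh = np.array([1.0, 2.5, 4.0, 5.5, 7.0]) * 1e6   # hypothetical values
lif_oh = np.array([1.1, 2.6, 4.2, 5.6, 7.2]) * 1e6

slope, intercept = np.polyfit(doas_oh, lif_oh, 1)
r = np.corrcoef(doas_oh, lif_oh)[0, 1]
print(f"slope={slope:.2f}, intercept={intercept:.2e} cm^-3, R^2={r**2:.2f}")
```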

  4. Mars Global Reference Atmospheric Model 2010 Version: Users Guide

    Science.gov (United States)

    Justh, H. L.

    2014-01-01

    This Technical Memorandum (TM) presents the Mars Global Reference Atmospheric Model 2010 (Mars-GRAM 2010) and its new features. Mars-GRAM is an engineering-level atmospheric model widely used for diverse mission applications. Applications include systems design, performance analysis, and operations planning for aerobraking, entry, descent and landing, and aerocapture. Additionally, this TM includes instructions on obtaining the Mars-GRAM source code and data files as well as running Mars-GRAM. It also contains sample Mars-GRAM input and output files and an example of how to incorporate Mars-GRAM as an atmospheric subroutine in a trajectory code.

  5. Red Storm usage model :Version 1.12.

    Energy Technology Data Exchange (ETDEWEB)

    Jefferson, Karen L.; Sturtevant, Judith E.

    2005-12-01

    Red Storm is an Advanced Simulation and Computing (ASC) funded massively parallel supercomputer located at Sandia National Laboratories (SNL). The Red Storm Usage Model (RSUM) documents the capabilities and the environment provided for the FY05 Tri-Lab Level II Limited Availability Red Storm User Environment Milestone and the FY05 SNL Level II Limited Availability Red Storm Platform Milestone. This document describes specific capabilities, tools, and procedures to support both local and remote users. The model is focused on the needs of the ASC user working in the secure computing environments at Los Alamos National Laboratory (LANL), Lawrence Livermore National Laboratory (LLNL), and SNL. Additionally, the Red Storm Usage Model maps the provided capabilities to the Tri-Lab ASC Computing Environment (ACE) requirements. The ACE requirements reflect the high performance computing requirements for the ASC community and have been updated in FY05 to reflect the community's needs. For each section of the RSUM, Appendix I maps the ACE requirements to the Limited Availability User Environment capabilities and includes a description of ACE requirements met and those requirements that are not met in that particular section. The Red Storm Usage Model, along with the ACE mappings, has been issued and vetted throughout the Tri-Lab community.

  6. NASA Orbital Debris Engineering Model ORDEM2008 (Beta Version)

    Science.gov (United States)

    Stansbery, Eugene G.; Krisko, Paula H.

    2009-01-01

    This is an interim document intended to accompany the beta-release of the ORDEM2008 model. As such it provides the user with a guide for its use, a list of its capabilities, a brief summary of model development, and appendices included to educate the user as to typical runtimes for different orbit configurations. More detailed documentation will be delivered with the final product. ORDEM2008 supersedes NASA's previous model - ORDEM2000. The availability of new sensor and in situ data, the re-analysis of older data, and the development of new analytical techniques have enabled the construction of this more comprehensive and sophisticated model. Integrated with the software is an upgraded graphical user interface (GUI), which uses project-oriented organization and provides the user with graphical representations of numerous output data products. These range from the conventional average debris size vs. flux magnitude for chosen analysis orbits, to the more complex color-contoured two-dimensional (2-D) directional flux diagrams in terms of local spacecraft pitch and yaw.

  7. Connected Equipment Maturity Model Version 1.0

    Energy Technology Data Exchange (ETDEWEB)

    Butzbaugh, Joshua B. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Mayhorn, Ebony T. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Sullivan, Greg [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Whalen, Scott A. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States)

    2017-05-01

    The Connected Equipment Maturity Model (CEMM) evaluates the high-level functionality and characteristics that enable equipment to provide the four categories of energy-related services through communication with other entities (e.g., equipment, third parties, utilities, and users). The CEMM will help the U.S. Department of Energy, industry, energy efficiency organizations, and research institutions benchmark the current state of connected equipment and identify capabilities that may be attained to reach a more advanced, future state.

  8. Parameter Estimation in Rainfall-Runoff Modelling Using Distributed Versions of Particle Swarm Optimization Algorithm

    Directory of Open Access Journals (Sweden)

    Michala Jakubcová

    2015-01-01

    The presented paper provides an analysis of selected versions of the particle swarm optimization (PSO) algorithm. The tested versions of the PSO were combined with the shuffling mechanism, which splits the model population into complexes and performs distributed PSO optimization. One of them is a newly proposed PSO modification, APartW, which enhances the global exploration and local exploitation in the parametric space during the optimization process through a new updating mechanism applied to the PSO inertia weight. The performances of four selected PSO methods were tested on 11 benchmark optimization problems, which were prepared for the special session on single-objective real-parameter optimization at CEC 2005. The results confirm that the tested new APartW PSO variant is comparable with other existing distributed PSO versions, AdaptW and LinTimeVarW. The distributed PSO versions were developed for finding the solution of inverse problems related to the estimation of parameters of the hydrological model Bilan. The results of the case study, made on a selected set of 30 catchments obtained from the MOPEX database, show that the tested distributed PSO versions provide suitable estimates of Bilan model parameters and thus can be used for solving related inverse problems during the calibration process of the studied water balance hydrological model.
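    For readers unfamiliar with the basic algorithm being varied here, the sketch below is a generic particle swarm optimizer with a linearly decreasing inertia weight. It does not reproduce the APartW update rule or the shuffling/complex mechanism; it only illustrates the position and velocity update that all the compared variants share.

```python
# Minimal particle swarm optimization sketch with a time-varying inertia weight.
# Coefficients and the test function are illustrative, not the paper's setup.
import numpy as np

def pso(objective, dim, n_particles=30, iters=200, bounds=(-5.0, 5.0)):
    rng = np.random.default_rng(0)
    lo, hi = bounds
    x = rng.uniform(lo, hi, (n_particles, dim))        # positions
    v = np.zeros_like(x)                               # velocities
    pbest = x.copy()
    pbest_val = np.array([objective(p) for p in x])
    gbest = pbest[pbest_val.argmin()].copy()
    for t in range(iters):
        w = 0.9 - 0.5 * t / iters                      # linearly decreasing inertia
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        v = w * v + 1.5 * r1 * (pbest - x) + 1.5 * r2 * (gbest - x)
        x = np.clip(x + v, lo, hi)
        vals = np.array([objective(p) for p in x])
        improved = vals < pbest_val
        pbest[improved], pbest_val[improved] = x[improved], vals[improved]
        gbest = pbest[pbest_val.argmin()].copy()
    return gbest, pbest_val.min()

best, best_val = pso(lambda p: np.sum(p ** 2), dim=5)   # sphere test function
print("best objective value:", best_val)
```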

  9. System cost model user's manual, version 1.2

    International Nuclear Information System (INIS)

    Shropshire, D.

    1995-06-01

    The System Cost Model (SCM) was developed by Lockheed Martin Idaho Technologies in Idaho Falls, Idaho and MK-Environmental Services in San Francisco, California to support the Baseline Environmental Management Report sensitivity analysis for the U.S. Department of Energy (DOE). The SCM serves the needs of the entire DOE complex for treatment, storage, and disposal (TSD) of mixed low-level, low-level, and transuranic waste. The model can be used to evaluate total complex costs based on various configuration options or to evaluate site-specific options. The site-specific cost estimates are based on generic assumptions such as waste loads and densities, treatment processing schemes, existing facilities capacities and functions, storage and disposal requirements, schedules, and cost factors. The SCM allows customization of the data for detailed site-specific estimates. There are approximately forty TSD module designs that have been further customized to account for design differences for nonalpha, alpha, remote-handled, and transuranic wastes. The SCM generates cost profiles based on the model default parameters or customized user-defined input and also generates costs for transporting waste from generators to TSD sites

  10. Field evaluations of a forestry version of DRAINMOD-NII model

    Science.gov (United States)

    S. Tian; M. A. Youssef; R.W. Skaggs; D.M. Amatya; G.M. Chescheir

    2010-01-01

    This study evaluated the performance of the newly developed forestry version of DRAINMOD-NII model using a long term (21-year) data set collected from an artificially drained loblolly pine (Pinus taeda L.) plantation in eastern North Carolina, U.S.A. The model simulates the main hydrological and biogeochemical processes in drained forested lands. The...

  11. A Hemispheric Version of the Community Multiscale Air Quality (CMAQ) Modeling System

    Science.gov (United States)

    This invited presentation will be given at the 4th Biannual Western Modeling Workshop in the Plenary session on Global model development, evaluation, and new source attribution tools. We describe the development and application of the hemispheric version of the CMAQ to examine th...

  12. The NASA MSFC Earth Global Reference Atmospheric Model-2007 Version

    Science.gov (United States)

    Leslie, F.W.; Justus, C.G.

    2008-01-01

    Reference or standard atmospheric models have long been used for design and mission planning of various aerospace systems. The NASA/Marshall Space Flight Center (MSFC) Global Reference Atmospheric Model (GRAM) was developed in response to the need for a design reference atmosphere that provides complete global geographical variability, and complete altitude coverage (surface to orbital altitudes) as well as complete seasonal and monthly variability of the thermodynamic variables and wind components. A unique feature of GRAM is that, in addition to providing the geographical, height, and monthly variation of the mean atmospheric state, it includes the ability to simulate spatial and temporal perturbations in these atmospheric parameters (e.g. fluctuations due to turbulence and other atmospheric perturbation phenomena). A summary comparing GRAM features to characteristics and features of other reference or standard atmospheric models can be found in the Guide to Reference and Standard Atmosphere Models. The original GRAM has undergone a series of improvements over the years with recent additions and changes. The software program is called Earth-GRAM2007 to distinguish it from similar programs for other bodies (e.g. Mars, Venus, Neptune, and Titan). However, in order to make this Technical Memorandum (TM) more readable, the software will be referred to simply as GRAM07 or GRAM unless additional clarity is needed. Section 1 provides an overview of the basic features of GRAM07 including the newly added features. Section 2 provides a more detailed description of GRAM07 and how the model output is generated. Section 3 presents sample results. Appendices A and B describe the Global Upper Air Climatic Atlas (GUACA) data and the Global Gridded Upper Air Statistics (GGUAS) database. Appendix C provides instructions for compiling and running GRAM07. Appendix D gives a description of the required NAMELIST format input. Appendix E gives sample output. Appendix F provides a list of available

  13. Geological Model of the Olkiluoto Site. Version 2.0

    International Nuclear Information System (INIS)

    Aaltonen, I.

    2010-10-01

    The rocks of Olkiluoto can be divided into two major classes: 1) supracrustal high-grade metamorphic rocks including various migmatitic gneisses, tonalitic-granodioritic-granitic gneisses, mica gneisses, quartz gneisses and mafic gneisses, and 2) igneous rocks including pegmatitic granites and diabase dykes. The migmatitic gneisses can further be divided into three subgroups in terms of the type of migmatite structure: veined gneisses, stromatic gneisses and diatexitic gneisses. On the basis of refolding and crosscutting relationships, the metamorphic supracrustal rocks have been subjected to polyphased ductile deformation, consisting of five stages, the D2 being locally the most intensive phase, producing thrust-related folding, strong migmatisation and pervasive foliation. In 3D modelling of the lithological units, an assumption has been made, on the basis of measurements in the outcrops, investigation trenches and drill cores, that the pervasive, composite foliation produced as a result of polyphase ductile deformation has a rather constant attitude in the ONKALO area. Consequently, the strike and dip of the foliation has been used as a tool, through which the lithologies have been correlated between the drillholes and from the surface to the drillholes. In addition, the largest ductile deformation zones and tectonic units are described in the 3D model. The bedrock at the Olkiluoto site has been subjected to extensive hydrothermal alteration, which has taken place at reasonably low temperature conditions, the estimated temperature interval being from slightly over 300 deg C to less than 100 deg C. Two types of alteration can be observed: firstly, pervasive alteration and, secondly, fracture-controlled alteration. Clay mineralisation and sulphidisation are the most prominent alteration events in the site area. Sulphides are located in the uppermost part of the model volume following roughly the foliation and lithological trend. Kaolinite is also mainly located in the

  14. A magnetic version of the Smilansky-Solomyak model

    Czech Academy of Sciences Publication Activity Database

    Barseghyan, Diana; Exner, Pavel

    2017-01-01

    Vol. 50, No. 48 (2017), article No. 485203. ISSN 1751-8113. R&D Projects: GA ČR GA17-01706S. Institutional support: RVO:61389005. Keywords: Smilansky-Solomyak model * spectral transition * homogeneous magnetic field * discrete spectrum * essential spectrum. Subject RIV: BE - Theoretical Physics. OECD field: Atomic, molecular and chemical physics (physics of atoms and molecules including collision, interaction with radiation, magnetic resonances, Mössbauer effect). Impact factor: 1.857, year: 2016

  15. PUMA Version 6 Multiplatform with Facilities to be coupled with other Simulation Models

    International Nuclear Information System (INIS)

    Grant, Carlos

    2013-01-01

    PUMA is a code for nuclear reactor calculations used in all nuclear installations in Argentina for the simulation of fuel management, power cycles and transient events by means of spatial kinetic diffusion theory in 3D. The versions used up to now ran on the WINDOWS platform with very good results. Nowadays PUMA must work on different operating systems, LINUX among others, and must also have facilities to be coupled with other models. For this reason this new version was reprogrammed in ADA, a language oriented towards safe programming and available on any operating system. In former versions PUMA was executed through macro instructions written in LOGO. For this version it is also possible to use PYTHON, which in addition makes it possible to access PUMA's internal data at execution time. The use of PYTHON provides an easy way to couple PUMA with other codes. The possibilities of this new version of PUMA are shown by means of examples of input data and process control using PYTHON and LOGO. The implementation of this methodology in other codes to be coupled with PUMA is discussed for versions run on WINDOWS and LINUX. (author)

  16. Zig-zag version of the Frenkel-Kontorova model

    DEFF Research Database (Denmark)

    Christiansen, Peter Leth; Savin, A.V.; Zolotaryuk, Alexander

    1996-01-01

    We study a generalization of the Frenkel-Kontorova model which describes a zig-zag chain of particles coupled by both the first- and second-neighbor harmonic forces and subjected to a planar substrate with a commensurate potential relief. The particles are supposed to have two degrees of freedom: longitudinal and transverse displacements. Two types of two-component kink solutions corresponding to defects with topological charges Q=+/-1,+/-2 have been treated. The topological defects with positive charge (excess of one or two particles in the chain) are shown to be immobile while the negative defects (vacancies of one or two particles) have been proved at the same parameter values to be mobile objects. In our studies we apply a minimization scheme which has been proved to be an effective numerical method for seeking solitary wave solutions in molecular systems of large complexity. The dynamics of both...

  17. Geological model of the Olkiluoto site. Version 1.0

    International Nuclear Information System (INIS)

    Mattila, J.; Aaltonen, I.; Kemppainen, K.

    2008-01-01

    The rocks of Olkiluoto can be divided into two major classes: (1) supracrustal high-grade metamorphic rocks including various migmatitic gneisses, tonalitic-granodioritic-granitic gneisses, mica gneisses, quartz gneisses and mafic gneisses, and (2) igneous rocks including pegmatitic granites and diabase dykes. The migmatitic gneisses can further be divided into three subgroups in terms of the type of migmatite structure: veined gneisses, stromatic gneisses and diatexitic gneisses. On the basis of refolding and crosscutting relationships, the metamorphic supracrustal rocks have been subjected to polyphased ductile deformation, consisting of five stages, the D2 being locally the most intensive phase, producing thrust-related folding, strong migmatisation and pervasive foliation. In 3D modelling of the lithological units, an assumption has been made, on the basis of measurements in the outcrops, investigation trenches and drill cores, that the pervasive, composite foliation produced as a result of polyphase ductile deformation has a rather constant attitude in the ONKALO area. Consequently, the strike and dip of the foliation has been used as a tool, through which the lithologies have been correlated between the drillholes and from the surface to the drillholes. The bedrock at the Olkiluoto site has been subjected to extensive hydrothermal alteration, which has taken place at reasonably low temperature conditions, the estimated temperature interval being from slightly over 300 deg C to less than 100 deg C. Two types of alteration can be observed: (1) pervasive (disseminated) alteration and (2) fracture-controlled (veinlet) alteration. Kaolinisation and sulphidisation are the most prominent alteration events in the site area. Sulphides are located in the uppermost part of the model volume following roughly the lithological trend (slightly dipping to the SE). Kaolinite is also located in the uppermost part, but the orientation is opposite to the main lithological trend

  18. PRMS-IV, the precipitation-runoff modeling system, version 4

    Science.gov (United States)

    Markstrom, Steven L.; Regan, R. Steve; Hay, Lauren E.; Viger, Roland J.; Webb, Richard M.; Payn, Robert A.; LaFontaine, Jacob H.

    2015-01-01

    Computer models that simulate the hydrologic cycle at a watershed scale facilitate assessment of variability in climate, biota, geology, and human activities on water availability and flow. This report describes an updated version of the Precipitation-Runoff Modeling System. The Precipitation-Runoff Modeling System is a deterministic, distributed-parameter, physical-process-based modeling system developed to evaluate the response of various combinations of climate and land use on streamflow and general watershed hydrology. Several new model components were developed, and all existing components were updated, to enhance performance and supportability. This report describes the history, application, concepts, organization, and mathematical formulation of the Precipitation-Runoff Modeling System and its model components. This updated version provides improvements in (1) system flexibility for integrated science, (2) verification of conservation of water during simulation, (3) methods for spatial distribution of climate boundary conditions, and (4) methods for simulation of soil-water flow and storage.

  19. Response Surface Modeling Tool Suite, Version 1.x

    Energy Technology Data Exchange (ETDEWEB)

    2016-07-05

    The Response Surface Modeling (RSM) Tool Suite is a collection of three codes used to generate an empirical interpolation function for a collection of drag coefficient calculations computed with Test Particle Monte Carlo (TPMC) simulations. The first code, "Automated RSM", automates the generation of a drag coefficient RSM for a particular object to a single command. "Automated RSM" first creates a Latin Hypercube Sample (LHS) of 1,000 ensemble members to explore the global parameter space. For each ensemble member, a TPMC simulation is performed and the object drag coefficient is computed. In the next step of the "Automated RSM" code, a Gaussian process is used to fit the TPMC simulations. In the final step, Markov Chain Monte Carlo (MCMC) is used to evaluate the non-analytic probability distribution function from the Gaussian process. The second code, "RSM Area", creates a look-up table for the projected area of the object based on input limits on the minimum and maximum allowed pitch and yaw angles and pitch and yaw angle intervals. The projected area from the look-up table is used to compute the ballistic coefficient of the object based on its pitch and yaw angle. An accurate ballistic coefficient is crucial in accurately computing the drag on an object. The third code, "RSM Cd", uses the RSM generated by the "Automated RSM" code and the projected area look-up table generated by the "RSM Area" code to accurately compute the drag coefficient and ballistic coefficient of the object. The user can modify the object velocity, object surface temperature, the translational temperature of the gas, the species concentrations of the gas, and the pitch and yaw angles of the object. Together, these codes allow for the accurate derivation of an object's drag coefficient and ballistic coefficient under any conditions with only knowledge of the object's geometry and mass.
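    The workflow above (Latin hypercube sampling, per-sample drag evaluation, Gaussian-process response surface) can be sketched compactly. In the snippet below the TPMC simulation is replaced by a toy drag function and the Gaussian process is reduced to plain RBF kernel interpolation; all names, kernel settings, and the drag function are illustrative assumptions, not the RSM Tool Suite implementation.

```python
# Sketch of an LHS-plus-response-surface workflow: sample a parameter space,
# evaluate a placeholder drag-coefficient function, and fit an RBF interpolant
# that can be queried at new pitch/yaw settings.
import numpy as np

rng = np.random.default_rng(1)

def latin_hypercube(n, dim):
    """n stratified samples in [0,1]^dim (one sample per stratum along each axis)."""
    u = (rng.random((n, dim)) + np.arange(n)[:, None]) / n
    for d in range(dim):
        u[:, d] = u[rng.permutation(n), d]
    return u

def toy_drag_coefficient(x):
    """Placeholder for a TPMC drag calculation (pitch, yaw normalized to [0,1])."""
    pitch, yaw = x[:, 0], x[:, 1]
    return 2.2 + 0.4 * np.sin(np.pi * pitch) * np.cos(np.pi * yaw)

X = latin_hypercube(200, dim=2)              # ensemble of parameter samples
y = toy_drag_coefficient(X)                  # stand-in for the TPMC results

def rbf_kernel(A, B, length=0.2):
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * d2 / length ** 2)

K = rbf_kernel(X, X) + 1e-6 * np.eye(len(X))  # jitter for numerical stability
alpha = np.linalg.solve(K, y)

X_new = np.array([[0.3, 0.7]])
cd_pred = rbf_kernel(X_new, X) @ alpha        # response-surface prediction
print("predicted drag coefficient:", cd_pred[0])
```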

  20. Technical Note: Intercomparison of formaldehyde measurements at the atmosphere simulation chamber SAPHIR

    Directory of Open Access Journals (Sweden)

    A. Wisthaler

    2008-04-01

    The atmosphere simulation chamber SAPHIR at the Research Centre Jülich was used to test the suitability of state-of-the-art analytical instruments for the measurement of gas-phase formaldehyde (HCHO) in air. Five analyzers based on four different sensing principles were deployed: a differential optical absorption spectrometer (DOAS), cartridges for 2,4-dinitro-phenyl-hydrazine (DNPH) derivatization followed by off-line high pressure liquid chromatography (HPLC) analysis, two different types of commercially available wet chemical sensors based on Hantzsch fluorimetry, and a proton-transfer-reaction mass spectrometer (PTR-MS). A new optimized mode of operation was used for the PTR-MS instrument which significantly enhanced its performance for online HCHO detection at low absolute humidities.

    The instruments were challenged with typical ambient levels of HCHO ranging from zero to several ppb. Synthetic air of high purity and particulate-filtered ambient air were used as sample matrices in the atmosphere simulation chamber, into which HCHO was spiked under varying levels of humidity and ozone. Measurements were compared to mixing ratios calculated from the chamber volume and the known amount of HCHO injected into the chamber; measurements were also compared between the different instruments. The formal and blind intercomparison exercise was conducted under the control of an independent referee. A number of analytical problems associated with the experimental set-up and with individual instruments were identified; the overall agreement between the methods was fair.

  1. Independent Verification and Validation Of SAPHIRE 8 Software Design and Interface Design Project Number: N6423 U.S. Nuclear Regulatory Commission

    Energy Technology Data Exchange (ETDEWEB)

    Kent Norris

    2009-10-01

    The purpose of the Independent Verification and Validation (IV&V) role in the evaluation of the SAPHIRE software design and interface design is to assess the activities that result in the development, documentation, and review of a software design that meets the requirements defined in the software requirements documentation. The IV&V team began this endeavor after the software engineering and software development of SAPHIRE had already been in production. IV&V reviewed the requirements specified in the NRC Form 189s to verify that these requirements were included in SAPHIRE's Software Verification and Validation Plan (SVVP) design specification.

  2. Site investigation SFR. Hydrogeological modelling of SFR. Model version 0.2

    Energy Technology Data Exchange (ETDEWEB)

    Oehman, Johan (Golder Associates AB (Sweden)); Follin, Sven (SF GeoLogic (Sweden))

    2010-01-15

    The Swedish Nuclear Fuel and Waste Management Company (SKB) has conducted site investigations for a planned extension of the existing final repository for short-lived radioactive waste (SFR). A hydrogeological model is developed in three model versions, which will be used for safety assessment and design analyses. This report presents a data analysis of the currently available hydrogeological data from the ongoing Site Investigation SFR (KFR27, KFR101, KFR102A, KFR102B, KFR103, KFR104, and KFR105). The purpose of this work is to develop a preliminary hydrogeological Discrete Fracture Network model (hydro-DFN) parameterisation that can be applied in regional-scale modelling. During this work, the Geologic model had not yet been updated for the new data set. Therefore, all analyses were made to the rock mass outside Possible Deformation Zones, according to Single Hole Interpretation. Owing to this circumstance, it was decided not to perform a complete hydro-DFN calibration at this stage. Instead focus was re-directed to preparatory test cases and conceptual questions with the aim to provide a sound strategy for developing the hydrogeological model SFR v. 1.0. The presented preliminary hydro-DFN consists of five fracture sets and three depth domains. A statistical/geometrical approach (connectivity analysis /Follin et al. 2005/) was performed to estimate the size (i.e. fracture radius) distribution of fractures that are interpreted as Open in geologic mapping of core data. Transmissivity relations were established based on an assumption of a correlation between the size and evaluated specific capacity of geologic features coupled to inflows measured by the Posiva Flow Log device (PFL-f data). The preliminary hydro-DFN was applied in flow simulations in order to test its performance and to explore the role of PFL-f data. Several insights were gained and a few model technical issues were raised. These are summarised in Table 5-1

  3. Site investigation SFR. Hydrogeological modelling of SFR. Model version 0.2

    International Nuclear Information System (INIS)

    Oehman, Johan; Follin, Sven

    2010-01-01

    The Swedish Nuclear Fuel and Waste Management Company (SKB) has conducted site investigations for a planned extension of the existing final repository for short-lived radioactive waste (SFR). A hydrogeological model is developed in three model versions, which will be used for safety assessment and design analyses. This report presents a data analysis of the currently available hydrogeological data from the ongoing Site Investigation SFR (KFR27, KFR101, KFR102A, KFR102B, KFR103, KFR104, and KFR105). The purpose of this work is to develop a preliminary hydrogeological Discrete Fracture Network model (hydro-DFN) parameterisation that can be applied in regional-scale modelling. During this work, the Geologic model had not yet been updated for the new data set. Therefore, all analyses were made to the rock mass outside Possible Deformation Zones, according to Single Hole Interpretation. Owing to this circumstance, it was decided not to perform a complete hydro-DFN calibration at this stage. Instead focus was re-directed to preparatory test cases and conceptual questions with the aim to provide a sound strategy for developing the hydrogeological model SFR v. 1.0. The presented preliminary hydro-DFN consists of five fracture sets and three depth domains. A statistical/geometrical approach (connectivity analysis /Follin et al. 2005/) was performed to estimate the size (i.e. fracture radius) distribution of fractures that are interpreted as Open in geologic mapping of core data. Transmissivity relations were established based on an assumption of a correlation between the size and evaluated specific capacity of geologic features coupled to inflows measured by the Posiva Flow Log device (PFL-f data). The preliminary hydro-DFN was applied in flow simulations in order to test its performance and to explore the role of PFL-f data. Several insights were gained and a few model technical issues were raised. These are summarised in Table 5-1
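    The hydro-DFN parameterisation described in the two records above couples a fracture size distribution to a size-correlated transmissivity. The sketch below shows that coupling in its simplest form, a power-law radius distribution and a power-law transmissivity relation; the exponents and coefficients are invented for illustration and are not the calibrated SFR values.

```python
# Illustrative sketch of a discrete fracture network parameterisation: sample
# fracture radii from a power-law size distribution and assign transmissivities
# correlated with fracture size. All numerical values are placeholders.
import numpy as np

rng = np.random.default_rng(42)

def sample_fracture_radii(n, r_min=0.5, k_r=2.6):
    """Pareto (power-law) fracture radius sample via inverse-CDF, radii in metres."""
    u = rng.random(n)
    return r_min * (1.0 - u) ** (-1.0 / (k_r - 1.0))

def transmissivity(radius, a=1e-9, b=1.2):
    """Size-correlated transmissivity T = a * r^b (m^2/s), illustrative only."""
    return a * radius ** b

radii = sample_fracture_radii(10_000)
T = transmissivity(radii)
print(f"median radius {np.median(radii):.2f} m, median T {np.median(T):.2e} m^2/s")
```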

  4. Using the Global Forest Products Model (GFPM version 2016 with BPMPD)

    Science.gov (United States)

    Joseph Buongiorno; Shushuai Zhu

    2016-01-01

     The GFPM is an economic model of global production, consumption and trade of forest products. The original formulation and several applications are described in Buongiorno et al. (2003). However, subsequent versions, including the GFPM 2016 reflect significant changes and extensions. The GFPM 2016 software uses the...

  5. User's guide to the Yucca Mountain Integrating Model (YMIM) Version 2.1

    International Nuclear Information System (INIS)

    Gansemer, J.; Lamont, A.

    1995-04-01

    The Yucca Mountain Integrating Model (YMIM) is an integrated model of the engineered barrier system. It contains models of the processes of waste container failure and nuclide release from the fuel rods. YMIM is driven by scenarios of container and rod temperature, near-field chemistry, and near-field hydrology provided by other modules. It is designed to be highly modular so that a model of an individual process can be easily modified or replaced without interfering with the models of other processes. This manual describes the process models and provides instructions for setting up and running YMIM Version 2.1
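    The modularity described above, independent process models behind a shared interface, can be sketched as follows; the class names, state variables, and rate constants are invented for illustration and do not correspond to the actual YMIM modules.

```python
# Sketch of a modular design in which each process is a self-contained component
# with a common interface, so one model can be swapped without touching the others.
from typing import Protocol

class ProcessModel(Protocol):
    def step(self, state: dict, dt: float) -> dict: ...

class ContainerFailureModel:
    def step(self, state, dt):
        # Illustrative constant failure rate, capped at full failure.
        state["failed_fraction"] = min(1.0, state.get("failed_fraction", 0.0) + 1e-4 * dt)
        return state

class NuclideReleaseModel:
    def step(self, state, dt):
        # Release only from failed containers; rate constant is a placeholder.
        rate = 1e-6 * state.get("failed_fraction", 0.0)
        state["cumulative_release"] = state.get("cumulative_release", 0.0) + rate * dt
        return state

def run(models, state, dt, steps):
    for _ in range(steps):
        for m in models:          # each process advances independently
            state = m.step(state, dt)
    return state

print(run([ContainerFailureModel(), NuclideReleaseModel()], {}, dt=1.0, steps=1000))
```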

  6. Performance of Versions 1,2 and 3 of the Goddard Earth Observing System (GEOS) Chemistry-Climate Model (CCM)

    Science.gov (United States)

    Pawson, Steven; Stolarski, Richard S.; Nielsen, J. Eric; Duncan, Bryan N.

    2008-01-01

    Version 1 of the Goddard Earth Observing System Chemistry-Climate Model (GEOS CCM) was used in the first CCMVal model evaluation and forms the basis for several studies of links between ozone and the circulation. That version of the CCM was based on the GEOS-4 GCM. Versions 2 and 3 of the GEOS CCM are based on the GEOS-5 GCM, which retains the "Lin-Rood" dynamical core but has a totally different set of physical parameterizations to GEOS-4. In Version 2 of the GEOS CCM the Goddard stratospheric chemistry module is retained. Differences between Versions 1 and 2 thus reflect the physics changes of the underlying GCMs. Several comparisons between these two models are made, a number of which reveal improvements in Version 2 (including a more realistic representation of the interannual variability of the Antarctic vortex). In Version 3 of the GEOS CCM, the stratospheric chemistry mechanism is replaced by the "GMI COMBO" code that includes tropospheric chemistry and different computational approaches. An advantage of this model version is the reduction of high ozone biases that prevail at low chlorine loadings in Versions 1 and 2. This poster will compare and contrast various aspects of the three model versions that are relevant for understanding interactions between ozone and climate.

  7. Improvement of the drift chamber system in the SAPHIR detector and first measurements of the Φ meson production at threshold

    International Nuclear Information System (INIS)

    Scholmann, J.N.

    1996-09-01

    The SAPHIR detector at ELSA enables the measurement of photon-induced Φ meson production from threshold up to 3 GeV in the full kinematical range. A considerable improvement of the drift chamber system is a precondition for obtaining the necessary data rate in an acceptable time. The research focuses on the choice of the chamber gas and on a different mechanical construction, so as to minimize the negative influences of the photon beam crossing the sensitive volume of the drift chamber system. In addition, first preliminary results for the total and the differential cross section of Φ meson production close to threshold were evaluated. (orig.)

  8. The Lagrangian particle dispersion model FLEXPART-WRF VERSION 3.1

    Energy Technology Data Exchange (ETDEWEB)

    Brioude, J.; Arnold, D.; Stohl, A.; Cassiani, M.; Morton, Don; Seibert, P.; Angevine, W. M.; Evan, S.; Dingwell, A.; Fast, Jerome D.; Easter, Richard C.; Pisso, I.; Bukhart, J.; Wotawa, G.

    2013-11-01

    The Lagrangian particle dispersion model FLEXPART was originally designed for calculating long-range and mesoscale dispersion of air pollutants from point sources, such as after an accident in a nuclear power plant. In the meantime FLEXPART has evolved into a comprehensive tool for atmospheric transport modeling and analysis at different scales. This multiscale need from the modeler community has encouraged new developments in FLEXPART. In this document, we present a version that works with the Weather Research and Forecasting (WRF) mesoscale meteorological model. Simple procedures on how to run FLEXPART-WRF are presented along with special options and features that differ from its predecessor versions. In addition, test case data, the source code and visualization tools are provided to the reader as supplementary material.

  9. Towards New Empirical Versions of Financial and Accounting Models Corrected for Measurement Errors

    OpenAIRE

    Francois-Éric Racicot; Raymond Théoret; Alain Coen

    2006-01-01

    In this paper, we propose a new empirical version of the Fama and French Model based on the Hausman (1978) specification test and aimed at discarding measurement errors in the variables. The proposed empirical framework is general enough to be used for correcting other financial and accounting models of measurement errors. Removing measurement errors is important at many levels, such as information disclosure, corporate governance and the protection of investors.
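    The correction discussed above rests on comparing an estimator that is biased by measurement error (OLS) with one that is not (an instrumental-variable estimator), which is the contrast the Hausman test formalises. The sketch below illustrates only that contrast on synthetic data; it does not implement the authors' full specification test or the Fama-French factor setup.

```python
# Minimal numerical sketch of the OLS-vs-IV comparison underlying a Hausman-type
# test for measurement error. Data, instrument, and dimensions are synthetic.
import numpy as np

rng = np.random.default_rng(0)
n = 500
x_true = rng.normal(size=n)
z = x_true + rng.normal(scale=0.3, size=n)      # instrument correlated with x_true
x_obs = x_true + rng.normal(scale=0.5, size=n)  # regressor observed with error
y = 1.0 + 2.0 * x_true + rng.normal(scale=0.5, size=n)

def ols(y_vec, X):
    """Least-squares coefficients for the design matrix X."""
    return np.linalg.lstsq(X, y_vec, rcond=None)[0]

X_obs = np.column_stack([np.ones(n), x_obs])
beta_ols = ols(y, X_obs)

# Two-stage least squares: regress x_obs on the instrument, then y on the fit.
Z = np.column_stack([np.ones(n), z])
x_hat = Z @ ols(x_obs, Z)
beta_iv = ols(y, np.column_stack([np.ones(n), x_hat]))

print("OLS slope (attenuated by measurement error):", beta_ols[1])
print("IV slope (closer to the true value of 2):   ", beta_iv[1])
```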

  10. Description and evaluation of the Model for Ozone and Related chemical Tracers, version 4 (MOZART-4)

    Directory of Open Access Journals (Sweden)

    L. K. Emmons

    2010-01-01

    The Model for Ozone and Related chemical Tracers, version 4 (MOZART-4) is an offline global chemical transport model particularly suited for studies of the troposphere. The updates of the model from its previous version MOZART-2 are described, including an expansion of the chemical mechanism to include more detailed hydrocarbon chemistry and bulk aerosols. Online calculations of a number of processes, such as dry deposition, emissions of isoprene and monoterpenes and photolysis frequencies, are now included. Results from an eight-year simulation (2000–2007) are presented and evaluated. The MOZART-4 source code and standard input files are available for download from the NCAR Community Data Portal (http://cdp.ucar.edu).

  11. The Hamburg Oceanic Carbon Cycle Circulation Model. Version 1. Version 'HAMOCC2s' for long time integrations

    Energy Technology Data Exchange (ETDEWEB)

    Heinze, C.; Maier-Reimer, E. [Max-Planck-Institut fuer Meteorologie, Hamburg (Germany)

    1999-11-01

    The Hamburg Ocean Carbon Cycle Circulation Model (HAMOCC, configuration HAMOCC2s) predicts the atmospheric carbon dioxide partial pressure (as induced by oceanic processes), production rates of biogenic particulate matter, and geochemical tracer distributions in the water column as well as the bioturbated sediment. Besides the carbon cycle, this model version also includes the marine silicon cycle (silicic acid in the water column and the sediment pore waters, biological opal production, opal flux through the water column and opal sediment pore water interaction). The model is based on the grid and geometry of the LSG ocean general circulation model (see the corresponding manual, LSG=Large Scale Geostrophic) and uses a velocity field provided by the LSG-model in a 'frozen' state. In contrast to the earlier version of the model (see Report No. 5), the present version includes a multi-layer sediment model of the bioturbated sediment zone, allowing for variable tracer inventories within the complete model system. (orig.)

  12. The NASA/MSFC Global Reference Atmospheric Model: 1999 Version (GRAM-99)

    Science.gov (United States)

    Justus, C. G.; Johnson, D. L.

    1999-01-01

    The latest version of Global Reference Atmospheric Model (GRAM-99) is presented and discussed. GRAM-99 uses either (binary) Global Upper Air Climatic Atlas (GUACA) or (ASCII) Global Gridded Upper Air Statistics (GGUAS) CD-ROM data sets, for 0-27 km altitudes. As with earlier versions, GRAM-99 provides complete geographical and altitude coverage for each month of the year. GRAM-99 uses a specially-developed data set, based on Middle Atmosphere Program (MAP) data, for 20-120 km altitudes, and NASA's 1999 version Marshall Engineering Thermosphere (MET-99) model for heights above 90 km. Fairing techniques assure smooth transition in overlap height ranges (20-27 km and 90-120 km). GRAM-99 includes water vapor and 11 other atmospheric constituents (O3, N2O, CO, CH4, CO2, N2, O2, O, A, He and H). A variable-scale perturbation model provides both large-scale (wave) and small-scale (stochastic) deviations from mean values for thermodynamic variables and horizontal and vertical wind components. The small-scale perturbation model includes improvements in representing intermittency ("patchiness"). A major new feature is an option to substitute Range Reference Atmosphere (RRA) data for conventional GRAM climatology when a trajectory passes sufficiently near any RRA site. A complete user's guide for running the program, plus sample input and output, is provided. An example is provided for how to incorporate GRAM-99 as subroutines in other programs (e.g., trajectory codes).
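    The subroutine-style usage mentioned at the end of the abstract can be pictured as an atmosphere query inside a trajectory integration loop. In the sketch below, `query_atmosphere` is a stand-in with an invented exponential density profile; it is not the GRAM-99 interface, which returns many more quantities (winds, constituents, perturbations).

```python
# Sketch of calling an atmosphere model as a subroutine inside a simple
# trajectory integration. The atmosphere routine and vehicle data are placeholders.
import math

def query_atmosphere(altitude_m):
    """Placeholder atmosphere: simple exponential density profile (kg/m^3)."""
    return 1.225 * math.exp(-altitude_m / 8500.0)

def descend(mass=100.0, area=1.0, cd=1.0, alt0=80_000.0, dt=0.5):
    """Vertical descent with drag; the atmosphere call is where GRAM would plug in."""
    alt, vel = alt0, 0.0
    while alt > 0.0:
        rho = query_atmosphere(alt)                     # atmosphere subroutine call
        drag_accel = 0.5 * rho * vel * vel * cd * area / mass
        accel = -9.81 + (drag_accel if vel < 0.0 else -drag_accel)
        vel += accel * dt
        alt += vel * dt
    return abs(vel)

print("impact speed (m/s):", round(descend(), 1))
```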

  13. Thermal modelling. Preliminary site description. Forsmark area - version 1.2

    Energy Technology Data Exchange (ETDEWEB)

    Sundberg, Jan; Back, Paer-Erik; Bengtsson, Anna; Laendell, Maerta [Geo Innova AB, Linkoeping (Sweden)

    2005-08-01

    This report presents the thermal site descriptive model for the Forsmark area, version 1.2. The main objective of this report is to present the thermal modelling work where data has been identified, quality controlled, evaluated and summarised in order to make an upscaling to lithological domain level possible. The thermal conductivity at canister scale has been modelled for two different lithological domains (RFM029 and RFM012, both dominated by granite to granodiorite (101057)). A main modelling approach has been used to determine the mean value of the thermal conductivity. Two alternative/complementary approaches have been used to evaluate the spatial variability of the thermal conductivity at domain level. The thermal modelling approaches are based on the lithological model for the Forsmark area, version 1.2 together with rock type models constituted from measured and calculated (from mineral composition) thermal conductivities. Results indicate that the mean of thermal conductivity is expected to exhibit a small variation between the different domains, 3.46 W/(mxK) for RFM012 to 3.55 W/(mxK) for RFM029. The spatial distribution of the thermal conductivity does not follow a simple model. Lower and upper 95% confidence limits are based on the modelling results, but have been rounded off to only two significant figures. Consequently, the lower limit is 2.9 W/(mxK), while the upper is 3.8 W/(mxK). This is applicable to both the investigated domains. The temperature dependence is rather small with a decrease in thermal conductivity of 10.0% per 100 deg C increase in temperature for the dominating rock type. There are a number of important uncertainties associated with these results. One of the uncertainties concerns the representative scale for the canister. Another important uncertainty is the methodological uncertainties associated with the upscaling of thermal conductivity from cm-scale to canister scale. In addition, the representativeness of rock samples is
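    The stated temperature dependence is easy to apply directly. The sketch below evaluates the RFM029 domain mean of 3.55 W/(m*K) with a 10% reduction per 100 deg C; the 20 deg C reference temperature and the linear (rather than compounded) interpretation are assumptions made here for illustration.

```python
# Quick arithmetic check of the stated temperature dependence of thermal
# conductivity: 10% decrease per 100 deg C increase, applied to the reported
# domain mean. Reference temperature and linear form are assumptions.
def conductivity_at(temp_c, k_ref=3.55, t_ref=20.0, drop_per_100c=0.10):
    """Linear reduction of conductivity (W/(m*K)) above the reference temperature."""
    return k_ref * (1.0 - drop_per_100c * (temp_c - t_ref) / 100.0)

for t in (20, 70, 120):
    print(f"T = {t:>3} deg C -> k = {conductivity_at(t):.2f} W/(m*K)")
```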

  14. Thermal site descriptive model. A strategy for the model development during site investigations - version 2

    International Nuclear Information System (INIS)

    Back, Paer-Erik; Sundberg, Jan

    2007-09-01

    This report presents a strategy for describing, predicting and visualising the thermal aspects of the site descriptive model. The strategy is an updated version of an earlier strategy applied in all SDM versions during the initial site investigation phase at the Forsmark and Oskarshamn areas. The previous methodology for thermal modelling did not take the spatial correlation fully into account during simulation. The result was that the variability of thermal conductivity in the rock mass was not sufficiently well described. Experience from earlier thermal SDMs indicated that development of the methodology was required in order to describe the spatial distribution of thermal conductivity in the rock mass in a sufficiently reliable way, taking both variability within rock types and between rock types into account. A good description of the thermal conductivity distribution is especially important for the lower tail. This tail is important for the design of a repository because it affects the canister spacing. The presented approach is developed to be used for final SDM regarding thermal properties, primarily thermal conductivity. Specific objectives for the strategy of thermal stochastic modelling are: Description: statistical description of the thermal conductivity of a rock domain. Prediction: prediction of thermal conductivity in a specific rock volume. Visualisation: visualisation of the spatial distribution of thermal conductivity. The thermal site descriptive model should include the temperature distribution and thermal properties of the rock mass. The temperature is the result of the thermal processes in the repository area. Determination of thermal transport properties can be made using different methods, such as laboratory investigations, field measurements, modelling from mineralogical composition and distribution, modelling from density logging and modelling from temperature logging. The different types of data represent different scales, which has to be

  15. Thermal site descriptive model. A strategy for the model development during site investigations - version 2

    Energy Technology Data Exchange (ETDEWEB)

    Back, Paer-Erik; Sundberg, Jan [Geo Innova AB (Sweden)

    2007-09-15

    This report presents a strategy for describing, predicting and visualising the thermal aspects of the site descriptive model. The strategy is an updated version of an earlier strategy applied in all SDM versions during the initial site investigation phase at the Forsmark and Oskarshamn areas. The previous methodology for thermal modelling did not take the spatial correlation fully into account during simulation. The result was that the variability of thermal conductivity in the rock mass was not sufficiently well described. Experience from earlier thermal SDMs indicated that development of the methodology was required in order to describe the spatial distribution of thermal conductivity in the rock mass in a sufficiently reliable way, taking both variability within rock types and between rock types into account. A good description of the thermal conductivity distribution is especially important for the lower tail. This tail is important for the design of a repository because it affects the canister spacing. The presented approach is developed to be used for final SDM regarding thermal properties, primarily thermal conductivity. Specific objectives for the strategy of thermal stochastic modelling are: Description: statistical description of the thermal conductivity of a rock domain. Prediction: prediction of thermal conductivity in a specific rock volume. Visualisation: visualisation of the spatial distribution of thermal conductivity. The thermal site descriptive model should include the temperature distribution and thermal properties of the rock mass. The temperature is the result of the thermal processes in the repository area. Determination of thermal transport properties can be made using different methods, such as laboratory investigations, field measurements, modelling from mineralogical composition and distribution, modelling from density logging and modelling from temperature logging. The different types of data represent different scales, which has to be

  16. COMODI: an ontology to characterise differences in versions of computational models in biology.

    Science.gov (United States)

    Scharm, Martin; Waltemath, Dagmar; Mendes, Pedro; Wolkenhauer, Olaf

    2016-07-11

    Open model repositories provide ready-to-reuse computational models of biological systems. Models within those repositories evolve over time, leading to different model versions. Taken together, the underlying changes reflect a model's provenance and thus can give valuable insights into the studied biology. Currently, however, changes cannot be semantically interpreted. To improve this situation, we developed an ontology of terms describing changes in models. The ontology can be used by scientists and within software to characterise model updates at the level of single changes. When studying or reusing a model, these annotations help with determining the relevance of a change in a given context. We manually studied changes in selected models from BioModels and the Physiome Model Repository. Using the BiVeS tool for difference detection, we then performed an automatic analysis of changes in all models published in these repositories. The resulting set of concepts led us to define candidate terms for the ontology. In a final step, we aggregated and classified these terms and built the first version of the ontology. We present COMODI, an ontology needed because COmputational MOdels DIffer. It empowers users and software to describe changes in a model on the semantic level. COMODI also enables software to implement user-specific filter options for the display of model changes. Finally, COMODI is a step towards predicting how a change in a model influences the simulation results. COMODI, coupled with our algorithm for difference detection, ensures the transparency of a model's evolution, and it enhances the traceability of updates and error corrections. COMODI is encoded in OWL. It is openly available at http://comodi.sems.uni-rostock.de/ .

  17. Main modelling features of the ASTEC V2.1 major version

    International Nuclear Information System (INIS)

    Chatelard, P.; Belon, S.; Bosland, L.; Carénini, L.; Coindreau, O.; Cousin, F.; Marchetto, C.; Nowack, H.; Piar, L.; Chailan, L.

    2016-01-01

    Highlights: • Recent modelling improvements of the ASTEC European severe accident code are outlined. • Key new physical models now available in the ASTEC V2.1 major version are described. • ASTEC progress towards a multi-design reactor code is illustrated for BWR and PHWR. • ASTEC strong link with the on-going EC CESAM FP7 project is emphasized. • Main remaining modelling issues (on which IRSN efforts are now directed) are given. - Abstract: A new major version of the European severe accident integral code ASTEC, developed by IRSN with some GRS support, was delivered in November 2015 to the ASTEC worldwide community. Main modelling features of this V2.1 version are summarised in this paper. In particular, the in-vessel coupling technique between the reactor coolant system thermal-hydraulics module and the core degradation module has been strongly re-engineered to remove some well-known weaknesses of the former V2.0 series. The V2.1 version also includes new core degradation models specifically addressing BWR and PHWR reactor types, as well as several other physical modelling improvements, notably on reflooding of severely damaged cores, Zircaloy oxidation under air atmosphere, corium coolability during corium concrete interaction and source term evaluation. Moreover, this V2.1 version constitutes the back-bone of the CESAM FP7 project, whose final objective is to further improve ASTEC for use in Severe Accident Management analysis of the Gen.II–III nuclear power plants presently under operation or foreseen in the near future in Europe. As part of this European project, IRSN efforts to continuously improve both code numerical robustness and computing performances at plant scale as well as users' tools are being intensified. Besides, ASTEC will continue capitalising on the whole knowledge of severe accident phenomenology by progressively keeping physical models at the state of the art through a regular feed-back from the interpretation of the current and

  18. A multisensor evaluation of the asymmetric convective model, version 2, in southeast Texas.

    Science.gov (United States)

    Kolling, Jenna S; Pleim, Jonathan E; Jeffries, Harvey E; Vizuete, William

    2013-01-01

    There currently exist a number of planetary boundary layer (PBL) schemes that can represent the effects of turbulence in daytime convective conditions, although these schemes remain a large source of uncertainty in meteorology and air quality model simulations. This study evaluates a recently developed combined local and nonlocal closure PBL scheme, the Asymmetric Convective Model, version 2 (ACM2), against PBL observations taken from radar wind profilers, a ground-based lidar, and multiple daytime radiosonde balloon launches. These observations were compared against predictions of PBL heights from the Weather Research and Forecasting (WRF) model version 3.1 with the ACM2 PBL scheme option, and the Fifth-Generation Meteorological Model (MM5) version 3.7.3 with the Eta PBL scheme option that is currently being used to develop ozone control strategies in southeast Texas. MM5 and WRF predictions during the regulatory modeling episode were evaluated on their ability to predict the rise and fall of the PBL during daytime convective conditions across southeastern Texas. The MM5-predicted PBL heights consistently underpredicted the observations and were also lower than the WRF PBL predictions. The analysis reveals that the MM5 predicted a slower-rising and shallower PBL not representative of the daytime urban boundary layer. Alternatively, the WRF model predicted a more accurate PBL evolution, improving the root mean square error (RMSE) both temporally and spatially. The WRF model also more accurately predicted vertical profiles of temperature and moisture in the lowest 3 km of the atmosphere. Inspection of median surface temperature and moisture time-series plots revealed higher predicted surface temperatures in WRF and more surface moisture in MM5. These could not be attributed to surface heat fluxes, and thus the differences in performance of the WRF and MM5 models are likely due to the PBL schemes. An accurate depiction of the diurnal evolution of the planetary boundary layer (PBL) is

  19. Incorporation of detailed eye model into polygon-mesh versions of ICRP-110 reference phantoms.

    Science.gov (United States)

    Nguyen, Thang Tat; Yeom, Yeon Soo; Kim, Han Sung; Wang, Zhao Jun; Han, Min Cheol; Kim, Chan Hyeong; Lee, Jai Ki; Zankl, Maria; Petoussi-Henss, Nina; Bolch, Wesley E; Lee, Choonsik; Chung, Beom Sun

    2015-11-21

    The dose coefficients for the eye lens reported in ICRP Publication 116 (2010) were calculated using both a stylized model and the ICRP-110 reference phantoms, according to the type of radiation, energy, and irradiation geometry. To maintain consistency of lens dose assessment, in the present study we incorporated the ICRP-116 detailed eye model into the converted polygon-mesh (PM) version of the ICRP-110 reference phantoms. After the incorporation, the dose coefficients for the eye lens were calculated and compared with those of ICRP Publication 116. The results showed generally good agreement between the newly calculated lens dose coefficients and the ICRP-116 values. Significant differences were found for some irradiation cases, due mainly to the use of different types of phantoms. Considering that the PM version of the ICRP-110 reference phantoms preserves the original topology of the ICRP-110 reference phantoms, it is believed that the PM version phantoms, along with the detailed eye model, provide more reliable and consistent dose coefficients for the eye lens.

  20. Incremental testing of the Community Multiscale Air Quality (CMAQ) modeling system version 4.7

    Directory of Open Access Journals (Sweden)

    K. M. Foley

    2010-03-01

    Full Text Available This paper describes the scientific and structural updates to the latest release of the Community Multiscale Air Quality (CMAQ) modeling system version 4.7 (v4.7) and points the reader to additional resources for further details. The model updates were evaluated relative to observations and results from previous model versions in a series of simulations conducted to incrementally assess the effect of each change. The focus of this paper is on five major scientific upgrades: (a) updates to the heterogeneous N2O5 parameterization, (b) improvement in the treatment of secondary organic aerosol (SOA), (c) inclusion of dynamic mass transfer for coarse-mode aerosol, (d) revisions to the cloud model, and (e) new options for the calculation of photolysis rates. Incremental test simulations over the eastern United States during January and August 2006 are evaluated to assess the model response to each scientific improvement, providing explanations of differences in results between v4.7 and previously released CMAQ model versions. Particulate sulfate predictions are improved across all monitoring networks during both seasons due to cloud module updates. Numerous updates to the SOA module improve the simulation of seasonal variability and decrease the bias in organic carbon predictions at urban sites in the winter. Bias in the total mass of fine particulate matter (PM2.5) is dominated by overpredictions of unspeciated PM2.5 (PMother) in the winter and by underpredictions of carbon in the summer. The CMAQv4.7 model results show slightly worse performance for ozone predictions. However, changes to the meteorological inputs are found to have a much greater impact on ozone predictions compared to changes to the CMAQ modules described here. Model updates had little effect on existing biases in wet deposition predictions.

  1. Statistical model of fractures and deformation zones. Preliminary site description, Laxemar subarea, version 1.2

    Energy Technology Data Exchange (ETDEWEB)

    Hermanson, Jan; Forssberg, Ola [Golder Associates AB, Stockholm (Sweden); Fox, Aaron; La Pointe, Paul [Golder Associates Inc., Redmond, WA (United States)

    2005-10-15

    The goal of this summary report is to document the data sources, software tools, experimental methods, assumptions, and model parameters in the discrete-fracture network (DFN) model for the local model volume in Laxemar, version 1.2. The model parameters presented herein are intended for use by other project modeling teams. Individual modeling teams may elect to simplify or use only a portion of the DFN model, depending on their needs. This model is not intended to be a flow model or a mechanical model; as such, only the geometrical characterization is presented. The derivations of the hydraulic or mechanical properties of the fractures or their subsurface connectivities are not within the scope of this report. This model represents analyses carried out on particular data sets. If additional data are obtained, or values for existing data are changed or excluded, the conclusions reached in this report, and the parameter values calculated, may change as well. The model volume is divided into two subareas: one located on the Simpevarp peninsula adjacent to the power plant (Simpevarp), and one further to the west (Laxemar). The DFN parameters described in this report were determined by analysis of data collected within the local model volume. As such, the final DFN model is only valid within this local model volume and the modeling subareas (Laxemar and Simpevarp) within it.

  2. A framework for expanding aqueous chemistry in the Community Multiscale Air Quality (CMAQ) model version 5.1

    Science.gov (United States)

    This paper describes the development and implementation of an extendable aqueous-phase chemistry option (AQCHEM − KMT(I)) for the Community Multiscale Air Quality (CMAQ) modeling system, version 5.1. Here, the Kinetic PreProcessor (KPP), version 2.2.3, is used t...

  3. A computationally efficient description of heterogeneous freezing: A simplified version of the Soccer ball model

    Science.gov (United States)

    Niedermeier, Dennis; Ervens, Barbara; Clauss, Tina; Voigtländer, Jens; Wex, Heike; Hartmann, Susan; Stratmann, Frank

    2014-01-01

    In a recent study, the Soccer ball model (SBM) was introduced for modeling and/or parameterizing heterogeneous ice nucleation processes. The model applies classical nucleation theory. It allows for a consistent description of both apparently singular and stochastic ice nucleation behavior, by distributing contact angles over the nucleation sites of a particle population assuming a Gaussian probability density function. The original SBM utilizes the Monte Carlo technique, which hampers its usage in atmospheric models, as fairly time-consuming calculations must be performed to obtain statistically significant results. Thus, we have developed a simplified and computationally more efficient version of the SBM. We successfully used the new SBM to parameterize experimental nucleation data of, e.g., bacterial ice nucleation. Both SBMs give identical results; however, the new model is computationally less expensive as confirmed by cloud parcel simulations. Therefore, it is a suitable tool for describing heterogeneous ice nucleation processes in atmospheric models.
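
    The abstract describes the SBM's core idea: a classical-nucleation-theory rate evaluated over a Gaussian distribution of contact angles, integrated rather than sampled by Monte Carlo. The Python sketch below reproduces that structure only; the prefactor, germ formation energy, site area and the parameter values in the example are illustrative placeholders, not the fitted SBM parameters.

        import numpy as np

        def form_factor(theta):
            """Geometric factor reducing the homogeneous nucleation barrier for contact angle theta (rad)."""
            c = np.cos(theta)
            return (2.0 + c) * (1.0 - c) ** 2 / 4.0

        def frozen_fraction(t, mu_theta, sigma_theta, n_sites=10,
                            j0=1.0e16, dg_hom=2.0e-19, temp=238.0, a_site=1.0e-12):
            """Frozen fraction of a particle population after time t (s), with nucleation-site contact
            angles drawn from a Gaussian distribution (mean mu_theta, std sigma_theta, in rad)."""
            k_b = 1.380649e-23
            theta = np.linspace(1.0e-3, np.pi, 2000)
            dtheta = theta[1] - theta[0]
            pdf = np.exp(-0.5 * ((theta - mu_theta) / sigma_theta) ** 2)
            pdf /= pdf.sum() * dtheta                                          # normalise the truncated Gaussian
            j_het = j0 * np.exp(-dg_hom * form_factor(theta) / (k_b * temp))   # CNT-type rate per site
            p_site_unfrozen = np.sum(pdf * np.exp(-j_het * a_site * t)) * dtheta
            return 1.0 - p_site_unfrozen ** n_sites                            # particle freezes if any site nucleates

        print(frozen_fraction(t=10.0, mu_theta=1.3, sigma_theta=0.2))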

  4. Comparison of OH Reactivity Instruments in the Atmosphere Simulation Chamber SAPHIR.

    Science.gov (United States)

    Fuchs, H.; Novelli, A.; Rolletter, M.; Hofzumahaus, A.; Pfannerstill, E.; Edtbauer, A.; Kessel, S.; Williams, J.; Michoud, V.; Dusanter, S.; Locoge, N.; Zannoni, N.; Gros, V.; Truong, F.; Sarda Esteve, R.; Cryer, D. R.; Brumby, C.; Whalley, L.; Stone, D. J.; Seakins, P. W.; Heard, D. E.; Schoemaecker, C.; Blocquet, M.; Fittschen, C. M.; Thames, A. B.; Coudert, S.; Brune, W. H.; Batut, S.; Tatum Ernest, C.; Harder, H.; Elste, T.; Bohn, B.; Hohaus, T.; Holland, F.; Muller, J. B. A.; Li, X.; Rohrer, F.; Kubistin, D.; Kiendler-Scharr, A.; Tillmann, R.; Andres, S.; Wegener, R.; Yu, Z.; Zou, Q.; Wahner, A.

    2017-12-01

    Two campaigns were conducted performing experiments in the atmospheric simulation chamber SAPHIR at Forschungszentrum Jülich in October 2015 and April 2016 to compare hydroxyl (OH) radical reactivity (kOH) measurements. Chemical conditions were chosen either to be representative of the atmosphere or to test potential limitations of instruments. The results of these campaigns demonstrate that OH reactivity can be accurately measured for a wide range of atmospherically relevant chemical conditions (e.g. water vapor, nitrogen oxides, various organic compounds) by all instruments. The precision of the measurements is higher for instruments directly detecting hydroxyl radicals (OH), whereas the indirect Comparative Reactivity Method (CRM) has a higher limit of detection of 2 s-1 at a time resolution of 10 to 15 min. The performances of the instruments were systematically tested by stepwise increasing, for example, the concentrations of carbon monoxide (CO), water vapor or nitric oxide (NO). In further experiments, mixtures of organic reactants were injected in the chamber to simulate urban and forested environments. Overall, the results show that instruments are capable of measuring OH reactivity in the presence of CO, alkanes, alkenes and aromatic compounds. The transmission efficiency in Teflon inlet lines could have introduced systematic errors in measurements for low-volatile organic compounds in some instruments. CRM instruments exhibited a larger scatter in the data compared to the other instruments. The largest differences to the reference were observed by CRM instruments in the presence of terpenes and oxygenated organic compounds. In some of these experiments, only a small fraction of the reactivity is detected. The accuracy of CRM measurements is most likely limited by the corrections that need to be applied in order to account for known effects of, for example, deviations from pseudo-first order conditions, nitrogen oxides or water vapor on the measurement
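
    OH reactivity itself is simply the sum over all reactants of the OH rate coefficient times the reactant number density. A short Python sketch of that bookkeeping is shown below; the rate coefficients are approximate room-temperature literature values and the mixing ratios are invented, so this is an illustration rather than part of any instrument's data processing.

        # k_OH (s^-1) = sum_i k_i * [X_i]; rate coefficients are approximate 298 K values
        K_OH = {               # cm^3 molecule^-1 s^-1
            "CO": 2.4e-13,
            "CH4": 6.4e-15,
            "isoprene": 1.0e-10,
            "NO2": 1.1e-11,
        }

        def number_density(ppbv, m_air=2.46e19):
            """Convert a ppbv mixing ratio to molecules cm^-3 (air at about 1 atm, 298 K)."""
            return ppbv * 1.0e-9 * m_air

        mixing_ratios = {"CO": 200.0, "CH4": 1900.0, "isoprene": 2.0, "NO2": 5.0}  # ppbv, illustrative

        k_oh = sum(K_OH[s] * number_density(x) for s, x in mixing_ratios.items())
        print(f"k_OH = {k_oh:.2f} s^-1")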

  5. Comparison of OH reactivity measurements in the atmospheric simulation chamber SAPHIR

    Science.gov (United States)

    Fuchs, Hendrik; Novelli, Anna; Rolletter, Michael; Hofzumahaus, Andreas; Pfannerstill, Eva Y.; Kessel, Stephan; Edtbauer, Achim; Williams, Jonathan; Michoud, Vincent; Dusanter, Sebastien; Locoge, Nadine; Zannoni, Nora; Gros, Valerie; Truong, Francois; Sarda-Esteve, Roland; Cryer, Danny R.; Brumby, Charlotte A.; Whalley, Lisa K.; Stone, Daniel; Seakins, Paul W.; Heard, Dwayne E.; Schoemaecker, Coralie; Blocquet, Marion; Coudert, Sebastien; Batut, Sebastien; Fittschen, Christa; Thames, Alexander B.; Brune, William H.; Ernest, Cheryl; Harder, Hartwig; Muller, Jennifer B. A.; Elste, Thomas; Kubistin, Dagmar; Andres, Stefanie; Bohn, Birger; Hohaus, Thorsten; Holland, Frank; Li, Xin; Rohrer, Franz; Kiendler-Scharr, Astrid; Tillmann, Ralf; Wegener, Robert; Yu, Zhujun; Zou, Qi; Wahner, Andreas

    2017-10-01

    Hydroxyl (OH) radical reactivity (kOH) has been measured for 18 years with different measurement techniques. In order to compare the performances of instruments deployed in the field, two campaigns were conducted performing experiments in the atmospheric simulation chamber SAPHIR at Forschungszentrum Jülich in October 2015 and April 2016. Chemical conditions were chosen either to be representative of the atmosphere or to test potential limitations of instruments. All types of instruments that are currently used for atmospheric measurements were used in one of the two campaigns. The results of these campaigns demonstrate that OH reactivity can be accurately measured for a wide range of atmospherically relevant chemical conditions (e.g. water vapour, nitrogen oxides, various organic compounds) by all instruments. The precision of the measurements (limit of detection < 1 s-1 at a time resolution of 30 s to a few minutes) is higher for instruments directly detecting hydroxyl radicals, whereas the indirect comparative reactivity method (CRM) has a higher limit of detection of 2 s-1 at a time resolution of 10 to 15 min. The performances of the instruments were systematically tested by stepwise increasing, for example, the concentrations of carbon monoxide (CO), water vapour or nitric oxide (NO). In further experiments, mixtures of organic reactants were injected into the chamber to simulate urban and forested environments. Overall, the results show that the instruments are capable of measuring OH reactivity in the presence of CO, alkanes, alkenes and aromatic compounds. The transmission efficiency in Teflon inlet lines could have introduced systematic errors in measurements for low-volatile organic compounds in some instruments. CRM instruments exhibited a larger scatter in the data compared to the other instruments. The largest differences to reference measurements or to calculated reactivity were observed by CRM instruments in the presence of terpenes and oxygenated organic compounds (mixing ratios of OH reactants were up to 10 ppbv). In some of these experiments, only a small fraction of the reactivity is detected. The accuracy of CRM

  6. Community Land Model Version 3.0 (CLM3.0) Developer's Guide

    Energy Technology Data Exchange (ETDEWEB)

    Hoffman, FM

    2004-12-21

    This document describes the guidelines adopted for software development of the Community Land Model (CLM) and serves as a reference to the entire code base of the released version of the model. The version of the code described here is Version 3.0 which was released in the summer of 2004. This document, the Community Land Model Version 3.0 (CLM3.0) User's Guide (Vertenstein et al., 2004), the Technical Description of the Community Land Model (CLM) (Oleson et al., 2004), and the Community Land Model's Dynamic Global Vegetation Model (CLM-DGVM): Technical Description and User's Guide (Levis et al., 2004) provide the developer, user, or researcher with details of implementation, instructions for using the model, a scientific description of the model, and a scientific description of the Dynamic Global Vegetation Model integrated with CLM respectively. The CLM is a single column (snow-soil-vegetation) biogeophysical model of the land surface which can be run serially (on a laptop or personal computer) or in parallel (using distributed or shared memory processors or both) on both vector and scalar computer architectures. Written in Fortran 90, CLM can be run offline (i.e., run in isolation using stored atmospheric forcing data), coupled to an atmospheric model (e.g., the Community Atmosphere Model (CAM)), or coupled to a climate system model (e.g., the Community Climate System Model Version 3 (CCSM3)) through a flux coupler (e.g., Coupler 6 (CPL6)). When coupled, CLM exchanges fluxes of energy, water, and momentum with the atmosphere. The horizontal land surface heterogeneity is represented by a nested subgrid hierarchy composed of gridcells, landunits, columns, and plant functional types (PFTs). This hierarchical representation is reflected in the data structures used by the model code. Biophysical processes are simulated for each subgrid unit (landunit, column, and PFT) independently, and prognostic variables are maintained for each subgrid unit
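
    The gridcell/landunit/column/PFT hierarchy described above maps naturally onto nested containers. The Python sketch below is only a schematic of that nesting for illustration; the class and field names are invented and do not correspond to the actual CLM Fortran data structures.

        from dataclasses import dataclass, field
        from typing import List

        @dataclass
        class PFT:
            name: str
            fractional_cover: float          # fraction of the parent column

        @dataclass
        class Column:
            soil_temperature_k: float
            pfts: List[PFT] = field(default_factory=list)

        @dataclass
        class LandUnit:
            kind: str                        # e.g. "vegetated", "lake", "urban", "glacier"
            columns: List[Column] = field(default_factory=list)

        @dataclass
        class GridCell:
            lat: float
            lon: float
            landunits: List[LandUnit] = field(default_factory=list)

        cell = GridCell(40.0, -105.0, [
            LandUnit("vegetated", [Column(283.0, [PFT("needleleaf evergreen", 0.6),
                                                  PFT("C3 grass", 0.4)])]),
            LandUnit("lake", [Column(280.0)]),
        ])
        print(len(cell.landunits), cell.landunits[0].columns[0].pfts[0].name)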

  7. MESOI Version 2.0: an interactive mesoscale Lagrangian puff dispersion model with deposition and decay

    International Nuclear Information System (INIS)

    Ramsdell, J.V.; Athey, G.F.; Glantz, C.S.

    1983-11-01

    MESOI Version 2.0 is an interactive Lagrangian puff model for estimating the transport, diffusion, deposition and decay of effluents released to the atmosphere. The model is capable of treating simultaneous releases from as many as four release points, which may be elevated or at ground-level. The puffs are advected by a horizontal wind field that is defined in three dimensions. The wind field may be adjusted for expected topographic effects. The concentration distribution within the puffs is initially assumed to be Gaussian in the horizontal and vertical. However, the vertical concentration distribution is modified by assuming reflection at the ground and the top of the atmospheric mixing layer. Material is deposited on the surface using a source depletion, dry deposition model and a washout coefficient model. The model also treats the decay of a primary effluent species and the ingrowth and decay of a single daughter species using a first order decay process. This report is divided into two parts. The first part discusses the theoretical and mathematical bases upon which MESOI Version 2.0 is based. The second part contains the MESOI computer code. The programs were written in the ANSI standard FORTRAN 77 and were developed on a VAX 11/780 computer. 43 references, 14 figures, 13 tables
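
    A Gaussian puff with reflection at the ground and at the mixing-layer top is conventionally written as a sum over image sources. The Python sketch below shows that textbook formulation only; it is not the MESOI code, and the release and receptor values in the example are arbitrary.

        import numpy as np

        def puff_concentration(x, y, z, q, xc, yc, zc, sy, sz, zi, n_refl=3):
            """Concentration of a single Gaussian puff of mass q centred at (xc, yc, zc), with
            horizontal/vertical spreads sy/sz (m) and mixing height zi (m). Image sources at
            z = 2*n*zi +/- zc approximate reflection at the ground and at the mixing-layer top."""
            horiz = np.exp(-((x - xc) ** 2 + (y - yc) ** 2) / (2.0 * sy ** 2))
            vert = 0.0
            for n in range(-n_refl, n_refl + 1):
                for zs in (2.0 * n * zi + zc, 2.0 * n * zi - zc):
                    vert += np.exp(-((z - zs) ** 2) / (2.0 * sz ** 2))
            norm = (2.0 * np.pi) ** 1.5 * sy ** 2 * sz
            return q * horiz * vert / norm

        # 1 kg puff centred 500 m downwind at 50 m height; ground-level receptor below the centre
        print(puff_concentration(500.0, 0.0, 0.0, 1.0, 500.0, 0.0, 50.0, 60.0, 30.0, 800.0))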

   8. MESOI Version 2.0: an interactive mesoscale Lagrangian puff dispersion model with deposition and decay

    Energy Technology Data Exchange (ETDEWEB)

    Ramsdell, J.V.; Athey, G.F.; Glantz, C.S.

    1983-11-01

    MESOI Version 2.0 is an interactive Lagrangian puff model for estimating the transport, diffusion, deposition and decay of effluents released to the atmosphere. The model is capable of treating simultaneous releases from as many as four release points, which may be elevated or at ground-level. The puffs are advected by a horizontal wind field that is defined in three dimensions. The wind field may be adjusted for expected topographic effects. The concentration distribution within the puffs is initially assumed to be Gaussian in the horizontal and vertical. However, the vertical concentration distribution is modified by assuming reflection at the ground and the top of the atmospheric mixing layer. Material is deposited on the surface using a source depletion, dry deposition model and a washout coefficient model. The model also treats the decay of a primary effluent species and the ingrowth and decay of a single daughter species using a first order decay process. This report is divided into two parts. The first part discusses the theoretical and mathematical bases upon which MESOI Version 2.0 is based. The second part contains the MESOI computer code. The programs were written in the ANSI standard FORTRAN 77 and were developed on a VAX 11/780 computer. 43 references, 14 figures, 13 tables.

  9. Description of the new version 4.0 of the tritium model UFOTRI including user guide

    International Nuclear Information System (INIS)

    Raskob, W.

    1993-08-01

    In view of the future operation of fusion reactors the release of tritium may play a dominant role during normal operation as well as after accidents. Because of its physical and chemical properties, which differ significantly from those of other radionuclides, the model UFOTRI for assessing the radiological consequences of accidental tritium releases has been developed. It describes the behaviour of tritium in the biosphere and calculates the radiological impact on individuals and the population due to the direct exposure and by the ingestion pathways. Processes such as the conversion of tritium gas into tritiated water (HTO) in the soil, re-emission after deposition and the conversion of HTO into organically bound tritium are considered. The use of UFOTRI in its probabilistic mode shows the spectrum of the radiological impact together with the associated probability of occurrence. A first model version was established in 1991. As ongoing work investigating the main processes of tritium behaviour in the environment has produced new results, the model has been improved in several respects. The report describes the changes incorporated into the model since 1991. Additionally, it provides the updated user guide for handling the revised UFOTRI version, which will be distributed to interested organizations. (orig.)

  10. Technical note: The Lagrangian particle dispersion model FLEXPART version 6.2

    Directory of Open Access Journals (Sweden)

    A. Stohl

    2005-01-01

    Full Text Available The Lagrangian particle dispersion model FLEXPART was originally (about 8 years ago) designed for calculating the long-range and mesoscale dispersion of air pollutants from point sources, such as after an accident in a nuclear power plant. In the meantime FLEXPART has evolved into a comprehensive tool for atmospheric transport modeling and analysis. Its application fields were extended from air pollution studies to other topics where atmospheric transport plays a role (e.g., exchange between the stratosphere and troposphere, or the global water cycle). It has evolved into a true community model that is now being used by at least 25 groups from 14 different countries and is seeing both operational and research applications. A user manual has been kept up to date over the years and has been distributed via an internet page along with the model's source code. In this note we provide a citeable technical description of FLEXPART's latest version (6.2).

  11. Update of the Polar SWIFT model for polar stratospheric ozone loss (Polar SWIFT version 2)

    Science.gov (United States)

    Wohltmann, Ingo; Lehmann, Ralph; Rex, Markus

    2017-07-01

    The Polar SWIFT model is a fast scheme for calculating the chemistry of stratospheric ozone depletion in polar winter. It is intended for use in global climate models (GCMs) and Earth system models (ESMs) to enable the simulation of mutual interactions between the ozone layer and climate. To date, climate models often use prescribed ozone fields, since a full stratospheric chemistry scheme is computationally very expensive. Polar SWIFT is based on a set of coupled differential equations, which simulate the polar vortex-averaged mixing ratios of the key species involved in polar ozone depletion on a given vertical level. These species are O3, chemically active chlorine (ClOx), HCl, ClONO2 and HNO3. The only external input parameters that drive the model are the fraction of the polar vortex in sunlight and the fraction of the polar vortex below the temperatures necessary for the formation of polar stratospheric clouds. Here, we present an update of the Polar SWIFT model introducing several improvements over the original model formulation. In particular, the model is now trained on vortex-averaged reaction rates of the ATLAS Chemistry and Transport Model, which enables a detailed look at individual processes and an independent validation of the different parameterizations contained in the differential equations. The training of the original Polar SWIFT model was based on fitting complete model runs to satellite observations and did not allow for this. A revised formulation of the system of differential equations is developed, which closely fits vortex-averaged reaction rates from ATLAS that represent the main chemical processes influencing ozone. In addition, a parameterization for the HNO3 change by denitrification is included. The rates of change of the concentrations of the chemical species of the Polar SWIFT model are purely chemical rates of change in the new version, whereas in the original Polar SWIFT model, they included a transport effect caused by the
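
    To make the structure concrete, the toy Python sketch below integrates a set of coupled, vortex-averaged equations with the same state vector (O3, ClOx, HCl, ClONO2, HNO3) and the same two external forcings (sunlit fraction and PSC-temperature fraction). The rate expressions and coefficients are placeholders chosen only to illustrate the coupling; they are not the Polar SWIFT parameterization.

        from scipy.integrate import solve_ivp

        def toy_polar_chem(t, y, f_sun, f_psc):
            o3, clox, hcl, clono2, hno3 = y
            activation = 0.05 * f_psc(t) * min(hcl, clono2)   # heterogeneous chlorine activation on PSCs
            o3_loss = 0.002 * f_sun(t) * clox * o3            # sunlight-driven catalytic ozone loss
            deactivation = 0.01 * f_sun(t) * clox             # reformation of the reservoir species
            return [
                -o3_loss,
                2.0 * activation - deactivation,
                -activation + 0.5 * deactivation,
                -activation + 0.5 * deactivation,
                -0.001 * f_psc(t) * hno3,                     # denitrification
            ]

        f_sun = lambda t: 0.3     # fraction of the vortex in sunlight (held constant here)
        f_psc = lambda t: 0.5     # fraction of the vortex below PSC formation temperature
        y0 = [3.0, 0.0, 1.5, 1.0, 10.0]                       # ppbv-scale initial values, illustrative
        sol = solve_ivp(toy_polar_chem, (0.0, 90.0), y0, args=(f_sun, f_psc), max_step=1.0)
        print(sol.y[0, -1])                                   # ozone after 90 days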

  12. QMM – A Quarterly Macroeconomic Model of the Icelandic Economy. Version 2.0

    DEFF Research Database (Denmark)

    Ólafsson, Tjörvi

    This paper documents and describes Version 2.0 of the Quarterly Macroeconomic Model of the Central Bank of Iceland (QMM). QMM and the underlying quarterly database have been under construction since 2001 at the Research and Forecasting Division of the Economics Department at the Bank and were first implemented in the forecasting round for the Monetary Bulletin 2006/1 in March 2006. QMM is used by the Bank for forecasting and various policy simulations and therefore plays a key role as an organisational framework for viewing the medium-term future when formulating monetary policy at the Bank.

  13. GOOSE Version 1.4: A powerful object-oriented simulation environment for developing reactor models

    International Nuclear Information System (INIS)

    Nypaver, D.J.; March-Leuba, C.; Abdalla, M.A.; Guimaraes, L.

    1992-01-01

    A prototype software package for a fully interactive Generalized Object-Oriented Simulation Environment (GOOSE) is being developed at Oak Ridge National Laboratory. Dynamic models are easily constructed and tested; fully interactive capabilities allow the user to alter model parameters and complexity without recompilation. This environment provides access to powerful tools such as numerical integration packages, graphical displays, and online help. In GOOSE, portability has been achieved by creating the environment in Objective-C, which is supported by a variety of platforms including UNIX and DOS. GOOSE Version 1.4 introduces new enhancements such as the capability of creating 'initial', 'dynamic', and 'digital' methods. The object-oriented approach to simulation used in GOOSE combines the concept of modularity with the additional features of allowing precompilation, optimization, testing, and validation of individual modules. Once a library of classes has been defined and compiled, models can be built and modified without recompilation. GOOSE Version 1.4 is primarily command-line driven

  14. Mars Global Reference Atmospheric Model 2001 Version (Mars-GRAM 2001): Users Guide

    Science.gov (United States)

    Justus, C. G.; Johnson, D. L.

    2001-01-01

    This document presents Mars Global Reference Atmospheric Model 2001 Version (Mars-GRAM 2001) and its new features. As with the previous version (Mars-GRAM 2000), all parameterizations for temperature, pressure, density, and winds versus height, latitude, longitude, time of day, and season (Ls) use input data tables from the NASA Ames Mars General Circulation Model (MGCM) for the surface through 80-km altitude and the University of Arizona Mars Thermospheric General Circulation Model (MTGCM) for 80 to 170 km. Mars-GRAM 2001 is based on topography from the Mars Orbiter Laser Altimeter (MOLA) and includes new MGCM data at the topographic surface. A new auxiliary program allows Mars-GRAM output to be used to compute shortwave (solar) and longwave (thermal) radiation at the surface and top of atmosphere. This memorandum includes instructions on obtaining Mars-GRAM source code and data files and for running the program. It also provides sample input and output and an example for incorporating Mars-GRAM as an atmospheric subroutine in a trajectory code.

  15. A RETRAN-02 model of the Sizewell B PCSR design - the Winfrith one-loop model, version 3.0

    International Nuclear Information System (INIS)

    Kinnersly, S.R.

    1983-11-01

    A one-loop RETRAN-02 model of the Sizewell B Pre Construction Safety Report (PCSR) design, set up at Winfrith, is described and documented. The model is suitable for symmetrical pressurised transients. Comparison with data from the Sizewell B PCSR shows that the model is a good representation of that design. Known errors, limitations and deficiencies are described. The mode of storage and maintenance at Winfrith using PROMUS (Program Maintenance and Update System) is noted. It is recommended that users modify the standard data by adding replacement cards to the end so as to aid in identification, use and maintenance of local versions. (author)

  16. Mars Global Reference Atmospheric Model (Mars-GRAM) Version 3.8: Users Guide

    Science.gov (United States)

    Justus, C. G.; James, B. F.

    1999-01-01

    Mars Global Reference Atmospheric Model (Mars-GRAM) Version 3.8 is presented and its new features are discussed. Mars-GRAM uses new values of planetary reference ellipsoid radii, gravity term, and rotation rate (consistent with current JPL values) and includes centrifugal effects on gravity. The model now uses the NASA Ames Global Circulation Model low-resolution topography. Curvature corrections are applied to winds, and limits based on the speed of sound are imposed. The altitude of the F1 ionization peak and the density scale height, including effects of the change of molecular weight with altitude, are computed. A check is performed to disallow temperatures below CO2 sublimation. This memorandum includes instructions on obtaining Mars-GRAM source code and data files and running the program. Sample input and output are provided. An example of incorporating Mars-GRAM as an atmospheric subroutine in a trajectory code is also given.

  17. Intercomparison of NO3 radical detection instruments in the atmosphere simulation chamber SAPHIR

    Directory of Open Access Journals (Sweden)

    H.-P. Dorn

    2013-05-01

    Full Text Available The detection of atmospheric NO3 radicals is still challenging owing to their low mixing ratios (≈ 1 to 300 pptv) in the troposphere. While long-path differential optical absorption spectroscopy (DOAS) has been a well-established NO3 detection approach for over 25 yr, new, sensitive techniques have been developed in the past decade. This publication outlines the results of the first comprehensive intercomparison of seven instruments developed for the spectroscopic detection of tropospheric NO3. Four instruments were based on cavity ring-down spectroscopy (CRDS), two utilised open-path cavity-enhanced absorption spectroscopy (CEAS), and one applied "classical" long-path DOAS. The intercomparison campaign "NO3Comp" was held at the atmosphere simulation chamber SAPHIR in Jülich (Germany) in June 2007. Twelve experiments were performed in the well-mixed chamber for variable concentrations of NO3, N2O5, NO2, hydrocarbons, and water vapour, in the absence and in the presence of inorganic or organic aerosol. The overall precision of the cavity instruments varied between 0.5 and 5 pptv for integration times of 1 s to 5 min; that of the DOAS instrument was 9 pptv for an acquisition time of 1 min. The NO3 data of all instruments correlated excellently with the NOAA-CRDS instrument, which was selected as the common reference because of its superb sensitivity, high time resolution, and most comprehensive data coverage. The median of the coefficient of determination (r2) over all experiments of the campaign (60 correlations) is r2 = 0.981 (quartile 1 (Q1): 0.949; quartile 3 (Q3): 0.994; min/max: 0.540/0.999). The linear regression analysis of the campaign data set yielded very small intercepts (median: 1.1 pptv; Q1/Q3: −1.1/2.6 pptv; min/max: −14.1/28.0 pptv), and the slopes of the regression lines were close to unity (median: 1.01; Q1/Q3: 0.92/1.10; min/max: 0.72/1.36). The deviation of individual regression slopes from unity was always within the combined
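
    The correlation and regression statistics quoted above (slope, intercept and r2 against the reference instrument) are straightforward to reproduce. The Python sketch below shows one such ordinary-least-squares comparison; the NO3 values in the example are invented, not campaign data.

        import numpy as np

        def regression_stats(reference, instrument):
            """Slope, intercept and coefficient of determination of instrument vs. reference."""
            x = np.asarray(reference, dtype=float)
            y = np.asarray(instrument, dtype=float)
            slope, intercept = np.polyfit(x, y, 1)
            r = np.corrcoef(x, y)[0, 1]
            return slope, intercept, r ** 2

        ref = [5.0, 12.0, 30.0, 55.0, 80.0, 120.0]      # pptv, illustrative
        inst = [6.1, 12.8, 29.0, 56.5, 83.0, 118.0]
        print("slope %.2f, intercept %.1f pptv, r2 %.3f" % regression_stats(ref, inst))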

  18. Comparison of OH reactivity measurements in the atmospheric simulation chamber SAPHIR

    Directory of Open Access Journals (Sweden)

    H. Fuchs

    2017-10-01

    Full Text Available Hydroxyl (OH) radical reactivity (kOH) has been measured for 18 years with different measurement techniques. In order to compare the performances of instruments deployed in the field, two campaigns were conducted performing experiments in the atmospheric simulation chamber SAPHIR at Forschungszentrum Jülich in October 2015 and April 2016. Chemical conditions were chosen either to be representative of the atmosphere or to test potential limitations of instruments. All types of instruments that are currently used for atmospheric measurements were used in one of the two campaigns. The results of these campaigns demonstrate that OH reactivity can be accurately measured for a wide range of atmospherically relevant chemical conditions (e.g. water vapour, nitrogen oxides, various organic compounds) by all instruments. The precision of the measurements (limit of detection < 1 s−1 at a time resolution of 30 s to a few minutes) is higher for instruments directly detecting hydroxyl radicals, whereas the indirect comparative reactivity method (CRM) has a higher limit of detection of 2 s−1 at a time resolution of 10 to 15 min. The performances of the instruments were systematically tested by stepwise increasing, for example, the concentrations of carbon monoxide (CO), water vapour or nitric oxide (NO). In further experiments, mixtures of organic reactants were injected into the chamber to simulate urban and forested environments. Overall, the results show that the instruments are capable of measuring OH reactivity in the presence of CO, alkanes, alkenes and aromatic compounds. The transmission efficiency in Teflon inlet lines could have introduced systematic errors in measurements for low-volatile organic compounds in some instruments. CRM instruments exhibited a larger scatter in the data compared to the other instruments. The largest differences to reference measurements or to calculated reactivity were observed by CRM instruments in

  19. Landfill Gas Energy Cost Model Version 3.0 (LFGcost-Web V3 ...

    Science.gov (United States)

    To help stakeholders estimate the costs of a landfill gas (LFG) energy project, in 2002, LMOP developed a cost tool (LFGcost). Since then, LMOP has routinely updated the tool to reflect changes in the LFG energy industry. Initially the model was designed for EPA to assist landfills in evaluating the economic and financial feasibility of LFG energy project development. In 2014, LMOP developed a public version of the model, LFGcost-Web (Version 3.0), to allow landfill and industry stakeholders to evaluate project feasibility on their own. LFGcost-Web can analyze costs for 12 energy recovery project types. These project costs can be estimated with or without the costs of a gas collection and control system (GCCS). The EPA used select equations from LFGcost-Web to estimate costs of the regulatory options in the 2015 proposed revisions to the MSW Landfills Standards of Performance (also known as New Source Performance Standards) and the Emission Guidelines (hereinafter referred to collectively as the Landfill Rules). More specifically, equations derived from LFGcost-Web were applied to each landfill expected to be impacted by the Landfill Rules to estimate annualized installed capital costs and annual O&M costs of a gas collection and control system. In addition, after applying the LFGcost-Web equations to the list of landfills expected to require a GCCS in year 2025 as a result of the proposed Landfill Rules, the regulatory analysis evaluated whether electr
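
    The combination of an annualized installed capital cost with annual O&M mentioned above is typically done with a capital recovery factor. The Python sketch below shows that generic calculation; the discount rate, lifetime and costs are invented examples and are not LFGcost-Web equations or defaults.

        def capital_recovery_factor(rate, years):
            """Factor converting an installed capital cost into a uniform annual cost."""
            return rate * (1.0 + rate) ** years / ((1.0 + rate) ** years - 1.0)

        def annualized_cost(installed_capital, annual_om, rate=0.08, years=15):
            """Annualized capital plus annual O&M for a hypothetical gas collection and control system."""
            return installed_capital * capital_recovery_factor(rate, years) + annual_om

        print(f"${annualized_cost(2_500_000, 180_000):,.0f} per year")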

  20. User guide for MODPATH version 6 - A particle-tracking model for MODFLOW

    Science.gov (United States)

    Pollock, David W.

    2012-01-01

    MODPATH is a particle-tracking post-processing model that computes three-dimensional flow paths using output from groundwater flow simulations based on MODFLOW, the U.S. Geological Survey (USGS) finite-difference groundwater flow model. This report documents MODPATH version 6. Previous versions were documented in USGS Open-File Reports 89-381 and 94-464. The program uses a semianalytical particle-tracking scheme that allows an analytical expression of a particle's flow path to be obtained within each finite-difference grid cell. A particle's path is computed by tracking the particle from one cell to the next until it reaches a boundary, an internal sink/source, or satisfies another termination criterion. Data input to MODPATH consists of a combination of MODFLOW input data files, MODFLOW head and flow output files, and other input files specific to MODPATH. Output from MODPATH consists of several output files, including a number of particle coordinate output files intended to serve as input data for other programs that process, analyze, and display the results in various ways. MODPATH is written in FORTRAN and can be compiled by any FORTRAN compiler that fully supports FORTRAN-2003 or by most commercially available FORTRAN-95 compilers that support the major FORTRAN-2003 language extensions.
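
    The semianalytical scheme rests on the fact that, with face velocities interpolated linearly across a cell, the particle's motion along each axis obeys dx/dt = v1 + A(x - x1), which integrates to an exponential and gives a closed-form exit time; the full algorithm takes the minimum exit time over the three directions. A one-directional, single-cell Python sketch of that idea (not the MODPATH source code, and ignoring stagnation and weak-sink handling) is shown below.

        import math

        def track_through_cell(xp, v1, v2, x1, x2):
            """Exit time and exit face position for one coordinate direction of Pollock's scheme."""
            A = (v2 - v1) / (x2 - x1)              # linear velocity gradient across the cell
            vp = v1 + A * (xp - x1)                # particle velocity at its current position
            x_exit, v_exit = (x2, v2) if vp > 0 else (x1, v1)
            if abs(A) < 1e-12:                     # uniform velocity: simple linear motion
                return (x_exit - xp) / vp, x_exit
            t_exit = math.log(v_exit / vp) / A     # exponential solution of dx/dt = v1 + A*(x - x1)
            return t_exit, x_exit

        # Particle at x = 2 m in a 10 m cell, inflow face velocity 1.0 m/d, outflow face 0.5 m/d
        print(track_through_cell(2.0, 1.0, 0.5, 0.0, 10.0))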

  1. Igpet software for modeling igneous processes: examples of application using the open educational version

    Science.gov (United States)

    Carr, Michael J.; Gazel, Esteban

    2017-04-01

    We provide here an open version of the Igpet software, called t-Igpet to emphasize its application for teaching and research in forward modeling of igneous geochemistry. There are three programs: a norm utility, a petrologic mixing program using least squares, and Igpet, a graphics program that includes many forms of numerical modeling. Igpet is a multifaceted tool that provides the following basic capabilities: igneous rock identification using the IUGS (International Union of Geological Sciences) classification and several supplementary diagrams; tectonic discrimination diagrams; pseudo-quaternary projections; least squares fitting of lines, polynomials and hyperbolae; magma mixing using two endmembers; histograms, x-y plots, ternary plots and spider-diagrams. The advanced capabilities of Igpet are multi-element mixing and magma evolution modeling. Mixing models are particularly useful for understanding the isotopic variations in rock suites that evolved by mixing different sources. The important melting models include batch melting, fractional melting and aggregated fractional melting. Crystallization models include equilibrium and fractional crystallization and AFC (assimilation and fractional crystallization). Theses, reports and proposals concerning igneous petrology are improved by numerical modeling. For reviewed publications some elements of modeling are practically a requirement. Our intention in providing this software is to facilitate improved communication and lower entry barriers to research, especially for students.
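
    The batch and aggregated fractional melting models mentioned above have compact closed forms, shown in the Python sketch below with an invented source concentration and bulk partition coefficient for illustration.

        def batch_melting(c0, d, f):
            """Equilibrium (batch) melting: liquid concentration for source c0, bulk D and melt fraction F."""
            return c0 / (d + f * (1.0 - d))

        def aggregated_fractional_melting(c0, d, f):
            """Accumulated (aggregated) fractional melt composition for the same parameters."""
            return c0 * (1.0 - (1.0 - f) ** (1.0 / d)) / f

        c0, d = 10.0, 0.01          # ppm in the source; a highly incompatible trace element
        for f in (0.01, 0.05, 0.10):
            print(f, round(batch_melting(c0, d, f), 1), round(aggregated_fractional_melting(c0, d, f), 1))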

  2. The NASA/MSFC Global Reference Atmospheric Model-1995 version (GRAM-95)

    Science.gov (United States)

    Justus, C. G.; Jeffries, W. R., III; Yung, S. P.; Johnson, D. L.

    1995-01-01

    The latest version of the Global Reference Atmospheric Model (GRAM-95) is presented and discussed. GRAM-95 uses the new Global Upper Air Climatic Atlas (GUACA) CD-ROM data set, for 0- to 27-km altitudes. As with earlier versions, GRAM-95 provides complete geographical and altitude coverage for each month of the year. Individual years 1985 to 1991 and a period-of-record (1980 to 1991) can be simulated for the GUACA height range. GRAM-95 uses a specially developed data set, based on Middle Atmosphere Program (MAP) data, for the 20- to 120-km height range, and the NASA Marshall Engineering Thermosphere (MET) model for heights above 90 km. Fairing techniques assure a smooth transition in the overlap height ranges (20 to 27 km and 90 to 120 km). In addition to the traditional GRAM variables of pressure, density, temperature and wind components, GRAM-95 now includes water vapor and 11 other atmospheric constituents (O3, N2O, CO, CH4, CO2, N2, O2, O, A, He, and H). A new, variable-scale perturbation model provides both large-scale and small-scale deviations from mean values for the thermodynamic variables and horizontal and vertical wind components. The perturbation model includes new features that simulate intermittency (patchiness) in turbulence and small-scale perturbation fields. The density perturbations and density gradients (density shears) computed by the new model compare favorably in their statistical characteristics with observed density perturbations and density shears from 32 space shuttle reentry profiles. GRAM-95 provides considerable improvement in wind estimates from the new GUACA data set, compared to winds calculated from the geostrophic wind relations previously used in the 0- to 25-km height range. The GRAM-95 code has been put into a more modular form, easier to incorporate as subroutines in other programs (e.g., trajectory codes). A complete user's guide for running the program, plus sample input and output, is provided.
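
    As a schematic of how an atmosphere model is embedded as a subroutine in a trajectory code, the Python sketch below queries a stand-in density function at every integration step of a simple one-dimensional descent with drag. The exponential profile and all numbers are placeholders; GRAM-95 itself is a Fortran model returning month-, location- and perturbation-dependent values.

        import math

        def atmosphere_density(h_m):
            """Stand-in for a GRAM-style atmosphere call: isothermal exponential profile (kg/m^3)."""
            return 1.225 * math.exp(-h_m / 8500.0)

        def descend_with_drag(h0, v0, mass, cd_area, dt=0.1):
            """1-D descent that calls the atmosphere model each step, as a trajectory code would."""
            h, v, t = h0, v0, 0.0
            while h > 0.0:
                rho = atmosphere_density(h)
                a = -9.81 - 0.5 * rho * v * abs(v) * cd_area / mass   # drag always opposes motion
                v += a * dt
                h += v * dt
                t += dt
            return t

        print(f"touchdown after {descend_with_drag(30000.0, -50.0, 500.0, 2.0):.0f} s")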

  3. LERC-SLAM - THE NASA LEWIS RESEARCH CENTER SATELLITE LINK ATTENUATION MODEL PROGRAM (IBM PC VERSION)

    Science.gov (United States)

    Manning, R. M.

    1994-01-01

    The frequency and intensity of rain attenuation affecting the communication between a satellite and an earth terminal is an important consideration in planning satellite links. The NASA Lewis Research Center Satellite Link Attenuation Model Program (LeRC-SLAM) provides a static and dynamic statistical assessment of the impact of rain attenuation on a communications link established between an earth terminal and a geosynchronous satellite. The program is designed for use in the specification, design and assessment of satellite links for any terminal location in the continental United States. The basis for LeRC-SLAM is the ACTS Rain Attenuation Prediction Model, which uses a log-normal cumulative probability distribution to describe the random process of rain attenuation on satellite links. The derivation of the statistics for the rainrate process at the specified terminal location relies on long term rainfall records compiled by the U.S. Weather Service during time periods of up to 55 years in length. The theory of extreme value statistics is also utilized. The user provides 1) the longitudinal position of the satellite in geosynchronous orbit, 2) the geographical position of the earth terminal in terms of latitude and longitude, 3) the height above sea level of the terminal site, 4) the yearly average rainfall at the terminal site, and 5) the operating frequency of the communications link (within 1 to 1000 GHz, inclusive). Based on the yearly average rainfall at the terminal location, LeRC-SLAM calculates the relevant rain statistics for the site using an internal data base. The program then generates rain attenuation data for the satellite link. This data includes a description of the static (i.e., yearly) attenuation process, an evaluation of the cumulative probability distribution for attenuation effects, and an evaluation of the probability of fades below selected fade depths. In addition, LeRC-SLAM calculates the elevation and azimuth angles of the terminal
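
    With a log-normal attenuation distribution, the probability of exceeding a given fade depth follows directly from the complementary normal CDF. The Python sketch below shows that generic calculation; the median attenuation and log-standard deviation are invented illustration values, not output of the ACTS Rain Attenuation Prediction Model.

        import math

        def exceedance_probability(attenuation_db, median_db, sigma_ln):
            """P(attenuation > attenuation_db) for a log-normal attenuation distribution."""
            z = (math.log(attenuation_db) - math.log(median_db)) / sigma_ln
            return 0.5 * math.erfc(z / math.sqrt(2.0))

        for fade_db in (1.0, 3.0, 10.0):
            print(fade_db, exceedance_probability(fade_db, median_db=0.5, sigma_ln=1.2))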

  4. LERC-SLAM - THE NASA LEWIS RESEARCH CENTER SATELLITE LINK ATTENUATION MODEL PROGRAM (MACINTOSH VERSION)

    Science.gov (United States)

    Manning, R. M.

    1994-01-01

    The frequency and intensity of rain attenuation affecting the communication between a satellite and an earth terminal is an important consideration in planning satellite links. The NASA Lewis Research Center Satellite Link Attenuation Model Program (LeRC-SLAM) provides a static and dynamic statistical assessment of the impact of rain attenuation on a communications link established between an earth terminal and a geosynchronous satellite. The program is designed for use in the specification, design and assessment of satellite links for any terminal location in the continental United States. The basis for LeRC-SLAM is the ACTS Rain Attenuation Prediction Model, which uses a log-normal cumulative probability distribution to describe the random process of rain attenuation on satellite links. The derivation of the statistics for the rainrate process at the specified terminal location relies on long term rainfall records compiled by the U.S. Weather Service during time periods of up to 55 years in length. The theory of extreme value statistics is also utilized. The user provides 1) the longitudinal position of the satellite in geosynchronous orbit, 2) the geographical position of the earth terminal in terms of latitude and longitude, 3) the height above sea level of the terminal site, 4) the yearly average rainfall at the terminal site, and 5) the operating frequency of the communications link (within 1 to 1000 GHz, inclusive). Based on the yearly average rainfall at the terminal location, LeRC-SLAM calculates the relevant rain statistics for the site using an internal data base. The program then generates rain attenuation data for the satellite link. This data includes a description of the static (i.e., yearly) attenuation process, an evaluation of the cumulative probability distribution for attenuation effects, and an evaluation of the probability of fades below selected fade depths. In addition, LeRC-SLAM calculates the elevation and azimuth angles of the terminal

  5. New Source Term Model for the RESRAD-OFFSITE Code Version 3

    Energy Technology Data Exchange (ETDEWEB)

    Yu, Charley [Argonne National Lab. (ANL), Argonne, IL (United States); Gnanapragasam, Emmanuel [Argonne National Lab. (ANL), Argonne, IL (United States); Cheng, Jing-Jy [Argonne National Lab. (ANL), Argonne, IL (United States); Kamboj, Sunita [Argonne National Lab. (ANL), Argonne, IL (United States); Chen, Shih-Yew [Argonne National Lab. (ANL), Argonne, IL (United States)

    2013-06-01

    This report documents the new source term model developed and implemented in Version 3 of the RESRAD-OFFSITE code. This new source term model includes: (1) "first order release with transport" option, in which the release of the radionuclide is proportional to the inventory in the primary contamination and the user-specified leach rate is the proportionality constant, (2) "equilibrium desorption release" option, in which the user specifies the distribution coefficient which quantifies the partitioning of the radionuclide between the solid and aqueous phases, and (3) "uniform release" option, in which the radionuclides are released from a constant fraction of the initially contaminated material during each time interval and the user specifies the duration over which the radionuclides are released.
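
    The three release options can be summarised by the simple relations sketched below in Python. The functions and numbers are illustrative stand-ins for the concepts named in the abstract, not the RESRAD-OFFSITE implementation.

        import math

        def first_order_release_rate(inventory0, leach_rate, t):
            """Option 1: release rate proportional to the remaining inventory (leach_rate in 1/yr)."""
            return leach_rate * inventory0 * math.exp(-leach_rate * t)

        def equilibrium_aqueous_concentration(total_activity_per_volume, kd, theta, rho_b):
            """Option 2: pore-water concentration from solid/liquid partitioning with Kd (L/kg),
            moisture content theta (-) and bulk density rho_b (kg/L); a standard partitioning relation."""
            return total_activity_per_volume / (theta + rho_b * kd)

        def uniform_release_rate(inventory0, duration, t):
            """Option 3: a constant fraction of the initial inventory released per unit time over 'duration'."""
            return inventory0 / duration if t <= duration else 0.0

        print(first_order_release_rate(1.0e6, 0.01, 5.0),
              equilibrium_aqueous_concentration(100.0, 2.0, 0.3, 1.6),
              uniform_release_rate(1.0e6, 50.0, 5.0))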

  6. Solid Waste Projection Model: Database (Version 1.4). Technical reference manual

    Energy Technology Data Exchange (ETDEWEB)

    Blackburn, C.; Cillan, T.

    1993-09-01

    The Solid Waste Projection Model (SWPM) system is an analytical tool developed by Pacific Northwest Laboratory (PNL) for Westinghouse Hanford Company (WHC). The SWPM system provides a modeling and analysis environment that supports decisions in the process of evaluating various solid waste management alternatives. This document, one of a series describing the SWPM system, contains detailed information regarding the software and data structures utilized in developing the SWPM Version 1.4 Database. This document is intended for use by experienced database specialists and supports database maintenance, utility development, and database enhancement. Those interested in using the SWPM database should refer to the SWPM Database User's Guide. This document is available from the PNL Task M Project Manager (D. L. Stiles, 509-372-4358), the PNL Task L Project Manager (L. L. Armacost, 509-372-4304), the WHC Restoration Projects Section Manager (509-372-1443), or the WHC Waste Characterization Manager (509-372-1193).

  7. Comparison of three ice cloud optical schemes in climate simulations with community atmospheric model version 5

    Science.gov (United States)

    Zhao, Wenjie; Peng, Yiran; Wang, Bin; Yi, Bingqi; Lin, Yanluan; Li, Jiangnan

    2018-05-01

    A newly implemented Baum-Yang scheme for simulating ice cloud optical properties is compared with existing schemes (Mitchell and Fu schemes) in a standalone radiative transfer model and in the global climate model (GCM) Community Atmospheric Model Version 5 (CAM5). This study systematically analyzes the effect of different ice cloud optical schemes on global radiation and climate by a series of simulations with a simplified standalone radiative transfer model, atmospheric GCM CAM5, and a comprehensive coupled climate model. Results from the standalone radiative model show that Baum-Yang scheme yields generally weaker effects of ice cloud on temperature profiles both in shortwave and longwave spectrum. CAM5 simulations indicate that Baum-Yang scheme in place of Mitchell/Fu scheme tends to cool the upper atmosphere and strengthen the thermodynamic instability in low- and mid-latitudes, which could intensify the Hadley circulation and dehydrate the subtropics. When CAM5 is coupled with a slab ocean model to include simplified air-sea interaction, reduced downward longwave flux to surface in Baum-Yang scheme mitigates ice-albedo feedback in the Arctic as well as water vapor and cloud feedbacks in low- and mid-latitudes, resulting in an overall temperature decrease by 3.0/1.4 °C globally compared with Mitchell/Fu schemes. Radiative effect and climate feedback of the three ice cloud optical schemes documented in this study can be referred for future improvements on ice cloud simulation in CAM5.

  8. Immersion freezing by natural dust based on a soccer ball model with the Community Atmospheric Model version 5: climate effects

    Science.gov (United States)

    Wang, Yong; Liu, Xiaohong

    2014-12-01

    We introduce a simplified version of the soccer ball model (SBM) developed by Niedermeier et al (2014 Geophys. Res. Lett. 41 736-741) into the Community Atmospheric Model version 5 (CAM5). It is the first time that SBM is used in an atmospheric model to parameterize the heterogeneous ice nucleation. The SBM, which was simplified for its suitable application in atmospheric models, uses the classical nucleation theory to describe the immersion/condensation freezing by dust in the mixed-phase cloud regime. Uncertain parameters (mean contact angle, standard deviation of contact angle probability distribution, and number of surface sites) in the SBM are constrained by fitting them to recent natural dust (Saharan dust) datasets. With the SBM in CAM5, we investigate the sensitivity of modeled cloud properties to the SBM parameters, and find significant seasonal and regional differences in the sensitivity among the three SBM parameters. Changes of mean contact angle and the number of surface sites lead to changes of cloud properties in Arctic in spring, which could be attributed to the transport of dust ice nuclei to this region. In winter, significant changes of cloud properties induced by these two parameters mainly occur in northern hemispheric mid-latitudes (e.g., East Asia). In comparison, no obvious changes of cloud properties caused by changes of standard deviation can be found in all the seasons. These results are valuable for understanding the heterogeneous ice nucleation behavior, and useful for guiding the future model developments.

  9. Incorporating remote sensing-based ET estimates into the Community Land Model version 4.5

    Science.gov (United States)

    Wang, Dagang; Wang, Guiling; Parr, Dana T.; Liao, Weilin; Xia, Youlong; Fu, Congsheng

    2017-07-01

    Land surface models bear substantial biases in simulating surface water and energy budgets despite the continuous development and improvement of model parameterizations. To reduce model biases, Parr et al. (2015) proposed a method incorporating satellite-based evapotranspiration (ET) products into land surface models. Here we apply this bias correction method to the Community Land Model version 4.5 (CLM4.5) and test its performance over the conterminous US (CONUS). We first calibrate a relationship between the observational ET from the Global Land Evaporation Amsterdam Model (GLEAM) product and the model ET from CLM4.5, and assume that this relationship holds beyond the calibration period. During the validation or application period, a simulation using the default CLM4.5 (CLM) is conducted first, and its output is combined with the calibrated observational-vs.-model ET relationship to derive a corrected ET; an experiment (CLMET) is then conducted in which the model-generated ET is overwritten with the corrected ET. Using the observations of ET, runoff, and soil moisture content as benchmarks, we demonstrate that CLMET greatly improves the hydrological simulations over most of the CONUS, and the improvement is stronger in the eastern CONUS than the western CONUS and is strongest over the Southeast CONUS. For any specific region, the degree of the improvement depends on whether the relationship between observational and model ET remains time-invariant (a fundamental hypothesis of the Parr et al. (2015) method) and whether water is the limiting factor in places where ET is underestimated. While the bias correction method improves hydrological estimates without improving the physical parameterization of land surface models, results from this study do provide guidance for physically based model development effort.
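
    A minimal version of the calibrate-then-correct step described above can be written as a linear fit of observational ET against model ET over the calibration period, applied afterwards to new model output. The Python sketch below uses invented monthly values and a plain linear relationship; the actual study's calibration details may differ.

        import numpy as np

        def calibrate(obs_et, model_et):
            """Fit a linear observational-vs.-model ET relationship over the calibration period."""
            slope, intercept = np.polyfit(model_et, obs_et, 1)
            return slope, intercept

        def correct(model_et, slope, intercept):
            """Apply the calibrated relationship to model ET from the application period."""
            return slope * np.asarray(model_et, dtype=float) + intercept

        obs_cal = [35.0, 60.0, 95.0, 120.0, 110.0, 70.0]      # mm/month, illustrative
        model_cal = [30.0, 50.0, 80.0, 100.0, 95.0, 60.0]
        slope, intercept = calibrate(obs_cal, model_cal)
        print(correct([40.0, 70.0, 105.0], slope, intercept))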

  10. Incorporating remote sensing-based ET estimates into the Community Land Model version 4.5

    Directory of Open Access Journals (Sweden)

    D. Wang

    2017-07-01

    Full Text Available Land surface models bear substantial biases in simulating surface water and energy budgets despite the continuous development and improvement of model parameterizations. To reduce model biases, Parr et al. (2015) proposed a method incorporating satellite-based evapotranspiration (ET) products into land surface models. Here we apply this bias correction method to the Community Land Model version 4.5 (CLM4.5) and test its performance over the conterminous US (CONUS). We first calibrate a relationship between the observational ET from the Global Land Evaporation Amsterdam Model (GLEAM) product and the model ET from CLM4.5, and assume that this relationship holds beyond the calibration period. During the validation or application period, a simulation using the default CLM4.5 (CLM) is conducted first, and its output is combined with the calibrated observational-vs.-model ET relationship to derive a corrected ET; an experiment (CLMET) is then conducted in which the model-generated ET is overwritten with the corrected ET. Using the observations of ET, runoff, and soil moisture content as benchmarks, we demonstrate that CLMET greatly improves the hydrological simulations over most of the CONUS, and the improvement is stronger in the eastern CONUS than the western CONUS and is strongest over the Southeast CONUS. For any specific region, the degree of the improvement depends on whether the relationship between observational and model ET remains time-invariant (a fundamental hypothesis of the Parr et al. (2015) method) and whether water is the limiting factor in places where ET is underestimated. While the bias correction method improves hydrological estimates without improving the physical parameterization of land surface models, results from this study do provide guidance for physically based model development effort.

  11. Representation of the Great Lakes in the Coupled Model Intercomparison Project Version 5

    Science.gov (United States)

    Briley, L.; Rood, R. B.

    2017-12-01

    The U.S. Great Lakes play a significant role in modifying regional temperatures and precipitation, and as the lakes change in response to a warming climate (i.e., warmer surface water temperatures, decreased ice cover, etc.), lake-land-atmosphere dynamics are affected. Because the lakes modify regional weather and are a driver of regional climate change, understanding how they are represented in climate models is important to the reliability of model-based information for the region. As part of the Great Lakes Integrated Sciences + Assessments (GLISA) Ensemble project, a major effort is underway to evaluate the Coupled Model Intercomparison Project phase 5 (CMIP5) global climate models for how well they physically represent the Great Lakes and lake effects. The CMIP models were chosen because they are a primary source of information in many products developed for decision making (i.e., National Climate Assessment, downscaled future climate projections, etc.), yet there is very little description of how well they represent the lakes. This presentation will describe the results of our investigation of if and how the Great Lakes are represented in the CMIP5 models.

  12. Mars Global Reference Atmospheric Model 2000 Version (Mars-GRAM 2000): Users Guide

    Science.gov (United States)

    Justus, C. G.; James, B. F.

    2000-01-01

    This report presents Mars Global Reference Atmospheric Model 2000 Version (Mars-GRAM 2000) and its new features. All parameterizations for temperature, pressure, density, and winds versus height, latitude, longitude, time of day, and L(sub s) have been replaced by input data tables from NASA Ames Mars General Circulation Model (MGCM) for the surface through 80-km altitude and the University of Arizona Mars Thermospheric General Circulation Model (MTGCM) for 80 to 170 km. A modified Stewart thermospheric model is still used for higher altitudes and for dependence on solar activity. "Climate factors" to tune for agreement with GCM data are no longer needed. Adjustment of exospheric temperature is still an option. Consistent with observations from Mars Global Surveyor, a new longitude-dependent wave model is included with user input to specify waves having 1 to 3 wavelengths around the planet. A simplified perturbation model has been substituted for the earlier one. An input switch allows users to select either East or West longitude positive. This memorandum includes instructions on obtaining Mars-GRAM source code and data files and for running the program. It also provides sample input and output and an example for incorporating Mars-GRAM as an atmospheric subroutine in a trajectory code.

  13. Version 2.0 of the European Gas Model. Changes and their impact on the German gas sector

    International Nuclear Information System (INIS)

    Balmert, David; Petrov, Konstantin

    2015-01-01

    In January 2015 ACER, the European Agency for the Cooperation of Energy Regulators, presented an updated version of its target model for the intra-European natural gas market, also referred to as version 2.0 of the Gas Target Model. During 2014 the existing model, originally developed by the Council of European Energy Regulators (CEER) and launched in 2011, had been analysed, revised and updated in preparation for the new version. While it has few surprises to offer, the new Gas Target Model specifies and goes into greater detail on many elements of the original model. Some of the new content is highly relevant to the German gas sector, not least the deliberations on the current key issues, which are security of supply and the ability of the gas markets to function.

  14. Hydrogeochemical evaluation for Simpevarp model version 1.2. Preliminary site description of the Simpevarp area

    International Nuclear Information System (INIS)

    Laaksoharju, Marcus

    2004-12-01

    Siting studies for SKB's programme of deep geological disposal of nuclear fuel waste currently involves the investigation of two locations, Simpevarp and Forsmark, to determine their geological, hydrogeochemical and hydrogeological characteristics. Present work completed has resulted in Model version 1.2 which represents the second evaluation of the available Simpevarp groundwater analytical data collected up to April, 2004. The deepest fracture groundwater samples with sufficient analytical data reflected depths down to 1.7 km. Model version 1.2 focusses on geochemical and mixing processes affecting the groundwater composition in the uppermost part of the bedrock, down to repository levels, and eventually extending to 1000 m depth. The groundwater flow regimes at Laxemar/Simpevarp are considered local and extend down to depths of around 600-1000 m depending on local topography. The marked differences in the groundwater flow regimes between Laxemar and Simpevarp are reflected in the groundwater chemistry where four major hydrochemical groups of groundwaters (types A-D) have been identified: TYPE A: This type comprises dilute groundwaters (< 1000 mg/L Cl; 0.5-2.0 g/L TDS) of Na-HCO3 type present at shallow (< 200 m) depths at Simpevarp, but at greater depths (0-900 m) at Laxemar. [...] present at shallow (< 300 m) levels at Simpevarp, and at even greater depths (approx. 1200 m) at Laxemar. At Simpevarp the groundwaters are mainly Na-Ca-Cl with increasingly enhanced Br and SO4 with depth. At Laxemar they are mainly Ca-Na-Cl, also with increasing enhancements of Br and SO4 with depth. Main reactions involve ion exchange (Ca). At both sites a glacial component and a deep saline component are present. At Simpevarp the saline component may potentially be non-marine and/or non-marine/old Littorina marine in origin; at Laxemar it is more likely to be non-marine in origin. TYPE D: This type comprises reducing, highly saline groundwaters (> 20,000 mg/L Cl; to a maximum of ∼70 g/L TDS) and has only been identified at Laxemar at depths exceeding 1200 m. It is mainly Ca-Na-Cl with higher Br but lower SO4 compared

  15. Probabilistic Model for Integrated Assessment of the Behavior at the T.D.P. Version 2

    International Nuclear Information System (INIS)

    Hurtado, A.; Eguilior, S.; Recreo, F

    2015-01-01

    This report documents the completion of the first phase of the implementation of the ABACO2G methodology (Bayes Application to Geological Storage of CO2) and the final version of the ABACO2G probabilistic model for the injection phase, prior to its future validation at the experimental field of the Technology Development Plant in Hontomín (Burgos). The model, which is based on determining the probabilistic risk component of a geological storage of CO2 using the formalism of Bayesian networks and Monte Carlo methods, yields quantitative probability functions for the total CO2 storage system and for each of its subsystems (the storage subsystem and primary seal, the secondary containment subsystem, and the tertiary or dispersion subsystem). It covers the implementation of the stochastic time evolution of the CO2 plume during the injection period, the stochastic time evolution of the drying front, the probabilistic evolution of the pressure front, decoupled from the CO2 plume progress front, and the implementation of submodels and leakage probability functions for the major leakage risk elements (fractures/faults and wells/deep boreholes), which together define the space of events used to estimate the risks associated with the CO2 geological storage system. The activities covered in this report have been to replace the qualitative estimation submodels of the former ABACO2G version, developed during Phase I of project ALM-10-017, with analytical, semi-analytical or numerical submodels for the main elements of risk (wells and fractures), in order to obtain an integrated probabilistic model of a CO2 storage complex in carbonate formations that meets the needs of the integrated behavior evaluation of the Technology Development Plant in Hontomín

  16. A description of the FAMOUS (version XDBUA) climate model and control run

    Directory of Open Access Journals (Sweden)

    A. Osprey

    2008-12-01

    Full Text Available FAMOUS is an ocean-atmosphere general circulation model of low resolution, capable of simulating approximately 120 years of model climate per wallclock day using current high performance computing facilities. It uses most of the same code as HadCM3, a widely used climate model of higher resolution and computational cost, and has been tuned to reproduce the same climate reasonably well. FAMOUS is useful for climate simulations where the computational cost makes the application of HadCM3 unfeasible, either because of the length of simulation or the size of the ensemble desired. We document a number of scientific and technical improvements to the original version of FAMOUS. These improvements include changes to the parameterisations of ozone and sea-ice which alleviate a significant cold bias from high northern latitudes and the upper troposphere, and the elimination of volume-averaged drifts in ocean tracers. A simple model of the marine carbon cycle has also been included. A particular goal of FAMOUS is to conduct millennial-scale paleoclimate simulations of Quaternary ice ages; to this end, a number of useful changes to the model infrastructure have been made.

  17. UNSAT-H Version 2. 0: Unsaturated soil water and heat flow model

    Energy Technology Data Exchange (ETDEWEB)

    Fayer, M.J.; Jones, T.L.

    1990-04-01

    This report documents UNSAT-H Version 2.0, a model for calculating water and heat flow in unsaturated media. The documentation includes the bases for the conceptual model and its numerical implementation, benchmark test cases, example simulations involving layered soils and plant transpiration, and the code listing. Waste management practices at the Hanford Site have included disposal of low-level wastes by near-surface burial. Predicting the future long-term performance of any such burial site in terms of migration of contaminants requires a model capable of simulating water flow in the unsaturated soils above the buried waste. The model currently used to meet this need is UNSAT-H. This model was developed at Pacific Northwest Laboratory to assess water dynamics of near-surface, waste-disposal sites at the Hanford Site. The code is primarily used to predict deep drainage as a function of such environmental conditions as climate, soil type, and vegetation. UNSAT-H is also used to simulate the effects of various practices to enhance isolation of wastes. 66 refs., 29 figs., 7 tabs.

  18. Evaluation of a new CNRM-CM6 model version for seasonal climate predictions

    Science.gov (United States)

    Volpi, Danila; Ardilouze, Constantin; Batté, Lauriane; Dorel, Laurant; Guérémy, Jean-François; Déqué, Michel

    2017-04-01

    This work presents the quality assessment of a new version of the Météo-France coupled climate prediction system, which has been developed in the EU COPERNICUS Climate Change Services framework to carry out seasonal forecasts. The system is based on the CNRM-CM6 model, with Arpege-Surfex 6.2.2 as the atmosphere/land component and Nemo 3.2 as the ocean component, which directly embeds the sea-ice component Gelato 6.0. In order to have a robust diagnostic, the experiment is composed of 60 ensemble members generated with stochastic dynamic perturbations. The experiment has been performed over a 37-year re-forecast period from 1979 to 2015, with two start dates per year, on 1 May and 1 November. The predictive skill of the model is evaluated from two perspectives: on the one hand, the ability of the model to faithfully respond to positive or negative ENSO, NAO and QBO events, independently of the predictability of these events. This assessment is carried out through a composite analysis, and shows that the model succeeds in reproducing the main patterns of 2-meter temperature, precipitation and geopotential height at 500 hPa during the winter season. On the other hand, the model's predictive skill for the same events (positive and negative ENSO, NAO and QBO) is evaluated.
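
    The composite analysis mentioned above reduces to averaging a field over positive-event years and subtracting the average over negative-event years; a minimal sketch is given below, where the index threshold, array shapes, and function name are illustrative assumptions rather than the actual verification code used for CNRM-CM6.

```python
import numpy as np

def composite_difference(field, index, threshold=1.0):
    """Composite analysis sketch: mean of a seasonal-mean field (e.g. 2 m
    temperature, precipitation or 500 hPa geopotential height) over
    positive-index years minus the mean over negative-index years.
    field: (years, lat, lon); index: standardized ENSO/NAO/QBO index."""
    positive = index >= threshold
    negative = index <= -threshold
    return field[positive].mean(axis=0) - field[negative].mean(axis=0)
```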

  19. Hydrogeochemical evaluation of the Forsmark site, model version 1.1

    Energy Technology Data Exchange (ETDEWEB)

    Laaksoharju, Marcus (ed.) [GeoPoint AB, Sollentuna (Sweden); Gimeno, Maria; Auque, Luis; Gomez, Javier [Univ. of Zaragoza (Spain). Dept. of Earth Sciences; Smellie, John [Conterra AB, Uppsala (Sweden); Tullborg, Eva-Lena [Terralogica AB, Graabo (Sweden); Gurban, Ioana [3D-Terra, Montreal (Canada)

    2004-01-01

    Siting studies for SKB's programme of deep geological disposal of nuclear fuel waste currently involves the investigation of two locations, Forsmark and Simpevarp, on the eastern coast of Sweden to determine their geological, geochemical and hydrogeological characteristics. Present work completed has resulted in model version 1.1 which represents the first evaluation of the available Forsmark groundwater analytical data collected up to May 1, 2003 (i.e. the first 'data freeze'). The HAG group had access to a total of 456 water samples collected mostly from the surface and sub-surface environment (e.g. soil pipes in the overburden, streams and lakes); only a few samples were collected from drilled boreholes. The deepest samples reflected depths down to 200 m. Furthermore, most of the waters sampled (74%) lacked crucial analytical information that restricted the evaluation. Consequently, model version 1.1 focussed on the processes taking place in the uppermost part of the bedrock rather than at repository levels. The complex groundwater evolution and patterns at Forsmark are a result of many factors such as: a) the flat topography and closeness to the Baltic Sea resulting in relative small hydrogeological driving forces which can preserve old water types from being flushed out, b) the changes in hydrogeology related to glaciation/deglaciation and land uplift, c) repeated marine/lake water regressions/transgressions, and d) organic or inorganic alteration of the groundwater caused by microbial processes or water/rock interactions. The sampled groundwaters reflect to various degrees modern or ancient water/rock interactions and mixing processes. Based on the general geochemical character and the apparent age two major water types occur in Forsmark: fresh-meteoric waters with a bicarbonate imprint and low residence times (tritium values above detection limit), and brackish-marine waters with Cl contents up to 6,000 mg/L and longer residence times (tritium

  20. Thermal modelling. Preliminary site description Simpevarp subarea - version 1.2

    Energy Technology Data Exchange (ETDEWEB)

    Sundberg, Jan; Back, Paer-Erik; Bengtsson, Anna; Laendell, Maerta [Geo Innova AB, Linkoeping (Sweden)

    2005-08-15

    This report presents the thermal site descriptive model for the Simpevarp subarea, version 1.2. The main objective of this report is to present the thermal modelling work where data has been identified, quality controlled, evaluated and summarised in order to make an upscaling to lithological domain level possible. The thermal conductivity at possible canister scale has been modelled for four different lithological domains (RSMA01 (Aevroe granite), RSMB01 (Fine-grained dioritoid), RSMC01 (mixture of Aevroe granite and Quartz monzodiorite), and RSMD01 (Quartz monzodiorite)). A main modelling approach has been used to determine the mean value of the thermal conductivity. Three alternative/complementary approaches have been used to evaluate the spatial variability of the thermal conductivity at domain level. The thermal modelling approaches are based on the lithological model for the Simpevarp subarea, version 1.2, together with rock type models constructed from measured and calculated (from mineral composition) thermal conductivities. For one rock type, the Aevroe granite (501044), density loggings within the specific rock type have also been used in the domain modelling in order to consider the spatial variability within the Aevroe granite. This has been possible due to the presented relationship between density and thermal conductivity, valid for the Aevroe granite. Results indicate that the mean of thermal conductivity is expected to exhibit only a small variation between the different domains, from 2.62 W/(m.K) to 2.80 W/(m.K). The standard deviation varies according to the scale considered, and for the canister scale it is expected to range from 0.20 to 0.28 W/(m.K). Consequently, the lower confidence limit (95% confidence) for the canister scale is within the range 2.04-2.35 W/(m.K) for the different domains. The temperature dependence is rather small with a decrease in thermal conductivity of 1.1-3.4% per 100 deg C increase in temperature for the dominating rock

  1. Thermal modelling. Preliminary site description Laxemar subarea - version 1.2

    Energy Technology Data Exchange (ETDEWEB)

    Sundberg, Jan; Wrafter, John; Back, Paer-Erik; Laendell, Maerta [Geo Innova AB, Linkoeping (Sweden)

    2006-02-15

    This report presents the thermal site descriptive model for the Laxemar subarea, version 1.2. The main objective of this report is to present the thermal modelling work where data has been identified, quality controlled, evaluated and summarised in order to make an upscaling to lithological domain level possible. The thermal conductivity at canister scale has been modelled for five different lithological domains: RSMA (Aevroe granite), RSMBA (mixture of Aevroe granite and fine-grained dioritoid), RSMD (quartz monzodiorite), RSME (diorite/gabbro) and RSMM (mix domain with high frequency of diorite to gabbro). A base modelling approach has been used to determine the mean value of the thermal conductivity. Four alternative/complementary approaches have been used to evaluate the spatial variability of the thermal conductivity at domain level. The thermal modelling approaches are based on the lithological domain model for the Laxemar subarea, version 1.2 together with rock type models based on measured and calculated (from mineral composition) thermal conductivities. For one rock type, Aevroe granite (501044), density loggings have also been used in the domain modelling in order to evaluate the spatial variability within the Aevroe granite. This has been possible due to an established relationship between density and thermal conductivity, valid for the Aevroe granite. Results indicate that the means of thermal conductivity for the various domains are expected to exhibit a variation from 2.45 W/(m.K) to 2.87 W/(m.K). The standard deviation varies according to the scale considered, and for the 0.8 m scale it is expected to range from 0.17 to 0.29 W/(m.K). Estimates of lower tail percentiles for the same scale are presented for all five domains. The temperature dependence is rather small with a decrease in thermal conductivity of 1.1-5.3% per 100 deg C increase in temperature for the dominant rock types. There are a number of important uncertainties associated with these

  2. Thermal modelling. Preliminary site description Simpevarp subarea - version 1.2

    International Nuclear Information System (INIS)

    Sundberg, Jan; Back, Paer-Erik; Bengtsson, Anna; Laendell, Maerta

    2005-08-01

    This report presents the thermal site descriptive model for the Simpevarp subarea, version 1.2. The main objective of this report is to present the thermal modelling work where data has been identified, quality controlled, evaluated and summarised in order to make an upscaling to lithological domain level possible. The thermal conductivity at possible canister scale has been modelled for four different lithological domains (RSMA01 (Aevroe granite), RSMB01 (Fine-grained dioritoid), RSMC01 (mixture of Aevroe granite and Quartz monzodiorite), and RSMD01 (Quartz monzodiorite)). A main modelling approach has been used to determine the mean value of the thermal conductivity. Three alternative/complementary approaches have been used to evaluate the spatial variability of the thermal conductivity at domain level. The thermal modelling approaches are based on the lithological model for the Simpevarp subarea, version 1.2, together with rock type models constructed from measured and calculated (from mineral composition) thermal conductivities. For one rock type, the Aevroe granite (501044), density loggings within the specific rock type have also been used in the domain modelling in order to consider the spatial variability within the Aevroe granite. This has been possible due to the presented relationship between density and thermal conductivity, valid for the Aevroe granite. Results indicate that the mean of thermal conductivity is expected to exhibit only a small variation between the different domains, from 2.62 W/(m.K) to 2.80 W/(m.K). The standard deviation varies according to the scale considered, and for the canister scale it is expected to range from 0.20 to 0.28 W/(m.K). Consequently, the lower confidence limit (95% confidence) for the canister scale is within the range 2.04-2.35 W/(m.K) for the different domains. The temperature dependence is rather small with a decrease in thermal conductivity of 1.1-3.4% per 100 deg C increase in temperature for the dominating rock

  3. Solid Waste Projection Model: Database user's guide (Version 1.3)

    International Nuclear Information System (INIS)

    Blackburn, C.L.

    1991-11-01

    The Solid Waste Projection Model (SWPM) system is an analytical tool developed by Pacific Northwest Laboratory (PNL) for Westinghouse Hanford Company (WHC) specifically to address Hanford solid waste management issues. This document is one of a set of documents supporting the SWPM system and providing instructions in the use and maintenance of SWPM components. This manual contains instructions for preparing to use Version 1.3 of the SWPM database, for entering and maintaining data, and for performing routine database functions. This document supports only those operations which are specific to SWPM database menus and functions and does not provide instruction in the use of Paradox, the database management system in which the SWPM database is established

  4. Conceptual Model of an Application for Automated Generation of Webpage Mobile Versions

    Directory of Open Access Journals (Sweden)

    Todor Rachovski

    2017-11-01

    Full Text Available Accessing webpages through various types of mobile devices with different screen sizes and using different browsers has put new demands on web developers. The main challenge is the development of websites with a responsive design that adapts to the mobile device used. The article presents a conceptual model of an app for automated generation of mobile pages. It has a five-layer architecture: database, database management layer, business logic layer, web services layer and presentation layer. The database stores all the data needed to run the application. The database management layer uses an ORM model to convert relational data into an object-oriented format and to control access to it. The business logic layer contains components that perform the actual work of building a mobile version of the page, including parsing, building a hierarchical model of the page, and a number of transformations. The web services layer provides external applications with access to lower-level functionalities, and the presentation layer is responsible for choosing and using the appropriate CSS. A web application that uses the proposed model was developed and experiments were conducted.
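
    To make the flow through the business logic and presentation layers concrete, here is a minimal sketch of a page-tree transformation and rendering step; the class names, the set of dropped tags, and the CSS handling are assumptions made for illustration and do not reflect the implementation described in the article.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Node:
    """Minimal hierarchical model of a page element."""
    tag: str
    text: str = ""
    children: List["Node"] = field(default_factory=list)

def simplify_for_mobile(node: Node) -> Node:
    """One illustrative transformation: drop elements that rarely belong in
    a mobile rendering and recurse into the remaining children."""
    skip = {"aside", "iframe", "script"}
    kept = [simplify_for_mobile(c) for c in node.children if c.tag not in skip]
    return Node(node.tag, node.text, kept)

def render(node: Node, css_class: str = "mobile") -> str:
    """Presentation step: emit HTML with a mobile stylesheet class applied."""
    inner = node.text + "".join(render(c, css_class) for c in node.children)
    return f'<{node.tag} class="{css_class}">{inner}</{node.tag}>'

# Usage: the parsing step would normally build the tree from the source page.
page = Node("body", children=[Node("p", "Hello"), Node("aside", "Sidebar")])
print(render(simplify_for_mobile(page)))
```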

  5. Representations of the Stratospheric Polar Vortices in Versions 1 and 2 of the Goddard Earth Observing System Chemistry-Climate Model (GEOS CCM)

    Science.gov (United States)

    Pawson, S.; Stolarski, R.S.; Nielsen, J.E.; Perlwitz, J.; Oman, L.; Waugh, D.

    2009-01-01

    This study will document the behavior of the polar vortices in two versions of the GEOS CCM. Both versions of the model include the same stratospheric chemistry; they differ in the underlying circulation model. Version 1 of the GEOS CCM is based on the Goddard Earth Observing System, Version 4, general circulation model, which includes the finite-volume (Lin-Rood) dynamical core and physical parameterizations from the Community Climate Model, Version 3. GEOS CCM Version 2 is based on the GEOS-5 GCM, which includes a different tropospheric physics package. Baseline simulations of both models, performed at two-degree spatial resolution, show some improvements in Version 2, but also some degradation. In the Antarctic, both models show an over-persistent stratospheric polar vortex with late breakdown, but the year-to-year variations that are overestimated in Version 1 are more realistic in Version 2. The implications of this for the interactions with tropospheric climate, particularly the Southern Annular Mode, will be discussed. In the Arctic, both model versions show a dominant dynamically forced variability, but Version 2 has a persistent warm bias in the low stratosphere and there are seasonal differences in the simulations. These differences will be quantified in terms of climate change and ozone loss. Impacts of model resolution, using simulations at one-degree and half-degree resolution, and changes in physical parameterizations (especially the gravity wave drag) will be discussed.

  6. Accelerator System Model (ASM) user manual with physics and engineering model documentation. ASM version 1.0

    International Nuclear Information System (INIS)

    1993-07-01

    The Accelerator System Model (ASM) is a computer program developed to model proton radiofrequency accelerators and to carry out system-level trade studies. The ASM FORTRAN subroutines are incorporated into an intuitive graphical user interface which provides for the "construction" of the accelerator in a window on the computer screen. The interface is based on the Shell for Particle Accelerator Related Codes (SPARC) software technology written for the Macintosh operating system in the C programming language. This User Manual describes the operation and use of the ASM application within the SPARC interface. The Appendix provides a detailed description of the physics and engineering models used in ASM. ASM Version 1.0 is a joint project of G. H. Gillespie Associates, Inc. and the Accelerator Technology (AT) Division of the Los Alamos National Laboratory. Neither the ASM Version 1.0 software nor this ASM Documentation may be reproduced without the express written consent of both the Los Alamos National Laboratory and G. H. Gillespie Associates, Inc

  7. Simulated pre-industrial climate in Bergen Climate Model (version 2: model description and large-scale circulation features

    Directory of Open Access Journals (Sweden)

    O. H. Otterå

    2009-11-01

    Full Text Available The Bergen Climate Model (BCM is a fully-coupled atmosphere-ocean-sea-ice model that provides state-of-the-art computer simulations of the Earth's past, present, and future climate. Here, a pre-industrial multi-century simulation with an updated version of BCM is described and compared to observational data. The model is run without any form of flux adjustments and is stable for several centuries. The simulated climate reproduces the general large-scale circulation in the atmosphere reasonably well, except for a positive bias in the high latitude sea level pressure distribution. Also, by introducing an updated turbulence scheme in the atmosphere model a persistent cold bias has been eliminated. For the ocean part, the model drifts in sea surface temperatures and salinities are considerably reduced compared to earlier versions of BCM. Improved conservation properties in the ocean model have contributed to this. Furthermore, by choosing a reference pressure at 2000 m and including thermobaric effects in the ocean model, a more realistic meridional overturning circulation is simulated in the Atlantic Ocean. The simulated sea-ice extent in the Northern Hemisphere is in general agreement with observational data except for summer where the extent is somewhat underestimated. In the Southern Hemisphere, large negative biases are found in the simulated sea-ice extent. This is partly related to problems with the mixed layer parametrization, causing the mixed layer in the Southern Ocean to be too deep, which in turn makes it hard to maintain a realistic sea-ice cover here. However, despite some problematic issues, the pre-industrial control simulation presented here should still be appropriate for climate change studies requiring multi-century simulations.

  8. Earth System Chemistry integrated Modelling (ESCiMo) with the Modular Earth Submodel System (MESSy) version 2.51

    NARCIS (Netherlands)

    Jockel, P.; Tost, H.; Pozzer, A.; Kunze, M.; Kirner, O.; Brenninkmeijer, C.A.M.; Brinkop, S.; Cai, D.S.; Dyroff, C.; Eckstein, J.; Frank, F.; Garny, H.; Gottschald, K.D.; Graf, P.; Grewe, V.; Kerkweg, A.; Kern, B.; Matthes, S; Mertens, M; Meul, S.; Neumaier, M.; Nützel, M; Oberländer-Hayn, S; Ruhnke, R.; Runde, T.; Sander, R.; Scharffe, D; Zahn, A.

    2016-01-01

    Three types of reference simulations, as recommended by the Chemistry–Climate Model Initiative (CCMI), have been performed with version 2.51 of the European Centre for Medium-Range Weather Forecasts – Hamburg (ECHAM)/Modular Earth Submodel System (MESSy) Atmospheric Chemistry (EMAC) model: hindcast

  9. A generic method for automatic translation between input models for different versions of simulation codes

    International Nuclear Information System (INIS)

    Serfontein, Dawid E.; Mulder, Eben J.; Reitsma, Frederik

    2014-01-01

    A computer code was developed for the semi-automatic translation of input models for the VSOP-A diffusion neutronics simulation code to the format of the newer VSOP 99/05 code. In this paper, this algorithm is presented as a generic method for producing codes for the automatic translation of input models from the format of one code version to another, or even to that of a completely different code. Normally, such translations are done manually. However, input model files, such as for the VSOP codes, are often very large and may consist of many thousands of numeric entries that make no particular sense to the human eye. Therefore the task of, for instance, nuclear regulators to verify the accuracy of such translated files can be very difficult and cumbersome. Translation errors may consequently go undetected, which may have disastrous consequences later on if a reactor with such a faulty design is built. A generic algorithm for producing such automatic translation codes may therefore ease the translation and verification process to a great extent. It will also remove human error from the process, which may significantly enhance the accuracy and reliability of the process. The developed algorithm also automatically creates a verification log file which permanently records the names and values of each variable used, as well as the meanings of all the possible values. This should greatly facilitate reactor licensing applications
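
    As a toy sketch of the idea, translating key-value input entries while writing a verification log that records each variable's names, value and meaning, consider the snippet below; the variable names, the flat dictionary representation, and the CSV log format are illustrative assumptions and bear no relation to the actual VSOP-A or VSOP 99/05 input formats.

```python
import csv

# Hypothetical mapping from old-format names to new-format names plus a
# human-readable meaning; a real translation table would hold far more entries.
NAME_MAP = {
    "NREG": ("n_regions", "number of core regions"),
    "TIN": ("inlet_temp", "coolant inlet temperature"),
}

def translate(old_entries, log_path="translation_log.csv"):
    """Translate {old_name: value} entries to the new naming scheme and write
    a verification log so a reviewer can check the result without having to
    interpret thousands of raw numeric entries."""
    new_entries = {}
    with open(log_path, "w", newline="") as f:
        log = csv.writer(f)
        log.writerow(["old_name", "new_name", "value", "meaning"])
        for old_name, value in old_entries.items():
            new_name, meaning = NAME_MAP.get(old_name, (old_name, "unmapped"))
            new_entries[new_name] = value
            log.writerow([old_name, new_name, value, meaning])
    return new_entries

# Example: translate({"NREG": 12, "TIN": 250.0})
```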

  10. VALIDATION OF THE ASTER GLOBAL DIGITAL ELEVATION MODEL VERSION 2 OVER THE CONTERMINOUS UNITED STATES

    Directory of Open Access Journals (Sweden)

    D. Gesch

    2012-07-01

    Full Text Available The ASTER Global Digital Elevation Model Version 2 (GDEM v2) was evaluated over the conterminous United States in a manner similar to the validation conducted for the original GDEM Version 1 (v1) in 2009. The absolute vertical accuracy of GDEM v2 was calculated by comparison with more than 18,000 independent reference geodetic ground control points from the National Geodetic Survey. The root mean square error (RMSE) measured for GDEM v2 is 8.68 meters. This compares with the RMSE of 9.34 meters for GDEM v1. Another important descriptor of vertical accuracy is the mean error, or bias, which indicates if a DEM has an overall vertical offset from true ground level. The GDEM v2 mean error of –0.20 meters is a significant improvement over the GDEM v1 mean error of –3.69 meters. The absolute vertical accuracy assessment results, both mean error and RMSE, were segmented by land cover to examine the effects of cover types on measured errors. The GDEM v2 mean errors by land cover class verify that the presence of aboveground features (tree canopies and built structures) causes a positive elevation bias, as would be expected for an imaging system like ASTER. In open ground classes (little or no vegetation with significant aboveground height), GDEM v2 exhibits a negative bias on the order of 1 meter. GDEM v2 was also evaluated by differencing with the Shuttle Radar Topography Mission (SRTM) dataset. In many forested areas, GDEM v2 has elevations that are higher in the canopy than SRTM.
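
    The mean error and RMSE figures quoted above follow the standard definitions; the sketch below applies them to DEM elevations sampled at reference control points (array names are illustrative), and the commented loop indicates the kind of per-land-cover breakdown used in the assessment.

```python
import numpy as np

def vertical_accuracy(dem_elev, ref_elev):
    """Absolute vertical accuracy of a DEM against reference control points:
    mean error (bias) and root mean square error. A positive mean error
    means the DEM sits above true ground level."""
    diff = np.asarray(dem_elev, float) - np.asarray(ref_elev, float)
    return diff.mean(), np.sqrt((diff ** 2).mean())

# Per-land-cover breakdown:
# for cls in np.unique(land_cover):
#     bias, rmse = vertical_accuracy(dem[land_cover == cls], ref[land_cover == cls])
```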

  11. The Extrapolar SWIFT model (version 1.0): fast stratospheric ozone chemistry for global climate models

    Science.gov (United States)

    Kreyling, Daniel; Wohltmann, Ingo; Lehmann, Ralph; Rex, Markus

    2018-03-01

    The Extrapolar SWIFT model is a fast ozone chemistry scheme for interactive calculation of the extrapolar stratospheric ozone layer in coupled general circulation models (GCMs). In contrast to the widely used prescribed ozone, the SWIFT ozone layer interacts with the model dynamics and can respond to atmospheric variability or climatological trends. The Extrapolar SWIFT model employs a repro-modelling approach, in which algebraic functions are used to approximate the numerical output of a full stratospheric chemistry and transport model (ATLAS). The full model solves a coupled chemical differential equation system with 55 initial and boundary conditions (mixing ratio of various chemical species and atmospheric parameters). Hence the rate of change of ozone over 24 h is a function of 55 variables. Using covariances between these variables, we can find linear combinations in order to reduce the parameter space to the following nine basic variables: latitude, pressure altitude, temperature, overhead ozone column and the mixing ratio of ozone and of the ozone-depleting families (Cly, Bry, NOy and HOy). We will show that these nine variables are sufficient to characterize the rate of change of ozone. An automated procedure fits a polynomial function of fourth degree to the rate of change of ozone obtained from several simulations with the ATLAS model. One polynomial function is determined per month, which yields the rate of change of ozone over 24 h. A key aspect for the robustness of the Extrapolar SWIFT model is to include a wide range of stratospheric variability in the numerical output of the ATLAS model, also covering atmospheric states that will occur in a future climate (e.g. temperature and meridional circulation changes or reduction of stratospheric chlorine loading). For validation purposes, the Extrapolar SWIFT model has been integrated into the ATLAS model, replacing the full stratospheric chemistry scheme. Simulations with SWIFT in ATLAS have proven that the
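
    The repro-modelling step, fitting a fourth-degree polynomial in the nine basic variables to the 24-hour ozone tendency of the full model, can be sketched with standard regression tools; the snippet below uses scikit-learn on placeholder data and is not the automated fitting procedure actually used for SWIFT.

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import PolynomialFeatures
from sklearn.linear_model import LinearRegression

# X: samples of the nine basic variables (latitude, pressure altitude,
# temperature, overhead ozone column, and mixing ratios of ozone and the
# ozone-depleting families); y: 24-hour ozone tendency from the full model.
# Random placeholders stand in for the ATLAS output here.
rng = np.random.default_rng(0)
X = rng.normal(size=(5000, 9))
y = rng.normal(size=5000)

# One fourth-degree polynomial is fitted per calendar month in the real
# scheme; a single fit is shown.
repro = make_pipeline(PolynomialFeatures(degree=4), LinearRegression())
repro.fit(X, y)

def ozone_tendency_24h(state):
    """Evaluate the fitted polynomial for one atmospheric state (9 values)."""
    return repro.predict(np.atleast_2d(state))[0]
```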

  12. Global Earthquake Activity Rate models based on version 2 of the Global Strain Rate Map

    Science.gov (United States)

    Bird, P.; Kreemer, C.; Kagan, Y. Y.; Jackson, D. D.

    2013-12-01

    Global Earthquake Activity Rate (GEAR) models have usually been based on either relative tectonic motion (fault slip rates and/or distributed strain rates), or on smoothing of seismic catalogs. However, a hybrid approach appears to perform better than either parent, at least in some retrospective tests. First, we construct a Tectonic ('T') forecast of shallow (≤ 70 km) seismicity based on global plate-boundary strain rates from version 2 of the Global Strain Rate Map. Our approach is the SHIFT (Seismic Hazard Inferred From Tectonics) method described by Bird et al. [2010, SRL], in which the character of the strain rate tensor (thrusting and/or strike-slip and/or normal) is used to select the most comparable type of plate boundary for calibration of the coupled seismogenic lithosphere thickness and corner magnitude. One difference is that activity of offshore plate boundaries is spatially smoothed using empirical half-widths [Bird & Kagan, 2004, BSSA] before conversion to seismicity. Another is that the velocity-dependence of coupling in subduction and continental-convergent boundaries [Bird et al., 2009, BSSA] is incorporated. Another forecast component is the smoothed-seismicity ('S') forecast model of [Kagan & Jackson, 1994, JGR; Kagan & Jackson, 2010, GJI], which was based on optimized smoothing of the shallow part of the GCMT catalog, years 1977-2004. Both forecasts were prepared for threshold magnitude 5.767. Then, we create hybrid forecasts by one of 3 methods: (a) taking the greater of S or T; (b) simple weighted-average of S and T; or (c) log of the forecast rate is a weighted average of the logs of S and T. In methods (b) and (c) there is one free parameter, which is the fractional contribution from S. All hybrid forecasts are normalized to the same global rate. Pseudo-prospective tests for 2005-2012 (using versions of S and T calibrated on years 1977-2004) show that many hybrid models outperform both parents (S and T), and that the optimal weight on S
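
    The three hybridization rules are straightforward to write down; the sketch below combines two rate grids accordingly, where the normalization target (the total rate of S) and the small floor added before taking logarithms are assumptions made for the illustration.

```python
import numpy as np

def hybrid_forecast(S, T, method="log", weight_s=0.6, eps=1e-12):
    """Combine a smoothed-seismicity forecast S and a tectonic forecast T
    (earthquake rates per grid cell) by one of the three methods described
    above, then renormalize so all hybrids share the same global rate."""
    S, T = np.asarray(S, float), np.asarray(T, float)
    if method == "max":        # (a) greater of S or T
        H = np.maximum(S, T)
    elif method == "linear":   # (b) weighted average of the rates
        H = weight_s * S + (1.0 - weight_s) * T
    elif method == "log":      # (c) weighted average of the log rates
        H = np.exp(weight_s * np.log(S + eps) + (1.0 - weight_s) * np.log(T + eps))
    else:
        raise ValueError(f"unknown method: {method}")
    return H * S.sum() / H.sum()
```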

  13. COSMO-CLM{sup 2}: a new version of the COSMO-CLM model coupled to the Community Land Model

    Energy Technology Data Exchange (ETDEWEB)

    Davin, Edouard L.; Jaeger, Eric B.; Seneviratne, Sonia I. [ETH Zurich, Institute for Atmospheric and Climate Science, Zurich (Switzerland); Stoeckli, Reto [ETH Zurich, Institute for Atmospheric and Climate Science, Zurich (Switzerland); MeteoSwiss, Climate Services, Climate Analysis, Zurich (Switzerland); Levis, Samuel [National Center for Atmospheric Research, Climate and Global Dynamics Division, Boulder, CO (United States)

    2011-11-15

    This study presents an evaluation of a new biosphere-atmosphere Regional Climate Model. COSMO-CLM{sup 2} results from the coupling between the non-hydrostatic atmospheric model COSMO-CLM version 4.0 and the Community Land Model version 3.5 (CLM3.5). In this coupling, CLM3.5 replaces a simpler land surface parameterization (TERRA{sub M}L) used in the standard COSMO-CLM. Compared to TERRA{sub M}L, CLM3.5 comprises a more complete representation of land surface processes including hydrology, biogeophysics, biogeochemistry and vegetation dynamics. Historical climate simulations over Europe with COSMO-CLM and with the new COSMO-CLM{sup 2} are evaluated against various data products. The simulated climate is found to be substantially affected by the coupling with CLM3.5, particularly in summer. Radiation fluxes as well as turbulent fluxes at the surface are found to be more realistically represented in COSMO-CLM{sup 2}. This subsequently leads to improvements of several aspects of the simulated climate (cloud cover, surface temperature and precipitation). We show that a better partitioning of turbulent fluxes is the central factor allowing for the better performances of COSMO-CLM{sup 2} over COSMO-CLM. Despite these improvements, some model deficiencies still remain, most notably a substantial underestimation of surface net shortwave radiation. Overall, these results highlight the importance of land surface processes in shaping the European climate and the benefit of using an advanced land surface model for regional climate simulations. (orig.)

  14. Energy Integration for 2050 - A Strategic Impact Model (2050 SIM), Version 1.0

    Energy Technology Data Exchange (ETDEWEB)

    2010-10-01

    The United States (U.S.) energy infrastructure is among the most reliable, accessible, and economic in the world. On the other hand, it is also excessively reliant on foreign energy sources, experiences high volatility in energy prices, does not always practice good stewardship of finite indigenous energy resources, and emits significant quantities of greenhouse gas. The U.S. Department of Energy is conducting research and development on advanced nuclear reactor concepts and technologies, including High Temperature Gas Reactor (HTGR) technologies, directed at helping the United States meet its current and future energy challenges. This report discusses the Draft Strategic Impact Model (SIM), an initial version of which was created during the later part of FY-2010. SIM was developed to analyze and depict the benefits of various energy sources in meeting the energy demand and to provide an overall system understanding of the tradeoffs between building and using HTGRs versus other existing technologies for providing energy (heat and electricity) to various energy-use sectors in the United States. This report also provides the assumptions used in the model, the rationale for the methodology, and the references for the source documentation and source data used in developing the SIM.

  15. Energy Integration for 2050 - A Strategic Impact Model (2050 SIM), Version 2.0

    Energy Technology Data Exchange (ETDEWEB)

    John Collins

    2011-09-01

    The United States (U.S.) energy infrastructure is among the most reliable, accessible, and economic in the world. On the other hand, it is also excessively reliant on foreign energy sources, experiences high volatility in energy prices, does not always practice good stewardship of finite indigenous energy resources, and emits significant quantities of greenhouse gas. The U.S. Department of Energy is conducting research and development on advanced nuclear reactor concepts and technologies, including High Temperature Gas Reactor (HTGR) technologies, directed at helping the United States meet its current and future energy challenges. This report discusses the Draft Strategic Impact Model (SIM), an initial version of which was created during the later part of FY-2010. SIM was developed to analyze and depict the benefits of various energy sources in meeting the energy demand and to provide an overall system understanding of the tradeoffs between building and using HTGRs versus other existing technologies for providing energy (heat and electricity) to various energy-use sectors in the United States. This report also provides the assumptions used in the model, the rationale for the methodology, and the references for the source documentation and source data used in developing the SIM.

  16. A multi-sectoral version of the Post-Keynesian growth model

    Directory of Open Access Journals (Sweden)

    Ricardo Azevedo Araujo

    2015-03-01

    Full Text Available With this inquiry, we seek to develop a disaggregated version of the post-Keynesian approach to economic growth, by showing that indeed it can be treated as a particular case of the Pasinettian model of structural change and economic expansion. By relying upon vertical integration it becomes possible to carry out the analysis initiated by Kaldor (1956) and Robinson (1956, 1962), and followed by Dutt (1984), Rowthorn (1982) and later Bhaduri and Marglin (1990), in a multi-sectoral model in which demand and productivity increase at different paces in each sector. By adopting this approach it is possible to show that the structural economic dynamics is conditioned not only by patterns of evolving demand and the diffusion of technological progress but also by the distributive features of the economy, which can give rise to different regimes of economic growth. Besides, we find it possible to determine the natural rate of profit that keeps the mark-up rate constant over time.

  17. Smart Grid Maturity Model: SGMM Model Definition. Version 1.2

    Science.gov (United States)

    2011-09-01

    taking place. This might include using radio-frequency identification (RFID) technology to link assets to an inventory database that connects GIS and...warehoused). Automation might include workers entering the data via keyboard or barcode reader at the warehouse, or something more advanced like using... RFID tags. WAM-3.7 Modeling of asset investments for key components is underway. The asset performance and management modeling is based on real smart

  18. Application of version 3.1 of EPRI BWR radiolysis model

    International Nuclear Information System (INIS)

    Version 3.1 of the EPRI BWR vessel internals application (BWRVIA) code for calculating oxidant and electrochemical corrosion potential (ECP) around a BWR primary circuit has recently been released, and this paper outlines the changes that have been made to the model and how the model compares with plant observations. There were two primary motivations for the development of BWRVIA V3.1 for plants injecting hydrogen into the feedwater to mitigate intergranular stress corrosion cracking (IGSCC) of reactor piping and internals: first, many BWRs now add Pt to the primary system to catalyze hydrogen:oxidant recombination at surfaces, so the model needs to provide an accurate description of the molar ratio (ratio of hydrogen to oxidant) around the primary circuit; and second, to improve predictions of ECP in the lower plenum region for plants operating under moderate hydrogen water chemistry (HWC-M). Version 3.1 upgraded the model's benchmark for neutron and gamma dose rates and provided for model calculations with core axial power shapes that were bottom, middle and top peaked, characteristic of some core designs at beginning, middle, and end of cycle conditions. Improved reaction rate expressions were also incorporated, along with refinements based on sensitivity testing and comparison to plant data under noble metal hydrogen water chemistry regimes. In the presence of Pt deposits on surfaces, molar ratios greater than 2 at a particular location in the primary circuit imply reducing conditions, low ECP and therefore protection from stress corrosion cracking. Plants that apply noble metal will therefore be protected from SCC in these locations. In recent years several HWC-M plants have obtained ECP data from local power range monitors sampling water from the bottom head of the vessel. These ECP measurements have shown that not all BWRs respond similarly to hydrogen addition with some plants requiring very high feed water hydrogen levels to achieve ECP

  19. Paleoclimate modeling of the Amazonian glacial cycles using the new version of the LMD Global Climate Model

    Science.gov (United States)

    Madeleine, J.; Forget, F.; Head, J. W.; Millour, E.; Spiga, A.; Colaitis, A.; Montabone, L.; Montmessin, F.; Maattanen, A. E.

    2011-12-01

    Our study aims at better understanding the Mars climate system through the modeling of the Amazonian glacial cycles with the LMD Global Climate Model. In recent years, many atmospheric measurements by MRO, MGS and MEx, as well as in-situ measurements by the Phoenix lander have revealed the crucial role of various processes in shaping the current climate, such as the radiative effect of water-ice clouds or the scavenging of dust particles by clouds. In parallel, geological evidence for large-scale glaciations has been discovered, and a lot is still to be learned about the origin of the associated geological features. We have been working on developing a new version of the LMD Mars GCM which includes these processes and allows us to assess their impact on the Mars climate system under present-day and past conditions. The processes that are relevant to paleoclimate modeling are the following: - Interactive aerosols: The scavenging of dust particles is made possible by a semi-interactive dust transport scheme which is coupled to the water cycle scheme. The dust particles serve as condensation nuclei for water-ice cloud formation and can be scavenged. Both dust particles and water-ice crystals can scatter radiation depending on their size. - Near-surface convection: A new parameterization of the convection in the boundary layer has been developed and accounts for the turbulent mixing produced by local thermals. This new parameterization may have an impact on ice stability under paleoclimate conditions. - Ice deposition and surface properties: A new soil conduction model allows us to account for the changes in surface thermal inertia due to ice deposition, meaning that the thermal-inertia feedback is active. Also, the coupling between the dust cycle and the water cycle gives access to the amount of dust which is included in the ice deposits, and thereby provides an assessment of the stratigraphy. During the conference, we will revisit our paleoclimate simulations and

  20. Intercomparison of measurements of NO2 concentrations in the atmosphere simulation chamber SAPHIR during the NO3Comp campaign

    Directory of Open Access Journals (Sweden)

    H. Fuchs

    2010-01-01

    Full Text Available NO2 concentrations were measured by various instruments during the NO3Comp campaign at the atmosphere simulation chamber SAPHIR at Forschungszentrum Jülich, Germany, in June 2007. Analytical methods included photolytic conversion with chemiluminescence (PC-CLD), broadband cavity ring-down spectroscopy (BBCRDS), pulsed cavity ring-down spectroscopy (CRDS), incoherent broadband cavity-enhanced absorption spectroscopy (IBBCEAS), and laser-induced fluorescence (LIF). All broadband absorption spectrometers were optimized for the detection of the main target species of the campaign, NO3, but were also capable of detecting NO2 simultaneously with reduced sensitivity. NO2 mixing ratios in the chamber were within a range characteristic of polluted, urban conditions, with a maximum mixing ratio of approximately 75 ppbv. The overall agreement between measurements of all instruments was excellent. Linear fits of the combined data sets resulted in slopes that differ from unity only within the stated uncertainty of each instrument. Possible interferences from species such as water vapor and ozone were negligible under the experimental conditions.
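
    The slope comparison mentioned above amounts to regressing one instrument's NO2 time series against another; a minimal sketch is given below using an ordinary least-squares fit, whereas the campaign analysis may have used a fitting procedure that accounts for uncertainties in both data sets.

```python
import numpy as np

def slope_against_reference(reference_no2, instrument_no2):
    """Ordinary least-squares slope and intercept of one instrument's NO2
    readings against a reference series; agreement is indicated by a slope
    consistent with unity within the stated instrumental uncertainties."""
    slope, intercept = np.polyfit(reference_no2, instrument_no2, 1)
    return slope, intercept
```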

  1. Hydrogeochemical evaluation for Simpevarp model version 1.2. Preliminary site description of the Simpevarp area

    Energy Technology Data Exchange (ETDEWEB)

    Laaksoharju, Marcus (ed.) [Geopoint AB, Stockholm (Sweden)

    2004-12-01

    Siting studies for SKB's programme of deep geological disposal of nuclear fuel waste currently involves the investigation of two locations, Simpevarp and Forsmark, to determine their geological, hydrogeochemical and hydrogeological characteristics. Present work completed has resulted in Model version 1.2 which represents the second evaluation of the available Simpevarp groundwater analytical data collected up to April, 2004. The deepest fracture groundwater samples with sufficient analytical data reflected depths down to 1.7 km. Model version 1.2 focusses on geochemical and mixing processes affecting the groundwater composition in the uppermost part of the bedrock, down to repository levels, and eventually extending to 1000 m depth. The groundwater flow regimes at Laxemar/Simpevarp are considered local and extend down to depths of around 600-1000 m depending on local topography. The marked differences in the groundwater flow regimes between Laxemar and Simpevarp are reflected in the groundwater chemistry where four major hydrochemical groups of groundwaters (types A-D) have been identified: TYPE A: This type comprises dilute groundwaters (< 1000 mg/L Cl; 0.5-2.0 g/L TDS) of Na-HCO{sub 3} type present at shallow (<200 m) depths at Simpevarp, but at greater depths (0-900 m) at Laxemar. At both localities the groundwaters are marginally oxidising close to the surface, but otherwise reducing. Main reactions involve weathering, ion exchange (Ca, Mg), surface complexation, and dissolution of calcite. Redox reactions include precipitation of Fe-oxyhydroxides and some microbially mediated reactions (SRB). Meteoric recharge water is mainly present at Laxemar whilst at Simpevarp potential mixing of recharge meteoric water and a modern sea component is observed. Localised mixing of meteoric water with deeper saline groundwaters is indicated at both Laxemar and Simpevarp. TYPE B: This type comprises brackish groundwaters (1000-6000 mg/L Cl; 5-10 g/L TDS) present at

  2. A Technological Pedagogical Content Knowledge Based Instructional Design Model: A Third Version Implementation Study in a Technology Integration Course

    Science.gov (United States)

    Lee, Chia-Jung; Kim, ChanMin

    2017-01-01

    This paper presents the third version of a technological pedagogical content knowledge (TPACK) based instructional design model that incorporates the distinctive, transformative, and integrative views of TPACK into a comprehensive actionable framework. Strategies of relating TPACK domains to real-life learning experiences, role-playing, and…

  3. ASTER Global Digital Elevation Model Version 2 - summary of validation results

    Science.gov (United States)

    Tachikawa, Tetushi; Kaku, Manabu; Iwasaki, Akira; Gesch, Dean B.; Oimoen, Michael J.; Zhang, Z.; Danielson, Jeffrey J.; Krieger, Tabatha; Curtis, Bill; Haase, Jeff; Abrams, Michael; Carabajal, C.; Meyer, Dave

    2011-01-01

    On June 29, 2009, NASA and the Ministry of Economy, Trade and Industry (METI) of Japan released a Global Digital Elevation Model (GDEM) to users worldwide at no charge as a contribution to the Global Earth Observing System of Systems (GEOSS). This “version 1” ASTER GDEM (GDEM1) was compiled from over 1.2 million scene-based DEMs covering land surfaces between 83°N and 83°S latitudes. A joint U.S.-Japan validation team assessed the accuracy of the GDEM1, augmented by a team of 20 cooperators. The GDEM1 was found to have an overall accuracy of around 20 meters at the 95% confidence level. The team also noted several artifacts associated with poor stereo coverage at high latitudes, cloud contamination, water masking issues and the stacking process used to produce the GDEM1 from individual scene-based DEMs (ASTER GDEM Validation Team, 2009). Two independent horizontal resolution studies estimated the effective spatial resolution of the GDEM1 to be on the order of 120 meters.

  4. Atmospheric radionuclide transport model with radon postprocessor and SBG module. Model description version 2.8.0; ARTM. Atmosphaerisches Radionuklid-Transport-Modell mit Radon Postprozessor und SBG-Modul. Modellbeschreibung zu Version 2.8.0

    Energy Technology Data Exchange (ETDEWEB)

    Richter, Cornelia; Sogalla, Martin; Thielen, Harald; Martens, Reinhard

    2015-04-20

    The study on the atmospheric radionuclide transport model with radon postprocessor and SBG module (model description version 2.8.0) covers the following issues: determination of emissions, radioactive decay, atmospheric dispersion calculation for radioactive gases, atmospheric dispersion calculation for radioactive dusts, determination of the gamma cloud radiation (gamma submersion), terrain roughness, effective source height, calculation area and model points, geographic reference systems and coordinate transformations, meteorological data, use of invalid meteorological data sets, consideration of statistical uncertainties, consideration of housings, consideration of bumpiness, consideration of terrain roughness, use of frequency distributions of the hourly dispersion situation, consideration of the vegetation period (summer), the radon post processor radon.exe, the SBG module, modeling of wind fields, shading settings.

  5. Uniform California earthquake rupture forecast, version 3 (UCERF3): the time-independent model

    Science.gov (United States)

    Field, Edward H.; Biasi, Glenn P.; Bird, Peter; Dawson, Timothy E.; Felzer, Karen R.; Jackson, David D.; Johnson, Kaj M.; Jordan, Thomas H.; Madden, Christopher; Michael, Andrew J.; Milner, Kevin R.; Page, Morgan T.; Parsons, Thomas; Powers, Peter M.; Shaw, Bruce E.; Thatcher, Wayne R.; Weldon, Ray J.; Zeng, Yuehua; ,

    2013-01-01

    In this report we present the time-independent component of the Uniform California Earthquake Rupture Forecast, Version 3 (UCERF3), which provides authoritative estimates of the magnitude, location, and time-averaged frequency of potentially damaging earthquakes in California. The primary achievements have been to relax fault segmentation assumptions and to include multifault ruptures, both limitations of the previous model (UCERF2). The rates of all earthquakes are solved for simultaneously, and from a broader range of data, using a system-level "grand inversion" that is both conceptually simple and extensible. The inverse problem is large and underdetermined, so a range of models is sampled using an efficient simulated annealing algorithm. The approach is more derivative than prescriptive (for example, magnitude-frequency distributions are no longer assumed), so new analysis tools were developed for exploring solutions. Epistemic uncertainties were also accounted for using 1,440 alternative logic tree branches, necessitating access to supercomputers. The most influential uncertainties include alternative deformation models (fault slip rates), a new smoothed seismicity algorithm, alternative values for the total rate of M≥5 events, and different scaling relationships, virtually all of which are new. As a notable first, three deformation models are based on kinematically consistent inversions of geodetic and geologic data, also providing slip-rate constraints on faults previously excluded because of lack of geologic data. The grand inversion constitutes a system-level framework for testing hypotheses and balancing the influence of different experts. For example, we demonstrate serious challenges with the Gutenberg-Richter hypothesis for individual faults. UCERF3 is still an approximation of the system, however, and the range of models is limited (for example, constrained to stay close to UCERF2). Nevertheless, UCERF3 removes the apparent UCERF2 overprediction of

  6. Hydrogeochemical evaluation of the Simpevarp area, model version 1.1

    Energy Technology Data Exchange (ETDEWEB)

    Laaksoharju, Marcus (ed.) [Geopoint AB, Stockholm (Sweden); Smellie, John [Conterra AB, Uppsala (Sweden); Gimeno, Maria; Auque, Luis; Gomez, Javier [Univ. of Zaragoza (Spain). Dept. of Earth Sciences; Tullborg, Eva-Lena [Terralogica AB, Graabo (Sweden); Gurban, Ioana [3D-Terra (Sweden)

    2004-02-01

    Siting studies for SKB's programme of deep geological disposal of nuclear fuel waste currently involves the investigation of two locations, Simpevarp and Forsmark, on the eastern coast of Sweden to determine their geological, hydrogeochemical and hydrogeological characteristics. Present work completed has resulted in model version 1.1 which represents the first evaluation of the available Simpevarp groundwater analytical data collected up to July 1st, 2003 (i.e. the first 'data freeze' of the site). The HAG (Hydrochemical Analytical Group) had access to a total of 535 water samples collected from the surface and sub-surface environment (e.g. soil pipes in the overburden, streams and lakes); only a few samples were collected from drilled boreholes. The deepest fracture groundwater samples with sufficient analytical data reflected depths down to 250 m. Furthermore, most of the waters sampled (79%) lacked crucial analytical information that restricted the evaluation. Consequently, model version 1.1 focussed on the processes taking place in the uppermost part of the bedrock rather than at repository levels. The complex groundwater evolution and patterns at Simpevarp are a result of many factors such as: a) the flat topography and proximity to the Baltic Sea, b) changes in hydrogeology related to glaciation/deglaciation and land uplift, c) repeated marine/lake water regressions/transgressions, and d) organic or inorganic alteration of the groundwater composition caused by microbial processes or water/rock interactions. The sampled groundwaters reflect to various degrees modern or ancient water/rock interactions and mixing processes. Higher topography to the west of Simpevarp has resulted in hydraulic gradients which have partially flushed out old water types. Except for sea waters, most surface waters and some groundwaters from percussion boreholes are fresh, non-saline waters according to the classification used for Aespoe groundwaters. The rest

  7. The Digital Astronaut Project Computational Bone Remodeling Model (Beta Version) Bone Summit Summary Report

    Science.gov (United States)

    Pennline, James; Mulugeta, Lealem

    2013-01-01

Under the conditions of microgravity, astronauts lose bone mass at a rate of 1% to 2% a month, particularly in the lower extremities such as the proximal femur [1-3]. The most commonly used countermeasure against bone loss in microgravity has been prescribed exercise [4]. However, data have shown that existing exercise countermeasures are not as effective as desired for preventing bone loss in long-duration (4 to 6 month) spaceflight [1,3,5,6]. This spaceflight-related bone loss may cause early onset of osteoporosis, placing astronauts at greater risk of fracture later in their lives. Consequently, NASA seeks to have improved understanding of the mechanisms of bone demineralization in microgravity in order to appropriately quantify this risk, and to establish appropriate countermeasures [7]. In this light, NASA's Digital Astronaut Project (DAP) is working with the NASA Bone Discipline Lead to implement well-validated computational models to help predict and assess bone loss during spaceflight, and enhance exercise countermeasure development. More specifically, computational modeling is proposed as a way to augment bone research and exercise countermeasure development to target weight-bearing skeletal sites that are most susceptible to bone loss in microgravity, and thus at higher risk for fracture. Given that hip fractures can be debilitating, the initial model development focused on the femoral neck. Future efforts will focus on including other key load-bearing bone sites such as the greater trochanter, lower lumbar spine, proximal femur and calcaneus. The DAP has currently established an initial model (Beta Version) of bone loss due to skeletal unloading in the femoral neck region. The model calculates changes in the mineralized volume fraction of bone in this segment and relates them to changes in volumetric bone mineral density (vBMD) measured by Quantitative Computed Tomography (QCT). The model is governed by equations describing changes in bone volume fraction (BVF), and rates of

  8. Three Versions of the Interpersonal Adjective Scales and their Fit to the Circumplex Model

    Science.gov (United States)

    Adams, Ryan S.; Tracey, Terence J. G.

    2004-01-01

    The Interpersonal Adjective Scales (IAS) is a well-supported instrument that is designed to map interpersonal traits onto the interpersonal circumplex. However, three versions of the IAS exist and these vary with respect to the degree to which they included item definitions (i.e., glossary added at the end, definitions attached to each item, and…

  9. The Investment Model Scale (IMS): further studies on construct validation and development of a shorter version (IMS-S).

    Science.gov (United States)

    Rodrigues, David; Lopes, Diniz

    2013-01-01

The Investment Model (IM; Rusbult, 1980, 1983) has been widely used to study the development and maintenance of romantic relationships. Its components--satisfaction, quality of alternatives, investment size and commitment--are operationalized in the Investment Model Scale (IMS; Rusbult, Martz, & Agnew, 1998). Given its importance for the personal relationships literature, this article presents the adaptation and validation of the IMS to Portugal, and the development and validation of a shorter version, the IMS-S. A confirmatory factor analysis replicates the IMS's original four-factor structure. A similar structure was found for the IMS-S. For both versions, results show the instruments to have validity and good reliability. Results are discussed considering the scales' importance for studying romantic relationships.

  10. A NetCDF version of the two-dimensional energy balance model based on the full multigrid algorithm

    Directory of Open Access Journals (Sweden)

    Kelin Zhuang

    2017-01-01

Full Text Available A NetCDF version of the two-dimensional energy balance model based on the full multigrid method in Fortran is introduced for both pedagogical and research purposes. Based on the land–sea–ice distribution, orbital elements, greenhouse gas concentrations, and albedo, the code calculates the global seasonal surface temperature. A step-by-step guide with examples is provided for practice.
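    As a rough illustration of how NetCDF output of this kind might be inspected (the file name, variable name, and grid layout below are assumptions for the sketch, not taken from the paper), a few lines of Python suffice to form an area-weighted global mean of the simulated surface temperature:

```python
# Hypothetical sketch: open an EBM output file and compute a global,
# area-weighted mean surface temperature. Names are assumed, not the model's.
import numpy as np
import xarray as xr

ds = xr.open_dataset("ebm_output.nc")        # assumed output file name
tsurf = ds["tsurf"]                          # assumed variable with dims (time, lat, lon)
weights = np.cos(np.deg2rad(ds["lat"]))      # ~ grid-cell area on a regular lat-lon grid
print(float(tsurf.weighted(weights).mean(dim=("time", "lat", "lon"))))
```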

  11. Programs OPTMAN and SHEMMAN Version 6 (1999) - Coupled-Channels optical model and collective nuclear structure calculation -

    Energy Technology Data Exchange (ETDEWEB)

    Chang, Jong Hwa; Lee, Jeong Yeon; Lee, Young Ouk; Sukhovitski, Efrem Sh. [Korea Atomic Energy Research Institute, Taejeon (Korea)

    2000-01-01

Programs SHEMMAN and OPTMAN (Version 6) have been developed for the determination of nuclear Hamiltonian parameters and for optical model calculations, respectively. The optical model calculations by OPTMAN, with coupling schemes built on wave functions of the non-axial soft rotator, are self-consistent, since the parameters of the nuclear Hamiltonian are determined by adjusting the energies of collective levels to experimental values with SHEMMAN prior to the optical model calculation. The programs have been installed at the Nuclear Data Evaluation Laboratory of KAERI. This report is intended as a brief manual of these codes. 43 refs., 9 figs., 1 tab. (Author)

  12. Using the Advanced Research Version of the Weather Research and Forecasting Model (WRF-ARW) to Forecast Turbulence at Small Scales

    National Research Council Canada - National Science Library

    Passner, Jeffrey E

    2008-01-01

    ...) as well as for longer-range forecasting support. The model utilized to investigate fine-scale weather processes, the Advanced Research version of the Weather Research and Forecasting model (WRF-ARW...

  13. Updates to the Demographic and Spatial Allocation Models to Produce Integrated Climate and Land Use Scenarios (ICLUS) (Final Report, Version 2)

    Science.gov (United States)

EPA announced the availability of the final report, Updates to the Demographic and Spatial Allocation Models to Produce Integrated Climate and Land Use Scenarios (ICLUS) (Version 2). This update furthered land change modeling by providing nationwide housing developmen...

  14. A Revised Thermosphere for the Mars Global Reference Atmospheric Model (Mars-GRAM Version 3.4)

    Science.gov (United States)

    Justus, C. G.; Johnson, D. L.; James, B. F.

    1996-01-01

    This report describes the newly-revised model thermosphere for the Mars Global Reference Atmospheric Model (Mars-GRAM, Version 3.4). It also provides descriptions of other changes made to the program since publication of the programmer's guide for Mars-GRAM Version 3.34. The original Mars-GRAM model thermosphere was based on the global-mean model of Stewart. The revised thermosphere is based largely on parameterizations derived from output data from the three-dimensional Mars Thermospheric Global Circulation Model (MTGCM). The new thermospheric model includes revised dependence on the 10.7 cm solar flux for the global means of exospheric temperature, temperature of the base of the thermosphere, and scale height for the thermospheric temperature variations, as well as revised dependence on orbital position for global mean height of the base of the thermosphere. Other features of the new thermospheric model are: (1) realistic variations of temperature and density with latitude and time of day, (2) more realistic wind magnitudes, based on improved estimates of horizontal pressure gradients, and (3) allowance for user-input adjustments to the model values for mean exospheric temperature and for height and temperature at the base of the thermosphere. Other new features of Mars-GRAM 3.4 include: (1) allowance for user-input values of climatic adjustment factors for temperature profiles from the surface to 75 km, and (2) a revised method for computing the sub-solar longitude position in the 'ORBIT' subroutine.

  15. UNSAT-H Version 3.0: Unsaturated Soil Water and Heat Flow Model Theory, User Manual, and Examples

    International Nuclear Information System (INIS)

    Fayer, M.J.

    2000-01-01

The UNSAT-H model was developed at Pacific Northwest National Laboratory (PNNL) to assess the water dynamics of arid sites and, in particular, estimate recharge fluxes for scenarios pertinent to waste disposal facilities. During the last 4 years, the UNSAT-H model received support from the Immobilized Waste Program (IWP) of the Hanford Site's River Protection Project. This program is designing and assessing the performance of on-site disposal facilities to receive radioactive wastes that are currently stored in single- and double-shell tanks at the Hanford Site (LMHC 1999). The IWP is interested in estimates of recharge rates for current conditions and long-term scenarios involving the vadose zone disposal of tank wastes. Simulation modeling with UNSAT-H is one of the methods being used to provide those estimates (e.g., Rockhold et al. 1995; Fayer et al. 1999). To achieve the above goals for assessing water dynamics and estimating recharge rates, the UNSAT-H model addresses soil water infiltration, redistribution, evaporation, plant transpiration, deep drainage, and soil heat flow as one-dimensional processes. The UNSAT-H model simulates liquid water flow using Richards' equation (Richards 1931), water vapor diffusion using Fick's law, and sensible heat flow using the Fourier equation. This report documents UNSAT-H Version 3.0. The report includes the bases for the conceptual model and its numerical implementation, benchmark test cases, example simulations involving layered soils and plants, and the code manual. Version 3.0 is an enhanced-capability update of UNSAT-H Version 2.0 (Fayer and Jones 1990). New features include hysteresis, an iterative solution of head and temperature, an energy balance check, the modified Picard solution technique, additional hydraulic functions, multiple-year simulation capability, and general enhancements
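    For orientation, the textbook one-dimensional forms of the governing equations named in the abstract can be written as follows (generic symbols and sign conventions; not necessarily UNSAT-H's exact notation):

```latex
% Richards' equation (head form, z positive upward) and Fourier heat conduction
C(h)\,\frac{\partial h}{\partial t}
  = \frac{\partial}{\partial z}\!\left[K(h)\left(\frac{\partial h}{\partial z} + 1\right)\right] - S(z,t),
\qquad
\rho c\,\frac{\partial T}{\partial t}
  = \frac{\partial}{\partial z}\!\left(\lambda\,\frac{\partial T}{\partial z}\right)
```

    Here C(h) is the specific moisture capacity, K(h) the unsaturated hydraulic conductivity, S a sink term (e.g., root uptake), and lambda the soil thermal conductivity.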

  16. Simulations of the mid-Pliocene Warm Period using two versions of the NASA/GISS ModelE2-R Coupled Model

    Directory of Open Access Journals (Sweden)

    M. A. Chandler

    2013-04-01

Full Text Available The mid-Pliocene Warm Period (mPWP) bears many similarities to aspects of future global warming as projected by the Intergovernmental Panel on Climate Change (IPCC, 2007). Both marine and terrestrial data point to high-latitude temperature amplification, including large decreases in sea ice and land ice, as well as expansion of warmer climate biomes into higher latitudes. Here we present our most recent simulations of the mid-Pliocene climate using the CMIP5 version of the NASA/GISS Earth System Model (ModelE2-R). We describe the substantial impact associated with a recent correction made in the implementation of the Gent-McWilliams ocean mixing scheme (GM), which has a large effect on the simulation of ocean surface temperatures, particularly in the North Atlantic Ocean. The effect of this correction on the Pliocene climate results would not have been easily determined from examining its impact on the preindustrial runs alone, a useful demonstration of how the consequences of code improvements as seen in modern climate control runs do not necessarily portend the impacts in extreme climates. Both the GM-corrected and GM-uncorrected simulations were contributed to the Pliocene Model Intercomparison Project (PlioMIP) Experiment 2. Many findings presented here corroborate results from other PlioMIP multi-model ensemble papers, but we also emphasise features in the ModelE2-R simulations that are unlike the ensemble means. The corrected version yields results that more closely resemble the ocean core data as well as the PRISM3D reconstructions of the mid-Pliocene, especially the dramatic warming in the North Atlantic and Greenland-Iceland-Norwegian Sea, which in the new simulation appears to be far more realistic than previously found with older versions of the GISS model. Our belief is that continued development of key physical routines in the atmospheric model, along with higher resolution and recent corrections to mixing parameterisations in the ocean

  17. Simulations of the Mid-Pliocene Warm Period Using Two Versions of the NASA-GISS ModelE2-R Coupled Model

    Science.gov (United States)

    Chandler, M. A.; Sohl, L. E.; Jonas, J. A.; Dowsett, H. J.; Kelley, M.

    2013-01-01

    The mid-Pliocene Warm Period (mPWP) bears many similarities to aspects of future global warming as projected by the Intergovernmental Panel on Climate Change (IPCC, 2007). Both marine and terrestrial data point to high-latitude temperature amplification, including large decreases in sea ice and land ice, as well as expansion of warmer climate biomes into higher latitudes. Here we present our most recent simulations of the mid-Pliocene climate using the CMIP5 version of the NASAGISS Earth System Model (ModelE2-R). We describe the substantial impact associated with a recent correction made in the implementation of the Gent-McWilliams ocean mixing scheme (GM), which has a large effect on the simulation of ocean surface temperatures, particularly in the North Atlantic Ocean. The effect of this correction on the Pliocene climate results would not have been easily determined from examining its impact on the preindustrial runs alone, a useful demonstration of how the consequences of code improvements as seen in modern climate control runs do not necessarily portend the impacts in extreme climates.Both the GM-corrected and GM-uncorrected simulations were contributed to the Pliocene Model Intercomparison Project (PlioMIP) Experiment 2. Many findings presented here corroborate results from other PlioMIP multi-model ensemble papers, but we also emphasize features in the ModelE2-R simulations that are unlike the ensemble means. The corrected version yields results that more closely resemble the ocean core data as well as the PRISM3D reconstructions of the mid-Pliocene, especially the dramatic warming in the North Atlantic and Greenland-Iceland-Norwegian Sea, which in the new simulation appears to be far more realistic than previously found with older versions of the GISS model. Our belief is that continued development of key physical routines in the atmospheric model, along with higher resolution and recent corrections to mixing parameterisations in the ocean model, have led

  18. Categorical Inputs, Sensitivity Analysis, Optimization and Importance Tempering with tgp Version 2, an R Package for Treed Gaussian Process Models

    Directory of Open Access Journals (Sweden)

    Robert B. Gramacy

    2010-02-01

Full Text Available This document describes the new features in version 2.x of the tgp package for R, implementing treed Gaussian process (GP) models. The topics covered include methods for dealing with categorical inputs and excluding inputs from the tree or GP part of the model; fully Bayesian sensitivity analysis for inputs/covariates; sequential optimization of black-box functions; and a new Monte Carlo method for inference in multi-modal posterior distributions that combines simulated tempering and importance sampling. These additions extend the functionality of tgp across all models in the hierarchy: from Bayesian linear models, to classification and regression trees (CART), to treed Gaussian processes with jumps to the limiting linear model. It is assumed that the reader is familiar with the baseline functionality of the package, outlined in the first vignette (Gramacy 2007).

  19. Land-total and Ocean-total Precipitation and Evaporation from a Community Atmosphere Model version 5 Perturbed Parameter Ensemble

    Energy Technology Data Exchange (ETDEWEB)

    Covey, Curt [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Lucas, Donald D. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Trenberth, Kevin E. [National Center for Atmospheric Research, Boulder, CO (United States)

    2016-03-02

This document presents the large scale water budget statistics of a perturbed input-parameter ensemble of atmospheric model runs. The model is Version 5.1.02 of the Community Atmosphere Model (CAM). These runs are the “C-Ensemble” described by Qian et al., “Parametric Sensitivity Analysis of Precipitation at Global and Local Scales in the Community Atmosphere Model CAM5” (Journal of Advances in Modeling the Earth System, 2015). As noted by Qian et al., the simulations are “AMIP type” with temperature and sea ice boundary conditions chosen to match surface observations for the five year period 2000-2004. There are 1100 ensemble members in addition to one run with default input-parameter values.
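    A minimal sketch of how land-total and ocean-total statistics of this kind are typically formed from a latitude-longitude field is given below; the variable names, grid, and land-fraction convention are assumptions for illustration, not the ensemble's actual file layout.

```python
# Hypothetical sketch: area-weighted land and ocean means of a 2-D field,
# using cos(latitude) as a proxy for grid-cell area and a land-fraction mask.
import numpy as np

def area_weighted_mean(field, landfrac, lat, over="land"):
    weights = np.cos(np.deg2rad(lat))[:, None] * np.ones_like(field)
    frac = landfrac if over == "land" else 1.0 - landfrac
    return np.sum(field * frac * weights) / np.sum(frac * weights)

lat = np.linspace(-89.0, 89.0, 90)                        # made-up grid
precip = np.random.rand(90, 144)                          # made-up precipitation field
landfrac = (np.random.rand(90, 144) > 0.7).astype(float)  # made-up land fraction
print(area_weighted_mean(precip, landfrac, lat, over="land"),
      area_weighted_mean(precip, landfrac, lat, over="ocean"))
```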

  20. Model Analyst’s Toolkit User Guide, Version 7.1.0

    Science.gov (United States)

    2015-08-01

Excerpts from the user guide (Figure 1 shows the MAT interface elements; Figure 2 shows the Selected Entities view): use the standard Windows methods for moving and resizing windows; views can also be moved... and resized using the toolbar at the top of each view. To rearrange a view, float a view by clicking and... Click OK in the Existing Concept area to create the data feature. If you uncheck the data series, the dot next to the series is shown in red on...

  1. Measurement of the reaction {gamma}p{yields}K{sup 0}{sigma}{sup +} for photon energies up to 2.65 GeV with the SAPHIR detector at ELSA; Messung der Reaktion {gamma}p {yields} K{sup 0}{sigma}{sup +} fuer Photonenergien bis 2.65 GeV mit dem SAPHIR-Detektor an ELSA

    Energy Technology Data Exchange (ETDEWEB)

    Lawall, R.

    2004-01-01

    The reaction {gamma}p {yields} K{sup 0}{sigma}{sup +} was measured with the SAPHIR-detector at ELSA during the run periods 1997 and 1998. Results were obtained for cross sections in the photon energy range from threshold up to 2.65 GeV for all production angles and for the {sigma}{sup +}-polarization. Emphasis has been put on the determination and reduction of the contributions of background reactions and the comparison with other measurements and theoretical predictions. (orig.)

  2. The Parallelized Large-Eddy Simulation Model (PALM version 4.0 for atmospheric and oceanic flows: model formulation, recent developments, and future perspectives

    Directory of Open Access Journals (Sweden)

    B. Maronga

    2015-08-01

Full Text Available In this paper we present the current version of the Parallelized Large-Eddy Simulation Model (PALM), whose core has been developed at the Institute of Meteorology and Climatology at Leibniz Universität Hannover (Germany). PALM is a Fortran 95-based code with some Fortran 2003 extensions and has been applied for the simulation of a variety of atmospheric and oceanic boundary layers for more than 15 years. PALM is optimized for use on massively parallel computer architectures and was recently ported to general-purpose graphics processing units. In the present paper we give a detailed description of the current version of the model and its features, such as an embedded Lagrangian cloud model and the possibility to use Cartesian topography. Moreover, we discuss recent model developments and future perspectives for LES applications.

  3. The Parallelized Large-Eddy Simulation Model (PALM) version 4.0 for atmospheric and oceanic flows: model formulation, recent developments, and future perspectives

    Science.gov (United States)

    Maronga, B.; Gryschka, M.; Heinze, R.; Hoffmann, F.; Kanani-Sühring, F.; Keck, M.; Ketelsen, K.; Letzel, M. O.; Sühring, M.; Raasch, S.

    2015-08-01

    In this paper we present the current version of the Parallelized Large-Eddy Simulation Model (PALM) whose core has been developed at the Institute of Meteorology and Climatology at Leibniz Universität Hannover (Germany). PALM is a Fortran 95-based code with some Fortran 2003 extensions and has been applied for the simulation of a variety of atmospheric and oceanic boundary layers for more than 15 years. PALM is optimized for use on massively parallel computer architectures and was recently ported to general-purpose graphics processing units. In the present paper we give a detailed description of the current version of the model and its features, such as an embedded Lagrangian cloud model and the possibility to use Cartesian topography. Moreover, we discuss recent model developments and future perspectives for LES applications.

  4. The global chemistry transport model TM5: description and evaluation of the tropospheric chemistry version 3.0

    Directory of Open Access Journals (Sweden)

    V. Huijnen

    2010-10-01

Full Text Available We present a comprehensive description and benchmark evaluation of the tropospheric chemistry version of the global chemistry transport model TM5 (Tracer Model 5, version TM5-chem-v3.0). A full description is given concerning the photochemical mechanism, the interaction with aerosol, the treatment of the stratosphere, the wet and dry deposition parameterizations, and the applied emissions. We evaluate the model against a suite of ground-based, satellite, and aircraft measurements of components critical for understanding global photochemistry for the year 2006.

    The model exhibits a realistic oxidative capacity at a global scale. The methane lifetime is ~8.9 years with an associated lifetime of methyl chloroform of 5.86 years, which is similar to that derived using an optimized hydroxyl radical field.

The seasonal cycle in observed carbon monoxide (CO) is well simulated in different regions across the globe. In the Northern Hemisphere CO concentrations are underestimated by about 20 ppbv in spring and 10 ppbv in summer, which is related to missing chemistry and underestimated emissions from higher hydrocarbons, as well as to uncertainties in the seasonal variation of CO emissions. The model also captures the spatial and seasonal variation in formaldehyde tropospheric columns as observed by SCIAMACHY. Positive model biases over the Amazon and eastern United States point to uncertainties in the isoprene emissions as well as its chemical breakdown.

Simulated tropospheric nitrogen dioxide columns correspond well to observations from the Ozone Monitoring Instrument in terms of their seasonal and spatial variability (with a global spatial correlation coefficient of 0.89), but TM5 fields are lower by 25–40%. This is consistent with earlier studies pointing to a high bias of 0–30% in the OMI retrievals, but uncertainties in the emission inventories have probably also contributed to the discrepancy.

    TM5 tropospheric

  5. All-sky radiance simulation of Megha-Tropiques SAPHIR microwave ...

    Indian Academy of Sciences (India)

... in improving the global weather analyses and subsequent model forecasts. Use of cloud-cleared satellite radiances from infrared and microwave sounding data has already brought improvements to moisture and temperature analyses (Eyre et al. 1993; English et al. 2000). Assimilation of ...

  6. Recent extensions and use of the statistical model code EMPIRE-II - version: 2.17 Millesimo

    International Nuclear Information System (INIS)

    Herman, M.

    2003-01-01

These lecture notes describe new features of the modular code EMPIRE-2.17, designed to perform comprehensive calculations of nuclear reactions using a variety of nuclear reaction models. Compared to version 2.13, the current release has been extended by including the coupled-channels mechanism, the exciton model, a Monte Carlo approach to preequilibrium emission, use of microscopic level densities, width fluctuation correction, detailed calculation of the recoil spectra, and powerful plotting capabilities provided by the ZVView package. The second part of this lecture concentrates on the use of the code in practical calculations, with emphasis on the aspects relevant to nuclear data evaluation. In particular, adjusting model parameters is discussed in detail. (author)

  7. Online dynamical downscaling of temperature and precipitation within the iLOVECLIM model (version 1.1

    Directory of Open Access Journals (Sweden)

    A. Quiquet

    2018-02-01

    Full Text Available This paper presents the inclusion of an online dynamical downscaling of temperature and precipitation within the model of intermediate complexity iLOVECLIM v1.1. We describe the following methodology to generate temperature and precipitation fields on a 40 km  ×  40 km Cartesian grid of the Northern Hemisphere from the T21 native atmospheric model grid. Our scheme is not grid specific and conserves energy and moisture in the same way as the original climate model. We show that we are able to generate a high-resolution field which presents a spatial variability in better agreement with the observations compared to the standard model. Although the large-scale model biases are not corrected, for selected model parameters, the downscaling can induce a better overall performance compared to the standard version on both the high-resolution grid and on the native grid. Foreseen applications of this new model feature include the improvement of ice sheet model coupling and high-resolution land surface models.
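    To make the conservation requirement concrete, the sketch below shows a generic mean-preserving refinement from a coarse grid to a finer Cartesian grid. This is only an illustration of the constraint, not the actual iLOVECLIM downscaling scheme, and all arrays and factors are invented.

```python
# Generic mean-preserving refinement (illustration only; NOT the iLOVECLIM scheme).
# A fine-grid first guess is adjusted per coarse cell so the coarse-cell mean is conserved.
import numpy as np

def conserve_coarse_means(fine, coarse, factor):
    out = fine.copy()
    ny, nx = coarse.shape
    for j in range(ny):
        for i in range(nx):
            block = out[j*factor:(j+1)*factor, i*factor:(i+1)*factor]
            block += coarse[j, i] - block.mean()   # additive correction per coarse cell
    return out

rng = np.random.default_rng(0)
coarse = np.arange(6.0).reshape(2, 3)                                    # made-up coarse field
guess = np.kron(coarse, np.ones((4, 4))) + rng.normal(0, 0.5, (8, 12))   # any fine-scale detail
fine = conserve_coarse_means(guess, coarse, factor=4)
assert np.allclose(fine.reshape(2, 4, 3, 4).mean(axis=(1, 3)), coarse)   # coarse means recovered
```

    The assertion checks that every coarse cell's mean is recovered exactly, which is the sense in which budgets can be preserved under refinement; an additive correction suits temperature, whereas a multiplicative one is more natural for non-negative fields such as precipitation.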

  8. Online dynamical downscaling of temperature and precipitation within the iLOVECLIM model (version 1.1)

    Science.gov (United States)

    Quiquet, Aurélien; Roche, Didier M.; Dumas, Christophe; Paillard, Didier

    2018-02-01

    This paper presents the inclusion of an online dynamical downscaling of temperature and precipitation within the model of intermediate complexity iLOVECLIM v1.1. We describe the following methodology to generate temperature and precipitation fields on a 40 km × 40 km Cartesian grid of the Northern Hemisphere from the T21 native atmospheric model grid. Our scheme is not grid specific and conserves energy and moisture in the same way as the original climate model. We show that we are able to generate a high-resolution field which presents a spatial variability in better agreement with the observations compared to the standard model. Although the large-scale model biases are not corrected, for selected model parameters, the downscaling can induce a better overall performance compared to the standard version on both the high-resolution grid and on the native grid. Foreseen applications of this new model feature include the improvement of ice sheet model coupling and high-resolution land surface models.

  9. A psychometric evaluation of the Swedish version of the Research Utilization Questionnaire using a Rasch measurement model.

    Science.gov (United States)

    Lundberg, Veronica; Boström, Anne-Marie; Malinowsky, Camilla

    2017-07-30

Evidence-based practice and research utilisation have become commonly used concepts in health care. The Research Utilization Questionnaire (RUQ) has been recognised as a widely used instrument measuring the perception of research utilisation among nursing staff in clinical practice. Few studies have, however, analysed the psychometric properties of the RUQ. The aim of this study was to examine the psychometric properties of the Swedish version of the three subscales in the RUQ using a Rasch measurement model. This study has a cross-sectional design using a sample of 163 staff (response rate 81%) working in one nursing home in Sweden. Data were collected using the Swedish version of the RUQ in 2012. The three subscales Attitudes towards research, Availability of and support for research use and Use of research findings in clinical practice were investigated. Data were analysed using a Rasch measurement model. The results indicate the presence of multidimensionality in all subscales. Moreover, internal scale validity and person response validity also provide some less satisfactory results, especially for the subscale Use of research findings. Overall, there seems to be a problem with the negatively worded statements. The findings suggest that clarification and refining of items, including additional psychometric evaluation of the RUQ, are needed before using the instrument in clinical practice and research studies among staff in nursing homes. © 2017 Nordic College of Caring Science.
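    For readers unfamiliar with the measurement framework, the dichotomous Rasch model (the simplest member of the family; polytomous variants are used for Likert-type items such as those in the RUQ) gives the probability that person v endorses item i as:

```latex
P(X_{vi} = 1 \mid \theta_v, b_i) = \frac{\exp(\theta_v - b_i)}{1 + \exp(\theta_v - b_i)}
```

    where theta_v is the person's location on the latent trait and b_i is the item difficulty; unidimensionality, item fit, and person response validity are judged against the expectations of this model.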

  10. Construct validity of the Chinese version of the Self-care of Heart Failure Index determined using structural equation modeling.

    Science.gov (United States)

    Kang, Xiaofeng; Dennison Himmelfarb, Cheryl R; Li, Zheng; Zhang, Jian; Lv, Rong; Guo, Jinyu

    2015-01-01

The Self-care of Heart Failure Index (SCHFI) is an empirically tested instrument for measuring the self-care of patients with heart failure. The aim of this study was to develop a simplified Chinese version of the SCHFI and provide evidence for its construct validity. A total of 182 Chinese patients with heart failure were surveyed. A 2-step structural equation modeling procedure was applied to test construct validity. Factor analysis showed 3 factors explaining 43% of the variance. The structural equation model confirmed that self-care maintenance, self-care management, and self-care confidence are indeed indicators of self-care, and self-care confidence was a positive and equally strong predictor of self-care maintenance and self-care management. Moreover, self-care scores were correlated with the Partners in Health Scale, indicating satisfactory concurrent validity. The Chinese version of the SCHFI is a theory-based instrument for assessing self-care of Chinese patients with heart failure.

  11. Approaches in highly parameterized inversion—PEST++ Version 3, a Parameter ESTimation and uncertainty analysis software suite optimized for large environmental models

    Science.gov (United States)

    Welter, David E.; White, Jeremy T.; Hunt, Randall J.; Doherty, John E.

    2015-09-18

    The PEST++ Version 1 object-oriented parameter estimation code is here extended to Version 3 to incorporate additional algorithms and tools to further improve support for large and complex environmental modeling problems. PEST++ Version 3 includes the Gauss-Marquardt-Levenberg (GML) algorithm for nonlinear parameter estimation, Tikhonov regularization, integrated linear-based uncertainty quantification, options of integrated TCP/IP based parallel run management or external independent run management by use of a Version 2 update of the GENIE Version 1 software code, and utilities for global sensitivity analyses. The Version 3 code design is consistent with PEST++ Version 1 and continues to be designed to lower the barriers of entry for users as well as developers while providing efficient and optimized algorithms capable of accommodating large, highly parameterized inverse problems. As such, this effort continues the original focus of (1) implementing the most popular and powerful features of the PEST software suite in a fashion that is easy for novice or experienced modelers to use and (2) developing a software framework that is easy to extend.
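    Schematically, the GML algorithm mentioned above repeatedly solves a damped, weighted least-squares problem for the parameter upgrade; a standard textbook form of the step (not necessarily PEST++'s exact implementation, which also accommodates Tikhonov regularization terms) is:

```latex
\left(\mathbf{J}^{\mathsf{T}}\mathbf{Q}\,\mathbf{J} + \lambda\,\mathbf{I}\right)\,\Delta\mathbf{p}
  = \mathbf{J}^{\mathsf{T}}\mathbf{Q}\,\bigl[\mathbf{d} - \mathbf{f}(\mathbf{p})\bigr]
```

    where J is the Jacobian of simulated outputs with respect to parameters, Q the observation weight matrix, d the observations, f(p) the current model outputs, and lambda the Marquardt damping parameter adjusted between iterations.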

  12. Assessment of two versions of regional climate model in simulating the Indian Summer Monsoon over South Asia CORDEX domain

    Science.gov (United States)

    Pattnayak, K. C.; Panda, S. K.; Saraswat, Vaishali; Dash, S. K.

    2018-04-01

This study assesses the performance of two versions of the Regional Climate Model (RegCM) in simulating the Indian summer monsoon over South Asia for the period 1998 to 2003, with the aim of conducting future climate change simulations. Two sets of experiments were carried out with two different versions of RegCM (viz. RegCM4.2 and RegCM4.3), with the lateral boundary forcings provided from the European Center for Medium Range Weather Forecast Reanalysis (ERA-Interim) at 50 km horizontal resolution. The major updates in RegCM4.3 in comparison to the older version RegCM4.2 are the inclusion of measured solar irradiance in place of a hardcoded solar constant and additional layers in the stratosphere. The analysis shows that the Indian summer monsoon rainfall, moisture flux and surface net downward shortwave flux are better represented in RegCM4.3 than in the RegCM4.2 simulations. Excessive moisture flux in the RegCM4.2 simulation over the northern Arabian Sea and Peninsular India resulted in an overestimation of rainfall over the Western Ghats and the Peninsular region, as a result of which the all-India rainfall has been overestimated. RegCM4.3 has performed well over India as a whole, as well as over its four homogeneous rainfall zones, in reproducing the mean monsoon rainfall and the inter-annual variation of rainfall. Further, the monsoon onset, the low-level Somali Jet and the upper-level tropical easterly jet are better represented in RegCM4.3 than in RegCM4.2. Thus, RegCM4.3 has performed better in simulating the mean summer monsoon circulation over South Asia. Hence, RegCM4.3 may be used to study future climate change over South Asia.

  13. Implementation of methane cycling for deep time, global warming simulations with the DCESS Earth System Model (Version 1.2)

    DEFF Research Database (Denmark)

    Shaffer, Gary; Villanueva, Esteban Fernández; Rondanelli, Roberto

    2017-01-01

Geological records reveal a number of ancient, large and rapid negative excursions of carbon-13 isotope. Such excursions can only be explained by massive injections of depleted carbon to the Earth System over a short duration. These injections may have forced strong global warming events, sometimes... or from warming-induced dissociation of methane hydrate, a solid compound of methane and water found in ocean sediments. As a consequence of the ubiquity and importance of methane in major Earth events, Earth System models should include a comprehensive treatment of methane cycling but such a treatment... With this improved DCESS model version and paleo-reconstructions, we are now better armed to gauge the amounts, types, time scales and locations of methane injections driving specific, observed deep time, global warming events.

  14. Promoting self-determination skills in the classroom: the Self-determined Learning Model of Instruction (Spanish version

    Directory of Open Access Journals (Sweden)

    Cristina MUMBARDÓ ADAM

    2018-03-01

Full Text Available Within the Spanish context, initiatives to promote self-determination in educational settings are still lacking, despite the availability of instruments designed to enable the instruction of self-determination skills, such as the Self-Determined Learning Model of Instruction. This evidence-based practice enables teachers to instruct students to develop self-determination actions and skills. This study presents the translated and adapted Spanish version of the aforementioned program/tool, in an effort to improve the focus of self-determination instruction in the Spanish educational context by providing practitioners with a model of instruction intended to teach skills associated with the promotion and enhancement of self-determined action.

  15. Development of models for the sodium version of the two-phase three-dimensional thermal hydraulics code THERMIT. [LMFBR

    Energy Technology Data Exchange (ETDEWEB)

    Wilson, G.J.; Kazimi, M.S.

    1980-05-01

Several different models and correlations were developed and incorporated in the sodium version of THERMIT, a thermal-hydraulics code written at MIT for the purpose of analyzing transients under LMFBR conditions. These include a mechanism for the inclusion of radial heat conduction in the sodium coolant as well as radial heat loss to the structure surrounding the test section. The fuel rod conduction scheme was modified to allow for more flexibility in modelling the gas plenum regions and fuel restructuring. The formulas for mass and momentum exchange between the liquid and vapor phases were improved. The single-phase and two-phase friction factors were replaced by correlations more appropriate to LMFBR assembly geometry.

  16. A framework for expanding aqueous chemistry in the Community Multiscale Air Quality (CMAQ) model version 5.1

    Science.gov (United States)

    Fahey, Kathleen M.; Carlton, Annmarie G.; Pye, Havala O. T.; Baek, Jaemeen; Hutzell, William T.; Stanier, Charles O.; Baker, Kirk R.; Wyat Appel, K.; Jaoui, Mohammed; Offenberg, John H.

    2017-04-01

This paper describes the development and implementation of an extendable aqueous-phase chemistry option (AQCHEM-KMT(I)) for the Community Multiscale Air Quality (CMAQ) modeling system, version 5.1. Here, the Kinetic PreProcessor (KPP), version 2.2.3, is used to generate a Rosenbrock solver (Rodas3) to integrate the stiff system of ordinary differential equations (ODEs) that describe the mass transfer, chemical kinetics, and scavenging processes of CMAQ clouds. CMAQ's standard cloud chemistry module (AQCHEM) is structurally limited to the treatment of a simple chemical mechanism. This work advances our ability to test and implement more sophisticated aqueous chemical mechanisms in CMAQ and further investigate the impacts of microphysical parameters on cloud chemistry. Box model cloud chemistry simulations were performed to choose efficient solver and tolerance settings, evaluate the implementation of the KPP solver, and assess the direct impacts of alternative solver and kinetic mass transfer on predicted concentrations for a range of scenarios. Month-long CMAQ simulations for winter and summer periods over the US reveal the changes in model predictions due to these cloud module updates within the full chemical transport model. While monthly average CMAQ predictions are not drastically altered between AQCHEM and AQCHEM-KMT, hourly concentration differences can be significant. With added in-cloud secondary organic aerosol (SOA) formation from biogenic epoxides (AQCHEM-KMTI), normalized mean error and bias statistics are slightly improved for 2-methyltetrols and 2-methylglyceric acid at the Research Triangle Park measurement site in North Carolina during the Southern Oxidant and Aerosol Study (SOAS) period. The added in-cloud chemistry leads to a monthly average increase of 11-18% in cloud SOA at the surface in the eastern United States for June 2013.
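    As a loose analogue of what the KPP-generated Rosenbrock (Rodas3) integrator does for the cloud-chemistry ODEs (CMAQ itself uses generated Fortran, and the two-species mechanism and rate constants below are invented), a stiff kinetics system can be integrated with an implicit SciPy solver:

```python
# Illustrative stiff-ODE integration only; not CMAQ's actual mechanism or solver.
import numpy as np
from scipy.integrate import solve_ivp

k_fast, k_slow = 1.0e4, 1.0e-2          # made-up rate constants (s^-1)

def rhs(t, y):
    a, b = y
    return [-k_fast * a + k_slow * b,
             k_fast * a - k_slow * b]

sol = solve_ivp(rhs, (0.0, 600.0), [1.0, 0.0], method="Radau", rtol=1e-6, atol=1e-12)
print(sol.y[:, -1])                      # near-equilibrium partitioning after 10 minutes
```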

  17. Code-switched English Pronunciation Modeling for Swahili Spoken Term Detection (Pub Version, Open Access)

    Science.gov (United States)

    2016-05-03

From SLTU 2016 (Spoken Language Technologies for Under-resourced Languages), 9-12 May 2016, Yogyakarta, Indonesia; Procedia Computer Science 81 (2016) 128-135: We investigate modeling strategies for English code-switched words as found in a Swahili spoken term detection system. Code switching... Our research focuses on pronunciation modeling of English (embedded language) words within...

  18. Mass-conserving subglacial hydrology in the Parallel Ice Sheet Model version 0.6

    Science.gov (United States)

    Bueler, E.; van Pelt, W.

    2015-06-01

    We describe and test a two-horizontal-dimension subglacial hydrology model which combines till with a distributed system of water-filled, linked cavities which open through sliding and close through ice creep. The addition of this sub-model to the Parallel Ice Sheet Model (PISM) accomplishes three specific goals: (a) conservation of the mass of water, (b) simulation of spatially and temporally variable basal shear stress from physical mechanisms based on a minimal number of free parameters, and (c) convergence under grid refinement. The model is a common generalization of four others: (i) the undrained plastic bed model of Tulaczyk et al. (2000b), (ii) a standard "routing" model used for identifying locations of subglacial lakes, (iii) the lumped englacial-subglacial model of Bartholomaus et al. (2011), and (iv) the elliptic-pressure-equation model of Schoof et al. (2012). We preserve physical bounds on the pressure. In steady state a functional relationship between water amount and pressure emerges. We construct an exact solution of the coupled, steady equations and use it for verification of our explicit time stepping, parallel numerical implementation. We demonstrate the model at scale by 5 year simulations of the entire Greenland ice sheet at 2 km horizontal resolution, with one million nodes in the hydrology grid.
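    Goal (a) above can be stated schematically as a conservation law for the effective water layer thickness W (generic form only; the PISM implementation adds till storage, cavity opening and closure, and the pressure physics described in the abstract):

```latex
\frac{\partial W}{\partial t} + \nabla \cdot \mathbf{q} = m
```

    where q is the horizontal water flux and m the basal melt/input rate expressed as a water-equivalent thickness per unit time.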

  19. User’s Manual for the Navy Coastal Ocean Model (NCOM) Version 4.0

    Science.gov (United States)

    2009-02-06

Excerpts from the manual: ... a number of aspects of the ocean model run, including the model physics and numerics, the forcing, and the output; modelo - name of model (NCOM1) being...; examples of global tidal data bases are the Grenoble Tidal Data Base (e.g., FES-99 and FES-2004) and the Oregon State University (OSU) tidal data... (Oregon State Tidal Model web page, http://www.oce.orst.edu/po/research/tide/index.html).

  20. Navy Coastal Ocean Model (NCOM) Version 4.0 (User’s Manual)

    Science.gov (United States)

    2009-02-06

Excerpts from the manual: ... a number of aspects of the ocean model run, including the model physics and numerics, the forcing, and the output; modelo - name of model (NCOM1...; examples of global tidal data bases are the Grenoble Tidal Data Base (e.g., FES-99 and FES-2004) and the Oregon State University (OSU) tidal... (Oregon State Tidal Model web page, http://www.oce.orst.edu/po/research/tide/index.html).

  1. Development of a user-friendly interface version of the Salmonella source-attribution model

    DEFF Research Database (Denmark)

    Hald, Tine; Lund, Jan

with a user manual, which is also part of this report. Users of the interface are recommended to read this report before starting to use the interface, to become familiar with the model principles and the mathematics behind it, which is required in order to interpret the model results and assess the validity...

  2. Technical documentation and user's guide for City-County Allocation Model (CCAM). Version 1.0

    International Nuclear Information System (INIS)

    Clark, L.T. Jr.; Scott, M.J.; Hammer, P.

    1986-05-01

The City-County Allocation Model (CCAM) was developed as part of the Monitored Retrievable Storage (MRS) Program. The CCAM model was designed to allocate population changes forecasted by the MASTER model to specific local communities within commuting distance of the MRS facility. The CCAM model was also designed to forecast the potential changes in demand for key community services such as housing, police protection, and utilities for these communities. The CCAM model uses a flexible on-line data base on demand for community services that is based on a combination of local service levels and state and national service standards. The CCAM model can be used to quickly forecast the potential community service consequences of economic development for local communities anywhere in the country. The remainder of this document is organized as follows. The purpose of this manual is to assist the user in understanding and operating the City-County Allocation Model (CCAM). The manual explains the data sources for the model and code modifications as well as the operational procedures
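    The service-demand logic described above amounts to multiplying an allocated population change by per-capita service standards. A toy sketch (the standards and numbers below are invented for illustration, not CCAM's actual database values):

```python
# Hypothetical per-capita demand calculation; rates and population are made up.
per_capita_standard = {"housing_units": 0.38, "police_officers": 0.002}  # per resident
allocated_population_change = 1500        # new residents assigned to one community

extra_demand = {service: rate * allocated_population_change
                for service, rate in per_capita_standard.items()}
print(extra_demand)   # {'housing_units': 570.0, 'police_officers': 3.0}
```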

  3. On-the-fly confluence detection for statistical model checking (extended version)

    NARCIS (Netherlands)

    Hartmanns, Arnd; Timmer, Mark

    Statistical model checking is an analysis method that circumvents the state space explosion problem in model-based verification by combining probabilistic simulation with statistical methods that provide clear error bounds. As a simulation-based technique, it can only provide sound results if the

  4. Hydrogen Macro System Model User Guide, Version 1.2.1

    Energy Technology Data Exchange (ETDEWEB)

    Ruth, M.; Diakov, V.; Sa, T.; Goldsby, M.; Genung, K.; Hoseley, R.; Smith, A.; Yuzugullu, E.

    2009-07-01

    The Hydrogen Macro System Model (MSM) is a simulation tool that links existing and emerging hydrogen-related models to perform rapid, cross-cutting analysis. It allows analysis of the economics, primary energy-source requirements, and emissions of hydrogen production and delivery pathways.

  5. Model Package Report: Central Plateau Vadose Zone Geoframework Version 1.0

    Energy Technology Data Exchange (ETDEWEB)

    Springer, Sarah D.

    2018-03-27

The purpose of the Central Plateau Vadose Zone (CPVZ) Geoframework model (GFM) is to provide a reasonable, consistent, and defensible three-dimensional (3D) representation of the vadose zone beneath the Central Plateau at the Hanford Site to support the Composite Analysis (CA) vadose zone contaminant fate and transport models. The GFM is a 3D representation of the subsurface geologic structure. From this 3D geologic model, exported results in the form of points, surfaces, and/or volumes are used as inputs to populate and assemble the various numerical model architectures, providing a 3D-layered grid that is consistent with the GFM. The objective of this report is to define the process used to produce a hydrostratigraphic model for the vadose zone beneath the Hanford Site Central Plateau and the corresponding CA domain.

  6. PhytoSFDM version 1.0.0: Phytoplankton Size and Functional Diversity Model

    Science.gov (United States)

    Acevedo-Trejos, Esteban; Brandt, Gunnar; Smith, S. Lan; Merico, Agostino

    2016-11-01

    Biodiversity is one of the key mechanisms that facilitate the adaptive response of planktonic communities to a fluctuating environment. How to allow for such a flexible response in marine ecosystem models is, however, not entirely clear. One particular way is to resolve the natural complexity of phytoplankton communities by explicitly incorporating a large number of species or plankton functional types. Alternatively, models of aggregate community properties focus on macroecological quantities such as total biomass, mean trait, and trait variance (or functional trait diversity), thus reducing the observed natural complexity to a few mathematical expressions. We developed the PhytoSFDM modelling tool, which can resolve species discretely and can capture aggregate community properties. The tool also provides a set of methods for treating diversity under realistic oceanographic settings. This model is coded in Python and is distributed as open-source software. PhytoSFDM is implemented in a zero-dimensional physical scheme and can be applied to any location of the global ocean. We show that aggregate community models reduce computational complexity while preserving relevant macroecological features of phytoplankton communities. Compared to species-explicit models, aggregate models are more manageable in terms of number of equations and have faster computational times. Further developments of this tool should address the caveats associated with the assumptions of aggregate community models and about implementations into spatially resolved physical settings (one-dimensional and three-dimensional). With PhytoSFDM we embrace the idea of promoting open-source software and encourage scientists to build on this modelling tool to further improve our understanding of the role that biodiversity plays in shaping marine ecosystems.
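    The aggregate community quantities mentioned above (total biomass, biomass-weighted mean trait, and trait variance as a proxy for functional diversity) reduce to simple moments of the community. A minimal sketch with made-up species values (this is an illustration, not PhytoSFDM's API):

```python
# Aggregate community properties from a discrete species list (values invented).
import numpy as np

biomass = np.array([0.2, 1.1, 0.5, 0.3])      # per-species biomass, e.g. mmol N m^-3
log_size = np.array([-1.0, 0.5, 1.2, 2.0])    # per-species trait, e.g. log cell size

total_biomass = biomass.sum()
mean_trait = np.sum(biomass * log_size) / total_biomass
trait_variance = np.sum(biomass * (log_size - mean_trait) ** 2) / total_biomass

print(total_biomass, mean_trait, trait_variance)
```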

  7. Statistical analysis of fracture data, adapted for modelling Discrete Fracture Networks-Version 2

    International Nuclear Information System (INIS)

    Munier, Raymond

    2004-04-01

The report describes the parameters which are necessary for DFN modelling, the way in which they can be extracted from the data base acquired during site investigations, and their assignment to geometrical objects in the geological model. The purpose here is to present a methodology for use in SKB modelling projects. Though the methodology is deliberately tuned to facilitate subsequent DFN modelling with other tools, some of the recommendations presented here are applicable to other aspects of geo-modelling as well. For instance, we here recommend a nomenclature to be used within SKB modelling projects, which are truly multidisciplinary, to ease communications between scientific disciplines and avoid misunderstanding of common concepts. This report originally appeared as an appendix to a strategy report for geological modelling (SKB-R--03-07). Strategy reports were intended to be successively updated to include experience gained during site investigations and site modelling. Rather than updating the entire strategy report, we chose to present the update of the appendix as a stand-alone document. This document thus replaces Appendix A2 in SKB-R--03-07. In short, the update consists of the following: The target audience has been broadened and, as a consequence, so has the purpose of the document. Correction of errors found in various formulae. All expressions have been rewritten. Inclusion of more worked examples in each section. A new section describing area normalisation. A new section on spatial correlation. A new section describing anisotropy. A new chapter describing the expected output from DFN modelling, within SKB projects

  8. A Scalable Version of the Navy Operational Global Atmospheric Prediction System Spectral Forecast Model

    Directory of Open Access Journals (Sweden)

    Thomas E. Rosmond

    2000-01-01

Full Text Available The Navy Operational Global Atmospheric Prediction System (NOGAPS) includes a state-of-the-art spectral forecast model similar to models run at several major operational numerical weather prediction (NWP) centers around the world. The model, developed by the Naval Research Laboratory (NRL) in Monterey, California, has run operationally at the Fleet Numerical Meteorological and Oceanographic Center (FNMOC) since 1982, and most recently is being run on a Cray C90 in a multi-tasked configuration. Typically the multi-tasked code runs on 10 to 15 processors with overall parallel efficiency of about 90%. The operational resolution is T159L30, but other operational and research applications run at significantly lower resolutions. A scalable NOGAPS forecast model has been developed by NRL in anticipation of a FNMOC C90 replacement in about 2001, as well as for current NOGAPS research requirements to run on DOD High-Performance Computing (HPC) scalable systems. The model is designed to run with message passing (MPI). Model design criteria include bit reproducibility for different processor numbers and reasonably efficient performance on fully shared memory, distributed memory, and distributed shared memory systems for a wide range of model resolutions. Results for a wide range of processor numbers, model resolutions, and different vendor architectures are presented. Single node performance has been disappointing on RISC-based systems, at least compared to vector processor performance. This is a common complaint, and will require careful re-examination of traditional numerical weather prediction (NWP) model software design and data organization to fully exploit future scalable architectures.

  9. Parameterization Improvements and Functional and Structural Advances in Version 4 of the Community Land Model

    Directory of Open Access Journals (Sweden)

    Andrew G. Slater

    2011-05-01

Full Text Available The Community Land Model is the land component of the Community Climate System Model. Here, we describe a broad set of model improvements and additions that have been provided through the CLM development community to create CLM4. The model is extended with a carbon-nitrogen (CN) biogeochemical model that is prognostic with respect to vegetation, litter, and soil carbon and nitrogen states and vegetation phenology. An urban canyon model is added and a transient land cover and land use change (LCLUC) capability, including wood harvest, is introduced, enabling study of historic and future LCLUC on energy, water, momentum, carbon, and nitrogen fluxes. The hydrology scheme is modified with a revised numerical solution of the Richards equation and a revised ground evaporation parameterization that accounts for litter and within-canopy stability. The new snow model incorporates the SNow and Ice Aerosol Radiation model (SNICAR) - which includes aerosol deposition, grain-size dependent snow aging, and vertically-resolved snowpack heating - as well as new snow cover and snow burial fraction parameterizations. The thermal and hydrologic properties of organic soil are accounted for and the ground column is extended to ~50-m depth. Several other minor modifications to the land surface types dataset, grass and crop optical properties, atmospheric forcing height, roughness length and displacement height, and the disposition of snow-capped runoff are also incorporated. Taken together, these augmentations to CLM result in improved soil moisture dynamics, drier soils, and stronger soil moisture variability. The new model also exhibits higher snow cover, cooler soil temperatures in organic-rich soils, greater global river discharge, and lower albedos over forests and grasslands, all of which are improvements compared to CLM3.5. When CLM4 is run with CN, the mean biogeophysical simulation is slightly degraded because the vegetation structure is prognostic rather

  10. Statistical analysis of fracture data, adapted for modelling Discrete Fracture Networks-Version 2

    Energy Technology Data Exchange (ETDEWEB)

    Munier, Raymond

    2004-04-01

The report describes the parameters which are necessary for DFN modelling, the way in which they can be extracted from the data base acquired during site investigations, and their assignment to geometrical objects in the geological model. The purpose here is to present a methodology for use in SKB modelling projects. Though the methodology is deliberately tuned to facilitate subsequent DFN modelling with other tools, some of the recommendations presented here are applicable to other aspects of geo-modelling as well. For instance, we here recommend a nomenclature to be used within SKB modelling projects, which are truly multidisciplinary, to ease communications between scientific disciplines and avoid misunderstanding of common concepts. This report originally appeared as an appendix to a strategy report for geological modelling (SKB-R--03-07). Strategy reports were intended to be successively updated to include experience gained during site investigations and site modelling. Rather than updating the entire strategy report, we chose to present the update of the appendix as a stand-alone document. This document thus replaces Appendix A2 in SKB-R--03-07. In short, the update consists of the following: The target audience has been broadened and, as a consequence, so has the purpose of the document. Correction of errors found in various formulae. All expressions have been rewritten. Inclusion of more worked examples in each section. A new section describing area normalisation. A new section on spatial correlation. A new section describing anisotropy. A new chapter describing the expected output from DFN modelling, within SKB projects.

  11. On a discrete version of the CP 1 sigma model and surfaces immersed in R3

    International Nuclear Information System (INIS)

    Grundland, A M; Levi, D; Martina, L

    2003-01-01

    We present a discretization of the CP 1 sigma model. We show that the discrete CP 1 sigma model is described by a nonlinear partial second-order difference equation with rational nonlinearity. To derive discrete surfaces immersed in three-dimensional Euclidean space a 'complex' lattice is introduced. The so-obtained surfaces are characterized in terms of the quadrilateral cross-ratio of four surface points. In this way we prove that all surfaces associated with the discrete CP 1 sigma model are of constant mean curvature. An explicit example of such discrete surfaces is constructed

  12. Modeled Radar Attenuation Rate Profile at the Vostok 5G Ice Core Site, Antarctica, Version 1

    Data.gov (United States)

    National Aeronautics and Space Administration — This data set provides a modeled radar attenuation rate profile, showing the predicted contributions from pure ice and impurities to radar attenuation at the Vostok...

  13. FMCSA safety program effectiveness measurement : carrier intervention effectiveness model, version 1.0 : [analysis brief].

    Science.gov (United States)

    2015-01-01

The Carrier Intervention Effectiveness Model (CIEM) provides the Federal Motor Carrier Safety Administration (FMCSA) with a tool for measuring the safety benefits of carrier interventions conducted under the Compliance, Safety, Accountability...

  14. The Cloud Feedback Model Intercomparison Project Observational Simulator Package: Version 2

    Science.gov (United States)

    Swales, Dustin J.; Pincus, Robert; Bodas-Salcedo, Alejandro

    2018-01-01

    The Cloud Feedback Model Intercomparison Project Observational Simulator Package (COSP) gathers together a collection of observation proxies or satellite simulators that translate model-simulated cloud properties to synthetic observations as would be obtained by a range of satellite observing systems. This paper introduces COSP2, an evolution focusing on more explicit and consistent separation between host model, coupling infrastructure, and individual observing proxies. Revisions also enhance flexibility by allowing for model-specific representation of sub-grid-scale cloudiness, provide greater clarity by clearly separating tasks, support greater use of shared code and data including shared inputs across simulators, and follow more uniform software standards to simplify implementation across a wide range of platforms. The complete package including a testing suite is freely available.

  15. The Cloud Feedback Model Intercomparison Project Observational Simulator Package: Version 2

    Directory of Open Access Journals (Sweden)

    D. J. Swales

    2018-01-01

    Full Text Available The Cloud Feedback Model Intercomparison Project Observational Simulator Package (COSP gathers together a collection of observation proxies or satellite simulators that translate model-simulated cloud properties to synthetic observations as would be obtained by a range of satellite observing systems. This paper introduces COSP2, an evolution focusing on more explicit and consistent separation between host model, coupling infrastructure, and individual observing proxies. Revisions also enhance flexibility by allowing for model-specific representation of sub-grid-scale cloudiness, provide greater clarity by clearly separating tasks, support greater use of shared code and data including shared inputs across simulators, and follow more uniform software standards to simplify implementation across a wide range of platforms. The complete package including a testing suite is freely available.

  16. FMCSA safety program effectiveness measurement : Carrier Intervention Effectiveness Model, Version 1.1, technical report.

    Science.gov (United States)

    2017-04-01

    The Federal Motor Carrier Safety Administration (FMCSA), in cooperation with the John A. Volpe National Transportation Systems Center (Volpe), has developed a quantitative model to measure the effectiveness of motor carrier interventions in terms of ...

  17. Antarctic 5-km Digital Elevation Model from ERS-1 Altimetry, Version 1

    Data.gov (United States)

    National Aeronautics and Space Administration — This data set provides a Digital Elevation Model (DEM) for Antarctica to 81.5 degrees south latitude, at a resolution of 5 km. Approximately twenty million data...

  18. Technical manual for basic version of the Markov chain nest productivity model (MCnest)

    Science.gov (United States)

    The Markov Chain Nest Productivity Model (or MCnest) integrates existing toxicity information from three standardized avian toxicity tests with information on species life history and the timing of pesticide applications relative to the timing of avian breeding seasons to quantit...

  19. MAPSS: Mapped Atmosphere-Plant-Soil System Model, Version 1.0

    Data.gov (United States)

    National Aeronautics and Space Administration — MAPSS (Mapped Atmosphere-Plant-Soil System) is a landscape to global vegetation distribution model that was developed to simulate the potential biosphere...

  20. MAPSS: Mapped Atmosphere-Plant-Soil System Model, Version 1.0

    Data.gov (United States)

    National Aeronautics and Space Administration — MAPSS (Mapped Atmosphere-Plant-Soil System) is a landscape to global vegetation distribution model that was developed to simulate the potential biosphere impacts and...

  1. Observation Data Model Core Components, its Implementation in the Table Access Protocol Version 1.1

    Science.gov (United States)

    Louys, Mireille; Tody, Doug; Dowler, Patrick; Durand, Daniel; Michel, Laurent; Bonnarel, François; Micol, Alberto; IVOA DataModel Working Group

    2017-05-01

    This document defines the core components of the Observation data model that are necessary to perform data discovery when querying data centers for astronomical observations of interest. It exposes use-cases to be carried out, explains the model and provides guidelines for its implementation as a data access service based on the Table Access Protocol (TAP). It aims at providing a simple model that is easy to understand and implement by data providers that wish to publish their data into the Virtual Observatory. This interface integrates data modeling and data access aspects in a single service and is named ObsTAP. It will be referenced as such in the IVOA registries. In this document, the Observation Data Model Core Components (ObsCoreDM) defines the core components of queryable metadata required for global discovery of observational data. It is meant to allow a single query to be posed to TAP services at multiple sites to perform global data discovery without having to understand the details of the services present at each site. It defines a minimal set of basic metadata and thus allows for a reasonable cost of implementation by data providers. The combination of the ObsCoreDM with TAP is referred to as an ObsTAP service. As with most of the VO Data Models, ObsCoreDM makes use of STC, Utypes, Units and UCDs. The ObsCoreDM can be serialized as a VOTable. ObsCoreDM can make reference to more complete data models such as Characterisation DM, Spectrum DM or Simple Spectral Line Data Model (SSLDM). ObsCore shares a large set of common concepts with the DataSet Metadata Data Model (Cresitello-Dittmar et al. 2016), which binds together most of the data model concepts from the above models in a comprehensive and more general framework. By contrast, the current specification provides guidelines for implementing these concepts using the TAP protocol and answering ADQL queries. It is dedicated to global discovery.
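
    Since ObsTAP is ordinary TAP plus the ObsCore table, discovery queries are plain ADQL posed against ivoa.obscore. A minimal sketch using the pyvo client is shown below; the service URL is a placeholder, and the columns used are the standard ObsCore ones rather than anything specific to this record.

# Requires: pip install pyvo
from pyvo.dal import TAPService

# Placeholder endpoint; any ObsTAP-compliant TAP service exposing ivoa.obscore will do.
service = TAPService("https://example.org/tap")

# ADQL: find calibrated images covering a sky position within 0.1 deg,
# using standard ObsCore columns (dataproduct_type, calib_level, s_ra, s_dec, access_url).
adql = """
SELECT TOP 20 obs_id, obs_collection, dataproduct_type, calib_level, access_url
FROM ivoa.obscore
WHERE dataproduct_type = 'image'
  AND calib_level >= 2
  AND CONTAINS(POINT('ICRS', s_ra, s_dec),
               CIRCLE('ICRS', 83.633, 22.014, 0.1)) = 1
"""

results = service.search(adql)
for row in results:
    print(row["obs_id"], row["access_url"])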

  2. Illustrating and homology modeling the proteins of the Zika virus [version 2; referees: 2 approved

    Directory of Open Access Journals (Sweden)

    Sean Ekins

    2016-09-01

    Full Text Available The Zika virus (ZIKV) is a flavivirus of the family Flaviviridae, which is similar to dengue virus, yellow fever and West Nile virus. Recent outbreaks in South America, Latin America, the Caribbean and in particular Brazil have led to concern for the spread of the disease and its potential to cause Guillain-Barré syndrome and microcephaly. Although ZIKV has been known for over 60 years, there is very little in the way of knowledge of the virus, with few publications and no crystal structures. No antivirals have been tested against it either in vitro or in vivo. ZIKV therefore epitomizes a neglected disease. Several steps have been proposed which could be taken to initiate ZIKV antiviral drug discovery, using both high-throughput screens as well as structure-based design based on homology models for the key proteins. We now describe preliminary homology models created for NS5, FtsJ, NS4B, NS4A, HELICc, DEXDc, peptidase S7, NS2B, NS2A, NS1, E stem, glycoprotein M, propeptide, capsid and glycoprotein E using SWISS-MODEL. Eleven out of 15 models pass our model quality criteria for their further use. While a ZIKV glycoprotein E homology model was initially described in the immature conformation as a trimer, we now describe the mature dimer conformer, which allowed the construction of an illustration of the complete virion. By comparing illustrations of ZIKV based on this new homology model and the dengue virus crystal structure we propose potential differences that could be exploited for antiviral and vaccine design. The prediction of sites for glycosylation on this protein may also be useful in this regard. While we await a cryo-EM structure of ZIKV and eventual crystal structures of the individual proteins, these homology models provide the community with a starting point for structure-based design of drugs and vaccines as well as for computational virtual screening.

  3. A modified version of the Molly rumen model to quantify methane emissions from sheep.

    Science.gov (United States)

    Vetharaniam, I; Vibart, R E; Hanigan, M D; Janssen, P H; Tavendale, M H; Pacheco, D

    2015-07-01

    We modified the rumen submodel of the Molly dairy cow model to simulate the rumen of a sheep and predict its methane emissions. We introduced a rumen hydrogen (H2) pool as a dynamic variable, which (together with the microbial pool in Molly) was used to predict methane production and to facilitate future consideration of thermodynamic control of methanogenesis. The new model corrected a misspecification of the equation for microbial H2 utilization in Molly95, which could potentially give rise to unrealistic predictions under conditions of low intake rates. The new model included a function to correct biases in the estimation of net H2 production based on the default stoichiometric relationships in Molly95, with this function specified in terms of level of intake. Model parameters for H2 and methane production were fitted to experimental data that included fresh temperate forages offered to sheep at a wide range of intake levels and then tested against independent data. The new model provided reasonable estimates relative to the calibration data set, but a different parameterization was needed to improve its predictive ability relative to the validation data set. Our results indicate that, although feedback inhibition on H2 production and methanogen activity increased with feeding level, other feedback effects that vary with diet composition need to be considered in future work on modeling rumen digestion in Molly.
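
    The abstract describes a dynamic rumen H2 pool whose balance drives the predicted methane. The following is a deliberately generic mass-balance sketch of that idea (simple Euler integration, hypothetical rate constants, and the textbook 4 H2 + CO2 -> CH4 + 2 H2O stoichiometry); it is not the Molly equation set.

import numpy as np

def simulate_h2_pool(h2_production, k_use=50.0, dt=0.001, h2_0=0.0005):
    """Toy rumen H2 balance: dH2/dt = production - utilisation by methanogens.

    h2_production : array of H2 production rates [mol/h] at each time step
    k_use         : first-order utilisation rate constant [1/h] (hypothetical)
    Returns the H2 pool trajectory [mol] and cumulative CH4 [mol].
    """
    h2, ch4 = h2_0, 0.0
    pool, methane = [], []
    for prod in h2_production:
        used = k_use * h2            # H2 removed by methanogenesis this step
        h2 += (prod - used) * dt
        ch4 += used * dt / 4.0       # 4 mol H2 -> 1 mol CH4
        pool.append(h2)
        methane.append(ch4)
    return np.array(pool), np.array(methane)

# Example: a feeding pulse of H2 production over 24 h (hypothetical shape).
t = np.arange(0, 24, 0.001)
production = 0.2 + 0.8 * np.exp(-((t - 2.0) / 1.5) ** 2)   # mol/h
pool, methane = simulate_h2_pool(production)
print(f"CH4 after 24 h: {methane[-1]:.2f} mol")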

  4. Geological discrete fracture network model for the Olkiluoto site, Eurajoki, Finland. Version 2.0

    Energy Technology Data Exchange (ETDEWEB)

    Fox, A.; Forchhammer, K.; Pettersson, A. [Golder Associates AB, Stockholm (Sweden); La Pointe, P.; Lim, D-H. [Golder Associates Inc. (Finland)

    2012-06-15

    This report describes the methods, analyses, and conclusions of the modeling team in the production of the 2010 revision to the geological discrete fracture network (DFN) model for the Olkiluoto Site in Finland. The geological DFN is a statistical model for stochastically simulating rock fractures and minor faults at a scale ranging from approximately 0.05 m to approximately 565m; deformation zones are expressly excluded from the DFN model. The DFN model is presented as a series of tables summarizing probability distributions for several parameters necessary for fracture modeling: fracture orientation, fracture size, fracture intensity, and associated spatial constraints. The geological DFN is built from data collected during site characterization (SC) activities at Olkiluoto, which is selected to function as a final deep geological repository for spent fuel and nuclear waste from the Finnish nuclear power program. Data used in the DFN analyses include fracture maps from surface outcrops and trenches, geological and structural data from cored drillholes, and fracture information collected during the construction of the main tunnels and shafts at the ONKALO laboratory. Unlike the initial geological DFN, which was focused on the vicinity of the ONKALO tunnel, the 2010 revisions present a model parameterization for the entire island. Fracture domains are based on the tectonic subdivisions at the site (northern, central, and southern tectonic units) presented in the Geological Site Model (GSM), and are further subdivided along the intersection of major brittle-ductile zones. The rock volume at Olkiluoto is dominated by three distinct fracture sets: subhorizontally-dipping fractures striking north-northeast and dipping to the east that is subparallel to the mean bedrock foliation direction, a subvertically-dipping fracture set striking roughly north-south, and a subvertically-dipping fracture set striking approximately east-west. The subhorizontally-dipping fractures

  5. Geological discrete fracture network model for the Olkiluoto site, Eurajoki, Finland. Version 2.0

    International Nuclear Information System (INIS)

    Fox, A.; Forchhammer, K.; Pettersson, A.; La Pointe, P.; Lim, D-H.

    2012-06-01

    This report describes the methods, analyses, and conclusions of the modeling team in the production of the 2010 revision to the geological discrete fracture network (DFN) model for the Olkiluoto Site in Finland. The geological DFN is a statistical model for stochastically simulating rock fractures and minor faults at a scale ranging from approximately 0.05 m to approximately 565m; deformation zones are expressly excluded from the DFN model. The DFN model is presented as a series of tables summarizing probability distributions for several parameters necessary for fracture modeling: fracture orientation, fracture size, fracture intensity, and associated spatial constraints. The geological DFN is built from data collected during site characterization (SC) activities at Olkiluoto, which is selected to function as a final deep geological repository for spent fuel and nuclear waste from the Finnish nuclear power program. Data used in the DFN analyses include fracture maps from surface outcrops and trenches, geological and structural data from cored drillholes, and fracture information collected during the construction of the main tunnels and shafts at the ONKALO laboratory. Unlike the initial geological DFN, which was focused on the vicinity of the ONKALO tunnel, the 2010 revisions present a model parameterization for the entire island. Fracture domains are based on the tectonic subdivisions at the site (northern, central, and southern tectonic units) presented in the Geological Site Model (GSM), and are further subdivided along the intersection of major brittle-ductile zones. The rock volume at Olkiluoto is dominated by three distinct fracture sets: subhorizontally-dipping fractures striking north-northeast and dipping to the east that is subparallel to the mean bedrock foliation direction, a subvertically-dipping fracture set striking roughly north-south, and a subvertically-dipping fracture set striking approximately east-west. The subhorizontally-dipping fractures

  6. Water, Energy, and Biogeochemical Model (WEBMOD), user’s manual, version 1

    Science.gov (United States)

    Webb, Richard M.T.; Parkhurst, David L.

    2017-02-08

    The Water, Energy, and Biogeochemical Model (WEBMOD) uses the framework of the U.S. Geological Survey (USGS) Modular Modeling System to simulate fluxes of water and solutes through watersheds. WEBMOD divides watersheds into model response units (MRU) where fluxes and reactions are simulated for the following eight hillslope reservoir types: canopy; snowpack; ponding on impervious surfaces; O-horizon; two reservoirs in the unsaturated zone, which represent preferential flow and matrix flow; and two reservoirs in the saturated zone, which also represent preferential flow and matrix flow. The reservoir representing ponding on impervious surfaces, currently not functional (2016), will be implemented once the model is applied to urban areas. MRUs discharge to one or more stream reservoirs that flow to the outlet of the watershed. Hydrologic fluxes in the watershed are simulated by modules derived from the USGS Precipitation Runoff Modeling System; the National Weather Service Hydro-17 snow model; and a topography-driven hydrologic model (TOPMODEL). Modifications to the standard TOPMODEL include the addition of heterogeneous vertical infiltration rates; irrigation; lateral and vertical preferential flows through the unsaturated zone; pipe flow draining the saturated zone; gains and losses to regional aquifer systems; and the option to simulate baseflow discharge by using an exponential, parabolic, or linear decrease in transmissivity. PHREEQC, an aqueous geochemical model, is incorporated to simulate chemical reactions as waters evaporate, mix, and react within the various reservoirs of the model. The reactions that can be specified for a reservoir include equilibrium reactions among water; minerals; surfaces; exchangers; and kinetic reactions such as kinetic mineral dissolution or precipitation, biologically mediated reactions, and radioactive decay. WEBMOD also simulates variations in the concentrations of the stable isotopes deuterium and oxygen-18 as a result of
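
    Among the TOPMODEL options listed above, the exponential decrease in transmissivity with depth leads to the classical exponential baseflow recession. A generic sketch of that relation follows (the standard TOPMODEL form, not WEBMOD's implementation; the parameter values are illustrative only).

import numpy as np

def topmodel_baseflow(storage_deficit, q0=5.0, m=0.03):
    """Classical TOPMODEL baseflow for an exponential transmissivity profile.

    Q_b = q0 * exp(-S / m), where S is the catchment-average saturation deficit [m],
    q0 is the discharge at zero deficit [mm/h] and m is the scaling parameter [m].
    """
    return q0 * np.exp(-np.asarray(storage_deficit) / m)

deficits = np.linspace(0.0, 0.2, 5)          # 0 to 20 cm of saturation deficit
print(topmodel_baseflow(deficits))           # baseflow declines exponentially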

  7. Representing winter wheat in the Community Land Model (version 4.5)

    Science.gov (United States)

    Lu, Yaqiong; Williams, Ian N.; Bagley, Justin E.; Torn, Margaret S.; Kueppers, Lara M.

    2017-05-01

    Winter wheat is a staple crop for global food security, and is the dominant vegetation cover for a significant fraction of Earth's croplands. As such, it plays an important role in carbon cycling and land-atmosphere interactions in these key regions. Accurate simulation of winter wheat growth is not only crucial for future yield prediction under a changing climate, but also for accurately predicting the energy and water cycles for winter wheat dominated regions. We modified the winter wheat model in the Community Land Model (CLM) to better simulate winter wheat leaf area index, latent heat flux, net ecosystem exchange of CO2, and grain yield. These modifications included schemes to represent vernalization as well as frost tolerance and damage. We calibrated three key parameters (minimum planting temperature, maximum crop growth days, and initial value of the leaf carbon allocation coefficient) and modified the grain carbon allocation algorithm for simulations at the US Southern Great Plains ARM site (US-ARM), and validated the model performance at eight additional sites across North America. We found that the new winter wheat model improved the prediction of monthly variation in leaf area index and reduced the root mean square error (RMSE) of latent heat flux and net ecosystem exchange by 41 and 35 %, respectively, during the spring growing season. The model accurately simulated the interannual variation in yield at the US-ARM site, but underestimated yield by 35 % at sites and in regions (northwestern and southeastern US) with historically greater yields.

  8. Geological discrete-fracture network model (version 1) for the Olkiluoto site, Finland

    International Nuclear Information System (INIS)

    Fox, A.; Buoro, A.; Dahlbo, K.; Wiren, L.

    2009-10-01

    This report describes the methods, analyses, and conclusions of the modelling team in the production of a discrete-fracture network (DFN) model for the Olkiluoto Site in Finland. The geological DFN is a statistical model for stochastically simulating rock fractures and minor faults at a scale ranging from approximately 0.05 m to approximately 500 m; an upper scale limit is not expressly defined, but the DFN model explicitly excludes structures at deformation-zone scales (∼ 500 m) and larger. The DFN model is presented as a series of tables summarizing probability distributions for several parameters necessary for fracture modelling: fracture orientation, fracture size, fracture intensity, and associated spatial constraints. The geological DFN is built from data collected during site characterization (SC) activities at Olkiluoto, which is currently planned to function as a final deep geological repository for spent fuel and nuclear waste from the Finnish nuclear power program. Data used in the DFN analyses include fracture maps from surface outcrops and trenches (as of July 2007), geological and structural data from cored boreholes (as of July 2007), and fracture information collected during the construction of the main tunnels and shafts at the ONKALO laboratory (January 2008). The modelling results suggest that the rock volume at Olkiluoto surrounding the ONKALO tunnel can be separated into three distinct volumes (fracture domains): an upper block, an intermediate block, and a lower block. The three fracture domains are bounded horizontally and vertically by large deformation zones. Fracture properties, such as fracture orientation and relative orientation set intensity, vary between fracture domains. The rock volume at Olkiluoto is dominated by three distinct fracture sets: subhorizontally-dipping fractures striking north-northeast and dipping to the east, a subvertically-dipping fracture set striking roughly north-south, and a subvertically-dipping fracture set

  9. PCR-GLOBWB version 2.0: A High Resolution Integrated Global Hydrology and Water Resources Model

    Science.gov (United States)

    Sutanudjaja, E.; Van Beek, L. P.; Drost, N.; de Graaf, I. E. M.; de Jong, K.; Straatsma, M. W.; Wada, Y.; Wisser, D.; Bierkens, M. F.

    2014-12-01

    PCRaster GLOBal Water Balance (PCR-GLOBWB) is a grid-based global hydrological model developed at Utrecht University. It simulates soil moisture in vertically stacked soil layers, as well as exchange with the atmosphere and the underlying groundwater reservoir. Fluxes are simulated under different land cover types by considering sub-grid variations in topography, vegetation phenology and soil properties. The model includes physically-based schemes for runoff generation and infiltration, resulting in direct runoff, interflow, groundwater recharge and baseflow, as well as channel routing. We present the latest version of the model, PCR-GLOBWB 2.0, consolidating all new developments introduced since PCR-GLOBWB 1.0 was first published (van Beek et al, 2011). The main new components are: the inclusion of a water demand module and the progressive introduction of reservoirs and expansion of irrigation areas (Wada et al, 2014); an attribution of water use to ground- and surface water resources and the fate of return flow (de Graaf et al, 2014); and a routing scheme accounting for the variable extent of floodplains (Winsemius et al, 2013). PCR-GLOBWB 2.0 now runs at a spatial resolution of 5 arc min (± 10 km) in comparison to the 30 arc min (50 km) resolution used in PCR-GLOBWB 1.0. At the finer resolution and with the added components, PCR-GLOBWB 2.0 shows improvements over the previous version: observed discharges from 5142 GRDC stations can be approximated more closely and model efficiency improves, particularly for smaller catchment areas (ρ = 0.87); human impacts, altering the seasonal and inter-annual variation of terrestrial water storage, are well simulated and evident in the validation against GRACE data (ρ = 0.81). These improvements open up new possibilities to assess the state of global water resources. Also, we show an outlook of model results at higher resolutions: 3 arc min (5 km) and 30 arc sec (1 km) for specific test-bed areas: California, Illinois and Rhine-Meuse. We discuss fundamental

  10. The NASA Marshall Space Flight Center Earth Global Reference Atmospheric Model-2010 Version

    Science.gov (United States)

    Leslie, F. W.; Justus, C. G.

    2011-01-01

    Reference or standard atmospheric models have long been used for design and mission planning of various aerospace systems. The NASA Marshall Space Flight Center Global Reference Atmospheric Model was developed in response to the need for a design reference atmosphere that provides complete global geographical variability and complete altitude coverage (surface to orbital altitudes), as well as complete seasonal and monthly variability of the thermodynamic variables and wind components. In addition to providing the geographical, height, and monthly variation of the mean atmospheric state, it includes the ability to simulate spatial and temporal perturbations.

  11. The SF-8 Spanish Version for Health-Related Quality of Life Assessment: Psychometric Study with IRT and CFA Models.

    Science.gov (United States)

    Tomás, José M; Galiana, Laura; Fernández, Irene

    2018-03-22

    The aim of the current research is to analyze the psychometric properties of the Spanish version of the SF-8, overcoming previous shortcomings. A double line of analyses was used: competitive structural equation models to establish factorial validity, and item response theory to analyze item psychometric characteristics and information. 593 people aged 60 years or older, attending lifelong learning programs at the University, were surveyed. Their age ranged from 60 to 92 years old; 67.6% were women. The survey included scales on personality dimensions, attitudes, perceptions, and behaviors related to aging. Competitive confirmatory models pointed to two factors (physical and mental health) as the best representation of the data: χ²(13) = 72.37 (p < .01); CFI = .99; TLI = .98; RMSEA = .08 (.06, .10). Item 5 was removed because of unreliability and cross-loading. Graded response models showed appropriate fit, under a two-parameter logistic specification, for both the physical and the mental dimensions. Item information curves and test information functions pointed out that the SF-8 was more informative for low levels of health. The Spanish SF-8 has adequate psychometric properties and is better represented by two dimensions once Item 5 is removed. Gathering evidence on patient-reported outcome measures is of crucial importance, as this type of measurement instrument is increasingly used in the clinical arena.
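
    For readers unfamiliar with the two-parameter logistic machinery referred to above, a minimal sketch of the 2PL item characteristic curve and its item information is given below (standard IRT formulas; the discrimination and difficulty values are invented for illustration, not the SF-8 estimates).

import numpy as np

def p_2pl(theta, a, b):
    """Two-parameter logistic probability of endorsing an item."""
    return 1.0 / (1.0 + np.exp(-a * (theta - b)))

def item_information(theta, a, b):
    """Fisher information of a 2PL item: I(theta) = a^2 * P * (1 - P)."""
    p = p_2pl(theta, a, b)
    return a**2 * p * (1.0 - p)

theta = np.linspace(-3, 3, 7)
# Hypothetical item with high discrimination located at low health levels,
# consistent with a scale that is most informative for poor health.
print(item_information(theta, a=2.0, b=-1.0))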

  12. A modified version of the SMAR model for estimating root-zone soil ...

    African Journals Online (AJOL)

    Previous studies have proved the effectiveness of SMAR in estimating root-zone soil moisture, yet there is still room for improvement in its application. For example, the soil water loss function (i.e. deep percolation and evapotranspiration), assumed to be a linear function in the SMAR model, may produce approximations in ...

  13. (ML)2: a formal language for KADS models of expertise (short version)

    NARCIS (Netherlands)

    Harmelen, van F.A.H.; Balder, J.

    1992-01-01

    We present (ML)2, a formal language for the representation of KADS models of expertise. (ML)2 is a combination of first order predicate logic (for the declarative representation of domain knowledge), meta-logic (for the representation of how to use the domain knowledge) and dynamic logic (for the

  14. Landfill Gas Energy Cost Model Version 3.0 (LFGcost-Web V3.0)

    Science.gov (United States)

    To help stakeholders estimate the costs of a landfill gas (LFG) energy project, in 2002, LMOP developed a cost tool (LFGcost). Since then, LMOP has routinely updated the tool to reflect changes in the LFG energy industry. Initially the model was designed for EPA to assist landfil...

  15. Unit testing, model validation, and biological simulation [version 1; referees: 2 approved, 1 approved with reservations

    Directory of Open Access Journals (Sweden)

    Gopal P. Sarma

    2016-08-01

    Full Text Available The growth of the software industry has gone hand in hand with the development of tools and cultural practices for ensuring the reliability of complex pieces of software. These tools and practices are now acknowledged to be essential to the management of modern software. As computational models and methods have become increasingly common in the biological sciences, it is important to examine how these practices can accelerate biological software development and improve research quality. In this article, we give a focused case study of our experience with the practices of unit testing and test-driven development in OpenWorm, an open-science project aimed at modeling Caenorhabditis elegans. We identify and discuss the challenges of incorporating test-driven development into a heterogeneous, data-driven project, as well as the role of model validation tests, a category of tests unique to software which expresses scientific models.
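
    The distinction drawn between ordinary unit tests and model validation tests can be made concrete with a small example. The sketch below is generic Python unittest code; the simulated membrane-potential function and the tolerance are hypothetical stand-ins, not OpenWorm code.

import unittest

def simulate_resting_potential():
    """Stand-in for a model run; a real project would call its simulator here."""
    return -68.0  # mV, hypothetical model output

class TestNeuronModelValidation(unittest.TestCase):
    def test_resting_potential_within_physiological_range(self):
        """Model validation test: compare model output against reference data."""
        observed_mean, tolerance = -70.0, 5.0   # mV, illustrative reference values
        self.assertAlmostEqual(simulate_resting_potential(), observed_mean,
                               delta=tolerance)

    def test_simulation_returns_a_number(self):
        """Ordinary unit test: checks the software contract, not the science."""
        self.assertIsInstance(simulate_resting_potential(), float)

if __name__ == "__main__":
    unittest.main()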

  16. SITE-94. The CRYSTAL Geosphere Transport Model: Technical documentation version 2.1

    International Nuclear Information System (INIS)

    Worgan, K.; Robinson, P.

    1995-12-01

    CRYSTAL, a one-dimensional contaminant transport model of a densely fissured geosphere, was originally developed for the SKI Project-90 performance assessment program. It has since been extended to include matrix blocks of alternative basic geometries. CRYSTAL predicts the transport of arbitrary-length decay chains by advection, diffusion and surface sorption in the fissures, and diffusion into the rock matrix blocks. The model equations are solved in Laplace transform space and inverted numerically to the time domain. This approach avoids time-stepping and is consequently numerically very efficient. The source term for CRYSTAL may be supplied internally, using either simple leaching or band release submodels, or by input of a general time-series output from a near-field model. The time-series input is interfaced with the geosphere model using the method of convolution. The response of the geosphere to delta-function inputs from each nuclide is combined with the time-series outputs from the near-field to obtain the nuclide flux emerging from the far-field. 14 refs
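
    The convolution step described above can be illustrated numerically: the far-field flux is the discrete convolution of the near-field release time series with the geosphere's unit (delta-function) response. The unit response used here is an arbitrary placeholder, not CRYSTAL's Laplace-space solution.

import numpy as np

dt = 1.0                                   # yr per time step
t = np.arange(0, 2000.0, dt)

# Near-field release rate [mol/yr]: a simple band release over 500 yr (illustrative).
near_field = np.where(t < 500.0, 1.0e-3, 0.0)

# Geosphere unit response to a delta input (placeholder: delayed, dispersed pulse).
tau, spread = 300.0, 80.0
unit_response = np.exp(-0.5 * ((t - tau) / spread) ** 2)
unit_response /= unit_response.sum() * dt  # normalise so mass is conserved

# Far-field flux = convolution of the near-field source with the unit response.
far_field = np.convolve(near_field, unit_response)[: len(t)] * dt

print(f"released: {near_field.sum() * dt:.3f} mol, "
      f"arrived by t={t[-1]:.0f} yr: {far_field.sum() * dt:.3f} mol")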

  17. LANDFILL GAS EMISSIONS MODEL (LANDGEM) VERSION 3.02 USER'S GUIDE

    Science.gov (United States)

    The Landfill Gas Emissions Model (LandGEM) is an automated estimation tool with a Microsoft Excel interface that can be used to estimate emission rates for total landfill gas, methane, carbon dioxide, nonmethane organic compounds, and individual air pollutants from municipal soli...

  18. ALICE-87 (Livermore). Precompound Nuclear Model Code. Version for Personal Computer IBM/AT

    International Nuclear Information System (INIS)

    Blann, M.

    1988-05-01

    The precompound nuclear model code ALICE-87 from the Lawrence Livermore National Laboratory (USA) was implemented for use on a personal computer. It is available on a set of high-density diskettes from the Nuclear Energy Agency Data Bank (Saclay) and the IAEA Nuclear Data Section. (author). Refs and figs

  19. User’s Manual for the Defense Priority Model Version 2.0 Revision

    Science.gov (United States)

    1989-01-01

    Abstract not available. The record text consists of fragmentary excerpts from the manual, including a list of beneficial water-use categories (propagation, fishing, irrigation of food-chain crops, water supply for meat or dairy livestock, water supply for food processing, drinking water source) and a citation to ORNL-5869, Development of Predictive Models for Xenobiotic Bioaccumulation in Terrestrial Ecosystems, Oak Ridge National Laboratory, Oak Ridge, Tennessee.

  20. Preliminary site description: Groundwater flow simulations. Simpevarp area (version 1.1) modelled with CONNECTFLOW

    International Nuclear Information System (INIS)

    Hartley, Lee; Worth, David; Gylling, Bjoern; Marsic, Niko; Holmen, Johan

    2004-08-01

    The main objective of this study is to assess the role of known and unknown hydrogeological conditions for the present-day distribution of saline groundwater at the Simpevarp and Laxemar sites. An improved understanding of the paleo-hydrogeology is necessary in order to gain credibility for the Site Descriptive Model in general and the Site Hydrogeological Description in particular. This is to serve as a basis for describing the present hydrogeological conditions as well as predictions of future hydrogeological conditions. This objective implies a testing of: geometrical alternatives in the structural geology and bedrock fracturing, variants in the initial and boundary conditions, and parameter uncertainties (i.e. uncertainties in the hydraulic property assignment). This testing is necessary in order to evaluate the impact of the specified components on the groundwater flow field and to promote proposals for further investigations of the hydrogeological conditions at the site. The general methodology for modelling transient salt transport and groundwater flow using CONNECTFLOW that was developed for Forsmark has also been applied successfully for Simpevarp. Because of time constraints, only a key set of variants was performed, focussing on the influences of DFN model parameters, the kinematic porosity, and the initial condition. The salinity data in deep boreholes available at the time of the project were too limited to allow a good calibration exercise. However, the model predictions are compared below with the available data from KLX01 and KLX02. Once more salinity data are available it may be possible to draw more definite conclusions based on the differences between variants. At the moment, though, the differences should be used only to understand the sensitivity of the models to various input parameters

  1. Business models for renewable energy in the built environment. Updated version

    Energy Technology Data Exchange (ETDEWEB)

    Wuertenberger, L.; Menkveld, M.; Vethman, P.; Van Tilburg, X. [ECN Policy Studies, Amsterdam (Netherlands); Bleyl, J.W. [Energetic Solutions, Graz (Austria)

    2012-04-15

    The project RE-BIZZ aims to provide insight to policy makers and market actors in the way new and innovative business models (and/or policy measures) can stimulate the deployment of renewable energy technologies (RET) and energy efficiency (EE) measures in the built environment. The project is initiated and funded by the IEA Implementing Agreement for Renewable Energy Technology Deployment (IEA-RETD). It analysed ten business models in three categories (among others: different types of Energy Service Companies (ESCOs), developing properties certified with a 'green' building label, building owners profiting from rent increases after EE measures, Property Assessed Clean Energy (PACE) financing, on-bill financing, and leasing of RET equipment), including their organisational and financial structure, the existing market and policy context, and an analysis of Strengths, Weaknesses, Opportunities and Threats (SWOT). The study concludes with recommendations for policy makers and other market actors.

  2. Modelling turbulent vertical mixing sensitivity using a 1-D version of NEMO

    Science.gov (United States)

    Reffray, G.; Bourdalle-Badie, R.; Calone, C.

    2015-01-01

    Through two numerical experiments, a 1-D vertical model called NEMO1D was used to investigate physical and numerical turbulent-mixing behaviour. The results show that all the turbulent closures tested (k+l from Blanke and Delecluse, 1993, and two-equation models: generic length scale closures from Umlauf and Burchard, 2003) are able to correctly reproduce the classical test of Kato and Phillips (1969) under favourable numerical conditions, while some solutions may diverge depending on the degradation of the spatial and time discretization. The performances of the turbulence models were then compared with data measured over a 1-year period (mid-2010 to mid-2011) at the PAPA station, located in the North Pacific Ocean. The modelled temperature and salinity were in good agreement with the observations, with a maximum temperature error between -2 and 2 °C during the stratified period (June to October). However, the results also depend on the numerical conditions. The vertical RMSE varied, for different turbulent closures, from 0.1 to 0.3 °C during the stratified period and from 0.03 to 0.15 °C during the homogeneous period. This 1-D configuration at the PAPA station (called PAPA1D) is now available in NEMO as a reference configuration, including the input files and atmospheric forcing set described in this paper. Thus, all the results described can be recovered by downloading and launching PAPA1D. The configuration is described on the NEMO site (http://www.nemo-ocean.eu/Using-NEMO/Configurations/C1D_PAPA). This package is a good starting point for further investigation of vertical processes.

  3. Ion temperature in the outer ionosphere - first version of a global empirical model

    Czech Academy of Sciences Publication Activity Database

    Třísková, Ludmila; Truhlík, Vladimír; Šmilauer, Jan; Smirnova, N. F.

    2004-01-01

    Vol. 34, No. 9 (2004), p. 1998-2003. ISSN 0273-1177. R&D Projects: GA ČR GP205/02/P037; GA AV ČR IAA3042201; GA MŠk ME 651. Institutional research plan: CEZ:AV0Z3042911. Keywords: plasma temperatures * topside ionosphere * empirical models. Subject RIV: DG - Atmosphere Sciences, Meteorology. Impact factor: 0.548, year: 2004

  4. The Everglades Depth Estimation Network (EDEN) surface-water model, version 2

    Science.gov (United States)

    Telis, Pamela A.; Xie, Zhixiao; Liu, Zhongwei; Li, Yingru; Conrads, Paul

    2015-01-01

    The Everglades Depth Estimation Network (EDEN) is an integrated network of water-level gages, interpolation models that generate daily water-level and water-depth data, and applications that compute derived hydrologic data across the freshwater part of the greater Everglades landscape. The U.S. Geological Survey Greater Everglades Priority Ecosystems Science provides support for EDEN in order for EDEN to provide quality-assured monitoring data for the U.S. Army Corps of Engineers Comprehensive Everglades Restoration Plan.

  5. Uncorrelated Encounter Model of the National Airspace System, Version 2.0

    Science.gov (United States)

    2013-08-19

    Abstract not available. The record text consists of fragmentary excerpts from the report, noting that encounters of sufficient fidelity between two IFR aircraft in oceanic airspace cannot be observed in the available data, that a sufficient number of encounters between instrument flight rules (IFR) and non-IFR traffic beyond 12 NM from the shore is not observed, and remnants of Table 1, Encounter model categories (aircraft of interest, intruder aircraft, location, flight rule; IFR, VFR, noncooperative, conventional).

  6. Two modified versions of the speciation code PHREEQE for modelling macromolecule-proton/cation interaction

    International Nuclear Information System (INIS)

    Falck, W.E.

    1991-01-01

    There is a growing need to consider the influence of organic macromolecules on the speciation of ions in natural waters. It is recognized that a simple discrete ligand approach to the binding of protons/cations to organic macromolecules is not appropriate to represent heterogeneities of binding site distributions. A more realistic approach has been incorporated into the speciation code PHREEQE which retains the discrete ligand approach but modifies the binding intensities using an electrostatic (surface complexation) model. To allow for different conformations of natural organic material two alternative concepts have been incorporated: it is assumed that (a) the organic molecules form rigid, impenetrable spheres, and (b) the organic molecules form flat surfaces. The former concept will be more appropriate for molecules in the smaller size range, while the latter will be more representative for larger size molecules or organic surface coatings. The theoretical concept is discussed and the relevant changes to the standard PHREEQE code are explained. The modified codes are called PHREEQEO-RS and PHREEQEO-FS for the rigid-sphere and flat-surface models respectively. Improved output facilities for data transfer to other computers, e.g. the Macintosh, are introduced. Examples where the model is tested against literature data are shown and practical problems are discussed. Appendices contain listings of the modified subroutines GAMMA and PTOT, an example input file and an example command procedure to run the codes on VAX computers
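
    The electrostatic modification described here follows the general surface-complexation idea that apparent binding constants are scaled by a Boltzmann factor depending on the surface potential. A generic sketch of that correction is given below (standard expressions with illustrative values; the geometry-specific charge-potential relations of the rigid-sphere and flat-surface variants are not reproduced).

import numpy as np

F = 96485.33      # Faraday constant [C/mol]
R = 8.314         # gas constant [J/mol/K]

def apparent_log_k(log_k_intrinsic, z, psi, temperature=298.15):
    """Apparent binding constant corrected for surface potential psi [V].

    log K_app = log K_int - z * F * psi / (ln(10) * R * T),
    where z is the charge transferred to the charged macromolecule surface.
    """
    return log_k_intrinsic - z * F * psi / (np.log(10) * R * temperature)

# Example: a proton (z = +1) binding to a negatively charged organic surface.
for psi in (0.00, -0.05, -0.10):                   # surface potentials [V]
    print(psi, apparent_log_k(log_k_intrinsic=4.0, z=1, psi=psi))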

  7. Updates to the Demographic and Spatial Allocation Models to Produce Integrated Climate and Land Use Scenarios (ICLUS) (Version 2) (External Review Draft)

    Science.gov (United States)

    EPA announced the availability of the draft report, Updates to the Demographic and Spatial Allocation Models to Produce Integrated Climate and Land Use Scenarios (ICLUS) for a 30-day public comment period. The ICLUS version 2 (v2) modeling tool furthered land change mod...

  8. Regional hydrogeological simulations. Numerical modelling using ConnectFlow. Preliminary site description Simpevarp sub area - version 1.2

    Energy Technology Data Exchange (ETDEWEB)

    Hartley, Lee; Hoch, Andrew; Hunter, Fiona; Jackson, Peter [Serco Assurance, Risley (United Kingdom); Marsic, Niko [Kemakta Konsult, Stockholm (Sweden)

    2005-02-01

    The main objective of this study is to support the development of a preliminary Site Description of the Simpevarp area on a regional scale, based on the available data of August 2004 (Data Freeze S1.2) and the previous Site Description. A more specific objective of this study is to assess the role of known and unknown hydrogeological conditions for the present-day distribution of saline groundwater in the Simpevarp area on a regional scale. An improved understanding of the paleo-hydrogeology is necessary in order to gain credibility for the Site Description in general and the hydrogeological description in particular. This is to serve as a basis for describing the present hydrogeological conditions on a local scale as well as predictions of future hydrogeological conditions. Other key objectives were to identify the model domain required to simulate regional flow and solute transport at the Simpevarp area and to incorporate a new geological model of the deformation zones produced for Version S1.2. Another difference with Version S1.1 is the increased effort invested in conditioning the hydrogeological property models to the fracture boremap and hydraulic data. A new methodology was developed for interpreting the discrete fracture network (DFN) by integrating the geological description of the DFN (GeoDFN) with the hydraulic test data from Posiva Flow-Log and Pipe-String System double-packer techniques to produce a conditioned Hydro-DFN model. This was done in a systematic way that addressed uncertainties associated with the assumptions made in interpreting the data, such as the relationship between fracture transmissivity and length. Consistent hydraulic data were only available for three boreholes, and therefore only relatively simplistic models were proposed, as there is not sufficient data to justify extrapolating the DFN away from the boreholes based on rock domain, for example. Significantly, a far greater quantity of hydro-geochemical data was available for calibration in the

  9. The operational eEMEP model version 10.4 for volcanic SO2 and ash forecasting

    Science.gov (United States)

    Steensen, Birthe M.; Schulz, Michael; Wind, Peter; Valdebenito, Álvaro M.; Fagerli, Hilde

    2017-05-01

    This paper presents a new version of the EMEP MSC-W model called eEMEP developed for transportation and dispersion of volcanic emissions, both gases and ash. EMEP MSC-W is usually applied to study problems with air pollution and aerosol transport and requires some adaptation to treat volcanic eruption sources and effluent dispersion. The operational set-up of model simulations in case of a volcanic eruption is described. Important choices have to be made to achieve CPU efficiency so that emergency situations can be tackled in time, answering relevant questions of ash advisory authorities. An efficient model needs to balance the complexity of the model and resolution. We have investigated here a meteorological uncertainty component of the volcanic cloud forecast by using a consistent ensemble meteorological dataset (GLAMEPS forecast) at three resolutions for the case of SO2 emissions from the 2014 Barðarbunga eruption. The low resolution (40 × 40 km) ensemble members show larger agreement in plume position and intensity, suggesting that the ensemble here does not give much added value. To compare the dispersion at different resolutions, we compute the area where the column load of the volcanic tracer, here SO2, is above a certain threshold, varied for testing purposes between 0.25 and 50 Dobson units. The increased numerical diffusion causes a larger area (+34 %) to be covered by the volcanic tracer in the low resolution simulations than in the high resolution ones. The higher resolution (10 × 10 km) ensemble members show higher column loads farther away from the volcanic eruption site in narrower clouds. Cloud positions are more varied between the high resolution members, and the cloud forms resemble the observed clouds more than the low resolution ones. For a volcanic emergency case this means that to obtain quickly results of the transport of volcanic emissions, an individual simulation with our low resolution is sufficient; however, to forecast peak
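
    The diagnostic used above to compare resolutions, the area over which the column load exceeds a threshold, is straightforward to compute from any gridded field. A generic sketch follows (synthetic data and uniform cell areas; this is not the eEMEP post-processing code).

import numpy as np

def area_above_threshold(column_load_du, cell_area_km2, threshold_du):
    """Total area [km2] where the SO2 column load exceeds a Dobson-unit threshold."""
    mask = column_load_du > threshold_du
    return float(np.sum(cell_area_km2 * mask))

# Synthetic example: a 100 x 100 grid of 10 km x 10 km cells with a Gaussian plume.
y, x = np.mgrid[0:100, 0:100]
column_load = 20.0 * np.exp(-((x - 60) ** 2 + (y - 40) ** 2) / (2 * 15.0**2))  # DU
cell_area = np.full_like(column_load, 100.0)   # km2 per 10 km x 10 km cell

for thr in (0.25, 2.0, 10.0):
    print(f"> {thr} DU: {area_above_threshold(column_load, cell_area, thr):,.0f} km2")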

  10. Verification and Validation of Encapsulation Flow Models in GOMA, Version 1.1; TOPICAL

    International Nuclear Information System (INIS)

    MONDY, LISA ANN; RAO, REKHA R.; SCHUNK, P. RANDALL; SACKINGER, PHILIP A.; ADOLF, DOUGLAS B.

    2001-01-01

    Encapsulation is a common process used in manufacturing most non-nuclear components including: firing sets, neutron generators, trajectory sensing signal generators (TSSGs), arming, fusing and firing devices (AF and Fs), radars, programmers, connectors, and batteries. Encapsulation is used to contain high voltage, to mitigate stress and vibration and to protect against moisture. The purpose of the ASCI Encapsulation project is to develop a simulation capability that will allow us to aid in the encapsulation design process, especially for neutron generators. The introduction of an encapsulant poses many problems because of the need to balance ease of processing and properties necessary to achieve the design benefits such as tailored encapsulant properties, optimized cure schedule and reduced failure rates. Encapsulants can fail through fracture or delamination as a result of cure shrinkage, thermally induced residual stresses, voids or incomplete component embedding and particle gradients. Manufacturing design requirements include (1) maintaining uniform composition of particles in order to maintain the desired thermal coefficient of expansion (CTE) and density, (2) mitigating void formation during mold fill, (3) mitigating cure and thermally induced stresses during cure and cool down, and (4) eliminating delamination and fracture due to cure shrinkage/thermal strains. The first two require modeling of the fluid phase, and it is proposed to use the finite element code GOMA to accomplish this. The latter two require modeling of the solid state; however, ideally the effects of particle distribution would be included in the calculations, and thus initial conditions would be set from GOMA predictions. These models, once they are verified and validated, will be transitioned into the SIERRA framework and the ARIA code. This will facilitate exchange of data with the solid mechanics calculations in SIERRA/ADAGIO

  11. ITS Version 3.0: Powerful, user-friendly software for radiation modelling

    International Nuclear Information System (INIS)

    Kensek, R.P.; Halbleib, J.A.; Valdez, G.D.

    1993-01-01

    ITS (the Integrated Tiger Series) is a powerful but user-friendly software package permitting state-of-the-art modelling of electron and/or photon radiation effects. The programs provide Monte Carlo solution of linear time-independent coupled electron/photon radiation transport problems, with or without the presence of macroscopic electric and magnetic fields. The ITS system combines operational simplicity and physical accuracy in order to provide experimentalists and theorists alike with a method for the routine but rigorous solution of sophisticated radiation transport problems

  12. RadCon: A radiological consequences model. Technical guide - Version 2.0

    International Nuclear Information System (INIS)

    Crawford, J; Domel, R.U.; Harris, F.F.; Twining, J.R.

    2000-05-01

    A Radiological Consequence model (RadCon) is being developed at ANSTO to assess the radiological consequences after an incident, in any climate, using appropriate meteorological and radiological transfer parameters. The major areas of interest to the developers are tropical and subtropical climates, particularly since nuclear energy is anticipated to become a mainstay for economies in these regions within the foreseeable future. Therefore, data acquisition and the use of parameter values have been concentrated primarily on these climate types. Atmospheric dispersion and deposition for Australia can be modelled and supplied by the Regional Specialised Meteorological Centre (RSMC, one of five in the world), which is part of the Bureau of Meteorology Research Centre (BMRC), Puri et al. (1992). RadCon combines these data (i.e. the time-dependent air and ground concentration generated by the dispersion model, or measured quantities in the case of an actual incident) with specific regional parameter values to determine the dose to people via the major pathways of external and internal irradiation. For the external irradiation calculations, data are needed on lifestyle information such as the time spent indoors/outdoors, the high/low physical activity rates for different groups of people (especially critical groups) and shielding factors for housing types. For the internal irradiation calculations, data are needed on food consumption, the effect of food processing, transfer parameters (soil to plant, plant to animal) and interception values appropriate for the region under study. Where the relevant data are not available, default temperate data are currently used. The results of a wide-ranging literature search have highlighted where specific research will be initiated to determine the information required for tropical and sub-tropical regions. The user is able to initiate sensitivity analyses within RadCon. This allows the parameters to be ranked in

  13. Implementing and Evaluating Variable Soil Thickness in the Community Land Model, Version 4.5 (CLM4.5)

    Energy Technology Data Exchange (ETDEWEB)

    Brunke, Michael A.; Broxton, Patrick; Pelletier, Jon; Gochis, David; Hazenberg, Pieter; Lawrence, David M.; Leung, L. Ruby; Niu, Guo-Yue; Troch, Peter A.; Zeng, Xubin

    2016-05-01

    One of the recognized weaknesses of land surface models as used in weather and climate models is the assumption of constant soil thickness, due to the lack of global estimates of bedrock depth. Using a 30 arcsecond global dataset for the thickness of relatively porous, unconsolidated sediments over bedrock, spatial variation in soil thickness is included here in version 4.5 of the Community Land Model (CLM4.5). The number of soil layers for each grid cell is determined from the average soil depth for each 0.9° latitude × 1.25° longitude grid cell. Including variable soil thickness affects the simulations most in regions with shallow bedrock, corresponding predominantly to areas of mountainous terrain. The greatest changes are to baseflow, with the annual minimum generally occurring earlier, while smaller changes are seen in surface fluxes like latent heat flux and surface runoff, in which only the annual cycle amplitude is increased. These changes are tied to soil moisture changes, which are most substantial in locations with shallow bedrock. Total water storage (TWS) anomalies do not change much over most river basins around the globe, since most basins contain mostly deep soils. However, it was found that TWS anomalies differ substantially for a river basin with more mountainous terrain. Additionally, the annual cycle in soil temperature is affected by including realistic soil thicknesses, due to changes in heat capacity and thermal conductivity.

  14. Model for Analysis of the Energy Demand (MAED) users' manual for version MAED-1

    International Nuclear Information System (INIS)

    1986-09-01

    This manual is organized in two major parts. The first part includes eight main sections describing how to use the MAED-1 computer program and the second one consists of five appendices giving some additional information about the program. Concerning the main sections of the manual, Section 1 gives a summary description and some background information about the MAED-1 model. Section 2 extends the description of the MAED-1 model in more detail. Section 3 introduces some concepts, mainly related to the computer requirements imposed by the program, that are used throughout this document. Sections 4 to 7 describe how to execute each of the various programs (or modules) of the MAED-1 package. The description for each module shows the user how to prepare the control and data cards needed to execute the module and how to interpret the printed output produced. Section 8 recapitulates about the use of MAED-1 for carrying out energy and electricity planning studies, describes the several phases normally involved in this type of study and provides the user with practical hints about the most important aspects that need to be verified at each phase while executing the various MAED modules

  15. Creating, generating and comparing random network models with NetworkRandomizer [version 3; referees: 2 approved

    Directory of Open Access Journals (Sweden)

    Gabriele Tosadori

    2017-11-01

    Full Text Available Biological networks are becoming a fundamental tool for the investigation of high-throughput data in several fields of biology and biotechnology. With the increasing amount of information, network-based models are gaining more and more interest and new techniques are required in order to mine the information and to validate the results. To fill the validation gap we present an app, for the Cytoscape platform, which aims at creating randomised networks and randomising existing, real networks. Since there is a lack of tools that allow performing such operations, our app aims at enabling researchers to exploit different, well known random network models that could be used as a benchmark for validating real, biological datasets. We also propose a novel methodology for creating random weighted networks, i.e. the multiplication algorithm, starting from real, quantitative data. Finally, the app provides a statistical tool that compares real versus randomly computed attributes, in order to validate the numerical findings. In summary, our app aims at creating a standardised methodology for the validation of the results in the context of the Cytoscape platform.
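
    The benchmark idea above, comparing an attribute of the real network with the same attribute on randomised counterparts, can be illustrated outside Cytoscape with networkx. The sketch below uses standard networkx generators and a degree-preserving edge swap; it is a generic illustration, not the app's multiplication algorithm.

import networkx as nx

# A toy "real" network stands in for an experimental dataset.
real = nx.karate_club_graph()
real_clustering = nx.average_clustering(real)

# Benchmark 1: size-matched Erdos-Renyi G(n, m) random graphs.
er_values = [nx.average_clustering(nx.gnm_random_graph(real.number_of_nodes(),
                                                       real.number_of_edges(),
                                                       seed=i))
             for i in range(100)]

# Benchmark 2: degree-preserving randomisation of the real network itself.
shuffled = real.copy()
nx.double_edge_swap(shuffled, nswap=10 * real.number_of_edges(),
                    max_tries=10**5, seed=3)
swap_value = nx.average_clustering(shuffled)

print(f"real: {real_clustering:.3f}  "
      f"ER mean: {sum(er_values) / len(er_values):.3f}  "
      f"degree-preserving: {swap_value:.3f}")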

  16. MIG version 0.0 model interface guidelines: Rules to accelerate installation of numerical models into any compliant parent code

    Energy Technology Data Exchange (ETDEWEB)

    Brannon, R.M.; Wong, M.K.

    1996-08-01

    A set of model interface guidelines, called MIG, is presented as a means by which any compliant numerical material model can be rapidly installed into any parent code without having to modify the model subroutines. Here, "model" usually means a material model such as one that computes stress as a function of strain, though the term may be extended to any numerical operation. "Parent code" means a hydrocode, finite element code, etc. which uses the model and enforces, say, the fundamental laws of motion and thermodynamics. MIG requires the model developer (who creates the model package) to specify model needs in a standardized but flexible way. MIG includes a dictionary of technical terms that allows developers and parent code architects to share a common vocabulary when specifying field variables. For portability, database management is the responsibility of the parent code. Input/output occurs via structured calling arguments. As much model information as possible (such as the lists of required inputs, as well as lists of precharacterized material data and special needs) is supplied by the model developer in an ASCII text file. Every MIG-compliant model also has three required subroutines to check data, to request extra field variables, and to perform model physics. To date, the MIG scheme has proven flexible in beta installations of a simple yield model, plus a more complicated viscodamage yield model, three electromechanical models, and a complicated anisotropic microcrack constitutive model. The MIG yield model has been successfully installed using identical subroutines in three vectorized parent codes and one parallel C++ code, all predicting comparable results. By maintaining one model for many codes, MIG facilitates code-to-code comparisons and reduces duplication of effort, thereby reducing the cost of installing and sharing models in diverse new codes.
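
    The three required entry points described above (check data, request extra field variables, perform model physics) translate naturally into a small interface. The sketch below is an illustrative Python analogue of that structure, not the actual MIG Fortran calling conventions or argument lists.

from dataclasses import dataclass, field

@dataclass
class ElasticModel:
    """Toy material model exposing the three MIG-style entry points."""
    youngs_modulus: float
    poissons_ratio: float
    extra_fields: list = field(default_factory=list)

    def check_data(self):
        """Entry point 1: validate user-supplied material parameters."""
        if self.youngs_modulus <= 0.0:
            raise ValueError("Young's modulus must be positive")
        if not -1.0 < self.poissons_ratio < 0.5:
            raise ValueError("Poisson's ratio out of range")

    def request_extra_variables(self):
        """Entry point 2: tell the parent code which extra field variables to allocate."""
        self.extra_fields = ["equivalent_plastic_strain"]
        return self.extra_fields

    def run_physics(self, strain: float) -> float:
        """Entry point 3: the model physics (here a trivial 1-D Hooke's law)."""
        return self.youngs_modulus * strain

# The "parent code" drives the model only through the three entry points.
model = ElasticModel(youngs_modulus=200e9, poissons_ratio=0.3)
model.check_data()
model.request_extra_variables()
print(model.run_physics(strain=1.0e-3))   # stress in Pa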

  17. Offshore Wind Guidance Document: Oceanography and Sediment Stability (Version 1) Development of a Conceptual Site Model.

    Energy Technology Data Exchange (ETDEWEB)

    Roberts, Jesse D.; Jason Magalen; Craig Jones

    2014-06-01

    This guidance document provides the reader with an overview of the key environmental considerations for a typical offshore wind coastal location and the tools to help guide the reader through a thorough planning process. It will enable readers to identify the key coastal processes relevant to their offshore wind site and perform pertinent analysis to guide siting and layout design, with the goal of minimizing costs associated with planning, permitting, and long-term maintenance. The document highlights site characterization and assessment techniques for evaluating spatial patterns of sediment dynamics in the vicinity of a wind farm under typical, extreme, and storm conditions. Finally, the document describes the assimilation of all of this information into the conceptual site model (CSM) to aid the decision-making processes.

  18. Theoretical modelling of epigenetically modified DNA sequences [version 2; referees: 2 approved

    Directory of Open Access Journals (Sweden)

    Alexandra Teresa Pires Carvalho

    2015-05-01

    Full Text Available We report herein a set of calculations designed to examine the effects of epigenetic modifications on the structure of DNA. The incorporation of methyl, hydroxymethyl, formyl and carboxy substituents at the 5-position of cytosine is shown to hardly affect the geometry of CG base pairs, but to result in rather larger changes to hydrogen-bond and stacking binding energies, as predicted by dispersion-corrected density functional theory (DFT) methods. The same modifications within double-stranded GCG and ACA trimers exhibit rather larger structural effects, when including the sugar-phosphate backbone as well as sodium counterions and implicit aqueous solvation. In particular, changes are observed in the buckle and propeller angles within base pairs and the slide and roll values of base pair steps, but these leave the overall helical shape of DNA essentially intact. The structures so obtained are useful as a benchmark of faster methods, including molecular mechanics (MM) and hybrid quantum mechanics/molecular mechanics (QM/MM) methods. We show that previously developed MM parameters satisfactorily reproduce the trimer structures, as do QM/MM calculations which treat bases with dispersion-corrected DFT and the sugar-phosphate backbone with AMBER. The latter are improved by inclusion of all six bases in the QM region, since a truncated model including only the central CG base pair in the QM region is considerably further from the DFT structure. This QM/MM method is then applied to a set of double-stranded DNA heptamers derived from a recent X-ray crystallographic study, whose size puts a DFT study beyond our current computational resources. These data show that still larger structural changes are observed than in base pairs or trimers, leading us to conclude that it is important to model epigenetic modifications within realistic molecular contexts.

  19. New version of the theoretical databank of transferable aspherical pseudoatoms, UBDB2011--towards nucleic acid modelling.

    Science.gov (United States)

    Jarzembska, Katarzyna N; Dominiak, Paulina M

    2012-01-01

    The theoretical databank of aspherical pseudoatoms (UBDB) was recently extended with over 100 new atom types present in RNA, DNA and in some other molecules of great importance in biology and pharmacy. The atom-type definitions were modified and new atom keys added to provide a more precise description of the atomic charge-density distribution. X-H bond lengths were updated according to recent neutron diffraction studies and implemented in the LSDB program as well as used for modelling the appropriate atom types. The UBDB2011 databank was extensively tested. Electrostatic interaction energies calculated on the basis of the databank of aspherical atom models were compared with the corresponding results obtained directly from wavefunctions at the same level of theory (SPDFG/B3LYP/6-31G** and SPDFG/B3LYP/aug-cc-pVDZ). Various small complexes were analysed to cover most of the different interaction types, i.e. adenine-thymine and guanine-cytosine with hydrogen bonding, guanine-adenine with stacking contacts, and a group of neutral and charged species of nucleic acid bases interacting with amino acid side chains. The energy trends are well preserved (R² > 0.9); however the energy values differ between the two methods by about 4 kcal mol⁻¹ (1 kcal mol⁻¹ = 4.184 kJ mol⁻¹) on average. What is noticeable is that the replacement of one basis set by another in a purely quantum chemical approach leads to the same electrostatic energy difference, i.e. of about 4 kcal mol⁻¹ in magnitude. The present work opens up the possibility of applying the UBDB2011 for macromolecules that contain DNA/RNA fragments. This study shows that on the basis of the UBDB2011 databank electrostatic interaction energies can be estimated and structure refinements carried out. However, some method limitations are apparent.

  20. Forsmark site investigation. Assessment of the validity of the rock domain model, version 1.2, based on the modelling of gravity and petrophysical data

    Energy Technology Data Exchange (ETDEWEB)

    Isaksson, Hans (GeoVista AB, Uppsala (SE)); Stephens, Michael B. (Geological Survey of Sweden, Uppsala (SE))

    2007-11-15

    This document reports the results gained by the geophysical modelling of rock domains based on gravity and petrophysical data, which is one of the activities performed within the site investigation work at Forsmark. The main objective with this activity is to assess the validity of the geological rock domain model version 1.2, and to identify discrepancies in the model that may indicate a need for revision of the model or a need for additional investigations. The verification is carried out by comparing the calculated gravity model response, which takes account of the geological model, with a local gravity anomaly that represents the measured data. The model response is obtained from the three-dimensional geometry and the petrophysical data provided for each rock domain in the geological model. Due to model boundary conditions, the study is carried out in a smaller area within the regional model area. Gravity model responses are calculated in three stages; an initial model, a base model and a refined base model. The refined base model is preferred and is used for comparison purposes. In general, there is a good agreement between the refined base model that makes use of the rock domain model, version 1.2 and the measured gravity data, not least where it concerns the depth extension of the critical rock domain RFM029. The most significant discrepancy occurs in the area extending from the SFR office to the SFR underground facility and further to the northwest. It is speculated that this discrepancy is caused by a combination of an overestimation of the volume of gabbro (RFM016) that plunges towards the southeast in the rock domain model, and an underestimation of the volume of occurrence of pegmatite and pegmatitic granite that are known to be present and occur as larger bodies around SFR. Other discrepancies are noted in rock domain RFM022, which is considered to be overestimated in the rock domain model, version 1.2, and in rock domain RFM017, where the gravity

  1. Simulating the 2012 High Plains Drought Using Three Single Column Model Versions of the Community Earth System Model (SCM-CESM)

    Science.gov (United States)

    Medina, I. D.; Denning, S.

    2014-12-01

    The impact of changes in the frequency and severity of drought on fresh water sustainability is a great concern for many regions of the world. One such location is the High Plains, where the local economy is primarily driven by fresh water withdrawals from the Ogallala Aquifer, which accounts for approximately 30% of total irrigation withdrawals from all U.S. aquifers combined. Modeling studies that focus on the feedback mechanisms that control the climate and eco-hydrology during times of drought are limited in the sense that they use conventional General Circulation Models (GCMs) with grid length scales ranging from one hundred to several hundred kilometers. Additionally, these models utilize crude statistical parameterizations of cloud processes for estimating sub-grid fluxes of heat and moisture and have a poor representation of land surface heterogeneity. For this research, we focus on the 2012 High Plains drought, and will perform numerical simulations using three single column model versions of the Community Earth System Model (SCM-CESM) at multiple sites overlying the Ogallala Aquifer for the 2010-2012 period. In the first version of SCM-CESM, CESM will be used in standard mode (Community Atmospheric Model (CAM) coupled to a single instance of the Community Land Model (CLM)), secondly, CESM will be used in Super-Parameterized mode (SP-CESM), where a cloud resolving model (CRM consists of 32 atmospheric columns) replaces the standard CAM atmospheric parameterization and is coupled to a single instance of CLM, and thirdly, CESM is used in "Multi Instance" SP-CESM mode, where an instance of CLM is coupled to each CRM column of SP-CESM (32 CRM columns coupled to 32 instances of CLM). To assess the physical realism of the land-atmosphere feedbacks simulated at each site by all versions of SCM-CESM, differences in simulated energy and moisture fluxes will be computed between years for the 2010-2012 period, and will be compared to differences calculated using

  2. Fuel Cell Power Model Version 2: Startup Guide, System Designs, and Case Studies. Modeling Electricity, Heat, and Hydrogen Generation from Fuel Cell-Based Distributed Energy Systems

    Energy Technology Data Exchange (ETDEWEB)

    Steward, D.; Penev, M.; Saur, G.; Becker, W.; Zuboy, J.

    2013-06-01

    This guide helps users get started with the U.S. Department of Energy/National Renewable Energy Laboratory Fuel Cell Power (FCPower) Model Version 2, which is a Microsoft Excel workbook that analyzes the technical and economic aspects of high-temperature fuel cell-based distributed energy systems with the aim of providing consistent, transparent, comparable results. This type of energy system would provide onsite-generated heat and electricity to large end users such as hospitals and office complexes. The hydrogen produced could be used for fueling vehicles or stored for later conversion to electricity.

  3. Hybrid2: The hybrid system simulation model, Version 1.0, user manual

    Energy Technology Data Exchange (ETDEWEB)

    Baring-Gould, E.I.

    1996-06-01

    In light of the large-scale demand for energy in remote communities, especially in the developing world, a need was identified for a detailed long-term performance prediction model for hybrid power systems. To meet this need, engineers from the National Renewable Energy Laboratory (NREL) and the University of Massachusetts (UMass) have spent the last three years developing the Hybrid2 software. The Hybrid2 code provides a means to conduct long-term, detailed simulations of the performance of a large array of hybrid power systems. This work acts as an introduction and user's manual to the Hybrid2 software. The manual describes the Hybrid2 code and what is included with the software, and instructs the user on the structure of the code. The manual also describes some of the major features of the Hybrid2 code as well as how to create projects and run hybrid system simulations. The Hybrid2 code test program is also discussed. Although every attempt has been made to make the Hybrid2 code easy to understand and use, this manual will allow many organizations to consider the long-term advantages of using hybrid power systems instead of conventional petroleum-based systems for remote power generation.

  4. Hyphal ontogeny in Neurospora crassa: a model organism for all seasons [version 1; referees: 3 approved

    Directory of Open Access Journals (Sweden)

    Meritxell Riquelme

    2016-11-01

    Full Text Available Filamentous fungi have proven to be a better-suited model system than unicellular yeasts in analyses of cellular processes such as polarized growth, exocytosis, endocytosis, and cytoskeleton-based organelle traffic. For example, the filamentous fungus Neurospora crassa develops a variety of cellular forms. Studying the molecular basis of these forms has led to a better, yet incipient, understanding of polarized growth. Polarity factors as well as Rho GTPases, septins, and a localized delivery of vesicles are the central elements described so far that participate in the shift from isotropic to polarized growth. The growth of the cell wall by apical biosynthesis and remodeling of polysaccharide components is a key process in hyphal morphogenesis. The coordinated action of motor proteins and Rab GTPases mediates the vesicular journey along the hyphae toward the apex, where the exocyst mediates vesicle fusion with the plasma membrane. Cytoplasmic microtubules and actin microfilaments serve as tracks for the transport of vesicular carriers as well as organelles in the tubular cell, contributing to polarization. In addition to exocytosis, endocytosis is required to set and maintain the apical polarity of the cell. Here, we summarize some of the most recent breakthroughs in hyphal morphogenesis and apical growth in N. crassa and the emerging questions that we believe should be addressed.

  5. Fe–Mn alloys: A mixed-bond spin-1/2 Ising model version

    Energy Technology Data Exchange (ETDEWEB)

    Freitas, A.S. [Departamento de Física, Universidade Federal de Sergipe, 49100-000 São Cristovão, SE (Brazil); Albuquerque, Douglas F. de, E-mail: douglas@ufs.br [Departamento de Física, Universidade Federal de Sergipe, 49100-000 São Cristovão, SE (Brazil); Departamento de Matemática, Universidade Federal de Sergipe, 49100-000 São Cristovão, SE (Brazil); Moreno, N.O. [Departamento de Física, Universidade Federal de Sergipe, 49100-000 São Cristovão, SE (Brazil)

    2014-06-01

    In this work, we apply the mixed-bond spin-1/2 Ising model to study the magnetic properties of Fe–Mn alloys in the α phase by employing the effective field theory (EFT). Here, we suggest a new approach to the ferromagnetic coupling between nearest-neighbour Fe–Fe pairs that depends on the ratio between the Mn–Mn and Fe–Mn couplings and on the second power of the Mn concentration q, in contrast to the linear dependence considered in other articles. We also propose a new probability distribution for binary alloys with mixed bonds, based on the distribution for ternary alloys, and we obtain very good agreement for all considered values of q in the T–q plane, in particular for q>0.11. - Highlights: • We apply the mixed-bond spin-1/2 Ising model to study the properties of Fe–Mn. • We employ the EFT and suggest a new approach to the ferromagnetic coupling. • A new probability distribution is considered. • The phase diagram is obtained for all values of q in the T–q plane.

  6. Fe–Mn alloys: A mixed-bond spin-1/2 Ising model version

    International Nuclear Information System (INIS)

    Freitas, A.S.; Albuquerque, Douglas F. de; Moreno, N.O.

    2014-01-01

    In this work, we apply the mixed-bond spin-1/2 Ising model to study the magnetic properties of Fe–Mn alloys in the α phase by employing the effective field theory (EFT). Here, we suggest a new approach to the ferromagnetic coupling between nearest-neighbour Fe–Fe pairs that depends on the ratio between the Mn–Mn and Fe–Mn couplings and on the second power of the Mn concentration q, in contrast to the linear dependence considered in other articles. We also propose a new probability distribution for binary alloys with mixed bonds, based on the distribution for ternary alloys, and we obtain very good agreement for all considered values of q in the T–q plane, in particular for q>0.11. - Highlights: • We apply the mixed-bond spin-1/2 Ising model to study the properties of Fe–Mn. • We employ the EFT and suggest a new approach to the ferromagnetic coupling. • A new probability distribution is considered. • The phase diagram is obtained for all values of q in the T–q plane
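
    A schematic LaTeX rendering of the kind of concentration dependence described above; the functional form and the coefficient \alpha are illustrative assumptions, not the authors' exact expression:

    J_{\mathrm{FeFe}}(q) \;=\; J_{\mathrm{FeFe}}^{0}\left[\,1 - \alpha\,\frac{J_{\mathrm{MnMn}}}{J_{\mathrm{FeMn}}}\,q^{2}\right],

    in contrast to a linear form such as J_{\mathrm{FeFe}}(q) = J_{\mathrm{FeFe}}^{0}(1 - \alpha q) assumed in earlier work.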

  7. Sensitivity of precipitation to parameter values in the community atmosphere model version 5

    Energy Technology Data Exchange (ETDEWEB)

    Johannesson, Gardar; Lucas, Donald; Qian, Yun; Swiler, Laura Painton; Wildey, Timothy Michael

    2014-03-01

    One objective of the Climate Science for a Sustainable Energy Future (CSSEF) program is to develop the capability to thoroughly test and understand the uncertainties in the overall climate model and its components as they are being developed. The focus on uncertainties involves sensitivity analysis: the capability to determine which input parameters have a major influence on the output responses of interest. This report presents some initial sensitivity analysis results performed by Lawrence Livermore National Laboratory (LLNL), Sandia National Laboratories (SNL), and Pacific Northwest National Laboratory (PNNL). In the 2011-2012 timeframe, these laboratories worked in collaboration to perform sensitivity analyses of a set of CAM5 2° runs, where the response metrics of interest were precipitation metrics. The three labs performed their sensitivity analysis (SA) studies separately and then compared results. Overall, the results were quite consistent with each other although the methods used were different. This exercise provided a robustness check of the global sensitivity analysis metrics and identified some strongly influential parameters.

  8. A global wetland methane emissions and uncertainty dataset for atmospheric chemical transport models (WetCHARTs version 1.0)

    Directory of Open Access Journals (Sweden)

    A. A. Bloom

    2017-06-01

    Full Text Available Wetland emissions remain one of the principal sources of uncertainty in the global atmospheric methane (CH4) budget, largely due to poorly constrained process controls on CH4 production in waterlogged soils. Process-based estimates of global wetland CH4 emissions and their associated uncertainties can provide crucial prior information for model-based top-down CH4 emission estimates. Here we construct a global wetland CH4 emission model ensemble for use in atmospheric chemical transport models (WetCHARTs version 1.0). Our 0.5° × 0.5° resolution model ensemble is based on satellite-derived surface water extent and precipitation reanalyses, nine heterotrophic respiration simulations (eight carbon cycle models and a data-constrained terrestrial carbon cycle analysis) and three temperature dependence parameterizations for the period 2009–2010; an extended ensemble subset based solely on precipitation and the data-constrained terrestrial carbon cycle analysis is derived for the period 2001–2015. We incorporate the mean of the full and extended model ensembles into GEOS-Chem and compare the model against surface measurements of atmospheric CH4; the model performance (site-level and zonal mean anomaly residuals) compares favourably against published wetland CH4 emissions scenarios. We find that uncertainties in carbon decomposition rates and the wetland extent together account for more than 80 % of the dominant uncertainty in the timing, magnitude and seasonal variability in wetland CH4 emissions, although uncertainty in the temperature CH4:C dependence is a significant contributor to seasonal variations in mid-latitude wetland CH4 emissions. The combination of satellite, carbon cycle models and temperature dependence parameterizations provides a physically informed structural a priori uncertainty that is critical for top-down estimates of wetland CH4 fluxes. Specifically, our ensemble can provide enhanced information on the prior
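
    A minimal sketch of the multiplicative structure implied above, in which a wetland CH4 flux is built from a wetland-extent term, a heterotrophic-respiration term and a Q10-style temperature dependence; the scale factor, Q10 value and example inputs are illustrative assumptions, not WetCHARTs parameters.

    # Hedged sketch of a wetland CH4 emission estimate of the general form
    # emission = scale * wetland_fraction * heterotrophic_respiration * f(T);
    # all numerical values below are placeholders for illustration only.
    def wetland_ch4_flux(wetland_fraction, r_het, temperature_c,
                         scale=0.03, q10=2.0, t_ref_c=20.0):
        """Return a CH4 flux in the same units as r_het, times the scale factor."""
        temperature_factor = q10 ** ((temperature_c - t_ref_c) / 10.0)
        return scale * wetland_fraction * r_het * temperature_factor

    # Example grid cell: 20 % wetland cover, respiration 2.5 gC m-2 d-1, 25 degC.
    print(wetland_ch4_flux(wetland_fraction=0.2, r_het=2.5, temperature_c=25.0))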

  9. EIA model documentation: World oil refining logistics demand model, "WORLD" reference manual. Version 1.1

    Energy Technology Data Exchange (ETDEWEB)

    1994-04-11

    This manual is intended primarily for use as a reference by analysts applying the WORLD model to regional studies. It also provides overview information on WORLD features of potential interest to managers and analysts. Broadly, the manual covers WORLD model features in progressively increasing detail. Section 2 provides an overview of the WORLD model, how it has evolved, what its design goals are, what it produces, and where it can be taken with further enhancements. Section 3 reviews model management, covering data sources, managing over-optimization, calibration and seasonality, check-points for case construction and common errors. Section 4 describes in detail the WORLD system, including: data and program systems in overview; details of mainframe and PC program control and files; model generation, size management, debugging and error analysis; use with different optimizers; and reporting and results analysis. Section 5 provides a detailed description of every WORLD model data table, covering model controls, case and technology data. Section 6 goes into the details of WORLD matrix structure. It provides an overview, describes how regional definitions are controlled and defines the naming conventions for all model rows, columns, right-hand sides, and bounds. It also includes a discussion of the formulation of product blending and specifications in WORLD. Several Appendices supplement the main sections.

  10. Study of the Eco-Economic Indicators by Means of the New Version of the Merge Integrated Model Part 2

    Directory of Open Access Journals (Sweden)

    Boris Vadimovich Digas

    2016-03-01

    Full Text Available One of the most pressing issues of the day is forecasting climatic change and mitigating its consequences. The official position reflected in the Climate Doctrine of the Russian Federation recognises the need to develop a state-level approach to climatic problems and related issues on the basis of a comprehensive scientific analysis of ecological, economic and social factors. For this purpose, integrated assessment models of an interdisciplinary character are employed. Their functionality allows various dynamic scenarios of complex systems to be constructed and tested. The main purposes of the computing experiments described in the article are to review the consequences of hypothetical participation of Russia in greenhouse gas reduction initiatives such as the Kyoto Protocol, and to test one of the methods for calculating the green gross domestic product, which represents the efficiency of environmental management in the modelling. To implement these goals, the MERGE optimization model is used; its classical version is intended for the quantitative estimation of the results of applying nature protection strategies. The components of the model are the eco-power module, the climatic module and the module of loss estimates. In this work, the main attention is paid to the adaptation of the MERGE model to the current state of the world economy under a complicated geopolitical situation, and to the introduction of a new model component implementing a simplified method for calculating the green gross domestic product. The Project of scenario conditions and the key macroeconomic forecast parameters of the socio-economic development of Russia for 2016 and the planning period of 2017−2018, prepared by the Ministry of Economic Development of the Russian Federation, are used as the basic source of input data for the analysis of possible trajectories of the

  11. Study of the Eco-Economic Indicators by Means of the New Version of the Merge Integrated Model. Part 1

    Directory of Open Access Journals (Sweden)

    Boris Vadimovich Digas

    2015-12-01

    Full Text Available One of the most pressing issues of the day is forecasting climatic change and mitigating its consequences. The official position reflected in the Climate Doctrine of the Russian Federation recognises the need to develop a state-level approach to climatic problems and related issues on the basis of a comprehensive scientific analysis of ecological, economic and social factors. For this purpose, integrated assessment models of an interdisciplinary character are employed. Their functionality allows various dynamic scenarios of complex systems to be constructed and tested. The main purposes of the computing experiments described in the article are to review the consequences of hypothetical participation of Russia in greenhouse gas reduction initiatives such as the Kyoto Protocol, and to test one of the methods for calculating the green GDP, which represents the efficiency of environmental management in the modelling. To implement these goals, the MERGE optimization model is used; its classical version is intended for the quantitative estimation of the results of applying nature protection strategies. The components of the model are the eco-power module, the climatic module and the module of loss estimates. In this work, the main attention is paid to the adaptation of the MERGE model to the current state of the world economy under a complicated geopolitical situation, and to the introduction of a new model component implementing a simplified method for calculating the green GDP. The Project of scenario conditions and the key macroeconomic forecast parameters of the socio-economic development of Russia for 2016 and the planning period of 2017−2018, prepared by the Ministry of Economic Development of the Russian Federation, are used as the basic source of input data for the analysis of possible trajectories of the economic development of Russia and the

  12. Thermal Site Descriptive Model. A strategy for the model development during site investigations. Version 1.0

    International Nuclear Information System (INIS)

    Sundberg, Jan

    2003-04-01

    Site investigations are in progress for the siting of a deep repository for spent nuclear fuel. As part of the planning work, strategies are developed for site descriptive modelling regarding different disciplines, amongst them the thermal conditions. The objective of the strategy for a thermal site descriptive model is to guide the practical implementation of evaluating site specific data during the site investigations. It is understood that further development may be needed. The model describes the thermal properties and other thermal parameters of intact rock, fractures and fracture zones, and of the rock mass. The methodology is based on estimation of thermal properties of intact rock and discontinuities, using both empirical and theoretical/numerical approaches, and estimation of thermal processes using mathematical modelling. The methodology will be used and evaluated for the thermal site descriptive modelling at the Aespoe Hard Rock Laboratory

  13. VELMA Ecohydrological Model, Version 2.0 -- Analyzing Green Infrastructure Options for Enhancing Water Quality and Ecosystem Service Co-Benefits

    Science.gov (United States)

    This 2-page factsheet describes an enhanced version (2.0) of the VELMA eco-hydrological model. VELMA – Visualizing Ecosystem Land Management Assessments – has been redesigned to assist communities, land managers, policy makers and other decision makers in evaluating the effecti...

  14. Composite double oscillation in a modified version of the oregonator model of the Belousov-Zhabotinsky reaction

    Science.gov (United States)

    Janz, Robert D.; Vanecek, David J.; Field, Richard J.

    1980-10-01

    A number of nonmonotonic behaviors appear when the Belousov-Zhabotinsky reaction is run in a flow system (CSTR) which are not observed when the reaction is run in a closed system. Among these behaviors is composite double oscillation in which nearly identical bursts of oscillation are separated by regular periods of quiescence. Here we use a modified version of the oregonator model of the Belousov-Zhabotinsky reaction to simulate composite double oscillation. Our modification involves the addition of a new variable which is related to the amount of brominated organic material present in the system. This new variable changes slowly on the time scale of the oscillations and controls the value of f, the stoichiometric factor of step 5 in the oregonator. Thus the behavior of the modified oregonator in CSTR mode when flowrates are moderate can be rationalized in terms of the properties of the unmodified oregonator in a closed system. We show that composite double oscillation is a hysteresis phenomenon occurring over a small range of values of f where a locally stable steady state and a locally stable limit cycle coexist. Composite double oscillation occurs as the system is carried back-and-forth across the area of coexistence by the new, slowly moving variable whose concentration grows during the oscillatory phase, when the system is on the locally stable limit cycle, and decays during the quiescent phase, when the system is on the locally stable steady state.
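
    A minimal numerical sketch of the idea: the classical three-variable scaled Oregonator integrated with a fourth, slowly varying quantity that nudges the stoichiometric factor f back and forth; the parameter values and the form of the slow drive are illustrative assumptions rather than the authors' actual modification.

    # Hedged sketch: scaled Oregonator plus a slow variable w that modulates the
    # stoichiometric factor f; all parameters below are illustrative only.
    from scipy.integrate import solve_ivp

    eps1, eps2, q = 0.04, 4.0e-4, 8.0e-4   # common scaled-Oregonator constants
    f_base, f_amp, slow_rate = 0.9, 0.3, 0.005

    def rhs(t, s):
        x, y, z, w = s
        f = f_base + f_amp * w                     # slow variable shifts f
        dx = (q * y - x * y + x * (1.0 - x)) / eps1
        dy = (-q * y - x * y + 2.0 * f * z) / eps2
        dz = x - z
        dw = slow_rate * (z - w)                   # w slowly tracks the oscillations
        return [dx, dy, dz, dw]

    sol = solve_ivp(rhs, (0.0, 400.0), [0.1, 0.1, 0.1, 0.0],
                    method="LSODA", max_step=0.1)
    print(sol.y[2].min(), sol.y[2].max())          # range of z over the run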

  15. Brayton Cycle Numerical Modeling using the RELAP5-3D code, version 4.3.4

    International Nuclear Information System (INIS)

    Longhini, Eduardo P.; Lobo, Paulo D.C.; Guimarães, Lamartine N.F.; Filho, Francisco A.B.; Ribeiro, Guilherme B.

    2017-01-01

    This work contributes to enabling and developing technologies for mounting fast micro reactors that generate heat and electric energy, with the purpose of heating and electrically supplying spacecraft equipment and also of producing nuclear space propulsion. For this purpose, the Brayton cycle proves to be an excellent approach for space nuclear power. The Brayton gas thermal cycle is a closed cycle with two adiabatic processes and two isobaric processes. The components performing the cycle's processes are the compressor, turbine, heat source, cold source and recuperator. The working fluid's mass flow thus runs through the thermal cycle, converting thermal energy into electrical energy usable in space and land devices. The objective is to numerically model the Brayton gas thermal cycle in nominal operation with one turbomachine, composed of a radial-inflow compressor and turbine, of a 40.8 kWe Brayton Rotating Unit (BRU). The Brayton cycle numerical modeling is performed with the RELAP5-3D program, version 4.3.4. The nominal operation uses as working fluid a 40 g/mole He-Xe mixture with a flow rate of 1.85 kg/s, a shaft rotational speed of 45 krpm, compressor and turbine inlet temperatures of 400 K and 1149 K, respectively, and a compressor exit pressure of 0.931 MPa. The aim is then to obtain the corresponding physical data for operating each cycle component and the overall cycle at this nominal operating point. (author)

  16. Brayton Cycle Numerical Modeling using the RELAP5-3D code, version 4.3.4

    Energy Technology Data Exchange (ETDEWEB)

    Longhini, Eduardo P.; Lobo, Paulo D.C.; Guimarães, Lamartine N.F.; Filho, Francisco A.B.; Ribeiro, Guilherme B., E-mail: edu_longhini@yahoo.com.br [Instituto de Estudos Avançados (IEAv), São José dos Campos, SP (Brazil). Divisão de Energia Nuclear

    2017-07-01

    This work contributes to enabling and developing technologies for mounting fast micro reactors that generate heat and electric energy, with the purpose of heating and electrically supplying spacecraft equipment and also of producing nuclear space propulsion. For this purpose, the Brayton cycle proves to be an excellent approach for space nuclear power. The Brayton gas thermal cycle is a closed cycle with two adiabatic processes and two isobaric processes. The components performing the cycle's processes are the compressor, turbine, heat source, cold source and recuperator. The working fluid's mass flow thus runs through the thermal cycle, converting thermal energy into electrical energy usable in space and land devices. The objective is to numerically model the Brayton gas thermal cycle in nominal operation with one turbomachine, composed of a radial-inflow compressor and turbine, of a 40.8 kWe Brayton Rotating Unit (BRU). The Brayton cycle numerical modeling is performed with the RELAP5-3D program, version 4.3.4. The nominal operation uses as working fluid a 40 g/mole He-Xe mixture with a flow rate of 1.85 kg/s, a shaft rotational speed of 45 krpm, compressor and turbine inlet temperatures of 400 K and 1149 K, respectively, and a compressor exit pressure of 0.931 MPa. The aim is then to obtain the corresponding physical data for operating each cycle component and the overall cycle at this nominal operating point. (author)
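
    As a rough cross-check on the figures quoted above, a back-of-the-envelope ideal recuperated Brayton estimate can be sketched with an ideal-gas He-Xe mixture; the pressure ratio and component efficiencies below are assumptions, and this is in no way the RELAP5-3D model.

    # Hedged ideal-cycle sketch using figures from the abstract (40 g/mol He-Xe,
    # compressor inlet 400 K, turbine inlet 1149 K); pressure ratio and component
    # efficiencies are assumptions, not values from the RELAP5-3D input deck.
    R_UNIVERSAL = 8.314          # J/(mol K)
    molar_mass = 0.040           # kg/mol, He-Xe mixture
    cp = 2.5 * R_UNIVERSAL / molar_mass      # monatomic ideal gas, ~520 J/(kg K)
    gamma = 5.0 / 3.0

    m_dot = 1.85                 # kg/s, from the abstract
    t_comp_in, t_turb_in = 400.0, 1149.0     # K, from the abstract
    pressure_ratio = 1.9         # assumption
    eta_c, eta_t = 0.80, 0.85    # assumed isentropic efficiencies

    exp = (gamma - 1.0) / gamma
    t_comp_out = t_comp_in * (1.0 + (pressure_ratio**exp - 1.0) / eta_c)
    t_turb_out = t_turb_in * (1.0 - eta_t * (1.0 - pressure_ratio**(-exp)))

    w_net = m_dot * cp * ((t_turb_in - t_turb_out) - (t_comp_out - t_comp_in))
    print(f"net shaft power ~ {w_net / 1e3:.1f} kW (rough ideal-cycle estimate)")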

  17. Asian dust transport during the springtime of year 2001 and 2002 with a nested version of dust transport model

    Science.gov (United States)

    Uno, I.; Satake, S.; Hara, Y.; Takemura, T.; Wang, Z.; Carmichael, G. R.

    2002-12-01

    The number of yellow sand (Kosa) observations has been increasing sharply in Japan and Korea since 2000. In particular, extremely high PM10 concentrations (exceeding 0.5 mg/m3) were observed in Japan several times in 2002, so there is an urgent scientific and political need to forecast and reproduce the detailed dust emission, transport and deposition processes. Intensive modeling studies have already been conducted to examine the transport of Saharan dust and its impact on the global radiation budget. One of the important differences between the Sahara desert and the Asian deserts (mainly the Gobi Desert and the Takla Makan Desert) is the elevation of the dust source. The average elevation of the Gobi Desert is approximately 1500 to 2500 m, and these deserts are surrounded by high mountains. Furthermore, the recent advance of man-made desertification has created complicated land-use patches in the arid region of Inner Mongolia. Therefore, the development of a dust model with high horizontal resolution is strongly required. In this study, we report a newly developed nested version of the dust transport model (part of the Chemical weather FORecasting System, CFORS) in order to better understand Asian springtime heavy dust episodes. CFORS is a multi-tracer, on-line system built within the RAMS mesoscale meteorological model. A unique feature of nested CFORS is that multiple tracers are run on-line in RAMS under two-way nesting, so that all the fine-scale on-line meteorological information, such as 3-D winds, boundary-layer turbulence, surface fluxes and precipitation amount, is directly used by the dust emission and transport at every time step. As a result, nested CFORS produces, with high time resolution, 3-dimensional fields of dust distributions and major meteorological parameters under the nesting capability of RAMS. In this work, dust transport simulations with nested CFORS were conducted between March and April of the years 2001 and 2002, respectively. The sensitivity

  18. User's guide to the MESOI diffusion model: Version 1.1 (for Data General Eclipse S/230 with AFOS)

    Energy Technology Data Exchange (ETDEWEB)

    Athey, G.F.; Ramsdell, J.V.

    1982-09-01

    MESOI is an interactive, Lagrangian puff trajectory model. The model theory is documented separately (Ramsdell and Athey, 1981). Version 1.1 is a modified form of the original Version 1.0. It is designed to run on a Data General Eclipse computer. The model has improved support features which make it useful as an emergency response tool. This report is intended to provide the user with the information necessary to successfully conduct model simulations using MESOI Version 1.1 and to use the support programs STAPREP and EXPLT. The user is also provided information on the use of the data file maintenance and review program UPDATE. Examples are given for the operation of the program. Test data sets are described which allow the user to practice with the programs and to confirm proper implementation and execution.

  19. Statistical model of fractures and deformations zones for Forsmark. Preliminary site description Forsmark area - version 1.2

    International Nuclear Information System (INIS)

    La Pointe, Paul R.; Olofsson, Isabelle; Hermanson, Jan

    2005-04-01

    Compared to version 1.1, a much larger amount of data especially from boreholes is available. Both one-hole interpretation and Boremap indicate the presence of high and low fracture intensity intervals in the rock mass. The depth and width of these intervals varies from borehole to borehole but these constant fracture intensity intervals are contiguous and present quite sharp transitions. There is not a consistent pattern of intervals of high fracture intensity at or near to the surface. In many cases, the intervals of highest fracture intensity are considerably below the surface. While some fractures may have occurred or been reactivated in response to surficial stress relief, surficial stress relief does not appear to be a significant explanatory variable for the observed variations in fracture intensity. Data from the high fracture intensity intervals were extracted and statistical analyses were conducted in order to identify common geological factors. Stereoplots of fracture orientation versus depth for the different fracture intensity intervals were also produced for each borehole. Moreover percussion borehole data were analysed in order to identify the persistence of these intervals throughout the model volume. The main conclusions of these analyses are the following: The fracture intensity is conditioned by the rock domain, but inside a rock domain intervals of high and low fracture intensity are identified. The intervals of high fracture intensity almost always correspond to intervals with distinct fracture orientations (whether a set, most often the NW sub-vertical set, is highly dominant, or some orientation sets are missing). These high fracture intensity intervals are positively correlated to the presence of first and second generation minerals (epidote, calcite). No clear correlation for these fracture intensity intervals has been identified between holes. Based on these results the fracture frequency has been calculated in each rock domain for the

  20. Statistical model of fractures and deformations zones for Forsmark. Preliminary site description Forsmark area - version 1.2

    Energy Technology Data Exchange (ETDEWEB)

    La Pointe, Paul R. [Golder Associate Inc., Redmond, WA (United States); Olofsson, Isabelle; Hermanson, Jan [Golder Associates AB, Uppsala (Sweden)

    2005-04-01

    Compared to version 1.1, a much larger amount of data especially from boreholes is available. Both one-hole interpretation and Boremap indicate the presence of high and low fracture intensity intervals in the rock mass. The depth and width of these intervals varies from borehole to borehole but these constant fracture intensity intervals are contiguous and present quite sharp transitions. There is not a consistent pattern of intervals of high fracture intensity at or near to the surface. In many cases, the intervals of highest fracture intensity are considerably below the surface. While some fractures may have occurred or been reactivated in response to surficial stress relief, surficial stress relief does not appear to be a significant explanatory variable for the observed variations in fracture intensity. Data from the high fracture intensity intervals were extracted and statistical analyses were conducted in order to identify common geological factors. Stereoplots of fracture orientation versus depth for the different fracture intensity intervals were also produced for each borehole. Moreover percussion borehole data were analysed in order to identify the persistence of these intervals throughout the model volume. The main conclusions of these analyses are the following: The fracture intensity is conditioned by the rock domain, but inside a rock domain intervals of high and low fracture intensity are identified. The intervals of high fracture intensity almost always correspond to intervals with distinct fracture orientations (whether a set, most often the NW sub-vertical set, is highly dominant, or some orientation sets are missing). These high fracture intensity intervals are positively correlated to the presence of first and second generation minerals (epidote, calcite). No clear correlation for these fracture intensity intervals has been identified between holes. Based on these results the fracture frequency has been calculated in each rock domain for the

  1. Temperature and Humidity Profiles in the TqJoint Data Group of AIRS Version 6 Product for the Climate Model Evaluation

    Science.gov (United States)

    Ding, Feng; Fang, Fan; Hearty, Thomas J.; Theobald, Michael; Vollmer, Bruce; Lynnes, Christopher

    2014-01-01

    The Atmospheric Infrared Sounder (AIRS) mission is entering its 13th year of global observations of the atmospheric state, including temperature and humidity profiles, outgoing long-wave radiation, cloud properties, and trace gases. Thus AIRS data have been widely used, among other things, for short-term climate research and as an observational component for model evaluation. One instance is the fifth phase of the Coupled Model Intercomparison Project (CMIP5), which uses AIRS version 5 data in climate model evaluation. The NASA Goddard Earth Sciences Data and Information Services Center (GES DISC) is the home of processing, archiving, and distribution services for data from the AIRS mission. The GES DISC, in collaboration with the AIRS Project, released data from the version 6 algorithm in early 2013. The new algorithm represents a significant improvement over previous versions in terms of greater stability, yield, and quality of products. The ongoing Earth System Grid for next generation climate model research project, a collaborative effort of GES DISC and NASA JPL, will bring temperature and humidity profiles from AIRS version 6. The AIRS version 6 product adds a new "TqJoint" data group, which contains data for a common set of observations across water vapor and temperature at all atmospheric levels and is suitable for climate process studies. How different might the monthly temperature and humidity profiles in the "TqJoint" group be from those in the "Standard" group, where temperature and water vapor are not always valid at the same time? This study aims to answer the question by comprehensively comparing the temperature and humidity profiles from the "TqJoint" group and the "Standard" group. The comparison includes mean differences at different levels globally and over land and ocean. We are also working on examining the sampling differences between the "TqJoint" and "Standard" groups using MERRA data.

  2. Rock mechanics modelling of rock mass properties - summary of primary data. Preliminary site description Laxemar subarea - version 1.2

    Energy Technology Data Exchange (ETDEWEB)

    Lanaro, Flavio [Berg Bygg Konsult AB, Solna (Sweden); Oehman, Johan; Fredriksson, Anders [Golder Associates AB, Uppsala (Sweden)

    2006-05-15

    The results presented in this report are the summary of the primary data for the Laxemar Site Descriptive Modelling version 1.2. At this stage, laboratory tests on intact rock and fracture samples from boreholes KSH01A, KSH02A, KAV01 (already considered in Simpevarp SDM version 1.2) and boreholes KLX02 and KLX04 were available. Concerning the mechanical properties of the intact rock, the rock type 'granite to quartz monzodiorite' or 'Aevroe granite' (code 501044) was tested for the first time within the frame of the site descriptive modelling. The average uniaxial compressive strength and Young's modulus of the granite to quartz monzodiorite are 192 MPa and 72 GPa, respectively. The crack initiation stress is observed to be 0.5 times the uniaxial compressive strength for the same rock type. Non-negligible differences are observed between the statistics of the mechanical properties of the granite to quartz monzodiorite in boreholes KLX02 and KLX04. The available data on rock fractures were analysed to determine the mechanical properties of the different fracture sets at the site (based on tilt test results) and to determine systematic differences between the results obtained with different sample preparation techniques (based on direct shear tests). The tilt tests show that there are no significant differences in the mechanical properties due to fracture orientation. Thus, all fracture sets seem to have the same strength and deformability. The average peak friction angle for the Coulomb criterion of the fracture sets varies between 33.6 deg and 34.1 deg, while the average cohesion ranges between 0.46 and 0.52 MPa. The averages of the Coulomb residual friction angle and cohesion vary in the ranges 28.0 deg - 29.2 deg and 0.40-0.45 MPa, respectively. The only significant difference was observed in the average cohesion between fracture sets S_A and S_d. The direct shear tests show that the
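
    For reference, the Coulomb criterion behind the quoted friction angles and cohesions can be written, in LaTeX form, as

    \tau = c + \sigma_{n}\tan\varphi ,

    so that with the mid-range peak values from the abstract, c \approx 0.5\ \mathrm{MPa} and \varphi \approx 34^{\circ}, and an illustrative normal stress \sigma_{n} = 2\ \mathrm{MPa} (an assumption, not a site value), the peak shear strength is \tau \approx 0.5 + 2\tan 34^{\circ} \approx 1.85\ \mathrm{MPa}.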

  3. Infrastructure Upgrades to Support Model Longevity and New Applications: The Variable Infiltration Capacity Model Version 5.0 (VIC 5.0)

    Science.gov (United States)

    Nijssen, B.; Hamman, J.; Bohn, T. J.

    2015-12-01

    The Variable Infiltration Capacity (VIC) model is a macro-scale semi-distributed hydrologic model. VIC development began in the early 1990s and it has been used extensively, applied from basin to global scales. VIC has been applied in many use cases, including the construction of hydrologic data sets, trend analysis, data evaluation and assimilation, forecasting, coupled climate modeling, and climate change impact analysis. Ongoing applications of the VIC model include the University of Washington's drought monitor and forecast systems, and NASA's land data assimilation systems. The development of VIC version 5.0 focused on reconfiguring the legacy VIC source code to support a wider range of modern modeling applications. The VIC source code has been moved to a public Github repository to encourage participation by the model development community-at-large. The reconfiguration has separated the physical core of the model from the driver, which is responsible for memory allocation, pre- and post-processing and I/O. VIC 5.0 includes four drivers that use the same physical model core: classic, image, CESM, and Python. The classic driver supports legacy VIC configurations and runs in the traditional time-before-space configuration. The image driver includes a space-before-time configuration, netCDF I/O, and uses MPI for parallel processing. This configuration facilitates the direct coupling of streamflow routing, reservoir, and irrigation processes within VIC. The image driver is the foundation of the CESM driver, which couples VIC to CESM's CPL7 and a prognostic atmosphere. Finally, we have added a Python driver that provides access to the functions and datatypes of VIC's physical core from a Python interface. This presentation demonstrates how reconfiguring legacy source code extends the life and applicability of a research model.

  4. Versioning of printed products

    Science.gov (United States)

    Tuijn, Chris

    2005-01-01

    During the definition of a printed product in an MIS system, a lot of attention is paid to the production process. The MIS systems typically gather all process-related parameters at such a level of detail that they can determine what the exact cost will be to make a specific product. This information can then be used to make a quote for the customer. Considerably less attention is paid to the content of the products since this does not have an immediate impact on the production costs (assuming that the number of inks or plates is known in advance). The content management is typically carried out either by the prepress systems themselves or by dedicated workflow servers uniting all people that contribute to the manufacturing of a printed product. Special care must be taken when considering versioned products. With versioned products we here mean distinct products that have a number of pages or page layers in common. Typical examples are comic books that have to be printed in different languages. In this case, the color plates can be shared over the different versions and the black plate will be different. Other examples are nation-wide magazines or newspapers that have an area with regional pages or advertising leaflets in different languages or currencies. When considering versioned products, the content will become an important cost factor. First of all, the content management (and associated proofing and approval cycles) becomes much more complex and, therefore, the risk that mistakes will be made increases considerably. Secondly, the real production costs are very much content-dependent because the content will determine whether plates can be shared across different versions or not and how many press runs will be needed. In this paper, we will present a way to manage different versions of a printed product. First, we will introduce a data model for version management. Next, we will show how the content of the different versions can be supplied by the customer
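
    A minimal sketch of a version data model of the kind discussed above, in which versions share some plates (e.g. the colour separations) and differ in others (e.g. the black text plate); the class names and fields are illustrative assumptions, not an MIS vendor's schema.

    # Hedged sketch of a data model for versioned printed products: versions that
    # share plates can be grouped onto common press runs; names are illustrative.
    from dataclasses import dataclass, field

    @dataclass(frozen=True)
    class Plate:
        name: str          # e.g. "CMY-page-12" or "K-page-12-FR"

    @dataclass
    class Version:
        language: str
        plates: set = field(default_factory=set)

    def shared_plates(versions):
        """Plates common to every version can be printed in one shared run."""
        plate_sets = [v.plates for v in versions]
        return set.intersection(*plate_sets) if plate_sets else set()

    french = Version("fr", {Plate("CMY-page-12"), Plate("K-page-12-FR")})
    dutch = Version("nl", {Plate("CMY-page-12"), Plate("K-page-12-NL")})
    print([p.name for p in shared_plates([french, dutch])])   # -> ['CMY-page-12']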

  5. version 10

    African Journals Online (AJOL)

    lviljoen

    Keywords: therapeutic budget model; Cost Prevalence Index (CPI); medicine usage patterns; average medicine cost; public ... was to analyse the perspectives on the cost and use of medicine, and to develop a model based on therapeutic classification for a ... based upon the historical allocations (Blok et al. 2001:32).

  6. Evaluation of modeled land-atmosphere exchanges with a comprehensive water isotope fractionation scheme in version 4 of the Community Land Model

    Science.gov (United States)

    Wong, Tony E.; Nusbaumer, Jesse; Noone, David C.

    2017-06-01

    All physical process models and field observations are inherently imperfect, so there is a need to both (1) obtain measurements capable of constraining quantities of interest and (2) develop frameworks for assessment in which the desired processes and their uncertainties may be characterized. Incorporation of stable water isotopes into land surface schemes offers a complementary approach to constrain hydrological processes such as evapotranspiration, and yields acute insight into the hydrological and biogeochemical behaviors of the domain. Here a stable water isotopic scheme in the National Center for Atmospheric Research's version 4 of the Community Land Model (CLM4) is presented. An overview of the isotopic methods is given. Isotopic model results are compared to available data sets on site-level and global scales for validation. Comparisons of site-level soil moisture and isotope ratios reveal that surface water does not percolate as deeply into the soil as observed in field measurements. The broad success of the new model provides confidence in its use for a range of climate and hydrological studies, while the sensitivity of simulation results to kinetic processes stands as a reminder that new theoretical development and refinement of kinetic effect parameterizations is needed to achieve further improvements.

  7. Computer code SICHTA-85/MOD 1 for thermohydraulic and mechanical modelling of WWER fuel channel behaviour during LOCA and comparison with original version of the SICHTA code

    International Nuclear Information System (INIS)

    Bujan, A.; Adamik, V.; Misak, J.

    1986-01-01

    A brief description is presented of the extension of the SICHTA-83 computer code for analysing the thermal history of the fuel channel during large LOCAs by modelling the mechanical behaviour of fuel element cladding. The new version of the code treats heat transfer in the fuel-cladding gap in more detail because it also takes into account the mechanical (plastic) deformations of the cladding and the fuel-cladding interaction (magnitude of contact pressure). Also taken into account are the change in pressure of the gas filling the fuel element, a mechanical criterion for failure of the cladding, and the degree of blockage of the through-flow cross section for coolant flow in the fuel channel. A model computation of a WWER-440 LOCA provides a comparison of the new SICHTA-85/MOD 1 code with the results of the original SICHTA-83 version. (author)

  8. The Nexus Land-Use model version 1.0, an approach articulating biophysical potentials and economic dynamics to model competition for land-use

    Directory of Open Access Journals (Sweden)

    F. Souty

    2012-10-01

    Full Text Available Interactions between food demand, biomass energy and forest preservation are driving both food prices and land-use changes, regionally and globally. This study presents a new model called Nexus Land-Use version 1.0 which describes these interactions through a generic representation of agricultural intensification mechanisms within agricultural lands. The Nexus Land-Use model equations combine biophysics and economics into a single coherent framework to calculate crop yields, food prices, and resulting pasture and cropland areas within 12 regions inter-connected with each other by international trade. The representation of cropland and livestock production systems in each region relies on three components: (i) a biomass production function derived from the crop yield response function to inputs such as industrial fertilisers; (ii) a detailed representation of the livestock production system subdivided into an intensive and an extensive component, and (iii) a spatially explicit distribution of potential (maximal) crop yields prescribed from the Lund-Potsdam-Jena global vegetation model for managed Land (LPJmL). The economic principles governing decisions about land-use and intensification are adapted from the Ricardian rent theory, assuming cost minimisation for farmers. In contrast to the other land-use models linking economy and biophysics, crops are aggregated as a representative product in calories and intensification for the representative crop is a non-linear function of chemical inputs. The model equations and parameter values are first described in detail. Then, idealised scenarios exploring the impact of forest preservation policies or rising energy price on agricultural intensification are described, and their impacts on pasture and cropland areas are investigated.
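
    A minimal sketch of the two ingredients named above, a saturating (non-linear) yield response to fertiliser input and a cost-minimising choice of input level for a target production; the functional form and all numbers are illustrative assumptions, not the Nexus Land-Use equations.

    # Hedged sketch: non-linear (saturating) yield response to chemical inputs and
    # a brute-force cost minimisation over input intensity; all numbers are
    # illustrative assumptions, not parameters of the Nexus Land-Use model.
    def crop_yield(inputs, y_max=80.0, half_saturation=50.0):
        """Yield (e.g. GJ/ha) as a saturating function of fertiliser input (kg/ha)."""
        return y_max * inputs / (inputs + half_saturation)

    def cheapest_plan(target_production, land_cost=200.0, input_price=1.5,
                      max_input=400.0, step=1.0):
        """Pick the input level minimising the cost of meeting a production target."""
        best = None
        level = step
        while level <= max_input:
            area = target_production / crop_yield(level)        # ha needed
            cost = area * (land_cost + input_price * level)      # rent + inputs
            if best is None or cost < best[0]:
                best = (cost, level, area)
            level += step
        return best

    cost, inputs, area = cheapest_plan(target_production=10000.0)
    print(f"cost ~ {cost:,.0f}, input ~ {inputs:.0f} kg/ha, area ~ {area:.0f} ha")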

  9. PVWatts Version 5 Manual

    Energy Technology Data Exchange (ETDEWEB)

    Dobos, A. P.

    2014-09-01

    The NREL PVWatts calculator is a web application developed by the National Renewable Energy Laboratory (NREL) that estimates the electricity production of a grid-connected photovoltaic system based on a few simple inputs. PVWatts combines a number of sub-models to predict overall system performance, and includes several built-in parameters that are hidden from the user. This technical reference describes the sub-models, documents assumptions and hidden parameters, and explains the sequence of calculations that yield the final system performance estimate. This reference is applicable to the significantly revised version of PVWatts released by NREL in 2014.
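
    A deliberately simplified sketch of the kind of calculation chain such a tool performs (irradiance to DC power with a temperature correction, then an inverter efficiency); the coefficients below are generic illustrative values and are not the actual PVWatts sub-models or hidden parameters.

    # Hedged sketch of a simplified PV energy estimate; this is NOT the PVWatts
    # algorithm, just an illustration of the irradiance -> DC -> AC chain. The
    # temperature coefficient, cell-temperature rise and inverter efficiency are
    # assumed values for illustration only.
    def hourly_ac_energy_kwh(poa_irradiance_w_m2, ambient_temp_c,
                             dc_size_kw=4.0, temp_coeff_per_c=-0.0045,
                             inverter_eff=0.96):
        cell_temp_c = ambient_temp_c + 0.03 * poa_irradiance_w_m2   # crude rise
        temp_factor = 1.0 + temp_coeff_per_c * (cell_temp_c - 25.0)
        dc_kw = dc_size_kw * (poa_irradiance_w_m2 / 1000.0) * temp_factor
        return max(dc_kw, 0.0) * inverter_eff                        # kWh per hour

    # One bright hour: 800 W/m2 plane-of-array irradiance at 20 degC ambient.
    print(round(hourly_ac_energy_kwh(800.0, 20.0), 2))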

  10. Development of a short version of the dual process model scales: right-wing authoritarianism, social dominance orientation, dangerous and competitive worldviews

    Directory of Open Access Journals (Sweden)

    Dmitry S. Grigoryev

    2017-12-01

    Full Text Available Objective. The article describes a short version of the dual process model scales by J. Duckitt, which allow an integrated exploratory approach to the assessment of authoritarianism. Background. In contrast to foreign social psychology, this area of research is not widespread in Russia. Unfortunately, there are only a few studies in Russia that advance our understanding of the indicated problems, and these few studies raise more questions than they answer. This can be partly explained by the lack of appropriate, available, reliable and valid measures in Russian. The dual process model for the study of authoritarianism offers scales designed to measure: (1) right-wing authoritarianism, which reflects the motivation and attitudes towards maintaining and preserving social cohesion, order, stability, and collective security; (2) social dominance orientation, which reflects the motivation and attitudes towards maintaining and preserving dominance and superiority; (3) dangerous worldview, which reflects views of the social world as dangerous and threatening; and (4) competitive worldview, which reflects views of the social world as competitive and ferocious. Design. The data for the analysis were collected in a survey of 241 participants, mostly residents of Moscow (Central Federal District, Russia) and Ulyanovsk (Volga Federal District, Russia). Using confirmatory factor analysis, four measurement models containing different numbers of dimensions of the short version of the dual process model scales were tested. Cross-validation was also performed (N = 576). Results. The tested measurement models had acceptable reliability and validity indices. However, the best fit was shown by the model with a multidimensional structure in which all the subfactors were treated as separate constructs. Conclusion. The short version of the scales was successfully compiled, and the measures can be considered a reliable and valid instrument for the study of authoritarianism

  11. Users' manual for LEHGC: A Lagrangian-Eulerian Finite-Element Model of Hydrogeochemical Transport Through Saturated-Unsaturated Media. Version 1.1

    International Nuclear Information System (INIS)

    Yeh, Gour-Tsyh

    1995-11-01

    The computer program LEHGC is a Hybrid Lagrangian-Eulerian Finite-Element Model of HydroGeo-Chemical (LEHGC) Transport Through Saturated-Unsaturated Media. LEHGC iteratively solves two-dimensional transport and geochemical equilibrium equations and is a descendant of HYDROGEOCHEM, a strictly Eulerian finite-element reactive transport code. The hybrid Lagrangian-Eulerian scheme improves on the Eulerian scheme by allowing larger time steps to be used in the advection-dominant transport calculations. This causes less numerical dispersion and alleviates the problem of calculated negative concentrations at sharp concentration fronts. The code also is more computationally efficient than the strictly Eulerian version. LEHGC is designed for generic application to reactive transport problems associated with contaminant transport in subsurface media. Input to the program includes the geometry of the system, the spatial distribution of finite elements and nodes, the properties of the media, the potential chemical reactions, and the initial and boundary conditions. Output includes the spatial distribution of chemical element concentrations as a function of time and space and the chemical speciation at user-specified nodes. LEHGC Version 1.1 is a modification of LEHGC Version 1.0. The modification includes: (1) devising a tracking algorithm with the computational effort proportional to N, where N is the number of computational grid nodes, rather than N² as in LEHGC Version 1.0, (2) including multiple adsorbing sites and multiple ion-exchange sites, (3) using four preconditioned conjugate gradient methods for the solution of matrix equations, and (4) providing a model for some features of solute transport by colloids.
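    The advantage of a Lagrangian (particle-tracking) treatment of advection over a strictly Eulerian one can be shown with a toy semi-Lagrangian step: each node's new value is interpolated at the departure point found by tracking backward along the flow, which stays stable for Courant numbers above one and limits numerical dispersion to the interpolation error. This is only a schematic 1-D sketch under those assumptions, not the LEHGC algorithm.

```python
import numpy as np

def semi_lagrangian_step(conc, velocity, dx, dt):
    """One backward-tracking advection step on a 1-D periodic grid."""
    n = conc.size
    x = np.arange(n) * dx
    departure = (x - velocity * dt) % (n * dx)   # departure points, periodic domain
    idx = (departure // dx).astype(int)
    frac = departure / dx - idx
    # Linear interpolation of the old field at the departure points
    return (1.0 - frac) * conc[idx] + frac * conc[(idx + 1) % n]

# Advect a sharp front with a Courant number of 2.5, i.e. a time step larger
# than an explicit Eulerian scheme would normally allow.
c = np.where(np.arange(100) < 20, 1.0, 0.0)
for _ in range(10):
    c = semi_lagrangian_step(c, velocity=1.0, dx=1.0, dt=2.5)
print(round(c.sum(), 3))  # total mass is preserved for this constant velocity
```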

  12. Users' manual for LEHGC: A Lagrangian-Eulerian Finite-Element Model of Hydrogeochemical Transport Through Saturated-Unsaturated Media. Version 1.1

    Energy Technology Data Exchange (ETDEWEB)

    Yeh, Gour-Tsyh [Pennsylvania State Univ., University Park, PA (United States). Dept. of Civil and Environmental Engineering; Carpenter, S.L. [Univ. of New Mexico, Albuquerque, NM (United States). Dept. of Earth and Planetary Sciences; Hopkins, P.L.; Siegel, M.D. [Sandia National Labs., Albuquerque, NM (United States)

    1995-11-01

    The computer program LEHGC is a Hybrid Lagrangian-Eulerian Finite-Element Model of HydroGeo-Chemical (LEHGC) Transport Through Saturated-Unsaturated Media. LEHGC iteratively solves two-dimensional transport and geochemical equilibrium equations and is a descendant of HYDROGEOCHEM, a strictly Eulerian finite-element reactive transport code. The hybrid Lagrangian-Eulerian scheme improves on the Eulerian scheme by allowing larger time steps to be used in the advection-dominant transport calculations. This causes less numerical dispersion and alleviates the problem of calculated negative concentrations at sharp concentration fronts. The code also is more computationally efficient than the strictly Eulerian version. LEHGC is designed for generic application to reactive transport problems associated with contaminant transport in subsurface media. Input to the program includes the geometry of the system, the spatial distribution of finite elements and nodes, the properties of the media, the potential chemical reactions, and the initial and boundary conditions. Output includes the spatial distribution of chemical element concentrations as a function of time and space and the chemical speciation at user-specified nodes. LEHGC Version 1.1 is a modification of LEHGC Version 1.0. The modification includes: (1) devising a tracking algorithm with the computational effort proportional to N, where N is the number of computational grid nodes, rather than N² as in LEHGC Version 1.0, (2) including multiple adsorbing sites and multiple ion-exchange sites, (3) using four preconditioned conjugate gradient methods for the solution of matrix equations, and (4) providing a model for some features of solute transport by colloids.

  13. Land Boundary Conditions for the Goddard Earth Observing System Model Version 5 (GEOS-5) Climate Modeling System: Recent Updates and Data File Descriptions

    Science.gov (United States)

    Mahanama, Sarith P.; Koster, Randal D.; Walker, Gregory K.; Takacs, Lawrence L.; Reichle, Rolf H.; De Lannoy, Gabrielle; Liu, Qing; Zhao, Bin; Suarez, Max J.

    2015-01-01

    The Earth's land surface boundary conditions in the Goddard Earth Observing System version 5 (GEOS-5) modeling system were updated using recent high spatial and temporal resolution global data products. The updates include: (i) construction of a global 10-arcsec land-ocean lakes-ice mask; (ii) incorporation of a 10-arcsec Globcover 2009 land cover dataset; (iii) implementation of Level 12 Pfafstetter hydrologic catchments; (iv) use of hybridized SRTM global topography data; (v) construction of the HWSDv1.21-STATSGO2 merged global 30-arcsec soil mineral and carbon data in conjunction with a highly-refined soil classification system; (vi) production of diffuse visible and near-infrared 8-day MODIS albedo climatologies at 30-arcsec from the period 2001-2011; and (vii) production of the GEOLAND2 and MODIS merged 8-day LAI climatology at 30-arcsec for GEOS-5. The global data sets were preprocessed and used to construct global raster data files for the software (mkCatchParam) that computes parameters on catchment-tiles for various atmospheric grids. The updates also include a few bug fixes in mkCatchParam, as well as changes (improvements in algorithms, etc.) to mkCatchParam that allow it to produce tile-space parameters efficiently for high resolution AGCM grids. The update process also includes the construction of data files describing the vegetation type fractions, soil background albedo, nitrogen deposition and mean annual 2m air temperature to be used with the future Catchment CN model and the global stream channel network to be used with the future global runoff routing model. This report provides detailed descriptions of the data production process and data file format of each updated data set.

  14. Extended-range prediction trials using the global cloud/cloud-system resolving model NICAM and its new ocean-coupled version NICOCO

    Science.gov (United States)

    Miyakawa, Tomoki

    2017-04-01

    The global cloud/cloud-system resolving model NICAM and its new fully-coupled version NICOCO are run on one of the world's top-tier supercomputers, the K computer. NICOCO couples the full-3D ocean component COCO of the general circulation model MIROC using a general-purpose coupler, Jcup. We carried out multiple MJO simulations using NICAM and the new ocean-coupled version NICOCO to examine their extended-range MJO prediction skills and the impact of ocean coupling. NICAM performs excellently in terms of MJO prediction, maintaining a valid skill up to 27 days after the model is initialized (Miyakawa et al. 2014). As is the case in most global models, ocean coupling frees the model from being anchored by the observed SST and allows the model climate to drift further away from reality compared to the atmospheric version of the model. Thus, it is important to evaluate the model bias, and in an initial value problem such as the seasonal extended-range prediction, it is essential to be able to distinguish the actual signal from the early transition of the model from the observed state to its own climatology. Since NICAM is a highly resource-demanding model, evaluation and tuning of the model climatology (order of years) is challenging. Here we focus on the initial 100 days to estimate the early drift of the model, and subsequently evaluate MJO prediction skills of NICOCO. Results show that in the initial 100 days, NICOCO forms a La Niña-like SST bias compared to observations, with a warmer Maritime Continent warm pool and a cooler equatorial central Pacific. The enhanced convection over the Maritime Continent associated with this bias projects onto the real-time multivariate MJO indices (RMM; Wheeler and Hendon 2004) and contaminates the MJO skill score. However, the bias does not appear to degrade the MJO signal severely. The model maintains a valid MJO prediction skill up to nearly 4 weeks when evaluated after linearly removing the early drift component estimated from

  15. Design of an integrated source-risk model for radon (Version 1.0); Ontwerp Geintegreerd bron-risicomodel voor radon (Versie 1.0)

    Energy Technology Data Exchange (ETDEWEB)

    Laheij, G.M.H.; Stoop, P.; De Vries, L.J.; Aldenkamp, F.J.

    1995-01-01

    In 1993, a definition study was performed for the development of a model describing the complete chain source - exhalation - dispersion - exposure - effect/risk for radon. The advantages of using a source-risk model are that risk calculations are standardized, the effects of measures applied to different parts of the source-risk chain can be compared, and the most important parameters within the total source-risk chain can be determined. The models presently available in the Netherlands were investigated by interviewing several model owners at KVI, TNO and RIVM. The models were screened for completeness, validation and operational status. The investigation made clear that, by choosing the most suitable model for each part of the source-risk chain, a source-risk chain model for radon can be realised. An organisational form of the source-risk model was recommended in which only the simple models are administered at a central site; the other models are operated and administered by the model owners. This report describes the design study for version 1.0 of the source-risk model. Procedures and requirements for the interaction between the various models and the database included in the source-risk model are given. These are worked out in a working script that specifies the responsibilities of the model owners and of the administrator, as well as the procedures for model calculations and queries on the database. A data dictionary is also given, describing all parameters used within the source-risk chain, together with a contract containing the agreements that ensure the operational status of the source-risk model. Furthermore, the parts of the model which still need to be developed, i.e. the information system (the database), the transfer format and the balance model, are described. (Abstract Truncated)

  16. Systematic comparison of barriers for heavy-ion fusion calculated on the basis of the double-folding model by employing two versions of nucleon–nucleon interaction

    Energy Technology Data Exchange (ETDEWEB)

    Gontchar, I. I. [Omsk State Transport University (Russian Federation); Chushnyakova, M. V., E-mail: maria.chushnyakova@gmail.com [Omsk State Technical University (Russian Federation)

    2016-07-15

    A systematic calculation of barriers for heavy-ion fusion was performed on the basis of the double-folding model by employing two versions of an effective nucleon–nucleon interaction: the M3Y interaction and the Migdal interaction. Nuclear densities were taken from calculations by the Hartree–Fock method with the SKX coefficients. The calculations reveal that the fusion barrier is higher when the Migdal interaction is employed than when the M3Y interaction is employed. In view of this, the use of the Migdal interaction in describing heavy-ion fusion is questionable.

  17. Study of the photoproduction of the vector meson Φ(1020) and the hyperon Λ(1520) from the production threshold up to a photon energy of 2.65 GeV with SAPHIR

    International Nuclear Information System (INIS)

    Wiegers, B.

    2001-05-01

    The photoproduction of the vector meson φ(1020) and the hyperon Λ(1520) has been measured in the final state pK⁺K⁻ from threshold up to 2.65 GeV using the high duty-factor electron accelerator ELSA and the 4π detector system SAPHIR. The t-dependence of φ(1020) production shows an exponential behavior, as expected for diffractive production. s-channel helicity conservation can be seen in the decay angular distribution in the helicity frame. The decay angular distribution in the Gottfried-Jackson frame is not compatible with the exchange of a Pomeron in the t-channel. For the first time, differential cross sections of Λ(1520) photoproduction from threshold have been measured. The production angular distribution and the decay angular distribution in the Gottfried-Jackson frame indicate K* exchange in the t-channel. (orig.)

  18. Development of a new version of the Liverpool Malaria Model. I. Refining the parameter settings and mathematical formulation of basic processes based on a literature review

    Directory of Open Access Journals (Sweden)

    Jones Anne E

    2011-02-01

    Full Text Available Abstract Background A warm and humid climate triggers several water-associated diseases such as malaria. Climate- or weather-driven malaria models therefore allow for a better understanding of malaria transmission dynamics. The Liverpool Malaria Model (LMM) is a mathematical-biological model of malaria parasite dynamics using daily temperature and precipitation data. In this study, the parameter settings of the LMM are refined and a new mathematical formulation of key processes related to the growth and size of the vector population is developed. Methods One of the most comprehensive studies to date in terms of gathering entomological and parasitological information from the literature was undertaken for the development of a new version of an existing malaria model. This knowledge was needed to justify new settings of various model parameters and motivated changes to the mathematical formulation of the LMM. Results The first part of the present study developed an improved set of parameter settings and a revised mathematical formulation of the LMM. Important modules of the original LMM version were enhanced in order to achieve a higher biological and physical accuracy. The oviposition as well as the survival of immature mosquitoes were adjusted to field conditions via the application of a fuzzy distribution model. Key model parameters, including the mature age of mosquitoes, the survival probability of adult mosquitoes, the human blood index, the mosquito-to-human (and human-to-mosquito) transmission efficiency, the human infectious age, the recovery rate, as well as the gametocyte prevalence, were reassessed by means of entomological and parasitological observations. This paper also revealed that various malaria variables lack the information from field studies needed to set them properly in a malaria modelling approach. Conclusions Due to the multitude of model parameters and the uncertainty involved in the setting of parameters, an extensive
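    As one concrete example of the kind of temperature-driven process such models parameterise, the length of the sporogonic cycle is often taken from a classic degree-day relation, and combining it with a daily adult survival probability gives the chance that a mosquito lives long enough to become infectious. The sketch below uses that generic relation with illustrative constants; it is not the calibrated LMM formulation.

```python
def sporogonic_cycle_days(mean_temp_c, degree_days=111.0, t_min=16.0):
    """Degree-day estimate of parasite development time in the mosquito.

    Classic Detinova-type relation: development completes after a fixed
    number of degree-days above a minimum temperature. Constants are
    illustrative, not the calibrated LMM settings.
    """
    if mean_temp_c <= t_min:
        return float("inf")          # too cold: development never completes
    return degree_days / (mean_temp_c - t_min)

def survive_to_infectiousness(daily_survival, mean_temp_c):
    """Probability an adult mosquito survives the full sporogonic cycle."""
    n = sporogonic_cycle_days(mean_temp_c)
    return 0.0 if n == float("inf") else daily_survival ** n

print(f"{survive_to_infectiousness(0.90, 25.0):.3f}")  # e.g. at 25 deg C
```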

  19. Updating sea spray aerosol emissions in the Community Multiscale Air Quality (CMAQ) model version 5.0.2

    Data.gov (United States)

    U.S. Environmental Protection Agency — The uploaded data consists of the BRACE Na aerosol observations paired with CMAQ model output, the updated model's parameterization of sea salt aerosol emission size...

  20. FPL-PELPS : a price endogenous linear programming system for economic modeling, supplement to PELPS III, version 1.1.

    Science.gov (United States)

    Patricia K. Lebow; Henry Spelter; Peter J. Ince

    2003-01-01

    This report provides documentation and user information for FPL-PELPS, a personal computer price endogenous linear programming system for economic modeling. Originally developed to model the North American pulp and paper industry, FPL-PELPS follows its predecessors in allowing the modeling of any appropriate sector to predict consumption, production and capacity by...

  1. The Revised Child Anxiety and Depression Scale-Short Version: scale reduction via exploratory bifactor modeling of the broad anxiety factor.

    Science.gov (United States)

    Ebesutani, Chad; Reise, Steven P; Chorpita, Bruce F; Ale, Chelsea; Regan, Jennifer; Young, John; Higa-McMillan, Charmaine; Weisz, John R

    2012-12-01

    Using a school-based (N = 1,060) and clinic-referred (N = 303) youth sample, the authors developed a 25-item shortened version of the Revised Child Anxiety and Depression Scale (RCADS) using Schmid-Leiman exploratory bifactor analysis to reduce client burden and administration time and thus improve the transportability characteristics of this youth anxiety and depression measure. Results revealed that all anxiety items primarily reflected a single "broad anxiety" dimension, which informed the development of a reduced 15-item Anxiety Total scale. Although specific DSM-oriented anxiety subscales were not included in this version, the items comprising the Anxiety Total scale were evenly pulled from the 5 anxiety-related content domains from the original RCADS. The resultant 15-item Anxiety Total scale evidenced significant correspondence with anxiety diagnostic groups based on structured clinical interviews. The scores from the 10-item Depression Total scale (retained from the original version) were also associated with acceptable reliability in the clinic-referred and school-based samples (α = .80 and .79, respectively); this is in contrast to the alternate 5-item shortened RCADS Depression Total scale previously developed by Muris, Meesters, and Schouten (2002), which evidenced depression scores of unacceptable reliability (α = .63). The shortened RCADS developed in the present study thus balances efficiency, breadth, and scale score reliability in a way that is potentially useful for repeated measurement in clinical settings as well as wide-scale screenings that assess anxiety and depressive problems. These future applications are discussed, as are recommendations for continued use of exploratory bifactor modeling in scale development.
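    Scale-score reliability of the kind reported here (Cronbach's α of .80 and .79) is computed directly from item-level responses. A minimal sketch follows; the response matrix is made up purely for illustration and is not RCADS data.

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for an (n_respondents x n_items) response matrix."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()   # sum of item variances
    total_var = items.sum(axis=1).var(ddof=1)     # variance of total scores
    return (k / (k - 1.0)) * (1.0 - item_vars / total_var)

# Illustrative 0-3 Likert responses for 6 respondents on 4 items
responses = np.array([[0, 1, 1, 0],
                      [2, 2, 3, 2],
                      [1, 1, 2, 1],
                      [3, 3, 3, 2],
                      [0, 0, 1, 1],
                      [2, 3, 2, 2]])
print(f"alpha = {cronbach_alpha(responses):.2f}")
```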

  2. On the Lack of Stratospheric Dynamical Variability in Low-top Versions of the CMIP5 Models

    Science.gov (United States)

    Charlton-Perez, Andrew J.; Baldwin, Mark P.; Birner, Thomas; Black, Robert X.; Butler, Amy H.; Calvo, Natalia; Davis, Nicholas A.; Gerber, Edwin P.; Gillett, Nathan; Hardiman, Steven; hide

    2013-01-01

    We describe the main differences in simulations of stratospheric climate and variability by models within the fifth Coupled Model Intercomparison Project (CMIP5) that have a model top above the stratopause and relatively fine stratospheric vertical resolution (high-top), and those that have a model top below the stratopause (low-top). Although the simulation of mean stratospheric climate by the two model ensembles is similar, the low-top model ensemble has very weak stratospheric variability on daily and interannual time scales. The frequency of major sudden stratospheric warming events is strongly underestimated by the low-top models with less than half the frequency of events observed in the reanalysis data and high-top models. The lack of stratospheric variability in the low-top models affects their stratosphere-troposphere coupling, resulting in short-lived anomalies in the Northern Annular Mode, which do not produce long-lasting tropospheric impacts, as seen in observations. The lack of stratospheric variability, however, does not appear to have any impact on the ability of the low-top models to reproduce past stratospheric temperature trends. We find little improvement in the simulation of decadal variability for the high-top models compared to the low-top, which is likely related to the fact that neither ensemble produces a realistic dynamical response to volcanic eruptions.

  3. The CSIRO Mk3L climate system model version 1.0 – Part 1: Description and evaluation

    Directory of Open Access Journals (Sweden)

    S. J. Phipps

    2011-06-01

    Full Text Available The CSIRO Mk3L climate system model is a coupled general circulation model, designed primarily for millennial-scale climate simulations and palaeoclimate research. Mk3L includes components which describe the atmosphere, ocean, sea ice and land surface, and combines computational efficiency with a stable and realistic control climatology. This paper describes the model physics and software, analyses the control climatology, and evaluates the ability of the model to simulate the modern climate.

    Mk3L incorporates a spectral atmospheric general circulation model, a z-coordinate ocean general circulation model, a dynamic-thermodynamic sea ice model and a land surface scheme with static vegetation. The source code is highly portable, and has no dependence upon proprietary software. The model distribution is freely available to the research community. A 1000-yr climate simulation can be completed in around one-and-a-half months on a typical desktop computer, with greater throughput being possible on high-performance computing facilities.

    Mk3L produces realistic simulations of the larger-scale features of the modern climate, although with some biases on the regional scale. The model also produces reasonable representations of the leading modes of internal climate variability in both the tropics and extratropics. The control state of the model exhibits a high degree of stability, with only a weak cooling trend on millennial timescales. Ongoing development work aims to improve the model climatology and transform Mk3L into a comprehensive earth system model.

  4. Groundwater model of the Great Basin carbonate and alluvial aquifer system version 3.0: Incorporating revisions in southwestern Utah and east central Nevada

    Science.gov (United States)

    Brooks, Lynette E.

    2017-12-01

    The groundwater model described in this report is a new version of previously published steady-state numerical groundwater flow models of the Great Basin carbonate and alluvial aquifer system, and was developed in conjunction with U.S. Geological Survey studies in Parowan, Pine, and Wah Wah Valleys, Utah. This version of the model is GBCAAS v. 3.0 and supersedes previous versions. The objectives of the model for Parowan Valley were to simulate revised conceptual estimates of recharge and discharge, to estimate simulated aquifer storage properties and the amount of reduction in storage as a result of historical groundwater withdrawals, and to assess reduction in groundwater withdrawals necessary to mitigate groundwater-level declines in the basin. The objectives of the model for the area near Pine and Wah Wah Valleys were to recalibrate the model using new observations of groundwater levels and evapotranspiration of groundwater; to provide new estimates of simulated recharge, hydraulic conductivity, and interbasin flow; and to simulate the effects of proposed groundwater withdrawals on the regional flow system. Meeting these objectives required the addition of 15 transient calibration stress periods and 14 projection stress periods, aquifer storage properties, historical withdrawals in Parowan Valley, and observations of water-level changes in Parowan Valley. Recharge in Parowan Valley and withdrawal from wells in Parowan Valley and two nearby wells in Cedar City Valley vary for each calibration stress period representing conditions from March 1940 to November 2013. Stresses, including recharge, are the same in each stress period as in the steady-state stress period for all areas outside of Parowan Valley. The model was calibrated to transient conditions only in Parowan Valley. Simulated storage properties outside of Parowan Valley were set the same as the Parowan Valley properties and are not considered calibrated. Model observations in GBCAAS v. 3.0 are

  5. Reliability and construct validity of the Bahasa Malaysia version of transtheoretical model (TTM) questionnaire for smoking cessation and relapse among Malaysian adult.

    Science.gov (United States)

    Yasin, Siti Munira; Taib, Khairul Mizan; Zaki, Rafdzah Ahmad

    2011-01-01

    The transtheoretical model (TTM) has been used as one of the major constructs in developing effective cognitive behavioural interventions for smoking cessation and relapse prevention in Western societies. This study aimed to examine the reliability and construct validity of the translated Bahasa Malaysia version of the TTM questionnaire among adult smokers in Klang Valley, Malaysia. The sample consisted of 40 smokers from four different worksites in Klang Valley. A 26-item TTM questionnaire was administered, and the same set was administered again one week later. The questionnaire consisted of three measures: decisional balance, temptations and impact of smoking. Construct validity was measured by factor analysis, and reliability by Cronbach's alpha (internal consistency) and test-retest correlation. Results revealed that Cronbach's alpha coefficients for the items were: decisional balance (0.84; 0.74) and temptations (0.89; 0.54; 0.85). The values for test-retest correlation were all above 0.4. In addition, factor analysis suggested two meaningful common factors for decisional balance and three for temptations. This is consistent with the original construct of the TTM questionnaire. Overall, the results demonstrated that construct validity and reliability were acceptable for all items. In conclusion, the Bahasa Malaysia version of the TTM questionnaire is a reliable and valid tool in ass.

  6. Interactive lakes in the Canadian Regional Climate Model, version 5: the role of lakes in the regional climate of North America

    Directory of Open Access Journals (Sweden)

    Bernard Dugas

    2012-02-01

    Full Text Available Two one-dimensional (1-D) column lake models have been coupled interactively with a developmental version of the Canadian Regional Climate Model. Multidecadal reanalyses-driven simulations with and without lakes revealed the systematic biases of the model and the impact of lakes on the simulated North American climate. The presence of lakes strongly influences the climate of the lake-rich region of the Canadian Shield. Due to their large thermal inertia, lakes act to dampen the diurnal and seasonal cycle of low-level air temperature. In late autumn and winter, ice-free lakes induce large sensible and latent heat fluxes, resulting in a strong enhancement of precipitation downstream of the Laurentian Great Lakes, which is referred to as the snow belt. The FLake (FL) and Hostetler (HL) lake models perform adequately for small subgrid-scale lakes and for large resolved lakes with shallow depth, located in temperate or warm climatic regions. Both lake models exhibit specific strengths and weaknesses. For example, HL simulates too rapid spring warming and too warm surface temperature, especially in large and deep lakes; FL tends to damp the diurnal cycle of surface temperature. An adaptation of 1-D lake models might be required for an adequate simulation of large and deep lakes.

  7. 78 FR 32224 - Availability of Version 3.1.2 of the Connect America Fund Phase II Cost Model; Additional...

    Science.gov (United States)

    2013-05-29

    ... (202) 863-2893, facsimile (202) 863-2898, or via the Internet at http://www.bcpiweb.com . In addition, the Virtual Workshop may be accessed via the Internet at http://www.fcc.gov/blog/wcb-cost-model... anonymous comments posted during the workshop in reaching decisions regarding the model. Participants should...

  8. A Multi-Year Plan for Enhancing Turbulence Modeling in Hydra-TH Revised and Updated Version 2.0

    Energy Technology Data Exchange (ETDEWEB)

    Smith, Thomas M. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Berndt, Markus [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Baglietto, Emilio [Massachusetts Inst. of Technology (MIT), Cambridge, MA (United States); Magolan, Ben [Massachusetts Inst. of Technology (MIT), Cambridge, MA (United States)

    2015-10-01

    The purpose of this report is to document a multi-year plan for enhancing turbulence modeling in Hydra-TH for the Consortium for Advanced Simulation of Light Water Reactors (CASL) program. Hydra-TH is being developed to meet the high-fidelity, high-Reynolds-number, CFD-based thermal-hydraulic simulation needs of the program. This work is being conducted within the thermal hydraulics methods (THM) focus area. This report is an extension of THM CASL milestone L3:THM.CFD.P10.02 [33] (March 2015) and picks up where it left off. It will also serve to meet the requirements of CASL THM level three milestone L3:THM.CFD.P11.04, scheduled for completion September 30, 2015. The objectives of this plan will be met by maturation of recently added turbulence models, strategic design and development of new models, and systematic and rigorous testing of existing and new models and model extensions. While multi-phase turbulent flow simulations are important to the program, only single-phase modeling is considered in this report. Large eddy simulation (LES) is also an important modeling methodology; however, at least in the first year, the focus is on steady-state Reynolds-averaged Navier-Stokes (RANS) turbulence modeling.

  9. APEX user`s guide - (Argonne production, expansion, and exchange model for electrical systems), version 3.0

    Energy Technology Data Exchange (ETDEWEB)

    VanKuiken, J.C.; Veselka, T.D.; Guziel, K.A.; Blodgett, D.W.; Hamilton, S.; Kavicky, J.A.; Koritarov, V.S.; North, M.J.; Novickas, A.A.; Paprockas, K.R. [and others

    1994-11-01

    This report describes operating procedures and background documentation for the Argonne Production, Expansion, and Exchange Model for Electrical Systems (APEX). This modeling system was developed to provide the U.S. Department of Energy, Division of Fossil Energy, Office of Coal and Electricity with in-house capabilities for addressing policy options that affect electrical utilities. To meet this objective, Argonne National Laboratory developed a menu-driven programming package that enables the user to develop and conduct simulations of production costs, system reliability, spot market network flows, and optimal system capacity expansion. The APEX system consists of three basic simulation components, supported by various databases and data management software. The components include (1) the investigation of Costs and Reliability in Utility Systems (ICARUS) model, (2) the Spot Market Network (SMN) model, and (3) the Production and Capacity Expansion (PACE) model. The ICARUS model provides generating-unit-level production-cost and reliability simulations with explicit recognition of planned and unplanned outages. The SMN model addresses optimal network flows with recognition of marginal costs, wheeling charges, and transmission constraints. The PACE model determines long-term (e.g., longer than 10 years) capacity expansion schedules on the basis of candidate expansion technologies and load growth estimates. In addition, the Automated Data Assembly Package (ADAP) and case management features simplify user-input requirements. The ADAP, ICARUS, and SMN modules are described in detail. The PACE module is expected to be addressed in a future publication.

  10. The new version of the Institute of Numerical Mathematics Sigma Ocean Model (INMSOM) for simulation of Global Ocean circulation and its variability

    Science.gov (United States)

    Gusev, Anatoly; Fomin, Vladimir; Diansky, Nikolay; Korshenko, Evgeniya

    2017-04-01

    In this paper, we present an improved version of the ocean general circulation sigma-model developed at the Institute of Numerical Mathematics of the Russian Academy of Sciences (INM RAS). The previous version, referred to as INMOM (Institute of Numerical Mathematics Ocean Model), is used as the oceanic component of the IPCC climate system model INMCM (Institute of Numerical Mathematics Climate Model; Volodin et al. 2010, 2013). In addition, INMOM was the only sigma-model used for simulations according to the CORE-II scenario (Danabasoglu et al. 2014, 2016; Downes et al. 2015; Farneti et al. 2015). In general, INMOM results are comparable to those of other OGCMs and were used for investigation of climatic variations in the North Atlantic (Gusev and Diansky 2014). However, detailed analysis of some CORE-II INMOM results revealed shortcomings of the INMOM leading to considerable errors in reproducing some ocean characteristics. For example, the mass transport in the Antarctic Circumpolar Current (ACC) was overestimated, and there were noticeable errors in reproducing the thermohaline structure of the ocean. After analysing the previous results, a new version of the OGCM was developed. It was decided to entitle it INMSOM (Institute of Numerical Mathematics Sigma Ocean Model). The new title allows one to distinguish the new model, first, from its older version and, second, from another z-model developed at the INM RAS and referred to as INMIO (Institute of Numerical Mathematics and Institute of Oceanology ocean model) (Ushakov et al. 2016). There were numerous modifications to the model, some of which are as follows. 1) Formulation of the ocean circulation problem in terms of a full free surface, taking into account water amount variation. 2) Use of a tensor form of the lateral viscosity operator that is invariant to rotation. 3) Use of isopycnal diffusion, including Gent-McWilliams mixing. 4) Computation of atmospheric forcing according to the NCAR methodology (Large and Yeager 2009). 5

  11. FMCSA Safety Program Effectiveness Measurement: Carrier Intervention Effectiveness Model (CIEM), Version 1.1 Report for Fiscal Year 2014 Interventions

    Science.gov (United States)

    2018-04-01

    The Federal Motor Carrier Safety Administration (FMCSA), in cooperation with the John A. Volpe National Transportation Systems Center (Volpe), has developed a quantitative model to measure the effectiveness of motor carrier interventions in terms of ...

  12. FMCSA Safety Program Effectiveness Measurement: Carrier Intervention Effectiveness Model, Version 1.1-Report for FY 2014 Interventions - Analysis Brief

    Science.gov (United States)

    2018-04-01

    The Carrier Intervention Effectiveness Model (CIEM) provides the Federal Motor Carrier Safety Administration (FMCSA) with a tool for measuring the safety benefits of carrier interventions conducted under the Compliance, Safety, Accountability (CSA) e...

  13. FMCSA safety program effectiveness measurement : Carrier Intervention Effectiveness Model (CIEM), Version 1.1, report for fiscal year 2013 interventions.

    Science.gov (United States)

    2017-04-01

    The Federal Motor Carrier Safety Administration (FMCSA), in cooperation with the John A. Volpe National Transportation Systems Center (Volpe), has developed a quantitative model to measure the effectiveness of motor carrier interventions in terms of ...

  14. High Resolution Lidar Digital Elevation Models and Low Resolution Shaded Relief Maps of Antarctica from USGS, Version 1

    Data.gov (United States)

    National Aeronautics and Space Administration — Lidar high-resolution elevation digital elevation model data and low-resolution shaded relief maps of Antarctica are available for download from the U.S. Antarctic...

  15. FMCSA safety program effectiveness measurement: carrier intervention effectiveness model (CIEM), version 1.1 : report for fiscal year 2013 interventions.

    Science.gov (United States)

    2017-04-01

    The Federal Motor Carrier Safety Administration (FMCSA), in cooperation with the John A. Volpe National Transportation Systems Center (Volpe), has developed a quantitative model to measure the effectiveness of motor carrier interventions in terms of ...

  16. FMCSA safety program effectiveness measurement : carrier intervention effectiveness model, version 1.1 - report for FY 2013 interventions : analysis brief

    Science.gov (United States)

    2017-04-01

    The Carrier Intervention Effectiveness Model (CIEM) provides the Federal Motor Carrier Safety Administration (FMCSA) with a tool for measuring the safety benefits of carrier interventions conducted under the Compliance, Safety, Accountability (CSA) e...

  17. SAPHIRE 8 Volume 4 - Tutorial

    Energy Technology Data Exchange (ETDEWEB)

    C. L. Smith; S. T. Wood; S. T. Beck

    2011-03-01

    This report was prepared as an account of work sponsored by an agency of the United States Government. Neither the United States Government nor any agency thereof, nor any of their employees, makes any warranty, expressed or implied, or assumes any legal liability or responsibility for any third party’s use, or the results of such use, of any information, apparatus, product or process disclosed in this report, or represents that its use by such third party would not infringe privately owned rights.

  18. Implementation of methane cycling for deep-time global warming simulations with the DCESS Earth system model (version 1.2)

    Directory of Open Access Journals (Sweden)

    G. Shaffer

    2017-11-01

    Full Text Available Geological records reveal a number of ancient, large and rapid negative excursions of the carbon-13 isotope. Such excursions can only be explained by massive injections of depleted carbon to the Earth system over a short duration. These injections may have forced strong global warming events, sometimes accompanied by mass extinctions such as the Triassic-Jurassic and end-Permian extinctions 201 and 252 million years ago, respectively. In many cases, evidence points to methane as the dominant form of injected carbon, whether as thermogenic methane formed by magma intrusions through overlying carbon-rich sediment or from warming-induced dissociation of methane hydrate, a solid compound of methane and water found in ocean sediments. As a consequence of the ubiquity and importance of methane in major Earth events, Earth system models for addressing such events should include a comprehensive treatment of methane cycling but such a treatment has often been lacking. Here we implement methane cycling in the Danish Center for Earth System Science (DCESS) model, a simplified but well-tested Earth system model of intermediate complexity. We use a generic methane input function that allows variation in input type, size, timescale and ocean–atmosphere partition. To be able to treat such massive inputs more correctly, we extend the model to deal with ocean suboxic/anoxic conditions and with radiative forcing and methane lifetimes appropriate for high atmospheric methane concentrations. With this new model version, we carried out an extensive set of simulations for methane inputs of various sizes, timescales and ocean–atmosphere partitions to probe model behavior. We find that larger methane inputs over shorter timescales with more methane dissolving in the ocean lead to ever-increasing ocean anoxia with consequences for ocean life and global carbon cycling. Greater methane input directly to the atmosphere leads to more warming and, for example

  19. Implementation of methane cycling for deep-time global warming simulations with the DCESS Earth system model (version 1.2)

    Science.gov (United States)

    Shaffer, Gary; Fernández Villanueva, Esteban; Rondanelli, Roberto; Olaf Pepke Pedersen, Jens; Malskær Olsen, Steffen; Huber, Matthew

    2017-11-01

    Geological records reveal a number of ancient, large and rapid negative excursions of the carbon-13 isotope. Such excursions can only be explained by massive injections of depleted carbon to the Earth system over a short duration. These injections may have forced strong global warming events, sometimes accompanied by mass extinctions such as the Triassic-Jurassic and end-Permian extinctions 201 and 252 million years ago, respectively. In many cases, evidence points to methane as the dominant form of injected carbon, whether as thermogenic methane formed by magma intrusions through overlying carbon-rich sediment or from warming-induced dissociation of methane hydrate, a solid compound of methane and water found in ocean sediments. As a consequence of the ubiquity and importance of methane in major Earth events, Earth system models for addressing such events should include a comprehensive treatment of methane cycling but such a treatment has often been lacking. Here we implement methane cycling in the Danish Center for Earth System Science (DCESS) model, a simplified but well-tested Earth system model of intermediate complexity. We use a generic methane input function that allows variation in input type, size, timescale and ocean-atmosphere partition. To be able to treat such massive inputs more correctly, we extend the model to deal with ocean suboxic/anoxic conditions and with radiative forcing and methane lifetimes appropriate for high atmospheric methane concentrations. With this new model version, we carried out an extensive set of simulations for methane inputs of various sizes, timescales and ocean-atmosphere partitions to probe model behavior. We find that larger methane inputs over shorter timescales with more methane dissolving in the ocean lead to ever-increasing ocean anoxia with consequences for ocean life and global carbon cycling. Greater methane input directly to the atmosphere leads to more warming and, for example, greater carbon dioxide release
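    The idea of a generic input function with an adjustable size, timescale and ocean–atmosphere partition can be sketched as follows. The boxcar pulse shape and all parameter values are illustrative assumptions; this is not the DCESS input function.

```python
import numpy as np

def methane_input(t_years, total_pg_c, duration_years, atm_fraction):
    """Illustrative carbon injection rate (Pg C / yr) split between reservoirs.

    A simple boxcar pulse: 'total_pg_c' of carbon released at a constant rate
    over 'duration_years', with a fraction 'atm_fraction' going directly to
    the atmosphere and the remainder dissolving in the ocean.
    """
    rate = np.where((t_years >= 0.0) & (t_years < duration_years),
                    total_pg_c / duration_years, 0.0)
    return atm_fraction * rate, (1.0 - atm_fraction) * rate

t = np.linspace(-1000.0, 20000.0, 2101)
to_atm, to_ocean = methane_input(t, total_pg_c=5000.0,
                                 duration_years=10000.0, atm_fraction=0.5)
print(f"peak atmospheric input: {to_atm.max():.2f} Pg C/yr")
```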

  20. GENII Version 2 Users’ Guide

    Energy Technology Data Exchange (ETDEWEB)

    Napier, Bruce A.

    2004-03-08

    The GENII Version 2 computer code was developed for the Environmental Protection Agency (EPA) at Pacific Northwest National Laboratory (PNNL) to incorporate the internal dosimetry models recommended by the International Commission on Radiological Protection (ICRP) and the radiological risk estimating procedures of Federal Guidance Report 13 into updated versions of existing environmental pathway analysis models. The resulting environmental dosimetry computer codes are compiled in the GENII Environmental Dosimetry System. The GENII system was developed to provide a state-of-the-art, technically peer-reviewed, documented set of programs for calculating radiation dose and risk from radionuclides released to the environment. The codes were designed with the flexibility to accommodate input parameters for a wide variety of generic sites. Operation of a new version of the codes, GENII Version 2, is described in this report. Two versions of the GENII Version 2 code system are available, a full-featured version and a version specifically designed for demonstrating compliance with the dose limits specified in 40 CFR 61.93(a), the National Emission Standards for Hazardous Air Pollutants (NESHAPS) for radionuclides. The only differences lie in the limitation of the capabilities of the user to change specific parameters in the NESHAPS version. This report describes the data entry, accomplished via interactive, menu-driven user interfaces. Default exposure and consumption parameters are provided for both the average (population) and maximum individual; however, these may be modified by the user. Source term information may be entered as radionuclide release quantities for transport scenarios, or as basic radionuclide concentrations in environmental media (air, water, soil). For input of basic or derived concentrations, decay of parent radionuclides and ingrowth of radioactive decay products prior to the start of the exposure scenario may be considered. A single code run can
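    Decay of a parent radionuclide and ingrowth of its decay product prior to the start of the exposure scenario follow the standard two-member Bateman equations; a minimal sketch of that calculation is given below. The half-lives and activity are illustrative values only, and this is not GENII code or nuclide data.

```python
import math

def parent_daughter_activity(a_parent0, half_life_parent, half_life_daughter, t):
    """Two-member Bateman chain: parent and daughter activities after time t.

    a_parent0 -- initial parent activity (Bq), daughter assumed absent at t = 0
    half-lives and t must share the same time unit (e.g. years)
    """
    lp = math.log(2.0) / half_life_parent
    ld = math.log(2.0) / half_life_daughter
    parent = a_parent0 * math.exp(-lp * t)
    daughter = a_parent0 * ld / (ld - lp) * (math.exp(-lp * t) - math.exp(-ld * t))
    return parent, daughter

# Illustrative half-lives only (not nuclide data shipped with GENII)
p, d = parent_daughter_activity(1.0e5, half_life_parent=30.0,
                                half_life_daughter=2.5, t=10.0)
print(f"parent: {p:.3e} Bq, daughter: {d:.3e} Bq")
```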

  1. Machine learning models identify molecules active against the Ebola virus in vitro [version 3; referees: 2 approved]

    Directory of Open Access Journals (Sweden)

    Sean Ekins

    2017-01-01

    Full Text Available The search for small molecule inhibitors of Ebola virus (EBOV) has led to several high throughput screens over the past 3 years. These have identified a range of FDA-approved active pharmaceutical ingredients (APIs) with anti-EBOV activity in vitro, several of which are also active in a mouse infection model. There are millions of additional commercially-available molecules that could be screened for potential activities as anti-EBOV compounds. One way to prioritize compounds for testing is to generate computational models based on the high throughput screening data and then virtually screen compound libraries. In the current study, we have generated Bayesian machine learning models with viral pseudotype entry assay and EBOV replication assay data. We have validated the models internally and externally. We have also used these models to computationally score the MicroSource library of drugs to select those likely to be potential inhibitors. Three of the highest scoring molecules that were not in the model training sets, quinacrine, pyronaridine and tilorone, were tested in vitro and had EC50 values of 350, 420 and 230 nM, respectively. Pyronaridine is a component of a combination therapy for malaria that was recently approved by the European Medicines Agency, which may make it more readily accessible for clinical testing. Like other known antimalarial drugs active against EBOV, it shares the 4-aminoquinoline scaffold. Tilorone is an investigational antiviral agent that has shown a broad array of biological activities including cell growth inhibition in cancer cells, antifibrotic properties, α7 nicotinic receptor agonist activity, radioprotective activity and activation of hypoxia inducible factor-1. Quinacrine is an antimalarial but also has use as an anthelmintic. Our results suggest that data sets with fewer than 1,000 molecules can produce validated machine learning models that can in turn be utilized to identify novel EBOV inhibitors in
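    The workflow described (train a Bayesian classifier on screening data, then score an external library and rank compounds) can be sketched with a generic naive Bayes model on binary molecular fingerprints. The fingerprints and activity labels below are random placeholders standing in for the assay data, and naive Bayes on random bits is only a structural stand-in for the study's actual descriptors and models.

```python
import numpy as np
from sklearn.naive_bayes import BernoulliNB

rng = np.random.default_rng(0)

# Placeholder data: 512-bit fingerprints with random activity labels,
# standing in for the pseudotype-entry / replication-assay training sets.
train_fp = rng.integers(0, 2, size=(1000, 512))
train_active = rng.integers(0, 2, size=1000)

model = BernoulliNB()
model.fit(train_fp, train_active)

# Score an external "library" and rank compounds by predicted probability
library_fp = rng.integers(0, 2, size=(200, 512))
scores = model.predict_proba(library_fp)[:, 1]
top = np.argsort(scores)[::-1][:5]
print("top-ranked library indices:", top, "scores:", np.round(scores[top], 3))
```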

  2. Machine learning models identify molecules active against the Ebola virus in vitro [version 1; referees: 2 approved]

    Directory of Open Access Journals (Sweden)

    Sean Ekins

    2015-10-01

    Full Text Available The search for small molecule inhibitors of Ebola virus (EBOV) has led to several high throughput screens over the past 3 years. These have identified a range of FDA-approved active pharmaceutical ingredients (APIs) with anti-EBOV activity in vitro, several of which are also active in a mouse infection model. There are millions of additional commercially-available molecules that could be screened for potential activities as anti-EBOV compounds. One way to prioritize compounds for testing is to generate computational models based on the high throughput screening data and then virtually screen compound libraries. In the current study, we have generated Bayesian machine learning models with viral pseudotype entry assay and EBOV replication assay data. We have validated the models internally and externally. We have also used these models to computationally score the MicroSource library of drugs to select those likely to be potential inhibitors. Three of the highest scoring molecules that were not in the model training sets, quinacrine, pyronaridine and tilorone, were tested in vitro and had EC50 values of 350, 420 and 230 nM, respectively. Pyronaridine is a component of a combination therapy for malaria that was recently approved by the European Medicines Agency, which may make it more readily accessible for clinical testing. Like other known antimalarial drugs active against EBOV, it shares the 4-aminoquinoline scaffold. Tilorone is an investigational antiviral agent that has shown a broad array of biological activities including cell growth inhibition in cancer cells, antifibrotic properties, α7 nicotinic receptor agonist activity, radioprotective activity and activation of hypoxia inducible factor-1. Quinacrine is an antimalarial but also has use as an anthelmintic. Our results suggest that data sets with fewer than 1,000 molecules can produce validated machine learning models that can in turn be utilized to identify novel EBOV inhibitors in

  3. A multi-scale computational model of the effects of TMS on motor cortex [version 2; referees: 2 approved]

    Directory of Open Access Journals (Sweden)

    Hyeon Seo

    2017-02-01

    Full Text Available The detailed biophysical mechanisms through which transcranial magnetic stimulation (TMS) activates cortical circuits are still not fully understood. Here we present a multi-scale computational model to describe and explain the activation of different pyramidal cell types in motor cortex due to TMS. Our model determines precise electric fields based on an individual head model derived from magnetic resonance imaging and calculates how these electric fields activate morphologically detailed models of different neuron types. We predict neural activation patterns for different coil orientations consistent with experimental findings. Beyond this, our model allows us to calculate activation thresholds for individual neurons and precise initiation sites of individual action potentials on the neurons’ complex morphologies. Specifically, our model predicts that cortical layer 3 pyramidal neurons are generally easier to stimulate than layer 5 pyramidal neurons, thereby explaining the lower stimulation thresholds observed for I-waves compared to D-waves. It also shows differences in the regions of activated cortical layer 5 and layer 3 pyramidal cells depending on coil orientation. Finally, it predicts that under standard stimulation conditions, action potentials are mostly generated at the axon initial segment of cortical pyramidal cells, with a much less important activation site being the part of a layer 5 pyramidal cell axon where it crosses the boundary between grey matter and white matter. In conclusion, our computational model offers a detailed account of the mechanisms through which TMS activates different cortical pyramidal cell types, paving the way for more targeted application of TMS based on individual brain morphology in clinical and basic research settings.

  4. A multi-scale computational model of the effects of TMS on motor cortex [version 3; referees: 2 approved]

    Directory of Open Access Journals (Sweden)

    Hyeon Seo

    2017-05-01

    Full Text Available The detailed biophysical mechanisms through which transcranial magnetic stimulation (TMS) activates cortical circuits are still not fully understood. Here we present a multi-scale computational model to describe and explain the activation of different pyramidal cell types in motor cortex due to TMS. Our model determines precise electric fields based on an individual head model derived from magnetic resonance imaging and calculates how these electric fields activate morphologically detailed models of different neuron types. We predict neural activation patterns for different coil orientations consistent with experimental findings. Beyond this, our model allows us to calculate activation thresholds for individual neurons and precise initiation sites of individual action potentials on the neurons’ complex morphologies. Specifically, our model predicts that cortical layer 3 pyramidal neurons are generally easier to stimulate than layer 5 pyramidal neurons, thereby explaining the lower stimulation thresholds observed for I-waves compared to D-waves. It also shows differences in the regions of activated cortical layer 5 and layer 3 pyramidal cells depending on coil orientation. Finally, it predicts that under standard stimulation conditions, action potentials are mostly generated at the axon initial segment of cortical pyramidal cells, with a much less important activation site being the part of a layer 5 pyramidal cell axon where it crosses the boundary between grey matter and white matter. In conclusion, our computational model offers a detailed account of the mechanisms through which TMS activates different cortical pyramidal cell types, paving the way for more targeted application of TMS based on individual brain morphology in clinical and basic research settings.

  5. Regional hydrogeological simulations for Forsmark - numerical modelling using CONNECTFLOW. Preliminary site description Forsmark area - version 1.2

    Energy Technology Data Exchange (ETDEWEB)

    Hartley, Lee; Cox, Ian; Hunter, Fiona; Jackson, Peter; Joyce, Steve; Swift, Ben [Serco Assurance, Risley (United Kingdom); Gylling, Bjoern; Marsic, Niko [Kemakta Konsult AB, Stockholm (Sweden)

    2005-05-01

    The Swedish Nuclear Fuel and Waste Management Company (SKB) carries out site investigations in two different candidate areas in Sweden with the objective of describing the in-situ conditions for a bedrock repository for spent nuclear fuel. The site characterisation work is divided into two phases, an initial site investigation phase (IPLU) and a complete site investigation phase (KPLU). The results of IPLU are used as a basis for deciding on a subsequent KPLU phase. On the basis of the KPLU investigations a decision is made as to whether detailed characterisation will be performed (including sinking of a shaft). An integrated component in the site characterisation work is the development of site descriptive models. These comprise basic models in three dimensions with an accompanying text description. Central in the modelling work is the geological model, which provides the geometrical context in terms of a model of deformation zones and the rock mass between the zones. Using the geological and geometrical description models as a basis, descriptive models for other geo-disciplines (hydrogeology, hydro-geochemistry, rock mechanics, thermal properties and transport properties) will be developed. Great care is taken to arrive at a general consistency in the description of the various models and assessment of uncertainty and possible needs of alternative models. Here, a numerical model is developed on a regional-scale (hundreds of square kilometres) to understand the zone of influence for groundwater flow that affects the Forsmark area. Transport calculations are then performed by particle tracking from a local-scale release area (a few square kilometres) to identify potential discharge areas for the site and using greater grid resolution. The main objective of this study is to support the development of a preliminary Site Description of the Forsmark area on a regional-scale based on the available data of 30 June 2004 and the previous Site Description. A more specific

  6. Enhanced Representation of Soil NO Emissions in the Community Multiscale Air Quality (CMAQ) Model Version 5.0.2

    Science.gov (United States)

    Rasool, Quazi Z.; Zhang, Rui; Lash, Benjamin; Cohan, Daniel S.; Cooter, Ellen J.; Bash, Jesse O.; Lamsal, Lok N.

    2016-01-01

    Modeling of soil nitric oxide (NO) emissions is highly uncertain and may misrepresent its spatial and temporal distribution. This study builds upon a recently introduced parameterization to improve the timing and spatial distribution of soil NO emission estimates in the Community Multiscale Air Quality (CMAQ) model. The parameterization considers soil parameters, meteorology, land use, and mineral nitrogen (N) availability to estimate NO emissions. We incorporate daily year-specific fertilizer data from the Environmental Policy Integrated Climate (EPIC) agricultural model to replace the annual generic data of the initial parameterization, and use a 12km resolution soil biome map over the continental USA. CMAQ modeling for July 2011 shows slight differences in model performance in simulating fine particulate matter and ozone from Interagency Monitoring of Protected Visual Environments (IMPROVE) and Clean Air Status and Trends Network (CASTNET) sites and NO2 columns from Ozone Monitoring Instrument (OMI) satellite retrievals. We also simulate how the change in soil NO emissions scheme affects the expected O3 response to projected emissions reductions.
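    A schematic of the kind of dependence such parameterisations encode (available mineral nitrogen scaled by temperature and soil-moisture response functions, with a biome- or land-use-specific factor) is sketched below. The functional forms, constants and units are illustrative assumptions, not the CMAQ/EPIC implementation described in the study.

```python
import math

def soil_no_flux(n_avail, soil_temp_c, wfps, biome_factor=1.0):
    """Illustrative soil NO emission flux (arbitrary units).

    n_avail      -- available mineral nitrogen (arbitrary units)
    soil_temp_c  -- soil temperature (deg C); exponential response capped at 30 deg C
    wfps         -- water-filled pore space (0-1); peak response mid-range
    biome_factor -- land-use / biome-specific scaling (illustrative)
    """
    f_temp = math.exp(0.1 * min(soil_temp_c, 30.0))   # exponential up to a cap
    f_moist = max(4.0 * wfps * (1.0 - wfps), 0.0)     # parabolic, peak at wfps = 0.5
    return biome_factor * n_avail * f_temp * f_moist

print(f"{soil_no_flux(2.5, 24.0, 0.35, biome_factor=0.8):.2f}")
```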

  7. Decision Support Tool Evaluation Report for General NOAA Oil Modeling Environment(GNOME) Version 2.0

    Science.gov (United States)

    Spruce, Joseph P.; Hall, Callie; Zanoni, Vicki; Blonski, Slawomir; D'Sa, Eurico; Estep, Lee; Holland, Donald; Moore, Roxzana F.; Pagnutti, Mary; Terrie, Gregory

    2004-01-01

    NASA's Earth Science Applications Directorate evaluated the potential of NASA remote sensing data and modeling products to enhance the General NOAA Oil Modeling Environment (GNOME) decision support tool. NOAA's Office of Response and Restoration (OR&R) Hazardous Materials (HAZMAT) Response Division is interested in enhancing GNOME with near-realtime (NRT) NASA remote sensing products on oceanic winds and ocean circulation. The NASA SeaWinds sea surface wind and Jason-1 sea surface height NRT products have potential, as do sea surface temperature and reflectance products from the Moderate Resolution Imaging Spectroradiometer and sea surface reflectance products from Landsat and the Advanced Spaceborne Thermal Emission and Reflectance Radiometer. HAZMAT is also interested in the Advanced Circulation model and the Ocean General Circulation Model. Certain issues must be considered, including lack of data continuity, marginal data redundancy, and data formatting problems. Spatial resolution is an issue for near-shore GNOME applications. Additional work will be needed to incorporate NASA inputs into GNOME, including verification and validation of data products, algorithms, models, and NRT data.

  8. Modelling of neutron and photon transport in iron and concrete radiation shieldings by the Monte Carlo method - Version 2

    CERN Document Server

    Žukauskaite, A; Plukiene, R; Plukis, A

    2007-01-01

    Particle accelerators and other high-energy facilities produce penetrating ionizing radiation (neutrons and γ-rays) that must be shielded. The objective of this work was to model photon and neutron transport in various materials usually used as shielding, such as concrete, iron or graphite. The Monte Carlo method allows one to obtain answers by simulating individual particles and recording aspects of their average behavior. In this work several nuclear experiments were modelled: AVF 65 (γ-ray beams, 1-10 MeV) and HIMAC and ISIS-800 (high-energy neutron transport, 20-800 MeV) in iron and concrete. The results were then compared with experimental data.

  9. Effect of sex in the MRMT-1 model of cancer-induced bone pain [version 3; referees: 2 approved]

    Directory of Open Access Journals (Sweden)

    Sarah Falk

    2015-11-01

    An overwhelming amount of evidence demonstrates sex-induced variation in pain processing, and has thus increased the focus on sex as an essential parameter for optimization of in vivo models in pain research. Mammary cancer cells are often used to model metastatic bone pain in vivo, and are commonly used in both males and females. Here we demonstrate that compared to male rats, female rats have an increased capacity for recovery following inoculation of MRMT-1 mammary cells, thus potentially causing a sex-dependent bias in interpretation of the data.

  10. GWSCREEN: A semi-analytical model for assessment of the groundwater pathway from surface or buried contamination: Version 2.0 theory and user's manual

    International Nuclear Information System (INIS)

    Rood, A.S.

    1993-06-01

    GWSCREEN was developed for assessment of the groundwater pathway from leaching of radioactive and non radioactive substances from surface or buried sources. The code was designed for implementation in the Track I and Track II assessment of CERCLA (Comprehensive Environmental Response, Compensation and Liability Act) sites identified as low probability hazard at the Idaho National Engineering Laboratory (DOE, 1992). The code calculates the limiting soil concentration such that, after leaching and transport to the aquifer, regulatory contaminant levels in groundwater are not exceeded. The code uses a mass conservation approach to model three processes: contaminant release from a source volume, contaminant transport in the unsaturated zone, and contaminant transport in the saturated zone. The source model considers the sorptive properties and solubility of the contaminant. Transport in the unsaturated zone is described by a plug flow model. Transport in the saturated zone is calculated with a semi-analytical solution to the advection dispersion equation in groundwater. In Version 2.0, GWSCREEN has incorporated an additional source model to calculate the impacts to groundwater resulting from the release to percolation ponds. In addition, transport of radioactive progeny has also been incorporated. GWSCREEN has shown comparable results when compared against other codes using similar algorithms and techniques. This code was designed for assessment and screening of the groundwater pathway when field data is limited. It was not intended to be a predictive tool

  11. GWSCREEN: A semi-analytical model for assessment of the groundwater pathway from surface or buried contamination: Version 2.0 theory and user's manual

    Energy Technology Data Exchange (ETDEWEB)

    Rood, A.S.

    1993-06-01

    GWSCREEN was developed for assessment of the groundwater pathway from leaching of radioactive and non radioactive substances from surface or buried sources. The code was designed for implementation in the Track I and Track II assessment of CERCLA (Comprehensive Environmental Response, Compensation and Liability Act) sites identified as low probability hazard at the Idaho National Engineering Laboratory (DOE, 1992). The code calculates the limiting soil concentration such that, after leaching and transport to the aquifer, regulatory contaminant levels in groundwater are not exceeded. The code uses a mass conservation approach to model three processes: contaminant release from a source volume, contaminant transport in the unsaturated zone, and contaminant transport in the saturated zone. The source model considers the sorptive properties and solubility of the contaminant. Transport in the unsaturated zone is described by a plug flow model. Transport in the saturated zone is calculated with a semi-analytical solution to the advection dispersion equation in groundwater. In Version 2.0, GWSCREEN has incorporated an additional source model to calculate the impacts to groundwater resulting from the release to percolation ponds. In addition, transport of radioactive progeny has also been incorporated. GWSCREEN has shown comparable results when compared against other codes using similar algorithms and techniques. This code was designed for assessment and screening of the groundwater pathway when field data is limited. It was not intended to be a predictive tool.
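
    As a rough illustration of the kind of semi-analytical groundwater-pathway calculation described in the two GWSCREEN records above, the sketch below combines a plug-flow travel time for the unsaturated zone with the classic Ogata-Banks solution of the one-dimensional advection-dispersion equation for the saturated zone. It is a generic textbook formulation, not the GWSCREEN code, and all parameter values are illustrative.

        # Minimal sketch (not the GWSCREEN code): plug-flow travel time through the
        # unsaturated zone plus the Ogata-Banks 1-D advection-dispersion solution
        # for a constant source in the saturated zone.
        import numpy as np
        from scipy.special import erfc

        def plug_flow_travel_time(thickness_m, darcy_flux_m_per_yr, moisture_content):
            """Travel time (yr) through the unsaturated zone assuming plug flow."""
            pore_velocity = darcy_flux_m_per_yr / moisture_content
            return thickness_m / pore_velocity

        def ogata_banks(c0, x, t, v, D):
            """Relative concentration at distance x (m) and time t (yr) for a constant
            source c0, pore velocity v (m/yr) and dispersion coefficient D (m2/yr)."""
            a = (x - v * t) / (2.0 * np.sqrt(D * t))
            b = (x + v * t) / (2.0 * np.sqrt(D * t))
            return 0.5 * c0 * (erfc(a) + np.exp(v * x / D) * erfc(b))

        if __name__ == "__main__":
            # Illustrative numbers only.
            t_unsat = plug_flow_travel_time(thickness_m=10.0,
                                            darcy_flux_m_per_yr=0.1,
                                            moisture_content=0.25)
            c = ogata_banks(c0=1.0, x=100.0, t=50.0, v=5.0, D=50.0)
            print(f"unsaturated-zone travel time ~ {t_unsat:.0f} yr, "
                  f"relative concentration at 100 m after 50 yr ~ {c:.3f}")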

  12. Evaluation of the wind farm parameterization in the Weather Research and Forecasting model (version 3.8.1) with meteorological and turbine power data

    Science.gov (United States)

    Lee, Joseph C. Y.; Lundquist, Julie K.

    2017-11-01

    Forecasts of wind-power production are necessary to facilitate the integration of wind energy into power grids, and these forecasts should incorporate the impact of wind-turbine wakes. This paper focuses on a case study of four diurnal cycles with significant power production, and assesses the skill of the wind farm parameterization (WFP) distributed with the Weather Research and Forecasting (WRF) model version 3.8.1, as well as its sensitivity to model configuration. After validating the simulated ambient flow with observations, we quantify the value of the WFP as it accounts for wake impacts on power production of downwind turbines. We also illustrate with statistical significance that a vertical grid with approximately 12 m vertical resolution is necessary for reproducing the observed power production. Further, the WFP overestimates wake effects and hence underestimates downwind power production during high wind speed, highly stable, and low turbulence conditions. We also find the WFP performance is independent of the number of wind turbines per model grid cell and the upwind-downwind position of turbines. Rather, the ability of the WFP to predict power production is most dependent on the skill of the WRF model in simulating the ambient wind speed.

  13. Evaluation of the wind farm parameterization in the Weather Research and Forecasting model (version 3.8.1) with meteorological and turbine power data

    Directory of Open Access Journals (Sweden)

    J. C. Y. Lee

    2017-11-01

    Forecasts of wind-power production are necessary to facilitate the integration of wind energy into power grids, and these forecasts should incorporate the impact of wind-turbine wakes. This paper focuses on a case study of four diurnal cycles with significant power production, and assesses the skill of the wind farm parameterization (WFP) distributed with the Weather Research and Forecasting (WRF) model version 3.8.1, as well as its sensitivity to model configuration. After validating the simulated ambient flow with observations, we quantify the value of the WFP as it accounts for wake impacts on power production of downwind turbines. We also illustrate with statistical significance that a vertical grid with approximately 12 m vertical resolution is necessary for reproducing the observed power production. Further, the WFP overestimates wake effects and hence underestimates downwind power production during high wind speed, highly stable, and low turbulence conditions. We also find the WFP performance is independent of the number of wind turbines per model grid cell and the upwind–downwind position of turbines. Rather, the ability of the WFP to predict power production is most dependent on the skill of the WRF model in simulating the ambient wind speed.

  14. Modelling Risk to US Military Populations from Stopping Blanket Mandatory Polio Vaccination (Open Access Publisher’s Version)

    Science.gov (United States)

    2017-09-14

    Research Article: Modelling Risk to US Military Populations from Stopping Blanket Mandatory Polio Vaccination, by Colleen Burgess, Andrew Burgess, and ... routinely vaccinated with inactivated poliovirus vaccine (IPV), supplemented based upon deployment circumstances. Given residual protection from ... childhood vaccination, risk-based vaccination may sufficiently protect troops from polio transmission. Methods: This analysis employed a mathematical system

  15. Enhanced representation of soil NO emissions in the Community Multiscale Air Quality (CMAQ) model version 5.0.2

    Science.gov (United States)

    Modeling of soil nitric oxide (NO) emissions is highly uncertain and may misrepresent its spatial and temporal distribution. This study builds upon a recently introduced parameterization to improve the timing and spatial distribution of soil NO emission estimates in the Community...

  16. The CSIRO Mk3L climate system model version 1.0 – Part 2: Response to external forcings

    Directory of Open Access Journals (Sweden)

    S. J. Phipps

    2012-05-01

    The CSIRO Mk3L climate system model is a coupled general circulation model, designed primarily for millennial-scale climate simulation and palaeoclimate research. Mk3L includes components which describe the atmosphere, ocean, sea ice and land surface, and combines computational efficiency with a stable and realistic control climatology. It is freely available to the research community. This paper evaluates the response of the model to external forcings which correspond to past and future changes in the climate system.

    A simulation of the mid-Holocene climate is performed, in which changes in the seasonal and meridional distribution of incoming solar radiation are imposed. Mk3L correctly simulates increased summer temperatures at northern mid-latitudes and cooling in the tropics. However, it is unable to capture some of the regional-scale features of the mid-Holocene climate, with the precipitation over Northern Africa being deficient. The model simulates a reduction of between 7 and 15% in the amplitude of El Niño-Southern Oscillation, a smaller decrease than that implied by the palaeoclimate record. However, the realism of the simulated ENSO is limited by the model's relatively coarse spatial resolution.

    Transient simulations of the late Holocene climate are then performed. The evolving distribution of insolation is imposed, and an acceleration technique is applied and assessed. The model successfully captures the temperature changes in each hemisphere and the upward trend in ENSO variability. However, the lack of a dynamic vegetation scheme does not allow it to simulate an abrupt desertification of the Sahara.

    To assess the response of Mk3L to other forcings, transient simulations of the last millennium are performed. Changes in solar irradiance, atmospheric greenhouse gas concentrations and volcanic emissions are applied to the model. The model is again broadly successful at simulating larger-scale changes in the

  17. Itzï (version 17.1): an open-source, distributed GIS model for dynamic flood simulation

    Science.gov (United States)

    Guillaume Courty, Laurent; Pedrozo-Acuña, Adrián; Bates, Paul David

    2017-05-01

    Worldwide, floods are acknowledged as one of the most destructive hazards. In human-dominated environments, their negative impacts are ascribed not only to the increase in frequency and intensity of floods but also to a strong feedback between the hydrological cycle and anthropogenic development. In order to advance a more comprehensive understanding of this complex interaction, this paper presents the development of a new open-source tool named Itzï that enables the 2-D numerical modelling of rainfall-runoff processes and surface flows integrated with the open-source geographic information system (GIS) software known as GRASS. Therefore, it takes advantage of the ability given by GIS environments to handle datasets with variations in both temporal and spatial resolutions. Furthermore, the presented numerical tool can handle datasets from different sources with varied spatial resolutions, facilitating the preparation and management of input and forcing data. This ability reduces the preprocessing time usually required by other models. Itzï uses a simplified form of the shallow water equations, the damped partial inertia equation, for the resolution of surface flows, and the Green-Ampt model for the infiltration. The source code is now publicly available online, along with complete documentation. The numerical model is verified against three different test cases: firstly, a comparison with an analytic solution of the shallow water equations is introduced; secondly, a hypothetical flooding event in an urban area is implemented, where results are compared to those from an established model using a similar approach; and lastly, the reproduction of a real inundation event that occurred in the city of Kingston upon Hull, UK, in June 2007, is presented. The numerical approach proved its ability to reproduce the analytic and synthetic test cases. Moreover, simulation results of the real flood event showed its suitability for identifying areas affected by flooding
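
    The record above names the Green-Ampt model as Itzï's infiltration scheme. As a minimal illustration (not the Itzï source code), the sketch below evaluates cumulative Green-Ampt infiltration under continuous ponding by fixed-point iteration of the implicit Green-Ampt equation; the soil parameters are illustrative.

        # Minimal sketch: cumulative Green-Ampt infiltration under continuous ponding,
        # obtained by fixed-point iteration of  F = K*t + s*ln(1 + F/s),  s = psi*d_theta.
        import math

        def green_ampt_cumulative(K, psi, d_theta, t, n_iter=50):
            """Cumulative infiltration F (m) after t seconds of ponded infiltration.

            K        saturated hydraulic conductivity (m/s)
            psi      wetting-front suction head (m)
            d_theta  soil moisture deficit (-)
            """
            s = psi * d_theta          # suction-deficit product (m)
            F = K * t                  # initial guess: gravity-only infiltration
            for _ in range(n_iter):
                F = K * t + s * math.log(1.0 + F / s)
            return F

        if __name__ == "__main__":
            # Illustrative loam-like parameters, one hour of ponding.
            F = green_ampt_cumulative(K=3.4e-6, psi=0.089, d_theta=0.3, t=3600.0)
            print(f"cumulative infiltration after 1 h ~ {F * 1000:.0f} mm")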

  18. GWSCREEN: A Semi-analytical Model for Assessment of the Groundwater Pathway from Surface or Buried Contamination, Theory and User's Manual, Version 2.5

    Energy Technology Data Exchange (ETDEWEB)

    Rood, Arthur South

    1998-08-01

    GWSCREEN was developed for assessment of the groundwater pathway from leaching of radioactive and non-radioactive substances from surface or buried sources. The code was designed for implementation in the Track I and Track II assessment of Comprehensive Environmental Response, Compensation, and Liability Act sites identified as low probability hazard at the Idaho National Engineering Laboratory. The code calculates 1) the limiting soil concentration such that, after leaching and transport to the aquifer, regulatory contaminant levels in groundwater are not exceeded, 2) peak aquifer concentration and associated human health impacts, and 3) aquifer concentrations and associated human health impacts as a function of time and space. The code uses a mass conservation approach to model three processes: contaminant release from a source volume, vertical contaminant transport in the unsaturated zone, and 2D or 3D contaminant transport in the saturated zone. The source model considers the sorptive properties and solubility of the contaminant. In Version 2.5, transport in the unsaturated zone is described by a plug flow or dispersive solution model. Transport in the saturated zone is calculated with a semi-analytical solution to the advection dispersion equation in groundwater. Three source models are included: leaching from a surface or buried source, an infiltration pond, or a user-defined arbitrary release. Dispersion in the aquifer may be described by fixed dispersivity values or three spatially variable dispersivity functions. Version 2.5 also includes a Monte Carlo sampling routine for uncertainty/sensitivity analysis and a preprocessor to allow multiple input files and multiple contaminants to be run in a single simulation. GWSCREEN has been validated against other codes using similar algorithms and techniques. The code was originally designed for assessment and screening of the groundwater pathway when field data are limited. It was intended to simulate relatively simple

  19. [External cephalic version].

    Science.gov (United States)

    Navarro-Santana, B; Duarez-Coronado, M; Plaza-Arranz, J

    2016-08-01

    To analyze the rate of successful external cephalic versions in our center and the caesarean sections that would be avoided by their use. From January 2012 to March 2016, a total of 52 external cephalic versions were carried out at our center. We collected data on maternal age, gestational age at the time of the external cephalic version, maternal body mass index (BMI), fetal position and lie, fetal weight, parity, location of the placenta, amniotic fluid index (AFI), tocolysis, analgesia, newborn weight at birth, minor adverse effects (dizziness, hypotension and maternal pain) and major adverse effects (tachycardia, bradycardia, decelerations and emergency caesarean section). 45% of the versions were unsuccessful and 55% were successful. The rate of successful vaginal delivery after version was 84% (4% instrumental), with 15% caesarean sections. With respect to the variables studied, a significant difference was found only in birth weight, suggesting that birth weight is related to the outcome of external cephalic version. The absence of other significant differences is probably due to the number of patients studied. For women with breech presentation, we recommend external cephalic version rather than expectant management or caesarean section. External cephalic version increases the proportion of fetuses in cephalic presentation and decreases the rate of caesarean sections.

  20. Influence of Solar and Thermal Radiation on Future Heat Stress Using CMIP5 Archive Driving the Community Land Model Version 4.5

    Science.gov (United States)

    Buzan, J. R.; Huber, M.

    2015-12-01

    The summer of 2015 saw major heat waves on four continents, and heat stress left ~4000 people dead in India and Pakistan. Heat stress is caused by a combination of meteorological factors: temperature, humidity, and radiation. The International Organization for Standardization (ISO) uses the Wet Bulb Globe Temperature (WBGT), an empirical metric calibrated with temperature, humidity, and radiation, for determining labor capacity during heat stress. Unfortunately, most literature studying global heat stress focuses on extreme temperature events, and a limited number of studies use the combination of temperature and humidity. Recent global assessments use WBGT, yet omit the radiation component without recalibrating the metric. Here we explicitly calculate future WBGT within a land surface model, including radiative fluxes as produced by a modeled globe thermometer. We use the Community Land Model version 4.5 (CLM4.5), which is a component model of the Community Earth System Model (CESM), and is maintained by the National Center for Atmospheric Research (NCAR). To drive our CLM4.5 simulations, we use greenhouse gas concentrations from Representative Concentration Pathway 8.5 (business as usual) and atmospheric output from the CMIP5 Archive. Humans work in a variety of environments, and we place the modeled globe thermometer in a variety of environments. We modify the CLM4.5 code to calculate solar and thermal radiation fluxes below and above canopy vegetation, and on bare ground. To calculate wet bulb temperature, we implemented the HumanIndexMod into CLM4.5. The temperature, wet bulb temperature, and radiation fields are calculated at every model time step and are output four times daily. We use these fields to calculate WBGT and labor capacity for two time slices: 2026-2045 and 2081-2100.
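
    For readers unfamiliar with the metric discussed above, the sketch below shows the standard ISO 7243 weighting that combines natural wet-bulb, globe and air temperatures into WBGT. The CLM4.5 implementation described in the record models the globe and wet-bulb temperatures explicitly; the inputs used here are illustrative placeholders.

        # Minimal sketch of the ISO 7243 WBGT weighting used as a heat-stress metric.
        def wbgt_outdoor(t_nwb, t_globe, t_air):
            """Wet Bulb Globe Temperature (deg C) with solar load (ISO 7243)."""
            return 0.7 * t_nwb + 0.2 * t_globe + 0.1 * t_air

        def wbgt_indoor(t_nwb, t_globe):
            """WBGT without direct solar load."""
            return 0.7 * t_nwb + 0.3 * t_globe

        if __name__ == "__main__":
            # Hot, humid, sunny conditions (illustrative values).
            print(f"WBGT outdoors: {wbgt_outdoor(t_nwb=28.0, t_globe=45.0, t_air=34.0):.1f} C")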

  1. Dihydrofolate reductase as a model for studies of enzyme dynamics and catalysis [version 1; referees: 2 approved]

    Directory of Open Access Journals (Sweden)

    Amnon Kohen

    2015-12-01

    Dihydrofolate reductase from Escherichia coli (ecDHFR) serves as a model system for investigating the role of protein dynamics in enzyme catalysis. We discuss calculations predicting a network of dynamic motions that is coupled to the chemical step catalyzed by this enzyme. Kinetic studies testing these predictions are presented, and their potential use in better understanding the role of these dynamics in enzyme catalysis is considered. The cumulative results implicate motions across the entire protein in catalysis.

  2. GPU-accelerated atmospheric chemical kinetics in the ECHAM/MESSy (EMAC) Earth system model (version 2.52)

    Science.gov (United States)

    Alvanos, Michail; Christoudias, Theodoros

    2017-10-01

    This paper presents an application of GPU accelerators in Earth system modeling. We focus on atmospheric chemical kinetics, one of the most computationally intensive tasks in climate-chemistry model simulations. We developed a software package that automatically generates CUDA kernels to numerically integrate atmospheric chemical kinetics in the global climate model ECHAM/MESSy Atmospheric Chemistry (EMAC), used to study climate change and air quality scenarios. A source-to-source compiler outputs a CUDA-compatible kernel by parsing the FORTRAN code generated by the Kinetic PreProcessor (KPP) general analysis tool. All Rosenbrock methods that are available in the KPP numerical library are supported. Performance evaluation, using Fermi and Pascal CUDA-enabled GPU accelerators, shows achieved speed-ups of 4.5× and 20.4×, respectively, in kernel execution time. A node-to-node real-world production performance comparison shows a 1.75× speed-up over the non-accelerated application using the KPP three-stage Rosenbrock solver. We provide a detailed description of the code optimizations used to improve the performance, including memory optimizations, control code simplification, and reduction of idle time. The accuracy and correctness of the accelerated implementation are evaluated by comparing to the CPU-only code of the application. The median relative difference is found to be less than 0.000000001% when comparing the output of the accelerated kernel with that of the CPU-only code. The approach followed, including the computational workload division, and the developed GPU solver code can potentially be used as the basis for hardware acceleration of numerous geoscientific models that rely on KPP for atmospheric chemical kinetics applications.
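
    As a small CPU-side illustration of the kind of stiff chemical-kinetics integration that the record offloads to GPUs, the sketch below solves the classic Robertson three-species mechanism with a stiff implicit integrator. It is a stand-in for, not an excerpt of, the KPP-generated EMAC solvers.

        # Minimal sketch: the Robertson stiff kinetics test problem solved with an
        # implicit (Radau) integrator, standing in for much larger KPP mechanisms.
        from scipy.integrate import solve_ivp

        def robertson(t, y):
            y1, y2, y3 = y
            return [-0.04 * y1 + 1.0e4 * y2 * y3,
                    0.04 * y1 - 1.0e4 * y2 * y3 - 3.0e7 * y2 ** 2,
                    3.0e7 * y2 ** 2]

        sol = solve_ivp(robertson, t_span=(0.0, 1.0e5), y0=[1.0, 0.0, 0.0],
                        method="Radau", rtol=1e-8, atol=1e-10)
        print("final concentrations:", sol.y[:, -1])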

  3. GPU-accelerated atmospheric chemical kinetics in the ECHAM/MESSy (EMAC) Earth system model (version 2.52)

    Directory of Open Access Journals (Sweden)

    M. Alvanos

    2017-10-01

    This paper presents an application of GPU accelerators in Earth system modeling. We focus on atmospheric chemical kinetics, one of the most computationally intensive tasks in climate–chemistry model simulations. We developed a software package that automatically generates CUDA kernels to numerically integrate atmospheric chemical kinetics in the global climate model ECHAM/MESSy Atmospheric Chemistry (EMAC), used to study climate change and air quality scenarios. A source-to-source compiler outputs a CUDA-compatible kernel by parsing the FORTRAN code generated by the Kinetic PreProcessor (KPP) general analysis tool. All Rosenbrock methods that are available in the KPP numerical library are supported. Performance evaluation, using Fermi and Pascal CUDA-enabled GPU accelerators, shows achieved speed-ups of 4.5× and 20.4×, respectively, in kernel execution time. A node-to-node real-world production performance comparison shows a 1.75× speed-up over the non-accelerated application using the KPP three-stage Rosenbrock solver. We provide a detailed description of the code optimizations used to improve the performance, including memory optimizations, control code simplification, and reduction of idle time. The accuracy and correctness of the accelerated implementation are evaluated by comparing to the CPU-only code of the application. The median relative difference is found to be less than 0.000000001% when comparing the output of the accelerated kernel with that of the CPU-only code. The approach followed, including the computational workload division, and the developed GPU solver code can potentially be used as the basis for hardware acceleration of numerous geoscientific models that rely on KPP for atmospheric chemical kinetics applications.

  4. Regional hydrogeological simulations for Forsmark - numerical modelling using DarcyTools. Preliminary site description Forsmark area version 1.2

    International Nuclear Information System (INIS)

    Follin, Sven; Stigsson, Martin; Svensson, Urban

    2005-12-01

    A numerical model is developed on a regional-scale (hundreds of square kilometres) to study the zone of influence for variable-density groundwater flow that affects the Forsmark area. Transport calculations are performed by particle tracking from a local-scale release area (a few square kilometres) to test the sensitivity to different hydrogeological uncertainties and the need for far-field realism. The main objectives of the regional flow modelling were to achieve the following: I. Palaeo-hydrogeological understanding: An improved understanding of the palaeohydrogeological conditions is necessary in order to gain credibility for the site descriptive model in general and the hydrogeological description in particular. This requires modelling of the groundwater flow from the last glaciation up to present-day with comparisons against measured TDS and other hydro-geochemical measures. II. Simulation of flow paths: The simulation and visualisation of flow paths from a tentative repository area is a means for describing the role of the current understanding of the modelled hydrogeological conditions in the target volume, i.e. the conditions of primary interest for Safety Assessment. Of particular interest here is demonstration of the need for detailed far-field realism in the numerical simulations. The motivation for a particular model size (and resolution) and set of boundary conditions for a realistic description of the recharge and discharge connected to the flow at repository depth is an essential part of the groundwater flow path simulations. The numerical modelling was performed by two separate modelling teams, the ConnectFlow Team and the DarcyTools Team. The work presented in this report was based on the computer code DarcyTools developed by Computer-aided Fluid Engineering. DarcyTools is a kind of equivalent porous media (EPM) flow code specifically designed to treat flow and salt transport in sparsely fractured crystalline rock intersected by transmissive

  5. Regional hydrogeological simulations for Forsmark - numerical modelling using DarcyTools. Preliminary site description Forsmark area version 1.2

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    2005-12-15

    A numerical model is developed on a regional-scale (hundreds of square kilometres) to study the zone of influence for variable-density groundwater flow that affects the Forsmark area. Transport calculations are performed by particle tracking from a local-scale release area (a few square kilometres) to test the sensitivity to different hydrogeological uncertainties and the need for far-field realism. The main objectives of the regional flow modelling were to achieve the following: I. Palaeo-hydrogeological understanding: An improved understanding of the palaeohydrogeological conditions is necessary in order to gain credibility for the site descriptive model in general and the hydrogeological description in particular. This requires modelling of the groundwater flow from the last glaciation up to present-day with comparisons against measured TDS and other hydro-geochemical measures. II. Simulation of flow paths: The simulation and visualisation of flow paths from a tentative repository area is a means for describing the role of the current understanding of the modelled hydrogeological conditions in the target volume, i.e. the conditions of primary interest for Safety Assessment. Of particular interest here is demonstration of the need for detailed far-field realism in the numerical simulations. The motivation for a particular model size (and resolution) and set of boundary conditions for a realistic description of the recharge and discharge connected to the flow at repository depth is an essential part of the groundwater flow path simulations. The numerical modelling was performed by two separate modelling teams, the ConnectFlow Team and the DarcyTools Team. The work presented in this report was based on the computer code DarcyTools developed by Computer-aided Fluid Engineering. DarcyTools is a kind of equivalent porous media (EPM) flow code specifically designed to treat flow and salt transport in sparsely fractured crystalline rock intersected by transmissive
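
    The two Forsmark records above rely on particle tracking to identify discharge areas. The sketch below is a schematic illustration of advective particle tracking in a prescribed steady velocity field using forward-Euler stepping; it is not DarcyTools, and the flow field and release points are invented for illustration.

        # Schematic particle-tracking sketch: particles released in a local-scale area
        # are advected through a prescribed steady velocity field; "discharge" is
        # declared where a particle reaches the downstream boundary.
        import numpy as np

        def velocity(x, y):
            """Illustrative steady flow: uniform drift toward +x with weak convergence
            toward the x-axis (a stand-in for a discharge zone)."""
            return np.array([1.0, -0.1 * y])

        def track(p0, dt=0.1, n_steps=500, x_exit=50.0):
            p = np.array(p0, dtype=float)
            path = [p.copy()]
            for _ in range(n_steps):
                p = p + dt * velocity(*p)
                path.append(p.copy())
                if p[0] >= x_exit:          # particle reaches the discharge boundary
                    break
            return np.array(path)

        if __name__ == "__main__":
            for start in [(0.0, 5.0), (0.0, -3.0), (0.0, 0.5)]:
                path = track(start)
                print(f"release {start} -> discharge near y = {path[-1, 1]:.2f}")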

  6. GLDAS Noah Land Surface Model L4 monthly 0.25 x 0.25 degree Version 2.0 V020

    Data.gov (United States)

    National Aeronautics and Space Administration — Global Land Data Assimilation System Version 2 (hereafter, GLDAS-2) has two components: one forced entirely with the Princeton meteorological forcing data...

  7. GLDAS Noah Land Surface Model L4 monthly 1.0 x 1.0 degree Version 2.0 V020

    Data.gov (United States)

    National Aeronautics and Space Administration — Global Land Data Assimilation System Version 2 (hereafter, GLDAS-2) has two components: one forced entirely with the Princeton meteorological forcing data...

  8. GLDAS Noah Land Surface Model L4 3 hourly 1.0 x 1.0 degree Version 2.0 V020

    Data.gov (United States)

    National Aeronautics and Space Administration — Global Land Data Assimilation System Version 2 (hereafter, GLDAS-2) has two components: one forced entirely with the Princeton meteorological forcing data...

  9. GLDAS Noah Land Surface Model L4 3 hourly 0.25 x 0.25 degree Version 2.0 V020

    Data.gov (United States)

    National Aeronautics and Space Administration — Global Land Data Assimilation System Version 2 (hereafter, GLDAS-2) has two components: one forced entirely with the Princeton meteorological forcing data...

  10. Breeding novel solutions in the brain: a model of Darwinian neurodynamics [version 1; referees: 1 approved, 2 approved with reservations]

    Directory of Open Access Journals (Sweden)

    András Szilágyi

    2016-09-01

    Background: The fact that surplus connections and neurons are pruned during development is well established. We complement this selectionist picture with a proof-of-principle model of evolutionary search in the brain that accounts for new variations in theory space. We present a model for Darwinian evolutionary search for candidate solutions in the brain. Methods: We combine known components of the brain – recurrent neural networks (acting as attractors), the action selection loop and implicit working memory – to provide the appropriate Darwinian architecture. We employ a population of attractor networks with palimpsest memory. The action selection loop is employed with winners-share-all dynamics to select for candidate solutions that are transiently stored in implicit working memory. Results: We document two processes: selection of stored solutions and evolutionary search for novel solutions. During the replication of candidate solutions, attractor networks occasionally produce recombinant patterns, increasing variation on which selection can act. Combinatorial search acts on multiplying units (activity patterns) with hereditary variation, and novel variants appear due to (i) noisy recall of patterns from the attractor networks, (ii) noise during transmission of candidate solutions as messages between networks, and (iii) spontaneously generated, untrained patterns in spurious attractors. Conclusions: Attractor dynamics of recurrent neural networks can be used to model Darwinian search. The proposed architecture can be used for fast search among stored solutions (by selection) and for evolutionary search when novel candidate solutions are generated in successive iterations. Since all the suggested components are present in advanced nervous systems, we hypothesize that the brain could implement a truly evolutionary combinatorial search system, capable of generating novel variants.

  11. Breeding novel solutions in the brain: a model of Darwinian neurodynamics [version 2; referees: 1 approved, 2 approved with reservations]

    Directory of Open Access Journals (Sweden)

    András Szilágyi

    2017-06-01

    Background: The fact that surplus connections and neurons are pruned during development is well established. We complement this selectionist picture with a proof-of-principle model of evolutionary search in the brain that accounts for new variations in theory space. We present a model for Darwinian evolutionary search for candidate solutions in the brain. Methods: We combine known components of the brain – recurrent neural networks (acting as attractors), the action selection loop and implicit working memory – to provide the appropriate Darwinian architecture. We employ a population of attractor networks with palimpsest memory. The action selection loop is employed with winners-share-all dynamics to select for candidate solutions that are transiently stored in implicit working memory. Results: We document two processes: selection of stored solutions and evolutionary search for novel solutions. During the replication of candidate solutions, attractor networks occasionally produce recombinant patterns, increasing variation on which selection can act. Combinatorial search acts on multiplying units (activity patterns) with hereditary variation, and novel variants appear due to (i) noisy recall of patterns from the attractor networks, (ii) noise during transmission of candidate solutions as messages between networks, and (iii) spontaneously generated, untrained patterns in spurious attractors. Conclusions: Attractor dynamics of recurrent neural networks can be used to model Darwinian search. The proposed architecture can be used for fast search among stored solutions (by selection) and for evolutionary search when novel candidate solutions are generated in successive iterations. Since all the suggested components are present in advanced nervous systems, we hypothesize that the brain could implement a truly evolutionary combinatorial search system, capable of generating novel variants.
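
    Both versions of the record above use recurrent attractor networks as the unit of memory. The sketch below is a minimal Hopfield-style attractor network, with patterns stored by a Hebbian outer-product rule and recalled from a noisy cue by iterated sign updates; it is illustrative only and does not model the palimpsest memory or recombination described in the record.

        # Minimal Hopfield-style attractor network: Hebbian storage, recall by
        # iterated sign updates from a corrupted cue.
        import numpy as np

        rng = np.random.default_rng(0)
        n_units, n_patterns = 100, 5
        patterns = rng.choice([-1, 1], size=(n_patterns, n_units))

        # Hebbian outer-product storage (no palimpsest decay modelled here).
        W = patterns.T @ patterns / n_units
        np.fill_diagonal(W, 0.0)

        def recall(cue, n_iter=20):
            state = cue.astype(float).copy()
            for _ in range(n_iter):
                state = np.sign(W @ state)
                state[state == 0] = 1.0
            return state

        noisy = patterns[0].copy()
        flip = rng.choice(n_units, size=15, replace=False)   # corrupt 15% of the cue
        noisy[flip] *= -1
        overlap = recall(noisy) @ patterns[0] / n_units
        print(f"overlap with stored pattern after recall: {overlap:.2f}")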

  12. From disease modelling to personalised therapy in patients with CEP290 mutations [version 1; referees: 2 approved]

    Directory of Open Access Journals (Sweden)

    Elisa Molinari

    2017-05-01

    Mutations that give rise to premature termination codons are a common cause of inherited genetic diseases. When transcripts containing these changes are generated, they are usually rapidly removed by the cell through the process of nonsense-mediated decay. Here we discuss observed changes in transcripts of the centrosomal protein CEP290 resulting not from degradation, but from changes in exon usage. We also comment on a landmark paper (Drivas et al., Sci Transl Med, 2015), in which modelling this process of exon usage may be used to predict disease severity in CEP290 ciliopathies, and how understanding this process may potentially be used for therapeutic benefit in the future.

  13. Cielo Computational Environment Usage Model With Mappings to ACE Requirements for the General Availability User Environment Capabilities Release Version 1.1

    Energy Technology Data Exchange (ETDEWEB)

    Vigil, Benny Manuel [Los Alamos National Laboratory]; Ballance, Robert [SNL]; Haskell, Karen [SNL]

    2012-08-09

    Cielo is a massively parallel supercomputer funded by the DOE/NNSA Advanced Simulation and Computing (ASC) program, and operated by the Alliance for Computing at Extreme Scale (ACES), a partnership between Los Alamos National Laboratory (LANL) and Sandia National Laboratories (SNL). The primary Cielo compute platform is physically located at Los Alamos National Laboratory. This Cielo Computational Environment Usage Model documents the capabilities and the environment to be provided for the Q1 FY12 Level 2 Cielo Capability Computing (CCC) Platform Production Readiness Milestone. This document describes specific capabilities, tools, and procedures to support both local and remote users. The model is focused on the needs of the ASC user working in the secure computing environments at Lawrence Livermore National Laboratory (LLNL), Los Alamos National Laboratory, or Sandia National Laboratories, but also addresses the needs of users working in the unclassified environment. The Cielo Computational Environment Usage Model maps the provided capabilities to the tri-Lab ASC Computing Environment (ACE) Version 8.0 requirements. The ACE requirements reflect the high performance computing requirements for the Production Readiness Milestone user environment capabilities of the ASC community. A description of ACE requirements met, and those requirements that are not met, are included in each section of this document. The Cielo Computing Environment, along with the ACE mappings, has been issued and reviewed throughout the tri-Lab community.

  14. Numerical Representation of Wintertime Near-Surface Inversions in the Arctic with a 2.5-km Version of the Global Environmental Multiscale (GEM) Model

    Science.gov (United States)

    Dehghan, A.; Mariani, Z.; Gascon, G.; Bélair, S.; Milbrandt, J.; Joe, P. I.; Crawford, R.; Melo, S.

    2017-12-01

    Environment and Climate Change Canada (ECCC) is implementing a 2.5-km resolution version of the Global Environmental Multiscale (GEM) model over the Canadian Arctic. Radiosonde observations were used to evaluate the numerical representation of surface-based temperature inversions, which are a major feature of the Arctic region. Arctic surface-based inversions are often created by an imbalance between radiative cooling at the surface and warm-air advection above. This can have a significant effect on the vertical mixing of pollutants and moisture and, ultimately, on cloud formation. It is therefore important to correctly predict the existence of surface inversions along with their characteristics (i.e., intensity and depth). Previous climatological studies showed that the frequency and intensity of surface-based inversions are larger during colder months in the Arctic. Therefore, surface-based inversions were estimated using radiosonde measurements during winter (December 2015 to February 2016) at Iqaluit (Nunavut, Canada). Results show that the inversion intensity can exceed 10 K, with depths as large as 1 km. Preliminary evaluation of GEM outputs reveals that the model tends to underestimate the intensity of near-surface inversions and, in some cases, fails to predict an inversion. This study presents the factors contributing to this bias, including surface temperature and snow cover.

  15. Inspection of the Math Model Tools for On-Orbit Assessment of Impact Damage Report. Version 1.0

    Science.gov (United States)

    Harris, Charles E.; Raju, Ivatury S.; Piascik, Robert S.; Kramer White, Julie; Labbe, Steve G.; Rotter, Hank A.

    2005-01-01

    In Spring of 2005, the NASA Engineering Safety Center (NESC) was engaged by the Space Shuttle Program (SSP) to peer review the suite of analytical tools being developed to support the determination of impact and damage tolerance of the Orbiter Thermal Protection Systems (TPS). The NESC formed an independent review team with the core disciplines of materials, flight sciences, structures, mechanical analysis and thermal analysis. The Math Model Tools reviewed included damage prediction and stress analysis, aeroheating analysis, and thermal analysis tools. Some tools are physics-based and other tools are empirically-derived. Each tool was created for a specific use and timeframe, including certification, real-time pre-launch assessments, and real-time on-orbit assessments. The tools are used together in an integrated strategy for assessing the ramifications of impact damage to tile and RCC. The NESC teams conducted a peer review of the engineering data package for each Math Model Tool. This report contains the summary of the team observations and recommendations from these reviews.

  16. M3 version 3.0: Verification and validation; Hydrochemical model of ground water at repository site

    Energy Technology Data Exchange (ETDEWEB)

    Gomez, Javier B. (Dept. of Earth Sciences, Univ. of Zaragoza, Zaragoza (Spain)); Laaksoharju, Marcus (Geopoint AB, Sollentuna (Sweden)); Skaarman, Erik (Abscondo, Bromma (Sweden)); Gurban, Ioana (3D-Terra (Canada))

    2009-01-15

    Hydrochemical evaluation is a complex type of work that is carried out by specialists. The outcome of this work is generally presented as qualitative models and process descriptions of a site. To support and help to quantify the processes in an objective way, a multivariate mathematical tool entitled M3 (Multivariate Mixing and Mass balance calculations) has been constructed. The computer code can be used to trace the origin of the groundwater, and to calculate the mixing proportions and mass balances from groundwater data. The M3 code is a groundwater response model, which means that changes in the groundwater chemistry in terms of sources and sinks are traced in relation to an ideal mixing model. The complexity of the measured groundwater data determines the configuration of the ideal mixing model. Deviations from the ideal mixing model are interpreted as being due to reactions. Assumptions concerning important mineral phases altering the groundwater or uncertainties associated with thermodynamic constants do not affect the modelling because the calculations are solely based on the measured groundwater composition. M3 uses the opposite approach to that of many standard hydrochemical models. In M3, mixing is evaluated and calculated first. The constituents that cannot be described by mixing are described by reactions. The M3 model consists of three steps: the first is a standard principal component analysis, followed by mixing and finally mass balance calculations. The measured groundwater composition can be described in terms of mixing proportions (%), while the sinks and sources of an element associated with reactions are reported in mg/L. This report contains a set of verification and validation exercises with the intention of building confidence in the use of the M3 methodology. At the same time, clear answers are given to questions related to the accuracy and the precision of the results, including the inherent uncertainties and the errors that can be made
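
    As a simplified illustration of the mixing step described in the M3 record above (not the M3 code, which works in principal-component space after a PCA of the full hydrochemical data), the sketch below estimates mixing proportions of reference waters by least squares with a heavily weighted row enforcing that the proportions sum to one. The end-member compositions and the sample are invented so that the true mixture is 20/30/50.

        # Minimal mixing-proportion sketch: least squares with a soft sum-to-one
        # constraint. Columns of A are end-member compositions (e.g. Cl, SO4, HCO3 in mg/L).
        import numpy as np

        end_members = {                      # illustrative reference waters
            "glacial":  [5.0, 10.0, 20.0],
            "marine":   [19000.0, 2700.0, 140.0],
            "meteoric": [20.0, 15.0, 300.0],
        }
        sample = np.array([5711.0, 819.5, 196.0])   # constructed 20/30/50 mixture

        A = np.array(list(end_members.values())).T
        w = 1.0e6                                   # weight on the sum-to-one row
        A_aug = np.vstack([A, w * np.ones(A.shape[1])])
        b_aug = np.concatenate([sample, [w]])
        p, *_ = np.linalg.lstsq(A_aug, b_aug, rcond=None)

        for name, frac in zip(end_members, p):
            print(f"{name:9s} {frac:6.2f}")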

  17. WeBCMD: A cross-platform interface for the BCMD modelling framework [version 1; referees: 2 approved]

    Directory of Open Access Journals (Sweden)

    Joshua Russell-Buckland

    2017-07-01

    Multimodal monitoring of the brain generates a great quantity of data, providing the potential for great insight into both healthy and injured cerebral dynamics. In particular, near-infrared spectroscopy can be used to measure various physiological variables of interest, such as haemoglobin oxygenation and the redox state of cytochrome-c-oxidase, alongside systemic signals, such as blood pressure. Interpreting these measurements is a complex endeavour, and much work has been done to develop mathematical models that can help to provide understanding of the underlying processes that contribute to the overall dynamics. BCMD is a software framework that was developed to run such models. However, obtaining, installing and running this software is no simple task. Here we present WeBCMD, an online environment that attempts to make the process simpler and much more accessible. By leveraging modern web technologies, an extensible and cross-platform package has been created that can also be accessed remotely from the cloud. WeBCMD is available as a Docker image and an online service.

  18. A model using marginal efficiency of investment to analyse carbon and nitrogen interactions in terrestrial ecosystems (ACONITE Version 1)

    Science.gov (United States)

    Thomas, R. Q.; Williams, M.

    2014-04-01

    Carbon (C) and nitrogen (N) cycles are coupled in terrestrial ecosystems through multiple processes including photosynthesis, tissue allocation, respiration, N fixation, N uptake, and decomposition of litter and soil organic matter. Capturing the constraint of N on terrestrial C uptake and storage has been a focus of the Earth System modelling community. However there is little understanding of the trade-offs and sensitivities of allocating C and N to different tissues in order to optimize the productivity of plants. Here we describe a new, simple model of ecosystem C-N cycling and interactions (ACONITE), that builds on theory related to plant economics in order to predict key ecosystem properties (leaf area index, leaf C : N, N fixation, and plant C use efficiency) using emergent constraints provided by marginal returns on investment for C and/or N allocation. We simulated and evaluated steady-state ecosystem stocks and fluxes in three different forest ecosystems types (tropical evergreen, temperate deciduous, and temperate evergreen). Leaf C : N differed among the three ecosystem types (temperate deciduous plant traits. Gross primary productivity (GPP) and net primary productivity (NPP) estimates compared well to observed fluxes at the simulation sites. Simulated N fixation at steady-state, calculated based on relative demand for N and the marginal return on C investment to acquire N, was an order of magnitude higher in the tropical forest than in the temperate forest, consistent with observations. A sensitivity analysis revealed that parameterization of the relationship between leaf N and leaf respiration had the largest influence on leaf area index and leaf C : N. Also, a widely used linear leaf N-respiration relationship did not yield a realistic leaf C : N, while a more recently reported non-linear relationship performed better. A parameter governing how photosynthesis scales with day length had the largest influence on total vegetation C, GPP, and NPP

  19. Midlatitude atmospheric responses to Arctic sensible heat flux anomalies in Community Climate Model, Version 4: Atmospheric Response to Arctic SHFs

    Energy Technology Data Exchange (ETDEWEB)

    Mills, Catrin M. [Atmospheric Sciences and Global Change Division, Pacific Northwest National Laboratory, Richland Washington USA; Cooperative Institute for Research in Environmental Sciences, University of Colorado, Boulder Colorado USA]; Cassano, John J. [Cooperative Institute for Research in Environmental Sciences and Department of Atmospheric and Oceanic Sciences, University of Colorado Boulder, Boulder Colorado USA]; Cassano, Elizabeth N. [Cooperative Institute for Research in Environmental Sciences, University of Colorado, Boulder Colorado USA]

    2016-12-10

    Possible linkages between Arctic sea ice loss and midlatitude weather are strongly debated in the literature. We analyze a coupled model simulation to assess the possibility of Arctic ice variability forcing a midlatitude response, ensuring consistency between atmosphere, ocean, and ice components. We work with weekly running mean daily sensible heat fluxes with the self-organizing map technique to identify Arctic sensible heat flux anomaly patterns and the associated atmospheric response, without the need of metrics to define the Arctic forcing or measure the midlatitude response. We find that low-level warm anomalies during autumn can build planetary wave patterns that propagate downstream into the midlatitudes, creating robust surface cold anomalies in the eastern United States.

  20. Guidance document on practices to model and implement Earthquake hazards in extended PSA (final version). Volume 1

    International Nuclear Information System (INIS)

    Decker, K.; Hirata, K.; Groudev, P.

    2016-01-01

    The current report provides guidance for the assessment of seismo-tectonic hazards in level 1 and 2 PSA. The objective is to review existing guidance, identify methodological challenges, and to propose novel guidance on key issues. Guidance for the assessment of vibratory ground motion and fault capability comprises the following: - listings of data required for the hazard assessment and methods to estimate data quality and completeness; - in-depth discussion of key input parameters required for hazard models; - discussions on commonly applied hazard assessment methodologies; - references to recent advances of science and technology. Guidance on the assessment of correlated or coincident hazards comprises chapters on: - screening of correlated hazards; - assessment of correlated hazards (natural and man-made); - assessment of coincident hazards. (authors)

  1. The Cornell Net Carbohydrate and Protein System: Updates to the model and evaluation of version 6.5.

    Science.gov (United States)

    Van Amburgh, M E; Collao-Saenz, E A; Higgs, R J; Ross, D A; Recktenwald, E B; Raffrenato, E; Chase, L E; Overton, T R; Mills, J K; Foskolos, A

    2015-09-01

    New laboratory and animal sampling methods and data have been generated over the last 10 yr that had the potential to improve the predictions for energy, protein, and AA supply and requirements in the Cornell Net Carbohydrate and Protein System (CNCPS). The objectives of this study were to describe updates to the CNCPS and evaluate model performance against both literature and on-farm data. The changes to the feed library were significant and are reported in a separate manuscript. Degradation rates of protein and carbohydrate fractions were adjusted according to new fractionation schemes, and corresponding changes to equations used to calculate rumen outflows and postrumen digestion were presented. In response to the feed-library changes and an increased supply of essential AA because of updated contents of AA, a combined efficiency of use was adopted in place of separate calculations for maintenance and lactation to better represent the biology of the cow. Four different data sets were developed to evaluate Lys and Met requirements, rumen N balance, and milk yield predictions. In total, 99 peer-reviewed studies with 389 treatments and 15 regional farms with 50 different diets were included. The broken-line model with plateau was used to identify the concentration of Lys and Met that maximizes milk protein yield and content. Results suggested concentrations of 7.00 and 2.60% of metabolizable protein (MP) for Lys and Met, respectively, for maximal protein yield and 6.77 and 2.85% of MP for Lys and Met, respectively, for maximal protein content. Updated AA concentrations were numerically higher for Lys and 11 to 18% higher for Met compared with CNCPS v6.0, and this is attributed to the increased content of Met and Lys in feeds that were previously incorrectly analyzed and described. The predictions of postruminal N flows and milk yield were evaluated using the correlation coefficient from the BLUP (R²BLUP) procedure or model predictions (R²MDP) and the
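
    The record above fits a broken-line model with plateau to locate the Lys and Met concentrations that maximise milk protein yield. The sketch below shows that model form fitted to invented data with a generic least-squares routine; it is illustrative and not the CNCPS analysis.

        # Minimal broken-line (linear-plateau) regression sketch: response rises
        # linearly up to a breakpoint, then stays at the plateau.
        import numpy as np
        from scipy.optimize import curve_fit

        def broken_line(x, plateau, slope, breakpoint):
            """Linear increase below the breakpoint, constant plateau above it."""
            return plateau + slope * np.minimum(x - breakpoint, 0.0)

        # Illustrative data: milk protein yield (g/d) vs Lys (% of MP).
        lys = np.array([5.8, 6.1, 6.4, 6.7, 7.0, 7.2, 7.5])
        yield_g = np.array([880.0, 905.0, 930.0, 955.0, 972.0, 975.0, 973.0])

        popt, _ = curve_fit(broken_line, lys, yield_g, p0=[970.0, 80.0, 7.0])
        plateau, slope, breakpoint = popt
        print(f"estimated breakpoint: {breakpoint:.2f} % of MP, plateau {plateau:.0f} g/d")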

  2. A bipedal mammalian model for spinal cord injury research: The tammar wallaby [version 1; referees: 2 approved]

    Directory of Open Access Journals (Sweden)

    Norman R. Saunders

    2017-06-01

    Background: Most animal studies of spinal cord injury are conducted in quadrupeds, usually rodents. It is unclear to what extent functional results from such studies can be translated to bipedal species such as humans because bipedal and quadrupedal locomotion involve very different patterns of spinal control of muscle coordination. Bipedalism requires upright trunk stability and coordinated postural muscle control; it has been suggested that peripheral sensory input is less important in humans than quadrupeds for recovery of locomotion following spinal injury. Methods: We used an Australian macropod marsupial, the tammar wallaby (Macropus eugenii), because tammars exhibit an upright trunk posture, human-like alternating hindlimb movement when swimming and bipedal over-ground locomotion. Regulation of their muscle movements is more similar to humans than quadrupeds. At different postnatal (P) days (P7–P60) tammars received a complete mid-thoracic spinal cord transection. Morphological repair, as well as functional use of hind limbs, was studied up to the time of their pouch exit. Results: Growth of axons across the lesion restored supraspinal innervation in animals injured up to 3 weeks of age but not in animals injured after 6 weeks of age. At initial pouch exit (P180), the young injured at P7-21 were able to hop on their hind limbs similar to age-matched controls and to swim, albeit with a different stroke. Those animals injured at P40-45 appeared to be incapable of normal use of hind limbs even while still in the pouch. Conclusions: Data indicate that the characteristic over-ground locomotion of tammars provides a model in which regrowth of supraspinal connections across the site of injury can be studied in a bipedal animal. Forelimb weight-bearing motion and peripheral sensory input appear not to compensate for lack of hindlimb control, as occurs in quadrupeds. Tammars may be a more appropriate model for studies of therapeutic interventions

  3. Versioning Complex Data

    Energy Technology Data Exchange (ETDEWEB)

    Macduff, Matt C.; Lee, Benno; Beus, Sherman J.

    2014-06-29

    Using the history of ARM data files, we designed and demonstrated the feasibility of a data versioning paradigm. Assigning versions to sets of modified files, using some special assumptions and domain-specific rules, was effective in the case of ARM data, which comprise more than 5000 datastreams and 500 TB of data.
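
    As an illustrative stand-in for the ARM-specific rules summarised above (which are not spelled out in the record), the sketch below shows one simple way to assign a deterministic version identifier to a set of data files by hashing their names, sizes and content checksums.

        # Illustrative sketch only: a deterministic version id for a set of files that
        # changes whenever any member file's name, size, or content changes.
        import hashlib
        from pathlib import Path

        def file_checksum(path: Path) -> str:
            h = hashlib.sha256()
            with open(path, "rb") as f:
                for chunk in iter(lambda: f.read(1 << 20), b""):
                    h.update(chunk)
            return h.hexdigest()

        def dataset_version(paths) -> str:
            h = hashlib.sha256()
            for path in sorted(Path(p) for p in paths):
                h.update(f"{path.name}|{path.stat().st_size}|{file_checksum(path)}".encode())
            return h.hexdigest()[:12]

        if __name__ == "__main__":
            import sys
            print(dataset_version(sys.argv[1:]))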

  4. A weak AMOC in a cold climate: Causes and remedies for a bias in the low-resolution version of the UK Earth System Model

    Science.gov (United States)

    Kuhlbrodt, T.; Jones, C.

    2016-02-01

    The UK Earth System Model (UKESM) is currently being developed by the UK Met Office and the academic community in the UK. The low-resolution version of UKESM has a nominal grid cell size of 150 km in the atmosphere (Unified Model [UM], N96) and 1° in the ocean (NEMO, ORCA1). In several preliminary test configurations of UKESM-N96-ORCA1, we find a significant cold bias in the northern hemisphere in comparison with HadGEM2 (N96-ORCA025, i.e. 0.25° resolution in the ocean). The sea surface is too cold by more than 2 K, and up to 6 K, in large parts of the North Atlantic and the northwest Pacific. In addition to the cold bias, the maximum AMOC transport (diagnosed below 500 m depth) decreases in all the configurations, displaying values between 11 and 14 Sv after 50 years of simulation. Transport at 26°N is even smaller and hence too weak in relation to observed values (approx. 18 Sv). The mixed layer is too deep within the North Atlantic Current and the Kuroshio, but too shallow north of these currents. The cold bias extends to a depth of several hundred metres. In the North Atlantic, it is accompanied by a freshening of up to 1.5 psu, compared to present-day climatology, along the path of the North Atlantic Current. A core problem appears to be the cessation of deep-water formation in the Labrador Sea. Remarkably, using earlier versions of NEMO and the UM, the AMOC is stable at around 16 or 17 Sv in the N96-ORCA1 configuration. We report on various strategies to reduce the cold bias and enhance the AMOC transport. Changing various parameters that affect the vertical mixing in NEMO has no significant effect. Modifying the bathymetry to deepen and widen the channels across the Greenland-Iceland-Scotland sill leads to a short-term improvement in AMOC transport, but only for about ten years. Strikingly, in a configuration with longer time steps for the atmosphere model we find a climate that is even colder, but has a more vigorous maximum AMOC transport (14 Sv

  5. Prenatal maternal plasma DNA screening for cystic fibrosis: A computer modelling study of screening performance [version 1; referees: 2 approved]

    Directory of Open Access Journals (Sweden)

    Robert W. Old

    2017-10-01

    Background: Prenatal cystic fibrosis (CF) screening is currently based on determining the carrier status of both parents. We propose a new method based only on the analysis of DNA in maternal plasma. Methods: The method relies on the quantitative amplification of the CF gene to determine the percentage of DNA fragments in maternal plasma at targeted CF mutation sites that carry a CF mutation. Computer modelling was carried out to estimate the distributions of these percentages in pregnancies with and without a fetus affected with CF. This was done according to the number of DNA fragments counted and fetal fraction, using the 23 CF mutations recommended by the American College of Medical Genetics for parental carrier testing. Results: The estimated detection rate (sensitivity) is 70% (100% of those detected using the 23 mutations), the false-positive rate 0.002%, and the odds of being affected given a positive screening result 14:1, compared with 70%, 0.12%, and 1:3, respectively, with current prenatal screening based on parental carrier testing. Conclusions: Compared with current screening practice based on parental carrier testing, the proposed method would substantially reduce the number of invasive diagnostic procedures (amniocentesis or chorionic villus sampling) without reducing the CF detection rate. The expected advantages of the proposed method justify carrying out the necessary test development for use in a clinical validation study.
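
    The figures quoted above can be checked with a back-of-envelope calculation: with a detection rate of 70%, a false-positive rate of 0.002% and an assumed CF birth prevalence of about 1 in 2,500 (the prevalence is not stated in the record), the odds of being affected given a positive result come out at roughly 14:1.

        # Back-of-envelope check of the screening figures in the record above.
        detection_rate = 0.70          # proportion of affected pregnancies detected
        false_positive_rate = 0.00002  # 0.002 %
        prevalence = 1.0 / 2500.0      # assumed CF birth prevalence (not in the record)

        true_positives = detection_rate * prevalence
        false_positives = false_positive_rate * (1.0 - prevalence)
        odds = true_positives / false_positives
        print(f"odds of being affected given a positive result ~ {odds:.0f} : 1")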

  6. An explanatory model for the concept of mental health in Iranian youth [version 2; referees: 2 approved]

    Directory of Open Access Journals (Sweden)

    Ahdieh Chinekesh

    2018-02-01

    Full Text Available Background: Mental health is considered an integral and essential component of overall health. Its determinants and related factors are among the most important research priorities, especially in adolescents and young people. Using a qualitative approach, the present study aimed to identify factors affecting the mental health of youth in Iran. Methods: In 2017, following content analysis principles and using semi-structured in-depth interviews, we conducted a qualitative study exploring the opinions of young people about mental health. A targeted sampling method was used, and participants were young volunteers aged 18 to 30 selected from Tehran province, Iran. Inclusion criteria were willingness to participate in the study and the ability to express their experiences. Data were collected through individual in-depth interviews. Following the explanatory model, the interviews were directed toward the concept of mental health and the paths of causality and auxiliary behaviors. Results: 21 young adults who met the study inclusion criteria participated, of whom 12 were male. Their mean age was 24.4 ± 0.41 years and their education varied from primary school to Master's degree. Mental health was regarded as mental well-being and a sense of satisfaction and efficacy, not merely the absence of a disease or mental disorder. Based on the opinions of the interviewees, three factors are involved in mental health: personal characteristics, family and society. Individual factors were associated with behavioral and physical problems. Tensions arising from societal and family conflicts were revealed as one of the most important issues. Economic problems and youth unemployment were also identified within the social factor. Conclusion: In Iran, social factors such as jobs for the unemployed and job security are considered important determinants of the mental health of young people.

  7. Influence of Dust and Black Carbon on the Snow Albedo in the NASA Goddard Earth Observing System Version 5 Land Surface Model

    Science.gov (United States)

    Yasunari, Teppei J.; Koster, Randal D.; Lau, K. M.; Aoki, Teruo; Sud, Yogesh C.; Yamazaki, Takeshi; Motoyoshi, Hiroki; Kodama, Yuji

    2011-01-01

    Present-day land surface models rarely account for the influence of both black carbon and dust in the snow on the snow albedo. Snow impurities increase the absorption of incoming shortwave radiation (particularly in the visible bands), whereby they have major consequences for the evolution of snowmelt and life cycles of snowpack. A new parameterization of these snow impurities was included in the catchment-based land surface model used in the National Aeronautics and Space Administration Goddard Earth Observing System version 5. Validation tests against in situ observed data were performed for the winter of 2003–2004 in Sapporo, Japan, for both the new snow albedo parameterization (which explicitly accounts for snow impurities) and the preexisting baseline albedo parameterization (which does not). Validation tests reveal that daily variations of snow depth and snow surface albedo are more realistically simulated with the new parameterization. Reasonable perturbations in the assigned snow impurity concentrations, as inferred from the observational data, produce significant changes in snowpack depth and radiative flux interactions. These findings illustrate the importance of parameterizing the influence of snow impurities on the snow surface albedo for proper simulation of the life cycle of snow cover.

  8. The MELTSPREAD Code for Modeling of Ex-Vessel Core Debris Spreading Behavior, Code Manual – Version 3-beta

    Energy Technology Data Exchange (ETDEWEB)

    Farmer, M. T. [Argonne National Lab. (ANL), Argonne, IL (United States)

    2017-09-01

    MELTSPREAD3 is a transient one-dimensional computer code that has been developed to predict the gravity-driven flow and freezing behavior of molten reactor core materials (corium) in containment geometries. Predictions can be made for corium flowing across surfaces under either dry or wet cavity conditions. The spreading surfaces that can be selected are steel, concrete, a user-specified material (e.g., a ceramic), or an arbitrary combination thereof. The corium can have a wide range of compositions of reactor core materials that includes distinct oxide phases (predominantly Zr, and steel oxides) plus metallic phases (predominantly Zr and steel). The code requires input that describes the containment geometry, melt “pour” conditions, and cavity atmospheric conditions (i.e., pressure, temperature, and cavity flooding information). For cases in which the cavity contains a preexisting water layer at the time of RPV failure, melt jet breakup and particle bed formation can be calculated mechanistically given the time-dependent melt pour conditions (input data) as well as the heatup and boiloff of water in the melt impingement zone (calculated). For core debris impacting either the containment floor or previously spread material, the code calculates the transient hydrodynamics and heat transfer which determine the spreading and freezing behavior of the melt. The code predicts conditions at the end of the spreading stage, including melt relocation distance, depth and material composition profiles, substrate ablation profile, and wall heatup. Code output can be used as input to other models such as CORQUENCH that evaluate long term core-concrete interaction behavior following the transient spreading stage. MELTSPREAD3 was originally developed to investigate BWR Mark I liner vulnerability, but has been substantially upgraded and applied to other reactor designs (e.g., the EPR), and more recently to the plant accidents at Fukushima Daiichi. The most recent round of

  9. Variable-density groundwater flow simulations and particle tracking. Numerical modelling using DarcyTools. Preliminary site description of the Simpevarp area, version 1.1

    Energy Technology Data Exchange (ETDEWEB)

    Follin, Sven [SF GeoLogic AB, Stockholm (Sweden); Stigsson, Martin; Berglund, Sten [Swedish Nuclear Fuel and Waste Management Co., Stockholm (Sweden); Svensson, Urban [Computer-aided Fluid Engineering AB, Norrkoeping (Sweden)

    2004-12-01

    SKB is conducting site investigations for a high-level nuclear waste repository in fractured crystalline rocks at two coastal areas in Sweden, Forsmark and Simpevarp. The investigations started in 2002 and have been planned since the late 1990s. The work presented here investigates the possibility of using hydrogeochemical measurements in deep boreholes to reduce parameter uncertainty in regional modelling of groundwater flow in fractured rock. The work was conducted with the aim of improving the palaeohydrogeological understanding of the Simpevarp area and of giving recommendations for the preparation of the next version of the Preliminary Site Description (1.2). The study is based on a large number of numerical simulations of transient variable-density groundwater flow through a strongly heterogeneous and anisotropic medium. The simulations were conducted with the computer code DarcyTools, the development of which has been funded by SKB. DarcyTools is a flexible porous media code specifically designed to treat groundwater flow and salt transport in sparsely fractured crystalline rock, and it is noted that some of the features presented in this report are still under development or subject to testing and verification. The simulations reveal the sensitivity of the results to different hydrogeological modelling assumptions, e.g. the sensitivity to the initial groundwater conditions at 10,000 BC, the size of the model domain and boundary conditions, and the hydraulic properties of deterministically and stochastically modelled deformation zones. The outcome of these simulations was compared with measured salinities and calculated relative proportions of different water types (mixing proportions) from measurements in two deep core drilled boreholes in the Laxemar subarea. In addition to the flow simulations, the statistics of flow related transport parameters were calculated for particle flowpaths from repository depth to ground surface for two subareas within the

  10. Predicted Water and Carbon Fluxes as well as Vegetation Distribution on the Korean Peninsula in the Future with the Ecosystem Demography Model version 2

    Science.gov (United States)

    Kim, J. B.; Kim, Y.

    2017-12-01

    This study investigates how the water and carbon fluxes as well as the vegetation distribution on the Korean Peninsula would vary with climate change. The Ecosystem Demography (ED) Model version 2 (ED2) is used in this study; it is an integrated terrestrial biosphere model that uses a set of size- and age-structured partial differential equations to track the changing structure and composition of the plant canopy. Using the vegetation distribution data of Jeju Island, located at the southern part of the Korean Peninsula, ED2 is set up and driven for the past 10 years. The results of ED2 are then evaluated and adjusted with observed forestry data, i.e., growth and mortality, and with flux tower and MODIS satellite data, i.e., evapotranspiration (ET) and gross primary production (GPP). The adjusted ED2 is used to simulate the water and carbon fluxes as well as vegetation dynamics on the Korean Peninsula for the historical period, with the model evaluated against the MODIS satellite data. Finally, the RCP 2.6 and 6.0 climate scenarios are used to predict the fluxes and vegetation distribution of the Korean Peninsula in the future. Using this state-of-the-art terrestrial ecosystem model, the study provides a better understanding of the future ecosystem vulnerability of the Korean Peninsula. Acknowledgements: This work was supported by the Basic Science Research Program through the National Research Foundation of Korea (NRF) funded by the Ministry of Science, ICT & Future Planning (2015R1C1A2A01054800) and by the Korea Meteorological Administration R&D Program under Grant KMIPA 2015-6180. This work was also supported by the Yonsei University Future-leading Research Initiative of 2015 (2016-22-0061).

  11. Development of a source oriented version of the WRF/Chem model and its application to the California regional PM10 / PM2.5 air quality study

    Science.gov (United States)

    Zhang, H.; DeNero, S. P.; Joe, D. K.; Lee, H.-H.; Chen, S.-H.; Michalakes, J.; Kleeman, M. J.

    2014-01-01

    A source-oriented version of the Weather Research and Forecasting model with chemistry (SOWC, hereinafter) was developed. SOWC separately tracks primary particles with different hygroscopic properties rather than instantaneously combining them into an internal mixture. This approach avoids artificially mixing light absorbing black + brown carbon particles with materials such as sulfate that would encourage the formation of additional coatings. Source-oriented particles undergo coagulation and gas-particle conversion, but these processes are considered in a dynamic framework that realistically "ages" primary particles over hours and days in the atmosphere. SOWC more realistically predicts radiative feedbacks from anthropogenic aerosols compared to models that make internal mixing or other artificial mixing assumptions. A three-week stagnation episode (15 December 2000 to 6 January 2001) in the San Joaquin Valley (SJV) during the California Regional PM10 / PM2.5 Air Quality Study (CRPAQS) was chosen for the initial application of the new modeling system. Primary particles emitted from diesel engines, wood smoke, high-sulfur fuel combustion, food cooking, and other anthropogenic sources were tracked separately throughout the simulation as they aged in the atmosphere. Differences were identified between predictions from the source oriented vs. the internally mixed representation of particles with meteorological feedbacks in WRF/Chem for a number of meteorological parameters: aerosol extinction coefficients, downward shortwave flux, planetary boundary layer depth, and primary and secondary particulate matter concentrations. Comparisons with observations show that SOWC predicts particle scattering coefficients more accurately than the internally mixed model. Downward shortwave radiation predicted by SOWC is enhanced by ~1% at ground level chiefly because diesel engine particles in the source-oriented mixture are not artificially coated with material that increases their

  12. Development of a source oriented version of the WRF/Chem model and its application to the California regional PM10 / PM2.5 air quality study

    Directory of Open Access Journals (Sweden)

    H. Zhang

    2014-01-01

    Full Text Available A source-oriented version of the Weather Research and Forecasting model with chemistry (SOWC, hereinafter) was developed. SOWC separately tracks primary particles with different hygroscopic properties rather than instantaneously combining them into an internal mixture. This approach avoids artificially mixing light absorbing black + brown carbon particles with materials such as sulfate that would encourage the formation of additional coatings. Source-oriented particles undergo coagulation and gas-particle conversion, but these processes are considered in a dynamic framework that realistically "ages" primary particles over hours and days in the atmosphere. SOWC more realistically predicts radiative feedbacks from anthropogenic aerosols compared to models that make internal mixing or other artificial mixing assumptions. A three-week stagnation episode (15 December 2000 to 6 January 2001) in the San Joaquin Valley (SJV) during the California Regional PM10 / PM2.5 Air Quality Study (CRPAQS) was chosen for the initial application of the new modeling system. Primary particles emitted from diesel engines, wood smoke, high-sulfur fuel combustion, food cooking, and other anthropogenic sources were tracked separately throughout the simulation as they aged in the atmosphere. Differences were identified between predictions from the source oriented vs. the internally mixed representation of particles with meteorological feedbacks in WRF/Chem for a number of meteorological parameters: aerosol extinction coefficients, downward shortwave flux, planetary boundary layer depth, and primary and secondary particulate matter concentrations. Comparisons with observations show that SOWC predicts particle scattering coefficients more accurately than the internally mixed model. Downward shortwave radiation predicted by SOWC is enhanced by ~1% at ground level chiefly because diesel engine particles in the source-oriented mixture are not artificially coated with material that

  13. Influence of Superparameterization and a Higher-Order Turbulence Closure on Rainfall Bias Over Amazonia in Community Atmosphere Model Version 5: How Parameterization Changes Rainfall

    Energy Technology Data Exchange (ETDEWEB)

    Zhang, Kai [Jackson School of Geosciences, University of Texas at Austin, Austin TX USA; Fu, Rong [Jackson School of Geosciences, University of Texas at Austin, Austin TX USA; Department of Atmospheric and Oceanic Sciences, University of California, Los Angeles CA USA; Shaikh, Muhammad J. [Jackson School of Geosciences, University of Texas at Austin, Austin TX USA; Ghan, Steven [Pacific Northwest National Laboratory, Richland WA USA; Wang, Minghuai [Institute for Climate and Global Change Research and School of Atmospheric Sciences, Nanjing University, Nanjing China; Collaborative Innovation Center of Climate Change, Nanjing China; Leung, L. Ruby [Pacific Northwest National Laboratory, Richland WA USA; Dickinson, Robert E. [Jackson School of Geosciences, University of Texas at Austin, Austin TX USA; Marengo, Jose [Centro Nacional de Monitoramento e Alertas aos Desastres Naturais, São Jose dos Campos Brazil

    2017-09-21

    We evaluate the Community Atmosphere Model Version 5 (CAM5) with a higher-order turbulence closure scheme, named Cloud Layers Unified By Binomials (CLUBB), and a Multiscale Modeling Framework (MMF) with two different microphysics configurations to investigate their influences on rainfall simulations over Southern Amazonia. The two different microphysics configurations in MMF are the one-moment cloud microphysics without aerosol treatment (SAM1MOM) and two-moment cloud microphysics coupled with aerosol treatment (SAM2MOM). Results show that both MMF-SAM2MOM and CLUBB effectively reduce the low biases of rainfall, mainly during the wet season. CLUBB reduces low biases of humidity in the lower troposphere with further reduced shallow clouds. The latter enables more surface solar flux, leading to stronger convection and more rainfall. MMF, especially MMF-SAM2MOM, destabilizes the atmosphere with more moisture and higher atmospheric temperatures in the atmospheric boundary layer, allowing the growth of more extreme convection and further generating more deep convection. MMF-SAM2MOM significantly increases rainfall in the afternoon, but it does not reduce the early bias of the diurnal rainfall peak; CLUBB, on the other hand, delays the afternoon peak time and produces more precipitation in the early morning, due to a more realistic gradual transition between shallow and deep convection. MMF appears to be able to realistically capture the observed increase of relative humidity prior to deep convection, especially with its two-moment configuration. In contrast, in CAM5 and CAM5 with CLUBB, the occurrence of deep convection appears to be a result of stronger heating rather than higher relative humidity.

  14. Validation Evidence for the Elementary School Version of the MUSIC® Model of Academic Motivation Inventory (Pruebas de validación para el Modelo MUSIC® de Inventario de Motivación Educativa para Escuela Primaria)

    Science.gov (United States)

    Jones, Brett D.; Sigmon, Miranda L.

    2016-01-01

    Introduction: The purpose of our study was to assess whether the Elementary School version of the MUSIC® Model of Academic Motivation Inventory was valid for use with elementary students in classrooms with regular classroom teachers and student teachers enrolled in a university teacher preparation program. Method: The participants included 535…

  15. Beliefs, attitudes, and behavior of Turkish women about breast cancer and breast self-examination according to a Turkish version of the Champion Health Belief Model Scale.

    Science.gov (United States)

    Erbil, Nülüfer; Bölükbaş, Nurgül

    2012-01-01

    Breast cancer (BC) is one of the most common cancers affecting women worldwide. Although a great deal of progress has been made in the health sciences, early diagnosis, and increasing community awareness, breast cancer remains a life-threatening illness. In order to reduce this threat, breast cancer screening needs to be implemented in all communities where possible. The purpose of this study was to examine the health beliefs, attitudes and behaviors of Turkish women regarding breast cancer and breast self-examination. Data were collected from a sample of 656 women, using an adapted Turkish version of Champion's Health Belief Model Scale (CHBMS), between January and May 2011, in Ordu province of Turkey. The results showed that 67.7% of women had knowledge about BSE and 55.8% performed it; however, 60.6% of those who indicated they practiced BSE reported they did so at irregular intervals. CHBMS subscale scores differed significantly according to the women's age, education level, occupation, family income, mothers' education level, family history of breast cancer, having a friend or acquaintance with breast cancer, and knowledge about breast cancer, BSE and mammography. Women's knowledge about the risks and benefits of early detection of breast cancer positively affects their health beliefs, attitudes, and behaviors. Health care professionals can develop effective breast health programs and can help women to gain good health behavior and to maintain health.

  16. Measurement of the reactions γp→K+Λ and γp→K+Σ0 for photon energies up to 2.6 GeV with the SAPHIR detector at ELSA

    International Nuclear Information System (INIS)

    Glander, K.H.

    2003-02-01

    The reactions γp→K+Λ and γp→K+Σ0 were measured in the energy range from threshold up to a photon energy of 2.6 GeV. The data were taken with the SAPHIR detector at the electron stretcher facility ELSA. Results on cross sections and hyperon polarizations are presented as a function of kaon production angle and photon energy. The total cross section for Λ production shows a strong threshold enhancement, whereas the Σ0 data have a maximum at about Eγ = 1.45 GeV. Cross sections, together with their angular decompositions into Legendre polynomials, suggest contributions from resonance production for both reactions. The K+Λ differential cross section is enhanced for backward-produced kaons at Eγ ≈ 1.45 GeV. This might be interpreted as the contribution of a so-called missing resonance D13(1895). In general, the induced polarization of Λ has negative values in the kaon forward direction and positive values in the backward direction. The magnitude varies with energy. The polarization of Σ0 follows a similar angular and energy dependence as that of Λ, but with opposite sign. (orig.)
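
    The Legendre decomposition mentioned above is a standard way of summarizing an angular distribution: the measured differential cross section in bins of cos θ is expanded as a sum of Legendre polynomials, and the fitted coefficients expose the partial-wave content. The snippet below is a generic illustration of such a fit on synthetic data; the bin centres, coefficient values and noise level are invented and are not the SAPHIR measurements.

```python
# Generic illustration: expand a differential cross section in Legendre
# polynomials of cos(theta_K). Synthetic data, not the SAPHIR results.
import numpy as np
from numpy.polynomial import legendre

cos_theta = np.linspace(-0.9, 0.9, 19)           # hypothetical bin centres
# hypothetical dsigma/dOmega values (arbitrary units) in those bins
dsigma = 0.30 + 0.12 * cos_theta - 0.05 * (3 * cos_theta**2 - 1) / 2
dsigma += np.random.default_rng(1).normal(0.0, 0.01, size=cos_theta.size)

# Fit coefficients a_l of sum_l a_l * P_l(cos theta) up to l = 3
coeffs = legendre.legfit(cos_theta, dsigma, deg=3)
print("Legendre coefficients a_0..a_3:", np.round(coeffs, 3))

# The angle-integrated cross section follows from the l = 0 term: sigma = 4*pi*a_0
print("total cross section estimate:", 4 * np.pi * coeffs[0])
```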

  17. Fuel model studies. Comparison of our present version of GAPCON-THERMAL-2 with results from the EPRI code comparison study. Partial report

    International Nuclear Information System (INIS)

    Malen, K.; Jansson, L.

    1978-08-01

    Runs with our present version of GAPCON-THERMAL-2 have been compared to results from the EPRI code comparison study. Our version of GAPCON also generally predicts high temperatures, 100-300 K or 10-15% higher than the average code predictions and the experimental results. The well-known temperature-gas release instability is also found with GAPCON. In this case the gas release limits of 1400 deg C and 1700 deg C are identified as instability points. (author)

  18. SHEDS-Multimedia Model Version 3 (a) Technical Manual; (b) User Guide; and (c) Executable File to Launch SAS Program and Install Model

    Science.gov (United States)

    Reliable models for assessing human exposures are important for understanding health risks from chemicals. The Stochastic Human Exposure and Dose Simulation model for multimedia, multi-route/pathway chemicals (SHEDS-Multimedia), developed by EPA’s Office of Research and Developm...

  19. Application of TOPSIS and VIKOR improved versions in a multi criteria decision analysis to develop an optimized municipal solid waste management model.

    Science.gov (United States)

    Aghajani Mir, M; Taherei Ghazvinei, P; Sulaiman, N M N; Basri, N E A; Saheri, S; Mahmood, N Z; Jahan, A; Begum, R A; Aghamohammadi, N

    2016-01-15

    Selecting a suitable Multi Criteria Decision Making (MCDM) method is a crucial stage in establishing a Solid Waste Management (SWM) system. The main objective of the current study is to demonstrate and evaluate a proposed method using MCDM methods. An improved version of the Technique for Order of Preference by Similarity to Ideal Solution (TOPSIS) was applied to obtain the best municipal solid waste management method by comparing and ranking the scenarios; applying this method to rank treatment methods is introduced as one contribution of the study. In addition, the Viekriterijumsko Kompromisno Rangiranje (VIKOR) compromise solution method was applied for sensitivity analyses. The proposed method can assist urban decision makers in prioritizing and selecting an optimized Municipal Solid Waste (MSW) treatment system, and a logical and systematic scientific method was proposed to guide appropriate decision-making. A modified TOPSIS methodology, superior to existing methods, was applied to MSW problems for the first time. Next, 11 scenarios of MSW treatment methods are defined and compared environmentally and economically based on the waste management conditions. Results show that integrating a sanitary landfill (18.1%), RDF (3.1%), composting (2%), anaerobic digestion (40.4%), and recycling (36.4%) was an optimized model of integrated waste management. The applied decision-making structure provides the opportunity for optimum decision-making. Therefore, the mix of recycling and anaerobic digestion and a sanitary landfill with Electricity Production (EP) are the preferred options for MSW management. Copyright © 2015 Elsevier Ltd. All rights reserved.
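
    TOPSIS ranks alternatives by their distance to an ideal and an anti-ideal solution in a weighted, normalised criteria space. The sketch below implements only the standard TOPSIS procedure on a made-up decision matrix, as a reference point for the improved version used in the paper; the scenario scores, criteria and weights are hypothetical.

```python
# Standard TOPSIS ranking sketch (the paper uses a modified TOPSIS; this is
# only the textbook method, with an invented decision matrix for illustration).
import numpy as np

def topsis(matrix, weights, benefit):
    """matrix: alternatives x criteria; weights sum to 1;
    benefit[j] is True if larger is better for criterion j."""
    m = matrix / np.linalg.norm(matrix, axis=0)        # vector normalisation
    v = m * weights                                    # weighted normalised matrix
    ideal = np.where(benefit, v.max(axis=0), v.min(axis=0))
    anti = np.where(benefit, v.min(axis=0), v.max(axis=0))
    d_pos = np.linalg.norm(v - ideal, axis=1)
    d_neg = np.linalg.norm(v - anti, axis=1)
    return d_neg / (d_pos + d_neg)                     # closeness coefficient

# Hypothetical scores for 4 waste-treatment scenarios on 3 criteria
# (cost, emissions, energy recovery); only the last is a benefit criterion.
scores = np.array([[120.0, 0.8, 30.0],
                   [100.0, 1.1, 45.0],
                   [140.0, 0.6, 20.0],
                   [110.0, 0.9, 50.0]])
cc = topsis(scores, weights=np.array([0.4, 0.4, 0.2]),
            benefit=np.array([False, False, True]))
print("ranking (best first):", np.argsort(cc)[::-1])
```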

  20. Version 2 of RSXMULTI

    International Nuclear Information System (INIS)

    Heinicke, P.; Berg, D.; Constanta-Fanourakis, P.; Quigg, E.K.

    1985-01-01

    MULTI is a general purpose, high speed, high energy physics data acquisition and data investigation system that runs on PDP-11 and VAX architectures. This paper describes the latest version of MULTI, which runs under RSX-11M version 4.1 and supports a modular approach to the separate tasks that interface to it, allowing the same system to be used in single-CPU test beam experiments as well as large-scale experiments with multiple interconnected CPUs. MULTI uses CAMAC (IEEE-583) for control and monitoring of an experiment, and is written in FORTRAN-77 and assembler. The design of this version, which simplified the interface between tasks and eliminated the need for a hard-to-maintain homegrown I/O system, is also discussed

  1. Determining Optimal Decision Version

    Directory of Open Access Journals (Sweden)

    Olga Ioana Amariei

    2014-06-01

    Full Text Available In this paper we start from the calculation of the product cost, applying the method of calculating the hour-machine cost (THM) on each of the three cutting machines, namely: the plasma cutting machine, the combined cutting machine (plasma and water jet) and the water-jet cutting machine. Following the cost calculation, and taking into account the manufacturing precision of each machine as well as the quality of the processed surface, the optimal decision version for manufacturing the product needs to be determined. To determine the optimal decision version, we first calculate the optimal version on each criterion, and then the overall optimum using multiattribute decision methods.

  2. TriBITS lifecycle model. Version 1.0, a lean/agile software lifecycle model for research-based computational science and engineering and applied mathematical software.

    Energy Technology Data Exchange (ETDEWEB)

    Willenbring, James M.; Bartlett, Roscoe Ainsworth (Oak Ridge National Laboratory, Oak Ridge, TN); Heroux, Michael Allen

    2012-01-01

    Software lifecycles are becoming an increasingly important issue for computational science and engineering (CSE) software. The process by which a piece of CSE software begins life as a set of research requirements and then matures into a trusted high-quality capability is both commonplace and extremely challenging. Although an implicit lifecycle is obviously being used in any effort, the challenges of this process - respecting the competing needs of research vs. production - cannot be overstated. Here we describe a proposal for a well-defined software lifecycle process based on modern Lean/Agile software engineering principles. What we propose is appropriate for many CSE software projects that are initially heavily focused on research but also are expected to eventually produce usable high-quality capabilities. The model is related to TriBITS, a build, integration and testing system, which serves as a strong foundation for this lifecycle model, and aspects of this lifecycle model are ingrained in the TriBITS system. Here, we advocate three to four phases or maturity levels that address the appropriate handling of many issues associated with the transition from research to production software. The goals of this lifecycle model are to better communicate maturity levels with customers and to help to identify and promote Software Engineering (SE) practices that will help to improve productivity and produce better software. An important collection of software in this domain is Trilinos, which is used as the motivation and the initial target for this lifecycle model. However, many other related and similar CSE (and non-CSE) software projects can also make good use of this lifecycle model, especially those that use the TriBITS system. Indeed this lifecycle process, if followed, will enable large-scale sustainable integration of many complex CSE software efforts across several institutions.

  3. A non-equilibrium model for soil heating and moisture transport during extreme surface heating: The soil (heat-moisture-vapor) HMV-Model Version

    Science.gov (United States)

    William Massman

    2015-01-01

    Increased use of prescribed fire by land managers and the increasing likelihood of wildfires due to climate change require an improved modeling capability of extreme heating of soils during fires. This issue is addressed here by developing and testing the soil (heat-moisture-vapor) HMV model, a 1-D (one-dimensional) non-equilibrium (liquid–vapor phase change)...

  4. Modeling regional air quality and climate: improving organic aerosol and aerosol activation processes in WRF/Chem version 3.7.1

    Science.gov (United States)

    Yahya, Khairunnisa; Glotfelty, Timothy; Wang, Kai; Zhang, Yang; Nenes, Athanasios

    2017-06-01

    Air quality and climate influence each other through the uncertain processes of aerosol formation and cloud droplet activation. In this study, both processes are improved in the Weather Research and Forecasting model with Chemistry (WRF/Chem) version 3.7.1. The existing Volatility Basis Set (VBS) treatments for organic aerosol (OA) formation in WRF/Chem are improved by considering the following: the secondary OA (SOA) formation from semi-volatile primary organic aerosol (POA), a semi-empirical formulation for the enthalpy of vaporization of SOA, and functionalization and fragmentation reactions for multiple generations of products from the oxidation of VOCs. Over the continental US, 2-month-long simulations (May to June 2010) are conducted and results are evaluated against surface and aircraft observations during the Nexus of Air Quality and Climate Change (CalNex) campaign. Among all the configurations considered, the best performance is found for the simulation with the 2005 Carbon Bond mechanism (CB05) and the VBS SOA module with semivolatile POA treatment, 25 % fragmentation, and the emissions of semi-volatile and intermediate volatile organic compounds being 3 times the original POA emissions. Among the three gas-phase mechanisms (CB05, CB6, and SAPRC07) used, CB05 gives the best performance for surface ozone and PM2.5 concentrations. Differences in SOA predictions are larger for the simulations with different VBS treatments (e.g., nonvolatile POA versus semivolatile POA) compared to the simulations with different gas-phase mechanisms. Compared to the simulation with CB05 and the default SOA module, the simulations with the VBS treatment improve cloud droplet number concentration (CDNC) predictions (normalized mean biases from -40.8 % to a range of -34.6 to -27.7 %), with large differences between CB05-CB6 and SAPRC07 due to large differences in their OH and HO2 predictions. An advanced aerosol activation parameterization based on the Fountoukis and Nenes

  5. UN-EDITED VERSION

    Indian Academy of Sciences (India)

    27

    Picasa (version 3.0). 2.4 Statistical analysis. As the ambient water temperature during experimentation was 24°C, the group exposed to this temperature was considered the control. The number of males and females (sex ratio) obtained in all remaining groups was compared with the control. Embryos and tadpoles exposed to ...

  6. Coastal Modelling Environment version 1.0: a framework for integrating landform-specific component models in order to simulate decadal to centennial morphological changes on complex coasts

    Directory of Open Access Journals (Sweden)

    A. Payo

    2017-07-01

    Full Text Available The ability to model morphological changes on complex, multi-landform coasts over decadal to centennial timescales is essential for sustainable coastal management worldwide. One approach involves coupling of landform-specific simulation models (e.g. cliffs, beaches, dunes and estuaries) that have been independently developed. An alternative, novel approach explored in this paper is to capture the essential characteristics of the landform-specific models using a common spatial representation within an appropriate software framework. This avoids the problems that result from the model-coupling approach due to between-model differences in the conceptualizations of geometries, volumes and locations of sediment. In the proposed framework, the Coastal Modelling Environment (CoastalME), change in coastal morphology is represented by means of dynamically linked raster and geometrical objects. A grid of raster cells provides the data structure for representing quasi-3-D spatial heterogeneity and sediment conservation. Other geometrical objects (lines, areas and volumes) that are consistent with, and derived from, the raster structure represent a library of coastal elements (e.g. shoreline, beach profiles and estuary volumes) as required by different landform-specific models. As a proof-of-concept, we illustrate the capabilities of an initial version of CoastalME by integrating a cliff–beach model and two wave propagation approaches. We verify that CoastalME can reproduce behaviours of the component landform-specific models. Additionally, the integration of these component models within the CoastalME framework reveals behaviours that emerge from the interaction of landforms, which have not previously been captured, such as the influence of the regional bathymetry on the local alongshore sediment-transport gradient and the effect on coastal change on an undefended coastal segment and on sediment bypassing of coastal structures.

  7. The Revised Child Anxiety and Depression Scale-Short Version: Scale Reduction via Exploratory Bifactor Modeling of the Broad Anxiety Factor

    Science.gov (United States)

    Ebesutani, Chad; Reise, Steven P.; Chorpita, Bruce F.; Ale, Chelsea; Regan, Jennifer; Young, John; Higa-McMillan, Charmaine; Weisz, John R.

    2012-01-01

    Using a school-based (N = 1,060) and clinic-referred (N = 303) youth sample, the authors developed a 25-item shortened version of the Revised Child Anxiety and Depression Scale (RCADS) using Schmid-Leiman exploratory bifactor analysis to reduce client burden and administration time and thus improve the transportability characteristics of this…

  8. A comparison of climate simulations for the last glacial maximum with three different versions of the ECHAM model and implications for summer-green tree refugia

    Directory of Open Access Journals (Sweden)

    K. Arpe

    2011-02-01

    Full Text Available Model simulations of the last glacial maximum (21 ± 2 ka) with the ECHAM3 T42 atmosphere-only, ECHAM5-MPIOM T31 atmosphere-ocean coupled and ECHAM5 T106 atmosphere-only models are compared. The topography, land-sea mask and glacier distribution for the ECHAM5 simulations were taken from the Paleoclimate Modelling Intercomparison Project Phase II (PMIP2) data set, while for ECHAM3 they were taken from PMIP1. The ECHAM5-MPIOM T31 model produced its own sea surface temperatures (SST), while the ECHAM5 T106 simulations were forced at the boundaries by the coupled model SSTs corrected for their present-day biases, and the ECHAM3 T42 model was forced with prescribed SSTs provided by the Climate/Long-Range Investigation, Mapping, and Prediction project (CLIMAP).

    The SSTs in the ECHAM5-MPIOM simulation for the last glacial maximum (LGM were much warmer in the northern Atlantic than those suggested by CLIMAP or Overview of Glacial Atlantic Ocean Mapping (GLAMAP while the SSTs were cooler everywhere else. This had a clear effect on the temperatures over Europe, warmer for winters in western Europe and cooler for eastern Europe than the simulation with CLIMAP SSTs.

    Considerable differences in the general circulation patterns were found in the different simulations. A ridge over western Europe for the present climate during winter in the 500 hPa height field remains in both ECHAM5 simulations for the LGM, more so in the T106 version, while the ECHAM3 CLIMAP-SST simulation provided a trough which is consistent with cooler temperatures over western Europe. The zonal wind between 30° W and 10° E shows a southward shift of the polar and subtropical jets in the simulations for the LGM, least obvious in the ECHAM5 T31 one, and an extremely strong polar jet for the ECHAM3 CLIMAP-SST run. The latter can probably be assigned to the much stronger north-south gradient in the CLIMAP SSTs. The southward shift of the polar jet during the LGM is supported by

  9. School version of ESTE EU

    International Nuclear Information System (INIS)

    Carny, P.; Suchon, D.; Chyly, M.; Smejkalova, E.; Fabova, V.

    2008-01-01

    ESTE EU is an information system and software tool for assessing the radiological impacts on the territory of the country in the case of a radiation accident inside or outside the country. The program can model the dispersion of radioactive clouds on small and meso scales. The system enables the user to estimate a prediction of the source term (release to the atmosphere) for a radiation or nuclear accident at any point in Europe (any release point, but especially the sites of European power reactors). The system can also utilize results of real radiological monitoring in the source term estimation process. Radiological impacts of a release to the atmosphere are modelled and calculated across Europe and displayed in the geographical information system (GIS). The school version of ESTE EU is intended for university students who are interested in, or could work in, the fields of emergency response, radiological and nuclear accidents, dispersion modelling, radiological impact calculation and the implementation of urgent or preventive protective measures. The school version of ESTE EU is planned to be donated to specialized university departments in Slovakia, the Czech Republic, etc. The system can be fully operated in Slovak, Czech or English. (authors)

  10. School version of ESTE EU

    International Nuclear Information System (INIS)

    Carny, P.; Suchon, D.; Chyly, M.; Smejkalova, E.; Fabova, V.

    2009-01-01

    ESTE EU is an information system and software tool for assessing the radiological impacts on the territory of the country in the case of a radiation accident inside or outside the country. The program can model the dispersion of radioactive clouds on small and meso scales. The system enables the user to estimate a prediction of the source term (release to the atmosphere) for a radiation or nuclear accident at any point in Europe (any release point, but especially the sites of European power reactors). The system can also utilize results of real radiological monitoring in the source term estimation process. Radiological impacts of a release to the atmosphere are modelled and calculated across Europe and displayed in the geographical information system (GIS). The school version of ESTE EU is intended for university students who are interested in, or could work in, the fields of emergency response, radiological and nuclear accidents, dispersion modelling, radiological impact calculation and the implementation of urgent or preventive protective measures. The school version of ESTE EU is planned to be donated to specialized university departments in Slovakia, the Czech Republic, etc. The system can be fully operated in Slovak, Czech or English. (authors)

  11. Version control with Git

    CERN Document Server

    Loeliger, Jon

    2012-01-01

    Get up to speed on Git for tracking, branching, merging, and managing code revisions. Through a series of step-by-step tutorials, this practical guide takes you quickly from Git fundamentals to advanced techniques, and provides friendly yet rigorous advice for navigating the many functions of this open source version control system. This thoroughly revised edition also includes tips for manipulating trees, extended coverage of the reflog and stash, and a complete introduction to the GitHub repository. Git lets you manage code development in a virtually endless variety of ways, once you understand how to harness the system's flexibility. This book shows you how. Learn how to use Git for several real-world development scenarios ; Gain insight into Git's common-use cases, initial tasks, and basic functions ; Use the system for both centralized and distributed version control ; Learn how to manage merges, conflicts, patches, and diffs ; Apply advanced techniques such as rebasing, hooks, and ways to handle submodu...

  12. EASI graphics - Version II

    International Nuclear Information System (INIS)

    Allensworth, J.A.

    1984-04-01

    EASI (Estimate of Adversary Sequence Interruption) is an analytical technique for measuring the effectiveness of physical protection systems. EASI Graphics is a computer graphics extension of EASI which provides a capability for performing sensitivity and trade-off analyses of the parameters of a physical protection system. This document reports on the implementation of the Version II of EASI Graphics and illustrates its application with some examples. 5 references, 15 figures, 6 tables

  13. Global Historical Climatology Network (GHCN), Version 1 (Version Superseded)

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — Please note, this dataset has been superseded by a newer version (see below). Users should not use this version except in rare cases (e.g., when reproducing previous...

  14. Validation of the Danish version of the McGill Ingestive Skills Assessment using classical test theory and the Rasch model

    DEFF Research Database (Denmark)

    Hansen, Tina; Lambert, Heather C; Faber, Jens

    2012-01-01

    Purpose: The study aimed to validate the Danish version of the Canadian "McGill Ingestive Skills Assessment" (MISA-DK) for measuring dysphagia in frail elders. Method: One hundred and ten consecutive older medical patients were recruited to the study. Reliability was assessed by internal consistency (Cronbach's alpha). External construct validity (convergent and known-groups validity) was evaluated against theoretical constructs assessing the complex concept of ingestive skills. Internal construct validity was tested using Rasch analysis. Results: High internal consistency reliability ... (p = 0.424), and unidimensionality of the MISA-DK was confirmed after resolving disordered thresholds for 11 items and adjustment for local dependency. Conclusion: The psychometric properties of the MISA-DK equal the original Canadian version. Assessment of internal construct validity indicated...
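
    Cronbach's alpha, the internal-consistency statistic reported above, is computed directly from an item-score matrix as alpha = k/(k-1) * (1 - sum of item variances / variance of the total score). The snippet below is a minimal, self-contained illustration of that formula on simulated item responses; it does not use the MISA-DK data, and the number of respondents and items is arbitrary.

```python
# Minimal sketch of Cronbach's alpha for internal-consistency reliability,
# using a small simulated item-score matrix (respondents x items).
import numpy as np

def cronbach_alpha(scores):
    k = scores.shape[1]                                # number of items
    item_vars = scores.var(axis=0, ddof=1).sum()       # sum of item variances
    total_var = scores.sum(axis=1).var(ddof=1)         # variance of total score
    return (k / (k - 1)) * (1 - item_vars / total_var)

rng = np.random.default_rng(2)
ability = rng.normal(size=(50, 1))                     # latent trait, 50 respondents
items = ability + rng.normal(scale=0.7, size=(50, 6))  # 6 correlated items
print(round(cronbach_alpha(items), 2))                 # fairly high alpha expected
```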

  15. The Revised Child Anxiety and Depression Scale-Short Version: Scale reduction via exploratory bifactor modeling of the broad anxiety factor.

    OpenAIRE

    Ebesutani, Chad; Reise, Steven P.; Chorpita, Bruce F.; Ale, Chelsea; Regan, Jennifer; Young, John; Higa-McMillan, Charmaine; Weisz, John R

    2012-01-01

    Using a school-based (N = 1,060) and clinic-referred (N = 303) youth sample, the authors developed a 25-item shortened version of the Revised Child Anxiety and Depression Scale (RCADS) using Schmid-Leiman exploratory bifactor analysis to reduce client burden and administration time and thus improve the transportability characteristics of this youth anxiety and depression measure. Results revealed that all anxiety items primarily reflected a single “broad anxiety” dimension, which informed the...

  16. Embrittlement data base, version 1

    Energy Technology Data Exchange (ETDEWEB)

    Wang, J.A.

    1997-08-01

    The aging and degradation of light-water-reactor (LWR) pressure vessels is of particular concern because of their relevance to plant integrity and the magnitude of the expected irradiation embrittlement. The radiation embrittlement of reactor pressure vessel (RPV) materials depends on many different factors such as flux, fluence, fluence spectrum, irradiation temperature, and preirradiation material history and chemical compositions. These factors must be considered to reliably predict pressure vessel embrittlement and to ensure the safe operation of the reactor. Based on embrittlement predictions, decisions must be made concerning operating parameters and issues such as low-leakage-fuel management, possible life extension, and the need for annealing the pressure vessel. Large amounts of data from surveillance capsules and test reactor experiments, comprising many different materials and different irradiation conditions, are needed to develop generally applicable damage prediction models that can be used for industry standards and regulatory guides. Version 1 of the Embrittlement Data Base (EDB) is such a comprehensive collection of data resulting from merging version 2 of the Power Reactor Embrittlement Data Base (PR-EDB). Fracture toughness data were also integrated into Version 1 of the EDB. For power reactor data, the current EDB lists the 1,029 Charpy transition-temperature shift data points, which include 321 from plates, 125 from forgings, 115 from correlation monitor materials, 246 from welds, and 222 from heat-affected-zone (HAZ) materials that were irradiated in 271 capsules from 101 commercial power reactors. For test reactor data, information is available for 1,308 different irradiated sets (352 from plates, 186 from forgings, 303 from correlation monitor materials, 396 from welds and 71 from HAZs) and 268 different irradiated plus annealed data sets.

  17. Embrittlement data base, version 1

    International Nuclear Information System (INIS)

    Wang, J.A.

    1997-08-01

    The aging and degradation of light-water-reactor (LWR) pressure vessels is of particular concern because of their relevance to plant integrity and the magnitude of the expected irradiation embrittlement. The radiation embrittlement of reactor pressure vessel (RPV) materials depends on many different factors such as flux, fluence, fluence spectrum, irradiation temperature, and preirradiation material history and chemical compositions. These factors must be considered to reliably predict pressure vessel embrittlement and to ensure the safe operation of the reactor. Based on embrittlement predictions, decisions must be made concerning operating parameters and issues such as low-leakage-fuel management, possible life extension, and the need for annealing the pressure vessel. Large amounts of data from surveillance capsules and test reactor experiments, comprising many different materials and different irradiation conditions, are needed to develop generally applicable damage prediction models that can be used for industry standards and regulatory guides. Version 1 of the Embrittlement Data Base (EDB) is such a comprehensive collection of data resulting from merging version 2 of the Power Reactor Embrittlement Data Base (PR-EDB). Fracture toughness data were also integrated into Version 1 of the EDB. For power reactor data, the current EDB lists the 1,029 Charpy transition-temperature shift data points, which include 321 from plates, 125 from forgings, 115 from correlation monitor materials, 246 from welds, and 222 from heat-affected-zone (HAZ) materials that were irradiated in 271 capsules from 101 commercial power reactors. For test reactor data, information is available for 1,308 different irradiated sets (352 from plates, 186 from forgings, 303 from correlation monitor materials, 396 from welds and 71 from HAZs) and 268 different irradiated plus annealed data sets.

  18. Spanish version of Colquitt's Organizational Justice Scale.

    Science.gov (United States)

    Díaz-Gracia, Liliana; Barbaranelli, Claudio; Moreno-Jiménez, Bernardo

    2014-01-01

    Organizational justice (OJ) is an important predictor of different work attitudes and behaviors. Colquitt's Organizational Justice Scale (COJS) was designed to assess employees' perceptions of fairness. This scale has four dimensions: distributive, procedural, informational, and interpersonal justice. The objective of this study is to validate it in a Spanish sample. The scale was administered to 460 Spanish employees from the service sector; 40.4% were men and 59.6% women. The Confirmatory Factor Analysis (CFA) supported the four-dimensional structure of the Spanish version of the COJS. This model showed a better fit to the data than the other models tested. Cronbach's alpha values obtained for the subscales ranged between .88 and .95. Correlations of the Spanish version of the COJS with measures of incivility and job satisfaction were statistically significant and of moderate to high magnitude, indicating a reasonable degree of construct validity. The Spanish version of the COJS has adequate psychometric properties and may be of value in assessing OJ in Spanish settings.

  19. Nuclear criticality safety handbook. Version 2

    International Nuclear Information System (INIS)

    1999-03-01

    The Nuclear Criticality Safety Handbook, Version 2 essentially incorporates the Supplement Report to the Nuclear Criticality Safety Handbook, released in 1995, into the first version of the Nuclear Criticality Safety Handbook, published in 1988. The following two points are new: (1) exemplifying safety margins related to modelled dissolution and extraction processes, and (2) describing evaluation methods and alarm systems for criticality accidents. The chapter that treats modelling of the fuel system is revised based on previous studies, e.g., regarding the fuel grain size below which the system can be regarded as homogeneous, the non-uniformity effect of fuel solution, and burnup credit. This revision resolves the inconsistencies found in the first version between the evaluation of errors in the JACS code system and the criticality condition data that were calculated based on that evaluation. (author)

  20. The impact of resolving the Rossby radius at mid-latitudes in the ocean: results from a high-resolution version of the Met Office GC2 coupled model

    Science.gov (United States)

    Hewitt, Helene T.; Roberts, Malcolm J.; Hyder, Pat; Graham, Tim; Rae, Jamie; Belcher, Stephen E.; Bourdallé-Badie, Romain; Copsey, Dan; Coward, Andrew; Guiavarch, Catherine; Harris, Chris; Hill, Richard; Hirschi, Joël J.-M.; Madec, Gurvan; Mizielinski, Matthew S.; Neininger, Erica; New, Adrian L.; Rioual, Jean-Christophe; Sinha, Bablu; Storkey, David; Shelly, Ann; Thorpe, Livia; Wood, Richard A.

    2016-10-01

    There is mounting evidence that resolving mesoscale eddies and western boundary currents as well as topographically controlled flows can play an important role in air-sea interaction associated with vertical and lateral transports of heat and salt. Here we describe the development of the Met Office Global Coupled Model version 2 (GC2) with increased resolution relative to the standard model: the ocean resolution is increased from 1/4 to 1/12° (28 to 9 km at the Equator), the atmosphere resolution increased from 60 km (N216) to 25 km (N512) and the coupling period reduced from 3 hourly to hourly. The technical developments that were required to build a version of the model at higher resolution are described as well as results from a 20-year simulation. The results demonstrate the key role played by the enhanced resolution of the ocean model: reduced sea surface temperature (SST) biases, improved ocean heat transports, deeper and stronger overturning circulation and a stronger Antarctic Circumpolar Current. Our results suggest that the improvements seen here require high resolution in both atmosphere and ocean components as well as high-frequency coupling. These results add to the body of evidence suggesting that ocean resolution is an important consideration when developing coupled models for weather and climate applications.

  1. EMERGENCIES: NEW VERSION

    CERN Multimedia

    Medical Service

    2002-01-01

    The table of emergency numbers that appeared in Bulletin 10/2002 is out of date. The updated version provided by the Medical Service appears on the following page. Please disregard the previous version. URGENT NEED OF A DOCTOR GENEVA PATIENT NOT FIT TO BE MOVED: Call your family doctor Or SOS MEDECINS (24H/24H) 748 49 50 Or ASSOC. OF GENEVA DOCTORS (7H-23H) 322 20 20 PATIENT CAN BE MOVED: HOPITAL CANTONAL 24 Micheli du Crest 372 33 11 382 33 11 CHILDREN'S HOSPITAL 6 rue Willy Donzé 382 68 18 382 45 55 MATERNITY 24 Micheli du Crest 382 68 16 382 33 11 OPHTHALMOLOGY 22 Alcide Jentzer 382 84 00 HOPITAL DE LA TOUR Meyrin 719 61 11 CENTRE MEDICAL DE MEYRIN Champs Fréchets 719 74 00 EMERGENCIES: FIRE BRIGADE 118 FIRE BRIGADE CERN 767 44 44 URGENT NEED OF AN AMBULANCE (GENEVA AND VAUD): 144 POLICE 117 ANTI-POISON CENTRE 24H/24H 01 251 51 510 EUROPEAN EMERGENCY CALL: 112 FRANCE PATIENT NOT FIT TO BE MOVED: call your family doctor PATIENT CAN BE MOVED: ST. JULIE...

  2. Geophysical Multiphase Flow With Interphase Exchanges - Hydrodynamic and Thermodynamic Models, and Numerical Techniques, Version GMFIX-1.61, Design Document Attachment 1

    International Nuclear Information System (INIS)

    Dartevelle, S.

    2006-01-01

    Since the multiphase system is made up of a large number of particles, it is impractical to solve the motion of each individual particle; hence GMFIX v1.61 is based upon the Implicit Multi-Field formalism (IMF), which treats all phases in the system as interpenetrating continua. Each instantaneous local point variable (mass, velocity, temperature, pressure, and so forth) must be treated to acknowledge the fact that any given arbitrary volume can be shared by different phases at the same time. This treatment may involve, for instance, an averaging or a smoothing process. GMFIX is the geophysical version of the MFIX codes developed by NETL and ORNL. MFIX is the result of 30 years of continuous development and improvement of the K-FIX codes from LANL. At the time this manuscript was ready for publication (March 2005), some differences existed between the current versions of GMFIX (v. 1.61) and MFIX (v. 1.60) regarding the exact formulation of the energy and momentum equations, the interfacial closures, and the turbulence formulation. Yet both GMFIX and MFIX are being improved and developed tightly side by side

  3. A novel assessment of the role of land-use and land-cover change in the global carbon cycle, using a new Dynamic Global Vegetation Model version of the CABLE land surface model

    Science.gov (United States)

    Haverd, Vanessa; Smith, Benjamin; Nieradzik, Lars; Briggs, Peter; Canadell, Josep

    2017-04-01

    In recent decades, terrestrial ecosystems have sequestered around 1.2 PgC y-1, an amount equivalent to 20% of fossil-fuel emissions. This land carbon flux is the net result of the impact of changing climate and CO2 on ecosystem productivity (the CO2-climate-driven land sink) and of deforestation, harvest and secondary forest regrowth (the land-use change (LUC) flux). The future trajectory of the land carbon flux is highly dependent upon the contributions of these processes to the net flux. However, their contributions are highly uncertain, in part because the CO2-climate-driven land sink and LUC components are often estimated independently, when in fact they are coupled. We provide a novel assessment of global land carbon fluxes (1800-2015) that integrates land-use effects with the effects of changing climate and CO2 on ecosystem productivity. For this, we use a new land-use-enabled Dynamic Global Vegetation Model (DGVM) version of the CABLE land surface model, suitable for use in attributing changes in terrestrial carbon balance, and in predicting changes in vegetation cover and associated effects on land-atmosphere exchange. In this model, land-use change is driven by prescribed gross land-use transitions and harvest areas, which are converted to changes in land-use area and transfers of carbon between pools (soil, litter, biomass, harvested wood products and cleared wood pools). A novel aspect is the treatment of secondary woody vegetation via the coupling between the land-use module and the POP (Populations Order Physiology) module for woody demography and disturbance-mediated landscape heterogeneity. Land-use transitions to and from secondary forest tiles modify the patch age distribution within secondary-vegetated tiles, in turn affecting biomass accumulation and turnover rates and hence the magnitude of the secondary forest sink. The resulting secondary forest patch age distribution also influences the magnitude of the secondary forest harvest and clearance fluxes
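
    The conversion of land-use transitions into carbon transfers between pools, as described above, is essentially accounting: a cleared area fraction removes carbon from the biomass pool and redistributes it to product, cleared-wood and litter pools. The sketch below illustrates only that kind of bookkeeping for a single clearing event; the pool names follow the abstract, but the split fractions and stock values are invented and are not CABLE or POP parameters.

```python
# Purely illustrative bookkeeping for a land-clearing transition: move the
# biomass carbon of the cleared area fraction into litter and wood pools.
# Split fractions and stocks are invented, not CABLE/POP parameter values.
def apply_clearing(pools, cleared_fraction, to_products=0.3, to_cleared_wood=0.3):
    """pools: dict of carbon stocks (kgC m-2); cleared_fraction: fraction of
    the tile's biomass affected by the land-use transition this year."""
    moved = pools["biomass"] * cleared_fraction
    pools["biomass"] -= moved
    pools["harvested_wood_products"] += moved * to_products
    pools["cleared_wood"] += moved * to_cleared_wood
    pools["litter"] += moved * (1 - to_products - to_cleared_wood)
    return pools

pools = {"biomass": 12.0, "litter": 3.0, "soil": 10.0,
         "harvested_wood_products": 0.0, "cleared_wood": 0.0}
print(apply_clearing(pools, cleared_fraction=0.1))
```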

  4. GNU Octave Manual Version 3

    DEFF Research Database (Denmark)

    W. Eaton, John; Bateman, David; Hauberg, Søren

    This manual is the definitive guide to GNU Octave, an interactive environment for numerical computation. The manual covers the new version 3 of GNU Octave.

  5. Using Akaike's information theoretic criterion in mixed-effects modeling of pharmacokinetic data: a simulation study [version 3; referees: 2 approved, 1 approved with reservations]

    Directory of Open Access Journals (Sweden)

    Erik Olofsen

    2015-07-01

    Full Text Available Akaike's information theoretic criterion for model discrimination (AIC) is often stated to "overfit", i.e., it selects models with a higher dimension than the dimension of the model that generated the data. However, with experimental pharmacokinetic data it may not be possible to identify the correct model, because of the complexity of the processes governing drug disposition. Instead of trying to find the correct model, a more useful objective might be to minimize the prediction error of drug concentrations in subjects with unknown disposition characteristics. In that case, the AIC might be the selection criterion of choice. We performed Monte Carlo simulations using a model of pharmacokinetic data (a power function of time) with the property that fits with common multi-exponential models can never be perfect - thus resembling the situation with real data. Prespecified models were fitted to simulated data sets, and AIC and AICc (the criterion with a correction for small sample sizes) values were calculated and averaged. The average predictive performances of the models, quantified using simulated validation sets, were compared to the means of the AICs. The data for fits and validation consisted of 11 concentration measurements each obtained in 5 individuals, with three degrees of interindividual variability in the pharmacokinetic volume of distribution. Mean AICc corresponded very well, and better than mean AIC, with mean predictive performance. With increasing interindividual variability, there was a trend towards larger optimal models, with respect to both lowest AICc and best predictive performance. Furthermore, it was observed that the mean square prediction error itself became less suitable as a validation criterion, and that a predictive performance measure should incorporate interindividual variability. This simulation study showed that, at least in a relatively simple mixed-effects modelling context with a set of prespecified models
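
    For reference, AIC and the small-sample-corrected AICc can be computed directly from a model's maximized log-likelihood, its number of parameters and the number of observations; the sketch below applies these textbook formulas to hypothetical values rather than anything taken from the study.

    ```python
    def aic(log_likelihood: float, k: int) -> float:
        """Akaike's information criterion: AIC = 2k - 2*ln(L)."""
        return 2 * k - 2 * log_likelihood

    def aicc(log_likelihood: float, k: int, n: int) -> float:
        """AIC with small-sample correction: AICc = AIC + 2k(k+1)/(n-k-1)."""
        return aic(log_likelihood, k) + 2 * k * (k + 1) / (n - k - 1)

    # Hypothetical fit: 55 observations (11 samples x 5 subjects), 4 parameters.
    print(aic(-120.3, k=4), aicc(-120.3, k=4, n=55))
    ```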

  6. Distribution of lithostratigraphic units within the central block of Yucca Mountain, Nevada: A three-dimensional computer-based model, Version YMP.R2.0

    International Nuclear Information System (INIS)

    Buesch, D.C.; Nelson, J.E.; Dickerson, R.P.; Drake, R.M. II; San Juan, C.A.; Spengler, R.W.; Geslin, J.K.; Moyer, T.C.

    1996-01-01

    Yucca Mountain, Nevada is underlain by 14.0 to 11.6 Ma volcanic rocks tilted eastward 3 degrees to 20 degrees and cut by faults that were primarily active between 12.7 and 11.6 Ma. A three-dimensional computer-based model of the central block of the mountain consists of seven structural subblocks composed of six formations and the interstratified-bedded tuffaceous deposits. Rocks from the 12.7 Ma Tiva Canyon Tuff, which forms most of the exposed rocks on the mountain, to the 13.1 Ma Prow Pass Tuff are modeled with 13 surfaces. Modeled units represent single formations such as the Pah Canyon Tuff, grouped units such as the combination of the Yucca Mountain Tuff with the superjacent bedded tuff, and divisions of the Topopah Spring Tuff such as the crystal-poor vitrophyre interval. The model is based on data from 75 boreholes from which a structure contour map at the base of the Tiva Canyon Tuff and isochore maps for each unit are constructed to serve as primary input. Modeling consists of an iterative cycle that begins with the primary structure-contour map, from which isochore values of the subjacent model unit are subtracted to produce the structure contour map on the base of that unit. This new structure contour map forms the input for another cycle of isochore subtraction to produce the next structure contour map. In this method of solids modeling, the model units are represented by surfaces (structure contour maps), and all surfaces are stored in the model. Surfaces can be converted to form volumes of model units with additional effort. This lithostratigraphic and structural model can be used for (1) storing data from, and planning future, site characterization activities, (2) providing preliminary geometry of units for design of the Exploratory Studies Facility and potential repository, and (3) supporting performance assessment evaluations.
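
    The iterative isochore-subtraction cycle described above can be pictured with a toy grid computation; the unit names, grid size and thickness values below are hypothetical and are not taken from the Yucca Mountain data set.

    ```python
    import numpy as np

    # Hypothetical 3x3 structure contour grid (elevations in meters), illustration only.
    base_tiva_canyon = np.array([[1200., 1195., 1190.],
                                 [1185., 1180., 1175.],
                                 [1170., 1165., 1160.]])

    # Isochore (thickness) maps of successively deeper model units, top to bottom.
    isochores = {
        "unit_A": np.full((3, 3), 25.0),
        "unit_B": np.full((3, 3), 40.0),
    }

    # Each cycle subtracts the subjacent unit's thickness from the current base surface
    # to obtain the structure contour map on the base of that unit.
    surfaces = {"base_tiva_canyon": base_tiva_canyon}
    current = base_tiva_canyon
    for name, thickness in isochores.items():
        current = current - thickness
        surfaces[f"base_{name}"] = current

    print(surfaces["base_unit_B"])
    ```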

  7. MC1: a dynamic vegetation model for estimating the distribution of vegetation and associated carbon, nutrients, and water—technical documentation. Version 1.0.

    Science.gov (United States)

    Dominique Bachelet; James M. Lenihan; Christopher Daly; Ronald P. Neilson; Dennis S. Ojima; William J. Parton

    2001-01-01

    Assessments of vegetation response to climate change have generally been made only by equilibrium vegetation models that predict vegetation composition under steady-state conditions. These models do not simulate either ecosystem biogeochemical processes or changes in ecosystem structure that may, in turn, act as feedbacks in determining the dynamics of vegetation...

  8. Probabilistic Model for Integrated Assessment of the Behavior at the T.D.P. Version 2; Modelo Probabilista de Evaluación Integrada del Comportamiento de la P.D.T. Versión 2

    Energy Technology Data Exchange (ETDEWEB)

    Hurtado, A.; Eguilior, S.; Recreo, F

    2015-07-01

    This report documents the completion of the first phase of the implementation of the ABACO2G methodology (Bayes Application to Geological Storage of CO2) and the final version of the ABACO2G probabilistic model for the injection phase, before its future validation in the experimental field of the Technology Development Plant in Hontomín (Burgos). The model, which determines the probabilistic risk component of a geological storage of CO2 using the formalism of Bayesian networks and Monte Carlo simulation, yields quantitative probability functions of the total CO2 storage system and of each of its subsystems (the storage subsystem and primary seal, the secondary containment subsystem, and the dispersion or tertiary subsystem). It implements the stochastic time evolution of the CO2 plume during the injection period, the stochastic time evolution of the drying front, and the probabilistic evolution of the pressure front, decoupled from the CO2 plume progress front, together with submodels and leakage probability functions for the major leakage risk elements (fractures/faults and wells/deep boreholes), which together define the space of events used to estimate the risks associated with the CO2 geological storage system. The activities covered in this report replaced the qualitative estimation submodels of the former ABACO2G version, developed during Phase I of project ALM-10-017, with analytical, semi-analytical or numerical submodels for the main elements of risk (wells and fractures), to obtain an integrated probabilistic model of a CO2 storage complex in carbonate formations that meets the needs of the integrated behavior evaluation of the Technology Development Plant in Hontomín.

  9. CalTOX (registered trademark), A multimedia total exposure model spreadsheet user's guide. Version 4.0(Beta)

    Energy Technology Data Exchange (ETDEWEB)

    McKone, T.E.; Enoch, K.G.

    2002-08-01

    CalTOX has been developed as a set of spreadsheet models and spreadsheet data sets to assist in assessing human exposures from continuous releases to multiple environmental media, i.e. air, soil, and water. It has also been used for waste classification and for setting soil clean-up levels at uncontrolled hazardous waste sites. The modeling components of CalTOX include a multimedia transport and transformation model, multi-pathway exposure scenario models, and add-ins to quantify and evaluate uncertainty and variability. All parameter values used as inputs to CalTOX are distributions, described in terms of mean values and a coefficient of variation, rather than as point estimates or plausible upper values such as most other models employ. This probabilistic approach allows both sensitivity and uncertainty analyses to be directly incorporated into the model operation. This manual provides CalTOX users with a brief overview of the CalTOX spreadsheet model and provides instructions for using the spreadsheet to make deterministic and probabilistic calculations of source-dose-risk relationships.
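
    As an illustration of specifying an input by a mean and coefficient of variation rather than a point estimate, the sketch below draws a hypothetical parameter as a lognormal variate whose mean and CV match the stated values; CalTOX's actual distributional assumptions may differ.

    ```python
    import numpy as np

    def sample_from_mean_cv(mean: float, cv: float, size: int, rng=None):
        """Sample a lognormal variate with the given arithmetic mean and coefficient of variation."""
        rng = rng or np.random.default_rng(0)
        sigma2 = np.log(1.0 + cv ** 2)        # lognormal shape parameter from the CV
        mu = np.log(mean) - 0.5 * sigma2      # lognormal scale parameter preserving the mean
        return rng.lognormal(mean=mu, sigma=np.sqrt(sigma2), size=size)

    # Hypothetical input parameter: mean 3.2, coefficient of variation 0.5.
    draws = sample_from_mean_cv(3.2, 0.5, size=10_000)
    print(draws.mean(), draws.std() / draws.mean())   # roughly 3.2 and 0.5
    ```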

  10. CREST Cost of Renewable Energy Spreadsheet Tool: A Model for Developing Cost-based Incentives in the United States. User Manual Version 1

    Energy Technology Data Exchange (ETDEWEB)

    Gifford, Jason S. [Sustainable Energy Advantage, LLC, Framingham, MA (United States); Grace, Robert C. [Sustainable Energy Advantage, LLC, Framingham, MA (United States)

    2011-03-01

    This user manual helps model users understand how to use the CREST model to support renewable energy incentives, FITs, and other renewable energy rate-setting processes. It reviews the spreadsheet tool, including its layout and conventions, offering context on how and why it was created. It also provides instructions on how to populate the model with inputs that are appropriate for a specific jurisdiction’s policymaking objectives and context. Finally, it describes the results and outlines how these results may inform decisions about long-term renewable energy support programs.

  11. FMCSA safety program effectiveness measurement : carrier intervention effectiveness model, version 1.0, summary report for fiscal years 2009, 2010, 2011.

    Science.gov (United States)

    2015-01-01

    The Federal Motor Carrier Safety Administration (FMCSA), in cooperation with the John A. Volpe National : Transportation Systems Center (Volpe), has developed a quantitative model to measure the effectiveness of motor : carrier interventions in terms...

  12. MEaSUREs Greenland Ice Mapping Project (GIMP) Digital Elevation Model from GeoEye and WorldView Imagery, Version 1

    Data.gov (United States)

    National Aeronautics and Space Administration — This data set consists of an enhanced resolution digital elevation model (DEM) for the Greenland Ice Sheet. The DEM is derived from sub-meter resolution,...

  13. Variability of Phenology and Fluxes of Water and Carbon with Observed and Simulated Soil Moisture in the Ent Terrestrial Biosphere Model (Ent TBM Version 1.0.1.0.0)

    Science.gov (United States)

    Kim, Y.; Moorcroft, P. R.; Aleinov, Igor; Puma, M. J.; Kiang, N. Y.

    2015-01-01

    The Ent Terrestrial Biosphere Model (Ent TBM) is a mixed-canopy dynamic global vegetation model developed specifically for coupling with land surface hydrology and general circulation models (GCMs). This study describes the leaf phenology submodel implemented in the Ent TBM version 1.0.1.0.0 coupled to the carbon allocation scheme of the Ecosystem Demography (ED) model. The phenology submodel adopts a combination of responses to temperature (growing degree days and frost hardening), soil moisture (linearity of stress with relative saturation) and radiation (light length). Growth of leaves, sapwood, fine roots, stem wood and coarse roots is updated on a daily basis. We evaluate the performance in reproducing observed leaf seasonal growth as well as water and carbon fluxes for four plant functional types at five Fluxnet sites, with both observed and prognostic hydrology, and observed and prognostic seasonal leaf area index. The phenology submodel is able to capture the timing and magnitude of leaf-out and senescence for temperate broadleaf deciduous forest (Harvard Forest and Morgan-Monroe State Forest, US), C3 annual grassland (Vaira Ranch, US) and California oak savanna (Tonzi Ranch, US). For evergreen needleleaf forest (Hyytiälä, Finland), the phenology submodel captures the effect of frost hardening of photosynthetic capacity on seasonal fluxes and leaf area. We address the importance of customizing parameter sets of vegetation soil moisture stress response to the particular land surface hydrology scheme. We identify model deficiencies that reveal important dynamics and parameter needs.
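
    Growing degree days, one of the temperature cues named above, are commonly accumulated as the daily excess of mean temperature over a base temperature; the base value and temperatures below are hypothetical and are not taken from the Ent TBM.

    ```python
    def growing_degree_days(daily_mean_temps_c, base_temp_c=5.0):
        """Accumulate growing degree days: sum of max(T_mean - T_base, 0) over days."""
        return sum(max(t - base_temp_c, 0.0) for t in daily_mean_temps_c)

    # Hypothetical week of daily mean temperatures (deg C).
    print(growing_degree_days([2.0, 4.5, 6.0, 8.5, 10.0, 7.5, 3.0]))
    ```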

  14. ARROW (Version 2) Commercial Software Validation and Configuration Control

    International Nuclear Information System (INIS)

    HEARD, F.J.

    2000-01-01

    ARROW (Version 2), a compressible flow piping network modeling and analysis computer program from Applied Flow Technology, was installed for use at the U.S. Department of Energy Hanford Site near Richland, Washington

  15. ARROW (Version 2) Commercial Software Validation and Configuration Control

    Energy Technology Data Exchange (ETDEWEB)

    HEARD, F.J.

    2000-02-10

    ARROW (Version 2), a compressible flow piping network modeling and analysis computer program from Applied Flow Technology, was installed for use at the U.S. Department of Energy Hanford Site near Richland, Washington.

  16. ENERGY STAR Certified Light Bulbs Version 2.0

    Data.gov (United States)

    U.S. Environmental Protection Agency — Certified models meet all ENERGY STAR requirements as listed in the Version 2.0 and V2.1 ENERGY STAR Program Requirements for Lamps (Light Bulbs) that are effective...

  17. Climate Forecast System Version 2 (CFSv2) Operational Analysis

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — The Climate Forecast System Version 2 (CFSv2) produced by the NOAA National Centers for Environmental Prediction (NCEP) is a fully coupled model representing the...

  18. Climate Forecast System Version 2 (CFSv2) Operational Forecasts

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — The Climate Forecast System Version 2 (CFSv2) produced by the NOAA National Centers for Environmental Prediction (NCEP) is a fully coupled model representing the...

  19. Neuropathogenesis of Zika Virus in a Highly Susceptible Immunocompetent Mouse Model after Antibody Blockade of Type I Interferon (Open Access Publisher’s Version)

    Science.gov (United States)

    2017-01-09

    Animal models are needed to better understand the pathogenic mechanisms of Zika virus (ZIKV) and to...

  20. The DeepMIP Contribution to PMIP4: Experimental Design for Model Simulations of the EECO, PETM, and pre-PETM (version 1.0)

    Science.gov (United States)

    Lunt, Daniel J.; Huber, Matthew; Anagnostou, Eleni; Baatsen, Michiel L. J.; Caballero, Rodrigo; DeConto, Rob; Dijkstra, Henk A.; Donnadieu, Yannick; Evans, David; Feng, Ran; hide

    2017-01-01

    Past warm periods provide an opportunity to evaluate climate models under extreme forcing scenarios, in particular high (greater than 800 ppmv) atmospheric CO2 concentrations. Although a post hoc intercomparison of Eocene (approximately 50 Ma) climate model simulations and geological data has been carried out previously, models of past high-CO2 periods have never been evaluated in a consistent framework. Here, we present an experimental design for climate model simulations of three warm periods within the early Eocene and the latest Paleocene (the EECO, PETM, and pre-PETM). Together with the CMIP6 pre-industrial control and abrupt 4×CO2 simulations, and additional sensitivity studies, these form the first phase of DeepMIP - the Deep-time Model Intercomparison Project, itself a group within the wider Paleoclimate Modeling Intercomparison Project (PMIP). The experimental design specifies and provides guidance on boundary conditions associated with palaeogeography, greenhouse gases, astronomical configuration, solar constant, land surface processes, and aerosols. Initial conditions, simulation length, and output variables are also specified. Finally, we explain how the geological data sets, which will be used to evaluate the simulations, will be developed.

  1. Probabilistic modeling of bifurcations in single-cell gene expression data using a Bayesian mixture of factor analyzers [version 1; referees: 2 approved]

    Directory of Open Access Journals (Sweden)

    Kieran R Campbell

    2017-03-01

    Full Text Available Modeling bifurcations in single-cell transcriptomics data has become an increasingly popular field of research. Several methods have been proposed to infer bifurcation structure from such data, but all rely on heuristic non-probabilistic inference. Here we propose the first generative, fully probabilistic model for such inference based on a Bayesian hierarchical mixture of factor analyzers. Our model exhibits competitive performance on large datasets despite implementing full Markov chain Monte Carlo sampling, and its unique hierarchical prior structure enables automatic determination of genes driving the bifurcation process. We additionally propose an Empirical Bayes-like extension that deals with the high levels of zero-inflation in single-cell RNA-seq data and quantify when such models are useful. We apply our model to both real and simulated single-cell gene expression data and compare the results to existing pseudotime methods. Finally, we discuss both the merits and weaknesses of such a unified, probabilistic approach in the context of practical bioinformatics analyses.

  2. LHCf brochure (English version)

    CERN Multimedia

    Lefevre, C

    2012-01-01

    The Earth's upper atmosphere is constantly hit by particles called cosmic rays, producing many secondary particles that collide with nuclei in the atmosphere. LHCf is designed to detect these secondary particles from ultra-high-energy cosmic rays to help confirm the theoretical models that explain what happens when these cosmic rays enter the atmosphere.

  3. A Modified Version of the RNG k–ε Turbulence Model for the Scale-Resolving Simulation of Internal Combustion Engines

    Directory of Open Access Journals (Sweden)

    Vesselin Krassimirov Krastev

    2017-12-01

    Full Text Available The unsteady and random character of turbulent flow motion is a key aspect of the multidimensional modeling of internal combustion engines (ICEs). A typical example can be found in the prediction of the cycle-to-cycle variability (CCV) in modern, highly downsized gasoline direct injection (GDI) engines, which strongly depends on the accurate simulation of turbulent in-cylinder flow structures. The current standard for turbulence modeling in ICEs is still represented by the unsteady form of the Reynolds-averaged Navier-Stokes equations (URANS), which allows the simulation of full engine cycles at relatively low computational costs. URANS-based methods, however, are only able to return a statistical description of turbulence, as the effects of all scales of motion are entirely modeled. Therefore, during the last decade, scale-resolving methods such as large eddy simulation (LES) or hybrid URANS/LES approaches have been gaining increasing attention among the engine-modeling community. In the present paper, we propose a scale-resolving capable modification of the popular RNG k–ε URANS model. The modification is based on a detached-eddy simulation (DES) framework and allows one to explicitly set the behavior (URANS, DES or LES) of the model in different zones of the computational domain. The resulting zonal formulation has been tested on two reference test cases, comparing the numerical predictions with the available experimental data sets and with previous computational studies. Overall, the scale-resolved part of the computed flow has been found to be consistent with the expected flow physics, thus confirming the validity of the proposed simulation methodology.
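
    One way to picture a zonal URANS/DES/LES treatment of this kind is a per-cell switch on the turbulence length scale entering the dissipation term; the blending rule and constant below are a generic DES-style sketch under assumed values, not the authors' exact formulation.

    ```python
    def turbulence_length_scale(k, eps, delta, mode, c_des=0.65):
        """Pick the length scale for a cell according to its zonal mode.

        k, eps : modeled turbulent kinetic energy and dissipation rate
        delta  : local grid spacing
        mode   : 'URANS', 'DES', or 'LES' for this zone
        """
        l_rans = k ** 1.5 / eps            # RANS length scale of the k-epsilon family
        l_les = c_des * delta              # grid-based (LES-like) length scale
        if mode == "URANS":
            return l_rans
        if mode == "LES":
            return l_les
        return min(l_rans, l_les)          # classic DES switch

    print(turbulence_length_scale(k=0.5, eps=10.0, delta=0.002, mode="DES"))
    ```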

  4. Studying the effect of meteorological factors on the SO2 and PM10 pollution levels with refined versions of the SARIMA model

    Energy Technology Data Exchange (ETDEWEB)

    Voynikova, D. S., E-mail: desi-sl2000@yahoo.com; Gocheva-Ilieva, S. G., E-mail: snegocheva@yahoo.com; Ivanov, A. V., E-mail: aivanov-99@yahoo.com [Department of Applied Mathematics and Modeling, Faculty of Mathematics and Informatics, Paisii Hilendarski University of Plovdiv, 24 Tzar Assen str., 4000 Plovdiv (Bulgaria); Iliev, I. P., E-mail: iliev55@abv.bg [Department of Physics, Technical University – Plovdiv, 25 Tzanko Djusstabanov str., 4000 Plovdiv (Bulgaria)

    2015-10-28

    Numerous time series methods are used in environmental sciences, allowing the detailed investigation of air pollution processes. The goal of this study is to present the empirical analysis of various aspects of stochastic modeling, and in particular of the ARIMA/SARIMA methods. The subject of investigation is air pollution in the town of Kardzhali, Bulgaria, with two problematic pollutants - sulfur dioxide (SO2) and particulate matter (PM10). Various SARIMA transfer function models are built, taking into account meteorological factors, data transformations and the use of different horizons selected to predict future levels of concentrations of the pollutants.
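
    A SARIMA model with meteorological regressors of the kind described can be fitted in Python with the SARIMAX class from statsmodels; the series, model orders, seasonal period and column names below are illustrative assumptions, not the specifications used in the study.

    ```python
    import numpy as np
    import pandas as pd
    from statsmodels.tsa.statespace.sarimax import SARIMAX

    # Hypothetical daily data: pollutant concentration plus meteorological regressors.
    rng = np.random.default_rng(1)
    n = 365
    df = pd.DataFrame({
        "so2": 20 + 5 * np.sin(np.arange(n) * 2 * np.pi / 7) + rng.normal(0, 2, n),
        "temperature": rng.normal(10, 5, n),
        "wind_speed": rng.gamma(2.0, 1.5, n),
    })

    # Illustrative (1,0,1)x(1,0,1,7) specification with exogenous meteorology.
    model = SARIMAX(df["so2"], exog=df[["temperature", "wind_speed"]],
                    order=(1, 0, 1), seasonal_order=(1, 0, 1, 7))
    fit = model.fit(disp=False)

    # Forecast the next 7 days given assumed future meteorology.
    future_exog = pd.DataFrame({"temperature": [8.0] * 7, "wind_speed": [2.5] * 7})
    print(fit.forecast(steps=7, exog=future_exog))
    ```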

  5. Seasonal Prediction of Surface Air Temperature across Vietnam Using the Regional Climate Model Version 4.2 (RegCM4.2)

    OpenAIRE

    Phan Van, Tan; Van Nguyen, Hiep; Trinh Tuan, Long; Nguyen Quang, Trung; Ngo-Duc, Thanh; Laux, Patrick; Nguyen Xuan, Thanh

    2014-01-01

    To investigate the ability of dynamical seasonal climate predictions for Vietnam, the RegCM4.2 is employed to perform seasonal prediction of 2 m mean (T2m), maximum (Tx), and minimum (Tn) air temperature for the period from January 2012 to November 2013 by downscaling the NCEP Climate Forecast System (CFS) data. For model bias correction, the model and observed climatology is constructed using the CFS reanalysis and observed temperatures over Vietnam for the period 1980–2010, respectively. Th...

  6. PERPEST Version 1.0, manual and technical description; a model that predicts the ecological risks of pesticides in freshwater ecosystems

    NARCIS (Netherlands)

    Nes, van E.H.; Brink, van den P.J.

    2003-01-01

    This report is a technical description and a user manual of the PERPEST model, which is able to Predict the Ecological Risks of PESTicides in freshwater ecosystems. This system predicts the effects of a particular concentration of a pesticide on various (community) endpoints, based on empirical data

  7. The German Version of the Perceived Stress Scale (PSS-10): Evaluation of Dimensionality, Validity, and Measurement Invariance With Exploratory and Confirmatory Bifactor Modeling.

    Science.gov (United States)

    Reis, Dorota; Lehr, Dirk; Heber, Elena; Ebert, David Daniel

    2017-06-01

    The Perceived Stress Scale (PSS) is a popular instrument for measuring the degree to which individuals appraise situations in their lives as excessively uncontrollable and overloaded. Despite its widespread use (e.g., for evaluating intervention effects in stress management studies), there is still no agreement on its factor structure. Hence, the aim of the present study was to examine the dimensionality, measurement invariance (i.e., across gender, samples, and time), reliability, and validity of the PSS. Data from 11,939 German adults (73% women) were used to establish an exploratory bifactor model for the PSS with one general and two specific factors and to cross-validate this model in a confirmatory bifactor model. The model displayed strong measurement invariance across gender and was replicated in Study 2 in data derived from six randomized controlled trials investigating a web-based stress management training. In Study 2 (overall N = 1,862), we found strong temporal invariance. Also, our analyses of concurrent and predictive validity showed associations with depressive symptoms, anxiety, and insomnia severity for the three latent PSS factors. These results show the implications of the bifactor structure of the PSS that might be of consequence in empirical research.

  8. Reduced-Order Model for the Geochemical Impacts of Carbon Dioxide, Brine and Trace Metal Leakage into an Unconfined, Oxidizing Carbonate Aquifer, Version 2.1

    Energy Technology Data Exchange (ETDEWEB)

    Bacon, Diana H.

    2013-03-31

    The National Risk Assessment Partnership (NRAP) consists of five U.S. DOE national laboratories collaborating to develop a framework for predicting the risks associated with carbon sequestration. The approach taken by NRAP is to divide the system into components (injection target reservoirs, wellbores, natural pathways including faults and fractures, groundwater, and the atmosphere); to develop a detailed, physics- and chemistry-based model of each component; to use the results of the detailed models to develop efficient, simplified models, termed reduced order models (ROMs), for each component; and finally to integrate the component ROMs into a system model that calculates risk profiles for the site. This report details the development of the Groundwater Geochemistry ROM for the Edwards Aquifer at PNNL. The Groundwater Geochemistry ROM for the Edwards Aquifer uses a Wellbore Leakage ROM developed at LANL as input. The detailed model, using the STOMP simulator, covers a 5 x 8 km area of the Edwards Aquifer near San Antonio, Texas. The model includes heterogeneous hydraulic properties, and equilibrium, kinetic and sorption reactions between groundwater, leaked CO2 gas, brine, and the aquifer carbonate and clay minerals. Latin Hypercube sampling was used to generate 1024 samples of input parameters. For each of these input samples, the STOMP simulator was used to predict the flux of CO2 to the atmosphere, and the volume, length and width of the aquifer where pH was less than the MCL standard, and TDS, arsenic, cadmium and lead exceeded MCL standards. In order to decouple the Wellbore Leakage ROM from the Groundwater Geochemistry ROM, the response surface was transformed to replace Wellbore Leakage ROM input parameters with instantaneous and cumulative CO2 and brine leakage rates. The most sensitive parameters proved to be the CO2 and brine leakage rates from the well, along with equilibrium coefficients for calcite and dolomite, as well as the number of illite and kaolinite
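
    Latin Hypercube sampling of input parameters, as used to build the 1024-run design above, can be generated with SciPy's quasi-Monte Carlo module; the parameter names and bounds below are hypothetical placeholders, not the actual STOMP inputs.

    ```python
    from scipy.stats import qmc

    # Hypothetical input parameters and ranges, for illustration only.
    names = ["co2_leak_rate", "brine_leak_rate", "calcite_log_k"]
    lower = [0.0, 0.0, -9.0]
    upper = [1.0e-3, 5.0e-4, -7.0]

    sampler = qmc.LatinHypercube(d=len(names), seed=42)
    unit_samples = sampler.random(n=1024)            # 1024 points in the unit hypercube
    samples = qmc.scale(unit_samples, lower, upper)  # rescale to the parameter bounds

    print(samples.shape, samples[0])
    ```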

  9. The evaluation of a virtual education system based on the DeLone and McLean model: A path analysis [version 1; referees: 2 approved]

    Directory of Open Access Journals (Sweden)

    Zohreh Mahmoodi

    2017-09-01

    Full Text Available Background: The Internet has dramatically influenced the introduction of virtual education. Virtual education is a term that involves online education and e-learning. This study was conducted to evaluate a virtual education system based on the DeLone and McLean model. Methods: This descriptive analytical study was conducted using the census method on all the students of the Nursing and Midwifery Department of Alborz University of Medical Sciences who had taken at least one online course in 2016-2017. Data were collected using a researcher-made questionnaire based on the DeLone and McLean model in six domains and then analyzed in SPSS-16 and LISREL-8.8 using path analysis. Results: The goodness-of-fit indices (GFI) of the model indicate the desirability and good fit of the model, and the rational nature of the adjusted relationships between the variables based on a conceptual model (GFI = 0.98; RMSEA = 0.014). The results showed that system quality has the greatest impact on the net benefits of the system through both direct and indirect paths (β=0.52), service quality through the indirect path (β=0.03), and user satisfaction through the direct path (β=0.73). Conclusions: According to the results, system quality has the greatest overall impact on the net benefits of the system, both directly and indirectly by affecting user satisfaction and the intention to use. System quality should therefore be further emphasized, to use these systems more efficiently.

  10. The evaluation of a virtual education system based on the DeLone and McLean model: A path analysis [version 2; referees: 3 approved]

    Directory of Open Access Journals (Sweden)

    Zohreh Mahmoodi

    2017-09-01

    Full Text Available Background: The Internet has dramatically influenced the introduction of virtual education. Virtual education is a term that involves online education and e-learning. This study was conducted to evaluate a virtual education system based on the DeLone and McLean model. Methods: This descriptive analytical study was conducted using the census method on all the students of the Nursing and Midwifery Department of Alborz University of Medical Sciences who had taken at least one online course in 2016-2017. Data were collected using a researcher-made questionnaire based on the DeLone and McLean model in six domains and then analyzed in SPSS-16 and LISREL-8.8 using path analysis. Results: The goodness-of-fit indices (GFI) of the model indicate the desirability and good fit of the model, and the rational nature of the adjusted relationships between the variables based on a conceptual model (GFI = 0.98; RMSEA = 0.014). The results showed that system quality has the greatest impact on the net benefits of the system through both direct and indirect paths (β=0.52), service quality through the indirect path (β=0.03), and user satisfaction through the direct path (β=0.73). Conclusions: According to the results, system quality has the greatest overall impact on the net benefits of the system, both directly and indirectly by affecting user satisfaction and the intention to use. System quality should therefore be further emphasized, to use these systems more efficiently.

  11. Validation of the Dutch version of the Swallowing Quality-of-Life Questionnaire (DSWAL-QoL) and the adjusted DSWAL-QoL (aDSWAL-QoL) using item analysis with the Rasch model: a pilot study.

    Science.gov (United States)

    Simpelaere, Ingeborg S; Van Nuffelen, Gwen; De Bodt, Marc; Vanderwegen, Jan; Hansen, Tina

    2017-04-07

    The Swallowing Quality-of-Life Questionnaire (SWAL-QoL) is considered the gold standard for assessing health-related QoL in oropharyngeal dysphagia. The Dutch translation (DSWAL-QoL) and its adjusted version (aDSWAL-QoL) have been validated using classical test theory (CTT). However, these scales have not been tested against the Rasch measurement model, which is required to establish the structural validity and objectivity of the total scale and subscale scores. Thus, the purpose of this study was to examine the psychometric properties of these scales using item analysis according to the Rasch model. Item analysis with the Rasch model was performed using RUMM2030 software with previously collected data from a validation study of 108 patients. The assessment included evaluations of overall model fit, reliability, unidimensionality, threshold ordering, individual item and person fits, differential item functioning (DIF), local item dependency (LID) and targeting. The analysis could not establish the psychometric properties of either of the scales or their subscales because they did not fit the Rasch model, and multidimensionality, disordered thresholds, DIF, and/or LID were found. The reliability and power of fit were high for the total scales (PSI = 0.93) but low for most of the subscales. Conclusions drawn from the DSWAL-QoL and aDSWAL-QoL total and subscale scores regarding dysphagia-related HRQoL should therefore be treated with caution until the structural validity and objectivity of both scales have been established. A larger and well-targeted sample is recommended to derive definitive conclusions about the items and scales. Solutions for the psychometric weaknesses suggested by the model and practical implications are discussed.
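
    For context, the dichotomous Rasch model underlying this family of analyses gives the probability of endorsing an item as a logistic function of the difference between person ability and item difficulty; the sketch below simply evaluates that standard formula with hypothetical values (the SWAL-QoL items themselves are polytomous, so the actual analysis uses an extension of this form).

    ```python
    import math

    def rasch_probability(theta: float, b: float) -> float:
        """P(endorsed) = exp(theta - b) / (1 + exp(theta - b)) for the dichotomous Rasch model."""
        return 1.0 / (1.0 + math.exp(-(theta - b)))

    # Hypothetical person ability of 0.5 logits and item difficulty of -0.3 logits.
    print(round(rasch_probability(0.5, -0.3), 3))
    ```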

  12. Advanced Utility Simulation Model: multi-period multi-state module design documentation (Version 1.0). Final report, October 1982-November 1984

    Energy Technology Data Exchange (ETDEWEB)

    Edahl, R.; Tyle, N.; Talukdar, S.N.; Pachavis, N.L.

    1988-04-01

    This report is one of 11 in a series describing the initial development of the Advanced Utility Simulation Model (AUSM) by the Universities Research Group on Energy (URGE) and its continued development by the Science Applications International Corporation (SAIC) research team. The AUSM is one of four stationary-source emission and control cost-forecasting models developed by EPA for the National Acid Precipitation Assessment Program (NAPAP). The AUSM projects air pollution emissions (SO2 and NOx), generating technology types and costs of operation, and combinations of fuels and emission-control technologies to simultaneously meet electric demand and emission constraints on a least-cost basis for each year through 2010. Thirteen electric-demand regions are simulated, and output is provided for each of the 48 states.

  13. Expanded Simulation Models Version 3.0 for Growth of the Submerged Aquatic Plants American Wildcelery, Sago Pondweed, Hydrilla, and Eurasian Watermilfoil

    Science.gov (United States)

    2007-11-01

    [Report excerpt consists of figure and table fragments: Figure 1, a relational diagram illustrating the organization of an aquatic plant growth model (state variables, model parameters, calculations, and calls to subroutines), and a table of phenological stages running from tuber sprouting and initial elongation through leaf expansion, floral initiation and anthesis, to induction of tuber formation, tuber formation, and senescence.]

  14. An improved land biosphere module for use in the DCESS Earth system model (version 1.1) with application to the last glacial termination

    Directory of Open Access Journals (Sweden)

    R. Eichinger

    2017-09-01

    Full Text Available Interactions between the land biosphere and the atmosphere play an important role in the Earth's carbon cycle and thus should be considered in studies of global carbon cycling and climate. Simple approaches are a useful first step in this direction but may not be applicable for certain climatic conditions. To improve the ability of the reduced-complexity Danish Center for Earth System Science (DCESS) Earth system model to address cold climate conditions, we reformulated the model's land biosphere module by extending it to include three dynamically varying vegetation zones as well as a permafrost component. The vegetation zones are formulated by emulating the behaviour of a complex land biosphere model. We show that with the new module, the size and timing of carbon exchanges between atmosphere and land are represented more realistically in cooling and warming experiments. In particular, we use the new module to address carbon cycling and climate change across the last glacial transition. Within the constraints provided by various proxy data records, we tune the DCESS model to a Last Glacial Maximum state and then conduct transient sensitivity experiments across the transition under the application of explicit transition functions for high-latitude ocean exchange, atmospheric dust, and the land ice sheet extent. We compare simulated time evolutions of global mean temperature, pCO2, atmospheric and oceanic carbon isotopes as well as ocean dissolved oxygen concentrations with proxy data records. In this way we estimate the importance of different processes across the transition with emphasis on the role of land biosphere variations and show that carbon outgassing from permafrost and uptake of carbon by the land biosphere broadly compensate for each other during the temperature rise of the early last deglaciation.

  15. The refined biomimetic NeuroDigm GEL™ model of neuropathic pain in a mature rat [version 2; referees: 1 approved, 2 approved with reservations]

    Directory of Open Access Journals (Sweden)

    Mary R. Hannaman

    2017-05-01

    Full Text Available Background: Many humans suffering with chronic neuropathic pain have no objective evidence of an etiological lesion or disease. Frequently their persistent pain occurs after the healing of a soft tissue injury. Based on clinical observations over time, our hypothesis was that after an injury in mammals the process of tissue repair could cause chronic neural pain. Our objectives were to create the delayed onset of neuropathic pain in rats with minimal nerve trauma using a physiologic hydrogel, and to characterize the rats’ responses to known analgesics and a targeted biologic. Methods: In mature male Sprague Dawley rats (age 9.5 months), a percutaneous implant of tissue-derived hydrogel was placed in the musculofascial tunnel of the distal tibial nerve. Subcutaneous morphine (3 mg/kg), celecoxib (10 mg/kg), gabapentin (25 mg/kg) and duloxetine (10 mg/kg) were each screened in the model three times over 5 months after pain behaviors developed. Sham and control groups were used in all screenings. A pilot study followed in which recombinant human erythropoietin (200 units) was injected by the GEL™ neural procedure site. Results: The GEL group gradually developed mechanical hypersensitivity lasting months. Morphine, initially effective, had less analgesia over time. Celecoxib produced no analgesia, while gabapentin and duloxetine at low doses demonstrated profound analgesia at all times tested. The injected erythropoietin markedly decreased bilateral pain behavior that had been present for over 4 months, p ≤ 0.001. Histology of the GEL group tibial nerve revealed a site of focal neural remodeling, with neural regeneration, as found in nerve biopsies of patients with neuropathic pain. Conclusion: The refined NeuroDigm GEL™ model induces a neural response resulting in robust neuropathic pain behavior. The analgesic responses in this model reflect known responses of humans with neuropathic pain. The targeted recombinant human erythropoietin

  16. Gridded Surface Subsurface Hydrologic Analysis (GSSHA) User’s Manual; Version 1.43 for Watershed Modeling System 6.1

    Science.gov (United States)

    2006-09-01

    [Manual excerpt consists of extraction fragments covering: transpiration, in which plants lose water by opening and closing their stomata and the loss of water vapor can be calculated from a resistance law; automated calibration of the CASC2D model by a procedure such as the shuffled complex evolution (SCE) method; and actual evapotranspiration (AET), which depends on the soil moisture in each cell, the hydraulic properties of the soil, and plant characteristics.]

  17. Evidence synthesis and decision modelling to support complex decisions: stockpiling neuraminidase inhibitors for pandemic influenza usage [version 2; referees: 2 approved]

    Directory of Open Access Journals (Sweden)

    Samuel I. Watson

    2017-03-01

    Full Text Available Objectives: The stockpiling of neuraminidase inhibitor (NAI) antivirals as a defence against pandemic influenza is a significant public health policy decision that must be made despite a lack of conclusive evidence from randomised controlled trials regarding the effectiveness of NAIs on important clinical end points such as mortality. The objective of this study was to determine whether NAIs should be stockpiled for treatment of pandemic influenza on the basis of current evidence. Methods: A decision model for stockpiling was designed. Data on previous pandemic influenza epidemiology were combined with data on the effectiveness of NAIs in reducing mortality obtained from a recent individual participant meta-analysis using observational data. Evidence synthesis techniques and a bias modelling method for observational data were used to incorporate the evidence into the model. The stockpiling decision was modelled for adults (≥16 years old), and the United Kingdom was used as an example. The main outcome was the expected net benefit of stockpiling in monetary terms. Health benefits were estimated from deaths averted through stockpiling. Results: After adjusting for biases in the estimated effectiveness of NAIs, the expected net benefit of stockpiling in the baseline analysis was £444 million, assuming a willingness to pay of £20,000/QALY ($31,000/QALY). The decision would therefore be to stockpile NAIs. There was a greater probability that the stockpile would not be utilised than utilised. However, the rare but catastrophic losses from a severe pandemic justified the decision to stockpile. Conclusions: Taking into account the available epidemiological data and evidence of effectiveness of NAIs in reducing mortality, including potential biases, a decision maker should stockpile anti-influenza medication in keeping with the postulated decision rule.
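
    The decision rule rests on comparing the probability-weighted monetary value of the health gained against the cost of the stockpile; the sketch below shows that expected-net-benefit arithmetic with entirely hypothetical numbers, not the study's inputs.

    ```python
    def expected_net_benefit(p_pandemic, deaths_averted, qalys_per_death,
                             wtp_per_qaly, stockpile_cost):
        """Expected net benefit = P(pandemic) * monetary value of health gained - stockpile cost."""
        health_value = deaths_averted * qalys_per_death * wtp_per_qaly
        return p_pandemic * health_value - stockpile_cost

    # Hypothetical inputs: 3% pandemic probability, 10,000 deaths averted,
    # 10 QALYs per death averted, GBP 20,000 per QALY, GBP 50 million stockpile cost.
    print(expected_net_benefit(0.03, 10_000, 10, 20_000, 50e6))
    ```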

  18. Low-cost, rapidly-developed, 3D printed in vitro corpus callosum model for mucopolysaccharidosis type I [version 2; referees: 2 approved]

    Directory of Open Access Journals (Sweden)

    Anthony Tabet

    2017-03-01

    Full Text Available The rising prevalence of high throughput screening and the general inability of (1) two-dimensional (2D) cell culture and (2) in vitro release studies to predict in vivo neurobiological and pharmacokinetic responses in humans has led to greater interest in more realistic three-dimensional (3D) benchtop platforms. Advantages of 3D human cell culture over its 2D analogue, or even animal models, include taking the effects of microgeometry and long-range topological features into consideration. In the era of personalized medicine, it has become increasingly valuable to screen candidate molecules and synergistic therapeutics at a patient-specific level, in particular for diseases that manifest in highly variable ways. The lack of established standards and the relatively arbitrary choice of probing conditions have limited in vitro drug release to a largely qualitative assessment as opposed to a predictive, quantitative measure of pharmacokinetics and pharmacodynamics in tissue. Here we report the methods used in the rapid, low-cost development of a 3D model of a mucopolysaccharidosis type I patient’s corpus callosum, which may be used for cell culture and drug release. The CAD model is developed from in vivo brain MRI tracing of the corpus callosum using open-source software, printed with poly(lactic acid) on a Makerbot Replicator 5X, UV-sterilized, and coated with poly(lysine) for cellular adhesion. Adaptations of material and 3D printer for expanded applications are also discussed.

  19. How to put plant root uptake into a soil water flow model [version 1; referees: 2 approved, 1 approved with reservations]

    Directory of Open Access Journals (Sweden)

    Xuejun Dong

    2016-01-01

    Full Text Available The need for improved crop water use efficiency calls for flexible modeling platforms to implement new ideas in plant root uptake and its regulation mechanisms. This paper documents the details of modifying a soil infiltration and redistribution model to include (a) dynamic root growth, (b) non-uniform root distribution and water uptake, (c) the effect of water stress on plant water uptake, and (d) soil evaporation. The paper also demonstrates strategies of using the modified model to simulate soil water dynamics and plant transpiration considering different sensitivity of plants to soil dryness and different mechanisms of root water uptake. In particular, the flexibility of simulating various degrees of compensated uptake (whereby plants tend to maintain potential transpiration under mild water stress) is emphasized. The paper also describes how to estimate unknown root distribution and rooting depth parameters by the use of a simulation-based searching method. The full documentation of the computer code will allow further applications and new development.
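
    A common way to represent item (c), the effect of water stress on uptake, is a dimensionless reduction factor applied to potential root water uptake in each soil layer; the piecewise-linear stress function and thresholds below are a generic Feddes-style sketch under assumed values, not necessarily the formulation used in the paper.

    ```python
    def stress_factor(theta, theta_wilt=0.10, theta_crit=0.25):
        """Piecewise-linear uptake reduction: 0 at wilting point, 1 above a critical moisture."""
        if theta <= theta_wilt:
            return 0.0
        if theta >= theta_crit:
            return 1.0
        return (theta - theta_wilt) / (theta_crit - theta_wilt)

    def layer_uptake(potential_transpiration, root_fractions, thetas):
        """Distribute potential transpiration over layers by root fraction, reduced by water stress."""
        return [potential_transpiration * f * stress_factor(th)
                for f, th in zip(root_fractions, thetas)]

    # Hypothetical: 3 soil layers holding 60/30/10% of the roots, with varying moisture contents.
    print(layer_uptake(5.0, [0.6, 0.3, 0.1], [0.30, 0.18, 0.12]))  # mm/day per layer
    ```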

  20. Seismic velocity model of the central United States (Version 1): Description and simulation of the 18 April 2008 Mt. Carmel, Illinois, Earthquake

    Science.gov (United States)

    Ramírez‐Guzmán, Leonardo; Boyd, Oliver S.; Hartzell, Stephen; Williams, Robert A.

    2012-01-01

    We have developed a new three‐dimensional seismic velocity model of the central United States (CUSVM) that includes the New Madrid Seismic Zone (NMSZ) and covers parts of Arkansas, Mississippi, Alabama, Illinois, Missouri, Kentucky, and Tennessee. The model represents a compilation of decades of crustal research consisting of seismic, aeromagnetic, and gravity profiles; geologic mapping; geophysical and geological borehole logs; and inversions of the regional seismic properties. The density, P‐ and S‐wave velocities are synthesized in a stand‐alone spatial database that can be queried to generate the required input for numerical seismic‐wave propagation simulations. We test and calibrate the CUSVM by simulating ground motions of the 18 April 2008 Mw 5.4 Mt. Carmel, Illinois, earthquake and comparing the results with observed records within the model area. The selected stations in the comparisons reflect different geological site conditions and cover distances ranging from 10 to 430 km from the epicenter. The results, based on a qualitative and quantitative goodness‐of‐fit (GOF) characterization, indicate that both within and outside the Mississippi Embayment the CUSVM reasonably reproduces: (1) the body and surface‐wave arrival times and (2) the observed regional variations in ground‐motion amplitude, cumulative energy, duration, and frequency content up to a frequency of 1.0 Hz. In addition, we discuss the probable structural causes for the ground‐motion patterns in the central United States that we observed in the recorded motions of the 18 April Mt. Carmel earthquake.

  1. CLIPS 6.0 - C LANGUAGE INTEGRATED PRODUCTION SYSTEM, VERSION 6.0 (UNIX VERSION)

    Science.gov (United States)

    Donnell, B.

    1994-01-01

    CLIPS, the C Language Integrated Production System, is a complete environment for developing expert systems -- programs which are specifically intended to model human expertise or knowledge. It is designed to allow artificial intelligence research, development, and delivery on conventional computers. CLIPS 6.0 provides a cohesive tool for handling a wide variety of knowledge with support for three different programming paradigms: rule-based, object-oriented, and procedural. Rule-based programming allows knowledge to be represented as heuristics, or "rules-of-thumb" which specify a set of actions to be performed for a given situation. Object-oriented programming allows complex systems to be modeled as modular components (which can be easily reused to model other systems or create new components). The procedural programming capabilities provided by CLIPS 6.0 allow CLIPS to represent knowledge in ways similar to those allowed in languages such as C, Pascal, Ada, and LISP. Using CLIPS 6.0, one can develop expert system software using only rule-based programming, only object-oriented programming, only procedural programming, or combinations of the three. CLIPS provides extensive features to support the rule-based programming paradigm including seven conflict resolution strategies, dynamic rule priorities, and truth maintenance. CLIPS 6.0 supports more complex nesting of conditional elements in the if portion of a rule ("and", "or", and "not" conditional elements can be placed within a "not" conditional element). In addition, there is no longer a limitation on the number of multifield slots that a deftemplate can contain. The CLIPS Object-Oriented Language (COOL) provides object-oriented programming capabilities. Features supported by COOL include classes with multiple inheritance, abstraction, encapsulation, polymorphism, dynamic binding, and message passing with message-handlers. CLIPS 6.0 supports tight integration of the rule-based programming features of CLIPS with
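
    CLIPS uses its own rule syntax, but the rule-based idea described here (conditions paired with actions, fired according to priority) can be sketched in Python; the facts, rules and salience values below are hypothetical illustrations, not CLIPS code.

    ```python
    # Minimal forward-chaining sketch: rules fire when their condition holds,
    # ordered by a CLIPS-like salience (priority). Hypothetical facts and rules.
    facts = {"temperature": 105, "coolant": "low"}

    rules = [
        {"name": "shutdown", "salience": 10,
         "condition": lambda f: f["temperature"] > 100 and f["coolant"] == "low",
         "action": lambda f: print("Rule shutdown: initiate emergency shutdown")},
        {"name": "warn", "salience": 1,
         "condition": lambda f: f["temperature"] > 100,
         "action": lambda f: print("Rule warn: temperature high")},
    ]

    # Fire applicable rules, highest salience first (a simple conflict-resolution strategy).
    for rule in sorted(rules, key=lambda r: -r["salience"]):
        if rule["condition"](facts):
            rule["action"](facts)
    ```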

  2. CLIPS 6.0 - C LANGUAGE INTEGRATED PRODUCTION SYSTEM, VERSION 6.0 (MACINTOSH VERSION)

    Science.gov (United States)

    Riley, G.

    1994-01-01

    CLIPS, the C Language Integrated Production System, is a complete environment for developing expert systems -- programs which are specifically intended to model human expertise or knowledge. It is designed to allow artificial intelligence research, development, and delivery on conventional computers. CLIPS 6.0 provides a cohesive tool for handling a wide variety of knowledge with support for three different programming paradigms: rule-based, object-oriented, and procedural. Rule-based programming allows knowledge to be represented as heuristics, or "rules-of-thumb" which specify a set of actions to be performed for a given situation. Object-oriented programming allows complex systems to be modeled as modular components (which can be easily reused to model other systems or create new components). The procedural programming capabilities provided by CLIPS 6.0 allow CLIPS to represent knowledge in ways similar to those allowed in languages such as C, Pascal, Ada, and LISP. Using CLIPS 6.0, one can develop expert system software using only rule-based programming, only object-oriented programming, only procedural programming, or combinations of the three. CLIPS provides extensive features to support the rule-based programming paradigm including seven conflict resolution strategies, dynamic rule priorities, and truth maintenance. CLIPS 6.0 supports more complex nesting of conditional elements in the if portion of a rule ("and", "or", and "not" conditional elements can be placed within a "not" conditional element). In addition, there is no longer a limitation on the number of multifield slots that a deftemplate can contain. The CLIPS Object-Oriented Language (COOL) provides object-oriented programming capabilities. Features supported by COOL include classes with multiple inheritance, abstraction, encapsulation, polymorphism, dynamic binding, and message passing with message-handlers. CLIPS 6.0 supports tight integration of the rule-based programming features of CLIPS with

  3. CLIPS 6.0 - C LANGUAGE INTEGRATED PRODUCTION SYSTEM, VERSION 6.0 (IBM PC VERSION)

    Science.gov (United States)

    Donnell, B.

    1994-01-01

    CLIPS, the C Language Integrated Production System, is a complete environment for developing expert systems -- programs which are specifically intended to model human expertise or knowledge. It is designed to allow artificial intelligence research, development, and delivery on conventional computers. CLIPS 6.0 provides a cohesive tool for handling a wide variety of knowledge with support for three different programming paradigms: rule-based, object-oriented, and procedural. Rule-based programming allows knowledge to be represented as heuristics, or "rules-of-thumb" which specify a set of actions to be performed for a given situation. Object-oriented programming allows complex systems to be modeled as modular components (which can be easily reused to model other systems or create new components). The procedural programming capabilities provided by CLIPS 6.0 allow CLIPS to represent knowledge in ways similar to those allowed in languages such as C, Pascal, Ada, and LISP. Using CLIPS 6.0, one can develop expert system software using only rule-based programming, only object-oriented programming, only procedural programming, or combinations of the three. CLIPS provides extensive features to support the rule-based programming paradigm including seven conflict resolution strategies, dynamic rule priorities, and truth maintenance. CLIPS 6.0 supports more complex nesting of conditional elements in the if portion of a rule ("and", "or", and "not" conditional elements can be placed within a "not" conditional element). In addition, there is no longer a limitation on the number of multifield slots that a deftemplate can contain. The CLIPS Object-Oriented Language (COOL) provides object-oriented programming capabilities. Features supported by COOL include classes with multiple inheritance, abstraction, encapsulation, polymorphism, dynamic binding, and message passing with message-handlers. CLIPS 6.0 supports tight integration of the rule-based programming features of CLIPS with

  4. UQTk version 2.0 user manual

    Energy Technology Data Exchange (ETDEWEB)

    Debusschere, Bert J. [Sandia National Lab. (SNL-CA), Livermore, CA (United States); Sargsyan, Khachik [Sandia National Lab. (SNL-CA), Livermore, CA (United States); Safta, Cosmin [Sandia National Lab. (SNL-CA), Livermore, CA (United States)

    2013-10-01

    The UQ Toolkit (UQTk) is a collection of libraries and tools for the quantification of uncertainty in numerical model predictions. Version 2.0 offers intrusive and non-intrusive methods for propagating input uncertainties through computational models, tools for sensitivity analysis, methods for sparse surrogate construction, and Bayesian inference tools for inferring parameters from experimental data. This manual discusses the download and installation process for UQTk, provides pointers to the UQ methods used in the toolkit, and describes some of the examples provided with the toolkit.
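
    UQTk itself is a C++ toolkit with its own interfaces; as generic context for the non-intrusive propagation it mentions, the sketch below pushes sampled input uncertainties through a stand-in computational model with plain Monte Carlo. The model function and input distributions are hypothetical.

    ```python
    import numpy as np

    def model(x1, x2):
        """Stand-in for an expensive computational model."""
        return np.sin(x1) + 0.5 * x2 ** 2

    rng = np.random.default_rng(7)
    n = 5_000
    x1 = rng.normal(loc=1.0, scale=0.1, size=n)    # hypothetical uncertain input 1
    x2 = rng.uniform(low=0.8, high=1.2, size=n)    # hypothetical uncertain input 2

    outputs = model(x1, x2)                        # non-intrusive: the model is a black box
    print(f"mean = {outputs.mean():.3f}, std = {outputs.std():.3f}")
    ```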

  5. Air quality simulation over South Asia using Hemispheric Transport of Air Pollution version-2 (HTAP-v2) emission inventory and Model for Ozone and Related chemical Tracers (MOZART-4)

    Science.gov (United States)

    Surendran, Divya E.; Ghude, Sachin D.; Beig, G.; Emmons, L. K.; Jena, Chinmay; Kumar, Rajesh; Pfister, G. G.; Chate, D. M.

    2015-12-01

    This study presents the distribution of tropospheric ozone and related species for South Asia using the Model for Ozone and Related chemical Tracers (MOZART-4) and the Hemispheric Transport of Air Pollution version-2 (HTAP-v2) emission inventory. The model present-day simulated ozone (O3), carbon monoxide (CO) and nitrogen dioxide (NO2) are evaluated against surface-based, balloon-borne and satellite-based (MOPITT and OMI) observations. The model systematically overestimates surface O3 mixing ratios (range of mean bias about 1-30 ppbv) at different ground-based measurement sites in India. Comparison between simulated and observed vertical profiles of ozone shows a positive bias from the surface up to 600 hPa and a negative bias above 600 hPa. The simulated seasonal variation in surface CO mixing ratio is consistent with the surface observations, but has a negative bias of about 50-200 ppb, which can be attributed in large part to the coarse model resolution. In contrast to the surface evaluation, the model shows a positive bias of about 15-20 × 10^17 molecules/cm2 over South Asia when compared to satellite-derived CO columns from the MOPITT instrument. The model also overestimates the OMI-retrieved tropospheric column NO2 abundance by about 100-250 × 10^13 molecules/cm2. A response to a 20% reduction in all anthropogenic emissions over South Asia shows a decrease in the annual mean O3 mixing ratios by about 3-12 ppb, CO by about 10-80 ppb and NOX by about 3-6 ppb at the surface level. During the summer monsoon, O3 mixing ratios at 200 hPa show a decrease of about 6-12 ppb over South Asia and about 1-4 ppb over the remote northern hemispheric western Pacific region.

  6. Application modeling ipv6 (internet protocol version 6) on e-id card for identification number for effectiveness and efficiency of registration process identification of population

    Science.gov (United States)

    Pardede, A. M. H.; Maulita, Y.; Buaton, R.

    2018-03-01

    When someone wants to be registered in an institution such as Birth Certificate, School, Higher Education, e-ID card, Tax, BPJS, Bank, Driving License, Passport and