WorldWideScience

Sample records for models saphire version

  1. Systems analysis programs for Hands-on integrated reliability evaluations (SAPHIRE) Version 5.0: Verification and validation (V&V) manual. Volume 9

    Energy Technology Data Exchange (ETDEWEB)

    Jones, J.L.; Calley, M.B.; Capps, E.L.; Zeigler, S.L.; Galyean, W.J.; Novack, S.D.; Smith, C.L.; Wolfram, L.M. [Lockheed Idaho Technologies Co., Idaho Falls, ID (United States)]

    1995-03-01

    A verification and validation (V&V) process has been performed for the System Analysis Programs for Hands-on Integrated Reliability Evaluation (SAPHIRE) Version 5.0. SAPHIRE is a set of four computer programs that the NRC developed for performing probabilistic risk assessments. They allow an analyst to perform many of the functions necessary to create, quantify, and evaluate the risk associated with a facility or process being analyzed. The programs are the Integrated Reliability and Risk Analysis System (IRRAS), System Analysis and Risk Assessment (SARA), Models And Results Database (MAR-D), and Fault tree, Event tree, and Piping and instrumentation diagram (FEP) graphical editor. The intent of this program is to perform a V&V of successive versions of SAPHIRE. The previous effort was the V&V of SAPHIRE Version 4.0. The SAPHIRE 5.0 V&V plan is based on the SAPHIRE 4.0 V&V plan, with revisions to incorporate lessons learned from the previous effort. Likewise, the SAPHIRE 5.0 vital and nonvital test procedures are based on the test procedures from SAPHIRE 4.0, with revisions to include the new SAPHIRE 5.0 features as well as to incorporate lessons learned. Most results from the testing were acceptable; however, some discrepancies between expected code operation and actual code operation were identified. Modifications made to SAPHIRE are identified.

  2. SAPHIRE models and software for ASP evaluations

    Energy Technology Data Exchange (ETDEWEB)

    Sattison, M.B.; Schroeder, J.A.; Russell, K.D. [Idaho National Engineering Lab., Idaho Falls, ID (United States)] [and others]

    1995-04-01

    The Idaho National Engineering Laboratory (INEL) over the past year has created 75 plant-specific Accident Sequence Precursor (ASP) models using the SAPHIRE suite of PRA codes. Along with the new models, the INEL has also developed a new module for SAPHIRE which is tailored specifically to the unique needs of conditional core damage probability (CCDP) evaluations. These models and software will be the next generation of risk tools for the evaluation of accident precursors by both NRR and AEOD. This paper presents an overview of the models and software. Key characteristics include: (1) classification of the plant models according to plant response with a unique set of event trees for each plant class, (2) plant-specific fault trees using supercomponents, (3) generation and retention of all system and sequence cutsets, (4) full flexibility in modifying logic, regenerating cutsets, and requantifying results, and (5) user interface for streamlined evaluation of ASP events.
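The generation of cutsets described in item (3) can be illustrated with a toy example. The sketch below is not SAPHIRE's actual algorithm; it simply expands a small, hypothetical fault tree into its minimal cut sets by recursive Boolean combination, with invented gate and event names.

```python
from itertools import product

# Toy fault tree: TOP = AND(G1, G2); G1 = OR(A, B); G2 = OR(B, C).
# Gate and basic-event names are hypothetical, for illustration only.
tree = {
    "TOP": ("AND", ["G1", "G2"]),
    "G1": ("OR", ["A", "B"]),
    "G2": ("OR", ["B", "C"]),
}

def cut_sets(node):
    """Return a list of cut sets (frozensets of basic events) for a node."""
    if node not in tree:                      # leaf: a basic event
        return [frozenset([node])]
    op, kids = tree[node]
    kid_sets = [cut_sets(k) for k in kids]
    if op == "OR":                            # union of the children's cut sets
        return [cs for ks in kid_sets for cs in ks]
    # AND: every combination of one cut set per child, merged together
    return [frozenset().union(*combo) for combo in product(*kid_sets)]

def minimize(sets):
    """Drop any cut set that strictly contains another (non-minimal)."""
    unique = set(sets)
    return sorted((s for s in unique if not any(o < s for o in unique)),
                  key=sorted)

mcs = minimize(cut_sets("TOP"))
# Minimal cut sets of the toy tree: {B} and {A, C}
```

Retaining all cut sets (before `minimize`) is what permits the "full flexibility in modifying logic, regenerating cutsets, and requantifying results" noted above.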

  3. Saphire models and software for ASP evaluations

    Energy Technology Data Exchange (ETDEWEB)

    Sattison, M.B. [Idaho National Engineering Lab., Idaho Falls, ID (United States)]

    1997-02-01

    The Idaho National Engineering Laboratory (INEL) over the past three years has created 75 plant-specific Accident Sequence Precursor (ASP) models using the SAPHIRE suite of PRA codes. Along with the new models, the INEL has also developed a new module for SAPHIRE which is tailored specifically to the unique needs of ASP evaluations. These models and software will be the next generation of risk tools for the evaluation of accident precursors by both the U.S. Nuclear Regulatory Commission's (NRC's) Office of Nuclear Reactor Regulation (NRR) and the Office for Analysis and Evaluation of Operational Data (AEOD). This paper presents an overview of the models and software. Key characteristics include: (1) classification of the plant models according to plant response with a unique set of event trees for each plant class, (2) plant-specific fault trees using supercomponents, (3) generation and retention of all system and sequence cutsets, (4) full flexibility in modifying logic, regenerating cutsets, and requantifying results, and (5) user interface for streamlined evaluation of ASP events. Future plans for the ASP models are also presented.

  4. Independent Verification and Validation SAPHIRE Version 8 Final Report Project Number: N6423 U.S. Nuclear Regulatory Commission

    Energy Technology Data Exchange (ETDEWEB)

    Kent Norris

    2010-04-01

    This report provides an evaluation of the SAPHIRE version 8 software product. SAPHIRE version 8 is being developed with a phased, cyclic, iterative rapid application development methodology. Accordingly, a similarly iterative approach has been taken for the IV&V activities on each vital software object. IV&V and Software Quality Assurance (SQA) activities occur throughout the entire development life cycle and therefore will be required through the full development of SAPHIRE version 8. Later phases of the software life cycle, the operation and maintenance phases, are not applicable in this effort since the IV&V is being done prior to releasing Version 8.

  5. 75 FR 33162 - Airworthiness Directives; Microturbo Saphir 20 Model 095 Auxiliary Power Units (APUs)

    Science.gov (United States)

    2010-06-11

    ...-21-AD; Amendment 39-16332; AD 2010-13-01] RIN 2120-AA64 Airworthiness Directives; Microturbo Saphir...-015-03, of the SAPHIR 20 Model 095 APU is a life-limited part. Microturbo had determined through ``fleet...

  6. SAPHIRE 8 Software Project Plan

    Energy Technology Data Exchange (ETDEWEB)

    Curtis L. Smith; Ted S. Wood

    2010-03-01

    This project is being conducted at the request of the DOE and the NRC. The INL has been requested by the NRC to improve and maintain the Systems Analysis Programs for Hands-on Integrated Reliability Evaluation (SAPHIRE) tool set concurrent with the changing needs of the user community as well as staying current with new technologies. Successful completion will be marked by NRC-approved release of all software and accompanying documentation in a timely fashion. This project will enhance the SAPHIRE tool set for the user community (NRC, Nuclear Power Plant operations, Probabilistic Risk Analysis (PRA) model developers) by providing improved Common Cause Failure (CCF), External Events, Level 2, and Significance Determination Process (SDP) analysis capabilities. The SAPHIRE development team at the Idaho National Laboratory is responsible for successful completion of this project. The project is under the supervision of Curtis L. Smith, PhD, Technical Lead for the SAPHIRE application. All current capabilities from SAPHIRE version 7 will be maintained in SAPHIRE 8. The following additional capabilities will be incorporated:
    • Incorporation of SPAR models for the SDP interface.
    • Improved quality assurance activities for PRA calculations of SAPHIRE Version 8.
    • Continuation of the current activities for code maintenance, documentation, and user support for the code.

  7. SAPHIRE 8 New Features and Capabilities

    Energy Technology Data Exchange (ETDEWEB)

    Curtis Smith

    2008-08-01

    The Systems Analysis Programs for Hands-on Integrated Reliability Evaluations (SAPHIRE) software performs probabilistic risk assessment (PRA) calculations. SAPHIRE is used in support of NRC’s risk-informed programs such as the Accident Sequence Precursor (ASP) program, Management Directive 8.3, “NRC Incident Investigation Program,” or the Significance Determination Process (SDP). It is also used to develop and run the Standardized Plant Analysis Risk (SPAR) models. SAPHIRE Version 8 is a new version of the software with an improved interface and capabilities to support risk-informed programs. SAPHIRE Version 8 is designed to easily handle larger and more complex models. Applications of previous SAPHIRE versions indicated the need to build and solve models with a large number of sequences. Risk assessments that include endstate evaluations for core damage frequency and large, early release frequency evaluations have greatly increased the number of sequences required. In addition, the complexity of the models has increased since risk assessments evaluate both potential internal and external events, as well as different plant operational states. Special features of SAPHIRE 8 help create and run integrated models which may be composed of different model types. SAPHIRE 8 includes features and capabilities that are new or improved over the current Version 7 to address the new requirements for risk-informed programs and SPAR models. These include:
    • Improved User Interfaces
    • Model development
    • Methods
    • General Support Features

  8. Methods improvements incorporated into the SAPHIRE ASP models

    Energy Technology Data Exchange (ETDEWEB)

    Sattison, M.B.; Blackman, H.S.; Novack, S.D. [Idaho National Engineering Lab., Idaho Falls, ID (United States)] [and others

    1995-04-01

    The Office for Analysis and Evaluation of Operational Data (AEOD) has sought the assistance of the Idaho National Engineering Laboratory (INEL) to make some significant enhancements to the SAPHIRE-based Accident Sequence Precursor (ASP) models recently developed by the INEL. The challenge of this project is to provide the features of a full-scale PRA within the framework of the simplified ASP models. Some of these features include: (1) uncertainty analysis addressing the standard PRA uncertainties and the uncertainties unique to the ASP models and methods, (2) incorporation and proper quantification of individual human actions and the interaction among human actions, (3) enhanced treatment of common cause failures, and (4) extension of the ASP models to more closely mimic full-scale PRAs (inclusion of more initiators, explicitly modeling support system failures, etc.). This paper provides an overview of the methods being used to make the above improvements.
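The uncertainty analysis of item (1) is commonly done in PRA by Monte Carlo propagation of lognormal basic-event distributions, parameterized by a median and an error factor. The sketch below illustrates that general approach only; the event names, numbers, and toy cut sets are invented and are not taken from the ASP models.

```python
import math
import random
import statistics

random.seed(0)

# Hypothetical basic events with lognormal uncertainty: (median, error factor).
events = {"PUMP-FAIL": (1e-3, 3.0), "VALVE-FAIL": (5e-4, 5.0), "DG-FAIL": (2e-2, 3.0)}

def sample(median, ef):
    # For a lognormal, the error factor is EF = exp(1.645 * sigma),
    # i.e. the ratio of the 95th percentile to the median.
    sigma = math.log(ef) / 1.645
    return median * math.exp(random.gauss(0.0, sigma))

def top_probability(p):
    # Toy cut sets {PUMP-FAIL, DG-FAIL} and {VALVE-FAIL};
    # rare-event approximation: sum of cut set probabilities.
    return p["PUMP-FAIL"] * p["DG-FAIL"] + p["VALVE-FAIL"]

draws = [top_probability({e: sample(m, ef) for e, (m, ef) in events.items()})
         for _ in range(10_000)]
draws.sort()
mean = statistics.fmean(draws)
p05, p95 = draws[500], draws[9500]   # 5th / 95th percentile estimates
```

The resulting percentile band, rather than a single point estimate, is what distinguishes an uncertainty-aware result from a plain quantification.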

  9. SAPHIRE 8 Volume 7 - Data Loading

    Energy Technology Data Exchange (ETDEWEB)

    K. J. Kvarfordt; S. T. Wood; C. L. Smith; S. R. Prescott

    2011-03-01

    The Systems Analysis Programs for Hands-on Integrated Reliability Evaluations (SAPHIRE) is a software application developed for performing a complete probabilistic risk assessment (PRA) using a personal computer. SAPHIRE Version 8 is funded by the U.S. Nuclear Regulatory Commission and developed by the Idaho National Laboratory. This report is intended to assist the user in entering PRA data into the SAPHIRE program using the built-in MAR-D ASCII-text file data transfer process. Toward this end, a small sample database is constructed and utilized for demonstration. Where applicable, the discussion includes how the data processes for loading the sample database relate to the actual processes used to load larger PRA models. The procedures described herein were developed for use with SAPHIRE Version 8. The guidance specified in this document will allow a user to have sufficient knowledge both to understand the data format used by SAPHIRE and to carry out the transfer of data between different PRA projects.
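The ASCII-text transfer described above amounts to parsing delimited records into a database. The sketch below shows the general idea with a simplified, hypothetical record layout; it is NOT the actual MAR-D file format, and the field order and event names are invented for demonstration.

```python
# Parse a simplified, comma-delimited basic-event file. The record layout
# is hypothetical, for illustration -- it is NOT the real MAR-D format.
SAMPLE = """\
* name, mean failure probability, description
PUMP-A-FTR, 1.2E-3, Pump A fails to run
VALVE-B-FTO, 5.0E-4, Valve B fails to open
"""

def load_basic_events(text):
    """Build a dict of basic events from delimited ASCII text."""
    events = {}
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("*"):   # skip blanks and comments
            continue
        name, prob, desc = (f.strip() for f in line.split(",", 2))
        events[name] = {"prob": float(prob), "desc": desc}
    return events

db = load_basic_events(SAMPLE)
# db["PUMP-A-FTR"]["prob"] == 1.2e-3
```

A round trip (export from one project, import into another) is then just the inverse: formatting each record back into the same delimited layout.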

  10. SAPHIRE 8 Volume 1 - Overview and Summary

    Energy Technology Data Exchange (ETDEWEB)

    C. L. Smith; S. T. Wood

    2011-03-01

    The Systems Analysis Programs for Hands-on Integrated Reliability Evaluations (SAPHIRE) is a software application developed for performing a complete probabilistic risk assessment (PRA) using a personal computer (PC) running the Microsoft Windows operating system. SAPHIRE Version 8 is funded by the U.S. Nuclear Regulatory Commission (NRC) and developed by the Idaho National Laboratory (INL). INL's primary role in this project is that of software developer and tester. However, INL also plays an important role in technology transfer by interfacing and supporting SAPHIRE users, who constitute a wide range of PRA practitioners from the NRC, national laboratories, the private sector, and foreign countries. SAPHIRE can be used to model a complex system’s response to initiating events and quantify associated consequential outcome frequencies. Specifically, for nuclear power plant applications, SAPHIRE 8 can identify important contributors to core damage (Level 1 PRA) and containment failure during a severe accident which leads to releases (Level 2 PRA). It can be used for a PRA where the reactor is at full power, low power, or at shutdown conditions. Furthermore, it can be used to analyze both internal and external initiating events and has special features for managing models such as flooding and fire. It can also be used in a limited manner to quantify risk in terms of release consequences to the public and environment (Level 3 PRA). In SAPHIRE 8, the act of creating a model has been separated from the analysis of that model in order to improve the quality of both the model (e.g., by avoiding inadvertent changes) and the analysis. Consequently, in SAPHIRE 8, the analysis of models is performed by using what are called Workspaces. Currently, there are Workspaces for three types of analyses: (1) the NRC’s Accident Sequence Precursor program, where the workspace is called “Events and Condition Assessment (ECA);” (2) the NRC’s Significance Determination

  11. SAPHIRE 8 Volume 3 - Users' Guide

    Energy Technology Data Exchange (ETDEWEB)

    C. L. Smith; K. Vedros; K. J. Kvarfordt

    2011-03-01

    The Systems Analysis Programs for Hands-on Integrated Reliability Evaluations (SAPHIRE) is a software application developed for performing a complete probabilistic risk assessment (PRA) using a personal computer. SAPHIRE is funded by the U.S. Nuclear Regulatory Commission (NRC) and developed by the Idaho National Laboratory (INL). The INL's primary role in this project is that of software developer. However, the INL also plays an important role in technology transfer by interfacing and supporting SAPHIRE users comprised of a wide range of PRA practitioners from the NRC, national laboratories, the private sector, and foreign countries. SAPHIRE can be used to model a complex system’s response to initiating events, quantify associated damage outcome frequencies, and identify important contributors to this damage (Level 1 PRA) and to analyze containment performance during a severe accident and quantify radioactive releases (Level 2 PRA). It can be used for a PRA evaluating a variety of operating conditions, for example, for a nuclear reactor at full power, low power, or at shutdown conditions. Furthermore, SAPHIRE can be used to analyze both internal and external initiating events and has special features for transforming models built for internal event analysis to models for external event analysis. It can also be used in a limited manner to quantify risk in terms of release consequences to both the public and the environment (Level 3 PRA). This reference guide will introduce the SAPHIRE Version 8.0 software. A brief discussion of the purpose and history of the software is included along with general information such as installation instructions, starting and stopping the program, and some pointers on how to get around inside the program. Next, database concepts and structure are discussed. Following that discussion are nine sections, one for each of the menu options on the SAPHIRE main menu, wherein the purpose and general capabilities for each option are

  12. SAPHIR: a physiome core model of body fluid homeostasis and blood pressure regulation.

    Science.gov (United States)

    Thomas, S Randall; Baconnier, Pierre; Fontecave, Julie; Françoise, Jean-Pierre; Guillaud, François; Hannaert, Patrick; Hernández, Alfredo; Le Rolle, Virginie; Mazière, Pierre; Tahi, Fariza; White, Ronald J

    2008-09-13

    We present the current state of the development of the SAPHIR project (a Systems Approach for PHysiological Integration of Renal, cardiac and respiratory function). The aim is to provide an open-source multi-resolution modelling environment that will permit, at a practical level, a plug-and-play construction of integrated systems models using lumped-parameter components at the organ/tissue level while also allowing focus on cellular- or molecular-level detailed sub-models embedded in the larger core model. Thus, an in silico exploration of gene-to-organ-to-organism scenarios will be possible, while keeping computation time manageable. As a first prototype implementation in this environment, we describe a core model of human physiology targeting the short- and long-term regulation of blood pressure, body fluids and homeostasis of the major solutes. In tandem with the development of the core models, the project involves database implementation and ontology development.
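A lumped-parameter core model of the kind described above is, at its simplest, a small system of ordinary differential equations. The sketch below is a one-compartment caricature in that spirit: fluid intake and a pressure-dependent renal output set blood volume, and pressure follows volume. All parameters are invented for illustration and are not taken from the SAPHIR core model.

```python
# One-compartment fluid/pressure sketch, integrated with forward Euler.
# Parameter values are hypothetical, chosen only to give a stable steady state.
def simulate(days=30.0, dt=0.01, v0=4.0):
    intake = 2.0                 # fluid intake, L/day
    k_p = 25.0                   # pressure per unit volume, mmHg/L
    k_out = 0.0267               # renal output per unit pressure, (L/day)/mmHg
    v = v0                       # initial blood volume, L
    for _ in range(int(days / dt)):
        pressure = k_p * v
        output = k_out * pressure
        v += (intake - output) * dt
    return v, k_p * v

V, P = simulate()
# Volume settles near the steady state where intake equals output:
# V* = 2.0 / (0.0267 * 25.0) ≈ 2.996 L, P* ≈ 74.9 mmHg
```

Plugging a detailed renal submodule into such a core would amount to replacing the single `output = k_out * pressure` line with a richer model exposing the same input-output interface.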

  13. Systems Analysis Programs for Hands-on Integrated Reliability Evaluations (SAPHIRE) Data Loading Manual

    Energy Technology Data Exchange (ETDEWEB)

    C. L. Smith; K. J. Kvarfordt; S. T. Wood

    2006-07-01

    The Systems Analysis Programs for Hands-on Integrated Reliability Evaluations (SAPHIRE) is a software application developed for performing a complete probabilistic risk assessment (PRA) using a personal computer. SAPHIRE is primarily funded by the U.S. Nuclear Regulatory Commission (NRC) and developed by the Idaho National Laboratory. This report is intended to assist the user in entering PRA data into the SAPHIRE program using the built-in MAR-D ASCII-text file data transfer process. Toward this end, a small sample database is constructed and utilized for demonstration. Where applicable, the discussion includes how the data processes for loading the sample database relate to the actual processes used to load larger PRA models. The procedures described herein were developed for use with SAPHIRE Version 6.0 and Version 7.0. In general, the data transfer procedures for versions 6 and 7 are the same, but where deviations exist, the differences are noted. The guidance specified in this document will allow a user to have sufficient knowledge both to understand the data format used by SAPHIRE and to carry out the transfer of data between different PRA projects.

  14. Systems Analysis Programs for Hands-on Integrated Reliability Evaluations (SAPHIRE) Code Reference Manual

    Energy Technology Data Exchange (ETDEWEB)

    C. L. Smith; K. J. Kvarfordt; S. T. Wood

    2008-08-01

    The Systems Analysis Programs for Hands-on Integrated Reliability Evaluations (SAPHIRE) is a software application developed for performing a complete probabilistic risk assessment (PRA) using a personal computer. SAPHIRE is funded by the U.S. Nuclear Regulatory Commission (NRC) and developed by the Idaho National Laboratory (INL). The INL's primary role in this project is that of software developer. However, the INL also plays an important role in technology transfer by interfacing and supporting SAPHIRE users comprised of a wide range of PRA practitioners from the NRC, national laboratories, the private sector, and foreign countries. SAPHIRE can be used to model a complex system’s response to initiating events, quantify associated damage outcome frequencies, and identify important contributors to this damage (Level 1 PRA) and to analyze containment performance during a severe accident and quantify radioactive releases (Level 2 PRA). It can be used for a PRA evaluating a variety of operating conditions, for example, for a nuclear reactor at full power, low power, or at shutdown conditions. Furthermore, SAPHIRE can be used to analyze both internal and external initiating events and has special features for transforming models built for internal event analysis to models for external event analysis. It can also be used in a limited manner to quantify risk in terms of release consequences to both the public and the environment (Level 3 PRA). SAPHIRE includes a separate module called the Graphical Evaluation Module (GEM). GEM provides a highly specialized user interface with SAPHIRE that automates SAPHIRE process steps for evaluating operational events at commercial nuclear power plants. Using GEM, an analyst can estimate the risk associated with operational events in a very efficient and expeditious manner. This reference guide will introduce the SAPHIRE Version 7.0 software. A brief discussion of the purpose and history of the software is included along with

  15. Systems Analysis Programs for Hands-on Integrated Reliability Evaluations (SAPHIRE) Code Reference Manual

    Energy Technology Data Exchange (ETDEWEB)

    C. L. Smith; K. J. Kvarfordt; S. T. Wood

    2006-07-01

    The Systems Analysis Programs for Hands-on Integrated Reliability Evaluations (SAPHIRE) is a software application developed for performing a complete probabilistic risk assessment (PRA) using a personal computer. SAPHIRE is funded by the U.S. Nuclear Regulatory Commission (NRC) and developed by the Idaho National Laboratory (INL). The INL's primary role in this project is that of software developer. However, the INL also plays an important role in technology transfer by interfacing and supporting SAPHIRE users comprised of a wide range of PRA practitioners from the NRC, national laboratories, the private sector, and foreign countries. SAPHIRE can be used to model a complex system’s response to initiating events, quantify associated damage outcome frequencies, and identify important contributors to this damage (Level 1 PRA) and to analyze containment performance during a severe accident and quantify radioactive releases (Level 2 PRA). It can be used for a PRA evaluating a variety of operating conditions, for example, for a nuclear reactor at full power, low power, or at shutdown conditions. Furthermore, SAPHIRE can be used to analyze both internal and external initiating events and has special features for transforming models built for internal event analysis to models for external event analysis. It can also be used in a limited manner to quantify risk in terms of release consequences to both the public and the environment (Level 3 PRA). SAPHIRE includes a separate module called the Graphical Evaluation Module (GEM). GEM provides a highly specialized user interface with SAPHIRE that automates SAPHIRE process steps for evaluating operational events at commercial nuclear power plants. Using GEM, an analyst can estimate the risk associated with operational events in a very efficient and expeditious manner. This reference guide will introduce the SAPHIRE Version 7.0 software. A brief discussion of the purpose and history of the software is included along with

  16. SAPHIRE 8 Volume 6 - Quality Assurance

    Energy Technology Data Exchange (ETDEWEB)

    C. L. Smith; R. Nims; K. J. Kvarfordt

    2011-03-01

    The Systems Analysis Programs for Hands-on Integrated Reliability Evaluations (SAPHIRE) Version 8 is a software application developed for performing a complete probabilistic risk assessment using a personal computer running the Microsoft Windows™ operating system. SAPHIRE 8 is funded by the U.S. Nuclear Regulatory Commission (NRC). The role of the INL in this project is that of software developer and tester. This development takes place using formal software development procedures and is subject to quality assurance (QA) processes. The purpose of this document is to describe how the SAPHIRE software QA is performed for Version 8, what constitutes its parts, and limitations of those processes. In addition, this document describes the Independent Verification and Validation that was conducted for Version 8 as part of an overall QA process.

  17. Coupling CFAST fire modeling and SAPHIRE probabilistic assessment software for internal fire safety evaluation of a typical TRIGA research reactor

    Energy Technology Data Exchange (ETDEWEB)

    Safaei Arshi, Saiedeh [School of Engineering, Shiraz University, 71348-51154 Shiraz (Iran, Islamic Republic of); Nematollahi, Mohammadreza, E-mail: nema@shirazu.ac.i [School of Engineering, Shiraz University, 71348-51154 Shiraz (Iran, Islamic Republic of); Safety Research Center of Shiraz University, 71348-51154 Shiraz (Iran, Islamic Republic of); Sepanloo, Kamran [Safety Research Center of Shiraz University, 71348-51154 Shiraz (Iran, Islamic Republic of)

    2010-03-15

    Due to the significant threat of internal fires to the safe operation of nuclear reactors, presumed fire scenarios with potential hazards for loss of typical research reactor safety functions are analyzed by coupling the CFAST fire modeling and SAPHIRE probabilistic assessment software. The investigations show that fire hazards associated with electrical cable insulation, lubricating oils, diesel, electrical equipment, and carbon filters may lead to unsafe situations called core damage states. Using system-specific event trees, the occurrence frequency of core damage states after the occurrence of each possible fire scenario in critical fire compartments is evaluated. The probability that a fire ignited in a given fire compartment will burn long enough to cause the extent of damage defined by each fire scenario is calculated by means of a detection-suppression event tree. As part of the detection-suppression event tree quantification, and also to generate the necessary input data for evaluating the frequency of core damage states with the SAPHIRE 7.0 software, the CFAST fire modeling software is applied. The results provide a probabilistic measure of the quality of existing fire protection systems in order to maintain the reactor at a reasonable safety level.
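The quantification chain implied above (fire ignition frequency, a detection-suppression event tree, and a conditional core damage probability) can be sketched in a few lines. All numbers here are invented for illustration and are not results from the cited study.

```python
# Illustrative quantification for one fire compartment; every value
# below is hypothetical, not taken from the TRIGA analysis.
ignition_freq = 1.0e-2        # fires per year in the compartment
p_detect_fail = 0.05          # detection branch of the event tree
p_suppress_fail = 0.10        # suppression branch, given detection
ccdp = 2.0e-3                 # conditional core damage probability,
                              # given fire damage to the safety function

# Two-branch detection-suppression event tree: the fire reaches the
# damaging extent if detection fails, or if detection succeeds but
# suppression fails.
p_damage = p_detect_fail + (1 - p_detect_fail) * p_suppress_fail

# Core damage frequency contribution of this scenario, per year.
cdf = ignition_freq * p_damage * ccdp
```

In the actual study, CFAST supplies the fire growth timing that fixes the branch probabilities, and SAPHIRE performs the event-tree quantification; the arithmetic above only shows how those pieces combine.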

  18. All-sky radiance simulation of Megha-Tropiques SAPHIR microwave sensor using multiple scattering radiative transfer model for data assimilation applications

    Indian Academy of Sciences (India)

    A Madhulatha; John P George; E N Rajagopal

    2017-03-01

    Incorporation of cloud- and precipitation-affected radiances from microwave satellite sensors in a data assimilation system has great potential for improving the accuracy of numerical model forecasts over regions of high impact weather. By employing the multiple scattering radiative transfer model RTTOV-SCATT, all-sky radiance (clear sky and cloudy sky) simulation has been performed for the six-channel microwave SAPHIR (Sounder for Atmospheric Profiling of Humidity in the Inter-tropics by Radiometry) sensor of the Megha-Tropiques (MT) satellite. To investigate the importance of cloud-affected radiance data in severe weather conditions, all-sky radiance simulation is carried out for the severe cyclonic storm ‘Hudhud’ formed over the Bay of Bengal. Hydrometeors from NCMRWF unified model (NCUM) forecasts are used as input to the RTTOV model to simulate cloud-affected SAPHIR radiances. Horizontal and vertical distribution of all-sky simulated radiances agrees reasonably well with the SAPHIR observed radiances over cloudy regions during different stages of cyclone development. Simulated brightness temperatures of the six SAPHIR channels indicate that the three-dimensional humidity structure of the tropical cyclone is well represented in all-sky computations. Improved correlation and reduced bias and root mean square error against SAPHIR observations are apparent. Probability distribution functions reveal that all-sky simulations are able to produce the cloud-affected lower brightness temperatures associated with cloudy regions. The density scatter plots infer that all-sky radiances are more consistent with observed radiances. Correlation between different types of hydrometeors and simulated brightness temperatures at respective atmospheric levels highlights the significance of inclusion of scattering effects from different hydrometeors in simulating the cloud-affected radiances in all-sky simulations. The results are promising and suggest that the inclusion of multiple scattering

  19. All-sky radiance simulation of Megha-Tropiques SAPHIR microwave sensor using multiple scattering radiative transfer model for data assimilation applications

    Science.gov (United States)

    Madhulatha, A.; George, John P.; Rajagopal, E. N.

    2017-03-01

    Incorporation of cloud- and precipitation-affected radiances from microwave satellite sensors in data assimilation system has a great potential in improving the accuracy of numerical model forecasts over the regions of high impact weather. By employing the multiple scattering radiative transfer model RTTOV-SCATT, all-sky radiance (clear sky and cloudy sky) simulation has been performed for six channel microwave SAPHIR (Sounder for Atmospheric Profiling of Humidity in the Inter-tropics by Radiometry) sensors of Megha-Tropiques (MT) satellite. To investigate the importance of cloud-affected radiance data in severe weather conditions, all-sky radiance simulation is carried out for the severe cyclonic storm `Hudhud' formed over Bay of Bengal. Hydrometeors from NCMRWF unified model (NCUM) forecasts are used as input to the RTTOV model to simulate cloud-affected SAPHIR radiances. Horizontal and vertical distribution of all-sky simulated radiances agrees reasonably well with the SAPHIR observed radiances over cloudy regions during different stages of cyclone development. Simulated brightness temperatures of six SAPHIR channels indicate that the three dimensional humidity structure of tropical cyclone is well represented in all-sky computations. Improved correlation and reduced bias and root mean square error against SAPHIR observations are apparent. Probability distribution functions reveal that all-sky simulations are able to produce the cloud-affected lower brightness temperatures associated with cloudy regions. The density scatter plots infer that all-sky radiances are more consistent with observed radiances. Correlation between different types of hydrometeors and simulated brightness temperatures at respective atmospheric levels highlights the significance of inclusion of scattering effects from different hydrometeors in simulating the cloud-affected radiances in all-sky simulations. The results are promising and suggest that the inclusion of multiple scattering

  20. SAPHIRE 8 Software Configuration Management Plan

    Energy Technology Data Exchange (ETDEWEB)

    Curtis Smith

    2010-01-01

    The INL software developers use version control both for the formally released SAPHIRE versions and for source code. For each formal release of the software, the developers perform an acceptance test: the software must pass a suite of automated tests prior to official release. Each official release of SAPHIRE is assigned a unique version identifier. The release is bundled into a standard installation package for easy and consistent set-up by individual users. Included in the release is a list of bug fixes and new features for the current release, as well as a history of those items for past releases. In addition to the assignment of a unique version identifier for an official software release, each source code file is kept in a controlled library. Source code is the collection of all the computer instructions written by developers to create the finished product. The library is kept on a server, where back-ups are regularly made. This document describes the configuration management approach used as part of the SAPHIRE development.
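The release gate described above (a version identifier is assigned, and release proceeds only if the whole automated suite passes) can be sketched abstractly. The test names and functions below are hypothetical stand-ins, not the INL's actual acceptance suite or tooling.

```python
# Sketch of an acceptance-test release gate. The suite entries are
# (name, callable) pairs; each callable returns True on pass.
def run_suite(tests):
    """Run every test; return (all_passed, list_of_failure_names)."""
    failures = [name for name, fn in tests if not fn()]
    return (not failures, failures)

def gate_release(version, tests):
    """Release the given version only if the entire suite passes."""
    ok, failures = run_suite(tests)
    if not ok:
        return f"{version}: NOT released (failed: {', '.join(failures)})"
    return f"{version}: released"

# Hypothetical suite; both stand-in tests pass here.
suite = [("solve_cutsets", lambda: True), ("quantify", lambda: True)]
print(gate_release("SAPHIRE-8.0.1", suite))   # SAPHIRE-8.0.1: released
```

The essential property is that tagging a version identifier and passing the full suite are coupled: no green suite, no release.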

  1. OVERVIEW OF THE SAPHIRE PROBABILISTIC RISK ANALYSIS SOFTWARE

    Energy Technology Data Exchange (ETDEWEB)

    Smith, Curtis L.; Wood, Ted; Knudsen, James; Ma, Zhegang

    2016-10-01

    The Systems Analysis Programs for Hands-on Integrated Reliability Evaluations (SAPHIRE) is a software application developed for performing a complete probabilistic risk assessment (PRA) using a personal computer (PC) running the Microsoft Windows operating system. SAPHIRE Version 8 is funded by the U.S. Nuclear Regulatory Commission (NRC) and developed by the Idaho National Laboratory (INL). INL's primary role in this project is that of software developer and tester. However, INL also plays an important role in technology transfer by interfacing and supporting SAPHIRE users, who constitute a wide range of PRA practitioners from the NRC, national laboratories, the private sector, and foreign countries. In this paper, we provide an overview of the current technical capabilities found in SAPHIRE Version 8, including the user interface and enhanced solving algorithms.

  2. SAPHIR - a multi-scale, multi-resolution modeling environment targeting blood pressure regulation and fluid homeostasis.

    Science.gov (United States)

    Thomas, S; Abdulhay, Enas; Baconnier, Pierre; Fontecave, Julie; Francoise, Jean-Pierre; Guillaud, Francois; Hannaert, Patrick; Hernandez, Alfredo; Le Rolle, Virginie; Maziere, Pierre; Tahi, Fariza; Zehraoui, Farida

    2007-01-01

    We present progress on a comprehensive, modular, interactive modeling environment centered on overall regulation of blood pressure and body fluid homeostasis. We call the project SAPHIR, for "a Systems Approach for PHysiological Integration of Renal, cardiac, and respiratory functions". The project uses state-of-the-art multi-scale simulation methods. The basic core model will give succinct input-output (reduced-dimension) descriptions of all relevant organ systems and regulatory processes, and it will be modular, multi-resolution, and extensible, in the sense that detailed submodules of any process(es) can be "plugged in" to the basic model in order to explore, e.g., system-level implications of local perturbations. The goal is to keep the basic core model compact enough to ensure fast execution time (in view of eventual use in the clinic) and yet to allow elaborate detailed modules of target tissues or organs in order to focus on the problem area while maintaining the system-level regulatory compensations.

  3. New developments in the Saphire computer codes

    Energy Technology Data Exchange (ETDEWEB)

    Russell, K.D.; Wood, S.T.; Kvarfordt, K.J. [Idaho Engineering Lab., Idaho Falls, ID (United States)] [and others]

    1996-03-01

    The Systems Analysis Programs for Hands-on Integrated Reliability Evaluations (SAPHIRE) refers to a suite of computer programs that were developed to create and analyze a probabilistic risk assessment (PRA) of a nuclear power plant. Many recent enhancements to this suite of codes have been made. This presentation will provide an overview of these features and capabilities. The presentation will include a discussion of the new GEM module. This module greatly reduces and simplifies the work necessary to use the SAPHIRE code in event assessment applications. An overview of the features provided in the new Windows version will also be provided. This version is a full Windows 32-bit implementation and offers many new and exciting features. [A separate computer demonstration was held to allow interested participants to get a preview of these features.] The new capabilities that have been added since version 5.0 will be covered. Some of these major new features include the ability to store an unlimited number of basic events, gates, systems, sequences, etc.; the addition of improved reporting capabilities to allow the user to generate and "scroll" through custom reports; the addition of multi-variable importance measures; and the simplification of the user interface. Although originally designed as a PRA Level 1 suite of codes, capabilities have recently been added to SAPHIRE to allow the user to apply the code in Level 2 analyses. These features will be discussed in detail during the presentation. The modifications and capabilities added to this version of SAPHIRE significantly extend the code in many important areas. Together, these extensions represent a major step forward in PC-based risk analysis tools. This presentation provides a current status of these important PRA analysis tools.

  4. SAPHIR, how it ended

    Energy Technology Data Exchange (ETDEWEB)

    Brogli, R.; Hammer, J.; Wiezel, L.; Christen, R.; Heyck, H.; Lehmann, E. [Paul Scherrer Inst. (PSI), Villigen (Switzerland)]

    1995-10-01

    On May 16th, 1994, PSI decided to discontinue its efforts to retrofit the SAPHIR reactor for operation at 10 MW. This decision was made because the effort and time for the retrofit work in progress had proven to be more complex than was anticipated. In view of the start-up of the new spallation-neutron source SINQ in 1996, the useful operating time between the eventual restart of SAPHIR and the start-up of SINQ became less than two years, which was regarded by PSI as too short a period to warrant the large retrofit effort. Following the decision of PSI not to re-use SAPHIR as a neutron source, several options for the further utilization of the facility were open. However, none of them appeared promising in comparison with other possibilities; it was therefore decided that SAPHIR should be decommissioned. A concerted effort was initiated to consolidate the nuclear and conventional safety for the post-operational period. (author) 3 figs., 3 tab.

  5. SAPHIRE 8 Volume 2 - Technical Reference

    Energy Technology Data Exchange (ETDEWEB)

    C. L. Smith; S. T. Wood; W. J. Galyean; J. A. Schroeder; M. B. Sattison

    2011-03-01

    The Systems Analysis Programs for Hands-on Integrated Reliability Evaluations (SAPHIRE) refers to a set of computer programs that were developed to create and analyze probabilistic risk assessments (PRAs). Herein, information is provided on the principles used in the construction and operation of Version 8.0 of the SAPHIRE system. This report summarizes the fundamental mathematical concepts of sets and logic, fault trees, and probability. This volume then describes the algorithms used to construct a fault tree and to obtain the minimal cut sets. It gives the formulas used to obtain the probability of the top event from the minimal cut sets, and the formulas for probabilities that apply for various assumptions concerning reparability and mission time. It defines the measures of basic event importance that SAPHIRE can calculate. This volume gives an overview of uncertainty analysis using simple Monte Carlo sampling or Latin Hypercube sampling, and states the algorithms used by this program to generate random basic event probabilities from various distributions. Also covered are enhanced capabilities such as seismic analysis, Workspace algorithms, cut set "recovery," end state manipulation, and use of "compound events."
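The quantification step the abstract mentions (obtaining the top-event probability from the minimal cut sets) can be sketched with two standard approximations: the first-order rare-event sum and the min-cut-set upper bound. This is an illustrative sketch assuming independent basic events, not SAPHIRE's actual implementation.

```python
from math import prod

def cut_set_prob(cut_set, p):
    # Independence assumed: multiply the basic-event probabilities.
    return prod(p[e] for e in cut_set)

def top_event_prob(cut_sets, p):
    probs = [cut_set_prob(cs, p) for cs in cut_sets]
    rare_event = sum(probs)                        # first-order rare-event sum
    mcub = 1.0 - prod(1.0 - q for q in probs)      # min-cut-set upper bound
    return rare_event, mcub

# Two minimal cut sets: {A, B} and the single-event set {C}.
p = {"A": 1e-3, "B": 2e-3, "C": 5e-4}
rare, mcub = top_event_prob([{"A", "B"}, {"C"}], p)
assert abs(rare - 5.02e-4) < 1e-9   # 1e-3*2e-3 + 5e-4
assert mcub <= rare                 # MCUB never exceeds the rare-event sum
```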

  6. Can SAPHIR Instrument Onboard MEGHATROPIQUES Retrieve Hydrometeors and Rainfall Characteristics ?

    Science.gov (United States)

    Goyal, J. M.; Srinivasan, J.; Satheesh, S. K.

    2014-12-01

    MEGHATROPIQUES (MT) is an Indo-French satellite launched in 2011 with the main intention of understanding the water cycle in the tropical region, and it is a part of the GPM constellation. MADRAS was the primary instrument on board MT to estimate rainfall characteristics, but unfortunately its scanning mechanism failed, obscuring the primary goal of the mission. So an attempt has been made to retrieve rainfall and different hydrometeors using the other instrument, SAPHIR, onboard MT. The most important advantage of using MT is its orbitography, which is specifically designed for tropical regions and can reach up to 6 passes per day, more than any other satellite currently in orbit. Although SAPHIR is a humidity sounder with six channels centred around 183 GHz, it still operates in the microwave region, which directly interacts with rainfall, especially in the wing channels, and thus can pick up rainfall signatures. Initial analysis using radiative transfer models also establishes this fact. To get more conclusive results using observations, SAPHIR level 1 brightness temperature (BT) data was compared with different rainfall products, utilizing the benefits of each product. SAPHIR BT comparison with TRMM 3B42 for one pass clearly showed that channels 5 and 6 have a considerable sensitivity towards rainfall. Following this, a database of more than 300,000 raining pixels of spatially and temporally collocated 3B42 rainfall and corresponding SAPHIR BT for an entire month was created to include all kinds of rainfall events; to attain higher temporal resolution, a collocated database was also created for SAPHIR BT and rainfall from the infrared sensor on the geostationary satellite Kalpana-1. These databases were used to understand the response of the various channels of SAPHIR to different rainfall regimes. The TRMM 2A12 rainfall product was also used to identify the capabilities of SAPHIR to retrieve cloud and ice water path, which also gave significant correlation. Conclusively, we have shown that SAPHIR has

  7. Systems Analysis Programs for Hands-on Integrated Reliability Evaluations (SAPHIRE) Quality Assurance Manual

    Energy Technology Data Exchange (ETDEWEB)

    C. L. Smith; R. Nims; K. J. Kvarfordt; C. Wharton

    2008-08-01

    The Systems Analysis Programs for Hands-on Integrated Reliability Evaluations (SAPHIRE) is a software application developed for performing a complete probabilistic risk assessment using a personal computer running the Microsoft Windows operating system. SAPHIRE is primarily funded by the U.S. Nuclear Regulatory Commission (NRC). The role of the INL in this project is that of software developer and tester. This development takes place using formal software development procedures and is subject to quality assurance (QA) processes. The purpose of this document is to describe how SAPHIRE software QA is performed for Versions 6 and 7, what constitutes its parts, and the limitations of those processes.

  8. Quality assessment and assimilation of Megha-Tropiques SAPHIR radiances into WRF assimilation system

    Science.gov (United States)

    Singh, Randhir; Ojha, Satya P.; Kishtawal, C. M.; Pal, P. K.

    2013-07-01

    This study presents an initial assessment of the quality of radiances measured from SAPHIR (Sounder for Probing Vertical Profiles of Humidity) on board Megha-Tropiques (an Indo-French joint satellite), launched by the Indian Space Research Organisation on 12 October 2011. The radiances measured from SAPHIR are compared with those simulated by a radiative transfer model (RTM) using radiosonde measurements, Atmospheric Infrared Sounder retrievals, and National Centers for Environmental Prediction (NCEP) analyzed fields over the Indian subcontinent, during January to November 2012. The radiances from SAPHIR are also compared with similar measurements available from the Microwave Humidity Sounder (MHS) on board the MetOp-A and NOAA-18/19 satellites, during January to November 2012. A limited comparison is also carried out between SAPHIR measured and RTM computed radiances using European Centre for Medium-Range Weather Forecasts analyzed fields, during May and November 2012. The comparison of SAPHIR measured radiances with RTM simulated and MHS observed radiances reveals that SAPHIR observations are of good quality. After the initial assessment of the quality of the SAPHIR radiances, these radiances have been assimilated within the Weather Research and Forecasting (WRF) three-dimensional variational data assimilation system. Analysis/forecast cycling experiments with and without SAPHIR radiances are performed over the Indian region during the entire month of May 2012. The assimilation of SAPHIR radiances shows considerable improvements (with moisture analysis error reduction up to 30%) in the tropospheric analyses and forecasts of moisture, temperature, and winds when compared to NCEP analyses and radiance measurements obtained from MHS, Advanced Microwave Sounding Unit-A, and High Resolution Infrared Sounder. Assimilation of SAPHIR radiances also resulted in substantial improvement in the precipitation forecast skill when compared with satellite-derived rain. Overall

  9. Systems Analysis Programs for Hands-on Integrated Reliability Evaluations (SAPHIRE) Summary Manual

    Energy Technology Data Exchange (ETDEWEB)

    C. L. Smith

    2008-08-01

    The Systems Analysis Programs for Hands-on Integrated Reliability Evaluations (SAPHIRE) is a software application developed for performing a complete probabilistic risk assessment (PRA) using a personal computer (PC) running the Microsoft Windows operating system. SAPHIRE is primarily funded by the U.S. Nuclear Regulatory Commission (NRC) and developed by the Idaho National Laboratory (INL). INL's primary role in this project is that of software developer and tester. However, INL also plays an important role in technology transfer by interfacing and supporting SAPHIRE users, who constitute a wide range of PRA practitioners from the NRC, national laboratories, the private sector, and foreign countries. SAPHIRE can be used to model a complex system’s response to initiating events and quantify associated consequential outcome frequencies. Specifically, for nuclear power plant applications, SAPHIRE can identify important contributors to core damage (Level 1 PRA) and containment failure during a severe accident which lead to releases (Level 2 PRA). It can be used for a PRA where the reactor is at full power, low power, or at shutdown conditions. Furthermore, it can be used to analyze both internal and external initiating events and has special features for transforming an internal events model to a model for external events, such as flooding and fire analysis. It can also be used in a limited manner to quantify risk in terms of release consequences to the public and environment (Level 3 PRA). SAPHIRE also includes a separate module called the Graphical Evaluation Module (GEM). GEM is a special user interface linked to SAPHIRE that automates the SAPHIRE process steps for evaluating operational events at commercial nuclear power plants. Using GEM, an analyst can estimate the risk associated with operational events (for example, to calculate a conditional core damage probability) very efficiently and expeditiously. This report provides an overview of the functions

  10. Investigation of Monoterpene Degradation in the Atmospheric Simulation Chamber SAPHIR

    Science.gov (United States)

    Kaminski, Martin; Acir, Ismail-Hakki; Bohn, Birger; Brauers, Theo; Dorn, Hans-Peter; Fuchs, Hendrik; Haeseler, Rolf; Hofzumahaus, Andreas; Li, Xin; Lutz, Anna; Nehr, Sascha; Rohrer, Franz; Tillmann, Ralf; Wegener, Robert; Wahner, Andreas

    2013-04-01

    Monoterpenes are the volatile organic compound (VOC) species with the highest emission rates on a global scale besides isoprene. In the atmosphere these compounds are rapidly oxidized. Due to their high reactivity towards hydroxyl radicals (OH), they determine the radical chemistry under biogenic conditions if the monoterpene concentration is higher than the isoprene concentration. Recent field campaigns showed large discrepancies between measured and modeled OH concentrations at low NOx conditions together with a high reactivity of VOCs towards OH (Hofzumahaus et al. 2009), especially in tropical forest areas (Lelieveld et al. 2008). These discrepancies were partly explained by new reaction pathways in the isoprene degradation mechanism (Whalley et al. 2011). However, even an additional recycling rate of 2.7 was insufficient to explain the measured OH concentration. So other VOC species could be involved in nonclassical OH recycling. Since the discrepancies in OH also occurred in the morning hours, when the OH chemistry was dominated mainly by monoterpenes, it was assumed that the degradation of monoterpenes may also lead to OH recycling in the absence of NO (Whalley et al. 2011). The photochemical degradation of four monoterpene species was studied under high VOC reactivity and low NOx conditions in a dedicated series of experiments in the atmospheric simulation chamber SAPHIR from August to September 2012, to overcome the lack of mechanistic information on monoterpene degradation schemes. α-Pinene, β-pinene, and limonene were chosen as the most prominent representatives of this substance class. Moreover, the degradation of myrcene was investigated due to its structural analogy to isoprene.
The SAPHIR chamber was equipped with instrumentation to measure all important OH precursors (O3, HONO, HCHO), the parent VOC and their main oxidation products, radicals (OH, HO2, RO2), the total OH reactivity, and photolysis frequencies to investigate the degradation mechanism of monoterpenes in

  11. Assimilation of SAPHIR radiance: impact on hyperspectral radiances in 4D-VAR

    Science.gov (United States)

    Indira Rani, S.; Doherty, Amy; Atkinson, Nigel; Bell, William; Newman, Stuart; Renshaw, Richard; George, John P.; Rajagopal, E. N.

    2016-04-01

    Assimilation of a new observation dataset in an NWP system may affect the quality of an existing observation dataset against the model background (short forecast), which in turn influences the use of the existing observations in the NWP system. The effect of one dataset on the use of another can be quantified as positive, negative, or neutral. The impact of adding a new dataset is defined as positive if the number of assimilated observations of an existing observation type increases, and the bias and standard deviation decrease, compared to the control experiment (without the new dataset). Recently a new dataset, Megha-Tropiques SAPHIR radiances, which provides atmospheric humidity information, was added to the Unified Model 4D-VAR assimilation system. In this paper we discuss the impact of SAPHIR on the assimilation of hyperspectral radiances such as AIRS, IASI, and CrIS. Though SAPHIR is a microwave instrument, its impact can be clearly seen in the use of hyperspectral radiances in the 4D-VAR data assimilation system, in addition to other microwave and infrared observations. SAPHIR assimilation decreased the standard deviation for spectral channels in the wavenumber range 650-1600 cm-1 in all three hyperspectral radiance types. A similar impact on the hyperspectral radiances can be seen due to the assimilation of other microwave radiances, such as those from the AMSR2 and SSMIS imagers.
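The positive/negative impact definition given above (more assimilated observations, lower bias, lower standard deviation of the departures) can be expressed as a small diagnostic. This is a hypothetical sketch of the bookkeeping, not Met Office code; `impact` and its inputs (lists of observation-minus-background departures) are illustrative names.

```python
from statistics import mean, stdev

def impact(control_omb, exp_omb):
    """Compare O-B departure statistics without/with the new dataset.

    Negative bias/stdev changes and a positive count change together
    indicate a positive impact in the sense defined in the abstract.
    """
    return {
        "count_change": len(exp_omb) - len(control_omb),
        "bias_change": abs(mean(exp_omb)) - abs(mean(control_omb)),
        "stdev_change": stdev(exp_omb) - stdev(control_omb),
    }

# Usage: the experiment assimilates more obs with smaller departures.
ctrl = [0.5, -0.3, 0.8, 0.1]
exp = [0.2, -0.1, 0.3, 0.0, 0.1]
d = impact(ctrl, exp)
assert d["count_change"] > 0 and d["bias_change"] < 0 and d["stdev_change"] < 0
```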

  12. Intercalibrating and Validating Saphir and Atms Observations

    Science.gov (United States)

    Moradi, I.; Ferraro, R. R.

    2014-12-01

    We present the results of evaluating observations from microwave instruments aboard the Suomi National Polar-orbiting Partnership (ATMS instrument) and Megha-Tropiques (SAPHIR instrument) satellites. ATMS is a cross-track microwave sounder currently flying on the Suomi National Polar-orbiting Partnership (S-NPP) satellite, launched in October 2011, which is in a Sun-synchronous orbit with an ascending equatorial crossing time of 01:30 a.m. Megha-Tropiques, launched in October 2011, is a low-inclination satellite, meaning that it only visits the tropical band between 30 S and 30 N. SAPHIR is a microwave humidity sounder with 6 channels operating at frequencies close to the water vapor absorption line at 183 GHz. Megha-Tropiques revisits the tropical regions several times a day and provides a great capability for inter-calibrating its observations with those of the polar-orbiting satellites. The study includes inter-comparison and inter-calibration of observations of similar channels from the two instruments, evaluation of the satellite data using high-quality radiosonde data from the Atmospheric Radiation Measurement Program and GPS Radio Occultation observations from the COSMIC mission, as well as geolocation error correction. The results of this study are valuable for generating climate data records from these instruments as well as for extending current climate data records from similar instruments, such as AMSU-B and MHS, to the ATMS and SAPHIR instruments.

  13. Systems Analysis Programs for Hands-On Integrated Reliability Evaluations (SAPHIRE) Technical Reference

    Energy Technology Data Exchange (ETDEWEB)

    C. L. Smith; W. J. Galyean; S. T. Beck

    2008-08-01

    The Systems Analysis Programs for Hands-on Integrated Reliability Evaluations (SAPHIRE) is a software application developed for performing a complete probabilistic risk assessment (PRA) using a personal computer (PC) running the Microsoft Windows operating system. Herein, information is provided on the principles used in the construction and operation of Versions 6.0 and 7.0 of the SAPHIRE system. This report summarizes the fundamental mathematical concepts of sets and logic, fault trees, and probability. This volume then describes the algorithms used to construct a fault tree and to obtain the minimal cut sets. It gives the formulas used to obtain the probability of the top event from the minimal cut sets, and the formulas for probabilities that apply for various assumptions concerning reparability and mission time. It defines the measures of basic event importance that SAPHIRE can calculate. This volume gives an overview of uncertainty analysis using simple Monte Carlo sampling or Latin Hypercube sampling, and states the algorithms used by this program to generate random basic event probabilities from various distributions. Also covered are enhanced capabilities such as seismic analysis, cut set "recovery," end state manipulation, and use of "compound events."
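The Latin Hypercube sampling mentioned for uncertainty analysis works by splitting [0, 1) into N equal strata, drawing one uniform point per stratum, shuffling the draws, and mapping them through an inverse CDF of the desired distribution. A minimal sketch, not SAPHIRE's own sampler:

```python
import random

def latin_hypercube(n, inverse_cdf, rng=random.Random(0)):
    """Return n samples, exactly one from each probability stratum."""
    strata = [(i + rng.random()) / n for i in range(n)]  # one point per stratum
    rng.shuffle(strata)                                  # decorrelate ordering
    return [inverse_cdf(u) for u in strata]

# With the identity inverse CDF the samples are stratified uniforms:
samples = latin_hypercube(10, lambda u: u)
# Exactly one sample falls in each tenth of [0, 1):
assert sorted(int(10 * s) for s in samples) == list(range(10))
```

In practice `inverse_cdf` would be the quantile function of a basic-event uncertainty distribution (e.g. a lognormal), which is what guarantees the stratification carries over to the sampled probabilities.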

  14. Systems Analysis Programs for Hands-On Integrated Reliability Evaluations (SAPHIRE) Technical Reference Manual

    Energy Technology Data Exchange (ETDEWEB)

    C. L. Smith; W. J. Galyean; S. T. Beck

    2006-07-01

    The Systems Analysis Programs for Hands-on Integrated Reliability Evaluations (SAPHIRE) is a software application developed for performing a complete probabilistic risk assessment (PRA) using a personal computer (PC) running the Microsoft Windows operating system. Herein, information is provided on the principles used in the construction and operation of Versions 6.0 and 7.0 of the SAPHIRE system. This report summarizes the fundamental mathematical concepts of sets and logic, fault trees, and probability. This volume then describes the algorithms used to construct a fault tree and to obtain the minimal cut sets. It gives the formulas used to obtain the probability of the top event from the minimal cut sets, and the formulas for probabilities that apply for various assumptions concerning reparability and mission time. It defines the measures of basic event importance that SAPHIRE can calculate. This volume gives an overview of uncertainty analysis using simple Monte Carlo sampling or Latin Hypercube sampling, and states the algorithms used by this program to generate random basic event probabilities from various distributions. Also covered are enhanced capabilities such as seismic analysis, cut set "recovery," end state manipulation, and use of "compound events."
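Among the basic-event importance measures this reference defines, the Fussell-Vesely measure is perhaps the most common: the fraction of the top-event probability (here in the rare-event approximation) contributed by cut sets containing the event. An illustrative sketch assuming independent basic events, not SAPHIRE's implementation:

```python
from math import prod

def fussell_vesely(event, cut_sets, p):
    """Fraction of top-event probability carried by cut sets containing event."""
    cs_prob = lambda cs: prod(p[e] for e in cs)
    total = sum(cs_prob(cs) for cs in cut_sets)
    with_event = sum(cs_prob(cs) for cs in cut_sets if event in cs)
    return with_event / total

p = {"A": 1e-3, "B": 2e-3, "C": 5e-4}
cut_sets = [{"A", "B"}, {"C"}]
fv_c = fussell_vesely("C", cut_sets, p)
# C's single-event cut set dominates: 5e-4 / (2e-6 + 5e-4)
assert 0.99 < fv_c < 1.0
```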

  15. A Functional Version of the ARCH Model

    CERN Document Server

    Hormann, Siegfried; Reeder, Ron

    2011-01-01

    Improvements in data acquisition and processing techniques have led to an almost continuous flow of information for financial data. High resolution tick data are available and can be quite conveniently described by a continuous time process. It is therefore natural to ask for possible extensions of financial time series models to a functional setup. In this paper we propose a functional version of the popular ARCH model. We establish conditions for the existence of a strictly stationary solution, derive weak dependence and moment conditions, show consistency of the estimators, and perform a small empirical study demonstrating how our model matches real data.
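For context, the scalar ARCH(1) model that the paper generalizes to a functional setting is r_t = sigma_t * eps_t with sigma_t^2 = omega + alpha * r_{t-1}^2. A minimal simulation sketch (parameter values are illustrative, and this is the classical scalar model, not the paper's functional version):

```python
import random

def simulate_arch1(n, omega=0.1, alpha=0.5, seed=0):
    """Simulate n steps of ARCH(1): variance feeds on the squared last return."""
    rng = random.Random(seed)
    r, out = 0.0, []
    for _ in range(n):
        sigma2 = omega + alpha * r * r       # conditional variance
        r = sigma2 ** 0.5 * rng.gauss(0, 1)  # Gaussian innovation
        out.append(r)
    return out

series = simulate_arch1(1000)
assert len(series) == 1000
# With alpha < 1 a strictly stationary solution exists; the sample
# variance should sit near omega / (1 - alpha) = 0.2.
var = sum(x * x for x in series) / len(series)
assert 0.05 < var < 0.5
```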

  16. Strangeness Photoproduction with the Saphir Detector

    CERN Document Server

    Menze, D W

    1997-01-01

    Statistically improved data on total cross sections, angular distributions of differential cross sections, and hyperon recoil polarizations for the reactions γp → K⁺Λ and γp → K⁺Σ⁰ have been collected with the SAPHIR detector at photon energies between threshold and 2.0 GeV. Here, total cross section data up to 1.5 GeV are presented. The opposite sign of the Λ and Σ polarizations and the change of sign between the forward and backward directions were confirmed with higher statistics. A steep threshold behaviour of the K⁺Λ total cross section is observed.

  17. A version management model of PDM system and its realization

    Institute of Scientific and Technical Information of China (English)

    ZHONG Shi-sheng; LI Tao

    2008-01-01

    Based on the key function of version management in a PDM system, this paper discusses the function and realization of version management and the transitions of version states within a workflow. A directed acyclic graph is used to describe the version model. Three storage modes of the directed acyclic graph version model (in the database, the bumping block, and the PDM working memory) are presented, and the conversion principle among these three modes is given. The study indicates that building a dynamic product structure configuration model based on versions is the key to resolving the problem. Thus a version model of a single product object is built. Then the version management model in product structure configuration is built, and the application of version management in a PDM system is presented as a case.
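A directed acyclic graph of versions, as the abstract describes, lets each version record several predecessors, so both branching and merging are representable (a tree could not express merges). The sketch below is illustrative; the class, state names, and `promote` workflow are assumptions, not the paper's PDM implementation.

```python
class VersionNode:
    """One version in a directed acyclic version graph."""

    def __init__(self, vid, parents=(), state="working"):
        self.vid = vid
        self.parents = list(parents)  # edges toward earlier versions
        self.state = state            # workflow: working -> checked -> released

    def promote(self):
        """Advance the version state along the workflow, saturating at the end."""
        order = ["working", "checked", "released"]
        i = order.index(self.state)
        if i < len(order) - 1:
            self.state = order[i + 1]

# Usage: branch from A.1, then merge, yielding a DAG rather than a tree.
v1 = VersionNode("A.1")
v2 = VersionNode("A.2", parents=[v1])           # linear successor
v3 = VersionNode("A.3", parents=[v1])           # branch from A.1
merged = VersionNode("A.4", parents=[v2, v3])   # merge node: two parents
merged.promote()
assert merged.state == "checked" and len(merged.parents) == 2
```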

  18. Forsmark - site descriptive model version 0

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    2002-10-01

    During 2002, the Swedish Nuclear Fuel and Waste Management Company (SKB) is starting investigations at two potential sites for a deep repository in the Precambrian basement of the Fennoscandian Shield. The present report concerns one of those sites, Forsmark, which lies in the municipality of Oesthammar, on the east coast of Sweden, about 150 kilometres north of Stockholm. The site description should present all collected data and interpreted parameters of importance for the overall scientific understanding of the site, for the technical design and environmental impact assessment of the deep repository, and for the assessment of long-term safety. The site description will have two main components: a written synthesis of the site, summarising the current state of knowledge, as documented in the databases containing the primary data from the site investigations, and one or several site descriptive models, in which the collected information is interpreted and presented in a form which can be used in numerical models for rock engineering, environmental impact and long-term safety assessments. The site descriptive models are devised and stepwise updated as the site investigations proceed. The point of departure for this process is the regional site descriptive model, version 0, which is the subject of the present report. Version 0 is developed out of the information available at the start of the site investigation. This information, with the exception of data from tunnels and drill holes at the sites of the Forsmark nuclear reactors and the underground low-middle active radioactive waste storage facility, SFR, is mainly 2D in nature (surface data), and is general and regional, rather than site-specific, in content. For this reason, the Forsmark site descriptive model, version 0, as detailed in the present report, has been developed at a regional scale. 
It covers a rectangular area, 15 km in a southwest-northeast and 11 km in a northwest-southeast direction, around the

  19. Simpevarp - site descriptive model version 0

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    2002-11-01

    During 2002, SKB is starting detailed investigations at two potential sites for a deep repository in the Precambrian rocks of the Fennoscandian Shield. The present report concerns one of those sites, Simpevarp, which lies in the municipality of Oskarshamn, on the southeast coast of Sweden, about 250 kilometres south of Stockholm. The site description will have two main components: a written synthesis of the site, summarising the current state of knowledge, as documented in the databases containing the primary data from the site investigations, and one or several site descriptive models, in which the collected information is interpreted and presented in a form which can be used in numerical models for rock engineering, environmental impact and long-term safety assessments. SKB maintains two main databases at the present time, a site characterisation database called SICADA and a geographic information system called SKB GIS. The site descriptive model will be developed and presented with the aid of the SKB GIS capabilities, and with SKBs Rock Visualisation System (RVS), which is also linked to SICADA. The version 0 model forms an important framework for subsequent model versions, which are developed successively, as new information from the site investigations becomes available. Version 0 is developed out of the information available at the start of the site investigation. In the case of Simpevarp, this is essentially the information which was compiled for the Oskarshamn feasibility study, which led to the choice of that area as a favourable object for further study, together with information collected since its completion. This information, with the exception of the extensive data base from the nearby Aespoe Hard Rock Laboratory, is mainly 2D in nature (surface data), and is general and regional, rather than site-specific, in content. 
Against this background, the present report consists of the following components: an overview of the present content of the databases

  20. Retrieval and Validation of Upper Tropospheric Humidity from SAPHIR aboard Megha-Tropiques

    Science.gov (United States)

    Mathew, Nizy; Krishna Moorthy, K.; Raju C, Suresh; Pillai Renju, Ramachandran; Oommen John, Viju

    Upper tropospheric humidity (UTH) has been derived from brightness temperatures of the SAPHIR payload aboard the Megha-Tropiques (MT) mission. The channels of SAPHIR are very close to the water vapor absorption peak at 183.31 GHz. The first three channels, at 183.31±0.2 GHz, 183.31±1.1 GHz, and 183.31±2.8 GHz, are used for upper tropospheric humidity (UTH) studies. The channel at 183.31±0.2 GHz enables retrieval of humidity up to the highest altitude possible with present nadir-looking microwave humidity sounders. Transformation coefficients for the first three channels at all incidence angles have been derived using simulated brightness temperatures and Jacobians, with the Chevallier dataset as input to the radiative transfer model ARTS. These coefficients are used to convert brightness temperatures from the different channels to upper tropospheric humidity. A stringent deep convective cloud screening has been done using the brightness temperatures of SAPHIR itself. The retrieved UTH has been validated against the Jacobian-weighted UTH derived from collocated radiosonde observations and also against humidity profiles derived from ground-based microwave radiometer data. UTH variation over the inter-tropical region on a global basis has been studied for one year, taking advantage of the first humidity product with high spatial and temporal resolution over the tropical belt, unbiased by the specific local times of the satellite pass. These datasets have been used to address the seasonal and spatial variability of humidity in the tropical upper troposphere and the humidity variability during the Indian monsoon. The details of the MT-SAPHIR characteristics, methodology, and results will be presented.
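The "transformation coefficients" mentioned above typically enter through the standard linear relation between a sounding channel's brightness temperature Tb and the logarithm of upper-tropospheric humidity, ln(UTH) = a + b·Tb (the Soden-Bretherton form). A hedged sketch follows; the coefficient values are purely illustrative placeholders, not the ones derived from the ARTS simulations in this work.

```python
from math import exp

def uth_from_tb(tb, a=31.5, b=-0.12):
    """Map a brightness temperature (K) to UTH (%) via ln(UTH) = a + b*Tb.

    a, b are illustrative placeholder coefficients (channel- and
    incidence-angle-dependent in practice); output clipped to [0, 100].
    """
    return min(100.0, max(0.0, exp(a + b * tb)))

# Colder brightness temperatures indicate a moister upper troposphere:
assert uth_from_tb(230.0) > uth_from_tb(250.0)
assert 0.0 <= uth_from_tb(200.0) <= 100.0
```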

Against this background, the present report consists of the following components: an overview of the present content of the databases

  3. Meson Properties in a renormalizable version of the NJL model

    CERN Document Server

    Mota, A L; Hiller, B; Walliser, H; Mota, Andre L.; Hiller, Brigitte; Walliser, Hans

    1999-01-01

    In the present paper we implement a non-trivial and renormalizable extension of the NJL model. We discuss the advantages and shortcomings of this extended model compared to a usual effective Pauli-Villars regularized version. We show that both versions become equivalent in the case of a large cutoff. Various relevant mesonic observables are calculated and compared.

  4. Industrial Waste Management Evaluation Model Version 3.1

    Science.gov (United States)

IWEM is a screening-level ground water model designed to simulate contaminant fate and transport. IWEM v3.1 is the latest version of the IWEM software, which includes additional tools to evaluate the beneficial use of industrial materials.

  5. GCFM Users Guide Revision for Model Version 5.0

    Energy Technology Data Exchange (ETDEWEB)

    Keimig, Mark A.; Blake, Coleman

    1981-08-10

    This paper documents alterations made to the MITRE/DOE Geothermal Cash Flow Model (GCFM) in the period of September 1980 through September 1981. Version 4.0 of GCFM was installed on the computer at the DOE San Francisco Operations Office in August 1980. This Version has also been distributed to about a dozen geothermal industry firms, for examination and potential use. During late 1980 and 1981, a few errors detected in the Version 4.0 code were corrected, resulting in Version 4.1. If you are currently using GCFM Version 4.0, it is suggested that you make the changes to your code that are described in Section 2.0. User's manual changes listed in Section 3.0 and Section 4.0 should then also be made.

  6. Impact of Megha-Tropiques SAPHIR radiance assimilation on the simulation of tropical cyclones over Bay of Bengal

    Science.gov (United States)

    Dhanya, M.; Gopalakrishnan, Deepak; Chandrasekar, Anantharaman; Singh, Sanjeev Kumar; Prasad, V. S.

    2016-05-01

The impact of SAPHIR radiance assimilation on the simulation of tropical cyclones over the Indian region has been investigated using the Weather Research and Forecasting (WRF) model. Three cyclones that formed over the Bay of Bengal have been considered in the present study. The assimilation methodology used here is the three-dimensional variational (3DVar) scheme within the WRF model. With initial and boundary conditions from Global Forecasting System (GFS) analyses from the National Centers for Environmental Prediction (NCEP), a control run (CTRL) without assimilation of any data and a 3DVar run with assimilation of SAPHIR radiance have been performed. Both model simulations have been compared with observations from the India Meteorological Department (IMD), the Tropical Rainfall Measurement Mission (TRMM), and analysis fields from GFS. Detailed analysis reveals that SAPHIR radiance assimilation has led to significant improvement in the simulation of all three cyclones in terms of cyclone track, intensity, and accumulated rainfall. The warm core structure and relative vorticity profile of each cyclone simulated by the 3DVar run are found to be closer to the GFS analyses than those of the CTRL run.
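The 3DVar scheme mentioned above minimizes a cost function that weighs a background state against observations. A minimal sketch of that idea with an invented two-variable state and a linear observation operator (not the WRF-3DVar implementation, which minimizes the cost function iteratively in a transformed space):

```python
import numpy as np

# Minimal 3DVar sketch: minimize
#   J(x) = 0.5*(x-xb)^T B^-1 (x-xb) + 0.5*(y-Hx)^T R^-1 (y-Hx)
# For a linear H the minimizer is xa = xb + K(y - H xb), with gain
#   K = B H^T (H B H^T + R)^-1.
# All values below are illustrative, not WRF/SAPHIR quantities.

xb = np.array([280.0, 0.50])   # background state (e.g. T, q)
B = np.diag([1.0, 0.01])       # background error covariance
H = np.array([[1.0, 0.0]])     # observation operator: observe T only
R = np.array([[0.25]])         # observation error covariance
y = np.array([281.0])          # observation

K = B @ H.T @ np.linalg.inv(H @ B @ H.T + R)
xa = xb + K @ (y - H @ xb)
print(xa)  # analysis pulled toward the observation in the observed variable
```

The unobserved variable is untouched here only because B is diagonal; in a real system the background covariance spreads the observation's influence to other variables.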

  7. Model Adequacy Analysis of Matching Record Versions in Nosql Databases

    Directory of Open Access Journals (Sweden)

    E. V. Tsviashchenko

    2015-01-01

The article investigates a model of matching record versions; the goal of this work is to analyse the model's adequacy. The model allows estimating the distribution of a user's time to process the record versions and the distribution of the record-version count. The second variant of the model was used, in which the time for a client to process record versions depends explicitly on the number of updates performed by the other users between the sequential updates performed by the current client. To demonstrate the model's adequacy, a real experiment was conducted in a cloud cluster. The cluster contains 10 virtual nodes provided by DigitalOcean, running Ubuntu Server 14.04 as the operating system (OS). The NoSQL system Riak was chosen for the experiments. Riak versions 2.0 and later provide the "dotted version vectors" (DVV) option, an extension of the classic vector clock. Their use guarantees that the number of versions simultaneously stored in the DB will not exceed the number of clients operating in parallel on a record, which is very important for the experiments. The application was developed using the Java library provided by Riak, and the processes run directly on the nodes. Two records were used in the experiment: Z, the record whose versions are handled by clients, and RZ, a service record containing record-update counters. The application algorithm can be briefly described as follows: every client reads the versions of record Z, processes its updates using the RZ record counters, and saves the treated record in the database while old versions are deleted from the DB. Then the client rereads the RZ record and increments the update counters for the other clients. After that, the client rereads the Z record, saves the necessary statistics, and records the results of processing. In the case of a conflict arising from simultaneous updates of the RZ record, the client obtains all versions of that
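The sibling-version behaviour that the experiment depends on can be illustrated with plain vector clocks, of which Riak's dotted version vectors are an extension. A minimal sketch, not Riak client code; the actor names and counters are invented:

```python
# Vector-clock comparison sketch: a store keeps two versions as siblings
# exactly when neither version's clock dominates the other's.

def descends(a, b):
    """True if version vector `a` dominates (is a successor of) `b`."""
    return all(a.get(actor, 0) >= n for actor, n in b.items())

def concurrent(a, b):
    """Two versions conflict iff neither descends from the other."""
    return not descends(a, b) and not descends(b, a)

v1 = {"clientA": 2, "clientB": 1}
v2 = {"clientA": 1, "clientB": 2}
v3 = {"clientA": 2, "clientB": 2}

print(concurrent(v1, v2))  # True  - siblings; both versions are kept
print(concurrent(v3, v1))  # False - v3 supersedes v1
```

A `concurrent` result of True is the situation the clients in the experiment must resolve by reading all stored versions of Z and merging them.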

  8. Solar Advisor Model User Guide for Version 2.0

    Energy Technology Data Exchange (ETDEWEB)

    Gilman, P.; Blair, N.; Mehos, M.; Christensen, C.; Janzou, S.; Cameron, C.

    2008-08-01

    The Solar Advisor Model (SAM) provides a consistent framework for analyzing and comparing power system costs and performance across the range of solar technologies and markets, from photovoltaic systems for residential and commercial markets to concentrating solar power and large photovoltaic systems for utility markets. This manual describes Version 2.0 of the software, which can model photovoltaic and concentrating solar power technologies for electric applications for several markets. The current version of the Solar Advisor Model does not model solar heating and lighting technologies.

  9. METAPHOR (version 1): Users guide. [performability modeling

    Science.gov (United States)

    Furchtgott, D. G.

    1979-01-01

    General information concerning METAPHOR, an interactive software package to facilitate performability modeling and evaluation, is presented. Example systems are studied and their performabilities are calculated. Each available METAPHOR command and array generator is described. Complete METAPHOR sessions are included.

  10. Renormalized versions of the massless Thirring model

    CERN Document Server

    Casana, R

    2003-01-01

We present a non-perturbative study of the (1+1)-dimensional massless Thirring model using path integral methods. We consider two versions of the model: one with a local gauge symmetry that is implemented at the quantum level, and one without this symmetry. We make a detailed analysis of their UV divergence structure, and non-perturbative regularization and renormalization processes are proposed.

  11. An Open Platform for Processing IFC Model Versions

    Institute of Scientific and Technical Information of China (English)

    Mohamed Nour; Karl Beucke

    2008-01-01

The IFC initiative from the International Alliance for Interoperability has been developing since the mid-nineties through several versions. This paper addresses the problem of binding the growing number of IFC versions and their EXPRESS definitions to programming environments (Java and .NET). The solution developed in this paper automates the process of generating early-binding classes whenever a new version of the IFC model is released. Furthermore, a runtime instantiation of the generated early-binding classes takes place by importing IFC-STEP ISO 10303-P21 models. The user can navigate the IFC STEP model with reference to the defining EXPRESS schema, and modify, delete, and create new instances. These functionalities are considered to be a basis for any IFC-based implementation. The approach enables researchers to experiment with the IFC model independently of any software application.
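The early-binding generation step can be illustrated with a toy schema-to-class translation. The EXPRESS fragment and entity below are simplified inventions, not the real IFC schema, and this is not the paper's Java/.NET toolchain:

```python
import re

# Toy sketch of early-binding generation: read a (heavily simplified)
# EXPRESS-like entity definition and emit a class at run time.

schema = """
ENTITY IfcWall;
  GlobalId : STRING;
  Height : REAL;
END_ENTITY;
"""

def generate_binding(text):
    name = re.search(r"ENTITY\s+(\w+);", text).group(1)
    attrs = re.findall(r"(\w+)\s*:\s*\w+;", text)
    def __init__(self, **kw):
        for a in attrs:
            setattr(self, a, kw.get(a))
    return type(name, (object,), {"__init__": __init__, "attributes": attrs})

IfcWall = generate_binding(schema)
wall = IfcWall(GlobalId="2O2Fr$t4X7Zf8NOew3FLOH", Height=3.0)
print(type(wall).__name__, wall.Height)  # IfcWall 3.0
```

Regenerating bindings this way whenever a schema version changes is the idea the paper automates for the full EXPRESS language.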

  12. SAPHIRE 8 Software Independent Verification and Validation Plan

    Energy Technology Data Exchange (ETDEWEB)

    Rae J. Nims; Kent M. Norris

    2010-02-01

    SAPHIRE 8 is being developed with a phased or cyclic iterative rapid application development methodology. Due to this approach, a similar approach is being taken for the IV&V activities on each vital software object. The IV&V plan is structured around NUREG/BR-0167, “Software Quality Assurance Program and Guidelines,” February 1993. The Nuclear Regulatory Research Office Instruction No.: PRM-12, “Software Quality Assurance for RES Sponsored Codes,” March 26, 2007 specifies that RES-sponsored software is to be evaluated against NUREG/BR-0167. Per the guidance in NUREG/BR-0167, SAPHIRE is classified as “Level 1.” Level 1 software corresponds to technical application software used in a safety decision.

  13. Correction, improvement and model verification of CARE 3, version 3

    Science.gov (United States)

    Rose, D. M.; Manke, J. W.; Altschul, R. E.; Nelson, D. L.

    1987-01-01

An independent verification of the CARE 3 mathematical model and computer code was conducted and reported in NASA Contractor Report 166096, Review and Verification of CARE 3 Mathematical Model and Code: Interim Report. The study uncovered some implementation errors that were corrected and are reported in this document. The corrected CARE 3 program is called version 4. This document, Correction, Improvement, and Model Verification of CARE 3, Version 3, was written in April 1984. It is being published now because it has been determined to contain a more accurate representation of CARE 3 than the preceding document of April 1983. This edition supersedes NASA-CR-166122, 'Correction and Improvement of CARE 3, Version 3,' April 1983.

  14. Independent Verification and Validation Of SAPHIRE 8 System Test Plan Project Number: N6423 U.S. Nuclear Regulatory Commission

    Energy Technology Data Exchange (ETDEWEB)

    Kent Norris

    2010-02-01

    The purpose of the Independent Verification and Validation (IV&V) role in the evaluation of the SAPHIRE System Test Plan is to assess the approach to be taken for intended testing activities associated with the SAPHIRE software product. The IV&V team began this endeavor after the software engineering and software development of SAPHIRE had already been in production.

  15. Smart Grid Interoperability Maturity Model Beta Version

    Energy Technology Data Exchange (ETDEWEB)

    Widergren, Steven E.; Drummond, R.; Giroti, Tony; Houseman, Doug; Knight, Mark; Levinson, Alex; longcore, Wayne; Lowe, Randy; Mater, J.; Oliver, Terry V.; Slack, Phil; Tolk, Andreas; Montgomery, Austin

    2011-12-02

    The GridWise Architecture Council was formed by the U.S. Department of Energy to promote and enable interoperability among the many entities that interact with the electric power system. This balanced team of industry representatives proposes principles for the development of interoperability concepts and standards. The Council provides industry guidance and tools that make it an available resource for smart grid implementations. In the spirit of advancing interoperability of an ecosystem of smart grid devices and systems, this document presents a model for evaluating the maturity of the artifacts and processes that specify the agreement of parties to collaborate across an information exchange interface. You are expected to have a solid understanding of large, complex system integration concepts and experience in dealing with software component interoperation. Those without this technical background should read the Executive Summary for a description of the purpose and contents of the document. Other documents, such as checklists, guides, and whitepapers, exist for targeted purposes and audiences. Please see the www.gridwiseac.org website for more products of the Council that may be of interest to you.

  16. AISIM (Automated Interactive Simulation Modeling System) VAX Version Training Manual.

    Science.gov (United States)

    1985-02-01

[OCR residue from the scanned DTIC report form. Recoverable information: this document is the training manual for the Automated Interactive Simulation Modeling System (AISIM), VAX version, prepared by Hughes Aircraft Co., Ground Systems Group, Fullerton, CA; an appendix contains a simulation report for a working example.]

  17. IDC Use Case Model Survey Version 1.1.

    Energy Technology Data Exchange (ETDEWEB)

    Harris, James Mark [Sandia National Laboratories (SNL-NM), Albuquerque, NM (United States); Carr, Dorthe B. [Sandia National Laboratories (SNL-NM), Albuquerque, NM (United States)

    2015-02-01

This document contains the brief descriptions for the actors and use cases contained in the IDC Use Case Model.

REVISIONS
Version | Date    | Author/Team                        | Revision Description        | Authorized by
V1.0    | 12/2014 | SNL IDC Reengineering Project Team | Initial delivery            | M. Harris
V1.1    | 2/2015  | SNL IDC Reengineering Project Team | Iteration I2 review comments | M. Harris

  18. IDC Use Case Model Survey Version 1.0.

    Energy Technology Data Exchange (ETDEWEB)

    Carr, Dorthe B.; Harris, James M.

    2014-12-01

This document contains the brief descriptions for the actors and use cases contained in the IDC Use Case Model Survey.

REVISIONS
Version | Date    | Author/Team                    | Revision Description | Authorized by
V1.0    | 12/2014 | IDC Re-engineering Project Team | Initial delivery     | M. Harris

  19. Systems Analysis Programs for Hands-on Integrated Reliability Evaluations (SAPHIRE) Tutorial

    Energy Technology Data Exchange (ETDEWEB)

    C. L. Smith; S. T. Beck; S. T. Wood

    2008-08-01

The Systems Analysis Programs for Hands-on Integrated Reliability Evaluations (SAPHIRE) refers to a set of computer programs that were developed to create and analyze probabilistic risk assessments (PRAs). This volume is the tutorial manual for the SAPHIRE system. In this document, a series of lessons is provided that guides the user through the basic steps common to most analyses performed with SAPHIRE. The tutorial is divided into two major sections covering both basic and advanced features. The section covering basic topics contains lessons that lead the reader through the development of a hypothetical probabilistic problem involving a vehicle accident, highlighting the program's most fundamental features. The advanced features section contains additional lessons that expand on the fundamental analysis features of SAPHIRE and provide insights into more complex analysis techniques. Together, these two elements provide an overview of the operation and capabilities of the SAPHIRE software.

  20. COPAT - towards a recommended model version of COSMO-CLM

    Science.gov (United States)

    Anders, Ivonne; Brienen, Susanne; Eduardo, Bucchignani; Ferrone, Andrew; Geyer, Beate; Keuler, Klaus; Lüthi, Daniel; Mertens, Mariano; Panitz, Hans-Jürgen; Saeed, Sajjad; Schulz, Jan-Peter; Wouters, Hendrik

    2016-04-01

The regional climate model COSMO-CLM is a community model (www.clm-community.com). In close collaboration with the COSMO consortium, the model is further developed by the community members for climate applications. Among the tasks of the community are to recommend a model version and to evaluate the model's performance. COPAT (Coordinated Parameter Testing) is a voluntary community effort in which different institutions carry out model simulations systematically, in order to test new model options and to find a satisfactory model setup for hydrostatic climate simulations over Europe. We will present the COPAT method used to arrive at the latest recommended model version of COSMO-CLM (COSMO5.0_clm6). The simulations cover the EURO-CORDEX domain at two spatial resolutions, 0.44° and 0.11°, and use ERA-Interim forcing data for the period 1979-2000. Interpolated forcing data were prepared once to ensure that all participating groups used identical forcing. The evaluation of each individual run was performed for the period 1981-2000 using ETOOL and ETOOL-VIS, tools developed within the community to evaluate standard COSMO-CLM output against observations provided by E-OBS and CRU. COPAT was structured in three phases. In Phase 1, all participating institutions performed a reference run on their individual computing platforms and afterwards tested the influence of single model options on the results. Based on the results of Phase 1, the most promising options were used in combination in the second phase (Phase 2). These first two phases of COPAT comprise more than 100 simulations with a spatial resolution of 0.44°. Based on the best setup identified in Phase 2, a calibration of eight tuning parameters was carried out in Phase 3, following Bellprat et al. (2012). A final simulation with the calibrated parameters has been set up at a higher resolution of 0.11°. The

  1. Rain detection and measurement from Megha-Tropiques microwave sounder—SAPHIR

    Science.gov (United States)

    Varma, Atul Kumar; Piyush, D. N.; Gohil, B. S.; Pal, P. K.; Srinivasan, J.

    2016-08-01

The Megha-Tropiques, an Indo-French satellite, carries on board a microwave sounder, Sondeur Atmosphérique du Profil d'Humidité Intertropical par Radiométrie (SAPHIR), and a microwave radiometer, Microwave Analysis and Detection of Rain and Atmospheric Structures (MADRAS), along with two other instruments. As a Global Precipitation Measurement constellation instrument, MT-MADRAS was important for studying convective clouds and rainfall. Because MADRAS is not functioning, the possibility of detecting and estimating rain from SAPHIR is explored. Using near-concurrent SAPHIR and precipitation radar (PR) observations from the Tropical Rainfall Measuring Mission (TRMM), the effect of rain on the SAPHIR channels is examined. All six SAPHIR channels are used to calculate an average rain probability for each SAPHIR pixel. Further, an exponential rain retrieval algorithm is developed. This algorithm yields a correlation of 0.72, an RMS error of 0.75 mm/h, and a bias of 0.04 mm/h. When the rain identification and retrieval algorithms are applied together, they yield a correlation of 0.69 with an RMS error of 0.47 mm/h and a bias of 0.01 mm/h. When the algorithm is applied to an independent SAPHIR data set and compared with TRMM-3B42 rain on a monthly scale, it yields a correlation of 0.85 and an RMS error of 0.09 mm/h. The distribution of rain differences between SAPHIR and other rain products is presented on the global scale as well as for the climatic zones. To examine the capability of SAPHIR to measure intense rain, instantaneous rain over cyclone Phailin from SAPHIR is compared with other standard satellite-based rain products such as 3B42, Global Satellite Mapping of Precipitation, and Precipitation Estimation from Remotely Sensed Information using Artificial Neural Networks.
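The two-step scheme described in the abstract (a per-pixel rain probability averaged over the six channels, followed by an exponential rain-rate relation) can be sketched as follows. All thresholds and coefficients here are invented for illustration; the paper's regression coefficients are not reproduced:

```python
import numpy as np

# Step 1: a per-channel rain probability from the brightness-temperature
# depression (K), averaged over the six SAPHIR channels.
# Step 2: an exponential rain-rate relation applied to that probability.

def rain_probability(depressions, threshold=5.0, scale=20.0):
    """Simple ramp per channel, capped at 1, averaged over channels."""
    p = np.clip((np.asarray(depressions) - threshold) / scale, 0.0, 1.0)
    return p.mean()

def rain_rate(p_rain, a=0.1, b=4.0):
    """Exponential retrieval R = a*(exp(b*P) - 1); zero when P = 0."""
    return a * (np.exp(b * p_rain) - 1.0)

p = rain_probability([8.0, 12.0, 15.0, 20.0, 6.0, 10.0])  # six channels
print(round(rain_rate(p), 2))  # mm/h, illustrative only
```

The exponential form guarantees zero retrieved rain at zero probability while letting the rate grow quickly for confidently rainy pixels.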

  2. ONKALO rock mechanics model (RMM). Version 2.3

    Energy Technology Data Exchange (ETDEWEB)

    Haekkinen, T.; Merjama, S.; Moenkkoenen, H. [WSP Finland, Helsinki (Finland)

    2014-07-15

    The Rock Mechanics Model of the ONKALO rock volume includes the most important rock mechanics features and parameters at the Olkiluoto site. The main objective of the model is to be a tool to predict rock properties, rock quality and hence provide an estimate for the rock stability of the potential repository at Olkiluoto. The model includes a database of rock mechanics raw data and a block model in which the rock mechanics parameters are estimated through block volumes based on spatial rock mechanics raw data. In this version 2.3, special emphasis was placed on refining the estimation of the block model. The model was divided into rock mechanics domains which were used as constraints during the block model estimation. During the modelling process, a display profile and toolbar were developed for the GEOVIA Surpac software to improve visualisation and access to the rock mechanics data for the Olkiluoto area. (orig.)

  3. Megha-Tropiques/SAPHIR measurements of humidity profiles: validation with AIRS and global radiosonde network

    Science.gov (United States)

    Subrahmanyam, K. V.; Kumar, K. K.

    2013-12-01

The vertical profiles of humidity measured by SAPHIR (Sondeur Atmosphérique du Profil d'Humidité Intertropicale par Radiométrie) on board the Megha-Tropiques satellite are validated using Atmospheric Infrared Sounder (AIRS) and ground-based radiosonde observations during July-September 2012. SAPHIR provides humidity profiles at six pressure layers, viz. 1000-850 (level 1), 850-700 (level 2), 700-550 (level 3), 550-400 (level 4), 400-250 (level 5) and 250-100 (level 6) hPa. Segregated AIRS observations over land and oceanic regions are used to assess the performance of SAPHIR quantitatively. The regression analysis over the oceanic region (125° W-180° W; 30° S-30° N) reveals that the SAPHIR measurements agree very well with the AIRS measurements at levels 3, 4, 5 and 6, with correlation coefficients of 0.79, 0.88, 0.87 and 0.78 respectively; however, at level 6 SAPHIR appears to systematically underestimate the AIRS measurements. At level 2 the agreement is reasonably good, with a correlation coefficient of 0.52, and at level 1 the agreement is very poor, with a correlation coefficient of 0.17. The regression analysis over the land region (10° W-30° E; 8° N-30° N) reveals an excellent correlation between AIRS and SAPHIR at all six levels, with coefficients of 0.80, 0.78, 0.84, 0.84, 0.86 and 0.65 respectively; however, again at levels 5 and 6, SAPHIR appears to underestimate the AIRS measurements. After carrying out the quantitative comparison between SAPHIR and AIRS separately over land and ocean, humidity profiles from the ground-based global radiosonde network over three distinct geographical regions (East Asia, the tropical belt of South and North America, and the South Pacific) are used to further validate the SAPHIR observations, as AIRS has its own limitations. The SAPHIR observations within a radius of 50 km around the radiosonde stations are averaged, and the regression analysis is carried out at the first five levels of SAPHIR. The comparison is not carried out at sixth

  4. Model Versions and Fast Algorithms for Network Epidemiology

    Institute of Scientific and Technical Information of China (English)

    Petter Holme

    2014-01-01

Network epidemiology has become a core framework for investigating the role of human contact patterns in the spreading of infectious diseases. In network epidemiology, one represents the contact structure as a network of nodes (individuals) connected by links (sometimes as a temporal network where the links are not continuously active) and the disease as a compartmental model (where individuals are assigned states with respect to the disease and follow certain transition rules between the states). In this paper, we discuss fast algorithms for such simulations and also compare two commonly used versions: one where there is a constant recovery rate (the number of individuals that stop being infectious per unit time is proportional to the number of such people); the other where the duration of the disease is constant. The results show that, for most practical purposes, these versions are qualitatively the same.
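The two recovery variants being compared can be sketched on a small contact network. This is a simple discrete-time illustration, not the paper's fast event-driven algorithms; the network and parameters are invented:

```python
import random

# SIR on a small contact network with two recovery variants:
# (a) constant recovery rate (geometrically distributed durations),
# (b) constant infectious duration (recover exactly after a fixed age).

def simulate(edges, beta, recover, seed=0, steps=100):
    rng = random.Random(seed)
    infected = {0: 0}                 # node -> time of infection; node 0 seeds
    recovered = set()
    for t in range(1, steps + 1):
        new = {}
        for u, v in edges:
            for a, b in ((u, v), (v, u)):
                if a in infected and b not in infected and b not in recovered:
                    if rng.random() < beta:
                        new[b] = t    # transmission along an active link
        for node, t0 in list(infected.items()):
            if recover(rng, t - t0):  # recovery decision for this variant
                del infected[node]
                recovered.add(node)
        infected.update(new)
        if not infected:
            break
    return len(recovered) + len(infected)   # final outbreak size

edges = [(0, 1), (1, 2), (2, 3), (3, 0), (1, 3)]
rate_based = lambda rng, age: rng.random() < 0.2   # constant recovery rate
fixed_dur  = lambda rng, age: age >= 5             # constant duration
print(simulate(edges, beta=0.5, recover=rate_based))
print(simulate(edges, beta=0.5, recover=fixed_dur))
```

Averaging the outbreak size over many seeds for each variant is the kind of comparison the paper carries out, only with efficient event-driven updates instead of this step-by-step loop.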

  5. H2A Production Model, Version 2 User Guide

    Energy Technology Data Exchange (ETDEWEB)

    Steward, D.; Ramsden, T.; Zuboy, J.

    2008-09-01

    The H2A Production Model analyzes the technical and economic aspects of central and forecourt hydrogen production technologies. Using a standard discounted cash flow rate of return methodology, it determines the minimum hydrogen selling price, including a specified after-tax internal rate of return from the production technology. Users have the option of accepting default technology input values--such as capital costs, operating costs, and capacity factor--from established H2A production technology cases or entering custom values. Users can also modify the model's financial inputs. This new version of the H2A Production Model features enhanced usability and functionality. Input fields are consolidated and simplified. New capabilities include performing sensitivity analyses and scaling analyses to various plant sizes. This User Guide helps users already familiar with the basic tenets of H2A hydrogen production cost analysis get started using the new version of the model. It introduces the basic elements of the model then describes the function and use of each of its worksheets.
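The minimum-selling-price calculation described above can be sketched as a root-finding problem: find the hydrogen price at which the project NPV, discounted at the target after-tax rate of return, is zero. The figures below are illustrative, not H2A defaults, and taxes and depreciation are omitted for brevity:

```python
# Levelized-price sketch of the discounted-cash-flow idea: bisect on the
# hydrogen price until the project NPV at the target rate is zero.

def npv(price, capex, opex, kg_per_year, rate, years):
    cash = [-capex] + [price * kg_per_year - opex] * years
    return sum(c / (1 + rate) ** t for t, c in enumerate(cash))

def min_selling_price(capex, opex, kg_per_year, rate, years,
                      lo=0.0, hi=100.0, tol=1e-6):
    # NPV is increasing in price, so a sign change brackets the root.
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if npv(mid, capex, opex, kg_per_year, rate, years) < 0:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

price = min_selling_price(capex=5e8, opex=2e7, kg_per_year=5e7,
                          rate=0.10, years=20)
print(round(price, 3))  # $/kg at which NPV = 0 at a 10% discount rate
```

At that price the specified rate of return is exactly achieved, which is the sense in which H2A reports a "minimum hydrogen selling price."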

  6. $K^0$-$\\Sigma^+$ Photoproduction with SAPHIR

    CERN Document Server

    Bennhold, C

    1998-01-01

Preliminary results of the analysis of the reaction p(gamma,K0)Sigma+ are presented. We show the first measurement of the differential cross section and total cross section data much improved over previous measurements. The data are compared with model predictions from different isobar and quark models that give a good description of p(gamma,K+)Lambda and p(gamma,K+)Sigma0 data in the same energy range. None of the models yields an adequate description of the data at all energies.

  7. Stochastic hyperfine interactions modeling library-Version 2

    Science.gov (United States)

    Zacate, Matthew O.; Evenson, William E.

    2016-02-01

    The stochastic hyperfine interactions modeling library (SHIML) provides a set of routines to assist in the development and application of stochastic models of hyperfine interactions. The library provides routines written in the C programming language that (1) read a text description of a model for fluctuating hyperfine fields, (2) set up the Blume matrix, upon which the evolution operator of the system depends, and (3) find the eigenvalues and eigenvectors of the Blume matrix so that theoretical spectra of experimental techniques that measure hyperfine interactions can be calculated. The optimized vector and matrix operations of the BLAS and LAPACK libraries are utilized. The original version of SHIML constructed and solved Blume matrices for methods that measure hyperfine interactions of nuclear probes in a single spin state. Version 2 provides additional support for methods that measure interactions on two different spin states such as Mössbauer spectroscopy and nuclear resonant scattering of synchrotron radiation. Example codes are provided to illustrate the use of SHIML to (1) generate perturbed angular correlation spectra for the special case of polycrystalline samples when anisotropy terms of higher order than A22 can be neglected and (2) generate Mössbauer spectra for polycrystalline samples for pure dipole or pure quadrupole transitions.
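The Blume-matrix idea can be illustrated for the simplest case of a hyperfine frequency hopping between two values. This is a numpy analogue of the approach, not the SHIML C API; the frequency and jump rate below are invented:

```python
import numpy as np

# For a hyperfine precession frequency hopping between +w and -w at rate r,
# the evolution operator's eigenvalues come from a small complex matrix
# combining the precession terms (i*w on the diagonal) and the jump rates.

w = 2 * np.pi * 10.0   # precession frequency (rad/s), illustrative
r = 5.0                # jump rate between the two states (1/s)

M = np.array([[1j * w - r,        r],
              [r,          -1j * w - r]])

evals, evecs = np.linalg.eig(M)
# Real parts give the damping of the signal; imaginary parts the observed
# precession frequencies (here slightly shifted by the fluctuations).
print(np.sort(evals.real))
```

For slow fluctuations (r much smaller than w) both frequencies survive with damping r, as here; in the fast-fluctuation limit the lines merge, which is the motional-narrowing behaviour such stochastic models capture.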

  8. SAPHIRE: A New Flat-Panel Digital Mammography Detector With Avalanche Photoconductor and High-Resolution Field Emitter Readout

    Science.gov (United States)

    2006-06-01

[OCR residue from the scanned DTIC report form; recoverable information follows.] Award Number: W81XWH-04-1-0554. Title: SAPHIRE: A New Flat-Panel Digital Mammography Detector with Avalanche Photoconductor and High-Resolution Field Emitter Readout. From the abstract fragment: the detector uses a scintillator (CsI) and forms a charge image that is read out by a high-resolution field emitter array (FEA). We call the proposed detector SAPHIRE (Scintillator

  9. The Lagrangian particle dispersion model FLEXPART version 10

    Science.gov (United States)

    Pisso, Ignacio; Sollum, Espen; Grythe, Henrik; Kristiansen, Nina; Cassiani, Massimo; Eckhardt, Sabine; Thompson, Rona; Groot Zwaaftnik, Christine; Evangeliou, Nikolaos; Hamburger, Thomas; Sodemann, Harald; Haimberger, Leopold; Henne, Stephan; Brunner, Dominik; Burkhart, John; Fouilloux, Anne; Fang, Xuekun; Phillip, Anne; Seibert, Petra; Stohl, Andreas

    2017-04-01

The Lagrangian particle dispersion model FLEXPART was designed, in its first release in 1998, for calculating the long-range and mesoscale dispersion of air pollutants from point sources, such as after an accident at a nuclear power plant. The model has since evolved into a comprehensive tool for atmospheric transport modelling and analysis. Its application fields have been extended to a range of atmospheric transport processes for both atmospheric gases and aerosols, e.g. greenhouse gases, short-lived climate forcers like black carbon, volcanic ash and gases, as well as studies of the water cycle. We present the newest release, FLEXPART version 10. Since the last publication fully describing FLEXPART (version 6.2), the model code has been parallelised to speed up computation. A new, more detailed gravitational settling parametrisation for aerosols has been implemented, and the wet deposition scheme for aerosols has been heavily modified and updated to provide a more accurate representation of this physical process. In addition, an optional new turbulence scheme for the convective boundary layer is available that considers the skewness of the vertical velocity distribution. Temporal variation and temperature dependence of the OH reaction are also included. Finally, user input files have been updated to a more convenient and user-friendly namelist format, and the option to produce the output files in netCDF format instead of binary format has been implemented. We present these new developments and show recent model applications. Moreover, we also introduce some tools for the preparation of the meteorological input data, as well as for the processing of FLEXPART output data.
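As a flavour of what a gravitational settling parametrisation computes, here is the textbook Stokes settling velocity for a small sphere. FLEXPART v10's actual scheme is more detailed (slip correction, particle shape, Reynolds-number regimes); the values here are illustrative:

```python
# Stokes-regime terminal settling velocity of a sphere in air:
#   v = (rho_p - rho_a) * g * d^2 / (18 * mu)
# valid for small particle Reynolds numbers.

def stokes_settling_velocity(d, rho_p, rho_a=1.2, mu=1.8e-5, g=9.81):
    """Settling velocity (m/s) for diameter d (m), particle density rho_p,
    air density rho_a (kg/m^3), and dynamic viscosity mu (Pa*s)."""
    return (rho_p - rho_a) * g * d**2 / (18.0 * mu)

# A 10-micron mineral-dust-like particle settles at roughly a cm/s:
v = stokes_settling_velocity(d=10e-6, rho_p=2650.0)
print(round(v, 4))  # m/s
```

The quadratic dependence on diameter is why settling matters greatly for volcanic ash and coarse dust but is negligible for accumulation-mode aerosol.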

  10. Software Engineering Designs for Super-Modeling Different Versions of CESM Models using DART

    Science.gov (United States)

    Kluzek, Erik; Duane, Gregory; Tribbia, Joe; Vertenstein, Mariana

    2014-05-01

    The super-modeling approach connects different models at run time, providing feedbacks between them that synchronize the models. This method reduces model bias further than after-the-fact averaging of model outputs. We explore different designs to connect different configurations and versions of an IPCC-class climate model, the Community Earth System Model (CESM). We build on the Data Assimilation Research Testbed (DART) software to provide data assimilation from truth as well as a software framework to link different model configurations together. We show a system, building on DART, that uses a Python script to do simple nudging between three versions of the atmosphere model in CESM (the Community Atmosphere Model (CAM) versions three, four and five).
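    The nudging idea described above can be sketched in a few lines: each model's state is relaxed toward the multi-model mean at every step. The state vectors and relaxation coefficient below are invented stand-ins for illustration, not CESM/DART code:

    ```python
    # Toy sketch of run-time nudging between model versions: each state vector
    # is relaxed toward the ensemble mean with relaxation coefficient `alpha`.

    def nudge(states, alpha=0.1):
        """Relax each model state toward the multi-model mean."""
        n = len(states)
        mean = [sum(col) / n for col in zip(*states)]
        return [
            [x + alpha * (m - x) for x, m in zip(state, mean)]
            for state in states
        ]

    # Three "model versions" starting from different (hypothetical) states:
    states = [[1.0, 2.0], [3.0, 4.0], [5.0, 6.0]]
    for _ in range(100):
        states = nudge(states)
    # Repeated nudging draws all states toward the common mean, the toy
    # analogue of the synchronization the super-modeling approach aims for.
    ```

    Because the ensemble mean is preserved by each nudging step, all three states converge toward the mean of the initial states.
    
    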

  11. 19-vertex version of the fully frustrated XY model

    Science.gov (United States)

    Knops, Yolanda M. M.; Nienhuis, Bernard; Knops, Hubert J. F.; Blöte, Henk W. J.

    1994-07-01

    We investigate a 19-vertex version of the two-dimensional fully frustrated XY (FFXY) model. We construct Yang-Baxter equations for this model and show that there is no solution. Therefore we have chosen a numerical approach based on the transfer matrix. The results show that a coupled XY Ising model is in the same universality class as the FFXY model. We find that the phase coupling over an Ising wall is irrelevant at criticality. This leads to a correction of earlier determinations of the dimension x^*_{h,Is} of the Ising disorder operator. We find x^*_{h,Is} = 0.123(5) and a conformal anomaly c = 1.55(5). These results are consistent with the hypothesis that the FFXY model behaves as a superposition of an Ising model and an XY model. However, the dimensions associated with the energy, x_t = 0.77(3), and with the XY magnetization, x_{h,XY} ≈ 0.17, refute this hypothesis.

  12. Looking for the dichromatic version of a colour vision model

    Science.gov (United States)

    Capilla, P.; Luque, M. J.; Díez-Ajenjo, M. A.

    2004-09-01

    Different hypotheses on the sensitivity of photoreceptors and post-receptoral mechanisms were introduced in different colour vision models to derive acceptable dichromatic versions. Models with one (Ingling and T'sou, Guth et al, Boynton) and two linear opponent stages (DeValois and DeValois) and with two non-linear opponent stages (ATD95) were used. The L- and M-cone sensitivities of red-green defectives were either set to zero (cone-loss hypothesis) or replaced by that of a different cone-type (cone-replacement hypothesis), whereas for tritanopes the S-cone sensitivity was always assumed to be zero. The opponent mechanisms were either left unchanged or nulled in one or in all the opponent stages. The dichromatic models obtained have been evaluated according to their performance in three tests: computation of the spectral sensitivity of the dichromatic perceptual mechanisms, prediction of the colour loci describing dichromatic appearance and prediction of the gamut of colours that dichromats perceive as normal subjects do.

  13. The integrated Earth System Model Version 1: formulation and functionality

    Energy Technology Data Exchange (ETDEWEB)

    Collins, William D.; Craig, Anthony P.; Truesdale, John E.; Di Vittorio, Alan; Jones, Andrew D.; Bond-Lamberty, Benjamin; Calvin, Katherine V.; Edmonds, James A.; Kim, Son H.; Thomson, Allison M.; Patel, Pralit L.; Zhou, Yuyu; Mao, Jiafu; Shi, Xiaoying; Thornton, Peter E.; Chini, Louise M.; Hurtt, George C.

    2015-07-23

    The integrated Earth System Model (iESM) has been developed as a new tool for projecting the joint human/climate system. The iESM is based upon coupling an Integrated Assessment Model (IAM) and an Earth System Model (ESM) into a common modeling infrastructure. IAMs are the primary tool for describing the human–Earth system, including the sources of global greenhouse gases (GHGs) and short-lived species, land use and land cover change, and other resource-related drivers of anthropogenic climate change. ESMs are the primary scientific tools for examining the physical, chemical, and biogeochemical impacts of human-induced changes to the climate system. The iESM project integrates the economic and human dimension modeling of an IAM and a fully coupled ESM within a single simulation system while maintaining the separability of each model if needed. Both IAM and ESM codes are developed and used by large communities and have been extensively applied in recent national and international climate assessments. By introducing heretofore-omitted feedbacks between natural and societal drivers, we can improve scientific understanding of the human–Earth system dynamics. Potential applications include studies of the interactions and feedbacks leading to the timing, scale, and geographic distribution of emissions trajectories and other human influences, corresponding climate effects, and the subsequent impacts of a changing climate on human and natural systems. This paper describes the formulation, requirements, implementation, testing, and resulting functionality of the first version of the iESM released to the global climate community.

  14. The temporal version of the pediatric sepsis biomarker risk model.

    Directory of Open Access Journals (Sweden)

    Hector R Wong

    Full Text Available PERSEVERE is a risk model for estimating mortality probability in pediatric septic shock, using five biomarkers measured within 24 hours of clinical presentation.Here, we derive and test a temporal version of PERSEVERE (tPERSEVERE that considers biomarker values at the first and third day following presentation to estimate the probability of a "complicated course", defined as persistence of ≥2 organ failures at seven days after meeting criteria for septic shock, or death within 28 days.Biomarkers were measured in the derivation cohort (n = 225 using serum samples obtained during days 1 and 3 of septic shock. Classification and Regression Tree (CART analysis was used to derive a model to estimate the risk of a complicated course. The derived model was validated in the test cohort (n = 74, and subsequently updated using the combined derivation and test cohorts.A complicated course occurred in 23% of the derivation cohort subjects. The derived model had a sensitivity for a complicated course of 90% (95% CI 78-96, specificity was 70% (62-77, positive predictive value was 47% (37-58, and negative predictive value was 96% (91-99. The area under the receiver operating characteristic curve was 0.85 (0.79-0.90. Similar test characteristics were observed in the test cohort. The updated model had a sensitivity of 91% (81-96, a specificity of 70% (64-76, a positive predictive value of 47% (39-56, and a negative predictive value of 96% (92-99.tPERSEVERE reasonably estimates the probability of a complicated course in children with septic shock. tPERSEVERE could potentially serve as an adjunct to physiological assessments for monitoring how risk for poor outcomes changes during early interventions in pediatric septic shock.

  15. Evaluation of SAPHIR / Megha-Tropiques observations - CINDY/DYNAMO Campaign

    Science.gov (United States)

    Clain, Gaelle; Brogniez, Hélène; John, Viju; Payne, Vivienne; Luo, Ming

    2014-05-01

    The SAPHIR sounder (Sondeur Atmosphérique du Profil d'Humidité Intertropicale par Radiométrie) onboard the Megha-Tropiques (MT) platform observes the microwave radiation emitted by the Earth system in the strong water vapor absorption line at 183.31 GHz. It is a multi-channel microwave humidity sounder with six channels in the 183.31 GHz water vapor absorption band, a maximum scan angle of 42.96° around nadir, a 1700 km wide swath, and a footprint resolution of 10 km at nadir. A comparison between the sensor L1A2 observations and radiative transfer calculations using in situ measurements from radiosondes as input is performed in order to validate the satellite observations at the brightness temperature (BT) level. The radiosonde humidity observations chosen as reference were performed during the CINDY/DYNAMO campaign (September 2011 to March 2012) with Vaisala RS92-SGPD probes and were matched by spatio-temporal co-location with MT satellite overpasses. Although several sonde systems were used during the campaign, all of the sites selected for this study used the Vaisala RS92-SGPD system, chosen in order to avoid discrepancies in data quality and biases. This work investigates the difference, or bias, between the BTs observed by the sensor and BT simulations from a radiative transfer model, RTTOV-10. The bias amplitude shows a temperature-dependent pattern, increasing from nearly 0 K for the 183.31 ± 0.2 channel to a range of 2 K for the 183.31 ± 11 channel. However, the comparison between the sensor data and the radiative transfer simulations is not straightforward, and uncertainties associated with the data processing must be propagated throughout the evaluation. Therefore this work documents an evaluation of the uncertainties and errors that can impact the BT bias. These can be linked to the radiative transfer model input and design, the radiosonde observations, the methodology chosen for the comparison, and the SAPHIR instrument itself.
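    A bias evaluation of this kind ultimately reduces to simple statistics of observed-minus-simulated brightness temperatures per channel. The sketch below uses invented values, not SAPHIR observations or RTTOV-10 output:

    ```python
    # Mean bias and RMS difference between observed and simulated brightness
    # temperatures (K) for one channel; the four values are made up.

    def bias_stats(observed, simulated):
        """Return (mean bias, RMS difference) of observed minus simulated."""
        d = [o - s for o, s in zip(observed, simulated)]
        mean = sum(d) / len(d)
        rms = (sum(x * x for x in d) / len(d)) ** 0.5
        return mean, rms

    obs = [244.1, 251.3, 258.7, 263.2]   # hypothetical observed BTs (K)
    sim = [243.0, 250.8, 257.1, 262.0]   # hypothetical simulated BTs (K)
    mean_bias, rms = bias_stats(obs, sim)
    ```

    In practice each co-located sounding contributes one observed/simulated pair per channel, and the uncertainty terms discussed above (sonde errors, co-location mismatch, model input) are propagated into an error bar on `mean_bias`.
    
    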

  16. Preliminary Evaluation of the Community Multiscale Air Quality (CMAQ) Model Version 5.1

    Science.gov (United States)

    The AMAD will perform two annual CMAQ model simulations, one with the current publicly available version of the CMAQ model (v5.0.2) and the other with the beta version of the new model (v5.1). The results of each model simulation will then be compared to observations and the pe...

  17. Evaluation of the Community Multiscale Air Quality (CMAQ) Model Version 5.1

    Science.gov (United States)

    The AMAD performed two CMAQ model simulations, one with the current publicly available version of the CMAQ model (v5.0.2) and the other with the new version of the CMAQ model (v5.1). The results of each model simulation are compared to observations and the performance of t...

  18. Investigation of isoprene oxidation in the atmosphere simulation chamber SAPHIR at low NO concentrations

    Science.gov (United States)

    Fuchs, H.; Rohrer, F.; Hofzumahaus, A.; Bohn, B.; Brauers, T.; Dorn, H.; Häseler, R.; Holland, F.; Li, X.; Lu, K.; Nehr, S.; Tillmann, R.; Wahner, A.

    2012-12-01

    During recent field campaigns, hydroxyl radical (OH) concentrations that were measured by laser-induced fluorescence spectroscopy (LIF) were up to a factor of ten larger than predicted by current chemical models for conditions of high OH reactivity and low nitrogen monoxide (NO) concentrations. These discrepancies were observed in the Pearl-River-Delta, China, which is an urban-influenced rural area, in rainforests, and forested areas in North America and Europe. Isoprene contributed significantly to the total OH reactivity in these field studies, so that potential explanations for the missing OH focused on new reaction pathways in the isoprene degradation mechanism. These pathways regenerate OH without oxidation of NO and thus without ozone production. In summer 2011, a series of experiments was carried out in the atmosphere simulation chamber SAPHIR in Juelich, Germany, in order to investigate the photochemical degradation of isoprene at low NO concentrations (NOSAPHIR by established chemical models like the Master Chemical Mechanism (MCM). Moreover, OH concentration measurements of two independent instruments (LIF and DOAS) agreed during all chamber experiments. Here, we present the results of the experiments and compare measurements with model predictions using the MCM. Furthermore, the validity of newly proposed reaction pathways in the isoprene degradation is evaluated by comparison with observations.

  19. Land-Use Portfolio Modeler, Version 1.0

    Science.gov (United States)

    Taketa, Richard; Hong, Makiko

    2010-01-01

    -on-investment. The portfolio model, now known as the Land-Use Portfolio Model (LUPM), provided the framework for the development of the Land-Use Portfolio Modeler, Version 1.0 software (LUPM v1.0). The software provides a geographic information system (GIS)-based modeling tool for evaluating alternative risk-reduction mitigation strategies for specific natural-hazard events. The modeler uses information about a specific natural-hazard event and the features exposed to that event within the targeted study region to derive a measure of a given mitigation strategy's effectiveness. Harnessing the spatial capabilities of a GIS enables the tool to provide a rich, interactive mapping environment in which users can create, analyze, visualize, and compare different
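    A portfolio-style comparison of mitigation strategies can be sketched as a net-benefit ranking: expected avoided loss minus mitigation cost. The scoring rule and every number below are invented for illustration and are not the LUPM algorithm:

    ```python
    # Rank hypothetical mitigation strategies for one natural-hazard event by
    # expected avoided loss net of cost (all figures are made up).

    def net_benefit(p_event, loss_unmitigated, loss_mitigated, cost):
        """Expected avoided loss minus mitigation cost, in dollars."""
        return p_event * (loss_unmitigated - loss_mitigated) - cost

    strategies = {
        "retrofit": net_benefit(0.1, 5_000_000, 1_000_000, 150_000),
        "relocate": net_benefit(0.1, 5_000_000, 200_000, 600_000),
        "nothing":  net_benefit(0.1, 5_000_000, 5_000_000, 0),
    }
    best = max(strategies, key=strategies.get)
    ```

    With these toy numbers "retrofit" wins: it avoids less loss than relocation but at a quarter of the cost, which is exactly the kind of trade-off a portfolio comparison is meant to surface.
    
    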

  20. Investigation of MACR oxidation by OH in the atmosphere simulation chamber SAPHIR at low NO concentrations.

    Science.gov (United States)

    Fuchs, Hendrik; Acir, Ismail-Hakki; Bohn, Birger; Brauers, Theo; Dorn, Hans-Peter; Häseler, Rolf; Hofzumahaus, Andreas; Holland, Frank; Li, Xin; Lu, Keding; Lutz, Anna; Kaminski, Martin; Nehr, Sascha; Rohrer, Franz; Tillmann, Ralf; Wegener, Robert; Wahner, Andreas

    2013-04-01

    During recent field campaigns, hydroxyl radical (OH) concentrations were up to a factor of ten larger than predicted by current chemical models for conditions of high OH reactivity and low nitrogen monoxide (NO) concentrations. These discrepancies were observed in forests, where isoprene oxidation turnover rates were large. Methacrolein (MACR) is one of the major first generation products of isoprene oxidation, so that MACR was also an important reactant for OH. Here, we present a detailed investigation of the MACR oxidation mechanism including a full set of accurate and precise radical measurements in the atmosphere simulation chamber SAPHIR in Juelich, Germany. The conditions during the chamber experiments were comparable to those during field campaigns with respect to radical and trace gas concentrations. In particular, OH reactivity was as high as 15 per second and NO mixing ratios were as low as 200 pptv. Results of the experiments were compared to model predictions using the Master Chemical Mechanism, in order to identify so far unknown reaction pathways, which potentially recycle OH radicals without reactions with NO.

  1. Inter-calibration and validation of observations from SAPHIR and ATMS instruments

    Science.gov (United States)

    Moradi, I.; Ferraro, R. R.

    2015-12-01

    We present the results of evaluating observations from microwave instruments aboard the Suomi National Polar-orbiting Partnership (NPP, ATMS instrument) and Megha-Tropiques (SAPHIR instrument) satellites. The study includes inter-comparison and inter-calibration of observations of similar channels from the two instruments, evaluation of the satellite data using high-quality radiosonde data from the Atmospheric Radiation Measurement Program and GPS Radio Occultation observations from the COSMIC mission, as well as geolocation error correction. The results of this study are valuable for generating climate data records from these instruments as well as for extending current climate data records from similar instruments such as AMSU-B and MHS to the ATMS and SAPHIR instruments. Reference: Moradi et al., Intercalibration and Validation of Observations From ATMS and SAPHIR Microwave Sounders. IEEE Transactions on Geoscience and Remote Sensing. 01/2015; DOI: 10.1109/TGRS.2015.2427165

  2. Gerasimov-Drell-Hearn Sum Rule and the Discrepancy between the New CLAS and SAPHIR Data

    CERN Document Server

    Mart, T

    2008-01-01

    Contribution of the K^+\\Lambda channel to the Gerasimov-Drell-Hearn (GDH) sum rule has been calculated by using the models that fit the recent SAPHIR or CLAS differential cross section data. It is shown that the two data sets yield quite different contributions. Contribution of this channel to the forward spin polarizability of the proton has been also calculated. It is also shown that the inclusion of the recent CLAS C_x and C_z data in the fitting data base does not significantly change the result of the present calculation. Results of the fit, however, reveal the role of the S_{11}(1650), P_{11}(1710), P_{13}(1720), and P_{13}(1900) resonances for the description of the C_x and C_z data. A brief discussion on the importance of these resonances is given. Measurements of the polarized total cross section \\sigma_{TT'} by the CLAS, LEPS, and MAMI collaborations are expected to verify this finding.

  3. Integrating Cloud Processes in the Community Atmosphere Model, Version 5.

    Energy Technology Data Exchange (ETDEWEB)

    Park, S.; Bretherton, Christopher S.; Rasch, Philip J.

    2014-09-15

    This paper provides a description of the parameterizations of the global cloud system in CAM5. Compared to the previous versions, CAM5 cloud parameterization has the following unique characteristics: (1) a transparent cloud macrophysical structure that has horizontally non-overlapped deep cumulus, shallow cumulus and stratus in each grid layer, each of which has its own cloud fraction, mass and number concentrations of cloud liquid droplets and ice crystals, (2) stratus-radiation-turbulence interaction that allows CAM5 to simulate marine stratocumulus solely from grid-mean RH without relying on the stability-based empirical empty stratus, (3) prognostic treatment of the number concentrations of stratus liquid droplets and ice crystals with activated aerosols and detrained in-cumulus condensates as the main sources and evaporation-sedimentation-precipitation of stratus condensate as the main sinks, and (4) radiatively active cumulus. By imposing consistency between diagnosed stratus fraction and prognosed stratus condensate, CAM5 is free from empty or highly-dense stratus at the end of stratus macrophysics. CAM5 also prognoses mass and number concentrations of various aerosol species. Thanks to the aerosol activation and the parameterizations of the radiation and stratiform precipitation production as a function of the droplet size, CAM5 simulates various aerosol indirect effects associated with stratus as well as direct effects, i.e., aerosol controls both the radiative and hydrological budgets. Detailed analysis of various simulations revealed that CAM5 is much better than CAM3/4 in the global performance as well as the physical formulation. However, several problems were also identified, which can be attributed to inappropriate regional tuning, inconsistency between various physics parameterizations, and incomplete model physics. Continuous efforts are going on to further improve CAM5.

  4. The NDFF-EcoGRID logical data model, version 3. - Document version 1.1

    NARCIS (Netherlands)

    W. Arp; G. van Reenen; R. van Seeters; M. Tentij; L.E. Veen; D. Zoetebier

    2011-01-01

    The National Authority for Data concerning Nature has been appointed by the Ministry of Agriculture, Nature and Food Quality, and has been assigned the task of making available nature data and of promoting its use. The logical data model described here is intended for everyone in The Netherlands (an

  5. A Constrained and Versioned Data Model for TEAM Data

    Science.gov (United States)

    Andelman, S.; Baru, C.; Chandra, S.; Fegraus, E.; Lin, K.

    2009-04-01

    The objective of the Tropical Ecology Assessment and Monitoring Network (www.teamnetwork.org) is "To generate real time data for monitoring long-term trends in tropical biodiversity through a global network of TEAM sites (i.e. field stations in tropical forests), providing an early warning system on the status of biodiversity to effectively guide conservation action". To achieve this, the TEAM Network operates by collecting data via standardized protocols at TEAM Sites. The standardized TEAM protocols include the Climate, Vegetation and Terrestrial Vertebrate Protocols. Some sites also implement additional protocols. There are currently 7 TEAM Sites with plans to grow the network to 15 by June 30, 2009 and 50 TEAM Sites by the end of 2010. At each TEAM Site, data is gathered as defined by the protocols and according to a predefined sampling schedule. The TEAM data is organized and stored in a database based on the TEAM spatio-temporal data model. This data model is at the core of the TEAM Information System - it consumes and executes spatio-temporal queries, and analytical functions that are performed on TEAM data, and defines the object data types, relationships and operations that maintain database integrity. The TEAM data model contains object types including types for observation objects (e.g. bird, butterfly and trees), sampling unit, person, role, protocol, site and the relationship of these object types. Each observation data record is a set of attribute values of an observation object and is always associated with a sampling unit, an observation timestamp or time interval, a versioned protocol and data collectors. The operations on the TEAM data model can be classified as read operations, insert operations and update operations. Following are some typical operations: The operation get(site, protocol, [sampling unit block, sampling unit,] start time, end time) returns all data records using the specified protocol and collected at the specified site, block
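    The get() read operation described above can be sketched as a simple filter over observation records. The record layout and values below are invented for illustration; they are not the TEAM schema:

    ```python
    from datetime import datetime

    # Toy observation store: each record ties attribute values to a site,
    # a protocol and a timestamp, loosely mirroring the description above.
    records = [
        {"site": "VB", "protocol": "Vegetation",
         "time": datetime(2009, 1, 5), "value": 12.0},
        {"site": "VB", "protocol": "Climate",
         "time": datetime(2009, 2, 1), "value": 21.5},
        {"site": "CAX", "protocol": "Vegetation",
         "time": datetime(2009, 1, 20), "value": 7.3},
    ]

    def get(site, protocol, start, end):
        """Return records for one site/protocol within [start, end]."""
        return [r for r in records
                if r["site"] == site and r["protocol"] == protocol
                and start <= r["time"] <= end]

    hits = get("VB", "Vegetation",
               datetime(2009, 1, 1), datetime(2009, 12, 31))
    ```

    The real data model adds sampling units, protocol versions, and collector roles to each record, but the query shape (site, protocol, time interval) is the same.
    
    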

  6. New insights into the degradation of terpenoids with OH: a study of the OH budget in the atmosphere simulation chamber SAPHIR

    Science.gov (United States)

    Kaminski, Martin; Fuchs, Hendrik; Acir, Ismail-Hakki; Bohn, Birger; Brauers, Theo; Dorn, Hans-Peter; Häseler, Rolf; Hofzumahaus, Andreas; Li, Xin; Lutz, Anna; Nehr, Sascha; Rohrer, Franz; Tillmann, Ralf; Wegener, Robert; Kiendler-Scharr, Astrid; Wahner, Andreas

    2014-05-01

    The hydroxyl radical (OH) is the main oxidation agent in the atmosphere during daytime. Recent field campaigns studying the radical chemistry in forested areas showed large discrepancies between measured and modeled OH concentration at low NOx conditions and when OH reactivity was dominated by VOC. These observations were only partially explained by the evidence for new efficient hydroxyl radical regeneration pathways in the isoprene oxidation mechanism. The question arises if other reactive VOCs with high global emission rates are also capable of additional OH recycling. Beside isoprene, monoterpenes and 2-methyl-3-buten-2-ol (MBO) are the volatile organic compounds (VOC) with the highest global emission rates. Due to their high reactivity towards OH, monoterpenes and MBO can dominate the radical chemistry of the atmosphere in forested areas under certain conditions. In the present study the photochemical degradation mechanism of α-pinene, β-pinene, limonene, myrcene and MBO was investigated in the Jülich atmosphere simulation chamber SAPHIR. The focus of this study was in particular on the investigation of the OH budget in the degradation process. The photochemical degradation of these terpenoids was studied in a dedicated series of experiments in the years 2012 and 2013. The SAPHIR chamber was equipped with instrumentation to measure radicals (OH, HO2, RO2), the total OH reactivity, all important OH precursors (O3, HONO, HCHO), the parent VOC, its main oxidation products and photolysis frequencies to investigate the radical budget in the SAPHIR chamber. All experiments were carried out under low NOx conditions (≤ 2 ppb) and atmospheric terpenoid concentrations (≤ 5 ppb) with and without addition of ozone into the SAPHIR chamber. For the investigation of the OH budget all measured OH production terms were compared to the measured OH destruction. Within the limits of accuracy of the instruments the OH budget was balanced in all cases. Consequently unaccounted

  7. Retrieval of cloud ice water path using SAPHIR on board Megha-Tropiques over the tropical ocean

    Science.gov (United States)

    Piyush, Durgesh Nandan; Goyal, Jayesh; Srinivasan, J.

    2017-04-01

    The SAPHIR sensor onboard Megha-Tropiques (MT) measures the earth-emitted radiation at frequencies near the water vapor absorption band. SAPHIR operates in six frequencies ranging from 183 ± 0.1 to 183 ± 11 GHz. These frequencies have been used to retrieve cloud ice water path (IWP) at a very high resolution. A method to retrieve IWP over the Indian Ocean region is attempted in this study. The study is in two parts: in the first part a radiative-transfer-based simulation is carried out to give insight into the use of SAPHIR frequency channels for IWP retrieval; in the second part the real observations of SAPHIR and TRMM-TMI are used for IWP retrieval. The concurrent observations of SAPHIR brightness temperatures (Tbs) and TRMM TMI IWP were used in the development of the retrieval algorithm. An eigenvector analysis was performed to identify the weight of each channel in retrieving IWP; following this, a two-channel regression-based algorithm was developed. The SAPHIR channels that are away from the water vapor absorption band were used to avoid possible water vapor contamination. When the retrieval is compared with an independent test dataset, it gives a correlation of 0.80 and an RMSE of 3.5%. SAPHIR-derived IWP has been compared with other available global IWP products such as TMI, MSPPS, CloudSat and GPM-GMI, qualitatively as well as quantitatively. A PDF comparison of SAPHIR-derived IWP shows good agreement with CloudSat. Zonal mean comparison with the recently launched GMI shows the strength of this algorithm.
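    A two-channel regression retrieval of this general shape can be sketched as an ordinary least-squares fit of IWP on two brightness temperatures. The channels, coefficients, and training data below are synthetic, not the SAPHIR algorithm's:

    ```python
    # Fit iwp = a + b*tb1 + c*tb2 by least squares via the normal equations,
    # solving the 3x3 system with Gaussian elimination (synthetic data).

    def solve3(A, b):
        """Solve a 3x3 linear system with partial pivoting."""
        M = [row[:] + [v] for row, v in zip(A, b)]
        for i in range(3):
            p = max(range(i, 3), key=lambda r: abs(M[r][i]))
            M[i], M[p] = M[p], M[i]
            for r in range(i + 1, 3):
                f = M[r][i] / M[i][i]
                M[r] = [x - f * y for x, y in zip(M[r], M[i])]
        x = [0.0] * 3
        for i in range(2, -1, -1):
            s = sum(M[i][j] * x[j] for j in range(i + 1, 3))
            x[i] = (M[i][3] - s) / M[i][i]
        return x

    def fit_two_channel(tb1, tb2, iwp):
        """Least-squares coefficients (a, b, c) for iwp ~ a + b*tb1 + c*tb2."""
        X = [[1.0, t1, t2] for t1, t2 in zip(tb1, tb2)]
        XtX = [[sum(r[i] * r[j] for r in X) for j in range(3)] for i in range(3)]
        Xty = [sum(r[i] * y for r, y in zip(X, iwp)) for i in range(3)]
        return solve3(XtX, Xty)

    # Synthetic "brightness temperatures" (K) and IWP generated from known
    # coefficients, so the fit should recover a=60, b=-0.2, c=-0.01:
    tb1 = [240.0, 250.0, 260.0, 270.0, 255.0]
    tb2 = [230.0, 245.0, 255.0, 268.0, 250.0]
    iwp = [60.0 - 0.2 * t1 - 0.01 * t2 for t1, t2 in zip(tb1, tb2)]
    a, b, c = fit_two_channel(tb1, tb2, iwp)
    ```

    The eigenvector analysis mentioned in the abstract would precede such a fit, ranking the channels by explained variance before picking the two predictors.
    
    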

  8. Estimating Parameters for the PVsyst Version 6 Photovoltaic Module Performance Model

    Energy Technology Data Exchange (ETDEWEB)

    Hansen, Clifford [Sandia National Laboratories (SNL-NM), Albuquerque, NM (United States)

    2015-10-01

    We present an algorithm to determine parameters for the photovoltaic module performance model encoded in the software package PVsyst(TM) version 6. Our method operates on current-voltage (I-V) curves measured over a range of irradiance and temperature conditions. We describe the method and illustrate its steps using data for a 36-cell crystalline silicon module. We qualitatively compare our method with one other technique for estimating parameters for the PVsyst(TM) version 6 model.
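    Models of this family are built around the implicit one-diode equation. As a hedged illustration (a generic one-diode solver with invented parameter values, not PVsyst's exact formulation or the report's estimation algorithm), module current at a given voltage can be found by Newton iteration:

    ```python
    import math

    # Solve the implicit one-diode equation for module current I at voltage V:
    #   I = IL - I0*(exp((V + I*Rs)/n_vt) - 1) - (V + I*Rs)/Rsh
    # Parameter values below are illustrative, roughly a 36-cell module.

    def diode_current(v, il=8.0, i0=1e-9, rs=0.3, rsh=300.0, n_vt=1.0):
        """Newton iteration on the one-diode equation (SI units)."""
        i = il  # initial guess: the light current
        for _ in range(50):
            e = math.exp((v + i * rs) / n_vt)
            f = il - i0 * (e - 1.0) - (v + i * rs) / rsh - i
            df = -i0 * e * rs / n_vt - rs / rsh - 1.0
            step = f / df
            i -= step
            if abs(step) < 1e-12:
                break
        return i

    # At short circuit (V = 0) the current is close to the light current IL,
    # reduced slightly by the series/shunt resistance path:
    isc = diode_current(0.0)
    ```

    A parameter-estimation method like the one described would adjust `il`, `i0`, `rs`, `rsh` and `n_vt` until curves produced by this forward solve match the measured I-V data across irradiance and temperature.
    
    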

  9. Atmospheric photochemistry of aromatic hydrocarbons: OH budgets during SAPHIR chamber experiments

    Science.gov (United States)

    Nehr, S.; Bohn, B.; Dorn, H.-P.; Fuchs, H.; Häseler, R.; Hofzumahaus, A.; Li, X.; Rohrer, F.; Tillmann, R.; Wahner, A.

    2014-07-01

    Current photochemical models developed to simulate the atmospheric degradation of aromatic hydrocarbons tend to underestimate OH radical concentrations. In order to analyse OH budgets, we performed experiments with benzene, toluene, p-xylene and 1,3,5-trimethylbenzene in the atmosphere simulation chamber SAPHIR. Experiments were conducted under low-NO conditions (typically 0.1-0.2 ppb) and high-NO conditions (typically 7-8 ppb), and starting concentrations of 6-250 ppb of aromatics, dependent on OH rate constants. For the OH budget analysis a steady-state approach was applied in which OH production and destruction rates (POH and DOH) have to be equal. The POH were determined from measurements of HO2, NO, HONO, and O3 concentrations, considering OH formation by photolysis and recycling from HO2. The DOH were calculated from measurements of the OH concentrations and total OH reactivities. The OH budgets were determined from DOH/POH ratios. The accuracy and reproducibility of the approach were assessed in several experiments using CO as a reference compound where an average ratio DOH/POH = 1.13 ± 0.19 was obtained. In experiments with aromatics, these ratios ranged within 1.1-1.6 under low-NO conditions and 0.9-1.2 under high-NO conditions. The results indicate that OH budgets during photo-oxidation experiments with aromatics are balanced within experimental accuracies. Inclusion of a further, recently proposed OH production via HO2 + RO2 reactions led to improvements under low-NO conditions but the differences were small and insignificant within the experimental errors.
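    The steady-state closure described above amounts to comparing summed OH production terms with the loss implied by the total OH reactivity. The sketch below keeps only two production terms and uses illustrative numbers, not campaign or chamber data:

    ```python
    # Simplified OH budget: production from HONO photolysis and HO2 + NO
    # recycling versus destruction by the total OH reactivity.
    # All concentrations are molecules cm^-3; all values are illustrative.

    K_HO2_NO = 8.1e-12  # cm^3 molecule^-1 s^-1, HO2 + NO -> OH + NO2 (~298 K)

    def oh_budget(j_hono, hono, ho2, no, oh, k_oh_total):
        """Return (P_OH, D_OH) in molecules cm^-3 s^-1 and the D/P ratio."""
        p_oh = j_hono * hono + K_HO2_NO * ho2 * no
        d_oh = k_oh_total * oh
        return p_oh, d_oh, d_oh / p_oh

    p, d, ratio = oh_budget(
        j_hono=1.3e-3,             # HONO photolysis frequency, s^-1
        hono=1.5e10,               # ~0.6 ppb HONO
        ho2=3.0e8, no=1.25e10,     # HO2 and ~0.5 ppb NO
        oh=5.0e6, k_oh_total=10.0, # OH and total OH reactivity (s^-1)
    )
    # A D/P ratio near 1 means the budget closes for the assumed inputs.
    ```

    The paper's analysis adds further production terms (e.g. O3 photolysis and the proposed HO2 + RO2 channel) and propagates measurement uncertainties into the D/P ratio.
    
    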

  10. psychotools - Infrastructure for Psychometric Modeling: Version 0.1-1

    OpenAIRE

    Zeileis, A.; Strobl, Carolin; Wickelmaier, F

    2011-01-01

    Infrastructure for psychometric modeling such as data classes (e.g., for paired comparisons) and basic model fitting functions (e.g., for Rasch and Bradley-Terry models). Intended especially as a common building block for fitting psychometric mixture models in package "psychomix" and psychometric tree models in package "psychotree". License: GPL-2

  11. Implementing an HL7 version 3 modeling tool from an Ecore model.

    Science.gov (United States)

    Bánfai, Balázs; Ulrich, Brandon; Török, Zsolt; Natarajan, Ravi; Ireland, Tim

    2009-01-01

    One of the main challenges of achieving interoperability using the HL7 V3 healthcare standard is the lack of clear definition and supporting tools for modeling, testing, and conformance checking. Currently, the knowledge defining the modeling is scattered around in MIF schemas, tools and specifications or simply with the domain experts. Modeling core HL7 concepts, constraints, and semantic relationships in Ecore/EMF encapsulates the domain-specific knowledge in a transparent way while unifying Java, XML, and UML in an abstract, high-level representation. Moreover, persisting and versioning the core HL7 concepts as a single Ecore context allows modelers and implementers to create, edit and validate message models against a single modeling context. The solution discussed in this paper is implemented in the new HL7 Static Model Designer as an extensible toolset integrated as a standalone Eclipse RCP application.

  12. Calibrating and Updating the Global Forest Products Model (GFPM version 2014 with BPMPD)

    Science.gov (United States)

    Joseph Buongiorno; Shushuai Zhu

    2014-01-01

    The Global Forest Products Model (GFPM) is an economic model of global production, consumption, and trade of forest products. An earlier version of the model is described in Buongiorno et al. (2003). The GFPM 2014 has data and parameters to simulate changes of the forest sector from 2010 to 2030. Buongiorno and Zhu (2014) describe how to use the model for simulation....

  13. Calibrating and updating the Global Forest Products Model (GFPM version 2016 with BPMPD)

    Science.gov (United States)

    Joseph Buongiorno; Shushuai  Zhu

    2016-01-01

    The Global Forest Products Model (GFPM) is an economic model of global production, consumption, and trade of forest products. An earlier version of the model is described in Buongiorno et al. (2003). The GFPM 2016 has data and parameters to simulate changes of the forest sector from 2013 to 2030. Buongiorno and Zhu (2015) describe how to use the model for...

  14. Aircraft/Air Traffic Management Functional Analysis Model. Version 2.0; User's Guide

    Science.gov (United States)

    Etheridge, Melvin; Plugge, Joana; Retina, Nusrat

    1998-01-01

    The Aircraft/Air Traffic Management Functional Analysis Model, Version 2.0 (FAM 2.0), is a discrete event simulation model designed to support analysis of alternative concepts in air traffic management and control. FAM 2.0 was developed by the Logistics Management Institute (LMI) under a National Aeronautics and Space Administration (NASA) contract. This document provides a guide for using the model in analysis. Those interested in making enhancements or modifications to the model should consult the companion document, Aircraft/Air Traffic Management Functional Analysis Model, Version 2.0 Technical Description.
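    FAM 2.0 is a discrete event simulation, and the generic event-queue core such a model rests on can be sketched in a few lines (the event names below are invented, not FAM's actual event types):

    ```python
    import heapq

    # Minimal discrete-event simulation kernel: a time-ordered event queue
    # with a clock that jumps from event to event.

    class Simulator:
        def __init__(self):
            self._queue = []   # heap of (time, seq, action)
            self._seq = 0      # tie-breaker for events at the same time
            self.now = 0.0

        def schedule(self, delay, action):
            """Schedule `action(sim)` to fire `delay` time units from now."""
            heapq.heappush(self._queue, (self.now + delay, self._seq, action))
            self._seq += 1

        def run(self):
            """Pop events in time order until the queue is empty."""
            while self._queue:
                self.now, _, action = heapq.heappop(self._queue)
                action(self)

    log = []
    sim = Simulator()
    sim.schedule(5.0, lambda s: log.append(("depart", s.now)))
    sim.schedule(2.0, lambda s: (log.append(("taxi", s.now)),
                                 s.schedule(1.5, lambda s2: log.append(("hold", s2.now)))))
    sim.run()
    # Events fire in time order: taxi at 2.0, hold at 3.5, depart at 5.0.
    ```

    Events may schedule further events as they fire, which is how a simulation like FAM chains aircraft and controller activities through simulated time.
    
    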

  15. Modelling and analysis of Markov reward automata (extended version)

    NARCIS (Netherlands)

    Guck, Dennis; Timmer, Mark; Hatefi, Hassan; Ruijters, Enno; Stoelinga, Mariëlle

    2014-01-01

    Costs and rewards are important ingredients for cyberphysical systems, modelling critical aspects like energy consumption, task completion, repair costs, and memory usage. This paper introduces Markov reward automata, an extension of Markov automata that allows the modelling of systems incorporating

  16. Estimating hybrid choice models with the new version of Biogeme

    OpenAIRE

    Bierlaire, Michel

    2010-01-01

    Hybrid choice models integrate many types of discrete choice modeling methods, including latent classes and latent variables, in order to capture concepts such as perceptions, attitudes, preferences, and motivation (Ben-Akiva et al., 2002). Although they provide an excellent framework to capture complex behavior patterns, their use in applications remains rare in the literature due to the difficulty of estimating the models. In this talk, we provide a short introduction to hybrid choice model...

  17. A hypocentral version of the space-time ETAS model

    Science.gov (United States)

    Guo, Yicun; Zhuang, Jiancang; Zhou, Shiyong

    2015-10-01

    The space-time Epidemic-Type Aftershock Sequence (ETAS) model is extended by incorporating the depth component of earthquake hypocentres. The depths of the direct offspring produced by an earthquake are assumed to be independent of the epicentre locations and to follow a beta distribution, whose shape parameter is determined by the depth of the parent event. This new model is verified by applying it to the Southern California earthquake catalogue. The results show that the new model fits data better than the original epicentre ETAS model and that it provides the potential for modelling and forecasting seismicity with higher resolutions.
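The key modelling step described above, drawing offspring depths from a beta distribution whose shape parameter depends on the parent event's depth, can be sketched as follows. The exact parameterization, the `max_depth` scaling, and the constant `alpha0` are illustrative assumptions, not the paper's fitted values.

```python
import random

def sample_offspring_depths(parent_depth, n, max_depth=30.0, alpha0=2.0):
    """Sample n offspring hypocentre depths from a beta distribution.

    Hypothetical parameterization: the first beta shape parameter grows
    with the parent depth; samples are rescaled to [0, max_depth] km.
    """
    a = alpha0 * (1.0 + parent_depth / max_depth)  # shape tied to parent depth (illustrative)
    b = alpha0
    return [max_depth * random.betavariate(a, b) for _ in range(n)]

depths = sample_offspring_depths(parent_depth=10.0, n=5)
```

In a full ETAS implementation this sampler would be combined with the usual epicentre and magnitude kernels; here it only illustrates how the depth component decouples from the epicentre locations.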

  18. Result Summary for the Area 5 Radioactive Waste Management Site Performance Assessment Model Version 4.113

    Energy Technology Data Exchange (ETDEWEB)

    Shott, G. J.

    2012-04-15

    Preliminary results for Version 4.113 of the Nevada National Security Site Area 5 Radioactive Waste Management Site performance assessment model are summarized. Version 4.113 includes the Fiscal Year 2011 inventory estimate.

  19. SSM - SOLID SURFACE MODELER, VERSION 6.0

    Science.gov (United States)

    Goza, S. P.

    1994-01-01

    The Solid Surface Modeler (SSM) is an interactive graphics software application for solid-shaded and wireframe three-dimensional geometric modeling. It enables the user to construct models of real-world objects as simple as boxes or as complex as Space Station Freedom. The program has a versatile user interface that, in many cases, allows mouse input for intuitive operation or keyboard input when accuracy is critical. SSM can be used as a stand-alone model generation and display program and offers high-fidelity still image rendering. Models created in SSM can also be loaded into other software for animation or engineering simulation. (See the information below for the availability of SSM with the Object Orientation Manipulator program, OOM, a graphics software application for three-dimensional rendering and animation.) Models are constructed within SSM using functions of the Create Menu to create, combine, and manipulate basic geometric building blocks called primitives. Among the simpler primitives are boxes, spheres, ellipsoids, cylinders, and plates; among the more complex primitives are tubes, skinned-surface models and surfaces of revolution. SSM also provides several methods for duplicating models. Constructive Solid Geometry (CSG) is one of the most powerful model manipulation tools provided by SSM. The CSG operations implemented in SSM are union, subtraction and intersection. SSM allows the user to transform primitives with respect to each axis, transform the camera (the user's viewpoint) about its origin, apply texture maps and bump maps to model surfaces, and define color properties; to select and combine surface-fill attributes, including wireframe, constant, and smooth; and to specify models' points of origin (the positions about which they rotate). SSM uses Euler angle transformations for calculating the results of translation and rotation operations. The user has complete control over the modeling environment from within the system. A variety of file

  20. Alternative Factor Models and Heritability of the Short Leyton Obsessional Inventory--Children's Version

    Science.gov (United States)

    Moore, Janette; Smith, Gillian W.; Shevlin, Mark; O'Neill, Francis A.

    2010-01-01

    An alternative models framework was used to test three confirmatory factor analytic models for the Short Leyton Obsessional Inventory-Children's Version (Short LOI-CV) in a general population sample of 517 young adolescent twins (11-16 years). A one-factor model as implicit in current classification systems of Obsessive-Compulsive Disorder (OCD),…

  1. The MiniBIOS model (version 1A4) at the RIVM

    NARCIS (Netherlands)

    Uijt de Haag PAM; Laheij GMH

    1993-01-01

    This report is the user's guide of the MiniBIOS model, version 1A4. The model is operational at the Laboratory of Radiation Research of the RIVM. MiniBIOS is a simulation model for calculating the transport of radionuclides in the biosphere and the consequential radiation dose to humans. The

  3. Simulating historical landscape dynamics using the landscape fire succession model LANDSUM version 4.0

    Science.gov (United States)

    Robert E. Keane; Lisa M. Holsinger; Sarah D. Pratt

    2006-01-01

    The range and variation of historical landscape dynamics could provide a useful reference for designing fuel treatments on today's landscapes. Simulation modeling is a vehicle that can be used to estimate the range of conditions experienced on historical landscapes. A landscape fire succession model called LANDSUMv4 (LANDscape SUccession Model version 4.0) is...

  7. The sapphire disc in phonographic publishing – Part One

    OpenAIRE

    Sébald, Bruno

    2010-01-01

    The expression "sapphire disc" used in this article is a generic term for flat discs that were originally played back with a spherical sapphire stylus. This technique uses a vertical (hill-and-dale) cutting process, which sets it apart from the needle disc, whose playback stylus differs and whose grooves are cut laterally. It is also physically distinguished by the pearled texture covering the surface of the disc. Historically, the ...

  8. Complexity, accuracy and practical applicability of different biogeochemical model versions

    Science.gov (United States)

    Los, F. J.; Blaas, M.

    2010-04-01

    The construction of validated biogeochemical model applications as prognostic tools for the marine environment involves a large number of choices, particularly with respect to the level of detail of the physical, chemical and biological aspects. Generally speaking, enhanced complexity might enhance veracity, accuracy and credibility. However, very complex models are not necessarily effective or efficient forecast tools. In this paper, models of varying degrees of complexity are evaluated with respect to their forecast skills. In total 11 biogeochemical model variants have been considered, based on four different horizontal grids. The applications vary in spatial resolution, in vertical resolution (2DH versus 3D), in nature of transport, in turbidity and in the number of phytoplankton species. Included models range from 15-year-old applications with relatively simple physics up to present state-of-the-art 3D models. The same year, 2003, has been simulated with all applications. During the model intercomparison it was noticed that the 'OSPAR' Goodness of Fit cost function (Villars and de Vries, 1998) leads to insufficient discrimination between models: models obtain similar scores although closer inspection of the results reveals large differences. In this paper we have therefore adopted the target diagram of Jolliff et al. (2008), which provides a concise and more contrasting picture of model skill over the entire model domain and the entire period of the simulations. Correctness in predicting the mean and the variability are separated, which enhances insight into model functioning. Using the target diagrams it is demonstrated that recent models are more consistent and have smaller biases. Graphical inspection of time series confirms this, as the level of variability appears more realistic, also given the multi-annual background statistics of the observations. Nevertheless, whether the improvements are all genuine for the particular

  9. Efficient Modelling and Generation of Markov Automata (extended version)

    NARCIS (Netherlands)

    Timmer, Mark; Katoen, Joost-Pieter; Pol, van de Jaco; Stoelinga, Mariëlle

    2012-01-01

    This paper introduces a framework for the efficient modelling and generation of Markov automata. It consists of (1) the data-rich process-algebraic language MAPA, allowing concise modelling of systems with nondeterminism, probability and Markovian timing; (2) a restricted form of the language, the M

  10. Modeling the complete Otto cycle: Preliminary version. [computer programming

    Science.gov (United States)

    Zeleznik, F. J.; Mcbride, B. J.

    1977-01-01

    A description is given of the equations and the computer program being developed to model the complete Otto cycle. The program incorporates such important features as: (1) heat transfer, (2) finite combustion rates, (3) complete chemical kinetics in the burned gas, (4) exhaust gas recirculation, and (5) manifold vacuum or supercharging. Changes in thermodynamic, kinetic and transport data as well as model parameters can be made without reprogramming. Preliminary calculations indicate that: (1) chemistry and heat transfer significantly affect composition and performance, (2) there seems to be a strong interaction among model parameters, and (3) a number of cycles must be calculated in order to obtain steady-state conditions.

  11. Flipped version of the supersymmetric strongly coupled preon model

    Science.gov (United States)

    Fajfer, S.; Mileković, M.; Tadić, D.

    1989-12-01

    In the supersymmetric SU(5) [SUSY SU(5)] composite model (which was described in an earlier paper) the fermion mass terms can be easily constructed. The SUSY SU(5)⊗U(1), i.e., flipped, composite model possesses a completely analogous composite-particle spectrum. However, in that model one cannot construct a renormalizable superpotential which would generate fermion mass terms. This contrasts with the standard noncomposite grand unified theories (GUT's) in which both the Georgi-Glashow electrical charge embedding and its flipped counterpart lead to the renormalizable theories.

  12. ONKALO rock mechanics model (RMM) - Version 2.0

    Energy Technology Data Exchange (ETDEWEB)

    Moenkkoenen, H. [WSP Finland Oy, Helsinki (Finland); Hakala, M. [KMS Hakala Oy, Nokia (Finland); Paananen, M.; Laine, E. [Geological Survey of Finland, Espoo (Finland)

    2012-02-15

    The Rock Mechanics Model of the ONKALO rock volume is a description of the significant features and parameters related to rock mechanics. The main objective is to develop a tool to predict the rock properties, quality and hence the potential for stress failure which can then be used for continuing design of the ONKALO and the repository. This is the second implementation of the Rock Mechanics Model and it includes sub-models of the intact rock strength, in situ stress, thermal properties, rock mass quality and properties of the brittle deformation zones. Because of the varying quantities of available data for the different parameters, the types of presentations also vary: some data sets can be presented in the style of a 3D block model but, in other cases, a single distribution represents the whole rock volume hosting the ONKALO. (orig.)

  13. U.S. Coastal Relief Model - Southern California Version 2

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — NGDC's U.S. Coastal Relief Model (CRM) provides a comprehensive view of the U.S. coastal zone integrating offshore bathymetry with land topography into a seamless...

  14. The Oak Ridge Competitive Electricity Dispatch (ORCED) Model Version 9

    Energy Technology Data Exchange (ETDEWEB)

    Hadley, Stanton W. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Baek, Young Sun [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States)

    2016-11-01

    The Oak Ridge Competitive Electricity Dispatch (ORCED) model dispatches power plants in a region to meet the electricity demands for any single given year up to 2030. It uses publicly available sources of data describing electric power units such as the National Energy Modeling System and hourly demands from utility submittals to the Federal Energy Regulatory Commission that are projected to a future year. The model simulates a single region of the country for a given year, matching generation to demands and predefined net exports from the region, assuming no transmission constraints within the region. ORCED can calculate a number of key financial and operating parameters for generating units and regional market outputs including average and marginal prices, air emissions, and generation adequacy. By running the model with and without changes such as generation plants, fuel prices, emission costs, plug-in hybrid electric vehicles, distributed generation, or demand response, the marginal impact of these changes can be found.
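The matching of generation to demand described above follows the familiar merit-order idea: dispatch the cheapest units first, with the marginal price set by the most expensive unit running. A minimal sketch under that assumption (the unit data and function shape are illustrative, not ORCED's actual interface):

```python
def dispatch(units, demand):
    """Dispatch generating units in merit order (cheapest first) to meet demand.

    units: list of (name, capacity_MW, marginal_cost_per_MWh) tuples.
    Returns (schedule dict of MW per dispatched unit, marginal price).
    """
    schedule, remaining, marginal_price = {}, demand, 0.0
    for name, cap, cost in sorted(units, key=lambda u: u[2]):  # ascending cost
        if remaining <= 0:
            break
        output = min(cap, remaining)       # run the unit up to its capacity
        schedule[name] = output
        remaining -= output
        marginal_price = cost              # price set by the last unit dispatched
    return schedule, marginal_price

units = [("coal", 500, 25.0), ("gas", 300, 45.0), ("nuclear", 800, 12.0)]
sched, price = dispatch(units, demand=1100)  # nuclear runs fully, coal covers the rest
```

A real dispatch model layers outages, ramp limits, and net exports on top of this core loop; the sketch shows only the price-setting mechanism.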

  15. Macro System Model (MSM) User Guide, Version 1.3

    Energy Technology Data Exchange (ETDEWEB)

    Ruth, M.; Diakov, V.; Sa, T.; Goldsby, M.

    2011-09-01

    This user guide describes the macro system model (MSM). The MSM has been designed to allow users to analyze the financial, environmental, transitional, geographical, and R&D issues associated with the transition to a hydrogen economy. Basic end users can use the MSM to answer cross-cutting questions that were previously difficult to answer in a consistent and timely manner due to various assumptions and methodologies among different models.

  16. A Systems Engineering Capability Maturity Model, Version 1.1,

    Science.gov (United States)

    1995-11-01

    The Systems Engineering Capability Maturity Model (SE-CMM) was developed as a response to industry requests for assistance in coordinating and publishing a model that would foster improvement

  17. Due Regard Encounter Model Version 1.0

    Science.gov (United States)

    2013-08-19

    Note that no existing model covers encounters between two IFR aircraft in oceanic airspace. The reason for this is that one cannot observe encounters between instrument flight rules (IFR) and non-IFR traffic beyond 12 NM. Table 1, "Encounter model categories," classifies combinations of aircraft of interest, intruder aircraft, location (CONUS or offshore), and flight rule as covered by the conventional model (C), the unconventional model (U), or not covered (X).

  18. Using the Global Forest Products Model (GFPM version 2012)

    Science.gov (United States)

    Joseph Buongiorno; Shushuai Zhu

    2012-01-01

    The purpose of this manual is to enable users of the Global Forest Products Model to: • Install and run the GFPM software • Understand the input data • Change the input data to explore different scenarios • Interpret the output The GFPM is an economic model of global production, consumption and trade of forest products (Buongiorno et al. 2003). The GFPM2012 has data...

  19. Systems Analysis Programs for Hands-on Integrated Reliability Evaluations (SAPHIRE) GEM Manual

    Energy Technology Data Exchange (ETDEWEB)

    C. L. Smith; J. Schroeder; S. T. Beck

    2008-08-01

    The Systems Analysis Programs for Hands-on Integrated Reliability Evaluations (SAPHIRE) is a software application developed for performing a complete probabilistic risk assessment (PRA) using a personal computer running the Microsoft Windows operating system. SAPHIRE is primarily funded by the U.S. Nuclear Regulatory Commission (NRC) and developed by the Idaho National Laboratory (INL). The INL's primary role in this project is that of software developer and tester. A complementary program called GEM uses the SAPHIRE analysis engine and relational database. GEM has been designed to simplify using existing PRA analyses for activities such as the NRC's Accident Sequence Precursor program. In this report, the theoretical framework behind GEM-type calculations is discussed, and guidance and examples are provided for performing evaluations with the GEM software. As part of this analysis framework, the two types of GEM analysis are outlined: initiating event assessments (where an initiator occurs) and condition assessments (where a component is failed for some length of time).

  20. Institutional Transformation Version 2.5 Modeling and Planning.

    Energy Technology Data Exchange (ETDEWEB)

    Villa, Daniel [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Mizner, Jack H. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Passell, Howard D. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Gallegos, Gerald R. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Peplinski, William John [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Vetter, Douglas W. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Evans, Christopher A. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Malczynski, Leonard A. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Addison, Marlin [Arizona State Univ., Mesa, AZ (United States); Schaffer, Matthew A. [Bridgers and Paxton Engineering Firm, Albuquerque, NM (United States); Higgins, Matthew W. [Vibrantcy, Albuquerque, NM (United States)

    2017-02-01

    Reducing the resource consumption and emissions of large institutions is an important step toward a sustainable future. Sandia National Laboratories' (SNL) Institutional Transformation (IX) project vision is to provide tools that enable planners to make well-informed decisions concerning sustainability, resource conservation, and emissions reduction across multiple sectors. The building sector has been the primary focus so far because it is the largest consumer of resources for SNL. The IX building module allows users to define the evolution of many buildings over time. The module has been created so that it can be generally applied to any set of DOE-2 (http://doe2.com) building models that have been altered to include parameters and expressions required by energy conservation measures (ECM). Once building models have been appropriately prepared, they are checked into a Microsoft Access® database. Each building can be represented by many models. This enables the capability to keep a continuous record of models in the past, which are replaced with different models as changes occur to the building. In addition to this, the building module has the capability to apply climate scenarios by applying different weather files to each simulation year. Once the database has been configured, a user interface in Microsoft Excel® is used to create scenarios with one or more ECMs. The capability to include central utility buildings (CUBs) that service more than one building with chilled water has been developed. A utility has been created that joins multiple building models into a single model. After using the utility, several manual steps are required to complete the process. Once this CUB model has been created, the individual contributions of each building are still tracked through meters. Currently, 120 building models from SNL's New Mexico and California campuses have been created. This includes all buildings at SNL greater than 10,000 sq. ft

  1. Zig-zag version of the Frenkel-Kontorova model

    DEFF Research Database (Denmark)

    Christiansen, Peter Leth; Savin, A.V.; Zolotaryuk, Alexander

    1996-01-01

    We study a generalization of the Frenkel-Kontorova model which describes a zig-zag chain of particles coupled by both the first- and second-neighbor harmonic forces and subjected to a planar substrate with a commensurate potential relief. The particles are supposed to have two degrees of freedom:...

  2. A node-based version of the cellular Potts model.

    Science.gov (United States)

    Scianna, Marco; Preziosi, Luigi

    2016-09-01

    The cellular Potts model (CPM) is a lattice-based Monte Carlo method that uses an energetic formalism to describe the phenomenological mechanisms underlying the biophysical problem of interest. We here propose a CPM-derived framework that relies on a node-based representation of cell-scale elements. This feature has relevant consequences for the overall simulation environment. First, our model can be implemented on any given domain, provided a proper discretization (which can be regular or irregular, fixed or time-evolving). Second, it allows an explicit representation of cell membranes, whose displacements realistically result in cell movement. Finally, our node-based approach can be easily interfaced with continuum mechanics or fluid dynamics models. The proposed computational environment is applied here to some simple biological phenomena, such as cell sorting and chemotactic migration, in part to analyze the performance of the underlying algorithm. This work concludes with a critical comparison of the advantages and disadvantages of our model with respect to the traditional CPM and to some similar vertex-based approaches.
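At the core of any CPM variant, lattice-based or node-based, is a Metropolis-style acceptance rule for proposed configuration changes: moves that lower the Hamiltonian are always accepted, energy-raising moves only with Boltzmann probability. A minimal sketch of that rule (the temperature parameter `T` and function name are illustrative):

```python
import math
import random

def metropolis_accept(delta_H, T, rng=random):
    """CPM-style Metropolis rule: accept a proposed update with probability
    min(1, exp(-delta_H / T)), where delta_H is the energy change."""
    if delta_H <= 0:
        return True                      # energy-lowering moves always accepted
    return rng.random() < math.exp(-delta_H / T)

always = metropolis_accept(-2.0, T=1.0)  # energy decreases: accepted
rarely = metropolis_accept(1000.0, T=1.0)  # huge barrier: effectively never accepted
```

In a full simulation this rule is applied to each proposed spin copy (traditional CPM) or node displacement (the node-based variant), with delta_H computed from adhesion, volume, and other energy terms.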

  3. Red Storm Usage Model: Version 1.12

    Energy Technology Data Exchange (ETDEWEB)

    Jefferson, Karen L.; Sturtevant, Judith E.

    2005-12-01

    Red Storm is an Advanced Simulation and Computing (ASC) funded massively parallel supercomputer located at Sandia National Laboratories (SNL). The Red Storm Usage Model (RSUM) documents the capabilities and the environment provided for the FY05 Tri-Lab Level II Limited Availability Red Storm User Environment Milestone and the FY05 SNL Level II Limited Availability Red Storm Platform Milestone. This document describes specific capabilities, tools, and procedures to support both local and remote users. The model is focused on the needs of the ASC user working in the secure computing environments at Los Alamos National Laboratory (LANL), Lawrence Livermore National Laboratory (LLNL), and SNL. Additionally, the Red Storm Usage Model maps the provided capabilities to the Tri-Lab ASC Computing Environment (ACE) requirements. The ACE requirements reflect the high performance computing requirements for the ASC community and have been updated in FY05 to reflect the community's needs. For each section of the RSUM, Appendix I maps the ACE requirements to the Limited Availability User Environment capabilities and includes a description of ACE requirements met and those requirements that are not met in that particular section. The Red Storm Usage Model, along with the ACE mappings, has been issued and vetted throughout the Tri-Lab community.

  4. Parameter Estimation in Rainfall-Runoff Modelling Using Distributed Versions of Particle Swarm Optimization Algorithm

    Directory of Open Access Journals (Sweden)

    Michala Jakubcová

    2015-01-01

    The presented paper provides an analysis of selected versions of the particle swarm optimization (PSO) algorithm. The tested versions of the PSO were combined with a shuffling mechanism, which splits the model population into complexes and performs distributed PSO optimization. One of them is a newly proposed PSO modification, APartW, which enhances global exploration and local exploitation in the parameter space during optimization through a new updating mechanism applied to the PSO inertia weight. The performance of the four selected PSO methods was tested on 11 benchmark optimization problems prepared for the special session on single-objective real-parameter optimization at CEC 2005. The results confirm that the new APartW variant is comparable with the existing distributed PSO versions AdaptW and LinTimeVarW. The distributed PSO versions were developed for solving inverse problems related to the estimation of parameters of the hydrological model Bilan. The results of a case study on a set of 30 catchments from the MOPEX database show that the tested distributed PSO versions provide suitable estimates of the Bilan model parameters and can thus be used for solving related inverse problems during calibration of the studied water-balance hydrological model.
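The inertia-weight mechanism that distinguishes the tested variants can be illustrated with a plain (non-distributed) PSO using a linearly decreasing weight, i.e. the LinTimeVarW idea; the APartW update itself is not reproduced here, and all constants are illustrative. The sketch minimizes a simple sphere function:

```python
import random

def pso(f, dim, n_particles=20, iters=200, w_start=0.9, w_end=0.4,
        c1=2.0, c2=2.0, seed=1):
    """Minimize f with PSO using a linearly decreasing inertia weight."""
    rng = random.Random(seed)
    pos = [[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]                       # personal bests
    pbest_val = [f(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]      # global best
    for t in range(iters):
        w = w_start - (w_start - w_end) * t / iters   # time-varying inertia weight
        for i in range(n_particles):
            for d in range(dim):
                vel[i][d] = (w * vel[i][d]
                             + c1 * rng.random() * (pbest[i][d] - pos[i][d])
                             + c2 * rng.random() * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            v = f(pos[i])
            if v < pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i][:], v
                if v < gbest_val:
                    gbest, gbest_val = pos[i][:], v
    return gbest, gbest_val

best, best_val = pso(lambda x: sum(xi * xi for xi in x), dim=3)
```

A high initial weight favours global exploration; shrinking it over time shifts the swarm toward local exploitation, which is the trade-off the APartW mechanism adapts dynamically.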

  5. Incremental Testing of the Community Multiscale Air Quality (CMAQ) Modeling System Version 4.7

    Science.gov (United States)

    This paper describes the scientific and structural updates to the latest release of the Community Multiscale Air Quality (CMAQ) modeling system version 4.7 (v4.7) and points the reader to additional resources for further details. The model updates were evaluated relative to obse...

  6. Connected Equipment Maturity Model Version 1.0

    Energy Technology Data Exchange (ETDEWEB)

    Butzbaugh, Joshua B. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Mayhorn, Ebony T. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Sullivan, Greg [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Whalen, Scott A. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States)

    2017-05-01

    The Connected Equipment Maturity Model (CEMM) evaluates the high-level functionality and characteristics that enable equipment to provide the four categories of energy-related services through communication with other entities (e.g., equipment, third parties, utilities, and users). The CEMM will help the U.S. Department of Energy, industry, energy efficiency organizations, and research institutions benchmark the current state of connected equipment and identify capabilities that may be attained to reach a more advanced, future state.

  7. Development of polygonal-surface version of ICRP reference phantoms: Lymphatic node modeling

    Energy Technology Data Exchange (ETDEWEB)

    Thang, Ngyen Tat; Yeom, Yeon Soo; Han, Min Cheol; Kim, Chan Hyeong [Hanyang University, Seoul (Korea, Republic of)

    2014-04-15

    Among the radiosensitive organs and tissues considered in ICRP Publication 103, the lymphatic nodes are numerous small tissues distributed widely throughout the ICRP reference phantoms. It is difficult to directly convert the lymphatic nodes of the ICRP reference voxel phantoms to polygonal surfaces. Furthermore, in the ICRP reference phantoms the lymphatic nodes were manually drawn in only six lymphatic node regions, and the reference number of lymphatic nodes reported in ICRP Publication 89 was not considered. To address these limitations, the present study developed a new lymphatic node modeling method for the polygonal-surface version of the ICRP reference phantoms. Using the developed method, lymphatic nodes were successfully modelled in the preliminary version of the ICRP male polygonal-surface phantom. Lymphatic node dose values were then calculated and compared with those of the ICRP reference male voxel phantom to validate the developed modeling method. The results demonstrate that the developed method can be used to model lymphatic nodes in polygonal-surface versions of the ICRP reference phantoms.

  8. Response Surface Modeling Tool Suite, Version 1.x

    Energy Technology Data Exchange (ETDEWEB)

    2016-07-05

    The Response Surface Modeling (RSM) Tool Suite is a collection of three codes used to generate an empirical interpolation function for a collection of drag coefficient calculations computed with Test Particle Monte Carlo (TPMC) simulations. The first code, "Automated RSM", automates the generation of a drag coefficient RSM for a particular object to a single command. "Automated RSM" first creates a Latin Hypercube Sample (LHS) of 1,000 ensemble members to explore the global parameter space. For each ensemble member, a TPMC simulation is performed and the object drag coefficient is computed. In the next step of the "Automated RSM" code, a Gaussian process is used to fit the TPMC simulations. In the final step, Markov Chain Monte Carlo (MCMC) is used to evaluate the non-analytic probability distribution function from the Gaussian process. The second code, "RSM Area", creates a look-up table for the projected area of the object based on input limits on the minimum and maximum allowed pitch and yaw angles and pitch and yaw angle intervals. The projected area from the look-up table is used to compute the ballistic coefficient of the object based on its pitch and yaw angle. An accurate ballistic coefficient is crucial in accurately computing the drag on an object. The third code, "RSM Cd", uses the RSM generated by the "Automated RSM" code and the projected area look-up table generated by the "RSM Area" code to accurately compute the drag coefficient and ballistic coefficient of the object. The user can modify the object velocity, object surface temperature, the translational temperature of the gas, the species concentrations of the gas, and the pitch and yaw angles of the object. Together, these codes allow for the accurate derivation of an object's drag coefficient and ballistic coefficient under any conditions with only knowledge of the object's geometry and mass.
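The first stage of the "Automated RSM" workflow, a Latin Hypercube Sample over the input parameter space, can be sketched as follows. The parameter names and ranges are illustrative placeholders, not the suite's actual TPMC inputs:

```python
import random

def latin_hypercube(n, bounds, seed=0):
    """Generate n Latin Hypercube samples within per-parameter bounds.

    bounds: dict mapping parameter name -> (low, high). Each parameter's
    range is split into n equal strata; exactly one point is drawn per
    stratum, and the stratum order is shuffled independently per parameter.
    """
    rng = random.Random(seed)
    samples = [{} for _ in range(n)]
    for name, (lo, hi) in bounds.items():
        strata = list(range(n))
        rng.shuffle(strata)                  # decouple ordering across parameters
        for i, s in enumerate(strata):
            u = (s + rng.random()) / n       # uniform point inside stratum s
            samples[i][name] = lo + u * (hi - lo)
    return samples

# Illustrative parameter ranges (hypothetical, not the real TPMC inputs)
lhs = latin_hypercube(1000, {"velocity_km_s": (6.5, 8.5),
                             "surface_temp_K": (100.0, 500.0)})
```

Each of the 1,000 ensemble members would then drive one TPMC run, and the resulting drag coefficients would be fit with a Gaussian process as the abstract describes.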

  9. Atmospheric photochemistry of aromatic hydrocarbons: Analysis of OH budgets during SAPHIR chamber experiments and evaluation of MCMv3.2

    Science.gov (United States)

    Nehr, S.; Bohn, B.; Brauers, T.; Dorn, H.; Fuchs, H.; Häseler, R.; Hofzumahaus, A.; Li, X.; Lu, K.; Rohrer, F.; Tillmann, R.; Wahner, A.

    2012-12-01

    Aromatic hydrocarbons, almost exclusively originating from anthropogenic sources, comprise a significant fraction of volatile organic compounds observed in urban air. The photo-oxidation of aromatics results in the formation of secondary pollutants and impacts air quality in cities, industrialized areas, and districts of dense traffic. Up-to-date photochemical oxidation schemes of the Master Chemical Mechanism (MCMv3.2) exhibit moderate performance in simulating aromatic compound degradation observed during previous environmental chamber studies. To obtain a better understanding of aromatic photo-oxidation mechanisms, we performed experiments with a number of aromatic hydrocarbons in the outdoor atmosphere simulation chamber SAPHIR located in Jülich, Germany. These chamber studies were designed to derive OH turnover rates exclusively based on experimental data. Simultaneous measurements of NOx (= NO + NO2), HOx (= OH + HO2), and the total OH loss rate constant k(OH) facilitate a detailed analysis of the OH budgets during photo-oxidation experiments. The OH budget analysis was complemented by numerical model simulations using MCMv3.2. Despite MCM's tendency to overestimate k(OH) and to underpredict radical concentrations, the OH budgets are reasonably balanced for all investigated aromatics. However, the results leave some scope for OH producing pathways that are not considered in the current MCMv3.2. An improved reaction mechanism, derived from MCMv3.2 sensitivity studies, is presented. The model performance is basically improved by changes of the mechanistic representation of ring fragmentation channels.

  10. The 'Nordic' HBV model. Description and documentation of the model version developed for the project Climate Change and Energy Production

    Energy Technology Data Exchange (ETDEWEB)

    Saelthun, N.R.

    1996-12-31

    The model described in this report is a version of the HBV model developed for the project Climate Change and Energy Production, a Nordic project aimed at evaluating the impacts of climate change in the Nordic countries, including Greenland, with emphasis on hydropower production. The model incorporates many of the features found in the individual versions of the HBV model in use in the Nordic countries, as well as some new ones. It offers catchment subdivision in altitude intervals, a simple vegetation parametrization including interception, temperature-based evapotranspiration calculation, lake evaporation, lake routing, glacier mass balance simulation, special functions for climate change simulations, etc. The user interface is very basic, and the model is primarily intended for research and educational purposes; commercial versions of the model should be used for operational implementations. 5 refs., 4 figs., 1 tab.

  11. OH regeneration from methacrolein oxidation investigated in the atmosphere simulation chamber SAPHIR

    Science.gov (United States)

    Fuchs, H.; Acir, I.-H.; Bohn, B.; Brauers, T.; Dorn, H.-P.; Häseler, R.; Hofzumahaus, A.; Holland, F.; Kaminski, M.; Li, X.; Lu, K.; Lutz, A.; Nehr, S.; Rohrer, F.; Tillmann, R.; Wegener, R.; Wahner, A.

    2014-08-01

    Hydroxyl radicals (OH) are the most important reagent for the oxidation of trace gases in the atmosphere. OH concentrations measured during recent field campaigns in isoprene-rich environments were unexpectedly large. A number of studies showed that unimolecular reactions of organic peroxy radicals (RO2) formed in the initial reaction step of isoprene with OH play an important role for the OH budget in the atmosphere at low mixing ratios of nitrogen monoxide (NO) of less than 100 pptv. It has also been suggested that similar reactions potentially play an important role for RO2 from other compounds. Here, we investigate the oxidation of methacrolein (MACR), one major oxidation product of isoprene, by OH in experiments in the simulation chamber SAPHIR under controlled atmospheric conditions. The experiments show that measured OH concentrations are approximately 50% larger than calculated by the Master Chemical Mechanism (MCM) for conditions of the experiments (NO mixing ratio of 90 pptv). The analysis of the OH budget reveals an OH source that is not accounted for in MCM, which is correlated with the production rate of RO2 radicals from MACR. In order to balance the measured OH destruction rate, 0.77 OH radicals (1σ error: ± 0.31) need to be additionally reformed from each reaction of OH with MACR. The strong correlation of the missing OH source with the production of RO2 radicals is consistent with the concept of OH formation from unimolecular isomerization and decomposition reactions of RO2. The comparison of observations with model calculations gives a lower limit of 0.03 s-1 for the reaction rate constant if the OH source is attributed to an isomerization reaction of MACR-1-OH-2-OO and MACR-2-OH-2-OO formed in the MACR + OH reaction as suggested in the literature (Crounse et al., 2012). This fast isomerization reaction would be a competitor to the reaction of this RO2 species with a minimum of 150 pptv NO. The isomerization reaction would be the dominant
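The budget argument above can be made concrete with a back-of-the-envelope calculation. Every number below is an assumed, illustrative value rather than campaign data, and the MACR + OH rate constant is only approximate.

```python
# All values are illustrative assumptions, not measurements from the study.
k_oh_total = 10.0         # s^-1, measured total OH loss rate constant k(OH)
oh = 4.0e6                # cm^-3, measured OH concentration
destruction = k_oh_total * oh             # total OH destruction rate

known_production = 3.2e7  # cm^-3 s^-1, HO2 + NO, photolysis, ... (assumed sum)
missing = destruction - known_production  # OH source unaccounted for in the MCM

# Attribute the missing source to the MACR + OH turnover:
k_macr_oh = 3.3e-11       # cm^3 s^-1, approximate MACR + OH rate constant
macr = 1.0e11             # cm^-3, roughly 4 ppbv near the surface (assumed)
turnover = k_macr_oh * macr * oh          # rate of the MACR + OH reaction
oh_yield = missing / turnover             # OH regenerated per MACR + OH reaction
print(f"OH yield per MACR + OH reaction: {oh_yield:.2f}")
```

With these assumed inputs the yield comes out near the magnitude reported in the abstract; the point of the exercise is only the bookkeeping, not the numbers.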

  12. COMODI: An ontology to characterise differences in versions of computational models in biology

    OpenAIRE

    Scharm, Martin; Waltemath, Dagmar; Mendes, Pedro; Wolkenhauer, Olaf

    2016-01-01

    Motivation: Open model repositories provide ready-to-reuse computational models of biological systems. Models within those repositories evolve over time, leading to many alternative and subsequent versions. Taken together, the underlying changes reflect a model’s provenance and thus can give valuable insights into the studied biology. Currently, however, changes cannot be semantically interpreted. To improve this situation, we developed an ontology of terms describing changes in computational...

  13. 78 FR 76791 - Availability of Version 4.0 of the Connect America Fund Phase II Cost Model; Adopting Current...

    Science.gov (United States)

    2013-12-19

    ... provide additional protection from harsh weather. This version modifies the prior methodology used for..., which provides more detail on the current model architecture, processing steps, and data sources...

  14. A new version of code Java for 3D simulation of the CCA model

    Science.gov (United States)

    Zhang, Kebo; Xiong, Hailing; Li, Chao

    2016-07-01

    In this paper we present a new version of the program implementing the CCA model. To benefit from the advantages of the latest technologies, we migrated the running environment from JDK 1.6 to JDK 1.7, and the old program was reorganized into a new framework, which improves its extensibility.

  15. All-Ages Lead Model (Aalm) Version 1.05 (External Draft Report)

    Science.gov (United States)

    The All-Ages Lead Model (AALM) Version 1.05, is an external review draft software and guidance manual. EPA released this software and associated documentation for public review and comment beginning September 27, 2005, until October 27, 2005. The public comments will be accepte...

  16. Using the Global Forest Products Model (GFPM version 2016 with BPMPD)

    Science.gov (United States)

    Joseph Buongiorno; Shushuai Zhu

    2016-01-01

     The GFPM is an economic model of global production, consumption and trade of forest products. The original formulation and several applications are described in Buongiorno et al. (2003). However, subsequent versions, including the GFPM 2016 reflect significant changes and extensions. The GFPM 2016 software uses the...

  17. [Psychometric properties of the French version of the Effort-Reward Imbalance model].

    Science.gov (United States)

    Niedhammer, I; Siegrist, J; Landre, M F; Goldberg, M; Leclerc, A

    2000-10-01

    Two main models are currently used to evaluate psychosocial factors at work: the Job Strain model developed by Karasek and the Effort-Reward Imbalance model. A French version of the first model has been validated for the dimensions of psychological demands and decision latitude. As regards the second model, which evaluates three dimensions (extrinsic effort, reward, and intrinsic effort), there are several versions in different languages, but until recently there was no validated French version. The objective of this study was to explore the psychometric properties of the French version of the Effort-Reward Imbalance model in terms of internal consistency, factorial validity, and discriminant validity. The present study was based on the GAZEL cohort and included the 10 174 subjects who were working at the French national electric and gas company (EDF-GDF) and answered the questionnaire in 1998. A French version of Effort-Reward Imbalance was included in this questionnaire. This version was obtained by a standard forward/backward translation procedure. Internal consistency was satisfactory for the three scales of extrinsic effort, reward, and intrinsic effort: Cronbach's alpha coefficients higher than 0.7 were observed. A one-factor solution was retained for the factor analysis of the scale of extrinsic effort. A three-factor solution was retained for the factor analysis of reward, and these dimensions were interpreted. The factor analysis of intrinsic effort did not support the expected four-dimension structure. The analysis of discriminant validity displayed significant associations between measures of Effort-Reward Imbalance and the variables of sex, age, education level, and occupational grade. This study is the first to support satisfactory psychometric properties of the French version of the Effort-Reward Imbalance model. However, the factorial validity of intrinsic effort could be questioned. Furthermore, as most previous studies were based on male samples

  18. User guide for MODPATH Version 7—A particle-tracking model for MODFLOW

    Science.gov (United States)

    Pollock, David W.

    2016-09-26

    MODPATH is a particle-tracking post-processing program designed to work with MODFLOW, the U.S. Geological Survey (USGS) finite-difference groundwater flow model. MODPATH version 7 is the fourth major release since its original publication. Previous versions were documented in USGS Open-File Reports 89–381 and 94–464 and in USGS Techniques and Methods 6–A41. MODPATH version 7 works with MODFLOW-2005 and MODFLOW–USG. Support for unstructured grids in MODFLOW–USG is limited to smoothed, rectangular-based quadtree and quadpatch grids. A software distribution package containing the computer program and supporting documentation, such as input instructions, output file descriptions, and example problems, is available from the USGS over the Internet (http://water.usgs.gov/ogw/modpath/).
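MODPATH's tracking rests on Pollock's semi-analytical scheme: within a finite-difference cell each velocity component varies linearly between the two cell faces, so a particle's exit time and exit position have closed forms. A minimal sketch of that scheme, assuming steady flow directed toward the high face on every axis (the real code must also handle flow reversals, stagnation points, and weak sinks):

```python
import math

def pollock_exit(p, v_in, v_out, size):
    """Semi-analytical exit time and position for one cell (Pollock's method).
    p: particle position per axis, measured from the low face;
    v_in / v_out: velocities at the low / high faces; size: cell dimensions.
    Illustrative case only: flow is assumed to point toward the high face."""
    times, state = [], []
    for x, v1, v2, dx in zip(p, v_in, v_out, size):
        A = (v2 - v1) / dx            # linear velocity gradient in the cell
        vp = v1 + A * x               # velocity at the particle location
        if abs(A) < 1e-12:
            t = (dx - x) / vp         # uniform velocity: simple kinematics
        else:
            t = math.log(v2 / vp) / A # closed-form time to reach the high face
        times.append(t)
        state.append((x, vp, A, dx))
    t_exit = min(times)               # the particle leaves through the first face reached
    exit_pos = []
    for x, vp, A, dx in state:
        if abs(A) < 1e-12:
            exit_pos.append(min(x + vp * t_exit, dx))
        else:
            exit_pos.append(min(x + vp * (math.exp(A * t_exit) - 1.0) / A, dx))
    return t_exit, exit_pos

t, pos = pollock_exit(p=[5.0, 5.0], v_in=[1.0, 0.5],
                      v_out=[2.0, 0.25], size=[10.0, 10.0])
print(t, pos)    # the particle exits through the high x face
```

Because the trajectory is analytic within each cell, no time-stepping error accumulates; the particle is simply handed to the neighbouring cell at the exit point.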

  19. Site investigation SFR. Hydrogeological modelling of SFR. Model version 0.2

    Energy Technology Data Exchange (ETDEWEB)

    Oehman, Johan (Golder Associates AB (Sweden)); Follin, Sven (SF GeoLogic (Sweden))

    2010-01-15

    The Swedish Nuclear Fuel and Waste Management Company (SKB) has conducted site investigations for a planned extension of the existing final repository for short-lived radioactive waste (SFR). A hydrogeological model is developed in three model versions, which will be used for safety assessment and design analyses. This report presents a data analysis of the currently available hydrogeological data from the ongoing Site Investigation SFR (KFR27, KFR101, KFR102A, KFR102B, KFR103, KFR104, and KFR105). The purpose of this work is to develop a preliminary hydrogeological Discrete Fracture Network model (hydro-DFN) parameterisation that can be applied in regional-scale modelling. During this work, the Geologic model had not yet been updated for the new data set. Therefore, all analyses were restricted to the rock mass outside Possible Deformation Zones, according to the Single Hole Interpretation. Owing to this circumstance, it was decided not to perform a complete hydro-DFN calibration at this stage. Instead focus was re-directed to preparatory test cases and conceptual questions with the aim of providing a sound strategy for developing the hydrogeological model SFR v. 1.0. The presented preliminary hydro-DFN consists of five fracture sets and three depth domains. A statistical/geometrical approach (connectivity analysis /Follin et al. 2005/) was performed to estimate the size (i.e. fracture radius) distribution of fractures that are interpreted as Open in geologic mapping of core data. Transmissivity relations were established based on an assumption of a correlation between the size and evaluated specific capacity of geologic features coupled to inflows measured by the Posiva Flow Log device (PFL-f data). The preliminary hydro-DFN was applied in flow simulations in order to test its performance and to explore the role of PFL-f data. Several insights were gained and a few model technical issues were raised. These are summarised in Table 5-1.

  20. a Version-Similarity Based Trust Degree Computation Model for Crowdsourcing Geographic Data

    Science.gov (United States)

    Zhou, Xiaoguang; Zhao, Yijiang

    2016-06-01

    Quality evaluation and control has become the main concern of VGI (volunteered geographic information). In this paper, trust is used as a proxy of VGI quality, and a version-similarity based trust degree computation model for crowdsourcing geographic data is presented. This model is based on the assumptions that the quality of a VGI object is mainly determined by the professional skill and integrity of its contributors (called reputation in this paper), and that a contributor's reputation is movable. The contributor's reputation is calculated using the similarity degree among the multiple versions of the same entity state. The trust degree of a VGI object is determined by the trust degree of its previous version, the reputation of the last contributor, and the modification proportion. In order to verify the presented model, a prototype system for computing the trust degree of VGI objects was developed in Visual C# 2010. The historical data of Berlin from OpenStreetMap (OSM) are employed for experiments. The experimental results demonstrate that the quality of crowdsourcing geographic data is highly positively correlated with its trustworthiness. As the evaluation is based on version similarity, not on direct subjective evaluation among users, the evaluation result is objective. Furthermore, as the movability property of the contributors' reputation is used in the presented method, our method has a higher assessment coverage than existing methods.
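The update rule described above can be sketched schematically. The linear blend and every number below are illustrative assumptions, not the paper's exact formulas: the reputation here is simply the mean similarity between a contributor's versions and later versions of the same entity state, and a new version's trust inherits from its predecessor in proportion to how much was left unchanged.

```python
def reputation(similarities):
    """Contributor reputation: mean similarity between the contributor's
    versions and later versions of the same entity state (illustrative)."""
    return sum(similarities) / len(similarities)

def trust_update(prev_trust, contributor_rep, mod_proportion):
    """Trust of a new version: the unchanged part inherits the previous
    version's trust; the modified part is backed by the contributor."""
    return (1 - mod_proportion) * prev_trust + mod_proportion * contributor_rep

rep = reputation([0.9, 0.8, 1.0])   # contributor's edits agreed well with later versions
t0 = 0.5                            # assumed neutral trust for a first version
t1 = trust_update(t0, rep, 0.4)     # the contributor modifies 40% of the object
print(round(rep, 3), round(t1, 3))
```

A heavily modified object thus leans on the editor's reputation, while a lightly touched one keeps most of the trust it had already accumulated.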

  1. Independent Verification and Validation Of SAPHIRE 8 Software Requirements Project Number: N6423 U.S. Nuclear Regulatory Commission

    Energy Technology Data Exchange (ETDEWEB)

    Kent Norris

    2009-09-01

    The purpose of the Independent Verification and Validation (IV&V) role in the evaluation of the SAPHIRE requirements definition is to assess the activities that result in the specification, documentation, and review of the requirements that the software product must satisfy, including functionality, performance, design constraints, attributes, and external interfaces. The IV&V team began this endeavor after the software engineering and software development of SAPHIRE had already been in production. IV&V reviewed the requirements specified in the NRC Form 189s to verify that these requirements were included in SAPHIRE’s Software Verification and Validation Plan (SVVP).

  2. Independent Verification and Validation Of SAPHIRE 8 Software Requirements Project Number: N6423 U.S. Nuclear Regulatory Commission

    Energy Technology Data Exchange (ETDEWEB)

    Kent Norris

    2010-03-01

    The purpose of the Independent Verification and Validation (IV&V) role in the evaluation of the SAPHIRE requirements definition is to assess the activities that result in the specification, documentation, and review of the requirements that the software product must satisfy, including functionality, performance, design constraints, attributes, and external interfaces. The IV&V team began this endeavor after the software engineering and software development of SAPHIRE had already been in production. IV&V reviewed the requirements specified in the NRC Form 189s to verify that these requirements were included in SAPHIRE’s Software Verification and Validation Plan (SVVP).

  3. A Fast Version of LASG/IAP Climate System Model and Its 1000-year Control Integration

    Institute of Scientific and Technical Information of China (English)

    ZHOU Tianjun; WU Bo; WEN Xinyu; LI Lijuan; WANG Bin

    2008-01-01

    A fast version of the State Key Laboratory of Numerical Modeling for Atmospheric Sciences and Geophysical Fluid Dynamics (LASG)/Institute of Atmospheric Physics (IAP) climate system model is briefly documented. The fast coupled model employs a low-resolution version of the atmospheric component Grid Atmospheric Model of IAP/LASG (GAMIL), with the other parts of the model, namely the oceanic component LASG/IAP Climate Ocean Model (LICOM), the land component Common Land Model (CLM), and the sea ice component from the National Center for Atmospheric Research Community Climate System Model (NCAR CCSM2), the same as in the standard version of the LASG/IAP Flexible Global Ocean Atmosphere Land System model (FGOALS_g). The parameterizations of physical and dynamical processes of the atmospheric component in the fast version are identical to those of the standard version, although some parameter values differ. However, by virtue of the reduced horizontal resolution and increased time-step of the most time-consuming atmospheric component, it runs faster by a factor of 3 and can serve as a useful tool for long-term and large-ensemble integrations. A 1000-year control simulation of the present-day climate has been completed without flux adjustments. The final 600 years of this simulation have virtually no trend in global mean sea surface temperature and are recommended for internal variability studies. Several aspects of the control simulation's mean climate and variability are evaluated against observational or reanalysis data, and the strengths and weaknesses of the control simulation are assessed. The mean atmospheric circulation is well simulated, except in high latitudes. The Asian-Australian monsoonal meridional cell shows realistic features; however, an artificial rainfall center located on the eastern periphery of the Tibetan Plateau persists throughout the year. The mean bias of SST resembles that of the standard version, appearing as a "double ITCZ" (Intertropical Convergence Zone)

  4. The Lagrangian particle dispersion model FLEXPART-WRF VERSION 3.1

    Energy Technology Data Exchange (ETDEWEB)

    Brioude, J.; Arnold, D.; Stohl, A.; Cassiani, M.; Morton, Don; Seibert, P.; Angevine, W. M.; Evan, S.; Dingwell, A.; Fast, Jerome D.; Easter, Richard C.; Pisso, I.; Bukhart, J.; Wotawa, G.

    2013-11-01

    The Lagrangian particle dispersion model FLEXPART was originally designed for calculating long-range and mesoscale dispersion of air pollutants from point sources, such as after an accident in a nuclear power plant. In the meantime FLEXPART has evolved into a comprehensive tool for atmospheric transport modeling and analysis at different scales. This multiscale need from the modeler community has encouraged new developments in FLEXPART. In this document, we present a version that works with the Weather Research and Forecasting (WRF) mesoscale meteorological model. Simple procedures on how to run FLEXPART-WRF are presented along with special options and features that differ from its predecessor versions. In addition, test case data, the source code and visualization tools are provided to the reader as supplementary material.

  5. Thermal site descriptive model. A strategy for the model development during site investigations - version 2

    Energy Technology Data Exchange (ETDEWEB)

    Back, Paer-Erik; Sundberg, Jan [Geo Innova AB (Sweden)

    2007-09-15

    This report presents a strategy for describing, predicting and visualising the thermal aspects of the site descriptive model. The strategy is an updated version of an earlier strategy applied in all SDM versions during the initial site investigation phase at the Forsmark and Oskarshamn areas. The previous methodology for thermal modelling did not take the spatial correlation fully into account during simulation. The result was that the variability of thermal conductivity in the rock mass was not sufficiently well described. Experience from earlier thermal SDMs indicated that development of the methodology was required in order to describe the spatial distribution of thermal conductivity in the rock mass in a sufficiently reliable way, taking both variability within rock types and between rock types into account. A good description of the thermal conductivity distribution is especially important for the lower tail, which matters for the design of a repository because it affects the canister spacing. The presented approach is developed to be used for the final SDM regarding thermal properties, primarily thermal conductivity. Specific objectives for the strategy of thermal stochastic modelling are: Description: statistical description of the thermal conductivity of a rock domain. Prediction: prediction of thermal conductivity in a specific rock volume. Visualisation: visualisation of the spatial distribution of thermal conductivity. The thermal site descriptive model should include the temperature distribution and thermal properties of the rock mass. The temperature is the result of the thermal processes in the repository area. Determination of thermal transport properties can be made using different methods, such as laboratory investigations, field measurements, modelling from mineralogical composition and distribution, modelling from density logging, and modelling from temperature logging. The different types of data represent different scales, which has to be

  6. A new plant chamber facility, PLUS, coupled to the atmosphere simulation chamber SAPHIR

    Science.gov (United States)

    Hohaus, T.; Kuhn, U.; Andres, S.; Kaminski, M.; Rohrer, F.; Tillmann, R.; Wahner, A.; Wegener, R.; Yu, Z.; Kiendler-Scharr, A.

    2016-03-01

    A new PLant chamber Unit for Simulation (PLUS) for use with the atmosphere simulation chamber SAPHIR (Simulation of Atmospheric PHotochemistry In a large Reaction Chamber) has been built and characterized at the Forschungszentrum Jülich GmbH, Germany. The PLUS chamber is an environmentally controlled flow-through plant chamber. Inside PLUS the natural blend of biogenic emissions of trees is mixed with synthetic air and transferred to the SAPHIR chamber, where the atmospheric chemistry and the impact of biogenic volatile organic compounds (BVOCs) can be studied in detail. In PLUS all important environmental parameters (e.g., temperature, photosynthetically active radiation (PAR), soil relative humidity (RH)) are well controlled. The gas exchange volume of 9.32 m3 which encloses the stem and the leaves of the plants is constructed such that gases are exposed to only fluorinated ethylene propylene (FEP) Teflon film and other Teflon surfaces to minimize any potential losses of BVOCs in the chamber. Solar radiation is simulated using 15 light-emitting diode (LED) panels, which have an emission strength up to 800 µmol m-2 s-1. Results of the initial characterization experiments are presented in detail. Background concentrations, mixing inside the gas exchange volume, and transfer rate of volatile organic compounds (VOCs) through PLUS under different humidity conditions are explored. Typical plant characteristics such as light- and temperature-dependent BVOC emissions are studied using six Quercus ilex trees and compared to previous studies. Results of an initial ozonolysis experiment of BVOC emissions from Quercus ilex at typical atmospheric concentrations inside SAPHIR are presented to demonstrate a typical experimental setup and the utility of the newly added plant chamber.

  7. A new plant chamber facility PLUS coupled to the atmospheric simulation chamber SAPHIR

    Science.gov (United States)

    Hohaus, T.; Kuhn, U.; Andres, S.; Kaminski, M.; Rohrer, F.; Tillmann, R.; Wahner, A.; Wegener, R.; Yu, Z.; Kiendler-Scharr, A.

    2015-11-01

    A new PLant chamber Unit for Simulation (PLUS) for use with the atmosphere simulation chamber SAPHIR (Simulation of Atmospheric PHotochemistry In a large Reaction Chamber) has been built and characterized at the Forschungszentrum Jülich GmbH, Germany. The PLUS chamber is an environmentally controlled flow-through plant chamber. Inside PLUS the natural blend of biogenic emissions of trees is mixed with synthetic air and transferred to the SAPHIR chamber, where the atmospheric chemistry and the impact of biogenic volatile organic compounds (BVOCs) can be studied in detail. In PLUS all important environmental parameters (e.g. temperature, PAR, soil RH) are well controlled. The gas exchange volume of 9.32 m3, which encloses the stem and the leaves of the plants, is constructed such that gases are exposed only to FEP Teflon film and other Teflon surfaces to minimize any potential losses of BVOCs in the chamber. Solar radiation is simulated using 15 LED panels with an emission strength of up to 800 μmol m-2 s-1. Results of the initial characterization experiments are presented in detail. Background concentrations, mixing inside the gas exchange volume, and the transfer rate of volatile organic compounds (VOCs) through PLUS under different humidity conditions are explored. Typical plant characteristics such as light- and temperature-dependent BVOC emissions are studied using six Quercus ilex trees and compared to previous studies. Results of an initial ozonolysis experiment of BVOC emissions from Quercus ilex at typical atmospheric concentrations inside SAPHIR are presented to demonstrate a typical experimental setup and the utility of the newly added plant chamber.

  8. The Hamburg Oceanic Carbon Cycle Circulation Model. Version 1. Version 'HAMOCC2s' for long time integrations

    Energy Technology Data Exchange (ETDEWEB)

    Heinze, C.; Maier-Reimer, E. [Max-Planck-Institut fuer Meteorologie, Hamburg (Germany)

    1999-11-01

    The Hamburg Ocean Carbon Cycle Circulation Model (HAMOCC, configuration HAMOCC2s) predicts the atmospheric carbon dioxide partial pressure (as induced by oceanic processes), production rates of biogenic particulate matter, and geochemical tracer distributions in the water column as well as the bioturbated sediment. Besides the carbon cycle this model version includes also the marine silicon cycle (silicic acid in the water column and the sediment pore waters, biological opal production, opal flux through the water column and opal sediment pore water interaction). The model is based on the grid and geometry of the LSG ocean general circulation model (see the corresponding manual, LSG=Large Scale Geostrophic) and uses a velocity field provided by the LSG-model in 'frozen' state. In contrast to the earlier version of the model (see Report No. 5), the present version includes a multi-layer sediment model of the bioturbated sediment zone, allowing for variable tracer inventories within the complete model system. (orig.)

  9. A one-dimensional material transfer model for HECTR version 1. 5

    Energy Technology Data Exchange (ETDEWEB)

    Geller, A.S.; Wong, C.C.

    1991-08-01

    HECTR (Hydrogen Event Containment Transient Response) is a lumped-parameter computer code developed for calculating the pressure-temperature response to combustion in a nuclear power plant containment building. The code uses a control-volume approach and subscale models to simulate the mass, momentum, and energy transfer occurring in the containment during a loss-of-coolant accident (LOCA). This document describes one-dimensional subscale models for mass and momentum transfer, and the modifications to the code required to implement them. Two problems were analyzed: the first corresponding to a standard problem studied with previous HECTR versions, the second to experiments. The performance of the revised code relative to previous HECTR versions is discussed, as is the ability of the code to model the experiments. 8 refs., 5 figs., 3 tabs.

  10. Description and evaluation of the Model for Ozone and Related chemical Tracers, version 4 (MOZART-4

    Directory of Open Access Journals (Sweden)

    L. K. Emmons

    2010-01-01

    The Model for Ozone and Related chemical Tracers, version 4 (MOZART-4) is an offline global chemical transport model particularly suited for studies of the troposphere. The updates of the model from its previous version, MOZART-2, are described, including an expansion of the chemical mechanism to include more detailed hydrocarbon chemistry and bulk aerosols. Online calculations of a number of processes, such as dry deposition, emissions of isoprene and monoterpenes, and photolysis frequencies, are now included. Results from an eight-year simulation (2000–2007) are presented and evaluated. The MOZART-4 source code and standard input files are available for download from the NCAR Community Data Portal (http://cdp.ucar.edu).

  11. The global chemistry transport model TM5: description and evaluation of the tropospheric chemistry version 3.0

    NARCIS (Netherlands)

    Huijnen, V.; Williams, J.; van Weele, M.; van Noije, T.; Krol, M.; Dentener, F.; Segers, A.; Houweling, S.; Peters, W.; de Laat, J.; Boersma, F.; Bergamaschi, P.; van Velthoven, P.; Le Sager, P.; Eskes, H.; Alkemade, F.; Scheele, R.; Nédélec, P.; Pätz, H.-W.

    2010-01-01

    We present a comprehensive description and benchmark evaluation of the tropospheric chemistry version of the global chemistry transport model TM5 (Tracer Model 5, version TM5-chem-v3.0). A full description is given concerning the photochemical mechanism, the interaction with aerosol, the treatment o

  12. New versions of the BDS/GNSS zenith tropospheric delay model IGGtrop

    Science.gov (United States)

    Li, Wei; Yuan, Yunbin; Ou, Jikun; Chai, Yanju; Li, Zishen; Liou, Yuei-An; Wang, Ningbo

    2015-01-01

    The initial IGGtrop model proposed for Chinese BDS (BeiDou System) is not very suitable for BDS/GNSS research and application due to its large data volume, although it shows a global mean accuracy of 4 cm. New versions of the global zenith tropospheric delay (ZTD) model IGGtrop are developed through further investigation of the spatial and temporal characteristics of global ZTD. From global GNSS ZTD observations and weather reanalysis data, new ZTD characteristics are found and discussed in this study, including: small and inconsistent seasonal variation in ZTD between and stable seasonal variation outside; weak zonal variation in ZTD at higher latitudes (north of and south of ) and at heights above 6 km, etc. Based on these analyses, new versions of IGGtrop, named , are established through employing corresponding strategies: using a simple algorithm for equatorial ZTD; generating an adaptive spatial grid with lower resolutions in regions where ZTD varies little; and creating a method for optimized storage of model parameters. Thus, the new models require far fewer parameters than the IGGtrop model, only 3.1-21.2 % as many. The three new versions are validated by five years of GNSS-derived ZTDs at 125 IGS sites, and it shows that: demonstrates the highest ZTD correction performance, similar to IGGtrop; requires the least model parameters; is moderate in both zenith delay prediction performance and number of model parameters. For the model, the biases at those IGS sites are between and 4.3 cm with a mean value of cm and RMS errors are between 2.1 and 8.5 cm with a mean value of 4.0 cm. Different BDS and other GNSS users can choose a suitable model according to their application and research requirements.

  13. Digital elevation models for site investigation programme in Oskarshamn. Site description version 1.2

    Energy Technology Data Exchange (ETDEWEB)

    Brydsten, Lars; Stroemgren, Maarten [Umeaa Univ. (Sweden). Dept. of Biology and Environmental Science

    2005-06-01

    In the Oskarshamn area, a digital elevation model has been produced using elevation data from many sources covering both land and sea. Many users are interested only in elevation models over land, so the model has been designed in three versions: Version 1 describes the land surface, lake water surfaces, and sea bottom. Version 2 describes the land surface, sediment levels at lake bottoms, and sea bottoms. Version 3 describes the land surface, sediment levels at lake bottoms, and the sea surface. Where the data sources were not in point form (such as existing elevation models of land or depth lines from nautical charts), they were converted to point values using GIS software. Because data from some sources often overlap with data from other sources, several tests were conducted to determine whether both overlapping sources or only one should be included in the dataset used for the interpolation. The tests resulted in the decision to use only the source judged to be of highest quality for most areas with overlapping data sources. All data were combined into a database of approximately 3.3 million points unevenly spread over an area of about 800 km{sup 2}. The large number of data points made it impractical to construct the model with a single interpolation procedure, so the area was divided into 28 sub-models that were processed one by one and finally merged into a single model. The software ArcGis 8.3 and its extension Geostatistical Analysis were used for the interpolation, with the Ordinary Kriging method. This method allows both a cross validation and a validation before the interpolation is conducted. Cross validations with different Kriging parameters were performed and the model with the most reasonable statistics was chosen. Finally, a validation with the most appropriate Kriging parameters was performed in order to verify that the model fits unmeasured localities. Since both the
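The interpolation-and-validation workflow described above can be sketched in a few lines of ordinary kriging with leave-one-out cross-validation (illustrative covariance model and parameters only; the report used ArcGIS's geostatistical tooling, not this code):

```python
import numpy as np

def exp_cov(h, sill=1.0, rng=300.0):
    """Exponential covariance model C(h) = sill * exp(-h / range)."""
    return sill * np.exp(-h / rng)

def ordinary_kriging(xy, z, xy0, sill=1.0, rng=300.0):
    """Ordinary kriging prediction of elevation at one point xy0 from data (xy, z)."""
    n = len(z)
    d = np.linalg.norm(xy[:, None, :] - xy[None, :, :], axis=-1)
    # Kriging system with a Lagrange multiplier enforcing unbiased weights
    A = np.ones((n + 1, n + 1))
    A[:n, :n] = exp_cov(d, sill, rng)
    A[n, n] = 0.0
    b = np.ones(n + 1)
    b[:n] = exp_cov(np.linalg.norm(xy - xy0, axis=1), sill, rng)
    w = np.linalg.solve(A, b)
    return float(w[:n] @ z)

def loo_cross_validation(xy, z, **kw):
    """Leave-one-out cross-validation RMSE, the statistic used to compare
    candidate kriging parameter sets before interpolating."""
    errs = [ordinary_kriging(np.delete(xy, i, 0), np.delete(z, i), xy[i], **kw) - z[i]
            for i in range(len(z))]
    return float(np.sqrt(np.mean(np.square(errs))))
```

Because kriging is an exact interpolator, predicting at a data location returns that point's own value; the cross-validation RMSE is what discriminates between parameter choices.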

  14. A new tool for modeling dune field evolution based on an accessible, GUI version of the Werner dune model

    Science.gov (United States)

    Barchyn, Thomas E.; Hugenholtz, Chris H.

    2012-02-01

    Research into aeolian dune form and dynamics has benefited from simple and abstract cellular automata computer models. Many of these models are based upon a seminal framework proposed by Werner (1995). Unfortunately, most versions of this model are not publicly available or are not provided in a format that promotes widespread use. In our view, this hinders progress in linking model simulations to empirical data (and vice versa). To this end, we introduce an accessible, graphical user interface (GUI) version of the Werner model. The novelty of this contribution is that it provides a simple interface and detailed instructions that encourage widespread use and extension of the Werner dune model for research and training purposes. By lowering barriers for researchers to develop and test hypotheses about aeolian dune and dune field patterns, this release addresses recent calls to improve access to earth surface models.

  15. Independent Verification and Validation Of SAPHIRE 8 Software Configuration Management Plan Project Number: N6423 U.S. Nuclear Regulatory Commission

    Energy Technology Data Exchange (ETDEWEB)

    Kent Norris

    2009-10-01

    The purpose of the Independent Verification and Validation (IV&V) role in the evaluation of SAPHIRE configuration management is to assess the activities that result in identifying and defining the baselines associated with the SAPHIRE software product; controlling changes to baselines and the release of baselines throughout the life cycle; recording and reporting the status of baselines and the proposed and actual changes to them; and verifying the correctness and completeness of baselines. The IV&V team began this endeavor after the software engineering and software development of SAPHIRE had already been in production.

  16. Independent Verification and Validation Of SAPHIRE 8 Software Configuration Management Plan Project Number: N6423 U.S. Nuclear Regulatory Commission

    Energy Technology Data Exchange (ETDEWEB)

    Kent Norris

    2010-02-01

    The purpose of the Independent Verification and Validation (IV&V) role in the evaluation of SAPHIRE configuration management is to assess the activities that result in identifying and defining the baselines associated with the SAPHIRE software product; controlling changes to baselines and the release of baselines throughout the life cycle; recording and reporting the status of baselines and the proposed and actual changes to them; and verifying the correctness and completeness of baselines. The IV&V team began this endeavor after the software engineering and software development of SAPHIRE had already been in production.

  17. Thermal modelling. Preliminary site description. Forsmark area - version 1.2

    Energy Technology Data Exchange (ETDEWEB)

    Sundberg, Jan; Back, Paer-Erik; Bengtsson, Anna; Laendell, Maerta [Geo Innova AB, Linkoeping (Sweden)

    2005-08-01

    This report presents the thermal site descriptive model for the Forsmark area, version 1.2. The main objective of this report is to present the thermal modelling work, in which data have been identified, quality controlled, evaluated and summarised in order to make upscaling to lithological domain level possible. The thermal conductivity at canister scale has been modelled for two different lithological domains (RFM029 and RFM012, both dominated by granite to granodiorite (101057)). A main modelling approach has been used to determine the mean value of the thermal conductivity, and two alternative/complementary approaches have been used to evaluate its spatial variability at domain level. The thermal modelling approaches are based on the lithological model for the Forsmark area, version 1.2, together with rock type models constructed from measured and calculated (from mineral composition) thermal conductivities. Results indicate that the mean thermal conductivity is expected to exhibit only a small variation between the domains: 3.46 W/(mxK) for RFM012 and 3.55 W/(mxK) for RFM029. The spatial distribution of the thermal conductivity does not follow a simple model. Lower and upper 95% confidence limits are based on the modelling results, but have been rounded off to two significant figures. Consequently, the lower limit is 2.9 W/(mxK) and the upper is 3.8 W/(mxK), applicable to both investigated domains. The temperature dependence is rather small, with a decrease in thermal conductivity of 10.0% per 100 deg C increase in temperature for the dominant rock type. There are a number of important uncertainties associated with these results. One concerns the representative scale for the canister. Another is the methodological uncertainty associated with the upscaling of thermal conductivity from cm scale to canister scale. In addition, the representativeness of rock samples is

  18. Independent Verification and Validation Of SAPHIRE 8 Risk Management Project Number: N6423 U.S. Nuclear Regulatory Commission

    Energy Technology Data Exchange (ETDEWEB)

    Kent Norris

    2009-11-01

    This report provides an evaluation of the risk management. Risk management is intended to ensure a methodology for conducting risk management planning, identification, analysis, responses, and monitoring and control activities associated with the SAPHIRE project work, and to meet the contractual commitments prepared by the sponsor; the Nuclear Regulatory Commission.

  19. Isotope effect in the formation of H2 from H2CO studied at the atmospheric simulation chamber SAPHIR

    NARCIS (Netherlands)

    Röckmann, T.; Walter, S.; Bohn, B.; Wegener, R.; Spahn, H.; Brauers, T.; Tillmann, R.; Schlosser, E.; Koppmann, R.; Rohrer, F.

    2010-01-01

    Formaldehyde of known, near-natural isotopic composition was photolyzed in the SAPHIR atmosphere simulation chamber under ambient conditions. The isotopic composition of the product H2 was used to determine the isotope effects in formaldehyde photolysis. The experiments are sensitive to the molecula

  20. Incremental testing of the Community Multiscale Air Quality (CMAQ modeling system version 4.7

    Directory of Open Access Journals (Sweden)

    K. M. Foley

    2010-03-01

    This paper describes the scientific and structural updates to the latest release of the Community Multiscale Air Quality (CMAQ) modeling system, version 4.7 (v4.7), and points the reader to additional resources for further details. The model updates were evaluated relative to observations and results from previous model versions in a series of simulations conducted to incrementally assess the effect of each change. The focus of this paper is on five major scientific upgrades: (a) updates to the heterogeneous N2O5 parameterization, (b) improvement in the treatment of secondary organic aerosol (SOA), (c) inclusion of dynamic mass transfer for coarse-mode aerosol, (d) revisions to the cloud model, and (e) new options for the calculation of photolysis rates. Incremental test simulations over the eastern United States during January and August 2006 are evaluated to assess the model response to each scientific improvement, providing explanations of differences in results between v4.7 and previously released CMAQ model versions. Particulate sulfate predictions are improved across all monitoring networks during both seasons due to cloud module updates. Numerous updates to the SOA module improve the simulation of seasonal variability and decrease the bias in organic carbon predictions at urban sites in the winter. Bias in the total mass of fine particulate matter (PM2.5) is dominated by overpredictions of unspeciated PM2.5 (PMother) in the winter and by underpredictions of carbon in the summer. The CMAQv4.7 model results show slightly worse performance for ozone predictions. However, changes to the meteorological inputs are found to have a much greater impact on ozone predictions compared to changes to the CMAQ modules described here. Model updates had little effect on existing biases in wet deposition predictions.

  1. Incremental testing of the community multiscale air quality (CMAQ modeling system version 4.7

    Directory of Open Access Journals (Sweden)

    K. M. Foley

    2009-10-01

    This paper describes the scientific and structural updates to the latest release of the Community Multiscale Air Quality (CMAQ) modeling system, version 4.7 (v4.7), and points the reader to additional resources for further details. The model updates were evaluated relative to observations and results from previous model versions in a series of simulations conducted to incrementally assess the effect of each change. The focus of this paper is on five major scientific upgrades: (a) updates to the heterogeneous N2O5 parameterization, (b) improvement in the treatment of secondary organic aerosol (SOA), (c) inclusion of dynamic mass transfer for coarse-mode aerosol, (d) revisions to the cloud model, and (e) new options for the calculation of photolysis rates. Incremental test simulations over the eastern United States during January and August 2006 are evaluated to assess the model response to each scientific improvement, providing explanations of differences in results between v4.7 and previously released CMAQ model versions. Particulate sulfate predictions are improved across all monitoring networks during both seasons due to cloud module updates. Numerous updates to the SOA module improve the simulation of seasonal variability and decrease the bias in organic carbon predictions at urban sites in the winter. Bias in the total mass of fine particulate matter (PM2.5) is dominated by overpredictions of unspeciated PM2.5 (PMother) in the winter and by underpredictions of carbon in the summer. The CMAQ v4.7 model results show slightly worse performance for ozone predictions. However, changes to the meteorological inputs are found to have a much greater impact on ozone predictions compared to changes to the CMAQ modules described here. Model updates had little effect on existing biases in wet deposition predictions.

  2. Thermal site descriptive model. A strategy for the model development during site investigations - version 2

    Energy Technology Data Exchange (ETDEWEB)

    Back, Paer-Erik; Sundberg, Jan [Geo Innova AB (Sweden)

    2007-09-15

    This report presents a strategy for describing, predicting and visualising the thermal aspects of the site descriptive model. The strategy is an updated version of an earlier strategy applied in all SDM versions during the initial site investigation phase at the Forsmark and Oskarshamn areas. The previous methodology for thermal modelling did not take the spatial correlation fully into account during simulation. The result was that the variability of thermal conductivity in the rock mass was not sufficiently well described. Experience from earlier thermal SDMs indicated that development of the methodology was required in order to describe the spatial distribution of thermal conductivity in the rock mass in a sufficiently reliable way, taking both variability within rock types and variability between rock types into account. A good description of the thermal conductivity distribution is especially important for the lower tail. This tail is important for the design of a repository because it affects the canister spacing. The presented approach is developed to be used for the final SDM regarding thermal properties, primarily thermal conductivity. Specific objectives for the strategy of thermal stochastic modelling are: Description: statistical description of the thermal conductivity of a rock domain. Prediction: prediction of thermal conductivity in a specific rock volume. Visualisation: visualisation of the spatial distribution of thermal conductivity. The thermal site descriptive model should include the temperature distribution and thermal properties of the rock mass. The temperature is the result of the thermal processes in the repository area. Determination of thermal transport properties can be made using different methods, such as laboratory investigations, field measurements, modelling from mineralogical composition and distribution, modelling from density logging and modelling from temperature logging. The different types of data represent different scales, which has to be

  3. The Lagrangian particle dispersion model FLEXPART-WRF version 3.1

    Science.gov (United States)

    Brioude, J.; Arnold, D.; Stohl, A.; Cassiani, M.; Morton, D.; Seibert, P.; Angevine, W.; Evan, S.; Dingwell, A.; Fast, J. D.; Easter, R. C.; Pisso, I.; Burkhart, J.; Wotawa, G.

    2013-11-01

    The Lagrangian particle dispersion model FLEXPART was originally designed for calculating long-range and mesoscale dispersion of air pollutants from point sources, such as that occurring after an accident in a nuclear power plant. In the meantime, FLEXPART has evolved into a comprehensive tool for atmospheric transport modeling and analysis at different scales. A need for further multiscale modeling and analysis has encouraged new developments in FLEXPART. In this paper, we present a FLEXPART version that works with the Weather Research and Forecasting (WRF) mesoscale meteorological model. We explain how to run this new model and present special options and features that differ from those of the preceding versions. For instance, a novel turbulence scheme for the convective boundary layer has been included that considers both the skewness of turbulence in the vertical velocity as well as the vertical gradient in the air density. To our knowledge, FLEXPART is the first model for which such a scheme has been developed. On a more technical level, FLEXPART-WRF now offers effective parallelization, and details on computational performance are presented here. FLEXPART-WRF output can either be in binary or Network Common Data Form (NetCDF) format, both of which have efficient data compression. In addition, test case data and the source code are provided to the reader as a Supplement. This material and future developments will be accessible at http://www.flexpart.eu.

  4. A computationally efficient description of heterogeneous freezing: A simplified version of the Soccer ball model

    Science.gov (United States)

    Niedermeier, Dennis; Ervens, Barbara; Clauss, Tina; Voigtländer, Jens; Wex, Heike; Hartmann, Susan; Stratmann, Frank

    2014-01-01

    In a recent study, the Soccer ball model (SBM) was introduced for modeling and/or parameterizing heterogeneous ice nucleation processes. The model applies classical nucleation theory. It allows for a consistent description of both apparently singular and stochastic ice nucleation behavior, by distributing contact angles over the nucleation sites of a particle population assuming a Gaussian probability density function. The original SBM utilizes the Monte Carlo technique, which hampers its usage in atmospheric models, as fairly time-consuming calculations must be performed to obtain statistically significant results. Thus, we have developed a simplified and computationally more efficient version of the SBM. We successfully used the new SBM to parameterize experimental nucleation data of, e.g., bacterial ice nucleation. Both SBMs give identical results; however, the new model is computationally less expensive as confirmed by cloud parcel simulations. Therefore, it is a suitable tool for describing heterogeneous ice nucleation processes in atmospheric models.
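The speed-up described above follows from independence of the nucleation sites: instead of Monte Carlo sampling a contact angle for every site of every particle, the per-site survival probability can be averaged over the Gaussian angle distribution once and raised to the number of sites. A sketch with a toy rate function (not the paper's classical-nucleation-theory expressions):

```python
import numpy as np

def frozen_fraction_mc(mu, sigma, n_sites, t, rate, n_particles=20000, seed=0):
    """Monte Carlo SBM: draw a contact angle for every site of every particle;
    a particle freezes if any site nucleates within time t."""
    gen = np.random.default_rng(seed)
    theta = gen.normal(mu, sigma, size=(n_particles, n_sites))
    p_site = 1.0 - np.exp(-rate(theta) * t)      # per-site freezing probability
    p_unfrozen = np.prod(1.0 - p_site, axis=1)   # all sites stay ice-free
    return float(1.0 - p_unfrozen.mean())

def frozen_fraction_fast(mu, sigma, n_sites, t, rate, n_quad=400):
    """Simplified SBM: quadrature over the Gaussian contact-angle distribution
    replaces sampling, so the result is deterministic and cheap."""
    theta = np.linspace(mu - 5 * sigma, mu + 5 * sigma, n_quad)
    pdf = np.exp(-0.5 * ((theta - mu) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))
    survive = np.exp(-rate(theta) * t)           # one site stays unfrozen
    mean_survive = np.sum(survive * pdf) * (theta[1] - theta[0])
    return float(1.0 - mean_survive ** n_sites)  # independence across sites
```

For independent sites the two estimators agree in expectation, which mirrors the paper's finding that both model versions give identical results.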

  5. Community Land Model Version 3.0 (CLM3.0) Developer's Guide

    Energy Technology Data Exchange (ETDEWEB)

    Hoffman, FM

    2004-12-21

    This document describes the guidelines adopted for software development of the Community Land Model (CLM) and serves as a reference to the entire code base of the released version of the model. The version of the code described here is Version 3.0 which was released in the summer of 2004. This document, the Community Land Model Version 3.0 (CLM3.0) User's Guide (Vertenstein et al., 2004), the Technical Description of the Community Land Model (CLM) (Oleson et al., 2004), and the Community Land Model's Dynamic Global Vegetation Model (CLM-DGVM): Technical Description and User's Guide (Levis et al., 2004) provide the developer, user, or researcher with details of implementation, instructions for using the model, a scientific description of the model, and a scientific description of the Dynamic Global Vegetation Model integrated with CLM respectively. The CLM is a single column (snow-soil-vegetation) biogeophysical model of the land surface which can be run serially (on a laptop or personal computer) or in parallel (using distributed or shared memory processors or both) on both vector and scalar computer architectures. Written in Fortran 90, CLM can be run offline (i.e., run in isolation using stored atmospheric forcing data), coupled to an atmospheric model (e.g., the Community Atmosphere Model (CAM)), or coupled to a climate system model (e.g., the Community Climate System Model Version 3 (CCSM3)) through a flux coupler (e.g., Coupler 6 (CPL6)). When coupled, CLM exchanges fluxes of energy, water, and momentum with the atmosphere. The horizontal land surface heterogeneity is represented by a nested subgrid hierarchy composed of gridcells, landunits, columns, and plant functional types (PFTs). This hierarchical representation is reflected in the data structures used by the model code. Biophysical processes are simulated for each subgrid unit (landunit, column, and PFT) independently, and prognostic variables are maintained for each subgrid unit
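The nested subgrid hierarchy can be pictured with a small data-structure sketch (hypothetical field names chosen for illustration; CLM itself implements the hierarchy in Fortran 90 with arrays and index mappings rather than objects):

```python
from dataclasses import dataclass, field

@dataclass
class PFT:
    name: str
    leaf_temperature: float = 273.15   # example prognostic variable per PFT

@dataclass
class Column:
    snow_depth: float = 0.0            # example prognostic variable per column
    pfts: list = field(default_factory=list)

@dataclass
class Landunit:
    kind: str                          # e.g. "vegetated", "lake", "urban"
    columns: list = field(default_factory=list)

@dataclass
class Gridcell:
    lat: float
    lon: float
    landunits: list = field(default_factory=list)

    def all_pfts(self):
        """Walk the gridcell -> landunit -> column -> PFT nesting."""
        for lu in self.landunits:
            for col in lu.columns:
                yield from col.pfts

# One gridcell with a single vegetated landunit holding two PFTs
cell = Gridcell(40.0, -105.0, landunits=[
    Landunit("vegetated", columns=[
        Column(pfts=[PFT("needleleaf evergreen"), PFT("c3 grass")]),
    ]),
])
```

Because state lives on the subgrid units, biophysical updates can loop over columns and PFTs independently, as the description above notes.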

  6. Statistical model of fractures and deformation zones. Preliminary site description, Laxemar subarea, version 1.2

    Energy Technology Data Exchange (ETDEWEB)

    Hermanson, Jan; Forssberg, Ola [Golder Associates AB, Stockholm (Sweden); Fox, Aaron; La Pointe, Paul [Golder Associates Inc., Redmond, WA (United States)

    2005-10-15

    The goal of this summary report is to document the data sources, software tools, experimental methods, assumptions, and model parameters in the discrete-fracture network (DFN) model for the local model volume in Laxemar, version 1.2. The model parameters presented herein are intended for use by other project modeling teams. Individual modeling teams may elect to simplify or use only a portion of the DFN model, depending on their needs. This model is not intended to be a flow model or a mechanical model; as such, only the geometrical characterization is presented. The derivations of the hydraulic or mechanical properties of the fractures or their subsurface connectivities are not within the scope of this report. This model represents analyses carried out on particular data sets. If additional data are obtained, or values for existing data are changed or excluded, the conclusions reached in this report, and the parameter values calculated, may change as well. The model volume is divided into two subareas; one located on the Simpevarp peninsula adjacent to the power plant (Simpevarp), and one further to the west (Laxemar). The DFN parameters described in this report were determined by analysis of data collected within the local model volume. As such, the final DFN model is only valid within this local model volume and the modeling subareas (Laxemar and Simpevarp) within.

  7. Aerosol specification in single-column Community Atmosphere Model version 5

    Science.gov (United States)

    Lebassi-Habtezion, B.; Caldwell, P. M.

    2015-03-01

    Single-column model (SCM) capability is an important tool for general circulation model development. In this study, the SCM mode of version 5 of the Community Atmosphere Model (CAM5) is shown to handle aerosol initialization and advection improperly, resulting in aerosol, cloud-droplet, and ice crystal concentrations which are typically much lower than observed or simulated by CAM5 in global mode. This deficiency has a major impact on stratiform cloud simulations but has little impact on convective case studies because aerosol is currently not used by CAM5 convective schemes and convective cases are typically longer in duration (so initialization is less important). By imposing fixed aerosol or cloud-droplet and crystal number concentrations, the aerosol issues described above can be avoided. Sensitivity studies using these idealizations suggest that the Meyers et al. (1992) ice nucleation scheme prevents mixed-phase cloud from existing by producing too many ice crystals. Microphysics is shown to strongly deplete cloud water in stratiform cases, indicating problems with sequential splitting in CAM5 and the need for careful interpretation of output from sequentially split climate models. Droplet concentration in the general circulation model (GCM) version of CAM5 is also shown to be far too low (~ 25 cm-3) at the southern Great Plains (SGP) Atmospheric Radiation Measurement (ARM) site.

  8. MESOI Version 2. 0: an interactive mesoscale Lagrangian puff dispersion model with deposition and decay

    Energy Technology Data Exchange (ETDEWEB)

    Ramsdell, J.V.; Athey, G.F.; Glantz, C.S.

    1983-11-01

    MESOI Version 2.0 is an interactive Lagrangian puff model for estimating the transport, diffusion, deposition and decay of effluents released to the atmosphere. The model is capable of treating simultaneous releases from as many as four release points, which may be elevated or at ground-level. The puffs are advected by a horizontal wind field that is defined in three dimensions. The wind field may be adjusted for expected topographic effects. The concentration distribution within the puffs is initially assumed to be Gaussian in the horizontal and vertical. However, the vertical concentration distribution is modified by assuming reflection at the ground and the top of the atmospheric mixing layer. Material is deposited on the surface using a source depletion, dry deposition model and a washout coefficient model. The model also treats the decay of a primary effluent species and the ingrowth and decay of a single daughter species using a first order decay process. This report is divided into two parts. The first part discusses the theoretical and mathematical bases upon which MESOI Version 2.0 is based. The second part contains the MESOI computer code. The programs were written in the ANSI standard FORTRAN 77 and were developed on a VAX 11/780 computer. 43 references, 14 figures, 13 tables.
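The reflection treatment described above is the standard image-source construction for a Gaussian puff: mirror sources below the ground and above the mixing layer keep material inside the layer. A sketch (textbook formula with an assumed symmetric horizontal spread; MESOI's exact numerics may differ):

```python
import numpy as np

def puff_concentration(x, y, z, q, xc, yc, zc, sy, sz, h_mix, n_images=3):
    """Concentration at (x, y, z) from one Gaussian puff of mass q centred at
    (xc, yc, zc), with total reflection at the ground (z = 0) and at the mixing
    height h_mix represented by image sources repeated every 2 * h_mix."""
    horiz = np.exp(-0.5 * (((x - xc) / sy) ** 2 + ((y - yc) / sy) ** 2))
    vert = 0.0
    for n in range(-n_images, n_images + 1):
        for zs in (zc, -zc):           # real source plus its ground image
            vert += np.exp(-0.5 * ((z - zs - 2 * n * h_mix) / sz) ** 2)
    norm = (2 * np.pi) ** 1.5 * sy * sy * sz
    return q * horiz * vert / norm
```

When the puff is far from both boundaries the image terms vanish and the free-space Gaussian value is recovered; near the ground the reflected term roughly doubles the surface concentration.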

  9. A p-version embedded model for simulation of concrete temperature fields with cooling pipes

    Directory of Open Access Journals (Sweden)

    Sheng Qiang

    2015-07-01

    Pipe cooling is an effective method of mass concrete temperature control, but its accurate and convenient numerical simulation remains a cumbersome problem. An improved embedded model, considering the water temperature variation along the pipe, was proposed for simulating the temperature field of early-age concrete structures containing cooling pipes. The improved model was verified with an engineering example. The p-version self-adaption algorithm for the improved embedded model was then deduced, and the initial values and boundary conditions were examined. Comparison of several numerical samples shows that the proposed model provides satisfactory precision and higher efficiency: the analysis efficiency can be doubled at the same precision, even for a large-scale element. The p-version algorithm can fit grids of different sizes for the temperature field simulation. A further convenience of the proposed algorithm is that more pipe segments can be located in one element without requiring as regular an element shape as in the explicit model.

  10. Independent Verification and Validation Of SAPHIRE 8 Volume 3 Users' Guide Project Number: N6423 U.S. Nuclear Regulatory Commission

    Energy Technology Data Exchange (ETDEWEB)

    Kent Norris

    2010-03-01

    The purpose of the Independent Verification and Validation (IV&V) role in the evaluation of the SAPHIRE 8 Volume 3 Users’ Guide is to assess the user documentation for its completeness, correctness, and consistency with respect to requirements for user interface and for any functionality that can be invoked by the user. The IV&V team began this endeavor after the software engineering and software development of SAPHIRE had already been in production.

  11. The Lagrangian particle dispersion model FLEXPART-WRF version 3.0

    Directory of Open Access Journals (Sweden)

    J. Brioude

    2013-07-01

    The Lagrangian particle dispersion model FLEXPART was originally designed for calculating long-range and mesoscale dispersion of air pollutants from point sources, such as after an accident in a nuclear power plant. In the meantime FLEXPART has evolved into a comprehensive tool for atmospheric transport modeling and analysis at different scales. This multiscale need has encouraged new developments in FLEXPART. In this document, we present a FLEXPART version that works with the Weather Research and Forecasting (WRF) mesoscale meteorological model. We explain how to run it and present special options and features that differ from its predecessor versions. For instance, a novel turbulence scheme for the convective boundary layer has been included that considers both the skewness of turbulence in the vertical velocity as well as the vertical gradient in the air density. To our knowledge, FLEXPART is the first model for which such a scheme has been developed. On a more technical level, FLEXPART-WRF now offers effective parallelization, and details on computational performance are presented here. FLEXPART-WRF output can either be in binary or Network Common Data Form (NetCDF) format with efficient data compression. In addition, test case data and the source code are provided to the reader as a Supplement. This material and future developments will be accessible at http://www.flexpart.eu.

  12. The Lagrangian particle dispersion model FLEXPART-WRF version 3.0

    Science.gov (United States)

    Brioude, J.; Arnold, D.; Stohl, A.; Cassiani, M.; Morton, D.; Seibert, P.; Angevine, W.; Evan, S.; Dingwell, A.; Fast, J. D.; Easter, R. C.; Pisso, I.; Burkhart, J.; Wotawa, G.

    2013-07-01

    The Lagrangian particle dispersion model FLEXPART was originally designed for calculating long-range and mesoscale dispersion of air pollutants from point sources, such as after an accident in a nuclear power plant. In the meantime FLEXPART has evolved into a comprehensive tool for atmospheric transport modeling and analysis at different scales. This multiscale need has encouraged new developments in FLEXPART. In this document, we present a FLEXPART version that works with the Weather Research and Forecasting (WRF) mesoscale meteorological model. We explain how to run it and present special options and features that differ from its predecessor versions. For instance, a novel turbulence scheme for the convective boundary layer has been included that considers both the skewness of turbulence in the vertical velocity as well as the vertical gradient in the air density. To our knowledge, FLEXPART is the first model for which such a scheme has been developed. On a more technical level, FLEXPART-WRF now offers effective parallelization, and details on computational performance are presented here. FLEXPART-WRF output can either be in binary or Network Common Data Form (NetCDF) format with efficient data compression. In addition, test case data and the source code are provided to the reader as a Supplement. This material and future developments will be accessible at http://www.flexpart.eu.

  13. Technical note: The Lagrangian particle dispersion model FLEXPART version 6.2

    Directory of Open Access Journals (Sweden)

    A. Stohl

    2005-01-01

    The Lagrangian particle dispersion model FLEXPART was originally (about 8 years ago) designed for calculating the long-range and mesoscale dispersion of air pollutants from point sources, such as after an accident in a nuclear power plant. In the meantime FLEXPART has evolved into a comprehensive tool for atmospheric transport modeling and analysis. Its application fields have been extended from air pollution studies to other topics where atmospheric transport plays a role (e.g., exchange between the stratosphere and troposphere, or the global water cycle). It has evolved into a true community model that is now being used by at least 25 groups from 14 different countries and is seeing both operational and research applications. A user manual has been kept up to date over the years and is distributed on an internet page along with the model's source code. In this note we provide a citeable technical description of FLEXPART's latest version (6.2).
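At the heart of such a Lagrangian particle model is a Langevin update of each particle's turbulent velocity, followed by advection with the resolved wind. A minimal homogeneous-turbulence sketch (FLEXPART's actual scheme adds drift-correction terms for inhomogeneous turbulence, and the WRF version also accounts for the air-density gradient):

```python
import numpy as np

def langevin_step(w, dt, tau_w, sigma_w, gen):
    """One Langevin update of the turbulent vertical velocity w for an array of
    particles: exponential decorrelation over dt plus a random forcing chosen
    so the stationary standard deviation is sigma_w."""
    r = np.exp(-dt / tau_w)                      # velocity autocorrelation over dt
    return r * w + np.sqrt(1.0 - r * r) * sigma_w * gen.standard_normal(w.shape)

def advect(z, w, w_mean, dt):
    """Move particles with the resolved wind plus the turbulent velocity."""
    return z + (w_mean + w) * dt

# Relax an ensemble of particles to the stationary turbulent-velocity statistics
gen = np.random.default_rng(1)
w = np.zeros(50000)
for _ in range(200):
    w = langevin_step(w, dt=10.0, tau_w=100.0, sigma_w=0.5, gen=gen)
```

After many decorrelation times the ensemble's velocity spread settles at `sigma_w`, which is the property that makes the scheme consistent with the prescribed turbulence statistics.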

  14. Muninn: A versioning flash key-value store using an object-based storage model

    OpenAIRE

    Kang, Y.; Pitchumani, R; Marlette, T; Miller, El

    2014-01-01

    While non-volatile memory (NVRAM) devices have the potential to alleviate the trade-off between performance, scalability, and energy in storage and memory subsystems, a block interface and storage subsystems designed for slow I/O devices make it difficult to efficiently exploit NVRAMs in a portable and extensible way. We propose an object-based storage model as a way of addressing the shortfalls of the current interfaces. Through the design of Muninn, an object-based versioning key-value st...
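The versioning idea — every write appends a new version while older versions stay readable — can be sketched as a toy in-memory store (loosely in the spirit of Muninn; its actual object-based NVRAM layout and indexing are far more involved):

```python
class VersionedKV:
    """Toy versioning key-value store: put() appends a new version of a key,
    get() reads either the latest or any historical version."""

    def __init__(self):
        self._log = {}                  # key -> list of (version, value) pairs

    def put(self, key, value):
        versions = self._log.setdefault(key, [])
        versions.append((len(versions) + 1, value))
        return len(versions)            # version number just written

    def get(self, key, version=None):
        versions = self._log[key]
        if version is None:
            return versions[-1][1]      # latest version
        return versions[version - 1][1] # historical read
```

Appending rather than overwriting also matches flash and NVRAM write characteristics, which is one motivation for version-aware designs in this space.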

  15. QMM – A Quarterly Macroeconomic Model of the Icelandic Economy. Version 2.0

    DEFF Research Database (Denmark)

    Ólafsson, Tjörvi

    This paper documents and describes Version 2.0 of the Quarterly Macroeconomic Model of the Central Bank of Iceland (QMM). QMM and the underlying quarterly database have been under construction since 2001 at the Research and Forecasting Division of the Economics Department at the Bank and was first...... implemented in the forecasting round for the Monetary Bulletin 2006/1 in March 2006. QMM is used by the Bank for forecasting and various policy simulations and therefore plays a key role as an organisational framework for viewing the medium-term future when formulating monetary policy at the Bank. This paper...

  16. User’s Manual for the Navy Coastal Ocean Model (NCOM) Version 4.0

    Science.gov (United States)

    2009-02-06

    Manual for the Navy Coastal Ocean Model (NCOM) Version 4.0. Paul J. Martin, Charlie N. Barron, Lucy F. Smedstad, Timothy J. Campbell, Alan J. Wallcraft... Timothy J. Campbell, Alan J. Wallcraft, Robert C. Rhodes, Clark Rowley, Tamara L. Townsend, and Suzanne N. Carroll, Naval Research Laboratory... 1997-1998 ENSO event. Bound.-Layer Meteor. 103: 439-458. Large, W.G., J.C. McWilliams, and S. Doney (1994). Oceanic vertical mixing: a review and

  17. Navy Coastal Ocean Model (NCOM) Version 4.0 (User’s Manual)

    Science.gov (United States)

    2009-02-06

    Manual for the Navy Coastal Ocean Model (NCOM) Version 4.0. Paul J. Martin, Charlie N. Barron, Lucy F. Smedstad, Timothy J. Campbell, Alan J. Wallcraft... Timothy J. Campbell, Alan J. Wallcraft, Robert C. Rhodes, Clark Rowley, Tamara L. Townsend, and Suzanne N. Carroll, Naval Research Laboratory... the 1997-1998 ENSO event. Bound.-Layer Meteor. 103: 439-458. Large, W.G., J.C. McWilliams, and S. Doney (1994). Oceanic vertical mixing: a review

  19. Update of the Polar SWIFT model for polar stratospheric ozone loss (Polar SWIFT version 2)

    Science.gov (United States)

    Wohltmann, Ingo; Lehmann, Ralph; Rex, Markus

    2017-07-01

    The Polar SWIFT model is a fast scheme for calculating the chemistry of stratospheric ozone depletion in polar winter. It is intended for use in global climate models (GCMs) and Earth system models (ESMs) to enable the simulation of mutual interactions between the ozone layer and climate. To date, climate models often use prescribed ozone fields, since a full stratospheric chemistry scheme is computationally very expensive. Polar SWIFT is based on a set of coupled differential equations, which simulate the polar vortex-averaged mixing ratios of the key species involved in polar ozone depletion on a given vertical level. These species are O3, chemically active chlorine (ClOx), HCl, ClONO2 and HNO3. The only external input parameters that drive the model are the fraction of the polar vortex in sunlight and the fraction of the polar vortex below the temperatures necessary for the formation of polar stratospheric clouds. Here, we present an update of the Polar SWIFT model introducing several improvements over the original model formulation. In particular, the model is now trained on vortex-averaged reaction rates of the ATLAS Chemistry and Transport Model, which enables a detailed look at individual processes and an independent validation of the different parameterizations contained in the differential equations. The training of the original Polar SWIFT model was based on fitting complete model runs to satellite observations and did not allow for this. A revised formulation of the system of differential equations is developed, which closely fits vortex-averaged reaction rates from ATLAS that represent the main chemical processes influencing ozone. In addition, a parameterization for the HNO3 change by denitrification is included. The rates of change of the concentrations of the chemical species of the Polar SWIFT model are purely chemical rates of change in the new version, whereas in the original Polar SWIFT model, they included a transport effect caused by the

  20. Observation of the positive-strangeness pentaquark $\\Theta^+$ in photoproduction with the SAPHIR detector at ELSA

    CERN Document Server

    Barth, J; Ernst, J; Glander, K H; Hannappel, J; Jöpen, N; Kalinowsky, H; Klein, F; Klempt, E; Lawall, R; Link, J; Menze, D W; Neuerburg, W; Ostrick, M; Paul, E; Van Pee, H; Schulday, I; Schwille, W J; Wiegers, B; Wieland, F W; Wisskirchen, J; Wu, C

    2003-01-01

    The positive-strangeness baryon resonance $\\Theta^+$ is observed in photoproduction of the $\\rm nK^+K^0_s$ final state with the SAPHIR detector at the Bonn ELectron Stretcher Accelerator ELSA. It is seen as a peak in the $\\rm nK^+$ invariant mass distribution with a $4.8\\sigma$ confidence level. We find a mass $\\rm M_{\\Theta^+} = 1540\\pm 4\\pm 2$ MeV and an upper limit of the width $\\rm \\Gamma_{\\Theta^+} < 25$ MeV at 90% c.l. The photoproduction cross section for $\\rm\\bar K^0\\Theta^+$ is on the order of 300 nb. From the absence of a signal in the $\\rm pK^+$ invariant mass distribution in $\\rm\\gamma p\\to pK^+K^-$ at the expected strength we conclude that the $\\Theta^+$ must be isoscalar.

  1. Flood predictions using the parallel version of distributed numerical physical rainfall-runoff model TOPKAPI

    Science.gov (United States)

    Boyko, Oleksiy; Zheleznyak, Mark

    2015-04-01

    The original numerical code TOPKAPI-IMMS of the distributed rainfall-runoff model TOPKAPI (Todini et al., 1996-2014) was developed and implemented in Ukraine. A parallel version of the code has recently been developed for use on multiprocessor systems: multicore PCs and clusters. The algorithm is based on a binary-tree decomposition of the watershed to balance the amount of computation across processors/cores. The Message Passing Interface (MPI) protocol is used as the parallel computing framework. The numerical efficiency of the parallelization algorithm is demonstrated in case studies of flood prediction for mountain watersheds of the Ukrainian Carpathian region. The modeling results are compared with predictions based on lumped-parameter models.
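    The record describes the parallelization only in outline. As a rough illustration of balancing computation via recursive binary-tree decomposition, the sketch below splits a list of per-cell computation costs into near-equal halves until one partition per worker remains; the cost model and prefix-split criterion are assumptions for illustration, not the TOPKAPI-IMMS code.

```python
def split_balanced(cell_costs):
    """Split per-cell computation costs into two contiguous halves of
    near-equal total cost (assumes cells are ordered along the network)."""
    total = sum(cell_costs)
    running, split = 0, 0
    for i, c in enumerate(cell_costs):
        if running + c > total / 2:
            break
        running += c
        split = i + 1
    return cell_costs[:split], cell_costs[split:]

def decompose(cell_costs, n_workers):
    """Recursive binary-tree decomposition: halve the domain until one
    partition per worker remains (n_workers assumed a power of two)."""
    if n_workers == 1:
        return [cell_costs]
    left, right = split_balanced(cell_costs)
    half = n_workers // 2
    return decompose(left, half) + decompose(right, half)
```

In an MPI setting, each resulting partition would be assigned to one rank; the balancing happens entirely in this setup phase.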

  2. The Systems Biology Markup Language (SBML) Level 3 Package: Qualitative Models, Version 1, Release 1.

    Science.gov (United States)

    Chaouiya, Claudine; Keating, Sarah M; Berenguier, Duncan; Naldi, Aurélien; Thieffry, Denis; van Iersel, Martijn P; Le Novère, Nicolas; Helikar, Tomáš

    2015-09-04

    Quantitative methods for modelling biological networks require an in-depth knowledge of the biochemical reactions and their stoichiometric and kinetic parameters. In many practical cases, this knowledge is missing. This has led to the development of several qualitative modelling methods using information such as, for example, gene expression data coming from functional genomic experiments. The SBML Level 3 Version 1 Core specification does not provide a mechanism for explicitly encoding qualitative models, but it does provide a mechanism for SBML packages to extend the Core specification and add additional syntactical constructs. The SBML Qualitative Models package for SBML Level 3 adds features so that qualitative models can be directly and explicitly encoded. The approach taken in this package is essentially based on the definition of regulatory or influence graphs. The SBML Qualitative Models package defines the structure and syntax necessary to describe qualitative models that associate discrete levels of activities with entity pools and the transitions between states that describe the processes involved. This is particularly suited to logical models (Boolean or multi-valued) and some classes of Petri net models can be encoded with the approach.

  3. User guide for MODPATH version 6 - A particle-tracking model for MODFLOW

    Science.gov (United States)

    Pollock, David W.

    2012-01-01

    MODPATH is a particle-tracking post-processing model that computes three-dimensional flow paths using output from groundwater flow simulations based on MODFLOW, the U.S. Geological Survey (USGS) finite-difference groundwater flow model. This report documents MODPATH version 6. Previous versions were documented in USGS Open-File Reports 89-381 and 94-464. The program uses a semianalytical particle-tracking scheme that allows an analytical expression of a particle's flow path to be obtained within each finite-difference grid cell. A particle's path is computed by tracking the particle from one cell to the next until it reaches a boundary, an internal sink/source, or satisfies another termination criterion. Data input to MODPATH consists of a combination of MODFLOW input data files, MODFLOW head and flow output files, and other input files specific to MODPATH. Output from MODPATH consists of several output files, including a number of particle coordinate output files intended to serve as input data for other programs that process, analyze, and display the results in various ways. MODPATH is written in FORTRAN and can be compiled by any FORTRAN compiler that fully supports FORTRAN-2003 or by most commercially available FORTRAN-95 compilers that support the major FORTRAN-2003 language extensions.
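    The semianalytical scheme referenced above is widely known as Pollock's method: with face velocities interpolated linearly across a cell, a particle's trajectory and exit time have closed-form expressions. A one-dimensional sketch of that idea follows (illustrative only; the MODPATH source handles three dimensions, sinks, and multiple termination criteria).

```python
import math

def pollock_exit_1d(x1, x2, v1, v2, xp):
    """Exact exit time for a particle at xp in a cell [x1, x2] with face
    velocities v1, v2 interpolated linearly (assumes v1, v2 > 0, so the
    particle exits through face x2)."""
    A = (v2 - v1) / (x2 - x1)      # velocity gradient within the cell
    vp = v1 + A * (xp - x1)        # velocity at the particle position
    if abs(A) < 1e-12:             # uniform velocity: linear motion
        return (x2 - xp) / vp
    return math.log(v2 / vp) / A   # exponential-in-time path

def pollock_position_1d(x1, x2, v1, v2, xp, t):
    """Particle position after time t under the same assumptions."""
    A = (v2 - v1) / (x2 - x1)
    vp = v1 + A * (xp - x1)
    if abs(A) < 1e-12:
        return xp + vp * t
    return x1 + (vp * math.exp(A * t) - v1) / A
```

In the full method this computation is done per coordinate direction, and the smallest of the candidate exit times selects the face through which the particle leaves the cell.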

  4. Landfill Gas Energy Cost Model Version 3.0 (LFGcost-Web V3 ...

    Science.gov (United States)

    To help stakeholders estimate the costs of a landfill gas (LFG) energy project, in 2002 LMOP developed a cost tool (LFGcost). Since then, LMOP has routinely updated the tool to reflect changes in the LFG energy industry. Initially, the model was designed for EPA to assist landfills in evaluating the economic and financial feasibility of LFG energy project development. In 2014, LMOP developed a public version of the model, LFGcost-Web (Version 3.0), to allow landfill and industry stakeholders to evaluate project feasibility on their own. LFGcost-Web can analyze costs for 12 energy recovery project types. These project costs can be estimated with or without the costs of a gas collection and control system (GCCS). The EPA used select equations from LFGcost-Web to estimate costs of the regulatory options in the 2015 proposed revisions to the MSW Landfills Standards of Performance (also known as New Source Performance Standards) and the Emission Guidelines (hereinafter referred to collectively as the Landfill Rules). More specifically, equations derived from LFGcost-Web were applied to each landfill expected to be impacted by the Landfill Rules to estimate annualized installed capital costs and annual O&M costs of a gas collection and control system. In addition, after applying the LFGcost-Web equations to the list of landfills expected to require a GCCS in year 2025 as a result of the proposed Landfill Rules, the regulatory analysis evaluated whether electr

  5. LERC-SLAM - THE NASA LEWIS RESEARCH CENTER SATELLITE LINK ATTENUATION MODEL PROGRAM (IBM PC VERSION)

    Science.gov (United States)

    Manning, R. M.

    1994-01-01

    The frequency and intensity of rain attenuation affecting the communication between a satellite and an earth terminal is an important consideration in planning satellite links. The NASA Lewis Research Center Satellite Link Attenuation Model Program (LeRC-SLAM) provides a static and dynamic statistical assessment of the impact of rain attenuation on a communications link established between an earth terminal and a geosynchronous satellite. The program is designed for use in the specification, design and assessment of satellite links for any terminal location in the continental United States. The basis for LeRC-SLAM is the ACTS Rain Attenuation Prediction Model, which uses a log-normal cumulative probability distribution to describe the random process of rain attenuation on satellite links. The derivation of the statistics for the rainrate process at the specified terminal location relies on long term rainfall records compiled by the U.S. Weather Service during time periods of up to 55 years in length. The theory of extreme value statistics is also utilized. The user provides 1) the longitudinal position of the satellite in geosynchronous orbit, 2) the geographical position of the earth terminal in terms of latitude and longitude, 3) the height above sea level of the terminal site, 4) the yearly average rainfall at the terminal site, and 5) the operating frequency of the communications link (within 1 to 1000 GHz, inclusive). Based on the yearly average rainfall at the terminal location, LeRC-SLAM calculates the relevant rain statistics for the site using an internal data base. The program then generates rain attenuation data for the satellite link. This data includes a description of the static (i.e., yearly) attenuation process, an evaluation of the cumulative probability distribution for attenuation effects, and an evaluation of the probability of fades below selected fade depths. In addition, LeRC-SLAM calculates the elevation and azimuth angles of the terminal
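    The record states that the ACTS model underlying LeRC-SLAM describes rain attenuation with a log-normal cumulative probability distribution. A minimal sketch of the resulting exceedance calculation is below; the parameter names and values are purely illustrative, not the ACTS model coefficients, which LeRC-SLAM derives from site rainfall records.

```python
import math

def exceedance_prob(a_db, median_db, sigma_ln):
    """Probability that attenuation exceeds a_db (dB), assuming the
    attenuation A is log-normal: ln(A) ~ Normal(ln(median_db), sigma_ln^2).
    Illustrative parameterization only."""
    z = (math.log(a_db) - math.log(median_db)) / sigma_ln
    # complementary CDF of the standard normal, via erfc
    return 0.5 * math.erfc(z / math.sqrt(2.0))
```

By construction, the median attenuation is exceeded half the time, and the exceedance probability falls monotonically with the fade depth.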

  6. LERC-SLAM - THE NASA LEWIS RESEARCH CENTER SATELLITE LINK ATTENUATION MODEL PROGRAM (MACINTOSH VERSION)

    Science.gov (United States)

    Manning, R. M.

    1994-01-01

    The frequency and intensity of rain attenuation affecting the communication between a satellite and an earth terminal is an important consideration in planning satellite links. The NASA Lewis Research Center Satellite Link Attenuation Model Program (LeRC-SLAM) provides a static and dynamic statistical assessment of the impact of rain attenuation on a communications link established between an earth terminal and a geosynchronous satellite. The program is designed for use in the specification, design and assessment of satellite links for any terminal location in the continental United States. The basis for LeRC-SLAM is the ACTS Rain Attenuation Prediction Model, which uses a log-normal cumulative probability distribution to describe the random process of rain attenuation on satellite links. The derivation of the statistics for the rainrate process at the specified terminal location relies on long term rainfall records compiled by the U.S. Weather Service during time periods of up to 55 years in length. The theory of extreme value statistics is also utilized. The user provides 1) the longitudinal position of the satellite in geosynchronous orbit, 2) the geographical position of the earth terminal in terms of latitude and longitude, 3) the height above sea level of the terminal site, 4) the yearly average rainfall at the terminal site, and 5) the operating frequency of the communications link (within 1 to 1000 GHz, inclusive). Based on the yearly average rainfall at the terminal location, LeRC-SLAM calculates the relevant rain statistics for the site using an internal data base. The program then generates rain attenuation data for the satellite link. This data includes a description of the static (i.e., yearly) attenuation process, an evaluation of the cumulative probability distribution for attenuation effects, and an evaluation of the probability of fades below selected fade depths. In addition, LeRC-SLAM calculates the elevation and azimuth angles of the terminal

  7. Evaluation of the Snow Simulations from the Community Land Model, Version 4 (CLM4)

    Science.gov (United States)

    Toure, Ally M.; Rodell, Matthew; Yang, Zong-Liang; Beaudoing, Hiroko; Kim, Edward; Zhang, Yongfei; Kwon, Yonghwan

    2015-01-01

    This paper evaluates the simulation of snow by the Community Land Model, version 4 (CLM4), the land model component of the Community Earth System Model, version 1.0.4 (CESM1.0.4). CLM4 was run in an offline mode forced with the corrected land-only replay of the Modern-Era Retrospective Analysis for Research and Applications (MERRA-Land) and the output was evaluated for the period from January 2001 to January 2011 over the Northern Hemisphere poleward of 30 deg N. Simulated snow-cover fraction (SCF), snow depth, and snow water equivalent (SWE) were compared against a set of observations including the Moderate Resolution Imaging Spectroradiometer (MODIS) SCF, the Interactive Multisensor Snow and Ice Mapping System (IMS) snow cover, the Canadian Meteorological Centre (CMC) daily snow analysis products, snow depth from the National Weather Service Cooperative Observer (COOP) program, and Snowpack Telemetry (SNOTEL) SWE observations. CLM4 SCF was converted into snow-cover extent (SCE) to compare with MODIS SCE. It showed good agreement, with a correlation coefficient of 0.91 and an average bias of -1.54 x 10(exp 2) sq km. Overall, CLM4 agreed well with IMS snow cover, with the percentage of correctly modeled snow-no snow being 94%. CLM4 snow depth and SWE agreed reasonably well with the CMC product, with the average bias (RMSE) of snow depth and SWE being 0.044m (0.19 m) and -0.010m (0.04 m), respectively. CLM4 underestimated SNOTEL SWE and COOP snow depth. This study demonstrates the need to improve the CLM4 snow estimates and constitutes a benchmark against which improvement of the model through data assimilation can be measured.

  8. Version 3.0 of code Java for 3D simulation of the CCA model

    Science.gov (United States)

    Zhang, Kebo; Zuo, Junsen; Dou, Yifeng; Li, Chao; Xiong, Hailing

    2016-10-01

    In this paper we provide a new version of the program, replacing the previous version. The frequency of traversing the cluster list was reduced and some code blocks were properly optimized; in addition, we appended and revised the source-code comments for some methods and attributes. Comparative experimental results show that the new version has better time efficiency than the previous version.

  9. Igpet software for modeling igneous processes: examples of application using the open educational version

    Science.gov (United States)

    Carr, Michael J.; Gazel, Esteban

    2016-09-01

    We provide here an open version of Igpet software, called t-Igpet to emphasize its application for teaching and research in forward modeling of igneous geochemistry. There are three programs, a norm utility, a petrologic mixing program using least squares and Igpet, a graphics program that includes many forms of numerical modeling. Igpet is a multifaceted tool that provides the following basic capabilities: igneous rock identification using the IUGS (International Union of Geological Sciences) classification and several supplementary diagrams; tectonic discrimination diagrams; pseudo-quaternary projections; least squares fitting of lines, polynomials and hyperbolae; magma mixing using two endmembers, histograms, x-y plots, ternary plots and spider-diagrams. The advanced capabilities of Igpet are multi-element mixing and magma evolution modeling. Mixing models are particularly useful for understanding the isotopic variations in rock suites that evolved by mixing different sources. The important melting models include, batch melting, fractional melting and aggregated fractional melting. Crystallization models include equilibrium and fractional crystallization and AFC (assimilation and fractional crystallization). Theses, reports and proposals concerning igneous petrology are improved by numerical modeling. For reviewed publications some elements of modeling are practically a requirement. Our intention in providing this software is to facilitate improved communication and lower entry barriers to research, especially for students.

  10. Igpet software for modeling igneous processes: examples of application using the open educational version

    Science.gov (United States)

    Carr, Michael J.; Gazel, Esteban

    2017-04-01

    We provide here an open version of Igpet software, called t-Igpet to emphasize its application for teaching and research in forward modeling of igneous geochemistry. There are three programs, a norm utility, a petrologic mixing program using least squares and Igpet, a graphics program that includes many forms of numerical modeling. Igpet is a multifaceted tool that provides the following basic capabilities: igneous rock identification using the IUGS (International Union of Geological Sciences) classification and several supplementary diagrams; tectonic discrimination diagrams; pseudo-quaternary projections; least squares fitting of lines, polynomials and hyperbolae; magma mixing using two endmembers, histograms, x-y plots, ternary plots and spider-diagrams. The advanced capabilities of Igpet are multi-element mixing and magma evolution modeling. Mixing models are particularly useful for understanding the isotopic variations in rock suites that evolved by mixing different sources. The important melting models include, batch melting, fractional melting and aggregated fractional melting. Crystallization models include equilibrium and fractional crystallization and AFC (assimilation and fractional crystallization). Theses, reports and proposals concerning igneous petrology are improved by numerical modeling. For reviewed publications some elements of modeling are practically a requirement. Our intention in providing this software is to facilitate improved communication and lower entry barriers to research, especially for students.

  11. Ocean Model, Analysis and Prediction System version 3: operational global ocean forecasting

    Science.gov (United States)

    Brassington, Gary; Sandery, Paul; Sakov, Pavel; Freeman, Justin; Divakaran, Prasanth; Beckett, Duan

    2017-04-01

    The Ocean Model, Analysis and Prediction System version 3 (OceanMAPSv3) is a near-global (75S-75N; no sea-ice), uniform horizontal resolution (0.1°x0.1°), 51 vertical level ocean forecast system producing daily analyses and 7 day forecasts. This system was declared operational at the Bureau of Meteorology in April 2016, subsequently upgraded to include ACCESS-G APS2 in June 2016, and finally ported to the Bureau's new supercomputer in Sep 2016. This system realises the original vision of the BLUElink projects (2003-2015): to provide global forecasts of ocean geostrophic turbulence (eddies and fronts) in support of Naval operations as well as other national services. The analysis system has retained an ensemble-based optimal interpolation method with 144 stationary ensemble members derived from a multi-year hindcast; however, the BODAS code has been upgraded to a new code base, EnKF-C. A new strategy for initialisation has been introduced, leading to greater retention of analysis increments and reduced shock. The analysis cycle has been optimised for a 3-cycle system with 3 day observation windows, retaining an advantage as a multi-cycle time-lagged ensemble. The sea surface temperature and sea surface height anomaly analysis errors in the Australian region are 0.34 degC and 6.2 cm respectively, improvements of 10% and 20% over version 2. In addition, the 7 day forecast has a lower RMSE than the 1 day forecast from the previous system (version 2). International intercomparisons have shown that this system is comparable in performance with the two leading systems and is often the leading performer for surface temperature and upper ocean temperature. We present an overview of the system, describe the data assimilation and initialisation, demonstrate the performance and outline future directions.

  12. Integrated Medical Model (IMM) Optimization Version 4.0 Functional Improvements

    Science.gov (United States)

    Arellano, John; Young, M.; Boley, L.; Garcia, Y.; Saile, L.; Walton, M.; Kerstman, E.; Reyes, D.; Goodenow, D. A.; Myers, J. G.

    2016-01-01

    The IMM's ability to assess mission outcome risk levels relative to available resources provides a unique capability to give guidance on optimal operational medical kit and vehicle resources. Post-processing optimization allows IMM to optimize essential resources to improve a specific model outcome, such as maximization of the Crew Health Index (CHI), or minimization of the probability of evacuation (EVAC) or of the loss of crew life (LOCL). Mass and/or volume constrain the optimized resource set. The IMM's probabilistic simulation uses input data on one hundred medical conditions to simulate medical events that may occur in spaceflight, the resources required to treat those events, and the resulting impact to the mission based on specific crew and mission characteristics. Because IMM version 4.0 provides for partial treatment of medical events, IMM Optimization 4.0 scores resources at the individual resource unit increment level, as opposed to the full condition-specific treatment set level used in version 3.0. This allows the inclusion of as many resources as possible in the event that an entire set of resources called out for treatment cannot satisfy the constraints. IMM Optimization version 4.0 adds capabilities that increase efficiency by creating multiple resource sets based on differing constraints and priorities (CHI, EVAC, or LOCL). It also provides sets of resources that improve mission-related IMM v4.0 outputs with improved performance compared to the prior optimization. The new optimization represents much improved fidelity that will improve the utility of the IMM 4.0 for decision support.

  13. APPLICATION OF TWO VERSIONS OF A RNG BASED k-ε MODEL TO NUMERICAL SIMULATIONS OF TURBULENT IMPINGING JET FLOW

    Institute of Scientific and Technical Information of China (English)

    Chen Qing-guang; Xu Zhong; Zhang Yong-jian

    2003-01-01

    Two independent versions of the RNG based k-ε turbulence model, in conjunction with the law of the wall, have been applied to the numerical simulation of an axisymmetric turbulent impinging jet flow field. The two model predictions are compared with those of the standard k-ε model and with experimental data measured by LDV (Laser Doppler Velocimetry). The results show that the original version of the RNG k-ε model with the choice of Cε1=1.063 cannot yield good results; in particular, the predicted turbulent kinetic energy profiles in the vicinity of the stagnation region are even worse than those predicted by the standard k-ε model. However, the new version of the RNG k-ε model behaves well. This is mainly due to the corrections to the constants Cε1 and Cε2, along with a modification of the production term to account for non-equilibrium strain rates in the flow.
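    For orientation, the strain-dependent correction that distinguishes the RNG model from the standard k-ε model is often folded into an effective Cε1 coefficient. The sketch below uses the commonly quoted RNG constants; whether this exact form matches the paper's "new version" is an assumption.

```python
def rng_c_eps1(strain_rate, k, eps, c_eps1=1.42, eta0=4.38, beta=0.012):
    """Effective C_eps1 of the RNG k-epsilon model:
    C_eps1* = C_eps1 - eta*(1 - eta/eta0)/(1 + beta*eta^3),
    with eta = S*k/eps (standard published RNG constants; illustrative)."""
    eta = strain_rate * k / eps    # dimensionless strain parameter
    return c_eps1 - eta * (1.0 - eta / eta0) / (1.0 + beta * eta ** 3)
```

The correction vanishes both at zero strain and at eta = eta0, and changes sign for strongly strained regions such as the stagnation zone of an impinging jet, which is where the abstract reports the largest differences between model versions.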

  14. Description of the Earth system model of intermediate complexity LOVECLIM version 1.2

    Directory of Open Access Journals (Sweden)

    H. Goosse

    2010-11-01

    The main characteristics of the new version 1.2 of the three-dimensional Earth system model of intermediate complexity LOVECLIM are briefly described. LOVECLIM 1.2 includes representations of the atmosphere, the ocean and sea ice, the land surface (including vegetation), the ice sheets, the icebergs and the carbon cycle. The atmospheric component is ECBilt2, a T21, 3-level quasi-geostrophic model. The ocean component is CLIO3, which consists of an ocean general circulation model coupled to a comprehensive thermodynamic-dynamic sea-ice model. Its horizontal resolution is of 3° by 3°, and there are 20 levels in the ocean. ECBilt-CLIO is coupled to VECODE, a vegetation model that simulates the dynamics of two main terrestrial plant functional types (trees and grasses) as well as desert. VECODE also simulates the evolution of the carbon cycle over land, while the ocean carbon cycle is represented by LOCH, a comprehensive model that takes into account both the solubility and biological pumps. The ice sheet component AGISM is made up of a three-dimensional thermomechanical model of the ice sheet flow, a visco-elastic bedrock model and a model of the mass balance at the ice-atmosphere and ice-ocean interfaces. For both the Greenland and Antarctic ice sheets, calculations are made on a 10 km by 10 km resolution grid with 31 sigma levels. LOVECLIM1.2 reproduces well the major characteristics of the observed climate both for present-day conditions and for key past periods such as the last millennium, the mid-Holocene and the Last Glacial Maximum. However, despite some improvements compared to earlier versions, some biases are still present in the model. The most serious ones are mainly located at low latitudes with an overestimation of the temperature there, a too symmetric distribution of precipitation between the two hemispheres, and an overestimation of precipitation and vegetation cover in the subtropics. In addition, the atmospheric circulation is

  15. Exact solution for a metapopulation version of Schelling’s model

    Science.gov (United States)

    Durrett, Richard; Zhang, Yuan

    2014-01-01

    In 1971, Schelling introduced a model in which families move if they have too many neighbors of the opposite type. In this paper, we consider a metapopulation version of the model in which a city is divided into N neighborhoods, each of which has L houses. There are ρNL red families and ρNL blue families for some ρ < 1/2. When ρ > ρb, a new segregated equilibrium appears; for ρb < ρ < ρd, there is bistability, but when ρ increases past ρd the random state is no longer stable. When ρc is small enough, the random state will again be the stationary distribution when ρ is close to 1/2. If so, this is preceded by a region of bistability. PMID:25225367
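    A toy simulation can make the metapopulation setup concrete: neighborhoods hold counts of red and blue families rather than a spatial grid of houses. The movement rule and threshold below are illustrative assumptions, not the paper's exact dynamics.

```python
import random

def schelling_step(red, blue, L, threshold=0.5, rng=None):
    """One update of a toy metapopulation Schelling model. Neighborhoods
    hold at most L families; a randomly chosen family moves to a random
    non-full neighborhood if the opposite type's share of its
    neighborhood exceeds `threshold`. Illustrative dynamics only."""
    rng = rng or random
    n = len(red)
    i = rng.randrange(n)
    # pick a perspective: the chosen family is red or blue
    mine, other = (red, blue) if rng.random() < 0.5 else (blue, red)
    if mine[i] == 0:
        return                          # no family of that type here
    occupied = mine[i] + other[i]
    if other[i] / occupied <= threshold:
        return                          # family is happy; no move
    targets = [j for j in range(n) if red[j] + blue[j] < L and j != i]
    if targets:
        j = rng.choice(targets)
        mine[i] -= 1
        mine[j] += 1
```

Iterating this step from a well-mixed initial state and tracking per-neighborhood composition is one way to observe the transition between mixed and segregated regimes that the abstract describes.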

  16. New Source Term Model for the RESRAD-OFFSITE Code Version 3

    Energy Technology Data Exchange (ETDEWEB)

    Yu, Charley [Argonne National Lab. (ANL), Argonne, IL (United States); Gnanapragasam, Emmanuel [Argonne National Lab. (ANL), Argonne, IL (United States); Cheng, Jing-Jy [Argonne National Lab. (ANL), Argonne, IL (United States); Kamboj, Sunita [Argonne National Lab. (ANL), Argonne, IL (United States); Chen, Shih-Yew [Argonne National Lab. (ANL), Argonne, IL (United States)

    2013-06-01

    This report documents the new source term model developed and implemented in Version 3 of the RESRAD-OFFSITE code. This new source term model includes: (1) "first order release with transport" option, in which the release of the radionuclide is proportional to the inventory in the primary contamination and the user-specified leach rate is the proportionality constant, (2) "equilibrium desorption release" option, in which the user specifies the distribution coefficient which quantifies the partitioning of the radionuclide between the solid and aqueous phases, and (3) "uniform release" option, in which the radionuclides are released from a constant fraction of the initially contaminated material during each time interval and the user specifies the duration over which the radionuclides are released.
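    The "first order release" concept lends itself to a closed-form sketch: with leach rate μ and decay constant λ, the inventory obeys dI/dt = -(μ + λ)I, and the cumulative release integrates analytically. The following illustrates that idea only; it is not the RESRAD-OFFSITE implementation.

```python
import math

def inventory_and_release(inv0, leach_rate, decay_const, t):
    """Inventory remaining and cumulative activity released at time t
    under first-order release: the release flux is leach_rate * I(t),
    while decay removes activity at decay_const * I(t)."""
    k = leach_rate + decay_const
    inv = inv0 * math.exp(-k * t)
    # integral of leach_rate * I(t) dt from 0 to t, evaluated analytically
    released = inv0 * (leach_rate / k) * (1.0 - math.exp(-k * t))
    return inv, released
```

With no decay, everything not yet in the primary contamination has been released, so inventory plus cumulative release equals the initial inventory at all times.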

  17. Validity study of the Beck Anxiety Inventory (Portuguese version by the Rasch Rating Scale model

    Directory of Open Access Journals (Sweden)

    Sónia Quintão

    2013-01-01

    Our objective was to conduct a validation study of the Portuguese version of the Beck Anxiety Inventory (BAI) by means of the Rasch Rating Scale Model, and then compare it with the most used anxiety scales in Portugal. The sample consisted of 1,160 adults (427 men and 733 women), aged 18-82 years (M=33.39; SD=11.85). Instruments were the Beck Anxiety Inventory, the State-Trait Anxiety Inventory and the Zung Self-Rating Anxiety Scale. It was found that the Beck Anxiety Inventory's system of four categories, the data-model fit, and person reliability were adequate. The measure can be considered unidimensional. Gender- and age-related differences were not a threat to validity. The BAI correlated significantly with other anxiety measures. In conclusion, the BAI shows good psychometric quality.
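    For readers unfamiliar with the Rasch Rating Scale Model used in this validation, the category probabilities for a polytomous item follow Andrich's formulation. A minimal sketch, using the standard parameterization rather than anything estimated in this study:

```python
import math

def rating_scale_probs(theta, delta, thresholds):
    """Category probabilities under the Rasch Rating Scale Model:
    P(X=k) is proportional to exp(sum_{j<=k} (theta - delta - tau_j)),
    with the empty sum for k=0. theta: person ability; delta: item
    difficulty; thresholds: shared category thresholds tau_1..tau_m."""
    logits = [0.0]                      # cumulative logit for category 0
    s = 0.0
    for tau in thresholds:
        s += theta - delta - tau
        logits.append(s)
    expz = [math.exp(v) for v in logits]
    total = sum(expz)
    return [v / total for v in expz]
```

A four-category item like a BAI response uses three thresholds; fitting the model amounts to estimating the person, item, and threshold parameters from the observed responses.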

  18. An improved version of the consequence analysis model for chemical emergencies, ESCAPE

    Science.gov (United States)

    Kukkonen, J.; Nikmo, J.; Riikonen, K.

    2017-02-01

    We present a refined version of a mathematical model called ESCAPE, "Expert System for Consequence Analysis and Preparing for Emergencies". The model has been designed for evaluating the releases of toxic and flammable gases into the atmosphere, their atmospheric dispersion and the effects on humans and the environment. We describe (i) the mathematical treatments of this model, (ii) a verification and evaluation of the model against selected experimental field data, and (iii) a new operational implementation of the model. The new mathematical treatments include state-of-the-art atmospheric vertical profiles and new submodels for dense gas and passive atmospheric dispersion. The model performance was first successfully verified using the data of the Thorney Island campaign, and then evaluated against the Desert Tortoise campaign. For the latter campaign, the geometric mean bias was 1.72 (this corresponds to an underprediction of approximately 70%) and 0.71 (overprediction of approximately 30%) for the concentration and the plume half-width, respectively. The new operational implementation can be used on computers, tablets and mobile phones. The predicted results can be post-processed using geographic information systems. The model has already proved to be a useful assessment tool for the needs of emergency response authorities in contingency planning.
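The geometric mean bias and geometric variance quoted above are standard dispersion-model evaluation metrics; a minimal sketch of their usual definitions (MG = exp⟨ln(Co/Cp)⟩, VG = exp⟨[ln(Co/Cp)]²⟩), with made-up concentration pairs rather than the Desert Tortoise data, looks like this:

```python
import math

def geometric_stats(observed, predicted):
    # MG > 1 indicates underprediction (observed exceeds predicted on
    # average); VG measures scatter, with VG = 1 meaning no scatter.
    logs = [math.log(o / p) for o, p in zip(observed, predicted)]
    mg = math.exp(sum(logs) / len(logs))
    vg = math.exp(sum(l * l for l in logs) / len(logs))
    return mg, vg

# Illustrative (made-up) paired concentrations, not campaign data
obs = [2.0, 3.5, 1.2, 4.0]
pred = [1.1, 2.0, 0.9, 2.6]
mg, vg = geometric_stats(obs, pred)
```

An MG of 1.72, as reported for the concentration, means observed values exceeded predictions by roughly a factor of 1.7 on average.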

  19. VALIDATION OF THE ASTER GLOBAL DIGITAL ELEVATION MODEL VERSION 3 OVER THE CONTERMINOUS UNITED STATES

    Directory of Open Access Journals (Sweden)

    D. Gesch

    2016-06-01

    The ASTER Global Digital Elevation Model Version 3 (GDEM v3) was evaluated over the conterminous United States in a manner similar to the validation conducted for the original GDEM Version 1 (v1) in 2009 and GDEM Version 2 (v2) in 2011. The absolute vertical accuracy of GDEM v3 was calculated by comparison with more than 23,000 independent reference geodetic ground control points from the U.S. National Geodetic Survey. The root mean square error (RMSE) measured for GDEM v3 is 8.52 meters. This compares with the RMSE of 8.68 meters for GDEM v2. Another important descriptor of vertical accuracy is the mean error, or bias, which indicates if a DEM has an overall vertical offset from true ground level. The GDEM v3 mean error of −1.20 meters reflects an overall negative bias in GDEM v3. The absolute vertical accuracy assessment results, both mean error and RMSE, were segmented by land cover type to provide insight into how GDEM v3 performs in various land surface conditions. While the RMSE varies little across cover types (6.92 to 9.25 meters), the mean error (bias) does appear to be affected by land cover type, ranging from −2.99 to +4.16 meters across 14 land cover classes. These results indicate that in areas where built or natural aboveground features are present, GDEM v3 is measuring elevations above the ground level, a condition noted in assessments of previous GDEM versions (v1 and v2) and an expected condition given the type of stereo-optical image data collected by ASTER. GDEM v3 was also evaluated by differencing with the Shuttle Radar Topography Mission (SRTM) dataset. In many forested areas, GDEM v3 has elevations that are higher in the canopy than SRTM. The overall validation effort also included an evaluation of the GDEM v3 water mask. In general, the number of distinct water polygons in GDEM v3 is much lower than the number in a reference land cover dataset, but the total areas compare much more closely.

  20. Validation of the ASTER Global Digital Elevation Model version 3 over the conterminous United States

    Science.gov (United States)

    Gesch, Dean B.; Oimoen, Michael J.; Danielson, Jeffrey J.; Meyer, David

    2016-01-01

    The ASTER Global Digital Elevation Model Version 3 (GDEM v3) was evaluated over the conterminous United States in a manner similar to the validation conducted for the original GDEM Version 1 (v1) in 2009 and GDEM Version 2 (v2) in 2011. The absolute vertical accuracy of GDEM v3 was calculated by comparison with more than 23,000 independent reference geodetic ground control points from the U.S. National Geodetic Survey. The root mean square error (RMSE) measured for GDEM v3 is 8.52 meters. This compares with the RMSE of 8.68 meters for GDEM v2. Another important descriptor of vertical accuracy is the mean error, or bias, which indicates if a DEM has an overall vertical offset from true ground level. The GDEM v3 mean error of −1.20 meters reflects an overall negative bias in GDEM v3. The absolute vertical accuracy assessment results, both mean error and RMSE, were segmented by land cover type to provide insight into how GDEM v3 performs in various land surface conditions. While the RMSE varies little across cover types (6.92 to 9.25 meters), the mean error (bias) does appear to be affected by land cover type, ranging from −2.99 to +4.16 meters across 14 land cover classes. These results indicate that in areas where built or natural aboveground features are present, GDEM v3 is measuring elevations above the ground level, a condition noted in assessments of previous GDEM versions (v1 and v2) and an expected condition given the type of stereo-optical image data collected by ASTER. GDEM v3 was also evaluated by differencing with the Shuttle Radar Topography Mission (SRTM) dataset. In many forested areas, GDEM v3 has elevations that are higher in the canopy than SRTM. The overall validation effort also included an evaluation of the GDEM v3 water mask. In general, the number of distinct water polygons in GDEM v3 is much lower than the number in a reference land cover dataset, but the total areas compare much more closely.
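The two accuracy descriptors used throughout this assessment, RMSE and mean error (bias), can be computed straightforwardly from DEM-minus-control elevation differences. The values below are illustrative only, not actual GDEM v3 or NGS control data:

```python
import math

def vertical_accuracy(dem_elev, control_elev):
    # Error = DEM elevation minus control elevation. A positive mean
    # error means the DEM sits above true ground level (e.g. on canopy
    # or rooftops); a negative one means below.
    errors = [d - c for d, c in zip(dem_elev, control_elev)]
    n = len(errors)
    mean_error = sum(errors) / n
    rmse = math.sqrt(sum(e * e for e in errors) / n)
    return mean_error, rmse

# Illustrative elevations in meters
dem = [102.1, 98.7, 250.3, 77.0]
ctl = [103.0, 99.5, 249.0, 78.2]
bias, rmse = vertical_accuracy(dem, ctl)
```

Because RMSE squares the errors, it is always at least as large as the absolute bias; a small bias with a large RMSE indicates scatter rather than a systematic offset.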

  1. Validation of the Aster Global Digital Elevation Model Version 3 Over the Conterminous United States

    Science.gov (United States)

    Gesch, D.; Oimoen, M.; Danielson, J.; Meyer, D.

    2016-06-01

    The ASTER Global Digital Elevation Model Version 3 (GDEM v3) was evaluated over the conterminous United States in a manner similar to the validation conducted for the original GDEM Version 1 (v1) in 2009 and GDEM Version 2 (v2) in 2011. The absolute vertical accuracy of GDEM v3 was calculated by comparison with more than 23,000 independent reference geodetic ground control points from the U.S. National Geodetic Survey. The root mean square error (RMSE) measured for GDEM v3 is 8.52 meters. This compares with the RMSE of 8.68 meters for GDEM v2. Another important descriptor of vertical accuracy is the mean error, or bias, which indicates if a DEM has an overall vertical offset from true ground level. The GDEM v3 mean error of -1.20 meters reflects an overall negative bias in GDEM v3. The absolute vertical accuracy assessment results, both mean error and RMSE, were segmented by land cover type to provide insight into how GDEM v3 performs in various land surface conditions. While the RMSE varies little across cover types (6.92 to 9.25 meters), the mean error (bias) does appear to be affected by land cover type, ranging from -2.99 to +4.16 meters across 14 land cover classes. These results indicate that in areas where built or natural aboveground features are present, GDEM v3 is measuring elevations above the ground level, a condition noted in assessments of previous GDEM versions (v1 and v2) and an expected condition given the type of stereo-optical image data collected by ASTER. GDEM v3 was also evaluated by differencing with the Shuttle Radar Topography Mission (SRTM) dataset. In many forested areas, GDEM v3 has elevations that are higher in the canopy than SRTM. The overall validation effort also included an evaluation of the GDEM v3 water mask. In general, the number of distinct water polygons in GDEM v3 is much lower than the number in a reference land cover dataset, but the total areas compare much more closely.

  2. Immersion freezing by natural dust based on a soccer ball model with the Community Atmospheric Model version 5: climate effects

    Science.gov (United States)

    Wang, Yong; Liu, Xiaohong

    2014-12-01

    We introduce a simplified version of the soccer ball model (SBM) developed by Niedermeier et al (2014 Geophys. Res. Lett. 41 736-741) into the Community Atmospheric Model version 5 (CAM5). It is the first time that SBM is used in an atmospheric model to parameterize the heterogeneous ice nucleation. The SBM, which was simplified for its suitable application in atmospheric models, uses the classical nucleation theory to describe the immersion/condensation freezing by dust in the mixed-phase cloud regime. Uncertain parameters (mean contact angle, standard deviation of contact angle probability distribution, and number of surface sites) in the SBM are constrained by fitting them to recent natural dust (Saharan dust) datasets. With the SBM in CAM5, we investigate the sensitivity of modeled cloud properties to the SBM parameters, and find significant seasonal and regional differences in the sensitivity among the three SBM parameters. Changes of mean contact angle and the number of surface sites lead to changes of cloud properties in Arctic in spring, which could be attributed to the transport of dust ice nuclei to this region. In winter, significant changes of cloud properties induced by these two parameters mainly occur in northern hemispheric mid-latitudes (e.g., East Asia). In comparison, no obvious changes of cloud properties caused by changes of standard deviation can be found in all the seasons. These results are valuable for understanding the heterogeneous ice nucleation behavior, and useful for guiding the future model developments.

  3. UNSAT-H Version 2. 0: Unsaturated soil water and heat flow model

    Energy Technology Data Exchange (ETDEWEB)

    Fayer, M.J.; Jones, T.L.

    1990-04-01

    This report documents UNSAT-H Version 2.0, a model for calculating water and heat flow in unsaturated media. The documentation includes the bases for the conceptual model and its numerical implementation, benchmark test cases, example simulations involving layered soils and plant transpiration, and the code listing. Waste management practices at the Hanford Site have included disposal of low-level wastes by near-surface burial. Predicting the future long-term performance of any such burial site in terms of migration of contaminants requires a model capable of simulating water flow in the unsaturated soils above the buried waste. The model currently used to meet this need is UNSAT-H. This model was developed at Pacific Northwest Laboratory to assess water dynamics of near-surface, waste-disposal sites at the Hanford Site. The code is primarily used to predict deep drainage as a function of such environmental conditions as climate, soil type, and vegetation. UNSAT-H is also used to simulate the effects of various practices to enhance isolation of wastes. 66 refs., 29 figs., 7 tabs.

  4. A new version of the NeQuick ionosphere electron density model

    Science.gov (United States)

    Nava, B.; Coïsson, P.; Radicella, S. M.

    2008-12-01

    NeQuick is a three-dimensional and time dependent ionospheric electron density model developed at the Aeronomy and Radiopropagation Laboratory of the Abdus Salam International Centre for Theoretical Physics (ICTP), Trieste, Italy and at the Institute for Geophysics, Astrophysics and Meteorology of the University of Graz, Austria. It is a quick-run model particularly tailored for trans-ionospheric applications that allows one to calculate the electron concentration at any given location in the ionosphere and thus the total electron content (TEC) along any ground-to-satellite ray-path by means of numerical integration. Taking advantage of the increasing amount of available data, the model formulation is continuously updated to improve NeQuick capabilities to provide representations of the ionosphere at global scales. Recently, major changes have been introduced in the model topside formulation and important modifications have also been introduced in the bottomside description. In addition, specific revisions have been applied to the computer package associated to NeQuick in order to improve its computational efficiency. It has therefore been considered appropriate to finalize all the model developments in a new version of the NeQuick. In the present work the main features of NeQuick 2 are illustrated and some results related to validation tests are reported.

  5. A description of the FAMOUS (version XDBUA climate model and control run

    Directory of Open Access Journals (Sweden)

    A. Osprey

    2008-12-01

    FAMOUS is an ocean-atmosphere general circulation model of low resolution, capable of simulating approximately 120 years of model climate per wallclock day using current high performance computing facilities. It uses most of the same code as HadCM3, a widely used climate model of higher resolution and computational cost, and has been tuned to reproduce the same climate reasonably well. FAMOUS is useful for climate simulations where the computational cost makes the application of HadCM3 unfeasible, either because of the length of simulation or the size of the ensemble desired. We document a number of scientific and technical improvements to the original version of FAMOUS. These improvements include changes to the parameterisations of ozone and sea-ice which alleviate a significant cold bias from high northern latitudes and the upper troposphere, and the elimination of volume-averaged drifts in ocean tracers. A simple model of the marine carbon cycle has also been included. A particular goal of FAMOUS is to conduct millennial-scale paleoclimate simulations of Quaternary ice ages; to this end, a number of useful changes to the model infrastructure have been made.

  6. Solid Waste Projection Model: Database User's Guide. Version 1.4

    Energy Technology Data Exchange (ETDEWEB)

    Blackburn, C.L.

    1993-10-01

    The Solid Waste Projection Model (SWPM) system is an analytical tool developed by Pacific Northwest Laboratory (PNL) for Westinghouse Hanford Company (WHC) specifically to address Hanford solid waste management issues. This document is one of a set of documents supporting the SWPM system and providing instructions in the use and maintenance of SWPM components. This manual contains instructions for using Version 1.4 of the SWPM database: system requirements and preparation, entering and maintaining data, and performing routine database functions. This document supports only those operations which are specific to SWPM database menus and functions and does not provide instruction in the use of Paradox, the database management system in which the SWPM database is established.

  7. Hydrogeochemical evaluation of the Forsmark site, model version 1.1

    Energy Technology Data Exchange (ETDEWEB)

    Laaksoharju, Marcus (ed.) [GeoPoint AB, Sollentuna (Sweden); Gimeno, Maria; Auque, Luis; Gomez, Javier [Univ. of Zaragoza (Spain). Dept. of Earth Sciences; Smellie, John [Conterra AB, Uppsala (Sweden); Tullborg, Eva-Lena [Terralogica AB, Graabo (Sweden); Gurban, Ioana [3D-Terra, Montreal (Canada)

    2004-01-01

    Siting studies for SKB's programme of deep geological disposal of nuclear fuel waste currently involve the investigation of two locations, Forsmark and Simpevarp, on the eastern coast of Sweden, to determine their geological, geochemical and hydrogeological characteristics. The work completed to date has resulted in model version 1.1, which represents the first evaluation of the available Forsmark groundwater analytical data collected up to May 1, 2003 (i.e. the first 'data freeze'). The HAG group had access to a total of 456 water samples collected mostly from the surface and sub-surface environment (e.g. soil pipes in the overburden, streams and lakes); only a few samples were collected from drilled boreholes. The deepest samples reflected depths down to 200 m. Furthermore, most of the waters sampled (74%) lacked crucial analytical information, which restricted the evaluation. Consequently, model version 1.1 focussed on the processes taking place in the uppermost part of the bedrock rather than at repository levels. The complex groundwater evolution and patterns at Forsmark are a result of many factors, such as: a) the flat topography and closeness to the Baltic Sea, resulting in relatively small hydrogeological driving forces which can preserve old water types from being flushed out, b) the changes in hydrogeology related to glaciation/deglaciation and land uplift, c) repeated marine/lake water regressions/transgressions, and d) organic or inorganic alteration of the groundwater caused by microbial processes or water/rock interactions. The sampled groundwaters reflect to various degrees modern or ancient water/rock interactions and mixing processes. Based on the general geochemical character and the apparent age, two major water types occur in Forsmark: fresh-meteoric waters with a bicarbonate imprint and low residence times (tritium values above detection limit), and brackish-marine waters with Cl contents up to 6,000 mg/L and longer residence times (tritium

  8. Thermal modelling. Preliminary site description Simpevarp subarea - version 1.2

    Energy Technology Data Exchange (ETDEWEB)

    Sundberg, Jan; Back, Paer-Erik; Bengtsson, Anna; Laendell, Maerta [Geo Innova AB, Linkoeping (Sweden)

    2005-08-15

    This report presents the thermal site descriptive model for the Simpevarp subarea, version 1.2. The main objective of this report is to present the thermal modelling work, in which data have been identified, quality controlled, evaluated and summarised in order to make upscaling to the lithological domain level possible. The thermal conductivity at the canister scale has been modelled for four different lithological domains: RSMA01 (Aevroe granite), RSMB01 (fine-grained dioritoid), RSMC01 (mixture of Aevroe granite and quartz monzodiorite), and RSMD01 (quartz monzodiorite). A main modelling approach has been used to determine the mean value of the thermal conductivity. Three alternative/complementary approaches have been used to evaluate the spatial variability of the thermal conductivity at domain level. The thermal modelling approaches are based on the lithological model for the Simpevarp subarea, version 1.2, together with rock type models constructed from measured and calculated (from mineral composition) thermal conductivities. For one rock type, the Aevroe granite (501044), density loggings within that rock type have also been used in the domain modelling in order to consider the spatial variability within the Aevroe granite. This has been possible due to the relationship presented between density and thermal conductivity, valid for the Aevroe granite. Results indicate that the mean thermal conductivity is expected to exhibit only a small variation between the different domains, from 2.62 W/(m.K) to 2.80 W/(m.K). The standard deviation varies according to the scale considered, and for the canister scale it is expected to range from 0.20 to 0.28 W/(m.K). Consequently, the lower confidence limit (95% confidence) for the canister scale is within the range 2.04-2.35 W/(m.K) for the different domains.
The temperature dependence is rather small with a decrease in thermal conductivity of 1.1-3.4% per 100 deg C increase in temperature for the dominating rock
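The reported temperature dependence can be applied as a simple linear correction to a reference conductivity. This is a sketch under the assumption of linear behaviour over the range of interest; the 2% per 100 °C default and the reference temperature are illustrative mid-range values, not site parameters.

```python
def conductivity_at_temp(k_ref, t, t_ref=20.0, decrease_per_100c=0.02):
    # Linear temperature correction: conductivity decreases by a
    # rock-type-specific fraction (reported as 1.1-3.4% per 100 deg C
    # for the Simpevarp domains) relative to its reference value.
    return k_ref * (1.0 - decrease_per_100c * (t - t_ref) / 100.0)

# Domain mean of 2.70 W/(m.K) evaluated at 80 deg C
k80 = conductivity_at_temp(2.70, 80.0)
```

At the few-percent level this correction is small compared with the spatial variability expressed by the 0.20-0.28 W/(m.K) standard deviations quoted above.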

  9. Thermal modelling. Preliminary site description Laxemar subarea - version 1.2

    Energy Technology Data Exchange (ETDEWEB)

    Sundberg, Jan; Wrafter, John; Back, Paer-Erik; Laendell, Maerta [Geo Innova AB, Linkoeping (Sweden)

    2006-02-15

    This report presents the thermal site descriptive model for the Laxemar subarea, version 1.2. The main objective of this report is to present the thermal modelling work where data has been identified, quality controlled, evaluated and summarised in order to make an upscaling to lithological domain level possible. The thermal conductivity at canister scale has been modelled for five different lithological domains: RSMA (Aevroe granite), RSMBA (mixture of Aevroe granite and fine-grained dioritoid), RSMD (quartz monzodiorite), RSME (diorite/gabbro) and RSMM (mix domain with high frequency of diorite to gabbro). A base modelling approach has been used to determine the mean value of the thermal conductivity. Four alternative/complementary approaches have been used to evaluate the spatial variability of the thermal conductivity at domain level. The thermal modelling approaches are based on the lithological domain model for the Laxemar subarea, version 1.2 together with rock type models based on measured and calculated (from mineral composition) thermal conductivities. For one rock type, Aevroe granite (501044), density loggings have also been used in the domain modelling in order to evaluate the spatial variability within the Aevroe granite. This has been possible due to an established relationship between density and thermal conductivity, valid for the Aevroe granite. Results indicate that the means of thermal conductivity for the various domains are expected to exhibit a variation from 2.45 W/(m.K) to 2.87 W/(m.K). The standard deviation varies according to the scale considered, and for the 0.8 m scale it is expected to range from 0.17 to 0.29 W/(m.K). Estimates of lower tail percentiles for the same scale are presented for all five domains. The temperature dependence is rather small with a decrease in thermal conductivity of 1.1-5.3% per 100 deg C increase in temperature for the dominant rock types. 
There are a number of important uncertainties associated with these

  12. Technical Note: Chemistry-climate model SOCOL: version 2.0 with improved transport and chemistry/microphysics schemes

    Directory of Open Access Journals (Sweden)

    M. Schraner

    2008-10-01

    We describe version 2.0 of the chemistry-climate model (CCM) SOCOL. The new version includes fundamental changes to the transport scheme, such as transporting all chemical species of the model individually and applying a family-based mass-conservation correction scheme for species of the nitrogen, chlorine and bromine groups; a revised transport scheme for ozone; more detailed halogen reaction and deposition schemes; and a new cirrus parameterisation in the tropical tropopause region. By means of these changes the model manages to overcome or considerably reduce deficiencies recently identified in SOCOL version 1.1 within the CCM Validation activity of SPARC (CCMVal). In particular, as a consequence of these changes, regional mass loss or accumulation artificially caused by the semi-Lagrangian transport scheme can be significantly reduced, leading to much more realistic distributions of the modelled chemical species, most notably of the halogens and ozone.

  13. Comparison of OH concentration measurements by DOAS and LIF during SAPHIR chamber experiments at high OH reactivity and low NO concentration

    Directory of Open Access Journals (Sweden)

    H. Fuchs

    2012-07-01

    During recent field campaigns, hydroxyl radical (OH) concentrations that were measured by laser-induced fluorescence (LIF) were up to a factor of ten larger than predicted by current chemical models for conditions of high OH reactivity and low NO concentration. These discrepancies, which were observed in forests and urban-influenced rural environments, are so far not entirely understood. In summer 2011, a series of experiments was carried out in the atmosphere simulation chamber SAPHIR in Jülich, Germany, in order to investigate the photochemical degradation of isoprene, methyl-vinyl ketone (MVK), methacrolein (MACR) and aromatic compounds by OH. Conditions were similar to those experienced during the PRIDE-PRD2006 campaign in the Pearl River Delta (PRD), China, in 2006, where a large difference between OH measurements and model predictions was found. During experiments in SAPHIR, OH was simultaneously detected by two independent instruments: LIF and differential optical absorption spectroscopy (DOAS). Because DOAS is an inherently calibration-free technique, DOAS measurements are regarded as a reference standard. The comparison of the two techniques was used to investigate potential artifacts in the LIF measurements for PRD-like conditions of OH reactivities of 10 to 30 s^-1 and NO mixing ratios of 0.1 to 0.3 ppbv. The analysis of twenty experiment days shows good agreement. The linear regression of the combined data set (averaged to the DOAS time resolution, 2495 data points) yields a slope of 1.02 ± 0.01 with an intercept of (0.10 ± 0.03) × 10^6 cm^-3 and a linear correlation coefficient of R^2 = 0.86. This indicates that the sensitivity of the LIF instrument is well-defined by its calibration procedure. No hints of artifacts are observed for isoprene, MACR, and different aromatic compounds. LIF measurements were approximately 30-40% (median) larger than those by DOAS after MVK (20 ppbv) and

  14. Comparison of OH concentration measurements by DOAS and LIF during SAPHIR chamber experiments at high OH reactivity and low NO concentration

    Science.gov (United States)

    Fuchs, H.; Dorn, H.-P.; Bachner, M.; Bohn, B.; Brauers, T.; Gomm, S.; Hofzumahaus, A.; Holland, F.; Nehr, S.; Rohrer, F.; Tillmann, R.; Wahner, A.

    2012-07-01

    During recent field campaigns, hydroxyl radical (OH) concentrations that were measured by laser-induced fluorescence (LIF) were up to a factor of ten larger than predicted by current chemical models for conditions of high OH reactivity and low NO concentration. These discrepancies, which were observed in forests and urban-influenced rural environments, are so far not entirely understood. In summer 2011, a series of experiments was carried out in the atmosphere simulation chamber SAPHIR in Jülich, Germany, in order to investigate the photochemical degradation of isoprene, methyl-vinyl ketone (MVK), methacrolein (MACR) and aromatic compounds by OH. Conditions were similar to those experienced during the PRIDE-PRD2006 campaign in the Pearl River Delta (PRD), China, in 2006, where a large difference between OH measurements and model predictions was found. During experiments in SAPHIR, OH was simultaneously detected by two independent instruments: LIF and differential optical absorption spectroscopy (DOAS). Because DOAS is an inherently calibration-free technique, DOAS measurements are regarded as a reference standard. The comparison of the two techniques was used to investigate potential artifacts in the LIF measurements for PRD-like conditions of OH reactivities of 10 to 30 s^-1 and NO mixing ratios of 0.1 to 0.3 ppbv. The analysis of twenty experiment days shows good agreement. The linear regression of the combined data set (averaged to the DOAS time resolution, 2495 data points) yields a slope of 1.02 ± 0.01 with an intercept of (0.10 ± 0.03) × 10^6 cm^-3 and a linear correlation coefficient of R^2 = 0.86. This indicates that the sensitivity of the LIF instrument is well-defined by its calibration procedure. No hints of artifacts are observed for isoprene, MACR, and different aromatic compounds. LIF measurements were approximately 30-40% (median) larger than those by DOAS after MVK (20 ppbv) and toluene (90 ppbv) had been added. However, this discrepancy has a
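The slope, intercept, and R² quoted for the LIF-versus-DOAS comparison come from an ordinary least-squares fit of paired measurements. A minimal sketch with synthetic OH pairs (in units of 10^6 cm^-3, not the SAPHIR data set):

```python
def linear_fit(x, y):
    # Ordinary least squares: y = a*x + b, plus the coefficient of
    # determination R^2 from residual and total sums of squares.
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxx = sum((xi - mx) ** 2 for xi in x)
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    a = sxy / sxx
    b = my - a * mx
    ss_res = sum((yi - (a * xi + b)) ** 2 for xi, yi in zip(x, y))
    ss_tot = sum((yi - my) ** 2 for yi in y)
    r2 = 1.0 - ss_res / ss_tot
    return a, b, r2

# Synthetic paired OH concentrations (reference on x, test instrument on y)
doas = [1.0, 2.0, 3.0, 4.0, 5.0]
lif = [1.1, 2.1, 3.2, 4.0, 5.2]
slope, intercept, r2 = linear_fit(doas, lif)
```

A slope near unity with a small intercept, as found for the combined SAPHIR data set, indicates the two instruments agree across the measured concentration range.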

  15. Systems Biology Markup Language (SBML) Level 2 Version 5: Structures and Facilities for Model Definitions.

    Science.gov (United States)

    Hucka, Michael; Bergmann, Frank T; Dräger, Andreas; Hoops, Stefan; Keating, Sarah M; Le Novère, Nicolas; Myers, Chris J; Olivier, Brett G; Sahle, Sven; Schaff, James C; Smith, Lucian P; Waltemath, Dagmar; Wilkinson, Darren J

    2015-09-04

    Computational models can help researchers to interpret data, understand biological function, and make quantitative predictions. The Systems Biology Markup Language (SBML) is a file format for representing computational models in a declarative form that can be exchanged between different software systems. SBML is oriented towards describing biological processes of the sort common in research on a number of topics, including metabolic pathways, cell signaling pathways, and many others. By supporting SBML as an input/output format, different tools can all operate on an identical representation of a model, removing opportunities for translation errors and assuring a common starting point for analyses and simulations. This document provides the specification for Version 5 of SBML Level 2. The specification defines the data structures prescribed by SBML as well as their encoding in XML, the eXtensible Markup Language. This specification also defines validation rules that determine the validity of an SBML document, and provides many examples of models in SBML form. Other materials and software are available from the SBML project web site, http://sbml.org.
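
    Since SBML is declarative XML, a model document can be read with any XML tooling. A minimal sketch in Python — the fragment below is hand-written for illustration and not schema-validated, and the namespace URI is assumed to follow the usual per-version convention:

```python
import xml.etree.ElementTree as ET

# A minimal hand-written SBML Level 2 fragment (illustrative, not validated
# against the specification's full rule set).
SBML = """<sbml xmlns="http://www.sbml.org/sbml/level2/version5" level="2" version="5">
  <model id="m1">
    <listOfSpecies>
      <species id="S1" compartment="c" initialConcentration="1.0"/>
      <species id="S2" compartment="c" initialConcentration="0.0"/>
    </listOfSpecies>
  </model>
</sbml>"""

ns = {"sbml": "http://www.sbml.org/sbml/level2/version5"}
root = ET.fromstring(SBML)
species = [s.attrib["id"] for s in root.findall(".//sbml:species", ns)]
print(species)  # -> ['S1', 'S2']
```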

  16. Systems Biology Markup Language (SBML) Level 2 Version 5: Structures and Facilities for Model Definitions

    Science.gov (United States)

    Hucka, Michael; Bergmann, Frank T.; Dräger, Andreas; Hoops, Stefan; Keating, Sarah M.; Le Novère, Nicolas; Myers, Chris J.; Olivier, Brett G.; Sahle, Sven; Schaff, James C.; Smith, Lucian P.; Waltemath, Dagmar; Wilkinson, Darren J.

    2017-01-01

    Summary Computational models can help researchers to interpret data, understand biological function, and make quantitative predictions. The Systems Biology Markup Language (SBML) is a file format for representing computational models in a declarative form that can be exchanged between different software systems. SBML is oriented towards describing biological processes of the sort common in research on a number of topics, including metabolic pathways, cell signaling pathways, and many others. By supporting SBML as an input/output format, different tools can all operate on an identical representation of a model, removing opportunities for translation errors and assuring a common starting point for analyses and simulations. This document provides the specification for Version 5 of SBML Level 2. The specification defines the data structures prescribed by SBML as well as their encoding in XML, the eXtensible Markup Language. This specification also defines validation rules that determine the validity of an SBML document, and provides many examples of models in SBML form. Other materials and software are available from the SBML project web site, http://sbml.org/. PMID:26528569

  17. User manual for GEOCOST: a computer model for geothermal cost analysis. Volume 2. Binary cycle version

    Energy Technology Data Exchange (ETDEWEB)

    Huber, H.D.; Walter, R.A.; Bloomster, C.H.

    1976-03-01

    A computer model called GEOCOST has been developed to simulate the production of electricity from geothermal resources and calculate the potential costs of geothermal power. GEOCOST combines resource characteristics, power recovery technology, tax rates, and financial factors into one systematic model and provides the flexibility to individually or collectively evaluate their impacts on the cost of geothermal power. Both the geothermal reservoir and power plant are simulated to model the complete energy production system. In the version of GEOCOST in this report, geothermal fluid is supplied from wells distributed throughout a hydrothermal reservoir through insulated pipelines to a binary power plant. The power plant is simulated using a binary fluid cycle in which the geothermal fluid is passed through a series of heat exchangers. The thermodynamic state points in basic subcritical and supercritical Rankine cycles are calculated for a variety of working fluids. Working fluids which are now in the model include isobutane, n-butane, R-11, R-12, R-22, R-113, R-114, and ammonia. Thermodynamic properties of the working fluids at the state points are calculated using empirical equations of state. The Starling equation of state is used for hydrocarbons and the Martin-Hou equation of state is used for fluorocarbons and ammonia. Physical properties of working fluids at the state points are calculated.
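
    Once the enthalpies at the cycle state points are known, the power-plant side of the calculation reduces to first-law bookkeeping. A sketch with hypothetical enthalpy values — GEOCOST itself obtains these from the Starling or Martin-Hou equations of state for the selected working fluid:

```python
# First-law bookkeeping for a simple binary Rankine cycle, using hypothetical
# specific enthalpies h (kJ/kg) at the four state points.
h = {
    "pump_in": 270.0,      # saturated liquid leaving the condenser
    "boiler_in": 272.0,    # after pump work
    "turbine_in": 640.0,   # vapor leaving the geothermal heat exchangers
    "turbine_out": 580.0,  # after expansion through the turbine
}

w_turbine = h["turbine_in"] - h["turbine_out"]   # specific turbine work
w_pump = h["boiler_in"] - h["pump_in"]           # specific pump work
q_in = h["turbine_in"] - h["boiler_in"]          # heat added by geothermal fluid
eta = (w_turbine - w_pump) / q_in                # cycle thermal efficiency
print(f"{eta:.3f}")  # -> 0.158
```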

  18. Modelling waste stabilisation ponds with an extended version of ASM3.

    Science.gov (United States)

    Gehring, T; Silva, J D; Kehl, O; Castilhos, A B; Costa, R H R; Uhlenhut, F; Alex, J; Horn, H; Wichern, M

    2010-01-01

    In this paper an extended version of IWA's Activated Sludge Model No 3 (ASM3) was developed to simulate processes in waste stabilisation ponds (WSP). The model modifications included the integration of algae biomass and gas transfer processes for oxygen, carbon dioxide and ammonia depending on wind velocity and a simple ionic equilibrium. The model was applied to a pilot-scale WSP system operated in the city of Florianópolis (Brazil). The system was used to treat leachate from a municipal waste landfill. Mean influent concentrations to the facultative pond of 1,456 g COD/m^3 and 505 g NH4-N/m^3 were measured. Experimental results indicated an ammonia nitrogen removal of 89.5% with negligible rates of nitrification but intensive ammonia stripping to the atmosphere. Measured data were used in the simulations to consider the impact of wind velocity on oxygen input of 11.1 to 14.4 g O2/(m^2 d) and sun radiation on photosynthesis. Good results for pH and ammonia removal were achieved with mean stripping rates of 18.2 and 4.5 g N/(m^2 d) for the facultative and maturation pond, respectively. Based on measured chlorophyll a concentrations and depending on light intensity and TSS concentration, it was possible to model algae concentrations.
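
    The wind-dependent surface gas transfer described above can be sketched in two-film form, F = k_L (C_sat - C); the k_L(wind) coefficients below are invented for illustration and are not the calibrated expression from the paper:

```python
# Two-film surface gas transfer: flux F = k_L * (C_sat - C), with a
# hypothetical empirical dependence of k_L on wind speed.
def k_l(wind_ms):
    return 0.2 + 0.1 * wind_ms ** 1.5   # m/d, invented coefficients

def flux(wind_ms, c_sat, c):
    return k_l(wind_ms) * (c_sat - c)    # g/(m^2 d)

# Oxygen input at 3 m/s wind, saturation 9.1 g/m^3, pond concentration 2.0 g/m^3.
print(flux(3.0, 9.1, 2.0))  # about 5.1 g/(m^2 d)
```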

  19. VALIDATION OF THE ASTER GLOBAL DIGITAL ELEVATION MODEL VERSION 2 OVER THE CONTERMINOUS UNITED STATES

    Directory of Open Access Journals (Sweden)

    D. Gesch

    2012-07-01

    The ASTER Global Digital Elevation Model Version 2 (GDEM v2) was evaluated over the conterminous United States in a manner similar to the validation conducted for the original GDEM Version 1 (v1) in 2009. The absolute vertical accuracy of GDEM v2 was calculated by comparison with more than 18,000 independent reference geodetic ground control points from the National Geodetic Survey. The root mean square error (RMSE) measured for GDEM v2 is 8.68 meters. This compares with the RMSE of 9.34 meters for GDEM v1. Another important descriptor of vertical accuracy is the mean error, or bias, which indicates if a DEM has an overall vertical offset from true ground level. The GDEM v2 mean error of –0.20 meters is a significant improvement over the GDEM v1 mean error of –3.69 meters. The absolute vertical accuracy assessment results, both mean error and RMSE, were segmented by land cover to examine the effects of cover types on measured errors. The GDEM v2 mean errors by land cover class verify that the presence of aboveground features (tree canopies and built structures) causes a positive elevation bias, as would be expected for an imaging system like ASTER. In open ground classes (little or no vegetation with significant aboveground height), GDEM v2 exhibits a negative bias on the order of 1 meter. GDEM v2 was also evaluated by differencing with the Shuttle Radar Topography Mission (SRTM) dataset. In many forested areas, GDEM v2 has elevations that are higher in the canopy than SRTM.
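
    Both accuracy statistics used in this validation are simple aggregates of the DEM-minus-reference differences. A sketch with a few hypothetical elevations standing in for the >18,000 NGS control points:

```python
import numpy as np

# Hypothetical DEM and reference control-point elevations (meters).
ref = np.array([100.0, 250.0, 75.0, 310.0])
dem = np.array([101.5, 248.0, 80.0, 309.0])

err = dem - ref
mean_error = err.mean()            # bias: positive means the DEM sits above true ground
rmse = np.sqrt((err ** 2).mean())  # root mean square error
print(mean_error, rmse)
```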

  20. Accelerator System Model (ASM) user manual with physics and engineering model documentation. ASM version 1.0

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    1993-07-01

    The Accelerator System Model (ASM) is a computer program developed to model proton radiofrequency accelerators and to carry out system level trade studies. The ASM FORTRAN subroutines are incorporated into an intuitive graphical user interface which provides for the "construction" of the accelerator in a window on the computer screen. The interface is based on the Shell for Particle Accelerator Related Codes (SPARC) software technology written for the Macintosh operating system in the C programming language. This User Manual describes the operation and use of the ASM application within the SPARC interface. The Appendix provides a detailed description of the physics and engineering models used in ASM. ASM Version 1.0 is a joint project of G. H. Gillespie Associates, Inc. and the Accelerator Technology (AT) Division of the Los Alamos National Laboratory. Neither the ASM Version 1.0 software nor this ASM Documentation may be reproduced without the expressed written consent of both the Los Alamos National Laboratory and G. H. Gillespie Associates, Inc.

  1. A generic method for automatic translation between input models for different versions of simulation codes

    Energy Technology Data Exchange (ETDEWEB)

    Serfontein, Dawid E., E-mail: Dawid.Serfontein@nwu.ac.za [School of Mechanical and Nuclear Engineering, North West University (PUK-Campus), PRIVATE BAG X6001 (Internal Post Box 360), Potchefstroom 2520 (South Africa); Mulder, Eben J. [School of Mechanical and Nuclear Engineering, North West University (South Africa); Reitsma, Frederik [Calvera Consultants (South Africa)

    2014-05-01

    A computer code was developed for the semi-automatic translation of input models for the VSOP-A diffusion neutronics simulation code to the format of the newer VSOP 99/05 code. In this paper, this algorithm is presented as a generic method for producing codes for the automatic translation of input models from the format of one code version to another, or even to that of a completely different code. Normally, such translations are done manually. However, input model files, such as for the VSOP codes, often are very large and may consist of many thousands of numeric entries that make no particular sense to the human eye. The task of, for instance, nuclear regulators to verify the accuracy of such translated files can therefore be very difficult and cumbersome. Translation errors may thus go undetected, which may have disastrous consequences later on when a reactor with such a faulty design is built. A generic algorithm for producing such automatic translation codes may therefore ease the translation and verification process to a great extent. It will also remove human error from the process, which may significantly enhance the accuracy and reliability of the process. The developed algorithm also automatically creates a verification log file which permanently records the name and value of each variable used, as well as the list of meanings of all the possible values. This should greatly facilitate reactor licensing applications.
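
    The translation idea reduces to a table-driven mapping from old-format entries to new-format entries, plus a permanent log of every translated variable. A minimal sketch with invented key names (the real VSOP input formats are far larger):

```python
# Hypothetical mapping from old-format variable names to new-format names;
# unmapped names pass through unchanged.
RENAME = {"NDIM": "MESH_DIM", "TEMPIN": "T_INLET"}

def translate(old_model: dict, log: list) -> dict:
    """Translate an input model and append one log entry per variable."""
    new_model = {}
    for key, value in old_model.items():
        new_key = RENAME.get(key, key)
        new_model[new_key] = value
        log.append(f"{key} -> {new_key} = {value}")
    return new_model

log = []
new = translate({"NDIM": 3, "TEMPIN": 523.0}, log)
print(new, log)
```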

  2. Description and evaluation of the Community Multiscale Air Quality (CMAQ) modeling system version 5.1

    Science.gov (United States)

    Wyat Appel, K.; Napelenok, Sergey L.; Foley, Kristen M.; Pye, Havala O. T.; Hogrefe, Christian; Luecken, Deborah J.; Bash, Jesse O.; Roselle, Shawn J.; Pleim, Jonathan E.; Foroutan, Hosein; Hutzell, William T.; Pouliot, George A.; Sarwar, Golam; Fahey, Kathleen M.; Gantt, Brett; Gilliam, Robert C.; Heath, Nicholas K.; Kang, Daiwen; Mathur, Rohit; Schwede, Donna B.; Spero, Tanya L.; Wong, David C.; Young, Jeffrey O.

    2017-04-01

    The Community Multiscale Air Quality (CMAQ) model is a comprehensive multipollutant air quality modeling system developed and maintained by the US Environmental Protection Agency's (EPA) Office of Research and Development (ORD). Recently, version 5.1 of the CMAQ model (v5.1) was released to the public, incorporating a large number of science updates and extended capabilities over the previous release version of the model (v5.0.2). These updates include the following: improvements in the meteorological calculations in both CMAQ and the Weather Research and Forecast (WRF) model used to provide meteorological fields to CMAQ, updates to the gas and aerosol chemistry, revisions to the calculations of clouds and photolysis, and improvements to the dry and wet deposition in the model. Sensitivity simulations isolating several of the major updates to the modeling system show that changes to the meteorological calculations result in enhanced afternoon and early evening mixing in the model, periods when the model historically underestimates mixing. This enhanced mixing results in higher ozone (O3) mixing ratios on average due to reduced NO titration, and lower fine particulate matter (PM2.5) concentrations due to greater dilution of primary pollutants (e.g., elemental and organic carbon). Updates to the clouds and photolysis calculations greatly improve consistency between the WRF and CMAQ models and result in generally higher O3 mixing ratios, primarily due to reduced cloudiness and attenuation of photolysis in the model. Updates to the aerosol chemistry result in higher secondary organic aerosol (SOA) concentrations in the summer, thereby reducing summertime PM2.5 bias (PM2.5 is typically underestimated by CMAQ in the summer), while updates to the gas chemistry result in slightly higher O3 and PM2.5 on average in January and July. Overall, the seasonal variation in simulated PM2.5 generally improves in CMAQv5.1 (when considering all model updates), as simulated PM2.5

  3. Evaluating and improving cloud phase in the Community Atmosphere Model version 5 using spaceborne lidar observations

    Science.gov (United States)

    Kay, Jennifer E.; Bourdages, Line; Miller, Nathaniel B.; Morrison, Ariel; Yettella, Vineel; Chepfer, Helene; Eaton, Brian

    2016-04-01

    Spaceborne lidar observations from the Cloud-Aerosol Lidar and Infrared Pathfinder Satellite Observation (CALIPSO) satellite are used to evaluate cloud amount and cloud phase in the Community Atmosphere Model version 5 (CAM5), the atmospheric component of a widely used state-of-the-art global coupled climate model (Community Earth System Model). By embedding a lidar simulator within CAM5, the idiosyncrasies of spaceborne lidar cloud detection and phase assignment are replicated. As a result, this study makes scale-aware and definition-aware comparisons between model-simulated and observed cloud amount and cloud phase. In the global mean, CAM5 has insufficient liquid cloud and excessive ice cloud when compared to CALIPSO observations. Over the ice-covered Arctic Ocean, CAM5 has insufficient liquid cloud in all seasons. Having important implications for projections of future sea level rise, a liquid cloud deficit contributes to a cold bias of 2-3°C for summer daily maximum near-surface air temperatures at Summit, Greenland. Over the midlatitude storm tracks, CAM5 has excessive ice cloud and insufficient liquid cloud. Storm track cloud phase biases in CAM5 maximize over the Southern Ocean, which also has larger-than-observed seasonal variations in cloud phase. Physical parameter modifications reduce the Southern Ocean cloud phase and shortwave radiation biases in CAM5 and illustrate the power of the CALIPSO observations as an observational constraint. The results also highlight the importance of using a regime-based, as opposed to a geographic-based, model evaluation approach. More generally, the results demonstrate the importance and value of simulator-enabled comparisons of cloud phase in models used for future climate projection.

  4. Overview of the Uniform California Earthquake Rupture Forecast Version 3 (UCERF3) Time-Independent Model

    Science.gov (United States)

    Field, E. H.; Arrowsmith, R.; Biasi, G. P.; Bird, P.; Dawson, T. E.; Felzer, K. R.; Jackson, D. D.; Johnson, K. M.; Jordan, T. H.; Madugo, C. M.; Michael, A. J.; Milner, K. R.; Page, M. T.; Parsons, T.; Powers, P.; Shaw, B. E.; Thatcher, W. R.; Weldon, R. J.; Zeng, Y.

    2013-12-01

    We present the time-independent component of the Uniform California Earthquake Rupture Forecast, Version 3 (UCERF3), where the primary achievements have been to relax fault segmentation and include multi-fault ruptures, both limitations of UCERF2. The rates of all earthquakes are solved for simultaneously, and from a broader range of data, using a system-level 'grand inversion' that is both conceptually simple and extensible. The inverse problem is large and underdetermined, so a range of models is sampled using an efficient simulated annealing algorithm. The approach is more derivative than prescriptive (e.g., magnitude-frequency distributions are no longer assumed), so new analysis tools were developed for exploring solutions. Epistemic uncertainties were also accounted for using 1440 alternative logic tree branches, necessitating access to supercomputers. The most influential uncertainties include alternative deformation models (fault slip rates), a new smoothed seismicity algorithm, alternative values for the total rate of M≥5 events, and different scaling relationships, virtually all of which are new. As a notable first, three deformation models are based on kinematically consistent inversions of geodetic and geologic data, also providing slip-rate constraints on faults previously excluded due to lack of geologic data. The grand inversion constitutes a system-level framework for testing hypotheses and balancing the influence of different experts. For example, we demonstrate serious challenges with the Gutenberg-Richter hypothesis for individual faults. UCERF3 is still an approximation of the system, however, and the range of models is limited (e.g., constrained to stay close to UCERF2). Nevertheless, UCERF3 removes the apparent UCERF2 over-prediction of M6.5-7 earthquake rates, and also includes types of multi-fault ruptures seen in nature. While UCERF3 fits the data better than UCERF2 overall, there may be areas that warrant further site
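
    An underdetermined inversion of the kind described above can be sampled with simulated annealing. A toy sketch with two data equations and three non-negative unknowns standing in for rupture rates — the actual grand inversion is vastly larger and uses a tuned, parallel annealing implementation:

```python
import math
import random

random.seed(1)

# Toy underdetermined system A x = d: 2 data equations, 3 unknowns, so many
# solutions exist; annealing samples one that fits the data.
A = [[1.0, 1.0, 0.0],
     [0.0, 1.0, 1.0]]
d = [1.0, 1.5]

def misfit(x):
    """Sum of squared residuals of A x - d."""
    return sum((sum(a * xi for a, xi in zip(row, x)) - di) ** 2
               for row, di in zip(A, d))

x = [0.0, 0.0, 0.0]
temp = 1.0
for _ in range(20000):
    cand = x[:]
    i = random.randrange(len(cand))
    cand[i] = max(0.0, cand[i] + random.uniform(-0.1, 0.1))  # rates stay >= 0
    delta = misfit(cand) - misfit(x)
    # Metropolis acceptance: always take improvements, sometimes take uphill moves.
    if delta < 0 or random.random() < math.exp(-delta / temp):
        x = cand
    temp *= 0.9997  # geometric cooling schedule
print([round(v, 2) for v in x], round(misfit(x), 4))
```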

  5. Energy Integration for 2050 - A Strategic Impact Model (2050 SIM), Version 2.0

    Energy Technology Data Exchange (ETDEWEB)

    John Collins

    2011-09-01

    The United States (U.S.) energy infrastructure is among the most reliable, accessible, and economic in the world. On the other hand, it is also excessively reliant on foreign energy sources, experiences high volatility in energy prices, does not always practice good stewardship of finite indigenous energy resources, and emits significant quantities of greenhouse gas. The U.S. Department of Energy is conducting research and development on advanced nuclear reactor concepts and technologies, including High Temperature Gas Reactor (HTGR) technologies, directed at helping the United States meet its current and future energy challenges. This report discusses the Draft Strategic Impact Model (SIM), an initial version of which was created during the later part of FY-2010. SIM was developed to analyze and depict the benefits of various energy sources in meeting the energy demand and to provide an overall system understanding of the tradeoffs between building and using HTGRs versus other existing technologies for providing energy (heat and electricity) to various energy-use sectors in the United States. This report also provides the assumptions used in the model, the rationale for the methodology, and the references for the source documentation and source data used in developing the SIM.

  6. Energy Integration for 2050 - A Strategic Impact Model (2050 SIM), Version 1.0

    Energy Technology Data Exchange (ETDEWEB)

    2010-10-01

    The United States (U.S.) energy infrastructure is among the most reliable, accessible, and economic in the world. On the other hand, it is also excessively reliant on foreign energy sources, experiences high volatility in energy prices, does not always practice good stewardship of finite indigenous energy resources, and emits significant quantities of greenhouse gas. The U.S. Department of Energy is conducting research and development on advanced nuclear reactor concepts and technologies, including High Temperature Gas Reactor (HTGR) technologies, directed at helping the United States meet its current and future energy challenges. This report discusses the Draft Strategic Impact Model (SIM), an initial version of which was created during the later part of FY-2010. SIM was developed to analyze and depict the benefits of various energy sources in meeting the energy demand and to provide an overall system understanding of the tradeoffs between building and using HTGRs versus other existing technologies for providing energy (heat and electricity) to various energy-use sectors in the United States. This report also provides the assumptions used in the model, the rationale for the methodology, and the references for the source documentation and source data used in developing the SIM.

  7. A multi-sectoral version of the Post-Keynesian growth model

    Directory of Open Access Journals (Sweden)

    Ricardo Azevedo Araujo

    2015-03-01

    With this inquiry, we seek to develop a disaggregated version of the post-Keynesian approach to economic growth, by showing that it can indeed be treated as a particular case of the Pasinettian model of structural change and economic expansion. By relying upon vertical integration it becomes possible to carry out the analysis initiated by Kaldor (1956) and Robinson (1956, 1962), and followed by Dutt (1984), Rowthorn (1982) and later Bhaduri and Marglin (1990), in a multi-sectoral model in which demand and productivity increase at different paces in each sector. By adopting this approach it is possible to show that the structural economic dynamics are conditioned not only by patterns of evolving demand and diffusion of technological progress but also by the distributive features of the economy, which can give rise to different regimes of economic growth. Besides, we find it possible to determine the natural rate of profit that keeps the mark-up rate constant over time.

  8. Technical Note: Intercomparison of formaldehyde measurements at the atmosphere simulation chamber SAPHIR

    Directory of Open Access Journals (Sweden)

    A. Wisthaler

    2007-11-01

    The atmosphere simulation chamber SAPHIR at the Research Centre Jülich was used to test the suitability of state-of-the-art analytical instruments for the measurement of gas-phase formaldehyde (HCHO) in air. Five analyzers based on four different sensing principles were deployed: a differential optical absorption spectrometer (DOAS), cartridges for 2,4-dinitro-phenyl-hydrazine (DNPH) derivatization followed by off-line high pressure liquid chromatography (HPLC) analysis, two different types of commercially available wet chemical sensors based on Hantzsch fluorimetry, and a proton-transfer-reaction mass spectrometer (PTR-MS). A new optimized mode of operation was used for the PTR-MS instrument which significantly enhanced its performance for on-line HCHO detection at low absolute humidities.

    The instruments were challenged with typical ambient levels of HCHO ranging from zero to several ppb. Synthetic air of high purity and particulate-filtered ambient air were used as sample matrices in the atmosphere simulation chamber onto which HCHO was spiked under varying levels of humidity and ozone. Measurements were compared to mixing ratios calculated from the chamber volume and the known amount of HCHO injected into the chamber; measurements were also compared between the different instruments. The formal and blind intercomparison exercise was conducted under the control of an independent referee. A number of analytical problems associated with the experimental set-up and with individual instruments were identified; the overall agreement between the methods was good.

  9. Technical Note: Intercomparison of formaldehyde measurements at the atmosphere simulation chamber SAPHIR

    Science.gov (United States)

    Wisthaler, A.; Apel, E. C.; Bossmeyer, J.; Hansel, A.; Junkermann, W.; Koppmann, R.; Meier, R.; Müller, K.; Solomon, S. J.; Steinbrecher, R.; Tillmann, R.; Brauers, T.

    2008-04-01

    The atmosphere simulation chamber SAPHIR at the Research Centre Jülich was used to test the suitability of state-of-the-art analytical instruments for the measurement of gas-phase formaldehyde (HCHO) in air. Five analyzers based on four different sensing principles were deployed: a differential optical absorption spectrometer (DOAS), cartridges for 2,4-dinitrophenylhydrazine (DNPH) derivatization followed by off-line high pressure liquid chromatography (HPLC) analysis, two different types of commercially available wet chemical sensors based on Hantzsch fluorimetry, and a proton-transfer-reaction mass spectrometer (PTR-MS). A new optimized mode of operation was used for the PTR-MS instrument which significantly enhanced its performance for online HCHO detection at low absolute humidities. The instruments were challenged with typical ambient levels of HCHO ranging from zero to several ppb. Synthetic air of high purity and particulate-filtered ambient air were used as sample matrices in the atmosphere simulation chamber onto which HCHO was spiked under varying levels of humidity and ozone. Measurements were compared to mixing ratios calculated from the chamber volume and the known amount of HCHO injected into the chamber; measurements were also compared between the different instruments. The formal and blind intercomparison exercise was conducted under the control of an independent referee. A number of analytical problems associated with the experimental set-up and with individual instruments were identified; the overall agreement between the methods was fair.

  10. Technical Note: Intercomparison of formaldehyde measurements at the atmosphere simulation chamber SAPHIR

    Directory of Open Access Journals (Sweden)

    A. Wisthaler

    2008-04-01

    The atmosphere simulation chamber SAPHIR at the Research Centre Jülich was used to test the suitability of state-of-the-art analytical instruments for the measurement of gas-phase formaldehyde (HCHO) in air. Five analyzers based on four different sensing principles were deployed: a differential optical absorption spectrometer (DOAS), cartridges for 2,4-dinitro-phenyl-hydrazine (DNPH) derivatization followed by off-line high pressure liquid chromatography (HPLC) analysis, two different types of commercially available wet chemical sensors based on Hantzsch fluorimetry, and a proton-transfer-reaction mass spectrometer (PTR-MS). A new optimized mode of operation was used for the PTR-MS instrument which significantly enhanced its performance for online HCHO detection at low absolute humidities.

    The instruments were challenged with typical ambient levels of HCHO ranging from zero to several ppb. Synthetic air of high purity and particulate-filtered ambient air were used as sample matrices in the atmosphere simulation chamber onto which HCHO was spiked under varying levels of humidity and ozone. Measurements were compared to mixing ratios calculated from the chamber volume and the known amount of HCHO injected into the chamber; measurements were also compared between the different instruments. The formal and blind intercomparison exercise was conducted under the control of an independent referee. A number of analytical problems associated with the experimental set-up and with individual instruments were identified; the overall agreement between the methods was fair.

  11. Total OH reactivity study from VOC photochemical oxidation in the SAPHIR chamber

    Science.gov (United States)

    Yu, Z.; Tillmann, R.; Hohaus, T.; Fuchs, H.; Novelli, A.; Wegener, R.; Kaminski, M.; Schmitt, S. H.; Wahner, A.; Kiendler-Scharr, A.

    2015-12-01

    It is well known that hydroxyl radicals (OH) act as the dominant reactive species in the degradation of VOCs in the atmosphere. In recent field studies, directly measured total OH reactivity often showed poor agreement with OH reactivity calculated from VOC measurements (e.g. Nölscher et al., 2013; Lu et al., 2012a). This "missing OH reactivity" is attributed to unaccounted biogenic VOC emissions and/or oxidation products. Comparing total OH reactivity measured directly with that calculated from single-component measurements of VOCs and their oxidation products furthers our understanding of the sources of unmeasured reactive species in the atmosphere. It also allows determining the magnitude of the contributions of primary VOC emissions and their oxidation products to the missing OH reactivity. A series of experiments was carried out in the atmosphere simulation chamber SAPHIR in Jülich, Germany, to explore in detail the photochemical degradation of VOCs (isoprene, ß-pinene, limonene, and D6-benzene) by OH. The total OH reactivity was determined from the measurement of VOCs and their oxidation products by a Proton Transfer Reaction Time of Flight Mass Spectrometer (PTR-TOF-MS) with a GC/MS/FID system, and at the same time measured directly by laser-induced fluorescence (LIF). The comparison between these two total OH reactivity measurements showed an increase of missing OH reactivity in the presence of oxidation products of VOCs, indicating a strong contribution of uncharacterized oxidation products to the missing OH reactivity.
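
    The OH reactivity calculated from single-component measurements is the sum of k_i · [X_i] over all measured species. A minimal sketch with a few species and approximate 298 K rate coefficients (illustrative values, not the ones used in the study):

```python
# Calculated OH reactivity: R_OH = sum_i k_i * [X_i] (s^-1), with concentrations
# in molecules cm^-3 and rate coefficients in cm^3 s^-1. The coefficients below
# are approximate 298 K literature values.
K_OH = {
    "isoprene": 1.0e-10,
    "CO": 2.4e-13,
    "CH4": 6.4e-15,
}

def ppb_to_conc(ppb, m=2.46e19):
    """Convert a mixing ratio in ppb to molecules cm^-3 at ~1 atm, 298 K."""
    return ppb * 1e-9 * m

conc = {
    "isoprene": ppb_to_conc(2.0),
    "CO": ppb_to_conc(150.0),
    "CH4": ppb_to_conc(1900.0),
}
r_oh = sum(K_OH[s] * conc[s] for s in conc)
print(f"{r_oh:.2f} s^-1")
```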

  12. Impact of horizontal and vertical localization scales on microwave sounder SAPHIR radiance assimilation

    Science.gov (United States)

    Krishnamoorthy, C.; Balaji, C.

    2016-05-01

    In the present study, the effect of horizontal and vertical localization scales on the assimilation of direct SAPHIR radiances is studied. An Artificial Neural Network (ANN) has been used as a surrogate for the forward radiative calculations. The training input dataset for the ANN consists of vertical layers of atmospheric pressure, temperature, relative humidity and other hydrometeor profiles, with the 6-channel Brightness Temperatures (BTs) as output. The best neural network architecture has been arrived at by a neuron independence study. Since vertical localization of radiance data requires weighting functions, an ANN has also been trained for this purpose. The radiances were ingested into the NWP model using the Ensemble Kalman Filter (EnKF) technique. The horizontal localization is taken care of by using a Gaussian localization function centered around the observed coordinates. Similarly, the vertical localization is accomplished by assuming a function which depends on the weighting function of the channel to be assimilated. The effect of both horizontal and vertical localizations has been studied in terms of ensemble spread in the precipitation. Additionally, improvements in the 24 h forecast from assimilation are also reported.
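
    The Gaussian horizontal localization described above simply down-weights an observation's influence with distance from the observed coordinates. A minimal sketch, where the length scale L is an assumed tuning parameter:

```python
import numpy as np

# Gaussian horizontal localization: weight decays with distance from the
# observation; L_km is the (assumed) localization length scale.
def gaussian_loc(dist_km, L_km):
    return np.exp(-0.5 * (dist_km / L_km) ** 2)

d = np.array([0.0, 50.0, 100.0, 200.0])  # distances of grid points from the obs
w = gaussian_loc(d, 100.0)               # weights applied to the Kalman update
print(w)
```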

  13. Comparison of OH reactivity instruments in the atmosphere simulation chamber SAPHIR

    Science.gov (United States)

    Fuchs, Hendrik

    2016-04-01

    OH reactivity measurement has become an important measurement to constrain the total OH loss frequency in field experiments. Different techniques have been developed by various groups. They can be based on flow-tube or pump and probe techniques, which include direct OH detection by fluorescence, or on a comparative method, in which the OH loss of a reference species competes with the OH loss of trace gases in the sampled air. In order to ensure that these techniques deliver equivalent results, a comparison exercise was performed under controlled conditions. Nine OH reactivity instruments measured together in the atmosphere simulation chamber SAPHIR (volume 270 m3) during ten daylong experiments in October 2015 at ambient temperature (5 to 10° C) and pressure (990-1010 hPa). The chemical complexity of air mixtures in these experiments varied from CO in pure synthetic air to emissions from real plants and VOC/NOx mixtures representative of urban atmospheres. Potential differences between measurements were systematically investigated by changing the amount of reactants (including isoprene, monoterpenes and sesquiterpenes), water vapour, and nitrogen oxides. Some of the experiments also included the oxidation of reactants with ozone or hydroxyl radicals, in order to elaborate, if the presence of oxidation products leads to systematic differences between measurements of different instruments. Here we present first results of this comparison exercise.

  14. The LEMKEN Saphir-7 series seed drill

    Institute of Scientific and Technical Information of China (English)

    2008-01-01

    The LEMKEN Saphir-7 series seed drill is a mechanically driven seeding machine designed for use with large tractors. Mounted at the rear of the tractor, it performs sowing operations, with the sowing depth adjusted via the three-point hitch and a front-mounted control device; it is suitable for large-area shallow-tillage sowing of wheat-type crops. Main features: optional single-disc, double-disc or hoe-type openers give strong adaptability to different soils; a seed metering wheel and an oil-bath gearbox provide stepless adjustment of the seeding rate, ensuring accurate rates and saving seed; it can be combined with a power harrow or cultivator to form a unit for combined one-pass operations.

  15. Re-evaluation of Predictive Models in Light of New Data: Sunspot Number Version 2.0

    Science.gov (United States)

    Gkana, A.; Zachilas, L.

    2016-10-01

    The original version of the Zürich sunspot number (Sunspot Number Version 1.0) has been revised by an entirely new series (Sunspot Number Version 2.0). We re-evaluate the performance of our previously proposed models for predicting solar activity in the light of the revised data. We perform new monthly and yearly predictions using the Sunspot Number Version 2.0 as input data and compare them with our original predictions (using the Sunspot Number Version 1.0 series as input data). We show that our previously proposed models are still able to produce quite accurate solar-activity predictions despite the full revision of the Zürich Sunspot Number, indicating that there is no significant degradation in their performance. Extending our new monthly predictions (July 2013 - August 2015) by 50 time-steps (months) ahead in time (from September 2015 to October 2019), we provide evidence that we are heading into a period of dramatically low solar activity. Finally, our new future long-term predictions endorse our previous claim that a prolonged solar activity minimum is expected to occur, lasting up to the year ≈ 2100.

  16. Relative humidity distribution from SAPHIR experiment on board Megha-Tropiques satellite mission: Comparison with global radiosonde and other satellite and reanalysis data sets

    Science.gov (United States)

    Venkat Ratnam, M.; Basha, Ghouse; Krishna Murthy, B. V.; Jayaraman, A.

    2013-09-01

    To better understand the life cycle of convective systems and their interactions with the environment, a joint Indo-French satellite mission named Megha-Tropiques was launched in October 2011 into a low-inclination (20°) orbit. In the present study, we show the first results of a comparison of relative humidity (RH) profiles, covering the surface to 100 hPa, obtained with one of the payloads, SAPHIR (Sounder for Atmospheric Profiling of Humidity in the Inter-tropical Regions), a six-channel microwave sounder. The RH observations from SAPHIR illustrate the numerous scales of variability in the atmosphere, both vertical and horizontal. As part of its validation, we compare SAPHIR RH with simultaneous observations from a network of radiosondes distributed across the world (±30° latitude), other satellites (Atmospheric Infrared Sounder, Infrared Atmospheric Sounder Interferometer, Constellation Observing System for Meteorology, Ionosphere and Climate (COSMIC)), and various reanalysis products (National Centers for Environmental Prediction (NCEP), European Centre for Medium-Range Weather Forecasts reanalysis (ERA)-Interim, Modern-Era Retrospective Analysis for Research and Applications (MERRA)). With its low inclination, SAPHIR provides better coverage of the tropical region, where some important weather processes take place, than any other existing satellite. A very good correlation is noticed with the RH obtained from the global radiosonde network, particularly in the 850-250 hPa range, thus providing a valuable data set for investigating convective processes. Among the satellite data sets, SAPHIR RH agrees well with COSMIC RH. Among the reanalysis products, NCEP shows the smallest differences from SAPHIR, followed by ERA-Interim, while the MERRA products show large differences in the middle and upper troposphere.

  17. Hydrogeochemical evaluation for Simpevarp model version 1.2. Preliminary site description of the Simpevarp area

    Energy Technology Data Exchange (ETDEWEB)

    Laaksoharju, Marcus (ed.) [Geopoint AB, Stockholm (Sweden)]

    2004-12-01

    Siting studies for SKB's programme of deep geological disposal of nuclear fuel waste currently involve the investigation of two locations, Simpevarp and Forsmark, to determine their geological, hydrogeochemical and hydrogeological characteristics. The work completed to date has resulted in Model version 1.2, which represents the second evaluation of the available Simpevarp groundwater analytical data collected up to April 2004. The deepest fracture groundwater samples with sufficient analytical data reflect depths down to 1.7 km. Model version 1.2 focuses on geochemical and mixing processes affecting the groundwater composition in the uppermost part of the bedrock, down to repository levels, and eventually extending to 1000 m depth. The groundwater flow regimes at Laxemar/Simpevarp are considered local and extend down to depths of around 600-1000 m, depending on local topography. The marked differences in the groundwater flow regimes between Laxemar and Simpevarp are reflected in the groundwater chemistry, where four major hydrochemical groups of groundwaters (types A-D) have been identified. TYPE A: This type comprises dilute groundwaters (< 1000 mg/L Cl; 0.5-2.0 g/L TDS) of Na-HCO3 type, present at shallow (< 200 m) depths at Simpevarp but at greater depths (0-900 m) at Laxemar. At both localities the groundwaters are marginally oxidising close to the surface, but otherwise reducing. Main reactions involve weathering, ion exchange (Ca, Mg), surface complexation, and dissolution of calcite. Redox reactions include precipitation of Fe-oxyhydroxides and some microbially mediated reactions (SRB). Meteoric recharge water is mainly present at Laxemar, whilst at Simpevarp potential mixing of recharge meteoric water and a modern sea component is observed. Localised mixing of meteoric water with deeper saline groundwaters is indicated at both Laxemar and Simpevarp. TYPE B: This type comprises brackish groundwaters (1000-6000 mg/L Cl; 5-10 g/L TDS) present at

  18. Independent Verification and Validation Of SAPHIRE 8 Software Design and Interface Design Project Number: N6423 U.S. Nuclear Regulatory Commission

    Energy Technology Data Exchange (ETDEWEB)

    Kent Norris

    2009-10-01

    The purpose of the Independent Verification and Validation (IV&V) role in the evaluation of the SAPHIRE software design and interface design is to assess the activities that result in the development, documentation, and review of a software design that meets the requirements defined in the software requirements documentation. The IV&V team began this endeavor after the software engineering and software development of SAPHIRE had already been in production. IV&V reviewed the requirements specified in the NRC Form 189s to verify these requirements were included in SAPHIRE’s Software Verification and Validation Plan (SVVP) design specification.

  19. Independent Verification and Validation Of SAPHIRE 8 Software Acceptance Test Plan Project Number: N6423 U.S. Nuclear Regulatory Commission

    Energy Technology Data Exchange (ETDEWEB)

    Kent Norris

    2010-03-01

    The purpose of the Independent Verification and Validation (IV&V) role in the evaluation of the SAPHIRE 8 Software Acceptance Test Plan is to assess the approach to be taken for intended testing activities. The plan typically identifies the items to be tested, the requirements being tested, the testing to be performed, test schedules, personnel requirements, reporting requirements, evaluation criteria, and any risks requiring contingency planning. The IV&V team began this endeavor after the software engineering and software development of SAPHIRE had already been in production.

  20. Independent Verification and Validation Of SAPHIRE 8 Software Design and Interface Design Project Number: N6423 U.S. Nuclear Regulatory Commission

    Energy Technology Data Exchange (ETDEWEB)

    Kent Norris

    2010-03-01

    The purpose of the Independent Verification and Validation (IV&V) role in the evaluation of the SAPHIRE software design and interface design is to assess the activities that result in the development, documentation, and review of a software design that meets the requirements defined in the software requirements documentation. The IV&V team began this endeavor after the software engineering and software development of SAPHIRE had already been in production. IV&V reviewed the requirements specified in the NRC Form 189s to verify these requirements were included in SAPHIRE’s Software Verification and Validation Plan (SVVP) design specification.

  1. ASTER Global Digital Elevation Model Version 2 - summary of validation results

    Science.gov (United States)

    Tachikawa, Tetushi; Kaku, Manabu; Iwasaki, Akira; Gesch, Dean B.; Oimoen, Michael J.; Zhang, Z.; Danielson, Jeffrey J.; Krieger, Tabatha; Curtis, Bill; Haase, Jeff; Abrams, Michael; Carabajal, C.; Meyer, Dave

    2011-01-01

    On June 29, 2009, NASA and the Ministry of Economy, Trade and Industry (METI) of Japan released a Global Digital Elevation Model (GDEM) to users worldwide at no charge as a contribution to the Global Earth Observing System of Systems (GEOSS). This “version 1” ASTER GDEM (GDEM1) was compiled from over 1.2 million scene-based DEMs covering land surfaces between 83°N and 83°S latitudes. A joint U.S.-Japan validation team assessed the accuracy of the GDEM1, augmented by a team of 20 cooperators. The GDEM1 was found to have an overall accuracy of around 20 meters at the 95% confidence level. The team also noted several artifacts associated with poor stereo coverage at high latitudes, cloud contamination, water masking issues and the stacking process used to produce the GDEM1 from individual scene-based DEMs (ASTER GDEM Validation Team, 2009). Two independent horizontal resolution studies estimated the effective spatial resolution of the GDEM1 to be on the order of 120 meters.

  2. Uniform California earthquake rupture forecast, version 3 (UCERF3): the time-independent model

    Science.gov (United States)

    Field, Edward H.; Biasi, Glenn P.; Bird, Peter; Dawson, Timothy E.; Felzer, Karen R.; Jackson, David D.; Johnson, Kaj M.; Jordan, Thomas H.; Madden, Christopher; Michael, Andrew J.; Milner, Kevin R.; Page, Morgan T.; Parsons, Thomas; Powers, Peter M.; Shaw, Bruce E.; Thatcher, Wayne R.; Weldon, Ray J.; Zeng, Yuehua

    2013-01-01

    In this report we present the time-independent component of the Uniform California Earthquake Rupture Forecast, Version 3 (UCERF3), which provides authoritative estimates of the magnitude, location, and time-averaged frequency of potentially damaging earthquakes in California. The primary achievements have been to relax fault segmentation assumptions and to include multifault ruptures, both limitations of the previous model (UCERF2). The rates of all earthquakes are solved for simultaneously, and from a broader range of data, using a system-level "grand inversion" that is both conceptually simple and extensible. The inverse problem is large and underdetermined, so a range of models is sampled using an efficient simulated annealing algorithm. The approach is more derivative than prescriptive (for example, magnitude-frequency distributions are no longer assumed), so new analysis tools were developed for exploring solutions. Epistemic uncertainties were also accounted for using 1,440 alternative logic tree branches, necessitating access to supercomputers. The most influential uncertainties include alternative deformation models (fault slip rates), a new smoothed seismicity algorithm, alternative values for the total rate of M≥5 events, and different scaling relationships, virtually all of which are new. As a notable first, three deformation models are based on kinematically consistent inversions of geodetic and geologic data, also providing slip-rate constraints on faults previously excluded because of lack of geologic data. The grand inversion constitutes a system-level framework for testing hypotheses and balancing the influence of different experts. For example, we demonstrate serious challenges with the Gutenberg-Richter hypothesis for individual faults. UCERF3 is still an approximation of the system, however, and the range of models is limited (for example, constrained to stay close to UCERF2). Nevertheless, UCERF3 removes the apparent UCERF2 overprediction of
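
The "grand inversion" described above solves a large, underdetermined system of earthquake rates by simulated annealing. A toy sketch of that general idea follows (illustrative only; the matrix, cooling schedule and move set are assumptions, not the UCERF3 code):

```python
import numpy as np

def anneal_rates(A, d, n_iter=20000, t0=1.0, seed=0):
    """Find a nonnegative approximate solution of the underdetermined
    system A r = d by simulated annealing on the misfit ||A r - d||^2."""
    rng = np.random.default_rng(seed)
    r = np.abs(rng.normal(size=A.shape[1]))          # nonnegative start
    cost = np.sum((A @ r - d) ** 2)
    for k in range(n_iter):
        temp = t0 * (1.0 - k / n_iter) + 1e-6        # linear cooling schedule
        i = rng.integers(A.shape[1])                 # perturb one rate
        trial = r.copy()
        trial[i] = max(0.0, trial[i] + rng.normal(0.0, 0.1))
        c = np.sum((A @ trial - d) ** 2)
        # accept improvements always, worse moves with Metropolis probability
        if c < cost or rng.random() < np.exp((cost - c) / temp):
            r, cost = trial, c
    return r, cost
```

Because the system is underdetermined, repeated runs with different seeds sample different admissible rate vectors, which is the spirit of sampling "a range of models" noted in the abstract.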

  3. Atmospheric radionuclide transport model with radon postprocessor and SBG module. Model description version 2.8.0; ARTM. Atmosphaerisches Radionuklid-Transport-Modell mit Radon Postprozessor und SBG-Modul. Modellbeschreibung zu Version 2.8.0

    Energy Technology Data Exchange (ETDEWEB)

    Richter, Cornelia; Sogalla, Martin; Thielen, Harald; Martens, Reinhard

    2015-04-20

    The study on the atmospheric radionuclide transport model with radon postprocessor and SBG module (model description version 2.8.0) covers the following issues: determination of emissions, radioactive decay, atmospheric dispersion calculation for radioactive gases, atmospheric dispersion calculation for radioactive dusts, determination of the gamma cloud radiation (gamma submersion), terrain roughness, effective source height, calculation area and model points, geographic reference systems and coordinate transformations, meteorological data, use of invalid meteorological data sets, consideration of statistical uncertainties, consideration of housings, consideration of bumpiness, consideration of terrain roughness, use of frequency distributions of the hourly dispersion situation, consideration of the vegetation period (summer), the radon post processor radon.exe, the SBG module, modeling of wind fields, shading settings.

  4. RAMS Model for Terrestrial Pathways Version 3. 0 (for microcomputers). Model-Simulation

    Energy Technology Data Exchange (ETDEWEB)

    Niebla, E.

    1989-01-01

    The RAMS Model for Terrestrial Pathways is a computer program for calculating numeric criteria for land application and for distribution and marketing of sludges under the sewage-sludge regulations at 40 CFR Part 503. The risk-assessment models covered assume that municipal sludge with specified characteristics is spread across a defined area of ground at a known rate once each year for a given number of years. Risks associated with direct land application and with sludge applied after distribution and marketing are both calculated. The computer program calculates the maximum annual loading of contaminants that can be land-applied and still meet the risk criteria specified as input. Software Description: The program is written in the Turbo/Basic programming language for implementation on IBM PC/AT or compatible machines using the DOS 3.0 or higher operating system. Minimum core storage is 512K.

  5. Planar version of the CPT-even gauge sector of the standard model extension

    Energy Technology Data Exchange (ETDEWEB)

    Ferreira Junior, Manoel M.; Casana, Rodolfo; Gomes, Adalto Rodrigues; Carvalho, Eduardo S. [Universidade Federal do Maranhao (UFMA), Sao Luis, MA (Brazil). Dept. de Fisica]

    2011-07-01

    The CPT-even abelian gauge sector of the Standard Model Extension is represented by the Maxwell term supplemented by (K_F)_{μνρσ} F^{μν} F^{ρσ}, where the Lorentz-violating background tensor (K_F)_{μνρσ} possesses the symmetries of the Riemann tensor and a double null trace, which leaves nineteen independent components. Of these, ten components yield birefringence while nine are nonbirefringent. In the present work, we examine the planar version of this theory, obtained by means of a typical dimensional reduction procedure to (1+2) dimensions. We obtain a kind of planar scalar electrodynamics, which is composed of a gauge sector containing six Lorentz-violating coefficients, a scalar field endowed with a noncanonical kinetic term, and a coupling term that links the scalar and gauge sectors. The dispersion relation is exactly determined, revealing that the six parameters related to the pure electromagnetic sector do not yield birefringence at any order. In this model, birefringence may appear only as a second-order effect associated with the coupling tensor linking the gauge and scalar sectors. The equations of motion are written and solved in the stationary regime. The Lorentz-violating parameters do not alter the asymptotic behavior of the fields but induce an angular dependence not observed in the planar Maxwell theory. The energy-momentum tensor was evaluated as well, revealing that the theory presents energy stability. (author)

  6. A Comparison of Different Versions of the Method of Multiple Scales for an Arbitrary Model of Odd Nonlinearities

    OpenAIRE

    Pakdemirli, Mehmet; Boyacı, Hakan

    1999-01-01

    A general model of cubic and fifth order nonlinearities is considered. The linear part as well as the nonlinearities are expressed in terms of arbitrary operators. Two different versions of the method of multiple scales are used in constructing the general transient and steady-state solutions of the model: Modified Rahman-Burton method and the Reconstitution method. It is found that the usual ordering of reconstitution can be used, if at higher orders of approximation, the time scale correspo...

  7. Scaling and long-range dependence in option pricing III: A fractional version of the Merton model with transaction costs

    Science.gov (United States)

    Wang, Xiao-Tian; Yan, Hai-Gang; Tang, Ming-Ming; Zhu, En-Hui

    2010-02-01

    A fractional version of the Merton option-pricing model, with ‘Hurst exponent’ H in [1/2,1), is established with transaction costs. In particular, for H ∈ (1/2,1) the minimal price Cmin(t,St) of an option under transaction costs is obtained, which shows that the timestep δt and the ‘Hurst exponent’ H play an important role in option pricing with transaction costs.
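
The stochastic driver in such fractional models is fractional Brownian motion B_H with Hurst exponent H. As an illustration of the role of H (this sketch is not the paper's pricing formula), exact fBm paths can be generated by Cholesky factorization of the covariance E[B_H(s)B_H(t)] = (s^{2H} + t^{2H} - |t-s|^{2H})/2:

```python
import numpy as np

def fbm_paths(hurst, n_steps, dt, n_paths=1000, seed=0):
    """Simulate fractional Brownian motion B_H(t) exactly via Cholesky
    factorization of its covariance matrix on a uniform time grid."""
    t = dt * np.arange(1, n_steps + 1)
    s, tt = np.meshgrid(t, t)
    cov = 0.5 * (s ** (2 * hurst) + tt ** (2 * hurst)
                 - np.abs(tt - s) ** (2 * hurst))
    L = np.linalg.cholesky(cov + 1e-12 * np.eye(n_steps))  # jitter for stability
    z = np.random.default_rng(seed).normal(size=(n_steps, n_paths))
    return L @ z    # (n_steps, n_paths), each column is one path

# Variance of B_H(t) grows as t^{2H}: longer memory for H > 1/2
paths = fbm_paths(hurst=0.7, n_steps=50, dt=0.02)
```

For H = 1/2 this reduces to ordinary Brownian motion; for H > 1/2 increments are positively correlated, which is the long-range-dependence regime studied in the abstract.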

  8. Effects of Lower and Higher Quality Brand Versions on Brand Evaluation: an Opponent-Process Model Plus Differential Brand-Version Weighting

    National Research Council Canada - National Science Library

    Timothy Heath; Devon DelVecchio; Michael McCarthy; Subimal Chatterjee

    2009-01-01

    ...) or lower-quality versions (e.g., Ruby Tuesday's Corner Diner). A brand-quality asymmetry emerges on measures ranging from brand choice to brand attitude to perceptions of brand expertise, innovativeness, and prestige...

  9. Does Diversity Matter In Modeling? Testing A New Version Of The FORMIX3 Growth Model For Madagascar Rainforests

    Science.gov (United States)

    Armstrong, A. H.; Fischer, R.; Shugart, H. H.; Huth, A.

    2012-12-01

    Ecological forecasting has become an essential tool used by ecologists to understand the dynamics of growth and disturbance response in threatened ecosystems such as the rainforests of Madagascar. In the species rich tropics, forest conservation is often eclipsed by anthropogenic factors, resulting in a heightened need for accurate assessment of biomass before these ecosystems disappear. The objective of this study was to test a new Madagascar rainforest specific version of the FORMIX3 growth model (Huth and Ditzer, 2000; Huth et al 1998) to assess how accurately biomass can be simulated in high biodiversity forests using a method of functional type aggregation in an individual-based model framework. Rainforest survey data collected over three growing seasons, including 265 tree species, was aggregated into 12 plant functional types based on size and light requirements. Findings indicated that the forest study site compared best when the simulated forest reached mature successional status. Multiple level comparisons between model simulation data and survey plot data found that though some features, such as the dominance of canopy emergent species and relative absence of small woody treelets are captured by the model, other forest attributes were not well reflected. Overall, the ability to accurately simulate the Madagascar rainforest was slightly diminished by the aggregation of tree species into size and light requirement functional type groupings.

  10. The Digital Astronaut Project Computational Bone Remodeling Model (Beta Version) Bone Summit Summary Report

    Science.gov (United States)

    Pennline, James; Mulugeta, Lealem

    2013-01-01

    Under the conditions of microgravity, astronauts lose bone mass at a rate of 1% to 2% a month, particularly in the lower extremities such as the proximal femur [1-3]. The most commonly used countermeasure against bone loss in microgravity has been prescribed exercise [4]. However, data has shown that existing exercise countermeasures are not as effective as desired for preventing bone loss in long duration, 4 to 6 months, spaceflight [1,3,5,6]. This spaceflight related bone loss may cause early onset of osteoporosis to place the astronauts at greater risk of fracture later in their lives. Consequently, NASA seeks to have improved understanding of the mechanisms of bone demineralization in microgravity in order to appropriately quantify this risk, and to establish appropriate countermeasures [7]. In this light, NASA's Digital Astronaut Project (DAP) is working with the NASA Bone Discipline Lead to implement well-validated computational models to help predict and assess bone loss during spaceflight, and enhance exercise countermeasure development. More specifically, computational modeling is proposed as a way to augment bone research and exercise countermeasure development to target weight-bearing skeletal sites that are most susceptible to bone loss in microgravity, and thus at higher risk for fracture. Given that hip fractures can be debilitating, the initial model development focused on the femoral neck. Future efforts will focus on including other key load bearing bone sites such as the greater trochanter, lower lumbar, proximal femur and calcaneus. The DAP has currently established an initial model (Beta Version) of bone loss due to skeletal unloading in the femoral neck region. The model calculates changes in mineralized volume fraction of bone in this segment and relates it to changes in bone mineral density (vBMD) measured by Quantitative Computed Tomography (QCT). The model is governed by equations describing changes in bone volume fraction (BVF), and rates of
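
The excerpt notes that the model is governed by equations for bone volume fraction (BVF) but truncates before giving them. As a purely hypothetical stand-in (not the DAP equations; every parameter and the equation itself are illustrative assumptions), a first-order relaxation toward an unloaded equilibrium reproduces the quoted 1% to 2% per-month loss rate:

```python
import numpy as np

def bvf_unloading(bvf0=0.85, k=0.004, bvf_floor=0.70, days=180):
    """Hypothetical first-order model of bone-volume-fraction loss under
    skeletal unloading: dBVF/dt = -k * (BVF - bvf_floor).
    Forward-Euler integration with a 1-day step."""
    bvf = np.empty(days + 1)
    bvf[0] = bvf0
    for i in range(days):
        bvf[i + 1] = bvf[i] - k * (bvf[i] - bvf_floor)
    return bvf

bvf = bvf_unloading()
monthly_loss_pct = 100 * (bvf[0] - bvf[30]) / bvf[0]  # % lost in first month
```

With these assumed parameters the first-month loss is about 2%, at the upper end of the range cited in the abstract; the loss rate then decays as BVF approaches the assumed unloaded floor.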

  11. Software Design Description for the Navy Coastal Ocean Model (NCOM) Version 4.0

    Science.gov (United States)

    2008-12-31

    [Fragmentary abstract: only pieces of the report's subroutine-interface descriptions survive in this excerpt, e.g. the string-parsing routine strpars(cline, cdelim, nstr, cstr, nsto, ierr) and subroutine COAMPS_UVG2UV, from report NRL/MR/7320--08-9149, NCOM Version 4.0 SDD.]

  12. MATILDA Version 2: Rough Earth TIALD Model for Laser Probabilistic Risk Assessment in Hilly Terrain - Part I

    Science.gov (United States)

    2017-03-13

    [Fragmentary abstract: only report-cover text survives in this excerpt — report number AFRL-RH-FS-TR-2017-0009; title "MATILDA Version-2: Rough Earth TIALD Model for Laser Probabilistic Risk Assessment in Hilly Terrain – Part I"; author Paul K...; Distribution A: Approved for public release; distribution unlimited; PA Case No: TSRL-PA-2017-0169 — plus standard report-form boilerplate.]

  13. The Flexible Global Ocean-Atmosphere-Land System Model,Grid-point Version 2:FGOALS-g2

    Institute of Scientific and Technical Information of China (English)

    LI Lijuan; LIN Pengfei; YU Yongqiang; WANG Bin; ZHOU Tianjun; LIU Li; LIU Jiping

    2013-01-01

    This study mainly introduces the development of the Flexible Global Ocean-Atmosphere-Land System Model, Grid-point Version 2 (FGOALS-g2), and the preliminary evaluation of its performance based on results from the pre-industrial control run and four members of historical runs according to the fifth phase of the Coupled Model Intercomparison Project (CMIP5) experiment design. The results suggest that many obvious improvements have been achieved by FGOALS-g2 compared with the previous version, FGOALS-g1, including its climatological mean states, climate variability, and 20th century surface temperature evolution. For example, FGOALS-g2 better simulates the frequency of tropical land precipitation, East Asian Monsoon precipitation and its seasonal cycle, the MJO and ENSO, which are closely related to the updated cumulus parameterization scheme, as well as the alleviation of uncertainties in some key parameters in the shallow and deep convection schemes, cloud fraction, cloud macro/microphysical processes and the boundary layer scheme in its atmospheric model. The annual cycle of sea surface temperature along the equator in the Pacific is significantly improved in the new version. The sea ice salinity simulation is one of the unique characteristics of FGOALS-g2, although it is somewhat inconsistent with empirical observations in the Antarctic.

  14. Programs OPTMAN and SHEMMAN Version 6 (1999) - Coupled-Channels optical model and collective nuclear structure calculation -

    Energy Technology Data Exchange (ETDEWEB)

    Chang, Jong Hwa; Lee, Jeong Yeon; Lee, Young Ouk; Sukhovitski, Efrem Sh. [Korea Atomic Energy Research Institute, Taejeon (Korea)]

    2000-01-01

    Programs SHEMMAN and OPTMAN (Version 6) have been developed for determination of nuclear Hamiltonian parameters and for optical model calculations, respectively. The optical model calculations by OPTMAN, with coupling schemes built on wave functions of the non-axial soft rotator, are self-consistent, since the parameters of the nuclear Hamiltonian are determined by adjusting the energies of collective levels to experimental values with SHEMMAN prior to the optical model calculation. The programs have been installed at the Nuclear Data Evaluation Laboratory of KAERI. This report is intended as a brief manual of these codes. 43 refs., 9 figs., 1 tab. (Author)

  15. Comparison of N2O5 mixing ratios during NO3Comp 2007 in SAPHIR

    Directory of Open Access Journals (Sweden)

    A. W. Rollins

    2012-07-01

    N2O5 detection in the atmosphere has been accomplished using techniques which have been developed during the last decade. Most techniques use a heated inlet to thermally decompose N2O5 to NO3, which can be detected by either cavity-based absorption at 662 nm or by laser-induced fluorescence. In summer 2007, a large set of instruments capable of measuring NO3 mixing ratios were simultaneously deployed in the atmosphere simulation chamber SAPHIR in Jülich, Germany. Some of these instruments measured N2O5 mixing ratios either simultaneously or alternatively. Experiments focussed on the investigation of potential interferences from e.g. water vapor or aerosol and on the investigation of the oxidation of biogenic volatile organic compounds by NO3. The comparison of N2O5 mixing ratios shows an excellent agreement between measurements of instruments applying different techniques (3 cavity ring-down (CRDS) instruments, 2 laser-induced fluorescence (LIF) instruments). Data sets are highly correlated, as indicated by the square of the linear correlation coefficient, R2, whose values are larger than 0.96 for the entire data sets. N2O5 mixing ratios agree well within the combined accuracy of the measurements. Slopes of the linear regression range between 0.87 and 1.26 and intercepts are negligible. The most critical aspect of N2O5 measurements by cavity ring-down instruments is the determination of the inlet and filter transmission efficiency. Measurements here show that the N2O5 inlet transmission efficiency can decrease in the presence of high aerosol loads, and that frequent filter/inlet changing is necessary to quantitatively sample N2O5 in some environments. The analysis of data also demonstrates that a general correction for degrading filter transmission is not applicable for all conditions encountered during this campaign. Besides the effect of a gradual degradation of the inlet transmission efficiency with aerosol exposure, no other interference
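
The comparison metrics quoted above (regression slope, intercept, and squared correlation coefficient R²) can be computed as sketched below; the synthetic "instrument" data and all names are illustrative assumptions:

```python
import numpy as np

def compare_instruments(x, y):
    """Linear regression y = slope*x + intercept and squared correlation R^2,
    as used to compare time series from two co-sampling instruments."""
    slope, intercept = np.polyfit(x, y, 1)
    r = np.corrcoef(x, y)[0, 1]
    return slope, intercept, r ** 2

# Synthetic example: instrument B reads 5% high with small random noise
rng = np.random.default_rng(1)
n2o5_a = rng.uniform(10, 500, 200)                  # pptv, instrument A
n2o5_b = 1.05 * n2o5_a + rng.normal(0, 5, 200)      # pptv, instrument B
slope, intercept, r2 = compare_instruments(n2o5_a, n2o5_b)
```

A slope near 1 with negligible intercept and R² above ~0.96, as reported for the SAPHIR data sets, indicates the instruments track the same quantity to within a small calibration factor.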

  16. Comparison of N2O5 mixing ratios during NO3Comp 2007 in SAPHIR

    Science.gov (United States)

    Fuchs, H.; Simpson, W. R.; Apodaca, R. L.; Brauers, T.; Cohen, R. C.; Crowley, J. N.; Dorn, H.-P.; Dubé, W. P.; Fry, J. L.; Häseler, R.; Kajii, Y.; Kiendler-Scharr, A.; Labazan, I.; Matsumoto, J.; Mentel, T. F.; Nakashima, Y.; Rohrer, F.; Rollins, A. W.; Schuster, G.; Tillmann, R.; Wahner, A.; Wooldridge, P. J.; Brown, S. S.

    2012-11-01

    N2O5 detection in the atmosphere has been accomplished using techniques which have been developed during the last decade. Most techniques use a heated inlet to thermally decompose N2O5 to NO3, which can be detected by either cavity-based absorption at 662 nm or by laser-induced fluorescence. In summer 2007, a large set of instruments capable of measuring NO3 mixing ratios were simultaneously deployed in the atmosphere simulation chamber SAPHIR in Jülich, Germany. Some of these instruments measured N2O5 mixing ratios either simultaneously or alternatively. Experiments focused on the investigation of potential interferences from, e.g., water vapour or aerosol and on the investigation of the oxidation of biogenic volatile organic compounds by NO3. The comparison of N2O5 mixing ratios shows an excellent agreement between measurements of instruments applying different techniques (3 cavity ring-down (CRDS) instruments, 2 laser-induced fluorescence (LIF) instruments). Datasets are highly correlated, as indicated by the square of the linear correlation coefficient, R2, whose values were larger than 0.96 for the entire datasets. N2O5 mixing ratios agree well within the combined accuracy of the measurements. Slopes of the linear regression range between 0.87 and 1.26 and intercepts are negligible. The most critical aspect of N2O5 measurements by cavity ring-down instruments is the determination of the inlet and filter transmission efficiency. Measurements here show that the N2O5 inlet transmission efficiency can decrease in the presence of high aerosol loads, and that frequent filter/inlet changing is necessary to quantitatively sample N2O5 in some environments. The analysis of data also demonstrates that a general correction for degrading filter transmission is not applicable for all conditions encountered during this campaign. Besides the effect of a gradual degradation of the inlet transmission efficiency with aerosol exposure, no other interference for N2O5

  17. UNSAT-H Version 3.0: Unsaturated Soil Water and Heat Flow Model Theory, User Manual, and Examples

    Energy Technology Data Exchange (ETDEWEB)

    MJ Fayer

    2000-06-12

    The UNSAT-H model was developed at Pacific Northwest National Laboratory (PNNL) to assess the water dynamics of arid sites and, in particular, estimate recharge fluxes for scenarios pertinent to waste disposal facilities. During the last 4 years, the UNSAT-H model received support from the Immobilized Waste Program (IWP) of the Hanford Site's River Protection Project. This program is designing and assessing the performance of on-site disposal facilities to receive radioactive wastes that are currently stored in single- and double-shell tanks at the Hanford Site (LMHC 1999). The IWP is interested in estimates of recharge rates for current conditions and long-term scenarios involving the vadose zone disposal of tank wastes. Simulation modeling with UNSAT-H is one of the methods being used to provide those estimates (e.g., Rockhold et al. 1995; Fayer et al. 1999). To achieve the above goals for assessing water dynamics and estimating recharge rates, the UNSAT-H model addresses soil water infiltration, redistribution, evaporation, plant transpiration, deep drainage, and soil heat flow as one-dimensional processes. The UNSAT-H model simulates liquid water flow using Richards' equation (Richards 1931), water vapor diffusion using Fick's law, and sensible heat flow using the Fourier equation. This report documents UNSAT-H Version 3.0. The report includes the bases for the conceptual model and its numerical implementation, benchmark test cases, example simulations involving layered soils and plants, and the code manual. Version 3.0 is an enhanced-capability update of UNSAT-H Version 2.0 (Fayer and Jones 1990). New features include hysteresis, an iterative solution of head and temperature, an energy balance check, the modified Picard solution technique, additional hydraulic functions, multiple-year simulation capability, and general enhancements.
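
For reference, the one-dimensional Richards equation that the abstract names for liquid water flow is commonly written as follows, with θ the volumetric water content, h the matric head, K(h) the unsaturated hydraulic conductivity, z positive upward, and S a sink term such as root water uptake:

```latex
\frac{\partial \theta}{\partial t}
  = \frac{\partial}{\partial z}\!\left[ K(h)\left( \frac{\partial h}{\partial z} + 1 \right) \right] - S(z,t)
```

The "+1" inside the brackets is the unit gravitational gradient; setting S to the transpiration extraction couples the plant-uptake process the abstract lists into the same equation.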

  18. Technical report series on global modeling and data assimilation. Volume 1: Documentation of the Goddard Earth Observing System (GEOS) General Circulation Model, version 1

    Science.gov (United States)

    Suarez, Max J. (Editor); Takacs, Lawrence L.; Molod, Andrea; Wang, Tina

    1994-01-01

This technical report documents Version 1 of the Goddard Earth Observing System (GEOS) General Circulation Model (GCM). The GEOS-1 GCM is being used by NASA's Data Assimilation Office (DAO) to produce multiyear data sets for climate research. This report provides documentation of the model components used in the GEOS-1 GCM, a complete description of the available model diagnostics, and a User's Guide to facilitate GEOS-1 GCM experiments.

  19. A Fast and Efficient Version of the TwO-Moment Aerosol Sectional (TOMAS) Global Aerosol Microphysics Model

    Science.gov (United States)

    Lee, Yunha; Adams, P. J.

    2012-01-01

This study develops more computationally efficient versions of the TwO-Moment Aerosol Sectional (TOMAS) microphysics algorithms, collectively called Fast TOMAS. Several methods for speeding up the algorithm were attempted, but only reducing the number of size sections was adopted. Fast TOMAS models, coupled to the GISS GCM II-prime, require a new coagulation algorithm with less restrictive size resolution assumptions but only minor changes in other processes. Fast TOMAS models have been evaluated in a box model against analytical solutions of coagulation and condensation and in a 3-D model against the original TOMAS (TOMAS-30) model. Condensation and coagulation in the Fast TOMAS models agree well with the analytical solution but show slightly more bias than the TOMAS-30 box model. In the 3-D model, errors resulting from decreased size resolution in each process (i.e., emissions, cloud processing/wet deposition, microphysics) are quantified in a series of model sensitivity simulations. Errors resulting from lower size resolution in condensation and coagulation, defined as the microphysics error, affect number and mass concentrations by only a few percent. The microphysics error in CN70/CN100 (number concentrations of particles larger than 70/100 nm diameter), proxies for cloud condensation nuclei, ranges from −5 to 5% in most regions. The largest errors are associated with decreasing the size resolution in the cloud processing/wet deposition calculations, defined as the cloud-processing error, and range from −20 to 15% in most regions for CN70/CN100 concentrations. Overall, the Fast TOMAS models increase the computational speed by 2 to 3 times with only small numerical errors stemming from condensation and coagulation calculations when compared to TOMAS-30. The faster versions of the TOMAS model allow for the longer, multi-year simulations required to assess aerosol effects on cloud lifetime and precipitation.
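The box-model verification described above (numerical coagulation checked against an analytical solution) can be illustrated in a few lines. The sketch assumes a constant coagulation kernel, for which dN/dt = −K·N²/2 has a closed-form solution; real aerosol kernels are size-dependent and TOMAS's sectional algorithm is far more involved.

```python
# Box-model coagulation check against the analytical solution, in the spirit
# of the TOMAS box-model tests described above. A constant coagulation kernel
# K is assumed purely for illustration.

def simulate(N0, K, dt, nsteps):
    """Integrate dN/dt = -K*N^2/2 with forward Euler."""
    N = N0
    for _ in range(nsteps):
        N -= 0.5 * K * N * N * dt
    return N

N0 = 1.0e4          # initial number concentration, cm^-3
K = 1.0e-9          # constant kernel, cm^3 s^-1 (hypothetical)
t = 3600.0          # one hour
num = simulate(N0, K, dt=1.0, nsteps=3600)
ana = N0 / (1.0 + 0.5 * K * N0 * t)    # analytical solution for a constant kernel
print(abs(num - ana) / ana)            # small relative error
```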

  20. Simulations of the mid-Pliocene Warm Period using two versions of the NASA/GISS ModelE2-R Coupled Model

    Directory of Open Access Journals (Sweden)

    M. A. Chandler

    2013-04-01

Full Text Available The mid-Pliocene Warm Period (mPWP) bears many similarities to aspects of future global warming as projected by the Intergovernmental Panel on Climate Change (IPCC, 2007). Both marine and terrestrial data point to high-latitude temperature amplification, including large decreases in sea ice and land ice, as well as expansion of warmer climate biomes into higher latitudes. Here we present our most recent simulations of the mid-Pliocene climate using the CMIP5 version of the NASA/GISS Earth System Model (ModelE2-R). We describe the substantial impact associated with a recent correction made in the implementation of the Gent-McWilliams ocean mixing scheme (GM), which has a large effect on the simulation of ocean surface temperatures, particularly in the North Atlantic Ocean. The effect of this correction on the Pliocene climate results would not have been easily determined from examining its impact on the preindustrial runs alone, a useful demonstration of how the consequences of code improvements as seen in modern climate control runs do not necessarily portend the impacts in extreme climates. Both the GM-corrected and GM-uncorrected simulations were contributed to the Pliocene Model Intercomparison Project (PlioMIP) Experiment 2. Many findings presented here corroborate results from other PlioMIP multi-model ensemble papers, but we also emphasise features in the ModelE2-R simulations that are unlike the ensemble means. The corrected version yields results that more closely resemble the ocean core data as well as the PRISM3D reconstructions of the mid-Pliocene, especially the dramatic warming in the North Atlantic and Greenland-Iceland-Norwegian Sea, which in the new simulation appears to be far more realistic than previously found with older versions of the GISS model.
Our belief is that continued development of key physical routines in the atmospheric model, along with higher resolution and recent corrections to mixing parameterisations in the ocean

  1. Land-total and Ocean-total Precipitation and Evaporation from a Community Atmosphere Model version 5 Perturbed Parameter Ensemble

    Energy Technology Data Exchange (ETDEWEB)

    Covey, Curt [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Lucas, Donald D. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Trenberth, Kevin E. [National Center for Atmospheric Research, Boulder, CO (United States)

    2016-03-02

This document presents the large-scale water budget statistics of a perturbed input-parameter ensemble of atmospheric model runs. The model is Version 5.1.02 of the Community Atmosphere Model (CAM). These runs are the “C-Ensemble” described by Qian et al., “Parametric Sensitivity Analysis of Precipitation at Global and Local Scales in the Community Atmosphere Model CAM5” (Journal of Advances in Modeling the Earth System, 2015). As noted by Qian et al., the simulations are “AMIP type” with temperature and sea ice boundary conditions chosen to match surface observations for the five-year period 2000-2004. There are 1100 ensemble members in addition to one run with default input-parameter values.

  2. Simple geometrical explanation of Gurtin-Murdoch model of surface elasticity with clarification of its related versions

    Institute of Scientific and Technical Information of China (English)

    2010-01-01

It is shown that all equations of the linearized Gurtin-Murdoch model of surface elasticity can be derived, in a straightforward way, from a simple second-order expression for the ratio of deformed surface area to initial surface area. This elementary derivation offers a simple explanation for all unique features of the model and its simplified/modified versions, and helps to clarify some misunderstandings of the model already occurring in the literature. Finally, it is demonstrated that, because the Gurtin-Murdoch model is based on a hybrid formulation combining linearized deformation of bulk material with 2nd-order finite deformation of the surface, caution is needed when the original form of this model is applied to bending deformation of thin-walled elastic structures with surface stress.
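The second-order area-ratio expression the abstract refers to can be sketched for a flat surface patch. The coefficients below follow from a standard 2×2 determinant identity and are an illustrative reconstruction, not the paper's own formula.

```latex
% Sketch (illustrative reconstruction): area ratio for a flat surface patch
% with in-plane surface displacement gradient \nabla_s u (a 2x2 matrix).
% The 2x2 identity det(I + A) = 1 + tr A + det A is exact, so
\[
\frac{\mathrm{d}A}{\mathrm{d}A_0}
  = \det\!\left(\mathbf{I} + \nabla_s \mathbf{u}\right)
  = 1 + \operatorname{tr}\boldsymbol{\varepsilon}_s
      + \det\!\left(\nabla_s \mathbf{u}\right),
\qquad
\boldsymbol{\varepsilon}_s
  = \tfrac{1}{2}\left(\nabla_s \mathbf{u} + \nabla_s \mathbf{u}^{\mathsf{T}}\right),
\]
% since the antisymmetric part of \nabla_s u is traceless. The determinant term
% is second order in the displacement gradient -- the "2nd-order finite
% deformation of the surface" retained while the bulk is linearized. Curved
% surfaces and out-of-plane displacements contribute further second-order terms.
```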

  3. Evaluation of the tropospheric aerosol number concentrations simulated by two versions of the global model ECHAM5-HAM

    Science.gov (United States)

    Zhang, K.; Kazil, J.; Feichter, J.

    2009-04-01

Since its first version developed by Stier et al. (2005), the global aerosol-climate model ECHAM5-HAM has gone through further development and updates. The changes in the model include (1) a new time integration scheme for the condensation of the sulfuric acid gas on existing particles, (2) a new aerosol nucleation scheme that takes into account the charged nucleation caused by cosmic rays, and (3) a parameterization scheme explicitly describing the conversion of aerosol particles to cloud nuclei. In this work, simulations performed with the old and new model versions are evaluated against some measurements reported in recent years. The focus is on the aerosol size distribution in the troposphere. Results show that modifications in the parameterizations have led to significant changes in the simulated aerosol concentrations. Vertical profiles of the total particle number concentration (diameter > 3 nm) compiled by Clarke et al. (2002) suggest that, over the Pacific in the upper free troposphere, the tropics are associated with much higher concentrations than the mid-latitude regions. This feature is more reasonably reproduced by the new model version, mainly due to the improved results of the nucleation mode aerosols. In the lower levels (2-5 km above the Earth's surface), the number concentrations of the Aitken mode particles are overestimated compared to both the Pacific data given in Clarke et al. (2002) and the vertical profiles over Europe reported by Petzold et al. (2007). The physical and chemical processes that have led to these changes are identified by sensitivity tests. References: Clarke and Kapustin: A Pacific aerosol survey - part 1: a decade of data on production, transport, evolution and mixing in the troposphere, J. Atmos. Sci., 59, 363-382, 2002. Petzold et al.: Perturbation of the European free troposphere aerosol by North American forest fire plumes during the ICARTT-ITOP experiment in summer 2004, Atmos. Chem. Phys., 7, 5105-5127, 2007

  4. Isotope effect in the formation of H2 from H2CO studied at the atmospheric simulation chamber SAPHIR

    Directory of Open Access Journals (Sweden)

    R. Koppmann

    2010-06-01

Full Text Available Formaldehyde of known, near-natural isotopic composition was photolyzed in the SAPHIR atmosphere simulation chamber under ambient conditions. The isotopic composition of the product H2 was used to determine the isotope effects in formaldehyde photolysis. The experiments are sensitive to the molecular photolysis channel, and the radical channel has only an indirect effect and cannot be effectively constrained. The molecular channel kinetic isotope effect KIEmol, the ratio of photolysis frequencies j(HCHO→CO+H2)/j(HCDO→CO+HD) at surface pressure, is determined to be KIEmol = 1.63 (+0.038/−0.046). This is similar to the kinetic isotope effect for the total removal of HCHO from a recent relative rate experiment (KIEtot = 1.58 ± 0.03), which indicates that the KIEs in the molecular and radical photolysis channels at surface pressure (≈100 kPa) may not be as different as described previously in the literature.

  5. SAPHIRE (scintillator avalanche photoconductor with high resolution emitter readout) for low dose x-ray imaging: spatial resolution.

    Science.gov (United States)

    Li, Dan; Zhao, Wei

    2008-07-01

An indirect flat panel imager (FPI) with programmable avalanche gain and field emitter array (FEA) readout is being investigated for low-dose and high resolution x-ray imaging. It is made by optically coupling a structured x-ray scintillator, e.g., thallium (Tl) doped cesium iodide (CsI), to an amorphous selenium (a-Se) avalanche photoconductor called high-gain avalanche rushing amorphous photoconductor (HARP). The charge image created by the scintillator/HARP (SHARP) combination is read out by the electron beams emitted from the FEA. The proposed detector is called scintillator avalanche photoconductor with high resolution emitter readout (SAPHIRE). The programmable avalanche gain of HARP can improve the low dose performance of indirect FPI while the FEA can be made with pixel sizes down to 50 μm. Because of the avalanche gain, a high resolution type of CsI (Tl), which has not been widely used in indirect FPI due to its lower light output, can be used to improve the high spatial frequency performance. The purpose of the present article is to investigate the factors affecting the spatial resolution of SAPHIRE. Since the resolution performance of the SHARP combination has been well studied, the focus of the present work is on the inherent resolution of the FEA readout method. The lateral spread of the electron beam emitted from a 50 μm × 50 μm pixel FEA was investigated with two different electron-optical designs: mesh-electrode-only and electrostatic focusing. Our results showed that electrostatic focusing can limit the lateral spread of electron beams to within the pixel size, down to 50 μm. Since electrostatic focusing is essentially independent of signal intensity, it will provide excellent spatial uniformity.

  6. Update of the Polar SWIFT model for polar stratospheric ozone loss (Polar SWIFT version 2)

    Directory of Open Access Journals (Sweden)

    I. Wohltmann

    2017-07-01

Full Text Available The Polar SWIFT model is a fast scheme for calculating the chemistry of stratospheric ozone depletion in polar winter. It is intended for use in global climate models (GCMs) and Earth system models (ESMs) to enable the simulation of mutual interactions between the ozone layer and climate. To date, climate models often use prescribed ozone fields, since a full stratospheric chemistry scheme is computationally very expensive. Polar SWIFT is based on a set of coupled differential equations, which simulate the polar vortex-averaged mixing ratios of the key species involved in polar ozone depletion on a given vertical level. These species are O3, chemically active chlorine (ClOx), HCl, ClONO2 and HNO3. The only external input parameters that drive the model are the fraction of the polar vortex in sunlight and the fraction of the polar vortex below the temperatures necessary for the formation of polar stratospheric clouds. Here, we present an update of the Polar SWIFT model introducing several improvements over the original model formulation. In particular, the model is now trained on vortex-averaged reaction rates of the ATLAS Chemistry and Transport Model, which enables a detailed look at individual processes and an independent validation of the different parameterizations contained in the differential equations. The training of the original Polar SWIFT model was based on fitting complete model runs to satellite observations and did not allow for this. A revised formulation of the system of differential equations is developed, which closely fits vortex-averaged reaction rates from ATLAS that represent the main chemical processes influencing ozone. In addition, a parameterization for the HNO3 change by denitrification is included. The rates of change of the concentrations of the chemical species of the Polar SWIFT model are purely chemical rates of change in the new version, whereas in the original Polar SWIFT model, they included a transport effect
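The structure described above (a small set of coupled, vortex-averaged differential equations driven by the sunlit and PSC fractions) can be caricatured in a few lines. Everything quantitative here (the species pair, rate constants, and forcing values) is invented for illustration and is not the published Polar SWIFT parameterization.

```python
# Toy two-equation analogue of a vortex-averaged polar chemistry scheme:
# chlorine is activated on polar stratospheric clouds (PSCs) and destroys
# ozone in sunlight. Only the structure follows the Polar SWIFT description;
# the rate constants and forcings are hypothetical.

def integrate(days, f_psc, f_sun, dt=0.01):
    clox, o3 = 0.0, 3.0            # active chlorine (toy ppb), ozone (toy ppm)
    k_act, k_loss = 0.5, 0.05      # hypothetical rate constants, per day
    for _ in range(int(days / dt)):
        dclox = k_act * f_psc * (3.0 - clox)   # PSC-driven activation toward a cap
        do3 = -k_loss * f_sun * clox * o3      # sunlight-driven catalytic O3 loss
        clox += dclox * dt
        o3 += do3 * dt
    return clox, o3

clox, o3 = integrate(days=60, f_psc=0.4, f_sun=0.3)
print(round(clox, 2), round(o3, 2))  # chlorine mostly activated, ozone depleted
```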

  7. Secondary organic aerosols - formation and ageing studies in the SAPHIR chamber

    Science.gov (United States)

    Spindler, Christian; Müller, Lars; Trimborn, Achim; Mentel, Thomas; Hoffmann, Thorsten

    2010-05-01

Secondary organic aerosol (SOA) formation from oxidation products of biogenic volatile organic compounds (BVOC) constitutes an important coupling between vegetation, atmospheric chemistry, and climate change. Such secondary organic aerosol components play an important role in particle formation in Boreal regions (Laaksonen et al., 2008), where biogenic secondary organic aerosols contribute to an overall negative radiative forcing, thus a negative feedback between vegetation and climate warming (Spracklen et al., 2008). Within the EUCAARI project we investigated SOA formation from mixtures of monoterpenes (and sesquiterpenes) as emitted typically from Boreal tree species in Southern Finland. The experiments were performed in the large photochemical reactor SAPHIR in Juelich at natural light and oxidant levels. Oxidation of the BVOC mixtures and SOA formation was induced by OH radicals and O3. The SOA was formed on the first day and then aged for another day. The resulting SOA was characterized by HR-ToF-AMS, APCI-MS, and filter samples with subsequent H-NMR, GC-MS and HPLC-MS analysis. The chemical evolution of the SOA is characterized by a fast increase of the O/C ratio during the formation process on the first day, stable O/C ratio during night, and a distinctive increase of O/C ratio at the second day. The increase of the O/C ratio on the second day is highly correlated to the OH dose and is accompanied by condensational growth of the particles. We will present simultaneous factor analysis of AMS time series (PMF; Ulbrich et al., 2009) and direct measurements of individual chemical species. We found that four factors were needed to represent the time evolution of the SOA composition (in the mass spectra) if oxidation by OH plays a major role. Corresponding to these factors we observed individual, representative molecules with very similar time behaviour. The correlation between tracers and AMS factors is astonishingly good as the molecular tracers

  8. Intercomparison of NO3 radical detection instruments in the atmosphere simulation chamber SAPHIR

    Directory of Open Access Journals (Sweden)

    J. Thieser

    2013-01-01

Full Text Available The detection of atmospheric NO3 radicals is still challenging owing to its low mixing ratios (≈ 1 to 300 pptv) in the troposphere. While long-path differential optical absorption spectroscopy (DOAS) is a well-established NO3 detection approach for over 25 yr, newly sensitive techniques have been developed in the past decade. This publication outlines the results of the first comprehensive intercomparison of seven instruments developed for the spectroscopic detection of tropospheric NO3. Four instruments were based on cavity ring-down spectroscopy (CRDS), two utilised open-path cavity enhanced absorption spectroscopy (CEAS), and one applied "classical" long-path DOAS. The intercomparison campaign "NO3Comp" was held at the atmosphere simulation chamber SAPHIR in Jülich (Germany) in June 2007. Twelve experiments were performed in the well mixed chamber for variable concentrations of NO3, N2O5, NO2, hydrocarbons, and water vapour, in the absence and in the presence of inorganic or organic aerosol. The overall precision of the cavity instruments varied between 0.5 and 5 pptv for integration times of 1 s to 5 min; that of the DOAS instrument was 9 pptv for an acquisition time of 1 min. The NO3 data of all instruments correlated excellently with the NOAA-CRDS instrument, which was selected as the common reference because of its superb sensitivity, high time resolution, and most comprehensive data coverage. The median of the coefficient of determination (r2) over all experiments of the campaign (60 correlations) is r2 = 0.981 (25th/75th percentiles: 0.949/0.994; min/max: 0.540/0.999). The linear regression analysis of the campaign data set yielded very small intercepts (1.2 ± 5.3 pptv) and the average slope of the regression lines was close to unity (1.02; min: 0.72, max: 1.36). The deviation of individual regression slopes from unity was always within the combined accuracies of each instrument pair. The very good correspondence between the NO3 measurements

  9. Intercomparison of NO3 radical detection instruments in the atmosphere simulation chamber SAPHIR

    Directory of Open Access Journals (Sweden)

    H.-P. Dorn

    2013-05-01

Full Text Available The detection of atmospheric NO3 radicals is still challenging owing to its low mixing ratios (≈ 1 to 300 pptv) in the troposphere. While long-path differential optical absorption spectroscopy (DOAS) has been a well-established NO3 detection approach for over 25 yr, newly sensitive techniques have been developed in the past decade. This publication outlines the results of the first comprehensive intercomparison of seven instruments developed for the spectroscopic detection of tropospheric NO3. Four instruments were based on cavity ring-down spectroscopy (CRDS), two utilised open-path cavity-enhanced absorption spectroscopy (CEAS), and one applied "classical" long-path DOAS. The intercomparison campaign "NO3Comp" was held at the atmosphere simulation chamber SAPHIR in Jülich (Germany) in June 2007. Twelve experiments were performed in the well-mixed chamber for variable concentrations of NO3, N2O5, NO2, hydrocarbons, and water vapour, in the absence and in the presence of inorganic or organic aerosol. The overall precision of the cavity instruments varied between 0.5 and 5 pptv for integration times of 1 s to 5 min; that of the DOAS instrument was 9 pptv for an acquisition time of 1 min. The NO3 data of all instruments correlated excellently with the NOAA-CRDS instrument, which was selected as the common reference because of its superb sensitivity, high time resolution, and most comprehensive data coverage. The median of the coefficient of determination (r2) over all experiments of the campaign (60 correlations) is r2 = 0.981 (quartile 1 (Q1): 0.949; quartile 3 (Q3): 0.994; min/max: 0.540/0.999). The linear regression analysis of the campaign data set yielded very small intercepts (median: 1.1 pptv; Q1/Q3: −1.1/2.6 pptv; min/max: −14.1/28.0 pptv), and the slopes of the regression lines were close to unity (median: 1.01; Q1/Q3: 0.92/1.10; min/max: 0.72/1.36). The deviation of individual regression slopes from unity was always within the combined

  10. Intercomparison of NO3 radical detection instruments in the atmosphere simulation chamber SAPHIR

    Science.gov (United States)

    Dorn, H.-P.; Apodaca, R. L.; Ball, S. M.; Brauers, T.; Brown, S. S.; Crowley, J. N.; Dubé, W. P.; Fuchs, H.; Häseler, R.; Heitmann, U.; Jones, R. L.; Kiendler-Scharr, A.; Labazan, I.; Langridge, J. M.; Meinen, J.; Mentel, T. F.; Platt, U.; Pöhler, D.; Rohrer, F.; Ruth, A. A.; Schlosser, E.; Schuster, G.; Shillings, A. J. L.; Simpson, W. R.; Thieser, J.; Tillmann, R.; Varma, R.; Venables, D. S.; Wahner, A.

    2013-05-01

    The detection of atmospheric NO3 radicals is still challenging owing to its low mixing ratios (≈ 1 to 300 pptv) in the troposphere. While long-path differential optical absorption spectroscopy (DOAS) has been a well-established NO3 detection approach for over 25 yr, newly sensitive techniques have been developed in the past decade. This publication outlines the results of the first comprehensive intercomparison of seven instruments developed for the spectroscopic detection of tropospheric NO3. Four instruments were based on cavity ring-down spectroscopy (CRDS), two utilised open-path cavity-enhanced absorption spectroscopy (CEAS), and one applied "classical" long-path DOAS. The intercomparison campaign "NO3Comp" was held at the atmosphere simulation chamber SAPHIR in Jülich (Germany) in June 2007. Twelve experiments were performed in the well-mixed chamber for variable concentrations of NO3, N2O5, NO2, hydrocarbons, and water vapour, in the absence and in the presence of inorganic or organic aerosol. The overall precision of the cavity instruments varied between 0.5 and 5 pptv for integration times of 1 s to 5 min; that of the DOAS instrument was 9 pptv for an acquisition time of 1 min. The NO3 data of all instruments correlated excellently with the NOAA-CRDS instrument, which was selected as the common reference because of its superb sensitivity, high time resolution, and most comprehensive data coverage. The median of the coefficient of determination (r2) over all experiments of the campaign (60 correlations) is r2 = 0.981 (quartile 1 (Q1): 0.949; quartile 3 (Q3): 0.994; min/max: 0.540/0.999). The linear regression analysis of the campaign data set yielded very small intercepts (median: 1.1 pptv; Q1/Q3: -1.1/2.6 pptv; min/max: -14.1/28.0 pptv), and the slopes of the regression lines were close to unity (median: 1.01; Q1/Q3: 0.92/1.10; min/max: 0.72/1.36). The deviation of individual regression slopes from unity was always within the combined accuracies of each
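The regression statistics quoted throughout these intercomparison records (slope, intercept, and r² of each instrument against the reference) are ordinary least-squares quantities. A minimal sketch with synthetic, campaign-like numbers (not actual NO3Comp data):

```python
# Ordinary least-squares comparison of one instrument against a reference,
# yielding the slope, intercept, and r^2 statistics quoted above.
# The data points below are synthetic, for illustration only.

def linregress(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxx = sum((xi - mx) ** 2 for xi in x)
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    slope = sxy / sxx
    intercept = my - slope * mx
    ss_res = sum((yi - (slope * xi + intercept)) ** 2 for xi, yi in zip(x, y))
    ss_tot = sum((yi - my) ** 2 for yi in y)
    return slope, intercept, 1.0 - ss_res / ss_tot

ref = [5.0, 20.0, 60.0, 120.0, 250.0]    # reference NO3, pptv (synthetic)
ins = [6.1, 21.5, 58.0, 124.0, 247.0]    # test instrument, pptv (synthetic)
slope, intercept, r2 = linregress(ref, ins)
print(round(slope, 3), round(intercept, 2), round(r2, 4))
```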

  11. Application of a short-time version of the Equalization-Cancellation model to speech intelligibility experiments with speech maskers.

    Science.gov (United States)

    Wan, Rui; Durlach, Nathaniel I; Colburn, H Steven

    2014-08-01

    A short-time-processing version of the Equalization-Cancellation (EC) model of binaural processing is described and applied to speech intelligibility tasks in the presence of multiple maskers, including multiple speech maskers. This short-time EC model, called the STEC model, extends the model described by Wan et al. [J. Acoust. Soc. Am. 128, 3678-3690 (2010)] to allow the EC model's equalization parameters τ and α to be adjusted as a function of time, resulting in improved masker cancellation when the dominant masker location varies in time. Using the Speech Intelligibility Index, the STEC model is applied to speech intelligibility with maskers that vary in number, type, and spatial arrangements. Most notably, when maskers are located on opposite sides of the target, this STEC model predicts improved thresholds when the maskers are modulated independently with speech-envelope modulators; this includes the most relevant case of independent speech maskers. The STEC model describes the spatial dependence of the speech reception threshold with speech maskers better than the steady-state model. Predictions are also improved for independently speech-modulated noise maskers but are poorer for reversed-speech maskers. In general, short-term processing is useful, but much remains to be done in the complex task of understanding speech in speech maskers.
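The key STEC idea (re-optimizing the equalization parameters per short-time frame) can be demonstrated with a toy two-channel signal. The frame length, masker gains, and gain-only cancellation below are illustrative simplifications; the actual model also equalizes the interaural delay τ and operates within auditory filter bands.

```python
# Sketch of the core STEC idea: letting the equalization gain vary per
# short-time frame cancels a masker whose interaural level difference (ILD)
# changes over time, where one fixed gain cannot. All signals and parameters
# here are invented for illustration.

import random
random.seed(0)

frame_len, n_frames = 100, 8
gains = [0.5 if k % 2 == 0 else 2.0 for k in range(n_frames)]  # masker ILD per frame

left, right = [], []
for g in gains:
    m = [random.gauss(0.0, 1.0) for _ in range(frame_len)]     # masker waveform
    left += m
    right += [g * x for x in m]

def residual_energy(alphas):
    """Energy left after cancelling the right channel from the left, per frame."""
    total = 0.0
    for k, a in enumerate(alphas):
        s = slice(k * frame_len, (k + 1) * frame_len)
        total += sum((l - a * r) ** 2 for l, r in zip(left[s], right[s]))
    return total

fixed = residual_energy([1.0 / 1.25] * n_frames)      # one gain for the mean ILD
adaptive = residual_energy([1.0 / g for g in gains])  # gain tracks each frame
print(adaptive < 1e-9 * fixed)   # time-varying EC removes the masker far better
```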

  12. Process Definition and Process Modeling Methods Version 01.01.00

    Science.gov (United States)

    1991-09-01

process model. This generic process model is a state machine model. It permits progress in software development to be characterized as transitions...e.g., Entry-Task-Validation-Exit (ETVX) diagram, Petri Net, two-level state machine model, state machine, and Structured Analysis and Design

  13. Approaches in highly parameterized inversion—PEST++ Version 3, a Parameter ESTimation and uncertainty analysis software suite optimized for large environmental models

    Science.gov (United States)

    Welter, David E.; White, Jeremy T.; Hunt, Randall J.; Doherty, John E.

    2015-09-18

    The PEST++ Version 1 object-oriented parameter estimation code is here extended to Version 3 to incorporate additional algorithms and tools to further improve support for large and complex environmental modeling problems. PEST++ Version 3 includes the Gauss-Marquardt-Levenberg (GML) algorithm for nonlinear parameter estimation, Tikhonov regularization, integrated linear-based uncertainty quantification, options of integrated TCP/IP based parallel run management or external independent run management by use of a Version 2 update of the GENIE Version 1 software code, and utilities for global sensitivity analyses. The Version 3 code design is consistent with PEST++ Version 1 and continues to be designed to lower the barriers of entry for users as well as developers while providing efficient and optimized algorithms capable of accommodating large, highly parameterized inverse problems. As such, this effort continues the original focus of (1) implementing the most popular and powerful features of the PEST software suite in a fashion that is easy for novice or experienced modelers to use and (2) developing a software framework that is easy to extend.
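The Gauss-Marquardt-Levenberg algorithm named above is a damped Gauss-Newton iteration for nonlinear least squares. A one-parameter toy version follows; it is a sketch of the algorithm family only, not PEST++'s implementation, which handles many parameters, Tikhonov regularization, and parallel model runs.

```python
# One-parameter Gauss-Marquardt-Levenberg (damped Gauss-Newton) iteration for
# the model y = exp(-k*t). Toy example: the true solver is multi-parameter.

import math

def gml_fit(t, y, k=0.1, lam=1e-3, iters=50):
    for _ in range(iters):
        f = [math.exp(-k * ti) for ti in t]        # model outputs
        r = [yi - fi for yi, fi in zip(y, f)]      # residuals
        J = [-ti * fi for ti, fi in zip(t, f)]     # sensitivity d(model)/dk
        JtJ = sum(j * j for j in J)
        Jtr = sum(j * ri for j, ri in zip(J, r))
        k += Jtr / (JtJ + lam)                     # damped normal-equations step
    return k

t = [0.0, 1.0, 2.0, 3.0, 4.0]
y = [math.exp(-0.7 * ti) for ti in t]    # synthetic observations, true k = 0.7
print(round(gml_fit(t, y), 3))           # recovers k close to 0.7
```

The damping factor `lam` plays the Marquardt role: large values shorten the step toward gradient descent, small values recover Gauss-Newton.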

  15. Assimilation of MODIS Snow Cover Through the Data Assimilation Research Testbed and the Community Land Model Version 4

    Science.gov (United States)

    Zhang, Yong-Fei; Hoar, Tim J.; Yang, Zong-Liang; Anderson, Jeffrey L.; Toure, Ally M.; Rodell, Matthew

    2014-01-01

To improve snowpack estimates in the Community Land Model version 4 (CLM4), the Moderate Resolution Imaging Spectroradiometer (MODIS) snow cover fraction (SCF) was assimilated into CLM4 via the Data Assimilation Research Testbed (DART). The interface between CLM4 and DART is a flexible, extensible approach to land surface data assimilation. This data assimilation system has a large ensemble (80-member) atmospheric forcing that facilitates ensemble-based land data assimilation. We use 40 randomly chosen forcing members to drive 40 CLM members as a compromise between computational cost and the data assimilation performance. The localization distance, a parameter in DART, was tuned to optimize the data assimilation performance at the global scale. Snow water equivalent (SWE) and snow depth are adjusted via the ensemble adjustment Kalman filter, particularly in regions with large SCF variability. The root-mean-square error of the forecast SCF against MODIS SCF is largely reduced. In DJF (December-January-February), the discrepancy between MODIS and CLM4 is broadly ameliorated in the lower-middle latitudes (23–45°N). Only minimal modifications are made in the higher-middle (45–66°N) and high latitudes, part of which is due to the agreement between model and observation when snow cover is nearly 100%. In some regions it also reveals that CLM4-modeled snow cover lacks heterogeneous features compared to MODIS. In MAM (March-April-May), adjustments to snow move poleward, mainly due to the northward movement of the snowline (i.e., where the largest SCF uncertainty is and where SCF assimilation has the greatest impact). The effectiveness of data assimilation also varies with vegetation types, with mixed performance over forest regions and consistently good performance over grass, which can partly be explained by the linearity of the relationship between SCF and SWE in the model ensembles. The updated snow depth was compared to the Canadian Meteorological
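The ensemble adjustment Kalman filter update mentioned above regresses the model state on the predicted observation using ensemble covariances. The toy single-observation sketch below uses an invented SWE-to-SCF operator and made-up numbers; DART's EAKF is a deterministic square-root refinement of the same update.

```python
# Toy ensemble Kalman update of snow water equivalent (SWE) from one snow
# cover fraction (SCF) observation. The observation operator and all numbers
# are invented for illustration.

import random
random.seed(1)

n = 40                                              # ensemble size, as in the study
swe = [random.gauss(50.0, 15.0) for _ in range(n)]  # prior SWE members, mm
scf = [min(1.0, max(0.0, s / 100.0)) for s in swe]  # toy SWE -> SCF operator

obs, obs_var = 0.8, 0.01       # MODIS-like SCF observation and its error variance

def mean(v):
    return sum(v) / len(v)

def cov(a, b):
    ma, mb = mean(a), mean(b)
    return sum((x - ma) * (y - mb) for x, y in zip(a, b)) / (len(a) - 1)

gain = cov(swe, scf) / (cov(scf, scf) + obs_var)          # ensemble Kalman gain
swe_post = [s + gain * (obs - f) for s, f in zip(swe, scf)]

print(round(mean(swe), 1), round(mean(swe_post), 1))  # mean pulled toward the obs
```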

  16. Accounting for observation uncertainties in an evaluation metric of low latitude turbulent air-sea fluxes: application to the comparison of a suite of IPSL model versions

    Science.gov (United States)

    Servonnat, Jérôme; Găinuşă-Bogdan, Alina; Braconnot, Pascale

    2017-09-01

Turbulent momentum and heat (sensible heat and latent heat) fluxes at the air-sea interface are key components of the overall energetics of the Earth's climate. The evaluation of these fluxes in climate models is still difficult because of the large uncertainties associated with the reference products. In this paper we present an objective metric that accounts for reference uncertainties to evaluate the annual cycle of the low latitude turbulent fluxes of a suite of IPSL climate models. This metric consists of a Hotelling T2 test between the simulated and observed fields in a reduced space characterized by the dominant modes of variability that are common to both the model and the reference, taking into account the observational uncertainty. The test is thus more severe when uncertainties are small, as is the case for sea surface temperature (SST). The results of the test show that for almost all variables and all model versions the model-reference differences are not zero. It is not possible to distinguish between model versions for sensible heat and meridional wind stress, certainly due to the large observational uncertainties. All model versions share similar biases for the different variables. There is no improvement between the reference versions of the IPSL model used for CMIP3 and CMIP5. The test also reveals that the higher horizontal resolution fails to improve the representation of the turbulent surface fluxes compared to the other versions. The representation of the fluxes is further degraded in a version with improved atmospheric physics, with an amplification of some of the biases in the Indian Ocean and in the intertropical convergence zone. The ranking of the model versions for the turbulent fluxes is not correlated with the ranking found for SST. This highlights that despite the fact that SST gradients are important for the large-scale atmospheric circulation patterns, other factors such as wind speed, and air-sea temperature contrast play an
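The Hotelling T² statistic described above reduces to dᵀS⁻¹d for the model-minus-reference difference d expressed in the reduced space of common modes, with S the covariance carrying the observational uncertainty. A toy two-mode computation with invented coefficients shows why the test is more severe when uncertainties are small:

```python
# Toy Hotelling T^2 statistic between a simulated and an observed field in a
# two-mode reduced space. Mode coefficients and covariances are invented.

def t2_statistic(d, S):
    """T^2 = d^T S^{-1} d for a 2-vector difference d and a 2x2 covariance S."""
    det = S[0][0] * S[1][1] - S[0][1] * S[1][0]
    Sinv = [[S[1][1] / det, -S[0][1] / det],
            [-S[1][0] / det, S[0][0] / det]]
    w = [Sinv[0][0] * d[0] + Sinv[0][1] * d[1],
         Sinv[1][0] * d[0] + Sinv[1][1] * d[1]]
    return d[0] * w[0] + d[1] * w[1]

model_coeffs = [1.2, -0.4]            # projection of the model field on the modes
obs_coeffs = [1.0, -0.1]              # projection of the reference field
diff = [m - o for m, o in zip(model_coeffs, obs_coeffs)]

tight = [[0.01, 0.0], [0.0, 0.01]]    # small observational uncertainty (e.g. SST)
loose = [[0.25, 0.0], [0.0, 0.25]]    # large uncertainty (e.g. latent heat flux)

print(t2_statistic(diff, tight) > t2_statistic(diff, loose))
# the same model-obs difference is judged more severely when uncertainty is small
```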

  17. Accounting for observation uncertainties in an evaluation metric of low latitude turbulent air-sea fluxes: application to the comparison of a suite of IPSL model versions

    Science.gov (United States)

    Servonnat, Jérôme; Găinuşă-Bogdan, Alina; Braconnot, Pascale

    2016-11-01

    Turbulent momentum and heat (sensible heat and latent heat) fluxes at the air-sea interface are key components of the overall energetics of the Earth's climate. The evaluation of these fluxes in climate models is still difficult because of the large uncertainties associated with the reference products. In this paper we present an objective metric that accounts for reference uncertainties to evaluate the annual cycle of the low-latitude turbulent fluxes of a suite of IPSL climate models. This metric consists of a Hotelling T² test between the simulated and observed fields in a reduced space characterized by the dominant modes of variability that are common to both the model and the reference, taking into account the observational uncertainty. The test is thus more severe when uncertainties are small, as is the case for sea surface temperature (SST). The results of the test show that for almost all variables and all model versions the model-reference differences are not zero. It is not possible to distinguish between model versions for sensible heat and meridional wind stress, most likely because of the large observational uncertainties. All model versions share similar biases for the different variables. There is no improvement between the reference versions of the IPSL model used for CMIP3 and CMIP5. The test also reveals that the higher horizontal resolution fails to improve the representation of the turbulent surface fluxes compared to the other versions. The representation of the fluxes is further degraded in a version with improved atmospheric physics, with an amplification of some of the biases in the Indian Ocean and in the intertropical convergence zone. The ranking of the model versions for the turbulent fluxes is not correlated with the ranking found for SST. This highlights that despite the fact that SST gradients are important for the large-scale atmospheric circulation patterns, other factors such as wind speed and air-sea temperature contrast play an
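    The Hotelling T² construction described above can be illustrated with a short sketch. This is a generic version (the pooled-covariance form and the array names are illustrative assumptions, not the authors' exact implementation), comparing model and observed fields after projection onto a few shared leading modes:

```python
import numpy as np

def hotelling_t2(model_coeffs, obs_coeffs):
    """Two-sample Hotelling T^2 statistic in a reduced space.

    model_coeffs, obs_coeffs: arrays of shape (n_samples, n_modes),
    e.g. annual-cycle fields projected onto shared leading modes of
    variability.  Larger observational spread inflates the pooled
    covariance and therefore weakens the test.
    """
    n1, n2 = len(model_coeffs), len(obs_coeffs)
    d = model_coeffs.mean(axis=0) - obs_coeffs.mean(axis=0)
    # Pooled covariance of the reduced-space coefficients
    s_pooled = ((n1 - 1) * np.cov(model_coeffs, rowvar=False)
                + (n2 - 1) * np.cov(obs_coeffs, rowvar=False)) / (n1 + n2 - 2)
    return (n1 * n2) / (n1 + n2) * d @ np.linalg.solve(s_pooled, d)
```

    Because uncertainty enters through the covariance, the same mean difference registers as a larger T² when the observational spread is small, which is the sense in which the test is more severe for well-observed variables such as SST.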

  18. Assessment of two versions of regional climate model in simulating the Indian Summer Monsoon over South Asia CORDEX domain

    Science.gov (United States)

    Pattnayak, K. C.; Panda, S. K.; Saraswat, Vaishali; Dash, S. K.

    2017-07-01

    This study assesses the performance of two versions of the Regional Climate Model (RegCM) in simulating the Indian summer monsoon over South Asia for the period 1998 to 2003, with an aim of conducting future climate change simulations. Two sets of experiments were carried out with two different versions of RegCM (viz. RegCM4.2 and RegCM4.3), with the lateral boundary forcings provided by the European Centre for Medium-Range Weather Forecasts reanalysis (ERA-Interim) at 50 km horizontal resolution. The major updates in RegCM4.3 in comparison to the older version RegCM4.2 are the inclusion of measured solar irradiance in place of a hardcoded solar constant and additional layers in the stratosphere. The analysis shows that the Indian summer monsoon rainfall, moisture flux and surface net downward shortwave flux are better represented in RegCM4.3 than in the RegCM4.2 simulations. Excessive moisture flux in the RegCM4.2 simulation over the northern Arabian Sea and Peninsular India resulted in an overestimation of rainfall over the Western Ghats and the Peninsular region, as a result of which the all-India rainfall has been overestimated. RegCM4.3 has performed well over India as a whole as well as over its four rainfall-homogeneous zones in reproducing the mean monsoon rainfall and the inter-annual variation of rainfall. Further, the monsoon onset, the low-level Somali Jet and the upper-level tropical easterly jet are better represented in RegCM4.3 than in RegCM4.2. Thus, RegCM4.3 has performed better in simulating the mean summer monsoon circulation over South Asia. Hence, RegCM4.3 may be used to study future climate change over South Asia.

  19. Development of a user-friendly interface version of the Salmonella source-attribution model

    DEFF Research Database (Denmark)

    Hald, Tine; Lund, Jan

    of questions, where the use of a classical quantitative risk assessment model (i.e. transmission models) would be impaired due to a lack of data and time limitations. As these models require specialist knowledge, it was requested by EFSA to develop a flexible user-friendly source attribution model for use...... with a user-manual, which is also part of this report. Users of the interface are recommended to read this report before starting using the interface to become familiar with the model principles and the mathematics behind, which is required in order to interpret the model results and assess the validity...

  20. The Marine Virtual Laboratory (version 2.1): enabling efficient ocean model configuration

    Science.gov (United States)

    Oke, Peter R.; Proctor, Roger; Rosebrock, Uwe; Brinkman, Richard; Cahill, Madeleine L.; Coghlan, Ian; Divakaran, Prasanth; Freeman, Justin; Pattiaratchi, Charitha; Roughan, Moninya; Sandery, Paul A.; Schaeffer, Amandine; Wijeratne, Sarath

    2016-09-01

    The technical steps involved in configuring a regional ocean model are analogous for all community models. All require the generation of a model grid, preparation and interpolation of topography, initial conditions, and forcing fields. Each task in configuring a regional ocean model is straightforward - but the process of downloading and reformatting data can be time-consuming. For an experienced modeller, the configuration of a new model domain can take as little as a few hours - but for an inexperienced modeller, it can take much longer. In pursuit of technical efficiency, the Australian ocean modelling community has developed the Web-based MARine Virtual Laboratory (WebMARVL). WebMARVL allows a user to quickly and easily configure an ocean general circulation or wave model through a simple interface, reducing the time to configure a regional model to a few minutes. Through WebMARVL, a user is prompted to define the basic options needed for a model configuration, including the model, run duration, spatial extent, and input data. Once all aspects of the configuration are selected, a series of data extraction, reprocessing, and repackaging services are run, and a "take-away bundle" is prepared for download. Building on the capabilities developed under Australia's Integrated Marine Observing System, WebMARVL also extracts all of the available observations for the chosen time-space domain. The user is able to download the take-away bundle and use it to run the model of his or her choice. Models supported by WebMARVL include three community ocean general circulation models and two community wave models. The model configuration from the take-away bundle is intended to be a starting point for scientific research. The user may subsequently refine the details of the model set-up to improve the model performance for the given application. 
In this study, WebMARVL is described along with a series of results from test cases comparing WebMARVL-configured models to observations

  1. Department of Defense Data Model, Version 1, Fy 1998, Volume 8.

    Science.gov (United States)

    2007-11-02

    Appendix A: IDEF1X Modeling Conventions. 1.0 IDEF1X Data Modeling Conventions: Whenever data structures and business rules required to support a functional area need to be specified, it is ... An entity must have an attribute or combination of attributes whose values uniquely identify

  2. Evaluation of the Community Multiscale Air Quality model version 5.1

    Science.gov (United States)

    The Community Multiscale Air Quality model is a state-of-the-science air quality model that simulates the emission, transport and fate of numerous air pollutants, including ozone and particulate matter. The Atmospheric Modeling and Analysis Division (AMAD) of the U.S. Environment...

  3. Technical description of the RIVM/KNMI PUFF dispersion model. Version 4.0

    NARCIS (Netherlands)

    van Pul WAJ

    1992-01-01

    This report provides a technical description of the RIVM/KNMI PUFF model. The model may be used to calculate, given wind and rain field data, the dispersion of components emitted following an accident, emergency or calamity; the model area may be freely chosen to match the area of concern. The re

  4. A cloud feedback emulator (CFE, version 1.0) for an intermediate complexity model

    Science.gov (United States)

    Ullman, David J.; Schmittner, Andreas

    2017-02-01

    The dominant source of inter-model differences among comprehensive global climate models (GCMs) is cloud radiative effects on Earth's energy budget. Intermediate complexity models, while able to run more efficiently, often lack cloud feedbacks. Here, we describe and evaluate a method for applying GCM-derived shortwave and longwave cloud feedbacks from 4 × CO2 and Last Glacial Maximum experiments to the University of Victoria Earth System Climate Model. The method generally captures the spread in top-of-the-atmosphere radiative feedbacks between the original GCMs, which impacts the magnitude and spatial distribution of surface temperature changes and climate sensitivity. These results suggest that the method is suitable to incorporate multi-model cloud feedback uncertainties in ensemble simulations with a single intermediate complexity model.
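    The emulator idea, imposing GCM-derived cloud feedbacks as a temperature-scaled radiative term in a simpler model, can be sketched with a toy zero-dimensional energy balance (the function and parameter names below are illustrative assumptions, not the UVic model's implementation):

```python
def step_energy_balance(T, forcing, lambda_cloud, lambda_clear=-1.2,
                        heat_capacity=8.0, dt=1.0):
    """One explicit-Euler step of a toy zero-dimensional energy balance
    with an emulated cloud feedback.

    T             : global-mean temperature anomaly (K)
    forcing       : imposed radiative forcing (W m^-2), e.g. from 4xCO2
    lambda_cloud  : GCM-derived cloud feedback (W m^-2 K^-1)
    lambda_clear  : clear-sky feedback (W m^-2 K^-1), negative (stabilizing)
    heat_capacity : effective heat capacity (W yr m^-2 K^-1)
    dt            : time step (yr)
    """
    net_toa = forcing + (lambda_clear + lambda_cloud) * T
    return T + dt * net_toa / heat_capacity
```

    The equilibrium anomaly here is -forcing / (lambda_clear + lambda_cloud), so swapping in a different GCM's cloud feedback directly changes the emulated climate sensitivity, which is the inter-model spread the method sets out to capture.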

  5. Global assessment of Vegetation Index and Phenology Lab (VIP and Global Inventory Modeling and Mapping Studies (GIMMS version 3 products

    Directory of Open Access Journals (Sweden)

    M. Marshall

    2015-06-01

    Full Text Available Earth observation based long-term global vegetation index products are used by scientists from a wide range of disciplines concerned with global change. Inter-comparison studies are commonly performed to keep the user community informed on the consistency and accuracy of such records as they evolve. In this study, we compared two new records: (1) Global Inventory Modeling and Mapping Studies (GIMMS) Normalized Difference Vegetation Index Version 3 (NDVI3g) and (2) Vegetation Index and Phenology Lab (VIP) Version 3 NDVI (NDVI3v) and Enhanced Vegetation Index 2 (EVI3v). We evaluated the two records via three experiments that addressed the primary use of such records in global change research: (1) prediction of the Leaf Area Index (LAI) used in light-use efficiency modeling, (2) estimation of vegetation climatology in Soil-Vegetation-Atmosphere Transfer models, and (3) trend analysis of the magnitude and phenology of vegetation productivity. Experiment one, unlike previous inter-comparison studies, was performed with a unique Landsat 30 m spatial resolution and in situ LAI database for major crop types on five continents. Overall, the two records showed a high level of agreement both in direction and magnitude on a monthly basis, though VIP values were higher and more variable and showed lower correlations and higher error with in situ LAI. The records were most consistent at northern latitudes during the primary growing season and at southern latitudes and in the tropics throughout much of the year, while the records were less consistent at northern latitudes during green-up and senescence and in the great deserts of the world throughout much of the year. The two records were also highly consistent in terms of trend direction/magnitude, showing a 30+ year increase (decrease) in NDVI over much of the globe (tropical rainforests). The two records were less consistent in terms of timing due to the poor correlation of the records during the start and end of the growing season.
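    Both records are built on the normalized difference vegetation index. For reference, its standard definition is straightforward to compute (a generic sketch, not either product's processing chain):

```python
import numpy as np

def ndvi(nir, red):
    """Normalized Difference Vegetation Index: (NIR - Red) / (NIR + Red).

    nir, red: surface reflectances in [0, 1].  Dense green vegetation
    gives values toward +1, bare soil values near 0, and water is
    typically negative.
    """
    nir = np.asarray(nir, dtype=float)
    red = np.asarray(red, dtype=float)
    return (nir - red) / (nir + red)
```

    The normalization by the band sum is what makes the index comparable across sensors and illumination conditions, and hence usable for multi-decade records such as NDVI3g and NDVI3v.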

  6. Hydrogen Macro System Model User Guide, Version 1.2.1

    Energy Technology Data Exchange (ETDEWEB)

    Ruth, M.; Diakov, V.; Sa, T.; Goldsby, M.; Genung, K.; Hoseley, R.; Smith, A.; Yuzugullu, E.

    2009-07-01

    The Hydrogen Macro System Model (MSM) is a simulation tool that links existing and emerging hydrogen-related models to perform rapid, cross-cutting analysis. It allows analysis of the economics, primary energy-source requirements, and emissions of hydrogen production and delivery pathways.

  7. OH Oxidation of α-Pinene in the Atmosphere Simulation Chamber SAPHIR: Investigation of the Role of Pinonaldehyde Photolysis as an HO2 Source

    Science.gov (United States)

    Kaminski, M.; Acir, I. H.; Bohn, B.; Dorn, H. P.; Fuchs, H.; Häseler, R.; Hofzumahaus, A.; Li, X.; Rohrer, F.; Tillmann, R.; Wegener, R.; Kiendler-Scharr, A.; Wahner, A.

    2015-12-01

    About one third of the land surface is covered by forests, emitting approximately 75% of the total biogenic volatile organic compounds (BVOCs). The main atmospheric sink of these BVOCs during daytime is oxidation by the hydroxyl radical (OH). Over the last decades, field campaigns investigating the radical chemistry in forested regions have shown that atmospheric chemistry models are often not able to describe the measured OH concentration well. At low NO concentrations and with an OH reactivity dominated by BVOCs, the OH concentration was underestimated. This discrepancy could only partly be explained by the discovery of new OH regeneration pathways in the isoprene oxidation mechanism. Field campaigns in the USA and Finland (Kim 2013 ACP, Hens 2014 ACP) demonstrated that in monoterpene (e.g. α-pinene) dominated environments, model calculations also underpredict the observed HO2 and OH concentrations significantly, even when the OH budget was closed by the measured OH production and destruction terms. These observations suggest the existence of an unaccounted source of HO2. One potential HO2 source in forests is the photolysis of monoterpene degradation products such as aldehydes. In the present study the photochemical degradation mechanism of α-pinene was investigated in the Jülich atmosphere simulation chamber SAPHIR. The focus of this study was in particular on the role of pinonaldehyde, a main first-generation product of α-pinene, as a possible HO2 source. For that purpose the pinonaldehyde yields of the reaction α-pinene + OH were determined at ambient monoterpene concentrations (<5 ppb) under low NOx as well as high NOx conditions. The pinonaldehyde yield under high NOx conditions (30.5%) is in agreement with literature values of Wisthaler (2001 AE) and Aschmann (2002 JGR); under low NOx conditions the yield (10.8%) is approximately a factor of three lower than the value published by Eddingsaas (2012 ACP). In a second set of experiments the photolysis
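    The yields quoted above are, in essence, the slope of product formed versus precursor consumed. A minimal sketch of that calculation (a generic zero-intercept least-squares slope; the real chamber analysis also corrects for dilution, wall loss, and secondary chemistry):

```python
def product_yield(precursor_consumed, product_formed):
    """Molar product yield as the zero-intercept least-squares slope of
    product formed versus precursor consumed (same units, e.g. ppb)."""
    num = sum(x * y for x, y in zip(precursor_consumed, product_formed))
    den = sum(x * x for x in precursor_consumed)
    return num / den
```

    A 30.5% yield then simply means about 0.305 ppb of pinonaldehyde appears per ppb of α-pinene reacted.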

  8. PhytoSFDM version 1.0.0: Phytoplankton Size and Functional Diversity Model

    Science.gov (United States)

    Acevedo-Trejos, Esteban; Brandt, Gunnar; Smith, S. Lan; Merico, Agostino

    2016-11-01

    Biodiversity is one of the key mechanisms that facilitate the adaptive response of planktonic communities to a fluctuating environment. How to allow for such a flexible response in marine ecosystem models is, however, not entirely clear. One particular way is to resolve the natural complexity of phytoplankton communities by explicitly incorporating a large number of species or plankton functional types. Alternatively, models of aggregate community properties focus on macroecological quantities such as total biomass, mean trait, and trait variance (or functional trait diversity), thus reducing the observed natural complexity to a few mathematical expressions. We developed the PhytoSFDM modelling tool, which can resolve species discretely and can capture aggregate community properties. The tool also provides a set of methods for treating diversity under realistic oceanographic settings. This model is coded in Python and is distributed as open-source software. PhytoSFDM is implemented in a zero-dimensional physical scheme and can be applied to any location of the global ocean. We show that aggregate community models reduce computational complexity while preserving relevant macroecological features of phytoplankton communities. Compared to species-explicit models, aggregate models are more manageable in terms of the number of equations and have faster computational times. Further developments of this tool should address the caveats associated with the assumptions of aggregate community models and with implementations in spatially resolved physical settings (one-dimensional and three-dimensional). With PhytoSFDM we embrace the idea of promoting open-source software and encourage scientists to build on this modelling tool to further improve our understanding of the role that biodiversity plays in shaping marine ecosystems.
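    The aggregate-community approach tracks total biomass, mean trait, and trait variance instead of many discrete species. A minimal sketch of such moment equations (a generic trait-diffusion closure with hypothetical names, not PhytoSFDM's actual equations):

```python
def moment_step(biomass, mean_trait, variance, growth, dgrowth, d2growth,
                dt=0.01):
    """One Euler step of an aggregate (moment-based) community model.

    growth(x) is the per-capita net growth rate at trait value x;
    dgrowth and d2growth are its first and second derivatives.
    A standard trait-diffusion closure gives:
        dB/dt = [g(m) + 0.5 * V * g''(m)] * B
        dm/dt = V * g'(m)
        dV/dt = V**2 * g''(m)   (variance sources omitted here)
    """
    g, g1, g2 = growth(mean_trait), dgrowth(mean_trait), d2growth(mean_trait)
    new_biomass = biomass + dt * (g + 0.5 * variance * g2) * biomass
    new_mean = mean_trait + dt * variance * g1
    new_variance = variance + dt * variance ** 2 * g2
    return new_biomass, new_mean, new_variance
```

    With a quadratic fitness curve peaked at some optimal trait value, the mean trait drifts toward the optimum at a rate proportional to the variance, which is the sense in which trait diversity sets the speed of the adaptive response.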

  9. Parameterization Improvements and Functional and Structural Advances in Version 4 of the Community Land Model

    Directory of Open Access Journals (Sweden)

    Andrew G. Slater

    2011-05-01

    Full Text Available The Community Land Model is the land component of the Community Climate System Model. Here, we describe a broad set of model improvements and additions that have been provided through the CLM development community to create CLM4. The model is extended with a carbon-nitrogen (CN) biogeochemical model that is prognostic with respect to vegetation, litter, and soil carbon and nitrogen states and vegetation phenology. An urban canyon model is added and a transient land cover and land use change (LCLUC) capability, including wood harvest, is introduced, enabling study of the effects of historic and future LCLUC on energy, water, momentum, carbon, and nitrogen fluxes. The hydrology scheme is modified with a revised numerical solution of the Richards equation and a revised ground evaporation parameterization that accounts for litter and within-canopy stability. The new snow model incorporates the SNow and Ice Aerosol Radiation model (SNICAR), which includes aerosol deposition, grain-size dependent snow aging, and vertically resolved snowpack heating, as well as new snow cover and snow burial fraction parameterizations. The thermal and hydrologic properties of organic soil are accounted for and the ground column is extended to ~50-m depth. Several other minor modifications to the land surface types dataset, grass and crop optical properties, atmospheric forcing height, roughness length and displacement height, and the disposition of snow-capped runoff are also incorporated. Taken together, these augmentations to CLM result in improved soil moisture dynamics, drier soils, and stronger soil moisture variability. The new model also exhibits higher snow cover, cooler soil temperatures in organic-rich soils, greater global river discharge, and lower albedos over forests and grasslands, all of which are improvements compared to CLM3.5. When CLM4 is run with CN, the mean biogeophysical simulation is slightly degraded because the vegetation structure is prognostic rather

  10. A Scalable Version of the Navy Operational Global Atmospheric Prediction System Spectral Forecast Model

    Directory of Open Access Journals (Sweden)

    Thomas E. Rosmond

    2000-01-01

    Full Text Available The Navy Operational Global Atmospheric Prediction System (NOGAPS) includes a state-of-the-art spectral forecast model similar to models run at several major operational numerical weather prediction (NWP) centers around the world. The model, developed by the Naval Research Laboratory (NRL) in Monterey, California, has run operationally at the Fleet Numerical Meteorology and Oceanography Center (FNMOC) since 1982, and most recently has been run on a Cray C90 in a multi-tasked configuration. Typically the multi-tasked code runs on 10 to 15 processors with an overall parallel efficiency of about 90%. The operational resolution is T159L30, but other operational and research applications run at significantly lower resolutions. A scalable NOGAPS forecast model has been developed by NRL in anticipation of a FNMOC C90 replacement in about 2001, as well as for current NOGAPS research requirements to run on DOD High-Performance Computing (HPC) scalable systems. The model is designed to run with message passing (MPI). Model design criteria include bit reproducibility for different processor numbers and reasonably efficient performance on fully shared memory, distributed memory, and distributed shared memory systems for a wide range of model resolutions. Results for a wide range of processor numbers, model resolutions, and different vendor architectures are presented. Single-node performance has been disappointing on RISC-based systems, at least compared to vector processor performance. This is a common complaint, and will require careful re-examination of traditional NWP model software design and data organization to fully exploit future scalable architectures.

  11. Statistical analysis of fracture data, adapted for modelling Discrete Fracture Networks-Version 2

    Energy Technology Data Exchange (ETDEWEB)

    Munier, Raymond

    2004-04-01

    The report describes the parameters which are necessary for DFN modelling, the way in which they can be extracted from the database acquired during site investigations, and their assignment to geometrical objects in the geological model. The purpose here is to present a methodology for use in SKB modelling projects. Though the methodology is deliberately tuned to facilitate subsequent DFN modelling with other tools, some of the recommendations presented here are applicable to other aspects of geo-modelling as well. For instance, we recommend a nomenclature to be used within SKB modelling projects, which are truly multidisciplinary, to ease communication between scientific disciplines and avoid misunderstanding of common concepts. This report originally appeared as an appendix to a strategy report for geological modelling (SKB-R--03-07). Strategy reports were intended to be successively updated to include experience gained during site investigations and site modelling. Rather than updating the entire strategy report, we chose to present the update of the appendix as a stand-alone document. This document thus replaces Appendix A2 in SKB-R--03-07. In short, the update consists of the following: the target audience, and as a consequence the purpose of the document, has been broadened; errors found in various formulae have been corrected; all expressions have been rewritten; more worked examples have been included in each section; new sections describe area normalisation, spatial correlation, and anisotropy; and a new chapter describes the expected output from DFN modelling within SKB projects.
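    Fracture size in DFN models of this kind is commonly described by a truncated power law. A generic inverse-CDF sampler for such a distribution can be sketched as follows (the function name and parameter values are illustrative, not SKB's recommended ones):

```python
import numpy as np

def sample_fracture_radii(n, r_min, r_max, k, rng=None):
    """Draw n fracture radii from a power law truncated to [r_min, r_max].

    The complementary CDF is proportional to r**(-k) with k > 0, a common
    assumption for fracture size scaling; sampling uses the inverse CDF
    of the truncated Pareto distribution.
    """
    rng = np.random.default_rng(rng)
    u = rng.random(n)
    a, b = r_min ** -k, r_max ** -k
    return (a - u * (a - b)) ** (-1.0 / k)
```

    Because small fractures dominate such a distribution, a sampled network must still be normalised to a measured areal or volumetric fracture intensity, which is the kind of step the report's area-normalisation section addresses.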

  12. The Model of Emissions of Gases and Aerosols from Nature version 2.1 (MEGAN2.1: an extended and updated framework for modeling biogenic emissions

    Directory of Open Access Journals (Sweden)

    A. B. Guenther

    2012-06-01

    Full Text Available The Model of Emissions of Gases and Aerosols from Nature version 2.1 (MEGAN2.1) is a modeling framework for estimating fluxes of 147 biogenic compounds between terrestrial ecosystems and the atmosphere using simple mechanistic algorithms to account for the major known processes controlling biogenic emissions. It is available as an offline code and has also been coupled into land surface models and atmospheric chemistry models. MEGAN2.1 is an update of the previous versions, including MEGAN2.0 for isoprene emissions and MEGAN2.04, which estimates emissions of 138 compounds. Isoprene comprises about half of the estimated total global biogenic volatile organic compound (BVOC) emission of 1 Pg (1000 Tg, or 10^15 g). Another 10 compounds including methanol, ethanol, acetaldehyde, acetone, α-pinene, β-pinene, t-β-ocimene, limonene, ethene, and propene together contribute another 30% of the estimated emission. An additional 20 compounds (mostly terpenoids) are associated with another 17% of the total emission, with the remaining 3% distributed among 125 compounds. Emissions of 41 monoterpenes and 32 sesquiterpenes together comprise about 15% and 3%, respectively, of the total global BVOC emission. Tropical trees cover about 18% of the global land surface and are estimated to be responsible for 60% of terpenoid emissions and 48% of other VOC emissions. Other trees cover about the same area but are estimated to contribute only about 10% of total emissions. The magnitude of the emissions estimated with MEGAN2.1 is within the range of estimates reported using other approaches, and much of the difference between reported values can be attributed to landcover and meteorological driving variables. The offline version of the MEGAN2.1 source code and driving variables is available from http://acd.ucar.edu/~guenther/MEGAN/MEGAN.htm and the version integrated into the
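    MEGAN-style frameworks compute a flux as a standard-condition emission factor scaled by dimensionless activity factors. The schematic structure can be sketched as follows (the factor product follows the general MEGAN formulation, but the exponential temperature response and its defaults are a commonly used simplified form, not the model's full algorithm):

```python
import math

def gamma_temperature_pool(t_kelvin, beta=0.10, t_standard=303.0):
    """Temperature activity factor for storage-pool (light-independent)
    emissions: gamma_T = exp(beta * (T - T_s)).  The exponential form is
    widely used; beta and T_s vary by compound class."""
    return math.exp(beta * (t_kelvin - t_standard))

def biogenic_emission(emission_factor, gamma_factors):
    """Schematic MEGAN-style flux: standard-condition emission factor
    (e.g. ug m^-2 h^-1) times the product of dimensionless activity
    factors (light, temperature, leaf age, LAI, soil moisture, CO2
    inhibition, ...), each equal to 1 at standard conditions."""
    flux = emission_factor
    for gamma in gamma_factors:
        flux *= gamma
    return flux
```

    Separating the emission factor (a landcover property) from the activity factors (driven by meteorology) mirrors the paper's point that much of the spread between published estimates traces back to landcover and meteorological driving variables.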

  13. Description of the Mountain Cloud Chemistry Program version of the PLUVIUS MOD 5.0 reactive storm simulation model

    Energy Technology Data Exchange (ETDEWEB)

    Luecken, D.J.; Whiteman, C.D.; Chapman, E.G.; Andrews, G.L.; Bader, D.C.

    1987-07-01

    Damage to forest ecosystems on mountains in the eastern United States has prompted a study conducted for the US Environmental Protection Agency's Mountain Cloud Chemistry Program (MCCP). This study has led to the development of a numerical model called MCCP PLUVIUS, which has been used to investigate the chemical transformations and cloud droplet deposition in shallow, nonprecipitating orographic clouds. The MCCP PLUVIUS model was developed as a specialized version of the existing PLUVIUS MOD 5.0 reactive storm model. It is capable of simulating aerosol scavenging, nonreactive gas scavenging, aqueous-phase SO2 reactions, and cloud water deposition. A description of the new model is provided along with information on model inputs and outputs, as well as suggestions for its further development. MCCP PLUVIUS incorporates a new method to determine the depth of the layer of air which flows over a mountaintop to produce an orographic cloud event. It provides a new method for calculating hydrogen ion concentrations, and provides updated expressions and values for solubility, dissociation and reaction rate constants.

  14. FAME: Friendly Applied Modelling Environment. Version 2.2 User Manual

    NARCIS (Netherlands)

    Wortelboer FG; Aldenberg T

    1989-01-01

    FAME (Friendly Applied Modelling Environment) is a general modelling environment developed for the dynamic simulation of water quality models. The models are described as sets of differential equations, using a general notation. No knowledge of a

  15. Technical manual for basic version of the Markov chain nest productivity model (MCnest)

    Science.gov (United States)

    The Markov Chain Nest Productivity Model (or MCnest) integrates existing toxicity information from three standardized avian toxicity tests with information on species life history and the timing of pesticide applications relative to the timing of avian breeding seasons to quantit...

  16. User’s manual for basic version of MCnest Markov chain nest productivity model

    Science.gov (United States)

    The Markov Chain Nest Productivity Model (or MCnest) integrates existing toxicity information from three standardized avian toxicity tests with information on species life history and the timing of pesticide applications relative to the timing of avian breeding seasons to quantit...

  17. Illustrating and homology modeling the proteins of the Zika virus [version 2; referees: 2 approved

    Directory of Open Access Journals (Sweden)

    Sean Ekins

    2016-09-01

    Full Text Available The Zika virus (ZIKV) is a flavivirus of the family Flaviviridae, which is similar to dengue virus, yellow fever virus and West Nile virus. Recent outbreaks in South America, Latin America, the Caribbean and in particular Brazil have led to concern over the spread of the disease and its potential to cause Guillain-Barré syndrome and microcephaly. Although ZIKV has been known for over 60 years, there is very little in the way of knowledge of the virus, with few publications and no crystal structures. No antivirals have been tested against it either in vitro or in vivo. ZIKV therefore epitomizes a neglected disease. Several steps have been proposed which could be taken to initiate ZIKV antiviral drug discovery, using both high-throughput screens as well as structure-based design based on homology models for the key proteins. We now describe preliminary homology models created for NS5, FtsJ, NS4B, NS4A, HELICc, DEXDc, peptidase S7, NS2B, NS2A, NS1, E stem, glycoprotein M, propeptide, capsid and glycoprotein E using SWISS-MODEL. Eleven out of 15 models pass our model quality criteria for their further use. While a ZIKV glycoprotein E homology model was initially described in the immature conformation as a trimer, we now describe the mature dimer conformer, which allowed the construction of an illustration of the complete virion. By comparing illustrations of ZIKV based on this new homology model and the dengue virus crystal structure, we propose potential differences that could be exploited for antiviral and vaccine design. The prediction of sites for glycosylation on this protein may also be useful in this regard. While we await a cryo-EM structure of ZIKV and eventual crystal structures of the individual proteins, these homology models provide the community with a starting point for structure-based design of drugs and vaccines as well as for computational virtual screening.

  18. Development and validation of THUMS version 5 with 1D muscle models for active and passive automotive safety research.

    Science.gov (United States)

    Kimpara, Hideyuki; Nakahira, Yuko; Iwamoto, Masami

    2016-08-01

    Accurately predicting occupant kinematics is critical to better understanding the injury mechanisms during an automotive crash event. The objectives of this study were to develop and validate a finite element (FE) model of the human body integrated with an active muscle model, called Total HUman Model for Safety (THUMS) version 5, which has the body size of the 50th percentile American adult male (AM50). This model is characterized by its ability to generate force owing to muscle tone and to predict the occupant response during an automotive crash event. Deformable materials were assigned to all body parts of the THUMS model in order to evaluate injury probabilities. Each muscle was modeled as a Hill-type muscle model with 800 muscle-tendon compartments of 1D truss and seatbelt elements covering the joints of the neck, thorax, lumbar region, and upper and lower extremities. THUMS was validated against 36 series of post-mortem human surrogate (PMHS) and volunteer tests on frontal, lateral, and rear impacts. The muscle architectural and kinetic properties for the hip, knee, shoulder, and elbow joints were validated in terms of the moment arms and maximum isometric joint torques over a wide range of joint angles. The muscular moment arms and maximum joint torques estimated from the THUMS occupant model with 1D muscles agreed with the experimental data for a wide range of joint angles. Therefore, this model has the potential to predict occupant kinematics and injury outcomes considering appropriate human body motions associated with various postures, such as sitting or standing.
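    A Hill-type muscle element of the kind described computes active force from activation, a force-length curve, and a force-velocity curve, plus a passive-elastic term. A minimal generic sketch (the curve shapes and constants below are textbook simplifications, not THUMS's calibrated muscle properties):

```python
import math

def hill_muscle_force(activation, l_norm, v_norm, f_max):
    """Hill-type muscle force (N).

    activation : neural activation in [0, 1]
    l_norm     : fiber length / optimal fiber length
    v_norm     : shortening velocity / max shortening velocity
                 (positive = shortening; lengthening is clamped to the
                 isometric value here for simplicity)
    f_max      : maximum isometric force (N)
    """
    # Active force-length relation: bell curve centred at optimal length
    f_l = math.exp(-((l_norm - 1.0) ** 2) / 0.45)
    # Force-velocity relation: Hill hyperbola, zero force at v_norm = 1
    f_v = (1.0 - v_norm) / (1.0 + 3.0 * v_norm) if v_norm > 0.0 else 1.0
    # Passive elastic force, engaging beyond optimal length
    f_p = math.exp(5.0 * (l_norm - 1.0)) - 1.0 if l_norm > 1.0 else 0.0
    return f_max * (activation * f_l * f_v + f_p)
```

    The activation input is what lets such a model represent muscle tone and bracing before impact, which is the behaviour the abstract highlights for active-safety simulations.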

  19. Geological discrete fracture network model for the Olkiluoto site, Eurajoki, Finland. Version 2.0

    Energy Technology Data Exchange (ETDEWEB)

    Fox, A.; Forchhammer, K.; Pettersson, A. [Golder Associates AB, Stockholm (Sweden); La Pointe, P.; Lim, D-H. [Golder Associates Inc. (Finland)

    2012-06-15

    This report describes the methods, analyses, and conclusions of the modeling team in producing the 2010 revision of the geological discrete fracture network (DFN) model for the Olkiluoto site in Finland. The geological DFN is a statistical model for stochastically simulating rock fractures and minor faults at scales from approximately 0.05 m to approximately 565 m; deformation zones are expressly excluded from the DFN model. The DFN model is presented as a series of tables summarizing probability distributions for the parameters needed for fracture modeling: fracture orientation, fracture size, fracture intensity, and associated spatial constraints. The geological DFN is built from data collected during site characterization (SC) activities at Olkiluoto, which has been selected to host a final deep geological repository for spent fuel and nuclear waste from the Finnish nuclear power program. Data used in the DFN analyses include fracture maps from surface outcrops and trenches, geological and structural data from cored drillholes, and fracture information collected during the construction of the main tunnels and shafts at the ONKALO laboratory. Unlike the initial geological DFN, which focused on the vicinity of the ONKALO tunnel, the 2010 revision presents a model parameterization for the entire island. Fracture domains are based on the tectonic subdivisions at the site (northern, central, and southern tectonic units) presented in the Geological Site Model (GSM), and are further subdivided along the intersections of major brittle-ductile zones. The rock volume at Olkiluoto is dominated by three distinct fracture sets: a subhorizontally-dipping set striking north-northeast and dipping to the east, subparallel to the mean bedrock foliation; a subvertically-dipping set striking roughly north-south; and a subvertically-dipping set striking approximately east-west. The subhorizontally-dipping fractures
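A DFN parameterization of this kind feeds a stochastic simulator that draws fracture sizes and orientations from fitted distributions. The sketch below uses two forms common in DFN work, a power-law (Pareto) size distribution and a Fisher orientation distribution; the distribution choices are standard, but the function names and parameter values are illustrative, not Olkiluoto's fitted values.

```python
import math
import random

def sample_powerlaw_radius(r_min, k_r, u=None):
    """Sample a fracture radius from a Pareto (power-law) size distribution,
    a form typically used in geological DFN parameterizations."""
    u = random.random() if u is None else u
    return r_min * (1.0 - u) ** (-1.0 / k_r)

def sample_fisher_deviation(kappa, u=None):
    """Angular deviation (radians) from a fracture set's mean pole for a
    Fisher (spherical normal) orientation distribution with concentration kappa."""
    u = random.random() if u is None else u
    return math.acos(1.0 + math.log(1.0 - u * (1.0 - math.exp(-2.0 * kappa))) / kappa)

random.seed(1)
# Illustrative set: minimum radius 0.05 m, size-scaling exponent 2.5
radii = [sample_powerlaw_radius(0.05, 2.5) for _ in range(10000)]
print(min(radii) >= 0.05)  # -> True: all radii at or above the minimum size
```

Intensity (the number of fractures per unit volume, P32) would then control how many such draws are placed in the simulated rock volume.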

  20. Water, Energy, and Biogeochemical Model (WEBMOD), user’s manual, version 1

    Science.gov (United States)

    Webb, Richard M.T.; Parkhurst, David L.

    2017-02-08

    The Water, Energy, and Biogeochemical Model (WEBMOD) uses the framework of the U.S. Geological Survey (USGS) Modular Modeling System to simulate fluxes of water and solutes through watersheds. WEBMOD divides watersheds into model response units (MRU) where fluxes and reactions are simulated for the following eight hillslope reservoir types: canopy; snowpack; ponding on impervious surfaces; O-horizon; two reservoirs in the unsaturated zone, which represent preferential flow and matrix flow; and two reservoirs in the saturated zone, which also represent preferential flow and matrix flow. The reservoir representing ponding on impervious surfaces, currently not functional (2016), will be implemented once the model is applied to urban areas. MRUs discharge to one or more stream reservoirs that flow to the outlet of the watershed. Hydrologic fluxes in the watershed are simulated by modules derived from the USGS Precipitation Runoff Modeling System; the National Weather Service Hydro-17 snow model; and a topography-driven hydrologic model (TOPMODEL). Modifications to the standard TOPMODEL include the addition of heterogeneous vertical infiltration rates; irrigation; lateral and vertical preferential flows through the unsaturated zone; pipe flow draining the saturated zone; gains and losses to regional aquifer systems; and the option to simulate baseflow discharge by using an exponential, parabolic, or linear decrease in transmissivity. PHREEQC, an aqueous geochemical model, is incorporated to simulate chemical reactions as waters evaporate, mix, and react within the various reservoirs of the model. The reactions that can be specified for a reservoir include equilibrium reactions among water; minerals; surfaces; exchangers; and kinetic reactions such as kinetic mineral dissolution or precipitation, biologically mediated reactions, and radioactive decay. WEBMOD also simulates variations in the concentrations of the stable isotopes deuterium and oxygen-18 as a result of
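The three baseflow options named above (exponential, parabolic, or linear decrease in transmissivity) can be sketched as recession laws on the catchment's saturation deficit. The forms below are the standard TOPMODEL-style variants; WEBMOD's exact formulation may differ, so treat this as an illustrative sketch.

```python
import math

def baseflow(q0, storage_deficit, m, law="exponential"):
    """Baseflow for a TOPMODEL-style transmissivity profile.

    q0: discharge at zero saturation deficit
    storage_deficit: mean saturation deficit of the catchment (same units as m)
    m: scaling parameter of the transmissivity profile
    """
    if law == "exponential":  # classic TOPMODEL exponential profile
        return q0 * math.exp(-storage_deficit / m)
    if law == "linear":       # linear decrease in transmissivity with depth
        return max(q0 * (1.0 - storage_deficit / m), 0.0)
    if law == "parabolic":    # parabolic decrease in transmissivity with depth
        return max(q0 * (1.0 - storage_deficit / m) ** 2, 0.0)
    raise ValueError("unknown transmissivity law: " + law)

print(baseflow(10.0, 0.0, 0.05))  # -> 10.0: no deficit gives full discharge
```

The exponential form gives the familiar first-order recession; the linear and parabolic forms cut baseflow off entirely once the deficit reaches m.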

  1. Representing winter wheat in the Community Land Model (version 4.5)

    Science.gov (United States)

    Lu, Yaqiong; Williams, Ian N.; Bagley, Justin E.; Torn, Margaret S.; Kueppers, Lara M.

    2017-05-01

    Winter wheat is a staple crop for global food security and the dominant vegetation cover for a significant fraction of Earth's croplands. As such, it plays an important role in carbon cycling and land-atmosphere interactions in these key regions. Accurate simulation of winter wheat growth is crucial not only for future yield prediction under a changing climate, but also for predicting the energy and water cycles of winter wheat dominated regions. We modified the winter wheat model in the Community Land Model (CLM) to better simulate winter wheat leaf area index, latent heat flux, net ecosystem exchange of CO2, and grain yield, adding schemes to represent vernalization as well as frost tolerance and damage. We calibrated three key parameters (minimum planting temperature, maximum crop growth days, and initial value of the leaf carbon allocation coefficient) and modified the grain carbon allocation algorithm for simulations at the US Southern Great Plains ARM site (US-ARM), and validated the model performance at eight additional sites across North America. The new winter wheat model improved the prediction of monthly variation in leaf area index and reduced the root mean square error (RMSE) of latent heat flux and net ecosystem exchange by 41 % and 35 %, respectively, during the spring growing season. The model accurately simulated the interannual variation in yield at the US-ARM site, but underestimated yield by 35 % at sites and in regions (northwestern and southeastern US) with historically greater yields.

  2. Hybrid Model of the Context Dependent Vestibulo-Ocular Reflex: Implications for Vergence-Version Interactions

    Directory of Open Access Journals (Sweden)

    Mina Ranjbaran

    2015-02-01

    The vestibulo-ocular reflex (VOR) is an involuntary eye movement evoked by head movements. It is also influenced by viewing distance. This paper presents a hybrid nonlinear bilateral model for the horizontal angular vestibulo-ocular reflex (AVOR) in the dark. The model is based on known interconnections between saccadic burst circuits in the brainstem and ocular premotor areas in the vestibular nuclei during fast and slow phase intervals of nystagmus. We implemented a viable switching strategy for the timing of nystagmus events to allow emulation of real nystagmus data. The performance of the hybrid model is evaluated with simulations, and results are consistent with experimental observations. The hybrid model replicates realistic AVOR nystagmus patterns during sinusoidal or step head rotations in the dark and during interactions with vergence, e.g. fixation distance. By simply assigning proper nonlinear neural computations at the premotor level, the model replicates all reported experimental observations. This work sheds light on potential underlying neural mechanisms driving the context dependent AVOR and explains contradictory results in the literature. Moreover, context-dependent behaviors in more complex motor systems could also rely on local nonlinear neural computations.

  3. Long-term Industrial Energy Forecasting (LIEF) model (18-sector version)

    Energy Technology Data Exchange (ETDEWEB)

    Ross, M.H. (Univ. of Michigan, Ann Arbor, MI (United States). Dept. of Physics); Thimmapuram, P.; Fisher, R.E.; Maciorowski, W. (Argonne National Lab., IL (United States))

    1993-05-01

    The new 18-sector Long-term Industrial Energy Forecasting (LIEF) model is designed for convenient study of future industrial energy consumption, taking into account the composition of production, energy prices, and certain kinds of policy initiatives. Electricity and aggregate fossil fuels are modeled. Changes in energy intensity in each sector are driven by autonomous technological improvement (price-independent trend), the opportunity for energy-price-sensitive improvements, energy price expectations, and investment behavior. Although this decision-making framework involves more variables than the simplest econometric models, it enables direct comparison of an econometric approach with conservation supply curves from detailed engineering analysis. It also permits explicit consideration of a variety of policy approaches other than price manipulation. The model is tested in terms of historical data for nine manufacturing sectors, and parameters are determined for forecasting purposes. Relatively uniform and satisfactory parameters are obtained from this analysis. In this report, LIEF is also applied to create base-case and demand-side management scenarios to briefly illustrate modeling procedures and outputs.
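LIEF's distinction between an autonomous (price-independent) efficiency trend and price-sensitive improvement can be illustrated with a toy intensity projection. The functional form, names, and numbers below are illustrative assumptions, not LIEF's actual equations or calibrated parameters.

```python
def project_intensity(e0, years, auto_rate, price_ratio, elasticity):
    """Energy intensity after `years`, combining an autonomous efficiency
    trend with a price-driven response, in the spirit of LIEF.

    e0: base-year energy intensity (energy per unit of output)
    auto_rate: autonomous annual improvement, e.g. 0.01 for 1 %/yr
    price_ratio: expected energy price relative to the base year
    elasticity: (positive) long-run price elasticity of energy intensity
    """
    # Autonomous trend compounds each year; the price term shifts the level
    return e0 * (1.0 - auto_rate) ** years * price_ratio ** (-elasticity)

# Constant prices: only the autonomous 1 %/yr trend operates over a decade
print(round(project_intensity(1.0, 10, 0.01, 1.0, 0.3), 4))  # -> 0.9044
```

Sectoral energy use would then be this intensity times projected sectoral output, summed over the 18 sectors.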

  5. Hybrid model of the context dependent vestibulo-ocular reflex: implications for vergence-version interactions.

    Science.gov (United States)

    Ranjbaran, Mina; Galiana, Henrietta L

    2015-01-01

    The vestibulo-ocular reflex (VOR) is an involuntary eye movement evoked by head movements. It is also influenced by viewing distance. This paper presents a hybrid nonlinear bilateral model for the horizontal angular vestibulo-ocular reflex (AVOR) in the dark. The model is based on known interconnections between saccadic burst circuits in the brainstem and ocular premotor areas in the vestibular nuclei during fast and slow phase intervals of nystagmus. We implemented a viable switching strategy for the timing of nystagmus events to allow emulation of real nystagmus data. The performance of the hybrid model is evaluated with simulations, and results are consistent with experimental observations. The hybrid model replicates realistic AVOR nystagmus patterns during sinusoidal or step head rotations in the dark and during interactions with vergence, e.g., fixation distance. By simply assigning proper nonlinear neural computations at the premotor level, the model replicates all reported experimental observations. This work sheds light on potential underlying neural mechanisms driving the context dependent AVOR and explains contradictory results in the literature. Moreover, context-dependent behaviors in more complex motor systems could also rely on local nonlinear neural computations.

  6. A Prototypicality Validation of the Comprehensive Assessment of Psychopathic Personality (CAPP) Model Spanish Version.

    Science.gov (United States)

    Flórez, Gerardo; Casas, Alfonso; Kreis, Mette K F; Forti, Leonello; Martínez, Joaquín; Fernández, Juan; Conde, Manuel; Vázquez-Noguerol, Raúl; Blanco, Tania; Hoff, Helge A; Cooke, David J

    2015-10-01

    The Comprehensive Assessment of Psychopathic Personality (CAPP) is a newly developed, lexically based, conceptual model of psychopathy. The content validity of the Spanish language CAPP model was evaluated using prototypicality analysis. Prototypicality ratings were collected from 187 mental health experts and from samples of 143 health professionals and 282 community residents. Across the samples the majority of CAPP items were rated as highly prototypical of psychopathy. The Self, Dominance, and Attachment domains were evaluated as being more prototypical than the Behavioral and Cognitive domains. These findings are consistent with findings from similar studies in other languages and provide further support for the content validation of the CAPP model across languages and the lexical approach.

  7. An integrated assessment modeling framework for uncertainty studies in global and regional climate change: the MIT IGSM-CAM (version 1.0)

    OpenAIRE

    Monier, E.; Scott, J. R.; Sokolov, A. P.; Forest, C. E.; Schlosser, C. A.

    2013-01-01

    This paper describes a computationally efficient framework for uncertainty studies in global and regional climate change. In this framework, the Massachusetts Institute of Technology (MIT) Integrated Global System Model (IGSM), an integrated assessment model that couples an Earth system model of intermediate complexity to a human activity model, is linked to the National Center for Atmospheric Research (NCAR) Community Atmosphere Model (CAM). Since the MIT IGSM-CAM framework (version 1.0) inc...

  8. The Flexible Global Ocean-Atmosphere-Land System Model, Spectral Version 2: FGOALS-s2

    Institute of Scientific and Technical Information of China (English)

    BAO Qing; LIN Pengfei; ZHOU Tianjun; LIU Yimin; YU Yongqiang; WU Guoxiong; HE Bian

    2013-01-01

    The Flexible Global Ocean-Atmosphere-Land System model, Spectral Version 2 (FGOALS-s2) was used to simulate realistic climates and to study anthropogenic influences on climate change. Specifically, FGOALS-s2 was integrated under Coupled Model Intercomparison Project Phase 5 (CMIP5) to conduct coordinated experiments that will provide valuable scientific information to climate research communities. The performance of FGOALS-s2 was assessed in simulating major climate phenomena, documenting both the strengths and weaknesses of the model. The results indicate that FGOALS-s2 successfully overcomes climate drift and realistically models global and regional climate characteristics, including SST, precipitation, and atmospheric circulation. In particular, the model accurately captures the annual and semi-annual SST cycles in the equatorial Pacific Ocean and the main characteristic features of the Asian summer monsoon, including a low-level southwestern jet and five monsoon rainfall centers. The simulated climate variability was further examined in terms of teleconnections, leading modes of global SST (namely, ENSO), the Pacific Decadal Oscillation (PDO), and changes in 19th-20th century climate. The analysis demonstrates that FGOALS-s2 realistically simulates extra-tropical teleconnection patterns of large-scale climate and irregular ENSO periods. The model gives fairly reasonable reconstructions of the spatial patterns of the PDO and of global monsoon changes in the 20th century. However, because the indirect effects of aerosols are not included in the model, the simulated global temperature change during the period 1850-2005 is greater than the observed warming, by 0.6 °C. Some other shortcomings of the model are also noted.

  9. Unit testing, model validation, and biological simulation [version 1; referees: 2 approved, 1 approved with reservations]

    Directory of Open Access Journals (Sweden)

    Gopal P. Sarma

    2016-08-01

    The growth of the software industry has gone hand in hand with the development of tools and cultural practices for ensuring the reliability of complex pieces of software. These tools and practices are now acknowledged to be essential to the management of modern software. As computational models and methods have become increasingly common in the biological sciences, it is important to examine how these practices can accelerate biological software development and improve research quality. In this article, we give a focused case study of our experience with the practices of unit testing and test-driven development in OpenWorm, an open-science project aimed at modeling Caenorhabditis elegans. We identify and discuss the challenges of incorporating test-driven development into a heterogeneous, data-driven project, as well as the role of model validation tests, a category of tests unique to software that expresses scientific models.

  10. Hydrogeological DFN modelling using structural and hydraulic data from KLX04. Preliminary site description Laxemar subarea - version 1.2

    Energy Technology Data Exchange (ETDEWEB)

    Follin, Sven [SF GeoLogic AB, Taeby (Sweden); Stigsson, Martin [Swedish Nuclear Fuel and Waste Management Co., Stockholm (Sweden); Svensson, Urban [Computer-aided Fluid Engineering AB, Norrkoeping (Sweden)

    2006-04-15

    SKB is conducting site investigations for a high-level nuclear waste repository in fractured crystalline rocks at two coastal areas in Sweden. The two candidate areas are named Forsmark and Simpevarp. The site characterisation work is divided into two phases, an initial site investigation phase (ISI) and a complete site investigation phase (CSI). The results of the ISI phase are used as a basis for deciding on the subsequent CSI phase. On the basis of the CSI investigations a decision is made as to whether detailed characterisation will be performed (including sinking of a shaft). An integrated component in the site characterisation work is the development of site descriptive models. These comprise basic models in three dimensions with an accompanying text description. Central in the modelling work is the geological model which provides the geometrical context in terms of a model of deformation zones and the less fractured rock mass between the zones. Using the geological and geometrical description models as a basis, descriptive models for other disciplines (surface ecosystems, hydrogeology, hydrogeochemistry, rock mechanics, thermal properties and transport properties) will be developed. Great care is taken to arrive at a general consistency in the description of the various models and assessment of uncertainty and possible needs of alternative models. The main objective of this study is to support the development of a hydrogeological DFN model (Discrete Fracture Network) for the Preliminary Site Description of the Laxemar area on a regional-scale (SDM version L1.2). A more specific objective of this study is to assess the propagation of uncertainties in the geological DFN modelling reported for L1.2 into the groundwater flow modelling. An improved understanding is necessary in order to gain credibility for the Site Description in general and the hydrogeological description in particular. The latter will serve as a basis for describing the present

  11. Preliminary site description: Groundwater flow simulations. Simpevarp area (version 1.1) modelled with CONNECTFLOW

    Energy Technology Data Exchange (ETDEWEB)

    Hartley, Lee; Worth, David [Serco Assurance Ltd, Risley (United Kingdom); Gylling, Bjoern; Marsic, Niko [Kemakta Konsult AB, Stockholm (Sweden); Holmen, Johan [Golder Associates, Stockholm (Sweden)

    2004-08-01

    The main objective of this study is to assess the role of known and unknown hydrogeological conditions for the present-day distribution of saline groundwater at the Simpevarp and Laxemar sites. An improved understanding of the paleo-hydrogeology is necessary in order to gain credibility for the Site Descriptive Model in general and the Site Hydrogeological Description in particular. This is to serve as a basis for describing the present hydrogeological conditions as well as predictions of future hydrogeological conditions. This objective implies testing of: geometrical alternatives in the structural geology and bedrock fracturing, variants in the initial and boundary conditions, and parameter uncertainties (i.e. uncertainties in the hydraulic property assignment). This testing is necessary in order to evaluate the impact of the specified components on the groundwater flow field and to motivate proposals for further investigations of the hydrogeological conditions at the site. The general methodology for modelling transient salt transport and groundwater flow using CONNECTFLOW that was developed for Forsmark has also been applied successfully for Simpevarp. Because of time constraints, only a key set of variants was performed, focussed on the influences of the DFN model parameters, the kinematic porosity, and the initial condition. The salinity data in deep boreholes available at the time of the project were too limited to allow a good calibration exercise. However, the model predictions are compared with the available data from KLX01 and KLX02 below. Once more salinity data are available it may be possible to draw more definite conclusions based on the differences between variants. For the moment, the differences should be used to understand the sensitivity of the models to the various input parameters.

  12. Stochastic empirical loading and dilution model (SELDM) version 1.0.0

    Science.gov (United States)

    Granato, Gregory E.

    2013-01-01

    The Stochastic Empirical Loading and Dilution Model (SELDM) is designed to transform complex scientific data into meaningful information about the risk of adverse effects of runoff on receiving waters, the potential need for mitigation measures, and the potential effectiveness of such management measures for reducing these risks. The U.S. Geological Survey developed SELDM in cooperation with the Federal Highway Administration to help develop planning-level estimates of event mean concentrations, flows, and loads in stormwater from a site of interest and from an upstream basin. Planning-level estimates are defined as the results of analyses used to evaluate alternative management measures; planning-level estimates are recognized to include substantial uncertainties (commonly orders of magnitude). SELDM uses information about a highway site, the associated receiving-water basin, precipitation events, stormflow, water quality, and the performance of mitigation measures to produce a stochastic population of runoff-quality variables. SELDM provides input statistics for precipitation, prestorm flow, runoff coefficients, and concentrations of selected water-quality constituents from national datasets. Input statistics may be selected on the basis of the latitude, longitude, and physical characteristics of the site of interest and the upstream basin. The user also may derive and input statistics for each variable that are specific to a given site of interest or a given area. SELDM is a stochastic model because it uses Monte Carlo methods to produce the random combinations of input variable values needed to generate the stochastic population of values for each component variable. SELDM calculates the dilution of runoff in the receiving waters and the resulting downstream event mean concentrations and annual average lake concentrations. Results are ranked, and plotting positions are calculated, to indicate the level of risk of adverse effects caused by runoff concentrations
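The core dilution calculation, and the Monte Carlo generation of a stochastic population of event mean concentrations (EMCs), can be sketched as follows. The flow-weighted mass balance is standard; the distributions and parameter values are illustrative placeholders, not SELDM's fitted national statistics.

```python
import random

def downstream_emc(c_runoff, q_runoff, c_upstream, q_upstream):
    """Flow-weighted (mass-balance) event mean concentration below the
    outfall: the dilution calculation at the core of models like SELDM."""
    return (c_runoff * q_runoff + c_upstream * q_upstream) / (q_runoff + q_upstream)

def monte_carlo_emcs(n, seed=42):
    """Stochastic population of downstream EMCs from lognormal inputs
    (illustrative distributions, not SELDM's national datasets)."""
    rng = random.Random(seed)
    out = []
    for _ in range(n):
        c_r = rng.lognormvariate(0.0, 0.8)   # highway-runoff concentration
        q_r = rng.lognormvariate(-1.0, 0.5)  # runoff flow
        c_u = rng.lognormvariate(-0.5, 0.6)  # upstream concentration
        q_u = rng.lognormvariate(0.5, 0.5)   # upstream (prestorm) flow
        out.append(downstream_emc(c_r, q_r, c_u, q_u))
    return sorted(out)  # ranked, ready for plotting-position risk analysis

emcs = monte_carlo_emcs(1000)
print(emcs[0] <= emcs[-1])  # -> True
```

Plotting positions (e.g. rank/(n+1)) over the ranked population then give the exceedance frequencies used to judge risk.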

  13. The big challenges in modeling human and environmental well-being [version 1; referees: 3 approved]

    Directory of Open Access Journals (Sweden)

    Shripad Tuljapurkar

    2016-04-01

    This article is a selective review of quantitative research, historical and prospective, that is needed to inform sustainable development policy. I start with a simple framework to highlight how demography and productivity shape human well-being. I use that to discuss three sets of issues and corresponding challenges to modeling: first, population prehistory and early human development and their implications for the future; second, the multiple distinct dimensions of human and environmental well-being and the meaning of sustainability; and, third, inequality as a phenomenon triggered by development and models to examine changing inequality and its consequences. I conclude with a few words about other important factors: political, institutional, and cultural.

  14. A new version of variational integrated technology for environmental modeling with assimilation of available data

    Science.gov (United States)

    Penenko, Vladimir; Tsvetova, Elena; Penenko, Aleksey

    2014-05-01

    A modeling technology based on coupled models of atmospheric dynamics and chemistry is presented [1-3]. It is the result of applying variational methods in combination with methods of decomposition and splitting. The idea of Euler's integrating factors combined with the technique of adjoint problems is also used. In online technologies, a significant part of the algorithmic and computational work consists of solving problems of convection-diffusion-reaction type and of organizing data assimilation techniques based on them. For equations of convection-diffusion, the methodology yields unconditionally stable and monotone discrete-analytical schemes within the framework of decomposition and splitting methods. These schemes are exact for locally one-dimensional problems with respect to the spatial variables. For stiff systems of equations describing the transformation of gas and aerosol substances, monotone and stable schemes are also obtained. They are implemented by non-iterative algorithms. By construction, all schemes for the different components of the state functions are structurally uniform, and they are coordinated among themselves in the sense of forward and inverse modeling. Variational principles are constructed taking into account the fact that the behavior of the different dynamic and chemical components of the state function is characterized by high variability and uncertainty. Information on the parameters of the models, sources, and emission impacts is also not determined precisely. Therefore, to obtain consistent solutions, we construct methods of sensitivity theory that take the influence of uncertainty into account. For this purpose, new methods for the assimilation of hydrodynamic fields and gas-aerosol substances measured by different observing systems are proposed. Optimization criteria for the data assimilation problems are defined so that they include a set of functionals evaluating the total measure of uncertainties. The latter are explicitly introduced into

  15. Business models for renewable energy in the built environment. Updated version

    Energy Technology Data Exchange (ETDEWEB)

    Wuertenberger, L.; Menkveld, M.; Vethman, P.; Van Tilburg, X. [ECN Policy Studies, Amsterdam (Netherlands); Bleyl, J.W. [Energetic Solutions, Graz (Austria)

    2012-04-15

    The project RE-BIZZ aims to provide policy makers and market actors with insight into the way new and innovative business models (and/or policy measures) can stimulate the deployment of renewable energy technologies (RET) and energy efficiency (EE) measures in the built environment. The project is initiated and funded by the IEA Implementing Agreement for Renewable Energy Technology Deployment (IEA-RETD). It analysed ten business models in three categories, among them different types of Energy Service Companies (ESCOs), developing properties certified with a 'green' building label, building owners profiting from rent increases after EE measures, Property Assessed Clean Energy (PACE) financing, on-bill financing, and leasing of RET equipment. For each model the analysis covered the organisational and financial structure, the existing market and policy context, and strengths, weaknesses, opportunities, and threats (SWOT). The study concludes with recommendations for policy makers and other market actors.

  16. First implementation of secondary inorganic aerosols in the MOCAGE version R2.15.0 chemistry transport model

    Science.gov (United States)

    Guth, J.; Josse, B.; Marécal, V.; Joly, M.; Hamer, P.

    2016-01-01

    In this study we develop a secondary inorganic aerosol (SIA) module for the MOCAGE chemistry transport model developed at CNRM. The aim is to have a module suitable for running at different model resolutions and for operational applications with reasonable computing times. Based on the ISORROPIA II thermodynamic equilibrium module, the new version of the model is presented and evaluated at both the global and regional scales. The results show high concentrations of secondary inorganic aerosols in the most polluted regions: Europe, Asia and the eastern part of North America. Asia shows higher sulfate concentrations than other regions thanks to emission reductions in Europe and North America. Using two simulations, one with and the other without secondary inorganic aerosol formation, the global model outputs are compared to previous studies, to MODIS AOD retrievals, and also to in situ measurements from the HTAP database. The model shows a better agreement with MODIS AOD retrievals in all geographical regions after introducing the new SIA scheme. It also provides a good statistical agreement with in situ measurements of secondary inorganic aerosol composition: sulfate, nitrate and ammonium. In addition, the simulation with SIA generally gives a better agreement with observations for secondary inorganic aerosol precursors (nitric acid, sulfur dioxide, ammonia), in particular with a reduction of the modified normalized mean bias (MNMB). At the regional scale, over Europe, the model simulation with SIA is compared to the in situ measurements from the EMEP database and shows a good agreement with secondary inorganic aerosol composition. The results at the regional scale are consistent with those obtained from the global simulations. The AIRBASE database was used to compare the model to regulated air quality pollutants: particulate matter, ozone and nitrogen dioxide concentrations. Introduction of the SIA in MOCAGE provides a reduction in the PM2.5 MNMB of 0.44 on a
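The modified normalized mean bias (MNMB) used in this evaluation is a standard air-quality metric: it is bounded in [−2, 2] and treats over- and underestimation symmetrically. A minimal sketch of the definition:

```python
def mnmb(model, obs):
    """Modified normalized mean bias:
    MNMB = (2/N) * sum((m_i - o_i) / (m_i + o_i)), bounded in [-2, 2]."""
    n = len(model)
    return 2.0 / n * sum((m - o) / (m + o) for m, o in zip(model, obs))

# A uniform factor-of-two overestimate gives MNMB = +2/3
print(mnmb([2.0, 2.0], [1.0, 1.0]))
```

Because each term is normalized by the mean of model and observation, a few large outliers cannot dominate the score the way they can with a normalized mean bias.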

  17. User’s Guide for COMBIMAN Programs (COMputerized BIomechanical MAN-Model) Version 5

    Science.gov (United States)

    1982-04-01

    accomplishing this has been to build mock-ups and use an undetermined number of "representative" test pilots to evaluate the work environment and...the "representative" pilots depends on the availability of pilots and the whims of the designers. The COMputerized BIomechanical MAN-model (COMBIMAN...defined with letter S, is the field of stereovision, which is the field visible to both eyes simultaneously. The field defined with letter F

  18. User's guide to the META-Net economic modeling system. Version 1.2

    Energy Technology Data Exchange (ETDEWEB)

    Lamont, A.

    1994-11-24

    In a market economy, demands for commodities are met through various technologies and resources. Markets select the technologies and resources to meet these demands based on their costs. Over time, the competitiveness of different technologies can change due to the exhaustion of the resources they depend on, the introduction of newer, more efficient technologies, or shifts in user demands. As this happens, the structure of the economy changes. The Market Equilibrium and Technology Assessment Network Modelling System, META·Net, has been developed for building and solving multi-period equilibrium models to analyze the shifts in the energy system that may occur as new technologies are introduced and resources are exhausted. META·Net allows a user to build and solve complex economic models. It models a market economy as a network of nodes representing resources, conversion processes, markets, and end-use demands. Commodities flow through this network from resources, through conversion processes and markets, to the end-users. META·Net then finds the multi-period equilibrium prices and quantities. The solution includes the prices and quantities demanded for each commodity, along with the capacity additions (and retirements) for each conversion process and the trajectories of resource extraction. Although the changes in the economy are largely driven by consumers' behavior and the costs of technologies and resources, they are also affected by various government policies. These can include constraints on prices and quantities, and various taxes and constraints on environmental emissions. META·Net can incorporate many of these mechanisms and evaluate their potential impact on the development of the economic system.
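Each market node in an equilibrium network of this kind clears when supply meets demand at the market price. The toy single-commodity version below (bisection on the sign of excess demand) illustrates that clearing computation; it is a pedagogical sketch, not META-Net's actual multi-period solver.

```python
def equilibrium_price(demand, supply, lo=1e-6, hi=1e6, tol=1e-9):
    """Find the market-clearing price where supply(p) = demand(p), assuming
    demand falls and supply rises with price, by bisection on excess demand."""
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        # Positive excess demand means the price is still too low
        if demand(mid) > supply(mid):
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# Demand a/p against a linear supply curve k*p clears at sqrt(a/k)
p_star = equilibrium_price(lambda p: 100.0 / p, lambda p: 4.0 * p)
print(round(p_star, 6))  # -> 5.0
```

A network model repeats this clearing at every market node and every period, with the supply curves built from the costs of the upstream conversion processes and resources.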

  19. Modelling turbulent vertical mixing sensitivity using a 1-D version of NEMO

    Directory of Open Access Journals (Sweden)

    G. Reffray

    2014-08-01

    Full Text Available Through two numerical experiments, a 1-D vertical model called NEMO1D was used to investigate physical and numerical turbulent-mixing behaviour. The results show that all the turbulent closures tested (k + l from Blanke and Delecluse, 1993, and two-equation models: Generic Length Scale closures from Umlauf and Burchard, 2003) are able to correctly reproduce the classical test of Kato and Phillips (1969) under favourable numerical conditions, while some solutions may diverge depending on the degradation of the spatial and time discretization. The performance of the turbulence models was then compared with data measured over a one-year period (mid-2010 to mid-2011) at the PAPA station, located in the North Pacific Ocean. The modelled temperature and salinity were in good agreement with the observations, with a maximum temperature error between −2 and 2 °C during the stratified period (June to October). However, the results also depend on the numerical conditions. The vertical RMSE varied, for different turbulent closures, from 0.1 to 0.3 °C during the stratified period and from 0.03 to 0.15 °C during the homogeneous period. This 1-D configuration at the PAPA station (called PAPA1D) is now available in NEMO as a reference configuration, including the input files and atmospheric forcing set described in this paper. Thus, all the results described can be recovered by downloading and launching PAPA1D. The configuration is described on the NEMO site (http://www.nemo-ocean.eu/Using-NEMO/Configurations/C1D_PAPA). This package is a good starting point for further investigation of vertical processes.
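    The vertical RMSE scores quoted above are a straightforward statistic. As a minimal sketch (with invented temperature profiles, not the PAPA observations), one might compute them as:

```python
import math

def rmse(model, obs):
    """Root-mean-square error between paired model and observed values."""
    return math.sqrt(sum((m - o) ** 2 for m, o in zip(model, obs)) / len(obs))

# Hypothetical modelled vs. observed temperatures (°C) at four depths
model_temp = [14.2, 13.8, 11.5, 9.0]
obs_temp   = [14.0, 13.5, 11.9, 9.2]

print(round(rmse(model_temp, obs_temp), 3))  # -> 0.287
```

    A score of ~0.3 °C would sit at the upper end of the stratified-period range reported for the tested closures.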

  20. The Everglades Depth Estimation Network (EDEN) surface-water model, version 2

    Science.gov (United States)

    Telis, Pamela A.; Xie, Zhixiao; Liu, Zhongwei; Li, Yingru; Conrads, Paul A.

    2015-01-01

    The Everglades Depth Estimation Network (EDEN) is an integrated network of water-level gages, interpolation models that generate daily water-level and water-depth data, and applications that compute derived hydrologic data across the freshwater part of the greater Everglades landscape. The U.S. Geological Survey Greater Everglades Priority Ecosystems Science provides support for EDEN so that it can supply quality-assured monitoring data for the U.S. Army Corps of Engineers Comprehensive Everglades Restoration Plan.

  1. The Canadian Defence Input-Output Model DIO Version 4.41

    Science.gov (United States)

    2011-09-01

    Output models, for instance to study the regional benefits of different large procurement programmes, the data censorship limitation would... (snippet continues with entries from the report's commodity index, e.g. 0960 Cocoa and chocolate, excluding potato chips and nuts; 0979 Nuts; 0989 Chocolate...; 5631 Private residential care facilities; 5632 Child care, outside the home; 5633 Other health and social services) DRDC CORA TM 2011-147

  2. Uncorrelated Encounter Model of the National Airspace System, Version 2.0

    Science.gov (United States)

    2013-08-19

    between two IFR aircraft in oceanic airspace. The reason for this is that one cannot observe encounters of sufficient fidelity in the available data...does not observe a sufficient number of encounters between instrument flight rules (IFR) and non-IFR traffic beyond 12 NM from the shore. (Table 1, "Encounter model categories", tabulates aircraft of interest and intruder aircraft by location and flight rule: IFR, VFR, noncooperative conventional.)

  3. Advanced Propagation Model (APM) Version 2.1.04 Computer Software Configuration Item (CSCI) Documents

    Science.gov (United States)

    2007-02-01

    3.1.2.17 Ray Trace (RAYTRACE) SU ... 3.1.2.18 ... NOSC TD 1015, Feb. 1984. Horst, M.M., Dyer, F.B., Tuley, M.T., "Radar Sea Clutter Model," IEEE International Conference on Antennas and Propagation ... 3.1.2.17 Ray Trace (RAYTRACE) SU: Using standard ray trace techniques, a ray is traced from a starting height and range with a specified starting

  4. System cost model user's manual, version 1.2

    Energy Technology Data Exchange (ETDEWEB)

    Shropshire, D.

    1995-06-01

    The System Cost Model (SCM) was developed by Lockheed Martin Idaho Technologies in Idaho Falls, Idaho and MK-Environmental Services in San Francisco, California to support the Baseline Environmental Management Report sensitivity analysis for the U.S. Department of Energy (DOE). The SCM serves the needs of the entire DOE complex for treatment, storage, and disposal (TSD) of mixed low-level, low-level, and transuranic waste. The model can be used to evaluate total complex costs based on various configuration options or to evaluate site-specific options. The site-specific cost estimates are based on generic assumptions such as waste loads and densities, treatment processing schemes, existing facility capacities and functions, storage and disposal requirements, schedules, and cost factors. The SCM allows customization of the data for detailed site-specific estimates. There are approximately forty TSD module designs that have been further customized to account for design differences for nonalpha, alpha, remote-handled, and transuranic wastes. The SCM generates cost profiles based on the model default parameters or customized user-defined input and also generates costs for transporting waste from generators to TSD sites.

  5. T2LBM Version 1.0: Landfill bioreactor model for TOUGH2

    Energy Technology Data Exchange (ETDEWEB)

    Oldenburg, Curtis M.

    2001-05-22

    The need to control gas and leachate production and minimize refuse volume in landfills has motivated the development of landfill simulation models that can be used by operators to predict and design optimal treatment processes. T2LBM is a module for the TOUGH2 simulator that implements a Landfill Bioreactor Model to provide simulation capability for the processes of aerobic or anaerobic biodegradation of municipal solid waste and the associated flow and transport of gas and liquid through the refuse mass. T2LBM incorporates a Monod kinetic rate law for the biodegradation of acetic acid in the aqueous phase by either aerobic or anaerobic microbes as controlled by the local oxygen concentration. Acetic acid is considered a proxy for all biodegradable substrates in the refuse. Aerobic and anaerobic microbes are assumed to be immobile and not limited by nutrients in their growth. Methane and carbon dioxide generation due to biodegradation with corresponding thermal effects are modeled. The numerous parameters needed to specify biodegradation are input by the user in the SELEC block of the TOUGH2 input file. Test problems show that good matches to laboratory experiments of biodegradation can be obtained. A landfill test problem demonstrates the capabilities of T2LBM for a hypothetical two-dimensional landfill scenario with permeability heterogeneity and compaction.
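    The Monod rate law named above is a standard saturation kinetics form. The sketch below is an illustrative dual-limitation version with invented parameter values, not T2LBM's calibrated formulation: substrate availability limits the rate, while the oxygen factor promotes the aerobic pathway and suppresses the anaerobic one.

```python
# Hedged sketch of Monod-type kinetics for acetic-acid degradation.
# The functional forms and numbers are illustrative assumptions.

def monod_rate(k_max, substrate, K_s, oxygen, K_o2, aerobic=True):
    """Degradation rate limited by substrate; the oxygen factor switches
    between aerobic (grows with O2) and anaerobic (inhibited by O2) microbes."""
    substrate_term = substrate / (K_s + substrate)
    if aerobic:
        oxygen_term = oxygen / (K_o2 + oxygen)   # needs O2 to proceed
    else:
        oxygen_term = K_o2 / (K_o2 + oxygen)     # suppressed by O2
    return k_max * substrate_term * oxygen_term

# With no dissolved oxygen, only the anaerobic pathway is active:
print(monod_rate(1.0, 1.0, 1.0, oxygen=0.0, K_o2=0.1, aerobic=True))   # -> 0.0
print(monod_rate(1.0, 1.0, 1.0, oxygen=0.0, K_o2=0.1, aerobic=False))  # -> 0.5
```

    This switch on local oxygen concentration is what lets a single rate expression cover both the aerated and anaerobic zones of a refuse mass.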

  6. Regional groundwater flow model for a glaciation scenario. Simpevarp subarea - version 1.2

    Energy Technology Data Exchange (ETDEWEB)

    Jaquet, O.; Siegel, P. [Colenco Power Engineering Ltd, Baden-Daettwil (Switzerland)

    2006-10-15

    A groundwater flow model (glaciation model) was developed at a regional scale in order to study long term transient effects related to a glaciation scenario likely to occur in response to climatic changes. Conceptually the glaciation model was based on the regional model of Simpevarp and was then extended to a mega-regional scale (of several hundred kilometres) in order to account for the effects of the ice sheet. These effects were modelled using transient boundary conditions provided by a dynamic ice sheet model describing the phases of glacial build-up, glacial completeness and glacial retreat needed for the glaciation scenario. The results demonstrate the strong impact of the ice sheet on the flow field, in particular during the phases of the build-up and the retreat of the ice sheet. These phases last for several thousand years and may cause large amounts of melt water to reach the level of the repository and below. The highest fluxes of melt water are located in the vicinity of the ice margin. As the ice sheet approaches the repository location, the advective effects gain dominance over diffusive effects in the flow field. In particular, up-coning effects are likely to occur at the margin of the ice sheet leading to potential increases in salinity at repository level. For the base case, the entire salinity field of the model is almost completely flushed out at the end of the glaciation period. The flow patterns are strongly governed by the location of the conductive features in the subglacial layer. The influence of these glacial features is essential for the salinity distribution as is their impact on the flow trajectories and, therefore, on the resulting performance measures. Travel times and F-factor were calculated using the method of particle tracking. Glacial effects cause major consequences on the results. In particular, average travel times from the repository to the surface are below 10 a during phases of glacial build-up and retreat. 
In comparison

  7. Refinement and evaluation of the Massachusetts firm-yield estimator model version 2.0

    Science.gov (United States)

    Levin, Sara B.; Archfield, Stacey A.; Massey, Andrew J.

    2011-01-01

    The firm yield is the maximum average daily withdrawal that can be extracted from a reservoir without risk of failure during an extended drought period. Previously developed procedures for determining the firm yield of a reservoir were refined and applied to 38 reservoir systems in Massachusetts, including 25 single- and multiple-reservoir systems that were examined during previous studies and 13 additional reservoir systems. Changes to the firm-yield model include refinements to the simulation methods and input data, as well as the addition of several scenario-testing capabilities. The simulation procedure was adapted to run at a daily time step over a 44-year simulation period, and daily streamflow and meteorological data were compiled for all the reservoirs for input to the model. Another change to the model-simulation methods is the adjustment of the scaling factor used in estimating groundwater contributions to the reservoir. The scaling factor is used to convert the daily groundwater-flow rate into a volume by multiplying the rate by the length of reservoir shoreline that is hydrologically connected to the aquifer. Previous firm-yield analyses used a constant scaling factor that was estimated from the reservoir surface area at full pool. The use of a constant scaling factor caused groundwater flows during periods when the reservoir stage was very low to be overestimated. The constant groundwater scaling factor used in previous analyses was replaced with a variable scaling factor that is based on daily reservoir stage. This change reduced instability in the groundwater-flow algorithms and produced more realistic groundwater-flow contributions during periods of low storage. Uncertainty in the firm-yield model arises from many sources, including errors in input data. The sensitivity of the model to uncertainty in streamflow input data and uncertainty in the stage-storage relation was examined. 
A series of Monte Carlo simulations was performed on 22 reservoirs
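    The variable groundwater scaling factor described above (a daily flow rate converted to a volume via a stage-dependent shoreline length) can be sketched as follows. The stage-to-shoreline relation and all numbers are invented placeholders; the report derives its relation from reservoir stage-storage data.

```python
# Sketch of a stage-dependent groundwater scaling factor.
# shoreline_model is a hypothetical linear relation for illustration only.

def groundwater_inflow(rate_per_m, stage, stage_to_shoreline):
    """Convert a groundwater flow rate (m^3/day per m of shoreline) into a
    daily volume using a shoreline length that varies with reservoir stage."""
    return rate_per_m * stage_to_shoreline(stage)

def shoreline_model(stage_m):
    """Invented relation: shoreline shrinks linearly as stage drops below a
    10 m full pool with 5000 m of hydrologically connected shoreline."""
    full_pool_shoreline = 5000.0
    return full_pool_shoreline * max(0.0, min(1.0, stage_m / 10.0))

print(groundwater_inflow(0.2, 10.0, shoreline_model))  # full pool -> 1000.0
print(groundwater_inflow(0.2, 2.5, shoreline_model))   # low stage -> 250.0
```

    A constant scaling factor would return 1000.0 in both cases, which is exactly the low-stage overestimation the revised model removes.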

  8. Implementing and Evaluating Variable Soil Thickness in the Community Land Model, Version 4.5 (CLM4.5)

    Energy Technology Data Exchange (ETDEWEB)

    Brunke, Michael A.; Broxton, Patrick; Pelletier, Jon; Gochis, David; Hazenberg, Pieter; Lawrence, David M.; Leung, L. Ruby; Niu, Guo-Yue; Troch, Peter A.; Zeng, Xubin

    2016-05-01

    One of the recognized weaknesses of land surface models as used in weather and climate models is the assumption of constant soil thickness due to the lack of global estimates of bedrock depth. Using a 30 arcsecond global dataset for the thickness of relatively porous, unconsolidated sediments over bedrock, spatial variation in soil thickness is included here in version 4.5 of the Community Land Model (CLM4.5). The number of soil layers for each grid cell is determined from the average soil depth for each 0.9° latitude × 1.25° longitude grid cell. Including variable soil thickness affects the simulations most in regions with shallow bedrock, corresponding predominantly to areas of mountainous terrain. The greatest changes are to baseflow, with the annual minimum generally occurring earlier, while smaller changes are seen in surface fluxes like latent heat flux and surface runoff, in which only the annual cycle amplitude is increased. These changes are tied to soil moisture changes, which are most substantial in locations with shallow bedrock. Total water storage (TWS) anomalies do not change much over most river basins around the globe, since most basins contain mostly deep soils. However, it was found that TWS anomalies substantially differ for a river basin with more mountainous terrain. Additionally, the annual cycle in soil temperature is affected by including realistic soil thicknesses due to changes to heat capacity and thermal conductivity.
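    Determining the number of active soil layers from bedrock depth might look like the following sketch. The layer interface depths are illustrative stand-ins, not CLM4.5's actual exponentially spaced grid, and the rule of keeping at least one layer is an assumption for the sketch.

```python
# Illustrative layer interfaces (depth to layer bottom, in m); not CLM4.5's grid.
layer_bottoms = [0.018, 0.045, 0.091, 0.166, 0.289, 0.493, 0.829, 1.383, 2.296, 3.802]

def n_soil_layers(bedrock_depth):
    """Count layers lying fully above bedrock; always keep at least one so
    shallow-bedrock cells still have an active soil column."""
    n = sum(1 for bottom in layer_bottoms if bottom <= bedrock_depth)
    return max(1, n)

print(n_soil_layers(0.5))   # shallow mountain bedrock -> 6 active layers
print(n_soil_layers(10.0))  # deep sediment: all 10 layers active
```

    Truncating the column in this way is what changes heat capacity, thermal conductivity, and water storage in shallow-bedrock grid cells.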

  9. Validation of the French version of the marijuana craving questionnaire (MCQ) generates a two-factor model.

    Science.gov (United States)

    Chauchard, Emeline; Goutaudier, Nelly; Heishman, Stephen J; Gorelick, David A; Chabrol, Henri

    2015-01-01

    Craving is a major issue in drug addiction, and a target for drug treatment. The Marijuana Craving Questionnaire-Short Form (MCQ-SF) is a useful tool for assessing cannabis craving in clinical and research settings. The aim of this study was to validate the French version of the MCQ-SF (FMCQ-SF). Young adult cannabis users not seeking treatment (n = 679) completed the FMCQ-SF and questionnaires assessing their frequency of cannabis use and craving, cannabis use disorder criteria, and alcohol use. Confirmatory factor analysis of the four-factor FMCQ-SF model did not fit the data well. Exploratory factor analysis suggested a two-factor solution ("pleasure", characterized by planning and expectation of positive effects, and "release of tension", characterized by relief from anxiety, nervousness, or tension) with good psychometric properties. This two-factor model showed good internal and convergent validity and correlated with cannabis abuse and dependence and with frequency of cannabis use and craving. Validation of the FMCQ-SF generated a two-factor model, different from the four-factor solution generated in English-language studies. Considering that craving plays an important role in withdrawal and relapse, this questionnaire should be useful for French-language addiction professionals.

  10. Presentation, calibration and validation of the low-order, DCESS Earth System Model (Version 1)

    Directory of Open Access Journals (Sweden)

    J. O. Pepke Pedersen

    2008-11-01

    Full Text Available A new, low-order Earth System Model is described, calibrated and tested against Earth system data. The model features modules for the atmosphere, ocean, ocean sediment, land biosphere and lithosphere and has been designed to simulate global change on time scales of years to millions of years. The atmosphere module considers radiation balance, meridional transport of heat and water vapor between low-mid latitude and high latitude zones, heat and gas exchange with the ocean and sea ice and snow cover. Gases considered are carbon dioxide and methane for all three carbon isotopes, nitrous oxide and oxygen. The ocean module has 100 m vertical resolution, carbonate chemistry and prescribed circulation and mixing. Ocean biogeochemical tracers are phosphate, dissolved oxygen, dissolved inorganic carbon for all three carbon isotopes and alkalinity. Biogenic production of particulate organic matter in the ocean surface layer depends on phosphate availability but with lower efficiency in the high latitude zone, as determined by model fit to ocean data. The calcite to organic carbon rain ratio depends on surface layer temperature. The semi-analytical, ocean sediment module considers calcium carbonate dissolution and oxic and anoxic organic matter remineralisation. The sediment is composed of calcite, non-calcite mineral and reactive organic matter. Sediment porosity profiles are related to sediment composition and a bioturbated layer of 0.1 m thickness is assumed. A sediment segment is ascribed to each ocean layer and segment area stems from observed ocean depth distributions. Sediment burial is calculated from sedimentation velocities at the base of the bioturbated layer. Bioturbation rates and oxic and anoxic remineralisation rates depend on organic carbon rain rates and dissolved oxygen concentrations. The land biosphere module considers leaves, wood, litter and soil. Net primary production depends on atmospheric carbon dioxide concentration and

  11. Regional hydrogeological simulations. Numerical modelling using ConnectFlow. Preliminary site description Simpevarp sub area - version 1.2

    Energy Technology Data Exchange (ETDEWEB)

    Hartley, Lee; Hoch, Andrew; Hunter, Fiona; Jackson, Peter [Serco Assurance, Risley (United Kingdom); Marsic, Niko [Kemakta Konsult, Stockholm (Sweden)

    2005-02-01

    The objective of this study is to support the development of a preliminary Site Description of the Simpevarp area on a regional scale based on the available data of August 2004 (Data Freeze S1.2) and the previous Site Description. A more specific objective of this study is to assess the role of known and unknown hydrogeological conditions for the present-day distribution of saline groundwater in the Simpevarp area on a regional scale. An improved understanding of the paleo-hydrogeology is necessary in order to gain credibility for the Site Description in general and the hydrogeological description in particular. This is to serve as a basis for describing the present hydrogeological conditions on a local scale as well as predictions of future hydrogeological conditions. Other key objectives were to identify the model domain required to simulate regional flow and solute transport at the Simpevarp area and to incorporate a new geological model of the deformation zones produced for Version S1.2. Another difference from Version S1.1 is the increased effort invested in conditioning the hydrogeological property models to the fracture boremap and hydraulic data. A new methodology was developed for interpreting the discrete fracture network (DFN) by integrating the geological description of the DFN (GeoDFN) with the hydraulic test data from Posiva Flow-Log and Pipe-String System double-packer techniques to produce a conditioned Hydro-DFN model. This was done in a systematic way that addressed uncertainties associated with the assumptions made in interpreting the data, such as the relationship between fracture transmissivity and length. Consistent hydraulic data were only available for three boreholes, and therefore only relatively simplistic models were proposed, as there is not sufficient data to justify extrapolating the DFN away from the boreholes based on rock domain, for example.
Significantly, a far greater quantity of hydro-geochemical data was available for calibration in the

  12. Solid Modeling Aerospace Research Tool (SMART) user's guide, version 2.0

    Science.gov (United States)

    Mcmillin, Mark L.; Spangler, Jan L.; Dahmen, Stephen M.; Rehder, John J.

    1993-01-01

    The Solid Modeling Aerospace Research Tool (SMART) software package is used in the conceptual design of aerospace vehicles. It provides a highly interactive and dynamic capability for generating geometries with Bezier cubic patches. Features include automatic generation of commonly used aerospace constructs (e.g., wings and multilobed tanks); cross-section skinning; wireframe and shaded presentation; area, volume, inertia, and center-of-gravity calculations; and interfaces to various aerodynamic and structural analysis programs. A comprehensive description of SMART and how to use it is provided.

  13. Intercomparison of measurements of NO2 concentrations in the atmosphere simulation chamber SAPHIR during the NO3Comp campaign

    Directory of Open Access Journals (Sweden)

    H. Fuchs

    2010-01-01

    Full Text Available NO2 concentrations were measured by various instruments during the NO3Comp campaign at the atmosphere simulation chamber SAPHIR at Forschungszentrum Jülich, Germany, in June 2007. Analytical methods included photolytic conversion with chemiluminescence (PC-CLD), broadband cavity ring-down spectroscopy (BBCRDS), pulsed cavity ring-down spectroscopy (CRDS), incoherent broadband cavity-enhanced absorption spectroscopy (IBBCEAS), and laser-induced fluorescence (LIF). All broadband absorption spectrometers were optimized for the detection of the main target species of the campaign, NO3, but were also capable of detecting NO2 simultaneously with reduced sensitivity. NO2 mixing ratios in the chamber were within a range characteristic of polluted, urban conditions, with a maximum mixing ratio of approximately 75 ppbv. The overall agreement between measurements of all instruments was excellent. Linear fits of the combined data sets resulted in slopes that differ from unity only within the stated uncertainty of each instrument. Possible interferences from species such as water vapor and ozone were negligible under the experimental conditions.
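    Instrument intercomparisons of this kind commonly summarize agreement as the slope of a linear fit between paired measurements, with a slope near unity indicating agreement. A minimal sketch with invented mixing ratios (not the campaign data), using a zero-intercept least-squares fit:

```python
def slope_through_origin(x, y):
    """Least-squares slope of y = m*x (no intercept) between two instruments
    measuring the same quantity; m close to 1 indicates agreement."""
    return sum(xi * yi for xi, yi in zip(x, y)) / sum(xi * xi for xi in x)

# Hypothetical paired NO2 mixing ratios (ppbv) from two instruments
inst_a = [10.0, 25.0, 50.0, 75.0]
inst_b = [10.2, 24.6, 50.5, 74.8]

m = slope_through_origin(inst_a, inst_b)
print(round(m, 3))  # -> 1.0
```

    Whether a fitted slope's deviation from unity matters is then judged against each instrument's stated calibration uncertainty, as in the campaign analysis.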

  14. Intercomparison of measurements of NO2 concentrations in the atmosphere simulation chamber SAPHIR during the NO3Comp campaign

    Directory of Open Access Journals (Sweden)

    R. M. Varma

    2009-10-01

    Full Text Available NO2 concentrations were measured by various instruments during the NO3Comp campaign at the atmosphere simulation chamber SAPHIR at Forschungszentrum Jülich, Germany, in June 2007. Analytical methods included photolytic conversion with chemiluminescence (PC-CLD), broadband cavity ring-down spectroscopy (BBCRDS), pulsed cavity ring-down spectroscopy (CRDS), incoherent broadband cavity-enhanced absorption spectroscopy (IBBCEAS), and laser-induced fluorescence (LIF). All broadband absorption spectrometers were optimized for the detection of the main target species of the campaign, NO3, but were also capable of detecting NO2 simultaneously with reduced sensitivity. NO2 mixing ratios in the chamber were within a range characteristic of polluted, urban conditions, with a maximum mixing ratio of approximately 75 ppbv. The overall agreement between measurements of all instruments was excellent. Linear fits of the combined data sets resulted in slopes that differ from unity only within the stated uncertainty of each instrument. Possible interferences from species such as water vapor and ozone were negligible under the experimental conditions.

  15. Feynman propagator for the planar version of the CPT-even electrodynamics of Standard Model Extension

    Energy Technology Data Exchange (ETDEWEB)

    Casana, Rodolfo; Ferreira Junior, Manoel M.; Moreira, Roemir P.M. [Universidade Federal do Maranhao (UFMA), MA (Brazil); Gomes, Adalto R. [Instituto Federal de Educacao Ciencia e Tecnologia do Maranhao (IFMA), MA (Brazil)

    2011-07-01

    Full text: In a recent work, we have accomplished the dimensional reduction of the non-birefringent CPT-even gauge sector of the Standard Model Extension. As is well known, the CPT-even gauge sector is composed of nineteen components contained in the fourth-rank tensor (K_F)_{μνρσ}, of which nine do not yield birefringence. These nine components can be parametrized in terms of the symmetric and traceless tensor k_{μν} = (K_F)^ρ_{μρν}. Starting from this parametrization, and applying the dimensional reduction procedure, we obtain a planar theory corresponding to the non-birefringent sector, composed of mutually coupled gauge and scalar sectors. These sectors possess six and three independent components, respectively. Some interesting properties of this theory, concerning classical stationary solutions, were examined recently. In the present work, we explicitly evaluate the Feynman propagator for this model, in a tensor closed way, using a set of operators defined in terms of three 3-vectors. We use this propagator to examine the dispersion relations of this theory, and analyze some properties related to its causality, stability, and unitarity. (author)

  16. Representing icebergs in the iLOVECLIM model (version 1.0) – a sensitivity study

    Directory of Open Access Journals (Sweden)

    M. Bügelmayer

    2014-07-01

    Full Text Available Recent modelling studies have indicated that icebergs alter the ocean's state, the thickness of sea ice and the prevailing atmospheric conditions; in short, they play an active role in the climate system. The icebergs' impact is due to their slowly released melt water, which freshens and cools the ocean. The spatial distribution of the icebergs, and thus of their melt water, depends on the forces (atmospheric and oceanic) acting on them as well as on the icebergs' size. The studies conducted so far have in common that the icebergs were moved by reconstructed or modelled forcing fields and that the initial size distribution of the icebergs was prescribed according to present-day observations. To address these shortcomings, we used the climate model iLOVECLIM, which includes actively coupled ice-sheet and iceberg modules, to conduct 15 sensitivity experiments to analyse (1) the impact of the forcing fields (atmospheric vs. oceanic) on the icebergs' distribution and melt flux, and (2) the effect of the used initial iceberg size on the resulting Northern Hemisphere climate and ice sheet under different climate conditions (pre-industrial, strong/weak radiative forcing). Our results show that, under equilibrated pre-industrial conditions, the oceanic currents cause the bergs to stay close to the Greenland and North American coasts, whereas the atmospheric forcing quickly distributes them further away from their calving site. These different characteristics strongly affect the lifetime of icebergs, since the wind-driven icebergs melt up to two years faster as they are quickly distributed into the relatively warm North Atlantic waters. Moreover, we find that local variations in the spatial distribution due to different iceberg sizes do not result in different climate states and Greenland ice sheet volume, independent of the prevailing climate conditions (pre-industrial, warming or cooling climate).
Therefore, we conclude that local differences in the distribution of their

  17. Unitary version of the single-particle dispersive optical model and single-hole excitations in medium-heavy spherical nuclei

    Science.gov (United States)

    Kolomiytsev, G. V.; Igashov, S. Yu.; Urin, M. H.

    2017-07-01

    A unitary version of the single-particle dispersive optical model was proposed with the aim of applying it to describing high-energy single-hole excitations in medium-heavy mass nuclei. By considering the example of experimentally studied single-hole excitations in the 90Zr and 208Pb parent nuclei, the contribution of the fragmentation effect to the real part of the optical-model potential was estimated quantitatively in the framework of this version. The results obtained in this way were used to predict the properties of such excitations in the 132Sn parent nucleus.

  18. Midlatitude atmospheric responses to Arctic sensible heat flux anomalies in Community Climate Model, Version 4

    Science.gov (United States)

    Mills, Catrin M.; Cassano, John J.; Cassano, Elizabeth N.

    2016-12-01

    Possible linkages between Arctic sea ice loss and midlatitude weather are strongly debated in the literature. We analyze a coupled model simulation to assess the possibility of Arctic ice variability forcing a midlatitude response, ensuring consistency between atmosphere, ocean, and ice components. We apply the self-organizing map technique to weekly running means of daily sensible heat fluxes to identify Arctic sensible heat flux anomaly patterns and the associated atmospheric response, without the need for metrics to define the Arctic forcing or measure the midlatitude response. We find that low-level warm anomalies during autumn can build planetary wave patterns that propagate downstream into the midlatitudes, creating robust surface cold anomalies in the eastern United States.
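    The self-organizing map (SOM) technique named in the abstract can be illustrated with a minimal one-dimensional, pure-Python version: map nodes compete for each input sample, and the winner plus its neighbours move toward it. The toy scalar "anomaly" data and all parameters below are invented; the study applied SOMs to full two-dimensional flux anomaly fields.

```python
import math
import random

def train_som(samples, n_nodes=5, epochs=50, lr=0.3, sigma=1.0, seed=0):
    """Train a 1-D SOM on scalar samples: for each sample, find the closest
    node (winner) and pull the winner and its neighbours toward the sample,
    weighted by a Gaussian neighbourhood in map space."""
    rng = random.Random(seed)
    nodes = [rng.random() for _ in range(n_nodes)]
    for _ in range(epochs):
        for x in samples:
            winner = min(range(n_nodes), key=lambda i: abs(nodes[i] - x))
            for i in range(n_nodes):
                h = math.exp(-((i - winner) ** 2) / (2 * sigma ** 2))
                nodes[i] += lr * h * (x - nodes[i])
    return sorted(nodes)

# Two clusters of toy "flux anomalies"; the trained map spreads across both.
data = [0.1, 0.12, 0.09, 0.9, 0.88, 0.93]
print(train_som(data))
```

    After training, distinct nodes specialize on the low and high clusters, which is how a SOM extracts characteristic anomaly patterns without predefined metrics.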

  19. Programs OPTMAN and SHEMMAN version 5 (1998). Coupled channels optical model and collective nuclear structure calculation

    Energy Technology Data Exchange (ETDEWEB)

    Sukhovitskii, E.Sh.; Porodzinskii, Y.V.; Iwamoto, Osamu; Chiba, Satoshi; Shibata, Keiichi [Japan Atomic Energy Research Inst., Tokai, Ibaraki (Japan). Tokai Research Establishment

    1998-05-01

    Program OPTMAN has been developed as a tool for optical model calculations and is employed in nuclear data evaluation at the Radiation Physics and Chemistry Problems Institute. The code has been continuously improved to incorporate a number of options for more than twenty years. For the last three years it was successfully applied to the evaluation of minor-actinide nuclear data under a contract with the International Science and Technology Center, with Japan as the financing party. This code is now installed on PCs and UNIX workstations by the authors at the Nuclear Data Center of JAERI, as is the program SHEMMAN, which is used for the determination of nuclear Hamiltonian parameters. This report is intended as a brief manual of these codes for the users at JAERI. (author)

  20. Offshore Wind Guidance Document: Oceanography and Sediment Stability (Version 1) Development of a Conceptual Site Model.

    Energy Technology Data Exchange (ETDEWEB)

    Roberts, Jesse D.; Magalen, Jason; Jones, Craig

    2014-06-01

    This guidance document provides the reader with an overview of the key environmental considerations for a typical offshore wind coastal location and the tools to help guide the reader through a thorough planning process. It will enable readers to identify the key coastal processes relevant to their offshore wind site and perform pertinent analysis to guide siting and layout design, with the goal of minimizing costs associated with planning, permitting, and long-term maintenance. The document highlights site characterization and assessment techniques for evaluating spatial patterns of sediment dynamics in the vicinity of a wind farm under typical, extreme, and storm conditions. Finally, the document describes the assimilation of all of this information into the conceptual site model (CSM) to aid the decision-making processes.

  1. Theoretical modelling of epigenetically modified DNA sequences [version 2; referees: 2 approved]

    Directory of Open Access Journals (Sweden)

    Alexandra Teresa Pires Carvalho

    2015-05-01

    Full Text Available We report herein a set of calculations designed to examine the effects of epigenetic modifications on the structure of DNA. The incorporation of methyl, hydroxymethyl, formyl and carboxy substituents at the 5-position of cytosine is shown to hardly affect the geometry of CG base pairs, but to result in rather larger changes to hydrogen-bond and stacking binding energies, as predicted by dispersion-corrected density functional theory (DFT) methods. The same modifications within double-stranded GCG and ACA trimers exhibit rather larger structural effects, when including the sugar-phosphate backbone as well as sodium counterions and implicit aqueous solvation. In particular, changes are observed in the buckle and propeller angles within base pairs and the slide and roll values of base pair steps, but these leave the overall helical shape of DNA essentially intact. The structures so obtained are useful as a benchmark of faster methods, including molecular mechanics (MM) and hybrid quantum mechanics/molecular mechanics (QM/MM) methods. We show that previously developed MM parameters satisfactorily reproduce the trimer structures, as do QM/MM calculations which treat bases with dispersion-corrected DFT and the sugar-phosphate backbone with AMBER. The latter are improved by inclusion of all six bases in the QM region, since a truncated model including only the central CG base pair in the QM region is considerably further from the DFT structure. This QM/MM method is then applied to a set of double-stranded DNA heptamers derived from a recent X-ray crystallographic study, whose size puts a DFT study beyond our current computational resources. These data show that still larger structural changes are observed than in base pairs or trimers, leading us to conclude that it is important to model epigenetic modifications within realistic molecular contexts.

  2. Hybrid2: The hybrid system simulation model, Version 1.0, user manual

    Energy Technology Data Exchange (ETDEWEB)

    Baring-Gould, E.I.

    1996-06-01

    In light of the large-scale demand for energy in remote communities, especially in the developing world, the need for a detailed long-term performance prediction model for hybrid power systems was recognized. To meet these ends, engineers from the National Renewable Energy Laboratory (NREL) and the University of Massachusetts (UMass) have spent the last three years developing the Hybrid2 software. The Hybrid2 code provides a means to conduct long-term, detailed simulations of the performance of a large array of hybrid power systems. This work acts as an introduction and user's manual to the Hybrid2 software. The manual describes the Hybrid2 code, what is included with the software, and instructs the user on the structure of the code. The manual also describes some of the major features of the Hybrid2 code as well as how to create projects and run hybrid system simulations. The Hybrid2 code test program is also discussed. Although every attempt has been made to make the Hybrid2 code easy to understand and use, this manual will allow many organizations to consider the long-term advantages of using hybrid power systems instead of conventional petroleum-based systems for remote power generation.

  3. Sensitivity of precipitation to parameter values in the community atmosphere model version 5

    Energy Technology Data Exchange (ETDEWEB)

    Johannesson, Gardar; Lucas, Donald; Qian, Yun; Swiler, Laura Painton; Wildey, Timothy Michael

    2014-03-01

    One objective of the Climate Science for a Sustainable Energy Future (CSSEF) program is to develop the capability to thoroughly test and understand the uncertainties in the overall climate model and its components as they are being developed. The focus on uncertainties involves sensitivity analysis: the capability to determine which input parameters have a major influence on the output responses of interest. This report presents some initial sensitivity analysis results performed by Lawrence Livermore National Laboratory (LLNL), Sandia National Laboratories (SNL), and Pacific Northwest National Laboratory (PNNL). In the 2011-2012 timeframe, these laboratories worked in collaboration to perform sensitivity analyses of a set of CAM5 2° runs, where the response metrics of interest were precipitation metrics. The three labs performed their sensitivity analysis (SA) studies separately and then compared results. Overall, the results were quite consistent with each other although the methods used were different. This exercise provided a robustness check of the global sensitivity analysis metrics and identified some strongly influential parameters.

  4. Variational assimilation of land surface temperature within the ORCHIDEE Land Surface Model Version 1.2.6

    Science.gov (United States)

    Benavides Pinjosovsky, Hector Simon; Thiria, Sylvie; Ottlé, Catherine; Brajard, Julien; Badran, Fouad; Maugis, Pascal

    2017-01-01

    The SECHIBA module of the ORCHIDEE land surface model describes the exchanges of water and energy between the surface and the atmosphere. In the present paper, the adjoint semi-generator software called YAO was used as a framework to implement a 4D-VAR assimilation scheme of observations in SECHIBA. The objective was to deliver the adjoint model of SECHIBA (SECHIBA-YAO) obtained with YAO to provide an opportunity for scientists and end users to perform their own assimilation. SECHIBA-YAO allows the control of the 11 most influential internal parameters of the soil water content, by observing the land surface temperature or remote sensing data such as the brightness temperature. The paper presents the fundamental principles of the 4D-VAR assimilation, the semi-generator software YAO and a large number of experiments showing the accuracy of the adjoint code in different conditions (sites, PFTs, seasons). In addition, a distributed version is available for the case in which only the land surface temperature is observed.
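
    The 4D-VAR scheme described here minimizes a cost function combining a background term and the misfit to observations over an assimilation window. The following is a minimal, self-contained sketch of that idea using a toy scalar parameter and forward model, not SECHIBA-YAO itself; all names and values are illustrative.

```python
import numpy as np
from scipy.optimize import minimize

def model(p, times):
    # Toy forward model: a quantity relaxing toward the parameter p.
    return p * (1.0 - np.exp(-0.5 * times))

def cost_4dvar(p, p_b, sigma_b, times, obs, sigma_o):
    """Background term plus observation misfit summed over the window."""
    p = np.asarray(p).reshape(-1)[0]  # minimize passes a length-1 array
    jb = ((p - p_b) / sigma_b) ** 2
    jo = np.sum(((obs - model(p, times)) / sigma_o) ** 2)
    return jb + jo

times = np.linspace(0.0, 10.0, 25)
truth = 3.0
rng = np.random.default_rng(0)
obs = model(truth, times) + rng.normal(0.0, 0.05, times.size)

# Start from the background value p_b = 2.0 and assimilate the window.
res = minimize(cost_4dvar, x0=2.0, args=(2.0, 1.0, times, obs, 0.05))
print(round(res.x[0], 2))  # close to the true value 3.0
```

    Because the observation errors are small relative to the background uncertainty, the analysis is pulled almost entirely toward the observations.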

  5. Fuel Cell Power Model Version 2: Startup Guide, System Designs, and Case Studies. Modeling Electricity, Heat, and Hydrogen Generation from Fuel Cell-Based Distributed Energy Systems

    Energy Technology Data Exchange (ETDEWEB)

    Steward, D.; Penev, M.; Saur, G.; Becker, W.; Zuboy, J.

    2013-06-01

    This guide helps users get started with the U.S. Department of Energy/National Renewable Energy Laboratory Fuel Cell Power (FCPower) Model Version 2, which is a Microsoft Excel workbook that analyzes the technical and economic aspects of high-temperature fuel cell-based distributed energy systems with the aim of providing consistent, transparent, comparable results. This type of energy system would provide onsite-generated heat and electricity to large end users such as hospitals and office complexes. The hydrogen produced could be used for fueling vehicles or stored for later conversion to electricity.

  6. MIG version 0.0 model interface guidelines: Rules to accelerate installation of numerical models into any compliant parent code

    Energy Technology Data Exchange (ETDEWEB)

    Brannon, R.M.; Wong, M.K.

    1996-08-01

    A set of model interface guidelines, called MIG, is presented as a means by which any compliant numerical material model can be rapidly installed into any parent code without having to modify the model subroutines. Here, "model" usually means a material model such as one that computes stress as a function of strain, though the term may be extended to any numerical operation. "Parent code" means a hydrocode, finite element code, etc. which uses the model and enforces, say, the fundamental laws of motion and thermodynamics. MIG requires the model developer (who creates the model package) to specify model needs in a standardized but flexible way. MIG includes a dictionary of technical terms that allows developers and parent code architects to share a common vocabulary when specifying field variables. For portability, database management is the responsibility of the parent code. Input/output occurs via structured calling arguments. As much model information as possible (such as the lists of required inputs, as well as lists of precharacterized material data and special needs) is supplied by the model developer in an ASCII text file. Every MIG-compliant model also has three required subroutines to check data, to request extra field variables, and to perform model physics. To date, the MIG scheme has proven flexible in beta installations of a simple yield model, plus a more complicated viscodamage yield model, three electromechanical models, and a complicated anisotropic microcrack constitutive model. The MIG yield model has been successfully installed using identical subroutines in three vectorized parent codes and one parallel C++ code, all predicting comparable results. By maintaining one model for many codes, MIG facilitates code-to-code comparisons and reduces duplication of effort, thereby reducing the cost of installing and sharing models in diverse new codes.
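
    As a rough illustration of the three required subroutines, the Python sketch below mimics a MIG-style model package for a toy one-dimensional elastic material. All names are hypothetical; the actual MIG interface is a Fortran calling convention defined by the guidelines, not this code.

```python
# Hypothetical sketch of the three MIG-style entry points for a toy
# linear-elastic material model; names are illustrative only.

def check_data(props):
    """Validate user-supplied material properties before the run."""
    if props.get("youngs_modulus", 0.0) <= 0.0:
        raise ValueError("youngs_modulus must be positive")
    return props

def request_extra_variables():
    """Tell the parent code which extra field variables to allocate
    (the parent code, not the model, owns the database)."""
    return ["plastic_strain"]

def update_state(props, strain):
    """Model physics: stress as a function of strain (1-D Hooke's law)."""
    return props["youngs_modulus"] * strain

props = check_data({"youngs_modulus": 200e9})
print(update_state(props, 1e-4))  # ~2e7 Pa for steel-like stiffness
```

    The separation mirrors the MIG contract: validation and variable requests happen once at setup, while the physics routine is the only one called inside the parent code's time loop.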

  7. Simulating the 2012 High Plains Drought Using Three Single Column Model Versions of the Community Earth System Model (SCM-CESM)

    Science.gov (United States)

    Medina, I. D.; Denning, S.

    2014-12-01

    The impact of changes in the frequency and severity of drought on fresh water sustainability is a great concern for many regions of the world. One such location is the High Plains, where the local economy is primarily driven by fresh water withdrawals from the Ogallala Aquifer, which accounts for approximately 30% of total irrigation withdrawals from all U.S. aquifers combined. Modeling studies that focus on the feedback mechanisms that control the climate and eco-hydrology during times of drought are limited in the sense that they use conventional General Circulation Models (GCMs) with grid length scales ranging from one hundred to several hundred kilometers. Additionally, these models utilize crude statistical parameterizations of cloud processes for estimating sub-grid fluxes of heat and moisture and have a poor representation of land surface heterogeneity. For this research, we focus on the 2012 High Plains drought, and will perform numerical simulations using three single column model versions of the Community Earth System Model (SCM-CESM) at multiple sites overlying the Ogallala Aquifer for the 2010-2012 period. In the first version of SCM-CESM, CESM will be used in standard mode (Community Atmospheric Model (CAM) coupled to a single instance of the Community Land Model (CLM)), secondly, CESM will be used in Super-Parameterized mode (SP-CESM), where a cloud resolving model (CRM consists of 32 atmospheric columns) replaces the standard CAM atmospheric parameterization and is coupled to a single instance of CLM, and thirdly, CESM is used in "Multi Instance" SP-CESM mode, where an instance of CLM is coupled to each CRM column of SP-CESM (32 CRM columns coupled to 32 instances of CLM). To assess the physical realism of the land-atmosphere feedbacks simulated at each site by all versions of SCM-CESM, differences in simulated energy and moisture fluxes will be computed between years for the 2010-2012 period, and will be compared to differences calculated using

  8. User Manual for Graphical User Interface Version 2.10 with Fire and Smoke Simulation Model (FSSIM) Version 1.2

    Science.gov (United States)

    2010-05-10

    calculations, while fast, have limitations in applicability and large uncertainties in their results. CFD computations have the potential to be accurate...variables or a CFD model that uses a multitude of variables. A network representation allows for maximum physical extent of a simulation with a minimum...are separated; therefore, the floor of the upper deck and the ceiling of the lower deck are highlighted. A vertical surface would only appear as a

  9. Evaluating litter decomposition in earth system models with long-term litterbag experiments: an example using the Community Land Model version 4 (CLM4).

    Science.gov (United States)

    Bonan, Gordon B; Hartman, Melannie D; Parton, William J; Wieder, William R

    2013-03-01

    Decomposition is a large term in the global carbon budget, but models of the earth system that simulate carbon cycle-climate feedbacks are largely untested with respect to litter decomposition. We tested the litter decomposition parameterization of the community land model version 4 (CLM4), the terrestrial component of the community earth system model, with data from the long-term intersite decomposition experiment team (LIDET). The LIDET dataset is a 10-year study of litter decomposition at multiple sites across North America and Central America. We performed 10-year litter decomposition simulations comparable with LIDET for 9 litter types and 20 sites in tundra, grassland, and boreal, conifer, deciduous, and tropical forest biomes using the LIDET-provided climatic decomposition index to constrain temperature and moisture effects on decomposition. We performed additional simulations with DAYCENT, a version of the CENTURY model, to ask how well an established ecosystem model matches the observations. The results show large discrepancy between the laboratory microcosm studies used to parameterize the CLM4 litter decomposition and the LIDET field study. Simulated carbon loss is more rapid than the observations across all sites, and nitrogen immobilization is biased high. Closer agreement with the observations requires much lower decomposition rates, obtained with the assumption that soil mineral nitrogen severely limits decomposition. DAYCENT better replicates the observations, for both carbon mass remaining and nitrogen, independent of nitrogen limitation. CLM4 has low soil carbon in global earth system simulations. These results suggest that this bias arises, in part, from too rapid litter decomposition. More broadly, the terrestrial biogeochemistry of earth system models must be critically tested with observations, and the consequences of particular model choices must be documented. Long-term litter decomposition experiments such as LIDET provide a real
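
    The mismatch described above can be pictured with a simple first-order decay model in which the intrinsic rate constant is scaled by a climatic decomposition index (CDI), as in LIDET-style comparisons. The rate constants and CDI below are illustrative, not LIDET or CLM4 values.

```python
import math

def mass_remaining(m0, k, cdi, years):
    """First-order litter decay with the intrinsic rate constant k
    (per year) scaled by a climatic decomposition index."""
    return m0 * math.exp(-k * cdi * years)

# A faster intrinsic rate (e.g. one derived from laboratory microcosms)
# loses far more carbon over a decade than a rate constrained to match
# field observations.
fast = mass_remaining(100.0, k=0.5, cdi=0.8, years=10)
slow = mass_remaining(100.0, k=0.2, cdi=0.8, years=10)
print(round(fast, 1), round(slow, 1))  # 1.8 vs 20.2 of 100 g remaining
```

    Even modest differences in k compound over a 10-year simulation, which is why litter mass remaining at the end of the LIDET window is such a sensitive test of the parameterization.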

  10. The global aerosol-climate model ECHAM-HAM, version 2: sensitivity to improvements in process representations

    Directory of Open Access Journals (Sweden)

    K. Zhang

    2012-10-01

    Full Text Available This paper introduces and evaluates the second version of the global aerosol-climate model ECHAM-HAM. Major changes have been brought into the model, including new parameterizations for aerosol nucleation and water uptake, an explicit treatment of secondary organic aerosols, modified emission calculations for sea salt and mineral dust, the coupling of aerosol microphysics to a two-moment stratiform cloud microphysics scheme, and alternative wet scavenging parameterizations. These revisions extend the model's capability to represent details of the aerosol lifecycle and its interaction with climate. Nudged simulations of the year 2000 are carried out to compare the aerosol properties and global distribution in HAM1 and HAM2, and to evaluate them against various observations. Sensitivity experiments are performed to help identify the impact of each individual update in model formulation.

    Results indicate that from HAM1 to HAM2 there is a marked weakening of aerosol water uptake in the lower troposphere, reducing the total aerosol water burden from 75 Tg to 51 Tg. The main reason is that the newly introduced κ-Köhler-theory-based water uptake scheme uses a lower value for the maximum relative humidity cutoff. Particulate organic matter loading in HAM2 is considerably higher in the upper troposphere, because the explicit treatment of secondary organic aerosols allows highly volatile oxidation products of the precursors to be vertically transported to regions of very low temperature and to form aerosols there. Sulfate, black carbon, particulate organic matter and mineral dust in HAM2 have longer lifetimes than in HAM1 because of weaker in-cloud scavenging, which is in turn related to lower autoconversion efficiency in the newly introduced two-moment cloud microphysics scheme. Modification in the sea salt emission scheme causes a significant increase in the ratio (from 1.6 to 7.7) between accumulation mode and coarse mode emission fluxes of
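
    The sensitivity of aerosol water to the relative humidity cutoff can be sketched with single-parameter κ-Köhler theory: neglecting the Kelvin (curvature) term, the equilibrium diameter growth factor follows directly from the hygroscopicity parameter κ and the relative humidity. The κ value and cutoffs below are illustrative, not the ECHAM-HAM settings.

```python
def growth_factor(kappa, rh):
    """Hygroscopic diameter growth factor from single-parameter
    kappa-Koehler theory, neglecting the Kelvin term, so the water
    activity is approximated by the relative humidity (0 < rh < 1)."""
    return (1.0 + kappa * rh / (1.0 - rh)) ** (1.0 / 3.0)

# Sulfate-like particle (kappa ~ 0.6) at two humidity cutoffs:
print(round(growth_factor(0.6, 0.90), 2))
print(round(growth_factor(0.6, 0.95), 2))
```

    Because uptake grows rapidly as RH approaches saturation, even a small reduction in the maximum RH cutoff substantially reduces the simulated aerosol water burden, consistent with the HAM1-to-HAM2 change described above.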

  11. ADVANCED UTILITY SIMULATION MODEL, REPORT OF SENSITIVITY TESTING, CALIBRATION, AND MODEL OUTPUT COMPARISONS (VERSION 3.0)

    Science.gov (United States)

    The report gives results of activities relating to the Advanced Utility Simulation Model (AUSM): sensitivity testing. comparison with a mature electric utility model, and calibration to historical emissions. The activities were aimed at demonstrating AUSM's validity over input va...

  12. Updating sea spray aerosol emissions in the Community Multiscale Air Quality (CMAQ) model version 5.0.2

    Directory of Open Access Journals (Sweden)

    B. Gantt

    2015-05-01

    Full Text Available Sea spray aerosols (SSA) impact the particle mass concentration and gas-particle partitioning in coastal environments, with implications for human and ecosystem health. Despite their importance, the emission magnitude of SSA remains highly uncertain with global estimates varying by nearly two orders of magnitude. In this study, the Community Multiscale Air Quality (CMAQ) model was updated to enhance fine mode SSA emissions, include sea surface temperature (SST) dependency, and reduce coastally-enhanced emissions. Predictions from the updated CMAQ model and those of the previous release version, CMAQv5.0.2, were evaluated using several regional and national observational datasets in the continental US. The updated emissions generally reduced model underestimates of sodium, chloride, and nitrate surface concentrations for an inland site of the Bay Regional Atmospheric Chemistry Experiment (BRACE) near Tampa, Florida. Including SST-dependency in the SSA emission parameterization led to increased sodium concentrations in the southeast US and decreased concentrations along parts of the Pacific coast and northeastern US. The influence of sodium on the gas-particle partitioning of nitrate resulted in higher nitrate particle concentrations in many coastal urban areas due to increased condensation of nitric acid in the updated simulations, potentially affecting the predicted nitrogen deposition in sensitive ecosystems. Application of the updated SSA emissions to the California Research at the Nexus of Air Quality and Climate Change (CalNex) study period resulted in modest improvement in the predicted surface concentration of sodium and nitrate at several central and southern California coastal sites. This SSA emission update enabled a more realistic simulation of the atmospheric chemistry in environments where marine air mixes with urban pollution.

  13. Study of the Eco-Economic Indicators by Means of the New Version of the Merge Integrated Model. Part 1

    Directory of Open Access Journals (Sweden)

    Boris Vadimovich Digas

    2015-12-01

    Full Text Available One of the most pressing issues of the day is forecasting climatic change and mitigating its consequences. The official position, reflected in the Climate Doctrine of the Russian Federation, recognizes the need for a state-level approach to climatic problems and related issues on the basis of comprehensive scientific analysis of ecological, economic and social factors. For this purpose, integrated assessment models of an interdisciplinary character are employed; their strength lies in the ability to construct and test various dynamic scenarios of complex systems. The main purposes of the computing experiments described in the article are a review of the consequences of Russia's hypothetical participation in greenhouse gas reduction initiatives such as the Kyoto Protocol, and the approbation of one method for calculating the green GDP, which represents the efficiency of environmental management in the model. To implement these goals, the MERGE optimization model is used; its classical version is intended for quantitative estimation of the results of applying nature protection strategies. The components of the model are the eco-power module, the climatic module and the module of loss estimates. The work pays particular attention to adapting the MERGE model to the current state of the world economy under a complicated geopolitical situation, and to introducing a new component to the model that implements a simplified method for calculating the green GDP. The draft scenario conditions and key macroeconomic forecast parameters of the socio-economic development of Russia for 2016 and the planning period 2017−2018, prepared by the Ministry of Economic Development of the Russian Federation, are used as the basic source of input data for the analysis of possible trajectories of Russia's economic development.

  14. Study of the Eco-Economic Indicators by Means of the New Version of the Merge Integrated Model Part 2

    Directory of Open Access Journals (Sweden)

    Boris Vadimovich Digas

    2016-03-01

    Full Text Available One of the most pressing issues of the day is forecasting climatic change and mitigating its consequences. The official position, reflected in the Climate Doctrine of the Russian Federation, recognizes the need for a state-level approach to climatic problems and related issues on the basis of comprehensive scientific analysis of ecological, economic and social factors. For this purpose, integrated assessment models of an interdisciplinary character are employed; their strength lies in the ability to construct and test various dynamic scenarios of complex systems. The main purposes of the computing experiments described in the article are a review of the consequences of Russia's hypothetical participation in greenhouse gas reduction initiatives such as the Kyoto Protocol, and the approbation of one method for calculating the green gross domestic product, which represents the efficiency of environmental management in the model. To implement these goals, the MERGE optimization model is used; its classical version is intended for quantitative estimation of the results of applying nature protection strategies. The components of the model are the eco-power module, the climatic module and the module of loss estimates. The work pays particular attention to adapting the MERGE model to the current state of the world economy under a complicated geopolitical situation, and to introducing a new component to the model that implements a simplified method for calculating the green gross domestic product. The draft scenario conditions and key macroeconomic forecast parameters of the socio-economic development of Russia for 2016 and the planning period 2017−2018, prepared by the Ministry of Economic Development of the Russian Federation, are used as the basic source of input data for the analysis of possible trajectories of Russia's economic development.

  15. VELMA Ecohydrological Model, Version 2.0 -- Analyzing Green Infrastructure Options for Enhancing Water Quality and Ecosystem Service Co-Benefits

    Science.gov (United States)

    This 2-page factsheet describes an enhanced version (2.0) of the VELMA eco-hydrological model. VELMA – Visualizing Ecosystem Land Management Assessments – has been redesigned to assist communities, land managers, policy makers and other decision makers in evaluating the effecti...

  17. Hierarchical linear modeling of California Verbal Learning Test--Children's Version learning curve characteristics following childhood traumatic head injury.

    Science.gov (United States)

    Warschausky, Seth; Kay, Joshua B; Chi, PaoLin; Donders, Jacobus

    2005-03-01

    California Verbal Learning Test-Children's Version (CVLT-C) indices have been shown to be sensitive to the neurocognitive effects of traumatic brain injury (TBI). The effects of TBI on the learning process were examined with a growth curve analysis of CVLT-C raw scores across the 5 learning trials. The sample with a history of TBI comprised 86 children, ages 6-16 years, at a mean of 10.0 (SD=19.5) months postinjury; 37.2% had severe injury, 27.9% moderate, and 34.9% mild. The best-fit model for verbal learning was a quadratic function. Greater TBI severity was associated with a lower rate of acquisition and more gradual deceleration in the rate of acquisition. Intelligence test index scores, previously shown to be sensitive to severity of TBI, were positively correlated with rate of acquisition. Results provide evidence that the CVLT-C learning slope is not a simple linear function and further support for specific effects of TBI on verbal learning. ((c) 2005 APA, all rights reserved).
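
    The quadratic growth-curve idea can be illustrated by fitting a second-order polynomial to per-trial recall scores. The scores below are hypothetical, not data from the study; in the hierarchical analysis these coefficients would additionally vary by child and be predicted by injury severity.

```python
import numpy as np

# Hypothetical recall scores for one child across the five CVLT-C
# learning trials; gains shrink from trial to trial, so the curve
# is concave down rather than linear.
trials = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
scores = np.array([4.0, 7.0, 9.0, 10.0, 10.5])

# Fit score = a*trial^2 + b*trial + c: b reflects the rate of
# acquisition and the negative a its deceleration across trials.
a, b, c = np.polyfit(trials, scores, deg=2)
print(a < 0, b > 0)  # concave-down learning curve
```

    A purely linear fit would miss exactly the deceleration term that distinguished severity groups in the study.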

  18. An interactive code (NETPATH) for modeling NET geochemical reactions along a flow PATH, version 2.0

    Science.gov (United States)

    Plummer, L. Niel; Prestemon, Eric C.; Parkhurst, David L.

    1994-01-01

    NETPATH is an interactive Fortran 77 computer program used to interpret net geochemical mass-balance reactions between an initial and final water along a hydrologic flow path. Alternatively, NETPATH computes the mixing proportions of two to five initial waters and net geochemical reactions that can account for the observed composition of a final water. The program utilizes previously defined chemical and isotopic data for waters from a hydrochemical system. For a set of mineral and (or) gas phases hypothesized to be the reactive phases in the system, NETPATH calculates the mass transfers in every possible combination of the selected phases that accounts for the observed changes in the selected chemical and (or) isotopic compositions observed along the flow path. The calculations are of use in interpreting geochemical reactions, mixing proportions, evaporation and (or) dilution of waters, and mineral mass transfer in the chemical and isotopic evolution of natural and environmental waters. Rayleigh distillation calculations are applied to each mass-balance model that satisfies the constraints to predict carbon, sulfur, nitrogen, and strontium isotopic compositions at the end point, including radiocarbon dating. DB is an interactive Fortran 77 computer program used to enter analytical data into NETPATH, and calculate the distribution of species in aqueous solution. This report describes the types of problems that can be solved, the methods used to solve problems, and the features available in the program to facilitate these solutions. Examples are presented to demonstrate most of the applications and features of NETPATH. The codes DB and NETPATH can be executed in the UNIX or DOS environment. This report replaces U.S. Geological Survey Water-Resources Investigations Report 91-4078, by Plummer and others, which described the original release of NETPATH, version 1.0 (dated December, 1991), and documents revisions and enhancements that are included in version 2.0.
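
    The core mass-balance calculation can be sketched as a small linear system: each column of the coefficient matrix holds the elemental stoichiometry of a candidate phase, and the right-hand side is the observed change in element totals between the initial and final water. The phases and numbers below are illustrative, not taken from the report.

```python
import numpy as np

# NETPATH-style net mass balance: find phase mass transfers x
# (mmol/kg water) such that A @ x equals the observed change in
# dissolved element totals along the flow path.
#               calcite  CO2(g)
A = np.array([[1.0,     0.0],   # Ca per mmol of phase
              [1.0,     1.0]])  # C  per mmol of phase
delta = np.array([0.8, 2.3])    # observed changes in Ca and C

x = np.linalg.solve(A, delta)
print(x)  # [calcite dissolved, CO2 ingassed] = [0.8, 1.5]
```

    NETPATH enumerates every combination of the user-selected phases that yields such a solvable system, which is why it reports multiple candidate models rather than a single answer.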

  19. EIA model documentation: World oil refining logistics demand model, "WORLD" reference manual. Version 1.1

    Energy Technology Data Exchange (ETDEWEB)

    1994-04-11

    This manual is intended primarily for use as a reference by analysts applying the WORLD model to regional studies. It also provides overview information on WORLD features of potential interest to managers and analysts. Broadly, the manual covers WORLD model features in progressively increasing detail. Section 2 provides an overview of the WORLD model, how it has evolved, what its design goals are, what it produces, and where it can be taken with further enhancements. Section 3 reviews model management covering data sources, managing over-optimization, calibration and seasonality, check-points for case construction and common errors. Section 4 describes in detail the WORLD system, including: data and program systems in overview; details of mainframe and PC program control and files; model generation, size management, debugging and error analysis; use with different optimizers; and reporting and results analysis. Section 5 provides a detailed description of every WORLD model data table, covering model controls, case and technology data. Section 6 goes into the details of WORLD matrix structure. It provides an overview, describes how regional definitions are controlled and defines the naming conventions for all model rows, columns, right-hand sides, and bounds. It also includes a discussion of the formulation of product blending and specifications in WORLD. Several Appendices supplement the main sections.

  20. Discrete-Element bonded particle Sea Ice model DESIgn, version 1.3 – model description and implementation

    Directory of Open Access Journals (Sweden)

    A. Herman

    2015-07-01

    Full Text Available This paper presents theoretical foundations, numerical implementation and examples of application of a two-dimensional Discrete-Element bonded-particle Sea Ice model DESIgn. In the model, sea ice is represented as an assemblage of objects of two types: disk-shaped "grains", and semi-elastic bonds connecting them. Grains move on the sea surface under the influence of forces from the atmosphere and the ocean, as well as interactions with surrounding grains through a direct contact (Hertzian contact mechanics) and/or through bonds. The model has an option of taking into account quasi-three-dimensional effects related to space- and time-varying curvature of the sea surface, thus enabling simulation of ice breaking due to stresses resulting from bending moments associated with surface waves. Examples of the model's application to simple sea ice deformation and breaking problems are presented, with an analysis of the influence of the basic model parameters ("microscopic" properties of grains and bonds) on the large-scale response of the modeled material. The model is written as a toolbox suitable for usage with the open-source numerical library LIGGGHTS. The code, together with full technical documentation and example input files, is freely available with this paper and on the Internet.
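
    As a sketch of the direct-contact interaction, the classical Hertzian normal force between two elastic spheres is F = (4/3) E* √R* δ^(3/2), where δ is the overlap, E* the effective modulus and R* the effective radius. The disk-shaped grains in DESIgn may use a different form of the contact law, so the following is a generic illustration with made-up parameter values.

```python
import math

def hertz_normal_force(delta, e_star, r_eff):
    """Classical Hertzian normal contact force between two elastic
    spheres, a common choice in bonded-particle DEM codes.
    delta: overlap [m], e_star: effective modulus [Pa],
    r_eff: effective radius [m]."""
    if delta <= 0.0:  # no overlap, no contact force
        return 0.0
    return (4.0 / 3.0) * e_star * math.sqrt(r_eff) * delta ** 1.5

# Effective quantities for two identical grains:
#   1/E* = (1-nu1^2)/E1 + (1-nu2^2)/E2,  1/R* = 1/R1 + 1/R2
force = hertz_normal_force(delta=1e-3, e_star=1e9, r_eff=0.5)
print(f"{force:.3e}")  # on the order of 3e4 N
```

    The nonlinear δ^(3/2) dependence is what distinguishes Hertzian contact from a simple linear spring and controls how stiffly grains respond to small overlaps.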

  1. Discrete-Element bonded-particle Sea Ice model DESIgn, version 1.3a - model description and implementation

    Science.gov (United States)

    Herman, Agnieszka

    2016-04-01

    This paper presents theoretical foundations, numerical implementation and examples of application of the two-dimensional Discrete-Element bonded-particle Sea Ice model - DESIgn. In the model, sea ice is represented as an assemblage of objects of two types: disk-shaped "grains" and semi-elastic bonds connecting them. Grains move on the sea surface under the influence of forces from the atmosphere and the ocean, as well as interactions with surrounding grains through direct contact (Hertzian contact mechanics) and/or through bonds. The model has an experimental option of taking into account quasi-three-dimensional effects related to the space- and time-varying curvature of the sea surface, thus enabling simulation of ice breaking due to stresses resulting from bending moments associated with surface waves. Examples of the model's application to simple sea ice deformation and breaking problems are presented, with an analysis of the influence of the basic model parameters ("microscopic" properties of grains and bonds) on the large-scale response of the modeled material. The model is written as a toolbox suitable for usage with the open-source numerical library LIGGGHTS. The code, together with full technical documentation and example input files, is freely available with this paper and on the Internet.

  2. Models of intestinal infection by Salmonella enterica: introduction of a new neonate mouse model [version 1; referees: 2 approved]

    Directory of Open Access Journals (Sweden)

    Marc Schulte

    2016-06-01

    Full Text Available Salmonella enterica serovar Typhimurium is a foodborne pathogen causing inflammatory disease in the intestine following diarrhea and is responsible for thousands of deaths worldwide. Many in vitro investigations using cell culture models are available, but these do not represent the real natural environment present in the intestine of infected hosts. Several in vivo animal models have been used to study the host-pathogen interaction and to unravel the immune responses and cellular processes occurring during infection. An animal model for Salmonella-induced intestinal inflammation relies on the pretreatment of mice with streptomycin. This model is of great importance but still shows limitations to investigate the host-pathogen interaction in the small intestine in vivo. Here, we review the use of mouse models for Salmonella infections and focus on a new small animal model using 1-day-old neonate mice. The neonate model enables researchers to observe infection of both the small and large intestine, thereby offering perspectives for new experimental approaches, as well as to analyze the Salmonella-enterocyte interaction in the small intestine in vivo.

  3. Isobar Model for Photoproduction of K+Sigma0 and K0Sigma+ on the Proton

    CERN Document Server

    Mart, T

    2005-01-01

    Kaon photoproduction on the proton, gamma p --> K+Sigma0 and gamma p --> K0Sigma+, has been simultaneously analyzed by using isobar models and the new SAPHIR data. The result shows that isobar models such as KAON MAID require more resonances in order to explain the data.

  4. Development of an Information Exchange format for the Observations Data Model version 2 using OGC Observations and Measurements

    Science.gov (United States)

    Valentine, D. W., Jr.; Aufdenkampe, A. K.; Horsburgh, J. S.; Hsu, L.; Lehnert, K. A.; Mayorga, E.; Song, L.; Zaslavsky, I.; Whitenack, T.

    2014-12-01

    The Observations Data Model v1 (ODMv1) schema has been utilized as the basis of hydrologic cyberinfrastructures, including the CUAHSI HIS. The first version of ODM focused on timeseries, and ultimately led to the development of OGC "WaterML2 Part 1: Timeseries", which is being proposed for development into OGC TimeseriesML. Our team has developed an ODMv2 model to address ODMv1 shortcomings and to encompass a wider community of spatially discrete, feature-based earth observations. The development process included collecting requirements from several existing Earth observation data systems: HIS, CZOData, IEDA and the EarthChem system, and IOOS. We developed ODM2 as a set of core entities with additional extension components that can be utilized. These extensions cover shared functionality (e.g. data quality, provenance) as well as specific use cases (e.g. laboratory analysis, equipment). Initially, we closely followed the Observations and Measurements (ISO 19156) conceptual model. After prototyping and reviewing the requirements, we extended the ODMv2 conceptual model to include entities that document ancillary acts that do not always produce a result, whereas in O&M acts are expected to produce a result. ODMv2 includes the core concept of an "Action", which encapsulates activities that are performed in the process of making an observation but may not produce a result. Actions, such as a sample analysis, that observe a property and produce a result are equivalent to an O&M observation; in many use cases, however, actions have no resulting observation. Examples of such actions are a site visit or sample preparation (splitting of a sample). These actions are part of a chain of actions which produces the final observation. Overall, ODMv2 generally follows the O&M conceptual model. The nearly final ODMv2 includes a core and extensions. 
    The core entities include actions, feature actions (observations), datasets (groupings), methods (procedures), sampling

  5. Investigation of the formaldehyde differential absorption cross section at high and low spectral resolution in the simulation chamber SAPHIR

    Directory of Open Access Journals (Sweden)

    T. Brauers

    2007-07-01

    Full Text Available The results from a simulation chamber study on the formaldehyde (HCHO) absorption cross section in the UV spectral region are presented. We performed 4 experiments at ambient HCHO concentrations with simultaneous measurements by two DOAS instruments in the atmosphere simulation chamber SAPHIR in Jülich. The two instruments differ in their spectral resolution, one working at 0.2 nm (broad-band, BB-DOAS), the other at 2.7 pm (high-resolution, HR-DOAS). Both instruments use dedicated multi-reflection cells to achieve long light path lengths of 960 m and 2240 m, respectively, inside the chamber. During two experiments HCHO was injected into the clean chamber by thermolysis of well-defined amounts of para-formaldehyde, reaching mixing ratios of 30 ppbV at maximum. The HCHO concentration calculated from the injection and the chamber volume agrees with the BB-DOAS measured value when the absorption cross section of Meller and Moortgat (2000) and the temperature coefficient of Cantrell (1990) are used for data evaluation. In two further experiments we produced HCHO in situ from the ozone + ethene reaction, which was intended to provide an independent means of HCHO calibration through the measurements of ozone and ethene. However, we found an unexpected deviation from the current understanding of the ozone + ethene reaction when CO was added to suppress possible oxidation of ethene by OH radicals: the reaction of the Criegee intermediate with CO could be 240 times slower than currently assumed. Based on the BB-DOAS measurements we could deduce a high-resolution cross section for HCHO, which had not been measured directly so far.
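The DOAS retrieval underlying these measurements rests on the Beer-Lambert relation for the differential optical depth, tau = sigma · n · L. A minimal sketch of the inversion; the cross-section value below is an illustrative placeholder, not the Meller and Moortgat value:

```python
def doas_number_density(optical_depth, cross_section_cm2, path_cm):
    """Invert the DOAS Beer-Lambert relation tau = sigma * n * L
    for the absorber number density n [cm^-3]."""
    return optical_depth / (cross_section_cm2 * path_cm)

# Round-trip check for ~30 ppbV HCHO over the 960 m BB-DOAS path,
# assuming a differential cross section of 5e-20 cm^2.
N_AIR = 2.46e19                        # air number density [cm^-3] near 298 K, 1 atm
n_hcho = 30e-9 * N_AIR                 # 30 ppbV expressed as a number density
tau = 5e-20 * n_hcho * 96000.0         # optical depth over 960 m (= 96000 cm)
retrieved = doas_number_density(tau, 5e-20, 96000.0)
```

The long multi-reflection paths matter because, for a fixed detectable optical depth, a longer L lowers the minimum measurable concentration proportionally.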

  6. Modeling the structure of the attitudes and belief scale 2 using CFA and bifactor approaches: Toward the development of an abbreviated version.

    Science.gov (United States)

    Hyland, Philip; Shevlin, Mark; Adamson, Gary; Boduszek, Daniel

    2014-01-01

    The Attitudes and Belief Scale-2 (ABS-2: DiGiuseppe, Leaf, Exner, & Robin, 1988. The development of a measure of rational/irrational thinking. Paper presented at the World Congress of Behavior Therapy, Edinburgh, Scotland.) is a 72-item self-report measure of evaluative rational and irrational beliefs widely used in Rational Emotive Behavior Therapy research contexts. However, little psychometric evidence exists regarding the measure's underlying factor structure. Furthermore, given the length of the ABS-2 there is a need for an abbreviated version that can be administered when there are time demands on the researcher, such as in clinical settings. This study sought to examine a series of theoretical models hypothesized to represent the latent structure of the ABS-2 within an alternative models framework using traditional confirmatory factor analysis as well as utilizing a bifactor modeling approach. Furthermore, this study also sought to develop a psychometrically sound abbreviated version of the ABS-2. Three hundred and thirteen (N = 313) active emergency service personnel completed the ABS-2. Results indicated that for each model, the application of bifactor modeling procedures improved model fit statistics, and a novel eight-factor intercorrelated solution was identified as the best fitting model of the ABS-2. However, the observed fit indices failed to satisfy commonly accepted standards. A 24-item abbreviated version was thus constructed, and an intercorrelated eight-factor solution yielded satisfactory model fit statistics. Current results support the use of a bifactor modeling approach to determining the factor structure of the ABS-2. Furthermore, results provide empirical support for the psychometric properties of the newly developed abbreviated version.

  7. Lord-Wingersky Algorithm Version 2.0 for Hierarchical Item Factor Models with Applications in Test Scoring, Scale Alignment, and Model Fit Testing.

    Science.gov (United States)

    Cai, Li

    2015-06-01

    Lord and Wingersky's (Appl Psychol Meas 8:453-461, 1984) recursive algorithm for creating summed score based likelihoods and posteriors has a proven track record in unidimensional item response theory (IRT) applications. Extending the recursive algorithm to handle multidimensionality is relatively simple, especially with fixed quadrature because the recursions can be defined on a grid formed by direct products of quadrature points. However, the increase in computational burden remains exponential in the number of dimensions, making the implementation of the recursive algorithm cumbersome for truly high-dimensional models. In this paper, a dimension reduction method that is specific to the Lord-Wingersky recursions is developed. This method can take advantage of the restrictions implied by hierarchical item factor models, e.g., the bifactor model, the testlet model, or the two-tier model, such that a version of the Lord-Wingersky recursive algorithm can operate on a dramatically reduced set of quadrature points. For instance, in a bifactor model, the dimension of integration is always equal to 2, regardless of the number of factors. The new algorithm not only provides an effective mechanism to produce summed score to IRT scaled score translation tables properly adjusted for residual dependence, but leads to new applications in test scoring, linking, and model fit checking as well. Simulated and empirical examples are used to illustrate the new applications.
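For dichotomous items at a single quadrature point, the original Lord-Wingersky recursion that this paper extends can be sketched in a few lines. The 2PL item parameters below are illustrative, not taken from the paper:

```python
import math

def summed_score_likelihoods(probs):
    """Lord-Wingersky recursion: distribution of the summed score for
    independent dichotomous items with correct-response probabilities
    `probs`, evaluated at one fixed quadrature point theta."""
    like = [1.0]                        # P(score = 0) before any items
    for p in probs:
        new = [0.0] * (len(like) + 1)
        for s, ls in enumerate(like):
            new[s] += ls * (1.0 - p)    # item answered incorrectly
            new[s + 1] += ls * p        # item answered correctly
        like = new
    return like

def p2pl(a, b, theta=0.0):
    """2PL correct-response probability (assumed example model)."""
    return 1.0 / (1.0 + math.exp(-a * (theta - b)))

# three hypothetical items evaluated at theta = 0
dist = summed_score_likelihoods([p2pl(1.2, -0.5), p2pl(0.8, 0.0), p2pl(1.5, 0.7)])
# dist[s] = P(summed score = s | theta = 0); the entries sum to 1
```

Each item adds one pass over the current score distribution, so the cost is quadratic in the number of items at each quadrature point; the paper's contribution is keeping the number of such points small for hierarchical factor models.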

  8. Temperature and Humidity Profiles in the TqJoint Data Group of AIRS Version 6 Product for the Climate Model Evaluation

    Science.gov (United States)

    Ding, Feng; Fang, Fan; Hearty, Thomas J.; Theobald, Michael; Vollmer, Bruce; Lynnes, Christopher

    2014-01-01

    The Atmospheric Infrared Sounder (AIRS) mission is entering its 13th year of global observations of the atmospheric state, including temperature and humidity profiles, outgoing long-wave radiation, cloud properties, and trace gases. AIRS data have thus been widely used for, among other things, short-term climate research and as an observational component for model evaluation. One instance is the fifth phase of the Coupled Model Intercomparison Project (CMIP5), which uses AIRS version 5 data in climate model evaluation. The NASA Goddard Earth Sciences Data and Information Services Center (GES DISC) is the home of processing, archiving, and distribution services for data from the AIRS mission. The GES DISC, in collaboration with the AIRS Project, released data from the version 6 algorithm in early 2013. The new algorithm represents a significant improvement over previous versions in terms of greater stability, yield, and quality of products. The ongoing Earth System Grid for next generation climate model research project, a collaborative effort of GES DISC and NASA JPL, will bring in temperature and humidity profiles from AIRS version 6. The AIRS version 6 product adds a new "TqJoint" data group, which contains data for a common set of observations across water vapor and temperature at all atmospheric levels and is suitable for climate process studies. How different might the monthly temperature and humidity profiles in the "TqJoint" group be from those in the "Standard" group, where temperature and water vapor are not always valid at the same time? This study aims to answer that question by comprehensively comparing the temperature and humidity profiles from the "TqJoint" group and the "Standard" group. The comparison includes mean differences at different levels globally and over land and ocean. We are also working on examining the sampling differences between the "TqJoint" and "Standard" groups using MERRA data.

  9. Flipped versions of the universal 3-3-1 and the left-right symmetric models in [SU(3)]^3: A comprehensive approach

    Science.gov (United States)

    Rodríguez, Oscar; Benavides, Richard H.; Ponce, William A.; Rojas, Eduardo

    2017-01-01

    By considering the 3-3-1 and the left-right symmetric models as low-energy effective theories of the SU(3)C ⊗ SU(3)L ⊗ SU(3)R (for short [SU(3)]^3) gauge group, alternative versions of these models are found. The new neutral gauge bosons of the universal 3-3-1 model and its flipped versions are presented; also, the left-right symmetric model and its flipped variants are studied. Our analysis shows that there are two flipped versions of the universal 3-3-1 model, with the particularity that both of them have the same weak charges. For the left-right symmetric model, we also found two flipped versions; one of them is new in the literature and, unlike those of the 3-3-1, requires a dedicated study of its electroweak properties. For all the models analyzed, the couplings of the Z' bosons to the standard model fermions are reported. The explicit form of the null space of the vector boson mass matrix for an arbitrary Higgs tensor and gauge group is also presented. In the general framework of the [SU(3)]^3 gauge group, and by using the LHC experimental results and EW precision data, limits on the Z' mass and the mixing angle between Z and the new gauge bosons Z' are obtained. The general results call for very small mixing angles, in the range of 10^-3 radians, and M_Z' > 2.5 TeV.

  10. Statistical model of fractures and deformations zones for Forsmark. Preliminary site description Forsmark area - version 1.2

    Energy Technology Data Exchange (ETDEWEB)

    La Pointe, Paul R. [Golder Associate Inc., Redmond, WA (United States); Olofsson, Isabelle; Hermanson, Jan [Golder Associates AB, Uppsala (Sweden)

    2005-04-01

    Compared to version 1.1, a much larger amount of data especially from boreholes is available. Both one-hole interpretation and Boremap indicate the presence of high and low fracture intensity intervals in the rock mass. The depth and width of these intervals varies from borehole to borehole but these constant fracture intensity intervals are contiguous and present quite sharp transitions. There is not a consistent pattern of intervals of high fracture intensity at or near to the surface. In many cases, the intervals of highest fracture intensity are considerably below the surface. While some fractures may have occurred or been reactivated in response to surficial stress relief, surficial stress relief does not appear to be a significant explanatory variable for the observed variations in fracture intensity. Data from the high fracture intensity intervals were extracted and statistical analyses were conducted in order to identify common geological factors. Stereoplots of fracture orientation versus depth for the different fracture intensity intervals were also produced for each borehole. Moreover percussion borehole data were analysed in order to identify the persistence of these intervals throughout the model volume. The main conclusions of these analyses are the following: The fracture intensity is conditioned by the rock domain, but inside a rock domain intervals of high and low fracture intensity are identified. The intervals of high fracture intensity almost always correspond to intervals with distinct fracture orientations (whether a set, most often the NW sub-vertical set, is highly dominant, or some orientation sets are missing). These high fracture intensity intervals are positively correlated to the presence of first and second generation minerals (epidote, calcite). No clear correlation for these fracture intensity intervals has been identified between holes. Based on these results the fracture frequency has been calculated in each rock domain for the

  11. Rock mechanics modelling of rock mass properties - summary of primary data. Preliminary site description Laxemar subarea - version 1.2

    Energy Technology Data Exchange (ETDEWEB)

    Lanaro, Flavio [Berg Bygg Konsult AB, Solna (Sweden); Oehman, Johan; Fredriksson, Anders [Golder Associates AB, Uppsala (Sweden)

    2006-05-15

    The results presented in this report are the summary of the primary data for the Laxemar Site Descriptive Modelling version 1.2. At this stage, laboratory tests on intact rock and fracture samples from boreholes KSH01A, KSH02A, KAV01 (already considered in Simpevarp SDM version 1.2) and boreholes KLX02 and KLX04 were available. Concerning the mechanical properties of the intact rock, the rock type 'granite to quartz monzodiorite' or 'Aevroe granite' (code 501044) was tested for the first time within the frame of the site descriptive modelling. The average uniaxial compressive strength and Young's modulus of the granite to quartz monzodiorite are 192 MPa and 72 GPa, respectively. The crack initiation stress is observed to be 0.5 times the uniaxial compressive strength for the same rock type. Non-negligible differences are observed between the statistics of the mechanical properties of the granite to quartz monzodiorite in boreholes KLX02 and KLX04. The available data on rock fractures were analysed to determine the mechanical properties of the different fracture sets at the site (based on tilt test results) and to determine systematic differences between the results obtained with different sample preparation techniques (based on direct shear tests). The tilt tests show that there are no significant differences in the mechanical properties due to fracture orientation; thus, all fracture sets seem to have the same strength and deformability. The average peak friction angle for the Coulomb criterion of the fracture sets varies between 33.6 deg and 34.1 deg, while the average peak cohesion ranges between 0.46 and 0.52 MPa. The average Coulomb residual friction angle and cohesion vary in the ranges 28.0 deg - 29.2 deg and 0.40-0.45 MPa, respectively. The only significant difference could be observed in the average cohesion between fracture sets S{sub A} and S{sub d}. The direct shear tests show that the
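The friction and cohesion values above enter the fracture shear strength through the Coulomb criterion, tau = c + sigma_n · tan(phi). A small sketch using mid-range values from the summary; the normal stress is an arbitrary illustrative value:

```python
import math

def coulomb_shear_strength(sigma_n, cohesion, friction_deg):
    """Coulomb criterion tau = c + sigma_n * tan(phi) for a fracture.
    Stresses in MPa, friction angle phi in degrees."""
    return cohesion + sigma_n * math.tan(math.radians(friction_deg))

# Peak vs. residual shear strength at an assumed 2 MPa normal stress,
# using mid-range values of the reported parameters (c in MPa, phi in deg).
peak = coulomb_shear_strength(2.0, 0.49, 33.8)
residual = coulomb_shear_strength(2.0, 0.42, 28.6)
```

At any given normal stress the peak envelope lies above the residual one, reflecting the strength drop after the fracture has sheared.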

  12. Investigation of spatial resolution and temporal performance of SAPHIRE (scintillator avalanche photoconductor with high resolution emitter readout) with integrated electrostatic focusing

    Science.gov (United States)

    Scaduto, David A.; Lubinsky, Anthony R.; Rowlands, John A.; Kenmotsu, Hidenori; Nishimoto, Norihito; Nishino, Takeshi; Tanioka, Kenkichi; Zhao, Wei

    2014-03-01

    We have previously proposed SAPHIRE (scintillator avalanche photoconductor with high resolution emitter readout), a novel detector concept with potentially superior spatial resolution and low-dose performance compared with existing flat-panel imagers. The detector comprises a scintillator that is optically coupled to an amorphous selenium photoconductor operated with avalanche gain, known as a high-gain avalanche rushing photoconductor (HARP). High resolution electron beam readout is achieved using a field emitter array (FEA). This combination of avalanche gain, allowing for very low-dose imaging, and electron emitter readout, providing high spatial resolution, offers potentially superior image quality compared with existing flat-panel imagers, with specific applications to fluoroscopy and breast imaging. Through the present collaboration, a prototype HARP sensor with integrated electrostatic focusing and nano-Spindt FEA readout technology has been fabricated. The integrated electron-optic focusing approach is more suitable for fabricating large-area detectors. We investigate the dependence of spatial resolution on sensor structure and operating conditions, and compare the performance of electrostatic focusing with previous technologies. Our results show a clear dependence of spatial resolution on the electrostatic focusing potential, with performance approaching that of the previous design with an external mesh electrode. Further, the temporal performance (lag) of the detector is evaluated, and the results show that the integrated electrostatic focusing design exhibits performance comparable to or better than that of the mesh-electrode design. This study represents the first technical evaluation and characterization of the SAPHIRE concept with integrated electrostatic focusing.

  13. An indirect flat-panel detector with avalanche gain for low dose x-ray imaging: SAPHIRE (scintillator avalanche photoconductor with high resolution emitter readout)

    Science.gov (United States)

    Zhao, Wei; Li, Dan; Rowlands, J. A.; Egami, N.; Takiguchi, Y.; Nanba, M.; Honda, Y.; Ohkawa, Y.; Kubota, M.; Tanioka, K.; Suzuki, K.; Kawai, T.

    2008-03-01

    An indirect flat-panel imager with programmable avalanche gain and field emitter array (FEA) readout is being investigated for low-dose x-ray imaging with high resolution. It is made by optically coupling a structured x-ray scintillator, CsI(Tl), to an amorphous selenium (a-Se) avalanche photoconductor called HARP (high-gain avalanche rushing photoconductor). The charge image created by HARP is read out by electron beams generated by the FEA. The proposed detector is called SAPHIRE (Scintillator Avalanche Photoconductor with HIgh Resolution Emitter readout). The avalanche gain of HARP depends on both the a-Se thickness and the applied electric field E_Se. At E_Se > 80 V/μm, the avalanche gain can enhance the signal at low dose (e.g. fluoroscopy) and make the detector x-ray quantum noise limited down to a single x-ray photon. At high exposure (e.g. radiography), the avalanche gain can be turned off by decreasing E_Se to < 70 V/μm. In this paper the imaging characteristics of the FEA readout method, including the spatial resolution and noise, were investigated experimentally using a prototype optical HARP-FEA image sensor. The potential x-ray imaging performance of SAPHIRE, especially the aspect of programmable gain to ensure wide dynamic range and x-ray quantum noise limited performance at the lowest exposures in fluoroscopy, was investigated.

  14. BaP (PAH) air quality modelling exercise over Zaragoza (Spain) using an adapted version of WRF-CMAQ model.

    Science.gov (United States)

    San José, Roberto; Pérez, Juan Luis; Callén, María Soledad; López, José Manuel; Mastral, Ana

    2013-12-01

    Benzo(a)pyrene (BaP) is one of the most dangerous PAHs due to its highly carcinogenic and mutagenic character. For this reason, Directive 2004/107/CE of the European Union establishes a target value of 1 ng/m(3) of BaP in the atmosphere. The main aim of this paper is to estimate BaP concentrations in the atmosphere by using a last-generation air quality dispersion model with the inclusion of the transport, scavenging and deposition processes for BaP. The degradation of particulate BaP by ozone has been considered. The aerosol-gas partitioning phenomenon in the atmosphere is modelled taking into account the concentrations in both the gas and the aerosol phases; if the pre-existing organic aerosol concentrations are zero, gas/particle equilibrium is established. The model has been validated at local scale with data from a sampling campaign carried out in the area of Zaragoza (Spain) during 12 weeks.

  15. Coupling of the VAMPER permafrost model within the earth system model iLOVECLIM (version 1.0): description and validation

    Directory of Open Access Journals (Sweden)

    D. Kitover

    2014-11-01

    Full Text Available The VAMPER permafrost model has been enhanced for coupling within the iLOVECLIM earth system model of intermediate complexity by including snow thickness and active layer calculations. In addition, the coupling between iLOVECLIM and the VAMPER model includes two spatially variable maps of geothermal heat flux and generalized lithology. A semi-coupled version is validated using the modern-day extent of permafrost along with observed permafrost thickness and subsurface temperatures at selected borehole sites. The modeling runs not including the effects of snow cover overestimate the present permafrost extent; however, when the snow component is included, the extent is reduced too much overall. It was found that most of the modeled thickness values and subsurface temperatures fall within a reasonable range of the corresponding observed values. Discrepancies are due to effects not captured by the model, such as topography and organic soil layers. In addition, some discrepancy is due to disequilibrium with the current climate, meaning that some permafrost is a result of colder past states and therefore cannot be reproduced accurately with the iLOVECLIM preindustrial forcings.

  16. Modeling herring population dynamics: herring catch-at-age model version 2 = Modelisation de la dynamique des populations de hareng : modele des captures a l'age de harengs, Version 2

    National Research Council Canada - National Science Library

    Christensen, L.B; Haist, V; Schweigert, J

    2010-01-01

    The herring catch-at-age model (HCAM) is an age-structured stock assessment model developed specifically for Pacific herring which is assumed to be a multi-stock population that has experienced periods of significant fishery impact...

  17. [Measuring psychosocial stress at work in Spanish hospital's personnel. Psychometric properties of the Spanish version of Effort-Reward Imbalance model].

    Science.gov (United States)

    Macías Robles, María Dolores; Fernández-López, Juan Antonio; Hernández-Mejía, Radhamés; Cueto-Espinar, Antonio; Rancaño, Iván; Siegrist, Johannes

    2003-05-10

    Two main models are currently used to evaluate psychosocial factors at work: the Demand-Control (or job strain) model developed by Karasek and the Effort-Reward Imbalance model developed by Siegrist. A Spanish version of the first model has been validated, yet so far no validated Spanish version of the second model is available. The objective of this study was to explore the psychometric properties of the Spanish version of the Effort-Reward Imbalance model in terms of internal consistency, factorial validity, and discriminant validity. A cross-sectional study on a representative sample of 298 workers of the Spanish public hospital San Agustin in Asturias was performed. The Spanish version of the Effort-Reward Imbalance Questionnaire (23 items) was obtained by a standard forward/backward translation procedure, and the information was gathered by self-administered application. Exploratory factor analyses were performed to test the dimensional structure of the theoretical model. Cronbach's alpha coefficient was calculated to estimate internal consistency reliability. Information on discriminant validity is given for sex, age and education; differences were calculated with the t-test for two independent samples or ANOVA, respectively. Internal consistency was satisfactory for two of the scales (reward and intrinsic effort), with Cronbach's alpha coefficients higher than 0.80. The internal consistency of the extrinsic effort scale was lower (alpha = 0.63). A three-factor solution was retained for the factor analysis of reward, as expected, and these dimensions were interpreted as a) esteem, b) job promotion and salary and c) job instability. A one-factor solution was retained for the factor analysis of intrinsic effort. The factor analysis of the extrinsic effort scale did not support the expected one-dimensional structure. The analysis of discriminant validity displayed significant associations between measures of Effort-Reward Imbalance and the
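The internal-consistency figures quoted above are Cronbach's alpha coefficients, alpha = k/(k-1) · (1 - Σ var(item_i) / var(total)). A minimal sketch of the computation on toy data (not the study's responses):

```python
def cronbach_alpha(items):
    """Cronbach's alpha for `items`: a list of k item-score lists,
    one list per item, all over the same respondents."""
    k = len(items)
    n = len(items[0])

    def var(xs):                        # population variance
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / len(xs)

    totals = [sum(item[i] for item in items) for i in range(n)]
    return k / (k - 1) * (1.0 - sum(var(it) for it in items) / var(totals))
```

Perfectly redundant items push alpha toward 1, uncorrelated items push it toward 0, which is why the 0.80 threshold above signals a cohesive scale.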

  18. PVWatts Version 5 Manual

    Energy Technology Data Exchange (ETDEWEB)

    Dobos, A. P.

    2014-09-01

    The NREL PVWatts calculator is a web application developed by the National Renewable Energy Laboratory (NREL) that estimates the electricity production of a grid-connected photovoltaic system based on a few simple inputs. PVWatts combines a number of sub-models to predict overall system performance, and includes several built-in parameters that are hidden from the user. This technical reference describes the sub-models, documents assumptions and hidden parameters, and explains the sequence of calculations that yield the final system performance estimate. This reference is applicable to the significantly revised version of PVWatts released by NREL in 2014.

  19. Quantitative and qualitative assessment of diurnal variability in tropospheric humidity using SAPHIR on-board Megha-Tropiques

    Science.gov (United States)

    Uma, K. N.; Das, Siddarth Shankar

    2016-08-01

    The global diurnal variability of relative humidity (RH) from August 2012 to May 2014 is discussed for the first time using the 'Sounder for Atmospheric Profiling of Humidity in the Inter-tropical Regions (SAPHIR)', a microwave humidity sounder onboard Megha-Tropiques (MT). It is superior to other microwave satellite humidity sounders in terms of its higher repetitive cycle in the tropics, owing to its low-inclination orbit, and the availability of six dedicated humidity sounding channels. The six layers obtained are 1000-850, 850-700, 700-550, 550-400, 400-250 and 250-100 hPa. Three-hourly data over a month have been combined using equivalent day analysis to attain a composite profile of the complete diurnal cycle in each grid (2.5°×2.5°). A distinct diurnal variation is obtained over the continental and the oceanic regions at all layers. The magnitudes of the lower tropospheric humidity (LTH), middle tropospheric humidity (MTH) and upper tropospheric humidity (UTH) show a larger variability over the continental regions than over the oceans. The monthly variability of the diurnal variation over the years is also discussed by segregating the data into five different continental and four different oceanic regions. Afternoon peaks dominate in the LTH over the land and desert regions. The MTH is found to vary between the evening and the early morning hours over different geographical regions and is not as consistent as the LTH. The UTH maximum magnitude is generally observed during the early morning hours over the continents. Interestingly, the oceanic regions are found to have a dominant magnitude in the afternoon hours in the LTH, similar to the continents, an evening maximum in the MTH and an early morning maximum in the UTH. The underlying mechanisms involved in the variability of humidity over different regions are also discussed. The study reveals the complexity involved in understanding the diurnal variability over the continents and open

  20. The Nexus Land-Use model version 1.0, an approach articulating biophysical potentials and economic dynamics to model competition for land-use

    Directory of Open Access Journals (Sweden)

    F. Souty

    2012-02-01

    Full Text Available Interactions between food demand, biomass energy and forest preservation are driving both food prices and land-use changes, regionally and globally. This study presents a new model called Nexus Land-Use version 1.0 which describes these interactions through a generic representation of agricultural intensification mechanisms. The Nexus Land-Use model equations combine biophysics and economics into a single coherent framework to calculate crop yields, food prices, and resulting pasture and cropland areas within 12 regions inter-connected with each other by international trade. The representation of cropland and livestock production systems in each region relies on three components: (i) a biomass production function derived from the crop yield response function to inputs such as industrial fertilisers; (ii) a detailed representation of the livestock production system subdivided into an intensive and an extensive component; and (iii) a spatially explicit distribution of potential (maximal) crop yields prescribed from the Lund-Potsdam-Jena global vegetation model for managed land (LPJmL). The economic principles governing decisions about land-use and intensification are adapted from the Ricardian rent theory, assuming cost minimisation for farmers. The land-use modelling approach described in this paper entails several advantages. Firstly, it makes it possible to explore interactions among different types of biomass demand for food and animal feed, in a consistent approach, including indirect effects on land-use change resulting from international trade. Secondly, yield variations induced by the possible expansion of croplands on less suitable marginal lands are modelled by using regional land area distributions of potential yields, and a calculated boundary between intensive and extensive production. The model equations and parameter values are first described in detail. 
    Then, idealised scenarios exploring the impact of forest preservation policies or

  1. AERONET Version 3 processing

    Science.gov (United States)

    Holben, B. N.; Slutsker, I.; Giles, D. M.; Eck, T. F.; Smirnov, A.; Sinyuk, A.; Schafer, J.; Rodriguez, J.

    2014-12-01

    The Aerosol Robotic Network (AERONET) database has evolved in measurement accuracy, data quality, products, and availability to the scientific community over the course of 21 years with the support of NASA, PHOTONS and all federated partners. This evolution is periodically manifested as a new data version release, produced by carefully reprocessing the entire database with the most current algorithms that fundamentally change the database and ultimately the data products used by the community. The newest processing, Version 3, will be released in 2015 after the entire database is reprocessed and real-time data processing becomes operational. All V3 algorithms have been developed and individually vetted, and represent four main categories: aerosol optical depth (AOD) processing, inversion processing, database management and new products. The primary trigger for the release of V3 lies with cloud screening of the direct sun observations and computation of AOD, which will fundamentally change all data available for analysis and all subsequent retrieval products. This presentation will illustrate the innovative approach used for cloud screening and assess the elements of V3 AOD relative to the current version. We will also present the advances in inversion product processing with emphasis on the random and systematic uncertainty estimates. This processing will be applied to the new hybrid measurement scenario intended to provide inversion retrievals for all solar zenith angles. We will introduce automatic quality assurance criteria that will allow near-real-time quality-assured aerosol products necessary for real-time satellite and model validation and assimilation. Lastly, we will introduce the new management structure that will improve access to the database. The current Version 2 will be supported for at least two years after the initial release of V3 to maintain continuity for ongoing investigations.

  2. Development and analysis of some versions of the fractional-order point reactor kinetics model for a nuclear reactor with slab geometry

    Science.gov (United States)

    Vyawahare, Vishwesh A.; Nataraj, P. S. V.

    2013-07-01

    In this paper, we report the development and analysis of some novel versions and approximations of the fractional-order (FO) point reactor kinetics model for a nuclear reactor with slab geometry. A systematic development of the FO Inhour equation, the inverse FO point reactor kinetics model, and FO versions of the constant delayed neutron rate approximation model and the prompt jump approximation model is presented for the first time (for both one delayed group and six delayed groups). These models evolve from the FO point reactor kinetics model, which has been derived from the FO Neutron Telegraph Equation for neutron transport, considering subdiffusive neutron transport. Various observations and analysis results are reported, and the corresponding justifications are addressed using the subdiffusive framework for neutron transport. The FO Inhour equation is found to be a pseudo-polynomial whose degree depends on the order of the fractional derivative in the FO model. The inverse FO point reactor kinetics model is derived and used to find the reactivity variation required to achieve exponential and sinusoidal power variation in the core. The situation of sudden insertion of negative reactivity is analyzed using the FO constant delayed neutron rate approximation. Use of the FO model for representing the prompt jump in reactor power is advocated on the basis of subdiffusion. Comparison with the respective integer-order models is carried out for practical data. Also, it has been shown analytically that the integer-order models are a special case of the FO models when the order of the time derivative is one. Development of these FO models plays a crucial role in reactor theory and operation, as it is the first step towards achieving an FO control-oriented model for a nuclear reactor. The results presented here form an important step in the efforts to establish a systematic, step-by-step theory for FO modeling of a nuclear reactor.
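    As a schematic illustration of the structure described above, the classical one-delayed-group point kinetics equations and a fractional-order counterpart with a Caputo derivative of order α can be written as below. This is only a sketch: the paper's actual FO operator and coefficients follow from the FO Neutron Telegraph Equation, and the coefficients are shown here with their integer-order form for simplicity.

```latex
% Integer-order point reactor kinetics (one delayed group):
\frac{dP}{dt} = \frac{\rho(t)-\beta}{\Lambda}\,P(t) + \lambda\,C(t), \qquad
\frac{dC}{dt} = \frac{\beta}{\Lambda}\,P(t) - \lambda\,C(t)

% Schematic fractional-order counterpart (Caputo derivative, 0 < \alpha \le 1):
{}^{C}_{0}D^{\alpha}_{t}\,P(t) = \frac{\rho(t)-\beta}{\Lambda}\,P(t) + \lambda\,C(t)
```

    For α = 1 the Caputo derivative reduces to d/dt, recovering the integer-order model, consistent with the special-case result stated in the abstract.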

  3. Measurement of the reaction γp → K⁰Σ⁺ for photon energies up to 2.65 GeV with the SAPHIR detector at ELSA

    Energy Technology Data Exchange (ETDEWEB)

    Lawall, R.

    2004-01-01

    The reaction γp → K⁰Σ⁺ was measured with the SAPHIR detector at ELSA during the run periods 1997 and 1998. Results were obtained for cross sections in the photon energy range from threshold up to 2.65 GeV for all production angles, and for the Σ⁺ polarization. Emphasis has been put on the determination and reduction of the contributions of background reactions, and on the comparison with other measurements and theoretical predictions. (orig.)

  4. The Nexus Land-Use model version 1.0, an approach articulating biophysical potentials and economic dynamics to model competition for land-use

    Directory of Open Access Journals (Sweden)

    F. Souty

    2012-10-01

    Interactions between food demand, biomass energy and forest preservation are driving both food prices and land-use changes, regionally and globally. This study presents a new model called Nexus Land-Use version 1.0, which describes these interactions through a generic representation of agricultural intensification mechanisms within agricultural lands. The Nexus Land-Use model equations combine biophysics and economics into a single coherent framework to calculate crop yields, food prices, and resulting pasture and cropland areas within 12 regions inter-connected by international trade. The representation of cropland and livestock production systems in each region relies on three components: (i) a biomass production function derived from the crop yield response function to inputs such as industrial fertilisers; (ii) a detailed representation of the livestock production system subdivided into an intensive and an extensive component; and (iii) a spatially explicit distribution of potential (maximal) crop yields prescribed from the Lund-Potsdam-Jena global vegetation model for managed Land (LPJmL). The economic principles governing decisions about land use and intensification are adapted from Ricardian rent theory, assuming cost minimisation by farmers. In contrast to other land-use models linking economy and biophysics, crops are aggregated as a representative product in calories, and intensification for the representative crop is a non-linear function of chemical inputs. The model equations and parameter values are first described in detail. Then, idealised scenarios exploring the impact of forest preservation policies or rising energy prices on agricultural intensification are described, and their impacts on pasture and cropland areas are investigated.

  5. The Nexus Land-Use model version 1.0, an approach articulating biophysical potentials and economic dynamics to model competition for land-use

    Science.gov (United States)

    Souty, F.; Brunelle, T.; Dumas, P.; Dorin, B.; Ciais, P.; Crassous, R.; Müller, C.; Bondeau, A.

    2012-10-01

    Interactions between food demand, biomass energy and forest preservation are driving both food prices and land-use changes, regionally and globally. This study presents a new model called Nexus Land-Use version 1.0, which describes these interactions through a generic representation of agricultural intensification mechanisms within agricultural lands. The Nexus Land-Use model equations combine biophysics and economics into a single coherent framework to calculate crop yields, food prices, and resulting pasture and cropland areas within 12 regions inter-connected by international trade. The representation of cropland and livestock production systems in each region relies on three components: (i) a biomass production function derived from the crop yield response function to inputs such as industrial fertilisers; (ii) a detailed representation of the livestock production system subdivided into an intensive and an extensive component; and (iii) a spatially explicit distribution of potential (maximal) crop yields prescribed from the Lund-Potsdam-Jena global vegetation model for managed Land (LPJmL). The economic principles governing decisions about land use and intensification are adapted from Ricardian rent theory, assuming cost minimisation by farmers. In contrast to other land-use models linking economy and biophysics, crops are aggregated as a representative product in calories, and intensification for the representative crop is a non-linear function of chemical inputs. The model equations and parameter values are first described in detail. Then, idealised scenarios exploring the impact of forest preservation policies or rising energy prices on agricultural intensification are described, and their impacts on pasture and cropland areas are investigated.
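    The economic mechanism described above, a saturating yield response to chemical inputs and a cost-minimising choice of intensity in the spirit of Ricardian rent, can be sketched numerically. All function names, parameter values, and the exponential response form below are illustrative assumptions, not the actual Nexus Land-Use equations:

```python
import math

def yield_response(x, y_max=10.0, b=0.08):
    """Saturating crop-yield response (t/ha) to chemical input intensity x -- illustrative."""
    return y_max * (1.0 - math.exp(-b * x))

def cost_per_output(x, input_price=1.0, land_rent=2.0):
    """Production cost per unit output on one hectare: input cost plus land rent."""
    y = yield_response(x)
    return (input_price * x + land_rent) / y if y > 0 else float("inf")

def optimal_intensity(input_price=1.0, land_rent=2.0):
    """Cost-minimising input intensity, found by brute-force grid search."""
    grid = [0.1 * k for k in range(1, 1000)]
    return min(grid, key=lambda x: cost_per_output(x, input_price, land_rent))

# Scarcer (more expensive) land pushes farmers toward higher input intensity:
x_cheap_land = optimal_intensity(land_rent=1.0)
x_dear_land = optimal_intensity(land_rent=4.0)
```

    The comparison at the end illustrates the intensification logic: when land rent rises (e.g. under a forest preservation policy restricting expansion), spreading the fixed land cost over more output makes higher input use optimal.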

  6. Users' manual for LEHGC: A Lagrangian-Eulerian Finite-Element Model of Hydrogeochemical Transport Through Saturated-Unsaturated Media. Version 1.1

    Energy Technology Data Exchange (ETDEWEB)

    Yeh, Gour-Tsyh [Pennsylvania State Univ., University Park, PA (United States). Dept. of Civil and Environmental Engineering; Carpenter, S.L. [Univ. of New Mexico, Albuquerque, NM (United States). Dept. of Earth and Planetary Sciences; Hopkins, P.L.; Siegel, M.D. [Sandia National Labs., Albuquerque, NM (United States)

    1995-11-01

    The computer program LEHGC is a Hybrid Lagrangian-Eulerian Finite-Element Model of HydroGeo-Chemical (LEHGC) Transport Through Saturated-Unsaturated Media. LEHGC iteratively solves two-dimensional transport and geochemical equilibrium equations and is a descendant of HYDROGEOCHEM, a strictly Eulerian finite-element reactive transport code. The hybrid Lagrangian-Eulerian scheme improves on the Eulerian scheme by allowing larger time steps to be used in the advection-dominant transport calculations. This causes less numerical dispersion and alleviates the problem of calculated negative concentrations at sharp concentration fronts. The code also is more computationally efficient than the strictly Eulerian version. LEHGC is designed for generic application to reactive transport problems associated with contaminant transport in subsurface media. Input to the program includes the geometry of the system, the spatial distribution of finite elements and nodes, the properties of the media, the potential chemical reactions, and the initial and boundary conditions. Output includes the spatial distribution of chemical element concentrations as a function of time and space and the chemical speciation at user-specified nodes. LEHGC Version 1.1 is a modification of LEHGC Version 1.0. The modification includes: (1) devising a tracking algorithm with computational effort proportional to N, where N is the number of computational grid nodes, rather than N² as in LEHGC Version 1.0, (2) including multiple adsorbing sites and multiple ion-exchange sites, (3) using four preconditioned conjugate gradient methods for the solution of matrix equations, and (4) providing a model for some features of solute transport by colloids.
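    Item (3) above refers to preconditioned conjugate gradient (PCG) solvers for the discretized matrix equations. The four variants used by LEHGC are not specified in the abstract; as a minimal sketch of the technique, the following implements Jacobi (diagonal) preconditioned CG for a small symmetric positive-definite system:

```python
def matvec(A, x):
    """Dense matrix-vector product for A stored as a list of rows."""
    return [sum(a * xi for a, xi in zip(row, x)) for row in A]

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def pcg(A, b, tol=1e-10, max_iter=200):
    """Jacobi-preconditioned conjugate gradient for a symmetric positive-definite A."""
    n = len(b)
    x = [0.0] * n
    r = b[:]                                     # residual r = b - A x, with x = 0
    M_inv = [1.0 / A[i][i] for i in range(n)]    # Jacobi preconditioner: diag(A)^-1
    z = [mi * ri for mi, ri in zip(M_inv, r)]
    p = z[:]
    rz = dot(r, z)
    for _ in range(max_iter):
        Ap = matvec(A, p)
        alpha = rz / dot(p, Ap)
        x = [xi + alpha * pi for xi, pi in zip(x, p)]
        r = [ri - alpha * api for ri, api in zip(r, Ap)]
        if dot(r, r) ** 0.5 < tol:
            break
        z = [mi * ri for mi, ri in zip(M_inv, r)]
        rz_new = dot(r, z)
        beta = rz_new / rz
        p = [zi + beta * pi for zi, pi in zip(z, p)]
        rz = rz_new
    return x

# Small SPD test system (shaped like a 1-D diffusion stencil); exact solution is all ones:
A = [[4.0, -1.0, 0.0], [-1.0, 4.0, -1.0], [0.0, -1.0, 4.0]]
b = [3.0, 2.0, 3.0]
x = pcg(A, b)
```

    In a real finite-element code the matrix would be sparse and the preconditioner (incomplete factorization, polynomial, etc.) chosen to suit it; Jacobi is shown here only because it is the simplest member of the family.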

  7. Technical report series on global modeling and data assimilation. Volume 5: Documentation of the ARIES/GEOS dynamical core, version 2

    Science.gov (United States)

    Suarez, Max J. (Editor); Takacs, Lawrence L.

    1995-01-01

    A detailed description of the numerical formulation of Version 2 of the ARIES/GEOS 'dynamical core' is presented. This code is a nearly 'plug-compatible' dynamics for use in atmospheric general circulation models (GCMs). It is a finite difference model on a staggered latitude-longitude C-grid. It uses second-order differences for all terms except the advection of vorticity by the rotation part of the flow, which is done at fourth-order accuracy. This dynamical core is currently being used in the climate (ARIES) and data assimilation (GEOS) GCMs at Goddard.

  8. Extended-range prediction trials using the global cloud/cloud-system resolving model NICAM and its new ocean-coupled version NICOCO

    Science.gov (United States)

    Miyakawa, Tomoki

    2017-04-01

    The global cloud/cloud-system resolving model NICAM and its new fully coupled version NICOCO are run on one of the world's top-tier supercomputers, the K computer. NICOCO couples the full-3D ocean component COCO of the general circulation model MIROC using a general-purpose coupler, Jcup. We carried out multiple MJO simulations using NICAM and the new ocean-coupled version NICOCO to examine their extended-range MJO prediction skills and the impact of ocean coupling. NICAM performs excellently in terms of MJO prediction, maintaining valid skill up to 27 days after the model is initialized (Miyakawa et al. 2014). As is the case in most global models, ocean coupling frees the model from being anchored by the observed SST and allows the model climate to drift further from reality compared to the atmospheric version of the model. Thus, it is important to evaluate the model bias, and in an initial value problem such as seasonal extended-range prediction, it is essential to be able to distinguish the actual signal from the early transition of the model from the observed state to its own climatology. Since NICAM is a highly resource-demanding model, evaluation and tuning of the model climatology (on the order of years) is challenging. Here we focus on the initial 100 days to estimate the early drift of the model, and subsequently evaluate the MJO prediction skills of NICOCO. Results show that in the initial 100 days, NICOCO forms a La Niña-like SST bias compared to observation, with a warmer Maritime Continent warm pool and a cooler equatorial central Pacific. The enhanced convection over the Maritime Continent associated with this bias projects onto the real-time multivariate MJO indices (RMM; Wheeler and Hendon 2004) and contaminates the MJO skill score. However, the bias does not appear to severely degrade the MJO signal.
The model maintains a valid MJO prediction skill up to nearly 4 weeks when evaluated after linearly removing the early drift component estimated from

  9. Land Boundary Conditions for the Goddard Earth Observing System Model Version 5 (GEOS-5) Climate Modeling System: Recent Updates and Data File Descriptions

    Science.gov (United States)

    Mahanama, Sarith P.; Koster, Randal D.; Walker, Gregory K.; Takacs, Lawrence L.; Reichle, Rolf H.; De Lannoy, Gabrielle; Liu, Qing; Zhao, Bin; Suarez, Max J.

    2015-01-01

    The Earth's land surface boundary conditions in the Goddard Earth Observing System version 5 (GEOS-5) modeling system were updated using recent high spatial and temporal resolution global data products. The updates include: (i) construction of a global 10-arcsec land-ocean-lakes-ice mask; (ii) incorporation of a 10-arcsec Globcover 2009 land cover dataset; (iii) implementation of Level 12 Pfafstetter hydrologic catchments; (iv) use of hybridized SRTM global topography data; (v) construction of the HWSDv1.21-STATSGO2 merged global 30-arcsec soil mineral and carbon data in conjunction with a highly refined soil classification system; (vi) production of diffuse visible and near-infrared 8-day MODIS albedo climatologies at 30-arcsec from the period 2001-2011; and (vii) production of the GEOLAND2 and MODIS merged 8-day LAI climatology at 30-arcsec for GEOS-5. The global data sets were preprocessed and used to construct global raster data files for the software (mkCatchParam) that computes parameters on catchment-tiles for various atmospheric grids. The updates also include a few bug fixes in mkCatchParam, as well as changes (improvements in algorithms, etc.) that allow it to produce tile-space parameters efficiently for high-resolution AGCM grids. The update process also includes the construction of data files describing the vegetation type fractions, soil background albedo, nitrogen deposition and mean annual 2 m air temperature to be used with the future Catchment CN model, and the global stream channel network to be used with the future global runoff routing model. This report provides detailed descriptions of the data production process and the data file format of each updated data set.

  10. Improving the WRF model's (version 3.6.1) simulation over sea ice surface through coupling with a complex thermodynamic sea ice model (HIGHTSI)

    Science.gov (United States)

    Yao, Yao; Huang, Jianbin; Luo, Yong; Zhao, Zongci

    2016-06-01

    Sea ice plays an important role in the air-ice-ocean interaction, but it is often represented simply in many regional atmospheric models. The Noah sea ice scheme, which is the only option in the current Weather Research and Forecasting (WRF) model (version 3.6.1), has a problem of energy imbalance due to its simplified snow processes and its lack of ablation and accretion processes in ice. Validated against the Surface Heat Budget of the Arctic Ocean (SHEBA) in situ observations, Noah underestimates the sea ice temperature, with a bias that can reach -10 °C in winter. Sensitivity tests show that this bias is mainly attributed to the simulation within the ice when a time-dependent ice thickness is specified. Compared with the Noah sea ice model, the high-resolution thermodynamic snow and ice model (HIGHTSI) uses more realistic thermodynamics for snow and ice. Most importantly, HIGHTSI includes the ablation and accretion processes of sea ice and uses an interpolation method that ensures heat conservation during its integration. These allow HIGHTSI to better resolve the energy balance in the sea ice, and the bias in sea ice temperature is reduced considerably. When HIGHTSI is coupled with the WRF model, the simulation of sea ice temperature by the original Polar WRF is greatly improved. Considering the bias with reference to SHEBA observations, WRF-HIGHTSI improves the simulation of surface temperature, 2 m air temperature and surface upward long-wave radiation flux in winter by 6 °C, 5 °C and 20 W m-2, respectively. A discussion on the impact of specifying sea ice thickness in the WRF model is presented. Consistent with previous research, prescribing the sea ice thickness with observational information results in the best simulation among the available methods. If no observational information is available, we present a new method in which the sea ice thickness is initialized from empirical estimation and its further change is predicted by a complex thermodynamic

  11. Comparison of Relative Humidity obtained from SAPHIR on board Megha-Tropiques and Ground based Microwave Radiometer Profiler over an equatorial station

    Science.gov (United States)

    Renju, Ramachandran Pillai; Uma, K. N.; Krishna Moorthy, K.; Mathew, Nizy; Raju C, Suresh

    A comparison has been made between the Relative Humidity (RH, %) derived from SAPHIR on board Megha-Tropiques (MT) and that derived from ground-based multi-frequency Microwave Radiometer Profiler (MRP) observations over an equatorial station, Thiruvananthapuram (8.5° N, 76.9° E), for a one-year period. As a first step, the MRP was validated against radiosondes for two years (2010 and 2011) during the Indian monsoon period July-September. This analysis shows a wet bias below 6 km and a dry bias above. The comparison between the MRP- and MT-derived RH has been made at five altitudinal levels (0.75, 2.25, 4.0, 6.25 and 9.2 km), strictly under clear-sky conditions. The regression analysis between the two reveals very good correlation (>0.8) in the altitudinal layer from 2.25 to 6.25 km. The differences between the two observations have also been examined in terms of the percentage of occurrence between MT and the MRP at each altitudinal layer. About 70-80% of the time, the difference in RH is found to be below 10% at the first three layers. An RMSE of 2% is observed at almost all height layers. The differences have been attributed to the different measurement and retrieval techniques involved in the ground-based and satellite-based measurements. Since the MRP frequency channels are not sensitive to small water vapor variabilities above 6 km, large differences are observed there. Radiative transfer computations for the channels of both MRP and SAPHIR will be carried out to understand these variabilities.
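    The comparison metrics quoted above (mean bias, RMSE, correlation) can be computed as in the following sketch. The RH profiles are made-up illustrative numbers, not actual MRP or SAPHIR retrievals:

```python
import math

def comparison_stats(x, y):
    """Mean bias (y - x), RMSE, and Pearson correlation between two matched series."""
    n = len(x)
    bias = sum(b - a for a, b in zip(x, y)) / n
    rmse = math.sqrt(sum((b - a) ** 2 for a, b in zip(x, y)) / n)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return bias, rmse, cov / (sx * sy)

# Hypothetical RH (%) at matched altitude levels: ground-based MRP vs. satellite retrieval
mrp    = [82.0, 74.0, 61.0, 48.0, 30.0]
saphir = [80.5, 75.0, 60.0, 49.5, 28.0]
bias, rmse, r = comparison_stats(mrp, saphir)
```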

  12. ABEL model: Evaluates corporations' claims of inability to afford penalties and compliance costs (version 3.0.16). Model-simulation

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    1998-11-01

    The easy-to-use ABEL software evaluates for-profit company claims of inability to afford penalties, clean-up costs, or compliance costs. Violators raise the issue of inability to pay in most of EPA's enforcement actions, regardless of whether there is any hard evidence supporting those claims. The program enables Federal, State and local enforcement professionals to quickly determine whether there is any validity to those claims. ABEL is a tool that promotes quick settlements by performing screening analyses of defendants and potentially responsible parties (PRPs) to determine their financial capacity. After analyzing some basic financial ratios that reflect a company's solvency, ABEL assesses the firm's ability to pay by focusing on projected cash flows. The model explicitly calculates the value of projected, internally generated cash flows from historical tax information, and compares these cash flows to the proposed environmental expenditure(s). The software is extremely easy to use. Version 3.0.16 updates the standard values for inflation and discount rate.
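    The cash-flow screening logic described above can be sketched as a present-value comparison. This is illustrative only: ABEL's actual financial ratios, tax adjustments, and standard values are not reproduced here, and every parameter below is an assumption:

```python
def projected_cash_flows(base_cash_flow, growth_rate, years):
    """Project internally generated cash flows forward from a historical base year."""
    return [base_cash_flow * (1 + growth_rate) ** t for t in range(1, years + 1)]

def present_value(cash_flows, discount_rate):
    """Discount a stream of future cash flows back to today."""
    return sum(cf / (1 + discount_rate) ** t for t, cf in enumerate(cash_flows, 1))

def can_afford(base_cash_flow, penalty, growth_rate=0.02, discount_rate=0.07, years=5):
    """Screening test: does the PV of projected cash flows cover the proposed penalty?"""
    pv = present_value(projected_cash_flows(base_cash_flow, growth_rate, years),
                       discount_rate)
    return pv >= penalty

# A firm generating ~$100k/yr can plausibly absorb a $300k expenditure over 5 years,
# but not a $500k one, under these assumed growth and discount rates:
ok_small = can_afford(100_000, 300_000)
ok_large = can_afford(100_000, 500_000)
```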

  13. Reconstructions of f(T) gravity from entropy-corrected holographic and new agegraphic dark energy models in power-law and logarithmic versions

    Energy Technology Data Exchange (ETDEWEB)

    Saha, Pameli; Debnath, Ujjal [Indian Institute of Engineering Science and Technology, Department of Mathematics, Howrah (India)

    2016-09-15

    Here, we peruse cosmological usage of the most promising candidates of dark energy in the framework of f(T) gravity theory, where T represents the torsion scalar of teleparallel gravity. We reconstruct the different f(T) modified gravity models in the spatially flat Friedmann-Robertson-Walker universe according to entropy-corrected versions of the holographic and new agegraphic dark energy models in power-law and logarithmic corrections, which describe an accelerated expansion history of the universe. We conclude that the equation of state parameter of the entropy-corrected models can transit from the quintessence state to the phantom regime as indicated by recent observations or can lie entirely in the phantom region. Also, using these models, we investigate the different areas of stability with the help of the squared speed of sound. (orig.)

  14. Reconstructions of $f(T)$ Gravity from Entropy Corrected Holographic and New Agegraphic Dark Energy Models in Power-law and Logarithmic Versions

    CERN Document Server

    Saha, Pameli

    2016-01-01

    Here, we peruse cosmological usage of the most promising candidates of dark energy in the framework of $f(T)$ gravity theory. We reconstruct the different $f(T)$ modified gravity models in the spatially flat FRW universe according to entropy-corrected versions of the holographic and new agegraphic dark energy models in power-law and logarithmic corrections, which describe an accelerated expansion history of the universe. We conclude that the equation of state parameter of the entropy-corrected models can transit from the quintessence state to the phantom regime as indicated by recent observations or can lie entirely in the phantom region. Also, using these models, we investigate the different areas of stability with the help of the squared speed of sound.

  15. Reconstructions of f( T) gravity from entropy-corrected holographic and new agegraphic dark energy models in power-law and logarithmic versions

    Science.gov (United States)

    Saha, Pameli; Debnath, Ujjal

    2016-09-01

    Here, we peruse cosmological usage of the most promising candidates of dark energy in the framework of f( T) gravity theory, where T represents the torsion scalar of teleparallel gravity. We reconstruct the different f( T) modified gravity models in the spatially flat Friedmann-Robertson-Walker universe according to entropy-corrected versions of the holographic and new agegraphic dark energy models in power-law and logarithmic corrections, which describe an accelerated expansion history of the universe. We conclude that the equation of state parameter of the entropy-corrected models can transit from the quintessence state to the phantom regime as indicated by recent observations or can lie entirely in the phantom region. Also, using these models, we investigate the different areas of stability with the help of the squared speed of sound.

  16. Systematic comparison of barriers for heavy-ion fusion calculated on the basis of the double-folding model by employing two versions of nucleon-nucleon interaction

    Science.gov (United States)

    Gontchar, I. I.; Chushnyakova, M. V.

    2016-07-01

    A systematic calculation of barriers for heavy-ion fusion was performed on the basis of the double-folding model by employing two versions of an effective nucleon-nucleon interaction: the M3Y interaction and the Migdal interaction. The results of calculations by the Hartree-Fock method with the SKX coefficients were taken for the nuclear densities. The calculations reveal that the fusion barrier is higher when the Migdal interaction is employed than when the M3Y interaction is employed. In view of this, the use of the Migdal interaction in describing heavy-ion fusion is questionable.

  17. Developing and validating a tablet version of an illness explanatory model interview for a public health survey in Pune, India.

    Directory of Open Access Journals (Sweden)

    Joseph G Giduthuri

    BACKGROUND: Mobile electronic devices are replacing paper-based instruments and questionnaires for epidemiological and public health research. The elimination of a data-entry step after an interview is a notable advantage over paper, saving investigator time, decreasing the time lags in managing and analyzing data, and potentially improving data quality by removing the error-prone data-entry step. Research has not yet provided adequate evidence, however, to substantiate the claim of fewer errors for computerized interviews. METHODOLOGY: We developed an Android-based illness explanatory interview for influenza vaccine acceptance and tested the instrument in a field study in Pune, India, for feasibility and acceptability. Error rates for tablet and paper were compared with reference to a voice recording of the interview as the gold standard for assessing discrepancies. We also examined the preference of interviewers for the classical paper-based or the electronic version of the interview and compared the costs of research with both data collection devices. RESULTS: In 95 interviews with household respondents, total error rates with paper and tablet devices were nearly the same (2.01% and 1.99%, respectively). Most interviewers indicated no preference for a particular device, but those with a preference opted for tablets. The initial investment in tablet-based interviews was higher compared to paper, while the recurring costs per interview were lower with the use of tablets. CONCLUSION: An Android-based tablet version of a complex interview was developed and successfully validated. Advantages were not compromised by increased errors, and field research assistants with a preference preferred the Android device. Use of tablets may be more costly than paper for small samples and less costly for large studies.
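    Whether error rates of 2.01% and 1.99% differ meaningfully can be checked with a standard two-proportion z-test. The item counts below are hypothetical, chosen only to approximate the reported rates; the study's actual denominators are not given in the abstract:

```python
import math

def two_proportion_z(err1, n1, err2, n2):
    """z statistic for comparing two error proportions, using the pooled standard error."""
    p1, p2 = err1 / n1, err2 / n2
    p = (err1 + err2) / (n1 + n2)                      # pooled proportion
    se = math.sqrt(p * (1 - p) * (1 / n1 + 1 / n2))    # pooled standard error
    return (p1 - p2) / se

# Hypothetical counts reproducing ~2.01% vs ~1.99% error over 10,000 recorded items each:
z = two_proportion_z(201, 10_000, 199, 10_000)
```

    With |z| far below 1.96, the difference is nowhere near significance at the 5% level, consistent with the abstract's conclusion that tablet data entry did not increase errors.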

  18. Accounting for observational uncertainties in the evaluation of low latitude turbulent air-sea fluxes simulated in a suite of IPSL model versions

    Science.gov (United States)

    Servonnat, Jerome; Braconnot, Pascale; Gainusa-Bogdan, Alina

    2015-04-01

    Turbulent momentum and heat (sensible and latent) fluxes at the air-sea interface are key components of the overall energetics of the Earth's climate, and their good representation in climate models is of prime importance. In this work, we use the methodology developed by Braconnot & Frankignoul (1993) to perform a Hotelling T2 test on spatio-temporal fields (annual cycles). This statistic provides a quantitative measure, accounting for an estimate of the observational uncertainty, for the evaluation of low-latitude turbulent air-sea fluxes in a suite of IPSL model versions. The spread within the observational ensemble of turbulent flux data products assembled by Gainusa-Bogdan et al. (submitted) is used as an estimate of the observational uncertainty for the different turbulent fluxes. The methodology relies on selecting a small number of dominant variability patterns (EOFs) that are common to both the model and the observations. Consequently it focuses on the large-scale variability patterns and avoids the possibly noisy smaller scales. The results show that different versions of the IPSL coupled model share common large-scale biases, but also that skill on sea surface temperature is not necessarily directly related to skill in the representation of the different turbulent fluxes. Despite the large error bars on the observations, the test clearly distinguishes the different merits of the different model versions. The analyses of the common EOF patterns and related time series provide guidance on the major differences with the observations. This work is a first attempt to use such a statistic for the evaluation of the spatio-temporal variability of the turbulent fluxes while accounting for observational uncertainty, and it represents an efficient tool for systematic evaluation of simulated air-sea fluxes, considering both the fluxes and the related atmospheric variables. References Braconnot, P., and C. Frankignoul (1993), Testing Model
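    A minimal sketch of the Hotelling T² statistic underlying the methodology, here for just two retained EOF amplitudes and toy data (the actual analysis applies it to truncated spatio-temporal annual-cycle fields, which this does not reproduce):

```python
def hotelling_t2(samples, mu):
    """One-sample Hotelling T^2 for 2-D observations against a reference mean mu."""
    n = len(samples)
    m = [sum(s[k] for s in samples) / n for k in range(2)]
    # Unbiased sample covariance matrix of the two components
    s = [[sum((a[i] - m[i]) * (a[j] - m[j]) for a in samples) / (n - 1)
          for j in range(2)] for i in range(2)]
    det = s[0][0] * s[1][1] - s[0][1] * s[1][0]
    inv = [[s[1][1] / det, -s[0][1] / det],
           [-s[1][0] / det, s[0][0] / det]]
    d = [m[0] - mu[0], m[1] - mu[1]]
    # T^2 = n * d' S^-1 d
    return n * sum(d[i] * inv[i][j] * d[j] for i in range(2) for j in range(2))

# Toy data: leading two EOF amplitudes of simulated fields vs. two reference means
sims = [(1.1, 0.4), (0.9, 0.6), (1.3, 0.5), (1.0, 0.3), (0.8, 0.7)]
t2_close = hotelling_t2(sims, (1.0, 0.5))   # reference mean near the model mean
t2_far = hotelling_t2(sims, (3.0, -1.0))    # reference mean far from the model mean
```

    Because the statistic weights each direction by the inverse covariance, a mismatch in a low-variance direction counts for more than the same mismatch in a high-variance one, which is what lets the test fold an observational-uncertainty estimate into the model evaluation.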

  19. Can we model observed soil carbon changes from a dense inventory? A case study over England and Wales using three versions of the ORCHIDEE ecosystem model (AR5, AR5-PRIM and O-CN)

    Directory of Open Access Journals (Sweden)

    B. Guenet

    2013-07-01

    A widespread decrease of the topsoil carbon content was observed over England and Wales during the period 1978–2003 in the National Soil Inventory (NSI), amounting to a carbon loss of 4.44 Tg yr-1 over 141 550 km2. Subsequent modelling studies have shown that changes in temperature and precipitation could only account for a small part of the observed decrease, and therefore that changes in land use and management, and resulting changes in soil respiration or primary production, were the main causes. So far, none of the models used to reproduce the NSI data accounted for plant-soil interactions; they were soil carbon models only, with carbon inputs forced by data. Here, we use three different versions of a process-based coupled soil-vegetation model called ORCHIDEE, in order to separate the effect of trends in soil carbon input from that of soil carbon mineralisation induced by climate trends over 1978–2003. The first version of the model (ORCHIDEE-AR5), used for IPCC-AR5 CMIP5 Earth System simulations, is based on three soil carbon pools defined with first-order decomposition kinetics, as in the CENTURY model. The second version (ORCHIDEE-AR5-PRIM), built for this study, includes a relationship between litter carbon and decomposition rates, to reproduce a priming effect on decomposition. The last version (O-CN) takes into account N-related processes. Soil carbon decomposition in O-CN is based on CENTURY, but adds N limitations on litter decomposition. We performed regional gridded simulations with these three versions of the ORCHIDEE model over England and Wales. None of the three model versions was able to reproduce the observed NSI soil carbon trend. This suggests that either climate change is not the main driver for observed soil carbon losses, or that the ORCHIDEE model even with priming or N-effects on decomposition lacks the basic mechanisms to explain soil carbon change in response to climate, which would raise a caution flag about the ability of this
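    The pool structure described above (CENTURY-like first-order pools, optionally with a litter-dependent priming term) can be sketched as a toy annual model. Pool sizes, rate constants, the transfer fraction, and the linear priming form below are illustrative assumptions, not ORCHIDEE's parameterisation:

```python
def run_soil_carbon(years, litter_input, k=(0.8, 0.08, 0.004), priming=0.0):
    """Annual time-stepping of a three-pool, first-order soil carbon model.

    Pools: fast (litter-like), slow, passive, with decay rates k (1/yr);
    a fraction of each decay flux cascades to the next pool.
    `priming` > 0 makes the slow-pool decay rate increase with fast-pool
    (litter) carbon -- a crude stand-in for a priming relationship.
    Returns total soil carbon after `years` annual steps.
    """
    fast, slow, passive = 1.0, 10.0, 50.0   # initial stocks (arbitrary units)
    transfer = 0.3                          # fraction of decay passed down the cascade
    for _ in range(years):
        k_slow = k[1] * (1.0 + priming * fast)
        d_fast = k[0] * fast
        d_slow = k_slow * slow
        d_pass = k[2] * passive
        fast += litter_input - d_fast
        slow += transfer * d_fast - d_slow
        passive += transfer * d_slow - d_pass
    return fast + slow + passive

# More litter input raises equilibrium stocks; a priming term erodes the slow pool:
base = run_soil_carbon(200, litter_input=1.0)
primed = run_soil_carbon(200, litter_input=1.0, priming=0.1)
```

    The qualitative behaviour (priming lowering the equilibrium slow-pool stock for the same inputs) is the mechanism ORCHIDEE-AR5-PRIM adds; the real model works on gridded climate-driven fluxes rather than fixed annual inputs.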

  20. CLMT2 user's guide: A Coupled Model for Simulation of Hydraulic Processes from Canopy to Aquifer, Version 1.0

    Energy Technology Data Exchange (ETDEWEB)

    Pan, Lehua

    2006-07-26

    CLMT2 is designed to simulate the land-surface and subsurface hydrologic response to meteorological forcing. This model combines a state-of-the-art land-surface model, the NCAR Community Land Model version 3 (CLM3), with a variably saturated groundwater model, TOUGH2, through an internal interface that includes flux and state variables shared by the two submodels. Specifically, TOUGH2, in its simulation, uses infiltration, evaporation, and root-uptake rates calculated by CLM3 as source/sink terms; CLM3, in its simulation, uses saturation and capillary pressure profiles calculated by TOUGH2 as state variables. This new model, CLMT2, preserves the best aspects of both submodels: the state-of-the-art modeling capability of surface energy and hydrologic processes from CLM3 (including snow, runoff, freezing/melting, evapotranspiration, radiation, and biophysiological processes) and the more realistic physical-process-based modeling capability of subsurface hydrologic processes from TOUGH2 (including heterogeneity, three-dimensional flow, seamless combination of the unsaturated and saturated zones, and the water table). The preliminary simulation results show that the coupled model greatly improved the predictions of the water table, evapotranspiration, and surface temperature at a real watershed, as evaluated using 18 years of observed data. The new model is also ready to be coupled with an atmospheric simulation model, making it one of the first models capable of simulating hydraulic processes from the top of the atmosphere to deep ground.
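    The flux/state exchange across the internal interface can be illustrated with a toy sequential coupler: a surface scheme hands fluxes (infiltration, evapotranspiration) down, and the subsurface scheme hands a state (saturation) back up each step. All functional forms and coefficients here are invented for illustration and bear no relation to CLM3's or TOUGH2's actual physics:

```python
def surface_step(rain, soil_saturation):
    """Toy land-surface scheme (CLM3's role): split rain into infiltration and
    runoff, and let evapotranspiration scale with soil saturation."""
    infiltration = rain * (1.0 - soil_saturation)   # less room to infiltrate when wet
    et = 0.3 * soil_saturation                      # wetter soil transpires more
    return infiltration, et

def subsurface_step(storage, capacity, infiltration, et, drainage_coeff=0.05):
    """Toy subsurface scheme (TOUGH2's role): update storage from surface fluxes."""
    storage += infiltration - et - drainage_coeff * storage
    return min(max(storage, 0.0), capacity)

def couple(rain_series, capacity=100.0, storage=20.0):
    """Sequential coupling: state (saturation) passed up, fluxes passed down."""
    for rain in rain_series:
        saturation = storage / capacity      # state handed to the surface model
        infil, et = surface_step(rain, saturation)
        storage = subsurface_step(storage, capacity, infil, et)
    return storage

wet = couple([5.0] * 100)   # persistent heavy rain
dry = couple([0.5] * 100)   # persistent light rain
```

    The point of the sketch is the interface design: each submodel only sees the other through a small set of exchanged fluxes and states, which is what lets CLMT2 reuse two existing codes without merging their internals.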

  1. Phase diagrams of the corner cubic Heisenberg model and its site-diluted version on a triangular lattice: Renormalization-group treatment

    Science.gov (United States)

    Nagai, Kiyoshi

    1985-02-01

    The global phase diagrams of the corner cubic anisotropic discrete-spin Heisenberg (CH) model and its site-diluted version (dCH) on a triangular lattice are investigated through the position-space renormalization-group method of the simple Migdal-Kadanoff type. The two models include many simpler models as their subspaces, and the interrelations among these models are elucidated. The five-dimensional (5D) phase diagram of the dCH model is generated from the 3D one of the CH model by introducing 2D site-dilution operation. The structure of the 5D phase diagram and the effect of site dilution on the CH model are conveniently visualized by introducing the concept of paths in the 3D subspace. The path describes the temperature variation provided that the ratios between the interaction parameters in the original CH model are fixed. The resulting phase diagrams of the dCH model exhibit the typical three-phase coexistence of solid, liquid, and gas, and their qualitative interpretations are summarized.
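    The Migdal-Kadanoff recursion used above is easiest to see on a far simpler model than the corner cubic Heisenberg one. Below is the textbook recursion for the 2D Ising model (bond moving followed by decimation); it is offered only as an illustration of the method, not the paper's corner-cubic recursion.

    ```python
    import math

    # Migdal-Kadanoff position-space renormalization for the 2D Ising model:
    # bond moving (K -> b**(d-1) * K for length rescaling b), then decimation
    # over the intermediate spin, K' = (1/2) * ln(cosh(2 * b**(d-1) * K)).

    def mk_step(K, b=2, d=2):
        K_moved = b ** (d - 1) * K                       # bond moving
        return 0.5 * math.log(math.cosh(2.0 * K_moved))  # decimation

    def flow(K, n=30):
        """Iterate the recursion; couplings flow to 0 (disorder) or to strong
        coupling (order), separated by an unstable fixed point near K* ~ 0.30."""
        for _ in range(n):
            if K > 10.0:      # strong-coupling side; stop before cosh overflows
                return K
            K = mk_step(K)
        return K

    print(flow(0.25), flow(0.35))   # below/above the unstable fixed point
    ```

    The unstable fixed point of this one-parameter flow plays the same role as the fixed points organizing the multi-dimensional phase diagrams in the abstract: it separates the basins that renormalize toward the disordered and ordered phases.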

  2. A new version of the CNRM Chemistry-Climate Model, CNRM-CCM: description and improvements from the CCMVal-2 simulations

    Directory of Open Access Journals (Sweden)

    M. Michou

    2011-10-01

    Full Text Available This paper presents a new version of the Météo-France CNRM Chemistry-Climate Model, so-called CNRM-CCM. It includes some fundamental changes from the previous version (CNRM-ACM which was extensively evaluated in the context of the CCMVal-2 validation activity. The most notable changes concern the radiative code of the GCM, and the inclusion of the detailed stratospheric chemistry of our Chemistry-Transport model MOCAGE on-line within the GCM. A 47-yr transient simulation (1960–2006 is the basis of our analysis. CNRM-CCM generates satisfactory dynamical and chemical fields in the stratosphere. Several shortcomings of CNRM-ACM simulations for CCMVal-2 that resulted from an erroneous representation of the impact of volcanic aerosols as well as from transport deficiencies have been eliminated.

    Remaining problems concern the upper stratosphere (5 to 1 hPa where temperatures are too high, and where there are biases in the NO2, N2O5 and O3 mixing ratios. In contrast, temperatures at the tropical tropopause are too cold. These issues are addressed through the implementation of a more accurate radiation scheme at short wavelengths. Despite these problems we show that this new CNRM CCM is a useful tool to study chemistry-climate applications.

  3. Technical report series on global modeling and data assimilation. Volume 4: Documentation of the Goddard Earth Observing System (GEOS) data assimilation system, version 1

    Science.gov (United States)

    Suarez, Max J. (Editor); Pfaendtner, James; Bloom, Stephen; Lamich, David; Seablom, Michael; Sienkiewicz, Meta; Stobie, James; Dasilva, Arlindo

    1995-01-01

    This report describes the analysis component of the Goddard Earth Observing System Data Assimilation System, Version 1 (GEOS-1 DAS). The general features of the data assimilation system are outlined, followed by a thorough description of the statistical interpolation algorithm, including specification of error covariances and quality control of observations. We conclude with a discussion of the current status of development of the GEOS data assimilation system. The main components of GEOS-1 DAS are an atmospheric general circulation model and an Optimal Interpolation algorithm. The system is cycled using the Incremental Analysis Update (IAU) technique, in which analysis increments are introduced as time-independent forcing terms in a forecast model integration. The system is capable of producing dynamically balanced states without the explicit use of initialization, as well as a time-continuous representation of non-observables such as precipitation and radiational fluxes. This version of the data assimilation system was used in the five-year reanalysis project completed in April 1994 by Goddard's Data Assimilation Office (DAO). Data from this reanalysis are available from the Goddard Distributed Active Archive Center (DAAC), which is part of NASA's Earth Observing System Data and Information System (EOSDIS). For information on how to obtain these data sets, contact the Goddard DAAC at (301) 286-3209, EMAIL daac@gsfc.nasa.gov.
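    The IAU technique mentioned above can be illustrated on a toy scalar model: rather than inserting the analysis at a single time step, the analysis increment is applied as a constant forcing spread over the assimilation window, which avoids initialization shocks. The decay model, window length, and values below are invented for illustration; this is a sketch of the idea, not the GEOS-1 code.

    ```python
    # Incremental Analysis Update (IAU) on a toy scalar "model".

    def model_step(x, dt, tau=10.0):
        """One forecast step of a toy decaying model dx/dt = -x/tau."""
        return x + dt * (-x / tau)

    def iau_window(x, increment, n_steps, dt):
        """Integrate over a window, adding the analysis increment as a
        time-independent forcing tendency divided evenly over the window."""
        forcing = increment / (n_steps * dt)       # constant tendency
        for _ in range(n_steps):
            x = model_step(x, dt) + dt * forcing
        return x

    x_background = 1.0     # model first guess
    x_analysis = 1.3       # state produced by the statistical analysis
    increment = x_analysis - x_background

    x_iau = iau_window(x_background, increment, n_steps=6, dt=1.0)
    x_free = x_background
    for _ in range(6):                             # free forecast, no increment
        x_free = model_step(x_free, 1.0)
    print(x_free, x_iau)   # the IAU run ends closer to the analysis
    ```

    Because the increment enters only as a smooth tendency, the model trajectory stays dynamically consistent throughout the window, which is the property the abstract highlights.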

  4. Rapidity distribution of protons from the potential version of UrQMD model and the traditional coalescence afterburner

    CERN Document Server

    Li, Qingfeng; Wang, Xiaobao; Shen, Caiwan

    2016-01-01

    Rapidity distributions of both E895 proton data at AGS energies and NA49 net proton data at SPS energies can be described reasonably well with a potential version of the UrQMD in which mean-field potentials for both pre-formed hadrons and confined baryons are considered, with the help of a traditional coalescence afterburner in which one parameter set for both relative distance $R_0$ and relative momentum $P_0$, (3.8 fm, 0.3 GeV$/$c), is used. Because of the large cancellation between the expansion in $R_0$ and the shrinkage in $P_0$ through the Lorentz transformation, the relativistic effect in clusters has little effect on the rapidity distribution of free (net) protons. Using a Woods-Saxon-like function instead of a pure logarithmic function as seen by FOPI collaboration at SIS energies, one can fit well both the data at SIS energies and the UrQMD calculation results at AGS and SPS energies. Further, it is found that for central Au+Au or Pb+Pb collisions at top SIS, SPS and RHIC energies, the proton fracti...

  5. GENII Version 2 Users’ Guide

    Energy Technology Data Exchange (ETDEWEB)

    Napier, Bruce A.

    2004-03-08

    The GENII Version 2 computer code was developed for the Environmental Protection Agency (EPA) at Pacific Northwest National Laboratory (PNNL) to incorporate the internal dosimetry models recommended by the International Commission on Radiological Protection (ICRP) and the radiological risk estimating procedures of Federal Guidance Report 13 into updated versions of existing environmental pathway analysis models. The resulting environmental dosimetry computer codes are compiled in the GENII Environmental Dosimetry System. The GENII system was developed to provide a state-of-the-art, technically peer-reviewed, documented set of programs for calculating radiation dose and risk from radionuclides released to the environment. The codes were designed with the flexibility to accommodate input parameters for a wide variety of generic sites. Operation of a new version of the codes, GENII Version 2, is described in this report. Two versions of the GENII Version 2 code system are available, a full-featured version and a version specifically designed for demonstrating compliance with the dose limits specified in 40 CFR 61.93(a), the National Emission Standards for Hazardous Air Pollutants (NESHAPS) for radionuclides. The only differences lie in the limitation of the capabilities of the user to change specific parameters in the NESHAPS version. This report describes the data entry, accomplished via interactive, menu-driven user interfaces. Default exposure and consumption parameters are provided for both the average (population) and maximum individual; however, these may be modified by the user. Source term information may be entered as radionuclide release quantities for transport scenarios, or as basic radionuclide concentrations in environmental media (air, water, soil). For input of basic or derived concentrations, decay of parent radionuclides and ingrowth of radioactive decay products prior to the start of the exposure scenario may be considered. A single code run can

  6. Updating sea spray aerosol emissions in the Community Multiscale Air Quality (CMAQ) model version 5.0.2

    Data.gov (United States)

    U.S. Environmental Protection Agency — The uploaded data consists of the BRACE Na aerosol observations paired with CMAQ model output, the updated model's parameterization of sea salt aerosol emission size...

  7. On the Benefits of Latent Variable Modeling for Norming Scales: The Case of the "Supports Intensity Scale-Children's Version"

    Science.gov (United States)

    Seo, Hyojeong; Little, Todd D.; Shogren, Karrie A.; Lang, Kyle M.

    2016-01-01

    Structural equation modeling (SEM) is a powerful and flexible analytic tool to model latent constructs and their relations with observed variables and other constructs. SEM applications offer advantages over classical models in dealing with statistical assumptions and in adjusting for measurement error. So far, however, SEM has not been fully used…

  8. Spatial-temporal reproducibility assessment of global seasonal forecasting system version 5 model for Dam Inflow forecasting

    Science.gov (United States)

    Moon, S.; Suh, A. S.; Soohee, H.

    2016-12-01

    The GloSea5 (Global Seasonal forecasting system version 5) is provided and operated by the KMA (Korea Meteorological Administration). GloSea5 provides Forecast (FCST) and Hindcast (HCST) data, and its horizontal resolution is about 60 km (0.83° x 0.56°) in the mid-latitudes. In order to use these data in watershed-scale water management, GloSea5 needs spatial-temporal downscaling. As such, statistical downscaling was used to correct for systematic biases of variables and to improve data reliability. HCST data are provided in ensemble format, and the highest statistical correlation (R2 = 0.60, RMSE = 88.92, NSE = 0.57) of ensemble precipitation was reported for the Yongdam Dam watershed on the #6 grid. Additionally, the original GloSea5 (600.1 mm) showed the greatest difference (-26.5%) compared to observations (816.1 mm) during the summer flood season, whereas the downscaled GloSea5 showed only a -3.1% error rate. Most of the underestimated results corresponded to precipitation levels during the flood season, and the downscaled GloSea5 showed substantial restoration of precipitation levels. Per the analysis results of spatial autocorrelation using seasonal Moran's I, the spatial distribution was shown to be statistically significant. These results can reduce the uncertainty of the original GloSea5 and substantiate its spatial-temporal accuracy and validity. The spatial-temporal reproducibility assessment will play a very important role as basic data for watershed-scale water management.
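    The skill scores quoted above (R2, RMSE, NSE) are standard hydrological metrics. A minimal sketch of how RMSE and the Nash-Sutcliffe efficiency are computed for a downscaled-versus-observed series follows; the data values are invented for illustration and are not the Yongdam Dam series.

    ```python
    import numpy as np

    # RMSE and Nash-Sutcliffe efficiency (NSE) for model-vs-observed data.

    def rmse(obs, sim):
        obs, sim = np.asarray(obs, float), np.asarray(sim, float)
        return float(np.sqrt(np.mean((sim - obs) ** 2)))

    def nse(obs, sim):
        """NSE = 1 - SSE / variance about the observed mean: 1 is a perfect
        fit, 0 means the model is no better than predicting the mean."""
        obs, sim = np.asarray(obs, float), np.asarray(sim, float)
        return float(1.0 - np.sum((sim - obs) ** 2)
                     / np.sum((obs - obs.mean()) ** 2))

    obs = [120.0, 300.0, 820.0, 410.0, 150.0]   # e.g. seasonal precipitation, mm
    sim = [100.0, 340.0, 760.0, 430.0, 170.0]   # downscaled model output
    print(round(rmse(obs, sim), 1), round(nse(obs, sim), 2))
    ```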

  9. Reliability and construct validity of the Bahasa Malaysia version of transtheoretical model (TTM) questionnaire for smoking cessation and relapse among Malaysian adult.

    Science.gov (United States)

    Yasin, Siti Munira; Taib, Khairul Mizan; Zaki, Rafdzah Ahmad

    2011-01-01

    The transtheoretical model (TTM) has been used as one of the major constructs in developing effective cognitive-behavioural interventions for smoking cessation and relapse prevention in Western societies. This study aimed to examine the reliability and construct validity of the translated Bahasa Malaysia version of the TTM questionnaire among adult smokers in Klang Valley, Malaysia. The sample consisted of 40 smokers from four different worksites in Klang Valley. A 26-item TTM questionnaire was administered, and the same set was administered again one week later. The questionnaire consisted of three measures: decisional balance, temptations, and impact of smoking. Construct validity was measured by factor analysis, and reliability by Cronbach's alpha (internal consistency) and test-retest correlation. Results revealed that Cronbach's alpha coefficients for the items were: decisional balance (0.84; 0.74) and temptations (0.89; 0.54; 0.85). The values for test-retest correlation were all above 0.4. In addition, factor analysis suggested two meaningful common factors for decisional balance and three for temptations. This is consistent with the original construct of the TTM questionnaire. Overall results demonstrated that construct validity and reliability were acceptable for all items. In conclusion, the Bahasa Malaysia version of the TTM questionnaire is a reliable and valid tool in ass.
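    Cronbach's alpha, the internal-consistency coefficient reported above, is computed from a respondents-by-items score matrix. A sketch follows; the Likert-scale responses are invented for illustration, not the study's data.

    ```python
    import numpy as np

    # Cronbach's alpha: k/(k-1) * (1 - sum of item variances / variance of totals).

    def cronbach_alpha(items):
        """items: (n_respondents, k_items) array of scores."""
        items = np.asarray(items, float)
        k = items.shape[1]
        item_vars = items.var(axis=0, ddof=1)          # per-item variance
        total_var = items.sum(axis=1).var(ddof=1)      # variance of total score
        return float(k / (k - 1) * (1.0 - item_vars.sum() / total_var))

    scores = np.array([                                # rows: respondents
        [4, 5, 4, 4],                                  # cols: questionnaire items
        [2, 2, 3, 2],
        [5, 4, 5, 5],
        [3, 3, 2, 3],
        [4, 4, 4, 5],
    ])
    print(round(cronbach_alpha(scores), 2))
    ```

    Values above roughly 0.7 are conventionally taken as acceptable internal consistency, which is the benchmark against which coefficients like the 0.84 and 0.89 above are judged.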

  10. The consistency evaluation of the climate version of the Eta regional forecast model developed for regional climate downscaling

    CERN Document Server

    Pisnichenko, I A

    2007-01-01

    The regional climate model prepared from the Eta WS (workstation) forecast model has been integrated over South America with a horizontal resolution of 40 km for the period 1961-1977. The model was forced at its lateral boundaries by the outputs of HadAMP. The HadAMP data represent a simulation of the modern climate with a resolution of about 150 km. In order to prepare a regional climate model from the Eta forecast model, new blocks were added and multiple modifications and corrections were made to the original model. The climate Eta model was run on the SX-6 supercomputer. The detailed analysis of the results of the dynamical downscaling experiment includes an investigation of the consistency between the regional and AGCM models, as well as of the ability of the regional model to resolve important features of climate fields on a finer scale than that resolved by the AGCM. In this work we show the results of our investigation of the consistency of the output fields of the Eta model and HadAMP. We have analysed geo...

  11. The CSIRO Mk3L climate system model version 1.0 – Part 1: Description and evaluation

    Directory of Open Access Journals (Sweden)

    S. J. Phipps

    2011-06-01

    Full Text Available The CSIRO Mk3L climate system model is a coupled general circulation model, designed primarily for millennial-scale climate simulations and palaeoclimate research. Mk3L includes components which describe the atmosphere, ocean, sea ice and land surface, and combines computational efficiency with a stable and realistic control climatology. This paper describes the model physics and software, analyses the control climatology, and evaluates the ability of the model to simulate the modern climate.

    Mk3L incorporates a spectral atmospheric general circulation model, a z-coordinate ocean general circulation model, a dynamic-thermodynamic sea ice model and a land surface scheme with static vegetation. The source code is highly portable, and has no dependence upon proprietary software. The model distribution is freely available to the research community. A 1000-yr climate simulation can be completed in around one-and-a-half months on a typical desktop computer, with greater throughput being possible on high-performance computing facilities.

    Mk3L produces realistic simulations of the larger-scale features of the modern climate, although with some biases on the regional scale. The model also produces reasonable representations of the leading modes of internal climate variability in both the tropics and extratropics. The control state of the model exhibits a high degree of stability, with only a weak cooling trend on millennial timescales. Ongoing development work aims to improve the model climatology and transform Mk3L into a comprehensive earth system model.

  12. ECCO version 4: an integrated framework for non-linear inverse modeling and global ocean state estimation

    Directory of Open Access Journals (Sweden)

    G. Forget

    2015-10-01

    Full Text Available This paper presents the ECCO v4 non-linear inverse modeling framework and its baseline solution for the evolving ocean state over the period 1992–2011. Both components are publicly available and subjected to regular, automated regression tests. The modeling framework includes sets of global conformal grids, a global model setup, implementations of data constraints and control parameters, an interface to algorithmic differentiation, as well as a grid-independent, fully capable Matlab toolbox. The baseline ECCO v4 solution is a dynamically consistent ocean state estimate without unidentified sources of heat and buoyancy, which any interested user will be able to reproduce accurately. The solution is an acceptable fit to most data and has been found to be physically plausible in many respects, as documented here and in related publications. Users are being provided with capabilities to assess model–data misfits for themselves. The synergy between modeling and data synthesis is asserted through the joint presentation of the modeling framework and the state estimate. In particular, the inverse estimate of parameterized physics was instrumental in improving the fit to the observed hydrography, and becomes an integral part of the ocean model setup available for general use. More generally, a first assessment of the relative importance of external, parametric and structural model errors is presented. Parametric and external model uncertainties appear to be of comparable importance and dominate over structural model uncertainty. The results generally underline the importance of including turbulent transport parameters in the inverse problem.

  13. Interactive lakes in the Canadian Regional Climate Model, version 5: the role of lakes in the regional climate of North America

    Directory of Open Access Journals (Sweden)

    Bernard Dugas

    2012-02-01

    Full Text Available Two one-dimensional (1-D) column lake models have been coupled interactively with a developmental version of the Canadian Regional Climate Model. Multidecadal reanalyses-driven simulations with and without lakes revealed the systematic biases of the model and the impact of lakes on the simulated North American climate. The presence of lakes strongly influences the climate of the lake-rich region of the Canadian Shield. Due to their large thermal inertia, lakes act to dampen the diurnal and seasonal cycle of low-level air temperature. In late autumn and winter, ice-free lakes induce large sensible and latent heat fluxes, resulting in a strong enhancement of precipitation downstream of the Laurentian Great Lakes, which is referred to as the snow belt. The FLake (FL) and Hostetler (HL) lake models perform adequately for small subgrid-scale lakes and for large resolved lakes with shallow depth, located in temperate or warm climatic regions. Both lake models exhibit specific strengths and weaknesses. For example, HL simulates too rapid spring warming and too warm surface temperature, especially in large and deep lakes; FL tends to damp the diurnal cycle of surface temperature. An adaptation of 1-D lake models might be required for an adequate simulation of large and deep lakes.

  14. Steric Sea Level Change in Twentieth Century Historical Climate Simulation and IPCC-RCP8.5 Scenario Projection: A Comparison of Two Versions of FGOALS Model

    Institute of Scientific and Technical Information of China (English)

    DONG Lu; ZHOU Tianjun

    2013-01-01

    To reveal the steric sea level change in 20th century historical climate simulations and future climate change projections under the IPCC's Representative Concentration Pathway 8.5 (RCP8.5) scenario, the results of two versions of LASG/IAP's Flexible Global Ocean-Atmosphere-Land System model (FGOALS) are analyzed. Both models reasonably reproduce the mean dynamic sea level features, with a spatial pattern correlation coefficient of 0.97 with the observation. Characteristics of steric sea level changes in the 20th century historical climate simulations and RCP8.5 scenario projections are investigated. The results show that, in the 20th century, negative trends covered most parts of the global ocean. Under the RCP8.5 scenario, global-averaged steric sea level exhibits a pronounced rising trend throughout the 21st century, and the general rising trend appears in most parts of the global ocean. The magnitude of the changes in the 21st century is much larger than that in the 20th century. By the year 2100, the global-averaged steric sea level anomaly is 18 cm and 10 cm relative to the year 1850 in the second spectral version of FGOALS (FGOALS-s2) and the second grid-point version of FGOALS (FGOALS-g2), respectively. The separate contributions of the thermosteric and halosteric components from various ocean layers are further evaluated. In the 20th century, the steric sea level changes in FGOALS-s2 (FGOALS-g2) are largely attributed to the thermosteric (halosteric) component relative to the pre-industrial control run. In contrast, in the 21st century, the thermosteric component, mainly from the upper 1000 m, dominates the steric sea level change in both models under the RCP8.5 scenario. In addition, the steric sea level change in the marginal sea of China is attributed to the thermosteric component.
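    The thermosteric component discussed above is the sea level change due to thermal expansion of the water column, roughly dh = integral of alpha * dT over depth. A sketch with a constant expansion coefficient and an invented warming profile follows; in reality alpha varies with temperature, salinity, and pressure, so this is only an order-of-magnitude illustration.

    ```python
    import numpy as np

    # Thermosteric sea level change: warming expands the column without
    # adding mass. Constant alpha and the warming profile are assumptions.

    alpha = 2.0e-4                        # thermal expansion coefficient, 1/K
    z = np.linspace(0.0, 1000.0, 101)     # depth levels for the upper 1000 m, m
    dT = 1.0 * np.exp(-z / 300.0)         # warming in K, decaying with depth

    # trapezoidal integration of alpha * dT over depth
    dh = float(alpha * np.sum(0.5 * (dT[1:] + dT[:-1]) * np.diff(z)))
    print(round(dh * 100.0, 2), "cm")     # a few centimetres of steric rise
    ```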

  15. ECCO version 4: an integrated framework for non-linear inverse modeling and global ocean state estimation

    Directory of Open Access Journals (Sweden)

    G. Forget

    2015-05-01

    Full Text Available This paper presents the ECCO v4 non-linear inverse modeling framework and its baseline solution for the evolving ocean state over the period 1992–2011. Both components are publicly available and highly integrated with the MITgcm. They are both subjected to regular, automated regression tests. The modeling framework includes sets of global conformal grids, a global model setup, implementations of model-data constraints and adjustable control parameters, an interface to algorithmic differentiation, as well as a grid-independent, fully capable Matlab toolbox. The reference ECCO v4 solution is a dynamically consistent ocean state estimate (ECCO-Production, release 1 without un-identified sources of heat and buoyancy, which any interested user will be able to reproduce accurately. The solution is an acceptable fit to most data and has been found physically plausible in many respects, as documented here and in related publications. Users are being provided with capabilities to assess model-data misfits for themselves. The synergy between modeling and data synthesis is asserted through the joint presentation of the modeling framework and the state estimate. In particular, the inverse estimate of parameterized physics was instrumental in improving the fit to the observed hydrography, and becomes an integral part of the ocean model setup available for general use. More generally, a first assessment of the relative importance of external, parametric and structural model errors is presented. Parametric and external model uncertainties appear to be of comparable importance and dominate over structural model uncertainty. The results generally underline the importance of including turbulent transport parameters in the inverse problem.

  16. The new version of the Institute of Numerical Mathematics Sigma Ocean Model (INMSOM) for simulation of Global Ocean circulation and its variability

    Science.gov (United States)

    Gusev, Anatoly; Fomin, Vladimir; Diansky, Nikolay; Korshenko, Evgeniya

    2017-04-01

    In this paper, we present the improved version of the ocean general circulation sigma-model developed in the Institute of Numerical Mathematics of the Russian Academy of Sciences (INM RAS). The previous version, referred to as INMOM (Institute of Numerical Mathematics Ocean Model), is used as the oceanic component of the IPCC climate system model INMCM (Institute of Numerical Mathematics Climate Model) (Volodin et al. 2010, 2013). In addition, INMOM was the only sigma-model used for simulations according to the CORE-II scenario (Danabasoglu et al. 2014, 2016; Downes et al. 2015; Farneti et al. 2015). In general, INMOM results are comparable to those of other OGCMs and were used for investigation of climatic variations in the North Atlantic (Gusev and Diansky 2014). However, detailed analysis of some CORE-II INMOM results revealed some disadvantages of the INMOM leading to considerable errors in reproducing some ocean characteristics. For example, the mass transport in the Antarctic Circumpolar Current (ACC) was overestimated, and there were noticeable errors in reproducing the thermohaline structure of the ocean. After analysing the previous results, the new version of the OGCM was developed. It was decided to entitle it INMSOM (Institute of Numerical Mathematics Sigma Ocean Model). The new title allows one to distinguish the new model, first, from its older version, and second, from another z-model developed in the INM RAS and referred to as INMIO (Institute of Numerical Mathematics and Institute of Oceanology ocean model) (Ushakov et al. 2016). There were numerous modifications in the model, some of which are as follows. 1) Formulation of the ocean circulation problem in terms of a full free surface, taking water amount variation into account. 2) Using a tensor form of the lateral viscosity operator invariant to rotation. 3) Using isopycnal diffusion, including Gent-McWilliams mixing. 4) Using atmospheric forcing computation according to the NCAR methodology (Large and Yeager 2009). 5

  17. Technical documentation and user's guide for City-County Allocation Model (CCAM). Version 1. 0

    Energy Technology Data Exchange (ETDEWEB)

    Clark, L.T. Jr.; Scott, M.J.; Hammer, P.

    1986-05-01

    The City-County Allocation Model (CCAM) was developed as part of the Monitored Retrievable Storage (MRS) Program. The CCAM model was designed to allocate population changes forecasted by the MASTER model to specific local communities within commuting distance of the MRS facility. The CCAM model was also designed to forecast the potential changes in demand for key community services such as housing, police protection, and utilities for these communities. The CCAM model uses a flexible on-line database on demand for community services that is based on a combination of local service levels and state and national service standards. The CCAM model can be used to quickly forecast the potential community-service consequences of economic development for local communities anywhere in the country. The remainder of this document is organized as follows. The purpose of this manual is to assist the user in understanding and operating the City-County Allocation Model (CCAM). The manual explains the data sources for the model and code modifications, as well as the operational procedures.

  18. SMASS - a simulation model of physical and chemical processes in acid sulphate soils; Version 2.1

    NARCIS (Netherlands)

    Bosch, van den H.; Bronswijk, J.J.B.; Groenenberg, J.E.; Ritsema, C.J.

    1998-01-01

    The Simulation Model for Acid Sulphate Soils (SMASS) has been developed to predict the effects of water management strategies on acidification and de-acidification in areas with acid sulphate soils. It has submodels for solute transport, chemistry, oxygen transport and pyrite oxidation. The model mu

  19. A Multi-Year Plan for Enhancing Turbulence Modeling in Hydra-TH Revised and Updated Version 2.0

    Energy Technology Data Exchange (ETDEWEB)

    Smith, Thomas M. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)]; Berndt, Markus [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)]; Baglietto, Emilio [Massachusetts Inst. of Technology (MIT), Cambridge, MA (United States)]; Magolan, Ben [Massachusetts Inst. of Technology (MIT), Cambridge, MA (United States)]

    2015-10-01

    The purpose of this report is to document a multi-year plan for enhancing turbulence modeling in Hydra-TH for the Consortium for Advanced Simulation of Light Water Reactors (CASL) program. Hydra-TH is being developed to meet the high-fidelity, high-Reynolds-number, CFD-based thermal-hydraulic simulation needs of the program. This work is being conducted within the thermal hydraulics methods (THM) focus area. This report is an extension of THM CASL milestone L3:THM.CFD.P10.02 [33] (March 2015) and picks up where it left off. It will also serve to meet the requirements of CASL THM level-three milestone L3:THM.CFD.P11.04, scheduled for completion September 30, 2015. The objectives of this plan will be met by maturation of recently added turbulence models, strategic design and development of new models, and systematic and rigorous testing of existing and new models and model extensions. While multi-phase turbulent flow simulations are important to the program, only single-phase modeling is considered in this report. Large Eddy Simulation (LES) is also an important modeling methodology; however, at least in the first year, the focus is on steady-state Reynolds-Averaged Navier-Stokes (RANS) turbulence modeling.

  20. On the Renormalization of a Bosonized Version of the Chiral Fermion-Meson Model at Finite Temperature

    CERN Document Server

    Caldas, H C G

    2001-01-01

    Feynman's functional formulation of statistical mechanics is used to study the renormalizability of the well known Linear Chiral Sigma Model in the presence of fermionic fields at finite temperature in an alternative way. It is shown that the renormalization conditions coincide with those of the zero temperature model.

  1. APEX user's guide - (Argonne production, expansion, and exchange model for electrical systems), version 3.0

    Energy Technology Data Exchange (ETDEWEB)

    VanKuiken, J.C.; Veselka, T.D.; Guziel, K.A.; Blodgett, D.W.; Hamilton, S.; Kavicky, J.A.; Koritarov, V.S.; North, M.J.; Novickas, A.A.; Paprockas, K.R. [and others]

    1994-11-01

    This report describes operating procedures and background documentation for the Argonne Production, Expansion, and Exchange Model for Electrical Systems (APEX). This modeling system was developed to provide the U.S. Department of Energy, Division of Fossil Energy, Office of Coal and Electricity with in-house capabilities for addressing policy options that affect electrical utilities. To meet this objective, Argonne National Laboratory developed a menu-driven programming package that enables the user to develop and conduct simulations of production costs, system reliability, spot market network flows, and optimal system capacity expansion. The APEX system consists of three basic simulation components, supported by various databases and data management software. The components include (1) the investigation of Costs and Reliability in Utility Systems (ICARUS) model, (2) the Spot Market Network (SMN) model, and (3) the Production and Capacity Expansion (PACE) model. The ICARUS model provides generating-unit-level production-cost and reliability simulations with explicit recognition of planned and unplanned outages. The SMN model addresses optimal network flows with recognition of marginal costs, wheeling charges, and transmission constraints. The PACE model determines long-term (e.g., longer than 10 years) capacity expansion schedules on the basis of candidate expansion technologies and load growth estimates. In addition, the Automated Data Assembly Package (ADAP) and case management features simplify user-input requirements. The ADAP, ICARUS, and SMN modules are described in detail. The PACE module is expected to be addressed in a future publication.

  2. Persistence and Global Attractivity for a Discretized Version of a General Model of Glucose-Insulin Interaction

    Directory of Open Access Journals (Sweden)

    Huong Dinh Cong

    2016-09-01

    Full Text Available In this paper, we construct a non-standard finite difference scheme for a general model of glucose-insulin interaction. We establish some new sufficient conditions to ensure that the discretized model preserves the persistence and global attractivity of the continuous model. One of the main findings in this paper is that we derive two important propositions (Proposition 3.1 and Proposition 3.2), which are used to prove the global attractivity of the discretized model. Furthermore, when investigating the persistence and, in some cases, the global attractivity of the discretized model, the nonlinear functions f and h are not required to be differentiable. Hence, our results are more realistic, because the statistical data of glucose and insulin are collected and reported in discrete time. We also present some numerical examples and their simulations to illustrate our results.
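    The appeal of a non-standard finite difference scheme is that it can preserve qualitative properties (positivity, attractivity) of the continuous model for any step size. Below is a Mickens-type sketch on the simple clearance equation dx/dt = a - b*x, a toy stand-in for one glucose-insulin compartment; the equation, parameters, and step size are illustrative assumptions, not the paper's model.

    ```python
    # Treating the loss term implicitly gives the NSFD update
    # x_{n+1} = (x_n + h*a) / (1 + h*b), which stays positive and converges
    # to the equilibrium a/b for any step size h, while standard explicit
    # Euler oscillates and diverges once h*b > 2.

    def euler_step(x, h, a, b):
        return x + h * (a - b * x)

    def nsfd_step(x, h, a, b):
        return (x + h * a) / (1.0 + h * b)

    a, b, h = 1.0, 2.0, 1.5          # large step: h*b = 3 breaks explicit Euler
    x_euler = x_nsfd = 0.1
    for _ in range(20):
        x_euler = euler_step(x_euler, h, a, b)
        x_nsfd = nsfd_step(x_nsfd, h, a, b)
    print(x_euler, x_nsfd)           # Euler blows up; NSFD sits at a/b = 0.5
    ```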

  3. Towards a representation of priming on soil carbon decomposition in the global land biosphere model ORCHIDEE (version 1.9.5.2)

    Science.gov (United States)

    Guenet, Bertrand; Esteban Moyano, Fernando; Peylin, Philippe; Ciais, Philippe; Janssens, Ivan A.

    2016-03-01

    Priming of soil carbon decomposition encompasses different processes through which the decomposition of native (already present) soil organic matter is amplified through the addition of new organic matter, with new inputs typically being more labile than the native soil organic matter. Evidence for priming comes from laboratory and field experiments, but to date there is no estimate of its impact at global scale and under the current anthropogenic perturbation of the carbon cycle. Current soil carbon decomposition models do not include priming mechanisms, thereby introducing uncertainty when extrapolating short-term local observations to ecosystem and regional to global scale. In this study we present a simple conceptual model of decomposition priming, called PRIM, able to reproduce laboratory (incubation) and field (litter manipulation) priming experiments. Parameters for this model were first optimized against data from 20 soil incubation experiments using a Bayesian framework. The optimized parameter values were evaluated against another set of soil incubation data independent from the ones used for calibration and the PRIM model reproduced the soil incubations data better than the original, CENTURY-type soil decomposition model, whose decomposition equations are based only on first-order kinetics. We then compared the PRIM model and the standard first-order decay model incorporated into the global land biosphere model ORCHIDEE (Organising Carbon and Hydrology In Dynamic Ecosystems). A test of both models was performed at ecosystem scale using litter manipulation experiments from five sites. Although both versions were equally able to reproduce observed decay rates of litter, only ORCHIDEE-PRIM could simulate the observed priming (R2 = 0.54) in cases where litter was added or removed. This result suggests that a conceptually simple and numerically tractable representation of priming adapted to global models is able to capture the sign and magnitude of the
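
    The exact PRIM equations are given in the paper; purely as an illustrative sketch of how a priming term can modify first-order decay (the functional form and parameter names below are assumptions, not the published formulation), native soil-carbon mineralization can be made to depend on fresh organic carbon:

    ```python
    import math

    def flux_first_order(soc, k):
        """CENTURY-type first-order decomposition flux: k * SOC."""
        return k * soc

    def flux_primed(soc, foc, k, c):
        """Priming-modified flux: native SOC decay is scaled by a
        saturating function of fresh organic carbon (foc), so larger
        fresh inputs accelerate native decomposition, approaching the
        plain first-order rate for very large inputs; c sets the
        priming intensity. Assumed form for illustration only.
        """
        return k * soc * (1.0 - math.exp(-c * foc))
    ```

    In this form, decomposition of native carbon vanishes without fresh input and saturates at the first-order rate k*SOC, which is one simple way to bound a priming effect.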

  4. An integrated assessment modeling framework for uncertainty studies in global and regional climate change: the MIT IGSM-CAM (version 1.0)

    Science.gov (United States)

    Monier, E.; Scott, J. R.; Sokolov, A. P.; Forest, C. E.; Schlosser, C. A.

    2013-12-01

    This paper describes a computationally efficient framework for uncertainty studies in global and regional climate change. In this framework, the Massachusetts Institute of Technology (MIT) Integrated Global System Model (IGSM), an integrated assessment model that couples an Earth system model of intermediate complexity to a human activity model, is linked to the National Center for Atmospheric Research (NCAR) Community Atmosphere Model (CAM). Since the MIT IGSM-CAM framework (version 1.0) incorporates a human activity model, it is possible to analyze uncertainties in emissions resulting from both uncertainties in the underlying socio-economic characteristics of the economic model and in the choice of climate-related policies. Another major feature is the flexibility to vary key climate parameters controlling the climate system response to changes in greenhouse gases and aerosols concentrations, e.g., climate sensitivity, ocean heat uptake rate, and strength of the aerosol forcing. The IGSM-CAM is not only able to realistically simulate the present-day mean climate and the observed trends at the global and continental scale, but it also simulates ENSO variability with realistic time scales, seasonality and patterns of SST anomalies, albeit with stronger magnitudes than observed. The IGSM-CAM shares the same general strengths and limitations as the Coupled Model Intercomparison Project Phase 3 (CMIP3) models in simulating present-day annual mean surface temperature and precipitation. Over land, the IGSM-CAM shows similar biases to the NCAR Community Climate System Model (CCSM) version 3, which shares the same atmospheric model. This study also presents 21st century simulations based on two emissions scenarios (unconstrained scenario and stabilization scenario at 660 ppm CO2-equivalent) similar to, respectively, the Representative Concentration Pathways RCP8.5 and RCP4.5 scenarios, and three sets of climate parameters. Results of the simulations with the chosen

  5. Version Storage Model of Product Collaborative Design Based on Doubly Linked List

    Institute of Scientific and Technical Information of China (English)

    刘国军; 杨宏志

    2013-01-01

    This study addresses version storage within the version management of product collaborative design. Building on an analysis of existing incremental and complete version storage techniques, it proposes a new version storage model for product collaborative design that combines complete storage with reverse incremental storage. The storage structure is defined as a doubly linked version list ordered by parent-child relationships; when the design process produces a new version, it is placed into the version list by repeated iteration and insertion. This achieves fast version storage, saves storage space, and improves the security of version storage during the collaborative design process.
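
    The abstract's data structure can be sketched as a doubly linked list in which only the newest version is stored completely and each older node is demoted when a newer one arrives; the class below is a minimal invented illustration (the reverse-delta step is simplified to retaining the old content verbatim):

    ```python
    class VersionNode:
        """Node in a doubly linked version list. Only the newest
        version is stored complete; older ones would hold a reverse
        delta (simplified here to the full old content)."""
        def __init__(self, content):
            self.content = content      # full content (or reverse delta)
            self.is_complete = True     # newest version is complete
            self.parent = None          # link to the older version
            self.child = None           # link to the newer version

    class VersionList:
        def __init__(self):
            self.head = None            # newest (complete) version

        def commit(self, new_content):
            """Insert a new version at the head of the list and
            demote the previous head to delta storage."""
            node = VersionNode(new_content)
            if self.head is not None:
                self.head.is_complete = False   # demote old head
                self.head.child = node
                node.parent = self.head
            self.head = node
            return node
    ```

    Walking the parent links from the head reconstructs older versions, which is the access pattern that motivates keeping the newest version complete.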

  6. Distributed Version Control and Library Metadata

    Directory of Open Access Journals (Sweden)

    Galen M. Charlton

    2008-06-01

    Full Text Available Distributed version control systems (DVCSs) are effective tools for managing source code and other artifacts produced by software projects with multiple contributors. This article describes DVCSs and compares them with traditional centralized version control systems, then describes extending the DVCS model to improve the exchange of library metadata.

  7. Geothermal Energy Market Study on the Atlantic Coastal Plain. GRITS (Version 9): Model Description and User's Guide

    Energy Technology Data Exchange (ETDEWEB)

    Kroll, Peter; Kane, Sally Minch [eds.]

    1982-04-01

    The Geothermal Resource Interactive Temporal Simulation (GRITS) model calculates the cost and revenue streams for the lifetime of a project that utilizes low to moderate temperature geothermal resources. With these estimates, the net present value of the project is determined. The GRITS model allows preliminary economic evaluations of direct-use applications of geothermal energy under a wide range of resource, demand, and financial conditions, some of which change over the lifetime of the project.
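
    The core calculation the abstract describes, discounting lifetime cost and revenue streams to a net present value, can be sketched generically (this is textbook NPV, not the GRITS implementation, and the parameter names are assumptions):

    ```python
    def net_present_value(costs, revenues, discount_rate):
        """NPV of yearly cost and revenue streams over the project
        lifetime; year 0 is the first list entry."""
        return sum((r - c) / (1.0 + discount_rate) ** t
                   for t, (c, r) in enumerate(zip(costs, revenues)))
    ```

    With a 10% rate, an up-front cost of 100 recovered by a year-1 revenue of 110 discounts to an NPV of (numerically) zero, the break-even case.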

  8. Versioning Complex Data

    Energy Technology Data Exchange (ETDEWEB)

    Macduff, Matt C.; Lee, Benno; Beus, Sherman J.

    2014-06-29

    Using the history of ARM data files, we designed and demonstrated a data versioning paradigm that is feasible. Assigning versions to sets of files that are modified with some special assumptions and domain specific rules was effective in the case of ARM data, which has more than 5000 datastreams and 500TB of data.
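
    One simple way to realize "versions assigned to sets of files" is to derive a deterministic signature over the file set and bump a counter whenever it changes; the sketch below is an invented illustration of that idea, not the ARM rules (which the abstract says include special assumptions and domain-specific logic):

    ```python
    import hashlib

    def fileset_signature(files):
        """Deterministic signature of a set of (name, bytes) files."""
        h = hashlib.sha256()
        for name, data in sorted(files):
            h.update(name.encode())
            h.update(hashlib.sha256(data).digest())
        return h.hexdigest()

    class DatastreamVersions:
        """Assign an incrementing version whenever the signature of a
        datastream's file set changes (illustrative rule only)."""
        def __init__(self):
            self.version = 0
            self.signature = None

        def observe(self, files):
            sig = fileset_signature(files)
            if sig != self.signature:
                self.signature = sig
                self.version += 1
            return self.version
    ```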

  9. A multi-scale computational model of the effects of TMS on motor cortex [version 3; referees: 2 approved]

    Directory of Open Access Journals (Sweden)

    Hyeon Seo

    2017-05-01

    Full Text Available The detailed biophysical mechanisms through which transcranial magnetic stimulation (TMS) activates cortical circuits are still not fully understood. Here we present a multi-scale computational model to describe and explain the activation of different pyramidal cell types in motor cortex due to TMS. Our model determines precise electric fields based on an individual head model derived from magnetic resonance imaging and calculates how these electric fields activate morphologically detailed models of different neuron types. We predict neural activation patterns for different coil orientations consistent with experimental findings. Beyond this, our model allows us to calculate activation thresholds for individual neurons and precise initiation sites of individual action potentials on the neurons’ complex morphologies. Specifically, our model predicts that cortical layer 3 pyramidal neurons are generally easier to stimulate than layer 5 pyramidal neurons, thereby explaining the lower stimulation thresholds observed for I-waves compared to D-waves. It also shows differences in the regions of activated cortical layer 5 and layer 3 pyramidal cells depending on coil orientation. Finally, it predicts that under standard stimulation conditions, action potentials are mostly generated at the axon initial segment of cortical pyramidal cells, with a much less important activation site being the part of a layer 5 pyramidal cell axon where it crosses the boundary between grey matter and white matter. In conclusion, our computational model offers a detailed account of the mechanisms through which TMS activates different cortical pyramidal cell types, paving the way for more targeted application of TMS based on individual brain morphology in clinical and basic research settings.

  10. Machine learning models identify molecules active against the Ebola virus in vitro [version 2; referees: 2 approved]

    Directory of Open Access Journals (Sweden)

    Sean Ekins

    2016-01-01

    Full Text Available The search for small molecule inhibitors of Ebola virus (EBOV) has led to several high throughput screens over the past 3 years. These have identified a range of FDA-approved active pharmaceutical ingredients (APIs) with anti-EBOV activity in vitro, several of which are also active in a mouse infection model. There are millions of additional commercially-available molecules that could be screened for potential activities as anti-EBOV compounds. One way to prioritize compounds for testing is to generate computational models based on the high throughput screening data and then virtually screen compound libraries. In the current study, we have generated Bayesian machine learning models with viral pseudotype entry assay and the EBOV replication assay data. We have validated the models internally and externally. We have also used these models to computationally score the MicroSource library of drugs to select those likely to be potential inhibitors. Three of the highest scoring molecules that were not in the model training sets, quinacrine, pyronaridine and tilorone, were tested in vitro and had EC50 values of 350, 420 and 230 nM, respectively. Pyronaridine is a component of a combination therapy for malaria that was recently approved by the European Medicines Agency, which may make it more readily accessible for clinical testing. Like other known antimalarial drugs active against EBOV, it shares the 4-aminoquinoline scaffold. Tilorone is an investigational antiviral agent that has shown a broad array of biological activities including cell growth inhibition in cancer cells, antifibrotic properties, α7 nicotinic receptor agonist activity, radioprotective activity and activation of hypoxia inducible factor-1. Quinacrine is an antimalarial but also has use as an anthelmintic. Our results suggest data sets with fewer than 1,000 molecules can produce validated machine learning models that can in turn be utilized to identify novel EBOV inhibitors in
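
    The study's Bayesian models were built with specific molecular descriptors not reproduced here; as a generic sketch of the same idea, a naive Bayes classifier over binary fingerprints can rank compounds by a log-odds "active" score (all names and the tiny data set in the usage are invented):

    ```python
    import numpy as np

    def train_bernoulli_nb(X, y, alpha=1.0):
        """Fit Bernoulli naive Bayes on binary fingerprints X (n, d)
        with activity labels y in {0, 1}; Laplace smoothing alpha."""
        params = {}
        for cls in (0, 1):
            Xc = X[y == cls]
            # Smoothed per-bit probability of a set bit in this class.
            theta = (Xc.sum(axis=0) + alpha) / (len(Xc) + 2.0 * alpha)
            params[cls] = (np.log(theta), np.log(1.0 - theta),
                           np.log(len(Xc) / len(X)))
        return params

    def score_active(params, x):
        """Log-odds that fingerprint x belongs to the active class."""
        def loglik(cls):
            log_t, log_not_t, log_prior = params[cls]
            return log_prior + float(x @ log_t + (1 - x) @ log_not_t)
        return loglik(1) - loglik(0)
    ```

    Scoring an unscreened library with `score_active` and testing the top-ranked compounds is the prioritization step the abstract describes.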

  11. Machine learning models identify molecules active against the Ebola virus in vitro [version 1; referees: 2 approved]

    Directory of Open Access Journals (Sweden)

    Sean Ekins

    2015-10-01

    Full Text Available The search for small molecule inhibitors of Ebola virus (EBOV) has led to several high throughput screens over the past 3 years. These have identified a range of FDA-approved active pharmaceutical ingredients (APIs) with anti-EBOV activity in vitro, several of which are also active in a mouse infection model. There are millions of additional commercially-available molecules that could be screened for potential activities as anti-EBOV compounds. One way to prioritize compounds for testing is to generate computational models based on the high throughput screening data and then virtually screen compound libraries. In the current study, we have generated Bayesian machine learning models with viral pseudotype entry assay and the EBOV replication assay data. We have validated the models internally and externally. We have also used these models to computationally score the MicroSource library of drugs to select those likely to be potential inhibitors. Three of the highest scoring molecules that were not in the model training sets, quinacrine, pyronaridine and tilorone, were tested in vitro and had EC50 values of 350, 420 and 230 nM, respectively. Pyronaridine is a component of a combination therapy for malaria that was recently approved by the European Medicines Agency, which may make it more readily accessible for clinical testing. Like other known antimalarial drugs active against EBOV, it shares the 4-aminoquinoline scaffold. Tilorone is an investigational antiviral agent that has shown a broad array of biological activities including cell growth inhibition in cancer cells, antifibrotic properties, α7 nicotinic receptor agonist activity, radioprotective activity and activation of hypoxia inducible factor-1. Quinacrine is an antimalarial but also has use as an anthelmintic. Our results suggest data sets with fewer than 1,000 molecules can produce validated machine learning models that can in turn be utilized to identify novel EBOV inhibitors in

  12. Machine learning models identify molecules active against the Ebola virus in vitro [version 3; referees: 2 approved]

    Directory of Open Access Journals (Sweden)

    Sean Ekins

    2017-01-01

    Full Text Available The search for small molecule inhibitors of Ebola virus (EBOV) has led to several high throughput screens over the past 3 years. These have identified a range of FDA-approved active pharmaceutical ingredients (APIs) with anti-EBOV activity in vitro, several of which are also active in a mouse infection model. There are millions of additional commercially-available molecules that could be screened for potential activities as anti-EBOV compounds. One way to prioritize compounds for testing is to generate computational models based on the high throughput screening data and then virtually screen compound libraries. In the current study, we have generated Bayesian machine learning models with viral pseudotype entry assay and the EBOV replication assay data. We have validated the models internally and externally. We have also used these models to computationally score the MicroSource library of drugs to select those likely to be potential inhibitors. Three of the highest scoring molecules that were not in the model training sets, quinacrine, pyronaridine and tilorone, were tested in vitro and had EC50 values of 350, 420 and 230 nM, respectively. Pyronaridine is a component of a combination therapy for malaria that was recently approved by the European Medicines Agency, which may make it more readily accessible for clinical testing. Like other known antimalarial drugs active against EBOV, it shares the 4-aminoquinoline scaffold. Tilorone is an investigational antiviral agent that has shown a broad array of biological activities including cell growth inhibition in cancer cells, antifibrotic properties, α7 nicotinic receptor agonist activity, radioprotective activity and activation of hypoxia inducible factor-1. Quinacrine is an antimalarial but also has use as an anthelmintic. Our results suggest data sets with fewer than 1,000 molecules can produce validated machine learning models that can in turn be utilized to identify novel EBOV inhibitors in

  13. Regional hydrogeological simulations for Forsmark - numerical modelling using CONNECTFLOW. Preliminary site description Forsmark area - version 1.2

    Energy Technology Data Exchange (ETDEWEB)

    Hartley, Lee; Cox, Ian; Hunter, Fiona; Jackson, Peter; Joyce, Steve; Swift, Ben [Serco Assurance, Risley (United Kingdom); Gylling, Bjoern; Marsic, Niko [Kemakta Konsult AB, Stockholm (Sweden)

    2005-05-01

    The Swedish Nuclear Fuel and Waste Management Company (SKB) carries out site investigations in two different candidate areas in Sweden with the objective of describing the in-situ conditions for a bedrock repository for spent nuclear fuel. The site characterisation work is divided into two phases, an initial site investigation phase (IPLU) and a complete site investigation phase (KPLU). The results of IPLU are used as a basis for deciding on a subsequent KPLU phase. On the basis of the KPLU investigations a decision is made as to whether detailed characterisation will be performed (including sinking of a shaft). An integrated component in the site characterisation work is the development of site descriptive models. These comprise basic models in three dimensions with an accompanying text description. Central in the modelling work is the geological model, which provides the geometrical context in terms of a model of deformation zones and the rock mass between the zones. Using the geological and geometrical description models as a basis, descriptive models for other geo-disciplines (hydrogeology, hydro-geochemistry, rock mechanics, thermal properties and transport properties) will be developed. Great care is taken to arrive at a general consistency in the description of the various models and assessment of uncertainty and possible needs of alternative models. Here, a numerical model is developed on a regional-scale (hundreds of square kilometres) to understand the zone of influence for groundwater flow that affects the Forsmark area. Transport calculations are then performed by particle tracking from a local-scale release area (a few square kilometres) to identify potential discharge areas for the site and using greater grid resolution. The main objective of this study is to support the development of a preliminary Site Description of the Forsmark area on a regional-scale based on the available data of 30 June 2004 and the previous Site Description. 
A more specific

  14. An online trajectory module (version 1.0) for the non-hydrostatic numerical weather prediction model COSMO

    Directory of Open Access Journals (Sweden)

    A. K. Miltenberger

    2013-02-01

    Full Text Available A module to calculate online trajectories has been implemented into the non-hydrostatic limited-area weather prediction and climate model COSMO. Whereas offline trajectories are calculated with wind fields from model output, which is typically available every one to six hours, online trajectories use the simulated wind field at every model time step (typically less than a minute) to solve the trajectory equation. As a consequence, online trajectories much better capture the short-term temporal fluctuations of the wind field, which is particularly important for mesoscale flows near topography and convective clouds, and they do not suffer from temporal interpolation errors between model output times. The numerical implementation of online trajectories in the COSMO model is based upon an established offline trajectory tool and takes full account of the horizontal domain decomposition that is used for parallelization of the COSMO model. Although a perfect workload balance cannot be achieved for the trajectory module (due to the fact that trajectory positions are not necessarily equally distributed over the model domain), the additional computational costs are fairly small for high-resolution simulations. Various options have been implemented to initialize online trajectories at different locations and times during the model simulation. As a first application of the new COSMO module an Alpine North Föhn event in summer 1987 has been simulated with horizontal resolutions of 2.2 km, 7 km, and 14 km. It is shown that low-tropospheric trajectories calculated offline with one- to six-hourly wind fields can significantly deviate from trajectories calculated online. Deviations increase with decreasing model grid spacing and are particularly large in regions of deep convection and strong orographic flow distortion. On average, for this particular case study, horizontal and vertical positions between online and offline trajectories differed by 50–190 km and
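
    The trajectory equation is commonly integrated with a Petterssen-type iterative step, which averages the wind at the start and (iteratively updated) end point; the sketch below uses an invented solid-body-rotation wind field as a stand-in for the model's simulated winds, and all names are illustrative:

    ```python
    def wind(x, y):
        """Illustrative solid-body rotation wind field (u, v) = (-y, x);
        a stand-in for interpolated model winds."""
        return -y, x

    def trajectory_step(x, y, dt, n_iter=3):
        """One Petterssen-type iterative step: start from the Euler
        guess, then repeatedly average the wind at the start point and
        the current estimate of the end point."""
        u0, v0 = wind(x, y)
        xn, yn = x + dt * u0, y + dt * v0          # Euler first guess
        for _ in range(n_iter):
            u1, v1 = wind(xn, yn)
            xn = x + 0.5 * dt * (u0 + u1)
            yn = y + 0.5 * dt * (v0 + v1)
        return xn, yn
    ```

    For the rotation field above, repeated small steps carry a parcel around a near-perfect circle, a quick sanity check on the scheme's accuracy.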

  15. Enhanced Representation of Soil NO Emissions in the Community Multiscale Air Quality (CMAQ) Model Version 5.0.2

    Science.gov (United States)

    Rasool, Quazi Z.; Zhang, Rui; Lash, Benjamin; Cohan, Daniel S.; Cooter, Ellen J.; Bash, Jesse O.; Lamsal, Lok N.

    2016-01-01

    Modeling of soil nitric oxide (NO) emissions is highly uncertain and may misrepresent its spatial and temporal distribution. This study builds upon a recently introduced parameterization to improve the timing and spatial distribution of soil NO emission estimates in the Community Multiscale Air Quality (CMAQ) model. The parameterization considers soil parameters, meteorology, land use, and mineral nitrogen (N) availability to estimate NO emissions. We incorporate daily year-specific fertilizer data from the Environmental Policy Integrated Climate (EPIC) agricultural model to replace the annual generic data of the initial parameterization, and use a 12km resolution soil biome map over the continental USA. CMAQ modeling for July 2011 shows slight differences in model performance in simulating fine particulate matter and ozone from Interagency Monitoring of Protected Visual Environments (IMPROVE) and Clean Air Status and Trends Network (CASTNET) sites and NO2 columns from Ozone Monitoring Instrument (OMI) satellite retrievals. We also simulate how the change in soil NO emissions scheme affects the expected O3 response to projected emissions reductions.

  16. Enhanced representation of soil NO emissions in the Community Multiscale Air Quality (CMAQ) model version 5.0.2

    Science.gov (United States)

    Rasool, Quazi Z.; Zhang, Rui; Lash, Benjamin; Cohan, Daniel S.; Cooter, Ellen J.; Bash, Jesse O.; Lamsal, Lok N.

    2016-09-01

    Modeling of soil nitric oxide (NO) emissions is highly uncertain and may misrepresent its spatial and temporal distribution. This study builds upon a recently introduced parameterization to improve the timing and spatial distribution of soil NO emission estimates in the Community Multiscale Air Quality (CMAQ) model. The parameterization considers soil parameters, meteorology, land use, and mineral nitrogen (N) availability to estimate NO emissions. We incorporate daily year-specific fertilizer data from the Environmental Policy Integrated Climate (EPIC) agricultural model to replace the annual generic data of the initial parameterization, and use a 12 km resolution soil biome map over the continental USA. CMAQ modeling for July 2011 shows slight differences in model performance in simulating fine particulate matter and ozone from Interagency Monitoring of Protected Visual Environments (IMPROVE) and Clean Air Status and Trends Network (CASTNET) sites and NO2 columns from Ozone Monitoring Instrument (OMI) satellite retrievals. We also simulate how the change in soil NO emissions scheme affects the expected O3 response to projected emissions reductions.

  17. Public participation and rural management of Brazilian waters: an alternative to the deficit model (Portuguese original version)

    Directory of Open Access Journals (Sweden)

    Alessandro Luís Piolli

    2008-12-01

    Full Text Available The knowledge deficit model with regard to the public has been severely criticized in the sociology of the public perception of science. However, when dealing with public decisions on scientific matters, political and scientific institutions insist on defending the deficit model. The idea that only certified experts, or those with vast experience, should have the right to participate in decisions can create problems for the future of democracies. Through a kind of "topography of ideas", in which concepts from the social studies of science are used to think about these problems, and through a case study of public participation in drafting the proposed discounts on fees charged for rural water use in Brazil, we point out an alternative to the deficit model. This alternative requires of the participants a "minimum comprehension" of the scientific matters involved in the decision, using criteria judged by the public itself.

  18. Modelling of neutron and photon transport in iron and concrete radiation shieldings by the Monte Carlo method - Version 2

    CERN Document Server

    Žukauskaite, A; Plukiene, R; Plukis, A

    2007-01-01

    Particle accelerators and other high energy facilities produce penetrating ionizing radiation (neutrons and γ-rays) that must be shielded. The objective of this work was to model photon and neutron transport in various materials commonly used as shielding, such as concrete, iron or graphite. The Monte Carlo method obtains answers by simulating individual particles and recording aspects of their average behavior. In this work several nuclear experiments were modeled: AVF 65 – γ-ray beams (1-10 MeV), HIMAC and ISIS-800 – high energy neutrons (20-800 MeV) transport in iron and concrete. The results were then compared with experimental data.
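
    The Monte Carlo principle the abstract relies on can be sketched in its simplest form: sample exponential free paths for uncollided photons crossing a slab and compare the transmitted fraction with the analytic attenuation law e^(-μd). This is a toy illustration with invented parameters, not the modeled experiments:

    ```python
    import math
    import random

    def transmitted_fraction(mu, thickness, n=100_000, seed=1):
        """Monte Carlo estimate of uncollided photon transmission
        through a slab: sample exponential free paths with attenuation
        coefficient mu (1/cm) and count photons whose first-collision
        distance exceeds the slab thickness (cm)."""
        rng = random.Random(seed)
        passed = sum(1 for _ in range(n)
                     if -math.log(rng.random()) / mu > thickness)
        return passed / n
    ```

    With mu = 0.5 and thickness = 2, the estimate clusters around e^(-1) ≈ 0.368, recovering the Beer-Lambert attenuation law from per-particle sampling.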

  19. Effect of sex in the MRMT-1 model of cancer-induced bone pain [version 3; referees: 2 approved]

    Directory of Open Access Journals (Sweden)

    Sarah Falk

    2015-11-01

    Full Text Available An overwhelming amount of evidence demonstrates sex-induced variation in pain processing, and has thus increased the focus on sex as an essential parameter for optimization of in vivo models in pain research. Mammary cancer cells are often used to model metastatic bone pain in vivo, and are commonly used in both males and females. Here we demonstrate that compared to male rats, female rats have an increased capacity for recovery following inoculation of MRMT-1 mammary cells, thus potentially causing a sex-dependent bias in interpretation of the data.

  20. Reliability Growth Modeling and Optimal Release Policy Under Fuzzy Environment of an N-version Programming System Incorporating the Effect of Fault Removal Efficiency

    Institute of Scientific and Technical Information of China (English)

    2007-01-01

    Failure of a safety critical system can lead to big losses. Very high software reliability is required for automating the working of systems such as aircraft controller and nuclear reactor controller software systems. Fault-tolerant software is used to increase the overall reliability of software systems. Fault tolerance is achieved using fault-tolerant schemes such as fault recovery (recovery block scheme), fault masking (N-version programming (NVP)) or a combination of both (hybrid scheme). Such software incorporates the ability of the system to survive even a failure. Many researchers in the field of software engineering have done excellent work to study the reliability of fault-tolerant systems. Most of them consider the stable system reliability. Few attempts have been made in reliability modeling to study the reliability growth of an NVP system. Recently, a model was proposed to analyze the reliability growth of an NVP system incorporating the effect of fault removal efficiency. In that model, a proportion of the number of failures is assumed to be a measure of fault generation, while a more appropriate measure of fault generation would be the proportion of faults removed. In this paper, we first propose a testing efficiency model incorporating the effects of imperfect fault debugging and error generation. Using this model, a software reliability growth model (SRGM) is developed to model the reliability growth of an NVP system. The proposed model is useful for practical applications and can provide measures of debugging effectiveness and the additional workload or skilled professionals required. It is very important for a developer to determine the optimal release time of the software to improve its performance in terms of competition and cost. In this paper, we also formulate the optimal software release time problem for a 3VP system under a fuzzy environment and discuss a fuzzy optimization technique for solving the problem with a numerical illustration.
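
    The paper's SRGM is specific to NVP systems with testing-efficiency effects; as a hedged, simpler stand-in, the classic Goel-Okumoto mean value function illustrates how an optimal release time trades testing cost against field-failure cost (all cost parameters below are invented):

    ```python
    import math

    def expected_faults(t, a, b):
        """Goel-Okumoto mean value function: expected number of faults
        detected by testing time t (a = total faults, b = detection rate)."""
        return a * (1.0 - math.exp(-b * t))

    def release_cost(t, a, b, c_test, c_fix, c_field):
        """Total cost of releasing at time t: testing effort, plus
        fixing faults found during testing, plus the (more expensive)
        faults left to surface in the field."""
        found = expected_faults(t, a, b)
        return c_test * t + c_fix * found + c_field * (a - found)
    ```

    Scanning `release_cost` over t and taking the minimum gives a crisp optimal release time; the paper instead treats such cost coefficients as fuzzy numbers and solves the problem with a fuzzy optimization technique.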

  1. EnKF and 4D-Var data assimilation with chemical transport model BASCOE (version 05.06)

    Science.gov (United States)

    Skachko, Sergey; Ménard, Richard; Errera, Quentin; Christophe, Yves; Chabrillat, Simon

    2016-08-01

    We compare two optimized chemical data assimilation systems, one based on the ensemble Kalman filter (EnKF) and the other based on four-dimensional variational (4D-Var) data assimilation, using a comprehensive stratospheric chemistry transport model (CTM). This work is an extension of the Belgian Assimilation System for Chemical ObsErvations (BASCOE), initially designed to work with 4D-Var data assimilation. A strict comparison of both methods in the case of chemical tracer transport was done in a previous study and indicated that both methods provide essentially similar results. In the present work, we assimilate observations of ozone, HCl, HNO3, H2O and N2O from EOS Aura-MLS data into the BASCOE CTM with a full description of stratospheric chemistry. Two new issues related to the use of the full chemistry model with EnKF are taken into account. One issue is the large number of error variance parameters that need to be optimized. We estimate an observation error variance parameter as a function of pressure level for each observed species using the Desroziers method. For comparison purposes, we apply the same estimation procedure in the 4D-Var data assimilation, where both scale factors of the background and observation error covariance matrices are estimated using the Desroziers method. However, in EnKF the background error covariance is modelled using the full chemistry model and a model error term which is tuned using an adjustable parameter. We found that it is adequate to use the same value of this parameter, based on the chemical tracer formulation, for all observed species. This is an indication that the main source of model error in a chemical transport model is due to the transport. The second issue in EnKF with comprehensive atmospheric chemistry models is the noise in the cross-covariance between species that occurs when species are weakly chemically related at the same location. These errors need to be filtered out in addition to a
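
    The Desroziers estimate mentioned in the abstract uses the statistic E[d_a * d_b] of analysis departures and observation-minus-background departures, which equals the observation-error variance when the assimilation gain is consistent with the true error statistics. A scalar synthetic-data sketch (invented numbers, not BASCOE code):

    ```python
    import random

    def desroziers_obs_variance(n=100_000, var_b=4.0, var_o=1.0, seed=0):
        """Scalar Desroziers diagnostic: with a consistent Kalman gain
        K = B / (B + R), the mean product of analysis and background
        departures estimates the observation-error variance R."""
        rng = random.Random(seed)
        k = var_b / (var_b + var_o)
        acc = 0.0
        for _ in range(n):
            truth = 0.0
            xb = truth + rng.gauss(0.0, var_b ** 0.5)   # background
            y = truth + rng.gauss(0.0, var_o ** 0.5)    # observation
            d_b = y - xb                 # observation-minus-background
            xa = xb + k * d_b            # analysis
            d_a = y - xa                 # observation-minus-analysis
            acc += d_a * d_b
        return acc / n
    ```

    With the defaults above the diagnostic recovers var_o = 1.0 to within sampling noise, which is how such a statistic can tune per-species, per-level observation error parameters.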

  2. GWSCREEN: A semi-analytical model for assessment of the groundwater pathway from surface or buried contamination: Version 2.0 theory and user's manual

    Energy Technology Data Exchange (ETDEWEB)

    Rood, A.S.

    1993-06-01

    GWSCREEN was developed for assessment of the groundwater pathway from leaching of radioactive and non radioactive substances from surface or buried sources. The code was designed for implementation in the Track I and Track II assessment of CERCLA (Comprehensive Environmental Response, Compensation and Liability Act) sites identified as low probability hazard at the Idaho National Engineering Laboratory (DOE, 1992). The code calculates the limiting soil concentration such that, after leaching and transport to the aquifer, regulatory contaminant levels in groundwater are not exceeded. The code uses a mass conservation approach to model three processes: contaminant release from a source volume, contaminant transport in the unsaturated zone, and contaminant transport in the saturated zone. The source model considers the sorptive properties and solubility of the contaminant. Transport in the unsaturated zone is described by a plug flow model. Transport in the saturated zone is calculated with a semi-analytical solution to the advection dispersion equation in groundwater. In Version 2.0, GWSCREEN has incorporated an additional source model to calculate the impacts to groundwater resulting from the release to percolation ponds. In addition, transport of radioactive progeny has also been incorporated. GWSCREEN has shown comparable results when compared against other codes using similar algorithms and techniques. This code was designed for assessment and screening of the groundwater pathway when field data is limited. It was not intended to be a predictive tool.
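
    The two transport pieces the abstract names, plug flow through the unsaturated zone and a semi-analytical advection-dispersion solution in the aquifer, can be sketched with standard textbook forms. These are the classic plug-flow travel-time expression and the Ogata-Banks solution, not GWSCREEN's exact algorithms, and the parameter names are illustrative:

    ```python
    import math

    def plug_flow_travel_time(thickness, percolation_rate, moisture, retardation):
        """Unsaturated-zone plug-flow travel time: pore-water velocity
        is the percolation (Darcy) rate divided by moisture content,
        further slowed by the contaminant's retardation factor."""
        return thickness * moisture * retardation / percolation_rate

    def adv_disp_concentration(x, t, v, d_l, c0):
        """Ogata-Banks solution of 1-D advection-dispersion for a
        continuous source of concentration c0 at x = 0 (v = seepage
        velocity, d_l = longitudinal dispersion coefficient)."""
        a = math.erfc((x - v * t) / (2.0 * math.sqrt(d_l * t)))
        b = math.exp(v * x / d_l) * math.erfc((x + v * t) / (2.0 * math.sqrt(d_l * t)))
        return 0.5 * c0 * (a + b)
    ```

    At early times the downgradient concentration is essentially zero; at long times it approaches the source concentration c0, the breakthrough behavior a screening calculation compares against a regulatory limit.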

  3. Numerical simulations of oceanic oxygen cycling in the FAMOUS Earth-System model: FAMOUS-ES, version 1.0

    Directory of Open Access Journals (Sweden)

    J. H. T. Williams

    2014-02-01

    Addition and validation of an oxygen cycle to the ocean component of the FAMOUS climate model are described. Surface validation is carried out with respect to HadGEM2-ES, where good agreement is found and where discrepancies are mainly attributed to disagreement in surface temperature structure between the models. The agreement between the models at depth (where observations are also used in the comparison) is less encouraging in the Southern Hemisphere than in the Northern Hemisphere. This is attributed to a combination of excessive surface productivity in FAMOUS' equatorial waters (and its concomitant effect on remineralisation at depth) and its reduced overturning circulation compared to HadGEM2-ES. For the entire Atlantic basin FAMOUS has a circulation strength of 12.7 ± 0.4 Sv compared to 15.0 ± 0.9 Sv for HadGEM2-ES. The HadGEM2-ES data used in this paper were obtained from the online database of the fifth Coupled Model Intercomparison Project, CMIP5 (Taylor et al., 2012).

  4. Enhanced representation of soil NO emissions in the Community Multiscale Air Quality (CMAQ) model version 5.0.2

    Science.gov (United States)

    Modeling of soil nitric oxide (NO) emissions is highly uncertain and may misrepresent its spatial and temporal distribution. This study builds upon a recently introduced parameterization to improve the timing and spatial distribution of soil NO emission estimates in the Community...

  5. The CSIRO Mk3L climate system model version 1.0 – Part 2: Response to external forcings

    Directory of Open Access Journals (Sweden)

    S. J. Phipps

    2012-05-01

    The CSIRO Mk3L climate system model is a coupled general circulation model, designed primarily for millennial-scale climate simulation and palaeoclimate research. Mk3L includes components which describe the atmosphere, ocean, sea ice and land surface, and combines computational efficiency with a stable and realistic control climatology. It is freely available to the research community. This paper evaluates the response of the model to external forcings which correspond to past and future changes in the climate system.

    A simulation of the mid-Holocene climate is performed, in which changes in the seasonal and meridional distribution of incoming solar radiation are imposed. Mk3L correctly simulates increased summer temperatures at northern mid-latitudes and cooling in the tropics. However, it is unable to capture some of the regional-scale features of the mid-Holocene climate, with the precipitation over Northern Africa being deficient. The model simulates a reduction of between 7 and 15% in the amplitude of El Niño-Southern Oscillation, a smaller decrease than that implied by the palaeoclimate record. However, the realism of the simulated ENSO is limited by the model's relatively coarse spatial resolution.

    Transient simulations of the late Holocene climate are then performed. The evolving distribution of insolation is imposed, and an acceleration technique is applied and assessed. The model successfully captures the temperature changes in each hemisphere and the upward trend in ENSO variability. However, the lack of a dynamic vegetation scheme does not allow it to simulate an abrupt desertification of the Sahara.

    To assess the response of Mk3L to other forcings, transient simulations of the last millennium are performed. Changes in solar irradiance, atmospheric greenhouse gas concentrations and volcanic emissions are applied to the model. The model is again broadly successful at simulating larger-scale changes in the

  6. Itzï (version 17.1): an open-source, distributed GIS model for dynamic flood simulation

    Science.gov (United States)

    Guillaume Courty, Laurent; Pedrozo-Acuña, Adrián; Bates, Paul David

    2017-05-01

    Worldwide, floods are acknowledged as one of the most destructive hazards. In human-dominated environments, their negative impacts are ascribed not only to the increase in frequency and intensity of floods but also to a strong feedback between the hydrological cycle and anthropogenic development. In order to advance a more comprehensive understanding of this complex interaction, this paper presents the development of a new open-source tool named Itzï that enables the 2-D numerical modelling of rainfall-runoff processes and surface flows integrated with the open-source geographic information system (GIS) software known as GRASS. It therefore takes advantage of the ability of GIS environments to handle datasets with variations in both temporal and spatial resolution. Furthermore, the presented numerical tool can handle datasets from different sources with varied spatial resolutions, facilitating the preparation and management of input and forcing data. This ability reduces the preprocessing time usually required by other models. Itzï uses a simplified form of the shallow water equations, the damped partial inertia equation, for the resolution of surface flows, and the Green-Ampt model for infiltration. The source code is now publicly available online, along with complete documentation. The numerical model is verified against three different test cases: firstly, a comparison with an analytic solution of the shallow water equations is introduced; secondly, a hypothetical flooding event in an urban area is implemented, where results are compared to those from an established model using a similar approach; and lastly, the reproduction of a real inundation event that occurred in the city of Kingston upon Hull, UK, in June 2007, is presented. The numerical approach proved its ability to reproduce the analytic and synthetic test cases. Moreover, simulation results of the real flood event showed its suitability for identifying areas affected by flooding.
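The Green-Ampt infiltration model used by Itzï can be sketched as follows, assuming ponded conditions: cumulative infiltration F satisfies an implicit equation, solved here by fixed-point iteration, and the instantaneous rate follows from F. The soil parameters are standard textbook values for a silt loam, not taken from the paper.

```python
import math

# Green-Ampt under ponding: cumulative infiltration F(t) solves
#   K*t = F - psi*dtheta*ln(1 + F/(psi*dtheta)),
# and the infiltration rate is f = K*(1 + psi*dtheta/F).
def green_ampt_F(t, K, psi, dtheta, iters=200):
    """Cumulative infiltration (cm) after time t (h) by fixed-point
    iteration F <- K*t + psi*dtheta*ln(1 + F/(psi*dtheta))."""
    pd = psi * dtheta
    F = max(K * t, 1e-9)  # initial guess
    for _ in range(iters):
        F = K * t + pd * math.log(1.0 + F / pd)
    return F

def green_ampt_rate(F, K, psi, dtheta):
    """Infiltration rate (cm/h) at cumulative infiltration F (cm)."""
    return K * (1.0 + psi * dtheta / F)

# Silt loam textbook parameters: K = 0.65 cm/h, suction head psi = 16.7 cm,
# moisture deficit dtheta = 0.34.
K, psi, dtheta = 0.65, 16.7, 0.34
F1 = green_ampt_F(1.0, K, psi, dtheta)
print(round(F1, 2), round(green_ampt_rate(F1, K, psi, dtheta), 2))
```

The iteration converges because the mapping's derivative, psi*dtheta/(psi*dtheta + F), is below 1; for these parameters about 3.2 cm infiltrates in the first hour.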

  7. User guide to UTDefect, Version 3: A computer program modelling ultrasonic nondestructive testing of a defect in an isotropic component

    Energy Technology Data Exchange (ETDEWEB)

    Bostroem, A. [Chalmers Univ. of Technology, Goeteborg (Sweden). Dept. of Mechanics]

    2000-10-01

    This user guide to the computer program UTDefect should give a reasonable overview of the program, its possibilities and limitations, and should make it possible to run the program. UTDefect models the ultrasonic nondestructive testing of some simply shaped defects in an isotropic and homogeneous component. Such a model can be useful for educational purposes, for parametric studies, for the development of testing procedures, for the development of signal processing and data inversion procedures, and for the qualification of NDT procedures and personnel. The theories behind UTDefect are all of the type that can be called 'exact', meaning that the full linear elastodynamic wave equations are solved, essentially without any approximations. The basic assumption in UTDefect is that the tested component is homogeneous and isotropic, although viscoelastic losses can be included. The ultrasonic probes are modelled by the traction they exert on the component. The action of the receiving probe is modelled by a reciprocity argument. The various defects are all idealized with smooth surfaces and sharp crack edges, although a model for rough cracks is also included. The wave propagation and scattering are solved for by Fourier transforms, integral equation techniques, the null field approach and separation of variables. The methods are all of the semi-analytical kind and, with enough truncations, number of integration points, etc., give very good accuracy. The models are all three-dimensional and give reasonable execution times in most cases. In comparison, the more general volume discretisation methods like EFIT and FEM still tend to be useful for wave propagation problems mainly in two dimensions. The probe model in UTDefect admits the usual kind of contact probes with arbitrary type, angle and frequency. The effective contact area can be rectangular or elliptic and the contact lubricated or glued. Focused probes are also possible. Two simple types of

  8. Influence of Solar and Thermal Radiation on Future Heat Stress Using CMIP5 Archive Driving the Community Land Model Version 4.5

    Science.gov (United States)

    Buzan, J. R.; Huber, M.

    2015-12-01

    The summer of 2015 experienced major heat waves on four continents, and heat stress left ~4000 people dead in India and Pakistan. Heat stress is caused by a combination of meteorological factors: temperature, humidity, and radiation. The International Organization for Standardization (ISO) uses Wet Bulb Globe Temperature (WBGT), an empirical metric calibrated with temperature, humidity, and radiation, for determining labor capacity during heat stress. Unfortunately, most literature studying global heat stress focuses on extreme temperature events, and a limited number of studies use the combination of temperature and humidity. Recent global assessments use WBGT, yet omit the radiation component without recalibrating the metric. Here we explicitly calculate future WBGT within a land surface model, including radiative fluxes as produced by a modeled globe thermometer. We use the Community Land Model version 4.5 (CLM4.5), which is a component model of the Community Earth System Model (CESM) maintained by the National Center for Atmospheric Research (NCAR). To drive our CLM4.5 simulations, we use greenhouse gas concentrations from Representative Concentration Pathway 8.5 (business as usual) and atmospheric output from the CMIP5 Archive. Because humans work in a variety of environments, we place the modeled globe thermometer in a variety of settings: we modify CLM4.5 code to calculate solar and thermal radiation fluxes below and above canopy vegetation, and in bare ground. To calculate wet bulb temperature, we implemented the HumanIndexMod into CLM4.5. The temperature, wet bulb temperature, and radiation fields are calculated at every model time step and are output four times daily. We use these fields to calculate WBGT and labor capacity for two time slices: 2026-2045 and 2081-2100.
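The WBGT combination referred to above is a fixed weighting of natural wet-bulb, globe, and dry-bulb temperatures; dropping the radiation (globe) term, as the criticized assessments do, amounts to using the indoor form in all conditions. A minimal sketch of the ISO 7243 formulas (the temperature values are invented):

```python
def wbgt_outdoor(t_nwb, t_globe, t_air):
    """ISO 7243 outdoor (solar load) form:
    WBGT = 0.7*Tnwb + 0.2*Tg + 0.1*Ta, all temperatures in deg C."""
    return 0.7 * t_nwb + 0.2 * t_globe + 0.1 * t_air

def wbgt_indoor(t_nwb, t_globe):
    """ISO 7243 indoor / no solar load form: WBGT = 0.7*Tnwb + 0.3*Tg."""
    return 0.7 * t_nwb + 0.3 * t_globe

# A hot, humid, sunny case: natural wet-bulb 28 C, globe 45 C, air 36 C.
print(round(wbgt_outdoor(28.0, 45.0, 36.0), 1))  # -> 32.2
```

The 0.7 weight on the natural wet-bulb term is why humidity dominates WBGT, while the globe term carries the radiative contribution that the paper's modeled globe thermometer supplies.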

  9. GPU-accelerated atmospheric chemical kinetics in the ECHAM/MESSy (EMAC) Earth system model (version 2.52)

    Directory of Open Access Journals (Sweden)

    M. Alvanos

    2017-10-01

    This paper presents an application of GPU accelerators in Earth system modeling. We focus on atmospheric chemical kinetics, one of the most computationally intensive tasks in climate–chemistry model simulations. We developed a software package that automatically generates CUDA kernels to numerically integrate atmospheric chemical kinetics in the global climate model ECHAM/MESSy Atmospheric Chemistry (EMAC), used to study climate change and air quality scenarios. A source-to-source compiler outputs a CUDA-compatible kernel by parsing the FORTRAN code generated by the Kinetic PreProcessor (KPP) general analysis tool. All Rosenbrock methods that are available in the KPP numerical library are supported. Performance evaluation, using Fermi and Pascal CUDA-enabled GPU accelerators, shows achieved speed-ups of 4.5× and 20.4×, respectively, of the kernel execution time. A node-to-node real-world production performance comparison shows a 1.75× speed-up over the non-accelerated application using the KPP three-stage Rosenbrock solver. We provide a detailed description of the code optimizations used to improve the performance, including memory optimizations, control code simplification, and reduction of idle time. The accuracy and correctness of the accelerated implementation are evaluated by comparing the output of the accelerated kernel with that of the CPU-only code of the application; the median relative difference is found to be less than 0.000000001 %. The approach followed, including the computational workload division, and the developed GPU solver code can potentially be used as the basis for hardware acceleration of numerous geoscientific models that rely on KPP for atmospheric chemical kinetics applications.
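The gap between the 20.4× kernel speed-up and the 1.75× node-to-node speed-up is what Amdahl's law predicts when only part of the runtime is accelerated. A back-of-envelope inversion (an illustration, not a figure from the paper) suggests the chemical-kinetics kernel accounted for roughly 45% of the original runtime:

```python
def amdahl_overall(p, s):
    """Overall speed-up when a fraction p of runtime is accelerated by s."""
    return 1.0 / ((1.0 - p) + p / s)

def accelerated_fraction(overall, s):
    """Invert Amdahl's law: the fraction p consistent with an observed
    overall speed-up, given a component speed-up s."""
    return (1.0 - 1.0 / overall) / (1.0 - 1.0 / s)

# Reported numbers: kernel speed-up ~20.4x (Pascal), node-to-node ~1.75x.
p = accelerated_fraction(1.75, 20.4)
print(round(p, 2))  # -> 0.45
```

The same arithmetic shows why further kernel optimization has limited payoff: with p ≈ 0.45, even an infinitely fast kernel caps the overall speed-up near 1/(1-p) ≈ 1.8×.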

  10. A Detailed Account of Alain Connes' Version of the Standard Model in Non-Commutative Differential Geometry III

    Science.gov (United States)

    Kastler, Daniel

    We describe in detail Alain Connes' most recent presentation of the (classical level of the) standard model in noncommutative differential geometry, now free of the cumbersome adynamical fields that encumbered the initial treatment. In addition, the theory is presented more transparently by systematic use of the skew tensor-product structure and of 2×2 matrices with 2×2 matrix entries instead of the previous 4×4 matrices.

  11. Application of a modified Anaerobic Digestion Model 1 version for fermentative hydrogen production from sweet sorghum extract by Ruminococcus albus

    Energy Technology Data Exchange (ETDEWEB)

    Ntaikou, I.; Lyberatos, G. [Department of Chemical Engineering, University of Patras, Karatheodori 1 St., 26500 Patras (Greece); Institute of Chemical Engineering and High Temperature Chemical Processes, 26504 Patras (Greece); Gavala, H.N. [Department of Chemical Engineering, University of Patras, Karatheodori 1 St., 26500 Patras (Greece); Copenhagen Institute of Technology (Aalborg University Copenhagen), Section for Sustainable Biotechnology, Department of Biotechnology, Chemistry and Environmental Engineering, Lautrupvang 15, DK 2750 Ballerup (Denmark)

    2010-04-15

    The aim of the present study was to evaluate the effectiveness of a developed, ADM1-based kinetic model for the hydrogen production process in batch and continuous cultures of the bacterium Ruminococcus albus grown on sweet sorghum extract as the sole carbon source. Although sorghum extract is known to contain at least two different sugars, i.e. sucrose and glucose, no biphasic growth was observed in batch cultures, although such growth is reported to occur in cultures of R. albus with mixed substrates. Thus, taking into account that the main sugar of sweet sorghum extract is sucrose, batch experiments with different initial concentrations of sucrose were performed in order to estimate the growth kinetics of the bacterium on this substrate. The kinetic parameters concerning the endogenous metabolism of the bacterium, as well as those concerning the effect of pH and hydrogen partial pressure (P_H2), were the same as those estimated in a previous study with glucose as carbon source. Subsequently, the experimental data of batch and continuous experiments with sweet sorghum extract were simulated with the already developed, modified ADM1 model accounting for the use of a sugar-based substrate. It was shown that the model, which was developed on synthetic substrates, was successful in adequately describing the behavior of the microorganism on a real substrate such as sweet sorghum extract, predicting the experimental results quite well, with the deviation of the model predictions from the experimental results ranging from 5 to 18% for the hydrogen yield.
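Substrate-limited growth with hydrogen-partial-pressure inhibition of the kind described above can be sketched as a simple Monod model with a non-competitive inhibition factor, integrated with forward Euler. All parameter values below are invented for illustration; they are not the fitted ADM1 parameters of the study.

```python
# Monod-type substrate uptake with a non-competitive hydrogen-inhibition
# factor 1/(1 + pH2/KI), integrated with forward Euler.
def simulate(S0=10.0, X0=0.1, mu_max=0.3, Ks=0.5, Y=0.1,
             pH2=0.2, KI=1.0, dt=0.01, t_end=48.0):
    """Return final substrate S (g/L) and biomass X (g/L) after t_end hours."""
    S, X = S0, X0
    inhib = 1.0 / (1.0 + pH2 / KI)  # hydrogen partial-pressure inhibition
    for _ in range(int(t_end / dt)):
        mu = mu_max * S / (Ks + S) * inhib   # specific growth rate (1/h)
        dX = mu * X                          # biomass growth
        dS = -dX / Y                         # substrate uptake (yield Y)
        X += dX * dt
        S += dS * dt
        if S < 0:
            S = 0.0
    return S, X

S_end, X_end = simulate()
print(round(S_end, 2), round(X_end, 2))
```

Mass conservation gives a quick sanity check: with yield Y = 0.1, exhausting 10 g/L of substrate adds 1.0 g/L of biomass to the 0.1 g/L inoculum, so X should end near 1.1.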

  12. Regional hydrogeological simulations for Forsmark - numerical modelling using DarcyTools. Preliminary site description Forsmark area version 1.2

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    2005-12-15

    A numerical model is developed on a regional-scale (hundreds of square kilometres) to study the zone of influence for variable-density groundwater flow that affects the Forsmark area. Transport calculations are performed by particle tracking from a local-scale release area (a few square kilometres) to test the sensitivity to different hydrogeological uncertainties and the need for far-field realism. The main objectives of the regional flow modelling were to achieve the following: I. Palaeo-hydrogeological understanding: An improved understanding of the palaeohydrogeological conditions is necessary in order to gain credibility for the site descriptive model in general and the hydrogeological description in particular. This requires modelling of the groundwater flow from the last glaciation up to present-day with comparisons against measured TDS and other hydro-geochemical measures. II. Simulation of flow paths: The simulation and visualisation of flow paths from a tentative repository area is a means for describing the role of the current understanding of the modelled hydrogeological conditions in the target volume, i.e. the conditions of primary interest for Safety Assessment. Of particular interest here is demonstration of the need for detailed far-field realism in the numerical simulations. The motivation for a particular model size (and resolution) and set of boundary conditions for a realistic description of the recharge and discharge connected to the flow at repository depth is an essential part of the groundwater flow path simulations. The numerical modelling was performed by two separate modelling teams, the ConnectFlow Team and the DarcyTools Team. The work presented in this report was based on the computer code DarcyTools developed by Computer-aided Fluid Engineering. DarcyTools is a kind of equivalent porous media (EPM) flow code specifically designed to treat flow and salt transport in sparsely fractured crystalline rock intersected by transmissive

  13. Pharmacokinetic-pharmacodynamic relationship of anesthetic drugs: from modeling to clinical use [version 1; referees: 4 approved]

    Directory of Open Access Journals (Sweden)

    Valerie Billard

    2015-11-01

    Anesthesia is a combination of unconsciousness, amnesia, and analgesia, expressed in sleeping patients by a limited reaction to noxious stimulation. It is achieved by several classes of drugs, acting mainly on the central nervous system. Compared to other therapeutic families, the anesthetic drugs, administered by intravenous or pulmonary route, are quickly distributed in the blood and induce within a few minutes effects that are fully reversible within minutes or hours. These effects change in parallel with the concentration of the drug, and the concentration time course of the drug follows, with reasonable precision, mathematical models based on the Fick principle. Therefore, understanding the concentration time course allows adjusting the dosing scheme in order to control the effects. The purpose of this short review is to describe the basis of pharmacokinetics and modeling, the concentration-effect relationship, and drug-interaction modeling, to offer anesthesiologists and non-anesthesiologists an overview of the rules to follow to optimize anesthetic drug delivery.
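The lag between plasma concentration and clinical effect described above is commonly captured by linking a pharmacokinetic model to an effect-site compartment with rate constant ke0. The sketch below uses a one-compartment IV bolus model with invented parameters (not a published anesthetic data set); a useful sanity check is that at the effect-site peak the two concentrations cross.

```python
import math

# One-compartment IV bolus plasma concentration and its effect-site lag:
#   Cp(t) = (Dose/V) * exp(-k*t)
#   dCe/dt = ke0*(Cp - Ce)  =>  Ce(t) = C0*ke0/(ke0-k)*(exp(-k*t) - exp(-ke0*t))
def cp(t, dose=100.0, V=20.0, k=0.1):
    """Plasma concentration (mg/L) at time t (min)."""
    return dose / V * math.exp(-k * t)

def ce(t, dose=100.0, V=20.0, k=0.1, ke0=0.5):
    """Effect-site concentration (mg/L) at time t (min), ke0 != k."""
    c0 = dose / V
    return c0 * ke0 / (ke0 - k) * (math.exp(-k * t) - math.exp(-ke0 * t))

# Time of peak effect: where dCe/dt = 0, i.e. where Ce crosses Cp.
k, ke0 = 0.1, 0.5
t_peak = math.log(ke0 / k) / (ke0 - k)
print(round(t_peak, 2), round(cp(0.0), 1), round(ce(t_peak), 2))
```

This hysteresis (effect lagging plasma concentration) is why target-controlled infusion systems titrate to the effect-site rather than the plasma concentration.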

  14. Development of a source oriented version of the WRF/Chem model and its application to the California Regional PM10/PM2.5 Air Quality Study

    Directory of Open Access Journals (Sweden)

    H. Zhang

    2013-06-01

    flux, and primary and secondary particulate matter concentrations relative to the internally mixed version of the model. Downward shortwave radiation predicted by the source-oriented model is enhanced by 1% at ground level, chiefly because diesel engine particles in the source-oriented mixture are not artificially coated with material that increases their absorption efficiency. The extinction coefficient predicted by the source-oriented WRF/Chem model is reduced by an average of ∼5–10% in the central valley, with a maximum reduction of ∼20%. Particulate matter concentrations predicted by the source-oriented WRF/Chem model are ∼5–10% lower than in the internally mixed version of the same model because increased solar radiation at the ground increases atmospheric mixing. All of these results stem from the mixing state of black carbon. The source-oriented model representation with realistic aging processes predicts that hydrophobic diesel engine particles remain largely uncoated over the 7-day simulation period, while the internal mixture model representation predicts significant accumulation of secondary nitrate and water on diesel engine particles. Similar results will likely be found in any air pollution stagnation episode that is characterized by significant particulate nitrate production.

  15. Cielo Computational Environment Usage Model With Mappings to ACE Requirements for the General Availability User Environment Capabilities Release Version 1.1

    Energy Technology Data Exchange (ETDEWEB)

    Vigil, Benny Manuel [Los Alamos National Laboratory]; Ballance, Robert [SNL]; Haskell, Karen [SNL]

    2012-08-09

    Cielo is a massively parallel supercomputer funded by the DOE/NNSA Advanced Simulation and Computing (ASC) program, and operated by the Alliance for Computing at Extreme Scale (ACES), a partnership between Los Alamos National Laboratory (LANL) and Sandia National Laboratories (SNL). The primary Cielo compute platform is physically located at Los Alamos National Laboratory. This Cielo Computational Environment Usage Model documents the capabilities and the environment to be provided for the Q1 FY12 Level 2 Cielo Capability Computing (CCC) Platform Production Readiness Milestone. This document describes specific capabilities, tools, and procedures to support both local and remote users. The model is focused on the needs of the ASC user working in the secure computing environments at Lawrence Livermore National Laboratory (LLNL), Los Alamos National Laboratory, or Sandia National Laboratories, but also addresses the needs of users working in the unclassified environment. The Cielo Computational Environment Usage Model maps the provided capabilities to the tri-Lab ASC Computing Environment (ACE) Version 8.0 requirements. The ACE requirements reflect the high performance computing requirements for the Production Readiness Milestone user environment capabilities of the ASC community. A description of ACE requirements met, and those requirements that are not met, are included in each section of this document. The Cielo Computing Environment, along with the ACE mappings, has been issued and reviewed throughout the tri-Lab community.

  16. Using Rasch rating scale model to reassess the psychometric properties of the Persian version of the PedsQLTM 4.0 Generic Core Scales in school children

    Directory of Open Access Journals (Sweden)

    Jafari Peyman

    2012-03-01

    Background: Item response theory (IRT) is extensively used to develop adaptive instruments of health-related quality of life (HRQoL). However, each IRT model has its own function to estimate item and category parameters, and hence different results may be found using the same response categories with different IRT models. The present study used the Rasch rating scale model (RSM) to examine and reassess the psychometric properties of the Persian version of the PedsQLTM 4.0 Generic Core Scales. Methods: The PedsQLTM 4.0 Generic Core Scales was completed by 938 Iranian school children and their parents. Convergent, discriminant and construct validity of the instrument were assessed by classical test theory (CTT). The RSM was applied to investigate person and item reliability, item statistics and ordering of response categories. Results: The CTT method showed that the scaling success rates for convergent and discriminant validity were 100% in all domains with the exception of physical health in the child self-report. Moreover, confirmatory factor analysis supported a four-factor model similar to its original version. The RSM showed that 22 out of 23 items had acceptable infit and outfit statistics (0.6, person reliabilities were low, item reliabilities were high, and item difficulty ranged from -1.01 to 0.71 and -0.68 to 0.43 for child self-report and parent proxy-report, respectively. The RSM also showed that successive response categories for all items were not located in the expected order. Conclusions: This study revealed that, in all domains, the five response categories did not perform adequately. It is not known whether this problem is a function of the meaning of the response choices in the Persian language or an artifact of a mostly healthy population that did not use the full range of the response categories. The response categories should be evaluated in further validation studies, especially in large samples of chronically ill patients.
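In the rating scale model used here, the probability of responding in category k depends on a person ability theta, an item difficulty delta, and a set of thresholds tau shared across items; the disordered thresholds reported above show up as category probability curves that never dominate anywhere on the ability scale. A minimal sketch with invented parameters:

```python
import math

# Rasch rating scale model: category probabilities for one item.
# Cumulative logits psi_0 = 0, psi_k = sum_{j<=k} (theta - delta - tau_j);
# P(category k) = exp(psi_k) / sum_m exp(psi_m).
def rsm_probs(theta, delta, taus):
    psis = [0.0]
    for tau in taus:
        psis.append(psis[-1] + (theta - delta - tau))
    z = [math.exp(p) for p in psis]
    total = sum(z)
    return [v / total for v in z]

# An illustrative 4-category item with ordered thresholds:
probs = rsm_probs(theta=0.5, delta=0.0, taus=[-1.0, 0.0, 1.0])
print([round(p, 3) for p in probs])  # probabilities over the 4 categories
```

With ordered taus each category is modal over some interval of theta; reversing the tau values produces the threshold disordering the authors observed, where a middle category is never the most probable response.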

  17. Breeding novel solutions in the brain: a model of Darwinian neurodynamics [version 1; referees: 1 approved, 2 approved with reservations]

    Directory of Open Access Journals (Sweden)

    András Szilágyi

    2016-09-01

    Background: The fact that surplus connections and neurons are pruned during development is well established. We complement this selectionist picture by a proof-of-principle model of evolutionary search in the brain, which accounts for new variations in theory space. We present a model for Darwinian evolutionary search for candidate solutions in the brain. Methods: We combine known components of the brain, recurrent neural networks (acting as attractors), the action selection loop and implicit working memory, to provide the appropriate Darwinian architecture. We employ a population of attractor networks with palimpsest memory. The action selection loop is employed with winners-share-all dynamics to select for candidate solutions that are transiently stored in implicit working memory. Results: We document two processes: selection of stored solutions and evolutionary search for novel solutions. During the replication of candidate solutions, attractor networks occasionally produce recombinant patterns, increasing the variation on which selection can act. Combinatorial search acts on multiplying units (activity patterns) with hereditary variation, and novel variants appear due to (i) noisy recall of patterns from the attractor networks, (ii) noise during transmission of candidate solutions as messages between networks, and (iii) spontaneously generated, untrained patterns in spurious attractors. Conclusions: Attractor dynamics of recurrent neural networks can be used to model Darwinian search. The proposed architecture can be used for fast search among stored solutions (by selection) and for evolutionary search when novel candidate solutions are generated in successive iterations. Since all the suggested components are present in advanced nervous systems, we hypothesize that the brain could implement a truly evolutionary combinatorial search system, capable of generating novel variants.
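The "selection of stored solutions" step, recalling a stored pattern from a noisy cue via attractor dynamics, can be illustrated with a plain Hopfield network and Hebbian storage. This is a toy stand-in, not the palimpsest-memory networks of the paper.

```python
import numpy as np

# Hebbian storage of +/-1 patterns in a Hopfield network and recall from
# a noisy cue: the network state falls into the nearest stored attractor.
rng = np.random.default_rng(0)
N, P = 100, 3  # 100 units, 3 stored patterns (low memory load)
patterns = rng.choice([-1, 1], size=(P, N))

# Hebbian weight matrix (outer-product rule) with zero self-connections.
W = (patterns.T @ patterns) / N
np.fill_diagonal(W, 0.0)

def recall(cue, steps=10):
    """Synchronous sign-threshold updates from an initial cue."""
    s = cue.astype(float).copy()
    for _ in range(steps):
        s = np.sign(W @ s)
        s[s == 0] = 1.0
    return s

# Flip 10% of one stored pattern and let the attractor clean it up.
cue = patterns[0].copy()
flip = rng.choice(N, size=10, replace=False)
cue[flip] *= -1
out = recall(cue)
overlap = (out * patterns[0]).mean()
print(overlap)  # near 1.0: the noisy cue converged to the stored pattern
```

The paper's "noisy recall" source of variation corresponds to recall occasionally settling into a different or spurious attractor, which this toy model also exhibits at higher memory loads or noise levels.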

  18. GASP: A Performance Analysis Tool Interface for Global AddressSpace Programming Models, Version 1.5

    Energy Technology Data Exchange (ETDEWEB)

    Leko, Adam; Bonachea, Dan; Su, Hung-Hsun; George, Alan D.; Sherburne, Hans

    2006-09-14

    Due to the wide range of compilers and the lack of a standardized performance tool interface, writers of performance tools face many challenges when incorporating support for global address space (GAS) programming models such as Unified Parallel C (UPC), Titanium, and Co-Array Fortran (CAF). This document presents a Global Address Space Performance tool interface (GASP) that is flexible enough to be adapted into current global address space compiler and runtime infrastructures with little effort, while allowing performance analysis tools to gather much information about the performance of global address space programs.

  19. From disease modelling to personalised therapy in patients with CEP290 mutations [version 1; referees: 2 approved]

    Directory of Open Access Journals (Sweden)

    Elisa Molinari

    2017-05-01

    Mutations that give rise to premature termination codons are a common cause of inherited genetic diseases. When transcripts containing these changes are generated, they are usually rapidly removed by the cell through the process of nonsense-mediated decay. Here we discuss observed changes in transcripts of the centrosomal protein CEP290 resulting not from degradation, but from changes in exon usage. We also comment on a landmark paper (Drivas et al., Sci. Transl. Med., 2015), where modelling this process of exon usage may be used to predict disease severity in CEP290 ciliopathies, and how understanding this process may potentially be used for therapeutic benefit in the future.

  20. M3 version 3.0: Verification and validation; Hydrochemical model of ground water at repository site

    Energy Technology Data Exchange (ETDEWEB)

    Gomez, Javier B. (Dept. of Earth Sciences, Univ. of Zaragoza, Zaragoza (Spain)); Laaksoharju, Marcus (Geopoint AB, Sollentuna (Sweden)); Skaarman, Erik (Abscondo, Bromma (Sweden)); Gurban, Ioana (3D-Terra (Canada))

    2009-01-15

    Hydrochemical evaluation is a complex type of work that is carried out by specialists. The outcome of this work is generally presented as qualitative models and process descriptions of a site. To support and help to quantify the processes in an objective way, a multivariate mathematical tool entitled M3 (Multivariate Mixing and Mass balance calculations) has been constructed. The computer code can be used to trace the origin of the groundwater, and to calculate the mixing proportions and mass balances from groundwater data. The M3 code is a groundwater response model, which means that changes in the groundwater chemistry in terms of sources and sinks are traced in relation to an ideal mixing model. The complexity of the measured groundwater data determines the configuration of the ideal mixing model. Deviations from the ideal mixing model are interpreted as being due to reactions. Assumptions concerning important mineral phases altering the groundwater or uncertainties associated with thermodynamic constants do not affect the modelling because the calculations are solely based on the measured groundwater composition. M3 uses the opposite approach to that of many standard hydrochemical models. In M3, mixing is evaluated and calculated first. The constituents that cannot be described by mixing are described by reactions. The M3 model consists of three steps: the first is a standard principal component analysis, followed by mixing and finally mass balance calculations. The measured groundwater composition can be described in terms of mixing proportions (%), while the sinks and sources of an element associated with reactions are reported in mg/L. This report contains a set of verification and validation exercises with the intention of building confidence in the use of the M3 methodology. At the same time, clear answers are given to questions related to the accuracy and the precision of the results, including the inherent uncertainties and the errors that can be made
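The core M3 idea, describing a measured groundwater sample as mixing proportions of reference end-member waters, can be illustrated by a constrained least-squares fit to end-member compositions. The end-member values and the weighted sum-to-one trick below are invented for illustration and are not M3's actual algorithm (which begins with a principal component analysis step).

```python
import numpy as np

# Simplified mixing calculation: express a measured groundwater sample as
# proportions of reference end-member waters, constrained to sum to 1.
# Columns are illustrative major ions (Cl, Na, SO4, HCO3 in mg/L).
end_members = np.array([
    [5.0,     3.0,    1.0,   20.0],   # meteoric water
    [19000., 10500., 2600.,  140.0],  # seawater-like brine
    [50.0,    30.0,   10.0,  300.0],  # old groundwater
])

def mixing_proportions(sample, ems, weight=1e4):
    """Least-squares mixing proportions with a heavily weighted
    sum-to-one constraint row appended to the system."""
    A = np.vstack([ems.T, weight * np.ones(len(ems))])
    b = np.append(sample, weight)
    x, *_ = np.linalg.lstsq(A, b, rcond=None)
    return x

# Synthetic sample: 70% meteoric, 20% brine, 10% old groundwater.
true_p = np.array([0.7, 0.2, 0.1])
sample = true_p @ end_members
p = mixing_proportions(sample, end_members)
print(np.round(p, 3))  # recovers ~[0.7, 0.2, 0.1]
```

In M3's terms, the fitted proportions are the mixing part; whatever residual remains between the sample and the best mix is then attributed to reactions (sources and sinks, reported in mg/L).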

  1. Analysing $J/\Psi$ Production in Various RHIC Interactions with a Version of the Sequential Chain Model (SCM)

    CERN Document Server

    Guptaroy, P; Sau, Goutam; Biswas, S K; Bhattacharya, S

    2009-01-01

    We have attempted to develop here, tentatively, a model for $J/\Psi$ production in p+p, d+Au, Cu+Cu and Au+Au collisions at RHIC energies on the basic ansatz that the results of nucleus-nucleus collisions could be arrived at from the nucleon-nucleon (p+p) interactions with the induction of some additional specific features of high energy nuclear collisions. Based on the proposed new and somewhat unfamiliar model, we have tried (i) to capture the properties of invariant $p_T$-spectra for $J/\Psi$ meson production; (ii) to study the nature of the centrality dependence of the $p_T$-spectra; (iii) to understand the rapidity distributions; (iv) to obtain the characteristics of the average transverse momentum $\langle p_T \rangle$ and the values of $\langle p_T^2 \rangle$ as well; and (v) to trace the nature of the nuclear modification factor. The alternative approach adopted here describes the data-sets on the above-mentioned various observables in a fairly satisfactory manner. And, finally, the nature of $J/\Psi$-production at Large Hadron Collider (LHC)-energ...

  2. M3 version 3.0: Verification and validation; Hydrochemical model of ground water at repository site

    Energy Technology Data Exchange (ETDEWEB)

    Gomez, Javier B. (Dept. of Earth Sciences, Univ. of Zaragoza, Zaragoza (Spain)); Laaksoharju, Marcus (Geopoint AB, Sollentuna (Sweden)); Skaarman, Erik (Abscondo, Bromma (Sweden)); Gurban, Ioana (3D-Terra (Canada))

    2009-01-15

    Hydrochemical evaluation is a complex type of work that is carried out by specialists. The outcome of this work is generally presented as qualitative models and process descriptions of a site. To support and help to quantify the processes in an objective way, a multivariate mathematical tool entitled M3 (Multivariate Mixing and Mass balance calculations) has been constructed. The computer code can be used to trace the origin of the groundwater, and to calculate the mixing proportions and mass balances from groundwater data. The M3 code is a groundwater response model, which means that changes in the groundwater chemistry in terms of sources and sinks are traced in relation to an ideal mixing model. The complexity of the measured groundwater data determines the configuration of the ideal mixing model. Deviations from the ideal mixing model are interpreted as being due to reactions. Assumptions concerning important mineral phases altering the groundwater or uncertainties associated with thermodynamic constants do not affect the modelling because the calculations are solely based on the measured groundwater composition. M3 uses the opposite approach to that of many standard hydrochemical models. In M3, mixing is evaluated and calculated first. The constituents that cannot be described by mixing are described by reactions. The M3 model consists of three steps: the first is a standard principal component analysis, followed by mixing and finally mass balance calculations. The measured groundwater composition can be described in terms of mixing proportions (%), while the sinks and sources of an element associated with reactions are reported in mg/L. This report contains a set of verification and validation exercises with the intention of building confidence in the use of the M3 methodology. At the same time, clear answers are given to questions related to the accuracy and the precision of the results, including the inherent uncertainties and the errors that can be made
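
The three M3 steps can be caricatured in a few lines: once PCA has fixed the end-member waters, mixing proportions are a constrained least-squares fit, and the per-element misfit is the mass-balance (source/sink) term. The end-member compositions, tracer choice, and every number below are invented for illustration and are not taken from the report:

```python
import numpy as np

# Hypothetical end-member waters, one row each; columns = (Cl, Na, Mg) in mg/L.
members = np.array([
    [5.0, 10.0, 2.0],     # "meteoric"-type water (made-up values)
    [100.0, 60.0, 10.0],  # "brine"-type water
    [20.0, 15.0, 30.0],   # "glacial"-type water
])

def mixing_proportions(sample, members):
    """Step 2: least-squares mixing proportions, with sum(x) = 1 as an extra row."""
    A = np.vstack([members.T, np.ones(len(members))])
    b = np.append(sample, 1.0)
    x, *_ = np.linalg.lstsq(A, b, rcond=None)
    return x

def mass_balance(sample, members, props):
    """Step 3: deviation from ideal mixing per element, mg/L (read as reactions)."""
    return sample - props @ members

# A sample that is an exact 50/30/20 mixture: the proportions are recovered and
# the mass-balance term is ~0. A real sample off the mixing plane would leave
# nonzero per-element residuals, flagging sources or sinks.
sample = 0.5 * members[0] + 0.3 * members[1] + 0.2 * members[2]
props = mixing_proportions(sample, members)
print(props.round(3))  # -> [0.5 0.3 0.2]
```

With more tracers than end-members the system is overdetermined and the residual no longer vanishes in general; M3 reads that misfit as the gains and losses due to reactions.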

  3. Inspection of the Math Model Tools for On-Orbit Assessment of Impact Damage Report. Version 1.0

    Science.gov (United States)

    Harris, Charles E.; Raju, Ivatury S.; Piascik, Robert S.; Kramer White, Julie; Labbe, Steve G.; Rotter, Hank A.

    2005-01-01

    In the spring of 2005, the NASA Engineering and Safety Center (NESC) was engaged by the Space Shuttle Program (SSP) to peer review the suite of analytical tools being developed to support the determination of impact and damage tolerance of the Orbiter Thermal Protection Systems (TPS). The NESC formed an independent review team covering the core disciplines of materials, flight sciences, structures, mechanical analysis, and thermal analysis. The Math Model Tools reviewed included damage prediction and stress analysis, aeroheating analysis, and thermal analysis tools. Some tools are physics based and others are empirically derived. Each tool was created for a specific use and timeframe, including certification, real-time pre-launch assessments, and real-time on-orbit assessments. The tools are used together in an integrated strategy for assessing the ramifications of impact damage to tile and RCC. The NESC team conducted a peer review of the engineering data package for each Math Model Tool. This report contains a summary of the team's observations and recommendations from these reviews.

  4. Attribute-Based Signcryption: Signer Privacy, Strong Unforgeability and IND-CCA Security in Adaptive-Predicates Model (Extended Version)

    Directory of Open Access Journals (Sweden)

    Tapas Pandit

    2016-08-01

    Attribute-Based Signcryption (ABSC) is a natural extension of Attribute-Based Encryption (ABE) and Attribute-Based Signature (ABS), where one has message confidentiality and authenticity together. Since signer privacy is captured in the security of ABS, it is natural to expect that signer privacy will also be preserved in ABSC. In this paper, we first propose an ABSC scheme which is weakly existentially unforgeable and IND-CCA secure in adaptive-predicates models and achieves signer privacy. Then, by applying a strongly unforgeable one-time signature (OTS), the above scheme is lifted to an ABSC scheme that attains strong existential unforgeability in the adaptive-predicates model. Both ABSC schemes are constructed on a common setup, i.e., the public parameters and keys are the same for both the encryption and signature modules. Our first construction is in the flavor of the CtE&S paradigm, except for one extra component that is computed using both signature components and ciphertext components. The second construction follows a new paradigm (an extension of CtE&S) that we call "Commit then Encrypt and Sign then Sign" (CtE&S). The last signature is generated using a strong OTS scheme. Since non-repudiation is achieved by the CtE&S paradigm, our systems achieve it as well.

  5. caCORE version 3: Implementation of a model driven, service-oriented architecture for semantic interoperability

    Science.gov (United States)

    Komatsoulis, George A.; Warzel, Denise B.; Hartel, Frank W.; Shanbhag, Krishnakant; Chilukuri, Ram; Fragoso, Gilberto; de Coronado, Sherri; Reeves, Dianne M.; Hadfield, Jillaine B.; Ludet, Christophe; Covitz, Peter A.

    2008-01-01

    One of the requirements for a federated information system is interoperability, the ability of one computer system to access and use the resources of another system. This feature is particularly important in biomedical research systems, which need to coordinate a variety of disparate types of data. In order to meet this need, the National Cancer Institute Center for Bioinformatics (NCICB) has created the cancer Common Ontologic Representation Environment (caCORE), an interoperability infrastructure based on Model Driven Architecture. The caCORE infrastructure provides a mechanism to create interoperable biomedical information systems. Systems built using the caCORE paradigm address both aspects of interoperability: the ability to access data (syntactic interoperability) and understand the data once retrieved (semantic interoperability). This infrastructure consists of an integrated set of three major components: a controlled terminology service (Enterprise Vocabulary Services), a standards-based metadata repository (the cancer Data Standards Repository) and an information system with an Application Programming Interface (API) based on Domain Model Driven Architecture. This infrastructure is being leveraged to create a Semantic Service Oriented Architecture (SSOA) for cancer research by the National Cancer Institute’s cancer Biomedical Informatics Grid (caBIG™). PMID:17512259

  6. A model using marginal efficiency of investment to analyze carbon and nitrogen interactions in terrestrial ecosystems (ACONITE Version 1)

    Science.gov (United States)

    Thomas, R. Q.; Williams, M.

    2014-09-01

    Carbon (C) and nitrogen (N) cycles are coupled in terrestrial ecosystems through multiple processes including photosynthesis, tissue allocation, respiration, N fixation, N uptake, and decomposition of litter and soil organic matter. Capturing the constraint of N on terrestrial C uptake and storage has been a focus of the Earth System Modeling community. However, there is little understanding of the trade-offs and sensitivities of allocating C and N to different tissues in order to optimize the productivity of plants. Here we describe a new, simple model of ecosystem C-N cycling and interactions (ACONITE) that builds on theory related to plant economics in order to predict key ecosystem properties (leaf area index, leaf C : N, N fixation, and plant C use efficiency) based on the outcome of assessments of the marginal change in net C or N uptake associated with a change in allocation of C or N to plant tissues. We simulated and evaluated steady-state ecosystem stocks and fluxes in three different forest ecosystem types (tropical evergreen, temperate deciduous, and temperate evergreen). Leaf C : N differed among the three ecosystem types (temperate deciduous < temperate evergreen < tropical evergreen). Simulated N fixation, calculated from the relative demand for N and the marginal return on C investment to acquire N, was an order of magnitude higher in the tropical forest than in the temperate forests, consistent with observations. A sensitivity analysis revealed that the parameterization of the relationship between leaf N and leaf respiration had the largest influence on leaf area index and leaf C : N. A parameter governing how photosynthesis scales with day length had the largest influence on total vegetation C, GPP, and NPP. Multiple parameters associated with photosynthesis, respiration, and N uptake influenced the rate of N fixation. Overall, our ability to constrain leaf area index and allow spatially and temporally variable leaf C : N can help address challenges simulating these properties in ecosystem and Earth System models. Furthermore, the simple
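
The marginal-return logic in the abstract can be sketched as a greedy allocator: at each step a small parcel of C goes to whichever tissue yields the larger finite-difference increase in net C uptake, so marginal returns equalize over time. The toy uptake function and every parameter below are invented for illustration and are not ACONITE's actual formulation:

```python
def net_uptake(leaf_c, root_c):
    """Toy net C uptake: saturating photosynthesis (leaf) co-limited by N
    acquisition (root), minus a linear maintenance cost. All numbers made up."""
    photo = 10.0 * leaf_c / (leaf_c + 50.0)
    n_limit = root_c / (root_c + 30.0)
    return photo * n_limit - 0.001 * (leaf_c + root_c)

def allocate(budget, step=0.1):
    """Greedy marginal-efficiency allocation of a C budget to leaf vs. root."""
    leaf, root = 1.0, 1.0
    while budget >= step:
        base = net_uptake(leaf, root)
        d_leaf = net_uptake(leaf + step, root) - base
        d_root = net_uptake(leaf, root + step) - base
        if max(d_leaf, d_root) <= 0.0:  # no tissue repays its maintenance cost
            break
        if d_leaf >= d_root:
            leaf += step
        else:
            root += step
        budget -= step
    return leaf, root

leaf, root = allocate(100.0)
```

Stopping when no tissue's marginal gain exceeds its marginal cost is the same optimality criterion the abstract describes, applied here to only two tissues.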

  7. A model using marginal efficiency of investment to analyse carbon and nitrogen interactions in terrestrial ecosystems (ACONITE Version 1)

    Science.gov (United States)

    Thomas, R. Q.; Williams, M.

    2014-04-01

    Carbon (C) and nitrogen (N) cycles are coupled in terrestrial ecosystems through multiple processes including photosynthesis, tissue allocation, respiration, N fixation, N uptake, and decomposition of litter and soil organic matter. Capturing the constraint of N on terrestrial C uptake and storage has been a focus of the Earth System modelling community. However, there is little understanding of the trade-offs and sensitivities of allocating C and N to different tissues in order to optimize the productivity of plants. Here we describe a new, simple model of