WorldWideScience

Sample records for episodic model version

  1. Episodes, events, and models

    Directory of Open Access Journals (Sweden)

    Sangeet Khemlani

    2015-10-01

    Full Text Available We describe a novel computational theory of how individuals segment perceptual information into representations of events. The theory is inspired by recent findings in the cognitive science and cognitive neuroscience of event segmentation. In line with recent theories, it holds that online event segmentation is automatic, and that event segmentation yields mental simulations of events. But it posits two novel principles as well: first, discrete episodic markers track perceptual and conceptual changes, and can be retrieved to construct event models. Second, the process of retrieving and reconstructing those episodic markers is constrained and prioritized. We describe a computational implementation of the theory, as well as a robotic extension of the theory that demonstrates the processes of online event segmentation and event model construction. The theory is the first unified computational account of event segmentation and temporal inference. We conclude by demonstrating how neuroimaging data can constrain and inspire the construction of process-level theories of human reasoning.
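    The first of the abstract's two principles, episodic markers laid down where perceptual features change, can be caricatured in a few lines. This is a toy sketch, not the authors' implementation; the feature vectors, distance measure, and threshold are all invented for illustration:

    ```python
    # Toy boundary detection: lay down an "episodic marker" whenever the change
    # between consecutive feature vectors exceeds a threshold. Illustrative only.

    def segment(features, threshold=0.5):
        """Return indices where the change between consecutive feature
        vectors exceeds `threshold`, each treated as a stored marker."""
        markers = []
        for i in range(1, len(features)):
            change = sum(abs(a - b) for a, b in zip(features[i], features[i - 1]))
            if change > threshold:
                markers.append(i)  # a marker at the detected event boundary
        return markers

    stream = [(0.0, 0.0), (0.1, 0.0), (0.9, 0.8), (1.0, 0.8), (0.1, 0.1)]
    print(segment(stream))  # -> [2, 4]: boundaries at the two large feature shifts
    ```

    Retrieving the stored markers would then reconstruct the event sequence, which is where the theory's constrained-retrieval principle would apply.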

  2. The Generalized Quantum Episodic Memory Model.

    Science.gov (United States)

    Trueblood, Jennifer S; Hemmer, Pernille

    2017-11-01

    Recent evidence suggests that experienced events are often mapped to too many episodic states, including those that are logically or experimentally incompatible with one another. For example, episodic over-distribution patterns show that the probability of accepting an item under different mutually exclusive conditions violates the disjunction rule. A related example, called subadditivity, occurs when the probability of accepting an item under mutually exclusive and exhaustive instruction conditions sums to a number >1. Both the over-distribution effect and subadditivity have been widely observed in item and source-memory paradigms. These phenomena are difficult to explain using standard memory frameworks, such as signal-detection theory. A dual-trace model called the over-distribution (OD) model (Brainerd & Reyna, 2008) can explain the episodic over-distribution effect, but not subadditivity. Our goal is to develop a model that can explain both effects. In this paper, we propose the Generalized Quantum Episodic Memory (GQEM) model, which extends the Quantum Episodic Memory (QEM) model developed by Brainerd, Wang, and Reyna (2013). We test GQEM by comparing it to the OD model using data from a novel item-memory experiment and a previously published source-memory experiment (Kellen, Singmann, & Klauer, 2014) examining the over-distribution effect. Using the best-fit parameters from the over-distribution experiments, we conclude by showing that the GQEM model can also account for subadditivity. Overall these results add to a growing body of evidence suggesting that quantum probability theory is a valuable tool in modeling recognition memory. Copyright © 2016 Cognitive Science Society, Inc.
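    To see why subadditivity is awkward for classical accounts: if the instruction conditions are mutually exclusive and exhaustive, classical probability requires the acceptance probabilities for the same item to sum to exactly 1. A minimal numeric illustration (the figures are invented, not data from the paper):

    ```python
    # Illustrative acceptance probabilities for the same test item under three
    # mutually exclusive and exhaustive instruction conditions. Classically
    # (Kolmogorov probability), these should sum to exactly 1.
    p_accept = {"verbatim": 0.55, "gist": 0.40, "neither": 0.20}

    total = sum(p_accept.values())
    print(round(total, 2))  # 1.15: exceeds 1, the subadditivity pattern
    print(total > 1.0)      # True: ruled out by standard probability theory
    ```

    Quantum probability models like GQEM escape this constraint because judgments under incompatible conditions need not correspond to events in a single sample space.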

  3. The Unified Extensional Versioning Model

    DEFF Research Database (Denmark)

    Asklund, U.; Bendix, Lars Gotfred; Christensen, H. B.

    1999-01-01

    Versioning of components in a system is a well-researched field where various adequate techniques have already been established. In this paper, we look at how versioning can be extended to also cover the structural aspects of a system. There exist two basic techniques for versioning - intentional...

  4. Versions of the Waste Reduction Model (WARM)

    Science.gov (United States)

    This page provides a brief chronology of changes made to EPA’s Waste Reduction Model (WARM), organized by WARM version number. The page includes brief summaries of changes and updates since the previous version.

  5. Episodic grammar: a computational model of the interaction between episodic and semantic memory in language processing

    NARCIS (Netherlands)

    Borensztajn, G.; Zuidema, W.; Carlson, L.; Hoelscher, C.; Shipley, T.F.

    2011-01-01

    We present a model of the interaction of semantic and episodic memory in language processing. Our work shows how language processing can be understood in terms of memory retrieval. We point out that the perceived dichotomy between rule-based versus exemplar-based language modelling can be

  6. The Interpersonal Conflict Episode: A Systems Model.

    Science.gov (United States)

    Slawski, Carl

    A detailed systems diagram elaborates the process of dealing with a single conflict episode between two parties or persons. Hypotheses are fully stated to lead the reader through the flow diagram. A concrete example illustrates its use. Detail is provided in an accounting scheme of virtually all possible variables to consider in analyzing a…

  7. Model-based version management system framework

    International Nuclear Information System (INIS)

    Mehmood, W.

    2016-01-01

    In this paper we present a model-based version management system. A Version Management System (VMS), a branch of software configuration management (SCM), aims to provide a controlling mechanism for the evolution of software artifacts created during the software development process. Controlling this evolution involves many activities, such as the construction and creation of versions, identification of differences between versions, conflict detection, and merging. Traditional VMSs are file-based and treat a software system as a set of text files. File-based VMSs are inadequate for software configuration management activities such as version control of software artifacts produced in the earlier phases of the software life cycle. New challenges of model differencing, merging, and evolution control arise when models are used as the central artifact. The goal of this work is to present a generic framework for a model-based VMS which can be used to overcome the problems of traditional file-based VMSs and provide model versioning services. (author)
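    The versioning activities listed above (differencing, conflict detection, merging) can be sketched for models treated as sets of identified elements rather than lines of text. This is a hedged illustration under an assumed representation, not the paper's framework; the element IDs and values are invented:

    ```python
    # A model version as a mapping from element IDs to property values.
    # Three-way differencing against a common base version, as a model-based
    # VMS must do when two developers edit the same model.

    def three_way(base, left, right):
        """Classify each element as unchanged, merged, or conflicting."""
        merged, conflicts = {}, []
        for key in set(base) | set(left) | set(right):
            b, l, r = base.get(key), left.get(key), right.get(key)
            if l == r:
                merged[key] = l        # identical on both sides
            elif l == b:
                merged[key] = r        # only the right version changed it
            elif r == b:
                merged[key] = l        # only the left version changed it
            else:
                conflicts.append(key)  # both changed it differently
        return merged, conflicts

    base  = {"Class:A": "name=A",  "Assoc:1": "A->B"}
    left  = {"Class:A": "name=A2", "Assoc:1": "A->B"}
    right = {"Class:A": "name=A3", "Assoc:1": "A->C"}
    merged, conflicts = three_way(base, left, right)
    print(conflicts)  # -> ['Class:A']: both versions renamed the class differently
    ```

    A text-file VMS diffing serialized models line by line would miss this element-level structure, which is the paper's motivation for model-aware versioning.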

  8. Elements of episodic-like memory in animal models.

    Science.gov (United States)

    Crystal, Jonathon D

    2009-03-01

    Representations of unique events from one's past constitute the content of episodic memories. A number of studies with non-human animals have revealed that animals remember specific episodes from their past (referred to as episodic-like memory). The development of animal models of memory holds enormous potential for gaining insight into the biological bases of human memory. Specifically, given the extensive knowledge of the rodent brain, the development of rodent models of episodic memory would open new opportunities to explore the neuroanatomical, neurochemical, neurophysiological, and molecular mechanisms of memory. Development of such animal models holds enormous potential for studying functional changes in episodic memory in animal models of Alzheimer's disease, amnesia, and other human memory pathologies. This article reviews several approaches that have been used to assess episodic-like memory in animals. The approaches reviewed include the discrimination of what, where, and when in a radial arm maze, dissociation of recollection and familiarity, object recognition, binding, unexpected questions, and anticipation of a reproductive state. The diversity of approaches may promote the development of converging lines of evidence on the difficult problem of assessing episodic-like memory in animals.

  9. Modeling HIV-1 drug resistance as episodic directional selection.

    Science.gov (United States)

    Murrell, Ben; de Oliveira, Tulio; Seebregts, Chris; Kosakovsky Pond, Sergei L; Scheffler, Konrad

    2012-01-01

    The evolution of substitutions conferring drug resistance to HIV-1 is both episodic, occurring when patients are on antiretroviral therapy, and strongly directional, with site-specific resistant residues increasing in frequency over time. While methods exist to detect episodic diversifying selection and continuous directional selection, no evolutionary model combining these two properties has been proposed. We present two models of episodic directional selection (MEDS and EDEPS) which allow the a priori specification of lineages expected to have undergone directional selection. The models infer the sites and target residues that were likely subject to directional selection, using either codon or protein sequences. Compared to its null model of episodic diversifying selection, MEDS provides a superior fit to most sites known to be involved in drug resistance, and neither a test for episodic diversifying selection nor one for constant directional selection is able to detect as many true positives as MEDS and EDEPS while maintaining acceptable levels of false positives. This suggests that episodic directional selection is a better description of the process driving the evolution of drug resistance.
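    The combination of episodic and directional selection described above can be illustrated with a deterministic toy model (not MEDS or EDEPS themselves): a resistant residue whose frequency is pushed upward only during therapy episodes. The selection coefficient and episode window are invented for illustration:

    ```python
    # Deterministic one-locus selection: the resistant residue's frequency
    # rises only while therapy is applied (episodic) and always toward
    # fixation (directional).

    def next_freq(p, s):
        """One generation of directional selection with coefficient s."""
        return p * (1 + s) / (1 + p * s)

    def simulate(p0, episodes, s=0.5, generations=20):
        """`episodes(g)` says whether generation g falls inside a therapy episode."""
        p, history = p0, [p0]
        for g in range(generations):
            p = next_freq(p, s) if episodes(g) else p
            history.append(p)
        return history

    h = simulate(0.01, episodes=lambda g: 5 <= g < 15)
    print(h[5] == h[0])    # True: flat before therapy starts
    print(h[15] > h[5])    # True: rises during the episode
    print(h[20] == h[15])  # True: flat again once therapy ends
    ```

    MEDS and EDEPS work in the reverse direction: given sequences, they infer which sites and target residues show this episodic, directional signature along pre-specified lineages.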

  10. Modeling HIV-1 drug resistance as episodic directional selection.

    Directory of Open Access Journals (Sweden)

    Ben Murrell

    Full Text Available The evolution of substitutions conferring drug resistance to HIV-1 is both episodic, occurring when patients are on antiretroviral therapy, and strongly directional, with site-specific resistant residues increasing in frequency over time. While methods exist to detect episodic diversifying selection and continuous directional selection, no evolutionary model combining these two properties has been proposed. We present two models of episodic directional selection (MEDS and EDEPS) which allow the a priori specification of lineages expected to have undergone directional selection. The models infer the sites and target residues that were likely subject to directional selection, using either codon or protein sequences. Compared to its null model of episodic diversifying selection, MEDS provides a superior fit to most sites known to be involved in drug resistance, and neither a test for episodic diversifying selection nor one for constant directional selection is able to detect as many true positives as MEDS and EDEPS while maintaining acceptable levels of false positives. This suggests that episodic directional selection is a better description of the process driving the evolution of drug resistance.

  11. Modelling episodic acidification of surface waters: the state of science.

    Science.gov (United States)

    Eshleman, K N; Wigington, P J; Davies, T D; Tranter, M

    1992-01-01

    Field studies of chemical changes in surface waters associated with rainfall and snowmelt events have provided evidence of episodic acidification of lakes and streams in Europe and North America. Modelling these chemical changes is particularly challenging because of the variability associated with hydrological transport and chemical transformation processes in catchments. This paper provides a review of mathematical models that have been applied to the problem of episodic acidification. Several empirical approaches, including regression models, mixing models and time series models, support a strong hydrological interpretation of episodic acidification. Regional application of several models has suggested that acidic episodes (in which the acid neutralizing capacity becomes negative) are relatively common in surface waters in several regions of the US that receive acid deposition. Results from physically based models have suggested a lack of understanding of hydrological flowpaths, hydraulic residence times and biogeochemical reactions, particularly those involving aluminum. The ability to better predict episodic chemical responses of surface waters is thus dependent upon elucidation of these and other physical and chemical processes.
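    The mixing-model approach mentioned in the review can be illustrated with a two-component sketch: streamwater acid neutralizing capacity (ANC) as a flow-weighted mixture of well-buffered baseflow and dilute, acidic event water. All values below are invented for illustration, not taken from the paper:

    ```python
    # Two-component mixing sketch of an acidic episode. ANC is in
    # microequivalents per litre; the end-member values are illustrative.

    def mixed_anc(f_event, anc_event=-40.0, anc_base=80.0):
        """ANC of the mixture when a fraction f_event of flow is event water."""
        return f_event * anc_event + (1.0 - f_event) * anc_base

    for f in (0.0, 0.5, 0.8):
        print(f, mixed_anc(f))
    # ANC falls as event water dominates the hydrograph; with these end-members
    # it turns negative above roughly two-thirds event water, the conventional
    # definition of an acidic episode.
    ```

    Such mixing models capture the strong hydrological control on episodic acidification, which is why the empirical approaches reviewed here support a hydrological interpretation.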

  12. Modeling report of DYMOND code (DUPIC version)

    International Nuclear Information System (INIS)

    Park, Joo Hwan; Yacout, Abdellatif M.

    2003-04-01

    The DYMOND code employs the ITHINK dynamic modeling platform to assess 100-year dynamic evolution scenarios for postulated global nuclear energy parks. DYMOND was originally developed by ANL (Argonne National Laboratory) to perform fuel cycle analysis of LWR once-through and mixed LWR-FBR plants. Because wider application of the code was requested, the first version of DYMOND was modified to accommodate the DUPIC, MSR and RTF fuel cycles. The DYMOND code is composed of three parts: the source language platform, input supply and output, although these are not clearly separated. This report describes all the equations modeled in the modified DYMOND code (referred to as the DYMOND-DUPIC version). It is divided into five parts. Part A covers the reactor history model, which includes the amounts of requested fuel and spent fuel. Part B describes the fuel cycle model, covering fuel flow from the beginning to the end of the fuel cycle. Part C covers the reprocessing model, which includes the recovery of burned uranium, plutonium, minor actinides and fission products, as well as the amount of spent fuel in storage and disposal. Part D covers other fuel cycle models, which consider the thorium fuel cycle for the MSR and RTF reactors. Part E covers the economics model, which provides all cost information, such as uranium mining cost, reactor operating cost, fuel cost, etc.

  13. Modeling report of DYMOND code (DUPIC version)

    Energy Technology Data Exchange (ETDEWEB)

    Park, Joo Hwan [KAERI, Taejon (Korea, Republic of); Yacout, Abdellatif M [Argonne National Laboratory, Ilinois (United States)

    2003-04-01

    The DYMOND code employs the ITHINK dynamic modeling platform to assess 100-year dynamic evolution scenarios for postulated global nuclear energy parks. DYMOND was originally developed by ANL (Argonne National Laboratory) to perform fuel cycle analysis of LWR once-through and mixed LWR-FBR plants. Because wider application of the code was requested, the first version of DYMOND was modified to accommodate the DUPIC, MSR and RTF fuel cycles. The DYMOND code is composed of three parts: the source language platform, input supply and output, although these are not clearly separated. This report describes all the equations modeled in the modified DYMOND code (referred to as the DYMOND-DUPIC version). It is divided into five parts. Part A covers the reactor history model, which includes the amounts of requested fuel and spent fuel. Part B describes the fuel cycle model, covering fuel flow from the beginning to the end of the fuel cycle. Part C covers the reprocessing model, which includes the recovery of burned uranium, plutonium, minor actinides and fission products, as well as the amount of spent fuel in storage and disposal. Part D covers other fuel cycle models, which consider the thorium fuel cycle for the MSR and RTF reactors. Part E covers the economics model, which provides all cost information, such as uranium mining cost, reactor operating cost, fuel cost, etc.

  14. Social functioning in Chinese college students with and without schizotypal personality traits: an exploratory study of the Chinese version of the First Episode Social Functioning Scale.

    Directory of Open Access Journals (Sweden)

    Yi Wang

    Full Text Available OBJECTIVES: The First Episode Social Functioning Scale (FESFS) was designed to measure social functioning of young individuals with schizophrenia. The aim of this study was to validate a Chinese version of the FESFS in a sample of young Chinese adults. METHOD: The FESFS was translated to Chinese prior to being administered to 1576 college students. The factor structure, reliability, and validity of the scale were examined. RESULTS: Two items were deleted after item analysis, and the internal consistency of the whole scale was .89. A six-factor structure was derived by exploratory factor analysis. The factors were interpersonal, family and friends, school, living skills, intimacy, and balance. Estimates of the structural equation model supported this structure, with a goodness-of-fit chi-square χ2 = 1097.53 (p < 0.0001), a root mean square error of approximation (RMSEA) of 0.058, and a comparative fit index (CFI) of 0.93. Scale validity was supported by significant correlations between social functioning factor scores and Schizotypal Personality Questionnaire (SPQ) scores. Individuals with schizotypal personality features presented poorer social functioning than those without. CONCLUSIONS: The Chinese revised version of the FESFS was found to have good psychometric properties and could be used in the future to examine social functioning in Chinese college students.

  15. Forsmark - site descriptive model version 0

    International Nuclear Information System (INIS)

    2002-10-01

    During 2002, the Swedish Nuclear Fuel and Waste Management Company (SKB) is starting investigations at two potential sites for a deep repository in the Precambrian basement of the Fennoscandian Shield. The present report concerns one of those sites, Forsmark, which lies in the municipality of Oesthammar, on the east coast of Sweden, about 150 kilometres north of Stockholm. The site description should present all collected data and interpreted parameters of importance for the overall scientific understanding of the site, for the technical design and environmental impact assessment of the deep repository, and for the assessment of long-term safety. The site description will have two main components: a written synthesis of the site, summarising the current state of knowledge, as documented in the databases containing the primary data from the site investigations, and one or several site descriptive models, in which the collected information is interpreted and presented in a form which can be used in numerical models for rock engineering, environmental impact and long-term safety assessments. The site descriptive models are devised and stepwise updated as the site investigations proceed. The point of departure for this process is the regional site descriptive model, version 0, which is the subject of the present report. Version 0 is developed out of the information available at the start of the site investigation. This information, with the exception of data from tunnels and drill holes at the sites of the Forsmark nuclear reactors and the underground low- and intermediate-level radioactive waste storage facility, SFR, is mainly 2D in nature (surface data), and is general and regional, rather than site-specific, in content. For this reason, the Forsmark site descriptive model, version 0, as detailed in the present report, has been developed at a regional scale.
It covers a rectangular area, 15 km in a southwest-northeast and 11 km in a northwest-southeast direction, around the

  16. Modeling Regional Pollution Episodes With The Ctm Mocage.

    Science.gov (United States)

    Dufour, A.; Brocheton, F.; Amodei, M.; Peuch, V.-H.

    Several regional ozone pollution episodes have been studied in the context of two recent extensive field campaigns in France: ESQUIF, in the Paris region, and ESCOMPTE, in the vicinity of Marseilles. MOCAGE is an off-line multi-scale Chemistry and Transport Model (CTM), driven by the operational numerical weather prediction models of Météo-France, ARPEGE and ALADIN. It covers from the global to the regional scale, by means of up to four levels of nested domains, and extends up to the middle stratosphere; thus, there is no need for external boundary conditions, either on the horizontal or on the vertical. These original features allow MOCAGE to cover a wide range of scientific applications, from routine air-pollution forecasts to long-term simulations related to climate issues. The present study focuses on the simulation of regional-scale photo-oxidant episodes and on the impact on larger scales of the transport of ozone, precursors and reservoir species. The first example concerns a polluted episode of the ESQUIF campaign (IOP6). In addition to ground measurements, 8 flights documented the situation, showing a diversity of chemical regimes. This variability is quite satisfactorily reproduced by the model. Special attention was also paid to vertical and horizontal exchanges, particularly to interactions between the boundary layer and the free troposphere. An interesting case of an ill-represented residual nocturnal plume in the simulation of ESQUIF IOP5 will be presented: during this IOP, the vertical structure of the lower troposphere was well characterized by four flights. Free-troposphere concentrations of ozone appear to be well reproduced by the model, except for the intensity and vertical extent of a residual plume, which are overestimated. For the day after, in addition to a direct impact on surface concentrations, the simulated development of the boundary layer is found to be too slow; both errors contribute to an

  17. Forsmark - site descriptive model version 0

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    2002-10-01

    During 2002, the Swedish Nuclear Fuel and Waste Management Company (SKB) is starting investigations at two potential sites for a deep repository in the Precambrian basement of the Fennoscandian Shield. The present report concerns one of those sites, Forsmark, which lies in the municipality of Oesthammar, on the east coast of Sweden, about 150 kilometres north of Stockholm. The site description should present all collected data and interpreted parameters of importance for the overall scientific understanding of the site, for the technical design and environmental impact assessment of the deep repository, and for the assessment of long-term safety. The site description will have two main components: a written synthesis of the site, summarising the current state of knowledge, as documented in the databases containing the primary data from the site investigations, and one or several site descriptive models, in which the collected information is interpreted and presented in a form which can be used in numerical models for rock engineering, environmental impact and long-term safety assessments. The site descriptive models are devised and stepwise updated as the site investigations proceed. The point of departure for this process is the regional site descriptive model, version 0, which is the subject of the present report. Version 0 is developed out of the information available at the start of the site investigation. This information, with the exception of data from tunnels and drill holes at the sites of the Forsmark nuclear reactors and the underground low- and intermediate-level radioactive waste storage facility, SFR, is mainly 2D in nature (surface data), and is general and regional, rather than site-specific, in content. For this reason, the Forsmark site descriptive model, version 0, as detailed in the present report, has been developed at a regional scale.
It covers a rectangular area, 15 km in a southwest-northeast and 11 km in a northwest-southeast direction, around the

  18. Simpevarp - site descriptive model version 0

    International Nuclear Information System (INIS)

    2002-11-01

    During 2002, SKB is starting detailed investigations at two potential sites for a deep repository in the Precambrian rocks of the Fennoscandian Shield. The present report concerns one of those sites, Simpevarp, which lies in the municipality of Oskarshamn, on the southeast coast of Sweden, about 250 kilometres south of Stockholm. The site description will have two main components: a written synthesis of the site, summarising the current state of knowledge, as documented in the databases containing the primary data from the site investigations, and one or several site descriptive models, in which the collected information is interpreted and presented in a form which can be used in numerical models for rock engineering, environmental impact and long-term safety assessments. SKB maintains two main databases at the present time, a site characterisation database called SICADA and a geographic information system called SKB GIS. The site descriptive model will be developed and presented with the aid of the SKB GIS capabilities, and with SKB's Rock Visualisation System (RVS), which is also linked to SICADA. The version 0 model forms an important framework for subsequent model versions, which are developed successively, as new information from the site investigations becomes available. Version 0 is developed out of the information available at the start of the site investigation. In the case of Simpevarp, this is essentially the information which was compiled for the Oskarshamn feasibility study, which led to the choice of that area as a favourable object for further study, together with information collected since its completion. This information, with the exception of the extensive database from the nearby Aespoe Hard Rock Laboratory, is mainly 2D in nature (surface data), and is general and regional, rather than site-specific, in content.
Against this background, the present report consists of the following components: an overview of the present content of the databases

  19. Simpevarp - site descriptive model version 0

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    2002-11-01

    During 2002, SKB is starting detailed investigations at two potential sites for a deep repository in the Precambrian rocks of the Fennoscandian Shield. The present report concerns one of those sites, Simpevarp, which lies in the municipality of Oskarshamn, on the southeast coast of Sweden, about 250 kilometres south of Stockholm. The site description will have two main components: a written synthesis of the site, summarising the current state of knowledge, as documented in the databases containing the primary data from the site investigations, and one or several site descriptive models, in which the collected information is interpreted and presented in a form which can be used in numerical models for rock engineering, environmental impact and long-term safety assessments. SKB maintains two main databases at the present time, a site characterisation database called SICADA and a geographic information system called SKB GIS. The site descriptive model will be developed and presented with the aid of the SKB GIS capabilities, and with SKB's Rock Visualisation System (RVS), which is also linked to SICADA. The version 0 model forms an important framework for subsequent model versions, which are developed successively, as new information from the site investigations becomes available. Version 0 is developed out of the information available at the start of the site investigation. In the case of Simpevarp, this is essentially the information which was compiled for the Oskarshamn feasibility study, which led to the choice of that area as a favourable object for further study, together with information collected since its completion. This information, with the exception of the extensive database from the nearby Aespoe Hard Rock Laboratory, is mainly 2D in nature (surface data), and is general and regional, rather than site-specific, in content.
Against this background, the present report consists of the following components: an overview of the present content of the databases

  20. Integrating incremental learning and episodic memory models of the hippocampal region.

    NARCIS (Netherlands)

    Meeter, M.; Myers, C.E; Gluck, M.A.

    2005-01-01

    By integrating previous computational models of corticohippocampal function, the authors develop and test a unified theory of the neural substrates of familiarity, recollection, and classical conditioning. This approach integrates models from 2 traditions of hippocampal modeling, those of episodic

  1. Version control of pathway models using XML patches.

    Science.gov (United States)

    Saffrey, Peter; Orton, Richard

    2009-03-17

    Computational modelling has become an important tool in understanding biological systems such as signalling pathways. With an increase in the size and complexity of models comes a need for techniques to manage model versions and their relationship to one another. Model version control for pathway models shares some of the features of software version control but has a number of differences that warrant a specific solution. We present a model version control method, along with a prototype implementation, based on XML patches. We show its application to the EGF/RAS/RAF pathway. Our method allows quick and convenient storage of a wide range of model variations and enables a thorough explanation of these variations. Trying to produce these results without such methods results in slow and cumbersome development that is prone to frustration and human error.
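    The storage idea, one full model plus patches between successive versions, can be sketched with a generic text diff. This only illustrates the principle using Python's standard difflib, not the authors' prototype, and the pathway XML below is invented:

    ```python
    # Store one full XML model and, for each subsequent version, only a
    # unified-diff patch: each variant then costs only the lines that changed.
    import difflib

    v1 = """<model>
      <species id="EGF" amount="10"/>
      <reaction id="bind" rate="0.1"/>
    </model>""".splitlines(keepends=True)

    v2 = """<model>
      <species id="EGF" amount="10"/>
      <reaction id="bind" rate="0.25"/>
    </model>""".splitlines(keepends=True)

    patch = list(difflib.unified_diff(v1, v2,
                                      fromfile="model-v1.xml",
                                      tofile="model-v2.xml"))
    print("".join(patch))  # only the changed <reaction> line appears as -/+
    ```

    XML-aware patches (as in the paper) go further than a line diff, since they can track elements that move or are reordered without treating them as wholesale deletions and insertions.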

  2. Care episode retrieval: distributional semantic models for information retrieval in the clinical domain.

    Science.gov (United States)

    Moen, Hans; Ginter, Filip; Marsi, Erwin; Peltonen, Laura-Maria; Salakoski, Tapio; Salanterä, Sanna

    2015-01-01

    Patients' health related information is stored in electronic health records (EHRs) by health service providers. These records include sequential documentation of care episodes in the form of clinical notes. EHRs are used throughout the health care sector by professionals, administrators and patients, primarily for clinical purposes, but also for secondary purposes such as decision support and research. The vast amounts of information in EHR systems complicate information management and increase the risk of information overload. Therefore, clinicians and researchers need new tools to manage the information stored in the EHRs. A common use case is, given a (possibly unfinished) care episode, to retrieve the most similar care episodes among the records. This paper presents several methods for information retrieval, focusing on care episode retrieval, based on textual similarity, where similarity is measured through domain-specific modelling of the distributional semantics of words. Models include variants of random indexing and the semantic neural network model word2vec. Two novel methods are introduced that utilize the ICD-10 codes attached to care episodes to better induce domain-specificity in the semantic model. We report on an experimental evaluation of care episode retrieval that circumvents the lack of human judgements regarding episode relevance. Results suggest that several of the proposed methods outperform a state-of-the-art search engine (Lucene) on the retrieval task.
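    Ranking care episodes by textual similarity can be sketched with plain bag-of-words cosine similarity. The paper's models (random indexing, word2vec, ICD-10-informed variants) are far richer; this only illustrates the retrieval step, and the episode texts are invented:

    ```python
    # Rank past care episodes by cosine similarity of term-count vectors
    # against a query episode. Bag-of-words stand-in for the paper's
    # distributional semantic models.
    import math
    from collections import Counter

    def cosine(a: Counter, b: Counter) -> float:
        dot = sum(a[t] * b[t] for t in a)
        norm = (math.sqrt(sum(v * v for v in a.values()))
                * math.sqrt(sum(v * v for v in b.values())))
        return dot / norm if norm else 0.0

    episodes = {  # invented, toy clinical-note fragments
        "ep1": "chest pain troponin elevated ecg changes",
        "ep2": "fracture cast immobilisation follow up xray",
        "ep3": "chest pain shortness of breath ecg normal",
    }
    query = Counter("acute chest pain ecg".split())
    ranked = sorted(episodes,
                    key=lambda e: cosine(query, Counter(episodes[e].split())),
                    reverse=True)
    print(ranked[0])  # the most textually similar past episode
    ```

    Distributional models improve on this sketch by scoring semantically related terms (e.g. synonyms and abbreviations common in clinical text) as similar even when they never co-occur literally.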

  3. Vagus nerve stimulation inhibits trigeminal nociception in a rodent model of episodic migraine

    Directory of Open Access Journals (Sweden)

    Jordan L. Hawkins

    2017-12-01

    Conclusion: Our findings demonstrate that nVNS inhibits mechanical nociception and represses expression of proteins associated with peripheral and central sensitization of trigeminal neurons in a novel rodent model of episodic migraine.

  4. Model Adequacy Analysis of Matching Record Versions in Nosql Databases

    Directory of Open Access Journals (Sweden)

    E. V. Tsviashchenko

    2015-01-01

    Full Text Available The article investigates a model of matching record versions; the goal of this work is to analyse the model's adequacy. The model allows estimating the distribution of a user's processing time for record versions and the distribution of the record version count. The second variant of the model was used, according to which the time for a client to process record versions depends explicitly on the number of updates performed by the other users between the sequential updates performed by the current client. To demonstrate the model's adequacy, a real experiment was conducted in a cloud cluster of 10 virtual nodes provided by DigitalOcean. Ubuntu Server 14.04 was used as the operating system (OS), and the NoSQL system Riak was chosen for the experiments. Riak versions 2.0 and later provide the "dotted version vectors" (DVV) option, an extension of the classic vector clock. Its use guarantees that the number of versions simultaneously stored in the DB will not exceed the number of clients operating in parallel on a record, which is very important while conducting experiments. The application was developed with the Java library provided by Riak, and the processes run directly on the nodes. Two records were used in the experiment: Z, the record whose versions are handled by the clients, and RZ, a service record containing record update counters. The application algorithm can be briefly described as follows: every client reads the versions of record Z, processes its updates using the RZ counters, and saves the processed record in the database while old versions are deleted from the DB. Then the client rereads the RZ record and increments the update counters for the other clients. After that, the client rereads the Z record, saves the necessary statistics, and records the results of processing. In the case of a conflict arising from simultaneous updates of the RZ record, the client obtains all versions of that
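    The version-counting guarantee rests on causality tracking. A minimal vector-clock sketch (dotted version vectors refine this idea; the code below is an illustration, not Riak's implementation) shows how two record versions are classified as ordered or concurrent:

    ```python
    # Each client tags its writes with its own counter. Two versions whose
    # clocks do not dominate each other are concurrent siblings, i.e. a
    # conflict the application must resolve.

    def descends(a: dict, b: dict) -> bool:
        """True if clock `a` has seen everything clock `b` has."""
        return all(a.get(node, 0) >= n for node, n in b.items())

    def compare(a: dict, b: dict) -> str:
        if descends(a, b) and descends(b, a):
            return "equal"
        if descends(a, b):
            return "a-newer"
        if descends(b, a):
            return "b-newer"
        return "concurrent"

    print(compare({"c1": 2, "c2": 1}, {"c1": 1, "c2": 1}))  # a-newer
    print(compare({"c1": 2}, {"c2": 1}))                    # concurrent siblings
    ```

    Dotted version vectors extend this scheme so that the number of siblings kept for a record is bounded by the number of clients writing it concurrently, which is the property the experiment relies on.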

  5. Solar Advisor Model User Guide for Version 2.0

    Energy Technology Data Exchange (ETDEWEB)

    Gilman, P.; Blair, N.; Mehos, M.; Christensen, C.; Janzou, S.; Cameron, C.

    2008-08-01

    The Solar Advisor Model (SAM) provides a consistent framework for analyzing and comparing power system costs and performance across the range of solar technologies and markets, from photovoltaic systems for residential and commercial markets to concentrating solar power and large photovoltaic systems for utility markets. This manual describes Version 2.0 of the software, which can model photovoltaic and concentrating solar power technologies for electric applications for several markets. The current version of the Solar Advisor Model does not model solar heating and lighting technologies.

  6. The ONKALO area model. Version 1

    International Nuclear Information System (INIS)

    Kemppainen, K.; Ahokas, T.; Ahokas, H.; Paulamaeki, S.; Paananen, M.; Gehoer, S.; Front, K.

    2007-11-01

    The geological model of the ONKALO area consists of three submodels: the lithological model, the brittle deformation model and the alteration model. The lithological model gives the properties of discrete rock units that can be defined on the basis of migmatite structures, textures and modal compositions. The brittle deformation model describes the results of brittle deformation, supplemented with geophysical and hydrogeological results. The alteration model describes the occurrence of different alteration types and their possible effects. The rocks of Olkiluoto can be divided into two major classes: (1) supracrustal high-grade metamorphic rocks including various migmatitic gneisses, tonalitic-granodioritic-granitic gneisses, mica gneisses, quartz gneisses and mafic gneisses, and (2) igneous rocks including pegmatitic granites and diabase dykes. The migmatitic gneisses can further be divided into three subgroups in terms of the type of migmatite structure: veined gneisses, stromatic gneisses and diatexitic gneisses. On the basis of refolding and crosscutting relationships, the metamorphic supracrustal rocks have been subject to polyphase ductile deformation comprising five stages. In 3D modelling of the lithological units, an assumption has been made, on the basis of measurements in outcrops, investigation trenches and drill cores, that the pervasive composite foliation produced as a result of polyphase ductile deformation has a rather constant attitude in the ONKALO area. Consequently, the strike and dip of the foliation have been used as a tool through which the lithologies have been correlated between the drillholes and from the surface to the drillholes. The bedrock at the Olkiluoto site has been subject to extensive hydrothermal alteration, which took place at relatively low temperatures, the estimated interval being from slightly over 300 deg C to less than 100 deg C. Two types of alteration can be observed: (1) pervasive (disseminated

  7. Micro dosimetry model. An extended version

    International Nuclear Information System (INIS)

    Vroegindewey, C.

    1994-07-01

    In an earlier study, a relatively simple mathematical model was constructed to simulate energy transfer on a cellular scale and thus gain insight into the fundamental processes of BNCT. Based on this work, a more realistic micro dosimetry model has been developed. The new facets of the model are: the treatment of proton recoil, the calculation of the distribution of energy depositions, and the determination of the number of particles crossing the target nucleus, subdivided by place of origin. Besides these extensions, new stopping power tables for the emitted particles have been generated, and biased Monte Carlo techniques are used to reduce computer time. (orig.)

  8. A Mathematical Model for the Hippocampus: Towards the Understanding of Episodic Memory and Imagination

    Science.gov (United States)

    Tsuda, I.; Yamaguti, Y.; Kuroda, S.; Fukushima, Y.; Tsukada, M.

    How does the brain encode episodes? Based on the fact that the hippocampus is responsible for the formation of episodic memory, we have proposed a mathematical model of the hippocampus. Because episodic memory consists of a time series of events, the dynamics underlying its formation is considered to involve the association of memories. David Marr correctly pointed out, in his theory of the archicortex for a simple memory, that the hippocampal CA3 is responsible for the formation of associative memories. However, a conventional mathematical model of associative memory guarantees only a single association of memories unless a rule for the order of successive associations is given. Recent clinical studies by Maguire's group show that patients with hippocampal lesions cannot construct a new story, because they lack the ability to imagine new things. Episodic memory and imagination share several characteristics: imagery, the sense of now, retrieval of semantic information, and narrative structures. Taking these findings into account, we propose a mathematical model of the hippocampus in order to understand the common mechanism of episodic memory and imagination.

  9. Computational Model-Based Prediction of Human Episodic Memory Performance Based on Eye Movements

    Science.gov (United States)

    Sato, Naoyuki; Yamaguchi, Yoko

    Subjects' episodic memory performance is not simply reflected in their eye movements. We use a ‘theta phase coding’ model of the hippocampus to predict subjects' memory performance from their eye movements, and the results demonstrate the model's predictive ability. These studies provide a novel approach to computational modeling in the human-machine interface.

  10. Smart Grid Interoperability Maturity Model Beta Version

    Energy Technology Data Exchange (ETDEWEB)

    Widergren, Steven E.; Drummond, R.; Giroti, Tony; Houseman, Doug; Knight, Mark; Levinson, Alex; Longcore, Wayne; Lowe, Randy; Mater, J.; Oliver, Terry V.; Slack, Phil; Tolk, Andreas; Montgomery, Austin

    2011-12-02

    The GridWise Architecture Council was formed by the U.S. Department of Energy to promote and enable interoperability among the many entities that interact with the electric power system. This balanced team of industry representatives proposes principles for the development of interoperability concepts and standards. The Council provides industry guidance and tools that make it an available resource for smart grid implementations. In the spirit of advancing interoperability of an ecosystem of smart grid devices and systems, this document presents a model for evaluating the maturity of the artifacts and processes that specify the agreement of parties to collaborate across an information exchange interface. You are expected to have a solid understanding of large, complex system integration concepts and experience in dealing with software component interoperation. Those without this technical background should read the Executive Summary for a description of the purpose and contents of the document. Other documents, such as checklists, guides, and whitepapers, exist for targeted purposes and audiences. Please see the www.gridwiseac.org website for more products of the Council that may be of interest to you.

  11. Reliability and validity of the self-report version of the apathy evaluation scale in first-episode Psychosis: Concordance with the clinical version at baseline and 12 months follow-up.

    Science.gov (United States)

    Faerden, Ann; Lyngstad, Siv Hege; Simonsen, Carmen; Ringen, Petter Andreas; Papsuev, Oleg; Dieset, Ingrid; Andreassen, Ole A; Agartz, Ingrid; Marder, Stephen R; Melle, Ingrid

    2018-05-31

    Negative symptoms have traditionally been assessed from clinicians' observations. The subjective experience of negative symptoms in people with psychosis may bring new insight. The Apathy Evaluation Scale (AES) is commonly used to study apathy in psychosis and has corresponding self-rated (AES-S) and clinician-rated (AES-C) versions. The aim of the present study was to determine the validity and reliability of the AES-S by investigating its concordance with the AES-C. Eighty-four first-episode psychosis (FEP) patients completed the shortened 12-item AES-S and AES-C at baseline (T1) and 12 months (T2). Concordance was assessed by correlation, comparison of mean scores and of change over time, and differences between diagnostic groups. The Positive and Negative Syndrome Scale (PANSS) was used to study convergent and discriminative properties. High concordance was found between the AES-S and AES-C at both T1 and T2 regarding mean values, change from T1 to T2, and the proportion with high levels of apathy. Both versions indicated high levels of apathy in FEP, while associations with PANSS negative symptoms were weaker for the AES-S than the AES-C. Controlling for depression did not significantly alter the results. We conclude that self-rated apathy in FEP patients is in concordance with clinician ratings, but in need of further study. Copyright © 2018. Published by Elsevier B.V.

  12. IDC Use Case Model Survey Version 1.1.

    Energy Technology Data Exchange (ETDEWEB)

    Harris, James Mark [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Carr, Dorthe B. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)

    2015-02-01

    This document contains the brief descriptions for the actors and use cases contained in the IDC Use Case Model. Revisions: V1.0 (12/2014, SNL IDC Reengineering Project Team), initial delivery, authorized by M. Harris; V1.1 (2/2015, SNL IDC Reengineering Project Team), Iteration I2 review comments, authorized by M. Harris.

  13. IDC Use Case Model Survey Version 1.0.

    Energy Technology Data Exchange (ETDEWEB)

    Carr, Dorthe B.; Harris, James M.

    2014-12-01

    This document contains the brief descriptions for the actors and use cases contained in the IDC Use Case Model Survey. Revisions: V1.0 (12/2014, IDC Reengineering Project Team), initial delivery, authorized by M. Harris.

  14. A conceptual model specification language (CMSL Version 2)

    NARCIS (Netherlands)

    Wieringa, Roelf J.

    1992-01-01

    Version 2 of a language (CMSL) to specify conceptual models is defined. CMSL consists of two parts: the value specification language VSL and the object specification language OSL. A formal semantics and an inference system exist for CMSL, but research on these still continues. A method for

  15. Fiscal impacts model documentation. Version 1.0

    International Nuclear Information System (INIS)

    Beck, S.L.; Scott, M.J.

    1986-05-01

    The Fiscal Impacts (FI) Model, Version 1.0 was developed under Pacific Northwest Laboratory's Monitored Retrievable Storage (MRS) Program to aid in development of the MRS Reference Site Environmental Document (PNL 5476). It computes estimates of 182 fiscal items for state and local government jurisdictions, using input data from the US Census Bureau's 1981 Survey of Governments and local population forecasts. The model can be adapted for any county or group of counties in the United States

  16. Design Challenges of an Episode-Based Payment Model in Oncology: The Centers for Medicare & Medicaid Services Oncology Care Model.

    Science.gov (United States)

    Kline, Ronald M; Muldoon, L Daniel; Schumacher, Heidi K; Strawbridge, Larisa M; York, Andrew W; Mortimer, Laura K; Falb, Alison F; Cox, Katherine J; Bazell, Carol; Lukens, Ellen W; Kapp, Mary C; Rajkumar, Rahul; Bassano, Amy; Conway, Patrick H

    2017-07-01

    The Centers for Medicare & Medicaid Services developed the Oncology Care Model as an episode-based payment model to encourage participating practitioners to provide higher-quality, better-coordinated care at a lower cost to the nearly three-quarter million fee-for-service Medicare beneficiaries with cancer who receive chemotherapy each year. Episode payment models can be complex. They combine into a single benchmark price all payments for services during an episode of illness, many of which may be delivered at different times by different providers in different locations. Policy and technical decisions include the definition of the episode, including its initiation, duration, and included services; the identification of beneficiaries included in the model; and beneficiary attribution to practitioners with overall responsibility for managing their care. In addition, the calculation and risk adjustment of benchmark episode prices for the bundle of services must reflect geographic cost variations and diverse patient populations, including varying disease subtypes, medical comorbidities, changes in standards of care over time, the adoption of expensive new drugs (especially in oncology), as well as diverse practice patterns. Other steps include timely monitoring and intervention as needed to avoid shifting the attribution of beneficiaries on the basis of their expected episode expenditures as well as to ensure the provision of necessary medical services and the development of a meaningful link to quality measurement and improvement through the episode-based payment methodology. The complex and diverse nature of oncology business relationships and the specific rules and requirements of Medicare payment systems for different types of providers intensify these issues. The Centers for Medicare & Medicaid Services believes that by sharing its approach to addressing these decisions and challenges, it may facilitate greater understanding of the model within the oncology

  17. Mind-to-mind heteroclinic coordination: Model of sequential episodic memory initiation

    Science.gov (United States)

    Afraimovich, V. S.; Zaks, M. A.; Rabinovich, M. I.

    2018-05-01

    Retrieval of episodic memory is a dynamical process in large-scale brain networks. In social groups, the neural patterns associated with specific events directly experienced by single members are encoded, recalled, and shared by all participants. Here, we construct and study a dynamical model for the formation and maintenance of episodic memory in small ensembles of interacting minds. We prove that the unconventional dynamical attractor of this process, the nonsmooth heteroclinic torus, is structurally stable within Lotka-Volterra-like sets of equations. Dynamics on this torus combines the absence of chaos with asymptotic instability of every separate trajectory; its adequate quantitative characteristics are length-related Lyapunov exponents. Variation of the coupling strength between the participants results in different types of sequential switching between metastable states; we interpret them as stages in the formation and modification of episodic memory.
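
The Lotka-Volterra-like dynamics mentioned above can be illustrated with a minimal winnerless-competition sketch. The growth rates and the asymmetric interaction matrix below are assumed example values, not the paper's parameters; the sketch only shows the kind of sequential switching between metastable states that such systems produce.

```python
# Minimal sketch of generalized Lotka-Volterra ("winnerless competition")
# dynamics: dx_i/dt = x_i * (s_i - (R x)_i), integrated with forward Euler.
# The rate matrix R below is an invented example, not the paper's model.

def glv_step(x, s, R, dt):
    """One Euler step of the generalized Lotka-Volterra equations."""
    return [max(xi + dt * xi * (si - sum(rij * xj for rij, xj in zip(row, x))), 0.0)
            for xi, si, row in zip(x, s, R)]

s = [1.0, 1.0, 1.0]                 # growth rates (assumed)
R = [[1.0, 1.7, 0.5],               # asymmetric competition -> each state is
     [0.5, 1.0, 1.7],               # metastable: the trajectory lingers near
     [1.7, 0.5, 1.0]]               # it, then switches to the next one
x = [0.6, 0.3, 0.1]
for _ in range(2000):               # integrate to t = 20
    x = glv_step(x, s, R, dt=0.01)
print([round(xi, 3) for xi in x])   # one component dominates at a time
```

With these (assumed) parameters the trajectory approaches a heteroclinic cycle: components take turns being close to 1 while the others stay near 0, a toy version of the sequential switching the abstract describes.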

  18. Accounting for heterogeneity in travel episode satisfaction using a random parameters panel effects regression model

    NARCIS (Netherlands)

    Rasouli, Soora; Timmermans, Harry

    2014-01-01

    Rasouli & Timmermans suggested a model of travel episode satisfaction that includes the degree and nature of multitasking, activity envelope, transport mode, travel party, duration, and a set of contextual and socio-economic variables. In this sequel, the focus of attention shifts to the analysis of

  19. Generalization through the Recurrent Interaction of Episodic Memories: A Model of the Hippocampal System

    Science.gov (United States)

    Kumaran, Dharshan; McClelland, James L.

    2012-01-01

    In this article, we present a perspective on the role of the hippocampal system in generalization, instantiated in a computational model called REMERGE (recurrency and episodic memory results in generalization). We expose a fundamental, but neglected, tension between prevailing computational theories that emphasize the function of the hippocampus…

  20. Seasonal versus Episodic Performance Evaluation for an Eulerian Photochemical Air Quality Model

    Energy Technology Data Exchange (ETDEWEB)

    Jin, Ling; Brown, Nancy J.; Harley, Robert A.; Bao, Jian-Wen; Michelson, Sara A; Wilczak, James M

    2010-04-16

    This study presents detailed evaluation of the seasonal and episodic performance of the Community Multiscale Air Quality (CMAQ) modeling system applied to simulate air quality at a fine grid spacing (4 km horizontal resolution) in central California, where ozone air pollution problems are severe. A rich aerometric database collected during the summer 2000 Central California Ozone Study (CCOS) is used to prepare model inputs and to evaluate meteorological simulations and chemical outputs. We examine both temporal and spatial behaviors of ozone predictions. We highlight synoptically driven high-ozone events (exemplified by the four intensive operating periods (IOPs)) for evaluating both meteorological inputs and chemical outputs (ozone and its precursors) and compare them to the summer average. For most of the summer days, cross-domain normalized gross errors are less than 25% for modeled hourly ozone, and normalized biases are between ±15% for both hourly and peak (1 h and 8 h) ozone. The domain-wide aggregated metrics indicate similar performance between the IOPs and the whole summer with respect to predicted ozone and its precursors. Episode-to-episode differences in ozone predictions are more pronounced at a subregional level. The model performs consistently better in the San Joaquin Valley than other air basins, and episodic ozone predictions there are similar to the summer average. Poorer model performance (normalized peak ozone biases <-15% or >15%) is found in the Sacramento Valley and the Bay Area and is most noticeable in episodes that are subject to the largest uncertainties in meteorological fields (wind directions in the Sacramento Valley and timing and strength of onshore flow in the Bay Area) within the boundary layer.
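
The evaluation metrics quoted above are standard model-performance statistics. A minimal sketch of how normalized bias and normalized gross error are computed from paired hourly predictions and observations follows; the ozone values below are invented examples, not CCOS data.

```python
# Sketch of the normalized bias and normalized gross error metrics used in
# photochemical model evaluation. The example values are invented.

def normalized_bias(pred, obs):
    """Mean of (prediction - observation) / observation, as a percent."""
    return 100.0 * sum((p - o) / o for p, o in zip(pred, obs)) / len(obs)

def normalized_gross_error(pred, obs):
    """Mean of |prediction - observation| / observation, as a percent."""
    return 100.0 * sum(abs(p - o) / o for p, o in zip(pred, obs)) / len(obs)

obs  = [60.0, 80.0, 100.0, 90.0]   # observed hourly ozone, ppb (example)
pred = [66.0, 72.0, 110.0, 90.0]   # modeled hourly ozone, ppb (example)

print(round(normalized_bias(pred, obs), 1))         # -> 2.5
print(round(normalized_gross_error(pred, obs), 1))  # -> 7.5
```

Bias can cancel across over- and under-predictions (here +10%, -10%, +10%, 0% average to 2.5%), while gross error cannot, which is why the study reports both.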

  1. A multisensor evaluation of the asymmetric convective model, version 2, in southeast Texas.

    Science.gov (United States)

    Kolling, Jenna S; Pleim, Jonathan E; Jeffries, Harvey E; Vizuete, William

    2013-01-01

    There currently exist a number of planetary boundary layer (PBL) schemes that can represent the effects of turbulence in daytime convective conditions, although these schemes remain a large source of uncertainty in meteorology and air quality model simulations. This study evaluates a recently developed combined local and nonlocal closure PBL scheme, the Asymmetric Convective Model, version 2 (ACM2), against PBL observations taken from radar wind profilers, a ground-based lidar, and multiple daytime radiosonde balloon launches. These observations were compared against predictions of PBLs from the Weather Research and Forecasting (WRF) model version 3.1 with the ACM2 PBL scheme option, and the Fifth-Generation Meteorological Model (MM5) version 3.7.3 with the Eta PBL scheme option that is currently being used to develop ozone control strategies in southeast Texas. MM5 and WRF predictions during the regulatory modeling episode were evaluated on their ability to predict the rise and fall of the PBL during daytime convective conditions across southeastern Texas. The MM5 predicted PBLs consistently underpredicted observations, and were also less than the WRF PBL predictions. The analysis reveals that the MM5 predicted a slower rising and shallower PBL not representative of the daytime urban boundary layer. Alternatively, the WRF model predicted a more accurate PBL evolution improving the root mean square error (RMSE), both temporally and spatially. The WRF model also more accurately predicted vertical profiles of temperature and moisture in the lowest 3 km of the atmosphere. Inspection of median surface temperature and moisture time-series plots revealed higher predicted surface temperatures in WRF and more surface moisture in MM5. These could not be attributed to surface heat fluxes, and thus the differences in performance of the WRF and MM5 models are likely due to the PBL schemes. 

  2. ONKALO rock mechanics model (RMM). Version 2.3

    Energy Technology Data Exchange (ETDEWEB)

    Haekkinen, T.; Merjama, S.; Moenkkoenen, H. [WSP Finland, Helsinki (Finland)

    2014-07-15

    The Rock Mechanics Model of the ONKALO rock volume includes the most important rock mechanics features and parameters at the Olkiluoto site. The main objective of the model is to be a tool to predict rock properties, rock quality and hence provide an estimate for the rock stability of the potential repository at Olkiluoto. The model includes a database of rock mechanics raw data and a block model in which the rock mechanics parameters are estimated through block volumes based on spatial rock mechanics raw data. In this version 2.3, special emphasis was placed on refining the estimation of the block model. The model was divided into rock mechanics domains which were used as constraints during the block model estimation. During the modelling process, a display profile and toolbar were developed for the GEOVIA Surpac software to improve visualisation and access to the rock mechanics data for the Olkiluoto area. (orig.)

  3. Modelling of air quality for Winter and Summer episodes in Switzerland. Final report

    Energy Technology Data Exchange (ETDEWEB)

    Andreani-Aksoyoglu, S.; Keller, J.; Barmpadimos, L.; Oderbolz, D.; Tinguely, M.; Prevot, A. [Paul Scherrer Institute (PSI), Laboratory of Atmospheric Chemistry, Villigen (Switzerland); Alfarra, R. [University of Manchester, Manchester (United Kingdom); Sandradewi, J. [Jisca Sandradewi, Hoexter (Germany)

    2009-05-15

    This final report, issued by the General Energy Research Department and its Laboratory of Atmospheric Chemistry at the Paul Scherrer Institute (PSI), presents the results of modelling regional air quality for three episodes: January-February 2006, June 2006 and January 2007. The calculations focus on particulate matter concentrations, as well as on ozone levels in summer. The model results were compared with aerosol data collected by an Aerosol Mass Spectrometer (AMS), which was operated during all three episodes, as well as with data from other air quality monitoring programs. The air quality model used in this study is described, and the results obtained for various types of locations - rural, city, high-altitude and near-motorway - are presented and discussed.

  4. Latest NASA Instrument Cost Model (NICM): Version VI

    Science.gov (United States)

    Mrozinski, Joe; Habib-Agahi, Hamid; Fox, George; Ball, Gary

    2014-01-01

    The NASA Instrument Cost Model, NICM, is a suite of tools which allow for probabilistic cost estimation of NASA's space-flight instruments at both the system and subsystem level. NICM also includes the ability to perform cost by analogy as well as joint confidence level (JCL) analysis. The latest version of NICM, Version VI, was released in Spring 2014. This paper will focus on the new features released with NICM VI, which include: 1) the NICM-E cost estimating relationship, which is applicable to instruments flying on Explorer-like class missions; 2) a new cluster analysis capability which, alongside the results of the parametric cost estimation for the user's instrument, provides a visualization of that instrument's similarity to previously flown instruments; and 3) new cost estimating relationships for in-situ instruments.

  5. Solid Waste Projection Model: Database (Version 1.3)

    International Nuclear Information System (INIS)

    Blackburn, C.L.

    1991-11-01

    The Solid Waste Projection Model (SWPM) system is an analytical tool developed by Pacific Northwest Laboratory (PNL) for Westinghouse Hanford Company (WHC). The SWPM system provides a modeling and analysis environment that supports decisions in the process of evaluating various solid waste management alternatives. This document, one of a series describing the SWPM system, contains detailed information regarding the software and data structures utilized in developing the SWPM Version 1.3 Database. This document is intended for use by experienced database specialists and supports database maintenance, utility development, and database enhancement

  6. Some Remarks on Stochastic Versions of the Ramsey Growth Model

    Czech Academy of Sciences Publication Activity Database

    Sladký, Karel

    2012-01-01

    Roč. 19, č. 29 (2012), s. 139-152 ISSN 1212-074X R&D Projects: GA ČR GAP402/10/1610; GA ČR GAP402/10/0956; GA ČR GAP402/11/0150 Institutional support: RVO:67985556 Keywords : Economic dynamics * Ramsey growth model with disturbance * stochastic dynamic programming * multistage stochastic programs Subject RIV: BB - Applied Statistics, Operational Research http://library.utia.cas.cz/separaty/2013/E/sladky-some remarks on stochastic versions of the ramsey growth model.pdf

  7. H2A Production Model, Version 2 User Guide

    Energy Technology Data Exchange (ETDEWEB)

    Steward, D.; Ramsden, T.; Zuboy, J.

    2008-09-01

    The H2A Production Model analyzes the technical and economic aspects of central and forecourt hydrogen production technologies. Using a standard discounted cash flow rate of return methodology, it determines the minimum hydrogen selling price, including a specified after-tax internal rate of return from the production technology. Users have the option of accepting default technology input values--such as capital costs, operating costs, and capacity factor--from established H2A production technology cases or entering custom values. Users can also modify the model's financial inputs. This new version of the H2A Production Model features enhanced usability and functionality. Input fields are consolidated and simplified. New capabilities include performing sensitivity analyses and scaling analyses to various plant sizes. This User Guide helps users already familiar with the basic tenets of H2A hydrogen production cost analysis get started using the new version of the model. It introduces the basic elements of the model then describes the function and use of each of its worksheets.
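
The discounted cash flow logic can be sketched as follows. This is a heavily simplified illustration, not the H2A spreadsheet: taxes, depreciation, inflation, and H2A's detailed cost categories are omitted, and all input values are invented. The minimum selling price is the price at which the project's net present value, discounted at the target rate of return, is zero.

```python
# Simplified sketch of the discounted-cash-flow idea behind minimum-selling-
# price analysis. All numbers are invented; this is not the H2A methodology
# in full (no taxes, depreciation, or inflation).

def npv(price, capital, annual_om, annual_kg, rate, years):
    """Net present value of the project at a given hydrogen price ($/kg)."""
    cash = -capital
    for t in range(1, years + 1):
        cash += (price * annual_kg - annual_om) / (1 + rate) ** t
    return cash

def min_selling_price(capital, annual_om, annual_kg, rate, years):
    """Bisect on price until NPV at the target rate of return is zero."""
    lo, hi = 0.0, 100.0                  # $/kg search bracket
    for _ in range(60):
        mid = (lo + hi) / 2
        if npv(mid, capital, annual_om, annual_kg, rate, years) < 0:
            lo = mid                     # price too low to recover costs
        else:
            hi = mid
    return hi

price = min_selling_price(capital=50e6, annual_om=2e6,
                          annual_kg=5e6, rate=0.10, years=20)
print(round(price, 2))  # -> 1.57
```

Because NPV is monotonically increasing in price, bisection converges to the unique break-even price, which is the "minimum hydrogen selling price" concept the abstract refers to.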

  8. Integrated Farm System Model Version 4.3 and Dairy Gas Emissions Model Version 3.3 Software development and distribution

    Science.gov (United States)

    Modeling routines of the Integrated Farm System Model (IFSM version 4.2) and Dairy Gas Emission Model (DairyGEM version 3.2), two whole-farm simulation models developed and maintained by USDA-ARS, were revised with new components for: (1) simulation of ammonia (NH3) and greenhouse gas emissions gene...

  9. A Hamiltonian driven quantum-like model for overdistribution in episodic memory recollection.

    Science.gov (United States)

    Broekaert, Jan B.; Busemeyer, Jerome R.

    2017-06-01

    While people famously forget genuine memories over time, they also tend to mistakenly over-recall equivalent memories concerning a given event. This memory phenomenon, known as episodic overdistribution, occurs in memories of both disjunctions and partitions of mutually exclusive events, and has been tested, modeled and documented in the literature. The total classical probability of recalling exclusive sub-events most often exceeds the probability of recalling the composed event, i.e. the total is subadditive. We present a Hamiltonian-driven propagation for the Quantum Episodic Memory model developed by Brainerd et al. (2015) for episodic memory overdistribution in the experimental immediate item false memory paradigm (Brainerd and Reyna, 2008, 2010, 2015). Following the Hamiltonian method of Busemeyer and Bruza (2012), our model adds time evolution of the perceived memory state through the stages of the experimental process, based on psychologically interpretable parameters: γ_c for the recollection capability of cues, κ_p for bias or description-dependence by probes, and β for the average gist component in the initial memory state. With seven parameters, the Hamiltonian model shows good predictive accuracy in both the EOD-disjunction and the EOD-subadditivity paradigms. We observed either a pronounced preponderance of the gist trace over the verbatim trace, or the opposite, in the initial memory state when β is real. Only for complex β is a mix of both traces present in the initial state for the EOD-subadditivity paradigm.
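
The Hamiltonian-driven evolution at the core of such quantum-like models can be sketched generically. The 2x2 Hamiltonian and state labels below are assumed examples, not the paper's operators or its γ_c, κ_p, β parametrization: a memory state evolves unitarily as exp(-iHt)|ψ⟩, and a recall probability is read off as a squared amplitude.

```python
# Generic sketch of Hamiltonian-driven evolution of a quantum-like memory
# state. The Hamiltonian H and the "verbatim"/"gist" labels are invented
# examples; this is not the paper's specific model.

import numpy as np

def evolve(psi0, H, t):
    """Apply exp(-iHt) to psi0 via eigendecomposition of the Hermitian H."""
    evals, evecs = np.linalg.eigh(H)
    U = evecs @ np.diag(np.exp(-1j * evals * t)) @ evecs.conj().T
    return U @ psi0

H = np.array([[0.0, 0.5],      # off-diagonal coupling mixes the two
              [0.5, 1.0]])     # components of the memory state (assumed)
psi0 = np.array([1.0, 0.0], dtype=complex)   # start in the "verbatim" state

for t in (0.0, 1.0, 2.0):
    psi = evolve(psi0, H, t)
    p_recall = abs(psi[0]) ** 2  # squared amplitude on the starting state
    print(round(p_recall, 3))    # t = 0 prints 1.0; later times print less
```

Because the evolution is unitary, the total probability is conserved at every stage; the model's psychological parameters would enter through the entries of H and the initial state, which are simply assumed constants here.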

  10. Women’s Social Networks and Birth Attendant Decisions: Application of the Network-Episode Model

    OpenAIRE

    Edmonds, Joyce K.; Hruschka, Daniel; Bernard, H. Russell; Sibley, Lynn

    2011-01-01

    This paper examines the association of women's social networks with the use of skilled birth attendants in uncomplicated pregnancy and childbirth in Matlab, Bangladesh. The Network-Episode Model was applied to determine if network structure variables (density / kinship homogeneity / strength of ties) together with network content (endorsement for or against a particular type of birth attendant) explain the type of birth attendant used by women above and beyond the variance explained by women'...

  11. A Diffusion Model Analysis of Adult Age Differences in Episodic and Semantic Long-Term Memory Retrieval

    Science.gov (United States)

    Spaniol, Julia; Madden, David J.; Voss, Andreas

    2006-01-01

    Two experiments investigated adult age differences in episodic and semantic long-term memory tasks, as a test of the hypothesis of specific age-related decline in context memory. Older adults were slower and exhibited lower episodic accuracy than younger adults. Fits of the diffusion model (R. Ratcliff, 1978) revealed age-related increases in…

  12. Break model comparison in different RELAP5 versions

    International Nuclear Information System (INIS)

    Parzer, I.

    2003-01-01

    The presented work focuses on the break flow prediction of the RELAP5/MOD3 code, which is crucial for predicting core uncovery and heatup during small-break loss-of-coolant accidents (SB LOCA). The code predictions have been compared with the IAEA-SPE-4 experiments conducted on the PMK-2 integral test facility in Hungary. The simulations have been performed with the MOD3.2.2 Beta, MOD3.2.2 Gamma, MOD3.3 Beta and MOD3.3 frozen code versions. In the present work we have compared the Ransom-Trapp and Henry-Fauske break model predictions. In addition, each model's predictions have been compared between its use as the main modeling option and its activation as a so-called 'secret developmental option' on input card no. 1. (author)

  13. FUNDAMENTAL ASPECTS OF EPISODIC ACCRETION CHEMISTRY EXPLORED WITH SINGLE-POINT MODELS

    International Nuclear Information System (INIS)

    Visser, Ruud; Bergin, Edwin A.

    2012-01-01

    We explore a set of single-point chemical models to study the fundamental chemical aspects of episodic accretion in low-mass embedded protostars. Our goal is twofold: (1) to understand how the repeated heating and cooling of the envelope affects the abundances of CO and related species; and (2) to identify chemical tracers that can be used as a novel probe of the timescales and other physical aspects of episodic accretion. We develop a set of single-point models that serve as a general prescription for how the chemical composition of a protostellar envelope is altered by episodic accretion. The main effect of each accretion burst is to drive CO ice off the grains in part of the envelope. The duration of the subsequent quiescent stage (before the next burst hits) is similar to or shorter than the freeze-out timescale of CO, allowing the chemical effects of a burst to linger long after the burst has ended. We predict that the resulting excess of gas-phase CO can be observed with single-dish or interferometer facilities as evidence of an accretion burst in the past 10^3-10^4 yr.

  14. GLEAM version 3: Global Land Evaporation Datasets and Model

    Science.gov (United States)

    Martens, B.; Miralles, D. G.; Lievens, H.; van der Schalie, R.; de Jeu, R.; Fernandez-Prieto, D.; Verhoest, N.

    2015-12-01

    Terrestrial evaporation links energy, water and carbon cycles over land and is therefore a key variable of the climate system. However, the global-scale magnitude and variability of the flux, and the sensitivity of the underlying physical process to changes in environmental factors, are still poorly understood due to limitations in in situ measurements. As a result, several methods have arisen to estimate global patterns of land evaporation from satellite observations. However, these algorithms generally differ in their approach to model evaporation, resulting in large differences in their estimates. One of these methods is GLEAM, the Global Land Evaporation: the Amsterdam Methodology. GLEAM estimates terrestrial evaporation based on daily satellite observations of meteorological variables, vegetation characteristics and soil moisture. Since the publication of the first version of the algorithm (2011), the model has been widely applied to analyse trends in the water cycle and land-atmosphere feedbacks during extreme hydrometeorological events. A third version of the GLEAM global datasets is foreseen by the end of 2015. Given the relevance of having a continuous and reliable record of global-scale evaporation estimates for climate and hydrological research, the establishment of an online data portal to make these data available to the public is also foreseen. In this new release of the GLEAM datasets, different components of the model have been updated, with the most significant change being the revision of the data assimilation algorithm. In this presentation, we will highlight the most important changes of the methodology and present three new GLEAM datasets and their validation against in situ observations and an alternative dataset of terrestrial evaporation (ERA-Land). Results of the validation exercise indicate that the magnitude and the spatiotemporal variability of the modelled evaporation agree reasonably well with the estimates of ERA-Land and the in situ

  15. Solid Waste Projection Model: Database (Version 1.4)

    International Nuclear Information System (INIS)

    Blackburn, C.; Cillan, T.

    1993-09-01

    The Solid Waste Projection Model (SWPM) system is an analytical tool developed by Pacific Northwest Laboratory (PNL) for Westinghouse Hanford Company (WHC). The SWPM system provides a modeling and analysis environment that supports decisions in the process of evaluating various solid waste management alternatives. This document, one of a series describing the SWPM system, contains detailed information regarding the software and data structures utilized in developing the SWPM Version 1.4 Database. This document is intended for use by experienced database specialists and supports database maintenance, utility development, and database enhancement. Those interested in using the SWPM database should refer to the SWPM Database User's Guide. This document is available from the PNL Task M Project Manager (D. L. Stiles, 509-372-4358), the PNL Task L Project Manager (L. L. Armacost, 509-372-4304), the WHC Restoration Projects Section Manager (509-372-1443), or the WHC Waste Characterization Manager (509-372-1193)

  16. Fitting a Bivariate Measurement Error Model for Episodically Consumed Dietary Components

    KAUST Repository

    Zhang, Saijuan

    2011-01-06

    There has been great public health interest in estimating usual, i.e., long-term average, intake of episodically consumed dietary components that are not consumed daily by everyone, e.g., fish, red meat and whole grains. Short-term measurements of episodically consumed dietary components have zero-inflated skewed distributions. So-called two-part models have been developed for such data in order to correct for measurement error due to within-person variation and to estimate the distribution of usual intake of the dietary component in the univariate case. However, there is arguably much greater public health interest in the usual intake of an episodically consumed dietary component adjusted for energy (caloric) intake, e.g., ounces of whole grains per 1000 kilocalories, which reflects usual dietary composition and adjusts for different total amounts of caloric intake. Because of this public health interest, it is important to have models to fit such data, and it is important that the model-fitting methods can be applied to all episodically consumed dietary components. We have recently developed a nonlinear mixed effects model (Kipnis et al., 2010) and have fit it by maximum likelihood using nonlinear mixed effects programs and methodology (the SAS NLMIXED procedure). Maximum likelihood fitting of such a nonlinear mixed model is generally slow because of 3-dimensional adaptive Gaussian quadrature, and there are times when the programs either fail to converge or converge to models with a singular covariance matrix. For these reasons, we develop a Markov chain Monte Carlo (MCMC) approach to fitting this model, which allows for both frequentist and Bayesian inference. There are technical challenges to developing this solution because one of the covariance matrices in the model is patterned.
Our main application is to the National Institutes of Health (NIH)-AARP Diet and Health Study, where we illustrate our methods for modeling the energy-adjusted usual intake of fish and whole
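    The two-part structure described above can be sketched in a few lines. The toy below is not the Kipnis et al. model (no covariates, no measurement-error correction, no energy adjustment); it only illustrates the core idea of combining a consumption-probability part with a positive-amount part, using made-up simulation parameters.

```python
import numpy as np

rng = np.random.default_rng(0)

# --- simulate zero-inflated "short-term intake" data (hypothetical values) ---
n = 5000
p_consume = 0.3              # probability of any consumption on a given day
mu, sigma = 1.0, 0.6         # lognormal parameters of the positive amounts
consumed = rng.random(n) < p_consume
intake = np.where(consumed, rng.lognormal(mu, sigma, n), 0.0)

# --- part 1: model the probability of consumption ---
p_hat = (intake > 0).mean()

# --- part 2: model the positive amounts on the log scale ---
log_pos = np.log(intake[intake > 0])
mu_hat, sigma_hat = log_pos.mean(), log_pos.std(ddof=1)

# usual (long-term average) intake = P(consume) * E[amount | consume]
usual = p_hat * np.exp(mu_hat + 0.5 * sigma_hat**2)
print(f"P(consume) ~ {p_hat:.3f}, usual intake ~ {usual:.3f}")
```

    The real models replace both parts with correlated nonlinear mixed effects, which is what makes the likelihood (and hence the MCMC machinery described above) necessary.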

  17. Fitting a Bivariate Measurement Error Model for Episodically Consumed Dietary Components

    KAUST Repository

    Zhang, Saijuan; Krebs-Smith, Susan M.; Midthune, Douglas; Perez, Adriana; Buckman, Dennis W.; Kipnis, Victor; Freedman, Laurence S.; Dodd, Kevin W.; Carroll, Raymond J

    2011-01-01

    There has been great public health interest in estimating usual, i.e., long-term average, intake of episodically consumed dietary components that are not consumed daily by everyone, e.g., fish, red meat and whole grains. Short-term measurements of episodically consumed dietary components have zero-inflated skewed distributions. So-called two-part models have been developed for such data in order to correct for measurement error due to within-person variation and to estimate the distribution of usual intake of the dietary component in the univariate case. However, there is arguably much greater public health interest in the usual intake of an episodically consumed dietary component adjusted for energy (caloric) intake, e.g., ounces of whole grains per 1000 kilocalories, which reflects usual dietary composition and adjusts for different total amounts of caloric intake. Because of this public health interest, it is important to have models to fit such data, and it is important that the model-fitting methods can be applied to all episodically consumed dietary components. We have recently developed a nonlinear mixed effects model (Kipnis et al., 2010) and have fit it by maximum likelihood using nonlinear mixed effects programs and methodology (the SAS NLMIXED procedure). Maximum likelihood fitting of such a nonlinear mixed model is generally slow because of 3-dimensional adaptive Gaussian quadrature, and there are times when the programs either fail to converge or converge to models with a singular covariance matrix. For these reasons, we develop a Markov chain Monte Carlo (MCMC) approach to fitting this model, which allows for both frequentist and Bayesian inference. There are technical challenges to developing this solution because one of the covariance matrices in the model is patterned.
Our main application is to the National Institutes of Health (NIH)-AARP Diet and Health Study, where we illustrate our methods for modeling the energy-adjusted usual intake of fish and whole

  18. Deep ART Neural Model for Biologically Inspired Episodic Memory and Its Application to Task Performance of Robots.

    Science.gov (United States)

    Park, Gyeong-Moon; Yoo, Yong-Ho; Kim, Deok-Hwa; Kim, Jong-Hwan

    2017-06-26

    Robots are expected to perform smart services and to undertake various troublesome or difficult tasks in the place of humans. Since these human-scale tasks consist of a temporal sequence of events, robots need episodic memory to store and retrieve the sequences to perform the tasks autonomously in similar situations. As episodic memory, in this paper we propose a novel Deep adaptive resonance theory (ART) neural model and apply it to the task performance of the humanoid robot, Mybot, developed in the Robot Intelligence Technology Laboratory at KAIST. Deep ART has a deep structure to learn events, episodes, and even higher-level sequences such as daily episodes. Moreover, it can robustly retrieve the correct episode from partial input cues. To demonstrate the effectiveness and applicability of the proposed Deep ART, experiments are conducted with the humanoid robot, Mybot, for performing the three tasks of arranging toys, making cereal, and disposing of garbage.
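    Deep ART builds on adaptive resonance theory. As an illustration of the underlying mechanism only (not the paper's Deep ART architecture), the sketch below implements a minimal fuzzy ART step: complement coding, a choice function to rank categories, a vigilance test for resonance, and creation of a new category on mismatch. All parameter values are illustrative.

```python
import numpy as np

def complement_code(x):
    """Standard fuzzy ART input coding: concatenate x with 1 - x."""
    return np.concatenate([x, 1.0 - x])

class FuzzyART:
    """Minimal fuzzy ART sketch (not the paper's Deep ART model)."""
    def __init__(self, vigilance=0.75, alpha=0.001, beta=1.0):
        self.rho, self.alpha, self.beta = vigilance, alpha, beta
        self.w = []                                  # category weight vectors

    def train(self, x):
        i = complement_code(np.asarray(x, float))
        # choice function T_j = |i ^ w_j| / (alpha + |w_j|), ranked descending
        order = sorted(range(len(self.w)),
                       key=lambda j: -np.minimum(i, self.w[j]).sum()
                                      / (self.alpha + self.w[j].sum()))
        for j in order:
            match = np.minimum(i, self.w[j]).sum() / i.sum()
            if match >= self.rho:                    # vigilance passed: resonance
                self.w[j] = (self.beta * np.minimum(i, self.w[j])
                             + (1 - self.beta) * self.w[j])
                return j
        self.w.append(i.copy())                      # no resonance: new category
        return len(self.w) - 1

art = FuzzyART(vigilance=0.8)
a = art.train([1.0, 0.0, 0.0])
b = art.train([0.0, 1.0, 0.0])
c = art.train([0.95, 0.05, 0.0])   # near the first pattern -> same category
print(a, b, c)
```

    Because the vigilance test tolerates partial mismatch, a noisy or incomplete version of a stored pattern resonates with its original category, which is the property that lets an ART-based episodic memory be retrieved from partial cues.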

  19. Medicare Program; Advancing Care Coordination Through Episode Payment Models (EPMs); Cardiac Rehabilitation Incentive Payment Model; and Changes to the Comprehensive Care for Joint Replacement Model (CJR). Final rule.

    Science.gov (United States)

    2017-01-03

    This final rule implements three new Medicare Parts A and B episode payment models, a Cardiac Rehabilitation (CR) Incentive Payment model and modifications to the existing Comprehensive Care for Joint Replacement model under section 1115A of the Social Security Act. Acute care hospitals in certain selected geographic areas will participate in retrospective episode payment models targeting care for Medicare fee-for-service beneficiaries receiving services during acute myocardial infarction, coronary artery bypass graft, and surgical hip/femur fracture treatment episodes. All related care within 90 days of hospital discharge will be included in the episode of care. We believe these models will further our goals of improving the efficiency and quality of care for Medicare beneficiaries receiving care for these common clinical conditions and procedures.

  20. A modeling analysis of a heavy air pollution episode occurred in Beijing

    Directory of Open Access Journals (Sweden)

    X. An

    2007-06-01

    Full Text Available The concentrations of fine particulate matter (PM) and ozone in Beijing have often exceeded healthful levels in recent years, and China is therefore taking steps to improve Beijing's air quality for the 2008 Olympic Games. In this paper, the Models-3 Community Multiscale Air Quality (CMAQ) Modeling System was used to investigate a heavy air pollution episode in Beijing during 3–7 April 2005, to obtain basic information on how the heavy air pollution formed and on the contributions of local sources and surrounding emissions. The modeling domain covered East Asia, with four nested grids of 81 to 3 km horizontal resolution focusing on urban Beijing. This was coupled with a regional emissions inventory at 10 km resolution and a local 1 km Beijing emissions database. The trend of predicted concentrations of various pollutants agreed reasonably well with the observations and captured the main features of this heavy pollution episode. The simulated column concentration distribution of PM correlated well with the MODIS remote sensing products. Control runs with and without Beijing emissions were conducted to quantify the contributions of non-Beijing sources (NBS) to Beijing's local air pollution. The contributions of NBS to each species differed spatially and temporally, in the order PM2.5 > PM10 > SO2 > soil for this episode. The percentage contribution of NBS to fine particles (PM2.5) in Beijing averaged about 39%, reaching 53% in the northwest of urban Beijing and only 15% in the southwest. The spatial distribution of NBS contributions for PM10 was similar to that for PM2.5, with a slightly lower average percentage of about 30%. The average NBS contributions for SO2 and soil (diameter between 2.5 μm and 10 μm) were 18% and 10%. In addition, the pollutant transport flux was calculated and compared at different levels to investigate transport pathway and magnitude. It was found
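    The "control runs with and without Beijing emissions" approach is a brute-force (zero-out) attribution. A minimal sketch of the bookkeeping, with made-up concentrations, is below; note that with nonlinear chemistry the zero-out share is only an approximation, which is one reason tagged-tracer methods are sometimes preferred.

```python
import numpy as np

# Hourly PM2.5 from two hypothetical model runs (toy numbers, ug/m3):
# a base run with all emissions, and a sensitivity run with Beijing's own
# emissions switched off, so only non-Beijing sources (NBS) remain.
pm25_all = np.array([80.0, 120.0, 150.0, 90.0])
pm25_nbs_only = np.array([30.0, 50.0, 60.0, 30.0])

# Zero-out attribution: NBS share = sensitivity run / base run.
nbs_share = pm25_nbs_only / pm25_all
print("hourly NBS contribution:", np.round(100 * nbs_share, 1), "%")
print(f"episode-average NBS contribution: {100 * nbs_share.mean():.1f} %")
```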

  1. Genetic screening and testing in an episode-based payment model: preserving patient autonomy.

    Science.gov (United States)

    Sutherland, Sharon; Farrell, Ruth M; Lockwood, Charles

    2014-11-01

    The State of Ohio is implementing an episode-based payment model for perinatal care. All costs of care will be tabulated for each live birth and assigned to the delivering provider, creating a three-tiered model for reimbursement for care. Providers will be reimbursed as usual for care that is average in cost and quality, while instituting rewards or penalties for those outside the expected range in either domain. There are few exclusions, and all methods of genetic screening and diagnostic testing are included in the episode cost calculation as proposed. Prenatal ultrasonography, genetic screening, and diagnostic testing are critical components of the delivery of high-quality, evidence-based prenatal care. These tests provide pregnant women with key information about the pregnancy, which, in turn, allows them to work closely with their health care provider to determine optimal prenatal care. The concepts of informed consent and decision-making, cornerstones of the ethical practice of medicine, are founded on the principles of autonomy and respect for persons. These principles recognize that patients' rights to make choices and take actions are based on their personal beliefs and values. Given the personal nature of such decisions, it is critical that patients have unbarred access to prenatal genetic tests if they elect to use them as part of their prenatal care. The proposed restructuring of reimbursement creates a clear conflict between patient autonomy and physician financial incentives.

  2. Identifying Radiology's Place in the Expanding Landscape of Episode Payment Models.

    Science.gov (United States)

    Rosenkrantz, Andrew B; Hirsch, Joshua A; Allen, Bibb; Harvey, H Benjamin; Nicola, Gregory N

    2017-07-01

    The current fee-for-service system for health care reimbursement in the United States is argued to encourage fragmented care delivery and a lack of accountability that predisposes to insufficient focus on quality as well as unnecessary or duplicative resource utilization. Episode payment models (EPMs) seek to improve coordination by linking payments for all services related to a patient's condition or procedure, thereby improving quality and efficiency of care. The CMS Innovation Center has implemented a broadening array of EPMs. Early models with relevance to radiologists include Bundled Payment for Care Improvement (involving 48 possible clinical conditions), Comprehensive Care for Joint Replacement (involving knee and hip replacement), and the Oncology Care Model (involving chemotherapy). In July 2016, CMS expanded the range of EPMs through three new models with mandatory hospital participation addressing inpatient and 90-day postdischarge care for acute myocardial infarction, coronary artery bypass graft, and surgical hip and femur fracture treatment. Moreover, some of the EPMs include tracks that allow participating entities to qualify as an Advanced Alternative Payment Model under the Medicare Access and CHIP Reauthorization Act (MACRA), reaping the associated reporting and payment benefits. Even though none of the available EPMs are radiology specific, the models will nevertheless likely influence reimbursements for some radiologists. Thus, radiologists should partner with hospitals and other specialties in care coordination through these episode-based initiatives, thereby having opportunities to apply their imaging expertise to help lower spending while improving quality and overall levels of health. Copyright © 2017 American College of Radiology. Published by Elsevier Inc. All rights reserved.

  3. Land-Use Portfolio Modeler, Version 1.0

    Science.gov (United States)

    Taketa, Richard; Hong, Makiko

    2010-01-01

    -on-investment. The portfolio model, now known as the Land-Use Portfolio Model (LUPM), provided the framework for the development of the Land-Use Portfolio Modeler, Version 1.0 software (LUPM v1.0). The software provides a geographic information system (GIS)-based modeling tool for evaluating alternative risk-reduction mitigation strategies for specific natural-hazard events. The modeler uses information about a specific natural-hazard event and the features exposed to that event within the targeted study region to derive a measure of a given mitigation strategy's effectiveness. Harnessing the spatial capabilities of a GIS enables the tool to provide a rich, interactive mapping environment in which users can create, analyze, visualize, and compare different

  4. Medicare Program; Advancing Care Coordination Through Episode Payment Models (EPMs); Cardiac Rehabilitation Incentive Payment Model; and Changes to the

    Science.gov (United States)

    2017-05-19

    This final rule finalizes May 20, 2017 as the effective date of the final rule titled "Advancing Care Coordination Through Episode Payment Models (EPMs); Cardiac Rehabilitation Incentive Payment Model; and Changes to the Comprehensive Care for Joint Replacement Model (CJR)" originally published in the January 3, 2017 Federal Register. This final rule also finalizes a delay of the applicability date of the regulations at 42 CFR part 512 from July 1, 2017 to January 1, 2018 and delays the effective date of the specific CJR regulations listed in the DATES section from July 1, 2017 to January 1, 2018.

  5. BehavePlus fire modeling system, version 5.0: Variables

    Science.gov (United States)

    Patricia L. Andrews

    2009-01-01

    This publication has been revised to reflect updates to version 4.0 of the BehavePlus software. It was originally published as the BehavePlus fire modeling system, version 4.0: Variables in July 2008. The BehavePlus fire modeling system is a computer program based on mathematical models that describe wildland fire behavior and effects and the...

  6. Episodic Memories

    Science.gov (United States)

    Conway, Martin A.

    2009-01-01

    An account of episodic memories is developed that focuses on the types of knowledge they represent, their properties, and the functions they might serve. It is proposed that episodic memories consist of "episodic elements," summary records of experience often in the form of visual images, associated to a "conceptual frame" that provides a…

  7. NETPATH-WIN: an interactive user version of the mass-balance model, NETPATH

    Science.gov (United States)

    El-Kadi, A. I.; Plummer, Niel; Aggarwal, P.

    2011-01-01

    NETPATH-WIN is an interactive user version of NETPATH, an inverse geochemical modeling code used to find mass-balance reaction models that are consistent with the observed chemical and isotopic composition of waters from aquatic systems. NETPATH-WIN was constructed to migrate NETPATH applications into the Microsoft WINDOWS® environment. The new version facilitates model utilization by eliminating difficulties in data preparation and results analysis of the DOS version of NETPATH, while preserving all of the capabilities of the original version. Through example applications, this note describes some of the features of NETPATH-WIN as applied to adjustment of radiocarbon data for geochemical reactions in groundwater systems.
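    At its core, a mass-balance (inverse) model like NETPATH solves a linear system: a stoichiometry matrix times unknown phase mole transfers equals the observed change in solution composition. The sketch below shows only that idea, using a hypothetical three-phase, three-element system with simplified stoichiometries and made-up water compositions; NETPATH itself enumerates many candidate models and applies additional isotopic and electron-balance constraints.

```python
import numpy as np

# Columns: phases (calcite CaCO3, dolomite CaMg(CO3)2, CO2 gas).
# Rows: elements tracked in solution (Ca, Mg, C). Entries are moles of
# element released per mole of phase dissolved (simplified, hypothetical).
phases = ["calcite", "dolomite", "CO2(g)"]
A = np.array([
    [1.0, 1.0, 0.0],   # Ca
    [0.0, 1.0, 0.0],   # Mg
    [1.0, 2.0, 1.0],   # C
])

# Observed change in solution composition (final - initial), mmol/kg (made up).
delta = np.array([1.5, 0.5, 3.0])

# Mass balance: find mole transfers x with A @ x = delta.
x, *_ = np.linalg.lstsq(A, delta, rcond=None)
for name, moles in zip(phases, x):
    print(f"{name:8s}: {moles:+.2f} mmol/kg")
```

    Positive transfers indicate dissolution or ingassing and negative ones precipitation or outgassing; here the system is square and determined, while real problems are often over- or under-determined, which is why multiple candidate models must be screened.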

  8. Computerized transportation model for the NRC Physical Protection Project. Versions I and II

    International Nuclear Information System (INIS)

    Anderson, G.M.

    1978-01-01

    Details on two versions of a computerized model for the transportation system of the NRC Physical Protection Project are presented. The Version I model permits scheduling of all types of transport units associated with a truck fleet, including truck trailers, truck tractors, escort vehicles and crews. A fixed-fleet itinerary construction process is used in which iterations on fleet size are required until the service requirements are satisfied. The Version II model adds an aircraft mode capability and provides for a more efficient non-fixed-fleet itinerary generation process. Test results using both versions are included

  9. A Long-Term Prediction Model of Beijing Haze Episodes Using Time Series Analysis

    Directory of Open Access Journals (Sweden)

    Xiaoping Yang

    2016-01-01

    Full Text Available The rapid industrial development of developing countries has led to intermittent outbreaks of PM2.5 pollution, or haze, which have raised serious environmental issues, especially in big cities such as Beijing and New Delhi. We investigated the factors and mechanisms of haze change and present a long-term prediction model of Beijing haze episodes using time series analysis. We construct a dynamic structural measurement model of daily haze increment and reduce the model to a vector autoregressive model. Typical case studies on 886 continuous days indicate that our model performs very well on next-day Air Quality Index (AQI) prediction, and in severely polluted cases (AQI ≥ 300) the accuracy rate of AQI prediction reaches up to 87.8%. The experiment of one-week prediction shows that our model has excellent sensitivity when a sudden haze burst or dissipation happens, which results in good long-term stability in the accuracy of the next 3–7 days' AQI prediction.

  10. Source tagging modeling study of heavy haze episodes under complex regional transport processes over Wuhan megacity, Central China

    International Nuclear Information System (INIS)

    Lu, Miaomiao; Tang, Xiao; Wang, Zifa; Gbaguidi, Alex; Liang, Shengwen; Hu, Ke; Wu, Lin; Wu, Huangjian; Huang, Zhen; Shen, Longjiao

    2017-01-01

    According to previous observational studies, Wuhan, a megacity of Central China, has suffered from severe particulate matter pollution; however, the mechanism behind the pollution formation, especially the impact of regional chemical transport, is still unclear. This study, carried out with the Nested Air Quality Prediction Modeling System (NAQPMS) coupled with an on-line source-tagging module, explores the different roles regional transport played in two strong haze episodes over Wuhan in October 2014 and quantitatively assesses the contributions from local and regional sources to PM2.5 concentration. Validation of the predictions against observations shows that the modeling system has good skill in reproducing key meteorological and chemical features. The first, short haze episode occurred on 12 October under strong northerly winds, with an hourly PM2.5 peak of 180 μg m⁻³, and was found to be caused primarily by long-range transport from the northern regions, which contributed 60.6% of the episode's PM2.5 concentration (versus a total of 32.7% from sources in and near Wuhan). The second episode lasted from 15 to 20 October under stable regional large-scale synoptic conditions and weak winds, and had an hourly PM2.5 peak of 231.0 μg m⁻³. In this episode, both long-distance transport from far regions and short-range transport from the Wuhan cluster were primary causes of the haze and accounted for 24.8% and 29.2% of the PM2.5 concentration, respectively. Regional transport therefore acts as a crucial driver of haze pollution over Wuhan, through not only long-range transfer of pollutants but also short-range aerosol movement under specific meteorological conditions. The present findings highlight the important role of regional transport in urban haze formation and indicate that joint control across multiple city clusters is needed to reduce the particulate pollution level in Wuhan. - Highlights: • Regional transport impacts studied on two haze

  11. Using Data From Ontario's Episode-Based Funding Model to Assess Quality of Chemotherapy.

    Science.gov (United States)

    Kaizer, Leonard; Simanovski, Vicky; Lalonde, Carlin; Tariq, Huma; Blais, Irene; Evans, William K

    2016-10-01

    A new episode-based funding model for ambulatory systemic therapy was implemented in Ontario, Canada on April 1, 2014, after a comprehensive knowledge transfer and exchange strategy with providers and administrators. An analysis of the data from the first year of the new funding model provided an opportunity to assess the quality of chemotherapy, which was not possible under the old funding model. Options for chemotherapy regimens given with adjuvant/curative intent or palliative intent were informed by input from disease site groups. Bundles were developed and priced to enable evidence-informed best practice. Analysis of systemic therapy utilization after model implementation was performed to assess the concordance rate of the treatments chosen with recommended practice. The actual number of cycles of treatment delivered was also compared with expert recommendations. Significant improvement compared with baseline was seen in the proportion of adjuvant/curative regimens that aligned with disease site group-recommended options (98% v 90%). Similar improvement was seen for palliative regimens (94% v 89%). However, overall, the number of cycles of adjuvant/curative therapy delivered was lower than recommended best practice in 57.5% of patients. There was significant variation by disease site and between facilities. Linking funding to quality, supported by knowledge transfer and exchange, resulted in a rapid improvement in the quality of systemic treatment in Ontario. This analysis has also identified further opportunities for improvement and the need for model refinement.

  12. Modelling ESCOMPTE episodes with the CTM MOCAGE. Part 2 : sensitivity studies.

    Science.gov (United States)

    Dufour, A.; Amodei, M.; Brocheton, F.; Michou, M.; Peuch, V.-H.

    2003-04-01

    The multi-scale CTM MOCAGE has been applied to study pollution episodes documented during the ESCOMPTE field campaign in June–July 2001 in southeastern France (http://medias.obs-mip.fr/escompte). Several sensitivity studies have been performed on the basis of the 2nd IOP, covering 6 continuous days. The main objective of the present work is to investigate the question of chemical boundary conditions, both vertical and horizontal, for regional air quality simulations spanning several days. This issue, which has often tended to be oversimplified (use of a fixed continental climatology), is raising increasing interest, particularly with the perspective of assimilating space-borne tropospheric chemistry data in global models. In addition, we have examined how resolution refinements affect the quality of the model outputs, at the surface and at altitude, against the observational database of dynamics and chemistry: the resolution of the model by way of the four nested domains (from 2° to 0.01°), but also the resolution of the emission inventories (from 1° to 0.01°). Lastly, the impact of refining the representation of chemistry has been assessed by using either detailed chemical schemes, such as RAM or SAPRC, or schemes used in global modelling, which account for only a limited set of volatile hydrocarbons.

  13. Latent change models of adult cognition: are changes in processing speed and working memory associated with changes in episodic memory?

    Science.gov (United States)

    Hertzog, Christopher; Dixon, Roger A; Hultsch, David F; MacDonald, Stuart W S

    2003-12-01

    The authors used 6-year longitudinal data from the Victoria Longitudinal Study (VLS) to investigate individual differences in amount of episodic memory change. Latent change models revealed reliable individual differences in cognitive change. Changes in episodic memory were significantly correlated with changes in other cognitive variables, including speed and working memory. A structural equation model for the latent change scores showed that changes in speed and working memory predicted changes in episodic memory, as expected by processing resource theory. However, these effects were best modeled as being mediated by changes in induction and fact retrieval. Dissociations were detected between cross-sectional ability correlations and longitudinal changes. Shuffling the tasks used to define the Working Memory latent variable altered patterns of change correlations.

  14. Development of a Statistical Model for Forecasting Episodes of Visibility Degradation in the Denver Metropolitan Area.

    Science.gov (United States)

    Reddy, P. J.; Barbarick, D. E.; Osterburg, R. D.

    1995-03-01

    In 1990, the State of Colorado implemented a visibility standard of 0.076 km⁻¹ of beta extinction for the Denver metropolitan area. Meteorologists with Colorado's Air Pollution Control Division forecast high pollution days associated with visibility impairment as well as those due to high levels of the federal criteria pollutants. Visibility forecasts are made from a few hours up to about 26 h in advance of the period of interest. Here we discuss the key microscale, mesoscale, and synoptic-scale features associated with episodes of visibility impairment. Data from special studies, case studies, and the 22 NOAA Program for Regional Observing and Forecasting Services mesonet sites have been invaluable in identifying patterns associated with extremes in visibility conditions. A preliminary statistical forecast model has been developed using variables that represent many of these patterns. Six variables were selected from an initial pool of 27 to be used in a model based on linear logistic regression. These six variables include forecast measures of snow cover, surface pressures and a surface pressure gradient in eastern Colorado, relative humidity, and 500-mb ridge position. The initial testing of the model has been encouraging. The model correctly predicted 76% of the good visibility days and 67% of the poor visibility days for a test set of 171 days.
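    A minimal version of such a six-variable linear logistic regression classifier can be sketched with scikit-learn. The predictors and the rule generating the labels below are synthetic stand-ins for the paper's forecast variables, chosen only to make the fit-and-score workflow concrete.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(7)

# Six hypothetical forecast predictors standing in for variables like snow
# cover, surface pressures, a pressure gradient, humidity, and ridge position.
n = 600
X = rng.normal(size=(n, 6))

# Synthetic rule: poor-visibility odds rise with columns 0 and 3, fall with
# column 2, plus noise. This is NOT the paper's fitted relationship.
logit = 1.2 * X[:, 0] + 0.9 * X[:, 3] - 1.0 * X[:, 2] + rng.normal(0, 0.5, n)
poor_visibility = (logit > 0).astype(int)

model = LogisticRegression().fit(X, poor_visibility)
accuracy = model.score(X, poor_visibility)
print(f"in-sample hit rate: {accuracy:.2f}")
```

    An operational evaluation would report hit rates separately for good and poor visibility days on a held-out test set, as the paper does, since the two error costs differ.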

  15. ANLECIS-1: Version of ANLECIS Program for Calculations with the Asymmetric Rotational Model

    International Nuclear Information System (INIS)

    Lopez Mendez, R.; Garcia Moruarte, F.

    1986-01-01

    A new modified version of the ANLECIS code is reported. This version allows one to fit simultaneously the cross section of the direct process, using the asymmetric rotational model, and the cross section of the compound-nucleus process, using the Hauser-Feshbach formalism with modern statistical corrections. Calculations based on this version show a dependence of the compound-nucleus cross section on the asymmetry parameter γ. (author). 19 refs

  16. CENTURY: Modeling Ecosystem Responses to Climate Change, Version 4 (VEMAP 1995)

    Data.gov (United States)

    National Aeronautics and Space Administration — The CENTURY model, Version 4, is a general model of plant-soil nutrient cycling that is being used to simulate carbon and nutrient dynamics for different...

  17. CENTURY: Modeling Ecosystem Responses to Climate Change, Version 4 (VEMAP 1995)

    Data.gov (United States)

    National Aeronautics and Space Administration — The CENTURY model, Version 4, is a general model of plant-soil nutrient cycling that is being used to simulate carbon and nutrient dynamics for different types of...

  18. An ozone episode in the Pearl River Delta: Field observation and model simulation

    Science.gov (United States)

    Jiang, F.; Guo, H.; Wang, T. J.; Cheng, H. R.; Wang, X. M.; Simpson, I. J.; Ding, A. J.; Saunders, S. M.; Lam, S. H. M.; Blake, D. R.

    2010-11-01

    In the fall of 2007 concurrent air sampling field measurements were conducted for the first time in Guangzhou (at Wan Qing Sha (WQS)) and Hong Kong (at Tung Chung (TC)), two cities in the rapidly developing Pearl River Delta region of China that are only 62 km apart. This region is known to suffer from poor air quality, especially during the autumn and winter months, when the prevailing meteorological conditions bring an outflow of continental air to the region. An interesting multiday O3 pollution event (daily maximum O3 > 122 ppbv) was captured during 9-17 November at WQS, while only one O3 episode day (10 November) was observed at TC during this time. The mean O3 mixing ratios at TC and WQS during the episode were 38 ± 3 (mean ± 95% confidence interval) and 51 ± 7 ppbv, respectively, with a mean difference of 13 ppbv and a maximum hourly difference of 150 ppbv. We further divided this event into two periods: 9-11 November as Period 1 and 12-17 November as Period 2. The mixing ratios of O3 and its precursors (NOx and CO) showed significant differences between the two periods at TC. By contrast, no obvious difference was found at WQS, indicating that different air masses arrived at TC for the two periods, as opposed to similar air masses at WQS for both periods. The analysis of VOC ratios and their relationship with O3 revealed strong O3 production at WQS during Period 2, in contrast to relatively weak photochemical O3 formation at TC. The weather conditions implied regional transport of O3 pollution during Period 1 at both sites. Furthermore, a comprehensive air quality model system (Weather Research and Forecasting-Community Multiscale Air Quality model (WRF-CMAQ)) was used to simulate this O3 pollution event. The model system generally reproduced the variations of weather conditions, simulated well the continuous high O3 episode event at WQS, and captured fairly well the elevated O3 mixing ratios in Period 1 and low O3 levels in Period 2 at TC. The modeled

  19. Aerosol transport model evaluation of an extreme smoke episode in Southeast Asia

    Science.gov (United States)

    Hyer, Edward J.; Chew, Boon Ning

    2010-04-01

    Biomass burning is one of many sources of particulate pollution in Southeast Asia, but its irregular spatial and temporal patterns mean that large episodes can cause acute air quality problems in urban areas. Fires in Sumatra and Borneo during September and October 2006 contributed to 24-h mean PM10 concentrations above 150 μg m⁻³ at multiple locations in Singapore and Malaysia over several days. We use the FLAMBE model of biomass burning emissions and the NAAPS model of aerosol transport and evolution to simulate these events, and compare our simulation results to 24-h average PM10 measurements from 54 stations in Singapore and Malaysia. The model simulation, including the FLAMBE smoke source as well as dust, sulfate, and sea salt aerosol species, was able to explain 50% or more of the variance in 24-h PM10 observations at 29 of 54 sites. Simulation results indicated that biomass burning smoke contributed to nearly all of the extreme PM10 observations during September-November 2006, but the exact contribution of smoke was unclear because the model severely underestimated total smoke emissions. Using regression analysis at each site, the bias in the smoke aerosol flux was determined to be a factor of between 2.5 and 10, and an overall factor of 3.5 was estimated. After application of this factor, the simulated smoke aerosol concentration averaged 20% of observed PM10, and 40% of PM10 for days with 24-h average concentrations above 150 μg m⁻³. These results suggest that aerosol transport models can aid analysis of severe pollution events in Southeast Asia, but that improvements are needed in models of biomass burning smoke emissions.
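    The per-site regression step described above — estimating a multiplicative bias factor for the modeled smoke flux — can be sketched as a least-squares fit through the origin. This is a minimal illustration with synthetic data; the function name and the numbers are ours, not the paper's:

```python
import random

def smoke_bias_factor(simulated, observed):
    """Least-squares scale factor f (regression through the origin)
    minimizing sum((observed - f * simulated)^2): an illustrative
    analogue of the per-site bias estimate described in the abstract."""
    num = sum(s * o for s, o in zip(simulated, observed))
    den = sum(s * s for s in simulated)
    return num / den

# Synthetic data: a model that underestimates smoke by a factor of 3.5,
# plus observational noise.
rng = random.Random(0)
sim = [rng.uniform(5.0, 40.0) for _ in range(200)]   # modeled smoke PM10
obs = [3.5 * s + rng.gauss(0.0, 2.0) for s in sim]   # "observed" smoke part
factor = smoke_bias_factor(sim, obs)
```

    With noise-free inputs the recovered factor is exact; with the noisy synthetic data above it lands close to the 3.5 used to generate it.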

  20. A Constrained and Versioned Data Model for TEAM Data

    Science.gov (United States)

    Andelman, S.; Baru, C.; Chandra, S.; Fegraus, E.; Lin, K.

    2009-04-01

    The objective of the Tropical Ecology Assessment and Monitoring Network (www.teamnetwork.org) is "To generate real time data for monitoring long-term trends in tropical biodiversity through a global network of TEAM sites (i.e. field stations in tropical forests), providing an early warning system on the status of biodiversity to effectively guide conservation action". To achieve this, the TEAM Network operates by collecting data via standardized protocols at TEAM Sites. The standardized TEAM protocols include the Climate, Vegetation and Terrestrial Vertebrate Protocols. Some sites also implement additional protocols. There are currently 7 TEAM Sites, with plans to grow the network to 15 by June 30, 2009 and 50 TEAM Sites by the end of 2010. At each TEAM Site, data is gathered as defined by the protocols and according to a predefined sampling schedule. The TEAM data is organized and stored in a database based on the TEAM spatio-temporal data model. This data model is at the core of the TEAM Information System - it executes spatio-temporal queries and analytical functions on TEAM data, and defines the object data types, relationships and operations that maintain database integrity. The TEAM data model contains object types including types for observation objects (e.g. birds, butterflies and trees), sampling unit, person, role, protocol, site, and the relationships among these object types. Each observation data record is a set of attribute values of an observation object and is always associated with a sampling unit, an observation timestamp or time interval, a versioned protocol and data collectors. The operations on the TEAM data model can be classified as read operations, insert operations and update operations. Following are some typical operations: the operation get(site, protocol, [sampling unit block, sampling unit,] start time, end time) returns all data records using the specified protocol and collected at the specified site, block
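    The read operation described above — get over a site, a protocol and a time window — can be sketched with a toy in-memory store. Class names, field names and signatures are illustrative assumptions, not the actual TEAM Information System API:

```python
from dataclasses import dataclass, field
from datetime import datetime

@dataclass(frozen=True)
class Observation:
    """One observation data record: attribute values for an observation
    object, tied to a sampling unit, a timestamp and a versioned protocol
    (data collectors omitted for brevity)."""
    site: str
    protocol: str            # e.g. "Climate v1" - versioned in the real model
    sampling_unit: str
    timestamp: datetime
    attributes: dict = field(default_factory=dict)

class TeamStore:
    """Toy in-memory store illustrating the operation classes above."""
    def __init__(self):
        self._records = []

    def insert(self, record):
        self._records.append(record)

    def get(self, site, protocol, start, end, sampling_unit=None):
        """Read operation: all records for `protocol` collected at `site`
        within [start, end], optionally for a single sampling unit."""
        return [r for r in self._records
                if r.site == site and r.protocol == protocol
                and start <= r.timestamp <= end
                and (sampling_unit is None or r.sampling_unit == sampling_unit)]
```

    A real implementation would push these predicates into spatio-temporal database queries rather than filter in memory.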

  1. Characterization of smoke and dust episode over West Africa: comparison of MERRA-2 modeling with multiwavelength Mie–Raman lidar observations

    Directory of Open Access Journals (Sweden)

    I. Veselovskii

    2018-02-01

    Full Text Available Observations of multiwavelength Mie–Raman lidar taken during the SHADOW field campaign are used to analyze a smoke–dust episode over West Africa on 24–27 December 2015. For the case considered, the dust layer extended from the ground up to approximately 2000 m, while the elevated smoke layer occurred in the 2500–4000 m range. The profiles of lidar-measured backscattering and extinction coefficients and depolarization ratios are compared with the vertical distribution of aerosol parameters provided by the Modern-Era Retrospective Analysis for Research and Applications, version 2 (MERRA-2). The MERRA-2 model simulated the correct location of the near-surface dust and elevated smoke layers. The values of modeled and observed aerosol extinction coefficients at both 355 and 532 nm are also rather close. In particular, for the episode reported, the mean difference between the measured and modeled extinction coefficients at 355 nm is 0.01 km−1 with an SD of 0.042 km−1. The model predicts a significant concentration of dust particles inside the elevated smoke layer, which is supported by an increased depolarization ratio of 15 % observed in the center of this layer. The modeled lidar ratio at 355 nm of 65 sr in the near-surface dust layer is close to the observed value (70 ± 10 sr). At 532 nm, however, the simulated lidar ratio (about 40 sr) is lower than the measurements (55 ± 8 sr). The results presented demonstrate that the lidar and model data are complementary, and the synergy of observations and models is a key to improving aerosol characterization.

  2. Monitoring an air pollution episode in Shenzhen by combining MODIS satellite images and the HYSPLIT model

    Science.gov (United States)

    Li, Lili; Liu, Yihong; Wang, Yunpeng

    2017-07-01

    Urban air pollution is influenced not only by local emission sources, including industry and vehicles, but also greatly by regional transport of atmospheric pollutants from surrounding areas, especially in developed city clusters like the Pearl River Delta (PRD). Taking an air pollution episode in Shenzhen as an example, this paper investigates the occurrence and evolution of the pollution episode and identifies the transport pathways of air pollutants in Shenzhen by combining MODIS satellite images and HYSPLIT back-trajectory analysis. Results show that this pollution episode was mainly caused by local emission of pollutants in the PRD and by oceanic air masses under specific weather conditions.

  3. A hybrid version of swan for fast and efficient practical wave modelling

    NARCIS (Netherlands)

    M. Genseberger (Menno); J. Donners

    2016-01-01

    In the Netherlands, for coastal and inland water applications, wave modelling with SWAN has become a main ingredient. However, computational times are relatively high. Therefore we investigated the parallel efficiency of the current MPI and OpenMP versions of SWAN. The MPI version is

  4. A Systems Engineering Capability Maturity Model, Version 1.1,

    Science.gov (United States)

    1995-11-01

    of a sequence of actions to be taken to perform a given task. [SECMM] 1. A set of activities (ISO 12207). 2. A set of practices that address the...standards One of the design goals of the SE-CMM effort was to capture the salient concepts from emerging standards and initiatives (e.g., ISO 9001...history for the SE-CMM: Version Designator Content Change Notes Release 1 • architecture rationale • Process Areas • ISO (SPICE) BPG 0.05 summary

  5. An insight into the formation of severe ozone episodes: modeling the 21/03/01 event in the ESCOMPTE region

    Science.gov (United States)

    Lasry, Fanny; Coll, Isabelle; Buisson, Emmanuel

    2005-03-01

    High ozone concentrations are observed more and more frequently in the lower troposphere. The development of such polluted episodes is linked to a complex set of chemical, physical and dynamical parameters that interact with each other. To improve air quality, it is necessary to understand and quantify the role of all these processes in the intensity of ozone formation. The ESCOMPTE program, especially dedicated to the numerical simulation of photochemical episodes, offers an ideal framework to investigate in detail the roles of many of these processes through 3D modeling. This paper presents the analysis, with a 3D Eulerian model, of a severe and local episode of ozone pollution that occurred on 21 March 2001 in the ESCOMPTE region. This episode is particularly interesting due to the intensity of the observed ozone peaks (450 μg/m³ over 15 min) but also because it did not occur in summer but at the beginning of spring. As part of the premodeling work of the ESCOMPTE program, this study focuses on the sensitivity of the simulated ozone peaks to various chemical and physical phenomena (long-range transport, industrial emissions, local dynamic phenomena…) to determine their influence on the rise of high local photooxidant concentrations and to better picture the photochemistry of the ESCOMPTE region. Through sensitivity tests on the resolution of the dynamical calculation and on emissions, this paper shows how the combination of sea and pond breezes with emissions of reactive VOCs can generate local intense ozone peaks.

  6. Tier I Rice Model - Version 1.0 - Guidance for Estimating Pesticide Concentrations in Rice Paddies

    Science.gov (United States)

    Describes a Tier I Rice Model (Version 1.0) for estimating surface water exposure from the use of pesticides in rice paddies. The concentration calculated can be used for aquatic ecological risk and drinking water exposure assessments.

  7. Estimating Parameters for the PVsyst Version 6 Photovoltaic Module Performance Model

    Energy Technology Data Exchange (ETDEWEB)

    Hansen, Clifford [Sandia National Laboratories (SNL-NM), Albuquerque, NM (United States)

    2015-10-01

    We present an algorithm to determine parameters for the photovoltaic module performance model encoded in the software package PVsyst(TM) version 6. Our method operates on current–voltage (I–V) curves measured over a range of irradiance and temperature conditions. We describe the method and illustrate its steps using data for a 36-cell crystalline silicon module. We qualitatively compare our method with one other technique for estimating parameters for the PVsyst(TM) version 6 model.
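    The PVsyst version 6 performance model is built around a single-diode equivalent circuit. As a hedged sketch — the generic single-diode equation, not PVsyst's exact parameterization, with illustrative rather than fitted parameter values — the implicit I–V relation can be solved numerically:

```python
import math

def single_diode_current(v, il, i0, rs, rsh, nNsVth, tol=1e-9):
    """Solve the implicit single-diode equation
        i = il - i0*(exp((v + i*rs)/nNsVth) - 1) - (v + i*rs)/rsh
    for the current i at terminal voltage v by bisection (the right-hand
    side is strictly decreasing in i, so the root is unique)."""
    def f(i):
        vd = v + i * rs                       # diode (junction) voltage
        return il - i0 * math.expm1(vd / nNsVth) - vd / rsh - i
    lo, hi = -il, il + 1.0                    # brackets the root for sane inputs
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if f(mid) > 0.0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# Illustrative (not fitted) parameters loosely shaped like a 36-cell module:
# photocurrent il, saturation current i0, series/shunt resistances rs/rsh,
# and the modified ideality factor n*Ns*Vth.
i_sc = single_diode_current(0.0, il=6.0, i0=1e-9, rs=0.3, rsh=300.0, nNsVth=1.5)
```

    Parameter estimation then amounts to choosing (il, i0, rs, rsh, nNsVth) so that curves like this one reproduce the measured I–V data across irradiance and temperature.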

  8. A viscoplastic shear-zone model for episodic slow slip events in oceanic subduction zones

    Science.gov (United States)

    Yin, A.; Meng, L.

    2016-12-01

    Episodic slow slip events occur widely along oceanic subduction zones at brittle-ductile transition depths (~20-50 km). Although efforts have been devoted to unraveling their mechanical origins, the physical controls on the wide range of their recurrence intervals and slip durations remain unclear. In this study we present a simple mechanical model that attempts to account for the observed temporal evolution of slow slip events. In our model we assume that slow slip events occur in a viscoplastic shear zone (i.e., a Bingham material), which has an upper static and a lower dynamic plastic yield strength. We further assume that the hanging-wall deformation is approximated by an elastic spring. We envision the shear zone to be initially locked during forward/landward motion but subsequently unlocked when the elastic and gravity-induced stress exceeds the static yield strength of the shear zone. This leads to backward/trenchward motion damped by viscous shear-zone deformation. As the elastic spring progressively loosens, the hanging-wall velocity evolves with time and the viscous shear stress eventually reaches the dynamic yield strength. This is followed by the termination of the trenchward motion when the elastic stress is balanced by the dynamic yield strength of the shear zone and by gravity. In order to account for the sawtooth slip-history pattern of typical repeated slow slip events, we assume that the shear zone progressively strengthens after each slow slip cycle, possibly caused by dilatancy, as commonly assumed, or by progressive fault healing through solution-transport mechanisms. We quantify our conceptual model by obtaining simple analytical solutions. Our model results suggest that the duration of the landward motion increases with the down-dip length and the static yield strength of the shear zone, but decreases with the ambient loading velocity and the elastic modulus of the hanging wall.
The duration of the backward/trenchward motion depends
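    The slow-slip cycle described above — elastic loading to a static yield strength, viscously damped slip down to a dynamic yield strength, then healing — can be sketched with an event-based calculation. This is our simplification of the paper's analytical model (it neglects continued loading during the comparatively brief slip phase), and all parameter values are illustrative:

```python
import math

def slow_slip_cycles(k=1.0, eta=0.05, v_load=1.0, tau_s0=1.0, tau_d=0.4,
                     healing=0.1, n_cycles=5, rel_tol=0.01):
    """Event-based sketch of a spring-slider with a Bingham (viscoplastic)
    shear zone. Locked phase: spring stress grows at rate k*v_load from the
    residual tau_d up to the static yield tau_s. Slip phase: stress relaxes
    exponentially (time scale eta/k) toward the dynamic yield tau_d, taken
    to end when the excess stress falls to rel_tol of its initial value.
    tau_s grows by `healing` after each event (progressive strengthening).
    Returns a list of (locked duration, slip duration, slip per event)."""
    cycles = []
    tau_s = tau_s0
    for _ in range(n_cycles):
        t_locked = (tau_s - tau_d) / (k * v_load)     # reloading time
        slip = (tau_s - tau_d) / k                    # slip released per event
        t_slip = (eta / k) * math.log(1.0 / rel_tol)  # slip-phase duration
        cycles.append((t_locked, t_slip, slip))
        tau_s += healing                              # healing between events
    return cycles
```

    Consistent with the abstract, the locked (landward) interval here lengthens with the static yield strength and shortens with faster loading or a stiffer spring; healing makes successive recurrence intervals grow.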

  9. Integrated Biosphere Simulator Model (IBIS), Version 2.5

    Data.gov (United States)

    National Aeronautics and Space Administration — ABSTRACT: The Integrated Biosphere Simulator (or IBIS) is designed to be a comprehensive model of the terrestrial biosphere. The model represents a wide range of...

  10. Integrated Biosphere Simulator Model (IBIS), Version 2.5

    Data.gov (United States)

    National Aeronautics and Space Administration — The Integrated Biosphere Simulator (or IBIS) is designed to be a comprehensive model of the terrestrial biosphere. The model represents a wide range of processes,...

  11. Simulating episodic memory deficits in semantic dementia with the TraceLink model

    NARCIS (Netherlands)

    Meeter, M.; Murre, J.M.J.

    2004-01-01

    Although semantic dementia is primarily characterised by deficits in semantic memory, episodic memory is also impaired. Patients show poor recall of old autobiographical and semantic memories, with better retrieval of recent experiences; they can form new memories, and show normal performance on

  12. Prediction models for successful external cephalic version: a systematic review.

    Science.gov (United States)

    Velzel, Joost; de Hundt, Marcella; Mulder, Frederique M; Molkenboer, Jan F M; Van der Post, Joris A M; Mol, Ben W; Kok, Marjolein

    2015-12-01

    To provide an overview of existing prediction models for successful ECV, and to assess their quality, development and performance. We searched MEDLINE, EMBASE and the Cochrane Library to identify all articles reporting on prediction models for successful ECV published from inception to January 2015. We extracted information on study design, sample size, model-building strategies and validation. We evaluated the phases of model development and summarized their performance in terms of discrimination, calibration and clinical usefulness. We collected different predictor variables together with their defined significance, in order to identify important predictor variables for successful ECV. We identified eight articles reporting on seven prediction models. All models were subjected to internal validation. Only one model was also validated in an external cohort. Two prediction models had a low overall risk of bias, of which only one showed promising predictive performance at internal validation. This model also completed the phase of external validation. For none of the models was their impact on clinical practice evaluated. The most important predictor variables for successful ECV described in the selected articles were parity, placental location, breech engagement and the fetal head being palpable. One model was assessed using discrimination and calibration using internal (AUC 0.71) and external validation (AUC 0.64), while two other models were assessed with discrimination and calibration, respectively. We found one prediction model for breech presentation that was validated in an external cohort and had acceptable predictive performance. This model should be used to counsel women considering ECV. Copyright © 2015. Published by Elsevier Ireland Ltd.
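    The discrimination statistic reported above (AUC, or c-statistic) can be computed directly from model scores via its Mann-Whitney interpretation. A minimal sketch; the toy inputs below are illustrative, not data from the reviewed studies:

```python
def auc(scores_success, scores_failure):
    """Concordance (c-statistic / AUC) via the Mann-Whitney statistic:
    the probability that a randomly chosen successful ECV receives a
    higher model score than a randomly chosen failed one, with ties
    counting one half."""
    wins = sum((s > f) + 0.5 * (s == f)
               for s in scores_success for f in scores_failure)
    return wins / (len(scores_success) * len(scores_failure))
```

    An AUC of 0.5 means no discrimination and 1.0 means perfect separation; the internally validated model above sits at 0.71 by this measure.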

  13. PM(10) episodes in Greece: Local sources versus long-range transport-observations and model simulations.

    Science.gov (United States)

    Matthaios, Vasileios N; Triantafyllou, Athanasios G; Koutrakis, Petros

    2017-01-01

    Periods of abnormally high concentrations of atmospheric pollutants, defined as air pollution episodes, can cause adverse health effects. Southern European countries experience high particulate matter (PM) levels originating from local and distant sources. In this study, we investigated the occurrence and nature of extreme PM10 (PM with an aerodynamic diameter ≤10 μm) pollution episodes in Greece. We examined PM10 concentration data from 18 monitoring stations located at five sites across the country: (1) an industrial area in northwestern Greece (Western Macedonia Lignite Area, WMLA), which includes sources such as lignite mining operations and lignite power plants that generate a high percentage of the energy in Greece; (2) the greater Athens area, the most populated area of the country; and (3) Thessaloniki, (4) Patra, and (5) Volos, three large cities in Greece. We defined extreme PM10 pollution episodes (EEs) as days during which PM10 concentrations at all five sites exceeded the European Union (EU) 24-hr PM10 standards. For each EE, we identified the corresponding prevailing synoptic and local meteorological conditions, including wind surface data, for the period from January 2009 through December 2011. We also analyzed data from remote sensing and model simulations. We recorded 14 EEs that occurred over 49 days and could be grouped into two categories: (1) Local Source Impact (LSI; 26 days, 53%) and (2) African Dust Impact (ADI; 23 days, 47%). Our analysis suggested that the contribution of local sources to ADI EEs was relatively small. LSI EEs were observed only in the cold season, whereas ADI EEs occurred throughout the year, with a higher frequency during the cold season. The EEs with the highest intensity were recorded during African dust intrusions. ADI episodes were found to contribute more than local sources in Greece, with the ratio of ADI to LSI fractional contributions ranging from 1.1 to 3.10. The EE contribution during ADI fluctuated from 41 to 83
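    The paper's EE definition — a day on which all monitored sites exceed the EU 24-h PM10 limit — reduces to a set intersection over per-site daily records. A minimal sketch with hypothetical site names and dates; the 50 μg m⁻³ value is the EU 24-h limit:

```python
EU_PM10_DAILY_LIMIT = 50.0   # EU 24-h PM10 limit value, ug/m3

def extreme_episode_days(daily_pm10_by_site):
    """Days on which every monitored site exceeds the EU 24-h PM10 limit,
    mirroring the EE definition above. Input maps site name to a dict of
    {day: 24-h mean PM10 in ug/m3}."""
    sites = list(daily_pm10_by_site.values())
    common_days = set(sites[0]).intersection(*sites[1:])
    return sorted(day for day in common_days
                  if all(s[day] > EU_PM10_DAILY_LIMIT for s in sites))

# Hypothetical 24-h means for three of the sites:
data = {
    "Athens": {"2009-01-05": 62.0, "2009-01-06": 48.0, "2009-01-07": 71.0},
    "WMLA":   {"2009-01-05": 55.0, "2009-01-06": 90.0, "2009-01-07": 66.0},
    "Patra":  {"2009-01-05": 51.0, "2009-01-06": 49.0, "2009-01-07": 80.0},
}
```

    Here 2009-01-06 is excluded because one site stays below the limit even though another records 90 μg m⁻³.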

  14. The MiniBIOS model (version 1A4) at the RIVM

    NARCIS (Netherlands)

    Uijt de Haag PAM; Laheij GMH

    1993-01-01

    This report is the user's guide of the MiniBIOS model, version 1A4. The model is operational at the Laboratory of Radiation Research of the RIVM. MiniBIOS is a simulation model for calculating the transport of radionuclides in the biosphere and the consequential radiation dose to humans. The

  15. Integrated Baseline System (IBS) Version 2.0: Models guide

    Energy Technology Data Exchange (ETDEWEB)

    1994-03-01

    The Integrated Baseline System (IBS) is an emergency management planning and analysis tool being developed under the direction of the US Army Nuclear and Chemical Agency. This Models Guide summarizes the IBS use of several computer models for predicting the results of emergency situations. These include models for predicting dispersion/doses of airborne contaminants, traffic evacuation, explosion effects, heat radiation from a fire, and siren sound transmission. The guide references additional technical documentation on the models when such documentation is available from other sources. The audience for this manual is chiefly emergency management planners and analysts, but also data managers and system managers.

  16. Microsoft Repository Version 2 and the Open Information Model.

    Science.gov (United States)

    Bernstein, Philip A.; Bergstraesser, Thomas; Carlson, Jason; Pal, Shankar; Sanders, Paul; Shutt, David

    1999-01-01

    Describes the programming interface and implementation of the repository engine and the Open Information Model for Microsoft Repository, an object-oriented meta-data management facility that ships in Microsoft Visual Studio and Microsoft SQL Server. Discusses Microsoft's component object model, object manipulation, queries, and information…

  17. Prediction models for successful external cephalic version: a systematic review

    NARCIS (Netherlands)

    Velzel, Joost; de Hundt, Marcella; Mulder, Frederique M.; Molkenboer, Jan F. M.; van der Post, Joris A. M.; Mol, Ben W.; Kok, Marjolein

    2015-01-01

    To provide an overview of existing prediction models for successful ECV, and to assess their quality, development and performance. We searched MEDLINE, EMBASE and the Cochrane Library to identify all articles reporting on prediction models for successful ECV published from inception to January 2015.

  18. Efficient Modelling and Generation of Markov Automata (extended version)

    NARCIS (Netherlands)

    Timmer, Mark; Katoen, Joost P.; van de Pol, Jan Cornelis; Stoelinga, Mariëlle Ida Antoinette

    2012-01-01

    This paper introduces a framework for the efficient modelling and generation of Markov automata. It consists of (1) the data-rich process-algebraic language MAPA, allowing concise modelling of systems with nondeterminism, probability and Markovian timing; (2) a restricted form of the language, the

  19. STORM WATER MANAGEMENT MODEL USER'S MANUAL VERSION 5.0

    Science.gov (United States)

    The EPA Storm Water Management Model (SWMM) is a dynamic rainfall-runoff simulation model used for single event or long-term (continuous) simulation of runoff quantity and quality from primarily urban areas. SWMM was first developed in 1971 and has undergone several major upgrade...

  20. Integrated Baseline System (IBS) Version 1.03: Models guide

    Energy Technology Data Exchange (ETDEWEB)

    1993-01-01

    The Integrated Baseline System (IBS), operated by the Federal Emergency Management Agency (FEMA), is a system of computerized tools for emergency planning and analysis. This document is the models guide for the IBS and explains how to use the emergency-related computer models. This document provides information for the experienced system user, and is the primary reference for the computer modeling software supplied with the system. It is designed for emergency managers and planners, and others familiar with the concepts of computer modeling. Although the IBS manual set covers basic and advanced operations, it is not a complete reference document set. Emergency situation modeling software in the IBS is supported by additional technical documents. Some of the other IBS software is commercial software for which more complete documentation is available. The IBS manuals reference such documentation where necessary.

  1. Flipped version of the supersymmetric strongly coupled preon model

    Energy Technology Data Exchange (ETDEWEB)

    Fajfer, S. (Institut za Fiziku, University of Sarajevo, Sarajevo, (Yugoslavia)); Milekovic, M.; Tadic, D. (Zavod za Teorijsku Fiziku, Prirodoslovno-Matematicki Fakultet, University of Zagreb, Croatia, (Yugoslavia))

    1989-12-01

    In the supersymmetric SU(5) (SUSY SU(5)) composite model (which was described in an earlier paper) the fermion mass terms can be easily constructed. The SUSY SU(5)⊗U(1), i.e., flipped, composite model possesses a completely analogous composite-particle spectrum. However, in that model one cannot construct a renormalizable superpotential which would generate fermion mass terms. This contrasts with the standard noncomposite grand unified theories (GUTs) in which both the Georgi-Glashow electrical charge embedding and its flipped counterpart lead to renormalizable theories.

  2. Radarsat Antarctic Mapping Project Digital Elevation Model, Version 2

    Data.gov (United States)

    National Aeronautics and Space Administration — The high-resolution Radarsat Antarctic Mapping Project (RAMP) Digital Elevation Model (DEM) combines topographic data from a variety of sources to provide consistent...

  3. U.S. Coastal Relief Model - Southern California Version 2

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — NGDC's U.S. Coastal Relief Model (CRM) provides a comprehensive view of the U.S. coastal zone integrating offshore bathymetry with land topography into a seamless...

  4. ONKALO rock mechanics model (RMM) - Version 2.0

    International Nuclear Information System (INIS)

    Moenkkoenen, H.; Hakala, M.; Paananen, M.; Laine, E.

    2012-02-01

    The Rock Mechanics Model of the ONKALO rock volume is a description of the significant features and parameters related to rock mechanics. The main objective is to develop a tool to predict the rock properties, quality and hence the potential for stress failure which can then be used for continuing design of the ONKALO and the repository. This is the second implementation of the Rock Mechanics Model and it includes sub-models of the intact rock strength, in situ stress, thermal properties, rock mass quality and properties of the brittle deformation zones. Because of the varying quantities of available data for the different parameters, the types of presentations also vary: some data sets can be presented in the style of a 3D block model but, in other cases, a single distribution represents the whole rock volume hosting the ONKALO. (orig.)

  5. Geological model of the ONKALO area version 0

    International Nuclear Information System (INIS)

    Paananen, M.; Paulamaeki, S.; Gehoer, S.; Kaerki, A.

    2006-03-01

    The geological model of the ONKALO area is composed of four submodels: a ductile deformation model, a lithological model, a brittle deformation model and an alteration model. The ductile deformation model describes and models the products of polyphase ductile deformation, which facilitates the definition of the dimensions and geometrical properties of the individual lithological units determined in the lithological model. The lithological model describes the properties of rock units that can be defined on the basis of migmatite structures, textures and modal compositions. The brittle deformation model describes the products of multiple phases of brittle deformation, and the alteration model describes the types, occurrence and effects of hydrothermal alteration. On the basis of refolding and crosscutting relationships, the metamorphic supracrustal rocks have been subject to five stages of ductile deformation. This resulted in a pervasive, composite foliation which shows a rather constant attitude in the ONKALO area. Based on observations in outcrops, investigation trenches and drill cores, 3D modelling of the lithological units is carried out assuming that the contacts are quasiconcordant. Under this assumption, the strike and dip of the foliation has been used as a tool to correlate the lithologies between the drillholes, and from surface and tunnel outcrops to the drillholes. The rocks at Olkiluoto can be divided into two major groups: (1) supracrustal high-grade metamorphic rocks, including various migmatitic gneisses, homogeneous tonalitic-granodioritic-granitic gneisses, mica gneisses, quartzitic gneisses and mafic gneisses, and (2) igneous rocks, including pegmatitic granites and diabase dykes.
The migmatitic gneisses can further be divided into three subgroups in terms of the type of migmatite

  6. The Oak Ridge Competitive Electricity Dispatch (ORCED) Model Version 9

    Energy Technology Data Exchange (ETDEWEB)

    Hadley, Stanton W. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Baek, Young Sun [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States)

    2016-11-01

    The Oak Ridge Competitive Electricity Dispatch (ORCED) model dispatches power plants in a region to meet the electricity demands for any single given year up to 2030. It uses publicly available sources of data describing electric power units such as the National Energy Modeling System and hourly demands from utility submittals to the Federal Energy Regulatory Commission that are projected to a future year. The model simulates a single region of the country for a given year, matching generation to demands and predefined net exports from the region, assuming no transmission constraints within the region. ORCED can calculate a number of key financial and operating parameters for generating units and regional market outputs including average and marginal prices, air emissions, and generation adequacy. By running the model with and without changes such as generation plants, fuel prices, emission costs, plug-in hybrid electric vehicles, distributed generation, or demand response, the marginal impact of these changes can be found.

  7. Due Regard Encounter Model Version 1.0

    Science.gov (United States)

    2013-08-19

    Note that no existing model covers encounters between two IFR aircraft in oceanic airspace. The reason for this is that one cannot observe encounters...encounters between instrument flight rules (IFR) and non-IFR traffic beyond 12 NM. TABLE 1. Encounter model categories (aircraft of interest by location and flight rule, versus intruder aircraft class): CONUS/IFR - IFR: C, VFR: C, Noncooperative Conventional: U, Noncooperative Unconventional: X; CONUS/VFR - IFR: C, VFR: U, Noncooperative Conventional: U, Noncooperative Unconventional: X; Offshore/IFR - IFR: C, VFR: C, Noncooperative Conventional: U, Noncooperative Unconventional: X; Offshore/VFR - IFR: C, VFR: U

  8. Geological model of the Olkiluoto site Version O

    International Nuclear Information System (INIS)

    Paulamaeki, S.; Paananen, M.; Gehoer, S.

    2006-05-01

    The geological model of the Olkiluoto site consists of four submodels: the lithological model, the ductile deformation model, the brittle deformation model and the alteration model. The lithological model gives the properties of the rock units that can be defined on the basis of migmatite structures, textures and modal compositions. The ductile deformation model describes and models the products of polyphase ductile deformation, which makes it possible to define the dimensions and geometrical properties of the individual lithological units determined in the lithological model. The brittle deformation model describes the products of multiple phases of brittle deformation. The alteration model describes the types, occurrence and effects of hydrothermal alteration. The rocks of Olkiluoto can be divided into two major classes: (1) supracrustal high-grade metamorphic rocks, including various migmatitic gneisses, tonalitic-granodioritic-granitic gneisses, mica gneisses, quartz gneisses and mafic gneisses, and (2) igneous rocks, including pegmatitic granites and diabase dykes. The migmatitic gneisses can further be divided into three subgroups in terms of the type of migmatite structure: veined gneisses, stromatic gneisses and diatexitic gneisses. On the basis of refolding and crosscutting relationships, the metamorphic supracrustal rocks have been subject to polyphase ductile deformation in five stages. In 3D modelling of the lithological units, an assumption has been made, on the basis of measurements in outcrops, investigation trenches and drill cores, that the pervasive, composite foliation produced as a result of polyphase ductile deformation has a rather constant attitude in the ONKALO area. Consequently, the strike and dip of the foliation has been used as a tool through which the lithologies have been correlated between the drillholes and from the surface to the drillholes.
The bedrock in the Olkiluoto site has been subject to extensive hydrothermal alteration

  9. Institutional Transformation Version 2.5 Modeling and Planning.

    Energy Technology Data Exchange (ETDEWEB)

    Villa, Daniel [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Mizner, Jack H. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Passell, Howard D. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Gallegos, Gerald R. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Peplinski, William John [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Vetter, Douglas W. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Evans, Christopher A. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Malczynski, Leonard A. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Addison, Marlin [Arizona State Univ., Mesa, AZ (United States); Schaffer, Matthew A. [Bridgers and Paxton Engineering Firm, Albuquerque, NM (United States); Higgins, Matthew W. [Vibrantcy, Albuquerque, NM (United States)

    2017-02-01

    Reducing the resource consumption and emissions of large institutions is an important step toward a sustainable future. The vision of Sandia National Laboratories' (SNL) Institutional Transformation (IX) project is to provide tools that enable planners to make well-informed decisions concerning sustainability, resource conservation, and emissions reduction across multiple sectors. The building sector has been the primary focus so far because it is the largest consumer of resources for SNL. The IX building module allows users to define the evolution of many buildings over time. The module has been created so that it can be applied generally to any set of DOE-2 ( http://doe2.com ) building models that have been altered to include the parameters and expressions required by energy conservation measures (ECMs). Once building models have been appropriately prepared, they are checked into a Microsoft Access (r) database. Each building can be represented by many models, which makes it possible to keep a continuous record of past models as they are replaced by newer ones when the building changes. In addition, the building module can apply climate scenarios by assigning a different weather file to each simulation year. Once the database has been configured, a user interface in Microsoft Excel (r) is used to create scenarios with one or more ECMs. The capability to include central utility buildings (CUBs) that serve more than one building with chilled water has also been developed: a utility joins multiple building models into a single model, after which several manual steps are required to complete the process. Once this CUB model has been created, the individual contributions of each building are still tracked through meters. Currently, 120 building models from SNL's New Mexico and California campuses have been created. This includes all buildings at SNL greater than 10,000 sq. ft

  10. Better Patient Care At High-Quality Hospitals May Save Medicare Money And Bolster Episode-Based Payment Models.

    Science.gov (United States)

    Tsai, Thomas C; Greaves, Felix; Zheng, Jie; Orav, E John; Zinner, Michael J; Jha, Ashish K

    2016-09-01

    US policy makers are making efforts to simultaneously improve the quality of and reduce spending on health care through alternative payment models such as bundled payment. Bundled payment models are predicated on the theory that aligning financial incentives for all providers across an episode of care will lower health care spending while improving quality. Whether this is true remains unknown. Using national Medicare fee-for-service claims for the period 2011-12 and data on hospital quality, we evaluated how thirty- and ninety-day episode-based spending were related to two validated measures of surgical quality-patient satisfaction and surgical mortality. We found that patients who had major surgery at high-quality hospitals cost Medicare less than those who had surgery at low-quality institutions, for both thirty- and ninety-day periods. The difference in Medicare spending between low- and high-quality hospitals was driven primarily by postacute care, which accounted for 59.5 percent of the difference in thirty-day episode spending, and readmissions, which accounted for 19.9 percent. These findings suggest that efforts to achieve value through bundled payment should focus on improving care at low-quality hospitals and reducing unnecessary use of postacute care. Project HOPE—The People-to-People Health Foundation, Inc.

  11. Multi-center MRI prediction models: Predicting sex and illness course in first episode psychosis patients

    OpenAIRE

    Nieuwenhuis, Mireille; Schnack, Hugo G.; van Haren, Neeltje E.; Kahn, René S.; Lappin, Julia; Dazzan, Paola; Morgan, Craig; Reinders, Antje A.; Gutierrez-Tordesillas, Diana; Gutierrez-Tordesillas, Diana; Roiz-Santiañez, Roberto; Crespo-Facorro, Benedicto; Schaufelberger, Maristela S.; Rosa, Pedro G.; Zanetti, Marcus V.

    2017-01-01

    Structural Magnetic Resonance Imaging (MRI) studies have attempted to use brain measures obtained at the first-episode of psychosis to predict subsequent outcome, with inconsistent results. Thus, there is a real need to validate the utility of brain measures in the prediction of outcome using large datasets, from independent samples, obtained with different protocols and from different MRI scanners. This study had three main aims: 1) to investigate whether structural MRI data from multiple ce...

  12. Modeling the Role of Working Memory and Episodic Memory in Behavioral Tasks

    OpenAIRE

    Zilli, Eric A.; Hasselmo, Michael E.

    2008-01-01

    The mechanisms of goal-directed behavior have been studied using reinforcement learning theory, but these theoretical techniques have not often been used to address the role of memory systems in performing behavioral tasks. The present work addresses this shortcoming by providing a way in which working memory and episodic memory may be included in the reinforcement learning framework, then simulating the successful acquisition and performance of six behavioral tasks, drawn from or inspired by...

  13. Changing physician incentives for affordable, quality cancer care: results of an episode payment model.

    Science.gov (United States)

    Newcomer, Lee N; Gould, Bruce; Page, Ray D; Donelan, Sheila A; Perkins, Monica

    2014-09-01

    This study tested the combination of an episode payment coupled with actionable use and quality data as an incentive to improve quality and reduce costs. Medical oncologists were paid a single fee, in lieu of any drug margin, to treat their patients. Chemotherapy medications were reimbursed at the average sales price, a proxy for actual cost. Five volunteer medical groups were compared with a large national payer registry of fee-for-service patients with cancer to examine the difference in cost before and after the initiation of the payment change. Between October 2009 and December 2012, the five groups treated 810 patients with breast, colon, and lung cancer using the episode payments. The registry-predicted fee-for-service cost of the episodes cohort was $98,121,388, but the actual cost was $64,760,116. The predicted cost of chemotherapy drugs was $7,519,504, but the actual cost was $20,979,417. There was no difference between the groups on multiple quality measures. Modifying the current fee-for-service payment system for cancer therapy with feedback data and financial incentives that reward outcomes and cost efficiency resulted in a significant total cost reduction. Eliminating existing financial chemotherapy drug incentives paradoxically increased the use of chemotherapy. Copyright © 2014 by American Society of Clinical Oncology.

  14. Mars Global Reference Atmospheric Model 2010 Version: Users Guide

    Science.gov (United States)

    Justh, H. L.

    2014-01-01

    This Technical Memorandum (TM) presents the Mars Global Reference Atmospheric Model 2010 (Mars-GRAM 2010) and its new features. Mars-GRAM is an engineering-level atmospheric model widely used for diverse mission applications. Applications include systems design, performance analysis, and operations planning for aerobraking, entry, descent and landing, and aerocapture. Additionally, this TM includes instructions on obtaining the Mars-GRAM source code and data files as well as running Mars-GRAM. It also contains sample Mars-GRAM input and output files and an example of how to incorporate Mars-GRAM as an atmospheric subroutine in a trajectory code.

  15. Enteric disease episodes and the risk of acquiring a future sexually transmitted infection: a prediction model in Montreal residents.

    Science.gov (United States)

    Caron, Melissa; Allard, Robert; Bédard, Lucie; Latreille, Jérôme; Buckeridge, David L

    2016-11-01

    The sexual transmission of enteric diseases poses an important public health challenge. We aimed to build a prediction model capable of identifying individuals with a reported enteric disease who could be at risk of acquiring future sexually transmitted infections (STIs). Passive surveillance data on Montreal residents with at least 1 enteric disease report was used to construct the prediction model. Cases were defined as all subjects with at least 1 STI report following their initial enteric disease episode. A final logistic regression prediction model was chosen using forward stepwise selection. The prediction model with the greatest validity included age, sex, residential location, number of STI episodes experienced prior to the first enteric disease episode, type of enteric disease acquired, and an interaction term between age and male sex. This model had an area under the curve of 0.77 and had acceptable calibration. A coordinated public health response to the sexual transmission of enteric diseases requires that a distinction be made between cases of enteric diseases transmitted through sexual activity from those transmitted through contaminated food or water. A prediction model can aid public health officials in identifying individuals who may have a higher risk of sexually acquiring a reportable disease. Once identified, these individuals could receive specialized intervention to prevent future infection. The information produced from a prediction model capable of identifying higher risk individuals can be used to guide efforts in investigating and controlling reported cases of enteric diseases and STIs. © The Author 2016. Published by Oxford University Press on behalf of the American Medical Informatics Association. All rights reserved. For Permissions, please email: journals.permissions@oup.com.
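A minimal sketch of the modelling approach described above: fitting a logistic-regression prediction model and scoring it with the area under the ROC curve. The data here are synthetic stand-ins (the surveillance data are not public), and the two predictors are hypothetical analogues of age and prior STI episodes.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in data: two hypothetical predictors with an assumed
# true effect (these are NOT the study's variables or coefficients).
n = 2000
X = rng.normal(size=(n, 2))
true_w = np.array([1.0, -0.5])
p_true = 1.0 / (1.0 + np.exp(-(X @ true_w - 0.2)))
y = rng.random(n) < p_true

# Fit a logistic-regression model by plain gradient descent on the log-loss.
w, b = np.zeros(2), 0.0
for _ in range(2000):
    pred = 1.0 / (1.0 + np.exp(-(X @ w + b)))
    w -= 0.5 * (X.T @ (pred - y)) / n
    b -= 0.5 * np.mean(pred - y)

# Area under the ROC curve via the Mann-Whitney rank statistic: the
# probability that a random case outranks a random non-case.
scores = X @ w + b
auc = np.mean(scores[y][:, None] > scores[~y][None, :])
```

With a real data set, the forward stepwise selection step would be run before this fit to choose the predictor set.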

  16. Red Storm Usage Model: Version 1.12.

    Energy Technology Data Exchange (ETDEWEB)

    Jefferson, Karen L.; Sturtevant, Judith E.

    2005-12-01

    Red Storm is an Advanced Simulation and Computing (ASC) funded massively parallel supercomputer located at Sandia National Laboratories (SNL). The Red Storm Usage Model (RSUM) documents the capabilities and the environment provided for the FY05 Tri-Lab Level II Limited Availability Red Storm User Environment Milestone and the FY05 SNL Level II Limited Availability Red Storm Platform Milestone. This document describes specific capabilities, tools, and procedures to support both local and remote users. The model is focused on the needs of the ASC user working in the secure computing environments at Los Alamos National Laboratory (LANL), Lawrence Livermore National Laboratory (LLNL), and SNL. Additionally, the Red Storm Usage Model maps the provided capabilities to the Tri-Lab ASC Computing Environment (ACE) requirements. The ACE requirements reflect the high performance computing requirements for the ASC community and have been updated in FY05 to reflect the community's needs. For each section of the RSUM, Appendix I maps the ACE requirements to the Limited Availability User Environment capabilities and includes a description of ACE requirements met and those requirements that are not met in that particular section. The Red Storm Usage Model, along with the ACE mappings, has been issued and vetted throughout the Tri-Lab community.

  17. Zig-zag version of the Frenkel-Kontorova model

    DEFF Research Database (Denmark)

    Christiansen, Peter Leth; Savin, A.V.; Zolotaryuk, Alexander

    1996-01-01

    We study a generalization of the Frenkel-Kontorova model which describes a zig-zag chain of particles coupled by both the first- and second-neighbor harmonic forces and subjected to a planar substrate with a commensurate potential relief. The particles are supposed to have two degrees of freedom...
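A minimal sketch of the kind of energy functional such a model implies: particles with two degrees of freedom, first- and second-neighbor harmonic couplings, and a commensurate substrate potential. The parameter values and the flat-chain reference geometry are illustrative assumptions, not taken from the paper.

```python
import math

def chain_energy(pos, k1=1.0, k2=0.5, a=1.0, eps=0.1):
    """Potential energy of a planar chain of Frenkel-Kontorova type.

    pos   : list of (x, y) particle positions (two degrees of freedom each)
    k1, k2: first- and second-neighbor harmonic spring constants
    a     : equilibrium first-neighbor spacing (2a taken for second neighbors,
            a flat-chain reference; the zig-zag ground state arises from the
            competition between the couplings)
    eps   : amplitude of a commensurate cosine substrate relief

    All parameter values here are illustrative, not from the paper.
    """
    def dist(p, q):
        return math.hypot(p[0] - q[0], p[1] - q[1])

    e = 0.0
    for i in range(len(pos) - 1):          # first-neighbor springs
        e += 0.5 * k1 * (dist(pos[i], pos[i + 1]) - a) ** 2
    for i in range(len(pos) - 2):          # second-neighbor springs
        e += 0.5 * k2 * (dist(pos[i], pos[i + 2]) - 2 * a) ** 2
    for x, _y in pos:                      # commensurate substrate potential
        e += eps * (1.0 - math.cos(2.0 * math.pi * x / a))
    return e
```

A flat chain at spacing `a` sitting in the substrate minima has zero energy under these conventions; any transverse buckling or compression raises it.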

  18. The "KILDER" air pollution modelling system, version 2.0

    Energy Technology Data Exchange (ETDEWEB)

    Gram, F.

    1996-12-31

    This report describes the KILDER Air Pollution Modelling System, a system of small PC programs for the calculation of long-term emission, dispersion, concentration and exposure from different source categories. The system consists of three parts: (1) the dispersion models POI-KILD and ARE-KILD for point and area sources, respectively; (2) the meteorological programs WINDFREC, STABFREC and METFREC; and (3) supporting programs for calculating emissions and exposure and for operating with binary data fields. The file structure is based on binary files with data fields. The data fields are matrices with different types of values and may be read into the computer or be calculated in other programs. 19 refs., 22 figs., 3 tabs.

  19. Implementation of a parallel version of a regional climate model

    Energy Technology Data Exchange (ETDEWEB)

    Gerstengarbe, F.W. [ed.; Kuecken, M. [Potsdam-Institut fuer Klimafolgenforschung (PIK), Potsdam (Germany); Schaettler, U. [Deutscher Wetterdienst, Offenbach am Main (Germany). Geschaeftsbereich Forschung und Entwicklung

    1997-10-01

    A regional climate model developed by the Max Planck Institute for Meteorology and the German Climate Computing Centre in Hamburg, based on the 'Europa' and 'Deutschland' models of the German Weather Service, has been parallelized and implemented on the IBM RS/6000 SP computer system of the Potsdam Institute for Climate Impact Research, including parallel input/output processing, the explicit Eulerian time step, the semi-implicit corrections, the normal-mode initialization and the physical parameterizations of the German Weather Service. The implementation uses Fortran 90 and the Message Passing Interface. The parallelization strategy is a 2D domain decomposition. This report describes the parallelization strategy, the parallel I/O organization, the influence of different domain decomposition approaches on static and dynamic load imbalances, and first numerical results. (orig.)
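The 2D domain decomposition mentioned above can be sketched as follows: an nx-by-ny grid is split over a px-by-py process mesh, with remainder rows and columns assigned to the low-index processes (one common convention; the report's actual scheme may differ).

```python
def decompose_2d(nx, ny, px, py):
    """Split an nx-by-ny grid over a px-by-py process mesh.

    Returns a dict mapping rank -> (i0, i1, j0, j1), the half-open index
    ranges owned by that rank. Remainder cells go to low-index processes.
    """
    def split(n, p):
        base, rem = divmod(n, p)
        bounds, start = [], 0
        for k in range(p):
            size = base + (1 if k < rem else 0)  # spread the remainder
            bounds.append((start, start + size))
            start += size
        return bounds

    xb, yb = split(nx, px), split(ny, py)
    return {pj * px + pi: (xb[pi][0], xb[pi][1], yb[pj][0], yb[pj][1])
            for pj in range(py) for pi in range(px)}
```

In an MPI code each rank would then allocate its subdomain plus halo rows and exchange halos with its mesh neighbors each time step.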

  20. External Validation of a Prediction Model for Successful External Cephalic Version

    NARCIS (Netherlands)

    de Hundt, Marcella; Vlemmix, Floortje; Kok, Marjolein; van der Steeg, Jan W.; Bais, Joke M.; Mol, Ben W.; van der Post, Joris A.

    2012-01-01

    We sought external validation of a prediction model for the probability of a successful external cephalic version (ECV). We evaluated the performance of the prediction model with calibration and discrimination. For clinical practice, we developed a score chart to calculate the probability of a

  1. Regularized integrable version of the one-dimensional quantum sine-Gordon model

    International Nuclear Information System (INIS)

    Japaridze, G.I.; Nersesyan, A.A.; Wiegmann, P.B.

    1983-01-01

    The authors derive a regularized exactly solvable version of the one-dimensional quantum sine-Gordon model, proceeding from the exact solution of the U(1)-symmetric Thirring model. The ground state and the excitation spectrum are obtained in the region ν² < 8π. (Auth.)

  2. Connected Equipment Maturity Model Version 1.0

    Energy Technology Data Exchange (ETDEWEB)

    Butzbaugh, Joshua B. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Mayhorn, Ebony T. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Sullivan, Greg [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Whalen, Scott A. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States)

    2017-05-01

    The Connected Equipment Maturity Model (CEMM) evaluates the high-level functionality and characteristics that enable equipment to provide the four categories of energy-related services through communication with other entities (e.g., equipment, third parties, utilities, and users). The CEMM will help the U.S. Department of Energy, industry, energy efficiency organizations, and research institutions benchmark the current state of connected equipment and identify capabilities that may be attained to reach a more advanced, future state.

  3. System cost model user's manual, version 1.2

    International Nuclear Information System (INIS)

    Shropshire, D.

    1995-06-01

    The System Cost Model (SCM) was developed by Lockheed Martin Idaho Technologies in Idaho Falls, Idaho and MK-Environmental Services in San Francisco, California to support the Baseline Environmental Management Report sensitivity analysis for the U.S. Department of Energy (DOE). The SCM serves the needs of the entire DOE complex for treatment, storage, and disposal (TSD) of mixed low-level, low-level, and transuranic waste. The model can be used to evaluate total complex costs based on various configuration options or to evaluate site-specific options. The site-specific cost estimates are based on generic assumptions such as waste loads and densities, treatment processing schemes, existing facilities capacities and functions, storage and disposal requirements, schedules, and cost factors. The SCM allows customization of the data for detailed site-specific estimates. There are approximately forty TSD module designs that have been further customized to account for design differences for nonalpha, alpha, remote-handled, and transuranic wastes. The SCM generates cost profiles based on the model default parameters or customized user-defined input and also generates costs for transporting waste from generators to TSD sites

  4. Evolution of a 90-day model of care for bundled episodic payments for congestive heart failure in home care.

    Science.gov (United States)

    Feld, April; Madden-Baer, Rose; McCorkle, Ruth

    2016-01-01

    The Centers for Medicare and Medicaid Services Innovation Center's Episode-Based Payment initiatives present a large opportunity to reduce cost from waste and variation and stand to align hospitals, physicians, and postacute providers in a redesign of care that achieves savings and improves quality. Community-based organizations are at the forefront of this care redesign through innovative models of care aimed at bridging gaps in care coordination and reducing hospital readmissions. This article describes a community-based provider's approach to participation under the Bundled Payments for Care Improvement initiative and a 90-day model of care for congestive heart failure in home care.

  5. Geological Model of the Olkiluoto Site. Version 2.0

    International Nuclear Information System (INIS)

    Aaltonen, I.

    2010-10-01

    The rocks of Olkiluoto can be divided into two major classes: 1) supracrustal high-grade metamorphic rocks, including various migmatitic gneisses, tonalitic-granodioritic-granitic gneisses, mica gneisses, quartz gneisses and mafic gneisses, and 2) igneous rocks, including pegmatitic granites and diabase dykes. The migmatitic gneisses can further be divided into three subgroups in terms of the type of migmatite structure: veined gneisses, stromatic gneisses and diatexitic gneisses. On the basis of refolding and crosscutting relationships, the metamorphic supracrustal rocks have been subjected to polyphase ductile deformation consisting of five stages, with D2 being locally the most intensive phase, producing thrust-related folding, strong migmatisation and pervasive foliation. In 3D modelling of the lithological units, an assumption has been made, on the basis of measurements in the outcrops, investigation trenches and drill cores, that the pervasive, composite foliation produced as a result of polyphase ductile deformation has a rather constant attitude in the ONKALO area. Consequently, the strike and dip of the foliation have been used as a tool through which the lithologies have been correlated between the drillholes and from the surface to the drillholes. In addition, the largest ductile deformation zones and tectonic units are described in the 3D model. The bedrock at the Olkiluoto site has been subjected to extensive hydrothermal alteration, which has taken place at reasonably low-temperature conditions, the estimated temperature interval being from slightly over 300 deg C to less than 100 deg C. Two types of alteration can be observed: firstly, pervasive alteration and, secondly, fracture-controlled alteration. Clay mineralisation and sulphidisation are the most prominent alteration events in the site area. Sulphides are located in the uppermost part of the model volume, roughly following the foliation and lithological trend. Kaolinite is also mainly located in the

  6. A magnetic version of the Smilansky-Solomyak model

    Czech Academy of Sciences Publication Activity Database

    Barseghyan, Diana; Exner, Pavel

    2017-01-01

    Vol. 50, No. 48 (2017), article No. 485203. ISSN 1751-8113 R&D Projects: GA ČR GA17-01706S Institutional support: RVO:61389005 Keywords: Smilansky-Solomyak model * spectral transition * homogeneous magnetic field * discrete spectrum * essential spectrum Subject RIV: BE - Theoretical Physics OBOR OECD: Atomic, molecular and chemical physics (physics of atoms and molecules including collision, interaction with radiation, magnetic resonances, Mössbauer effect) Impact factor: 1.857, year: 2016

  7. PUMA Version 6 Multiplatform with Facilities to be coupled with other Simulation Models

    International Nuclear Information System (INIS)

    Grant, Carlos

    2013-01-01

    PUMA is a code for nuclear reactor calculation used in all nuclear installations in Argentina for the simulation of fuel management, power cycles and transient events by means of spatial kinetic diffusion theory in 3D. The versions used up to now ran on the WINDOWS platform with very good results. Nowadays PUMA must work under different operating systems, LINUX among others, and must also have facilities to be coupled with other models. For this reason this new version was reprogrammed in ADA, a language oriented toward safe programming and available on any operating system. In former versions PUMA was executed through macro instructions written in LOGO. In this version it is also possible to use PYTHON, which additionally provides access at execution time to the internal data of PUMA. The use of PYTHON offers an easy way to couple PUMA with other codes. The possibilities of this new version of PUMA are shown by means of examples of input data and process control using PYTHON and LOGO. The implementation of this methodology in other codes to be coupled with PUMA is discussed for versions run in WINDOWS and LINUX. (author)

  8. Geological model of the Olkiluoto site. Version 1.0

    International Nuclear Information System (INIS)

    Mattila, J.; Aaltonen, I.; Kemppainen, K.

    2008-01-01

    The rocks of Olkiluoto can be divided into two major classes: (1) supracrustal high-grade metamorphic rocks, including various migmatitic gneisses, tonalitic-granodioritic-granitic gneisses, mica gneisses, quartz gneisses and mafic gneisses, and (2) igneous rocks, including pegmatitic granites and diabase dykes. The migmatitic gneisses can further be divided into three subgroups in terms of the type of migmatite structure: veined gneisses, stromatic gneisses and diatexitic gneisses. On the basis of refolding and crosscutting relationships, the metamorphic supracrustal rocks have been subjected to polyphase ductile deformation consisting of five stages, with D2 being locally the most intensive phase, producing thrust-related folding, strong migmatisation and pervasive foliation. In 3D modelling of the lithological units, an assumption has been made, on the basis of measurements in the outcrops, investigation trenches and drill cores, that the pervasive, composite foliation produced as a result of polyphase ductile deformation has a rather constant attitude in the ONKALO area. Consequently, the strike and dip of the foliation have been used as a tool through which the lithologies have been correlated between the drillholes and from the surface to the drillholes. The bedrock at the Olkiluoto site has been subjected to extensive hydrothermal alteration, which has taken place at reasonably low-temperature conditions, the estimated temperature interval being from slightly over 300 deg C to less than 100 deg C. Two types of alteration can be observed: (1) pervasive (disseminated) alteration and (2) fracture-controlled (veinlet) alteration. Kaolinisation and sulphidisation are the most prominent alteration events in the site area. Sulphides are located in the uppermost part of the model volume, roughly following the lithological trend (slightly dipping to the SE). Kaolinite is also located in the uppermost part, but its orientation is opposite to the main lithological trend.

  9. Response Surface Modeling Tool Suite, Version 1.x

    Energy Technology Data Exchange (ETDEWEB)

    2016-07-05

    The Response Surface Modeling (RSM) Tool Suite is a collection of three codes used to generate an empirical interpolation function for a collection of drag coefficient calculations computed with Test Particle Monte Carlo (TPMC) simulations. The first code, "Automated RSM", automates the generation of a drag coefficient RSM for a particular object to a single command. "Automated RSM" first creates a Latin Hypercube Sample (LHS) of 1,000 ensemble members to explore the global parameter space. For each ensemble member, a TPMC simulation is performed and the object drag coefficient is computed. In the next step of the "Automated RSM" code, a Gaussian process is used to fit the TPMC simulations. In the final step, Markov Chain Monte Carlo (MCMC) is used to evaluate the non-analytic probability distribution function from the Gaussian process. The second code, "RSM Area", creates a look-up table for the projected area of the object based on input limits on the minimum and maximum allowed pitch and yaw angles and pitch and yaw angle intervals. The projected area from the look-up table is used to compute the ballistic coefficient of the object based on its pitch and yaw angle. An accurate ballistic coefficient is crucial in accurately computing the drag on an object. The third code, "RSM Cd", uses the RSM generated by the "Automated RSM" code and the projected area look-up table generated by the "RSM Area" code to accurately compute the drag coefficient and ballistic coefficient of the object. The user can modify the object velocity, object surface temperature, the translational temperature of the gas, the species concentrations of the gas, and the pitch and yaw angles of the object. Together, these codes allow for the accurate derivation of an object's drag coefficient and ballistic coefficient under any conditions with only knowledge of the object's geometry and mass.

  10. Atmospheric Circulation Response to Episodic Arctic Warming in an Idealized Model

    Science.gov (United States)

    Hell, M. C.; Schneider, T.; Li, C.

    2017-12-01

    Recent Arctic sea ice loss has drawn attention as a potential driver of fall/winter circulation changes. Past work has shown that sea ice loss can be related to a breakdown of the stratospheric polar vortex, with the result of long-delayed surface weather phenomena in late winter/early spring. In this study, we separate the atmospheric dynamical components of, and the mean response timescales to, episodic polar surface heat fluxes using large ensembles of an idealized GCM in the absence of continents and seasons. The atmospheric ensemble-mean response is linearly related to the surface forcing strength and insensitive to the forcing symmetry. Analyses in the Transformed Eulerian Mean framework show that the responses can be separated into (1) an in-phase thermal adjustment and (2) a lagged, eddy-driven component invoking long-standing anomalies in the lower stratosphere. The mid-latitude adjustment to the episodically reduced baroclinicity leads to stratosphere-directed eddy heat fluxes, establishing a stratospheric temperature anomaly responsible for the vortex breakdown. In addition, we discuss the dependence on the background state via correlations in ensemble-member space, and thereby assess the role of Arctic perturbations in the transient large-scale circulation.

  11. Technical Note: Description and assessment of a nudged version of the new dynamics Unified Model

    Directory of Open Access Journals (Sweden)

    O. Morgenstern

    2008-03-01

    We present a "nudged" version of the Met Office general circulation model, the Unified Model. We constrain this global climate model using ERA-40 re-analysis data with the aim of reproducing the observed "weather" over a year from September 1999. Quantitative assessments are made of its performance, focusing on dynamical aspects of nudging and demonstrating that the "weather" is well simulated.
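Nudging of the kind described above is Newtonian relaxation: a term (x_ref - x)/tau is added to each model tendency, pulling the state toward the re-analysis on a timescale tau. A toy sketch (all values arbitrary, not the Unified Model's configuration):

```python
import math

def nudged_step(x, x_ref, dt, tau, tendency):
    """One explicit time step of a model variable x with Newtonian relaxation
    ("nudging") toward a reference state x_ref on timescale tau."""
    return x + dt * (tendency(x) + (x_ref - x) / tau)

# Toy model: free dynamics that drift, nudged toward a fixed "re-analysis"
# value; the tendency and all numbers are arbitrary illustrations.
x, x_ref, dt, tau = 0.0, 1.0, 0.1, 0.5
for _ in range(200):
    x = nudged_step(x, x_ref, dt, tau, tendency=lambda s: 0.2 * math.cos(s))
```

The state settles near (not exactly at) the reference, the residual offset being set by the balance between the free tendency and the relaxation strength.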

  12. Site investigation SFR. Hydrogeological modelling of SFR. Model version 0.2

    Energy Technology Data Exchange (ETDEWEB)

    Oehman, Johan (Golder Associates AB (Sweden)); Follin, Sven (SF GeoLogic (Sweden))

    2010-01-15

    The Swedish Nuclear Fuel and Waste Management Company (SKB) has conducted site investigations for a planned extension of the existing final repository for short-lived radioactive waste (SFR). A hydrogeological model is developed in three model versions, which will be used for safety assessment and design analyses. This report presents a data analysis of the currently available hydrogeological data from the ongoing Site Investigation SFR (KFR27, KFR101, KFR102A, KFR102B, KFR103, KFR104, and KFR105). The purpose of this work is to develop a preliminary hydrogeological Discrete Fracture Network model (hydro-DFN) parameterisation that can be applied in regional-scale modelling. During this work, the Geologic model had not yet been updated for the new data set. Therefore, all analyses were made to the rock mass outside Possible Deformation Zones, according to Single Hole Interpretation. Owing to this circumstance, it was decided not to perform a complete hydro-DFN calibration at this stage. Instead focus was re-directed to preparatory test cases and conceptual questions with the aim to provide a sound strategy for developing the hydrogeological model SFR v. 1.0. The presented preliminary hydro-DFN consists of five fracture sets and three depth domains. A statistical/geometrical approach (connectivity analysis /Follin et al. 2005/) was performed to estimate the size (i.e. fracture radius) distribution of fractures that are interpreted as Open in geologic mapping of core data. Transmissivity relations were established based on an assumption of a correlation between the size and evaluated specific capacity of geologic features coupled to inflows measured by the Posiva Flow Log device (PFL-f data). The preliminary hydro-DFN was applied in flow simulations in order to test its performance and to explore the role of PFL-f data. Several insights were gained and a few model technical issues were raised. These are summarised in Table 5-1

  13. Site investigation SFR. Hydrogeological modelling of SFR. Model version 0.2

    International Nuclear Information System (INIS)

    Oehman, Johan; Follin, Sven

    2010-01-01

    The Swedish Nuclear Fuel and Waste Management Company (SKB) has conducted site investigations for a planned extension of the existing final repository for short-lived radioactive waste (SFR). A hydrogeological model is developed in three model versions, which will be used for safety assessment and design analyses. This report presents a data analysis of the currently available hydrogeological data from the ongoing Site Investigation SFR (KFR27, KFR101, KFR102A, KFR102B, KFR103, KFR104, and KFR105). The purpose of this work is to develop a preliminary hydrogeological Discrete Fracture Network model (hydro-DFN) parameterisation that can be applied in regional-scale modelling. During this work, the Geologic model had not yet been updated for the new data set. Therefore, all analyses were made to the rock mass outside Possible Deformation Zones, according to Single Hole Interpretation. Owing to this circumstance, it was decided not to perform a complete hydro-DFN calibration at this stage. Instead focus was re-directed to preparatory test cases and conceptual questions with the aim to provide a sound strategy for developing the hydrogeological model SFR v. 1.0. The presented preliminary hydro-DFN consists of five fracture sets and three depth domains. A statistical/geometrical approach (connectivity analysis /Follin et al. 2005/) was performed to estimate the size (i.e. fracture radius) distribution of fractures that are interpreted as Open in geologic mapping of core data. Transmissivity relations were established based on an assumption of a correlation between the size and evaluated specific capacity of geologic features coupled to inflows measured by the Posiva Flow Log device (PFL-f data). The preliminary hydro-DFN was applied in flow simulations in order to test its performance and to explore the role of PFL-f data. Several insights were gained and a few model technical issues were raised. These are summarised in Table 5-1

  14. A new version of code Java for 3D simulation of the CCA model

    Science.gov (United States)

    Zhang, Kebo; Xiong, Hailing; Li, Chao

    2016-07-01

    In this paper we present a new version of the program for the CCA model. To benefit from the advantages of the latest technologies, we migrated the running environment from JDK 1.6 to JDK 1.7 and restructured the old program into a new framework, thereby improving its extensibility.

  15. User's guide to the Yucca Mountain Integrating Model (YMIM) Version 2.1

    International Nuclear Information System (INIS)

    Gansemer, J.; Lamont, A.

    1995-04-01

    The Yucca Mountain Integrating Model (YMIM) is an integrated model of the engineered barrier system. It contains models of the processes of waste container failure and nuclide release from the fuel rods. YMIM is driven by scenarios of container and rod temperature, near-field chemistry, and near-field hydrology provided by other modules. It is designed to be highly modular so that a model of an individual process can be easily modified or replaced without interfering with the models of other processes. This manual describes the process models and provides instructions for setting up and running YMIM Version 2.1.

  16. Performance Evaluation of PBL Schemes of ARW Model in Simulating Thermo-Dynamical Structure of Pre-Monsoon Convective Episodes over Kharagpur Using STORM Data Sets

    Science.gov (United States)

    Madala, Srikanth; Satyanarayana, A. N. V.; Srinivas, C. V.; Tyagi, Bhishma

    2016-05-01

    In the present study, the Advanced Research WRF (ARW) model is employed to simulate convective thunderstorm episodes over the Kharagpur (22°30'N, 87°20'E) region of Gangetic West Bengal, India. High-resolution simulations are conducted using 1° × 1° NCEP Final Analysis meteorological fields for the initial and boundary conditions of the events. The performance of two non-local [Yonsei University (YSU), Asymmetric Convective Model version 2 (ACM2)] and two local turbulence kinetic energy (TKE) closures [Mellor-Yamada-Janjic (MYJ), Bougeault-Lacarrere (BouLac)] is evaluated in simulating planetary boundary layer (PBL) parameters and the thermodynamic structure of the atmosphere. The model-simulated parameters are validated with available in situ meteorological observations obtained from a micro-meteorological tower as well as high-resolution DigiCORA radiosonde ascents during the STORM-2007 field experiment at the study location, and with Doppler Weather Radar (DWR) imageries. It has been found that the PBL structure simulated with the TKE closures MYJ and BouLac is in better agreement with observations than that from the non-local closures. The model simulations with these schemes also captured the reflectivity, surface pressure patterns such as wake-low, meso-high and pre-squall low, and the convective updrafts and downdrafts reasonably well. Qualitative and quantitative comparisons reveal that the MYJ scheme, followed by BouLac, better simulated various features of the thunderstorm events over the Kharagpur region. The better performance of MYJ followed by BouLac is evident in the lower mean bias, mean absolute error and root mean square error and the good correlation coefficient for various surface meteorological variables as well as the thermo-dynamical structure of the atmosphere relative to the other PBL schemes. The better performance of the TKE closures may be attributed to their higher mixing efficiency, larger convective energy and better simulation of humidity promoting moist convection relative to the non-local schemes.
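    The scheme ranking above rests on standard surface-verification statistics (mean bias, mean absolute error, root mean square error, correlation). A minimal sketch of how such scores can be computed; the temperature series below are made up for illustration, not STORM-2007 observations:

```python
import numpy as np

def verification_stats(obs, sim):
    """Surface-verification metrics of the kind used in the study:
    mean bias, mean absolute error, RMSE and Pearson correlation."""
    obs = np.asarray(obs, dtype=float)
    sim = np.asarray(sim, dtype=float)
    err = sim - obs
    return {
        "mean_bias": err.mean(),
        "mae": np.abs(err).mean(),
        "rmse": np.sqrt((err ** 2).mean()),
        "corr": np.corrcoef(obs, sim)[0, 1],
    }

# Hypothetical 2 m temperature series (deg C): observations vs. one PBL scheme
obs = [28.1, 29.4, 31.0, 32.2, 31.5, 30.1]
sim = [27.6, 29.9, 31.8, 32.0, 30.7, 29.8]
stats = verification_stats(obs, sim)
```

    Comparing these four numbers across schemes (per variable, per station) is how a "better performing" closure is identified; RMSE is always at least as large as MAE, so a large gap between the two flags occasional big misses.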

  17. The Lagrangian particle dispersion model FLEXPART-WRF VERSION 3.1

    Energy Technology Data Exchange (ETDEWEB)

    Brioude, J.; Arnold, D.; Stohl, A.; Cassiani, M.; Morton, Don; Seibert, P.; Angevine, W. M.; Evan, S.; Dingwell, A.; Fast, Jerome D.; Easter, Richard C.; Pisso, I.; Burkhart, J.; Wotawa, G.

    2013-11-01

    The Lagrangian particle dispersion model FLEXPART was originally designed for calculating long-range and mesoscale dispersion of air pollutants from point sources, such as after an accident in a nuclear power plant. In the meantime FLEXPART has evolved into a comprehensive tool for atmospheric transport modeling and analysis at different scales. This multiscale need from the modeler community has encouraged new developments in FLEXPART. In this document, we present a version that works with the Weather Research and Forecasting (WRF) mesoscale meteorological model. Simple procedures on how to run FLEXPART-WRF are presented along with special options and features that differ from its predecessor versions. In addition, test case data, the source code and visualization tools are provided to the reader as supplementary material.

  18. Intensive case management for high-risk patients with first-episode psychosis: service model and outcomes.

    Science.gov (United States)

    Brewer, Warrick J; Lambert, Timothy J; Witt, Katrina; Dileo, John; Duff, Cameron; Crlenjak, Carol; McGorry, Patrick D; Murphy, Brendan P

    2015-01-01

    The first episode of psychosis is a crucial period when early intervention can alter the trajectory of the young person's ongoing mental health and general functioning. After an investigation into completed suicides in the Early Psychosis Prevention and Intervention Centre (EPPIC) programme, the intensive case management subprogramme was developed in 2003 to provide assertive outreach to young people having a first episode of psychosis who are at high risk owing to risk to self or others, disengagement, or suboptimal recovery. We describe the development of the intensive case management model, characterise the target cohort, and report on outcomes compared with EPPIC treatment as usual. Inclusion criteria, staff support, referral pathways, clinical review processes, models of engagement and care, and risk management protocols are described. We compared 120 consecutive referrals with 50 EPPIC treatment as usual patients (age 15-24 years) in a naturalistic stratified quasi-experimental real-world design. Key performance indicators of service use plus engagement and suicide attempts were compared between EPPIC treatment as usual and intensive case management, and psychosocial and clinical measures were compared between intensive case management referral and discharge. Referrals were predominantly unemployed males with low levels of functioning and educational attainment. They were characterised by a family history of mental illness, migration and early separation, with substantial trauma, history of violence, and forensic attention. Intensive case management improved psychopathology and psychosocial outcomes in high-risk patients and reduced risk ratings, admissions, bed days, and crisis contacts. Characterisation of intensive case management patients validated the clinical research focus and identified a first episode of psychosis high-risk subgroup. 
In a real-world study, implementation of an intensive case management stream within a well-established first episode of psychosis

  19. Ten years of a model of aesthetic appreciation and aesthetic judgments : The aesthetic episode - Developments and challenges in empirical aesthetics.

    Science.gov (United States)

    Leder, Helmut; Nadal, Marcos

    2014-11-01

    About a decade ago, psychology of the arts started to gain momentum owing to a number of developments: technological progress improved the conditions under which art could be studied in the laboratory, neuroscience discovered the arts as an area of interest, and new theories offered a more comprehensive look at aesthetic experiences. Ten years ago, Leder, Belke, Oeberst, and Augustin (2004) proposed a descriptive information-processing model of the components that integrate an aesthetic episode. This theory offered explanations for modern art's large number of individualized styles, innovativeness, and for the diverse aesthetic experiences it can stimulate. In addition, it described how information is processed over the time course of an aesthetic episode, within and across perceptual, cognitive and emotional components. Here, we review the current state of the model, and its relation to the major topics in empirical aesthetics today, including the nature of aesthetic emotions, the role of context, and the neural and evolutionary foundations of art and aesthetics. © 2014 The British Psychological Society.

  20. Towards New Empirical Versions of Financial and Accounting Models Corrected for Measurement Errors

    OpenAIRE

    Francois-Éric Racicot; Raymond Théoret; Alain Coen

    2006-01-01

    In this paper, we propose a new empirical version of the Fama and French model based on the Hausman (1978) specification test and aimed at discarding measurement errors in the variables. The proposed empirical framework is general enough to be used for correcting other financial and accounting models for measurement errors. Removing measurement errors is important at many levels, such as information disclosure, corporate governance and the protection of investors.
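    The measurement-error problem the authors address can be illustrated on synthetic data: an error-ridden regressor attenuates the ordinary least squares (OLS) slope toward zero, while a two-stage least squares (instrumental-variables) estimate recovers it; a Hausman-type test then compares the two estimates. A sketch under assumed data-generating parameters; none of these numbers come from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5000

# True regressor x*, observed with measurement error; z is an instrument
x_true = rng.normal(size=n)
x_obs = x_true + rng.normal(scale=0.8, size=n)   # error-ridden proxy
z = x_true + rng.normal(scale=0.3, size=n)       # correlated with x*, not with its error
y = 1.0 + 2.0 * x_true + rng.normal(scale=0.5, size=n)

def ols(X, y):
    b, *_ = np.linalg.lstsq(X, y, rcond=None)
    return b

X = np.column_stack([np.ones(n), x_obs])
b_ols = ols(X, y)                      # slope biased toward 0 (attenuation)

# Two-stage least squares: first project x_obs on the instrument z,
# then regress y on the fitted values
Z = np.column_stack([np.ones(n), z])
x_hat = Z @ ols(Z, x_obs)
b_iv = ols(np.column_stack([np.ones(n), x_hat]), y)   # consistent slope
```

    A large gap between `b_ols` and `b_iv` is exactly what the Hausman specification test formalises: under no measurement error both estimators are consistent and should agree.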

  1. Centers for medicare and medicaid services: using an episode-based payment model to improve oncology care.

    Science.gov (United States)

    Kline, Ronald M; Bazell, Carol; Smith, Erin; Schumacher, Heidi; Rajkumar, Rahul; Conway, Patrick H

    2015-03-01

    Cancer is a medically complex and expensive disease with costs projected to rise further as new treatment options increase and the United States population ages. Studies showing significant regional variation in oncology quality and costs and model tests demonstrating cost savings without adverse outcomes suggest there are opportunities to create a system of oncology care in the US that delivers higher quality care at lower cost. The Centers for Medicare and Medicaid Services (CMS) have designed an episode-based payment model centered on 6-month periods of chemotherapy treatment. Monthly per-patient care management payments will be made to practices to support practice transformation, including additional patient services and specific infrastructure enhancements. Quarterly reporting of quality metrics will drive continuous quality improvement and the adoption of best practices among participants. Practices achieving cost savings will also be eligible for performance-based payments. Savings are expected through improved care coordination and appropriately aligned payment incentives, resulting in decreased avoidable emergency department visits and hospitalizations and more efficient and evidence-based use of imaging, laboratory tests, and therapeutic agents, as well as improved end-of-life care. New therapies and better supportive care have significantly improved cancer survival in recent decades. This has come at a high cost, with cancer therapy consuming $124 billion in 2010. CMS has designed an episode-based model of oncology care that incorporates elements from several successful model tests. By providing care management and performance-based payments in conjunction with quality metrics and a rapid learning environment, it is hoped that this model will demonstrate how oncology care in the US can transform into a high-value, high-quality system. Copyright © 2015 by American Society of Clinical Oncology.

  2. Description and evaluation of the Model for Ozone and Related chemical Tracers, version 4 (MOZART-4)

    Directory of Open Access Journals (Sweden)

    L. K. Emmons

    2010-01-01

    The Model for Ozone and Related chemical Tracers, version 4 (MOZART-4) is an offline global chemical transport model particularly suited for studies of the troposphere. The updates of the model from its previous version, MOZART-2, are described, including an expansion of the chemical mechanism to include more detailed hydrocarbon chemistry and bulk aerosols. Online calculations of a number of processes, such as dry deposition, emissions of isoprene and monoterpenes, and photolysis frequencies, are now included. Results from an eight-year simulation (2000–2007) are presented and evaluated. The MOZART-4 source code and standard input files are available for download from the NCAR Community Data Portal (http://cdp.ucar.edu).

  3. A one-dimensional material transfer model for HECTR version 1.5

    International Nuclear Information System (INIS)

    Geller, A.S.; Wong, C.C.

    1991-08-01

    HECTR (Hydrogen Event Containment Transient Response) is a lumped-parameter computer code developed for calculating the pressure-temperature response to combustion in a nuclear power plant containment building. The code uses a control-volume approach and subscale models to simulate the mass, momentum, and energy transfer occurring in the containment during a loss-of-coolant accident (LOCA). This document describes one-dimensional subscale models for mass and momentum transfer, and the modifications to the code required to implement them. Two problems were analyzed: the first corresponding to a standard problem studied with previous HECTR versions, the second to experiments. The performance of the revised code relative to previous HECTR versions is discussed, as is the ability of the code to model the experiments. 8 refs., 5 figs., 3 tabs

  4. The Hamburg Oceanic Carbon Cycle Circulation Model. Version 1. Version 'HAMOCC2s' for long time integrations

    Energy Technology Data Exchange (ETDEWEB)

    Heinze, C.; Maier-Reimer, E. [Max-Planck-Institut fuer Meteorologie, Hamburg (Germany)

    1999-11-01

    The Hamburg Ocean Carbon Cycle Circulation Model (HAMOCC, configuration HAMOCC2s) predicts the atmospheric carbon dioxide partial pressure (as induced by oceanic processes), production rates of biogenic particulate matter, and geochemical tracer distributions in the water column as well as the bioturbated sediment. Besides the carbon cycle this model version includes also the marine silicon cycle (silicic acid in the water column and the sediment pore waters, biological opal production, opal flux through the water column and opal sediment pore water interaction). The model is based on the grid and geometry of the LSG ocean general circulation model (see the corresponding manual, LSG=Large Scale Geostrophic) and uses a velocity field provided by the LSG-model in 'frozen' state. In contrast to the earlier version of the model (see Report No. 5), the present version includes a multi-layer sediment model of the bioturbated sediment zone, allowing for variable tracer inventories within the complete model system. (orig.)

  5. Assessment of winter air pollution episodes using long-range transport modeling in Hangzhou, China, during World Internet Conference, 2015.

    Science.gov (United States)

    Ni, Zhi-Zhen; Luo, Kun; Zhang, Jun-Xi; Feng, Rui; Zheng, He-Xin; Zhu, Hao-Ran; Wang, Jing-Fan; Fan, Jian-Ren; Gao, Xiang; Cen, Ke-Fa

    2018-05-01

    A winter air pollution episode was observed in Hangzhou, South China, during the Second World Internet Conference, 2015. To study the pollution characteristics and underlying causes, the Weather Research and Forecasting with Chemistry model was used to simulate the spatial and temporal evolution of the pollution episode from December 8 to 19, 2015. In addition to scenario simulations, analyses of the atmospheric trajectories and synoptic weather conditions were also performed. The results demonstrated that control measures implemented during the week preceding the conference reduced the fine particulate matter (PM2.5) pollution level to some extent, with a decline in the total PM2.5 concentration in Hangzhou of 15% (7%-25% daily). Pollutant long-range transport, which occurred due to a southward intrusion of strong cold air driven by the Siberian High, led to severe pollution in Hangzhou on December 15, 2015, accounting for 85% of the PM2.5 concentration. This study provides new insights into the challenge of winter pollution prevention in Hangzhou. For adequate pollution prevention, more regional collaborations should be fostered when creating policies for northern China. Copyright © 2018 Elsevier Ltd. All rights reserved.
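    The percentage figures quoted above come from differencing paired model runs: a control-measure scenario against a baseline, and a zero-out run (upwind emissions removed) against the full run. A toy illustration of the arithmetic, with hypothetical concentrations chosen only to reproduce the 15% and 85% shares:

```python
# Hypothetical daily-mean PM2.5 concentrations (ug/m3) from paired WRF-Chem runs
baseline = 100.0        # no emission controls
with_controls = 85.0    # controls applied during the pre-conference week

# Relative reduction attributable to the control measures
control_reduction = (baseline - with_controls) / baseline

# Source contribution by zero-out: rerun with transported (upwind) emissions removed
full_run = 120.0
local_only = 18.0
transport_share = (full_run - local_only) / full_run
```

    The same differencing logic generalises to any scenario pair; the daily 7%-25% spread in the abstract reflects applying it day by day rather than to the episode mean.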

  6. Digital elevation models for site investigation programme in Oskarshamn. Site description version 1.2

    Energy Technology Data Exchange (ETDEWEB)

    Brydsten, Lars; Stroemgren, Maarten [Umeaa Univ. (Sweden). Dept. of Biology and Environmental Science

    2005-06-01

    In the Oskarshamn area, a digital elevation model has been produced using elevation data from many elevation sources on both land and sea. Many elevation model users are only interested in elevation models over land, so the model has been designed in three versions: Version 1 describes land surface, lake water surface, and sea bottom. Version 2 describes land surface, sediment levels at lake bottoms, and sea bottoms. Version 3 describes land surface, sediment levels at lake bottoms, and sea surface. In cases where the different sources of data were not in point form (such as existing elevation models of land or depth lines from nautical charts), they have been converted to point values using GIS software. Because data from some sources often overlap with data from other sources, several tests were conducted to determine whether both sources of data or only one source would be included in the dataset used for the interpolation procedure. The tests resulted in the decision to use only the source judged to be of highest quality for most areas with overlapping data sources. All data were combined into a database of approximately 3.3 million points unevenly spread over an area of about 800 km². The large number of data points made it difficult to construct the model with a single interpolation procedure, so the area was divided into 28 sub-models that were processed one by one and finally merged into one single model. The software ArcGIS 8.3 and its extension Geostatistical Analyst were used for the interpolation. The Ordinary Kriging method was used for interpolation. This method allows both a cross validation and a validation before the interpolation is conducted. Cross validation with different Kriging parameters was performed and the model with the most reasonable statistics was chosen. Finally, a validation with the most appropriate Kriging parameters was performed in order to verify that the model fits unmeasured localities. Since both the

  7. Digital elevation models for site investigation programme in Oskarshamn. Site description version 1.2

    International Nuclear Information System (INIS)

    Brydsten, Lars; Stroemgren, Maarten

    2005-06-01

    In the Oskarshamn area, a digital elevation model has been produced using elevation data from many elevation sources on both land and sea. Many elevation model users are only interested in elevation models over land, so the model has been designed in three versions: Version 1 describes land surface, lake water surface, and sea bottom. Version 2 describes land surface, sediment levels at lake bottoms, and sea bottoms. Version 3 describes land surface, sediment levels at lake bottoms, and sea surface. In cases where the different sources of data were not in point form (such as existing elevation models of land or depth lines from nautical charts), they have been converted to point values using GIS software. Because data from some sources often overlap with data from other sources, several tests were conducted to determine whether both sources of data or only one source would be included in the dataset used for the interpolation procedure. The tests resulted in the decision to use only the source judged to be of highest quality for most areas with overlapping data sources. All data were combined into a database of approximately 3.3 million points unevenly spread over an area of about 800 km². The large number of data points made it difficult to construct the model with a single interpolation procedure, so the area was divided into 28 sub-models that were processed one by one and finally merged into one single model. The software ArcGIS 8.3 and its extension Geostatistical Analyst were used for the interpolation. The Ordinary Kriging method was used for interpolation. This method allows both a cross validation and a validation before the interpolation is conducted. Cross validation with different Kriging parameters was performed and the model with the most reasonable statistics was chosen. Finally, a validation with the most appropriate Kriging parameters was performed in order to verify that the model fits unmeasured localities. Since both the quality and the
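    The cross-validation step described in these two records can be illustrated with a minimal leave-one-out loop: each elevation point is predicted from all the others and the prediction errors are accumulated. Ordinary Kriging itself (run in the study with ArcGIS Geostatistical Analyst) is too involved for a short sketch, so a simple inverse-distance-weighted interpolator stands in for it here, and all point values are invented:

```python
import numpy as np

def idw(points, values, target, power=2.0):
    """Inverse-distance-weighted estimate at `target` (a stand-in for Kriging)."""
    d = np.linalg.norm(points - target, axis=1)
    if np.any(d == 0):
        return float(values[np.argmin(d)])
    w = 1.0 / d ** power
    return float(np.sum(w * values) / np.sum(w))

def loo_rmse(points, values):
    """Leave-one-out cross-validation: predict each point from all the
    others and return the RMSE of the prediction errors."""
    errs = []
    for i in range(len(points)):
        mask = np.arange(len(points)) != i
        pred = idw(points[mask], values[mask], points[i])
        errs.append(pred - values[i])
    return float(np.sqrt(np.mean(np.square(errs))))

# Made-up elevation samples (x, y in metres; z in metres above sea level)
pts = np.array([[0, 0], [100, 0], [0, 100], [100, 100], [50, 50]], float)
z = np.array([10.0, 12.0, 11.0, 13.0, 11.5])
score = loo_rmse(pts, z)
```

    Running this for each candidate parameter set and keeping the one with "the most reasonable statistics" is the selection logic the abstract describes; the subsequent validation against a held-out point set is the same computation without the leave-one-out loop.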

  8. Modelling of surface stresses and fracturing during dyke emplacement: Application to the 2009 episode at Harrat Lunayyir, Saudi Arabia

    Science.gov (United States)

    Al Shehri, Azizah; Gudmundsson, Agust

    2018-05-01

    Correct interpretation of surface stresses and deformation or displacement during volcanotectonic episodes is of fundamental importance for hazard assessment and dyke-path forecasting. Here we present new general numerical models of the local stresses induced by arrested dykes. In the models, the crustal segments hosting the dyke vary greatly in mechanical properties, from uniform or non-layered (elastic half-spaces) to highly anisotropic (layers with strong contrasts in Young's modulus). The shallow parts of active volcanoes and volcanic zones are normally highly anisotropic, some with open contacts. The numerical results show that, for a given surface deformation, non-layered (half-space) models underestimate the dyke overpressure/thickness needed and overestimate the likely depth to the tip of the dyke. Also, as the mechanical contrast between the layers increases, so does the stress dissipation and associated reduction in surface stresses (and associated fracturing). In the absence of open contacts, the distance between the two dyke-induced tensile and shear stress peaks (and fractures, if any) at the surface is roughly twice the depth to the tip of the dyke. The width of a graben, if it forms, should therefore be roughly twice the depth to the tip of the associated arrested dyke. When applied to the 2009 episode at Harrat Lunayyir, the main results are as follows. The entire 3-7 km wide fracture zone/graben formed during the episode is far too wide to have been generated by the induced stresses of a single, arrested dyke. The eastern part of the zone/graben may have been generated by the inferred, arrested dyke, but the western zone primarily by regional extensional loading. The dyke tip was arrested at only a few hundred metres below the surface, the estimated thickness of the uppermost part of the dyke being between about 6 and 12 m. 
For the inferred dyke length (strike dimension) of about 14 km, this yields a dyke length/thickness ratio between 2400 and 1200
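    The geometric rules of thumb in this abstract reduce to simple arithmetic; a sketch using only figures stated above (the 300 m tip depth in the usage line is a hypothetical value consistent with "a few hundred metres"):

```python
def graben_width(depth_to_tip_m):
    """Graben width is roughly twice the depth to the arrested dyke tip
    (valid in the absence of open contacts, per the modelling results)."""
    return 2.0 * depth_to_tip_m

def length_thickness_ratio(length_m, thickness_m):
    """Dyke length (strike dimension) divided by thickness."""
    return length_m / thickness_m

width = graben_width(300.0)  # hypothetical tip depth of 300 m -> 600 m graben
ratios = [length_thickness_ratio(14_000, t) for t in (6.0, 12.0)]
# about 2333 and 1167, rounded in the abstract to 2400 and 1200
```

    The arithmetic also shows why the observed 3-7 km wide zone cannot come from one shallow arrested dyke: a tip a few hundred metres deep predicts a graben well under 1 km wide.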

  9. Thermal modelling. Preliminary site description. Forsmark area - version 1.2

    Energy Technology Data Exchange (ETDEWEB)

    Sundberg, Jan; Back, Paer-Erik; Bengtsson, Anna; Laendell, Maerta [Geo Innova AB, Linkoeping (Sweden)

    2005-08-01

    This report presents the thermal site descriptive model for the Forsmark area, version 1.2. The main objective of this report is to present the thermal modelling work in which data has been identified, quality controlled, evaluated and summarised in order to make an upscaling to lithological domain level possible. The thermal conductivity at canister scale has been modelled for two different lithological domains (RFM029 and RFM012, both dominated by granite to granodiorite (101057)). A main modelling approach has been used to determine the mean value of the thermal conductivity. Two alternative/complementary approaches have been used to evaluate the spatial variability of the thermal conductivity at domain level. The thermal modelling approaches are based on the lithological model for the Forsmark area, version 1.2, together with rock type models constructed from measured and calculated (from mineral composition) thermal conductivities. Results indicate that the mean thermal conductivity is expected to exhibit a small variation between the different domains, from 3.46 W/(m·K) for RFM012 to 3.55 W/(m·K) for RFM029. The spatial distribution of the thermal conductivity does not follow a simple model. Lower and upper 95% confidence limits are based on the modelling results, but have been rounded off to only two significant figures. Consequently, the lower limit is 2.9 W/(m·K), while the upper is 3.8 W/(m·K). This is applicable to both the investigated domains. The temperature dependence is rather small, with a decrease in thermal conductivity of 10.0% per 100°C increase in temperature for the dominating rock type. There are a number of important uncertainties associated with these results. One of the uncertainties concerns the representative scale for the canister. Another important uncertainty is the methodological uncertainties associated with the upscaling of thermal conductivity from cm-scale to canister scale. In addition, the representativeness of rock samples is
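    The quoted temperature dependence (about 10% lower conductivity per 100°C rise) implies a simple linear correction. A sketch assuming a reference temperature of 20°C; the reference temperature is an assumption for illustration, not a value stated in the abstract:

```python
def conductivity_at_temperature(k_ref, t_c, t_ref_c=20.0, rel_drop_per_100c=0.10):
    """Linear temperature correction: thermal conductivity decreases by
    about 10% per 100 degC rise for the dominant rock type (abstract figure)."""
    return k_ref * (1.0 - rel_drop_per_100c * (t_c - t_ref_c) / 100.0)

k20 = 3.55                                    # W/(m K), domain RFM029 mean
k70 = conductivity_at_temperature(k20, 70.0)  # about 5% lower at 70 degC
```

    Because repository design uses the lower tail of the conductivity distribution, the same correction applied to the 2.9 W/(m·K) lower confidence limit is the more design-relevant number.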

  10. Thermal site descriptive model. A strategy for the model development during site investigations - version 2

    Energy Technology Data Exchange (ETDEWEB)

    Back, Paer-Erik; Sundberg, Jan [Geo Innova AB (Sweden)

    2007-09-15

    This report presents a strategy for describing, predicting and visualising the thermal aspects of the site descriptive model. The strategy is an updated version of an earlier strategy applied in all SDM versions during the initial site investigation phase at the Forsmark and Oskarshamn areas. The previous methodology for thermal modelling did not take the spatial correlation fully into account during simulation. The result was that the variability of thermal conductivity in the rock mass was not sufficiently well described. Experience from earlier thermal SDMs indicated that development of the methodology was required in order to describe the spatial distribution of thermal conductivity in the rock mass in a sufficiently reliable way, taking both variability within rock types and between rock types into account. A good description of the thermal conductivity distribution is especially important for the lower tail. This tail is important for the design of a repository because it affects the canister spacing. The presented approach is developed to be used for the final SDM regarding thermal properties, primarily thermal conductivity. Specific objectives for the strategy of thermal stochastic modelling are: Description: statistical description of the thermal conductivity of a rock domain. Prediction: prediction of thermal conductivity in a specific rock volume. Visualisation: visualisation of the spatial distribution of thermal conductivity. The thermal site descriptive model should include the temperature distribution and thermal properties of the rock mass. The temperature is the result of the thermal processes in the repository area. Determination of thermal transport properties can be made using different methods, such as laboratory investigations, field measurements, modelling from mineralogical composition and distribution, modelling from density logging and modelling from temperature logging. The different types of data represent different scales, which has to be

  11. Thermal site descriptive model. A strategy for the model development during site investigations - version 2

    International Nuclear Information System (INIS)

    Back, Paer-Erik; Sundberg, Jan

    2007-09-01

    This report presents a strategy for describing, predicting and visualising the thermal aspects of the site descriptive model. The strategy is an updated version of an earlier strategy applied in all SDM versions during the initial site investigation phase at the Forsmark and Oskarshamn areas. The previous methodology for thermal modelling did not take the spatial correlation fully into account during simulation. The result was that the variability of thermal conductivity in the rock mass was not sufficiently well described. Experience from earlier thermal SDMs indicated that development of the methodology was required in order to describe the spatial distribution of thermal conductivity in the rock mass in a sufficiently reliable way, taking both variability within rock types and between rock types into account. A good description of the thermal conductivity distribution is especially important for the lower tail. This tail is important for the design of a repository because it affects the canister spacing. The presented approach is developed to be used for the final SDM regarding thermal properties, primarily thermal conductivity. Specific objectives for the strategy of thermal stochastic modelling are: Description: statistical description of the thermal conductivity of a rock domain. Prediction: prediction of thermal conductivity in a specific rock volume. Visualisation: visualisation of the spatial distribution of thermal conductivity. The thermal site descriptive model should include the temperature distribution and thermal properties of the rock mass. The temperature is the result of the thermal processes in the repository area. Determination of thermal transport properties can be made using different methods, such as laboratory investigations, field measurements, modelling from mineralogical composition and distribution, modelling from density logging and modelling from temperature logging. The different types of data represent different scales, which has to be

  12. COMODI: an ontology to characterise differences in versions of computational models in biology.

    Science.gov (United States)

    Scharm, Martin; Waltemath, Dagmar; Mendes, Pedro; Wolkenhauer, Olaf

    2016-07-11

    Open model repositories provide ready-to-reuse computational models of biological systems. Models within those repositories evolve over time, leading to different model versions. Taken together, the underlying changes reflect a model's provenance and thus can give valuable insights into the studied biology. Currently, however, changes cannot be semantically interpreted. To improve this situation, we developed an ontology of terms describing changes in models. The ontology can be used by scientists and within software to characterise model updates at the level of single changes. When studying or reusing a model, these annotations help determine the relevance of a change in a given context. We manually studied changes in selected models from BioModels and the Physiome Model Repository. Using the BiVeS tool for difference detection, we then performed an automatic analysis of changes in all models published in these repositories. The resulting set of concepts led us to define candidate terms for the ontology. In a final step, we aggregated and classified these terms and built the first version of the ontology. We present COMODI, an ontology needed because COmputational MOdels DIffer. It empowers users and software to describe changes in a model on the semantic level. COMODI also enables software to implement user-specific filter options for the display of model changes. Finally, COMODI is a step towards predicting how a change in a model influences the simulation results. COMODI, coupled with our algorithm for difference detection, ensures the transparency of a model's evolution, and it enhances the traceability of updates and error corrections. COMODI is encoded in OWL. It is openly available at http://comodi.sems.uni-rostock.de/.
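    The core idea (annotating each individual change between two model versions with a typed term, then filtering on those terms) can be sketched without any OWL tooling. The class and term names below are hypothetical stand-ins chosen for illustration, not actual COMODI IRIs:

```python
from dataclasses import dataclass

@dataclass
class ChangeAnnotation:
    """Minimal stand-in for a COMODI-style change annotation.
    Term names here are hypothetical, not actual COMODI identifiers."""
    target: str      # model element affected by the change
    operation: str   # kind of change, e.g. "Update", "Insertion", "Deletion"
    reason: str      # intention behind it, e.g. "ModelCuration", "ErrorCorrection"

# A toy delta between two versions of a model
delta = [
    ChangeAnnotation("kinetic_law_k1", "Update", "ErrorCorrection"),
    ChangeAnnotation("species_ATP", "Insertion", "ModelCuration"),
]

# User-specific filtering of the kind the ontology enables in software
curation_fixes = [c for c in delta if c.reason == "ErrorCorrection"]
```

    Encoding the vocabulary as an ontology rather than as ad-hoc strings is what lets different tools agree on the meaning of each annotation and reason over them.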

  13. Attentional episodes in visual perception

    NARCIS (Netherlands)

    Wyble, Brad; Potter, Mary C.; Bowman, Howard; Nieuwenstein, Mark

    Is one's temporal perception of the world truly as seamless as it appears? This article presents a computationally motivated theory suggesting that visual attention samples information from temporal episodes (episodic simultaneous type/serial token model; Wyble, Bowman, & Nieuwenstein, 2009). Breaks

  14. Main modelling features of the ASTEC V2.1 major version

    International Nuclear Information System (INIS)

    Chatelard, P.; Belon, S.; Bosland, L.; Carénini, L.; Coindreau, O.; Cousin, F.; Marchetto, C.; Nowack, H.; Piar, L.; Chailan, L.

    2016-01-01

    Highlights: • Recent modelling improvements of the ASTEC European severe accident code are outlined. • Key new physical models now available in the ASTEC V2.1 major version are described. • ASTEC progress towards a multi-design reactor code is illustrated for BWR and PHWR. • ASTEC strong link with the on-going EC CESAM FP7 project is emphasized. • Main remaining modelling issues (on which IRSN efforts are now directing) are given. - Abstract: A new major version of the European severe accident integral code ASTEC, developed by IRSN with some GRS support, was delivered in November 2015 to the ASTEC worldwide community. Main modelling features of this V2.1 version are summarised in this paper. In particular, the in-vessel coupling technique between the reactor coolant system thermal-hydraulics module and the core degradation module has been strongly re-engineered to remove some well-known weaknesses of the former V2.0 series. The V2.1 version also includes new core degradation models specifically addressing BWR and PHWR reactor types, as well as several other physical modelling improvements, notably on reflooding of severely damaged cores, Zircaloy oxidation under air atmosphere, corium coolability during corium concrete interaction and source term evaluation. Moreover, this V2.1 version constitutes the backbone of the CESAM FP7 project, whose final objective is to further improve ASTEC for use in Severe Accident Management analysis of the Gen.II–III nuclear power plants presently under operation or foreseen in the near future in Europe. As part of this European project, IRSN efforts to continuously improve both code numerical robustness and computing performance at plant scale as well as users’ tools are being intensified. Besides, ASTEC will continue capitalising knowledge on severe accident phenomenology by progressively keeping physical models at the state of the art through regular feedback from the interpretation of the current and

  15. GARUSO - Version 1.0. Uncertainty model for multipath ultrasonic transit time gas flow meters

    Energy Technology Data Exchange (ETDEWEB)

    Lunde, Per; Froeysa, Kjell-Eivind; Vestrheim, Magne

    1997-09-01

    This report describes an uncertainty model for ultrasonic transit time gas flow meters configured with parallel chords, and a PC program, GARUSO Version 1.0, implemented for calculation of the meter's relative expanded uncertainty. The program, which is based on the theoretical uncertainty model, is used to carry out a simplified and limited uncertainty analysis for a 12" 4-path meter, where examples of input and output uncertainties are given. The model predicts a relative expanded uncertainty for the meter at a level which further justifies today's increasing tendency to use this type of instrument for fiscal metering of natural gas. 52 refs., 15 figs., 11 tabs.
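    The report's "relative expanded uncertainty" follows the standard GUM-style budget: combine independent relative standard uncertainties by root-sum-of-squares and apply a coverage factor. A minimal sketch (the input values below are hypothetical examples, not GARUSO's actual contributions):

```python
import math

def relative_expanded_uncertainty(rel_std_uncertainties, k=2.0):
    """Combine independent relative standard uncertainties by
    root-sum-of-squares, then apply coverage factor k (k = 2 gives
    roughly 95 % coverage), as in GUM-style uncertainty budgets."""
    combined = math.sqrt(sum(u * u for u in rel_std_uncertainties))
    return k * combined

# Hypothetical relative standard uncertainties (as fractions) for,
# e.g., transit-time measurement, chord geometry, and flow-profile
# integration in a multipath meter.
U = relative_expanded_uncertainty([0.001, 0.0015, 0.002])
```

    In a real budget each contribution would itself be derived from sensitivity coefficients of the transit-time flow equation; the sketch only shows the final combination step.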

  16. Incorporation of detailed eye model into polygon-mesh versions of ICRP-110 reference phantoms.

    Science.gov (United States)

    Nguyen, Thang Tat; Yeom, Yeon Soo; Kim, Han Sung; Wang, Zhao Jun; Han, Min Cheol; Kim, Chan Hyeong; Lee, Jai Ki; Zankl, Maria; Petoussi-Henss, Nina; Bolch, Wesley E; Lee, Choonsik; Chung, Beom Sun

    2015-11-21

    The dose coefficients for the eye lens reported in ICRP 2010 Publication 116 were calculated using both a stylized model and the ICRP-110 reference phantoms, according to the type of radiation, energy, and irradiation geometry. To maintain consistency of lens dose assessment, in the present study we incorporated the ICRP-116 detailed eye model into the converted polygon-mesh (PM) version of the ICRP-110 reference phantoms. After the incorporation, the dose coefficients for the eye lens were calculated and compared with those of the ICRP-116 data. The results showed generally a good agreement between the newly calculated lens dose coefficients and the values of ICRP 2010 Publication 116. Significant differences were found for some irradiation cases due mainly to the use of different types of phantoms. Considering that the PM version of the ICRP-110 reference phantoms preserve the original topology of the ICRP-110 reference phantoms, it is believed that the PM version phantoms, along with the detailed eye model, provide more reliable and consistent dose coefficients for the eye lens.

  17. Incremental testing of the Community Multiscale Air Quality (CMAQ) modeling system version 4.7

    Directory of Open Access Journals (Sweden)

    K. M. Foley

    2010-03-01

    This paper describes the scientific and structural updates to the latest release of the Community Multiscale Air Quality (CMAQ) modeling system version 4.7 (v4.7) and points the reader to additional resources for further details. The model updates were evaluated relative to observations and results from previous model versions in a series of simulations conducted to incrementally assess the effect of each change. The focus of this paper is on five major scientific upgrades: (a) updates to the heterogeneous N2O5 parameterization, (b) improvement in the treatment of secondary organic aerosol (SOA), (c) inclusion of dynamic mass transfer for coarse-mode aerosol, (d) revisions to the cloud model, and (e) new options for the calculation of photolysis rates. Incremental test simulations over the eastern United States during January and August 2006 are evaluated to assess the model response to each scientific improvement, providing explanations of differences in results between v4.7 and previously released CMAQ model versions. Particulate sulfate predictions are improved across all monitoring networks during both seasons due to cloud module updates. Numerous updates to the SOA module improve the simulation of seasonal variability and decrease the bias in organic carbon predictions at urban sites in the winter. Bias in the total mass of fine particulate matter (PM2.5) is dominated by overpredictions of unspeciated PM2.5 (PMother) in the winter and by underpredictions of carbon in the summer. The CMAQv4.7 model results show slightly worse performance for ozone predictions. However, changes to the meteorological inputs are found to have a much greater impact on ozone predictions compared to changes to the CMAQ modules described here. Model updates had little effect on existing biases in wet deposition predictions.
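    Incremental evaluations of this kind are usually summarised with standard bias metrics computed against observations for each model version in turn. A minimal sketch of two common ones (the sample concentrations below are invented, not from the CMAQ evaluation):

```python
def normalized_mean_bias(model, obs):
    """NMB = sum(model - obs) / sum(obs): signed, relative
    over- or underprediction across all paired samples."""
    return sum(m - o for m, o in zip(model, obs)) / sum(obs)

def normalized_mean_error(model, obs):
    """NME = sum(|model - obs|) / sum(obs): overall magnitude
    of model-observation disagreement."""
    return sum(abs(m - o) for m, o in zip(model, obs)) / sum(obs)

# Invented sulfate concentrations (ug/m3): observations vs. two
# hypothetical model versions at the same monitor-days.
obs = [2.0, 3.0, 5.0]
old_version = [3.0, 4.0, 7.0]
new_version = [2.2, 3.1, 5.5]
```

    Comparing the metric for consecutive versions over the same episode isolates the effect of each individual science update, which is the logic of the incremental test design described above.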

  18. Nonlinear response of hail precipitation rate to environmental moisture content: A real case modeling study of an episodic midlatitude severe convective event

    Science.gov (United States)

    Li, Mingxin; Zhang, Fuqing; Zhang, Qinghong; Harrington, Jerry Y.; Kumjian, Matthew R.

    2017-07-01

    The dependence of hail production on initial moisture content in a simulated midlatitude episodic convective event that occurred in northeast China on 10-11 June 2005 was investigated using the Weather Research and Forecasting (WRF) model with a double-moment microphysics scheme in which both graupel and hail are considered. Three sensitivity experiments were performed by modifying the initial water vapor mixing ratio profile to 90% ("Q-10%"), 105% ("Q+5%"), and 110% ("Q+10%") of the initial conditions used for the control simulation. It was found that increasing the initial water vapor content caused the hail and total precipitation rates to increase during the first 5 h. The precipitation response to increasing water vapor content was monotonic for this first episode; however, for the event's second episode, the hail precipitation rate responds to the initial water vapor profile nonlinearly, while the total precipitation rate responds mostly monotonically. In particular, simulation Q+5% achieves the largest hail production rate while simulation Q+10% has the largest total precipitation rate. In contrast, during the second episode simulation Q-10% has the strongest vertical motion and produces the most cloud ice and snow, but has the lowest hail production. Analysis shows that increasing the initial moisture content directly increases the precipitation during the first episode, which subsequently induces a stronger, longer-lasting cold pool that limits the development of deep convection during the second episode.

  19. Characterising an intense PM pollution episode in March 2015 in France from multi-site approach and near real time data: Climatology, variabilities, geographical origins and model evaluation

    Science.gov (United States)

    Petit, J.-E.; Amodeo, T.; Meleux, F.; Bessagnet, B.; Menut, L.; Grenier, D.; Pellan, Y.; Ockler, A.; Rocq, B.; Gros, V.; Sciare, J.; Favez, O.

    2017-04-01

    During March 2015, a severe and large-scale particulate matter (PM) pollution episode occurred in France. Near-real-time measurements of the major chemical composition at four different urban background sites across the country (Paris, Creil, Metz and Lyon) allowed the investigation of spatiotemporal variabilities during this episode. A climatology approach showed that all sites experienced a clear and unusual rain shortage, a pattern that is also found on a longer timescale, highlighting the role of synoptic conditions over Western Europe. This episode is characterized by a strong predominance of secondary pollution, and more particularly of ammonium nitrate, which accounted for more than 50% of submicron aerosols at all sites during the most intense period of the episode. Pollution advection is illustrated by similar variabilities in Paris and Creil (around 100 km apart), as well as by trajectory analyses applied to nitrate and sulphate. Local sources, especially wood burning, are however found to contribute to local/regional sub-episodes, notably in Metz. Finally, simulated concentrations from the chemistry-transport model CHIMERE were compared to observed ones. Results highlighted different patterns depending on the chemical components and the measuring site, reinforcing the need for such exercises over other pollution episodes and sites.

  20. Multi-state models for bleeding episodes and mortality in liver cirrhosis

    DEFF Research Database (Denmark)

    Andersen, Per Kragh; Esbjerg, Sille; Sørensen, Thorkild I.A.

    2000-01-01

    Data from a controlled clinical trial in liver cirrhosis are used to illustrate that multi-state models may be a useful tool in the analysis of data where survival is the ultimate outcome of interest but where intermediate, transient states are identified. We compare models for the marginal survi...

  1. Budget calculations for ozone and its precursors: Seasonal and episodic features based on model simulations

    NARCIS (Netherlands)

    Memmesheimer, M.; Ebel, A.; Roemer, M.

    1997-01-01

    Results from two air quality models (LOTOS, EURAD) have been used to analyse the contribution of the different terms in the continuity equation to the budget of ozone, NOx and PAN. Both models cover large parts of Europe and describe the processes relevant for tropospheric chemistry and dynamics.

  2. Forecasting Individual Headache Attacks Using Perceived Stress: Development of a Multivariable Prediction Model for Persons With Episodic Migraine.

    Science.gov (United States)

    Houle, Timothy T; Turner, Dana P; Golding, Adrienne N; Porter, John A H; Martin, Vincent T; Penzien, Donald B; Tegeler, Charles H

    2017-07-01

    To develop and validate a prediction model that forecasts future migraine attacks for an individual headache sufferer. Many headache patients and physicians believe that precipitants of headache can be identified and avoided or managed to reduce the frequency of headache attacks. Of the numerous candidate triggers, perceived stress has received considerable attention for its association with the onset of headache in episodic and chronic headache sufferers. However, no evidence is available to support forecasting headache attacks within individuals using any of the candidate headache triggers. This longitudinal cohort with forecasting model development study enrolled 100 participants with episodic migraine with or without aura, and N = 95 contributed 4626 days of electronic diary data and were included in the analysis. Individual headache forecasts were derived from current headache state and current levels of stress using several aspects of the Daily Stress Inventory, a measure of daily hassles that is completed at the end of each day. The primary outcome measure was the presence/absence of any headache attack (head pain > 0 on a numerical rating scale of 0-10) over the next 24 h period. After removing missing data (n = 431 days), participants in the study experienced a headache attack on 1613/4195 (38.5%) days. A generalized linear mixed-effects forecast model using either the frequency of stressful events or the perceived intensity of these events fit the data well. This simple forecasting model possessed promising predictive utility with an AUC of 0.73 (95% CI 0.71-0.75) in the training sample and an AUC of 0.65 (95% CI 0.6-0.67) in a leave-one-out validation sample. This forecasting model had a Brier score of 0.202 and possessed good calibration between forecasted probabilities and observed frequencies but had only low levels of resolution (ie, sharpness). This study demonstrates that future headache attacks can be forecasted for a diverse group of
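    The published forecast is a generalized linear mixed-effects model; the core idea, turning today's stress and headache state into a next-day attack probability and scoring it with AUC, can be sketched with a much-simplified fixed-effects stand-in (all coefficients below are invented for illustration):

```python
import math

def forecast_attack_prob(stress_events, headache_today,
                         b0=-1.0, b_stress=0.08, b_state=1.2):
    """Invented-coefficient logistic forecast of P(headache attack in
    the next 24 h) from today's count of stressful events and the
    current headache state (1 = headache today, 0 = none)."""
    z = b0 + b_stress * stress_events + b_state * headache_today
    return 1.0 / (1.0 + math.exp(-z))

def auc(labels, scores):
    """Rank-based AUC: the chance that a randomly chosen attack day
    received a higher forecast than a randomly chosen attack-free day."""
    pos = [s for l, s in zip(labels, scores) if l]
    neg = [s for l, s in zip(labels, scores) if not l]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))
```

    In the study the coefficients are estimated per-person via random effects and validated leave-one-out, which is why the validation AUC (0.65) is lower than the training AUC (0.73).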

  3. Statistical model of fractures and deformation zones. Preliminary site description, Laxemar subarea, version 1.2

    Energy Technology Data Exchange (ETDEWEB)

    Hermanson, Jan; Forssberg, Ola [Golder Associates AB, Stockholm (Sweden); Fox, Aaron; La Pointe, Paul [Golder Associates Inc., Redmond, WA (United States)

    2005-10-15

    The goal of this summary report is to document the data sources, software tools, experimental methods, assumptions, and model parameters in the discrete-fracture network (DFN) model for the local model volume in Laxemar, version 1.2. The model parameters presented herein are intended for use by other project modeling teams. Individual modeling teams may elect to simplify or use only a portion of the DFN model, depending on their needs. This model is not intended to be a flow model or a mechanical model; as such, only the geometrical characterization is presented. The derivations of the hydraulic or mechanical properties of the fractures or their subsurface connectivities are not within the scope of this report. This model represents analyses carried out on particular data sets. If additional data are obtained, or values for existing data are changed or excluded, the conclusions reached in this report, and the parameter values calculated, may change as well. The model volume is divided into two subareas; one located on the Simpevarp peninsula adjacent to the power plant (Simpevarp), and one further to the west (Laxemar). The DFN parameters described in this report were determined by analysis of data collected within the local model volume. As such, the final DFN model is only valid within this local model volume and the modeling subareas (Laxemar and Simpevarp) within.

  4. Assessing the meteorological conditions of a deep Italian Alpine valley system by means of a measuring campaign and simulations with two models during a summer smog episode

    NARCIS (Netherlands)

    Dosio, A.; Emeis, S.; Graziani, G.; Junkermann, W.; Levy, A.

    2001-01-01

    The typical features of a summer smog episode in the highly complex terrain of the Province of Bolzano (Northern Italy) were investigated by numerical modelling with two non-hydrostatic models, ground-based monitoring stations, and vertical profiling with two sodars and an ultra-light aircraft. High

  5. A computationally efficient description of heterogeneous freezing: A simplified version of the Soccer ball model

    Science.gov (United States)

    Niedermeier, Dennis; Ervens, Barbara; Clauss, Tina; Voigtländer, Jens; Wex, Heike; Hartmann, Susan; Stratmann, Frank

    2014-01-01

    In a recent study, the Soccer ball model (SBM) was introduced for modeling and/or parameterizing heterogeneous ice nucleation processes. The model applies classical nucleation theory. It allows for a consistent description of both apparently singular and stochastic ice nucleation behavior, by distributing contact angles over the nucleation sites of a particle population assuming a Gaussian probability density function. The original SBM utilizes the Monte Carlo technique, which hampers its usage in atmospheric models, as fairly time-consuming calculations must be performed to obtain statistically significant results. Thus, we have developed a simplified and computationally more efficient version of the SBM. We successfully used the new SBM to parameterize experimental nucleation data of, e.g., bacterial ice nucleation. Both SBMs give identical results; however, the new model is computationally less expensive as confirmed by cloud parcel simulations. Therefore, it is a suitable tool for describing heterogeneous ice nucleation processes in atmospheric models.
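    The SBM's central computation, averaging classical-nucleation-theory freezing probabilities over a Gaussian contact-angle distribution rather than Monte Carlo sampling individual sites, can be sketched numerically as follows (the rate law and all constants are illustrative placeholders, not the fitted SBM parameters):

```python
import math

def f_geom(theta):
    """CNT geometric compatibility factor: how much a contact angle
    theta (radians) lowers the homogeneous nucleation energy barrier."""
    c = math.cos(theta)
    return (2.0 + c) * (1.0 - c) ** 2 / 4.0

def frozen_fraction(mu, sigma, time, j0=1.0, barrier=10.0, n=400):
    """Population frozen fraction: per-site freezing probability
    P = 1 - exp(-J(theta) * t), with the illustrative rate law
    J(theta) = j0 * exp(-f_geom(theta) * barrier), weight-averaged
    over a Gaussian contact-angle distribution N(mu, sigma) by simple
    quadrature on [mu - 3*sigma, mu + 3*sigma]."""
    total = wsum = 0.0
    for i in range(n):
        theta = mu - 3.0 * sigma + 6.0 * sigma * i / (n - 1)
        weight = math.exp(-0.5 * ((theta - mu) / sigma) ** 2)
        rate = j0 * math.exp(-f_geom(theta) * barrier)
        total += weight * (1.0 - math.exp(-rate * time))
        wsum += weight
    return total / wsum
```

    A narrow distribution (small sigma) reproduces apparently singular behavior, while a broad one yields the stochastic, time-dependent freezing that motivated the model, without the statistical noise of Monte Carlo sampling.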

  6. Community Land Model Version 3.0 (CLM3.0) Developer's Guide

    Energy Technology Data Exchange (ETDEWEB)

    Hoffman, FM

    2004-12-21

    This document describes the guidelines adopted for software development of the Community Land Model (CLM) and serves as a reference to the entire code base of the released version of the model. The version of the code described here is Version 3.0 which was released in the summer of 2004. This document, the Community Land Model Version 3.0 (CLM3.0) User's Guide (Vertenstein et al., 2004), the Technical Description of the Community Land Model (CLM) (Oleson et al., 2004), and the Community Land Model's Dynamic Global Vegetation Model (CLM-DGVM): Technical Description and User's Guide (Levis et al., 2004) provide the developer, user, or researcher with details of implementation, instructions for using the model, a scientific description of the model, and a scientific description of the Dynamic Global Vegetation Model integrated with CLM respectively. The CLM is a single column (snow-soil-vegetation) biogeophysical model of the land surface which can be run serially (on a laptop or personal computer) or in parallel (using distributed or shared memory processors or both) on both vector and scalar computer architectures. Written in Fortran 90, CLM can be run offline (i.e., run in isolation using stored atmospheric forcing data), coupled to an atmospheric model (e.g., the Community Atmosphere Model (CAM)), or coupled to a climate system model (e.g., the Community Climate System Model Version 3 (CCSM3)) through a flux coupler (e.g., Coupler 6 (CPL6)). When coupled, CLM exchanges fluxes of energy, water, and momentum with the atmosphere. The horizontal land surface heterogeneity is represented by a nested subgrid hierarchy composed of gridcells, landunits, columns, and plant functional types (PFTs). This hierarchical representation is reflected in the data structures used by the model code. Biophysical processes are simulated for each subgrid unit (landunit, column, and PFT) independently, and prognostic variables are maintained for each subgrid unit
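    The nested subgrid hierarchy described above can be illustrated with a toy data structure (the class and field names are schematic stand-ins, not CLM's actual Fortran 90 derived types):

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class PFT:
    """Plant functional type: the innermost subgrid unit."""
    name: str
    weight: float  # areal fraction of the parent column

@dataclass
class Column:
    """Snow/soil column carrying its own prognostic state."""
    pfts: List[PFT] = field(default_factory=list)

@dataclass
class Landunit:
    """Surface class within a gridcell, e.g. vegetated, lake, glacier, urban."""
    kind: str
    columns: List[Column] = field(default_factory=list)

@dataclass
class Gridcell:
    """Top level of the nested subgrid hierarchy."""
    landunits: List[Landunit] = field(default_factory=list)

# One gridcell whose vegetated landunit holds a single column with two PFTs.
cell = Gridcell([Landunit("vegetated",
                          [Column([PFT("needleleaf_evergreen", 0.6),
                                   PFT("c3_grass", 0.4)])])])
```

    Each subgrid unit can then be stepped independently, mirroring how CLM simulates biophysical processes separately per landunit, column, and PFT while maintaining prognostic variables for each.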

  7. The modified version of the centre-of-mass correction to the bag model

    International Nuclear Information System (INIS)

    Bartelski, J.; Tatur, S.

    1986-01-01

    We propose an improvement to the recently considered version of the centre-of-mass correction to the bag model. We identify a nucleon bag with a physical nucleon confined in an external fictitious spherical well potential, with an additional external fictitious pressure characterized by the parameter b. The introduction of such a pressure restores the conservation of the canonical energy-momentum tensor, which was lost in the former model. We propose several methods to determine the numerical value of b. We calculate the Roper resonance mass as well as the static electroweak parameters of a nucleon with centre-of-mass corrections taken into account. 7 refs., 1 tab. (author)

  8. A single-trace dual-process model of episodic memory: a novel computational account of familiarity and recollection.

    Science.gov (United States)

    Greve, Andrea; Donaldson, David I; van Rossum, Mark C W

    2010-02-01

    Dual-process theories of episodic memory state that retrieval is contingent on two independent processes: familiarity (providing a sense of oldness) and recollection (recovering events and their context). A variety of studies have reported distinct neural signatures for familiarity and recollection, supporting dual-process theory. One outstanding question is whether these signatures reflect the activation of distinct memory traces or the operation of different retrieval mechanisms on a single memory trace. We present a computational model that uses a single neuronal network to store memory traces, but two distinct and independent retrieval processes access the memory. The model is capable of performing familiarity and recollection-based discrimination between old and new patterns, demonstrating that dual-process models need not rely on multiple independent memory traces, but can use a single trace. Importantly, our putative familiarity and recollection processes exhibit distinct characteristics analogous to those found in empirical data; they diverge in capacity and sensitivity to sparse and correlated patterns, exhibit distinct ROC curves, and account for performance on both item and associative recognition tests. The demonstration that a single-trace, dual-process model can account for a range of empirical findings highlights the importance of distinguishing between neuronal processes and the neuronal representations on which they operate.
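    The idea of one stored trace read out by two different processes can be illustrated with a toy Hopfield-style network: a fast, holistic energy readout plays the role of familiarity, while iterated attractor dynamics play the role of recollection. This is a schematic stand-in, not the authors' published architecture:

```python
def store(patterns):
    """One Hebbian weight matrix -- the single memory trace -- stores
    all patterns (entries are +1/-1, diagonal kept at zero)."""
    n = len(patterns[0])
    W = [[0.0] * n for _ in range(n)]
    for p in patterns:
        for i in range(n):
            for j in range(n):
                if i != j:
                    W[i][j] += p[i] * p[j]
    return W

def familiarity(W, probe):
    """Fast, holistic readout: a single energy-like match score
    between the probe and the stored trace."""
    n = len(probe)
    return sum(W[i][j] * probe[i] * probe[j]
               for i in range(n) for j in range(n))

def recollect(W, probe, steps=5):
    """Slower attractor readout: iterate the network dynamics to
    recover the full stored pattern from a partial or noisy cue."""
    s = list(probe)
    for _ in range(steps):
        s = [1 if sum(W[i][j] * s[j] for j in range(len(s))) >= 0 else -1
             for i in range(len(s))]
    return s
```

    Both readouts operate on the same weight matrix, so distinct familiarity and recollection signatures do not by themselves imply distinct traces, which is the paper's central point.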

  9. MESOI Version 2.0: an interactive mesoscale Lagrangian puff dispersion model with deposition and decay

    International Nuclear Information System (INIS)

    Ramsdell, J.V.; Athey, G.F.; Glantz, C.S.

    1983-11-01

    MESOI Version 2.0 is an interactive Lagrangian puff model for estimating the transport, diffusion, deposition and decay of effluents released to the atmosphere. The model is capable of treating simultaneous releases from as many as four release points, which may be elevated or at ground-level. The puffs are advected by a horizontal wind field that is defined in three dimensions. The wind field may be adjusted for expected topographic effects. The concentration distribution within the puffs is initially assumed to be Gaussian in the horizontal and vertical. However, the vertical concentration distribution is modified by assuming reflection at the ground and the top of the atmospheric mixing layer. Material is deposited on the surface using a source depletion, dry deposition model and a washout coefficient model. The model also treats the decay of a primary effluent species and the ingrowth and decay of a single daughter species using a first order decay process. This report is divided into two parts. The first part discusses the theoretical and mathematical bases upon which MESOI Version 2.0 is based. The second part contains the MESOI computer code. The programs were written in the ANSI standard FORTRAN 77 and were developed on a VAX 11/780 computer. 43 references, 14 figures, 13 tables
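    The puff concentration calculation described above, a horizontal Gaussian multiplied by a vertical term with image sources for reflection at the ground and the mixing-layer top, can be sketched as follows (variable names and the truncation of the image-source sum are illustrative, not MESOI's FORTRAN implementation):

```python
import math

def puff_concentration(q, x, y, z, cx, cy, cz, sig_y, sig_z,
                       h_mix, n_images=3):
    """Concentration at (x, y, z) from one Gaussian puff of mass q
    centred at (cx, cy, cz). The vertical term sums image sources
    that reflect the puff at the ground (z = 0) and at the top of
    the mixing layer (z = h_mix); n_images truncates the infinite
    reflection series."""
    horiz = math.exp(-((x - cx) ** 2 + (y - cy) ** 2)
                     / (2.0 * sig_y ** 2)) / (2.0 * math.pi * sig_y ** 2)
    vert = 0.0
    for m in range(-n_images, n_images + 1):
        for source_z in (cz + 2.0 * m * h_mix, -cz + 2.0 * m * h_mix):
            vert += math.exp(-((z - source_z) ** 2) / (2.0 * sig_z ** 2))
    vert /= math.sqrt(2.0 * math.pi) * sig_z
    return q * horiz * vert
```

    A full puff model advects (cx, cy, cz) with the 3-D wind field and grows sig_y and sig_z with travel time; deposition and decay then deplete q, as in MESOI's source-depletion treatment.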

  10. A p-version embedded model for simulation of concrete temperature fields with cooling pipes

    Directory of Open Access Journals (Sweden)

    Sheng Qiang

    2015-07-01

    Pipe cooling is an effective method of mass concrete temperature control, but its accurate and convenient numerical simulation is still a cumbersome problem. An improved embedded model, considering the water temperature variation along the pipe, was proposed for simulating the temperature field of early-age concrete structures containing cooling pipes. The improved model was verified with an engineering example. Then, the p-version self-adaption algorithm for the improved embedded model was deduced, and the initial values and boundary conditions were examined. Comparison of some numerical samples shows that the proposed model can provide satisfactory precision and higher efficiency. The analysis efficiency can be doubled at the same precision, even for a large-scale element. The p-version algorithm can fit grids of different sizes for the temperature field simulation. The convenience of the proposed algorithm lies in the possibility of locating more pipe segments in one element, without requiring as regular an element shape as the explicit model does.

  11. Implementation of dust emission and chemistry into the Community Multiscale Air Quality modeling system and initial application to an Asian dust storm episode

    Directory of Open Access Journals (Sweden)

    K. Wang

    2012-11-01

    The US Environmental Protection Agency's (EPA) Community Multiscale Air Quality (CMAQ) modeling system version 4.7 is further developed to enhance its capability in simulating the photochemical cycles in the presence of dust particles. The new model treatments implemented in CMAQ v4.7 in this work include two online dust emission schemes (i.e., the Zender and Westphal schemes), nine dust-related heterogeneous reactions, an updated aerosol inorganic thermodynamic module ISORROPIA II with an explicit treatment of crustal species, and the interface between ISORROPIA II and the new dust treatments. The resulting improved CMAQ (referred to as CMAQ-Dust), offline-coupled with the Weather Research and Forecast model (WRF), is applied to the April 2001 dust storm episode over the trans-Pacific domain to examine the impact of new model treatments and understand associated uncertainties. WRF/CMAQ-Dust produces reasonable spatial distribution of dust emissions and captures the dust outbreak events, with total dust emissions of ~111 and 223 Tg when using the Zender scheme with an erodible fraction of 0.5 and 1.0, respectively. The model system reproduces the observed meteorology and chemical concentrations well, with significant improvements for suspended particulate matter (PM), PM with aerodynamic diameter of 10 μm, and aerosol optical depth over the default CMAQ v4.7. The sensitivity studies show that the inclusion of crustal species reduces the concentration of PM with aerodynamic diameter of 2.5 μm (PM2.5) over polluted areas. The heterogeneous chemistry occurring on dust particles acts as a sink for some species (e.g., as a lower limit estimate, reducing O3 by up to 3.8 ppb (~9%) and SO2 by up to 0.3 ppb (~27%)) and as a source for some others (e.g., increasing fine-mode SO42− by up to 1.1 μg m−3 (~12%) and PM2.5 by up to 1.4 μg m−3 (~3%)) over the domain. The

  12. On the influence of meteorological input on photochemical modelling of a severe episode over a coastal area

    Science.gov (United States)

    Pirovano, G.; Coll, I.; Bedogni, M.; Alessandrini, S.; Costa, M. P.; Gabusi, V.; Lasry, F.; Menut, L.; Vautard, R.

    The modelling reconstruction of the processes determining the transport and mixing of ozone and its precursors in complex terrain areas is a challenging task, particularly when local-scale circulations, such as sea breezes, take place. Within this frame, the ESCOMPTE European campaign took place in the vicinity of Marseille (south-east of France) in summer 2001. The main objectives of the field campaign were to document several photochemical episodes, as well as to constitute a detailed database for chemistry-transport model intercomparison. The CAMx model has been applied to the largest intense observation period (IOP) (June 21-26, 2001) in order to evaluate the impacts of two state-of-the-art meteorological models, RAMS and MM5, on chemical model outputs. The meteorological models have been used as well as possible in analysis mode, thus allowing the spread arising in pollutant concentrations to be identified as an indication of the intrinsic uncertainty associated with the meteorological input. Simulations have been investigated in depth and compared with a considerable subset of observations both at ground level and along vertical profiles. The analysis has shown that both models were able to reproduce the main circulation features of the IOP. The strongest discrepancies are confined to the planetary boundary layer, consisting of a clear tendency to underestimate or overestimate wind speed over the whole domain. The photochemical simulations showed that variability in circulation intensity was crucial mainly for the representation of the ozone peaks and of the shape of ozone plumes at the ground, which were affected in the same way over the whole domain and all along the simulated period. As a consequence, such differences can be thought of as a possible indicator of the uncertainty related to the definition of meteorological fields in a complex terrain area.

  13. Modeling Episodic Ephemeral Brine Lake Evaporation and Salt Crystallization on the Bonneville Salt Flats, Utah

    Science.gov (United States)

    Liu, T.; Harman, C. J.; Kipnis, E. L.; Bowen, B. B.

    2017-12-01

    Public concern about apparent reductions in the areal extent of the Bonneville Salt Flats (BSF) and perceived changes in inundation frequency has motivated renewed interest in the hydrologic and geochemical behavior of this salt playa. In this study, we develop a numerical modeling framework to simulate the relationship between hydrometeorologic variability, brine evaporation, and salt crystallization processes on BSF. The BSF, located in Utah, is the remnant of paleo-lake Bonneville and is capped by up to 1 meter of salt deposition over a 100 km2 area. The BSF has two distinct hydrologic periods each year: a winter wet period with standing surface brine and a summer dry period when the brine has evaporated, exposing the surface salt crust. We develop lumped non-linear dynamical models coupling conservation expressions for water, dissolved salt, and thermal energy to investigate the seasonal and diurnal behavior of brine during the transition from standing brine to exposed salt at BSF. The lumped dynamic models capture important nonlinear and kinetic effects introduced by the high ionic concentration of the brine, including the pronounced effect of the depressed water activity coefficient on evaporation. The salt crystallization and dissolution rate is modeled as a kinetic process linearly proportional to the degree of supersaturation of the brine. The model generates predictions of the brine temperature and the solute and solvent masses, controlled by diurnal net radiation input and aerodynamic forcing. Two distinct mechanisms emerge as potential controls on salt production and dissolution: (1) evapo-concentration and (2) changes in solubility related to changes in brine temperature. Although the evaporation of water is responsible for the ultimate disappearance of the brine each season, variation in solubility is found to be the dominant control on diurnal cycles of salt precipitation and dissolution in the BSF case.
Most salt is crystallized during nighttime, but the
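    The coupled water/salt balance with kinetic crystallization can be sketched as a single explicit-Euler step (the rate constants and saturation concentration below are placeholders, not the calibrated BSF values, and dissolution back from the crust is omitted for brevity):

```python
def brine_step(water_kg, salt_kg, evap_rate, k_cryst, c_sat, dt):
    """One explicit-Euler step of a lumped brine balance. Evaporation
    removes water; dissolved salt crystallizes at a rate linearly
    proportional to the degree of supersaturation (one-way sketch:
    nothing happens while the brine stays undersaturated)."""
    conc = salt_kg / water_kg                        # kg salt / kg water
    crystallized = k_cryst * max(conc - c_sat, 0.0) * dt
    return water_kg - evap_rate * dt, salt_kg - crystallized
```

    Iterating this step shows the evapo-concentration mechanism directly: as water evaporates, conc rises toward and past c_sat, switching crystallization on. The temperature-dependent solubility mechanism would enter by making c_sat a function of brine temperature from the energy balance.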

  14. Description of the new version 4.0 of the tritium model UFOTRI including user guide

    International Nuclear Information System (INIS)

    Raskob, W.

    1993-08-01

    In view of the future operation of fusion reactors, the release of tritium may play a dominant role during normal operation as well as after accidents. Because of its physical and chemical properties, which differ significantly from those of other radionuclides, the model UFOTRI for assessing the radiological consequences of accidental tritium releases has been developed. It describes the behaviour of tritium in the biosphere and calculates the radiological impact on individuals and the population due to direct exposure and via the ingestion pathways. Processes such as the conversion of tritium gas into tritiated water (HTO) in the soil, re-emission after deposition, and the conversion of HTO into organically bound tritium are considered. The use of UFOTRI in its probabilistic mode shows the spectrum of the radiological impact together with the associated probability of occurrence. A first model version was established in 1991. As ongoing work on the main processes governing tritium behaviour in the environment yields new results, the model has been improved in several respects. The report describes the changes incorporated into the model since 1991. Additionally, it provides the updated user guide for handling the revised UFOTRI version, which will be distributed to interested organizations. (orig.) [de

  15. Modeling of episodic particulate matter events using a 3-D air quality model with fine grid: Applications to a pair of cities in the US/Mexico border

    Science.gov (United States)

    Choi, Yu-Jin; Hyde, Peter; Fernando, H. J. S.

    High (episodic) particulate matter (PM) events over the sister cities of Douglas (AZ) and Agua Prieta (Sonora), located on the US-Mexico border, were simulated using the 3D Eulerian air quality model MODELS-3/CMAQ. The best available input information was used for the simulations, with the pollution inventory specified on a fine grid. In spite of inherent uncertainties associated with the emission inventory as well as the chemistry and meteorology of the air quality simulation tool, model evaluations showed acceptable PM predictions, while demonstrating the need for including the interaction between meteorology and emissions in an interactive mode in the model, a capability currently unavailable in MODELS-3/CMAQ when dealing with PM. Sensitivity studies on boundary influence indicate an insignificant regional (advection) contribution of PM to the study area. The contribution of secondary particles to the occurrence of high PM events was negligible. High PM episodes in the study area, therefore, are purely local events that largely depend on local meteorological conditions. The major PM emission sources were identified as vehicular activities on unpaved/paved roads and wind-blown dust. The results will be of immediate utility in devising PM mitigation strategies for the study area, which is one of the US EPA-designated non-attainment areas with respect to PM.

  16. Dynamic Computation of Change Operations in Version Management of Business Process Models

    Science.gov (United States)

    Küster, Jochen Malte; Gerth, Christian; Engels, Gregor

    Version management of business process models requires that changes can be resolved by applying change operations. In order to give a user maximal freedom concerning the application order of change operations, position parameters of change operations must be computed dynamically during change resolution. In such an approach, change operations with computed position parameters must be applicable on the model and dependencies and conflicts of change operations must be taken into account because otherwise invalid models can be constructed. In this paper, we study the concept of partially specified change operations where parameters are computed dynamically. We provide a formalization for partially specified change operations using graph transformation and provide a concept for their applicability. Based on this, we study potential dependencies and conflicts of change operations and show how these can be taken into account within change resolution. Using our approach, a user can resolve changes of business process models without being unnecessarily restricted to a certain order.

  17. An information-processing model of three cortical regions: evidence in episodic memory retrieval.

    Science.gov (United States)

    Sohn, Myeong-Ho; Goode, Adam; Stenger, V Andrew; Jung, Kwan-Jin; Carter, Cameron S; Anderson, John R

    2005-03-01

    ACT-R (Anderson, J.R., et al., 2003. An information-processing model of the BOLD response in symbol manipulation tasks. Psychon. Bull. Rev. 10, 241-261) relates the inferior dorso-lateral prefrontal cortex to a retrieval buffer that holds information retrieved from memory, and the posterior parietal cortex to an imaginal buffer that holds problem representations. Because the number of changes in a problem representation is not necessarily correlated with retrieval difficulty, it is possible to dissociate prefrontal and parietal activations. In two fMRI experiments, we examined this dissociation using the fan effect paradigm. Experiment 1 compared a recognition task, in which the representation requirement remains the same regardless of retrieval difficulty, with a recall task, in which both representation and retrieval loads increase with retrieval difficulty. In the recognition task, the prefrontal activation revealed a fan effect but the parietal activation did not. In the recall task, both regions revealed fan effects. In Experiment 2, we compared visually presented and aurally presented stimuli using the recognition task. While only the prefrontal region revealed the fan effect, the activation patterns in the prefrontal and parietal regions did not differ by stimulus presentation modality. In general, these results support the prefrontal-parietal dissociation in terms of retrieval and representation and the modality-independent nature of the information processed by these regions. Using ACT-R, we also provide computational models that explain the patterns of fMRI responses in these two areas during recognition and recall.
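The fan effect manipulated in both experiments follows from ACT-R's standard activation equation, in which associative strength declines with the logarithm of a cue's fan. A minimal sketch with illustrative parameter values (not the fitted values from these models):

```python
import math

# Sketch of the standard ACT-R activation equation behind fan-effect
# predictions: associative strength S_ji = S - ln(fan_j), so retrieval slows
# as each cue is associated with more facts. Parameter values illustrative.

def activation(base, fans, S=1.5):
    """A_i = B_i + sum_j W_j * S_ji, with S_ji = S - ln(fan_j) and W_j = 1/n."""
    w = 1.0 / len(fans)
    return base + sum(w * (S - math.log(f)) for f in fans)

def retrieval_time(act, F=1.0):
    """Predicted retrieval latency, F * exp(-A)."""
    return F * math.exp(-act)

low_fan = activation(0.0, fans=[1, 1])   # each concept appears in one fact
high_fan = activation(0.0, fans=[3, 3])  # each concept appears in three facts
```

Higher fan lowers activation and therefore lengthens the predicted retrieval latency, which is the behavioral signature both experiments exploit.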

  18. Beyond the first episode: candidate factors for a risk prediction model of schizophrenia.

    Science.gov (United States)

    Murphy, Brendan P

    2010-01-01

    Many early psychosis services are financially compromised and cannot offer a full tenure of care to all patients. To maintain viability of services it is important that those with schizophrenia are identified early to maximize long-term outcomes, as are those with better prognoses who can be discharged early. The duration of untreated psychosis remains the mainstay in determining those who will benefit from extended care, yet its ability to inform on prognosis is modest in both the short and medium term. There are a number of known or putative genetic and environmental risk factors that have the potential to improve prognostication, though a multivariate risk prediction model combining them with clinical characteristics has yet to be developed. Candidate risk factors for such a model are presented, with an emphasis on environmental risk factors. More work is needed to corroborate many putative factors and to determine which of the established factors are salient and which are merely proxy measures. Future research should help clarify how gene-environment and environment-environment interactions occur and whether risk factors are dose-dependent, or if they act additively or synergistically, or are redundant in the presence (or absence) of other factors.

  19. Update of the Polar SWIFT model for polar stratospheric ozone loss (Polar SWIFT version 2)

    Science.gov (United States)

    Wohltmann, Ingo; Lehmann, Ralph; Rex, Markus

    2017-07-01

    The Polar SWIFT model is a fast scheme for calculating the chemistry of stratospheric ozone depletion in polar winter. It is intended for use in global climate models (GCMs) and Earth system models (ESMs) to enable the simulation of mutual interactions between the ozone layer and climate. To date, climate models often use prescribed ozone fields, since a full stratospheric chemistry scheme is computationally very expensive. Polar SWIFT is based on a set of coupled differential equations, which simulate the polar vortex-averaged mixing ratios of the key species involved in polar ozone depletion on a given vertical level. These species are O3, chemically active chlorine (ClOx), HCl, ClONO2 and HNO3. The only external input parameters that drive the model are the fraction of the polar vortex in sunlight and the fraction of the polar vortex below the temperatures necessary for the formation of polar stratospheric clouds. Here, we present an update of the Polar SWIFT model introducing several improvements over the original model formulation. In particular, the model is now trained on vortex-averaged reaction rates of the ATLAS Chemistry and Transport Model, which enables a detailed look at individual processes and an independent validation of the different parameterizations contained in the differential equations. The training of the original Polar SWIFT model was based on fitting complete model runs to satellite observations and did not allow for this. A revised formulation of the system of differential equations is developed, which closely fits vortex-averaged reaction rates from ATLAS that represent the main chemical processes influencing ozone. In addition, a parameterization for the HNO3 change by denitrification is included. The rates of change of the concentrations of the chemical species of the Polar SWIFT model are purely chemical rates of change in the new version, whereas in the original Polar SWIFT model, they included a transport effect caused by the
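The structure of such a scheme, a few coupled vortex-averaged ODEs driven only by the sunlit vortex fraction and the PSC-cold vortex fraction, can be sketched as below. The reaction set and rate coefficients are invented placeholders, not the fitted Polar SWIFT parameterization:

```python
# Schematic sketch of the Polar SWIFT idea: coupled ODEs for vortex-averaged
# mixing ratios, driven by the sunlit fraction (f_sun) and the fraction of the
# vortex cold enough for polar stratospheric clouds (f_psc).
# Rates and the reaction set are invented placeholders, not SWIFT's equations.

def step(state, f_sun, f_psc, dt):
    """One explicit Euler step. state holds O3, ClOx, HCl (ppb, illustrative)."""
    k_act = 0.05   # chlorine activation rate on PSC surfaces (1/day), illustrative
    k_loss = 2e-5  # sunlight-driven catalytic ozone loss rate, illustrative
    act = k_act * f_psc * state["HCl"]              # HCl -> ClOx on PSCs
    loss = k_loss * f_sun * state["ClOx"] * state["O3"]
    return {
        "O3": state["O3"] - loss * dt,
        "ClOx": state["ClOx"] + act * dt,
        "HCl": state["HCl"] - act * dt,
    }

s = {"O3": 3000.0, "ClOx": 0.1, "HCl": 2.0}
for _ in range(60):                 # two cold, partly sunlit winter months
    s = step(s, f_sun=0.3, f_psc=0.5, dt=1.0)
```

Note that total chlorine (ClOx + HCl) is conserved by construction while ozone is depleted, mirroring the partitioning-plus-loss structure the abstract describes.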

  20. QMM – A Quarterly Macroeconomic Model of the Icelandic Economy. Version 2.0

    DEFF Research Database (Denmark)

    Ólafsson, Tjörvi

    This paper documents and describes Version 2.0 of the Quarterly Macroeconomic Model of the Central Bank of Iceland (QMM). QMM and the underlying quarterly database have been under construction since 2001 at the Research and Forecasting Division of the Economics Department at the Bank and was first...... implemented in the forecasting round for the Monetary Bulletin 2006/1 in March 2006. QMM is used by the Bank for forecasting and various policy simulations and therefore plays a key role as an organisational framework for viewing the medium-term future when formulating monetary policy at the Bank. This paper...

  1. Youth at ultra high risk for psychosis: using the Revised Network Episode Model to examine pathways to mental health care.

    Science.gov (United States)

    Boydell, Katherine M; Volpe, Tiziana; Gladstone, Brenda M; Stasiulis, Elaine; Addington, Jean

    2013-05-01

    This paper aims to identify the ways in which youth at ultra high risk for psychosis access mental health services and the factors that advance or delay help seeking, using the Revised Network Episode Model (REV NEM) of mental health care. A case study approach documents help-seeking pathways, encompassing two qualitative interviews with 10 young people and 29 significant others. Theoretical propositions derived from the REV NEM are explored, consisting of the content, structure and function of the: (i) family; (ii) community and school; and (iii) treatment system. Although the aspects of the REV NEM are supported and shape pathways to care, we consider rethinking the model for help seeking with youth at ultra high risk for psychosis. The pathway concept is important to our understanding of how services and supports are received and experienced over time. Understanding this process and the strategies that support positive early intervention on the part of youth and significant others is critical. © 2012 Wiley Publishing Asia Pty Ltd.

  2. A model of fast radio bursts: collisions between episodic magnetic blobs

    Science.gov (United States)

    Li, Long-Biao; Huang, Yong-Feng; Geng, Jin-Jun; Li, Bing

    2018-06-01

    Fast radio bursts (FRBs) are bright radio pulses from the sky with millisecond durations and Jansky-level flux densities. Their origins are still largely uncertain. Here we suggest a new model for FRBs. We argue that the collision of a white dwarf with a black hole can generate a transient accretion disk, from which powerful episodic magnetic blobs will be launched. The collision between two consecutive magnetic blobs can result in a catastrophic magnetic reconnection, which releases a large amount of free magnetic energy and forms a forward shock. The shock propagates through the cold magnetized plasma within the blob in the collision region, radiating through the synchrotron maser mechanism, which is responsible for a non-repeating FRB signal. Our calculations show that the theoretical energetics, radiation frequency, duration timescale and event rate can be very consistent with the observational characteristics of FRBs.

  3. GOOSE Version 1.4: A powerful object-oriented simulation environment for developing reactor models

    International Nuclear Information System (INIS)

    Nypaver, D.J.; March-Leuba, C.; Abdalla, M.A.; Guimaraes, L.

    1992-01-01

    A prototype software package for a fully interactive Generalized Object-Oriented Simulation Environment (GOOSE) is being developed at Oak Ridge National Laboratory. Dynamic models are easily constructed and tested; fully interactive capabilities allow the user to alter model parameters and complexity without recompilation. The environment provides access to powerful tools such as numerical integration packages, graphical displays, and online help. In GOOSE, portability has been achieved by creating the environment in Objective-C, which is supported by a variety of platforms including UNIX and DOS. GOOSE Version 1.4 introduces new enhancements such as the capability of creating "initial," "dynamic," and "digital" methods. The object-oriented approach to simulation used in GOOSE combines the concept of modularity with the additional features of allowing precompilation, optimization, testing, and validation of individual modules. Once a library of classes has been defined and compiled, models can be built and modified without recompilation. GOOSE Version 1.4 is primarily command-line driven.

  4. A RETRAN-02 model of the Sizewell B PCSR design - the Winfrith one-loop model, version 3.0

    International Nuclear Information System (INIS)

    Kinnersly, S.R.

    1983-11-01

    A one-loop RETRAN-02 model of the Sizewell B Pre Construction Safety Report (PCSR) design, set up at Winfrith, is described and documented. The model is suitable for symmetrical pressurised transients. Comparison with data from the Sizewell B PCSR shows that the model is a good representation of that design. Known errors, limitations and deficiencies are described. The mode of storage and maintenance at Winfrith using PROMUS (Program Maintenance and Update System) is noted. It is recommended that users modify the standard data by adding replacement cards to the end so as to aid in identification, use and maintenance of local versions. (author)

  5. Relationship of amotivation to neurocognition, self-efficacy and functioning in first-episode psychosis: a structural equation modeling approach.

    Science.gov (United States)

    Chang, W C; Kwong, V W Y; Hui, C L M; Chan, S K W; Lee, E H M; Chen, E Y H

    2017-03-01

    Better understanding of the complex interplay among key determinants of functional outcome is crucial to promoting recovery in psychotic disorders. However, this is understudied in the early course of illness. We aimed to examine the relationships among negative symptoms, neurocognition, general self-efficacy and global functioning in first-episode psychosis (FEP) patients using structural equation modeling (SEM). Three hundred and twenty-one Chinese patients aged 26-55 years presenting with FEP to an early intervention program in Hong Kong were recruited. Assessments encompassing symptom profiles, functioning, perceived general self-efficacy and a battery of neurocognitive tests were conducted. Negative symptom measurement was subdivided into amotivation and diminished expression (DE) domain scores based on the ratings in the Scale for the Assessment of Negative Symptoms. An initial SEM model showed no significant association between functioning and DE, which was removed from further analysis. A final trimmed model yielded very good model fit (χ2 = 15.48, p = 0.63; comparative fit index = 1.00; root mean square error of approximation). Amotivation, neurocognition and general self-efficacy had a direct effect on global functioning. Amotivation was also found to mediate a significant indirect effect of neurocognition and general self-efficacy on functioning. Neurocognition was not significantly related to general self-efficacy. Our results indicate a critical intermediary role of amotivation in linking neurocognitive impairment to functioning in FEP. General self-efficacy may represent a promising treatment target for improvement of motivational deficits and functional outcome in the early illness stage.

  6. Modeling Data with Excess Zeros and Measurement Error: Application to Evaluating Relationships between Episodically Consumed Foods and Health Outcomes

    KAUST Repository

    Kipnis, Victor; Midthune, Douglas; Buckman, Dennis W.; Dodd, Kevin W.; Guenther, Patricia M.; Krebs-Smith, Susan M.; Subar, Amy F.; Tooze, Janet A.; Carroll, Raymond J.; Freedman, Laurence S.

    2009-01-01

    Dietary assessment of episodically consumed foods gives rise to nonnegative data that have excess zeros and measurement error. Tooze et al. (2006, Journal of the American Dietetic Association 106, 1575-1587) describe a general statistical approach

  7. The Systems Biology Markup Language (SBML) Level 3 Package: Qualitative Models, Version 1, Release 1.

    Science.gov (United States)

    Chaouiya, Claudine; Keating, Sarah M; Berenguier, Duncan; Naldi, Aurélien; Thieffry, Denis; van Iersel, Martijn P; Le Novère, Nicolas; Helikar, Tomáš

    2015-09-04

    Quantitative methods for modelling biological networks require an in-depth knowledge of the biochemical reactions and their stoichiometric and kinetic parameters. In many practical cases, this knowledge is missing. This has led to the development of several qualitative modelling methods using information such as, for example, gene expression data coming from functional genomic experiments. The SBML Level 3 Version 1 Core specification does not provide a mechanism for explicitly encoding qualitative models, but it does provide a mechanism for SBML packages to extend the Core specification and add additional syntactical constructs. The SBML Qualitative Models package for SBML Level 3 adds features so that qualitative models can be directly and explicitly encoded. The approach taken in this package is essentially based on the definition of regulatory or influence graphs. The SBML Qualitative Models package defines the structure and syntax necessary to describe qualitative models that associate discrete levels of activities with entity pools and the transitions between states that describe the processes involved. This is particularly suited to logical models (Boolean or multi-valued) and some classes of Petri net models can be encoded with the approach.
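The modelling style the package targets can be illustrated with a minimal Boolean network in plain code; the three-gene circuit below is invented for illustration and is not an SBML encoding:

```python
# Minimal Boolean-network sketch of the qualitative modelling style the SBML
# qual package encodes: each species has a discrete activity level, and
# transitions are rules over its regulators. The circuit is invented.

def update(state):
    """Synchronous update of a toy 3-gene Boolean circuit."""
    return {
        "A": state["C"],                      # C activates A
        "B": state["A"],                      # A activates B
        "C": state["A"] and not state["B"],   # A activates, B represses C
    }

def trajectory(state, steps):
    """Iterate the synchronous update, returning all visited states."""
    states = [state]
    for _ in range(steps):
        states.append(update(states[-1]))
    return states

traj = trajectory({"A": True, "B": False, "C": False}, steps=6)
```

This toy circuit settles into a period-2 cycle, the kind of discrete dynamics (attractors, cycles) that logical models encoded with the qual package are used to analyze.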

  8. Application and evaluation of two air quality models for particulate matter for a southeastern U.S. episode.

    Science.gov (United States)

    Zhang, Yang; Pun, Betty; Wu, Shiang-Yuh; Vijayaraghavan, Krish; Seigneur, Christian

    2004-12-01

    The Models-3 Community Multiscale Air Quality (CMAQ) Modeling System and the Particulate Matter Comprehensive Air Quality Model with extensions (PMCAMx) were applied to simulate the period June 29-July 10, 1999, of the Southern Oxidants Study episode with two nested horizontal grid sizes: a coarse resolution of 32 km and a fine resolution of 8 km. The predicted spatial variations of ozone (O3), particulate matter with an aerodynamic diameter less than or equal to 2.5 μm (PM2.5), and particulate matter with an aerodynamic diameter less than or equal to 10 μm (PM10) by both models are similar in rural areas but differ from one another significantly over some urban/suburban areas in the eastern and southern United States, where PMCAMx tends to predict higher values of O3 and PM than CMAQ. Both models tend to predict O3 values that are higher than those observed. For observed O3 values above 60 ppb, O3 performance meets the U.S. Environmental Protection Agency's criteria for CMAQ with both grids and for PMCAMx with the fine grid only. It becomes unsatisfactory for PMCAMx and marginally satisfactory for CMAQ for observed O3 values above 40 ppb. Both models predict similar amounts of sulfate (SO4(2-)) and organic matter, and both predict SO4(2-) to be the largest contributor to PM2.5. PMCAMx generally predicts higher amounts of ammonium (NH4+), nitrate (NO3-), and black carbon (BC) than does CMAQ. PM performance for CMAQ is generally consistent with that of other PM models, whereas PMCAMx predicts higher concentrations of NO3-, NH4+, and BC than observed, which degrades its performance. For PM10 and PM2.5 predictions over the southeastern U.S. domain, the ranges of mean normalized gross errors (MNGEs) and mean normalized bias are 37-43% and -33% to 4% for CMAQ, and 50-59% and 7-30% for PMCAMx. Both models predict the largest MNGEs for NO3- (98-104% for CMAQ; 138-338% for PMCAMx). The inaccurate NO3- predictions by both models may be caused by the inaccuracies in the
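The evaluation statistics quoted above have standard definitions, sketched below with made-up observed and predicted values:

```python
# Standard model-performance statistics used in the evaluation above.
# The observed/predicted values here are made up for illustration.

def mean_normalized_gross_error(obs, pred):
    """MNGE = mean(|pred - obs| / obs), reported as a percentage."""
    return 100.0 * sum(abs(p - o) / o for o, p in zip(obs, pred)) / len(obs)

def mean_normalized_bias(obs, pred):
    """MNB = mean((pred - obs) / obs) as a percentage; negative = underprediction."""
    return 100.0 * sum((p - o) / o for o, p in zip(obs, pred)) / len(obs)

obs = [10.0, 20.0, 40.0]
pred = [12.0, 15.0, 44.0]
mnge = mean_normalized_gross_error(obs, pred)  # error magnitudes only
mnb = mean_normalized_bias(obs, pred)          # over/under-prediction can cancel
```

Because positive and negative errors cancel in MNB but not in MNGE, a model can show small bias and still have large gross error, which is why both statistics are reported.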

  9. Landfill Gas Energy Cost Model Version 3.0 (LFGcost-Web V3 ...

    Science.gov (United States)

    To help stakeholders estimate the costs of a landfill gas (LFG) energy project, in 2002, LMOP developed a cost tool (LFGcost). Since then, LMOP has routinely updated the tool to reflect changes in the LFG energy industry. Initially, the model was designed for EPA to assist landfills in evaluating the economic and financial feasibility of LFG energy project development. In 2014, LMOP developed a public version of the model, LFGcost-Web (Version 3.0), to allow landfill and industry stakeholders to evaluate project feasibility on their own. LFGcost-Web can analyze costs for 12 energy recovery project types. These project costs can be estimated with or without the costs of a gas collection and control system (GCCS). The EPA used select equations from LFGcost-Web to estimate costs of the regulatory options in the 2015 proposed revisions to the MSW Landfills Standards of Performance (also known as New Source Performance Standards) and the Emission Guidelines (hereinafter referred to collectively as the Landfill Rules). More specifically, equations derived from LFGcost-Web were applied to each landfill expected to be impacted by the Landfill Rules to estimate annualized installed capital costs and annual O&M costs of a gas collection and control system. In addition, after applying the LFGcost-Web equations to the list of landfills expected to require a GCCS in year 2025 as a result of the proposed Landfill Rules, the regulatory analysis evaluated whether electr
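Cost models of this kind typically annualize installed capital costs with a capital recovery factor; a sketch with illustrative inputs, not LFGcost-Web's actual parameters or equations:

```python
# Sketch of annualizing an installed capital cost via a capital recovery
# factor, as cost models of this kind typically do. The discount rate,
# lifetime, and costs below are illustrative, not LFGcost-Web values.

def capital_recovery_factor(rate, years):
    """CRF = r(1+r)^n / ((1+r)^n - 1)."""
    g = (1.0 + rate) ** years
    return rate * g / (g - 1.0)

def annualized_cost(capital, rate, years, om_per_year=0.0):
    """Annualized installed capital cost plus annual O&M."""
    return capital * capital_recovery_factor(rate, years) + om_per_year

# e.g. a $1M gas collection system over 15 years at 8%, plus $50k/yr O&M:
cost = annualized_cost(capital=1_000_000, rate=0.08, years=15, om_per_year=50_000)
```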

  10. Modeling Data with Excess Zeros and Measurement Error: Application to Evaluating Relationships between Episodically Consumed Foods and Health Outcomes

    KAUST Repository

    Kipnis, Victor

    2009-03-03

    Dietary assessment of episodically consumed foods gives rise to nonnegative data that have excess zeros and measurement error. Tooze et al. (2006, Journal of the American Dietetic Association 106, 1575-1587) describe a general statistical approach (National Cancer Institute method) for modeling such food intakes reported on two or more 24-hour recalls (24HRs) and demonstrate its use to estimate the distribution of the food's usual intake in the general population. In this article, we propose an extension of this method to predict individual usual intake of such foods and to evaluate the relationships of usual intakes with health outcomes. Following the regression calibration approach for measurement error correction, individual usual intake is generally predicted as the conditional mean intake given 24HR-reported intake and other covariates in the health model. One feature of the proposed method is that additional covariates potentially related to usual intake may be used to increase the precision of estimates of usual intake and of diet-health outcome associations. Applying the method to data from the Eating at America's Table Study, we quantify the increased precision obtained from including reported frequency of intake on a food frequency questionnaire (FFQ) as a covariate in the calibration model. We then demonstrate the method in evaluating the linear relationship between log blood mercury levels and fish intake in women by using data from the National Health and Nutrition Examination Survey, and show increased precision when including the FFQ information. Finally, we present simulation results evaluating the performance of the proposed method in this context.
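The regression-calibration idea at the core of the method can be illustrated with a toy simulation: naive regression on error-prone 24HR-reported intake is attenuated, while regressing on the predicted usual intake (a conditional mean) approximately recovers the true association. This sketch omits the excess-zero two-part structure of the full NCI method, and for simplicity estimates the calibration slope from the simulated truth rather than from replicate recalls as would be done in practice:

```python
import random

# Toy regression-calibration sketch: replace error-prone 24HR intake with its
# conditional mean before regressing the health outcome on it. Simplified:
# no excess zeros, and the calibration slope uses the simulated truth.

random.seed(1)
n = 5000
usual = [random.gauss(10, 2) for _ in range(n)]            # true usual intake
recall = [u + random.gauss(0, 3) for u in usual]           # 24HR with error
outcome = [0.5 * u + random.gauss(0, 1) for u in usual]    # true slope = 0.5

def ols_slope(x, y):
    mx, my = sum(x) / len(x), sum(y) / len(y)
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    return sxy / sxx

naive = ols_slope(recall, outcome)          # attenuated toward zero
lam = ols_slope(recall, usual)              # calibration slope E[usual | recall]
calibrated = [lam * r for r in recall]      # predicted usual intake
corrected = ols_slope(calibrated, outcome)  # approximately recovers 0.5
```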

  11. LERC-SLAM - THE NASA LEWIS RESEARCH CENTER SATELLITE LINK ATTENUATION MODEL PROGRAM (IBM PC VERSION)

    Science.gov (United States)

    Manning, R. M.

    1994-01-01

    The frequency and intensity of rain attenuation affecting the communication between a satellite and an earth terminal is an important consideration in planning satellite links. The NASA Lewis Research Center Satellite Link Attenuation Model Program (LeRC-SLAM) provides a static and dynamic statistical assessment of the impact of rain attenuation on a communications link established between an earth terminal and a geosynchronous satellite. The program is designed for use in the specification, design and assessment of satellite links for any terminal location in the continental United States. The basis for LeRC-SLAM is the ACTS Rain Attenuation Prediction Model, which uses a log-normal cumulative probability distribution to describe the random process of rain attenuation on satellite links. The derivation of the statistics for the rainrate process at the specified terminal location relies on long term rainfall records compiled by the U.S. Weather Service during time periods of up to 55 years in length. The theory of extreme value statistics is also utilized. The user provides 1) the longitudinal position of the satellite in geosynchronous orbit, 2) the geographical position of the earth terminal in terms of latitude and longitude, 3) the height above sea level of the terminal site, 4) the yearly average rainfall at the terminal site, and 5) the operating frequency of the communications link (within 1 to 1000 GHz, inclusive). Based on the yearly average rainfall at the terminal location, LeRC-SLAM calculates the relevant rain statistics for the site using an internal data base. The program then generates rain attenuation data for the satellite link. This data includes a description of the static (i.e., yearly) attenuation process, an evaluation of the cumulative probability distribution for attenuation effects, and an evaluation of the probability of fades below selected fade depths. In addition, LeRC-SLAM calculates the elevation and azimuth angles of the terminal
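The log-normal attenuation statistics at the core of the model can be sketched as an exceedance probability; the median fade and dispersion below are illustrative values, not site statistics from the program's internal database:

```python
import math

# Sketch of the log-normal rain-attenuation statistics underlying models of
# this kind: the probability that attenuation exceeds a given fade depth.
# The median fade and sigma below are illustrative, not site statistics.

def attenuation_exceedance(a_db, median_db, sigma_ln):
    """P(attenuation > a_db) for a log-normally distributed attenuation."""
    z = (math.log(a_db) - math.log(median_db)) / sigma_ln
    return 0.5 * math.erfc(z / math.sqrt(2.0))

# Probability of a fade deeper than 10 dB at a site with a 2 dB median fade:
p_fade = attenuation_exceedance(10.0, median_db=2.0, sigma_ln=1.2)
```

Evaluating this exceedance curve at a set of fade depths is the kind of "probability of fades below selected fade depths" output the program description mentions.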

  12. LERC-SLAM - THE NASA LEWIS RESEARCH CENTER SATELLITE LINK ATTENUATION MODEL PROGRAM (MACINTOSH VERSION)

    Science.gov (United States)

    Manning, R. M.

    1994-01-01

    The frequency and intensity of rain attenuation affecting the communication between a satellite and an earth terminal is an important consideration in planning satellite links. The NASA Lewis Research Center Satellite Link Attenuation Model Program (LeRC-SLAM) provides a static and dynamic statistical assessment of the impact of rain attenuation on a communications link established between an earth terminal and a geosynchronous satellite. The program is designed for use in the specification, design and assessment of satellite links for any terminal location in the continental United States. The basis for LeRC-SLAM is the ACTS Rain Attenuation Prediction Model, which uses a log-normal cumulative probability distribution to describe the random process of rain attenuation on satellite links. The derivation of the statistics for the rainrate process at the specified terminal location relies on long term rainfall records compiled by the U.S. Weather Service during time periods of up to 55 years in length. The theory of extreme value statistics is also utilized. The user provides 1) the longitudinal position of the satellite in geosynchronous orbit, 2) the geographical position of the earth terminal in terms of latitude and longitude, 3) the height above sea level of the terminal site, 4) the yearly average rainfall at the terminal site, and 5) the operating frequency of the communications link (within 1 to 1000 GHz, inclusive). Based on the yearly average rainfall at the terminal location, LeRC-SLAM calculates the relevant rain statistics for the site using an internal data base. The program then generates rain attenuation data for the satellite link. This data includes a description of the static (i.e., yearly) attenuation process, an evaluation of the cumulative probability distribution for attenuation effects, and an evaluation of the probability of fades below selected fade depths. In addition, LeRC-SLAM calculates the elevation and azimuth angles of the terminal

  13. Integrated Medical Model (IMM) Optimization Version 4.0 Functional Improvements

    Science.gov (United States)

    Arellano, John; Young, M.; Boley, L.; Garcia, Y.; Saile, L.; Walton, M.; Kerstman, E.; Reyes, D.; Goodenow, D. A.; Myers, J. G.

    2016-01-01

    The IMM's ability to assess mission outcome risk levels relative to available resources provides a unique capability to offer guidance on optimal operational medical kit and vehicle resources. Post-processing optimization allows IMM to optimize essential resources to improve a specific model outcome, such as maximization of the Crew Health Index (CHI) or minimization of the probability of evacuation (EVAC) or of loss of crew life (LOCL). Mass and/or volume constrain the optimized resource set. The IMM's probabilistic simulation uses input data on one hundred medical conditions to simulate medical events that may occur in spaceflight, the resources required to treat those events, and the resulting impact to the mission based on specific crew and mission characteristics. Because IMM version 4.0 provides for partial treatment of medical events, IMM Optimization 4.0 scores resources at the individual resource unit increment level, as opposed to the full condition-specific treatment set level, as done in version 3.0. This allows the inclusion of as many resources as possible in the event that an entire set of resources called out for treatment cannot satisfy the constraints. IMM Optimization version 4.0 adds capabilities that increase efficiency by creating multiple resource sets based on differing constraints and priorities: CHI, EVAC, or LOCL. It also provides sets of resources that improve mission-related IMM v4.0 outputs with improved performance compared to the prior optimization. The new optimization represents much improved fidelity that will improve the utility of IMM 4.0 for decision support.
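The resource-selection problem described above is, in simplified form, a constrained optimization; below is a 0/1 knapsack sketch with an invented item list. This is a stand-in only: IMM's actual optimization scores partial treatment sets against probabilistic mission outcomes, not the fixed benefit scores used here.

```python
# Simplified stand-in for the IMM resource-selection problem: choose resource
# units to maximize a benefit score (a proxy for CHI improvement) under a mass
# budget. Classic 0/1 knapsack via dynamic programming; items are invented.

def best_kit(items, mass_budget):
    """items: list of (name, mass_g, benefit). Returns (best_benefit, names)."""
    best = {0: (0.0, [])}  # mass used -> (total benefit, chosen item names)
    for name, mass, benefit in items:
        for used, (b, chosen) in sorted(best.items(), reverse=True):
            new_used = used + mass
            if new_used <= mass_budget:
                cand = (b + benefit, chosen + [name])
                if new_used not in best or cand[0] > best[new_used][0]:
                    best[new_used] = cand
    return max(best.values())

items = [("analgesic", 100, 8.0), ("splint", 400, 5.0),
         ("antibiotic", 150, 9.0), ("suture kit", 250, 6.0)]
benefit, kit = best_kit(items, mass_budget=500)
```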

  14. Validity study of the Beck Anxiety Inventory (Portuguese version) by the Rasch Rating Scale model

    Directory of Open Access Journals (Sweden)

    Sónia Quintão

    2013-01-01

    Our objective was to conduct a validation study of the Portuguese version of the Beck Anxiety Inventory (BAI) by means of the Rasch Rating Scale Model, and then compare it with the most widely used anxiety scales in Portugal. The sample consisted of 1,160 adults (427 men and 733 women) aged 18-82 years (M = 33.39; SD = 11.85). Instruments were the Beck Anxiety Inventory, the State-Trait Anxiety Inventory and the Zung Self-Rating Anxiety Scale. It was found that the Beck Anxiety Inventory's system of four categories, the data-model fit, and person reliability were adequate. The measure can be considered unidimensional. Gender- and age-related differences were not a threat to validity. The BAI correlated significantly with other anxiety measures. In conclusion, the BAI shows good psychometric quality.
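In the Rasch Rating Scale Model used in this validation, the probability of responding in each category is determined by the person measure, the item difficulty, and a set of thresholds shared across items. A sketch with illustrative parameters, not estimates from the BAI study:

```python
import math

# Sketch of the Andrich Rating Scale Model: category probabilities for one
# item given a person measure (theta), item difficulty (delta), and shared
# thresholds (taus). Parameter values below are illustrative only.

def rsm_probs(theta, delta, taus):
    """Category probabilities P(X = 0..m) under the Rating Scale Model."""
    logits = [0.0]  # category 0 has an empty threshold sum
    for k in range(1, len(taus) + 1):
        logits.append(k * (theta - delta) - sum(taus[:k]))
    z = [math.exp(l) for l in logits]
    total = sum(z)
    return [v / total for v in z]

# BAI items use four response categories (0-3), hence three thresholds:
probs = rsm_probs(theta=0.5, delta=0.0, taus=[-1.0, 0.2, 0.8])
```

Raising the person measure shifts probability mass toward the higher response categories, which is how the model links the latent anxiety level to the observed ratings.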

  15. New Source Term Model for the RESRAD-OFFSITE Code Version 3

    Energy Technology Data Exchange (ETDEWEB)

    Yu, Charley [Argonne National Lab. (ANL), Argonne, IL (United States); Gnanapragasam, Emmanuel [Argonne National Lab. (ANL), Argonne, IL (United States); Cheng, Jing-Jy [Argonne National Lab. (ANL), Argonne, IL (United States); Kamboj, Sunita [Argonne National Lab. (ANL), Argonne, IL (United States); Chen, Shih-Yew [Argonne National Lab. (ANL), Argonne, IL (United States)

    2013-06-01

    This report documents the new source term model developed and implemented in Version 3 of the RESRAD-OFFSITE code. This new source term model includes: (1) "first order release with transport" option, in which the release of the radionuclide is proportional to the inventory in the primary contamination and the user-specified leach rate is the proportionality constant, (2) "equilibrium desorption release" option, in which the user specifies the distribution coefficient which quantifies the partitioning of the radionuclide between the solid and aqueous phases, and (3) "uniform release" option, in which the radionuclides are released from a constant fraction of the initially contaminated material during each time interval and the user specifies the duration over which the radionuclides are released.
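Under the "first order release with transport" option described above, the release rate is proportional to the remaining inventory, so the inventory in the primary contamination decays exponentially at the leach rate. A sketch with illustrative values (radioactive decay and transport omitted for clarity):

```python
import math

# Sketch of a first-order release source term: release rate = leach rate times
# current inventory, so inventory decays exponentially. Values illustrative;
# radioactive decay and transport are omitted for clarity.

def inventory(m0, leach_rate, t):
    """Inventory remaining at time t (years) under first-order leaching."""
    return m0 * math.exp(-leach_rate * t)

def release_rate(m0, leach_rate, t):
    """Instantaneous release from the contaminated zone at time t."""
    return leach_rate * inventory(m0, leach_rate, t)

m_10yr = inventory(100.0, leach_rate=0.05, t=10.0)  # activity remaining
r_10yr = release_rate(100.0, 0.05, 10.0)            # activity released per year
```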

  16. The SGHWR version of the Monte Carlo code W-MONTE. Part 1. The theoretical model

    International Nuclear Information System (INIS)

    Allen, F.R.

    1976-03-01

    W-MONTE provides a multi-group model of neutron transport in the exact geometry of a reactor lattice using Monte Carlo methods. It is currently restricted to uniform axial properties. Material data are normally obtained from a preliminary WIMS lattice calculation in the transport group structure. The SGHWR version has been required for the analysis of zero energy experiments and of special aspects of power reactor lattices, such as the unmoderated lattice region above the moderator when drained to dump height. Neutron transport is modelled for a uniform infinite lattice, simultaneously treating the cases of no leakage, radial or axial leakage only, and the combined effects of radial and axial leakage. Multigroup neutron balance edits are incorporated for the separate effects of radial and axial leakage, both to facilitate the analysis of leakage and to provide effective diffusion theory parameters for representing the core in whole-reactor calculations. (author)
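The kind of neutron balance edit described above can be illustrated with a toy one-group Monte Carlo in a homogeneous slab, tallying how histories end (absorbed vs. leaked). This is purely a sketch of the tallying idea; it bears no relation to W-MONTE's actual multigroup lattice treatment, and all parameters are invented.

```python
import math
import random

def slab_balance(sigma_t, absorb_prob, thickness, n=20000, seed=1):
    """Tally the fate of n one-group neutron histories in a 1-D slab.

    sigma_t = total macroscopic cross section (1/cm), absorb_prob = probability
    that a collision is an absorption, thickness = slab width (cm).
    """
    rng = random.Random(seed)
    tallies = {"absorbed": 0, "leaked": 0}
    for _ in range(n):
        x, mu = 0.0, 1.0                   # born at the left face, moving right
        while True:
            # sample a free-flight distance and advance along the slab axis
            x += mu * -math.log(1.0 - rng.random()) / sigma_t
            if x < 0.0 or x > thickness:
                tallies["leaked"] += 1     # escaped through either face
                break
            if rng.random() < absorb_prob: # collision: absorb or scatter
                tallies["absorbed"] += 1
                break
            mu = rng.uniform(-1.0, 1.0)    # isotropic scattering cosine
    return tallies
```

Every history ends in exactly one tally bin, so the edit "balances": absorptions plus leakages equal the source, mirroring the balance checks used to derive effective diffusion parameters.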

  17. Comparison of three ice cloud optical schemes in climate simulations with community atmospheric model version 5

    Science.gov (United States)

    Zhao, Wenjie; Peng, Yiran; Wang, Bin; Yi, Bingqi; Lin, Yanluan; Li, Jiangnan

    2018-05-01

    A newly implemented Baum-Yang scheme for simulating ice cloud optical properties is compared with existing schemes (the Mitchell and Fu schemes) in a standalone radiative transfer model and in the global climate model (GCM) Community Atmospheric Model Version 5 (CAM5). This study systematically analyzes the effect of different ice cloud optical schemes on global radiation and climate through a series of simulations with a simplified standalone radiative transfer model, the atmospheric GCM CAM5, and a comprehensive coupled climate model. Results from the standalone radiative model show that the Baum-Yang scheme yields generally weaker ice cloud effects on temperature profiles in both the shortwave and longwave spectra. CAM5 simulations indicate that the Baum-Yang scheme, in place of the Mitchell/Fu schemes, tends to cool the upper atmosphere and strengthen the thermodynamic instability in low and mid-latitudes, which could intensify the Hadley circulation and dehydrate the subtropics. When CAM5 is coupled with a slab ocean model to include simplified air-sea interaction, the reduced downward longwave flux to the surface in the Baum-Yang scheme mitigates the ice-albedo feedback in the Arctic as well as the water vapor and cloud feedbacks in low and mid-latitudes, resulting in an overall temperature decrease of 3.0/1.4 °C globally compared with the Mitchell/Fu schemes. The radiative effects and climate feedbacks of the three ice cloud optical schemes documented in this study can serve as a reference for future improvements to ice cloud simulation in CAM5.

  18. Immersion freezing by natural dust based on a soccer ball model with the Community Atmospheric Model version 5: climate effects

    Science.gov (United States)

    Wang, Yong; Liu, Xiaohong

    2014-12-01

    We introduce a simplified version of the soccer ball model (SBM) developed by Niedermeier et al (2014 Geophys. Res. Lett. 41 736-741) into the Community Atmospheric Model version 5 (CAM5). This is the first time the SBM has been used in an atmospheric model to parameterize heterogeneous ice nucleation. The SBM, simplified to make it suitable for application in atmospheric models, uses classical nucleation theory to describe immersion/condensation freezing by dust in the mixed-phase cloud regime. Uncertain parameters (mean contact angle, standard deviation of the contact angle probability distribution, and number of surface sites) in the SBM are constrained by fitting them to recent natural (Saharan) dust datasets. With the SBM in CAM5, we investigate the sensitivity of modeled cloud properties to the SBM parameters, and find significant seasonal and regional differences in the sensitivity among the three SBM parameters. Changes in the mean contact angle and the number of surface sites lead to changes in cloud properties in the Arctic in spring, which can be attributed to the transport of dust ice nuclei to this region. In winter, significant changes in cloud properties induced by these two parameters occur mainly in northern hemispheric mid-latitudes (e.g., East Asia). In comparison, no obvious changes in cloud properties caused by changes in the standard deviation are found in any season. These results are valuable for understanding heterogeneous ice nucleation behavior, and useful for guiding future model development.
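The soccer-ball idea can be sketched with a toy Monte Carlo: each droplet carries `n_site` surface sites whose contact angles are drawn from a normal distribution, and a droplet freezes if any site nucleates. The rate function `j()` below is a hypothetical stand-in for the CNT nucleation rate, not the parameterization used in CAM5, and all numbers are illustrative.

```python
import math
import random

def frozen_fraction(mu, sigma, n_site, n_drops=2000, dt=1.0, seed=0):
    """Fraction of droplets frozen after one time step of length dt.

    mu, sigma = mean and std. dev. of the contact-angle distribution (deg),
    n_site = number of nucleation sites per droplet.
    """
    rng = random.Random(seed)
    # hypothetical CNT-like nucleation rate (1/s), decreasing with contact angle
    j = lambda theta: math.exp(-max(theta, 0.0) / 10.0)
    total = 0.0
    for _ in range(n_drops):
        survive = 1.0
        for _ in range(n_site):
            theta = rng.gauss(mu, sigma)         # per-site contact angle
            survive *= math.exp(-j(theta) * dt)  # site does not nucleate in dt
        total += 1.0 - survive                   # droplet freezes if any site fires
    return total / n_drops
```

The sketch reproduces the qualitative sensitivities discussed above: lowering the mean contact angle or adding surface sites raises the frozen fraction, while the spread `sigma` has a comparatively weak effect.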

  19. Immersion freezing by natural dust based on a soccer ball model with the Community Atmospheric Model version 5: climate effects

    International Nuclear Information System (INIS)

    Wang, Yong; Liu, Xiaohong

    2014-01-01

    We introduce a simplified version of the soccer ball model (SBM) developed by Niedermeier et al (2014 Geophys. Res. Lett. 41 736–741) into the Community Atmospheric Model version 5 (CAM5). This is the first time the SBM has been used in an atmospheric model to parameterize heterogeneous ice nucleation. The SBM, simplified to make it suitable for application in atmospheric models, uses classical nucleation theory to describe immersion/condensation freezing by dust in the mixed-phase cloud regime. Uncertain parameters (mean contact angle, standard deviation of the contact angle probability distribution, and number of surface sites) in the SBM are constrained by fitting them to recent natural (Saharan) dust datasets. With the SBM in CAM5, we investigate the sensitivity of modeled cloud properties to the SBM parameters, and find significant seasonal and regional differences in the sensitivity among the three SBM parameters. Changes in the mean contact angle and the number of surface sites lead to changes in cloud properties in the Arctic in spring, which can be attributed to the transport of dust ice nuclei to this region. In winter, significant changes in cloud properties induced by these two parameters occur mainly in northern hemispheric mid-latitudes (e.g., East Asia). In comparison, no obvious changes in cloud properties caused by changes in the standard deviation are found in any season. These results are valuable for understanding heterogeneous ice nucleation behavior, and useful for guiding future model development. (letter)

  20. Incorporating remote sensing-based ET estimates into the Community Land Model version 4.5

    Directory of Open Access Journals (Sweden)

    D. Wang

    2017-07-01

    Full Text Available Land surface models bear substantial biases in simulating surface water and energy budgets despite the continuous development and improvement of model parameterizations. To reduce model biases, Parr et al. (2015) proposed a method incorporating satellite-based evapotranspiration (ET) products into land surface models. Here we apply this bias correction method to the Community Land Model version 4.5 (CLM4.5) and test its performance over the conterminous US (CONUS). We first calibrate a relationship between the observational ET from the Global Land Evaporation Amsterdam Model (GLEAM) product and the model ET from CLM4.5, and assume that this relationship holds beyond the calibration period. During the validation or application period, a simulation using the default CLM4.5 (CLM) is conducted first, and its output is combined with the calibrated observational-vs.-model ET relationship to derive a corrected ET; an experiment (CLMET) is then conducted in which the model-generated ET is overwritten with the corrected ET. Using the observations of ET, runoff, and soil moisture content as benchmarks, we demonstrate that CLMET greatly improves the hydrological simulations over most of the CONUS, and the improvement is stronger in the eastern CONUS than the western CONUS and is strongest over the Southeast CONUS. For any specific region, the degree of the improvement depends on whether the relationship between observational and model ET remains time-invariant (a fundamental hypothesis of the Parr et al. (2015) method) and whether water is the limiting factor in places where ET is underestimated. While the bias correction method improves hydrological estimates without improving the physical parameterization of land surface models, results from this study do provide guidance for physically based model development effort.
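The calibrate-then-correct idea described above can be sketched schematically: fit a relationship between model ET and observed ET over a calibration period, then apply it in the application period. A simple least-squares linear fit stands in here for the Parr et al. (2015) calibration, which may differ in form; the data are synthetic placeholders.

```python
def calibrate(model_et, obs_et):
    """Fit obs_et ~ slope * model_et + intercept by ordinary least squares."""
    n = len(model_et)
    mx = sum(model_et) / n
    my = sum(obs_et) / n
    slope = sum((x - mx) * (y - my) for x, y in zip(model_et, obs_et)) / \
            sum((x - mx) ** 2 for x in model_et)
    intercept = my - slope * mx
    return slope, intercept

def correct(model_et, slope, intercept):
    """Apply the calibrated relationship to new model output."""
    return [slope * x + intercept for x in model_et]

# calibration period: model underestimates observed ET by a factor of two
slope, intercept = calibrate([1.0, 2.0, 3.0], [2.0, 4.0, 6.0])
corrected = correct([4.0], slope, intercept)
```

The method's central hypothesis is visible in the code: the fitted `slope` and `intercept` are assumed to remain valid outside the calibration period.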

  1. VALIDATION OF THE ASTER GLOBAL DIGITAL ELEVATION MODEL VERSION 3 OVER THE CONTERMINOUS UNITED STATES

    Directory of Open Access Journals (Sweden)

    D. Gesch

    2016-06-01

    Full Text Available The ASTER Global Digital Elevation Model Version 3 (GDEM v3) was evaluated over the conterminous United States in a manner similar to the validation conducted for the original GDEM Version 1 (v1) in 2009 and GDEM Version 2 (v2) in 2011. The absolute vertical accuracy of GDEM v3 was calculated by comparison with more than 23,000 independent reference geodetic ground control points from the U.S. National Geodetic Survey. The root mean square error (RMSE) measured for GDEM v3 is 8.52 meters. This compares with the RMSE of 8.68 meters for GDEM v2. Another important descriptor of vertical accuracy is the mean error, or bias, which indicates if a DEM has an overall vertical offset from true ground level. The GDEM v3 mean error of −1.20 meters reflects an overall negative bias in GDEM v3. The absolute vertical accuracy assessment results, both mean error and RMSE, were segmented by land cover type to provide insight into how GDEM v3 performs in various land surface conditions. While the RMSE varies little across cover types (6.92 to 9.25 meters), the mean error (bias) does appear to be affected by land cover type, ranging from −2.99 to +4.16 meters across 14 land cover classes. These results indicate that in areas where built or natural aboveground features are present, GDEM v3 is measuring elevations above the ground level, a condition noted in assessments of previous GDEM versions (v1 and v2) and an expected condition given the type of stereo-optical image data collected by ASTER. GDEM v3 was also evaluated by differencing with the Shuttle Radar Topography Mission (SRTM) dataset. In many forested areas, GDEM v3 has elevations that are higher in the canopy than SRTM. The overall validation effort also included an evaluation of the GDEM v3 water mask. In general, the number of distinct water polygons in GDEM v3 is much lower than the number in a reference land cover dataset, but the total areas compare much more closely.

  2. Validation of the ASTER Global Digital Elevation Model version 3 over the conterminous United States

    Science.gov (United States)

    Gesch, Dean B.; Oimoen, Michael J.; Danielson, Jeffrey J.; Meyer, David; Halounova, L; Šafář, V.; Jiang, J.; Olešovská, H.; Dvořáček, P.; Holland, D.; Seredovich, V.A.; Muller, J.P.; Pattabhi Rama Rao, E.; Veenendaal, B.; Mu, L.; Zlatanova, S.; Oberst, J.; Yang, C.P.; Ban, Y.; Stylianidis, S.; Voženílek, V.; Vondráková, A.; Gartner, G.; Remondino, F.; Doytsher, Y.; Percivall, George; Schreier, G.; Dowman, I.; Streilein, A.; Ernst, J.

    2016-01-01

    The ASTER Global Digital Elevation Model Version 3 (GDEM v3) was evaluated over the conterminous United States in a manner similar to the validation conducted for the original GDEM Version 1 (v1) in 2009 and GDEM Version 2 (v2) in 2011. The absolute vertical accuracy of GDEM v3 was calculated by comparison with more than 23,000 independent reference geodetic ground control points from the U.S. National Geodetic Survey. The root mean square error (RMSE) measured for GDEM v3 is 8.52 meters. This compares with the RMSE of 8.68 meters for GDEM v2. Another important descriptor of vertical accuracy is the mean error, or bias, which indicates if a DEM has an overall vertical offset from true ground level. The GDEM v3 mean error of −1.20 meters reflects an overall negative bias in GDEM v3. The absolute vertical accuracy assessment results, both mean error and RMSE, were segmented by land cover type to provide insight into how GDEM v3 performs in various land surface conditions. While the RMSE varies little across cover types (6.92 to 9.25 meters), the mean error (bias) does appear to be affected by land cover type, ranging from −2.99 to +4.16 meters across 14 land cover classes. These results indicate that in areas where built or natural aboveground features are present, GDEM v3 is measuring elevations above the ground level, a condition noted in assessments of previous GDEM versions (v1 and v2) and an expected condition given the type of stereo-optical image data collected by ASTER. GDEM v3 was also evaluated by differencing with the Shuttle Radar Topography Mission (SRTM) dataset. In many forested areas, GDEM v3 has elevations that are higher in the canopy than SRTM. The overall validation effort also included an evaluation of the GDEM v3 water mask. In general, the number of distinct water polygons in GDEM v3 is much lower than the number in a reference land cover dataset, but the total areas compare much more closely.
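The two accuracy metrics used above, RMSE and mean error (bias), reduce to a short calculation over DEM elevations sampled at reference control points. The arrays below are illustrative placeholders, not the NGS control-point data.

```python
import math

def vertical_accuracy(dem_heights, ref_heights):
    """Return (mean error, RMSE) of DEM elevations against reference heights."""
    errors = [d - r for d, r in zip(dem_heights, ref_heights)]
    mean_error = sum(errors) / len(errors)   # bias: positive => DEM above truth
    rmse = math.sqrt(sum(e * e for e in errors) / len(errors))
    return mean_error, rmse

bias, rmse = vertical_accuracy([101.2, 99.5, 100.8], [100.0, 100.0, 100.0])
```

Segmenting the control points by land cover class, as the study does, just means running this calculation once per class: the bias can swing sign between classes even when the RMSE is nearly flat.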

  3. 78 FR 32224 - Availability of Version 3.1.2 of the Connect America Fund Phase II Cost Model; Additional...

    Science.gov (United States)

    2013-05-29

    ... Version 3.1.2 of the Connect America Fund Phase II Cost Model; Additional Discussion Topics in Connect America Cost Model Virtual Workshop AGENCY: Federal Communications Commission. ACTION: Proposed rule... America Cost Model (CAM v3.1.2), which allows Commission staff and interested parties to calculate costs...

  4. Version 2.0 of the European Gas Model. Changes and their impact on the German gas sector

    International Nuclear Information System (INIS)

    Balmert, David; Petrov, Konstantin

    2015-01-01

    In January 2015 ACER, the European Agency for the Cooperation of Energy Regulators, presented an updated version of its target model for the inner-European natural gas market, also referred to as version 2.0 of the Gas Target Model. During 2014 the existing model, originally developed by the Council of European Energy Regulators (CEER) and launched in 2011, had been analysed, revised and updated in preparation for the new version. While it has few surprises to offer, the new Gas Target Model specifies and goes into greater detail on many elements of the original model. Some of the new content is highly relevant to the German gas sector, not least the deliberations on the current key issues, namely security of supply and the ability of the gas markets to function.

  5. Mesoscale model simulation of low level equatorial winds over Borneo during the haze episode of September 1997

    Science.gov (United States)

    Mahmud, Mastura

    2009-08-01

    The large-scale vegetation fires instigated by local farmers during the dry period of the major El Niño event in 1997 can be considered one of the worst environmental disasters to have occurred in southeast Asia in recent history. This study investigated the local meteorological characteristics of an equatorial environment within a domain that includes the northwestern part of Borneo from 17 to 27 September 1997, during the height of the haze episode, by utilizing a limited-area three-dimensional meteorological and dispersion model, The Air Pollution Model (TAPM). Daily land and sea breeze conditions near the northwestern coast of Borneo in the state of Sarawak, Malaysia were predicted with moderate success: the index of agreement between observed and simulated wind speeds was below one, and the skill indicator that compares the simulated standard deviation with that of the observations showed a slight overprediction of 2.3. The innermost domain of study comprises an area of 24,193 km2, from approximately 109°E to 111°E and from 1°N to 2.3°N, which includes part of the South China Sea. Tracer analysis of air particles sourced in the state of Sarawak on the island of Borneo verified the existence of landward and seaward movements of the air in the simulation of the low-level wind field. Polluted air particles were transported seaward during night-time and landward during daytime, highlighting the recirculation of aged and newer air particles over the eleven days of the model simulation. Near-calm conditions at low levels were simulated by the trajectory analysis from midnight to mid-day on 22 September 1997. Low-level turbulence within the planetary boundary layer, in terms of total kinetic energy, was weak, congruent with the weak low-level winds that reduced the ability of the air to transport the pollutants. Statistical evaluation showed that parameters such as the systematic
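The "index of agreement" cited above is a standard model-evaluation statistic; the sketch below assumes Willmott's common formulation, since the abstract does not give the formula, and uses made-up wind-speed values.

```python
def index_of_agreement(pred, obs):
    """Willmott-style index of agreement: 1.0 = perfect, lower = worse."""
    obs_mean = sum(obs) / len(obs)
    num = sum((p - o) ** 2 for p, o in zip(pred, obs))
    den = sum((abs(p - obs_mean) + abs(o - obs_mean)) ** 2
              for p, o in zip(pred, obs))
    return 1.0 - num / den

# nearly perfect simulated wind speeds score close to 1
d = index_of_agreement([1.1, 2.1, 2.9], [1.0, 2.0, 3.0])
```

Because the index is bounded above by one, reporting it as "less than one" (as the abstract does) conveys how far the simulation falls short of exact agreement.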

  6. Probabilistic Model for Integrated Assessment of the Behavior at the T.D.P. Version 2

    International Nuclear Information System (INIS)

    Hurtado, A.; Eguilior, S.; Recreo, F.

    2015-01-01

    This report documents the completion of the first phase of the implementation of the ABACO2G methodology (Bayes Application to Geological Storage of CO2) and the final version of the ABACO2G probabilistic model for the injection phase, before its future validation in the experimental field of the Technology Development Plant in Hontomín (Burgos). The model, which is based on determining the probabilistic risk component of a geological storage of CO2 using the formalism of Bayesian networks and Monte Carlo simulation, yields quantitative probability functions for the total CO2 storage system and for each of its subsystems (the storage subsystem and primary seal; the secondary containment subsystem; and the dispersion, or tertiary, subsystem). It also covers the implementation of the stochastic time evolution of the CO2 plume during the injection period, the stochastic time evolution of the drying front, the probabilistic evolution of the pressure front, decoupled from the CO2 plume progress front, and the implementation of submodels and leakage probability functions for the major leakage risk elements (fractures/faults and wells/deep boreholes), which together define the space of events needed to estimate the risks associated with the CO2 geological storage system. The activities covered in this report replace the qualitative estimation submodels of the former ABACO2G version, developed during Phase I of project ALM-10-017, with analytical, semi-analytical or numerical submodels for the main risk elements (wells and fractures), to obtain an integrated probabilistic model of a CO2 storage complex in carbonate formations that meets the needs of the integrated behavior evaluation of the Technology Development Plant in Hontomín
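The way pathway-level leakage probabilities combine into a system-level figure can be illustrated with a toy calculation. Treating wells and fractures as independent leakage pathways is an assumption made here for illustration; ABACO2G's Bayesian network encodes richer dependencies.

```python
def p_any_leak(p_well, n_wells, p_frac, n_fracs):
    """Probability that at least one pathway leaks, assuming independence.

    p_well / p_frac = per-element leakage probability over the assessment
    period; n_wells / n_fracs = number of wells and fractures/faults.
    """
    p_no_leak = (1.0 - p_well) ** n_wells * (1.0 - p_frac) ** n_fracs
    return 1.0 - p_no_leak

# one well at 10% and one fracture at 20% give a 28% system-level probability
p = p_any_leak(0.1, 1, 0.2, 1)
```

Under independence, adding risk elements can only raise the system-level probability, which is why well and fracture submodels dominate the event space described above.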

  7. Hydrogeochemical evaluation for Simpevarp model version 1.2. Preliminary site description of the Simpevarp area

    International Nuclear Information System (INIS)

    Laaksoharju, Marcus

    2004-12-01

    Siting studies for SKB's programme of deep geological disposal of nuclear fuel waste currently involve the investigation of two locations, Simpevarp and Forsmark, to determine their geological, hydrogeochemical and hydrogeological characteristics. Work completed to date has resulted in Model version 1.2, which represents the second evaluation of the available Simpevarp groundwater analytical data collected up to April 2004. The deepest fracture groundwater samples with sufficient analytical data reflected depths down to 1.7 km. Model version 1.2 focusses on geochemical and mixing processes affecting the groundwater composition in the uppermost part of the bedrock, down to repository levels, and eventually extending to 1,000 m depth. The groundwater flow regimes at Laxemar/Simpevarp are considered local and extend down to depths of around 600-1,000 m depending on local topography. The marked differences in the groundwater flow regimes between Laxemar and Simpevarp are reflected in the groundwater chemistry, where four major hydrochemical groups of groundwaters (types A-D) have been identified. TYPE A: this type comprises dilute groundwaters of Na-HCO3 type present at shallow (<300 m) levels at Simpevarp, and at even greater depths (approx. 1200 m) at Laxemar. At Simpevarp the groundwaters are mainly Na-Ca-Cl with increasingly enhanced Br and SO4 with depth. At Laxemar they are mainly Ca-Na-Cl, also with increasing enhancements of Br and SO4 with depth. The main reactions involve ion exchange (Ca). At both sites a glacial component and a deep saline component are present. At Simpevarp the saline component may potentially be non-marine and/or non-marine/old Littorina marine in origin; at Laxemar it is more likely to be non-marine in origin. TYPE D: this type comprises reducing, highly saline groundwaters (>20,000 mg/L Cl; to a maximum of ∼70 g/L TDS) and has been identified only at Laxemar at depths exceeding 1200 m. It is mainly Ca-Na-Cl with higher Br but lower SO4 compared

  8. Evaluation of a new CNRM-CM6 model version for seasonal climate predictions

    Science.gov (United States)

    Volpi, Danila; Ardilouze, Constantin; Batté, Lauriane; Dorel, Laurant; Guérémy, Jean-François; Déqué, Michel

    2017-04-01

    This work presents the quality assessment of a new version of the Météo-France coupled climate prediction system, which has been developed in the EU COPERNICUS Climate Change Services framework to carry out seasonal forecasts. The system is based on the CNRM-CM6 model, with Arpege-Surfex 6.2.2 as the atmosphere/land component and Nemo 3.2 as the ocean component, which directly embeds the sea-ice component Gelato 6.0. In order to obtain a robust diagnostic, the experiment is composed of 60 ensemble members generated with stochastic dynamic perturbations. The experiment has been performed over a 37-year re-forecast period from 1979 to 2015, with two start dates per year, on May 1st and November 1st. The evaluation of the predictive skill of the model is presented from two perspectives. On the one hand, we assess the ability of the model to faithfully respond to positive or negative ENSO, NAO and QBO events, independently of the predictability of these events. This assessment is carried out through a composite analysis, and shows that the model succeeds in reproducing the main patterns for 2-meter temperature, precipitation and geopotential height at 500 hPa during the winter season. On the other hand, the model's predictive skill for the same events (positive and negative ENSO, NAO and QBO) is evaluated.
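The composite analysis mentioned above amounts to averaging a field over positive-event years and negative-event years, selected by an index threshold, and differencing the two means. The sketch below uses synthetic placeholder values, not re-forecast output, and a simple ±1 threshold as an illustrative convention.

```python
def composite_difference(index, field, thresh=1.0):
    """Mean field over positive events minus mean field over negative events.

    index = one climate-index value per year (e.g. an ENSO index),
    field = the co-located field value per year (e.g. 2-m temperature anomaly).
    """
    pos = [f for i, f in zip(index, field) if i >= thresh]
    neg = [f for i, f in zip(index, field) if i <= -thresh]
    mean = lambda xs: sum(xs) / len(xs)
    return mean(pos) - mean(neg)

# two positive-event years, two negative-event years, one neutral year
d = composite_difference([1.5, -1.5, 0.2, 2.0, -2.0], [3.0, 1.0, 2.0, 4.0, 0.0])
```

In practice the same operation is applied grid point by grid point, producing the composite maps of temperature, precipitation and 500 hPa geopotential height that the assessment compares against observed patterns.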

  9. A description of the FAMOUS (version XDBUA) climate model and control run

    Directory of Open Access Journals (Sweden)

    A. Osprey

    2008-12-01

    Full Text Available FAMOUS is an ocean-atmosphere general circulation model of low resolution, capable of simulating approximately 120 years of model climate per wallclock day using current high performance computing facilities. It uses most of the same code as HadCM3, a widely used climate model of higher resolution and computational cost, and has been tuned to reproduce the same climate reasonably well. FAMOUS is useful for climate simulations where the computational cost makes the application of HadCM3 unfeasible, either because of the length of simulation or the size of the ensemble desired. We document a number of scientific and technical improvements to the original version of FAMOUS. These improvements include changes to the parameterisations of ozone and sea-ice which alleviate a significant cold bias from high northern latitudes and the upper troposphere, and the elimination of volume-averaged drifts in ocean tracers. A simple model of the marine carbon cycle has also been included. A particular goal of FAMOUS is to conduct millennial-scale paleoclimate simulations of Quaternary ice ages; to this end, a number of useful changes to the model infrastructure have been made.

  10. Hydrogeochemical evaluation of the Forsmark site, model version 1.1

    Energy Technology Data Exchange (ETDEWEB)

    Laaksoharju, Marcus (ed.) [GeoPoint AB, Sollentuna (Sweden); Gimeno, Maria; Auque, Luis; Gomez, Javier [Univ. of Zaragoza (Spain). Dept. of Earth Sciences; Smellie, John [Conterra AB, Uppsala (Sweden); Tullborg, Eva-Lena [Terralogica AB, Graabo (Sweden); Gurban, Ioana [3D-Terra, Montreal (Canada)

    2004-01-01

    Siting studies for SKB's programme of deep geological disposal of nuclear fuel waste currently involve the investigation of two locations, Forsmark and Simpevarp, on the eastern coast of Sweden to determine their geological, geochemical and hydrogeological characteristics. Work completed to date has resulted in model version 1.1, which represents the first evaluation of the available Forsmark groundwater analytical data collected up to May 1, 2003 (i.e. the first 'data freeze'). The HAG group had access to a total of 456 water samples, collected mostly from the surface and sub-surface environment (e.g. soil pipes in the overburden, streams and lakes); only a few samples were collected from drilled boreholes. The deepest samples reflected depths down to 200 m. Furthermore, most of the waters sampled (74%) lacked crucial analytical information, which restricted the evaluation. Consequently, model version 1.1 focussed on the processes taking place in the uppermost part of the bedrock rather than at repository levels. The complex groundwater evolution and patterns at Forsmark are a result of many factors, such as: a) the flat topography and closeness to the Baltic Sea, resulting in relatively small hydrogeological driving forces which can preserve old water types from being flushed out; b) changes in hydrogeology related to glaciation/deglaciation and land uplift; c) repeated marine/lake water regressions/transgressions; and d) organic or inorganic alteration of the groundwater caused by microbial processes or water/rock interactions. The sampled groundwaters reflect to various degrees modern or ancient water/rock interactions and mixing processes. Based on the general geochemical character and the apparent age, two major water types occur in Forsmark: fresh-meteoric waters with a bicarbonate imprint and low residence times (tritium values above detection limit), and brackish-marine waters with Cl contents up to 6,000 mg/L and longer residence times (tritium

  11. Thermal modelling. Preliminary site description Simpevarp subarea - version 1.2

    Energy Technology Data Exchange (ETDEWEB)

    Sundberg, Jan; Back, Paer-Erik; Bengtsson, Anna; Laendell, Maerta [Geo Innova AB, Linkoeping (Sweden)

    2005-08-15

    This report presents the thermal site descriptive model for the Simpevarp subarea, version 1.2. The main objective of this report is to present the thermal modelling work where data has been identified, quality controlled, evaluated and summarised in order to make an upscaling to lithological domain level possible. The thermal conductivity at possible canister scale has been modelled for four different lithological domains (RSMA01 (Aevroe granite), RSMB01 (Fine-grained dioritoid), RSMC01 (mixture of Aevroe granite and Quartz monzodiorite), and RSMD01 (Quartz monzodiorite)). A main modelling approach has been used to determine the mean value of the thermal conductivity. Three alternative/complementary approaches have been used to evaluate the spatial variability of the thermal conductivity at domain level. The thermal modelling approaches are based on the lithological model for the Simpevarp subarea, version 1.2 together with rock type models constituted from measured and calculated (from mineral composition) thermal conductivities. For one rock type, the Aevroe granite (501044), density loggings within the specific rock type has also been used in the domain modelling in order to consider the spatial variability within the Aevroe granite. This has been possible due to the presented relationship between density and thermal conductivity, valid for the Aevroe granite. Results indicate that the mean of thermal conductivity is expected to exhibit only a small variation between the different domains, from 2.62 W/(m.K) to 2.80 W/(m.K). The standard deviation varies according to the scale considered and for the canister scale it is expected to range from 0.20 to 0.28 W/(m.K). Consequently, the lower confidence limit (95% confidence) for the canister scale is within the range 2.04-2.35 W/(m.K) for the different domains. 
The temperature dependence is rather small, with a decrease in thermal conductivity of 1.1-3.4% per 100 deg C increase in temperature for the dominating rock types.

  12. Thermal modelling. Preliminary site description Simpevarp subarea - version 1.2

    International Nuclear Information System (INIS)

    Sundberg, Jan; Back, Paer-Erik; Bengtsson, Anna; Laendell, Maerta

    2005-08-01

    This report presents the thermal site descriptive model for the Simpevarp subarea, version 1.2. The main objective of this report is to present the thermal modelling work where data has been identified, quality controlled, evaluated and summarised in order to make an upscaling to lithological domain level possible. The thermal conductivity at possible canister scale has been modelled for four different lithological domains (RSMA01 (Aevroe granite), RSMB01 (Fine-grained dioritoid), RSMC01 (mixture of Aevroe granite and Quartz monzodiorite), and RSMD01 (Quartz monzodiorite)). A main modelling approach has been used to determine the mean value of the thermal conductivity. Three alternative/complementary approaches have been used to evaluate the spatial variability of the thermal conductivity at domain level. The thermal modelling approaches are based on the lithological model for the Simpevarp subarea, version 1.2 together with rock type models constituted from measured and calculated (from mineral composition) thermal conductivities. For one rock type, the Aevroe granite (501044), density loggings within the specific rock type has also been used in the domain modelling in order to consider the spatial variability within the Aevroe granite. This has been possible due to the presented relationship between density and thermal conductivity, valid for the Aevroe granite. Results indicate that the mean of thermal conductivity is expected to exhibit only a small variation between the different domains, from 2.62 W/(m.K) to 2.80 W/(m.K). The standard deviation varies according to the scale considered and for the canister scale it is expected to range from 0.20 to 0.28 W/(m.K). Consequently, the lower confidence limit (95% confidence) for the canister scale is within the range 2.04-2.35 W/(m.K) for the different domains. 
The temperature dependence is rather small, with a decrease in thermal conductivity of 1.1-3.4% per 100 deg C increase in temperature for the dominating rock types.

  13. Thermal modelling. Preliminary site description Laxemar subarea - version 1.2

    Energy Technology Data Exchange (ETDEWEB)

    Sundberg, Jan; Wrafter, John; Back, Paer-Erik; Laendell, Maerta [Geo Innova AB, Linkoeping (Sweden)

    2006-02-15

    This report presents the thermal site descriptive model for the Laxemar subarea, version 1.2. The main objective of this report is to present the thermal modelling work where data has been identified, quality controlled, evaluated and summarised in order to make an upscaling to lithological domain level possible. The thermal conductivity at canister scale has been modelled for five different lithological domains: RSMA (Aevroe granite), RSMBA (mixture of Aevroe granite and fine-grained dioritoid), RSMD (quartz monzodiorite), RSME (diorite/gabbro) and RSMM (mix domain with high frequency of diorite to gabbro). A base modelling approach has been used to determine the mean value of the thermal conductivity. Four alternative/complementary approaches have been used to evaluate the spatial variability of the thermal conductivity at domain level. The thermal modelling approaches are based on the lithological domain model for the Laxemar subarea, version 1.2 together with rock type models based on measured and calculated (from mineral composition) thermal conductivities. For one rock type, Aevroe granite (501044), density loggings have also been used in the domain modelling in order to evaluate the spatial variability within the Aevroe granite. This has been possible due to an established relationship between density and thermal conductivity, valid for the Aevroe granite. Results indicate that the means of thermal conductivity for the various domains are expected to exhibit a variation from 2.45 W/(m.K) to 2.87 W/(m.K). The standard deviation varies according to the scale considered, and for the 0.8 m scale it is expected to range from 0.17 to 0.29 W/(m.K). Estimates of lower tail percentiles for the same scale are presented for all five domains. The temperature dependence is rather small with a decrease in thermal conductivity of 1.1-5.3% per 100 deg C increase in temperature for the dominant rock types. 
There are a number of important uncertainties associated with these
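
    As a purely illustrative sketch (not SKB's actual workflow), the domain-level summary statistics described above (mean, standard deviation at a given scale, a lower-tail percentile, and the roughly linear temperature dependence) can be computed as follows; the function names and the percentile choice are assumptions:

```python
import statistics

def summarize_domain(conductivities, lower_pct=0.01):
    """Summarize simulated thermal conductivities (W/(m K)) for one domain."""
    mean = statistics.mean(conductivities)
    std = statistics.stdev(conductivities)
    # crude empirical lower-tail percentile estimate from the sorted sample
    lower_tail = sorted(conductivities)[int(lower_pct * len(conductivities))]
    return mean, std, lower_tail

def temperature_adjusted(k_ref, delta_t_c, pct_drop_per_100c):
    """Linear decrease with temperature, e.g. 1.1-5.3 % per 100 deg C."""
    return k_ref * (1.0 - pct_drop_per_100c / 100.0 * delta_t_c / 100.0)
```

    For example, a conductivity of 2.80 W/(m·K) with a 5 % decrease per 100 °C gives about 2.66 W/(m·K) after a 100 °C temperature increase.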

  14. Solid waste projection model: Database user's guide (Version 1.0)

    International Nuclear Information System (INIS)

    Carr, F.; Stiles, D.

    1991-01-01

    The Solid Waste Projection Model (SWPM) system is an analytical tool developed by Pacific Northwest Laboratory (PNL) for Westinghouse Hanford Company (WHC) specifically to address Hanford solid waste management issues. This document is one of a set of documents supporting the SWPM system and providing instructions in the use and maintenance of SWPM components. This manual contains instructions for preparing to use Version 1 of the SWPM database, for entering and maintaining data, and for performing routine database functions. This document supports only those operations which are specific to SWPM database menus and functions, and does not provide instructions in the use of Paradox, the database management system in which the SWPM database is established. 3 figs., 1 tab

  15. Solid Waste Projection Model: Database user's guide (Version 1.3)

    International Nuclear Information System (INIS)

    Blackburn, C.L.

    1991-11-01

    The Solid Waste Projection Model (SWPM) system is an analytical tool developed by Pacific Northwest Laboratory (PNL) for Westinghouse Hanford Company (WHC) specifically to address Hanford solid waste management issues. This document is one of a set of documents supporting the SWPM system and providing instructions in the use and maintenance of SWPM components. This manual contains instructions for preparing to use Version 1.3 of the SWPM database, for entering and maintaining data, and for performing routine database functions. This document supports only those operations which are specific to SWPM database menus and functions and does not provide instruction in the use of Paradox, the database management system in which the SWPM database is established

  16. HAM Construction modeling using COMSOL with MatLab Modeling Guide version 1.0.

    NARCIS (Netherlands)

    Schijndel, van A.W.M.

    2006-01-01

    This paper presents a first modeling guide for the modeling and simulation of up to full 3D dynamic Heat, Air & Moisture (HAM) transport in building constructions using COMSOL with Matlab. The modeling scripts are provided in the appendix. Furthermore, all modeling files and results are published at

  18. Development of a source oriented version of the WRF/Chem model and its application to the California regional PM10 / PM2.5 air quality study

    Directory of Open Access Journals (Sweden)

    H. Zhang

    2014-01-01

    Full Text Available A source-oriented version of the Weather Research and Forecasting model with chemistry (SOWC hereinafter) was developed. SOWC separately tracks primary particles with different hygroscopic properties rather than instantaneously combining them into an internal mixture. This approach avoids artificially mixing light-absorbing black + brown carbon particles with materials such as sulfate that would encourage the formation of additional coatings. Source-oriented particles undergo coagulation and gas-particle conversion, but these processes are considered in a dynamic framework that realistically "ages" primary particles over hours and days in the atmosphere. SOWC more realistically predicts radiative feedbacks from anthropogenic aerosols compared to models that make internal mixing or other artificial mixing assumptions. A three-week stagnation episode (15 December 2000 to 6 January 2001) in the San Joaquin Valley (SJV) during the California Regional PM10 / PM2.5 Air Quality Study (CRPAQS) was chosen for the initial application of the new modeling system. Primary particles emitted from diesel engines, wood smoke, high-sulfur fuel combustion, food cooking, and other anthropogenic sources were tracked separately throughout the simulation as they aged in the atmosphere. Differences were identified between predictions from the source-oriented vs. the internally mixed representation of particles with meteorological feedbacks in WRF/Chem for a number of meteorological parameters: aerosol extinction coefficients, downward shortwave flux, planetary boundary layer depth, and primary and secondary particulate matter concentrations. Comparisons with observations show that SOWC predicts particle scattering coefficients more accurately than the internally mixed model. 
Downward shortwave radiation predicted by SOWC is enhanced by ~1% at ground level chiefly because diesel engine particles in the source-oriented mixture are not artificially coated with material that

  19. Systems Biology Markup Language (SBML) Level 2 Version 5: Structures and Facilities for Model Definitions

    Directory of Open Access Journals (Sweden)

    Hucka Michael

    2015-06-01

    Full Text Available Computational models can help researchers to interpret data, understand biological function, and make quantitative predictions. The Systems Biology Markup Language (SBML) is a file format for representing computational models in a declarative form that can be exchanged between different software systems. SBML is oriented towards describing biological processes of the sort common in research on a number of topics, including metabolic pathways, cell signaling pathways, and many others. By supporting SBML as an input/output format, different tools can all operate on an identical representation of a model, removing opportunities for translation errors and assuring a common starting point for analyses and simulations. This document provides the specification for Version 5 of SBML Level 2. The specification defines the data structures prescribed by SBML as well as their encoding in XML, the eXtensible Markup Language. This specification also defines validation rules that determine the validity of an SBML document, and provides many examples of models in SBML form. Other materials and software are available from the SBML project web site, http://sbml.org/.

  20. Systems Biology Markup Language (SBML) Level 2 Version 5: Structures and Facilities for Model Definitions.

    Science.gov (United States)

    Hucka, Michael; Bergmann, Frank T; Dräger, Andreas; Hoops, Stefan; Keating, Sarah M; Le Novère, Nicolas; Myers, Chris J; Olivier, Brett G; Sahle, Sven; Schaff, James C; Smith, Lucian P; Waltemath, Dagmar; Wilkinson, Darren J

    2015-09-04

    Computational models can help researchers to interpret data, understand biological function, and make quantitative predictions. The Systems Biology Markup Language (SBML) is a file format for representing computational models in a declarative form that can be exchanged between different software systems. SBML is oriented towards describing biological processes of the sort common in research on a number of topics, including metabolic pathways, cell signaling pathways, and many others. By supporting SBML as an input/output format, different tools can all operate on an identical representation of a model, removing opportunities for translation errors and assuring a common starting point for analyses and simulations. This document provides the specification for Version 5 of SBML Level 2. The specification defines the data structures prescribed by SBML as well as their encoding in XML, the eXtensible Markup Language. This specification also defines validation rules that determine the validity of an SBML document, and provides many examples of models in SBML form. Other materials and software are available from the SBML project web site, http://sbml.org.
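
    A real SBML validator (e.g. libSBML) implements the specification's full rule set; the toy check below only illustrates the idea of structural validation rules: the root element must be <sbml> carrying level and version attributes and containing a <model> child. The namespace string used in the test is an assumption based on the SBML naming pattern.

```python
import xml.etree.ElementTree as ET

def minimally_valid(document: str) -> bool:
    """Toy structural check, far weaker than real SBML validation rules."""
    root = ET.fromstring(document)
    if not root.tag.endswith("sbml"):            # tag may be namespace-qualified
        return False
    if "level" not in root.attrib or "version" not in root.attrib:
        return False
    # an SBML document must contain a model definition
    return any(child.tag.endswith("model") for child in root)
```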

  1. Overview of the Meso-NH model version 5.4 and its applications

    Directory of Open Access Journals (Sweden)

    C. Lac

    2018-05-01

    Full Text Available This paper presents the Meso-NH model version 5.4. Meso-NH is an atmospheric non-hydrostatic research model that is applied to a broad range of resolutions, from synoptic to turbulent scales, and is designed for studies of physics and chemistry. It is a limited-area model employing advanced numerical techniques, including monotonic advection schemes for scalar transport and fourth-order centered or odd-order WENO advection schemes for momentum. The model includes state-of-the-art physics parameterization schemes that are important to represent convective-scale phenomena and turbulent eddies, as well as flows at larger scales. In addition, Meso-NH has been expanded to provide capabilities for a range of Earth system prediction applications such as chemistry and aerosols, electricity and lightning, hydrology, wildland fires, volcanic eruptions, and cyclones with ocean coupling. Here, we present the main innovations to the dynamics and physics of the code since the pioneering paper of Lafore et al. (1998) and provide an overview of recent applications and couplings.

  2. Conceptual Model of an Application for Automated Generation of Webpage Mobile Versions

    Directory of Open Access Journals (Sweden)

    Todor Rachovski

    2017-11-01

    Full Text Available Accessing webpages through various types of mobile devices with different screen sizes and using different browsers has put new demands on web developers. The main challenge is the development of websites with responsive design that is adaptable depending on the mobile device used. The article presents a conceptual model of an app for automated generation of mobile pages. It has five-layer architecture: database, database management layer, business logic layer, web services layer and a presentation layer. The database stores all the data needed to run the application. The database management layer uses an ORM model to convert relational data into an object-oriented format and control the access to them. The business logic layer contains components that perform the actual work on building a mobile version of the page, including parsing, building a hierarchical model of the page and a number of transformations. The web services layer provides external applications with access to lower-level functionalities, and the presentation layer is responsible for choosing and using the appropriate CSS. A web application that uses the proposed model was developed and experiments were conducted.
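
    A minimal sketch, not the authors' code, of the business-logic transformation step described above: given a hierarchical page model, produce a mobile version by rewriting CSS class names and dropping nodes flagged as desktop-only. The node structure and flag names are assumptions made for illustration.

```python
def transform_for_mobile(node):
    """Recursively transform one node of a hierarchical page model."""
    if node.get("desktop_only"):
        return None                       # element omitted from the mobile version
    mobile = {
        "tag": node["tag"],
        # swap the desktop stylesheet class for its mobile counterpart
        "css": node.get("css", "").replace("desktop", "mobile"),
        "children": [],
    }
    for child in node.get("children", []):
        transformed = transform_for_mobile(child)
        if transformed is not None:
            mobile["children"].append(transformed)
    return mobile
```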

  3. Accelerator System Model (ASM) user manual with physics and engineering model documentation. ASM version 1.0

    International Nuclear Information System (INIS)

    1993-07-01

    The Accelerator System Model (ASM) is a computer program developed to model proton radiofrequency accelerators and to carry out system-level trade studies. The ASM FORTRAN subroutines are incorporated into an intuitive graphical user interface which provides for the "construction" of the accelerator in a window on the computer screen. The interface is based on the Shell for Particle Accelerator Related Codes (SPARC) software technology written for the Macintosh operating system in the C programming language. This User Manual describes the operation and use of the ASM application within the SPARC interface. The Appendix provides a detailed description of the physics and engineering models used in ASM. ASM Version 1.0 is a joint project of G. H. Gillespie Associates, Inc. and the Accelerator Technology (AT) Division of the Los Alamos National Laboratory. Neither the ASM Version 1.0 software nor this ASM Documentation may be reproduced without the expressed written consent of both the Los Alamos National Laboratory and G. H. Gillespie Associates, Inc.

  4. Accelerator System Model (ASM) user manual with physics and engineering model documentation. ASM version 1.0

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    1993-07-01

    The Accelerator System Model (ASM) is a computer program developed to model proton radiofrequency accelerators and to carry out system-level trade studies. The ASM FORTRAN subroutines are incorporated into an intuitive graphical user interface which provides for the "construction" of the accelerator in a window on the computer screen. The interface is based on the Shell for Particle Accelerator Related Codes (SPARC) software technology written for the Macintosh operating system in the C programming language. This User Manual describes the operation and use of the ASM application within the SPARC interface. The Appendix provides a detailed description of the physics and engineering models used in ASM. ASM Version 1.0 is a joint project of G. H. Gillespie Associates, Inc. and the Accelerator Technology (AT) Division of the Los Alamos National Laboratory. Neither the ASM Version 1.0 software nor this ASM Documentation may be reproduced without the expressed written consent of both the Los Alamos National Laboratory and G. H. Gillespie Associates, Inc.

  5. Simulated pre-industrial climate in Bergen Climate Model (version 2): model description and large-scale circulation features

    Directory of Open Access Journals (Sweden)

    O. H. Otterå

    2009-11-01

    Full Text Available The Bergen Climate Model (BCM) is a fully-coupled atmosphere-ocean-sea-ice model that provides state-of-the-art computer simulations of the Earth's past, present, and future climate. Here, a pre-industrial multi-century simulation with an updated version of BCM is described and compared to observational data. The model is run without any form of flux adjustments and is stable for several centuries. The simulated climate reproduces the general large-scale circulation in the atmosphere reasonably well, except for a positive bias in the high latitude sea level pressure distribution. Also, by introducing an updated turbulence scheme in the atmosphere model, a persistent cold bias has been eliminated. For the ocean part, the model drifts in sea surface temperatures and salinities are considerably reduced compared to earlier versions of BCM. Improved conservation properties in the ocean model have contributed to this. Furthermore, by choosing a reference pressure at 2000 m and including thermobaric effects in the ocean model, a more realistic meridional overturning circulation is simulated in the Atlantic Ocean. The simulated sea-ice extent in the Northern Hemisphere is in general agreement with observational data except for summer, where the extent is somewhat underestimated. In the Southern Hemisphere, large negative biases are found in the simulated sea-ice extent. This is partly related to problems with the mixed layer parametrization, causing the mixed layer in the Southern Ocean to be too deep, which in turn makes it hard to maintain a realistic sea-ice cover here. However, despite some problematic issues, the pre-industrial control simulation presented here should still be appropriate for climate change studies requiring multi-century simulations.

  6. VALIDATION OF THE ASTER GLOBAL DIGITAL ELEVATION MODEL VERSION 2 OVER THE CONTERMINOUS UNITED STATES

    Directory of Open Access Journals (Sweden)

    D. Gesch

    2012-07-01

    Full Text Available The ASTER Global Digital Elevation Model Version 2 (GDEM v2) was evaluated over the conterminous United States in a manner similar to the validation conducted for the original GDEM Version 1 (v1) in 2009. The absolute vertical accuracy of GDEM v2 was calculated by comparison with more than 18,000 independent reference geodetic ground control points from the National Geodetic Survey. The root mean square error (RMSE) measured for GDEM v2 is 8.68 meters. This compares with the RMSE of 9.34 meters for GDEM v1. Another important descriptor of vertical accuracy is the mean error, or bias, which indicates if a DEM has an overall vertical offset from true ground level. The GDEM v2 mean error of –0.20 meters is a significant improvement over the GDEM v1 mean error of –3.69 meters. The absolute vertical accuracy assessment results, both mean error and RMSE, were segmented by land cover to examine the effects of cover types on measured errors. The GDEM v2 mean errors by land cover class verify that the presence of aboveground features (tree canopies and built structures) cause a positive elevation bias, as would be expected for an imaging system like ASTER. In open ground classes (little or no vegetation with significant aboveground height), GDEM v2 exhibits a negative bias on the order of 1 meter. GDEM v2 was also evaluated by differencing with the Shuttle Radar Topography Mission (SRTM) dataset. In many forested areas, GDEM v2 has elevations that are higher in the canopy than SRTM.

  7. Validation of the ASTER Global Digital Elevation Model Version 2 over the conterminous United States

    Science.gov (United States)

    Gesch, Dean B.; Oimoen, Michael J.; Zhang, Zheng; Meyer, David J.; Danielson, Jeffrey J.

    2012-01-01

    The ASTER Global Digital Elevation Model Version 2 (GDEM v2) was evaluated over the conterminous United States in a manner similar to the validation conducted for the original GDEM Version 1 (v1) in 2009. The absolute vertical accuracy of GDEM v2 was calculated by comparison with more than 18,000 independent reference geodetic ground control points from the National Geodetic Survey. The root mean square error (RMSE) measured for GDEM v2 is 8.68 meters. This compares with the RMSE of 9.34 meters for GDEM v1. Another important descriptor of vertical accuracy is the mean error, or bias, which indicates if a DEM has an overall vertical offset from true ground level. The GDEM v2 mean error of -0.20 meters is a significant improvement over the GDEM v1 mean error of -3.69 meters. The absolute vertical accuracy assessment results, both mean error and RMSE, were segmented by land cover to examine the effects of cover types on measured errors. The GDEM v2 mean errors by land cover class verify that the presence of aboveground features (tree canopies and built structures) cause a positive elevation bias, as would be expected for an imaging system like ASTER. In open ground classes (little or no vegetation with significant aboveground height), GDEM v2 exhibits a negative bias on the order of 1 meter. GDEM v2 was also evaluated by differencing with the Shuttle Radar Topography Mission (SRTM) dataset. In many forested areas, GDEM v2 has elevations that are higher in the canopy than SRTM.
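
    The two accuracy measures reported above can be restated in code. This is a generic sketch of the arithmetic, not the authors' software: vertical error is DEM elevation minus reference control-point elevation, the mean error is the bias, and the RMSE is the root mean square of the errors.

```python
import math

def vertical_accuracy(dem_elevations, control_elevations):
    """Return (mean error, RMSE) of a DEM against reference control points."""
    errors = [d - c for d, c in zip(dem_elevations, control_elevations)]
    mean_error = sum(errors) / len(errors)   # bias: overall offset from true ground
    rmse = math.sqrt(sum(e * e for e in errors) / len(errors))
    return mean_error, rmse
```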

  8. A generic method for automatic translation between input models for different versions of simulation codes

    International Nuclear Information System (INIS)

    Serfontein, Dawid E.; Mulder, Eben J.; Reitsma, Frederik

    2014-01-01

    A computer code was developed for the semi-automatic translation of input models for the VSOP-A diffusion neutronics simulation code to the format of the newer VSOP 99/05 code. In this paper, this algorithm is presented as a generic method for producing codes for the automatic translation of input models from the format of one code version to another, or even to that of a completely different code. Normally, such translations are done manually. However, input model files, such as those for the VSOP codes, are often very large and may consist of many thousands of numeric entries that make no particular sense to the human eye. The task of, for instance, nuclear regulators to verify the accuracy of such translated files can therefore be very difficult and cumbersome. This may allow translation errors to go unnoticed, which may have disastrous consequences later on when a reactor with such a faulty design is built. A generic algorithm for producing such automatic translation codes may therefore ease the translation and verification process to a great extent. It will also remove human error from the process, which may significantly enhance the accuracy and reliability of the process. The developed algorithm also automatically creates a verification log file which permanently records the name and value of each variable used, as well as the list of meanings of all the possible values. This should greatly facilitate reactor licensing applications.
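
    A minimal sketch of the general idea, under the assumption that an input model can be treated as named numeric entries (the actual VSOP formats are far more complex): translate variable names via a mapping and write a verification log recording each name, value, and documented meaning. All names below are invented for illustration.

```python
def translate_model(old_model, name_map, meanings, log_path):
    """Translate a key-value input model and write a verification log."""
    new_model = {}
    with open(log_path, "w") as log:
        for old_name, value in old_model.items():
            new_name = name_map[old_name]        # raises KeyError on unknown names
            new_model[new_name] = value
            # permanent record: old name, new name, value, documented meaning
            log.write(f"{old_name} -> {new_name} = {value}: "
                      f"{meanings.get(new_name, '?')}\n")
    return new_model
```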

  9. A generic method for automatic translation between input models for different versions of simulation codes

    Energy Technology Data Exchange (ETDEWEB)

    Serfontein, Dawid E., E-mail: Dawid.Serfontein@nwu.ac.za [School of Mechanical and Nuclear Engineering, North West University (PUK-Campus), PRIVATE BAG X6001 (Internal Post Box 360), Potchefstroom 2520 (South Africa); Mulder, Eben J. [School of Mechanical and Nuclear Engineering, North West University (South Africa); Reitsma, Frederik [Calvera Consultants (South Africa)

    2014-05-01

    A computer code was developed for the semi-automatic translation of input models for the VSOP-A diffusion neutronics simulation code to the format of the newer VSOP 99/05 code. In this paper, this algorithm is presented as a generic method for producing codes for the automatic translation of input models from the format of one code version to another, or even to that of a completely different code. Normally, such translations are done manually. However, input model files, such as those for the VSOP codes, are often very large and may consist of many thousands of numeric entries that make no particular sense to the human eye. The task of, for instance, nuclear regulators to verify the accuracy of such translated files can therefore be very difficult and cumbersome. This may allow translation errors to go unnoticed, which may have disastrous consequences later on when a reactor with such a faulty design is built. A generic algorithm for producing such automatic translation codes may therefore ease the translation and verification process to a great extent. It will also remove human error from the process, which may significantly enhance the accuracy and reliability of the process. The developed algorithm also automatically creates a verification log file which permanently records the name and value of each variable used, as well as the list of meanings of all the possible values. This should greatly facilitate reactor licensing applications.

  10. RALOC Mod 1/81: Program description of the RALOC version extended by the structural heat model HECU

    International Nuclear Information System (INIS)

    Pham, V.T.

    1984-01-01

    In the version RALOC-Mod 1/81 an expanded heat transfer model and structural heat model is included. This feature allows for a realistic simulation of the thermodynamic and fluid-dynamic characteristics of the containment atmosphere. Steel and concrete substructures with plane or rotational symmetry can be represented. The heat transfer calculations for the structures are problem-oriented, taking into account the time and space dependencies. The influence of the heat transfer on the gas transport (in particular convection) in the reactor vessel is demonstrated by the numerical calculations. In contrast to calculations without a simulation of the heat storage effects of the containment structures, which show a largely homogeneous hydrogen distribution, the results on the basis of the HECU model give an inhomogeneous distribution during the first 8 to 12 days. However, these results are only examples of the application of the RALOC-Mod 1/81 code and are not intended to contribute to the discussion of hydrogen distributions in a PWR-type reactor. (orig./GL) [de

  11. Episodic payments (bundling): PART I.

    Science.gov (United States)

    Jacofsky, D J

    2017-10-01

    Episodic, or bundled, payment is a concept now familiar to most in the healthcare arena, but the models are often misunderstood. Under a traditional fee-for-service model, each provider bills separately for their services, which creates financial incentives to maximise volumes. Under a bundled payment, a single entity, often referred to as a convener (perhaps the hospital, the physician group, or a third party), assumes the risk through a payer contract for all services provided within a defined episode of care, and receives a single (bundled) payment for all services provided for that episode. The time frame around the intervention is variable, but defined in advance, as are included and excluded costs. Timing of the actual payment in a bundle may either be before the episode occurs (prospective payment model), or after the end of the episode through a reconciliation (retrospective payment model). In either case, the defined costs over the defined time frame are borne by the convener. Cite this article: Bone Joint J 2017;99-B:1280-5. ©2017 The British Editorial Society of Bone & Joint Surgery.
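
    The retrospective reconciliation described above amounts to simple arithmetic. The sketch below is illustrative only (service names and figures are invented): included episode costs are summed and compared with the agreed bundle price, and the convener keeps a surplus or repays a deficit.

```python
def reconcile_episode(bundle_price, claims, excluded=()):
    """Retrospective reconciliation: bundle price minus actual included costs.

    claims: iterable of (service_name, cost) pairs billed during the episode.
    Returns a positive surplus kept by the convener, or a negative amount owed.
    """
    included = sum(cost for service, cost in claims if service not in excluded)
    return bundle_price - included
```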

  12. Structure function of holographic quark-gluon plasma: Sakai-Sugimoto model versus its noncritical version

    International Nuclear Information System (INIS)

    Bu Yanyan; Yang Jinmin

    2011-01-01

    Motivated by recent studies of deep inelastic scattering off the N=4 super-Yang-Mills (SYM) plasma, holographically dual to an AdS₅ × S⁵ black hole, we use the spacelike flavor current to probe the internal structure of a holographic quark-gluon plasma described by the Sakai-Sugimoto model in the high-temperature (i.e., chiral-symmetric) phase. The plasma structure function is extracted from the retarded flavor current-current correlator. Our main aim in this paper is to explore the effect of nonconformality on these physical quantities. As usual, our study is carried out under the supergravity approximation and in the limit of a large number of colors. Although the Sakai-Sugimoto model is nonconformal, which makes the calculations more involved than in the well-studied N=4 SYM case, the result seems to indicate that the nonconformality has little essential effect on the physical picture of the internal structure of the holographic plasma, which is consistent with the intuition from the asymptotic freedom of QCD at high energy. While the physical picture underlying our investigation is the same as for deep inelastic scattering off the N=4 SYM plasma with(out) flavor, the plasma structure functions are quantitatively different, especially in their scaling dependence on the temperature, which can be recognized as model dependent. As a comparison, we also carry out the same analysis for the noncritical version of the Sakai-Sugimoto model, which is conformal in the sense that it has a constant dilaton vacuum. The result for this noncritical model is quite similar to that for the conformal N=4 SYM plasma. We therefore attribute the above difference to the effect of the nonconformality of the Sakai-Sugimoto model.

  13. The Extrapolar SWIFT model (version 1.0): fast stratospheric ozone chemistry for global climate models

    Science.gov (United States)

    Kreyling, Daniel; Wohltmann, Ingo; Lehmann, Ralph; Rex, Markus

    2018-03-01

    The Extrapolar SWIFT model is a fast ozone chemistry scheme for interactive calculation of the extrapolar stratospheric ozone layer in coupled general circulation models (GCMs). In contrast to the widely used prescribed ozone, the SWIFT ozone layer interacts with the model dynamics and can respond to atmospheric variability or climatological trends. The Extrapolar SWIFT model employs a repro-modelling approach, in which algebraic functions are used to approximate the numerical output of a full stratospheric chemistry and transport model (ATLAS). The full model solves a coupled chemical differential equation system with 55 initial and boundary conditions (mixing ratio of various chemical species and atmospheric parameters). Hence the rate of change of ozone over 24 h is a function of 55 variables. Using covariances between these variables, we can find linear combinations in order to reduce the parameter space to the following nine basic variables: latitude, pressure altitude, temperature, overhead ozone column and the mixing ratio of ozone and of the ozone-depleting families (Cly, Bry, NOy and HOy). We will show that these nine variables are sufficient to characterize the rate of change of ozone. An automated procedure fits a polynomial function of fourth degree to the rate of change of ozone obtained from several simulations with the ATLAS model. One polynomial function is determined per month, which yields the rate of change of ozone over 24 h. A key aspect for the robustness of the Extrapolar SWIFT model is to include a wide range of stratospheric variability in the numerical output of the ATLAS model, also covering atmospheric states that will occur in a future climate (e.g. temperature and meridional circulation changes or reduction of stratospheric chlorine loading). For validation purposes, the Extrapolar SWIFT model has been integrated into the ATLAS model, replacing the full stratospheric chemistry scheme. 
Simulations with SWIFT in ATLAS have proven that the
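
    The repro-modelling idea above (fit an inexpensive polynomial to the tendencies produced by the full model, then evaluate the polynomial instead) can be illustrated in one variable; SWIFT itself fits fourth-degree polynomials in nine variables, one per month. Everything below is a toy stand-in, not the SWIFT code:

```python
import numpy as np

def fit_repro_model(x, dy):
    """Fit a degree-4 polynomial surrogate to expensive-model output dy(x)."""
    return np.polynomial.polynomial.Polynomial.fit(x, dy, deg=4)

rng = np.random.default_rng(0)
x = rng.uniform(-1, 1, 200)        # stand-in for a normalized predictor variable
dy = 0.5 * x**3 - x + 0.1          # stand-in for the full model's 24 h ozone tendency
fast = fit_repro_model(x, dy)      # cheap surrogate, evaluated as fast(x_new)
```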

  14. Ariadne version 4 - a program for simulation of QCD cascades implementing the colour dipole model

    International Nuclear Information System (INIS)

    Loennblad, L.

    1992-01-01

    The fourth version of the Ariadne program for generating QCD cascades in the colour dipole approximation is presented. The underlying physics issues are discussed and a manual for using the program is given together with a few sample programs. The major changes from previous versions are the introduction of photon radiation from quarks and inclusion of interfaces to the LEPTO and PYTHIA programs. (orig.)

  15. Financial incentive does not affect P300 (in response to certain episodic and semantic probe stimuli) in the Complex Trial Protocol (CTP) version of the Concealed Information Test (CIT) in detection of malingering.

    Science.gov (United States)

    Rosenfeld, J Peter; Labkovsky, Elena; Davydova, Elena; Ward, Anne; Rosenfeld, Lauren

    2017-05-01

    Previous research indicated that the skin conductance response of the autonomic nervous system in the Concealed Information Test (CIT) is typically increased in subjects who are financially and otherwise incentivized to defeat the CIT (the paradoxical "motivational impairment" effect). This is not the case for RT-based CITs, nor P300 tests based on the three-stimulus protocol for detection of cognitive malingering (although these are not the same as CITs). The present report is the first attempt to study the effect of financial motivation on the P300-based Complex Trial Protocol using both episodic and semantic memory probe and irrelevant stimuli. The Test of Memory Malingering (TOMM) was used to validate behavioral differences between the two groups we created by offering one (paid) group but not another (unpaid) group a financial reward for beating our tests. Group behavioral differences on the TOMM did confirm group manipulations. Probe-minus-irrelevant P300 differences did not differ between groups, although as previously, semantic memory-evoked P300s were larger than episodic memory-evoked P300s. © 2017 Society for Psychophysiological Research.

  16. Energy Integration for 2050 - A Strategic Impact Model (2050 SIM), Version 1.0

    Energy Technology Data Exchange (ETDEWEB)

    2010-10-01

    The United States (U.S.) energy infrastructure is among the most reliable, accessible, and economic in the world. On the other hand, it is also excessively reliant on foreign energy sources, experiences high volatility in energy prices, does not always practice good stewardship of finite indigenous energy resources, and emits significant quantities of greenhouse gas. The U.S. Department of Energy is conducting research and development on advanced nuclear reactor concepts and technologies, including High Temperature Gas Reactor (HTGR) technologies, directed at helping the United States meet its current and future energy challenges. This report discusses the Draft Strategic Impact Model (SIM), an initial version of which was created during the later part of FY-2010. SIM was developed to analyze and depict the benefits of various energy sources in meeting the energy demand and to provide an overall system understanding of the tradeoffs between building and using HTGRs versus other existing technologies for providing energy (heat and electricity) to various energy-use sectors in the United States. This report also provides the assumptions used in the model, the rationale for the methodology, and the references for the source documentation and source data used in developing the SIM.

  17. Energy Integration for 2050 - A Strategic Impact Model (2050 SIM), Version 2.0

    Energy Technology Data Exchange (ETDEWEB)

    John Collins

    2011-09-01

    The United States (U.S.) energy infrastructure is among the most reliable, accessible, and economic in the world. On the other hand, it is also excessively reliant on foreign energy sources, experiences high volatility in energy prices, does not always practice good stewardship of finite indigenous energy resources, and emits significant quantities of greenhouse gas. The U.S. Department of Energy is conducting research and development on advanced nuclear reactor concepts and technologies, including High Temperature Gas Reactor (HTGR) technologies, directed at helping the United States meet its current and future energy challenges. This report discusses the Draft Strategic Impact Model (SIM), an initial version of which was created during the latter part of FY-2010. SIM was developed to analyze and depict the benefits of various energy sources in meeting the energy demand and to provide an overall system understanding of the tradeoffs between building and using HTGRs versus other existing technologies for providing energy (heat and electricity) to various energy-use sectors in the United States. This report also provides the assumptions used in the model, the rationale for the methodology, and the references for the source documentation and source data used in developing the SIM.

  18. Planar version of the CPT-even gauge sector of the standard model extension

    International Nuclear Information System (INIS)

    Ferreira Junior, Manoel M.; Casana, Rodolfo; Gomes, Adalto Rodrigues; Carvalho, Eduardo S.

    2011-01-01

    The CPT-even abelian gauge sector of the Standard Model Extension is represented by the Maxwell term supplemented by (K_F)_{μνρσ} F^{μν} F^{ρσ}, where the Lorentz-violating background tensor, (K_F)_{μνρσ}, possesses the symmetries of the Riemann tensor and a double null trace, which leaves nineteen independent components. Of these, ten components yield birefringence while nine are nonbirefringent. In the present work, we examine the planar version of this theory, obtained by means of a typical dimensional reduction procedure to (1 + 2) dimensions. We obtain a kind of planar scalar electrodynamics, which is composed of a gauge sector containing six Lorentz-violating coefficients, a scalar field endowed with a noncanonical kinetic term, and a coupling term that links the scalar and gauge sectors. The dispersion relation is exactly determined, revealing that the six parameters related to the pure electromagnetic sector do not yield birefringence at any order. In this model, birefringence may appear only as a second-order effect associated with the coupling tensor linking the gauge and scalar sectors. The equations of motion are written and solved in the stationary regime. The Lorentz-violating parameters do not alter the asymptotic behavior of the fields but induce an angular dependence not observed in the Maxwell planar theory. The energy-momentum tensor was evaluated as well, revealing that the theory presents energy stability. (author)

  19. A multi-sectoral version of the Post-Keynesian growth model

    Directory of Open Access Journals (Sweden)

    Ricardo Azevedo Araujo

    2015-03-01

    Full Text Available Abstract With this inquiry, we seek to develop a disaggregated version of the post-Keynesian approach to economic growth, by showing that it can indeed be treated as a particular case of the Pasinettian model of structural change and economic expansion. By relying upon vertical integration it becomes possible to carry out the analysis initiated by Kaldor (1956) and Robinson (1956, 1962), and followed by Dutt (1984), Rowthorn (1982) and later Bhaduri and Marglin (1990), in a multi-sectoral model in which demand and productivity increase at different paces in each sector. By adopting this approach it is possible to show that structural economic dynamics is conditioned not only by patterns of evolving demand and the diffusion of technological progress but also by the distributive features of the economy, which can give rise to different regimes of economic growth. Besides, we find it possible to determine the natural rate of profit that keeps the mark-up rate constant over time.

  20. Systems Security Engineering Capability Maturity Model (SSECMM), Model Description, Version 1.1

    National Research Council Canada - National Science Library

    1997-01-01

    This document is designed to acquaint the reader with the SSE-CMM Project as a whole and present the project's major work product - the Systems Security Engineering Capability Maturity Model (SSE-CMM...

  1. Multi-center MRI prediction models: Predicting sex and illness course in first episode psychosis patients

    OpenAIRE

    Nieuwenhuis, Mireille; Schnack, Hugo G; van Haren, Neeltje E; Lappin, Julia; Morgan, Craig; Reinders, Antje A; Gutierrez-Tordesillas, Diana; Roiz-Santiañez, Roberto; Schaufelberger, Maristela S; Rosa, Pedro G; Zanetti, Marcus V; Busatto, Geraldo F; Crespo-Facorro, Benedicto; McGorry, Patrick D; Velakoulis, Dennis

    2017-01-01

    Structural Magnetic Resonance Imaging (MRI) studies have attempted to use brain measures obtained at the first-episode of psychosis to predict subsequent outcome, with inconsistent results. Thus, there is a real need to validate the utility of brain measures in the prediction of outcome using large datasets, from independent samples, obtained with different protocols and from different MRI scanners. This study had three main aims: 1) to investigate whether structural MRI data from multiple ce...

  2. Hydrogeochemical evaluation for Simpevarp model version 1.2. Preliminary site description of the Simpevarp area

    Energy Technology Data Exchange (ETDEWEB)

    Laaksoharju, Marcus (ed.) [Geopoint AB, Stockholm (Sweden)

    2004-12-01

    Siting studies for SKB's programme of deep geological disposal of nuclear fuel waste currently involves the investigation of two locations, Simpevarp and Forsmark, to determine their geological, hydrogeochemical and hydrogeological characteristics. Present work completed has resulted in Model version 1.2 which represents the second evaluation of the available Simpevarp groundwater analytical data collected up to April, 2004. The deepest fracture groundwater samples with sufficient analytical data reflected depths down to 1.7 km. Model version 1.2 focusses on geochemical and mixing processes affecting the groundwater composition in the uppermost part of the bedrock, down to repository levels, and eventually extending to 1000 m depth. The groundwater flow regimes at Laxemar/Simpevarp are considered local and extend down to depths of around 600-1000 m depending on local topography. The marked differences in the groundwater flow regimes between Laxemar and Simpevarp are reflected in the groundwater chemistry where four major hydrochemical groups of groundwaters (types A-D) have been identified: TYPE A: This type comprises dilute groundwaters (< 1000 mg/L Cl; 0.5-2.0 g/L TDS) of Na-HCO{sub 3} type present at shallow (<200 m) depths at Simpevarp, but at greater depths (0-900 m) at Laxemar. At both localities the groundwaters are marginally oxidising close to the surface, but otherwise reducing. Main reactions involve weathering, ion exchange (Ca, Mg), surface complexation, and dissolution of calcite. Redox reactions include precipitation of Fe-oxyhydroxides and some microbially mediated reactions (SRB). Meteoric recharge water is mainly present at Laxemar whilst at Simpevarp potential mixing of recharge meteoric water and a modern sea component is observed. Localised mixing of meteoric water with deeper saline groundwaters is indicated at both Laxemar and Simpevarp. TYPE B: This type comprises brackish groundwaters (1000-6000 mg/L Cl; 5-10 g/L TDS) present at

  3. Investigating the episodic buffer

    Directory of Open Access Journals (Sweden)

    Alan Baddeley

    2010-10-01

    Full Text Available A brief account is presented of the three-component working memory model proposed by Baddeley and Hitch. This is followed by an account of some of the problems it encountered in explaining how information from different subsystems with different codes could be combined, and how it was capable of communicating with long-term memory. In order to account for these, a fourth component was proposed, the episodic buffer. This was assumed to be a multidimensional store of limited capacity that can be accessed through conscious awareness. In an attempt to test and develop the concept, a series of experiments have explored the role of working memory in the binding of visual features into objects and verbal sequences into remembered sentences. The experiments use a dual task paradigm to investigate the role of the various subcomponents of working memory in binding. In contrast to our initial assumption, the episodic buffer appears to be a passive store, capable of storing bound features and making them available to conscious awareness, but not itself responsible for the process of binding.

  4. A NetCDF version of the two-dimensional energy balance model based on the full multigrid algorithm

    Science.gov (United States)

    Zhuang, Kelin; North, Gerald R.; Stevens, Mark J.

    A NetCDF version of the two-dimensional energy balance model based on the full multigrid method in Fortran is introduced for both pedagogical and research purposes. Based on the land-sea-ice distribution, orbital elements, greenhouse gases concentration, and albedo, the code calculates the global seasonal surface temperature. A step-by-step guide with examples is provided for practice.

  5. RAMS Model for Terrestrial Pathways Version 3. 0 (for microcomputers). Model-Simulation

    Energy Technology Data Exchange (ETDEWEB)

    Niebla, E.

    1989-01-01

    The RAMS Model for Terrestrial Pathways is a computer program for calculation of numeric criteria for land application and for distribution and marketing of sludges under the sewage-sludge regulations at 40 CFR Part 503. The risk-assessment models covered assume that municipal sludge with specified characteristics is spread across a defined area of ground at a known rate once each year for a given number of years. Risks associated with direct land application of sludge and with sludge applied after distribution and marketing are both calculated. The computer program calculates the maximum annual loading of contaminants that can be land applied while still meeting the risk criteria specified as input. Software Description: The program is written in the Turbo/Basic programming language for implementation on IBM PC/AT or compatible machines using the DOS 3.0 or higher operating system. Minimum core storage is 512K.

  6. The Community WRF-Hydro Modeling System Version 4 Updates: Merging Toward Capabilities of the National Water Model

    Science.gov (United States)

    McAllister, M.; Gochis, D.; Dugger, A. L.; Karsten, L. R.; McCreight, J. L.; Pan, L.; Rafieeinasab, A.; Read, L. K.; Sampson, K. M.; Yu, W.

    2017-12-01

    The community WRF-Hydro modeling system is publicly available and provides researchers and operational forecasters a flexible and extensible capability for multi-scale, multi-physics hydrologic modeling that can be run independently of, or fully interactively with, the WRF atmospheric model. The core WRF-Hydro physics model contains very high-resolution descriptions of terrestrial hydrologic process representations such as land-atmosphere exchanges of energy and moisture, snowpack evolution, infiltration, terrain routing, channel routing, basic reservoir representation and hydrologic data assimilation. Complementing the core physics components of WRF-Hydro is an ecosystem of pre- and post-processing tools that facilitate the preparation of terrain and meteorological input data, an open-source hydrologic model evaluation toolset (Rwrfhydro), hydrologic data assimilation capabilities with DART and advanced model visualization capabilities. The National Center for Atmospheric Research (NCAR), through collaborative support from the National Science Foundation and other funding partners, provides community support for the entire WRF-Hydro system through a variety of mechanisms. This presentation summarizes the enhanced user support capabilities that are being developed for the community WRF-Hydro modeling system. These products and services include a new website, open-source code repositories, documentation and user guides, test cases, online training materials, live hands-on training sessions, an email listserv, and individual user support via email through a new help desk ticketing system. The WRF-Hydro modeling system and supporting tools, which now include re-gridding scripts and model calibration, have recently been updated to Version 4 and are merging toward capabilities of the National Water Model.

  7. ASTER Global Digital Elevation Model Version 2 - summary of validation results

    Science.gov (United States)

    Tachikawa, Tetushi; Kaku, Manabu; Iwasaki, Akira; Gesch, Dean B.; Oimoen, Michael J.; Zhang, Z.; Danielson, Jeffrey J.; Krieger, Tabatha; Curtis, Bill; Haase, Jeff; Abrams, Michael; Carabajal, C.; Meyer, Dave

    2011-01-01

    On June 29, 2009, NASA and the Ministry of Economy, Trade and Industry (METI) of Japan released a Global Digital Elevation Model (GDEM) to users worldwide at no charge as a contribution to the Global Earth Observing System of Systems (GEOSS). This “version 1” ASTER GDEM (GDEM1) was compiled from over 1.2 million scene-based DEMs covering land surfaces between 83°N and 83°S latitudes. A joint U.S.-Japan validation team assessed the accuracy of the GDEM1, augmented by a team of 20 cooperators. The GDEM1 was found to have an overall accuracy of around 20 meters at the 95% confidence level. The team also noted several artifacts associated with poor stereo coverage at high latitudes, cloud contamination, water masking issues and the stacking process used to produce the GDEM1 from individual scene-based DEMs (ASTER GDEM Validation Team, 2009). Two independent horizontal resolution studies estimated the effective spatial resolution of the GDEM1 to be on the order of 120 meters.

  8. Modeling the Potential Economic Impact of the Medicare Comprehensive Care for Joint Replacement Episode-Based Payment Model.

    Science.gov (United States)

    Maniya, Omar Z; Mather, Richard C; Attarian, David E; Mistry, Bipin; Chopra, Aneesh; Strickland, Matt; Schulman, Kevin A

    2017-11-01

    The Medicare program has initiated Comprehensive Care for Joint Replacement (CJR), a bundled payment mandate for lower extremity joint replacements. We sought to determine the degree to which hospitals will invest in care redesign in response to CJR, and to project its economic impacts. We defined 4 potential hospital management strategies to address CJR: no action, light care management, heavy care management, and heavy care management with contracting. For each of 798 hospitals included in CJR, we used hospital-specific volume, cost, and quality data to determine the hospital's economically dominant strategy. We aggregated data to assess the percentage of hospitals pursuing each strategy; savings to the health care system; and costs and percentages of CJR-derived revenues gained or lost for Medicare, hospitals, and postacute care facilities. In the model, 83.1% of hospitals (range 55.0%-100.0%) were expected to take no action in response to CJR, and 16.1% of hospitals (range 0.0%-45.0%) were expected to pursue heavy care management with contracting. Overall, CJR is projected to reduce health care expenditures by 0.5% (range 0.0%-4.1%) or $14 million (range $0-$119 million). Medicare is expected to save 2.2% (range 2.2%-2.2%), hospitals are projected to lose 3.7% (range 4.7% loss to 3.8% gain), and postacute care facilities are expected to lose 6.5% (range 0.0%-12.8%). Hospital administrative costs are projected to increase by $63 million (range $0-$148 million). CJR is projected to have a negligible impact on total health care expenditures for lower extremity joint replacements. Further research will be required to assess the actual care management strategies adopted by CJR hospitals. Copyright © 2017 Elsevier Inc. All rights reserved.
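The economic mechanics behind the projections above can be illustrated with a toy reconciliation calculation. This is a hedged sketch, not the authors' model: the target price, per-episode costs, and episode count below are hypothetical, and real CJR reconciliation also applies discount factors, quality adjustments, and stop-loss limits that are omitted here.

```python
# Toy bundled-payment reconciliation of the kind CJR uses. All numbers
# below (target price, per-episode costs, episode count) are hypothetical
# illustrations, not actual CJR parameters.

def reconciliation_amount(episode_costs, target_price):
    """Difference between the episode spending target and actual spending.

    Positive -> Medicare pays the hospital a reconciliation payment;
    negative -> the hospital owes Medicare a repayment.
    """
    actual_spending = sum(episode_costs)
    target_spending = target_price * len(episode_costs)
    return target_spending - actual_spending

# A hypothetical hospital with 100 episodes averaging $24,500 against a
# $25,000 target price keeps the $50,000 difference as savings.
savings = reconciliation_amount([24_500] * 100, 25_000)
```

Under this mechanism, a hospital's choice among the four management strategies hinges on whether expected reconciliation gains exceed the administrative cost of care redesign.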

  9. Atmospheric radionuclide transport model with radon postprocessor and SBG module. Model description version 2.8.0; ARTM. Atmosphaerisches Radionuklid-Transport-Modell mit Radon Postprozessor und SBG-Modul. Modellbeschreibung zu Version 2.8.0

    Energy Technology Data Exchange (ETDEWEB)

    Richter, Cornelia; Sogalla, Martin; Thielen, Harald; Martens, Reinhard

    2015-04-20

    The study on the atmospheric radionuclide transport model with radon postprocessor and SBG module (model description version 2.8.0) covers the following issues: determination of emissions, radioactive decay, atmospheric dispersion calculation for radioactive gases, atmospheric dispersion calculation for radioactive dusts, determination of the gamma cloud radiation (gamma submersion), terrain roughness, effective source height, calculation area and model points, geographic reference systems and coordinate transformations, meteorological data, use of invalid meteorological data sets, consideration of statistical uncertainties, consideration of housings, consideration of bumpiness, consideration of terrain roughness, use of frequency distributions of the hourly dispersion situation, consideration of the vegetation period (summer), the radon post processor radon.exe, the SBG module, modeling of wind fields, shading settings.

  10. Uniform California earthquake rupture forecast, version 3 (UCERF3): the time-independent model

    Science.gov (United States)

    Field, Edward H.; Biasi, Glenn P.; Bird, Peter; Dawson, Timothy E.; Felzer, Karen R.; Jackson, David D.; Johnson, Kaj M.; Jordan, Thomas H.; Madden, Christopher; Michael, Andrew J.; Milner, Kevin R.; Page, Morgan T.; Parsons, Thomas; Powers, Peter M.; Shaw, Bruce E.; Thatcher, Wayne R.; Weldon, Ray J.; Zeng, Yuehua; ,

    2013-01-01

    In this report we present the time-independent component of the Uniform California Earthquake Rupture Forecast, Version 3 (UCERF3), which provides authoritative estimates of the magnitude, location, and time-averaged frequency of potentially damaging earthquakes in California. The primary achievements have been to relax fault segmentation assumptions and to include multifault ruptures, both limitations of the previous model (UCERF2). The rates of all earthquakes are solved for simultaneously, and from a broader range of data, using a system-level "grand inversion" that is both conceptually simple and extensible. The inverse problem is large and underdetermined, so a range of models is sampled using an efficient simulated annealing algorithm. The approach is more derivative than prescriptive (for example, magnitude-frequency distributions are no longer assumed), so new analysis tools were developed for exploring solutions. Epistemic uncertainties were also accounted for using 1,440 alternative logic tree branches, necessitating access to supercomputers. The most influential uncertainties include alternative deformation models (fault slip rates), a new smoothed seismicity algorithm, alternative values for the total rate of M≥5 events, and different scaling relationships, virtually all of which are new. As a notable first, three deformation models are based on kinematically consistent inversions of geodetic and geologic data, also providing slip-rate constraints on faults previously excluded because of lack of geologic data. The grand inversion constitutes a system-level framework for testing hypotheses and balancing the influence of different experts. For example, we demonstrate serious challenges with the Gutenberg-Richter hypothesis for individual faults. UCERF3 is still an approximation of the system, however, and the range of models is limited (for example, constrained to stay close to UCERF2). Nevertheless, UCERF3 removes the apparent UCERF2 overprediction of

  11. Hydrogeochemical evaluation of the Simpevarp area, model version 1.1

    Energy Technology Data Exchange (ETDEWEB)

    Laaksoharju, Marcus (ed.) [Geopoint AB, Stockholm (Sweden); Smellie, John [Conterra AB, Uppsala (Sweden); Gimeno, Maria; Auque, Luis; Gomez, Javier [Univ. of Zaragoza (Spain). Dept. of Earth Sciences; Tullborg, Eva-Lena [Terralogica AB, Graabo (Sweden); Gurban, Ioana [3D-Terra (Sweden)

    2004-02-01

    Siting studies for SKB's programme of deep geological disposal of nuclear fuel waste currently involve the investigation of two locations, Simpevarp and Forsmark, on the eastern coast of Sweden to determine their geological, hydrogeochemical and hydrogeological characteristics. Work completed to date has resulted in model version 1.1, which represents the first evaluation of the available Simpevarp groundwater analytical data collected up to July 1st, 2003 (i.e. the first 'data freeze' of the site). The HAG (Hydrochemical Analytical Group) group had access to a total of 535 water samples collected from the surface and sub-surface environment (e.g. soil pipes in the overburden, streams and lakes); only a few samples were collected from drilled boreholes. The deepest fracture groundwater samples with sufficient analytical data reflected depths down to 250 m. Furthermore, most of the waters sampled (79%) lacked crucial analytical information, which restricted the evaluation. Consequently, model version 1.1 focussed on the processes taking place in the uppermost part of the bedrock rather than at repository levels. The complex groundwater evolution and patterns at Simpevarp are a result of many factors such as: a) the flat topography and proximity to the Baltic Sea, b) changes in hydrogeology related to glaciation/deglaciation and land uplift, c) repeated marine/lake water regressions/transgressions, and d) organic or inorganic alteration of the groundwater composition caused by microbial processes or water/rock interactions. The sampled groundwaters reflect, to varying degrees, modern or ancient water/rock interactions and mixing processes. Higher topography to the west of Simpevarp has resulted in hydraulic gradients which have partially flushed out old water types. Except for sea waters, most surface waters and some groundwaters from percussion boreholes are fresh, non-saline waters according to the classification used for Aespoe groundwaters. The rest

  12. User's guide to the MESOI diffusion model: Version 1.1 (for Data General Eclipse S/230 with AFOS)

    International Nuclear Information System (INIS)

    Athey, G.F.; Ramsdell, J.V.

    1982-09-01

    MESOI is an interactive, Lagrangian puff trajectory model. The model theory is documented separately (Ramsdell and Athey, 1981). Version 1.1 is a modified form of the original version 1.0. It is designed to run on a Data General Eclipse computer. The model has improved support features which make it useful as an emergency response tool. This report is intended to provide the user with the information necessary to successfully conduct model simulations using MESOI Version 1.1 and to use the support programs STAPREP and EXPLT. The user is also provided information on the use of the data file maintenance and review program UPDATE. Examples are given for the operation of the programs. Test data sets are described which allow the user to practice with the programs and to confirm proper implementation and execution.
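As a rough illustration of what a Lagrangian puff model computes, the sketch below advects a single Gaussian puff with the wind and evaluates its concentration at a receptor. The function names, the fixed spread value, and the two-dimensional simplification are illustrative assumptions; MESOI's actual numerics, dispersion-coefficient growth, and multi-puff bookkeeping are not reproduced here.

```python
# Minimal Lagrangian Gaussian-puff step: move the puff center with the
# wind, then evaluate a 2-D Gaussian concentration field around it.
# Constants and names are illustrative, not taken from MESOI.
import math

def advect(x, y, u, v, dt):
    """Move a puff center with the local wind components (u, v) over dt seconds."""
    return x + u * dt, y + v * dt

def puff_concentration(q, x, y, xc, yc, sigma):
    """Concentration at (x, y) from a 2-D Gaussian puff of mass q
    centered at (xc, yc) with horizontal spread sigma."""
    r2 = (x - xc) ** 2 + (y - yc) ** 2
    return q / (2 * math.pi * sigma ** 2) * math.exp(-r2 / (2 * sigma ** 2))

# One 10-minute step with a 5 m/s westerly wind moves the puff 3000 m east.
xc, yc = advect(0.0, 0.0, 5.0, 0.0, 600.0)
c = puff_concentration(1.0, 3000.0, 0.0, xc, yc, sigma=200.0)
```

A real puff model repeats this loop for many puffs per release, growing each puff's sigma with travel time and summing contributions at every receptor.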

  13. Single-Column Modeling of Convection During the CINDY2011/DYNAMO Field Campaign With the CNRM Climate Model Version 6

    Science.gov (United States)

    Abdel-Lathif, Ahmat Younous; Roehrig, Romain; Beau, Isabelle; Douville, Hervé

    2018-03-01

    A single-column model (SCM) approach is used to assess the CNRM climate model (CNRM-CM) version 6 ability to represent the properties of the apparent heat source (Q1) and moisture sink (Q2) as observed during the 3 month CINDY2011/DYNAMO field campaign, over its Northern Sounding Array (NSA). The performance of the CNRM SCM is evaluated in a constrained configuration in which the latent and sensible heat surface fluxes are prescribed, as, when forced by observed sea surface temperature, the model is strongly limited by the underestimate of the surface fluxes, most probably related to the SCM forcing itself. The model exhibits a significant cold bias in the upper troposphere, near 200 hPa, and strong wet biases close to the surface and above 700 hPa. The analysis of the Q1 and Q2 profile distributions emphasizes the properties of the convective parameterization of the CNRM-CM physics. The distribution of the Q2 profile is particularly challenging. The model strongly underestimates the frequency of occurrence of the deep moistening profiles, which likely involve misrepresentation of the shallow and congestus convection. Finally, a statistical approach is used to objectively define atmospheric regimes and construct a typical convection life cycle. A composite analysis shows that the CNRM SCM captures the general transition from bottom-heavy to mid-heavy to top-heavy convective heating. Some model errors are shown to be related to the stratiform regimes. The moistening observed during the shallow and congestus convection regimes also requires further improvements of this CNRM-CM physics.

  14. Children's episodic memory.

    Science.gov (United States)

    Ghetti, Simona; Lee, Joshua

    2011-07-01

    Episodic memory develops during childhood and adolescence. This trajectory depends on several underlying processes. In this article, we first discuss the development of the basic binding processes (e.g., the processes by which elements are bound together to form a memory episode) and control processes (e.g., reasoning and metamemory processes) involved in episodic remembering. Then, we discuss the role of these processes in false-memory formation. In the subsequent sections, we examine the neural substrates of the development of episodic memory. Finally, we discuss atypical development of episodic memory. As we proceed through the article, we suggest potential avenues for future research. WIREs Cogn Sci 2011, 2, 365-373. DOI: 10.1002/wcs.114 For further resources related to this article, please visit the WIREs website. Copyright © 2010 John Wiley & Sons, Ltd.

  15. Modeling CANDU type fuel behaviour during extended burnup irradiations using a revised version of the ELESIM code

    International Nuclear Information System (INIS)

    Arimescu, V.I.; Richmond, W.R.

    1992-05-01

    The high-burnup database for CANDU fuel, with a variety of cases, offers a good opportunity to check models of fuel behaviour and to identify areas for improvement. Good agreement of calculated values of fission-gas release and sheath hoop strain with experimental data indicates that the global behaviour of the fuel element is adequately simulated by a computer code. Using the ELESIM computer code, the fission-gas release, swelling, and fuel pellet expansion models were analysed, and changes were made to the models for gaseous swelling and for diffusional release of fission-gas atoms to the grain boundaries. Using this revised version of ELESIM, satisfactory agreement with measured values of fission-gas release was found for most of the high-burnup database cases. It is concluded that the revised version of the ELESIM code is able to simulate high-burnup as well as low-burnup CANDU fuel with reasonable accuracy.

  16. A NetCDF version of the two-dimensional energy balance model based on the full multigrid algorithm

    Directory of Open Access Journals (Sweden)

    Kelin Zhuang

    2017-01-01

    Full Text Available A NetCDF version of the two-dimensional energy balance model based on the full multigrid method in Fortran is introduced for both pedagogical and research purposes. Based on the land–sea–ice distribution, orbital elements, greenhouse gases concentration, and albedo, the code calculates the global seasonal surface temperature. A step-by-step guide with examples is provided for practice.
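The two records above describe a two-dimensional seasonal energy balance model solved with a full multigrid method. As a far simpler illustration of the underlying energy-balance idea, the zero-dimensional sketch below balances absorbed solar radiation against blackbody emission; the constants are standard textbook values, and the code is not derived from the NetCDF model itself.

```python
# Zero-dimensional energy balance: absorbed solar = emitted longwave,
# i.e. S0 * (1 - albedo) / 4 = sigma * T**4, solved for T. The constants
# are standard textbook values, not parameters of the 2-D model above.
S0 = 1361.0        # solar constant, W m^-2
ALBEDO = 0.30      # planetary albedo (dimensionless)
SIGMA = 5.670e-8   # Stefan-Boltzmann constant, W m^-2 K^-4

def emission_temperature(s0=S0, albedo=ALBEDO):
    """Global-mean emission temperature (K) of a planet in radiative balance."""
    return (s0 * (1.0 - albedo) / (4.0 * SIGMA)) ** 0.25

t = emission_temperature()  # about 254.6 K, the familiar ~255 K result
```

The 2-D model replaces this single global balance with a latitude-longitude grid, adds heat diffusion and seasonal forcing, and therefore needs an efficient solver such as full multigrid.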

  17. Programs OPTMAN and SHEMMAN Version 6 (1999) - Coupled-Channels optical model and collective nuclear structure calculation -

    Energy Technology Data Exchange (ETDEWEB)

    Chang, Jong Hwa; Lee, Jeong Yeon; Lee, Young Ouk; Sukhovitski, Efrem Sh [Korea Atomic Energy Research Institute, Taejeon (Korea)

    2000-01-01

    Programs SHEMMAN and OPTMAN (Version 6) have been developed for determination of nuclear Hamiltonian parameters and for optical model calculations, respectively. The optical model calculations by OPTMAN, with coupling schemes built on wave functions of the non-axial soft rotator, are self-consistent, since the parameters of the nuclear Hamiltonian are determined by adjusting the energies of collective levels to experimental values with SHEMMAN prior to the optical model calculation. The programs have been installed at the Nuclear Data Evaluation Laboratory of KAERI. This report is intended as a brief manual for these codes. 43 refs., 9 figs., 1 tab. (Author)

  18. Human cortical EEG rhythms during long-term episodic memory task. A high-resolution EEG study of the HERA model.

    Science.gov (United States)

    Babiloni, Claudio; Babiloni, Fabio; Carducci, Filippo; Cappa, Stefano; Cincotti, Febo; Del Percio, Claudio; Miniussi, Carlo; Moretti, Davide Vito; Pasqualetti, Patrizio; Rossi, Simone; Sosta, Katiuscia; Rossini, Paolo Maria

    2004-04-01

    Many recent neuroimaging studies of episodic memory have indicated an asymmetry in prefrontal involvement, with the left prefrontal cortex more involved than the right in encoding, and the right more than the left in retrieval (hemispheric encoding and retrieval asymmetry, or HERA model). In this high-resolution electroencephalographic (EEG) study, we examined brain rhythmicity during a visual episodic memory (recognition) task. The theta (4-6 Hz), alpha (6-12 Hz) and gamma (28-48 Hz) oscillations were investigated during a visuospatial long-term episodic memory task including encoding (ENC) and retrieval (RET) phases. During the ENC phase, 25 figures representing interiors of buildings ("indoor") were randomly intermingled with 25 figures representing landscapes ("landscapes"). Subjects responded with the left ("indoor") or right ("landscapes") mouse button. During the RET phase (1 h later), 25 figures representing previously presented "indoor" pictures ("tests") were randomly intermingled with 25 figures representing novel "indoor" pictures ("distractors"). Again, a mouse response was required. Theta and alpha EEG results showed no change in frontal rhythmicity. In contrast, the HERA prediction of asymmetry was met only by EEG gamma responses, and only over the posterior parietal areas. The ENC phase was associated with gamma EEG oscillations over left parietal cortex. The RET phase was subsequently associated with gamma EEG oscillations predominantly over right parietal cortex. The predicted HERA asymmetry was thus observed in an unexpected location. This discrepancy may be due to the differential sensitivity of neuroimaging methods to selected components of cognitive processing. The strict relation between gamma response and perception suggests that retrieval processes of long-term memory deeply impinged upon sensory representation of the stored material.

  19. Anterior cingulate cortex-related connectivity in first-episode schizophrenia: a spectral dynamic causal modeling study with functional magnetic resonance imaging

    Directory of Open Access Journals (Sweden)

    Long-Biao eCui

    2015-11-01

    Full Text Available Understanding the neural basis of schizophrenia (SZ) is important for shedding light on the neurobiological mechanisms underlying this mental disorder. Structural and functional alterations in the anterior cingulate cortex (ACC), dorsolateral prefrontal cortex (DLPFC), hippocampus, and medial prefrontal cortex (MPFC) have been implicated in the neurobiology of SZ. However, the effective connectivity among them in SZ remains unclear. The current study investigated how neuronal pathways involving these regions were affected in first-episode SZ using functional magnetic resonance imaging (fMRI). Forty-nine patients with a first episode of psychosis and diagnosis of SZ—according to the Diagnostic and Statistical Manual of Mental Disorders, Fourth Edition, Text Revision—were studied. Fifty healthy controls (HCs) were included for comparison. All subjects underwent resting state fMRI. We used spectral dynamic causal modeling (DCM) to estimate directed connections among the bilateral ACC, DLPFC, hippocampus, and MPFC. We characterized the differences using Bayesian parameter averaging (BPA) in addition to classical inference (t-test). In addition to the effective connectivity common to both groups, HCs displayed widespread significant connections, predominantly involving the ACC, that were not detected in SZ patients, whereas SZ patients showed few such connections. Based on the BPA results, SZ patients exhibited anterior cingulate cortico-prefrontal-hippocampal hyperconnectivity, as well as ACC-related and hippocampal-dorsolateral prefrontal-medial prefrontal hypoconnectivity. In summary, sDCM revealed the pattern of effective connectivity involving the ACC in patients with first-episode SZ. This study provides a potential link between SZ and dysfunction of the ACC, creating an ideal situation to associate mechanisms behind SZ with aberrant connectivity among these cognition- and emotion-related regions.

  20. Updates to the Demographic and Spatial Allocation Models to Produce Integrated Climate and Land Use Scenarios (ICLUS) (Final Report, Version 2)

    Science.gov (United States)

    EPA announced the availability of the final report, Updates to the Demographic and Spatial Allocation Models to Produce Integrated Climate and Land Use Scenarios (ICLUS) (Version 2). This update furthered land-change modeling by providing nationwide housing developmen...

  1. Thermodynamic model for uplift and deflation episodes (bradyseism) associated with magmatic-hydrothermal activity at the Campi Flegrei (Italy)

    Science.gov (United States)

    Lima, Annamaria; De Vivo, Benedetto; Spera, Frank J.; Bodnar, Robert J.; Milia, Alfonsa; Nunziata, Concettina; Belkin, Harvey E.; Cannatelli, Claudia

    2009-01-01

    Campi Flegrei (CF) is a large volcanic complex located west of the city of Naples, Italy. Repeated episodes of bradyseism (slow vertical ground movement) near the town of Pozzuoli have been documented since Roman times. Bradyseismic events are interpreted as the consequence of aqueous fluid exsolution during magma solidification on a slow timescale (10³–10⁴ yr) superimposed upon a shorter (1–10 yr) timescale for the episodic expulsion of fluid from a deep (~3–5 km) lithostatically pressured low-permeability reservoir to an overlying hydrostatic reservoir. Cycles of inflation and deflation occur during short-duration transient events when connectivity is established between the deep and shallow hydrothermal reservoirs. The total seismic energy released (4 × 10¹³ J) during the 1983–1984 bradyseismic crisis is consistent with the observed volume change (uplift) and with the notion that seismic failure occurs in response to the shear stress release induced by volume change. Fluid transport and concomitant propagation of hydrofractures as fluid expands from lithostatic to hydrostatic pressure during decompression lead to ground surface displacement. Fluid decompression occurs along the fluid isenthalp (Joule–Thomson expansion) during transient periods of reservoir connectivity and leads to mineral precipitation. Each kilogram of fluid precipitates about 3 × 10⁻³ kg of silica along a typical decompression path along the isenthalp. Mineral precipitation modifies the permeability and acts to reseal connection paths, thereby isolating the lithostatic and hydrostatic reservoirs, ending one bradyseism phase and beginning another. Crystallization and exsolution of the magmatic fluid generate ≈7 × 10¹⁵ J of mechanical (PΔV) energy, and this is sufficient to accomplish the observed uplift at CF. Although magma emplacement is the ultimate origin of bradyseism, fresh recharge of magma is not a prerequisite. Instead, short to intermediate
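
    As a quick sanity check on the figures quoted above, the mechanical energy budget can be compared with the seismic energy release directly. The two energy values and the silica yield are from the abstract; the fluid throughput is a hypothetical illustration.

```python
# Back-of-envelope checks on figures quoted in the abstract. The energy
# values are from the abstract; the fluid throughput is hypothetical.
E_mech = 7e15         # J, mechanical (P*dV) energy from crystallization/exsolution
E_seismic = 4e13      # J, seismic energy released in the 1983-1984 crisis
silica_per_kg = 3e-3  # kg SiO2 precipitated per kg of decompressing fluid

ratio = E_mech / E_seismic
print(f"mechanical/seismic energy ratio: {ratio:.0f}")  # 175

m_fluid = 1e9         # kg of fluid expelled in one event (hypothetical)
m_silica = m_fluid * silica_per_kg
print(f"silica precipitated: {m_silica:.1e} kg")        # 3.0e+06 kg
```

    The two-orders-of-magnitude gap between mechanical and seismic energy is consistent with the abstract's claim that the PΔV budget comfortably covers both uplift and seismicity.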

  2. SHADOW3: a new version of the synchrotron X-ray optics modelling package

    Energy Technology Data Exchange (ETDEWEB)

    Sanchez del Rio, Manuel, E-mail: srio@esrf.eu [European Synchrotron Radiation Facility, 6 Jules Horowitz, 38000 Grenoble (France); Canestrari, Niccolo [CNRS, Grenoble (France); European Synchrotron Radiation Facility, 6 Jules Horowitz, 38000 Grenoble (France); Jiang, Fan; Cerrina, Franco [Boston University, 8 St Mary’s Street, Boston, MA 02215 (United States)

    2011-09-01

    A new version of the popular X-ray tracing code SHADOW, SHADOW3, is presented. An important step has been made in restructuring the code following modern software engineering standards, ending with a modular Fortran 2003 structure and an application programming interface (API). The new code has been designed to be compatible with the original file-oriented SHADOW philosophy, while simplifying compilation, installation and use. In addition, users can now become programmers using the newly designed SHADOW3 API for creating scripts, macros and programs, enabling optical system optimization, image simulation, and low-transmission calculations requiring a large number of rays (>10⁶). Plans for future development and questions on how to accomplish them are also discussed.

  3. SHADOW3: a new version of the synchrotron X-ray optics modelling package

    International Nuclear Information System (INIS)

    Sanchez del Rio, Manuel; Canestrari, Niccolo; Jiang, Fan; Cerrina, Franco

    2011-01-01

    A new version of the popular X-ray tracing code SHADOW, SHADOW3, is presented. An important step has been made in restructuring the code following modern software engineering standards, ending with a modular Fortran 2003 structure and an application programming interface (API). The new code has been designed to be compatible with the original file-oriented SHADOW philosophy, while simplifying compilation, installation and use. In addition, users can now become programmers using the newly designed SHADOW3 API for creating scripts, macros and programs, enabling optical system optimization, image simulation, and low-transmission calculations requiring a large number of rays (>10⁶). Plans for future development and questions on how to accomplish them are also discussed.

  4. Item and response-category functioning of the Persian version of the KIDSCREEN-27: Rasch partial credit model

    Directory of Open Access Journals (Sweden)

    Jafari Peyman

    2012-10-01

    Full Text Available Abstract Background The purpose of the study was to determine whether the Persian version of the KIDSCREEN-27 has the optimal number of response categories to measure health-related quality of life (HRQoL) in children and adolescents. Moreover, we aimed to determine whether all the items contributed adequately to their own domain. Findings The Persian version of the KIDSCREEN-27 was completed by 1083 school children and 1070 of their parents. The Rasch partial credit model (PCM) was used to investigate item statistics and the ordering of response categories. The PCM showed that no item was misfitting. The PCM also revealed that successive response categories for all items were located in the expected order, except for category 1 in self- and proxy-reports. Conclusions Although the Rasch analysis confirms that all the items belong to their own underlying construct, the response categories should be reorganized and evaluated in further studies, especially in children with chronic conditions.
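
    The category-ordering check described above can be illustrated with a minimal partial credit model sketch. The threshold values are hypothetical, not the KIDSCREEN-27 estimates; the point is that ordered thresholds make each response category modal over a contiguous range of the latent trait.

```python
import numpy as np

def pcm_category_probs(theta, deltas):
    """Partial credit model: P(X = k | theta) for step thresholds deltas."""
    # category 0 has cumulative logit 0; category k sums (theta - delta_j), j<=k
    psi = np.concatenate(([0.0], np.cumsum(theta - np.asarray(deltas))))
    exppsi = np.exp(psi - psi.max())   # subtract max for numerical stability
    return exppsi / exppsi.sum()

deltas = [-1.0, 0.0, 1.5]              # hypothetical, ordered item thresholds
thetas = np.linspace(-3, 3, 121)       # latent-trait grid
probs = np.array([pcm_category_probs(t, deltas) for t in thetas])
modal = probs.argmax(axis=1)           # modal category along the trait
print(modal[0], modal[-1])             # lowest category at -3, highest at +3
```

    With disordered thresholds (the problem flagged for category 1 in the abstract), some category would never be modal, which is exactly what the Rasch category diagnostics detect.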

  5. UNSAT-H Version 3.0: Unsaturated Soil Water and Heat Flow Model Theory, User Manual, and Examples

    International Nuclear Information System (INIS)

    Fayer, M.J.

    2000-01-01

    The UNSAT-H model was developed at Pacific Northwest National Laboratory (PNNL) to assess the water dynamics of arid sites and, in particular, estimate recharge fluxes for scenarios pertinent to waste disposal facilities. During the last 4 years, the UNSAT-H model received support from the Immobilized Waste Program (IWP) of the Hanford Site's River Protection Project. This program is designing and assessing the performance of on-site disposal facilities to receive radioactive wastes that are currently stored in single- and double-shell tanks at the Hanford Site (LMHC 1999). The IWP is interested in estimates of recharge rates for current conditions and long-term scenarios involving the vadose zone disposal of tank wastes. Simulation modeling with UNSAT-H is one of the methods being used to provide those estimates (e.g., Rockhold et al. 1995; Fayer et al. 1999). To achieve the above goals for assessing water dynamics and estimating recharge rates, the UNSAT-H model addresses soil water infiltration, redistribution, evaporation, plant transpiration, deep drainage, and soil heat flow as one-dimensional processes. The UNSAT-H model simulates liquid water flow using Richards' equation (Richards 1931), water vapor diffusion using Fick's law, and sensible heat flow using the Fourier equation. This report documents UNSAT-H Version 3.0. The report includes the bases for the conceptual model and its numerical implementation, benchmark test cases, example simulations involving layered soils and plants, and the code manual. Version 3.0 is an enhanced-capability update of UNSAT-H Version 2.0 (Fayer and Jones 1990). New features include hysteresis, an iterative solution of head and temperature, an energy balance check, the modified Picard solution technique, additional hydraulic functions, multiple-year simulation capability, and general enhancements.
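
    Of the one-dimensional processes listed above, soil heat flow is the simplest to sketch. The toy finite-difference solution of the Fourier (heat conduction) equation below is illustrative only; it is not UNSAT-H's numerics (which couple heat with Richards' equation via the modified Picard scheme), and all parameter values are assumed.

```python
import numpy as np

# Minimal 1-D soil heat conduction sketch (Fourier equation), explicit FTCS.
alpha = 5e-7          # m^2/s, thermal diffusivity of moist soil (assumed)
dz, dt = 0.01, 50.0   # grid spacing (m) and time step (s)
r = alpha * dt / dz**2
assert r <= 0.5       # FTCS stability criterion for explicit diffusion

T = np.full(101, 15.0)   # 1 m soil column, initially 15 C throughout
T[0] = 25.0              # surface boundary held at 25 C
for _ in range(2000):    # 2000 * 50 s ~ 28 h of simulated time
    T[1:-1] += r * (T[2:] - 2.0 * T[1:-1] + T[:-2])
    T[-1] = T[-2]        # zero-flux bottom boundary
print(round(T[10], 2))   # temperature 10 cm below the surface
```

    The warming front penetrates roughly sqrt(alpha * t), about 0.2 m here, which is why the temperature profile still decays toward 15 C with depth.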

  6. Simulations of the Mid-Pliocene Warm Period Using Two Versions of the NASA-GISS ModelE2-R Coupled Model

    Science.gov (United States)

    Chandler, M. A.; Sohl, L. E.; Jonas, J. A.; Dowsett, H. J.; Kelley, M.

    2013-01-01

    The mid-Pliocene Warm Period (mPWP) bears many similarities to aspects of future global warming as projected by the Intergovernmental Panel on Climate Change (IPCC, 2007). Both marine and terrestrial data point to high-latitude temperature amplification, including large decreases in sea ice and land ice, as well as expansion of warmer climate biomes into higher latitudes. Here we present our most recent simulations of the mid-Pliocene climate using the CMIP5 version of the NASA-GISS Earth System Model (ModelE2-R). We describe the substantial impact associated with a recent correction made in the implementation of the Gent-McWilliams ocean mixing scheme (GM), which has a large effect on the simulation of ocean surface temperatures, particularly in the North Atlantic Ocean. The effect of this correction on the Pliocene climate results would not have been easily determined from examining its impact on the preindustrial runs alone, a useful demonstration of how the consequences of code improvements as seen in modern climate control runs do not necessarily portend the impacts in extreme climates. Both the GM-corrected and GM-uncorrected simulations were contributed to the Pliocene Model Intercomparison Project (PlioMIP) Experiment 2. Many findings presented here corroborate results from other PlioMIP multi-model ensemble papers, but we also emphasize features in the ModelE2-R simulations that are unlike the ensemble means. The corrected version yields results that more closely resemble the ocean core data as well as the PRISM3D reconstructions of the mid-Pliocene, especially the dramatic warming in the North Atlantic and Greenland-Iceland-Norwegian Sea, which in the new simulation appears to be far more realistic than previously found with older versions of the GISS model.
Our belief is that continued development of key physical routines in the atmospheric model, along with higher resolution and recent corrections to mixing parameterisations in the ocean model, have led

  7. First Episode Psychosis

    Science.gov (United States)

    Fact Sheet: First Episode Psychosis (also available in Spanish). The word psychosis is used to describe conditions ...

  8. The FREED Project (first episode and rapid early intervention in eating disorders): service model, feasibility and acceptability.

    Science.gov (United States)

    Brown, Amy; McClelland, Jessica; Boysen, Elena; Mountford, Victoria; Glennon, Danielle; Schmidt, Ulrike

    2018-04-01

    Eating disorders (EDs) are disabling disorders, predominantly affecting adolescents and young adults. Untreated symptoms have lasting effects on brain, body and behaviour. Care-pathway-related barriers often prevent early detection and treatment of EDs. The aim of this study was to assess the feasibility and acceptability of FREED (First Episode and Rapid Early Intervention for Eating Disorder), a novel service for young people (aged 18-25 years) with recent ED onset (≤3 years), embedded in a specialist adult National Health Service ED service. Specifically, we assessed the impact of FREED on duration of time until specialist service contact (DUSC), duration of untreated ED (DUED) and wait times for assessment and treatment compared with patients seen earlier in our service. Acceptability of FREED was also assessed. Sixty individuals were recruited from September 2014 to August 2015. Fifty-one of these were compared with 89 patients seen earlier. FREED patients from areas with minimal National Health Service gatekeeping (14/51) had markedly shorter DUSC and DUED than controls (DUSC: 12.4 vs. 16.2 months; DUED: 13.0 vs. 19.1 months), whereas those with complex gatekeeping (37/51) had shorter DUED (17.7 months) but longer DUSC (16.9 months) than controls. FREED patients waited significantly less time for both assessment and treatment than controls, had significantly better treatment uptake and were highly satisfied with the process of starting treatment. FREED is a feasible and acceptable service that successfully reduced waiting times. Reductions in DUSC and DUED depend on gatekeeping arrangements. More research is required to establish the clinical outcomes of FREED. © 2016 John Wiley & Sons Australia, Ltd.

  9. Three-factor model of premorbid adjustment in a sample with chronic schizophrenia and first-episode psychosis.

    Science.gov (United States)

    Barajas, Ana; Usall, Judith; Baños, Iris; Dolz, Montserrat; Villalta-Gil, Victoria; Vilaplana, Miriam; Autonell, Jaume; Sánchez, Bernardo; Cervilla, Jorge A; Foix, Alexandrina; Obiols, Jordi E; Haro, Josep Maria; Ochoa, Susana

    2013-12-01

    The dimensionality of premorbid adjustment (PA) has been a debated issue, with attempts to determine whether PA is a unitary construct or composed of several independent domains characterized by a differential deterioration pattern and specific outcome correlates. This study examines the factorial structure of PA, as well as the course and correlates of its domains. This was a retrospective study of 84 adult patients: individuals experiencing first-episode psychosis (FEP) (n=33) and individuals with schizophrenia (SCH) (n=51). All patients were evaluated with a comprehensive battery of instruments including clinical, functioning and neuropsychological variables. A principal component analysis with a varimax rotation was used to examine the factor structure of the PAS-S scale. Paired t tests and Wilcoxon rank tests were used to assess changes in PAS domains over time. Bivariate correlation analyses were performed to analyse the relationships between PAS factors and clinical, social and cognitive variables. PA was best explained by three factors (71.65% of the variance): Academic PA, Social PA and Socio-sexual PA. The academic domain showed higher scores of PA from childhood. Social and clinical variables were more strongly related to the Social PA and Socio-sexual PA domains, and the Academic PA domain was exclusively associated with cognitive variables. This study supports previous evidence, emphasizing the validity of dividing PA into its sub-components. A differential deterioration pattern and specific correlates were observed in each PA domain, suggesting that impairments in each PA domain might predispose individuals to develop different expressions of psychotic dimensions. © 2013.
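
    The first analytic step, extracting principal components and the variance they explain, can be sketched on synthetic data. The item scores, factor count and loadings below are invented, and the varimax rotation used in the study is omitted; the sketch only shows how an explained-variance share such as the reported 71.65% arises.

```python
import numpy as np

# Synthetic standardized item scores driven by 3 latent domains plus noise.
rng = np.random.default_rng(4)
latent = rng.normal(size=(84, 3))               # 84 "patients", 3 domains
loadings = rng.normal(size=(3, 9))              # 9 hypothetical PAS items
X = latent @ loadings + 0.5 * rng.normal(size=(84, 9))
X = (X - X.mean(0)) / X.std(0)                  # standardize each item

cov = np.cov(X, rowvar=False)
eigvals = np.sort(np.linalg.eigvalsh(cov))[::-1]  # descending eigenvalues
explained = eigvals[:3].sum() / eigvals.sum()
print(round(explained, 2))  # share of variance from the first 3 components
```

    With three genuine latent domains, the first three eigenvalues dominate the spectrum, which is the pattern that justifies retaining a three-factor solution.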

  10. Categorical Inputs, Sensitivity Analysis, Optimization and Importance Tempering with tgp Version 2, an R Package for Treed Gaussian Process Models

    Directory of Open Access Journals (Sweden)

    Robert B. Gramacy

    2010-02-01

    Full Text Available This document describes the new features in version 2.x of the tgp package for R, implementing treed Gaussian process (GP) models. The topics covered include methods for dealing with categorical inputs and excluding inputs from the tree or GP part of the model; fully Bayesian sensitivity analysis for inputs/covariates; sequential optimization of black-box functions; and a new Monte Carlo method for inference in multi-modal posterior distributions that combines simulated tempering and importance sampling. These additions extend the functionality of tgp across all models in the hierarchy: from Bayesian linear models, to classification and regression trees (CART), to treed Gaussian processes with jumps to the limiting linear model. It is assumed that the reader is familiar with the baseline functionality of the package, outlined in the first vignette (Gramacy 2007).
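
    The importance-tempering idea can be illustrated in miniature. The sketch below does plain importance sampling from a single wide ("hot") proposal over a bimodal target, a deliberately simplified stand-in for tgp's combination of simulated tempering with importance weights; the target, proposal and sample size are all hypothetical.

```python
import numpy as np

# Bimodal, unnormalized target: equal mixture of N(-3, 1) and N(3, 1).
def target(x):
    return np.exp(-0.5 * (x + 3) ** 2) + np.exp(-0.5 * (x - 3) ** 2)

rng = np.random.default_rng(0)
xs = rng.normal(0.0, 5.0, size=200_000)     # wide proposal covers both modes
q = np.exp(-0.5 * (xs / 5.0) ** 2) / 5.0    # proposal density (up to constant)
w = target(xs) / q                          # importance weights
w /= w.sum()                                # self-normalize

mean_est = np.sum(w * xs)       # ~0 by symmetry of the two modes
var_est = np.sum(w * xs ** 2)   # ~10 for this mixture (1 + 3^2)
print(round(mean_est, 2), round(var_est, 1))
```

    A single cold chain would typically get stuck in one mode; the hot proposal visits both, and reweighting recovers correct posterior expectations, which is the motivation behind importance tempering.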

  11. Land-total and Ocean-total Precipitation and Evaporation from a Community Atmosphere Model version 5 Perturbed Parameter Ensemble

    Energy Technology Data Exchange (ETDEWEB)

    Covey, Curt [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Lucas, Donald D. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Trenberth, Kevin E. [National Center for Atmospheric Research, Boulder, CO (United States)

    2016-03-02

    This document presents the large-scale water budget statistics of a perturbed input-parameter ensemble of atmospheric model runs. The model is Version 5.1.02 of the Community Atmosphere Model (CAM). These runs are the “C-Ensemble” described by Qian et al., “Parametric Sensitivity Analysis of Precipitation at Global and Local Scales in the Community Atmosphere Model CAM5” (Journal of Advances in Modeling Earth Systems, 2015). As noted by Qian et al., the simulations are “AMIP type”, with temperature and sea ice boundary conditions chosen to match surface observations for the five-year period 2000-2004. There are 1100 ensemble members in addition to one run with default input-parameter values.
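
    Budget statistics like these obey simple closure constraints worth checking for any ensemble member: global precipitation balances global evaporation, and the land surplus (runoff) equals the ocean deficit. The figures below are round, Earth-like illustrations, not values from the report.

```python
# Large-scale water budget closure check (illustrative numbers, 10^3 km^3/yr).
P_land, E_land = 110.0, 70.0
P_ocean, E_ocean = 400.0, 440.0

runoff = P_land - E_land                                    # land surplus
assert abs((P_land + P_ocean) - (E_land + E_ocean)) < 1e-9  # global P = E
assert abs(runoff - (E_ocean - P_ocean)) < 1e-9             # rivers close it
print(runoff)  # 40.0
```

    Members of a perturbed-parameter ensemble that violate these identities (beyond storage drift) indicate a diagnostic or conservation problem rather than a physical response.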

  12. Evaluation of dust and trace metal estimates from the Community Multiscale Air Quality (CMAQ) model version 5.0

    Directory of Open Access Journals (Sweden)

    K. W. Appel

    2013-07-01

    Full Text Available The Community Multiscale Air Quality (CMAQ) model is a state-of-the-science air quality model that simulates the emission, transformation, transport, and fate of the many different air pollutant species that comprise particulate matter (PM), including dust (or soil). The CMAQ model version 5.0 (CMAQv5.0) has several enhancements over the previous version of the model for estimating the emission and transport of dust, including the ability to track the specific elemental constituents of dust and have the model-derived concentrations of those elements participate in chemistry. The latest version of the model also includes a parameterization to estimate emissions of dust due to wind action. The CMAQv5.0 modeling system was used to simulate the entire year 2006 for the continental United States, and the model estimates were evaluated against daily surface-based measurements from several air quality networks. The CMAQ modeling system overall did well replicating the observed soil concentrations in the western United States (mean bias generally around ±0.5 μg m⁻³); however, the model consistently overestimated the observed soil concentrations in the eastern United States (mean bias generally between 0.5–1.5 μg m⁻³), regardless of season. The performance of the individual trace metals was highly dependent on the network, species, and season, with relatively small biases for Fe, Al, Si, and Ti throughout the year at the Interagency Monitoring of Protected Visual Environments (IMPROVE) sites, while Ca, K, and Mn were overestimated and Mg underestimated. For the urban Chemical Speciation Network (CSN) sites, Fe, Mg, and Mn, while overestimated, had comparatively better performance throughout the year than the other trace metals, which were consistently overestimated, including very large overestimations of Al (380%), Ti (370%) and Si (470%) in the fall. An underestimation of nighttime mixing in the urban areas appears to contribute to the overestimation of

  13. Result Summary for the Area 5 Radioactive Waste Management Site Performance Assessment Model Version 4.110

    International Nuclear Information System (INIS)

    2011-01-01

    Results for Version 4.110 of the Area 5 Radioactive Waste Management Site (RWMS) performance assessment (PA) model are summarized. Version 4.110 includes the fiscal year (FY) 2010 inventory estimate, including a future inventory estimate. Version 4.110 was implemented in GoldSim 10.11 (SP4). The following changes have been implemented since the last baseline model, Version 4.105: (1) updated the inventory and disposal unit configurations with data through the end of FY 2010; (2) implemented Federal Guidance Report 13 Supplemental CD dose conversion factors (U.S. Environmental Protection Agency, 1999). Version 4.110 PA results comply with air-pathway and all-pathways annual total effective dose (TED) performance objectives (Tables 2 and 3, Figures 1 and 2). Air-pathway results decrease moderately for all scenarios. The time of the maximum for the air-pathway open rangeland scenario shifts from 1,000 to 100 years (y). All-pathways annual TED increases for all scenarios except the resident scenario. The maximum member-of-the-public all-pathways dose occurs at 1,000 y for the resident farmer scenario. The resident farmer dose was predominantly due to technetium-99 (Tc-99) (82 percent) and lead-210 (Pb-210) (13 percent). Pb-210 present at 1,000 y is produced predominantly by radioactive decay of uranium-234 (U-234) present at the time of disposal. All results for the postdrilling and intruder-agriculture scenarios comply with the performance objectives (Tables 4 and 5, Figures 3 and 4). The postdrilling intruder results are similar to the Version 4.105 results. The intruder-agriculture results are similar to Version 4.105, except for the Pit 6 Radium Disposal Unit (RaDU). The intruder-agriculture result for the Shallow Land Burial (SLB) disposal units is a significant fraction of the performance objective and exceeds the performance objective at the 95th percentile. The intruder-agriculture dose is due predominantly to Tc-99 (75 percent) and U-238 (9.5 percent). The acute

  14. A WRF/Chem sensitivity study using ensemble modelling for a high ozone episode in Slovenia and the Northern Adriatic area

    Science.gov (United States)

    Žabkar, Rahela; Koračin, Darko; Rakovec, Jože

    2013-10-01

    A high-ozone (O3) concentration episode during a heat-wave event in the Northeastern Mediterranean was investigated using the WRF/Chem model. To understand the major model uncertainties and errors, as well as the impacts of model inputs on model accuracy, an ensemble modelling experiment was conducted. The 51-member ensemble was designed by varying model physics parameterization options (PBL schemes with different surface-layer and land-surface modules, and radiation schemes); chemical initial and boundary conditions; anthropogenic and biogenic emission inputs; and model domain setup and resolution. The main impacts of the geographical and emission characteristics of three distinct regions (suburban Mediterranean, continental urban, and continental rural) on model accuracy and O3 predictions were investigated. In spite of the large ensemble size, the model generally failed to simulate the extremes; however, as expected from probabilistic forecasting, the ensemble spread improved results for the extremes compared with the reference run. Noticeable nighttime model overestimations at the Mediterranean and some urban and rural sites can be explained by too-strong simulated winds, which reduce the impact of dry deposition and O3 titration in the near-surface layers during the nighttime. Another possible explanation could be inaccuracies in the chemical mechanisms, which are suggested also by the model's insensitivity to variations in the nitrogen oxide (NOx) and volatile organic compound (VOC) emissions. Major factors in the underestimation of the daytime O3 maxima at the Mediterranean and some rural sites include overestimation of the PBL depths, a lack of information on forest fires, too-strong surface winds, and possible inaccuracies in biogenic emissions. This numerical experiment with the ensemble runs also provided guidance on an optimum model setup and input data.
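
    The benefit of ensemble spread noted above (bracketing values that a single deterministic run misses) can be shown with a toy example. All numbers are synthetic; a real evaluation would compare the 51 WRF/Chem members against station observations.

```python
import numpy as np

# Toy ensemble around a single reference run; synthetic values throughout.
rng = np.random.default_rng(1)
obs = 95.0                                         # observed O3 peak (ppb)
reference = 78.0                                   # single deterministic run
ensemble = reference + rng.normal(0, 10, size=51)  # 51 perturbed members

env_lo, env_hi = ensemble.min(), ensemble.max()
print(f"reference error: {abs(reference - obs):.1f} ppb")
print(f"ensemble envelope: [{env_lo:.1f}, {env_hi:.1f}] ppb")
print("obs inside envelope:", env_lo <= obs <= env_hi)
```

    Even when the ensemble mean is no closer than the reference run, the envelope of members conveys how far the perturbed inputs can move the forecast, which is the probabilistic value the abstract refers to.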

  15. High-resolution numerical modeling of tectonic underplating in circum-Pacific subduction zones: toward a better understanding of deformation in the episodic tremor and slip region?

    Science.gov (United States)

    Menant, A.; Angiboust, S.; Gerya, T.; Lacassin, R.; Simoes, M.; Grandin, R.

    2017-12-01

    Studies of now-exhumed ancient subduction systems have revealed km-scale tectonic units of marine sediments and oceanic crust that have been tectonically underplated (i.e. basally accreted) from the downgoing plate to the overriding plate at more than 30 km depth. Such huge mass transfers must have a major impact, both in terms of long-term topographic variations and of seismic/aseismic deformation in subduction zones. However, the quantification of these responses to the underplating process remains poorly constrained. Using high-resolution visco-elasto-plastic thermo-mechanical models, we present in unprecedented detail the dynamics of formation and destruction of underplated complexes in subduction zones. Initial conditions in our experiments are defined so as to fit different subduction systems of the circum-Pacific region where underplating is strongly suspected (e.g. the Cascadia, SW-Japan, New Zealand, and Chilean subduction zones). It appears that, whatever the subduction system considered, underplating of sediments and oceanic crust always occurs episodically, forming a coherent nappe stack at depths between 10 and 50 km. At greater depths, a tectonic mélange with a serpentinized mantle-wedge matrix develops along the plate interface. The size of these underplated complexes changes according to the subduction system considered. For instance, a 15-km-thick nappe stack is obtained for the N-Chilean subduction zone after a series of underplating events. Such an episodic event lasts 4-5 Myr and can be responsible for a 2-km-high uplift in the forearc region. Subsequent basal erosion of these underplated complexes results in their only partial preservation at crustal and mantle depths, suggesting that, after exhumation, only a tiny section of the overall underplated material can be observed nowadays in ancient subduction systems. Finally, tectonic underplating in our numerical models is systematically associated with (1) an increasing

  16. Use of two-part regression calibration model to correct for measurement error in episodically consumed foods in a single-replicate study design: EPIC case study.

    Science.gov (United States)

    Agogo, George O; van der Voet, Hilko; van't Veer, Pieter; Ferrari, Pietro; Leenders, Max; Muller, David C; Sánchez-Cantalejo, Emilio; Bamia, Christina; Braaten, Tonje; Knüppel, Sven; Johansson, Ingegerd; van Eeuwijk, Fred A; Boshuizen, Hendriek

    2014-01-01

    In epidemiologic studies, measurement error in dietary variables often attenuates the association between dietary intake and disease occurrence. To adjust for the attenuation caused by error in dietary intake, regression calibration is commonly used. To apply regression calibration, unbiased reference measurements are required. Short-term reference measurements for foods that are not consumed daily contain excess zeroes that pose challenges for the calibration model. We adapted the two-part regression calibration model, initially developed for multiple replicates of reference measurements per individual, to a single-replicate setting. We show how to handle excess-zero reference measurements with a two-step modeling approach, how to explore heteroscedasticity in the consumed amount with a variance-mean graph, how to explore nonlinearity with generalized additive modeling (GAM) and the empirical logit approach, and how to select covariates for the calibration model. The performance of the two-part calibration model was compared with its one-part counterpart. We used vegetable intake and mortality data from the European Prospective Investigation into Cancer and Nutrition (EPIC) study. In EPIC, reference measurements were taken with 24-hour recalls. For each of the three vegetable subgroups assessed separately, correcting for error with an appropriately specified two-part calibration model resulted in about a threefold increase in the strength of association with all-cause mortality, as measured by the log hazard ratio. We further found that the standard way of including covariates in the calibration model can lead to overfitting the two-part calibration model. Moreover, the extent of adjustment for error is influenced by the number and forms of covariates in the calibration model. For episodically consumed foods, we advise researchers to pay special attention to the response distribution, nonlinearity, and covariate inclusion when specifying the calibration model.
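
    The core of the two-part idea is the decomposition E[Y] = P(Y > 0) × E[Y | Y > 0]. The sketch below demonstrates it on synthetic intake data with excess zeroes; the real calibration additionally regresses each part on covariates, which is omitted here, and all distributional choices are assumptions.

```python
import numpy as np

# Synthetic episodically consumed food: ~30% of recalls report any intake,
# and the positive amounts are right-skewed (lognormal).
rng = np.random.default_rng(2)
n = 10_000
consumed = rng.random(n) < 0.3
amount = np.where(consumed, rng.lognormal(mean=4.0, sigma=0.5, size=n), 0.0)

p_hat = consumed.mean()            # part 1: probability of consumption
mu_hat = amount[consumed].mean()   # part 2: mean amount when consumed
print(round(p_hat * mu_hat, 1),    # two-part estimate of E[Y]
      round(amount.mean(), 1))     # direct mean -- identical by construction
```

    Modeling the two parts separately is what lets the calibration handle the zero spike and the skewed positive amounts with appropriately different regression models.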

  17. Bayesian random-effect model for predicting outcome fraught with heterogeneity--an illustration with episodes of 44 patients with intractable epilepsy.

    Science.gov (United States)

    Yen, A M-F; Liou, H-H; Lin, H-L; Chen, T H-H

    2006-01-01

    The study aimed to develop a predictive model to deal with data fraught with heterogeneity that cannot be explained by sampling variation or measured covariates. The random-effect Poisson regression model was first proposed to deal with over-dispersion in data fraught with heterogeneity after making allowance for measured covariates. A Bayesian acyclic graphical model in conjunction with the Markov chain Monte Carlo (MCMC) technique was then applied to estimate the parameters of both the relevant covariates and the random effect. A predictive distribution was then generated to compare the predicted with the observed for the Bayesian model with and without the random effect. Data from repeated measurement of episodes among 44 patients with intractable epilepsy were used as an illustration. Applying Poisson regression to the epilepsy data without taking heterogeneity into account yielded a large value of heterogeneity (heterogeneity factor = 17.90, deviance = 1485, degrees of freedom (df) = 83). After taking the random effect into account, the heterogeneity factor was greatly reduced (heterogeneity factor = 0.52, deviance = 42.5, df = 81). The Pearson chi-square statistics for the comparison between the expected and observed seizure frequencies at two and three months were 34.27 (p = 1.00) for the model with the random effect and 1799.90 (p < 0.0001) for the model without it, indicating that the random-effect model adequately accommodated over-dispersion attributed either to correlated properties or to subject-to-subject variability.
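
    The heterogeneity factor quoted above is the deviance divided by the degrees of freedom; values near 1 indicate pure Poisson variation, while much larger values signal unexplained heterogeneity. A minimal sketch on synthetic over-dispersed counts (not the epilepsy data), for an intercept-only Poisson fit whose MLE is the sample mean:

```python
import numpy as np

# Over-dispersed synthetic counts (negative binomial: variance > mean).
rng = np.random.default_rng(3)
counts = rng.negative_binomial(n=2, p=0.4, size=90)
mu = counts.mean()   # intercept-only Poisson MLE

# Poisson deviance: 2 * sum[y*log(y/mu) - (y - mu)], with y*log(y/mu) = 0 at y = 0
y = counts.astype(float)
term = np.where(y > 0, y * np.log(y / mu), 0.0)
deviance = 2.0 * np.sum(term - (y - mu))
df = len(y) - 1
print(round(deviance / df, 2))   # heterogeneity factor; >> 1 here
```

    A factor far above 1, like the 17.90 reported before adding the random effect, tells the analyst that subject-level random effects (or extra covariates) are needed before the model's standard errors can be trusted.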

  18. Update of the Polar SWIFT model for polar stratospheric ozone loss (Polar SWIFT version 2

    Directory of Open Access Journals (Sweden)

    I. Wohltmann

    2017-07-01

    Full Text Available The Polar SWIFT model is a fast scheme for calculating the chemistry of stratospheric ozone depletion in polar winter. It is intended for use in global climate models (GCMs) and Earth system models (ESMs) to enable the simulation of mutual interactions between the ozone layer and climate. To date, climate models have often used prescribed ozone fields, since a full stratospheric chemistry scheme is computationally very expensive. Polar SWIFT is based on a set of coupled differential equations, which simulate the polar vortex-averaged mixing ratios of the key species involved in polar ozone depletion on a given vertical level. These species are O3, chemically active chlorine (ClOx), HCl, ClONO2 and HNO3. The only external input parameters that drive the model are the fraction of the polar vortex in sunlight and the fraction of the polar vortex below the temperatures necessary for the formation of polar stratospheric clouds. Here, we present an update of the Polar SWIFT model introducing several improvements over the original model formulation. In particular, the model is now trained on vortex-averaged reaction rates of the ATLAS Chemistry and Transport Model, which enables a detailed look at individual processes and an independent validation of the different parameterizations contained in the differential equations. The training of the original Polar SWIFT model was based on fitting complete model runs to satellite observations and did not allow for this. A revised formulation of the system of differential equations is developed, which closely fits vortex-averaged reaction rates from ATLAS that represent the main chemical processes influencing ozone. In addition, a parameterization for the HNO3 change by denitrification is included. The rates of change of the concentrations of the chemical species in the Polar SWIFT model are purely chemical rates of change in the new version, whereas in the original Polar SWIFT model they included a transport effect.
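
    The structure described above, a handful of coupled vortex-averaged equations driven by the sunlit fraction, can be caricatured in a few lines. The rate constant, forcing curves and initial value below are invented; the real Polar SWIFT coefficients are fitted to ATLAS reaction rates and couple several species.

```python
import numpy as np

# Toy vortex-averaged ozone loss: active chlorine destroys O3 only in sunlight.
days = np.arange(120)                       # winter-to-spring day index
sun_frac = np.clip((days - 40) / 60, 0, 1)  # fraction of vortex in sunlight
clox = np.where(days < 80, 1.0, 0.2)        # ClOx activated, then deactivated

o3 = np.empty_like(days, dtype=float)
o3[0] = 3.0                                 # ppm, initial mixing ratio (toy)
k = 0.03                                    # per-day loss coefficient (toy)
for t in range(1, len(days)):
    o3[t] = o3[t - 1] * (1.0 - k * clox[t] * sun_frac[t])
print(round(o3[0], 2), round(o3[-1], 2))    # ozone declines once sun returns
```

    The key qualitative behavior, no loss in darkness and rapid loss once sunlight reaches an activated vortex, falls out of the product of the two driving fractions, which is why those are the only external inputs Polar SWIFT needs.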

  19. Implementation of methane cycling for deep time, global warming simulations with the DCESS Earth System Model (Version 1.2)

    DEFF Research Database (Denmark)

    Shaffer, Gary; Villanueva, Esteban Fernández; Rondanelli, Roberto

    2017-01-01

    Geological records reveal a number of ancient, large and rapid negative excursions of carbon-13 isotope. Such excursions can only be explained by massive injections of depleted carbon to the Earth System over a short duration. These injections may have forced strong global warming events, sometimes ... With this improved DCESS model version and paleo-reconstructions, we are now better armed to gauge the amounts, types, time scales and locations of methane injections driving specific, observed deep time, global warming events.

  20. Vortex dynamics in nonrelativistic version of Abelian Higgs model: Effects of the medium on the vortex motion

    Directory of Open Access Journals (Sweden)

    Kozhevnikov Arkadii

    2016-01-01

    Full Text Available Closed-vortex dynamics is considered in the nonrelativistic version of the Abelian Higgs Model. The effect of the exchange of excitations propagating in the medium on the motion of the vortex string is taken into account. We obtain the effective action and the equation of motion, both of which include the exchange of propagating excitations between distant segments of the vortex as well as the possibility of its interaction with a static fermion-asymmetric background. These results are applied to derive the time dependence of the basic geometrical characteristics of the contour.

  1. Recent extensions and use of the statistical model code EMPIRE-II - version: 2.17 Millesimo

    International Nuclear Information System (INIS)

    Herman, M.

    2003-01-01

    These lecture notes describe new features of the modular code EMPIRE-2.17, designed to perform comprehensive calculations of nuclear reactions using a variety of nuclear reaction models. Compared to version 2.13, the current release has been extended by including a coupled-channels mechanism, an exciton model, a Monte Carlo approach to preequilibrium emission, the use of microscopic level densities, a width fluctuation correction, detailed calculation of the recoil spectra, and powerful plotting capabilities provided by the ZVView package. The second part of this lecture concentrates on the use of the code in practical calculations, with emphasis on the aspects relevant to nuclear data evaluation. In particular, the adjustment of model parameters is discussed in detail. (author)

  2. Online dynamical downscaling of temperature and precipitation within the iLOVECLIM model (version 1.1)

    Science.gov (United States)

    Quiquet, Aurélien; Roche, Didier M.; Dumas, Christophe; Paillard, Didier

    2018-02-01

    This paper presents the inclusion of an online dynamical downscaling of temperature and precipitation within the model of intermediate complexity iLOVECLIM v1.1. We describe the methodology used to generate temperature and precipitation fields on a 40 km × 40 km Cartesian grid of the Northern Hemisphere from the T21 native atmospheric model grid. Our scheme is not grid specific and conserves energy and moisture in the same way as the original climate model. We show that we are able to generate a high-resolution field whose spatial variability is in better agreement with the observations than that of the standard model. Although the large-scale model biases are not corrected, for selected model parameters the downscaling can induce a better overall performance than the standard version on both the high-resolution grid and the native grid. Foreseen applications of this new model feature include the improvement of ice sheet model coupling and high-resolution land surface models.
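    The conservation property mentioned above (the downscaled field carrying no more energy or moisture than the coarse field) can be sketched generically. This is not the iLOVECLIM scheme itself; the refinement factor, anomaly pattern, and field values below are invented for illustration.

```python
import numpy as np

# Minimal conservation-preserving downscaling sketch: refine a coarse field
# onto a finer grid, add a hypothetical sub-grid anomaly pattern (e.g. from
# topography), then rescale so each coarse-cell mean -- and hence the total
# budget -- is unchanged.

def downscale_conservative(coarse, factor, anomaly):
    ny, nx = coarse.shape
    fine = np.repeat(np.repeat(coarse, factor, axis=0), factor, axis=1)
    fine = fine + anomaly
    # Restore the coarse-cell means (the conservation constraint).
    for j in range(ny):
        for i in range(nx):
            block = fine[j*factor:(j+1)*factor, i*factor:(i+1)*factor]
            block += coarse[j, i] - block.mean()
    return fine

rng = np.random.default_rng(0)
coarse = rng.uniform(270.0, 300.0, size=(4, 4))   # coarse-grid temperatures (K)
anomaly = rng.normal(0.0, 2.0, size=(16, 16))     # hypothetical sub-grid pattern
fine = downscale_conservative(coarse, 4, anomaly)

print(abs(fine.mean() - coarse.mean()))  # conserved up to rounding
```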

  3. Online dynamical downscaling of temperature and precipitation within the iLOVECLIM model (version 1.1

    Directory of Open Access Journals (Sweden)

    A. Quiquet

    2018-02-01

    Full Text Available This paper presents the inclusion of an online dynamical downscaling of temperature and precipitation within the model of intermediate complexity iLOVECLIM v1.1. We describe the following methodology to generate temperature and precipitation fields on a 40 km  ×  40 km Cartesian grid of the Northern Hemisphere from the T21 native atmospheric model grid. Our scheme is not grid specific and conserves energy and moisture in the same way as the original climate model. We show that we are able to generate a high-resolution field which presents a spatial variability in better agreement with the observations compared to the standard model. Although the large-scale model biases are not corrected, for selected model parameters, the downscaling can induce a better overall performance compared to the standard version on both the high-resolution grid and on the native grid. Foreseen applications of this new model feature include the improvement of ice sheet model coupling and high-resolution land surface models.

  4. A psychometric evaluation of the Swedish version of the Research Utilization Questionnaire using a Rasch measurement model.

    Science.gov (United States)

    Lundberg, Veronica; Boström, Anne-Marie; Malinowsky, Camilla

    2017-07-30

    Evidence-based practice and research utilisation have become commonly used concepts in health care. The Research Utilization Questionnaire (RUQ) has been recognised as a widely used instrument measuring the perception of research utilisation among nursing staff in clinical practice. Few studies have, however, analysed the psychometric properties of the RUQ. The aim of this study was to examine the psychometric properties of the Swedish version of the three subscales of the RUQ using a Rasch measurement model. This study has a cross-sectional design using a sample of 163 staff (response rate 81%) working in one nursing home in Sweden. Data were collected using the Swedish version of the RUQ in 2012. The three subscales Attitudes towards research, Availability of and support for research use and Use of research findings in clinical practice were investigated. Data were analysed using a Rasch measurement model. The results indicate the presence of multidimensionality in all subscales. Moreover, internal scale validity and person response validity also yield some less satisfactory results, especially for the subscale Use of research findings. Overall, there seems to be a problem with the negatively worded statements. The findings suggest that clarification and refinement of items, including additional psychometric evaluation of the RUQ, are needed before using the instrument in clinical practice and research studies among staff in nursing homes. © 2017 Nordic College of Caring Science.

  5. Validation of the Malaysian versions of the Parents and Children Health Survey for Asthma by using the Rasch model.

    Science.gov (United States)

    Hussein, Maryam Se; Akram, Waqas; Mamat, Mohd Nor; Majeed, Abu Bakar Abdul; Ismail, Nahlah Elkudssiah Binti

    2015-04-01

    In recent years, health-related quality of life (HRQOL) has become an important outcome measure in epidemiologic studies and clinical trials. For patients with asthma there are many instruments, but most of them have been developed in English. With the increase in research projects, researchers working in other languages have two options: either to develop a new measure or to translate an already developed one. The Children Health Survey for Asthma, developed by the American Academy of Pediatrics, has two versions: one for the parents (CHSA) and one for the child (CHSA-C). However, there is no Malay version of the CHSA or the CHSA-C. The aim of this study was to translate the Parent and Children Health Survey for Asthma and determine the validity and reliability of the Malaysian versions. The questionnaires were translated into Bahasa Melayu using previously established guidelines, and data from 180 respondents (asthmatic children and their parents) were analysed using the Rasch model, an approach that has been increasingly used in the health field and that examines the performance of each item rather than the total score. Internal consistency was high for the parent questionnaire (CHSA) (person reliability = 0.88, item reliability = 0.97) and good for the child questionnaire (CHSA-C) (person reliability = 0.83, item reliability = 0.94). This study also shows that all items of both questionnaires (CHSA and CHSA-C) fit the Rasch model. The study produced questionnaires that are conceptually equivalent to the originals, easy for the children and their parents to understand, and good in terms of internal consistency. Because the questionnaire has two versions, one for the child and one for the parents, they could be used in clinical practice to measure the effect of asthma on children and their families.
This research translated two instruments into another language (Bahasa Melayu) and evaluated their reliability and
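    The dichotomous Rasch model underlying analyses like the one above has a simple closed form: the probability that a person of ability theta endorses an item of difficulty b is a logistic function of theta − b. A minimal sketch, with invented ability and difficulty values (this is not the authors' data or analysis):

```python
import numpy as np

# Dichotomous Rasch model: P(X = 1 | theta, b) = 1 / (1 + exp(-(theta - b))).

def rasch_prob(theta, b):
    return 1.0 / (1.0 + np.exp(-(theta - b)))

thetas = np.array([-1.0, 0.0, 1.5])      # hypothetical person abilities (logits)
items = np.array([-0.5, 0.0, 0.5, 1.0])  # hypothetical item difficulties (logits)

# Expected raw score for each person: sum of endorsement probabilities
# over the items. Higher ability yields a higher expected score.
expected = rasch_prob(thetas[:, None], items[None, :]).sum(axis=1)
print(expected)
```

    Person and item reliability coefficients such as those quoted in the abstract are computed from how well separated these estimated abilities and difficulties are relative to their standard errors.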

  6. Accounting for observation uncertainties in an evaluation metric of low latitude turbulent air-sea fluxes: application to the comparison of a suite of IPSL model versions

    Science.gov (United States)

    Servonnat, Jérôme; Găinuşă-Bogdan, Alina; Braconnot, Pascale

    2017-09-01

    Turbulent momentum and heat (sensible heat and latent heat) fluxes at the air-sea interface are key components of the overall energetics of the Earth's climate. The evaluation of these fluxes in climate models is still difficult because of the large uncertainties associated with the reference products. In this paper we present an objective metric accounting for reference uncertainties to evaluate the annual cycle of the low-latitude turbulent fluxes of a suite of IPSL climate models. This metric consists of a Hotelling T² test between the simulated and observed fields in a reduced space characterized by the dominant modes of variability that are common to both the model and the reference, taking into account the observational uncertainty. The test is thus more severe when uncertainties are small, as is the case for sea surface temperature (SST). The results of the test show that for almost all variables and all model versions the model-reference differences are not zero. It is not possible to distinguish between model versions for sensible heat and meridional wind stress, certainly due to the large observational uncertainties. All model versions share similar biases for the different variables. There is no improvement between the reference versions of the IPSL model used for CMIP3 and CMIP5. The test also reveals that the higher horizontal resolution fails to improve the representation of the turbulent surface fluxes compared to the other versions. The representation of the fluxes is further degraded in a version with improved atmospheric physics, with an amplification of some of the biases in the Indian Ocean and in the intertropical convergence zone. The ranking of the model versions for the turbulent fluxes is not correlated with the ranking found for SST. This highlights that despite the fact that SST gradients are important for the large-scale atmospheric circulation patterns, other factors such as wind speed, and air-sea temperature contrast play an
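    The core of a Hotelling T² test in a reduced space can be sketched schematically. This is not the authors' exact metric: the number of modes, the reference products, and the model projection below are invented, and the spread among references stands in for the observational-uncertainty covariance.

```python
import numpy as np

# Schematic Hotelling T^2 test: is the mean model-minus-reference difference,
# projected onto a few leading modes, zero given the spread among references?

rng = np.random.default_rng(1)
k = 3                                      # number of retained modes
refs = rng.normal(0.0, 1.0, size=(8, k))   # 8 hypothetical reference products
model = refs.mean(axis=0) + np.array([1.5, -0.8, 0.4])  # model projection

n = refs.shape[0]
d = model - refs.mean(axis=0)              # difference in the reduced space
S = np.cov(refs, rowvar=False)             # reference-uncertainty covariance
t2 = n * d @ np.linalg.solve(S, d)         # Hotelling T^2 statistic

print(t2)  # large when the difference exceeds the reference uncertainty
```

    As the abstract notes, a small covariance S (tight observational uncertainty, as for SST) inflates T², making the test more severe for the same model-reference difference.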

  7. First episode schizophrenia

    African Journals Online (AJOL)

    with schizophrenia present clinically with psychotic, negative and cognitive ... changes in their emotions, cognition or behaviour which may indicate a ... contribute 80% to the risk of schizophrenia developing. A number of .... Positive symptoms ... Depression ... treatment of first episode schizophrenia is of critical importance.

  8. Approaches in highly parameterized inversion—PEST++ Version 3, a Parameter ESTimation and uncertainty analysis software suite optimized for large environmental models

    Science.gov (United States)

    Welter, David E.; White, Jeremy T.; Hunt, Randall J.; Doherty, John E.

    2015-09-18

    The PEST++ Version 1 object-oriented parameter estimation code is here extended to Version 3 to incorporate additional algorithms and tools to further improve support for large and complex environmental modeling problems. PEST++ Version 3 includes the Gauss-Marquardt-Levenberg (GML) algorithm for nonlinear parameter estimation, Tikhonov regularization, integrated linear-based uncertainty quantification, options of integrated TCP/IP based parallel run management or external independent run management by use of a Version 2 update of the GENIE Version 1 software code, and utilities for global sensitivity analyses. The Version 3 code design is consistent with PEST++ Version 1 and continues to be designed to lower the barriers of entry for users as well as developers while providing efficient and optimized algorithms capable of accommodating large, highly parameterized inverse problems. As such, this effort continues the original focus of (1) implementing the most popular and powerful features of the PEST software suite in a fashion that is easy for novice or experienced modelers to use and (2) developing a software framework that is easy to extend.
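    The Gauss-Marquardt-Levenberg iteration with Tikhonov regularization named above can be sketched on a tiny problem. This is the family of algorithm PEST++ implements, not PEST++ itself; the model y = a·exp(b·x), the damping value, and the prior below are all invented for illustration.

```python
import numpy as np

# Damped Gauss-Newton (Marquardt) iteration with a Tikhonov penalty pulling
# the parameters toward preferred prior values, on a toy exponential model.

def residuals(p, x, y):
    return y - p[0] * np.exp(p[1] * x)

def jacobian(p, x):
    # Derivatives of the residuals with respect to (a, b).
    return np.column_stack([-np.exp(p[1] * x), -p[0] * x * np.exp(p[1] * x)])

x = np.linspace(0.0, 1.0, 20)
y = 2.0 * np.exp(-1.3 * x)              # synthetic, noise-free observations
p = np.array([1.0, 0.0])                # initial parameter guess
p_prior = np.array([1.0, 0.0])          # Tikhonov preferred values
lam, tik = 1e-2, 1e-6                   # Marquardt damping, regularization weight

for _ in range(50):
    r = residuals(p, x, y)
    J = jacobian(p, x)
    A = J.T @ J + lam * np.diag(np.diag(J.T @ J)) + tik * np.eye(2)
    g = J.T @ r + tik * (p - p_prior)   # gradient of the penalized objective
    p = p - np.linalg.solve(A, g)

print(p)  # approaches the true parameters (2.0, -1.3)
```

    In highly parameterized inversion the same normal-equations step is taken with thousands of parameters, which is why regularization and parallel run management dominate the design of codes like PEST++.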

  9. Assessment of two versions of regional climate model in simulating the Indian Summer Monsoon over South Asia CORDEX domain

    Science.gov (United States)

    Pattnayak, K. C.; Panda, S. K.; Saraswat, Vaishali; Dash, S. K.

    2018-04-01

    This study assesses the performance of two versions of the Regional Climate Model (RegCM) in simulating the Indian summer monsoon over South Asia for the period 1998 to 2003, with the aim of conducting future climate change simulations. Two sets of experiments were carried out with two different versions of RegCM (viz. RegCM4.2 and RegCM4.3), with the lateral boundary forcings provided by the European Centre for Medium-Range Weather Forecasts reanalysis (ERA-Interim) at 50 km horizontal resolution. The major updates in RegCM4.3 in comparison to the older version RegCM4.2 are the inclusion of measured solar irradiance in place of a hardcoded solar constant and additional layers in the stratosphere. The analysis shows that the Indian summer monsoon rainfall, moisture flux and surface net downward shortwave flux are better represented in RegCM4.3 than in the RegCM4.2 simulations. Excessive moisture flux in the RegCM4.2 simulation over the northern Arabian Sea and Peninsular India resulted in an overestimation of rainfall over the Western Ghats and the Peninsular region, as a result of which the all-India rainfall has been overestimated. RegCM4.3 has performed well over India as a whole, as well as over its four homogeneous rainfall zones, in reproducing the mean monsoon rainfall and the inter-annual variation of rainfall. Further, the monsoon onset, the low-level Somali Jet and the upper-level tropical easterly jet are better represented in RegCM4.3 than in RegCM4.2. Thus, RegCM4.3 has performed better in simulating the mean summer monsoon circulation over South Asia. Hence, RegCM4.3 may be used to study future climate change over South Asia.

  10. The MIRAB Model of Small Island Economies in the Pacific and their Security Issues: Revised Version

    OpenAIRE

    Tisdell, Clem

    2014-01-01

    The MIRAB model of Pacific island micro-economies was developed in the mid-1980s by the New Zealand economists, Bertram and Watters, and dominated the literature on the economics of small island nations and economies until alternative models were proposed two decades later. Nevertheless, it is still an influential theory. MIRAB is an acronym for migration (MI), remittances (R), foreign aid (A) and the public bureaucracy (B), the main components of the MIRAB model. The nature of this model i...

  11. The construction of semantic memory: grammar based representations learned from relational episodic information

    Directory of Open Access Journals (Sweden)

    Francesco P Battaglia

    2011-08-01

    Full Text Available After acquisition, memories underlie a process of consolidation, making them more resistant to interference and brain injury. Memory consolidation involves systems-level interactions, most importantly between the hippocampus and associated structures, which takes part in the initial encoding of memory, and the neocortex, which supports long-term storage. This dichotomy parallels the contrast between episodic memory (tied to the hippocampal formation), collecting an autobiographical stream of experiences, and semantic memory, a repertoire of facts and statistical regularities about the world, involving the neocortex at large. Experimental evidence points to a gradual transformation of memories, following encoding, from an episodic to a semantic character. This may require an exchange of information between different memory modules during inactive periods. We propose a theory for such interactions and for the formation of semantic memory, in which episodic memory is encoded as relational data. Semantic memory is modeled as a modified stochastic grammar, which learns to parse episodic configurations expressed as an association matrix. The grammar produces tree-like representations of episodes, describing the relationships between its main constituents at multiple levels of categorization, based on its current knowledge of world regularities. These regularities are learned by the grammar from episodic memory information, through an expectation-maximization procedure, analogous to the inside-outside algorithm for stochastic context-free grammars. We propose that a Monte-Carlo sampling version of this algorithm can be mapped on the dynamics of "sleep replay" of previously acquired information in the hippocampus and neocortex. We propose that the model can reproduce several properties of semantic memory such as decontextualization, top-down processing, and creation of schemata.

  12. The Construction of Semantic Memory: Grammar-Based Representations Learned from Relational Episodic Information

    Science.gov (United States)

    Battaglia, Francesco P.; Pennartz, Cyriel M. A.

    2011-01-01

    After acquisition, memories underlie a process of consolidation, making them more resistant to interference and brain injury. Memory consolidation involves systems-level interactions, most importantly between the hippocampus and associated structures, which takes part in the initial encoding of memory, and the neocortex, which supports long-term storage. This dichotomy parallels the contrast between episodic memory (tied to the hippocampal formation), collecting an autobiographical stream of experiences, and semantic memory, a repertoire of facts and statistical regularities about the world, involving the neocortex at large. Experimental evidence points to a gradual transformation of memories, following encoding, from an episodic to a semantic character. This may require an exchange of information between different memory modules during inactive periods. We propose a theory for such interactions and for the formation of semantic memory, in which episodic memory is encoded as relational data. Semantic memory is modeled as a modified stochastic grammar, which learns to parse episodic configurations expressed as an association matrix. The grammar produces tree-like representations of episodes, describing the relationships between its main constituents at multiple levels of categorization, based on its current knowledge of world regularities. These regularities are learned by the grammar from episodic memory information, through an expectation-maximization procedure, analogous to the inside–outside algorithm for stochastic context-free grammars. We propose that a Monte-Carlo sampling version of this algorithm can be mapped on the dynamics of “sleep replay” of previously acquired information in the hippocampus and neocortex. We propose that the model can reproduce several properties of semantic memory such as decontextualization, top-down processing, and creation of schemata. PMID:21887143
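    The inside-outside algorithm the authors invoke rests on the "inside" recursion: the probability that a nonterminal generates a span of the input. A toy sketch with a generic Chomsky-normal-form stochastic grammar (invented rules and probabilities, not the authors' episodic grammar):

```python
# Inside probabilities for a tiny stochastic context-free grammar in CNF:
# chart[(i, j, X)] is the probability that nonterminal X generates the
# words from position i through j.

rules = {                          # hypothetical rule probabilities
    ("S", ("NP", "VP")): 1.0,
    ("NP", ("cat",)): 0.6,
    ("NP", ("dog",)): 0.4,
    ("VP", ("sleeps",)): 1.0,
}

def inside_prob(words, start="S"):
    n = len(words)
    chart = {}
    for i, w in enumerate(words):  # base case: lexical rules X -> w
        for (lhs, rhs), p in rules.items():
            if rhs == (w,):
                chart[(i, i, lhs)] = chart.get((i, i, lhs), 0.0) + p
    for span in range(2, n + 1):   # recursion: binary rules X -> Y Z
        for i in range(n - span + 1):
            j = i + span - 1
            for (lhs, rhs), p in rules.items():
                if len(rhs) != 2:
                    continue
                for k in range(i, j):  # sum over split points
                    left = chart.get((i, k, rhs[0]), 0.0)
                    right = chart.get((k + 1, j, rhs[1]), 0.0)
                    chart[(i, j, lhs)] = chart.get((i, j, lhs), 0.0) + p * left * right
    return chart.get((0, n - 1, start), 0.0)

print(inside_prob(["cat", "sleeps"]))  # 0.6
```

    The outside pass, combined with this one, yields the expected rule counts used in the expectation-maximization update that the abstract describes.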

  13. Implementation of methane cycling for deep time, global warming simulations with the DCESS Earth System Model (Version 1.2)

    DEFF Research Database (Denmark)

    Shaffer, Gary; Villanueva, Esteban Fernández; Rondanelli, Roberto

    2017-01-01

    Geological records reveal a number of ancient, large and rapid negative excursions of carbon-13 isotope. Such excursions can only be explained by massive injections of depleted carbon to the Earth System over a short duration. These injections may have forced strong global warming events, sometimes ... or from warming-induced dissociation of methane hydrate, a solid compound of methane and water found in ocean sediments. As a consequence of the ubiquity and importance of methane in major Earth events, Earth System models should include a comprehensive treatment of methane cycling but such a treatment ... With this improved DCESS model version and paleo-reconstructions, we are now better armed to gauge the amounts, types, time scales and locations of methane injections driving specific, observed deep time, global warming events.

  14. A framework for expanding aqueous chemistry in the Community Multiscale Air Quality (CMAQ) model version 5.1

    Science.gov (United States)

    Fahey, Kathleen M.; Carlton, Annmarie G.; Pye, Havala O. T.; Baek, Jaemeen; Hutzell, William T.; Stanier, Charles O.; Baker, Kirk R.; Wyat Appel, K.; Jaoui, Mohammed; Offenberg, John H.

    2017-04-01

    This paper describes the development and implementation of an extendable aqueous-phase chemistry option (AQCHEM - KMT(I)) for the Community Multiscale Air Quality (CMAQ) modeling system, version 5.1. Here, the Kinetic PreProcessor (KPP), version 2.2.3, is used to generate a Rosenbrock solver (Rodas3) to integrate the stiff system of ordinary differential equations (ODEs) that describe the mass transfer, chemical kinetics, and scavenging processes of CMAQ clouds. CMAQ's standard cloud chemistry module (AQCHEM) is structurally limited to the treatment of a simple chemical mechanism. This work advances our ability to test and implement more sophisticated aqueous chemical mechanisms in CMAQ and further investigate the impacts of microphysical parameters on cloud chemistry. Box model cloud chemistry simulations were performed to choose efficient solver and tolerance settings, evaluate the implementation of the KPP solver, and assess the direct impacts of alternative solver and kinetic mass transfer on predicted concentrations for a range of scenarios. Month-long CMAQ simulations for winter and summer periods over the US reveal the changes in model predictions due to these cloud module updates within the full chemical transport model. While monthly average CMAQ predictions are not drastically altered between AQCHEM and AQCHEM - KMT, hourly concentration differences can be significant. With added in-cloud secondary organic aerosol (SOA) formation from biogenic epoxides (AQCHEM - KMTI), normalized mean error and bias statistics are slightly improved for 2-methyltetrols and 2-methylglyceric acid at the Research Triangle Park measurement site in North Carolina during the Southern Oxidant and Aerosol Study (SOAS) period. The added in-cloud chemistry leads to a monthly average increase of 11-18 % in cloud SOA at the surface in the eastern United States for June 2013.
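    The reason a Rosenbrock (linearly implicit) solver suits stiff aqueous chemistry can be shown with a one-stage Rosenbrock-Euler step on a toy two-species system. This is an illustration of the solver class, not CMAQ's Rodas3 scheme, and the rate constants are invented.

```python
import numpy as np

# One-stage Rosenbrock (linearly implicit Euler) on toy stiff kinetics:
# each step solves (I - dt*J) dy = dt*f(y), so the fast reaction is handled
# stably with time steps far larger than an explicit method would allow.

def f(y):
    k_fast, k_slow = 1.0e4, 1.0   # invented fast/slow rate constants
    return np.array([-k_fast * y[0] + k_slow * y[1],
                      k_fast * y[0] - k_slow * y[1]])

def jac(y):
    k_fast, k_slow = 1.0e4, 1.0
    return np.array([[-k_fast,  k_slow],
                     [ k_fast, -k_slow]])

y = np.array([1.0, 0.0])
dt = 0.01   # far above the ~2e-4 stability limit of explicit Euler here
for _ in range(1000):
    lhs = np.eye(2) - dt * jac(y)
    y = y + np.linalg.solve(lhs, dt * f(y))

print(y)  # relaxes toward the slow equilibrium; total mass is conserved
```

    Tools like KPP automate exactly this: they generate f, its Jacobian, and a higher-order Rosenbrock integrator (e.g. Rodas3) from a symbolic mechanism description.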

  15. Acute episodes of predator exposure in conjunction with chronic social instability as an animal model of post-traumatic stress disorder.

    Science.gov (United States)

    Zoladz, Phillip R; Conrad, Cheryl D; Fleshner, Monika; Diamond, David M

    2008-07-01

    People who are exposed to horrific, life-threatening experiences are at risk for developing post-traumatic stress disorder (PTSD). Some of the symptoms of PTSD include persistent anxiety, exaggerated startle, cognitive impairments and increased sensitivity to yohimbine, an alpha(2)-adrenergic receptor antagonist. We have taken into account the conditions known to induce PTSD, as well as factors responsible for long-term maintenance of the disorder, to develop an animal model of PTSD. Adult male Sprague-Dawley rats were administered a total of 31 days of psychosocial stress, composed of acute and chronic components. The acute component was a 1-h stress session (immobilization during cat exposure), which occurred on Days 1 and 11. The chronic component was that on all 31 days the rats were given unstable housing conditions. We found that psychosocially stressed rats had reduced growth rate, reduced thymus weight, increased adrenal gland weight, increased anxiety, an exaggerated startle response, cognitive impairments, greater cardiovascular and corticosterone reactivity to an acute stressor and heightened responsivity to yohimbine. This work demonstrates the effectiveness of acute inescapable episodes of predator exposure administered in conjunction with daily social instability as an animal model of PTSD.

  16. Episodes of cross-polar transport in the Arctic troposphere during July 2008 as seen from models, satellite, and aircraft observations

    Directory of Open Access Journals (Sweden)

    H. Sodemann

    2011-04-01

    Full Text Available During the POLARCAT summer campaign in 2008, two episodes (2–5 July and 7–10 July 2008) occurred where low-pressure systems traveled from Siberia across the Arctic Ocean towards the North Pole. The two cyclones had extensive smoke plumes from Siberian forest fires and anthropogenic sources in East Asia embedded in their associated air masses, creating an excellent opportunity to use satellite and aircraft observations to validate the performance of atmospheric transport models in the Arctic, which is a challenging model domain due to numerical and other complications.

    Here we compare transport simulations of carbon monoxide (CO) from the Lagrangian transport model FLEXPART and the Eulerian chemical transport model TOMCAT with retrievals of total column CO from the IASI passive infrared sensor onboard the MetOp-A satellite. The main aspect of the comparison is how realistic horizontal and vertical structures are represented in the model simulations. Analysis of CALIPSO lidar curtains and in situ aircraft measurements provide further independent reference points to assess how reliable the model simulations are and what the main limitations are.

    The horizontal structure of mid-latitude pollution plumes agrees well between the IASI total column CO and the model simulations. However, finer-scale structures are too quickly diffused in the Eulerian model. Applying the IASI averaging kernels to the model data is essential for a meaningful comparison. Using aircraft data as a reference suggests that the satellite data are biased high, while TOMCAT is biased low. FLEXPART fits the aircraft data rather well, but due to added background concentrations the simulation is not independent from observations. The multi-data, multi-model approach allows separating the influences of meteorological fields, model realisation, and grid type on the plume structure. In addition to the very good agreement between simulated and observed total column CO

  17. On-the-fly confluence detection for statistical model checking (extended version)

    NARCIS (Netherlands)

    Hartmanns, Arnd; Timmer, Mark

    Statistical model checking is an analysis method that circumvents the state space explosion problem in model-based verification by combining probabilistic simulation with statistical methods that provide clear error bounds. As a simulation-based technique, it can only provide sound results if the
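    The statistical error bounds mentioned above typically come from concentration inequalities. A generic sketch (not the paper's confluence-detection algorithm): estimate the probability that a property holds by Monte Carlo simulation, with the Chernoff-Hoeffding bound fixing the number of runs; the biased coin below is a hypothetical stand-in for one simulation run.

```python
import math
import random

# Statistical model checking core loop: n independent runs give an estimate
# within epsilon of the true probability with confidence 1 - delta.

def required_runs(epsilon, delta):
    """Smallest n with P(|estimate - p| > epsilon) <= delta (Hoeffding bound)."""
    return math.ceil(math.log(2.0 / delta) / (2.0 * epsilon ** 2))

def simulate_once(rng):
    # Hypothetical model: one simulated trace satisfies the property w.p. 0.3.
    return rng.random() < 0.3

epsilon, delta = 0.01, 0.05
n = required_runs(epsilon, delta)
rng = random.Random(42)
estimate = sum(simulate_once(rng) for _ in range(n)) / n

print(n, estimate)
```

    The bound is distribution-free, which is what lets the method sidestep state-space exploration entirely, at the cost of a run count that grows as 1/epsilon².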

  18. Technical documentation and user's guide for City-County Allocation Model (CCAM). Version 1.0

    International Nuclear Information System (INIS)

    Clark, L.T. Jr.; Scott, M.J.; Hammer, P.

    1986-05-01

    The City-County Allocation Model (CCAM) was developed as part of the Monitored Retrievable Storage (MRS) Program. The CCAM model was designed to allocate population changes forecasted by the MASTER model to specific local communities within commuting distance of the MRS facility. The CCAM model was designed to then forecast the potential changes in demand for key community services such as housing, police protection, and utilities for these communities. The CCAM model uses a flexible on-line data base on demand for community services that is based on a combination of local service levels and state and national service standards. The CCAM model can be used to quickly forecast the potential community service consequences of economic development for local communities anywhere in the country. The purpose of this manual is to assist the user in understanding and operating the CCAM model. The manual explains the data sources for the model and code modifications as well as the operational procedures.

  19. Comments on a time-dependent version of the linear-quadratic model

    International Nuclear Information System (INIS)

    Tucker, S.L.; Travis, E.L.

    1990-01-01

    The accuracy and interpretation of the 'LQ + time' model are discussed. Evidence is presented, based on data in the literature, that this model does not accurately describe the changes in isoeffect dose occurring with protraction of the overall treatment time during fractionated irradiation of the lung. This lack of fit of the model explains, in part, the surprisingly large values of γ/α that have been derived from experimental lung data. The large apparent time factors for lung suggested by the model are also partly explained by the fact that γT/α, despite having units of dose, actually measures the influence of treatment time on the effect scale, not the dose scale, and is shown to consistently overestimate the change in total dose. The unusually high values of α/β that have been derived for lung using the model are shown to be influenced by the method by which the model was fitted to data. Reanalyses of the data using a more statistically valid regression procedure produce estimates of α/β more typical of those usually cited for lung. Most importantly, published isoeffect data from lung indicate that the true deviation from the linear-quadratic (LQ) model is nonlinear in time, instead of linear, and also depends on other factors such as the effect level and the size of dose per fraction. Thus, the authors do not advocate the use of the 'LQ + time' expression as a general isoeffect model. (author). 32 refs.; 3 figs.; 1 tab
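    The isoeffect behaviour criticized above can be made concrete with the 'LQ + time' expression E = n·d·(α + β·d) − γ·T: for a fixed effect E, the tolerated total dose n·d rises linearly with overall treatment time T, which is the linear time dependence the comments argue the lung data do not support. The parameter values below are invented for illustration only.

```python
# 'LQ + time' isoeffect illustration with hypothetical parameters:
# alpha/beta = 3 Gy and gamma/alpha = 0.5 Gy/day.

alpha, beta = 0.3, 0.1   # Gy^-1, Gy^-2 (hypothetical)
gamma = 0.15             # day^-1 (hypothetical; gamma/alpha = 0.5 Gy/day)

def isoeffect_total_dose(E, d, T):
    """Total dose n*d giving effect E at dose/fraction d and overall time T."""
    n = (E + gamma * T) / (d * (alpha + beta * d))
    return n * d

E = 30.0                 # arbitrary fixed effect level
doses = [isoeffect_total_dose(E, d=2.0, T=T) for T in (14, 28, 42)]
print(doses)             # equal increments: linear in T
```

    Note that equal increments of T give equal increments of total dose regardless of the effect level, whereas the isoeffect data discussed above indicate a time dependence that is nonlinear and effect-level dependent.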

  20. Hydrogen Macro System Model User Guide, Version 1.2.1

    Energy Technology Data Exchange (ETDEWEB)

    Ruth, M.; Diakov, V.; Sa, T.; Goldsby, M.; Genung, K.; Hoseley, R.; Smith, A.; Yuzugullu, E.

    2009-07-01

    The Hydrogen Macro System Model (MSM) is a simulation tool that links existing and emerging hydrogen-related models to perform rapid, cross-cutting analysis. It allows analysis of the economics, primary energy-source requirements, and emissions of hydrogen production and delivery pathways.

  1. Model Package Report: Central Plateau Vadose Zone Geoframework Version 1.0

    Energy Technology Data Exchange (ETDEWEB)

    Springer, Sarah D.

    2018-03-27

    The purpose of the Central Plateau Vadose Zone (CPVZ) Geoframework model (GFM) is to provide a reasonable, consistent, and defensible three-dimensional (3D) representation of the vadose zone beneath the Central Plateau at the Hanford Site to support the Composite Analysis (CA) vadose zone contaminant fate and transport models. The GFM is a 3D representation of the subsurface geologic structure. From this 3D geologic model, exported results in the form of points, surfaces, and/or volumes are used as inputs to populate and assemble the various numerical model architectures, providing a 3D-layered grid that is consistent with the GFM. The objective of this report is to define the process used to produce a hydrostratigraphic model for the vadose zone beneath the Hanford Site Central Plateau and the corresponding CA domain.

  2. Perspectives on Episodic-like and Episodic Memory

    OpenAIRE

    Bettina M Pause; Armin eZlomuzica; Kiyoka eKinugawa; Jean eMariani; Reinhard ePietrowsky; Ekrem eDere

    2013-01-01

    Episodic memory refers to the conscious recollection of a personal experience that contains information on what has happened and also where and when it happened. Recollection from episodic memory also implies a kind of first-person subjectivity that has been termed autonoetic consciousness. Episodic memory is extremely sensitive to cerebral aging and neurodegenerative diseases. In Alzheimer’s disease deficits in episodic memory function are among the first cognitive symptoms observed. Further...

  3. Multicomponent mass transport model: theory and numerical implementation (discrete-parcel-random-walk version)

    International Nuclear Information System (INIS)

    Ahlstrom, S.W.; Foote, H.P.; Arnett, R.C.; Cole, C.R.; Serne, R.J.

    1977-05-01

    The Multicomponent Mass Transfer (MMT) Model is a generic computer code, currently in its third generation, that was developed to predict the movement of radiocontaminants in the saturated and unsaturated sediments of the Hanford Site. This model was designed to use the water movement patterns produced by the unsaturated and saturated flow models coupled with dispersion and soil-waste reaction submodels to predict contaminant transport. This report documents the theoretical foundation and the numerical solution procedure of the current (third) generation of the MMT Model. The present model simulates mass transport processes using an analog referred to as the Discrete-Parcel-Random-Walk (DPRW) algorithm. The basic concepts of this solution technique are described and the advantages and disadvantages of the DPRW scheme are discussed in relation to more conventional numerical techniques such as the finite-difference and finite-element methods. Verification of the numerical algorithm is demonstrated by comparing model results with known closed-form solutions. A brief error and sensitivity analysis of the algorithm with respect to numerical parameters is also presented. A simulation of the tritium plume beneath the Hanford Site is included to illustrate the use of the model in a typical application. 32 figs
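
The discrete-parcel-random-walk idea summarized above lends itself to a compact sketch. The one-dimensional version below (illustrative values of u, D, and dt; not the MMT code) checks the simulated moments against the closed-form advection-dispersion result, mirroring the verification strategy the report describes.

```python
# Minimal DPRW sketch: each parcel is advected by the mean velocity and
# takes a Gaussian random step whose variance 2*D*dt reproduces Fickian
# dispersion. Parameters are illustrative, not from the MMT documentation.
import random

def dprw_step(positions, u, D, dt, rng):
    """One DPRW step: deterministic advection plus random-walk dispersion."""
    sigma = (2.0 * D * dt) ** 0.5
    return [x + u * dt + rng.gauss(0.0, sigma) for x in positions]

rng = random.Random(42)
u, D, dt, n_steps = 1.0, 0.5, 0.1, 100
parcels = [0.0] * 5000          # all parcels released at the origin
for _ in range(n_steps):
    parcels = dprw_step(parcels, u, D, dt, rng)

t = n_steps * dt
mean_x = sum(parcels) / len(parcels)
var_x = sum((x - mean_x) ** 2 for x in parcels) / len(parcels)
# Closed-form check: mean -> u*t = 10, variance -> 2*D*t = 10.
```
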

  4. Statistical analysis of fracture data, adapted for modelling Discrete Fracture Networks-Version 2

    Energy Technology Data Exchange (ETDEWEB)

    Munier, Raymond

    2004-04-01

    The report describes the parameters which are necessary for DFN modelling, the way in which they can be extracted from the data base acquired during site investigations, and their assignment to geometrical objects in the geological model. The purpose here is to present a methodology for use in SKB modelling projects. Though the methodology is deliberately tuned to facilitate subsequent DFN modelling with other tools, some of the recommendations presented here are applicable to other aspects of geo-modelling as well. For instance, we here recommend a nomenclature to be used within SKB modelling projects, which are truly multidisciplinary, to ease communications between scientific disciplines and avoid misunderstanding of common concepts. This report originally occurred as an appendix to a strategy report for geological modelling (SKB-R--03-07). Strategy reports were intended to be successively updated to include experience gained during site investigations and site modelling. Rather than updating the entire strategy report, we choose to present the update of the appendix as a stand-alone document. This document thus replaces Appendix A2 in SKB-R--03-07. In short, the update consists of the following: The target audience has been broadened and as a consequence thereof, the purpose of the document. Correction of errors found in various formulae. All expressions have been rewritten. Inclusion of more worked examples in each section. A new section describing area normalisation. A new section on spatial correlation. A new section describing anisotropy. A new chapter describing the expected output from DFN modelling, within SKB projects.
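
Two of the DFN ingredients named above, a fracture-size distribution and an intensity normalisation, can be sketched as follows. The truncated power law, the exponent kr, and the target P32 are illustrative assumptions, not values from the SKB report.

```python
# Hedged illustration of two DFN building blocks: sampling fracture radii
# from a truncated power law, then normalising the fracture count so a
# target volumetric intensity P32 (fracture area per unit rock volume)
# is met. All parameter values are invented for illustration.
import math
import random

def sample_radius(rng, kr, r_min, r_max):
    """Inverse-CDF sample from a power law truncated to [r_min, r_max]."""
    u = rng.random()
    a, b = r_min ** -kr, r_max ** -kr
    return (a - u * (a - b)) ** (-1.0 / kr)

rng = random.Random(7)
kr, r_min, r_max = 2.5, 0.5, 50.0            # illustrative values
radii = [sample_radius(rng, kr, r_min, r_max) for _ in range(5000)]

# Intensity normalisation: choose how many fractures to generate so the
# expected total fracture area divided by the model volume equals P32.
volume, p32_target = 1.0e6, 0.8
mean_area = sum(math.pi * r * r for r in radii) / len(radii)
n_fractures = round(p32_target * volume / mean_area)
```
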

  5. Statistical analysis of fracture data, adapted for modelling Discrete Fracture Networks-Version 2

    International Nuclear Information System (INIS)

    Munier, Raymond

    2004-04-01

    The report describes the parameters which are necessary for DFN modelling, the way in which they can be extracted from the data base acquired during site investigations, and their assignment to geometrical objects in the geological model. The purpose here is to present a methodology for use in SKB modelling projects. Though the methodology is deliberately tuned to facilitate subsequent DFN modelling with other tools, some of the recommendations presented here are applicable to other aspects of geo-modelling as well. For instance, we here recommend a nomenclature to be used within SKB modelling projects, which are truly multidisciplinary, to ease communications between scientific disciplines and avoid misunderstanding of common concepts. This report originally occurred as an appendix to a strategy report for geological modelling (SKB-R--03-07). Strategy reports were intended to be successively updated to include experience gained during site investigations and site modelling. Rather than updating the entire strategy report, we choose to present the update of the appendix as a stand-alone document. This document thus replaces Appendix A2 in SKB-R--03-07. In short, the update consists of the following: The target audience has been broadened and as a consequence thereof, the purpose of the document. Correction of errors found in various formulae. All expressions have been rewritten. Inclusion of more worked examples in each section. A new section describing area normalisation. A new section on spatial correlation. A new section describing anisotropy. A new chapter describing the expected output from DFN modelling, within SKB projects

  6. A Scalable Version of the Navy Operational Global Atmospheric Prediction System Spectral Forecast Model

    Directory of Open Access Journals (Sweden)

    Thomas E. Rosmond

    2000-01-01

    Full Text Available The Navy Operational Global Atmospheric Prediction System (NOGAPS) includes a state-of-the-art spectral forecast model similar to models run at several major operational numerical weather prediction (NWP) centers around the world. The model, developed by the Naval Research Laboratory (NRL) in Monterey, California, has run operationally at the Fleet Numerical Meteorological and Oceanographic Center (FNMOC) since 1982, and most recently has been run on a Cray C90 in a multi-tasked configuration. Typically the multi-tasked code runs on 10 to 15 processors with an overall parallel efficiency of about 90%. The operational resolution is T159L30, but other operational and research applications run at significantly lower resolutions. A scalable NOGAPS forecast model has been developed by NRL in anticipation of a FNMOC C90 replacement in about 2001, as well as for current NOGAPS research requirements to run on DOD High-Performance Computing (HPC) scalable systems. The model is designed to run with message passing (MPI). Model design criteria include bit reproducibility for different processor numbers and reasonably efficient performance on fully shared memory, distributed memory, and distributed shared memory systems for a wide range of model resolutions. Results for a wide range of processor numbers, model resolutions, and different vendor architectures are presented. Single node performance has been disappointing on RISC-based systems, at least compared to vector processor performance. This is a common complaint, and will require careful re-examination of traditional NWP model software design and data organization to fully exploit future scalable architectures.

  7. Parameterization Improvements and Functional and Structural Advances in Version 4 of the Community Land Model

    Directory of Open Access Journals (Sweden)

    Andrew G. Slater

    2011-05-01

    The Community Land Model is the land component of the Community Climate System Model. Here, we describe a broad set of model improvements and additions that have been provided through the CLM development community to create CLM4. The model is extended with a carbon-nitrogen (CN) biogeochemical model that is prognostic with respect to vegetation, litter, and soil carbon and nitrogen states and vegetation phenology. An urban canyon model is added and a transient land cover and land use change (LCLUC) capability, including wood harvest, is introduced, enabling study of historic and future LCLUC effects on energy, water, momentum, carbon, and nitrogen fluxes. The hydrology scheme is modified with a revised numerical solution of the Richards equation and a revised ground evaporation parameterization that accounts for litter and within-canopy stability. The new snow model incorporates the SNow and Ice Aerosol Radiation model (SNICAR), which includes aerosol deposition, grain-size-dependent snow aging, and vertically resolved snowpack heating, as well as new snow cover and snow burial fraction parameterizations. The thermal and hydrologic properties of organic soil are accounted for and the ground column is extended to ~50-m depth. Several other minor modifications to the land surface types dataset, grass and crop optical properties, atmospheric forcing height, roughness length and displacement height, and the disposition of snow-capped runoff are also incorporated. Taken together, these augmentations to CLM result in improved soil moisture dynamics, drier soils, and stronger soil moisture variability. The new model also exhibits higher snow cover, cooler soil temperatures in organic-rich soils, greater global river discharge, and lower albedos over forests and grasslands, all of which are improvements compared to CLM3.5. When CLM4 is run with CN, the mean biogeophysical simulation is slightly degraded because the vegetation structure is prognostic rather...

  8. On a discrete version of the CP 1 sigma model and surfaces immersed in R3

    International Nuclear Information System (INIS)

    Grundland, A M; Levi, D; Martina, L

    2003-01-01

    We present a discretization of the CP 1 sigma model. We show that the discrete CP 1 sigma model is described by a nonlinear partial second-order difference equation with rational nonlinearity. To derive discrete surfaces immersed in three-dimensional Euclidean space a 'complex' lattice is introduced. The so-obtained surfaces are characterized in terms of the quadrilateral cross-ratio of four surface points. In this way we prove that all surfaces associated with the discrete CP 1 sigma model are of constant mean curvature. An explicit example of such discrete surfaces is constructed
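
The quadrilateral cross-ratio used above to characterize the discrete surfaces can be computed directly. The convention below is one common choice and an assumption here, not necessarily the paper's; the check that the cross-ratio is real exactly when the four points are concyclic or collinear is a standard fact about Möbius-invariant cross-ratios.

```python
# Cross-ratio of four complex points, in one common convention (an
# assumption here): q = (z1-z2)(z3-z4) / ((z2-z3)(z4-z1)).
import cmath

def cross_ratio(z1, z2, z3, z4):
    """Complex cross-ratio of four points in the plane."""
    return ((z1 - z2) * (z3 - z4)) / ((z2 - z3) * (z4 - z1))

# Four points on the unit circle: the cross-ratio is real (up to rounding).
pts = [cmath.exp(1j * t) for t in (0.1, 0.9, 2.0, 4.0)]
q_circle = cross_ratio(*pts)

# A generic (non-concyclic) quadrilateral: genuinely complex cross-ratio.
q_generic = cross_ratio(0 + 0j, 1 + 0j, 1 + 1j, 0.3 + 2j)
```
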

  9. The Role of Episodic and Semantic Memory in Episodic Foresight

    Science.gov (United States)

    Martin-Ordas, Gema; Atance, Cristina M.; Louw, Alyssa

    2012-01-01

    In this paper we describe a special form of future thinking, termed "episodic foresight" and its relation with episodic and semantic memory. We outline the methodologies that have largely been developed in the last five years to assess this capacity in young children and non-human animals. Drawing on Tulving's definition of episodic and semantic…

  10. Perspectives on Episodic-Like and Episodic Memory

    Science.gov (United States)

    Pause, Bettina M.; Zlomuzica, Armin; Kinugawa, Kiyoka; Mariani, Jean; Pietrowsky, Reinhard; Dere, Ekrem

    2013-01-01

    Episodic memory refers to the conscious recollection of a personal experience that contains information on what has happened and also where and when it happened. Recollection from episodic memory also implies a kind of first-person subjectivity that has been termed autonoetic consciousness. Episodic memory is extremely sensitive to cerebral aging and neurodegenerative diseases. In Alzheimer’s disease deficits in episodic memory function are among the first cognitive symptoms observed. Furthermore, impaired episodic memory function is also observed in a variety of other neuropsychiatric diseases including dissociative disorders, schizophrenia, and Parkinson disease. Unfortunately, it is quite difficult to induce and measure episodic memories in the laboratory and it is even more difficult to measure it in clinical populations. Presently, the tests used to assess episodic memory function do not comply with even down-sized definitions of episodic-like memory as a memory for what happened, where, and when. They also require sophisticated verbal competences and are difficult to apply to patient populations. In this review, we will summarize the progress made in defining behavioral criteria of episodic-like memory in animals (and humans) as well as the perspectives in developing novel tests of human episodic memory which can also account for phenomenological aspects of episodic memory such as autonoetic awareness. We will also define basic behavioral, procedural, and phenomenological criteria which might be helpful for the development of a valid and reliable clinical test of human episodic memory. PMID:23616754

  11. Technical manual for basic version of the Markov chain nest productivity model (MCnest)

    Science.gov (United States)

    The Markov Chain Nest Productivity Model (or MCnest) integrates existing toxicity information from three standardized avian toxicity tests with information on species life history and the timing of pesticide applications relative to the timing of avian breeding seasons to quantit...
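
The Markov-chain idea behind MCnest can be sketched in a few lines. Everything here, the weekly time step, the failure probabilities, and the effect of a spray week, is illustrative only, not MCnest's actual life-history or exposure parameterization.

```python
# Hedged sketch of a Markov-chain nest model: a nest attempt steps
# through breeding-season weeks, failing each week with a probability
# that can be elevated during pesticide-application weeks.
import random

def simulate_nest(rng, weeks=4, p_fail=0.1, sprayed=(), p_fail_spray=0.4):
    """Return True if the nest survives all weeks to fledging."""
    for week in range(weeks):
        p = p_fail_spray if week in sprayed else p_fail
        if rng.random() < p:
            return False
    return True

rng = random.Random(1)
n = 20000
base = sum(simulate_nest(rng) for _ in range(n)) / n
exposed = sum(simulate_nest(rng, sprayed={1}) for _ in range(n)) / n
# Expected survival: 0.9**4 ~ 0.656 unsprayed, 0.9**3 * 0.6 ~ 0.437 with
# one spray week -- the application timing drives the productivity loss.
```
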

  12. The Cloud Feedback Model Intercomparison Project Observational Simulator Package: Version 2

    Science.gov (United States)

    Swales, Dustin J.; Pincus, Robert; Bodas-Salcedo, Alejandro

    2018-01-01

    The Cloud Feedback Model Intercomparison Project Observational Simulator Package (COSP) gathers together a collection of observation proxies or satellite simulators that translate model-simulated cloud properties to synthetic observations as would be obtained by a range of satellite observing systems. This paper introduces COSP2, an evolution focusing on more explicit and consistent separation between host model, coupling infrastructure, and individual observing proxies. Revisions also enhance flexibility by allowing for model-specific representation of sub-grid-scale cloudiness, provide greater clarity by clearly separating tasks, support greater use of shared code and data including shared inputs across simulators, and follow more uniform software standards to simplify implementation across a wide range of platforms. The complete package including a testing suite is freely available.

  13. The Cloud Feedback Model Intercomparison Project Observational Simulator Package: Version 2

    Directory of Open Access Journals (Sweden)

    D. J. Swales

    2018-01-01

    The Cloud Feedback Model Intercomparison Project Observational Simulator Package (COSP) gathers together a collection of observation proxies or satellite simulators that translate model-simulated cloud properties to synthetic observations as would be obtained by a range of satellite observing systems. This paper introduces COSP2, an evolution focusing on more explicit and consistent separation between host model, coupling infrastructure, and individual observing proxies. Revisions also enhance flexibility by allowing for model-specific representation of sub-grid-scale cloudiness, provide greater clarity by clearly separating tasks, support greater use of shared code and data including shared inputs across simulators, and follow more uniform software standards to simplify implementation across a wide range of platforms. The complete package including a testing suite is freely available.

  14. FMCSA safety program effectiveness measurement : carrier intervention effectiveness model, version 1.0 : [analysis brief].

    Science.gov (United States)

    2015-01-01

    The Carrier Intervention Effectiveness Model (CIEM) : provides the Federal Motor Carrier Safety : Administration (FMCSA) with a tool for measuring : the safety benefits of carrier interventions conducted : under the Compliance, Safety, Accountability...

  15. Modeled Radar Attenuation Rate Profile at the Vostok 5G Ice Core Site, Antarctica, Version 1

    Data.gov (United States)

    National Aeronautics and Space Administration — This data set provides a modeled radar attenuation rate profile, showing the predicted contributions from pure ice and impurities to radar attenuation at the Vostok...

  16. User’s manual for basic version of MCnest Markov chain nest productivity model

    Science.gov (United States)

    The Markov Chain Nest Productivity Model (or MCnest) integrates existing toxicity information from three standardized avian toxicity tests with information on species life history and the timing of pesticide applications relative to the timing of avian breeding seasons to quantit...

  17. MAPSS: Mapped Atmosphere-Plant-Soil System Model, Version 1.0

    Data.gov (United States)

    National Aeronautics and Space Administration — MAPSS (Mapped Atmosphere-Plant-Soil System) is a landscape to global vegetation distribution model that was developed to simulate the potential biosphere...

  18. MAPSS: Mapped Atmosphere-Plant-Soil System Model, Version 1.0

    Data.gov (United States)

    National Aeronautics and Space Administration — MAPSS (Mapped Atmosphere-Plant-Soil System) is a landscape to global vegetation distribution model that was developed to simulate the potential biosphere impacts and...

  19. Illustrating and homology modeling the proteins of the Zika virus [version 2; referees: 2 approved

    Directory of Open Access Journals (Sweden)

    Sean Ekins

    2016-09-01

    The Zika virus (ZIKV) is a flavivirus of the family Flaviviridae, similar to dengue virus, yellow fever virus and West Nile virus. Recent outbreaks in South America, Latin America, the Caribbean and in particular Brazil have led to concern over the spread of the disease and its potential to cause Guillain-Barré syndrome and microcephaly. Although ZIKV has been known for over 60 years, very little is known about the virus, with few publications and no crystal structures. No antivirals have been tested against it either in vitro or in vivo. ZIKV therefore epitomizes a neglected disease. Several steps have been proposed which could be taken to initiate ZIKV antiviral drug discovery, using both high-throughput screens as well as structure-based design based on homology models for the key proteins. We now describe preliminary homology models created for NS5, FtsJ, NS4B, NS4A, HELICc, DEXDc, peptidase S7, NS2B, NS2A, NS1, E stem, glycoprotein M, propeptide, capsid and glycoprotein E using SWISS-MODEL. Eleven out of 15 models pass our model quality criteria for their further use. While a ZIKV glycoprotein E homology model was initially described in the immature conformation as a trimer, we now describe the mature dimer conformer, which allowed the construction of an illustration of the complete virion. By comparing illustrations of ZIKV based on this new homology model and the dengue virus crystal structure, we propose potential differences that could be exploited for antiviral and vaccine design. The prediction of sites for glycosylation on this protein may also be useful in this regard. While we await a cryo-EM structure of ZIKV and eventual crystal structures of the individual proteins, these homology models provide the community with a starting point for structure-based design of drugs and vaccines as well as for computational virtual screening.

  20. Formal Analysis of Functional Behaviour for Model Transformations Based on Triple Graph Grammars - Extended Version

    OpenAIRE

    Hermann, Frank; Ehrig, Hartmut; Orejas, Fernando; Ulrike, Golas

    2010-01-01

    Triple Graph Grammars (TGGs) are a well-established concept for the specification of model transformations. In previous work we have formalized and analyzed already crucial properties of model transformations like termination, correctness and completeness, but functional behaviour - especially local confluence - is missing up to now. In order to close this gap we generate forward translation rules, which extend standard forward rules by translation attributes keeping track of the elements whi...

  1. Code-switched English Pronunciation Modeling for Swahili Spoken Term Detection (Pub Version, Open Access)

    Science.gov (United States)

    2016-05-03

    model (JSM), developed using Sequitur and trained on the CMUDict 0.7b American English dictionary (over 134k words), was used to detect English ... modeled using the closest Swahili vowel or vowel combination. In both cases these English L2P predictions were added to a dictionary as variants to swa... English queries as a function of overlap/correspondence with an existing reference English pronunciation dictionary. As the reference dictionary, we...

  2. Treatment outcomes of acute bipolar depressive episode with psychosis.

    Science.gov (United States)

    Caldieraro, Marco Antonio; Dufour, Steven; Sylvia, Louisa G; Gao, Keming; Ketter, Terence A; Bobo, William V; Walsh, Samantha; Janos, Jessica; Tohen, Mauricio; Reilly-Harrington, Noreen A; McElroy, Susan L; Shelton, Richard C; Bowden, Charles L; Deckersbach, Thilo; Nierenberg, Andrew A

    2018-05-01

    The impact of psychosis on the treatment of bipolar depression is remarkably understudied. The primary aim of this study was to compare treatment outcomes of bipolar depressed individuals with and without psychosis. The secondary aim was to compare the effect of lithium and quetiapine, each with adjunctive personalized treatments (APTs), in the psychotic subgroup. We assessed participants with DSM-IV bipolar depression included in a comparative effectiveness study of lithium and quetiapine with APTs (the Bipolar CHOICE study). Severity was assessed by the Bipolar Inventory of Symptoms Scale (BISS) and by the Clinical Global Impression Scale-Severity-Bipolar Version (CGI-S-BP). Mixed models were used to assess the course of symptom change, and Cox regression survival analysis was used to assess the time to remission. Psychotic features were present in 10.6% (n = 32) of the depressed participants (n = 303). Those with psychotic features had higher scores on the BISS before (75.2 ± 17.6 vs. 54.9 ± 16.3; P ...). Bipolar depressive episodes with psychotic features are more severe and, compared to nonpsychotic depressions, present a similar course of improvement. Given the small number of participants presenting psychosis, the lack of statistically significant difference between lithium- and quetiapine-based treatment of psychotic bipolar depressive episodes needs replication in a larger sample. © 2018 Wiley Periodicals, Inc.

  3. Impact of numerical choices on water conservation in the E3SM Atmosphere Model version 1 (EAMv1

    Directory of Open Access Journals (Sweden)

    K. Zhang

    2018-06-01

    The conservation of total water is an important numerical feature for global Earth system models. Even small conservation problems in the water budget can lead to systematic errors in century-long simulations. This study quantifies and reduces various sources of water conservation error in the atmosphere component of the Energy Exascale Earth System Model. Several sources of water conservation error have been identified during the development of the version 1 (V1) model. The largest errors result from the numerical coupling between the resolved dynamics and the parameterized sub-grid physics. A hybrid coupling using different methods for fluid dynamics and tracer transport provides a reduction of water conservation error by a factor of 50 at 1° horizontal resolution as well as consistent improvements at other resolutions. The second largest error source is the use of an overly simplified relationship between the surface moisture flux and latent heat flux at the interface between the host model and the turbulence parameterization. This error can be prevented by applying the same (correct) relationship throughout the entire model. Two additional types of conservation error that result from correcting the surface moisture flux and clipping negative water concentrations can be avoided by using mass-conserving fixers. With all four error sources addressed, the water conservation error in the V1 model becomes negligible and insensitive to the horizontal resolution. The associated changes in the long-term statistics of the main atmospheric features are small. A sensitivity analysis is carried out to show that the magnitudes of the conservation errors in early V1 versions decrease strongly with temporal resolution but increase with horizontal resolution. The increased vertical resolution in V1 results in a very thin model layer at the Earth's surface, which amplifies the conservation error associated with the surface moisture flux correction. We note...
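
One of the mass-conserving fixers mentioned above, for clipping negative water concentrations, can be illustrated generically: clip the negatives, then rescale the remaining positive values so the column total is unchanged. This is a textbook sketch under the assumption of a non-negative column total, not EAMv1's actual fixer code.

```python
# Generic mass-conserving clipping fixer (illustrative, not EAMv1 code):
# negatives are removed and the clipped mass is borrowed proportionally
# from the positive layers, so sum(q) is preserved.

def clip_and_conserve(q):
    """Remove negatives from mixing ratios q while preserving sum(q).

    Assumes the column total sum(q) is non-negative."""
    total = sum(q)
    clipped = [max(x, 0.0) for x in q]
    pos_sum = sum(clipped)
    if pos_sum == 0.0:
        return clipped                  # nothing left to redistribute
    scale = total / pos_sum
    return [x * scale for x in clipped]

q_in = [3.0e-3, -2.0e-4, 1.0e-3, -1.0e-5]   # hypothetical column profile
q_out = clip_and_conserve(q_in)
# Total water is unchanged and no layer is negative afterwards.
```
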

  4. Unexpected spatial impact of treatment plant discharges induced by episodic hydrodynamic events: Modelling Lagrangian transport of fine particles by Northern Current intrusions in the bays of Marseille (France).

    Science.gov (United States)

    Millet, Bertrand; Pinazo, Christel; Banaru, Daniela; Pagès, Rémi; Guiart, Pierre; Pairaud, Ivane

    2018-01-01

    Our study highlights the Lagrangian transport of solid particles discharged at the Marseille Wastewater Treatment Plant (WWTP), located at Cortiou on the southern coastline. We focused on episodic situations characterized by a coastal circulation pattern induced by intrusion events of the Northern Current (NC) on the continental shelf, associated with SE wind regimes. We computed, using the MARS3D-RHOMA and ICHTHYOP models, the particle trajectories from a patch of 5 × 10⁴ passive and conservative fine particles released at the WWTP outlet, during two representative periods of NC intrusion, in June 2008 and in October 2011, associated with S-SE and E-SE winds, respectively. Unexpected results highlighted that the amount of particles reaching the vulnerable shorelines of both northern and southern bays accounted for 21.2% and 46.3% of the WWTP initial patch, in June 2008 and October 2011, respectively. Finally, a conceptual diagram is proposed to highlight the mechanisms of dispersion within the bays of Marseille of the fine particles released at the WWTP outlet, which have long been underestimated.
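
The Lagrangian particle-tracking approach described above can be sketched with an idealised steady current standing in for the MARS3D-RHOMA fields. The velocity field, release point, diffusion amplitude, and coastal-strip threshold below are all invented for illustration.

```python
# Hedged sketch of Lagrangian particle tracking: passive particles
# released at an outfall are advected by an (invented) steady 2-D current
# plus a sub-grid random walk, and the fraction reaching a coastal strip
# is counted. Not the MARS3D-RHOMA/ICHTHYOP configuration.
import random

def advect(x, y, dt):
    """Idealised current: alongshore flow plus onshore drift downstream."""
    u = 0.2                      # m/s alongshore
    v = 0.05 * (x / 1000.0)      # onshore component growing downstream
    return x + u * dt, y + v * dt

rng = random.Random(3)
dt, n_steps, n_particles = 600.0, 288, 500   # 10-min steps over 48 h
hits = 0
for _ in range(n_particles):
    x, y = 0.0, 0.0                          # outfall release point
    for _ in range(n_steps):
        x, y = advect(x, y, dt)
        x += rng.gauss(0.0, 20.0)            # sub-grid turbulent diffusion
        y += rng.gauss(0.0, 20.0)
    if y > 1000.0:                           # crossed into the coastal strip
        hits += 1
fraction_ashore = hits / n_particles
```
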

  5. Geological discrete fracture network model for the Olkiluoto site, Eurajoki, Finland. Version 2.0

    International Nuclear Information System (INIS)

    Fox, A.; Forchhammer, K.; Pettersson, A.; La Pointe, P.; Lim, D-H.

    2012-06-01

    This report describes the methods, analyses, and conclusions of the modeling team in the production of the 2010 revision to the geological discrete fracture network (DFN) model for the Olkiluoto Site in Finland. The geological DFN is a statistical model for stochastically simulating rock fractures and minor faults at a scale ranging from approximately 0.05 m to approximately 565m; deformation zones are expressly excluded from the DFN model. The DFN model is presented as a series of tables summarizing probability distributions for several parameters necessary for fracture modeling: fracture orientation, fracture size, fracture intensity, and associated spatial constraints. The geological DFN is built from data collected during site characterization (SC) activities at Olkiluoto, which is selected to function as a final deep geological repository for spent fuel and nuclear waste from the Finnish nuclear power program. Data used in the DFN analyses include fracture maps from surface outcrops and trenches, geological and structural data from cored drillholes, and fracture information collected during the construction of the main tunnels and shafts at the ONKALO laboratory. Unlike the initial geological DFN, which was focused on the vicinity of the ONKALO tunnel, the 2010 revisions present a model parameterization for the entire island. Fracture domains are based on the tectonic subdivisions at the site (northern, central, and southern tectonic units) presented in the Geological Site Model (GSM), and are further subdivided along the intersection of major brittle-ductile zones. The rock volume at Olkiluoto is dominated by three distinct fracture sets: subhorizontally-dipping fractures striking north-northeast and dipping to the east that is subparallel to the mean bedrock foliation direction, a subvertically-dipping fracture set striking roughly north-south, and a subvertically-dipping fracture set striking approximately east-west. The subhorizontally-dipping fractures

  6. Geological discrete fracture network model for the Olkiluoto site, Eurajoki, Finland. Version 2.0

    Energy Technology Data Exchange (ETDEWEB)

    Fox, A.; Forchhammer, K.; Pettersson, A. [Golder Associates AB, Stockholm (Sweden); La Pointe, P.; Lim, D-H. [Golder Associates Inc. (Finland)

    2012-06-15

    This report describes the methods, analyses, and conclusions of the modeling team in the production of the 2010 revision to the geological discrete fracture network (DFN) model for the Olkiluoto Site in Finland. The geological DFN is a statistical model for stochastically simulating rock fractures and minor faults at a scale ranging from approximately 0.05 m to approximately 565m; deformation zones are expressly excluded from the DFN model. The DFN model is presented as a series of tables summarizing probability distributions for several parameters necessary for fracture modeling: fracture orientation, fracture size, fracture intensity, and associated spatial constraints. The geological DFN is built from data collected during site characterization (SC) activities at Olkiluoto, which is selected to function as a final deep geological repository for spent fuel and nuclear waste from the Finnish nuclear power program. Data used in the DFN analyses include fracture maps from surface outcrops and trenches, geological and structural data from cored drillholes, and fracture information collected during the construction of the main tunnels and shafts at the ONKALO laboratory. Unlike the initial geological DFN, which was focused on the vicinity of the ONKALO tunnel, the 2010 revisions present a model parameterization for the entire island. Fracture domains are based on the tectonic subdivisions at the site (northern, central, and southern tectonic units) presented in the Geological Site Model (GSM), and are further subdivided along the intersection of major brittle-ductile zones. The rock volume at Olkiluoto is dominated by three distinct fracture sets: subhorizontally-dipping fractures striking north-northeast and dipping to the east that is subparallel to the mean bedrock foliation direction, a subvertically-dipping fracture set striking roughly north-south, and a subvertically-dipping fracture set striking approximately east-west. The subhorizontally-dipping fractures

  7. Water, Energy, and Biogeochemical Model (WEBMOD), user’s manual, version 1

    Science.gov (United States)

    Webb, Richard M.T.; Parkhurst, David L.

    2017-02-08

    The Water, Energy, and Biogeochemical Model (WEBMOD) uses the framework of the U.S. Geological Survey (USGS) Modular Modeling System to simulate fluxes of water and solutes through watersheds. WEBMOD divides watersheds into model response units (MRU) where fluxes and reactions are simulated for the following eight hillslope reservoir types: canopy; snowpack; ponding on impervious surfaces; O-horizon; two reservoirs in the unsaturated zone, which represent preferential flow and matrix flow; and two reservoirs in the saturated zone, which also represent preferential flow and matrix flow. The reservoir representing ponding on impervious surfaces, currently not functional (2016), will be implemented once the model is applied to urban areas. MRUs discharge to one or more stream reservoirs that flow to the outlet of the watershed. Hydrologic fluxes in the watershed are simulated by modules derived from the USGS Precipitation Runoff Modeling System; the National Weather Service Hydro-17 snow model; and a topography-driven hydrologic model (TOPMODEL). Modifications to the standard TOPMODEL include the addition of heterogeneous vertical infiltration rates; irrigation; lateral and vertical preferential flows through the unsaturated zone; pipe flow draining the saturated zone; gains and losses to regional aquifer systems; and the option to simulate baseflow discharge by using an exponential, parabolic, or linear decrease in transmissivity. PHREEQC, an aqueous geochemical model, is incorporated to simulate chemical reactions as waters evaporate, mix, and react within the various reservoirs of the model. The reactions that can be specified for a reservoir include equilibrium reactions among water; minerals; surfaces; exchangers; and kinetic reactions such as kinetic mineral dissolution or precipitation, biologically mediated reactions, and radioactive decay. WEBMOD also simulates variations in the concentrations of the stable isotopes deuterium and oxygen-18 as a result of
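
The exponential-transmissivity baseflow option mentioned above follows the classic TOPMODEL form Q = Q0·exp(−S/m) for saturation deficit S and scale parameter m. The recession sketch below uses illustrative values, not WEBMOD defaults.

```python
# Hedged sketch of exponential-transmissivity baseflow (TOPMODEL form).
# q0 and m are illustrative values, not WEBMOD parameters.
import math

def baseflow_exponential(S, q0=1.0e-3, m=0.03):
    """Baseflow (m/h) for saturation deficit S (m)."""
    return q0 * math.exp(-S / m)

# A simple recession: with no recharge, drained water deepens the water
# table, the deficit S grows, and discharge declines toward zero.
S, dt = 0.0, 1.0          # deficit (m), time step (hours)
flows = []
for _ in range(48):
    q = baseflow_exponential(S)
    flows.append(q)
    S += q * dt
```
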

  8. Biosphere-Atmosphere Transfer Scheme (BATS) version 1e as coupled to the NCAR community climate model. Technical note. [NCAR (National Center for Atmospheric Research)

    Energy Technology Data Exchange (ETDEWEB)

    Dickinson, R.E.; Henderson-Sellers, A.; Kennedy, P.J.

    1993-08-01

    A comprehensive model of land-surface processes, suitable for use with various National Center for Atmospheric Research (NCAR) General Circulation Models (GCMs), has been under development. Special emphasis has been given to properly describing the role of vegetation in modifying the surface moisture and energy budgets. The result of these efforts has been incorporated into a boundary package referred to as the Biosphere-Atmosphere Transfer Scheme (BATS). The current frozen version, BATS1e, is a piece of software of about four thousand lines of code that runs as an offline version or coupled to the Community Climate Model (CCM).

  9. Geological discrete-fracture network model (version 1) for the Olkiluoto site, Finland

    International Nuclear Information System (INIS)

    Fox, A.; Buoro, A.; Dahlbo, K.; Wiren, L.

    2009-10-01

    This report describes the methods, analyses, and conclusions of the modelling team in the production of a discrete-fracture network (DFN) model for the Olkiluoto Site in Finland. The geological DFN is a statistical model for stochastically simulating rock fractures and minor faults at a scale ranging from approximately 0.05 m to approximately 500 m; an upper scale limit is not expressly defined, but the DFN model explicitly excludes structures at deformation-zone scales (∼ 500 m) and larger. The DFN model is presented as a series of tables summarizing probability distributions for several parameters necessary for fracture modelling: fracture orientation, fracture size, fracture intensity, and associated spatial constraints. The geological DFN is built from data collected during site characterization (SC) activities at Olkiluoto, which is currently planned to function as a final deep geological repository for spent fuel and nuclear waste from the Finnish nuclear power program. Data used in the DFN analyses include fracture maps from surface outcrops and trenches (as of July 2007), geological and structural data from cored boreholes (as of July 2007), and fracture information collected during the construction of the main tunnels and shafts at the ONKALO laboratory (January 2008). The modelling results suggest that the rock volume at Olkiluoto surrounding the ONKALO tunnel can be separated into three distinct volumes (fracture domains): an upper block, an intermediate block, and a lower block. The three fracture domains are bounded horizontally and vertically by large deformation zones. Fracture properties, such as fracture orientation and relative orientation set intensity, vary between fracture domains. 
The rock volume at Olkiluoto is dominated by three distinct fracture sets: subhorizontally-dipping fractures striking north-northeast and dipping to the east, a subvertically-dipping fracture set striking roughly north-south, and a subvertically-dipping fracture set
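As an illustration of how a geological DFN of this kind is realised stochastically, the sketch below draws fracture radii from a power-law (Pareto) size distribution and fracture-pole deviations from a univariate Fisher orientation distribution. All parameter values here are invented for illustration; the report tabulates its own calibrated distributions:

```python
import math
import random

def sample_power_law_radius(r0, k, u):
    """Inverse-CDF sample from a Pareto (power-law) fracture radius
    distribution P(R > r) = (r0 / r)**k for r >= r0.
    r0 and k are hypothetical, not the report's fitted values."""
    return r0 * (1.0 - u) ** (-1.0 / k)

def sample_fisher_deviation(kappa, u):
    """Angular deviation (radians) of a fracture pole from its set mean,
    drawn from a univariate Fisher distribution with concentration kappa."""
    w = 1.0 + math.log(u + (1.0 - u) * math.exp(-2.0 * kappa)) / kappa
    return math.acos(max(-1.0, min(1.0, w)))

rng = random.Random(42)
fractures = [
    {
        "radius_m": sample_power_law_radius(0.05, 2.6, rng.random()),
        "pole_deviation_deg": math.degrees(sample_fisher_deviation(15.0, rng.random())),
    }
    for _ in range(1000)
]
```

With a lower cutoff of 0.05 m, every sampled radius sits at or above the model's stated lower scale limit; larger kappa concentrates poles more tightly around the set mean.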

  10. Hippocampal activation during episodic and semantic memory retrieval: comparing category production and category cued recall.

    Science.gov (United States)

    Ryan, Lee; Cox, Christine; Hayes, Scott M; Nadel, Lynn

    2008-01-01

    Whether or not the hippocampus participates in semantic memory retrieval has been the focus of much debate in the literature. However, few neuroimaging studies have directly compared hippocampal activation during semantic and episodic retrieval tasks that are well matched in all respects other than the source of the retrieved information. In Experiment 1, we compared hippocampal fMRI activation during a classic semantic memory task, category production, and an episodic version of the same task, category cued recall. Left hippocampal activation was observed in both episodic and semantic conditions, although other regions of the brain clearly distinguished the two tasks. Interestingly, participants reported using retrieval strategies during the semantic retrieval task that relied on autobiographical and spatial information; for example, visualizing themselves in their kitchen while producing items for the category kitchen utensils. In Experiment 2, we considered whether the use of these spatial and autobiographical retrieval strategies could have accounted for the hippocampal activation observed in Experiment 1. Categories were presented that elicited one of three retrieval strategy types, autobiographical and spatial, autobiographical and nonspatial, and neither autobiographical nor spatial. Once again, similar hippocampal activation was observed for all three category types, regardless of the inclusion of spatial or autobiographical content. We conclude that the distinction between semantic and episodic memory is more complex than classic memory models suggest.

  11. Hydrogeological DFN modelling using structural and hydraulic data from KLX04. Preliminary site description Laxemar subarea - version 1.2

    Energy Technology Data Exchange (ETDEWEB)

    Follin, Sven [SF GeoLogic AB, Taeby (Sweden); Stigsson, Martin [Swedish Nuclear Fuel and Waste Management Co., Stockholm (Sweden); Svensson, Urban [Computer-aided Fluid Engineering AB, Norrkoeping (Sweden)

    2006-04-15

    SKB is conducting site investigations for a high-level nuclear waste repository in fractured crystalline rocks at two coastal areas in Sweden. The two candidate areas are named Forsmark and Simpevarp. The site characterisation work is divided into two phases, an initial site investigation phase (ISI) and a complete site investigation phase (CSI). The results of the ISI phase are used as a basis for deciding on the subsequent CSI phase. On the basis of the CSI investigations a decision is made as to whether detailed characterisation will be performed (including sinking of a shaft). An integrated component in the site characterisation work is the development of site descriptive models. These comprise basic models in three dimensions with an accompanying text description. Central in the modelling work is the geological model which provides the geometrical context in terms of a model of deformation zones and the less fractured rock mass between the zones. Using the geological and geometrical description models as a basis, descriptive models for other disciplines (surface ecosystems, hydrogeology, hydrogeochemistry, rock mechanics, thermal properties and transport properties) will be developed. Great care is taken to arrive at a general consistency in the description of the various models and assessment of uncertainty and possible needs of alternative models. The main objective of this study is to support the development of a hydrogeological DFN model (Discrete Fracture Network) for the Preliminary Site Description of the Laxemar area on a regional-scale (SDM version L1.2). A more specific objective of this study is to assess the propagation of uncertainties in the geological DFN modelling reported for L1.2 into the groundwater flow modelling. An improved understanding is necessary in order to gain credibility for the Site Description in general and the hydrogeological description in particular. The latter will serve as a basis for describing the present

  12. Hydrogeological DFN modelling using structural and hydraulic data from KLX04. Preliminary site description Laxemar subarea - version 1.2

    International Nuclear Information System (INIS)

    Follin, Sven; Stigsson, Martin; Svensson, Urban

    2006-04-01

    SKB is conducting site investigations for a high-level nuclear waste repository in fractured crystalline rocks at two coastal areas in Sweden. The two candidate areas are named Forsmark and Simpevarp. The site characterisation work is divided into two phases, an initial site investigation phase (ISI) and a complete site investigation phase (CSI). The results of the ISI phase are used as a basis for deciding on the subsequent CSI phase. On the basis of the CSI investigations a decision is made as to whether detailed characterisation will be performed (including sinking of a shaft). An integrated component in the site characterisation work is the development of site descriptive models. These comprise basic models in three dimensions with an accompanying text description. Central in the modelling work is the geological model which provides the geometrical context in terms of a model of deformation zones and the less fractured rock mass between the zones. Using the geological and geometrical description models as a basis, descriptive models for other disciplines (surface ecosystems, hydrogeology, hydrogeochemistry, rock mechanics, thermal properties and transport properties) will be developed. Great care is taken to arrive at a general consistency in the description of the various models and assessment of uncertainty and possible needs of alternative models. The main objective of this study is to support the development of a hydrogeological DFN model (Discrete Fracture Network) for the Preliminary Site Description of the Laxemar area on a regional-scale (SDM version L1.2). A more specific objective of this study is to assess the propagation of uncertainties in the geological DFN modelling reported for L1.2 into the groundwater flow modelling. An improved understanding is necessary in order to gain credibility for the Site Description in general and the hydrogeological description in particular. The latter will serve as a basis for describing the present

  13. The SF-8 Spanish Version for Health-Related Quality of Life Assessment: Psychometric Study with IRT and CFA Models.

    Science.gov (United States)

    Tomás, José M; Galiana, Laura; Fernández, Irene

    2018-03-22

    The aim of the current research is to analyze the psychometric properties of the Spanish version of the SF-8, overcoming previous shortcomings. A double line of analyses was used: competitive structural equation models to establish factorial validity, and Item Response Theory to analyze item psychometric characteristics and information. 593 people aged 60 years or older, attending lifelong learning programs at the university, were surveyed. Their ages ranged from 60 to 92 years; 67.6% were women. The survey included scales on personality dimensions, attitudes, perceptions, and behaviors related to aging. Competitive confirmatory models identified two factors (physical and mental health) as the best representation of the data: χ2(13) = 72.37 (p < .01); CFI = .99; TLI = .98; RMSEA = .08 (.06, .10). Item 5 was removed because of unreliability and cross-loading. Graded response models showed appropriate fit for the two-parameter logistic model in both the physical and the mental dimensions. Item Information Curves and Test Information Functions indicated that the SF-8 was more informative at low levels of health. The Spanish SF-8 has adequate psychometric properties and is better represented by two dimensions once Item 5 is removed. Gathering evidence on patient-reported outcome measures is of crucial importance, as these measurement instruments are increasingly used in the clinical arena.
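The graded response model used above builds each item's category probabilities from two-parameter logistic boundary curves. A minimal sketch with made-up item parameters (not the paper's calibrated estimates):

```python
import math

def grm_category_probs(theta, a, thresholds):
    """Samejima graded response model: probability of each ordered
    response category given latent trait theta.

    a          : item discrimination (hypothetical value in the test below)
    thresholds : increasing category boundary difficulties b_1 < ... < b_{K-1}
    """
    def p_star(b):
        # 2PL boundary curve: P(response in category k or higher).
        return 1.0 / (1.0 + math.exp(-a * (theta - b)))

    # Cumulative boundary probabilities, padded with 1 (lowest) and 0 (above highest).
    cum = [1.0] + [p_star(b) for b in thresholds] + [0.0]
    # Category probabilities are differences of adjacent boundary curves.
    return [cum[k] - cum[k + 1] for k in range(len(cum) - 1)]
```

Summing Fisher information for such items across a test yields the Test Information Function; the paper's finding that information peaks at low health corresponds to boundary difficulties concentrated at the low end of theta.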

  14. SITE-94. The CRYSTAL Geosphere Transport Model: Technical documentation version 2.1

    International Nuclear Information System (INIS)

    Worgan, K.; Robinson, P.

    1995-12-01

    CRYSTAL, a one-dimensional contaminant transport model of a densely fissured geosphere, was originally developed for the SKI Project-90 performance assessment program. It has since been extended to include matrix blocks of alternative basic geometries. CRYSTAL predicts the transport of arbitrary-length decay chains by advection, diffusion and surface sorption in the fissures and diffusion into the rock matrix blocks. The model equations are solved in Laplace transform space, and inverted numerically to the time domain. This approach avoids time-stepping and consequently is numerically very efficient. The source term for CRYSTAL may be supplied internally using either simple leaching or band release submodels, or by input of a general time-series output from a near-field model. The time series input is interfaced with the geosphere model using the method of convolution. The response of the geosphere to delta-function inputs for each nuclide is combined with the time series outputs from the near-field to obtain the nuclide flux emerging from the far-field. 14 refs
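The convolution interface described above can be sketched as a discrete superposition of the geosphere's unit-impulse response with a near-field release time series. This is an illustrative time-domain discretisation, not CRYSTAL's Laplace-space solver:

```python
def far_field_flux(near_field_flux, unit_response, dt):
    """Discrete convolution of a near-field release time series with the
    geosphere's delta-function (unit impulse) response.

    near_field_flux : release rate at each time step
    unit_response   : geosphere response to a unit impulse, same time step
    dt              : time-step length
    """
    n = len(near_field_flux)
    out = []
    for t in range(n):
        s = 0.0
        for tau in range(t + 1):
            # Superpose the delayed, attenuated response to each past release.
            if t - tau < len(unit_response):
                s += near_field_flux[tau] * unit_response[t - tau] * dt
        out.append(s)
    return out
```

With a unit response of [1.0] and dt = 1, the geosphere passes the near-field signal through unchanged, which is a convenient sanity check on the discretisation.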

  15. User's Manual MCnest - Markov Chain Nest Productivity Model Version 2.0

    Science.gov (United States)

    The Markov chain nest productivity model, or MCnest, is a set of algorithms for integrating the results of avian toxicity tests with reproductive life-history data to project the relative magnitude of chemical effects on avian reproduction. The mathematical foundation of MCnest i...

  16. A Functional Model of Sensemaking in a Neurocognitive Architecture (Open Access, Publisher’s Version)

    Science.gov (United States)

    2013-07-08

    updating processes involved in sensemaking. We do this by developing ACT-R models to specify how elementary cognitive modules and processes are marshaled ...

  17. Landfill Gas Energy Cost Model Version 3.0 (LFGcost-Web V3.0)

    Science.gov (United States)

    To help stakeholders estimate the costs of a landfill gas (LFG) energy project, in 2002, LMOP developed a cost tool (LFGcost). Since then, LMOP has routinely updated the tool to reflect changes in the LFG energy industry. Initially the model was designed for EPA to assist landfil...

  18. LANDFILL GAS EMISSIONS MODEL (LANDGEM) VERSION 3.02 USER'S GUIDE

    Science.gov (United States)

    The Landfill Gas Emissions Model (LandGEM) is an automated estimation tool with a Microsoft Excel interface that can be used to estimate emission rates for total landfill gas, methane, carbon dioxide, nonmethane organic compounds, and individual air pollutants from municipal soli...

  19. Unit testing, model validation, and biological simulation [version 1; referees: 2 approved, 1 approved with reservations

    Directory of Open Access Journals (Sweden)

    Gopal P. Sarma

    2016-08-01

    Full Text Available The growth of the software industry has gone hand in hand with the development of tools and cultural practices for ensuring the reliability of complex pieces of software. These tools and practices are now acknowledged to be essential to the management of modern software. As computational models and methods have become increasingly common in the biological sciences, it is important to examine how these practices can accelerate biological software development and improve research quality. In this article, we give a focused case study of our experience with the practices of unit testing and test-driven development in OpenWorm, an open-science project aimed at modeling Caenorhabditis elegans. We identify and discuss the challenges of incorporating test-driven development into a heterogeneous, data-driven project, as well as the role of model validation tests, a category of tests unique to software which expresses scientific models.
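The distinction drawn above between conventional unit tests and model validation tests can be sketched with a hypothetical example; the neuron function, its bounds, and the test names below are invented for illustration and are not actual OpenWorm code:

```python
import unittest

def resting_potential(leak_mV=-65.0):
    # Placeholder for a simulated neuron model's output (hypothetical).
    return leak_mV

class TestMembraneModel(unittest.TestCase):
    """Two test categories: a conventional unit test and a model
    validation test in the article's sense."""

    def test_unit_returns_float(self):
        # Conventional unit test: checks software behaviour only.
        self.assertIsInstance(resting_potential(), float)

    def test_model_validation_physiological_range(self):
        # Model validation test: checks scientific plausibility of the
        # output against an assumed physiological range (mV).
        v = resting_potential()
        self.assertTrue(-90.0 <= v <= -40.0)
```

The first test would still pass if the model returned a physically absurd float; only the second, a validation test, constrains the science rather than the software.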

  20. The GRASP 3: Graphical Reliability Analysis Simulation Program. Version 3: A users' manual and modelling guide

    Science.gov (United States)

    Phillips, D. T.; Manseur, B.; Foster, J. W.

    1982-01-01

    Alternate definitions of system failure create complex analysis for which analytic solutions are available only for simple, special cases. The GRASP methodology is a computer simulation approach for solving all classes of problems in which both failure and repair events are modeled according to the probability laws of the individual components of the system.

  1. Preliminary site description: Groundwater flow simulations. Simpevarp area (version 1.1) modelled with CONNECTFLOW

    International Nuclear Information System (INIS)

    Hartley, Lee; Worth, David; Gylling, Bjoern; Marsic, Niko; Holmen, Johan

    2004-08-01

    The main objective of this study is to assess the role of known and unknown hydrogeological conditions for the present-day distribution of saline groundwater at the Simpevarp and Laxemar sites. An improved understanding of the paleo-hydrogeology is necessary in order to gain credibility for the Site Descriptive Model in general and the Site Hydrogeological Description in particular. This is to serve as a basis for describing the present hydrogeological conditions as well as predictions of future hydrogeological conditions. This objective implies a testing of: geometrical alternatives in the structural geology and bedrock fracturing, variants in the initial and boundary conditions, and parameter uncertainties (i.e. uncertainties in the hydraulic property assignment). This testing is necessary in order to evaluate the impact on the groundwater flow field of the specified components and to promote proposals of further investigations of the hydrogeological conditions at the site. The general methodology for modelling transient salt transport and groundwater flow using CONNECTFLOW that was developed for Forsmark has been applied successfully also for Simpevarp. Because of time constraints, only a key set of variants was performed, focussing on the influences of DFN model parameters, the kinematic porosity, and the initial condition. Salinity data in deep boreholes available at the time of the project were too limited to allow a good calibration exercise. However, the model predictions are compared with the available data from KLX01 and KLX02 below. Once more salinity data are available, it may be possible to draw more definite conclusions based on the differences between variants. At the moment, though, the differences should just be used to understand the sensitivity of the models to various input parameters

  2. The Composition of Episodic Memory.

    Science.gov (United States)

    Underwood, Benton J.; And Others

    This study examined the interrelationships among a number of episodic memory tasks and among various attributes of memory. A sample of 200 college students was tested for ten sessions; 28 different measures of episodic memory were obtained. In addition, five measures of semantic memory were available. Results indicated that episodic and semantic…

  3. A 3 D regional scale photochemical air quality model application to a 3 day summertime episode over Paris; Un modele photochimique 3D de qualite de l`air a l`echelle regionale. Application a un episode de 3 jours a Paris en ete

    Energy Technology Data Exchange (ETDEWEB)

    Jaecker-Voirol, A.; Lipphardt, M.; Martin, B.; Quandalle, Ph.; Salles, J. [Institut Francais du Petrole (IFP), 92 - Rueil-Malmaison (France); Carissimo, B.; Dupont, P.M.; Musson-Genon, L.; Riboud, P.M. [Electricite de France (EDF), 78 - Chatou (France). Direction des Etudes et Recherches; Aumont, B.; Bergametti, G.; Bey, I.; Toupanse, G. [Paris-12 Univ., 94 - Creteil (France). Laboratoire interuniversitaire des systemes atmospheriques]|[Paris-7 Univ., 75 (France)

    1998-03-01

    This paper presents AZUR, a 3D Eulerian photochemical air quality model for the simulation of air pollution in urban and semi-urban areas. The model tracks gas pollutant species emitted into the atmosphere by transportation and industrial sources; it computes the chemical reactions of these species under varying meteorological conditions (photolysis, pressure, temperature, humidity), their transport by wind, and their turbulent diffusion as a function of air stability. It has a modular software structure which includes several components dedicated to specific processes: MERCURE, a meso-scale meteorological model to compute the wind field, turbulent diffusion coefficients, and other meteorological parameters; MIEL, an emission inventory model describing the pollutant fluxes from automotive transportation, domestic and industrial activities; MoCA, a photochemical gas-phase model describing the chemistry of ozone, NO{sub x}, and hydrocarbon compounds; AIRQUAL, a 3D Eulerian model describing the transport by mean wind flux and air turbulent diffusion of species in the atmosphere, associated with a Gear-type chemical equation solver. The model has been applied to a 3-day summertime episode over the Paris area. Simulation results are compared to ground level concentration measurements performed by the local monitoring network (Airparif). (authors) 22 refs.

  4. Business models for renewable energy in the built environment. Updated version

    Energy Technology Data Exchange (ETDEWEB)

    Wuertenberger, L.; Menkveld, M.; Vethman, P.; Van Tilburg, X. [ECN Policy Studies, Amsterdam (Netherlands); Bleyl, J.W. [Energetic Solutions, Graz (Austria)

    2012-04-15

    The project RE-BIZZ aims to provide policy makers and market actors with insight into the way new and innovative business models (and/or policy measures) can stimulate the deployment of renewable energy technologies (RET) and energy efficiency (EE) measures in the built environment. The project is initiated and funded by the IEA Implementing Agreement for Renewable Energy Technology Deployment (IEA-RETD). It analysed ten business models in three categories (among others: different types of Energy Service Companies (ESCOs), developing properties certified with a 'green' building label, building owners profiting from rent increases after EE measures, Property Assessed Clean Energy (PACE) financing, on-bill financing, and leasing of RET equipment), including their organisational and financial structure, the existing market and policy context, and an analysis of Strengths, Weaknesses, Opportunities and Threats (SWOT). The study concludes with recommendations for policy makers and other market actors.

  5. Updates to the Demographic and Spatial Allocation Models to Produce Integrated Climate and Land Use Scenarios (ICLUS) (Version 2) (External Review Draft)

    Science.gov (United States)

    EPA announced the availability of the draft report, Updates to the Demographic and Spatial Allocation Models to Produce Integrated Climate and Land Use Scenarios (ICLUS) for a 30-day public comment period. The ICLUS version 2 (v2) modeling tool furthered land change mod...

  6. The Everglades Depth Estimation Network (EDEN) surface-water model, version 2

    Science.gov (United States)

    Telis, Pamela A.; Xie, Zhixiao; Liu, Zhongwei; Li, Yingru; Conrads, Paul

    2015-01-01

    The Everglades Depth Estimation Network (EDEN) is an integrated network of water-level gages, interpolation models that generate daily water-level and water-depth data, and applications that compute derived hydrologic data across the freshwater part of the greater Everglades landscape. The U.S. Geological Survey Greater Everglades Priority Ecosystems Science provides support for EDEN in order for EDEN to provide quality-assured monitoring data for the U.S. Army Corps of Engineers Comprehensive Everglades Restoration Plan.

  7. Ion temperature in the outer ionosphere - first version of a global empirical model

    Czech Academy of Sciences Publication Activity Database

    Třísková, Ludmila; Truhlík, Vladimír; Šmilauer, Jan; Smirnova, N. F.

    2004-01-01

    Roč. 34, č. 9 (2004), s. 1998-2003 ISSN 0273-1177 R&D Projects: GA ČR GP205/02/P037; GA AV ČR IAA3042201; GA MŠk ME 651 Institutional research plan: CEZ:AV0Z3042911 Keywords: plasma temperatures * topside ionosphere * empirical models Subject RIV: DG - Atmosphere Sciences, Meteorology Impact factor: 0.548, year: 2004

  8. Air Force Systems Engineering Assessment Model (AF SEAM) Management Guide, Version 2

    Science.gov (United States)

    2010-09-21

    gleaned from experienced professionals who assisted with the model's development. Examples of the references used include the following: • ISO/IEC... • Defense Acquisition Guidebook, Chapter 4 • AFI 63-1201, Life Cycle Systems Engineering • IEEE/EIA 12207, Software Life Cycle Processes • Air...Selection criteria Reference Material: IEEE/EIA 12207, MIL-HDBK-514 Other Considerations: Modeling, simulation and analysis techniques can be

  9. Modelling turbulent vertical mixing sensitivity using a 1-D version of NEMO

    Science.gov (United States)

    Reffray, G.; Bourdalle-Badie, R.; Calone, C.

    2015-01-01

    Through two numerical experiments, a 1-D vertical model called NEMO1D was used to investigate physical and numerical turbulent-mixing behaviour. The results show that all the turbulent closures tested (k+l from Blanke and Delecluse, 1993, and two-equation models: generic length scale closures from Umlauf and Burchard, 2003) are able to correctly reproduce the classical test of Kato and Phillips (1969) under favourable numerical conditions, while some solutions may diverge depending on the degradation of the spatial and time discretization. The performances of the turbulence models were then compared with data measured over a 1-year period (mid-2010 to mid-2011) at the PAPA station, located in the North Pacific Ocean. The modelled temperature and salinity were in good agreement with the observations, with a maximum temperature error between -2 and 2 °C during the stratified period (June to October). However, the results also depend on the numerical conditions. The vertical RMSE varied, for different turbulent closures, from 0.1 to 0.3 °C during the stratified period and from 0.03 to 0.15 °C during the homogeneous period. This 1-D configuration at the PAPA station (called PAPA1D) is now available in NEMO as a reference configuration, including the input files and atmospheric forcing set described in this paper. Thus, all the results described can be recovered by downloading and launching PAPA1D. The configuration is described on the NEMO site (http://www.nemo-ocean.eu/Using-NEMO/Configurations/C1D_PAPA). This package is a good starting point for further investigation of vertical processes.
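The vertical RMSE quoted above is a plain root-mean-square difference between modelled and observed profiles at matching depths; a generic sketch of the metric (not NEMO code):

```python
import math

def vertical_rmse(model_profile, obs_profile):
    """Root-mean-square error between a modelled and an observed
    vertical profile sampled at the same depth levels."""
    if len(model_profile) != len(obs_profile):
        raise ValueError("profiles must share the same depth levels")
    n = len(model_profile)
    # Mean squared difference over the vertical, then square root.
    return math.sqrt(sum((m - o) ** 2 for m, o in zip(model_profile, obs_profile)) / n)
```

Applied level-by-level to temperature profiles, values in the 0.1 to 0.3 °C range during stratification would indicate the kind of closure-dependent skill reported above.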

  10. Software Design Description for the Navy Coastal Ocean Model (NCOM) Version 4.0

    Science.gov (United States)

    2008-12-31

    Recipes Software, U.S., p. 659. Rood, R. B., (1987). Numerical advection algorithms and their role in atmospheric transport and chemistry models... cstr, lenc) Data Declaration: Integer lenc, Character cstr. Coamps_uvg2uv Subroutine COAMPS_UVG2UV...are removed from the substrings. Calling Sequence: strpars(cline, cdelim, nstr, cstr, nsto, ierr) NRL/MR/7320--08-9149

  11. The Canadian Defence Input-Output Model DIO Version 4.41

    Science.gov (United States)

    2011-09-01

    Request to develop DND-tailored Input/Output Model. Electronic communication from Allen Weldon to Team Leader, Defence Economics Team on March 12, 2011...and similar containers 166 1440 Handbags, wallets and similar personal articles such as eyeglass and cigar cases and coin purses 167 1450 Cotton yarn...408 3600 Radar and radio navigation equipment 409 3619 Semi-conductors 410 3621 Printed circuits 411 3622 Integrated circuits 412 3623 Other electronic

  12. Regional groundwater flow model for a glaciation scenario. Simpevarp subarea - version 1.2

    International Nuclear Information System (INIS)

    Jaquet, O.; Siegel, P.

    2006-10-01

    A groundwater flow model (glaciation model) was developed at a regional scale in order to study long-term transient effects related to a glaciation scenario likely to occur in response to climatic changes. Conceptually the glaciation model was based on the regional model of Simpevarp and was then extended to a mega-regional scale (of several hundred kilometres) in order to account for the effects of the ice sheet. These effects were modelled using transient boundary conditions provided by a dynamic ice sheet model describing the phases of glacial build-up, glacial completeness and glacial retreat needed for the glaciation scenario. The results demonstrate the strong impact of the ice sheet on the flow field, in particular during the phases of the build-up and the retreat of the ice sheet. These phases last for several thousand years and may cause large amounts of melt water to reach the level of the repository and below. The highest fluxes of melt water are located in the vicinity of the ice margin. As the ice sheet approaches the repository location, the advective effects gain dominance over diffusive effects in the flow field. In particular, up-coning effects are likely to occur at the margin of the ice sheet leading to potential increases in salinity at repository level. For the base case, the entire salinity field of the model is almost completely flushed out at the end of the glaciation period. The flow patterns are strongly governed by the location of the conductive features in the subglacial layer. The influence of these glacial features is essential for the salinity distribution as is their impact on the flow trajectories and, therefore, on the resulting performance measures. Travel times and F-factor were calculated using the method of particle tracking. Glacial effects have major consequences for the results. In particular, average travel times from the repository to the surface are below 10 a during phases of glacial build-up and retreat. 
In comparison
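The two performance measures named above, advective travel time and the F-factor (transport resistance), are commonly accumulated segment-by-segment along each particle track. A sketch using the standard definitions, with segment values in the test invented for illustration:

```python
def path_measures(segments):
    """Advective travel time and F-factor summed along a particle path.

    Each segment dict supplies:
      L   : segment length (m)
      q   : Darcy flux along the segment (m/a)
      n   : kinematic porosity (-)
      a_w : flow-wetted surface per unit rock volume (m^2/m^3)
    """
    # Advective travel time: pore velocity is q/n, so time = L * n / q.
    travel_time = sum(s["L"] * s["n"] / s["q"] for s in segments)   # years
    # F-factor: flow-wetted surface encountered per unit flow rate.
    f_factor = sum(s["a_w"] * s["L"] / s["q"] for s in segments)    # a/m
    return travel_time, f_factor
```

High melt-water fluxes during build-up and retreat enlarge q along the paths, which simultaneously shortens travel times (to below 10 a in the study) and lowers the F-factor.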

  13. Regional groundwater flow model for a glaciation scenario. Simpevarp subarea - version 1.2

    Energy Technology Data Exchange (ETDEWEB)

    Jaquet, O.; Siegel, P. [Colenco Power Engineering Ltd, Baden-Daettwil (Switzerland)

    2006-10-15

    A groundwater flow model (glaciation model) was developed at a regional scale in order to study long term transient effects related to a glaciation scenario likely to occur in response to climatic changes. Conceptually the glaciation model was based on the regional model of Simpevarp and was then extended to a mega-regional scale (of several hundred kilometres) in order to account for the effects of the ice sheet. These effects were modelled using transient boundary conditions provided by a dynamic ice sheet model describing the phases of glacial build-up, glacial completeness and glacial retreat needed for the glaciation scenario. The results demonstrate the strong impact of the ice sheet on the flow field, in particular during the phases of the build-up and the retreat of the ice sheet. These phases last for several thousand years and may cause large amounts of melt water to reach the level of the repository and below. The highest fluxes of melt water are located in the vicinity of the ice margin. As the ice sheet approaches the repository location, the advective effects gain dominance over diffusive effects in the flow field. In particular, up-coning effects are likely to occur at the margin of the ice sheet leading to potential increases in salinity at repository level. For the base case, the entire salinity field of the model is almost completely flushed out at the end of the glaciation period. The flow patterns are strongly governed by the location of the conductive features in the subglacial layer. The influence of these glacial features is essential for the salinity distribution as is their impact on the flow trajectories and, therefore, on the resulting performance measures. Travel times and F-factor were calculated using the method of particle tracking. Glacial effects cause major consequences on the results. In particular, average travel times from the repository to the surface are below 10 a during phases of glacial build-up and retreat. 
In comparison

  14. CHROMAT trademark Version 1.1--Soil Chromium Attenuation Evaluation Model

    International Nuclear Information System (INIS)

    Felmy, A.R.; Rai, D.; Zachara, J.M.; Thapa, M.; Gold, M.

    1992-07-01

    This document is the user's manual and technical reference for the Soil Chromium Attenuation Model (CHROMAT trademark), a computer code designed to calculate both the dissolved Cr concentration and the amount of Cr attenuated in soils as a result of the geochemical reactions that occur as Cr-containing leachates migrate through porous soils. The dissolved Cr concentration and the amount of Cr attenuated are calculated using thermodynamic (mechanistic) data for aqueous complexation reactions, adsorption/desorption reactions, and precipitation/dissolution reactions involving both Cr(III) and Cr(VI) species. Use of this mechanistic approach means that CHROMAT trademark requires a minimum amount of site-specific data on leachate and soil characteristics. CHROMAT trademark is distributed in executable form for IBM and IBM-compatible personal computers through a license from the Electric Power Research Institute (EPRI). The user interacts with CHROMAT trademark using menu-driven screen displays. Interactive on-line help options are available. Output from the code can be obtained in tabular or graphic form. This manual describes the development of CHROMAT trademark, including experimental data development in support of the model and model validation studies. The thermodynamic data and computational algorithm are also described. Example problems and results are included

  15. Two modified versions of the speciation code PHREEQE for modelling macromolecule-proton/cation interaction

    International Nuclear Information System (INIS)

    Falck, W.E.

    1991-01-01

    There is a growing need to consider the influence of organic macromolecules on the speciation of ions in natural waters. It is recognized that a simple discrete ligand approach to the binding of protons/cations to organic macromolecules is not appropriate to represent heterogeneities of binding site distributions. A more realistic approach has been incorporated into the speciation code PHREEQE which retains the discrete ligand approach but modifies the binding intensities using an electrostatic (surface complexation) model. To allow for different conformations of natural organic material two alternative concepts have been incorporated: it is assumed that (a) the organic molecules form rigid, impenetrable spheres, and (b) the organic molecules form flat surfaces. The former concept will be more appropriate for molecules in the smaller size range, while the latter will be more representative for larger size molecules or organic surface coatings. The theoretical concept is discussed and the relevant changes to the standard PHREEQE code are explained. The modified codes are called PHREEQEO-RS and PHREEQEO-FS for the rigid-sphere and flat-surface models respectively. Improved output facilities for data transfer to other computers, e.g. the Macintosh, are introduced. Examples where the model is tested against literature data are shown and practical problems are discussed. Appendices contain listings of the modified subroutines GAMMA and PTOT, an example input file and an example command procedure to run the codes on VAX computers

  16. Refinement and evaluation of the Massachusetts firm-yield estimator model version 2.0

    Science.gov (United States)

    Levin, Sara B.; Archfield, Stacey A.; Massey, Andrew J.

    2011-01-01

    The firm yield is the maximum average daily withdrawal that can be extracted from a reservoir without risk of failure during an extended drought period. Previously developed procedures for determining the firm yield of a reservoir were refined and applied to 38 reservoir systems in Massachusetts, including 25 single- and multiple-reservoir systems that were examined during previous studies and 13 additional reservoir systems. Changes to the firm-yield model include refinements to the simulation methods and input data, as well as the addition of several scenario-testing capabilities. The simulation procedure was adapted to run at a daily time step over a 44-year simulation period, and daily streamflow and meteorological data were compiled for all the reservoirs for input to the model. Another change to the model-simulation methods is the adjustment of the scaling factor used in estimating groundwater contributions to the reservoir. The scaling factor is used to convert the daily groundwater-flow rate into a volume by multiplying the rate by the length of reservoir shoreline that is hydrologically connected to the aquifer. Previous firm-yield analyses used a constant scaling factor that was estimated from the reservoir surface area at full pool. The use of a constant scaling factor caused groundwater flows during periods when the reservoir stage was very low to be overestimated. The constant groundwater scaling factor used in previous analyses was replaced with a variable scaling factor that is based on daily reservoir stage. This change reduced instability in the groundwater-flow algorithms and produced more realistic groundwater-flow contributions during periods of low storage. Uncertainty in the firm-yield model arises from many sources, including errors in input data. The sensitivity of the model to uncertainty in streamflow input data and uncertainty in the stage-storage relation was examined. 
A series of Monte Carlo simulations were performed on 22 reservoirs
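
    The variable groundwater scaling factor described above can be sketched as follows. This is an illustrative simplification, not the USGS firm-yield estimator code: the connected shoreline length is assumed, for the sketch only, to scale linearly with daily reservoir stage, so that groundwater volumes shrink as storage falls instead of being fixed at the full-pool value:

```python
# Sketch of a stage-dependent groundwater scaling factor (hypothetical
# functional form and values; the actual model uses reservoir-specific data).
def shoreline_length(stage, full_pool_stage, full_pool_shoreline):
    """Hydrologically connected shoreline [m], scaled by daily stage."""
    frac = max(0.0, min(1.0, stage / full_pool_stage))
    return frac * full_pool_shoreline

def gw_volume(gw_rate_per_m, stage, full_pool_stage, full_pool_shoreline):
    """Convert a groundwater-flow rate [m^3/day per m of shoreline] into a
    daily volume [m^3/day] by multiplying by the connected shoreline."""
    return gw_rate_per_m * shoreline_length(stage, full_pool_stage,
                                            full_pool_shoreline)

# At full pool the variable and constant factors agree; at low stage the
# variable factor avoids the overestimate described in the abstract.
full = gw_volume(0.5, 10.0, 10.0, 2000.0)   # 1000 m^3/day
low  = gw_volume(0.5, 2.0, 10.0, 2000.0)    # 200 m^3/day
```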

  17. Temporal Clustering and Sequencing in Short-Term Memory and Episodic Memory

    Science.gov (United States)

    Farrell, Simon

    2012-01-01

    A model of short-term memory and episodic memory is presented, with the core assumptions that (a) people parse their continuous experience into episodic clusters and (b) items are clustered together in memory as episodes by binding information within an episode to a common temporal context. Along with the additional assumption that information…

  18. Episode-Centered Guidelines for Teacher Belief Change toward Technology Integration

    Science.gov (United States)

    Er, Erkan; Kim, ChanMin

    2017-01-01

    Teachers' episodic memories influence their beliefs. The investigation of episodic memories can help identify the teacher beliefs that limit technology integration. We propose the Episode-Centered Belief Change (ECBC) model that utilizes teachers' episodic memories for changing beliefs impeding effective technology integration. We also propose…

  19. Regional hydrogeological simulations. Numerical modelling using ConnectFlow. Preliminary site description Simpevarp sub area - version 1.2

    Energy Technology Data Exchange (ETDEWEB)

    Hartley, Lee; Hoch, Andrew; Hunter, Fiona; Jackson, Peter [Serco Assurance, Risley (United Kingdom); Marsic, Niko [Kemakta Konsult, Stockholm (Sweden)

    2005-02-01

    The objective of this study is to support the development of a preliminary Site Description of the Simpevarp area on a regional scale based on the available data of August 2004 (Data Freeze S1.2) and the previous Site Description. A more specific objective of this study is to assess the role of known and unknown hydrogeological conditions for the present-day distribution of saline groundwater in the Simpevarp area on a regional scale. An improved understanding of the paleo-hydrogeology is necessary in order to gain credibility for the Site Description in general and the hydrogeological description in particular. This is to serve as a basis for describing the present hydrogeological conditions on a local scale as well as predictions of future hydrogeological conditions. Other key objectives were to identify the model domain required to simulate regional flow and solute transport at the Simpevarp area and to incorporate a new geological model of the deformation zones produced for Version S1.2. Another difference from Version S1.1 is the increased effort invested in conditioning the hydrogeological property models to the fracture boremap and hydraulic data. A new methodology was developed for interpreting the discrete fracture network (DFN) by integrating the geological description of the DFN (GeoDFN) with the hydraulic test data from Posiva Flow-Log and Pipe-String System double-packer techniques to produce a conditioned Hydro-DFN model. This was done in a systematic way that addressed uncertainties associated with the assumptions made in interpreting the data, such as the relationship between fracture transmissivity and length. Consistent hydraulic data were available for only three boreholes, and therefore only relatively simplistic models were proposed, as there is not sufficient data to justify extrapolating the DFN away from the boreholes based on rock domain, for example. 
Significantly, a far greater quantity of hydro-geochemical data was available for calibration in the

  20. The operational eEMEP model version 10.4 for volcanic SO2 and ash forecasting

    Science.gov (United States)

    Steensen, Birthe M.; Schulz, Michael; Wind, Peter; Valdebenito, Álvaro M.; Fagerli, Hilde

    2017-05-01

    This paper presents a new version of the EMEP MSC-W model called eEMEP developed for the transport and dispersion of volcanic emissions, both gases and ash. EMEP MSC-W is usually applied to study air-pollution and aerosol-transport problems and requires some adaptation to treat volcanic eruption sources and effluent dispersion. The operational set-up of model simulations in case of a volcanic eruption is described. Important choices have to be made to achieve CPU efficiency so that emergency situations can be tackled in time, answering relevant questions of ash advisory authorities. An efficient model must balance complexity against resolution. We have investigated here a meteorological uncertainty component of the volcanic cloud forecast by using a consistent ensemble meteorological dataset (GLAMEPS forecast) at three resolutions for the case of SO2 emissions from the 2014 Barðarbunga eruption. The low resolution (40 × 40 km) ensemble members show larger agreement in plume position and intensity, suggesting that the ensemble here does not give much added value. To compare the dispersion at different resolutions, we compute the area where the column load of the volcanic tracer, here SO2, is above a certain threshold, varied for testing purposes between 0.25 and 50 Dobson units. The increased numerical diffusion causes a larger area (+34 %) to be covered by the volcanic tracer in the low resolution simulations than in the high resolution ones. The higher resolution (10 × 10 km) ensemble members show higher column loads farther away from the volcanic eruption site in narrower clouds. Cloud positions are more varied between the high resolution members, and the cloud forms resemble the observed clouds more than the low resolution ones. For a volcanic emergency case this means that to obtain quickly results of the transport of volcanic emissions, an individual simulation with our low resolution is sufficient; however, to forecast peak

  1. User's guide to revised method-of-characteristics solute-transport model (MOC--version 31)

    Science.gov (United States)

    Konikow, Leonard F.; Granato, G.E.; Hornberger, G.Z.

    1994-01-01

    The U.S. Geological Survey computer model to simulate two-dimensional solute transport and dispersion in ground water (Konikow and Bredehoeft, 1978; Goode and Konikow, 1989) has been modified to improve management of input and output data and to provide progressive run-time information. All opening and closing of files are now done automatically by the program. Names of input data files are entered either interactively or using a batch-mode script file. Names of output files, created automatically by the program, are based on the name of the input file. In the interactive mode, messages are written to the screen during execution to allow the user to monitor the status and progress of the simulation and to anticipate total running time. Information reported and updated during a simulation includes the current pumping period and time step, number of particle moves, and percentage completion of the current time step. The batch mode enables a user to run a series of simulations consecutively, without additional control. A report of the model's activity in the batch mode is written to a separate output file, allowing later review. The user has several options for creating separate output files for different types of data. The formats are compatible with many commercially available applications, which facilitates graphical postprocessing of model results. Geohydrology and Evaluation of Stream-Aquifer Relations in the Apalachicola-Chattahoochee-Flint River Basin, Southeastern Alabama, Northwestern Florida, and Southwestern Georgia By Lynn J. Torak, Gary S. Davis, George A. Strain, and Jennifer G. Herndon Abstract The lower Apalachicola-Chattahoochee-Flint River Basin is underlain by Coastal Plain sediments of pre-Cretaceous to Quaternary age consisting of alternating units of sand, clay, sandstone, dolomite, and limestone that gradually thicken and dip gently to the southeast. The stream-aquifer system consists of carbonate (limestone and dolomite) and clastic sediments

  2. Presentation, calibration and validation of the low-order, DCESS Earth System Model (Version 1

    Directory of Open Access Journals (Sweden)

    J. O. Pepke Pedersen

    2008-11-01

    A new, low-order Earth System Model is described, calibrated and tested against Earth system data. The model features modules for the atmosphere, ocean, ocean sediment, land biosphere and lithosphere and has been designed to simulate global change on time scales of years to millions of years. The atmosphere module considers radiation balance, meridional transport of heat and water vapor between low-mid latitude and high latitude zones, heat and gas exchange with the ocean and sea ice and snow cover. Gases considered are carbon dioxide and methane for all three carbon isotopes, nitrous oxide and oxygen. The ocean module has 100 m vertical resolution, carbonate chemistry and prescribed circulation and mixing. Ocean biogeochemical tracers are phosphate, dissolved oxygen, dissolved inorganic carbon for all three carbon isotopes and alkalinity. Biogenic production of particulate organic matter in the ocean surface layer depends on phosphate availability but with lower efficiency in the high latitude zone, as determined by model fit to ocean data. The calcite to organic carbon rain ratio depends on surface layer temperature. The semi-analytical, ocean sediment module considers calcium carbonate dissolution and oxic and anoxic organic matter remineralisation. The sediment is composed of calcite, non-calcite mineral and reactive organic matter. Sediment porosity profiles are related to sediment composition and a bioturbated layer of 0.1 m thickness is assumed. A sediment segment is ascribed to each ocean layer and segment area stems from observed ocean depth distributions. Sediment burial is calculated from sedimentation velocities at the base of the bioturbated layer. Bioturbation rates and oxic and anoxic remineralisation rates depend on organic carbon rain rates and dissolved oxygen concentrations. The land biosphere module considers leaves, wood, litter and soil. Net primary production depends on atmospheric carbon dioxide concentration and

  3. RadCon: A radiological consequences model. Technical guide - Version 2.0

    International Nuclear Information System (INIS)

    Crawford, J; Domel, R.U.; Harris, F.F.; Twining, J.R.

    2000-05-01

    A Radiological Consequence model (RadCon) is being developed at ANSTO to assess the radiological consequences, after an incident, in any climate, using appropriate meteorological and radiological transfer parameters. The major areas of interest to the developers are tropical and subtropical climates. This is particularly so given that it is anticipated that nuclear energy will become a mainstay for economies in these regions within the foreseeable future. Therefore, data acquisition and use of parameter values have been concentrated primarily on these climate types. Atmospheric dispersion and deposition for Australia can be modelled and supplied by the Regional Specialised Meteorological Centre (RSMC, one of five in the world), which is part of the Bureau of Meteorology Research Centre (BMRC) (Puri et al., 1992). RadCon combines these data (i.e. the time dependent air and ground concentration generated by the dispersion model or measured quantities in the case of an actual incident) with specific regional parameter values to determine the dose to people via the major pathways of external and internal irradiation. For the external irradiation calculations, data are needed on lifestyle information such as the time spent indoors/outdoors, the high/low physical activity rates for different groups of people (especially critical groups) and shielding factors for housing types. For the internal irradiation calculations, data are needed on food consumption, effect of food processing, transfer parameters (soil to plant, plant to animal) and interception values appropriate for the region under study. Where the relevant data are not available, default temperate data are currently used. The results of a wide-ranging literature search have highlighted where specific research will be initiated to determine the information required for tropical and sub-tropical regions. The user is able to initiate sensitivity analyses within RadCon. This allows the parameters to be ranked in

  4. Dayton Aircraft Cabin Fire Model, Version 3, Volume I. Physical Description.

    Science.gov (United States)

    1982-06-01

    contact to any surface directly above a burning element, provided that the current flame length makes contact possible. For fires originating on the...no extension of the flames horizontally beneath the surface is considered. The equation for computing the flame length is presented in Section 5. For...high as 0.3. The values chosen for DACFIR3 are 0.15 for Ec and 0.10 for Ep. The Steward model is also used to compute flame length, hf, for the fire

  5. ITS Version 3.0: Powerful, user-friendly software for radiation modelling

    International Nuclear Information System (INIS)

    Kensek, R.P.; Halbleib, J.A.; Valdez, G.D.

    1993-01-01

    ITS (the Integrated Tiger Series) is a powerful but user-friendly software package permitting state-of-the-art modelling of electron and/or photon radiation effects. The programs provide Monte Carlo solutions of linear time-independent coupled electron/photon radiation transport problems, with or without the presence of macroscopic electric and magnetic fields. The ITS system combines operational simplicity and physical accuracy in order to provide experimentalists and theorists alike with a method for the routine but rigorous solution of sophisticated radiation transport problems.

  6. Model for Analysis of the Energy Demand (MAED) users' manual for version MAED-1

    International Nuclear Information System (INIS)

    1986-09-01

    This manual is organized in two major parts. The first part includes eight main sections describing how to use the MAED-1 computer program and the second one consists of five appendices giving some additional information about the program. Concerning the main sections of the manual, Section 1 gives a summary description and some background information about the MAED-1 model. Section 2 extends the description of the MAED-1 model in more detail. Section 3 introduces some concepts, mainly related to the computer requirements imposed by the program, that are used throughout this document. Sections 4 to 7 describe how to execute each of the various programs (or modules) of the MAED-1 package. The description for each module shows the user how to prepare the control and data cards needed to execute the module and how to interpret the printed output produced. Section 8 recapitulates the use of MAED-1 for carrying out energy and electricity planning studies, describes the several phases normally involved in this type of study and provides the user with practical hints about the most important aspects that need to be verified at each phase while executing the various MAED modules

  7. MIG version 0.0 model interface guidelines: Rules to accelerate installation of numerical models into any compliant parent code

    Energy Technology Data Exchange (ETDEWEB)

    Brannon, R.M.; Wong, M.K.

    1996-08-01

    A set of model interface guidelines, called MIG, is presented as a means by which any compliant numerical material model can be rapidly installed into any parent code without having to modify the model subroutines. Here, "model" usually means a material model such as one that computes stress as a function of strain, though the term may be extended to any numerical operation. "Parent code" means a hydrocode, finite element code, etc., which uses the model and enforces, say, the fundamental laws of motion and thermodynamics. MIG requires the model developer (who creates the model package) to specify model needs in a standardized but flexible way. MIG includes a dictionary of technical terms that allows developers and parent code architects to share a common vocabulary when specifying field variables. For portability, database management is the responsibility of the parent code. Input/output occurs via structured calling arguments. As much model information as possible (such as the lists of required inputs, as well as lists of precharacterized material data and special needs) is supplied by the model developer in an ASCII text file. Every MIG-compliant model also has three required subroutines: to check data, to request extra field variables, and to perform model physics. To date, the MIG scheme has proven flexible in beta installations of a simple yield model, plus a more complicated viscodamage yield model, three electromechanical models, and a complicated anisotropic microcrack constitutive model. The MIG yield model has been successfully installed using identical subroutines in three vectorized parent codes and one parallel C++ code, all predicting comparable results. By maintaining one model for many codes, MIG facilitates code-to-code comparisons and reduces duplication of effort, thereby reducing the cost of installing and sharing models in diverse new codes.
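
    The three required subroutines can be illustrated schematically. The sketch below is a Python analogue with hypothetical names, not the actual MIG calling convention (which is defined by the guidelines document for Fortran-style parent codes); it shows the division of labour: validate data, declare needed field variables, then do the physics:

```python
# Schematic of a MIG-style model package (hypothetical names; the real
# interface is defined by the MIG guidelines, not by this sketch).
class ElasticModel:
    """A trivial 1-D 'material model': stress = E * strain."""

    def check_data(self, props):
        # Required subroutine 1: validate the user-supplied material data.
        if props.get("youngs_modulus", 0.0) <= 0.0:
            raise ValueError("youngs_modulus must be positive")
        return props

    def request_extra_variables(self):
        # Required subroutine 2: tell the parent code which field variables
        # it must allocate and manage (database stays with the parent code).
        return ["strain", "stress"]

    def update(self, props, strain):
        # Required subroutine 3: the model physics proper.
        return props["youngs_modulus"] * strain

model = ElasticModel()
props = model.check_data({"youngs_modulus": 200e9})
stress = model.update(props, 1e-4)   # ~2.0e7 Pa
```

    Because the parent code owns the database and all exchange happens through structured arguments, the same three entry points can be driven unchanged by any compliant host, which is the portability claim in the abstract.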

  8. Detection of critical PM2.5 emission sources and their contributions to a heavy haze episode in Beijing, China, using an adjoint model

    Science.gov (United States)

    Zhai, Shixian; An, Xingqin; Zhao, Tianliang; Sun, Zhaobin; Wang, Wei; Hou, Qing; Guo, Zengyuan; Wang, Chao

    2018-05-01

    Air pollution sources and their regional transport are important issues for air quality control. The Global-Regional Assimilation and Prediction System coupled with the China Meteorological Administration Unified Atmospheric Chemistry Environment (GRAPES-CUACE) aerosol adjoint model was applied to detect the sensitive primary emission sources of a haze episode in Beijing occurring between 19 and 21 November 2012. The high PM2.5 concentration peaks occurring at 05:00 and 23:00 LT (GMT+8) over Beijing on 21 November 2012 were set as the cost functions for the aerosol adjoint model. The critical emission regions of the first PM2.5 concentration peak were tracked to the west and south of Beijing, with 2 to 3 days of cumulative transport of air pollutants to Beijing. The critical emission regions of the second peak were mainly located to the south of Beijing, where southeasterly moist air transport led to the hygroscopic growth of particles and pollutant convergence in front of the Taihang Mountains during the daytime on 21 November. The temporal variations in the sensitivity coefficients for the two PM2.5 concentration peaks revealed that the response time of the onset of Beijing haze pollution from the local primary emissions is approximately 1-2 h and that from the surrounding primary emissions it is approximately 7-12 h. The upstream Hebei province has the largest impact on the two PM2.5 concentration peaks, and the contribution of emissions from Hebei province to the first PM2.5 concentration peak (43.6 %) is greater than that to the second PM2.5 concentration peak (41.5 %). The second most influential province for the 05:00 LT PM2.5 concentration peak is Beijing (31.2 %), followed by Shanxi (9.8 %), Tianjin (9.8 %), and Shandong (5.7 %). The second most influential province for the 23:00 LT PM2.5 concentration peak is Beijing (35.7 %), followed by Shanxi (8.1 %), Shandong (8.0 %), and Tianjin (6.7 %). The adjoint model results were compared with the forward
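
    The adjoint principle used in this study can be illustrated on a toy linear source-receptor problem (this sketch is illustrative and unrelated to the GRAPES-CUACE code). If concentrations are c = M e for an emission vector e and the cost function is J = h·c (e.g. the PM2.5 load at one receptor peak), then a single adjoint evaluation dJ/de = Mᵀh yields the sensitivity of the peak to every source region at once, instead of one forward run per region:

```python
# Toy adjoint sensitivity: 3 emission regions -> 2 receptors (hypothetical M).
# Forward:  c = M @ e ;  cost J = h . c  (concentration at the receptor peak)
# Adjoint:  dJ/de = M^T @ h  -- one backward sweep gives all sensitivities.
M = [[0.4, 0.3, 0.1],   # rows: receptors, columns: emission regions
     [0.2, 0.5, 0.2]]
h = [1.0, 0.0]          # cost function selects receptor 0

def matvec_T(A, v):
    """Compute A^T @ v for a list-of-lists matrix (stdlib only)."""
    ncols = len(A[0])
    return [sum(A[i][j] * v[i] for i in range(len(A))) for j in range(ncols)]

sens = matvec_T(M, h)   # sensitivity of J to each region's emissions
# Region 0 has the largest coefficient, so it contributes most to this peak.
```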

  9. Offshore Wind Guidance Document: Oceanography and Sediment Stability (Version 1) Development of a Conceptual Site Model.

    Energy Technology Data Exchange (ETDEWEB)

    Roberts, Jesse D.; Jason Magalen; Craig Jones

    2014-06-01

    This guidance document provides the reader with an overview of the key environmental considerations for a typical offshore wind coastal location and the tools to help guide the reader through a thorough planning process. It will enable readers to identify the key coastal processes relevant to their offshore wind site and perform pertinent analysis to guide siting and layout design, with the goal of minimizing costs associated with planning, permitting, and long-term maintenance. The document highlights site characterization and assessment techniques for evaluating spatial patterns of sediment dynamics in the vicinity of a wind farm under typical, extreme, and storm conditions. Finally, the document describes the assimilation of all of this information into the conceptual site model (CSM) to aid the decision-making processes.

  10. Theoretical modelling of epigenetically modified DNA sequences [version 2; referees: 2 approved

    Directory of Open Access Journals (Sweden)

    Alexandra Teresa Pires Carvalho

    2015-05-01

    We report herein a set of calculations designed to examine the effects of epigenetic modifications on the structure of DNA. The incorporation of methyl, hydroxymethyl, formyl and carboxy substituents at the 5-position of cytosine is shown to hardly affect the geometry of CG base pairs, but to result in rather larger changes to hydrogen-bond and stacking binding energies, as predicted by dispersion-corrected density functional theory (DFT) methods. The same modifications within double-stranded GCG and ACA trimers exhibit rather larger structural effects, when including the sugar-phosphate backbone as well as sodium counterions and implicit aqueous solvation. In particular, changes are observed in the buckle and propeller angles within base pairs and the slide and roll values of base pair steps, but these leave the overall helical shape of DNA essentially intact. The structures so obtained are useful as a benchmark of faster methods, including molecular mechanics (MM) and hybrid quantum mechanics/molecular mechanics (QM/MM) methods. We show that previously developed MM parameters satisfactorily reproduce the trimer structures, as do QM/MM calculations which treat bases with dispersion-corrected DFT and the sugar-phosphate backbone with AMBER. The latter are improved by inclusion of all six bases in the QM region, since a truncated model including only the central CG base pair in the QM region is considerably further from the DFT structure. This QM/MM method is then applied to a set of double-stranded DNA heptamers derived from a recent X-ray crystallographic study, whose size puts a DFT study beyond our current computational resources. These data show that still larger structural changes are observed than in base pairs or trimers, leading us to conclude that it is important to model epigenetic modifications within realistic molecular contexts.

  11. Forsmark site investigation. Assessment of the validity of the rock domain model, version 1.2, based on the modelling of gravity and petrophysical data

    International Nuclear Information System (INIS)

    Isaksson, Hans; Stephens, Michael B.

    2007-11-01

    This document reports the results gained by the geophysical modelling of rock domains based on gravity and petrophysical data, which is one of the activities performed within the site investigation work at Forsmark. The main objective of this activity is to assess the validity of the geological rock domain model version 1.2, and to identify discrepancies in the model that may indicate a need for revision of the model or a need for additional investigations. The verification is carried out by comparing the calculated gravity model response, which takes account of the geological model, with a local gravity anomaly that represents the measured data. The model response is obtained from the three-dimensional geometry and the petrophysical data provided for each rock domain in the geological model. Due to model boundary conditions, the study is carried out in a smaller area within the regional model area. Gravity model responses are calculated in three stages: an initial model, a base model and a refined base model. The refined base model is preferred and is used for comparison purposes. In general, there is a good agreement between the refined base model that makes use of the rock domain model, version 1.2 and the measured gravity data, not least where it concerns the depth extension of the critical rock domain RFM029. The most significant discrepancy occurs in the area extending from the SFR office to the SFR underground facility and further to the northwest. It is speculated that this discrepancy is caused by a combination of an overestimation of the volume of gabbro (RFM016) that plunges towards the southeast in the rock domain model, and an underestimation of the volume of occurrence of pegmatite and pegmatitic granite that are known to be present and occur as larger bodies around SFR. Other discrepancies are noted in rock domain RFM022, which is considered to be overestimated in the rock domain model, version 1.2, and in rock domain RFM017, where the gravity

  12. Simulating the 2012 High Plains Drought Using Three Single Column Model Versions of the Community Earth System Model (SCM-CESM)

    Science.gov (United States)

    Medina, I. D.; Denning, S.

    2014-12-01

    The impact of changes in the frequency and severity of drought on fresh water sustainability is a great concern for many regions of the world. One such location is the High Plains, where the local economy is primarily driven by fresh water withdrawals from the Ogallala Aquifer, which accounts for approximately 30% of total irrigation withdrawals from all U.S. aquifers combined. Modeling studies that focus on the feedback mechanisms that control the climate and eco-hydrology during times of drought are limited in the sense that they use conventional General Circulation Models (GCMs) with grid length scales ranging from one hundred to several hundred kilometers. Additionally, these models utilize crude statistical parameterizations of cloud processes for estimating sub-grid fluxes of heat and moisture and have a poor representation of land surface heterogeneity. For this research, we focus on the 2012 High Plains drought, and will perform numerical simulations using three single column model versions of the Community Earth System Model (SCM-CESM) at multiple sites overlying the Ogallala Aquifer for the 2010-2012 period. In the first version of SCM-CESM, CESM will be used in standard mode (Community Atmospheric Model (CAM) coupled to a single instance of the Community Land Model (CLM)); secondly, CESM will be used in Super-Parameterized mode (SP-CESM), where a cloud resolving model (a CRM consisting of 32 atmospheric columns) replaces the standard CAM atmospheric parameterization and is coupled to a single instance of CLM; and thirdly, CESM will be used in "Multi Instance" SP-CESM mode, where an instance of CLM is coupled to each CRM column of SP-CESM (32 CRM columns coupled to 32 instances of CLM). To assess the physical realism of the land-atmosphere feedbacks simulated at each site by all versions of SCM-CESM, differences in simulated energy and moisture fluxes will be computed between years for the 2010-2012 period, and will be compared to differences calculated using

  13. Fuel Cell Power Model Version 2: Startup Guide, System Designs, and Case Studies. Modeling Electricity, Heat, and Hydrogen Generation from Fuel Cell-Based Distributed Energy Systems

    Energy Technology Data Exchange (ETDEWEB)

    Steward, D.; Penev, M.; Saur, G.; Becker, W.; Zuboy, J.

    2013-06-01

    This guide helps users get started with the U.S. Department of Energy/National Renewable Energy Laboratory Fuel Cell Power (FCPower) Model Version 2, which is a Microsoft Excel workbook that analyzes the technical and economic aspects of high-temperature fuel cell-based distributed energy systems with the aim of providing consistent, transparent, comparable results. This type of energy system would provide onsite-generated heat and electricity to large end users such as hospitals and office complexes. The hydrogen produced could be used for fueling vehicles or stored for later conversion to electricity.

  14. A Method and a Model for Describing Competence and Adjustment: A Preschool Version of the Classroom Behavior Inventory.

    Science.gov (United States)

    Schaefer, Earl S.; Edgerton, Marianna D.

    A preschool version of the Classroom Behavior Inventory, which provides a method for collecting valid data on a child's classroom behavior from day care and preschool teachers, was developed to complement the earlier form, which was developed and validated for elementary school populations. The new version was tested with a pilot group of twenty-two…

  15. Sensitivity of precipitation to parameter values in the community atmosphere model version 5

    Energy Technology Data Exchange (ETDEWEB)

    Johannesson, Gardar; Lucas, Donald; Qian, Yun; Swiler, Laura Painton; Wildey, Timothy Michael

    2014-03-01

    One objective of the Climate Science for a Sustainable Energy Future (CSSEF) program is to develop the capability to thoroughly test and understand the uncertainties in the overall climate model and its components as they are being developed. The focus on uncertainties involves sensitivity analysis: the capability to determine which input parameters have a major influence on the output responses of interest. This report presents some initial sensitivity analysis results performed by Lawrence Livermore National Laboratory (LLNL), Sandia National Laboratories (SNL), and Pacific Northwest National Laboratory (PNNL). In the 2011-2012 timeframe, these laboratories worked in collaboration to perform sensitivity analyses of a set of CAM5 2° runs, where the response metrics of interest were precipitation metrics. The three labs performed their sensitivity analysis (SA) studies separately and then compared results. Overall, the results were quite consistent with each other although the methods used were different. This exercise provided a robustness check of the global sensitivity analysis metrics and identified some strongly influential parameters.
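    The report does not specify each laboratory's SA method in this abstract. One common global screening metric is the standardized regression coefficient (SRC) from a linear surrogate fitted to an ensemble of runs; the sketch below uses a synthetic three-parameter ensemble, not CAM5 data, and the dominance of the first parameter is built into the synthetic response.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical ensemble: 200 model runs over 3 input parameters (columns),
# with a synthetic "precipitation metric" response dominated by parameter 0.
X = rng.uniform(size=(200, 3))
y = 5.0 * X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.1, size=200)

# Standardized regression coefficients: fit a linear surrogate to z-scored
# inputs and outputs; |SRC| then ranks parameter influence on the response.
Xz = (X - X.mean(0)) / X.std(0)
yz = (y - y.mean()) / y.std()
src, *_ = np.linalg.lstsq(Xz, yz, rcond=None)
ranking = np.argsort(-np.abs(src))  # most influential parameter first
```

    Comparing rankings like this one across independent methods and labs is exactly the kind of robustness check the report describes.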

  16. The natural defense system and the normative self model [version 1; referees: 2 approved

    Directory of Open Access Journals (Sweden)

    Philippe Kourilsky

    2016-05-01

    Full Text Available Infectious agents are not the only aggressors, and the immune system is not the sole defender of the organism. In an enlarged perspective, the ‘normative self model’ postulates that a ‘natural defense system’ protects man and other complex organisms against the environmental and internal hazards of life, including infections and cancers. It involves multiple error detection and correction mechanisms that confer robustness to the body at all levels of its organization. According to the model, the self relies on a set of physiological norms, and NONself (meaning: Non Obedient to the Norms of the self) is anything ‘off-norms’. The natural defense system comprises a set of ‘civil defenses’ (to which all cells in organs and tissues contribute) and a ‘professional army’, made of a smaller set of mobile cells. Mobile and non-mobile cells differ in their tuning abilities. Tuning extends the recognition of NONself by the mobile cells, which increases their defensive function. To prevent them from drifting, which would compromise self/NONself discrimination, the more plastic mobile cells need to periodically refer to the more stable non-mobile cells to keep within physiological standards.

  17. Columbia River Statistical Update Model, Version 4.0 (COLSTAT4): Background documentation and user's guide

    Energy Technology Data Exchange (ETDEWEB)

    Whelan, G.; Damschen, D.W.; Brockhaus, R.D.

    1987-08-01

    Daily-averaged temperature and flow information on the Columbia River just downstream of Priest Rapids Dam and upstream of river mile 380 were collected and stored in a data base. The flow information corresponds to discharges that were collected daily from October 1, 1959, through July 28, 1986. The temperature information corresponds to values that were collected daily from January 1, 1965, through May 27, 1986. The computer model, COLSTAT4 (Columbia River Statistical Update - Version 4.0 model), uses the temperature-discharge data base to statistically analyze temperature and flow conditions by computing the frequency of occurrence and duration of selected temperatures and flow rates for the Columbia River. The COLSTAT4 code analyzes the flow and temperature information in a sequential time frame (i.e., a continuous analysis over a given time period); it also analyzes this information in a seasonal time frame (i.e., a periodic analysis over a specific season from year to year). A provision is included to enable the user to edit and/or extend the data base of temperature and flow information. This report describes the COLSTAT4 code and the information contained in its data base.
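    COLSTAT4's frequency-of-occurrence and duration statistics can be sketched as follows; the threshold and daily values below are illustrative, not Columbia River data, and the function is a simplified reading of the statistics the abstract describes rather than the code's actual algorithm.

```python
def exceedance_stats(daily_values, threshold):
    """Fraction of days above a threshold, plus the lengths of consecutive
    runs (durations) of exceedance -- the style of statistic COLSTAT4
    reports for selected temperatures and flow rates."""
    above = [v > threshold for v in daily_values]
    frequency = sum(above) / len(above)
    durations, run = [], 0
    for flag in above:
        if flag:
            run += 1
        elif run:
            durations.append(run)
            run = 0
    if run:
        durations.append(run)
    return frequency, durations

# Illustrative daily temperatures (degrees C), not data from the COLSTAT4 base:
freq, runs = exceedance_stats([14, 16, 17, 15, 13, 18, 19, 12], threshold=15)
```

    Run over a season-by-season slice of the record, this yields the "periodic" analysis; run over the full record, the "sequential" one.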

  18. Updating sea spray aerosol emissions in the Community Multiscale Air Quality (CMAQ) model version 5.0.2

    Science.gov (United States)

    Gantt, B.; Kelly, J. T.; Bash, J. O.

    2015-11-01

    Sea spray aerosols (SSAs) impact the particle mass concentration and gas-particle partitioning in coastal environments, with implications for human and ecosystem health. Model evaluations of SSA emissions have mainly focused on the global scale, but regional-scale evaluations are also important due to the localized impact of SSAs on atmospheric chemistry near the coast. In this study, SSA emissions in the Community Multiscale Air Quality (CMAQ) model were updated to enhance the fine-mode size distribution, include sea surface temperature (SST) dependency, and reduce surf-enhanced emissions. Predictions from the updated CMAQ model and those of the previous release version, CMAQv5.0.2, were evaluated using several coastal and national observational data sets in the continental US. The updated emissions generally reduced model underestimates of sodium, chloride, and nitrate surface concentrations for coastal sites in the Bay Regional Atmospheric Chemistry Experiment (BRACE) near Tampa, Florida. Including SST dependency to the SSA emission parameterization led to increased sodium concentrations in the southeastern US and decreased concentrations along parts of the Pacific coast and northeastern US. The influence of sodium on the gas-particle partitioning of nitrate resulted in higher nitrate particle concentrations in many coastal urban areas due to increased condensation of nitric acid in the updated simulations, potentially affecting the predicted nitrogen deposition in sensitive ecosystems. Application of the updated SSA emissions to the California Research at the Nexus of Air Quality and Climate Change (CalNex) study period resulted in a modest improvement in the predicted surface concentration of sodium and nitrate at several central and southern California coastal sites. This update of SSA emissions enabled a more realistic simulation of the atmospheric chemistry in coastal environments where marine air mixes with urban pollution.

  19. The global aerosol-climate model ECHAM-HAM, version 2: sensitivity to improvements in process representations

    Directory of Open Access Journals (Sweden)

    K. Zhang

    2012-10-01

    Full Text Available This paper introduces and evaluates the second version of the global aerosol-climate model ECHAM-HAM. Major changes have been brought into the model, including new parameterizations for aerosol nucleation and water uptake, an explicit treatment of secondary organic aerosols, modified emission calculations for sea salt and mineral dust, the coupling of aerosol microphysics to a two-moment stratiform cloud microphysics scheme, and alternative wet scavenging parameterizations. These revisions extend the model's capability to represent details of the aerosol lifecycle and its interaction with climate. Nudged simulations of the year 2000 are carried out to compare the aerosol properties and global distribution in HAM1 and HAM2, and to evaluate them against various observations. Sensitivity experiments are performed to help identify the impact of each individual update in model formulation.

    Results indicate that from HAM1 to HAM2 there is a marked weakening of aerosol water uptake in the lower troposphere, reducing the total aerosol water burden from 75 Tg to 51 Tg. The main reason is that the newly introduced κ-Köhler-theory-based water uptake scheme uses a lower value for the maximum relative humidity cutoff. Particulate organic matter loading in HAM2 is considerably higher in the upper troposphere, because the explicit treatment of secondary organic aerosols allows highly volatile oxidation products of the precursors to be vertically transported to regions of very low temperature and to form aerosols there. Sulfate, black carbon, particulate organic matter and mineral dust in HAM2 have longer lifetimes than in HAM1 because of weaker in-cloud scavenging, which is in turn related to lower autoconversion efficiency in the newly introduced two-moment cloud microphysics scheme. Modification in the sea salt emission scheme causes a significant increase in the ratio (from 1.6 to 7.7) between accumulation mode and coarse mode emission fluxes of
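    The water-burden reduction attributed above to the κ-Köhler scheme with an RH cutoff can be illustrated with the single-parameter κ-Köhler equilibrium growth factor (Kelvin effect neglected). The κ value, RH, and default cutoff below are illustrative assumptions, not HAM2 settings.

```python
def kappa_growth_factor(kappa, rh, rh_cutoff=0.95):
    """Equilibrium diameter growth factor D_wet/D_dry from single-parameter
    kappa-Koehler theory with the Kelvin term neglected:
        gf**3 = 1 + kappa * a_w / (1 - a_w),  with a_w ~= RH.
    The rh_cutoff cap mimics the 'maximum relative humidity cutoff' the
    abstract mentions; its default here is an assumed value."""
    a_w = min(rh, rh_cutoff)
    return (1.0 + kappa * a_w / (1.0 - a_w)) ** (1.0 / 3.0)

gf = kappa_growth_factor(kappa=0.6, rh=0.90)  # sulfate-like kappa at 90% RH
```

    Because a_w/(1 - a_w) diverges as RH approaches 1, lowering the cutoff sharply limits the water a particle can take up, which is consistent with the drop in aerosol water burden reported above.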

  20. EIA model documentation: World oil refining logistics demand model, "WORLD" reference manual, Version 1.1

    Energy Technology Data Exchange (ETDEWEB)

    1994-04-11

    This manual is intended primarily for use as a reference by analysts applying the WORLD model to regional studies. It also provides overview information on WORLD features of potential interest to managers and analysts. Broadly, the manual covers WORLD model features in progressively increasing detail. Section 2 provides an overview of the WORLD model, how it has evolved, what its design goals are, what it produces, and where it can be taken with further enhancements. Section 3 reviews model management, covering data sources, managing over-optimization, calibration and seasonality, check-points for case construction, and common errors. Section 4 describes in detail the WORLD system, including: data and program systems in overview; details of mainframe and PC program control and files; model generation, size management, debugging and error analysis; use with different optimizers; and reporting and results analysis. Section 5 provides a detailed description of every WORLD model data table, covering model controls, case and technology data. Section 6 goes into the details of WORLD matrix structure. It provides an overview, describes how regional definitions are controlled, and defines the naming conventions for all model rows, columns, right-hand sides, and bounds. It also includes a discussion of the formulation of product blending and specifications in WORLD. Several appendices supplement the main sections.

  1. Study of the Eco-Economic Indicators by Means of the New Version of the Merge Integrated Model Part 2

    Directory of Open Access Journals (Sweden)

    Boris Vadimovich Digas

    2016-03-01

    Full Text Available One of the most pressing issues of the day is forecasting climatic change and mitigating its consequences. The official point of view, reflected in the Climate Doctrine of the Russian Federation, recognizes the need for a state approach to climatic problems and related issues based on comprehensive scientific analysis of ecological, economic, and social factors. For this purpose, integrated assessment models of an interdisciplinary character are employed; their functionality allows the construction and testing of various dynamic scenarios of complex systems. The main purposes of the computing experiments described in the article are a review of the consequences of Russia's hypothetical participation in greenhouse-gas-reduction initiatives such as the Kyoto Protocol, and the approbation of one method for calculating the green gross domestic product, which represents the efficiency of environmental management in the modelling. To implement these goals, the MERGE optimization model is used; its classical version is intended for quantitative estimation of the results of applying nature-protection strategies. The components of the model are the eco-power module, the climatic module, and the module of loss estimates. The work pays main attention to adapting the MERGE model to the current state of the world economy under a complicated geopolitical situation, and to introducing a new component to the model that realizes a simplified method for calculating the green gross domestic product. The Project of scenario conditions and the key macroeconomic forecast parameters of the socio-economic development of Russia for 2016 and the planning period of 2017−2018, prepared by the Ministry of Economic Development of the Russian Federation, are used as the basic source of input data for the analysis of possible trajectories of the

  2. Study of the Eco-Economic Indicators by Means of the New Version of the Merge Integrated Model. Part 1

    Directory of Open Access Journals (Sweden)

    Boris Vadimovich Digas

    2015-12-01

    Full Text Available One of the most pressing issues of the day is forecasting climatic change and mitigating its consequences. The official point of view, reflected in the Climate Doctrine of the Russian Federation, recognizes the need for a state approach to climatic problems and related issues based on comprehensive scientific analysis of ecological, economic, and social factors. For this purpose, integrated assessment models of an interdisciplinary character are employed; their functionality allows the construction and testing of various dynamic scenarios of complex systems. The main purposes of the computing experiments described in the article are a review of the consequences of Russia's hypothetical participation in greenhouse-gas-reduction initiatives such as the Kyoto Protocol, and the approbation of one method for calculating the green GDP, which represents the efficiency of environmental management in the modelling. To implement these goals, the MERGE optimization model is used; its classical version is intended for quantitative estimation of the results of applying nature-protection strategies. The components of the model are the eco-power module, the climatic module, and the module of loss estimates. The work pays main attention to adapting the MERGE model to the current state of the world economy under a complicated geopolitical situation, and to introducing a new component to the model that realizes a simplified method for calculating the green GDP. The Project of scenario conditions and the key macroeconomic forecast parameters of the socio-economic development of Russia for 2016 and the planning period of 2017−2018, prepared by the Ministry of Economic Development of the Russian Federation, are used as the basic source of input data for the analysis of possible trajectories of the economic development of Russia and the

  3. Thermal Site Descriptive Model. A strategy for the model development during site investigations. Version 1.0

    International Nuclear Information System (INIS)

    Sundberg, Jan

    2003-04-01

    Site investigations are in progress for the siting of a deep repository for spent nuclear fuel. As part of the planning work, strategies are developed for site descriptive modelling regarding different disciplines, amongst them the thermal conditions. The objective of the strategy for a thermal site descriptive model is to guide the practical implementation of evaluating site specific data during the site investigations. It is understood that further development may be needed. The model describes the thermal properties and other thermal parameters of intact rock, fractures and fracture zones, and of the rock mass. The methodology is based on estimation of thermal properties of intact rock and discontinuities, using both empirical and theoretical/numerical approaches, and estimation of thermal processes using mathematical modelling. The methodology will be used and evaluated for the thermal site descriptive modelling at the Aespoe Hard Rock Laboratory

  4. First-episode psychosis

    DEFF Research Database (Denmark)

    Simonsen, Erik

    2011-01-01

    . Patients with first-episode psychosis had significantly high NEO-PI-R scores for neuroticism and agreeableness, and lower scores for conscientiousness and extroversion. The median time for remission in the total sample was three months. Female gender and better premorbid functioning were predictive of less...... negative symptoms and shorter duration of untreated psychosis (DUP) was predictive for shorter time to remission, stable remission, less severe positive psychotic symptoms, and better social functioning. Female gender, better premorbid social functioning and more education also contributed to a better...... should warn clinicians to pay attention to the more elaborate needs of these patients. A re-evaluation at three months should reveal that non-remitted patients with longer DUPs indicate high risk of continuous non-remission. A possible shift to clozapine for this group should be strongly considered....

  5. VELMA Ecohydrological Model, Version 2.0 -- Analyzing Green Infrastructure Options for Enhancing Water Quality and Ecosystem Service Co-Benefits

    Science.gov (United States)

    This 2-page factsheet describes an enhanced version (2.0) of the VELMA eco-hydrological model. VELMA – Visualizing Ecosystem Land Management Assessments – has been redesigned to assist communities, land managers, policy makers and other decision makers in evaluating the effecti...

  6. MODIFIED N.R.C. VERSION OF THE U.S.G.S. SOLUTE TRANSPORT MODEL. VOLUME 2. INTERACTIVE PREPROCESSOR PROGRAM

    Science.gov (United States)

    The methods described in the report can be used with the modified N.R.C. version of the U.S.G.S. Solute Transport Model to predict the concentration of chemical parameters in a contaminant plume. The two volume report contains program documentation and user's manual. The program ...

  7. Moral judgment in episodic amnesia.

    Science.gov (United States)

    Craver, Carl F; Keven, Nazim; Kwan, Donna; Kurczek, Jake; Duff, Melissa C; Rosenbaum, R Shayna

    2016-08-01

    To investigate the role of episodic thought about the past and future in moral judgment, we administered a well-established moral judgment battery to individuals with hippocampal damage and deficits in episodic thought (Greene et al., 2001). Healthy controls select deontological answers in high-conflict moral scenarios more frequently when they vividly imagine themselves in the scenarios than when they imagine scenarios abstractly, at some personal remove. If this bias is mediated by episodic thought, individuals with deficits in episodic thought should not exhibit this effect. We report that individuals with deficits in episodic memory and future thought make moral judgments and exhibit the biasing effect of vivid, personal imaginings on moral judgment. These results strongly suggest that the biasing effect of vivid personal imagining on moral judgment is not due to episodic thought about the past and future. © 2016 Wiley Periodicals, Inc.

  8. Mnemonic Discrimination Deficits in First-Episode Psychosis and a Ketamine Model Suggests Dentate Gyrus Pathology Linked to N-Methyl-D-Aspartate Receptor Hypofunction.

    Science.gov (United States)

    Kraguljac, Nina Vanessa; Carle, Matthew; Frölich, Michael A; Tran, Steve; Yassa, Michael A; White, David Matthew; Reddy, Abhishek; Lahti, Adrienne Carol

    2018-03-01

    Converging evidence from neuroimaging and postmortem studies suggests that hippocampal subfields are differentially affected in schizophrenia. Recent studies report dentate gyrus dysfunction in chronic schizophrenia, but the underlying mechanisms remain to be elucidated. Here we sought to examine if this deficit is already present in first-episode psychosis, and if N-methyl-D-aspartate receptor hypofunction, a putative central pathophysiological mechanism in schizophrenia, experimentally induced by ketamine, would result in a similar abnormality. We applied a mnemonic discrimination task selectively taxing pattern separation in two experiments: 1) a group of 23 first-episode psychosis patients and 23 matched healthy volunteers and 2) a group of 19 healthy volunteers before and during a ketamine challenge (0.27 mg/kg over 10 minutes, then 0.25 mg/kg/hour for 50 minutes, 0.01 mL/s). We calculated response bias-corrected pattern separation and recognition scores. We also examined the relationships between task performance and symptom severity as well as ketamine levels. We report a deficit in pattern separation but not recognition performance in first-episode psychosis patients compared with healthy volunteers (p = .04) and in volunteers during the ketamine challenge compared with baseline (p = .003). Exploratory analyses revealed no correlation between task performance and Repeatable Battery for the Assessment of Neuropsychological Status total scores or positive symptoms in first-episode psychosis patients, or with ketamine serum levels. We observed a mnemonic discrimination deficit but intact recognition in both datasets. Our findings suggest a tentative mechanistic link between dentate gyrus dysfunction in first-episode psychosis and N-methyl-D-aspartate receptor hypofunction. Copyright © 2017 Society of Biological Psychiatry. Published by Elsevier Inc. All rights reserved.
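    The abstract's "response bias-corrected pattern separation and recognition scores" are commonly computed in mnemonic similarity tasks by subtracting the rate at which novel foils attract the same response; the exact scoring used in this paper may differ, and the rates below are illustrative.

```python
def mst_scores(p_sim_lure, p_sim_foil, p_old_target, p_old_foil):
    """Common bias corrections in mnemonic similarity tasks: subtract the
    foil rate for each response type, so a participant who calls everything
    'similar' (or everything 'old') scores zero rather than appearing good."""
    pattern_separation = p_sim_lure - p_sim_foil   # "similar" to lures vs foils
    recognition = p_old_target - p_old_foil        # "old" to targets vs foils
    return pattern_separation, recognition

# Illustrative response rates, not data from the study:
sep, rec = mst_scores(p_sim_lure=0.55, p_sim_foil=0.20,
                      p_old_target=0.80, p_old_foil=0.10)
```

    A selective dentate gyrus deficit would show up as a lowered pattern-separation score with an intact recognition score, the dissociation the study reports in both patients and ketamine-challenged volunteers.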

  9. Executive function, episodic memory, and Medicare expenditures.

    Science.gov (United States)

    Bender, Alex C; Austin, Andrea M; Grodstein, Francine; Bynum, Julie P W

    2017-07-01

    We examined the relationship between health care expenditures and cognition, focusing on differences across cognitive systems defined by global cognition, executive function, or episodic memory. We used linear regression models to compare annual health expenditures by cognitive status in 8125 Nurses' Health Study participants who completed a cognitive battery and were enrolled in Medicare parts A and B. Adjusting for demographics and comorbidity, executive impairment was associated with higher total annual expenditures of $1488 per person, whereas no comparable association with episodic memory impairment was found. Expenditures exhibited a linear relationship with executive function, but not episodic memory ($584 higher for every 1 standard deviation decrement in executive function; P < .01). Impairment in executive function is specifically and linearly associated with higher health care expenditures. Focusing on management strategies that address early losses in executive function may be effective in reducing costly services. Copyright © 2017 the Alzheimer's Association. Published by Elsevier Inc. All rights reserved.
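    The per-SD estimate above comes from linear models with covariate adjustment. A minimal sketch of that style of fit is below; the data are synthetic (not the Nurses' Health Study), and the variable names and effect sizes are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 500

# Synthetic data: expenditures fall with executive-function z-score after
# adjusting for age and comorbidity count (all relationships invented here).
age = rng.normal(70, 5, n)
comorbidity = rng.poisson(2, n).astype(float)
exec_z = rng.normal(0, 1, n)
spend = 6000 - 584 * exec_z + 40 * age + 300 * comorbidity \
        + rng.normal(0, 500, n)

# OLS with covariate adjustment: the coefficient on exec_z estimates the
# expenditure change per 1 SD of executive function, holding covariates fixed.
X = np.column_stack([np.ones(n), exec_z, age, comorbidity])
beta, *_ = np.linalg.lstsq(X, spend, rcond=None)
per_sd = beta[1]
```

    Because exec_z is already standardized, its coefficient reads directly as "dollars per standard deviation," matching how the abstract reports the effect.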

  10. Episodic-like memory in zebrafish.

    Science.gov (United States)

    Hamilton, Trevor J; Myggland, Allison; Duperreault, Erika; May, Zacnicte; Gallup, Joshua; Powell, Russell A; Schalomon, Melike; Digweed, Shannon M

    2016-11-01

    Episodic-like memory tests often aid in determining an animal's ability to recall the what, where, and which (context) of an event. To date, this type of memory has been demonstrated in humans, wild chacma baboons, corvids (scrub jays), hummingbirds, mice, rats, Yucatan minipigs, and cuttlefish. The potential for this type of memory in zebrafish remains unexplored even though they are quickly becoming an essential model organism for the study of a variety of human cognitive and mental disorders. Here we explore the episodic-like capabilities of zebrafish (Danio rerio) in a previously established mammalian memory paradigm. We demonstrate that when zebrafish were presented with a familiar object in a familiar context but a novel location within that context, they spend more time in the novel quadrant. Thus, zebrafish display episodic-like memory as they remember what object they saw, where they saw it (quadrant location), and on which occasion (yellow or blue walls) it was presented.

  11. Episodes of care: is emergency medicine ready?

    Science.gov (United States)

    Wiler, Jennifer L; Beck, Dennis; Asplin, Brent R; Granovsky, Michael; Moorhead, John; Pilgrim, Randy; Schuur, Jeremiah D

    2012-05-01

    Optimizing resource use, eliminating waste, aligning provider incentives, reducing overall costs, and coordinating the delivery of quality care while improving outcomes have been major themes of health care reform initiatives. Recent legislation contains several provisions designed to move away from the current fee-for-service payment mechanism toward a model that reimburses providers for caring for a population of patients over time while shifting more financial risk to providers. In this article, we review current approaches to episode of care development and reimbursement. We describe the challenges of incorporating emergency medicine into the episode of care approach and the uncertain influence this delivery model will have on emergency medicine care, including quality outcomes. We discuss the limitations of the episode of care payment model for emergency services and advocate retention of the current fee-for-service payment model, as well as identify research gaps that, if addressed, could be used to inform future policy decisions of emergency medicine health policy leaders. We then describe a meaningful role for emergency medicine in an episode of care setting. Copyright © 2011. Published by Mosby, Inc.

  12. Mental images in episodic memory

    OpenAIRE

    Han, KyungHun

    2009-01-01

    Episodic memory, i.e. memorization of information within a spatiotemporal environment, is affected by Alzheimer's disease (AD) but its loss may also occur in the normal aging process. The purpose of this study is to analyze and evaluate episodic memory in patients with AD by examining their cognitive skills in episodic memory through the introspection technique. A new method was used, wherein we assessed mental images of the subject's own past recalled in the mind like projected pictures and ...

  13. Recall of Others' Actions after Incidental Encoding Reveals Episodic-like Memory in Dogs.

    Science.gov (United States)

    Fugazza, Claudia; Pogány, Ákos; Miklósi, Ádám

    2016-12-05

    The existence of episodic memory in non-human animals is a debated topic that has been investigated using different methodologies that reflect diverse theoretical approaches to its definition. A fundamental feature of episodic memory is recalling after incidental encoding, which can be assessed if the recall test is unexpected [1]. We used a modified version of the "Do as I Do" method [2], relying on dogs' ability to imitate human actions, to test whether dogs can rely on episodic memory when recalling others' actions from the past. Dogs were first trained to imitate human actions on command. Next, they were trained to perform a simple training exercise (lying down), irrespective of the previously demonstrated action. This way, we substituted their expectation to be required to imitate with the expectation to be required to lie down. We then tested whether dogs recalled the demonstrated actions by unexpectedly giving them the command to imitate, instead of lying down. Dogs were tested with a short (1 min) and a long (1 hr) retention interval. They were able to recall the demonstrated actions after both intervals; however, their performance declined more with time compared to conditions in which imitation was expected. These findings show that dogs recall past events as complex as human actions even if they do not expect the memory test, providing evidence for episodic-like memory. Dogs offer an ideal model to study episodic memory in non-human species, and this methodological approach allows investigating memory of complex, context-rich events. VIDEO ABSTRACT. Copyright © 2016 Elsevier Ltd. All rights reserved.

  14. The evolution of episodic memory

    Science.gov (United States)

    Allen, Timothy A.; Fortin, Norbert J.

    2013-01-01

    One prominent view holds that episodic memory emerged recently in humans and lacks a “(neo)Darwinian evolution” [Tulving E (2002) Annu Rev Psychol 53:1–25]. Here, we review evidence supporting the alternative perspective that episodic memory has a long evolutionary history. We show that fundamental features of episodic memory capacity are present in mammals and birds and that the major brain regions responsible for episodic memory in humans have anatomical and functional homologs in other species. We propose that episodic memory capacity depends on a fundamental neural circuit that is similar across mammalian and avian species, suggesting that protoepisodic memory systems exist across amniotes and, possibly, all vertebrates. The implication is that episodic memory in diverse species may primarily be due to a shared underlying neural ancestry, rather than the result of evolutionary convergence. We also discuss potential advantages that episodic memory may offer, as well as species-specific divergences that have developed on top of the fundamental episodic memory architecture. We conclude by identifying possible time points for the emergence of episodic memory in evolution, to help guide further research in this area. PMID:23754432

  15. Lord-Wingersky Algorithm Version 2.0 for Hierarchical Item Factor Models with Applications in Test Scoring, Scale Alignment, and Model Fit Testing.

    Science.gov (United States)

    Cai, Li

    2015-06-01

    Lord and Wingersky's (Appl Psychol Meas 8:453-461, 1984) recursive algorithm for creating summed score based likelihoods and posteriors has a proven track record in unidimensional item response theory (IRT) applications. Extending the recursive algorithm to handle multidimensionality is relatively simple, especially with fixed quadrature because the recursions can be defined on a grid formed by direct products of quadrature points. However, the increase in computational burden remains exponential in the number of dimensions, making the implementation of the recursive algorithm cumbersome for truly high-dimensional models. In this paper, a dimension reduction method that is specific to the Lord-Wingersky recursions is developed. This method can take advantage of the restrictions implied by hierarchical item factor models, e.g., the bifactor model, the testlet model, or the two-tier model, such that a version of the Lord-Wingersky recursive algorithm can operate on a dramatically reduced set of quadrature points. For instance, in a bifactor model, the dimension of integration is always equal to 2, regardless of the number of factors. The new algorithm not only provides an effective mechanism to produce summed score to IRT scaled score translation tables properly adjusted for residual dependence, but leads to new applications in test scoring, linking, and model fit checking as well. Simulated and empirical examples are used to illustrate the new applications.
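    The original unidimensional Lord-Wingersky recursion, evaluated at a single quadrature point θ, can be sketched directly; the paper's hierarchical dimension-reduction extension is not reproduced here.

```python
def summed_score_likelihood(item_probs):
    """Lord-Wingersky recursion for dichotomous items: given each item's
    probability of a correct response at a fixed theta, return the list
    P(summed score = s | theta) for s = 0..K.  Items are folded in one at
    a time, each update costing O(K)."""
    dist = [1.0]  # with zero items, score 0 has probability 1
    for p in item_probs:
        new = [0.0] * (len(dist) + 1)
        for s, mass in enumerate(dist):
            new[s] += mass * (1.0 - p)   # item answered incorrectly
            new[s + 1] += mass * p       # item answered correctly
        dist = new
    return dist

dist = summed_score_likelihood([0.5, 0.5])  # two coin-flip items
```

    Integrating these conditional distributions over the θ quadrature grid yields the summed-score likelihoods and posteriors used for scale-score translation tables; the paper's contribution is keeping that grid small (e.g., dimension 2 for any bifactor model) under hierarchical factor structures.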

  16. Reduced context effects on retrieval in first-episode schizophrenia.

    Directory of Open Access Journals (Sweden)

    Lucia M Talamini

    Full Text Available BACKGROUND: A recent modeling study by the authors predicted that contextual information is poorly integrated into episodic representations in schizophrenia, and that this is a main cause of the retrieval deficits seen in schizophrenia. METHODOLOGY/PRINCIPAL FINDINGS: We tested this prediction in patients with first-episode schizophrenia and matched controls. The benefit from contextual cues in retrieval was strongly reduced in patients. On the other hand, retrieval based on item cues was spared. CONCLUSIONS/SIGNIFICANCE: These results suggest that reduced integration of context information into episodic representations is a core deficit in schizophrenia and one of the main causes of episodic memory impairment.

  17. Brayton Cycle Numerical Modeling using the RELAP5-3D code, version 4.3.4

    Energy Technology Data Exchange (ETDEWEB)

    Longhini, Eduardo P.; Lobo, Paulo D.C.; Guimarães, Lamartine N.F.; Filho, Francisco A.B.; Ribeiro, Guilherme B., E-mail: edu_longhini@yahoo.com.br [Instituto de Estudos Avançados (IEAv), São José dos Campos, SP (Brazil). Divisão de Energia Nuclear

    2017-07-01

    This work contributes to enabling and developing technologies for fast micro reactors that generate heat and electric energy to warm and electrically supply spacecraft equipment, and also to produce nuclear space propulsion. For this purpose, the Brayton cycle is an attractive approach for space nuclear power. The Brayton gas thermal cycle is a closed cycle with two adiabatic and two isobaric processes; the components performing these processes are a compressor, a turbine, a heat source, a cold source and a recuperator. The working fluid's mass flow runs through the thermal cycle, converting thermal energy into electrical energy usable in space and terrestrial devices. The objective is to numerically model the Brayton gas thermal cycle at nominal operation, with a single turbomachine composed of the radial-inflow compressor and turbine of a 40.8 kWe Brayton Rotating Unit (BRU). The Brayton cycle numerical modeling is performed with the RELAP5-3D program, version 4.3.4. Nominal operation uses as working fluid a 40 g/mole He-Xe mixture with a flow rate of 1.85 kg/s, a shaft rotational speed of 45 krpm, compressor and turbine inlet temperatures of 400 K and 1149 K, respectively, and a compressor exit pressure of 0.931 MPa. The aim is to obtain the physical data corresponding to the operation of each cycle component, and of the cycle as a whole, at this nominal operating point. (author)
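
    As a rough plausibility check on the quoted operating point, the heat capacity of the monatomic He-Xe mixture and an ideal turbine expansion can be estimated from first principles. The pressure ratio and the purely ideal-gas isentropic treatment below are illustrative assumptions, not values from the paper:

```python
# Back-of-envelope check of the quoted He-Xe operating point. The gas is
# treated as a monatomic ideal-gas mixture; the turbine pressure ratio r
# and the isentropic treatment are assumptions, not values from the paper.
R = 8.314            # J/(mol K), universal gas constant
M = 0.040            # kg/mol, He-Xe mixture molar mass (40 g/mole)
cp = 2.5 * R / M     # monatomic ideal gas: cp = (5/2) R / M, J/(kg K)
gamma = 5.0 / 3.0    # monatomic heat-capacity ratio

mdot = 1.85          # kg/s, quoted flow rate
T_turb_in = 1149.0   # K, quoted turbine inlet temperature
r = 1.9              # assumed turbine expansion pressure ratio

# Isentropic expansion: T_out = T_in * r**(-(gamma - 1)/gamma)
T_turb_out = T_turb_in * r ** (-(gamma - 1.0) / gamma)
P_turbine = mdot * cp * (T_turb_in - T_turb_out)  # gross shaft power, W
```

    Most of this gross turbine power drives the compressor; the net electrical output of the BRU is the quoted 40.8 kWe.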

  18. Brayton Cycle Numerical Modeling using the RELAP5-3D code, version 4.3.4

    International Nuclear Information System (INIS)

    Longhini, Eduardo P.; Lobo, Paulo D.C.; Guimarães, Lamartine N.F.; Filho, Francisco A.B.; Ribeiro, Guilherme B.

    2017-01-01

    This work contributes to enabling and developing technologies for fast micro reactors that generate heat and electric energy to warm and electrically supply spacecraft equipment, and also to produce nuclear space propulsion. For this purpose, the Brayton cycle is an attractive approach for space nuclear power. The Brayton gas thermal cycle is a closed cycle with two adiabatic and two isobaric processes; the components performing these processes are a compressor, a turbine, a heat source, a cold source and a recuperator. The working fluid's mass flow runs through the thermal cycle, converting thermal energy into electrical energy usable in space and terrestrial devices. The objective is to numerically model the Brayton gas thermal cycle at nominal operation, with a single turbomachine composed of the radial-inflow compressor and turbine of a 40.8 kWe Brayton Rotating Unit (BRU). The Brayton cycle numerical modeling is performed with the RELAP5-3D program, version 4.3.4. Nominal operation uses as working fluid a 40 g/mole He-Xe mixture with a flow rate of 1.85 kg/s, a shaft rotational speed of 45 krpm, compressor and turbine inlet temperatures of 400 K and 1149 K, respectively, and a compressor exit pressure of 0.931 MPa. The aim is to obtain the physical data corresponding to the operation of each cycle component, and of the cycle as a whole, at this nominal operating point. (author)

  19. The Episodic Nature of Episodic-Like Memories

    Science.gov (United States)

    Easton, Alexander; Webster, Lisa A. D.; Eacott, Madeline J.

    2012-01-01

    Studying episodic memory in nonhuman animals has proved difficult because definitions in humans require conscious recollection. Here, we assessed humans' experience of episodic-like recognition memory tasks that have been used with animals. It was found that tasks using contextual information to discriminate events could only be accurately…

  20. Modeling the structure of the attitudes and belief scale 2 using CFA and bifactor approaches: Toward the development of an abbreviated version.

    Science.gov (United States)

    Hyland, Philip; Shevlin, Mark; Adamson, Gary; Boduszek, Daniel

    2014-01-01

    The Attitudes and Belief Scale-2 (ABS-2: DiGiuseppe, Leaf, Exner, & Robin, 1988. The development of a measure of rational/irrational thinking. Paper presented at the World Congress of Behavior Therapy, Edinburgh, Scotland.) is a 72-item self-report measure of evaluative rational and irrational beliefs widely used in Rational Emotive Behavior Therapy research contexts. However, little psychometric evidence exists regarding the measure's underlying factor structure. Furthermore, given the length of the ABS-2 there is a need for an abbreviated version that can be administered when there are time demands on the researcher, such as in clinical settings. This study sought to examine a series of theoretical models hypothesized to represent the latent structure of the ABS-2 within an alternative models framework using traditional confirmatory factor analysis as well as utilizing a bifactor modeling approach. Furthermore, this study also sought to develop a psychometrically sound abbreviated version of the ABS-2. Three hundred and thirteen (N = 313) active emergency service personnel completed the ABS-2. Results indicated that for each model, the application of bifactor modeling procedures improved model fit statistics, and a novel eight-factor intercorrelated solution was identified as the best fitting model of the ABS-2. However, the observed fit indices failed to satisfy commonly accepted standards. A 24-item abbreviated version was thus constructed and an intercorrelated eight-factor solution yielded satisfactory model fit statistics. Current results support the use of a bifactor modeling approach to determining the factor structure of the ABS-2. Furthermore, results provide empirical support for the psychometric properties of the newly developed abbreviated version.
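
    The bifactor structure the study fits can be illustrated with a small synthetic-data sketch: every item loads on one general factor plus exactly one orthogonal group factor, so items within a group correlate more strongly than items across groups. The dimensions and loadings below are invented for illustration and do not reflect the ABS-2:

```python
import numpy as np

rng = np.random.default_rng(0)
n, n_groups, items_per_group = 5000, 4, 3

# Orthogonal general and group factors, as in a standard bifactor model
general = rng.standard_normal(n)
groups = rng.standard_normal((n, n_groups))

lam_g, lam_s, noise = 0.6, 0.5, 0.6   # invented loadings
items = []
for g in range(n_groups):
    for _ in range(items_per_group):
        items.append(lam_g * general + lam_s * groups[:, g]
                     + noise * rng.standard_normal(n))
X = np.column_stack(items)

# Items sharing a group factor correlate more strongly than items
# related only through the general factor.
corr = np.corrcoef(X, rowvar=False)
within = corr[0, 1]   # items 0 and 1 share group factor 0
between = corr[0, 3]  # items 0 and 3 share only the general factor
```

    Fitting an actual bifactor CFA to such data requires an SEM package; the point here is only the covariance pattern that motivates the bifactor specification.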

  1. User's guide to the MESOI diffusion model: Version 1. 1 (for Data General Eclipse S/230 with AFOS)

    Energy Technology Data Exchange (ETDEWEB)

    Athey, G.F.; Ramsdell, J.V.

    1982-09-01

    MESOI is an interactive, Lagrangian puff trajectory model. The model theory is documented separately (Ramsdell and Athey, 1981). Version 1.1 is a modified form of the original Version 1.0. It is designed to run on a Data General Eclipse computer. The model has improved support features which make it useful as an emergency response tool. This report is intended to provide the user with the information necessary to successfully conduct model simulations using MESOI Version 1.1 and to use the support programs STAPREP and EXPLT. The user is also provided information on the use of the data file maintenance and review program UPDATE. Examples are given for the operation of the program. Test data sets are described which allow the user to practice with the programs and to confirm proper implementation and execution.

  2. Temperature and Humidity Profiles in the TqJoint Data Group of AIRS Version 6 Product for the Climate Model Evaluation

    Science.gov (United States)

    Ding, Feng; Fang, Fan; Hearty, Thomas J.; Theobald, Michael; Vollmer, Bruce; Lynnes, Christopher

    2014-01-01

    The Atmospheric Infrared Sounder (AIRS) mission is entering its 13th year of global observations of the atmospheric state, including temperature and humidity profiles, outgoing long-wave radiation, cloud properties, and trace gases. AIRS data have thus been widely used, among other things, for short-term climate research and as an observational component in model evaluation. One instance is the fifth phase of the Coupled Model Intercomparison Project (CMIP5), which uses AIRS version 5 data in the climate model evaluation. The NASA Goddard Earth Sciences Data and Information Services Center (GES DISC) is the home of processing, archiving, and distribution services for data from the AIRS mission. The GES DISC, in collaboration with the AIRS Project, released data from the version 6 algorithm in early 2013. The new algorithm represents a significant improvement over previous versions in terms of greater stability, yield, and quality of products. The ongoing Earth System Grid for next generation climate model research project, a collaborative effort of GES DISC and NASA JPL, will bring in temperature and humidity profiles from AIRS version 6. The AIRS version 6 product adds a new "TqJoint" data group, which contains data for a common set of observations across water vapor and temperature at all atmospheric levels and is suitable for climate process studies. How different might the monthly temperature and humidity profiles in the "TqJoint" group be from those in the "Standard" group, where temperature and water vapor are not always valid at the same time? This study aims to answer the question by comprehensively comparing the temperature and humidity profiles from the "TqJoint" group and the "Standard" group. The comparison includes mean differences at different levels globally and over land and ocean. We are also working on examining the sampling differences between the "TqJoint" and "Standard" groups using MERRA data.
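
    The sampling question above can be mimicked with a toy calculation: a "TqJoint"-like field is defined only where a second variable's retrieval also succeeds, so its mean is computed over a subset of the grid and can differ from the "Standard"-like mean purely through sampling. All values below are synthetic, not AIRS data:

```python
import numpy as np

# Synthetic illustration of the "TqJoint" vs "Standard" sampling issue:
# the joint field is valid only where the water-vapor retrieval is also
# valid, so its mean is taken over a subset of the grid.
rng = np.random.default_rng(1)
temp = 250.0 + 5.0 * rng.standard_normal((90, 180))  # K, one level
q_valid = rng.random((90, 180)) > 0.3                # water-vapor mask

standard_mean = temp.mean()                          # all retrievals

tqjoint = np.where(q_valid, temp, np.nan)            # joint-validity field
tqjoint_mean = np.nanmean(tqjoint)                   # jointly valid only

sampling_difference = tqjoint_mean - standard_mean
```

    With a mask that is independent of temperature the difference is small; real sampling biases arise when validity correlates with the field itself (e.g., cloudy scenes).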

  3. Statistical model of fractures and deformations zones for Forsmark. Preliminary site description Forsmark area - version 1.2

    Energy Technology Data Exchange (ETDEWEB)

    La Pointe, Paul R. [Golder Associate Inc., Redmond, WA (United States); Olofsson, Isabelle; Hermanson, Jan [Golder Associates AB, Uppsala (Sweden)

    2005-04-01

    Compared to version 1.1, a much larger amount of data, especially from boreholes, is available. Both one-hole interpretation and Boremap indicate the presence of high and low fracture intensity intervals in the rock mass. The depth and width of these intervals vary from borehole to borehole, but these constant fracture intensity intervals are contiguous and present quite sharp transitions. There is not a consistent pattern of intervals of high fracture intensity at or near to the surface. In many cases, the intervals of highest fracture intensity are considerably below the surface. While some fractures may have occurred or been reactivated in response to surficial stress relief, surficial stress relief does not appear to be a significant explanatory variable for the observed variations in fracture intensity. Data from the high fracture intensity intervals were extracted and statistical analyses were conducted in order to identify common geological factors. Stereoplots of fracture orientation versus depth for the different fracture intensity intervals were also produced for each borehole. Moreover, percussion borehole data were analysed in order to identify the persistence of these intervals throughout the model volume. The main conclusions of these analyses are the following: The fracture intensity is conditioned by the rock domain, but inside a rock domain intervals of high and low fracture intensity are identified. The intervals of high fracture intensity almost always correspond to intervals with distinct fracture orientations (whether a set, most often the NW sub-vertical set, is highly dominant, or some orientation sets are missing). These high fracture intensity intervals are positively correlated to the presence of first and second generation minerals (epidote, calcite). No clear correlation for these fracture intensity intervals has been identified between holes. Based on these results the fracture frequency has been calculated in each rock domain for the

  4. Statistical model of fractures and deformations zones for Forsmark. Preliminary site description Forsmark area - version 1.2

    International Nuclear Information System (INIS)

    La Pointe, Paul R.; Olofsson, Isabelle; Hermanson, Jan

    2005-04-01

    Compared to version 1.1, a much larger amount of data, especially from boreholes, is available. Both one-hole interpretation and Boremap indicate the presence of high and low fracture intensity intervals in the rock mass. The depth and width of these intervals vary from borehole to borehole, but these constant fracture intensity intervals are contiguous and present quite sharp transitions. There is not a consistent pattern of intervals of high fracture intensity at or near to the surface. In many cases, the intervals of highest fracture intensity are considerably below the surface. While some fractures may have occurred or been reactivated in response to surficial stress relief, surficial stress relief does not appear to be a significant explanatory variable for the observed variations in fracture intensity. Data from the high fracture intensity intervals were extracted and statistical analyses were conducted in order to identify common geological factors. Stereoplots of fracture orientation versus depth for the different fracture intensity intervals were also produced for each borehole. Moreover, percussion borehole data were analysed in order to identify the persistence of these intervals throughout the model volume. The main conclusions of these analyses are the following: The fracture intensity is conditioned by the rock domain, but inside a rock domain intervals of high and low fracture intensity are identified. The intervals of high fracture intensity almost always correspond to intervals with distinct fracture orientations (whether a set, most often the NW sub-vertical set, is highly dominant, or some orientation sets are missing). These high fracture intensity intervals are positively correlated to the presence of first and second generation minerals (epidote, calcite). No clear correlation for these fracture intensity intervals has been identified between holes. Based on these results the fracture frequency has been calculated in each rock domain for the

  5. Medicare Program; Cancellation of Advancing Care Coordination Through Episode Payment and Cardiac Rehabilitation Incentive Payment Models; Changes to Comprehensive Care for Joint Replacement Payment Model: Extreme and Uncontrollable Circumstances Policy for the Comprehensive Care for Joint Replacement Payment Model. Final rule; interim final rule with comment period.

    Science.gov (United States)

    2017-12-01

    This final rule cancels the Episode Payment Models (EPMs) and Cardiac Rehabilitation (CR) Incentive Payment Model and rescinds the regulations governing these models. It also implements certain revisions to the Comprehensive Care for Joint Replacement (CJR) model, including: Giving certain hospitals selected for participation in the CJR model a one-time option to choose whether to continue their participation in the model; technical refinements and clarifications for certain payment, reconciliation and quality provisions; and a change to increase the pool of eligible clinicians that qualify as affiliated practitioners under the Advanced Alternative Payment Model (Advanced APM) track. An interim final rule with comment period is being issued in conjunction with this final rule in order to address the need for a policy to provide some flexibility in the determination of episode costs for providers located in areas impacted by extreme and uncontrollable circumstances.

  6. Superficial Priming in Episodic Recognition

    Science.gov (United States)

    Dopkins, Stephen; Sargent, Jesse; Ngo, Catherine T.

    2010-01-01

    We explored the effect of superficial priming in episodic recognition and found it to be different from the effect of semantic priming in episodic recognition. Participants made recognition judgments to pairs of items, with each pair consisting of a prime item and a test item. Correct positive responses to the test item were impeded if the prime…

  7. Rock mechanics site descriptive model-theoretical approach. Preliminary site description Forsmark area - version 1.2

    Energy Technology Data Exchange (ETDEWEB)

    Fredriksson, Anders; Olofsson, Isabelle [Golder Associates AB, Uppsala (Sweden)

    2005-12-15

    The present report summarises the theoretical approach to estimate the mechanical properties of the rock mass in relation to the Preliminary Site Descriptive Modelling, version 1.2 Forsmark. The theoretical approach is based on a discrete fracture network (DFN) description of the fracture system in the rock mass and on the results of mechanical testing of intact rock and of rock fractures. To estimate the mechanical properties of the rock mass a load test on a rock block with fractures is simulated with the numerical code 3DEC. The location and size of the fractures are given by DFN-realisations. The rock block was loaded in plane strain condition. From the calculated relationship between stresses and deformations the mechanical properties of the rock mass were determined. The influence of the geometrical properties of the fracture system on the mechanical properties of the rock mass was analysed by loading 20 blocks based on different DFN-realisations. The material properties of the intact rock and the fractures were kept constant. The properties are set equal to the mean value of each measured material property. The influence of the variation of the properties of the intact rock and of the mechanical properties of the fractures is estimated by analysing numerical load tests on one specific block (one DFN-realisation) with combinations of properties for intact rock and fractures. Each parameter varies from its lowest values to its highest values while the rest of the parameters are held constant, equal to the mean value. The resulting distribution was expressed as a variation around the value determined with mean values on all parameters. To estimate the resulting distribution of the mechanical properties of the rock mass a Monte-Carlo simulation was performed by generating values from the two distributions independent of each other. The two values were added and the statistical properties of the resulting distribution were determined.
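
    The final Monte Carlo step described above (sampling two independent sources of variation and adding them) can be sketched in a few lines; the distributions and numbers are illustrative, not Forsmark results:

```python
import numpy as np

# Sampling two independent sources of variation and adding them, as in
# the abstract's final Monte Carlo step. Distributions are illustrative.
rng = np.random.default_rng(42)
n = 100_000

geom = rng.normal(0.0, 3.0, n)   # variation from DFN geometry, GPa
mat = rng.normal(0.0, 4.0, n)    # variation from material properties, GPa

mean_case = 60.0                 # modulus for mean-value parameters, GPa
modulus = mean_case + geom + mat

# For independent terms the variances add: std ~ sqrt(3**2 + 4**2) = 5
est_mean = modulus.mean()
est_std = modulus.std()
```

    Sampling the two distributions independently, as the report does, is what justifies simply adding the draws; any correlation between geometric and material variability would require joint sampling instead.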

  8. Rock mechanics site descriptive model-theoretical approach. Preliminary site description Forsmark area - version 1.2

    International Nuclear Information System (INIS)

    Fredriksson, Anders; Olofsson, Isabelle

    2005-12-01

    The present report summarises the theoretical approach to estimate the mechanical properties of the rock mass in relation to the Preliminary Site Descriptive Modelling, version 1.2 Forsmark. The theoretical approach is based on a discrete fracture network (DFN) description of the fracture system in the rock mass and on the results of mechanical testing of intact rock and of rock fractures. To estimate the mechanical properties of the rock mass a load test on a rock block with fractures is simulated with the numerical code 3DEC. The location and size of the fractures are given by DFN-realisations. The rock block was loaded in plane strain condition. From the calculated relationship between stresses and deformations the mechanical properties of the rock mass were determined. The influence of the geometrical properties of the fracture system on the mechanical properties of the rock mass was analysed by loading 20 blocks based on different DFN-realisations. The material properties of the intact rock and the fractures were kept constant. The properties are set equal to the mean value of each measured material property. The influence of the variation of the properties of the intact rock and of the mechanical properties of the fractures is estimated by analysing numerical load tests on one specific block (one DFN-realisation) with combinations of properties for intact rock and fractures. Each parameter varies from its lowest values to its highest values while the rest of the parameters are held constant, equal to the mean value. The resulting distribution was expressed as a variation around the value determined with mean values on all parameters. To estimate the resulting distribution of the mechanical properties of the rock mass a Monte-Carlo simulation was performed by generating values from the two distributions independent of each other. The two values were added and the statistical properties of the resulting distribution were determined.

  9. Infrastructure Upgrades to Support Model Longevity and New Applications: The Variable Infiltration Capacity Model Version 5.0 (VIC 5.0)

    Science.gov (United States)

    Nijssen, B.; Hamman, J.; Bohn, T. J.

    2015-12-01

    The Variable Infiltration Capacity (VIC) model is a macro-scale semi-distributed hydrologic model. VIC development began in the early 1990s and it has been used extensively, applied from basin to global scales. VIC has been applied in many use cases, including the construction of hydrologic data sets, trend analysis, data evaluation and assimilation, forecasting, coupled climate modeling, and climate change impact analysis. Ongoing applications of the VIC model include the University of Washington's drought monitor and forecast systems, and NASA's land data assimilation systems. The development of VIC version 5.0 focused on reconfiguring the legacy VIC source code to support a wider range of modern modeling applications. The VIC source code has been moved to a public Github repository to encourage participation by the model development community-at-large. The reconfiguration has separated the physical core of the model from the driver, which is responsible for memory allocation, pre- and post-processing and I/O. VIC 5.0 includes four drivers that use the same physical model core: classic, image, CESM, and Python. The classic driver supports legacy VIC configurations and runs in the traditional time-before-space configuration. The image driver includes a space-before-time configuration, netCDF I/O, and uses MPI for parallel processing. This configuration facilitates the direct coupling of streamflow routing, reservoir, and irrigation processes within VIC. The image driver is the foundation of the CESM driver, which couples VIC to CESM's CPL7 and a prognostic atmosphere. Finally, we have added a Python driver that provides access to the functions and datatypes of VIC's physical core from a Python interface. This presentation demonstrates how reconfiguring legacy source code extends the life and applicability of a research model.
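
    The driver/core separation described above can be caricatured in a few lines: one physics core, with a "classic" time-before-space driver and an "image" space-before-time driver that produce identical results from it. The names and toy physics are ours, not VIC's actual API:

```python
# Minimal illustration of the driver/core separation described for VIC
# 5.0: one physics core, several drivers differing only in iteration
# order and I/O responsibilities. Names are ours, not VIC's API.

def physics_core(state, forcing):
    """Single-cell, single-step model physics (placeholder)."""
    runoff = 0.1 * (state["soil_moisture"] + forcing)
    state["soil_moisture"] += forcing - runoff
    return runoff

def classic_driver(cells, forcings):
    """Time-before-space: run all time steps for each cell in turn."""
    return [[physics_core(cell, f) for f in forcings] for cell in cells]

def image_driver(cells, forcings):
    """Space-before-time: run all cells at each time step."""
    out = [[] for _ in cells]
    for f in forcings:
        for i, cell in enumerate(cells):
            out[i].append(physics_core(cell, f))
    return out

cells_a = [{"soil_moisture": 100.0}, {"soil_moisture": 50.0}]
cells_b = [{"soil_moisture": 100.0}, {"soil_moisture": 50.0}]
forcings = [5.0, 0.0, 2.0]

# Both drivers produce identical results from the same core.
same = classic_driver(cells_a, forcings) == image_driver(cells_b, forcings)
```

    The space-before-time ordering is what makes per-step coupling (routing, reservoirs, a prognostic atmosphere) possible, since every cell's state is available at each time step.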

  10. Versioning of printed products

    Science.gov (United States)

    Tuijn, Chris

    2005-01-01

    During the definition of a printed product in an MIS system, a lot of attention is paid to the production process. The MIS systems typically gather all process-related parameters at such a level of detail that they can determine what the exact cost will be to make a specific product. This information can then be used to make a quote for the customer. Considerably less attention is paid to the content of the products since this does not have an immediate impact on the production costs (assuming that the number of inks or plates is known in advance). The content management is typically carried out either by the prepress systems themselves or by dedicated workflow servers uniting all people that contribute to the manufacturing of a printed product. Special care must be taken when considering versioned products. With versioned products we here mean distinct products that have a number of pages or page layers in common. Typical examples are comic books that have to be printed in different languages. In this case, the color plates can be shared over the different versions and the black plate will be different. Other examples are nation-wide magazines or newspapers that have an area with regional pages or advertising leaflets in different languages or currencies. When considering versioned products, the content will become an important cost factor. First of all, the content management (and associated proofing and approval cycles) becomes much more complex and, therefore, the risk that mistakes will be made increases considerably. Secondly, the real production costs are very much content-dependent because the content will determine whether plates can be shared across different versions or not and how many press runs will be needed. In this paper, we will present a way to manage different versions of a printed product. First, we will introduce a data model for version management. Next, we will show how the content of the different versions can be supplied by the customer

  11. THE ELECTROMAGNETIC MODEL OF SHORT GRBs, THE NATURE OF PROMPT TAILS, SUPERNOVA-LESS LONG GRBs, AND HIGHLY EFFICIENT EPISODIC ACCRETION

    Energy Technology Data Exchange (ETDEWEB)

    Lyutikov, Maxim [Department of Physics, Purdue University, 525 Northwestern Avenue, West Lafayette, IN 47907-2036 (United States)

    2013-05-01

    Many short gamma-ray bursts (GRBs) show prompt tails lasting up to hundreds of seconds that can be energetically dominant over the initial sub-second spike. In this paper we develop an electromagnetic model of short GRBs that explains the two stages of the energy release, the prompt spike and the prompt tail. The key ingredient of the model is the recent discovery that an isolated black hole can keep its open magnetic flux for times much longer than the collapse time and thus can spin down electromagnetically, driving the relativistic wind. First, the merger is preceded by an electromagnetic precursor wind with total power L_p ≈ (G M_NS)^3 B_NS^2 / (c^5 R) ∝ (-t)^(-1/4), reaching 3 × 10^44 erg s^-1 for typical neutron star masses of 1.4 M_Sun and magnetic fields B ~ 10^12 G. If a fraction of this power is converted into pulsar-like coherent radio emission, this may produce an observable radio burst of a few milliseconds (like the Lorimer burst). At the active stage of the merger, two neutron stars produce a black hole surrounded by an accretion torus in which the magnetic field is amplified to ~10^15 G. This magnetic field extracts the rotational energy of the black hole and drives an axially collimated electromagnetic wind that may carry of the order of 10^50 erg, limited by the accretion time of the torus, a few hundred milliseconds. For observers nearly aligned with the orbital normal this is seen as a classical short GRB. After the accretion of the torus, the isolated black hole keeps the open magnetic flux and drives the equatorially (not axially) collimated outflow, which is seen by an observer at intermediate polar angles as a prompt tail. The tail carries more energy than the prompt spike, but its emission is de-boosted for observers along the orbital normal. Observers in the equatorial plane miss the prompt spike.

  12. Performance of advanced self-shielding models in DRAGON Version4 on analysis of a high conversion light water reactor lattice

    International Nuclear Information System (INIS)

    Karthikeyan, Ramamoorthy; Hebert, Alain

    2008-01-01

    A high conversion light water reactor lattice has been analysed using the code DRAGON Version4. This analysis was performed to test the performance of the advanced self-shielding models incorporated in DRAGON Version4. The self-shielding models are broadly classified into two groups - 'equivalence in dilution' and 'subgroup approach'. Under the 'equivalence in dilution' approach we have analysed the generalized Stamm'ler model with and without the Nordheim model and Riemann integration. These models have also been analysed using the Livolant-Jeanpierre normalization. Under the 'subgroup approach', we have analysed the statistical self-shielding model based on physical probability tables and the Ribon extended self-shielding model based on mathematical probability tables. This analysis will help in understanding the performance of advanced self-shielding models for a lattice that is tight and has a large fraction of fissions happening in the resonance region. The nuclear data for the analysis were generated in-house. NJOY99.90 was used for generating libraries in DRAGLIB format for analysis using DRAGON, and ACE (A Compact ENDF) libraries for analysis using MCNP5. The evaluated data files were chosen based on the recommendations of the IAEA Co-ordinated Research Project on the WIMS Library Update Project. The reference solution for the problem was obtained using the Monte Carlo code MCNP5. It was found that the Ribon extended self-shielding model based on mathematical probability tables, using the correlation model, performed better than all the other models.
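
    A minimal sketch of the subgroup idea behind the probability-table models compared above: the resonant cross section within a group is represented by a few (probability, cross section) pairs, and a narrow-resonance flux weight yields a self-shielded effective value below the infinite-dilution average. The table entries and background cross section below are invented for illustration:

```python
# Hedged sketch of subgroup self-shielding with a probability table.
# Table values are invented, not from DRAGON or any evaluation.
p = [0.70, 0.25, 0.05]          # subgroup probabilities (sum to 1)
sigma = [10.0, 500.0, 20000.0]  # barns, subgroup cross sections
sigma_b = 50.0                  # barns, background (dilution) cross section

# Narrow-resonance flux weight per subgroup: phi_k ~ sigma_b/(sigma_k + sigma_b)
phi = [sigma_b / (s + sigma_b) for s in sigma]

# Infinite-dilution average vs flux-weighted (self-shielded) average
sigma_inf = sum(pk * sk for pk, sk in zip(p, sigma))
sigma_eff = (sum(pk * sk * fk for pk, sk, fk in zip(p, sigma, phi))
             / sum(pk * fk for pk, fk in zip(p, phi)))
```

    The strongly absorbing subgroups depress their own flux, so the effective cross section falls far below the infinite-dilution value; in a tight lattice with many resonance-region fissions, the quality of this weighting is exactly what distinguishes the models compared in the paper.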

  13. Version 2.0 of the European Gas Model. Changes and their impact on the German gas sector; Das europaeische Gas Target Model 2.0. Aenderungen und Auswirkungen auf den deutschen Gassektor

    Energy Technology Data Exchange (ETDEWEB)

    Balmert, David; Petrov, Konstantin [DNV GL, Bonn (Germany)

    2015-06-15

    In January 2015, ACER, the European Agency for the Cooperation of Energy Regulators, presented an updated version of its target model for the inner-European natural gas market, also referred to as version 2.0 of the Gas Target Model. During 2014 the existing model, originally developed by the Council of European Energy Regulators (CEER) and launched in 2011, had been analysed, revised and updated in preparation of the new version. While it has few surprises to offer, the new Gas Target Model specifies and elaborates in greater detail many elements of the original model. Some of the new content is highly relevant to the German gas sector, not least the deliberations on the current key issues of security of supply and the ability of the gas markets to function.

  14. Effects on incidental memory of affective tone in associated past and future episodes: influence of emotional intelligence.

    Science.gov (United States)

    Toyota, Hiroshi

    2011-02-01

    The present study examined the effects of emotion elicited by episodes (past events or expected future events) and the relationship between individual differences in emotional intelligence and memory. Participants' emotional intelligence was assessed with the Japanese version of the Emotional Skills and Competence Questionnaire. They rated the pleasantness of episodes they associated with targets, and then performed unexpected free recall tests. When the targets were associated with episodes that were past events, all participants recalled more of the targets associated with pleasant and unpleasant episodes than those associated with neutral episodes. However, when the targets were associated with episodes expected to occur in the future, only participants with higher emotional intelligence scores recalled more of the targets associated with pleasant and unpleasant episodes. The participants with lower emotional intelligence scores recalled the three target types with similar accuracy. These results were interpreted as showing that emotional intelligence is associated with the processing of targets associated with future episodes as retrieval cues.

  15. Episodic reinstatement in the medial temporal lobe.

    Science.gov (United States)

    Staresina, Bernhard P; Henson, Richard N A; Kriegeskorte, Nikolaus; Alink, Arjen

    2012-12-12

    The essence of episodic memory is our ability to reexperience past events in great detail, even in the absence of external stimulus cues. Does the phenomenological reinstatement of past experiences go along with reinstating unique neural representations in the brain? And if so, how is this accomplished by the medial temporal lobe (MTL), a brain region intimately linked to episodic memory? Computational models suggest that such reinstatement (also termed "pattern completion") in cortical regions is mediated by the hippocampus, a key region of the MTL. Although recent functional magnetic resonance imaging studies demonstrated reinstatement of coarse item properties like stimulus category or task context across different brain regions, it has not yet been shown whether reinstatement can be observed at the level of individual, discrete events-arguably the defining feature of episodic memory-nor whether MTL structures like the hippocampus support this "true episodic" reinstatement. Here we show that neural activity patterns for unique word-scene combinations encountered during encoding are reinstated in human parahippocampal cortex (PhC) during retrieval. Critically, this reinstatement occurs when word-scene combinations are successfully recollected (even though the original scene is not visually presented) and does not encompass other stimulus domains (such as word-color associations). Finally, the degree of PhC reinstatement across retrieval events correlated with hippocampal activity, consistent with a role of the hippocampus in coordinating pattern completion in cortical regions.

  16. Computer code SICHTA-85/MOD 1 for thermohydraulic and mechanical modelling of WWER fuel channel behaviour during LOCA and comparison with original version of the SICHTA code

    International Nuclear Information System (INIS)

    Bujan, A.; Adamik, V.; Misak, J.

    1986-01-01

    A brief description is presented of the extension of the SICHTA-83 computer code, used for analysing the thermal history of the fuel channel during large LOCAs, to model the mechanical behaviour of fuel element cladding. The new version of the code treats heat transfer in the fuel-cladding gap in more detail, since it also accounts for mechanical (plastic) deformation of the cladding and for fuel-cladding interaction (the magnitude of the contact pressure). It further accounts for the change in pressure of the fuel element's gas filling, considers a mechanical criterion for cladding failure, and evaluates the degree of blockage of the coolant flow cross section in the fuel channel. A LOCA model computation for the WWER-440 provides a comparison of the new SICHTA-85/MOD 1 code with the results of the original 83 version of SICHTA. (author)

  17. The Nexus Land-Use model version 1.0, an approach articulating biophysical potentials and economic dynamics to model competition for land-use

    Science.gov (United States)

    Souty, F.; Brunelle, T.; Dumas, P.; Dorin, B.; Ciais, P.; Crassous, R.; Müller, C.; Bondeau, A.

    2012-10-01

    Interactions between food demand, biomass energy and forest preservation are driving both food prices and land-use changes, regionally and globally. This study presents a new model called Nexus Land-Use version 1.0 which describes these interactions through a generic representation of agricultural intensification mechanisms within agricultural lands. The Nexus Land-Use model equations combine biophysics and economics into a single coherent framework to calculate crop yields, food prices, and resulting pasture and cropland areas within 12 regions inter-connected with each other by international trade. The representation of cropland and livestock production systems in each region relies on three components: (i) a biomass production function derived from the crop yield response function to inputs such as industrial fertilisers; (ii) a detailed representation of the livestock production system subdivided into an intensive and an extensive component, and (iii) a spatially explicit distribution of potential (maximal) crop yields prescribed from the Lund-Potsdam-Jena global vegetation model for managed Land (LPJmL). The economic principles governing decisions about land-use and intensification are adapted from the Ricardian rent theory, assuming cost minimisation for farmers. In contrast to other land-use models linking economy and biophysics, crops are aggregated as a representative product in calories, and intensification for the representative crop is a non-linear function of chemical inputs. The model equations and parameter values are first described in detail. Then, idealised scenarios exploring the impact of forest preservation policies or rising energy prices on agricultural intensification are described, and their impacts on pasture and cropland areas are investigated.
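    The non-linear intensification response described above can be illustrated with a minimal sketch; the saturating functional form and parameter values below are assumptions for illustration, not the actual Nexus Land-Use equations.

```python
import numpy as np

def crop_yield(inputs, potential_yield, half_sat):
    """Saturating (Michaelis-Menten-style) yield response to chemical inputs.

    Yield rises non-linearly with inputs and asymptotes at the spatially
    prescribed potential yield (illustrative functional form only).
    """
    return potential_yield * inputs / (inputs + half_sat)

x = np.linspace(0, 400, 5)  # fertiliser input, kg/ha (hypothetical values)
y = crop_yield(x, potential_yield=9.0, half_sat=80.0)  # yield in t/ha
print(np.round(y, 2))
```

    The key qualitative feature is diminishing returns: each additional unit of input buys less yield, which is what makes the cost-minimising intensification choice non-trivial.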

  18. The Nexus Land-Use model version 1.0, an approach articulating biophysical potentials and economic dynamics to model competition for land-use

    Directory of Open Access Journals (Sweden)

    F. Souty

    2012-10-01

    Interactions between food demand, biomass energy and forest preservation are driving both food prices and land-use changes, regionally and globally. This study presents a new model called Nexus Land-Use version 1.0 which describes these interactions through a generic representation of agricultural intensification mechanisms within agricultural lands. The Nexus Land-Use model equations combine biophysics and economics into a single coherent framework to calculate crop yields, food prices, and resulting pasture and cropland areas within 12 regions inter-connected with each other by international trade. The representation of cropland and livestock production systems in each region relies on three components: (i) a biomass production function derived from the crop yield response function to inputs such as industrial fertilisers; (ii) a detailed representation of the livestock production system subdivided into an intensive and an extensive component, and (iii) a spatially explicit distribution of potential (maximal) crop yields prescribed from the Lund-Potsdam-Jena global vegetation model for managed Land (LPJmL). The economic principles governing decisions about land-use and intensification are adapted from the Ricardian rent theory, assuming cost minimisation for farmers. In contrast to other land-use models linking economy and biophysics, crops are aggregated as a representative product in calories, and intensification for the representative crop is a non-linear function of chemical inputs. The model equations and parameter values are first described in detail. Then, idealised scenarios exploring the impact of forest preservation policies or rising energy prices on agricultural intensification are described, and their impacts on pasture and cropland areas are investigated.

  19. Episodic Memory and Beyond: The Hippocampus and Neocortex in Transformation.

    Science.gov (United States)

    Moscovitch, Morris; Cabeza, Roberto; Winocur, Gordon; Nadel, Lynn

    2016-01-01

    The last decade has seen dramatic technological and conceptual changes in research on episodic memory and the brain. New technologies, and increased use of more naturalistic observations, have enabled investigators to delve deeply into the structures that mediate episodic memory, particularly the hippocampus, and to track functional and structural interactions among brain regions that support it. Conceptually, episodic memory is increasingly being viewed as subject to lifelong transformations that are reflected in the neural substrates that mediate it. In keeping with this dynamic perspective, research on episodic memory (and the hippocampus) has infiltrated domains, from perception to language and from empathy to problem solving, that were once considered outside its boundaries. Using the component process model as a framework, and focusing on the hippocampus, its subfields, and specialization along its longitudinal axis, along with its interaction with other brain regions, we consider these new developments and their implications for the organization of episodic memory and its contribution to functions in other domains.

  20. From sixty-two interviews on 'the worst and the best episode of your life'. Relationships between internal working models and a grammatical scale of subject-object affective connections.

    Science.gov (United States)

    Seganti, A; Carnevale, G; Mucelli, R; Solano, L; Target, M

    2000-06-01

    The authors address the issue of inferring unconscious internal working models of interaction through language. After reviewing Main's seminal work of linguistic assessment through the 'adult attachment interview', they stress the idea of adults' internal working models (IWMs) as information-processing devices, which give moment-to-moment sensory orientation in the face of any past or present, animate or inanimate object. They propose that a selective perception of the objects could match the expected influence of objects on the subject's self with their actual influence, through very simple 'parallel-processed' categories of internal objects. They further hypothesise that the isomorphism between internal working models of interaction and grammatical connections between subjects and objects within a clause could be a key to tracking positive and negative images of self and other during discourse. An experiment is reported applying the authors' 'scale of subject/object affective connection' to the narratives of sixty-two subjects asked to write about the 'worst' and 'best' episodes of their lives. Participants had previously been classified using Hazan & Shaver's self-reported 'attachment types' (avoidant, anxious and secure) categorising individuals' general expectations in relation to others. The findings were that the subject/object distribution of positive and negative experience, through verbs defined for this purpose as either performative or state verbs, differed significantly between groups. In addition, different groups tended, during the best episodes, to significantly invert the trend of positive/negative subject/object distribution shown during the worst episode. Results are discussed in terms of a psychoanalytic theory of improvement through co-operative elaboration of negative relational issues.

  1. Episodic memory in nonhuman animals.

    Science.gov (United States)

    Templer, Victoria L; Hampton, Robert R

    2013-09-09

    Episodic memories differ from other types of memory because they represent aspects of the past not present in other memories, such as the time, place, or social context in which the memories were formed. Focus on phenomenal experience in human memory, such as the sense of 'having been there', has resulted in conceptualizations of episodic memory that are difficult or impossible to apply to nonhuman species. It is therefore a significant challenge for investigators to agree on objective behavioral criteria that can be applied in nonhuman animals and still capture features of memory thought to be critical in humans. Some investigators have attempted to use neurobiological parallels to bridge this gap; however, defining memory types on the basis of the brain structures involved rather than on identified cognitive mechanisms risks missing crucial functional aspects of episodic memory, which are ultimately behavioral. The most productive way forward is likely a combination of neurobiology and sophisticated cognitive testing that identifies the mental representations present in episodic memory. Investigators who have refined their approach from asking the naïve question "do nonhuman animals have episodic memory?" to instead asking "what aspects of episodic memory are shared by humans and nonhumans?" are making progress. Copyright © 2013 Elsevier Ltd. All rights reserved.

  2. The Interaction between Semantic Representation and Episodic Memory.

    Science.gov (United States)

    Fang, Jing; Rüther, Naima; Bellebaum, Christian; Wiskott, Laurenz; Cheng, Sen

    2018-02-01

    The experimental evidence on the interrelation between episodic memory and semantic memory is inconclusive. Are they independent systems, different aspects of a single system, or separate but strongly interacting systems? Here, we propose a computational role for the interaction between the semantic and episodic systems that might help resolve this debate. We hypothesize that episodic memories are represented as sequences of activation patterns. These patterns are the output of a semantic representational network that compresses the high-dimensional sensory input. We show quantitatively that the accuracy of episodic memory crucially depends on the quality of the semantic representation. We compare two types of semantic representations: appropriate representations, which means that the representation is used to store input sequences that are of the same type as those that it was trained on, and inappropriate representations, which means that stored inputs differ from the training data. Retrieval accuracy is higher for appropriate representations because the encoded sequences are less divergent than those encoded with inappropriate representations. Consistent with our model prediction, we found that human subjects remember some aspects of episodes significantly more accurately if they had previously been familiarized with the objects occurring in the episode, as compared to episodes involving unfamiliar objects. We thus conclude that the interaction with the semantic system plays an important role for episodic memory.
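    The core claim, that episodic retrieval accuracy depends on the appropriateness of the semantic compression, can be illustrated with a toy linear stand-in for the authors' network: a PCA-style compressor trained on one class of inputs reconstructs inputs of the same class far better than inputs of a different class. All data below are invented.

```python
import numpy as np

rng = np.random.default_rng(1)

def pca_fit(data, k):
    """Learn a k-dim 'semantic' compression from training data (via SVD)."""
    mean = data.mean(axis=0)
    _, _, vt = np.linalg.svd(data - mean, full_matrices=False)
    return mean, vt[:k]

def recon_error(x, mean, comps):
    """Round-trip inputs through the compressed code; return mean sq. error."""
    code = (x - mean) @ comps.T
    recon = code @ comps + mean
    return float(np.mean((x - recon) ** 2))

# 'Semantic' training data live in a low-dim subspace (a stand-in for
# familiar objects); novel inputs occupy a different subspace.
basis_a = rng.standard_normal((5, 40))
basis_b = rng.standard_normal((5, 40))
train = rng.standard_normal((200, 5)) @ basis_a
appropriate = rng.standard_normal((20, 5)) @ basis_a    # same type as training
inappropriate = rng.standard_normal((20, 5)) @ basis_b  # unfamiliar type

mean, comps = pca_fit(train, k=5)
print(recon_error(appropriate, mean, comps)
      < recon_error(inappropriate, mean, comps))  # True
```

    In the paper's terms, sequences stored through an appropriate representation diverge less on retrieval, paralleling the subjects' better memory for episodes built from familiar objects.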

  3. MATILDA Version 2: Rough Earth TIALD Model for Laser Probabilistic Risk Assessment in Hilly Terrain - Part I

    Science.gov (United States)

    2017-03-13

    support of airborne laser designator use during test and training exercises on military ranges. The initial MATILDA tool, MATILDA PRO Version-1.6.1...2]. The use of the ALARP principle in UK hazard assessment arises from the provisions of the UK Health and Safety at Work Act of 1974 [18]. Given...The product of the probabilistic fault/failure laser hazard analysis is the ex- pectation value: the likelihood that an unprotected observer outside

  4. Rats Remember Items in Context Using Episodic Memory.

    Science.gov (United States)

    Panoz-Brown, Danielle; Corbin, Hannah E; Dalecki, Stefan J; Gentry, Meredith; Brotheridge, Sydney; Sluka, Christina M; Wu, Jie-En; Crystal, Jonathon D

    2016-10-24

    Vivid episodic memories in people have been characterized as the replay of unique events in sequential order [1-3]. Animal models of episodic memory have successfully documented episodic memory of a single event (e.g., [4-8]). However, a fundamental feature of episodic memory in people is that it involves multiple events, and notably, episodic memory impairments in human diseases are not limited to a single event. Critically, it is not known whether animals remember many unique events using episodic memory. Here, we show that rats remember many unique events and the contexts in which the events occurred using episodic memory. We used an olfactory memory assessment in which new (but not old) odors were rewarded using 32 items. Rats were presented with 16 odors in one context and the same odors in a second context. To attain high accuracy, the rats needed to remember item in context because each odor was rewarded as a new item in each context. The demands on item-in-context memory were varied by assessing memory with 2, 3, 5, or 15 unpredictable transitions between contexts, and item-in-context memory survived a 45 min retention interval challenge. When the memory of item in context was put in conflict with non-episodic familiarity cues, rats relied on item in context using episodic memory. Our findings suggest that rats remember multiple unique events and the contexts in which these events occurred using episodic memory and support the view that rats may be used to model fundamental aspects of human cognition. Copyright © 2016 Elsevier Ltd. All rights reserved.
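    The reward rule of the olfactory assessment, in which an odor is rewarded only if it is new within the current context, can be sketched in a few lines of logic (odor names are hypothetical):

```python
def rewarded(odor, context, seen):
    """An odor is rewarded iff it is new *in this context*: the same odor
    encountered earlier in a different context still counts as new here."""
    key = (odor, context)
    is_new = key not in seen
    seen.add(key)
    return is_new

seen = set()
trials = [("almond", "A"), ("almond", "A"), ("almond", "B"), ("clove", "B")]
print([rewarded(o, c, seen) for o, c in trials])  # [True, False, True, True]
```

    Because "almond" is rewarded again when context B is entered, familiarity with the odor alone cannot solve the task; the rat must remember the item-in-context conjunction, which is what makes the design a test of episodic memory.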

  5. Extremely Bright GRB 160625B with Multiple Emission Episodes: Evidence for Long-term Ejecta Evolution

    Energy Technology Data Exchange (ETDEWEB)

    Lü, Hou-Jun; Lü, Jing; Zhong, Shu-Qing; Huang, Xiao-Li; Zhang, Hai-Ming; Lan, Lin; Lu, Rui-Jing; Liang, En-Wei [Guangxi Key Laboratory for Relativistic Astrophysics, Department of Physics, Guangxi University, Nanning 530004 (China); Xie, Wei, E-mail: lhj@gxu.edu.edu, E-mail: lew@gxu.edu.cn [School of Physics, Huazhong University of Science and Technology, Wuhan 430074 (China)

    2017-11-01

    GRB 160625B is an extremely bright GRB with three distinct emission episodes. By analyzing its data observed with the Gamma-Ray Burst Monitor (GBM) and Large Area Telescope (LAT) on board the Fermi mission, we find that a multicolor blackbody (mBB) model fits the spectra of the initial short episode (Episode I) very well, under the hypothesis of photosphere emission from a fireball. The time-resolved spectra of its main episode (Episode II), which was detected with both GBM and LAT after a long quiescent stage (∼180 s) following the initial episode, can be fitted with a model comprising an mBB component plus a cutoff power-law (CPL) component. This GRB was detected again in the GBM and LAT bands with long extended emission (Episode III) after a quiescent period of ∼300 s. The spectrum of Episode III is adequately fitted with CPL plus single power-law models, and no mBB component is required. These features may imply that the emission in the three episodes is dominated by distinct physical processes: Episode I possibly arises from emission of the cocoon surrounding the relativistic jet, Episode II may come from photosphere emission and internal shocks of the relativistic jet, and Episode III is contributed by internal and external shocks of the relativistic jet. On the other hand, both the X-ray and optical afterglows are consistent with the standard external shock model.
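    The CPL component mentioned above is conventionally parameterized as a power law with an exponential cutoff, N(E) = A (E/E_piv)^alpha exp(-E/E_c); a sketch with illustrative (not fitted) parameter values:

```python
import numpy as np

def cutoff_power_law(energy_kev, amp, alpha, e_cut, e_piv=100.0):
    """Cutoff power-law photon spectrum, a standard empirical GRB model:
    N(E) = amp * (E/e_piv)**alpha * exp(-E/e_cut), in photons/cm^2/s/keV.
    Parameter values used here are illustrative, not the published fits."""
    e = np.asarray(energy_kev, dtype=float)
    return amp * (e / e_piv) ** alpha * np.exp(-e / e_cut)

energies = np.logspace(1, 4, 4)  # 10 keV to 10 MeV
print(cutoff_power_law(energies, amp=0.01, alpha=-1.0, e_cut=500.0))
```

    With a negative photon index and an exponential rollover, the spectrum falls monotonically with energy, which is the behaviour the time-resolved fits constrain.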

  6. Wavenumber dependent investigation of the terrestrial infrared radiation budget with two versions of the LOWTRAN5 band model

    Science.gov (United States)

    Charlock, T. P.

    1984-01-01

    Two versions of the LOWTRAN5 radiance code are used in a study of the earth's clear sky infrared radiation budget in the interval 30 per cm (333.3 microns) to 3530 per cm (2.8 microns). One version uses 5 per cm resolution and temperature dependent molecular absorption coefficients, and the second uses 20 per cm resolution and temperature independent molecular absorption coefficients. Both versions compare well with Nimbus 3 IRIS spectra, with some discrepancies at particular wavenumber intervals. Upgoing and downgoing fluxes, calculated as functions of latitude, are displayed for wavenumbers at which the principal absorbers are active. Most of the variation of the fluxes with latitude is found in the higher wavenumber intervals for both clear and cloudy skies. The main features of the wavenumber integrated cooling rates are explained with reference to calculations in more restricted wavenumber intervals. A tropical lower tropospheric cooling maximum is produced by water vapor continuum effects in the 760-1240 per cm window. A secondary upper tropospheric cooling maximum, with wide meridional extent, is produced by water vapor rotational lines between 30-430 per cm. Water vapor lines throughout the terrestrial infrared spectrum prevent the upflux maximum from coinciding with the surface temperature maximum.
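    The band-by-band flux bookkeeping described here rests on integrating the Planck function over wavenumber intervals. A self-contained sketch (pure blackbody emission with no absorbers, so it only illustrates how emitted flux partitions across the study's bands, not an atmospheric calculation):

```python
import numpy as np

H, C, K = 6.62607015e-34, 2.99792458e8, 1.380649e-23
SIGMA = 5.670374419e-8  # Stefan-Boltzmann constant, W m^-2 K^-4

def planck_wavenumber(nu, temp):
    """Planck radiance per wavenumber nu (m^-1): W m^-2 sr^-1 per m^-1."""
    return 2.0 * H * C**2 * nu**3 / np.expm1(H * C * nu / (K * temp))

def band_flux(nu_lo_cm, nu_hi_cm, temp, n=20000):
    """Hemispheric blackbody flux (pi * B) integrated over a wavenumber
    band given in cm^-1, using the trapezoid rule."""
    nu = np.linspace(nu_lo_cm * 100.0, nu_hi_cm * 100.0, n)  # cm^-1 -> m^-1
    rad = planck_wavenumber(nu, temp)
    return float(np.pi * np.sum(0.5 * (rad[1:] + rad[:-1]) * np.diff(nu)))

T = 288.0                          # a typical surface temperature
total = band_flux(30, 3530, T)     # the full interval studied above
window = band_flux(760, 1240, T)   # the water-vapor continuum window
print(round(total / (SIGMA * T**4), 3), round(window / total, 3))
```

    At 288 K the 30-3530 per cm interval captures essentially all of the blackbody emission (hence its choice for a terrestrial budget study), and the 760-1240 per cm window alone carries roughly a third of it.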

  7. Episodic memory for human-like agents and human-like agents for episodic memory

    Czech Academy of Sciences Publication Activity Database

    Brom, C.; Lukavský, Jiří; Kadlec, R.

    2010-01-01

    Vol. 2, No. 2 (2010), pp. 227-244. ISSN 1793-8473. Institutional research plan: CEZ:AV0Z70250504. Keywords: episodic memory * virtual agent * modelling. Subject RIV: AN - Psychology. http://www.worldscinet.com/ijmc/02/0202/S1793843010000461.html

  8. PVWatts Version 5 Manual

    Energy Technology Data Exchange (ETDEWEB)

    Dobos, A. P.

    2014-09-01

    The NREL PVWatts calculator is a web application developed by the National Renewable Energy Laboratory (NREL) that estimates the electricity production of a grid-connected photovoltaic system based on a few simple inputs. PVWatts combines a number of sub-models to predict overall system performance, and includes several built-in parameters that are hidden from the user. This technical reference describes the sub-models, documents assumptions and hidden parameters, and explains the sequence of calculations that yield the final system performance estimate. This reference is applicable to the significantly revised version of PVWatts released by NREL in 2014.
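    The flavor of the PVWatts calculation chain can be sketched with a simplified DC-power sub-model plus a constant-efficiency inverter clip. The coefficient values below are typical crystalline-silicon assumptions for illustration, not NREL's documented defaults; consult the manual for the actual sub-models.

```python
def dc_power(poa_irradiance, p_dc0, cell_temp, gamma=-0.0047, t_ref=25.0):
    """Simplified PVWatts-style DC power: nameplate rating scaled by
    plane-of-array irradiance (W/m^2) and a linear temperature derate.
    gamma (per deg C) and t_ref are illustrative assumptions."""
    return p_dc0 * (poa_irradiance / 1000.0) * (1 + gamma * (cell_temp - t_ref))

def ac_power(p_dc, p_ac0, eta_nom=0.96):
    """Constant-efficiency inverter sketch, clipped at its AC rating."""
    return min(p_dc * eta_nom, p_ac0)

p_dc = dc_power(poa_irradiance=800.0, p_dc0=4000.0, cell_temp=45.0)
print(round(ac_power(p_dc, p_ac0=3800.0), 1))  # 2783.2
```

    Each factor here stands in for one of the hidden sub-models the reference documents: irradiance transposition, cell-temperature derating, and inverter conversion with clipping.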

  9. Two subgroups of antipsychotic-naive, first-episode schizophrenia patients identified with a Gaussian mixture model on cognition and electrophysiology

    DEFF Research Database (Denmark)

    Bak, N.; Ebdrup, B.H.; Oranje, B

    2017-01-01

    Deficits in information processing and cognition are among the most robust findings in schizophrenia patients. Previous efforts to translate group-level deficits into clinically relevant and individualized information have, however, been unsuccessful, which is possibly explained by biologically different disease subgroups. We applied machine learning algorithms on measures of electrophysiology and cognition to identify potential subgroups of schizophrenia. Next, we explored subgroup differences regarding treatment response. Sixty-six antipsychotic-naive first-episode schizophrenia patients ... be used to classify subgroups of schizophrenia patients. The two distinct subgroups we identified were psychopathologically inseparable before treatment, yet their response to dopaminergic blockade was predicted with significant accuracy. This proof of principle encourages further endeavors ...
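    A minimal sketch of the Gaussian-mixture idea: expectation-maximization fit to a one-dimensional synthetic score (the study itself fit multivariate cognition and electrophysiology measures). All data and parameters here are invented.

```python
import numpy as np

rng = np.random.default_rng(42)
# Hypothetical 1-D composite score drawn from two latent subgroups.
data = np.concatenate([rng.normal(-2.0, 1.0, 150), rng.normal(2.0, 1.0, 150)])

def fit_gmm_1d(x, n_iter=100):
    """EM for a two-component 1-D Gaussian mixture (a toy version of the
    kind of model the study applied)."""
    mu = np.array([x.min(), x.max()])  # crude initialisation at the extremes
    sigma = np.array([x.std(), x.std()])
    weight = np.array([0.5, 0.5])
    for _ in range(n_iter):
        # E-step: responsibility of each component for each point
        # (the 1/sqrt(2*pi) constant cancels in the normalisation)
        dens = weight * np.exp(-0.5 * ((x[:, None] - mu) / sigma) ** 2) / sigma
        resp = dens / dens.sum(axis=1, keepdims=True)
        # M-step: re-estimate weights, means and spreads from weighted data
        nk = resp.sum(axis=0)
        weight = nk / len(x)
        mu = (resp * x[:, None]).sum(axis=0) / nk
        sigma = np.sqrt((resp * (x[:, None] - mu) ** 2).sum(axis=0) / nk)
    return weight, mu, sigma

weight, mu, sigma = fit_gmm_1d(data)
print(np.round(np.sort(mu), 1))
```

    The fitted component means recover the two latent subgroups even though no group labels are supplied, which is the sense in which such a model can propose biologically distinct subgroups from unlabeled measures.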

  10. Users' manual for LEHGC: A Lagrangian-Eulerian Finite-Element Model of Hydrogeochemical Transport Through Saturated-Unsaturated Media. Version 1.1

    International Nuclear Information System (INIS)

    Yeh, Gour-Tsyh

    1995-11-01

    The computer program LEHGC is a Hybrid Lagrangian-Eulerian Finite-Element Model of HydroGeo-Chemical (LEHGC) Transport Through Saturated-Unsaturated Media. LEHGC iteratively solves two-dimensional transport and geochemical equilibrium equations and is a descendant of HYDROGEOCHEM, a strictly Eulerian finite-element reactive transport code. The hybrid Lagrangian-Eulerian scheme improves on the Eulerian scheme by allowing larger time steps to be used in the advection-dominant transport calculations. This causes less numerical dispersion and alleviates the problem of calculated negative concentrations at sharp concentration fronts. The code also is more computationally efficient than the strictly Eulerian version. LEHGC is designed for generic application to reactive transport problems associated with contaminant transport in subsurface media. Input to the program includes the geometry of the system, the spatial distribution of finite elements and nodes, the properties of the media, the potential chemical reactions, and the initial and boundary conditions. Output includes the spatial distribution of chemical element concentrations as a function of time and space and the chemical speciation at user-specified nodes. LEHGC Version 1.1 is a modification of LEHGC Version 1.0. The modification includes: (1) devising a tracking algorithm with the computational effort proportional to N, where N is the number of computational grid nodes, rather than N² as in LEHGC Version 1.0, (2) including multiple adsorbing sites and multiple ion-exchange sites, (3) using four preconditioned conjugate gradient methods for the solution of matrix equations, and (4) providing a model for some features of solute transport by colloids.
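    One of the preconditioned conjugate gradient methods the manual refers to can be sketched in its simplest form, with a Jacobi (diagonal) preconditioner; the test matrix below is an arbitrary small symmetric positive-definite system, not LEHGC's.

```python
import numpy as np

def jacobi_pcg(a, b, tol=1e-10, max_iter=200):
    """Conjugate gradients with a Jacobi (diagonal) preconditioner for a
    symmetric positive-definite system a @ x = b."""
    m_inv = 1.0 / np.diag(a)  # preconditioner: inverse of the diagonal
    x = np.zeros_like(b)
    r = b - a @ x             # residual
    z = m_inv * r             # preconditioned residual
    p = z.copy()              # search direction
    rz = r @ z
    for _ in range(max_iter):
        ap = a @ p
        alpha = rz / (p @ ap)
        x += alpha * p
        r -= alpha * ap
        if np.linalg.norm(r) < tol:
            break
        z = m_inv * r
        rz_new = r @ z
        p = z + (rz_new / rz) * p
        rz = rz_new
    return x

a = np.array([[4.0, 1.0, 0.0], [1.0, 3.0, 1.0], [0.0, 1.0, 2.0]])
b = np.array([1.0, 2.0, 3.0])
print(np.allclose(a @ jacobi_pcg(a, b), b))  # True
```

    Production codes typically pair CG with stronger preconditioners (e.g. incomplete factorizations), but the iteration structure is the same.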

  11. [A new assessment for episodic memory. Episodic memory test and caregiver's episodic memory test].

    Science.gov (United States)

    Ojea Ortega, T; González Álvarez de Sotomayor, M M; Pérez González, O; Fernández Fernández, O

    2013-10-01

    The purpose of the episodic memory test and the caregiver's episodic memory test is to evaluate episodic memory according to its definition in a way that is feasible for families and achieves high degrees of sensitivity and specificity. We administered a test consisting of 10 questions about episodic events to 332 subjects, of whom 65 had Alzheimer's disease (AD), 115 had amnestic MCI (aMCI) and 152 showed no cognitive impairment according to Reisberg's global deterioration scale (GDS). We calculated the test's sensitivity and specificity to distinguish AD from episodic aMCI and from normal ageing. The area under the ROC curve for the diagnosis of aMCI was 0.94 and the best cut-off value was 20; for that value, sensitivity was 89% and specificity was 82%. For a diagnosis of AD, the area under the ROC curve was 0.99 and the best cut-off point was 17, with a sensitivity of 98% and a specificity of 91%. A subsequent study using similar methodology yielded similar results when the test was administered directly by the caregiver. The episodic memory test and the caregiver's episodic memory test are useful as brief screening tools for identifying patients with early-stage AD. They are suitable for use by primary care medical staff and in the home, since they can be administered by a caregiver. Their limitations are that they must be administered by a reliable caregiver and that they measure episodic memory only. Copyright © 2012 Sociedad Española de Neurología. Published by Elsevier Espana. All rights reserved.
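    The reported cut-offs, sensitivities/specificities and areas under the ROC curve correspond to standard computations, sketched here on hypothetical scores (a 0-30 scale is assumed purely for illustration, with lower scores indicating impairment):

```python
import numpy as np

def sens_spec(scores, is_patient, cutoff):
    """Sensitivity and specificity at a cut-off, where LOWER scores
    indicate impairment (as with the cut-off values reported above)."""
    scores = np.asarray(scores, dtype=float)
    is_patient = np.asarray(is_patient, dtype=bool)
    sens = float(np.mean(scores[is_patient] <= cutoff))
    spec = float(np.mean(scores[~is_patient] > cutoff))
    return sens, spec

def auc(scores, is_patient):
    """Area under the ROC curve via the Mann-Whitney rank identity: the
    probability that a random patient scores below a random control."""
    scores = np.asarray(scores, dtype=float)
    is_patient = np.asarray(is_patient, dtype=bool)
    pos, neg = scores[is_patient], scores[~is_patient]
    wins = ((pos[:, None] < neg[None, :]).sum()
            + 0.5 * (pos[:, None] == neg[None, :]).sum())
    return float(wins / (len(pos) * len(neg)))

# Hypothetical test scores, invented for illustration
scores = [12, 15, 16, 19, 18, 22, 25, 27, 28, 29]
is_patient = [1, 1, 1, 1, 1, 0, 0, 0, 0, 0]
print(sens_spec(scores, is_patient, cutoff=20), round(auc(scores, is_patient), 2))
```

    In practice the "best" cut-off is chosen by scanning candidate cut-offs and trading sensitivity against specificity along the resulting ROC curve.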

  12. What if? Neural activity underlying semantic and episodic counterfactual thinking.

    Science.gov (United States)

    Parikh, Natasha; Ruzic, Luka; Stewart, Gregory W; Spreng, R Nathan; De Brigard, Felipe

    2018-05-25

    Counterfactual thinking (CFT) is the process of mentally simulating alternative versions of known facts. In the past decade, cognitive neuroscientists have begun to uncover the neural underpinnings of CFT, particularly episodic CFT (eCFT), which activates regions in the default network (DN) also activated by episodic memory (eM) recall. However, the engagement of DN regions is different for distinct kinds of eCFT. More plausible counterfactuals and counterfactuals about oneself show stronger activity in DN regions compared to implausible and other- or object-focused counterfactuals. The current study sought to identify a source for this difference in DN activity. Specifically, self-focused counterfactuals may also be more plausible, suggesting that DN core regions are sensitive to the plausibility of a simulation. On the other hand, plausible and self-focused counterfactuals may involve more episodic information than implausible and other-focused counterfactuals, which would imply DN sensitivity to episodic information. In the current study, we compared episodic and semantic counterfactuals generated to be plausible or implausible against episodic and semantic memory reactivation using fMRI. Taking multivariate and univariate approaches, we found that the DN is engaged more during episodic simulations, including eM and all eCFT, than during semantic simulations. Semantic simulations engaged more inferior temporal and lateral occipital regions. The only region that showed strong plausibility effects was the hippocampus, which was significantly engaged for implausible CFT but not for plausible CFT, suggestive of binding more disparate information. Consequences of these findings for the cognitive neuroscience of mental simulation are discussed. Published by Elsevier Inc.

  13. Functional neuroanatomy of remote episodic, semantic and spatial memory: a unified account based on multiple trace theory.

    Science.gov (United States)

    Moscovitch, Morris; Rosenbaum, R Shayna; Gilboa, Asaf; Addis, Donna Rose; Westmacott, Robyn; Grady, Cheryl; McAndrews, Mary Pat; Levine, Brian; Black, Sandra; Winocur, Gordon; Nadel, Lynn

    2005-07-01

    We review lesion and neuroimaging evidence on the role of the hippocampus, and other structures, in retention and retrieval of recent and remote memories. We examine episodic, semantic and spatial memory, and show that important distinctions exist among different types of these memories and the structures that mediate them. We argue that retention and retrieval of detailed, vivid autobiographical memories depend on the hippocampal system no matter how long ago they were acquired. Semantic memories, on the other hand, benefit from hippocampal contribution for some time before they can be retrieved independently of the hippocampus. Even semantic memories, however, can have episodic elements associated with them that continue to depend on the hippocampus. Likewise, we distinguish between experientially detailed spatial memories (akin to episodic memory) and more schematic memories (akin to semantic memory) that are sufficient for navigation but not for re-experiencing the environment in which they were acquired. Like their episodic and semantic counterparts, the former type of spatial memory is dependent on the hippocampus no matter how long ago it was acquired, whereas the latter can survive independently of the hippocampus and is represented in extra-hippocampal structures. In short, the evidence reviewed suggests strongly that the function of the hippocampus (and possibly that of related limbic structures) is to help encode, retain, and retrieve experiences, no matter how long ago the events comprising the experience occurred, and no matter whether the memories are episodic or spatial. We conclude that the evidence favours a multiple trace theory (MTT) of memory over two other models: (1) traditional consolidation models which posit that the hippocampus is a time-limited memory structure for all forms of memory; and (2) versions of cognitive map theory which posit that the hippocampus is needed for representing all forms of allocentric space in memory.

  14. Functional neuroanatomy of remote episodic, semantic and spatial memory: a unified account based on multiple trace theory

    Science.gov (United States)

    Moscovitch, Morris; Rosenbaum, R Shayna; Gilboa, Asaf; Addis, Donna Rose; Westmacott, Robyn; Grady, Cheryl; McAndrews, Mary Pat; Levine, Brian; Black, Sandra; Winocur, Gordon; Nadel, Lynn

    2005-01-01

    We review lesion and neuroimaging evidence on the role of the hippocampus, and other structures, in retention and retrieval of recent and remote memories. We examine episodic, semantic and spatial memory, and show that important distinctions exist among different types of these memories and the structures that mediate them. We argue that retention and retrieval of detailed, vivid autobiographical memories depend on the hippocampal system no matter how long ago they were acquired. Semantic memories, on the other hand, benefit from hippocampal contribution for some time before they can be retrieved independently of the hippocampus. Even semantic memories, however, can have episodic elements associated with them that continue to depend on the hippocampus. Likewise, we distinguish between experientially detailed spatial memories (akin to episodic memory) and more schematic memories (akin to semantic memory) that are sufficient for navigation but not for re-experiencing the environment in which they were acquired. Like their episodic and semantic counterparts, the former type of spatial memory is dependent on the hippocampus no matter how long ago it was acquired, whereas the latter can survive independently of the hippocampus and is represented in extra-hippocampal structures. In short, the evidence reviewed suggests strongly that the function of the hippocampus (and possibly that of related limbic structures) is to help encode, retain, and retrieve experiences, no matter how long ago the events comprising the experience occurred, and no matter whether the memories are episodic or spatial. We conclude that the evidence favours a multiple trace theory (MTT) of memory over two other models: (1) traditional consolidation models which posit that the hippocampus is a time-limited memory structure for all forms of memory; and (2) versions of cognitive map theory which posit that the hippocampus is needed for representing all forms of allocentric space in memory.

  15. Land Boundary Conditions for the Goddard Earth Observing System Model Version 5 (GEOS-5) Climate Modeling System: Recent Updates and Data File Descriptions

    Science.gov (United States)

    Mahanama, Sarith P.; Koster, Randal D.; Walker, Gregory K.; Takacs, Lawrence L.; Reichle, Rolf H.; De Lannoy, Gabrielle; Liu, Qing; Zhao, Bin; Suarez, Max J.

    2015-01-01

    The Earth's land surface boundary conditions in the Goddard Earth Observing System version 5 (GEOS-5) modeling system were updated using recent high spatial and temporal resolution global data products. The updates include: (i) construction of a global 10-arcsec land-ocean-lakes-ice mask; (ii) incorporation of a 10-arcsec Globcover 2009 land cover dataset; (iii) implementation of Level 12 Pfafstetter hydrologic catchments; (iv) use of hybridized SRTM global topography data; (v) construction of the HWSDv1.21-STATSGO2 merged global 30-arcsec soil mineral and carbon data in conjunction with a highly-refined soil classification system; (vi) production of diffuse visible and near-infrared 8-day MODIS albedo climatologies at 30-arcsec from the period 2001-2011; and (vii) production of the GEOLAND2 and MODIS merged 8-day LAI climatology at 30-arcsec for GEOS-5. The global data sets were preprocessed and used to construct global raster data files for the software (mkCatchParam) that computes parameters on catchment-tiles for various atmospheric grids. The updates also include a few bug fixes in mkCatchParam, as well as changes (improvements in algorithms, etc.) to mkCatchParam that allow it to produce tile-space parameters efficiently for high resolution AGCM grids. The update process also includes the construction of data files describing the vegetation type fractions, soil background albedo, nitrogen deposition and mean annual 2m air temperature to be used with the future Catchment CN model, and the global stream channel network to be used with the future global runoff routing model. This report provides detailed descriptions of the data production process and data file format of each updated data set.

  16. Extended-range prediction trials using the global cloud/cloud-system resolving model NICAM and its new ocean-coupled version NICOCO

    Science.gov (United States)

    Miyakawa, Tomoki

    2017-04-01

    The global cloud/cloud-system-resolving model NICAM and its new fully coupled version NICOCO are run on one of the world's top-tier supercomputers, the K computer. NICOCO couples the full-3D ocean component COCO of the general circulation model MIROC using the general-purpose coupler Jcup. We carried out multiple MJO simulations using NICAM and the new ocean-coupled version NICOCO to examine their extended-range MJO prediction skills and the impact of ocean coupling. NICAM performs excellently in terms of MJO prediction, maintaining a valid skill up to 27 days after the model is initialized (Miyakawa et al. 2014). As is the case in most global models, ocean coupling frees the model from being anchored by the observed SST and allows the model climate to drift farther from reality than the atmospheric version of the model. Thus, it is important to evaluate the model bias, and in an initial-value problem such as seasonal extended-range prediction, it is essential to distinguish the actual signal from the early transition of the model from the observed state to its own climatology. Since NICAM is a highly resource-demanding model, evaluation and tuning of the model climatology (order of years) is challenging. Here we focus on the initial 100 days to estimate the early drift of the model, and subsequently evaluate the MJO prediction skills of NICOCO. Results show that in the initial 100 days, NICOCO forms a La Niña-like SST bias relative to observations, with a warmer Maritime Continent warm pool and a cooler equatorial central Pacific. The enhanced convection over the Maritime Continent associated with this bias projects onto the real-time multivariate MJO indices (RMM; Wheeler and Hendon 2004) and contaminates the MJO skill score. However, the bias does not appear to demolish the MJO signal severely. The model maintains a valid MJO prediction skill up to nearly 4 weeks when evaluated after linearly removing the early drift component estimated from
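A minimal sketch of linearly removing an early drift component, as described above: fit a line to the bias over the first 100 days and subtract it before evaluating skill. The data and fitting window here are synthetic and illustrative; the actual NICOCO drift estimate is more involved.

```python
# Estimate the early linear drift of a model-minus-observation bias series
# and remove it from the full series before skill evaluation.
import numpy as np

def remove_linear_drift(bias, n_fit=100):
    """Fit a line to bias[:n_fit] and subtract the fitted line everywhere."""
    t = np.arange(len(bias))
    slope, intercept = np.polyfit(t[:n_fit], bias[:n_fit], 1)
    return bias - (slope * t + intercept)

t = np.arange(120)
bias = 0.02 * t                      # a pure linear drift, no signal
residual = remove_linear_drift(bias)
print(np.allclose(residual, 0.0))    # True: the drift is fully removed
```

With a real forecast, the residual after drift removal is what would be projected onto the RMM indices to score MJO prediction.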

  17. Factitious psychogenic nonepileptic paroxysmal episodes

    Directory of Open Access Journals (Sweden)

    Alissa Romano

    2014-01-01

    Full Text Available Mistaking psychogenic nonepileptic paroxysmal episodes (PNEPEs for epileptic seizures (ES is potentially dangerous, and certain features should alert physicians to a possible PNEPE diagnosis. Psychogenic nonepileptic paroxysmal episodes due to factitious seizures carry particularly high risks of morbidity or mortality from nonindicated emergency treatment and, often, high costs in wasted medical treatment expenditures. We report a case of a 28-year-old man with PNEPEs that were misdiagnosed as ES. The patient had been on four antiseizure medications (ASMs with therapeutic serum levels and had had multiple intubations in the past for uncontrolled episodes. He had no episodes for two days of continuous video-EEG monitoring. He then disconnected his EEG cables and had an episode of generalized stiffening and cyanosis, followed by jerking and profuse bleeding from the mouth. The manifestations were unusually similar to those of ES, except that he was clearly startled by spraying water on his face, while he was stiff in all extremities and unresponsive. There were indications that he had sucked blood from his central venous catheter to expel through his mouth during his PNEPEs while consciously holding his breath. Normal video-EEG monitoring; the patient's volitional and deceptive acts to fabricate the appearance of illness, despite pain and personal endangerment; and the absence of reward other than remaining in a sick role were all consistent with a diagnosis of factitious disorder.

  18. Regional transport and dilution during high-pollution episodes in southern France: Summary of findings from the Field Experiment to Constraint Models of Atmospheric Pollution and Emissions Transport (ESCOMPTE)

    Science.gov (United States)

    Drobinski, P.; SaïD, F.; Ancellet, G.; Arteta, J.; Augustin, P.; Bastin, S.; Brut, A.; Caccia, J. L.; Campistron, B.; Cautenet, S.; Colette, A.; Coll, I.; Corsmeier, U.; Cros, B.; Dabas, A.; Delbarre, H.; Dufour, A.; Durand, P.; GuéNard, V.; Hasel, M.; Kalthoff, N.; Kottmeier, C.; Lasry, F.; Lemonsu, A.; Lohou, F.; Masson, V.; Menut, L.; Moppert, C.; Peuch, V. H.; Puygrenier, V.; Reitebuch, O.; Vautard, R.

    2007-07-01

    In the French Mediterranean basin the large city of Marseille and its industrialized suburbs (oil plants in the Fos-Berre area) are major pollutant sources that cause frequent and hazardous pollution episodes, especially in summer when intense solar heating enhances the photochemical activity and when the sea breeze circulation redistributes pollutants farther north in the countryside. This paper summarizes the findings of 5 years of research on the sea breeze in southern France and related mesoscale transport and dilution of pollutants within the Field Experiment to Constraint Models of Atmospheric Pollution and Emissions Transport (ESCOMPTE) program held in June and July 2001. This paper provides an overview of the experimental and numerical challenges identified before the ESCOMPTE field experiment and summarizes the key findings made in observation, simulation, and theory. We specifically address the role of large-scale atmospheric circulation to local ozone vertical distribution and the mesoscale processes driving horizontal advection of pollutants and vertical transport and mixing via entrainment at the top of the sea breeze or at the front and venting along the sloped terrain. The crucial importance of the interactions between processes of various spatial and temporal scales is thus highlighted. The advances in numerical modeling and forecasting of sea breeze events and ozone pollution episodes in southern France are also underlined. Finally, we conclude and point out some open research questions needing further investigation.

  19. Regional transport and dilution during high-pollution episodes in southern France: Summary of findings from the Field Experiment to Constraint Models of Atmospheric Pollution and Emissions Transport (ESCOMPTE)

    International Nuclear Information System (INIS)

    Drobinski, P.; Menut, L.; Ancellet, G.; Bastin, S.; Colette, A.; Said, F.; Brut, A.; Campistron, B.; Cros, B.; Durand, P.; Lohou, F.; Moppert, C.; Puygrenier, V.; Arteta, J.; Cautenet, S.; Augustin, P.; Delbarre, H.; Caccia, J.L.; Guenard, V.; Coll, I.; Lasry, F.; Corsmeier, U.; Hasel, M.; Kalthoff, N.; Kottmeier, C.; Dabas, A.; Dufour, A.; Lemonsu, A.; Masson, V.; Peuch, V.H.; Reitebuch, O.; Vautard, R.

    2007-01-01

    In the French Mediterranean basin the large city of Marseille and its industrialized suburbs (oil plants in the Fos-Berre area) are major pollutant sources that cause frequent and hazardous pollution episodes, especially in summer when intense solar heating enhances the photochemical activity and when the sea breeze circulation redistributes pollutants farther north in the countryside. This paper summarizes the findings of 5 years of research on the sea breeze in southern France and related mesoscale transport and dilution of pollutants within the Field Experiment to Constraint Models of Atmospheric Pollution and Emissions Transport (ESCOMPTE) program held in June and July 2001. This paper provides an overview of the experimental and numerical challenges identified before the ESCOMPTE field experiment and summarizes the key findings made in observation, simulation, and theory. We specifically address the role of large-scale atmospheric circulation to local ozone vertical distribution and the mesoscale processes driving horizontal advection of pollutants and vertical transport and mixing via entrainment at the top of the sea breeze or at the front and venting along the sloped terrain. The crucial importance of the interactions between processes of various spatial and temporal scales is thus highlighted. The advances in numerical modeling and forecasting of sea breeze events and ozone pollution episodes in southern France are also underlined. Finally, we conclude and point out some open research questions needing further investigation. (authors)

  20. Regional transport and dilution during high-pollution episodes in southern France: Summary of findings from the Field Experiment to Constraint Models of Atmospheric Pollution and Emissions Transport (ESCOMPTE)

    Energy Technology Data Exchange (ETDEWEB)

    Drobinski, P.; Menut, L. [Ecole Polytechnique, Inst Pierre Simon Laplace, Laboratoire de Meteorologie Dynamique, F-91128 Palaiseau (France); Ancellet, G.; Bastin, S.; Colette, A. [Universite Pierre et Marie Curie, Institut Pierre Simon Laplace, Service d' aeronomie, 4 place Jussieu, F-75252 Paris, (France); Said, F.; Brut, A.; Campistron, B.; Cros, B.; Durand, P.; Lohou, F.; Moppert, C.; Puygrenier, V. [Univ Toulouse, Lab Aerol, F-31400 Toulouse, (France); Arteta, J.; Cautenet, S. [Univ Clermont Ferrand, Lab Meteorol Phys, F-63174 Aubiere, (France); Augustin, P.; Delbarre, H. [Univ Littoral Cote d' Opale, Lab Physicochim Atmosphere, F-59140 Dunkerque, (France); Caccia, J.L.; Guenard, V. [Univ Toulon and Var, Lab Sondages Electromagnet Environm Terr, F-83957 La Garde, (France); Coll, I.; Lasry, F. [Fac Sci and Technol, Lab Interuniv Syst Atmospher, F-94010 Creteil, (France); Corsmeier, U.; Hasel, M.; Kalthoff, N.; Kottmeier, C. [Univ Karlsruhe, Inst Meteorol and Klimaforsch, Forschungszentrum, D-76133 Karlsruhe, (Germany); Dabas, A.; Dufour, A.; Lemonsu, A.; Masson, V.; Peuch, V.H. [Ctr Natl Rech Meteorol, F-31057 Toulouse, (France); Reitebuch, O. [Deutsch Zentrum Luft and Raumfahrt, Inst Atmospher Phys, D-82234 Wessling, (Germany); Vautard, R. [Inst Pierre Simon Laplace, CEA Saclay, Lab Sci Climat and Environm, F-91191 Gif Sur Yvette, (France)

    2007-07-01

    In the French Mediterranean basin the large city of Marseille and its industrialized suburbs (oil plants in the Fos-Berre area) are major pollutant sources that cause frequent and hazardous pollution episodes, especially in summer when intense solar heating enhances the photochemical activity and when the sea breeze circulation redistributes pollutants farther north in the countryside. This paper summarizes the findings of 5 years of research on the sea breeze in southern France and related mesoscale transport and dilution of pollutants within the Field Experiment to Constraint Models of Atmospheric Pollution and Emissions Transport (ESCOMPTE) program held in June and July 2001. This paper provides an overview of the experimental and numerical challenges identified before the ESCOMPTE field experiment and summarizes the key findings made in observation, simulation, and theory. We specifically address the role of large-scale atmospheric circulation to local ozone vertical distribution and the mesoscale processes driving horizontal advection of pollutants and vertical transport and mixing via entrainment at the top of the sea breeze or at the front and venting along the sloped terrain. The crucial importance of the interactions between processes of various spatial and temporal scales is thus highlighted. The advances in numerical modeling and forecasting of sea breeze events and ozone pollution episodes in southern France are also underlined. Finally, we conclude and point out some open research questions needing further investigation. (authors)

  1. The Fossil Episode

    OpenAIRE

    Hassler, John; Sinn, Hans-Werner

    2012-01-01

    We build a two-sector dynamic general equilibrium model with one-sided substitutability between fossil carbon and biocarbon. One shock only, the discovery of the technology to use fossil fuels, leads to a transition from an initial pre-industrial phase to three following phases: a pure fossil carbon phase, a mixed fossil and biocarbon phase, and an absorbing biocarbon phase. The increased competition for biocarbon during phases 3 and 4 leads to increasing food prices. We provide closed form expr...

  2. The Fossil Episode

    OpenAIRE

    John Hassler; Hans-Werner Sinn

    2012-01-01

    We build a two-sector dynamic general equilibrium model with one-sided substitutability between fossil carbon and biocarbon. One shock only, the discovery of the technology to use fossil fuels, leads to a transition from an initial pre-industrial phase to three following phases: a pure fossil carbon phase, a mixed fossil and biocarbon phase, and an absorbing biocarbon phase. The increased competition for biocarbon during phases 3 and 4 leads to increasing food prices. We provide closed form exp...

  3. Apathy in first episode psychosis patients

    DEFF Research Database (Denmark)

    Evensen, Julie; Røssberg, Jan Ivar; Barder, Helene

    2012-01-01

    Apathy is a common symptom in first episode psychosis (FEP), and is associated with poor functioning. Prevalence and correlates of apathy 10 years after the first psychotic episode remain unexplored.

  4. Episodic Memory: A Comparative Approach

    Science.gov (United States)

    Martin-Ordas, Gema; Call, Josep

    2013-01-01

    Historically, episodic memory has been described as autonoetic, personally relevant, complex, context-rich, and allowing mental time travel. In contrast, semantic memory, which is theorized to be free of context and personal relevance, is noetic and consists of general knowledge of facts about the world. The field of comparative psychology has adopted this distinction in order to study episodic memory in non-human animals. Our aim in this article is not only to reflect on the concept of episodic memory and the experimental approaches used in comparative psychology to study this phenomenon, but also to provide a critical analysis of these paradigms. We conclude the article by providing new avenues for future research. PMID:23781179

  5. Reconstructions of f(T) gravity from entropy-corrected holographic and new agegraphic dark energy models in power-law and logarithmic versions

    Energy Technology Data Exchange (ETDEWEB)

    Saha, Pameli; Debnath, Ujjal [Indian Institute of Engineering Science and Technology, Department of Mathematics, Howrah (India)

    2016-09-15

    Here, we peruse cosmological uses of the most promising candidates of dark energy in the framework of f(T) gravity theory, where T represents the torsion scalar of teleparallel gravity. We reconstruct different f(T) modified gravity models in the spatially flat Friedmann-Robertson-Walker universe according to entropy-corrected versions of the holographic and new agegraphic dark energy models with power-law and logarithmic corrections, which describe an accelerated expansion history of the universe. We conclude that the equation of state parameter of the entropy-corrected models can transit from the quintessence state to the phantom regime, as indicated by recent observations, or can lie entirely in the phantom region. Also, using these models, we investigate the different regions of stability with the help of the squared speed of sound. (orig.)
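For reference, the quintessence/phantom classification and the squared-speed-of-sound stability criterion invoked above follow the standard definitions; the block below states them in generic FRW conventions and is not specific to the reconstructed f(T) models.

```latex
% Dark-energy equation-of-state parameter and its regimes
w_{DE} = \frac{p_{DE}}{\rho_{DE}}, \qquad
\begin{cases}
-1 < w_{DE} < -\tfrac{1}{3}, & \text{quintessence regime},\\[2pt]
w_{DE} < -1, & \text{phantom regime}.
\end{cases}

% Stability criterion via the squared speed of sound
v_s^{2} = \frac{dp}{d\rho} > 0 \quad \Longrightarrow \quad
\text{classically stable perturbations}.
```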

  6. The sagittal stem alignment and the stem version clearly influence the impingement-free range of motion in total hip arthroplasty: a computer model-based analysis.

    Science.gov (United States)

    Müller, Michael; Duda, Georg; Perka, Carsten; Tohtz, Stephan

    2016-03-01

    The component alignment in total hip arthroplasty influences the impingement-free range of motion (ROM). While substantiated data are available for cup positioning, little is known about stem alignment. Stem rotation and sagittal stem alignment in particular influence the position of the cone in relation to the edge of the socket, and thus impingement-free functioning. Hence the question arises: what influence do these parameters have on the impingement-free ROM? With the help of a computer model, the influence of sagittal stem alignment and stem rotation on the impingement-free ROM was investigated. The computer model was based on the CT dataset of a patient with a non-cemented THA. In the model the stem version was set at 10°/0°/-10° and the sagittal alignment at 5°/0°/-5°, which resulted in nine alternative stem positions. For each position, the maximum impingement-free ROM was investigated. Both stem version and sagittal stem alignment have a relevant influence on the impingement-free ROM. In particular, flexion and extension as well as internal and external rotation capability show evident differences. Across the position intervals of 10° in sagittal stem alignment and 20° in stem version, differences of about 80° in flexion and 50° in extension capability were found. Likewise, differences of up to 72° in internal and up to 36° in external rotation were observed. The sagittal stem alignment and the stem torsion have a relevant influence on the impingement-free ROM. To clarify the causes of an impingement or accompanying problems, both parameters should be examined and, if possible, a combined assessment of these factors should be made.

  7. Imagining the personal past: Episodic counterfactuals compared to episodic memories and episodic future projections

    DEFF Research Database (Denmark)

    Özbek, Müge; Bohn, Annette; Berntsen, Dorthe

    2017-01-01

    Episodic counterfactuals are imagined events that could have happened, but did not happen, in a person’s past. Such imagined past events are important aspects of mental life, affecting emotions, decisions, and behaviors. However, studies examining their phenomenological characteristics and content...... are few. Here we introduced a new method to systematically compare self-generated episodic counterfactuals to self-generated episodic memories and future projections with regard to their phenomenological characteristics (e.g., imagery, emotional valence, rehearsal) and content (e.g., reference to cultural...... distance. The findings show that imagined events are phenomenologically different from memories of experienced events, consistent with reality monitoring theory, and that imagined future events are different from both actual and imagined past events, consistent with some theories of motivation....

  8. Developing and validating a tablet version of an illness explanatory model interview for a public health survey in Pune, India.

    Directory of Open Access Journals (Sweden)

    Joseph G Giduthuri

    Full Text Available BACKGROUND: Mobile electronic devices are replacing paper-based instruments and questionnaires for epidemiological and public health research. The elimination of a data-entry step after an interview is a notable advantage over paper, saving investigator time, decreasing the time lags in managing and analyzing data, and potentially improving data quality by removing the error-prone data-entry step. Research has not yet provided adequate evidence, however, to substantiate the claim of fewer errors for computerized interviews. METHODOLOGY: We developed an Android-based illness explanatory interview for influenza vaccine acceptance and tested the instrument in a field study in Pune, India, for feasibility and acceptability. Error rates for tablet and paper were compared, with the voice recording of the interview as the gold standard for assessing discrepancies. We also examined the preference of interviewers for the classical paper-based or the electronic version of the interview, and compared the costs of research with both data collection devices. RESULTS: In 95 interviews with household respondents, total error rates with paper and tablet devices were nearly the same (2.01% and 1.99%, respectively). Most interviewers indicated no preference for a particular device, but those with a preference opted for tablets. The initial investment in tablet-based interviews was higher compared to paper, while the recurring costs per interview were lower with the use of tablets. CONCLUSION: An Android-based tablet version of a complex interview was developed and successfully validated. Its advantages were not compromised by increased errors, and field research assistants with a preference preferred the Android device. Use of tablets may be more costly than paper for small samples and less costly for large studies.

  9. Weak depth and along-strike variations in stretching from a multi-episodic finite stretching model: Evidence for uniform pure-shear extension in the opening of the South China Sea

    Science.gov (United States)

    Chen, Lin; Zhang, Zhongjie; Song, Haibin

    2013-12-01

    The South China Sea is widely believed to have been opened by seafloor spreading during the Cenozoic. The details of its lithospheric extension are still being debated, and it is unknown whether pure, simple, or conjunct shears are responsible for the opening of the South China Sea. The depth-dependent and along-strike extension derived from a single-stage finite stretching model or an instantaneous stretching model is inconsistent with the observation that the South China Sea proto-margins have experienced multi-episodic extension since the Late Cretaceous. Based on a multi-episodic finite stretching model, we present the amount of lithospheric stretching at the northern continental margin of the South China Sea for different depth scales (upper crust, whole crust, and lithosphere) and along several transects. The stretching factors are estimated by integrating seven deep-penetration seismic profiles, the Moho distribution derived from gravity modeling, and tectonic subsidence data for 41 wells. The results demonstrate that the amount of stretching increases rapidly from 1.1 at the continental shelf to over 3.5 at the lower slope, but the stretching factors at the crust and lithosphere scales are consistent within error (from the uncertainty in paleobathymetry and sea-level change). Furthermore, the along-strike variation in stretching factor is within the range of 1.11-1.9 in the west-east direction, accompanied by significant west-east differences in the thickness of high-velocity layers (HVLs) within the lowermost crust. This weak along-strike variation of the stretching factor is most likely produced by preexisting contrasts in the composition and thermal structure of the lithosphere. The above observations suggest that the continental extension in the opening of the South China Sea mainly takes the form of uniform pure shear rather than depth-dependent stretching.
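In the pure-shear (McKenzie-type) sense referenced above, a crustal stretching factor can be sketched as the ratio of initial to present thickness. The thickness values below are illustrative assumptions, not values taken from the study's seismic profiles.

```python
# Sketch of a pure-shear stretching factor: beta = t_initial / t_present.
def stretching_factor(t_initial_km, t_present_km):
    """Stretching factor beta under uniform pure shear (thickness ratio)."""
    if t_present_km <= 0:
        raise ValueError("present thickness must be positive")
    return t_initial_km / t_present_km

# A 32 km reference crust thinned to 10 km, as might occur at a lower slope:
print(round(stretching_factor(32.0, 10.0), 2))  # 3.2
```

Comparing beta computed separately for the upper crust, whole crust, and lithosphere is what allows depth-dependent stretching to be distinguished from the uniform case.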

  10. Relations between episodic memory, suggestibility, theory of mind, and cognitive inhibition in the preschool child.

    Science.gov (United States)

    Melinder, Annika; Endestad, Tor; Magnussen, Svein

    2006-12-01

    The development of episodic memory, its relation to theory of mind (ToM), executive functions (e.g., cognitive inhibition), and suggestibility was studied. Children (n = 115) between 3 and 6 years of age saw two versions of a video film and were tested for their memory of critical elements of the videos. Results indicated similar developmental trends for all memory measures, ToM, and inhibition, but ToM and inhibition were not associated with any memory measures. Correlations involving source memory were found in relation to specific questions, whereas inhibition and ToM were significantly correlated with resistance to suggestions. A regression analysis showed that age was the main contributor to resistance to suggestions, to correct source monitoring, and to correct responses to specific questions. Inhibition was also a significant main predictor of resistance to suggestive questions, whereas the relative contribution of ToM was wiped out when an extended model was tested.

  11. Variation in payments for spine surgery episodes of care: implications for episode-based bundled payment.

    Science.gov (United States)

    Kahn, Elyne N; Ellimoottil, Chandy; Dupree, James M; Park, Paul; Ryan, Andrew M

    2018-05-25

    OBJECTIVE Spine surgery is expensive and marked by high variation across regions and providers. Bundled payments have the potential to reduce unwarranted spending associated with spine surgery. This study is a cross-sectional analysis of commercial and Medicare claims data from January 2012 through March 2015 in the state of Michigan. The objective was to quantify variation in payments for spine surgery in adult patients, document sources of variation, and determine the influence of patient-level, surgeon-level, and hospital-level factors. METHODS Hierarchical regression models were used to analyze contributions of patient-level covariates and the influence of individual surgeons and hospitals. The primary outcome was price-standardized 90-day episode payments. Intraclass correlation coefficients (measures of the variability accounted for by each level of a hierarchical model) were used to quantify sources of spending variation. RESULTS The authors analyzed 17,436 spine surgery episodes performed by 195 surgeons at 50 hospitals. Mean price-standardized 90-day episode payments in the highest spending quintile exceeded mean payments for episodes in the lowest cost quintile by $42,953. After accounting for patient-level covariates, the remaining hospital-level and surgeon-level effects accounted for 2.0% (95% CI 1.1%-3.8%) and 4.0% (95% CI 2.9%-5.6%) of total variation, respectively. CONCLUSIONS Significant variation exists in total episode payments for spine surgery, driven mostly by variation in post-discharge and facility payments. Hospital and surgeon effects account for relatively little of the observed variation.
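The intraclass correlation idea used above to apportion spending variance can be sketched with a toy calculation. The variance components below are illustrative numbers, not the study's estimates; the paper itself fit full hierarchical regression models.

```python
# Sketch of an intraclass correlation coefficient (ICC): the share of total
# variance attributable to a grouping level (e.g., hospital or surgeon),
# given already-estimated variance components.
def icc(var_between, var_within):
    """Proportion of total variance due to the between-group component."""
    return var_between / (var_between + var_within)

# Illustrative components: a small between-hospital variance against a large
# residual variance yields a 4% hospital-level share.
print(icc(2.0, 48.0))  # 0.04
```

In a real analysis the two variance components would come from a fitted mixed-effects model of price-standardized episode payments, with patient covariates already partialed out.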

  12. Microbiology of Peritonitis in Peritoneal Dialysis Patients with Multiple Episodes

    Science.gov (United States)

    Nessim, Sharon J.; Nisenbaum, Rosane; Bargman, Joanne M.; Jassal, Sarbjit V.

    2012-01-01

    ♦ Background: Peritoneal dialysis (PD)–associated peritonitis clusters within patients. Patient factors contribute to peritonitis risk, but there is also entrapment of organisms within the biofilm that forms on PD catheters. It is hypothesized that this biofilm may prevent complete eradication of organisms, predisposing to multiple infections with the same organism. ♦ Methods: Using data collected in the Canadian multicenter Baxter POET (Peritonitis, Organism, Exit sites, Tunnel infections) database from 1996 to 2005, we studied incident PD patients with 2 or more peritonitis episodes. We determined the proportion of patients with 2 or more episodes caused by the same organism. In addition, using a multivariate logistic regression model, we tested whether prior peritonitis with a given organism predicted the occurrence of a subsequent episode with the same organism. ♦ Results: During their time on PD, 558 patients experienced 2 or more peritonitis episodes. Of those 558 patients, 181 (32%) had at least 2 episodes with the same organism. The organism most commonly causing repeat infection was coagulase-negative Staphylococcus (CNS), accounting for 65.7% of cases. Compared with peritonitis caused by other organisms, a first CNS peritonitis episode was associated with an increased risk of subsequent CNS peritonitis within 1 year (odds ratio: 2.1; 95% confidence interval: 1.5 to 2.8). For repeat CNS peritonitis, 48% of repeat episodes occurred within 6 months of the earlier episode. ♦ Conclusions: In contrast to previous data, we did not find a high proportion of patients with multiple peritonitis episodes caused by the same organism. Coagulase-negative Staphylococcus was the organism most likely to cause peritonitis more than once in a given patient, and a prior CNS peritonitis was associated with an increased risk of CNS peritonitis within the subsequent year. PMID:22215659

  13. Nuisance forecasting. Univariate modelling and very-short-term forecasting of winter smog episodes; Immissionsprognose. Univariate Modellierung und Kuerzestfristvorhersage von Wintersmogsituationen

    Energy Technology Data Exchange (ETDEWEB)

    Schlink, U.

    1996-12-31

    The work evaluates the pollution data recorded by the measuring station in the centre of Leipzig during the period 1980 to 1993, with the aim of developing an algorithm for very-short-term forecasts of excessive pollution levels. Forecasting was to be univariate, i.e., based exclusively on the half-hourly readings of SO{sub 2} concentrations taken in the past. As Fourier analysis shows, there are three main, mutually independent spectral regions: the high-frequency sector (period < 12 hours) of unstable irregularities, the seasonal sector with periods of 24 and 12 hours, and the low-frequency sector (period > 24 hours). After the measurement series is decomposed into components, the low-frequency sector is termed the trend component, or trend for short. A Kalman filter is used to obtain the components. It was found that smog episodes are most adequately described by the trend component, which is therefore investigated more closely. The phase representation then shows characteristic trajectories of the trends.
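A local-level Kalman filter of the kind alluded to above can separate a slowly varying "trend" from noisy half-hourly readings. This is a minimal sketch with illustrative process/observation variances and synthetic data, not the decomposition actually used in the study.

```python
# Minimal local-level (random walk + noise) Kalman filter: the filtered
# state tracks the slow trend and damps short-lived spikes.
import numpy as np

def local_level_filter(y, q=0.01, r=1.0):
    """Return filtered level estimates for observations y.

    q: process (trend) variance, r: observation noise variance.
    """
    x, p = y[0], 1.0              # initial state estimate and variance
    levels = []
    for obs in y:
        p = p + q                 # predict step: variance grows by q
        k = p / (p + r)           # Kalman gain
        x = x + k * (obs - x)     # update state toward the observation
        p = (1 - k) * p           # update variance
        levels.append(x)
    return np.array(levels)

y = np.array([1.0, 1.2, 0.9, 1.1, 5.0, 1.0])  # one short-lived spike
trend = local_level_filter(y)
print(trend[4] < y[4])  # True: the spike is heavily damped in the trend
```

Subtracting the filtered trend from the raw series would leave the high-frequency and seasonal components for separate treatment, mirroring the three spectral regions identified by the Fourier analysis.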

  14. Negative affect prior to and following overeating-only, loss of control eating-only, and binge eating episodes in obese adults.

    Science.gov (United States)

    Berg, Kelly C; Crosby, Ross D; Cao, Li; Crow, Scott J; Engel, Scott G; Wonderlich, Stephen A; Peterson, Carol B

    2015-09-01

    The objective was to examine the trajectory of five types of negative affect (global negative affect, fear, guilt, hostility, sadness) prior to and following three types of eating episodes (overeating in the absence of loss of control [OE-only], loss of control eating in the absence of overeating [LOC-only], and binge eating) among obese adults using ecological momentary assessment (EMA). Fifty obese adults (84% female) completed a two-week EMA protocol during which they were asked to record all eating episodes and rate each episode on continua of overeating and loss of control. Momentary measures of global negative affect, fear, guilt, hostility, and sadness were assessed using an abbreviated version of the Positive and Negative Affect Schedule (PANAS). Trajectories for each of the five types of negative affect were modeled prior to and following episodes of OE-only, LOC-only, and binge eating. Consistent with previous findings, global negative affect and guilt increased prior to and decreased following binge eating episodes (all ps < .05). Guilt also decreased following OE-only episodes (p < .05). These results are consistent with the affect regulation model of binge eating and suggest that binge eating may function to regulate global negative affect, and more specifically guilt, among obese adults. These data suggest that the relationship between negative affect and binge eating may not be unique to individuals with clinical eating disorders, and indicate that targeting negative affect may be an effective strategy for the treatment of binge eating in the context of obesity. © 2015 Wiley Periodicals, Inc.

  15. Calculation of Brown Carbon Optical Properties in the Fifth version Community Atmospheric Model (CAM5) and Validation with a Case Study in Kanpur, India

    Science.gov (United States)

    Xu, L.; Peng, Y.; Ram, K.

    2017-12-01

    The presence of an absorbing component of organic carbon in atmospheric aerosols (brown carbon, BrC) has recently received much attention from the scientific community because of its absorbing nature, especially in the UV and visible regions. Attempts to account for BrC in radiative forcing calculations in climate models are rather scarce, primarily due to observational constraints as well as the difficulty of incorporating BrC in model-based studies. Because off-line models do not treat BrC, there is a large discrepancy between model- and observation-based estimates of the direct radiative effect of carbonaceous aerosols. In this study, we have included BrC absorption and optical characteristics in the fifth version of the Community Atmospheric Model (CAM5) to better understand the radiative impact of BrC over northern India and to improve the performance of aerosol radiative calculations in the climate model. We used aerosol chemical composition measurements conducted at an urban site, Kanpur, in the Indo-Gangetic Plain (IGP) during 2007-2008 as inputs to construct the optical properties of BrC in the CAM5 model. Model radiative simulations from sensitivity tests showed good agreement with observations. The effects of varying the imaginary part of the BrC refractive index and the relative mass ratio of BrC to organic aerosol, in combination with core-shell mixing of BrC with other anthropogenic aerosols, are also analysed to understand the BrC impact on simulated aerosol absorption in the model.
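
    As a rough illustration of why the imaginary part of the refractive index matters for absorption, the small-particle (Rayleigh) limit gives a closed-form mass absorption cross-section. The sketch below varies the imaginary index k as in the sensitivity tests; the real index of 1.55, the 1400 kg m-3 density, and the Rayleigh approximation itself are our assumptions, not the CAM5 treatment, which uses a full Mie/core-shell calculation.

```python
import math

def rayleigh_mac(wavelength_m, n_real, k_imag, density=1400.0):
    """Mass absorption cross-section (m^2/kg) of small absorbing particles
    in the Rayleigh limit: MAC = 6*pi/(lambda*rho) * Im{(m^2-1)/(m^2+2)}.
    Illustrative only; density and refractive indices are assumed values."""
    m = complex(n_real, k_imag)
    clausius_mossotti = (m * m - 1) / (m * m + 2)   # polarizability factor
    return 6 * math.pi / (wavelength_m * density) * clausius_mossotti.imag

# Sensitivity of absorption to the imaginary refractive index at 365 nm (UV)
for k in (0.01, 0.05, 0.1):
    mac = rayleigh_mac(365e-9, 1.55, k)
    print(f"k = {k:.2f}  ->  MAC = {mac:.1f} m^2/kg")
```

    The monotonic growth of MAC with k shows why the imaginary index is a leading-order control on simulated BrC absorption.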

  16. A new version of the CNRM Chemistry-Climate Model, CNRM-CCM: description and improvements from the CCMVal-2 simulations

    Directory of Open Access Journals (Sweden)

    M. Michou

    2011-10-01

    Full Text Available This paper presents a new version of the Météo-France CNRM Chemistry-Climate Model, referred to as CNRM-CCM. It includes some fundamental changes from the previous version (CNRM-ACM), which was extensively evaluated in the context of the CCMVal-2 validation activity. The most notable changes concern the radiative code of the GCM and the on-line inclusion within the GCM of the detailed stratospheric chemistry of our chemistry-transport model MOCAGE. A 47-yr transient simulation (1960–2006) is the basis of our analysis. CNRM-CCM generates satisfactory dynamical and chemical fields in the stratosphere. Several shortcomings of the CNRM-ACM simulations for CCMVal-2 that resulted from an erroneous representation of the impact of volcanic aerosols, as well as from transport deficiencies, have been eliminated.

    Remaining problems concern the upper stratosphere (5 to 1 hPa where temperatures are too high, and where there are biases in the NO2, N2O5 and O3 mixing ratios. In contrast, temperatures at the tropical tropopause are too cold. These issues are addressed through the implementation of a more accurate radiation scheme at short wavelengths. Despite these problems we show that this new CNRM CCM is a useful tool to study chemistry-climate applications.

  17. Development of a new version of the Liverpool Malaria Model. I. Refining the parameter settings and mathematical formulation of basic processes based on a literature review

    Directory of Open Access Journals (Sweden)

    Jones Anne E

    2011-02-01

    Full Text Available Abstract Background A warm and humid climate triggers several water-associated diseases such as malaria. Climate- or weather-driven malaria models therefore allow for a better understanding of malaria transmission dynamics. The Liverpool Malaria Model (LMM) is a mathematical-biological model of malaria parasite dynamics using daily temperature and precipitation data. In this study, the parameter settings of the LMM are refined and a new mathematical formulation of key processes related to the growth and size of the vector population is developed. Methods One of the most comprehensive studies to date in terms of gathering entomological and parasitological information from the literature was undertaken for the development of a new version of an existing malaria model. This knowledge was needed to justify the new settings of various model parameters and motivated changes to the mathematical formulation of the LMM. Results The first part of the present study developed an improved set of parameter settings and an improved mathematical formulation of the LMM. Important modules of the original LMM version were enhanced in order to achieve higher biological and physical accuracy. Oviposition and the survival of immature mosquitoes were adjusted to field conditions via the application of a fuzzy distribution model. Key model parameters, including the mature age of mosquitoes, the survival probability of adult mosquitoes, the human blood index, the mosquito-to-human and human-to-mosquito transmission efficiencies, the human infectious age, the recovery rate, and the gametocyte prevalence, were reassessed by means of entomological and parasitological observations. This paper also revealed that various malaria variables lack the information from field studies needed to set them properly in a malaria modelling approach. Conclusions Due to the multitude of model parameters and the uncertainty involved in the setting of parameters, an extensive
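
    A fuzzy distribution model of the kind mentioned for oviposition and immature survival can be sketched as a trapezoidal membership function mapping rainfall to a 0-1 suitability weight. The breakpoint values below are illustrative placeholders, not the calibrated LMM parameters.

```python
def fuzzy_suitability(rain_mm, r_lower=1.0, r_opt_low=10.0,
                      r_opt_high=50.0, r_upper=300.0):
    """Trapezoidal fuzzy membership mapping 10-day rainfall (mm) to a
    suitability weight in [0, 1] for oviposition/immature survival.
    Breakpoints are illustrative, not the calibrated LMM values."""
    if rain_mm <= r_lower or rain_mm >= r_upper:
        return 0.0                                 # too dry or flooding
    if rain_mm < r_opt_low:                        # rising edge
        return (rain_mm - r_lower) / (r_opt_low - r_lower)
    if rain_mm <= r_opt_high:                      # plateau: fully suitable
        return 1.0
    return (r_upper - rain_mm) / (r_upper - r_opt_high)  # excess-rain edge

print(fuzzy_suitability(0.5), fuzzy_suitability(30.0), fuzzy_suitability(200.0))
```

    Weighting oviposition by such a membership function lets daily rainfall drive the vector population smoothly, instead of through a hard on/off rainfall threshold.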

  18. Updating sea spray aerosol emissions in the Community Multiscale Air Quality (CMAQ) model version 5.0.2

    Data.gov (United States)

    U.S. Environmental Protection Agency — The uploaded data consists of the BRACE Na aerosol observations paired with CMAQ model output, the updated model's parameterization of sea salt aerosol emission size...

  19. FPL-PELPS : a price endogenous linear programming system for economic modeling, supplement to PELPS III, version 1.1.

    Science.gov (United States)

    Patricia K. Lebow; Henry Spelter; Peter J. Ince

    2003-01-01

    This report provides documentation and user information for FPL-PELPS, a personal computer price endogenous linear programming system for economic modeling. Originally developed to model the North American pulp and paper industry, FPL-PELPS follows its predecessors in allowing the modeling of any appropriate sector to predict consumption, production and capacity by...

  20. Obesity and episodic memory function.

    Science.gov (United States)

    Loprinzi, Paul D; Frith, Emily

    2018-04-17

    Obesity-related lifestyle factors, such as physical activity behavior and dietary intake, have been shown to be associated with episodic memory function. From animal work, there is considerable biological plausibility linking obesity with worse memory function. There are no published systematic reviews evaluating the effects of obesity on episodic memory function among humans, or examining whether physical activity and diet influence this obesity-memory link. Thus, the purpose of this systematic review was to evaluate the totality of research examining whether obesity is associated with episodic memory function, and whether physical activity and dietary behavior confound this relationship. A systematic review approach was employed, using the PubMed, PsychInfo, and Sports Discus databases. Fourteen studies met our criteria. Among these 14 reviewed studies, eight were cross-sectional, four were prospective, and two employed a randomized controlled experimental design. Twelve of the 14 studies did not take dietary behavior into consideration in their analysis, and similarly, nine of the 14 studies did not take participant physical activity behavior into consideration. Among the 14 studies, ten found an inverse association between weight status and memory function, but for one of these studies, this association was attenuated after controlling for physical activity. Four of the 14 evaluated studies did not find a direct effect of weight status on memory. Among these four null studies, one, however, found an indirect effect of BMI on episodic memory and another found a moderating effect of BMI and age on memory function. It appears that obesity may be associated with worse memory function, with the underlying mechanisms discussed herein. At this point, it is uncertain whether adiposity, itself, is influencing memory changes, or rather, whether adiposity-related lifestyle behaviors (e.g., physical inactivity and diet) are driving the obesity-memory relationship.

  1. Comparison of first-episode and chronic patients diagnosed with schizophrenia: symptoms and childhood trauma.

    Science.gov (United States)

    Wang, Zheng; Xue, Zhimin; Pu, Weidan; Yang, Bo; Li, Li; Yi, Wenyin; Wang, Peng; Liu, Chang; Wu, Guowei; Liu, Zhening; Rosenheck, Robert A

    2013-02-01

    There has been considerable interest in identifying and addressing the specific needs of early-episode patients diagnosed with schizophrenia in the hope that by addressing such needs early, chronic disabilities can be avoided. One hundred twenty-eight early-episode and 571 chronic patients were compared on socio-demographic characteristics, clinical symptoms and history of childhood trauma. Symptoms were measured with the Positive and Negative Syndrome Scale (PANSS), and trauma with the short version of the Childhood Trauma Questionnaire. First-episode patients scored 9.3% higher than chronic patients on the PANSS positive symptom scale and 16.3% lower on the negative symptom scale. More first-episode patients reported childhood sexual abuse (P = 0.033); however, fewer reported childhood emotional neglect (P = 0.01). Childhood trauma was associated with positive symptoms, specifically with hallucinations in first-episode patients (r = 0.174; P = 0.049). Moreover, fewer parents of first-episode patients were living alone (P = 0.008). On multiple logistic regression, the first-episode patients were younger (odds ratio = 0.92), had higher PANSS positive symptom scores (odds ratio = 1.04) and lower negative symptom scores (odds ratio = 0.948). More positive symptoms, fewer negative symptoms, less isolated parents and greater risk of childhood sexual abuse might warrant attention in first-episode schizophrenia and perhaps should be a focus for the development of targeted interventions. © 2012 Wiley Publishing Asia Pty Ltd.
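
    The odds ratios reported above come from logistic-regression coefficients: a coefficient β maps to an odds ratio exp(β), and multi-unit changes in a predictor compound multiplicatively. A minimal sketch of that arithmetic follows; the 0.92 value is taken from the abstract, while the 10-year example is ours.

```python
import math

def odds_ratio(beta):
    """Convert a logistic-regression coefficient to an odds ratio."""
    return math.exp(beta)

def odds_change(or_value, delta):
    """Multiplicative change in odds for a delta-unit change in the predictor."""
    return or_value ** delta

# Interpreting the reported age effect: OR = 0.92 per year means a patient
# 10 years older has odds of being first-episode multiplied by about 0.43.
age_or = 0.92
print(odds_change(age_or, 10))
```

    This is why per-unit odds ratios near 1 can still imply large differences across wide predictor ranges.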

  2. The CSIRO Mk3L climate system model version 1.0 – Part 1: Description and evaluation

    Directory of Open Access Journals (Sweden)

    S. J. Phipps

    2011-06-01

    Full Text Available The CSIRO Mk3L climate system model is a coupled general circulation model, designed primarily for millennial-scale climate simulations and palaeoclimate research. Mk3L includes components which describe the atmosphere, ocean, sea ice and land surface, and combines computational efficiency with a stable and realistic control climatology. This paper describes the model physics and software, analyses the control climatology, and evaluates the ability of the model to simulate the modern climate.

    Mk3L incorporates a spectral atmospheric general circulation model, a z-coordinate ocean general circulation model, a dynamic-thermodynamic sea ice model and a land surface scheme with static vegetation. The source code is highly portable, and has no dependence upon proprietary software. The model distribution is freely available to the research community. A 1000-yr climate simulation can be completed in around one-and-a-half months on a typical desktop computer, with greater throughput being possible on high-performance computing facilities.

    Mk3L produces realistic simulations of the larger-scale features of the modern climate, although with some biases on the regional scale. The model also produces reasonable representations of the leading modes of internal climate variability in both the tropics and extratropics. The control state of the model exhibits a high degree of stability, with only a weak cooling trend on millennial timescales. Ongoing development work aims to improve the model climatology and transform Mk3L into a comprehensive earth system model.

  3. Groundwater model of the Great Basin carbonate and alluvial aquifer system version 3.0: Incorporating revisions in southwestern Utah and east central Nevada

    Science.gov (United States)

    Brooks, Lynette E.

    2017-12-01

    The groundwater model described in this report is a new version of previously published steady-state numerical groundwater flow models of the Great Basin carbonate and alluvial aquifer system, and was developed in conjunction with U.S. Geological Survey studies in Parowan, Pine, and Wah Wah Valleys, Utah. This version of the model is GBCAAS v. 3.0 and supersedes previous versions. The objectives of the model for Parowan Valley were to simulate revised conceptual estimates of recharge and discharge, to estimate simulated aquifer storage properties and the amount of reduction in storage as a result of historical groundwater withdrawals, and to assess reduction in groundwater withdrawals necessary to mitigate groundwater-level declines in the basin. The objectives of the model for the area near Pine and Wah Wah Valleys were to recalibrate the model using new observations of groundwater levels and evapotranspiration of groundwater; to provide new estimates of simulated recharge, hydraulic conductivity, and interbasin flow; and to simulate the effects of proposed groundwater withdrawals on the regional flow system. Meeting these objectives required the addition of 15 transient calibration stress periods and 14 projection stress periods, aquifer storage properties, historical withdrawals in Parowan Valley, and observations of water-level changes in Parowan Valley. Recharge in Parowan Valley and withdrawal from wells in Parowan Valley and two nearby wells in Cedar City Valley vary for each calibration stress period representing conditions from March 1940 to November 2013. Stresses, including recharge, are the same in each stress period as in the steady-state stress period for all areas outside of Parowan Valley. The model was calibrated to transient conditions only in Parowan Valley. Simulated storage properties outside of Parowan Valley were set the same as the Parowan Valley properties and are not considered calibrated. Model observations in GBCAAS v. 3.0 are

  4. STWAVE: Steady-State Spectral Wave Model. Report 1: User's Manual for STWAVE Version 2.0

    National Research Council Canada - National Science Library

    Smith, Jane

    1999-01-01

    STWAVE has also been incorporated into the Surface-Water Modeling System, which provides a user interface and supporting software for grid generation, interpolation of current fields, generation...

  5. Early detection of first-episode psychosis

    DEFF Research Database (Denmark)

    Larsen, Tor K; Melle, Ingrid; Auestad, Bjørn

    2006-01-01

    Early intervention is assumed to improve outcome in first-episode psychosis, but this has not been proven.

  6. Functional neuroimaging of semantic and episodic musical memory.

    Science.gov (United States)

    Platel, Hervé

    2005-12-01

    The distinction between episodic and semantic memory has become very popular since it was first proposed by Tulving in 1972. So far, very few neuropsychological, psychophysical, and imaging studies have addressed the mnemonic aspects of music, notably its long-term memory features, and practically nothing is known about the functional anatomy of long-term memory for music. Numerous functional imaging studies have shown that retrieval from semantic and episodic memory is subserved by distinct neural networks. For instance, the HERA model (hemispheric encoding/retrieval asymmetry) ascribes to the left prefrontal cortex a preferential role in the encoding of episodic material and the recall of semantic information, while the right prefrontal cortex would preferentially operate in the recall of episodic information. However, these results were obtained essentially with verbal and visuo-spatial material. We conducted a study to determine the neural substrates underlying the semantic and episodic components of music using familiar and nonfamiliar melodic tunes. Two distinct patterns of activations were found: bilateral activation of the middle and superior frontal areas and precuneus for episodic memory, and bilateral activation of the medial and orbital frontal cortex, the left angular gyrus, and the anterior part of the left middle and superior temporal gyri for semantic memory. We discuss these findings in light of the available neuropsychological data obtained in brain-damaged subjects and functional neuroimaging studies.

  7. A Multi-Year Plan for Enhancing Turbulence Modeling in Hydra-TH Revised and Updated Version 2.0

    Energy Technology Data Exchange (ETDEWEB)

    Smith, Thomas M. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Berndt, Markus [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Baglietto, Emilio [Massachusetts Inst. of Technology (MIT), Cambridge, MA (United States); Magolan, Ben [Massachusetts Inst. of Technology (MIT), Cambridge, MA (United States)

    2015-10-01

    The purpose of this report is to document a multi-year plan for enhancing turbulence modeling in Hydra-TH for the Consortium for Advanced Simulation of Light Water Reactors (CASL) program. Hydra-TH is being developed to meet the high-fidelity, high-Reynolds-number, CFD-based thermal-hydraulic simulation needs of the program. This work is being conducted within the thermal hydraulics methods (THM) focus area. This report is an extension of CASL THM milestone L3:THM.CFD.P10.02 [33] (March 2015) and picks up where it left off. It will also serve to meet the requirements of CASL THM level-three milestone L3:THM.CFD.P11.04, scheduled for completion September 30, 2015. The objectives of this plan will be met by maturation of recently added turbulence models, strategic design and development of new models, and systematic, rigorous testing of existing and new models and model extensions. While multi-phase turbulent flow simulations are important to the program, only single-phase modeling is considered in this report. Large Eddy Simulation (LES) is also an important modeling methodology; however, at least in the first year, the focus is on steady-state Reynolds-Averaged Navier-Stokes (RANS) turbulence modeling.

  8. delta O-18 water isotope in the iLOVECLIM model (version 1.0) - Part 1: Implementation and verification

    NARCIS (Netherlands)

    Roche, D.M.V.A.P.

    2013-01-01

    A new 18O stable water isotope scheme is developed for three components of the iLOVECLIM coupled climate model: atmospheric, oceanic and land surface. The equations required to reproduce the fractionation of stable water isotopes in the simplified atmospheric model ECBilt are developed consistently
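
    A minimal sketch of the bookkeeping such an isotope scheme implements: delta notation relative to the VSMOW standard, plus a Rayleigh distillation step for vapour losing mass to condensation. The fractionation factor (an equilibrium liquid-vapour value near 20 °C) and the VSMOW ratio are textbook values used for illustration, not the parameters of iLOVECLIM itself.

```python
R_VSMOW = 2005.2e-6   # 18O/16O isotopic ratio of the VSMOW standard

def delta18O(ratio):
    """Convert an 18O/16O ratio to delta notation (per mil, vs. VSMOW)."""
    return (ratio / R_VSMOW - 1.0) * 1000.0

def rayleigh(delta0, f, alpha=1.0098):
    """Rayleigh distillation: delta-18O of the remaining vapour when a
    fraction f of the initial reservoir is left, with equilibrium
    liquid-vapour fractionation factor alpha (~20 degC textbook value)."""
    R0 = (delta0 / 1000.0 + 1.0) * R_VSMOW
    R = R0 * f ** (alpha - 1.0)    # R_v = R_v0 * f^(alpha - 1)
    return delta18O(R)

# Vapour starting at -10 per mil gets isotopically lighter as it rains out
for f in (1.0, 0.5, 0.1):
    print(f"f = {f:.1f}  ->  delta18O = {rayleigh(-10.0, f):.2f} per mil")
```

    The progressive depletion with decreasing f is the basic mechanism behind the continental and altitude effects that such coupled-model schemes aim to reproduce.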

  9. Technical documentation and user's guide for City-County Allocation Model (CCAM). Version 1. 0

    Energy Technology Data Exchange (ETDEWEB)

    Clark, L.T. Jr.; Scott, M.J.; Hammer, P.

    1986-05-01

    The City-County Allocation Model (CCAM) was developed as part of the Monitored Retrievable Storage (MRS) Program. The CCAM model was designed to allocate population changes forecasted by the MASTER model to specific local communities within commuting distance of the MRS facility, and then to forecast the resulting changes in demand for key community services such as housing, police protection, and utilities in those communities. The CCAM model uses a flexible on-line database of demand for community services that is based on a combination of local service levels and state and national service standards. The CCAM model can be used to quickly forecast the potential community-service consequences of economic development for local communities anywhere in the country. The purpose of this manual is to assist the user in understanding and operating the CCAM; it explains the data sources for the model and code modifications, as well as the operational procedures.
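
    The allocate-then-translate logic described above can be sketched in a few lines: distribute a forecast population change across communities, then convert each community's share into service demand via per-capita standards. The commuting weights and per-capita rates below are illustrative placeholders, not values from the CCAM database.

```python
def allocate_and_demand(total_change, commute_weights, per_capita):
    """Allocate a forecast population change across communities in
    proportion to commuting weights, then translate each community's
    share into service demand using per-capita service standards.
    All weights and rates here are hypothetical illustration values."""
    w_sum = sum(commute_weights.values())
    alloc = {c: total_change * w / w_sum for c, w in commute_weights.items()}
    demand = {c: {svc: pop * rate for svc, rate in per_capita.items()}
              for c, pop in alloc.items()}
    return alloc, demand

alloc, demand = allocate_and_demand(
    1000,                                        # forecast population change
    {"TownA": 3.0, "TownB": 1.0},                # hypothetical commute weights
    {"housing_units": 0.38, "police_officers": 0.002},  # per-capita standards
)
# TownA receives 750 new residents -> ~285 housing units, ~1.5 officers
print(alloc, demand)
```

    Swapping the per-capita dictionary for local service levels versus state or national standards reproduces the model's flexibility described above.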

  10. The new version of the Institute of Numerical Mathematics Sigma Ocean Model (INMSOM) for simulation of Global Ocean circulation and its variability

    Science.gov (United States)

    Gusev, Anatoly; Fomin, Vladimir; Diansky, Nikolay; Korshenko, Evgeniya

    2017-04-01

    In this paper, we present an improved version of the ocean general circulation sigma-model developed at the Institute of Numerical Mathematics of the Russian Academy of Sciences (INM RAS). The previous version, referred to as INMOM (Institute of Numerical Mathematics Ocean Model), is used as the oceanic component of the IPCC climate system model INMCM (Institute of Numerical Mathematics Climate Model; Volodin et al. 2010, 2013). INMOM was also the only sigma-model used for simulations under the CORE-II scenario (Danabasoglu et al. 2014, 2016; Downes et al. 2015; Farneti et al. 2015). In general, INMOM results are comparable to those of other OGCMs, and the model was used to investigate climatic variations in the North Atlantic (Gusev and Diansky 2014). However, detailed analysis of some CORE-II INMOM results revealed shortcomings that lead to considerable errors in reproducing some ocean characteristics: mass transport in the Antarctic Circumpolar Current (ACC) was overestimated, and there were noticeable errors in reproducing the thermohaline structure of the ocean. After analysing these results, a new version of the OGCM was developed, entitled INMSOM (Institute of Numerical Mathematics Sigma Ocean Model). The new title distinguishes the new model, first, from its older version and, second, from another z-model developed at the INM RAS and referred to as INMIO (Institute of Numerical Mathematics and Institute of Oceanology ocean model) (Ushakov et al. 2016). The numerous modifications to the model include the following. 1) Formulation of the ocean circulation problem in terms of a full free surface, taking water amount variation into account. 2) Use of a tensor form of the lateral viscosity operator that is invariant to rotation. 3) Use of isopycnal diffusion, including Gent-McWilliams mixing. 4) Computation of atmospheric forcing according to the NCAR methodology (Large and Yeager 2009). 5

  11. An integrated assessment modeling framework for uncertainty studies in global and regional climate change: the MIT IGSM-CAM (version 1.0)

    Science.gov (United States)

    Monier, E.; Scott, J. R.; Sokolov, A. P.; Forest, C. E.; Schlosser, C. A.

    2013-12-01

    This paper describes a computationally efficient framework for uncertainty studies in global and regional climate change. In this framework, the Massachusetts Institute of Technology (MIT) Integrated Global System Model (IGSM), an integrated assessment model that couples an Earth system model of intermediate complexity to a human activity model, is linked to the National Center for Atmospheric Research (NCAR) Community Atmosphere Model (CAM). Since the MIT IGSM-CAM framework (version 1.0) incorporates a human activity model, it is possible to analyze uncertainties in emissions resulting from both uncertainties in the underlying socio-economic characteristics of the economic model and in the choice of climate-related policies. Another major feature is the flexibility to vary key climate parameters controlling the climate system response to changes in greenhouse gases and aerosols concentrations, e.g., climate sensitivity, ocean heat uptake rate, and strength of the aerosol forcing. The IGSM-CAM is not only able to realistically simulate the present-day mean climate and the observed trends at the global and continental scale, but it also simulates ENSO variability with realistic time scales, seasonality and patterns of SST anomalies, albeit with stronger magnitudes than observed. The IGSM-CAM shares the same general strengths and limitations as the Coupled Model Intercomparison Project Phase 3 (CMIP3) models in simulating present-day annual mean surface temperature and precipitation. Over land, the IGSM-CAM shows similar biases to the NCAR Community Climate System Model (CCSM) version 3, which shares the same atmospheric model. This study also presents 21st century simulations based on two emissions scenarios (unconstrained scenario and stabilization scenario at 660 ppm CO2-equivalent) similar to, respectively, the Representative Concentration Pathways RCP8.5 and RCP4.5 scenarios, and three sets of climate parameters. Results of the simulations with the chosen

  12. GENII Version 2 Users’ Guide

    Energy Technology Data Exchange (ETDEWEB)

    Napier, Bruce A.

    2004-03-08

    The GENII Version 2 computer code was developed for the Environmental Protection Agency (EPA) at Pacific Northwest National Laboratory (PNNL) to incorporate the internal dosimetry models recommended by the International Commission on Radiological Protection (ICRP) and the radiological risk estimating procedures of Federal Guidance Report 13 into updated versions of existing environmental pathway analysis models. The resulting environmental dosimetry computer codes are compiled in the GENII Environmental Dosimetry System. The GENII system was developed to provide a state-of-the-art, technically peer-reviewed, documented set of programs for calculating radiation dose and risk from radionuclides released to the environment. The codes were designed with the flexibility to accommodate input parameters for a wide variety of generic sites. Operation of a new version of the codes, GENII Version 2, is described in this report. Two versions of the GENII Version 2 code system are available, a full-featured version and a version specifically designed for demonstrating compliance with the dose limits specified in 40 CFR 61.93(a), the National Emission Standards for Hazardous Air Pollutants (NESHAPS) for radionuclides. The only differences lie in the limitation of the capabilities of the user to change specific parameters in the NESHAPS version. This report describes the data entry, accomplished via interactive, menu-driven user interfaces. Default exposure and consumption parameters are provided for both the average (population) and maximum individual; however, these may be modified by the user. Source term information may be entered as radionuclide release quantities for transport scenarios, or as basic radionuclide concentrations in environmental media (air, water, soil). For input of basic or derived concentrations, decay of parent radionuclides and ingrowth of radioactive decay products prior to the start of the exposure scenario may be considered. A single code run can
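
    The "decay of parent radionuclides and ingrowth of radioactive decay products" mentioned above is the classic Bateman problem. A two-member sketch follows; the half-lives are chosen arbitrarily for illustration and are not GENII library values, and GENII's chain handling is far more general.

```python
import math

def parent_daughter(N0, lam_p, lam_d, t):
    """Bateman solution for a parent-daughter pair starting from an
    initially pure parent inventory (N_d(0) = 0): returns the parent
    and daughter atom counts at time t. Requires lam_p != lam_d."""
    Np = N0 * math.exp(-lam_p * t)
    Nd = (N0 * lam_p / (lam_d - lam_p)
          * (math.exp(-lam_p * t) - math.exp(-lam_d * t)))
    return Np, Nd

# Illustrative pair: parent half-life 30 y, daughter half-life 2 y
lam_p = math.log(2) / 30.0
lam_d = math.log(2) / 2.0
Np, Nd = parent_daughter(1.0e6, lam_p, lam_d, t=10.0)
print(f"after 10 y: parent = {Np:.0f} atoms, daughter = {Nd:.0f} atoms")
```

    Evaluating such solutions before the start of the exposure scenario is what lets a code derive ingrown daughter concentrations from a user-supplied parent concentration.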

  13. Investigating the role of chemical and physical processes on organic aerosol modelling with CAMx in the Po Valley during a winter episode

    Science.gov (United States)

    Meroni, A.; Pirovano, G.; Gilardoni, S.; Lonati, G.; Colombi, C.; Gianelle, V.; Paglione, M.; Poluzzi, V.; Riva, G. M.; Toppetti, A.

    2017-12-01

    Traditional aerosol mechanisms underestimate the observed organic aerosol concentration, especially due to the lack of information on secondary organic aerosol (SOA) formation and processing. In this study we evaluate the chemical transport model CAMx over a one-month winter period (February 2013) on a 5 km resolution domain covering the whole Po Valley (Northern Italy). This work aims at investigating the effects of chemical and physical atmospheric processing on modelling results and, in particular, at evaluating the sensitivity of CAMx to the organic aerosol (OA) modelling scheme: we compare the recent 1.5D-VBS algorithm (CAMx-VBS) with the traditional Odum 2-product model (CAMx-SOAP). Additionally, a thorough diagnostic analysis of the reproduction of meteorology, precursors and aerosol components was intended to point out the strengths and weaknesses of the modelling system and guide its improvement. First, we evaluated model performance for criteria PM concentrations. PM10 concentration was underestimated by CAMx-SOAP and even more so by CAMx-VBS, with the latter showing a bias ranging between -4.7 and -7.1 μg m-3. PM2.5 model performance was somewhat better than for PM10, with a mean bias ranging between -0.5 μg m-3 at rural sites and -5.5 μg m-3 at urban and suburban sites. CAMx performance for OA was clearly worse than for the other PM compounds (negative bias ranging between -40% and -75%). The comparison of model results with OA sources identified by PMF analysis shows that the VBS scheme underestimates freshly emitted organic aerosol while SOAP overestimates it. The VBS scheme correctly reproduces the biomass burning (BBOA) contribution to primary OA (POA) concentrations. In contrast, VBS slightly underestimates the contribution from fossil-fuel combustion (HOA), indicating that POA emissions related to road transport are either underestimated or associated with higher volatility classes. The VBS scheme under-predicts the SOA too, but to a lesser
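
    The difference between the 2-product and VBS treatments comes down to how semivolatile organic mass partitions between gas and particle phases. A minimal sketch of equilibrium partitioning over volatility bins follows; the bin values, masses, and seed aerosol are illustrative, and the 1.5D scheme in CAMx additionally tracks an oxidation-state dimension that this 1-D sketch omits.

```python
def vbs_partition(c_total, c_oa_seed, c_star):
    """Solve equilibrium gas-particle partitioning over volatility bins by
    fixed-point iteration. c_total[i]: total organic mass in bin i (ug/m3);
    c_star[i]: saturation concentration of bin i (ug/m3); c_oa_seed:
    non-volatile absorbing mass. Returns total organic aerosol (ug/m3).
    Illustrative 1-D VBS; bin values are placeholders."""
    c_oa = c_oa_seed + 0.5 * sum(c_total)          # initial guess
    for _ in range(200):
        # Partitioning fraction of bin i: X_i = 1 / (1 + C*_i / C_OA)
        parts = [ct / (1.0 + cs / c_oa) for ct, cs in zip(c_total, c_star)]
        c_oa_new = c_oa_seed + sum(parts)
        if abs(c_oa_new - c_oa) < 1e-9:
            break
        c_oa = c_oa_new
    return c_oa

# Four standard bins with saturation concentrations 1, 10, 100, 1000 ug/m3
oa = vbs_partition([1.0, 2.0, 4.0, 8.0], c_oa_seed=0.5,
                   c_star=[1.0, 10.0, 100.0, 1000.0])
print(f"equilibrium OA = {oa:.2f} ug/m3")
```

    Because higher-volatility bins stay mostly in the gas phase at low OA loadings, a VBS run can yield much less particulate OA than a 2-product fit to the same mass, consistent with the sign of the biases discussed above.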

  14. Gridded Surface Subsurface Hydrologic Analysis (GSSHA) User's Manual; Version 1.43 for Watershed Modeling System 6.1

    National Research Council Canada - National Science Library

    Downer, Charles W; Ogden, Fred L

    2006-01-01

    The need to simulate surface water flows in watersheds with diverse runoff production mechanisms has led to the development of the physically-based hydrologic model Gridded Surface Subsurface Hydrologic Analysis (GSSHA...

  15. FMCSA Safety Program Effectiveness Measurement: Carrier Intervention Effectiveness Model (CIEM), Version 1.1 Report for Fiscal Year 2014 Interventions

    Science.gov (United States)

    2018-04-01

    The Federal Motor Carrier Safety Administration (FMCSA), in cooperation with the John A. Volpe National Transportation Systems Center (Volpe), has developed a quantitative model to measure the effectiveness of motor carrier interventions in terms of ...

  16. FMCSA Safety Program Effectiveness Measurement: Carrier Intervention Effectiveness Model, Version 1.1-Report for FY 2014 Interventions - Analysis Brief

    Science.gov (United States)

    2018-04-01

    The Carrier Intervention Effectiveness Model (CIEM) provides the Federal Motor Carrier Safety Administration (FMCSA) with a tool for measuring the safety benefits of carrier interventions conducted under the Compliance, Safety, Accountability (CSA) e...

  17. FMCSA safety program effectiveness measurement : Carrier Intervention Effectiveness Model (CIEM), Version 1.1, report for fiscal year 2013 interventions.

    Science.gov (United States)

    2017-04-01

    The Federal Motor Carrier Safety Administration (FMCSA), in cooperation with the John A. Volpe National Transportation Systems Center (Volpe), has developed a quantitative model to measure the effectiveness of motor carrier interventions in terms of ...

  18. Cognitive and emotional predictors of episodic and dispositional forgiveness

    Directory of Open Access Journals (Sweden)

    Mróz Justyna

    2017-06-01

    Full Text Available The study examined the importance of cognitive (positive orientation, basic hope) and emotional (positive and negative affectivity, emotional control) variables for state and trait forgiveness. One hundred and thirty-nine participants completed Polish versions of six inventories: the HFS (Thompson et al., 2005), the TRIM (McCullough et al., 1998), P