WorldWideScience

Sample records for model input sequence

  1. Comprehensive Information Retrieval and Model Input Sequence (CIRMIS)

    International Nuclear Information System (INIS)

    Friedrichs, D.R.

    1977-04-01

    The Comprehensive Information Retrieval and Model Input Sequence (CIRMIS) was developed to provide the research scientist with man-machine interactive capabilities in a real-time environment, and thereby produce results more quickly and efficiently. The CIRMIS system was originally developed to increase data storage and retrieval capabilities and ground-water model control for the Hanford site. The overall configuration, however, can be used in other areas. The CIRMIS system provides the user with three major functions: retrieval of well-based data, special application for manipulating surface data or background maps, and the manipulation and control of ground-water models. These programs comprise only a portion of the entire CIRMIS system. A complete description of the CIRMIS system is given in this report. 25 figures, 7 tables.

  2. The use of synthetic input sequences in time series modeling

    International Nuclear Information System (INIS)

    Oliveira, Dair Jose de; Letellier, Christophe; Gomes, Murilo E.D.; Aguirre, Luis A.

    2008-01-01

    In many situations, time series models obtained from noise-like data settle to trivial solutions under iteration. This Letter proposes a way of producing a synthetic (dummy) input that is included to prevent the model from settling down to a trivial solution, while maintaining features of the original signal. Simulated benchmark models and a real time series of RR intervals from an ECG are used to illustrate the procedure.
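
    A minimal sketch of the idea (the contractive polynomial model and noise level below are invented for illustration, not taken from the Letter): iterated with zero input, a fitted model of this kind collapses to a fixed point, while a small synthetic input shaped like noise keeps the free-run output alive.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    def model_step(y_prev, u):
        # hypothetical identified model: contractive polynomial AR term plus input term
        return 0.5 * y_prev - 0.1 * y_prev**2 + u

    # Free-run iteration (u = 0): settles to the trivial fixed point y = 0.
    y, free_run = 0.8, []
    for _ in range(200):
        y = model_step(y, 0.0)
        free_run.append(y)

    # Same model driven by a synthetic (dummy) input: stays noise-like.
    y, driven_run = 0.8, []
    for _ in range(200):
        u = 0.05 * rng.standard_normal()  # dummy input; variance chosen ad hoc
        y = model_step(y, u)
        driven_run.append(y)

    print("free-run tail std:   %.4f" % np.std(free_run[100:]))   # ~0 (trivial solution)
    print("driven-run tail std: %.4f" % np.std(driven_run[100:])) # > 0 (noise-like)
    ```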

  3. OFFSCALE: PC input processor for SCALE-4 criticality sequences

    International Nuclear Information System (INIS)

    Bowman, S.M.

    1991-01-01

    OFFSCALE is a personal computer program that serves as a user-friendly interface for the Criticality Safety Analysis Sequences (CSAS) available in SCALE-4. It is designed to assist a SCALE-4 user in preparing an input file for execution of criticality safety problems. Output from OFFSCALE is a card-image input file that may be uploaded to a mainframe computer to execute the CSAS4 control module in SCALE-4. OFFSCALE features a pulldown menu system that accesses sophisticated data entry screens. The program allows the user to quickly set up a CSAS4 input file and perform data checking

  4. Crossover Can Be Constructive When Computing Unique Input Output Sequences

    DEFF Research Database (Denmark)

    Lehre, Per Kristian; Yao, Xin

    2010-01-01

    Unique input output (UIO) sequences have important applications in conformance testing of finite state machines (FSMs). Previous experimental and theoretical research has shown that evolutionary algorithms (EAs) can compute UIOs efficiently on many FSM instance classes, but fail on others. However...

  5. Modeling and generating input processes

    Energy Technology Data Exchange (ETDEWEB)

    Johnson, M.E.

    1987-01-01

    This tutorial paper provides information relevant to the selection and generation of stochastic inputs to simulation studies. The primary area considered is multivariate, but much of the philosophy, at least, is relevant to univariate inputs as well. 14 refs.

  6. Nonparametric combinatorial sequence models.

    Science.gov (United States)

    Wauthier, Fabian L; Jordan, Michael I; Jojic, Nebojsa

    2011-11-01

    This work considers biological sequences that exhibit combinatorial structures in their composition: groups of positions of the aligned sequences are "linked" and covary as one unit across sequences. If multiple such groups exist, complex interactions can emerge between them. Sequences of this kind arise frequently in biology but methodologies for analyzing them are still being developed. This article presents a nonparametric prior on sequences which allows combinatorial structures to emerge and which induces a posterior distribution over factorized sequence representations. We carry out experiments on three biological sequence families which indicate that combinatorial structures are indeed present and that combinatorial sequence models can more succinctly describe them than simpler mixture models. We conclude with an application to MHC binding prediction which highlights the utility of the posterior distribution over sequence representations induced by the prior. By integrating out the posterior, our method compares favorably to leading binding predictors.

  7. Modeling of Prepregs during Automated Draping Sequences

    DEFF Research Database (Denmark)

    Krogh, Christian; Glud, Jens Ammitzbøll; Jakobsen, Johnny

    2017-01-01

    algorithm used to generate target points on the mold which are used as input to a draping sequence planner. The draping sequence planner prescribes the displacement history for each gripper in the drape tool and these displacements are then applied to each gripper in a transient model of the draping...... sequence. The model is based on a transient finite element analysis with the material’s constitutive behavior currently being approximated as linear elastic orthotropic. In-plane tensile and bias-extension tests as well as bending tests are conducted and used as input for the model. The virtual draping...

  8. Modeling inputs to computer models used in risk assessment

    International Nuclear Information System (INIS)

    Iman, R.L.

    1987-01-01

    Computer models for various risk assessment applications are closely scrutinized both from the standpoint of questioning the correctness of the underlying mathematical model with respect to the process it is attempting to model and from the standpoint of verifying that the computer model correctly implements the underlying mathematical model. A process that receives less scrutiny, but is nonetheless of equal importance, concerns the individual and joint modeling of the inputs. This modeling effort clearly has a great impact on the credibility of results. Model characteristics that have a direct bearing on the model input process are reviewed in this paper, and reasons are given for using probability-based modeling of the inputs. The authors also present ways to model distributions for individual inputs and multivariate input structures when dependence and other constraints may be present.

  9. Input modeling with phase-type distributions and Markov models theory and applications

    CERN Document Server

    Buchholz, Peter; Felko, Iryna

    2014-01-01

    Containing a summary of several recent results on Markov-based input modeling in a coherent notation, this book introduces and compares algorithms for parameter fitting and gives an overview of available software tools in the area. Due to progress made in recent years with respect to new algorithms to generate PH distributions and Markovian arrival processes from measured data, the models outlined are useful alternatives to other distributions or stochastic processes used for input modeling. Graduate students and researchers in applied probability, operations research and computer science along with practitioners using simulation or analytical models for performance analysis and capacity planning will find the unified notation and up-to-date results presented useful. Input modeling is the key step in model based system analysis to adequately describe the load of a system using stochastic models. The goal of input modeling is to find a stochastic model to describe a sequence of measurements from a real system...
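
    For reference, the standard construction underlying such models (textbook material, not this book's specific notation): a continuous phase-type (PH) distribution is the time to absorption of a Markov chain with initial distribution α over m transient states and sub-generator matrix T,

    ```latex
    F(t) = 1 - \alpha\, e^{T t}\, \mathbf{1}, \qquad
    f(t) = \alpha\, e^{T t}\, t^{0}, \qquad
    t^{0} = -T \mathbf{1},
    ```

    where 𝟙 is a column vector of ones; the fitting algorithms surveyed in the book estimate (α, T) from measured data.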

  10. Modelling of Multi Input Transfer Function for Rainfall Forecasting in Batu City

    OpenAIRE

    Priska Arindya Purnama

    2017-01-01

    The aim of this research is to model and forecast the rainfall in Batu City using a multi input transfer function model based on air temperature, humidity, wind speed and cloud. A transfer function model is a multivariate time series model which consists of an output series (Yt) expected to be affected by an input series (Xt) and other inputs in a group called a noise series (Nt). The multi input transfer function model obtained is (b1,s1,r1) (b2,s2,r2) (b3,s3,r3) (b4,s4,r4)(pn,qn) = (0,0,0)...

  11. Remote sensing inputs to water demand modeling

    Science.gov (United States)

    Estes, J. E.; Jensen, J. R.; Tinney, L. R.; Rector, M.

    1975-01-01

    In an attempt to determine the ability of remote sensing techniques to economically generate data required by water demand models, the Geography Remote Sensing Unit, in conjunction with the Kern County Water Agency of California, developed an analysis model. As a result it was determined that agricultural cropland inventories utilizing both high altitude photography and LANDSAT imagery can be conducted cost effectively. In addition, by using average irrigation application rates in conjunction with cropland data, estimates of agricultural water demand can be generated. However, more accurate estimates are possible if crop type, acreage, and crop specific application rates are employed. An analysis of the effect of saline-alkali soils on water demand in the study area is also examined. Finally, reference is made to the detection and delineation of water tables that are perched near the surface by semi-permeable clay layers. Soil salinity prediction, automated crop identification on a by-field basis, and a potential input to the determination of zones of equal benefit taxation are briefly touched upon.
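
    The crop-specific demand estimate described here amounts to a simple aggregation (a generic form inferred from the abstract, not quoted from the paper):

    ```latex
    W = \sum_{c} A_c \, r_c ,
    ```

    where A_c is the acreage of crop c and r_c its crop-specific irrigation application rate; using a single average application rate collapses the sum to total irrigated acreage times that rate.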

  12. Phylogenetic mixtures and linear invariants for equal input models.

    Science.gov (United States)

    Casanellas, Marta; Steel, Mike

    2017-04-01

    The reconstruction of phylogenetic trees from molecular sequence data relies on modelling site substitutions by a Markov process, or a mixture of such processes. In general, allowing mixed processes can result in different tree topologies becoming indistinguishable from the data, even for infinitely long sequences. However, when the underlying Markov process supports linear phylogenetic invariants, then provided these are sufficiently informative, the identifiability of the tree topology can be restored. In this paper, we investigate a class of processes that support linear invariants once the stationary distribution is fixed, the 'equal input model'. This model generalizes the 'Felsenstein 1981' model (and thereby the Jukes-Cantor model) from four states to an arbitrary number of states (finite or infinite), and it can also be described by a 'random cluster' process. We describe the structure and dimension of the vector spaces of phylogenetic mixtures and of linear invariants for any fixed phylogenetic tree (and for all trees-the so called 'model invariants'), on any number n of leaves. We also provide a precise description of the space of mixtures and linear invariants for the special case of [Formula: see text] leaves. By combining techniques from discrete random processes and (multi-) linear algebra, our results build on a classic result that was first established by James Lake (Mol Biol Evol 4:167-191, 1987).
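
    Concretely, with stationary distribution π on the state space, the equal input model assigns substitution rates that depend only on the target state (standard formulation; notation mine):

    ```latex
    Q_{ij} = \pi_j \quad (i \neq j), \qquad Q_{ii} = \pi_i - 1,
    ```

    so with four states this is the Felsenstein 1981 model, and with uniform π = (1/4, 1/4, 1/4, 1/4) it reduces to the Jukes-Cantor model.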

  13. Robust input design for nonlinear dynamic modeling of AUV.

    Science.gov (United States)

    Nouri, Nowrouz Mohammad; Valadi, Mehrdad

    2017-09-01

    Input design has a dominant role in developing the dynamic model of autonomous underwater vehicles (AUVs) through system identification. Optimal input design is the process of generating informative inputs that can be used to generate a good quality dynamic model of AUVs. In a problem with optimal input design, the desired input signal depends on the unknown system which is intended to be identified. In this paper, an input design approach which is robust to uncertainties in model parameters is used. The Bayesian robust design strategy is applied to design input signals for dynamic modeling of AUVs. The employed approach can design multiple inputs and apply constraints on an AUV system's inputs and outputs. Particle swarm optimization (PSO) is employed to solve the constrained robust optimization problem. The presented algorithm is used for designing the input signals for an AUV, and the estimate obtained by robust input design is compared with that of the optimal input design. According to the results, the proposed input design satisfies both robustness of constraints and optimality. Copyright © 2017 ISA. Published by Elsevier Ltd. All rights reserved.
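
    A minimal sketch of the strategy (the cost function, prior, and parameter ranges are invented stand-ins, not the paper's AUV identification criterion): average a design cost over prior samples of the unknown parameter, then minimize that average with a plain PSO under box constraints on the input-signal parameters.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)
    theta_prior = rng.normal(1.0, 0.2, size=64)  # prior samples of the unknown parameter

    def cost(x, theta):
        # hypothetical design criterion: x[0] ~ amplitude, x[1] ~ frequency
        return (x[0] * np.sin(x[1] * theta) - theta) ** 2

    def robust_cost(x):
        # Bayesian robust criterion: expectation of the cost over the prior
        return np.mean([cost(x, th) for th in theta_prior])

    # Plain PSO over the box constraint 0 <= x <= 3 on the input parameters.
    n_particles, n_dims, iters, lo, hi = 30, 2, 100, 0.0, 3.0
    pos = rng.uniform(lo, hi, (n_particles, n_dims))
    vel = np.zeros_like(pos)
    pbest, pbest_val = pos.copy(), np.array([robust_cost(p) for p in pos])
    gbest = pbest[pbest_val.argmin()].copy()

    for _ in range(iters):
        r1, r2 = rng.random((2, n_particles, n_dims))
        vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
        pos = np.clip(pos + vel, lo, hi)  # enforce the input constraints
        vals = np.array([robust_cost(p) for p in pos])
        improved = vals < pbest_val
        pbest[improved], pbest_val[improved] = pos[improved], vals[improved]
        gbest = pbest[pbest_val.argmin()].copy()

    print("robust input parameters:", gbest, "expected cost:", robust_cost(gbest))
    ```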

  14. Hydrogen Generation Rate Model Calculation Input Data

    International Nuclear Information System (INIS)

    KUFAHL, M.A.

    2000-01-01

    This report documents the procedures and techniques utilized in the collection and analysis of analyte input data values in support of the flammable gas hazard safety analyses. This document represents the analyses of data current at the time of its writing and does not account for data available since then

  15. A probabilistic graphical model based stochastic input model construction

    International Nuclear Information System (INIS)

    Wan, Jiang; Zabaras, Nicholas

    2014-01-01

    Model reduction techniques have been widely used in modeling of high-dimensional stochastic input in uncertainty quantification tasks. However, the probabilistic modeling of random variables projected into reduced-order spaces presents a number of computational challenges. Due to the curse of dimensionality, the underlying dependence relationships between these random variables are difficult to capture. In this work, a probabilistic graphical model based approach is employed to learn the dependence by running a number of conditional independence tests using observation data. Thus a probabilistic model of the joint PDF is obtained and the PDF is factorized into a set of conditional distributions based on the dependence structure of the variables. The estimation of the joint PDF from data is then transformed to estimating conditional distributions under reduced dimensions. To improve the computational efficiency, a polynomial chaos expansion is further applied to represent the random field in terms of a set of standard random variables. This technique is combined with both linear and nonlinear model reduction methods. Numerical examples are presented to demonstrate the accuracy and efficiency of the probabilistic graphical model based stochastic input models.

    Highlights:
    • Data-driven stochastic input models without the assumption of independence of the reduced random variables.
    • The problem is transformed to a Bayesian network structure learning problem.
    • Examples are given in flows in random media.
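
    A toy sketch of the dependence-learning step (illustrative only: synthetic data, a crude threshold instead of a proper Fisher-z test, and no PCE or model-reduction stage): regress out the conditioning set and test the partial correlation of the residuals to decide which edges to keep in the dependence graph.

    ```python
    import numpy as np
    from itertools import combinations

    rng = np.random.default_rng(2)
    n = 500
    z1 = rng.standard_normal(n)
    z2 = 0.8 * z1 + 0.2 * rng.standard_normal(n)  # dependent on z1
    z3 = rng.standard_normal(n)                    # independent of both
    data = np.column_stack([z1, z2, z3])           # stand-in for reduced variables

    def partial_corr(data, i, j, rest):
        # correlate the residuals after regressing out the conditioning set
        if rest:
            X = np.column_stack([np.ones(len(data)), data[:, rest]])
            ri = data[:, i] - X @ np.linalg.lstsq(X, data[:, i], rcond=None)[0]
            rj = data[:, j] - X @ np.linalg.lstsq(X, data[:, j], rcond=None)[0]
        else:
            ri, rj = data[:, i], data[:, j]
        return np.corrcoef(ri, rj)[0, 1]

    edges = []
    d = data.shape[1]
    for i, j in combinations(range(d), 2):
        rest = [k for k in range(d) if k not in (i, j)]
        rho = partial_corr(data, i, j, rest)
        if abs(rho) > 2 / np.sqrt(n):  # crude independence threshold
            edges.append((i, j, round(rho, 3)))

    print("dependence edges (i, j, partial corr):", edges)  # expect only (0, 1)
    ```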

  16. Modeling Recognition Memory Using the Similarity Structure of Natural Input

    Science.gov (United States)

    Lacroix, Joyca P. W.; Murre, Jaap M. J.; Postma, Eric O.; van den Herik, H. Jaap

    2006-01-01

    The natural input memory (NIM) model is a new model for recognition memory that operates on natural visual input. A biologically informed perceptual preprocessing method takes local samples (eye fixations) from a natural image and translates these into a feature-vector representation. During recognition, the model compares incoming preprocessed…

  17. Modeling of prepregs during automated draping sequences

    Science.gov (United States)

    Krogh, Christian; Glud, Jens A.; Jakobsen, Johnny

    2017-10-01

    The behavior of woven prepreg fabric during automated draping sequences is investigated. A drape tool under development with an arrangement of grippers facilitates the placement of a woven prepreg fabric in a mold. It is essential that the draped configuration is free from wrinkles and other defects. The present study aims at setting up a virtual draping framework capable of modeling the draping process from the initial flat fabric to the final double-curved shape, and at assisting the development of an automated drape tool. The virtual draping framework consists of a kinematic mapping algorithm used to generate target points on the mold which are used as input to a draping sequence planner. The draping sequence planner prescribes the displacement history for each gripper in the drape tool and these displacements are then applied to each gripper in a transient model of the draping sequence. The model is based on a transient finite element analysis with the material's constitutive behavior currently being approximated as linear elastic orthotropic. In-plane tensile and bias-extension tests as well as bending tests are conducted and used as input for the model. The virtual draping framework shows good potential for obtaining a better understanding of the drape process and guiding the development of the drape tool. However, results obtained from using the framework on a simple test case indicate that the generation of draping sequences is non-trivial.

  18. On Optimal Input Design and Model Selection for Communication Channels

    Energy Technology Data Exchange (ETDEWEB)

    Li, Yanyan [ORNL; Djouadi, Seddik M [ORNL; Olama, Mohammed M [ORNL

    2013-01-01

    In this paper, the optimal model (structure) selection and input design which minimize the worst case identification error for communication systems are provided. The problem is formulated using metric complexity theory in a Hilbert space setting. It is pointed out that model selection and input design can be handled independently. The Kolmogorov n-width is used to characterize the representation error introduced by model selection, while the Gel'fand and time n-widths are used to represent the inherent error introduced by input design. After the model is selected, an optimal input which minimizes the worst case identification error is shown to exist. In particular, it is proven that the optimal model for reducing the representation error is a Finite Impulse Response (FIR) model, and the optimal input is an impulse at the start of the observation interval. FIR models are widely popular in communication systems, for example in Orthogonal Frequency Division Multiplexing (OFDM) systems.
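
    For orientation, the Kolmogorov n-width referenced here measures how well a set S in a normed space X can be approximated by the best n-dimensional subspace (standard definition):

    ```latex
    d_n(S; X) \;=\; \inf_{\dim X_n \le n} \; \sup_{f \in S} \; \inf_{g \in X_n} \, \lVert f - g \rVert_X .
    ```

    Model selection amounts to choosing the n-dimensional class (here, FIR models) that comes closest to realizing this infimum, while the Gel'fand and time n-widths bound the error attributable to the choice of input.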

  19. Analytic uncertainty and sensitivity analysis of models with input correlations

    Science.gov (United States)

    Zhu, Yueying; Wang, Qiuping A.; Li, Wei; Cai, Xu

    2018-03-01

    Probabilistic uncertainty analysis is a common means of evaluating mathematical models. In mathematical modeling, the uncertainty in input variables is specified through distribution laws. Its contribution to the uncertainty in the model response is usually analyzed by assuming that input variables are independent of each other. However, correlated parameters often occur in practical applications. In the present paper, an analytic method is built for the uncertainty and sensitivity analysis of models in the presence of input correlations. With the method, it is straightforward to identify the importance of the independence and correlations of input variables in determining the model response. This allows one to decide whether or not the input correlations should be considered in practice. Numerical examples demonstrate the effectiveness and validity of our analytic method in the analysis of general models. A practical application of the method to the uncertainty and sensitivity analysis of a deterministic HIV model is also presented.
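
    As a one-line illustration of why the correlations matter (a standard first-order propagation formula, not the paper's full analytic development): for a response Y = f(X_1, ..., X_n),

    ```latex
    \operatorname{Var}(Y) \;\approx\; \sum_{i=1}^{n} \sum_{j=1}^{n}
    \frac{\partial f}{\partial x_i}\, \frac{\partial f}{\partial x_j}\,
    \operatorname{Cov}(X_i, X_j),
    ```

    so the off-diagonal covariance terms shift the response variance away from what an independence assumption would predict.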

  20. A Method to Select Software Test Cases in Consideration of Past Input Sequence

    International Nuclear Information System (INIS)

    Kim, Hee Eun; Kim, Bo Gyung; Kang, Hyun Gook

    2015-01-01

    In the Korea Nuclear I and C Systems (KNICS) project, the software for the fully-digitalized reactor protection system (RPS) was developed under a strict procedure. Even though the behavior of the software is deterministic, the randomness of the input sequence produces probabilistic behavior of the software. A software failure occurs when some inputs to the software occur and interact with the internal state of the digital system to trigger a fault that was introduced into the software during the software lifecycle. In this paper, a method to select a test set for software failure probability estimation is suggested. This test set reflects the past input sequence of the software and covers all possible cases. To obtain the profile of paired state variables, the relationships of the variables need to be considered, and the effect of input from the human operator also has to be considered. As an example, the test set of the PZR-PR-Lo-Trip logic was examined. This method provides a framework for selecting test cases of safety-critical software.

  21. Modeling recognition memory using the similarity structure of natural input

    NARCIS (Netherlands)

    Lacroix, J.P.W.; Murre, J.M.J.; Postma, E.O.; van den Herik, H.J.

    2006-01-01

    The natural input memory (NIM) model is a new model for recognition memory that operates on natural visual input. A biologically informed perceptual preprocessing method takes local samples (eye fixations) from a natural image and translates these into a feature-vector representation. During

  22. Variance-based sensitivity indices for models with dependent inputs

    International Nuclear Information System (INIS)

    Mara, Thierry A.; Tarantola, Stefano

    2012-01-01

    Computational models are intensively used in engineering for risk analysis and prediction of future outcomes. Uncertainty and sensitivity analyses are of great help for these purposes. Although several methods exist to perform variance-based sensitivity analysis of model output with independent inputs, only a few have been proposed in the literature for the case of dependent inputs. This is explained by the fact that the theoretical framework for the independent case is set and a univocal set of variance-based sensitivity indices is defined. In the present work, we propose a set of variance-based sensitivity indices to perform sensitivity analysis of models with dependent inputs. These measures allow us to distinguish between the mutual dependent contribution and the independent contribution of an input to the model response variance. Their definition relies on a specific orthogonalisation of the inputs and ANOVA representations of the model output. In the applications, we show the interest of the new sensitivity indices in a model simplification setting.

    Highlights:
    ► Uncertainty and sensitivity analyses are of great help in engineering.
    ► Several methods exist to perform variance-based sensitivity analysis of model output with independent inputs.
    ► We define a set of variance-based sensitivity indices for models with dependent inputs.
    ► Inputs' mutual contributions are distinguished from their independent contributions.
    ► Analytical and computational tests are performed and discussed.
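
    For context, the classical first-order variance-based index for independent inputs is (standard definition; the paper's contribution is the generalization to dependent inputs):

    ```latex
    S_i \;=\; \frac{\operatorname{Var}\!\big(\operatorname{E}[\,Y \mid X_i\,]\big)}{\operatorname{Var}(Y)},
    ```

    which the proposed indices split, for each input, into a contribution shared with the inputs it is correlated with and an independent contribution obtained after orthogonalisation.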

  23. Stein's neuronal model with pooled renewal input

    Czech Academy of Sciences Publication Activity Database

    Rajdl, K.; Lánský, Petr

    2015-01-01

    Vol. 109, No. 3 (2015), pp. 389-399. ISSN 0340-1200. Institutional support: RVO:67985823. Keywords: Stein’s model * Poisson process * pooled renewal processes * first-passage time. Subject RIV: BA - General Mathematics. Impact factor: 1.611, year: 2015

  24. Global sensitivity analysis of computer models with functional inputs

    International Nuclear Information System (INIS)

    Iooss, Bertrand; Ribatet, Mathieu

    2009-01-01

    Global sensitivity analysis is used to quantify the influence of uncertain model inputs on the response variability of a numerical model. The common quantitative methods are appropriate for computer codes having scalar model inputs. This paper aims at illustrating different variance-based sensitivity analysis techniques, based on the so-called Sobol's indices, when some model inputs are functional, such as stochastic processes or random spatial fields. In this work, we focus on computer codes with large CPU times which need a preliminary metamodeling step before performing the sensitivity analysis. We propose the use of the joint modeling approach, i.e., modeling simultaneously the mean and the dispersion of the code outputs using two interlinked generalized linear models (GLMs) or generalized additive models (GAMs). The 'mean model' allows one to estimate the sensitivity indices of each scalar model input, while the 'dispersion model' allows one to derive the total sensitivity index of the functional model inputs. The proposed approach is compared to some classical sensitivity analysis methodologies on an analytical function. Lastly, the new methodology is applied to an industrial computer code that simulates nuclear fuel irradiation.

  25. Runtime analysis of the (1+1) EA on computing unique input output sequences

    DEFF Research Database (Denmark)

    Lehre, Per Kristian; Yao, Xin

    2010-01-01

    Computing unique input output (UIO) sequences is a fundamental and hard problem in conformance testing of finite state machines (FSM). Previous experimental research has shown that evolutionary algorithms (EAs) can be applied successfully to find UIOs for some FSMs. However, before EAs can...... in the theoretical analysis, and the variability of the runtime. The numerical results fit well with the theoretical results, even for small problem instance sizes. Together, these results provide a first theoretical characterisation of the potential and limitations of the (1 + 1) EA on the problem of computing UIOs....
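
    For readers unfamiliar with the algorithm analyzed, a minimal (1+1) EA sketch follows (OneMax is a placeholder fitness; in the paper, the fitness of a candidate input sequence reflects how well it distinguishes the states of the FSM):

    ```python
    import random

    def one_max(x):
        # placeholder fitness: number of ones in the bitstring
        return sum(x)

    def one_plus_one_ea(n=50, budget=10_000, fitness=one_max):
        x = [random.randint(0, 1) for _ in range(n)]
        fx = fitness(x)
        for _ in range(budget):
            # mutation: flip each bit independently with probability 1/n
            y = [b ^ (random.random() < 1 / n) for b in x]
            fy = fitness(y)
            if fy >= fx:  # elitist acceptance: keep offspring if not worse
                x, fx = y, fy
        return x, fx

    random.seed(0)
    best, value = one_plus_one_ea()
    print("best fitness found:", value)
    ```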

  26. Calibration of controlling input models for pavement management system.

    Science.gov (United States)

    2013-07-01

    The Oklahoma Department of Transportation (ODOT) is currently using the Deighton Total Infrastructure Management System (dTIMS) software for pavement management. This system is based on several input models which are computational backbones to dev...

  27. Agricultural and Environmental Input Parameters for the Biosphere Model

    International Nuclear Information System (INIS)

    Kaylie Rasmuson; Kurt Rautenstrauch

    2003-01-01

    This analysis is one of nine technical reports that support the Environmental Radiation Model for Yucca Mountain Nevada (ERMYN) biosphere model. It documents input parameters for the biosphere model, and supports the use of the model to develop Biosphere Dose Conversion Factors (BDCF). The biosphere model is one of a series of process models supporting the Total System Performance Assessment (TSPA) for the repository at Yucca Mountain. The ERMYN provides the TSPA with the capability to perform dose assessments. A graphical representation of the documentation hierarchy for the ERMYN is presented in Figure 1-1. This figure shows the interrelationships between the major activities and their products (the analysis and model reports) that were planned in the biosphere Technical Work Plan (TWP, BSC 2003a). It should be noted that some documents identified in Figure 1-1 may be under development and therefore not available at the time this document is issued. The ''Biosphere Model Report'' (BSC 2003b) describes the ERMYN and its input parameters. This analysis report, ANL-MGR-MD-000006, ''Agricultural and Environmental Input Parameters for the Biosphere Model'', is one of the five reports that develop input parameters for the biosphere model. This report defines and justifies values for twelve parameters required in the biosphere model. These parameters are related to use of contaminated groundwater to grow crops. The parameter values recommended in this report are used in the soil, plant, and carbon-14 submodels of the ERMYN

  28. Agricultural and Environmental Input Parameters for the Biosphere Model

    International Nuclear Information System (INIS)

    K. Rasmuson; K. Rautenstrauch

    2004-01-01

    This analysis is one of 10 technical reports that support the Environmental Radiation Model for Yucca Mountain Nevada (ERMYN) (i.e., the biosphere model). It documents development of agricultural and environmental input parameters for the biosphere model, and supports the use of the model to develop biosphere dose conversion factors (BDCFs). The biosphere model is one of a series of process models supporting the total system performance assessment (TSPA) for the repository at Yucca Mountain. The ERMYN provides the TSPA with the capability to perform dose assessments. A graphical representation of the documentation hierarchy for the ERMYN is presented in Figure 1-1. This figure shows the interrelationships between the major activities and their products (the analysis and model reports) that were planned in ''Technical Work Plan for Biosphere Modeling and Expert Support'' (BSC 2004 [DIRS 169573]). The ''Biosphere Model Report'' (BSC 2004 [DIRS 169460]) describes the ERMYN and its input parameters

  29. Modelling of Multi Input Transfer Function for Rainfall Forecasting in Batu City

    Directory of Open Access Journals (Sweden)

    Priska Arindya Purnama

    2017-11-01

    The aim of this research is to model and forecast the rainfall in Batu City using a multi input transfer function model based on air temperature, humidity, wind speed and cloud. A transfer function model is a multivariate time series model which consists of an output series (Yt) expected to be affected by an input series (Xt) and other inputs in a group called a noise series (Nt). The multi input transfer function model obtained is (b1,s1,r1) (b2,s2,r2) (b3,s3,r3) (b4,s4,r4)(pn,qn) = (0,0,0) (23,0,0) (1,2,0) (0,0,0) ([5,8],2) and shows that air temperature on day t affects rainfall on day t, rainfall on day t is influenced by air humidity in the previous 23 days, rainfall on day t is affected by wind speed in the previous day, and rainfall on day t is affected by clouds on day t. The results of rainfall forecasting in Batu City with the multi input transfer function model can be said to be accurate, because it produces relatively small RMSE values. The RMSE for the training data is 7.7921, while that for the testing data is 4.2184. The multi input transfer function model is suitable for rainfall in Batu City.
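
    In standard Box-Jenkins notation, a model with these orders corresponds to (form inferred from the stated (b, s, r) orders; symbols mine):

    ```latex
    Y_t \;=\; \sum_{j=1}^{4} \frac{\omega_j(B)\, B^{b_j}}{\delta_j(B)}\, X_{j,t}
    \;+\; \frac{\theta(B)}{\phi(B)}\, a_t ,
    ```

    where B is the backshift operator, b_j is the delay of input j, ω_j(B) and δ_j(B) are polynomials of orders s_j and r_j, and the noise series is modeled as an ARMA(p_n, q_n) process; for example, (b2,s2,r2) = (23,0,0) encodes the 23-day delay on the humidity input.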

  30. Foundations of Sequence-to-Sequence Modeling for Time Series

    OpenAIRE

    Kuznetsov, Vitaly; Mariet, Zelda

    2018-01-01

    The availability of large amounts of time series data, paired with the performance of deep-learning algorithms on a broad class of problems, has recently led to significant interest in the use of sequence-to-sequence models for time series forecasting. We provide the first theoretical analysis of this time series forecasting framework. We include a comparison of sequence-to-sequence modeling to classical time series models, and as such our theory can serve as a quantitative guide for practiti...

  31. Quality assurance of weather data for agricultural system model input

    Science.gov (United States)

    It is well known that crop production and hydrologic variation on watersheds is weather related. Rarely, however, is meteorological data quality checks reported for agricultural systems model research. We present quality assurance procedures for agricultural system model weather data input. Problems...

  32. Inhalation Exposure Input Parameters for the Biosphere Model

    Energy Technology Data Exchange (ETDEWEB)

    K. Rautenstrauch

    2004-09-10

    This analysis is one of 10 reports that support the Environmental Radiation Model for Yucca Mountain, Nevada (ERMYN) biosphere model. The ''Biosphere Model Report'' (BSC 2004 [DIRS 169460]) describes in detail the conceptual model as well as the mathematical model and its input parameters. This report documents development of input parameters for the biosphere model that are related to atmospheric mass loading and supports the use of the model to develop biosphere dose conversion factors (BDCFs). The biosphere model is one of a series of process models supporting the total system performance assessment (TSPA) for a Yucca Mountain repository. Inhalation Exposure Input Parameters for the Biosphere Model is one of five reports that develop input parameters for the biosphere model. A graphical representation of the documentation hierarchy for the ERMYN is presented in Figure 1-1. This figure shows the interrelationships among the products (i.e., analysis and model reports) developed for biosphere modeling, and the plan for development of the biosphere abstraction products for TSPA, as identified in the Technical Work Plan for Biosphere Modeling and Expert Support (BSC 2004 [DIRS 169573]). This analysis report defines and justifies values of mass loading for the biosphere model. Mass loading is the total mass concentration of resuspended particles (e.g., dust, ash) in a volume of air. Mass loading values are used in the air submodel of ERMYN to calculate concentrations of radionuclides in air inhaled by a receptor and concentrations in air surrounding crops. Concentrations in air to which the receptor is exposed are then used in the inhalation submodel to calculate the dose contribution to the receptor from inhalation of contaminated airborne particles. Concentrations in air surrounding plants are used in the plant submodel to calculate the concentrations of radionuclides in foodstuffs contributed from uptake by foliar interception.
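
    Schematically, the role of mass loading in this pathway can be written as (a generic illustration, not the ERMYN equations themselves):

    ```latex
    C_{\text{air}} = S \cdot C_{\text{soil}}, \qquad
    D_{\text{inh}} = C_{\text{air}} \cdot BR \cdot t_{\text{exp}} \cdot DCF_{\text{inh}},
    ```

    where S is the mass loading (kg of resuspended particles per m³ of air), C_soil is the radionuclide concentration in soil (Bq/kg), BR is the receptor's breathing rate, t_exp the exposure time, and DCF_inh the inhalation dose coefficient.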

  33. Inhalation Exposure Input Parameters for the Biosphere Model

    International Nuclear Information System (INIS)

    K. Rautenstrauch

    2004-01-01

    This analysis is one of 10 reports that support the Environmental Radiation Model for Yucca Mountain, Nevada (ERMYN) biosphere model. The ''Biosphere Model Report'' (BSC 2004 [DIRS 169460]) describes in detail the conceptual model as well as the mathematical model and its input parameters. This report documents development of input parameters for the biosphere model that are related to atmospheric mass loading and supports the use of the model to develop biosphere dose conversion factors (BDCFs). The biosphere model is one of a series of process models supporting the total system performance assessment (TSPA) for a Yucca Mountain repository. Inhalation Exposure Input Parameters for the Biosphere Model is one of five reports that develop input parameters for the biosphere model. A graphical representation of the documentation hierarchy for the ERMYN is presented in Figure 1-1. This figure shows the interrelationships among the products (i.e., analysis and model reports) developed for biosphere modeling, and the plan for development of the biosphere abstraction products for TSPA, as identified in the Technical Work Plan for Biosphere Modeling and Expert Support (BSC 2004 [DIRS 169573]). This analysis report defines and justifies values of mass loading for the biosphere model. Mass loading is the total mass concentration of resuspended particles (e.g., dust, ash) in a volume of air. Mass loading values are used in the air submodel of ERMYN to calculate concentrations of radionuclides in air inhaled by a receptor and concentrations in air surrounding crops. Concentrations in air to which the receptor is exposed are then used in the inhalation submodel to calculate the dose contribution to the receptor from inhalation of contaminated airborne particles. Concentrations in air surrounding plants are used in the plant submodel to calculate the concentrations of radionuclides in foodstuffs contributed from uptake by foliar interception

  34. Development of an Input Model to MELCOR 1.8.5 for the Ringhals 3 PWR

    International Nuclear Information System (INIS)

    Nilsson, Lars

    2004-12-01

    An input file to the severe accident code MELCOR 1.8.5 has been developed for the Swedish pressurized water reactor Ringhals 3. The aim was to produce a file that can be used for calculations of various postulated severe accident scenarios, although the first application is specifically on cases involving large hydrogen production. The input file is rather detailed, with individual modelling of all three cooling loops. The report describes the basis for the Ringhals 3 model and the input preparation step by step, and is illustrated by nodalization schemes of the different plant systems. The present version of the report is restricted to the fundamental MELCOR input preparation, and therefore most of the figures of Ringhals 3 measurements and operating parameters are excluded here. These are given in another, complete version of the report, for limited distribution, which includes tables for pertinent data of all components. That version contains appendices with a complete listing of the input files as well as tables of data compiled from a RELAP5 file that was a major basis for the MELCOR input for the cooling loops. The input was tested in steady-state calculations in order to simulate the initial conditions at current nominal operating conditions in Ringhals 3 for 2775 MW thermal power. The results of the steady-state calculations are presented in the report. Calculations of certain accident sequences will then be carried out with the MELCOR model for comparison with results from earlier MAAP4 calculations. That work will be reported separately.

  35. Evaluating nuclear physics inputs in core-collapse supernova models

    Science.gov (United States)

    Lentz, E.; Hix, W. R.; Baird, M. L.; Messer, O. E. B.; Mezzacappa, A.

    Core-collapse supernova models depend on the details of the nuclear and weak interaction physics inputs just as they depend on the details of the macroscopic physics (transport, hydrodynamics, etc.), numerical methods, and progenitors. We present preliminary results from our ongoing comparison studies of nuclear and weak interaction physics inputs to core collapse supernova models using the spherically-symmetric, general relativistic, neutrino radiation hydrodynamics code Agile-Boltztran. We focus on comparisons of the effects of the nuclear EoS and the effects of improving the opacities, particularly neutrino-nucleon interactions.

  36. Investigation of RADTRAN Stop Model input parameters for truck stops

    International Nuclear Information System (INIS)

    Griego, N.R.; Smith, J.D.; Neuhauser, K.S.

    1996-01-01

    RADTRAN is a computer code for estimating the risks and consequences of transport of radioactive materials (RAM). RADTRAN was developed and is maintained by Sandia National Laboratories for the US Department of Energy (DOE). For incident-free transportation, the dose to persons exposed while the shipment is stopped is frequently a major percentage of the overall dose. This dose is referred to as Stop Dose and is calculated by the Stop Model. Because Stop Dose is a significant portion of the overall dose associated with RAM transport, the values used as input for the Stop Model are important. Therefore, an investigation of typical values of RADTRAN Stop Model parameters for truck stops was performed. The resulting data from these investigations were analyzed to provide mean values, standard deviations, and histograms. Hence, the mean values can be used when an analyst does not have a basis for selecting other input values for the Stop Model. In addition, the histograms and their characteristics can be used to guide statistical sampling techniques to measure the sensitivity of the RADTRAN calculated Stop Dose to uncertainties in the Stop Model input parameters. This paper discusses the details and presents the results of the investigation of Stop Model input parameters at truck stops.

  37. Environmental Transport Input Parameters for the Biosphere Model

    Energy Technology Data Exchange (ETDEWEB)

    M. Wasiolek

    2004-09-10

    This analysis report is one of the technical reports documenting the Environmental Radiation Model for Yucca Mountain, Nevada (ERMYN), a biosphere model supporting the total system performance assessment for the license application (TSPA-LA) for the geologic repository at Yucca Mountain. A graphical representation of the documentation hierarchy for the ERMYN is presented in Figure 1-1. This figure shows relationships among the reports developed for biosphere modeling and biosphere abstraction products for the TSPA-LA, as identified in the ''Technical Work Plan for Biosphere Modeling and Expert Support'' (BSC 2004 [DIRS 169573]) (TWP). This figure provides an understanding of how this report contributes to biosphere modeling in support of the license application (LA). This report is one of the five reports that develop input parameter values for the biosphere model. The ''Biosphere Model Report'' (BSC 2004 [DIRS 169460]) describes the conceptual model and the mathematical model. The input parameter reports, shown to the right of the Biosphere Model Report in Figure 1-1, contain detailed description of the model input parameters. The output of this report is used as direct input in the ''Nominal Performance Biosphere Dose Conversion Factor Analysis'' and in the ''Disruptive Event Biosphere Dose Conversion Factor Analysis'' that calculate the values of biosphere dose conversion factors (BDCFs) for the groundwater and volcanic ash exposure scenarios, respectively. The purpose of this analysis was to develop biosphere model parameter values related to radionuclide transport and accumulation in the environment. These parameters support calculations of radionuclide concentrations in the environmental media (e.g., soil, crops, animal products, and air) resulting from a given radionuclide concentration at the source of contamination (i.e., either in groundwater or in volcanic ash). The analysis

  38. Environmental Transport Input Parameters for the Biosphere Model

    International Nuclear Information System (INIS)

    M. Wasiolek

    2004-01-01

    This analysis report is one of the technical reports documenting the Environmental Radiation Model for Yucca Mountain, Nevada (ERMYN), a biosphere model supporting the total system performance assessment for the license application (TSPA-LA) for the geologic repository at Yucca Mountain. A graphical representation of the documentation hierarchy for the ERMYN is presented in Figure 1-1. This figure shows relationships among the reports developed for biosphere modeling and biosphere abstraction products for the TSPA-LA, as identified in the ''Technical Work Plan for Biosphere Modeling and Expert Support'' (BSC 2004 [DIRS 169573]) (TWP). This figure provides an understanding of how this report contributes to biosphere modeling in support of the license application (LA). This report is one of the five reports that develop input parameter values for the biosphere model. The ''Biosphere Model Report'' (BSC 2004 [DIRS 169460]) describes the conceptual model and the mathematical model. The input parameter reports, shown to the right of the Biosphere Model Report in Figure 1-1, contain detailed description of the model input parameters. The output of this report is used as direct input in the ''Nominal Performance Biosphere Dose Conversion Factor Analysis'' and in the ''Disruptive Event Biosphere Dose Conversion Factor Analysis'' that calculate the values of biosphere dose conversion factors (BDCFs) for the groundwater and volcanic ash exposure scenarios, respectively. The purpose of this analysis was to develop biosphere model parameter values related to radionuclide transport and accumulation in the environment. These parameters support calculations of radionuclide concentrations in the environmental media (e.g., soil, crops, animal products, and air) resulting from a given radionuclide concentration at the source of contamination (i.e., either in groundwater or in volcanic ash). The analysis was performed in accordance with the TWP (BSC 2004 [DIRS 169573])

  39. Agricultural and Environmental Input Parameters for the Biosphere Model

    Energy Technology Data Exchange (ETDEWEB)

    Kaylie Rasmuson; Kurt Rautenstrauch

    2003-06-20

    This analysis is one of nine technical reports that support the Environmental Radiation Model for Yucca Mountain Nevada (ERMYN) biosphere model. It documents input parameters for the biosphere model, and supports the use of the model to develop Biosphere Dose Conversion Factors (BDCF). The biosphere model is one of a series of process models supporting the Total System Performance Assessment (TSPA) for the repository at Yucca Mountain. The ERMYN provides the TSPA with the capability to perform dose assessments. A graphical representation of the documentation hierarchy for the ERMYN is presented in Figure 1-1. This figure shows the interrelationships between the major activities and their products (the analysis and model reports) that were planned in the biosphere Technical Work Plan (TWP, BSC 2003a). It should be noted that some documents identified in Figure 1-1 may be under development and therefore not available at the time this document is issued. The ''Biosphere Model Report'' (BSC 2003b) describes the ERMYN and its input parameters. This analysis report, ANL-MGR-MD-000006, ''Agricultural and Environmental Input Parameters for the Biosphere Model'', is one of the five reports that develop input parameters for the biosphere model. This report defines and justifies values for twelve parameters required in the biosphere model. These parameters are related to use of contaminated groundwater to grow crops. The parameter values recommended in this report are used in the soil, plant, and carbon-14 submodels of the ERMYN.

  40. GASFLOW computer code (physical models and input data)

    International Nuclear Information System (INIS)

    Muehlbauer, Petr

    2007-11-01

    The GASFLOW computer code was developed jointly by the Los Alamos National Laboratory, USA, and Forschungszentrum Karlsruhe, Germany. The code is primarily intended for calculations of the transport, mixing, and combustion of hydrogen and other gases in nuclear reactor containments and in other facilities. The physical models and the input data are described, and a simple commented calculation is presented.

  41. Framework for Modelling Multiple Input Complex Aggregations for Interactive Installations

    DEFF Research Database (Denmark)

    Padfield, Nicolas; Andreasen, Troels

    2012-01-01

    on fuzzy logic and provides a method for variably balancing interaction and user input with the intention of the artist or director. An experimental design is presented, demonstrating an intuitive interface for parametric modelling of a complex aggregation function. The aggregation function unifies...

  42. Key processes and input parameters for environmental tritium models

    International Nuclear Information System (INIS)

    Bunnenberg, C.; Taschner, M.; Ogram, G.L.

    1994-01-01

    The primary objective of the work reported here is to define key processes and input parameters for mathematical models of environmental tritium behaviour adequate for use in safety analysis and licensing of fusion devices like NET and associated tritium handling facilities. (author). 45 refs., 3 figs

  43. Key processes and input parameters for environmental tritium models

    Energy Technology Data Exchange (ETDEWEB)

    Bunnenberg, C; Taschner, M [Niedersaechsisches Inst. fuer Radiooekologie, Hannover (Germany); Ogram, G L [Ontario Hydro, Toronto, ON (Canada)

    1994-12-31

    The primary objective of the work reported here is to define key processes and input parameters for mathematical models of environmental tritium behaviour adequate for use in safety analysis and licensing of fusion devices like NET and associated tritium handling facilities. (author). 45 refs., 3 figs.

  44. Model reduction of nonlinear systems subject to input disturbances

    KAUST Repository

    Ndoye, Ibrahima

    2017-07-10

    The method of convex optimization is used as a tool for model reduction of a class of nonlinear systems in the presence of disturbances. It is shown that under some conditions the nonlinear disturbed system can be approximated by a reduced order nonlinear system with similar disturbance-output properties to the original plant. The proposed model reduction strategy preserves the nonlinearity and the input disturbance nature of the model. It guarantees a sufficiently small error between the outputs of the original and the reduced-order systems, and also maintains the properties of input-to-state stability. The matrices of the reduced order system are given in terms of a set of linear matrix inequalities (LMIs). The paper concludes with a demonstration of the proposed approach on model reduction of a nonlinear electronic circuit with additive disturbances.

  45. Effects of input uncertainty on cross-scale crop modeling

    Science.gov (United States)

    Waha, Katharina; Huth, Neil; Carberry, Peter

    2014-05-01

    The quality of data on climate, soils and agricultural management in the tropics is in general low, or data is scarce, leading to uncertainty in process-based modeling of cropping systems. Process-based crop models are common tools for simulating crop yields and crop production in climate change impact studies, studies on mitigation and adaptation options, or food security studies. Crop modelers are concerned about input data accuracy, as this, together with an adequate representation of plant physiology processes and the choice of model parameters, is key for a reliable simulation. For example, assuming an error in measurements of air temperature, radiation and precipitation of ± 0.2°C, ± 2 % and ± 3 % respectively, Fodor & Kovacs (2005) estimate that this translates into an uncertainty of 5-7 % in yield and biomass simulations. In our study we seek to answer the following questions: (1) are there important uncertainties in the spatial variability of simulated crop yields on the grid-cell level displayed on maps, (2) are there important uncertainties in the temporal variability of simulated crop yields on the aggregated, national level displayed in time-series, and (3) how does the accuracy of different soil, climate and management information influence the simulated crop yields in two crop models designed for use at different spatial scales? The study will help to determine whether more detailed information improves the simulations and to advise model users on the uncertainty related to input data. We analyse the performance of the point-scale crop model APSIM (Keating et al., 2003) and the global scale crop model LPJmL (Bondeau et al., 2007) with different climate information (monthly and daily) and soil conditions (global soil map and African soil map) under different agricultural management (uniform and variable sowing dates) for the low-input maize-growing areas in Burkina Faso/West Africa. We test the models' response to different levels of input

  46. A PRODUCTIVITY EVALUATION MODEL BASED ON INPUT AND OUTPUT ORIENTATIONS

    Directory of Open Access Journals (Sweden)

    C.O. Anyaeche

    2012-01-01

    Many productivity models evaluate either the input or the output performances using standalone techniques. This sometimes gives divergent views of the same system’s results. The work reported in this article, which simultaneously evaluated productivity from both orientations, was applied to real-life data. The results showed losses in productivity (–2%) and price recovery (–8%) for the outputs; the inputs showed a productivity gain (145%) but a price recovery loss (–63%). These imply losses in product performance but a productivity gain in inputs. The loss in the price recovery of inputs indicates a problem in the pricing policy. This model is applicable in product diversification.
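
    The two orientations combine in the standard productivity-accounting identity (a common formulation; the article's exact model may differ):

    ```latex
    \text{profitability change} \;=\; \text{productivity change} \times \text{price recovery change},
    ```

    evaluated separately from output quantities and prices and from input quantities and prices, which is why the two orientations can move in opposite directions, as reported above.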

  47. Inhalation Exposure Input Parameters for the Biosphere Model

    Energy Technology Data Exchange (ETDEWEB)

    M. Wasiolek

    2006-06-05

    This analysis is one of the technical reports that support the Environmental Radiation Model for Yucca Mountain, Nevada (ERMYN), referred to in this report as the biosphere model. ''Biosphere Model Report'' (BSC 2004 [DIRS 169460]) describes in detail the conceptual model as well as the mathematical model and its input parameters. This report documents development of input parameters for the biosphere model that are related to atmospheric mass loading and supports the use of the model to develop biosphere dose conversion factors (BDCFs). The biosphere model is one of a series of process models supporting the total system performance assessment (TSPA) for a Yucca Mountain repository. ''Inhalation Exposure Input Parameters for the Biosphere Model'' is one of five reports that develop input parameters for the biosphere model. A graphical representation of the documentation hierarchy for the biosphere model is presented in Figure 1-1 (based on BSC 2006 [DIRS 176938]). This figure shows the interrelationships among the products (i.e., analysis and model reports) developed for biosphere modeling and how this analysis report contributes to biosphere modeling. This analysis report defines and justifies values of atmospheric mass loading for the biosphere model. Mass loading is the total mass concentration of resuspended particles (e.g., dust, ash) in a volume of air. Mass loading values are used in the air submodel of the biosphere model to calculate concentrations of radionuclides in air inhaled by a receptor and concentrations in air surrounding crops. Concentrations in air to which the receptor is exposed are then used in the inhalation submodel to calculate the dose contribution to the receptor from inhalation of contaminated airborne particles. Concentrations in air surrounding plants are used in the plant submodel to calculate the concentrations of radionuclides in foodstuffs contributed from uptake by foliar interception. This

  48. Inhalation Exposure Input Parameters for the Biosphere Model

    International Nuclear Information System (INIS)

    M. Wasiolek

    2006-01-01

    This analysis is one of the technical reports that support the Environmental Radiation Model for Yucca Mountain, Nevada (ERMYN), referred to in this report as the biosphere model. ''Biosphere Model Report'' (BSC 2004 [DIRS 169460]) describes in detail the conceptual model as well as the mathematical model and its input parameters. This report documents development of input parameters for the biosphere model that are related to atmospheric mass loading and supports the use of the model to develop biosphere dose conversion factors (BDCFs). The biosphere model is one of a series of process models supporting the total system performance assessment (TSPA) for a Yucca Mountain repository. ''Inhalation Exposure Input Parameters for the Biosphere Model'' is one of five reports that develop input parameters for the biosphere model. A graphical representation of the documentation hierarchy for the biosphere model is presented in Figure 1-1 (based on BSC 2006 [DIRS 176938]). This figure shows the interrelationships among the products (i.e., analysis and model reports) developed for biosphere modeling and how this analysis report contributes to biosphere modeling. This analysis report defines and justifies values of atmospheric mass loading for the biosphere model. Mass loading is the total mass concentration of resuspended particles (e.g., dust, ash) in a volume of air. Mass loading values are used in the air submodel of the biosphere model to calculate concentrations of radionuclides in air inhaled by a receptor and concentrations in air surrounding crops. Concentrations in air to which the receptor is exposed are then used in the inhalation submodel to calculate the dose contribution to the receptor from inhalation of contaminated airborne particles. Concentrations in air surrounding plants are used in the plant submodel to calculate the concentrations of radionuclides in foodstuffs contributed from uptake by foliar interception. This report is concerned primarily with the

  49. Inhalation Exposure Input Parameters for the Biosphere Model

    Energy Technology Data Exchange (ETDEWEB)

    M. A. Wasiolek

    2003-09-24

    This analysis is one of the nine reports that support the Environmental Radiation Model for Yucca Mountain Nevada (ERMYN) biosphere model. The ''Biosphere Model Report'' (BSC 2003a) describes in detail the conceptual model as well as the mathematical model and its input parameters. This report documents a set of input parameters for the biosphere model, and supports the use of the model to develop biosphere dose conversion factors (BDCFs). The biosphere model is one of a series of process models supporting the Total System Performance Assessment (TSPA) for a Yucca Mountain repository. This report, ''Inhalation Exposure Input Parameters for the Biosphere Model'', is one of the five reports that develop input parameters for the biosphere model. A graphical representation of the documentation hierarchy for the ERMYN is presented in Figure 1-1. This figure shows the interrelationships among the products (i.e., analysis and model reports) developed for biosphere modeling, and the plan for development of the biosphere abstraction products for TSPA, as identified in the ''Technical Work Plan: for Biosphere Modeling and Expert Support'' (BSC 2003b). It should be noted that some documents identified in Figure 1-1 may be under development at the time this report is issued and therefore not available at that time. This figure is included to provide an understanding of how this analysis report contributes to biosphere modeling in support of the license application, and is not intended to imply that access to the listed documents is required to understand the contents of this analysis report. This analysis report defines and justifies values of mass loading, which is the total mass concentration of resuspended particles (e.g., dust, ash) in a volume of air. Measurements of mass loading are used in the air submodel of ERMYN to calculate concentrations of radionuclides in air surrounding crops and concentrations in air

  10. Inhalation Exposure Input Parameters for the Biosphere Model

    International Nuclear Information System (INIS)

    M. A. Wasiolek

    2003-01-01

    This analysis is one of the nine reports that support the Environmental Radiation Model for Yucca Mountain Nevada (ERMYN) biosphere model. The ''Biosphere Model Report'' (BSC 2003a) describes in detail the conceptual model as well as the mathematical model and its input parameters. This report documents a set of input parameters for the biosphere model, and supports the use of the model to develop biosphere dose conversion factors (BDCFs). The biosphere model is one of a series of process models supporting the Total System Performance Assessment (TSPA) for a Yucca Mountain repository. This report, ''Inhalation Exposure Input Parameters for the Biosphere Model'', is one of the five reports that develop input parameters for the biosphere model. A graphical representation of the documentation hierarchy for the ERMYN is presented in Figure 1-1. This figure shows the interrelationships among the products (i.e., analysis and model reports) developed for biosphere modeling, and the plan for development of the biosphere abstraction products for TSPA, as identified in the ''Technical Work Plan: for Biosphere Modeling and Expert Support'' (BSC 2003b). It should be noted that some documents identified in Figure 1-1 may be under development at the time this report is issued and therefore not available at that time. This figure is included to provide an understanding of how this analysis report contributes to biosphere modeling in support of the license application, and is not intended to imply that access to the listed documents is required to understand the contents of this analysis report. This analysis report defines and justifies values of mass loading, which is the total mass concentration of resuspended particles (e.g., dust, ash) in a volume of air. Measurements of mass loading are used in the air submodel of ERMYN to calculate concentrations of radionuclides in air surrounding crops and concentrations in air inhaled by a receptor. Concentrations in air to which the

  11. Agricultural and Environmental Input Parameters for the Biosphere Model

    Energy Technology Data Exchange (ETDEWEB)

    K. Rasmuson; K. Rautenstrauch

    2004-09-14

    This analysis is one of 10 technical reports that support the Environmental Radiation Model for Yucca Mountain Nevada (ERMYN) (i.e., the biosphere model). It documents development of agricultural and environmental input parameters for the biosphere model, and supports the use of the model to develop biosphere dose conversion factors (BDCFs). The biosphere model is one of a series of process models supporting the total system performance assessment (TSPA) for the repository at Yucca Mountain. The ERMYN provides the TSPA with the capability to perform dose assessments. A graphical representation of the documentation hierarchy for the ERMYN is presented in Figure 1-1. This figure shows the interrelationships between the major activities and their products (the analysis and model reports) that were planned in ''Technical Work Plan for Biosphere Modeling and Expert Support'' (BSC 2004 [DIRS 169573]). The ''Biosphere Model Report'' (BSC 2004 [DIRS 169460]) describes the ERMYN and its input parameters.

  12. Screening important inputs in models with strong interaction properties

    International Nuclear Information System (INIS)

    Saltelli, Andrea; Campolongo, Francesca; Cariboni, Jessica

    2009-01-01

We introduce a new method for screening inputs in mathematical or computational models with large numbers of inputs. The method proposed here represents an improvement over the best available practice for this setting when dealing with models having strong interaction effects. When the sample size is sufficiently high, the same design can also be used to obtain accurate quantitative estimates of the variance-based sensitivity measures: the same simulations yield estimates according to both the Sobol' and the Jansen formulas. Results demonstrate that Sobol' is more efficient for the computation of the first-order indices, while Jansen performs better for the computation of the total indices.
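
The Sobol' and Jansen formulas mentioned above are standard estimators in variance-based sensitivity analysis. A minimal sketch of the usual A/B/AB sample design, assuming independent U(0,1) inputs and a toy model of our own choosing:

```python
import numpy as np

def sobol_jansen(model, d, n, seed=0):
    """First-order (Sobol') and total (Jansen) sensitivity index estimates
    from the standard A/B/AB design; `model` maps an (n, d) array to n outputs."""
    rng = np.random.default_rng(seed)
    A, B = rng.random((n, d)), rng.random((n, d))
    fA, fB = model(A), model(B)
    var = np.var(np.concatenate([fA, fB]), ddof=1)
    S1, ST = np.empty(d), np.empty(d)
    for i in range(d):
        ABi = A.copy()
        ABi[:, i] = B[:, i]                 # resample only input i
        fABi = model(ABi)
        S1[i] = np.mean(fB * (fABi - fA)) / var        # Sobol' first-order
        ST[i] = 0.5 * np.mean((fA - fABi) ** 2) / var  # Jansen total
    return S1, ST

# Toy model with an interaction between inputs 0 and 2
S1, ST = sobol_jansen(lambda X: X[:, 0] + 2 * X[:, 1] + X[:, 0] * X[:, 2], d=3, n=100_000)
print(S1.round(2), ST.round(2))  # ST[0] and ST[2] exceed S1[0] and S1[2]
```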

  13. Screening important inputs in models with strong interaction properties

    Energy Technology Data Exchange (ETDEWEB)

    Saltelli, Andrea [European Commission, Joint Research Centre, 21020 Ispra, Varese (Italy); Campolongo, Francesca [European Commission, Joint Research Centre, 21020 Ispra, Varese (Italy)], E-mail: francesca.campolongo@jrc.it; Cariboni, Jessica [European Commission, Joint Research Centre, 21020 Ispra, Varese (Italy)

    2009-07-15

We introduce a new method for screening inputs in mathematical or computational models with large numbers of inputs. The method proposed here represents an improvement over the best available practice for this setting when dealing with models having strong interaction effects. When the sample size is sufficiently high, the same design can also be used to obtain accurate quantitative estimates of the variance-based sensitivity measures: the same simulations yield estimates according to both the Sobol' and the Jansen formulas. Results demonstrate that Sobol' is more efficient for the computation of the first-order indices, while Jansen performs better for the computation of the total indices.

  14. Soil-related Input Parameters for the Biosphere Model

    International Nuclear Information System (INIS)

    A. J. Smith

    2003-01-01

This analysis is one of the technical reports containing documentation of the Environmental Radiation Model for Yucca Mountain Nevada (ERMYN), a biosphere model supporting the Total System Performance Assessment (TSPA) for the geologic repository at Yucca Mountain. The biosphere model is one of a series of process models supporting the TSPA for the Yucca Mountain repository. A graphical representation of the documentation hierarchy for the ERMYN biosphere model is presented in Figure 1-1. This figure shows the interrelationships among the products (i.e., analysis and model reports) developed for biosphere modeling, and the plan for development of the biosphere abstraction products for TSPA, as identified in the ''Technical Work Plan: for Biosphere Modeling and Expert Support'' (BSC 2003 [163602]). It should be noted that some documents identified in Figure 1-1 may be under development at the time this report is issued and therefore not available. This figure is included to provide an understanding of how this analysis report contributes to biosphere modeling in support of the license application, and is not intended to imply that access to the listed documents is required to understand the contents of this report. This report, ''Soil-Related Input Parameters for the Biosphere Model'', is one of the five analysis reports that develop input parameters for use in the ERMYN model. This report is the source documentation for the six biosphere parameters identified in Table 1-1. The ''Biosphere Model Report'' (BSC 2003 [160699]) describes in detail the conceptual model as well as the mathematical model and its input parameters. The purpose of this analysis was to develop the biosphere model parameters needed to evaluate doses from pathways associated with the accumulation and depletion of radionuclides in the soil. These parameters support the calculation of radionuclide concentrations in soil from on-going irrigation and ash

  15. Simplifying BRDF input data for optical signature modeling

    Science.gov (United States)

    Hallberg, Tomas; Pohl, Anna; Fagerström, Jan

    2017-05-01

Scene simulations of optical signature properties using signature codes normally require input of various parameterized measurement data of surfaces and coatings in order to achieve realistic scene object features. Some of the most important parameters are used in the model of the Bidirectional Reflectance Distribution Function (BRDF) and are normally determined by surface reflectance and scattering measurements. Reflectance measurements of the spectral Directional Hemispherical Reflectance (DHR) at various incident angles can normally be performed in most spectroscopy labs, while measuring the BRDF is more complicated and may not be available at all in many optical labs. We present a method for obtaining the necessary BRDF data for modeling software directly from DHR measurements, using the Sandford-Robertson BRDF model. The accuracy of the method is tested by modeling a test surface and comparing results obtained using estimated and measured BRDF data as input to the model. These results show that using this method gives no significant loss in modeling accuracy.

  16. Input dependent cell assembly dynamics in a model of the striatal medium spiny neuron network

    Directory of Open Access Journals (Sweden)

Adam Ponzi

    2012-03-01

The striatal medium spiny neuron (MSN) network is sparsely connected with fairly weak GABAergic collaterals receiving an excitatory glutamatergic cortical projection. Peri-stimulus time histograms (PSTH) of MSN population response investigated in various experimental studies display strong firing rate modulations distributed throughout behavioural task epochs. In previous work we have shown by numerical simulation that sparse random networks of inhibitory spiking neurons with characteristics appropriate for UP state MSNs form cell assemblies which fire together coherently in sequences on long behaviourally relevant timescales when the network receives a fixed pattern of constant input excitation. Here we first extend that model to the case where cortical excitation is composed of many independent noisy Poisson processes and demonstrate that cell assembly dynamics is still observed when the input is sufficiently weak. However if cortical excitation strength is increased more regularly firing and completely quiescent cells are found, which depend on the cortical stimulation. Subsequently we further extend previous work to consider what happens when the excitatory input varies as it would when the animal is engaged in behaviour. We investigate how sudden switches in excitation interact with network generated patterned activity. We show that sequences of cell assembly activations can be locked to the excitatory input sequence and delineate the range of parameters where this behaviour is shown. Model cell population PSTH display both stimulus and temporal specificity, with large population firing rate modulations locked to elapsed time from task events. Thus the random network can generate a large diversity of temporally evolving stimulus dependent responses even though the input is fixed between switches. We suggest the MSN network is well suited to the generation of such slow coherent task dependent responses which could be utilised by the animal in behaviour.

  17. Input dependent cell assembly dynamics in a model of the striatal medium spiny neuron network.

    Science.gov (United States)

    Ponzi, Adam; Wickens, Jeff

    2012-01-01

    The striatal medium spiny neuron (MSN) network is sparsely connected with fairly weak GABAergic collaterals receiving an excitatory glutamatergic cortical projection. Peri-stimulus time histograms (PSTH) of MSN population response investigated in various experimental studies display strong firing rate modulations distributed throughout behavioral task epochs. In previous work we have shown by numerical simulation that sparse random networks of inhibitory spiking neurons with characteristics appropriate for UP state MSNs form cell assemblies which fire together coherently in sequences on long behaviorally relevant timescales when the network receives a fixed pattern of constant input excitation. Here we first extend that model to the case where cortical excitation is composed of many independent noisy Poisson processes and demonstrate that cell assembly dynamics is still observed when the input is sufficiently weak. However if cortical excitation strength is increased more regularly firing and completely quiescent cells are found, which depend on the cortical stimulation. Subsequently we further extend previous work to consider what happens when the excitatory input varies as it would when the animal is engaged in behavior. We investigate how sudden switches in excitation interact with network generated patterned activity. We show that sequences of cell assembly activations can be locked to the excitatory input sequence and outline the range of parameters where this behavior is shown. Model cell population PSTH display both stimulus and temporal specificity, with large population firing rate modulations locked to elapsed time from task events. Thus the random network can generate a large diversity of temporally evolving stimulus dependent responses even though the input is fixed between switches. We suggest the MSN network is well suited to the generation of such slow coherent task dependent response which could be utilized by the animal in behavior.

  18. Out-of-Sequence Prevention for Multicast Input-Queuing Space-Memory-Memory Clos-Network

    DEFF Research Database (Denmark)

    Yu, Hao; Ruepp, Sarah; Berger, Michael Stübert

    2011-01-01

This paper proposes two cell dispatching algorithms for the input-queuing space-memory-memory (IQ-SMM) Clos-network to reduce out-of-sequence (OOS) delivery for multicast traffic. The frequent connection pattern change of DSRR results in a severe OOS problem. Based on the principle of DSRR, MFDSRR is able ...

  19. Out-of-Sequence Preventative Cell Dispatching for Multicast Input-Queued Space-Memory-Memory Clos-Network

    DEFF Research Database (Denmark)

    Yu, Hao; Ruepp, Sarah Renée; Berger, Michael Stübert

    2011-01-01

    This paper proposes two out-of-sequence (OOS) preventative cell dispatching algorithms for the multicast input-queued space-memory-memory (IQ-SMM) Clos-network switch architecture, i.e. the multicast flow-based DSRR (MF-DSRR) and the multicast flow-based round-robin (MFRR). Treating each cell...

  20. Preventing Out-of-Sequence for Multicast Input-Queued Space-Memory-Memory Clos-Network

    DEFF Research Database (Denmark)

    Yu, Hao; Ruepp, Sarah Renée; Berger, Michael Stübert

    2011-01-01

    This paper proposes an out-of-sequence (OOS) preventative cell dispatching algorithm, the multicast flow-based round robin (MFRR), for multicast input-queued space-memory-memory (IQ-SMM) Clos-network architecture. Independently treating each incoming cell, such as the desynchronized static round...

  1. OFFSCALE: A PC input processor for the SCALE code system. The CSASIN processor for the criticality sequences

    International Nuclear Information System (INIS)

    Bowman, S.M.

    1994-11-01

OFFSCALE is a suite of personal computer input processor programs developed at Oak Ridge National Laboratory to provide an easy-to-use interface for modules in the SCALE-4 code system. CSASIN (formerly known as OFFSCALE) is a program in the OFFSCALE suite that serves as a user-friendly interface for the Criticality Safety Analysis Sequences (CSAS) available in SCALE-4. It is designed to assist a SCALE-4 user in preparing an input file for execution of criticality safety problems. Output from CSASIN generates an input file that may be used to execute the CSAS control module in SCALE-4. CSASIN features a pulldown menu system that accesses sophisticated data entry screens. The program allows the user to quickly set up a CSAS input file and perform data checking. This capability increases productivity and decreases the chance of user error.

  2. Soil-Related Input Parameters for the Biosphere Model

    International Nuclear Information System (INIS)

    Smith, A. J.

    2004-01-01

    This report presents one of the analyses that support the Environmental Radiation Model for Yucca Mountain Nevada (ERMYN). The ''Biosphere Model Report'' (BSC 2004 [DIRS 169460]) describes the details of the conceptual model as well as the mathematical model and the required input parameters. The biosphere model is one of a series of process models supporting the postclosure Total System Performance Assessment (TSPA) for the Yucca Mountain repository. A schematic representation of the documentation flow for the Biosphere input to TSPA is presented in Figure 1-1. This figure shows the evolutionary relationships among the products (i.e., analysis and model reports) developed for biosphere modeling, and the biosphere abstraction products for TSPA, as identified in the ''Technical Work Plan for Biosphere Modeling and Expert Support'' (TWP) (BSC 2004 [DIRS 169573]). This figure is included to provide an understanding of how this analysis report contributes to biosphere modeling in support of the license application, and is not intended to imply that access to the listed documents is required to understand the contents of this report. This report, ''Soil-Related Input Parameters for the Biosphere Model'', is one of the five analysis reports that develop input parameters for use in the ERMYN model. This report is the source documentation for the six biosphere parameters identified in Table 1-1. The purpose of this analysis was to develop the biosphere model parameters associated with the accumulation and depletion of radionuclides in the soil. These parameters support the calculation of radionuclide concentrations in soil from on-going irrigation or ash deposition and, as a direct consequence, radionuclide concentration in other environmental media that are affected by radionuclide concentrations in soil. The analysis was performed in accordance with the TWP (BSC 2004 [DIRS 169573]) where the governing procedure was defined as AP-SIII.9Q, ''Scientific Analyses''. This

  3. Soil-Related Input Parameters for the Biosphere Model

    Energy Technology Data Exchange (ETDEWEB)

    A. J. Smith

    2004-09-09

    This report presents one of the analyses that support the Environmental Radiation Model for Yucca Mountain Nevada (ERMYN). The ''Biosphere Model Report'' (BSC 2004 [DIRS 169460]) describes the details of the conceptual model as well as the mathematical model and the required input parameters. The biosphere model is one of a series of process models supporting the postclosure Total System Performance Assessment (TSPA) for the Yucca Mountain repository. A schematic representation of the documentation flow for the Biosphere input to TSPA is presented in Figure 1-1. This figure shows the evolutionary relationships among the products (i.e., analysis and model reports) developed for biosphere modeling, and the biosphere abstraction products for TSPA, as identified in the ''Technical Work Plan for Biosphere Modeling and Expert Support'' (TWP) (BSC 2004 [DIRS 169573]). This figure is included to provide an understanding of how this analysis report contributes to biosphere modeling in support of the license application, and is not intended to imply that access to the listed documents is required to understand the contents of this report. This report, ''Soil-Related Input Parameters for the Biosphere Model'', is one of the five analysis reports that develop input parameters for use in the ERMYN model. This report is the source documentation for the six biosphere parameters identified in Table 1-1. The purpose of this analysis was to develop the biosphere model parameters associated with the accumulation and depletion of radionuclides in the soil. These parameters support the calculation of radionuclide concentrations in soil from on-going irrigation or ash deposition and, as a direct consequence, radionuclide concentration in other environmental media that are affected by radionuclide concentrations in soil. The analysis was performed in accordance with the TWP (BSC 2004 [DIRS 169573]) where the governing procedure

  4. Temporal rainfall estimation using input data reduction and model inversion

    Science.gov (United States)

    Wright, A. J.; Vrugt, J. A.; Walker, J. P.; Pauwels, V. R. N.

    2016-12-01

Floods are devastating natural hazards. To provide accurate, precise and timely flood forecasts there is a need to understand the uncertainties associated with temporal rainfall and model parameters. The estimation of temporal rainfall and model parameter distributions from streamflow observations in complex dynamic catchments adds skill to current areal rainfall estimation methods, allows for the uncertainty of rainfall input to be considered when estimating model parameters and provides the ability to estimate rainfall from poorly gauged catchments. Current methods to estimate temporal rainfall distributions from streamflow are unable to adequately explain and invert complex non-linear hydrologic systems. This study uses the Discrete Wavelet Transform (DWT) to reduce rainfall dimensionality for the catchment of Warwick, Queensland, Australia. The reduction of rainfall to DWT coefficients allows the input rainfall time series to be simultaneously estimated along with model parameters. The estimation process is conducted using multi-chain Markov chain Monte Carlo simulation with the DREAM(ZS) algorithm. The use of a likelihood function that considers both rainfall and streamflow error allows for model parameter and temporal rainfall distributions to be estimated. Estimation of the wavelet approximation coefficients of lower order decomposition structures was able to estimate the most realistic temporal rainfall distributions. These rainfall estimates were all able to simulate streamflow that was superior to the results of a traditional calibration approach. It is shown that the choice of wavelet has a considerable impact on the robustness of the inversion. The results demonstrate that streamflow data contains sufficient information to estimate temporal rainfall and model parameter distributions. The extent and variance of rainfall time series that are able to simulate streamflow that is superior to that simulated by a traditional calibration approach is a
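
The dimensionality-reduction step can be made concrete with PyWavelets; the wavelet ('db4'), the level, and the synthetic series below are our illustrative choices, not those of the study:

```python
import numpy as np
import pywt  # PyWavelets, assumed available

# Reduce a rainfall series to the approximation coefficients of a low-order
# DWT; in an inversion setting this much shorter vector would be estimated
# jointly with the model parameters instead of the full series.
rng = np.random.default_rng(0)
rain = np.maximum(rng.normal(0.0, 1.0, 256), 0.0)   # synthetic rainfall

coeffs = pywt.wavedec(rain, 'db4', level=3)
reduced = [coeffs[0]] + [np.zeros_like(c) for c in coeffs[1:]]
rain_smooth = pywt.waverec(reduced, 'db4')[:rain.size]

print(f"{rain.size} values -> {coeffs[0].size} approximation coefficients")
```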

  5. Evaluating the uncertainty of input quantities in measurement models

    Science.gov (United States)

    Possolo, Antonio; Elster, Clemens

    2014-06-01

    The Guide to the Expression of Uncertainty in Measurement (GUM) gives guidance about how values and uncertainties should be assigned to the input quantities that appear in measurement models. This contribution offers a concrete proposal for how that guidance may be updated in light of the advances in the evaluation and expression of measurement uncertainty that were made in the course of the twenty years that have elapsed since the publication of the GUM, and also considering situations that the GUM does not yet contemplate. Our motivation is the ongoing conversation about a new edition of the GUM. While generally we favour a Bayesian approach to uncertainty evaluation, we also recognize the value that other approaches may bring to the problems considered here, and focus on methods for uncertainty evaluation and propagation that are widely applicable, including to cases that the GUM has not yet addressed. In addition to Bayesian methods, we discuss maximum-likelihood estimation, robust statistical methods, and measurement models where values of nominal properties play the same role that input quantities play in traditional models. We illustrate these general-purpose techniques in concrete examples, employing data sets that are realistic but that also are of conveniently small sizes. The supplementary material available online lists the R computer code that we have used to produce these examples (stacks.iop.org/Met/51/3/339/mmedia). Although we strive to stay close to clause 4 of the GUM, which addresses the evaluation of uncertainty for input quantities, we depart from it as we review the classes of measurement models that we believe are generally useful in contemporary measurement science. We also considerably expand and update the treatment that the GUM gives to Type B evaluations of uncertainty: reviewing the state-of-the-art, disciplined approach to the elicitation of expert knowledge, and its encapsulation in probability distributions that are usable in
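
As a concrete instance of propagating input-quantity uncertainty through a measurement model, here is a minimal Monte Carlo sketch in the spirit of GUM Supplement 1; the model R = V/I and all distributions are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(42)
M = 1_000_000                       # number of Monte Carlo trials

V = rng.normal(5.0, 0.02, M)        # input quantity: voltage (V)
I = rng.normal(0.1, 0.001, M)       # input quantity: current (A)
R = V / I                           # measurement model: resistance (ohm)

lo, hi = np.percentile(R, [2.5, 97.5])
print(f"R = {R.mean():.2f} ohm, u(R) = {R.std(ddof=1):.2f} ohm")
print(f"95% coverage interval: [{lo:.2f}, {hi:.2f}] ohm")
```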

  6. Continuous Online Sequence Learning with an Unsupervised Neural Network Model.

    Science.gov (United States)

Cui, Yuwei; Ahmad, Subutai; Hawkins, Jeff

    2016-09-14

The ability to recognize and predict temporal sequences of sensory inputs is vital for survival in natural environments. Based on many known properties of cortical neurons, hierarchical temporal memory (HTM) sequence memory recently has been proposed as a theoretical framework for sequence learning in the cortex. In this letter, we analyze properties of HTM sequence memory and apply it to sequence learning and prediction problems with streaming data. We show the model is able to continuously learn a large number of variable-order temporal sequences using an unsupervised Hebbian-like learning rule. The sparse temporal codes formed by the model can robustly handle branching temporal sequences by maintaining multiple predictions until there is sufficient disambiguating evidence. We compare the HTM sequence memory with other sequence learning algorithms, including statistical methods (autoregressive integrated moving average), feedforward neural networks (time delay neural network and online sequential extreme learning machine), and recurrent neural networks (long short-term memory and echo-state networks) on sequence prediction problems with both artificial and real-world data. The HTM model achieves comparable accuracy to other state-of-the-art algorithms. The model also exhibits properties that are critical for sequence learning, including continuous online learning, the ability to handle multiple predictions and branching sequences with high-order statistics, robustness to sensor noise and fault tolerance, and good performance without task-specific hyperparameter tuning. Therefore, the HTM sequence memory not only advances our understanding of how the brain may solve the sequence learning problem but is also applicable to real-world sequence learning problems from continuous data streams.
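
HTM itself is too large for a listing entry, but the online sequence-prediction task it addresses can be made concrete with a deliberately simple counting predictor; this is our stand-in for illustration, not the HTM algorithm:

```python
from collections import defaultdict, Counter

class OnlinePredictor:
    """Online variable-order predictor: count continuations of every recent
    context up to max_order and predict from the longest informative one."""

    def __init__(self, max_order=3):
        self.max_order = max_order
        self.counts = defaultdict(Counter)   # context tuple -> next-symbol counts
        self.history = []

    def update_and_predict(self, symbol):
        # learn: credit `symbol` to every suffix context of the history
        for k in range(1, self.max_order + 1):
            if len(self.history) >= k:
                self.counts[tuple(self.history[-k:])][symbol] += 1
        self.history.append(symbol)
        # predict: back off from the longest matching context
        for k in range(self.max_order, 0, -1):
            dist = self.counts.get(tuple(self.history[-k:]))
            if dist:
                return dist.most_common(1)[0][0]
        return None

p = OnlinePredictor()
for s in "ABCDABCDABCD":
    guess = p.update_and_predict(s)
print(guess)  # the context (B, C, D) has always been followed by 'A'
```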

  7. Influential input parameters for reflood model of MARS code

    Energy Technology Data Exchange (ETDEWEB)

    Oh, Deog Yeon; Bang, Young Seok [Korea Institute of Nuclear Safety, Daejeon (Korea, Republic of)

    2012-10-15

Best Estimate (BE) calculation has been more broadly used in nuclear industries and regulations to reduce the significant conservatism in evaluating Loss of Coolant Accidents (LOCA). The reflood model has been identified as one of the problem areas in BE calculation. The objective of the Post BEMUSE Reflood Model Input Uncertainty Methods (PREMIUM) program of OECD/NEA is to make progress on the issue of quantifying the uncertainty of the physical models in system thermal-hydraulic codes by considering experimental results, especially for reflood. It is important to establish a methodology to identify and select the parameters influential to the response of reflood phenomena following a large-break LOCA. To this end, a reference calculation and a sensitivity analysis to select the dominant influential parameters for the FEBA experiment are performed.

  8. Environmental Transport Input Parameters for the Biosphere Model

    International Nuclear Information System (INIS)

    Wasiolek, M. A.

    2003-01-01

    This analysis report is one of the technical reports documenting the Environmental Radiation Model for Yucca Mountain Nevada (ERMYN), a biosphere model supporting the total system performance assessment (TSPA) for the geologic repository at Yucca Mountain. A graphical representation of the documentation hierarchy for the ERMYN is presented in Figure 1-1. This figure shows relationships among the reports developed for biosphere modeling and biosphere abstraction products for the TSPA, as identified in the ''Technical Work Plan: for Biosphere Modeling and Expert Support'' (TWP) (BSC 2003 [163602]). Some documents in Figure 1-1 may be under development and not available when this report is issued. This figure provides an understanding of how this report contributes to biosphere modeling in support of the license application (LA), but access to the listed documents is not required to understand the contents of this report. This report is one of the reports that develops input parameter values for the biosphere model. The ''Biosphere Model Report'' (BSC 2003 [160699]) describes the conceptual model, the mathematical model, and the input parameters. The purpose of this analysis is to develop biosphere model parameter values related to radionuclide transport and accumulation in the environment. These parameters support calculations of radionuclide concentrations in the environmental media (e.g., soil, crops, animal products, and air) resulting from a given radionuclide concentration at the source of contamination (i.e., either in groundwater or volcanic ash). The analysis was performed in accordance with the TWP (BSC 2003 [163602]). This analysis develops values of parameters associated with many features, events, and processes (FEPs) applicable to the reference biosphere (DTN: M00303SEPFEPS2.000 [162452]), which are addressed in the biosphere model (BSC 2003 [160699]). The treatment of these FEPs is described in BSC (2003 [160699], Section 6.2). Parameter values

  9. Environmental Transport Input Parameters for the Biosphere Model

    Energy Technology Data Exchange (ETDEWEB)

    M. A. Wasiolek

    2003-06-27

    This analysis report is one of the technical reports documenting the Environmental Radiation Model for Yucca Mountain Nevada (ERMYN), a biosphere model supporting the total system performance assessment (TSPA) for the geologic repository at Yucca Mountain. A graphical representation of the documentation hierarchy for the ERMYN is presented in Figure 1-1. This figure shows relationships among the reports developed for biosphere modeling and biosphere abstraction products for the TSPA, as identified in the ''Technical Work Plan: for Biosphere Modeling and Expert Support'' (TWP) (BSC 2003 [163602]). Some documents in Figure 1-1 may be under development and not available when this report is issued. This figure provides an understanding of how this report contributes to biosphere modeling in support of the license application (LA), but access to the listed documents is not required to understand the contents of this report. This report is one of the reports that develops input parameter values for the biosphere model. The ''Biosphere Model Report'' (BSC 2003 [160699]) describes the conceptual model, the mathematical model, and the input parameters. The purpose of this analysis is to develop biosphere model parameter values related to radionuclide transport and accumulation in the environment. These parameters support calculations of radionuclide concentrations in the environmental media (e.g., soil, crops, animal products, and air) resulting from a given radionuclide concentration at the source of contamination (i.e., either in groundwater or volcanic ash). The analysis was performed in accordance with the TWP (BSC 2003 [163602]). This analysis develops values of parameters associated with many features, events, and processes (FEPs) applicable to the reference biosphere (DTN: M00303SEPFEPS2.000 [162452]), which are addressed in the biosphere model (BSC 2003 [160699]). The treatment of these FEPs is described in BSC (2003 [160699

  10. Lysimeter data as input to performance assessment models

    International Nuclear Information System (INIS)

    McConnell, J.W. Jr.

    1998-01-01

The Field Lysimeter Investigations: Low-Level Waste Data Base Development Program is obtaining information on the performance of radioactive waste forms in a disposal environment. Waste forms fabricated using ion-exchange resins from EPICOR-II prefilters employed in the cleanup of the Three Mile Island (TMI) Nuclear Power Station are being tested to develop a low-level waste data base and to obtain information on survivability of waste forms in a disposal environment. The program includes reviewing radionuclide releases from those waste forms in the first 7 years of sampling and examining the relationship between code input parameters and lysimeter data. Also, lysimeter data are applied to performance assessment source term models, and initial results from use of data in two models are presented.

  11. Measurement of Laser Weld Temperatures for 3D Model Input

    Energy Technology Data Exchange (ETDEWEB)

    Dagel, Daryl [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Grossetete, Grant [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Maccallum, Danny O. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)

    2016-10-01

    Laser welding is a key joining process used extensively in the manufacture and assembly of critical components for several weapons systems. Sandia National Laboratories advances the understanding of the laser welding process through coupled experimentation and modeling. This report summarizes the experimental portion of the research program, which focused on measuring temperatures and thermal history of laser welds on steel plates. To increase confidence in measurement accuracy, researchers utilized multiple complementary techniques to acquire temperatures during laser welding. This data serves as input to and validation of 3D laser welding models aimed at predicting microstructure and the formation of defects and their impact on weld-joint reliability, a crucial step in rapid prototyping of weapons components.

  12. Assigning probability distributions to input parameters of performance assessment models

    Energy Technology Data Exchange (ETDEWEB)

    Mishra, Srikanta [INTERA Inc., Austin, TX (United States)

    2002-02-01

This study presents an overview of various approaches for assigning probability distributions to input parameters and/or future states of performance assessment models. Specifically, three broad approaches are discussed for developing input distributions: (a) fitting continuous distributions to data, (b) subjective assessment of probabilities, and (c) Bayesian updating of prior knowledge based on new information. The report begins with a summary of the nature of data and distributions, followed by a discussion of several common theoretical parametric models for characterizing distributions. Next, various techniques are presented for fitting continuous distributions to data. These include probability plotting, method of moments, maximum likelihood estimation and nonlinear least squares analysis. The techniques are demonstrated using data from a recent performance assessment study for the Yucca Mountain project. Goodness-of-fit techniques are also discussed, followed by an overview of how distribution fitting is accomplished in commercial software packages. The issue of subjective assessment of probabilities is dealt with in terms of the maximum entropy distribution selection approach, as well as some common rules for codifying informal expert judgment. Formal expert elicitation protocols are discussed next, and are based primarily on the guidance provided by the US NRC. The Bayesian framework for updating prior distributions (beliefs) when new information becomes available is discussed. A simple numerical approach is presented for facilitating practical applications of the Bayes theorem. Finally, a systematic framework for assigning distributions is presented: (a) for the situation where enough data are available to define an empirical CDF or fit a parametric model to the data, and (b) to deal with the situation where only a limited amount of information is available.
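
Two of the surveyed approaches, maximum-likelihood fitting and Bayesian updating of a prior, in miniature; the data are synthetic and SciPy is assumed:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

# (a) Fit a continuous distribution to data by maximum likelihood.
data = rng.lognormal(mean=1.0, sigma=0.5, size=200)
sigma, _, scale = stats.lognorm.fit(data, floc=0.0)   # lognormal MLE
print(f"lognormal fit: sigma = {sigma:.3f}, median = {scale:.3f}")

# (c) Bayesian updating with a conjugate prior: Beta prior, binomial data.
a, b = 2.0, 2.0                    # prior Beta(2, 2)
successes, trials = 7, 10          # new information
posterior = stats.beta(a + successes, b + trials - successes)
print(f"posterior mean = {posterior.mean():.3f}")
```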

  13. Assigning probability distributions to input parameters of performance assessment models

    International Nuclear Information System (INIS)

    Mishra, Srikanta

    2002-02-01

This study presents an overview of various approaches for assigning probability distributions to input parameters and/or future states of performance assessment models. Specifically, three broad approaches are discussed for developing input distributions: (a) fitting continuous distributions to data, (b) subjective assessment of probabilities, and (c) Bayesian updating of prior knowledge based on new information. The report begins with a summary of the nature of data and distributions, followed by a discussion of several common theoretical parametric models for characterizing distributions. Next, various techniques are presented for fitting continuous distributions to data. These include probability plotting, method of moments, maximum likelihood estimation and nonlinear least squares analysis. The techniques are demonstrated using data from a recent performance assessment study for the Yucca Mountain project. Goodness-of-fit techniques are also discussed, followed by an overview of how distribution fitting is accomplished in commercial software packages. The issue of subjective assessment of probabilities is dealt with in terms of the maximum entropy distribution selection approach, as well as some common rules for codifying informal expert judgment. Formal expert elicitation protocols are discussed next, and are based primarily on the guidance provided by the US NRC. The Bayesian framework for updating prior distributions (beliefs) when new information becomes available is discussed. A simple numerical approach is presented for facilitating practical applications of the Bayes theorem. Finally, a systematic framework for assigning distributions is presented: (a) for the situation where enough data are available to define an empirical CDF or fit a parametric model to the data, and (b) to deal with the situation where only a limited amount of information is available.

  14. HotSpot Wizard 3.0: web server for automated design of mutations and smart libraries based on sequence input information.

    Science.gov (United States)

    Sumbalova, Lenka; Stourac, Jan; Martinek, Tomas; Bednar, David; Damborsky, Jiri

    2018-05-23

HotSpot Wizard is a web server used for the automated identification of hotspots in semi-rational protein design to give improved protein stability, catalytic activity, substrate specificity and enantioselectivity. Since there are three orders of magnitude fewer protein structures than sequences in bioinformatic databases, the major limitation to the usability of previous versions was the requirement for the protein structure to be a compulsory input for the calculation. HotSpot Wizard 3.0 now accepts the protein sequence as input data. The protein structure for the query sequence is obtained either from eight repositories of homology models or is modeled using Modeller and I-Tasser. The quality of the models is then evaluated using three quality assessment tools: WHAT_CHECK, PROCHECK and MolProbity. During follow-up analyses, the system automatically warns the users whenever they attempt to redesign poorly predicted parts of their homology models. The second main limitation of HotSpot Wizard's predictions is that it identifies suitable positions for mutagenesis, but does not provide any reliable advice on particular substitutions. A new module for the estimation of thermodynamic stabilities using the Rosetta and FoldX suites has been introduced, which prevents destabilizing mutations among pre-selected variants entering experimental testing. HotSpot Wizard is freely available at http://loschmidt.chemi.muni.cz/hotspotwizard.

  15. Metocean input data for drift models applications: Loustic study

    International Nuclear Information System (INIS)

    Michon, P.; Bossart, C.; Cabioc'h, M.

    1995-01-01

Real-time monitoring and crisis management of oil slicks or floating structure displacement require a good knowledge of local winds, waves and currents, which are used as input data for operational drift models. Fortunately, thanks to world-wide and all-weather coverage, satellite measurements have recently enabled the introduction of new methods for the remote sensing of the marine environment. Within a French joint industry project, a procedure has been developed that combines satellite measurements with metocean models in order to provide marine operators' drift models with reliable wind, wave and current analyses and short-term forecasts. In particular, a model now allows calculation of the drift current under the joint action of wind and sea state, substantially improving on the classical laws. This global procedure either uses satellite wind and wave measurements directly (if available over the study area) or indirectly, as calibration of metocean model results, which are brought to the oil slick or floating structure location. The operational use of this procedure is reported here with an example of floating structure drift offshore of the Brittany coast.

  16. An improved robust model predictive control for linear parameter-varying input-output models

    NARCIS (Netherlands)

    Abbas, H.S.; Hanema, J.; Tóth, R.; Mohammadpour, J.; Meskin, N.

    2018-01-01

This paper describes a new robust model predictive control (MPC) scheme for controlling discrete-time linear parameter-varying input-output models subject to input and output constraints. Closed-loop asymptotic stability is guaranteed by including a quadratic terminal cost and an ellipsoidal terminal

  17. Integrate-and-fire vs Poisson models of LGN input to V1 cortex: noisier inputs reduce orientation selectivity.

    Science.gov (United States)

    Lin, I-Chun; Xing, Dajun; Shapley, Robert

    2012-12-01

    One of the reasons the visual cortex has attracted the interest of computational neuroscience is that it has well-defined inputs. The lateral geniculate nucleus (LGN) of the thalamus is the source of visual signals to the primary visual cortex (V1). Most large-scale cortical network models approximate the spike trains of LGN neurons as simple Poisson point processes. However, many studies have shown that neurons in the early visual pathway are capable of spiking with high temporal precision and their discharges are not Poisson-like. To gain an understanding of how response variability in the LGN influences the behavior of V1, we study response properties of model V1 neurons that receive purely feedforward inputs from LGN cells modeled either as noisy leaky integrate-and-fire (NLIF) neurons or as inhomogeneous Poisson processes. We first demonstrate that the NLIF model is capable of reproducing many experimentally observed statistical properties of LGN neurons. Then we show that a V1 model in which the LGN input to a V1 neuron is modeled as a group of NLIF neurons produces higher orientation selectivity than the one with Poisson LGN input. The second result implies that statistical characteristics of LGN spike trains are important for V1's function. We conclude that physiologically motivated models of V1 need to include more realistic LGN spike trains that are less noisy than inhomogeneous Poisson processes.
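
A minimal sketch of the contrast studied here: a noisy leaky integrate-and-fire spike generator next to a Poisson process matched to its rate; all parameters are illustrative, not those of the paper:

```python
import numpy as np

rng = np.random.default_rng(0)
dt, T = 1e-4, 1.0                          # 0.1 ms steps, 1 s of activity
steps = int(T / dt)

# Noisy leaky integrate-and-fire neuron (arbitrary units)
tau, v_th, v_reset = 0.02, 1.0, 0.0
v, lif_spikes = 0.0, []
for t in range(steps):
    drive = 60.0 + 40.0 * rng.normal()     # noisy suprathreshold drive
    v += dt * (-v / tau + drive)
    if v >= v_th:
        lif_spikes.append(t * dt)
        v = v_reset

# Homogeneous Poisson process at the LIF neuron's empirical rate
rate = len(lif_spikes) / T
poisson_spikes = np.flatnonzero(rng.random(steps) < rate * dt) * dt

def cv(times):
    """Coefficient of variation of inter-spike intervals."""
    isi = np.diff(np.asarray(times))
    return isi.std() / isi.mean()

# The LIF train is more regular (CV < 1) than the Poisson train (CV ~ 1)
print(f"CV LIF = {cv(lif_spikes):.2f}, CV Poisson = {cv(poisson_spikes):.2f}")
```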

  18. Modelling Analysis of Forestry Input-Output Elasticity in China

    Directory of Open Access Journals (Sweden)

    Guofeng Wang

    2016-01-01

Based on an extended economic model and spatial econometrics, this paper analyzes the spatial distribution and interdependence of forestry production in China, and calculates the input-output elasticities of forestry production. The results show significant spatial correlation in forestry production in China, manifested mainly as spatial agglomeration. The output elasticity of labor is 0.6649, and that of capital is 0.8412. The contribution of land is significantly negative. Labor and capital are the main determinants of province-level forestry production in China. Thus, research on province-level forestry production should not ignore the spatial effect, and the policy-making process should take into consideration inter-province effects on forestry production. This study provides scientific and technical support for forestry production.
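
Output elasticities of this kind are the coefficients of a log-linear (Cobb-Douglas) production function. A sketch on synthetic data, omitting the spatial-econometric terms the paper adds:

```python
import numpy as np

# ln Y = a + b*ln L + c*ln K + noise; b and c are the output elasticities.
rng = np.random.default_rng(3)
n = 200
lnL = rng.normal(size=n)                        # log labor input
lnK = rng.normal(size=n)                        # log capital input
lnY = 0.5 + 0.66 * lnL + 0.84 * lnK + 0.1 * rng.normal(size=n)

X = np.column_stack([np.ones(n), lnL, lnK])     # OLS design matrix
beta, *_ = np.linalg.lstsq(X, lnY, rcond=None)
print(f"labor elasticity ~ {beta[1]:.3f}, capital elasticity ~ {beta[2]:.3f}")
```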

  19. Prioritizing Interdependent Production Processes using Leontief Input-Output Model

    Directory of Open Access Journals (Sweden)

    Masbad Jesah Grace

    2016-03-01

This paper proposes a methodology for identifying key production processes in an interdependent production system. Previous approaches in this domain have drawbacks that may potentially affect the reliability of decision-making. The proposed approach adopts the Leontief input-output model (L-IOM), which was proven successful in analyzing interdependent economic systems. The motivation behind such adoption lies in the strength of L-IOM in providing a rigorous quantitative framework for identifying key components of interdependent systems. In this proposed approach, the consumption and production flows of each process are represented respectively by the material inventory produced by the prior process and the material inventory produced by the current process, both in monetary values. A case study of a furniture production system located in the central Philippines was carried out to elucidate the proposed approach. Results of the case study are reported in this work.
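
The core of the L-IOM is the Leontief quantity model: the gross output x needed to satisfy final demand d, given the technical-coefficient matrix A, is x = (I - A)^(-1) d. A sketch for a three-process system with invented numbers:

```python
import numpy as np

# A[i, j]: input from process i required per unit of output of process j
A = np.array([[0.10, 0.30, 0.00],
              [0.20, 0.05, 0.40],
              [0.00, 0.25, 0.10]])
d = np.array([100.0, 50.0, 80.0])   # final demand on each process

# Solve (I - A) x = d rather than inverting explicitly
x = np.linalg.solve(np.eye(3) - A, d)
print(x.round(2))   # gross outputs, reflecting the interdependence in A
```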

  20. An analytical model for an input/output-subsystem

    International Nuclear Information System (INIS)

    Roemgens, J.

    1983-05-01

An input/output-subsystem of one or several computers is formed by the external memory units and the peripheral units of a computer system. For these subsystems, mathematical models are established that take into account the special properties of I/O-subsystems, in order to avoid planning errors and to allow for predictions of the capacity of such systems. Here an analytical model is presented for the magnetic discs of an I/O-subsystem, using analytical methods for the individual waiting queues or waiting queue networks. Only I/O-subsystems of IBM computer configurations controlled by the MVS operating system are considered. After a description of the hardware and software components of these I/O-systems, possible solutions from the literature are presented and discussed with respect to their applicability in IBM I/O-subsystems. Based on these models, a special scheme is developed which combines the advantages of the literature models and avoids the disadvantages in part. (orig./RW) [de]

  1. An investigation of developmental changes in interpretation and construction of graphic AAC symbol sequences through systematic combination of input and output modalities.

    Science.gov (United States)

    Trudeau, Natacha; Sutton, Ann; Morford, Jill P

    2014-09-01

    While research on spoken language has a long tradition of studying and contrasting language production and comprehension, the study of graphic symbol communication has focused more on production than comprehension. As a result, the relationships between the ability to construct and to interpret graphic symbol sequences are not well understood. This study explored the use of graphic symbol sequences in children without disabilities aged 3;0 to 6;11 (years; months) (n=111). Children took part in nine tasks that systematically varied input and output modalities (speech, action, and graphic symbols). Results show that in 3- and 4-year-olds, attributing meaning to a sequence of symbols was particularly difficult even when the children knew the meaning of each symbol in the sequence. Similarly, while even 3- and 4-year-olds could produce a graphic symbol sequence following a model, transposing a spoken sentence into a graphic sequence was more difficult for them. Representing an action with graphic symbols was difficult even for 5-year-olds. Finally, the ability to comprehend graphic-symbol sequences preceded the ability to produce them. These developmental patterns, as well as memory-related variables, should be taken into account in choosing intervention strategies with young children who use AAC.

  2. A Markovian model of evolving world input-output network.

    Directory of Open Access Journals (Sweden)

    Vahid Moosavi

The initial theoretical connections between Leontief input-output models and Markov chains were established back in the 1950s. However, considering the wide variety of mathematical properties of Markov chains, so far there has not been a full investigation of evolving world economic networks with Markov chain formalism. In this work, using the recently available world input-output database, we investigated the evolution of the world economic network from 1995 to 2011 through analysis of a time series of finite Markov chains. We assessed different aspects of this evolving system via different known properties of the Markov chains, such as mixing time, Kemeny constant, steady state probabilities and perturbation analysis of the transition matrices. First, we showed how the time series of mixing times and Kemeny constants could be used as an aggregate index of globalization. Next, we focused on the steady state probabilities as a measure of the structural power of economies, comparable to GDP shares as the traditional index of economic welfare. Further, we introduced two measures of systemic risk, called systemic influence and systemic fragility, where the former is the ratio of the number of nodes influenced by a shock in the activity of a given node to the total number of nodes, and the latter is based on the number of times a specific economic node is affected by a shock in the activity of any of the other nodes. Finally, focusing on the Kemeny constant as a global indicator of monetary flow across the network, we showed that there is a paradoxical effect of a change in activity levels of economic nodes on the overall flow of the world economic network. While the economic slowdown of the majority of nodes with high structural power results in a slower average monetary flow over the network, there are some nodes whose slowdowns improve the overall quality of the network in terms of connectivity and the average flow of money.
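
The chain-level quantities used above are straightforward to compute for a small transition matrix; a toy three-node sketch, using the convention that the Kemeny constant is the sum of 1/(1 - lambda) over the non-unit eigenvalues:

```python
import numpy as np

# Row-stochastic transition matrix of a toy three-node "economy"
P = np.array([[0.7, 0.2, 0.1],
              [0.3, 0.5, 0.2],
              [0.2, 0.3, 0.5]])

# Steady-state probabilities: left eigenvector of P for eigenvalue 1
vals, vecs = np.linalg.eig(P.T)
pi = np.real(vecs[:, np.argmax(np.real(vals))])
pi /= pi.sum()

# Kemeny constant from the spectrum; smaller K means faster average travel
lams = np.linalg.eigvals(P)
lams = lams[np.argsort(-np.real(lams))][1:]   # drop the unit eigenvalue
K = float(np.real(np.sum(1.0 / (1.0 - lams))))

print(pi.round(3), round(K, 3))
```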

  3. A Markovian model of evolving world input-output network.

    Science.gov (United States)

    Moosavi, Vahid; Isacchini, Giulio

    2017-01-01

The initial theoretical connections between Leontief input-output models and Markov chains were established back in the 1950s. However, considering the wide variety of mathematical properties of Markov chains, so far there has not been a full investigation of evolving world economic networks with Markov chain formalism. In this work, using the recently available world input-output database, we investigated the evolution of the world economic network from 1995 to 2011 through analysis of a time series of finite Markov chains. We assessed different aspects of this evolving system via different known properties of the Markov chains, such as mixing time, Kemeny constant, steady state probabilities and perturbation analysis of the transition matrices. First, we showed how the time series of mixing times and Kemeny constants could be used as an aggregate index of globalization. Next, we focused on the steady state probabilities as a measure of the structural power of economies, comparable to GDP shares as the traditional index of economic welfare. Further, we introduced two measures of systemic risk, called systemic influence and systemic fragility, where the former is the ratio of the number of nodes influenced by a shock in the activity of a given node to the total number of nodes, and the latter is based on the number of times a specific economic node is affected by a shock in the activity of any of the other nodes. Finally, focusing on the Kemeny constant as a global indicator of monetary flow across the network, we showed that there is a paradoxical effect of a change in activity levels of economic nodes on the overall flow of the world economic network. While the economic slowdown of the majority of nodes with high structural power results in a slower average monetary flow over the network, there are some nodes whose slowdowns improve the overall quality of the network in terms of connectivity and the average flow of money.

  4. DNA sequence modeling based on context trees

    NARCIS (Netherlands)

    Kusters, C.J.; Ignatenko, T.; Roland, J.; Horlin, F.

    2015-01-01

    Genomic sequences contain instructions for protein and cell production. Therefore understanding and identification of biologically and functionally meaningful patterns in DNA sequences is of paramount importance. Modeling of DNA sequences in its turn can help to better understand and identify such
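
A fixed-order Markov counting model is the simplest relative of the context-tree models discussed here (a context tree additionally adapts the context length to the data); a small sketch:

```python
from collections import defaultdict, Counter

def train(seq, k=2):
    """Count, for every length-k context, the following nucleotide."""
    counts = defaultdict(Counter)
    for i in range(len(seq) - k):
        counts[seq[i:i + k]][seq[i + k]] += 1
    return counts

def prob(counts, context, base):
    """P(base | context) with Laplace smoothing over the four nucleotides."""
    c = counts[context]
    return (c[base] + 1) / (sum(c.values()) + 4)

model = train("ACGTACGTACGGTACGT", k=2)
print(prob(model, "AC", "G"))   # every 'AC' in the training sequence precedes 'G'
```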

  5. Regulation of Wnt signaling by nociceptive input in animal models

    Directory of Open Access Journals (Sweden)

    Shi Yuqiang

    2012-06-01

Background: Central sensitization-associated synaptic plasticity in the spinal cord dorsal horn (SCDH) critically contributes to the development of chronic pain, but understanding of the underlying molecular pathways is still incomplete. Emerging evidence suggests that Wnt signaling plays a crucial role in regulation of synaptic plasticity. Little is known about the potential function of the Wnt signaling cascades in chronic pain development. Results: Fluorescent immunostaining results indicate that β-catenin, an essential protein in the canonical Wnt signaling pathway, is expressed in the superficial layers of the mouse SCDH with enrichment at synapses in lamina II. In addition, Wnt3a, a prototypic Wnt ligand that activates the canonical pathway, is also enriched in the superficial layers. Immunoblotting analysis indicates that both Wnt3a and β-catenin are up-regulated in the SCDH of various mouse pain models created by hind-paw injection of capsaicin, intrathecal (i.t.) injection of HIV-gp120 protein or spinal nerve ligation (SNL). Furthermore, Wnt5a, a prototypic Wnt ligand for non-canonical pathways, and its receptor Ror2 are also up-regulated in the SCDH of these models. Conclusion: Our results suggest that Wnt signaling pathways are regulated by nociceptive input. The activation of Wnt signaling may regulate the expression of spinal central sensitization during the development of acute and chronic pain.

  6. GLASSgo – Automated and Reliable Detection of sRNA Homologs From a Single Input Sequence

    Directory of Open Access Journals (Sweden)

    Steffen C. Lott

    2018-04-01

Bacterial small RNAs (sRNAs) are important post-transcriptional regulators of gene expression. The functional and evolutionary characterization of sRNAs requires the identification of homologs, which is frequently challenging due to their heterogeneity, short length and, in part, little sequence conservation. We developed the GLobal Automatic Small RNA Search go (GLASSgo) algorithm to identify sRNA homologs in complex genomic databases starting from a single sequence. GLASSgo combines an iterative BLAST strategy with pairwise identity filtering and a graph-based clustering method that utilizes RNA secondary structure information. We tested the specificity, sensitivity and runtime of GLASSgo, BLAST and the combination RNAlien/cmsearch in a typical use-case scenario on 40 bacterial sRNA families. The sensitivity of the tested methods was similar, while the specificity of GLASSgo and RNAlien/cmsearch was significantly higher than that of BLAST. GLASSgo was on average ∼87 times faster than RNAlien/cmsearch, and only ∼7.5 times slower than BLAST, which shows that GLASSgo optimizes the trade-off between speed and accuracy in the task of finding sRNA homologs. GLASSgo is fully automated, whereas BLAST often recovers only parts of homologs and RNAlien/cmsearch requires extensive additional bioinformatic work to get a comprehensive set of homologs. GLASSgo is available as an easy-to-use web server to find homologous sRNAs in large databases.

  7. ETFOD: a point model physics code with arbitrary input

    International Nuclear Information System (INIS)

    Rothe, K.E.; Attenberger, S.E.

    1980-06-01

ETFOD is a zero-dimensional code which solves a set of physics equations by minimization. The technique used differs from the usual approach in that the input is arbitrary: the user is supplied with a set of variables from which he specifies which variables are input (unchanging), and the remaining variables become the output. Presently the code is being used for ETF reactor design studies. The code was written in a manner to allow easy modification of equations, variables, and physics calculations. The solution technique is presented along with hints for using the code.
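
The "arbitrary input" idea (one fixed equation set, solved for whichever variables the user did not pin down) can be sketched with a generic least-squares solver; the equations below are invented for illustration:

```python
from scipy.optimize import least_squares

def residuals(free_vals, free_names, fixed):
    """Evaluate the equation set with fixed inputs plus current free values."""
    v = {**fixed, **dict(zip(free_names, free_vals))}
    return [v["x"] * v["y"] - 12.0,   # equation 1: x*y = 12
            v["x"] + v["z"] - 7.0]    # equation 2: x + z = 7

fixed = {"y": 3.0}                    # the user declares y as input (unchanging)
free_names = ["x", "z"]               # every other variable becomes output
sol = least_squares(residuals, x0=[1.0, 1.0], args=(free_names, fixed))
print(dict(zip(free_names, sol.x.round(3))))   # {'x': 4.0, 'z': 3.0}
```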

  8. Sequence modelling and an extensible data model for genomic database

    Energy Technology Data Exchange (ETDEWEB)

    Li, Peter Wei-Der [California Univ., San Francisco, CA (United States); Univ. of California, Berkeley, CA (United States)

    1992-01-01

    The Human Genome Project (HGP) plans to sequence the human genome by the beginning of the next century. It will generate DNA sequences of more than 10 billion bases and complex marker sequences (maps) of more than 100 million markers. All of this information will be stored in database management systems (DBMSs). However, existing data models do not have the abstraction mechanisms for modelling sequences, and existing DBMSs do not have operations for complex sequences. This work addresses the problem of sequence modelling in the context of the HGP and the more general problem of an extensible object data model that can incorporate the sequence model as well as existing and future data constructs and operators. First, we proposed a general sequence model that is application and implementation independent. This model is used to capture the sequence information found in the HGP at the conceptual level. In addition, abstract and biological sequence operators are defined for manipulating the modelled sequences. Second, we combined many features of semantic and object-oriented data models into an extensible framework, which we called the "Extensible Object Model", to address the need for a modelling framework that can incorporate the sequence data model along with other types of data constructs and operators. This framework is based on the conceptual separation between constructors and constraints. We then used this modelling framework to integrate the constructs for the conceptual sequence model. The Extensible Object Model is also defined with a graphical representation, which is useful as a tool for database designers. Finally, we defined a query language to support this model and implemented the query processor to demonstrate the feasibility of the extensible framework and the usefulness of the conceptual sequence model.
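    A minimal sketch of the distinction between abstract and biological sequence operators, with an illustrative operator set rather than the thesis's actual definitions:

```python
# Sketch of application-independent (abstract) vs. biological sequence
# operators; the operator set shown here is illustrative.

COMPLEMENT = {"A": "T", "T": "A", "G": "C", "C": "G"}

class Sequence:
    def __init__(self, elements):
        self.elements = list(elements)

    # abstract operators: meaningful for any element type
    def subsequence(self, start, end):
        return Sequence(self.elements[start:end])

    def concatenate(self, other):
        return Sequence(self.elements + other.elements)

    def reverse(self):
        return Sequence(reversed(self.elements))

    # biological operator: only meaningful for DNA sequences
    def reverse_complement(self):
        return Sequence(COMPLEMENT[b] for b in reversed(self.elements))

s = Sequence("ACCTG")
print("".join(s.reverse_complement().elements))  # CAGGT
```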

  10. High Temperature Test Facility Preliminary RELAP5-3D Input Model Description

    Energy Technology Data Exchange (ETDEWEB)

    Bayless, Paul David [Idaho National Lab. (INL), Idaho Falls, ID (United States)

    2015-12-01

    A RELAP5-3D input model is being developed for the High Temperature Test Facility at Oregon State University. The current model is described in detail. Further refinements will be made to the model as final as-built drawings are released and when system characterization data are available for benchmarking the input model.

  11. Modeling Input Errors to Improve Uncertainty Estimates for Sediment Transport Model Predictions

    Science.gov (United States)

    Jung, J. Y.; Niemann, J. D.; Greimann, B. P.

    2016-12-01

    Bayesian methods using Markov chain Monte Carlo algorithms have recently been applied to sediment transport models to assess the uncertainty in the model predictions due to the parameter values. Unfortunately, the existing approaches can only attribute overall uncertainty to the parameters. This limitation is critical because no model can produce accurate forecasts if forced with inaccurate input data, even if the model is well founded in physical theory. In this research, an existing Bayesian method is modified to consider the potential errors in input data during the uncertainty evaluation process. The input error is modeled using Gaussian distributions, and the means and standard deviations are treated as uncertain parameters. The proposed approach is tested by coupling it to the Sedimentation and River Hydraulics - One Dimension (SRH-1D) model and simulating a 23-km reach of the Tachia River in Taiwan. The Wu equation in SRH-1D is used for computing the transport capacity for a bed material load of non-cohesive material. Three types of input data are considered uncertain: (1) the input flowrate at the upstream boundary, (2) the water surface elevation at the downstream boundary, and (3) the water surface elevation at a hydraulic structure in the middle of the reach. The benefits of modeling the input errors in the uncertainty analysis are evaluated by comparing the accuracy of the most likely forecast and the coverage of the observed data by the credible intervals to those of the existing method. The results indicate that the internal boundary condition has the largest uncertainty among those considered. Overall, the uncertainty estimates from the new method are notably different from those of the existing method for both the calibration and forecast periods.
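    The following sketch shows the core idea in miniature: the mean and standard deviation of a Gaussian input-error model are sampled alongside a model parameter in a Metropolis loop. The one-line forward model, the priors, and the proposal scales are toy stand-ins for SRH-1D and the paper's actual setup.

```python
# Toy Metropolis sampler treating input-error parameters as uncertain.
import numpy as np

rng = np.random.default_rng(0)
obs_flow = 100.0                       # nominal measured inflow (input data)
y_obs = 51.0                           # observed output (toy)

def forward_model(theta, inflow):
    return theta * inflow              # toy stand-in for SRH-1D

def log_post(params):
    theta, err_mu, err_sigma = params
    if err_sigma <= 0:
        return -np.inf
    inflow = obs_flow + err_mu         # mean-shifted input
    y = forward_model(theta, inflow)
    # output likelihood plus a weak prior tying err_mu to err_sigma
    return (-0.5 * ((y_obs - y) / 2.0) ** 2
            - 0.5 * (err_mu / (err_sigma + 1e-9)) ** 2
            - np.log(err_sigma))

params = np.array([0.5, 0.0, 5.0])     # theta, input-error mean, input-error sd
chain = []
for _ in range(5000):
    prop = params + rng.normal(scale=[0.02, 0.5, 0.2])
    if np.log(rng.uniform()) < log_post(prop) - log_post(params):
        params = prop
    chain.append(params.copy())
print(np.mean(chain[2500:], axis=0))   # posterior means after burn-in
```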

  12. Hidden Markov models for labeled sequences

    DEFF Research Database (Denmark)

    Krogh, Anders Stærmose

    1994-01-01

    A hidden Markov model for labeled observations, called a class HMM, is introduced, and a maximum likelihood method is developed for estimating the parameters of the model. Instead of training it to model the statistics of the training sequences, it is trained to optimize recognition. It resembles MMI...
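    A minimal sketch of the central trick such a model relies on, under the assumption that each state carries a class label and the forward recursion is masked to states whose label matches the observation's label; the formulation and all probabilities below are illustrative, not Krogh's.

```python
# Label-constrained forward pass for a toy 2-state, 2-symbol class HMM.
import numpy as np

pi = np.array([0.5, 0.5])                  # initial state probabilities
A = np.array([[0.9, 0.1], [0.2, 0.8]])     # transition matrix
B = np.array([[0.7, 0.3], [0.4, 0.6]])     # emission matrix (2 symbols)
state_label = np.array([0, 1])             # class label attached to each state

def labeled_forward(obs, labels):
    """Joint likelihood of obs restricted to the given label sequence."""
    mask = (state_label == labels[0]).astype(float)
    alpha = pi * B[:, obs[0]] * mask
    for o, lab in zip(obs[1:], labels[1:]):
        mask = (state_label == lab).astype(float)
        alpha = (alpha @ A) * B[:, o] * mask
    return alpha.sum()

obs = [0, 0, 1, 1]
labels = [0, 0, 1, 1]
print(labeled_forward(obs, labels))        # P(observations, labels)
```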

  13. A new interpretation and validation of variance based importance measures for models with correlated inputs

    Science.gov (United States)

    Hao, Wenrui; Lu, Zhenzhou; Li, Luyi

    2013-05-01

    In order to explore the contributions of correlated input variables to the variance of the output, a novel interpretation framework of importance measure indices is proposed for models with correlated inputs, comprising indices of the total correlated contribution and the total uncorrelated contribution. The proposed indices accurately describe the contributions of a correlated input to the variance of the output, and they can be viewed as a complement and correction of the interpretation of correlated-input contributions presented in "Estimation of global sensitivity indices for models with dependent variables, Computer Physics Communications, 183 (2012) 937-946". Both indices contain the independent contribution of an individual input. Taking the general quadratic polynomial as an illustration, the total correlated contribution and the independent contribution of an individual input are derived analytically, from which the components of both contributions of a correlated input, and their origins, can be clarified without ambiguity. In the special case that no square term is included in the quadratic polynomial model, the total correlated contribution of an input can be further decomposed into the variance contribution related to the correlation of the input with the other inputs and the independent contribution of the input itself, and the total uncorrelated contribution can be further decomposed into the independent part due to interaction between the input and the others and the independent part due to the input itself. Numerical examples demonstrate that the derived analytical expressions of the variance-based importance measures are correct, and that clarifying the correlated-input contribution to model output by analytical derivation is important for extending the theory and solutions for uncorrelated inputs to correlated ones.

  14. A neurocomputational model of automatic sequence production.

    Science.gov (United States)

    Helie, Sebastien; Roeder, Jessica L; Vucovich, Lauren; Rünger, Dennis; Ashby, F Gregory

    2015-07-01

    Most behaviors unfold in time and include a sequence of submovements or cognitive activities. In addition, most behaviors are automatic and repeated daily throughout life. Yet, relatively little is known about the neurobiology of automatic sequence production. Past research suggests a gradual transfer from the associative striatum to the sensorimotor striatum, but a number of more recent studies challenge this role of the basal ganglia (BG) in automatic sequence production. In this article, we propose a new neurocomputational model of automatic sequence production in which the main role of the BG is to train cortical-cortical connections within the premotor areas that are responsible for automatic sequence production. The new model is used to simulate four different data sets from human and nonhuman animals, including (1) behavioral data (e.g., RTs), (2) electrophysiology data (e.g., single-neuron recordings), (3) macrostructure data (e.g., TMS), and (4) neurological circuit data (e.g., inactivation studies). We conclude with a comparison of the new model with existing models of automatic sequence production and discuss a possible new role for the BG in automaticity and its implication for Parkinson's disease.

  15. Specification and Aggregation Errors in Environmentally Extended Input-Output Models

    NARCIS (Netherlands)

    Bouwmeester, Maaike C.; Oosterhaven, Jan

    This article considers the specification and aggregation errors that arise from estimating embodied emissions and embodied water use with environmentally extended national input-output (IO) models, instead of with an environmentally extended international IO model. Model specification errors result

  16. The MARINA model (Model to Assess River Inputs of Nutrients to seAs)

    NARCIS (Netherlands)

    Strokal, Maryna; Kroeze, Carolien; Wang, Mengru; Bai, Zhaohai; Ma, Lin

    2016-01-01

    Chinese agriculture has been developing fast towards industrial food production systems that discharge nutrient-rich wastewater into rivers. As a result, nutrient export by rivers has been increasing, resulting in coastal water pollution. We developed a Model to Assess River Inputs of Nutrients to seAs (MARINA)

  17. Joint analysis of input and parametric uncertainties in watershed water quality modeling: A formal Bayesian approach

    Science.gov (United States)

    Han, Feng; Zheng, Yi

    2018-06-01

    Significant input uncertainty is a major source of error in watershed water quality (WWQ) modeling. It remains challenging to address the input uncertainty in a rigorous Bayesian framework. This study develops the Bayesian Analysis of Input and Parametric Uncertainties (BAIPU), an approach for the joint analysis of input and parametric uncertainties through a tight coupling of Markov Chain Monte Carlo (MCMC) analysis and Bayesian Model Averaging (BMA). The formal likelihood function for this approach is derived considering a lag-1 autocorrelated, heteroscedastic, and Skew Exponential Power (SEP) distributed error model. A series of numerical experiments were performed based on a synthetic nitrate pollution case and on a real study case in the Newport Bay Watershed, California. The Soil and Water Assessment Tool (SWAT) and Differential Evolution Adaptive Metropolis (DREAM(ZS)) were used as the representative WWQ model and MCMC algorithm, respectively. The major findings include the following: (1) the BAIPU can be implemented and used to appropriately identify the uncertain parameters and characterize the predictive uncertainty; (2) the compensation effect between the input and parametric uncertainties can seriously mislead modeling-based management decisions if the input uncertainty is not explicitly accounted for; (3) the BAIPU accounts for the interaction between the input and parametric uncertainties and therefore provides more accurate calibration and uncertainty results than a sequential analysis of the uncertainties; and (4) the BAIPU quantifies the credibility of different input assumptions on a statistical basis and can be implemented as an effective inverse modeling approach for the joint inference of parameters and inputs.

  18. Can Simulation Credibility Be Improved Using Sensitivity Analysis to Understand Input Data Effects on Model Outcome?

    Science.gov (United States)

    Myers, Jerry G.; Young, M.; Goodenow, Debra A.; Keenan, A.; Walton, M.; Boley, L.

    2015-01-01

    Model and simulation (MS) credibility is defined as the quality to elicit belief or trust in MS results. NASA-STD-7009 [1] delineates eight components (Verification, Validation, Input Pedigree, Results Uncertainty, Results Robustness, Use History, MS Management, People Qualifications) that address quantifying model credibility, and provides guidance to model developers, analysts, and end users for assessing MS credibility. Of the eight characteristics, input pedigree, or the quality of the data used to develop model input parameters, governing functions, or initial conditions, can vary significantly. These data quality differences have varying consequences across the range of MS applications. NASA-STD-7009 requires that the lowest input data quality be used to represent the entire set of input data when scoring the input pedigree credibility of the model. This requirement provides a conservative assessment of model inputs and maximizes the communication of the potential level of risk of using model outputs. Unfortunately, in practice, this may result in overly pessimistic communication of the MS output, undermining the credibility of simulation predictions to decision makers. This presentation proposes an alternative assessment mechanism, utilizing results parameter robustness, also known as model input sensitivity, to improve the credibility scoring process for specific simulations.

  19. Motivation Monitoring and Assessment Extension for Input-Process-Outcome Game Model

    Science.gov (United States)

    Ghergulescu, Ioana; Muntean, Cristina Hava

    2014-01-01

    This article proposes a Motivation Assessment-oriented Input-Process-Outcome Game Model (MotIPO), which extends the Input-Process-Outcome game model with game-centred and player-centred motivation assessments performed right from the beginning of the game-play. A feasibility case-study involving 67 participants playing an educational game and…

  20. Characteristic length scale of input data in distributed models: implications for modeling grid size

    Science.gov (United States)

    Artan, G. A.; Neale, C. M. U.; Tarboton, D. G.

    2000-01-01

    The appropriate spatial scale for a distributed energy balance model was investigated by: (a) determining the scale of variability associated with the remotely sensed and GIS-generated model input data; and (b) examining the effects of input data spatial aggregation on model response. The semi-variogram and the characteristic length calculated from the spatial autocorrelation were used to determine the scale of variability of the remotely sensed and GIS-generated model input data. The data were collected from two hillsides at Upper Sheep Creek, a sub-basin of the Reynolds Creek Experimental Watershed in southwest Idaho. The data were analyzed in terms of the semivariance and the integral of the autocorrelation. The minimum characteristic length associated with the variability of the data used in the analysis was 15 m. Simulated and observed radiometric surface temperature fields at different spatial resolutions were compared. The agreement between simulated and observed fields declined sharply beyond a 10×10 m² modeling grid size. A modeling grid size of about 10×10 m² was deemed the best compromise between: (a) reducing computation time and the size of the support data; and (b) reproducing the observed radiometric surface temperature.
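    An empirical semivariogram of the kind used to estimate the characteristic length can be computed in a few lines; the sketch below uses a synthetic 1-D transect and an illustrative lag tolerance rather than the study's field data.

```python
# Empirical semivariogram sketch on a synthetic 1-D transect.
import numpy as np

rng = np.random.default_rng(1)
x = np.arange(0.0, 300.0, 5.0)                        # positions (m)
z = np.sin(x / 40.0) + 0.2 * rng.normal(size=x.size)  # synthetic field

def semivariogram(x, z, lags, tol):
    d = np.abs(x[:, None] - x[None, :])          # pairwise separations
    dz2 = (z[:, None] - z[None, :]) ** 2         # pairwise squared diffs
    # semivariance: half the mean squared difference at each lag bin
    return np.array([0.5 * dz2[np.abs(d - h) < tol].mean() for h in lags])

lags = np.arange(5.0, 100.0, 5.0)
print(np.round(semivariogram(x, z, lags, tol=2.5), 3))
```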

  2. Influence of input matrix representation on topic modelling performance

    CSIR Research Space (South Africa)

    De Waal, A

    2010-11-01

    Full Text Available Topic models explain a collection of documents with a small set of distributions over terms. These distributions over terms define the topics. Topic models ignore the structure of documents and use a bag-of-words approach which relies solely...

  3. "Updates to Model Algorithms & Inputs for the Biogenic ...

    Science.gov (United States)

    We have developed new canopy emission algorithms and land use data for BEIS. Simulations with BEIS v3.4 and these updates in CMAQ v5.0.2 are compared to the Model of Emissions of Gases and Aerosols from Nature (MEGAN), and the simulations are evaluated against observations. This has resulted in improvements in model evaluations of modeled isoprene, NOx, and O3. The National Exposure Research Laboratory (NERL) Atmospheric Modeling and Analysis Division (AMAD) conducts research in support of the EPA mission to protect human health and the environment. The AMAD research program is engaged in developing and evaluating predictive atmospheric models on all spatial and temporal scales for forecasting air quality and for assessing changes in air quality and air pollutant exposures, as affected by changes in ecosystem management and regulatory decisions. AMAD is responsible for providing a sound scientific and technical basis for regulatory policies based on air quality models to improve ambient air quality. The models developed by AMAD are being used by EPA, NOAA, and the air pollution community in understanding and forecasting not only the magnitude of the air pollution problem, but also in developing emission control policies and regulations for air quality improvements.

  4. Identifying best-fitting inputs in health-economic model calibration: a Pareto frontier approach.

    Science.gov (United States)

    Enns, Eva A; Cipriano, Lauren E; Simons, Cyrena T; Kong, Chung Yin

    2015-02-01

    To identify best-fitting input sets using model calibration, individual calibration target fits are often combined into a single goodness-of-fit (GOF) measure using a set of weights. Decisions in the calibration process, such as which weights to use, influence which sets of model inputs are identified as best-fitting, potentially leading to different health economic conclusions. We present an alternative approach to identifying best-fitting input sets based on the concept of Pareto-optimality. A set of model inputs is on the Pareto frontier if no other input set simultaneously fits all calibration targets as well or better. We demonstrate the Pareto frontier approach in the calibration of 2 models: a simple, illustrative Markov model and a previously published cost-effectiveness model of transcatheter aortic valve replacement (TAVR). For each model, we compare the input sets on the Pareto frontier to an equal number of best-fitting input sets according to 2 possible weighted-sum GOF scoring systems, and we compare the health economic conclusions arising from these different definitions of best-fitting. For the simple model, outcomes evaluated over the best-fitting input sets according to the 2 weighted-sum GOF schemes were virtually nonoverlapping on the cost-effectiveness plane and resulted in very different incremental cost-effectiveness ratios ($79,300 [95% CI 72,500-87,600] v. $139,700 [95% CI 79,900-182,800] per quality-adjusted life-year [QALY] gained). Input sets on the Pareto frontier spanned both regions ($79,000 [95% CI 64,900-156,200] per QALY gained). The TAVR model yielded similar results. Choices in generating a summary GOF score may result in different health economic conclusions. The Pareto frontier approach eliminates the need to make these choices by using an intuitive and transparent notion of optimality as the basis for identifying best-fitting input sets. © The Author(s) 2014.
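    The Pareto-dominance test itself is simple to state in code. The sketch below, with random goodness-of-fit values standing in for real calibration results (lower meaning a better fit), keeps an input set only if no other set fits every target at least as well and at least one target strictly better.

```python
# Pareto-frontier filtering of calibrated input sets (synthetic GOF values).
import numpy as np

rng = np.random.default_rng(2)
gof = rng.uniform(size=(200, 3))   # 200 input sets x 3 calibration targets

def pareto_frontier(gof):
    n = gof.shape[0]
    keep = np.ones(n, dtype=bool)
    for i in range(n):
        # set i is dominated if some set is <= on all targets and < on one
        dominated = (np.all(gof <= gof[i], axis=1)
                     & np.any(gof < gof[i], axis=1))
        if dominated.any():
            keep[i] = False
    return np.where(keep)[0]

frontier = pareto_frontier(gof)
print(len(frontier), "of", gof.shape[0], "input sets are Pareto-optimal")
```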

  5. Sensitivity analysis of complex models: Coping with dynamic and static inputs

    International Nuclear Information System (INIS)

    Anstett-Collin, F.; Goffart, J.; Mara, T.; Denis-Vidal, L.

    2015-01-01

    In this paper, we address the issue of conducting a sensitivity analysis of complex models with both static and dynamic uncertain inputs. While several approaches have been proposed to compute the sensitivity indices of the static inputs (i.e. parameters), those of the dynamic inputs (i.e. stochastic fields) have rarely been addressed. For this purpose, we first treat each dynamic input as a Gaussian process. Then, the truncated Karhunen–Loève expansion of each dynamic input is performed. Such an expansion makes it possible to generate independent Gaussian processes from a finite number of independent random variables. Given that a dynamic input is represented by a finite number of random variables, its variance-based sensitivity index is defined as the sensitivity index of this group of variables. Besides, an efficient sampling-based strategy is described to estimate the first-order indices of all the input factors using only two input samples. The approach is applied to a building energy model in order to assess the impact of the uncertainties of the material properties (static inputs) and the weather data (dynamic inputs) on the energy performance of a real low-energy-consumption house. - Highlights: • Sensitivity analysis of models with uncertain static and dynamic inputs is performed. • Karhunen–Loève (KL) decomposition of the spatio-temporal inputs is performed. • The influence of the dynamic inputs is studied through the modes of the KL expansion. • The proposed approach is applied to a building energy model. • The impact of weather data and material properties on the performance of a real house is given.
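    A truncated Karhunen–Loève expansion reduces, on a grid, to an eigendecomposition of the covariance matrix of the process. The sketch below uses an illustrative squared-exponential kernel and truncation level, not the paper's actual weather-data model.

```python
# Truncated Karhunen-Loeve expansion of a Gaussian process on a grid.
import numpy as np

t = np.linspace(0.0, 1.0, 200)                              # time grid
C = np.exp(-0.5 * ((t[:, None] - t[None, :]) / 0.1) ** 2)   # SE covariance

eigval, eigvec = np.linalg.eigh(C)
order = np.argsort(eigval)[::-1]            # largest modes first
eigval, eigvec = eigval[order], eigvec[:, order]

k = 10                                      # truncation level
rng = np.random.default_rng(3)
xi = rng.normal(size=k)                     # independent N(0,1) variables
sample = eigvec[:, :k] @ (np.sqrt(np.maximum(eigval[:k], 0.0)) * xi)

var_captured = eigval[:k].sum() / eigval.sum()
print(f"{k} modes capture {var_captured:.1%} of the variance")
```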

  6. High Flux Isotope Reactor system RELAP5 input model

    International Nuclear Information System (INIS)

    Morris, D.G.; Wendel, M.W.

    1993-01-01

    A thermal-hydraulic computational model of the High Flux Isotope Reactor (HFIR) has been developed using the RELAP5 program. The purpose of the model is to provide a state-of-the-art thermal-hydraulic simulation tool for analyzing selected hypothetical accident scenarios for a revised HFIR Safety Analysis Report (SAR). The model includes (1) a detailed representation of the reactor core and other vessel components, (2) three heat exchanger/pump cells, (3) pressurizing pumps and letdown valves, and (4) the secondary coolant system (with less detail than the primary system). Data from HFIR operation, component tests, tests in facility mockups and the HFIR, HFIR-specific experiments, and other pertinent experiments performed independent of HFIR were used to construct the model and validate it to the extent permitted by the data. The detailed version of the model has been used to simulate loss-of-coolant accidents (LOCAs), while the abbreviated version has been developed for operational transients that allow use of a less detailed nodalization. Analysis of station blackout with long-term core decay heat removal via natural convection has been performed using the core and vessel portions of the detailed model.

  7. Determining input values for a simple parametric model to estimate ...

    African Journals Online (AJOL)

    Estimating soil evaporation (Es) is an important part of modelling vineyard evapotranspiration for irrigation purposes. Furthermore, quantification of possible soil texture and trellis effects is essential. Daily Es from six topsoils packed into lysimeters was measured under grapevines on slanting and vertical trellises, ...

  8. Reissner-Mindlin plate model with uncertain input data

    Czech Academy of Sciences Publication Activity Database

    Hlaváček, Ivan; Chleboun, J.

    2014-01-01

    Roč. 17, Jun (2014), s. 71-88 ISSN 1468-1218 Institutional support: RVO:67985840 Keywords : Reissner-Mindlin model * orthotropic plate Subject RIV: BA - General Mathematics Impact factor: 2.519, year: 2014 http://www.sciencedirect.com/science/article/pii/S1468121813001077

  9. Model reduction of nonlinear systems subject to input disturbances

    KAUST Repository

    Ndoye, Ibrahima; Laleg-Kirati, Taous-Meriem

    2017-01-01

    The method of convex optimization is used as a tool for model reduction of a class of nonlinear systems in the presence of disturbances. It is shown that under some conditions the nonlinear disturbed system can be approximated by a reduced-order model

  10. Little Higgs model limits from LHC - Input for Snowmass 2013

    International Nuclear Information System (INIS)

    Reuter, Juergen; Tonini, Marco; Vries, Maikel de

    2013-07-01

    The status of the most prominent model implementations of the Little Higgs paradigm, the Littlest Higgs with and without discrete T parity as well as the Simplest Little Higgs, is reviewed. For this, we take into account a fit to 21 electroweak precision observables from LEP, SLC, and the Tevatron, together with the full 25 fb⁻¹ of Higgs data reported from ATLAS and CMS at Moriond 2013. We also - focusing on the Littlest Higgs with T parity - include an outlook on corresponding direct searches at the 8 TeV LHC and their competitiveness with the EW and Higgs data regarding their exclusion potential. This contribution to the Snowmass procedure serves as a guideline to which regions in the parameter space of Little Higgs models remain viable for the upcoming LHC runs and future experiments at the energy frontier. For this we propose two different benchmark scenarios for the Littlest Higgs with T parity, one with heavy mirror quarks and one with light ones.

  11. Scientific and technical advisory committee review of the nutrient inputs to the watershed model

    Science.gov (United States)

    The following is a report by a STAC Review Team concerning the methods and documentation used by the Chesapeake Bay Partnership for evaluation of nutrient inputs to Phase 6 of the Chesapeake Bay Watershed Model. The “STAC Review of the Nutrient Inputs to the Watershed Model” (previously referred to...

  12. From LCC to LCA Using a Hybrid Input Output Model – A Maritime Case Study

    DEFF Research Database (Denmark)

    Kjær, Louise Laumann; Pagoropoulos, Aris; Hauschild, Michael Zwicky

    2015-01-01

    As companies try to embrace life cycle thinking, Life Cycle Assessment (LCA) and Life Cycle Costing (LCC) have proven to be powerful tools. In this paper, an Environmental Input-Output model is used for analysis as it enables an LCA using the same economic input data as LCC. This approach helps...

  13. Wideband Small-Signal Input dq Admittance Modeling of Six-Pulse Diode Rectifiers

    DEFF Research Database (Denmark)

    Yue, Xiaolong; Wang, Xiongfei; Blaabjerg, Frede

    2018-01-01

    This paper studies the wideband small-signal input dq admittance of six-pulse diode rectifiers. Considering the frequency coupling introduced by ripple-frequency harmonics of the d- and q-channel switching functions, the proposed model successfully predicts the small-signal input dq admittance of six-pulse diode rectifiers in high-frequency regions that existing models fail to explain. Simulation and experimental results verify the accuracy of the proposed model.

  14. A Design Method of Robust Servo Internal Model Control with Control Input Saturation

    OpenAIRE

    山田, 功; 舩見, 洋祐

    2001-01-01

    In the present paper, we examine a design method for robust servo Internal Model Control with control input saturation. First of all, we clarify the conditions under which Internal Model Control has robust servo characteristics for systems with control input saturation. From this consideration, we propose a new design method for Internal Model Control with robust servo characteristics. A numerical example illustrates the effectiveness of the proposed method.

  15. Tumor Growth Model with PK Input for Neuroblastoma Drug Development

    Science.gov (United States)

    2015-09-01

    ...pharmacokinetic models. Toxicol Ind Health, 1997. 13(4): p. 407-84. PMID: 9249929. Davies, B. and T. Morris, Physiological parameters in laboratory animals and humans. Pharm Res, 1993. 10(7): p. 1093-5. PMID: 8378254.

  16. Description of the CONTAIN input model for the Dodewaard nuclear power plant

    International Nuclear Information System (INIS)

    Velema, E.J.

    1992-02-01

    This report describes the ECN standard CONTAIN input model for the Dodewaard Nuclear Power Plant (NPP) that has been developed by ECN. This standard input model will serve as a basis for analyses of the phenomena which may occur inside the Dodewaard containment in the event of a postulated severe accident. Boundary conditions for specific containment analyses can easily be implemented in the input model. As a result, ECN will be able to respond quickly to requests for analyses from the utilities or the authorities. The report also includes brief descriptions of the Dodewaard NPP and the CONTAIN computer program. (author). 7 refs.; 5 figs.; 3 tabs.

  18. Modeling and Control of a Dual-Input Isolated Full-Bridge Boost Converter

    DEFF Research Database (Denmark)

    Zhang, Zhe; Thomsen, Ole Cornelius; Andersen, Michael A. E.

    2012-01-01

    In this paper, a steady-state model, a large-signal (LS) model and an ac small-signal (SS) model for a recently proposed dual-input transformer-isolated boost converter are derived respectively by the switching flow-graph (SFG) nonlinear modeling technique. Based upon the converter’s model...

  19. Mechanistic interpretation of glass reaction: Input to kinetic model development

    International Nuclear Information System (INIS)

    Bates, J.K.; Ebert, W.L.; Bradley, J.P.; Bourcier, W.L.

    1991-05-01

    Actinide-doped SRL 165 type glass was reacted in J-13 groundwater at 90°C for times up to 278 days. The reaction was characterized by both solution and solid analyses. The glass was seen to react nonstoichiometrically, with preferential leaching of alkali metals and boron. High-resolution electron microscopy revealed the formation of a complex layer structure which became separated from the underlying glass as the reaction progressed. The formation of the layer and its effect on continued glass reaction are discussed with respect to the current model for glass reaction used in the EQ3/6 computer simulation. It is concluded that the layer formed after 278 days is not protective and may eventually become fractured and generate particulates that may be transported by liquid water. 5 refs., 5 figs., 3 tabs.

  20. Vascular input function correction of inflow enhancement for improved pharmacokinetic modeling of liver DCE-MRI.

    Science.gov (United States)

    Ning, Jia; Schubert, Tilman; Johnson, Kevin M; Roldán-Alzate, Alejandro; Chen, Huijun; Yuan, Chun; Reeder, Scott B

    2018-06-01

    To propose a simple method to correct the vascular input function (VIF) for inflow effects and to test whether the proposed method can provide more accurate VIFs for improved pharmacokinetic modeling. A spoiled gradient echo sequence-based inflow quantification and contrast agent concentration correction method was proposed. Simulations were conducted to illustrate the improvement in the accuracy of VIF estimation and pharmacokinetic fitting. Animal studies with dynamic contrast-enhanced MR scans were conducted before, 1 week after, and 2 weeks after portal vein embolization (PVE) was performed in the left portal circulation of pigs. The proposed method was applied to correct the VIFs for model fitting. Pharmacokinetic parameters fitted using corrected and uncorrected VIFs were compared between different lobes and visits. Simulation results demonstrated that the proposed method can improve the accuracy of VIF estimation and pharmacokinetic fitting. In animal study results, pharmacokinetic fitting using corrected VIFs demonstrated changes in perfusion consistent with changes expected after PVE, whereas the perfusion estimates derived from uncorrected VIFs showed no significant changes. The proposed correction method improves the accuracy of VIFs and therefore provides more precise pharmacokinetic fitting. This method may be promising in improving the reliability of perfusion quantification. Magn Reson Med 79:3093-3102, 2018. © 2017 International Society for Magnetic Resonance in Medicine.

  1. Remote sensing inputs to landscape models which predict future spatial land use patterns for hydrologic models

    Science.gov (United States)

    Miller, L. D.; Tom, C.; Nualchawee, K.

    1977-01-01

    A tropical forest area of Northern Thailand provided a test case of the application of the approach in more natural surroundings. Remote sensing imagery subjected to proper computer analysis has been shown to be a very useful means of collecting spatial data for the science of hydrology. Remote sensing products provide direct input to hydrologic models and practical data bases for planning large and small-scale hydrologic developments. Combining the available remote sensing imagery together with available map information in the landscape model provides a basis for substantial improvements in these applications.

  2. Evaluating the Sensitivity of Agricultural Model Performance to Different Climate Inputs: Supplemental Material

    Science.gov (United States)

    Glotter, Michael J.; Ruane, Alex C.; Moyer, Elisabeth J.; Elliott, Joshua W.

    2015-01-01

    Projections of future food production necessarily rely on models, which must themselves be validated through historical assessments comparing modeled and observed yields. Reliable historical validation requires both accurate agricultural models and accurate climate inputs. Problems with either may compromise the validation exercise. Previous studies have compared the effects of different climate inputs on agricultural projections but either incompletely or without a ground truth of observed yields that would allow distinguishing errors due to climate inputs from those intrinsic to the crop model. This study is a systematic evaluation of the reliability of a widely used crop model for simulating U.S. maize yields when driven by multiple observational data products. The parallelized Decision Support System for Agrotechnology Transfer (pDSSAT) is driven with climate inputs from multiple sources (reanalysis, reanalysis that is bias-corrected with observed climate, and a control dataset) and compared with observed historical yields. The simulations show that model output is more accurate when driven by any observation-based precipitation product than when driven by non-bias-corrected reanalysis. The simulations also suggest, in contrast to previous studies, that a biased precipitation distribution is significant for yields only in arid regions. Some issues persist for all choices of climate inputs: crop yields appear to be oversensitive to precipitation fluctuations but undersensitive to floods and heat waves. These results suggest that the most important issue for agricultural projections may be not climate inputs but structural limitations in the crop models themselves.

  3. Pandemic recovery analysis using the dynamic inoperability input-output model.

    Science.gov (United States)

    Santos, Joost R; Orsi, Mark J; Bond, Erik J

    2009-12-01

    Economists have long conceptualized and modeled the inherent interdependent relationships among different sectors of the economy. This concept paved the way for input-output modeling, a methodology that accounts for sector interdependencies governing the magnitude and extent of ripple effects due to changes in the economic structure of a region or nation. Recent extensions to input-output modeling have enhanced the model's capabilities to account for the impact of an economic perturbation; two such examples are the inoperability input-output model (1,2) and the dynamic inoperability input-output model (DIIM) (3). These models introduced sector inoperability, or the inability to satisfy as-planned production levels, into input-output modeling. While these models provide insights for understanding the impacts of inoperability, there are several aspects of the current formulation that do not account for complexities associated with certain disasters, such as a pandemic. This article proposes further enhancements to the DIIM to account for economic productivity losses resulting primarily from workforce disruptions. A pandemic is a unique disaster because the majority of its direct impacts are workforce related. The article develops a modeling framework to account for workforce inoperability and recovery factors. The proposed workforce-explicit enhancements to the DIIM are demonstrated in a case study to simulate a pandemic scenario in the Commonwealth of Virginia.
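    For reference, the static inoperability calculation that the DIIM extends in time can be sketched as follows; the interdependency matrix, the perturbation vector, and the resilience coefficients are invented for illustration.

```python
# Static inoperability input-output model plus a DIIM-style recovery loop.
import numpy as np

A_star = np.array([[0.1, 0.2, 0.1],
                   [0.3, 0.1, 0.2],
                   [0.1, 0.1, 0.2]])   # normalized interdependency matrix
c_star = np.array([0.15, 0.0, 0.05])   # direct inoperability (workforce loss)

# static equilibrium: q = (I - A*)^-1 c*
q_eq = np.linalg.solve(np.eye(3) - A_star, c_star)
print("equilibrium inoperability:", np.round(q_eq, 3))

# dynamic recovery: q(t+1) = q(t) + K (A* q(t) + c* - q(t))
K = np.diag([0.3, 0.5, 0.4])           # sector resilience coefficients
q = c_star.copy()
for _ in range(50):
    q = q + K @ (A_star @ q + c_star - q)
print("inoperability after 50 steps:", np.round(q, 3))
```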

  4. Development of the RETRAN input model for Ulchin 3/4 visual system analyzer

    International Nuclear Information System (INIS)

    Lee, S. W.; Kim, K. D.; Lee, Y. J.; Lee, W. J.; Chung, B. D.; Jeong, J. J.; Hwang, M. K.

    2004-01-01

    As a part of the Long-Term Nuclear R and D program, KAERI has developed the so-called Visual System Analyzer (ViSA) based on best-estimate codes. The MARS and RETRAN codes are used as the best-estimate codes for ViSA. Between these two codes, the RETRAN code is used for realistic analysis of non-LOCA transients and small-break loss-of-coolant accidents with break sizes of less than 3 inches in diameter. It was therefore necessary to develop the RETRAN input model for the Ulchin 3/4 plants (KSNP). In recognition of this, the RETRAN input model for the Ulchin 3/4 plants has been developed. This report includes the input model requirements and the calculation note for the input data generation (see the Appendix). In order to confirm the validity of the input data, calculations were performed for a steady state at the 100% power operation condition, an inadvertent reactor trip, and an RCP trip. The results of the steady-state calculation agree well with the design data. The results of the other transient calculations appear reasonable and consistent with those of other best-estimate calculations. Therefore, the RETRAN input data can be used as a base input deck for the RETRAN transient analyzer for Ulchin 3/4. Moreover, it was found that the Core Protection Calculator (CPC) module, which was modified by the Korea Electric Power Research Institute (KEPRI), is well adapted to ViSA.

  5. Bayesian nonlinear structural FE model and seismic input identification for damage assessment of civil structures

    Science.gov (United States)

    Astroza, Rodrigo; Ebrahimian, Hamed; Li, Yong; Conte, Joel P.

    2017-09-01

    A methodology is proposed to update mechanics-based nonlinear finite element (FE) models of civil structures subjected to unknown input excitation. The approach allows joint estimation of unknown time-invariant model parameters of a nonlinear FE model of the structure and the unknown time histories of input excitations, using spatially sparse output response measurements recorded during an earthquake event. The unscented Kalman filter, which circumvents the computation of FE response sensitivities with respect to the unknown model parameters and unknown input excitations by using a deterministic sampling approach, is employed as the estimation tool. The use of measurement data obtained from arrays of heterogeneous sensors, including accelerometers, displacement sensors, and strain gauges, is investigated. Based on the estimated FE model parameters and input excitations, the updated nonlinear FE model can be interrogated to detect, localize, classify, and assess damage in the structure. Numerically simulated response data of a three-dimensional 4-story 2-by-1 bay steel frame structure with six unknown model parameters subjected to unknown bi-directional horizontal seismic excitation, and a three-dimensional 5-story 2-by-1 bay reinforced concrete frame structure with nine unknown model parameters subjected to unknown bi-directional horizontal seismic excitation, are used to illustrate and validate the proposed methodology. The results of the validation studies show the excellent performance and robustness of the proposed algorithm in jointly estimating unknown FE model parameters and unknown input excitations.

  6. Modeling of heat transfer into a heat pipe for a localized heat input zone

    International Nuclear Information System (INIS)

    Rosenfeld, J.H.

    1987-01-01

    A general model is presented for heat transfer into a heat pipe with a localized heat input. Conduction in the wall of the heat pipe and boiling in the interior structure are treated simultaneously. The model is derived for circumferential heat transfer in a cylindrical heat pipe evaporator and for radial heat transfer in a circular disk with boiling from the interior surface. A comparison is made with data for a localized heat input zone, and agreement between the model and the data is good. The model can be used for design purposes if a boiling correlation is available, and it can be extended to provide improved predictions of heat pipe performance.

  7. Determination of the arterial input function in mouse-models using clinical MRI

    International Nuclear Information System (INIS)

    Theis, D.; Fachhochschule Giessen-Friedberg; Keil, B.; Heverhagen, J.T.; Klose, K.J.; Behe, M.; Fiebich, M.

    2008-01-01

    Dynamic contrast-enhanced magnetic resonance imaging is a promising method for the quantitative analysis of tumor perfusion and is increasingly used in the study of cancer in small animal models. In such studies, the determination of the arterial input function (AIF) of the target tissue can be the first step. Series of short-axis images of the heart were acquired during administration of a bolus of Gd-DTPA using saturation-recovery gradient echo pulse sequences. The AIF was determined from the changes of the signal intensity in the left ventricle. The native T1 relaxation times and AIFs were determined for 11 mice. An average value of (1.16 ± 0.09) s for the native T1 relaxation time was measured. However, the AIF showed significant inter-animal variability, as previously observed by other authors. This inter-animal variability shows that a direct measurement of the AIF is reasonable to avoid significant errors. The proposed method for determination of the AIF proved to be reliable. (orig.)

  8. Input-output model for MACCS nuclear accident impacts estimation

    Energy Technology Data Exchange (ETDEWEB)

    Outkin, Alexander V. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Bixler, Nathan E. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Vargas, Vanessa N [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)

    2015-01-27

    Since the original economic model for MACCS was developed, better-quality economic data (as well as the tools to gather and process it) and better computational capabilities have become available. The update of the economic impacts component of the MACCS legacy model will provide improved estimates of business disruptions through the use of input-output based economic impact estimation. This paper presents an updated MACCS model, based on input-output methodology, in which economic impacts are calculated using the Regional Economic Accounting analysis tool (REAcct) created at Sandia National Laboratories. This new GDP-based model allows quick and consistent estimation of gross domestic product (GDP) losses due to nuclear power plant accidents. This paper outlines the steps taken to combine the REAcct input-output-based model with the MACCS code, describes the GDP loss calculation, and discusses the parameters and modeling assumptions necessary for the estimation of long-term effects of nuclear power plant accidents.

  9. Multivariate Self-Exciting Threshold Autoregressive Models with eXogenous Input

    OpenAIRE

    Addo, Peter Martey

    2014-01-01

    This study defines multivariate Self-Exciting Threshold Autoregressive with eXogenous input (MSETARX) models and presents an estimation procedure for the parameters. The conditions for stationarity of the nonlinear MSETARX models are provided. In particular, the efficiency of an adaptive parameter estimation algorithm and of an LSE (least squares estimate) algorithm for this class of models is then assessed via simulations.

  10. Reconstruction of neuronal input through modeling single-neuron dynamics and computations

    International Nuclear Information System (INIS)

    Qin, Qing; Wang, Jiang; Yu, Haitao; Deng, Bin; Chan, Wai-lok

    2016-01-01

    Mathematical models provide a mathematical description of neuron activity, which can help us better understand and quantify neural computations and the corresponding biophysical mechanisms evoked by a stimulus. In this paper, based on the output spike train evoked by an acupuncture mechanical stimulus, we present two different levels of models to describe the input-output system and achieve the reconstruction of the neuronal input. The reconstruction process is divided into two steps. First, the neuronal spiking events are treated as a Gamma stochastic process; the scale parameter and the shape parameter of the Gamma process are defined as two spiking characteristics, which are estimated by a state-space method. Then, a leaky integrate-and-fire (LIF) model is used to mimic the response system, and the estimated spiking characteristics are transformed into two temporal input parameters of the LIF model through two conversion formulas. We test this reconstruction method on three different groups of simulation data; all three groups of estimates reconstruct the input parameters with fairly high accuracy. We then use this reconstruction method to estimate the non-measurable acupuncture input parameters. The results show that, under three different acupuncture stimulus frequencies, the estimated input parameters differ markedly: the higher the frequency of the acupuncture stimulus, the higher the accuracy of the reconstruction.
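    The response-system half of such a scheme can be sketched with a standard leaky integrate-and-fire simulation driven by two input parameters (a mean drive and a noise amplitude); all constants below are illustrative, not the paper's fitted values.

```python
# Leaky integrate-and-fire neuron driven by two input parameters.
import numpy as np

def lif_spike_times(mu, sigma, dt=1e-4, T=1.0, tau=0.02, v_th=1.0, seed=0):
    rng = np.random.default_rng(seed)
    v, spikes = 0.0, []
    for i in range(int(T / dt)):
        # Euler step of tau dv/dt = -v + mu + sigma * white noise
        noise = rng.normal() / np.sqrt(dt)
        v += dt / tau * (-v + mu + sigma * noise)
        if v >= v_th:                 # threshold crossing: spike and reset
            spikes.append(i * dt)
            v = 0.0
    return spikes

print(len(lif_spike_times(mu=1.2, sigma=0.3)), "spikes in 1 s")
```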

  12. Use of regional climate model simulations as an input for hydrological models for the Hindukush-Karakorum-Himalaya region

    NARCIS (Netherlands)

    Akhtar, M.; Ahmad, N.; Booij, Martijn J.

    2009-01-01

    The most important climatological inputs required for the calibration and validation of hydrological models are temperature and precipitation that can be derived from observational records or alternatively from regional climate models (RCMs). In this paper, meteorological station observations and

  13. Calibration of uncertain inputs to computer models using experimentally measured quantities and the BMARS emulator

    International Nuclear Information System (INIS)

    Stripling, H.F.; McClarren, R.G.; Kuranz, C.C.; Grosskopf, M.J.; Rutter, E.; Torralva, B.R.

    2011-01-01

    We present a method for calibrating the uncertain inputs to a computer model using available experimental data. The goal of the procedure is to produce posterior distributions of the uncertain inputs such that when samples from the posteriors are used as inputs to future model runs, the model is more likely to replicate (or predict) the experimental response. The calibration is performed by sampling the space of the uncertain inputs, using the computer model (or, more likely, an emulator for the computer model) to assign weights to the samples, and applying the weights to produce the posterior distributions and generate predictions of new experiments within confidence bounds. The method is similar to Markov chain Monte Carlo (MCMC) calibration methods with independent sampling, with the exception that we generate samples beforehand and replace the candidate acceptance routine with a weighting scheme. We apply our method to the calibration of a Hyades 2D model of laser energy deposition in beryllium. We employ a Bayesian Multivariate Adaptive Regression Splines (BMARS) emulator as a surrogate for Hyades 2D. We treat a range of uncertainties in our system, including uncertainties in the experimental inputs, experimental measurement error, and systematic experimental timing errors. The results of the calibration are posterior distributions that both agree with intuition and improve the accuracy and decrease the uncertainty in experimental predictions. (author)
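    The weighting scheme can be sketched in a few lines: draw the samples once, weight each by the Gaussian likelihood of the measured data under the emulator's prediction, and compute posterior summaries from the weights. The quadratic emulator below is a toy stand-in for BMARS, and the data values are invented.

```python
# Importance-weighting calibration sketch with a toy emulator.
import numpy as np

rng = np.random.default_rng(4)
y_obs, sigma_obs = 3.2, 0.3            # measured quantity and its error

def emulator(x):
    return 1.0 + 2.0 * x - 0.5 * x**2  # stand-in for a BMARS surrogate

x_prior = rng.uniform(0.0, 2.0, size=20000)           # prior samples
w = np.exp(-0.5 * ((y_obs - emulator(x_prior)) / sigma_obs) ** 2)
w /= w.sum()                                          # normalized weights

post_mean = np.sum(w * x_prior)
post_sd = np.sqrt(np.sum(w * (x_prior - post_mean) ** 2))
print(f"posterior x = {post_mean:.3f} +/- {post_sd:.3f}")
```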

  14. Using Random Forests to Select Optimal Input Variables for Short-Term Wind Speed Forecasting Models

    Directory of Open Access Journals (Sweden)

    Hui Wang

    2017-10-01

    Full Text Available Achieving relatively high-accuracy short-term wind speed forecasting estimates is a precondition for the construction and grid-connected operation of wind power forecasting systems for wind farms. Currently, most research is focused on the structure of forecasting models and does not consider the selection of input variables, which can have significant impacts on forecasting performance. This paper presents an input variable selection method for wind speed forecasting models. The candidate input variables for various leading periods are selected, and random forests (RF) is employed to evaluate the importance of all variables as features. The feature subset with the best evaluation performance is selected as the optimal feature set. Then, a kernel-based extreme learning machine is constructed to evaluate the performance of the input variable selection based on RF. The results of the case study show that by removing uncorrelated and redundant features, RF effectively extracts the most strongly correlated set of features from the candidate input variables. By finding the optimal feature combination to represent the original information, RF simplifies the structure of the wind speed forecasting model, shortens the training time required, and substantially improves the model's accuracy and generalization ability, demonstrating that the input variables selected by RF are effective.
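    A minimal sketch of the RF-based selection step, using synthetic data and an illustrative set of lagged candidate inputs; the paper's actual candidate variables and the extreme-learning-machine evaluation stage are not reproduced.

```python
# Rank lagged candidate inputs by random-forest importance (synthetic data).
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(5)
speed = rng.gamma(2.0, 3.0, size=2000)           # synthetic wind-speed series
lags = range(1, 13)                              # candidate inputs: lags 1..12
X = np.column_stack([speed[12 - k:-k] for k in lags])
y = speed[12:]

rf = RandomForestRegressor(n_estimators=200, random_state=0).fit(X, y)
ranked = sorted(zip(lags, rf.feature_importances_),
                key=lambda p: p[1], reverse=True)
selected = [lag for lag, imp in ranked[:4]]      # keep the 4 best-ranked lags
print("selected lags:", selected)
```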

  15. Development of the MARS input model for the Kori Nuclear Unit 1 transient analyzer

    International Nuclear Information System (INIS)

    Hwang, M.; Kim, K. D.; Lee, S. W.; Lee, Y. J.; Lee, W. J.; Chung, B. D.; Jeong, J. J.

    2004-11-01

    KAERI has been developing the 'NSSS transient analyzer' based on best-estimate codes for the Kori Nuclear Unit 1 plant. The MARS and RETRAN codes have been used as the best-estimate codes for the NSSS transient analyzer. Among these codes, the MARS code is adopted for realistic analysis of small- and large-break loss-of-coolant accidents with break sizes greater than 2 inches in diameter. It was therefore necessary to develop the MARS input model for the Kori Nuclear Unit 1 plant. This report includes the input model (hydrodynamic component and heat structure models) requirements and the calculation note for the MARS input data generation for the Kori Nuclear Unit 1 plant analyzer (see the Appendix). In order to confirm the validity of the input data, we performed calculations for a steady state at the 100% power operation condition and for a double-ended cold leg break LOCA. The results of the steady-state calculation agree well with the design data. The results of the LOCA calculation appear reasonable and consistent with those of other best-estimate calculations. Therefore, the MARS input data can be used as a base input deck for the MARS transient analyzer for Kori Nuclear Unit 1.

  16. Sensitivity Analysis of Input Parameters for a Dynamic Food Chain Model DYNACON

    International Nuclear Information System (INIS)

    Hwang, Won Tae; Lee, Geun Chang; Han, Moon Hee; Cho, Gyu Seong

    2000-01-01

    The sensitivity analysis of input parameters for the dynamic food chain model DYNACON was conducted as a function of deposition data for the long-lived radionuclides (137Cs, 90Sr). Also, the influence of input parameters on the short- and long-term contamination of selected foodstuffs (cereals, leafy vegetables, milk) was investigated. The input parameters were sampled using the LHS technique, and their sensitivity indices were represented as PRCC. The sensitivity index was strongly dependent on the contamination period as well as on the deposition data. In the case of deposition during the growing stages of plants, the input parameters associated with contamination by foliar absorption were relatively important in long-term as well as short-term contamination. They were also important in short-term contamination in the case of deposition during the non-growing stages. In long-term contamination, the influence of input parameters associated with foliar absorption decreased, while the influence of input parameters associated with root uptake increased. These phenomena were more remarkable in the case of deposition during non-growing stages than during growing stages, and in the case of 90Sr deposition than 137Cs deposition. In the case of deposition during the growing stages of pasture, the input parameters associated with the characteristics of cattle, such as the feed-milk transfer factor and the daily intake rate of cattle, were relatively important in the contamination of milk
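
    The LHS-plus-PRCC machinery in this record is generic enough to sketch: sample the parameter space with a Latin hypercube, run the model, rank-transform, and correlate residuals after regressing out the other parameters. The three-parameter model below is an invented stand-in for DYNACON.

      import numpy as np
      from scipy.stats import qmc, rankdata

      def model(x):
          # invented stand-in: e.g. foliar, root-uptake and transfer terms
          return 2.0 * x[:, 0] + 0.5 * x[:, 1] ** 2 + 0.1 * x[:, 2]

      X = qmc.LatinHypercube(d=3, seed=0).random(512)   # LHS sample in [0, 1)^3
      y = model(X)

      def prcc(X, y):
          R = np.column_stack([rankdata(c) for c in X.T])
          ry = rankdata(y)
          out = []
          for j in range(R.shape[1]):
              A = np.column_stack([np.delete(R, j, axis=1), np.ones(len(ry))])
              rx = R[:, j] - A @ np.linalg.lstsq(A, R[:, j], rcond=None)[0]
              ryr = ry - A @ np.linalg.lstsq(A, ry, rcond=None)[0]
              out.append(np.corrcoef(rx, ryr)[0, 1])
          return np.array(out)

      print(prcc(X, y))   # one partial rank correlation coefficient per input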

  17. Multi input single output model predictive control of non-linear bio-polymerization process

    Energy Technology Data Exchange (ETDEWEB)

    Arumugasamy, Senthil Kumar; Ahmad, Z. [School of Chemical Engineering, Univerisiti Sains Malaysia, Engineering Campus, Seri Ampangan,14300 Nibong Tebal, Seberang Perai Selatan, Pulau Pinang (Malaysia)

    2015-05-15

    This paper focuses on Multi Input Single Output (MISO) model predictive control of a bio-polymerization process, in which a mechanistic model is developed and linked with a feedforward neural network model to obtain a hybrid model (Mechanistic-FANN) of the lipase-catalyzed ring-opening polymerization of ε-caprolactone (ε-CL) for poly(ε-caprolactone) production. In this research, a state space model was used, in which the inputs to the model were the reactor temperatures and reactor impeller speeds, and the outputs were the molecular weight of the polymer (M_n) and the polymer polydispersity index. The state space model for MISO was created using the System Identification Toolbox of Matlab™. This state space model is used in the MISO MPC. Model predictive control (MPC) has been applied to predict the molecular weight of the biopolymer and consequently to control it. The results show that the MPC is able to track the reference trajectory and gives optimal movement of the manipulated variable.

  18. A quantitative approach to modeling the information processing of NPP operators under input information overload

    International Nuclear Information System (INIS)

    Kim, Jong Hyun; Seong, Poong Hyun

    2002-01-01

    This paper proposes a quantitative approach to modeling the information processing of NPP operators. The aim of this work is to derive the amount of information processed during a certain control task under input information overload. We first develop an information processing model with multiple stages, which captures the information flow. Then the uncertainty of the information is quantified using Conant's model, an information-theoretic approach. We also investigate the applicability of this approach to quantifying the information reduction of operators under input information overload
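
    The uncertainty-quantification step lends itself to a small worked example. The sketch below computes plain Shannon entropy for a skewed alarm distribution and converts it to an information-processing load; this is a generic information-theoretic calculation, not Conant's full multi-stage model, and the alarm frequencies and signal rate are invented.

      import numpy as np

      def shannon_entropy(p):
          p = np.asarray(p, dtype=float)
          p = p[p > 0]
          return -np.sum(p * np.log2(p))   # bits per signal

      p_alarms = [0.7, 0.1, 0.1, 0.05, 0.05]   # assumed signal frequencies
      rate = 20.0                              # assumed signals per minute
      H = shannon_entropy(p_alarms)
      print(f"{H:.2f} bits/signal -> {rate * H:.1f} bits/min to process")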

  19. Human Inferences about Sequences: A Minimal Transition Probability Model.

    Directory of Open Access Journals (Sweden)

    Florent Meyniel

    2016-12-01

    Full Text Available The brain constantly infers the causes of the inputs it receives and uses these inferences to generate statistical expectations about future observations. Experimental evidence for these expectations and their violations includes explicit reports, sequential effects on reaction times, and mismatch or surprise signals recorded in electrophysiology and functional MRI. Here, we explore the hypothesis that the brain acts as a near-optimal inference device that constantly attempts to infer the time-varying matrix of transition probabilities between the stimuli it receives, even when those stimuli are in fact fully unpredictable. This parsimonious Bayesian model, with a single free parameter, accounts for a broad range of findings on surprise signals, sequential effects and the perception of randomness. Notably, it explains the pervasive asymmetry between repetitions and alternations encountered in those studies. Our analysis suggests that a neural machinery for inferring transition probabilities lies at the core of human sequence knowledge.
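
    A heavily simplified sketch of the idea of inferring time-varying transition probabilities with a single forgetting parameter: exponentially leaky transition counts yield a per-observation surprise signal, even for a fully unpredictable binary sequence. This is an illustrative stand-in, not the paper's exact Bayesian model.

      import numpy as np

      def transition_surprise(seq, leak=0.1):
          counts = np.ones((2, 2))                 # Laplace prior on transitions
          surprise = []
          for prev, nxt in zip(seq[:-1], seq[1:]):
              p = counts[prev, nxt] / counts[prev].sum()
              surprise.append(-np.log2(p))         # surprise of each observation
              counts *= (1.0 - leak)               # forget old evidence
              counts[prev, nxt] += 1.0
          return np.array(surprise)

      rng = np.random.default_rng(2)
      seq = rng.integers(0, 2, 200)                # unpredictable stimuli
      print(transition_surprise(seq)[:10])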

  20. System Identification for Nonlinear FOPDT Model with Input-Dependent Dead-Time

    DEFF Research Database (Denmark)

    Sun, Zhen; Yang, Zhenyu

    2011-01-01

    An on-line iterative system identification method for a class of nonlinear FOPDT systems is proposed in this paper. The considered nonlinear FOPDT model is an extension of the standard FOPDT model in that its dead time depends on the input signal and the other parameters are time-dependent....

  1. Regional disaster impact analysis: comparing Input-Output and Computable General Equilibrium models

    NARCIS (Netherlands)

    Koks, E.E.; Carrera, L.; Jonkeren, O.; Aerts, J.C.J.H.; Husby, T.G.; Thissen, M.; Standardi, G.; Mysiak, J.

    2016-01-01

    A variety of models have been applied to assess the economic losses of disasters, of which the most common ones are input-output (IO) and computable general equilibrium (CGE) models. In addition, an increasing number of scholars have developed hybrid approaches: one that combines both or either of

  2. DIMITRI 1.0: Description and application of a dynamic input-output model

    NARCIS (Netherlands)

    Wilting HC; Blom WF; Thomas R; Idenburg AM; LAE

    2001-01-01

    DIMITRI, the Dynamic Input-Output Model to study the Impacts of Technology Related Innovations, was developed in the framework of the RIVM Environment and Economy project to answer questions about interrelationships between economy, technology and the environment. DIMITRI, a meso-economic model,

  3. Logistics flows and enterprise input-output models: aggregate and disaggregate analysis

    NARCIS (Netherlands)

    Albino, V.; Yazan, Devrim; Messeni Petruzzelli, A.; Okogbaa, O.G.

    2011-01-01

    In the present paper, we propose the use of enterprise input-output (EIO) models to describe and analyse the logistics flows considering spatial issues and related environmental effects associated with production and transportation processes. In particular, transportation is modelled as a specific

  4. Modeling the short-run effect of fiscal stimuli on GDP : A new semi-closed input-output model

    NARCIS (Netherlands)

    Chen, Quanrun; Dietzenbacher, Erik; Los, Bart; Yang, Cuihong

    In this study, we propose a new semi-closed input-output model, which reconciles input-output analysis with modern consumption theories. It can simulate changes in household consumption behavior when exogenous stimulus policies lead to higher disposable income levels. It is useful for quantifying

  5. Modeling the short-run effect of fiscal stimuli on GDP : A new semi-closed input-output model

    NARCIS (Netherlands)

    Chen, Quanrun; Dietzenbacher, Erik; Los, Bart; Yang, Cuihong

    2016-01-01

    In this study, we propose a new semi-closed input-output model, which reconciles input-output analysis with modern consumption theories. It can simulate changes in household consumption behavior when exogenous stimulus policies lead to higher disposable income levels. It is useful for quantifying

  6. Input modelling of ASSERT-PV V2R8M1 for RUFIC fuel bundle

    Energy Technology Data Exchange (ETDEWEB)

    Park, Joo Hwan; Suk, Ho Chun

    2001-02-01

    This report describes the input modelling for the subchannel analysis of the CANFLEX-RU (RUFIC) fuel bundle, which has been developed as an advanced fuel bundle for the CANDU-6 reactor, using the ASSERT-PV V2R8M1 code. The execution file of the ASSERT-PV V2R8M1 code was recently transferred from AECL under the JRDC agreement between KAERI and AECL. ASSERT-PV V2R8M1, which is quite different from the COBRA-IV-i code, has been developed for thermalhydraulic analysis of CANDU-6 fuel channels by the subchannel analysis method and updated so that the 43-element CANDU fuel geometry can be applied. Hence, the ASSERT code can be applied to the subchannel analysis of the RUFIC fuel bundle. The present report was prepared for the ASSERT input modelling of the RUFIC fuel bundle. Since the ASSERT results depend strongly on the user's input modelling, the calculation results may differ considerably among users' input models. The objective of the present report is to prepare a detailed description of the background information for the input data and thereby lend credibility to the calculation results.

  8. Input modelling of ASSERT-PV V2R8M1 for RUFIC fuel bundle

    International Nuclear Information System (INIS)

    Park, Joo Hwan; Suk, Ho Chun

    2001-02-01

    This report describes the input modelling for the subchannel analysis of the CANFLEX-RU (RUFIC) fuel bundle, which has been developed as an advanced fuel bundle for the CANDU-6 reactor, using the ASSERT-PV V2R8M1 code. The execution file of the ASSERT-PV V2R8M1 code was recently transferred from AECL under the JRDC agreement between KAERI and AECL. ASSERT-PV V2R8M1, which is quite different from the COBRA-IV-i code, has been developed for thermalhydraulic analysis of CANDU-6 fuel channels by the subchannel analysis method and updated so that the 43-element CANDU fuel geometry can be applied. Hence, the ASSERT code can be applied to the subchannel analysis of the RUFIC fuel bundle. The present report was prepared for the ASSERT input modelling of the RUFIC fuel bundle. Since the ASSERT results depend strongly on the user's input modelling, the calculation results may differ considerably among users' input models. The objective of the present report is to prepare a detailed description of the background information for the input data and thereby lend credibility to the calculation results

  9. Hierarchical Bayesian modelling of mobility metrics for hazard model input calibration

    Science.gov (United States)

    Calder, Eliza; Ogburn, Sarah; Spiller, Elaine; Rutarindwa, Regis; Berger, Jim

    2015-04-01

    In this work we present a method to constrain flow mobility input parameters for pyroclastic flow models using hierarchical Bayesian modeling of standard mobility metrics such as H/L and flow volume. The advantage of hierarchical modeling is that it can leverage the information in a global dataset for a particular mobility metric in order to reduce the uncertainty in modeling an individual volcano, which is especially important where individual volcanoes have only sparse datasets. We use compiled pyroclastic flow runout data from the Colima, Merapi, Soufriere Hills, Unzen and Semeru volcanoes, presented in the open-source database FlowDat (https://vhub.org/groups/massflowdatabase). While the exact relationship between flow volume and friction varies somewhat between volcanoes, dome collapse flows originating from the same volcano exhibit similar mobility relationships. Instead of fitting separate regression models for each volcano's dataset, we use a variation of the hierarchical linear model (Kass and Steffey, 1989). The model has a hierarchical structure with two levels: all dome collapse flows, and dome collapse flows at specific volcanoes. The hierarchical model allows us to assume that the flows at specific volcanoes share a common distribution of regression slopes, then solves for that distribution. We present comparisons of the 95% confidence intervals on the individual regression lines for the dataset from each volcano as well as those obtained from the hierarchical model. The results clearly demonstrate the advantage of considering global datasets using this technique. The technique is demonstrated here for mobility metrics, but can be applied to many other global datasets of volcanic parameters. In particular, such methods can provide a means to better constrain parameters for volcanoes for which we have only sparse data, a ubiquitous problem in volcanology.
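
    The partial-pooling logic of the hierarchical model can be sketched with a simple empirical-Bayes shrinkage of per-volcano regression slopes toward a group mean, with sparsely sampled volcanoes shrunk the most. All numbers below are synthetic, and the full model in the record is fit with proper hierarchical Bayesian machinery rather than this two-step approximation.

      import numpy as np

      rng = np.random.default_rng(3)
      # synthetic per-volcano data: slope of a mobility metric vs log-volume
      data = []
      for s in (-0.25, -0.20, -0.30):              # assumed true slopes
          n = rng.integers(5, 40)                  # unequal record counts
          x = rng.normal(0, 1, n)
          data.append((x, s * x + rng.normal(0, 0.1, n)))

      # per-volcano least-squares slopes and their sampling variances
      b, v = [], []
      for x, y in data:
          slope = np.sum(x * y) / np.sum(x * x)
          resid = y - slope * x
          b.append(slope)
          v.append(np.sum(resid ** 2) / (len(x) - 1) / np.sum(x * x))
      b, v = np.array(b), np.array(v)

      # shrink each slope toward the group mean by its relative precision
      tau2 = max(np.var(b) - v.mean(), 1e-6)       # between-volcano variance
      w = tau2 / (tau2 + v)
      print(b, w * b + (1 - w) * b.mean())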

  10. Development of an Input Model to MELCOR 1.8.5 for the Oskarshamn 3 BWR

    Energy Technology Data Exchange (ETDEWEB)

    Nilsson, Lars [Lentek, Nykoeping (Sweden)

    2006-05-15

    An input model has been prepared for the MELCOR 1.8.5 code for the Swedish Oskarshamn 3 Boiling Water Reactor (O3). This report describes the modelling work and the various files which comprise the input deck. Input data are mainly based on original drawings and system descriptions made available by courtesy of OKG AB. Comparison and checking of some primary system data were made against an O3 input file for the SCDAP/RELAP5 code that was used in the SARA project. Useful information was also obtained from the FSAR (Final Safety Analysis Report) for O3 and the SKI report '2003 Stoerningshandboken BWR'. The input models the O3 reactor in its current state, with an operating power of 3300 MW(th). One aim of this work is that the MELCOR input could also be used for power upgrading studies. All fuel assemblies are thus assumed to consist of the new Westinghouse-Atom SVEA-96 Optima2 fuel. MELCOR is a severe accident code developed by Sandia National Laboratories under contract from the U.S. Nuclear Regulatory Commission (NRC). MELCOR is a successor to the STCP (Source Term Code Package) and thus has a long evolutionary history. The input described here is adapted to version 1.8.5, the latest available when the work began. It was released in the year 2000, but a new version, 1.8.6, was distributed recently; conversion to the new version is recommended. (During the writing of this report still another code version, MELCOR 2.0, has been announced for release shortly.) In version 1.8.5 there is an option to describe the accident progression in the lower plenum and the melt-through of the reactor vessel bottom in more detail by use of the Bottom Head (BH) package, developed by Oak Ridge National Laboratory especially for BWRs, in addition to the ordinary MELCOR COR package. Since problems arose when running with the BH input, two versions of the O3 input deck were produced, a NONBH and a BH deck. The BH package is no longer a separate package in the new 1

  11. Multiple-Input Subject-Specific Modeling of Plasma Glucose Concentration for Feedforward Control.

    Science.gov (United States)

    Kotz, Kaylee; Cinar, Ali; Mei, Yong; Roggendorf, Amy; Littlejohn, Elizabeth; Quinn, Laurie; Rollins, Derrick K

    2014-11-26

    The ability to accurately develop subject-specific input causation models for blood glucose concentration (BGC) over large input sets can have a significant impact on tightening control for insulin-dependent diabetes. More specifically, for Type 1 diabetics (T1Ds), it can lead to an effective artificial pancreas (i.e., an automatic control system that delivers exogenous insulin) under extreme changes in critical disturbances. These disturbances include food consumption, activity variations, and physiological stress changes. Thus, this paper presents a free-living, outpatient, multiple-input modeling method for BGC with strong causation attributes that is stable and guards against overfitting, to provide an effective modeling approach for feedforward control (FFC). This approach is a Wiener block-oriented methodology, which has unique attributes for meeting critical requirements for effective, long-term FFC.

  12. A Model to Determinate the Influence of Probability Density Functions (PDFs) of Input Quantities in Measurements

    Directory of Open Access Journals (Sweden)

    Jesús Caja

    2016-06-01

    Full Text Available A method for analysing the effect of different hypotheses about the types of the input quantity distributions in a measurement model is presented here, so that the developed algorithms can be simplified. As an example, a model of indirect measurements with an optical coordinate measuring machine was employed to evaluate these different hypotheses. As a result of the different experiments, the assumption that the different variables of the model can be modelled as normal distributions is confirmed.

  13. How model and input uncertainty impact maize yield simulations in West Africa

    Science.gov (United States)

    Waha, Katharina; Huth, Neil; Carberry, Peter; Wang, Enli

    2015-02-01

    Crop models are common tools for simulating crop yields and crop production in studies on food security and global change. However, various uncertainties exist, not only in the model design and model parameters, but also, and perhaps even more importantly, in the soil, climate and management input data. We analyze the performance of the point-scale crop model APSIM and the global-scale crop model LPJmL with different climate and soil conditions under different agricultural management in the low-input maize-growing areas of Burkina Faso, West Africa. We test the models' response to different levels of input information, from little to detailed information on soil, climate (1961-2000) and agricultural management, and compare the models' ability to represent the observed spatial (between locations) and temporal (between years) variability in crop yields. We found that the resolution of the soil, climate and management information influences the simulated crop yields in both models. However, the difference between the models is larger than between input datasets, and larger between simulations with different climate and management information than between simulations with different soil information. The observed spatial variability can be represented well by both models even with little information on soils and management, but APSIM simulates a higher variation between single locations than LPJmL. The agreement of simulated and observed temporal variability is lower due to non-climatic factors, e.g., investment in agricultural research and development between 1987 and 1991 in Burkina Faso, which resulted in a doubling of maize yields. The findings of our study highlight the importance of scale and model choice and show that the most detailed input data do not necessarily improve model performance.

  14. Optimal input shaping for Fisher identifiability of control-oriented lithium-ion battery models

    Science.gov (United States)

    Rothenberger, Michael J.

    This dissertation examines the fundamental challenge of optimally shaping input trajectories to maximize parameter identifiability of control-oriented lithium-ion battery models. Identifiability is a property from information theory that determines the solvability of parameter estimation for mathematical models using input-output measurements. This dissertation creates a framework that exploits the Fisher information metric to quantify the level of battery parameter identifiability, optimizes this metric through input shaping, and facilitates faster and more accurate estimation. The popularity of lithium-ion batteries is growing significantly in the energy storage domain, especially for stationary and transportation applications. While these cells have excellent power and energy densities, they are plagued with safety and lifespan concerns. These concerns are often resolved in the industry through conservative current and voltage operating limits, which reduce the overall performance and still lack robustness in detecting catastrophic failure modes. New advances in automotive battery management systems mitigate these challenges through the incorporation of model-based control to increase performance, safety, and lifespan. To achieve these goals, model-based control requires accurate parameterization of the battery model. While many groups in the literature study a variety of methods to perform battery parameter estimation, a fundamental issue of poor parameter identifiability remains apparent for lithium-ion battery models. This fundamental challenge of battery identifiability is studied extensively in the literature, and some groups are even approaching the problem of improving the ability to estimate the model parameters. The first approach is to add additional sensors to the battery to gain more information that is used for estimation. The other main approach is to shape the input trajectories to increase the amount of information that can be gained from input
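
    The core computation in this line of work, scoring a candidate input trajectory by the Fisher information it yields about model parameters, can be sketched for a toy first-order model. The dynamics, noise level, and D-optimality criterion below are illustrative assumptions, not the dissertation's electrochemical battery model.

      import numpy as np

      def simulate(theta, u):
          # toy first-order response y[k+1] = a*y[k] + b*u[k] (assumed model)
          a, b = theta
          y = np.zeros(len(u))
          for k in range(len(u) - 1):
              y[k + 1] = a * y[k] + b * u[k]
          return y

      def fisher_information(theta, u, sigma=0.01, eps=1e-6):
          # finite-difference output sensitivities dy/dtheta_i
          S = []
          for i in range(len(theta)):
              tp = np.array(theta, float); tp[i] += eps
              tm = np.array(theta, float); tm[i] -= eps
              S.append((simulate(tp, u) - simulate(tm, u)) / (2 * eps))
          S = np.array(S)
          return S @ S.T / sigma ** 2     # FIM under i.i.d. Gaussian noise

      theta = [0.95, 0.05]
      u_flat = np.ones(200)                           # bland, constant input
      u_rich = np.sign(np.sin(0.3 * np.arange(200)))  # switched, information-rich input
      for u in (u_flat, u_rich):
          # D-optimality: the richer input is expected to give a larger determinant
          print(np.linalg.det(fisher_information(theta, u)))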

  15. Sensitivity of a complex urban air quality model to input data

    International Nuclear Information System (INIS)

    Seigneur, C.; Tesche, T.W.; Roth, P.M.; Reid, L.E.

    1981-01-01

    In recent years, urban-scale photochemical simulation models have been developed that are of practical value for predicting air quality and analyzing the impacts of alternative emission control strategies. Although the performance of some urban-scale models appears to be acceptable, the demanding data requirements of such models have prompted concern about the costs of data acquisition, which might be high enough to preclude the use of photochemical models for many urban areas. To explore this issue, sensitivity studies with the Systems Applications, Inc. (SAI) Airshed Model, a grid-based, time-dependent photochemical dispersion model, have been carried out for the Los Angeles basin. Reductions in the amount and quality of meteorological, air quality and emission data, as well as modifications of the model grid structure, have been analyzed. This paper presents and interprets the results of 22 sensitivity studies. A sensitivity-uncertainty index is defined to rank input data needs for an urban photochemical model. The index takes into account the sensitivity of model predictions to the amount of input data, the costs of data acquisition, and the uncertainties in the air quality model input variables. The results of these sensitivity studies are considered in light of the limitations of specific attributes of the Los Angeles basin and of the modeling conditions (e.g., choice of wind model, length of simulation time). The extent to which the results may be applied to other urban areas is also discussed

  16. The role of additive neurogenesis and synaptic plasticity in a hippocampal memory model with grid-cell like input.

    Directory of Open Access Journals (Sweden)

    Peter A Appleby

    Full Text Available Recently, we presented a study of adult neurogenesis in a simplified hippocampal memory model. The network was required to encode and decode memory patterns despite changing input statistics. We showed that additive neurogenesis was a more effective adaptation strategy compared to neuronal turnover and conventional synaptic plasticity as it allowed the network to respond to changes in the input statistics while preserving representations of earlier environments. Here we extend our model to include realistic, spatially driven input firing patterns in the form of grid cells in the entorhinal cortex. We compare network performance across a sequence of spatial environments using three distinct adaptation strategies: conventional synaptic plasticity, where the network is of fixed size but the connectivity is plastic; neuronal turnover, where the network is of fixed size but units in the network may die and be replaced; and additive neurogenesis, where the network starts out with fewer initial units but grows over time. We confirm that additive neurogenesis is a superior adaptation strategy when using realistic, spatially structured input patterns. We then show that a more biologically plausible neurogenesis rule that incorporates cell death and enhanced plasticity of new granule cells has an overall performance significantly better than any one of the three individual strategies operating alone. This adaptation rule can be tailored to maximise performance of the network when operating as either a short- or long-term memory store. We also examine the time course of adult neurogenesis over the lifetime of an animal raised under different hypothetical rearing conditions. These growth profiles have several distinct features that form a theoretical prediction that could be tested experimentally. Finally, we show that place cells can emerge and refine in a realistic manner in our model as a direct result of the sparsification performed by the dentate gyrus

  17. Recurrent network models for perfect temporal integration of fluctuating correlated inputs.

    Directory of Open Access Journals (Sweden)

    Hiroshi Okamoto

    2009-06-01

    Full Text Available Temporal integration of input is essential to the accumulation of information in various cognitive and behavioral processes, and gradually increasing neuronal activity, typically occurring within a range of seconds, is considered to reflect such computation by the brain. Some psychological evidence suggests that temporal integration by the brain is nearly perfect, that is, the integration is non-leaky, and the output of a neural integrator is accurately proportional to the strength of input. Neural mechanisms of perfect temporal integration, however, remain largely unknown. Here, we propose a recurrent network model of cortical neurons that perfectly integrates partially correlated, irregular input spike trains. We demonstrate that the rate of this temporal integration changes proportionately to the probability of spike coincidences in synaptic inputs. We analytically prove that this highly accurate integration of synaptic inputs emerges from integration of the variance of the fluctuating synaptic inputs, when their mean component is kept constant. Highly irregular neuronal firing and spike coincidences are the major features of cortical activity, but they have been separately addressed so far. Our results suggest that the efficient protocol of information integration by cortical networks essentially requires both features and hence is heterotic.

  18. Variance-based sensitivity indices for stochastic models with correlated inputs

    Energy Technology Data Exchange (ETDEWEB)

    Kala, Zdeněk [Brno University of Technology, Faculty of Civil Engineering, Department of Structural Mechanics Veveří St. 95, ZIP 602 00, Brno (Czech Republic)

    2015-03-10

    The goal of this article is the formulation of the principles of one of the possible strategies in implementing correlation between input random variables so as to be usable for algorithm development and the evaluation of Sobol’s sensitivity analysis. With regard to the types of stochastic computational models, which are commonly found in structural mechanics, an algorithm was designed for effective use in conjunction with Monte Carlo methods. Sensitivity indices are evaluated for all possible permutations of the decorrelation procedures for input parameters. The evaluation of Sobol’s sensitivity coefficients is illustrated on an example in which a computational model was used for the analysis of the resistance of a steel bar in tension with statistically dependent input geometric characteristics.
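
    For independent inputs, the first-order Sobol' indices at the heart of this record can be estimated with the standard two-matrix (pick-freeze) scheme sketched below. Handling correlated inputs, which is the article's actual contribution, additionally requires the decorrelation permutations it describes (e.g., mapping independent normals through a Cholesky factor); that step is omitted here. The model is an invented stand-in for the steel-bar resistance example.

      import numpy as np

      def f(x):
          # invented stand-in: resistance ~ area * yield strength + geometry term
          return x[:, 0] * x[:, 1] + 0.5 * x[:, 2]

      rng = np.random.default_rng(4)
      N, d = 20000, 3
      A = rng.normal(1.0, 0.1, (N, d))
      B = rng.normal(1.0, 0.1, (N, d))
      fA, fB = f(A), f(B)
      var = np.var(np.concatenate([fA, fB]))

      for i in range(d):                 # first-order index of each input
          ABi = A.copy()
          ABi[:, i] = B[:, i]
          print(f"S_{i} =", np.mean(fB * (f(ABi) - fA)) / var)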

  19. Variance-based sensitivity indices for stochastic models with correlated inputs

    International Nuclear Information System (INIS)

    Kala, Zdeněk

    2015-01-01

    The goal of this article is the formulation of the principles of one of the possible strategies in implementing correlation between input random variables so as to be usable for algorithm development and the evaluation of Sobol’s sensitivity analysis. With regard to the types of stochastic computational models, which are commonly found in structural mechanics, an algorithm was designed for effective use in conjunction with Monte Carlo methods. Sensitivity indices are evaluated for all possible permutations of the decorrelation procedures for input parameters. The evaluation of Sobol’s sensitivity coefficients is illustrated on an example in which a computational model was used for the analysis of the resistance of a steel bar in tension with statistically dependent input geometric characteristics

  20. COGEDIF - automatic TORT and DORT input generation from MORSE combinatorial geometry models

    International Nuclear Information System (INIS)

    Castelli, R.A.; Barnett, D.A.

    1992-01-01

    COGEDIF is an interactive utility which was developed to automate the preparation of two- and three-dimensional geometry inputs for the ORNL TORT and DORT discrete ordinates programs from complex three-dimensional models described using the MORSE combinatorial geometry input description. The program creates either continuous or disjoint mesh input based upon the intersections of user-defined meshing planes and the MORSE body definitions. The composition overlay of the combinatorial geometry is used to create the composition mapping of the discretized geometry, based upon the composition found at the centroid of each of the mesh cells. This program simplifies the process of using discrete orthogonal mesh cells to represent non-orthogonal geometries in large models which require mesh sizes of the order of a million cells or more. The program was specifically written to take advantage of the new TORT disjoint mesh option which was developed at ORNL

  1. Statistical Analysis of Input Parameters Impact on the Modelling of Underground Structures

    Directory of Open Access Journals (Sweden)

    M. Hilar

    2008-01-01

    Full Text Available The behaviour of a geomechanical model and its final results are strongly affected by the input parameters. As the inherent variability of a rock mass is difficult to model, engineers are frequently forced to face the question “Which input values should be used for analyses?” The correct answer to such a question requires a probabilistic approach, considering the uncertainty of site investigations and variation in the ground. This paper describes the statistical analysis of input parameters for FEM calculations of traffic tunnels in the city of Prague. At the beginning of the paper, the inaccuracy in geotechnical modelling is discussed. In the following part, Fuzzy techniques are summarized, including information about an application of Fuzzy arithmetic to the shotcrete parameters. The next part of the paper is focused on stochastic simulation: Monte Carlo simulation is briefly described, and the Latin Hypercube method is described in more detail. At the end, several practical examples are described: a statistical analysis of the input parameters for the numerical modelling of the completed Mrázovka tunnel (profile West Tunnel Tube, km 5.160) and the modelling of the Špejchar – Pelc Tyrolka tunnel under construction.

  2. Input data requirements for performance modelling and monitoring of photovoltaic plants

    DEFF Research Database (Denmark)

    Gavriluta, Anamaria Florina; Spataru, Sergiu; Sera, Dezso

    2018-01-01

    This work investigates the input data requirements in the context of performance modeling of thin-film photovoltaic (PV) systems. The analysis focuses on the PVWatts performance model, well suited for on-line performance monitoring of PV strings, due to its low number of parameters and high......, modelling the performance of the PV modules at high irradiances requires a dataset of only a few hundred samples in order to obtain a power estimation accuracy of ~1-2%.
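
    For reference, the PVWatts DC power equation that makes the model so parameter-lean is easy to state: DC power scales linearly with plane-of-array irradiance, corrected by cell temperature. A sketch with illustrative module values follows (the nameplate rating and temperature coefficient below are assumptions, not values from the record):

      import numpy as np

      def pvwatts_dc(g_poa, t_cell, pdc0=250.0, gamma=-0.0045):
          # pdc0: nameplate DC rating (W); gamma: power temperature coefficient
          # (1/degC); 1000 W/m^2 and 25 degC are the reference conditions
          return (g_poa / 1000.0) * pdc0 * (1.0 + gamma * (t_cell - 25.0))

      g = np.array([200.0, 600.0, 1000.0])   # plane-of-array irradiance, W/m^2
      t = np.array([15.0, 35.0, 45.0])       # cell temperature, degC
      print(pvwatts_dc(g, t))                # modelled DC power, W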

  3. Input-output and energy demand models for Ireland: Data collection report. Part 1: EXPLOR

    Energy Technology Data Exchange (ETDEWEB)

    Henry, E W; Scott, S

    1981-01-01

    Data are presented in support of EXPLOR, an input-output economic model for Ireland. The data follow the listing of exogenous data-sets used by Battelle in document X11/515/77. Data are given for 1974, 1980, and 1985 and consist of household consumption, final demand-production, and commodity prices. (ACR)

  4. Comparison of plasma input and reference tissue models for analysing [11C]flumazenil studies

    NARCIS (Netherlands)

    Klumpers, Ursula M. H.; Veltman, Dick J.; Boellaard, Ronald; Comans, Emile F.; Zuketto, Cassandra; Yaqub, Maqsood; Mourik, Jurgen E. M.; Lubberink, Mark; Hoogendijk, Witte J. G.; Lammertsma, Adriaan A.

    2008-01-01

    A single-tissue compartment model with plasma input is the established method for analysing [11C]flumazenil ([11C]FMZ) studies. However, arterial cannulation and measurement of metabolites are time-consuming. Therefore, a reference tissue approach is appealing, but this approach has not been

  5. Input-Output model for waste management plan for Nigeria | Njoku ...

    African Journals Online (AJOL)

    An input-output model for a waste management plan has been developed for Nigeria, based on the Leontief concept and life cycle analysis. Waste was considered as a source of pollution, a loss of resources, and a source of greenhouse gas emissions from bio-chemical treatment and decomposition, with negative impact on the ...

  6. The economic impact of multifunctional agriculture in Dutch regions: An input-output model

    NARCIS (Netherlands)

    Heringa, P.W.; Heide, van der C.M.; Heijman, W.J.M.

    2013-01-01

    Multifunctional agriculture is a broad concept lacking a precise definition. Moreover, little is known about the societal importance of multifunctional agriculture. This paper is an empirical attempt to fill this gap. To this end, an input-output model was constructed for multifunctional agriculture

  7. The economic impact of multifunctional agriculture in The Netherlands: A regional input-output model

    NARCIS (Netherlands)

    Heringa, P.W.; Heide, van der C.M.; Heijman, W.J.M.

    2012-01-01

    Multifunctional agriculture is a broad concept lacking a precise and uniform definition. Moreover, little is known about the societal importance of multifunctional agriculture. This paper is an empirical attempt to fill this gap. To this end, an input-output model is constructed for multifunctional

  8. Prediction of Chl-a concentrations in an eutrophic lake using ANN models with hybrid inputs

    Science.gov (United States)

    Aksoy, A.; Yuzugullu, O.

    2017-12-01

    Chlorophyll-a (Chl-a) concentrations in water bodies exhibit both spatial and temporal variations. As a result, frequent sampling is required with a higher number of samples. This motivates the use of remote sensing as a monitoring tool. Yet, the prediction performance of models that convert radiance values into Chl-a concentrations can be poor in shallow lakes. In this study, Chl-a concentrations in Lake Eymir, a shallow eutrophic lake in Ankara (Turkey), are determined using artificial neural network (ANN) models that use hybrid inputs composed of water quality and meteorological data as well as remotely sensed radiance values to improve prediction performance. Following a screening based on multi-collinearity and principal component analysis (PCA), dissolved-oxygen concentration (DO), pH, turbidity, and humidity were selected among several parameters as the constituents of the hybrid input dataset. Radiance values were obtained from the QuickBird-2 satellite. Conversion of the hybrid input into Chl-a concentrations was studied for two different periods in the lake. The ANN models were successful in predicting Chl-a concentrations. Yet, prediction performance declined for low Chl-a concentrations in the lake. In general, models with hybrid inputs were superior to the ones that solely used remotely sensed data.
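
    A hedged sketch of the hybrid-input idea: a small multilayer perceptron regressing Chl-a on a feature vector mixing a radiance band with in-situ water quality and meteorological variables. The data are synthetic and the network size arbitrary; the record's actual ANN architecture and satellite preprocessing are not reproduced.

      import numpy as np
      from sklearn.neural_network import MLPRegressor
      from sklearn.pipeline import make_pipeline
      from sklearn.preprocessing import StandardScaler

      rng = np.random.default_rng(5)
      n = 300
      # assumed hybrid inputs: [radiance band, DO, pH, turbidity, humidity]
      X = rng.normal(size=(n, 5))
      chl_a = 3.0 + 1.5 * X[:, 0] - 0.8 * X[:, 3] + 0.2 * rng.normal(size=n)

      model = make_pipeline(
          StandardScaler(),
          MLPRegressor(hidden_layer_sizes=(16,), max_iter=2000, random_state=0),
      )
      model.fit(X[:200], chl_a[:200])
      print("held-out R^2:", model.score(X[200:], chl_a[200:]))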

  9. Linear and quadratic models of point process systems: contributions of patterned input to output.

    Science.gov (United States)

    Lindsay, K A; Rosenberg, J R

    2012-08-01

    In the 1880's Volterra characterised a nonlinear system using a functional series connecting continuous input and continuous output. Norbert Wiener, in the 1940's, circumvented problems associated with the application of Volterra series to physical problems by deriving from it a new series of terms that are mutually uncorrelated with respect to Gaussian processes. Subsequently, Brillinger, in the 1970's, introduced a point-process analogue of Volterra's series connecting point-process inputs to the instantaneous rate of point-process output. We derive here a new series from this analogue in which its terms are mutually uncorrelated with respect to Poisson processes. This new series expresses how patterned input in a spike train, represented by third-order cross-cumulants, is converted into the instantaneous rate of an output point-process. Given experimental records of suitable duration, the contribution of arbitrary patterned input to an output process can, in principle, be determined. Solutions for linear and quadratic point-process models with one and two inputs and a single output are investigated. Our theoretical results are applied to isolated muscle spindle data in which the spike trains from the primary and secondary endings from the same muscle spindle are recorded in response to stimulation of one and then two static fusimotor axons in the absence and presence of a random length change imposed on the parent muscle. For a fixed mean rate of input spikes, the analysis of the experimental data makes explicit which patterns of two input spikes contribute to an output spike. Copyright © 2012 Elsevier Ltd. All rights reserved.

  10. Design, Fabrication, and Modeling of a Novel Dual-Axis Control Input PZT Gyroscope.

    Science.gov (United States)

    Chang, Cheng-Yang; Chen, Tsung-Lin

    2017-10-31

    Conventional gyroscopes are equipped with a single-axis control input, limiting their performance. Although researchers have proposed control algorithms with dual-axis control inputs to improve gyroscope performance, most have verified the control algorithms through numerical simulations because they lacked practical devices with dual-axis control inputs. The aim of this study was to design a piezoelectric gyroscope equipped with a dual-axis control input so that researchers may experimentally verify those control algorithms in future. Designing a piezoelectric gyroscope with a dual-axis control input is more difficult than designing a conventional gyroscope because the control input must be effective over a broad frequency range to compensate for imperfections, and the multiple mode shapes in flexural deformations complicate the relation between flexural deformation and the proof mass position. This study solved these problems by using a lead zirconate titanate (PZT) material, introducing additional electrodes for shielding, developing an optimal electrode pattern, and performing calibrations of undesired couplings. The results indicated that the fabricated device could be operated at 5.5±1 kHz to perform dual-axis actuations and position measurements. The calibration of the fabricated device was completed by system identifications of a new dynamic model including gyroscopic motions, electromechanical coupling, mechanical coupling, electrostatic coupling, and capacitive output impedance. Finally, without the assistance of control algorithms, the "open loop sensitivity" of the fabricated gyroscope was 1.82 μV/deg/s with a nonlinearity of 9.5% full-scale output. This sensitivity is comparable with those of other PZT gyroscopes with single-axis control inputs.

  11. Design, Fabrication, and Modeling of a Novel Dual-Axis Control Input PZT Gyroscope

    Directory of Open Access Journals (Sweden)

    Cheng-Yang Chang

    2017-10-01

    Full Text Available Conventional gyroscopes are equipped with a single-axis control input, limiting their performance. Although researchers have proposed control algorithms with dual-axis control inputs to improve gyroscope performance, most have verified the control algorithms through numerical simulations because they lacked practical devices with dual-axis control inputs. The aim of this study was to design a piezoelectric gyroscope equipped with a dual-axis control input so that researchers may experimentally verify those control algorithms in future. Designing a piezoelectric gyroscope with a dual-axis control input is more difficult than designing a conventional gyroscope because the control input must be effective over a broad frequency range to compensate for imperfections, and the multiple mode shapes in flexural deformations complicate the relation between flexural deformation and the proof mass position. This study solved these problems by using a lead zirconate titanate (PZT) material, introducing additional electrodes for shielding, developing an optimal electrode pattern, and performing calibrations of undesired couplings. The results indicated that the fabricated device could be operated at 5.5±1 kHz to perform dual-axis actuations and position measurements. The calibration of the fabricated device was completed by system identifications of a new dynamic model including gyroscopic motions, electromechanical coupling, mechanical coupling, electrostatic coupling, and capacitive output impedance. Finally, without the assistance of control algorithms, the “open loop sensitivity” of the fabricated gyroscope was 1.82 μV/deg/s with a nonlinearity of 9.5% full-scale output. This sensitivity is comparable with those of other PZT gyroscopes with single-axis control inputs.

  12. Input-constrained model predictive control via the alternating direction method of multipliers

    DEFF Research Database (Denmark)

    Sokoler, Leo Emil; Frison, Gianluca; Andersen, Martin S.

    2014-01-01

    This paper presents an algorithm, based on the alternating direction method of multipliers, for the convex optimal control problem arising in input-constrained model predictive control. We develop an efficient implementation of the algorithm for the extended linear quadratic control problem (LQCP) with input and input-rate limits. The algorithm alternates between solving an extended LQCP and a highly structured quadratic program. These quadratic programs are solved using a Riccati iteration procedure, and a structure-exploiting interior-point method, respectively. The computational cost per iteration is quadratic in the dimensions of the controlled system, and linear in the length of the prediction horizon. Simulations show that the approach proposed in this paper is more than an order of magnitude faster than several state-of-the-art quadratic programming algorithms, and that the difference in computation...
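
    The alternation the abstract describes, an equality-constrained step followed by a projection, has the same shape as generic ADMM for a box-constrained quadratic program, sketched below. This is a plain dense-matrix illustration with invented numbers; the paper's implementation instead exploits the MPC structure through Riccati and interior-point solvers.

      import numpy as np

      def admm_box_qp(H, q, lb, ub, rho=1.0, iters=200):
          # min 0.5 u'Hu + q'u  subject to  lb <= u <= ub
          n = len(q)
          u = np.zeros(n); z = np.zeros(n); y = np.zeros(n)
          M = H + rho * np.eye(n)
          for _ in range(iters):
              u = np.linalg.solve(M, -q + rho * (z - y))  # unconstrained step
              z = np.clip(u + y, lb, ub)                  # project onto the box
              y = y + u - z                               # scaled dual update
          return z

      H = np.array([[2.0, 0.5], [0.5, 1.0]])
      q = np.array([-1.0, -1.0])
      print(admm_box_qp(H, q, lb=-0.3 * np.ones(2), ub=0.3 * np.ones(2)))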

  13. Development of the MARS input model for Ulchin 1/2 transient analyzer

    International Nuclear Information System (INIS)

    Jeong, J. J.; Kim, K. D.; Lee, S. W.; Lee, Y. J.; Chung, B. D.; Hwang, M.

    2003-03-01

    KAERI has been developing the NSSS transient analyzer based on best-estimate codes for the Ulchin 1/2 plants. The MARS and RETRAN codes are used as the best-estimate codes for the NSSS transient analyzer. Of the two codes, the MARS code is to be used for realistic analysis of small- and large-break loss-of-coolant accidents, of which the break size is greater than 2 inches in diameter. This report includes the input model requirements and the calculation note for the Ulchin 1/2 MARS input data generation (see the Appendix). In order to confirm the validity of the input data, we performed calculations for a steady state at the 100% power operation condition and for a double-ended cold leg break LOCA. The results of the steady-state calculation agree well with the design data. The results of the LOCA calculation seem reasonable and consistent with those of other best-estimate calculations. Therefore, the MARS input data can be used as a base input deck for the MARS transient analyzer for Ulchin 1/2

  14. Development of the MARS input model for Ulchin 3/4 transient analyzer

    International Nuclear Information System (INIS)

    Jeong, J. J.; Kim, K. D.; Lee, S. W.; Lee, Y. J.; Lee, W. J.; Chung, B. D.; Hwang, M. G.

    2003-12-01

    KAERI has been developing the NSSS transient analyzer based on best-estimate codes. The MARS and RETRAN codes are adopted as the best-estimate codes for the NSSS transient analyzer. Of these two codes, the MARS code is to be used for realistic analysis of small- and large-break loss-of-coolant accidents, of which the break size is greater than 2 inches in diameter. This report includes the MARS input model requirements and the calculation note for the MARS input data generation (see the Appendix) for the Ulchin 3/4 plant analyzer. In order to confirm the validity of the input data, we performed calculations for a steady state at the 100% power operation condition and for a double-ended cold leg break LOCA. The results of the steady-state calculation agree well with the design data. The results of the LOCA calculation seem reasonable and consistent with those of other best-estimate calculations. Therefore, the MARS input data can be used as a base input deck for the MARS transient analyzer for Ulchin 3/4

  15. ANALYSIS OF THE BANDUNG CHANGES EXCELLENT POTENTIAL THROUGH INPUT-OUTPUT MODEL USING INDEX LE MASNE

    Directory of Open Access Journals (Sweden)

    Teti Sofia Yanti

    2017-03-01

    Full Text Available An input-output table is arranged to present an overview of the interrelationships and interdependence between units of activity (production sectors) in the whole economy; input-output models are therefore complete and comprehensive analytical tools. One use of input-output tables is the analysis of the economic structure at the national/regional level, covering the structure of production and value added (GDP) of each sector. For the purposes of planning and of evaluating development outcomes comprehensively, both at the national and at a smaller (district/city) scale, a model for a regional development planning approach can use input-output analysis. The analysis of the economic structure of Bandung used the Le Masne index, comparing the technology coefficients of 2003 and 2008, of which nearly 50% changed. The trade sector has grown far more conspicuously than other sectors, followed by road transport services and air transport services; the development priorities and investment of Bandung should therefore be directed to these sectors, since they can act as a driving force and an attraction for the growth of other sectors. The sectors that experienced the largest decrease were Industrial Chemicals and Goods from Chemistry, followed by the Oil and Refinery Industry and the Textile Industry Except for Garment.

  16. Simulation model structure numerically robust to changes in magnitude and combination of input and output variables

    DEFF Research Database (Denmark)

    Rasmussen, Bjarne D.; Jakobsen, Arne

    1999-01-01

    Mathematical models of refrigeration systems are often based on a coupling of component models forming a “closed loop” type of system model. In these models the coupling structure of the component models represents the actual flow path of refrigerant in the system. Very often numerical instabilities prevent the practical use of such a system model for more than one input/output combination and for other magnitudes of refrigerating capacities. A higher numerical robustness of system models can be achieved by making a model for the refrigeration cycle the core of the system model and by using variables with narrow definition intervals for the exchange of information between the cycle model and the component models. The advantages of the cycle-oriented method are illustrated by an example showing the refrigeration cycle similarities between two very different refrigeration systems.

  17. Modelling groundwater discharge areas using only digital elevation models as input data

    Energy Technology Data Exchange (ETDEWEB)

    Brydsten, Lars [Umeaa Univ. (Sweden). Dept. of Biology and Environmental Science

    2006-10-15

    Advanced geohydrological models require data on topography, soil distribution in three dimensions, vegetation, land use, and bedrock fracture zones. To model present geohydrological conditions, these factors can be gathered with different techniques. If a future geohydrological condition is modelled in an area with positive shore displacement (say 5,000 or 10,000 years ahead), some of these factors can be difficult to measure. This could include the development of wetlands and the filling of lakes. If the goal of the model is to predict the distribution of groundwater recharge and discharge areas in the landscape, the most important factor is topography. The question is how much topography alone can explain the distribution of geohydrological objects in the landscape. A simplified description of the distribution of geohydrological objects in the landscape is that groundwater recharge areas occur at local elevation curvatures and discharge occurs in lakes, brooks, and low-lying slopes. The areas in between make up discharge areas during wet periods and recharge areas during dry periods. A model that could predict this pattern using only topography data needs to be able to predict high ridges and future lakes and brooks. This study uses GIS software with four different functions using digital elevation models as input data: geomorphometrical parameters to predict landscape ridges, basin fill for predicting lakes, flow accumulation for predicting future waterways, and topographical wetness indexes for dividing the in-between areas based on degree of wetness. An area between the village of and Forsmarks' Nuclear Power Plant has been used to calibrate the model. The area is within the SKB 10-metre Elevation Model (DEM) and has a high-resolution orienteering map for wetlands. Wetlands are assumed to be groundwater discharge areas. Five hundred points were randomly distributed across the wetlands. These are potential discharge points. Model parameters were chosen with the

  18. Modelling groundwater discharge areas using only digital elevation models as input data

    International Nuclear Information System (INIS)

    Brydsten, Lars

    2006-10-01

    Advanced geohydrological models require data on topography, soil distribution in three dimensions, vegetation, land use, and bedrock fracture zones. To model present geohydrological conditions, these factors can be gathered with different techniques. If a future geohydrological condition is modelled in an area with positive shore displacement (say 5,000 or 10,000 years ahead), some of these factors can be difficult to measure. This could include the development of wetlands and the filling of lakes. If the goal of the model is to predict the distribution of groundwater recharge and discharge areas in the landscape, the most important factor is topography. The question is how much topography alone can explain the distribution of geohydrological objects in the landscape. A simplified description of the distribution of geohydrological objects in the landscape is that groundwater recharge areas occur at local elevation curvatures and discharge occurs in lakes, brooks, and low-lying slopes. The areas in between make up discharge areas during wet periods and recharge areas during dry periods. A model that could predict this pattern using only topography data needs to be able to predict high ridges and future lakes and brooks. This study uses GIS software with four different functions using digital elevation models as input data: geomorphometrical parameters to predict landscape ridges, basin fill for predicting lakes, flow accumulation for predicting future waterways, and topographical wetness indexes for dividing the in-between areas based on degree of wetness. An area between the village of and Forsmarks' Nuclear Power Plant has been used to calibrate the model. The area is within the SKB 10-metre Elevation Model (DEM) and has a high-resolution orienteering map for wetlands. Wetlands are assumed to be groundwater discharge areas. Five hundred points were randomly distributed across the wetlands. These are potential discharge points. Model parameters were chosen with the
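
    The topographical wetness index mentioned in these records is simple to compute once slope and upslope area are available: TWI = ln(a / tan(beta)). The sketch below uses a crude elevation-rank proxy for the specific catchment area (an assumption made purely to keep the example self-contained; a real implementation would use a flow-accumulation routine) and flags the wettest cells as candidate discharge areas.

      import numpy as np

      def wetness_index(dem, cellsize=10.0):
          gy, gx = np.gradient(dem, cellsize)
          slope = np.maximum(np.hypot(gx, gy), 1e-3)   # tan(beta), floored
          # proxy upslope area: lower cells are assumed to collect more area
          rank = dem.ravel().argsort().argsort().reshape(dem.shape)
          area = (dem.size - rank) * cellsize
          return np.log(area / slope)

      rng = np.random.default_rng(6)
      dem = np.cumsum(rng.normal(0, 0.5, (50, 50)), axis=0) + 100.0
      twi = wetness_index(dem)
      discharge = twi > np.percentile(twi, 90)   # wettest decile
      print(discharge.sum(), "candidate discharge cells")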

  19. Dynamics of a Stage Structured Pest Control Model in a Polluted Environment with Pulse Pollution Input

    OpenAIRE

    Liu, Bing; Xu, Ling; Kang, Baolin

    2013-01-01

    By using a pollution model and impulsive delay differential equations, we formulate a stage-structured pest control model for the natural enemy in a polluted environment, by introducing a constant periodic pollutant input and killing pests at different fixed moments, and we investigate the dynamics of such a system. We assume that only the natural enemies are affected by pollution, and we choose a control method that kills the pests without harming the natural enemies. Sufficient conditions for global attractivity ...
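
    The impulsive structure of such models, continuous predator-prey dynamics punctuated by periodic pulses that remove a fraction of the pests, can be integrated piecewise. The sketch below is a reduced, non-delayed, pollution-free caricature with invented parameter values, intended only to show the pulse mechanics.

      import numpy as np
      from scipy.integrate import solve_ivp

      def rhs(t, y, r=1.0, k=10.0, a=0.4, c=0.2, d=0.1):
          # pest x (logistic growth minus predation), natural enemy z
          x, z = y
          return [r * x * (1 - x / k) - a * x * z, c * a * x * z - d * z]

      T, kill = 2.0, 0.6           # pulse period and pest kill fraction (assumed)
      y = np.array([5.0, 1.0])
      traj = []
      for i in range(20):          # integrate between pulses, then apply pulse
          sol = solve_ivp(rhs, (i * T, (i + 1) * T), y)
          y = sol.y[:, -1]
          y[0] *= (1.0 - kill)     # impulsive pest control; enemies unharmed
          traj.append(y.copy())
      print(np.array(traj)[-3:])   # state after the last few pulses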

  20. CONSTRUCTION OF A DYNAMIC INPUT-OUTPUT MODEL WITH A HUMAN CAPITAL BLOCK

    Directory of Open Access Journals (Sweden)

    Baranov A. O.

    2017-03-01

    Full Text Available The accumulation of human capital is an important factor of economic growth. It seems useful to include «human capital» as a factor in a macroeconomic model, as it helps to take into account the quality differentiation of the workforce. Most models distinguish the labor force by level of education, while some factors remain unaccounted for. Among them are health status and the level of cultural development, which influence the productivity level as well as gross product reproduction. Inclusion of a human capital block in the interindustry model can help to make it more reliable for economic development forecasting. The article presents a mathematical description of the extended dynamic input-output model (DIOM) with a human capital block. The extended DIOM is based on the Input-Output Model from the KAMIN system (the System of Integrated Analyses of Interindustrial Information) developed at the Institute of Economics and Industrial Engineering of the Siberian Branch of the Academy of Sciences of the Russian Federation and at the Novosibirsk State University. The extended input-output model can be used to analyze and forecast the development of the Russian economy.

  1. High Resolution Modeling of the Thermospheric Response to Energy Inputs During the RENU-2 Rocket Flight

    Science.gov (United States)

    Walterscheid, R. L.; Brinkman, D. G.; Clemmons, J. H.; Hecht, J. H.; Lessard, M.; Fritz, B.; Hysell, D. L.; Clausen, L. B. N.; Moen, J.; Oksavik, K.; Yeoman, T. K.

    2017-12-01

    The Earth's magnetospheric cusp provides direct access of energetic particles to the thermosphere. These particles produce ionization and kinetic (particle) heating of the atmosphere. The increased ionization, coupled with enhanced electric fields in the cusp, produces increased Joule heating and ion drag forcing. These energy inputs cause large wind and temperature changes in the cusp region. The Rocket Experiment for Neutral Upwelling-2 (RENU-2) launched from Andoya, Norway, at 0745 UT on 13 December 2015 into the ionosphere-thermosphere beneath the magnetic cusp. It made measurements of the energy inputs (e.g., precipitating particles, electric fields) and of the thermospheric response to these energy inputs (e.g., neutral density and temperature, neutral winds). Complementary ground-based measurements were made. In this study, we use a high-resolution, two-dimensional, time-dependent, non-hydrostatic, nonlinear dynamical model driven by rocket and ground-based measurements of the energy inputs to simulate the thermospheric response during the RENU-2 flight. Model simulations will be compared to the corresponding measurements of the thermosphere to see what they reveal about thermospheric structure and the nature of magnetosphere-ionosphere-thermosphere coupling in the cusp. Acknowledgements: This material is based upon work supported by the National Aeronautics and Space Administration under grants NNX16AH46G and NNX13AJ93G. This research was also supported by The Aerospace Corporation's Technical Investment program.

  2. Input vs. Output Taxation—A DSGE Approach to Modelling Resource Decoupling

    Directory of Open Access Journals (Sweden)

    Marek Antosiewicz

    2016-04-01

    Full Text Available Environmental taxes constitute a crucial instrument aimed at reducing resource use through lower production losses, resource-leaner products, and more resource-efficient production processes. In this paper we focus on material use and apply a multi-sector dynamic stochastic general equilibrium (DSGE) model to study two types of taxation: a tax on material inputs used by the industry, energy, construction, and transport sectors, and a tax on the output of these sectors. We allow for endogenous adoption of resource-saving technologies. We calibrate the model for the EU27 area using an IO matrix. We consider taxation introduced from 2021 and simulate its impact until 2050. We compare the taxes with respect to their ability to induce a reduction in material use and to raise revenue. We also consider the effect of spending this revenue on a reduction of labour taxation. We find that input and output taxation create contrasting incentives and have opposite effects on resource efficiency. The material input tax induces investment in efficiency-improving technology which, in the long term, results in GDP and employment being 15%–20% higher than in the case of a comparable output tax. We also find that using revenues to reduce taxes on labour has stronger beneficial effects for the input tax.

  3. Application of a Linear Input/Output Model to Tankless Water Heaters

    Energy Technology Data Exchange (ETDEWEB)

    Butcher, T.; Schoenbauer, B.

    2011-12-31

    In this study, the applicability of a linear input/output model to gas-fired, tankless water heaters has been evaluated. This simple model assumes that the relationship between input and output, averaged over both active draw and idle periods, is linear. This approach is being applied to boilers in other studies and offers the potential to make a small number of simple measurements to obtain the model parameters. These parameters can then be used to predict performance under complex load patterns. Both condensing and non-condensing water heaters have been tested under a very wide range of load conditions. It is shown that this approach can be used to reproduce performance metrics, such as the energy factor, and can be used to evaluate the impacts of alternative draw patterns and conditions.
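
    A minimal sketch of the linear input/output idea described above, assuming hypothetical averaged measurements: fit a straight line to input and output rates averaged over draw-plus-idle periods, then reuse it to estimate efficiency under a new load pattern. None of the numbers come from the study.

        import numpy as np

        # Hypothetical averaged measurements over several test load patterns
        q_in = np.array([2.0, 5.0, 9.0, 14.0, 20.0])    # average gas input (kW)
        q_out = np.array([1.3, 3.9, 7.3, 11.5, 16.6])   # average hot water output (kW)

        # Linear input/output model: output = slope * input + intercept
        slope, intercept = np.polyfit(q_in, q_out, 1)

        # Predicted efficiency for a new load pattern with 12 kW average input
        print(slope, intercept, (slope * 12.0 + intercept) / 12.0)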

  4. Efficient uncertainty quantification of a fully nonlinear and dispersive water wave model with random inputs

    DEFF Research Database (Denmark)

    Bigoni, Daniele; Engsig-Karup, Allan Peter; Eskilsson, Claes

    2016-01-01

    A major challenge in next-generation industrial applications is to improve numerical analysis by quantifying uncertainties in predictions. In this work we present a formulation of a fully nonlinear and dispersive potential flow water wave model with random inputs for the probabilistic description...... at different points in the parameter space, allowing for the reuse of existing simulation software. The choice of the applied methods is driven by the number of uncertain input parameters and by the fact that finding the solution of the considered model is computationally intensive. We revisit experimental...... benchmarks often used for validation of deterministic water wave models. Based on numerical experiments and assumed uncertainties in boundary data, our analysis reveals that some of the known discrepancies from deterministic simulation in comparison with experimental measurements could be partially explained...

  5. Stochastic modelling of daily rainfall sequences

    NARCIS (Netherlands)

    Buishand, T.A.

    1977-01-01

    Rainfall series of different climatic regions were analysed with the aim of generating daily rainfall sequences. A survey of the data is given in I, 1. When analysing daily rainfall sequences one must be aware of the following points:
    a. Seasonality. Because of seasonal variation

  6. New Results on Robust Model Predictive Control for Time-Delay Systems with Input Constraints

    Directory of Open Access Journals (Sweden)

    Qing Lu

    2014-01-01

    Full Text Available This paper investigates the problem of model predictive control for a class of nonlinear systems subject to state delays and input constraints. The time-varying delay is considered with both upper and lower bounds. A new model is proposed to approximate the delay, and the uncertainty is of polytopic type. For the state-feedback MPC design objective, we formulate an optimization problem. Under a model transformation, a new model predictive controller is designed such that the robust asymptotic stability of the closed-loop system can be guaranteed. Finally, the applicability of the presented results is demonstrated by a practical example.

  7. Input Uncertainty and its Implications on Parameter Assessment in Hydrologic and Hydroclimatic Modelling Studies

    Science.gov (United States)

    Chowdhury, S.; Sharma, A.

    2005-12-01

    Hydrological model inputs are often derived from measurements at point locations taken at discrete time steps. The nature of uncertainty associated with such inputs is thus a function of the quality and number of measurements available in time. A change in these characteristics (such as a change in the number of rain-gauge inputs used to derive spatially averaged rainfall) results in inhomogeneity in the associated distributional profile. Ignoring such uncertainty can lead to models that aim to simulate based on the observed input variable instead of the true measurement, resulting in a biased representation of the underlying system dynamics as well as an increase in both bias and the predictive uncertainty in simulations. This is especially true of cases where the nature of uncertainty likely in the future is significantly different to that in the past. Possible examples include situations where the accuracy of the catchment averaged rainfall has increased substantially due to an increase in the rain-gauge density, or accuracy of climatic observations (such as sea surface temperatures) increased due to the use of more accurate remote sensing technologies. We introduce here a method to ascertain the true value of parameters in the presence of additive uncertainty in model inputs. This method, known as SIMulation EXtrapolation (SIMEX, [Cook, 1994]) operates on the basis of an empirical relationship between parameters and the level of additive input noise (or uncertainty). The method starts with generating a series of alternate realisations of model inputs by artificially adding white noise in increasing multiples of the known error variance. The alternate realisations lead to alternate sets of parameters that are increasingly biased with respect to the truth due to the increased variability in the inputs. Once several such realisations have been drawn, one is able to formulate an empirical relationship between the parameter values and the level of additive noise
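
    The SIMEX idea described above can be illustrated on a toy errors-in-variables regression: add extra white noise in known multiples of the error variance, track how the parameter estimate degrades, and extrapolate the fitted trend back to zero total error variance (lambda = -1). All data below are synthetic.

        import numpy as np

        rng = np.random.default_rng(0)
        n = 500
        x_true = rng.normal(0.0, 1.0, n)
        sigma_u = 0.5                               # known measurement error std
        x_obs = x_true + rng.normal(0.0, sigma_u, n)
        y = 2.0 * x_true + rng.normal(0.0, 0.2, n)  # true slope is 2.0

        lambdas = np.linspace(0.0, 2.0, 9)          # multiples of the error variance
        est = []
        for lam in lambdas:
            reps = [np.polyfit(x_obs + rng.normal(0.0, np.sqrt(lam) * sigma_u, n), y, 1)[0]
                    for _ in range(50)]             # average over noise realisations
            est.append(np.mean(reps))

        # Quadratic extrapolation back to lambda = -1 (no measurement error)
        coef = np.polyfit(lambdas, est, 2)
        print("naive slope:", est[0], "SIMEX slope:", np.polyval(coef, -1.0))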

  8. Evolutionary sequence of models of planetary nebulae

    International Nuclear Information System (INIS)

    Vil'koviskij, Eh.Ya.; Kondrat'eva, L.N.; Tambovtseva, L.V.

    1983-01-01

    The evolutionary sequences of model planetary nebulae of different masses have been calculated. The computed emission line intensities are compared with the observed ones by means of the parameter "reduced size of the nebula", Rsub(n). It is shown that the evolution tracks of Schonberner for the central stars are consistent with the observed data. The ionized part of the mass, Msub(i), in any nebula does not exceed 0.3, and in the average Msub(i) 3 years at actual values of radius Rsub(i) < 0.025 pc. Then the luminosity growth slows down until the central star reaches its maximum temperature, and decreases with the sharp decrease of the star's luminosity. Moreover, the radius of the ionized zone of larger-mass nebulae can even decrease, in spite of the constant expansion of the nebula. As a result, nebulae of large masses, having undergone this evolution, can be included among the observed compact objects (Rsub(n) < 0.1 pc).

  9. A non-linear dimension reduction methodology for generating data-driven stochastic input models

    Science.gov (United States)

    Ganapathysubramanian, Baskar; Zabaras, Nicholas

    2008-06-01

    Stochastic analysis of random heterogeneous media (polycrystalline materials, porous media, functionally graded materials) provides information of significance only if realistic input models of the topology and property variations are used. This paper proposes a framework to construct such input stochastic models for the topology and thermal diffusivity variations in heterogeneous media using a data-driven strategy. Given a set of microstructure realizations (input samples) generated from given statistical information about the medium topology, the framework constructs a reduced-order stochastic representation of the thermal diffusivity. This problem of constructing a low-dimensional stochastic representation of property variations is analogous to the problem of manifold learning and parametric fitting of hyper-surfaces encountered in image processing and psychology. Denote by M the set of microstructures that satisfy the given experimental statistics. A non-linear dimension reduction strategy is utilized to map M to a low-dimensional region, A. We first show that M is a compact manifold embedded in a high-dimensional input space R^n. An isometric mapping F from M to a low-dimensional, compact, connected set A ⊂ R^d (d ≪ n) is constructed. Given only a finite set of samples of the data, the methodology uses arguments from graph theory and differential geometry to construct the isometric transformation F:M→A. Asymptotic convergence of the representation of M by A is shown. This mapping F serves as an accurate, low-dimensional, data-driven representation of the property variations. The reduced-order model of the material topology and thermal diffusivity variations is subsequently used as an input in the solution of stochastic partial differential equations that describe the evolution of dependent variables. A sparse grid collocation strategy (Smolyak algorithm) is utilized to solve these stochastic equations efficiently. We showcase the methodology by constructing low
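
    One standard graph-based construction of such an isometric low-dimensional embedding is Isomap; the sketch below uses scikit-learn's implementation on a synthetic noisy one-dimensional manifold in R^3, as a simple stand-in for the paper's mapping F:M→A.

        import numpy as np
        from sklearn.manifold import Isomap

        rng = np.random.default_rng(1)
        t = rng.uniform(0.0, 3.0 * np.pi, 400)
        # Noisy curve (a 1-D manifold) embedded in R^3, standing in for microstructure samples
        X = np.column_stack([np.cos(t), np.sin(t), 0.1 * rng.normal(size=400)])

        iso = Isomap(n_neighbors=10, n_components=1)   # graph-based isometric embedding
        A = iso.fit_transform(X)                       # low-dimensional coordinates on A
        print(A.shape)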

  10. A non-linear dimension reduction methodology for generating data-driven stochastic input models

    International Nuclear Information System (INIS)

    Ganapathysubramanian, Baskar; Zabaras, Nicholas

    2008-01-01

    Stochastic analysis of random heterogeneous media (polycrystalline materials, porous media, functionally graded materials) provides information of significance only if realistic input models of the topology and property variations are used. This paper proposes a framework to construct such input stochastic models for the topology and thermal diffusivity variations in heterogeneous media using a data-driven strategy. Given a set of microstructure realizations (input samples) generated from given statistical information about the medium topology, the framework constructs a reduced-order stochastic representation of the thermal diffusivity. This problem of constructing a low-dimensional stochastic representation of property variations is analogous to the problem of manifold learning and parametric fitting of hyper-surfaces encountered in image processing and psychology. Denote by M the set of microstructures that satisfy the given experimental statistics. A non-linear dimension reduction strategy is utilized to map M to a low-dimensional region, A. We first show that M is a compact manifold embedded in a high-dimensional input space R^n. An isometric mapping F from M to a low-dimensional, compact, connected set A ⊂ R^d (d ≪ n) is constructed. Given only a finite set of samples of the data, the methodology uses arguments from graph theory and differential geometry to construct the isometric transformation F:M→A. Asymptotic convergence of the representation of M by A is shown. This mapping F serves as an accurate, low-dimensional, data-driven representation of the property variations. The reduced-order model of the material topology and thermal diffusivity variations is subsequently used as an input in the solution of stochastic partial differential equations that describe the evolution of dependent variables. A sparse grid collocation strategy (Smolyak algorithm) is utilized to solve these stochastic equations efficiently. We showcase the methodology

  11. Non parametric, self organizing, scalable modeling of spatiotemporal inputs: the sign language paradigm.

    Science.gov (United States)

    Caridakis, G; Karpouzis, K; Drosopoulos, A; Kollias, S

    2012-12-01

    Modeling and recognizing spatiotemporal, as opposed to static input, is a challenging task since it incorporates input dynamics as part of the problem. The vast majority of existing methods tackle the problem as an extension of the static counterpart, using dynamics, such as input derivatives, at feature level and adopting artificial intelligence and machine learning techniques originally designed for solving problems that do not specifically address the temporal aspect. The proposed approach deals with temporal and spatial aspects of the spatiotemporal domain in a discriminative as well as coupling manner. Self Organizing Maps (SOM) model the spatial aspect of the problem and Markov models its temporal counterpart. Incorporation of adjacency, both in training and classification, enhances the overall architecture with robustness and adaptability. The proposed scheme is validated both theoretically, through an error propagation study, and experimentally, on the recognition of individual signs, performed by different, native Greek Sign Language users. Results illustrate the architecture's superiority when compared to Hidden Markov Model techniques and variations both in terms of classification performance and computational cost. Copyright © 2012 Elsevier Ltd. All rights reserved.

  12. Computational Techniques for Model Predictive Control of Large-Scale Systems with Continuous-Valued and Discrete-Valued Inputs

    Directory of Open Access Journals (Sweden)

    Koichi Kobayashi

    2013-01-01

    Full Text Available We propose computational techniques for model predictive control of large-scale systems with both continuous-valued control inputs and discrete-valued control inputs, which are a class of hybrid systems. In the proposed method, we introduce the notion of virtual control inputs, which are obtained by relaxing discrete-valued control inputs to continuous variables. In online computation, first, we find continuous-valued control inputs and virtual control inputs minimizing a cost function. Next, using the obtained virtual control inputs, only discrete-valued control inputs at the current time are computed in each subsystem. In addition, we also discuss the effect of quantization errors. Finally, the effectiveness of the proposed method is shown by a numerical example. The proposed method enables us to reduce and decentralize the computation load.
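
    The notion of virtual control inputs can be illustrated with a toy one-step problem: relax the discrete input to a continuous variable on [0, 1], optimize it jointly with the continuous input, then quantize it back to the discrete set. The matrices and cost below are invented for illustration and are not from the paper.

        import numpy as np
        from scipy.optimize import minimize

        A = np.array([[1.0, 0.1], [0.0, 1.0]])   # toy dynamics x+ = A x + B u + E d
        B = np.array([0.0, 0.1])
        E = np.array([0.05, 0.0])
        x = np.array([1.0, -0.5])

        def cost(z):                # z = [u, v]; v is the virtual (relaxed) discrete input
            u, v = z
            xn = A @ x + B * u + E * v
            return xn @ xn

        res = minimize(cost, x0=[0.0, 0.5], bounds=[(-1.0, 1.0), (0.0, 1.0)])
        u_opt, v_opt = res.x
        d_opt = round(v_opt)        # quantize the virtual input to {0, 1}
        print(u_opt, v_opt, d_opt)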

  13. On the redistribution of existing inputs using the spherical frontier dea model

    Directory of Open Access Journals (Sweden)

    José Virgilio Guedes de Avellar

    2010-04-01

    Full Text Available The Spherical Frontier DEA Model (SFM) (Avellar et al., 2007) was developed to be used when one wants to fairly distribute a new and fixed input to a group of Decision Making Units (DMUs). SFM's basic idea is to distribute this new and fixed input in such a way that every DMU will be placed on an efficiency frontier with a spherical shape. We use SFM to analyze the problems that appear when one wants to redistribute an already existing input to a group of DMUs such that the total sum of this input remains constant. We also analyze the case in which this total sum may vary.

  14. Persistence and ergodicity of plant disease model with markov conversion and impulsive toxicant input

    Science.gov (United States)

    Zhao, Wencai; Li, Juan; Zhang, Tongqian; Meng, Xinzhu; Zhang, Tonghua

    2017-07-01

    Taking into account both white and colored noise, a stochastic mathematical model with impulsive toxicant input is formulated. Based on this model, we investigate the dynamics, such as persistence and ergodicity, of a plant infectious disease model with Markov conversion in a polluted environment. The thresholds of extinction and of persistence in mean are obtained. By using Lyapunov functions, we prove that the system is ergodic and has a stationary distribution under certain sufficient conditions. Finally, numerical simulations are employed to illustrate our theoretical analysis.

  15. Post-BEMUSE Reflood Model Input Uncertainty Methods (PREMIUM) Benchmark Phase II: Identification of Influential Parameters

    International Nuclear Information System (INIS)

    Kovtonyuk, A.; Petruzzi, A.; D'Auria, F.

    2015-01-01

    The objective of the Post-BEMUSE Reflood Model Input Uncertainty Methods (PREMIUM) benchmark is to progress on the issue of the quantification of the uncertainty of the physical models in system thermal-hydraulic codes by considering a concrete case: the physical models involved in the prediction of core reflooding. The PREMIUM benchmark consists of five phases. This report presents the results of Phase II, dedicated to the identification of the uncertain code parameters associated with physical models used in the simulation of reflooding conditions. This identification is made on the basis of Test 216 of the FEBA/SEFLEX programme, according to the following steps: identification of influential phenomena; identification of the associated physical models and parameters, depending on the code used; and quantification of the variation range of identified input parameters through a series of sensitivity calculations. A procedure for the identification of potentially influential code input parameters was set up in the Specifications of Phase II of the PREMIUM benchmark, and a set of quantitative criteria was also proposed for the identification of influential input parameters and their respective variation ranges. Thirteen participating organisations, using 8 different codes (7 system thermal-hydraulic codes and 1 sub-channel module of a system thermal-hydraulic code), submitted Phase II results. The base-case calculations show a spread in predicted cladding temperatures and quench front propagation that has been characterized. All the participants except one predict too fast a quench front progression. Besides, the cladding temperature time trends obtained by almost all the participants show oscillatory behaviour which may have numeric origins. The criteria adopted for identification of influential input parameters differ between the participants: some organisations used the set of criteria proposed in the Specifications 'as is', some modified the quantitative thresholds

  16. A New Ensemble of Perturbed-Input-Parameter Simulations by the Community Atmosphere Model

    Energy Technology Data Exchange (ETDEWEB)

    Covey, C; Brandon, S; Bremer, P T; Domyancis, D; Garaizar, X; Johannesson, G; Klein, R; Klein, S A; Lucas, D D; Tannahill, J; Zhang, Y

    2011-10-27

    Uncertainty quantification (UQ) is a fundamental challenge in the numerical simulation of Earth's weather and climate, and other complex systems. It entails much more than attaching defensible error bars to predictions: in particular it includes assessing low-probability but high-consequence events. To achieve these goals with models containing a large number of uncertain input parameters, structural uncertainties, etc., raw computational power is needed. An automated, self-adapting search of the possible model configurations is also useful. Our UQ initiative at the Lawrence Livermore National Laboratory has produced the most extensive set to date of simulations from the US Community Atmosphere Model. We are examining output from about 3,000 twelve-year climate simulations generated with a specialized UQ software framework, and assessing the model's accuracy as a function of 21 to 28 uncertain input parameter values. Most of the input parameters we vary are related to the boundary layer, clouds, and other sub-grid scale processes. Our simulations prescribe surface boundary conditions (sea surface temperatures and sea ice amounts) to match recent observations. Fully searching this 21+ dimensional space is impossible, but sensitivity and ranking algorithms can identify input parameters having relatively little effect on a variety of output fields, either individually or in nonlinear combination. Bayesian statistical constraints, employing a variety of climate observations as metrics, also seem promising. Observational constraints will be important in the next step of our project, which will compute sea surface temperatures and sea ice interactively, and will study climate change due to increasing atmospheric carbon dioxide.
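
    Ensembles of this kind are commonly built from a space-filling design over the uncertain parameters. A minimal sketch with Latin hypercube sampling is shown below; the three parameter bounds are invented placeholders, not the study's actual 21 to 28 parameters or its UQ framework.

        import numpy as np
        from scipy.stats import qmc

        # Hypothetical bounds for three cloud/boundary-layer parameters
        lo = np.array([0.1, 1e-4, 0.5])
        hi = np.array([0.9, 1e-3, 2.0])

        sampler = qmc.LatinHypercube(d=3, seed=0)
        unit = sampler.random(n=3000)          # 3000 design points in [0, 1]^3
        designs = qmc.scale(unit, lo, hi)      # one parameter set per ensemble member
        print(designs.shape)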

  17. The Canadian Defence Input-Output Model DIO Version 4.41

    Science.gov (United States)

    2011-09-01

    Request to develop a DND-tailored input/output model; electronic communication from Allen Weldon to Team Leader, Defence Economics Team, on March 12, 2011. [The remainder of the record is a fragment of the model's commodity classification table, with entries such as 1440 Handbags, wallets and similar personal articles; 1450 Cotton yarn; 3600 Radar and radio navigation equipment; 3619 Semi-conductors; 3621 Printed circuits; 3622 Integrated circuits; 3623 Other electronic.]

  18. Urban Landscape Characterization Using Remote Sensing Data For Input into Air Quality Modeling

    Science.gov (United States)

    Quattrochi, Dale A.; Estes, Maurice G., Jr.; Crosson, William; Khan, Maudood

    2005-01-01

    The urban landscape is inherently complex and this complexity is not adequately captured in air quality models that are used to assess whether urban areas are in attainment of EPA air quality standards, particularly for ground level ozone. This inadequacy of air quality models to sufficiently respond to the heterogeneous nature of the urban landscape can impact how well these models predict ozone pollutant levels over metropolitan areas and, ultimately, whether cities exceed EPA ozone air quality standards. We are exploring the utility of high-resolution remote sensing data and urban growth projections as improved inputs to meteorological and air quality models, focusing on the Atlanta, Georgia metropolitan area as a case study. The National Land Cover Dataset at 30 m resolution is being used as the land use/land cover input and aggregated to the 4 km scale for the MM5 mesoscale meteorological model and the Community Multiscale Air Quality (CMAQ) modeling schemes. Use of these data has been found to better characterize low density/suburban development as compared with the USGS 1 km land use/land cover data that have traditionally been used in modeling. Air quality prediction for future scenarios to 2030 is being facilitated by land use projections using a spatial growth model. Land use projections were developed using the 2030 Regional Transportation Plan developed by the Atlanta Regional Commission. This allows the state environmental protection agency to evaluate how these transportation plans will affect future air quality.

  19. Development of an Input Suite for an Orthotropic Composite Material Model

    Science.gov (United States)

    Hoffarth, Canio; Shyamsunder, Loukham; Khaled, Bilal; Rajan, Subramaniam; Goldberg, Robert K.; Carney, Kelly S.; Dubois, Paul; Blankenhorn, Gunther

    2017-01-01

    An orthotropic three-dimensional material model suitable for use in modeling impact tests has been developed that has three major components: elastic and inelastic deformations, damage, and failure. The material model has been implemented as MAT213 into a special version of LS-DYNA and uses tabulated data obtained from experiments. The prominent features of the constitutive model are illustrated using a widely-used aerospace composite, the T800S3900-2B[P2352W-19] BMS8-276 Rev-H unitape fiber/resin unidirectional composite. The input for the deformation model consists of experimental data from 12 distinct experiments at a known temperature and strain rate: tension and compression along all three principal directions, shear in all three principal planes, and off-axis tension or compression tests in all three principal planes, along with other material constants. There are additional inputs associated with the damage and failure models. The steps in using this model are illustrated: composite characterization tests, verification tests and a validation test. The results show that the developed and implemented model is stable and yields acceptably accurate results.

  20. Input variable selection for data-driven models of Coriolis flowmeters for two-phase flow measurement

    International Nuclear Information System (INIS)

    Wang, Lijuan; Yan, Yong; Wang, Xue; Wang, Tao

    2017-01-01

    Input variable selection is an essential step in the development of data-driven models for environmental, biological and industrial applications. Through input variable selection to eliminate the irrelevant or redundant variables, a suitable subset of variables is identified as the input of a model. Meanwhile, through input variable selection the complexity of the model structure is simplified and the computational efficiency is improved. This paper describes the procedures of the input variable selection for the data-driven models for the measurement of liquid mass flowrate and gas volume fraction under two-phase flow conditions using Coriolis flowmeters. Three advanced input variable selection methods, including partial mutual information (PMI), genetic algorithm-artificial neural network (GA-ANN) and tree-based iterative input selection (IIS) are applied in this study. Typical data-driven models incorporating support vector machine (SVM) are established individually based on the input candidates resulting from the selection methods. The validity of the selection outcomes is assessed through an output performance comparison of the SVM based data-driven models and sensitivity analysis. The validation and analysis results suggest that the input variables selected from the PMI algorithm provide more effective information for the models to measure liquid mass flowrate while the IIS algorithm provides fewer but more effective variables for the models to predict gas volume fraction. (paper)
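
    As a rough illustration of the workflow above (rank candidate inputs, keep an informative subset, then score a support-vector model), the sketch below uses plain mutual information from scikit-learn as a simple stand-in for the PMI criterion; the data and variable roles are synthetic.

        import numpy as np
        from sklearn.feature_selection import mutual_info_regression
        from sklearn.model_selection import cross_val_score
        from sklearn.svm import SVR

        rng = np.random.default_rng(0)
        X = rng.normal(size=(300, 8))       # candidate input variables
        y = 2.0 * X[:, 0] - X[:, 3] + 0.1 * rng.normal(size=300)   # e.g. mass flowrate

        mi = mutual_info_regression(X, y, random_state=0)
        top = np.argsort(mi)[::-1][:3]      # keep the most informative variables
        score = cross_val_score(SVR(), X[:, top], y, cv=5).mean()
        print("selected:", top, "CV R^2:", score)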

  1. A generic method for automatic translation between input models for different versions of simulation codes

    International Nuclear Information System (INIS)

    Serfontein, Dawid E.; Mulder, Eben J.; Reitsma, Frederik

    2014-01-01

    A computer code was developed for the semi-automatic translation of input models for the VSOP-A diffusion neutronics simulation code to the format of the newer VSOP 99/05 code. In this paper, this algorithm is presented as a generic method for producing codes for the automatic translation of input models from the format of one code version to another, or even to that of a completely different code. Normally, such translations are done manually. However, input model files, such as those for the VSOP codes, often are very large and may consist of many thousands of numeric entries that make no particular sense to the human eye. Therefore the task of, for instance, nuclear regulators to verify the accuracy of such translated files can be very difficult and cumbersome. This may cause translation errors not to be picked up, which may have disastrous consequences later on when a reactor with such a faulty design is built. Therefore a generic algorithm for producing such automatic translation codes may ease the translation and verification process to a great extent. It will also remove human error from the process, which may significantly enhance the accuracy and reliability of the process. The developed algorithm also automatically creates a verification log file which permanently records the names and values of each variable used, as well as the list of meanings of all the possible values. This should greatly facilitate reactor licensing applications

  2. A generic method for automatic translation between input models for different versions of simulation codes

    Energy Technology Data Exchange (ETDEWEB)

    Serfontein, Dawid E., E-mail: Dawid.Serfontein@nwu.ac.za [School of Mechanical and Nuclear Engineering, North West University (PUK-Campus), PRIVATE BAG X6001 (Internal Post Box 360), Potchefstroom 2520 (South Africa); Mulder, Eben J. [School of Mechanical and Nuclear Engineering, North West University (South Africa); Reitsma, Frederik [Calvera Consultants (South Africa)

    2014-05-01

    A computer code was developed for the semi-automatic translation of input models for the VSOP-A diffusion neutronics simulation code to the format of the newer VSOP 99/05 code. In this paper, this algorithm is presented as a generic method for producing codes for the automatic translation of input models from the format of one code version to another, or even to that of a completely different code. Normally, such translations are done manually. However, input model files, such as those for the VSOP codes, often are very large and may consist of many thousands of numeric entries that make no particular sense to the human eye. Therefore the task of, for instance, nuclear regulators to verify the accuracy of such translated files can be very difficult and cumbersome. This may cause translation errors not to be picked up, which may have disastrous consequences later on when a reactor with such a faulty design is built. Therefore a generic algorithm for producing such automatic translation codes may ease the translation and verification process to a great extent. It will also remove human error from the process, which may significantly enhance the accuracy and reliability of the process. The developed algorithm also automatically creates a verification log file which permanently records the names and values of each variable used, as well as the list of meanings of all the possible values. This should greatly facilitate reactor licensing applications.

  3. Development of algorithm for depreciation costs allocation in dynamic input-output industrial enterprise model

    Directory of Open Access Journals (Sweden)

    Keller Alevtina

    2017-01-01

    Full Text Available The article considers the issue of allocation of depreciation costs in the dynamic input-output model of an industrial enterprise. Accounting for depreciation costs in such a model improves the policy of fixed assets management. It is particularly relevant to develop an algorithm for the allocation of depreciation costs in the construction of a dynamic input-output model of an industrial enterprise, since such enterprises have a significant amount of fixed assets. Meeting the adequacy conditions of such an algorithm allows: evaluating the appropriateness of investments in fixed assets, and studying the final financial results of an industrial enterprise depending on management decisions in the depreciation policy. It is necessary to note that the model in question is always degenerate for the enterprise. This is caused by the presence of zero rows in the matrix of capital expenditures for the structural elements unable to generate fixed assets (part of the service units, households, corporate consumers). The paper presents the algorithm for the allocation of depreciation costs for the model. This algorithm was developed by the authors and served as the basis for further development of the flowchart for subsequent implementation in software. The construction of such an algorithm and its use for dynamic input-output models of industrial enterprises is motivated by the internationally accepted effectiveness of input-output models for national and regional economic systems. This is what allows us to consider that the solutions discussed in the article are of interest to economists of various industrial enterprises.

  4. Unified Deep Learning Architecture for Modeling Biology Sequence.

    Science.gov (United States)

    Wu, Hongjie; Cao, Chengyuan; Xia, Xiaoyan; Lu, Qiang

    2017-10-09

    Prediction of the spatial structure or function of biological macromolecules based on their sequence remains an important challenge in bioinformatics. When modeling biological sequences using traditional sequencing models, characteristics, such as long-range interactions between basic units, the complicated and variable output of labeled structures, and the variable length of biological sequences, usually lead to different solutions on a case-by-case basis. This study proposed the use of bidirectional recurrent neural networks based on long short-term memory or a gated recurrent unit to capture long-range interactions by designing the optional reshape operator to adapt to the diversity of the output labels and implementing a training algorithm to support the training of sequence models capable of processing variable-length sequences. Additionally, the merge and pooling operators enhanced the ability to capture short-range interactions between basic units of biological sequences. The proposed deep-learning model and its training algorithm might be capable of solving currently known biological sequence-modeling problems through the use of a unified framework. We validated our model on one of the most difficult biological sequence-modeling problems currently known, with our results indicating the ability of the model to obtain predictions of protein residue interactions that exceeded the accuracy of current popular approaches by 10% based on multiple benchmarks.
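
    A minimal sketch of one ingredient named above: a bidirectional LSTM over variable-length sequences, with packing so that padding does not contaminate the recurrence. The dimensions and per-position classification head are illustrative and not the paper's exact architecture.

        import torch
        import torch.nn as nn
        from torch.nn.utils.rnn import pack_padded_sequence, pad_packed_sequence

        class BiLSTMTagger(nn.Module):
            def __init__(self, vocab=25, emb=32, hidden=64, labels=3):
                super().__init__()
                self.embed = nn.Embedding(vocab, emb, padding_idx=0)
                self.lstm = nn.LSTM(emb, hidden, batch_first=True, bidirectional=True)
                self.head = nn.Linear(2 * hidden, labels)   # per-position label scores

            def forward(self, seqs, lengths):
                x = self.embed(seqs)
                packed = pack_padded_sequence(x, lengths, batch_first=True,
                                              enforce_sorted=False)
                out, _ = self.lstm(packed)
                out, _ = pad_packed_sequence(out, batch_first=True)
                return self.head(out)

        batch = torch.randint(1, 25, (4, 50))       # 4 padded sequences, max length 50
        logits = BiLSTMTagger()(batch, lengths=torch.tensor([50, 42, 37, 20]))
        print(logits.shape)                         # (4, 50, 3)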

  5. A time-resolved model of the mesospheric Na layer: constraints on the meteor input function

    Directory of Open Access Journals (Sweden)

    J. M. C. Plane

    2004-01-01

    Full Text Available A time-resolved model of the Na layer in the mesosphere/lower thermosphere region is described, where the continuity equations for the major sodium species Na, Na+ and NaHCO3 are solved explicitly, and the other short-lived species are treated in steady-state. It is shown that the diurnal variation of the Na layer can only be modelled satisfactorily if sodium species are permanently removed below about 85 km, both through the dimerization of NaHCO3 and the uptake of sodium species on meteoric smoke particles that are assumed to have formed from the recondensation of vaporized meteoroids. When the sensitivity of the Na layer to the meteoroid input function is considered, an inconsistent picture emerges. The ratio of the column abundance of Na+ to Na is shown to increase strongly with the average meteoroid velocity, because the Na is injected at higher altitudes. Comparison with a limited set of Na+ measurements indicates that the average meteoroid velocity is probably less than about 25 km s⁻¹, in agreement with velocity estimates from conventional meteor radars, and considerably slower than recent observations made by wide aperture incoherent scatter radars. The Na column abundance is shown to be very sensitive to the meteoroid mass input rate, and to the rate of vertical transport by eddy diffusion. Although the magnitude of the eddy diffusion coefficient in the 80–90 km region is uncertain, there is a consensus between recent models using parameterisations of gravity wave momentum deposition that the average value is less than 3×10⁵ cm² s⁻¹. This requires that the global meteoric mass input rate is less than about 20 t d⁻¹, which is closest to estimates from incoherent scatter radar observations. Finally, the diurnal variation in the meteoroid input rate only slightly perturbs the Na layer, because the residence time of Na in the layer is several days, and diurnal effects are effectively averaged out.

  6. Good Modeling Practice for PAT Applications: Propagation of Input Uncertainty and Sensitivity Analysis

    DEFF Research Database (Denmark)

    Sin, Gürkan; Gernaey, Krist; Eliasson Lantz, Anna

    2009-01-01

    The uncertainty and sensitivity analysis are evaluated for their usefulness as part of the model-building within Process Analytical Technology applications. A mechanistic model describing a batch cultivation of Streptomyces coelicolor for antibiotic production was used as case study. The input...... compared to the large uncertainty observed in the antibiotic and off-gas CO2 predictions. The output uncertainty was observed to be lower during the exponential growth phase, while higher in the stationary and death phases - meaning the model describes some periods better than others. To understand which...... promising for helping to build reliable mechanistic models and to interpret the model outputs properly. These tools make part of good modeling practice, which can contribute to successful PAT applications for increased process understanding, operation and control purposes. © 2009 American Institute...

  7. Alternative to Ritt's pseudodivision for finding the input-output equations of multi-output models.

    Science.gov (United States)

    Meshkat, Nicolette; Anderson, Chris; DiStefano, Joseph J

    2012-09-01

    Differential algebra approaches to structural identifiability analysis of a dynamic system model in many instances heavily depend upon Ritt's pseudodivision at an early step in analysis. The pseudodivision algorithm is used to find the characteristic set, of which a subset, the input-output equations, is used for identifiability analysis. A simpler algorithm is proposed for this step, using Gröbner Bases, along with a proof of the method that includes a reduced upper bound on derivative requirements. Efficacy of the new algorithm is illustrated with several biosystem model examples. Copyright © 2012 Elsevier Inc. All rights reserved.
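
    The Gröbner-basis route to an input-output equation can be seen on a toy linear two-compartment model: write the state equations and their prolongations as polynomials, then eliminate the unobserved state with a lexicographic basis. The symbols a, b, c, d stand in for rate-constant combinations; this is a sketch of the idea, not the paper's general multi-output algorithm.

        import sympy as sp

        a, b, c, d = sp.symbols('a b c d')      # rate-constant combinations
        y, dy, ddy = sp.symbols('y dy ddy')     # output y = x1 and its derivatives
        u, du = sp.symbols('u du')              # input and its derivative
        x2, dx2 = sp.symbols('x2 dx2')          # unobserved state, to be eliminated

        p1 = dy + a * y - b * x2 - u            # x1' = -a x1 + b x2 + u, with x1 = y
        p2 = ddy + a * dy - b * dx2 - du        # time derivative of p1
        p3 = dx2 - c * y + d * x2               # x2' = c x1 - d x2

        G = sp.groebner([p1, p2, p3], x2, dx2, ddy, dy, y, u, du, order='lex')
        io = [g for g in G.exprs if not g.has(x2, dx2)]   # state-free polynomials
        print(io)   # contains ddy + (a+d) dy + (a d - b c) y - du - d u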

  8. Unitary input DEA model to identify beef cattle production systems typologies

    Directory of Open Access Journals (Sweden)

    Eliane Gonçalves Gomes

    2012-08-01

    Full Text Available The cow-calf beef production sector in Brazil has a wide variety of operating systems. This suggests the identification and the characterization of homogeneous regions of production, with consequent implementation of actions to achieve its sustainability. In this paper we attempted to measure the performance of 21 livestock modal production systems, in their cow-calf phase. We measured the performance of these systems, considering husbandry and production variables. The proposed approach is based on data envelopment analysis (DEA. We used unitary input DEA model, with apparent input orientation, together with the efficiency measurements generated by the inverted DEA frontier. We identified five modal production systems typologies, using the isoefficiency layers approach. The results showed that the knowledge and the processes management are the most important factors for improving the efficiency of beef cattle production systems.
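
    A unitary-input DEA score can be computed from the multiplier form of the CCR model: with every DMU consuming one unit of input, maximize the weighted outputs of the evaluated DMU subject to all DMUs scoring at most one. The output data below are invented, and this plain CCR sketch is not the inverted-frontier or isoefficiency-layer machinery used in the paper.

        import numpy as np
        from scipy.optimize import linprog

        # Two outputs for five hypothetical DMUs; every DMU has the same unit input
        Y = np.array([[3.0, 5.0], [4.0, 2.0], [2.5, 4.5], [5.0, 1.0], [1.0, 1.5]])

        def efficiency(k):
            # max u . Y[k]  subject to  u . Y[j] <= 1 for all j,  u >= 0
            res = linprog(c=-Y[k], A_ub=Y, b_ub=np.ones(len(Y)),
                          bounds=[(0, None)] * Y.shape[1])
            return -res.fun

        print([round(efficiency(k), 3) for k in range(len(Y))])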

  9. The sensitivity of ecosystem service models to choices of input data and spatial resolution

    Science.gov (United States)

    Bagstad, Kenneth J.; Cohen, Erika; Ancona, Zachary H.; McNulty, Steven; Sun, Ge

    2018-01-01

    Although ecosystem service (ES) modeling has progressed rapidly in the last 10–15 years, comparative studies on data and model selection effects have become more common only recently. Such studies have drawn mixed conclusions about whether different data and model choices yield divergent results. In this study, we compared the results of different models to address these questions at national, provincial, and subwatershed scales in Rwanda. We compared results for carbon, water, and sediment as modeled using InVEST and WaSSI using (1) land cover data at 30 and 300 m resolution and (2) three different input land cover datasets. WaSSI and simpler InVEST models (carbon storage and annual water yield) were relatively insensitive to the choice of spatial resolution, but more complex InVEST models (seasonal water yield and sediment regulation) produced large differences when applied at differing resolution. Six out of nine ES metrics (InVEST annual and seasonal water yield and WaSSI) gave similar predictions for at least two different input land cover datasets. Despite differences in mean values when using different data sources and resolution, we found significant and highly correlated results when using Spearman's rank correlation, indicating consistent spatial patterns of high and low values. Our results confirm and extend conclusions of past studies, showing that in certain cases (e.g., simpler models and national-scale analyses), results can be robust to data and modeling choices. For more complex models, those with different output metrics, and subnational to site-based analyses in heterogeneous environments, data and model choices may strongly influence study findings.

  10. VSC Input-Admittance Modeling and Analysis Above the Nyquist Frequency for Passivity-Based Stability Assessment

    DEFF Research Database (Denmark)

    Harnefors, Lennart; Finger, Raphael; Wang, Xiongfei

    2017-01-01

    The interconnection stability of a gridconnected voltage-source converter (VSC) can be assessed via the dissipative properties of its input admittance. In this paper, the modeling of the current control loop is revisited with the aim to improve the accuracy of the input-admittance model above...

  11. 'Fingerprints' of four crop models as affected by soil input data aggregation

    DEFF Research Database (Denmark)

    Angulo, Carlos; Gaiser, Thomas; Rötter, Reimund P

    2014-01-01

    for all models. Further analysis revealed that the small influence of spatial resolution of soil input data might be related to: (a) the high precipitation amount in the region which partly masked differences in soil characteristics for water holding capacity, (b) the loss of variability in hydraulic soil...... properties due to the methods applied to calculate water retention properties of the used soil profiles, and (c) the method of soil data aggregation. No characteristic “fingerprint” between sites, years and resolutions could be found for any of the models. Our results support earlier recommendation....... In this study we used four crop models (SIMPLACE, DSSAT-CSM, EPIC and DAISY) differing in the detail of modeling above-ground biomass and yield as well as of modeling soil water dynamics, water uptake and drought effects on plants to simulate winter wheat in two (agro-climatologically and geo...

  12. Investigations of the sensitivity of a coronal mass ejection model (ENLIL) to solar input parameters

    DEFF Research Database (Denmark)

    Falkenberg, Thea Vilstrup; Vršnak, B.; Taktakishvili, A.

    2010-01-01

    Understanding space weather is not only important for satellite operations and human exploration of the solar system but also to phenomena here on Earth that may potentially disturb and disrupt electrical signals. Some of the most violent space weather effects are caused by coronal mass ejections...... (CMEs), but in order to predict the caused effects, we need to be able to model their propagation from their origin in the solar corona to the point of interest, e.g., Earth. Many such models exist, but to understand the models in detail we must understand the primary input parameters. Here we...... investigate the parameter space of the ENLILv2.5b model using the CME event of 25 July 2004. ENLIL is a time‐dependent 3‐D MHD model that can simulate the propagation of cone‐shaped interplanetary coronal mass ejections (ICMEs) through the solar system. Excepting the cone parameters (radius, position...

  13. Biological sequence analysis: probabilistic models of proteins and nucleic acids

    National Research Council Canada - National Science Library

    Durbin, Richard

    1998-01-01

    ... analysis methods are now based on principles of probabilistic modelling. Examples of such methods include the use of probabilistically derived score matrices to determine the significance of sequence alignments, the use of hidden Markov models as the basis for profile searches to identify distant members of sequence families, and the inference...

  14. Screening of dementia genes by whole-exome sequencing in early-onset Alzheimer disease: input and lessons.

    Science.gov (United States)

    Nicolas, Gaël; Wallon, David; Charbonnier, Camille; Quenez, Olivier; Rousseau, Stéphane; Richard, Anne-Claire; Rovelet-Lecrux, Anne; Coutant, Sophie; Le Guennec, Kilan; Bacq, Delphine; Garnier, Jean-Guillaume; Olaso, Robert; Boland, Anne; Meyer, Vincent; Deleuze, Jean-François; Munter, Hans Markus; Bourque, Guillaume; Auld, Daniel; Montpetit, Alexandre; Lathrop, Mark; Guyant-Maréchal, Lucie; Martinaud, Olivier; Pariente, Jérémie; Rollin-Sillaire, Adeline; Pasquier, Florence; Le Ber, Isabelle; Sarazin, Marie; Croisile, Bernard; Boutoleau-Bretonnière, Claire; Thomas-Antérion, Catherine; Paquet, Claire; Sauvée, Mathilde; Moreaud, Olivier; Gabelle, Audrey; Sellal, François; Ceccaldi, Mathieu; Chamard, Ludivine; Blanc, Frédéric; Frebourg, Thierry; Campion, Dominique; Hannequin, Didier

    2016-05-01

    Causative variants in APP, PSEN1 or PSEN2 account for a majority of cases of autosomal dominant early-onset Alzheimer disease (ADEOAD, onset before 65 years). Variant detection rates in other EOAD patients, that is, with family history of late-onset AD (LOAD) (and no incidence of EOAD) and sporadic cases might be much lower. We analyzed the genomes from 264 patients using whole-exome sequencing (WES) with high depth of coverage: 90 EOAD patients with family history of LOAD and no incidence of EOAD in the family and 174 patients with sporadic AD starting between 51 and 65 years. We found three PSEN1 and one PSEN2 causative, probably or possibly causative variants in four patients (1.5%). Given the absence of PSEN1, PSEN2 and APP causative variants, we investigated whether these 260 patients might be burdened with protein-modifying variants in 20 genes that were previously shown to cause other types of dementia when mutated. For this analysis, we included an additional set of 160 patients who were previously shown to be free of causative variants in PSEN1, PSEN2 and APP: 107 ADEOAD patients and 53 sporadic EOAD patients with an age of onset before 51 years. In these 420 patients, we detected no variant that might modify the function of the 20 dementia-causing genes. We conclude that EOAD patients with family history of LOAD and no incidence of EOAD in the family or patients with sporadic AD starting between 51 and 65 years have a low variant-detection rate in AD genes.

  15. Assessment of input function distortions on kinetic model parameters in simulated dynamic 82Rb PET perfusion studies

    International Nuclear Information System (INIS)

    Meyer, Carsten; Peligrad, Dragos-Nicolae; Weibrecht, Martin

    2007-01-01

    Cardiac ⁸²Rb dynamic PET studies allow quantifying absolute myocardial perfusion by using tracer kinetic modeling. Here, the accurate measurement of the input function, i.e. the tracer concentration in blood plasma, is a major challenge. This measurement is deteriorated by inappropriate temporal sampling, spillover, etc. Such effects may influence the measured input peak value and the measured blood pool clearance. The aim of our study is to evaluate the effect of input function distortions on the myocardial perfusion as estimated by the model. To this end, we simulate noise-free myocardium time activity curves (TACs) with a two-compartment kinetic model. The input function to the model is a generic analytical function. Distortions of this function have been introduced by varying its parameters. Using the distorted input function, the compartment model has been fitted to the simulated myocardium TAC. This analysis has been performed for various sets of model parameters covering a physiologically relevant range. The evaluation shows that a ±10% error in the input peak value can easily lead to a ±10–25% error in the model parameter K₁, which relates to myocardial perfusion. Variations in the input function tail are generally less relevant. We conclude that an accurate estimation especially of the plasma input peak is crucial for a reliable kinetic analysis and blood flow estimation

  16. Scaling precipitation input to spatially distributed hydrological models by measured snow distribution

    Directory of Open Access Journals (Sweden)

    Christian Vögeli

    2016-12-01

    Full Text Available Accurate knowledge on snow distribution in alpine terrain is crucial for various applications such as flood risk assessment, avalanche warning or managing water supply and hydro-power. To simulate the seasonal snow cover development in alpine terrain, the spatially distributed, physics-based model Alpine3D is suitable. The model is typically driven by spatial interpolations of observations from automatic weather stations (AWS), leading to errors in the spatial distribution of atmospheric forcing. With recent advances in remote sensing techniques, maps of snow depth can be acquired with high spatial resolution and accuracy. In this work, maps of the snow depth distribution, calculated from summer and winter digital surface models based on Airborne Digital Sensors (ADS), are used to scale precipitation input data, with the aim to improve the accuracy of simulation of the spatial distribution of snow with Alpine3D. A simple method to scale and redistribute precipitation is presented and the performance is analysed. The scaling method is only applied if it is snowing. For rainfall the precipitation is distributed by interpolation, with a simple air temperature threshold used for the determination of the precipitation phase. It was found that the accuracy of spatial snow distribution could be improved significantly for the simulated domain. The standard deviation of the absolute snow depth error is reduced by up to a factor of 3.4, to less than 20 cm. The mean absolute error in snow distribution was reduced when using representative input sources for the simulation domain. For inter-annual scaling, the model performance could also be improved, even when using a remote sensing dataset from a different winter. In conclusion, by using remote sensing data to process precipitation input, complex processes such as preferential snow deposition and snow relocation due to wind or avalanches can be substituted, and the modelling performance of spatial snow distribution is improved.

  17. PERMODELAN INDEKS HARGA KONSUMEN INDONESIA DENGAN MENGGUNAKAN MODEL INTERVENSI MULTI INPUT

    KAUST Repository

    Novianti, Putri Wikie

    2017-01-24

    There are some events which are expected to affect the CPI's fluctuation, i.e. the 1997/1998 financial crisis, fuel price risings, base year changings, the independence of Timor-Timur (October 1999), and the tsunami disaster in Aceh (December 2004). During the research period, there were eight fuel price risings and four base year changings. The objective of this research is to obtain a multi-input intervention model which can describe the magnitude and duration of the effect of each event on the CPI. Most intervention research that has been done contains only an intervention with a single input, either a step or a pulse function. A multi-input intervention was used in the Indonesian CPI case because there are several events which are expected to affect the CPI. Based on the results, those events did affect the CPI. Additionally, other events, such as Ied in January 1999 and events in April 2002, July 2003, December 2005, and September 2008, affected the CPI too. In general, those events had a positive effect on the CPI, except the events in April 2002 and July 2003, which had negative effects.
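
    Intervention models of this kind are often fitted as ARIMA models with step and pulse regressors. The sketch below uses SARIMAX from statsmodels on an invented CPI-like series with two illustrative interventions; the dates, dynamics, and magnitudes are placeholders, not the Indonesian data.

        import numpy as np
        import pandas as pd
        from statsmodels.tsa.statespace.sarimax import SARIMAX

        rng = np.random.default_rng(0)
        idx = pd.date_range('1995-01', periods=180, freq='MS')
        step = (idx >= '1997-08').astype(float)     # permanent shift (step input)
        pulse = (idx == '2005-10').astype(float)    # one-month shock (pulse input)
        cpi = 100 + np.cumsum(0.3 + rng.normal(0, 0.5, 180)) + 8 * step + 5 * pulse

        model = SARIMAX(cpi, exog=np.column_stack([step, pulse]), order=(1, 1, 1))
        fit = model.fit(disp=False)
        print(fit.params[:2])                       # estimated intervention effects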

  18. Detection of no-model input-output pairs in closed-loop systems.

    Science.gov (United States)

    Potts, Alain Segundo; Alvarado, Christiam Segundo Morales; Garcia, Claudio

    2017-11-01

    The detection of no-model input-output (IO) pairs is important because it can speed up the multivariable system identification process, since all the pairs with null transfer functions are discarded beforehand, and it can also improve the quality of the identified model, thus improving the performance of model-based controllers. In the available literature, the methods focus only on the open-loop case, since in that case there is no controller effect forcing the main diagonal of the transfer matrix to one and all the other terms to zero. In this paper, a modification of a previous method able to detect no-model IO pairs in open-loop systems is presented, adapted to perform this duty in closed-loop systems. Tests are performed using both the traditional methods and the proposed one to show its effectiveness. Copyright © 2017 ISA. Published by Elsevier Ltd. All rights reserved.

  19. Input-output model of regional environmental and economic impacts of nuclear power plants

    International Nuclear Information System (INIS)

    Johnson, M.H.; Bennett, J.T.

    1979-01-01

    The costs of delayed licensing of nuclear power plants call for a more comprehensive method of quantifying the economic and environmental impacts on a region. A traditional input-output (I-O) analysis approach is extended to assess the effects of changes in output, income, employment, pollution, water consumption, and the costs and revenues of local government, disaggregated among 23 industry sectors during the construction and operating phases. Unlike earlier studies, this model uses nonlinear environmental interactions and specifies environmental feedbacks to the economic sector. 20 references

  20. Using Whole-House Field Tests to Empirically Derive Moisture Buffering Model Inputs

    Energy Technology Data Exchange (ETDEWEB)

    Woods, J.; Winkler, J.; Christensen, D.; Hancock, E.

    2014-08-01

    Building energy simulations can be used to predict a building's interior conditions, along with the energy use associated with keeping these conditions comfortable. These models simulate the loads on the building (e.g., internal gains, envelope heat transfer), determine the operation of the space conditioning equipment, and then calculate the building's temperature and humidity throughout the year. The indoor temperature and humidity are affected not only by the loads and the space conditioning equipment, but also by the capacitance of the building materials, which buffer changes in temperature and humidity. This research developed an empirical method to extract whole-house model inputs for use with a more accurate moisture capacitance model (the effective moisture penetration depth model). The experimental approach was to subject the materials in the house to a square-wave relative humidity profile, measure all of the moisture transfer terms (e.g., infiltration, air conditioner condensate) and calculate the only unmeasured term: the moisture absorption into the materials. After validating the method with laboratory measurements, we performed the tests in a field house. A least-squares fit of an analytical solution to the measured moisture absorption curves was used to determine the three independent model parameters representing the moisture buffering potential of this house and its furnishings. Follow-on tests with realistic latent and sensible loads showed good agreement with the derived parameters, especially compared to the commonly-used effective capacitance approach. These results show that the EMPD model, once the inputs are known, is an accurate moisture buffering model.
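
    The parameter-extraction step described above amounts to a least-squares fit of an analytical response to the measured absorption curve. The sketch below fits a simple exponential step response with scipy; the functional form and parameter names are illustrative, not the study's exact EMPD solution.

        import numpy as np
        from scipy.optimize import curve_fit

        def absorption(t, m_inf, tau):
            # Mass absorbed after a step change in relative humidity (assumed form)
            return m_inf * (1.0 - np.exp(-t / tau))

        t = np.linspace(0.0, 48.0, 49)              # hours after the RH step
        rng = np.random.default_rng(0)
        measured = absorption(t, 1.8, 9.0) + rng.normal(0.0, 0.03, t.size)

        (m_inf, tau), _ = curve_fit(absorption, t, measured, p0=[1.0, 5.0])
        print(m_inf, tau)                           # fitted buffering parameters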

  1. Low-level waste shallow land disposal source term model: Data input guides

    International Nuclear Information System (INIS)

    Sullivan, T.M.; Suen, C.J.

    1989-07-01

    This report provides an input guide for the computational models developed to predict the rate of radionuclide release from shallow land disposal of low-level waste. Release of contaminants depends on four processes: water flow, container degradation, waste from leaching, and contaminant transport. The computer code FEMWATER has been selected to predict the movement of water in an unsaturated porous media. The computer code BLT (Breach, Leach, and Transport), a modification of FEMWASTE, has been selected to predict the processes of container degradation (Breach), contaminant release from the waste form (Leach), and contaminant migration (Transport). In conjunction, these two codes have the capability to account for the effects of disposal geometry, unsaturated/water flow, container degradation, waste form leaching, and migration of contaminants releases within a single disposal trench. In addition to the input requirements, this report presents the fundamental equations and relationships used to model the four different processes previously discussed. Further, the appendices provide a representative sample of data required by the different models. 14 figs., 27 tabs

  2. Modelling the soil microclimate: does the spatial or temporal resolution of input parameters matter?

    Directory of Open Access Journals (Sweden)

    Anna Carter

    2016-01-01

    The urgency of predicting future impacts of environmental change on vulnerable populations is advancing the development of spatially explicit habitat models. Continental-scale climate and microclimate layers are now widely available. However, most terrestrial organisms exist within microclimate spaces that are very small relative to the spatial resolution of those layers. We examined the effects of multi-resolution, multi-extent topographic and climate inputs on the accuracy of hourly soil temperature predictions for a small island, generated at a very high spatial resolution (<1 m2) using the mechanistic microclimate model in NicheMapR. Achieving an accuracy comparable to lower-resolution, continental-scale microclimate layers (within about 2–3°C of observed values) required the use of daily weather data as well as high-resolution topographic layers (elevation, slope, aspect, horizon angles), while inclusion of site-specific soil properties did not markedly improve predictions. Our results suggest that large-extent microclimate layers may not provide accurate estimates of microclimate conditions when the spatial extent of a habitat or other area of interest is similar to or smaller than the spatial resolution of the layers themselves. Thus, effort in sourcing model inputs should be focused on obtaining high-resolution terrain data, e.g., via LiDAR or photogrammetry, and local weather information rather than on in situ sampling of microclimate characteristics.

  3. Transport coefficient computation based on input/output reduced order models

    Science.gov (United States)

    Hurst, Joshua L.

    The guiding purpose of this thesis is to address the optimal material design problem when the material description is a molecular dynamics model. The end goal is to obtain a simplified and fast model that captures the property of interest such that it can be used in controller design and optimization. The approach is to examine model reduction analysis and methods to capture a specific property of interest, in this case viscosity, or more generally complex modulus or complex viscosity. This property and other transport coefficients are defined by an input/output relationship, and this motivates model reduction techniques that are tailored to preserve input/output behavior. In particular, Singular Value Decomposition (SVD) based methods are investigated. First, simulation methods are identified that are amenable to systems-theory analysis. For viscosity, these models are of the Gosling and Lees-Edwards type. They are high-order nonlinear Ordinary Differential Equations (ODEs) that employ Periodic Boundary Conditions (PBC). Properties can be calculated from the state trajectories of these ODEs. In this research, local linear approximations are rigorously derived, and special attention is given to potentials that are evaluated with PBC. For the Gosling description, LTI models are developed from state trajectories but are found to have limited success in capturing the system property, even though it is shown that full-order LTI models can be well approximated by reduced-order LTI models. For the Lees-Edwards SLLOD-type model, the nonlinear ODEs are approximated by a Linear Time Varying (LTV) model about a nominal trajectory, and both balanced truncation and Proper Orthogonal Decomposition (POD) are used to assess the plausibility of reduced-order models for this system description. An immediate application of the derived LTV models is Quasilinearization or Waveform Relaxation. Quasilinearization is a Newton's method applied to the ODE operator
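
    As a minimal illustration of the POD step, the sketch below builds a reduced basis from snapshot data via the SVD and Galerkin-projects a linear operator onto it. The synthetic low-rank snapshot matrix merely stands in for state trajectories of the molecular-dynamics ODEs.

      import numpy as np

      # Synthetic snapshot matrix: columns are system states at time samples,
      # generated to have an approximate rank of 5 plus noise.
      rng = np.random.default_rng(1)
      snapshots = rng.standard_normal((200, 5)) @ rng.standard_normal((5, 50))
      snapshots += 0.01 * rng.standard_normal((200, 50))

      # POD: the leading left singular vectors span the reduced subspace;
      # singular values rank their energy content.
      U, s, _ = np.linalg.svd(snapshots, full_matrices=False)
      energy = np.cumsum(s**2) / np.sum(s**2)
      r = int(np.searchsorted(energy, 0.99)) + 1   # modes for 99% energy
      basis = U[:, :r]

      # Galerkin projection of a linear(ized) system x' = A x onto the basis.
      A = 0.01 * rng.standard_normal((200, 200))
      A_reduced = basis.T @ A @ basis              # r x r reduced operator
      print("reduced order:", r, "reduced shape:", A_reduced.shape)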

  4. Fuzzy portfolio model with fuzzy-input return rates and fuzzy-output proportions

    Science.gov (United States)

    Tsaur, Ruey-Chyn

    2015-02-01

    In the finance market, a short-term investment strategy is usually applied in portfolio selection in order to reduce investment risk; however, the economy is uncertain and the investment period is short. Further, an investor has incomplete information for selecting a portfolio with crisp proportions for each chosen security. In this paper we present a new method for constructing a fuzzy portfolio model with fuzzy-input return rates and fuzzy-output proportions, based on possibilistic mean-standard deviation models. Furthermore, we consider both excess and shortage of investment in different economic periods by using a fuzzy constraint for the sum of the fuzzy proportions, and we also account for the risks of securities investment and the vagueness of incomplete information during periods of economic depression in the portfolio selection. Finally, we present a numerical example of a portfolio selection problem to illustrate the proposed model, and a sensitivity analysis is performed on the results.

  5. Chaos game representation (CGR)-walk model for DNA sequences

    International Nuclear Information System (INIS)

    Jie, Gao; Zhen-Yuan, Xu

    2009-01-01

    Chaos game representation (CGR) is an iterative mapping technique that processes sequences of units, such as nucleotides in a DNA sequence or amino acids in a protein, to determine the coordinates of their positions in a continuous space. This distribution of positions has two features: it is unique, and the source sequence can be recovered from the coordinates, so that the distance between positions may serve as a measure of similarity between the corresponding sequences. A CGR-walk model is proposed based on CGR coordinates for DNA sequences. The CGR coordinates are converted into a time series, and a long-memory ARFIMA(p, d, q) model, where ARFIMA stands for autoregressive fractionally integrated moving average, is introduced into the DNA sequence analysis. The model is applied to simulate real CGR-walk sequence data of ten genomic sequences. Remarkably, long-range correlations are uncovered in the data, and the simulated results are reasonably well fitted by the ARFIMA(p, d, q) model. (cross-disciplinary physics and related areas of science and technology)
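
    The CGR iteration itself is simple to state: each nucleotide pulls the current point halfway toward the corner of the unit square assigned to that base. The sketch below computes the coordinates; taking the x-coordinates as the CGR-walk time series is an illustrative choice, not necessarily the paper's exact conversion.

      # Classic chaos game representation over the unit square.
      CORNERS = {"A": (0.0, 0.0), "C": (0.0, 1.0), "G": (1.0, 1.0), "T": (1.0, 0.0)}

      def cgr_coordinates(sequence):
          x, y = 0.5, 0.5
          coords = []
          for base in sequence:
              cx, cy = CORNERS[base]
              x, y = (x + cx) / 2.0, (y + cy) / 2.0  # move halfway to corner
              coords.append((x, y))
          return coords

      coords = cgr_coordinates("ATGCGTACGTTAGC")
      # One possible CGR-walk series (an assumption for illustration): the
      # x-coordinates, to be modelled by a long-memory ARFIMA(p, d, q) process.
      walk = [x for x, _ in coords]
      print(walk[:5])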

  6. A Water-Withdrawal Input-Output Model of the Indian Economy.

    Science.gov (United States)

    Bogra, Shelly; Bakshi, Bhavik R; Mathur, Ritu

    2016-02-02

    Managing freshwater allocation for a highly populated and growing economy like India can benefit from knowledge about the effects of economic activities. This study transforms the 2003-2004 economic input-output (IO) table of India into a water-withdrawal input-output model to quantify direct and indirect flows. This unique model is based on a comprehensive database compiled from diverse public sources, and estimates the direct and indirect water withdrawal of all economic sectors. It distinguishes between green (rainfall), blue (surface and ground), and scarce groundwater. Results indicate that the total direct water withdrawal is nearly 3052 billion cubic meters (BCM) and 96% of this is used in the agriculture sectors, with direct green water contributing about 1145 BCM, excluding forestry. Apart from 727 BCM of direct blue water withdrawal for agriculture, other significant users include "Electricity" with 64 BCM, "Water supply" with 44 BCM, and other industrial sectors with nearly 14 BCM. "Construction", "Miscellaneous food products", "Hotels and restaurants", and "Paper, paper products, and newsprint" are other significant indirect withdrawers. The net virtual water import is found to be insignificant compared to the direct water used in agriculture nationally, while the scarce groundwater associated with crops is largely contributed by the northern states.

  7. International trade inoperability input-output model (IT-IIM): theory and application.

    Science.gov (United States)

    Jung, Jeesang; Santos, Joost R; Haimes, Yacov Y

    2009-01-01

    The inoperability input-output model (IIM) has been used for analyzing disruptions due to man-made or natural disasters that can adversely affect the operation of economic systems or critical infrastructures. Taking economic perturbations for each sector as inputs, the IIM provides the degree of economic production impact on all industry sectors as its outputs. The current version of the IIM does not provide a separate analysis for the international trade component of the inoperability. If an important port of entry (e.g., the Port of Los Angeles) is disrupted, then international trade inoperability becomes a highly relevant subject for analysis. To complement the current IIM, this article develops the International Trade-IIM (IT-IIM). The IT-IIM investigates the international trade inoperability for all industry sectors resulting from disruptions to a major port of entry. As in traditional IIM analysis, the inoperability metrics that the IT-IIM provides can be used to prioritize economic sectors based on the losses they could potentially incur. The IT-IIM is used to analyze two types of direct perturbations: (1) the reduced capacity of ports of entry, including harbors and airports (e.g., a shutdown of any port of entry); and (2) restrictions on commercial goods that foreign countries trade with the base nation (e.g., an embargo).
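
    The demand-reduction core of the IIM literature can be stated compactly: sector inoperabilities q satisfy q = A*q + c*, where A* is the normalized interdependency matrix and c* the demand-side perturbation, so q = (I - A*)^-1 c*. The sketch below solves this relation; the matrix and perturbation values are illustrative, not taken from the IT-IIM article.

      import numpy as np

      # Normalized interdependency matrix A* (illustrative values only).
      A_star = np.array([[0.0, 0.3, 0.1],
                         [0.2, 0.0, 0.2],
                         [0.1, 0.1, 0.0]])

      # Demand-side perturbation c*: e.g., a port disruption reduces demand
      # served by sector 0 by 5%.
      c_star = np.array([0.05, 0.0, 0.0])

      # Equilibrium inoperability: q = (I - A*)^-1 c*, with q[i] in [0, 1]
      # (0 = fully operational); useful for ranking sectors by potential loss.
      q = np.linalg.solve(np.eye(3) - A_star, c_star)
      print("sector inoperabilities:", q)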

  8. Multiregional input-output model for the evaluation of Spanish water flows.

    Science.gov (United States)

    Cazcarro, Ignacio; Duarte, Rosa; Sánchez Chóliz, Julio

    2013-01-01

    We construct a multiregional input-output model for Spain in order to evaluate the pressures on water resources, virtual water flows, and water footprints of the regions, and the water impact of trade relationships within Spain and abroad. The study is framed within the interregional input-output models constructed to study water flows and impacts of regions in China, Australia, Mexico, or the UK. To build our database, we reconcile regional IO tables, national and regional accounts of Spain, and trade and water data. Results show an important imbalance between the origin of water resources and their final destination, with significant water pressures in the South, Mediterranean, and some central regions. The most populated and dynamic regions of Madrid and Barcelona are important drivers of water consumption in Spain. The main virtual water exporters are the southern and central agrarian regions: Andalusia, Castile-La Mancha, Castile-Leon, Aragon, and Extremadura, while the main virtual water importers are the industrialized regions of Madrid, the Basque Country, and the Mediterranean coast. The paper shows the different locations of direct and indirect consumers of water in Spain and how the economic trade and consumption patterns of certain areas have significant impacts on the availability of water resources in other, often drier, regions.

  9. Model analysis of riparian buffer effectiveness for reducing nutrient inputs to streams in agricultural landscapes

    Science.gov (United States)

    McKane, R. B.; M, S.; F, P.; Kwiatkowski, B. L.; Rastetter, E. B.

    2006-12-01

    Federal and state agencies responsible for protecting water quality rely mainly on statistically based methods to assess and manage risks to the nation's streams, lakes, and estuaries. Although statistical approaches provide valuable information on current trends in water quality, process-based simulation models are essential for understanding and forecasting how changes in human activities across complex landscapes impact the transport of nutrients and contaminants to surface waters. To address this need, we developed a broadly applicable, process-based watershed simulator that links a spatially explicit hydrologic model and a terrestrial biogeochemistry model (MEL). See Stieglitz et al. and Pan et al., this meeting, for details on the design and verification of this simulator. Here we apply the watershed simulator to a generalized agricultural setting to demonstrate its potential for informing policy and management decisions concerning water quality. This demonstration specifically explores the effectiveness of riparian buffers for reducing the transport of nitrogenous fertilizers from agricultural fields to streams. The interaction of hydrologic and biogeochemical processes represented in our simulator allows several important questions to be addressed. (1) For a range of upland fertilization rates, to what extent do riparian buffers reduce nitrogen inputs to streams? (2) How does buffer effectiveness change over time as the plant-soil system approaches N-saturation? (3) How can buffers be managed to increase their effectiveness, e.g., through periodic harvest and replanting? The model results illustrate that, while the answers to these questions depend to some extent on site factors (climatic regime, soil properties, and vegetation type), in all cases riparian buffers have a limited capacity to reduce nitrogen inputs to streams where fertilization rates approach those typically used for intensive agriculture (e.g., 200 kg N per ha per year for corn in the U.S.)

  10. The genome sequence of the model ascomycete fungus Podospora anserina

    NARCIS (Netherlands)

    Espagne, Eric; Lespinet, Olivier; Malagnac, Fabienne; Da Silva, Corinne; Jaillon, Olivier; Porcel, Betina M; Couloux, Arnaud; Aury, Jean-Marc; Ségurens, Béatrice; Poulain, Julie; Anthouard, Véronique; Grossetete, Sandrine; Khalili, Hamid; Coppin, Evelyne; Déquard-Chablat, Michelle; Picard, Marguerite; Contamine, Véronique; Arnaise, Sylvie; Bourdais, Anne; Berteaux-Lecellier, Véronique; Gautheret, Daniel; de Vries, Ronald P; Battaglia, Evy; Coutinho, Pedro M; Danchin, Etienne Gj; Henrissat, Bernard; Khoury, Riyad El; Sainsard-Chanet, Annie; Boivin, Antoine; Pinan-Lucarré, Bérangère; Sellem, Carole H; Debuchy, Robert; Wincker, Patrick; Weissenbach, Jean; Silar, Philippe

    2008-01-01

    BACKGROUND: The dung-inhabiting ascomycete fungus Podospora anserina is a model used to study various aspects of eukaryotic and fungal biology, such as ageing, prions and sexual development. RESULTS: We present a 10X draft sequence of the P. anserina genome, linked to the sequences of a large expressed

  11. On Input Vector Representation for the SVR model of Reactor Core Loading Pattern Critical Parameters

    International Nuclear Information System (INIS)

    Trontl, K.; Pevec, D.; Smuc, T.

    2008-01-01

    Determination and optimization of the reactor core loading pattern is an important factor in nuclear power plant operation. The goal is to minimize the amount of enriched uranium (fresh fuel) and burnable absorbers placed in the core, while maintaining nuclear power plant operational and safety characteristics. The usual approach to loading pattern optimization involves a high degree of engineering judgment, a set of heuristic rules, an optimization algorithm, and a computer code used for evaluating proposed loading patterns. The speed of the optimization process is highly dependent on the computer code used for the evaluation. Recently, we proposed a new method for fast loading pattern evaluation based on a general robust regression model relying on state-of-the-art research in the field of machine learning. We employed the Support Vector Regression (SVR) technique. SVR is a supervised learning method in which model parameters are automatically determined by solving a quadratic optimization problem. The preliminary tests revealed a good potential of the SVR method for fast and accurate reactor core loading pattern evaluation. However, some aspects of model development are still unresolved. The main objective of the work reported in this paper was to conduct additional tests and analyses required for full clarification of the SVR applicability to loading pattern evaluation. We focused our attention on the parameters defining the input vector, primarily its structure and complexity, and on the parameters defining the kernel functions. All the tests were conducted on the NPP Krsko reactor core, using the MCRAC code for the calculation of reactor core loading pattern critical parameters. The tested input vector structures did not influence the accuracy of the models, suggesting that the initially tested input vector, consisting of the number of IFBAs and the k-inf at the beginning of the cycle, is adequate. The influence of kernel function specific parameters (σ for RBF kernel
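
    For orientation, a minimal sketch of RBF-kernel support vector regression on the abstract's two-component input vector (number of IFBAs, k-inf at the beginning of cycle) is given below, using scikit-learn. The training data and target are synthetic placeholders, not NPP Krsko results; gamma corresponds to the RBF width parameter sigma via gamma = 1 / (2 sigma^2).

      import numpy as np
      from sklearn.svm import SVR

      rng = np.random.default_rng(2)
      X = np.column_stack([rng.integers(0, 120, 200),       # number of IFBAs
                           rng.uniform(1.00, 1.20, 200)])   # k-inf at BOC
      # Hypothetical critical parameter (e.g., cycle length in days).
      y = 300 + 0.5 * X[:, 0] + 400 * (X[:, 1] - 1.1) + rng.normal(0, 5, 200)

      model = SVR(kernel="rbf", C=100.0, gamma=0.1, epsilon=0.5)
      model.fit(X, y)
      print(model.predict([[60, 1.10]]))   # fast surrogate evaluation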

  12. INPUT DATA OF BURNING WOOD FOR CFD MODELLING USING SMALL-SCALE EXPERIMENTS

    Directory of Open Access Journals (Sweden)

    Petr Hejtmánek

    2017-12-01

    The paper presents an option for acquiring simplified input data for the modelling of burning wood in CFD programmes. The option lies in combining data from small- and molecular-scale experiments in order to describe the material as a one-reaction material property. Such a virtual material would spread fire and develop the fire according to the surrounding environment, and it could be extinguished, without using a complex molecular reaction description. A series of experiments including elemental analysis, thermogravimetric analysis, differential thermal analysis, and combustion analysis was performed. Then an FDS model of burning pine wood in a cone calorimeter was built, in which those values were used. The model was validated against the HRR (heat release rate) from the real cone calorimeter experiment. The results show that for the purpose of CFD modelling, the effective heat of combustion, which is one of the basic material properties for fire modelling affecting the total intensity of burning, should be used. Using the net heat of combustion in the model leads to higher values of HRR in comparison with the real experiment data. Considering all the results presented in this paper, it is possible to simulate the burning of wood using the extrapolated data obtained in small-scale experiments.

  13. Targeting the right input data to improve crop modeling at global level

    Science.gov (United States)

    Adam, M.; Robertson, R.; Gbegbelegbe, S.; Jones, J. W.; Boote, K. J.; Asseng, S.

    2012-12-01

    Designed for location-specific simulations, the use of crop models at a global level raises important questions. Crop models are originally premised on small unit areas where environmental conditions and management practices are considered homogeneous. Specific information describing soils, climate, management, and crop characteristics is used in the calibration process. However, when scaling up for global application, we rely on information derived from geographical information systems and weather generators. To run crop models at broad scales, we use a modeling platform that assumes a uniformly generated grid cell as a unit area. Specific weather, soil, and management practices for each crop are represented for each of the grid cells. Studies on the impacts of the uncertainties of weather information and climate change on crop yield at a global level have been carried out (Osborne et al., 2007; Nelson et al., 2010; van Bussel et al., 2011). Detailed information on soils and management practices at the global level is very scarce but recognized to be of critical importance (Reidsma et al., 2009). Few attempts to assess the impact of their uncertainties on cropping system performance can be found. The objectives of this study are (i) to determine the sensitivities of a crop model to soil and management practices, the inputs most relevant to low-input rainfed cropping systems, and (ii) to define hotspots of sensitivity according to the input data. We ran DSSAT v4.5 globally (CERES-CROPSIM) to simulate wheat yields at 45 arc-minute resolution. Cultivar parameters were calibrated and validated for different mega-environments (results not shown). The model was run for nitrogen-limited production systems. This setting was chosen as the most representative for simulating actual yield (especially for low-input rainfed agricultural systems) and assumes crop growth to be free of any pest and disease damage. We conducted a sensitivity analysis on contrasting management

  14. Loss of GABAergic inputs in APP/PS1 mouse model of Alzheimer's disease

    Directory of Open Access Journals (Sweden)

    Tutu Oyelami

    2014-04-01

    Alzheimer's disease (AD) is characterized by symptoms which include seizures, sleep disruption, loss of memory, and anxiety in patients. Of particular importance is the possibility of preventing the progressive loss of neuronal projections in the disease. Transgenic mice overexpressing EOFAD mutant PS1 (L166P) and mutant APP (APP KM670/671NL, Swedish) (APP/PS1) develop a very early and robust amyloid pathology and display synaptic plasticity impairments and cognitive dysfunction. Here we investigated GABAergic neurotransmission, using multi-electrode array (MEA) technology and pharmacological manipulation to quantify the effect of GABA blockers on field excitatory postsynaptic potentials (fEPSPs), and immunostaining of GABAergic neurons. Using MEA technology, we confirm impaired LTP induction by high-frequency stimulation in the APP/PS1 hippocampal CA1 region, associated with a reduced alteration of the paired-pulse ratio after LTP induction. Synaptic dysfunction was also observed under manipulation of the external calcium concentration and in the input-output curve. Electrophysiological recordings from brain slices of the CA1 hippocampal area, in the presence of cocktails of GABAergic receptor blockers, further demonstrated a significant reduction in the GABAergic inputs in APP/PS1 mice. Moreover, immunostaining of GAD65, a specific marker for GABAergic neurons, revealed a reduction of the GABAergic inputs in the CA1 area of the hippocampus. These results might be linked to the increased seizure sensitivity, premature death, and cognitive dysfunction in this animal model of AD. Further in-depth analysis of GABAergic dysfunction in APP/PS1 mice is required and may open new perspectives for AD therapy by restoring GABAergic function.

  15. Thermodynamics-based models of transcriptional regulation with gene sequence.

    Science.gov (United States)

    Wang, Shuqiang; Shen, Yanyan; Hu, Jinxing

    2015-12-01

    Quantitative models of gene regulatory activity have the potential to improve our mechanistic understanding of transcriptional regulation. However, the few models available today have been based on simplistic assumptions about the sequences being modeled or on heuristic approximations of the underlying regulatory mechanisms. In this work, we have developed a thermodynamics-based model to predict gene expression driven by any DNA sequence. The proposed model relies on a continuous-time, differential-equation description of transcriptional dynamics. The sequence features of the promoter are exploited to derive the binding affinity, which is obtained from statistical molecular thermodynamics. Experimental results show that the proposed model can effectively identify the activity levels of transcription factors and the regulatory parameters. Compared with previous models, the proposed model reveals more biological insight.
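
    The thermodynamic kernel of such models can be shown in a few lines: a site's occupancy follows a Boltzmann-weighted binding constant. The sketch below is a single-site simplification with hypothetical energies; the paper couples such affinities to a differential-equation description of transcription dynamics.

      import math

      def site_occupancy(binding_energy_kcal, tf_concentration, T=298.15):
          """P(bound) = [TF]*K / (1 + [TF]*K), with K = exp(-dG / RT)."""
          R = 1.987e-3                     # gas constant, kcal / (mol K)
          K = math.exp(-binding_energy_kcal / (R * T))
          return tf_concentration * K / (1.0 + tf_concentration * K)

      # Hypothetical strong vs. weak site energies (kcal/mol) at 1 uM TF.
      for dG in (-9.0, -6.0):
          print(dG, site_occupancy(dG, tf_concentration=1e-6))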

  16. An Approach for Generating Precipitation Input for Worst-Case Flood Modelling

    Science.gov (United States)

    Felder, Guido; Weingartner, Rolf

    2015-04-01

    There is a lack of suitable methods for creating precipitation scenarios that can be used to realistically estimate peak discharges with very low probabilities. On the one hand, existing methods are methodologically questionable when it comes to physical system boundaries. On the other hand, the spatio-temporal representativeness of precipitation patterns as system input is limited. In response, this study proposes a method of deriving representative spatio-temporal precipitation patterns and presents a step towards methodologically sound estimation of infrequent floods by using a worst-case approach. A Monte Carlo rainfall-runoff model allows for the testing of a wide range of different spatio-temporal distributions of an extreme precipitation event and therefore for the generation of a hydrograph for each of these distributions. Out of these numerous hydrographs and their corresponding peak discharges, the worst-case catchment reactions to the system input can be derived. The spatio-temporal distributions leading to the highest peak discharges are identified and can eventually be used for further investigations.
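
    The logic of the approach can be sketched as a loop: hold the event total fixed (the physical system boundary), randomize its temporal distribution, run a rainfall-runoff model for each realization, and keep the worst peak. The linear-reservoir convolution below is a hypothetical stand-in for the study's full hydrological model.

      import numpy as np

      rng = np.random.default_rng(3)

      def peak_discharge(rainfall):
          # Hypothetical rainfall-runoff stand-in: unit-hydrograph convolution.
          unit_hydrograph = np.exp(-np.arange(24) / 6.0)
          return np.convolve(rainfall, unit_hydrograph).max()

      total_rain_mm = 200.0        # fixed event total over 24 hours
      worst = 0.0
      for _ in range(10_000):
          # Random temporal distribution of the same event total.
          weights = rng.dirichlet(np.ones(24))
          worst = max(worst, peak_discharge(total_rain_mm * weights))
      print("worst-case peak discharge (arbitrary units):", worst)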

  17. Multivariate sensitivity analysis to measure global contribution of input factors in dynamic models

    International Nuclear Information System (INIS)

    Lamboni, Matieyendou; Monod, Herve; Makowski, David

    2011-01-01

    Many dynamic models are used for risk assessment and decision support in ecology and crop science. Such models generate time-dependent model predictions, with time either discretised or continuous. Their global sensitivity analysis is usually applied separately on each time output, but Campbell et al. (2006) advocated global sensitivity analyses on the expansion of the dynamics in a well-chosen functional basis. This paper focuses on the particular case when principal components analysis is combined with analysis of variance. In addition to the indices associated with the principal components, generalised sensitivity indices are proposed to synthesize the influence of each parameter on the whole time series output. Index definitions are given when the uncertainty on the input factors is either discrete or continuous and when the dynamic model is either discrete or functional. A general estimation algorithm is proposed, based on classical methods of global sensitivity analysis. The method is applied to a dynamic wheat crop model with 13 uncertain parameters. Three methods of global sensitivity analysis are compared: the Sobol'-Saltelli method, the extended FAST method, and the fractional factorial design of resolution 6.
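
    For readers unfamiliar with variance-based indices, the sketch below estimates first-order Sobol' indices for a toy three-parameter model with the pick-and-freeze scheme; the model and parameter ranges are placeholders, not the wheat crop model of the paper.

      import numpy as np

      def model(x):
          # Toy scalar output, nonlinear in three uncertain parameters.
          return x[:, 0] + 2.0 * x[:, 1] ** 2 + x[:, 0] * x[:, 2]

      rng = np.random.default_rng(4)
      n, d = 100_000, 3
      A = rng.uniform(0, 1, (n, d))
      B = rng.uniform(0, 1, (n, d))
      fA, fB = model(A), model(B)
      var = np.var(np.concatenate([fA, fB]))

      # Pick-and-freeze estimator of first-order indices S_i.
      for i in range(d):
          ABi = A.copy()
          ABi[:, i] = B[:, i]
          S_i = np.mean(fB * (model(ABi) - fA)) / var
          print(f"S_{i + 1} = {S_i:.3f}")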

  18. Comparison of several climate indices as inputs in modelling of the Baltic Sea runoff

    Energy Technology Data Exchange (ETDEWEB)

    Hanninen, J.; Vuorinen, I. [Turku Univ. (Finland). Archipelaco Research Inst.], e-mail: jari.hanninen@utu.fi

    2012-11-01

    Using transfer function (TF) models, we have earlier presented a chain of events between changes in the North Atlantic Oscillation (NAO) and their oceanographic and ecological consequences in the Baltic Sea. Here we tested whether other climate indices as inputs would improve the TF models and our understanding of the Baltic Sea ecosystem. Besides the NAO, the predictors were the Arctic Oscillation (AO), sea-level air pressure at Iceland (SLP), and wind speeds at Hoburg (Gotland). All indices produced good TF models when the total riverine runoff to the Baltic Sea was used as the modelling basis. The AO was not applicable in all study areas, showing a delay of about half a year between climate and runoff events, connected with the freezing and melting times of ice and snow in the northern catchment area of the Baltic Sea. The NAO appeared to be the most useful modelling tool, as its area of applicability was the widest of the tested indices and the time lag between climate and runoff events was the shortest. SLP and Hoburg wind speeds showed largely the same results as the NAO, but with smaller areal applicability. Thus, the AO and NAO both contributed most to the general understanding of the climate control of runoff events in the Baltic Sea ecosystem. (orig.)
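
    As a loose stand-in for transfer function modelling, the sketch below fits a regression-with-ARMA-errors model in statsmodels, with a lagged climate index as the exogenous input; the series are synthetic and the study's varying lags are reduced to a single month for brevity.

      import numpy as np
      from statsmodels.tsa.arima.model import ARIMA

      # Synthetic monthly series: a climate index (stand-in for the NAO)
      # driving runoff at one month's lag, plus noise.
      rng = np.random.default_rng(5)
      n = 240
      index = rng.standard_normal(n)
      runoff = 100 + 8 * np.roll(index, 1) + rng.normal(0, 2, n)

      exog = np.roll(index, 1).reshape(-1, 1)
      fit = ARIMA(runoff[1:], exog=exog[1:], order=(1, 0, 0)).fit()
      print(fit.params)    # the exogenous coefficient should recover ~8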

  19. Multivariate sensitivity analysis to measure global contribution of input factors in dynamic models

    Energy Technology Data Exchange (ETDEWEB)

    Lamboni, Matieyendou [INRA, Unite MIA (UR341), F78352 Jouy en Josas Cedex (France); Monod, Herve, E-mail: herve.monod@jouy.inra.f [INRA, Unite MIA (UR341), F78352 Jouy en Josas Cedex (France); Makowski, David [INRA, UMR Agronomie INRA/AgroParisTech (UMR 211), BP 01, F78850 Thiverval-Grignon (France)

    2011-04-15

    Many dynamic models are used for risk assessment and decision support in ecology and crop science. Such models generate time-dependent model predictions, with time either discretised or continuous. Their global sensitivity analysis is usually applied separately on each time output, but Campbell et al. (2006) advocated global sensitivity analyses on the expansion of the dynamics in a well-chosen functional basis. This paper focuses on the particular case when principal components analysis is combined with analysis of variance. In addition to the indices associated with the principal components, generalised sensitivity indices are proposed to synthesize the influence of each parameter on the whole time series output. Index definitions are given when the uncertainty on the input factors is either discrete or continuous and when the dynamic model is either discrete or functional. A general estimation algorithm is proposed, based on classical methods of global sensitivity analysis. The method is applied to a dynamic wheat crop model with 13 uncertain parameters. Three methods of global sensitivity analysis are compared: the Sobol'-Saltelli method, the extended FAST method, and the fractional factorial design of resolution 6.

  20. Solar Load Inputs for USARIEM Thermal Strain Models and the Solar Radiation-Sensitive Components of the WBGT Index

    National Research Council Canada - National Science Library

    Matthew, William

    2001-01-01

    This report describes processes we have implemented to use global pyranometer-based estimates of mean radiant temperature as the common solar load input for the Scenario model, the USARIEM heat strain...

  1. Assessment of NASA's Physiographic and Meteorological Datasets as Input to HSPF and SWAT Hydrological Models

    Science.gov (United States)

    Alacron, Vladimir J.; Nigro, Joseph D.; McAnally, William H.; OHara, Charles G.; Engman, Edwin Ted; Toll, David

    2011-01-01

    This paper documents the use of simulated Moderate Resolution Imaging Spectroradiometer land use/land cover (MODIS-LULC), NASA-LIS generated precipitation and evapotranspiration (ET), and Shuttle Radar Topography Mission (SRTM) datasets (in conjunction with standard land use, topographical, and meteorological datasets) as input to hydrological models routinely used by the watershed hydrology modeling community. The study is focused on coastal watersheds in the Mississippi Gulf Coast, although one of the test cases focuses on an inland watershed located in the northeastern part of the State of Mississippi, USA. The decision support tools (DSTs) into which the NASA datasets were assimilated were the Soil and Water Assessment Tool (SWAT) and the Hydrological Simulation Program FORTRAN (HSPF). These DSTs are endorsed by several US government agencies (EPA, FEMA, USGS) for water resources management strategies. These models use physiographic and meteorological data extensively. Precipitation gages and USGS gage stations in the region were used to calibrate several HSPF and SWAT model applications. Land use and topographical datasets were swapped to assess model output sensitivities. NASA-LIS meteorological data were introduced into the calibrated model applications to simulate watershed hydrology for a time period in which no weather data were available (1997-2006). The performance of the NASA datasets in the context of hydrological modeling was assessed through comparison of measured and model-simulated hydrographs. Overall, the NASA datasets were as useful as standard land use, topographical, and meteorological datasets. Moreover, the NASA datasets made possible analyses that the standard datasets could not, e.g., the introduction of land use dynamics into hydrological simulations

  2. Fractional Gaussian noise-enhanced information capacity of a nonlinear neuron model with binary signal input

    Science.gov (United States)

    Gao, Feng-Yin; Kang, Yan-Mei; Chen, Xi; Chen, Guanrong

    2018-05-01

    This paper reveals the effect of fractional Gaussian noise (fGn) with Hurst exponent H ∈ (1/2, 1) on the information capacity of a general nonlinear neuron model with binary signal input. The fGn and its corresponding fractional Brownian motion exhibit long-range, strongly dependent increments, extending standard Brownian motion to many types of fractional processes found in nature, such as synaptic noise. In the paper, for the subthreshold binary signal, sufficient conditions are given based on the "forbidden interval" theorem to guarantee the occurrence of stochastic resonance, while for the suprathreshold binary signal, the simulated results show that additive fGn with Hurst exponent H ∈ (1/2, 1) can increase the mutual information or bit count. The investigation indicates that synaptic noise with the characteristics of long-range dependence and self-similarity might be the driving factor for the efficient encoding and decoding of the nervous system.
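
    The subthreshold setup can be illustrated with a toy simulation: a noisy threshold unit driven by a binary signal, with mutual information estimated by a plug-in estimator. Plain Gaussian noise is used below as a stand-in for fGn (generating true fGn needs more machinery), so this only illustrates the stochastic-resonance effect, not the long-range dependence itself.

      import numpy as np

      rng = np.random.default_rng(6)

      def mutual_information_bits(x, y):
          # Plug-in estimate of I(X;Y) in bits for two binary arrays.
          mi = 0.0
          for a in (0, 1):
              for b in (0, 1):
                  p_ab = np.mean((x == a) & (y == b))
                  if p_ab > 0:
                      mi += p_ab * np.log2(p_ab / (np.mean(x == a) * np.mean(y == b)))
          return mi

      signal = rng.integers(0, 2, 100_000)       # binary input
      for noise_std in (0.0, 0.3, 0.6, 1.2):
          noise = rng.normal(0, noise_std, signal.size) if noise_std else 0.0
          # Subthreshold drive (0.4 < threshold 0.5): no spikes without noise.
          spikes = ((0.4 * signal + noise) > 0.5).astype(int)
          print(noise_std, mutual_information_bits(signal, spikes))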

  3. Evaluation of globally available precipitation data products as input for water balance models

    Science.gov (United States)

    Lebrenz, H.; Bárdossy, A.

    2009-04-01

    The subject of this study is the evaluation of globally available precipitation data products intended for use as input variables for water balance models in ungauged basins. The selected data sources are a) the Global Precipitation Climatology Centre (GPCC), b) the Global Precipitation Climatology Project (GPCP), and c) the Climate Research Unit (CRU), resulting in twelve globally available data products. The data products imply different data bases, different derivation routines, and varying resolutions in time and space. For validation purposes, the ground data from South Africa were screened for homogeneity and consistency by various tests, and outlier detection using multiple linear regression was performed. External Drift Kriging was subsequently applied to the ground data, and the resulting precipitation arrays were compared to the different products with respect to quantity and variance.

  4. A Probabilistic Genome-Wide Gene Reading Frame Sequence Model

    DEFF Research Database (Denmark)

    Have, Christian Theil; Mørk, Søren

    We introduce a new type of probabilistic sequence model that models the sequential composition of reading frames of genes in a genome. Our approach extends gene finders with a model of the sequential composition of genes at the genome level -- effectively producing a sequential genome annotation ... as output. The model can be used to obtain the most probable genome annotation based on a combination of (i) a gene finder score for each gene candidate and (ii) the sequence of the reading frames of gene candidates through a genome. The model -- as well as a higher-order variant -- is developed and tested ... and evaluated by the effect on prediction performance. Since bacterial gene finding is to a large extent a solved problem, it forms an ideal proving ground for evaluating the explicit modeling of larger-scale gene sequence composition of genomes. We conclude that the sequential composition of gene reading frames

  5. Modeling bias and variation in the stochastic processes of small RNA sequencing.

    Science.gov (United States)

    Argyropoulos, Christos; Etheridge, Alton; Sakhanenko, Nikita; Galas, David

    2017-06-20

    The use of RNA-seq as the preferred method for the discovery and validation of small RNA biomarkers has been hindered by high quantitative variability and biased sequence counts. In this paper we develop a statistical model for sequence counts that accounts for ligase bias and stochastic variation in sequence counts. This model implies a linear-quadratic relation between the mean and variance of sequence counts. Using a large number of sequencing datasets, we demonstrate how one can use the generalized additive models for location, scale and shape (GAMLSS) distributional regression framework to calculate and apply empirical correction factors for ligase bias. Bias correction could remove more than 40% of the bias for miRNAs. Empirical bias correction factors appear to be nearly constant over at least one and up to four orders of magnitude of total RNA input and independent of sample composition. Using synthetic mixes of known composition, we show that the GAMLSS approach can analyze differential expression with greater accuracy and higher sensitivity and specificity than six existing algorithms (DESeq2, edgeR, EBSeq, limma, DSS, voom) for the analysis of small RNA-seq data.
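
    GAMLSS itself is an R framework; as a minimal Python analogue of the paper's distributional assumption, the sketch below fits a negative binomial GLM, whose variance law is exactly the linear-quadratic relation Var(y) = mu + alpha*mu^2. All data are simulated.

      import numpy as np
      import statsmodels.api as sm

      rng = np.random.default_rng(7)
      n, alpha = 500, 0.3
      log_input = rng.uniform(0, 4, n)     # e.g., log total RNA input (made up)
      mu = np.exp(1.0 + 0.8 * log_input)   # true mean counts

      # Negative binomial draws with mean mu and Var = mu + alpha * mu^2.
      y = rng.negative_binomial(n=1 / alpha, p=1 / (1 + alpha * mu))

      X = sm.add_constant(log_input)
      fit = sm.GLM(y, X, family=sm.families.NegativeBinomial(alpha=alpha)).fit()
      print(fit.params)                    # recovers about (1.0, 0.8)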

  6. Model morphing and sequence assignment after molecular replacement

    Energy Technology Data Exchange (ETDEWEB)

    Terwilliger, Thomas C., E-mail: terwilliger@lanl.gov [Los Alamos National Laboratory, Mail Stop M888, Los Alamos, NM 87545 (United States); Read, Randy J. [University of Cambridge, Cambridge Institute for Medical Research, Cambridge CB2 0XY (United Kingdom); Adams, Paul D. [Lawrence Berkeley National Laboratory, One Cyclotron Road, Bldg 64R0121, Berkeley, CA 94720 (United States); Brunger, Axel T. [Stanford University, 318 Campus Drive West, Stanford, CA 94305 (United States); Afonine, Pavel V. [Lawrence Berkeley National Laboratory, One Cyclotron Road, Bldg 64R0121, Berkeley, CA 94720 (United States); Hung, Li-Wei [Los Alamos National Laboratory, Mail Stop M888, Los Alamos, NM 87545 (United States)

    2013-11-01

    A procedure for model building is described that combines morphing a model to match a density map, trimming the morphed model and aligning the model to a sequence. A procedure termed ‘morphing’ for improving a model after it has been placed in the crystallographic cell by molecular replacement has recently been developed. Morphing consists of applying a smooth deformation to a model to make it match an electron-density map more closely. Morphing does not change the identities of the residues in the chain, only their coordinates. Consequently, if the true structure differs from the working model by containing different residues, these differences cannot be corrected by morphing. Here, a procedure that helps to address this limitation is described. The goal of the procedure is to obtain a relatively complete model that has accurate main-chain atomic positions and residues that are correctly assigned to the sequence. Residues in a morphed model that do not match the electron-density map are removed. Each segment of the resulting trimmed morphed model is then assigned to the sequence of the molecule using information about the connectivity of the chains from the working model and from connections that can be identified from the electron-density map. The procedure was tested by application to a recently determined structure at a resolution of 3.2 Å and was found to increase the number of correctly identified residues in this structure from the 88 obtained using phenix.resolve sequence assignment alone (Terwilliger, 2003) to 247 of a possible 359. Additionally, the procedure was tested by application to a series of templates with sequence identities to a target structure ranging between 7 and 36%. The mean fraction of correctly identified residues in these cases was increased from 33% using phenix.resolve sequence assignment to 47% using the current procedure. The procedure is simple to apply and is available in the Phenix software package.

  7. Model morphing and sequence assignment after molecular replacement

    International Nuclear Information System (INIS)

    Terwilliger, Thomas C.; Read, Randy J.; Adams, Paul D.; Brunger, Axel T.; Afonine, Pavel V.; Hung, Li-Wei

    2013-01-01

    A procedure for model building is described that combines morphing a model to match a density map, trimming the morphed model and aligning the model to a sequence. A procedure termed ‘morphing’ for improving a model after it has been placed in the crystallographic cell by molecular replacement has recently been developed. Morphing consists of applying a smooth deformation to a model to make it match an electron-density map more closely. Morphing does not change the identities of the residues in the chain, only their coordinates. Consequently, if the true structure differs from the working model by containing different residues, these differences cannot be corrected by morphing. Here, a procedure that helps to address this limitation is described. The goal of the procedure is to obtain a relatively complete model that has accurate main-chain atomic positions and residues that are correctly assigned to the sequence. Residues in a morphed model that do not match the electron-density map are removed. Each segment of the resulting trimmed morphed model is then assigned to the sequence of the molecule using information about the connectivity of the chains from the working model and from connections that can be identified from the electron-density map. The procedure was tested by application to a recently determined structure at a resolution of 3.2 Å and was found to increase the number of correctly identified residues in this structure from the 88 obtained using phenix.resolve sequence assignment alone (Terwilliger, 2003) to 247 of a possible 359. Additionally, the procedure was tested by application to a series of templates with sequence identities to a target structure ranging between 7 and 36%. The mean fraction of correctly identified residues in these cases was increased from 33% using phenix.resolve sequence assignment to 47% using the current procedure. The procedure is simple to apply and is available in the Phenix software package.

  8. Time series analysis as input for clinical predictive modeling: modeling cardiac arrest in a pediatric ICU.

    Science.gov (United States)

    Kennedy, Curtis E; Turley, James P

    2011-10-24

    Thousands of children experience cardiac arrest events every year in pediatric intensive care units. Most of these children die. Cardiac arrest prediction tools are used as part of medical emergency team evaluations to identify patients in standard hospital beds that are at high risk of cardiac arrest. There are no models to predict cardiac arrest in pediatric intensive care units, though, where the risk of an arrest is 10 times higher than for standard hospital beds. Current tools are based on a multivariable approach that does not characterize deterioration, which often precedes cardiac arrests. Characterizing deterioration requires a time series approach. The purpose of this study is to propose a method that will allow time series data to be used in clinical prediction models. Successful implementation of these methods has the potential to bring arrest prediction to the pediatric intensive care environment, possibly allowing for interventions that can save lives and prevent disabilities. We reviewed prediction models from nonclinical domains that employ time series data, and identified the steps that are necessary for building predictive models using time series clinical data. We illustrate the method by applying it to the specific case of building a predictive model for cardiac arrest in a pediatric intensive care unit. Time course analysis studies from genomic analysis provided a modeling template that was compatible with the steps required to develop a model from clinical time series data. The steps include: 1) selecting candidate variables; 2) specifying measurement parameters; 3) defining data format; 4) defining time window duration and resolution; 5) calculating latent variables for candidate variables not directly measured; 6) calculating time series features as latent variables; 7) creating data subsets to measure model performance effects attributable to various classes of candidate variables; 8) reducing the number of candidate features; 9
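
    Steps 4-6 can be made concrete with a toy vital-sign series: fixed-duration rolling windows over the raw measurements yield derived (latent) features such as level, variability, and trend. The values and window choices below are placeholders.

      import pandas as pd

      hr = pd.Series(
          [112, 115, 118, 130, 128, 134, 140, 138, 145, 150],
          index=pd.date_range("2011-01-01 00:00", periods=10, freq="5min"),
          name="heart_rate",
      )

      # 15-minute windows at 5-minute resolution (steps 4-6 of the method).
      features = pd.DataFrame({
          "hr_mean": hr.rolling("15min").mean(),   # level
          "hr_std": hr.rolling("15min").std(),     # variability
          "hr_slope": hr.diff() / 5.0,             # trend, beats/min per minute
      })
      print(features.tail())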

  9. An extended TRANSCAR model including ionospheric convection: simulation of EISCAT observations using inputs from AMIE

    Directory of Open Access Journals (Sweden)

    P.-L. Blelly

    2005-02-01

    The TRANSCAR ionospheric model was extended to account for the convection of the magnetic field lines in the auroral and polar ionosphere. A mixed Eulerian-Lagrangian 13-moment approach was used to describe the dynamics of an ionospheric plasma tube. The present study focuses on large-scale transports in the polar ionosphere. The model was used to simulate a 35-h period of EISCAT-UHF observations on 16-17 February 1993. The first day was magnetically quiet and characterized by elevated electron concentrations: the diurnal F2 layer reached as much as 10^12 m^-3, which is unusual for a winter period of moderate solar activity (F10.7 = 130). An intense geomagnetic event occurred on the second day, seen in the data as a strong intensification of the ionospheric convection velocities in the early afternoon (with the northward electric field reaching 150 mV m^-1) and corresponding frictional heating of the ions up to 2500 K. The simulation used time-dependent AMIE outputs to infer flux-tube transports in the polar region and to provide magnetospheric particle and energy inputs to the ionosphere. The overall very good agreement obtained between the model and the observations demonstrates the high ability of the extended TRANSCAR model for quantitative modelling of the high-latitude ionosphere; however, some differences are found, which are attributed to the precipitation of electrons with very low energy. All these results are finally discussed in the frame of modelling the auroral ionosphere with space weather applications in mind.

  10. Modelling pesticide leaching under climate change: parameter vs. climate input uncertainty

    Directory of Open Access Journals (Sweden)

    K. Steffens

    2014-02-01

    Assessing climate change impacts on pesticide leaching requires careful consideration of different sources of uncertainty. We investigated the uncertainty related to climate scenario input and its importance relative to parameter uncertainty of the pesticide leaching model. The pesticide fate model MACRO was calibrated against a comprehensive one-year field data set for a well-structured clay soil in south-western Sweden. We obtained an ensemble of 56 acceptable parameter sets that represented the parameter uncertainty. Nine different climate model projections of the regional climate model RCA3 were available, as driven by different combinations of global climate models (GCMs), greenhouse gas emission scenarios, and initial states of the GCM. The future time series of weather data used to drive the MACRO model were generated by scaling a reference climate data set (1970–1999) for an important agricultural production area in south-western Sweden, based on monthly change factors for 2070–2099. Thirty-year simulations were performed for different combinations of pesticide properties and application seasons. Our analysis showed that both the magnitude and the direction of the predicted change in pesticide leaching from present to future depended strongly on the particular climate scenario. The effect of parameter uncertainty was of major importance for simulating absolute pesticide losses, whereas the climate uncertainty was relatively more important for predictions of changes in pesticide losses from present to future. The climate uncertainty should be accounted for by applying an ensemble of different climate scenarios. The aggregated ensemble prediction based on both acceptable parameterizations and different climate scenarios has the potential to provide robust probabilistic estimates of future pesticide losses.
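
    The study design, an ensemble of 56 parameter sets crossed with 9 climate scenarios, suggests a simple variance split as a first diagnostic. The sketch below fabricates such a 56 x 9 loss matrix and compares between-scenario and between-parameter variance; the numbers are invented, and only the ensemble shape mirrors the study.

      import numpy as np

      rng = np.random.default_rng(8)
      losses = (rng.normal(1.0, 0.4, (56, 1))     # parameter-set effect
                + rng.normal(0.0, 0.6, (1, 9))    # climate-scenario effect
                + rng.normal(0.0, 0.1, (56, 9)))  # interaction / noise

      var_climate = np.var(losses.mean(axis=0))   # spread across scenarios
      var_params = np.var(losses.mean(axis=1))    # spread across parameter sets
      print("climate:", var_climate, "parameters:", var_params)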

  11. Statistical approaches to use a model organism for regulatory sequences annotation of newly sequenced species.

    Directory of Open Access Journals (Sweden)

    Pietro Liò

    A major goal of bioinformatics is the characterization of transcription factors and the transcriptional programs they regulate. Given the speed of genome sequencing, we would like to quickly annotate regulatory sequences in newly sequenced genomes. In such cases, it would be helpful to predict sequence motifs by using experimental data from a closely related model organism. Here we present a general algorithm that allows the identification of transcription factor binding sites in one newly sequenced species by performing Bayesian regression on the annotated species. First we set out the rationale of our method by applying it within the same species, then we extend it to use data available in closely related species. Finally, we generalise the method to handle the case when a number of experiments, from several species close to the species on which to make inference, are available. In order to show the performance of the method, we analyse three functionally related networks in the Ascomycota. Two gene network case studies are related to the G2/M phase of the Ascomycota cell cycle; the third is related to morphogenesis. We also compared the method with MatrixReduce and discuss other types of validation and tests. The first network is well known and provides a biological validation test of the method. The two cell cycle case studies, where the gene network size is conserved, demonstrate an effective utility in annotating new species' sequences using all the available replicates from model species. The third case, where the gene network size varies among species, shows that the combination of information is less powerful but still informative. Our methodology is quite general and could be extended to integrate other high-throughput data from model organisms.
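
    A conjugate Bayesian linear regression conveys the flavour of the approach: learn a posterior over scoring weights on the annotated model organism, then score candidate sites in the new species with the posterior mean. The features, priors, and data below are placeholders, not the paper's encoding.

      import numpy as np

      rng = np.random.default_rng(9)
      X = rng.normal(0, 1, (80, 10))            # encoded sites, model organism
      w_true = rng.normal(0, 1, 10)
      y = X @ w_true + rng.normal(0, 0.5, 80)   # measured binding scores

      tau2, sigma2 = 1.0, 0.25                  # prior and noise variances
      # Posterior N(mu, Sigma): Sigma = (X'X/sigma2 + I/tau2)^-1,
      # mu = Sigma X'y / sigma2.
      Sigma = np.linalg.inv(X.T @ X / sigma2 + np.eye(10) / tau2)
      mu = Sigma @ X.T @ y / sigma2

      X_new = rng.normal(0, 1, (5, 10))         # candidate sites, new species
      print(X_new @ mu)                         # posterior-mean predicted scores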

  12. Generating quantitative models describing the sequence specificity of biological processes with the stabilized matrix method

    Directory of Open Access Journals (Sweden)

    Sette Alessandro

    2005-05-01

    Background: Many processes in molecular biology involve the recognition of short sequences of nucleic or amino acids, such as the binding of immunogenic peptides to major histocompatibility complex (MHC) molecules. From experimental data, a model of the sequence specificity of these processes can be constructed, such as a sequence motif, a scoring matrix, or an artificial neural network. The purpose of these models is two-fold. First, they can provide a summary of experimental results, allowing for a deeper understanding of the mechanisms involved in sequence recognition. Second, such models can be used to predict the experimental outcome for yet untested sequences. In the past we reported the development of a method to generate such models called the Stabilized Matrix Method (SMM). This method has been successfully applied to predicting peptide binding to MHC molecules, peptide transport by the transporter associated with antigen presentation (TAP), and proteasomal cleavage of protein sequences. Results: Herein we report the implementation of the SMM algorithm as a publicly available software package. Specific features determining the type of problems the method is most appropriate for are discussed. Advantageous features of the package are: (1) the output generated is easy to interpret, (2) input and output are both quantitative, (3) specific computational strategies to handle experimental noise are built in, (4) the algorithm is designed to effectively handle bounded experimental data, (5) experimental data from randomized peptide libraries and conventional peptides can easily be combined, and (6) it is possible to incorporate pair interactions between positions of a sequence. Conclusion: Making the SMM method publicly available enables bioinformaticians and experimental biologists to easily access it, to compare its performance to other prediction methods, and to extend it to other applications.
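
    The stabilization idea can be sketched as ridge-regularized fitting of a position-specific scoring matrix to quantitative binding data: one-hot encode each peptide position, then solve a penalized least-squares problem. This is a simplification of SMM (which also handles bounded data and pair interactions); the data below are synthetic.

      import numpy as np

      AA = "ACDEFGHIKLMNPQRSTVWY"
      rng = np.random.default_rng(10)

      def encode(peptide):
          v = np.zeros(len(peptide) * 20)
          for pos, aa in enumerate(peptide):
              v[pos * 20 + AA.index(aa)] = 1.0   # one-hot per position
          return v

      peptides = ["".join(rng.choice(list(AA), 9)) for _ in range(300)]
      X = np.array([encode(p) for p in peptides])
      y = X @ rng.normal(0, 1, X.shape[1]) + rng.normal(0, 0.2, 300)

      # Stabilized (ridge) solution: w = (X'X + lambda I)^-1 X'y; the penalty
      # suppresses noise-driven matrix entries.
      lam = 10.0
      w = np.linalg.solve(X.T @ X + lam * np.eye(X.shape[1]), X.T @ y)
      print(w.reshape(9, 20)[0, :5])             # scoring row for position 1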

  13. A new chance-constrained DEA model with birandom input and output data

    OpenAIRE

    Tavana, M.; Shiraz, R. K.; Hatami-Marbini, A.

    2013-01-01

    The purpose of conventional Data Envelopment Analysis (DEA) is to evaluate the performance of a set of firms or Decision-Making Units using deterministic input and output data. However, the input and output data in the real-life performance evaluation problems are often stochastic. The stochastic input and output data in DEA can be represented with random variables. Several methods have been proposed to deal with the random input and output data in DEA. In this paper, we propose a new chance-...

  14. Predicting musically induced emotions from physiological inputs: linear and neural network models.

    Science.gov (United States)

    Russo, Frank A; Vempala, Naresh N; Sandstrom, Gillian M

    2013-01-01

    Listening to music often leads to physiological responses. Do these physiological responses contain sufficient information to infer the emotion induced in the listener? The current study explores this question by attempting to predict judgments of "felt" emotion from physiological responses alone, using linear and neural network models. We measured five channels of peripheral physiology from 20 participants: heart rate (HR), respiration, galvanic skin response, and activity in the corrugator supercilii and zygomaticus major facial muscles. Using valence and arousal (VA) dimensions, participants rated their felt emotion after listening to each of 12 classical music excerpts. After extracting features from the five channels, we examined their correlation with VA ratings, and then performed multiple linear regression to see if a linear relationship between the physiological responses could account for the ratings. Although linear models predicted a significant amount of variance in arousal ratings, they were unable to do so with valence ratings. We then used a neural network to provide a non-linear account of the ratings. The network was trained on the mean ratings of eight of the 12 excerpts and tested on the remainder. The performance of the neural network confirms that physiological responses alone can be used to predict musically induced emotion. The non-linear model derived from the neural network was more accurate than the linear models derived from multiple linear regression, particularly along the valence dimension. A secondary analysis allowed us to quantify the relative contributions of inputs to the non-linear model. The study represents a novel approach to understanding the complex relationship between physiological responses and musically induced emotion.
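
    A compact analogue of the study's modelling step, with synthetic data standing in for the physiological features: a small multilayer perceptron regressing the five channels onto valence and arousal, trained on part of the data and scored on the rest.

      import numpy as np
      from sklearn.neural_network import MLPRegressor

      rng = np.random.default_rng(11)
      # Columns: HR, respiration, GSR, corrugator, zygomaticus (synthetic).
      X = rng.normal(0, 1, (96, 5))
      y = np.tanh(X @ rng.normal(0, 1, (5, 2)))  # columns: valence, arousal

      net = MLPRegressor(hidden_layer_sizes=(8,), max_iter=5000, random_state=0)
      net.fit(X[:64], y[:64])                    # train on two-thirds
      print(net.score(X[64:], y[64:]))           # R^2 on held-out data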

  15. EARLY GUIDANCE FOR ASSIGNING DISTRIBUTION PARAMETERS TO GEOCHEMICAL INPUT TERMS TO STOCHASTIC TRANSPORT MODELS

    International Nuclear Information System (INIS)

    Kaplan, D; Margaret Millings, M

    2006-01-01

    Stochastic modeling is being used in the Performance Assessment program to provide a probabilistic estimate of the range of risk that buried waste may pose. The objective of this task was to provide early guidance for stochastic modelers on the selection of the range and distribution (e.g., normal, log-normal) of distribution coefficients (Kd) and solubility values (Ksp) to be used in modeling subsurface radionuclide transport in E- and Z-Area on the Savannah River Site (SRS). Due to the project's schedule, some modeling had to be started prior to collecting the necessary field and laboratory data needed to fully populate these models. For the interim, the project will rely on literature values and some statistical analyses of literature data as inputs. Based on statistical analyses of some literature sorption tests, the following early guidance was provided: (1) Set the range to an order of magnitude for radionuclides with Kd values >1000 mL/g and to a factor of two for Kd values <1000 mL/g. (2) Set the range to an order of magnitude for radionuclides with Ksp values <10^-6 M and to a factor of two for Ksp values >10^-6 M. This decision is based on the literature. (3) The distribution of Kd values with a mean >1000 mL/g will be log-normally distributed. Those with a Kd value <1000 mL/g will be assigned a normal distribution. This is based on statistical analysis of non-site-specific data. Results from ongoing site-specific field and laboratory research involving E-Area sediments will supersede this guidance; these results are expected in 2007
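
    The guidance translates directly into sampling rules for a stochastic transport model. The sketch below encodes it for Kd; how the stated ranges map onto distribution widths (here, plus or minus two sigma spanning the range) is an assumption for illustration.

      import numpy as np

      rng = np.random.default_rng(12)

      def sample_kd(mean_kd_ml_per_g, n=10_000):
          # Log-normal, ~order-of-magnitude range, for means > 1000 mL/g.
          if mean_kd_ml_per_g > 1000:
              sigma_log = np.log(10) / 4    # +/-2 sigma spans a factor of 10
              return rng.lognormal(np.log(mean_kd_ml_per_g), sigma_log, n)
          # Normal, ~factor-of-two range, otherwise.
          sd = mean_kd_ml_per_g / 6         # +/-2 sigma spans a factor of 2
          return rng.normal(mean_kd_ml_per_g, sd, n)

      print(sample_kd(5000).mean(), sample_kd(200).mean())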

  17. Realistic modeling of seismic input for megacities and large urban areas

    International Nuclear Information System (INIS)

    Panza, Giuliano F.; Alvarez, Leonardo; Aoudia, Abdelkrim

    2002-06-01

    The project addressed the problem of pre-disaster orientation: hazard prediction, risk assessment, and hazard mapping, in connection with seismic activity and man-induced vibrations. The definition of realistic seismic input has been obtained from the computation of a wide set of time histories and spectral information, corresponding to possible seismotectonic scenarios for different source and structural models. The innovative modeling technique, which constitutes the common tool of the entire project, takes into account source, propagation and local site effects. This is done using first principles of physics about wave generation and propagation in complex media, and does not require resorting to convolutive approaches, which have proven to be quite unreliable, mainly when dealing with complex geological structures, the most interesting from the practical point of view. In fact, several techniques that have been proposed to empirically estimate the site effects using observations convolved with theoretically computed signals corresponding to simplified models supply reliable information about the site response to non-interfering seismic phases. They are not adequate in most of the real cases, when the seismic sequel is formed by several interfering waves. The availability of realistic numerical simulations enables us to reliably estimate the amplification effects even in complex geological structures, exploiting the available geotechnical, lithological and geophysical parameters, topography of the medium, tectonic, historical and palaeoseismological data, and seismotectonic models. The realistic modeling of the ground motion is a very important base of knowledge for the preparation of ground-shaking scenarios that represent a valid and economic tool for seismic microzonation. This knowledge can be very fruitfully used by civil engineers in the design of new seismo-resistant constructions and in the reinforcement of the existing built environment, and, therefore

  18. Predicting musically induced emotions from physiological inputs: Linear and neural network models

    Directory of Open Access Journals (Sweden)

    Frank A. Russo

    2013-08-01

    Full Text Available Listening to music often leads to physiological responses. Do these physiological responses contain sufficient information to infer emotion induced in the listener? The current study explores this question by attempting to predict judgments of 'felt' emotion from physiological responses alone using linear and neural network models. We measured five channels of peripheral physiology from 20 participants – heart rate, respiration, galvanic skin response, and activity in corrugator supercilii and zygomaticus major facial muscles. Using valence and arousal (VA) dimensions, participants rated their felt emotion after listening to each of 12 classical music excerpts. After extracting features from the five channels, we examined their correlation with VA ratings, and then performed multiple linear regression to see if a linear relationship between the physiological responses could account for the ratings. Although linear models predicted a significant amount of variance in arousal ratings, they were unable to do so with valence ratings. We then used a neural network to provide a nonlinear account of the ratings. The network was trained on the mean ratings of eight of the 12 excerpts and tested on the remainder. Performance of the neural network confirms that physiological responses alone can be used to predict musically induced emotion. The nonlinear model derived from the neural network was more accurate than linear models derived from multiple linear regression, particularly along the valence dimension. A secondary analysis allowed us to quantify the relative contributions of inputs to the nonlinear model. The study represents a novel approach to understanding the complex relationship between physiological responses and musically induced emotion.

  19. "Updates to Model Algorithms & Inputs for the Biogenic Emissions Inventory System (BEIS) Model"

    Science.gov (United States)

    We have developed new canopy emission algorithms and land use data for BEIS. Simulations with BEIS v3.4 and these updates in CMAQ v5.0.2 are compared to the Model of Emissions of Gases and Aerosols from Nature (MEGAN) and evaluated against observatio...

  20. Categorical Inputs, Sensitivity Analysis, Optimization and Importance Tempering with tgp Version 2, an R Package for Treed Gaussian Process Models

    Directory of Open Access Journals (Sweden)

    Robert B. Gramacy

    2010-02-01

    Full Text Available This document describes the new features in version 2.x of the tgp package for R, implementing treed Gaussian process (GP) models. The topics covered include methods for dealing with categorical inputs and excluding inputs from the tree or GP part of the model; fully Bayesian sensitivity analysis for inputs/covariates; sequential optimization of black-box functions; and a new Monte Carlo method for inference in multi-modal posterior distributions that combines simulated tempering and importance sampling. These additions extend the functionality of tgp across all models in the hierarchy: from Bayesian linear models, to classification and regression trees (CART), to treed Gaussian processes with jumps to the limiting linear model. It is assumed that the reader is familiar with the baseline functionality of the package, outlined in the first vignette (Gramacy 2007).

  1. Modeling imbalanced economic recovery following a natural disaster using input-output analysis.

    Science.gov (United States)

    Li, Jun; Crawford-Brown, Douglas; Syddall, Mark; Guan, Dabo

    2013-10-01

    Input-output analysis is frequently used in studies of large-scale weather-related (e.g., Hurricanes and flooding) disruption of a regional economy. The economy after a sudden catastrophe shows a multitude of imbalances with respect to demand and production and may take months or years to recover. However, there is no consensus about how the economy recovers. This article presents a theoretical route map for imbalanced economic recovery called dynamic inequalities. Subsequently, it is applied to a hypothetical postdisaster economic scenario of flooding in London around the year 2020 to assess the influence of future shocks to a regional economy and suggest adaptation measures. Economic projections are produced by a macro econometric model and used as baseline conditions. The results suggest that London's economy would recover over approximately 70 months by applying a proportional rationing scheme under the assumption of initial 50% labor loss (with full recovery in six months), 40% initial loss to service sectors, and 10-30% initial loss to other sectors. The results also suggest that imbalance will be the norm during the postdisaster period of economic recovery even though balance may occur temporarily. Model sensitivity analysis suggests that a proportional rationing scheme may be an effective strategy to apply during postdisaster economic reconstruction, and that policies in transportation recovery and in health care are essential for effective postdisaster economic recovery. © 2013 Society for Risk Analysis.
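
    To make the recovery mechanics concrete, here is a toy two-sector sketch (all coefficients invented, not the article's London model) of post-disaster dynamics with a proportional rationing rule and steady capacity repair:

```python
# Toy post-disaster input-output recovery: each sector rations its constrained
# output proportionally across users while damaged capacity is repaired.
import numpy as np

A = np.array([[0.2, 0.3],                 # technical coefficients (illustrative)
              [0.1, 0.4]])
final_demand = np.array([100.0, 80.0])
x_star = np.linalg.solve(np.eye(2) - A, final_demand)   # pre-disaster output

capacity = x_star * np.array([0.6, 0.9])  # assumed initial losses of 40% and 10%
for month in range(1, 121):
    required = A @ capacity + final_demand             # demand at current activity
    ration = np.minimum(1.0, capacity / required)      # proportional rationing factor
    output = ration * required
    capacity = np.minimum(x_star, capacity + 0.02 * x_star)  # steady repair rate
    if np.allclose(output, x_star, rtol=1e-2):
        print(f"recovered to within 1% of baseline after {month} months")
        break
```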

  2. The efficiency of the agricultural sector in Poland in the light of the output-input model

    Directory of Open Access Journals (Sweden)

    Czyżewski Andrzej

    2015-05-01

    Full Text Available The study draws attention to the use of the input-output model (account of interbranch flows) in macroeconomic assessments of the effectiveness of the agricultural sector. In the introductory part, the essence of the account of interbranch flows is specified, pointing to its historical origin and place in economic theory, and the morphological structure of the individual parts (quarters) of the model is presented. The study then discusses the application of the account of interbranch flows in macroeconomic assessments of the effectiveness of the agricultural sector, defining and characterizing a number of indicators which allow one to draw conclusions on the effectiveness of the agricultural sector on the basis of the account of interbranch flows. The last, empirical part of the study assesses the effectiveness of the agricultural sector in Poland on the basis of interbranch flows statistics for the years 2000 and 2005. The analyses demonstrate increased efficiency of the agricultural sector in Poland after Poland joined the EU, and also show that the account of interbranch flows is an important tool enabling comprehensive assessment of the effectiveness of the agricultural sector at the macro scale, through the prism of the effect-outlay relation, which accounts for its exceptional suitability in this kind of analyses.

  3. Modeling uncertainties in workforce disruptions from influenza pandemics using dynamic input-output analysis.

    Science.gov (United States)

    El Haimar, Amine; Santos, Joost R

    2014-03-01

    Influenza pandemic is a serious disaster that can pose significant disruptions to the workforce and associated economic sectors. This article examines the impact of influenza pandemic on workforce availability within an interdependent set of economic sectors. We introduce a simulation model based on the dynamic input-output model to capture the propagation of pandemic consequences through the National Capital Region (NCR). The analysis conducted in this article is based on the 2009 H1N1 pandemic data. Two metrics were used to assess the impacts of the influenza pandemic on the economic sectors: (i) inoperability, which measures the percentage gap between the as-planned output and the actual output of a sector, and (ii) economic loss, which quantifies the associated monetary value of the degraded output. The inoperability and economic loss metrics generate two different rankings of the critical economic sectors. Results show that most of the critical sectors in terms of inoperability are sectors that are related to hospitals and health-care providers. On the other hand, most of the sectors that are critically ranked in terms of economic loss are sectors with significant total production outputs in the NCR such as federal government agencies. Therefore, policy recommendations relating to potential mitigation and recovery strategies should take into account the balance between the inoperability and economic loss metrics. © 2013 Society for Risk Analysis.
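
    The propagation logic can be sketched with a small dynamic inoperability recursion of the kind this literature builds on, where q is the inoperability vector, A* a normalized interdependency matrix, and K a resilience matrix; all numbers below are illustrative assumptions, not NCR data:

```python
# Sketch of a dynamic inoperability input-output recursion: inoperability
# propagates through A* and decays through the resilience matrix K, while
# degraded output accumulates as economic loss.
import numpy as np

A_star = np.array([[0.1, 0.4, 0.2],     # normalized interdependency matrix (invented)
                   [0.2, 0.1, 0.3],
                   [0.1, 0.2, 0.1]])
K = np.diag([0.3, 0.2, 0.25])           # sector resilience (recovery) coefficients
x = np.array([500.0, 300.0, 200.0])     # as-planned sector outputs ($M, invented)

q = np.array([0.15, 0.05, 0.02])        # initial inoperability from workforce loss
loss = np.zeros(3)
for t in range(52):                     # weekly steps over one year
    loss += x * q                       # accumulate degraded output as economic loss
    q = q + K @ (A_star @ q - q)        # propagation and recovery, no new perturbation

print("final inoperability:", q.round(4))
print("cumulative economic loss ($M-weeks):", loss.round(1))
```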

  4. Modelling Implicit Communication in Multi-Agent Systems with Hybrid Input/Output Automata

    Directory of Open Access Journals (Sweden)

    Marta Capiluppi

    2012-10-01

    Full Text Available We propose an extension of Hybrid I/O Automata (HIOAs) to model agent systems and their implicit communication through perturbation of the environment, like localization of objects or radio signal diffusion and detection. To this end we decided to specialize some variables of the HIOAs whose values are functions both of time and space. We call them world variables. Basically they are treated similarly to the other variables of HIOAs, but they have the function of representing the interaction of each automaton with the surrounding environment, hence they can be output, input or internal variables. Since these special variables have the role of simulating implicit communication, their dynamics are specified both in time and space, because they model the perturbations induced by the agent on the environment, and the perturbations of the environment as perceived by the agent. Parallel composition of world variables is slightly different from parallel composition of the other variables, since their signals are summed. The theory is illustrated through a simple example of agent systems.

  5. Realistic modelling of the seismic input: Site effects and parametric studies

    International Nuclear Information System (INIS)

    Romanelli, F.; Vaccari, F.; Panza, G.F.

    2002-11-01

    We illustrate the work done in the framework of a large international cooperation, showing the very recent numerical experiments carried out within the framework of the EC project 'Advanced methods for assessing the seismic vulnerability of existing motorway bridges' (VAB) to assess the importance of non-synchronous seismic excitation of long structures. The definition of the seismic input at the Warth bridge site, i.e. the determination of the seismic ground motion due to an earthquake with a given magnitude and epicentral distance from the site, has been done following a theoretical approach. In order to perform an accurate and realistic estimate of site effects and of differential motion it is necessary to make a parametric study that takes into account the complex combination of the source and propagation parameters, in realistic geological structures. The computation of a wide set of time histories and spectral information, corresponding to possible seismotectonic scenarios for different sources and structural models, allows us the construction of damage scenarios that are out of the reach of stochastic models, at a very low cost/benefit ratio. (author)

  6. MODELING THE INDONESIAN CONSUMER PRICE INDEX USING A MULTI-INPUT INTERVENTION MODEL

    KAUST Repository

    Novianti, Putri Wikie; Suhartono, Suhartono

    2017-01-01

    ...searches that have been done only contain an intervention with a single input, either a step or a pulse function. A multi-input intervention was used in the Indonesian CPI case because there are some events which are expected to affect the CPI. Based on the results, those

  7. Evaluating the effects of model structure and meteorological input data on runoff modelling in an alpine headwater basin

    Science.gov (United States)

    Schattan, Paul; Bellinger, Johannes; Förster, Kristian; Schöber, Johannes; Huttenlau, Matthias; Kirnbauer, Robert; Achleitner, Stefan

    2017-04-01

    Modelling water resources in snow-dominated mountainous catchments is challenging due to both short concentration times and a highly variable contribution of snow melt in space and time from complex terrain. A number of model setups exist, ranging from physically based models to conceptual models which do not attempt to represent the natural processes in a physically meaningful way. Within the flood forecasting system for the Tyrolean Inn River, two serially linked hydrological models with differing process representations are used. Non-glacierized catchments are modelled by a semi-distributed water balance model (HQsim) based on the HRU approach. A fully distributed energy and mass balance model (SES), purpose-built for snow- and icemelt, is used for highly glacierized headwater catchments. Previous work revealed uncertainties and limitations within the models' structures regarding (i) the representation of snow processes in HQsim, (ii) the runoff routing of SES, and (iii) the spatial resolution of the meteorological input data in both models. To overcome these limitations, a "strengths-driven" model coupling is applied. Instead of linking the models serially, a vertical one-way coupling of models has been implemented. The fully distributed snow modelling of SES is combined with the semi-distributed HQsim structure, allowing the coupled model to benefit from the soil and runoff routing schemes in HQsim. A Monte Carlo based modelling experiment was set up to evaluate the resulting differences in the runoff prediction due to the improved model coupling and a refined spatial resolution of the meteorological forcing. The experiment design follows a gradient of spatial discretisation of hydrological processes and meteorological forcing data, with a total of six different model setups for the alpine headwater basin of the Fagge River in the Tyrolean Alps. In general, all setups show a good performance for this particular basin. It is therefore planned to include other basins with differing

  8. A sequence-dependent rigid-base model of DNA

    Science.gov (United States)

    Gonzalez, O.; Petkevičiutė, D.; Maddocks, J. H.

    2013-02-01

    A novel hierarchy of coarse-grain, sequence-dependent, rigid-base models of B-form DNA in solution is introduced. The hierarchy depends on both the assumed range of energetic couplings, and the extent of sequence dependence of the model parameters. A significant feature of the models is that they exhibit the phenomenon of frustration: each base cannot simultaneously minimize the energy of all of its interactions. As a consequence, an arbitrary DNA oligomer has an intrinsic or pre-existing stress, with the level of this frustration dependent on the particular sequence of the oligomer. Attention is focussed on the particular model in the hierarchy that has nearest-neighbor interactions and dimer sequence dependence of the model parameters. For a Gaussian version of this model, a complete coarse-grain parameter set is estimated. The parameterized model allows, for an oligomer of arbitrary length and sequence, a simple and explicit construction of an approximation to the configuration-space equilibrium probability density function for the oligomer in solution. The training set leading to the coarse-grain parameter set is itself extracted from a recent and extensive database of a large number of independent, atomic-resolution molecular dynamics (MD) simulations of short DNA oligomers immersed in explicit solvent. The Kullback-Leibler divergence between probability density functions is used to make several quantitative assessments of our nearest-neighbor, dimer-dependent model, which is compared against others in the hierarchy to assess various assumptions pertaining both to the locality of the energetic couplings and to the level of sequence dependence of its parameters. It is also compared directly against all-atom MD simulation to assess its predictive capabilities. The results show that the nearest-neighbor, dimer-dependent model can successfully resolve sequence effects both within and between oligomers. For example, due to the presence of frustration, the model can
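
    A drastically simplified sketch of the dimer-dependent Gaussian idea (one scalar coordinate per junction instead of full rigid-base coordinates, an invented and deliberately partial parameter table, and no overlapping couplings, so the frustration phenomenon discussed above does not arise):

```python
# Toy nearest-neighbor, dimer-dependent Gaussian model: each junction gets a
# stiffness and an intrinsic shape from its local dimer, and the oligomer's
# equilibrium density is the Gaussian assembled from these local blocks.
import numpy as np

stiffness = {"AA": 2.0, "AT": 1.5, "TA": 1.0, "GC": 3.0, "CG": 2.5, "AG": 1.8}  # invented
shape     = {"AA": 0.1, "AT": 0.0, "TA": -0.2, "GC": 0.05, "CG": 0.0, "AG": 0.15}

def gaussian_model(seq):
    dimers = [seq[i:i + 2] for i in range(len(seq) - 1)]
    K = np.diag([stiffness.get(d, 1.0) for d in dimers])    # junction stiffness matrix
    w_hat = np.array([shape.get(d, 0.0) for d in dimers])   # intrinsic (stress-free) shape
    cov = np.linalg.inv(K)                                  # equilibrium covariance
    return w_hat, cov

w_hat, cov = gaussian_model("ATGCAGAA")
print("intrinsic shape:", w_hat)
print("marginal fluctuations:", np.sqrt(np.diag(cov)).round(3))
```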

  9. A sequence-dependent rigid-base model of DNA.

    Science.gov (United States)

    Gonzalez, O; Petkevičiūtė, D; Maddocks, J H

    2013-02-07

    A novel hierarchy of coarse-grain, sequence-dependent, rigid-base models of B-form DNA in solution is introduced. The hierarchy depends on both the assumed range of energetic couplings, and the extent of sequence dependence of the model parameters. A significant feature of the models is that they exhibit the phenomenon of frustration: each base cannot simultaneously minimize the energy of all of its interactions. As a consequence, an arbitrary DNA oligomer has an intrinsic or pre-existing stress, with the level of this frustration dependent on the particular sequence of the oligomer. Attention is focussed on the particular model in the hierarchy that has nearest-neighbor interactions and dimer sequence dependence of the model parameters. For a Gaussian version of this model, a complete coarse-grain parameter set is estimated. The parameterized model allows, for an oligomer of arbitrary length and sequence, a simple and explicit construction of an approximation to the configuration-space equilibrium probability density function for the oligomer in solution. The training set leading to the coarse-grain parameter set is itself extracted from a recent and extensive database of a large number of independent, atomic-resolution molecular dynamics (MD) simulations of short DNA oligomers immersed in explicit solvent. The Kullback-Leibler divergence between probability density functions is used to make several quantitative assessments of our nearest-neighbor, dimer-dependent model, which is compared against others in the hierarchy to assess various assumptions pertaining both to the locality of the energetic couplings and to the level of sequence dependence of its parameters. It is also compared directly against all-atom MD simulation to assess its predictive capabilities. The results show that the nearest-neighbor, dimer-dependent model can successfully resolve sequence effects both within and between oligomers. For example, due to the presence of frustration, the model can

  10. Analysis of Sequence Diagram Layout in Advanced UML Modelling Tools

    Directory of Open Access Journals (Sweden)

    Ņikiforova Oksana

    2016-05-01

    Full Text Available System modelling using Unified Modelling Language (UML is the task that should be solved for software development. The more complex software becomes the higher requirements are stated to demonstrate the system to be developed, especially in its dynamic aspect, which in UML is offered by a sequence diagram. To solve this task, the main attention is devoted to the graphical presentation of the system, where diagram layout plays the central role in information perception. The UML sequence diagram due to its specific structure is selected for a deeper analysis on the elements’ layout. The authors research represents the abilities of modern UML modelling tools to offer automatic layout of the UML sequence diagram and analyse them according to criteria required for the diagram perception.

  11. Self-Exciting Point Process Modeling of Conversation Event Sequences

    Science.gov (United States)

    Masuda, Naoki; Takaguchi, Taro; Sato, Nobuo; Yano, Kazuo

    Self-exciting processes of Hawkes type have been used to model various phenomena including earthquakes, neural activities, and views of online videos. Studies of temporal networks have revealed that sequences of social interevent times for individuals are highly bursty. We examine some basic properties of event sequences generated by the Hawkes self-exciting process to show that it generates bursty interevent times for a wide parameter range. Then, we fit the model to the data of conversation sequences recorded in company offices in Japan. In this way, we can estimate relative magnitudes of the self excitement, its temporal decay, and the base event rate independent of the self excitation. These variables highly depend on individuals. We also point out that the Hawkes model has an important limitation that the correlation in the interevent times and the burstiness cannot be independently modulated.
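
    A univariate Hawkes process with exponential decay can be simulated with Ogata's thinning algorithm; the sketch below (parameters are illustrative, not fitted to the conversation data) exposes the bursty interevent times through their coefficient of variation:

```python
# Ogata thinning for a Hawkes process with intensity
# lambda(t) = mu + alpha * sum_i exp(-beta * (t - t_i)).
import numpy as np

def simulate_hawkes(mu=0.2, alpha=0.6, beta=1.0, t_max=500.0, seed=3):
    rng = np.random.default_rng(seed)
    events, t = [], 0.0
    while t < t_max:
        # intensity is non-increasing between events, so its current value
        # is a valid upper bound for thinning
        lam_bar = mu + alpha * sum(np.exp(-beta * (t - s)) for s in events)
        t += rng.exponential(1.0 / lam_bar)
        lam_t = mu + alpha * sum(np.exp(-beta * (t - s)) for s in events)
        if rng.uniform() <= lam_t / lam_bar:   # accept candidate event
            events.append(t)
    return np.array(events)

ev = simulate_hawkes()
iet = np.diff(ev)
# a coefficient of variation above 1 indicates burstier-than-Poisson timing
print(f"{len(ev)} events; CV of interevent times = {iet.std() / iet.mean():.2f}")
```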

  12. A latent low-dimensional common input drives a pool of motor neurons: a probabilistic latent state-space model.

    Science.gov (United States)

    Feeney, Daniel F; Meyer, François G; Noone, Nicholas; Enoka, Roger M

    2017-10-01

    Motor neurons appear to be activated with a common input signal that modulates the discharge activity of all neurons in the motor nucleus. It has proven difficult for neurophysiologists to quantify the variability in a common input signal, but characterization of such a signal may improve our understanding of how the activation signal varies across motor tasks. Contemporary methods of quantifying the common input to motor neurons rely on compiling discrete action potentials into continuous time series, assuming the motor pool acts as a linear filter, and requiring signals to be of sufficient duration for frequency analysis. We introduce a state-space model in which the discharge activity of motor neurons is modeled as inhomogeneous Poisson processes and propose a method to quantify an abstract latent trajectory that represents the common input received by motor neurons. The approach also approximates the variation in synaptic noise in the common input signal. The model is validated with four data sets: a simulation of 120 motor units, a pair of integrate-and-fire neurons with a Renshaw cell providing inhibitory feedback, the discharge activity of 10 integrate-and-fire neurons, and the discharge times of concurrently active motor units during an isometric voluntary contraction. The simulations revealed that a latent state-space model is able to quantify the trajectory and variability of the common input signal across all four conditions. When compared with the cumulative spike train method of characterizing common input, the state-space approach was more sensitive to the details of the common input current and was less influenced by the duration of the signal. The state-space approach appears to be capable of detecting rather modest changes in common input signals across conditions. NEW & NOTEWORTHY We propose a state-space model that explicitly delineates a common input signal sent to motor neurons and the physiological noise inherent in synaptic signal
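
    As a purely illustrative simulation (not the authors' state-space estimator), the sketch below drives a pool of inhomogeneous Poisson "motor neurons" with a slowly varying latent common input and checks how well a crude smoothing of the pooled spike counts tracks the true latent trajectory; all parameters are assumptions:

```python
# Simulate a latent common input driving Poisson discharge in a motor pool,
# then recover it approximately by smoothing the pooled spike counts.
import numpy as np

rng = np.random.default_rng(7)
dt, T, n_neurons = 0.001, 10.0, 10                 # 1 ms bins, 10 s, 10 neurons
n_bins = int(T / dt)

# latent common input: smoothed random walk (arbitrary units)
latent = np.cumsum(rng.normal(0, 0.02, n_bins))
latent = np.convolve(latent, np.ones(500) / 500, mode="same")

rate = 10.0 * np.exp(0.5 * latent)                 # common drive modulates rate (Hz)
spikes = rng.poisson(rate * dt, size=(n_neurons, n_bins))

pooled = spikes.sum(axis=0).astype(float)
recovered = np.convolve(pooled, np.ones(200) / 200, mode="same")  # crude smoother
corr = np.corrcoef(recovered, latent)[0, 1]
print(f"correlation between smoothed pooled activity and latent input: {corr:.2f}")
```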

  13. Asteroseismic modelling of solar-type stars: internal systematics from input physics and surface correction methods

    Science.gov (United States)

    Nsamba, B.; Campante, T. L.; Monteiro, M. J. P. F. G.; Cunha, M. S.; Rendle, B. M.; Reese, D. R.; Verma, K.

    2018-04-01

    Asteroseismic forward modelling techniques are being used to determine fundamental properties (e.g. mass, radius, and age) of solar-type stars. The need to take into account all possible sources of error is of paramount importance towards a robust determination of stellar properties. We present a study of 34 solar-type stars for which high signal-to-noise asteroseismic data is available from multi-year Kepler photometry. We explore the internal systematics on the stellar properties, that is, associated with the uncertainty in the input physics used to construct the stellar models. In particular, we explore the systematics arising from: (i) the inclusion of the diffusion of helium and heavy elements; and (ii) the uncertainty in solar metallicity mixture. We also assess the systematics arising from (iii) different surface correction methods used in optimisation/fitting procedures. The systematics arising from comparing results of models with and without diffusion are found to be 0.5%, 0.8%, 2.1%, and 16% in mean density, radius, mass, and age, respectively. The internal systematics in age are significantly larger than the statistical uncertainties. We find the internal systematics resulting from the uncertainty in solar metallicity mixture to be 0.7% in mean density, 0.5% in radius, 1.4% in mass, and 6.7% in age. The surface correction method by Sonoi et al. and Ball & Gizon's two-term correction produce the lowest internal systematics among the different correction methods, namely, ˜1%, ˜1%, ˜2%, and ˜8% in mean density, radius, mass, and age, respectively. Stellar masses obtained using the surface correction methods by Kjeldsen et al. and Ball & Gizon's one-term correction are systematically higher than those obtained using frequency ratios.
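
    For reference, the Ball & Gizon two-term correction mentioned above is commonly written as below (a form quoted from the general literature rather than from this paper), where a_{-1} and a_3 are fitted coefficients, ν_ac is the acoustic cutoff frequency, and I(ν) is the normalized mode inertia:

```latex
% Two-term surface correction (Ball & Gizon 2014 form): an inverse and a
% cubic term in frequency, normalized by the mode inertia.
\delta\nu(\nu) = \frac{a_{-1}\,\left(\nu/\nu_{\mathrm{ac}}\right)^{-1}
               + a_{3}\,\left(\nu/\nu_{\mathrm{ac}}\right)^{3}}{I(\nu)}
```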

  14. Smoke inputs to climate models: optical properties and height distribution for nuclear winter studies

    International Nuclear Information System (INIS)

    Penner, J.E.; Haselman, L.C. Jr.

    1985-04-01

    Smoke from fires produced in the aftermath of a major nuclear exchange has been predicted to cause large decreases in land surface temperatures. The extent of the decrease and even the sign of the temperature change depend on the optical characteristics of the smoke and how it is distributed with altitude. The height distribution of smoke over a fire is determined by the amount of buoyant energy produced by the fire and the amount of energy released by the latent heat of condensation of water vapor. The optical properties of the smoke depend on the size distribution of smoke particles, which changes due to coagulation within the lofted plume. We present calculations demonstrating these processes and estimate their importance for the smoke source term input for climate models. For high initial smoke densities and for absorbing smoke (m = 1.75 - 0.3i), coagulation of smoke particles within the smoke plume is predicted to first increase, then decrease, the size-integrated extinction cross section. However, at the smoke densities predicted in our model (assuming a 3% emission rate for smoke) and for our assumed initial size distribution, the attachment rates for Brownian and turbulent collision processes are not fast enough to alter the smoke size distribution enough to significantly change the integrated extinction cross section. Early-time coagulation is, however, fast enough to allow further coagulation, on longer time scales, to act to decrease the extinction cross section. On these longer time scales appropriate to climate models, coagulation can decrease the extinction cross section by almost a factor of two before the smoke becomes well mixed around the globe. This process has been neglected in past climate effect evaluations, but could have a significant effect, since the extinction cross section enters as an exponential factor in calculating the light attenuation due to smoke. 10 refs., 20 figs
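
    A back-of-envelope sketch of the coagulation effect described above, under strong simplifications (monodisperse smoke, constant Brownian kernel, a geometric extinction proxy N·r², all values invented): number density falls, radius grows at fixed total volume, and the extinction proxy decays as N^(1/3). The full Mie behaviour for absorbing smoke (initial increase, then decrease) is not captured here.

```python
# Monodisperse Smoluchowski coagulation: dN/dt = -(1/2) K N^2, fixed volume.
import numpy as np

K = 5e-10            # cm^3/s, illustrative Brownian coagulation kernel
N0 = 1e7             # cm^-3, assumed initial particle number density
V_total = N0 * (4 / 3) * np.pi * (0.05e-4) ** 3   # fixed aerosol volume (r0 = 0.05 um)

for t in [0.0, 3600.0, 86400.0, 7 * 86400.0]:     # 0, 1 h, 1 day, 1 week
    N = N0 / (1.0 + 0.5 * K * N0 * t)             # analytic monodisperse solution
    r = (V_total / N * 3 / (4 * np.pi)) ** (1 / 3)
    print(f"t = {t:9.0f} s:  N = {N:9.3e} cm^-3,  r = {r * 1e4:5.3f} um,  "
          f"N*r^2 proxy = {N * r ** 2:9.3e}")
```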

  15. A response analysis with effective stress model by using vertical input motions

    International Nuclear Information System (INIS)

    Yamanouchi, H.; Ohkawa, I.; Chiba, O.; Tohdo, M.; Kaneko, O.

    1987-01-01

    In Japan, nuclear power plant reactor buildings are, as a rule, directly supported on hard soil. When determining the input motions for the design of those buildings, the amplifications of the hard soil deposits are generally examined by total stress analysis. However, when the supporting hard soil is replaced with a slightly softer medium such as sandy or gravelly soil, the existence of pore water, in other words, the contribution of the pore water pressure to the total stress, cannot be ignored even in a practical sense. In this paper the authors define an analytical model considering the effective stress-strain relation. In the analyses, the response in the vertical direction is first used to evaluate the confining pressure. In the next step, the process of the generation and dissipation of the pore water pressure is taken into account, together with the effect of the confining pressure. They applied these procedures to the response computations of horizontally layered soil deposits

  16. Multiregional input-output model for China's farm land and water use.

    Science.gov (United States)

    Guo, Shan; Shen, Geoffrey Qiping

    2015-01-06

    Land and water are the two main drivers of agricultural production. Pressure on farm land and water resources is increasing in China due to rising food demand. Domestic trade affects China's regional farm land and water use by distributing resources associated with the production of goods and services. This study constructs a multiregional input-output model to simultaneously analyze China's farm land and water uses embodied in consumption and interregional trade. Results show a great similarity for both China's farm land and water endowments. Shandong, Henan, Guangdong, and Yunnan are the most important drivers of farm land and water consumption in China, even though they have relatively few land and water resource endowments. Significant net transfers of embodied farm land and water flows are identified from the central and western areas to the eastern area via interregional trade. Heilongjiang is the largest farm land and water supplier, in contrast to Shanghai as the largest receiver. The results help policy makers to comprehensively understand embodied farm land and water flows in a complex economy network. Improving resource utilization efficiency and reshaping the embodied resource trade nexus should be addressed by considering the transfer of regional responsibilities.
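
    The accounting core of such a multiregional input-output (MRIO) analysis can be sketched in a few lines: embodied resource use is e^T (I - A)^(-1) y, with interregional blocks of A carrying the trade linkages. The two-region, two-sector numbers below are invented for illustration:

```python
# Embodied water accounting in a tiny MRIO system via the Leontief inverse.
import numpy as np

# 2 regions x 2 sectors; off-diagonal blocks of A hold interregional trade
A = np.array([[0.10, 0.20, 0.05, 0.02],
              [0.05, 0.10, 0.02, 0.04],
              [0.03, 0.02, 0.15, 0.25],
              [0.02, 0.05, 0.10, 0.05]])
y = np.array([50.0, 30.0, 60.0, 40.0])   # final demand by region-sector
e = np.array([0.8, 0.1, 0.5, 0.2])       # direct water use per unit output (m3/$)

L = np.linalg.inv(np.eye(4) - A)         # Leontief inverse
x = L @ y                                # total output required
embodied = (e @ L) * y                   # water embodied in each final demand entry
print("total output:", x.round(1))
print("embodied water by final demand (m3):", embodied.round(1))
print("total water footprint (m3):", embodied.sum().round(1))
```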

  17. Process Debottlenecking and Retrofit of Palm Oil Milling Process via Inoperability Input-Output Modelling

    Directory of Open Access Journals (Sweden)

    May Tan May

    2018-01-01

    Full Text Available In recent years, there has been an increase in crude palm oil (CPO) demand, resulting in palm oil mills (POMs) seizing the opportunity to increase CPO production to make more profits. A series of equipment is designed to operate at optimum capacity in the current existing POMs. Some equipment may be limited by its maximum design capacity when there is a need to increase CPO production, resulting in process bottlenecks. In this research, a framework is developed to provide stepwise procedures for identifying bottlenecks and retrofitting a POM process to cater for the increase in production capacity. This framework adapts an algebraic approach known as Inoperability Input-Output Modelling (IIM). To illustrate the application of the framework, an industrial POM case study was solved using LINGO software in this work, by maximising its production capacity. A Benefit-to-Cost Ratio (BCR) analysis was also performed to assess the economic feasibility. As a result, the Screw Press was identified as the bottleneck. The retrofitting recommendation was to purchase an additional Screw Press to cater for the new throughput, with a BCR of 54.57. The POM was found to be able to achieve the maximum targeted production capacity of 8,139.65 kg/hr of CPO without any bottlenecks.

  18. Usefulness of non-linear input-output models for economic impact analyses in tourism and recreation

    NARCIS (Netherlands)

    Klijs, J.; Peerlings, J.H.M.; Heijman, W.J.M.

    2015-01-01

    In tourism and recreation management it is still common practice to apply traditional input–output (IO) economic impact models, despite their well-known limitations. In this study the authors analyse the usefulness of applying a non-linear input–output (NLIO) model, in which price-induced input

  19. RUSLE2015: Modelling soil erosion at continental scale using high resolution input layers

    Science.gov (United States)

    Panagos, Panos; Borrelli, Pasquale; Meusburger, Katrin; Poesen, Jean; Ballabio, Cristiano; Lugato, Emanuele; Montanarella, Luca; Alewell, Christine

    2016-04-01

    Soil erosion by water is one of the most widespread forms of soil degradation in Europe. On the occasion of the 2015 celebration of the International Year of Soils, the European Commission's Joint Research Centre (JRC) published RUSLE2015, a modified modelling approach for assessing soil erosion in Europe by using the best available input data layers. The objective of the recent assessment performed with RUSLE2015 was to improve our knowledge and understanding of soil erosion by water across the European Union and to accentuate the differences and similarities between different regions and countries beyond national borders and nationally adapted models. RUSLE2015 has maximized the use of available homogeneous, updated, pan-European datasets (LUCAS topsoil, LUCAS survey, GAEC, Eurostat crops, Eurostat Management Practices, REDES, DEM 25m, CORINE, European Soil Database) and has used the best-suited approach at European scale for modelling soil erosion. The collaboration of JRC with many scientists around Europe and numerous prominent European universities and institutes resulted in an improved assessment of individual risk factors (rainfall erosivity, soil erodibility, cover-management, topography and support practices) and a final harmonized European soil erosion map at high resolution. The mean soil loss rate in the European Union's erosion-prone lands (agricultural, forests and semi-natural areas) was found to be 2.46 t ha-1 yr-1, resulting in a total soil loss of 970 Mt annually; equal to an area the size of Berlin (assuming a removal of 1 meter). According to the RUSLE2015 model, approximately 12.7% of arable land in the European Union is estimated to suffer from moderate to high erosion (>5 t ha-1 yr-1). This equates to an area of 140,373 km2, which equals the surface area of Greece (Environmental Science & Policy, 54, 438-447; 2015). Even the mean erosion rate outstrips the mean formation rate (walls and contouring) through the common agricultural
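
    For reference, a RUSLE-type assessment such as RUSLE2015 multiplies exactly the risk factors listed above; in the usual notation (quoted from the general RUSLE literature, not from this abstract), A is the mean annual soil loss in t ha-1 yr-1:

```latex
% R: rainfall erosivity, K: soil erodibility, LS: slope length and steepness,
% C: cover-management, P: support practices.
A = R \cdot K \cdot (L \cdot S) \cdot C \cdot P
```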

  20. Modeling DPOAE input/output function compression: comparisons with hearing thresholds.

    Science.gov (United States)

    Bhagat, Shaum P

    2014-09-01

    Basilar membrane input/output (I/O) functions in mammalian animal models are characterized by linear and compressed segments when measured near the location corresponding to the characteristic frequency. A method of studying basilar membrane compression indirectly in humans involves measuring distortion-product otoacoustic emission (DPOAE) I/O functions. Previous research has linked compression estimates from behavioral growth-of-masking functions to hearing thresholds. The aim of this study was to compare compression estimates from DPOAE I/O functions and hearing thresholds at 1 and 2 kHz. A prospective correlational research design was performed. The relationship between DPOAE I/O function compression estimates and hearing thresholds was evaluated with Pearson product-moment correlations. Normal-hearing adults (n = 16) aged 22-42 yr were recruited. DPOAE I/O functions (L₂ = 45-70 dB SPL) and two-interval forced-choice hearing thresholds were measured in normal-hearing adults. A three-segment linear regression model applied to DPOAE I/O functions supplied estimates of compression thresholds, defined as breakpoints between linear and compressed segments and the slopes of the compressed segments. Pearson product-moment correlations between DPOAE compression estimates and hearing thresholds were evaluated. A high correlation between DPOAE compression thresholds and hearing thresholds was observed at 2 kHz, but not at 1 kHz. Compression slopes also correlated highly with hearing thresholds only at 2 kHz. The derivation of cochlear compression estimates from DPOAE I/O functions provides a means to characterize basilar membrane mechanics in humans and elucidates the role of compression in tone detection in the 1-2 kHz frequency range. American Academy of Audiology.
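
    A three-segment (two-breakpoint) linear fit of the kind described can be obtained with hinge basis functions and a grid search over candidate breakpoints; the DPOAE levels below are simulated, and the details of the study's actual fitting procedure may differ:

```python
# Continuous piecewise-linear (three-segment) fit to a DPOAE I/O function.
import numpy as np

L2 = np.arange(45, 71, 5, dtype=float)               # stimulus levels (dB SPL)
dpoae = np.array([-5.0, 0.5, 5.2, 8.0, 9.4, 10.1])   # illustrative DPOAE levels (dB SPL)

def fit_three_segment(x, y):
    best = (np.inf, None, None)
    for b1 in x[1:-2]:
        for b2 in x[2:-1]:
            if b2 <= b1:
                continue
            # hinge basis: slope changes at breakpoints b1 and b2
            X = np.column_stack([np.ones_like(x), x,
                                 np.maximum(0.0, x - b1),
                                 np.maximum(0.0, x - b2)])
            coef, *_ = np.linalg.lstsq(X, y, rcond=None)
            sse = ((X @ coef - y) ** 2).sum()
            if sse < best[0]:
                best = (sse, (b1, b2), coef)
    return best

sse, (b1, b2), coef = fit_three_segment(L2, dpoae)
print(f"breakpoints: {b1}, {b2} dB SPL")
print(f"segment slopes: {coef[1]:.2f}, {coef[1] + coef[2]:.2f}, "
      f"{coef[1] + coef[2] + coef[3]:.2f} dB/dB")
```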

  1. Model morphing and sequence assignment after molecular replacement.

    Science.gov (United States)

    Terwilliger, Thomas C; Read, Randy J; Adams, Paul D; Brunger, Axel T; Afonine, Pavel V; Hung, Li-Wei

    2013-11-01

    A procedure termed `morphing' for improving a model after it has been placed in the crystallographic cell by molecular replacement has recently been developed. Morphing consists of applying a smooth deformation to a model to make it match an electron-density map more closely. Morphing does not change the identities of the residues in the chain, only their coordinates. Consequently, if the true structure differs from the working model by containing different residues, these differences cannot be corrected by morphing. Here, a procedure that helps to address this limitation is described. The goal of the procedure is to obtain a relatively complete model that has accurate main-chain atomic positions and residues that are correctly assigned to the sequence. Residues in a morphed model that do not match the electron-density map are removed. Each segment of the resulting trimmed morphed model is then assigned to the sequence of the molecule using information about the connectivity of the chains from the working model and from connections that can be identified from the electron-density map. The procedure was tested by application to a recently determined structure at a resolution of 3.2 Å and was found to increase the number of correctly identified residues in this structure from the 88 obtained using phenix.resolve sequence assignment alone (Terwilliger, 2003) to 247 of a possible 359. Additionally, the procedure was tested by application to a series of templates with sequence identities to a target structure ranging between 7 and 36%. The mean fraction of correctly identified residues in these cases was increased from 33% using phenix.resolve sequence assignment to 47% using the current procedure. The procedure is simple to apply and is available in the Phenix software package.

  2. A new approach to modeling temperature-related mortality: Non-linear autoregressive models with exogenous input.

    Science.gov (United States)

    Lee, Cameron C; Sheridan, Scott C

    2018-07-01

    Temperature-mortality relationships are nonlinear, time-lagged, and can vary depending on the time of year and geographic location, all of which limits the applicability of simple regression models in describing these associations. This research demonstrates the utility of an alternative method for modeling such complex relationships that has gained recent traction in other environmental fields: nonlinear autoregressive models with exogenous input (NARX models). All-cause mortality data and multiple temperature-based data sets were gathered from 41 different US cities, for the period 1975-2010, and subjected to ensemble NARX modeling. Models generally performed better in larger cities and during the winter season. Across the US, median absolute percentage errors were 10% (ranging from 4% to 15% in various cities), the average improvement in the r-squared over that of a simple persistence model was 17% (6-24%), and the hit rate for modeling spike days in mortality (>80th percentile) was 54% (34-71%). Mortality responded acutely to hot summer days, peaking at 0-2 days of lag before dropping precipitously, and there was an extended mortality response to cold winter days, peaking at 2-4 days of lag and dropping slowly and continuing for multiple weeks. Spring and autumn showed both of the aforementioned temperature-mortality relationships, but generally to a lesser magnitude than what was seen in summer or winter. When compared to distributed lag nonlinear models, NARX model output was nearly identical. These results highlight the applicability of NARX models for use in modeling complex and time-dependent relationships for various applications in epidemiology and environmental sciences. Copyright © 2018 Elsevier Inc. All rights reserved.
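
    The NARX idea reduces to regressing the series on its own recent lags plus lagged exogenous input through a nonlinear map; the sketch below uses synthetic data with a U-shaped temperature-mortality response and a small network, with all lags and sizes chosen arbitrarily rather than taken from the study:

```python
# NARX-style regression: mortality lags (autoregressive) + temperature lags
# (exogenous input) fed to a small nonlinear network.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(11)
n_days = 2000
temp = 15 + 10 * np.sin(2 * np.pi * np.arange(n_days) / 365.25) + rng.normal(0, 3, n_days)
mort = 100 + 0.05 * (temp - 18) ** 2 + rng.normal(0, 2, n_days)  # synthetic U-shape

y_lags, x_lags = 3, 14          # assumed: 3 mortality lags, 14 temperature lags
start = max(y_lags, x_lags)
X = np.array([np.concatenate([mort[t - y_lags:t], temp[t - x_lags:t]])
              for t in range(start, n_days)])
y = mort[start:]

split = int(0.8 * len(y))       # train on first 80%, test on the rest
narx = MLPRegressor(hidden_layer_sizes=(16,), max_iter=3000, random_state=0)
narx.fit(X[:split], y[:split])
mape = np.mean(np.abs(narx.predict(X[split:]) - y[split:]) / y[split:]) * 100
print(f"test mean absolute percentage error: {mape:.1f}%")
```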

  3. Plantagora: modeling whole genome sequencing and assembly of plant genomes.

    Directory of Open Access Journals (Sweden)

    Roger Barthelson

    Full Text Available BACKGROUND: Genomics studies are being revolutionized by the next generation sequencing technologies, which have made whole genome sequencing much more accessible to the average researcher. Whole genome sequencing with the new technologies is a developing art that, despite the large volumes of data that can be produced, may still fail to provide a clear and thorough map of a genome. The Plantagora project was conceived to address specifically the gap between having the technical tools for genome sequencing and knowing precisely the best way to use them. METHODOLOGY/PRINCIPAL FINDINGS: For Plantagora, a platform was created for generating simulated reads from several different plant genomes of different sizes. The resulting read files mimicked either 454 or Illumina reads, with varying paired end spacing. Thousands of datasets of reads were created, most derived from our primary model genome, rice chromosome one. All reads were assembled with different software assemblers, including Newbler, Abyss, and SOAPdenovo, and the resulting assemblies were evaluated by an extensive battery of metrics chosen for these studies. The metrics included both statistics of the assembly sequences and fidelity-related measures derived by alignment of the assemblies to the original genome source for the reads. The results were presented in a website, which includes a data graphing tool, all created to help the user compare rapidly the feasibility and effectiveness of different sequencing and assembly strategies prior to testing an approach in the lab. Some of our own conclusions regarding the different strategies were also recorded on the website. CONCLUSIONS/SIGNIFICANCE: Plantagora provides a substantial body of information for comparing different approaches to sequencing a plant genome, and some conclusions regarding some of the specific approaches. Plantagora also provides a platform of metrics and tools for studying the process of sequencing and assembly

  4. Optimization and evaluation of probabilistic-logic sequence models

    DEFF Research Database (Denmark)

    Christiansen, Henning; Lassen, Ole Torp

    Analysis of biological sequence data demands more and more sophisticated and fine-grained models, but these in turn introduce hard computational problems. A class of probabilistic-logic models is considered, which increases the expressibility from HMM's and SCFG's regular and context-free languages to, in principle, Turing complete languages. In general, such models are computationally far too complex for direct use, so optimization by pruning and approximation are needed. The first steps are made towards a methodology for optimizing such models by approximations using auxiliary models...

  5. Modeling ChIP sequencing in silico with applications.

    Directory of Open Access Journals (Sweden)

    Zhengdong D Zhang

    2008-08-01

    Full Text Available ChIP sequencing (ChIP-seq) is a new method for genomewide mapping of protein binding sites on DNA. It has generated much excitement in functional genomics. To score data and determine adequate sequencing depth, both the genomic background and the binding sites must be properly modeled. To develop a computational foundation to tackle these issues, we first performed a study to characterize the observed statistical nature of this new type of high-throughput data. By linking sequence tags into clusters, we show that there are two components to the distribution of tag counts observed in a number of recent experiments: an initial power-law distribution and a subsequent long right tail. Then we develop in silico ChIP-seq, a computational method to simulate the experimental outcome by placing tags onto the genome according to particular assumed distributions for the actual binding sites and for the background genomic sequence. In contrast to current assumptions, our results show that both the background and the binding sites need to have a markedly nonuniform distribution in order to correctly model the observed ChIP-seq data, with, for instance, the background tag counts modeled by a gamma distribution. On the basis of these results, we extend an existing scoring approach by using a more realistic genomic-background model. This enables us to identify transcription-factor binding sites in ChIP-seq data in a statistically rigorous fashion.
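
    The modeling idea can be sketched as follows (rates, window counts and thresholds are invented): a gamma-distributed local background rate makes tag counts negative-binomial rather than Poisson, enrichment is added at a few sites, and windows are scored against the fitted gamma-Poisson null:

```python
# Simulate nonuniform ChIP-seq background (gamma-Poisson) plus enriched
# binding sites, then call windows against a negative-binomial null.
import numpy as np
from scipy import stats

rng = np.random.default_rng(5)
n_windows = 100_000
background_rate = rng.gamma(shape=0.5, scale=2.0, size=n_windows)  # local rates
counts = rng.poisson(background_rate)                              # NB-like counts

sites = rng.choice(n_windows, size=50, replace=False)   # true binding sites
counts[sites] += rng.poisson(15, size=50)               # enrichment at sites

# method-of-moments fit of the negative binomial null
mean, var = counts.mean(), counts.var()
p = mean / var
r = mean * p / (1 - p)
pvals = stats.nbinom.sf(counts - 1, r, p)               # P(X >= observed count)
hits = np.flatnonzero(pvals < 1e-6)
print(f"{len(hits)} windows called, {np.isin(hits, sites).sum()} of 50 true sites recovered")
```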

  6. Comparison of different snow model formulations and their responses to input uncertainties in the Upper Indus Basin

    Science.gov (United States)

    Pritchard, David; Fowler, Hayley; Forsythe, Nathan; O'Donnell, Greg; Rutter, Nick; Bardossy, Andras

    2017-04-01

    Snow and glacier melt in the mountainous Upper Indus Basin (UIB) sustain water supplies, irrigation networks, hydropower production and ecosystems in extensive downstream lowlands. Understanding hydrological and cryospheric sensitivities to climatic variability and change in the basin is therefore critical for local, national and regional water resources management. Assessing these sensitivities using numerical modelling is challenging, due to limitations in the quality and quantity of input and evaluation data, as well as uncertainties in model structures and parameters. This study explores how these uncertainties in inputs and process parameterisations affect distributed simulations of ablation in the complex climatic setting of the UIB. The role of model forcing uncertainties is explored using combinations of local observations, remote sensing and reanalysis - including the high resolution High Asia Refined Analysis - to generate multiple realisations of spatiotemporal model input fields. Forcing a range of model structures with these input fields then provides an indication of how different ablation parameterisations respond to uncertainties and perturbations in climatic drivers. Model structures considered include simple, empirical representations of melt processes through to physically based, full energy balance models with multi-physics options for simulating snowpack evolution (including an adapted version of FSM). Analysing model input and structural uncertainties in this way provides insights for methodological choices in climate sensitivity assessments of data-sparse, high mountain catchments. Such assessments are key for supporting water resource management in these catchments, particularly given the potential complications of enhanced warming through elevation effects or, in the case of the UIB, limited understanding of how and why local climate change signals differ from broader patterns.

  7. Evaluation of precipitation input for SWAT modeling in Alpine catchment: A case study in the Adige river basin (Italy).

    Science.gov (United States)

    Tuo, Ye; Duan, Zheng; Disse, Markus; Chiogna, Gabriele

    2016-12-15

    Precipitation is often the most important input data in hydrological models when simulating streamflow. The Soil and Water Assessment Tool (SWAT), a widely used hydrological model, only makes use of data from one precipitation gauge station that is nearest to the centroid of each subbasin, which is eventually corrected using the elevation band method. This leads in general to inaccurate representation of subbasin precipitation input data, particularly in catchments with complex topography. To investigate the impact of different precipitation inputs on the SWAT model simulations in Alpine catchments, 13 years (1998-2010) of daily precipitation data from four datasets including OP (Observed precipitation), IDW (Inverse Distance Weighting data), CHIRPS (Climate Hazards Group InfraRed Precipitation with Station data) and TRMM (Tropical Rainfall Measuring Mission) has been considered. Both model performances (comparing simulated and measured streamflow data at the catchment outlet) as well as parameter and prediction uncertainties have been quantified. For all three subbasins, the use of elevation bands is fundamental to match the water budget. Streamflow predictions obtained using IDW inputs are better than those obtained using the other datasets in terms of both model performance and prediction uncertainty. Models using the CHIRPS product as input provide satisfactory streamflow estimation, suggesting that this satellite product can be applied to this data-scarce Alpine region. Comparing the performance of SWAT models using different precipitation datasets is therefore important in data-scarce regions. This study has shown that precipitation is the main source of uncertainty, and different precipitation datasets in SWAT models lead to different best estimate ranges for the calibrated parameters. This has important implications for the interpretation of the simulated hydrological processes. Copyright © 2016 The Authors. Published by Elsevier B.V. All rights reserved.

  8. Evaluating the efficiency of municipalities in collecting and processing municipal solid waste: a shared input DEA-model.

    Science.gov (United States)

    Rogge, Nicky; De Jaeger, Simon

    2012-10-01

    This paper proposed an adjusted "shared-input" version of the popular efficiency measurement technique Data Envelopment Analysis (DEA) that enables evaluating municipality waste collection and processing performances in settings in which one input (waste costs) is shared among treatment efforts of multiple municipal solid waste fractions. The main advantage of this version of DEA is that it not only provides an estimate of the municipalities' overall cost efficiency but also estimates of the municipalities' cost efficiency in the treatment of the different fractions of municipal solid waste (MSW). To illustrate the practical usefulness of the shared input DEA-model, we apply the model to data on 293 municipalities in Flanders, Belgium, for the year 2008. Copyright © 2012 Elsevier Ltd. All rights reserved.
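
    For orientation, a standard input-oriented, constant-returns DEA efficiency score (the plain model that the paper's shared-input version extends) can be computed by linear programming; the municipalities, costs, and fraction tonnages below are invented:

```python
# Input-oriented CCR DEA: minimize theta s.t. a nonnegative combination of
# peers uses at most theta * inputs of the evaluated unit while producing
# at least its outputs.
import numpy as np
from scipy.optimize import linprog

X = np.array([[100.0], [120.0], [90.0], [150.0]])   # waste cost (single input)
Y = np.array([[50.0, 30.0],                         # tonnes of two MSW fractions
              [55.0, 20.0],
              [40.0, 35.0],
              [60.0, 25.0]])

def ccr_efficiency(o):
    n = len(X)
    c = np.zeros(n + 1)
    c[-1] = 1.0                                     # objective: minimize theta
    # inputs: sum_j lambda_j * x_jk - theta * x_ok <= 0
    A_ub = [np.append(X[:, k], -X[o, k]) for k in range(X.shape[1])]
    # outputs: -sum_j lambda_j * y_jk <= -y_ok
    A_ub += [np.append(-Y[:, k], 0.0) for k in range(Y.shape[1])]
    b_ub = [0.0] * X.shape[1] + list(-Y[o])
    res = linprog(c, A_ub=np.array(A_ub), b_ub=np.array(b_ub),
                  bounds=[(0, None)] * n + [(None, None)], method="highs")
    return res.fun

for o in range(4):
    print(f"municipality {o}: efficiency = {ccr_efficiency(o):.3f}")
```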

  9. Evaluating the efficiency of municipalities in collecting and processing municipal solid waste: A shared input DEA-model

    International Nuclear Information System (INIS)

    Rogge, Nicky; De Jaeger, Simon

    2012-01-01

    Highlights: ► Complexity in local waste management calls for more in depth efficiency analysis. ► Shared-input Data Envelopment Analysis can provide solution. ► Considerable room for the Flemish municipalities to improve their cost efficiency. - Abstract: This paper proposed an adjusted “shared-input” version of the popular efficiency measurement technique Data Envelopment Analysis (DEA) that enables evaluating municipality waste collection and processing performances in settings in which one input (waste costs) is shared among treatment efforts of multiple municipal solid waste fractions. The main advantage of this version of DEA is that it not only provides an estimate of the municipalities’ overall cost efficiency but also estimates of the municipalities’ cost efficiency in the treatment of the different fractions of municipal solid waste (MSW). To illustrate the practical usefulness of the shared input DEA-model, we apply the model to data on 293 municipalities in Flanders, Belgium, for the year 2008.

  10. Enhancement of information transmission with stochastic resonance in hippocampal CA1 neuron models: effects of noise input location.

    Science.gov (United States)

    Kawaguchi, Minato; Mino, Hiroyuki; Durand, Dominique M

    2007-01-01

    Stochastic resonance (SR) has been shown to enhance the signal-to-noise ratio or detection of signals in neurons. It is not yet clear how this effect of SR on the signal-to-noise ratio affects signal processing in neural networks. In this paper, we investigate the effects of the location of background noise input on information transmission in a hippocampal CA1 neuron model. In the computer simulation, random sub-threshold spike trains (signal) generated by a filtered homogeneous Poisson process were presented repeatedly to the middle point of the main apical branch, while the homogeneous Poisson shot noise (background noise) was applied to a location of the dendrite in the hippocampal CA1 model consisting of the soma with a sodium, a calcium, and five potassium channels. The location of the background noise input was varied along the dendrites to investigate the effects of background noise input location on information transmission. The computer simulation results show that the information rate reached a maximum value for an optimal amplitude of the background noise. It is also shown that this optimal amplitude of the background noise is independent of the distance between the soma and the noise input location. The results also show that the location of the background noise input does not significantly affect the maximum values of the information rates generated by stochastic resonance.
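
    A toy leaky integrate-and-fire demonstration of the stochastic resonance effect (far simpler than the CA1 model above; all constants are invented): a subthreshold periodic input plus background noise of varying amplitude, scored by how strongly output spikes phase-lock to the signal:

```python
# LIF neuron with subthreshold sinusoidal drive; sweep noise amplitude and
# measure spike phase locking (vector strength) to expose a resonance.
import numpy as np

def lif_spike_phases(noise_amp, seed=0, dt=1e-4, T=50.0):
    rng = np.random.default_rng(seed)
    tau, v_th, f = 0.02, 1.0, 5.0              # membrane tau (s), threshold, signal (Hz)
    n, v, phases = int(T / dt), 0.0, []
    for i in range(n):
        t = i * dt
        drive = 0.8 * np.sin(2 * np.pi * f * t)   # subthreshold on its own (< v_th)
        v += dt / tau * (-v + drive) + noise_amp * np.sqrt(dt) * rng.normal()
        if v >= v_th:
            phases.append((2 * np.pi * f * t) % (2 * np.pi))
            v = 0.0                               # reset after spike
    return np.array(phases)

for amp in [0.5, 1.5, 3.0, 6.0, 12.0]:
    ph = lif_spike_phases(amp)
    # vector strength: 1 = perfect phase locking, 0 = no locking
    vs = np.abs(np.exp(1j * ph).mean()) if len(ph) else 0.0
    print(f"noise = {amp:5.1f}: {len(ph):5d} spikes, vector strength = {vs:.2f}")
```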

  11. PSA modeling of long-term accident sequences

    International Nuclear Information System (INIS)

    Georgescu, Gabriel; Corenwinder, Francois; Lanore, Jeanne-Marie

    2014-01-01

    In the context of the extension of PSA scope to include external hazards, in France both the operator (EDF) and IRSN are working to improve methods to better take into account in the PSA the accident sequences induced by initiators which affect a whole site containing several nuclear units (reactors, fuel pools, ...). These methodological improvements represent an essential prerequisite for the development of external hazards PSA. It should be noted, however, that in French PSA, even before Fukushima, long-term accident sequences were taken into account: many insights were therefore used, as complementary information, to enhance the safety level of the plants. IRSN proposed an external events PSA development program. One of the first steps of the program is the development of methods to model in the PSA the long-term accident sequences, based on the experience gained. In the short term, IRSN intends to enhance the modeling of the 'long-term' accident sequences induced by the loss of the heat sink or/and the loss of external power supply. The experience gained by IRSN and EDF from the development of several probabilistic studies treating long-term accident sequences shows that simply extending the mission time of the mitigation systems from 24 hours to longer times is not sufficient to realistically quantify the risk and to obtain a correct ranking of the risk contributions, and that treatment of recoveries is also necessary. IRSN intends to develop a generic study which can be used as a general methodology for the assessment of long-term accident sequences, mainly generated by external hazards and their combinations. This first attempt to develop the generic study allowed identifying some aspects, related either to hazards (or combinations of hazards) or to initial boundary conditions, which should be taken into account for further developments. (authors)

  12. Solar Luminosity on the Main Sequence, Standard Model and Variations

    Science.gov (United States)

    Ayukov, S. V.; Baturin, V. A.; Gorshkov, A. B.; Oreshina, A. V.

    2017-05-01

    Our Sun became a Main Sequence star 4.6 Gyr ago according to the Standard Solar Model. At that time solar luminosity was 30% lower than the current value. This conclusion is based on the assumption that the Sun is fueled by thermonuclear reactions. If Earth's albedo and infrared emissivity have remained unchanged during Earth's history, the oceans had to be frozen 2.3 Gyr ago. This contradicts geological data: there was liquid water on Earth 3.6-3.8 Gyr ago. This problem is known as the Faint Young Sun Paradox. We analyze the luminosity change in standard solar evolution theory. The increase of mean molecular weight in the central part of the Sun, due to the conversion of hydrogen to helium, leads to a gradual increase of luminosity with time on the Main Sequence. We also consider several exotic models: a fully mixed Sun; a drastic change of the pp reaction rate; a Sun consisting of hydrogen and helium only. Solar neutrino observations, however, exclude most non-standard solar models.
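
    For a quantitative feel of the standard-model brightening, a widely used analytic approximation (Gough 1981) gives the Main Sequence luminosity as L(t) = L_now / [1 + (2/5)(1 - t/t_now)]. The snippet below is our illustration of that formula, not the authors' calculation; it reproduces the ~30% zero-age deficit quoted above.

    ```python
    def solar_luminosity(t_gyr, t_now=4.6):
        """Gough (1981) approximation to Main Sequence solar luminosity, in units of L_sun."""
        return 1.0 / (1.0 + 0.4 * (1.0 - t_gyr / t_now))

    print(solar_luminosity(0.0))   # ~0.71: the young Sun, ~30% fainter than today
    print(solar_luminosity(2.3))   # ~0.83: when simple climate models would freeze the oceans
    ```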

  13. Physical-mathematical model for cybernetic description of the human organs with trace element concentrations as input variables

    International Nuclear Information System (INIS)

    Mihai, Maria; Popescu, I.V.

    2003-01-01

    In this paper we report a physical-mathematical model for studying organs and human body fluids on cybernetic principles. The input variables represent the trace elements, which are determined by atomic and nuclear methods of elemental analysis. We have determined the health limits between which the organs might function. (authors)

  14. A single point of pressure approach as input for injury models with respect to complex blast loading conditions

    NARCIS (Netherlands)

    Teland, J.A.; Doormaal, J.C.A.M. van; Horst, M.J. van der; Svinsås, E.

    2010-01-01

    Blast injury models, like Axelsson and Stuhmiller, require four pressure signals as input. Those pressure signals must be acquired by a Blast Test Device (BTD) that has four pressure transducers placed in a horizontal plane at intervals of 90 degrees. This can be either in a physical test setup or

  15. Effect of stimulation on the input parameters of stochastic leaky integrate-and-fire neuronal model

    Czech Academy of Sciences Publication Activity Database

    Lánský, Petr; Šanda, Pavel; He, J.

    2010-01-01

    Vol. 104, No. 3-4 (2010), pp. 160-166 ISSN 0928-4257 R&D Projects: GA MŠk(CZ) LC554; GA AV ČR(CZ) IAA101120604 Institutional research plan: CEZ:AV0Z50110509 Keywords: membrane depolarization * input parameters * diffusion Subject RIV: BO - Biophysics Impact factor: 3.030, year: 2010

  16. Enhancement of regional wet deposition estimates based on modeled precipitation inputs

    Science.gov (United States)

    James A. Lynch; Jeffery W. Grimm; Edward S. Corbett

    1996-01-01

    Applications of a variety of two-dimensional interpolation algorithms to precipitation chemistry data gathered at scattered monitoring sites, for the purpose of estimating precipitation-borne ionic inputs at specific points or over regions, have failed to produce accurate estimates. The accuracy of these estimates is particularly poor in areas of high topographic relief....

  17. Impact of Infralimbic Inputs on Intercalated Amygdala Neurons: A Biophysical Modeling Study

    Science.gov (United States)

    Li, Guoshi; Amano, Taiju; Pare, Denis; Nair, Satish S.

    2011-01-01

    Intercalated (ITC) amygdala neurons regulate fear expression by controlling impulse traffic between the input (basolateral amygdala; BLA) and output (central nucleus; Ce) stations of the amygdala for conditioned fear responses. Previously, stimulation of the infralimbic (IL) cortex was found to reduce fear expression and the responsiveness of Ce…

  18. A novel wavelet sequence based on deep bidirectional LSTM network model for ECG signal classification.

    Science.gov (United States)

    Yildirim, Özal

    2018-05-01

    Long short-term memory (LSTM) networks, which have recently emerged in sequential data analysis, are the most widely used type of recurrent neural network (RNN) architecture. Progress on the topic of deep learning includes successful adaptations of deep versions of these architectures. In this study, a new model for deep bidirectional LSTM network-based wavelet sequences, called DBLSTM-WS, was proposed for classifying electrocardiogram (ECG) signals. For this purpose, a new wavelet-based layer is implemented to generate ECG signal sequences. The ECG signals were decomposed into frequency sub-bands at different scales in this layer. These sub-bands are used as sequences for the input of LSTM networks. New network models that include unidirectional (ULSTM) and bidirectional (BLSTM) structures were designed for performance comparisons. Experimental studies were performed for five different types of heartbeats obtained from the MIT-BIH arrhythmia database: Normal Sinus Rhythm (NSR), Ventricular Premature Contraction (VPC), Paced Beat (PB), Left Bundle Branch Block (LBBB), and Right Bundle Branch Block (RBBB). The results show that the DBLSTM-WS model gives a high recognition performance of 99.39%. It was observed that the wavelet-based layer proposed in the study significantly improves the recognition performance of conventional networks. This proposed network structure is an important approach that can be applied to similar signal processing problems. Copyright © 2018 Elsevier Ltd. All rights reserved.
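
    The "wavelet sub-bands as LSTM input sequences" pipeline can be sketched compactly. The following illustration uses PyWavelets and PyTorch; the resampling of sub-bands to a common length and all layer sizes are our assumptions, not the published DBLSTM-WS architecture.

    ```python
    import numpy as np
    import pywt
    import torch
    import torch.nn as nn

    def wavelet_sequence(ecg, wavelet="db4", level=4):
        """Decompose a 1-D ECG beat into wavelet sub-band sequences.

        Each sub-band is resampled to a common length so the bands can be
        stacked into a (time, channels) array for the LSTM."""
        coeffs = pywt.wavedec(ecg, wavelet, level=level)
        length = min(len(c) for c in coeffs)
        grid = np.linspace(0.0, 1.0, length)
        bands = [np.interp(grid, np.linspace(0.0, 1.0, len(c)), c) for c in coeffs]
        return np.stack(bands, axis=-1).astype(np.float32)   # (length, level + 1)

    class BiLSTMClassifier(nn.Module):
        def __init__(self, n_bands=5, hidden=64, n_classes=5):
            super().__init__()
            self.lstm = nn.LSTM(n_bands, hidden, batch_first=True, bidirectional=True)
            self.fc = nn.Linear(2 * hidden, n_classes)   # 2x: forward + backward states

        def forward(self, x):                  # x: (batch, time, n_bands)
            out, _ = self.lstm(x)
            return self.fc(out[:, -1, :])      # classify from the final time step

    beat = wavelet_sequence(np.random.randn(360))           # one synthetic "beat"
    logits = BiLSTMClassifier()(torch.tensor(beat)[None])   # (1, 5) class scores
    ```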

  19. Dynamic PET of human liver inflammation: impact of kinetic modeling with optimization-derived dual-blood input function.

    Science.gov (United States)

    Wang, Guobao; Corwin, Michael T; Olson, Kristin A; Badawi, Ramsey D; Sarkar, Souvik

    2018-05-30

    The hallmark of nonalcoholic steatohepatitis is hepatocellular inflammation and injury in the setting of hepatic steatosis. Recent work has indicated that dynamic 18F-FDG PET with kinetic modeling has the potential to assess hepatic inflammation noninvasively, while static FDG-PET did not show promise. Because the liver has dual blood supplies, kinetic modeling of dynamic liver PET data is challenging in human studies. The objective of this study is to evaluate and identify a dual-input kinetic modeling approach for dynamic FDG-PET of human liver inflammation. Fourteen human patients with nonalcoholic fatty liver disease were included in the study. Each patient underwent a one-hour dynamic FDG-PET/CT scan and had a liver biopsy within six weeks. Three models were tested for kinetic analysis: the traditional two-tissue compartmental model with an image-derived single-blood input function (SBIF), a model with a population-based dual-blood input function (DBIF), and a modified model with an optimization-derived DBIF obtained through a joint estimation framework. The three models were compared using the Akaike information criterion (AIC), the F test, and a histopathologic inflammation reference. The results showed that the optimization-derived DBIF model improved the fitting of liver time-activity curves and achieved lower AIC values and higher F values than the SBIF and population-based DBIF models in all patients. The optimization-derived model significantly increased FDG K1 estimates by 101% and 27% as compared with the traditional SBIF and population-based DBIF. K1 from the optimization-derived model was significantly associated with histopathologic grades of liver inflammation, while the other two models did not provide statistical significance. In conclusion, modeling of the DBIF is critical for kinetic analysis of dynamic liver FDG-PET data in human studies. The optimization-derived DBIF model is more appropriate than SBIF and population-based DBIF for dynamic FDG-PET of liver inflammation. © 2018
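
    The core of any DBIF approach is the mixed input C_in(t) = γ·C_a(t) + (1 − γ)·C_pv(t), with the portal-vein curve commonly modeled as a dispersed version of the arterial one. The sketch below uses a one-tissue simplification with toy curves, whereas the study jointly estimates a two-tissue model; it only illustrates how such a dual-input model can be fitted with SciPy.

    ```python
    import numpy as np
    from scipy.optimize import curve_fit

    def dual_input(t, c_art, fa, ka):
        """Dual-blood input: arterial curve mixed with a portal-vein component,
        modeled here as a first-order dispersion of the arterial curve."""
        dt = t[1] - t[0]
        c_pv = np.convolve(c_art, ka * np.exp(-ka * t))[: len(t)] * dt
        return fa * c_art + (1.0 - fa) * c_pv

    def liver_tac(t, K1, k2, fa, ka, c_art):
        """One-tissue model driven by the dual input (simplified illustration)."""
        dt = t[1] - t[0]
        c_in = dual_input(t, c_art, fa, ka)
        return K1 * np.convolve(c_in, np.exp(-k2 * t))[: len(t)] * dt

    t = np.linspace(0.0, 60.0, 241)                   # minutes
    c_art = 10 * t * np.exp(-t / 2)                   # toy image-derived aortic curve
    measured = liver_tac(t, 0.9, 0.5, 0.25, 1.0, c_art)
    popt, _ = curve_fit(lambda t, K1, k2, fa, ka: liver_tac(t, K1, k2, fa, ka, c_art),
                        t, measured, p0=[0.5, 0.3, 0.3, 0.5])
    ```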

  20. Accident sequence precursor analysis level 2/3 model development

    International Nuclear Information System (INIS)

    Lui, C.H.; Galyean, W.J.; Brownson, D.A.

    1997-01-01

    The US Nuclear Regulatory Commission's Accident Sequence Precursor (ASP) program currently uses simple Level 1 models to assess the conditional core damage probability for operational events occurring in commercial nuclear power plants (NPP). Since not all accident sequences leading to core damage will result in the same radiological consequences, it is necessary to develop simple Level 2/3 models that can be used to analyze the response of the NPP containment structure in the context of a core damage accident, estimate the magnitude of the resulting radioactive releases to the environment, and calculate the consequences associated with these releases. The simple Level 2/3 model development work was initiated in 1995, and several prototype models have been completed. Once developed, these simple Level 2/3 models are linked to the simple Level 1 models to provide risk perspectives for operational events. This paper describes the methods implemented for the development of these simple Level 2/3 ASP models, and the linkage process to the existing Level 1 models

  1. The input and output management of solid waste using DEA models: A case study at Jengka, Pahang

    Science.gov (United States)

    Mohamed, Siti Rosiah; Ghazali, Nur Fadzrina Mohd; Mohd, Ainun Hafizah

    2017-08-01

    Data Envelopment Analysis (DEA), as a tool for obtaining performance indices, has been used extensively in several organizational sectors. Improving the efficiency of Decision Making Units (DMUs) can be impractical because some inputs and outputs are uncontrollable, and in certain situations this produces weak efficiency scores that often reflect the impact of the operating environment. Based on data from Alam Flora Sdn. Bhd Jengka, this study determines the efficiency of solid waste management (SWM) in the town of Jengka, Pahang, using the input-oriented (CCR-I) and output-oriented (CCR-O) DEA models and their duality formulation with vector-average inputs and outputs. Three input variables (collection length in meters, collection time per week in hours, and number of garbage trucks) and two output variables (collection frequency and total solid waste collected in kilograms) are analyzed. In conclusion, only three of the 23 roads are efficient, achieving an efficiency score of 1, while the other 20 roads are inefficiently managed.
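
    For concreteness, the input-oriented CCR efficiency of each DMU (road) reduces to a small linear program. A sketch with SciPy, using fabricated toy data rather than the Alam Flora records:

    ```python
    import numpy as np
    from scipy.optimize import linprog

    def ccr_input_efficiency(X, Y, j0):
        """Input-oriented CCR efficiency of DMU j0.

        X: (m inputs, n DMUs), Y: (s outputs, n DMUs). Returns theta in (0, 1]."""
        m, n = X.shape
        s = Y.shape[0]
        c = np.r_[1.0, np.zeros(n)]                # minimize theta over [theta, lambdas]
        A_in = np.hstack([-X[:, [j0]], X])         # sum_j lam_j x_ij <= theta * x_ij0
        A_out = np.hstack([np.zeros((s, 1)), -Y])  # sum_j lam_j y_rj >= y_rj0
        b_ub = np.r_[np.zeros(m), -Y[:, j0]]
        res = linprog(c, A_ub=np.vstack([A_in, A_out]), b_ub=b_ub,
                      bounds=[(0, None)] * (n + 1))
        return res.fun

    # toy data: 3 inputs (route length, hours/week, trucks), 2 outputs, 4 roads
    X = np.array([[1200, 800, 1500, 900], [10, 8, 14, 9], [2, 1, 3, 2]], float)
    Y = np.array([[3, 2, 3, 3], [5200, 3100, 6000, 4800]], float)
    print([round(ccr_input_efficiency(X, Y, j), 3) for j in range(4)])
    ```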

  2. Using hidden Markov models to align multiple sequences.

    Science.gov (United States)

    Mount, David W

    2009-07-01

    A hidden Markov model (HMM) is a probabilistic model of a multiple sequence alignment (msa) of proteins. In the model, each column of symbols in the alignment is represented by a frequency distribution of the symbols (called a "state"), and insertions and deletions are represented by other states. One moves through the model along a particular path from state to state in a Markov chain (i.e., random choice of next move), trying to match a given sequence. The next matching symbol is chosen from each state, recording its probability (frequency) and also the probability of going to that state from a previous one (the transition probability). State and transition probabilities are multiplied to obtain a probability of the given sequence. The hidden nature of the HMM is due to the lack of information about the value of a specific state, which is instead represented by a probability distribution over all possible values. This article discusses the advantages and disadvantages of HMMs in msa and presents algorithms for calculating an HMM and the conditions for producing the best HMM.
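
    The multiply-and-sum procedure described above is exactly the forward algorithm. Here is a compact illustration for a generic two-state HMM over DNA symbols; a real profile HMM for an msa would add match, insert, and delete states per alignment column.

    ```python
    import numpy as np

    def forward_probability(obs, start, trans, emit):
        """P(observed sequence | HMM) via the forward algorithm.

        start[i]: P(first state = i); trans[i, j]: P(state i -> state j);
        emit[i, k]: P(symbol k | state i); obs: list of symbol indices."""
        alpha = start * emit[:, obs[0]]
        for k in obs[1:]:
            alpha = (alpha @ trans) * emit[:, k]   # multiply transition and emission probabilities
        return alpha.sum()                         # sum over all state paths

    # toy 2-state model over symbols {A, C, G, T} = {0, 1, 2, 3}
    start = np.array([0.6, 0.4])
    trans = np.array([[0.9, 0.1], [0.2, 0.8]])
    emit = np.array([[0.40, 0.10, 0.10, 0.40],
                     [0.25, 0.25, 0.25, 0.25]])
    print(forward_probability([0, 2, 3, 0], start, trans, emit))
    ```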

  3. Finding identifiable parameter combinations in nonlinear ODE models and the rational reparameterization of their input-output equations.

    Science.gov (United States)

    Meshkat, Nicolette; Anderson, Chris; Distefano, Joseph J

    2011-09-01

    When examining the structural identifiability properties of dynamic system models, some parameters can take on an infinite number of values and yet yield identical input-output data. These parameters, and the model, are then said to be unidentifiable. Finding identifiable combinations of parameters with which to reparameterize the model provides a means for quantitatively analyzing the model and computing solutions in terms of the combinations. In this paper, we revisit and explore the properties of an algorithm for finding identifiable parameter combinations using Gröbner Bases and prove useful theoretical properties of these parameter combinations. We prove that a set of M algebraically independent identifiable parameter combinations can be found using this algorithm and that there exists a unique rational reparameterization of the input-output equations over these parameter combinations. We also demonstrate application of the procedure to a nonlinear biomodel. Copyright © 2011 Elsevier Inc. All rights reserved.
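
    To give the flavor of the Gröbner-basis step on a toy example of our own (not the paper's full algorithm): if the input-output equation exposes only the coefficients k1 + k2 and k1·k2, a lex-order basis makes explicit that the individual rate constants are fixed only up to swapping, so the identifiable combinations are the sum and product themselves.

    ```python
    import sympy as sp

    k1, k2, c1, c2 = sp.symbols("k1 k2 c1 c2")

    # observed input-output coefficients: c1 = k1 + k2, c2 = k1*k2
    G = sp.groebner([k1 + k2 - c1, k1 * k2 - c2], k1, k2, order="lex")
    print(G.exprs)
    # [k1 + k2 - c1, k2**2 - c1*k2 + c2]: k2 satisfies a quadratic, so k1 and k2
    # are interchangeable and only (k1 + k2, k1*k2) are identifiable.
    ```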

  4. On the Influence of Input Data Quality to Flood Damage Estimation: The Performance of the INSYDE Model

    Directory of Open Access Journals (Sweden)

    Daniela Molinari

    2017-09-01

    IN-depth SYnthetic Model for Flood Damage Estimation (INSYDE) is a model for the estimation of flood damage to residential buildings at the micro-scale. This study investigates the sensitivity of INSYDE to the accuracy of input data. Starting from the knowledge of input parameters at the scale of individual buildings for a case study, the level of detail of the input data is progressively downgraded until a representative value is defined for all inputs at the census block scale. The analysis reveals that two conditions are required to limit the errors in damage estimation: the representativeness of the representative values with respect to micro-scale values, and local knowledge of the footprint area of the buildings, the latter being the main extensive variable adopted by INSYDE. This result allows the usability of the model to be extended to the meso-scale, including in different countries, depending on the availability of aggregated building data.

  5. Latitudinal and seasonal variability of the micrometeor input function: A study using model predictions and observations from Arecibo and PFISR

    Science.gov (United States)

    Fentzke, J. T.; Janches, D.; Sparks, J. J.

    2009-05-01

    In this work, we use a semi-empirical model of the micrometeor input function (MIF) together with meteor head-echo observations obtained with two high power and large aperture (HPLA) radars, the 430 MHz Arecibo Observatory (AO) radar in Puerto Rico (18°N, 67°W) and the 450 MHz Poker Flat Incoherent Scatter Radar (PFISR) in Alaska (65°N, 147°W), to study the seasonal and geographical dependence of the meteoric flux in the upper atmosphere. The model, recently developed by Janches et al. [2006a. Modeling the global micrometeor input function in the upper atmosphere observed by high power and large aperture radars. Journal of Geophysical Research 111] and Fentzke and Janches [2008. A semi-empirical model of the contribution from sporadic meteoroid sources on the meteor input function observed at Arecibo. Journal of Geophysical Research (Space Physics) 113 (A03304)], includes an initial mass flux that is provided by the six known meteor sources (i.e., orbital families of dust) as well as detailed modeling of meteoroid atmospheric entry and ablation physics. In addition, we use a simple ionization model to treat radar sensitivity issues by defining the minimum electron volume density production thresholds required in the meteor head-echo plasma for detection. This simplified approach works well because we use observations from two radars with similar frequencies but different sensitivities and locations. This methodology allows us to explore the initial input of particles and how it manifests in different parts of the MLT as observed by these instruments, without the need to invoke more sophisticated plasma models, which are under current development. The comparisons between model predictions and radar observations show excellent agreement between diurnal, seasonal, and latitudinal variability of the detected meteor rate and radial velocity distributions, allowing us to understand how individual meteoroid populations contribute to the overall flux at a particular

  6. A grey neural network and input-output combined forecasting model. Primary energy consumption forecasts in Spanish economic sectors

    International Nuclear Information System (INIS)

    Liu, Xiuli; Moreno, Blanca; García, Ana Salomé

    2016-01-01

    A combined forecast of the Grey forecasting method and a neural network back-propagation model, called the Grey Neural Network and Input-Output Combined Forecasting Model (GNF-IO model), is proposed. A real case of energy consumption forecasting is used to validate the effectiveness of the proposed model. The GNF-IO model predicts coal, crude oil, natural gas, renewable and nuclear primary energy consumption volumes for Spain's 36 sub-sectors from 2010 to 2015 according to three different GDP growth scenarios (optimistic, baseline and pessimistic). Model testing shows that the proposed model has higher simulation and forecasting accuracy on energy consumption than the Grey model alone and other combination methods. The forecasts indicate that the primary energies coal, crude oil and natural gas will represent on average 83.6% of total primary energy consumption, raising concerns about security of supply and energy cost and adding risk for some industrial production processes. Thus, Spanish industry must speed up its transition to an energy-efficient economy, achieving a cost reduction and an increase in the level of self-supply. - Highlights: • A forecasting system using Grey models combined with input-output models is proposed. • Primary energy consumption in Spain is used to validate the model. • The grey-based combined model has good forecasting performance. • Natural gas will represent the majority of the total primary energy consumption. • Concerns about security of supply, energy cost and industry competitiveness are raised.
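
    The grey component of such a combination is typically the classic GM(1,1) model, which fits an exponential trend to the accumulated series. A self-contained sketch with toy numbers, not the Spanish sectoral data:

    ```python
    import numpy as np

    def gm11_forecast(x, horizon):
        """Classic GM(1,1) grey forecast: fit on series x, predict horizon steps ahead."""
        x = np.asarray(x, float)
        x1 = np.cumsum(x)                          # accumulated generating operation (AGO)
        z = 0.5 * (x1[1:] + x1[:-1])               # background values
        B = np.column_stack([-z, np.ones(len(z))])
        a, b = np.linalg.lstsq(B, x[1:], rcond=None)[0]
        k = np.arange(len(x) + horizon)
        x1_hat = (x[0] - b / a) * np.exp(-a * k) + b / a
        return np.diff(x1_hat, prepend=x1_hat[0])[len(x):]   # inverse AGO, keep forecasts

    energy = [100.2, 103.5, 108.1, 112.4, 118.0]   # toy consumption series
    print(gm11_forecast(energy, 3))
    ```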

  7. Realistic modeling of seismic input for megacities and large urban areas

    Science.gov (United States)

    Panza, G. F.; Unesco/Iugs/Igcp Project 414 Team

    2003-04-01

    The project addressed the problem of pre-disaster orientation: hazard prediction, risk assessment, and hazard mapping, in connection with seismic activity and man-induced vibrations. The definition of realistic seismic input has been obtained from the computation of a wide set of time histories and spectral information, corresponding to possible seismotectonic scenarios for different source and structural models. The innovative modeling technique, which constitutes the common tool for the entire project, takes into account source, propagation and local site effects. This is done using first principles of physics about wave generation and propagation in complex media, and does not require resorting to convolutive approaches, which have proven quite unreliable, mainly when dealing with complex geological structures, the most interesting from the practical point of view. In fact, several techniques that have been proposed to empirically estimate site effects, using observations convolved with theoretically computed signals corresponding to simplified models, supply reliable information about the site response only for non-interfering seismic phases. They are not adequate in most real cases, when the seismic signal is formed by several interfering waves. The availability of realistic numerical simulations enables us to reliably estimate the amplification effects even in complex geological structures, exploiting the available geotechnical, lithological and geophysical parameters, the topography of the medium, tectonic, historical and palaeoseismological data, and seismotectonic models. The realistic modeling of the ground motion is a very important base of knowledge for the preparation of ground-shaking scenarios, which represent a valid and economic tool for seismic microzonation. This knowledge can be fruitfully used by civil engineers in the design of new seismo-resistant constructions and in the reinforcement of the existing built environment, and, therefore

  8. Evaluation of Uncertainty in Constituent Input Parameters for Modeling the Fate of IMX 101 Components

    Science.gov (United States)

    2017-05-01

    TREECS™ has a tool for estimating soil Kd values given Koc, the soil texture (percent sand, silt, and clay), and the percent organic matter... Mulherin et al. (2005) studied the stability of NQ in three moist, unsaturated soils under laboratory conditions. This study yielded a range... of the uncertain input properties (degradation rates and water-to-soil and water-to-sediment adsorption partitioning distribution coefficients, or

  9. A study on the multi-dimensional spectral analysis for response of a piping model with two-seismic inputs

    International Nuclear Information System (INIS)

    Suzuki, K.; Sato, H.

    1975-01-01

    Power and cross-power spectrum analysis, by which the vibration characteristics of structures, such as natural frequencies, modes of vibration and damping ratios, can be identified, is effective for confirming those characteristics after construction is completed, using the response to small earthquakes or to micro-tremor under operating conditions. This method of analysis, previously utilized only for systems with a single input, is here extended to the analysis of a medium-scale model of a piping system subjected to two seismic inputs. The piping system, attached to a three-storied concrete structure model constructed on a shaking table, was excited by earthquake motions. The inputs to the piping system were recorded at the second floor and at the ceiling of the third floor, where the system was attached. The output, the response of the piping system, was measured at a middle point of the system. As a result, the multi-dimensional power spectrum analysis proves effective for a more reliable identification of the vibration characteristics of the multi-input structural system.
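
    For the two-input case described above, the frequency response from each input to the output is recovered by solving the cross-spectral equations at every frequency; natural frequencies then show up as peaks of |H1| and |H2|, and damping ratios follow from the half-power bandwidths. A generic sketch with SciPy (our illustration of the method, not the authors' exact procedure):

    ```python
    import numpy as np
    from scipy.signal import csd

    def two_input_frf(x1, x2, y, fs, nperseg=1024):
        """Frequency response functions H1, H2 of a two-input/one-output system."""
        f, g11 = csd(x1, x1, fs, nperseg=nperseg)
        _, g22 = csd(x2, x2, fs, nperseg=nperseg)
        _, g12 = csd(x1, x2, fs, nperseg=nperseg)
        _, g1y = csd(x1, y, fs, nperseg=nperseg)
        _, g2y = csd(x2, y, fs, nperseg=nperseg)
        det = g11 * g22 - g12 * np.conj(g12)       # determinant of the spectral matrix
        h1 = (g22 * g1y - g12 * g2y) / det
        h2 = (g11 * g2y - np.conj(g12) * g1y) / det
        return f, h1, h2

    fs = 256.0
    rng = np.random.default_rng(0)
    x1, x2 = rng.standard_normal(2048), rng.standard_normal(2048)   # two recorded inputs
    y = np.convolve(x1, [0.5, 0.3], "same") + np.convolve(x2, [0.2, -0.1], "same")
    f, h1, h2 = two_input_frf(x1, x2, y, fs)
    ```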

  10. Modeling framework for crew decisions during accident sequences

    International Nuclear Information System (INIS)

    Lukic, Y.D.; Worledge, D.H.; Hannaman, G.W.; Spurgin, A.J.

    1986-01-01

    The ability to model the average behavior of operating crews in the course of accident sequences is vital to learning how to prevent damage to power plants and to maintain safety. This paper summarizes the work carried out in support of a Human Reliability Model framework. This work develops the mathematical framework of the model and identifies the parameters which could be measured in some way, e.g., through simulator experience and/or small-scale tests. Selected illustrative examples are presented of the numerical experiments carried out in order to understand the model's sensitivity to parameter variation. These examples are discussed with the objective of deriving insights of a general nature regarding operation of the model, which may lead to an enhanced understanding of man/machine interactions.

  11. WE-FG-206-06: Dual-Input Tracer Kinetic Modeling and Its Analog Implementation for Dynamic Contrast-Enhanced (DCE-) MRI of Malignant Mesothelioma (MPM)

    Energy Technology Data Exchange (ETDEWEB)

    Lee, S; Rimner, A; Hayes, S; Hunt, M; Deasy, J; Zauderer, M; Rusch, V; Tyagi, N [Memorial Sloan Kettering Cancer Center, New York, NY (United States)

    2016-06-15

    Purpose: To use dual-input tracer kinetic modeling of the lung for mapping the spatial heterogeneity of various kinetic parameters in malignant MPM. Methods: Six MPM patients received DCE-MRI as part of their radiation therapy simulation scan. Five patients had the epithelioid subtype of MPM, while one was biphasic. A 3D fast-field echo sequence with TR/TE/flip angle of 3.62 ms/1.69 ms/15° was used for DCE-MRI acquisition. The scan was collected for 5 minutes with a temporal resolution of 5-9 seconds depending on the spatial extent of the tumor. A principal component analysis-based groupwise deformable registration was used to co-register all the DCE-MRI series for motion compensation. All the images were analyzed using five different dual-input tracer kinetic models implemented in analog continuous-time formalism: the Tofts-Kety (TK), extended TK (ETK), two-compartment exchange (2CX), adiabatic approximation to the tissue homogeneity (AATH), and distributed parameter (DP) models. The following parameters were computed for each model: total blood flow (BF), pulmonary flow fraction (γ), pulmonary blood flow (BF-pa), systemic blood flow (BF-a), blood volume (BV), mean transit time (MTT), permeability-surface area product (PS), fractional interstitial volume (vi), extraction fraction (E), volume transfer constant (Ktrans) and efflux rate constant (kep). Results: Although the majority of patients had epithelioid histologies, kinetic parameter values varied across the different models. One patient showed a higher total BF value in all models among the epithelioid histologies, although the γ value varied among the models. In one tumor with a large area of necrosis, the TK and ETK models showed higher E, Ktrans, and kep values and a lower interstitial volume as compared with the AATH, DP and 2CX models. Kinetic parameters such as BF-pa, BF-a, PS, and Ktrans values were higher in the surviving group compared to the non-surviving group across most models. Conclusion: Dual-input tracer

  12. MODELING THE RED SEQUENCE: HIERARCHICAL GROWTH YET SLOW LUMINOSITY EVOLUTION

    International Nuclear Information System (INIS)

    Skelton, Rosalind E.; Bell, Eric F.; Somerville, Rachel S.

    2012-01-01

    We explore the effects of mergers on the evolution of massive early-type galaxies by modeling the evolution of their stellar populations in a hierarchical context. We investigate how a realistic red sequence population set up by z ∼ 1 evolves under different assumptions for the merger and star formation histories, comparing changes in color, luminosity, and mass. The purely passive fading of existing red sequence galaxies, with no further mergers or star formation, results in dramatic changes at the bright end of the luminosity function and color-magnitude relation. Without mergers there is too much evolution in luminosity at a fixed space density compared to observations. The change in color and magnitude at a fixed mass resembles that of a passively evolving population that formed relatively recently, at z ∼ 2. Mergers among the red sequence population ('dry mergers') occurring after z = 1 build up mass, counteracting the fading of the existing stellar populations to give smaller changes in both color and luminosity for massive galaxies. By allowing some galaxies to migrate from the blue cloud onto the red sequence after z = 1 through gas-rich mergers, younger stellar populations are added to the red sequence. This manifestation of the progenitor bias increases the scatter in age and results in even smaller changes in color and luminosity between z = 1 and z = 0 at a fixed mass. The resultant evolution appears much slower, resembling the passive evolution of a population that formed at high redshift (z ∼ 3-5), and is in closer agreement with observations. We conclude that measurements of the luminosity and color evolution alone are not sufficient to distinguish between the purely passive evolution of an old population and cosmologically motivated hierarchical growth, although these scenarios have very different implications for the mass growth of early-type galaxies over the last half of cosmic history.

  13. Bacterial DNA Sequence Compression Models Using Artificial Neural Networks

    Directory of Open Access Journals (Sweden)

    Armando J. Pinho

    2013-08-01

    It is widely accepted that the advances in DNA sequencing techniques have contributed to an unprecedented growth of genomic data. This fact has increased the interest in DNA compression, not only from the information theory and biology points of view, but also from a practical perspective, since such sequences require storage resources. Several compression methods exist, and particularly those using finite-context models (FCMs) have received increasing attention, as they have been proven to effectively compress DNA sequences with low bits-per-base, as well as low encoding/decoding time-per-base. However, the amount of run-time memory required to store high-order finite-context models may become impractical, since a context order as low as 16 requires a maximum of 17.2 x 10^9 memory entries. This paper presents a method to reduce this memory requirement by using a novel application of artificial neural networks (ANNs) to build such probabilistic models in a compact way, and shows how to use them to estimate the probabilities. Such a system was implemented, and its performance compared against state-of-the-art compressors, such as XM-DNA (expert model) and FCM-Mx (mixture of finite-context models), as well as with general-purpose compressors. Using a combination of an order-10 FCM and an ANN, encoding results similar to those of FCMs up to order 16 are obtained using only 17 megabytes of memory, whereas the latter, even employing hash tables, use several hundreds of megabytes.
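
    For intuition on why memory explodes with context order, a dictionary-backed order-k FCM takes only a few lines; the number of possible contexts grows as 4^k, which is precisely what the ANN replaces with a fixed-size parameterization. A minimal adaptive sketch with additive smoothing (our simplification, not the paper's implementation):

    ```python
    from collections import defaultdict
    import math

    def fcm_bits_per_base(seq, order=3, alpha=1.0):
        """Average code length (bits per base) of a DNA string under an
        adaptive order-k finite-context model with additive smoothing."""
        counts = defaultdict(lambda: defaultdict(int))
        total_bits = 0.0
        for i in range(order, len(seq)):
            ctx, sym = seq[i - order:i], seq[i]
            c = counts[ctx]
            p = (c[sym] + alpha) / (sum(c.values()) + 4 * alpha)
            total_bits -= math.log2(p)
            c[sym] += 1                    # update the model after coding the symbol
        return total_bits / (len(seq) - order)

    print(fcm_bits_per_base("ACGTACGTACGGACGTACGT" * 50, order=4))
    ```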

  14. AUTOMATIC CONTROL SYSTEM WITH A SINGLE-INPUT-DUAL-OUTPUT MODEL FOR CONTROLLING INSTRUMENT SERVICE-LIFE EFFICIENCY

    Directory of Open Access Journals (Sweden)

    S.N.M.P. Simamora

    2014-10-01

    An efficiency condition is reached when the ratio of useful output to the total resources consumed approaches the value 1 (the absolute limit). An instrument achieves efficiency if its power consumption over its service life decreases significantly compared to the previous condition, in which the instrument was not equipped with the additional system (the proposed model improvement). The approach is even more effective if the model inputs are used in unison to achieve a homogeneous output. In this research, an automatic control system for a single-input-dual-output model has been designed and implemented, with a lamp and a fan as the sampled instruments. The source voltage used is AC (alternating current), and the system was tested using quantitative research methods and instrumentation with observed measuring instruments. The results obtained demonstrate that instrument efficiency improved significantly under the single-input-dual-output model in separate trials of the lamp and the fan, compared to the condition before. The results also show that the design, as built, runs well.

  15. Performance assessment of retrospective meteorological inputs for use in air quality modeling during TexAQS 2006

    Science.gov (United States)

    Ngan, Fong; Byun, Daewon; Kim, Hyuncheol; Lee, Daegyun; Rappenglück, Bernhard; Pour-Biazar, Arastoo

    2012-07-01

    To achieve more accurate meteorological inputs than those used in the daily forecast for studying TexAQS 2006 air quality, retrospective simulations were conducted using objective analysis and 3D/surface analysis nudging with surface and upper-air observations. Modeled ozone using the assimilated meteorological fields, with improved wind fields, shows better agreement with observations than the forecast results. In post-frontal conditions, the important factors for ozone modeling in terms of wind patterns are the weak easterlies in the morning, which bring industrial emissions to the city, and the subsequent clockwise turning of the wind direction, induced by the Coriolis force superimposed on the sea breeze, which keeps pollutants in the urban area. Objective analysis and nudging employed in the retrospective simulation minimize the wind bias but are not able to compensate for the general flow pattern biases inherited from large-scale inputs. By using alternative analysis data for initializing the meteorological simulation, the model can reproduce the flow pattern and place the ozone peak location closer to reality. Inaccurate simulation of precipitation and cloudiness occasionally causes over-prediction of ozone. Since the meteorological model is limited in simulating precipitation and cloudiness in the fine-scale domain (less than 4-km grid), satellite-based cloud data are an alternative way to provide the necessary inputs for the retrospective study of air quality.

  16. Synaptic inputs compete during rapid formation of the calyx of Held: a new model system for neural development.

    Science.gov (United States)

    Holcomb, Paul S; Hoffpauir, Brian K; Hoyson, Mitchell C; Jackson, Dakota R; Deerinck, Thomas J; Marrs, Glenn S; Dehoff, Marlin; Wu, Jonathan; Ellisman, Mark H; Spirou, George A

    2013-08-07

    Hallmark features of neural circuit development include early exuberant innervation followed by competition and pruning to mature innervation topography. Several neural systems, including the neuromuscular junction and climbing fiber innervation of Purkinje cells, are models to study neural development in part because they establish a recognizable endpoint of monoinnervation of their targets and because the presynaptic terminals are large and easily monitored. We demonstrate here that calyx of Held (CH) innervation of its target, which forms a key element of auditory brainstem binaural circuitry, exhibits all of these characteristics. To investigate CH development, we made the first application of serial block-face scanning electron microscopy to neural development with fine temporal resolution and thereby accomplished the first time series for 3D ultrastructural analysis of neural circuit formation. This approach revealed a growth spurt of added apposed surface area (ASA) >200 μm²/d centered on a single age at postnatal day 3 in mice and an initial rapid phase of growth and competition that resolved to monoinnervation in two-thirds of cells within 3 d. This rapid growth occurred in parallel with an increase in action potential threshold, which may mediate selection of the strongest input as the winning competitor. ASAs of competing inputs were segregated on the cell body surface. These data suggest mechanisms to select "winning" inputs by regional reinforcement of postsynaptic membrane to mediate size and strength of competing synaptic inputs.

  17. On the relationship between input parameters in the two-mass vocal-fold model with acoustical coupling and signal parameters of the glottal flow

    NARCIS (Netherlands)

    van Hirtum, Annemie; Lopez, Ines; Hirschberg, Abraham; Pelorson, Xavier

    2003-01-01

    In this paper the sensitivity of the two-mass model with acoustical coupling to the model input-parameters is assessed. The model-output or the glottal volume air flow is characterised by signal-parameters in the time-domain. The influence of changing input-parameters on the signal-parameters is

  18. On the relationship between input parameters in the two-mass vocal-fold model with acoustical coupling and signal parameters of the glottal flow

    NARCIS (Netherlands)

    Hirtum, van A.; Lopez Arteaga, I.; Hirschberg, A.; Pelorson, X.

    2003-01-01

    In this paper the sensitivity of the two-mass model with acoustical coupling to the model input-parameters is assessed. The model-output or the glottal volume air flow is characterised by signal-parameters in the time-domain. The influence of changing input-parameters on the signal-parameters is

  19. Analysis of correlations between sites in models of protein sequences

    International Nuclear Information System (INIS)

    Giraud, B.G.; Lapedes, A.; Liu, L.C.

    1998-01-01

    A criterion based on conditional probabilities, related to the concept of algorithmic distance, is used to detect correlated mutations at noncontiguous sites on sequences. We apply this criterion to the problem of analyzing correlations between sites in protein sequences; however, the analysis applies generally to networks of interacting sites with discrete states at each site. Elementary models, where explicit results can be derived easily, are introduced. The number of states per site considered ranges from 2, illustrating the relation to familiar classical spin systems, to 20 states, suitable for representing amino acids. Numerical simulations show that the criterion remains valid even when the genetic history of the data samples (e.g., protein sequences), as represented by a phylogenetic tree, introduces nonindependence between samples. Statistical fluctuations due to finite sampling are also investigated and do not invalidate the criterion. A subsidiary result is found: The more homogeneous a population, the more easily its average properties can drift from the properties of its ancestor. copyright 1998 The American Physical Society
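
    The paper's criterion is built on conditional probabilities related to algorithmic distance; as a simpler stand-in of the same flavor, mutual information between alignment columns is the textbook way to flag covarying site pairs. A minimal sketch on a toy alignment (our illustration, not the authors' criterion):

    ```python
    import math
    from collections import Counter

    def column_mi(col_a, col_b):
        """Mutual information (bits) between two alignment columns."""
        n = len(col_a)
        pa, pb = Counter(col_a), Counter(col_b)
        pab = Counter(zip(col_a, col_b))
        return sum((c / n) * math.log2((c / n) / ((pa[x] / n) * (pb[y] / n)))
                   for (x, y), c in pab.items())

    # toy alignment: sites 0 and 1 covary perfectly, site 2 is independent
    seqs = ["AKL", "AKV", "GRL", "GRV", "AKL", "GRV"]
    cols = list(zip(*seqs))
    print(column_mi(cols[0], cols[1]))   # high: 1 bit for perfect covariation
    print(column_mi(cols[0], cols[2]))   # small for (nearly) independent sites
    ```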

  20. Protein model discrimination using mutational sensitivity derived from deep sequencing.

    Science.gov (United States)

    Adkar, Bharat V; Tripathi, Arti; Sahoo, Anusmita; Bajaj, Kanika; Goswami, Devrishi; Chakrabarti, Purbani; Swarnkar, Mohit K; Gokhale, Rajesh S; Varadarajan, Raghavan

    2012-02-08

    A major bottleneck in protein structure prediction is the selection of correct models from a pool of decoys. Relative activities of ∼1,200 individual single-site mutants in a saturation library of the bacterial toxin CcdB were estimated by determining their relative populations using deep sequencing. This phenotypic information was used to define an empirical score for each residue (RankScore), which correlated with the residue depth, and identify active-site residues. Using these correlations, ∼98% of correct models of CcdB (RMSD ≤ 4Å) were identified from a large set of decoys. The model-discrimination methodology was further validated on eleven different monomeric proteins using simulated RankScore values. The methodology is also a rapid, accurate way to obtain relative activities of each mutant in a large pool and derive sequence-structure-function relationships without protein isolation or characterization. It can be applied to any system in which mutational effects can be monitored by a phenotypic readout. Copyright © 2012 Elsevier Ltd. All rights reserved.

  1. Next-generation sequence analysis of cancer xenograft models.

    Directory of Open Access Journals (Sweden)

    Fernando J Rossello

    Next-generation sequencing (NGS) studies in cancer are limited by the amount, quality and purity of tissue samples. In this situation, primary xenografts have proven to be useful preclinical models. However, the presence of mouse-derived stromal cells represents a technical challenge to their use in NGS studies. We examined this problem in an established primary xenograft model of small cell lung cancer (SCLC), a malignancy often diagnosed from small biopsy or needle aspirate samples. Using an in silico strategy that assigns reads according to species of origin, we prospectively compared NGS data from primary xenograft models with matched cell lines and with published datasets. We show here that low-coverage whole-genome analysis demonstrated remarkable concordance between published genome data and internal controls, despite the presence of mouse genomic DNA. Exome capture sequencing revealed that this enrichment procedure was highly species-specific, with less than 4% of reads aligning to the mouse genome. Human-specific expression profiling with RNA-Seq replicated array-based gene expression experiments, whereas mouse-specific transcript profiles correlated with published datasets from human cancer stroma. We conclude that primary xenografts represent a useful platform for complex NGS analysis in cancer research for tumours with limited sample resources, or those with prominent stromal cell populations.

  2. Nonlinear neural network for hemodynamic model state and input estimation using fMRI data

    KAUST Repository

    Karam, Ayman M.

    2014-11-01

    Originally inspired by biological neural networks, artificial neural networks (ANNs) are powerful mathematical tools that can solve complex nonlinear problems such as filtering, classification, prediction and more. This paper demonstrates the first successful implementation of ANNs, specifically nonlinear autoregressive with exogenous input (NARX) networks, to estimate the hemodynamic states and neural activity from simulated and measured real blood oxygenation level dependent (BOLD) signals. Blocked and event-related BOLD data are used to test the algorithm on real experiments. The proposed method is accurate and robust even in the presence of signal noise, and it does not depend on the sampling interval. Moreover, the structure of the NARX networks is optimized to yield the best estimate with minimal network architecture. The results of the estimated neural activity are also discussed in terms of their potential use.

  3. Modelling and control design for SHARON/Anammox reactor sequence

    DEFF Research Database (Denmark)

    Valverde Perez, Borja; Mauricio Iglesias, Miguel; Sin, Gürkan

    2012-01-01

    With the perspective of investigating a suitable control design for autotrophic nitrogen removal, this work presents a complete model of the SHARON/Anammox reactor sequence. The dynamics of the reactors were explored, pointing out the different scales of the rates in the system: slow microbial metabolism against fast chemical reaction and mass transfer. Likewise, the analysis of the dynamics contributed to establishing qualitatively the requirements for control of the reactors, both for regulation and for optimal operation. Work is in progress on quantitatively analysing different control structure...

  4. Embodied water analysis for Hebei Province, China by input-output modelling

    Science.gov (United States)

    Liu, Siyuan; Han, Mengyao; Wu, Xudong; Wu, Xiaofang; Li, Zhi; Xia, Xiaohua; Ji, Xi

    2018-03-01

    With the accelerating coordinated development of the Beijing-Tianjin-Hebei region, regional economic integration is recognized as a national strategy. As water scarcity places Hebei Province in a dilemma, it is of critical importance for Hebei Province to balance water resources as well as to make full use of its unique advantages in the transition to sustainable development. To our knowledge, related embodied water accounting analyses have been conducted for Beijing and Tianjin, while similar work focusing on Hebei is not found. In this paper, using the most complete and recent statistics available for Hebei Province, the embodied water use in Hebei Province is analyzed in detail. Based on input-output analysis, a complete systems accounting framework for water resources is presented. In addition, a database of embodied water intensity is proposed which is applicable to both intermediate inputs and final demand. The results suggest that the total amount of embodied water in final demand is 10.62 billion m3, of which the water embodied in urban household consumption accounts for more than half. As a net embodied water importer, the water embodied in the commodity trade of Hebei Province is 17.20 billion m3. The outcome of this work implies that it is particularly urgent to adjust the industrial structure and trade policies for water conservation, to upgrade technology and to improve water utilization. To relieve water shortages in Hebei Province, it is of crucial importance to regulate the balance of water use within the province, thus balancing water distribution among the various industrial sectors.
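
    The embodied-water bookkeeping itself is the standard single-region input-output calculation: direct water intensities are propagated through the Leontief inverse to obtain total (direct plus indirect) intensities. A toy three-sector sketch with fabricated numbers, not the Hebei tables:

    ```python
    import numpy as np

    A = np.array([[0.10, 0.20, 0.05],      # technical coefficient matrix
                  [0.15, 0.10, 0.10],
                  [0.05, 0.05, 0.15]])
    w = np.array([8.0, 3.0, 1.5])          # direct water use per unit output (m3)
    y = np.array([100.0, 250.0, 400.0])    # final demand by sector

    L = np.linalg.inv(np.eye(3) - A)       # Leontief inverse (I - A)^-1
    embodied_intensity = w @ L             # total water embodied per unit of final demand
    print(embodied_intensity, embodied_intensity @ y)
    ```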

  5. The sensitivity of ecosystem service models to choices of input data and spatial resolution

    Science.gov (United States)

    Kenneth J. Bagstad; Erika Cohen; Zachary H. Ancona; Steven G. McNulty; Ge Sun

    2018-01-01

    Although ecosystem service (ES) modeling has progressed rapidly in the last 10–15 years, comparative studies on data and model selection effects have become more common only recently. Such studies have drawn mixed conclusions about whether different data and model choices yield divergent results. In this study, we compared the results of different models to address...

  6. Modeling and sliding mode predictive control of the ultra-supercritical boiler-turbine system with uncertainties and input constraints.

    Science.gov (United States)

    Tian, Zhen; Yuan, Jingqi; Zhang, Xiang; Kong, Lei; Wang, Jingcheng

    2018-05-01

    The coordinated control system (CCS) serves an important role in load regulation, efficiency optimization and pollutant reduction for coal-fired power plants. The CCS faces tough challenges, such as wide-range load variation and various uncertainties and constraints. This paper aims to improve the load tracking ability and robustness of boiler-turbine units under wide-range operation. To capture the key dynamics of the ultra-supercritical boiler-turbine system, a nonlinear control-oriented model is developed based on mechanism analysis and model reduction techniques, and validated with historical operating data of a real 1000 MW unit. To simultaneously address the issues of uncertainties and input constraints, a discrete-time sliding mode predictive controller (SMPC) is designed with a dual-mode control law. Moreover, the input-to-state stability and robustness of the closed-loop system are proved. Simulation results are presented to illustrate the effectiveness of the proposed control scheme, which achieves good tracking performance, disturbance rejection ability and compatibility with input constraints. Copyright © 2018 ISA. Published by Elsevier Ltd. All rights reserved.

  7. Methodology for deriving hydrogeological input parameters for safety-analysis models - application to fractured crystalline rocks of Northern Switzerland

    International Nuclear Information System (INIS)

    Vomvoris, S.; Andrews, R.W.; Lanyon, G.W.; Voborny, O.; Wilson, W.

    1996-04-01

    Switzerland is one of many nations with nuclear power that is seeking to identify rock types and locations that would be suitable for the underground disposal of nuclear waste. A common challenge among these programs is to provide engineering designers and safety analysts with a reasonably representative hydrogeological input dataset that synthesizes the relevant information from direct field observations as well as inferences and model results derived from those observations. Needed are estimates of the volumetric flux through a volume of rock and the distribution of that flux into discrete pathways between the repository zones and the biosphere. These fluxes are not directly measurable but must be derived based on understandings of the range of plausible hydrogeologic conditions expected at the location investigated. The methodology described in this report utilizes conceptual and numerical models at various scales to derive the input dataset. The methodology incorporates an innovative approach, called the geometric approach, in which field observations and their associated uncertainty, together with a conceptual representation of those features that most significantly affect the groundwater flow regime, were rigorously applied to generate alternative possible realizations of hydrogeologic features in the geosphere. In this approach, the ranges in the output values directly reflect uncertainties in the input values. As a demonstration, the methodology is applied to the derivation of the hydrogeological dataset for the crystalline basement of Northern Switzerland. (author) figs., tabs., refs

  8. Tables and intercomparisons of evolutionary sequences of models for massive stars

    International Nuclear Information System (INIS)

    Chin, Chaowen; Stothers, R.B.

    1990-01-01

    Tables of evolutionary sequences of models for massive stars have been prepared for a variety of physical input parameters that are normally treated as free. These parameters include the interior convective mixing scheme, the mixing length in the outer convective envelope, the rate of stellar-wind mass loss, the initial stellar mass, and the initial chemical composition. Ranges of specified initial mass and initial chemical composition are M = 10-120 solar masses, Xe = 0.602-0.739, and Ze = 0.021-0.044. The tables cover evolution of the star from the ZAMS to either the end of core H burning or the end of core He burning. Differences among the evolutionary tracks are illustrated primarily in terms of the interior mixing scheme, since the amount and timing of stellar wind mass loss are still very uncertain for initial masses above about 30 solar masses. 52 refs

  9. Improved Stabilization Conditions for Nonlinear Systems with Input and State Delays via T-S Fuzzy Model

    Directory of Open Access Journals (Sweden)

    Chang Che

    2018-01-01

    This paper focuses on the problem of nonlinear systems with input and state delays. The considered nonlinear systems are represented by a Takagi-Sugeno (T-S) fuzzy model. A new state feedback control approach is introduced for T-S fuzzy systems with input delay and state delays. A new Lyapunov-Krasovskii functional is employed to derive less conservative stability conditions by incorporating a recently developed Wirtinger-based integral inequality. Based on the Lyapunov stability criterion, a series of linear matrix inequalities (LMIs) is obtained by using slack variables and the integral inequality, which guarantees the asymptotic stability of the closed-loop system. Several numerical examples are given to show the advantages of the proposed results.

  10. Modeling Soil Carbon Dynamics in Northern Forests: Effects of Spatial and Temporal Aggregation of Climatic Input Data.

    Science.gov (United States)

    Dalsgaard, Lise; Astrup, Rasmus; Antón-Fernández, Clara; Borgen, Signe Kynding; Breidenbach, Johannes; Lange, Holger; Lehtonen, Aleksi; Liski, Jari

    2016-01-01

    Boreal forests contain 30% of the global forest carbon with the majority residing in soils. While challenging to quantify, soil carbon changes comprise a significant, and potentially increasing, part of the terrestrial carbon cycle. Thus, their estimation is important when designing forest-based climate change mitigation strategies and soil carbon change estimates are required for the reporting of greenhouse gas emissions. Organic matter decomposition varies with climate in complex nonlinear ways, rendering data aggregation nontrivial. Here, we explored the effects of temporal and spatial aggregation of climatic and litter input data on regional estimates of soil organic carbon stocks and changes for upland forests. We used the soil carbon and decomposition model Yasso07 with input from the Norwegian National Forest Inventory (11275 plots, 1960-2012). Estimates were produced at three spatial and three temporal scales. Results showed that a national level average soil carbon stock estimate varied by 10% depending on the applied spatial and temporal scale of aggregation. Higher stocks were found when applying plot-level input compared to country-level input and when long-term climate was used as compared to annual or 5-year mean values. A national level estimate for soil carbon change was similar across spatial scales, but was considerably (60-70%) lower when applying annual or 5-year mean climate compared to long-term mean climate reflecting the recent climatic changes in Norway. This was particularly evident for the forest-dominated districts in the southeastern and central parts of Norway and in the far north. We concluded that the sensitivity of model estimates to spatial aggregation will depend on the region of interest. Further, that using long-term climate averages during periods with strong climatic trends results in large differences in soil carbon estimates. The largest differences in this study were observed in central and northern regions with strongly
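
    The methodological crux, that a nonlinear decomposition response to climate makes aggregated inputs biased, can be demonstrated in a few lines. A toy illustration with a generic Q10 temperature response (not the actual Yasso07 equations): by Jensen's inequality, the rate evaluated at the mean climate differs from the mean of the rates.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)
    monthly_T = rng.normal(5.0, 8.0, size=12 * 30)   # 30 years of monthly temperatures, deg C

    def decomposition_rate(T, k0=1.0, q10=2.0):
        """Toy exponential (Q10) temperature response of decomposition."""
        return k0 * q10 ** (T / 10.0)

    print(decomposition_rate(monthly_T.mean()))   # rate at the long-term mean climate
    print(decomposition_rate(monthly_T).mean())   # mean rate from finer inputs: larger
    ```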

  11. Effect of manure vs. fertilizer inputs on productivity of forage crop models.

    Science.gov (United States)

    Annicchiarico, Giovanni; Caternolo, Giovanni; Rossi, Emanuela; Martiniello, Pasquale

    2011-06-01

    Manure produced by livestock activity is a dangerous product capable of causing serious environmental pollution. Agronomic management practices on the use of manure may transform it from a waste into a resource product. Experiments comparing manure with standard chemical fertilizers (CF) were studied under a double-cropping-per-year regime (alfalfa, model I; Italian ryegrass-corn, model II; barley-seed sorghum, model III; and horse-bean-silage sorghum, model IV). The total amount of manure applied in the annual forage crops of models II, III and IV was 158, 140 and 80 m3 ha(-1), respectively. The manure applied to soil by broadcast and injection procedures provides an amount of nitrogen equal to that supplied by CF. The effect of manure applications on animal feed production and biochemical soil characteristics was related to the models. The weather conditions and the manures and CF showed small interactions among treatments. The number of MFU ha(-1) of biomass crop gross product produced in the autumn- and spring-sowing models under manure applications was 11,769, 20,525, 11,342 and 21,397 in models I through IV, respectively. The reduction of MFU ha(-1) under CF ranges from 10.7% to 13.2% relative to the manure models. The effect of manure on organic carbon and total nitrogen of the topsoil, compared to model I, stressed these parameters as CF did; their amounts were higher in models II and III than in model IV. In percentage terms, the organic carbon and total nitrogen of model I under the manure treatment were reduced by about 18.5 and 21.9% in models II and III and by 8.8 and 6.3% in model IV, respectively. Manure management may substitute for CF without reducing gross production and the sustainability of cropping systems, thus allowing the opportunity to recycle the waste product for animal forage feeding.

  12. Effect of Manure vs. Fertilizer Inputs on Productivity of Forage Crop Models

    Directory of Open Access Journals (Sweden)

    Pasquale Martiniello

    2011-06-01

    Manure produced by livestock activity is a dangerous product capable of causing serious environmental pollution. Agronomic management practices on the use of manure may transform it from a waste into a resource product. Experiments comparing manure with standard chemical fertilizers (CF) were studied under a double-cropping-per-year regime (alfalfa, model I; Italian ryegrass-corn, model II; barley-seed sorghum, model III; and horse-bean-silage sorghum, model IV). The total amount of manure applied in the annual forage crops of models II, III and IV was 158, 140 and 80 m3 ha−1, respectively. The manure applied to soil by broadcast and injection procedures provides an amount of nitrogen equal to that supplied by CF. The effect of manure applications on animal feed production and biochemical soil characteristics was related to the models. The weather conditions and the manures and CF showed small interactions among treatments. The number of MFU ha−1 of biomass crop gross product produced in the autumn- and spring-sowing models under manure applications was 11,769, 20,525, 11,342 and 21,397 in models I through IV, respectively. The reduction of MFU ha−1 under CF ranges from 10.7% to 13.2% relative to the manure models. The effect of manure on organic carbon and total nitrogen of the topsoil, compared to model I, stressed these parameters as CF did; their amounts were higher in models II and III than in model IV. In percentage terms, the organic carbon and total nitrogen of model I under the manure treatment were reduced by about 18.5 and 21.9% in models II and III and by 8.8 and 6.3% in model IV, respectively. Manure management may substitute for CF without reducing gross production and the sustainability of cropping systems, thus allowing the opportunity to recycle the waste product for animal forage feeding.

  13. Modeling river total bed material load discharge using artificial intelligence approaches (based on conceptual inputs)

    Science.gov (United States)

    Roushangar, Kiyoumars; Mehrabani, Fatemeh Vojoudi; Shiri, Jalal

    2014-06-01

    This study presents Artificial Intelligence (AI)-based modeling of total bed material load aimed at improving the accuracy of the predictions of traditional models. Gene expression programming (GEP) and adaptive neuro-fuzzy inference system (ANFIS)-based models were developed and validated for the estimations. Sediment data from the Qotur River (northwestern Iran) were used for development and validation of the applied techniques. In order to assess the applied techniques against traditional models, stream-power-based and shear-stress-based physical models were also applied to the studied case. The obtained results reveal that the developed AI-based models, using a minimum number of dominant factors, give more accurate results than the other applied models. It was also revealed that the k-fold test is a practical but computationally costly technique for completely scanning the applied data and avoiding over-fitting.

  14. Modelling of just-in-sequence supply of manufacturing processes

    Directory of Open Access Journals (Sweden)

    Bányai Tamás

    2017-01-01

    Full Text Available Customer-oriented production has increased the complexity of manufacturing and the connected logistics processes. In many production companies, one of the largest assets on the balance sheet is inventory. To avoid inventory problems and to succeed in today's market, manufacturing companies try to decrease heavy inventory levels through just-in-time-based supply strategies. The aim of this research work is to analyse these supply strategies. The first part of the paper describes just-in-time-based supply and summarises its most important characteristics. The second part focuses on the modelling of just-in-sequence-based in-plant supply. The models make it possible to determine different in-plant supply strategies.

  15. Universal sequence replication, reversible polymerization and early functional biopolymers: a model for the initiation of prebiotic sequence evolution.

    Directory of Open Access Journals (Sweden)

    Sara Imari Walker

    Full Text Available Many models for the origin of life have focused on understanding how evolution can drive the refinement of a preexisting enzyme, such as the evolution of efficient replicase activity. Here we present a model for what was, arguably, an even earlier stage of chemical evolution, when polymer sequence diversity was generated and sustained before, and during, the onset of functional selection. The model includes regular environmental cycles (e.g. hydration-dehydration cycles) that drive polymers between times of replication and functional activity, which coincide with times of different monomer and polymer diffusivity. Template-directed replication of informational polymers, which takes place during the dehydration stage of each cycle, is considered to be sequence-independent. New sequences are generated by spontaneous polymer formation, and all sequences compete for a finite monomer resource that is recycled via reversible polymerization. Kinetic Monte Carlo simulations demonstrate that this proposed prebiotic scenario provides a robust mechanism for the exploration of sequence space. Introduction of a polymer sequence with monomer synthetase activity illustrates that functional sequences can become established in a preexisting pool of otherwise non-functional sequences. Functional selection does not dominate system dynamics and sequence diversity remains high, permitting the emergence and spread of more than one functional sequence. It is also observed that polymers spontaneously form clusters in simulations where polymers diffuse more slowly than monomers, a feature that is reminiscent of a previous proposal that the earliest stages of life could have been defined by the collective evolution of a system-wide cooperation of polymer aggregates. Overall, the results presented demonstrate the merits of considering plausible prebiotic polymer chemistries and environments that would have allowed for the rapid turnover of monomer resources and for
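
    A toy kinetic Monte Carlo loop can illustrate the kind of dynamics described above: reversible polymerization drawing on a finite, recycled monomer pool. The rates, pool size, and nucleation rule below are invented for illustration and are not the authors' parameters:

        import random

        monomers, polymers = 1000, []        # free monomer pool; list of polymer lengths
        k_on, k_off = 1e-4, 1e-2             # hypothetical extension/degradation rate constants
        t, t_end = 0.0, 500.0

        while t < t_end:
            r_on = k_on * monomers * (len(polymers) + 1)   # extend a polymer, or nucleate
            r_off = k_off * sum(polymers)                  # release a terminal monomer
            total = r_on + r_off
            if total == 0:
                break
            t += random.expovariate(total)                 # Gillespie time step
            if random.random() < r_on / total and monomers > 0:
                if polymers and random.random() > 1 / (len(polymers) + 1):
                    polymers[random.randrange(len(polymers))] += 1
                else:
                    polymers.append(1)                     # spontaneous formation
                monomers -= 1
            elif polymers:
                i = random.randrange(len(polymers))
                polymers[i] -= 1
                monomers += 1                              # monomer is recycled
                if polymers[i] == 0:
                    polymers.pop(i)

        print(len(polymers), "polymers, mean length",
              sum(polymers) / max(len(polymers), 1))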

  16. AN ACCURATE MODELING OF DELAY AND SLEW METRICS FOR ON-CHIP VLSI RC INTERCONNECTS FOR RAMP INPUTS USING BURR’S DISTRIBUTION FUNCTION

    Directory of Open Access Journals (Sweden)

    Rajib Kar

    2010-09-01

    Full Text Available This work presents an accurate and efficient model to compute the delay and slew metrics of on-chip interconnect in high-speed CMOS circuits for ramp inputs. Our metric is based on the Burr distribution function, which is used to characterize the normalized homogeneous portion of the step response. We used the PERI (Probability distribution function Extension for Ramp Inputs) technique, which extends delay and slew metrics for step inputs to the more general and realistic non-step inputs. The accuracy of our models is validated by comparison with SPICE simulations.
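
    For reference, once a two-parameter Burr Type XII CDF, F(t) = 1 − (1 + t^c)^(−k), is fitted to the normalized step response, the delay and slew points follow in closed form. A sketch with illustrative shape values (not moment-matched to any interconnect):

        def burr_cdf(t, c, k):
            """Burr Type XII CDF used as the normalized step response."""
            return 1.0 - (1.0 + t**c) ** (-k)

        def crossing_time(p, c, k):
            """Invert F(t) = p in closed form: t = ((1-p)**(-1/k) - 1)**(1/c)."""
            return ((1.0 - p) ** (-1.0 / k) - 1.0) ** (1.0 / c)

        c, k = 2.0, 1.5                                  # illustrative shape parameters
        t50 = crossing_time(0.5, c, k)
        print("50% delay:", t50, "check F(t50) =", burr_cdf(t50, c, k))
        print("10-90% slew:", crossing_time(0.9, c, k) - crossing_time(0.1, c, k))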

  17. A simple technique for obtaining future climate data inputs for natural resource models

    Science.gov (United States)

    Those conducting impact studies using natural resource models need to be able to quickly and easily obtain downscaled future climate data from multiple models, scenarios, and timescales for multiple locations. This paper describes a method of quickly obtaining future climate data over a wide range o...

  18. Better temperature predictions in geothermal modelling by improved quality of input parameters

    DEFF Research Database (Denmark)

    Fuchs, Sven; Bording, Thue Sylvester; Balling, N.

    2015-01-01

    Thermal modelling is used to examine the subsurface temperature field and geothermal conditions at various scales (e.g. sedimentary basins, deep crust) and in the framework of different problem settings (e.g. scientific or industrial use). In such models, knowledge of rock thermal properties...

  19. Linear and Non-linear Multi-Input Multi-Output Model Predictive Control of Continuous Stirred Tank Reactor

    Directory of Open Access Journals (Sweden)

    Muayad Al-Qaisy

    2015-02-01

    Full Text Available In this article, a multi-input multi-output (MIMO) linear model predictive controller (LMPC) based on a state space model and a nonlinear model predictive controller based on a neural network (NNMPC) are applied to a continuous stirred tank reactor (CSTR). The idea is to have a good control system that will be able to give optimal performance, reject high load disturbance, and track set point changes. In order to study the performance of the two model predictive controllers, a MIMO Proportional-Integral-Derivative (PID) controller strategy is used as benchmark. The LMPC, NNMPC, and PID strategies are used for controlling the residual concentration (CA) and reactor temperature (T). NNMPC shows a superior performance over the LMPC and PID controllers, presenting a smaller overshoot and shorter settling time.

  20. Evaluating meteo marine climatic model inputs for the investigation of coastal hydrodynamics

    Science.gov (United States)

    Bellafiore, D.; Bucchignani, E.; Umgiesser, G.

    2010-09-01

    One of the major aspects discussed in recent work on climate change is how to transfer information from the global scale to the local one. Indeed, the influence of sea level rise and of changes in meteorological conditions due to climate change on strategic areas like the coastal zone underlies the well-known mitigation and risk-assessment plans. From a modeling point of view, the investigation of coastal zone hydrodynamics has been the meeting ground between hydraulic models and ocean models and, in terms of process studies, finite element models have demonstrated their suitability for reproducing complex coastal morphology and hydrodynamic processes at different spatial scales. In this work the connection between two different model families, climate models and the hydrodynamic models usually implemented for process studies, is tested. Together, they can be the most suitable tool for investigating the effects of climate change on coastal systems. A finite element model, SHYFEM (Shallow water Hydrodynamic Finite Element Model), is implemented for the Adriatic Sea to investigate the effect of wind forcing datasets produced by different downscalings of global climate models in terms of surge and its coastal effects. The wind datasets are produced by the regional climate model COSMO-CLM (CIRA) and by the EBU-POM model (Belgrade University), both downscaling from ECHAM4. As a first step, the downscaled wind datasets, which have different spatial resolutions, were analyzed for the period 1960-1990 to compare their capability to reproduce the measured wind statistics in the coastal zone in front of the Venice Lagoon. A particularity of the Adriatic Sea meteo-marine climate is the influence of orography in strengthening winds like the Bora, from the north-east. The increase in spatial resolution permits the more resolved wind dataset to better reproduce meteorology and to provide a more

  1. Modeling spray drift and runoff-related inputs of pesticides to receiving water.

    Science.gov (United States)

    Zhang, Xuyang; Luo, Yuzhou; Goh, Kean S

    2018-03-01

    Pesticides move to surface water via various pathways including surface runoff, spray drift and subsurface flow. Little is known about the relative contributions of surface runoff and spray drift in agricultural watersheds. This study develops a modeling framework to address the contribution of spray drift to the total loadings of pesticides in receiving water bodies. The modeling framework consists of a GIS module for identifying drift potential, the AgDRIFT model for simulating spray drift, and the Soil and Water Assessment Tool (SWAT) for simulating various hydrological and landscape processes including surface runoff and transport of pesticides. The modeling framework was applied to the Orestimba Creek Watershed, California. Monitoring data collected from daily samples were used for model evaluation. Pesticide mass deposition on Orestimba Creek ranged from 0.08 to 6.09% of applied mass. Monitoring data suggest that surface runoff was the major pathway for pesticides entering water bodies, accounting for 76% of the annual loading; the remaining 24% came from spray drift. The results from the modeling framework showed 81 and 19%, respectively, for runoff and spray drift. Spray drift contributed over half of the mass loading during summer months. The slightly lower spray drift contribution predicted by the modeling framework was mainly due to SWAT's under-prediction of pesticide mass loading during summer and over-prediction of the loading during winter. Although the model simulations were associated with various sources of uncertainty, the overall performance of the modeling framework was satisfactory as evaluated by multiple statistics: for simulation of daily flow, the Nash-Sutcliffe Efficiency Coefficient (NSE) ranged from 0.61 to 0.74 and the percent bias (PBIAS) runoff in receiving waters and the design of management practices for mitigating pesticide exposure within a watershed.
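
    The two goodness-of-fit statistics quoted for the daily-flow simulation can be computed as below; the observed and simulated values are toy numbers, not the Orestimba Creek series:

        import numpy as np

        def nse(obs, sim):
            obs, sim = np.asarray(obs), np.asarray(sim)
            return 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)

        def pbias(obs, sim):
            # Sign conventions for PBIAS vary between references.
            obs, sim = np.asarray(obs), np.asarray(sim)
            return 100.0 * np.sum(obs - sim) / np.sum(obs)

        obs = [1.2, 3.4, 2.8, 5.1, 4.0]
        sim = [1.0, 3.0, 3.1, 4.8, 4.4]
        print(f"NSE = {nse(obs, sim):.2f}, PBIAS = {pbias(obs, sim):.1f}%")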

  2. Dependence of Computational Models on Input Dimension: Tractability of Approximation and Optimization Tasks

    Czech Academy of Sciences Publication Activity Database

    Kainen, P.C.; Kůrková, Věra; Sanguineti, M.

    2012-01-01

    Vol. 58, No. 2 (2012), pp. 1203-1214 ISSN 0018-9448 R&D Projects: GA MŠk(CZ) ME10023; GA ČR GA201/08/1744; GA ČR GAP202/11/1368 Grant - others: CNR-AV ČR(CZ-IT) Project 2010-2012 Complexity of Neural-Network and Kernel Computational Models Institutional research plan: CEZ:AV0Z10300504 Keywords: dictionary-based computational models * high-dimensional approximation and optimization * model complexity * polynomial upper bounds Subject RIV: IN - Informatics, Computer Science Impact factor: 2.621, year: 2012

  3. Hidden Markov event sequence models: toward unsupervised functional MRI brain mapping.

    Science.gov (United States)

    Faisan, Sylvain; Thoraval, Laurent; Armspach, Jean-Paul; Foucher, Jack R; Metz-Lutz, Marie-Noëlle; Heitz, Fabrice

    2005-01-01

    Most methods used in functional MRI (fMRI) brain mapping require restrictive assumptions about the shape and timing of the fMRI signal in activated voxels. Consequently, fMRI data may be partially and misleadingly characterized, leading to suboptimal or invalid inference. To limit these assumptions and to capture the broad range of possible activation patterns, a novel statistical fMRI brain mapping method is proposed. It relies on hidden semi-Markov event sequence models (HSMESMs), a special class of hidden Markov models (HMMs) dedicated to the modeling and analysis of event-based random processes. Activation detection is formulated in terms of time coupling between (1) the observed sequence of hemodynamic response onset (HRO) events detected in the voxel's fMRI signal and (2) the "hidden" sequence of task-induced neural activation onset (NAO) events underlying the HROs. Both event sequences are modeled within a single HSMESM. The resulting brain activation model is trained to automatically detect neural activity embedded in the input fMRI data set under analysis. The data sets considered in this article are threefold: synthetic epoch-related, real epoch-related (auditory lexical processing task), and real event-related (oddball detection task) fMRI data sets. Synthetic data: Activation detection results demonstrate the superiority of the HSMESM mapping method with respect to a standard implementation of the statistical parametric mapping (SPM) approach. They are also very close, sometimes equivalent, to those obtained with an "ideal" implementation of SPM in which the activation patterns synthesized are reused for analysis. The HSMESM method appears clearly insensitive to timing variations of the hemodynamic response and exhibits low sensitivity to fluctuations of its shape (unsustained activation during task). Real epoch-related data: HSMESM activation detection results compete with those obtained with SPM, without requiring any prior definition of the expected

  4. Analytical model for advective-dispersive transport involving flexible boundary inputs, initial distributions and zero-order productions

    Science.gov (United States)

    Chen, Jui-Sheng; Li, Loretta Y.; Lai, Keng-Hsin; Liang, Ching-Ping

    2017-11-01

    A novel solution method is presented which leads to an analytical model for advective-dispersive transport in a semi-infinite domain involving a wide spectrum of boundary inputs, initial distributions, and zero-order productions. The solution method applies the Laplace transform in combination with the generalized integral transform technique (GITT) to obtain the generalized analytical solution. Based on this generalized analytical expression, we derive a comprehensive set of special-case solutions for some time-dependent boundary distributions and zero-order productions, described by the Dirac delta, constant, Heaviside, exponentially-decaying, or periodically sinusoidal functions, as well as some position-dependent initial conditions and zero-order productions specified by the Dirac delta, constant, Heaviside, or exponentially-decaying functions. The developed solutions are tested against an analytical solution from the literature. The excellent agreement between the analytical solutions confirms that the new model can serve as an effective tool for investigating transport behaviors under different scenarios. Several examples of applications are given to explore transport behaviors which are rarely noted in the literature. The results show that the concentration waves resulting from the periodically sinusoidal input are sensitive to the dispersion coefficient. The implication of this new finding is that a tracer test with a periodic input may provide additional information for identifying the dispersion coefficient. Moreover, the solution strategy presented in this study can be extended to derive analytical models for handling more complicated problems of solute transport in multi-dimensional media subjected to sequential decay chain reactions, for which analytical solutions are not currently available.
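
    Although the abstract does not reproduce the governing equation, the one-dimensional advection-dispersion equation with a zero-order production term that such models solve takes the familiar form (a sketch, assuming constant coefficients):

        \frac{\partial C}{\partial t} = D \frac{\partial^2 C}{\partial x^2} - v \frac{\partial C}{\partial x} + \gamma(x, t), \qquad 0 \le x < \infty,

    subject to a boundary input C(0, t) = f(t), an initial distribution C(x, 0) = g(x), and a vanishing concentration gradient as x → ∞; the special cases listed above correspond to particular choices of f, g and γ.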

  5. Effective property determination for input to a geostatistical model of regional groundwater flow: Wellenberg T→K

    International Nuclear Information System (INIS)

    Lanyon, G.W.; Marschall, P.; Vomvoris, S.; Jaquet, O.; Mazurek, M.

    1998-01-01

    This paper describes the methodology used to estimate effective hydraulic properties for input into a regional geostatistical model of groundwater flow at the Wellenberg site in Switzerland. The methodology uses a geologically-based discrete fracture network model to calculate effective hydraulic properties for 100m blocks along each borehole. A description of the most transmissive features (Water Conducting Features, or WCFs) in each borehole is used to determine local transmissivity distributions, which are combined with descriptions of WCF extent, orientation and channelling to create fracture network models. WCF geometry depends on the class of WCF; WCF classes are defined for each type of geological structure associated with identified borehole inflows. Local to each borehole, models are conditioned on the observed transmissivity and occurrence of WCFs. Multiple realisations are calculated for each 100m block over approximately 400m of borehole. The results from the numerical upscaling are compared with conservative estimates of hydraulic conductivity. Results from unconditioned models are also compared, to identify the consequences of conditioning and any borehole intervals that appear atypical. An inverse method is also described by which realisations of the geostatistical model can be used to condition discrete fracture network models away from the boreholes. The method can be used as a verification of the modelling approach by predicting data at borehole locations. Applications of the models to the estimation of post-closure repository performance, including cavern inflow and seal zone modelling, are illustrated.

  6. Development of a General Form CO2 and Brine Flux Input Model

    Energy Technology Data Exchange (ETDEWEB)

    Mansoor, K. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Sun, Y. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Carroll, S. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States)

    2014-08-01

    The National Risk Assessment Partnership (NRAP) project is developing a science-based toolset for the quantitative analysis of the potential risks associated with changes in groundwater chemistry from CO2 injection. In order to address uncertainty probabilistically, NRAP is developing efficient, reduced-order models (ROMs) as part of its approach. These ROMs are built from detailed, physics-based process models to provide confidence in the predictions over a range of conditions. The ROMs are designed to reproduce accurately the predictions from the computationally intensive process models at a fraction of the computational time, thereby allowing the utilization of Monte Carlo methods to probe variability in key parameters. This report presents the procedures used to develop a generalized model for CO2 and brine leakage fluxes based on the output of a numerical wellbore simulation. The resulting generalized parameters and ranges reported here will be used for the development of third-generation groundwater ROMs.

  7. Sensitivity of modeled estuarine circulation to spatial and temporal resolution of input meteorological forcing of a cold frontal passage

    Science.gov (United States)

    Weaver, Robert J.; Taeb, Peyman; Lazarus, Steven; Splitt, Michael; Holman, Bryan P.; Colvin, Jeffrey

    2016-12-01

    In this study, a four-member ensemble of meteorological forcing is generated using the Weather Research and Forecasting (WRF) model in order to simulate a frontal passage event that impacted the Indian River Lagoon (IRL) during March 2015. The WRF model is run to provide high- and low-resolution spatial (0.005° and 0.1°) and temporal (30 min and 6 h) input wind and pressure fields. The four-member ensemble is used to force the Advanced Circulation model (ADCIRC) coupled with Simulating Waves Nearshore (SWAN) and compute the hydrodynamic and wave response. Results indicate that increasing the spatial resolution of the meteorological forcing has a greater impact on the results than increasing the temporal resolution in coastal systems like the IRL, where the length scales are smaller than the resolution of the operational meteorological model being used to generate the forecast. Changes in predicted water elevations are due in part to the upwind and downwind behavior of the input wind forcing. The significant wave height is more sensitive to the meteorological forcing, exhibited by greater ensemble spread throughout the simulation. It is important that the land mask seen by the meteorological model is representative of the geography of the coastal estuary as resolved by the hydrodynamic model. As long as the temporal resolution of the wind field captures the bulk characteristics of the frontal passage, computational resources should be focused on ensuring that the meteorological model resolves the spatial complexities, such as the land-water interface, that drive the dynamic downscaling of the winds.

  8. Industrial and ecological cumulative exergy consumption of the United States via the 1997 input-output benchmark model

    International Nuclear Information System (INIS)

    Ukidwe, Nandan U.; Bakshi, Bhavik R.

    2007-01-01

    This paper develops a thermodynamic input-output (TIO) model of the 1997 United States economy that accounts for the flow of cumulative exergy through the 488-sector benchmark economic input-output model in two different ways. Industrial cumulative exergy consumption (ICEC) captures the exergy of all natural resources consumed directly and indirectly by each economic sector, while ecological cumulative exergy consumption (ECEC) also accounts for the exergy consumed in ecological systems for producing each natural resource. Information about the exergy consumed in nature is obtained from the thermodynamics of biogeochemical cycles. As used in this work, ECEC is analogous to the concept of emergy, but does not rely on any of its controversial claims. The TIO model can also account for emissions from each sector, their impact, and the role of labor. The use of consistent exergetic units permits the combination of various streams to define aggregate metrics that may provide insight into the impact of economic sectors on the environment. Accounting for the contribution of natural capital by ECEC has been claimed to permit better representation of the quality of ecosystem goods and services than ICEC. The results of this work are expected to permit evaluation of these claims. If validated, this work is expected to lay the foundation for thermodynamic life cycle assessment, particularly of emerging technologies and with limited information
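
    The input-output algebra underlying such cumulative-exergy accounting fits in a few lines: with direct-requirements matrix A and direct resource-exergy inputs r per unit of sector output, the cumulative intensities e solve e = r + eA. A three-sector toy example (invented coefficients, standing in for the 488-sector tables):

        import numpy as np

        A = np.array([[0.10, 0.05, 0.00],      # hypothetical inter-sector coefficients
                      [0.20, 0.10, 0.30],
                      [0.05, 0.25, 0.10]])
        r = np.array([5.0, 1.0, 0.2])           # hypothetical direct exergy per unit output

        e = r @ np.linalg.inv(np.eye(3) - A)    # cumulative exergy intensity per sector
        print(e)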

  9. Fluorescent-increase kinetics of different fluorescent reporters used for qPCR depend on monitoring chemistry, targeted sequence, type of DNA input and PCR efficiency

    International Nuclear Information System (INIS)

    Ruijter, Jan M.; Hoff, Maurice J. B. van den; Lorenz, Peter; Tuomi, Jari M.; Hecker, Michael

    2014-01-01

    The analysis of quantitative PCR data usually does not take into account the fact that the increase in fluorescence depends on the monitoring chemistry, the input of ds-DNA or ss-cDNA, and the directionality of the targeting of probes or primers. The monitoring chemistries currently available can be categorized into six groups: (A) DNA-binding dyes; (B) hybridization probes; (C) hydrolysis probes; (D) LUX primers; (E) hairpin primers; and (F) the QZyme system. We have determined the kinetics of the increase in fluorescence for each of these groups with respect to the input of both ds-DNA and ss-cDNA. For the latter, we also evaluated mRNA- and cDNA-targeting probes or primers. This analysis revealed three situations. Hydrolysis probes and LUX primers, compared to DNA-binding dyes, do not require a correction of the observed quantification cycle. Hybridization probes and hairpin primers require a correction of −1 cycle (dubbed C-lag), while the QZyme system requires the C-lag correction and an efficiency-dependent C-shift correction. A PCR efficiency value can be derived from the relative increase in fluorescence in the exponential phase of the amplification curve for all monitoring chemistries. In the case of hydrolysis probes, LUX primers and hairpin primers, however, this should be done after cycle 12, and for the QZyme system after cycle 19, to keep the overestimation of the PCR efficiency below 0.5%. (author)
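
    A sketch of deriving a PCR efficiency value from the relative fluorescence increase in the exponential phase, as described above; the amplification values are synthetic, generated from a known efficiency so the recovery can be checked:

        import numpy as np

        cycles = np.arange(13, 19)                 # window after cycle 12, per the abstract
        f0, true_eff = 1e-6, 0.93
        fluor = f0 * (1.0 + true_eff) ** cycles    # idealized exponential-phase fluorescence

        slope = np.polyfit(cycles, np.log(fluor), 1)[0]   # slope = ln(1 + E)
        print(f"estimated efficiency: {np.exp(slope) - 1.0:.2f}")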

  10. Comparison of squashing and self-consistent input-output models of quantum feedback

    Science.gov (United States)

    Peřinová, V.; Lukš, A.; Křepelka, J.

    2018-03-01

    The paper (Yanagisawa and Hope, 2010) opens with two ways of analysing a measurement-based quantum feedback. The feedback scheme includes, along with the homodyne detector, a modulator and a beamsplitter, which does not enable one to extract the nonclassical field. In the present scheme, the beamsplitter is replaced by a quantum noise evader, which makes it possible to extract the nonclassical field. We re-approach the comparison of two models related to the same scheme. The first admits that, in the feedback loop, unusual commutation relations hold between the photon annihilation and creation operators; as a consequence, squashing of the light occurs in the feedback loop. The second arrives at a description of the feedback loop via unitary transformations. But the unitary transformation which describes the modulator changes even the annihilation operator of the mode which passes by the modulator, which is not natural. The first model could be called the "squashing model" and the second the "self-consistent model". Although the predictions of the two models differ only a little and both ways of analysis have their advantages, they also have their drawbacks, and further investigation is possible.

  11. Modeling microstructure of incudostapedial joint and the effect on cochlear input

    Science.gov (United States)

    Gan, Rong Z.; Wang, Xuelin

    2015-12-01

    The incudostapedial joint (ISJ) connects the incus to the stapes in the human ear and plays an important role in sound transmission from the tympanic membrane (TM) to the cochlea. The ISJ is a synovial joint composed of articular cartilage on the lenticular process and stapes head, with synovial fluid between them. However, there has been no study of how the synovial ISJ affects middle ear and cochlear functions. Recently, we developed a 3-dimensional finite element (FE) model of the synovial ISJ and connected it to our comprehensive FE model of the human ear. The motions of the TM, stapes footplate, and basilar membrane and the pressures in the scala vestibuli and scala tympani were derived across frequencies and compared with experimental measurements. Results show that the synovial ISJ affects sound transmission into the cochlea and that the frequency-dependent viscoelastic behavior of the ISJ protects the cochlea from high-intensity sound.

  12. Simplified models for new physics in vector boson scattering. Input for Snowmass 2013

    International Nuclear Information System (INIS)

    Reuter, Juergen; Kilian, Wolfgang; Sekulla, Marco

    2013-07-01

    In this contribution to the Snowmass process 2013 we give a brief review of how new physics could enter the electroweak (EW) sector of the Standard Model (SM). This new physics, if it is directly accessible at low energies, can be parameterized by explicit resonances having certain quantum numbers. The extreme case is the decoupling limit, where those resonances are very heavy and leave only traces in the form of deviations in the SM couplings. Translations are given into higher-dimensional operators leading to such deviations. As long as such resonances are introduced without a UV-complete theory behind them, these models suffer from unitarity violation of perturbative scattering amplitudes. We show explicitly how theoretically sane descriptions can be achieved by using a unitarization prescription that allows a correct description of such a resonance without specifying a UV-complete model.

  13. Simplified models for new physics in vector boson scattering. Input for Snowmass 2013

    Energy Technology Data Exchange (ETDEWEB)

    Reuter, Juergen [Deutsches Elektronen-Synchrotron (DESY), Hamburg (Germany); Kilian, Wolfgang; Sekulla, Marco [Siegen Univ. (Germany). Theoretische Physik I

    2013-07-15

    In this contribution to the Snowmass process 2013 we give a brief review of how new physics could enter the electroweak (EW) sector of the Standard Model (SM). This new physics, if it is directly accessible at low energies, can be parameterized by explicit resonances having certain quantum numbers. The extreme case is the decoupling limit, where those resonances are very heavy and leave only traces in the form of deviations in the SM couplings. Translations are given into higher-dimensional operators leading to such deviations. As long as such resonances are introduced without a UV-complete theory behind them, these models suffer from unitarity violation of perturbative scattering amplitudes. We show explicitly how theoretically sane descriptions can be achieved by using a unitarization prescription that allows a correct description of such a resonance without specifying a UV-complete model.

  14. SKIRT: The design of a suite of input models for Monte Carlo radiative transfer simulations

    Science.gov (United States)

    Baes, M.; Camps, P.

    2015-09-01

    The Monte Carlo method is the most popular technique for performing radiative transfer simulations in a general 3D geometry. The algorithms behind, and acceleration techniques for, Monte Carlo radiative transfer are discussed extensively in the literature, and many different Monte Carlo codes are publicly available. By contrast, the design of a suite of components that can be used for the distribution of sources and sinks in radiative transfer codes has received very little attention. The availability of such models, with different degrees of complexity, has many benefits. For example, they can serve as toy models to test new physical ingredients, or as parameterised models for inverse radiative transfer fitting. For 3D Monte Carlo codes, this requires algorithms to efficiently generate random positions from 3D density distributions. We describe the design of a flexible suite of components for the Monte Carlo radiative transfer code SKIRT. The design is based on a combination of basic building blocks (which can be either analytical toy models or numerical models defined on grids or a set of particles) and the extensive use of decorators that combine and alter these building blocks into more complex structures. For a number of decorators, e.g. those that add spiral structure or clumpiness, we provide a detailed description of the algorithms that can be used to generate random positions. Advantages of this decorator-based design include code transparency, the avoidance of code duplication, and an increase in code maintainability. Moreover, since decorators can be chained without problems, very complex models can easily be constructed out of simple building blocks. Finally, based on a number of test simulations, we demonstrate that our design using customised random position generators is superior to a simpler design based on a generic black-box random position generator.

  15. SPY: a new scission-point model based on microscopic inputs to predict fission fragment properties

    Energy Technology Data Exchange (ETDEWEB)

    Panebianco, Stefano; Lemaître, Jean-François; Sida, Jean-Luc [CEA Centre de Saclay, Gif-sur-Yvette (France); Dubray, Noël [CEA, DAM, DIF, Arpajon (France); Goriely, Stéphane [Institut d'Astronomie et d'Astrophysique, Université Libre de Bruxelles, Brussels (Belgium)

    2014-07-01

    Despite the difficulty of describing the whole fission dynamics, the main fragment characteristics can be determined in a static approach based on a so-called scission-point model. Within this framework, a new Scission-Point model for the calculation of fission fragment Yields (SPY) has been developed. This model, initially based on the approach developed by Wilkins in the late seventies, consists of performing a static energy balance at scission, where the two fragments are supposed to be completely separated so that their macroscopic properties (mass and charge) can be considered fixed. Given the system state density, averaged quantities such as mass and charge yields and mean kinetic and excitation energies can then be extracted in the framework of a microcanonical statistical description. The main advantage of the SPY model is the introduction of one of the most up-to-date microscopic descriptions of the nucleus for the individual energy of each fragment and, in the future, for their state density. These quantities are obtained in the framework of HFB calculations using the Gogny nucleon-nucleon interaction, ensuring the overall coherence of the model. Starting from a description of the SPY model and its main features, a comparison between SPY predictions and experimental data is discussed for some specific cases, from light nuclei around mercury to the major actinides. Moreover, extensive predictions over the whole chart of nuclides are discussed, with particular attention to their implications for stellar nucleosynthesis. Finally, future developments, mainly concerning the introduction of microscopic state densities, are briefly discussed. (author)

  16. A Hierarchical multi-input and output Bi-GRU Model for Sentiment Analysis on Customer Reviews

    Science.gov (United States)

    Zhang, Liujie; Zhou, Yanquan; Duan, Xiuyu; Chen, Ruiqi

    2018-03-01

    Multi-label sentiment classification on customer reviews is a practical challenging task in Natural Language Processing. In this paper, we propose a hierarchical multi-input and output model based bi-directional recurrent neural network, which both considers the semantic and lexical information of emotional expression. Our model applies two independent Bi-GRU layer to generate part of speech and sentence representation. Then the lexical information is considered via attention over output of softmax activation on part of speech representation. In addition, we combine probability of auxiliary labels as feature with hidden layer to capturing crucial correlation between output labels. The experimental result shows that our model is computationally efficient and achieves breakthrough improvements on customer reviews dataset.

  17. Modeling Interdependent and Periodic Real-World Action Sequences

    Science.gov (United States)

    Kurashima, Takeshi; Althoff, Tim; Leskovec, Jure

    2018-01-01

    Mobile health applications, including those that track activities such as exercise, sleep, and diet, are becoming widely used. Accurately predicting human actions in the real world is essential for targeted recommendations that could improve our health and for personalization of these applications. However, making such predictions is extremely difficult due to the complexities of human behavior, which consists of a large number of potential actions that vary over time, depend on each other, and are periodic. Previous work has not jointly modeled these dynamics and has largely focused on item consumption patterns instead of broader types of behaviors such as eating, commuting or exercising. In this work, we develop a novel statistical model, called TIPAS, for Time-varying, Interdependent, and Periodic Action Sequences. Our approach is based on personalized, multivariate temporal point processes that model time-varying action propensities through a mixture of Gaussian intensities. Our model captures short-term and long-term periodic interdependencies between actions through Hawkes process-based self-excitations. We evaluate our approach on two activity logging datasets comprising 12 million real-world actions (e.g., eating, sleep, and exercise) taken by 20 thousand users over 17 months. We demonstrate that our approach allows us to make successful predictions of future user actions and their timing. Specifically, TIPAS improves predictions of actions, and their timing, over existing methods across multiple datasets by up to 156%, and up to 37%, respectively. Performance improvements are particularly large for relatively rare and periodic actions such as walking and biking, improving over baselines by up to 256%. This demonstrates that explicit modeling of dependencies and periodicities in real-world behavior enables successful predictions of future actions, with implications for modeling human behavior, app personalization, and targeting of health interventions.
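
    The self-excitation ingredient named above is the classic Hawkes intensity; a sketch with illustrative parameters (TIPAS itself combines such terms with time-varying Gaussian mixture intensities):

        import math

        def hawkes_intensity(t, history, mu=0.1, alpha=0.5, beta=1.0):
            """lambda(t) = mu + sum over past events t_i < t of alpha * exp(-beta * (t - t_i))"""
            return mu + sum(alpha * math.exp(-beta * (t - ti)) for ti in history if ti < t)

        events = [1.0, 1.4, 3.2]        # hypothetical past action times (hours)
        print(hawkes_intensity(3.5, events))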

  18. Progress on reference input parameter library for nuclear model calculations of nuclear data (III)

    International Nuclear Information System (INIS)

    Su Zongdi; Liu Jianfeng; Huang Zhongfu

    1997-01-01

    A new set of average neutron resonance spacings D₀ and neutron strength functions S₀ for 309 nuclei was reestimated on the basis of the resolved resonance parameters reevaluated from BNL-325, ENDF/B-6, JEF-2, and JENDL-3, and the cumulative numbers N₀ of low-lying levels for 344 nuclei were also reevaluated by means of histograms. Three sets of level density parameters, for the Gilbert-Cameron (GC) formula, the back-shifted Fermi gas model (BS) and the generalized superfluid model (GSM), have been reestimated by fitting the D₀ and N₀ values of CENPL.LRD-2

  19. Estimating severity of sideways fall using a generic multi linear regression model based on kinematic input variables.

    Science.gov (United States)

    van der Zijden, A M; Groen, B E; Tanck, E; Nienhuis, B; Verdonschot, N; Weerdesteyn, V

    2017-03-21

    Many research groups have studied fall impact mechanics to understand how fall severity can be reduced to prevent hip fractures. Yet, direct impact force measurements with force plates are restricted to a very limited repertoire of experimental falls. The purpose of this study was to develop a generic model for estimating hip impact forces (i.e. fall severity) in in vivo sideways falls without the use of force plates. Twelve experienced judokas performed sideways Martial Arts (MA) and Block ('natural') falls on a force plate, both with and without a mat on top. Data were analyzed to determine the hip impact force and to derive 11 selected (subject-specific and kinematic) variables. Falls from kneeling height were used to perform a stepwise regression procedure to assess the effects of these input variables and build the model. The final model includes four input variables, involving one subject-specific measure and three kinematic variables: maximum upper body deceleration, body mass, shoulder angle at the instant of 'maximum impact' and maximum hip deceleration. The results showed that estimated and measured hip impact forces were linearly related (explained variances ranging from 46 to 63%). Hip impact forces of MA falls onto the mat from a standing position (3650±916N) estimated by the final model were comparable with measured values (3698±689N), even though these data were not used for training the model. In conclusion, a generic linear regression model was developed that enables the assessment of fall severity through kinematic measures of sideways falls, without using force plates.
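
    The model-fitting step is ordinary multiple linear regression; a sketch with scikit-learn in which the simulated measurements merely mimic the structure of the four input variables and are not the judoka data:

        import numpy as np
        from sklearn.linear_model import LinearRegression

        rng = np.random.default_rng(1)
        n = 60
        X = np.column_stack([
            rng.normal(40, 8, n),       # max upper-body deceleration (m/s^2)
            rng.normal(75, 10, n),      # body mass (kg)
            rng.normal(30, 12, n),      # shoulder angle at 'maximum impact' (deg)
            rng.normal(55, 9, n),       # max hip deceleration (m/s^2)
        ])
        force = 30 * X[:, 0] + 25 * X[:, 1] + 8 * X[:, 2] + 15 * X[:, 3] \
                + rng.normal(0, 150, n)                  # synthetic impact force (N)

        fit = LinearRegression().fit(X, force)
        print("explained variance (R^2):", round(fit.score(X, force), 2))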

  20. GALEV evolutionary synthesis models – I. Code, input physics and web interface

    NARCIS (Netherlands)

    Kotulla, R.; Fritze, U.; Weilbacher, P.; Anders, P.

    2009-01-01

    GALEV (GALaxy EVolution) evolutionary synthesis models describe the evolution of stellar populations in general, of star clusters as well as of galaxies, both in terms of resolved stellar populations and of integrated light properties, over cosmological time-scales of ≥13 Gyr from the onset of star formation.

  1. Model-based extraction of input and organ functions in dynamic scintigraphic imaging

    Czech Academy of Sciences Publication Activity Database

    Tichý, Ondřej; Šmídl, Václav; Šámal, M.

    2016-01-01

    Vol. 4, No. 3-4 (2016), pp. 135-145 ISSN 2168-1171 R&D Projects: GA ČR GA13-29225S Institutional support: RVO:67985556 Keywords: blind source separation * convolution * dynamic medical imaging * compartment modelling Subject RIV: BB - Applied Statistics, Operational Research http://library.utia.cas.cz/separaty/2014/AS/tichy-0428540.pdf

  2. Modeling chronic diseases: the diabetes module. Justification of (new) input data

    NARCIS (Netherlands)

    Baan CA; Bos G; Jacobs-van der Bruggen MAM; PZO

    2005-01-01

    The RIVM chronic disease model (CDM) is an instrument designed to estimate the effects of changes in the prevalence of risk factors for chronic diseases on disease burden and mortality. To enable the computation of the effects of various diabetes prevention scenarios, the CDM has been updated and

  3. The effective temperature of the DBV's, and the sensitivity of DB model atmospheres to input physics

    International Nuclear Information System (INIS)

    Thejll, P.; Delaware Univ., Newark, DE; Vennes, S.; Shipman, H.L.

    1990-01-01

    A new grid of DB models is applied to the problem of the DBV temperatures and the DB gap. It is found that the DBV instability strip lies lower than previously thought. This has consequences for the calibration of mixing-length theories and for the reality of the DB gap. The DBV GD358 is discussed in detail. (orig.)

  4. Evapotranspiration and Precipitation inputs for SWAT model using remotely sensed observations

    Science.gov (United States)

    The ability of numerical models, such as the Soil and Water Assessment Tool (or SWAT), to accurately represent the partition of the water budget and describe sediment loads and other pollutant conditions related to water quality strongly depends on how well spatiotemporal variability in precipitatio...

  5. Treatment of input uncertainty in hydrologic modeling: Doing hydrology backward with Markov chain Monte Carlo simulation

    NARCIS (Netherlands)

    Vrugt, J.A.; Braak, ter C.J.F.; Clark, M.P.; Hyman, J.M.; Robinson, B.A.

    2008-01-01

    There is increasing consensus in the hydrologic literature that an appropriate framework for streamflow forecasting and simulation should include explicit recognition of forcing and parameter and model structural error. This paper presents a novel Markov chain Monte Carlo (MCMC) sampler, entitled

  6. Early neonatal loss of inhibitory synaptic input to the spinal motor neurons confers spina bifida-like leg dysfunction in a chicken model

    Directory of Open Access Journals (Sweden)

    Md. Sakirul Islam Khan

    2017-12-01

    Full Text Available Spina bifida aperta (SBA), one of the most common congenital malformations, causes lifelong neurological complications, particularly in terms of motor dysfunction. Fetuses with SBA exhibit voluntary leg movements in utero and during early neonatal life, but these disappear within the first few weeks after birth. However, the pathophysiological sequence underlying such motor dysfunction remains unclear. Additionally, because important insights have yet to be obtained from human cases, an appropriate animal model is essential. Here, we investigated the neuropathological mechanisms of progression of SBA-like motor dysfunctions in a neural tube surgery-induced chicken model of SBA at different pathogenesis points ranging from embryonic to posthatch ages. We found that chicks with SBA-like features lose voluntary leg movements and subsequently exhibit lower-limb paralysis within the first 2 weeks after hatching, coinciding with the synaptic change-induced disruption of spinal motor networks at the site of the SBA lesion in the lumbosacral region. Such synaptic changes reduced the ratio of inhibitory-to-excitatory inputs to motor neurons and were associated with a drastic loss of γ-aminobutyric acid (GABAergic) inputs and upregulation of the cholinergic activities of motor neurons. Furthermore, most of the neurons in ventral horns, which appeared to be suffering from excitotoxicity during the early postnatal days, underwent apoptosis. However, the triggers of cellular abnormalization and neurodegenerative signaling were evident in the middle- to late-gestational stages, probably attributable to the amniotic fluid-induced in ovo milieu. In conclusion, we found that early neonatal loss of neurons in the ventral horn of the exposed spinal cord affords novel insights into the pathophysiology of SBA-like leg dysfunction.

  7. A stock-flow consistent input-output model with applications to energy price shocks, interest rates, and heat emissions

    Science.gov (United States)

    Berg, Matthew; Hartley, Brian; Richters, Oliver

    2015-01-01

    By synthesizing stock-flow consistent models, input-output models, and aspects of ecological macroeconomics, a method is developed to simultaneously model monetary flows through the financial system, flows of produced goods and services through the real economy, and flows of physical materials through the natural environment. This paper highlights the linkages between the physical environment and the economic system by emphasizing the role of the energy industry. A conceptual model is developed in general form with an arbitrary number of sectors, while emphasizing connections with the agent-based, econophysics, and complexity economics literature. First, we use the model to challenge claims that 0% interest rates are a necessary condition for a stationary economy and conduct a stability analysis within the parameter space of interest rates and consumption parameters of an economy in stock-flow equilibrium. Second, we analyze the role of energy price shocks in contributing to recessions, incorporating several propagation and amplification mechanisms. Third, implied heat emissions from energy conversion and the effect of anthropogenic heat flux on climate change are considered in light of a minimal single-layer atmosphere climate model, although the model is only implicitly, not explicitly, linked to the economic model.

  8. Predictive Place-Cell Sequences for Goal-Finding Emerge from Goal Memory and the Cognitive Map: A Computational Model

    Directory of Open Access Journals (Sweden)

    Lorenz Gönner

    2017-10-01

    Full Text Available Hippocampal place-cell sequences observed during awake immobility often represent previous experience, suggesting a role in memory processes. However, recent reports of goals being overrepresented in sequential activity suggest a role in short-term planning, although a detailed understanding of the origins of hippocampal sequential activity and of its functional role is still lacking. In particular, it is unknown which mechanism could support efficient planning by generating place-cell sequences biased toward known goal locations, in an adaptive and constructive fashion. To address these questions, we propose a model of spatial learning and sequence generation as interdependent processes, integrating cortical contextual coding, synaptic plasticity and neuromodulatory mechanisms into a map-based approach. Following goal learning, sequential activity emerges from continuous attractor network dynamics biased by goal memory inputs. We apply Bayesian decoding on the resulting spike trains, allowing a direct comparison with experimental data. Simulations show that this model (1) explains the generation of never-experienced sequence trajectories in familiar environments, without requiring virtual self-motion signals, (2) accounts for the bias in place-cell sequences toward goal locations, (3) highlights their utility in flexible route planning, and (4) provides specific testable predictions.
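
    The Bayesian decoding step applied to the simulated spike trains is standard; a sketch for a one-dimensional track with toy Gaussian tuning curves and a made-up spike-count vector, not the model's output:

        import numpy as np

        positions = np.linspace(0, 1, 100)           # candidate locations on the track
        centers = np.linspace(0, 1, 8)               # 8 Gaussian place fields
        rates = 20 * np.exp(-((positions[:, None] - centers) ** 2) / (2 * 0.05 ** 2))

        tau = 0.1                                    # decoding window (s)
        counts = np.array([0, 1, 4, 2, 0, 0, 0, 0])  # observed spike counts per cell

        # Poisson log-posterior (flat prior): sum_i n_i log f_i(x) - tau sum_i f_i(x)
        log_post = counts @ np.log(rates + 1e-9).T - tau * rates.sum(axis=1)
        print("decoded position:", positions[np.argmax(log_post)])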

  9. PLEXOS Input Data Generator

    Energy Technology Data Exchange (ETDEWEB)

    2017-02-01

    The PLEXOS Input Data Generator (PIDG) is a tool that enables PLEXOS users to better version their data, automate data processing, collaborate in developing inputs, and transfer data between different production cost modeling and other power systems analysis software. PIDG can process data in a generalized format from multiple input sources, including CSV files, PostgreSQL databases, and PSS/E .raw files, and writes it to an Excel file that can be imported into PLEXOS with only limited manual intervention.
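
    A sketch of the kind of transformation PIDG automates, reading tabular generator records and writing a spreadsheet for import; the file names, column names, and property labels here are hypothetical, not PIDG's actual schema:

        import pandas as pd

        # Stand-in generator records (PIDG's real inputs are CSVs, databases, .raw files).
        pd.DataFrame({"name": ["gen_a", "gen_b"],
                      "max_mw": [450.0, 120.0]}).to_csv("generators.csv", index=False)

        gens = pd.read_csv("generators.csv")
        objects = pd.DataFrame({
            "class": "Generator",
            "object": gens["name"],
            "property": "Max Capacity",
            "value": gens["max_mw"],
        })
        objects.to_excel("plexos_input.xlsx", index=False)   # requires openpyxl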

  10. Output from Statistical Predictive Models as Input to eLearning Dashboards

    Directory of Open Access Journals (Sweden)

    Marlene A. Smith

    2015-06-01

    Full Text Available We describe how statistical predictive models might play an expanded role in educational analytics by giving students automated, real-time information about what their current performance means for eventual success in eLearning environments. We discuss how an online messaging system might tailor information to individual students using predictive analytics. The proposed system would be data-driven and quantitative; e.g., a message might furnish the probability that a student will successfully complete the certificate requirements of a massive open online course. Repeated messages would prod underperforming students and alert instructors to those in need of intervention. Administrators responsible for accreditation or outcomes assessment would have ready documentation of learning outcomes and actions taken to address unsatisfactory student performance. The article’s brief introduction to statistical predictive models sets the stage for a description of the messaging system. Resources and methods needed to develop and implement the system are discussed.

  11. Modelling Effects on Grid Cells of Sensory Input During Self-motion

    Science.gov (United States)

    2016-04-20

    individual oscillators. These oscillatory interference models effectively simulate the theta rhythmic firing of grid cells (Hafting et al. 2008; Jeewajee...et al. 2008; Brandon et al. 2011; Koenig et al. 2011; Stensola et al. 2012), and the changes in rhythmic firing frequency based on running speed and...Fiete, 2009; Couey et al. 2013), and equate head direction with movement direction. However, an analysis of behavioural data shows that the head

  12. A normalization model suggests that attention changes the weighting of inputs between visual areas.

    Science.gov (United States)

    Ruff, Douglas A; Cohen, Marlene R

    2017-05-16

    Models of divisive normalization can explain the trial-averaged responses of neurons in sensory, association, and motor areas under a wide range of conditions, including how visual attention changes the gains of neurons in visual cortex. Attention, like other modulatory processes, is also associated with changes in the extent to which pairs of neurons share trial-to-trial variability. We showed recently that in addition to decreasing correlations between similarly tuned neurons within the same visual area, attention increases correlations between neurons in primary visual cortex (V1) and the middle temporal area (MT) and that an extension of a classic normalization model can account for this correlation increase. One of the benefits of having a descriptive model that can account for many physiological observations is that it can be used to probe the mechanisms underlying processes such as attention. Here, we use electrical microstimulation in V1 paired with recording in MT to provide causal evidence that the relationship between V1 and MT activity is nonlinear and is well described by divisive normalization. We then use the normalization model and recording and microstimulation experiments to show that the attention dependence of V1-MT correlations is better explained by a mechanism in which attention changes the weights of connections between V1 and MT than by a mechanism that modulates responses in either area. Our study shows that normalization can explain interactions between neurons in different areas and provides a framework for using multiarea recording and stimulation to probe the neural mechanisms underlying neuronal computations.
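
    In generic form, the divisive normalization invoked above can be written as follows (a sketch; the paper's model additionally describes trial-to-trial covariability):

        R_j^{MT} = \gamma \frac{\sum_i w_{ij} R_i^{V1}}{\sigma + \sum_i R_i^{V1}}

    where, on the account supported by the microstimulation experiments, attention acts on the interareal weights w_{ij} rather than multiplicatively on the responses of either area alone.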

  13. Fingerprints of four crop models as affected by soil input data aggregation

    Czech Academy of Sciences Publication Activity Database

    Angulo, C.; Gaiser, T.; Rötter, R. P.; Børgesen, C. D.; Hlavinka, Petr; Trnka, Miroslav; Ewert, F.

    2014-01-01

    Vol. 61, Nov 2014, pp. 35-48 ISSN 1161-0301 R&D Projects: GA MŠk(CZ) EE2.3.20.0248; GA MŠk(CZ) EE2.4.31.0056; GA MZe QJ1310123 Institutional support: RVO:67179843 Keywords: crop model * soil data * spatial resolution * yield distribution * aggregation Subject RIV: EH - Ecology, Behaviour Impact factor: 2.704, year: 2014

  14. Characteristic 'fingerprints' of crop model responses to weather input data at different spatial resolutions

    Czech Academy of Sciences Publication Activity Database

    Angulo, C.; Rotter, R.; Trnka, Miroslav; Pirttioja, N. K.; Gaiser, T.; Hlavinka, Petr; Ewert, F.

    2013-01-01

    Vol. 49, Aug 2013, pp. 104-114 ISSN 1161-0301 R&D Projects: GA MŠk(CZ) EE2.3.20.0248; GA MŠk(CZ) EE2.4.31.0056 Institutional support: RVO:67179843 Keywords: Crop model * Weather data resolution * Aggregation * Yield distribution Subject RIV: EH - Ecology, Behaviour Impact factor: 2.918, year: 2013

  15. Errors in estimation of the input signal for integrate-and-fire neuronal models

    Czech Academy of Sciences Publication Activity Database

    Bibbona, E.; Lánský, Petr; Sacerdote, L.; Sirovich, R.

    2008-01-01

    Vol. 78, No. 1 (2008), pp. 1-10 ISSN 1539-3755 R&D Projects: GA MŠk(CZ) LC554; GA AV ČR(CZ) 1ET400110401 Grant - others: EC(XE) MIUR PRIN 2005 Institutional research plan: CEZ:AV0Z50110509 Keywords: parameter estimation * stochastic neuronal model Subject RIV: BO - Biophysics Impact factor: 2.508, year: 2008 http://link.aps.org/abstract/PRE/v78/e011918

  16. Satellite, climatological, and theoretical inputs for modeling of the diurnal cycle of fire emissions

    Science.gov (United States)

    Hyer, E. J.; Reid, J. S.; Schmidt, C. C.; Giglio, L.; Prins, E.

    2009-12-01

    The diurnal cycle of fire activity is crucial for accurate simulation of atmospheric effects of fire emissions, especially at finer spatial and temporal scales. Estimating diurnal variability in emissions is also a critical problem for construction of emissions estimates from multiple sensors with variable coverage patterns. An optimal diurnal emissions estimate will use as much information as possible from satellite fire observations, compensate known biases in those observations, and use detailed theoretical models of the diurnal cycle to fill in missing information. As part of ongoing improvements to the Fire Location and Monitoring of Burning Emissions (FLAMBE) fire monitoring system, we evaluated several different methods of integrating observations with different temporal sampling. We used geostationary fire detections from WF_ABBA, fire detection data from MODIS, empirical diurnal cycles from TRMM, and simple theoretical diurnal curves based on surface heating. Our experiments integrated these data in different combinations to estimate the diurnal cycles of emissions for each location and time. Hourly emissions estimates derived using these methods were tested using an aerosol transport model. We present results of this comparison, and discuss the implications of our results for the broader problem of multi-sensor data fusion in fire emissions modeling.

  17. Optimization modeling of U.S. renewable electricity deployment using local input variables

    Science.gov (United States)

    Bernstein, Adam

    For the past five years, state Renewable Portfolio Standard (RPS) laws have been a primary driver of renewable electricity (RE) deployments in the United States. However, four key trends currently developing, namely (i) lower natural gas prices, (ii) slower growth in electricity demand, (iii) challenges of balancing intermittent RE within the U.S. transmission regions, and (iv) fewer economical sites for RE development, may limit the efficacy of RPS laws over the remainder of the current RPS statutes' lifetime. An outsized proportion of U.S. RE build occurs in a small number of favorable locations, increasing the effects of these variables on marginal RE capacity additions. A state-by-state analysis is necessary to study the U.S. electric sector and to generate technology-specific generation forecasts. We used LP optimization modeling similar to the National Renewable Energy Laboratory (NREL) Renewable Energy Development System (ReEDS) to forecast RE deployment across the 8 U.S. states with the largest electricity load, and found state-level RE projections to Year 2031 significantly lower than those implied in the Energy Information Administration (EIA) 2013 Annual Energy Outlook forecast. Additionally, the majority of states do not achieve their RPS targets in our forecast. Combined with the tendency of prior research and RE forecasts to focus on larger national and global scale models, we posit that further bottom-up state and local analysis is needed for more accurate policy assessment, forecasting, and ongoing revision of variables as parameter values evolve through time. Current optimization software eliminates much of the need for algorithm coding and programming, allowing for rapid model construction and updating across many customized state and local RE parameters. Further, our results can be tested against the empirical outcomes that will be observed over the coming years, and the forecast deviation from the actuals can be attributed to discrete parameter

  18. TRANSIT: model for providing generic transportation input for preliminary siting analysis

    International Nuclear Information System (INIS)

    McNair, G.W.; Cashwell, J.W.

    1985-02-01

    To assist the US Department of Energy's efforts in potential facility site screening in the nuclear waste management program, a computerized model, TRANSIT, is being developed. Utilizing existing data on the location and inventory characteristics of spent nuclear fuel at reactor sites, TRANSIT derives isopleths of transportation mileage, costs, risks and fleet requirements for shipments to storage sites and/or repository sites. This technique provides a graphic, first-order method for use by the Department in future site screening efforts. 2 refs

  19. Robust Model Predictive Control of Networked Control Systems under Input Constraints and Packet Dropouts

    Directory of Open Access Journals (Sweden)

    Deyin Yao

    2014-01-01

    Full Text Available This paper deals with the problem of robust model predictive control (RMPC) for a class of linear time-varying systems with constraints and data losses. We take polytopic uncertainties into account to describe the uncertain systems. First, we design a robust state observer using linear matrix inequality (LMI) constraints so that the original system state can be tracked. Second, the MPC gain is calculated by minimizing an upper bound on the infinite-horizon robust performance objective, expressed in terms of LMI conditions. The method of robust MPC and state-observer design is illustrated by a numerical example.
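
    The observer and gain synthesis in the paper are posed as LMI problems; as a minimal sketch of that machinery, the following checks whether a fixed candidate gain K robustly stabilizes a two-vertex polytopic system by searching for a common quadratic Lyapunov matrix with cvxpy. The system matrices and the gain are illustrative placeholders, not the paper's design.

```python
# Robust-stability LMI check for a polytopic system x+ = (A_i + B K) x:
# find P = P' > 0 with (A_i + B K)' P (A_i + B K) - P < 0 at every vertex i.
import numpy as np
import cvxpy as cp

A1 = np.array([[1.0, 0.1], [0.0, 0.9]])   # vertex 1 (illustrative)
A2 = np.array([[1.0, 0.1], [0.0, 1.1]])   # vertex 2 (illustrative)
B = np.array([[0.0], [0.1]])
K = np.array([[-0.5, -2.0]])              # candidate gain (illustrative)

n = 2
P = cp.Variable((n, n), symmetric=True)
eps = 1e-6
cons = [P >> eps * np.eye(n)]
for A in (A1, A2):
    Acl = A + B @ K
    cons.append(Acl.T @ P @ Acl - P << -eps * np.eye(n))

prob = cp.Problem(cp.Minimize(0), cons)
prob.solve(solver=cp.SCS)
print(prob.status)  # 'optimal' => a common Lyapunov matrix exists
```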

  20. A Novel Approach to Develop the Lower Order Model of Multi-Input Multi-Output System

    Science.gov (United States)

    Rajalakshmy, P.; Dharmalingam, S.; Jayakumar, J.

    2017-10-01

    A mathematical model is a virtual entity that uses mathematical language to describe the behavior of a system. Mathematical models are used particularly in the natural sciences and engineering disciplines such as physics, biology, and electrical engineering, as well as in the social sciences such as economics, sociology, and political science. Physicists, engineers, computer scientists, and economists use mathematical models most extensively. With the advent of high-performance processors and advanced mathematical computation, it is possible to develop high-performing simulators for complicated Multi-Input Multi-Output (MIMO) systems such as quadruple-tank systems, aircraft, and boilers. This paper presents the development of the mathematical model of a 500 MW utility boiler, which is a highly complex system. A synergistic combination of operational experience, system identification, and a lower-order modeling philosophy has been used effectively to develop a simplified but accurate model of the circulation system of a utility boiler, which is a MIMO system. The results obtained are found to be in good agreement with the physics of the process and with the results obtained through the design procedure. The model obtained can be used directly for control-system studies and to realize hardware simulators for boiler testing and operator training.
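
    The paper's reduction leans on operational experience and system identification; as a generic, hedged illustration of lower-order modeling, the sketch below applies balanced truncation to an illustrative stable MIMO state-space system (not the boiler model) using only numpy/scipy.

```python
# Balanced truncation: balance the controllability/observability Gramians,
# then keep the states with the largest Hankel singular values.
import numpy as np
from scipy.linalg import solve_continuous_lyapunov, cholesky, svd

A = np.diag([-1.0, -2.0, -20.0, -40.0])                  # illustrative system
B = np.array([[1.0, 0.0], [0.0, 1.0], [0.5, 0.2], [0.1, 0.4]])
C = np.array([[1.0, 0.5, 0.1, 0.0], [0.0, 1.0, 0.0, 0.1]])

# Gramians: A Wc + Wc A' = -B B'   and   A' Wo + Wo A = -C' C
Wc = solve_continuous_lyapunov(A, -B @ B.T)
Wo = solve_continuous_lyapunov(A.T, -C.T @ C)

R = cholesky(Wc, lower=True)
U, s, _ = svd(R.T @ Wo @ R)        # sqrt(s) are the Hankel singular values
T = R @ U @ np.diag(s ** -0.25)    # balancing transformation
Tinv = np.diag(s ** 0.25) @ U.T @ np.linalg.inv(R)

r = 2                              # reduced order
Ab, Bb, Cb = Tinv @ A @ T, Tinv @ B, C @ T
Ar, Br, Cr = Ab[:r, :r], Bb[:r, :], Cb[:r, :]

# Sanity check: DC gains of the full and reduced models should be close.
print(-C @ np.linalg.inv(A) @ B)
print(-Cr @ np.linalg.inv(Ar) @ Br)
```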

  1. Consumer input into health care: Time for a new active and comprehensive model of consumer involvement.

    Science.gov (United States)

    Hall, Alix E; Bryant, Jamie; Sanson-Fisher, Rob W; Fradgley, Elizabeth A; Proietto, Anthony M; Roos, Ian

    2018-03-07

    To ensure the provision of patient-centred health care, it is essential that consumers are actively involved in the process of determining and implementing health-care quality improvements. However, the strategies commonly used to involve consumers in quality improvements, such as consumer membership on committees and collection of patient feedback via surveys, are often ineffective and have a number of limitations, including limited representativeness, tokenism, a lack of reliable and valid patient-feedback data, infrequent assessment of patient feedback, delays in acquiring feedback, and uncertainty about how collected feedback is used to drive health-care improvements. We propose a new active model of consumer engagement that aims to overcome these limitations. This model involves: (i) the development of a new measure of consumer perceptions; (ii) low-cost and frequent electronic collection of patient views on quality improvements; (iii) efficient feedback to health-care decision makers; and (iv) active involvement of consumers that fosters the power to influence health system changes. © 2018 The Authors Health Expectations published by John Wiley & Sons Ltd.

  2. Estimation of Global 1km-grid Terrestrial Carbon Exchange Part I: Developing Inputs and Modelling

    Science.gov (United States)

    Sasai, T.; Murakami, K.; Kato, S.; Matsunaga, T.; Saigusa, N.; Hiraki, K.

    2015-12-01

    The global terrestrial carbon cycle depends strongly on the spatial pattern of land cover type, which is heterogeneously distributed at regional and global scales. However, most studies aimed at estimating carbon exchanges between ecosystems and the atmosphere have remained at grid resolutions of several tens of kilometers, and the results have not been sufficient to resolve the detailed pattern of carbon exchanges at the level of ecological communities. Improving the spatial resolution is clearly necessary to enhance the accuracy of carbon exchange estimates, and such improvements may also contribute to climate-change awareness, policy making, and other social activities. In this study, we present global terrestrial carbon exchanges (net ecosystem production, net primary production, and gross primary production) at 1km-grid resolution. To compute the exchanges, we 1) developed a global 1km-grid climate and satellite dataset based on the approach of Setoyama and Sasai (2013); 2) used the satellite-driven biosphere model BEAMS (Biosphere model integrating Eco-physiological And Mechanistic approaches using Satellite data) (Sasai et al., 2005, 2007, 2011); and 3) simulated the carbon exchanges with the new dataset and BEAMS on a supercomputer with 1280 CPU and 320 GPGPU cores (GOSAT RCF of NIES). As a result, we developed a globally uniform system for realistically estimating terrestrial carbon exchange and evaluated net ecosystem production at the community level, yielding a highly detailed understanding of terrestrial carbon exchanges.

  3. Modeling of the impact of Rhone River nutrient inputs on the dynamics of planktonic diversity

    Science.gov (United States)

    Alekseenko, Elena; Baklouti, Melika; Garreau, Pierre; Guyennon, Arnaud; Carlotti, François

    2014-05-01

    Recent studies devoted to the Mediterranean Sea highlight that a large number of uncertainties still exist, particularly as regards variations in the elemental stoichiometry of all compartments of pelagic ecosystems (The MerMex Group, 2011; Pujo-Pay et al., 2011; Malanotte-Rizzoli and the Pan-Med Group, 2012). Moreover, during the last two decades it was observed that the inorganic N:P ratio in all Mediterranean rivers, including the Rhone River, has dramatically increased, thus strengthening the P-limitation in Mediterranean waters (Ludwig et al., 2009; The MerMex Group, 2011) and increasing the anomaly in the N:P ratio of the Gulf of Lion and the whole northwestern (NW) Mediterranean. The time scales over which such a change will impact the biogeochemical stocks and fluxes of the Gulf of Lion and of the whole NW Mediterranean Sea remain unknown. Likewise, it is still uncertain how this increase in the N:P ratio will modify the composition of the trophic web, and potentially lead to regime shifts by favouring, for example, one of the classical food chains of the sea considered in Parsons & Lalli (2002). To address this question, the Eco3M-MED biogeochemical model (Baklouti et al., 2006a,b; Alekseenko et al., 2014), representing the first trophic levels from bacteria to mesozooplankton, coupled with the hydrodynamical model MARS3D (Lazure & Dumas, 2008), is used. This model has already been partially validated (Alekseenko et al., 2014), and the fact that it describes each biogenic compartment in terms of its abundance (for organisms) and its carbon, phosphorus, nitrogen and chlorophyll content (for autotrophs) implies that all the information on the intracellular status of organisms and on the element(s) limiting their growth will be available. The N:P ratios in water, organisms and the exported material will also be analyzed. In practice, the work will first consist in running different scenarios starting from similar initial early winter

  4. Input Harmonic Analysis on the Slim DC-Link Drive Using Harmonic State Space Model

    DEFF Research Database (Denmark)

    Yang, Feng; Kwon, Jun Bum; Wang, Xiongfei

    2017-01-01

    The slim dc-link adjustable speed drive has shown good harmonic performance in some studies but poor performance in others. This contradiction indicates that a feasible theoretical analysis is still lacking to characterize the harmonic distortion of the slim dc-link drive. Considering… the variation according to the switching instant, the harmonics at the steady-state condition, as well as the coupling between the multiple harmonic impedances. By using this model, the impact of the film capacitor and the grid inductance on the harmonic performance is derived. Simulation and experimental results of the slim dc-link drive, loaded up to 2.0 kW, are presented to validate the theoretical analysis.

  5. Effect of delayed response in growth on the dynamics of a chemostat model with impulsive input

    International Nuclear Information System (INIS)

    Jiao Jianjun; Yang Xiaosong; Chen Lansun; Cai Shaohong

    2009-01-01

    In this paper, a chemostat model with delayed response in growth and impulsive perturbations on the substrate is considered. Using the discrete dynamical system determined by the stroboscopic map, we obtain a microorganism-extinction periodic solution and, further, the condition for its global attractivity. Using the theory of delay functional and impulsive differential equations, we also obtain the condition for permanence of the investigated system. Our results indicate that the discrete time delay influences the dynamic behavior of the investigated system, and they provide a tactical basis for experimenters to control the outcome of the chemostat. Furthermore, numerical analysis is included to illustrate the dynamics of the system as affected by the discrete time delay.
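
    A minimal simulation sketch of such a system, under stated simplifications: Monod growth, periodic impulsive substrate input, and the delayed-response term omitted for brevity. All parameter values are illustrative.

```python
# Chemostat with impulsive substrate input: integrate the ODEs between
# impulses, then add a substrate pulse every period T.
import numpy as np
from scipy.integrate import solve_ivp

mu_max, Ks, Y = 0.5, 0.3, 0.6   # max growth rate, half-saturation, yield
D = 0.1                         # dilution (washout) rate
T, dS = 5.0, 1.0                # impulse period and substrate pulse size

def chemostat(t, y):
    S, x = y
    mu = mu_max * S / (Ks + S)          # Monod growth rate
    return [-D * S - mu * x / Y,        # substrate washout + consumption
            (mu - D) * x]               # biomass growth - washout

y = np.array([1.0, 0.1])                # initial substrate, biomass
ts, xs = [], []
for k in range(40):                     # simulate 40 impulse periods
    sol = solve_ivp(chemostat, (k * T, (k + 1) * T), y, max_step=0.1)
    ts.extend(sol.t); xs.extend(sol.y[1])
    y = sol.y[:, -1] + np.array([dS, 0.0])   # impulsive substrate input

print(f"biomass after {ts[-1]:.0f} time units: {xs[-1]:.3f}")
```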

  6. Effect of the spatiotemporal variability of rainfall inputs in water quality integrated catchment modelling for dissolved oxygen concentrations

    Science.gov (United States)

    Moreno Ródenas, Antonio Manuel; Cecinati, Francesca; ten Veldhuis, Marie-Claire; Langeveld, Jeroen; Clemens, Francois

    2016-04-01

    Maintaining water quality standards in highly urbanised hydrological catchments is a worldwide challenge. Water management authorities struggle to cope with a changing climate and an increase in pollution pressures. Water quality modelling has been used as a decision support tool for investment and regulatory developments. This approach led to the development of integrated catchment models (ICMs), which account for the link between the urban/rural hydrology and the in-river pollutant dynamics. In the modelled system, rainfall triggers the drainage systems of urban areas scattered along a river. When flow exceeds the sewer infrastructure capacity, untreated wastewater enters the natural system through combined sewer overflows. This results in a degradation of the river water quality, depending on the magnitude of the emission and on river conditions. Thus, being capable of representing these dynamics in the modelling process is key to a correct assessment of the water quality. In many urbanised hydrological systems the distances between draining sewer infrastructures go beyond the de-correlation length of rainfall processes, especially for convective summer storms. Hence, the spatial and temporal scales of the selected rainfall inputs are expected to affect water quality dynamics. The objective of this work is to evaluate how the use of rainfall data from different sources, with different space-time characteristics, affects modelled output concentrations of dissolved oxygen in a simplified ICM. The study area is located at the Dommel, a relatively small and sensitive river flowing through the city of Eindhoven (The Netherlands). This river stretch receives the discharge of the 750,000 p.e. WWTP of Eindhoven and from over 200 combined sewer overflows scattered along its length. A pseudo-distributed water quality model has been developed in WEST (mikedhi.com); this is a lumped, physically based model that accounts for urban drainage processes, WWTP and river dynamics for several

  7. Supply Chain Vulnerability Analysis Using Scenario-Based Input-Output Modeling: Application to Port Operations.

    Science.gov (United States)

    Thekdi, Shital A; Santos, Joost R

    2016-05-01

    Disruptive events such as natural disasters, loss or reduction of resources, work stoppages, and emergent conditions have the potential to propagate economic losses across trade networks. In particular, disruptions to the operation of container port activity can be detrimental to international trade and commerce. Risk assessment should anticipate the impact of port operation disruptions, with consideration of how priorities change under uncertain scenarios, and guide investments that are effective and feasible for implementation. Priorities for protective measures and continuity-of-operations planning must consider the economic impact of such disruptions across a variety of scenarios. This article introduces new performance metrics to characterize resiliency in interdependency modeling and also integrates scenario-based methods to measure economic sensitivity to sudden-onset disruptions. The methods are demonstrated on a U.S. port responsible for handling $36.1 billion of cargo annually. The methods will be useful to port management, private industry supply chain planning, and transportation infrastructure management. © 2015 Society for Risk Analysis.
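
    The input-output mechanics underlying such interdependency analyses can be sketched in a few lines: a final-demand shock to the port-dependent sector propagates through the Leontief inverse to the rest of the economy. The 3-sector technical coefficients below are illustrative placeholders, not the study's data.

```python
# Demand-driven Leontief model: dx = (I - A)^-1 * dd
import numpy as np

A = np.array([[0.10, 0.20, 0.05],     # technical coefficients (illustrative)
              [0.15, 0.05, 0.10],
              [0.05, 0.10, 0.08]])
L = np.linalg.inv(np.eye(3) - A)      # Leontief inverse (total requirements)

dd = np.array([-100.0, 0.0, 0.0])     # $M final-demand shock to sector 1
dx = L @ dd                           # economy-wide output change
print(dx.round(1), "total:", round(dx.sum(), 1))
```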

  8. Optimization model of peach production relevant to input energies – Yield function in Chaharmahal va Bakhtiari province, Iran

    International Nuclear Information System (INIS)

    Ghatrehsamani, Shirin; Ebrahimi, Rahim; Kazi, Salim Newaz; Badarudin Badry, Ahmad; Sadeghinezhad, Emad

    2016-01-01

    The aim of this study was to determine the amount of input-output energy used in peach production and to develop an optimal model of production in Chaharmahal va Bakhtiari province, Iran. Data were collected from 100 producers by administering a questionnaire in face-to-face interviews; farms were selected by a random sampling method. Results revealed that the total input energy of production is 47,951.52 MJ/ha, with the highest share of energy consumption belonging to chemical fertilizers (35.37%). Direct energy accounted for 47.4% of consumption and indirect energy for 52.6%; renewable and non-renewable energy accounted for 19.2% and 80.8%, respectively. Energy use efficiency, energy productivity, specific energy and net energy were calculated as 0.433, 0.228 kg/MJ, 4.38 MJ/kg and −27,161.722 MJ/ha, respectively. The negative sign of the net energy indicates that production consumes more energy than it returns; with a suitable strategy, energy losses could be reduced and the negative effect of some parameters mitigated. In addition, energy efficiency was not high. Inputs such as machinery, chemical fertilizer, irrigation water and electricity had a significant effect on increasing production, and the marginal physical productivity (MPP) was determined for each input: it was positive for machinery, diesel fuel, chemical fertilizer, irrigation water and electricity, while it was negative for chemical pesticides and human labor. Finally, there is a need for a new policy that leads producers to undertake energy-efficient practices and to establish sustainable production systems without disrupting natural resources. In addition, extension activities are needed to improve the efficiency of energy consumption and to sustain natural resources. - Highlights: • Replacing non-renewable energy with renewable
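
    The indices above follow standard definitions (energy use efficiency = output energy / input energy, energy productivity = yield / input energy, specific energy = input energy / yield, net energy = output energy - input energy); the snippet below reproduces them from the reported figures as a consistency check.

```python
# Recompute the paper's energy indices from the reported values.
input_energy = 47951.52                 # MJ/ha, reported total input energy
net_energy = -27161.722                 # MJ/ha, reported
output_energy = input_energy + net_energy      # ~20,789.8 MJ/ha

energy_use_efficiency = output_energy / input_energy   # ~0.43 (reported 0.433)
specific_energy = 4.38                                  # MJ/kg, reported
yield_kg = input_energy / specific_energy               # ~10,948 kg/ha
energy_productivity = yield_kg / input_energy           # ~0.228 kg/MJ

print(round(energy_use_efficiency, 3), round(energy_productivity, 3))
```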

  9. Modelling of uranium inputs and its fate in soil; Modellierung von Uraneintraegen aus Duengern und ihr Verbleib im Boden

    Energy Technology Data Exchange (ETDEWEB)

    Achatz, M. [Bundesamt fuer Strahlenschutz, Berlin (Germany); Urso, L. [Bundesamt fuer Strahlenschutz, Oberschleissheim (Germany)

    2016-07-01

    87% of mineral phosphate fertilizers are produced from sedimentary rock phosphate, which generally contains heavy metals such as uranium. The dissolution and migration behavior of uranium is determined by its redox state, the pH conditions, and the quality and quantity of available ligands. A further important role in sorption is played by soil components such as clay minerals, pedogenic oxides and soil organic matter. To provide a suitably detailed speciation model of U in soil, several physical and chemical components have to be included so that distribution coefficients (k_D) and sorption processes can be stated. The model of Hormann and Fischer served as the basis for modelling uranium mobility in soil using the program PHREEQC. Using real soil and soil-water measurements may help to identify the factors and processes influencing the mobility of uranium under realistic conditions. Additionally, further predictions of uranium migration in soil can be made based on modelling with PHREEQC. Modelling uranium inputs and their fate in soil can help to distinguish the anthropogenic occurrence of uranium in soil from its geogenic origin.
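
    The distribution coefficient k_D mentioned above feeds directly into the standard linear-sorption retardation factor, which summarizes how strongly uranium migration lags the pore water. A minimal sketch with illustrative soil parameters, not values from the study:

```python
# Retardation factor for linear equilibrium sorption: R = 1 + (rho_b/theta) * k_D
def retardation_factor(kd_L_per_kg, bulk_density_kg_per_L=1.5, porosity=0.4):
    """R = 1 + (rho_b / theta) * k_D, dimensionless retardation of transport."""
    return 1.0 + (bulk_density_kg_per_L / porosity) * kd_L_per_kg

for kd in (0.1, 1.0, 10.0):   # L/kg, from mobile to strongly sorbed uranium
    print(f"k_D = {kd:5.1f} L/kg  ->  R = {retardation_factor(kd):6.1f}")
```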

  10. Effects of degraded sensory input on memory for speech: behavioral data and a test of biologically constrained computational models.

    Science.gov (United States)

    Piquado, Tepring; Cousins, Katheryn A Q; Wingfield, Arthur; Miller, Paul

    2010-12-13

    Poor hearing acuity reduces memory for spoken words, even when the words are presented with enough clarity for correct recognition. An "effortful hypothesis" suggests that the perceptual effort needed for recognition draws on resources that would otherwise be available for encoding the word in memory. To assess this hypothesis, we conducted a behavioral task requiring immediate free recall of word lists, some of which contained an acoustically masked word that was just above perceptual threshold. Results show that masking a word reduces the recall of that word and of the words prior to it, as well as weakening the linking associations between the masked and prior words. In contrast, recall probabilities of words following the masked word are not affected. To account for this effect we conducted computational simulations testing two classes of models: Associative Linking Models and Short-Term Memory Buffer Models. Only a model that integrated both contextual linking and buffer components matched all of the effects of masking observed in our behavioral data. In this Linking-Buffer Model, the masked word disrupts a short-term memory buffer, causing the associative links of words in the buffer to be weakened, affecting memory for the masked word and the word prior to it, while allowing links of words following the masked word to be spared. We suggest that these data support the so-called "effortful hypothesis", in which distorted input has a detrimental impact on prior information stored in short-term memory. Copyright © 2010 Elsevier B.V. All rights reserved.

  11. Data-driven modelling of protein synthesis : A sequence perspective

    NARCIS (Netherlands)

    Gritsenko, A.

    2017-01-01

    Recent advances in DNA sequencing, synthesis and genetic engineering have enabled the introduction of choice DNA sequences into living cells. This is an exciting prospect for the field of industrial biotechnology, which aims at using microorganisms to produce foods, beverages, pharmaceuticals and

  12. A comparison of numerical and machine-learning modeling of soil water content with limited input data

    Science.gov (United States)

    Karandish, Fatemeh; Šimůnek, Jiří

    2016-12-01

    Soil water content (SWC) is a key factor in optimizing the usage of water resources in agriculture, since it provides the information needed to make an accurate estimation of crop water demand. Methods for predicting SWC that have simple data requirements are needed to achieve an optimal irrigation schedule, especially for the various water-saving irrigation strategies that are required to resolve both food and water security issues under conditions of water shortage. Thus, a two-year field investigation was carried out to provide a dataset to compare the effectiveness of HYDRUS-2D, a physically-based numerical model, with various machine-learning models, including Multiple Linear Regression (MLR), Adaptive Neuro-Fuzzy Inference Systems (ANFIS), and Support Vector Machines (SVM), for simulating time series of SWC data under water stress conditions. SWC was monitored using TDRs during the maize growing seasons of 2010 and 2011. Eight combinations of six simple, independent parameters, including pan evaporation and average air temperature as atmospheric parameters, cumulative growing degree days (cGDD) and crop coefficient (Kc) as crop factors, and water deficit (WD) and irrigation depth (In) as crop stress factors, were adopted for the estimation of SWC in the machine-learning models. With Root Mean Square Errors (RMSE) in the range of 0.54-2.07 mm, HYDRUS-2D ranked first for SWC estimation, while the ANFIS and SVM models with input datasets of cGDD, Kc, WD and In ranked next, with RMSEs ranging from 1.27 to 1.9 mm and mean bias errors of -0.07 to 0.27 mm, respectively. However, the MLR models did not perform well for SWC forecasting, mainly due to the non-linear changes of SWC during the irrigation process. The results demonstrated that, despite requiring only simple input data, the ANFIS and SVM models could be favorably used for SWC predictions under water stress conditions, especially when there is a lack of data. However, process-based numerical models are undoubtedly a
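
    As a hedged sketch of the machine-learning side of such a comparison, the following trains a support vector regressor (scikit-learn) to map the six simple predictors to SWC. The data are synthetic placeholders, not the field measurements, and the hyperparameters are illustrative.

```python
# SVM regression of soil water content from six simple inputs.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVR

rng = np.random.default_rng(0)
n = 200
# Columns: pan evaporation, air temperature, cGDD, Kc, WD, In (placeholders)
X = rng.uniform(0.0, 1.0, size=(n, 6))
swc = 20 + 5 * X[:, 3] - 4 * X[:, 4] + rng.normal(0, 0.5, n)  # synthetic SWC, mm

X_tr, X_te, y_tr, y_te = train_test_split(X, swc, random_state=0)
model = make_pipeline(StandardScaler(), SVR(kernel="rbf", C=10.0, epsilon=0.1))
model.fit(X_tr, y_tr)
rmse = np.sqrt(np.mean((model.predict(X_te) - y_te) ** 2))
print(f"RMSE on held-out data: {rmse:.2f} mm")
```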

  13. Documentation of input datasets for the soil-water balance groundwater recharge model of the Upper Colorado River Basin

    Science.gov (United States)

    Tillman, Fred D.

    2015-01-01

    The Colorado River and its tributaries supply water to more than 35 million people in the United States and 3 million people in Mexico, irrigate more than 4.5 million acres of farmland, and generate about 12 billion kilowatt-hours of hydroelectric power annually. The Upper Colorado River Basin, encompassing more than 110,000 square miles (mi²), contains the headwaters of the Colorado River (also known as the River) and is an important source of snowmelt runoff to the River. Groundwater discharge also is an important source of water in the River and its tributaries, with estimates ranging from 21 to 58 percent of streamflow in the upper basin. Planning for the sustainable management of the Colorado River in future climates requires an understanding of the Upper Colorado River Basin groundwater system. This report documents the input datasets for a Soil-Water Balance groundwater recharge model that was developed for the Upper Colorado River Basin.

  14. Selection Input Output by Restriction Using DEA Models Based on a Fuzzy Delphi Approach and Expert Information

    Science.gov (United States)

    Arsad, Roslah; Nasir Abdullah, Mohammad; Alias, Suriana; Isa, Zaidi

    2017-09-01

    Stock evaluation has always been an interesting problem for investors. In this paper, a comparison of the efficiency of stocks of companies listed on Bursa Malaysia was made through the application of the Data Envelopment Analysis (DEA) estimation method. One of the interesting research subjects in DEA is the selection of appropriate input and output parameters. In this study, DEA was used to measure the efficiency of stocks of listed companies in Bursa Malaysia in terms of financial ratios, in order to evaluate stock performance. Based on previous studies and the Fuzzy Delphi Method (FDM), the most important financial ratios were selected. The results indicated that return on equity, return on assets, net profit margin, operating profit margin, earnings per share, price to earnings and debt to equity were the most important ratios. Using expert information, all parameters were classified as inputs or outputs. The main objectives were to identify the most critical financial ratios, classify them based on expert information, and compute the relative efficiency scores of stocks as well as rank them completely within the construction and materials sector. The analysis employed Alirezaee and Afsharian's model, in which the original Charnes, Cooper and Rhodes (CCR) formulation with the assumption of Constant Returns to Scale (CRS) still holds. This method of ranking the relative efficiency of decision-making units (DMUs) is value-added by the Balance Index. Data were compiled for the year 2015, and the research population comprises the listed companies in the construction and materials sector (63 companies). According to the ranking, the proposed model can completely rank the 63 companies using the selected financial ratios.
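
    The CCR model under CRS can be written, for each decision-making unit (DMU), as a small linear program in multiplier form. The sketch below solves it with scipy for three illustrative DMUs; the input/output data are placeholders, not the Bursa Malaysia ratios.

```python
# CCR efficiency per DMU: max u'y_j  s.t.  v'x_j = 1,  u'Y - v'X <= 0, u,v >= 0
import numpy as np
from scipy.optimize import linprog

X = np.array([[2.0, 3.0], [4.0, 1.0], [3.0, 2.0]])   # inputs, one row per DMU
Y = np.array([[1.0], [1.0], [1.5]])                  # outputs, one row per DMU
n, m_in = X.shape
m_out = Y.shape[1]

for j in range(n):
    # Decision variables: output weights u, then input weights v.
    c = np.concatenate([-Y[j], np.zeros(m_in)])       # maximize u'y_j
    A_eq = [np.concatenate([np.zeros(m_out), X[j]])]  # v'x_j = 1
    A_ub = np.hstack([Y, -X])                         # u'y_k - v'x_k <= 0
    res = linprog(c, A_ub=A_ub, b_ub=np.zeros(n), A_eq=A_eq, b_eq=[1.0],
                  bounds=[(0, None)] * (m_out + m_in))
    print(f"DMU {j}: efficiency = {-res.fun:.3f}")
```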

  15. Neonatal intensive care nursing curriculum challenges based on context, input, process, and product evaluation model: A qualitative study

    Directory of Open Access Journals (Sweden)

    Mansoureh Ashghali-Farahani

    2018-01-01

    Full Text Available Background: Weakness in curriculum development in nursing education results in a lack of professional skills in graduates. This study was done on master's students in nursing to evaluate the challenges of the neonatal intensive care nursing curriculum based on the context, input, process, and product (CIPP) evaluation model. Materials and Methods: This study was conducted with a qualitative approach, completed according to the CIPP evaluation model, from May 2014 to April 2015. The research community included neonatal intensive care nursing master's students, graduates, faculty members, neonatologists, nurses working in the neonatal intensive care unit (NICU), and mothers of infants who were hospitalized in such wards. Purposeful sampling was applied. Results: The data analysis showed that there were two main categories, “inappropriate infrastructure” and “unknown duties,” which influenced the context formation of the NICU master's curriculum. The input was formed by five categories: “biomedical approach,” “incomprehensive curriculum,” “lack of professional NICU nursing mentors,” “inappropriate admission process of NICU students,” and “lack of NICU skill labs.” Three categories were extracted in the process: “more emphasis on theoretical education,” “the overlap of credits with each other and the inconsistency among the mentors,” and “ineffective assessment.” Finally, five categories were extracted in the product: “preferring routine work instead of professional job,” “tendency to leave the job,” “clinical incompetency of graduates,” “the conflict between graduates and nursing staff expectations,” and “dissatisfaction of graduates.” Conclusions: Some changes are needed in the NICU master's curriculum, taking the nursing experts' comments into account and having them evaluate the consequences of such a program.

  16. Estimating direct and indirect rebound effects by supply-driven input-output model: A case study of Taiwan's industry

    International Nuclear Information System (INIS)

    Wu, Kuei-Yen; Wu, Jung-Hua; Huang, Yun-Hsun; Fu, Szu-Chi; Chen, Chia-Yon

    2016-01-01

    Most existing literature focuses on the direct rebound effect on the demand side for consumers. This study analyses direct and indirect rebound effects in Taiwan's industry from the perspective of producers. Most studies from the producers' viewpoint, however, overlook inter-industry linkages. This study applies a supply-driven input-output model to quantify the magnitude of rebound effects while explicitly considering inter-industry linkages. Empirical results showed that total rebound effects for most of Taiwan's sectors were less than 10% in 2011. A comparison among the sectors shows that sectors with lower energy efficiency had higher direct rebound effects, while sectors with higher forward linkages generated higher indirect rebound effects. The Mining sector (S3), for example, an upstream supplier with high forward linkages, showed high indirect rebound effects derived from the accumulation of additional energy consumption by its downstream producers. The findings also showed that in almost all sectors, indirect rebound effects were higher than direct rebound effects. In other words, if indirect rebound effects are neglected, the total rebound effects will be underestimated, and hence the energy-saving potential may be overestimated. - Highlights: • This study quantifies rebound effects with a supply-driven input-output model. • For most of Taiwan's sectors, total rebound magnitudes were less than 10% in 2011. • Direct rebound effects and energy efficiency were inversely correlated. • Indirect rebound effects and industrial forward linkages were positively correlated. • Indirect rebound effects were generally higher than direct rebound effects.
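
    The supply-driven (Ghosh) model replaces the familiar demand-side technical coefficients with allocation coefficients and propagates a change in a sector's primary inputs forward to its downstream users. A minimal sketch with illustrative 3-sector data, not the Taiwanese tables:

```python
# Ghosh model: dx' = dv' (I - B)^-1, with B the allocation coefficients.
import numpy as np

Z = np.array([[10.0, 40.0, 15.0],     # interindustry flows, rows = sellers
              [20.0,  5.0, 30.0],
              [ 5.0, 25.0, 10.0]])
x = np.array([100.0, 120.0, 90.0])    # total sector outputs

B = Z / x[:, None]                    # allocation (output) coefficients
G = np.linalg.inv(np.eye(3) - B)      # Ghosh inverse

dv = np.array([10.0, 0.0, 0.0])       # primary-input change in sector 1
dx = dv @ G                           # forward-propagated output change
print(dx.round(2))
```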

  17. Review of Literature for Inputs to the National Water Savings Model and Spreadsheet Tool-Commercial/Institutional

    Energy Technology Data Exchange (ETDEWEB)

    Whitehead, Camilla Dunham; Melody, Moya; Lutz, James

    2009-05-29

    Lawrence Berkeley National Laboratory (LBNL) is developing a computer model and spreadsheet tool for the United States Environmental Protection Agency (EPA) to help estimate the water savings attributable to their WaterSense program. WaterSense has developed a labeling program for three types of plumbing fixtures commonly used in commercial and institutional settings: flushometer valve toilets, urinals, and pre-rinse spray valves. This National Water Savings-Commercial/Institutional (NWS-CI) model is patterned after the National Water Savings-Residential model, which was completed in 2008. Calculating the quantity of water and money saved through the WaterSense labeling program requires three primary inputs: (1) the quantity of a given product in use; (2) the frequency with which units of the product are replaced or are installed in new construction; and (3) the number of times or the duration the product is used in various settings. To obtain the information required for developing the NWS-CI model, LBNL reviewed various resources pertaining to the three WaterSense-labeled commercial/institutional products. The data gathered ranged from the number of commercial buildings in the United States to numbers of employees in various sectors of the economy and plumbing codes for commercial buildings. This document summarizes information obtained about the three products' attributes, quantities, and use in commercial and institutional settings that is needed to estimate how much water EPA's WaterSense program saves.

  18. On the influence of meteorological input on photochemical modelling of a severe episode over a coastal area

    Science.gov (United States)

    Pirovano, G.; Coll, I.; Bedogni, M.; Alessandrini, S.; Costa, M. P.; Gabusi, V.; Lasry, F.; Menut, L.; Vautard, R.

    The modelling reconstruction of the processes determining the transport and mixing of ozone and its precursors in complex terrain is a challenging task, particularly when local-scale circulations such as sea breezes take place. Within this framework, the ESCOMPTE European campaign took place in the vicinity of Marseille (south-east France) in summer 2001. The main objectives of the field campaign were to document several photochemical episodes and to constitute a detailed database for the intercomparison of chemistry transport models. The CAMx model was applied to the longest intensive observation period (IOP, June 21-26, 2001) in order to evaluate the impact of two state-of-the-art meteorological models, RAMS and MM5, on chemical model outputs. The meteorological models were used as well as possible in analysis mode, allowing the spread arising in pollutant concentrations to be identified as an indication of the intrinsic uncertainty associated with the meteorological input. Simulations were investigated in depth and compared with a considerable subset of observations, both at ground level and along vertical profiles. The analysis showed that both models were able to reproduce the main circulation features of the IOP. The strongest discrepancies are confined to the planetary boundary layer, consisting of a clear tendency to underestimate or overestimate wind speed over the whole domain. The photochemical simulations showed that variability in circulation intensity was crucial mainly for the representation of the ozone peaks and of the shape of ozone plumes at the ground, which were affected in the same way over the whole domain and throughout the simulated period. As a consequence, such differences can be thought of as a possible indicator of the uncertainty related to the definition of meteorological fields in a complex terrain area.

  19. Reference genome sequence of the model plant Setaria.

    Science.gov (United States)

    Bennetzen, Jeffrey L; Schmutz, Jeremy; Wang, Hao; Percifield, Ryan; Hawkins, Jennifer; Pontaroli, Ana C; Estep, Matt; Feng, Liang; Vaughn, Justin N; Grimwood, Jane; Jenkins, Jerry; Barry, Kerrie; Lindquist, Erika; Hellsten, Uffe; Deshpande, Shweta; Wang, Xuewen; Wu, Xiaomei; Mitros, Therese; Triplett, Jimmy; Yang, Xiaohan; Ye, Chu-Yu; Mauro-Herrera, Margarita; Wang, Lin; Li, Pinghua; Sharma, Manoj; Sharma, Rita; Ronald, Pamela C; Panaud, Olivier; Kellogg, Elizabeth A; Brutnell, Thomas P; Doust, Andrew N; Tuskan, Gerald A; Rokhsar, Daniel; Devos, Katrien M

    2012-05-13

    We generated a high-quality reference genome sequence for foxtail millet (Setaria italica). The ∼400-Mb assembly covers ∼80% of the genome and >95% of the gene space. The assembly was anchored to a 992-locus genetic map and was annotated by comparison with >1.3 million expressed sequence tag reads. We produced more than 580 million RNA-Seq reads to facilitate expression analyses. We also sequenced Setaria viridis, the ancestral wild relative of S. italica, and identified regions of differential single-nucleotide polymorphism density, distribution of transposable elements, small RNA content, chromosomal rearrangement and segregation distortion. The genus Setaria includes natural and cultivated species that demonstrate a wide capacity for adaptation. The genetic basis of this adaptation was investigated by comparing five sequenced grass genomes. We also used the diploid Setaria genome to evaluate the ongoing genome assembly of a related polyploid, switchgrass (Panicum virgatum).

  20. Reference genome sequence of the model plant Setaria

    Energy Technology Data Exchange (ETDEWEB)

    Bennetzen, Jeffrey L [ORNL; Schmutz, Jeremy [Hudson Alpha Institute of Biotechnology; Wang, Hao [University of Georgia, Athens, GA; Percifield, Ryan [University of Georgia, Athens, GA; Hawkins, Jennifer [University of Georgia, Athens, GA; Pontaroli, Ana C. [University of Georgia, Athens, GA; Estep, Matt [University of Georgia, Athens, GA; Feng, Liang [University of Georgia, Athens, GA; Vaughn, Justin N [ORNL; Grimwood, Jane [Hudson Alpha Institute of Biotechnology; Jenkins, Jerry [Hudson Alpha Institute of Biotechnology; Barry, Kerrie [U.S. Department of Energy, Joint Genome Institute; Lindquist, Erika [U.S. Department of Energy, Joint Genome Institute; Hellsten, Uffe [U.S. Department of Energy, Joint Genome Institute; Deshpande, Shweta [U.S. Department of Energy, Joint Genome Institute; Wang, Xuewen [University of Georgia, Athens, GA; Wu, Xiaomei [University of Georgia, Athens, GA; Mitros, Therese [University of California, Berkeley; Triplett, Jimmy [University of Missouri, St. Louis; Yang, Xiaohan [ORNL; Ye, Chuyu [ORNL; Mauro-Herrera, Margarita [Oklahoma State University; Wang, Lin [Cornell University; Li, Pinghua [Cornell University; Sharma, Manoj [University of California, Davis; Sharma, Rita [University of California, Davis; Ronald, Pamela [University of California, Davis; Panaud, Olivier [Universite de Perpignan, Perpignan, France; Kellogg, Elizabeth A. [University of Missouri, St. Louis; Brutnell, Thomas P. [Cornell University; Doust, Andrew N. [Oklahoma State University; Tuskan, Gerald A [ORNL; Rokhsar, Daniel [U.S. Department of Energy, Joint Genome Institute; Devos, Katrien M [ORNL

    2012-01-01

    We generated a high-quality reference genome sequence for foxtail millet (Setaria italica). The ~400-Mb assembly covers ~80% of the genome and >95% of the gene space. The assembly was anchored to a 992-locus genetic map and was annotated by comparison with >1.3 million expressed sequence tag reads. We produced more than 580 million RNA-Seq reads to facilitate expression analyses. We also sequenced Setaria viridis, the ancestral wild relative of S. italica, and identified regions of differential single-nucleotide polymorphism density, distribution of transposable elements, small RNA content, chromosomal rearrangement and segregation distortion. The genus Setaria includes natural and cultivated species that demonstrate a wide capacity for adaptation. The genetic basis of this adaptation was investigated by comparing five sequenced grass genomes. We also used the diploid Setaria genome to evaluate the ongoing genome assembly of a related polyploid, switchgrass (Panicum virgatum).

  1. Reference genome sequence of the model plant Setaria

    Energy Technology Data Exchange (ETDEWEB)

    Bennetzen, Jeffrey L [ORNL; Yang, Xiaohan [ORNL; Ye, Chuyu [ORNL; Tuskan, Gerald A [ORNL

    2012-01-01

    We generated a high-quality reference genome sequence for foxtail millet (Setaria italica). The ~400-Mb assembly covers ~80% of the genome and >95% of the gene space. The assembly was anchored to a 992-locus genetic map and was annotated by comparison with >1.3 million expressed sequence tag reads. We produced more than 580 million RNA-Seq reads to facilitate expression analyses. We also sequenced Setaria viridis, the ancestral wild relative of S. italica, and identified regions of differential single-nucleotide polymorphism density, distribution of transposable elements, small RNA content, chromosomal rearrangement and segregation distortion. The genus Setaria includes natural and cultivated species that demonstrate a wide capacity for adaptation. The genetic basis of this adaptation was investigated by comparing five sequenced grass genomes. We also used the diploid Setaria genome to evaluate the ongoing genome assembly of a related polyploid, switchgrass (Panicum virgatum).

  2. Comparison of static model and dynamic model for the evaluation of station blackout sequences

    International Nuclear Information System (INIS)

    Lee, Kwang-Nam; Kang, Sun-Koo; Hong, Sung-Yull.

    1992-01-01

    Station blackout is one of the major contributors to the core damage frequency (CDF) in many PSA studies. Since the station blackout sequence exhibits dynamic features, accurate calculation of the CDF for this sequence is not possible with the event tree/fault tree (ET/FT) method. Although the integral method can determine the CDF accurately, it is time-consuming and makes it difficult to evaluate various alternative AC source configurations and sensitivities. In this study, a comparison is made between the static model and the dynamic model, and a new methodology that combines the two is provided for the accurate quantification of the CDF and the evaluation of improvement alternatives. The results of several case studies show that accurate calculation of the CDF is possible by introducing an equivalent mission time. (author)

  3. Artificial neural network modelling of biological oxygen demand in rivers at the national level with input selection based on Monte Carlo simulations.

    Science.gov (United States)

    Šiljić, Aleksandra; Antanasijević, Davor; Perić-Grujić, Aleksandra; Ristić, Mirjana; Pocajt, Viktor

    2015-03-01

    Biological oxygen demand (BOD) is the most significant water quality parameter and indicates water pollution with respect to the biodegradable organic matter present. European countries are therefore obliged to report annual BOD values to Eurostat; however, BOD data at the national level are only available for 28 of the 35 listed European countries for the period prior to 2008, and 46% of those data are missing. This paper describes the development of an artificial neural network model for forecasting annual BOD values at the national level, using widely available sustainability and economic/industrial parameters as inputs. The initial general regression neural network (GRNN) model was trained, validated and tested utilizing 20 inputs. The number of inputs was reduced to 15 using the Monte Carlo simulation technique as the input selection method. The best results were achieved with the GRNN model utilizing 25% fewer inputs than the initial model, and a comparison with a multiple linear regression model, trained and tested on the same input variables and assessed with multiple statistical performance indicators, confirmed the advantage of the GRNN model. Sensitivity analysis showed that the inputs with the greatest effect on the GRNN model were (in descending order) precipitation, rural population with access to improved water sources, treatment capacity of urban wastewater treatment plants, and treatment of municipal waste, with the last two having an equal effect. Finally, it was concluded that the developed GRNN model can be useful as a tool to support the decision-making process on sustainable development at regional, national and international levels.
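
    A GRNN is, in essence, Nadaraya-Watson kernel regression over the training set, which makes a compact sketch possible. The inputs below are random placeholders standing in for the 15 selected country-level indicators.

```python
# Minimal GRNN: prediction is a Gaussian-weighted average of training targets.
import numpy as np

def grnn_predict(X_train, y_train, X_query, sigma=0.5):
    """Predict targets at X_query from Gaussian-kernel-weighted y_train."""
    d2 = ((X_query[:, None, :] - X_train[None, :, :]) ** 2).sum(-1)
    w = np.exp(-d2 / (2.0 * sigma ** 2))
    return (w @ y_train) / w.sum(axis=1)

rng = np.random.default_rng(1)
X = rng.normal(size=(100, 15))                 # 15 standardized inputs
y = 2.0 * X[:, 0] + rng.normal(0, 0.1, 100)    # synthetic BOD-like target
print(grnn_predict(X[:80], y[:80], X[80:85]))
```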

  4. The embodied energy and environmental emissions of construction projects in China: An economic input-output LCA model

    International Nuclear Information System (INIS)

    Chang Yuan; Ries, Robert J.; Wang Yaowu

    2010-01-01

    A complete understanding of the resource consumption, embodied energy, and environmental emissions of civil projects in China is difficult due to the lack of comprehensive national statistics. To quantitatively assess the energy and environmental impacts of civil construction at the macro level, this study developed a 24-sector environmental input-output life-cycle assessment (I-O LCA) model based on 2002 Chinese national economic and environmental data. The model generates an economy-wide inventory of energy use and environmental emissions. Estimates are made based on the level of economic activity associated with civil works planned for 2015. Results indicate that the embodied energy of construction projects accounted for nearly one-sixth of the economy's total energy consumption in 2007, and may account for approximately one-fifth of total energy use by 2015. This energy consumption is dominated by coal and oil. Energy-related emissions are the main polluters of the country's atmosphere and environment. If the industry's energy use and manufacturing techniques remain the same as in 2002, challenges to the goals for total energy consumption in China will appear in the next decade. Thus, effective implementation of efficient energy technologies and regulations is indispensable for achieving China's energy and environmental quality goals.
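
    The core EIO-LCA calculation multiplies a direct energy (or emissions) intensity vector by the Leontief total-requirements matrix and a final-demand vector. A 3-sector illustrative sketch, not the 24-sector Chinese tables:

```python
# EIO-LCA: embodied energy = e' (I - A)^-1 y
import numpy as np

A = np.array([[0.20, 0.10, 0.05],    # technical coefficients (illustrative)
              [0.10, 0.15, 0.20],
              [0.05, 0.10, 0.10]])
e = np.array([5.0, 12.0, 2.0])       # direct energy intensity, MJ per $ output

L = np.linalg.inv(np.eye(3) - A)     # total requirements matrix
y = np.array([0.0, 100.0, 0.0])      # $ of construction final demand
print(f"embodied energy: {e @ L @ y:.1f} MJ")   # direct + indirect
```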

  5. Impact of multi-resolution analysis of artificial intelligence models inputs on multi-step ahead river flow forecasting

    Science.gov (United States)

    Badrzadeh, Honey; Sarukkalige, Ranjan; Jayawardena, A. W.

    2013-12-01

    Highlights: • Discrete wavelet transform was applied to decompose ANN and ANFIS inputs. • A novel approach of WNF with subtractive clustering was applied for flow forecasting. • Forecasting was performed 1-5 steps ahead using multivariate inputs. • Forecasting accuracy of peak values and at longer lead times improved significantly.
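
    A hedged sketch of the multi-resolution preprocessing step: decompose a flow series with the discrete wavelet transform (PyWavelets) and reconstruct full-length sub-series to serve as model inputs. The series, wavelet, and level are illustrative choices.

```python
# DWT decomposition of a (synthetic) daily flow series into sub-series.
import numpy as np
import pywt

rng = np.random.default_rng(2)
t = np.arange(512)
flow = 50 + 10 * np.sin(2 * np.pi * t / 64) + rng.normal(0, 2, 512)

# 3-level DWT with a Daubechies-4 wavelet: coeffs = [cA3, cD3, cD2, cD1]
coeffs = pywt.wavedec(flow, "db4", level=3)

# Reconstruct each component at full length; these can feed the ANN/ANFIS.
parts, levels = ["a", "d", "d", "d"], [3, 3, 2, 1]
subseries = [pywt.upcoef(p, c, "db4", level=l, take=len(flow))
             for p, c, l in zip(parts, coeffs, levels)]
print([s.shape for s in subseries])
```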

  6. Extended Fitts' model of pointing time in eye-gaze input system - Incorporating effects of target shape and movement direction into modeling.

    Science.gov (United States)

    Murata, Atsuo; Fukunaga, Daichi

    2018-04-01

    This study investigated the effects of target shape and movement direction on pointing time using an eye-gaze input system, and extended Fitts' model so that these factors are incorporated into the model and its predictive power is enhanced. The target shape, the target size, the movement distance, and the direction of target presentation were set as within-subject experimental variables. The target shapes included a circle and rectangles with aspect ratios of 1:1, 1:2, 1:3, and 1:4. The movement directions included eight directions: upper, lower, left, right, upper left, upper right, lower left, and lower right. On the basis of the data identifying the effects of target shape and movement direction on pointing time, an attempt was made to develop a generalized and extended Fitts' model that takes the movement direction and the target shape into account. As a result, the generalized and extended model was found to fit the experimental data better and to be more effective for predicting pointing time for a variety of human-computer interaction (HCI) tasks using an eye-gaze input system. Copyright © 2017. Published by Elsevier Ltd.
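
    The paper's exact extended formulation is not reproduced in the abstract; as an illustration of the general approach, the sketch below fits a Fitts-type model augmented with direction and shape terms by least squares on synthetic data. The added terms (cosine of direction, log aspect ratio) are assumptions for illustration only.

```python
# Fit MT = a + b*ID + c1*cos(theta) + c2*log2(aspect) by least squares.
import numpy as np

rng = np.random.default_rng(3)
n = 400
D = rng.uniform(100, 600, n)                  # movement distance (px)
W = rng.uniform(20, 120, n)                   # target width (px)
theta = rng.uniform(0, 2 * np.pi, n)          # movement direction (rad)
aspect = rng.choice([1.0, 2.0, 3.0, 4.0], n)  # target aspect ratio

ID = np.log2(2 * D / W)                       # Fitts index of difficulty
# Synthetic pointing times with direction and shape effects (illustrative).
MT = 300 + 150 * ID + 40 * np.cos(theta) + 25 * np.log2(aspect) \
     + rng.normal(0, 30, n)

Xd = np.column_stack([np.ones(n), ID, np.cos(theta), np.log2(aspect)])
coef, *_ = np.linalg.lstsq(Xd, MT, rcond=None)
print("a, b, c_direction, c_shape =", coef.round(1))
```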

  7. Sequence Tree Modeling for Combined Accident and Feed-and-Bleed Operation

    International Nuclear Information System (INIS)

    Kim, Bo Gyung; Kang Hyun Gook; Yoon, Ho Joon

    2016-01-01

    To address this issue, this study suggests the sequence tree model for analyzing accident sequences systematically. Using the sequence tree model, all possible scenarios that need a specific safety action to prevent core damage can be identified, as can the success conditions of that safety action under complicated situations such as a combined accident. The sequence tree is a branching model that divides the plant condition while accounting for plant dynamics. Since the sequence tree model can reflect plant dynamics, arising from the interaction of different accident timings and plant conditions and from the interaction among operator actions, mitigation systems, and the indicators used for operation, it can be used to develop a dynamic event tree model easily. The target safety action for this study is a feed-and-bleed (F and B) operation, which directly cools down the reactor cooling system (RCS) using the primary cooling system when residual heat removal by the secondary cooling system is not available. In this study, a total loss of feedwater (TLOFW) accident and a TLOFW accident with LOCA were the target accidents. Based on the conventional PSA model and indicators, the sequence tree model for a TLOFW accident was developed. If sampling analysis is performed, practical accident sequences can be identified based on the sequence analysis; if a realistic distribution for the variables can be obtained for the sampling analysis, much more realistic accident sequences can be described. Moreover, if the initiating event frequency under a combined accident can be quantified, the sequence tree model can be translated into a dynamic event tree model based on the sampling analysis results.

  8. Sequence Tree Modeling for Combined Accident and Feed-and-Bleed Operation

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Bo Gyung; Kang Hyun Gook [KAIST, Daejeon (Korea, Republic of); Yoon, Ho Joon [Khalifa University of Science, Abu Dhabi (United Arab Emirates)

    2016-05-15

    To address this issue, this study suggests the sequence tree model for analyzing accident sequences systematically. Using the sequence tree model, all possible scenarios that need a specific safety action to prevent core damage can be identified, as can the success conditions of that safety action under complicated situations such as a combined accident. The sequence tree is a branching model that divides the plant condition while accounting for plant dynamics. Since the sequence tree model can reflect plant dynamics, arising from the interaction of different accident timings and plant conditions and from the interaction among operator actions, mitigation systems, and the indicators used for operation, it can be used to develop a dynamic event tree model easily. The target safety action for this study is a feed-and-bleed (F and B) operation, which directly cools down the reactor cooling system (RCS) using the primary cooling system when residual heat removal by the secondary cooling system is not available. In this study, a total loss of feedwater (TLOFW) accident and a TLOFW accident with LOCA were the target accidents. Based on the conventional PSA model and indicators, the sequence tree model for a TLOFW accident was developed. If sampling analysis is performed, practical accident sequences can be identified based on the sequence analysis; if a realistic distribution for the variables can be obtained for the sampling analysis, much more realistic accident sequences can be described. Moreover, if the initiating event frequency under a combined accident can be quantified, the sequence tree model can be translated into a dynamic event tree model based on the sampling analysis results.

  9. Oil spill modeling input to the offshore environmental cost model (OECM) for US-BOEMRE's spill risk and costs evaluations

    International Nuclear Information System (INIS)

    French McCay, Deborah; Reich, Danielle; Rowe, Jill; Schroeder, Melanie; Graham, Eileen

    2011-01-01

    This paper simulates the consequences of oil spills using a planning model known as the Offshore Environmental Cost Model (OECM), with the aim of creating predictive models for possible oil spill scenarios in marine waters. A crucial part of this investigation was the SIMAP model, which analyzes the distance and the direction covered by the spill under certain test conditions, generating a regression equation that simulates the impact of the spill. Tests were run in two different regions: the Mid-Atlantic region and the Chukchi Sea. Results showed that the higher wind speeds and higher water temperature of the Mid-Atlantic region had greater impacts on wildlife and the water column, respectively. However, shoreline impact was higher in the Chukchi area due to the multi-directional wind. It was also shown that, because of their higher diffusivity in water, lighter crude oils had more impact than heavier oils. It was suggested that this model could ultimately be applied to other oil spill scenarios occurring under similar conditions.

  10. Modeling the cellular mechanisms and olfactory input underlying the triphasic response of moth pheromone-sensitive projection neurons.

    Directory of Open Access Journals (Sweden)

    Yuqiao Gu

    Full Text Available In the antennal lobe of the noctuid moth Agrotis ipsilon, most pheromone-sensitive projection neurons (PNs) exhibit a triphasic firing pattern of excitation (E1)-inhibition (I)-excitation (E2) in response to a pulse of the sex pheromone. To understand the mechanisms underlying this stereotypical discharge, we developed a biophysical model of a PN receiving inputs from olfactory receptor neurons (ORNs) via nicotinic cholinergic synapses. The ORN is modeled as an inhomogeneous Poisson process whose firing rate is a function of time and is fitted to extracellular data recorded in response to pheromone stimulations at various concentrations and durations. The PN model is based on the Hodgkin-Huxley formalism with realistic ionic currents whose parameters were derived from previous studies. Simulations revealed that the inhibitory phase I can be produced by an SK current (a Ca2+-gated small-conductance K+ current) and that the excitatory phase E2 can result from the long-lasting response of the ORNs. Parameter analysis further revealed that the ending time of E1 depends on some parameters of the SK, Ca2+, nACh and Na+ currents; the duration of I mainly depends on the time constant of intracellular Ca2+ dynamics, the conductance of Ca2+ currents and some parameters of the nACh currents; and the mean firing frequencies of E1 and E2 depend differentially on the interaction of various currents. Thus it is likely that the interplay between PN intrinsic currents and feedforward synaptic currents is sufficient to generate the triphasic firing patterns observed in the noctuid moth A. ipsilon.
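
    The ORN input stage described above can be sketched compactly: spikes are drawn from an inhomogeneous Poisson process by thinning, with an illustrative pulse-evoked firing-rate profile (not the rate fitted to the recordings in the paper).

```python
# Inhomogeneous Poisson spike generation by thinning (Lewis-Shedler).
import numpy as np

def orn_rate(t, t_on=0.2, t_off=0.4, base=5.0, peak=150.0, tau=0.3):
    """Firing rate (Hz): baseline, pheromone-evoked rise, slow decay."""
    r = np.full_like(t, base, dtype=float)
    r[(t >= t_on) & (t < t_off)] = peak
    after = t >= t_off
    r[after] = base + (peak - base) * np.exp(-(t[after] - t_off) / tau)
    return r

def thinning(rate_fn, r_max, T, rng):
    """Draw spike times on [0, T) with acceptance probability r(t)/r_max."""
    t, spikes = 0.0, []
    while t < T:
        t += rng.exponential(1.0 / r_max)      # candidate at maximal rate
        if t < T and rng.uniform() < rate_fn(np.array([t]))[0] / r_max:
            spikes.append(t)
    return np.array(spikes)

rng = np.random.default_rng(4)
spikes = thinning(orn_rate, 150.0, 1.5, rng)
print(f"{len(spikes)} ORN spikes in 1.5 s")
```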

  11. Probabilistic topic modeling for the analysis and classification of genomic sequences

    Science.gov (United States)

    2015-01-01

    Background Studies on genomic sequences for classification and taxonomic identification have a leading role in the biomedical field and in the analysis of biodiversity. These studies focus on the so-called barcode genes, representing a well-defined region of the whole genome. Recently, alignment-free techniques have been gaining importance because they are able to overcome the drawbacks of sequence-alignment techniques. In this paper a new alignment-free method for DNA sequence clustering and classification is proposed. The method is based on k-mer representation and text mining techniques. Methods The presented method is based on probabilistic topic modeling, a statistical technique originally proposed for text documents. Probabilistic topic models are able to find, in a document corpus, the topics (recurrent themes) characterizing classes of documents. This technique, applied to DNA sequences representing the documents, exploits the frequency of fixed-length k-mers and builds a generative model for a training group of sequences. This generative model, obtained through the Latent Dirichlet Allocation (LDA) algorithm, is then used to classify a large set of genomic sequences. Results and conclusions We performed classification of over 7000 16S DNA barcode sequences taken from the Ribosomal Database Project (RDP) repository, training probabilistic topic models. The proposed method is compared to the RDP tool and the Support Vector Machine (SVM) classification algorithm in an extensive set of trials using both complete sequences and short sequence snippets (from 400 bp to 25 bp). Our method reaches results very similar to those of the RDP classifier and SVM for complete sequences. The most interesting results are obtained when short sequence snippets are considered. In these conditions the proposed method outperforms RDP and SVM with ultra-short sequences, and it exhibits a smooth decrease of performance, at every taxonomic level, as the sequence length is decreased. PMID:25916734
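
    A hedged sketch of the pipeline: k-mer counts act as the "words" of each sequence and feed an LDA topic model (scikit-learn). The sequences are toy strings, not the RDP 16S barcode data, and k = 4 is an arbitrary illustrative choice.

```python
# k-mer counting + Latent Dirichlet Allocation over DNA sequences.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

seqs = ["ATGCGTACGTTAGC", "ATGCGTACGTAAGC", "TTGACCGGTTAACC", "TTGACCGGTAACCA"]

# Represent each sequence by its 4-mer occurrence counts.
vec = CountVectorizer(analyzer="char", ngram_range=(4, 4), lowercase=False)
Xk = vec.fit_transform(seqs)

lda = LatentDirichletAllocation(n_components=2, random_state=0)
theta = lda.fit_transform(Xk)   # per-sequence topic proportions
print(theta.round(2))           # rows can serve as classification features
```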

  12. Migration of radionuclides with ground water: a discussion of the relevance of the input parameters used in model calculations

    International Nuclear Information System (INIS)

    Jensen, B.S.

    1982-01-01

    It is probably obvious to all that establishing the scientific basis of geological waste disposal by going deeper and deeper into detail could fill the working hours of hundreds of scientists for hundreds of years. Such an endeavor is, however, impossible to complete, and we are forced to define criteria telling us and others when knowledge and insight are sufficient. In the present case of geological disposal, one needs to be able to predict the migration behavior of a series of radionuclides under diverse conditions to ascertain that unacceptable transfer to the biosphere never occurs. We have already collected a huge amount of data concerning migration phenomena, some very useful, others less so, but we still need investigations departing from the simple idealized concepts, which most often have provided modellers with the input data to their calculations. I therefore advocate that basic research be pursued to the point where it is possible to put limits on the effect of the lesser known factors on the migration behavior of radionuclides. When such limits have been established, it will be possible to make calculations on the worst cases that may occur. Although I personally believe that these extra investigations will demonstrate additional safety in geological disposal, this fact will convince nobody; only experimental facts will do.

  13. Fully automated calculation of image-derived input function in simultaneous PET/MRI in a sheep model

    International Nuclear Information System (INIS)

    Jochimsen, Thies H.; Zeisig, Vilia; Schulz, Jessica; Werner, Peter; Patt, Marianne; Patt, Jörg; Dreyer, Antje Y.; Boltze, Johannes; Barthel, Henryk; Sabri, Osama; Sattler, Bernhard

    2016-01-01

    Obtaining the arterial input function (AIF) from image data in dynamic positron emission tomography (PET) examinations is a non-invasive alternative to arterial blood sampling. In simultaneous PET/magnetic resonance imaging (PET/MRI), high-resolution MRI angiographies can be used to define major arteries for correction of partial-volume effects (PVE) and point-spread-function (PSF) response in the PET data. The present study describes a fully automated method to obtain the image-derived input function (IDIF) in PET/MRI. Results are compared to those obtained by arterial blood sampling. To segment the trunk of the major arteries in the neck, a high-resolution time-of-flight MRI angiography was postprocessed by a vessel-enhancement filter based on the inertia tensor. Together with the measured PSF of the PET subsystem, the arterial mask was used for geometrical deconvolution, yielding the time-resolved activity concentration averaged over a major artery. The method was compared to manual arterial blood sampling at the hind leg of 21 sheep (animal stroke model) during measurement of blood flow with 15O-water. Absolute quantification of activity concentration was compared after bolus passage during steady state, i.e., between 2.5 and 5 min post injection. Cerebral blood flow (CBF) values from blood sampling and IDIF were also compared. The cross-calibration factor obtained by comparing activity concentrations in blood samples and IDIF during steady state is 0.98 ± 0.10. In all examinations, the IDIF showed a much earlier and sharper bolus peak than the time course of activity concentration obtained by arterial blood sampling. CBF using the IDIF was 22 % higher than CBF obtained by using the AIF yielded by blood sampling. The small deviation between arterial blood sampling and IDIF during steady state indicates that correction of PVE and PSF is possible with the method presented. The differences in bolus dynamics and, hence, CBF values can be explained by the

  15. Application of regional physically-based landslide early warning model: tuning of the input parameters and validation of the results

    Science.gov (United States)

    D'Ambrosio, Michele; Tofani, Veronica; Rossi, Guglielmo; Salvatici, Teresa; Tacconi Stefanelli, Carlo; Rosi, Ascanio; Benedetta Masi, Elena; Pazzi, Veronica; Vannocci, Pietro; Catani, Filippo; Casagli, Nicola

    2017-04-01

    The Aosta Valley region is located in the north-western Alpine mountain chain. The geomorphology of the region is characterized by steep slopes and high climatic variability across a wide altitude range (from 400 m a.s.l. on the Dora Baltea river floodplain to 4810 m a.s.l. on Mont Blanc). In the study area (zone B), located in the eastern part of Aosta Valley, heavy rainfall of about 800-900 mm per year is the main landslide trigger. These features lead to a high hydrogeological risk throughout the territory, as mass movements affect 70% of the municipal areas (mainly shallow rapid landslides and rock falls). An in-depth study of the geotechnical and hydrological properties of the hillslopes controlling shallow landslide formation was conducted, with the aim of improving the reliability of a deterministic model named HIRESS (HIgh REsolution Stability Simulator). In particular, two campaigns of on-site measurements and laboratory experiments were performed. The data obtained were studied in order to assess the relationships among the different parameters and the bedrock lithology. The soils analyzed at 12 survey points are mainly composed of sand and gravel, with highly variable contents of silt. The measured ranges of effective internal friction angle (from 25.6° to 34.3°) and effective cohesion (from 0 kPa to 9.3 kPa), together with the median ks value (10^-6 m/s), are consistent with the average grain sizes (gravelly sand). The data collected contribute to generating the input parameter maps for HIRESS (static data). Further static data are: volume weight, residual water content, porosity and grain size index. In order to improve the original formulation of the model, the contribution of root cohesion has also been taken into account, based on the vegetation map and literature values. HIRESS is a physically based distributed slope stability simulator for analyzing shallow landslide triggering conditions in real time and over large areas using parallel computational techniques. The software
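
    The abstract does not give HIRESS's governing equations, but physically based shallow-landslide simulators of this kind typically build on an infinite-slope limit-equilibrium balance that combines exactly the quantities measured here: effective cohesion, root cohesion, friction angle, soil depth and pore pressure. The generic sketch below is that textbook relation, not HIRESS itself, and all numerical values are illustrative.

        import math

        def factor_of_safety(c_eff, c_root, phi_deg, slope_deg, z, z_w,
                             gamma=19e3, gamma_w=9.81e3):
            """c_eff, c_root in Pa; phi_deg = effective friction angle (deg);
            z = soil depth (m); z_w = water table height above the slip
            surface (m); gamma, gamma_w = soil and water unit weights (N/m^3)."""
            beta = math.radians(slope_deg)
            phi = math.radians(phi_deg)
            # Effective normal stress on the slip surface, reduced by pore pressure.
            sigma_eff = (gamma * z - gamma_w * z_w) * math.cos(beta) ** 2
            resisting = c_eff + c_root + sigma_eff * math.tan(phi)
            driving = gamma * z * math.sin(beta) * math.cos(beta)
            return resisting / driving

        # Mid-range values from the abstract (phi' ~30 deg, c' ~5 kPa) plus an
        # assumed 2 kPa root cohesion, on a 35-degree slope, 1.5 m soil depth,
        # half-saturated profile.
        print(round(factor_of_safety(5e3, 2e3, 30, 35, 1.5, 0.75), 2))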

  16. An extended environmental input-output lifecycle assessment model to study the urban food-energy-water nexus

    Science.gov (United States)

    Sherwood, John; Clabeaux, Raeanne; Carbajales-Dale, Michael

    2017-10-01

    We developed a physically-based environmental account of US food production systems and integrated these data into the environmental input-output life cycle assessment (EIO-LCA) model. The extended model was used to characterize the food, energy, and water (FEW) intensities of every US economic sector. The model was then applied to every Bureau of Economic Analysis metropolitan statistical area (MSA) to determine its FEW usage. The extended EIO-LCA model can determine the water resource use (kGal), energy resource use (TJ), and food resource use in units of mass (kg) or energy content (kcal) of any economic activity within the United States. We analyzed every economic sector to determine its FEW intensities per dollar of economic output. These data were applied to each of the 382 MSAs to determine their total and per-dollar-of-GDP FEW usage by allocating each MSA's economic production to the corresponding FEW intensities of US economic sectors. Additionally, a longitudinal study was performed for the Los Angeles-Long Beach-Anaheim, CA, metropolitan statistical area to examine trends in this single MSA and compare it to the overall results. Results show a strong correlation between GDP and energy use, and between food and water use, across MSAs. There is also a correlation between GDP and greenhouse gas emissions. The longitudinal study indicates that these correlations can shift alongside a shifting industrial composition. Comparing MSAs on a per-GDP basis reveals that central and southern California tend to be more resource intensive than many other parts of the country, while much of Florida has abnormally low resource requirements. Results of this study enable a more complete understanding of food, energy, and water as key ingredients of a functioning economy. With the addition of the food data to the EIO-LCA framework, researchers will be able to better study the food-energy-water nexus and gain insight into how these three vital resources are interconnected.

  17. A representation result for hysteresis operators with vector valued inputs and its application to models for magnetic materials

    Energy Technology Data Exchange (ETDEWEB)

    Klein, Olaf, E-mail: Olaf.Klein@wias-berlin.de

    2014-02-15

    In this work, hysteresis operators mapping continuous vector-valued input functions that are piecewise monotaffine, i.e. piecewise the composition of a monotone with an affine function, to vector-valued output functions are considered. It is shown that such an operator is generated by a uniquely defined function on the set of convexity-triple-free strings. A congruence property for periodic inputs is formulated and recast as a condition on the generating string function.

  18. Development of a MODIS-Derived Surface Albedo Data Set: An Improved Model Input for Processing the NSRDB

    Energy Technology Data Exchange (ETDEWEB)

    Maclaurin, Galen [National Renewable Energy Lab. (NREL), Golden, CO (United States); Sengupta, Manajit [National Renewable Energy Lab. (NREL), Golden, CO (United States); Xie, Yu [National Renewable Energy Lab. (NREL), Golden, CO (United States); Gilroy, Nicholas [National Renewable Energy Lab. (NREL), Golden, CO (United States)

    2016-12-01

    A significant source of bias in the transposition of global horizontal irradiance to plane-of-array (POA) irradiance arises from inaccurate estimates of surface albedo. The current physics-based model used to produce the National Solar Radiation Database (NSRDB) relies on model estimates of surface albedo from a reanalysis climatology produced at relatively coarse spatial resolution compared to that of the NSRDB. As an input to spectral decomposition and transposition models, more accurate surface albedo data from remotely sensed imagery at finer spatial resolutions would improve accuracy in the final product. The National Renewable Energy Laboratory (NREL) developed an improved white-sky (bi-hemispherical reflectance) broadband (0.3-5.0 μm) surface albedo data set for processing the NSRDB from two existing data sets: a gap-filled albedo product and a daily snow cover product. The Moderate Resolution Imaging Spectroradiometer (MODIS) sensors onboard the Terra and Aqua satellites have provided high-quality measurements of surface albedo at 30 arc-second spatial resolution and 8-day temporal resolution since 2001. The high spatial and temporal resolution and the temporal coverage of the MODIS sensor allow for improved modeling of POA irradiance in the NSRDB. However, cloud and snow cover interfere with MODIS observations of ground surface albedo, and thus they require post-processing. The MODIS production team applied a gap-filling methodology to interpolate observations obscured by clouds or ephemeral snow. This approach filled in pixels with ephemeral snow cover because the 8-day temporal resolution is too coarse to accurately capture the variability of snow cover and its impact on albedo estimates. However, for this project, accurate representation of daily snow cover change is important in producing the NSRDB. Therefore, NREL also used the Integrated Multisensor Snow and Ice Mapping System data set, which provides daily snow cover observations of the

  19. Next-generation phylogeography: a targeted approach for multilocus sequencing of non-model organisms.

    Directory of Open Access Journals (Sweden)

    Jonathan B Puritz

    The field of phylogeography has long since realized the need for and utility of incorporating nuclear DNA (nDNA) sequences into analyses. However, the use of nDNA sequence data at the population level has been hindered by technical laboratory difficulty, sequencing costs, and problematic analytical methods for dealing with genotypic sequence data, especially in non-model organisms. Here, we present a method utilizing the 454 GS-FLX Titanium pyrosequencing platform with the capacity to simultaneously sequence two species of sea star (Meridiastra calcar and Parvulastra exigua) at five different nDNA loci across 16 different populations of 20 individuals each per species. We compare results from 3 populations with traditional Sanger-sequencing-based methods, and demonstrate that this next-generation sequencing platform is more time- and cost-effective and more sensitive to rare variants than Sanger-based sequencing. A crucial advantage is that the high coverage of clonally amplified sequences simplifies haplotype determination, even in highly polymorphic species. This targeted next-generation approach can greatly increase the use of nDNA sequence loci in phylogeographic and population genetic studies by mitigating many of the time, cost, and analytical issues associated with highly polymorphic, diploid sequence markers.

  20. A robust hybrid model integrating enhanced inputs based extreme learning machine with PLSR (PLSR-EIELM) and its application to intelligent measurement.

    Science.gov (United States)

    He, Yan-Lin; Geng, Zhi-Qiang; Xu, Yuan; Zhu, Qun-Xiong

    2015-09-01

    In this paper, a robust hybrid model integrating an enhanced-inputs-based extreme learning machine with partial least squares regression (PLSR-EIELM) is proposed. The proposed PLSR-EIELM model can overcome two main flaws of the extreme learning machine (ELM), i.e. the difficulty of determining the optimal number of hidden-layer neurons and the over-fitting phenomenon. First, a traditional extreme learning machine (ELM) is selected. Second, the weights between the input layer and the hidden layer are assigned randomly, and the nonlinear transformation of the independent variables is obtained from the outputs of the hidden-layer neurons. In particular, the original input variables are retained as enhanced inputs; the enhanced inputs and the nonlinearly transformed variables are then tied together as the full set of independent variables. In this way, PLSR can be carried out to identify the PLS components not only from the nonlinearly transformed variables but also from the original input variables, which removes the correlation among the independent variables while relating them to the expected outputs. Finally, the optimal relationship model between the full set of independent variables and the expected outputs can be achieved by using PLSR. Thus, the PLSR-EIELM model is developed. The PLSR-EIELM model then served as an intelligent measurement tool for the key variables of the Purified Terephthalic Acid (PTA) process and the High Density Polyethylene (HDPE) process. The experimental results show that the predictive accuracy of PLSR-EIELM is stable, which indicates that PLSR-EIELM is robust. Moreover, compared with ELM, PLSR, hierarchical ELM (HELM), and PLSR-ELM, PLSR-EIELM achieves much smaller relative prediction errors in these two applications.
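
    A minimal sketch of the PLSR-EIELM recipe as the abstract describes it: random, untrained hidden-layer weights; nonlinear hidden outputs; "enhanced inputs" formed by concatenating the original variables with those outputs; and PLSR over the whole set. The data, sizes, and component count below are illustrative, not the PTA/HDPE settings.

        import numpy as np
        from sklearn.cross_decomposition import PLSRegression

        rng = np.random.default_rng(0)
        X = rng.normal(size=(200, 6))                  # toy process inputs
        y = np.sin(X[:, 0]) + 0.5 * X[:, 1] ** 2 + 0.1 * rng.normal(size=200)

        # ELM-style random projection: weights are assigned randomly, never trained.
        n_hidden = 30
        W = rng.normal(size=(X.shape[1], n_hidden))
        b = rng.normal(size=n_hidden)
        H = np.tanh(X @ W + b)                         # nonlinear transformed variables

        # Enhanced inputs: original variables tied together with hidden outputs.
        Z = np.hstack([X, H])

        # PLSR extracts components from the enhanced inputs, which sidesteps an
        # exact choice of hidden-layer size and damps collinearity in Z.
        pls = PLSRegression(n_components=8).fit(Z, y)
        print("train R^2:", round(pls.score(Z, y), 3))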

  1. Enhancement of a robust arcuate GABAergic input to gonadotropin-releasing hormone neurons in a model of polycystic ovarian syndrome.

    Science.gov (United States)

    Moore, Aleisha M; Prescott, Mel; Marshall, Christopher J; Yip, Siew Hoong; Campbell, Rebecca E

    2015-01-13

    Polycystic ovarian syndrome (PCOS), the leading cause of female infertility, is associated with an increase in luteinizing hormone (LH) pulse frequency, implicating abnormal steroid hormone feedback to gonadotropin-releasing hormone (GnRH) neurons. This study investigated whether modifications in the synaptically connected neuronal network of GnRH neurons could account for this pathology. The PCOS phenotype was induced in mice following prenatal androgen (PNA) exposure. Serial blood sampling confirmed that PNA elicits increased LH pulse frequency and impaired progesterone negative feedback in adult females, mimicking the neuroendocrine abnormalities of the clinical syndrome. Imaging of GnRH neurons revealed greater dendritic spine density that correlated with increased putative GABAergic but not glutamatergic inputs in PNA mice. Mapping of steroid hormone receptor expression revealed that PNA mice had 59% fewer progesterone receptor-expressing cells in the arcuate nucleus of the hypothalamus (ARN). To address whether increased GABA innervation to GnRH neurons originates in the ARN, a viral-mediated Cre-lox approach was taken to trace the projections of ARN GABA neurons in vivo. Remarkably, projections from ARN GABAergic neurons heavily contacted and even bundled with GnRH neuron dendrites, and the density of fibers apposing GnRH neurons was even greater in PNA mice (56%). Additionally, this ARN GABA population showed significantly less colocalization with progesterone receptor in PNA animals compared with controls. Together, these data describe a robust GABAergic circuit originating in the ARN that is enhanced in a model of PCOS and may underpin the neuroendocrine pathophysiology of the syndrome.

  2. A probabilistic cell model in background corrected image sequences for single cell analysis

    Directory of Open Access Journals (Sweden)

    Fieguth Paul

    2010-10-01

    Background Methods of manual cell localization and outlining are so onerous that automated tracking methods would seem mandatory for handling huge image sequences; nevertheless, manual tracking is, astonishingly, still widely practiced in areas such as cell biology that lie outside the influence of most image-processing research. The goal of our research is to address this gap by developing automated methods of cell tracking, localization, and segmentation. Since even an optimal frame-to-frame association method cannot compensate for and recover from poor detection, it is clear that the quality of cell tracking depends on the quality of cell detection within each frame. Methods Cell detection performs poorly where the background is not uniform and includes temporal illumination variations, spatial non-uniformities, and stationary objects such as well boundaries (which confine the cells under study). To improve cell detection, the signal-to-noise ratio of the input image can be increased via accurate background estimation. In this paper we investigate background estimation for the purpose of cell detection. We propose a cell model and a method for background estimation, driven by the proposed cell model, such that well structure can be identified, and explicitly rejected, when estimating the background. Results The resulting background-removed images have fewer artifacts and allow cells to be localized and detected more reliably. The experimental results generated by applying the proposed method to different Hematopoietic Stem Cell (HSC) image sequences are quite promising. Conclusion The understanding of cell behavior relies on precise information about the temporal dynamics and spatial distribution of cells. Such information may play a key role in disease research and regenerative medicine, so automated methods for observation and measurement of cells from microscopic images are in high demand. The proposed method in this paper is capable

  3. Waste Isolation Pilot Plant environmental impact report: an outline of the input-output model and the impact projections methodology. Technical document, socioeconomic portion

    International Nuclear Information System (INIS)

    1978-07-01

    A static model in the form of a regional input-output model was constructed for Eddy and Lea Counties, New Mexico. Besides the WIPP project, the model was also used for several other projects to determine the economic impact of proposed new facilities and developments. Both private and public sectors are covered. Sub-sectors for WIPP below-ground construction, above-ground construction, and operation and transport are included

  4. De novo structural modeling and computational sequence analysis ...

    African Journals Online (AJOL)

    Different bioinformatics tools and machine learning techniques were used for protein structural classification. De novo protein modeling was performed using the I-TASSER server. The final model obtained was assessed with PROCHECK and DFIRE2, which confirmed that the final model is reliable. Until complete biochemical ...

  5. Continuous and simultaneous estimation of finger kinematics using inputs from an EMG-to-muscle activation model.

    Science.gov (United States)

    Ngeo, Jimson G; Tamei, Tomoya; Shibata, Tomohiro

    2014-08-14

    Surface electromyography (EMG) signals are often used in many robot and rehabilitation applications because they reflect the motor intentions of users very well. However, very few studies have focused on the accurate and proportional control of the human hand using EMG signals. Many have focused on discrete gesture classification, and some have encountered inherent problems such as electromechanical delay (EMD). Here, we present a new method for estimating simultaneous and multiple finger kinematics from multi-channel surface EMG signals. In this study, surface EMG signals from the forearm and finger kinematic data were extracted from ten able-bodied subjects while they performed individual and simultaneous flexion and extension movements of multiple fingers in free space. Instead of using traditional time-domain features of EMG, an EMG-to-muscle-activation model that parameterizes EMD was used and shown to give better estimation performance. A fast feed-forward artificial neural network (ANN) and a nonparametric Gaussian Process (GP) regressor were both used and evaluated to estimate complex finger kinematics, the latter being rarely used in the related literature. The estimation accuracies, in terms of mean correlation coefficient, were 0.85 ± 0.07, 0.78 ± 0.06 and 0.73 ± 0.04 for the metacarpophalangeal (MCP), proximal interphalangeal (PIP) and distal interphalangeal (DIP) finger joint DOFs, respectively. The mean root-mean-square error in each individual DOF ranged from 5 to 15%. We show that estimation improved using the proposed muscle-activation inputs compared to other features, and that GP regression gave better estimation results when fewer training samples were used. The proposed method provides a viable means of capturing the general trend of finger movements and shows a good way of estimating finger joint kinematics using a muscle-activation model that parameterizes EMD. The results from this study demonstrate a potential control
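
    As a toy illustration of the regression stage (mapping a muscle-activation feature to a finger-joint angle with a Gaussian Process, one of the two regressors evaluated), the sketch below uses scikit-learn on synthetic signals. The study's EMG-to-activation model and data are not reproduced, and the kernel choice is an assumption.

        import numpy as np
        from sklearn.gaussian_process import GaussianProcessRegressor
        from sklearn.gaussian_process.kernels import RBF, WhiteKernel

        rng = np.random.default_rng(1)
        t = np.linspace(0, 10, 400)
        activation = np.abs(np.sin(0.8 * t))[:, None]   # stand-in activation feature
        angle = 60 * activation.ravel() + 2 * rng.normal(size=t.size)  # joint angle, deg

        gp = GaussianProcessRegressor(kernel=RBF() + WhiteKernel(), normalize_y=True)
        gp.fit(activation[:300], angle[:300])           # train on the first 300 samples

        pred = gp.predict(activation[300:])
        r = np.corrcoef(pred, angle[300:])[0, 1]        # the paper's accuracy metric
        print("correlation coefficient:", round(r, 3))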

  6. Monte Carlo modeling of the net effects of coma scattering and thermal reradiation on the energy input to cometary nucleus

    International Nuclear Information System (INIS)

    Salo, H.

    1988-01-01

    A Monte Carlo simulation method is presented that can, to an accuracy of a few percent, calculate the effects of a dusty coma on the total energy input to the cometary nucleus. This method treats nonconservative, nonisotropic scattering, as well as reflection from the nucleus surface. Results are presented as a function of the optical thickness of the dust column along the sun-comet axis. The total energy input to the nucleus appears to be only weakly dependent on the opacity of the coma, the radial distribution of the dust, or the details of the extinction processes. 18 references

  7. An introduction to hidden Markov models for biological sequences

    DEFF Research Database (Denmark)

    Krogh, Anders Stærmose

    1998-01-01

    A non-mathematical tutorial on hidden Markov models (HMMs), plus a description of one of the applications of HMMs: gene finding.

  8. Revised sequence components power system models for unbalanced power system studies

    Energy Technology Data Exchange (ETDEWEB)

    Abdel-Akher, M. [Tunku Abdul Rahman Univ., Kuala Lumpur (Malaysia); Nor, K.-M. [Univ. of Technology Malaysia, Johor (Malaysia); Rashid, A.H.A. [Univ. of Malaya, Kuala Lumpur (Malaysia)

    2007-07-01

    The principal method of analysis using positive-, negative-, and zero-sequence networks has been used to examine balanced power systems under both balanced and unbalanced loading conditions. The significant advantage of sequence networks is that they become entirely uncoupled in the case of balanced three-phase power systems. The uncoupled sequence networks can then be solved independently, as in fault-calculation programs. However, the assumption of a balanced power system cannot be maintained in many cases, due to untransposed transmission lines, multiphase line segments in a distribution power system, or transformer phase shifts, none of which can be incorporated in the existing models. Revised sequence-decoupled power system models for analyzing unbalanced power systems based on symmetrical networks were presented in this paper. These models included synchronous machines, transformers, transmission lines, and voltage regulators. The models were derived from their counterparts in the phase-coordinates frame of reference. In these models, the three sequence networks are fully decoupled while retaining three-phase features such as transformer phase shifts and transmission-line coupling. The proposed models were used to develop an unbalanced power-flow program for analyzing both balanced and unbalanced networks. The power-flow solution was identical to results obtained from a full phase-coordinates three-phase power-flow program. 11 refs., 3 tabs.
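
    The decoupling described above rests on Fortescue's symmetrical-components transform, which maps the three phase quantities to zero-, positive-, and negative-sequence components. A minimal numerical check of that transform (values illustrative; the paper's revised models themselves are not reproduced here):

        import numpy as np

        a = np.exp(2j * np.pi / 3)                 # 120-degree rotation operator
        A = np.array([[1, 1, 1],
                      [1, a**2, a],
                      [1, a, a**2]])               # sequence-to-phase matrix

        # Slightly unbalanced phase voltages (per unit).
        V_phase = np.array([1.0, 0.9 * a**2, 1.1 * a])
        V_seq = np.linalg.solve(A, V_phase)        # [V0, V1, V2]
        print(np.round(np.abs(V_seq), 4))          # nonzero V0, V2 reveal the unbalance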

  9. Fast and Sequence-Adaptive Whole-Brain Segmentation Using Parametric Bayesian Modeling

    DEFF Research Database (Denmark)

    Puonti, Oula; Iglesias, Juan Eugenio; Van Leemput, Koen

    2016-01-01

    the performance of a segmentation algorithm designed to meet these requirements, building upon generative parametric models previously used in tissue classification. The method is tested on four different datasets acquired with different scanners, field strengths and pulse sequences, demonstrating comparable...

  10. Data input guide for SWIFT II. The Sandia waste-isolation flow and transport model for fractured media, Release 4.84

    International Nuclear Information System (INIS)

    Reeves, M.; Ward, D.S.; Johns, N.D.; Cranwell, R.M.

    1986-04-01

    This report is one of three that describe the SWIFT II computer code. The code simulates flow and transport processes in geologic media which may be fractured. SWIFT II was developed for use in the analysis of deep geologic facilities for nuclear-waste disposal. This user's manual should permit the analyst to use the code effectively by facilitating the preparation of input data. A second companion document discusses the theory and implementation of the models employed by the SWIFT II code. A third document provides illustrative problems for instructional purposes. This report contains detailed descriptions of the input data along with an appendix of input diagnostics. The use of auxiliary files, unit conversions, and program variable descriptors is also described in this document

  11. The effect of adjusting model inputs to achieve mass balance on time-dynamic simulations in a food-web model of Lake Huron

    Science.gov (United States)

    Langseth, Brian J.; Jones, Michael L.; Riley, Stephen C.

    2014-01-01

    Ecopath with Ecosim (EwE) is a widely used modeling tool in fishery research and management. Ecopath requires a mass-balanced snapshot of a food web at a particular point in time, which Ecosim then uses to simulate changes in biomass over time. Initial inputs to Ecopath, including estimates of biomasses, production-to-biomass ratios, consumption-to-biomass ratios, and diets, rarely produce mass balance, and thus ad hoc changes to the inputs are required to balance the model. There has been little previous research on whether ad hoc changes made to achieve mass balance affect Ecosim simulations. We constructed an EwE model for the offshore community of Lake Huron and balanced the model using four contrasting but realistic methods. The four balancing methods were based on two contrasting approaches: in the first approach, production of unbalanced groups was increased by increasing either biomass or the production-to-biomass ratio, while in the second approach, consumption of predators on unbalanced groups was decreased by decreasing either biomass or the consumption-to-biomass ratio. We compared six simulation scenarios based on three alternative assumptions about the extent to which mortality rates of prey can change in response to changes in predator biomass (i.e., vulnerabilities) under perturbations to either fishing mortality or environmental production. Changes in simulated biomass values over time were used in a principal components analysis to assess the comparative effect of balancing method, vulnerabilities, and perturbation types. Vulnerabilities explained the most variation in biomass, followed by the type of perturbation. The choice of balancing method explained little of the overall variation in biomass. Under scenarios where changes in predator biomass caused large changes in mortality rates of prey (i.e., high vulnerabilities), variation in biomass was greater than when changes in predator biomass caused only small changes in mortality rates of prey (i.e., low
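
    For readers unfamiliar with what "balancing" means here: Ecopath's mass-balance condition requires that each group's production cover predation plus catches, i.e. that its ecotrophic efficiency EE not exceed 1. The toy check below (invented values, not the Lake Huron model) flags an unbalanced group, which the four methods above would then repair by raising its production or cutting consumption on it.

        import numpy as np

        B  = np.array([10.0, 4.0, 1.0])         # biomass per group
        PB = np.array([2.0, 1.2, 0.5])          # production/biomass ratios
        QB = np.array([0.0, 6.0, 3.0])          # consumption/biomass ratios
        DC = np.array([[0.0, 0.9, 0.2],         # DC[i, j]: share of predator j's
                       [0.0, 0.0, 0.7],         # diet made up of prey i
                       [0.0, 0.0, 0.0]])
        Y  = np.array([0.5, 0.3, 0.1])          # fishery catches

        predation = DC @ (B * QB)               # total consumption of each prey group
        EE = (predation + Y) / (B * PB)         # ecotrophic efficiency per group
        print(np.round(EE, 3))                  # any EE > 1 flags an unbalanced group
        # First approach: raise production (B or P/B) of unbalanced groups;
        # second approach: cut consumption (predator B or Q/B) on them.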

  12. Secondary structure classification of amino-acid sequences using state-space modeling

    OpenAIRE

    Brunnert, Marcus; Krahnke, Tillmann; Urfer, Wolfgang

    2001-01-01

    The secondary structure classification of amino acid sequences can be carried out by a statistical analysis of sequence and structure data using state-space models. Aiming at this classification, a modified filter algorithm programmed in S is applied to data of three proteins. The application leads to correct classifications of two proteins even when using relatively simple estimation methods for the parameters of the state-space models. Furthermore, it has been shown that the assumed initial...

  13. Sequence-structure relationships in RNA loops: establishing the basis for loop homology modeling.

    Science.gov (United States)

    Schudoma, Christian; May, Patrick; Nikiforova, Viktoria; Walther, Dirk

    2010-01-01

    The specific function of RNA molecules frequently resides in their seemingly unstructured loop regions. We performed a systematic analysis of RNA loops extracted from experimentally determined three-dimensional structures of RNA molecules. A comprehensive loop-structure data set was created and organized into distinct clusters based on structural and sequence similarity. We detected clear evidence of the hallmark of homology present in the sequence-structure relationships in loops. Loops differing by structures. Thus, our results support the application of homology modeling for RNA loop model building. We established a threshold that may guide the sequence divergence-based selection of template structures for RNA loop homology modeling. Of all possible sequences that are, under the assumption of isosteric relationships, theoretically compatible with actual sequences observed in RNA structures, only a small fraction is contained in the Rfam database of RNA sequences and classes implying that the actual RNA loop space may consist of a limited number of unique loop structures and conserved sequences. The loop-structure data sets are made available via an online database, RLooM. RLooM also offers functionalities for the modeling of RNA loop structures in support of RNA engineering and design efforts.

  14. A Local Poisson Graphical Model for inferring networks from sequencing data.

    Science.gov (United States)

    Allen, Genevera I; Liu, Zhandong

    2013-09-01

    Gaussian graphical models, a class of undirected graphs or Markov networks, are often used to infer gene networks based on microarray expression data. Many scientists, however, have begun using high-throughput sequencing technologies such as RNA-sequencing or next-generation sequencing to measure gene expression. As the resulting data consist of counts of sequencing reads for each gene, Gaussian graphical models are not optimal for these discrete data. In this paper, we propose a novel method for inferring gene networks from sequencing data: the Local Poisson Graphical Model. Our model assumes a local Markov property where each variable, conditional on all other variables, is Poisson distributed. We develop a neighborhood-selection algorithm to fit our model locally by performing a series of l1-penalized Poisson, or log-linear, regressions. This yields a fast parallel algorithm for estimating networks from next-generation sequencing data. In simulations, we illustrate the effectiveness of our methods for recovering network structure from count data. A case study on breast cancer microRNAs (miRNAs), a novel application of graphical models, finds known regulators of breast cancer genes and discovers novel miRNA clusters and hubs that are targets for future research.
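
    A minimal sketch of the neighborhood-selection idea on simulated counts, using statsmodels for the l1-penalized Poisson (log-linear) regressions. The penalty level, the use of raw counts as covariates, and the simulated data are illustrative choices, not the authors' settings.

        import numpy as np
        import statsmodels.api as sm

        rng = np.random.default_rng(2)
        n, p = 300, 5
        counts = rng.poisson(5, size=(n, p)).astype(float)
        counts[:, 1] = rng.poisson(np.exp(0.3 * np.log1p(counts[:, 0])))  # link 0 -> 1

        edges = set()
        for j in range(p):
            # Regress gene j's counts on all other genes with a pure l1 penalty.
            y, X = counts[:, j], np.delete(counts, j, axis=1)
            model = sm.GLM(y, sm.add_constant(X), family=sm.families.Poisson())
            fit = model.fit_regularized(alpha=0.05, L1_wt=1.0)
            for k, coef in enumerate(fit.params[1:]):       # skip the intercept
                if abs(coef) > 1e-6:
                    neighbor = k if k < j else k + 1        # undo the column deletion
                    edges.add(tuple(sorted((j, neighbor))))
        print("estimated edges:", sorted(edges))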

  15. Modeling the ionosphere-thermosphere response to a geomagnetic storm using physics-based magnetospheric energy input: OpenGGCM-CTIM results

    Directory of Open Access Journals (Sweden)

    Connor Hyunju Kim

    2016-01-01

    The magnetosphere is a major source of energy for the Earth's ionosphere and thermosphere (IT) system. Current IT models drive the upper atmosphere using empirically calculated magnetospheric energy input. Thus, they do not sufficiently capture the storm-time dynamics, particularly at high latitudes. To improve the prediction capability of IT models, a physics-based magnetospheric input is necessary. Here, we use the Open Geospace General Circulation Model (OpenGGCM) coupled with the Coupled Thermosphere Ionosphere Model (CTIM). OpenGGCM calculates a three-dimensional global magnetosphere and a two-dimensional high-latitude ionosphere by solving resistive magnetohydrodynamic (MHD) equations with solar wind input. CTIM calculates a global thermosphere and a high-latitude ionosphere in three dimensions using realistic magnetospheric inputs from the OpenGGCM. We investigate whether the coupled model improves the storm-time IT responses by simulating a geomagnetic storm that was preceded by a strong solar wind pressure front on August 24, 2005. We compare the OpenGGCM-CTIM results with low-earth-orbit satellite observations and with the model results of the Coupled Thermosphere-Ionosphere-Plasmasphere electrodynamics model (CTIPe). CTIPe is an up-to-date version of CTIM that incorporates more IT dynamics, such as a low-latitude ionosphere and a plasmasphere, but uses empirical magnetospheric input. OpenGGCM-CTIM reproduces localized neutral density peaks at ~ 400 km altitude in the high-latitude dayside regions, in agreement with in situ observations during the pressure shock and the early phase of the storm. Although CTIPe is in some sense a much superior model to CTIM, it misses these localized enhancements. Unlike the CTIPe empirical input models, OpenGGCM-CTIM more faithfully produces localized increases of both auroral precipitation and ionospheric electric fields near the high-latitude dayside region after the pressure shock and after the storm onset.

  16. Sequence-based model of gap gene regulatory network.

    Science.gov (United States)

    Kozlov, Konstantin; Gursky, Vitaly; Kulakovskiy, Ivan; Samsonova, Maria

    2014-01-01

    The detailed analysis of transcriptional regulation is crucially important for understanding biological processes. The gap gene network in Drosophila attracts large interest among researchers studying mechanisms of transcriptional regulation. It implements the most upstream regulatory layer of the segmentation gene network. Knowledge of the molecular mechanisms involved in gap gene regulation is far less complete than that of the genetics of the system. Mathematical modeling goes beyond the insights gained by genetic and molecular approaches. It allows us to reconstruct wild-type gene expression patterns in silico, infer the underlying regulatory mechanism and prove its sufficiency. We developed a new model that provides a dynamical description of gap gene regulatory systems, using detailed DNA-based information, as well as spatial transcription factor concentration data at varying time points. We showed that this model correctly reproduces gap gene expression patterns in wild-type embryos and is able to predict gap expression patterns in Kr mutants and four reporter constructs. We used a four-fold cross-validation test and fitting to a random dataset to validate the model and prove its sufficiency in describing the data. The identifiability analysis showed that most model parameters are well identifiable. We reconstructed the gap gene network topology and studied the impact of individual transcription factor binding sites on the model output. We measured this impact by calculating the site regulatory weight as a normalized difference between the residual sum of squares error for the set of all annotated sites and for the set with the site of interest excluded. The reconstructed topology of the gap gene network is in agreement with previous modeling results and data from the literature. We showed that 1) the regulatory weights of transcription factor binding sites show very weak correlation with their PWM score; 2) sites with low regulatory weight are important for the model output; 3
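
    The site regulatory weight described above has a simple form: the normalized difference in residual sum of squares between the model fitted with all annotated sites and the model with one site excluded. A sketch, with model_rss standing in for evaluating the full gap-gene model and all numbers as placeholders:

        import numpy as np

        def model_rss(predictions, observed):
            # Residual sum of squares between model output and expression data.
            return float(np.sum((predictions - observed) ** 2))

        def regulatory_weight(rss_all_sites, rss_without_site):
            # Positive weight: removing the site worsens the fit, so it matters.
            return (rss_without_site - rss_all_sites) / rss_all_sites

        obs = np.array([1.0, 2.0, 3.0])
        rss_all = model_rss(np.array([1.1, 1.9, 3.2]), obs)    # all annotated sites
        rss_excl = model_rss(np.array([1.4, 1.5, 3.6]), obs)   # site of interest removed
        print(round(regulatory_weight(rss_all, rss_excl), 3))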

  17. Sequence Domain Harmonic Modeling of Type-IV Wind Turbines

    DEFF Research Database (Denmark)

    Guest, Emerson; Jensen, Kim Høj; Rasmussen, Tonny Wederberg

    2017-01-01

    -sampled pulsewidth modulation and an analysis of converter generated voltage harmonics due to compensated dead-time. The decoupling capabilities of the proposed the SD harmonic model are verified through a power quality (PQ) assessment of a 3MW Type-IV wind turbine. The assessment shows that the magnitude and phase...... of low-order odd converter generated voltage harmonics are dependent on the converter operating point and the phase of the fundamental component of converter current respectively. The SD harmonic model can be used to make PQ assessments of Type-IV wind turbines or incorporated into harmonic load flows...... for computation of PQ in wind power plants....

  18. Hidden Markov models for sequence analysis: extension and analysis of the basic method

    DEFF Research Database (Denmark)

    Hughey, Richard; Krogh, Anders Stærmose

    1996-01-01

    Hidden Markov models (HMMs) are a highly effective means of modeling a family of unaligned sequences or a common motif within a set of unaligned sequences. The trained HMM can then be used for discrimination or multiple alignment. The basic mathematical description of an HMM and its expectation-maximization training procedure is relatively straightforward. In this paper, we review the mathematical extensions and heuristics that move the method from the theoretical to the practical. Then, we experimentally analyze the effectiveness of model regularization, dynamic model modification, and optimization strategies. Finally it is demonstrated on the SH2 domain how a domain can be found from unaligned sequences using a special model type. The experimental work was completed with the aid of the Sequence Alignment and Modeling software suite.

  19. Persistence and extinction of a stochastic single-species population model in a polluted environment with impulsive toxicant input

    Directory of Open Access Journals (Sweden)

    Meng Liu

    2013-10-01

    A stochastic single-species population system in a polluted environment with impulsive toxicant input is proposed and studied. Sufficient conditions for extinction, non-persistence in the mean, strong persistence in the mean and stochastic permanence of the population are established. The threshold between strong persistence in the mean and extinction is obtained. Some simulation figures are introduced to illustrate the main results.
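
    To make the setup concrete, the Euler-Maruyama sketch below simulates one plausible instance of such a system: logistic growth with multiplicative noise, a toxicant-dependent death term, and a toxicant concentration that decays between periodic impulsive inputs. The specific equations and parameter values are illustrative assumptions, not the paper's exact model.

        import numpy as np

        rng = np.random.default_rng(3)
        dt, T = 0.001, 50.0
        r, K, b, sigma = 0.8, 10.0, 0.5, 0.2   # growth, capacity, toxicity, noise
        h, tau, u = 0.3, 2.0, 0.6              # toxicant decay, pulse period, pulse size
        pulse_every = int(round(tau / dt))

        x, c = 5.0, 0.0                        # population and toxicant concentration
        for i in range(1, int(T / dt) + 1):
            if i % pulse_every == 0:
                c += u                         # impulsive toxicant input
            c -= h * c * dt                    # first-order decay between pulses
            dW = rng.normal(0.0, np.sqrt(dt))  # Brownian increment
            x += x * (r * (1 - x / K) - b * c) * dt + sigma * x * dW
            x = max(x, 0.0)
        print(f"population after {T:.0f} time units: {x:.3f}")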

  20. An efficient binomial model-based measure for sequence comparison and its application.

    Science.gov (United States)

    Liu, Xiaoqing; Dai, Qi; Li, Lihua; He, Zerong

    2011-04-01

    Sequence comparison is one of the major tasks in bioinformatics; it can serve as evidence of structural and functional conservation, as well as of evolutionary relations. There are several similarity/dissimilarity measures for sequence comparison, but challenges remain. This paper presents a binomial model-based measure to analyze biological sequences. With the help of a random indicator, the occurrence of a word at any position of a sequence can be regarded as a Bernoulli random variable, and the distribution of the sum of word occurrences is well known to be binomial. Using a recursive formula, we computed the binomial probability of the word count and proposed a binomial model-based measure based on the relative entropy. The proposed measure was tested in extensive experiments, including classification of HEV genotypes and phylogenetic analysis, and further compared with alignment-based and alignment-free measures. The results demonstrate that the proposed measure based on the binomial model is more efficient.
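
    One simplified reading of the construction, for intuition: score each k-mer count against its Binomial(n, p0) null probability, normalize the scores into a distribution per sequence, and compare sequences by symmetrized relative entropy. The paper's recursive probability computation and exact normalization may well differ from this sketch.

        from itertools import product
        from scipy.stats import binom, entropy

        def profile(seq, k=2):
            # Null model: each word occurrence is Bernoulli with p0 = 1/4**k,
            # so the count over n = len(seq) - k + 1 positions is binomial.
            words = ["".join(w) for w in product("ACGT", repeat=k)]
            n = len(seq) - k + 1
            counts = {w: 0 for w in words}
            for i in range(n):
                counts[seq[i:i + k]] += 1
            probs = [binom.pmf(counts[w], n, 0.25 ** k) for w in words]
            s = sum(probs)
            return [p / s for p in probs]

        s1 = "ACGTACGTACGGTTACGTAC" * 5
        s2 = "AATTAATTGGCCAATTAATT" * 5
        p, q = profile(s1), profile(s2)
        d = 0.5 * (entropy(p, q) + entropy(q, p))   # symmetrized relative entropy
        print("dissimilarity:", round(d, 4))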

  1. Fast, Sequence Adaptive Parcellation of Brain MR Using Parametric Models

    DEFF Research Database (Denmark)

    Puonti, Oula; Iglesias, Juan Eugenio; Van Leemput, Koen

    2013-01-01

    In this paper we propose a method for whole brain parcellation using the type of generative parametric models typically used in tissue classification. Compared to the non-parametric, multi-atlas segmentation techniques that have become popular in recent years, our method obtains state-of-the-art ...

  2. Stochastic Petri Net Modeling of Wave Sequences in Cardiac Arrhythmias.

    Science.gov (United States)

    1987-11-01

    Wolff-Parkinson-White syndrome. The ventricles are divided into two parts. One of these is excited normally and produces R... For example, in a condition called the Wolff-Parkinson-White syndrome the ECG displays an abnormally early onset of the ventricular activity. It also... Wolff-Parkinson-White Syndrome (Fig. 8c): This model uses "basic" elements for the

  3. MCMsf -- Mixing-cell model for a steady flow MIG -- Mixing-cell input generator: A short manual for installation and operation of MCMsf using the MIG -- mixing-cell input generator

    International Nuclear Information System (INIS)

    Adar, E.M.; Kuells, C.

    2002-01-01

    The following MIG computer code is restricted to a steady-flow and steady hydrochemical system. The code for a non-steady hydrological system is still heavily dependent on external optimization libraries, such as the NAG Library. Therefore, a stand-alone, 'friendly' code or solver for the non-steady system has yet to be compiled. Readers looking to implement the mixing-cell approach in a non-steady hydrological flow system are encouraged to contact the authors. In order to simplify the procedure of preparing the data and running the Mixing-Cell Model for a steady flow system (MCMsf), a special Mixing Input Generator (MIG) has been programmed. MIG is a Visual Basic Microsoft application that runs within Excel 5.0 (and more advanced versions such as Office 2000) in a Windows 95 or newer environment. The program has been tested and used successfully in Windows NT, Windows 95 and Windows 98 together with Excel 5.0, 7.0 and 2000. The standalone version MIGSA, which will run on a Windows system without Microsoft Excel, is under development. Section 1 provides some clarifications of terms that are used both in MCMsf and MIG, whereas Section 2 briefly reviews the mathematical algorithm. For elaboration of the basic assumptions and for further mathematical description, the user is referred to the explanations provided in the Model Simplification and to the references provided in this publication

  4. Situation models and memory: the effects of temporal and causal information on recall sequence.

    Science.gov (United States)

    Brownstein, Aaron L; Read, Stephen J

    2007-10-01

    Participants watched an episode of the television show Cheers on video and then reported free recall. Recall sequence followed the sequence of events in the story; if one concept was observed immediately after another, it was recalled immediately after it. We also made a causal network of the show's story and found that recall sequence followed causal links; effects were recalled immediately after their causes. Recall sequence was more likely to follow causal links than temporal sequence, and most likely to follow causal links that were temporally sequential. Results were similar at 10-minute and 1-week delayed recall. This is the most direct and detailed evidence reported on sequential effects in recall. The causal network also predicted probability of recall; concepts with more links and concepts on the main causal chain were most likely to be recalled. This extends the causal network model to more complex materials than previous research.

  5. Combining next-generation sequencing and online databases for microsatellite development in non-model organisms.

    Science.gov (United States)

    Rico, Ciro; Normandeau, Eric; Dion-Côté, Anne-Marie; Rico, María Inés; Côté, Guillaume; Bernatchez, Louis

    2013-12-03

    Next-generation sequencing (NGS) is revolutionising marker development, and the rapidly increasing number of transcriptomes published across a wide variety of taxa is providing valuable sequence databases for the identification of genetic markers without the need to generate new sequences. Microsatellites are still the most important source of polymorphic markers in ecology and evolution. Motivated by our long-term interest in the adaptive radiation of a non-model species complex of whitefishes (Coregonus spp.), in this study we focus on microsatellite characterisation and multiplex optimisation using transcriptome sequences generated by Illumina® and Roche-454, as well as online databases of Expressed Sequence Tags (EST), for the study of whitefish evolution and demographic history. We identified and optimised 40 polymorphic loci in multiplex PCR reactions and validated the robustness of our analyses by testing several population genetics and phylogeographic predictions using 494 fish from five lakes and 2 distinct ecotypes.

  6. Sequence Modeling for Analysing Student Interaction with Educational Systems

    DEFF Research Database (Denmark)

    Hansen, Christian; Hansen, Casper; Hjuler, Niklas Oskar Daniel

    2017-01-01

    The analysis of log data generated by online educational systems is an important task for improving the systems, and furthering our knowledge of how students learn. This paper uses previously unseen log data from Edulab, the largest provider of digital learning for mathematics in Denmark... as exhibiting unproductive student behaviour. Based on our results this student representation is promising, especially for educational systems offering many different learning usages, and offers an alternative to common approaches like modelling student behaviour as a single Markov chain, as is often done.

  7. Modeling compositional dynamics based on GC and purine contents of protein-coding sequences

    KAUST Repository

    Zhang, Zhang; Yu, Jun

    2010-01-01

    Background: Understanding the compositional dynamics of genomes and their coding sequences is of great significance in gaining clues into molecular evolution, and the large number of publicly available genome sequences has allowed us to quantitatively predict deviations of empirical data from their theoretical counterparts. However, the quantification of theoretical compositional variations for a wide diversity of genomes remains a major challenge. Results: To model the compositional dynamics of protein-coding sequences, we propose two simple models that take into account both mutation and selection effects, which act differently at the three codon positions, and use both GC and purine contents as compositional parameters. The two models concern the theoretical composition of nucleotides, codons, and amino acids, with no prerequisite of homologous sequences or their alignments. We evaluated the two models by quantifying theoretical compositions of a large collection of protein-coding sequences (including 46 of Archaea, 686 of Bacteria, and 826 of Eukarya), yielding consistent theoretical compositions across all the collected sequences. Conclusions: We show that the compositions of nucleotides, codons, and amino acids are largely determined by both GC and purine contents, and suggest that deviations of the observed from the expected compositions may reflect compositional signatures that arise from a complex interplay between mutation and selection via DNA replication and repair mechanisms. Reviewers: This article was reviewed by Zhaolei Zhang (nominated by Mark Gerstein), Guruprasad Ananda (nominated by Kateryna Makova), and Daniel Haft.
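
    As a toy version of the central idea (that GC content s and purine content r jointly pin down base composition), the sketch below assumes, purely for illustration, that "G/C vs A/T" and "purine vs pyrimidine" are independent at each codon position. The paper's actual models, which weigh mutation and selection differently across the three positions, are richer than this.

        def expected_bases(s, r):
            """s = GC content, r = purine (A+G) content; independence assumed."""
            return {"G": s * r, "A": (1 - s) * r,
                    "C": s * (1 - r), "T": (1 - s) * (1 - r)}

        def expected_codon(codon, params):
            # Expected codon frequency as a product over the three codon
            # positions, each with its own (s, r) pair, e.g. measured per
            # position from data.
            freq = 1.0
            for base, (s, r) in zip(codon, params):
                freq *= expected_bases(s, r)[base]
            return freq

        pos_params = [(0.55, 0.55), (0.45, 0.50), (0.60, 0.48)]  # illustrative values
        print(round(expected_codon("ATG", pos_params), 5))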

  9. Testing the importance of accurate meteorological input fields and parameterizations in atmospheric transport modelling using DREAM - Validation against ETEX-1

    DEFF Research Database (Denmark)

    Brandt, J.; Bastrup-Birk, A.; Christensen, J.H.

    1998-01-01

    A tracer model, the DREAM, which is based on a combination of a near-range Lagrangian model and a long-range Eulerian model, has been developed. The meteorological meso-scale model, MM5V1, is implemented as a meteorological driver for the tracer model. The model system is used for studying...

  10. A branch-heterogeneous model of protein evolution for efficient inference of ancestral sequences.

    Science.gov (United States)

    Groussin, M; Boussau, B; Gouy, M

    2013-07-01

    Most models of nucleotide or amino acid substitution used in phylogenetic studies assume that the evolutionary process has been homogeneous across lineages and that composition of nucleotides or amino acids has remained the same throughout the tree. These oversimplified assumptions are refuted by the observation that compositional variability characterizes extant biological sequences. Branch-heterogeneous models of protein evolution that account for compositional variability have been developed, but are not yet in common use because of the large number of parameters required, leading to high computational costs and potential overparameterization. Here, we present a new branch-nonhomogeneous and nonstationary model of protein evolution that captures more accurately the high complexity of sequence evolution. This model, henceforth called Correspondence and likelihood analysis (COaLA), makes use of a correspondence analysis to reduce the number of parameters to be optimized through maximum likelihood, focusing on most of the compositional variation observed in the data. The model was thoroughly tested on both simulated and biological data sets to show its high performance in terms of data fitting and CPU time. COaLA efficiently estimates ancestral amino acid frequencies and sequences, making it relevant for studies aiming at reconstructing and resurrecting ancestral amino acid sequences. Finally, we applied COaLA on a concatenate of universal amino acid sequences to confirm previous results obtained with a nonhomogeneous Bayesian model regarding the early pattern of adaptation to optimal growth temperature, supporting the mesophilic nature of the Last Universal Common Ancestor.

  11. Waste Isolation Pilot Plant environmental impact report: socioeconomic portion. An outline of the input-output model and the impact projections methodology

    International Nuclear Information System (INIS)

    1978-07-01

    A static model in the form of a regional input-output model was constructed for Eddy and Lea Counties, New Mexico. This modeling process has been used to assess the economic impacts of the following activities and for the following agencies: San Juan Generating Units Nos. 1, 3, and 4 for Public Service Company of New Mexico, and general economic impacts (an ongoing process) for the Bureau of Business and Economic Research, University of New Mexico. The regional modeling process adjusts a national model by means of location quotients and aggregating techniques. The national model, or base model, used in this process contains 407 economic categories or subsectors of the economy, 389 of which represent the private economy, and 18 of which represent activities mostly dealing with the public sector. The 389 identified sub-sectors were used in the modeling process; the government impact was computed after the private sector analysis

  12. Application of MELCOR Code to a French PWR 900 MWe Severe Accident Sequence and Evaluation of Models Performance Focusing on In-Vessel Thermal Hydraulic Results

    International Nuclear Information System (INIS)

    De Rosa, Felice

    2006-01-01

    In the ambit of the Severe Accident Network of Excellence Project (SARNET), funded by the European Union 6th FISA (Fission Safety) Programme, one of the main tasks is the development and validation of the European Accident Source Term Evaluation Code (ASTEC Code). One of the reference codes used to compare against ASTEC results from experimental and reactor plant applications is MELCOR. ENEA is a SARNET member and also an ASTEC and MELCOR user. During the first 18 months of this project, we performed a series of MELCOR and ASTEC calculations referring to a French PWR 900 MWe and to the accident sequence 'Loss of Steam Generator (SG) Feedwater' (known as the H2 sequence in the French classification). H2 is an accident sequence substantially equivalent to a station blackout scenario, like a TMLB accident, with the only difference that in the H2 sequence the scram is forced to occur with a delay of 28 seconds. The main events during the accident sequence are a loss of normal and auxiliary SG feedwater (0 s), followed by a scram when the water level in the SG is equal to or less than 0.7 m (after 28 seconds). There is also a main coolant pump trip when ΔTsat < 10 deg. C, a total opening of the three relief valves when Tric (maximum core outlet temperature) rises above 603 K (330 deg. C), and accumulator isolation when the primary pressure falls below 1.5 MPa (15 bar). Among many other points, it is worth noting that this was the first time that a MELCOR 1.8.5 input deck was available for a French PWR 900. The main ENEA effort in this period was devoted to preparing the MELCOR input deck using code version v.1.8.5 (build QZ Oct 2000 with the latest patch 185003 Oct 2001). The input deck, completely new, was prepared taking into account the same structure, data and conditions as those found in the ASTEC input decks. The main goal of the work presented in this paper is to highlight where and when MELCOR provides good enough results and why, in some cases mainly referring to its

  13. Better temperature predictions in geothermal modelling by improved quality of input parameters: a regional case study from the Danish-German border region

    Science.gov (United States)

    Fuchs, Sven; Bording, Thue S.; Balling, Niels

    2015-04-01

    Thermal modelling is used to examine the subsurface temperature field and geothermal conditions at various scales (e.g. sedimentary basins, deep crust) and in the framework of different problem settings (e.g. scientific or industrial use). In such models, knowledge of rock thermal properties is a prerequisite for the parameterisation of boundary conditions and layer properties. In contrast to hydrogeological groundwater models, where parameterization of the major rock property (i.e. hydraulic conductivity) is generally conducted considering lateral variations within geological layers, parameterization of thermal models (in particular regarding thermal conductivity, but also radiogenic heat production and specific heat capacity) is in most cases conducted using constant parameters for each modelled layer. Moreover, the initial values for such constant thermal parameters are normally obtained from rare core measurements and/or literature values, which raises questions about their representativeness. A few studies have considered lithological composition or well log information, but still kept the layer values constant. In the present thermal-modelling scenario analysis, we demonstrate how the use of different parameter input types (from literature, well logs and lithology) and parameter input styles (constant or laterally varying layer values) affects the temperature prediction in sedimentary basins. For this purpose, rock thermal properties are deduced from standard petrophysical well logs and lithological descriptions for several wells in a project area. Statistical values of thermal properties (mean, standard deviation, moments, etc.) are calculated at each borehole location for each geological formation and, moreover, for the entire dataset. Our case study is located in the Danish-German border region (model dimension: 135 x 115 km, depth: 20 km). Results clearly show that (i) the use of location-specific well-log derived rock thermal properties and (i
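
    The effect of the conductivity parameterization can be illustrated with a minimal steady-state sketch: the basal temperature of a layered column follows T = T0 + q * sum(dz_i / k_i) when radiogenic heat production is ignored. The Python example below compares constant versus hypothetical well-log-derived layer conductivities; all numbers are invented.

        # Minimal sketch (toy values): steady-state conductive temperature at
        # the base of a layered column, ignoring radiogenic heat production.
        T0 = 8.0                      # surface temperature, deg C
        q = 0.065                     # heat flow, W/m^2

        dz = [800.0, 1200.0, 1000.0]  # layer thicknesses, m
        k_const = [2.0, 2.0, 2.0]     # constant layer conductivity, W/(m K)
        k_logs = [1.6, 2.4, 2.1]      # well-log-derived conductivities, W/(m K)

        def basal_temperature(T0, q, dz, k):
            return T0 + q * sum(d / c for d, c in zip(dz, k))

        print(basal_temperature(T0, q, dz, k_const))  # ~105.5 deg C
        print(basal_temperature(T0, q, dz, k_logs))   # ~104.0 deg C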

  14. Incorporation of Damage and Failure into an Orthotropic Elasto-Plastic Three-Dimensional Model with Tabulated Input Suitable for Use in Composite Impact Problems

    Science.gov (United States)

    Goldberg, Robert K.; Carney, Kelly S.; Dubois, Paul; Hoffarth, Canio; Khaled, Bilal; Rajan, Subramaniam; Blankenhorn, Gunther

    2016-01-01

    A material model which incorporates several key capabilities which have been identified by the aerospace community as lacking in the composite impact models currently available in LS-DYNA(Registered Trademark) is under development. In particular, the material model, which is being implemented as MAT 213 into a tailored version of LS-DYNA being jointly developed by the FAA and NASA, incorporates both plasticity and damage within the material model, utilizes experimentally based tabulated input to define the evolution of plasticity and damage as opposed to specifying discrete input parameters (such as modulus and strength), and is able to analyze the response of composites with a variety of fiber architectures. The plasticity portion of the orthotropic, three-dimensional, macroscopic composite constitutive model is based on an extension of the Tsai-Wu composite failure model into a generalized yield function with a non-associative flow rule. The capability to account for the rate and temperature dependent deformation response of composites has also been incorporated into the material model. For the damage model, a strain equivalent formulation is utilized to allow for the uncoupling of the deformation and damage analyses. In the damage model, a diagonal damage tensor is defined to account for the directionally dependent variation of damage. However, in composites it has been found that loading in one direction can lead to damage in multiple coordinate directions. To account for this phenomenon, the terms in the damage matrix are semi-coupled such that the damage in a particular coordinate direction is a function of the stresses and plastic strains in all of the coordinate directions. The onset of material failure, and thus element deletion, is being developed to be a function of the stresses and plastic strains in the various coordinate directions. Systematic procedures are being developed to generate the required input parameters based on the results of

  15. Protein secondary structure prediction for a single-sequence using hidden semi-Markov models

    Directory of Open Access Journals (Sweden)

    Borodovsky Mark

    2006-03-01

    Background: The accuracy of protein secondary structure prediction has been improving steadily towards the 88% estimated theoretical limit. There are two types of prediction algorithms: single-sequence prediction algorithms assume that information about other (homologous) proteins is not available, while algorithms of the second type assume that information about homologous proteins is available and use it intensively. Single-sequence algorithms could make an important contribution to studies of proteins with no detected homologs; however, the accuracy of protein secondary structure prediction from a single sequence is not as high as when additional evolutionary information is present. Results: In this paper, we further refine and extend the hidden semi-Markov model (HSMM) initially considered in the BSPSS algorithm. We introduce an improved residue dependency model by considering the patterns of statistically significant amino acid correlation at structural segment borders. We also derive models that specialize on different sections of the dependency structure and incorporate them into the HSMM. In addition, we implement an iterative training method to refine estimates of HSMM parameters. The three-state-per-residue accuracy and other accuracy measures of the new method, IPSSP, are shown to be comparable to or better than those for BSPSS as well as for PSIPRED, tested under the single-sequence condition. Conclusions: We have shown that new dependency models and training methods bring further improvements to single-sequence protein secondary structure prediction. The results are obtained under cross-validation conditions using a dataset with no pair of sequences having significant sequence similarity. As new sequences are added to the database it is possible to augment the dependency structure and obtain even higher accuracy. Current and future advances should contribute to the improvement of function prediction for orphan proteins inscrutable
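
    To make the hidden semi-Markov idea concrete, here is a toy Python sketch of segment-based Viterbi decoding over three states (H/E/C) with explicit duration scores; the emission, transition and duration models are invented placeholders, not the IPSSP parameterization.

        # Toy hidden semi-Markov decoding: states emit whole segments with
        # explicit length scores. All probabilities are invented.
        import math

        STATES = "HEC"
        EMIT = {"H": {"h": math.log(0.7), "p": math.log(0.3)},
                "E": {"h": math.log(0.6), "p": math.log(0.4)},
                "C": {"h": math.log(0.3), "p": math.log(0.7)}}
        TRANS = math.log(0.5)            # uniform transition between states
        MAXLEN = 8

        def dur_logp(state, length):     # toy duration preference per state
            return -0.5 * abs(length - (5 if state in "HE" else 3))

        def viterbi_hsmm(seq):
            n = len(seq)
            best = [{s: -math.inf for s in STATES} for _ in range(n + 1)]
            back = [{} for _ in range(n + 1)]
            best[0] = {s: 0.0 for s in STATES}
            for t in range(1, n + 1):
                for s in STATES:
                    for d in range(1, min(MAXLEN, t) + 1):
                        emit = sum(EMIT[s][c] for c in seq[t - d:t])
                        for prev in STATES:
                            if prev == s and t - d > 0:
                                continue  # same-state segments are merged
                            score = best[t - d][prev] + TRANS + dur_logp(s, d) + emit
                            if score > best[t][s]:
                                best[t][s] = score
                                back[t][s] = (t - d, prev, d)
            s = max(STATES, key=lambda k: best[n][k])   # trace back
            t, labels = n, []
            while t > 0:
                t0, prev, d = back[t][s]
                labels = [s] * d + labels
                t, s = t0, prev
            return "".join(labels)

        print(viterbi_hsmm("hhhhpphhpp"))  # prints an H/E/C labeling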

  16. Establishing gene models from the Pinus pinaster genome using gene capture and BAC sequencing.

    Science.gov (United States)

    Seoane-Zonjic, Pedro; Cañas, Rafael A; Bautista, Rocío; Gómez-Maldonado, Josefa; Arrillaga, Isabel; Fernández-Pozo, Noé; Claros, M Gonzalo; Cánovas, Francisco M; Ávila, Concepción

    2016-02-27

    In the era of high-throughput DNA sequencing, assembling and understanding gymnosperm mega-genomes remains a challenge. Although drafts of three conifer genomes have recently been published, this number is too low to understand the full complexity of conifer genomes. Using techniques focused on specific genes, gene models can be established that can aid in the assembly of gene-rich regions, and this information can be used to compare genomes and understand functional evolution. In this study, gene capture technology combined with BAC isolation and sequencing was used as an experimental approach to establish de novo gene structures without a reference genome. Probes were designed for 866 maritime pine transcripts to sequence genes captured from genomic DNA. The gene models were constructed using GeneAssembler, a new bioinformatic pipeline, which reconstructed over 82% of the gene structures, and a high proportion (85%) of the captured gene models contained sequences from the promoter regulatory region. In a parallel experiment, the P. pinaster BAC library was screened to isolate clones containing genes whose cDNA sequences were already available. BAC clones containing the asparagine synthetase, sucrose synthase and xyloglucan endotransglycosylase gene sequences were isolated and used in this study. The gene models derived from the gene capture approach were compared with the genomic sequences derived from the BAC clones. This combined approach is a particularly efficient way to capture the genomic structures of gene families with a small number of members. The experimental approach used in this study is a valuable combined technique to study genomic gene structures in species for which a reference genome is unavailable. It can be used to establish exon/intron boundaries in unknown gene structures, to reconstruct incomplete genes and to obtain promoter sequences that can be used for transcriptional studies. A bioinformatics algorithm (GeneAssembler) is also provided as a

  17. A guidance on MELCOR input preparation : An input deck for Ul-Chin 3 and 4 Nuclear Power Plant

    Energy Technology Data Exchange (ETDEWEB)

    Cho, Song Won

    1997-02-01

    The objective of this study is to enhance the capability of assessing severe accident sequence analyses and containment behavior using the MELCOR computer code, and to provide a guideline for its efficient use. This report shows the method of input deck preparation as well as the assessment strategy for the MELCOR code. MELCOR is a fully integrated, engineering-level computer code that models the progression of severe accidents in light water reactor nuclear power plants. The code is being developed at Sandia National Laboratories for the U.S. NRC as a second-generation plant risk assessment tool and the successor to the Source Term Code Package. The accident sequence of the reference input deck prepared in this study for the Ulchin unit 3 and 4 nuclear power plants is the total loss of feedwater (TLOFW) without any success of safety systems, which is similar to a station blackout (TMLB). It is very useful to simulate a well-known sequence with a best-estimate code or experiment, because the results of the simulation before core melt can be compared with the FSAR, whereas no data are available after core melt. The precalculation for the TLOFW using the reference input deck was performed successfully, as expected. The other sequences will be carried out with minor changes to the reference input. This input deck will be improved continually by adding the safety systems not included so far, and also through sensitivity and uncertainty analyses. (author). 19 refs., 10 tabs., 55 figs.

  18. A guidance on MELCOR input preparation : An input deck for Ul-Chin 3 and 4 Nuclear Power Plant

    International Nuclear Information System (INIS)

    Cho, Song Won.

    1997-02-01

    The objective of this study is to enhance the capability of assessing severe accident sequence analyses and containment behavior using the MELCOR computer code, and to provide a guideline for its efficient use. This report shows the method of input deck preparation as well as the assessment strategy for the MELCOR code. MELCOR is a fully integrated, engineering-level computer code that models the progression of severe accidents in light water reactor nuclear power plants. The code is being developed at Sandia National Laboratories for the U.S. NRC as a second-generation plant risk assessment tool and the successor to the Source Term Code Package. The accident sequence of the reference input deck prepared in this study for the Ulchin unit 3 and 4 nuclear power plants is the total loss of feedwater (TLOFW) without any success of safety systems, which is similar to a station blackout (TMLB). It is very useful to simulate a well-known sequence with a best-estimate code or experiment, because the results of the simulation before core melt can be compared with the FSAR, whereas no data are available after core melt. The precalculation for the TLOFW using the reference input deck was performed successfully, as expected. The other sequences will be carried out with minor changes to the reference input. This input deck will be improved continually by adding the safety systems not included so far, and also through sensitivity and uncertainty analyses. (author). 19 refs., 10 tabs., 55 figs.

  19. Comments on Frequency Swept Rotating Input Perturbation Techniques and Identification of the Fluid Force Models in Rotor/bearing/seal Systems and Fluid Handling Machines

    Science.gov (United States)

    Muszynska, Agnes; Bently, Donald E.

    1991-01-01

    Perturbation techniques used for the identification of rotating system dynamic characteristics are described. A comparison between two periodic frequency-swept perturbation methods applied to the identification of fluid forces in rotating machines is presented. A description is given of the fluid force model identified by applying a circular, periodic frequency-swept input force. This model is based on the existence and strength of the circumferential flow, most often generated by the shaft rotation. The application of the fluid force model in rotor dynamic analysis is presented. It is shown that rotor stability is a property of the entire rotating system. Some areas for further research are discussed.

  20. Summary report of the 3rd research co-ordination meeting on development of reference input parameter library for nuclear model calculations of nuclear data (Phase 1: Starter File)

    International Nuclear Information System (INIS)

    Oblozinsky, P.

    1997-09-01

    The report contains the summary of the third and final Research Co-ordination Meeting on ''Development of Reference Input Parameter Library for Nuclear Model Calculations of Nuclear Data (Phase I: Starter File)'', held at the ICTP, Trieste, Italy, from 26 to 29 May 1997. Details are given on the status of the Handbook and the Starter File, the two major results of the project. (author)

  1. Model-free aftershock forecasts constructed from similar sequences in the past

    Science.gov (United States)

    van der Elst, N.; Page, M. T.

    2017-12-01

    The basic premise behind aftershock forecasting is that sequences in the future will be similar to those in the past. Forecast models typically use empirically tuned parametric distributions to approximate past sequences, and project those distributions into the future to make a forecast. While parametric models do a good job of describing average outcomes, they are not explicitly designed to capture the full range of variability between sequences, and can suffer from over-tuning of the parameters. In particular, parametric forecasts may produce a high rate of "surprises": sequences that land outside the forecast range. Here we present a non-parametric forecast method that cuts out the parametric "middleman" between training data and forecast. The method is based on finding past sequences that are similar to the target sequence and evaluating their outcomes. We quantify similarity as the Poisson probability that the observed event count in a past sequence reflects the same underlying intensity as the observed event count in the target sequence. Event counts are defined in terms of differential magnitude relative to the mainshock. The forecast is then constructed from the distribution of past sequence outcomes, weighted by their similarity. We compare the similarity forecast with the Reasenberg and Jones (RJ95) method for a set of 2807 global aftershock sequences of M≥6 mainshocks. We implement a sequence-specific RJ95 forecast using a global average prior and Bayesian updating, but do not propagate epistemic uncertainty. The RJ95 forecast is somewhat more precise than the similarity forecast: 90% of observed sequences fall within a factor of two of the median RJ95 forecast value, whereas the fraction is 85% for the similarity forecast. However, the surprise rate is much higher for the RJ95 forecast; 10% of observed sequences fall in the upper 2.5% of the (Poissonian) forecast range. The surprise rate is less than 3% for the similarity forecast. The similarity
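
    A minimal Python sketch of the similarity-weighting idea follows; the weight used here (the Poisson probability of a past sequence's early count given the target's count as intensity) is one simple reading of the description above, and all counts are toy data.

        # Hedged sketch of a similarity-weighted, non-parametric forecast.
        import numpy as np
        from scipy.stats import poisson

        # early event counts and later outcomes for past sequences (toy data)
        past_early = np.array([3, 5, 2, 8, 4])      # events in first day
        past_late = np.array([10, 25, 4, 60, 18])   # events in following week

        def similarity_forecast(target_early, quantiles=(0.05, 0.5, 0.95)):
            # weight each past sequence by the Poisson probability of seeing
            # its early count if the target's count were the true intensity
            w = poisson.pmf(past_early, mu=max(target_early, 0.5))
            w = w / w.sum()
            order = np.argsort(past_late)
            cdf = np.cumsum(w[order])
            return [past_late[order][np.searchsorted(cdf, q)] for q in quantiles]

        print(similarity_forecast(target_early=4))  # weighted outcome quantiles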

  2. Precision Measurements of the Cluster Red Sequence using an Error Corrected Gaussian Mixture Model

    Energy Technology Data Exchange (ETDEWEB)

    Hao, Jiangang; /Fermilab /Michigan U.; Koester, Benjamin P.; /Chicago U.; Mckay, Timothy A.; /Michigan U.; Rykoff, Eli S.; /UC, Santa Barbara; Rozo, Eduardo; /Ohio State U.; Evrard, August; /Michigan U.; Annis, James; /Fermilab; Becker, Matthew; /Chicago U.; Busha, Michael; /KIPAC, Menlo Park /SLAC; Gerdes, David; /Michigan U.; Johnston, David E.; /Northwestern U. /Brookhaven

    2009-07-01

    The red sequence is an important feature of galaxy clusters and plays a crucial role in optical cluster detection. Measurements of the slope and scatter of the red sequence are affected both by the selection of red sequence galaxies and by measurement errors. In this paper, we describe a new error corrected Gaussian Mixture Model for red sequence galaxy identification. Using this technique, we can remove the effects of measurement error and extract unbiased information about the intrinsic properties of the red sequence. We use this method to select red sequence galaxies in each of the 13,823 clusters in the maxBCG catalog, and measure the red sequence ridgeline location and scatter of each. These measurements provide precise constraints on the variation of the average red galaxy populations in the observed frame with redshift. We find that the scatter of the red sequence ridgeline increases mildly with redshift, and that the slope decreases with redshift. We also observe that the slope does not strongly depend on cluster richness. Using similar methods, we show that this behavior is mirrored in a spectroscopic sample of field galaxies, further emphasizing that ridgeline properties are independent of environment. These precise measurements serve as an important observational check on simulations and mock galaxy catalogs. The observed trends in the slope and scatter of the red sequence ridgeline with redshift are clues to possible intrinsic evolution of the cluster red-sequence itself. Most importantly, the methods presented in this work lay the groundwork for further improvements in optically-based cluster cosmology.

  3. PRECISION MEASUREMENTS OF THE CLUSTER RED SEQUENCE USING AN ERROR-CORRECTED GAUSSIAN MIXTURE MODEL

    International Nuclear Information System (INIS)

    Hao Jiangang; Annis, James; Koester, Benjamin P.; Mckay, Timothy A.; Evrard, August; Gerdes, David; Rykoff, Eli S.; Rozo, Eduardo; Becker, Matthew; Busha, Michael; Wechsler, Risa H.; Johnston, David E.; Sheldon, Erin

    2009-01-01

    The red sequence is an important feature of galaxy clusters and plays a crucial role in optical cluster detection. Measurements of the slope and scatter of the red sequence are affected both by the selection of red sequence galaxies and by measurement errors. In this paper, we describe a new error-corrected Gaussian Mixture Model for red sequence galaxy identification. Using this technique, we can remove the effects of measurement error and extract unbiased information about the intrinsic properties of the red sequence. We use this method to select red sequence galaxies in each of the 13,823 clusters in the maxBCG catalog, and measure the red sequence ridgeline location and scatter of each. These measurements provide precise constraints on the variation of the average red galaxy populations in the observed frame with redshift. We find that the scatter of the red sequence ridgeline increases mildly with redshift, and that the slope decreases with redshift. We also observe that the slope does not strongly depend on cluster richness. Using similar methods, we show that this behavior is mirrored in a spectroscopic sample of field galaxies, further emphasizing that ridgeline properties are independent of environment. These precise measurements serve as an important observational check on simulations and mock galaxy catalogs. The observed trends in the slope and scatter of the red sequence ridgeline with redshift are clues to possible intrinsic evolution of the cluster red sequence itself. Most importantly, the methods presented in this work lay the groundwork for further improvements in optically based cluster cosmology.
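
    The following one-dimensional Python sketch illustrates the error-correction idea: each point has a known measurement variance, and EM recovers the intrinsic component scatter by modelling the total variance as intrinsic plus measurement. It is a minimal toy version, not the published ECGMM code.

        # Minimal 1-D sketch of an error-corrected Gaussian mixture: point i
        # has known measurement variance s2[i]; component width is modeled
        # as intrinsic scatter plus measurement error.
        import numpy as np

        def ecgmm(x, s2, n_iter=200):
            mu = np.array([x.min(), x.max()])        # 2 components, crude init
            var = np.array([np.var(x), np.var(x)])   # intrinsic variances
            pi = np.array([0.5, 0.5])
            for _ in range(n_iter):
                tot = var[:, None] + s2[None, :]      # (2, n) total variance
                logp = (np.log(pi)[:, None] - 0.5 * np.log(2 * np.pi * tot)
                        - 0.5 * (x[None, :] - mu[:, None]) ** 2 / tot)
                r = np.exp(logp - logp.max(axis=0))
                r /= r.sum(axis=0)                    # responsibilities
                nk = r.sum(axis=1)
                mu = (r * x).sum(axis=1) / nk
                # subtract measurement variance to recover intrinsic scatter
                var = np.maximum(((r * ((x - mu[:, None]) ** 2 - s2)).sum(axis=1)
                                  / nk), 1e-6)
                pi = nk / nk.sum()
            return pi, mu, np.sqrt(var)

        rng = np.random.default_rng(1)
        s2 = np.full(500, 0.04)                       # known measurement variance
        x = np.concatenate([rng.normal(0.0, 0.3, 250),
                            rng.normal(1.5, 0.1, 250)]) + rng.normal(0, 0.2, 500)
        print(ecgmm(x, s2))                           # intrinsic widths ~0.3, ~0.1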

  4. A generative Bezier curve model for surf-zone tracking in coastal image sequences

    CSIR Research Space (South Africa)

    Burke, Michael G

    2017-09-01

    This work introduces a generative Bezier curve model suitable for surf-zone curve tracking in coastal image sequences. The model combines an adaptive curve parametrised by control points governed by local random walks with a global sinusoidal motion...

  5. Model-based quality assessment and base-calling for second-generation sequencing data.

    Science.gov (United States)

    Bravo, Héctor Corrada; Irizarry, Rafael A

    2010-09-01

    Second-generation sequencing (sec-gen) technology can sequence millions of short fragments of DNA in parallel, making it capable of assembling complex genomes for a small fraction of the price and time of previous technologies. In fact, a recently formed international consortium, the 1000 Genomes Project, plans to fully sequence the genomes of approximately 1200 people. The prospect of comparative analysis at the sequence level of a large number of samples across multiple populations may be achieved within the next five years. These data present unprecedented challenges in statistical analysis. For instance, analysis operates on millions of short nucleotide sequences, or reads (strings of A, C, G, or T, between 30 and 100 characters long), which are the result of complex processing of noisy continuous fluorescence intensity measurements known as base-calling. The complexity of the base-calling discretization process results in reads of widely varying quality within and across sequence samples. This variation in processing quality results in infrequent but systematic errors that we have found to mislead downstream analysis of the discretized sequence read data. For instance, a central goal of the 1000 Genomes Project is to quantify across-sample variation at the single nucleotide level. At this resolution, small error rates in sequencing prove significant, especially for rare variants. Sec-gen sequencing is a relatively new technology for which potential biases and sources of obscuring variation are not yet fully understood. Therefore, modeling and quantifying the uncertainty inherent in the generation of sequence reads is of utmost importance. In this article, we present a simple model to capture uncertainty arising in the base-calling procedure of the Illumina/Solexa GA platform. Model parameters have a straightforward interpretation in terms of the chemistry of base-calling, allowing for informative and easily interpretable metrics that capture the variability in

  6. Construction sequence scale model: an aid to productivity and quality assurance

    International Nuclear Information System (INIS)

    Clothier, W.A. Sr.

    1978-01-01

    The natural tendencies of an engineering scale model to promote a high level of quality by error prevention during the design and construction stages of a project are studied. A brief section on the basic history of engineering modeling is used to describe TVA's usage of the model. The basic design model is explored in an overview touching the highlights of that form of modeling. A detailed look at the construction sequence model, a relatively new form of model, is presented to demonstrate quality and productivity awareness.

  7. Sequence2Vec: A novel embedding approach for modeling transcription factor binding affinity landscape

    KAUST Repository

    Dai, Hanjun

    2017-07-26

    Motivation: An accurate characterization of the transcription factor (TF)-DNA affinity landscape is crucial to a quantitative understanding of the molecular mechanisms underpinning endogenous gene regulation. While recent advances in biotechnology have brought the opportunity to build binding affinity prediction methods, accurately characterizing the TF-DNA binding affinity landscape still remains a challenging problem. Results: Here we propose a novel sequence embedding approach for modeling the transcription factor binding affinity landscape. Our method represents DNA binding sequences as a hidden Markov model (HMM), which captures both position-specific information and long-range dependency in the sequence. A cornerstone of our method is a novel message-passing-like embedding algorithm, called Sequence2Vec, which maps these HMMs into a common nonlinear feature space and uses the embedded features to build a predictive model. Our method is a novel combination of the strengths of probabilistic graphical models, feature space embedding and deep learning. We conducted comprehensive experiments on over 90 large-scale TF-DNA datasets measured by different high-throughput experimental technologies. Sequence2Vec outperforms alternative machine learning methods as well as the state-of-the-art binding affinity prediction methods.
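
    The actual Sequence2Vec embedding is a learned, HMM-based message-passing construction that is not reproduced here; the following deliberately simplified Python stand-in (k-mer count features plus ridge regression) only makes the underlying prediction task, sequences in and affinities out, concrete. Sequences and affinities are toy values.

        # Simplified stand-in for the affinity prediction task: k-mer counts
        # plus ridge regression, not the HMM embedding itself.
        import itertools
        import numpy as np
        from sklearn.linear_model import Ridge

        K = 3
        KMERS = {"".join(p): i for i, p in
                 enumerate(itertools.product("ACGT", repeat=K))}

        def featurize(seq):
            v = np.zeros(len(KMERS))
            for i in range(len(seq) - K + 1):
                v[KMERS[seq[i:i + K]]] += 1.0
            return v

        # toy training data: sequences with measured binding affinities
        seqs = ["ACGTACGTAC", "TTTTACGTTT", "GGGGGGGGGG", "ACGTTTTTAC"]
        affinity = [0.9, 0.6, 0.1, 0.5]

        X = np.array([featurize(s) for s in seqs])
        model = Ridge(alpha=1.0).fit(X, affinity)
        print(model.predict([featurize("ACGTACGTTT")]))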

  8. Structured prediction models for RNN based sequence labeling in clinical text.

    Science.gov (United States)

    Jagannatha, Abhyuday N; Yu, Hong

    2016-11-01

    Sequence labeling is a widely used method for named entity recognition and information extraction from unstructured natural language data. In the clinical domain, one major application of sequence labeling involves the extraction of medical entities such as medications, indications, and side-effects from Electronic Health Record narratives. Sequence labeling in this domain presents its own set of challenges and objectives. In this work we experimented with various CRF-based structured learning models combined with recurrent neural networks. We extend the previously studied LSTM-CRF models with explicit modeling of pairwise potentials. We also propose an approximate version of skip-chain CRF inference with RNN potentials. We use these methodologies for structured prediction in order to improve the exact phrase detection of various medical entities.
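
    A minimal numpy sketch of the pairwise-potential idea follows: per-token label scores (random stand-ins for RNN outputs) are combined with a transition matrix, and Viterbi decoding picks the best tag sequence. The tag set and potentials are invented for illustration.

        # Token scores plus pairwise transition potentials, decoded by Viterbi.
        import numpy as np

        labels = ["O", "B-MED", "I-MED"]            # toy tag set
        rng = np.random.default_rng(0)
        emissions = rng.normal(size=(6, 3))         # (n_tokens, n_labels)
        transitions = np.array([[0.0, 0.0, -9.0],   # e.g. forbid O -> I-MED
                                [0.0, -1.0, 1.0],
                                [0.0, -1.0, 1.0]])

        def viterbi(emissions, transitions):
            n, k = emissions.shape
            score = emissions[0].copy()
            back = np.zeros((n, k), dtype=int)
            for t in range(1, n):
                total = score[:, None] + transitions + emissions[t][None, :]
                back[t] = total.argmax(axis=0)      # best previous label
                score = total.max(axis=0)
            path = [int(score.argmax())]
            for t in range(n - 1, 0, -1):
                path.append(int(back[t][path[-1]]))
            return [labels[i] for i in reversed(path)]

        print(viterbi(emissions, transitions))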

  9. Genome sequence analysis of the model grass Brachypodium distachyon: insights into grass genome evolution

    Energy Technology Data Exchange (ETDEWEB)

    Schulman, Al

    2009-08-09

    Three subfamilies of grasses, the Ehrhartoideae (rice), the Panicoideae (maize, sorghum, sugar cane and millet), and the Pooideae (wheat, barley and cool season forage grasses), provide the basis of human nutrition and are poised to become major sources of renewable energy. Here we describe the complete genome sequence of the wild grass Brachypodium distachyon (Brachypodium), the first member of the Pooideae subfamily to be completely sequenced. Comparison of the Brachypodium, rice and sorghum genomes reveals a precise sequence-based history of genome evolution across a broad diversity of the grass family and identifies nested insertions of whole chromosomes into centromeric regions as a predominant mechanism driving chromosome evolution in the grasses. The relatively compact genome of Brachypodium is maintained by a balance of retroelement replication and loss. The complete genome sequence of Brachypodium, coupled to its exceptional promise as a model system for grass research, will support the development of new energy and food crops.

  10. Technical Report on Modeling for Quasispecies Abundance Inference with Confidence Intervals from Metagenomic Sequence Data

    Energy Technology Data Exchange (ETDEWEB)

    McLoughlin, K. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States)

    2016-01-11

    The overall aim of this project is to develop a software package, called MetaQuant, that can determine the constituents of a complex microbial sample and estimate their relative abundances by analysis of metagenomic sequencing data. The goal for Task 1 is to create a generative model describing the stochastic process underlying the creation of sequence read pairs in the data set. The stages in this generative process include the selection of a source genome sequence for each read pair, with probability dependent on its abundance in the sample. The other stages describe the evolution of the source genome from its nearest common ancestor with a reference genome, breakage of the source DNA into short fragments, and the errors in sequencing the ends of the fragments to produce read pairs.
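
    A toy Python sketch of the generative stages described above, selecting a source genome by abundance, drawing a fragment, and reading both ends with a per-base error rate, is shown below; genomes, abundances and rates are invented, and the genome-evolution stage is omitted for brevity.

        # Toy generative read-pair simulator matching the stages above.
        import random

        random.seed(0)
        genomes = {"orgA": "ACGT" * 250, "orgB": "GGCA" * 250}
        abundance = {"orgA": 0.8, "orgB": 0.2}
        READ, FRAG, ERR = 50, 200, 0.01

        def mutate(base):
            # per-base sequencing error: substitute with another nucleotide
            if random.random() < ERR:
                return random.choice("ACGT".replace(base, ""))
            return base

        def draw_read_pair():
            src = random.choices(list(abundance), weights=abundance.values())[0]
            g = genomes[src]
            start = random.randrange(len(g) - FRAG)
            frag = g[start:start + FRAG]           # random fragment of source
            r1 = "".join(mutate(b) for b in frag[:READ])
            r2 = "".join(mutate(b) for b in frag[-READ:])
            return src, r1, r2

        print(draw_read_pair())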

  11. Phasing Out a Polluting Input

    OpenAIRE

    Eriksson, Clas

    2015-01-01

    This paper explores economic policies related to the potential conflict between economic growth and the environment. It applies a model with directed technological change and focuses on the case with low elasticity of substitution between clean and dirty inputs in production. New technology is substituted for the polluting input, which results in a gradual decline in pollution along the optimal long-run growth path. In contrast to some recent work, the era of pollution and environmental polic...

  12. High organic inputs explain shallow and deep SOC storage in a long-term agroforestry system – combining experimental and modeling approaches

    Directory of Open Access Journals (Sweden)

    R. Cardinael

    2018-01-01

    Agroforestry is an increasingly popular farming system enabling agricultural diversification and providing several ecosystem services. In agroforestry systems, soil organic carbon (SOC) stocks are generally increased, but it is difficult to disentangle the different factors responsible for this storage. Organic carbon (OC) inputs to the soil may be larger, but SOC decomposition rates may be modified owing to microclimate, physical protection, or priming effect from roots, especially at depth. We used an 18-year-old silvoarable system associating hybrid walnut trees (Juglans regia × nigra) and durum wheat (Triticum turgidum L. subsp. durum) and an adjacent agricultural control plot to quantify all OC inputs to the soil – leaf litter, tree fine root senescence, crop residues, and tree row herbaceous vegetation – and measured SOC stocks down to 2 m of depth at varying distances from the trees. We then proposed a model that simulates SOC dynamics in agroforestry accounting for both the whole soil profile and the lateral spatial heterogeneity. The model was calibrated to the control plot only. Measured OC inputs to soil were increased by about 40 % (+1.11 t C ha−1 yr−1) down to 2 m of depth in the agroforestry plot compared to the control, resulting in an additional SOC stock of 6.3 t C ha−1 down to 1 m of depth. However, most of the SOC storage occurred in the first 30 cm of soil and in the tree rows. The model was strongly validated, properly describing the measured SOC stocks and distribution with depth in agroforestry tree rows and alleys. It showed that the increased inputs of fresh biomass to soil explained the observed additional SOC storage in the agroforestry plot. Moreover, only a priming effect variant of the model was able to capture the depth distribution of SOC stocks, suggesting the priming effect as a possible mechanism driving deep SOC dynamics. This result questions the potential of soils to

  13. High organic inputs explain shallow and deep SOC storage in a long-term agroforestry system - combining experimental and modeling approaches

    Science.gov (United States)

    Cardinael, Rémi; Guenet, Bertrand; Chevallier, Tiphaine; Dupraz, Christian; Cozzi, Thomas; Chenu, Claire

    2018-01-01

    Agroforestry is an increasingly popular farming system enabling agricultural diversification and providing several ecosystem services. In agroforestry systems, soil organic carbon (SOC) stocks are generally increased, but it is difficult to disentangle the different factors responsible for this storage. Organic carbon (OC) inputs to the soil may be larger, but SOC decomposition rates may be modified owing to microclimate, physical protection, or priming effect from roots, especially at depth. We used an 18-year-old silvoarable system associating hybrid walnut trees (Juglans regia × nigra) and durum wheat (Triticum turgidum L. subsp. durum) and an adjacent agricultural control plot to quantify all OC inputs to the soil - leaf litter, tree fine root senescence, crop residues, and tree row herbaceous vegetation - and measured SOC stocks down to 2 m of depth at varying distances from the trees. We then proposed a model that simulates SOC dynamics in agroforestry accounting for both the whole soil profile and the lateral spatial heterogeneity. The model was calibrated to the control plot only. Measured OC inputs to soil were increased by about 40 % (+ 1.11 t C ha-1 yr-1) down to 2 m of depth in the agroforestry plot compared to the control, resulting in an additional SOC stock of 6.3 t C ha-1 down to 1 m of depth. However, most of the SOC storage occurred in the first 30 cm of soil and in the tree rows. The model was strongly validated, properly describing the measured SOC stocks and distribution with depth in agroforestry tree rows and alleys. It showed that the increased inputs of fresh biomass to soil explained the observed additional SOC storage in the agroforestry plot. Moreover, only a priming effect variant of the model was able to capture the depth distribution of SOC stocks, suggesting the priming effect as a possible mechanism driving deep SOC dynamics. This result questions the potential of soils to store large amounts of carbon, especially at depth. Deep
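
    A toy two-pool Python sketch of the priming idea follows: SOC mineralization is accelerated in the presence of fresh organic matter (FOM). The rate constants and the saturating priming term are illustrative placeholders, not the calibrated model from these studies.

        # Two-pool toy model with a priming term on SOC mineralization.
        import math

        dt, years = 0.1, 50
        k_fom, k_soc, c, humif = 2.0, 0.02, 0.5, 0.3   # toy rate constants
        fom, soc = 0.0, 60.0                           # stocks, t C ha^-1
        input_fom = 2.5                                # input, t C ha^-1 yr^-1

        for _ in range(int(years / dt)):
            dec_fom = k_fom * fom
            dec_soc = k_soc * soc * (1 - math.exp(-c * fom))  # priming term
            fom += (input_fom - dec_fom) * dt
            soc += (humif * dec_fom - dec_soc) * dt

        print(round(fom, 2), round(soc, 2))   # steady FOM, drifting SOC stock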

  14. A 2nd generation static model for predicting greenhouse energy inputs, as an aid for production planning

    CERN Document Server

    Jolliet, O; Munday, G L

    1985-01-01

    A model which allows accurate prediction of the energy consumption of a greenhouse is a useful tool for production planning and for the optimisation of greenhouse components. To date, two types of model have been developed: very simple models of low precision, and precise dynamic models that are unsuitable for use over long periods and too complex for practical use. A theoretical study and measurements at the CERN trial greenhouse have allowed the development of a new static model named "HORTICERN", easy to use and as precise as more complex dynamic models. This paper demonstrates the potential of this model for long-term production planning. The model gives precise predictions of energy consumption when given the greenhouse conditions of use (inside temperatures, dehumidification by ventilation, …) and takes into account local climatic conditions (wind, radiative losses to the sky and solar gains) and the type of greenhouse (cladding, thermal screen …). The HORTICERN method has been developed for PC use and requires less...
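
    In the spirit of such a static model, the following Python sketch sums hourly heating demand as cladding losses minus usable solar gains, clipped at zero; the U-value, areas and climate records are invented, and the real HORTICERN model accounts for considerably more (ventilation, thermal screens, sky radiation).

        # Minimal static energy-balance sketch with toy parameters.
        U = 6.0          # cladding heat-loss coefficient, W/(m^2 K)
        A_clad = 1400.0  # cladding area, m^2
        A_ground = 1000.0
        tau = 0.7        # fraction of solar radiation usable inside
        T_in = 18.0      # greenhouse set-point, deg C

        # toy hourly climate records: (outside temperature degC, solar W/m^2)
        climate = [(5.0, 0.0), (4.0, 0.0), (8.0, 150.0), (12.0, 400.0)]

        demand_Wh = 0.0
        for T_out, sun in climate:
            loss = U * A_clad * (T_in - T_out)
            gain = tau * sun * A_ground
            demand_Wh += max(loss - gain, 0.0)   # one record = one hour

        print(demand_Wh / 1000.0, "kWh")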

  15. QNB: differential RNA methylation analysis for count-based small-sample sequencing data with a quad-negative binomial model.

    Science.gov (United States)

    Liu, Lian; Zhang, Shao-Wu; Huang, Yufei; Meng, Jia

    2017-08-31

    As a newly emerged research area, RNA epigenetics has drawn increasing attention recently for the participation of RNA methylation and other modifications in a number of crucial biological processes. Thanks to high-throughput sequencing techniques such as MeRIP-Seq, transcriptome-wide RNA methylation profiles are now available in the form of count-based data, with which it is often of interest to study the dynamics at the epitranscriptomic layer. However, the sample size of an RNA methylation experiment is usually very small due to its cost; additionally, there usually exist a large number of genes whose methylation level cannot be accurately estimated due to their low expression level, making differential RNA methylation analysis a difficult task. We present QNB, a statistical approach for differential RNA methylation analysis with count-based small-sample sequencing data. Compared with previous approaches such as the DRME model, which is based on a statistical test covering the IP samples only with two negative binomial distributions, QNB is based on four independent negative binomial distributions with their variances and means linked by local regressions, and in this way the input control samples are also properly taken care of. In addition, unlike the DRME approach, which relies on the input control samples alone for estimating the background, QNB uses a more robust estimator for gene expression by combining information from both input and IP samples, which could largely improve the testing performance for very lowly expressed genes. QNB showed improved performance on both simulated and real MeRIP-Seq datasets when compared with competing algorithms. The QNB model is also applicable to other datasets related to RNA modifications, including but not limited to RNA bisulfite sequencing, m1A-Seq, Par-CLIP, RIP-Seq, etc.
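
    The four-group layout (IP and input samples in two conditions) can be shown with a deliberately crude stand-in: a two-proportion z-test on pooled counts. This ignores the overdispersion and variance-mean regressions that QNB models with its four linked negative binomial distributions; the counts below are toy values.

        # Crude stand-in showing the four-group layout only; QNB replaces
        # this with four linked negative binomial distributions.
        import math

        ip_ctrl = [120, 135, 110]     # IP replicates, condition 1
        in_ctrl = [300, 310, 290]     # input replicates, condition 1
        ip_trt = [220, 240, 210]      # IP replicates, condition 2
        in_trt = [310, 295, 305]      # input replicates, condition 2

        def z_two_proportions(ip1, in1, ip2, in2):
            n1, n2 = sum(ip1) + sum(in1), sum(ip2) + sum(in2)
            p1, p2 = sum(ip1) / n1, sum(ip2) / n2
            p = (sum(ip1) + sum(ip2)) / (n1 + n2)      # pooled proportion
            se = math.sqrt(p * (1 - p) * (1 / n1 + 1 / n2))
            return (p2 - p1) / se

        print(z_two_proportions(ip_ctrl, in_ctrl, ip_trt, in_trt))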

  16. TART input manual

    International Nuclear Information System (INIS)

    Kimlinger, J.R.; Plechaty, E.F.

    1982-01-01

    The TART code is a Monte Carlo neutron/photon transport code that runs only on the CRAY computer. All the input cards for the TART code are listed, and definitions for all input parameters are given. The execution and limitations of the code are described, and input for two sample problems is given.

  17. FLUTAN input specifications

    International Nuclear Information System (INIS)

    Borgwaldt, H.; Baumann, W.; Willerding, G.

    1991-05-01

    FLUTAN is a highly vectorized computer code for 3-D fluid-dynamic and thermal-hydraulic analyses in Cartesian and cylinder coordinates. It is related to the family of COMMIX codes originally developed at Argonne National Laboratory, USA. To a large extent, FLUTAN relies on basic concepts and structures imported from COMMIX-1B and COMMIX-2, which were made available to KfK in the frame of cooperation contracts in the fast reactor safety field. While on the one hand not all features of the original COMMIX versions have been implemented in FLUTAN, the code on the other hand includes some essential innovative options, such as the CRESOR solution algorithm, a general 3-dimensional rebalancing scheme for solving the pressure equation, and LECUSSO-QUICK-FRAM techniques suitable for reducing 'numerical diffusion' in both the enthalpy and momentum equations. This report provides users with detailed input instructions, presents formulations of the various model options, and explains, by means of comprehensive sample input, how to use the code. (orig.)

  18. GARFEM input deck description

    Energy Technology Data Exchange (ETDEWEB)

    Zdunek, A.; Soederberg, M. (Aeronautical Research Inst. of Sweden, Bromma (Sweden))

    1989-01-01

    The input card deck for the finite element program GARFEM version 3.2 is described in this manual. The program includes, but is not limited to, capabilities to handle the following problems:
    * Linear bar and beam element structures,
    * Geometrically non-linear problems (bar and beam), both static and transient dynamic analysis,
    * Transient response dynamics from a catalog of time-varying external forcing function types or input function tables,
    * Eigenvalue solution (modes and frequencies),
    * Multi-point constraints (MPC) for the modelling of mechanisms and e.g. rigid links; the MPC definition is used only in the geometrically linearized sense,
    * Beams with disjunct shear axis and neutral axis,
    * Beams with rigid offset.
    An interface exists that connects GARFEM with the program GAROS. GAROS is a program for aeroelastic analysis of rotating structures. Since this interface was developed, GARFEM now serves as a preprocessor program in place of NASTRAN, which was formerly used. Documentation of the methods applied in GARFEM exists but is so far limited to the capabilities in existence before the GAROS interface was developed.

  19. ToPS: a framework to manipulate probabilistic models of sequence data.

    Directory of Open Access Journals (Sweden)

    André Yoshiaki Kashiwabara

    Discrete Markovian models can be used to characterize patterns in sequences of values and have many applications in biological sequence analysis, including gene prediction, CpG island detection, alignment, and protein profiling. We present ToPS, a computational framework that can be used to implement different applications in bioinformatics analysis by combining eight kinds of models: (i) independent and identically distributed process; (ii) variable-length Markov chain; (iii) inhomogeneous Markov chain; (iv) hidden Markov model; (v) profile hidden Markov model; (vi) pair hidden Markov model; (vii) generalized hidden Markov model; and (viii) similarity-based sequence weighting. The framework includes functionality for training, simulation and decoding of the models. Additionally, it provides two methods to help parameter setting: the Akaike and Bayesian information criteria (AIC and BIC). The models can be used stand-alone, combined in Bayesian classifiers, or included in more complex, multi-model, probabilistic architectures using GHMMs. In particular the framework provides a novel, flexible implementation of decoding in GHMMs that detects when the architecture can be traversed efficiently.

  20. Fouling resistance prediction using artificial neural network nonlinear auto-regressive with exogenous input model based on operating conditions and fluid properties correlations

    Energy Technology Data Exchange (ETDEWEB)

    Biyanto, Totok R. [Department of Engineering Physics, Institute Technology of Sepuluh Nopember Surabaya, Surabaya, Indonesia 60111 (Indonesia)

    2016-06-03

    Fouling in the heat exchangers of a refinery Crude Preheat Train (CPT) is an unsolved problem that reduces plant efficiency and increases fuel consumption and CO2 emissions. The fouling resistance behavior is very complex. It is difficult to develop a model using first-principles equations to predict the fouling resistance due to the different operating conditions
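
    A hedged Python sketch of a NARX-style predictor is given below: a small feed-forward network maps lagged fouling-resistance values and lagged operating conditions to the next fouling value. The synthetic data, lag order and architecture are illustrative assumptions, not the model from this work.

        # NARX-style sketch: lagged outputs and exogenous inputs as features.
        import numpy as np
        from sklearn.neural_network import MLPRegressor

        rng = np.random.default_rng(0)
        n, lag = 400, 3
        u = rng.normal(size=(n, 2))                   # e.g. flow rate, inlet temp
        y = np.zeros(n)                               # fouling resistance
        for t in range(1, n):                         # toy slow fouling dynamic
            y[t] = 0.98 * y[t - 1] + 0.01 + 0.005 * u[t, 0] - 0.003 * u[t, 1]

        X, target = [], []
        for t in range(lag, n):
            X.append(np.concatenate([y[t - lag:t], u[t - lag:t].ravel()]))
            target.append(y[t])

        model = MLPRegressor(hidden_layer_sizes=(16,), max_iter=3000,
                             random_state=0).fit(X[:300], target[:300])
        print(model.score(X[300:], target[300:]))     # held-out fit quality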