WorldWideScience

Sample records for model input sequence

  1. Comprehensive Information Retrieval and Model Input Sequence (CIRMIS)

    International Nuclear Information System (INIS)

    Friedrichs, D.R.

    1977-04-01

    The Comprehensive Information Retrieval and Model Input Sequence (CIRMIS) was developed to provide the research scientist with man-machine interactive capabilities in a real-time environment, and thereby produce results more quickly and efficiently. The CIRMIS system was originally developed to increase data storage and retrieval capabilities and ground-water model control for the Hanford site. The overall configuration, however, can be used in other areas. The CIRMIS system provides the user with three major functions: retrieval of well-based data, special application for manipulating surface data or background maps, and the manipulation and control of ground-water models. These programs comprise only a portion of the entire CIRMIS system. A complete description of the CIRMIS system is given in this report. 25 figures, 7 tables

  2. Modeling of Prepregs during Automated Draping Sequences

    DEFF Research Database (Denmark)

    Krogh, Christian; Glud, Jens Ammitzbøll; Jakobsen, Johnny

    2017-01-01

    ... algorithm used to generate target points on the mold which are used as input to a draping sequence planner. The draping sequence planner prescribes the displacement history for each gripper in the drape tool and these displacements are then applied to each gripper in a transient model of the draping sequence. The model is based on a transient finite element analysis with the material's constitutive behavior currently being approximated as linear elastic orthotropic. In-plane tensile and bias-extension tests as well as bending tests are conducted and used as input for the model. The virtual draping ...

  3. The use of synthetic input sequences in time series modeling

    International Nuclear Information System (INIS)

    Oliveira, Dair Jose de; Letellier, Christophe; Gomes, Murilo E.D.; Aguirre, Luis A.

    2008-01-01

    In many situations time series models obtained from noise-like data settle to trivial solutions under iteration. This Letter proposes a way of producing a synthetic (dummy) input, that is included to prevent the model from settling down to a trivial solution, while maintaining features of the original signal. Simulated benchmark models and a real time series of RR intervals from an ECG are used to illustrate the procedure
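
    A minimal runnable sketch of the idea (an assumed illustration, not the authors' code): a linear model fitted to noise-like data decays to the trivial fixed point under free-run iteration, while a synthetic (dummy) input with the variance of the original signal keeps the iterated model alive.

      import numpy as np

      # Noise-like "measured" series: a model fitted to it has no
      # deterministic structure to sustain under iteration.
      rng = np.random.default_rng(0)
      y = rng.normal(size=500)

      # Least-squares AR(2) fit: y[t] ~ a1*y[t-1] + a2*y[t-2]
      X = np.column_stack([y[1:-1], y[:-2]])
      a1, a2 = np.linalg.lstsq(X, y[2:], rcond=None)[0]

      def iterate(n, dummy=None):
          """Free-run the fitted model, optionally driven by a synthetic input."""
          out = list(y[:2])
          for t in range(2, n):
              u = 0.0 if dummy is None else dummy[t]
              out.append(a1 * out[-1] + a2 * out[-2] + u)
          return np.array(out)

      free_run = iterate(200)                        # settles toward the trivial solution 0
      dummy = rng.normal(scale=y.std(), size=200)    # surrogate input, same variance
      driven = iterate(200, dummy)                   # keeps fluctuating under iteration
      print(np.abs(free_run[-5:]).max(), np.abs(driven[-5:]).max())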

  4. Input modeling with phase-type distributions and Markov models theory and applications

    CERN Document Server

    Buchholz, Peter; Felko, Iryna

    2014-01-01

    Containing a summary of several recent results on Markov-based input modeling in a coherent notation, this book introduces and compares algorithms for parameter fitting and gives an overview of available software tools in the area. Due to progress made in recent years with respect to new algorithms to generate PH distributions and Markovian arrival processes from measured data, the models outlined are useful alternatives to other distributions or stochastic processes used for input modeling. Graduate students and researchers in applied probability, operations research and computer science along with practitioners using simulation or analytical models for performance analysis and capacity planning will find the unified notation and up-to-date results presented useful. Input modeling is the key step in model-based system analysis to adequately describe the load of a system using stochastic models. The goal of input modeling is to find a stochastic model to describe a sequence of measurements from a real system...
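
    As a concrete instance of the class of distributions the book treats, the sketch below (all values assumed for illustration) builds a two-phase hyperexponential, one of the simplest phase-type (PH) distributions: the time to absorption of a Markov chain with initial vector alpha and subgenerator T.

      import numpy as np

      alpha = np.array([0.4, 0.6])   # initial phase probabilities (assumed)
      rates = np.array([1.0, 5.0])   # per-phase exit rates (assumed)
      T = np.diag(-rates)            # subgenerator; no inter-phase jumps here

      def ph_sample(n, rng=np.random.default_rng(1)):
          """Draw n samples: pick a starting phase, then its exponential exit time."""
          phase = rng.choice(len(alpha), size=n, p=alpha)
          return rng.exponential(1.0 / rates[phase])

      samples = ph_sample(100_000)
      mean_exact = np.sum(alpha / rates)     # alpha @ inv(-T) @ 1 for diagonal T
      print(samples.mean(), mean_exact)      # the two should nearly agree

    Fitting algorithms of the kind the book surveys (EM, moment matching) then adjust alpha and T so that such a distribution reproduces measured inter-arrival or service times.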

  5. OFFSCALE: PC input processor for SCALE-4 criticality sequences

    International Nuclear Information System (INIS)

    Bowman, S.M.

    1991-01-01

    OFFSCALE is a personal computer program that serves as a user-friendly interface for the Criticality Safety Analysis Sequences (CSAS) available in SCALE-4. It is designed to assist a SCALE-4 user in preparing an input file for execution of criticality safety problems. Output from OFFSCALE is a card-image input file that may be uploaded to a mainframe computer to execute the CSAS4 control module in SCALE-4. OFFSCALE features a pulldown menu system that accesses sophisticated data entry screens. The program allows the user to quickly set up a CSAS4 input file and perform data checking

  6. Modelling of Multi Input Transfer Function for Rainfall Forecasting in Batu City

    OpenAIRE

    Priska Arindya Purnama

    2017-01-01

    The aim of this research is to model and forecast the rainfall in Batu City using a multi input transfer function model based on air temperature, humidity, wind speed and cloud cover. A transfer function model is a multivariate time series model which consists of an output series (Yt) expected to be affected by an input series (Xt) and other inputs grouped into a noise series (Nt). The multi input transfer function model obtained is (b1,s1,r1) (b2,s2,r2) (b3,s3,r3) (b4,s4,r4)(pn,qn) = (0,0,0)...

  7. Crossover Can Be Constructive When Computing Unique Input Output Sequences

    DEFF Research Database (Denmark)

    Lehre, Per Kristian; Yao, Xin

    2010-01-01

    Unique input output (UIO) sequences have important applications in conformance testing of finite state machines (FSMs). Previous experimental and theoretical research has shown that evolutionary algorithms (EAs) can compute UIOs efficiently on many FSM instance classes, but fail on others. However...

  8. Modeling of prepregs during automated draping sequences

    Science.gov (United States)

    Krogh, Christian; Glud, Jens A.; Jakobsen, Johnny

    2017-10-01

    The behavior of woven prepreg fabric during automated draping sequences is investigated. A drape tool under development with an arrangement of grippers facilitates the placement of a woven prepreg fabric in a mold. It is essential that the draped configuration is free from wrinkles and other defects. The present study aims at setting up a virtual draping framework capable of modeling the draping process from the initial flat fabric to the final double-curved shape, thereby assisting the development of an automated drape tool. The virtual draping framework consists of a kinematic mapping algorithm used to generate target points on the mold, which are used as input to a draping sequence planner. The draping sequence planner prescribes the displacement history for each gripper in the drape tool, and these displacements are then applied to each gripper in a transient model of the draping sequence. The model is based on a transient finite element analysis with the material's constitutive behavior currently approximated as linear elastic orthotropic. In-plane tensile and bias-extension tests as well as bending tests are conducted and used as input for the model. The virtual draping framework shows good potential for obtaining a better understanding of the drape process and for guiding the development of the drape tool. However, results obtained from using the framework on a simple test case indicate that the generation of draping sequences is non-trivial.

  9. A Method to Select Software Test Cases in Consideration of Past Input Sequence

    International Nuclear Information System (INIS)

    Kim, Hee Eun; Kim, Bo Gyung; Kang, Hyun Gook

    2015-01-01

    In the Korea Nuclear I and C Systems (KNICS) project, the software for the fully-digitalized reactor protection system (RPS) was developed under a strict procedure. Even though the behavior of the software is deterministic, the randomness of the input sequence produces probabilistic behavior of the software. A software failure occurs when some inputs to the software occur and interact with the internal state of the digital system to trigger a fault that was introduced into the software during the software lifecycle. In this paper, a method to select a test set for software failure probability estimation is suggested. This test set reflects the past input sequence of the software and covers all possible cases. To obtain the profile of paired state variables, the relationships of the variables need to be considered, and the effect of input from the human operator also has to be considered. As an example, a test set for the PZR-PR-Lo-Trip logic was examined. The method provides a framework for selecting test cases of safety-critical software.

  10. Modelling of Multi Input Transfer Function for Rainfall Forecasting in Batu City

    Directory of Open Access Journals (Sweden)

    Priska Arindya Purnama

    2017-11-01

    The aim of this research is to model and forecast the rainfall in Batu City using a multi-input transfer function model based on air temperature, humidity, wind speed and cloud cover. A transfer function model is a multivariate time series model which consists of an output series (Yt) expected to be affected by an input series (Xt) and other inputs grouped into a noise series (Nt). The multi-input transfer function model obtained is (b1,s1,r1) (b2,s2,r2) (b3,s3,r3) (b4,s4,r4)(pn,qn) = (0,0,0) (23,0,0) (1,2,0) (0,0,0) ([5,8],2) and shows that air temperature on day t affects rainfall on day t, rainfall on day t is influenced by air humidity in the previous 23 days, rainfall on day t is affected by wind speed in the previous day, and rainfall on day t is affected by clouds on day t. The rainfall forecasts for Batu City produced by the multi-input transfer function model can be considered accurate, because they yield relatively small RMSE values: 7.7921 for the training data and 4.2184 for the testing data. The multi-input transfer function model is thus suitable for forecasting rainfall in Batu City.
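
    In standard Box-Jenkins notation (reconstructed here on the assumption that the (b,s,r) orders above follow the usual convention), the multi-input transfer function model reads

      Y_t = \sum_{j=1}^{4} \frac{\omega_j(B)\, B^{b_j}}{\delta_j(B)}\, X_{j,t} + \frac{\theta_{q_n}(B)}{\phi_{p_n}(B)}\, a_t

    where B is the backshift operator, b_j is the delay of input j (e.g. b_2 = 23 days for humidity), s_j and r_j are the orders of the numerator polynomial omega_j and the denominator polynomial delta_j, and the last term is the ARMA(p_n, q_n) noise series N_t.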

  11. Modeling inputs to computer models used in risk assessment

    International Nuclear Information System (INIS)

    Iman, R.L.

    1987-01-01

    Computer models for various risk assessment applications are closely scrutinized both from the standpoint of questioning the correctness of the underlying mathematical model with respect to the process it is attempting to model and from the standpoint of verifying that the computer model correctly implements the underlying mathematical model. A process that receives less scrutiny, but is nonetheless of equal importance, concerns the individual and joint modeling of the inputs. This modeling effort clearly has a great impact on the credibility of results. Model characteristics are reviewed in this paper that have a direct bearing on the model input process, and reasons are given for using probability-based modeling of the inputs. The authors also present ways to model distributions for individual inputs and multivariate input structures when dependence and other constraints may be present

  12. HotSpot Wizard 3.0: web server for automated design of mutations and smart libraries based on sequence input information.

    Science.gov (United States)

    Sumbalova, Lenka; Stourac, Jan; Martinek, Tomas; Bednar, David; Damborsky, Jiri

    2018-05-23

    HotSpot Wizard is a web server used for the automated identification of hotspots in semi-rational protein design to give improved protein stability, catalytic activity, substrate specificity and enantioselectivity. Since there are three orders of magnitude fewer protein structures than sequences in bioinformatic databases, the major limitation to the usability of previous versions was the requirement for the protein structure to be a compulsory input for the calculation. HotSpot Wizard 3.0 now accepts the protein sequence as input data. The protein structure for the query sequence is obtained either from eight repositories of homology models or is modeled using Modeller and I-Tasser. The quality of the models is then evaluated using three quality assessment tools: WHAT_CHECK, PROCHECK and MolProbity. During follow-up analyses, the system automatically warns the users whenever they attempt to redesign poorly predicted parts of their homology models. The second main limitation of HotSpot Wizard's predictions is that it identifies suitable positions for mutagenesis, but does not provide any reliable advice on particular substitutions. A new module for the estimation of thermodynamic stabilities using the Rosetta and FoldX suites has been introduced, which prevents destabilizing mutations among pre-selected variants from entering experimental testing. HotSpot Wizard is freely available at http://loschmidt.chemi.muni.cz/hotspotwizard.

  13. Preventing Out-of-Sequence for Multicast Input-Queued Space-Memory-Memory Clos-Network

    DEFF Research Database (Denmark)

    Yu, Hao; Ruepp, Sarah Renée; Berger, Michael Stübert

    2011-01-01

    This paper proposes an out-of-sequence (OOS) preventative cell dispatching algorithm, the multicast flow-based round robin (MFRR), for multicast input-queued space-memory-memory (IQ-SMM) Clos-network architecture. Independently treating each incoming cell, such as the desynchronized static round...

  14. Out-of-Sequence Prevention for Multicast Input-Queuing Space-Memory-Memory Clos-Network

    DEFF Research Database (Denmark)

    Yu, Hao; Ruepp, Sarah; Berger, Michael Stübert

    2011-01-01

    This paper proposes two cell dispatching algorithms for the input-queuing space-memory-memory (IQ-SMM) Closnetwork to reduce out-of-sequence (OOS) for multicast traffic. The frequent connection pattern change of DSRR results in a severe OOS problem. Based on the principle of DSRR, MFDSRR is able ...

  15. Input dependent cell assembly dynamics in a model of the striatal medium spiny neuron network

    Directory of Open Access Journals (Sweden)

    Adam ePonzi

    2012-03-01

    The striatal medium spiny neuron (MSN) network is sparsely connected with fairly weak GABAergic collaterals receiving an excitatory glutamatergic cortical projection. Peri-stimulus time histograms (PSTH) of MSN population response investigated in various experimental studies display strong firing rate modulations distributed throughout behavioural task epochs. In previous work we have shown by numerical simulation that sparse random networks of inhibitory spiking neurons with characteristics appropriate for UP state MSNs form cell assemblies which fire together coherently in sequences on long behaviourally relevant timescales when the network receives a fixed pattern of constant input excitation. Here we first extend that model to the case where cortical excitation is composed of many independent noisy Poisson processes and demonstrate that cell assembly dynamics is still observed when the input is sufficiently weak. However, if cortical excitation strength is increased, more regularly firing and completely quiescent cells are found, which depend on the cortical stimulation. Subsequently we further extend previous work to consider what happens when the excitatory input varies as it would when the animal is engaged in behaviour. We investigate how sudden switches in excitation interact with network generated patterned activity. We show that sequences of cell assembly activations can be locked to the excitatory input sequence and delineate the range of parameters where this behaviour is shown. Model cell population PSTH display both stimulus and temporal specificity, with large population firing rate modulations locked to elapsed time from task events. Thus the random network can generate a large diversity of temporally evolving stimulus dependent responses even though the input is fixed between switches. We suggest the MSN network is well suited to the generation of such slow coherent task-dependent responses.

  16. Input dependent cell assembly dynamics in a model of the striatal medium spiny neuron network.

    Science.gov (United States)

    Ponzi, Adam; Wickens, Jeff

    2012-01-01

    The striatal medium spiny neuron (MSN) network is sparsely connected with fairly weak GABAergic collaterals receiving an excitatory glutamatergic cortical projection. Peri-stimulus time histograms (PSTH) of MSN population response investigated in various experimental studies display strong firing rate modulations distributed throughout behavioral task epochs. In previous work we have shown by numerical simulation that sparse random networks of inhibitory spiking neurons with characteristics appropriate for UP state MSNs form cell assemblies which fire together coherently in sequences on long behaviorally relevant timescales when the network receives a fixed pattern of constant input excitation. Here we first extend that model to the case where cortical excitation is composed of many independent noisy Poisson processes and demonstrate that cell assembly dynamics is still observed when the input is sufficiently weak. However if cortical excitation strength is increased more regularly firing and completely quiescent cells are found, which depend on the cortical stimulation. Subsequently we further extend previous work to consider what happens when the excitatory input varies as it would when the animal is engaged in behavior. We investigate how sudden switches in excitation interact with network generated patterned activity. We show that sequences of cell assembly activations can be locked to the excitatory input sequence and outline the range of parameters where this behavior is shown. Model cell population PSTH display both stimulus and temporal specificity, with large population firing rate modulations locked to elapsed time from task events. Thus the random network can generate a large diversity of temporally evolving stimulus dependent responses even though the input is fixed between switches. We suggest the MSN network is well suited to the generation of such slow coherent task dependent response which could be utilized by the animal in behavior.

  17. Phylogenetic mixtures and linear invariants for equal input models.

    Science.gov (United States)

    Casanellas, Marta; Steel, Mike

    2017-04-01

    The reconstruction of phylogenetic trees from molecular sequence data relies on modelling site substitutions by a Markov process, or a mixture of such processes. In general, allowing mixed processes can result in different tree topologies becoming indistinguishable from the data, even for infinitely long sequences. However, when the underlying Markov process supports linear phylogenetic invariants, then provided these are sufficiently informative, the identifiability of the tree topology can be restored. In this paper, we investigate a class of processes that support linear invariants once the stationary distribution is fixed, the 'equal input model'. This model generalizes the 'Felsenstein 1981' model (and thereby the Jukes-Cantor model) from four states to an arbitrary number of states (finite or infinite), and it can also be described by a 'random cluster' process. We describe the structure and dimension of the vector spaces of phylogenetic mixtures and of linear invariants for any fixed phylogenetic tree (and for all trees, the so-called 'model invariants'), on any number n of leaves. We also provide a precise description of the space of mixtures and linear invariants for the special case of [Formula: see text] leaves. By combining techniques from discrete random processes and (multi-)linear algebra, our results build on a classic result that was first established by James Lake (Mol Biol Evol 4:167-191, 1987).
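
    Concretely (a standard statement of the model, not quoted from the paper), the equal input model on a state space of size k with stationary distribution pi has rate matrix

      Q_{ij} = \pi_j \quad (i \neq j), \qquad Q_{ii} = -(1 - \pi_i),

    so every substitution targets state j at a rate depending only on j. With k = 4 this is the Felsenstein 1981 model, and with pi uniform it reduces to the Jukes-Cantor model.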

  18. Robust input design for nonlinear dynamic modeling of AUV.

    Science.gov (United States)

    Nouri, Nowrouz Mohammad; Valadi, Mehrdad

    2017-09-01

    Input design has a dominant role in developing the dynamic model of autonomous underwater vehicles (AUVs) through system identification. Optimal input design is the process of generating informative inputs that can be used to generate a good quality dynamic model of AUVs. In a problem with optimal input design, the desired input signal depends on the unknown system which is intended to be identified. In this paper, an input design approach which is robust to uncertainties in model parameters is used. The Bayesian robust design strategy is applied to design input signals for dynamic modeling of AUVs. The employed approach can design multiple inputs and apply constraints on an AUV system's inputs and outputs. Particle swarm optimization (PSO) is employed to solve the constrained robust optimization problem. The presented algorithm is used for designing the input signals for an AUV, and the estimate obtained by robust input design is compared with that of the optimal input design. According to the results, the proposed input design can satisfy both robustness of constraints and optimality.

  19. Continuous Online Sequence Learning with an Unsupervised Neural Network Model.

    Science.gov (United States)

    Cui, Yuwei; Ahmad, Subutai; Hawkins, Jeff

    2016-09-14

    The ability to recognize and predict temporal sequences of sensory inputs is vital for survival in natural environments. Based on many known properties of cortical neurons, hierarchical temporal memory (HTM) sequence memory has recently been proposed as a theoretical framework for sequence learning in the cortex. In this letter, we analyze properties of HTM sequence memory and apply it to sequence learning and prediction problems with streaming data. We show the model is able to continuously learn a large number of variable-order temporal sequences using an unsupervised Hebbian-like learning rule. The sparse temporal codes formed by the model can robustly handle branching temporal sequences by maintaining multiple predictions until there is sufficient disambiguating evidence. We compare the HTM sequence memory with other sequence learning algorithms, including a statistical method (autoregressive integrated moving average), feedforward neural networks (time delay neural network and online sequential extreme learning machine), and recurrent neural networks (long short-term memory and echo-state networks), on sequence prediction problems with both artificial and real-world data. The HTM model achieves comparable accuracy to other state-of-the-art algorithms. The model also exhibits properties that are critical for sequence learning, including continuous online learning, the ability to handle multiple predictions and branching sequences with high-order statistics, robustness to sensor noise and fault tolerance, and good performance without task-specific hyperparameter tuning. Therefore, the HTM sequence memory not only advances our understanding of how the brain may solve the sequence learning problem but is also applicable to real-world sequence learning problems from continuous data streams.

  20. A guidance on MELCOR input preparation: An input deck for Ul-Chin 3 and 4 Nuclear Power Plant

    Energy Technology Data Exchange (ETDEWEB)

    Cho, Song Won

    1997-02-01

    The objective of this study is to enhance the capability of assessing severe accident sequences and containment behavior using the MELCOR computer code, and to provide a guideline for its efficient use. This report shows the method of input deck preparation as well as the assessment strategy for the MELCOR code. MELCOR is a fully integrated, engineering-level computer code that models the progression of severe accidents in light water reactor nuclear power plants. The code is being developed at Sandia National Laboratories for the U.S. NRC as a second-generation plant risk assessment tool and the successor to the Source Term Code Package. The accident sequence of the reference input deck prepared in this study for the Ulchin unit 3 and 4 nuclear power plants is the total loss of feedwater (TLOFW) without any success of safety systems, which is similar to station blackout (TLMB). It is very useful to simulate a well-known sequence with a best-estimate code or experiment, because the results of the simulation before core melt can be compared with the FSAR, although no data are available after core melt. The precalculation of the TLOFW using the reference input deck was performed successfully as expected. The other sequences will be carried out with minor changes to the reference input. This input deck will be improved continually by adding the safety systems not yet included, and also through sensitivity and uncertainty analyses. (author). 19 refs., 10 tabs., 55 figs.

  21. A guidance on MELCOR input preparation: An input deck for Ul-Chin 3 and 4 Nuclear Power Plant

    International Nuclear Information System (INIS)

    Cho, Song Won.

    1997-02-01

    The objective of this study is to enhance the capability of assessing severe accident sequences and containment behavior using the MELCOR computer code, and to provide a guideline for its efficient use. This report shows the method of input deck preparation as well as the assessment strategy for the MELCOR code. MELCOR is a fully integrated, engineering-level computer code that models the progression of severe accidents in light water reactor nuclear power plants. The code is being developed at Sandia National Laboratories for the U.S. NRC as a second-generation plant risk assessment tool and the successor to the Source Term Code Package. The accident sequence of the reference input deck prepared in this study for the Ulchin unit 3 and 4 nuclear power plants is the total loss of feedwater (TLOFW) without any success of safety systems, which is similar to station blackout (TLMB). It is very useful to simulate a well-known sequence with a best-estimate code or experiment, because the results of the simulation before core melt can be compared with the FSAR, although no data are available after core melt. The precalculation of the TLOFW using the reference input deck was performed successfully as expected. The other sequences will be carried out with minor changes to the reference input. This input deck will be improved continually by adding the safety systems not yet included, and also through sensitivity and uncertainty analyses. (author). 19 refs., 10 tabs., 55 figs

  22. Out-of-Sequence Preventative Cell Dispatching for Multicast Input-Queued Space-Memory-Memory Clos-Network

    DEFF Research Database (Denmark)

    Yu, Hao; Ruepp, Sarah Renée; Berger, Michael Stübert

    2011-01-01

    This paper proposes two out-of-sequence (OOS) preventative cell dispatching algorithms for the multicast input-queued space-memory-memory (IQ-SMM) Clos-network switch architecture, i.e. the multicast flow-based DSRR (MF-DSRR) and the multicast flow-based round-robin (MFRR). Treating each cell...

  23. Variance-based sensitivity indices for models with dependent inputs

    International Nuclear Information System (INIS)

    Mara, Thierry A.; Tarantola, Stefano

    2012-01-01

    Computational models are intensively used in engineering for risk analysis or prediction of future outcomes. Uncertainty and sensitivity analyses are of great help in these purposes. Although several methods exist to perform variance-based sensitivity analysis of model output with independent inputs, only a few are proposed in the literature for the case of dependent inputs. This is explained by the fact that the theoretical framework for the independent case is well established and a univocal set of variance-based sensitivity indices is defined. In the present work, we propose a set of variance-based sensitivity indices to perform sensitivity analysis of models with dependent inputs. These measures allow us to distinguish between the mutual dependent contribution and the independent contribution of an input to the model response variance. Their definition relies on a specific orthogonalisation of the inputs and ANOVA-representations of the model output. In the applications, we show the interest of the new sensitivity indices for model simplification purposes. - Highlights: ► Uncertainty and sensitivity analyses are of great help in engineering. ► Several methods exist to perform variance-based sensitivity analysis of model output with independent inputs. ► We define a set of variance-based sensitivity indices for models with dependent inputs. ► Inputs mutual contributions are distinguished from their independent contributions. ► Analytical and computational tests are performed and discussed.
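
    For orientation, the sketch below (toy model assumed) shows the standard pick-freeze Monte Carlo estimator of the first-order variance-based index in the independent-input baseline case; the paper's contribution is to generalize such indices to dependent inputs through an orthogonalisation of the inputs.

      import numpy as np

      rng = np.random.default_rng(2)

      def model(x):
          # Toy response: analytic first-order indices are 1/6, 4/6 and 0.
          return x[:, 0] + 2 * x[:, 1] + x[:, 0] * x[:, 2]

      n, d = 100_000, 3
      A = rng.normal(size=(n, d))
      B = rng.normal(size=(n, d))
      yA = model(A)

      for i in range(d):
          Ci = B.copy()
          Ci[:, i] = A[:, i]          # freeze X_i, resample all other inputs
          yCi = model(Ci)
          Si = (np.mean(yA * yCi) - yA.mean() ** 2) / yA.var()
          print(f"S_{i + 1} ~ {Si:.3f}")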

  24. Development of an Input Model to MELCOR 1.8.5 for the Ringhals 3 PWR

    International Nuclear Information System (INIS)

    Nilsson, Lars

    2004-12-01

    An input file to the severe accident code MELCOR 1.8.5 has been developed for the Swedish pressurized water reactor Ringhals 3. The aim was to produce a file that can be used for calculations of various postulated severe accident scenarios, although the first application is specifically on cases involving large hydrogen production. The input file is rather detailed, with individual modelling of all three cooling loops. The report describes the basis for the Ringhals 3 model and the input preparation step by step, and is illustrated by nodalization schemes of the different plant systems. The present version of the report is restricted to the fundamental MELCOR input preparation, and therefore most of the figures of Ringhals 3 measurements and operating parameters are excluded here. These are given in another, complete version of the report, for limited distribution, which includes tables for pertinent data of all components. That version contains appendices with a complete listing of the input files as well as tables of data compiled from a RELAP5 file that was a major basis for the MELCOR input for the cooling loops. The input was tested in steady-state calculations in order to simulate the initial conditions at current nominal operating conditions in Ringhals 3 for 2775 MW thermal power. The results of the steady-state calculations are presented in the report. Calculations of certain accident sequences will then be carried out with the MELCOR model for comparison with results from earlier MAAP4 calculations. That work will be reported separately

  25. Global sensitivity analysis of computer models with functional inputs

    International Nuclear Information System (INIS)

    Iooss, Bertrand; Ribatet, Mathieu

    2009-01-01

    Global sensitivity analysis is used to quantify the influence of uncertain model inputs on the response variability of a numerical model. The common quantitative methods are appropriate for computer codes having scalar model inputs. This paper aims at illustrating different variance-based sensitivity analysis techniques, based on the so-called Sobol' indices, when some model inputs are functional, such as stochastic processes or random spatial fields. In this work, we focus on computer codes with large CPU times, which need a preliminary metamodeling step before performing the sensitivity analysis. We propose the use of the joint modeling approach, i.e., modeling simultaneously the mean and the dispersion of the code outputs using two interlinked generalized linear models (GLMs) or generalized additive models (GAMs). The 'mean model' is used to estimate the sensitivity indices of each scalar model input, while the 'dispersion model' is used to derive the total sensitivity index of the functional model inputs. The proposed approach is compared to some classical sensitivity analysis methodologies on an analytical function. Lastly, the new methodology is applied to an industrial computer code that simulates nuclear fuel irradiation.

  26. A probabilistic graphical model based stochastic input model construction

    International Nuclear Information System (INIS)

    Wan, Jiang; Zabaras, Nicholas

    2014-01-01

    Model reduction techniques have been widely used in modeling of high-dimensional stochastic input in uncertainty quantification tasks. However, the probabilistic modeling of random variables projected into reduced-order spaces presents a number of computational challenges. Due to the curse of dimensionality, the underlying dependence relationships between these random variables are difficult to capture. In this work, a probabilistic graphical model based approach is employed to learn the dependence by running a number of conditional independence tests using observation data. Thus a probabilistic model of the joint PDF is obtained and the PDF is factorized into a set of conditional distributions based on the dependence structure of the variables. The estimation of the joint PDF from data is then transformed to estimating conditional distributions under reduced dimensions. To improve the computational efficiency, a polynomial chaos expansion is further applied to represent the random field in terms of a set of standard random variables. This technique is combined with both linear and nonlinear model reduction methods. Numerical examples are presented to demonstrate the accuracy and efficiency of the probabilistic graphical model based stochastic input models. - Highlights: • Data-driven stochastic input models without the assumption of independence of the reduced random variables. • The problem is transformed to a Bayesian network structure learning problem. • Examples are given in flows in random media
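
    The polynomial chaos step can be illustrated in one dimension (toy quantity assumed, not the paper's random-media application): a random quantity Y = g(Z) with Z ~ N(0,1) is projected onto probabilists' Hermite polynomials by Gauss-Hermite quadrature.

      import numpy as np
      from numpy.polynomial.hermite_e import hermeval, hermegauss

      g = lambda z: np.exp(0.3 * z)        # toy random quantity (assumed)

      nodes, weights = hermegauss(20)      # quadrature for weight exp(-z^2/2)
      coeffs = []
      for k in range(5):
          ek = np.zeros(k + 1); ek[k] = 1.0          # coefficient vector of He_k
          num = np.sum(weights * g(nodes) * hermeval(nodes, ek))
          den = np.sum(weights * hermeval(nodes, ek) ** 2)
          coeffs.append(num / den)                   # <g, He_k> / <He_k, He_k>

      # Evaluate the 5-term expansion at random points and check the error.
      z = np.random.default_rng(5).normal(size=10)
      approx = sum(c * hermeval(z, np.eye(5)[k]) for k, c in enumerate(coeffs))
      print(np.max(np.abs(approx - g(z))))           # small truncation error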

  27. OFFSCALE: A PC input processor for the SCALE code system. The CSASIN processor for the criticality sequences

    International Nuclear Information System (INIS)

    Bowman, S.M.

    1994-11-01

    OFFSCALE is a suite of personal computer input processor programs developed at Oak Ridge National Laboratory to provide an easy-to-use interface for modules in the SCALE-4 code system. CSASIN (formerly known as OFFSCALE) is a program in the OFFSCALE suite that serves as a user-friendly interface for the Criticality Safety Analysis Sequences (CSAS) available in SCALE-4. It is designed to assist a SCALE-4 user in preparing an input file for execution of criticality safety problems. Output from CSASIN generates an input file that may be used to execute the CSAS control module in SCALE-4. CSASIN features a pulldown menu system that accesses sophisticated data entry screens. The program allows the user to quickly set up a CSAS input file and perform data checking. This capability increases productivity and decreases the chance of user error

  28. Runtime analysis of the (1+1) EA on computing unique input output sequences

    DEFF Research Database (Denmark)

    Lehre, Per Kristian; Yao, Xin

    2010-01-01

    Computing unique input output (UIO) sequences is a fundamental and hard problem in conformance testing of finite state machines (FSM). Previous experimental research has shown that evolutionary algorithms (EAs) can be applied successfully to find UIOs for some FSMs. However, before EAs can ... in the theoretical analysis, and the variability of the runtime. The numerical results fit well with the theoretical results, even for small problem instance sizes. Together, these results provide a first theoretical characterisation of the potential and limitations of the (1 + 1) EA on the problem of computing UIOs.

  29. Analytic uncertainty and sensitivity analysis of models with input correlations

    Science.gov (United States)

    Zhu, Yueying; Wang, Qiuping A.; Li, Wei; Cai, Xu

    2018-03-01

    Probabilistic uncertainty analysis is a common means of evaluating mathematical models. In mathematical modeling, the uncertainty in input variables is specified through distribution laws. Its contribution to the uncertainty in the model response is usually analyzed by assuming that input variables are independent of each other. However, correlated parameters often occur in practical applications. In the present paper, an analytic method is built for the uncertainty and sensitivity analysis of models in the presence of input correlations. With the method, it is straightforward to identify the importance of the independence and correlations of input variables in determining the model response. This allows one to decide whether or not the input correlations should be considered in practice. Numerical examples demonstrate the effectiveness and validity of our analytic method in the analysis of general models. A practical application of the method is also given for the uncertainty and sensitivity analysis of a deterministic HIV model.
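
    The core distinction can already be seen in a linear model Y = a^T X (an assumed illustration, not the paper's HIV application): the response variance a^T C a splits into independent contributions a_i^2 C_ii and correlated contributions 2 a_i a_j C_ij.

      import numpy as np

      a = np.array([1.0, 2.0, 0.5])          # model coefficients (assumed)
      C = np.array([[1.0, 0.6, 0.0],         # input covariance with a correlated
                    [0.6, 1.0, 0.0],         # (X1, X2) pair (values assumed)
                    [0.0, 0.0, 2.0]])

      var_total = a @ C @ a
      var_indep = np.sum(a ** 2 * np.diag(C))
      var_corr = var_total - var_indep       # contribution of the correlations
      print(var_total, var_indep, var_corr)  # 7.9 = 5.5 + 2.4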

  30. On Optimal Input Design and Model Selection for Communication Channels

    Energy Technology Data Exchange (ETDEWEB)

    Li, Yanyan [ORNL]; Djouadi, Seddik M. [ORNL]; Olama, Mohammed M. [ORNL]

    2013-01-01

    In this paper, the optimal model (structure) selection and input design which minimize the worst-case identification error for communication systems are provided. The problem is formulated using metric complexity theory in a Hilbert space setting. It is pointed out that model selection and input design can be handled independently. The Kolmogorov n-width is used to characterize the representation error introduced by model selection, while the Gel'fand and time n-widths are used to represent the inherent error introduced by input design. After the model is selected, an optimal input which minimizes the worst-case identification error is shown to exist. In particular, it is proven that the optimal model for reducing the representation error is a Finite Impulse Response (FIR) model, and the optimal input is an impulse at the start of the observation interval. FIR models are widely popular in communication systems, such as in Orthogonal Frequency Division Multiplexing (OFDM) systems.
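
    The paper's conclusion is easy to verify on a toy FIR channel (setup assumed): an impulse at the start of the observation interval makes the noiseless output equal the impulse response, so least-squares identification reduces to reading off noisy taps.

      import numpy as np

      rng = np.random.default_rng(3)
      h = np.array([0.9, 0.5, -0.2, 0.1])      # unknown FIR channel (assumed)
      N, T = len(h), 16

      u = np.zeros(T); u[0] = 1.0              # optimal input: impulse at t = 0
      y = np.convolve(u, h)[:T] + 0.01 * rng.normal(size=T)

      # Least-squares FIR fit of y[t] = sum_k h[k] * u[t-k]
      U = np.column_stack([np.concatenate([np.zeros(k), u[:T - k]]) for k in range(N)])
      h_hat = np.linalg.lstsq(U, y, rcond=None)[0]
      print(h_hat)                              # close to h: the taps are read directly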

  31. An investigation of developmental changes in interpretation and construction of graphic AAC symbol sequences through systematic combination of input and output modalities.

    Science.gov (United States)

    Trudeau, Natacha; Sutton, Ann; Morford, Jill P

    2014-09-01

    While research on spoken language has a long tradition of studying and contrasting language production and comprehension, the study of graphic symbol communication has focused more on production than comprehension. As a result, the relationships between the ability to construct and to interpret graphic symbol sequences are not well understood. This study explored the use of graphic symbol sequences in children without disabilities aged 3;0 to 6;11 (years; months) (n=111). Children took part in nine tasks that systematically varied input and output modalities (speech, action, and graphic symbols). Results show that in 3- and 4-year-olds, attributing meaning to a sequence of symbols was particularly difficult even when the children knew the meaning of each symbol in the sequence. Similarly, while even 3- and 4-year-olds could produce a graphic symbol sequence following a model, transposing a spoken sentence into a graphic sequence was more difficult for them. Representing an action with graphic symbols was difficult even for 5-year-olds. Finally, the ability to comprehend graphic-symbol sequences preceded the ability to produce them. These developmental patterns, as well as memory-related variables, should be taken into account in choosing intervention strategies with young children who use AAC.

  32. Inhalation Exposure Input Parameters for the Biosphere Model

    Energy Technology Data Exchange (ETDEWEB)

    K. Rautenstrauch

    2004-09-10

    This analysis is one of 10 reports that support the Environmental Radiation Model for Yucca Mountain, Nevada (ERMYN) biosphere model. The "Biosphere Model Report" (BSC 2004 [DIRS 169460]) describes in detail the conceptual model as well as the mathematical model and its input parameters. This report documents development of input parameters for the biosphere model that are related to atmospheric mass loading and supports the use of the model to develop biosphere dose conversion factors (BDCFs). The biosphere model is one of a series of process models supporting the total system performance assessment (TSPA) for a Yucca Mountain repository. Inhalation Exposure Input Parameters for the Biosphere Model is one of five reports that develop input parameters for the biosphere model. A graphical representation of the documentation hierarchy for the ERMYN is presented in Figure 1-1. This figure shows the interrelationships among the products (i.e., analysis and model reports) developed for biosphere modeling, and the plan for development of the biosphere abstraction products for TSPA, as identified in the Technical Work Plan for Biosphere Modeling and Expert Support (BSC 2004 [DIRS 169573]). This analysis report defines and justifies values of mass loading for the biosphere model. Mass loading is the total mass concentration of resuspended particles (e.g., dust, ash) in a volume of air. Mass loading values are used in the air submodel of ERMYN to calculate concentrations of radionuclides in air inhaled by a receptor and concentrations in air surrounding crops. Concentrations in air to which the receptor is exposed are then used in the inhalation submodel to calculate the dose contribution to the receptor from inhalation of contaminated airborne particles. Concentrations in air surrounding plants are used in the plant submodel to calculate the concentrations of radionuclides in foodstuffs contributed from uptake by foliar interception.

  33. Inhalation Exposure Input Parameters for the Biosphere Model

    International Nuclear Information System (INIS)

    K. Rautenstrauch

    2004-01-01

    This analysis is one of 10 reports that support the Environmental Radiation Model for Yucca Mountain, Nevada (ERMYN) biosphere model. The "Biosphere Model Report" (BSC 2004 [DIRS 169460]) describes in detail the conceptual model as well as the mathematical model and its input parameters. This report documents development of input parameters for the biosphere model that are related to atmospheric mass loading and supports the use of the model to develop biosphere dose conversion factors (BDCFs). The biosphere model is one of a series of process models supporting the total system performance assessment (TSPA) for a Yucca Mountain repository. Inhalation Exposure Input Parameters for the Biosphere Model is one of five reports that develop input parameters for the biosphere model. A graphical representation of the documentation hierarchy for the ERMYN is presented in Figure 1-1. This figure shows the interrelationships among the products (i.e., analysis and model reports) developed for biosphere modeling, and the plan for development of the biosphere abstraction products for TSPA, as identified in the Technical Work Plan for Biosphere Modeling and Expert Support (BSC 2004 [DIRS 169573]). This analysis report defines and justifies values of mass loading for the biosphere model. Mass loading is the total mass concentration of resuspended particles (e.g., dust, ash) in a volume of air. Mass loading values are used in the air submodel of ERMYN to calculate concentrations of radionuclides in air inhaled by a receptor and concentrations in air surrounding crops. Concentrations in air to which the receptor is exposed are then used in the inhalation submodel to calculate the dose contribution to the receptor from inhalation of contaminated airborne particles. Concentrations in air surrounding plants are used in the plant submodel to calculate the concentrations of radionuclides in foodstuffs contributed from uptake by foliar interception.

  34. Modeling Input Errors to Improve Uncertainty Estimates for Sediment Transport Model Predictions

    Science.gov (United States)

    Jung, J. Y.; Niemann, J. D.; Greimann, B. P.

    2016-12-01

    Bayesian methods using Markov chain Monte Carlo algorithms have recently been applied to sediment transport models to assess the uncertainty in the model predictions due to the parameter values. Unfortunately, the existing approaches can only attribute overall uncertainty to the parameters. This limitation is critical because no model can produce accurate forecasts if forced with inaccurate input data, even if the model is well founded in physical theory. In this research, an existing Bayesian method is modified to consider the potential errors in input data during the uncertainty evaluation process. The input error is modeled using Gaussian distributions, and the means and standard deviations are treated as uncertain parameters. The proposed approach is tested by coupling it to the Sedimentation and River Hydraulics - One Dimension (SRH-1D) model and simulating a 23-km reach of the Tachia River in Taiwan. The Wu equation in SRH-1D is used for computing the transport capacity for a bed material load of non-cohesive material. Three types of input data are considered uncertain: (1) the input flowrate at the upstream boundary, (2) the water surface elevation at the downstream boundary, and (3) the water surface elevation at a hydraulic structure in the middle of the reach. The benefits of modeling the input errors in the uncertainty analysis are evaluated by comparing the accuracy of the most likely forecast and the coverage of the observed data by the credible intervals to those of the existing method. The results indicate that the internal boundary condition has the largest uncertainty among those considered. Overall, the uncertainty estimates from the new method are notably different from those of the existing method for both the calibration and forecast periods.
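
    A minimal sketch of the augmentation (a toy linear stand-in for SRH-1D; all names and values assumed): the bias of a Gaussian input-error model is sampled jointly with a model parameter, so the posterior attributes part of the overall uncertainty to the input data.

      import numpy as np

      rng = np.random.default_rng(4)

      # Synthetic truth: output depends on the true inflow q_obs + b.
      k_true, b_true = 2.0, 0.5
      q_obs = rng.uniform(1, 5, size=50)                  # recorded (biased) inflow
      y_obs = k_true * (q_obs + b_true) + 0.1 * rng.normal(size=50)

      def log_post(theta):
          k, b = theta
          resid = y_obs - k * (q_obs + b)
          return -0.5 * np.sum(resid ** 2) / 0.1 ** 2 - 0.5 * b ** 2  # N(0,1) prior on b

      theta = np.array([1.0, 0.0])
      lp = log_post(theta)
      chain = []
      for _ in range(20_000):                             # random-walk Metropolis
          prop = theta + 0.02 * rng.normal(size=2)
          lp_prop = log_post(prop)
          if np.log(rng.uniform()) < lp_prop - lp:
              theta, lp = prop, lp_prop
          chain.append(theta)
      print(np.mean(chain[5000:], axis=0))                # approx. (k_true, b_true)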

  35. Environmental Transport Input Parameters for the Biosphere Model

    International Nuclear Information System (INIS)

    M. Wasiolek

    2004-01-01

    This analysis report is one of the technical reports documenting the Environmental Radiation Model for Yucca Mountain, Nevada (ERMYN), a biosphere model supporting the total system performance assessment for the license application (TSPA-LA) for the geologic repository at Yucca Mountain. A graphical representation of the documentation hierarchy for the ERMYN is presented in Figure 1-1. This figure shows relationships among the reports developed for biosphere modeling and biosphere abstraction products for the TSPA-LA, as identified in the "Technical Work Plan for Biosphere Modeling and Expert Support" (BSC 2004 [DIRS 169573]) (TWP). This figure provides an understanding of how this report contributes to biosphere modeling in support of the license application (LA). This report is one of the five reports that develop input parameter values for the biosphere model. The "Biosphere Model Report" (BSC 2004 [DIRS 169460]) describes the conceptual model and the mathematical model. The input parameter reports, shown to the right of the Biosphere Model Report in Figure 1-1, contain detailed descriptions of the model input parameters. The output of this report is used as direct input in the "Nominal Performance Biosphere Dose Conversion Factor Analysis" and in the "Disruptive Event Biosphere Dose Conversion Factor Analysis" that calculate the values of biosphere dose conversion factors (BDCFs) for the groundwater and volcanic ash exposure scenarios, respectively. The purpose of this analysis was to develop biosphere model parameter values related to radionuclide transport and accumulation in the environment. These parameters support calculations of radionuclide concentrations in the environmental media (e.g., soil, crops, animal products, and air) resulting from a given radionuclide concentration at the source of contamination (i.e., either in groundwater or in volcanic ash). The analysis was performed in accordance with the TWP (BSC 2004 [DIRS 169573]).

  36. Environmental Transport Input Parameters for the Biosphere Model

    Energy Technology Data Exchange (ETDEWEB)

    M. Wasiolek

    2004-09-10

    This analysis report is one of the technical reports documenting the Environmental Radiation Model for Yucca Mountain, Nevada (ERMYN), a biosphere model supporting the total system performance assessment for the license application (TSPA-LA) for the geologic repository at Yucca Mountain. A graphical representation of the documentation hierarchy for the ERMYN is presented in Figure 1-1. This figure shows relationships among the reports developed for biosphere modeling and biosphere abstraction products for the TSPA-LA, as identified in the "Technical Work Plan for Biosphere Modeling and Expert Support" (BSC 2004 [DIRS 169573]) (TWP). This figure provides an understanding of how this report contributes to biosphere modeling in support of the license application (LA). This report is one of the five reports that develop input parameter values for the biosphere model. The "Biosphere Model Report" (BSC 2004 [DIRS 169460]) describes the conceptual model and the mathematical model. The input parameter reports, shown to the right of the Biosphere Model Report in Figure 1-1, contain detailed descriptions of the model input parameters. The output of this report is used as direct input in the "Nominal Performance Biosphere Dose Conversion Factor Analysis" and in the "Disruptive Event Biosphere Dose Conversion Factor Analysis" that calculate the values of biosphere dose conversion factors (BDCFs) for the groundwater and volcanic ash exposure scenarios, respectively. The purpose of this analysis was to develop biosphere model parameter values related to radionuclide transport and accumulation in the environment. These parameters support calculations of radionuclide concentrations in the environmental media (e.g., soil, crops, animal products, and air) resulting from a given radionuclide concentration at the source of contamination (i.e., either in groundwater or in volcanic ash). The analysis was performed in accordance with the TWP (BSC 2004 [DIRS 169573]).

  37. Effects of input uncertainty on cross-scale crop modeling

    Science.gov (United States)

    Waha, Katharina; Huth, Neil; Carberry, Peter

    2014-05-01

    The quality of data on climate, soils and agricultural management in the tropics is in general low, or data is scarce, leading to uncertainty in process-based modeling of cropping systems. Process-based crop models are common tools for simulating crop yields and crop production in climate change impact studies, studies on mitigation and adaptation options, or food security studies. Crop modelers are concerned about input data accuracy, as this, together with an adequate representation of plant physiology processes and the choice of model parameters, is key to a reliable simulation. For example, assuming an error in measurements of air temperature, radiation and precipitation of ± 0.2°C, ± 2 % and ± 3 % respectively, Fodor & Kovacs (2005) estimate that this translates into an uncertainty of 5-7 % in yield and biomass simulations. In our study we seek to answer the following questions: (1) are there important uncertainties in the spatial variability of simulated crop yields on the grid-cell level displayed on maps, (2) are there important uncertainties in the temporal variability of simulated crop yields on the aggregated, national level displayed in time-series, and (3) how does the accuracy of different soil, climate and management information influence the simulated crop yields in two crop models designed for use at different spatial scales? The study will help to determine whether more detailed information improves the simulations and to advise model users on the uncertainty related to input data. We analyse the performance of the point-scale crop model APSIM (Keating et al., 2003) and the global-scale crop model LPJmL (Bondeau et al., 2007) with different climate information (monthly and daily) and soil conditions (global soil map and African soil map) under different agricultural management (uniform and variable sowing dates) for the low-input maize-growing areas in Burkina Faso/West Africa. We test the models' response to different levels of input ...

  38. Pandemic recovery analysis using the dynamic inoperability input-output model.

    Science.gov (United States)

    Santos, Joost R; Orsi, Mark J; Bond, Erik J

    2009-12-01

    Economists have long conceptualized and modeled the inherent interdependent relationships among different sectors of the economy. This concept paved the way for input-output modeling, a methodology that accounts for sector interdependencies governing the magnitude and extent of ripple effects due to changes in the economic structure of a region or nation. Recent extensions to input-output modeling have enhanced the model's capabilities to account for the impact of an economic perturbation; two such examples are the inoperability input-output model [1,2] and the dynamic inoperability input-output model (DIIM) [3]. These models introduced sector inoperability, or the inability to satisfy as-planned production levels, into input-output modeling. While these models provide insights for understanding the impacts of inoperability, there are several aspects of the current formulation that do not account for complexities associated with certain disasters, such as a pandemic. This article proposes further enhancements to the DIIM to account for economic productivity losses resulting primarily from workforce disruptions. A pandemic is a unique disaster because the majority of its direct impacts are workforce related. The article develops a modeling framework to account for workforce inoperability and recovery factors. The proposed workforce-explicit enhancements to the DIIM are demonstrated in a case study to simulate a pandemic scenario in the Commonwealth of Virginia.
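
    The DIIM recursion itself is compact; the sketch below runs it on a toy three-sector economy (the matrices are assumed for illustration, not taken from the Virginia case study). Here q(t) is the vector of sector inoperabilities, A* the interdependency matrix, K the diagonal matrix of sector resilience coefficients, and c*(t) the demand-side perturbation.

      import numpy as np

      A_star = np.array([[0.1, 0.2, 0.0],     # interdependency matrix (assumed)
                         [0.1, 0.1, 0.2],
                         [0.0, 0.1, 0.1]])
      K = np.diag([0.5, 0.3, 0.4])            # resilience coefficients (assumed)

      q = np.array([0.3, 0.0, 0.0])           # initial workforce-driven inoperability
      c_star = np.zeros(3)                    # no further demand perturbation
      for t in range(30):
          # q(t+1) = q(t) + K [A* q(t) + c*(t) - q(t)]
          q = q + K @ (A_star @ q + c_star - q)
      print(q)                                # inoperability decays as sectors recover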

  39. Modeling bias and variation in the stochastic processes of small RNA sequencing.

    Science.gov (United States)

    Argyropoulos, Christos; Etheridge, Alton; Sakhanenko, Nikita; Galas, David

    2017-06-20

    The use of RNA-seq as the preferred method for the discovery and validation of small RNA biomarkers has been hindered by high quantitative variability and biased sequence counts. In this paper we develop a statistical model for sequence counts that accounts for ligase bias and stochastic variation in sequence counts. This model implies a linear quadratic relation between the mean and variance of sequence counts. Using a large number of sequencing datasets, we demonstrate how one can use the generalized additive models for location, scale and shape (GAMLSS) distributional regression framework to calculate and apply empirical correction factors for ligase bias. Bias correction could remove more than 40% of the bias for miRNAs. Empirical bias correction factors appear to be nearly constant over at least one and up to four orders of magnitude of total RNA input and independent of sample composition. Using synthetic mixes of known composition, we show that the GAMLSS approach can analyze differential expression with greater accuracy, higher sensitivity and specificity than six existing algorithms (DESeq2, edgeR, EBSeq, limma, DSS, voom) for the analysis of small RNA-seq data.
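
    The "linear quadratic" mean-variance relation has the form (symbol names assumed here)

      \operatorname{Var}(Y) = \alpha\,\mu + \beta\,\mu^{2},

    of which the familiar negative binomial case Var(Y) = mu + mu^2/theta is an instance; GAMLSS then regresses both the location (mu) and the scale/shape parameters on covariates such as ligase-bias factors.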

  40. Agricultural and Environmental Input Parameters for the Biosphere Model

    International Nuclear Information System (INIS)

    Kaylie Rasmuson; Kurt Rautenstrauch

    2003-01-01

    This analysis is one of nine technical reports that support the Environmental Radiation Model for Yucca Mountain, Nevada (ERMYN) biosphere model. It documents input parameters for the biosphere model, and supports the use of the model to develop Biosphere Dose Conversion Factors (BDCF). The biosphere model is one of a series of process models supporting the Total System Performance Assessment (TSPA) for the repository at Yucca Mountain. The ERMYN provides the TSPA with the capability to perform dose assessments. A graphical representation of the documentation hierarchy for the ERMYN is presented in Figure 1-1. This figure shows the interrelationships between the major activities and their products (the analysis and model reports) that were planned in the biosphere Technical Work Plan (TWP, BSC 2003a). It should be noted that some documents identified in Figure 1-1 may be under development and therefore not available at the time this document is issued. The "Biosphere Model Report" (BSC 2003b) describes the ERMYN and its input parameters. This analysis report, ANL-MGR-MD-000006, "Agricultural and Environmental Input Parameters for the Biosphere Model", is one of the five reports that develop input parameters for the biosphere model. This report defines and justifies values for twelve parameters required in the biosphere model. These parameters are related to the use of contaminated groundwater to grow crops. The parameter values recommended in this report are used in the soil, plant, and carbon-14 submodels of the ERMYN.

  1. Sensitivity analysis of complex models: Coping with dynamic and static inputs

    International Nuclear Information System (INIS)

    Anstett-Collin, F.; Goffart, J.; Mara, T.; Denis-Vidal, L.

    2015-01-01

    In this paper, we address the issue of conducting a sensitivity analysis of complex models with both static and dynamic uncertain inputs. While several approaches have been proposed to compute the sensitivity indices of the static inputs (i.e., parameters), those of the dynamic inputs (i.e., stochastic fields) have rarely been addressed. For this purpose, we first treat each dynamic input as a Gaussian process. Then, the truncated Karhunen–Loève expansion of each dynamic input is performed. Such an expansion allows independent Gaussian processes to be generated from a finite number of independent random variables. Given that a dynamic input is represented by a finite number of random variables, its variance-based sensitivity index is defined as the sensitivity index of this group of variables. In addition, an efficient sampling-based strategy is described to estimate the first-order indices of all the input factors using only two input samples. The approach is applied to a building energy model in order to assess the impact of the uncertainties of the material properties (static inputs) and the weather data (dynamic inputs) on the energy performance of a real low-energy-consumption house. - Highlights: • Sensitivity analysis of models with uncertain static and dynamic inputs is performed. • Karhunen–Loève (KL) decomposition of the spatio/temporal inputs is performed. • The influence of the dynamic inputs is studied through the modes of the KL expansion. • The proposed approach is applied to a building energy model. • Impact of weather data and material properties on performance of a real house is given
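
    A minimal sketch of the truncated Karhunen–Loève step may help: the covariance matrix of an assumed Gaussian process is eigendecomposed, and a realization is built from a small number of independent standard-normal coefficients. The squared-exponential covariance and truncation order are assumptions for illustration.

      import numpy as np

      t = np.linspace(0, 1, 200)
      C = np.exp(-(t[:, None] - t[None, :])**2 / (2 * 0.1**2))  # assumed covariance

      eigval, eigvec = np.linalg.eigh(C)
      order = np.argsort(eigval)[::-1]                 # sort modes by variance
      eigval, eigvec = eigval[order], eigvec[:, order]

      M = 10                                           # truncation order
      rng = np.random.default_rng(1)
      xi = rng.standard_normal(M)                      # independent N(0, 1) coefficients
      sample = eigvec[:, :M] @ (np.sqrt(eigval[:M]) * xi)  # one process realization

      print(eigval[:M].sum() / eigval.sum())           # variance captured by M modes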

  2. An improved robust model predictive control for linear parameter-varying input-output models

    NARCIS (Netherlands)

    Abbas, H.S.; Hanema, J.; Tóth, R.; Mohammadpour, J.; Meskin, N.

    2018-01-01

    This paper describes a new robust model predictive control (MPC) scheme to control discrete-time linear parameter-varying input-output models subject to input and output constraints. Closed-loop asymptotic stability is guaranteed by including a quadratic terminal cost and an ellipsoidal terminal set.

  3. Modeling Recognition Memory Using the Similarity Structure of Natural Input

    Science.gov (United States)

    Lacroix, Joyca P. W.; Murre, Jaap M. J.; Postma, Eric O.; van den Herik, H. Jaap

    2006-01-01

    The natural input memory (NIM) model is a new model for recognition memory that operates on natural visual input. A biologically informed perceptual preprocessing method takes local samples (eye fixations) from a natural image and translates these into a feature-vector representation. During recognition, the model compares incoming preprocessed…

  4. Foundations of Sequence-to-Sequence Modeling for Time Series

    OpenAIRE

    Kuznetsov, Vitaly; Mariet, Zelda

    2018-01-01

    The availability of large amounts of time series data, paired with the performance of deep-learning algorithms on a broad class of problems, has recently led to significant interest in the use of sequence-to-sequence models for time series forecasting. We provide the first theoretical analysis of this time series forecasting framework. We include a comparison of sequence-to-sequence modeling to classical time series models, and as such our theory can serve as a quantitative guide for practitioners.

  5. Agricultural and Environmental Input Parameters for the Biosphere Model

    Energy Technology Data Exchange (ETDEWEB)

    Kaylie Rasmuson; Kurt Rautenstrauch

    2003-06-20

    This analysis is one of nine technical reports that support the Environmental Radiation Model for Yucca Mountain Nevada (ERMYN) biosphere model. It documents input parameters for the biosphere model, and supports the use of the model to develop Biosphere Dose Conversion Factors (BDCF). The biosphere model is one of a series of process models supporting the Total System Performance Assessment (TSPA) for the repository at Yucca Mountain. The ERMYN provides the TSPA with the capability to perform dose assessments. A graphical representation of the documentation hierarchy for the ERMYN is presented in Figure 1-1. This figure shows the interrelationships between the major activities and their products (the analysis and model reports) that were planned in the biosphere Technical Work Plan (TWP, BSC 2003a). It should be noted that some documents identified in Figure 1-1 may be under development and therefore not available at the time this document is issued. The ''Biosphere Model Report'' (BSC 2003b) describes the ERMYN and its input parameters. This analysis report, ANL-MGR-MD-000006, ''Agricultural and Environmental Input Parameters for the Biosphere Model'', is one of the five reports that develop input parameters for the biosphere model. This report defines and justifies values for twelve parameters required in the biosphere model. These parameters are related to use of contaminated groundwater to grow crops. The parameter values recommended in this report are used in the soil, plant, and carbon-14 submodels of the ERMYN.

  6. Inhalation Exposure Input Parameters for the Biosphere Model

    Energy Technology Data Exchange (ETDEWEB)

    M. Wasiolek

    2006-06-05

    This analysis is one of the technical reports that support the Environmental Radiation Model for Yucca Mountain, Nevada (ERMYN), referred to in this report as the biosphere model. ''Biosphere Model Report'' (BSC 2004 [DIRS 169460]) describes in detail the conceptual model as well as the mathematical model and its input parameters. This report documents development of input parameters for the biosphere model that are related to atmospheric mass loading and supports the use of the model to develop biosphere dose conversion factors (BDCFs). The biosphere model is one of a series of process models supporting the total system performance assessment (TSPA) for a Yucca Mountain repository. ''Inhalation Exposure Input Parameters for the Biosphere Model'' is one of five reports that develop input parameters for the biosphere model. A graphical representation of the documentation hierarchy for the biosphere model is presented in Figure 1-1 (based on BSC 2006 [DIRS 176938]). This figure shows the interrelationships among the products (i.e., analysis and model reports) developed for biosphere modeling and how this analysis report contributes to biosphere modeling. This analysis report defines and justifies values of atmospheric mass loading for the biosphere model. Mass loading is the total mass concentration of resuspended particles (e.g., dust, ash) in a volume of air. Mass loading values are used in the air submodel of the biosphere model to calculate concentrations of radionuclides in air inhaled by a receptor and concentrations in air surrounding crops. Concentrations in air to which the receptor is exposed are then used in the inhalation submodel to calculate the dose contribution to the receptor from inhalation of contaminated airborne particles. Concentrations in air surrounding plants are used in the plant submodel to calculate the concentrations of radionuclides in foodstuffs contributed from uptake by foliar interception.

  7. Inhalation Exposure Input Parameters for the Biosphere Model

    International Nuclear Information System (INIS)

    M. Wasiolek

    2006-01-01

    This analysis is one of the technical reports that support the Environmental Radiation Model for Yucca Mountain, Nevada (ERMYN), referred to in this report as the biosphere model. ''Biosphere Model Report'' (BSC 2004 [DIRS 169460]) describes in detail the conceptual model as well as the mathematical model and its input parameters. This report documents development of input parameters for the biosphere model that are related to atmospheric mass loading and supports the use of the model to develop biosphere dose conversion factors (BDCFs). The biosphere model is one of a series of process models supporting the total system performance assessment (TSPA) for a Yucca Mountain repository. ''Inhalation Exposure Input Parameters for the Biosphere Model'' is one of five reports that develop input parameters for the biosphere model. A graphical representation of the documentation hierarchy for the biosphere model is presented in Figure 1-1 (based on BSC 2006 [DIRS 176938]). This figure shows the interrelationships among the products (i.e., analysis and model reports) developed for biosphere modeling and how this analysis report contributes to biosphere modeling. This analysis report defines and justifies values of atmospheric mass loading for the biosphere model. Mass loading is the total mass concentration of resuspended particles (e.g., dust, ash) in a volume of air. Mass loading values are used in the air submodel of the biosphere model to calculate concentrations of radionuclides in air inhaled by a receptor and concentrations in air surrounding crops. Concentrations in air to which the receptor is exposed are then used in the inhalation submodel to calculate the dose contribution to the receptor from inhalation of contaminated airborne particles. Concentrations in air surrounding plants are used in the plant submodel to calculate the concentrations of radionuclides in foodstuffs contributed from uptake by foliar interception.

  8. Modeling recognition memory using the similarity structure of natural input

    NARCIS (Netherlands)

    Lacroix, J.P.W.; Murre, J.M.J.; Postma, E.O.; van den Herik, H.J.

    2006-01-01

    The natural input memory (NIM) model is a new model for recognition memory that operates on natural visual input. A biologically informed perceptual preprocessing method takes local samples (eye fixations) from a natural image and translates these into a feature-vector representation.

  9. Nonparametric combinatorial sequence models.

    Science.gov (United States)

    Wauthier, Fabian L; Jordan, Michael I; Jojic, Nebojsa

    2011-11-01

    This work considers biological sequences that exhibit combinatorial structures in their composition: groups of positions of the aligned sequences are "linked" and covary as one unit across sequences. If multiple such groups exist, complex interactions can emerge between them. Sequences of this kind arise frequently in biology but methodologies for analyzing them are still being developed. This article presents a nonparametric prior on sequences which allows combinatorial structures to emerge and which induces a posterior distribution over factorized sequence representations. We carry out experiments on three biological sequence families which indicate that combinatorial structures are indeed present and that combinatorial sequence models can more succinctly describe them than simpler mixture models. We conclude with an application to MHC binding prediction which highlights the utility of the posterior distribution over sequence representations induced by the prior. By integrating out the posterior, our method compares favorably to leading binding predictors.

  10. Inhalation Exposure Input Parameters for the Biosphere Model

    International Nuclear Information System (INIS)

    M. A. Wasiolek

    2003-01-01

    This analysis is one of the nine reports that support the Environmental Radiation Model for Yucca Mountain Nevada (ERMYN) biosphere model. The ''Biosphere Model Report'' (BSC 2003a) describes in detail the conceptual model as well as the mathematical model and its input parameters. This report documents a set of input parameters for the biosphere model, and supports the use of the model to develop biosphere dose conversion factors (BDCFs). The biosphere model is one of a series of process models supporting the Total System Performance Assessment (TSPA) for a Yucca Mountain repository. This report, ''Inhalation Exposure Input Parameters for the Biosphere Model'', is one of the five reports that develop input parameters for the biosphere model. A graphical representation of the documentation hierarchy for the ERMYN is presented in Figure 1-1. This figure shows the interrelationships among the products (i.e., analysis and model reports) developed for biosphere modeling, and the plan for development of the biosphere abstraction products for TSPA, as identified in the ''Technical Work Plan: for Biosphere Modeling and Expert Support'' (BSC 2003b). It should be noted that some documents identified in Figure 1-1 may be under development at the time this report is issued and therefore not available at that time. This figure is included to provide an understanding of how this analysis report contributes to biosphere modeling in support of the license application, and is not intended to imply that access to the listed documents is required to understand the contents of this analysis report. This analysis report defines and justifies values of mass loading, which is the total mass concentration of resuspended particles (e.g., dust, ash) in a volume of air. Measurements of mass loading are used in the air submodel of ERMYN to calculate concentrations of radionuclides in air surrounding crops and concentrations in air inhaled by a receptor.

  11. Inhalation Exposure Input Parameters for the Biosphere Model

    Energy Technology Data Exchange (ETDEWEB)

    M. A. Wasiolek

    2003-09-24

    This analysis is one of the nine reports that support the Environmental Radiation Model for Yucca Mountain Nevada (ERMYN) biosphere model. The ''Biosphere Model Report'' (BSC 2003a) describes in detail the conceptual model as well as the mathematical model and its input parameters. This report documents a set of input parameters for the biosphere model, and supports the use of the model to develop biosphere dose conversion factors (BDCFs). The biosphere model is one of a series of process models supporting the Total System Performance Assessment (TSPA) for a Yucca Mountain repository. This report, ''Inhalation Exposure Input Parameters for the Biosphere Model'', is one of the five reports that develop input parameters for the biosphere model. A graphical representation of the documentation hierarchy for the ERMYN is presented in Figure 1-1. This figure shows the interrelationships among the products (i.e., analysis and model reports) developed for biosphere modeling, and the plan for development of the biosphere abstraction products for TSPA, as identified in the ''Technical Work Plan: for Biosphere Modeling and Expert Support'' (BSC 2003b). It should be noted that some documents identified in Figure 1-1 may be under development at the time this report is issued and therefore not available at that time. This figure is included to provide an understanding of how this analysis report contributes to biosphere modeling in support of the license application, and is not intended to imply that access to the listed documents is required to understand the contents of this analysis report. This analysis report defines and justifies values of mass loading, which is the total mass concentration of resuspended particles (e.g., dust, ash) in a volume of air. Measurements of mass loading are used in the air submodel of ERMYN to calculate concentrations of radionuclides in air surrounding crops and concentrations in air inhaled by a receptor.

  12. Integrate-and-fire vs Poisson models of LGN input to V1 cortex: noisier inputs reduce orientation selectivity.

    Science.gov (United States)

    Lin, I-Chun; Xing, Dajun; Shapley, Robert

    2012-12-01

    One of the reasons the visual cortex has attracted the interest of computational neuroscience is that it has well-defined inputs. The lateral geniculate nucleus (LGN) of the thalamus is the source of visual signals to the primary visual cortex (V1). Most large-scale cortical network models approximate the spike trains of LGN neurons as simple Poisson point processes. However, many studies have shown that neurons in the early visual pathway are capable of spiking with high temporal precision and their discharges are not Poisson-like. To gain an understanding of how response variability in the LGN influences the behavior of V1, we study response properties of model V1 neurons that receive purely feedforward inputs from LGN cells modeled either as noisy leaky integrate-and-fire (NLIF) neurons or as inhomogeneous Poisson processes. We first demonstrate that the NLIF model is capable of reproducing many experimentally observed statistical properties of LGN neurons. Then we show that a V1 model in which the LGN input to a V1 neuron is modeled as a group of NLIF neurons produces higher orientation selectivity than the one with Poisson LGN input. The second result implies that statistical characteristics of LGN spike trains are important for V1's function. We conclude that physiologically motivated models of V1 need to include more realistic LGN spike trains that are less noisy than inhomogeneous Poisson processes.
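
    A minimal sketch of a noisy leaky integrate-and-fire neuron of the kind used to model LGN input is given below; the drive, noise and threshold values are illustrative, not those of the study.

      import numpy as np

      dt, T = 1e-4, 1.0                  # time step and duration (s)
      tau, v_th, v_reset = 0.02, 1.0, 0.0
      mu, sigma = 60.0, 8.0              # mean drive and noise strength (illustrative)

      rng = np.random.default_rng(2)
      v, spikes = 0.0, []
      for i in range(int(T / dt)):
          noise = sigma * np.sqrt(dt) * rng.standard_normal()
          v += dt * (-v / tau + mu) + noise
          if v >= v_th:                  # threshold crossing emits a spike
              spikes.append(i * dt)
              v = v_reset
      print(f"{len(spikes)} spikes, mean rate {len(spikes) / T:.1f} Hz")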

  13. Simplifying BRDF input data for optical signature modeling

    Science.gov (United States)

    Hallberg, Tomas; Pohl, Anna; Fagerström, Jan

    2017-05-01

    Scene simulations of optical signature properties using signature codes normally require input of various parameterized measurement data of surfaces and coatings in order to achieve realistic scene object features. Some of the most important parameters are used in the model of the Bidirectional Reflectance Distribution Function (BRDF) and are normally determined by surface reflectance and scattering measurements. Reflectance measurements of the spectral Directional Hemispherical Reflectance (DHR) at various incident angles can normally be performed in most spectroscopy labs, whereas measuring the BRDF is more complicated and may not be possible at all in many optical labs. We present a method for deriving the necessary BRDF data for modeling software directly from DHR measurements, using the Sandford-Robertson BRDF model. The accuracy of the method is tested by modeling a test surface and comparing the results obtained with estimated BRDF data against those obtained with measured BRDF data as model input. These results show that the method causes no significant loss in modeling accuracy.
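
    The idea of recovering BRDF-model parameters from DHR data can be sketched generically: a candidate BRDF is integrated over the reflection hemisphere to predict the DHR at each incidence angle, and its parameters are fitted by least squares. The simple diffuse-plus-specular lobe below is a hypothetical stand-in for the Sandford-Robertson model, whose actual functional form is not reproduced here, and the measurement values are made up.

      import numpy as np
      from scipy.optimize import least_squares

      def dhr_from_brdf(params, theta_i, n_quad=64):
          """Hemispherical integral of an assumed in-plane BRDF lobe."""
          rho_d, rho_s = params
          theta_r = np.linspace(0, np.pi / 2, n_quad)
          brdf = rho_d / np.pi + rho_s * np.exp(-((theta_r - theta_i) / 0.2)**2)
          w = np.cos(theta_r) * np.sin(theta_r)        # projected solid-angle weight
          return 2 * np.pi * np.trapz(brdf * w, theta_r)

      theta_meas = np.radians([10, 30, 50, 70])        # incidence angles
      dhr_meas = np.array([0.32, 0.34, 0.40, 0.52])    # illustrative DHR values

      fit = least_squares(
          lambda p: [dhr_from_brdf(p, t) - d for t, d in zip(theta_meas, dhr_meas)],
          x0=[0.3, 0.1], bounds=([0, 0], [1, 1]))
      print(fit.x)                                     # fitted diffuse/specular parameters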

  14. Input-output model for MACCS nuclear accident impacts estimation

    Energy Technology Data Exchange (ETDEWEB)

    Outkin, Alexander V. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Bixler, Nathan E. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Vargas, Vanessa N [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)

    2015-01-27

    Since the original economic model for MACCS was developed, better-quality economic data (as well as the tools to gather and process it) and better computational capabilities have become available. The update of the economic impacts component of the MACCS legacy model will provide improved estimates of business disruptions through the use of Input-Output based economic impact estimation. This paper presents an updated MACCS model, based on Input-Output methodology, in which economic impacts are calculated using the Regional Economic Accounting analysis tool (REAcct) created at Sandia National Laboratories. This new model allows quick and consistent estimation of gross domestic product (GDP) losses due to nuclear power plant accidents. This paper outlines the steps taken to combine the REAcct Input-Output-based model with the MACCS code, describes the GDP loss calculation, and discusses the parameters and modeling assumptions necessary for the estimation of long-term effects of nuclear power plant accidents.
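
    A minimal sketch of the underlying input-output arithmetic: direct sectoral losses are propagated through the Leontief inverse to obtain total (direct plus indirect) losses, the quantity a REAcct-style regional tool aggregates into GDP impacts. The two-sector data are illustrative only.

      import numpy as np

      A = np.array([[0.15, 0.25],        # technical coefficients matrix
                    [0.20, 0.05]])
      x = np.array([100.0, 80.0])        # baseline sectoral output ($M)

      direct_loss = np.array([0.10 * x[0], 0.0])   # sector 1 loses 10% of output

      L = np.linalg.inv(np.eye(2) - A)   # Leontief inverse
      total_loss = L @ direct_loss       # direct + indirect output losses
      print(f"direct: {direct_loss.sum():.1f}, total: {total_loss.sum():.1f}")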

  15. Sequence modelling and an extensible data model for genomic database

    Energy Technology Data Exchange (ETDEWEB)

    Li, Peter Wei-Der [California Univ., San Francisco, CA (United States); Univ. of California, Berkeley, CA (United States)

    1992-01-01

    The Human Genome Project (HGP) plans to sequence the human genome by the beginning of the next century. It will generate DNA sequences of more than 10 billion bases and complex marker sequences (maps) of more than 100 million markers. All of this information will be stored in database management systems (DBMSs). However, existing data models do not have the abstraction mechanisms for modelling sequences, and existing DBMSs do not have operations for complex sequences. This work addresses the problem of sequence modelling in the context of the HGP and the more general problem of an extensible object data model that can incorporate the sequence model as well as existing and future data constructs and operators. First, we proposed a general sequence model that is application and implementation independent. This model is used to capture the sequence information found in the HGP at the conceptual level. In addition, abstract and biological sequence operators are defined for manipulating the modelled sequences. Second, we combined many features of semantic and object-oriented data models into an extensible framework, which we called the ''Extensible Object Model'', to address the need for a modelling framework for incorporating the sequence data model with other types of data constructs and operators. This framework is based on the conceptual separation between constructors and constraints. We then used this modelling framework to integrate the constructs for the conceptual sequence model. The Extensible Object Model is also defined with a graphical representation, which is useful as a tool for database designers. Finally, we defined a query language to support this model and implemented a query processor to demonstrate the feasibility of the extensible framework and the usefulness of the conceptual sequence model.

  16. Sequence modelling and an extensible data model for genomic database

    Energy Technology Data Exchange (ETDEWEB)

    Li, Peter Wei-Der (California Univ., San Francisco, CA (United States) Lawrence Berkeley Lab., CA (United States))

    1992-01-01

    The Human Genome Project (HGP) plans to sequence the human genome by the beginning of the next century. It will generate DNA sequences of more than 10 billion bases and complex marker sequences (maps) of more than 100 million markers. All of this information will be stored in database management systems (DBMSs). However, existing data models do not have the abstraction mechanisms for modelling sequences, and existing DBMSs do not have operations for complex sequences. This work addresses the problem of sequence modelling in the context of the HGP and the more general problem of an extensible object data model that can incorporate the sequence model as well as existing and future data constructs and operators. First, we proposed a general sequence model that is application and implementation independent. This model is used to capture the sequence information found in the HGP at the conceptual level. In addition, abstract and biological sequence operators are defined for manipulating the modelled sequences. Second, we combined many features of semantic and object-oriented data models into an extensible framework, which we called the ''Extensible Object Model'', to address the need for a modelling framework for incorporating the sequence data model with other types of data constructs and operators. This framework is based on the conceptual separation between constructors and constraints. We then used this modelling framework to integrate the constructs for the conceptual sequence model. The Extensible Object Model is also defined with a graphical representation, which is useful as a tool for database designers. Finally, we defined a query language to support this model and implemented a query processor to demonstrate the feasibility of the extensible framework and the usefulness of the conceptual sequence model.

  17. Reconstruction of neuronal input through modeling single-neuron dynamics and computations

    International Nuclear Information System (INIS)

    Qin, Qing; Wang, Jiang; Yu, Haitao; Deng, Bin; Chan, Wai-lok

    2016-01-01

    Mathematical models provide a mathematical description of neuron activity, which helps us to better understand and quantify neural computations and the corresponding biophysical mechanisms evoked by a stimulus. In this paper, based on the output spike train evoked by an acupuncture mechanical stimulus, we present two different levels of models to describe the input-output system and achieve the reconstruction of the neuronal input. The reconstruction process is divided into two steps. First, the neuronal spiking events are treated as a Gamma stochastic process; the scale parameter and the shape parameter of the Gamma process are defined as two spiking characteristics, which are estimated by a state-space method. Then, a leaky integrate-and-fire (LIF) model is used to mimic the response system, and the estimated spiking characteristics are transformed into two temporal input parameters of the LIF model through two conversion formulas. We test this reconstruction method on three different groups of simulation data. All three groups of estimates reconstruct the input parameters with fairly high accuracy. We then use the reconstruction method to estimate the non-measurable acupuncture input parameters. The results show that the estimated input parameters differ markedly under three different frequencies of acupuncture stimulation. The higher the frequency of the acupuncture stimulus, the higher the accuracy of the reconstruction.
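
    The first step of the reconstruction can be sketched as fitting a Gamma distribution to interspike intervals; the conversion to LIF input parameters shown afterwards is purely hypothetical, since the paper's two conversion formulas are not reproduced here.

      import numpy as np
      from scipy import stats

      rng = np.random.default_rng(3)
      isi = rng.gamma(shape=2.5, scale=0.04, size=500)   # simulated ISIs (s)

      # Maximum-likelihood Gamma fit with the location fixed at zero
      shape_hat, _, scale_hat = stats.gamma.fit(isi, floc=0)
      print(f"shape = {shape_hat:.2f}, scale = {scale_hat:.3f}")

      # Hypothetical conversion: higher firing rate -> stronger mean input;
      # smaller shape (more irregular spiking) -> larger input noise.
      rate = 1.0 / (shape_hat * scale_hat)
      mu_input, sigma_input = 0.02 * rate, 1.0 / np.sqrt(shape_hat)
      print(mu_input, sigma_input)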

  18. Reconstruction of neuronal input through modeling single-neuron dynamics and computations

    Energy Technology Data Exchange (ETDEWEB)

    Qin, Qing; Wang, Jiang; Yu, Haitao; Deng, Bin, E-mail: dengbin@tju.edu.cn; Chan, Wai-lok [School of Electrical Engineering and Automation, Tianjin University, Tianjin 300072 (China)

    2016-06-15

    Mathematical models provide a mathematical description of neuron activity, which helps us to better understand and quantify neural computations and the corresponding biophysical mechanisms evoked by a stimulus. In this paper, based on the output spike train evoked by an acupuncture mechanical stimulus, we present two different levels of models to describe the input-output system and achieve the reconstruction of the neuronal input. The reconstruction process is divided into two steps. First, the neuronal spiking events are treated as a Gamma stochastic process; the scale parameter and the shape parameter of the Gamma process are defined as two spiking characteristics, which are estimated by a state-space method. Then, a leaky integrate-and-fire (LIF) model is used to mimic the response system, and the estimated spiking characteristics are transformed into two temporal input parameters of the LIF model through two conversion formulas. We test this reconstruction method on three different groups of simulation data. All three groups of estimates reconstruct the input parameters with fairly high accuracy. We then use the reconstruction method to estimate the non-measurable acupuncture input parameters. The results show that the estimated input parameters differ markedly under three different frequencies of acupuncture stimulation. The higher the frequency of the acupuncture stimulus, the higher the accuracy of the reconstruction.

  19. Identifying best-fitting inputs in health-economic model calibration: a Pareto frontier approach.

    Science.gov (United States)

    Enns, Eva A; Cipriano, Lauren E; Simons, Cyrena T; Kong, Chung Yin

    2015-02-01

    To identify best-fitting input sets using model calibration, individual calibration target fits are often combined into a single goodness-of-fit (GOF) measure using a set of weights. Decisions in the calibration process, such as which weights to use, influence which sets of model inputs are identified as best-fitting, potentially leading to different health economic conclusions. We present an alternative approach to identifying best-fitting input sets based on the concept of Pareto-optimality. A set of model inputs is on the Pareto frontier if no other input set simultaneously fits all calibration targets as well or better. We demonstrate the Pareto frontier approach in the calibration of 2 models: a simple, illustrative Markov model and a previously published cost-effectiveness model of transcatheter aortic valve replacement (TAVR). For each model, we compare the input sets on the Pareto frontier to an equal number of best-fitting input sets according to 2 possible weighted-sum GOF scoring systems, and we compare the health economic conclusions arising from these different definitions of best-fitting. For the simple model, outcomes evaluated over the best-fitting input sets according to the 2 weighted-sum GOF schemes were virtually nonoverlapping on the cost-effectiveness plane and resulted in very different incremental cost-effectiveness ratios ($79,300 [95% CI 72,500-87,600] v. $139,700 [95% CI 79,900-182,800] per quality-adjusted life-year [QALY] gained). Input sets on the Pareto frontier spanned both regions ($79,000 [95% CI 64,900-156,200] per QALY gained). The TAVR model yielded similar results. Choices in generating a summary GOF score may result in different health economic conclusions. The Pareto frontier approach eliminates the need to make these choices by using an intuitive and transparent notion of optimality as the basis for identifying best-fitting input sets. © The Author(s) 2014.
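
    A minimal sketch of the frontier computation: with one error per calibration target (lower is better), an input set is Pareto-optimal if no other set is at least as good on every target and strictly better on one. The error values are random placeholders.

      import numpy as np

      rng = np.random.default_rng(4)
      errors = rng.random((50, 3))    # 50 input sets x 3 calibration-target errors

      def pareto_mask(E):
          keep = np.ones(len(E), dtype=bool)
          for i, e in enumerate(E):
              others = np.delete(E, i, axis=0)
              dominated = np.any(np.all(others <= e, axis=1) &
                                 np.any(others < e, axis=1))
              keep[i] = not dominated
          return keep

      frontier = errors[pareto_mask(errors)]
      print(f"{len(frontier)} of {len(errors)} input sets on the Pareto frontier")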

  20. Investigation of RADTRAN Stop Model input parameters for truck stops

    International Nuclear Information System (INIS)

    Griego, N.R.; Smith, J.D.; Neuhauser, K.S.

    1996-01-01

    RADTRAN is a computer code for estimating the risks and consequences associated with the transport of radioactive materials (RAM). RADTRAN was developed and is maintained by Sandia National Laboratories for the US Department of Energy (DOE). For incident-free transportation, the dose to persons exposed while the shipment is stopped is frequently a major percentage of the overall dose. This dose is referred to as the Stop Dose and is calculated by the Stop Model. Because stop dose is a significant portion of the overall dose associated with RAM transport, the values used as input for the Stop Model are important. Therefore, an investigation of typical values of the RADTRAN Stop Model parameters for truck stops was performed. The resulting data were analyzed to provide mean values, standard deviations, and histograms. The mean values can be used when an analyst does not have a basis for selecting other input values for the Stop Model. In addition, the histograms and their characteristics can be used to guide statistical sampling techniques to measure the sensitivity of the RADTRAN-calculated Stop Dose to the uncertainties in the stop model input parameters. This paper discusses the details and presents the results of the investigation of stop model input parameters at truck stops.
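
    A hedged sketch of how such means and histograms can drive a sensitivity study: stop-related inputs are sampled from assumed distributions and pushed through a generic inverse-square dose expression. The dose formula and every distribution parameter below are illustrative assumptions, not RADTRAN's actual stop model.

      import numpy as np

      rng = np.random.default_rng(5)
      n = 10_000
      duration_h = rng.lognormal(mean=np.log(0.75), sigma=0.5, size=n)  # stop time (h)
      persons = rng.poisson(lam=8, size=n)                              # persons exposed
      distance_m = rng.uniform(5, 30, size=n)                           # mean distance (m)

      DOSE_RATE_1M = 0.1   # hypothetical dose rate at 1 m (mrem/h)
      dose = DOSE_RATE_1M * persons * duration_h / distance_m**2        # person-mrem

      print(f"mean {dose.mean():.4f}, 95th percentile {np.percentile(dose, 95):.4f}")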

  1. Spatio-temporal alignment of pedobarographic image sequences.

    Science.gov (United States)

    Oliveira, Francisco P M; Sousa, Andreia; Santos, Rubim; Tavares, João Manuel R S

    2011-07-01

    This article presents a methodology to align plantar pressure image sequences simultaneously in time and space. The spatial position and orientation of a foot in a sequence are changed to match the foot represented in a second sequence. Simultaneously with the spatial alignment, the temporal scale of the first sequence is transformed with the aim of synchronizing the two input footsteps. Consequently, the spatial correspondence of the foot regions along the sequences as well as the temporal synchronization is attained automatically, making the study easier and more straightforward. In terms of spatial alignment, the methodology can use one of four possible geometric transformation models: rigid, similarity, affine, or projective. In the temporal alignment, a polynomial transformation up to the 4th degree can be adopted in order to model linear and curved time behaviors. Suitable geometric and temporal transformations are found by minimizing the mean squared error (MSE) between the input sequences. The methodology was tested on a set of real image sequences acquired from a common pedobarographic device. When used in experimental cases generated by applying geometric and temporal control transformations, the methodology revealed high accuracy. In addition, the intra-subject alignment tests on real plantar pressure image sequences showed that the curved temporal models produced better MSE results than the linear model. The methodology enables the spatio-temporal alignment of pedobarographic image data, since previous methods can only be applied to static images.
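
    The temporal half of the method can be sketched as fitting a low-order polynomial time warp that minimizes the MSE between a warped curve and a reference curve; the quadratic warp and the sine-shaped pressure curves below are illustrative.

      import numpy as np
      from scipy.optimize import minimize

      t = np.linspace(0, 1, 100)
      ref = np.sin(np.pi * t)                          # reference footstep curve
      moving = np.sin(np.pi * t**1.2)                  # temporally distorted copy

      def mse(coeffs):
          a, b = coeffs                                # warp tau(t) = a*t + b*t^2
          tau = np.clip(a * t + b * t**2, 0, 1)
          warped = np.interp(tau, t, moving)
          return np.mean((warped - ref)**2)

      res = minimize(mse, x0=[1.0, 0.0], method="Nelder-Mead")
      print(res.x, res.fun)                            # warp coefficients, residual MSE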

  2. Evaluating nuclear physics inputs in core-collapse supernova models

    Science.gov (United States)

    Lentz, E.; Hix, W. R.; Baird, M. L.; Messer, O. E. B.; Mezzacappa, A.

    Core-collapse supernova models depend on the details of the nuclear and weak interaction physics inputs just as they depend on the details of the macroscopic physics (transport, hydrodynamics, etc.), numerical methods, and progenitors. We present preliminary results from our ongoing comparison studies of nuclear and weak interaction physics inputs to core collapse supernova models using the spherically-symmetric, general relativistic, neutrino radiation hydrodynamics code Agile-Boltztran. We focus on comparisons of the effects of the nuclear EoS and the effects of improving the opacities, particularly neutrino--nucleon interactions.

  3. Agricultural and Environmental Input Parameters for the Biosphere Model

    Energy Technology Data Exchange (ETDEWEB)

    K. Rasmuson; K. Rautenstrauch

    2004-09-14

    This analysis is one of 10 technical reports that support the Environmental Radiation Model for Yucca Mountain Nevada (ERMYN) (i.e., the biosphere model). It documents development of agricultural and environmental input parameters for the biosphere model, and supports the use of the model to develop biosphere dose conversion factors (BDCFs). The biosphere model is one of a series of process models supporting the total system performance assessment (TSPA) for the repository at Yucca Mountain. The ERMYN provides the TSPA with the capability to perform dose assessments. A graphical representation of the documentation hierarchy for the ERMYN is presented in Figure 1-1. This figure shows the interrelationships between the major activities and their products (the analysis and model reports) that were planned in ''Technical Work Plan for Biosphere Modeling and Expert Support'' (BSC 2004 [DIRS 169573]). The ''Biosphere Model Report'' (BSC 2004 [DIRS 169460]) describes the ERMYN and its input parameters.

  4. Agricultural and Environmental Input Parameters for the Biosphere Model

    International Nuclear Information System (INIS)

    K. Rasmuson; K. Rautenstrauch

    2004-01-01

    This analysis is one of 10 technical reports that support the Environmental Radiation Model for Yucca Mountain Nevada (ERMYN) (i.e., the biosphere model). It documents development of agricultural and environmental input parameters for the biosphere model, and supports the use of the model to develop biosphere dose conversion factors (BDCFs). The biosphere model is one of a series of process models supporting the total system performance assessment (TSPA) for the repository at Yucca Mountain. The ERMYN provides the TSPA with the capability to perform dose assessments. A graphical representation of the documentation hierarchy for the ERMYN is presented in Figure 1-1. This figure shows the interrelationships between the major activities and their products (the analysis and model reports) that were planned in ''Technical Work Plan for Biosphere Modeling and Expert Support'' (BSC 2004 [DIRS 169573]). The ''Biosphere Model Report'' (BSC 2004 [DIRS 169460]) describes the ERMYN and its input parameters.

  5. Motivation Monitoring and Assessment Extension for Input-Process-Outcome Game Model

    Science.gov (United States)

    Ghergulescu, Ioana; Muntean, Cristina Hava

    2014-01-01

    This article proposes a Motivation Assessment-oriented Input-Process-Outcome Game Model (MotIPO), which extends the Input-Process-Outcome game model with game-centred and player-centred motivation assessments performed right from the beginning of the game-play. A feasibility case-study involving 67 participants playing an educational game and…

  6. High Temperature Test Facility Preliminary RELAP5-3D Input Model Description

    Energy Technology Data Exchange (ETDEWEB)

    Bayless, Paul David [Idaho National Lab. (INL), Idaho Falls, ID (United States)

    2015-12-01

    A RELAP5-3D input model is being developed for the High Temperature Test Facility at Oregon State University. The current model is described in detail. Further refinements will be made to the model as final as-built drawings are released and when system characterization data are available for benchmarking the input model.

  7. Variance-based sensitivity indices for stochastic models with correlated inputs

    Energy Technology Data Exchange (ETDEWEB)

    Kala, Zdeněk [Brno University of Technology, Faculty of Civil Engineering, Department of Structural Mechanics Veveří St. 95, ZIP 602 00, Brno (Czech Republic)

    2015-03-10

    The goal of this article is to formulate the principles of one possible strategy for implementing correlation between input random variables, in a form usable for algorithm development and for the evaluation of Sobol’s sensitivity analysis. With regard to the types of stochastic computational models commonly found in structural mechanics, an algorithm was designed for effective use in conjunction with Monte Carlo methods. Sensitivity indices are evaluated for all possible permutations of the decorrelation procedures for the input parameters. The evaluation of Sobol’s sensitivity coefficients is illustrated on an example in which a computational model was used for the analysis of the resistance of a steel bar in tension with statistically dependent input geometric characteristics.

  8. Variance-based sensitivity indices for stochastic models with correlated inputs

    International Nuclear Information System (INIS)

    Kala, Zdeněk

    2015-01-01

    The goal of this article is to formulate the principles of one possible strategy for implementing correlation between input random variables, in a form usable for algorithm development and for the evaluation of Sobol’s sensitivity analysis. With regard to the types of stochastic computational models commonly found in structural mechanics, an algorithm was designed for effective use in conjunction with Monte Carlo methods. Sensitivity indices are evaluated for all possible permutations of the decorrelation procedures for the input parameters. The evaluation of Sobol’s sensitivity coefficients is illustrated on an example in which a computational model was used for the analysis of the resistance of a steel bar in tension with statistically dependent input geometric characteristics.

  9. Model reduction of nonlinear systems subject to input disturbances

    KAUST Repository

    Ndoye, Ibrahima

    2017-07-10

    The method of convex optimization is used as a tool for model reduction of a class of nonlinear systems in the presence of disturbances. It is shown that under some conditions the nonlinear disturbed system can be approximated by a reduced order nonlinear system with similar disturbance-output properties to the original plant. The proposed model reduction strategy preserves the nonlinearity and the input disturbance nature of the model. It guarantees a sufficiently small error between the outputs of the original and the reduced-order systems, and also maintains the properties of input-to-state stability. The matrices of the reduced order system are given in terms of a set of linear matrix inequalities (LMIs). The paper concludes with a demonstration of the proposed approach on model reduction of a nonlinear electronic circuit with additive disturbances.
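
    The flavor of the certificate behind such LMI conditions can be sketched on a toy linear system: for a stable matrix A, the LMI A^T P + P A < 0 with P > 0 has a solution obtainable from a Lyapunov equation. A full reduction method solves larger LMIs with a dedicated solver; this only illustrates the stability certificate, with an illustrative A.

      import numpy as np
      from scipy.linalg import solve_continuous_lyapunov

      A = np.array([[-1.0, 0.5],
                    [0.0, -2.0]])              # illustrative stable system matrix
      Q = np.eye(2)

      P = solve_continuous_lyapunov(A.T, -Q)   # solves A^T P + P A = -Q
      eigs = np.linalg.eigvalsh(P)
      print(np.round(P, 3), "positive definite:", bool(np.all(eigs > 0)))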

  10. Input variable selection for data-driven models of Coriolis flowmeters for two-phase flow measurement

    International Nuclear Information System (INIS)

    Wang, Lijuan; Yan, Yong; Wang, Xue; Wang, Tao

    2017-01-01

    Input variable selection is an essential step in the development of data-driven models for environmental, biological and industrial applications. Through input variable selection to eliminate the irrelevant or redundant variables, a suitable subset of variables is identified as the input of a model. Meanwhile, through input variable selection the complexity of the model structure is simplified and the computational efficiency is improved. This paper describes the procedures of the input variable selection for the data-driven models for the measurement of liquid mass flowrate and gas volume fraction under two-phase flow conditions using Coriolis flowmeters. Three advanced input variable selection methods, including partial mutual information (PMI), genetic algorithm-artificial neural network (GA-ANN) and tree-based iterative input selection (IIS) are applied in this study. Typical data-driven models incorporating support vector machine (SVM) are established individually based on the input candidates resulting from the selection methods. The validity of the selection outcomes is assessed through an output performance comparison of the SVM based data-driven models and sensitivity analysis. The validation and analysis results suggest that the input variables selected from the PMI algorithm provide more effective information for the models to measure liquid mass flowrate while the IIS algorithm provides a fewer but more effective variables for the models to predict gas volume fraction. (paper)
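
    A minimal sketch of mutual-information-based input ranking, with scikit-learn's estimator standing in for the PMI algorithm: candidate inputs are scored against the target, and the top-ranked subset would feed the data-driven (e.g., SVM) model. The data and the size of the retained subset are illustrative.

      import numpy as np
      from sklearn.feature_selection import mutual_info_regression

      rng = np.random.default_rng(6)
      X = rng.standard_normal((500, 6))                # 6 candidate input variables
      y = 2 * X[:, 0] + np.sin(3 * X[:, 2]) + 0.1 * rng.standard_normal(500)

      mi = mutual_info_regression(X, y, random_state=0)
      ranking = np.argsort(mi)[::-1]
      print("MI scores:", np.round(mi, 3))
      print("selected inputs:", ranking[:2])           # keep the two most informative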

  11. How model and input uncertainty impact maize yield simulations in West Africa

    Science.gov (United States)

    Waha, Katharina; Huth, Neil; Carberry, Peter; Wang, Enli

    2015-02-01

    Crop models are common tools for simulating crop yields and crop production in studies on food security and global change. Various uncertainties exist, however, not only in the model design and model parameters but also, perhaps even more importantly, in the soil, climate and management input data. We analyze the performance of the point-scale crop model APSIM and the global-scale crop model LPJmL with different climate and soil conditions under different agricultural management in the low-input maize-growing areas of Burkina Faso, West Africa. We test the models’ response to different levels of input information, from little to detailed information on soil, climate (1961-2000) and agricultural management, and compare the models’ ability to represent the observed spatial (between locations) and temporal (between years) variability in crop yields. We found that the resolution of soil, climate and management information influences the simulated crop yields in both models. However, the difference between models is larger than that between input data sets, and larger between simulations with different climate and management information than between simulations with different soil information. The observed spatial variability can be represented well by both models even with little information on soils and management, although APSIM simulates a higher variation between single locations than LPJmL. The agreement of simulated and observed temporal variability is lower, owing to non-climatic factors, e.g., investment in agricultural research and development between 1987 and 1991 in Burkina Faso, which resulted in a doubling of maize yields. The findings of our study highlight the importance of scale and model choice and show that the most detailed input data do not necessarily improve model performance.

  12. Evaluating the Sensitivity of Agricultural Model Performance to Different Climate Inputs: Supplemental Material

    Science.gov (United States)

    Glotter, Michael J.; Ruane, Alex C.; Moyer, Elisabeth J.; Elliott, Joshua W.

    2015-01-01

    Projections of future food production necessarily rely on models, which must themselves be validated through historical assessments comparing modeled and observed yields. Reliable historical validation requires both accurate agricultural models and accurate climate inputs; problems with either may compromise the validation exercise. Previous studies have compared the effects of different climate inputs on agricultural projections, but either incompletely or without a ground truth of observed yields that would allow errors due to climate inputs to be distinguished from those intrinsic to the crop model. This study is a systematic evaluation of the reliability of a widely used crop model for simulating U.S. maize yields when driven by multiple observational data products. The parallelized Decision Support System for Agrotechnology Transfer (pDSSAT) is driven with climate inputs from multiple sources (reanalysis, reanalysis bias-corrected with observed climate, and a control dataset) and compared with observed historical yields. The simulations show that model output is more accurate when driven by any observation-based precipitation product than when driven by non-bias-corrected reanalysis. The simulations also suggest, in contrast to previous studies, that a biased precipitation distribution is significant for yields only in arid regions. Some issues persist for all choices of climate inputs: crop yields appear to be oversensitive to precipitation fluctuations but undersensitive to floods and heat waves. These results suggest that the most important issue for agricultural projections may be not climate inputs but structural limitations in the crop models themselves.

  13. Modeling and generating input processes

    Energy Technology Data Exchange (ETDEWEB)

    Johnson, M.E.

    1987-01-01

    This tutorial paper provides information relevant to the selection and generation of stochastic inputs for simulation studies. The primary focus is multivariate inputs, but much of the philosophy is relevant to univariate inputs as well. 14 refs.
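
    One common multivariate recipe can be sketched concretely: draw correlated normals via a Cholesky factor, then map each margin through its inverse CDF (a Gaussian-copula, NORTA-style construction). The target correlation and the marginal distributions are illustrative choices.

      import numpy as np
      from scipy import stats

      rho = np.array([[1.0, 0.7],
                      [0.7, 1.0]])                     # target normal correlation
      Lc = np.linalg.cholesky(rho)

      rng = np.random.default_rng(7)
      z = rng.standard_normal((10_000, 2)) @ Lc.T      # correlated N(0,1) pairs
      u = stats.norm.cdf(z)                            # uniform margins

      service = stats.expon.ppf(u[:, 0], scale=2.0)    # exponential service times
      demand = stats.poisson.ppf(u[:, 1], mu=5.0)      # Poisson demand sizes
      print(np.corrcoef(service, demand)[0, 1])        # induced input correlation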

  14. Description of the CONTAIN input model for the Dodewaard nuclear power plant

    International Nuclear Information System (INIS)

    Velema, E.J.

    1992-02-01

    This report describes the standard CONTAIN input model developed by ECN for the Dodewaard Nuclear Power Plant (NPP). This standard input model will serve as a basis for analyses of the phenomena which may occur inside the Dodewaard containment in the event of a postulated severe accident. Boundary conditions for specific containment analyses can easily be implemented in the input model. As a result, ECN will be able to respond quickly to requests for analyses from the utilities or the authorities. The report also includes brief descriptions of the Dodewaard NPP and the CONTAIN computer program. (author). 7 refs.; 5 figs.; 3 tabs

  15. Soil-Related Input Parameters for the Biosphere Model

    International Nuclear Information System (INIS)

    Smith, A. J.

    2004-01-01

    This report presents one of the analyses that support the Environmental Radiation Model for Yucca Mountain Nevada (ERMYN). The ''Biosphere Model Report'' (BSC 2004 [DIRS 169460]) describes the details of the conceptual model as well as the mathematical model and the required input parameters. The biosphere model is one of a series of process models supporting the postclosure Total System Performance Assessment (TSPA) for the Yucca Mountain repository. A schematic representation of the documentation flow for the Biosphere input to TSPA is presented in Figure 1-1. This figure shows the evolutionary relationships among the products (i.e., analysis and model reports) developed for biosphere modeling, and the biosphere abstraction products for TSPA, as identified in the ''Technical Work Plan for Biosphere Modeling and Expert Support'' (TWP) (BSC 2004 [DIRS 169573]). This figure is included to provide an understanding of how this analysis report contributes to biosphere modeling in support of the license application, and is not intended to imply that access to the listed documents is required to understand the contents of this report. This report, ''Soil-Related Input Parameters for the Biosphere Model'', is one of the five analysis reports that develop input parameters for use in the ERMYN model. This report is the source documentation for the six biosphere parameters identified in Table 1-1. The purpose of this analysis was to develop the biosphere model parameters associated with the accumulation and depletion of radionuclides in the soil. These parameters support the calculation of radionuclide concentrations in soil from on-going irrigation or ash deposition and, as a direct consequence, radionuclide concentration in other environmental media that are affected by radionuclide concentrations in soil. The analysis was performed in accordance with the TWP (BSC 2004 [DIRS 169573]) where the governing procedure was defined as AP-SIII.9Q, ''Scientific Analyses''.

  16. Screening important inputs in models with strong interaction properties

    International Nuclear Information System (INIS)

    Saltelli, Andrea; Campolongo, Francesca; Cariboni, Jessica

    2009-01-01

    We introduce a new method for screening inputs in mathematical or computational models with large numbers of inputs. The method proposed here represents an improvement over the best available practice for this setting when dealing with models having strong interaction effects. When the sample size is sufficiently high the same design can also be used to obtain accurate quantitative estimates of the variance-based sensitivity measures: the same simulations can be used to obtain estimates of the variance-based measures according to the Sobol' and the Jansen formulas. Results demonstrate that Sobol' is more efficient for the computation of the first-order indices, while Jansen performs better for the computation of the total indices.
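
    The two estimators can be sketched side by side on a toy function using the usual A/B/AB_i sample design: Sobol's formula for the first-order index and Jansen's for the total index. The test function is an illustrative choice with a pure interaction between inputs 1 and 2.

      import numpy as np

      def model(X):                        # illustrative 3-input test function
          return X[:, 0] + 2 * X[:, 1] * X[:, 2]

      rng = np.random.default_rng(8)
      N, k = 100_000, 3
      A = rng.uniform(-1, 1, (N, k))
      B = rng.uniform(-1, 1, (N, k))
      fA, fB = model(A), model(B)
      V = np.var(np.concatenate([fA, fB]))

      for i in range(k):
          ABi = A.copy()
          ABi[:, i] = B[:, i]              # column i of A replaced by B's
          fABi = model(ABi)
          S_i = np.mean(fB * (fABi - fA)) / V        # Sobol' first-order estimator
          ST_i = 0.5 * np.mean((fA - fABi)**2) / V   # Jansen total-effect estimator
          print(f"input {i}: S = {S_i:.3f}, ST = {ST_i:.3f}")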

  17. Screening important inputs in models with strong interaction properties

    Energy Technology Data Exchange (ETDEWEB)

    Saltelli, Andrea [European Commission, Joint Research Centre, 21020 Ispra, Varese (Italy); Campolongo, Francesca [European Commission, Joint Research Centre, 21020 Ispra, Varese (Italy)], E-mail: francesca.campolongo@jrc.it; Cariboni, Jessica [European Commission, Joint Research Centre, 21020 Ispra, Varese (Italy)

    2009-07-15

    We introduce a new method for screening inputs in mathematical or computational models with large numbers of inputs. The method proposed here represents an improvement over the best available practice for this setting when dealing with models having strong interaction effects. When the sample size is sufficiently high the same design can also be used to obtain accurate quantitative estimates of the variance-based sensitivity measures: the same simulations can be used to obtain estimates of the variance-based measures according to the Sobol' and the Jansen formulas. Results demonstrate that Sobol' is more efficient for the computation of the first-order indices, while Jansen performs better for the computation of the total indices.

  18. Specification and Aggregation Errors in Environmentally Extended Input-Output Models

    NARCIS (Netherlands)

    Bouwmeester, Maaike C.; Oosterhaven, Jan

    This article considers the specification and aggregation errors that arise from estimating embodied emissions and embodied water use with environmentally extended national input-output (IO) models, instead of with an environmentally extended international IO model.

  19. Characteristic length scale of input data in distributed models: implications for modeling grid size

    Science.gov (United States)

    Artan, G. A.; Neale, C. M. U.; Tarboton, D. G.

    2000-01-01

    The appropriate spatial scale for a distributed energy balance model was investigated by: (a) determining the scale of variability associated with the remotely sensed and GIS-generated model input data; and (b) examining the effects of input data spatial aggregation on model response. The semi-variogram and the characteristic length calculated from the spatial autocorrelation were used to determine the scale of variability of the remotely sensed and GIS-generated model input data. The data were collected from two hillsides at Upper Sheep Creek, a sub-basin of the Reynolds Creek Experimental Watershed, in southwest Idaho. The data were analyzed in terms of the semivariance and the integral of the autocorrelation. The minimum characteristic length associated with the variability of the data used in the analysis was 15 m. Simulated and observed radiometric surface temperature fields at different spatial resolutions were compared. The agreement between simulated and observed fields declined sharply beyond a 10×10 m² modeling grid size. A modeling grid size of about 10×10 m² was deemed to be the best compromise to achieve: (a) a reduction of computation time and of the size of the support data; and (b) a reproduction of the observed radiometric surface temperature.

  20. Characteristic length scale of input data in distributed models: implications for modeling grid size

    Science.gov (United States)

    Artan, Guleid A.; Neale, C. M. U.; Tarboton, D. G.

    2000-01-01

    The appropriate spatial scale for a distributed energy balance model was investigated by: (a) determining the scale of variability associated with the remotely sensed and GIS-generated model input data; and (b) examining the effects of input data spatial aggregation on model response. The semi-variogram and the characteristic length calculated from the spatial autocorrelation were used to determine the scale of variability of the remotely sensed and GIS-generated model input data. The data were collected from two hillsides at Upper Sheep Creek, a sub-basin of the Reynolds Creek Experimental Watershed, in southwest Idaho. The data were analyzed in terms of the semivariance and the integral of the autocorrelation. The minimum characteristic length associated with the variability of the data used in the analysis was 15 m. Simulated and observed radiometric surface temperature fields at different spatial resolutions were compared. The agreement between simulated and observed fields declined sharply beyond a 10×10 m² modeling grid size. A modeling grid size of about 10×10 m² was deemed to be the best compromise to achieve: (a) a reduction of computation time and of the size of the support data; and (b) a reproduction of the observed radiometric surface temperature.
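
    An empirical semivariogram, the tool used above to find the characteristic length, can be sketched in a few lines: gamma(h) is half the mean squared difference between values separated by lag h. The transect and smoothing below are synthetic stand-ins for the remotely sensed fields.

      import numpy as np

      rng = np.random.default_rng(9)
      x = np.arange(0, 500, 5.0)                       # transect positions (m)
      field = np.convolve(rng.standard_normal(x.size + 20),
                          np.ones(8) / 8, mode="same")[:x.size]

      def semivariogram(pos, z, lag_edges):
          d = np.abs(pos[:, None] - pos[None, :])      # pairwise distances
          dz2 = (z[:, None] - z[None, :])**2
          return np.array([0.5 * dz2[(d >= lo) & (d < hi)].mean()
                           for lo, hi in zip(lag_edges[:-1], lag_edges[1:])])

      lag_edges = np.arange(5, 105, 10.0)              # skip zero-lag self pairs
      print(np.round(semivariogram(x, field, lag_edges), 3))  # levels off at the range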

  1. A PRODUCTIVITY EVALUATION MODEL BASED ON INPUT AND OUTPUT ORIENTATIONS

    Directory of Open Access Journals (Sweden)

    C.O. Anyaeche

    2012-01-01

    Many productivity models evaluate either the input or the output performance using standalone techniques. This sometimes gives divergent views of the same system’s results. The work reported in this article, which simultaneously evaluated productivity from both orientations, was applied to real-life data. The results showed losses in productivity (–2%) and price recovery (–8%) for the outputs; the inputs showed a productivity gain (145%) but a price-recovery loss (–63%). These results imply losses in product performance but a productivity gain in inputs. The loss in the price recovery of inputs indicates a problem in the pricing policy. This model is applicable in product diversification.

  2. Wideband Small-Signal Input dq Admittance Modeling of Six-Pulse Diode Rectifiers

    DEFF Research Database (Denmark)

    Yue, Xiaolong; Wang, Xiongfei; Blaabjerg, Frede

    2018-01-01

    This paper studies the wideband small-signal input dq admittance of six-pulse diode rectifiers. Considering the frequency coupling introduced by ripple frequency harmonics of the d- and q-channel switching function, the proposed model successfully predicts the small-signal input dq admittance of six-pulse diode rectifiers in high-frequency regions that existing models fail to explain. Simulation and experimental results verify the accuracy of the proposed model.

  3. Statistical Analysis of Input Parameters Impact on the Modelling of Underground Structures

    Directory of Open Access Journals (Sweden)

    M. Hilar

    2008-01-01

    Full Text Available The behaviour of a geomechanical model and its final results are strongly affected by the input parameters. As the inherent variability of rock mass is difficult to model, engineers are frequently forced to face the question “Which input values should be used for analyses?” The correct answer to such a question requires a probabilistic approach, considering the uncertainty of site investigations and variation in the ground. This paper describes the statistical analysis of input parameters for FEM calculations of traffic tunnels in the city of Prague. At the beginning of the paper, the inaccuracy in geotechnical modelling is discussed. The following part summarizes fuzzy techniques, including an application of fuzzy arithmetic to the shotcrete parameters. The next part of the paper focuses on stochastic simulation: Monte Carlo simulation is briefly described, and the Latin Hypercube method is described in more detail. At the end, several practical examples are described: a statistical analysis of the input parameters for the numerical modelling of the completed Mrázovka tunnel (profile West Tunnel Tube, km 5.160) and modelling of the constructed tunnel Špejchar – Pelc Tyrolka.
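
    As a hedged illustration of the stratified sampling step discussed above, the sketch below implements basic Latin Hypercube sampling on the unit hypercube and maps the samples onto two hypothetical shotcrete parameter ranges; the ranges and variable names are invented for illustration and are not from the cited analyses.

```python
import numpy as np

def latin_hypercube(n_samples, n_vars, rng=None):
    """Stratified LHS sample on the unit hypercube: each variable's range is
    split into n_samples equal strata, and each stratum is sampled once."""
    rng = np.random.default_rng(rng)
    u = (np.arange(n_samples)[:, None] + rng.random((n_samples, n_vars))) / n_samples
    for j in range(n_vars):
        u[:, j] = rng.permutation(u[:, j])  # decouple strata across variables
    return u

# Map unit samples to hypothetical shotcrete E-modulus [GPa] and
# cohesion [kPa] ranges before feeding them to the FEM runs.
u = latin_hypercube(10, 2, rng=1)
E = 5.0 + u[:, 0] * (25.0 - 5.0)
c = 50.0 + u[:, 1] * (150.0 - 50.0)
print(np.c_[E, c])
```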

  4. Soil-related Input Parameters for the Biosphere Model

    International Nuclear Information System (INIS)

    A. J. Smith

    2003-01-01

    This analysis is one of the technical reports containing documentation of the Environmental Radiation Model for Yucca Mountain Nevada (ERMYN), a biosphere model supporting the Total System Performance Assessment (TSPA) for the geologic repository at Yucca Mountain. The biosphere model is one of a series of process models supporting the TSPA for the Yucca Mountain repository. A graphical representation of the documentation hierarchy for the ERMYN biosphere model is presented in Figure 1-1. This figure shows the interrelationships among the products (i.e., analysis and model reports) developed for biosphere modeling, and the plan for development of the biosphere abstraction products for TSPA, as identified in the ''Technical Work Plan: for Biosphere Modeling and Expert Support'' (BSC 2003 [163602]). It should be noted that some documents identified in Figure 1-1 may be under development at the time this report is issued and therefore not available. This figure is included to provide an understanding of how this analysis report contributes to biosphere modeling in support of the license application, and is not intended to imply that access to the listed documents is required to understand the contents of this report. This report, ''Soil-Related Input Parameters for the Biosphere Model'', is one of the five analysis reports that develop input parameters for use in the ERMYN model. This report is the source documentation for the six biosphere parameters identified in Table 1-1. ''The Biosphere Model Report'' (BSC 2003 [160699]) describes in detail the conceptual model as well as the mathematical model and its input parameters. The purpose of this analysis was to develop the biosphere model parameters needed to evaluate doses from pathways associated with the accumulation and depletion of radionuclides in the soil. These parameters support the calculation of radionuclide concentrations in soil from on-going irrigation and ash deposition.

  5. Soil-Related Input Parameters for the Biosphere Model

    Energy Technology Data Exchange (ETDEWEB)

    A. J. Smith

    2004-09-09

    This report presents one of the analyses that support the Environmental Radiation Model for Yucca Mountain Nevada (ERMYN). The ''Biosphere Model Report'' (BSC 2004 [DIRS 169460]) describes the details of the conceptual model as well as the mathematical model and the required input parameters. The biosphere model is one of a series of process models supporting the postclosure Total System Performance Assessment (TSPA) for the Yucca Mountain repository. A schematic representation of the documentation flow for the Biosphere input to TSPA is presented in Figure 1-1. This figure shows the evolutionary relationships among the products (i.e., analysis and model reports) developed for biosphere modeling, and the biosphere abstraction products for TSPA, as identified in the ''Technical Work Plan for Biosphere Modeling and Expert Support'' (TWP) (BSC 2004 [DIRS 169573]). This figure is included to provide an understanding of how this analysis report contributes to biosphere modeling in support of the license application, and is not intended to imply that access to the listed documents is required to understand the contents of this report. This report, ''Soil-Related Input Parameters for the Biosphere Model'', is one of the five analysis reports that develop input parameters for use in the ERMYN model. This report is the source documentation for the six biosphere parameters identified in Table 1-1. The purpose of this analysis was to develop the biosphere model parameters associated with the accumulation and depletion of radionuclides in the soil. These parameters support the calculation of radionuclide concentrations in soil from on-going irrigation or ash deposition and, as a direct consequence, radionuclide concentration in other environmental media that are affected by radionuclide concentrations in soil. The analysis was performed in accordance with the TWP (BSC 2004 [DIRS 169573]) where the governing procedure

  6. Determination of the arterial input function in mouse-models using clinical MRI

    International Nuclear Information System (INIS)

    Theis, D.; Fachhochschule Giessen-Friedberg; Keil, B.; Heverhagen, J.T.; Klose, K.J.; Behe, M.; Fiebich, M.

    2008-01-01

    Dynamic contrast enhanced magnetic resonance imaging is a promising method for the quantitative analysis of tumor perfusion and is increasingly used in the study of cancer in small-animal models. In those studies, the determination of the arterial input function (AIF) of the target tissue can be the first step. Series of short-axis images of the heart were acquired during administration of a bolus of Gd-DTPA using saturation-recovery gradient echo pulse sequences. The AIF was determined from the changes of the signal intensity in the left ventricle. The native T1 relaxation times and AIF were determined for 11 mice. An average value of (1.16 ± 0.09) s for the native T1 relaxation time was measured. However, the AIF showed significant inter-animal variability, as previously observed by other authors. The inter-animal variability shows that a direct measurement of the AIF is reasonable to avoid significant errors. The proposed method for determination of the AIF proved to be reliable. (orig.)

  7. The role of additive neurogenesis and synaptic plasticity in a hippocampal memory model with grid-cell like input.

    Directory of Open Access Journals (Sweden)

    Peter A Appleby

    Full Text Available Recently, we presented a study of adult neurogenesis in a simplified hippocampal memory model. The network was required to encode and decode memory patterns despite changing input statistics. We showed that additive neurogenesis was a more effective adaptation strategy compared to neuronal turnover and conventional synaptic plasticity as it allowed the network to respond to changes in the input statistics while preserving representations of earlier environments. Here we extend our model to include realistic, spatially driven input firing patterns in the form of grid cells in the entorhinal cortex. We compare network performance across a sequence of spatial environments using three distinct adaptation strategies: conventional synaptic plasticity, where the network is of fixed size but the connectivity is plastic; neuronal turnover, where the network is of fixed size but units in the network may die and be replaced; and additive neurogenesis, where the network starts out with fewer initial units but grows over time. We confirm that additive neurogenesis is a superior adaptation strategy when using realistic, spatially structured input patterns. We then show that a more biologically plausible neurogenesis rule that incorporates cell death and enhanced plasticity of new granule cells has an overall performance significantly better than any one of the three individual strategies operating alone. This adaptation rule can be tailored to maximise performance of the network when operating as either a short- or long-term memory store. We also examine the time course of adult neurogenesis over the lifetime of an animal raised under different hypothetical rearing conditions. These growth profiles have several distinct features that form a theoretical prediction that could be tested experimentally. Finally, we show that place cells can emerge and refine in a realistic manner in our model as a direct result of the sparsification performed by the dentate gyrus.

  8. Can Simulation Credibility Be Improved Using Sensitivity Analysis to Understand Input Data Effects on Model Outcome?

    Science.gov (United States)

    Myers, Jerry G.; Young, M.; Goodenow, Debra A.; Keenan, A.; Walton, M.; Boley, L.

    2015-01-01

    Model and simulation (MS) credibility is defined as the quality to elicit belief or trust in MS results. NASA-STD-7009 [1] delineates eight components (Verification, Validation, Input Pedigree, Results Uncertainty, Results Robustness, Use History, MS Management, People Qualifications) that address quantifying model credibility, and provides guidance to the model developers, analysts, and end users for assessing the MS credibility. Of the eight characteristics, input pedigree, or the quality of the data used to develop model input parameters, governing functions, or initial conditions, can vary significantly. These data quality differences have varying consequences across the range of MS application. NASA-STD-7009 requires that the lowest input data quality be used to represent the entire set of input data when scoring the input pedigree credibility of the model. This requirement provides a conservative assessment of model inputs, and maximizes the communication of the potential level of risk of using model outputs. Unfortunately, in practice, this may result in overly pessimistic communication of the MS output, undermining the credibility of simulation predictions to decision makers. This presentation proposes an alternative assessment mechanism, utilizing results parameter robustness, also known as model input sensitivity, to improve the credibility scoring process for specific simulations.

  9. Generating quantitative models describing the sequence specificity of biological processes with the stabilized matrix method

    Directory of Open Access Journals (Sweden)

    Sette Alessandro

    2005-05-01

    Full Text Available Abstract Background Many processes in molecular biology involve the recognition of short sequences of nucleic or amino acids, such as the binding of immunogenic peptides to major histocompatibility complex (MHC) molecules. From experimental data, a model of the sequence specificity of these processes can be constructed, such as a sequence motif, a scoring matrix or an artificial neural network. The purpose of these models is two-fold. First, they can provide a summary of experimental results, allowing for a deeper understanding of the mechanisms involved in sequence recognition. Second, such models can be used to predict the experimental outcome for yet untested sequences. In the past we reported the development of a method to generate such models called the Stabilized Matrix Method (SMM). This method has been successfully applied to predicting peptide binding to MHC molecules, peptide transport by the transporter associated with antigen presentation (TAP) and proteasomal cleavage of protein sequences. Results Herein we report the implementation of the SMM algorithm as a publicly available software package. Specific features determining the type of problems the method is most appropriate for are discussed. Advantageous features of the package are: (1) the output generated is easy to interpret, (2) input and output are both quantitative, (3) specific computational strategies to handle experimental noise are built in, (4) the algorithm is designed to effectively handle bounded experimental data, (5) experimental data from randomized peptide libraries and conventional peptides can easily be combined, and (6) it is possible to incorporate pair interactions between positions of a sequence. Conclusion Making the SMM method publicly available enables bioinformaticians and experimental biologists to easily access it, to compare its performance to other prediction methods, and to extend it to other applications.
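
    The prediction step of any scoring-matrix model of this kind is simple additive scoring over positions. The sketch below illustrates it with a toy matrix; this is not the SMM package itself, and the alphabet, matrix values, and offset are all invented.

```python
import numpy as np

# Toy position-specific scoring matrix for 3-mer peptides (rows: positions,
# columns: a reduced alphabet). Real SMM matrices cover 20 amino acids and
# longer peptides; all values here are made up.
alphabet = {"A": 0, "L": 1, "K": 2, "E": 3}
matrix = np.array([
    [0.2, -0.5, 1.1, -0.3],
    [-0.1, 0.7, -0.9, 0.4],
    [1.3, -0.2, 0.0, -1.0],
])
offset = 2.0  # intercept fitted together with the matrix

def predict(peptide):
    """Additive model: prediction = offset + sum of per-position scores."""
    return offset + sum(matrix[i, alphabet[aa]] for i, aa in enumerate(peptide))

print(predict("ALK"))  # e.g., a predicted -log(IC50)-like quantity
```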

  10. A new interpretation and validation of variance based importance measures for models with correlated inputs

    Science.gov (United States)

    Hao, Wenrui; Lu, Zhenzhou; Li, Luyi

    2013-05-01

    In order to explore the contributions by correlated input variables to the variance of the output, a novel interpretation framework of importance measure indices is proposed for models with correlated inputs, which includes indices of the total correlated contribution and the total uncorrelated contribution. The proposed indices accurately describe the contributions of the correlated inputs to the variance of the output, and they can be viewed as a complement and correction of the interpretation of the contributions by correlated inputs presented in "Estimation of global sensitivity indices for models with dependent variables, Computer Physics Communications, 183 (2012) 937-946". Both contain the independent contribution by an individual input. Taking the general quadratic polynomial as an illustration, the total correlated contribution and the independent contribution by an individual input are derived analytically, from which the components, and their origins, of both contributions of a correlated input can be clarified without ambiguity. In the special case that no square term is included in the quadratic polynomial model, the total correlated contribution by an input can be further decomposed into the variance contribution related to the correlation of the input with other inputs and the independent contribution by the input itself, and the total uncorrelated contribution can be further decomposed into the independent part from interaction between the input and the others and the independent part from the input itself. Numerical examples demonstrate that the derived analytical expressions of the variance-based importance measures are correct, and that clarifying the correlated input contribution to the model output by analytical derivation is important for extending the theory and solutions for uncorrelated inputs to correlated ones.
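
    The central quantity, the total (correlated) contribution of an input, can be estimated numerically as Var(E[Y|Xi])/Var(Y). The sketch below checks a Monte Carlo estimate against the analytic value for a toy linear model with correlated Gaussian inputs; the model and numbers are illustrative and are not the paper's examples.

```python
import numpy as np

rng = np.random.default_rng(0)
rho = 0.6
cov = [[1.0, rho], [rho, 1.0]]
x = rng.multivariate_normal([0.0, 0.0], cov, size=200_000)
y = x[:, 0] + x[:, 1]          # toy model Y = X1 + X2 with correlated inputs

# Total (correlated) contribution of X1: Var(E[Y|X1]) / Var(Y).
# In this linear-Gaussian case E[Y|X1] is linear in X1, so a least-squares
# fit of Y on X1 recovers it.
b = np.polyfit(x[:, 0], y, 1)
s1_total = np.var(np.polyval(b, x[:, 0])) / np.var(y)
print(s1_total, (1 + rho) / 2)   # Monte Carlo estimate vs analytic value
```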

  11. Sensitivity of a complex urban air quality model to input data

    International Nuclear Information System (INIS)

    Seigneur, C.; Tesche, T.W.; Roth, P.M.; Reid, L.E.

    1981-01-01

    In recent years, urban-scale photochemical simulation models have been developed that are of practical value for predicting air quality and analyzing the impacts of alternative emission control strategies. Although the performance of some urban-scale models appears to be acceptable, the demanding data requirements of such models have prompted concern about the costs of data acquisition, which might be high enough to preclude the use of photochemical models for many urban areas. To explore this issue, sensitivity studies with the Systems Applications, Inc. (SAI) Airshed Model, a grid-based time-dependent photochemical dispersion model, have been carried out for the Los Angeles basin. Reductions in the amount and quality of meteorological, air quality and emission data, as well as modifications of the model gridded structure, have been analyzed. This paper presents and interprets the results of 22 sensitivity studies. A sensitivity-uncertainty index is defined to rank input data needs for an urban photochemical model. The index takes into account the sensitivity of model predictions to the amount of input data, the costs of data acquisition, and the uncertainties in the air quality model input variables. The results of these sensitivity studies are considered in light of the limitations of specific attributes of the Los Angeles basin and of the modeling conditions (e.g., choice of wind model, length of simulation time). The extent to which the results may be applied to other urban areas also is discussed

  12. Sequence memory based on coherent spin-interaction neural networks.

    Science.gov (United States)

    Xia, Min; Wong, W K; Wang, Zhijie

    2014-12-01

    Sequence information processing, for instance sequence memory, plays an important role in many functions of the brain. In the workings of the human brain, the steady-state period is alterable. However, in existing sequence memory models using heteroassociations, the steady-state period cannot be changed during sequence recall. In this work, a novel neural network model for sequence memory with a controllable steady-state period based on coherent spin interaction is proposed. In the proposed model, neurons fire collectively in a phase-coherent manner, which lets a neuron group respond differently to different patterns and also lets different neuron groups respond differently to one pattern. Simulation results demonstrating the performance of the sequence memory are presented. By introducing coherent spin interaction, the steady-state period can be controlled by the dimension parameters and by the overlap between the input pattern and the stored patterns. The sequence storage capacity is enlarged by coherent spin interaction compared with existing sequence memory models. Furthermore, the sequence storage capacity has an exponential relationship to the dimension of the neural network.

  13. Unified Deep Learning Architecture for Modeling Biology Sequence.

    Science.gov (United States)

    Wu, Hongjie; Cao, Chengyuan; Xia, Xiaoyan; Lu, Qiang

    2017-10-09

    Prediction of the spatial structure or function of biological macromolecules based on their sequence remains an important challenge in bioinformatics. When modeling biological sequences using traditional sequencing models, characteristics, such as long-range interactions between basic units, the complicated and variable output of labeled structures, and the variable length of biological sequences, usually lead to different solutions on a case-by-case basis. This study proposed the use of bidirectional recurrent neural networks based on long short-term memory or a gated recurrent unit to capture long-range interactions by designing the optional reshape operator to adapt to the diversity of the output labels and implementing a training algorithm to support the training of sequence models capable of processing variable-length sequences. Additionally, the merge and pooling operators enhanced the ability to capture short-range interactions between basic units of biological sequences. The proposed deep-learning model and its training algorithm might be capable of solving currently known biological sequence-modeling problems through the use of a unified framework. We validated our model on one of the most difficult biological sequence-modeling problems currently known, with our results indicating the ability of the model to obtain predictions of protein residue interactions that exceeded the accuracy of current popular approaches by 10% based on multiple benchmarks.
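
    A minimal version of the architecture described, a bidirectional recurrent network producing per-position labels for a variable-length sequence, can be sketched in PyTorch as below. The layer sizes and label count are placeholders, and the paper's reshape, merge, and pooling operators are omitted.

```python
import torch
import torch.nn as nn

class BiSeqModel(nn.Module):
    """Minimal bidirectional LSTM for per-residue labels over biological
    sequences (a sketch of the idea, not the paper's implementation)."""
    def __init__(self, n_tokens=20, emb=32, hidden=64, n_labels=3):
        super().__init__()
        self.embed = nn.Embedding(n_tokens, emb)
        self.rnn = nn.LSTM(emb, hidden, batch_first=True, bidirectional=True)
        self.head = nn.Linear(2 * hidden, n_labels)   # both directions concatenated

    def forward(self, tokens):                         # tokens: (batch, length)
        h, _ = self.rnn(self.embed(tokens))            # (batch, length, 2*hidden)
        return self.head(h)                            # per-position label logits

seq = torch.randint(0, 20, (1, 120))                   # one 120-residue sequence
logits = BiSeqModel()(seq)
print(logits.shape)                                    # torch.Size([1, 120, 3])
```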

  14. Calibration of controlling input models for pavement management system.

    Science.gov (United States)

    2013-07-01

    The Oklahoma Department of Transportation (ODOT) is currently using the Deighton Total Infrastructure Management System (dTIMS) software for pavement management. This system is based on several input models which are computational backbones to dev...

  15. Quality assurance of weather data for agricultural system model input

    Science.gov (United States)

    It is well known that crop production and hydrologic variation on watersheds are weather related. Rarely, however, are meteorological data quality checks reported for agricultural systems model research. We present quality assurance procedures for agricultural system model weather data input. Problems...
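
    Typical quality-assurance checks of this kind combine physical range limits, internal-consistency tests, and step (spike) tests. The sketch below shows the flavor; the thresholds are illustrative assumptions, not the procedures of the cited work.

```python
import numpy as np

def qa_daily_weather(tmax, tmin, precip):
    """Flag physically implausible or inconsistent daily records.
    Thresholds are illustrative; operational limits are site-specific."""
    flags = {}
    flags["tmax_range"] = (tmax < -60) | (tmax > 60)          # deg C bounds
    flags["tmin_gt_tmax"] = tmin > tmax                       # internal consistency
    flags["precip_neg"] = precip < 0                          # nonphysical
    flags["tmax_step"] = np.abs(np.diff(tmax, prepend=tmax[0])) > 25  # spike test
    return flags

tmax = np.array([21.0, 22.5, 70.0, 23.0])
tmin = np.array([10.0, 24.0, 12.0, 11.0])
precip = np.array([0.0, -1.0, 3.2, 0.0])
for name, mask in qa_daily_weather(tmax, tmin, precip).items():
    print(name, np.where(mask)[0])   # indices of flagged days
```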

  16. Multiple-Input Subject-Specific Modeling of Plasma Glucose Concentration for Feedforward Control.

    Science.gov (United States)

    Kotz, Kaylee; Cinar, Ali; Mei, Yong; Roggendorf, Amy; Littlejohn, Elizabeth; Quinn, Laurie; Rollins, Derrick K

    2014-11-26

    The ability to accurately develop subject-specific input causation models for blood glucose concentration (BGC) over large input sets can have a significant impact on tightening control for insulin-dependent diabetes. More specifically, for Type 1 diabetics (T1Ds), it can lead to an effective artificial pancreas (i.e., an automatic control system that delivers exogenous insulin) under extreme changes in critical disturbances. These disturbances include food consumption, activity variations, and physiological stress changes. Thus, this paper presents a free-living, outpatient, multiple-input modeling method for BGC with strong causation attributes that is stable and guards against overfitting, to provide an effective modeling approach for feedforward control (FFC). This approach is a Wiener block-oriented methodology, which has unique attributes for meeting critical requirements for effective, long-term FFC.
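
    A Wiener block-oriented model is a linear dynamic block followed by a static nonlinearity. The sketch below shows that structure for a single meal input; all constants are invented for illustration rather than fitted to patient data.

```python
import numpy as np

def wiener_response(u, a=0.9, b=0.1, gain=120.0, v50=0.5):
    """Wiener structure: first-order linear dynamics followed by a static
    saturating nonlinearity (all constants illustrative, not fitted)."""
    v = np.zeros(len(u))
    for k in range(1, len(u)):
        v[k] = a * v[k - 1] + b * u[k - 1]     # linear dynamic block
    return gain * v / (v50 + np.abs(v))        # static output nonlinearity

carbs = np.zeros(240)
carbs[30] = 60.0                               # hypothetical meal input [g]
bgc_rise = wiener_response(carbs)              # modeled excursion above basal
print(bgc_rise.max())
```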

  17. Development of the MARS input model for Kori nuclear units 1 transient analyzer

    International Nuclear Information System (INIS)

    Hwang, M.; Kim, K. D.; Lee, S. W.; Lee, Y. J.; Lee, W. J.; Chung, B. D.; Jeong, J. J.

    2004-11-01

    KAERI has been developing an 'NSSS transient analyzer' based on best-estimate codes for the Kori Nuclear Unit 1 plant. The MARS and RETRAN codes have been used as the best-estimate codes for the NSSS transient analyzer. Among these codes, the MARS code is adopted for realistic analysis of small- and large-break loss-of-coolant accidents, for which the break size is greater than 2 inches in diameter. So it is necessary to develop the MARS input model for the Kori Nuclear Unit 1 plant. This report includes the input model (hydrodynamic component and heat structure models) requirements and the calculation note for the MARS input data generation for the Kori Nuclear Unit 1 plant analyzer (see the Appendix). In order to confirm the validity of the input data, we performed calculations for a steady state at the 100% power operation condition and for a double-ended cold-leg break LOCA. The results of the steady-state calculation agree well with the design data. The results of the LOCA calculation seem reasonable and consistent with those of other best-estimate calculations. Therefore, the MARS input data can be used as a base input deck for the MARS transient analyzer for Kori Nuclear Unit 1

  18. Development of the RETRAN input model for Ulchin 3/4 visual system analyzer

    International Nuclear Information System (INIS)

    Lee, S. W.; Kim, K. D.; Lee, Y. J.; Lee, W. J.; Chung, B. D.; Jeong, J. J.; Hwang, M. K.

    2004-01-01

    As a part of the Long-Term Nuclear R and D program, KAERI has developed the so-called Visual System Analyzer (ViSA) based on best-estimate codes. The MARS and RETRAN codes are used as the best-estimate codes for ViSA. Between these two codes, the RETRAN code is used for realistic analysis of non-LOCA transients and small-break loss-of-coolant accidents, for which the break size is less than 3 inches in diameter. So it is necessary to develop the RETRAN input model for the Ulchin 3/4 plants (KSNP). In recognition of this, the RETRAN input model for the Ulchin 3/4 plants has been developed. This report includes the input model requirements and the calculation note for the input data generation (see the Appendix). In order to confirm the validity of the input data, calculations were performed for a steady state at the 100% power operation condition, an inadvertent reactor trip, and an RCP trip. The results of the steady-state calculation agree well with the design data. The results of the other transient calculations seem reasonable and consistent with those of other best-estimate calculations. Therefore, the RETRAN input data can be used as a base input deck for the RETRAN transient analyzer for Ulchin 3/4. Moreover, it is found that the Core Protection Calculator (CPC) module, which was modified by the Korea Electric Power Research Institute (KEPRI), is well adapted to ViSA

  19. Optimal input shaping for Fisher identifiability of control-oriented lithium-ion battery models

    Science.gov (United States)

    Rothenberger, Michael J.

    This dissertation examines the fundamental challenge of optimally shaping input trajectories to maximize parameter identifiability of control-oriented lithium-ion battery models. Identifiability is a property from information theory that determines the solvability of parameter estimation for mathematical models using input-output measurements. This dissertation creates a framework that exploits the Fisher information metric to quantify the level of battery parameter identifiability, optimizes this metric through input shaping, and facilitates faster and more accurate estimation. The popularity of lithium-ion batteries is growing significantly in the energy storage domain, especially for stationary and transportation applications. While these cells have excellent power and energy densities, they are plagued with safety and lifespan concerns. These concerns are often resolved in the industry through conservative current and voltage operating limits, which reduce the overall performance and still lack robustness in detecting catastrophic failure modes. New advances in automotive battery management systems mitigate these challenges through the incorporation of model-based control to increase performance, safety, and lifespan. To achieve these goals, model-based control requires accurate parameterization of the battery model. While many groups in the literature study a variety of methods to perform battery parameter estimation, a fundamental issue of poor parameter identifiability remains apparent for lithium-ion battery models. This fundamental challenge of battery identifiability is studied extensively in the literature, and some groups are even approaching the problem of improving the ability to estimate the model parameters. The first approach is to add additional sensors to the battery to gain more information that is used for estimation. The other main approach is to shape the input trajectories to increase the amount of information that can be gained from input
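
    For a linear-Gaussian measurement model, the Fisher information about a parameter reduces to a simple sum, which makes the effect of input shaping easy to see. The battery-like model, noise level, and trajectories below are illustrative assumptions, not the dissertation's.

```python
import numpy as np

# Measurement model (illustrative): v_k = ocv - R * i_k + noise, noise std sigma.
# The Fisher information about the resistance R is sum(i_k**2) / sigma**2, so
# inputs that spend more time at the current limits are more informative.
sigma = 0.01
t = np.arange(600)
i_gentle = 1.0 + 0.2 * np.sin(2 * np.pi * t / 300)          # mild excitation
i_shaped = 2.0 * np.sign(np.sin(2 * np.pi * t / 60))        # bang-bang pulses

def fisher_info_R(i):
    return np.sum(i ** 2) / sigma ** 2

print(fisher_info_R(i_gentle), fisher_info_R(i_shaped))     # shaped input wins
```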

  20. A Design Method of Robust Servo Internal Model Control with Control Input Saturation

    OpenAIRE

    山田, 功; 舩見, 洋祐

    2001-01-01

    In the present paper, we examine a design method of robust servo Internal Model Control with control input saturation. First, we clarify the condition under which Internal Model Control has robust servo characteristics for systems with control input saturation. From this consideration, we propose a new design method of Internal Model Control with robust servo characteristics. A numerical example illustrates the effectiveness of the proposed method.

  1. QNB: differential RNA methylation analysis for count-based small-sample sequencing data with a quad-negative binomial model.

    Science.gov (United States)

    Liu, Lian; Zhang, Shao-Wu; Huang, Yufei; Meng, Jia

    2017-08-31

    As a newly emerged research area, RNA epigenetics has drawn increasing attention recently for the participation of RNA methylation and other modifications in a number of crucial biological processes. Thanks to high-throughput sequencing techniques such as MeRIP-Seq, transcriptome-wide RNA methylation profiles are now available in the form of count-based data, with which it is often of interest to study the dynamics at the epitranscriptomic layer. However, the sample size of an RNA methylation experiment is usually very small due to its cost; additionally, there usually exist a large number of genes whose methylation level cannot be accurately estimated due to their low expression level, making differential RNA methylation analysis a difficult task. We present QNB, a statistical approach for differential RNA methylation analysis with count-based small-sample sequencing data. Compared with previous approaches such as the DRME model, which is based on a statistical test covering the IP samples only with two negative binomial distributions, QNB is based on four independent negative binomial distributions with their variances and means linked by local regressions, and in this way the input control samples are also properly taken into account. In addition, unlike the DRME approach, which relies on the input control samples only for estimating the background, QNB uses a more robust estimator for gene expression by combining information from both input and IP samples, which could largely improve the testing performance for very lowly expressed genes. QNB showed improved performance on both simulated and real MeRIP-Seq datasets when compared with competing algorithms. The QNB model is also applicable to other datasets related to RNA modifications, including but not limited to RNA bisulfite sequencing, m1A-Seq, Par-CLIP, RIP-Seq, etc.
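
    The flavor of such count-based tests can be sketched with scipy's negative binomial distribution. Note that this toy likelihood-ratio comparison uses a fixed dispersion and only the IP samples, whereas QNB fits four NB distributions with locally regressed variances and uses the input samples as well; all counts below are invented.

```python
import numpy as np
from scipy.stats import nbinom

def nb_loglik(counts, mean, size):
    """Negative binomial log-likelihood in mean/size form
    (scipy parameterization: n = size, p = size / (size + mean))."""
    p = size / (size + mean)
    return nbinom.logpmf(counts, size, p).sum()

# Toy IP read counts for one gene under two conditions.
ip_ctrl = np.array([18, 22, 20])
ip_trt = np.array([41, 38, 45])
size = 10.0  # dispersion; QNB obtains this by sharing information across genes

ll_null = nb_loglik(np.r_[ip_ctrl, ip_trt], np.r_[ip_ctrl, ip_trt].mean(), size)
ll_alt = nb_loglik(ip_ctrl, ip_ctrl.mean(), size) + nb_loglik(ip_trt, ip_trt.mean(), size)
lr = 2 * (ll_alt - ll_null)  # compare against chi-square(1) for a crude p-value
print(lr)
```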

  2. Evaluating the uncertainty of input quantities in measurement models

    Science.gov (United States)

    Possolo, Antonio; Elster, Clemens

    2014-06-01

    The Guide to the Expression of Uncertainty in Measurement (GUM) gives guidance about how values and uncertainties should be assigned to the input quantities that appear in measurement models. This contribution offers a concrete proposal for how that guidance may be updated in light of the advances in the evaluation and expression of measurement uncertainty that were made in the course of the twenty years that have elapsed since the publication of the GUM, and also considering situations that the GUM does not yet contemplate. Our motivation is the ongoing conversation about a new edition of the GUM. While generally we favour a Bayesian approach to uncertainty evaluation, we also recognize the value that other approaches may bring to the problems considered here, and focus on methods for uncertainty evaluation and propagation that are widely applicable, including to cases that the GUM has not yet addressed. In addition to Bayesian methods, we discuss maximum-likelihood estimation, robust statistical methods, and measurement models where values of nominal properties play the same role that input quantities play in traditional models. We illustrate these general-purpose techniques in concrete examples, employing data sets that are realistic but that also are of conveniently small sizes. The supplementary material available online lists the R computer code that we have used to produce these examples (stacks.iop.org/Met/51/3/339/mmedia). Although we strive to stay close to clause 4 of the GUM, which addresses the evaluation of uncertainty for input quantities, we depart from it as we review the classes of measurement models that we believe are generally useful in contemporary measurement science. We also considerably expand and update the treatment that the GUM gives to Type B evaluations of uncertainty: reviewing the state-of-the-art, disciplined approach to the elicitation of expert knowledge, and its encapsulation in probability distributions that are usable in
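
    One widely applicable technique in this setting is Monte Carlo propagation of input-quantity distributions through the measurement model, in the spirit of GUM Supplement 1. The sketch below illustrates it; the model P = V·I·cos(φ) and all numbers are made up (the paper's own examples use R, while this sketch uses Python).

```python
import numpy as np

# Assign distributions to the input quantities and push samples through the
# measurement model; summarize the output distribution.
rng = np.random.default_rng(42)
N = 1_000_000
V = rng.normal(12.00, 0.02, N)            # Type A evaluation: voltage [V]
I = rng.normal(1.500, 0.004, N)           # Type A evaluation: current [A]
phi = rng.uniform(-0.01, 0.01, N)         # Type B: phase [rad], rectangular

P = V * I * np.cos(phi)                   # measurement model P = V*I*cos(phi)
print(f"P = {P.mean():.4f} W, u(P) = {P.std(ddof=1):.4f} W")
print("95% coverage interval:", np.percentile(P, [2.5, 97.5]))
```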

  3. CONSTRUCTION OF A DYNAMIC INPUT-OUTPUT MODEL WITH A HUMAN CAPITAL BLOCK

    Directory of Open Access Journals (Sweden)

    Baranov A. O.

    2017-03-01

    Full Text Available The accumulation of human capital is an important factor of economic growth. It is useful to include «human capital» as a factor in a macroeconomic model, as it helps to take into account the quality differentiation of the workforce. Most models distinguish the labor force only by level of education, while other factors remain unaccounted for. Among them are health status and the level of cultural development, which influence the productivity level as well as gross product reproduction. Including a human capital block in the interindustry model can make it more reliable for economic development forecasting. The article presents a mathematical description of the extended dynamic input-output model (DIOM) with a human capital block. The extended DIOM is based on the Input-Output Model from the KAMIN system (the System of Integrated Analyses of Interindustrial Information) developed at the Institute of Economics and Industrial Engineering of the Siberian Branch of the Academy of Sciences of the Russian Federation and at the Novosibirsk State University. The extended input-output model can be used to analyze and forecast the development of the Russian economy.
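
    At the core of any such model is the static Leontief relation x = (I − A)⁻¹f. The sketch below computes it for a toy three-sector economy (all coefficients invented); a dynamic model with a human capital block extends this core with investment and education/health sectors.

```python
import numpy as np

# Static Leontief core: gross output x needed to meet final demand f,
# given a technical-coefficients matrix A (numbers illustrative).
A = np.array([[0.10, 0.30, 0.05],
              [0.20, 0.10, 0.10],
              [0.05, 0.15, 0.20]])
f = np.array([100.0, 150.0, 80.0])        # final demand by sector

x = np.linalg.solve(np.eye(3) - A, f)     # x = (I - A)^-1 f
print(x)
# A human-capital block would add education/health sectors to A and make
# labor productivity depend on their accumulated output.
```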

  4. Predictive Place-Cell Sequences for Goal-Finding Emerge from Goal Memory and the Cognitive Map: A Computational Model

    Directory of Open Access Journals (Sweden)

    Lorenz Gönner

    2017-10-01

    Full Text Available Hippocampal place-cell sequences observed during awake immobility often represent previous experience, suggesting a role in memory processes. However, recent reports of goals being overrepresented in sequential activity suggest a role in short-term planning, although a detailed understanding of the origins of hippocampal sequential activity and of its functional role is still lacking. In particular, it is unknown which mechanism could support efficient planning by generating place-cell sequences biased toward known goal locations, in an adaptive and constructive fashion. To address these questions, we propose a model of spatial learning and sequence generation as interdependent processes, integrating cortical contextual coding, synaptic plasticity and neuromodulatory mechanisms into a map-based approach. Following goal learning, sequential activity emerges from continuous attractor network dynamics biased by goal memory inputs. We apply Bayesian decoding on the resulting spike trains, allowing a direct comparison with experimental data. Simulations show that this model (1) explains the generation of never-experienced sequence trajectories in familiar environments, without requiring virtual self-motion signals, (2) accounts for the bias in place-cell sequences toward goal locations, (3) highlights their utility in flexible route planning, and (4) provides specific testable predictions.

  5. Multivariate Self-Exciting Threshold Autoregressive Models with eXogenous Input

    OpenAIRE

    Addo, Peter Martey

    2014-01-01

    This study defines multivariate Self-Exciting Threshold Autoregressive with eXogenous input (MSETARX) models and presents an estimation procedure for their parameters. Conditions for stationarity of the nonlinear MSETARX models are provided. In particular, the efficiency of an adaptive parameter estimation algorithm and of an LSE (least squares estimate) algorithm for this class of models is then assessed via simulations.

  6. Joint analysis of input and parametric uncertainties in watershed water quality modeling: A formal Bayesian approach

    Science.gov (United States)

    Han, Feng; Zheng, Yi

    2018-06-01

    Significant input uncertainty is a major source of error in watershed water quality (WWQ) modeling. It remains challenging to address the input uncertainty in a rigorous Bayesian framework. This study develops the Bayesian Analysis of Input and Parametric Uncertainties (BAIPU), an approach for the joint analysis of input and parametric uncertainties through a tight coupling of Markov Chain Monte Carlo (MCMC) analysis and Bayesian Model Averaging (BMA). The formal likelihood function for this approach is derived considering a lag-1 autocorrelated, heteroscedastic, and Skew Exponential Power (SEP) distributed error model. A series of numerical experiments were performed based on a synthetic nitrate pollution case and on a real study case in the Newport Bay Watershed, California. The Soil and Water Assessment Tool (SWAT) and Differential Evolution Adaptive Metropolis (DREAM(ZS)) were used as the representative WWQ model and MCMC algorithm, respectively. The major findings include the following: (1) the BAIPU can be implemented and used to appropriately identify the uncertain parameters and characterize the predictive uncertainty; (2) the compensation effect between the input and parametric uncertainties can seriously mislead modeling-based management decisions if the input uncertainty is not explicitly accounted for; (3) the BAIPU accounts for the interaction between the input and parametric uncertainties and therefore provides more accurate calibration and uncertainty results than a sequential analysis of the uncertainties; and (4) the BAIPU quantifies the credibility of different input assumptions on a statistical basis and can be implemented as an effective inverse modeling approach to the joint inference of parameters and inputs.

  7. GASFLOW computer code (physical models and input data)

    International Nuclear Information System (INIS)

    Muehlbauer, Petr

    2007-11-01

    The GASFLOW computer code was developed jointly by the Los Alamos National Laboratory, USA, and Forschungszentrum Karlsruhe, Germany. The code is primarily intended for calculations of the transport, mixing, and combustion of hydrogen and other gases in nuclear reactor containments and in other facilities. The physical models and the input data are described, and a commented simple calculation is presented

  8. A neurocomputational model of automatic sequence production.

    Science.gov (United States)

    Helie, Sebastien; Roeder, Jessica L; Vucovich, Lauren; Rünger, Dennis; Ashby, F Gregory

    2015-07-01

    Most behaviors unfold in time and include a sequence of submovements or cognitive activities. In addition, most behaviors are automatic and repeated daily throughout life. Yet, relatively little is known about the neurobiology of automatic sequence production. Past research suggests a gradual transfer from the associative striatum to the sensorimotor striatum, but a number of more recent studies challenge this role of the basal ganglia (BG) in automatic sequence production. In this article, we propose a new neurocomputational model of automatic sequence production in which the main role of the BG is to train cortical-cortical connections within the premotor areas that are responsible for automatic sequence production. The new model is used to simulate four different data sets from human and nonhuman animals, including (1) behavioral data (e.g., RTs), (2) electrophysiology data (e.g., single-neuron recordings), (3) macrostructure data (e.g., TMS), and (4) neurological circuit data (e.g., inactivation studies). We conclude with a comparison of the new model with existing models of automatic sequence production and discuss a possible new role for the BG in automaticity and its implication for Parkinson's disease.

  9. Recurrent network models for perfect temporal integration of fluctuating correlated inputs.

    Directory of Open Access Journals (Sweden)

    Hiroshi Okamoto

    2009-06-01

    Full Text Available Temporal integration of input is essential to the accumulation of information in various cognitive and behavioral processes, and gradually increasing neuronal activity, typically occurring within a range of seconds, is considered to reflect such computation by the brain. Some psychological evidence suggests that temporal integration by the brain is nearly perfect, that is, the integration is non-leaky, and the output of a neural integrator is accurately proportional to the strength of input. Neural mechanisms of perfect temporal integration, however, remain largely unknown. Here, we propose a recurrent network model of cortical neurons that perfectly integrates partially correlated, irregular input spike trains. We demonstrate that the rate of this temporal integration changes proportionately to the probability of spike coincidences in synaptic inputs. We analytically prove that this highly accurate integration of synaptic inputs emerges from integration of the variance of the fluctuating synaptic inputs, when their mean component is kept constant. Highly irregular neuronal firing and spike coincidences are the major features of cortical activity, but they have been separately addressed so far. Our results suggest that the efficient protocol of information integration by cortical networks essentially requires both features and hence is heterotic.

  10. Key processes and input parameters for environmental tritium models

    International Nuclear Information System (INIS)

    Bunnenberg, C.; Taschner, M.; Ogram, G.L.

    1994-01-01

    The primary objective of the work reported here is to define key processes and input parameters for mathematical models of environmental tritium behaviour adequate for use in safety analysis and licensing of fusion devices like NET and associated tritium handling facilities. (author). 45 refs., 3 figs

  11. Key processes and input parameters for environmental tritium models

    Energy Technology Data Exchange (ETDEWEB)

    Bunnenberg, C; Taschner, M [Niedersaechsisches Inst. fuer Radiooekologie, Hannover (Germany); Ogram, G L [Ontario Hydro, Toronto, ON (Canada)

    1994-12-31

    The primary objective of the work reported here is to define key processes and input parameters for mathematical models of environmental tritium behaviour adequate for use in safety analysis and licensing of fusion devices like NET and associated tritium handling facilities. (author). 45 refs., 3 figs.

  12. Development of an Input Suite for an Orthotropic Composite Material Model

    Science.gov (United States)

    Hoffarth, Canio; Shyamsunder, Loukham; Khaled, Bilal; Rajan, Subramaniam; Goldberg, Robert K.; Carney, Kelly S.; Dubois, Paul; Blankenhorn, Gunther

    2017-01-01

    An orthotropic three-dimensional material model suitable for use in modeling impact tests has been developed that has three major components: elastic and inelastic deformations, damage, and failure. The material model has been implemented as MAT213 in a special version of LS-DYNA and uses tabulated data obtained from experiments. The prominent features of the constitutive model are illustrated using a widely used aerospace composite, the T800S/3900-2B [P2352W-19] BMS8-276 Rev-H unitape fiber-resin unidirectional composite. The input for the deformation model consists of experimental data from 12 distinct experiments at a known temperature and strain rate: tension and compression along all three principal directions, shear in all three principal planes, and off-axis tension or compression tests in all three principal planes, along with other material constants. There are additional inputs associated with the damage and failure models. The steps in using this model are illustrated: composite characterization tests, verification tests, and a validation test. The results show that the developed and implemented model is stable and yields acceptably accurate results.

  13. Assessment of input function distortions on kinetic model parameters in simulated dynamic 82Rb PET perfusion studies

    International Nuclear Information System (INIS)

    Meyer, Carsten; Peligrad, Dragos-Nicolae; Weibrecht, Martin

    2007-01-01

    Cardiac 82Rb dynamic PET studies allow quantifying absolute myocardial perfusion by using tracer kinetic modeling. Here, the accurate measurement of the input function, i.e., the tracer concentration in blood plasma, is a major challenge. This measurement is deteriorated by inappropriate temporal sampling, spillover, etc. Such effects may influence the measured input peak value and the measured blood pool clearance. The aim of our study is to evaluate the effect of input function distortions on the myocardial perfusion as estimated by the model. To this end, we simulate noise-free myocardium time activity curves (TACs) with a two-compartment kinetic model. The input function to the model is a generic analytical function. Distortions of this function have been introduced by varying its parameters. Using the distorted input function, the compartment model has been fitted to the simulated myocardium TAC. This analysis has been performed for various sets of model parameters covering a physiologically relevant range. The evaluation shows that a ±10% error in the input peak value can easily lead to a ±10-25% error in the model parameter K1, which relates to myocardial perfusion. Variations in the input function tail are generally less relevant. We conclude that an accurate estimation, especially of the plasma input peak, is crucial for a reliable kinetic analysis and blood flow estimation
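
    The underlying kinetic relation is that the tissue activity is the input function convolved with an exponential, C_t(t) = K1 ∫ AIF(u) exp(−k2(t − u)) du. The sketch below simulates a TAC from a synthetic AIF and shows how a distorted input peak propagates; all parameter values and the AIF shape are illustrative, not the study's generic function.

```python
import numpy as np

def tissue_tac(t, aif, K1=0.8, k2=0.4):
    """Two-compartment (plasma + tissue) solution via discretized convolution:
    C_t(t) = K1 * (AIF conv exp(-k2 t)), dt in minutes. Values illustrative."""
    dt = t[1] - t[0]
    return K1 * dt * np.convolve(aif, np.exp(-k2 * t))[: len(t)]

t = np.arange(0, 10, 0.05)                          # minutes
aif = 5.0 * t * np.exp(-3.0 * t)                    # gamma-variate-like input
tac = tissue_tac(t, aif)

# A 10% underestimated input peak biases the K1 that would best refit the data:
tac_distorted = tissue_tac(t, 0.9 * aif)
print(tac.max(), tac_distorted.max())               # scales with the input peak
```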

  14. Input vs. Output Taxation—A DSGE Approach to Modelling Resource Decoupling

    Directory of Open Access Journals (Sweden)

    Marek Antosiewicz

    2016-04-01

    Full Text Available Environmental taxes constitute a crucial instrument aimed at reducing resource use through lower production losses, resource-leaner products, and more resource-efficient production processes. In this paper we focus on material use and apply a multi-sector dynamic stochastic general equilibrium (DSGE) model to study two types of taxation: a tax on material inputs used by the industry, energy, construction, and transport sectors, and a tax on the output of these sectors. We allow for endogenous adoption of resource-saving technologies. We calibrate the model for the EU27 area using an input-output matrix. We consider taxation introduced from 2021 and simulate its impact until 2050. We compare the taxes with respect to their ability to induce a reduction in material use and to raise revenue. We also consider the effect of spending this revenue on a reduction of labour taxation. We find that input and output taxation create contrasting incentives and have opposite effects on resource efficiency. The material input tax induces investment in efficiency-improving technology which, in the long term, results in GDP and employment 15%–20% higher than in the case of a comparable output tax. We also find that using the revenues to reduce taxes on labour has stronger beneficial effects under the input tax.

  15. Modeling the short-run effect of fiscal stimuli on GDP : A new semi-closed input-output model

    NARCIS (Netherlands)

    Chen, Quanrun; Dietzenbacher, Erik; Los, Bart; Yang, Cuihong

    In this study, we propose a new semi-closed input-output model, which reconciles input-output analysis with modern consumption theories. It can simulate changes in household consumption behavior when exogenous stimulus policies lead to higher disposable income levels. It is useful for quantifying

  16. Modeling the short-run effect of fiscal stimuli on GDP : A new semi-closed input-output model

    NARCIS (Netherlands)

    Chen, Quanrun; Dietzenbacher, Erik; Los, Bart; Yang, Cuihong

    2016-01-01

    In this study, we propose a new semi-closed input-output model, which reconciles input-output analysis with modern consumption theories. It can simulate changes in household consumption behavior when exogenous stimulus policies lead to higher disposable income levels. It is useful for quantifying

  17. Forecasting the Romanian sectoral economy using the input-output method

    Directory of Open Access Journals (Sweden)

    Liliana DUGULEANĂ

    2017-07-01

    Full Text Available The purpose of this paper is to forecast sectoral output in 2013 based on the input-output structure of the Romanian economy in 2010. Since economic linkage mechanisms do not easily change over certain time periods, such forecasting is possible even without a continuous time series. Using the technical matrix of the sectoral structure described for the year 2010 and some known indicators of the economic sectors, such as the value added of each sector in 2013, the sectoral output is projected for 2013. The Romanian GDP in 2013 is estimated based on the input-output model. From a managerial perspective, this study is useful for forecasting sectoral output and for understanding sectoral behaviour, based on the input-output analysis of the value added, the compensation of employees, and the final demand considered here.
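
    The projection step described here can be sketched directly: with fixed technical coefficients, value added per sector determines gross output through v_j = (1 − Σ_i a_ij)·x_j. The coefficients and 2013 figures below are invented placeholders, not Romanian data.

```python
import numpy as np

# Project sectoral gross output from known value added, assuming the base-year
# technical coefficients still hold: v_j = (1 - sum_i a_ij) * x_j.
A = np.array([[0.15, 0.25, 0.05],
              [0.10, 0.20, 0.15],
              [0.05, 0.10, 0.10]])           # illustrative 2010 coefficients
v_2013 = np.array([120.0, 90.0, 200.0])      # value added by sector, 2013

va_share = 1.0 - A.sum(axis=0)               # value-added coefficient per sector
x_2013 = v_2013 / va_share                   # projected sectoral output
gdp_2013 = v_2013.sum()                      # GDP as total value added
print(x_2013, gdp_2013)
```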

  18. Environmental Transport Input Parameters for the Biosphere Model

    Energy Technology Data Exchange (ETDEWEB)

    M. A. Wasiolek

    2003-06-27

    This analysis report is one of the technical reports documenting the Environmental Radiation Model for Yucca Mountain Nevada (ERMYN), a biosphere model supporting the total system performance assessment (TSPA) for the geologic repository at Yucca Mountain. A graphical representation of the documentation hierarchy for the ERMYN is presented in Figure 1-1. This figure shows relationships among the reports developed for biosphere modeling and biosphere abstraction products for the TSPA, as identified in the ''Technical Work Plan: for Biosphere Modeling and Expert Support'' (TWP) (BSC 2003 [163602]). Some documents in Figure 1-1 may be under development and not available when this report is issued. This figure provides an understanding of how this report contributes to biosphere modeling in support of the license application (LA), but access to the listed documents is not required to understand the contents of this report. This report is one of the reports that develops input parameter values for the biosphere model. The ''Biosphere Model Report'' (BSC 2003 [160699]) describes the conceptual model, the mathematical model, and the input parameters. The purpose of this analysis is to develop biosphere model parameter values related to radionuclide transport and accumulation in the environment. These parameters support calculations of radionuclide concentrations in the environmental media (e.g., soil, crops, animal products, and air) resulting from a given radionuclide concentration at the source of contamination (i.e., either in groundwater or volcanic ash). The analysis was performed in accordance with the TWP (BSC 2003 [163602]). This analysis develops values of parameters associated with many features, events, and processes (FEPs) applicable to the reference biosphere (DTN: M00303SEPFEPS2.000 [162452]), which are addressed in the biosphere model (BSC 2003 [160699]). The treatment of these FEPs is described in BSC (2003 [160699], Section 6.2).

  19. Environmental Transport Input Parameters for the Biosphere Model

    International Nuclear Information System (INIS)

    Wasiolek, M. A.

    2003-01-01

    This analysis report is one of the technical reports documenting the Environmental Radiation Model for Yucca Mountain Nevada (ERMYN), a biosphere model supporting the total system performance assessment (TSPA) for the geologic repository at Yucca Mountain. A graphical representation of the documentation hierarchy for the ERMYN is presented in Figure 1-1. This figure shows relationships among the reports developed for biosphere modeling and biosphere abstraction products for the TSPA, as identified in the ''Technical Work Plan: for Biosphere Modeling and Expert Support'' (TWP) (BSC 2003 [163602]). Some documents in Figure 1-1 may be under development and not available when this report is issued. This figure provides an understanding of how this report contributes to biosphere modeling in support of the license application (LA), but access to the listed documents is not required to understand the contents of this report. This report is one of the reports that develops input parameter values for the biosphere model. The ''Biosphere Model Report'' (BSC 2003 [160699]) describes the conceptual model, the mathematical model, and the input parameters. The purpose of this analysis is to develop biosphere model parameter values related to radionuclide transport and accumulation in the environment. These parameters support calculations of radionuclide concentrations in the environmental media (e.g., soil, crops, animal products, and air) resulting from a given radionuclide concentration at the source of contamination (i.e., either in groundwater or volcanic ash). The analysis was performed in accordance with the TWP (BSC 2003 [163602]). This analysis develops values of parameters associated with many features, events, and processes (FEPs) applicable to the reference biosphere (DTN: M00303SEPFEPS2.000 [162452]), which are addressed in the biosphere model (BSC 2003 [160699]). The treatment of these FEPs is described in BSC (2003 [160699], Section 6.2). Parameter values

  20. Input modelling of ASSERT-PV V2R8M1 for RUFIC fuel bundle

    Energy Technology Data Exchange (ETDEWEB)

    Park, Joo Hwan; Suk, Ho Chun

    2001-02-01

    This report describes the input modelling for the subchannel analysis of the CANFLEX-RU (RUFIC) fuel bundle, which has been developed as an advanced fuel bundle for the CANDU-6 reactor, using the ASSERT-PV V2R8M1 code. The execution file of the ASSERT-PV V2R8M1 code was recently transferred from AECL under the JRDC agreement between KAERI and AECL. ASSERT-PV V2R8M1, which is quite different from the COBRA-IV-i code, has been developed for thermalhydraulic analysis of the CANDU-6 fuel channel by the subchannel analysis method and updated so that the 43-element CANDU fuel geometry can be applied. Hence, the ASSERT code can be applied to the subchannel analysis of the RUFIC fuel bundle. The present report was prepared for the ASSERT input modelling of the RUFIC fuel bundle. Since ASSERT results depend highly on the user's input modelling, the calculation results may differ considerably among users' input models. The objective of the present report is to provide a detailed description of the background information for the input data, lending credibility to the calculation results.

  1. Input modelling of ASSERT-PV V2R8M1 for RUFIC fuel bundle

    International Nuclear Information System (INIS)

    Park, Joo Hwan; Suk, Ho Chun

    2001-02-01

    This report describes the input modelling for the subchannel analysis of the CANFLEX-RU (RUFIC) fuel bundle, which has been developed as an advanced fuel bundle for the CANDU-6 reactor, using the ASSERT-PV V2R8M1 code. The execution file of the ASSERT-PV V2R8M1 code was recently transferred from AECL under the JRDC agreement between KAERI and AECL. ASSERT-PV V2R8M1, which is quite different from the COBRA-IV-i code, has been developed for thermalhydraulic analysis of the CANDU-6 fuel channel by the subchannel analysis method and updated so that the 43-element CANDU fuel geometry can be applied. Hence, the ASSERT code can be applied to the subchannel analysis of the RUFIC fuel bundle. The present report was prepared for the ASSERT input modelling of the RUFIC fuel bundle. Since ASSERT results depend highly on the user's input modelling, the calculation results may differ considerably among users' input models. The objective of the present report is to provide a detailed description of the background information for the input data, lending credibility to the calculation results

  2. Application of a Linear Input/Output Model to Tankless Water Heaters

    Energy Technology Data Exchange (ETDEWEB)

    Butcher T.; Schoenbauer, B.

    2011-12-31

    In this study, the applicability of a linear input/output model to gas-fired, tankless water heaters has been evaluated. This simple model assumes that the relationship between input and output, averaged over both active draw and idle periods, is linear. This approach is being applied to boilers in other studies and offers the potential to make a small number of simple measurements to obtain the model parameters. These parameters can then be used to predict performance under complex load patterns. Both condensing and non-condensing water heaters have been tested under a very wide range of load conditions. It is shown that this approach can be used to reproduce performance metrics, such as the energy factor, and can be used to evaluate the impacts of alternative draw patterns and conditions.
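
    The model itself is a one-line regression: average input rate versus average useful output rate over whole draw-plus-idle test periods. The sketch below fits it to made-up data and uses it to predict input for a new draw pattern; the slope and intercept map loosely to marginal efficiency and standby loss.

```python
import numpy as np

# Linear input/output model: qin = slope * qout + intercept, with both rates
# averaged over entire test periods (all data points are made up).
qout = np.array([0.5, 1.8, 3.2, 5.0, 7.5])   # kW, average useful output
qin = np.array([0.9, 2.3, 3.8, 5.7, 8.4])    # kW, measured gas input

slope, intercept = np.polyfit(qout, qin, 1)
eta, qloss = 1.0 / slope, intercept           # marginal efficiency, idle loss
print(f"marginal efficiency ~ {eta:.2f}, standby input ~ {qloss:.2f} kW")

# Predict the input rate for an arbitrary draw pattern from its average output:
print("predicted input at 2.4 kW average output:", slope * 2.4 + intercept, "kW")
```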

  3. Modeling and Control of a Dual-Input Isolated Full-Bridge Boost Converter

    DEFF Research Database (Denmark)

    Zhang, Zhe; Thomsen, Ole Cornelius; Andersen, Michael A. E.

    2012-01-01

    In this paper, a steady-state model, a large-signal (LS) model and an ac small-signal (SS) model for a recently proposed dual-input transformer-isolated boost converter are derived respectively by the switching flow-graph (SFG) nonlinear modeling technique. Based upon the converter’s model...

  4. Logistics flows and enterprise input-output models: aggregate and disaggregate analysis

    NARCIS (Netherlands)

    Albino, V.; Yazan, Devrim; Messeni Petruzzelli, A.; Okogbaa, O.G.

    2011-01-01

    In the present paper, we propose the use of enterprise input-output (EIO) models to describe and analyse the logistics flows considering spatial issues and related environmental effects associated with production and transportation processes. In particular, transportation is modelled as a specific

  5. Bayesian nonlinear structural FE model and seismic input identification for damage assessment of civil structures

    Science.gov (United States)

    Astroza, Rodrigo; Ebrahimian, Hamed; Li, Yong; Conte, Joel P.

    2017-09-01

    A methodology is proposed to update mechanics-based nonlinear finite element (FE) models of civil structures subjected to unknown input excitation. The approach jointly estimates the unknown time-invariant parameters of a nonlinear FE model of the structure and the unknown time histories of the input excitations, using spatially sparse output response measurements recorded during an earthquake event. The unscented Kalman filter, which circumvents the computation of FE response sensitivities with respect to the unknown model parameters and unknown input excitations by using a deterministic sampling approach, is employed as the estimation tool. The use of measurement data obtained from arrays of heterogeneous sensors, including accelerometers, displacement sensors, and strain gauges, is investigated. Based on the estimated FE model parameters and input excitations, the updated nonlinear FE model can be interrogated to detect, localize, classify, and assess damage in the structure. Numerically simulated response data of a three-dimensional 4-story 2-by-1 bay steel frame structure with six unknown model parameters, and of a three-dimensional 5-story 2-by-1 bay reinforced concrete frame structure with nine unknown model parameters, each subjected to unknown bi-directional horizontal seismic excitation, are used to illustrate and validate the proposed methodology. The results of the validation studies show the excellent performance and robustness of the proposed algorithm in jointly estimating the unknown FE model parameters and input excitations.
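    The full method pairs an unscented Kalman filter with a nonlinear FE model. As a deliberately simplified illustration of the underlying augmentation idea, the sketch below jointly estimates the state and an unknown constant input of a scalar linear system with an ordinary Kalman filter; the system, noise levels, and input value are all hypothetical:

```python
# Toy analogue of joint state/input estimation via state augmentation. The paper
# uses an unscented Kalman filter with a nonlinear FE model; this sketch uses a
# plain linear Kalman filter on a scalar system purely to show the augmentation idea.
import numpy as np

rng = np.random.default_rng(0)
a_true, u_true, q, r = 0.9, 1.5, 1e-4, 0.05      # dynamics, unknown input, noise levels

# Simulate the "measured" response (hypothetical data).
x, ys = 0.0, []
for _ in range(200):
    x = a_true * x + u_true + rng.normal(0.0, np.sqrt(q))
    ys.append(x + rng.normal(0.0, np.sqrt(r)))

# Augmented state z = [x, u]; u is modeled as an unknown constant (random walk).
F = np.array([[a_true, 1.0], [0.0, 1.0]])
H = np.array([[1.0, 0.0]])
Q = np.diag([q, 1e-8])
z, P = np.zeros(2), np.eye(2) * 10.0
for y in ys:
    z, P = F @ z, F @ P @ F.T + Q                 # predict
    S = H @ P @ H.T + r                           # innovation covariance
    K = P @ H.T / S                               # Kalman gain
    z = z + (K * (y - H @ z)).ravel()             # update
    P = (np.eye(2) - K @ H) @ P

print(f"estimated input u ~ {z[1]:.3f} (true {u_true})")
```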

  6. A non-linear dimension reduction methodology for generating data-driven stochastic input models

    Science.gov (United States)

    Ganapathysubramanian, Baskar; Zabaras, Nicholas

    2008-06-01

    Stochastic analysis of random heterogeneous media (polycrystalline materials, porous media, functionally graded materials) provides information of significance only if realistic input models of the topology and property variations are used. This paper proposes a framework to construct such input stochastic models for the topology and thermal diffusivity variations in heterogeneous media using a data-driven strategy. Given a set of microstructure realizations (input samples) generated from given statistical information about the medium topology, the framework constructs a reduced-order stochastic representation of the thermal diffusivity. This problem of constructing a low-dimensional stochastic representation of property variations is analogous to the problem of manifold learning and parametric fitting of hyper-surfaces encountered in image processing and psychology. Denote by M the set of microstructures that satisfy the given experimental statistics. A non-linear dimension reduction strategy is utilized to map M to a low-dimensional region, A. We first show that M is a compact manifold embedded in a high-dimensional input space Rn. An isometric mapping F from M to a low-dimensional, compact, connected set A⊂Rd(d≪n) is constructed. Given only a finite set of samples of the data, the methodology uses arguments from graph theory and differential geometry to construct the isometric transformation F:M→A. Asymptotic convergence of the representation of M by A is shown. This mapping F serves as an accurate, low-dimensional, data-driven representation of the property variations. The reduced-order model of the material topology and thermal diffusivity variations is subsequently used as an input in the solution of stochastic partial differential equations that describe the evolution of dependent variables. A sparse grid collocation strategy (Smolyak algorithm) is utilized to solve these stochastic equations efficiently. We showcase the methodology by constructing low
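    A minimal stand-in for the dimension reduction step, assuming scikit-learn is available: Isomap likewise builds an isometric embedding from graph shortest paths, here applied to synthetic high-dimensional samples rather than microstructure realizations (the paper constructs its own mapping F):

```python
# Minimal stand-in for the nonlinear dimension reduction step, using scikit-learn's
# Isomap (an isometric embedding built from graph shortest paths) on synthetic
# samples. This only illustrates the M (high-dimensional samples) -> A
# (low-dimensional set) idea, not the paper's own construction of F.
import numpy as np
from sklearn.manifold import Isomap

rng = np.random.default_rng(1)
t = rng.uniform(0.0, 3.0, 400)                    # hidden low-dimensional coordinate
# Embed a curved 1-D manifold in n = 50 dimensions with small noise.
basis = rng.normal(size=(2, 50))
samples = np.c_[np.cos(t), np.sin(2 * t)] @ basis + 0.01 * rng.normal(size=(400, 50))

embedding = Isomap(n_neighbors=10, n_components=2).fit_transform(samples)
print(embedding.shape)                            # (400, 2): the reduced region A
```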

  7. A non-linear dimension reduction methodology for generating data-driven stochastic input models

    International Nuclear Information System (INIS)

    Ganapathysubramanian, Baskar; Zabaras, Nicholas

    2008-01-01

    Stochastic analysis of random heterogeneous media (polycrystalline materials, porous media, functionally graded materials) provides information of significance only if realistic input models of the topology and property variations are used. This paper proposes a framework to construct such input stochastic models for the topology and thermal diffusivity variations in heterogeneous media using a data-driven strategy. Given a set of microstructure realizations (input samples) generated from given statistical information about the medium topology, the framework constructs a reduced-order stochastic representation of the thermal diffusivity. This problem of constructing a low-dimensional stochastic representation of property variations is analogous to the problem of manifold learning and parametric fitting of hyper-surfaces encountered in image processing and psychology. Denote by M the set of microstructures that satisfy the given experimental statistics. A non-linear dimension reduction strategy is utilized to map M to a low-dimensional region, A. We first show that M is a compact manifold embedded in a high-dimensional input space Rn. An isometric mapping F from M to a low-dimensional, compact, connected set A⊂Rd (d≪n) is constructed. Given only a finite set of samples of the data, the methodology uses arguments from graph theory and differential geometry to construct the isometric transformation F:M→A. Asymptotic convergence of the representation of M by A is shown. This mapping F serves as an accurate, low-dimensional, data-driven representation of the property variations. The reduced-order model of the material topology and thermal diffusivity variations is subsequently used as an input in the solution of stochastic partial differential equations that describe the evolution of dependent variables. A sparse grid collocation strategy (Smolyak algorithm) is utilized to solve these stochastic equations efficiently. We showcase the methodology

  8. The MARINA model (Model to Assess River Inputs of Nutrients to seAs)

    NARCIS (Netherlands)

    Strokal, Maryna; Kroeze, Carolien; Wang, Mengru; Bai, Zhaohai; Ma, Lin

    2016-01-01

    Chinese agriculture has been developing fast towards industrial food production systems that discharge nutrient-rich wastewater into rivers. As a result, nutrient export by rivers has been increasing, resulting in coastal water pollution. We developed a Model to Assess River Inputs of Nutrients

  9. DNA sequence modeling based on context trees

    NARCIS (Netherlands)

    Kusters, C.J.; Ignatenko, T.; Roland, J.; Horlin, F.

    2015-01-01

    Genomic sequences contain instructions for protein and cell production. Therefore understanding and identification of biologically and functionally meaningful patterns in DNA sequences is of paramount importance. Modeling of DNA sequences in its turn can help to better understand and identify such

  10. A novel wavelet sequence based on deep bidirectional LSTM network model for ECG signal classification.

    Science.gov (United States)

    Yildirim, Özal

    2018-05-01

    Long short-term memory networks (LSTMs), which have recently emerged in sequential data analysis, are the most widely used type of recurrent neural network (RNN) architecture. Progress on the topic of deep learning includes successful adaptations of deep versions of these architectures. In this study, a new model for deep bidirectional LSTM network-based wavelet sequences, called DBLSTM-WS, was proposed for classifying electrocardiogram (ECG) signals. For this purpose, a new wavelet-based layer is implemented to generate ECG signal sequences. The ECG signals were decomposed into frequency sub-bands at different scales in this layer. These sub-bands are used as sequences for the input of LSTM networks. New network models that include unidirectional (ULSTM) and bidirectional (BLSTM) structures are designed for performance comparisons. Experimental studies have been performed for five different types of heartbeats obtained from the MIT-BIH arrhythmia database. These five types are Normal Sinus Rhythm (NSR), Ventricular Premature Contraction (VPC), Paced Beat (PB), Left Bundle Branch Block (LBBB), and Right Bundle Branch Block (RBBB). The results show that the DBLSTM-WS model gives a high recognition performance of 99.39%. It has been observed that the wavelet-based layer proposed in the study significantly improves the recognition performance of conventional networks. This proposed network structure is an important approach that can be applied to similar signal processing problems. Copyright © 2018 Elsevier Ltd. All rights reserved.
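    A sketch of the pipeline shape only, assuming pywt and TensorFlow are available: each (here random) segment is decomposed into wavelet sub-bands that are stacked as a sequence and fed to a bidirectional LSTM. The layer sizes are placeholders, not the paper's DBLSTM-WS architecture:

```python
# Sketch of the pipeline shape: decompose each ECG segment into wavelet sub-bands
# (pywt), stack them as a sequence, and classify with a bidirectional LSTM
# (tf.keras). The five classes follow the abstract; the layer sizes are
# hypothetical and this is not the paper's exact DBLSTM-WS architecture.
import numpy as np
import pywt
import tensorflow as tf

def to_subband_sequence(beat, wavelet="db4", level=3):
    # wavedec returns [cA_L, cD_L, ..., cD_1]; pad to equal length and stack.
    coeffs = pywt.wavedec(beat, wavelet, level=level)
    width = max(len(c) for c in coeffs)
    return np.stack([np.pad(c, (0, width - len(c))) for c in coeffs])

beats = np.random.randn(32, 256)                  # hypothetical ECG segments
x = np.stack([to_subband_sequence(b) for b in beats])

model = tf.keras.Sequential([
    tf.keras.Input(shape=x.shape[1:]),
    tf.keras.layers.Bidirectional(tf.keras.layers.LSTM(64)),
    tf.keras.layers.Dense(5, activation="softmax"),  # NSR, VPC, PB, LBBB, RBBB
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
model.summary()
```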

  11. Chaos game representation (CGR)-walk model for DNA sequences

    International Nuclear Information System (INIS)

    Jie, Gao; Zhen-Yuan, Xu

    2009-01-01

    Chaos game representation (CGR) is an iterative mapping technique that processes sequences of units, such as nucleotides in a DNA sequence or amino acids in a protein, in order to determine the coordinates of their positions in a continuous space. This distribution of positions has two features: it is unique, and the source sequence can be recovered from the coordinates, so that the distance between positions may serve as a measure of similarity between the corresponding sequences. A CGR-walk model is proposed based on CGR coordinates for DNA sequences. The CGR coordinates are converted into a time series, and a long-memory ARFIMA (p, d, q) model, where ARFIMA stands for autoregressive fractionally integrated moving average, is introduced into the DNA sequence analysis. This model is applied to simulating real CGR-walk sequence data of ten genomic sequences. Remarkably long-range correlations are uncovered in the data, and the results from these models are fitted reasonably well by the ARFIMA (p, d, q) model. (cross-disciplinary physics and related areas of science and technology)
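    The CGR coordinates themselves are simple to compute: each nucleotide moves the current point halfway toward its assigned corner of the unit square. A minimal sketch (the corner assignment is one common convention):

```python
# Minimal CGR coordinate computation: each nucleotide maps the current point
# halfway toward its assigned corner of the unit square, so the final point
# encodes (and can reconstruct) the whole prefix of the sequence.
import numpy as np

CORNERS = {"A": (0.0, 0.0), "C": (0.0, 1.0), "G": (1.0, 1.0), "T": (1.0, 0.0)}

def cgr_walk(seq):
    points, xy = [], np.array([0.5, 0.5])
    for base in seq:
        xy = (xy + np.asarray(CORNERS[base])) / 2.0  # move halfway to the corner
        points.append(xy.copy())
    return np.array(points)

walk = cgr_walk("ATGCGCTA")
print(walk)   # one (x, y) pair per nucleotide; a time series for ARFIMA-style analysis
```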

  12. Human Inferences about Sequences: A Minimal Transition Probability Model.

    Directory of Open Access Journals (Sweden)

    Florent Meyniel

    2016-12-01

    The brain constantly infers the causes of the inputs it receives and uses these inferences to generate statistical expectations about future observations. Experimental evidence for these expectations and their violations includes explicit reports, sequential effects on reaction times, and mismatch or surprise signals recorded in electrophysiology and functional MRI. Here, we explore the hypothesis that the brain acts as a near-optimal inference device that constantly attempts to infer the time-varying matrix of transition probabilities between the stimuli it receives, even when those stimuli are in fact fully unpredictable. This parsimonious Bayesian model, with a single free parameter, accounts for a broad range of findings on surprise signals, sequential effects and the perception of randomness. Notably, it explains the pervasive asymmetry between repetitions and alternations encountered in those studies. Our analysis suggests that a neural machinery for inferring transition probabilities lies at the core of human sequence knowledge.
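    A leaky-counting caricature of the model's core idea, not the paper's full Bayesian inference: transition counts are exponentially discounted so the estimated transition probabilities stay time-varying, with the forgetting rate loosely playing the role of the single free parameter:

```python
# Hedged approximation of the idea: track time-varying transition probabilities
# between two stimuli with exponentially discounted counts. This is a
# leaky-counting caricature, not the paper's full Bayesian inference.
import numpy as np

def leaky_transition_estimates(stimuli, leak=0.05, prior=1.0):
    counts = np.full((2, 2), prior)               # Laplace-style prior counts
    probs = []
    for prev, nxt in zip(stimuli, stimuli[1:]):
        counts *= (1.0 - leak)                    # forget old evidence
        counts[prev, nxt] += 1.0
        probs.append(counts[prev] / counts[prev].sum())
    return np.array(probs)                        # p(next | prev) after each step

rng = np.random.default_rng(2)
seq = rng.integers(0, 2, 500)                     # fully unpredictable input
print(leaky_transition_estimates(seq)[-1])        # estimates keep fluctuating
```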

  13. A sequence-dependent rigid-base model of DNA

    Science.gov (United States)

    Gonzalez, O.; Petkevičiutė, D.; Maddocks, J. H.

    2013-02-01

    A novel hierarchy of coarse-grain, sequence-dependent, rigid-base models of B-form DNA in solution is introduced. The hierarchy depends on both the assumed range of energetic couplings, and the extent of sequence dependence of the model parameters. A significant feature of the models is that they exhibit the phenomenon of frustration: each base cannot simultaneously minimize the energy of all of its interactions. As a consequence, an arbitrary DNA oligomer has an intrinsic or pre-existing stress, with the level of this frustration dependent on the particular sequence of the oligomer. Attention is focussed on the particular model in the hierarchy that has nearest-neighbor interactions and dimer sequence dependence of the model parameters. For a Gaussian version of this model, a complete coarse-grain parameter set is estimated. The parameterized model allows, for an oligomer of arbitrary length and sequence, a simple and explicit construction of an approximation to the configuration-space equilibrium probability density function for the oligomer in solution. The training set leading to the coarse-grain parameter set is itself extracted from a recent and extensive database of a large number of independent, atomic-resolution molecular dynamics (MD) simulations of short DNA oligomers immersed in explicit solvent. The Kullback-Leibler divergence between probability density functions is used to make several quantitative assessments of our nearest-neighbor, dimer-dependent model, which is compared against others in the hierarchy to assess various assumptions pertaining both to the locality of the energetic couplings and to the level of sequence dependence of its parameters. It is also compared directly against all-atom MD simulation to assess its predictive capabilities. The results show that the nearest-neighbor, dimer-dependent model can successfully resolve sequence effects both within and between oligomers. For example, due to the presence of frustration, the model can
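    For the Gaussian version, the equilibrium density over the internal coordinates is multivariate normal, with the intrinsic configuration as mean and the inverse stiffness as covariance. A toy sketch with random placeholder parameters (not the paper's trained parameter set):

```python
# Hedged sketch of the "Gaussian version" idea: with a ground-state configuration
# w_hat and a stiffness (precision) matrix K over the internal rigid-base
# coordinates, the equilibrium density is multivariate normal. w_hat and K here
# are tiny random placeholders, not parameters from the paper's training set.
import numpy as np
from scipy.stats import multivariate_normal

rng = np.random.default_rng(7)
n = 6                                             # internal coordinates of a short oligomer
w_hat = rng.normal(size=n)                        # intrinsic (frustrated) ground state
A = rng.normal(size=(n, n))
K = A @ A.T + n * np.eye(n)                       # positive definite stiffness matrix

density = multivariate_normal(mean=w_hat, cov=np.linalg.inv(K))
print(density.logpdf(w_hat + 0.1 * rng.normal(size=n)))   # log-density of a nearby state
```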

  14. A sequence-dependent rigid-base model of DNA.

    Science.gov (United States)

    Gonzalez, O; Petkevičiūtė, D; Maddocks, J H

    2013-02-07

    A novel hierarchy of coarse-grain, sequence-dependent, rigid-base models of B-form DNA in solution is introduced. The hierarchy depends on both the assumed range of energetic couplings, and the extent of sequence dependence of the model parameters. A significant feature of the models is that they exhibit the phenomenon of frustration: each base cannot simultaneously minimize the energy of all of its interactions. As a consequence, an arbitrary DNA oligomer has an intrinsic or pre-existing stress, with the level of this frustration dependent on the particular sequence of the oligomer. Attention is focussed on the particular model in the hierarchy that has nearest-neighbor interactions and dimer sequence dependence of the model parameters. For a Gaussian version of this model, a complete coarse-grain parameter set is estimated. The parameterized model allows, for an oligomer of arbitrary length and sequence, a simple and explicit construction of an approximation to the configuration-space equilibrium probability density function for the oligomer in solution. The training set leading to the coarse-grain parameter set is itself extracted from a recent and extensive database of a large number of independent, atomic-resolution molecular dynamics (MD) simulations of short DNA oligomers immersed in explicit solvent. The Kullback-Leibler divergence between probability density functions is used to make several quantitative assessments of our nearest-neighbor, dimer-dependent model, which is compared against others in the hierarchy to assess various assumptions pertaining both to the locality of the energetic couplings and to the level of sequence dependence of its parameters. It is also compared directly against all-atom MD simulation to assess its predictive capabilities. The results show that the nearest-neighbor, dimer-dependent model can successfully resolve sequence effects both within and between oligomers. For example, due to the presence of frustration, the model can

  15. Calibration of uncertain inputs to computer models using experimentally measured quantities and the BMARS emulator

    International Nuclear Information System (INIS)

    Stripling, H.F.; McClarren, R.G.; Kuranz, C.C.; Grosskopf, M.J.; Rutter, E.; Torralva, B.R.

    2011-01-01

    We present a method for calibrating the uncertain inputs to a computer model using available experimental data. The goal of the procedure is to produce posterior distributions of the uncertain inputs such that when samples from the posteriors are used as inputs to future model runs, the model is more likely to replicate (or predict) the experimental response. The calibration is performed by sampling the space of the uncertain inputs, using the computer model (or, more likely, an emulator for the computer model) to assign weights to the samples, and applying the weights to produce the posterior distributions and generate predictions of new experiments within confidence bounds. The method is similar to the Markov chain Monte Carlo (MCMC) calibration methods with independent sampling with the exception that we generate samples beforehand and replace the candidate acceptance routine with a weighting scheme. We apply our method to the calibration of a Hyades 2D model of laser energy deposition in beryllium. We employ a Bayesian Multivariate Adaptive Regression Splines (BMARS) emulator as a surrogate for Hyades 2D. We treat a range of uncertainties in our system, including uncertainties in the experimental inputs, experimental measurement error, and systematic experimental timing errors. The results of the calibration are posterior distributions that both agree with intuition and improve the accuracy and decrease the uncertainty in experimental predictions. (author)
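    The weighting scheme can be sketched in a few lines: draw prior samples of an uncertain input, weight them by a Gaussian likelihood evaluated through a cheap surrogate, and resample. The one-line surrogate and all numbers below are hypothetical stand-ins for the BMARS emulator of Hyades 2D:

```python
# Minimal sketch of the weighting scheme: draw prior samples of an uncertain
# input, weight each by a Gaussian likelihood of matching the measured response
# through a cheap emulator, and resample to get the posterior. The emulator and
# all numbers are hypothetical stand-ins for the paper's BMARS surrogate.
import numpy as np

rng = np.random.default_rng(3)
emulator = lambda theta: 2.0 * theta + 0.5        # stand-in for the surrogate model

theta = rng.normal(1.0, 1.0, 20000)               # prior samples of the uncertain input
y_obs, sigma = 3.1, 0.2                           # measured response and its error
w = np.exp(-0.5 * ((emulator(theta) - y_obs) / sigma) ** 2)
w /= w.sum()

posterior = rng.choice(theta, size=5000, p=w)     # weighted resampling
print(posterior.mean(), posterior.std())          # concentrates near theta = 1.3
```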

  17. VSC Input-Admittance Modeling and Analysis Above the Nyquist Frequency for Passivity-Based Stability Assessment

    DEFF Research Database (Denmark)

    Harnefors, Lennart; Finger, Raphael; Wang, Xiongfei

    2017-01-01

    The interconnection stability of a gridconnected voltage-source converter (VSC) can be assessed via the dissipative properties of its input admittance. In this paper, the modeling of the current control loop is revisited with the aim to improve the accuracy of the input-admittance model above...

  18. COGEDIF - automatic TORT and DORT input generation from MORSE combinatorial geometry models

    International Nuclear Information System (INIS)

    Castelli, R.A.; Barnett, D.A.

    1992-01-01

    COGEDIF is an interactive utility which was developed to automate the preparation of two- and three-dimensional geometrical inputs for the ORNL TORT and DORT discrete ordinates programs from complex three-dimensional models described using the MORSE combinatorial geometry input description. The program creates either continuous or disjoint mesh input based upon the intersections of user-defined meshing planes and the MORSE body definitions. The composition overlay of the combinatorial geometry is used to create the composition mapping of the discretized geometry based upon the composition found at the centroid of each of the mesh cells. This program simplifies the process of using discrete orthogonal mesh cells to represent non-orthogonal geometries in large models which require mesh sizes of the order of a million cells or more. The program was specifically written to take advantage of the new TORT disjoint mesh option which was developed at ORNL.
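    The centroid rule is easy to illustrate: given meshing planes and a point-wise composition query (standing in for the MORSE combinatorial-geometry lookup), each cell receives the composition found at its centroid. A hypothetical two-dimensional sketch:

```python
# Sketch of the centroid rule: given user-defined meshing planes and a function
# returning the composition at any point (standing in for the MORSE
# combinatorial-geometry lookup), assign each mesh cell the composition found
# at its centroid. composition_at is a hypothetical placeholder.
import numpy as np

def composition_at(x, y):                         # placeholder geometry query
    return 1 if x * x + y * y < 4.0 else 2        # e.g. fuel inside a circle, else moderator

x_planes = np.linspace(-5.0, 5.0, 21)             # user-defined meshing planes
y_planes = np.linspace(-5.0, 5.0, 21)

xc = 0.5 * (x_planes[:-1] + x_planes[1:])         # cell centroids
yc = 0.5 * (y_planes[:-1] + y_planes[1:])
comp_map = np.array([[composition_at(x, y) for x in xc] for y in yc])
print(comp_map.shape)                             # one composition index per mesh cell
```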

  19. Framework for Modelling Multiple Input Complex Aggregations for Interactive Installations

    DEFF Research Database (Denmark)

    Padfield, Nicolas; Andreasen, Troels

    2012-01-01

    on fuzzy logic and provides a method for variably balancing interaction and user input with the intention of the artist or director. An experimental design is presented, demonstrating an intuitive interface for parametric modelling of a complex aggregation function. The aggregation function unifies...

  20. Input Uncertainty and its Implications on Parameter Assessment in Hydrologic and Hydroclimatic Modelling Studies

    Science.gov (United States)

    Chowdhury, S.; Sharma, A.

    2005-12-01

    Hydrological model inputs are often derived from measurements at point locations taken at discrete time steps. The nature of uncertainty associated with such inputs is thus a function of the quality and number of measurements available in time. A change in these characteristics (such as a change in the number of rain-gauge inputs used to derive spatially averaged rainfall) results in inhomogeneity in the associated distributional profile. Ignoring such uncertainty can lead to models that aim to simulate based on the observed input variable instead of the true measurement, resulting in a biased representation of the underlying system dynamics as well as an increase in both bias and the predictive uncertainty in simulations. This is especially true of cases where the nature of uncertainty likely in the future is significantly different to that in the past. Possible examples include situations where the accuracy of the catchment averaged rainfall has increased substantially due to an increase in the rain-gauge density, or accuracy of climatic observations (such as sea surface temperatures) increased due to the use of more accurate remote sensing technologies. We introduce here a method to ascertain the true value of parameters in the presence of additive uncertainty in model inputs. This method, known as SIMulation EXtrapolation (SIMEX, [Cook, 1994]) operates on the basis of an empirical relationship between parameters and the level of additive input noise (or uncertainty). The method starts with generating a series of alternate realisations of model inputs by artificially adding white noise in increasing multiples of the known error variance. The alternate realisations lead to alternate sets of parameters that are increasingly biased with respect to the truth due to the increased variability in the inputs. Once several such realisations have been drawn, one is able to formulate an empirical relationship between the parameter values and the level of additive noise
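    A minimal SIMEX sketch on synthetic data: the slope of a linear regression is attenuated by input noise, so re-fitting under added noise multiples and extrapolating the trend back to zero total noise (lambda = -1) recovers an approximately unbiased estimate:

```python
# Minimal SIMEX sketch: re-fit a model parameter with increasing multiples of
# artificial input noise, then extrapolate the parameter-vs-noise trend back to
# zero total noise. The "model" here is a simple linear regression whose slope
# is attenuated by input noise; all data are synthetic.
import numpy as np

rng = np.random.default_rng(4)
true_x = rng.normal(0.0, 1.0, 2000)
y = 2.0 * true_x + rng.normal(0.0, 0.5, 2000)     # truth: slope = 2
sigma2 = 0.5                                      # known variance of the input error
x_obs = true_x + rng.normal(0.0, np.sqrt(sigma2), 2000)

lams = np.array([0.0, 0.5, 1.0, 1.5, 2.0])        # added noise in multiples of sigma2
slopes = []
for lam in lams:
    fits = [np.polyfit(x_obs + rng.normal(0.0, np.sqrt(lam * sigma2), 2000), y, 1)[0]
            for _ in range(50)]                   # average over noise realizations
    slopes.append(np.mean(fits))

# Quadratic extrapolation of slope(lambda) back to lambda = -1 (zero total noise).
coef = np.polyfit(lams, slopes, 2)
print("naive:", slopes[0], "SIMEX:", np.polyval(coef, -1.0))
```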

  1. Non parametric, self organizing, scalable modeling of spatiotemporal inputs: the sign language paradigm.

    Science.gov (United States)

    Caridakis, G; Karpouzis, K; Drosopoulos, A; Kollias, S

    2012-12-01

    Modeling and recognizing spatiotemporal, as opposed to static input, is a challenging task since it incorporates input dynamics as part of the problem. The vast majority of existing methods tackle the problem as an extension of the static counterpart, using dynamics, such as input derivatives, at feature level and adopting artificial intelligence and machine learning techniques originally designed for solving problems that do not specifically address the temporal aspect. The proposed approach deals with temporal and spatial aspects of the spatiotemporal domain in a discriminative as well as coupling manner. Self Organizing Maps (SOM) model the spatial aspect of the problem and Markov models its temporal counterpart. Incorporation of adjacency, both in training and classification, enhances the overall architecture with robustness and adaptability. The proposed scheme is validated both theoretically, through an error propagation study, and experimentally, on the recognition of individual signs, performed by different, native Greek Sign Language users. Results illustrate the architecture's superiority when compared to Hidden Markov Model techniques and variations both in terms of classification performance and computational cost. Copyright © 2012 Elsevier Ltd. All rights reserved.

  2. System Identification for Nonlinear FOPDT Model with Input-Dependent Dead-Time

    DEFF Research Database (Denmark)

    Sun, Zhen; Yang, Zhenyu

    2011-01-01

    An on-line iterative system identification method for a class of nonlinear FOPDT systems is proposed in the paper. The considered nonlinear FOPDT model is an extension of the standard FOPDT model in the sense that its dead time depends on the input signal and the other parameters are time-dependent...

  3. Harmonic-suppressed quadrature-input frequency divider for OFDM systems

    International Nuclear Information System (INIS)

    Fu Haipeng; Ren Junyan; Li Wei; Li Ning

    2011-01-01

    A fully balanced harmonic-suppressed quadrature-input frequency divider is proposed. The frequency divider improves the quadrature phase accuracy at the output by using both input I/Q signals. Compared with conventional dividers, the circuit achieves an output I/Q phase sequence that is independent of the input I/Q phase sequence. Moreover, the third harmonic is effectively suppressed by employing a double degeneration technique. The design is fabricated in TSMC 0.13-μm CMOS and operated at 1.2 V. While locked at 8.5 GHz, the proposed divider measures a maximum third harmonic rejection of 45 dB and a phase noise of −124 dBc/Hz at a 10 MHz offset. The circuit achieves a locking range of 15% while consuming a total current of 4.5 mA. (semiconductor integrated circuits)

  4. Using Random Forests to Select Optimal Input Variables for Short-Term Wind Speed Forecasting Models

    Directory of Open Access Journals (Sweden)

    Hui Wang

    2017-10-01

    Achieving relatively high-accuracy short-term wind speed forecasts is a precondition for the construction and grid-connected operation of wind power forecasting systems for wind farms. Currently, most research focuses on the structure of forecasting models and does not consider the selection of input variables, which can have a significant impact on forecasting performance. This paper presents an input variable selection method for wind speed forecasting models. Candidate input variables for various lead periods are selected, and random forests (RF) are employed to evaluate the importance of all variables as features. The feature subset with the best evaluation performance is selected as the optimal feature set. Then, a kernel-based extreme learning machine is constructed to evaluate the performance of the input variable selection based on RF. The results of the case study show that, by removing uncorrelated and redundant features, RF effectively extracts the most strongly correlated set of features from the candidate input variables. By finding the optimal feature combination to represent the original information, RF simplifies the structure of the wind speed forecasting model, shortens the required training time, and substantially improves the model's accuracy and generalization ability, demonstrating that the input variables selected by RF are effective.
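    The selection step itself is compact, assuming scikit-learn: rank candidate (e.g., lagged) inputs by random-forest importance and keep the strongest subset. The synthetic data below stand in for real wind-speed candidates:

```python
# Minimal illustration of the selection step: rank candidate lagged inputs by
# random-forest importance and keep the strongest ones. Data are synthetic;
# the paper pairs the selected subset with a kernel extreme learning machine.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(5)
X = rng.normal(size=(1500, 8))                    # candidate inputs (e.g. lagged speeds)
y = 1.5 * X[:, 0] - 0.8 * X[:, 3] + 0.1 * rng.normal(size=1500)  # only two matter

rf = RandomForestRegressor(n_estimators=200, random_state=0).fit(X, y)
ranking = np.argsort(rf.feature_importances_)[::-1]
print("importance ranking:", ranking)             # features 0 and 3 should lead
keep = ranking[:2]                                # optimal subset for the forecaster
```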

  5. Development of the MARS input model for Ulchin 1/2 transient analyzer

    International Nuclear Information System (INIS)

    Jeong, J. J.; Kim, K. D.; Lee, S. W.; Lee, Y. J.; Chung, B. D.; Hwang, M.

    2003-03-01

    KAERI has been developing an NSSS transient analyzer based on best-estimate codes for the Ulchin 1/2 plants. The MARS and RETRAN codes are used as the best-estimate codes for the NSSS transient analyzer. Of the two codes, the MARS code is to be used for realistic analysis of small- and large-break loss-of-coolant accidents with break sizes greater than 2 inches in diameter. This report includes the input model requirements and the calculation note for the Ulchin 1/2 MARS input data generation (see the Appendix). In order to confirm the validity of the input data, we performed calculations for a steady state at the 100% power operating condition and for a double-ended cold-leg break LOCA. The results of the steady-state calculation agree well with the design data. The results of the LOCA calculation appear reasonable and consistent with those of other best-estimate calculations. Therefore, the MARS input data can be used as a base input deck for the MARS transient analyzer for Ulchin 1/2.

  6. Development of the MARS input model for Ulchin 3/4 transient analyzer

    International Nuclear Information System (INIS)

    Jeong, J. J.; Kim, K. D.; Lee, S. W.; Lee, Y. J.; Lee, W. J.; Chung, B. D.; Hwang, M. G.

    2003-12-01

    KAERI has been developing an NSSS transient analyzer based on best-estimate codes. The MARS and RETRAN codes are adopted as the best-estimate codes for the NSSS transient analyzer. Of these two codes, the MARS code is to be used for realistic analysis of small- and large-break loss-of-coolant accidents with break sizes greater than 2 inches in diameter. This report includes the MARS input model requirements and the calculation note for the MARS input data generation (see the Appendix) for the Ulchin 3/4 plant analyzer. In order to confirm the validity of the input data, we performed calculations for a steady state at the 100% power operating condition and for a double-ended cold-leg break LOCA. The results of the steady-state calculation agree well with the design data. The results of the LOCA calculation appear reasonable and consistent with those of other best-estimate calculations. Therefore, the MARS input data can be used as a base input deck for the MARS transient analyzer for Ulchin 3/4.

  7. Sensitivity Analysis of Input Parameters for a Dynamic Food Chain Model DYNACON

    International Nuclear Information System (INIS)

    Hwang, Won Tae; Lee, Geun Chang; Han, Moon Hee; Cho, Gyu Seong

    2000-01-01

    The sensitivity analysis of input parameters for the dynamic food chain model DYNACON was conducted as a function of deposition data for the long-lived radionuclides (137Cs, 90Sr). Also, the influence of input parameters on the short- and long-term contamination of selected foodstuffs (cereals, leafy vegetables, milk) was investigated. The input parameters were sampled using the LHS technique, and their sensitivity indices were represented as partial rank correlation coefficients (PRCC). The sensitivity index was strongly dependent on the contamination period as well as on the deposition data. In the case of deposition during the growing stages of plants, the input parameters associated with contamination by foliar absorption were relatively important for long-term as well as short-term contamination. They were also important for short-term contamination in the case of deposition during the non-growing stages. For long-term contamination, the influence of input parameters associated with foliar absorption decreased, while the influence of input parameters associated with root uptake increased. These phenomena were more remarkable for deposition during the non-growing stages than during the growing stages, and for 90Sr deposition than for 137Cs deposition. In the case of deposition during the growing stages of pasture, the input parameters associated with the characteristics of cattle, such as the feed-milk transfer factor and the daily intake rate of cattle, were relatively important for the contamination of milk.
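    A sketch of the sampling-and-ranking machinery, assuming SciPy: Latin hypercube samples of the inputs, a toy model in place of DYNACON, and PRCCs computed by correlating rank residuals after regressing out the other inputs:

```python
# Sketch of the sampling-and-ranking step: Latin hypercube samples of input
# parameters, a toy model in place of DYNACON, and partial rank correlation
# coefficients (PRCC) computed by correlating rank residuals. The model and
# parameter roles are hypothetical stand-ins.
import numpy as np
from scipy.stats import qmc, rankdata

rng = np.random.default_rng(6)
X = qmc.LatinHypercube(d=3, seed=0).random(500)   # 3 input parameters in [0, 1]
y = 2.0 * X[:, 0] + 0.3 * X[:, 1] ** 2 + 0.05 * rng.normal(size=500)  # toy model

R = np.column_stack([rankdata(c) for c in X.T])   # rank-transform the inputs
ry = rankdata(y)

def prcc(j):
    # Partial out the other inputs, then correlate the rank residuals.
    others = np.c_[np.ones(len(ry)), np.delete(R, j, axis=1)]
    res_x = R[:, j] - others @ np.linalg.lstsq(others, R[:, j], rcond=None)[0]
    res_y = ry - others @ np.linalg.lstsq(others, ry, rcond=None)[0]
    return np.corrcoef(res_x, res_y)[0, 1]

print([round(prcc(j), 3) for j in range(3)])      # parameter 0 dominates
```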

  8. Modeling of heat transfer into a heat pipe for a localized heat input zone

    International Nuclear Information System (INIS)

    Rosenfeld, J.H.

    1987-01-01

    A general model is presented for heat transfer into a heat pipe using a localized heat input. Conduction in the wall of the heat pipe and boiling in the interior structure are treated simultaneously. The model is derived for circumferential heat transfer in a cylindrical heat pipe evaporator and for radial heat transfer in a circular disk with boiling from the interior surface. A comparison is made with data for a localized heat input zone. Agreement between the model and the data is good. This model can be used for design purposes if a boiling correlation is available. The model can be extended to provide improved predictions of heat pipe performance.

  9. Input preshaping with frequency domain information for flexible-link manipulator control

    Science.gov (United States)

    Tzes, Anthony; Englehart, Matthew J.; Yurkovich, Stephen

    1989-01-01

    The application of an input preshaping scheme to flexible manipulators is considered. The resulting control corresponds to a feedforward term that convolves in real-time the desired reference input with a sequence of impulses and produces a vibration free output. The robustness of the algorithm with respect to injected disturbances and modal frequency variations is not satisfactory and can be improved by convolving the input with a longer sequence of impulses. The incorporation of the preshaping scheme to a closed-loop plant, using acceleration feedback, offers satisfactory disturbance rejection due to feedback and cancellation of the flexible mode effects due to the preshaping. A frequency domain identification scheme is used to estimate the modal frequencies on-line and subsequently update the spacing between the impulses. The combined adaptive input preshaping scheme provides the fastest possible slew that results in a vibration free output.
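    The simplest such preshaper is the two-impulse zero-vibration (ZV) shaper: impulse amplitudes set by the modal damping and spacing equal to half the damped period, convolved with the reference. The modal values below are hypothetical; the adaptive scheme would update them on-line:

```python
# Minimal zero-vibration (ZV) preshaping sketch: two impulses whose spacing is
# half the damped period of the flexible mode; convolving any reference with
# them cancels the residual vibration at that mode. Frequency and damping are
# hypothetical placeholders.
import numpy as np

f, zeta, dt = 2.0, 0.05, 0.001                    # modal frequency (Hz), damping, step
K = np.exp(-zeta * np.pi / np.sqrt(1.0 - zeta**2))
t2 = 0.5 / (f * np.sqrt(1.0 - zeta**2))           # half the damped period (s)

shaper = np.zeros(int(round(t2 / dt)) + 1)        # impulse sequence [A1, 0, ..., A2]
shaper[0], shaper[-1] = 1.0 / (1.0 + K), K / (1.0 + K)

t = np.arange(0, 2.0, dt)
reference = np.clip(t, 0.0, 1.0)                  # 1-second ramp slew command
shaped = np.convolve(reference, shaper)[: len(t)] # vibration-free command
print(shaped[-1])                                 # settles at the same final value
```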

  10. Input-Output model for waste management plan for Nigeria | Njoku ...

    African Journals Online (AJOL)

    An input-output model for a waste management plan has been developed for Nigeria based on the Leontief concept and life cycle analysis. Waste was considered as a source of pollution, a loss of resources, and a source of greenhouse gas emissions from bio-chemical treatment and decomposition, with negative impact on the ...
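    The Leontief core that such a plan builds on is a one-liner: total sectoral output solves x = Ax + d, and waste coefficients convert output into generated waste. A two-sector sketch with placeholder matrices:

```python
# Minimal Leontief input-output calculation of the kind such a plan builds on:
# total output x solves x = Ax + d, and a waste-coefficient row converts output
# to generated waste. The matrices are hypothetical two-sector placeholders.
import numpy as np

A = np.array([[0.2, 0.3],                         # inter-sector technical coefficients
              [0.1, 0.4]])
d = np.array([100.0, 50.0])                       # final demand per sector

x = np.linalg.solve(np.eye(2) - A, d)             # total output: x = (I - A)^-1 d
w = np.array([0.05, 0.12]) @ x                    # waste generated per unit output
print(f"output: {x}, total waste: {w:.1f}")
```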

  11. DIMITRI 1.0: Description and application of a dynamic input-output model

    NARCIS (Netherlands)

    Wilting HC; Blom WF; Thomas R; Idenburg AM; LAE

    2001-01-01

    DIMITRI, the Dynamic Input-Output Model to study the Impacts of Technology Related Innovations, was developed in the framework of the RIVM Environment and Economy project to answer questions about interrelationships between economy, technology and the environment. DIMITRI, a meso-economic model,

  12. Hidden Markov event sequence models: toward unsupervised functional MRI brain mapping.

    Science.gov (United States)

    Faisan, Sylvain; Thoraval, Laurent; Armspach, Jean-Paul; Foucher, Jack R; Metz-Lutz, Marie-Noëlle; Heitz, Fabrice

    2005-01-01

    Most methods used in functional MRI (fMRI) brain mapping require restrictive assumptions about the shape and timing of the fMRI signal in activated voxels. Consequently, fMRI data may be partially and misleadingly characterized, leading to suboptimal or invalid inference. To limit these assumptions and to capture the broad range of possible activation patterns, a novel statistical fMRI brain mapping method is proposed. It relies on hidden semi-Markov event sequence models (HSMESMs), a special class of hidden Markov models (HMMs) dedicated to the modeling and analysis of event-based random processes. Activation detection is formulated in terms of time coupling between (1) the observed sequence of hemodynamic response onset (HRO) events detected in the voxel's fMRI signal and (2) the "hidden" sequence of task-induced neural activation onset (NAO) events underlying the HROs. Both event sequences are modeled within a single HSMESM. The resulting brain activation model is trained to automatically detect neural activity embedded in the input fMRI data set under analysis. The data sets considered in this article are threefold: synthetic epoch-related, real epoch-related (auditory lexical processing task), and real event-related (oddball detection task) fMRI data sets. Synthetic data: Activation detection results demonstrate the superiority of the HSMESM mapping method with respect to a standard implementation of the statistical parametric mapping (SPM) approach. They are also very close, sometimes equivalent, to those obtained with an "ideal" implementation of SPM in which the activation patterns synthesized are reused for analysis. The HSMESM method appears clearly insensitive to timing variations of the hemodynamic response and exhibits low sensitivity to fluctuations of its shape (unsustained activation during task). Real epoch-related data: HSMESM activation detection results compete with those obtained with SPM, without requiring any prior definition of the expected

  13. Multi input single output model predictive control of non-linear bio-polymerization process

    Energy Technology Data Exchange (ETDEWEB)

    Arumugasamy, Senthil Kumar; Ahmad, Z. [School of Chemical Engineering, Univerisiti Sains Malaysia, Engineering Campus, Seri Ampangan,14300 Nibong Tebal, Seberang Perai Selatan, Pulau Pinang (Malaysia)

    2015-05-15

    This paper focuses on multi-input single-output (MISO) model predictive control of a bio-polymerization process, in which a mechanistic model is developed and linked with a feedforward neural network model to obtain a hybrid model (Mechanistic-FANN) of the lipase-catalyzed ring-opening polymerization of ε-caprolactone (ε-CL) for poly(ε-caprolactone) production. In this research, a state-space model was used, in which the inputs were the reactor temperatures and reactor impeller speeds and the outputs were the molecular weight of the polymer (Mn) and the polymer polydispersity index. The MISO state-space model was created using the System Identification Toolbox of Matlab™. This state-space model is used in the MISO MPC. Model predictive control (MPC) has been applied to predict, and consequently control, the molecular weight of the biopolymer. The results show that the MPC is able to track the reference trajectory and gives optimal movement of the manipulated variables.

  14. Model morphing and sequence assignment after molecular replacement

    Energy Technology Data Exchange (ETDEWEB)

    Terwilliger, Thomas C., E-mail: terwilliger@lanl.gov [Los Alamos National Laboratory, Mail Stop M888, Los Alamos, NM 87545 (United States); Read, Randy J. [University of Cambridge, Cambridge Institute for Medical Research, Cambridge CB2 0XY (United Kingdom); Adams, Paul D. [Lawrence Berkeley National Laboratory, One Cyclotron Road, Bldg 64R0121, Berkeley, CA 94720 (United States); Brunger, Axel T. [Stanford University, 318 Campus Drive West, Stanford, CA 94305 (United States); Afonine, Pavel V. [Lawrence Berkeley National Laboratory, One Cyclotron Road, Bldg 64R0121, Berkeley, CA 94720 (United States); Hung, Li-Wei [Los Alamos National Laboratory, Mail Stop M888, Los Alamos, NM 87545 (United States)

    2013-11-01

    A procedure for model building is described that combines morphing a model to match a density map, trimming the morphed model and aligning the model to a sequence. A procedure termed ‘morphing’ for improving a model after it has been placed in the crystallographic cell by molecular replacement has recently been developed. Morphing consists of applying a smooth deformation to a model to make it match an electron-density map more closely. Morphing does not change the identities of the residues in the chain, only their coordinates. Consequently, if the true structure differs from the working model by containing different residues, these differences cannot be corrected by morphing. Here, a procedure that helps to address this limitation is described. The goal of the procedure is to obtain a relatively complete model that has accurate main-chain atomic positions and residues that are correctly assigned to the sequence. Residues in a morphed model that do not match the electron-density map are removed. Each segment of the resulting trimmed morphed model is then assigned to the sequence of the molecule using information about the connectivity of the chains from the working model and from connections that can be identified from the electron-density map. The procedure was tested by application to a recently determined structure at a resolution of 3.2 Å and was found to increase the number of correctly identified residues in this structure from the 88 obtained using phenix.resolve sequence assignment alone (Terwilliger, 2003) to 247 of a possible 359. Additionally, the procedure was tested by application to a series of templates with sequence identities to a target structure ranging between 7 and 36%. The mean fraction of correctly identified residues in these cases was increased from 33% using phenix.resolve sequence assignment to 47% using the current procedure. The procedure is simple to apply and is available in the Phenix software package.

  15. Model morphing and sequence assignment after molecular replacement

    International Nuclear Information System (INIS)

    Terwilliger, Thomas C.; Read, Randy J.; Adams, Paul D.; Brunger, Axel T.; Afonine, Pavel V.; Hung, Li-Wei

    2013-01-01

    A procedure for model building is described that combines morphing a model to match a density map, trimming the morphed model and aligning the model to a sequence. A procedure termed ‘morphing’ for improving a model after it has been placed in the crystallographic cell by molecular replacement has recently been developed. Morphing consists of applying a smooth deformation to a model to make it match an electron-density map more closely. Morphing does not change the identities of the residues in the chain, only their coordinates. Consequently, if the true structure differs from the working model by containing different residues, these differences cannot be corrected by morphing. Here, a procedure that helps to address this limitation is described. The goal of the procedure is to obtain a relatively complete model that has accurate main-chain atomic positions and residues that are correctly assigned to the sequence. Residues in a morphed model that do not match the electron-density map are removed. Each segment of the resulting trimmed morphed model is then assigned to the sequence of the molecule using information about the connectivity of the chains from the working model and from connections that can be identified from the electron-density map. The procedure was tested by application to a recently determined structure at a resolution of 3.2 Å and was found to increase the number of correctly identified residues in this structure from the 88 obtained using phenix.resolve sequence assignment alone (Terwilliger, 2003) to 247 of a possible 359. Additionally, the procedure was tested by application to a series of templates with sequence identities to a target structure ranging between 7 and 36%. The mean fraction of correctly identified residues in these cases was increased from 33% using phenix.resolve sequence assignment to 47% using the current procedure. The procedure is simple to apply and is available in the Phenix software package.

  16. An analytical model for an input/output-subsystem

    International Nuclear Information System (INIS)

    Roemgens, J.

    1983-05-01

    An input/output subsystem of one or several computers is formed by the external memory units and the peripheral units of a computer system. For these subsystems, mathematical models are established that take into account the special properties of I/O subsystems, in order to avoid planning errors and to allow predictions of the capacity of such systems. Here an analytical model is presented for the magnetic discs of an I/O subsystem, using analytical methods for the individual waiting queues or queueing networks. Only I/O subsystems of IBM computer configurations are considered, which can be controlled by the MVS operating system. After a description of the hardware and software components of these I/O systems, possible solutions from the literature are presented and discussed with respect to their applicability to IBM I/O subsystems. Based on these models, a special scheme is developed which combines the advantages of the literature models and in part avoids their disadvantages. (orig./RW) [de]
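    The flavor of such an analytical model can be shown with the simplest station type: treating a disk as an M/M/1 queue gives utilization and mean response time directly from arrival and service rates (illustrative numbers, not values from the report):

```python
# Toy version of the queueing view of a disk: model the device as an M/M/1
# station and compute utilization and mean response time from arrival and
# service rates. The numbers are illustrative, not from the report.
lam = 40.0          # I/O requests per second arriving at the disk
mu = 50.0           # requests per second the disk can serve

rho = lam / mu                                    # utilization
resp = 1.0 / (mu - lam)                           # mean response time (s), M/M/1
print(f"utilization: {rho:.0%}, mean response time: {resp * 1000:.1f} ms")
```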

  17. A New Ensemble of Perturbed-Input-Parameter Simulations by the Community Atmosphere Model

    Energy Technology Data Exchange (ETDEWEB)

    Covey, C; Brandon, S; Bremer, P T; Domyancis, D; Garaizar, X; Johannesson, G; Klein, R; Klein, S A; Lucas, D D; Tannahill, J; Zhang, Y

    2011-10-27

    Uncertainty quantification (UQ) is a fundamental challenge in the numerical simulation of Earth's weather and climate, and other complex systems. It entails much more than attaching defensible error bars to predictions: in particular it includes assessing low-probability but high-consequence events. To achieve these goals with models containing a large number of uncertain input parameters, structural uncertainties, etc., raw computational power is needed. An automated, self-adapting search of the possible model configurations is also useful. Our UQ initiative at the Lawrence Livermore National Laboratory has produced the most extensive set to date of simulations from the US Community Atmosphere Model. We are examining output from about 3,000 twelve-year climate simulations generated with a specialized UQ software framework, and assessing the model's accuracy as a function of 21 to 28 uncertain input parameter values. Most of the input parameters we vary are related to the boundary layer, clouds, and other sub-grid scale processes. Our simulations prescribe surface boundary conditions (sea surface temperatures and sea ice amounts) to match recent observations. Fully searching this 21+ dimensional space is impossible, but sensitivity and ranking algorithms can identify input parameters having relatively little effect on a variety of output fields, either individually or in nonlinear combination. Bayesian statistical constraints, employing a variety of climate observations as metrics, also seem promising. Observational constraints will be important in the next step of our project, which will compute sea surface temperatures and sea ice interactively, and will study climate change due to increasing atmospheric carbon dioxide.

  18. Computer Generated Inputs for NMIS Processor Verification

    International Nuclear Information System (INIS)

    J. A. Mullens; J. E. Breeding; J. A. McEvers; R. W. Wysor; L. G. Chiang; J. R. Lenarduzzi; J. T. Mihalczo; J. K. Mattingly

    2001-01-01

    Proper operation of the Nuclear Materials Identification System (NMIS) processor can be verified using computer-generated inputs [BIST (Built-In Self-Test)] at the digital inputs. Preselected sequences of input pulses to all channels with known correlation functions are compared to the output of the processor. These types of verification have been utilized in NMIS-type correlation processors at the Oak Ridge National Laboratory since 1984. The use of this test confirmed a malfunction in an NMIS processor at the All-Russian Scientific Research Institute of Experimental Physics (VNIIEF) in 1998. The NMIS processor boards were returned to the U.S. for repair and subsequently used in NMIS passive and active measurements with Pu at VNIIEF in 1999.

  19. Lysimeter data as input to performance assessment models

    International Nuclear Information System (INIS)

    McConnell, J.W. Jr.

    1998-01-01

    The Field Lysimeter Investigations: Low-Level Waste Data Base Development Program is obtaining information on the performance of radioactive waste forms in a disposal environment. Waste forms fabricated using ion-exchange resins from EPICOR-II prefilters employed in the cleanup of the Three Mile Island (TMI) Nuclear Power Station are being tested to develop a low-level waste data base and to obtain information on the survivability of waste forms in a disposal environment. The program includes reviewing radionuclide releases from those waste forms during the first 7 years of sampling and examining the relationship between code input parameters and lysimeter data. Also, lysimeter data are applied to performance assessment source term models, and initial results from the use of the data in two models are presented.

  20. Scientific and technical advisory committee review of the nutrient inputs to the watershed model

    Science.gov (United States)

    The following is a report by a STAC Review Team concerning the methods and documentation used by the Chesapeake Bay Partnership for evaluation of nutrient inputs to Phase 6 of the Chesapeake Bay Watershed Model. The “STAC Review of the Nutrient Inputs to the Watershed Model” (previously referred to...

  1. Regional disaster impact analysis: comparing Input-Output and Computable General Equilibrium models

    NARCIS (Netherlands)

    Koks, E.E.; Carrera, L.; Jonkeren, O.; Aerts, J.C.J.H.; Husby, T.G.; Thissen, M.; Standardi, G.; Mysiak, J.

    2016-01-01

    A variety of models have been applied to assess the economic losses of disasters, of which the most common ones are input-output (IO) and computable general equilibrium (CGE) models. In addition, an increasing number of scholars have developed hybrid approaches: one that combines both or either of

  2. Input data requirements for performance modelling and monitoring of photovoltaic plants

    DEFF Research Database (Denmark)

    Gavriluta, Anamaria Florina; Spataru, Sergiu; Sera, Dezso

    2018-01-01

    This work investigates the input data requirements in the context of performance modeling of thin-film photovoltaic (PV) systems. The analysis focuses on the PVWatts performance model, well suited for on-line performance monitoring of PV strings, due to its low number of parameters and high..., modelling the performance of the PV modules at high irradiances requires a dataset of only a few hundred samples in order to obtain a power estimation accuracy of ~1-2%.

  3. Hidden Markov models for labeled sequences

    DEFF Research Database (Denmark)

    Krogh, Anders Stærmose

    1994-01-01

    A hidden Markov model for labeled observations, called a class HMM, is introduced and a maximum likelihood method is developed for estimating the parameters of the model. Instead of training it to model the statistics of the training sequences, it is trained to optimize recognition. It resembles MMI...

  4. Tables and intercomparisons of evolutionary sequences of models for massive stars

    International Nuclear Information System (INIS)

    Chin, Chaowen; Stothers, R.B.

    1990-01-01

    Tables of evolutionary sequences of models for massive stars have been prepared for a variety of physical input parameters that are normally treated as free. These parameters include the interior convective mixing scheme, the mixing length in the outer convective envelope, the rate of stellar-wind mass loss, the initial stellar mass, and the initial chemical composition. Ranges of specified initial mass and initial chemical composition are M = 10-120 solar masses, Xe = 0.602-0.739, and Ze = 0.021-0.044. The tables cover evolution of the star from the ZAMS to either the end of core H burning or the end of core He burning. Differences among the evolutionary tracks are illustrated primarily in terms of the interior mixing scheme, since the amount and timing of stellar wind mass loss are still very uncertain for initial masses above about 30 solar masses. 52 refs

  5. A latent low-dimensional common input drives a pool of motor neurons: a probabilistic latent state-space model.

    Science.gov (United States)

    Feeney, Daniel F; Meyer, François G; Noone, Nicholas; Enoka, Roger M

    2017-10-01

    Motor neurons appear to be activated with a common input signal that modulates the discharge activity of all neurons in the motor nucleus. It has proven difficult for neurophysiologists to quantify the variability in a common input signal, but characterization of such a signal may improve our understanding of how the activation signal varies across motor tasks. Contemporary methods of quantifying the common input to motor neurons rely on compiling discrete action potentials into continuous time series, assuming the motor pool acts as a linear filter, and requiring signals to be of sufficient duration for frequency analysis. We introduce a state-space model in which the discharge activity of motor neurons is modeled as inhomogeneous Poisson processes and propose a method to quantify an abstract latent trajectory that represents the common input received by motor neurons. The approach also approximates the variation in synaptic noise in the common input signal. The model is validated with four data sets: a simulation of 120 motor units, a pair of integrate-and-fire neurons with a Renshaw cell providing inhibitory feedback, the discharge activity of 10 integrate-and-fire neurons, and the discharge times of concurrently active motor units during an isometric voluntary contraction. The simulations revealed that a latent state-space model is able to quantify the trajectory and variability of the common input signal across all four conditions. When compared with the cumulative spike train method of characterizing common input, the state-space approach was more sensitive to the details of the common input current and was less influenced by the duration of the signal. The state-space approach appears to be capable of detecting rather modest changes in common input signals across conditions. NEW & NOTEWORTHY We propose a state-space model that explicitly delineates a common input signal sent to motor neurons and the physiological noise inherent in synaptic signal

  6. Sequence Tree Modeling for Combined Accident and Feed-and-Bleed Operation

    International Nuclear Information System (INIS)

    Kim, Bo Gyung; Kang Hyun Gook; Yoon, Ho Joon

    2016-01-01

    In order to address this issue, this study suggests a sequence tree model to analyze accident sequences systematically. Using the sequence tree model, all possible scenarios that need a specific safety action to prevent core damage can be identified, and the success conditions of the safety action under complicated situations, such as a combined accident, can also be identified. A sequence tree is a branching model that divides plant conditions in a way that reflects the plant dynamics. Since the sequence tree model can reflect the plant dynamics arising from the interaction of different accident timings and plant conditions, and from the interaction between operator actions, mitigation systems, and the indicators for operation, it can easily be used to develop a dynamic event tree model. The target safety action for this study is a feed-and-bleed (F and B) operation. An F and B operation directly cools down the reactor cooling system (RCS) using the primary cooling system when residual heat removal by the secondary cooling system is not available. In this study, a TLOFW accident and a TLOFW accident with LOCA were the target accidents. Based on the conventional PSA model and indicators, the sequence tree model for a TLOFW accident was developed. If sampling analysis is performed, practical accident sequences can be identified based on the sequence analysis. If realistic distributions for the variables can be obtained for the sampling analysis, much more realistic accident sequences can be described. Moreover, if the initiating event frequency under a combined accident can be quantified, the sequence tree model can be translated into a dynamic event tree model based on the sampling analysis results.

  7. Sequence Tree Modeling for Combined Accident and Feed-and-Bleed Operation

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Bo Gyung; Kang Hyun Gook [KAIST, Daejeon (Korea, Republic of); Yoon, Ho Joon [Khalifa University of Science, Abu Dhabi (United Arab Emirates)

    2016-05-15

    In order to address this issue, this study suggests a sequence tree model to analyze accident sequences systematically. Using the sequence tree model, all possible scenarios that need a specific safety action to prevent core damage can be identified, and the success conditions of the safety action under complicated situations, such as a combined accident, can also be identified. A sequence tree is a branching model that divides plant conditions while accounting for plant dynamics. Since the sequence tree model reflects plant dynamics arising from the interaction of different accident timings with plant conditions, and from the interaction between operator actions, mitigation systems, and the indicators for operation, it can readily be used to develop a dynamic event tree model. The target safety action for this study is a feed-and-bleed (F and B) operation. An F and B operation directly cools down the reactor cooling system (RCS) using the primary cooling system when residual heat removal by the secondary cooling system is not available. In this study, a total loss of feedwater (TLOFW) accident and a TLOFW accident with LOCA were the target accidents. Based on the conventional PSA model and indicators, the sequence tree model for a TLOFW accident was developed. If sampling analysis is performed, practical accident sequences can be identified based on the sequence analysis. If a realistic distribution of the variables can be obtained for sampling analysis, much more realistic accident sequences can be described. Moreover, if the initiating event frequency under a combined accident can be quantified, the sequence tree model can be translated into a dynamic event tree model based on the sampling analysis results.

  8. Input-constrained model predictive control via the alternating direction method of multipliers

    DEFF Research Database (Denmark)

    Sokoler, Leo Emil; Frison, Gianluca; Andersen, Martin S.

    2014-01-01

    This paper presents an algorithm, based on the alternating direction method of multipliers, for the convex optimal control problem arising in input-constrained model predictive control. We develop an efficient implementation of the algorithm for the extended linear quadratic control problem (LQCP) with input and input-rate limits. The algorithm alternates between solving an extended LQCP and a highly structured quadratic program. These quadratic programs are solved using a Riccati iteration procedure, and a structure-exploiting interior-point method, respectively. The computational cost per iteration is quadratic in the dimensions of the controlled system, and linear in the length of the prediction horizon. Simulations show that the approach proposed in this paper is more than an order of magnitude faster than several state-of-the-art quadratic programming algorithms, and that the difference in computation...
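
    The core alternation the abstract describes can be illustrated on a generic box-constrained quadratic program. In this sketch a dense linear solve stands in for the structured Riccati recursion of the paper, and all problem data are invented:

        import numpy as np

        def admm_box_qp(H, g, lb, ub, rho=1.0, iters=200):
            """Minimize 0.5*u'Hu + g'u subject to lb <= u <= ub via ADMM."""
            n = len(g)
            u, z, y = np.zeros(n), np.zeros(n), np.zeros(n)   # y: scaled dual
            K = H + rho * np.eye(n)
            for _ in range(iters):
                u = np.linalg.solve(K, rho * (z - y) - g)  # smooth subproblem
                z = np.clip(u + y, lb, ub)                 # input-limit projection
                y += u - z                                 # dual update
            return z

        rng = np.random.default_rng(1)
        M = rng.normal(size=(5, 5))
        H = M @ M.T + np.eye(5)                            # convex cost
        u = admm_box_qp(H, rng.normal(size=5), -np.ones(5), np.ones(5))
        print(u.round(3))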

  9. Development of an Input Model to MELCOR 1.8.5 for the Oskarshamn 3 BWR

    Energy Technology Data Exchange (ETDEWEB)

    Nilsson, Lars [Lentek, Nykoeping (Sweden)

    2006-05-15

    An input model has been prepared for the code MELCOR 1.8.5 for the Swedish Oskarshamn 3 Boiling Water Reactor (O3). This report describes the modelling work and the various files which comprise the input deck. Input data are mainly based on original drawings and system descriptions made available by courtesy of OKG AB. Comparison and check of some primary system data were made against an O3 input file for the SCDAP/RELAP5 code that was used in the SARA project. Useful information was also obtained from the FSAR (Final Safety Analysis Report) for O3 and the SKI report '2003 Stoerningshandboken BWR'. The input models the O3 reactor at its current state with an operating power of 3300 MWth. One aim of this work is that the MELCOR input could also be used for power upgrading studies. All fuel assemblies are thus assumed to consist of the new Westinghouse-Atom SVEA-96 Optima2 fuel. MELCOR is a severe accident code developed by Sandia National Laboratory under contract from the U.S. Nuclear Regulatory Commission (NRC). MELCOR is a successor to the STCP (Source Term Code Package) and thus has a long evolutionary history. The input described here is adapted to version 1.8.5, the latest available when the work began. It was released in the year 2000, but a new version 1.8.6 was distributed recently. Conversion to the new version is recommended. (During the writing of this report yet another code version, MELCOR 2.0, has been announced for release shortly.) In version 1.8.5 there is an option to describe the accident progression in the lower plenum and the melt-through of the reactor vessel bottom in more detail by use of the Bottom Head (BH) package developed by Oak Ridge National Laboratory especially for BWRs. This is in addition to the ordinary MELCOR COR package. Since problems arose when running with the BH input, two versions of the O3 input deck were produced, a NONBH and a BH deck. The BH package is no longer a separate package in the new 1

  10. Prediction of Chl-a concentrations in an eutrophic lake using ANN models with hybrid inputs

    Science.gov (United States)

    Aksoy, A.; Yuzugullu, O.

    2017-12-01

    Chlorophyll-a (Chl-a) concentrations in water bodies exhibit both spatial and temporal variations. As a result, frequent sampling with a high number of samples is required. This motivates the use of remote sensing as a monitoring tool. Yet, the prediction performance of models that convert radiance values into Chl-a concentrations can be poor in shallow lakes. In this study, Chl-a concentrations in Lake Eymir, a shallow eutrophic lake in Ankara (Turkey), are determined using artificial neural network (ANN) models that use hybrid inputs composed of water quality and meteorological data as well as remotely sensed radiance values to improve prediction performance. Following a screening based on multi-collinearity and principal component analysis (PCA), dissolved-oxygen concentration (DO), pH, turbidity, and humidity were selected among several parameters as the constituents of the hybrid input dataset. Radiance values were obtained from the QuickBird-2 satellite. Conversion of the hybrid input into Chl-a concentrations was studied for two different periods in the lake. ANN models were successful in predicting Chl-a concentrations. Yet, prediction performance declined for low Chl-a concentrations in the lake. In general, models with hybrid inputs were superior to the ones that solely used remotely sensed data.

  11. A quantitative approach to modeling the information processing of NPP operators under input information overload

    International Nuclear Information System (INIS)

    Kim, Jong Hyun; Seong, Poong Hyun

    2002-01-01

    This paper proposes a quantitative approach to modeling the information processing of NPP operators. The aim of this work is to derive the amount of information processed during a certain control task under input information overload. We first develop an information processing model with multiple stages that captures the information flow. The uncertainty of the information is then quantified using Conant's model, an information-theoretic approach. We also investigate the applicability of this approach to quantifying the information reduction of operators under input information overload
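
    As a hedged illustration of the information-theoretic bookkeeping involved, the following sketch computes the throughput (mutual information) between displayed input symbols X and the operator's processed output Y from a hypothetical joint distribution. Conant's model partitions the total rate further (throughput, blockage, coordination, noise); only the throughput term is computed here:

        import numpy as np

        def entropy(p):
            p = p[p > 0]
            return -(p * np.log2(p)).sum()

        # Hypothetical joint distribution of displayed inputs X (rows) and the
        # operator's processed outputs Y (columns); probabilities sum to 1.
        joint = np.array([[0.20, 0.05, 0.00],
                          [0.05, 0.25, 0.05],
                          [0.00, 0.10, 0.30]])
        H_X = entropy(joint.sum(axis=1))     # information offered by the display
        H_Y = entropy(joint.sum(axis=0))
        I_XY = H_X + H_Y - entropy(joint.ravel())   # throughput
        print(f"H(X) = {H_X:.3f} bits, throughput I(X;Y) = {I_XY:.3f} bits")
        print(f"input information not passed on: {H_X - I_XY:.3f} bits")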

  12. Thermodynamics-based models of transcriptional regulation with gene sequence.

    Science.gov (United States)

    Wang, Shuqiang; Shen, Yanyan; Hu, Jinxing

    2015-12-01

    Quantitative models of gene regulatory activity have the potential to improve our mechanistic understanding of transcriptional regulation. However, the few models available today have been based on simplistic assumptions about the sequences being modeled or heuristic approximations of the underlying regulatory mechanisms. In this work, we have developed a thermodynamics-based model to predict gene expression driven by any DNA sequence. The proposed model relies on a continuous-time, differential equation description of transcriptional dynamics. The sequence features of the promoter are exploited to derive the binding affinity, which is computed using statistical molecular thermodynamics. Experimental results show that the proposed model can effectively identify the activity levels of transcription factors and the regulatory parameters. Compared with previous models, the proposed model reveals more biological insight.
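
    A toy version of the two ingredients the abstract combines, statistical-thermodynamic binding occupancy and continuous-time transcription dynamics, might look as follows; the site energies, activity level and rate constants are invented for illustration:

        import numpy as np

        rng = np.random.default_rng(2)

        # Hypothetical binding energies (kT units) for one TF over a toy
        # promoter; a real model would derive these from the DNA sequence.
        site_energies = rng.normal(0.0, 1.0, 8)
        tf_activity = 2.0                        # dimensionless TF activity level

        # Boltzmann occupancy of each site: w/(1+w) with w = [TF]*exp(-E);
        # promoter activity is taken here as the mean occupancy.
        w = tf_activity * np.exp(-site_energies)
        occupancy = (w / (1 + w)).mean()

        # Continuous-time transcription dynamics: dm/dt = a*occupancy - d*m.
        a, d, dt, m = 10.0, 0.5, 0.01, 0.0
        for _ in range(2000):                    # forward-Euler integration
            m += dt * (a * occupancy - d * m)
        print(f"occupancy {occupancy:.3f}, mRNA {m:.2f} "
              f"(analytic steady state {a * occupancy / d:.2f})")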

  13. Post-BEMUSE Reflood Model Input Uncertainty Methods (PREMIUM) Benchmark Phase II: Identification of Influential Parameters

    International Nuclear Information System (INIS)

    Kovtonyuk, A.; Petruzzi, A.; D'Auria, F.

    2015-01-01

    The objective of the Post-BEMUSE Reflood Model Input Uncertainty Methods (PREMIUM) benchmark is to progress on the issue of the quantification of the uncertainty of the physical models in system thermal-hydraulic codes by considering a concrete case: the physical models involved in the prediction of core reflooding. The PREMIUM benchmark consists of five phases. This report presents the results of Phase II, dedicated to the identification of the uncertain code parameters associated with physical models used in the simulation of reflooding conditions. This identification is made on the basis of Test 216 of the FEBA/SEFLEX programme according to the following steps: identification of influential phenomena; identification of the associated physical models and parameters, depending on the code used; quantification of the variation range of identified input parameters through a series of sensitivity calculations. A procedure for the identification of potentially influential code input parameters has been set up in the Specifications of Phase II of the PREMIUM benchmark. A set of quantitative criteria has also been proposed for the identification of influential input parameters (IP) and their respective variation ranges. Thirteen participating organisations, using 8 different codes (7 system thermal-hydraulic codes and 1 sub-channel module of a system thermal-hydraulic code), submitted Phase II results. The base-case calculations show a spread in predicted cladding temperatures and quench front propagation that has been characterized. All the participants, except one, predict too fast a quench front progression. Besides, the cladding temperature time trends obtained by almost all the participants show oscillatory behaviour which may have numerical origins. The criteria adopted for identification of influential input parameters differ between the participants: some organisations used the set of criteria proposed in the Specifications 'as is', some modified the quantitative thresholds

  14. Influential input parameters for reflood model of MARS code

    Energy Technology Data Exchange (ETDEWEB)

    Oh, Deog Yeon; Bang, Young Seok [Korea Institute of Nuclear Safety, Daejeon (Korea, Republic of)

    2012-10-15

    Best Estimate (BE) calculations have been more broadly used in the nuclear industry and its regulation to reduce the significant conservatism in evaluating Loss of Coolant Accidents (LOCA). The reflood model has been identified as one of the problem areas in BE calculations. The objective of the Post-BEMUSE Reflood Model Input Uncertainty Methods (PREMIUM) program of OECD/NEA is to make progress on the issue of the quantification of the uncertainty of the physical models in system thermal-hydraulic codes by considering experimental results, especially for reflood. It is important to establish a methodology to identify and select the parameters influential to the response of reflood phenomena following a Large Break LOCA. To this end, a reference calculation and a sensitivity analysis to select the dominant influential parameters for the FEBA experiment are performed.

  15. SVM-PB-Pred: SVM based protein block prediction method using sequence profiles and secondary structures.

    Science.gov (United States)

    Suresh, V; Parthasarathy, S

    2014-01-01

    We developed a support vector machine (SVM) based web server, called SVM-PB-Pred, to predict the protein block for any given amino acid sequence. The input features of SVM-PB-Pred include (i) sequence profiles (PSSM) and (ii) actual secondary structures (SS) from the DSSP method or predicted secondary structures from the NPS@ and GOR4 methods. Three combined input features, PSSM+SS(DSSP), PSSM+SS(NPS@) and PSSM+SS(GOR4), were used to train and test the SVM models. Similarly, four datasets, RS90, DB433, LI1264 and SP1577, were used to develop the SVM models. The four SVM models developed were evaluated using three different benchmarking tests, namely (i) a self-consistency test, (ii) a seven-fold cross-validation test and (iii) an independent case test. The maximum prediction accuracy of ~70% was observed in the self-consistency test for the SVM models of both the LI1264 and SP1577 datasets, when the PSSM+SS(DSSP) input features were used for testing. The prediction accuracies were reduced to ~53% for PSSM+SS(NPS@) and ~43% for PSSM+SS(GOR4) in the independent case test for the SVM models of the same two datasets. Using our method, it is possible to predict the protein block letters for any query protein sequence with ~53% accuracy, when the SP1577 dataset and predicted secondary structure from the NPS@ server are used. The SVM-PB-Pred server can be freely accessed through http://bioinfo.bdu.ac.in/~svmpbpred.
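
    The feature construction and evaluation protocol can be sketched with scikit-learn, assuming it is available; the PSSM windows, secondary-structure states and block labels below are random placeholders, so the reported accuracy is at chance level:

        import numpy as np
        from sklearn.model_selection import cross_val_score
        from sklearn.svm import SVC

        rng = np.random.default_rng(3)
        n_residues, window = 500, 5

        # Placeholder features: a window of PSSM rows (20 scores per position)
        # concatenated with a one-hot secondary-structure state (H/E/C).
        pssm = rng.normal(size=(n_residues, window * 20))
        ss = np.eye(3)[rng.integers(0, 3, n_residues)]
        X = np.hstack([pssm, ss])
        y = rng.integers(0, 16, n_residues)      # 16 protein block letters a..p

        model = SVC(kernel="rbf", C=1.0, gamma="scale")
        scores = cross_val_score(model, X, y, cv=7)    # seven-fold CV
        print("7-fold CV accuracy:", round(scores.mean(), 3))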

  16. Comparison of different snow model formulations and their responses to input uncertainties in the Upper Indus Basin

    Science.gov (United States)

    Pritchard, David; Fowler, Hayley; Forsythe, Nathan; O'Donnell, Greg; Rutter, Nick; Bardossy, Andras

    2017-04-01

    Snow and glacier melt in the mountainous Upper Indus Basin (UIB) sustain water supplies, irrigation networks, hydropower production and ecosystems in extensive downstream lowlands. Understanding hydrological and cryospheric sensitivities to climatic variability and change in the basin is therefore critical for local, national and regional water resources management. Assessing these sensitivities using numerical modelling is challenging, due to limitations in the quality and quantity of input and evaluation data, as well as uncertainties in model structures and parameters. This study explores how these uncertainties in inputs and process parameterisations affect distributed simulations of ablation in the complex climatic setting of the UIB. The role of model forcing uncertainties is explored using combinations of local observations, remote sensing and reanalysis - including the high resolution High Asia Refined Analysis - to generate multiple realisations of spatiotemporal model input fields. Forcing a range of model structures with these input fields then provides an indication of how different ablation parameterisations respond to uncertainties and perturbations in climatic drivers. Model structures considered include simple, empirical representations of melt processes through to physically based, full energy balance models with multi-physics options for simulating snowpack evolution (including an adapted version of FSM). Analysing model input and structural uncertainties in this way provides insights for methodological choices in climate sensitivity assessments of data-sparse, high mountain catchments. Such assessments are key for supporting water resource management in these catchments, particularly given the potential complications of enhanced warming through elevation effects or, in the case of the UIB, limited understanding of how and why local climate change signals differ from broader patterns.

  17. Use of regional climate model simulations as an input for hydrological models for the Hindukush-Karakorum-Himalaya region

    NARCIS (Netherlands)

    Akhtar, M.; Ahmad, N.; Booij, Martijn J.

    2009-01-01

    The most important climatological inputs required for the calibration and validation of hydrological models are temperature and precipitation that can be derived from observational records or alternatively from regional climate models (RCMs). In this paper, meteorological station observations and

  18. Model morphing and sequence assignment after molecular replacement.

    Science.gov (United States)

    Terwilliger, Thomas C; Read, Randy J; Adams, Paul D; Brunger, Axel T; Afonine, Pavel V; Hung, Li-Wei

    2013-11-01

    A procedure termed 'morphing' for improving a model after it has been placed in the crystallographic cell by molecular replacement has recently been developed. Morphing consists of applying a smooth deformation to a model to make it match an electron-density map more closely. Morphing does not change the identities of the residues in the chain, only their coordinates. Consequently, if the true structure differs from the working model by containing different residues, these differences cannot be corrected by morphing. Here, a procedure that helps to address this limitation is described. The goal of the procedure is to obtain a relatively complete model that has accurate main-chain atomic positions and residues that are correctly assigned to the sequence. Residues in a morphed model that do not match the electron-density map are removed. Each segment of the resulting trimmed morphed model is then assigned to the sequence of the molecule using information about the connectivity of the chains from the working model and from connections that can be identified from the electron-density map. The procedure was tested by application to a recently determined structure at a resolution of 3.2 Å and was found to increase the number of correctly identified residues in this structure from the 88 obtained using phenix.resolve sequence assignment alone (Terwilliger, 2003) to 247 of a possible 359. Additionally, the procedure was tested by application to a series of templates with sequence identities to a target structure ranging between 7 and 36%. The mean fraction of correctly identified residues in these cases was increased from 33% using phenix.resolve sequence assignment to 47% using the current procedure. The procedure is simple to apply and is available in the Phenix software package.

  19. Remote sensing inputs to water demand modeling

    Science.gov (United States)

    Estes, J. E.; Jensen, J. R.; Tinney, L. R.; Rector, M.

    1975-01-01

    In an attempt to determine the ability of remote sensing techniques to economically generate data required by water demand models, the Geography Remote Sensing Unit, in conjunction with the Kern County Water Agency of California, developed an analysis model. As a result it was determined that agricultural cropland inventories utilizing both high altitude photography and LANDSAT imagery can be conducted cost effectively. In addition, by using average irrigation application rates in conjunction with cropland data, estimates of agricultural water demand can be generated. However, more accurate estimates are possible if crop type, acreage, and crop specific application rates are employed. An analysis of the effect of saline-alkali soils on water demand in the study area is also examined. Finally, reference is made to the detection and delineation of water tables that are perched near the surface by semi-permeable clay layers. Soil salinity prediction, automated crop identification on a by-field basis, and a potential input to the determination of zones of equal benefit taxation are briefly touched upon.

  20. Modeling the Indonesian Consumer Price Index Using a Multi-Input Intervention Model

    KAUST Repository

    Novianti, Putri Wikie

    2017-01-24

    There are some events which were expected to affect the CPI's fluctuation, i.e. the 1997/1998 financial crisis, fuel price rises, base year changes, the independence of Timor-Timur (October 1999), and the tsunami disaster in Aceh (December 2004). During the research period, there were eight fuel price rises and four base year changes. The objective of this research is to obtain a multi-input intervention model which can describe the magnitude and duration of the effect of each event on the CPI. Most intervention research that has been done involves only a single-input intervention, with either a step or a pulse function. A multi-input intervention was used in the Indonesian CPI case because there are several events which were expected to affect the CPI. Based on the results, those events did affect the CPI. Additionally, other events, such as Ied in January 1999 and events in April 2002, July 2003, December 2005, and September 2008, affected the CPI too. In general, those events had a positive effect on the CPI, except the events in April 2002 and July 2003, which had negative effects.

  1. Integration of auditory and tactile inputs in musical meter perception.

    Science.gov (United States)

    Huang, Juan; Gamble, Darik; Sarnlertsophon, Kristine; Wang, Xiaoqin; Hsiao, Steven

    2013-01-01

    Musicians often say that they not only hear but also "feel" music. To explore the contribution of tactile information to "feeling" music, we investigated the degree that auditory and tactile inputs are integrated in humans performing a musical meter-recognition task. Subjects discriminated between two types of sequences, "duple" (march-like rhythms) and "triple" (waltz-like rhythms), presented in three conditions: (1) unimodal inputs (auditory or tactile alone); (2) various combinations of bimodal inputs, where sequences were distributed between the auditory and tactile channels such that a single channel did not produce coherent meter percepts; and (3) bimodal inputs where the two channels contained congruent or incongruent meter cues. We first show that meter is perceived similarly well (70-85 %) when tactile or auditory cues are presented alone. We next show in the bimodal experiments that auditory and tactile cues are integrated to produce coherent meter percepts. Performance is high (70-90 %) when all of the metrically important notes are assigned to one channel and is reduced to 60 % when half of these notes are assigned to one channel. When the important notes are presented simultaneously to both channels, congruent cues enhance meter recognition (90 %). Performance dropped dramatically when subjects were presented with incongruent auditory cues (10 %), as opposed to incongruent tactile cues (60 %), demonstrating that auditory input dominates meter perception. These observations support the notion that meter perception is a cross-modal percept with tactile inputs underlying the perception of "feeling" music.

  2. From LCC to LCA Using a Hybrid Input Output Model – A Maritime Case Study

    DEFF Research Database (Denmark)

    Kjær, Louise Laumann; Pagoropoulos, Aris; Hauschild, Michael Zwicky

    2015-01-01

    As companies try to embrace life cycle thinking, Life Cycle Assessment (LCA) and Life Cycle Costing (LCC) have proven to be powerful tools. In this paper, an Environmental Input-Output model is used for analysis as it enables an LCA using the same economic input data as LCC. This approach helps...

  3. Transport coefficient computation based on input/output reduced order models

    Science.gov (United States)

    Hurst, Joshua L.

    The guiding purpose of this thesis is to address the optimal material design problem when the material description is a molecular dynamics model. The end goal is to obtain a simplified and fast model that captures the property of interest such that it can be used in controller design and optimization. The approach is to examine model reduction analysis and methods to capture a specific property of interest, in this case viscosity, or more generally complex modulus or complex viscosity. This property and other transport coefficients are defined by an input/output relationship, and this motivates model reduction techniques that are tailored to preserve input/output behavior. In particular, Singular Value Decomposition (SVD) based methods are investigated. First, simulation methods are identified that are amenable to systems-theory analysis. For viscosity, these models are of the Gosling and Lees-Edwards type. They are high-order nonlinear Ordinary Differential Equations (ODEs) that employ Periodic Boundary Conditions. Properties can be calculated from the state trajectories of these ODEs. In this research, local linear approximations are rigorously derived and special attention is given to potentials that are evaluated with Periodic Boundary Conditions (PBC). For the Gosling description, LTI models are developed from state trajectories but are found to have limited success in capturing the system property, even though it is shown that full-order LTI models can be well approximated by reduced-order LTI models. For the Lees-Edwards SLLOD-type model, nonlinear ODEs will be approximated by a Linear Time Varying (LTV) model about some nominal trajectory, and both balanced truncation and Proper Orthogonal Decomposition (POD) will be used to assess the plausibility of reduced-order models for this system description. An immediate application of the derived LTV models is Quasilinearization or Waveform Relaxation. Quasilinearization is a Newton's method applied to the ODE operator
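
    Of the reduction methods named, POD is the easiest to illustrate: collect state snapshots, take an SVD, and keep the leading left singular vectors as a reduced basis. The snapshot data below are synthetic with a planted low-rank structure:

        import numpy as np

        rng = np.random.default_rng(4)

        # Snapshot matrix: columns are state vectors sampled along a
        # trajectory; here synthetic data with a planted rank-3 structure.
        n_states, n_snaps = 200, 60
        X = (rng.normal(size=(n_states, 3)) @ rng.normal(size=(3, n_snaps))
             + 0.01 * rng.normal(size=(n_states, n_snaps)))

        # POD: the leading left singular vectors give an orthonormal basis,
        # truncated where the cumulative energy of singular values saturates.
        U, s, _ = np.linalg.svd(X, full_matrices=False)
        energy = np.cumsum(s**2) / np.sum(s**2)
        r = int(np.searchsorted(energy, 0.999)) + 1
        Phi = U[:, :r]                           # reduced-order basis

        x_red = Phi.T @ X[:, 0]                  # project one full state
        x_rec = Phi @ x_red                      # lift back to full space
        print(f"modes kept: {r}, reconstruction error: "
              f"{np.linalg.norm(X[:, 0] - x_rec):.2e}")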

  4. Computational Techniques for Model Predictive Control of Large-Scale Systems with Continuous-Valued and Discrete-Valued Inputs

    Directory of Open Access Journals (Sweden)

    Koichi Kobayashi

    2013-01-01

    We propose computational techniques for model predictive control of large-scale systems with both continuous-valued control inputs and discrete-valued control inputs, which are a class of hybrid systems. In the proposed method, we introduce the notion of virtual control inputs, which are obtained by relaxing discrete-valued control inputs to continuous variables. In online computation, first, we find continuous-valued control inputs and virtual control inputs minimizing a cost function. Next, using the obtained virtual control inputs, only discrete-valued control inputs at the current time are computed in each subsystem. In addition, we also discuss the effect of quantization errors. Finally, the effectiveness of the proposed method is shown by a numerical example. The proposed method enables us to reduce and decentralize the computation load.
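
    A minimal sketch of the relax-and-round idea, assuming SciPy is available and using an invented scalar plant: the discrete-valued input is replaced by a virtual input relaxed to [0, 1], a continuous problem is solved over the horizon, and only the current-time discrete input is quantized:

        import numpy as np
        from scipy.optimize import minimize

        # Invented scalar plant x+ = a*x + b1*u_c + b2*u_d with a continuous
        # input u_c and a discrete input u_d that should lie in {0, 1}.
        a, b1, b2, N, x0 = 0.9, 0.5, 1.0, 10, 5.0

        def cost(v):
            u_c, u_d = v[:N], v[N:]              # u_d: virtual (relaxed) input
            x, J = x0, 0.0
            for k in range(N):
                x = a * x + b1 * u_c[k] + b2 * u_d[k]
                J += x**2 + 0.1 * u_c[k]**2 + 0.1 * u_d[k]**2
            return J

        bounds = [(-1.0, 1.0)] * N + [(0.0, 1.0)] * N   # {0,1} relaxed to [0,1]
        res = minimize(cost, np.zeros(2 * N), bounds=bounds, method="L-BFGS-B")

        u_d_now = round(float(res.x[N]))         # quantize only the current input
        print("apply now: u_c =", round(float(res.x[0]), 3), " u_d =", u_d_now)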

  5. A Streaming Distance Transform Algorithm for Neighborhood-Sequence Distances

    Directory of Open Access Journals (Sweden)

    Nicolas Normand

    2014-09-01

    We describe an algorithm that computes a “translated” 2D Neighborhood-Sequence Distance Transform (DT) using a look-up table approach. It requires a single raster scan of the input image and produces one line of output for every line of input. The neighborhood sequence is specified either by providing one period of some integer periodic sequence or by providing the rate of appearance of neighborhoods. The full algorithm optionally derives the regular (centered) DT from the “translated” DT, providing the result image on-the-fly, with a minimal delay, before the input image is fully processed. Its efficiency can benefit all applications that use neighborhood-sequence distances, particularly when pipelined processing architectures are involved, or when the size of objects in the source image is limited.
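
    The single-raster-scan idea can be sketched as follows. This simplified causal-scan variant cycles between the 4- and 8-neighbourhoods according to one period of the neighborhood sequence and treats pixels outside the image as background; it is not the look-up-table algorithm of the paper, only an illustration of a "translated" (non-centered) DT computed in one pass:

        import numpy as np

        def translated_ns_dt(img, period=(1, 2)):
            """One-pass 'translated' DT; period entries: 1 = 4-nbhd, 2 = 8-nbhd."""
            h, w = img.shape
            out = np.zeros((h, w), dtype=int)
            for i in range(h):
                for j in range(w):
                    if not img[i, j]:
                        continue                  # background stays 0
                    prev = [out[i - 1, j] if i > 0 else 0,
                            out[i, j - 1] if j > 0 else 0]
                    step = 1 + min(prev)          # tentative 4-neighbourhood value
                    if period[(step - 1) % len(period)] == 2:
                        prev.append(out[i - 1, j - 1] if i > 0 and j > 0 else 0)
                        prev.append(out[i - 1, j + 1] if i > 0 and j < w - 1 else 0)
                    out[i, j] = 1 + min(prev)
            return out

        print(translated_ns_dt(np.ones((5, 7), dtype=bool)))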

  6. ANALYSIS OF CHANGES IN BANDUNG'S LEADING POTENTIAL THROUGH AN INPUT-OUTPUT MODEL USING THE LE MASNE INDEX

    Directory of Open Access Journals (Sweden)

    Teti Sofia Yanti

    2017-03-01

    An input-output table is arranged to present an overview of the interrelationships and interdependence between units of activity (production sectors) in the whole economy. Input-output models are therefore a complete and comprehensive analytical tool. The usefulness of input-output tables lies in the analysis of the economic structure at the national/regional level, covering the structure of production and value added (GDP) of each sector. For comprehensive planning and evaluation of development outcomes, at both the national and smaller (district/city) scales, regional development planning can take an approach based on input-output analysis. The analysis of Bandung's economic structure used the Le Masne index, comparing the technology coefficients of 2003 and 2008, of which nearly 50% changed. The trade sector has grown far more conspicuously than other areas, followed by road transport and air transport services; development priorities and investment in Bandung should therefore be directed to these areas, since they can act as a driving force and an attractor for the growth of other areas. The areas that experienced the largest decrease were industrial chemicals and chemical products, followed by the oil refining industry and the textile industry (excluding garments).

  7. Simultaneous activation of parallel sensory pathways promotes a grooming sequence in Drosophila

    Science.gov (United States)

    Hampel, Stefanie; McKellar, Claire E

    2017-01-01

    A central model that describes how behavioral sequences are produced features a neural architecture that readies different movements simultaneously, and a mechanism where prioritized suppression between the movements determines their sequential performance. We previously described a model whereby suppression drives a Drosophila grooming sequence that is induced by simultaneous activation of different sensory pathways that each elicit a distinct movement (Seeds et al., 2014). Here, we confirm this model using transgenic expression to identify and optogenetically activate sensory neurons that elicit specific grooming movements. Simultaneous activation of different sensory pathways elicits a grooming sequence that resembles the naturally induced sequence. Moreover, the sequence proceeds after the sensory excitation is terminated, indicating that a persistent trace of this excitation induces the next grooming movement once the previous one is performed. This reveals a mechanism whereby parallel sensory inputs can be integrated and stored to elicit a delayed and sequential grooming response. PMID:28887878

  8. AlignMe—a membrane protein sequence alignment web server

    Science.gov (United States)

    Stamm, Marcus; Staritzbichler, René; Khafizov, Kamil; Forrest, Lucy R.

    2014-01-01

    We present a web server for pair-wise alignment of membrane protein sequences, using the program AlignMe. The server makes available two operational modes of AlignMe: (i) sequence to sequence alignment, taking two sequences in fasta format as input, combining information about each sequence from multiple sources and producing a pair-wise alignment (PW mode); and (ii) alignment of two multiple sequence alignments to create family-averaged hydropathy profile alignments (HP mode). For the PW sequence alignment mode, four different optimized parameter sets are provided, each suited to pairs of sequences with a specific similarity level. These settings utilize different types of inputs: (position-specific) substitution matrices, secondary structure predictions and transmembrane propensities from transmembrane predictions or hydrophobicity scales. In the second (HP) mode, each input multiple sequence alignment is converted into a hydrophobicity profile averaged over the provided set of sequence homologs; the two profiles are then aligned. The HP mode enables qualitative comparison of transmembrane topologies (and therefore potentially of 3D folds) of two membrane proteins, which can be useful if the proteins have low sequence similarity. In summary, the AlignMe web server provides user-friendly access to a set of tools for analysis and comparison of membrane protein sequences. Access is available at http://www.bioinfo.mpg.de/AlignMe PMID:24753425

  9. The Canadian Defence Input-Output Model DIO Version 4.41

    Science.gov (United States)

    2011-09-01

    Request to develop a DND-tailored input/output model; electronic communication from Allen Weldon to Team Leader, Defence Economics Team, on March 12, 2011. [The remainder of this record consists of fragments of the report's commodity classification tables, e.g. handbags, wallets and similar personal articles; cotton yarn; radar and radio navigation equipment; semi-conductors; printed circuits; integrated circuits; other electronic equipment.]

  10. Application of MELCOR Code to a French PWR 900 MWe Severe Accident Sequence and Evaluation of Models Performance Focusing on In-Vessel Thermal Hydraulic Results

    International Nuclear Information System (INIS)

    De Rosa, Felice

    2006-01-01

    In the ambit of the Severe Accident Network of Excellence Project (SARNET), funded by the European Union 6th Framework (FISA, Fission Safety) Programme, one of the main tasks is the development and validation of the European Accident Source Term Evaluation Code (ASTEC Code). One of the reference codes used to compare ASTEC results, coming from experimental and reactor plant applications, is MELCOR. ENEA is a SARNET member and also an ASTEC and MELCOR user. During the first 18 months of this project, we performed a series of MELCOR and ASTEC calculations referring to a French PWR 900 MWe and to the accident sequence of 'Loss of Steam Generator (SG) Feedwater' (known as the H2 sequence in the French classification). H2 is an accident sequence substantially equivalent to a Station Blackout scenario, like a TMLB accident, with the only difference that in the H2 sequence the scram is forced to occur with a delay of 28 seconds. The main events during the accident sequence are a loss of normal and auxiliary SG feedwater (0 s), followed by a scram when the water level in the SG is equal to or less than 0.7 m (after 28 seconds). There is also a main coolant pump trip when ΔTsat < 10 deg. C, a total opening of the three relief valves when Tric (maximal core outlet temperature) is above 603 K (330 deg. C), and accumulator isolation when the primary pressure goes below 1.5 MPa (15 bar). Among many other points, it is worth noting that this was the first time that a MELCOR 1.8.5 input deck was available for a French PWR 900. The main ENEA effort in this period was devoted to preparing the MELCOR input deck using the code version v.1.8.5 (build QZ Oct 2000 with the latest patch 185003 Oct 2001). The input deck, completely new, was prepared taking into account the same structure, data and conditions as those found in the ASTEC input decks. The main goal of the work presented in this paper is to highlight where and when MELCOR provides good enough results and why, in some cases mainly referring to its

  11. Low-level waste shallow land disposal source term model: Data input guides

    International Nuclear Information System (INIS)

    Sullivan, T.M.; Suen, C.J.

    1989-07-01

    This report provides an input guide for the computational models developed to predict the rate of radionuclide release from shallow land disposal of low-level waste. Release of contaminants depends on four processes: water flow, container degradation, waste form leaching, and contaminant transport. The computer code FEMWATER has been selected to predict the movement of water in an unsaturated porous medium. The computer code BLT (Breach, Leach, and Transport), a modification of FEMWASTE, has been selected to predict the processes of container degradation (Breach), contaminant release from the waste form (Leach), and contaminant migration (Transport). In conjunction, these two codes have the capability to account for the effects of disposal geometry, unsaturated water flow, container degradation, waste form leaching, and migration of contaminant releases within a single disposal trench. In addition to the input requirements, this report presents the fundamental equations and relationships used to model the four different processes discussed above. Further, the appendices provide a representative sample of the data required by the different models. 14 figs., 27 tabs

  12. PLEXOS Input Data Generator

    Energy Technology Data Exchange (ETDEWEB)

    2017-02-01

    The PLEXOS Input Data Generator (PIDG) is a tool that enables PLEXOS users to better version their data, automate data processing, collaborate in developing inputs, and transfer data between different production cost modeling and other power systems analysis software. PIDG can process data that is in a generalized format from multiple input sources, including CSV files, PostgreSQL databases, and PSS/E .raw files and write it to an Excel file that can be imported into PLEXOS with only limited manual intervention.

  13. Temporal rainfall estimation using input data reduction and model inversion

    Science.gov (United States)

    Wright, A. J.; Vrugt, J. A.; Walker, J. P.; Pauwels, V. R. N.

    2016-12-01

    Floods are devastating natural hazards. To provide accurate, precise and timely flood forecasts there is a need to understand the uncertainties associated with temporal rainfall and model parameters. The estimation of temporal rainfall and model parameter distributions from streamflow observations in complex dynamic catchments adds skill to current areal rainfall estimation methods, allows for the uncertainty of rainfall input to be considered when estimating model parameters and provides the ability to estimate rainfall from poorly gauged catchments. Current methods to estimate temporal rainfall distributions from streamflow are unable to adequately explain and invert complex non-linear hydrologic systems. This study uses the Discrete Wavelet Transform (DWT) to reduce rainfall dimensionality for the catchment of Warwick, Queensland, Australia. The reduction of rainfall to DWT coefficients allows the input rainfall time series to be simultaneously estimated along with model parameters. The estimation process is conducted using multi-chain Markov chain Monte Carlo simulation with the DREAMZS algorithm. The use of a likelihood function that considers both rainfall and streamflow error allows for model parameter and temporal rainfall distributions to be estimated. Estimation of the wavelet approximation coefficients of lower order decomposition structures was able to estimate the most realistic temporal rainfall distributions. These rainfall estimates were all able to simulate streamflow that was superior to the results of a traditional calibration approach. It is shown that the choice of wavelet has a considerable impact on the robustness of the inversion. The results demonstrate that streamflow data contains sufficient information to estimate temporal rainfall and model parameter distributions. The extent and variance of rainfall time series that are able to simulate streamflow that is superior to that simulated by a traditional calibration approach is a
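
    The dimension-reduction step can be sketched with PyWavelets (assumed available): decompose a rainfall series, keep only the approximation coefficients as the low-dimensional parameterization that the MCMC search would estimate, and reconstruct with the details zeroed:

        import numpy as np
        import pywt                               # PyWavelets, assumed available

        rng = np.random.default_rng(5)
        rain = np.clip(rng.gamma(0.3, 4.0, 256) - 2.0, 0, None)  # toy hyetograph

        # Keep only the level-4 approximation coefficients: these few values
        # are the quantities a sampler like DREAMZS would actually estimate.
        coeffs = pywt.wavedec(rain, "db4", level=4)
        print(f"{rain.size} rainfall values -> {coeffs[0].size} coefficients")

        # Low-dimensional reconstruction: invert with the details zeroed.
        reduced = [coeffs[0]] + [np.zeros_like(c) for c in coeffs[1:]]
        rain_lowdim = pywt.waverec(reduced, "db4")[:rain.size]
        rmse = np.sqrt(np.mean((rain - rain_lowdim) ** 2))
        print("reconstruction RMSE:", round(float(rmse), 3))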

  14. Design, Fabrication, and Modeling of a Novel Dual-Axis Control Input PZT Gyroscope

    Directory of Open Access Journals (Sweden)

    Cheng-Yang Chang

    2017-10-01

    Conventional gyroscopes are equipped with a single-axis control input, limiting their performance. Although researchers have proposed control algorithms with dual-axis control inputs to improve gyroscope performance, most have verified the control algorithms through numerical simulations because they lacked practical devices with dual-axis control inputs. The aim of this study was to design a piezoelectric gyroscope equipped with a dual-axis control input so that researchers may experimentally verify those control algorithms in future. Designing a piezoelectric gyroscope with a dual-axis control input is more difficult than designing a conventional gyroscope because the control input must be effective over a broad frequency range to compensate for imperfections, and the multiple mode shapes in flexural deformations complicate the relation between flexural deformation and the proof mass position. This study solved these problems by using a lead zirconate titanate (PZT) material, introducing additional electrodes for shielding, developing an optimal electrode pattern, and performing calibrations of undesired couplings. The results indicated that the fabricated device could be operated at 5.5±1 kHz to perform dual-axis actuations and position measurements. The calibration of the fabricated device was completed by system identifications of a new dynamic model including gyroscopic motions, electromechanical coupling, mechanical coupling, electrostatic coupling, and capacitive output impedance. Finally, without the assistance of control algorithms, the “open loop sensitivity” of the fabricated gyroscope was 1.82 μV/deg/s with a nonlinearity of 9.5% full-scale output. This sensitivity is comparable with those of other PZT gyroscopes with single-axis control inputs.

  15. Design, Fabrication, and Modeling of a Novel Dual-Axis Control Input PZT Gyroscope.

    Science.gov (United States)

    Chang, Cheng-Yang; Chen, Tsung-Lin

    2017-10-31

    Conventional gyroscopes are equipped with a single-axis control input, limiting their performance. Although researchers have proposed control algorithms with dual-axis control inputs to improve gyroscope performance, most have verified the control algorithms through numerical simulations because they lacked practical devices with dual-axis control inputs. The aim of this study was to design a piezoelectric gyroscope equipped with a dual-axis control input so that researchers may experimentally verify those control algorithms in future. Designing a piezoelectric gyroscope with a dual-axis control input is more difficult than designing a conventional gyroscope because the control input must be effective over a broad frequency range to compensate for imperfections, and the multiple mode shapes in flexural deformations complicate the relation between flexural deformation and the proof mass position. This study solved these problems by using a lead zirconate titanate (PZT) material, introducing additional electrodes for shielding, developing an optimal electrode pattern, and performing calibrations of undesired couplings. The results indicated that the fabricated device could be operated at 5.5±1 kHz to perform dual-axis actuations and position measurements. The calibration of the fabricated device was completed by system identifications of a new dynamic model including gyroscopic motions, electromechanical coupling, mechanical coupling, electrostatic coupling, and capacitive output impedance. Finally, without the assistance of control algorithms, the "open loop sensitivity" of the fabricated gyroscope was 1.82 μV/deg/s with a nonlinearity of 9.5% full-scale output. This sensitivity is comparable with those of other PZT gyroscopes with single-axis control inputs.

  16. On the Influence of Input Data Quality to Flood Damage Estimation: The Performance of the INSYDE Model

    Directory of Open Access Journals (Sweden)

    Daniela Molinari

    2017-09-01

    The IN-depth SYnthetic Model for Flood Damage Estimation (INSYDE) is a model for the estimation of flood damage to residential buildings at the micro-scale. This study investigates the sensitivity of INSYDE to the accuracy of input data. Starting from the knowledge of input parameters at the scale of individual buildings for a case study, the level of detail of the input data is progressively downgraded until a representative value is defined for all inputs at the census block scale. The analysis reveals that two conditions are required to limit the errors in damage estimation: the representativeness of the representative values with respect to the micro-scale values and local knowledge of the footprint area of the buildings, the latter being the main extensive variable adopted by INSYDE. Such a result allows for extending the usability of the model to the meso-scale, also in different countries, depending on the availability of aggregated building data.

  17. Development of algorithm for depreciation costs allocation in dynamic input-output industrial enterprise model

    Directory of Open Access Journals (Sweden)

    Keller Alevtina

    2017-01-01

    The article considers the issue of the allocation of depreciation costs in a dynamic input-output model of an industrial enterprise. Accounting for depreciation costs in such a model improves the policy of fixed assets management. It is particularly relevant to develop the algorithm for the allocation of depreciation costs in the construction of a dynamic input-output model of an industrial enterprise, since such enterprises have a significant amount of fixed assets. An adequate algorithm of this kind allows one to evaluate the appropriateness of investments in fixed assets and to study the final financial results of an industrial enterprise depending on management decisions in depreciation policy. It should be noted that the model in question is always degenerate for the enterprise. This is caused by the presence of zero rows in the matrix of capital expenditures for the structural elements unable to generate fixed assets (part of the service units, households, corporate consumers). The paper presents the algorithm for the allocation of depreciation costs for the model. This algorithm was developed by the authors and served as the basis for a flowchart for subsequent software implementation. The construction of such an algorithm and its use for dynamic input-output models of industrial enterprises is motivated by the internationally accepted effectiveness of input-output models for national and regional economic systems. This is what allows us to consider that the solutions discussed in the article are of interest to economists of various industrial enterprises.

  18. A Water-Withdrawal Input-Output Model of the Indian Economy.

    Science.gov (United States)

    Bogra, Shelly; Bakshi, Bhavik R; Mathur, Ritu

    2016-02-02

    Managing freshwater allocation for a highly populated and growing economy like India can benefit from knowledge about the effect of economic activities. This study transforms the 2003-2004 economic input-output (IO) table of India into a water-withdrawal input-output model to quantify direct and indirect flows. This unique model is based on a comprehensive database compiled from diverse public sources and estimates the direct and indirect water withdrawal of all economic sectors. It distinguishes between green (rainfall), blue (surface and ground), and scarce groundwater. Results indicate that the total direct water withdrawal is nearly 3052 billion cubic meters (BCM) and 96% of this is used in the agriculture sectors, with the contribution of direct green water being about 1145 BCM, excluding forestry. Apart from the 727 BCM of direct blue water withdrawal for agriculture, other significant users include "Electricity" with 64 BCM, "Water supply" with 44 BCM and other industrial sectors with nearly 14 BCM. "Construction", "Miscellaneous food products", "Hotels and restaurants", and "Paper, paper products, and newsprint" are other significant indirect withdrawers. The net virtual water import is found to be insignificant compared to the direct water used in agriculture nationally, while the scarce groundwater associated with crops is largely contributed by the northern states.
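
    The direct-plus-indirect accounting rests on the standard Leontief inverse: if A is the technical coefficient matrix and w the vector of direct withdrawal intensities, total intensities are w'(I - A)^{-1}. A toy three-sector sketch follows; the values are invented, not taken from the Indian IO table:

        import numpy as np

        # Toy three-sector economy: A holds technical coefficients and w the
        # direct withdrawal intensities (BCM per unit output), both invented.
        A = np.array([[0.20, 0.05, 0.10],
                      [0.10, 0.15, 0.05],
                      [0.05, 0.10, 0.20]])
        w = np.array([5.00, 0.30, 0.05])   # e.g. agriculture, electricity, other

        # Total (direct + indirect) intensities: m' = w' (I - A)^{-1}.
        multipliers = w @ np.linalg.inv(np.eye(3) - A)
        final_demand = np.array([100.0, 80.0, 120.0])
        print("total water multipliers:", multipliers.round(3))
        print("economy-wide withdrawal:",
              round(float(multipliers @ final_demand), 1))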

  19. A Probabilistic Genome-Wide Gene Reading Frame Sequence Model

    DEFF Research Database (Denmark)

    Have, Christian Theil; Mørk, Søren

    We introduce a new type of probabilistic sequence model that models the sequential composition of reading frames of genes in a genome. Our approach extends gene finders with a model of the sequential composition of genes at the genome level, effectively producing a sequential genome annotation as output. The model can be used to obtain the most probable genome annotation based on a combination of (i) a gene finder score for each gene candidate and (ii) the sequence of the reading frames of gene candidates through a genome. The model, as well as a higher-order variant, is developed and tested ... and evaluated by the effect on prediction performance. Since bacterial gene finding is to a large extent a solved problem, it forms an ideal proving ground for evaluating the explicit modeling of larger-scale gene sequence composition of genomes. We conclude that the sequential composition of gene reading frames...

  20. Metocean input data for drift models applications: Loustic study

    International Nuclear Information System (INIS)

    Michon, P.; Bossart, C.; Cabioc'h, M.

    1995-01-01

    Real-time monitoring and crisis management of oil slicks or floating structure displacement require a good knowledge of local winds, waves and currents used as input data for operational drift models. Fortunately, thanks to world-wide and all-weather coverage, satellite measurements have recently enabled the introduction of new methods for the remote sensing of the marine environment. Within a French joint industry project, a procedure has been developed that uses satellite measurements combined with metocean models in order to provide marine operators' drift models with reliable wind, wave and current analyses and short-term forecasts. In particular, a model now allows the calculation of the drift current under the joint action of wind and sea state, thus radically improving on the classical laws. This global procedure uses satellite wind and wave measurements either directly (if available in the study area) or indirectly, as calibration of metocean model results which are brought to the oil slick or floating structure location. The operational use of this procedure is reported here with an example of floating structure drift offshore from the Brittany coasts

  1. Analysis on relation between safety input and accidents

    Institute of Scientific and Technical Information of China (English)

    YAO Qing-guo; ZHANG Xue-mu; LI Chun-hui

    2007-01-01

    The amount of safety input directly determines the level of safety, and there exist dialectical and unified relations between safety input and accidents. Based on field investigation and reliable data, this paper studied in depth the dialectical relationship between safety input and accidents and reached its conclusions. The safety situation of the coal enterprises was related to the safety input rate and affected little by the safety input scale; on this basis a relationship model between safety input and accidents, that is, the accident model, was built.

  2. Probabilistic topic modeling for the analysis and classification of genomic sequences

    Science.gov (United States)

    2015-01-01

    Background: Studies on genomic sequences for classification and taxonomic identification have a leading role in the biomedical field and in the analysis of biodiversity. These studies are focusing on the so-called barcode genes, representing a well-defined region of the whole genome. Recently, alignment-free techniques are gaining more importance because they are able to overcome the drawbacks of sequence alignment techniques. In this paper a new alignment-free method for DNA sequence clustering and classification is proposed. The method is based on k-mer representation and text mining techniques. Methods: The presented method is based on Probabilistic Topic Modeling, a statistical technique originally proposed for text documents. Probabilistic topic models are able to find in a document corpus the topics (recurrent themes) characterizing classes of documents. This technique, applied to DNA sequences representing the documents, exploits the frequency of fixed-length k-mers and builds a generative model for a training group of sequences. This generative model, obtained through the Latent Dirichlet Allocation (LDA) algorithm, is then used to classify a large set of genomic sequences. Results and conclusions: We performed classification of over 7000 16S DNA barcode sequences taken from the Ribosomal Database Project (RDP) repository, training probabilistic topic models. The proposed method is compared to the RDP tool and the Support Vector Machine (SVM) classification algorithm in an extensive set of trials using both complete sequences and short sequence snippets (from 400 bp to 25 bp). Our method reaches very similar results to the RDP classifier and SVM for complete sequences. The most interesting results are obtained when short sequence snippets are considered. In these conditions the proposed method outperforms RDP and SVM with ultra-short sequences and it exhibits a smooth decrease of performance, at every taxonomic level, when the sequence length is decreased. PMID:25916734
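
    The k-mer "bag of words" plus LDA pipeline can be sketched with scikit-learn, assumed available; the sequences below are toy stand-ins for barcode reads:

        from sklearn.decomposition import LatentDirichletAllocation
        from sklearn.feature_extraction.text import CountVectorizer

        # Toy stand-ins for 16S barcode reads.
        seqs = ["ACGTACGTGGCCAATT", "ACGTACGTGGCCAATA",
                "TTGGCCAACGCGCGCG", "TTGGCCAACGCGCGCA"]

        # Represent each sequence by its k-mer counts (k = 4): sequences play
        # the role of documents and k-mers the role of words.
        vec = CountVectorizer(analyzer="char", ngram_range=(4, 4), lowercase=False)
        X = vec.fit_transform(seqs)

        # Each topic is a distribution over k-mers; each sequence becomes a
        # mixture of topics usable as features for classification.
        lda = LatentDirichletAllocation(n_components=2, random_state=0)
        print(lda.fit_transform(X).round(2))      # per-sequence topic proportions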

  3. Scaling precipitation input to spatially distributed hydrological models by measured snow distribution

    Directory of Open Access Journals (Sweden)

    Christian Vögeli

    2016-12-01

    Accurate knowledge of snow distribution in alpine terrain is crucial for various applications such as flood risk assessment, avalanche warning or managing water supply and hydro-power. To simulate the seasonal snow cover development in alpine terrain, the spatially distributed, physics-based model Alpine3D is suitable. The model is typically driven by spatial interpolations of observations from automatic weather stations (AWS), leading to errors in the spatial distribution of atmospheric forcing. With recent advances in remote sensing techniques, maps of snow depth can be acquired with high spatial resolution and accuracy. In this work, maps of the snow depth distribution, calculated from summer and winter digital surface models based on Airborne Digital Sensors (ADS), are used to scale precipitation input data, with the aim to improve the accuracy of simulation of the spatial distribution of snow with Alpine3D. A simple method to scale and redistribute precipitation is presented and the performance is analysed. The scaling method is only applied if it is snowing. For rainfall the precipitation is distributed by interpolation, with a simple air temperature threshold used for the determination of the precipitation phase. It was found that the accuracy of spatial snow distribution could be improved significantly for the simulated domain. The standard deviation of the absolute snow depth error is reduced by up to a factor of 3.4 to less than 20 cm. The mean absolute error in snow distribution was reduced when using representative input sources for the simulation domain. For inter-annual scaling, the model performance could also be improved, even when using a remote sensing dataset from a different winter. In conclusion, using remote sensing data to process precipitation input, complex processes such as preferential snow deposition and snow relocation due to wind or avalanches can be substituted, and the modelling performance of spatial snow distribution is improved.
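
    A hedged sketch of the scaling rule as described: when the air temperature indicates snowfall, the interpolated precipitation field is redistributed by a unit-mean scaling field derived from the measured snow-depth map; otherwise the interpolation is kept. All grids and thresholds below are invented:

        import numpy as np

        rng = np.random.default_rng(6)

        # Invented inputs: an AWS-interpolated precipitation field and an
        # ADS-derived snow-depth map on the same grid.
        precip_interp = np.full((100, 100), 5.0)        # mm per time step
        snow_depth = rng.gamma(4.0, 0.5, (100, 100))    # m

        air_temp = -3.0                            # deg C, for the phase decision
        if air_temp < 1.0:                         # snowing: redistribute
            scale = snow_depth / snow_depth.mean() # unit-mean scaling field
            precip = precip_interp * scale
        else:                                      # raining: keep interpolation
            precip = precip_interp

        # For this uniform field the unit-mean scaling preserves the domain mean.
        print("domain mean preserved:", bool(np.isclose(precip.mean(), 5.0)))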

  4. WE-FG-206-06: Dual-Input Tracer Kinetic Modeling and Its Analog Implementation for Dynamic Contrast-Enhanced (DCE-) MRI of Malignant Mesothelioma (MPM)

    Energy Technology Data Exchange (ETDEWEB)

    Lee, S; Rimner, A; Hayes, S; Hunt, M; Deasy, J; Zauderer, M; Rusch, V; Tyagi, N [Memorial Sloan Kettering Cancer Center, New York, NY (United States)

    2016-06-15

    Purpose: To use dual-input tracer kinetic modeling of the lung for mapping the spatial heterogeneity of various kinetic parameters in malignant MPM. Methods: Six MPM patients received DCE-MRI as part of their radiation therapy simulation scan. Five patients had the epithelioid subtype of MPM, while one was biphasic. A 3D fast-field echo sequence with TR/TE/flip angle of 3.62 ms/1.69 ms/15° was used for the DCE-MRI acquisition. The scan was collected for 5 minutes with a temporal resolution of 5-9 seconds depending on the spatial extent of the tumor. A principal component analysis-based groupwise deformable registration was used to co-register all the DCE-MRI series for motion compensation. All the images were analyzed using five different dual-input tracer kinetic models implemented in analog continuous-time formalism: the Tofts-Kety (TK), extended TK (ETK), two-compartment exchange (2CX), adiabatic approximation to the tissue homogeneity (AATH), and distributed parameter (DP) models. The following parameters were computed for each model: total blood flow (BF), pulmonary flow fraction (γ), pulmonary blood flow (BF-pa), systemic blood flow (BF-a), blood volume (BV), mean transit time (MTT), permeability-surface area product (PS), fractional interstitial volume (vi), extraction fraction (E), volume transfer constant (Ktrans) and efflux rate constant (kep). Results: Although the majority of patients had epithelioid histologies, kinetic parameter values varied across the different models. One patient showed a higher total BF value in all models among the epithelioid histologies, although the γ value varied among the different models. In one tumor with a large area of necrosis, the TK and ETK models showed higher E, Ktrans, and kep values and lower interstitial volume as compared to the AATH, DP and 2CX models. Kinetic parameters such as BF-pa, BF-a, PS and Ktrans were higher in the surviving group compared to the non-surviving group across most models. Conclusion: Dual-input tracer

  5. Automatic Control System with a Single-Input-Dual-Output Model for Controlling Instrument Service-Life Efficiency

    Directory of Open Access Journals (Sweden)

    S.N.M.P. Simamora

    2014-10-01

    Full Text Available An efficiency condition occurs when the ratio of useful output to the total resources consumed approaches the value 1 (an absolute environment). An instrument achieves efficiency if its power consumption over its service life decreases significantly compared to the previous condition, i.e., when the instrument is not equipped with the additional system (the proposed model improvement). The model is even more effective if the inputs are used in unison to achieve a homogeneous output. In this research, an automatic control system with a single-input-dual-output model was designed and implemented, with a lamp and a fan as the sampled instruments. The source voltage used is AC (alternating current), and the system was tested using quantitative research methods and instrumentation (observed with measuring instruments). The results demonstrate that instrument efficiency improved significantly under the single-input-dual-output model, applied separately in instrument trials with the lamp and the fan, compared to the prior condition. The results also show that the design, as built, runs well.

  6. Generation of synthetic sequences of electricity demand: Application in South Australia

    International Nuclear Information System (INIS)

    Magnano, L.; Boland, J.W.

    2007-01-01

    We have developed a model to generate synthetic sequences of half-hourly electricity demand. The generated sequences represent possible realisations of electricity load that could have occurred. Each of the components included in the model has a physical interpretation. These components are yearly and daily seasonality, which were modelled using Fourier series; weekly seasonality, modelled with dummy variables; and the relationship with current temperature, described by polynomial functions of temperature. Finally, the stochastic component was modelled with autoregressive moving average (ARMA) processes. These synthetic sequences were developed for two purposes. The first is to use them as input data in market simulation software. The second is to build probability distributions of the outputs to calculate probabilistic forecasts. As an application, several summers of half-hourly electricity demand were generated, and from them the value of demand that is not expected to be exceeded more than once in 10 years was calculated.
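    As a rough illustration of the structure just described (Fourier terms for seasonality, dummy variables for weekdays, a temperature polynomial, and an ARMA residual), the sketch below generates a toy half-hourly demand trace. All coefficients, the ARMA orders, and the temperature trace are invented placeholders, not the fitted values from the study.

        import numpy as np

        rng = np.random.default_rng(0)
        n_days, per_day = 90, 48                  # one summer, half-hourly
        t = np.arange(n_days * per_day)

        # Daily and yearly seasonality via low-order Fourier terms
        daily = (1000 + 300 * np.sin(2 * np.pi * t / per_day)
                      + 120 * np.cos(4 * np.pi * t / per_day))
        yearly = 150 * np.sin(2 * np.pi * t / (365 * per_day))

        # Weekly seasonality via a weekend dummy variable
        weekend = (t // per_day) % 7 >= 5
        weekly = np.where(weekend, -80.0, 0.0)

        # Polynomial response to a toy temperature trace
        temp = 25 + 8 * np.sin(2 * np.pi * t / per_day - 2) + rng.normal(0, 1, t.size)
        temp_effect = 0.9 * (temp - 20) ** 2

        # ARMA(1,1) residual simulated directly
        eps = rng.normal(0, 30, t.size)
        arma = np.empty(t.size)
        x_prev = e_prev = 0.0
        for i in range(t.size):
            arma[i] = 0.85 * x_prev + eps[i] + 0.3 * e_prev
            x_prev, e_prev = arma[i], eps[i]

        demand = daily + yearly + weekly + temp_effect + arma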

  7. A Local Poisson Graphical Model for inferring networks from sequencing data.

    Science.gov (United States)

    Allen, Genevera I; Liu, Zhandong

    2013-09-01

    Gaussian graphical models, a class of undirected graphs or Markov Networks, are often used to infer gene networks based on microarray expression data. Many scientists, however, have begun using high-throughput sequencing technologies such as RNA-sequencing or next generation sequencing to measure gene expression. As the resulting data consists of counts of sequencing reads for each gene, Gaussian graphical models are not optimal for this discrete data. In this paper, we propose a novel method for inferring gene networks from sequencing data: the Local Poisson Graphical Model. Our model assumes a Local Markov property where each variable conditional on all other variables is Poisson distributed. We develop a neighborhood selection algorithm to fit our model locally by performing a series of l1 penalized Poisson, or log-linear, regressions. This yields a fast parallel algorithm for estimating networks from next generation sequencing data. In simulations, we illustrate the effectiveness of our methods for recovering network structure from count data. A case study on breast cancer microRNAs (miRNAs), a novel application of graphical models, finds known regulators of breast cancer genes and discovers novel miRNA clusters and hubs that are targets for future research.
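    The neighborhood selection step described above amounts to one l1-penalized Poisson regression per gene against all the others. The sketch below implements that regression with a plain proximal-gradient (ISTA) loop in NumPy rather than the authors' own solver; the step size, penalty level, and toy data are arbitrary choices for illustration.

        import numpy as np

        def l1_poisson_regression(X, y, lam=0.1, step=1e-3, n_iter=5000):
            """Poisson regression with an l1 penalty via proximal gradient.

            Model: y_i ~ Poisson(exp(x_i @ beta)), penalised by lam*||beta||_1.
            """
            n, p = X.shape
            beta = np.zeros(p)
            for _ in range(n_iter):
                grad = X.T @ (np.exp(X @ beta) - y) / n    # gradient of the NLL
                beta = beta - step * grad
                # soft-thresholding (proximal step for the l1 penalty)
                beta = np.sign(beta) * np.maximum(np.abs(beta) - step * lam, 0.0)
            return beta

        def neighborhood_selection(counts, lam=0.1):
            """Regress each gene on all others; nonzero coefficients give edges."""
            n, p = counts.shape
            edges = set()
            for j in range(p):
                X = np.delete(counts, j, axis=1).astype(float)
                X = (X - X.mean(0)) / (X.std(0) + 1e-9)    # standardize predictors
                beta = l1_poisson_regression(X, counts[:, j].astype(float), lam=lam)
                others = [g for g in range(p) if g != j]
                for k, b in zip(others, beta):
                    if abs(b) > 1e-6:
                        edges.add(tuple(sorted((j, k))))
            return edges

        # Toy data: gene 2 depends on gene 0
        rng = np.random.default_rng(0)
        g0 = rng.poisson(2.0, 200)
        g1 = rng.poisson(3.0, 200)
        g2 = rng.poisson(np.exp(0.4 * g0 - 0.5))
        print(neighborhood_selection(np.column_stack([g0, g1, g2]), lam=0.05))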

  8. Nonlinear Synchronization for Automatic Learning of 3D Pose Variability in Human Motion Sequences

    Directory of Open Access Journals (Sweden)

    Mozerov M

    2010-01-01

    Full Text Available A dense matching algorithm that solves the problem of synchronizing prerecorded human motion sequences, which show different speeds and accelerations, is proposed. The approach is based on the minimization of an MRF energy and solves the problem using dynamic programming. Additionally, an optimal sequence is automatically selected from the input dataset to serve as a time-scale pattern for all other sequences. The paper utilizes an action-specific model which automatically learns the variability of 3D human postures observed in a set of training sequences. The model is trained using the public CMU motion capture dataset for the walking action, and a mean walking performance is automatically learnt. Additionally, statistics about the observed variability of the postures and motion direction are also computed at each time step. The synchronized motion sequences are used to learn a model of human motion for action recognition and full-body tracking purposes.
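    Synchronizing sequences that run at different speeds is, at its core, a dynamic-programming alignment problem. The authors minimize an MRF energy; the sketch below shows the closely related dynamic time warping recursion as a minimal stand-in, not their exact cost function.

        import numpy as np

        def dtw(a, b):
            """Dynamic time warping cost between two pose sequences.

            a, b : arrays of shape (n, d) and (m, d) of per-frame features.
            The warping path itself can be recovered by backtracking in D.
            """
            n, m = len(a), len(b)
            D = np.full((n + 1, m + 1), np.inf)
            D[0, 0] = 0.0
            for i in range(1, n + 1):
                for j in range(1, m + 1):
                    cost = np.linalg.norm(a[i - 1] - b[j - 1])
                    # match, insertion, or deletion, as in the classic recursion
                    D[i, j] = cost + min(D[i - 1, j - 1], D[i - 1, j], D[i, j - 1])
            return D[n, m]

        # Toy check: the same 1-D motion at two different speeds aligns cheaply
        t = np.linspace(0, 1, 50)
        slow = np.sin(2 * np.pi * t)[:, None]
        fast = np.sin(2 * np.pi * t[::2])[:, None]
        print(dtw(slow, fast))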

  9. Using hidden Markov models to align multiple sequences.

    Science.gov (United States)

    Mount, David W

    2009-07-01

    A hidden Markov model (HMM) is a probabilistic model of a multiple sequence alignment (msa) of proteins. In the model, each column of symbols in the alignment is represented by a frequency distribution of the symbols (called a "state"), and insertions and deletions are represented by other states. One moves through the model along a particular path from state to state in a Markov chain (i.e., random choice of next move), trying to match a given sequence. The next matching symbol is chosen from each state, recording its probability (frequency) and also the probability of going to that state from a previous one (the transition probability). State and transition probabilities are multiplied to obtain a probability of the given sequence. The hidden nature of the HMM is due to the lack of information about the value of a specific state, which is instead represented by a probability distribution over all possible values. This article discusses the advantages and disadvantages of HMMs in msa and presents algorithms for calculating an HMM and the conditions for producing the best HMM.
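    The multiplication of state and transition probabilities described above is exactly what the forward algorithm accumulates when summing over all paths. Below is a minimal forward pass for a toy two-state HMM; the matrices are invented for illustration and are not tied to any particular profile HMM.

        import numpy as np

        def forward(obs, pi, A, B):
            """Probability of an observation sequence under an HMM.

            pi : initial state distribution, shape (S,)
            A  : transitions, A[i, j] = P(next state j | state i)
            B  : emissions, B[i, k] = P(symbol k | state i)
            """
            alpha = pi * B[:, obs[0]]
            for o in obs[1:]:
                alpha = (alpha @ A) * B[:, o]   # propagate, then emit
            return alpha.sum()

        # Toy 2-state model over a 3-letter alphabet
        pi = np.array([0.6, 0.4])
        A = np.array([[0.7, 0.3],
                      [0.2, 0.8]])
        B = np.array([[0.5, 0.4, 0.1],
                      [0.1, 0.3, 0.6]])
        print(forward([0, 1, 2, 2], pi, A, B))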

  10. Vascular input function correction of inflow enhancement for improved pharmacokinetic modeling of liver DCE-MRI.

    Science.gov (United States)

    Ning, Jia; Schubert, Tilman; Johnson, Kevin M; Roldán-Alzate, Alejandro; Chen, Huijun; Yuan, Chun; Reeder, Scott B

    2018-06-01

    To propose a simple method to correct the vascular input function (VIF) for inflow effects and to test whether the proposed method can provide more accurate VIFs for improved pharmacokinetic modeling. A spoiled gradient echo sequence-based inflow quantification and contrast agent concentration correction method was proposed. Simulations were conducted to illustrate the improvement in the accuracy of VIF estimation and pharmacokinetic fitting. Animal studies with dynamic contrast-enhanced MR scans were conducted before, 1 week after, and 2 weeks after portal vein embolization (PVE) was performed in the left portal circulation of pigs. The proposed method was applied to correct the VIFs for model fitting. Pharmacokinetic parameters fitted using corrected and uncorrected VIFs were compared between different lobes and visits. Simulation results demonstrated that the proposed method can improve the accuracy of VIF estimation and pharmacokinetic fitting. In the animal study results, pharmacokinetic fitting using corrected VIFs demonstrated changes in perfusion consistent with changes expected after PVE, whereas the perfusion estimates derived from uncorrected VIFs showed no significant changes. The proposed correction method improves the accuracy of VIFs and therefore provides more precise pharmacokinetic fitting. This method may be promising in improving the reliability of perfusion quantification. Magn Reson Med 79:3093-3102, 2018. © 2017 International Society for Magnetic Resonance in Medicine.
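    The correction hinges on the spoiled gradient echo (SPGR) signal model: inflowing unsaturated spins appear brighter than the steady-state equation predicts. A generic SPGR steady-state signal and the standard linear relaxivity conversion from T1 change to concentration are sketched below; the paper's inflow-specific correction itself is not reproduced, and the numbers are illustrative.

        import numpy as np

        def spgr_signal(m0, t1, tr, flip_deg):
            """Steady-state spoiled gradient echo signal (TE decay ignored)."""
            a = np.deg2rad(flip_deg)
            e1 = np.exp(-tr / t1)
            return m0 * np.sin(a) * (1 - e1) / (1 - np.cos(a) * e1)

        def concentration_from_t1(t1, t1_0, r1=4.5):
            """Linear relaxivity model: R1(t) = R1(0) + r1 * C(t).

            r1 in 1/(mM*s); returns contrast agent concentration in mM.
            """
            return (1.0 / t1 - 1.0 / t1_0) / r1

        # Example: enhancement from T1 shortening, roughly blood-like values
        print(spgr_signal(1.0, t1=1.6, tr=0.004, flip_deg=15))
        print(concentration_from_t1(t1=0.4, t1_0=1.6))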

  11. The HMMER Web Server for Protein Sequence Similarity Search.

    Science.gov (United States)

    Prakash, Ananth; Jeffryes, Matt; Bateman, Alex; Finn, Robert D

    2017-12-08

    Protein sequence similarity search is one of the most commonly used bioinformatics methods for identifying evolutionarily related proteins. In general, sequences that are evolutionarily related share some degree of similarity, and sequence-search algorithms use this principle to identify homologs. The requirement for a fast and sensitive sequence search method led to the development of the HMMER software, which in the latest version (v3.1) uses a combination of sophisticated acceleration heuristics and mathematical and computational optimizations to enable the use of profile hidden Markov models (HMMs) for sequence analysis. The HMMER Web server provides a common platform by linking the HMMER algorithms to databases, thereby enabling the search for homologs, as well as providing sequence and functional annotation by linking external databases. This unit describes three basic protocols and two alternate protocols that explain how to use the HMMER Web server using various input formats and user-defined parameters. © 2017 by John Wiley & Sons, Inc.

  12. Unitary input DEA model to identify beef cattle production systems typologies

    Directory of Open Access Journals (Sweden)

    Eliane Gonçalves Gomes

    2012-08-01

    Full Text Available The cow-calf beef production sector in Brazil has a wide variety of operating systems. This suggests the identification and characterization of homogeneous production regions, with consequent implementation of actions to achieve their sustainability. In this paper we attempted to measure the performance of 21 livestock modal production systems in their cow-calf phase. We measured the performance of these systems considering husbandry and production variables. The proposed approach is based on data envelopment analysis (DEA). We used a unitary-input DEA model with apparent input orientation, together with the efficiency measurements generated by the inverted DEA frontier. We identified five typologies of modal production systems using the isoefficiency-layers approach. The results showed that knowledge and process management are the most important factors for improving the efficiency of beef cattle production systems.
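    For orientation, the envelopment form of the input-oriented CCR model with a single unitary input reduces to a small linear program per decision making unit. The sketch below solves it with scipy.optimize.linprog on made-up output data; it illustrates the generic unitary-input model only, not the authors' inverted-frontier or isoefficiency-layer steps.

        import numpy as np
        from scipy.optimize import linprog

        def dea_unitary_input_efficiency(Y, o):
            """Input-oriented CCR efficiency of DMU `o`, unit input for all DMUs.

            Solves: min theta  s.t.  sum_j lambda_j <= theta,
                                      Y.T @ lambda >= Y[o],  lambda >= 0,
            with decision variables [theta, lambda_1, ..., lambda_n].
            """
            n, s = Y.shape
            c = np.zeros(n + 1)
            c[0] = 1.0                                    # minimize theta
            A_ub = [np.concatenate(([-1.0], np.ones(n)))] # sum(lambda) - theta <= 0
            b_ub = [0.0]
            for k in range(s):                            # -Y.T @ lambda <= -Y[o]
                A_ub.append(np.concatenate(([0.0], -Y[:, k])))
                b_ub.append(-Y[o, k])
            bounds = [(None, None)] + [(0, None)] * n
            res = linprog(c, A_ub=np.array(A_ub), b_ub=np.array(b_ub), bounds=bounds)
            return res.x[0]

        Y = np.array([[10, 4], [8, 7], [6, 6], [4, 9]], dtype=float)  # toy outputs
        print([round(dea_unitary_input_efficiency(Y, o), 3) for o in range(len(Y))])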

  13. BlockLogo: Visualization of peptide and sequence motif conservation

    DEFF Research Database (Denmark)

    Olsen, Lars Rønn; Kudahl, Ulrich Johan; Simon, Christian

    2013-01-01

    BlockLogo is a web-server application for the visualization of protein and nucleotide fragments, continuous protein sequence motifs, and discontinuous sequence motifs using calculation of block entropy from multiple sequence alignments. The user input consists of a multiple sequence alignment, se...

  14. Revised sequence components power system models for unbalanced power system studies

    Energy Technology Data Exchange (ETDEWEB)

    Abdel-Akher, M. [Tunku Abdul Rahman Univ., Kuala Lumpur (Malaysia); Nor, K.-M. [Univ. of Technology Malaysia, Johor (Malaysia); Rashid, A.H.A. [Univ. of Malaya, Kuala Lumpur (Malaysia)

    2007-07-01

    The principal method of analysis using positive-, negative-, and zero-sequence networks has been used to examine balanced power systems under both balanced and unbalanced loading conditions. The significant advantage of the sequence networks is that they become entirely uncoupled in the case of balanced three-phase power systems. The uncoupled sequence networks can then be solved independently, as in fault calculation programs. However, the hypothesis of balanced power systems cannot be maintained in many cases, due to untransposed transmission lines, multiphase line segments in a distribution power system, or transformer phase shifts, which cannot be incorporated in the existing models. Revised sequence-decoupled power system models for analyzing unbalanced power systems based on symmetrical networks were presented in this paper. These models included synchronous machines, transformers, transmission lines, and voltage regulators. The models were derived from their counterpart models in the phase-coordinates frame of reference. In these models, the three sequence networks are fully decoupled while retaining three-phase features such as transformer phase shifts and transmission line coupling. The proposed models were used to develop an unbalanced power-flow program for analyzing both balanced and unbalanced networks. The power flow solution was identical to results obtained from a full phase-coordinates three-phase power-flow program. 11 refs., 3 tabs.
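    As background, the symmetrical-components transform that underlies sequence networks maps phase quantities to zero-, positive-, and negative-sequence quantities through the Fortescue matrix. A minimal NumPy sketch:

        import numpy as np

        a = np.exp(2j * np.pi / 3)             # 120-degree rotation operator
        A = np.array([[1, 1,    1],
                      [1, a**2, a],
                      [1, a,    a**2]])        # phase = A @ sequence

        def to_sequence(v_abc):
            """Phase quantities (Va, Vb, Vc) -> sequence quantities (V0, V1, V2)."""
            return np.linalg.solve(A, v_abc)   # same as (1/3) * A.conj().T @ v_abc

        # A balanced set leaves only the positive-sequence component
        v_abc = np.array([1.0 + 0j, a**2, a])  # Va = 1, Vb and Vc shifted by 120 deg
        print(np.round(to_sequence(v_abc), 6)) # -> [0, 1, 0]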

  15. Using Whole-House Field Tests to Empirically Derive Moisture Buffering Model Inputs

    Energy Technology Data Exchange (ETDEWEB)

    Woods, J.; Winkler, J.; Christensen, D.; Hancock, E.

    2014-08-01

    Building energy simulations can be used to predict a building's interior conditions, along with the energy use associated with keeping these conditions comfortable. These models simulate the loads on the building (e.g., internal gains, envelope heat transfer), determine the operation of the space conditioning equipment, and then calculate the building's temperature and humidity throughout the year. The indoor temperature and humidity are affected not only by the loads and the space conditioning equipment, but also by the capacitance of the building materials, which buffer changes in temperature and humidity. This research developed an empirical method to extract whole-house model inputs for use with a more accurate moisture capacitance model (the effective moisture penetration depth, or EMPD, model). The experimental approach was to subject the materials in the house to a square-wave relative humidity profile, measure all of the moisture transfer terms (e.g., infiltration, air conditioner condensate), and calculate the only unmeasured term: the moisture absorption into the materials. After validating the method with laboratory measurements, we performed the tests in a field house. A least-squares fit of an analytical solution to the measured moisture absorption curves was used to determine the three independent model parameters representing the moisture buffering potential of this house and its furnishings. Follow-on tests with realistic latent and sensible loads showed good agreement with the derived parameters, especially compared to the commonly used effective capacitance approach. These results show that the EMPD model, once the inputs are known, is an accurate moisture buffering model.
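    The parameter-extraction step described, fitting an analytical response to a step change in relative humidity, can be sketched with scipy.optimize.curve_fit. The exponential-approach form and all numbers below are illustrative placeholders; the actual EMPD analytical solution has more structure and is not reproduced here.

        import numpy as np
        from scipy.optimize import curve_fit

        def absorption_response(t, m_inf, tau):
            """Toy form: cumulative moisture uptake after a step in RH,
            approaching m_inf (kg) with time constant tau (hours)."""
            return m_inf * (1.0 - np.exp(-t / tau))

        # Synthetic "measured" curve standing in for the field data
        t = np.linspace(0, 48, 97)                    # 48 h at 30-min sampling
        rng = np.random.default_rng(1)
        measured = absorption_response(t, 1.8, 9.0) + rng.normal(0, 0.03, t.size)

        popt, pcov = curve_fit(absorption_response, t, measured, p0=[1.0, 5.0])
        print(f"m_inf = {popt[0]:.2f} kg, tau = {popt[1]:.1f} h")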

  16. Measurement of Laser Weld Temperatures for 3D Model Input

    Energy Technology Data Exchange (ETDEWEB)

    Dagel, Daryl [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Grossetete, Grant [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Maccallum, Danny O. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)

    2016-10-01

    Laser welding is a key joining process used extensively in the manufacture and assembly of critical components for several weapons systems. Sandia National Laboratories advances the understanding of the laser welding process through coupled experimentation and modeling. This report summarizes the experimental portion of the research program, which focused on measuring temperatures and thermal history of laser welds on steel plates. To increase confidence in measurement accuracy, researchers utilized multiple complementary techniques to acquire temperatures during laser welding. This data serves as input to and validation of 3D laser welding models aimed at predicting microstructure and the formation of defects and their impact on weld-joint reliability, a crucial step in rapid prototyping of weapons components.

  17. Modeling ChIP sequencing in silico with applications.

    Directory of Open Access Journals (Sweden)

    Zhengdong D Zhang

    2008-08-01

    Full Text Available ChIP sequencing (ChIP-seq is a new method for genomewide mapping of protein binding sites on DNA. It has generated much excitement in functional genomics. To score data and determine adequate sequencing depth, both the genomic background and the binding sites must be properly modeled. To develop a computational foundation to tackle these issues, we first performed a study to characterize the observed statistical nature of this new type of high-throughput data. By linking sequence tags into clusters, we show that there are two components to the distribution of tag counts observed in a number of recent experiments: an initial power-law distribution and a subsequent long right tail. Then we develop in silico ChIP-seq, a computational method to simulate the experimental outcome by placing tags onto the genome according to particular assumed distributions for the actual binding sites and for the background genomic sequence. In contrast to current assumptions, our results show that both the background and the binding sites need to have a markedly nonuniform distribution in order to correctly model the observed ChIP-seq data, with, for instance, the background tag counts modeled by a gamma distribution. On the basis of these results, we extend an existing scoring approach by using a more realistic genomic-background model. This enables us to identify transcription-factor binding sites in ChIP-seq data in a statistically rigorous fashion.
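    The key modeling point, a markedly nonuniform background with gamma-distributed tag counts plus enriched binding sites, is easy to mimic. The sketch below scatters background tags with gamma-distributed bin rates and spikes in a few sites; every parameter is invented for illustration and none comes from the paper.

        import numpy as np

        rng = np.random.default_rng(42)
        n_bins = 10_000                    # genome divided into fixed-width bins

        # Nonuniform background: gamma-distributed bin rates, Poisson counts
        bg_rate = rng.gamma(shape=0.8, scale=2.5, size=n_bins)
        tags = rng.poisson(bg_rate)

        # Enrichment at a handful of "binding sites"
        sites = rng.choice(n_bins, size=20, replace=False)
        tags[sites] += rng.poisson(50, size=sites.size)

        # Naive scoring: flag bins exceeding an extreme background quantile
        threshold = np.quantile(rng.poisson(bg_rate), 0.9999)
        called = np.flatnonzero(tags > threshold)
        print(f"called {called.size} bins, {np.isin(called, sites).sum()} true sites")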

  18. Sources of PCR-induced distortions in high-throughput sequencing data sets

    Science.gov (United States)

    Kebschull, Justus M.; Zador, Anthony M.

    2015-01-01

    PCR permits the exponential and sequence-specific amplification of DNA, even from minute starting quantities. PCR is a fundamental step in preparing DNA samples for high-throughput sequencing. However, there are errors associated with PCR-mediated amplification. Here we examine the effects of four important sources of error—bias, stochasticity, template switches and polymerase errors—on sequence representation in low-input next-generation sequencing libraries. We designed a pool of diverse PCR amplicons with a defined structure, and then used Illumina sequencing to search for signatures of each process. We further developed quantitative models for each process, and compared predictions of these models to our experimental data. We find that PCR stochasticity is the major force skewing sequence representation after amplification of a pool of unique DNA amplicons. Polymerase errors become very common in later cycles of PCR but have little impact on the overall sequence distribution as they are confined to small copy numbers. PCR template switches are rare and confined to low copy numbers. Our results provide a theoretical basis for removing distortions from high-throughput sequencing data. In addition, our findings on PCR stochasticity will have particular relevance to quantification of results from single cell sequencing, in which sequences are represented by only one or a few molecules. PMID:26187991
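    PCR stochasticity of the kind identified here is conveniently pictured as a branching (Galton-Watson) process: each molecule is copied with some per-cycle efficiency, so random early failures are amplified exponentially. A minimal simulation, with the efficiency value chosen arbitrarily:

        import numpy as np

        rng = np.random.default_rng(7)

        def pcr(n_molecules, cycles, efficiency=0.8):
            """Duplicate each molecule with probability `efficiency` per cycle."""
            counts = np.ones(n_molecules, dtype=np.int64)
            for _ in range(cycles):
                counts += rng.binomial(counts, efficiency)
            return counts

        # Amplify 1000 unique amplicons, each starting from a single molecule
        final = pcr(n_molecules=1000, cycles=20)
        frac = final / final.sum()
        print(f"CV of sequence representation: {frac.std() / frac.mean():.2f}")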

  19. PSA modeling of long-term accident sequences

    International Nuclear Information System (INIS)

    Georgescu, Gabriel; Corenwinder, Francois; Lanore, Jeanne-Marie

    2014-01-01

    In the context of the extension of PSA scope to include external hazards, in France, both the operator (EDF) and IRSN work on the improvement of methods to better take into account in the PSA the accident sequences induced by initiators which affect a whole site containing several nuclear units (reactors, fuel pools, etc.). These methodological improvements represent an essential prerequisite for the development of external hazards PSA. However, it has to be noted that in French PSA, even before Fukushima, long-term accident sequences were taken into account: many insights were therefore used, as complementary information, to enhance the safety level of the plants. IRSN proposed an external events PSA development program. One of the first steps of the program is the development of methods to model long-term accident sequences in the PSA, based on the experience gained. In the short term, IRSN intends to enhance the modeling of the 'long term' accident sequences induced by the loss of the heat sink and/or the loss of external power supply. The experience gained by IRSN and EDF from the development of several probabilistic studies treating long-term accident sequences shows that simply extending the mission time of the mitigation systems from 24 hours to longer times is not sufficient to realistically quantify the risk and to obtain a correct ranking of the risk contributions, and that the treatment of recoveries is also necessary. IRSN intends to develop a generic study which can be used as a general methodology for the assessment of long-term accident sequences, mainly those generated by external hazards and their combinations. This first attempt to develop the generic study allowed the identification of some aspects, whether hazards (or combinations of hazards) or initial boundary conditions, which should be taken into account in further developments. (authors)

  20. Characterization of memory states of the Preisach operator with stochastic inputs

    International Nuclear Information System (INIS)

    Amann, A.; Brokate, M.; McCarthy, S.; Rachinskii, D.; Temnov, G.

    2012-01-01

    The Preisach operator with inputs defined by a Markov process x^t is considered. The question we address is: what is the distribution of the random memory state of the Preisach operator at a given time moment t_0 in the limit r → ∞ of infinitely long input history x^t, t_0 - r ≤ t ≤ t_0? In order to answer this question, we introduce a Markov chain (called the memory state Markov chain) where the states are pairs (m_k, M_k) of elements from the monotone sequences of the local minimum input values m_k and the local maximum input values M_k recorded in the memory state, and the index k of the elements plays the role of time. We express the transition probabilities of this Markov chain in terms of the transition probabilities of the input stochastic process and show that the memory state Markov chain and the input process generate the same distribution of the memory states. These results are illustrated by several examples of stochastic inputs such as the Wiener and Bernoulli processes and their mixture (we first discuss a discrete version of these processes and then the continuous time and state setting). The memory state Markov chain is then used to find the distribution of the random number of elements in the memory state sequence. We show that this number has the Poisson distribution for the Wiener and Bernoulli processes inputs. In particular, in the discrete setting, the mean value of the number of elements in the memory state scales as ln N, where N is the number of the input states, while the mean time it takes the input to generate this memory state scales as N^2 for the Wiener process and as N for the Bernoulli process. A similar relationship between the dimension of the memory state vector and the number of iterations in the numerical realization of the input is shown for the mixture of the Wiener and Bernoulli processes, thus confirming that the memory state Markov chain is an efficient tool for generating the distribution of the Preisach operator memory

  1. Characterization of memory states of the Preisach operator with stochastic inputs

    Energy Technology Data Exchange (ETDEWEB)

    Amann, A. [Department of Applied Mathematics, University College Cork (Ireland); Brokate, M. [Zentrum Mathematik, Technische Universitaet Muenchen (Germany); McCarthy, S. [Department of Applied Mathematics, University College Cork (Ireland); Rachinskii, D., E-mail: d.rachinskii@ucc.ie [Department of Applied Mathematics, University College Cork (Ireland); Temnov, G. [Department of Mathematics, University College Cork (Ireland)

    2012-05-01

    The Preisach operator with inputs defined by a Markov process x^t is considered. The question we address is: what is the distribution of the random memory state of the Preisach operator at a given time moment t_0 in the limit r → ∞ of infinitely long input history x^t, t_0 - r ≤ t ≤ t_0? In order to answer this question, we introduce a Markov chain (called the memory state Markov chain) where the states are pairs (m_k, M_k) of elements from the monotone sequences of the local minimum input values m_k and the local maximum input values M_k recorded in the memory state, and the index k of the elements plays the role of time. We express the transition probabilities of this Markov chain in terms of the transition probabilities of the input stochastic process and show that the memory state Markov chain and the input process generate the same distribution of the memory states. These results are illustrated by several examples of stochastic inputs such as the Wiener and Bernoulli processes and their mixture (we first discuss a discrete version of these processes and then the continuous time and state setting). The memory state Markov chain is then used to find the distribution of the random number of elements in the memory state sequence. We show that this number has the Poisson distribution for the Wiener and Bernoulli processes inputs. In particular, in the discrete setting, the mean value of the number of elements in the memory state scales as ln N, where N is the number of the input states, while the mean time it takes the input to generate this memory state scales as N^2 for the Wiener process and as N for the Bernoulli process. A similar relationship between the dimension of the memory state vector and the number of iterations in the numerical realization of the input is shown for the mixture of the Wiener and Bernoulli processes, thus confirming that the memory state Markov chain is an efficient tool for
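    The memory state described, the alternating staircase of dominant maxima M_k and minima m_k, can be extracted from any input realization with the standard wiping-out (Madelung deletion) rule. A minimal sketch under a negative-saturation initial condition, independent of the paper's Markov-chain construction:

        import numpy as np

        def preisach_memory(xs):
            """Memory state of the Preisach operator for input history xs:
            alternating dominant extrema [M_1, m_1, M_2, m_2, ...], with the
            current input value as the last element."""
            mem = []
            for x in xs:
                if not mem:
                    mem = [x]
                    continue
                if x == mem[-1]:
                    continue
                rising = x > mem[-1]
                last_is_max = len(mem) % 2 == 1      # odd slots hold maxima
                # The last element is the current input; if x keeps moving the
                # same way, that element was not a turning point after all.
                if rising == last_is_max:
                    mem.pop()
                # Wiping-out rule: x erases dominated (extremum, reversal) pairs
                while len(mem) >= 2 and ((rising and x >= mem[-2]) or
                                         (not rising and x <= mem[-2])):
                    mem.pop()
                    mem.pop()
                mem.append(x)
            return mem

        # Random-walk input (a discrete Wiener-type process)
        rng = np.random.default_rng(3)
        walk = np.cumsum(rng.choice([-1, 1], size=10_000))
        state = preisach_memory(walk)
        print(len(state), state[:6])   # memory stays short relative to 10,000 steps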

  2. Linear and quadratic models of point process systems: contributions of patterned input to output.

    Science.gov (United States)

    Lindsay, K A; Rosenberg, J R

    2012-08-01

    In the 1880s Volterra characterised a nonlinear system using a functional series connecting continuous input and continuous output. Norbert Wiener, in the 1940s, circumvented problems associated with the application of Volterra series to physical problems by deriving from it a new series of terms that are mutually uncorrelated with respect to Gaussian processes. Subsequently, Brillinger, in the 1970s, introduced a point-process analogue of Volterra's series connecting point-process inputs to the instantaneous rate of point-process output. We derive here a new series from this analogue in which its terms are mutually uncorrelated with respect to Poisson processes. This new series expresses how patterned input in a spike train, represented by third-order cross-cumulants, is converted into the instantaneous rate of an output point process. Given experimental records of suitable duration, the contribution of arbitrary patterned input to an output process can, in principle, be determined. Solutions for linear and quadratic point-process models with one and two inputs and a single output are investigated. Our theoretical results are applied to isolated muscle spindle data in which the spike trains from the primary and secondary endings from the same muscle spindle are recorded in response to stimulation of one and then two static fusimotor axons in the absence and presence of a random length change imposed on the parent muscle. For a fixed mean rate of input spikes, the analysis of the experimental data makes explicit which patterns of two input spikes contribute to an output spike. Copyright © 2012 Elsevier Ltd. All rights reserved.

  3. Statistical approaches to use a model organism for regulatory sequences annotation of newly sequenced species.

    Directory of Open Access Journals (Sweden)

    Pietro Liò

    Full Text Available A major goal of bioinformatics is the characterization of transcription factors and the transcriptional programs they regulate. Given the speed of genome sequencing, we would like to quickly annotate regulatory sequences in newly sequenced genomes. In such cases, it would be helpful to predict sequence motifs by using experimental data from a closely related model organism. Here we present a general algorithm that allows the identification of transcription factor binding sites in a newly sequenced species by performing Bayesian regression on the annotated species. First we set out the rationale of our method by applying it within the same species; then we extend it to use data available in closely related species. Finally, we generalise the method to handle the case in which a number of experiments, from several species close to the species on which inference is to be made, are available. In order to show the performance of the method, we analyse three functionally related networks in the Ascomycota. Two gene network case studies are related to the G2/M phase of the Ascomycota cell cycle; the third is related to morphogenesis. We also compared the method with MatrixReduce and discuss other types of validation and tests. The first network is well known and provides a biological validation test of the method. The two cell cycle case studies, where the gene network size is conserved, demonstrate an effective utility in annotating new species sequences using all the available replicas from model species. The third case, where the gene network size varies among species, shows that the combination of information is less powerful but is still informative. Our methodology is quite general and could be extended to integrate other high-throughput data from model organisms.

  4. The economic impact of multifunctional agriculture in Dutch regions: An input-output model

    NARCIS (Netherlands)

    Heringa, P.W.; Heide, van der C.M.; Heijman, W.J.M.

    2013-01-01

    Multifunctional agriculture is a broad concept lacking a precise definition. Moreover, little is known about the societal importance of multifunctional agriculture. This paper is an empirical attempt to fill this gap. To this end, an input-output model was constructed for multifunctional agriculture

  5. Assigning probability distributions to input parameters of performance assessment models

    Energy Technology Data Exchange (ETDEWEB)

    Mishra, Srikanta [INTERA Inc., Austin, TX (United States)

    2002-02-01

    This study presents an overview of various approaches for assigning probability distributions to input parameters and/or future states of performance assessment models. Specifically, three broad approaches are discussed for developing input distributions: (a) fitting continuous distributions to data, (b) subjective assessment of probabilities, and (c) Bayesian updating of prior knowledge based on new information. The report begins with a summary of the nature of data and distributions, followed by a discussion of several common theoretical parametric models for characterizing distributions. Next, various techniques are presented for fitting continuous distributions to data. These include probability plotting, method of moments, maximum likelihood estimation and nonlinear least squares analysis. The techniques are demonstrated using data from a recent performance assessment study for the Yucca Mountain project. Goodness of fit techniques are also discussed, followed by an overview of how distribution fitting is accomplished in commercial software packages. The issue of subjective assessment of probabilities is dealt with in terms of the maximum entropy distribution selection approach, as well as some common rules for codifying informal expert judgment. Formal expert elicitation protocols are discussed next, and are based primarily on the guidance provided by the US NRC. The Bayesian framework for updating prior distributions (beliefs) when new information becomes available is discussed. A simple numerical approach is presented for facilitating practical applications of the Bayes theorem. Finally, a systematic framework for assigning distributions is presented: (a) for the situation where enough data are available to define an empirical CDF or fit a parametric model to the data, and (b) to deal with the situation where only a limited amount of information is available.

  6. Assigning probability distributions to input parameters of performance assessment models

    International Nuclear Information System (INIS)

    Mishra, Srikanta

    2002-02-01

    This study presents an overview of various approaches for assigning probability distributions to input parameters and/or future states of performance assessment models. Specifically, three broad approaches are discussed for developing input distributions: (a) fitting continuous distributions to data, (b) subjective assessment of probabilities, and (c) Bayesian updating of prior knowledge based on new information. The report begins with a summary of the nature of data and distributions, followed by a discussion of several common theoretical parametric models for characterizing distributions. Next, various techniques are presented for fitting continuous distributions to data. These include probability plotting, method of moments, maximum likelihood estimation and nonlinear least squares analysis. The techniques are demonstrated using data from a recent performance assessment study for the Yucca Mountain project. Goodness of fit techniques are also discussed, followed by an overview of how distribution fitting is accomplished in commercial software packages. The issue of subjective assessment of probabilities is dealt with in terms of the maximum entropy distribution selection approach, as well as some common rules for codifying informal expert judgment. Formal expert elicitation protocols are discussed next, and are based primarily on the guidance provided by the US NRC. The Bayesian framework for updating prior distributions (beliefs) when new information becomes available is discussed. A simple numerical approach is presented for facilitating practical applications of the Bayes theorem. Finally, a systematic framework for assigning distributions is presented: (a) for the situation where enough data are available to define an empirical CDF or fit a parametric model to the data, and (b) to deal with the situation where only a limited amount of information is available.
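    Several of the listed techniques, method of moments, maximum likelihood, and goodness-of-fit checks, are one-liners with scipy.stats. A small illustration on synthetic data; the lognormal choice and all parameter values are arbitrary stand-ins for real performance assessment data.

        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(0)
        data = rng.lognormal(mean=1.0, sigma=0.5, size=200)

        # Maximum likelihood fit of a lognormal (location fixed at zero)
        shape, loc, scale = stats.lognorm.fit(data, floc=0)

        # Method of moments on the log scale for comparison
        mu_mom, sigma_mom = np.log(data).mean(), np.log(data).std()

        # Goodness of fit: Kolmogorov-Smirnov test against the fitted model
        ks = stats.kstest(data, "lognorm", args=(shape, loc, scale))
        print(f"MLE sigma={shape:.3f}, MoM sigma={sigma_mom:.3f}, KS p={ks.pvalue:.2f}")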

  7. Race: A scalable and elastic parallel system for discovering repeats in very long sequences

    KAUST Repository

    Mansour, Essam

    2013-08-26

    A wide range of applications, including bioinformatics, time series, and log analysis, depend on the identification of repetitions in very long sequences. The problem of finding maximal pairs subsumes most important types of repetition-finding tasks. Existing solutions require both the input sequence and its index (typically an order of magnitude larger than the input) to fit in memory. Moreover, they are serial algorithms with long execution time. Therefore, they are limited to small datasets, despite the fact that modern applications demand orders of magnitude longer sequences. In this paper we present RACE, a parallel system for finding maximal pairs in very long sequences. RACE supports parallel execution on stand-alone multicore systems, in addition to scaling to thousands of nodes on clusters or supercomputers. RACE does not require the input or the index to fit in memory; therefore, it supports very long sequences with limited memory. Moreover, it uses a novel array representation that allows for cache-efficient implementation. RACE is particularly suitable for the cloud (e.g., Amazon EC2) because, based on availability, it can scale elastically to more or fewer machines during its execution. Since scaling out introduces overheads, mainly due to load imbalance, we propose a cost model to estimate the expected speedup, based on statistics gathered through sampling. The model allows the user to select the appropriate combination of cloud resources based on the provider's prices and the required deadline. We conducted extensive experimental evaluation with large real datasets and large computing infrastructures. In contrast to existing methods, RACE can handle the entire human genome on a typical desktop computer with 16GB RAM. Moreover, for a problem that takes 10 hours of serial execution, RACE finishes in 28 seconds using 2,048 nodes on an IBM BlueGene/P supercomputer.

  8. Urban Landscape Characterization Using Remote Sensing Data For Input into Air Quality Modeling

    Science.gov (United States)

    Quattrochi, Dale A.; Estes, Maurice G., Jr.; Crosson, William; Khan, Maudood

    2005-01-01

    The urban landscape is inherently complex, and this complexity is not adequately captured in the air quality models that are used to assess whether urban areas are in attainment of EPA air quality standards, particularly for ground-level ozone. This inadequacy of air quality models to sufficiently respond to the heterogeneous nature of the urban landscape can impact how well these models predict ozone pollutant levels over metropolitan areas and, ultimately, whether cities exceed EPA ozone air quality standards. We are exploring the utility of high-resolution remote sensing data and urban growth projections as improved inputs to meteorological and air quality models, focusing on the Atlanta, Georgia metropolitan area as a case study. The National Land Cover Dataset at 30 m resolution is being used as the land use/land cover input and aggregated to the 4 km scale for the MM5 mesoscale meteorological model and the Community Multiscale Air Quality (CMAQ) modeling schemes. Use of these data has been found to better characterize low-density/suburban development as compared with the USGS 1 km land use/land cover data that have traditionally been used in modeling. Air quality prediction for future scenarios to 2030 is being facilitated by land use projections using a spatial growth model. Land use projections were developed using the 2030 Regional Transportation Plan developed by the Atlanta Regional Commission. This allows the State Environmental Protection agency to evaluate how these transportation plans will affect future air quality.

  9. Efficient uncertainty quantification of a fully nonlinear and dispersive water wave model with random inputs

    DEFF Research Database (Denmark)

    Bigoni, Daniele; Engsig-Karup, Allan Peter; Eskilsson, Claes

    2016-01-01

    A major challenge in next-generation industrial applications is to improve numerical analysis by quantifying uncertainties in predictions. In this work we present a formulation of a fully nonlinear and dispersive potential flow water wave model with random inputs for the probabilistic description...... at different points in the parameter space, allowing for the reuse of existing simulation software. The choice of the applied methods is driven by the number of uncertain input parameters and by the fact that finding the solution of the considered model is computationally intensive. We revisit experimental...... benchmarks often used for validation of deterministic water wave models. Based on numerical experiments and assumed uncertainties in boundary data, our analysis reveals that some of the known discrepancies from deterministic simulation in comparison with experimental measurements could be partially explained...

  10. Chimera: construction of chimeric sequences for phylogenetic analysis

    NARCIS (Netherlands)

    Leunissen, J.A.M.

    2003-01-01

    Chimera allows the construction of chimeric protein or nucleic acid sequence files by concatenating sequences from two or more sequence files in PHYLIP formats. It allows the user to interactively select genes and species from the input files. The concatenated result is stored to one single output

  11. Hierarchical Bayesian modelling of mobility metrics for hazard model input calibration

    Science.gov (United States)

    Calder, Eliza; Ogburn, Sarah; Spiller, Elaine; Rutarindwa, Regis; Berger, Jim

    2015-04-01

    In this work we present a method to constrain flow mobility input parameters for pyroclastic flow models using hierarchical Bayes modeling of standard mobility metrics such as H/L and flow volume. The advantage of hierarchical modeling is that it can leverage the information in a global dataset for a particular mobility metric in order to reduce the uncertainty in modeling of an individual volcano, which is especially important where individual volcanoes have only sparse datasets. We use compiled pyroclastic flow runout data from Colima, Merapi, Soufriere Hills, Unzen and Semeru volcanoes, presented in the open-source database FlowDat (https://vhub.org/groups/massflowdatabase). While the exact relationship between flow volume and friction varies somewhat between volcanoes, dome collapse flows originating from the same volcano exhibit similar mobility relationships. Instead of fitting separate regression models for each volcano dataset, we use a variation of the hierarchical linear model (Kass and Steffey, 1989). The model presents a hierarchical structure with two levels: all dome collapse flows, and dome collapse flows at specific volcanoes. The hierarchical model allows us to assume that the flows at specific volcanoes share a common distribution of regression slopes, then solves for that distribution. We present comparisons of the 95% confidence intervals on the individual regression lines for the dataset from each volcano, as well as those obtained from the hierarchical model. The results clearly demonstrate the advantage of considering global datasets using this technique. The technique developed is demonstrated here for mobility metrics, but can be applied to many other global datasets of volcanic parameters. In particular, such methods can provide a means to better constrain parameters for volcanoes for which we only have sparse data, a ubiquitous problem in volcanology.
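    The pooling idea can be caricatured in a few lines: per-volcano regression slopes are shrunk toward the global mean in proportion to their sampling variance, an empirical-Bayes reading of the Kass and Steffey setup. All data and variance formulas below are invented for illustration.

        import numpy as np

        rng = np.random.default_rng(5)

        # Toy per-volcano datasets: log mobility metric vs log volume,
        # with deliberately unequal sample sizes (some volcanoes are sparse)
        true_slopes = [-0.20, -0.16, -0.24, -0.18, -0.22]
        sizes = [40, 8, 25, 5, 15]

        slopes, variances = [], []
        for beta, n in zip(true_slopes, sizes):
            x = rng.uniform(4, 7, n)                      # log volume
            y = 1.0 + beta * x + rng.normal(0, 0.08, n)   # log mobility metric
            X = np.column_stack([np.ones(n), x])
            coef, res, *_ = np.linalg.lstsq(X, y, rcond=None)
            sigma2 = res[0] / (n - 2)
            slopes.append(coef[1])
            variances.append(sigma2 / ((x - x.mean()) ** 2).sum())

        slopes, variances = np.array(slopes), np.array(variances)
        tau2 = max(slopes.var() - variances.mean(), 1e-6)  # between-volcano variance
        w = tau2 / (tau2 + variances)                      # shrinkage weights
        pooled = w * slopes + (1 - w) * slopes.mean()      # partial pooling
        print(np.round(slopes, 3), np.round(pooled, 3))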

  12. Rapid Diagnostics of Onboard Sequences

    Science.gov (United States)

    Starbird, Thomas W.; Morris, John R.; Shams, Khawaja S.; Maimone, Mark W.

    2012-01-01

    Keeping track of sequences onboard a spacecraft is challenging. When reviewing Event Verification Records (EVRs) of sequence executions on the Mars Exploration Rover (MER), operators often found themselves wondering which version of a named sequence the EVR corresponded to. The lack of this information drastically impacts the operators' diagnostic capabilities as well as their situational awareness with respect to the commands the spacecraft has executed, since the EVRs do not provide argument values or explanatory comments. Having this information immediately available can be instrumental in diagnosing critical events and can significantly enhance the overall safety of the spacecraft. This software provides an auditing capability that can eliminate that uncertainty while diagnosing critical conditions. Furthermore, the RESTful interface provides a simple way for sequencing tools to automatically retrieve binary compiled sequence SCMFs (Space Command Message Files) on demand. It also enables developers to change the underlying database while maintaining the same interface to the existing applications. The logging capabilities are also beneficial to operators when they are trying to recall how they solved a similar problem many days ago: this software enables automatic recovery of SCMF and RML (Robot Markup Language) sequence files directly from the command EVRs, eliminating the need for people to find and validate the corresponding sequences. To address the lack of auditing capability for sequences onboard a spacecraft during earlier missions, extensive logging support was added on the Mars Science Laboratory (MSL) sequencing server. This server is responsible for generating all MSL binary SCMFs from RML input sequences. The sequencing server logs every SCMF it generates into a MySQL database, as well as the high-level RML file and dictionary name inputs used to create the SCMF. The SCMF is then indexed by a hash value that is automatically included in all command

  13. Self-Exciting Point Process Modeling of Conversation Event Sequences

    Science.gov (United States)

    Masuda, Naoki; Takaguchi, Taro; Sato, Nobuo; Yano, Kazuo

    Self-exciting processes of Hawkes type have been used to model various phenomena including earthquakes, neural activities, and views of online videos. Studies of temporal networks have revealed that sequences of social interevent times for individuals are highly bursty. We examine some basic properties of event sequences generated by the Hawkes self-exciting process to show that it generates bursty interevent times for a wide parameter range. Then, we fit the model to the data of conversation sequences recorded in company offices in Japan. In this way, we can estimate relative magnitudes of the self-excitement, its temporal decay, and the base event rate independent of the self-excitation. These variables highly depend on individuals. We also point out that the Hawkes model has an important limitation: the correlation in the interevent times and the burstiness cannot be independently modulated.
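    A univariate Hawkes process with an exponential kernel, the usual concrete form of the model described, can be simulated with Ogata's thinning algorithm. The parameter values below are arbitrary; an interevent coefficient of variation above 1 signals burstiness.

        import numpy as np

        def simulate_hawkes(mu, alpha, beta, t_max, seed=0):
            """Ogata thinning for lambda(t) = mu + sum_i alpha*exp(-beta*(t-t_i))."""
            rng = np.random.default_rng(seed)
            events, t = [], 0.0
            while t < t_max:
                # Intensity only decays between events, so lambda(t) dominates
                lam_bar = mu + sum(alpha * np.exp(-beta * (t - ti)) for ti in events)
                t += rng.exponential(1.0 / lam_bar)        # candidate event time
                lam_t = mu + sum(alpha * np.exp(-beta * (t - ti)) for ti in events)
                if t < t_max and rng.uniform() <= lam_t / lam_bar:
                    events.append(t)                       # accept; intensity jumps
            return np.array(events)

        events = simulate_hawkes(mu=0.2, alpha=0.6, beta=1.5, t_max=500)
        iet = np.diff(events)
        print(f"{len(events)} events, interevent CV = {iet.std() / iet.mean():.2f}")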

  14. Evaluating the efficiency of municipalities in collecting and processing municipal solid waste: a shared input DEA-model.

    Science.gov (United States)

    Rogge, Nicky; De Jaeger, Simon

    2012-10-01

    This paper proposes an adjusted "shared-input" version of the popular efficiency measurement technique Data Envelopment Analysis (DEA) that enables the evaluation of municipal waste collection and processing performance in settings in which one input (waste costs) is shared among the treatment efforts of multiple municipal solid waste fractions. The main advantage of this version of DEA is that it provides not only an estimate of the municipalities' overall cost efficiency but also estimates of their cost efficiency in the treatment of the different fractions of municipal solid waste (MSW). To illustrate the practical usefulness of the shared-input DEA model, we apply the model to data on 293 municipalities in Flanders, Belgium, for the year 2008. Copyright © 2012 Elsevier Ltd. All rights reserved.

  15. Biological sequence analysis: probabilistic models of proteins and nucleic acids

    National Research Council Canada - National Science Library

    Durbin, Richard

    1998-01-01

    ... analysis methods are now based on principles of probabilistic modelling. Examples of such methods include the use of probabilistically derived score matrices to determine the significance of sequence alignments, the use of hidden Markov models as the basis for profile searches to identify distant members of sequence families, and the inference...

  16. Universal sequence replication, reversible polymerization and early functional biopolymers: a model for the initiation of prebiotic sequence evolution.

    Directory of Open Access Journals (Sweden)

    Sara Imari Walker

    Full Text Available Many models for the origin of life have focused on understanding how evolution can drive the refinement of a preexisting enzyme, such as the evolution of efficient replicase activity. Here we present a model for what was, arguably, an even earlier stage of chemical evolution, when polymer sequence diversity was generated and sustained before, and during, the onset of functional selection. The model includes regular environmental cycles (e.g., hydration-dehydration cycles) that drive polymers between times of replication and functional activity, which coincide with times of different monomer and polymer diffusivity. Template-directed replication of informational polymers, which takes place during the dehydration stage of each cycle, is considered to be sequence-independent. New sequences are generated by spontaneous polymer formation, and all sequences compete for a finite monomer resource that is recycled via reversible polymerization. Kinetic Monte Carlo simulations demonstrate that this proposed prebiotic scenario provides a robust mechanism for the exploration of sequence space. Introduction of a polymer sequence with monomer synthetase activity illustrates that functional sequences can become established in a preexisting pool of otherwise non-functional sequences. Functional selection does not dominate system dynamics and sequence diversity remains high, permitting the emergence and spread of more than one functional sequence. It is also observed that polymers spontaneously form clusters in simulations where polymers diffuse more slowly than monomers, a feature that is reminiscent of a previous proposal that the earliest stages of life could have been defined by the collective evolution of a system-wide cooperation of polymer aggregates. Overall, the results presented demonstrate the merits of considering plausible prebiotic polymer chemistries and environments that would have allowed for the rapid turnover of monomer resources and for

  17. Multiregional input-output model for the evaluation of Spanish water flows.

    Science.gov (United States)

    Cazcarro, Ignacio; Duarte, Rosa; Sánchez Chóliz, Julio

    2013-01-01

    We construct a multiregional input-output model for Spain in order to evaluate the pressures on water resources, virtual water flows, and water footprints of the regions, as well as the water impact of trade relationships within Spain and abroad. The study is framed within the family of interregional input-output models constructed to study water flows and impacts of regions in China, Australia, Mexico, and the UK. To build our database, we reconcile regional IO tables, national and regional accounts of Spain, and trade and water data. Results show an important imbalance between the origin of water resources and their final destination, with significant water pressures in the South, Mediterranean, and some central regions. The most populated and dynamic regions of Madrid and Barcelona are important drivers of water consumption in Spain. The main virtual water exporters are the southern and central agrarian regions: Andalusia, Castile-La Mancha, Castile-Leon, Aragon, and Extremadura, while the main virtual water importers are the industrialized regions of Madrid, the Basque Country, and the Mediterranean coast. The paper shows the different locations of direct and indirect consumers of water in Spain and how the economic trade and consumption patterns of certain areas have significant impacts on the availability of water resources in other, often drier, regions.
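    The accounting core of any (multi)regional input-output water model is the Leontief inverse: total output x = (I - A)^-1 f, with water use obtained by applying direct water coefficients to x. A two-region, two-sector toy example with invented numbers:

        import numpy as np

        # Technical coefficients, rows/cols ordered r1s1, r1s2, r2s1, r2s2
        A = np.array([
            [0.10, 0.05, 0.02, 0.01],
            [0.08, 0.12, 0.03, 0.02],
            [0.02, 0.01, 0.15, 0.06],
            [0.01, 0.03, 0.09, 0.10],
        ])
        f = np.array([100.0, 80.0, 120.0, 60.0])   # final demand by region-sector

        L = np.linalg.inv(np.eye(4) - A)           # Leontief inverse
        x = L @ f                                  # total output required

        w = np.array([0.8, 0.2, 1.5, 0.3])         # water use per unit output
        print(np.round(x, 1), np.round(w * x, 1))  # output and embodied water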

  18. On the redistribution of existing inputs using the spherical frontier dea model

    Directory of Open Access Journals (Sweden)

    José Virgilio Guedes de Avellar

    2010-04-01

    Full Text Available The Spherical Frontier DEA Model (SFM) (Avellar et al., 2007) was developed to be used when one wants to fairly distribute a new and fixed input to a group of Decision Making Units (DMUs). SFM's basic idea is to distribute this new and fixed input in such a way that every DMU will be placed on an efficiency frontier with a spherical shape. We use SFM to analyze the problems that appear when one wants to redistribute an already existing input to a group of DMUs such that the total sum of this input remains constant. We also analyze the case in which this total sum may vary.

  19. Digital PCR provides sensitive and absolute calibration for high throughput sequencing

    Directory of Open Access Journals (Sweden)

    Fan H Christina

    2009-03-01

    Full Text Available Abstract Background: Next-generation DNA sequencing on the 454, Solexa, and SOLiD platforms requires absolute calibration of the number of molecules to be sequenced. This requirement has two unfavorable consequences. First, large amounts of sample, typically micrograms, are needed for library preparation, thereby limiting the scope of samples which can be sequenced. For many applications, including metagenomics and the sequencing of ancient, forensic, and clinical samples, the quantity of input DNA can be critically limiting. Second, each library requires a titration sequencing run, thereby increasing the cost and lowering the throughput of sequencing. Results: We demonstrate the use of digital PCR to accurately quantify 454 and Solexa sequencing libraries, enabling the preparation of sequencing libraries from nanogram quantities of input material while eliminating costly and time-consuming titration runs of the sequencer. We successfully sequenced low-nanogram scale bacterial and mammalian DNA samples on the 454 FLX and Solexa DNA sequencing platforms. This study is the first to definitively demonstrate the successful sequencing of picogram quantities of input DNA on the 454 platform, reducing the sample requirement more than 1000-fold without pre-amplification and the associated bias and reduction in library depth. Conclusion: The digital PCR assay allows absolute quantification of sequencing libraries, eliminates uncertainties associated with the construction and application of standard curves to PCR-based quantification, and, with a coefficient of variation close to 10%, is sufficiently precise to enable direct sequencing without titration runs.
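    Digital PCR quantification rests on Poisson statistics: if a fraction p of partitions comes up positive, the mean number of template molecules per partition is lambda = -ln(1 - p). A minimal calculator; the partition count and volume are arbitrary examples, not the assay's actual values.

        import numpy as np

        def dpcr_concentration(n_positive, n_partitions, partition_volume_nl):
            """Template concentration from digital PCR counts.

            Poisson correction: lambda = -ln(1 - p), p = positive fraction.
            Returns copies per microliter.
            """
            p = n_positive / n_partitions
            lam = -np.log1p(-p)               # mean molecules per partition
            return lam / partition_volume_nl * 1000.0

        # Example: 312 of 765 wells positive, 6 nl per well (illustrative)
        print(f"{dpcr_concentration(312, 765, 6.0):.1f} copies/uL")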

  20. Development of MIDAS/SMR Input Deck for SMART

    International Nuclear Information System (INIS)

    Cho, S. W.; Oh, H. K.; Lee, J. M.; Lee, J. H.; Yoo, K. J.; Kwun, S. K.; Hur, H.

    2010-01-01

    The objective of this study is to develop MIDAS/SMR code basic input deck for the severe accidents by simulating the steady state for the SMART plant. SMART plant is an integrated reactor developed by KAERI. For the assessment of reactor safety and severe accident management strategy, it is necessary to simulate severe accidents using the MIDAS/SMR code which is being developed by KAERI. The input deck of the MIDAS/SMR code for the SMART plant is prepared to simulate severe accident sequences for the users who are not familiar with the code. A steady state is obtained and the results are compared with design values. The input deck will be improved through the simulation of the DBAs and severe accidents. The base input deck of the MIDAS/SMR code can be used to simulate severe accident scenarios after improvement. Source terms and hydrogen generation can be analyzed through the simulation of the severe accident. The information gained from analyses of severe accidents is expected to be helpful to develop the severe accident management strategy

  1. Modeling the ionosphere-thermosphere response to a geomagnetic storm using physics-based magnetospheric energy input: OpenGGCM-CTIM results

    Directory of Open Access Journals (Sweden)

    Connor Hyunju Kim

    2016-01-01

    Full Text Available The magnetosphere is a major source of energy for the Earth's ionosphere and thermosphere (IT) system. Current IT models drive the upper atmosphere using empirically calculated magnetospheric energy input. Thus, they do not sufficiently capture the storm-time dynamics, particularly at high latitudes. To improve the prediction capability of IT models, a physics-based magnetospheric input is necessary. Here, we use the Open Global General Circulation Model (OpenGGCM) coupled with the Coupled Thermosphere Ionosphere Model (CTIM). OpenGGCM calculates a three-dimensional global magnetosphere and a two-dimensional high-latitude ionosphere by solving resistive magnetohydrodynamic (MHD) equations with solar wind input. CTIM calculates a global thermosphere and a high-latitude ionosphere in three dimensions using realistic magnetospheric inputs from the OpenGGCM. We investigate whether the coupled model improves the storm-time IT responses by simulating a geomagnetic storm that is preceded by a strong solar wind pressure front on August 24, 2005. We compare the OpenGGCM-CTIM results with low-Earth-orbit satellite observations and with the model results of the Coupled Thermosphere-Ionosphere-Plasmasphere electrodynamics model (CTIPe). CTIPe is an up-to-date version of CTIM that incorporates more IT dynamics, such as a low-latitude ionosphere and a plasmasphere, but uses empirical magnetospheric input. OpenGGCM-CTIM reproduces localized neutral density peaks at ~400 km altitude in the high-latitude dayside regions, in agreement with in situ observations during the pressure shock and the early phase of the storm. Although CTIPe is in some sense a much superior model to CTIM, it misses these localized enhancements. Unlike the CTIPe empirical input models, OpenGGCM-CTIM more faithfully produces localized increases of both auroral precipitation and ionospheric electric fields near the high-latitude dayside region after the pressure shock and after the storm onset

  2. Artificial neural network modelling of biological oxygen demand in rivers at the national level with input selection based on Monte Carlo simulations.

    Science.gov (United States)

    Šiljić, Aleksandra; Antanasijević, Davor; Perić-Grujić, Aleksandra; Ristić, Mirjana; Pocajt, Viktor

    2015-03-01

    Biological oxygen demand (BOD) is the most significant water quality parameter and indicates water pollution with respect to the present biodegradable organic matter content. European countries are therefore obliged to report annual BOD values to Eurostat; however, BOD data at the national level are only available for 28 of 35 listed European countries for the period prior to 2008, among which 46% of the data are missing. This paper describes the development of an artificial neural network model for the forecasting of annual BOD values at the national level, using widely available sustainability and economic/industrial parameters as inputs. The initial general regression neural network (GRNN) model was trained, validated and tested utilizing 20 inputs. The number of inputs was reduced to 15 using the Monte Carlo simulation technique as the input selection method. The best results were achieved with the GRNN model utilizing 25% fewer inputs than the initial model, and a comparison with a multiple linear regression model, trained and tested on the same input variables and evaluated with multiple statistical performance indicators, confirmed the advantage of the GRNN model. Sensitivity analysis has shown that the inputs with the greatest effect on the GRNN model were (in descending order) precipitation, rural population with access to improved water sources, treatment capacity of wastewater treatment plants (urban) and treatment of municipal waste, with the last two having an equal effect. Finally, it was concluded that the developed GRNN model can be useful as a tool to support the decision-making process on sustainable development at a regional, national and international level.
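
    As a rough illustration of the approach described above, the following Python sketch pairs a minimal GRNN (a Gaussian-kernel weighted regression) with a Monte Carlo input-selection loop that samples random input subsets and keeps the one with the lowest validation error. The GRNN form, subset size, trial count and synthetic data are all assumptions; the study's actual simulation design is not detailed in the abstract.

    ```python
    import numpy as np

    def grnn_predict(X_train, y_train, X_test, sigma=0.5):
        """Minimal GRNN: a Gaussian-kernel weighted average of training targets."""
        d2 = ((X_test[:, None, :] - X_train[None, :, :]) ** 2).sum(axis=2)
        w = np.exp(-d2 / (2.0 * sigma ** 2))
        return (w @ y_train) / w.sum(axis=1)

    def monte_carlo_input_selection(X_tr, y_tr, X_va, y_va, n_trials=500, k=3, seed=0):
        """Sample random size-k input subsets; keep the one with the lowest RMSE."""
        rng = np.random.default_rng(seed)
        best_subset, best_rmse = None, np.inf
        for _ in range(n_trials):
            subset = rng.choice(X_tr.shape[1], size=k, replace=False)
            pred = grnn_predict(X_tr[:, subset], y_tr, X_va[:, subset])
            rmse = np.sqrt(np.mean((pred - y_va) ** 2))
            if rmse < best_rmse:
                best_subset, best_rmse = subset, rmse
        return np.sort(best_subset), best_rmse

    # Synthetic demo: only inputs 0, 3 and 7 of 20 candidates drive the target.
    rng = np.random.default_rng(1)
    X = rng.standard_normal((300, 20))
    y = X[:, 0] + 0.5 * X[:, 3] - X[:, 7] + 0.1 * rng.standard_normal(300)
    subset, rmse = monte_carlo_input_selection(X[:200], y[:200], X[200:], y[200:])
    print("selected inputs:", subset, "| validation RMSE:", round(rmse, 3))
    ```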

  3. An Approach for Generating Precipitation Input for Worst-Case Flood Modelling

    Science.gov (United States)

    Felder, Guido; Weingartner, Rolf

    2015-04-01

    There is a lack of suitable methods for creating precipitation scenarios that can be used to realistically estimate peak discharges with very low probabilities. On the one hand, existing methods are methodologically questionable when it comes to physical system boundaries. On the other hand, the spatio-temporal representativeness of precipitation patterns as system input is limited. In response, this study proposes a method of deriving representative spatio-temporal precipitation patterns and presents a step towards methodologically sound estimation of infrequent floods by using a worst-case approach. A Monte-Carlo rainfall-runoff model allows for the testing of a wide range of different spatio-temporal distributions of an extreme precipitation event and therefore for the generation of a hydrograph for each of these distributions. From these numerous hydrographs and their corresponding peak discharges, the worst-case catchment reaction to the system input can be derived. The spatio-temporal distributions leading to the highest peak discharges are identified and can eventually be used for further investigations.
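
    The worst-case logic described here (sample many spatio-temporal distributions of a fixed precipitation volume, route each through a rainfall-runoff model, and keep the distribution that produces the highest peak discharge) can be sketched in a few lines of Python. The toy linear-reservoir model and all numbers below are illustrative assumptions, not the authors' model.

    ```python
    import numpy as np

    rng = np.random.default_rng(42)
    n_cells, n_steps, total_depth = 10, 24, 120.0  # subareas, time steps, mm of rain

    def simulate_runoff(precip, k=0.3):
        """Toy linear-reservoir rainfall-runoff model; returns the outlet hydrograph."""
        storage = np.zeros(precip.shape[0])
        flow = []
        for t in range(precip.shape[1]):
            storage += precip[:, t]
            q = k * storage          # each cell releases a fixed fraction per step
            storage -= q
            flow.append(q.sum())     # cell outflows summed at the outlet
        return np.array(flow)

    best_peak, worst_pattern = -np.inf, None
    for _ in range(5000):
        # Random spatio-temporal split of the fixed event volume across cells/steps.
        pattern = rng.dirichlet(np.ones(n_cells * n_steps)).reshape(n_cells, n_steps)
        peak = simulate_runoff(total_depth * pattern).max()
        if peak > best_peak:
            best_peak, worst_pattern = peak, pattern

    print(f"worst-case peak discharge: {best_peak:.1f} (arbitrary units)")
    ```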

  4. On the Roche constants for main-sequence binaries

    International Nuclear Information System (INIS)

    Giannuzzi, M.A.

    1979-01-01

    The ratios C1/C2 of the constants defining the equipotential surfaces which describe the external forms of the components of a close binary system have been calculated on the basis of evolutionary models. Theoretical systems have been considered, allowing for a wide range of input parameters (masses and separation) and taking into account the evolutionary effects on the radii of the stars during their main-sequence lifetime. The systems have not undergone any transfer of matter and are representative of detached binaries with main-sequence components. The ratios of the constants are confined to limited intervals and, for the highest values of the mass-ratios, they are clustered around unity. (Auth.)

  5. Enhanced learning of natural visual sequences in newborn chicks.

    Science.gov (United States)

    Wood, Justin N; Prasad, Aditya; Goldman, Jason G; Wood, Samantha M W

    2016-07-01

    To what extent are newborn brains designed to operate over natural visual input? To address this question, we used a high-throughput controlled-rearing method to examine whether newborn chicks (Gallus gallus) show enhanced learning of natural visual sequences at the onset of vision. We took the same set of images and grouped them into either natural sequences (i.e., sequences showing different viewpoints of the same real-world object) or unnatural sequences (i.e., sequences showing different images of different real-world objects). When raised in virtual worlds containing natural sequences, newborn chicks developed the ability to recognize familiar images of objects. Conversely, when raised in virtual worlds containing unnatural sequences, newborn chicks' object recognition abilities were severely impaired. In fact, the majority of the chicks raised with the unnatural sequences failed to recognize familiar images of objects despite acquiring over 100 h of visual experience with those images. Thus, newborn chicks show enhanced learning of natural visual sequences at the onset of vision. These results indicate that newborn brains are designed to operate over natural visual input.

  6. Usefulness of non-linear input-output models for economic impact analyses in tourism and recreation

    NARCIS (Netherlands)

    Klijs, J.; Peerlings, J.H.M.; Heijman, W.J.M.

    2015-01-01

    In tourism and recreation management it is still common practice to apply traditional input–output (IO) economic impact models, despite their well-known limitations. In this study the authors analyse the usefulness of applying a non-linear input–output (NLIO) model, in which price-induced input

  7. Remote sensing inputs to landscape models which predict future spatial land use patterns for hydrologic models

    Science.gov (United States)

    Miller, L. D.; Tom, C.; Nualchawee, K.

    1977-01-01

    A tropical forest area of Northern Thailand provided a test case of the application of the approach in more natural surroundings. Remote sensing imagery subjected to proper computer analysis has been shown to be a very useful means of collecting spatial data for the science of hydrology. Remote sensing products provide direct input to hydrologic models and practical data bases for planning large and small-scale hydrologic developments. Combining the available remote sensing imagery together with available map information in the landscape model provides a basis for substantial improvements in these applications.

  8. Non-perturbative inputs for gluon distributions in the hadrons

    International Nuclear Information System (INIS)

    Ermolaev, B.I.; Troyan, S.I.

    2017-01-01

    Description of hadronic reactions at high energies is conventionally done in the framework of QCD factorization. All factorization convolutions comprise non-perturbative inputs mimicking non-perturbative contributions and perturbative evolution of those inputs. We construct inputs for the gluon-hadron scattering amplitudes in the forward kinematics and, using the optical theorem, convert them into inputs for gluon distributions in the hadrons, embracing the cases of polarized and unpolarized hadrons. In the first place, we formulate mathematical criteria which any model for the inputs should obey and then suggest a model satisfying those criteria. This model is based on a simple reasoning: after emitting an active parton off the hadron, the remaining set of spectators becomes unstable and therefore it can be described through factors of the resonance type, so we call it the resonance model. We use it to obtain non-perturbative inputs for gluon distributions in unpolarized and polarized hadrons for all available types of QCD factorization: basic, K_T- and collinear factorizations. (orig.)

  9. Non-perturbative inputs for gluon distributions in the hadrons

    Energy Technology Data Exchange (ETDEWEB)

    Ermolaev, B.I. [Ioffe Physico-Technical Institute, Saint Petersburg (Russian Federation); Troyan, S.I. [St. Petersburg Institute of Nuclear Physics, Gatchina (Russian Federation)

    2017-03-15

    Description of hadronic reactions at high energies is conventionally done in the framework of QCD factorization. All factorization convolutions comprise non-perturbative inputs mimicking non-perturbative contributions and perturbative evolution of those inputs. We construct inputs for the gluon-hadron scattering amplitudes in the forward kinematics and, using the optical theorem, convert them into inputs for gluon distributions in the hadrons, embracing the cases of polarized and unpolarized hadrons. In the first place, we formulate mathematical criteria which any model for the inputs should obey and then suggest a model satisfying those criteria. This model is based on a simple reasoning: after emitting an active parton off the hadron, the remaining set of spectators becomes unstable and therefore it can be described through factors of the resonance type, so we call it the resonance model. We use it to obtain non-perturbative inputs for gluon distributions in unpolarized and polarized hadrons for all available types of QCD factorization: basic, K{sub T}- and collinear factorizations. (orig.)

  10. Evaluation of precipitation input for SWAT modeling in Alpine catchment: A case study in the Adige river basin (Italy).

    Science.gov (United States)

    Tuo, Ye; Duan, Zheng; Disse, Markus; Chiogna, Gabriele

    2016-12-15

    Precipitation is often the most important input data in hydrological models when simulating streamflow. The Soil and Water Assessment Tool (SWAT), a widely used hydrological model, only makes use of data from the one precipitation gauge station that is nearest to the centroid of each subbasin, which is eventually corrected using the elevation band method. This leads in general to inaccurate representation of subbasin precipitation input data, particularly in catchments with complex topography. To investigate the impact of different precipitation inputs on SWAT model simulations in Alpine catchments, 13 years (1998-2010) of daily precipitation data from four datasets, including OP (Observed precipitation), IDW (Inverse Distance Weighting data), CHIRPS (Climate Hazards Group InfraRed Precipitation with Station data) and TRMM (Tropical Rainfall Measuring Mission), have been considered. Both model performance (comparing simulated and measured streamflow data at the catchment outlet) and parameter and prediction uncertainties have been quantified. For all three subbasins, the use of elevation bands is fundamental to match the water budget. Streamflow predictions obtained using IDW inputs are better than those obtained using the other datasets in terms of both model performance and prediction uncertainty. Models using the CHIRPS product as input provide satisfactory streamflow estimation, suggesting that this satellite product can be applied to this data-scarce Alpine region. Comparing the performance of SWAT models using different precipitation datasets is therefore important in data-scarce regions. This study has shown that precipitation is the main source of uncertainty, and different precipitation datasets in SWAT models lead to different best-estimate ranges for the calibrated parameters. This has important implications for the interpretation of the simulated hydrological processes. Copyright © 2016 The Authors. Published by Elsevier B.V. All rights reserved.
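
    Comparisons of this kind typically score simulated against observed streamflow with the Nash-Sutcliffe efficiency (NSE). A minimal Python sketch, using synthetic data in place of the actual SWAT runs for the four precipitation products, might look as follows; the noise level assigned to each dataset is arbitrary.

    ```python
    import numpy as np

    def nse(sim, obs):
        """Nash-Sutcliffe efficiency: 1 is perfect; below 0 is worse than the mean."""
        return 1.0 - np.sum((sim - obs) ** 2) / np.sum((obs - obs.mean()) ** 2)

    rng = np.random.default_rng(1)
    # Synthetic "observed" daily streamflow and four simulated runs whose error
    # levels stand in for the effect of the different precipitation inputs.
    obs = 10 + 5 * np.sin(np.linspace(0, 8 * np.pi, 365)) + rng.normal(0, 0.5, 365)
    runs = {
        "OP":     obs + rng.normal(0, 1.0, 365),
        "IDW":    obs + rng.normal(0, 0.6, 365),
        "CHIRPS": obs + rng.normal(0, 1.4, 365),
        "TRMM":   obs + rng.normal(0, 2.5, 365),
    }
    for name, sim in runs.items():
        print(f"{name:7s} NSE = {nse(sim, obs):.3f}")
    ```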

  11. Analysis of Sequence Diagram Layout in Advanced UML Modelling Tools

    Directory of Open Access Journals (Sweden)

    Ņikiforova Oksana

    2016-05-01

    Full Text Available System modelling using the Unified Modelling Language (UML) is a task that must be solved during software development. The more complex the software becomes, the higher the requirements for demonstrating the system to be developed, especially in its dynamic aspect, which in UML is captured by the sequence diagram. To solve this task, the main attention is devoted to the graphical presentation of the system, where diagram layout plays the central role in information perception. The UML sequence diagram, due to its specific structure, is selected for a deeper analysis of element layout. The authors' research examines the ability of modern UML modelling tools to lay out UML sequence diagrams automatically and analyses them against criteria required for diagram perception.

  12. Targeting the right input data to improve crop modeling at global level

    Science.gov (United States)

    Adam, M.; Robertson, R.; Gbegbelegbe, S.; Jones, J. W.; Boote, K. J.; Asseng, S.

    2012-12-01

    Designed for location-specific simulations, the use of crop models at a global level raises important questions. Crop models are originally premised on small unit areas where environmental conditions and management practices are considered homogeneous. Specific information describing soils, climate, management, and crop characteristics is used in the calibration process. However, when scaling up for global application, we rely on information derived from geographical information systems and weather generators. To run crop models at broad scale, we use a modeling platform that assumes a uniformly generated grid cell as a unit area. Specific weather, specific soil and specific management practices for each crop are represented for each grid cell. Studies on the impacts of the uncertainties of weather information and climate change on crop yield at a global level have been carried out (Osborne et al., 2007; Nelson et al., 2010; van Bussel et al., 2011). Detailed information on soils and management practices at the global level is very scarce but recognized to be of critical importance (Reidsma et al., 2009). Few attempts to assess the impact of their uncertainties on cropping system performance can be found. The objectives of this study are (i) to determine sensitivities of a crop model to soil and management practices, the inputs most relevant to low-input rainfed cropping systems, and (ii) to define hotspots of sensitivity according to the input data. We ran DSSAT v4.5 globally (CERES-CROPSIM) to simulate wheat yields at 45 arc-minute resolution. Cultivar parameters were calibrated and validated for different mega-environments (results not shown). The model was run for nitrogen-limited production systems. This setting was chosen as the most representative to simulate actual yield (especially for low-input rainfed agricultural systems) and assumes crop growth to be free of any pest and disease damage. We conducted a sensitivity analysis on contrasting management

  13. DSAP: deep-sequencing small RNA analysis pipeline.

    Science.gov (United States)

    Huang, Po-Jung; Liu, Yi-Chung; Lee, Chi-Ching; Lin, Wei-Chen; Gan, Richie Ruei-Chi; Lyu, Ping-Chiang; Tang, Petrus

    2010-07-01

    DSAP is an automated multiple-task web service designed to provide a total solution to analyzing deep-sequencing small RNA datasets generated by next-generation sequencing technology. DSAP uses a tab-delimited file as an input format, which holds the unique sequence reads (tags) and their corresponding number of copies generated by the Solexa sequencing platform. The input data will go through four analysis steps in DSAP: (i) cleanup: removal of adaptors and poly-A/T/C/G/N nucleotides; (ii) clustering: grouping of cleaned sequence tags into unique sequence clusters; (iii) non-coding RNA (ncRNA) matching: sequence homology mapping against a transcribed sequence library from the ncRNA database Rfam (http://rfam.sanger.ac.uk/); and (iv) known miRNA matching: detection of known miRNAs in miRBase (http://www.mirbase.org/) based on sequence homology. The expression levels corresponding to matched ncRNAs and miRNAs are summarized in multi-color clickable bar charts linked to external databases. DSAP is also capable of displaying miRNA expression levels from different jobs using a log(2)-scaled color matrix. Furthermore, a cross-species comparative function is also provided to show the distribution of identified miRNAs in different species as deposited in miRBase. DSAP is available at http://dsap.cgu.edu.tw.
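
    The first two DSAP steps, cleanup and clustering, are simple string operations. The Python sketch below shows one plausible reading of them, with a hypothetical adapter sequence, a 6-nucleotide threshold for trailing poly-A/T/C/G/N runs and a minimum tag length chosen purely for illustration; the ncRNA and miRNA matching steps (homology search against Rfam and miRBase) are omitted.

    ```python
    import re
    from collections import Counter

    ADAPTER = "TGGAATTCTCGGGTGCCAAGG"   # hypothetical 3' adapter sequence

    def cleanup(tag):
        """Step (i): trim the 3' adapter, then drop a trailing poly-A/T/C/G/N run."""
        idx = tag.find(ADAPTER)
        if idx != -1:
            tag = tag[:idx]
        return re.sub(r"([ATCGN])\1{5,}$", "", tag)   # runs of 6+ identical bases

    def cluster(records):
        """Step (ii): group cleaned tags into unique sequence clusters."""
        clusters = Counter()
        for tag, copies in records:          # (sequence, copy number) pairs
            cleaned = cleanup(tag)
            if len(cleaned) >= 16:           # discard tags too short to map
                clusters[cleaned] += copies
        return clusters

    # Tab-delimited Solexa-style input: "<tag>\t<copies>"
    lines = ["TCACCGGGTGTAAATCTGGAATTCTCGGGTGCCAAGG\t412",
             "TCACCGGGTGTAAATCTGGAATTCTCGGGTGCCAAGG\t37",
             "ACGTACGTACGTACGTAAAAAAAAAA\t5"]
    records = [(t, int(c)) for t, c in (ln.split("\t") for ln in lines)]
    print(cluster(records))
    ```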

  14. Iterative normalization technique for reference sequence generation for zero-tail discrete Fourier transform spread orthogonal frequency division multiplexing

    DEFF Research Database (Denmark)

    2017-01-01

    The method includes performing an iterative manipulation of the input sequence. The iterative manipulation may include, for example: computing the frequency domain response of the sequence and normalizing elements of the computed frequency domain sequence to unitary power while maintaining phase.
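
    Although the record is truncated, the described iteration reads like an alternating-projection scheme: enforce a flat (unit-power) frequency response while keeping phase, then re-impose the time-domain zero-tail structure. Below is a numpy sketch under that assumption, with the sequence length, tail length and iteration count chosen arbitrarily.

    ```python
    import numpy as np

    def iterative_reference_sequence(n=64, zero_tail=8, iters=100, seed=0):
        """Alternate between a unit-magnitude frequency response (phase kept)
        and the zero-tail time-domain constraint."""
        rng = np.random.default_rng(seed)
        x = rng.standard_normal(n) + 1j * rng.standard_normal(n)
        x[-zero_tail:] = 0.0
        for _ in range(iters):
            X = np.fft.fft(x)
            X = np.exp(1j * np.angle(X))   # normalize each element to unit power
            x = np.fft.ifft(X)
            x[-zero_tail:] = 0.0           # re-impose the zero tail
        return x

    seq = iterative_reference_sequence()
    mag = np.abs(np.fft.fft(seq))
    print(f"frequency-domain magnitude spread: {mag.min():.3f} to {mag.max():.3f}")
    ```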

  15. International trade inoperability input-output model (IT-IIM): theory and application.

    Science.gov (United States)

    Jung, Jeesang; Santos, Joost R; Haimes, Yacov Y

    2009-01-01

    The inoperability input-output model (IIM) has been used for analyzing disruptions due to man-made or natural disasters that can adversely affect the operation of economic systems or critical infrastructures. Taking economic perturbation for each sector as inputs, the IIM provides the degree of economic production impacts on all industry sectors as the outputs for the model. The current version of the IIM does not provide a separate analysis for the international trade component of the inoperability. If an important port of entry (e.g., Port of Los Angeles) is disrupted, then international trade inoperability becomes a highly relevant subject for analysis. To complement the current IIM, this article develops the International Trade-IIM (IT-IIM). The IT-IIM investigates the resulting international trade inoperability for all industry sectors resulting from disruptions to a major port of entry. Similar to traditional IIM analysis, the inoperability metrics that the IT-IIM provides can be used to prioritize economic sectors based on the losses they could potentially incur. The IT-IIM is used to analyze two types of direct perturbations: (1) the reduced capacity of ports of entry, including harbors and airports (e.g., a shutdown of any port of entry); and (2) restrictions on commercial goods that foreign countries trade with the base nation (e.g., embargo).
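
    The core of the IIM on which the IT-IIM builds is the Leontief-type relation q = A*q + c*, solved as q = (I - A*)^(-1) c*, where q is the vector of sector inoperabilities, A* the normalized interdependency matrix and c* the direct perturbation. A minimal Python illustration with hypothetical numbers follows; the international-trade layer of the IT-IIM itself is not reproduced here.

    ```python
    import numpy as np

    # Normalized interdependency matrix A* and direct perturbation c*
    # (hypothetical three-sector example).
    A_star = np.array([[0.10, 0.20, 0.05],
                       [0.15, 0.05, 0.10],
                       [0.05, 0.10, 0.15]])
    c_star = np.array([0.12, 0.0, 0.0])   # e.g., a disrupted port hits sector 0

    # Solve q = (I - A*)^{-1} c* for the resulting sector inoperabilities.
    q = np.linalg.solve(np.eye(3) - A_star, c_star)
    for i, qi in enumerate(q):
        print(f"sector {i}: inoperability {qi:.4f}")
    ```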

  16. Detection of no-model input-output pairs in closed-loop systems.

    Science.gov (United States)

    Potts, Alain Segundo; Alvarado, Christiam Segundo Morales; Garcia, Claudio

    2017-11-01

    The detection of no-model input-output (IO) pairs is important because it can speed up the multivariable system identification process, since all pairs with null transfer functions are discarded beforehand, and it can also improve the quality of the identified model, thus improving the performance of model-based controllers. The methods available in the literature focus only on the open-loop case, since there the controller does not force the main diagonal of the transfer matrix towards one and the off-diagonal terms towards zero. In this paper, a previous method able to detect no-model IO pairs in open-loop systems is modified and adapted to perform this task in closed-loop systems. Tests are performed using both the traditional methods and the proposed one to show its effectiveness. Copyright © 2017 ISA. Published by Elsevier Ltd. All rights reserved.

  17. MODexplorer: an integrated tool for exploring protein sequence, structure and function relationships.

    KAUST Repository

    Kosinski, Jan; Barbato, Alessandro; Tramontano, Anna

    2013-01-01

    SUMMARY: MODexplorer is an integrated tool aimed at exploring the sequence, structural and functional diversity in protein families useful in homology modeling and in analyzing protein families in general. It takes as input either the sequence or the structure of a protein and provides alignments with its homologs along with a variety of structural and functional annotations through an interactive interface. The annotations include sequence conservation, similarity scores, ligand-, DNA- and RNA-binding sites, secondary structure, disorder, crystallographic structure resolution and quality scores of models implied by the alignments to the homologs of known structure. MODexplorer can be used to analyze sequence and structural conservation among the structures of similar proteins, to find structures of homologs solved in different conformational state or with different ligands and to transfer functional annotations. Furthermore, if the structure of the query is not known, MODexplorer can be used to select the modeling templates taking all this information into account and to build a comparative model. AVAILABILITY AND IMPLEMENTATION: Freely available on the web at http://modorama.biocomputing.it/modexplorer. Website implemented in HTML and JavaScript with all major browsers supported. SUPPLEMENTARY INFORMATION: Supplementary data are available at Bioinformatics online.

  18. MODexplorer: an integrated tool for exploring protein sequence, structure and function relationships.

    KAUST Repository

    Kosinski, Jan

    2013-02-08

    SUMMARY: MODexplorer is an integrated tool aimed at exploring the sequence, structural and functional diversity in protein families useful in homology modeling and in analyzing protein families in general. It takes as input either the sequence or the structure of a protein and provides alignments with its homologs along with a variety of structural and functional annotations through an interactive interface. The annotations include sequence conservation, similarity scores, ligand-, DNA- and RNA-binding sites, secondary structure, disorder, crystallographic structure resolution and quality scores of models implied by the alignments to the homologs of known structure. MODexplorer can be used to analyze sequence and structural conservation among the structures of similar proteins, to find structures of homologs solved in different conformational state or with different ligands and to transfer functional annotations. Furthermore, if the structure of the query is not known, MODexplorer can be used to select the modeling templates taking all this information into account and to build a comparative model. AVAILABILITY AND IMPLEMENTATION: Freely available on the web at http://modorama.biocomputing.it/modexplorer. Website implemented in HTML and JavaScript with all major browsers supported. SUPPLEMENTARY INFORMATION: Supplementary data are available at Bioinformatics online.

  19. Inputs and spatial distribution patterns of Cr in Jiaozhou Bay

    Science.gov (United States)

    Yang, Dongfang; Miao, Zhenqing; Huang, Xinmin; Wei, Linzhen; Feng, Ming

    2018-03-01

    Cr pollution in marine bays has become one of the critical environmental issues, and understanding the input and spatial distribution patterns is essential to pollution control. According to the source strengths of the major pollution sources, the input patterns of pollutants to a marine bay can be classified as slight, moderate and heavy, and the corresponding spatial distributions follow three block models. This paper analyzed the input patterns and distributions of Cr in Jiaozhou Bay, eastern China, based on investigations of Cr in surface waters during 1979-1983. Results showed that the input strengths of Cr in Jiaozhou Bay could be classified as moderate input and slight input, with input strengths of 32.32-112.30 μg L-1 and 4.17-19.76 μg L-1, respectively. The input patterns of Cr thus comprised the moderate and slight patterns, and the horizontal distributions could be described by means of Block Model 2 and Block Model 3, respectively. In the case of moderate input via overland runoff, Cr contents decreased from the estuaries to the bay mouth, and the distribution pattern was parallel. In the case of moderate input via marine currents, Cr contents decreased from the bay mouth towards the inner bay, and the distribution pattern was parallel to circular. The block models are able to reveal the transfer process of various pollutants and are helpful for understanding the distributions of pollutants in marine bays.

  20. Categorical Inputs, Sensitivity Analysis, Optimization and Importance Tempering with tgp Version 2, an R Package for Treed Gaussian Process Models

    Directory of Open Access Journals (Sweden)

    Robert B. Gramacy

    2010-02-01

    Full Text Available This document describes the new features in version 2.x of the tgp package for R, implementing treed Gaussian process (GP) models. The topics covered include methods for dealing with categorical inputs and excluding inputs from the tree or GP part of the model; fully Bayesian sensitivity analysis for inputs/covariates; sequential optimization of black-box functions; and a new Monte Carlo method for inference in multi-modal posterior distributions that combines simulated tempering and importance sampling. These additions extend the functionality of tgp across all models in the hierarchy: from Bayesian linear models, to classification and regression trees (CART), to treed Gaussian processes with jumps to the limiting linear model. It is assumed that the reader is familiar with the baseline functionality of the package, outlined in the first vignette (Gramacy 2007).

  1. Uncertainty of input data for room acoustic simulations

    DEFF Research Database (Denmark)

    Jeong, Cheol-Ho; Marbjerg, Gerd; Brunskog, Jonas

    2016-01-01

    Although many room acoustic simulation models have been well established, simulation results will never be accurate with inaccurate and uncertain input data. This study addresses inappropriateness and uncertainty of input data for room acoustic simulations. Firstly, the random incidence absorption...... and scattering coefficients are insufficient when simulating highly non-diffuse rooms. More detailed information, such as the phase and angle dependence, can greatly improve the simulation results of pressure-based geometrical and wave-based models at frequencies well below the Schroeder frequency. Phase...... summarizes potential advanced absorption measurement techniques that can improve the quality of input data for room acoustic simulations. Lastly, plenty of uncertain input data are copied from unreliable sources. Software developers and users should be careful when spreading such uncertain input data. More...

  2. Quantifying input uncertainty in an assemble-to-order system simulation with correlated input variables of mixed types

    NARCIS (Netherlands)

    Akçay, A.E.; Biller, B.

    2014-01-01

    We consider an assemble-to-order production system where the product demands and the time since the last customer arrival are not independent. The simulation of this system requires a multivariate input model that generates random input vectors with correlated discrete and continuous components. In

  3. Feeling music: integration of auditory and tactile inputs in musical meter perception.

    Science.gov (United States)

    Huang, Juan; Gamble, Darik; Sarnlertsophon, Kristine; Wang, Xiaoqin; Hsiao, Steven

    2012-01-01

    Musicians often say that they not only hear, but also "feel" music. To explore the contribution of tactile information in "feeling" musical rhythm, we investigated the degree that auditory and tactile inputs are integrated in humans performing a musical meter recognition task. Subjects discriminated between two types of sequences, 'duple' (march-like rhythms) and 'triple' (waltz-like rhythms) presented in three conditions: 1) Unimodal inputs (auditory or tactile alone), 2) Various combinations of bimodal inputs, where sequences were distributed between the auditory and tactile channels such that a single channel did not produce coherent meter percepts, and 3) Simultaneously presented bimodal inputs where the two channels contained congruent or incongruent meter cues. We first show that meter is perceived similarly well (70%-85%) when tactile or auditory cues are presented alone. We next show in the bimodal experiments that auditory and tactile cues are integrated to produce coherent meter percepts. Performance is high (70%-90%) when all of the metrically important notes are assigned to one channel and is reduced to 60% when half of these notes are assigned to one channel. When the important notes are presented simultaneously to both channels, congruent cues enhance meter recognition (90%). Performance drops dramatically when subjects were presented with incongruent auditory cues (10%), as opposed to incongruent tactile cues (60%), demonstrating that auditory input dominates meter perception. We believe that these results are the first demonstration of cross-modal sensory grouping between any two senses.

  4. Information-Theoretic Properties of Auditory Sequences Dynamically Influence Expectation and Memory.

    Science.gov (United States)

    Agres, Kat; Abdallah, Samer; Pearce, Marcus

    2018-01-01

    A basic function of cognition is to detect regularities in sensory input to facilitate the prediction and recognition of future events. It has been proposed that these implicit expectations arise from an internal predictive coding model, based on knowledge acquired through processes such as statistical learning, but it is unclear how different types of statistical information affect listeners' memory for auditory stimuli. We used a combination of behavioral and computational methods to investigate memory for non-linguistic auditory sequences. Participants repeatedly heard tone sequences varying systematically in their information-theoretic properties. Expectedness ratings of tones were collected during three listening sessions, and a recognition memory test was given after each session. Information-theoretic measures of sequential predictability significantly influenced listeners' expectedness ratings, and variations in these properties had a significant impact on memory performance. Predictable sequences yielded increasingly better memory performance with increasing exposure. Computational simulations using a probabilistic model of auditory expectation suggest that listeners dynamically formed a new, and increasingly accurate, implicit cognitive model of the information-theoretic structure of the sequences throughout the experimental session. Copyright © 2017 Cognitive Science Society, Inc.
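
    A highly simplified stand-in for such a probabilistic expectation model is an incrementally trained bigram model whose information content, -log2 p(next | previous), falls as a sequence is repeated. The Python sketch below uses an invented repeating tone pattern and Laplace smoothing; it only mimics the flavor of dynamic predictive models such as IDyOM, not the model used in the study.

    ```python
    from collections import defaultdict
    import math

    def bigram_information_content(sequences, alphabet_size=12):
        """Return -log2 p(next | prev) for each tone, updating counts as we go."""
        counts = defaultdict(lambda: defaultdict(int))
        totals = defaultdict(int)
        ics = []
        for seq in sequences:
            for prev, nxt in zip(seq, seq[1:]):
                # Laplace-smoothed predictive probability, before updating counts
                p = (counts[prev][nxt] + 1) / (totals[prev] + alphabet_size)
                ics.append(-math.log2(p))
                counts[prev][nxt] += 1
                totals[prev] += 1
        return ics

    # Ten exposures to the same 32-tone repeating pattern (pitch classes 0-11).
    predictable = [[0, 2, 4, 5, 7, 9, 11, 0] * 4] * 10
    ics = bigram_information_content(predictable)
    print(f"mean IC, first pass: {sum(ics[:31]) / 31:.2f} bits; "
          f"last pass: {sum(ics[-31:]) / 31:.2f} bits")
    ```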

  5. Visual Perceptual Echo Reflects Learning of Regularities in Rapid Luminance Sequences.

    Science.gov (United States)

    Chang, Acer Y-C; Schwartzman, David J; VanRullen, Rufin; Kanai, Ryota; Seth, Anil K

    2017-08-30

    A novel neural signature of active visual processing has recently been described in the form of the "perceptual echo", in which the cross-correlation between a sequence of randomly fluctuating luminance values and occipital electrophysiological signals exhibits a long-lasting periodic (∼100 ms cycle) reverberation of the input stimulus (VanRullen and Macdonald, 2012). As yet, however, the mechanisms underlying the perceptual echo and its function remain unknown. Reasoning that natural visual signals often contain temporally predictable, though nonperiodic features, we hypothesized that the perceptual echo may reflect a periodic process associated with regularity learning. To test this hypothesis, we presented subjects with successive repetitions of a rapid nonperiodic luminance sequence, and examined the effects on the perceptual echo, finding that echo amplitude linearly increased with the number of presentations of a given luminance sequence. These data suggest that the perceptual echo reflects a neural signature of regularity learning. Furthermore, when a set of repeated sequences was followed by a sequence with inverted luminance polarities, the echo amplitude decreased to the same level evoked by a novel stimulus sequence. Crucially, when the original stimulus sequence was re-presented, the echo amplitude returned to a level consistent with the number of presentations of this sequence, indicating that the visual system retained sequence-specific information, for many seconds, even in the presence of intervening visual input. Altogether, our results reveal a previously undiscovered regularity learning mechanism within the human visual system, reflected by the perceptual echo. SIGNIFICANCE STATEMENT How the brain encodes and learns fast-changing but nonperiodic visual input remains unknown, even though such visual input characterizes natural scenes. We investigated whether the phenomenon of "perceptual echo" might index such learning. The perceptual echo is a
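
    The perceptual echo itself is obtained by cross-correlating the luminance sequence with the recorded occipital signal and averaging over trials. The numpy sketch below does this for simulated data, where a damped 10 Hz kernel stands in for the brain's response; all parameters (sampling rate, kernel shape, trial count) are invented for illustration.

    ```python
    import numpy as np

    rng = np.random.default_rng(7)
    fs, dur = 160, 6.25                      # Hz sample rate, seconds per trial
    n = int(fs * dur)
    lags = np.arange(0, int(0.6 * fs))       # lags from 0 to 600 ms

    def simulated_trial(stim, echo_freq=10.0, snr=0.2):
        """Toy EEG: the stimulus leaks in through a damped 10 Hz kernel plus noise."""
        t = np.arange(int(0.6 * fs)) / fs
        kernel = np.exp(-t / 0.3) * np.cos(2 * np.pi * echo_freq * t)
        return snr * np.convolve(stim, kernel)[:len(stim)] + rng.standard_normal(len(stim))

    xcorr = np.zeros(len(lags))
    for _ in range(50):                      # average cross-correlation over trials
        stim = rng.standard_normal(n)        # random luminance sequence
        eeg = simulated_trial(stim)
        for i, lag in enumerate(lags):
            xcorr[i] += np.dot(stim[:n - lag], eeg[lag:]) / (n - lag)
    xcorr /= 50
    print(f"echo amplitude at ~100 ms lag: {xcorr[int(0.1 * fs)]:.3f}")
    ```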

  6. Performance assessment of retrospective meteorological inputs for use in air quality modeling during TexAQS 2006

    Science.gov (United States)

    Ngan, Fong; Byun, Daewon; Kim, Hyuncheol; Lee, Daegyun; Rappenglück, Bernhard; Pour-Biazar, Arastoo

    2012-07-01

    To achieve more accurate meteorological inputs than were used in the daily forecast for studying TexAQS 2006 air quality, retrospective simulations were conducted using objective analysis and 3D/surface analysis nudging with surface and upper-air observations. Modeled ozone using the assimilated meteorological fields with improved wind fields shows better agreement with observations compared to the forecast results. In post-frontal conditions, the important factors for ozone modeling in terms of wind patterns are the weak easterlies in the morning, which bring industrial emissions into the city, and the subsequent clockwise turning of the wind direction induced by the Coriolis force superimposed on the sea breeze, which keeps pollutants in the urban area. Objective analysis and nudging employed in the retrospective simulation minimize the wind bias but are not able to compensate for general flow pattern biases inherited from large-scale inputs. By using alternative analysis data to initialize the meteorological simulation, the model can reproduce the flow pattern and generate the ozone peak location closer to reality. Inaccurate simulation of precipitation and cloudiness occasionally causes over-prediction of ozone. Since there are limitations in the meteorological model when simulating precipitation and cloudiness in a fine-scale domain (less than 4-km grid), satellite-based cloud data are an alternative way to provide the necessary inputs for retrospective air quality studies.

  7. Comparison of plasma input and reference tissue models for analysing [(11)C]flumazenil studies

    NARCIS (Netherlands)

    Klumpers, Ursula M. H.; Veltman, Dick J.; Boellaard, Ronald; Comans, Emile F.; Zuketto, Cassandra; Yaqub, Maqsood; Mourik, Jurgen E. M.; Lubberink, Mark; Hoogendijk, Witte J. G.; Lammertsma, Adriaan A.

    2008-01-01

    A single-tissue compartment model with plasma input is the established method for analysing [(11)C]flumazenil ([(11)C]FMZ) studies. However, arterial cannulation and measurement of metabolites are time-consuming. Therefore, a reference tissue approach is appealing, but this approach has not been

  8. The economic impact of multifunctional agriculture in The Netherlands: A regional input-output model

    NARCIS (Netherlands)

    Heringa, P.W.; Heide, van der C.M.; Heijman, W.J.M.

    2012-01-01

    Multifunctional agriculture is a broad concept lacking a precise and uniform definition. Moreover, little is known about the societal importance of multifunctional agriculture. This paper is an empirical attempt to fill this gap. To this end, an input-output model is constructed for multifunctional

  9. On the relationship between input parameters in a two-mass vocal-fold model with acoustical coupling and signal parameters of the glottal flow

    NARCIS (Netherlands)

    van Hirtum, Annemie; Lopez, Ines; Hirschberg, Abraham; Pelorson, Xavier

    2003-01-01

    In this paper the sensitivity of the two-mass model with acoustical coupling to the model input parameters is assessed. The model output, i.e. the glottal volume air flow, is characterised by signal parameters in the time domain. The influence of changing input parameters on the signal parameters is

  10. TurboFold: Iterative probabilistic estimation of secondary structures for multiple RNA sequences

    Directory of Open Access Journals (Sweden)

    Sharma Gaurav

    2011-04-01

    Full Text Available Abstract Background The prediction of secondary structure, i.e. the set of canonical base pairs between nucleotides, is a first step in developing an understanding of the function of an RNA sequence. The most accurate computational methods predict conserved structures for a set of homologous RNA sequences. These methods usually suffer from high computational complexity. In this paper, TurboFold, a novel and efficient method for secondary structure prediction for multiple RNA sequences, is presented. Results TurboFold takes, as input, a set of homologous RNA sequences and outputs estimates of the base pairing probabilities for each sequence. The base pairing probabilities for a sequence are estimated by combining intrinsic information, derived from the sequence itself via the nearest neighbor thermodynamic model, with extrinsic information, derived from the other sequences in the input set. For a given sequence, the extrinsic information is computed by using pairwise-sequence-alignment-based probabilities for co-incidence with each of the other sequences, along with estimated base pairing probabilities, from the previous iteration, for the other sequences. The extrinsic information is introduced as free energy modifications for base pairing in a partition function computation based on the nearest neighbor thermodynamic model. This process yields updated estimates of base pairing probability. The updated base pairing probabilities in turn are used to recompute extrinsic information, resulting in the overall iterative estimation procedure that defines TurboFold. TurboFold is benchmarked on a number of ncRNA datasets and compared against alternative secondary structure prediction methods. The iterative procedure in TurboFold is shown to improve estimates of base pairing probability with each iteration, though only small gains are obtained beyond three iterations. Secondary structures composed of base pairs with estimated probabilities higher than a

  11. Jointness through vessel capacity input in a multispecies fishery

    DEFF Research Database (Denmark)

    Hansen, Lars Gårn; Jensen, Carsten Lynge

    2014-01-01

    Multispecies fisheries are typically modeled as either independent single-species fisheries or using standard multispecies functional forms characterized by jointness in inputs. We argue that production of each species is essentially independent but that jointness may be caused by competition for a fixed but allocable input of vessel capacity. We develop a fixed but allocatable input model of purse seine fisheries capturing this particular type of jointness. We estimate the model for the Norwegian purse seine fishery and find that it is characterized by nonjointness, while estimations for this fishery using the standard models imply jointness.

  12. Long period pseudo random number sequence generator

    Science.gov (United States)

    Wang, Charles C. (Inventor)

    1989-01-01

    A circuit for generating a sequence of pseudo random numbers, (A_K). There is an exponentiator in GF(2^m) for the normal basis representation of elements in a finite field GF(2^m), each represented by m binary digits, having two inputs and an output from which the sequence (A_K) of pseudo random numbers is taken. One of the two inputs is connected to receive the outputs (E_K) of a maximal-length shift register of n stages. There is a switch having a pair of inputs and an output. The switch output is connected to the other of the two inputs of the exponentiator. One of the switch inputs is connected for initially receiving a primitive element (A_0) in GF(2^m). Finally, there is a delay circuit having an input and an output. The delay circuit output is connected to the other of the switch inputs and the delay circuit input is connected to the output of the exponentiator. Thus, after the exponentiator initially receives the primitive element (A_0) in GF(2^m) through the switch, the switch can be switched to feed the delayed output A_(K-1) back into the exponentiator, thereby generating (A_K) continuously at the output of the exponentiator. The exponentiator in GF(2^m) is novel and comprises a cyclic-shift circuit, a Massey-Omura multiplier, and a control logic circuit, all operably connected together to compute the factors U_i = A^(2^i) (for n_i = 1) or 1 (for n_i = 0).
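
    The recurrence the circuit implements, A_K = A_(K-1)^(E_K) with exponents E_K drawn from a maximal-length shift register, can be sketched in software. The Python below uses polynomial-basis GF(2^8) arithmetic for readability, whereas the patent's exponentiator works in a normal basis (where squaring is a cyclic shift) with a Massey-Omura multiplier; the field polynomial, LFSR taps and seeds are illustrative choices.

    ```python
    M = 8
    POLY = 0b100011011                 # x^8 + x^4 + x^3 + x + 1

    def gf_mul(a, b):
        """Carry-less multiplication modulo the reduction polynomial."""
        r = 0
        while b:
            if b & 1:
                r ^= a
            a <<= 1
            if a & (1 << M):
                a ^= POLY
            b >>= 1
        return r

    def gf_pow(a, e):
        """Square-and-multiply: product of U_i = a^(2^i) over the set bits of e."""
        r = 1
        while e:
            if e & 1:
                r = gf_mul(r, a)
            a = gf_mul(a, a)           # squaring; a cyclic shift in normal basis
            e >>= 1
        return r

    def lfsr(state, taps=0b10111000, nbits=8):
        """Maximal-length shift register supplying the exponents E_K."""
        out = state                    # use the full register contents as E_K
        fb = bin(state & taps).count("1") & 1
        state = (state >> 1) | (fb << (nbits - 1))
        return state, out or 1         # avoid a zero exponent

    a, state = 0x03, 0xE1              # primitive element A_0 and LFSR seed
    for _ in range(5):
        state, e = lfsr(state)
        a = gf_pow(a, e)               # A_K = A_(K-1)^(E_K)
        print(f"A_K = {a:#04x}")
    ```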

  13. ToPS: a framework to manipulate probabilistic models of sequence data.

    Directory of Open Access Journals (Sweden)

    André Yoshiaki Kashiwabara

    Full Text Available Discrete Markovian models can be used to characterize patterns in sequences of values and have many applications in biological sequence analysis, including gene prediction, CpG island detection, alignment, and protein profiling. We present ToPS, a computational framework that can be used to implement different applications in bioinformatics analysis by combining eight kinds of models: (i) independent and identically distributed process; (ii) variable-length Markov chain; (iii) inhomogeneous Markov chain; (iv) hidden Markov model; (v) profile hidden Markov model; (vi) pair hidden Markov model; (vii) generalized hidden Markov model; and (viii) similarity based sequence weighting. The framework includes functionality for training, simulation and decoding of the models. Additionally, it provides two methods to help parameter setting: Akaike and Bayesian information criteria (AIC and BIC). The models can be used stand-alone, combined in Bayesian classifiers, or included in more complex, multi-model, probabilistic architectures using GHMMs. In particular the framework provides a novel, flexible, implementation of decoding in GHMMs that detects when the architecture can be traversed efficiently.

  14. High Resolution Modeling of the Thermospheric Response to Energy Inputs During the RENU-2 Rocket Flight

    Science.gov (United States)

    Walterscheid, R. L.; Brinkman, D. G.; Clemmons, J. H.; Hecht, J. H.; Lessard, M.; Fritz, B.; Hysell, D. L.; Clausen, L. B. N.; Moen, J.; Oksavik, K.; Yeoman, T. K.

    2017-12-01

    The Earth's magnetospheric cusp provides direct access of energetic particles to the thermosphere. These particles produce ionization and kinetic (particle) heating of the atmosphere. The increased ionization coupled with enhanced electric fields in the cusp produces increased Joule heating and ion drag forcing. These energy inputs cause large wind and temperature changes in the cusp region. The Rocket Experiment for Neutral Upwelling-2 (RENU-2) launched from Andoya, Norway at 0745 UT on 13 December 2015 into the ionosphere-thermosphere beneath the magnetic cusp. It made measurements of the energy inputs (e.g., precipitating particles, electric fields) and the thermospheric response to these energy inputs (e.g., neutral density and temperature, neutral winds). Complementary ground-based measurements were made. In this study, we use a high-resolution two-dimensional time-dependent non-hydrostatic nonlinear dynamical model driven by rocket and ground-based measurements of the energy inputs to simulate the thermospheric response during the RENU-2 flight. Model simulations will be compared to the corresponding measurements of the thermosphere to see what they reveal about thermospheric structure and the nature of magnetosphere-ionosphere-thermosphere coupling in the cusp. Acknowledgements: This material is based upon work supported by the National Aeronautics and Space Administration under Grants NNX16AH46G and NNX13AJ93G. This research was also supported by The Aerospace Corporation's Technical Investment program.

  15. [Prosody, speech input and language acquisition].

    Science.gov (United States)

    Jungheim, M; Miller, S; Kühn, D; Ptok, M

    2014-04-01

    In order to acquire language, children require speech input. The prosody of the speech input plays an important role. In most cultures adults modify their code when communicating with children. Compared to normal speech this code differs especially with regard to prosody. For this review a selective literature search in PubMed and Scopus was performed. Prosodic characteristics are a key feature of spoken language. By analysing prosodic features, children gain knowledge about underlying grammatical structures. Child-directed speech (CDS) is modified in a way that meaningful sequences are highlighted acoustically so that important information can be extracted from the continuous speech flow more easily. CDS is said to enhance the representation of linguistic signs. Taking into consideration what has previously been described in the literature regarding the perception of suprasegmentals, CDS seems to be able to support language acquisition due to the correspondence of prosodic and syntactic units. However, no findings have been reported, stating that the linguistically reduced CDS could hinder first language acquisition.

  16. Nitric oxide regulates input specificity of long-term depression and context dependence of cerebellar learning.

    Directory of Open Access Journals (Sweden)

    Hideaki Ogasawara

    2007-01-01

    Full Text Available Recent studies have shown that multiple internal models are acquired in the cerebellum and that these can be switched under a given context of behavior. It has been proposed that long-term depression (LTD) of parallel fiber (PF)-Purkinje cell (PC) synapses forms the cellular basis of cerebellar learning, and that the presynaptically synthesized messenger nitric oxide (NO) is a crucial "gatekeeper" for LTD. Because NO diffuses freely to neighboring synapses, this volume learning is not input-specific, which brings into question the biological significance of LTD as the basic mechanism for efficient supervised learning. To better characterize the role of NO in cerebellar learning, we simulated the sequence of electrophysiological and biochemical events in PF-PC LTD by combining established simulation models of the electrophysiology, calcium dynamics, and signaling pathways of the PC. The results demonstrate that the local NO concentration is critical for induction of LTD and for its input specificity. Pre- and postsynaptic coincident firing is not sufficient for a PF-PC synapse to undergo LTD, and LTD is induced only when a sufficient amount of NO is provided by activation of the surrounding PFs. On the other hand, above-adequate levels of activity in nearby PFs cause accumulation of NO, which also allows LTD in neighboring synapses that were not directly stimulated, ruining input specificity. These findings lead us to propose the hypothesis that NO represents the relevance of a given context and enables context-dependent selection of internal models to be updated. We also predict sparse PF activity in vivo because, otherwise, input specificity would be lost.

  17. Persistence and ergodicity of plant disease model with markov conversion and impulsive toxicant input

    Science.gov (United States)

    Zhao, Wencai; Li, Juan; Zhang, Tongqian; Meng, Xinzhu; Zhang, Tonghua

    2017-07-01

    Taking into account both white and colored noise, a stochastic mathematical model with impulsive toxicant input is formulated. Based on this model, we investigate the dynamics, such as persistence and ergodicity, of a plant infectious disease model with Markov conversion in a polluted environment. The thresholds for extinction and persistence in mean are obtained. By using Lyapunov functions, we prove that the system is ergodic and has a stationary distribution under certain sufficient conditions. Finally, numerical simulations are employed to illustrate our theoretical analysis.
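
    The abstract does not give the model equations, so the following Python sketch is only a generic illustration of the named ingredients: a logistic-type disease term whose growth rate switches between two Markov regimes (colored noise), multiplicative white noise, and a periodically pulsed toxicant input. All coefficients are invented.

    ```python
    import numpy as np

    rng = np.random.default_rng(3)
    T, dt = 50.0, 0.001
    tau, dose = 5.0, 0.4               # impulse period and toxicant dose
    growth = {0: 0.9, 1: 0.2}          # regime-dependent growth rates
    switch_rate = 0.5                  # Markov switching intensity
    sigma, death = 0.15, 0.6

    x, c, r = 0.5, 0.0, 0              # infected fraction, toxicant level, regime
    for i in range(int(T / dt)):
        if rng.random() < switch_rate * dt:       # Markov regime switch
            r = 1 - r
        if (i * dt) % tau < dt:                   # impulsive toxicant input
            c += dose
        dW = rng.normal(0.0, np.sqrt(dt))
        # Logistic disease growth, suppressed by the toxicant, plus white noise.
        x += (growth[r] * x * (1 - x) - death * c * x) * dt + sigma * x * dW
        c -= 0.3 * c * dt                         # toxicant decays exponentially
        x = min(max(x, 0.0), 1.0)
    print(f"final infected fraction: {x:.3f}, toxicant level: {c:.3f}")
    ```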

  18. The sensitivity of ecosystem service models to choices of input data and spatial resolution

    Science.gov (United States)

    Bagstad, Kenneth J.; Cohen, Erika; Ancona, Zachary H.; McNulty, Steven; Sun, Ge

    2018-01-01

    Although ecosystem service (ES) modeling has progressed rapidly in the last 10–15 years, comparative studies on data and model selection effects have become more common only recently. Such studies have drawn mixed conclusions about whether different data and model choices yield divergent results. In this study, we compared the results of different models to address these questions at national, provincial, and subwatershed scales in Rwanda. We compared results for carbon, water, and sediment as modeled using InVEST and WaSSI using (1) land cover data at 30 and 300 m resolution and (2) three different input land cover datasets. WaSSI and simpler InVEST models (carbon storage and annual water yield) were relatively insensitive to the choice of spatial resolution, but more complex InVEST models (seasonal water yield and sediment regulation) produced large differences when applied at differing resolution. Six out of nine ES metrics (InVEST annual and seasonal water yield and WaSSI) gave similar predictions for at least two different input land cover datasets. Despite differences in mean values when using different data sources and resolution, we found significant and highly correlated results when using Spearman's rank correlation, indicating consistent spatial patterns of high and low values. Our results confirm and extend conclusions of past studies, showing that in certain cases (e.g., simpler models and national-scale analyses), results can be robust to data and modeling choices. For more complex models, those with different output metrics, and subnational to site-based analyses in heterogeneous environments, data and model choices may strongly influence study findings.

  19. Sequence-structure relationships in RNA loops: establishing the basis for loop homology modeling.

    Science.gov (United States)

    Schudoma, Christian; May, Patrick; Nikiforova, Viktoria; Walther, Dirk

    2010-01-01

    The specific function of RNA molecules frequently resides in their seemingly unstructured loop regions. We performed a systematic analysis of RNA loops extracted from experimentally determined three-dimensional structures of RNA molecules. A comprehensive loop-structure data set was created and organized into distinct clusters based on structural and sequence similarity. We detected clear evidence of the hallmark of homology present in the sequence-structure relationships in loops: loops with sufficiently similar sequences fold into similar structures. Thus, our results support the application of homology modeling for RNA loop model building. We established a threshold that may guide the sequence divergence-based selection of template structures for RNA loop homology modeling. Of all possible sequences that are, under the assumption of isosteric relationships, theoretically compatible with actual sequences observed in RNA structures, only a small fraction is contained in the Rfam database of RNA sequences and classes, implying that the actual RNA loop space may consist of a limited number of unique loop structures and conserved sequences. The loop-structure data sets are made available via an online database, RLooM. RLooM also offers functionalities for the modeling of RNA loop structures in support of RNA engineering and design efforts.

  20. A generic method for automatic translation between input models for different versions of simulation codes

    International Nuclear Information System (INIS)

    Serfontein, Dawid E.; Mulder, Eben J.; Reitsma, Frederik

    2014-01-01

    A computer code was developed for the semi-automatic translation of input models for the VSOP-A diffusion neutronics simulation code to the format of the newer VSOP 99/05 code. In this paper, this algorithm is presented as a generic method for producing codes for the automatic translation of input models from the format of one code version to another, or even to that of a completely different code. Normally, such translations are done manually. However, input model files, such as for the VSOP codes, often are very large and may consist of many thousands of numeric entries that make no particular sense to the human eye. Therefore the task of, for instance, nuclear regulators to verify the accuracy of such translated files can be very difficult and cumbersome. This may cause translation errors not to be picked up, which may have disastrous consequences later on when a reactor with such a faulty design is built. Therefore a generic algorithm for producing such automatic translation codes may ease the translation and verification process to a great extent. It will also remove human error from the process, which may significantly enhance the accuracy and reliability of the process. The developed algorithm also automatically creates a verification log file which permanently records the names and values of each variable used, as well as the list of meanings of all the possible values. This should greatly facilitate reactor licensing applications

  1. A generic method for automatic translation between input models for different versions of simulation codes

    Energy Technology Data Exchange (ETDEWEB)

    Serfontein, Dawid E., E-mail: Dawid.Serfontein@nwu.ac.za [School of Mechanical and Nuclear Engineering, North West University (PUK-Campus), PRIVATE BAG X6001 (Internal Post Box 360), Potchefstroom 2520 (South Africa); Mulder, Eben J. [School of Mechanical and Nuclear Engineering, North West University (South Africa); Reitsma, Frederik [Calvera Consultants (South Africa)

    2014-05-01

    A computer code was developed for the semi-automatic translation of input models for the VSOP-A diffusion neutronics simulation code to the format of the newer VSOP 99/05 code. In this paper, this algorithm is presented as a generic method for producing codes for the automatic translation of input models from the format of one code version to another, or even to that of a completely different code. Normally, such translations are done manually. However, input model files, such as for the VSOP codes, often are very large and may consist of many thousands of numeric entries that make no particular sense to the human eye. Therefore the task of, for instance, nuclear regulators to verify the accuracy of such translated files can be very difficult and cumbersome. This may cause translation errors not to be picked up, which may have disastrous consequences later on when a reactor with such a faulty design is built. Therefore a generic algorithm for producing such automatic translation codes may ease the translation and verification process to a great extent. It will also remove human error from the process, which may significantly enhance the accuracy and reliability of the process. The developed algorithm also automatically creates a verification log file which permanently records the names and values of each variable used, as well as the list of meanings of all the possible values. This should greatly facilitate reactor licensing applications.

  2. Accident sequence precursor analysis level 2/3 model development

    International Nuclear Information System (INIS)

    Lui, C.H.; Galyean, W.J.; Brownson, D.A.

    1997-01-01

    The US Nuclear Regulatory Commission's Accident Sequence Precursor (ASP) program currently uses simple Level 1 models to assess the conditional core damage probability for operational events occurring in commercial nuclear power plants (NPP). Since not all accident sequences leading to core damage will result in the same radiological consequences, it is necessary to develop simple Level 2/3 models that can be used to analyze the response of the NPP containment structure in the context of a core damage accident, estimate the magnitude of the resulting radioactive releases to the environment, and calculate the consequences associated with these releases. The simple Level 2/3 model development work was initiated in 1995, and several prototype models have been completed. Once developed, these simple Level 2/3 models are linked to the simple Level 1 models to provide risk perspectives for operational events. This paper describes the methods implemented for the development of these simple Level 2/3 ASP models, and the linkage process to the existing Level 1 models

  3. Evaluating the efficiency of municipalities in collecting and processing municipal solid waste: A shared input DEA-model

    International Nuclear Information System (INIS)

    Rogge, Nicky; De Jaeger, Simon

    2012-01-01

    Highlights: ► Complexity in local waste management calls for more in-depth efficiency analysis. ► Shared-input Data Envelopment Analysis can provide a solution. ► Considerable room for the Flemish municipalities to improve their cost efficiency. - Abstract: This paper proposes an adjusted “shared-input” version of the popular efficiency measurement technique Data Envelopment Analysis (DEA) that enables evaluating municipal waste collection and processing performance in settings in which one input (waste costs) is shared among the treatment efforts of multiple municipal solid waste fractions. The main advantage of this version of DEA is that it provides not only an estimate of the municipalities’ overall cost efficiency but also estimates of the municipalities’ cost efficiency in the treatment of the different fractions of municipal solid waste (MSW). To illustrate the practical usefulness of the shared-input DEA model, we apply the model to data on 293 municipalities in Flanders, Belgium, for the year 2008.

  4. Constituency Input into Budget Management.

    Science.gov (United States)

    Miller, Norman E.

    1995-01-01

    Presents techniques for ensuring constituency involvement in district- and site-level budget management. Outlines four models for securing constituent input and focuses on strategies to orchestrate the more complex model for staff and community participation. Two figures are included. (LMI)

  5. Using physicochemical and compositional characteristics of DNA sequence for prediction of genomic signals

    KAUST Repository

    Mulamba, Pierre Abraham

    2014-12-01

    The challenge of finding genes in eukaryotic organisms using computational methods is an ongoing problem in biology. Based on the various genomic signals found in eukaryotic genomes, this problem can be divided into many different sub-problems, such as identification of transcription start sites, translation initiation sites, splice sites, poly(A) signals, etc. Each sub-problem deals with a particular type of genomic signal, and various computational methods are used to solve it. Aggregating information from all these individual sub-problems can lead to a complete annotation of a gene and its component signals. The fundamental principle of most of these computational methods is the mapping principle: building an input-output model for the prediction of a particular genomic signal based on a set of known input signals and their corresponding output signal. The type of input signals used to build the model is an essential element in most of these computational methods, and their common factor is that they are mainly based on statistical analysis of the basic nucleotide sequence composition. Our study is based on a novel approach to predicting genomic signals in which uniquely generated structural profiles that combine compressed physicochemical properties with topological and compositional properties of DNA sequences are used to develop machine learning predictive models. The compression of the physicochemical properties is performed using a principal component analysis transformation. Our ideas are evaluated through prediction models of canonical splice sites using support vector machine models. We demonstrate across several species that the proposed methodology has resulted in the most accurate splice site predictors that are publicly available or described. We believe that the approach in this study is quite general and has various applications in other biological modeling problems.
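
    To make the described pipeline concrete, here is a minimal sketch of its two modeling steps, PCA compression of structural profiles followed by an SVM classifier, run on random stand-in data; the window size, feature count, and labels are placeholders rather than the study's actual setup.

```python
# Sketch of the modeling pipeline described above: compress per-position
# physicochemical profiles with PCA, then train an SVM splice-site
# classifier. Data here are random; real work would use annotated genomes.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)
n_sites, n_features = 500, 400   # e.g. 100-bp window x 4 properties
X = rng.normal(size=(n_sites, n_features))   # structural profiles
y = rng.integers(0, 2, size=n_sites)         # 1 = true splice site

model = make_pipeline(StandardScaler(),
                      PCA(n_components=20),  # compress the properties
                      SVC(kernel="rbf", C=1.0))
print(cross_val_score(model, X, y, cv=5).mean())
```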

  6. Hydrogen Generation Rate Model Calculation Input Data

    International Nuclear Information System (INIS)

    KUFAHL, M.A.

    2000-01-01

    This report documents the procedures and techniques utilized in the collection and analysis of analyte input data values in support of the flammable gas hazard safety analyses. This document represents the analyses of data current at the time of its writing and does not account for data available since then

  7. Increasing inhibitory input increases neuronal firing rate: why and when? Diffusion process cases

    Energy Technology Data Exchange (ETDEWEB)

    Feng Jianfeng [COGS, Sussex University (United Kingdom)]. E-mail: jf218@cam.ac.uk; Wei Gang [Department of Mathematics, Hong Kong Baptist University, Hong Kong (China)]. E-mail: gwei@math.hkbu.edu.hk

    2001-09-21

    Increasing inhibitory input to single neuronal models, such as the FitzHugh-Nagumo model and the Hodgkin-Huxley model, can sometimes increase their firing rates, a phenomenon which we term inhibition-boosted firing (IBF). Here we consider neuronal models with diffusion approximation inputs, i.e. they share the identical first- and second-order statistics of the corresponding Poisson process inputs. Using the integrate-and-fire model and the IF-FHN model, we explore theoretically how and when IBF can happen. For both models, it is shown that there is a critical input frequency at which the efferent firing rate is identical when the neuron receives purely excitatory inputs or exactly balanced inhibitory and excitatory inputs. When the input frequency is lower than the critical frequency, IBF occurs. (author)
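
    The diffusion approximation invoked above admits a compact numerical illustration. The sketch below evaluates the efferent rate of a leaky integrate-and-fire neuron with the standard Siegert mean first-passage-time formula and sweeps the total input frequency; the parameters are illustrative, and in this fluctuation-driven regime the absolute rates come out vanishingly small, so only the ordering matters: below the critical frequency the exactly balanced input fires faster than the purely excitatory one, and above it the ordering reverses.

```python
# Diffusion-approximation sketch of inhibition-boosted firing, using the
# Siegert mean first-passage-time formula for a leaky integrate-and-fire
# neuron. Parameters are illustrative, not the paper's.
import numpy as np
from scipy.integrate import quad
from scipy.special import erf

TAU, V_TH, V_RESET, A = 0.020, 10.0, 0.0, 1.0   # s, mV, mV, mV per spike

def firing_rate(exc_rate, inh_rate):
    """Efferent rate (Hz) under the diffusion approximation."""
    mu = A * (exc_rate - inh_rate) * TAU              # mean drive
    sigma = A * np.sqrt((exc_rate + inh_rate) * TAU)  # diffusion amplitude
    lo, hi = (V_RESET - mu) / sigma, (V_TH - mu) / sigma
    t_mean, _ = quad(lambda u: np.exp(u**2) * (1 + erf(u)), lo, hi,
                     limit=200)
    return 1.0 / (TAU * np.sqrt(np.pi) * t_mean)

# Sweep the total input frequency through the critical value (~146 Hz
# for these parameters), comparing excitation-only and balanced input:
for lam in (50, 100, 146, 200, 300):
    print(f"{lam:3d} Hz: excitatory-only {firing_rate(lam, 0.0):.3e} Hz, "
          f"balanced {firing_rate(lam, lam):.3e} Hz")
```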

  8. Model-based quality assessment and base-calling for second-generation sequencing data.

    Science.gov (United States)

    Bravo, Héctor Corrada; Irizarry, Rafael A

    2010-09-01

    Second-generation sequencing (sec-gen) technology can sequence millions of short fragments of DNA in parallel, making it capable of assembling complex genomes for a small fraction of the price and time of previous technologies. In fact, a recently formed international consortium, the 1000 Genomes Project, plans to fully sequence the genomes of approximately 1200 people. The prospect of comparative analysis at the sequence level of a large number of samples across multiple populations may be achieved within the next five years. These data present unprecedented challenges in statistical analysis. For instance, analysis operates on millions of short nucleotide sequences, or reads (strings of A, C, G, or T, between 30 and 100 characters long), which are the result of complex processing of noisy continuous fluorescence intensity measurements known as base-calling. The complexity of the base-calling discretization process results in reads of widely varying quality within and across sequence samples. This variation in processing quality results in infrequent but systematic errors that we have found to mislead downstream analysis of the discretized sequence read data. For instance, a central goal of the 1000 Genomes Project is to quantify across-sample variation at the single nucleotide level. At this resolution, small error rates in sequencing prove significant, especially for rare variants. Sec-gen sequencing is a relatively new technology for which potential biases and sources of obscuring variation are not yet fully understood. Therefore, modeling and quantifying the uncertainty inherent in the generation of sequence reads is of utmost importance. In this article, we present a simple model to capture uncertainty arising in the base-calling procedure of the Illumina/Solexa GA platform. Model parameters have a straightforward interpretation in terms of the chemistry of base-calling allowing for informative and easily interpretable metrics that capture the variability in

  9. 'Fingerprints' of four crop models as affected by soil input data aggregation

    DEFF Research Database (Denmark)

    Angulo, Carlos; Gaiser, Thomas; Rötter, Reimund P

    2014-01-01

    In this study we used four crop models (SIMPLACE, DSSAT-CSM, EPIC and DAISY) differing in the detail of modeling above-ground biomass and yield as well as of modeling soil water dynamics, water uptake and drought effects on plants to simulate winter wheat in two (agro-climatologically and geo... ... for all models. Further analysis revealed that the small influence of spatial resolution of soil input data might be related to: (a) the high precipitation amount in the region which partly masked differences in soil characteristics for water holding capacity, (b) the loss of variability in hydraulic soil properties due to the methods applied to calculate water retention properties of the used soil profiles, and (c) the method of soil data aggregation. No characteristic “fingerprint” between sites, years and resolutions could be found for any of the models. Our results support earlier recommendation...

  10. Good Modeling Practice for PAT Applications: Propagation of Input Uncertainty and Sensitivity Analysis

    DEFF Research Database (Denmark)

    Sin, Gürkan; Gernaey, Krist; Eliasson Lantz, Anna

    2009-01-01

    The uncertainty and sensitivity analyses are evaluated for their usefulness as part of the model-building within Process Analytical Technology applications. A mechanistic model describing a batch cultivation of Streptomyces coelicolor for antibiotic production was used as a case study. The input...... compared to the large uncertainty observed in the antibiotic and off-gas CO2 predictions. The output uncertainty was observed to be lower during the exponential growth phase, while higher in the stationary and death phases - meaning the model describes some periods better than others. To understand which...... promising for helping to build reliable mechanistic models and to interpret the model outputs properly. These tools form part of good modeling practice, which can contribute to successful PAT applications for increased process understanding, operation and control purposes. © 2009 American Institute...

  11. New Results on Robust Model Predictive Control for Time-Delay Systems with Input Constraints

    Directory of Open Access Journals (Sweden)

    Qing Lu

    2014-01-01

    Full Text Available This paper investigates the problem of model predictive control for a class of nonlinear systems subject to state delays and input constraints. The time-varying delay is considered with both upper and lower bounds. A new model is proposed to approximate the delay, and the uncertainty is of polytopic type. For the state-feedback MPC design objective, we formulate an optimization problem. Under model transformation, a new model predictive controller is designed such that the robust asymptotical stability of the closed-loop system can be guaranteed. Finally, the applicability of the presented results is demonstrated by a practical example.

  12. Structured prediction models for RNN based sequence labeling in clinical text.

    Science.gov (United States)

    Jagannatha, Abhyuday N; Yu, Hong

    2016-11-01

    Sequence labeling is a widely used method for named entity recognition and information extraction from unstructured natural language data. In the clinical domain, one major application of sequence labeling involves extraction of medical entities such as medication, indication, and side-effects from Electronic Health Record narratives. Sequence labeling in this domain presents its own set of challenges and objectives. In this work we experimented with various CRF-based structured learning models with Recurrent Neural Networks. We extend the previously studied LSTM-CRF models with explicit modeling of pairwise potentials. We also propose an approximate version of skip-chain CRF inference with RNN potentials. We use these methodologies for structured prediction in order to improve the exact phrase detection of various medical entities.

  13. The genome sequence of the model ascomycete fungus Podospora anserina

    NARCIS (Netherlands)

    Espagne, Eric; Lespinet, Olivier; Malagnac, Fabienne; Da Silva, Corinne; Jaillon, Olivier; Porcel, Betina M; Couloux, Arnaud; Aury, Jean-Marc; Ségurens, Béatrice; Poulain, Julie; Anthouard, Véronique; Grossetete, Sandrine; Khalili, Hamid; Coppin, Evelyne; Déquard-Chablat, Michelle; Picard, Marguerite; Contamine, Véronique; Arnaise, Sylvie; Bourdais, Anne; Berteaux-Lecellier, Véronique; Gautheret, Daniel; de Vries, Ronald P; Battaglia, Evy; Coutinho, Pedro M; Danchin, Etienne Gj; Henrissat, Bernard; Khoury, Riyad El; Sainsard-Chanet, Annie; Boivin, Antoine; Pinan-Lucarré, Bérangère; Sellem, Carole H; Debuchy, Robert; Wincker, Patrick; Weissenbach, Jean; Silar, Philippe

    2008-01-01

    BACKGROUND: The dung-inhabiting ascomycete fungus Podospora anserina is a model used to study various aspects of eukaryotic and fungal biology, such as ageing, prions and sexual development. RESULTS: We present a 10X draft sequence of the P. anserina genome, linked to the sequences of a large expressed

  14. AN ACCURATE MODELING OF DELAY AND SLEW METRICS FOR ON-CHIP VLSI RC INTERCONNECTS FOR RAMP INPUTS USING BURR’S DISTRIBUTION FUNCTION

    Directory of Open Access Journals (Sweden)

    Rajib Kar

    2010-09-01

    Full Text Available This work presents an accurate and efficient model to compute the delay and slew metrics of on-chip interconnect of high speed CMOS circuits for ramp inputs. The metric is based on the Burr distribution function, which is used to characterize the normalized homogeneous portion of the step response. We use the PERI (Probability distribution function Extension for Ramp Inputs) technique, which extends delay and slew metrics for step inputs to the more general and realistic non-step inputs. The accuracy of our models is justified by comparison with the results of SPICE simulations.
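
    As a rough illustration of the approach (not the paper's PERI recipe), if the normalized step response is modeled by a Burr XII CDF, the 50% delay and 10-90% slew follow directly from its quantiles once the distribution is matched to the response moments. The one-parameter moment matching and all numbers below are simplifying assumptions.

```python
# Sketch of a Burr-distribution delay/slew metric: fit a Burr XII CDF to
# the first two moments of the interconnect impulse response, then read
# delay and slew off its quantiles. Moment values are illustrative.
import numpy as np
from scipy.optimize import brentq
from scipy.stats import burr12

m1, m2c = 1.0e-9, 0.4e-18          # mean (s) and central variance (s^2)
cv2 = m2c / m1**2                  # squared coefficient of variation

# Fix the second Burr shape parameter and solve the first so that the
# distribution's squared CV matches the response's (a simplifying
# one-parameter reduction, not necessarily the paper's fitting recipe).
D = 1.0
def cv2_gap(c):
    dist = burr12(c, D)
    return dist.var() / dist.mean()**2 - cv2

c = brentq(cv2_gap, 2.2, 50.0)     # need c*D > 2 for a finite variance
scale = m1 / burr12(c, D).mean()   # time scale matching the mean

delay_50 = scale * burr12(c, D).ppf(0.5)                     # 50% delay
slew = scale * (burr12(c, D).ppf(0.9) - burr12(c, D).ppf(0.1))  # 10-90%
print(f"delay = {delay_50:.3e} s, slew = {slew:.3e} s")
```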

  15. Optimization and evaluation of probabilistic-logic sequence models

    DEFF Research Database (Denmark)

    Christiansen, Henning; Lassen, Ole Torp

    Analysis of biological sequence data demands more and more sophisticated and fine-grained models, but these in turn introduce hard computational problems. A class of probabilistic-logic models is considered, which increases the expressibility from HMMs' and SCFGs' regular and context-free languages to, in principle, Turing complete languages. In general, such models are computationally far too complex for direct use, so optimization by pruning and approximation is needed. The first steps are made towards a methodology for optimizing such models by approximations using auxiliary models...

  16. On the relationship between input parameters in the two-mass vocal-fold model with acoustical coupling and signal parameters of the glottal flow

    NARCIS (Netherlands)

    Hirtum, van A.; Lopez Arteaga, I.; Hirschberg, A.; Pelorson, X.

    2003-01-01

    In this paper the sensitivity of the two-mass model with acoustical coupling to the model input-parameters is assessed. The model-output or the glottal volume air flow is characterised by signal-parameters in the time-domain. The influence of changing input-parameters on the signal-parameters is

  17. Seq2Logo: a method for construction and visualization of amino acid binding motifs and sequence profiles including sequence weighting, pseudo counts and two-sided representation of amino acid enrichment and depletion

    DEFF Research Database (Denmark)

    Thomsen, Martin Christen Frølund; Nielsen, Morten

    2012-01-01

    Seq2Logo is a web-based sequence logo generator. Sequence logos are a graphical representation of the information content stored in a multiple sequence alignment (MSA) and provide a compact and highly intuitive representation of the position-specific amino acid composition of binding motifs, active sites, etc. in biological sequences. Accurate generation of sequence logos is often compromised by sequence redundancy and low number of observations. Moreover, most methods available for sequence logo generation focus on displaying the position-specific enrichment of amino acids, discarding the equally ... related to amino acid enrichment and depletion. Besides allowing input in the format of peptides and MSA, Seq2Logo accepts input as Blast sequence profiles, providing easy access for non-expert end-users to characterize and identify functionally conserved/variable amino acids in any given protein...

  18. Enhancement of information transmission with stochastic resonance in hippocampal CA1 neuron models: effects of noise input location.

    Science.gov (United States)

    Kawaguchi, Minato; Mino, Hiroyuki; Durand, Dominique M

    2007-01-01

    Stochastic resonance (SR) has been shown to enhance the signal-to-noise ratio or detection of signals in neurons. It is not yet clear how this effect of SR on the signal-to-noise ratio affects signal processing in neural networks. In this paper, we investigate the effects of the location of background noise input on information transmission in a hippocampal CA1 neuron model. In the computer simulation, random sub-threshold spike trains (signal) generated by a filtered homogeneous Poisson process were presented repeatedly to the middle point of the main apical branch, while homogeneous Poisson shot noise (background noise) was applied to a location on the dendrite of the hippocampal CA1 model, which consists of the soma with a sodium, a calcium, and five potassium channels. The location of the background noise input was varied along the dendrites to investigate its effect on information transmission. The computer simulation results show that the information rate reached a maximum value at an optimal amplitude of the background noise. It is also shown that this optimal amplitude of the background noise is independent of the distance between the soma and the noise input location. The results also show that the location of the background noise input does not significantly affect the maximum values of the information rates generated by stochastic resonance.

  19. Comparison of static model and dynamic model for the evaluation of station blackout sequences

    International Nuclear Information System (INIS)

    Lee, Kwang-Nam; Kang, Sun-Koo; Hong, Sung-Yull.

    1992-01-01

    Station blackout is one of the major contributors to the core damage frequency (CDF) in many PSA studies. Since the station blackout sequence exhibits dynamic features, accurate calculation of the CDF for the station blackout sequence is not possible with the event tree/fault tree (ET/FT) method. Although the integral method can determine the CDF accurately, it is time consuming and makes it difficult to evaluate various alternative AC source configurations and sensitivities. In this study, a comparison is made between the static model and the dynamic model, and a new methodology which combines the static model and the dynamic model is provided for the accurate quantification of the CDF and the evaluation of improvement alternatives. Results of several case studies show that accurate calculation of the CDF is possible by introducing an equivalent mission time. (author)

  20. Input-output and energy demand models for Ireland: Data collection report. Part 1: EXPLOR

    Energy Technology Data Exchange (ETDEWEB)

    Henry, E W; Scott, S

    1981-01-01

    Data are presented in support of EXPLOR, an input-output economic model for Ireland. The data follow the listing of exogenous data-sets used by Battelle in document X11/515/77. Data are given for 1974, 1980, and 1985 and consist of household consumption, final demand-production, and commodity prices. (ACR)

  1. Mesoscopic modeling of DNA denaturation rates: Sequence dependence and experimental comparison

    Energy Technology Data Exchange (ETDEWEB)

    Dahlen, Oda, E-mail: oda.dahlen@ntnu.no; Erp, Titus S. van, E-mail: titus.van.erp@ntnu.no [Department of Chemistry, Norwegian University of Science and Technology (NTNU), Høgskoleringen 5, Realfagbygget D3-117 7491 Trondheim (Norway)

    2015-06-21

    Using rare event simulation techniques, we calculated DNA denaturation rate constants for a range of sequences and temperatures for the Peyrard-Bishop-Dauxois (PBD) model with two different parameter sets. We studied a larger variety of sequences compared to previous studies that only consider DNA homopolymers and DNA sequences containing an equal amount of weak AT- and strong GC-base pairs. Our results show that, contrary to previous findings, an even distribution of the strong GC-base pairs does not always result in the fastest possible denaturation. In addition, we applied an adaptation of the PBD model to study hairpin denaturation for which experimental data are available. This is the first quantitative study in which dynamical results from the mesoscopic PBD model have been compared with experiments. Our results show that present parameterized models, although giving good results regarding thermodynamic properties, overestimate denaturation rates by orders of magnitude. We believe that our dynamical approach is, therefore, an important tool for verifying DNA models and for developing next generation models that have higher predictive power than present ones.

  2. Next-Generation Sequencing Workflow for NSCLC Critical Samples Using a Targeted Sequencing Approach by Ion Torrent PGM™ Platform.

    Science.gov (United States)

    Vanni, Irene; Coco, Simona; Truini, Anna; Rusmini, Marta; Dal Bello, Maria Giovanna; Alama, Angela; Banelli, Barbara; Mora, Marco; Rijavec, Erika; Barletta, Giulia; Genova, Carlo; Biello, Federica; Maggioni, Claudia; Grossi, Francesco

    2015-12-03

    Next-generation sequencing (NGS) is a cost-effective technology capable of screening several genes simultaneously; however, its application in a clinical context requires an established workflow to acquire reliable sequencing results. Here, we report an optimized NGS workflow analyzing 22 lung cancer-related genes to sequence critical samples such as DNA from formalin-fixed paraffin-embedded (FFPE) blocks and circulating free DNA (cfDNA). Snap frozen and matched FFPE gDNA from 12 non-small cell lung cancer (NSCLC) patients, whose gDNA fragmentation status was previously evaluated using a multiplex PCR-based quality control, were successfully sequenced with Ion Torrent PGM™. The robust bioinformatic pipeline allowed us to correctly call both Single Nucleotide Variants (SNVs) and indels with a detection limit of 5%, achieving 100% specificity and 96% sensitivity. This workflow was also validated in 13 FFPE NSCLC biopsies. Furthermore, a specific protocol for low input gDNA capable of producing good sequencing data with high coverage, high uniformity, and a low error rate was also optimized. In conclusion, we demonstrate the feasibility of obtaining gDNA from FFPE samples suitable for NGS by performing appropriate quality controls. The optimized workflow, capable of screening low input gDNA, highlights NGS as a potential tool in the detection, disease monitoring, and treatment of NSCLC.

  3. Detection of viral sequence fragments of HIV-1 subfamilies yet unknown

    Directory of Open Access Journals (Sweden)

    Stanke Mario

    2011-04-01

    Full Text Available Abstract. Background: Methods of determining whether or not any particular HIV-1 sequence stems - completely or in part - from some unknown HIV-1 subtype are important for the design of vaccines and molecular detection systems, as well as for epidemiological monitoring. Nevertheless, only a single algorithm, the Branching Index (BI), has been developed for this task so far. Moving along the genome of a query sequence in a sliding window, the BI computes a ratio quantifying how closely the query sequence clusters with a subtype clade. In its current version, however, the BI does not provide predicted boundaries of unknown fragments. Results: We have developed Unknown Subtype Finder (USF), an algorithm based on a probabilistic model, which automatically determines which parts of an input sequence originate from a subtype yet unknown. The underlying model is based on a simple profile hidden Markov model (pHMM) for each known subtype and an additional pHMM for an unknown subtype. The emission probabilities of the latter are estimated using the emission frequencies of the known subtypes by means of a (position-wise) probabilistic model for the emergence of new subtypes. We have applied USF to SIV and HIV-1 sequences formerly classified as having emerged from an unknown subtype. Moreover, we have evaluated its performance on artificial HIV-1 recombinants and non-recombinant HIV-1 sequences, and compared the results with the corresponding results of the BI. Conclusions: Our results demonstrate that USF is suitable for detecting segments in HIV-1 sequences stemming from yet unknown subtypes. Comparing USF with the BI shows that our algorithm performs as well as the BI or better.
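
    A toy version of the underlying idea can be written in a few lines: score a query sequence in sliding windows against position-wise emission profiles of known subtypes plus a flattened "unknown" profile derived from them, and flag windows where the unknown model wins. Real USF uses profile HMMs with a position-wise emergence model; the simplification and random data below are purely illustrative.

```python
# Toy sliding-window subtype scorer in the spirit of USF: per-position
# emission profiles for known subtypes, plus an "unknown" profile built
# from them; each window is assigned to the best-scoring model.
import numpy as np

ALPHA = "ACGT"
rng = np.random.default_rng(1)
L, W = 300, 30                       # alignment length, window size

def random_profile():
    # Position-wise emission probabilities (stand-ins for real subtypes).
    return rng.dirichlet(np.ones(4) * 0.3, size=L)

profiles = {"A": random_profile(), "B": random_profile()}
# "Unknown" model: average of the known profiles, pulled toward uniform.
unknown = 0.7 * 0.5 * (profiles["A"] + profiles["B"]) + 0.3 * 0.25

def window_calls(seq):
    idx = np.array([ALPHA.index(c) for c in seq])
    models = dict(profiles, unknown=unknown)
    logs = {m: np.log(p[np.arange(L), idx]) for m, p in models.items()}
    for start in range(0, L - W + 1, W):
        yield start, max(logs, key=lambda m: logs[m][start:start + W].sum())

query = "".join(ALPHA[i] for i in rng.integers(0, 4, size=L))
for start, best in window_calls(query):
    print(f"window {start:3d}-{start + W - 1}: {best}")
```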

  4. Finding identifiable parameter combinations in nonlinear ODE models and the rational reparameterization of their input-output equations.

    Science.gov (United States)

    Meshkat, Nicolette; Anderson, Chris; Distefano, Joseph J

    2011-09-01

    When examining the structural identifiability properties of dynamic system models, some parameters can take on an infinite number of values and yet yield identical input-output data. These parameters and the model are then said to be unidentifiable. Finding identifiable combinations of parameters with which to reparameterize the model provides a means for quantitatively analyzing the model and computing solutions in terms of the combinations. In this paper, we revisit and explore the properties of an algorithm for finding identifiable parameter combinations using Gröbner Bases and prove useful theoretical properties of these parameter combinations. We prove that a set of M algebraically independent identifiable parameter combinations can be found using this algorithm and that there exists a unique rational reparameterization of the input-output equations over these parameter combinations. We also demonstrate application of the procedure to a nonlinear biomodel. Copyright © 2011 Elsevier Inc. All rights reserved.
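
    The flavor of the computation can be illustrated with a made-up two-coefficient example (not the paper's biomodel): if the input-output equation only exposes c1 = k1 + k2 and c2 = k1*k2*V, those combinations are identifiable while k1, k2 and V individually are not, and a Gröbner basis of the coefficient ideal makes the algebraic dependence explicit.

```python
# Minimal sympy sketch of the identifiability question: the observable
# coefficients c1, c2 are identifiable combinations, but generic values
# for them admit infinitely many (k1, k2, V), so the individual
# parameters are unidentifiable. A made-up example, not the paper's model.
import sympy as sp

k1, k2, V, c1, c2 = sp.symbols("k1 k2 V c1 c2", positive=True)

# Ideal relating parameters to the observable coefficients:
ideal = [c1 - (k1 + k2), c2 - k1 * k2 * V]
print(sp.groebner(ideal, k1, k2, V, c1, c2, order="lex"))

# Generic coefficient values leave a one-parameter family of solutions:
sols = sp.solve([eq.subs({c1: 5, c2: 6}) for eq in ideal],
                [k1, k2, V], dict=True)
print(sols)
```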

  5. Phasing Out a Polluting Input

    OpenAIRE

    Eriksson, Clas

    2015-01-01

    This paper explores economic policies related to the potential conflict between economic growth and the environment. It applies a model with directed technological change and focuses on the case with low elasticity of substitution between clean and dirty inputs in production. New technology is substituted for the polluting input, which results in a gradual decline in pollution along the optimal long-run growth path. In contrast to some recent work, the era of pollution and environmental polic...

  6. A time-resolved model of the mesospheric Na layer: constraints on the meteor input function

    Directory of Open Access Journals (Sweden)

    J. M. C. Plane

    2004-01-01

    Full Text Available A time-resolved model of the Na layer in the mesosphere/lower thermosphere region is described, where the continuity equations for the major sodium species Na, Na+ and NaHCO3 are solved explicitly, and the other short-lived species are treated in steady-state. It is shown that the diurnal variation of the Na layer can only be modelled satisfactorily if sodium species are permanently removed below about 85 km, both through the dimerization of NaHCO3 and the uptake of sodium species on meteoric smoke particles that are assumed to have formed from the recondensation of vaporized meteoroids. When the sensitivity of the Na layer to the meteoroid input function is considered, an inconsistent picture emerges. The ratio of the column abundance of Na+ to Na is shown to increase strongly with the average meteoroid velocity, because the Na is injected at higher altitudes. Comparison with a limited set of Na+ measurements indicates that the average meteoroid velocity is probably less than about 25 km s^-1, in agreement with velocity estimates from conventional meteor radars, and considerably slower than recent observations made by wide aperture incoherent scatter radars. The Na column abundance is shown to be very sensitive to the meteoroid mass input rate, and to the rate of vertical transport by eddy diffusion. Although the magnitude of the eddy diffusion coefficient in the 80–90 km region is uncertain, there is a consensus between recent models using parameterisations of gravity wave momentum deposition that the average value is less than 3×10^5 cm^2 s^-1. This requires that the global meteoric mass input rate is less than about 20 t d^-1, which is closest to estimates from incoherent scatter radar observations. Finally, the diurnal variation in the meteoroid input rate only slightly perturbs the Na layer, because the residence time of Na in the layer is several days, and diurnal effects are effectively averaged out.

  7. ETFOD: a point model physics code with arbitrary input

    International Nuclear Information System (INIS)

    Rothe, K.E.; Attenberger, S.E.

    1980-06-01

    ETFOD is a zero-dimensional code which solves a set of physics equations by minimization. The technique used is different from the usual one, in that the input is arbitrary. The user is supplied with a set of variables from which he specifies which variables are input (unchanging); the remaining variables become the output. Presently the code is being used for ETF reactor design studies. The code was written in a manner to allow easy modification of equations, variables, and physics calculations. The solution technique is presented along with hints for using the code

  8. Bacterial DNA Sequence Compression Models Using Artificial Neural Networks

    Directory of Open Access Journals (Sweden)

    Armando J. Pinho

    2013-08-01

    Full Text Available It is widely accepted that the advances in DNA sequencing techniques have contributed to an unprecedented growth of genomic data. This fact has increased the interest in DNA compression, not only from the information theory and biology points of view, but also from a practical perspective, since such sequences require storage resources. Several compression methods exist, and particularly, those using finite-context models (FCMs) have received increasing attention, as they have been proven to effectively compress DNA sequences with low bits-per-base, as well as low encoding/decoding time-per-base. However, the amount of run-time memory required to store high-order finite-context models may become impractical, since a context order as low as 16 requires a maximum of 17.2 × 10^9 memory entries. This paper presents a method to reduce this memory requirement by using a novel application of artificial neural networks (ANNs) to build such probabilistic models in a compact way, and shows how to use them to estimate the probabilities. Such a system was implemented, and its performance compared against state-of-the-art compressors, such as XM-DNA (expert model) and FCM-Mx (mixture of finite-context models), as well as against general-purpose compressors. Using a combination of an order-10 FCM and an ANN, encoding results similar to those of FCMs up to order 16 are obtained using only 17 megabytes of memory, whereas the latter, even employing hash tables, use several hundreds of megabytes.

  9. A study on the multi-dimensional spectral analysis for response of a piping model with two-seismic inputs

    International Nuclear Information System (INIS)

    Suzuki, K.; Sato, H.

    1975-01-01

    The power and cross-power spectrum analysis, by which vibration characteristics of structures such as natural frequency, mode of vibration and damping ratio can be identified, would be effective for confirming these characteristics after construction is completed, by using the response to small earthquakes or the micro-tremor under operating conditions. This method of analysis, previously utilized only for systems with a single input, is here extensively applied to the analysis of a medium scale model of a piping system subjected to two seismic inputs. The piping system, attached to a three-storied concrete structure model constructed on a shaking table, was excited by earthquake motions. The inputs to the piping system were recorded at the second floor and the ceiling of the third floor where the system was attached. The output, the response of the piping system, was instrumented at a middle point of the system. As a result, the multi-dimensional power spectrum analysis is effective for a more reliable identification of the vibration characteristics of the multi-input structure system

  10. Genomic sequencing: assessing the health care system, policy, and big-data implications.

    Science.gov (United States)

    Phillips, Kathryn A; Trosman, Julia R; Kelley, Robin K; Pletcher, Mark J; Douglas, Michael P; Weldon, Christine B

    2014-07-01

    New genomic sequencing technologies enable the high-speed analysis of multiple genes simultaneously, including all of those in a person's genome. Sequencing is a prominent example of a "big data" technology because of the massive amount of information it produces and its complexity, diversity, and timeliness. Our objective in this article is to provide a policy primer on sequencing and illustrate how it can affect health care system and policy issues. Toward this end, we developed an easily applied classification of sequencing based on inputs, methods, and outputs. We used it to examine the implications of sequencing for three health care system and policy issues: making care more patient-centered, developing coverage and reimbursement policies, and assessing economic value. We conclude that sequencing has great promise but that policy challenges include how to optimize patient engagement as well as privacy, develop coverage policies that distinguish research from clinical uses and account for bioinformatics costs, and determine the economic value of sequencing through complex economic models that take into account multiple findings and downstream costs. Project HOPE—The People-to-People Health Foundation, Inc.

  11. Modelling Analysis of Forestry Input-Output Elasticity in China

    Directory of Open Access Journals (Sweden)

    Guofeng Wang

    2016-01-01

    Full Text Available Based on an extended economic model and spatial econometrics, this essay analyses the spatial distribution and interdependence of forestry production in China, and calculates the input-output elasticity of forestry production. The results show that significant spatial correlation exists in forestry production in China, with the spatial distribution mainly manifested as spatial agglomeration. The output elasticity of the labor force is 0.6649, and that of capital is 0.8412; the contribution of land is significantly negative. Labor and capital are the main determinants of province-level forestry production in China. Thus, research on province-level forestry production should not ignore spatial effects, and the policy-making process should take into consideration the effects between provinces on forestry production. This study provides scientific and technical support for forestry production.

  12. A new chance-constrained DEA model with birandom input and output data

    OpenAIRE

    Tavana, M.; Shiraz, R. K.; Hatami-Marbini, A.

    2013-01-01

    The purpose of conventional Data Envelopment Analysis (DEA) is to evaluate the performance of a set of firms or Decision-Making Units using deterministic input and output data. However, the input and output data in the real-life performance evaluation problems are often stochastic. The stochastic input and output data in DEA can be represented with random variables. Several methods have been proposed to deal with the random input and output data in DEA. In this paper, we propose a new chance-...

  13. Dynamic PET of human liver inflammation: impact of kinetic modeling with optimization-derived dual-blood input function.

    Science.gov (United States)

    Wang, Guobao; Corwin, Michael T; Olson, Kristin A; Badawi, Ramsey D; Sarkar, Souvik

    2018-05-30

    The hallmark of nonalcoholic steatohepatitis is hepatocellular inflammation and injury in the setting of hepatic steatosis. Recent work has indicated that dynamic 18F-FDG PET with kinetic modeling has the potential to assess hepatic inflammation noninvasively, while static FDG-PET did not show promise. Because the liver has a dual blood supply, kinetic modeling of dynamic liver PET data is challenging in human studies. The objective of this study is to evaluate and identify a dual-input kinetic modeling approach for dynamic FDG-PET of human liver inflammation. Fourteen human patients with nonalcoholic fatty liver disease were included in the study. Each patient underwent a one-hour dynamic FDG-PET/CT scan and had a liver biopsy within six weeks. Three models were tested for kinetic analysis: the traditional two-tissue compartmental model with an image-derived single-blood input function (SBIF), a model with a population-based dual-blood input function (DBIF), and a modified model with an optimization-derived DBIF obtained through a joint estimation framework. The three models were compared using the Akaike information criterion (AIC), the F test, and a histopathologic inflammation reference. The results showed that the optimization-derived DBIF model improved the fitting of liver time activity curves and achieved lower AIC values and higher F values than the SBIF and population-based DBIF models in all patients. The optimization-derived model significantly increased FDG K1 estimates by 101% and 27% as compared with the traditional SBIF and population-based DBIF. K1 by the optimization-derived model was significantly associated with histopathologic grades of liver inflammation while the other two models did not provide statistical significance. In conclusion, modeling of the DBIF is critical for kinetic analysis of dynamic liver FDG-PET data in human studies. The optimization-derived DBIF model is more appropriate than SBIF and population-based DBIF for dynamic FDG-PET of liver inflammation. © 2018
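
    A sketch of the joint-estimation idea, under simplifying assumptions: the portal-vein input is modeled as a dispersed copy of the arterial curve, the liver time activity curve follows an irreversible two-tissue FDG model, and the dispersion and arterial-fraction parameters are fitted jointly with the kinetic rates. The model form and all numbers are illustrative, not the paper's implementation.

```python
# Joint fit of kinetic rates and dual-blood input function parameters
# for a simulated liver TAC. All parameter values are illustrative.
import numpy as np
from scipy.integrate import odeint
from scipy.optimize import least_squares

t = np.linspace(0, 3600, 200)                        # s
ca = 50 * (t / 300) * np.exp(1 - t / 300)            # arterial input (a.u.)

def liver_tac(p):
    K1, k2, k3, fa, ka, vb = p
    def rhs(y, ti):
        cpv, c1, c2 = y
        cai = np.interp(ti, t, ca)
        cin = fa * cai + (1 - fa) * cpv              # dual-blood input
        return [ka * (cai - cpv),                    # portal-vein dispersion
                K1 * cin - (k2 + k3) * c1,           # free FDG
                k3 * c1]                             # phosphorylated FDG
    y = odeint(rhs, [0.0, 0.0, 0.0], t)
    cin = fa * ca + (1 - fa) * y[:, 0]
    return vb * cin + (1 - vb) * (y[:, 1] + y[:, 2])

true = [0.9, 0.8, 0.01, 0.25, 0.02, 0.15]            # K1,k2,k3,fa,ka,vb
rng = np.random.default_rng(0)
meas = liver_tac(true) + rng.normal(0, 0.3, t.size)  # noisy "scan"

fit = least_squares(lambda p: liver_tac(p) - meas,
                    x0=[0.5, 0.5, 0.005, 0.5, 0.05, 0.1],
                    bounds=([0] * 6, [2, 2, 0.1, 1, 1, 1]))
print(dict(zip("K1 k2 k3 fa ka vb".split(), fit.x.round(3))))
```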

  14. A robust hybrid model integrating enhanced inputs based extreme learning machine with PLSR (PLSR-EIELM) and its application to intelligent measurement.

    Science.gov (United States)

    He, Yan-Lin; Geng, Zhi-Qiang; Xu, Yuan; Zhu, Qun-Xiong

    2015-09-01

    In this paper, a robust hybrid model integrating an enhanced-inputs-based extreme learning machine with partial least squares regression (PLSR-EIELM) is proposed. The proposed PLSR-EIELM model can overcome two main flaws of the extreme learning machine (ELM), i.e. the intractable problem of determining the optimal number of hidden layer neurons and the over-fitting phenomenon. First, a traditional extreme learning machine (ELM) is selected. Second, the weights between the input layer and the hidden layer are assigned randomly, and the nonlinear transformation of the independent variables is obtained from the output of the hidden layer neurons. In particular, the original input variables are regarded as enhanced inputs; the enhanced inputs and the nonlinearly transformed variables are then tied together as the whole set of independent variables. In this way, PLSR can be carried out to identify the PLS components not only from the nonlinearly transformed variables but also from the original input variables, which can remove the correlation among the whole set of independent variables and relate them to the expected outputs. Finally, the optimal relationship model of the whole set of independent variables with the expected outputs can be achieved by using PLSR. Thus, the PLSR-EIELM model is developed. The PLSR-EIELM model then served as an intelligent measurement tool for the key variables of the Purified Terephthalic Acid (PTA) process and the High Density Polyethylene (HDPE) process. The experimental results show that the predictive accuracy of PLSR-EIELM is stable, which indicates that PLSR-EIELM has good robustness. Moreover, compared with ELM, PLSR, hierarchical ELM (HELM), and PLSR-ELM, PLSR-EIELM can achieve much smaller predicted relative errors in these two applications. Copyright © 2015 ISA. Published by Elsevier Ltd. All rights reserved.
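
    The core construction is straightforward to sketch: random ELM hidden-layer features are concatenated with the original (enhanced) inputs, and PLS regression replaces the usual least-squares output layer. The sizes and synthetic data below are placeholders.

```python
# Sketch of the PLSR-EIELM idea: random ELM features plus the original
# "enhanced" inputs, regressed with PLS. Synthetic data for illustration.
import numpy as np
from sklearn.cross_decomposition import PLSRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(300, 8))                    # process variables
y = np.sin(X[:, 0]) + 0.5 * X[:, 1] * X[:, 2] + 0.1 * rng.normal(size=300)

n_hidden = 50
W = rng.normal(size=(8, n_hidden))               # random input weights
b = rng.normal(size=n_hidden)                    # random biases
H = np.tanh(X @ W + b)                           # nonlinear ELM features

Z = np.hstack([X, H])                            # enhanced inputs + features
pls = PLSRegression(n_components=10).fit(Z, y)   # PLS output layer
pred = pls.predict(Z).ravel()
print("train RMSE:", np.sqrt(np.mean((pred - y) ** 2)))
```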

  15. Low pressure injection sequence sensitivity study of the M1 module of MEDICI

    International Nuclear Information System (INIS)

    Corradini, M.L.; Moses, G.A.; Norkus, J.K.; Welzbacker, R.T.

    1985-01-01

    In order to assess the consequences of a PWR containment failure and the ensuing radiological source term following a severe reactor accident, it is necessary to understand the ex-vessel behavior of the molten core. The M1 module of MEDICI models the dynamic fuel-coolant mixing, energetic interaction, and ejection of fuel and coolant from the reactor cavity following such an accident. A sensitivity study of the low pressure injection sequence was performed utilizing a Box-Behnken statistical design to treat five sets of input variables considered to be significant in the mixing and steam explosion processes. The low pressure injection sequence was studied in which the molten corium is modeled as a pour stream entering the cavity without entraining or sweeping out fuel or coolant

  16. Computational analysis of sequence selection mechanisms.

    Science.gov (United States)

    Meyerguz, Leonid; Grasso, Catherine; Kleinberg, Jon; Elber, Ron

    2004-04-01

    Mechanisms leading to gene variations are responsible for the diversity of species and are important components of the theory of evolution. One constraint on gene evolution is that of protein foldability; the three-dimensional shapes of proteins must be thermodynamically stable. We explore the impact of this constraint and calculate properties of foldable sequences using 3660 structures from the Protein Data Bank. We seek a selection function that receives sequences as input, and outputs survival probability based on sequence fitness to structure. We compute the number of sequences that match a particular protein structure with energy lower than the native sequence, the density of the number of sequences, the entropy, and the "selection" temperature. The mechanism of structure selection for sequences longer than 200 amino acids is approximately universal. For shorter sequences, it is not. We speculate on concrete evolutionary mechanisms that show this behavior.

  17. Hidden Markov models for sequence analysis: extension and analysis of the basic method

    DEFF Research Database (Denmark)

    Hughey, Richard; Krogh, Anders Stærmose

    1996-01-01

    Hidden Markov models (HMMs) are a highly effective means of modeling a family of unaligned sequences or a common motif within a set of unaligned sequences. The trained HMM can then be used for discrimination or multiple alignment. The basic mathematical description of an HMM and its expectation-maximization training procedure is relatively straightforward. In this paper, we review the mathematical extensions and heuristics that move the method from the theoretical to the practical. Then, we experimentally analyze the effectiveness of model regularization, dynamic model modification, and optimization strategies. Finally it is demonstrated on the SH2 domain how a domain can be found from unaligned sequences using a special model type. The experimental work was completed with the aid of the Sequence Alignment and Modeling software suite.

  18. Modeling genetic imprinting effects of DNA sequences with multilocus polymorphism data

    Directory of Open Access Journals (Sweden)

    Staud Roland

    2009-08-01

    Full Text Available Abstract. Single nucleotide polymorphisms (SNPs) represent the most widespread type of DNA sequence variation in the human genome and they have recently emerged as valuable genetic markers for revealing the genetic architecture of complex traits in terms of nucleotide combination and sequence. Here, we extend an algorithmic model for the haplotype analysis of SNPs to estimate the effects of genetic imprinting expressed at the DNA sequence level. The model provides a general procedure for identifying the number and types of optimal DNA sequence variants that are expressed differently due to their parental origin. The model is used to analyze a genetic data set collected from a pain genetics project. We find that the DNA haplotype GAC from three SNPs, OPRKG36T (with two alleles G and T), OPRKA843G (with alleles A and G), and OPRKC846T (with alleles C and T), at the kappa-opioid receptor, triggers a significant effect on pain sensitivity, but with expression significantly depending on the parent from which it is inherited (p = 0.008). With a tremendous advance in SNP identification and automated screening, the model founded on haplotype discovery and statistical inference may provide a useful tool for genetic analysis of any quantitative trait with complex inheritance.

  19. Automated cleaning and pre-processing of immunoglobulin gene sequences from high-throughput sequencing

    Directory of Open Access Journals (Sweden)

    Miri Michaeli

    2012-12-01

    Full Text Available High throughput sequencing (HTS) yields tens of thousands to millions of sequences that require a large amount of pre-processing work to clean various artifacts. Such cleaning cannot be performed manually. Existing programs are not suitable for immunoglobulin (Ig) genes, which are variable and often highly mutated. This paper describes Ig-HTS-Cleaner (Ig High Throughput Sequencing Cleaner), a program containing a simple cleaning procedure that successfully deals with pre-processing of Ig sequences derived from HTS, and Ig-Indel-Identifier (Ig Insertion - Deletion Identifier), a program for identifying legitimate and artifact insertions and/or deletions (indels). Our programs were designed for analyzing Ig gene sequences obtained by 454 sequencing, but they are applicable to all types of sequences and sequencing platforms. Ig-HTS-Cleaner and Ig-Indel-Identifier have been implemented in Java and saved as executable JAR files, supported on Linux and MS Windows. No special requirements are needed in order to run the programs, except for correctly constructing the input files as explained in the text. The programs' performance has been tested and validated on real and simulated data sets.

  20. A Model to Determinate the Influence of Probability Density Functions (PDFs) of Input Quantities in Measurements

    Directory of Open Access Journals (Sweden)

    Jesús Caja

    2016-06-01

    Full Text Available A method is presented for analysing the effect of different hypotheses about the types of the input quantity distributions of a measurement model, so that the developed algorithms can be simplified. As an example, a model of indirect measurements with an optical coordinate measuring machine was employed to evaluate these different hypotheses. As a result of the different experiments, the assumption that the different variables of the model can be modelled as normal distributions is shown to be justified.
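
    The type of experiment described above can be reproduced in miniature by Monte Carlo propagation: push different assumed input PDFs with the same standard uncertainties through a measurement model and compare the resulting output uncertainties. The model y = x1*x2 and the distribution parameters below are illustrative assumptions.

```python
# Monte Carlo comparison of input-PDF hypotheses for a simple
# measurement model. All distribution parameters are illustrative.
import numpy as np

rng = np.random.default_rng(0)
N = 200_000

def propagate(x1, x2):
    y = x1 * x2                      # measurement model
    return y.mean(), y.std(ddof=1)

# Hypothesis 1: both inputs normal.
m1 = propagate(rng.normal(10.0, 0.05, N), rng.normal(2.0, 0.02, N))
# Hypothesis 2: same standard uncertainties, but rectangular (uniform) PDFs.
half1, half2 = 0.05 * np.sqrt(3), 0.02 * np.sqrt(3)
m2 = propagate(rng.uniform(10 - half1, 10 + half1, N),
               rng.uniform(2 - half2, 2 + half2, N))

print("normal inputs:  mean %.4f, u = %.4f" % m1)
print("uniform inputs: mean %.4f, u = %.4f" % m2)
```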

  1. Harmonize input selection for sediment transport prediction

    Science.gov (United States)

    Afan, Haitham Abdulmohsin; Keshtegar, Behrooz; Mohtar, Wan Hanna Melini Wan; El-Shafie, Ahmed

    2017-09-01

    In this paper, three modeling approaches using a Neural Network (NN), the Response Surface Method (RSM) and a response surface method based on Global Harmony Search (GHS) are applied to predict the daily time series suspended sediment load. Generally, the input variables for forecasting the suspended sediment load are selected manually based on the maximum correlations of the input variables in the NN- and RSM-based modeling approaches. The RSM is improved here to select the input variables by using the error terms of the training data based on the GHS, giving the response surface method with global harmony search (RSM-GHS) modeling method. The second-order polynomial function with cross terms is applied to calibrate the time series suspended sediment load with three, four and five input variables in the proposed RSM-GHS. The linear, square and cross terms of twenty input variables of antecedent values of suspended sediment load and water discharge are investigated to achieve the best predictions of the RSM based on the GHS method. The performances of the NN, RSM and proposed RSM-GHS, including both accuracy and simplicity, are compared through several comparative predicted and error statistics. The results illustrate that the proposed RSM-GHS is as uncomplicated as the RSM but performs better, with fewer errors and better correlation observed (R = 0.95, MAE = 18.09 (ton/day), RMSE = 25.16 (ton/day)) compared to the ANN (R = 0.91, MAE = 20.17 (ton/day), RMSE = 33.09 (ton/day)) and RSM (R = 0.91, MAE = 20.06 (ton/day), RMSE = 31.92 (ton/day)) for all types of input variables.
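
    The response-surface ingredient is easy to sketch: a second-order polynomial with cross terms fitted to lagged sediment-load and discharge values. The synthetic series below stand in for the real data, and the harmony-search selection of input variables is omitted.

```python
# Second-order polynomial with cross terms on lagged inputs, the
# response-surface core of the approach above. Data are synthetic.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.preprocessing import PolynomialFeatures

rng = np.random.default_rng(0)
n = 400
q = 50 + 10 * np.sin(np.arange(n) / 20) + rng.normal(0, 2, n)  # discharge
s = 0.4 * q ** 1.5 + rng.normal(0, 10, n)                      # sediment

# Inputs: s(t-1), s(t-2), q(t), q(t-1); target: s(t)
X = np.column_stack([s[1:-1], s[:-2], q[2:], q[1:-1]])
y = s[2:]

poly = PolynomialFeatures(degree=2)        # linear, square and cross terms
model = LinearRegression().fit(poly.fit_transform(X), y)
pred = model.predict(poly.transform(X))
print("RMSE:", np.sqrt(np.mean((pred - y) ** 2)))
```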

  2. Two-Stage orders sequencing system for mixed-model assembly

    Science.gov (United States)

    Zemczak, M.; Skolud, B.; Krenczyk, D.

    2015-11-01

    In the paper, the authors focus on the NP-hard problem of orders sequencing, formulated similarly to the Car Sequencing Problem (CSP). The object of the research is an assembly line in an automotive industry company, on which a few different models of products, each in a certain number of versions, are assembled on shared resources set in a line. Such a production type is usually described as mixed-model production, and it arose from the necessity of manufacturing customized products on the basis of very specific orders from single clients. Producers are nowadays obliged to give each client the possibility to determine a large number of the features of the product they are willing to buy, as the competition in the automotive market is strong. Due to the previously mentioned nature of the problem (NP-hard), in the given time period only satisfactory solutions are sought, as no optimal solution method has yet been found. Most of the researchers that applied inexact methods (e.g. evolutionary algorithms) to solving sequencing problems dropped the research after the testing phase, as they were not able to obtain reproducible results and met problems while determining the quality of the received solutions. Therefore a new approach to solving the problem, presented in this paper as a sequencing system, is being developed. The sequencing system consists of a set of determined rules implemented in a computer environment. The system itself works in two stages. The first of them is connected with the determination of a place in the storage buffer to which certain production orders should be sent. In the second stage, precise sets of sequences are determined and evaluated for certain parts of the storage buffer under certain criteria.

  3. Blackout sequence modeling for Atucha-I with MARCH3 code

    International Nuclear Information System (INIS)

    Baron, J.; Bastianelli, B.

    1997-01-01

    The modeling of a blackout sequence in the Atucha I nuclear power plant is presented in this paper, as a preliminary phase for a level II probabilistic safety assessment. The sequence is analyzed with the MARCH3 code from the STCP (Source Term Code Package), based on a specific model developed for Atucha that takes into account its peculiarities. The analysis includes all the severe accident phases, from the initial transient (loss of heat sink), through loss of coolant via the safety valves, core uncovery, heatup, metal-water reaction, melting and relocation, heatup and failure of the pressure vessel, and core-concrete interaction in the reactor cavity, to heatup and failure of the (multi-compartmented) containment building due to quasi-static overpressurization. The results obtained permit visualization of the time sequence of these events, as well as providing the basis for source term studies. (author) [es

  4. Isolating Visual and Proprioceptive Components of Motor Sequence Learning in ASD.

    Science.gov (United States)

    Sharer, Elizabeth A; Mostofsky, Stewart H; Pascual-Leone, Alvaro; Oberman, Lindsay M

    2016-05-01

    In addition to the defining impairments in social communication skills, individuals with autism spectrum disorder (ASD) also show impairments in more basic sensory and motor skills. Development of new skills involves integrating information from multiple sensory modalities. This input is then used to form internal models of action that can be accessed both when performing skilled movements and when understanding those actions performed by others. Learning skilled gestures is particularly reliant on the integration of visual and proprioceptive input. We used a modified serial reaction time task (SRTT) to decompose the proprioceptive and visual components and examine whether patterns of implicit motor skill learning differ in ASD participants as compared with healthy controls. While both groups learned the implicit motor sequence during training, healthy controls showed robust generalization whereas ASD participants demonstrated little generalization when visual input was constant. In contrast, no group differences in generalization were observed when proprioceptive input was constant, with both groups showing limited degrees of generalization. The findings suggest that, when learning a motor sequence, individuals with ASD tend to rely less on visual feedback than do healthy controls. Visuomotor representations are considered to underlie imitative learning and action understanding and are thereby crucial to social skill and cognitive development. Thus, anomalous patterns of implicit motor learning, with a tendency to discount visual feedback, may be an important contributor to the core social communication deficits that characterize ASD. Autism Res 2016, 9: 563-569. © 2015 International Society for Autism Research, Wiley Periodicals, Inc.

  5. Early neonatal loss of inhibitory synaptic input to the spinal motor neurons confers spina bifida-like leg dysfunction in a chicken model

    Directory of Open Access Journals (Sweden)

    Md. Sakirul Islam Khan

    2017-12-01

    Full Text Available Spina bifida aperta (SBA), one of the most common congenital malformations, causes lifelong neurological complications, particularly in terms of motor dysfunction. Fetuses with SBA exhibit voluntary leg movements in utero and during early neonatal life, but these disappear within the first few weeks after birth. However, the pathophysiological sequence underlying such motor dysfunction remains unclear. Additionally, because important insights have yet to be obtained from human cases, an appropriate animal model is essential. Here, we investigated the neuropathological mechanisms of progression of SBA-like motor dysfunctions in a neural-tube-surgery-induced chicken model of SBA at different points of pathogenesis ranging from embryonic to posthatch ages. We found that chicks with SBA-like features lose voluntary leg movements and subsequently exhibit lower-limb paralysis within the first 2 weeks after hatching, coinciding with the synaptic-change-induced disruption of spinal motor networks at the site of the SBA lesion in the lumbosacral region. Such synaptic changes reduced the ratio of inhibitory to excitatory inputs to motor neurons and were associated with a drastic loss of GABAergic (γ-aminobutyric acid-mediated) inputs and upregulation of the cholinergic activities of motor neurons. Furthermore, most of the neurons in the ventral horns, which appeared to be suffering from excitotoxicity during the early postnatal days, underwent apoptosis. However, the triggers of cellular abnormalization and neurodegenerative signaling were evident in the middle to late gestational stages, probably attributable to the amniotic-fluid-induced in ovo milieu. In conclusion, the early neonatal loss of neurons in the ventral horn of the exposed spinal cord affords novel insights into the pathophysiology of SBA-like leg dysfunction.

  6. Prioritizing Interdependent Production Processes using Leontief Input-Output Model

    Directory of Open Access Journals (Sweden)

    Masbad Jesah Grace

    2016-03-01

    Full Text Available This paper proposes a methodology for identifying key production processes in an interdependent production system. Previous approaches in this domain have drawbacks that may potentially affect the reliability of decision-making. The proposed approach adopts the Leontief input-output model (L-IOM), which was proven successful in analyzing interdependent economic systems. The motivation behind such adoption lies in the strength of the L-IOM in providing a rigorous quantitative framework for identifying key components of interdependent systems. In the proposed approach, the consumption and production flows of each process are represented respectively by the material inventory produced by the prior process and the material inventory produced by the current process, both in monetary values. A case study of a furniture production system located in the central Philippines was carried out to elucidate the proposed approach. Results of the case are reported in this work
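
    The Leontief calculation at the heart of the approach is compact: with technical-coefficient matrix A and final-demand vector d, total output is x = (I - A)^{-1} d, and column sums of the Leontief inverse give a simple backward-linkage ranking of processes. The three-process numbers below are made up for illustration.

```python
# Minimal Leontief input-output sketch: total outputs and a
# backward-linkage ranking of processes. Numbers are made up.
import numpy as np

A = np.array([[0.10, 0.30, 0.05],    # A[i, j]: input from process i
              [0.20, 0.05, 0.25],    # needed per unit output of j
              [0.05, 0.10, 0.10]])
d = np.array([100.0, 50.0, 80.0])    # final demand per process

L = np.linalg.inv(np.eye(3) - A)     # Leontief inverse
x = L @ d                            # required total outputs
print("total output:", x.round(2))
print("backward linkage (column sums):", L.sum(axis=0).round(3))
```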

  7. Mathematical model and metaheuristics for simultaneous balancing and sequencing of a robotic mixed-model assembly line

    Science.gov (United States)

    Li, Zixiang; Janardhanan, Mukund Nilakantan; Tang, Qiuhua; Nielsen, Peter

    2018-05-01

    This article presents the first method to simultaneously balance and sequence robotic mixed-model assembly lines (RMALB/S), which involves three sub-problems: task assignment, model sequencing and robot allocation. A new mixed-integer programming model is developed to minimize makespan and, using the CPLEX solver, small-size problems are solved to optimality. Two metaheuristics, a restarted simulated annealing algorithm and a co-evolutionary algorithm, are developed and improved to address this NP-hard problem. The restarted simulated annealing method replaces the current temperature with a new temperature to restart the search process. The co-evolutionary method uses a restart mechanism to generate a new population by modifying several vectors simultaneously. The proposed algorithms are tested on a set of benchmark problems and compared with five other high-performing metaheuristics. The proposed algorithms outperform their original versions and the benchmarked methods, and are able to solve the balancing and sequencing problem of a robotic mixed-model assembly line effectively and efficiently.
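
    The restart idea is straightforward to express in code. Below is a generic restarted simulated annealing skeleton, assuming abstract cost and neighborhood functions, geometric cooling, and reheating from the incumbent solution; it is a sketch of the general technique, not the authors' tuned RMALB/S algorithm.

    ```python
    import math
    import random

    def restarted_sa(cost, neighbor, x0, t0=1.0, alpha=0.95,
                     t_min=1e-3, restarts=5, iters=200):
        """Generic restarted simulated annealing: when the temperature
        bottoms out, reset it to t0 and continue from the best solution
        found so far (the 'replace current temperature' restart idea)."""
        best = x = x0
        for _ in range(restarts):
            t, x = t0, best                  # reheat from the incumbent
            for _ in range(iters):
                y = neighbor(x)
                delta = cost(y) - cost(x)
                if delta < 0 or random.random() < math.exp(-delta / t):
                    x = y
                if cost(x) < cost(best):
                    best = x
                t = max(alpha * t, t_min)    # geometric cooling
        return best

    # Toy usage: minimize a 1-D quadratic with random-step neighbors.
    print(restarted_sa(lambda v: (v - 3) ** 2,
                       lambda v: v + random.uniform(-1, 1), x0=0.0))
    ```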

  8. An efficient binomial model-based measure for sequence comparison and its application.

    Science.gov (United States)

    Liu, Xiaoqing; Dai, Qi; Li, Lihua; He, Zerong

    2011-04-01

    Sequence comparison is one of the major tasks in bioinformatics; it can serve as evidence of structural and functional conservation, as well as of evolutionary relations. There are several similarity/dissimilarity measures for sequence comparison, but challenges remain. This paper presents a binomial model-based measure for analyzing biological sequences. With the help of a random indicator, the occurrence of a word at any position of a sequence can be regarded as a Bernoulli random variable, and the distribution of the sum of word occurrences is well known to be binomial. Using a recursive formula, we computed the binomial probability of the word count and proposed a binomial model-based measure based on relative entropy. The proposed measure was tested in extensive experiments, including classification of HEV genotypes and phylogenetic analysis, and was further compared with alignment-based and alignment-free measures. The results demonstrate that the proposed measure based on the binomial model is more efficient.
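
    A minimal sketch of the measure's ingredients, under simplifying assumptions: k-word occurrences are treated as iid Bernoulli trials with a uniform background probability, each sequence is summarized by the (normalized) binomial probabilities of its word counts, and a relative-entropy-style divergence compares two such profiles. The paper's recursive formula and exact measure are not reproduced.

    ```python
    from math import log
    from itertools import product
    from scipy.stats import binom

    def word_profile(seq, k=2, alphabet="ACGT"):
        """Binomial probability of each k-word's count, assuming the
        n = len(seq)-k+1 positions are iid Bernoulli trials with uniform
        background probability p = (1/|alphabet|)**k; normalized so the
        profile can be compared with relative entropy."""
        n = len(seq) - k + 1
        p = (1.0 / len(alphabet)) ** k
        prof = {}
        for w in map("".join, product(alphabet, repeat=k)):
            count = sum(seq[i:i + k] == w for i in range(n))
            prof[w] = binom.pmf(count, n, p)
        total = sum(prof.values())
        return {w: v / total for w, v in prof.items()}

    def divergence(pa, pb, eps=1e-12):
        """Relative-entropy-style divergence between two word profiles."""
        return sum(a * log((a + eps) / (b + eps))
                   for a, b in zip(pa.values(), pb.values()))

    print(divergence(word_profile("ACGTACGTACGG"), word_profile("TTGCAATGCAGT")))
    ```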

  9. Maximum-Entropy Models of Sequenced Immune Repertoires Predict Antigen-Antibody Affinity.

    Directory of Open Access Journals (Sweden)

    Lorenzo Asti

    2016-04-01

    Full Text Available The immune system has developed a number of distinct complex mechanisms to shape and control the antibody repertoire. One of these mechanisms, the affinity maturation process, works in an evolutionary-like fashion: after binding to a foreign molecule, the antibody-producing B-cells exhibit a high-frequency mutation rate in the genome region that codes for the antibody active site. Eventually, cells that produce antibodies with higher affinity for their cognate antigen are selected and clonally expanded. Here, we propose a new statistical approach based on maximum entropy modeling in which a scoring function related to the binding affinity of antibodies against a specific antigen is inferred from a sample of sequences of the immune repertoire of an individual. We use our inference strategy to infer a statistical model on a data set obtained by sequencing a fairly large portion of the immune repertoire of an HIV-1 infected patient. The Pearson correlation coefficient between our scoring function and the IC50 neutralization titer measured on 30 different antibodies of known sequence is as high as 0.77 (p-value 10⁻⁶), outperforming other sequence- and structure-based models.

  10. Secondary structure classification of amino-acid sequences using state-space modeling

    OpenAIRE

    Brunnert, Marcus; Krahnke, Tillmann; Urfer, Wolfgang

    2001-01-01

    The secondary structure classification of amino acid sequences can be carried out by a statistical analysis of sequence and structure data using state-space models. Aiming at this classification, a modified filter algorithm programmed in S is applied to data of three proteins. The application leads to correct classifications of two proteins even when using relatively simple estimation methods for the parameters of the state-space models. Furthermore, it has been shown that the assumed initial...

  11. Modeling the Indonesian Consumer Price Index Using a Multi-Input Intervention Model

    KAUST Repository

    Novianti, Putri Wikie; Suhartono, Suhartono

    2017-01-01

    Research to date has considered only interventions with a single input, either a step or a pulse function. A multi-input intervention model was used for the Indonesian CPI because several events were expected to affect the CPI. Based on the results, those

  12. Data Envelopment Analysis with Fixed Inputs, Undesirable Outputs and Negative Data

    Directory of Open Access Journals (Sweden)

    F. Seyed Esmaeili

    2017-03-01

    Full Text Available In Data Envelopment Analysis (DEA), various models have been developed to evaluate the performance of decision making units with multiple inputs and outputs. A revised slacks-based measure model, known as MSBM and belonging to the family of additive models, was introduced by Sharp et al.; the slacks-based measure itself was introduced by Tone. In this study, a model is proposed that is able to estimate efficiency when some outputs of the decision making units are undesirable, inputs are fixed, and some of the outputs and inputs are negative, so that the level of undesirable outputs of the evaluated unit is reduced while inputs are held constant and efficiency is preserved.

  13. Extended Fitts' model of pointing time in eye-gaze input system - Incorporating effects of target shape and movement direction into modeling.

    Science.gov (United States)

    Murata, Atsuo; Fukunaga, Daichi

    2018-04-01

    This study investigated the effects of target shape and movement direction on pointing time using an eye-gaze input system, and extended Fitts' model so that these factors are incorporated into the model and its predictive power is enhanced. The target shape, the target size, the movement distance, and the direction of target presentation were set as within-subject experimental variables. The target shapes included a circle and rectangles with aspect ratios of 1:1, 1:2, 1:3, and 1:4. The movement directions included eight directions: upper, lower, left, right, upper left, upper right, lower left, and lower right. On the basis of the data identifying the effects of the target shape and the movement direction on the pointing time, an attempt was made to develop a generalized and extended Fitts' model that takes into account the movement direction and the target shape. As a result, the generalized and extended model was found to fit the experimental data better and to be more effective for predicting the pointing time in a variety of human-computer interaction (HCI) tasks using an eye-gaze input system. Copyright © 2017. Published by Elsevier Ltd.
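
    For context, the baseline being extended is Fitts' law, commonly written in the Shannon form MT = a + b log2(D/W + 1). The sketch below treats the paper's shape and direction effects as additive per-category offsets; that additive form and all coefficient values are illustrative assumptions, since the abstract does not give the fitted regression.

    ```python
    import math

    def fitts_mt(a, b, distance, width):
        """Baseline Fitts' law (Shannon form): MT = a + b * log2(D/W + 1)."""
        return a + b * math.log2(distance / width + 1)

    def extended_mt(a, b, distance, width, c_shape=0.0, c_dir=0.0):
        """Hypothetical extended form: target-shape and movement-direction
        effects enter as additive offsets fitted per shape and per
        direction; an illustrative stand-in for the paper's regression."""
        return fitts_mt(a, b, distance, width) + c_shape + c_dir

    # Example: a 1:4 rectangle approached from the upper-left direction,
    # with made-up coefficients (seconds).
    print(extended_mt(a=0.3, b=0.15, distance=200, width=40,
                      c_shape=0.05, c_dir=0.08))
    ```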

  14. Galaxy LIMS for next-generation sequencing

    NARCIS (Netherlands)

    Scholtalbers, J.; Rossler, J.; Sorn, P.; Graaf, J. de; Boisguerin, V.; Castle, J.; Sahin, U.

    2013-01-01

    SUMMARY: We have developed a laboratory information management system (LIMS) for a next-generation sequencing (NGS) laboratory within the existing Galaxy platform. The system provides lab technicians standard and customizable sample information forms, barcoded submission forms, tracking of input

  15. Analysis of the AD sequence in Zion plant using the March 1.1 code

    International Nuclear Information System (INIS)

    Oriolo, F.; Paci, S.

    1985-01-01

    This report presents the analyses of the AD sequence for the Zion power plant carried out at the University of Pisa in the framework of its participation in the Source Term Working Group. After a short description of the plant and the sequence under analysis, the model used for the reference computation and the results obtained using the March 1.1 code are shown. Together with the reference computation, a series of parametric tests has also been made concerning some input code variables, in order to ascertain their influence on the transient trend. The results of these analyses are shown in the Appendix.

  16. Multi-Input Convolutional Neural Network for Flower Grading

    Directory of Open Access Journals (Sweden)

    Yu Sun

    2017-01-01

    Full Text Available Flower grading is a significant task because it is extremely convenient for managing flowers in greenhouses and markets. With the development of computer vision, flower grading has become an interdisciplinary focus in both botany and computer vision. A new dataset named BjfuGloxinia contains three quality grades; each grade consists of 107 samples and 321 images. A multi-input convolutional neural network (CNN) is designed for large-scale flower grading. The multi-input CNN achieves a satisfactory accuracy of 89.6% on BjfuGloxinia after data augmentation. Compared with a single-input CNN, the accuracy of the multi-input CNN is increased by 5% on average, demonstrating that a multi-input convolutional neural network is a promising model for flower grading. Although data augmentation contributes to the model, accuracy is still limited by a lack of sample diversity. The majority of misclassifications derive from the medium grade. Image-processing-based bud detection is useful for reducing misclassification, increasing the accuracy of flower grading to approximately 93.9%.
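
    As a concrete illustration of the architecture class, the following Keras sketch wires several image inputs through parallel convolutional branches into one softmax grading head. The number of branches (three), the 128x128 input size, and all layer widths are assumptions for illustration; the paper's actual BjfuGloxinia network is not specified here.

    ```python
    from tensorflow import keras
    from tensorflow.keras import layers

    def branch(name):
        """One convolutional branch per input image view (sizes are illustrative)."""
        inp = keras.Input(shape=(128, 128, 3), name=name)
        x = layers.Conv2D(16, 3, activation="relu")(inp)
        x = layers.MaxPooling2D()(x)
        x = layers.Conv2D(32, 3, activation="relu")(x)
        x = layers.GlobalAveragePooling2D()(x)
        return inp, x

    # Three views of the same plant feed three parallel branches whose
    # pooled features are concatenated before the 3-grade softmax head.
    inputs, features = zip(*(branch(f"view_{i}") for i in range(3)))
    x = layers.concatenate(list(features))
    x = layers.Dense(64, activation="relu")(x)
    out = layers.Dense(3, activation="softmax", name="grade")(x)

    model = keras.Model(inputs=list(inputs), outputs=out)
    model.compile(optimizer="adam", loss="categorical_crossentropy",
                  metrics=["accuracy"])
    model.summary()
    ```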

  17. Sequence-based prediction of protein-binding sites in DNA: comparative study of two SVM models.

    Science.gov (United States)

    Park, Byungkyu; Im, Jinyong; Tuvshinjargal, Narankhuu; Lee, Wook; Han, Kyungsook

    2014-11-01

    As many structures of protein-DNA complexes have become known in the past years, several computational methods have been developed to predict DNA-binding sites in proteins. However, the inverse problem (i.e., predicting protein-binding sites in DNA) has received much less attention. One of the reasons is that the differences between the interaction propensities of nucleotides are much smaller than those between amino acids. Another reason is that DNA exhibits less diverse sequence patterns than protein. Therefore, predicting protein-binding DNA nucleotides is much harder than predicting DNA-binding amino acids. We computed the interaction propensity (IP) of nucleotide triplets with amino acids using an extensive dataset of protein-DNA complexes, and developed two support vector machine (SVM) models that predict protein-binding nucleotides from sequence data alone. One SVM model predicts protein-binding nucleotides using DNA sequence data alone, and the other predicts protein-binding nucleotides using both DNA and protein sequences. In a 10-fold cross-validation with 1519 DNA sequences, the SVM model that uses DNA sequence data only predicted protein-binding nucleotides with an accuracy of 67.0%, an F-measure of 67.1%, and a Matthews correlation coefficient (MCC) of 0.340. With an independent dataset of 181 DNAs that were not used in training, it achieved an accuracy of 66.2%, an F-measure of 66.3%, and an MCC of 0.324. The other SVM model, which uses both DNA and protein sequences, achieved an accuracy of 69.6%, an F-measure of 69.6%, and an MCC of 0.383 in a 10-fold cross-validation with 1519 DNA sequences and 859 protein sequences. With an independent dataset of 181 DNAs and 143 proteins, it showed an accuracy of 67.3%, an F-measure of 66.5%, and an MCC of 0.329. Both in cross-validation and independent testing, the second SVM model that used both DNA and protein sequence data showed better performance than the first model that used DNA sequence data. To the best of
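
    A minimal scikit-learn sketch of the first model's setup: each nucleotide position is described by a one-hot encoding of its surrounding triplet, and an SVM is cross-validated on per-position labels. The RBF kernel, the 12-dimensional encoding, and the randomly generated toy labels (standing in for real binding annotations) are assumptions; the paper's interaction-propensity features are not reproduced.

    ```python
    import numpy as np
    from sklearn.svm import SVC
    from sklearn.model_selection import cross_val_score

    NT = {c: i for i, c in enumerate("ACGT")}

    def triplet_features(seq):
        """One-hot encode the nucleotide triplet around each interior
        position: 3 positions x 4 letters = 12 features per nucleotide."""
        rows = []
        for i in range(1, len(seq) - 1):
            v = np.zeros(12)
            for j, c in enumerate(seq[i - 1:i + 2]):
                v[4 * j + NT[c]] = 1.0
            rows.append(v)
        return np.array(rows)

    # Toy data: random sequences with random per-position labels that
    # stand in for real protein-binding annotations.
    rng = np.random.default_rng(0)
    seqs = ["".join(rng.choice(list("ACGT"), 50)) for _ in range(20)]
    X = np.vstack([triplet_features(s) for s in seqs])
    labels = rng.integers(0, 2, len(X))

    print(cross_val_score(SVC(kernel="rbf"), X, labels, cv=10).mean())
    ```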

  18. Modelling the soil microclimate: does the spatial or temporal resolution of input parameters matter?

    Directory of Open Access Journals (Sweden)

    Anna Carter

    2016-01-01

    Full Text Available The urgency of predicting future impacts of environmental change on vulnerable populations is advancing the development of spatially explicit habitat models. Continental-scale climate and microclimate layers are now widely available. However, most terrestrial organisms exist within microclimate spaces that are very small, relative to the spatial resolution of those layers. We examined the effects of multi-resolution, multi-extent topographic and climate inputs on the accuracy of hourly soil temperature predictions for a small island generated at a very high spatial resolution (<1 m²) using the mechanistic microclimate model in NicheMapR. Achieving an accuracy comparable to lower-resolution, continental-scale microclimate layers (within about 2–3°C of observed values) required the use of daily weather data as well as high resolution topographic layers (elevation, slope, aspect, horizon angles), while inclusion of site-specific soil properties did not markedly improve predictions. Our results suggest that large-extent microclimate layers may not provide accurate estimates of microclimate conditions when the spatial extent of a habitat or other area of interest is similar to or smaller than the spatial resolution of the layers themselves. Thus, effort in sourcing model inputs should be focused on obtaining high resolution terrain data, e.g., via LiDAR or photogrammetry, and local weather information rather than in situ sampling of microclimate characteristics.

  19. Next-Generation Sequencing Analysis and Algorithms for PDX and CDX Models.

    Science.gov (United States)

    Khandelwal, Garima; Girotti, María Romina; Smowton, Christopher; Taylor, Sam; Wirth, Christopher; Dynowski, Marek; Frese, Kristopher K; Brady, Ged; Dive, Caroline; Marais, Richard; Miller, Crispin

    2017-08-01

    Patient-derived xenograft (PDX) and circulating tumor cell-derived explant (CDX) models are powerful methods for the study of human disease. In cancer research, these methods have been applied to multiple questions, including the study of metastatic progression, genetic evolution, and therapeutic drug responses. As PDX and CDX models can recapitulate the highly heterogeneous characteristics of a patient tumor, as well as their response to chemotherapy, there is considerable interest in combining them with next-generation sequencing to monitor the genomic, transcriptional, and epigenetic changes that accompany oncogenesis. When used for this purpose, their reliability is highly dependent on being able to accurately distinguish between sequencing reads that originate from the host, and those that arise from the xenograft itself. Here, we demonstrate that failure to correctly identify contaminating host reads when analyzing DNA- and RNA-sequencing (DNA-Seq and RNA-Seq) data from PDX and CDX models is a major confounding factor that can lead to incorrect mutation calls and a failure to identify canonical mutation signatures associated with tumorigenicity. In addition, a highly sensitive algorithm and open source software tool for identifying and removing contaminating host sequences is described. Importantly, when applied to PDX and CDX models of melanoma, these data demonstrate its utility as a sensitive and selective tool for the correction of PDX- and CDX-derived whole-exome and RNA-Seq data. Implications: This study describes a sensitive method to identify contaminating host reads in xenograft and explant DNA- and RNA-Seq data and is applicable to other forms of deep sequencing. Mol Cancer Res; 15(8); 1012-6. ©2017 AACR.

  20. Model-free aftershock forecasts constructed from similar sequences in the past

    Science.gov (United States)

    van der Elst, N.; Page, M. T.

    2017-12-01

    The basic premise behind aftershock forecasting is that sequences in the future will be similar to those in the past. Forecast models typically use empirically tuned parametric distributions to approximate past sequences, and project those distributions into the future to make a forecast. While parametric models do a good job of describing average outcomes, they are not explicitly designed to capture the full range of variability between sequences, and can suffer from over-tuning of the parameters. In particular, parametric forecasts may produce a high rate of "surprises" - sequences that land outside the forecast range. Here we present a non-parametric forecast method that cuts out the parametric "middleman" between training data and forecast. The method is based on finding past sequences that are similar to the target sequence, and evaluating their outcomes. We quantify similarity as the Poisson probability that the observed event count in a past sequence reflects the same underlying intensity as the observed event count in the target sequence. Event counts are defined in terms of differential magnitude relative to the mainshock. The forecast is then constructed from the distribution of past-sequence outcomes, weighted by their similarity. We compare the similarity forecast with the Reasenberg and Jones (RJ95) method, for a set of 2807 global aftershock sequences of M≥6 mainshocks. We implement a sequence-specific RJ95 forecast using a global average prior and Bayesian updating, but do not propagate epistemic uncertainty. The RJ95 forecast is somewhat more precise than the similarity forecast: 90% of observed sequences fall within a factor of two of the median RJ95 forecast value, whereas the fraction is 85% for the similarity forecast. However, the surprise rate is much higher for the RJ95 forecast; 10% of observed sequences fall in the upper 2.5% of the (Poissonian) forecast range. The surprise rate is less than 3% for the similarity forecast. The similarity
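
    One hedged reading of the similarity measure, sufficient to sketch the forecast pipeline: weight each past sequence by the Poisson probability of its early-window event count given the target's count as the rate, then read forecast quantiles off the similarity-weighted distribution of past outcomes. The window definitions, the quantile construction, and all numbers below are illustrative assumptions.

    ```python
    import numpy as np
    from scipy.stats import poisson

    def similarity_weights(n_target, past_counts):
        """Weight each past sequence by the Poisson probability of its
        early-window count, taking the target's count as the rate."""
        return poisson.pmf(np.asarray(past_counts), mu=max(n_target, 1e-9))

    def similarity_forecast(n_target, past_counts, past_outcomes,
                            q=(0.05, 0.5, 0.95)):
        """Quantiles of the outcome (e.g. later aftershock count) from the
        similarity-weighted empirical distribution of past outcomes."""
        w = similarity_weights(n_target, past_counts)
        order = np.argsort(past_outcomes)
        cdf = np.cumsum(w[order]) / w.sum()
        return np.interp(q, cdf, np.asarray(past_outcomes)[order])

    # Toy usage with hypothetical early counts and final outcomes.
    print(similarity_forecast(5, [3, 6, 12, 4], [10, 18, 40, 9]))
    ```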

  1. Automated side-chain model building and sequence assignment by template matching

    International Nuclear Information System (INIS)

    Terwilliger, Thomas C.

    2002-01-01

    A method for automated macromolecular side-chain model building and for aligning the sequence to the map is described. An algorithm is described for automated building of side chains in an electron-density map once a main-chain model is built and for alignment of the protein sequence to the map. The procedure is based on a comparison of electron density at the expected side-chain positions with electron-density templates. The templates are constructed from average amino-acid side-chain densities in 574 refined protein structures. For each contiguous segment of main chain, a matrix with entries corresponding to an estimate of the probability that each of the 20 amino acids is located at each position of the main-chain model is obtained. The probability that this segment corresponds to each possible alignment with the sequence of the protein is estimated using a Bayesian approach and high-confidence matches are kept. Once side-chain identities are determined, the most probable rotamer for each side chain is built into the model. The automated procedure has been implemented in the RESOLVE software. Combined with automated main-chain model building, the procedure produces a preliminary model suitable for refinement and extension by an experienced crystallographer

  2. Spatiotemporal coding of inputs for a system of globally coupled phase oscillators

    Science.gov (United States)

    Wordsworth, John; Ashwin, Peter

    2008-12-01

    We investigate the spatiotemporal coding of low amplitude inputs to a simple system of globally coupled phase oscillators with coupling function g(ϕ)=-sin(ϕ+α)+rsin(2ϕ+β) that has robust heteroclinic cycles (slow switching between cluster states). The inputs correspond to detuning of the oscillators. It was recently noted that globally coupled phase oscillators can encode their frequencies in the form of spatiotemporal codes of a sequence of cluster states [P. Ashwin, G. Orosz, J. Wordsworth, and S. Townley, SIAM J. Appl. Dyn. Syst. 6, 728 (2007)]. Concentrating on the case of N=5 oscillators, we show in detail how the spatiotemporal coding can be used to resolve all of the information that relates the individual inputs to each other, provided that a long enough time series is considered. We investigate robustness to additive noise and find remarkable stability, especially of the temporal coding, even for noise of a magnitude comparable to the inputs.
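
    The dynamics quoted in the abstract are easy to reproduce numerically. The sketch below Euler-integrates dθi/dt = ωi + (1/N) Σj g(θj − θi) for N = 5 oscillators with the stated coupling function; the values of α, β, r, the detuning scale, and the sign conventions are illustrative assumptions rather than the paper's exact parameters.

    ```python
    import numpy as np

    def g(phi, alpha=1.8, r=0.2, beta=-2.0):
        """Coupling function from the abstract: -sin(phi+alpha) + r*sin(2*phi+beta).
        These parameter values are illustrative, not the paper's."""
        return -np.sin(phi + alpha) + r * np.sin(2 * phi + beta)

    def simulate(N=5, steps=20000, dt=0.01, detuning=1e-3, seed=1):
        """Euler integration of d(theta_i)/dt = omega_i + (1/N)*sum_j g(theta_j - theta_i);
        the small detunings play the role of the low-amplitude inputs."""
        rng = np.random.default_rng(seed)
        theta = rng.uniform(0, 2 * np.pi, N)
        omega = 1.0 + detuning * rng.standard_normal(N)
        for _ in range(steps):
            diffs = theta[None, :] - theta[:, None]   # entry (i, j) = theta_j - theta_i
            theta += dt * (omega + g(diffs).mean(axis=1))
        return np.mod(theta, 2 * np.pi)

    # Phase differences reveal the cluster state reached by the system.
    print(np.sort(simulate()))
    ```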

  3. Analytical model for advective-dispersive transport involving flexible boundary inputs, initial distributions and zero-order productions

    Science.gov (United States)

    Chen, Jui-Sheng; Li, Loretta Y.; Lai, Keng-Hsin; Liang, Ching-Ping

    2017-11-01

    A novel solution method is presented that leads to an analytical model for the advective-dispersive transport in a semi-infinite domain involving a wide spectrum of boundary inputs, initial distributions, and zero-order productions. The novel solution method applies the Laplace transform in combination with the generalized integral transform technique (GITT) to obtain the generalized analytical solution. Based on this generalized analytical expression, we derive a comprehensive set of special-case solutions for some time-dependent boundary distributions and zero-order productions, described by the Dirac delta, constant, Heaviside, exponentially-decaying, or periodically sinusoidal functions, as well as some position-dependent initial conditions and zero-order productions specified by the Dirac delta, constant, Heaviside, or exponentially-decaying functions. The developed solutions are tested against an analytical solution from the literature. The excellent agreement between the analytical solutions confirms that the new model can serve as an effective tool for investigating transport behaviors under different scenarios. Several examples of applications are given to explore transport behaviors which are rarely noted in the literature. The results show that the concentration waves resulting from the periodically sinusoidal input are sensitive to the dispersion coefficient. The implication of this new finding is that a tracer test with a periodic input may provide additional information for identifying the dispersion coefficients. Moreover, the solution strategy presented in this study can be extended to derive analytical models for handling more complicated problems of solute transport in multi-dimensional media subjected to sequential decay chain reactions, for which analytical solutions are not currently available.

  4. Modeling and sliding mode predictive control of the ultra-supercritical boiler-turbine system with uncertainties and input constraints.

    Science.gov (United States)

    Tian, Zhen; Yuan, Jingqi; Zhang, Xiang; Kong, Lei; Wang, Jingcheng

    2018-05-01

    The coordinated control system (CCS) plays an important role in load regulation, efficiency optimization and pollutant reduction for coal-fired power plants. The CCS faces tough challenges, such as wide-range load variation and various uncertainties and constraints. This paper aims to improve the load tracking ability and robustness of boiler-turbine units under wide-range operation. To capture the key dynamics of the ultra-supercritical boiler-turbine system, a nonlinear control-oriented model is developed based on mechanism analysis and model reduction techniques, and validated with the historical operation data of a real 1000 MW unit. To simultaneously address the issues of uncertainties and input constraints, a discrete-time sliding mode predictive controller (SMPC) is designed with a dual-mode control law. Moreover, the input-to-state stability and robustness of the closed-loop system are proved. Simulation results are presented to illustrate the effectiveness of the proposed control scheme, which achieves good tracking performance, disturbance rejection ability and compatibility with input constraints. Copyright © 2018 ISA. Published by Elsevier Ltd. All rights reserved.

  5. A branch-heterogeneous model of protein evolution for efficient inference of ancestral sequences.

    Science.gov (United States)

    Groussin, M; Boussau, B; Gouy, M

    2013-07-01

    Most models of nucleotide or amino acid substitution used in phylogenetic studies assume that the evolutionary process has been homogeneous across lineages and that composition of nucleotides or amino acids has remained the same throughout the tree. These oversimplified assumptions are refuted by the observation that compositional variability characterizes extant biological sequences. Branch-heterogeneous models of protein evolution that account for compositional variability have been developed, but are not yet in common use because of the large number of parameters required, leading to high computational costs and potential overparameterization. Here, we present a new branch-nonhomogeneous and nonstationary model of protein evolution that captures more accurately the high complexity of sequence evolution. This model, henceforth called Correspondence and likelihood analysis (COaLA), makes use of a correspondence analysis to reduce the number of parameters to be optimized through maximum likelihood, focusing on most of the compositional variation observed in the data. The model was thoroughly tested on both simulated and biological data sets to show its high performance in terms of data fitting and CPU time. COaLA efficiently estimates ancestral amino acid frequencies and sequences, making it relevant for studies aiming at reconstructing and resurrecting ancestral amino acid sequences. Finally, we applied COaLA on a concatenate of universal amino acid sequences to confirm previous results obtained with a nonhomogeneous Bayesian model regarding the early pattern of adaptation to optimal growth temperature, supporting the mesophilic nature of the Last Universal Common Ancestor.

  6. Protein secondary structure prediction for a single-sequence using hidden semi-Markov models

    Directory of Open Access Journals (Sweden)

    Borodovsky Mark

    2006-03-01

    Full Text Available Abstract Background The accuracy of protein secondary structure prediction has been improving steadily towards the 88% estimated theoretical limit. There are two types of prediction algorithms: Single-sequence prediction algorithms imply that information about other (homologous) proteins is not available, while algorithms of the second type imply that information about homologous proteins is available, and use it intensively. The single-sequence algorithms could make an important contribution to studies of proteins with no detected homologs, however the accuracy of protein secondary structure prediction from a single sequence is not as high as when the additional evolutionary information is present. Results In this paper, we further refine and extend the hidden semi-Markov model (HSMM) initially considered in the BSPSS algorithm. We introduce an improved residue dependency model by considering the patterns of statistically significant amino acid correlation at structural segment borders. We also derive models that specialize on different sections of the dependency structure and incorporate them into HSMM. In addition, we implement an iterative training method to refine estimates of HSMM parameters. The three-state-per-residue accuracy and other accuracy measures of the new method, IPSSP, are shown to be comparable or better than ones for BSPSS as well as for PSIPRED, tested under the single-sequence condition. Conclusions We have shown that new dependency models and training methods bring further improvements to single-sequence protein secondary structure prediction. The results are obtained under cross-validation conditions using a dataset with no pair of sequences having significant sequence similarity. As new sequences are added to the database it is possible to augment the dependency structure and obtain even higher accuracy. Current and future advances should contribute to the improvement of function prediction for orphan proteins inscrutable

  7. A probabilistic cell model in background corrected image sequences for single cell analysis

    Directory of Open Access Journals (Sweden)

    Fieguth Paul

    2010-10-01

    Full Text Available Abstract Background Methods of manual cell localization and outlining are so onerous that automated tracking methods would seem mandatory for handling huge image sequences, nevertheless manual tracking is, astonishingly, still widely practiced in areas such as cell biology which are outside the influence of most image processing research. The goal of our research is to address this gap by developing automated methods of cell tracking, localization, and segmentation. Since even an optimal frame-to-frame association method cannot compensate and recover from poor detection, it is clear that the quality of cell tracking depends on the quality of cell detection within each frame. Methods Cell detection performs poorly where the background is not uniform and includes temporal illumination variations, spatial non-uniformities, and stationary objects such as well boundaries (which confine the cells under study). To improve cell detection, the signal to noise ratio of the input image can be increased via accurate background estimation. In this paper we investigate background estimation, for the purpose of cell detection. We propose a cell model and a method for background estimation, driven by the proposed cell model, such that well structure can be identified, and explicitly rejected, when estimating the background. Results The resulting background-removed images have fewer artifacts and allow cells to be localized and detected more reliably. The experimental results generated by applying the proposed method to different Hematopoietic Stem Cell (HSC) image sequences are quite promising. Conclusion The understanding of cell behavior relies on precise information about the temporal dynamics and spatial distribution of cells. Such information may play a key role in disease research and regenerative medicine, so automated methods for observation and measurement of cells from microscopic images are in high demand. The proposed method in this paper is capable

  8. Fuzzy portfolio model with fuzzy-input return rates and fuzzy-output proportions

    Science.gov (United States)

    Tsaur, Ruey-Chyn

    2015-02-01

    In the finance market, a short-term investment strategy is usually applied in portfolio selection in order to reduce investment risk; however, the economy is uncertain and the investment period is short. Further, an investor has incomplete information for selecting a portfolio with crisp proportions for each chosen security. In this paper we present a new method of constructing a fuzzy portfolio model for the parameters of fuzzy-input return rates and fuzzy-output proportions, based on possibilistic mean-standard deviation models. Furthermore, we consider both excess and shortage of investment in different economic periods by using a fuzzy constraint for the sum of the fuzzy proportions, and we also account for the risks of securities investment and the vagueness of incomplete information during periods of economic depression in the portfolio selection. Finally, we present a numerical example of a portfolio selection problem to illustrate the proposed model, and a sensitivity analysis is carried out based on the results.

  9. Distinct Element Method modelling of fold-related fractures in a multilayer sequence

    Science.gov (United States)

    Kaserer, Klemens; Schöpfer, Martin P. J.; Grasemann, Bernhard

    2017-04-01

    Natural fractures have a significant impact on the performance of hydrocarbon systems/reservoirs. In a multilayer sequence, both the fracture density within the individual layers and the type of fracture intersection with bedding contacts are key parameters controlling fluid pathways. In the present study the influence of layer stacking and interlayer friction on fracture density and connectivity within a folded sequence is systematically investigated using 2D Distinct Element Method modelling. Our numerical approach permits forward modelling of both fracture nucleation/propagation/arrest and (contemporaneous) frictional slip along bedding planes in a robust and mechanically sound manner. Folding of the multilayer sequence is achieved by enforcing constant curvature folding by means of a velocity boundary condition at the model base, while a constant overburden pressure is maintained at the model top. The modelling reveals that with high bedding plane friction the multilayer stack behaves mechanically as a single layer, so that the neutral surface develops in the centre of the sequence and fracture spacing is controlled by the total thickness of the folded sequence. In contrast, low bedding plane friction leads to decoupling of the individual layers (flexural slip folding), so that a neutral surface develops in the centre of each layer and fracture spacing is controlled by the thickness of the individual layers. The low interfacial friction models illustrate that stepping of fractures across bedding planes is a common process, which can, however, have two contrasting origins: (i) the mechanical properties of the interface cause fracture stepping during fracture propagation, or (ii) originally through-going fractures are later offset by interfacial slip during folding. A combination of these two origins may lead to (apparently) inconsistent fracture offsets across bedding planes within a flexural slip fold.

  10. Input data for inferring species distributions in Kyphosidae world-wide

    Directory of Open Access Journals (Sweden)

    Steen Wilhelm Knudsen

    2016-09-01

    Full Text Available Input data files for inferring the relationships within the family Kyphosidae, as presented in (Knudsen and Clements, 2016) [1], are here provided together with the resulting topologies, to allow the reader to explore the topologies in detail. The input data files comprise seven nexus-files with sequence alignments of mtDNA and nDNA markers for performing Bayesian analysis. A matrix of recoded character states inferred from the morphology examined in museum specimens representing Dichistiidae, Girellidae, Kyphosidae, Microcanthidae and Scorpididae is also provided, and can be used for performing a parsimony analysis to infer the relationships among these perciform families. The nucleotide input data files comprise both multiple and single representatives of the various species to allow for inference of the relationships among the species in Kyphosidae and between the families closely related to Kyphosidae. The ‘.xml’-files with various constrained relationships among the families potentially closely related to Kyphosidae are also provided to allow the reader to rerun and explore the results from the stepping-stone analysis. The resulting topologies are supplied in newick file format, together with the input data files for Bayesian analysis and the ‘.xml’-files. Re-running the input data files in the appropriate software will enable the reader to examine the log-files and tree-files themselves. Keywords: Sea chub, Drummer, Kyphosus, Scorpis, Girella

  11. Plasticity of the cis-regulatory input function of a gene.

    Directory of Open Access Journals (Sweden)

    Avraham E Mayo

    2006-04-01

    Full Text Available The transcription rate of a gene is often controlled by several regulators that bind specific sites in the gene's cis-regulatory region. The combined effect of these regulators is described by a cis-regulatory input function. What determines the form of an input function, and how variable is it with respect to mutations? To address this, we employ the well-characterized lac operon of Escherichia coli, which has an elaborate input function, intermediate between Boolean AND-gate and OR-gate logic. We mapped in detail the input function of 12 variants of the lac promoter, each with different point mutations in the regulator binding sites, by means of accurate expression measurements from living cells. We find that even a few mutations can significantly change the input function, resulting in functions that resemble pure AND gates, OR gates, or single-input switches. Other types of gates were not found. The variant input functions can be described in a unified manner by a mathematical model. The model also lets us predict which functions cannot be reached by point mutations. The input function that we studied thus appears to be plastic, in the sense that many of the mutations do not ruin the regulation completely but rather result in new ways to integrate the inputs.
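
    The "unified mathematical model" idea can be illustrated with a toy two-input gate: Hill activations for the two inputs combined as a weighted mixture of AND-like and OR-like terms, so that point mutations correspond to moving the thresholds and weights. All functional forms and numbers here are illustrative assumptions, not the paper's fitted model.

    ```python
    def input_function(x, y, kx=1.0, ky=1.0, n=2, w_and=1.0, w_or=0.0):
        """Toy two-input gate: Hill activations hx, hy mixed as
        w_and * (hx AND hy) + w_or * (hx OR hy). Shifting kx, ky and the
        weights mimics what point mutations in operator sites might do."""
        hx = x**n / (kx**n + x**n)
        hy = y**n / (ky**n + y**n)
        return w_and * hx * hy + w_or * (hx + hy - hx * hy)

    # AND-like vs OR-like behavior at the same input point:
    print(input_function(2.0, 0.1))                       # near zero (AND)
    print(input_function(2.0, 0.1, w_and=0.0, w_or=1.0))  # high (OR)
    ```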

  12. Modeling compositional dynamics based on GC and purine contents of protein-coding sequences

    KAUST Repository

    Zhang, Zhang

    2010-11-08

    Background: Understanding the compositional dynamics of genomes and their coding sequences is of great significance in gaining clues into molecular evolution, and a large number of publicly available genome sequences have allowed us to quantitatively predict deviations of empirical data from their theoretical counterparts. However, the quantification of theoretical compositional variations for a wide diversity of genomes remains a major challenge. Results: To model the compositional dynamics of protein-coding sequences, we propose two simple models that take into account both mutation and selection effects, which act differently at the three codon positions, and use both GC and purine contents as compositional parameters. The two models concern the theoretical composition of nucleotides, codons, and amino acids, with no prerequisite of homologous sequences or their alignments. We evaluated the two models by quantifying theoretical compositions of a large collection of protein-coding sequences (including 46 of Archaea, 686 of Bacteria, and 826 of Eukarya), yielding consistent theoretical compositions across all the collected sequences. Conclusions: We show that the compositions of nucleotides, codons, and amino acids are largely determined by both GC and purine contents and suggest that deviations of the observed from the expected compositions may reflect compositional signatures that arise from a complex interplay between mutation and selection via DNA replication and repair mechanisms. Reviewers: This article was reviewed by Zhaolei Zhang (nominated by Mark Gerstein), Guruprasad Ananda (nominated by Kateryna Makova), and Daniel Haft. 2010 Zhang and Yu; licensee BioMed Central Ltd.

  14. Modeling coding-sequence evolution within the context of residue solvent accessibility.

    Science.gov (United States)

    Scherrer, Michael P; Meyer, Austin G; Wilke, Claus O

    2012-09-12

    Protein structure mediates site-specific patterns of sequence divergence. In particular, residues in the core of a protein (solvent-inaccessible residues) tend to be more evolutionarily conserved than residues on the surface (solvent-accessible residues). Here, we present a model of sequence evolution that explicitly accounts for the relative solvent accessibility of each residue in a protein. Our model is a variant of the Goldman-Yang 1994 (GY94) model in which all model parameters can be functions of the relative solvent accessibility (RSA) of a residue. We apply this model to a data set comprising nearly 600 yeast genes, and find that an evolutionary-rate ratio ω that varies linearly with RSA provides a better model fit than an RSA-independent ω or an ω that is estimated separately in individual RSA bins. We further show that the branch length t and the transition-transversion ratio κ also vary with RSA. The RSA-dependent GY94 model performs better than an RSA-dependent Muse-Gaut 1994 (MG94) model in which the synonymous and non-synonymous rates individually are linear functions of RSA. Finally, protein core size affects the slope of the linear relationship between ω and RSA, and gene expression level affects both the intercept and the slope. Structure-aware models of sequence evolution provide a significantly better fit than traditional models that neglect structure. The linear relationship between ω and RSA implies that genes are better characterized by their ω slope and intercept than by just their mean ω.

  15. Modeling coding-sequence evolution within the context of residue solvent accessibility

    Directory of Open Access Journals (Sweden)

    Scherrer Michael P

    2012-09-01

    Full Text Available Abstract Background Protein structure mediates site-specific patterns of sequence divergence. In particular, residues in the core of a protein (solvent-inaccessible residues) tend to be more evolutionarily conserved than residues on the surface (solvent-accessible residues). Results Here, we present a model of sequence evolution that explicitly accounts for the relative solvent accessibility of each residue in a protein. Our model is a variant of the Goldman-Yang 1994 (GY94) model in which all model parameters can be functions of the relative solvent accessibility (RSA) of a residue. We apply this model to a data set comprising nearly 600 yeast genes, and find that an evolutionary-rate ratio ω that varies linearly with RSA provides a better model fit than an RSA-independent ω or an ω that is estimated separately in individual RSA bins. We further show that the branch length t and the transition-transversion ratio κ also vary with RSA. The RSA-dependent GY94 model performs better than an RSA-dependent Muse-Gaut 1994 (MG94) model in which the synonymous and non-synonymous rates individually are linear functions of RSA. Finally, protein core size affects the slope of the linear relationship between ω and RSA, and gene expression level affects both the intercept and the slope. Conclusions Structure-aware models of sequence evolution provide a significantly better fit than traditional models that neglect structure. The linear relationship between ω and RSA implies that genes are better characterized by their ω slope and intercept than by just their mean ω.

  16. Alternative to Ritt's pseudodivision for finding the input-output equations of multi-output models.

    Science.gov (United States)

    Meshkat, Nicolette; Anderson, Chris; DiStefano, Joseph J

    2012-09-01

    Differential algebra approaches to structural identifiability analysis of a dynamic system model in many instances heavily depend upon Ritt's pseudodivision at an early step in analysis. The pseudodivision algorithm is used to find the characteristic set, of which a subset, the input-output equations, is used for identifiability analysis. A simpler algorithm is proposed for this step, using Gröbner Bases, along with a proof of the method that includes a reduced upper bound on derivative requirements. Efficacy of the new algorithm is illustrated with several biosystem model examples. Copyright © 2012 Elsevier Inc. All rights reserved.
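
    To give a flavor of the Gröbner-basis alternative, the sketch below eliminates the unobserved state of a toy two-compartment model x1' = -a*x1 + u, y' = x1 - b*y with SymPy, recovering the input-output equation y'' + (a+b)y' + a*b*y - u = 0. The toy model, the variable ordering, and the use of sympy.groebner are illustrative assumptions; the paper's algorithm and its derivative bound are not reproduced.

    ```python
    import sympy as sp

    # Toy model: x1' = -a*x1 + u, y' = x1 - b*y (y observed, x1 not).
    # Derivatives are treated as fresh indeterminates: y1 = y', y2 = y''.
    a, b = sp.symbols("a b")
    x1, x1d, u, y, y1, y2 = sp.symbols("x1 x1d u y y1 y2")

    polys = [
        x1d + a * x1 - u,     # state equation rewritten as a polynomial
        y1 - x1 + b * y,      # output equation differentiated once
        y2 - x1d + b * y1,    # ... and twice
    ]

    # Lex order ranking the unobserved quantities (x1, x1d) highest
    # eliminates them; any basis element free of x1 and x1d is an
    # input-output equation usable for identifiability analysis.
    G = sp.groebner(polys, x1, x1d, y2, y1, y, u, order="lex")
    io = [g for g in G.exprs if not g.has(x1) and not g.has(x1d)]
    print(io)   # expected: [y2 + (a + b)*y1 + a*b*y - u] up to sign/scaling
    ```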

  17. Chaos Synchronization Based on Unknown Input Proportional Multiple-Integral Fuzzy Observer

    Directory of Open Access Journals (Sweden)

    T. Youssef

    2013-01-01

    Full Text Available This paper presents an unknown input Proportional Multiple-Integral Observer (PIO) for synchronization of chaotic systems based on Takagi-Sugeno (T-S) fuzzy chaotic models subject to unmeasurable decision variables and unknown input. In a secure communication configuration, this unknown input is regarded as a message encoded in the chaotic system and recovered by the proposed PIO. Both states and outputs of the fuzzy chaotic models are subject to polynomial unknown input with kth derivative zero. Using Lyapunov stability theory, sufficient design conditions for synchronization are proposed. The PIO gain matrices are obtained by resolving linear matrix inequality (LMI) constraints. Simulation results on two T-S fuzzy chaotic models show the validity of the proposed method.

  18. INPUT DATA OF BURNING WOOD FOR CFD MODELLING USING SMALL-SCALE EXPERIMENTS

    Directory of Open Access Journals (Sweden)

    Petr Hejtmánek

    2017-12-01

    Full Text Available The paper presents an approach for acquiring simplified input data for modelling the burning of wood in CFD programmes. The approach lies in combining data from small-scale and molecular-scale experiments in order to describe the material with a single-reaction material model. Such a virtual material would spread fire, develop the fire according to the surrounding environment, and could be extinguished, all without using a complex molecular reaction description. A series of experiments, including elemental analysis, thermogravimetric and differential thermal analysis, and combustion analysis, was performed. An FDS model of pine wood burning in a cone calorimeter was then built using those values. The model was validated against HRR (Heat Release Rate) data from the real cone calorimeter experiment. The results show that for the purpose of CFD modelling the effective heat of combustion, one of the basic material properties for fire modelling that affects the total intensity of burning, should be used. Using the net heat of combustion in the model leads to higher values of HRR in comparison to the real experimental data. Overall, the results show that it is possible to simulate the burning of wood using extrapolated data obtained from small-scale experiments.

  19. Data Envelopment Analysis with Uncertain Inputs and Outputs

    Directory of Open Access Journals (Sweden)

    Meilin Wen

    2014-01-01

    Full Text Available Data envelopment analysis (DEA), as a useful management and decision tool, has been widely used since it was first introduced by Charnes et al. in 1978. On the one hand, DEA models need accurate input and output data. On the other hand, in many situations, inputs and outputs are volatile and complex, so that they are difficult to measure accurately. This conflict has motivated research on uncertain DEA models. This paper considers DEA in an uncertain environment, producing a new model based on uncertain measure. Due to the complexity of the new uncertain DEA model, an equivalent deterministic model is presented. Finally, a numerical example is presented to illustrate the effectiveness of the uncertain DEA model.

  20. A new approach to modeling temperature-related mortality: Non-linear autoregressive models with exogenous input.

    Science.gov (United States)

    Lee, Cameron C; Sheridan, Scott C

    2018-07-01

    Temperature-mortality relationships are nonlinear, time-lagged, and can vary depending on the time of year and geographic location, all of which limits the applicability of simple regression models in describing these associations. This research demonstrates the utility of an alternative method for modeling such complex relationships that has gained recent traction in other environmental fields: nonlinear autoregressive models with exogenous input (NARX models). All-cause mortality data and multiple temperature-based data sets were gathered from 41 different US cities, for the period 1975-2010, and subjected to ensemble NARX modeling. Models generally performed better in larger cities and during the winter season. Across the US, median absolute percentage errors were 10% (ranging from 4% to 15% in various cities), the average improvement in the r-squared over that of a simple persistence model was 17% (6-24%), and the hit rate for modeling spike days in mortality (>80th percentile) was 54% (34-71%). Mortality responded acutely to hot summer days, peaking at 0-2 days of lag before dropping precipitously, and there was an extended mortality response to cold winter days, peaking at 2-4 days of lag and dropping slowly and continuing for multiple weeks. Spring and autumn showed both of the aforementioned temperature-mortality relationships, but generally to a lesser magnitude than what was seen in summer or winter. When compared to distributed lag nonlinear models, NARX model output was nearly identical. These results highlight the applicability of NARX models for use in modeling complex and time-dependent relationships for various applications in epidemiology and environmental sciences. Copyright © 2018 Elsevier Inc. All rights reserved.
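
    A minimal Python analog of a NARX setup, under stated assumptions: one feed-forward network (rather than the authors' ensembles of NARX neural networks) regresses today's series value on lagged values of itself and of an exogenous temperature input, with synthetic daily series standing in for the mortality and temperature data.

    ```python
    import numpy as np
    from sklearn.neural_network import MLPRegressor

    def narx_design(y, x, ylags=7, xlags=14):
        """NARX design matrix: predict y[t] from lagged y (autoregressive
        part) and current plus lagged exogenous input x."""
        start = max(ylags, xlags)
        rows, target = [], []
        for t in range(start, len(y)):
            rows.append(np.r_[y[t - ylags:t], x[t - xlags:t + 1]])
            target.append(y[t])
        return np.array(rows), np.array(target)

    # Hypothetical daily series standing in for mortality and temperature.
    rng = np.random.default_rng(0)
    days = np.arange(2000)
    temp = 15 + 10 * np.sin(2 * np.pi * days / 365) + rng.normal(0, 3, days.size)
    mort = 100 + 0.5 * np.abs(temp - 15) + rng.normal(0, 2, days.size)

    X, y = narx_design(mort, temp)
    model = MLPRegressor(hidden_layer_sizes=(16,), max_iter=2000,
                         random_state=0).fit(X[:-365], y[:-365])
    print("held-out R^2:", model.score(X[-365:], y[-365:]))
    ```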

  1. Recurrent Neural Network Model for Constructive Peptide Design.

    Science.gov (United States)

    Müller, Alex T; Hiss, Jan A; Schneider, Gisbert

    2018-02-26

    We present a generative long short-term memory (LSTM) recurrent neural network (RNN) for combinatorial de novo peptide design. RNN models capture patterns in sequential data and generate new data instances from the learned context. Amino acid sequences represent a suitable input for these machine-learning models. Generative models trained on peptide sequences could therefore facilitate the design of bespoke peptide libraries. We trained RNNs with LSTM units on pattern recognition of helical antimicrobial peptides and used the resulting model for de novo sequence generation. Of these sequences, 82% were predicted to be active antimicrobial peptides compared to 65% of randomly sampled sequences with the same amino acid distribution as the training set. The generated sequences also lie closer to the training data than manually designed amphipathic helices. The results of this study showcase the ability of LSTM RNNs to construct new amino acid sequences within the applicability domain of the model and motivate their prospective application to peptide and protein design without the need for the exhaustive enumeration of sequence libraries.
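
    A compact sketch of the modeling idea in Keras: a character-level LSTM language model over the 20-letter amino-acid alphabet, sampled autoregressively to propose new sequences. The layer sizes, the stop-token convention, and the omission of the training loop are assumptions for brevity; the paper's actual network and training details are not reproduced.

    ```python
    import numpy as np
    from tensorflow import keras
    from tensorflow.keras import layers

    AA = "ACDEFGHIKLMNPQRSTVWY"      # 20 proteinogenic amino acids
    STOP = "$"                        # end token; also reused as the start token
    VOCAB = AA + STOP
    IDX = {c: i for i, c in enumerate(VOCAB)}

    # Character-level LSTM language model: given a prefix, predict the
    # next residue (or the stop token).
    model = keras.Sequential([
        layers.Embedding(len(VOCAB), 32),
        layers.LSTM(128),
        layers.Dense(len(VOCAB), activation="softmax"),
    ])
    model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")

    def sample_peptide(max_len=30):
        """Autoregressively sample one sequence; training on a peptide
        corpus (prefix -> next-residue pairs) is omitted here."""
        seq = [IDX[STOP]]
        for _ in range(max_len):
            p = model.predict(np.array([seq]), verbose=0)[0].astype("float64")
            p /= p.sum()                       # renormalize float32 softmax
            nxt = int(np.random.choice(len(VOCAB), p=p))
            if nxt == IDX[STOP]:
                break
            seq.append(nxt)
        return "".join(VOCAB[i] for i in seq[1:])

    print(sample_peptide())                    # random until the model is trained
    ```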

  2. Automated sequence-specific protein NMR assignment using the memetic algorithm MATCH

    International Nuclear Information System (INIS)

    Volk, Jochen; Herrmann, Torsten; Wuethrich, Kurt

    2008-01-01

    MATCH (Memetic Algorithm and Combinatorial Optimization Heuristics) is a new memetic algorithm for automated sequence-specific polypeptide backbone NMR assignment of proteins. MATCH employs local optimization for tracing partial sequence-specific assignments within a global, population-based search environment, where the simultaneous application of local and global optimization heuristics guarantees high efficiency and robustness. MATCH thus makes combined use of the two predominant concepts in use for automated NMR assignment of proteins. Dynamic transition and inherent mutation are new techniques that enable automatic adaptation to variable quality of the experimental input data. The concept of dynamic transition is incorporated in all major building blocks of the algorithm, where it enables switching between local and global optimization heuristics at any time during the assignment process. Inherent mutation restricts the intrinsically required randomness of the evolutionary algorithm to those regions of the conformation space that are compatible with the experimental input data. Using intact and artificially deteriorated APSY-NMR input data of proteins, MATCH performed sequence-specific resonance assignment with high efficiency and robustness

  3. Solar Load Inputs for USARIEM Thermal Strain Models and the Solar Radiation-Sensitive Components of the WBGT Index

    National Research Council Canada - National Science Library

    Matthew, William

    2001-01-01

    This report describes processes we have implemented to use global pyranometer-based estimates of mean radiant temperature as the common solar load input for the Scenario model, the USARIEM heat strain...

  4. Improved Stabilization Conditions for Nonlinear Systems with Input and State Delays via T-S Fuzzy Model

    Directory of Open Access Journals (Sweden)

    Chang Che

    2018-01-01

    Full Text Available This paper focuses on the problem of nonlinear systems with input and state delays. The considered nonlinear systems are represented by Takagi-Sugeno (T-S) fuzzy models. A new state feedback control approach is introduced for T-S fuzzy systems with input delay and state delays. A new Lyapunov-Krasovskii functional is employed to derive less conservative stability conditions by incorporating a recently developed Wirtinger-based integral inequality. Based on the Lyapunov stability criterion, a series of linear matrix inequalities (LMIs) are obtained by using slack variables and the integral inequality, which guarantee the asymptotic stability of the closed-loop system. Several numerical examples are given to show the advantages of the proposed results.

  5. Phonology: An Emergent Consequence of Memory Constraints and Sensory Input.

    Science.gov (United States)

    Lacerda, Francisco

    2003-01-01

    Presents a theoretical model that attempts to account for the early stages of language acquisition in terms of interaction between biological constraints and input characteristics. Notes that the model uses the implications of stochastic representations of the sensory input in a volatile and limited memory. Argues that phonological structure is a…

  6. Input-output model of regional environmental and economic impacts of nuclear power plants

    International Nuclear Information System (INIS)

    Johnson, M.H.; Bennett, J.T.

    1979-01-01

    The costs of delayed licensing of nuclear power plants call for a more comprehensive method of quantifying the economic and environmental impacts on a region. A traditional input-output (I-O) analysis approach is extended to assess the effects of changes in output, income, employment, pollution, water consumption, and the costs and revenues of local government, disaggregated among 23 industry sectors, during the construction and operating phases. Unlike earlier studies, this model uses nonlinear environmental interactions and specifies environmental feedbacks to the economic sector. 20 references

  7. Latitudinal and seasonal variability of the micrometeor input function: A study using model predictions and observations from Arecibo and PFISR

    Science.gov (United States)

    Fentzke, J. T.; Janches, D.; Sparks, J. J.

    2009-05-01

    In this work, we use a semi-empirical model of the micrometeor input function (MIF) together with meteor head-echo observations obtained with two high power and large aperture (HPLA) radars, the 430 MHz Arecibo Observatory (AO) radar in Puerto Rico (18°N, 67°W) and the 450 MHz Poker Flat Incoherent Scatter Radar (PFISR) in Alaska (65°N, 147°W), to study the seasonal and geographical dependence of the meteoric flux in the upper atmosphere. The model, recently developed by Janches et al. [2006a. Modeling the global micrometeor input function in the upper atmosphere observed by high power and large aperture radars. Journal of Geophysical Research 111] and Fentzke and Janches [2008. A semi-empirical model of the contribution from sporadic meteoroid sources on the meteor input function observed at Arecibo. Journal of Geophysical Research (Space Physics) 113 (A03304)], includes an initial mass flux that is provided by the six known meteor sources (i.e. orbital families of dust) as well as detailed modeling of meteoroid atmospheric entry and ablation physics. In addition, we use a simple ionization model to treat radar sensitivity issues by defining minimum electron volume density production thresholds required in the meteor head-echo plasma for detection. This simplified approach works well because we use observations from two radars with similar frequencies, but different sensitivities and locations. This methodology allows us to explore the initial input of particles and how it manifests in different parts of the MLT as observed by these instruments without the need to invoke more sophisticated plasma models, which are under current development. The comparisons between model predictions and radar observations show excellent agreement between diurnal, seasonal, and latitudinal variability of the detected meteor rate and radial velocity distributions, allowing us to understand how individual meteoroid populations contribute to the overall flux at a particular

  8. Next-generation sequence analysis of cancer xenograft models.

    Directory of Open Access Journals (Sweden)

    Fernando J Rossello

    Full Text Available Next-generation sequencing (NGS) studies in cancer are limited by the amount, quality and purity of tissue samples. In this situation, primary xenografts have proven to be useful preclinical models. However, the presence of mouse-derived stromal cells represents a technical challenge to their use in NGS studies. We examined this problem in an established primary xenograft model of small cell lung cancer (SCLC), a malignancy often diagnosed from small biopsy or needle aspirate samples. Using an in silico strategy that assigns reads according to species of origin, we prospectively compared NGS data from primary xenograft models with matched cell lines and with published datasets. We show here that low-coverage whole-genome analysis demonstrated remarkable concordance between published genome data and internal controls, despite the presence of mouse genomic DNA. Exome capture sequencing revealed that this enrichment procedure was highly species-specific, with less than 4% of reads aligning to the mouse genome. Human-specific expression profiling with RNA-Seq replicated array-based gene expression experiments, whereas mouse-specific transcript profiles correlated with published datasets from human cancer stroma. We conclude that primary xenografts represent a useful platform for complex NGS analysis in cancer research for tumours with limited sample resources, or those with prominent stromal cell populations.
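
    The species-of-origin assignment step lends itself to a simple illustration. A minimal sketch of the idea, assuming each read already carries a best alignment score against the human and the mouse reference; the function name, score values and margin are hypothetical, not the paper's pipeline:

        def assign_species(read_scores, margin=5):
            """Assign each read to 'human', 'mouse', or 'ambiguous' by
            comparing its alignment scores against the two references.

            read_scores: dict mapping read id -> (human_score, mouse_score);
            margin: minimum score difference for a confident call.
            """
            calls = {}
            for read_id, (h, m) in read_scores.items():
                if h - m >= margin:
                    calls[read_id] = "human"      # graft-derived read
                elif m - h >= margin:
                    calls[read_id] = "mouse"      # stromal read
                else:
                    calls[read_id] = "ambiguous"  # too close to call
            return calls

        # Example: one clearly human read, one ambiguous read.
        print(assign_species({"r1": (60, 20), "r2": (41, 43)}))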

  9. FLUTAN input specifications

    International Nuclear Information System (INIS)

    Borgwaldt, H.; Baumann, W.; Willerding, G.

    1991-05-01

    FLUTAN is a highly vectorized computer code for 3-D fluid-dynamic and thermal-hydraulic analyses in Cartesian and cylindrical coordinates. It is related to the family of COMMIX codes originally developed at Argonne National Laboratory, USA. To a large extent, FLUTAN relies on basic concepts and structures imported from COMMIX-1B and COMMIX-2, which were made available to KfK in the frame of cooperation contracts in the fast reactor safety field. While on the one hand not all features of the original COMMIX versions have been implemented in FLUTAN, the code on the other hand includes some essential innovative options like the CRESOR solution algorithm, a general 3-dimensional rebalancing scheme for solving the pressure equation, and LECUSSO-QUICK-FRAM techniques suitable for reducing 'numerical diffusion' in both the enthalpy and momentum equations. This report provides users with detailed input instructions, presents formulations of the various model options, and explains, by means of comprehensive sample input, how to use the code. (orig.) [de

  10. Modeling the Process of Event Sequence Data Generated for Working Condition Diagnosis

    Directory of Open Access Journals (Sweden)

    Jianwei Ding

    2015-01-01

    Full Text Available Condition monitoring systems are widely used to monitor the working condition of equipment, generating a vast amount and variety of telemetry data in the process. The main task of surveillance focuses on analyzing these routinely collected telemetry data to help assess the working condition of the equipment. However, with the rapid increase in the volume of telemetry data, it is a nontrivial task to analyze all the telemetry data to understand the working condition of the equipment without any a priori knowledge. In this paper, we propose a probabilistic generative model called the working condition model (WCM), which is capable of simulating the process by which event sequence data are generated and of depicting the working condition of equipment at runtime. With the help of WCM, we are able to analyze how the event sequence data behave in different working modes and meanwhile to detect the working mode of an event sequence (working condition diagnosis). Furthermore, we have applied WCM to illustrative applications like the automated detection of an anomalous event sequence during equipment runtime. Our experimental results on real data sets demonstrate the effectiveness of the model.

  11. Modeling framework for crew decisions during accident sequences

    International Nuclear Information System (INIS)

    Lukic, Y.D.; Worledge, D.H.; Hannaman, G.W.; Spurgin, A.J.

    1986-01-01

    The ability to model the average behavior of operating crews in the course of accident sequences is vital to learning how to prevent damage to power plants and to maintain safety. This paper summarizes the work carried out in support of a Human Reliability Model framework. This work develops the mathematical framework of the model and identifies the parameters which could be measured in some way, e.g., through simulator experience and/or small-scale tests. Selected illustrative examples of the numerical experiments carried out in order to understand the model's sensitivity to parameter variation are presented. These examples are discussed with the objective of deriving insights of a general nature regarding operation of the model, which may lead to an enhanced understanding of man/machine interactions

  12. Learning a weighted sequence model of the nucleosome core and linker yields more accurate predictions in Saccharomyces cerevisiae and Homo sapiens.

    Directory of Open Access Journals (Sweden)

    Sheila M Reynolds

    2010-07-01

    Full Text Available DNA in eukaryotes is packaged into a chromatin complex, the most basic element of which is the nucleosome. The precise positioning of the nucleosome cores allows for selective access to the DNA, and the mechanisms that control this positioning are important pieces of the gene expression puzzle. We describe a large-scale nucleosome pattern that jointly characterizes the nucleosome core and the adjacent linkers and is predominantly characterized by long-range oscillations in the mono-, di- and tri-nucleotide content of the DNA sequence, and we show that this pattern can be used to predict nucleosome positions in both Homo sapiens and Saccharomyces cerevisiae more accurately than previously published methods. Surprisingly, in both H. sapiens and S. cerevisiae, the most informative individual features are the mono-nucleotide patterns, although the inclusion of di- and tri-nucleotide features results in improved performance. Our approach combines a much longer pattern than has been previously used to predict nucleosome positioning from sequence (301 base pairs, centered at the position to be scored) with a novel discriminative classification approach that selectively weights the contributions from each of the input features. The resulting scores are relatively insensitive to local AT-content and can be used to accurately discriminate putative dyad positions from adjacent linker regions without requiring an additional dynamic programming step and without the attendant edge effects and assumptions about linker length modeling and overall nucleosome density. Our approach produces the best dyad-linker classification results published to date in H. sapiens, and outperforms two recently published models on a large set of S. cerevisiae nucleosome positions. Our results suggest that in both genomes, a comparable and relatively small fraction of nucleosomes are well-positioned and that these positions are predictable based on sequence alone. We believe that the

  13. Learning a weighted sequence model of the nucleosome core and linker yields more accurate predictions in Saccharomyces cerevisiae and Homo sapiens.

    Science.gov (United States)

    Reynolds, Sheila M; Bilmes, Jeff A; Noble, William Stafford

    2010-07-08

    DNA in eukaryotes is packaged into a chromatin complex, the most basic element of which is the nucleosome. The precise positioning of the nucleosome cores allows for selective access to the DNA, and the mechanisms that control this positioning are important pieces of the gene expression puzzle. We describe a large-scale nucleosome pattern that jointly characterizes the nucleosome core and the adjacent linkers and is predominantly characterized by long-range oscillations in the mono-, di- and tri-nucleotide content of the DNA sequence, and we show that this pattern can be used to predict nucleosome positions in both Homo sapiens and Saccharomyces cerevisiae more accurately than previously published methods. Surprisingly, in both H. sapiens and S. cerevisiae, the most informative individual features are the mono-nucleotide patterns, although the inclusion of di- and tri-nucleotide features results in improved performance. Our approach combines a much longer pattern than has been previously used to predict nucleosome positioning from sequence (301 base pairs, centered at the position to be scored) with a novel discriminative classification approach that selectively weights the contributions from each of the input features. The resulting scores are relatively insensitive to local AT-content and can be used to accurately discriminate putative dyad positions from adjacent linker regions without requiring an additional dynamic programming step and without the attendant edge effects and assumptions about linker length modeling and overall nucleosome density. Our approach produces the best dyad-linker classification results published to date in H. sapiens, and outperforms two recently published models on a large set of S. cerevisiae nucleosome positions. Our results suggest that in both genomes, a comparable and relatively small fraction of nucleosomes are well-positioned and that these positions are predictable based on sequence alone. We believe that the bulk of the
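
    A minimal sketch of the feature side of the model described in the two records above: mono-, di- and tri-nucleotide frequencies over a 301-bp window, combined by a weighted linear score. The weights shown are illustrative placeholders for the trained discriminative classifier:

        from itertools import product

        def kmer_features(window, ks=(1, 2, 3)):
            """Mono-, di- and tri-nucleotide frequencies in a DNA window
            (the model scores a 301-bp window centered on the candidate dyad)."""
            feats = {}
            for k in ks:
                total = len(window) - k + 1
                for kmer in ("".join(p) for p in product("ACGT", repeat=k)):
                    feats[kmer] = 0.0
                for i in range(total):
                    kmer = window[i:i + k]
                    if kmer in feats:
                        feats[kmer] += 1.0 / total
            return feats

        def dyad_score(window, weights):
            """Weighted linear score separating dyads from linker; `weights`
            would come from the trained classifier (placeholders here)."""
            feats = kmer_features(window)
            return sum(weights.get(k, 0.0) * v for k, v in feats.items())

        w = {"AA": 1.2, "TT": 1.2, "GC": -0.8}   # illustrative weights only
        print(dyad_score("ACGTAATT" * 38, w))     # toy ~300-bp window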

  14. Heat input control in coke ovens battery using artificial intelligence

    Energy Technology Data Exchange (ETDEWEB)

    Kumar, R.; Kannan, C.; Sistla, S.; Kumar, D. [Tata Steel, Jamshedpur (India)

    2005-07-01

    Controlled heating is essential for producing coke with certain desired properties. Controlled heating involves controlling the heat input into the battery dynamically, depending on various process parameters like the current battery temperature, the set point of battery temperature, moisture in coal, ambient temperature, coal fineness, cake breakage, etc. An artificial intelligence (AI) based heat input control has been developed in which some of the above-mentioned process parameters are currently considered and used for calculating the pause time applied between reversals during the heating process. The AI-based model currently considers 3 input variables: temperature deviation history, current deviation of the battery temperature from the target temperature, and the actual heat input into the battery. Work is in progress to control the standard deviation of coke end temperature using this model. The new system, which has been developed in-house, has replaced the Hoogovens-supplied model. 7 figs.

  15. Loss of GABAergic inputs in APP/PS1 mouse model of Alzheimer's disease

    Directory of Open Access Journals (Sweden)

    Tutu Oyelami

    2014-04-01

    Full Text Available Alzheimer's disease (AD) is characterized by symptoms which include seizures, sleep disruption and loss of memory, as well as anxiety in patients. Of particular importance is the possibility of preventing the progressive loss of neuronal projections in the disease. Transgenic mice overexpressing EOFAD mutant PS1 (L166P) and mutant APP (APP KM670/671NL Swedish) (APP/PS1) develop a very early and robust amyloid pathology and display synaptic plasticity impairments and cognitive dysfunction. Here we investigated GABAergic neurotransmission, using multi-electrode array (MEA) technology and pharmacological manipulation to quantify the effect of GABA blockers on field excitatory postsynaptic potentials (fEPSPs), and immunostaining of GABAergic neurons. Using MEA technology we confirm impaired LTP induction by high-frequency stimulation in the APP/PS1 hippocampal CA1 region, associated with a reduced alteration of the paired-pulse ratio after LTP induction. Synaptic dysfunction was also observed under manipulation of the external calcium concentration and in the input-output curve. Electrophysiological recordings from brain slices of the hippocampal CA1 area, in the presence of cocktails of GABAergic receptor blockers, further demonstrated a significant reduction in GABAergic inputs in APP/PS1 mice. Moreover, immunostaining of GAD65, a specific marker for GABAergic neurons, revealed a reduction of GABAergic inputs in the CA1 area of the hippocampus. These results might be linked to the increased seizure sensitivity, premature death and cognitive dysfunction in this animal model of AD. Further in-depth analysis of GABAergic dysfunction in APP/PS1 mice is required and may open new perspectives for AD therapy by restoring GABAergic function.

  16. Sequence2Vec: A novel embedding approach for modeling transcription factor binding affinity landscape

    KAUST Repository

    Dai, Hanjun

    2017-07-26

    Motivation: An accurate characterization of transcription factor (TF)-DNA affinity landscape is crucial to a quantitative understanding of the molecular mechanisms underpinning endogenous gene regulation. While recent advances in biotechnology have brought the opportunity for building binding affinity prediction methods, the accurate characterization of TF-DNA binding affinity landscape still remains a challenging problem. Results: Here we propose a novel sequence embedding approach for modeling the transcription factor binding affinity landscape. Our method represents DNA binding sequences as a hidden Markov model (HMM) which captures both position specific information and long-range dependency in the sequence. A cornerstone of our method is a novel message passing-like embedding algorithm, called Sequence2Vec, which maps these HMMs into a common nonlinear feature space and uses these embedded features to build a predictive model. Our method is a novel combination of the strength of probabilistic graphical models, feature space embedding and deep learning. We conducted comprehensive experiments on over 90 large-scale TF-DNA data sets which were measured by different high-throughput experimental technologies. Sequence2Vec outperforms alternative machine learning methods as well as the state-of-the-art binding affinity prediction methods.

  17. A method to synchronize signals from multiple patient monitoring devices through a single input channel for inclusion in list-mode acquisitions

    International Nuclear Information System (INIS)

    O’Connor, J. Michael; Pretorius, P. Hendrik; Johnson, Karen; King, Michael A.

    2013-01-01

    Purpose: This technical note documents a method that the authors developed for combining a signal to synchronize a patient-monitoring device with a second physiological signal for inclusion into list-mode acquisition. Our specific application requires synchronizing an external patient motion-tracking system with a medical imaging system by multiplexing the tracking input with the ECG input. The authors believe that their methodology can be adapted for use in a variety of medical imaging modalities including single photon emission computed tomography (SPECT) and positron emission tomography (PET). Methods: The authors insert a unique pulse sequence into a single physiological input channel. This sequence is then recorded in the list-mode acquisition along with the R-wave pulse used for ECG gating. The specific form of our pulse sequence allows for recognition of the time point being synchronized even when portions of the pulse sequence are lost due to collisions with R-wave pulses. This was achieved by altering our software used in binning the list-mode data to recognize even a portion of our pulse sequence. Limitations on heart rates at which our pulse sequence could be reliably detected were investigated by simulating the mixing of the two signals as a function of heart rate and time point during the cardiac cycle at which our pulse sequence is mixed with the cardiac signal. Results: The authors have successfully achieved accurate temporal synchronization of our motion-tracking system with acquisition of SPECT projections used in 17 recent clinical research cases. In our simulation analysis the authors determined that synchronization to enable compensation for body and respiratory motion could be achieved for heart rates up to 125 beats-per-minute (bpm). Conclusions: Synchronization of list-mode acquisition with external patient monitoring devices such as those employed in motion-tracking can reliably be achieved using a simple method that can be implemented using

  18. A method to synchronize signals from multiple patient monitoring devices through a single input channel for inclusion in list-mode acquisitions

    Energy Technology Data Exchange (ETDEWEB)

    O’Connor, J. Michael; Pretorius, P. Hendrik; Johnson, Karen; King, Michael A., E-mail: Michael.King@umassmed.edu [Department of Radiology, University of Massachusetts Medical School, Worcester, Massachusetts 01655 (United States)

    2013-12-15

    Purpose: This technical note documents a method that the authors developed for combining a signal to synchronize a patient-monitoring device with a second physiological signal for inclusion into list-mode acquisition. Our specific application requires synchronizing an external patient motion-tracking system with a medical imaging system by multiplexing the tracking input with the ECG input. The authors believe that their methodology can be adapted for use in a variety of medical imaging modalities including single photon emission computed tomography (SPECT) and positron emission tomography (PET). Methods: The authors insert a unique pulse sequence into a single physiological input channel. This sequence is then recorded in the list-mode acquisition along with the R-wave pulse used for ECG gating. The specific form of our pulse sequence allows for recognition of the time point being synchronized even when portions of the pulse sequence are lost due to collisions with R-wave pulses. This was achieved by altering our software used in binning the list-mode data to recognize even a portion of our pulse sequence. Limitations on heart rates at which our pulse sequence could be reliably detected were investigated by simulating the mixing of the two signals as a function of heart rate and time point during the cardiac cycle at which our pulse sequence is mixed with the cardiac signal. Results: The authors have successfully achieved accurate temporal synchronization of our motion-tracking system with acquisition of SPECT projections used in 17 recent clinical research cases. In our simulation analysis the authors determined that synchronization to enable compensation for body and respiratory motion could be achieved for heart rates up to 125 beats-per-minute (bpm). Conclusions: Synchronization of list-mode acquisition with external patient monitoring devices such as those employed in motion-tracking can reliably be achieved using a simple method that can be implemented using
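
    A minimal sketch of the detection idea in the two records above: the sync signature is a known pattern of pulse offsets, and it is accepted even when part of the pattern is lost to collisions with R-wave pulses. All timing values and the acceptance threshold are hypothetical, not the authors' implementation:

        def find_sync(pulse_times, pattern, tol=2.0, min_frac=0.6):
            """Locate a multiplexed sync signature in a list of pulse timestamps.

            pattern: expected offsets (ms) of the signature pulses relative to
            the sync time point. Detection succeeds if at least `min_frac` of
            the pattern survives collisions with R-wave pulses.
            """
            for t0 in pulse_times:                      # try each pulse as anchor
                hits = sum(
                    any(abs((t0 + off) - t) <= tol for t in pulse_times)
                    for off in pattern
                )
                if hits / len(pattern) >= min_frac:
                    return t0                           # synchronized time point
            return None

        # Signature of 5 pulses; the 140 ms pulse was lost to an R-wave collision.
        times = [100.0, 110.0, 130.0, 160.0, 200.0]
        print(find_sync(times, pattern=[0, 10, 30, 40, 60]))  # -> 100.0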

  19. Video-based Chinese Input System via Fingertip Tracking

    Directory of Open Access Journals (Sweden)

    Chih-Chang Yu

    2012-10-01

    Full Text Available In this paper, we propose a system to detect and track fingertips online and recognize Mandarin Phonetic Symbols (MPS) for user-friendly Chinese input purposes. Using fingertips and cameras to replace pens and touch panels as input devices could reduce cost and improve the ease of use and comfort of the human-computer interface. In the proposed framework, particle filters with enhanced appearance models are applied for robust fingertip tracking. Afterwards, MPS combination recognition is performed on the tracked fingertip trajectories using Hidden Markov Models. In the proposed system, the fingertips of the users can be robustly tracked, and the challenges of entering, leaving and virtual strokes caused by video-based fingertip input can be overcome. Experimental results have shown the feasibility and effectiveness of the proposed work.

  20. A Markovian model of evolving world input-output network.

    Directory of Open Access Journals (Sweden)

    Vahid Moosavi

    Full Text Available The initial theoretical connections between Leontief input-output models and Markov chains were established back in the 1950s. However, considering the wide variety of mathematical properties of Markov chains, so far there has not been a full investigation of evolving world economic networks with Markov chain formalism. In this work, using the recently available world input-output database, we investigated the evolution of the world economic network from 1995 to 2011 through analysis of a time series of finite Markov chains. We assessed different aspects of this evolving system via different known properties of the Markov chains, such as mixing time, Kemeny constant, steady state probabilities and perturbation analysis of the transition matrices. First, we showed how the time series of mixing times and Kemeny constants could be used as an aggregate index of globalization. Next, we focused on the steady state probabilities as a measure of the structural power of economies, comparable to GDP shares, the traditional index of economic welfare. Further, we introduced two measures of systemic risk, called systemic influence and systemic fragility, where the former is the ratio of the number of nodes influenced by a shock in the activity of a given node to the total number of nodes, and the latter is based on the number of times a specific economic node is affected by a shock in the activity of any of the other nodes. Finally, focusing on the Kemeny constant as a global indicator of monetary flow across the network, we showed that there is a paradoxical effect of a change in the activity levels of economic nodes on the overall flow of the world economic network. While the economic slowdown of the majority of nodes with high structural power results in a slower average monetary flow over the network, there are some nodes whose slowdowns improve the overall quality of the network in terms of connectivity and the average flow of money.

  1. A Markovian model of evolving world input-output network.

    Science.gov (United States)

    Moosavi, Vahid; Isacchini, Giulio

    2017-01-01

    The initial theoretical connections between Leontief input-output models and Markov chains were established back in the 1950s. However, considering the wide variety of mathematical properties of Markov chains, so far there has not been a full investigation of evolving world economic networks with Markov chain formalism. In this work, using the recently available world input-output database, we investigated the evolution of the world economic network from 1995 to 2011 through analysis of a time series of finite Markov chains. We assessed different aspects of this evolving system via different known properties of the Markov chains, such as mixing time, Kemeny constant, steady state probabilities and perturbation analysis of the transition matrices. First, we showed how the time series of mixing times and Kemeny constants could be used as an aggregate index of globalization. Next, we focused on the steady state probabilities as a measure of the structural power of economies, comparable to GDP shares, the traditional index of economic welfare. Further, we introduced two measures of systemic risk, called systemic influence and systemic fragility, where the former is the ratio of the number of nodes influenced by a shock in the activity of a given node to the total number of nodes, and the latter is based on the number of times a specific economic node is affected by a shock in the activity of any of the other nodes. Finally, focusing on the Kemeny constant as a global indicator of monetary flow across the network, we showed that there is a paradoxical effect of a change in the activity levels of economic nodes on the overall flow of the world economic network. While the economic slowdown of the majority of nodes with high structural power results in a slower average monetary flow over the network, there are some nodes whose slowdowns improve the overall quality of the network in terms of connectivity and the average flow of money.
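
    Two of the Markov-chain quantities used in this study, the stationary distribution and the Kemeny constant, are direct computations on a transition matrix. A minimal NumPy sketch on a toy three-sector matrix (illustrative values, not the world input-output data):

        import numpy as np

        def steady_state(P):
            """Stationary distribution of a row-stochastic matrix P."""
            vals, vecs = np.linalg.eig(P.T)
            pi = np.real(vecs[:, np.argmin(np.abs(vals - 1.0))])
            return pi / pi.sum()

        def kemeny_constant(P):
            """Kemeny constant K = sum over non-unit eigenvalues of
            1/(1 - lambda) (one common convention; some authors add 1)."""
            vals = sorted(np.linalg.eigvals(P), key=lambda v: -abs(v))
            return float(np.real(sum(1.0 / (1.0 - v) for v in vals[1:])))

        # Toy 3-sector "world economy" transition matrix.
        P = np.array([[0.6, 0.3, 0.1],
                      [0.2, 0.7, 0.1],
                      [0.1, 0.2, 0.7]])
        print(steady_state(P), kemeny_constant(P))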

  2. Dynamics of a Stage Structured Pest Control Model in a Polluted Environment with Pulse Pollution Input

    OpenAIRE

    Liu, Bing; Xu, Ling; Kang, Baolin

    2013-01-01

    By using a pollution model and impulsive delay differential equations, we formulate a pest control model with stage structure for the natural enemy in a polluted environment, introducing a constant periodic pollutant input and killing pests at different fixed moments, and we investigate the dynamics of such a system. We assume that only the natural enemies are affected by pollution, and we choose a method of killing pests that does not harm the natural enemies. Sufficient conditions for global attractivity ...

  3. Mars 2.2 code manual: input requirements

    International Nuclear Information System (INIS)

    Chung, Bub Dong; Lee, Won Jae; Jeong, Jae Jun; Lee, Young Jin; Hwang, Moon Kyu; Kim, Kyung Doo; Lee, Seung Wook; Bae, Sung Won

    2003-07-01

    Korea Atomic Energy Research Institute (KAERI) conceived and started the development of the MARS code with the main objective of producing a state-of-the-art realistic thermal-hydraulic systems analysis code with multi-dimensional analysis capability. MARS achieves this objective by very tightly integrating the one-dimensional RELAP5/MOD3 with the multi-dimensional COBRA-TF codes. The method of integration of the two codes is based on dynamic link library techniques, and the system pressure equation matrices of both codes are implicitly integrated and solved simultaneously. In addition, the Equation-of-State (EOS) for light water was unified by replacing the EOS of COBRA-TF with that of RELAP5. This input manual provides a complete list of the input required to run MARS. The manual is divided largely into two parts, namely, the one-dimensional part and the multi-dimensional part. The inputs for auxiliary parts such as minor edit requests and graph formatting inputs are shared by the two parts, and as such mixed input is possible. The overall structure of the input is modeled on the structure of RELAP5, and as such the layout of the manual is very similar to that of RELAP5. This similarity to the RELAP5 input is intentional, as this input scheme allows minimum modification between the inputs of RELAP5 and MARS. The MARS development team would like to express its appreciation to the RELAP5 Development Team and the USNRC for making this manual possible

  4. MARS code manual volume II: input requirements

    International Nuclear Information System (INIS)

    Chung, Bub Dong; Kim, Kyung Doo; Bae, Sung Won; Jeong, Jae Jun; Lee, Seung Wook; Hwang, Moon Kyu

    2010-02-01

    Korea Atomic Energy Research Institute (KAERI) conceived and started the development of the MARS code with the main objective of producing a state-of-the-art realistic thermal-hydraulic systems analysis code with multi-dimensional analysis capability. MARS achieves this objective by very tightly integrating the one-dimensional RELAP5/MOD3 with the multi-dimensional COBRA-TF codes. The method of integration of the two codes is based on dynamic link library techniques, and the system pressure equation matrices of both codes are implicitly integrated and solved simultaneously. In addition, the Equation-of-State (EOS) for light water was unified by replacing the EOS of COBRA-TF with that of RELAP5. This input manual provides a complete list of the input required to run MARS. The manual is divided largely into two parts, namely, the one-dimensional part and the multi-dimensional part. The inputs for auxiliary parts such as minor edit requests and graph formatting inputs are shared by the two parts, and as such mixed input is possible. The overall structure of the input is modeled on the structure of RELAP5, and as such the layout of the manual is very similar to that of RELAP5. This similarity to the RELAP5 input is intentional, as this input scheme allows minimum modification between the inputs of RELAP5 and MARS3.1. The MARS3.1 development team would like to express its appreciation to the RELAP5 Development Team and the USNRC for making this manual possible

  5. The input and output management of solid waste using DEA models: A case study at Jengka, Pahang

    Science.gov (United States)

    Mohamed, Siti Rosiah; Ghazali, Nur Fadzrina Mohd; Mohd, Ainun Hafizah

    2017-08-01

    Data Envelopment Analysis (DEA), as a tool for obtaining performance indices, has been used extensively across many types of organization. Improving the efficiency of Decision Making Units (DMUs) can be impractical because some inputs and outputs are uncontrollable, and in certain situations this produces weak efficiency scores that often reflect the impact of the operating environment. Based on data from Alam Flora Sdn. Bhd Jengka, this study determines the efficiency of solid waste management (SWM) in the town of Jengka, Pahang, using the input-oriented (CCR-I) and output-oriented (CCR-O) DEA models and the duality formulation with average input and output vectors. Three input variables (collection length in meters, weekly collection time in hours and number of garbage trucks) and two output variables (collection frequency and total solid waste collected in kilograms) are analyzed. In conclusion, only three of the 23 roads are efficient, achieving an efficiency score of 1, while the remaining 20 roads are managed inefficiently.
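
    A minimal sketch of the input-oriented CCR envelopment model solved with scipy's linprog; a DMU (here, a road) is efficient when the optimal theta equals 1. All data values are toy numbers, not the Jengka measurements:

        import numpy as np
        from scipy.optimize import linprog

        def ccr_input_efficiency(X, Y, j0):
            """Input-oriented CCR efficiency of DMU j0 (envelopment form).

            X: (m inputs x n DMUs), Y: (s outputs x n DMUs). Decision
            variables are [theta, lambda_1..lambda_n], all nonnegative.
            """
            m, n = X.shape
            s = Y.shape[0]
            c = np.r_[1.0, np.zeros(n)]              # minimize theta
            # sum_j lambda_j x_ij - theta x_i,j0 <= 0 for each input i
            A_in = np.c_[-X[:, [j0]], X]
            # -sum_j lambda_j y_rj <= -y_r,j0 for each output r
            A_out = np.c_[np.zeros((s, 1)), -Y]
            res = linprog(c, A_ub=np.r_[A_in, A_out],
                          b_ub=np.r_[np.zeros(m), -Y[:, j0]])
            return res.fun

        # Toy data: 3 inputs (length, hours/week, trucks), 2 outputs.
        X = np.array([[500.0, 800.0, 650.0],
                      [10.0, 14.0, 12.0],
                      [2.0, 3.0, 2.0]])
        Y = np.array([[3.0, 4.0, 3.0],
                      [1200.0, 1500.0, 1400.0]])
        print([round(ccr_input_efficiency(X, Y, j), 3) for j in range(3)])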

  6. Graph mining for next generation sequencing: leveraging the assembly graph for biological insights.

    Science.gov (United States)

    Warnke-Sommer, Julia; Ali, Hesham

    2016-05-06

    The assembly of Next Generation Sequencing (NGS) reads remains a challenging task. This is especially true for the assembly of metagenomics data that originate from environmental samples potentially containing hundreds to thousands of unique species. The principal objective of current assembly tools is to assemble NGS reads into contiguous stretches of sequence, called contigs, while maximizing both accuracy and contig length. The end goal of this process is to produce longer contigs, with the major focus being on assembly only. Sequence read assembly is an aggregative process, during which read overlap relationship information is lost as reads are merged into longer sequences or contigs. The assembly graph is information rich and capable of capturing the genomic architecture of an input read data set. We have developed a novel hybrid graph in which nodes represent sequence regions at different levels of granularity. This model, utilized in the assembly and analysis pipeline Focus, presents a concise yet feature-rich view of a given input data set, allowing for the extraction of biologically relevant graph structures for graph mining purposes. Focus was used to create hybrid graphs to model metagenomics data sets obtained from the gut microbiomes of five individuals with Crohn's disease and eight healthy individuals. Repetitive and mobile genetic elements are found to be associated with hybrid graph structure. Using graph mining techniques, a comparative study of the Crohn's disease and healthy data sets was conducted with a focus on antibiotic resistance genes associated with transposase genes. Results demonstrated significant differences in the phylogenetic distribution of categories of antibiotic resistance genes in the healthy and diseased patients. Focus was also evaluated as a pure assembly tool and produced excellent results when compared against the Meta-velvet, Omega, and UD-IDBA assemblers. Mining the hybrid graph can reveal biological phenomena captured

  7. JACOP: A simple and robust method for the automated classification of protein sequences with modular architecture

    Directory of Open Access Journals (Sweden)

    Pagni Marco

    2005-08-01

    Full Text Available Abstract Background Whole-genome sequencing projects are rapidly producing an enormous number of new sequences. Consequently almost every family of proteins now contains hundreds of members. It has thus become necessary to develop tools which classify protein sequences automatically and also quickly and reliably. The difficulty of this task is intimately linked to the mechanism by which protein sequences diverge, i.e. by simultaneous residue substitutions, insertions and/or deletions and whole domain reorganisations (duplications/swapping/fusion). Results Here we present a novel approach, which is based on random sampling of sub-sequences (probes) out of a set of input sequences. The probes are compared to the input sequences, after a normalisation step; the results are used to partition the input sequences into homogeneous groups of proteins. In addition, this method provides information on diagnostic parts of the proteins. The performance of this method is challenged by two data sets. The first one contains the sequences of prokaryotic lyases that could be arranged as a multiple sequence alignment. The second one contains all proteins from Swiss-Prot Release 36 with at least one Src homology 2 (SH2) domain – a classical example for proteins with modular architecture. Conclusion The outcome of our method is robust and highly reproducible, as shown using bootstrap and resampling validation procedures. The results are essentially coherent with the biology. This method depends solely on well-established publicly available software and algorithms.

  8. Input Forces Estimation for Nonlinear Systems by Applying a Square-Root Cubature Kalman Filter.

    Science.gov (United States)

    Song, Xuegang; Zhang, Yuexin; Liang, Dakai

    2017-10-10

    This work presents a novel inverse algorithm to estimate time-varying input forces in nonlinear beam systems. With the system parameters determined, the input forces can be estimated in real time from dynamic responses, which can be used for structural health monitoring. In the process of input force estimation, the Runge-Kutta fourth-order algorithm was employed to discretize the state equations; a square-root cubature Kalman filter (SRCKF) was employed to suppress white noise; and the residual innovation sequences, a priori state estimate, gain matrix, and innovation covariance generated by the SRCKF were employed to estimate the magnitude and location of the input forces using a nonlinear estimator. The nonlinear estimator was based on the least squares method. Numerical simulations of a large-deflection beam and an experiment on a linear beam constrained by a nonlinear spring were employed. The results demonstrated the accuracy of the nonlinear algorithm.

  9. Input Forces Estimation for Nonlinear Systems by Applying a Square-Root Cubature Kalman Filter

    Directory of Open Access Journals (Sweden)

    Xuegang Song

    2017-10-01

    Full Text Available This work presents a novel inverse algorithm to estimate time-varying input forces in nonlinear beam systems. With the system parameters determined, the input forces can be estimated in real time from dynamic responses, which can be used for structural health monitoring. In the process of input force estimation, the Runge-Kutta fourth-order algorithm was employed to discretize the state equations; a square-root cubature Kalman filter (SRCKF) was employed to suppress white noise; and the residual innovation sequences, a priori state estimate, gain matrix, and innovation covariance generated by the SRCKF were employed to estimate the magnitude and location of the input forces using a nonlinear estimator. The nonlinear estimator was based on the least squares method. Numerical simulations of a large-deflection beam and an experiment on a linear beam constrained by a nonlinear spring were employed. The results demonstrated the accuracy of the nonlinear algorithm.
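
    The cubature step at the heart of the SRCKF in the two records above places 2n points at x ± sqrt(n)·S_i, where S is a square root of the state covariance and S_i its columns. A minimal sketch of the point generation; the full filter would propagate these points through the nonlinear beam dynamics and rebuild the square root via a QR factorization:

        import numpy as np

        def cubature_points(x, S):
            """Generate the 2n cubature points of the (square-root) cubature
            Kalman filter: x +/- sqrt(n) * S_i for each column S_i of S."""
            n = x.size
            scaled = np.sqrt(n) * S
            return np.c_[x[:, None] + scaled, x[:, None] - scaled]  # (n, 2n)

        # State estimate and Cholesky factor of its covariance (toy values).
        x = np.array([0.0, 1.0])
        S = np.linalg.cholesky(np.array([[0.04, 0.0], [0.0, 0.09]]))
        pts = cubature_points(x, S)
        x_pred = pts.mean(axis=1)   # predicted mean = average of the points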

  10. Realistic modelling of the seismic input: Site effects and parametric studies

    International Nuclear Information System (INIS)

    Romanelli, F.; Vaccari, F.; Panza, G.F.

    2002-11-01

    We illustrate the work done in the framework of a large international cooperation, presenting the recent numerical experiments carried out within the EC project 'Advanced methods for assessing the seismic vulnerability of existing motorway bridges' (VAB) to assess the importance of non-synchronous seismic excitation of long structures. The definition of the seismic input at the Warth bridge site, i.e. the determination of the seismic ground motion due to an earthquake with a given magnitude and epicentral distance from the site, has been done following a theoretical approach. In order to perform an accurate and realistic estimate of site effects and of differential motion, it is necessary to make a parametric study that takes into account the complex combination of the source and propagation parameters in realistic geological structures. The computation of a wide set of time histories and spectral information, corresponding to possible seismotectonic scenarios for different sources and structural models, allows us to construct damage scenarios that are out of the reach of stochastic models, at a very low cost/benefit ratio. (author)

  11. Establishing gene models from the Pinus pinaster genome using gene capture and BAC sequencing.

    Science.gov (United States)

    Seoane-Zonjic, Pedro; Cañas, Rafael A; Bautista, Rocío; Gómez-Maldonado, Josefa; Arrillaga, Isabel; Fernández-Pozo, Noé; Claros, M Gonzalo; Cánovas, Francisco M; Ávila, Concepción

    2016-02-27

    In the era of high-throughput DNA sequencing, assembling and understanding gymnosperm mega-genomes remains a challenge. Although drafts of three conifer genomes have recently been published, this number is too low to understand the full complexity of conifer genomes. Using techniques focused on specific genes, gene models can be established that can aid in the assembly of gene-rich regions, and this information can be used to compare genomes and understand functional evolution. In this study, gene capture technology combined with BAC isolation and sequencing was used as an experimental approach to establish de novo gene structures without a reference genome. Probes were designed for 866 maritime pine transcripts to sequence genes captured from genomic DNA. The gene models were constructed using GeneAssembler, a new bioinformatic pipeline, which reconstructed over 82% of the gene structures, and a high proportion (85%) of the captured gene models contained sequences from the promoter regulatory region. In a parallel experiment, the P. pinaster BAC library was screened to isolate clones containing genes whose cDNA sequences were already available. BAC clones containing the asparagine synthetase, sucrose synthase and xyloglucan endotransglycosylase gene sequences were isolated and used in this study. The gene models derived from the gene capture approach were compared with the genomic sequences derived from the BAC clones. This combined approach is a particularly efficient way to capture the genomic structures of gene families with a small number of members. The experimental approach used in this study is a valuable combined technique to study genomic gene structures in species for which a reference genome is unavailable. It can be used to establish exon/intron boundaries in unknown gene structures, to reconstruct incomplete genes and to obtain promoter sequences that can be used for transcriptional studies. A bioinformatics algorithm (GeneAssembler) is also provided as a

  12. Property - preserving convergent sequences of invariant sets for linear discrete - time systems

    NARCIS (Netherlands)

    Athanasopoulos, N.; Lazar, M.; Bitsoris, G.

    2014-01-01

    Abstract: New sequences of monotonically increasing sets are introduced, for linear discrete-time systems subject to input and state constraints. The elements of the set sequences are controlled invariant and admissible regions of stabilizability. They are generated from the iterative application of

  13. Outsourcing, public Input provision and policy cooperation

    OpenAIRE

    Aronsson, Thomas; Koskela, Erkki

    2009-01-01

    This paper concerns public input provision as an instrument for redistribution under international outsourcing by using a model-economy comprising two countries, North and South, where firms in the North may outsource part of their low-skilled labor intensive production to the South. We consider two interrelated issues: (i) the incentives for each country to modify the provision of public input goods in response to international outsourcing, and (ii) whether international outsourcing justifie...

  14. A grey neural network and input-output combined forecasting model. Primary energy consumption forecasts in Spanish economic sectors

    International Nuclear Information System (INIS)

    Liu, Xiuli; Moreno, Blanca; García, Ana Salomé

    2016-01-01

    A combined forecasting model integrating the grey forecasting method and a back-propagation neural network, called the Grey Neural Network and Input-Output Combined Forecasting Model (GNF-IO model), is proposed. A real case of energy consumption forecasting is used to validate the effectiveness of the proposed model. The GNF-IO model predicts coal, crude oil, natural gas, renewable and nuclear primary energy consumption volumes for Spain's 36 sub-sectors from 2010 to 2015 according to three different GDP growth scenarios (optimistic, baseline and pessimistic). Model tests show that the proposed model has higher simulation and forecasting accuracy on energy consumption than the grey models alone and other combination methods. The forecasts indicate that the primary energy sources coal, crude oil and natural gas will represent, on average, 83.6% of total primary energy consumption, raising concerns about security of supply and energy cost and adding risk for some industrial production processes. Thus, Spanish industry must speed up its transition to an energy-efficient economy, achieving a cost reduction and an increase in the level of self-supply. - Highlights: • A forecasting system using grey models combined with input-output models is proposed. • Primary energy consumption in Spain is used to validate the model. • The grey-based combined model has good forecasting performance. • Natural gas will represent the majority of total primary energy consumption. • Concerns about security of supply, energy cost and industry competitiveness are raised.
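
    The grey component of such hybrid forecasters is commonly the GM(1,1) model; a minimal sketch of GM(1,1) fitting and extrapolation under that assumption (the input series is a toy, not the Spanish energy data):

        import numpy as np

        def gm11_forecast(x, horizon=3):
            """Grey GM(1,1) forecast: fit dx1/dt + a*x1 = b on the
            accumulated series x1 = cumsum(x), then difference back."""
            x = np.asarray(x, dtype=float)
            x1 = np.cumsum(x)
            z = 0.5 * (x1[1:] + x1[:-1])             # background values
            B = np.c_[-z, np.ones(len(z))]
            a, b = np.linalg.lstsq(B, x[1:], rcond=None)[0]
            n = len(x) + horizon
            x1_hat = (x[0] - b / a) * np.exp(-a * np.arange(n)) + b / a
            return np.diff(x1_hat)[len(x) - 1:]      # next `horizon` values

        print(gm11_forecast([102.0, 108.0, 115.0, 123.0, 131.0], horizon=2))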

  15. On Input Vector Representation for the SVR model of Reactor Core Loading Pattern Critical Parameters

    International Nuclear Information System (INIS)

    Trontl, K.; Pevec, D.; Smuc, T.

    2008-01-01

    Determination and optimization of the reactor core loading pattern is an important factor in nuclear power plant operation. The goal is to minimize the amount of enriched uranium (fresh fuel) and burnable absorbers placed in the core, while maintaining nuclear power plant operational and safety characteristics. The usual approach to loading pattern optimization involves a high degree of engineering judgment, a set of heuristic rules, an optimization algorithm and a computer code used for evaluating proposed loading patterns. The speed of the optimization process is highly dependent on the computer code used for the evaluation. Recently, we proposed a new method for fast loading pattern evaluation based on a general robust regression model relying on state-of-the-art research in the field of machine learning. We employed the Support Vector Regression (SVR) technique. SVR is a supervised learning method in which model parameters are automatically determined by solving a quadratic optimization problem. The preliminary tests revealed good potential for applying the SVR method to fast and accurate reactor core loading pattern evaluation. However, some aspects of model development are still unresolved. The main objective of the work reported in this paper was to conduct additional tests and analyses required for full clarification of the SVR's applicability to loading pattern evaluation. We focused our attention on the parameters defining the input vector, primarily its structure and complexity, and on the parameters defining the kernel functions. All the tests were conducted on the NPP Krsko reactor core, using the MCRAC code for the calculation of reactor core loading pattern critical parameters. The tested input vector structures did not influence the accuracy of the models, suggesting that the initially tested input vector, consisting of the number of IFBAs and the k-inf at the beginning of the cycle, is adequate. The influence of kernel-function-specific parameters (σ for RBF kernel
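
    A minimal sketch of the regression step with the two-component input vector named above (number of IFBAs, k-inf at the beginning of cycle), using scikit-learn's SVR with an RBF kernel; all training values are illustrative, not NPP Krsko data:

        import numpy as np
        from sklearn.svm import SVR

        # Toy training set: input vector = (number of IFBAs, k-inf at BOC),
        # target = a core critical parameter (values are illustrative only).
        X = np.array([[64, 1.18], [80, 1.16], [96, 1.14], [112, 1.12]])
        y = np.array([17.8, 17.2, 16.6, 16.1])   # e.g. cycle-length proxy

        # RBF kernel; C, epsilon and gamma are the knobs such studies tune.
        model = SVR(kernel="rbf", C=100.0, epsilon=0.05, gamma="scale")
        model.fit(X, y)
        print(model.predict(np.array([[88, 1.15]])))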

  16. MCTP system model based on linear programming optimization of apertures obtained from sequencing patient image data maps

    Energy Technology Data Exchange (ETDEWEB)

    Ureba, A. [Dpto. Fisiología Médica y Biofísica. Facultad de Medicina, Universidad de Sevilla, E-41009 Sevilla (Spain); Salguero, F. J. [Nederlands Kanker Instituut, Antoni van Leeuwenhoek Ziekenhuis, 1066 CX Ámsterdam, The Nederlands (Netherlands); Barbeiro, A. R.; Jimenez-Ortega, E.; Baeza, J. A.; Leal, A., E-mail: alplaza@us.es [Dpto. Fisiología Médica y Biofísica, Facultad de Medicina, Universidad de Sevilla, E-41009 Sevilla (Spain); Miras, H. [Servicio de Radiofísica, Hospital Universitario Virgen Macarena, E-41009 Sevilla (Spain); Linares, R.; Perucha, M. [Servicio de Radiofísica, Hospital Infanta Luisa, E-41010 Sevilla (Spain)

    2014-08-15

    Purpose: The authors present a hybrid direct multileaf collimator (MLC) aperture optimization model exclusively based on sequencing of patient imaging data to be implemented on a Monte Carlo treatment planning system (MC-TPS) to allow the explicit radiation transport simulation of advanced radiotherapy treatments with optimal results in efficient times for clinical practice. Methods: The planning system (called CARMEN) is a full MC-TPS, controlled through a MATLAB interface, which is based on the sequencing of a novel map, called “biophysical” map, which is generated from enhanced image data of patients to achieve a set of segments actually deliverable. In order to reduce the required computation time, the conventional fluence map has been replaced by the biophysical map which is sequenced to provide direct apertures that will later be weighted by means of an optimization algorithm based on linear programming. A ray-casting algorithm throughout the patient CT assembles information about the found structures, the mass thickness crossed, as well as PET values. Data are recorded to generate a biophysical map for each gantry angle. These maps are the input files for a home-made sequencer developed to take into account the interactions of photons and electrons with the MLC. For each linac (Axesse of Elekta and Primus of Siemens) and energy beam studied (6, 9, 12, 15 MeV and 6 MV), phase space files were simulated with the EGSnrc/BEAMnrc code. The dose calculation in the patient was carried out with the BEAMDOSE code. This code is a modified version of EGSnrc/DOSXYZnrc able to calculate beamlet doses in order to combine them with different weights during the optimization process. Results: Three complex radiotherapy treatments were selected to check the reliability of CARMEN in situations where the MC calculation can offer an added value: A head-and-neck case (Case I) with three targets delineated on PET/CT images and a demanding dose-escalation; a partial breast

  17. MCTP system model based on linear programming optimization of apertures obtained from sequencing patient image data maps

    International Nuclear Information System (INIS)

    Ureba, A.; Salguero, F. J.; Barbeiro, A. R.; Jimenez-Ortega, E.; Baeza, J. A.; Leal, A.; Miras, H.; Linares, R.; Perucha, M.

    2014-01-01

    Purpose: The authors present a hybrid direct multileaf collimator (MLC) aperture optimization model exclusively based on sequencing of patient imaging data to be implemented on a Monte Carlo treatment planning system (MC-TPS) to allow the explicit radiation transport simulation of advanced radiotherapy treatments with optimal results in efficient times for clinical practice. Methods: The planning system (called CARMEN) is a full MC-TPS, controlled through a MATLAB interface, which is based on the sequencing of a novel map, called “biophysical” map, which is generated from enhanced image data of patients to achieve a set of segments actually deliverable. In order to reduce the required computation time, the conventional fluence map has been replaced by the biophysical map which is sequenced to provide direct apertures that will later be weighted by means of an optimization algorithm based on linear programming. A ray-casting algorithm throughout the patient CT assembles information about the found structures, the mass thickness crossed, as well as PET values. Data are recorded to generate a biophysical map for each gantry angle. These maps are the input files for a home-made sequencer developed to take into account the interactions of photons and electrons with the MLC. For each linac (Axesse of Elekta and Primus of Siemens) and energy beam studied (6, 9, 12, 15 MeV and 6 MV), phase space files were simulated with the EGSnrc/BEAMnrc code. The dose calculation in the patient was carried out with the BEAMDOSE code. This code is a modified version of EGSnrc/DOSXYZnrc able to calculate beamlet doses in order to combine them with different weights during the optimization process. Results: Three complex radiotherapy treatments were selected to check the reliability of CARMEN in situations where the MC calculation can offer an added value: A head-and-neck case (Case I) with three targets delineated on PET/CT images and a demanding dose-escalation; a partial breast

  18. MCTP system model based on linear programming optimization of apertures obtained from sequencing patient image data maps.

    Science.gov (United States)

    Ureba, A; Salguero, F J; Barbeiro, A R; Jimenez-Ortega, E; Baeza, J A; Miras, H; Linares, R; Perucha, M; Leal, A

    2014-08-01

    The authors present a hybrid direct multileaf collimator (MLC) aperture optimization model exclusively based on sequencing of patient imaging data to be implemented on a Monte Carlo treatment planning system (MC-TPS) to allow the explicit radiation transport simulation of advanced radiotherapy treatments with optimal results in efficient times for clinical practice. The planning system (called CARMEN) is a full MC-TPS, controlled through a MATLAB interface, which is based on the sequencing of a novel map, called "biophysical" map, which is generated from enhanced image data of patients to achieve a set of segments actually deliverable. In order to reduce the required computation time, the conventional fluence map has been replaced by the biophysical map which is sequenced to provide direct apertures that will later be weighted by means of an optimization algorithm based on linear programming. A ray-casting algorithm throughout the patient CT assembles information about the found structures, the mass thickness crossed, as well as PET values. Data are recorded to generate a biophysical map for each gantry angle. These maps are the input files for a home-made sequencer developed to take into account the interactions of photons and electrons with the MLC. For each linac (Axesse of Elekta and Primus of Siemens) and energy beam studied (6, 9, 12, 15 MeV and 6 MV), phase space files were simulated with the EGSnrc/BEAMnrc code. The dose calculation in the patient was carried out with the BEAMDOSE code. This code is a modified version of EGSnrc/DOSXYZnrc able to calculate beamlet doses in order to combine them with different weights during the optimization process. Three complex radiotherapy treatments were selected to check the reliability of CARMEN in situations where the MC calculation can offer an added value: A head-and-neck case (Case I) with three targets delineated on PET/CT images and a demanding dose-escalation; a partial breast irradiation case (Case II) solved
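
    The final weighting step in the three records above is a linear program over precomputed per-aperture doses. A minimal sketch of one plausible L1 formulation with scipy; the dose-influence values are toy numbers, and the exact objective used in CARMEN may differ:

        import numpy as np
        from scipy.optimize import linprog

        def aperture_weights(D, d):
            """L1-optimal nonnegative aperture weights: minimize sum |D w - d|
            with variables [w (n apertures), t (slacks, one per voxel)].
            D[v, a] is the dose to voxel v from aperture a at unit weight."""
            m, n = D.shape
            c = np.r_[np.zeros(n), np.ones(m)]       # minimize sum of slacks
            #  D w - d <= t   and   -(D w - d) <= t
            A_ub = np.r_[np.c_[D, -np.eye(m)], np.c_[-D, -np.eye(m)]]
            b_ub = np.r_[d, -d]
            res = linprog(c, A_ub=A_ub, b_ub=b_ub)   # default bounds: all >= 0
            return res.x[:n]

        D = np.array([[1.0, 0.2], [0.3, 1.0], [0.5, 0.5]])  # 3 voxels, 2 apertures
        print(aperture_weights(D, d=np.array([2.0, 2.0, 2.0])))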

  19. A ripple-spreading genetic algorithm for the aircraft sequencing problem.

    Science.gov (United States)

    Hu, Xiao-Bing; Di Paolo, Ezequiel A

    2011-01-01

    When genetic algorithms (GAs) are applied to combinatorial problems, permutation representations are usually adopted. As a result, such GAs are often confronted with feasibility and memory-efficiency problems. With the aircraft sequencing problem (ASP) as a study case, this paper reports on a novel binary-representation-based GA scheme for combinatorial problems. Unlike existing GAs for the ASP, which typically use permutation representations based on aircraft landing order, the new GA introduces a novel ripple-spreading model which transforms the original landing-order-based ASP solutions into value-based ones. In the new scheme, arriving aircraft are projected as points into an artificial space. A deterministic method inspired by the natural phenomenon of ripple-spreading on liquid surfaces is developed, which uses a few parameters as input to connect points on this space to form a landing sequence. A traditional GA, free of feasibility and memory-efficiency problems, can then be used to evolve the ripple-spreading related parameters in order to find an optimal sequence. Since the ripple-spreading model is the centerpiece of the new algorithm, it is called the ripple-spreading GA (RSGA). The advantages of the proposed RSGA are illustrated by extensive comparative studies for the case of the ASP.
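
    A minimal sketch of the ripple-spreading decoding: aircraft are points in an artificial space, and a small set of real-valued parameters deterministically unfolds into a landing sequence, which is what the GA evolves instead of permutations. The per-aircraft ripple speeds are an illustrative assumption; the paper's exact parameterization may differ:

        import numpy as np

        def ripple_sequence(points, epicenter, speeds):
            """Decode a landing sequence from ripple-spreading parameters:
            a ripple starts at `epicenter` and reaches aircraft i at time
            distance_i / speeds[i]; arrival order defines the sequence."""
            d = np.linalg.norm(points - epicenter, axis=1)
            return np.argsort(d / speeds)

        # Four aircraft projected into an artificial 2-D space (toy values).
        pts = np.array([[1.0, 2.0], [3.0, 1.0], [0.5, 0.5], [2.0, 3.0]])
        order = ripple_sequence(pts, epicenter=np.array([0.0, 0.0]),
                                speeds=np.array([1.0, 1.0, 1.2, 0.9]))
        print(order)   # indices of aircraft in landing order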

  20. Solar Luminosity on the Main Sequence, Standard Model and Variations

    Science.gov (United States)

    Ayukov, S. V.; Baturin, V. A.; Gorshkov, A. B.; Oreshina, A. V.

    2017-05-01

    According to the Standard Solar Model, our Sun became a Main Sequence star 4.6 Gyr ago. At that time the solar luminosity was 30% lower than its current value. This conclusion is based on the assumption that the Sun is fueled by thermonuclear reactions. If Earth's albedo and emissivity in the infrared were unchanged during Earth's history, the oceans would have been frozen 2.3 Gyr ago. This contradicts geological data: there was liquid water on Earth 3.6-3.8 Gyr ago. This problem is known as the Faint Young Sun Paradox. We analyze the luminosity change in standard solar evolution theory. The increase of mean molecular weight in the central part of the Sun, due to the conversion of hydrogen to helium, leads to a gradual increase of luminosity with time on the Main Sequence. We also consider several exotic models: a fully mixed Sun; a drastic change of the pp reaction rate; and a Sun consisting of hydrogen and helium only. Solar neutrino observations, however, exclude most non-standard solar models.

  1. Stochastic weather inputs for improved urban water demand forecasting: application of nonlinear input variable selection and machine learning methods

    Science.gov (United States)

    Quilty, J.; Adamowski, J. F.

    2015-12-01

    Urban water supply systems are often stressed during periods of seasonal outdoor water use, as climate-related water demands are variable in nature, making it difficult to optimize the operation of the water supply system. Urban water demand (UWD) forecasts that fail to include meteorological conditions as inputs to the forecast model may perform poorly, as they cannot account for the increase or decrease in demand related to those conditions. Meteorological records stochastically simulated into the future can be used as inputs to data-driven UWD forecasts, generally resulting in improved forecast accuracy. This study aims to produce data-driven UWD forecasts for two different Canadian water utilities (Montreal and Victoria) using machine learning methods, by first selecting historical UWD and meteorological records derived from a stochastic weather generator using nonlinear input variable selection. The nonlinear input variable selection methods considered in this work are derived from the concept of conditional mutual information, a nonlinear dependency measure based on (multivariate) probability density functions that accounts for relevance, conditional relevance, and redundancy in a potential set of input variables. The results of our study indicate that stochastic weather inputs can improve UWD forecast accuracy for the two sites considered in this work. Nonlinear input variable selection is suggested as a means to identify which meteorological conditions should be utilized in the forecast.
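
    A minimal sketch of the input-variable-selection step, using plain mutual information from scikit-learn as a simpler stand-in for the conditional-mutual-information criterion described above (all data are synthetic and the variable names illustrative):

        import numpy as np
        from sklearn.feature_selection import mutual_info_regression

        # Toy candidate inputs for an urban water demand forecast: lagged
        # demand, temperature and precipitation (synthetic data only).
        rng = np.random.default_rng(0)
        temp = rng.normal(20, 5, 365)
        precip = rng.exponential(2, 365)
        lag1 = rng.normal(100, 10, 365)
        demand = 0.6 * lag1 + 2.0 * temp - 1.5 * precip + rng.normal(0, 3, 365)

        X = np.c_[lag1, temp, precip]
        mi = mutual_info_regression(X, demand, random_state=0)
        for name, score in zip(["lag-1 demand", "temperature", "precip"], mi):
            print(f"{name}: {score:.3f}")   # keep the highest-MI inputs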

  2. Simulation model structure numerically robust to changes in magnitude and combination of input and output variables

    DEFF Research Database (Denmark)

    Rasmussen, Bjarne D.; Jakobsen, Arne

    1999-01-01

    Mathematical models of refrigeration systems are often based on a coupling of component models forming a “closed loop” type of system model. In these models the coupling structure of the component models represents the actual flow path of refrigerant in the system. Very often numerical...... instabilities prevent the practical use of such a system model for more than one input/output combination and for other magnitudes of refrigerating capacities.A higher numerical robustness of system models can be achieved by making a model for the refrigeration cycle the core of the system model and by using...... variables with narrow definition intervals for the exchange of information between the cycle model and the component models.The advantages of the cycle-oriented method are illustrated by an example showing the refrigeration cycle similarities between two very different refrigeration systems....

  3. A Hierarchical multi-input and output Bi-GRU Model for Sentiment Analysis on Customer Reviews

    Science.gov (United States)

    Zhang, Liujie; Zhou, Yanquan; Duan, Xiuyu; Chen, Ruiqi

    2018-03-01

    Multi-label sentiment classification of customer reviews is a challenging practical task in Natural Language Processing. In this paper, we propose a hierarchical multi-input and output model based on a bi-directional recurrent neural network, which considers both the semantic and the lexical information of emotional expression. Our model applies two independent Bi-GRU layers to generate part-of-speech and sentence representations. The lexical information is then incorporated via attention over the softmax-activated output of the part-of-speech representation. In addition, we combine the probabilities of auxiliary labels as features with the hidden layer to capture crucial correlations between output labels. The experimental results show that our model is computationally efficient and achieves substantial improvements on a customer reviews dataset.
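
    A minimal sketch of the architecture as we read the abstract: two independent Bi-GRU encoders (words and part-of-speech tags), attention derived from the POS branch over the word branch, and auxiliary-label probabilities concatenated into the final representation. Layer sizes and the exact fusion scheme are our assumptions:

```python
import torch
import torch.nn as nn

class HierBiGRU(nn.Module):
    def __init__(self, vocab, pos_vocab, n_labels, n_aux, d=64):
        super().__init__()
        self.word_emb = nn.Embedding(vocab, d)
        self.pos_emb = nn.Embedding(pos_vocab, d)
        self.word_gru = nn.GRU(d, d, batch_first=True, bidirectional=True)
        self.pos_gru = nn.GRU(d, d, batch_first=True, bidirectional=True)
        self.aux_head = nn.Linear(2 * d, n_aux)      # auxiliary-label logits
        self.out_head = nn.Linear(2 * d + n_aux, n_labels)

    def forward(self, words, pos):
        hw, _ = self.word_gru(self.word_emb(words))  # (B, T, 2d) word branch
        hp, _ = self.pos_gru(self.pos_emb(pos))      # (B, T, 2d) POS branch
        # attention weights from a softmax over the POS representation
        attn = torch.softmax(hp.mean(dim=2), dim=1)  # (B, T)
        sent = (attn.unsqueeze(2) * hw).sum(dim=1)   # (B, 2d) sentence vector
        aux = torch.softmax(self.aux_head(sent), dim=1)
        # auxiliary-label probabilities joined with the hidden representation
        return self.out_head(torch.cat([sent, aux], dim=1)), aux

model = HierBiGRU(vocab=1000, pos_vocab=50, n_labels=6, n_aux=4)
logits, aux = model(torch.randint(0, 1000, (8, 20)), torch.randint(0, 50, (8, 20)))
print(logits.shape)   # torch.Size([8, 6])
```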

  4. Automated Testing with Targeted Event Sequence Generation

    DEFF Research Database (Denmark)

    Jensen, Casper Svenning; Prasad, Mukul R.; Møller, Anders

    2013-01-01

    Automated software testing aims to detect errors by producing test inputs that cover as much of the application source code as possible. Applications for mobile devices are typically event-driven, which raises the challenge of automatically producing event sequences that result in high coverage...

  5. Finding and Improving the Key-Frames of Long Video Sequences for Face Recognition

    DEFF Research Database (Denmark)

    Nasrollahi, Kamal; Moeslund, Thomas B.

    2010-01-01

    Face recognition systems are very sensitive to the quality and resolution of their input face images. This makes such systems unreliable when working with long surveillance video sequences without employing some selection and enhancement algorithms. On the other hand, processing all the frames...... of such video sequences by any enhancement or even face recognition algorithm is demanding. Thus, there is a need for a mechanism to summarize the input video sequence to a set of key-frames and then applying an enhancement algorithm to this subset. This paper presents a system doing exactly this. The system...... uses face quality assessment to select the key-frames and a hybrid super-resolution to enhance the face image quality. The suggested system that employs a linear associator face recognizer to evaluate the enhanced results has been tested on real surveillance video sequences and the experimental results...

  6. An Imaging And Graphics Workstation For Image Sequence Analysis

    Science.gov (United States)

    Mostafavi, Hassan

    1990-01-01

    This paper describes an application-specific engineering workstation designed and developed to analyze imagery sequences from a variety of sources. The system combines the software and hardware environment of the modern graphic-oriented workstations with the digital image acquisition, processing and display techniques. The objective is to achieve automation and high throughput for many data reduction tasks involving metric studies of image sequences. The applications of such an automated data reduction tool include analysis of the trajectory and attitude of aircraft, missile, stores and other flying objects in various flight regimes including launch and separation as well as regular flight maneuvers. The workstation can also be used in an on-line or off-line mode to study three-dimensional motion of aircraft models in simulated flight conditions such as wind tunnels. The system's key features are: 1) Acquisition and storage of image sequences by digitizing real-time video or frames from a film strip; 2) computer-controlled movie loop playback, slow motion and freeze frame display combined with digital image sharpening, noise reduction, contrast enhancement and interactive image magnification; 3) multiple leading edge tracking in addition to object centroids at up to 60 fields per second from both live input video or a stored image sequence; 4) automatic and manual field-of-view and spatial calibration; 5) image sequence data base generation and management, including the measurement data products; 6) off-line analysis software for trajectory plotting and statistical analysis; 7) model-based estimation and tracking of object attitude angles; and 8) interface to a variety of video players and film transport sub-systems.

  7. FLUTAN 2.0. Input specifications

    International Nuclear Information System (INIS)

    Willerding, G.; Baumann, W.

    1996-05-01

    FLUTAN is a highly vectorized computer code for 3D fluid-dynamic and thermal-hydraulic analyses in Cartesian or cylinder coordinates. It is related to the family of COMMIX codes originally developed at Argonne National Laboratory, USA, and particularly to COMMIX-1A and COMMIX-1B, which were made available to FZK in the frame of cooperation contracts within the fast reactor safety field. FLUTAN 2.0 is an improved version of the FLUTAN code released in 1992. It offers some additional innovations, e.g. the QUICK-LECUSSO-FRAM techniques for reducing numerical diffusion in the k-ε turbulence model equations; a more sophisticated wall model for specifying a mass flow outside the surface walls together with its flow path and its associated inlet and outlet flow temperatures; and a revised and upgraded pressure boundary condition to fully include the outlet cells in the solution process of the conservation equations. Last but not least, a so-called visualization option based on VISART standards has been provided. This report contains detailed input instructions, presents formulations of the various model options, and explains how to use the code by means of comprehensive sample input. (orig.) [de

  8. Bayesian clustering of DNA sequences using Markov chains and a stochastic partition model.

    Science.gov (United States)

    Jääskinen, Väinö; Parkkinen, Ville; Cheng, Lu; Corander, Jukka

    2014-02-01

    In many biological applications it is necessary to cluster DNA sequences into groups that represent underlying organismal units, such as named species or genera. In metagenomics this grouping needs typically to be achieved on the basis of relatively short sequences which contain different types of errors, making the use of a statistical modeling approach desirable. Here we introduce a novel method for this purpose by developing a stochastic partition model that clusters Markov chains of a given order. The model is based on a Dirichlet process prior and we use conjugate priors for the Markov chain parameters which enables an analytical expression for comparing the marginal likelihoods of any two partitions. To find a good candidate for the posterior mode in the partition space, we use a hybrid computational approach which combines the EM-algorithm with a greedy search. This is demonstrated to be faster and yield highly accurate results compared to earlier suggested clustering methods for the metagenomics application. Our model is fairly generic and could also be used for clustering of other types of sequence data for which Markov chains provide a reasonable way to compress information, as illustrated by experiments on shotgun sequence type data from an Escherichia coli strain.
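
    A minimal sketch of the conjugacy the method relies on: with Dirichlet priors on the transition rows of an order-1 Markov chain over the DNA alphabet, the marginal likelihood of a cluster is available in closed form, so any two partitions can be compared analytically. The pseudo-count value and the toy sequences are assumptions:

```python
import numpy as np
from itertools import product
from scipy.special import gammaln

ALPHABET = "ACGT"

def transition_counts(seqs, order=1):
    """Counts n(context -> next base) pooled over all sequences in a cluster."""
    counts = {ctx: np.zeros(4)
              for ctx in map("".join, product(ALPHABET, repeat=order))}
    for s in seqs:
        for i in range(order, len(s)):
            ctx, nxt = s[i - order:i], s[i]
            if ctx in counts and nxt in ALPHABET:
                counts[ctx][ALPHABET.index(nxt)] += 1
    return counts

def log_marginal(seqs, alpha=1.0, order=1):
    """log p(sequences | one cluster) with Dirichlet(alpha, ..., alpha) rows."""
    lm = 0.0
    for n in transition_counts(seqs, order).values():
        lm += gammaln(4 * alpha) - gammaln(4 * alpha + n.sum())
        lm += (gammaln(alpha + n) - gammaln(alpha)).sum()
    return lm

# comparing two partitions of four toy sequences is a subtraction away
seqs = ["ACGTACGTAC", "ACGTACGAAC", "GGGTTTGGGT", "GGTTTTGGTT"]
together = log_marginal(seqs)
split = log_marginal(seqs[:2]) + log_marginal(seqs[2:])
print("log Bayes factor (split vs. together):", round(split - together, 2))
```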

  9. Modeling Soil Carbon Dynamics in Northern Forests: Effects of Spatial and Temporal Aggregation of Climatic Input Data.

    Science.gov (United States)

    Dalsgaard, Lise; Astrup, Rasmus; Antón-Fernández, Clara; Borgen, Signe Kynding; Breidenbach, Johannes; Lange, Holger; Lehtonen, Aleksi; Liski, Jari

    2016-01-01

    Boreal forests contain 30% of the global forest carbon with the majority residing in soils. While challenging to quantify, soil carbon changes comprise a significant, and potentially increasing, part of the terrestrial carbon cycle. Thus, their estimation is important when designing forest-based climate change mitigation strategies and soil carbon change estimates are required for the reporting of greenhouse gas emissions. Organic matter decomposition varies with climate in complex nonlinear ways, rendering data aggregation nontrivial. Here, we explored the effects of temporal and spatial aggregation of climatic and litter input data on regional estimates of soil organic carbon stocks and changes for upland forests. We used the soil carbon and decomposition model Yasso07 with input from the Norwegian National Forest Inventory (11275 plots, 1960-2012). Estimates were produced at three spatial and three temporal scales. Results showed that a national level average soil carbon stock estimate varied by 10% depending on the applied spatial and temporal scale of aggregation. Higher stocks were found when applying plot-level input compared to country-level input and when long-term climate was used as compared to annual or 5-year mean values. A national level estimate for soil carbon change was similar across spatial scales, but was considerably (60-70%) lower when applying annual or 5-year mean climate compared to long-term mean climate reflecting the recent climatic changes in Norway. This was particularly evident for the forest-dominated districts in the southeastern and central parts of Norway and in the far north. We concluded that the sensitivity of model estimates to spatial aggregation will depend on the region of interest. Further, that using long-term climate averages during periods with strong climatic trends results in large differences in soil carbon estimates. The largest differences in this study were observed in central and northern regions with strongly

  10. Attributing uncertainty in streamflow simulations due to variable inputs via the Quantile Flow Deviation metric

    Science.gov (United States)

    Shoaib, Syed Abu; Marshall, Lucy; Sharma, Ashish

    2018-06-01

    Every model to characterise a real world process is affected by uncertainty. Selecting a suitable model is a vital aspect of engineering planning and design. Observation or input errors make the prediction of modelled responses more uncertain. By way of a recently developed attribution metric, this study is aimed at developing a method for analysing variability in model inputs together with model structure variability to quantify their relative contributions in typical hydrological modelling applications. The Quantile Flow Deviation (QFD) metric is used to assess these alternate sources of uncertainty. The Australian Water Availability Project (AWAP) precipitation data for four different Australian catchments is used to analyse the impact of spatial rainfall variability on simulated streamflow variability via the QFD. The QFD metric attributes the variability in flow ensembles to uncertainty associated with the selection of a model structure and input time series. For the case study catchments, the relative contribution of input uncertainty due to rainfall is higher than that due to potential evapotranspiration, and overall input uncertainty is significant compared to model structure and parameter uncertainty. Overall, this study investigates the propagation of input uncertainty in a daily streamflow modelling scenario and demonstrates how input errors manifest across different streamflow magnitudes.
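
    An illustrative sketch only (the paper defines the QFD precisely; this quantile-wise deviation between an input-perturbed flow ensemble and a reference simulation is our stand-in to show the mechanics):

```python
import numpy as np

def quantile_flow_deviation(flow_ensemble, reference, quantiles=None):
    """Mean relative deviation of ensemble flow quantiles from a reference."""
    if quantiles is None:
        quantiles = np.linspace(0.05, 0.95, 19)
    ref_q = np.quantile(reference, quantiles)               # (n_q,)
    ens_q = np.quantile(flow_ensemble, quantiles, axis=1)   # (n_q, n_members)
    dev = np.abs(ens_q - ref_q[:, None]) / np.maximum(ref_q[:, None], 1e-9)
    return dev.mean(axis=1)      # deviation attributed to each flow quantile

rng = np.random.default_rng(42)
reference = rng.gamma(2.0, 5.0, size=3650)      # ten years of daily flows
# ensemble: the same flows under multiplicative rainfall-input perturbations
ensemble = reference * rng.lognormal(0.0, 0.2, size=(50, 3650))
qfd = quantile_flow_deviation(ensemble, reference)
print("total deviation across quantiles:", round(qfd.sum(), 3))
```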

  11. PCC/SRC, PCC and SRC Calculation from Multivariate Input for Sensitivity Analysis

    International Nuclear Information System (INIS)

    Iman, R.L.; Shortencarier, M.J.; Johnson, J.D.

    1995-01-01

    1 - Description of program or function: PCC/SRC is designed for use in conjunction with sensitivity analyses of complex computer models. PCC/SRC calculates the partial correlation coefficients (PCC) and the standardized regression coefficients (SRC) from the multivariate input to, and output from, a computer model. 2 - Method of solution: PCC/SRC calculates the coefficients on either the original observations or on the ranks of the original observations. These coefficients provide alternative measures of the relative contribution (importance) of each of the various input variables to the observed variations in output. Relationships between the coefficients and differences in their interpretations are identified. If the computer model output has an associated time or spatial history, PCC/SRC will generate a graph of the coefficients over time or space for each input-variable/output-variable combination of interest, indicating the importance of each input value over time or space. 3 - Restrictions on the complexity of the problem: maxima of 100 observations, 100 different time steps or intervals between successive dependent variable readings, 50 independent variables (model input), 20 dependent variables (model output), and 10 ordered triples specifying intervals between dependent variable readings.
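
    A minimal sketch of the two quantities the program computes, on raw values (the rank versions follow by replacing values with their ranks first); the toy data are an assumption:

```python
import numpy as np

def src(X, y):
    """Standardized regression coefficients: least squares on z-scored data."""
    Xs = (X - X.mean(0)) / X.std(0)
    ys = (y - y.mean()) / y.std()
    coef, *_ = np.linalg.lstsq(Xs, ys, rcond=None)
    return coef

def pcc(X, y):
    """Partial correlation of each input with y, given the other inputs."""
    n, p = X.shape
    out = np.empty(p)
    for j in range(p):
        A = np.column_stack([np.delete(X, j, axis=1), np.ones(n)])
        ry = y - A @ np.linalg.lstsq(A, y, rcond=None)[0]        # residual of y
        rx = X[:, j] - A @ np.linalg.lstsq(A, X[:, j], rcond=None)[0]
        out[j] = np.corrcoef(rx, ry)[0, 1]
    return out

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))
y = 3.0 * X[:, 0] + 1.0 * X[:, 1] + rng.normal(scale=0.5, size=200)
print("SRC:", src(X, y).round(2), "PCC:", pcc(X, y).round(2))
```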

  12. TART input manual

    International Nuclear Information System (INIS)

    Kimlinger, J.R.; Plechaty, E.F.

    1982-01-01

    The TART code is a Monte Carlo neutron/photon transport code that runs only on the CRAY computer. All the input cards for the TART code are listed, and definitions of all input parameters are given. The execution and limitations of the code are described, and the input for two sample problems is given.

  13. Effects of allochthonous inputs in the control of infectious disease of prey

    International Nuclear Information System (INIS)

    Sahoo, Banshidhar; Poria, Swarup

    2015-01-01

    Highlights: •An infected predator–prey model with allochthonous inputs is proposed. •Stability and persistence conditions are derived. •Bifurcation is determined with respect to allochthonous inputs. •Results show that the system cannot be disease-free without allochthonous inputs. •The Hopf bifurcation and its continuation are analysed numerically. -- Abstract: Allochthonous inputs are important sources of productivity in many food webs, and their influence on food chain models demands further investigation. In this paper, assuming the existence of allochthonous inputs for the intermediate predator, a food chain model is formulated with disease in the prey. The stability and persistence conditions of the equilibrium points are determined. An extinction criterion for the infected prey population is obtained. It is shown that a suitable amount of allochthonous inputs to the intermediate predator can control infectious disease of the prey population, provided the initial intermediate predator population is above a critical value. This critical intermediate population size increases monotonically with the infection rate. It is also shown that control of infectious disease of the prey is possible in some cases of seasonally varying contact rate. Dynamical behaviours of the model are investigated numerically through one- and two-parameter bifurcation analysis using the MATCONT 2.5.1 package. The occurrence of Hopf bifurcations and their continuation curves is noted with the variation of the infection rate and allochthonous food availability. The continuation curves of the limit point cycle and Neimark-Sacker bifurcation are drawn by varying the rate of infection and allochthonous inputs. This study introduces a novel natural non-toxic method for controlling infectious disease of prey in a food chain model.

  14. State of the Art in Input Ground Motions for Seismic Fragility and Risk Assessment

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Jung Han; Choi, In Kil; Kim, Min Kyu [KAERI, Daejeon (Korea, Republic of)

    2016-05-15

    The purpose of a Seismic Probabilistic Safety Analysis (SPSA) is to determine the probability distribution of core damage due to the potential effects of earthquakes. The SPSA is performed in four steps: a seismic hazard analysis, a component fragility evaluation, a plant system and accident sequence analysis, and a consequence analysis. Ground motions vary widely in their spectral shapes, and the structural response and the seismic load applied to equipment are greatly influenced by the spectral shape of the input ground motion. Therefore the input ground motion needs to be determined under the same assumptions in the risk calculation. Several techniques for determining input ground motions have been developed; in this research, the methodologies for determining input ground motions for seismic risk assessment are reviewed and discussed. They have been developed to reduce the uncertainty in fragility curves and to remove the conservatism in risk values.

  15. Characterization, scaling, and partial representation of diffuse and discrete input junctions to CA3 hippocampus.

    Science.gov (United States)

    Ascarrunz, F G; Kisley, M A; Flach, K A; Hamilton, R W; MacGregor, R J

    1995-07-01

    This paper applies a general mathematical system for characterizing and scaling functional connectivity and information flow across the diffuse (EC) and discrete (DG) input junctions to the CA3 hippocampus. Both gross connectivity and coordinated multiunit informational firing patterns are quantitatively characterized in terms of 32 defining parameters interrelated by 17 equations, and then scaled down according to rules for uniformly proportional scaling and for partial representation. The diffuse EC-CA3 junction is shown to be uniformly scalable with realistic representation of both essential spatiotemporal cooperativity and coordinated firing patterns down to populations of a few hundred neurons. Scaling of the discrete DG-CA3 junction can be effected with a two-step process, which necessarily deviates from uniform proportionality but nonetheless produces a valuable and readily interpretable reduced model, also utilizing a few hundred neurons in the receiving population. Partial representation produces a reduced model of only a portion of the full network where each model neuron corresponds directly to a biological neuron. The mathematical analysis illustrated here shows that although omissions and distortions are inescapable in such an application, satisfactorily complete and accurate models the size of pattern modules are possible. Finally, the mathematical characterization of these junctions generates a theory which sees the DG as a definer of the fine structure of embedded traces in the hippocampus and entire coordinated patterns of sequences of 14-cell links in CA3 as triggered by the firing of sequences of individual neurons in DG.

  16. Multivariate sensitivity analysis to measure global contribution of input factors in dynamic models

    International Nuclear Information System (INIS)

    Lamboni, Matieyendou; Monod, Herve; Makowski, David

    2011-01-01

    Many dynamic models are used for risk assessment and decision support in ecology and crop science. Such models generate time-dependent model predictions, with time either discretised or continuous. Their global sensitivity analysis is usually applied separately on each time output, but Campbell et al. (2006) advocated global sensitivity analyses on the expansion of the dynamics in a well-chosen functional basis. This paper focuses on the particular case when principal components analysis is combined with analysis of variance. In addition to the indices associated with the principal components, generalised sensitivity indices are proposed to synthesize the influence of each parameter on the whole time series output. Index definitions are given when the uncertainty on the input factors is either discrete or continuous and when the dynamic model is either discrete or functional. A general estimation algorithm is proposed, based on classical methods of global sensitivity analysis. The method is applied to a dynamic wheat crop model with 13 uncertain parameters. Three methods of global sensitivity analysis are compared: the Sobol'-Saltelli method, the extended FAST method, and the fractional factorial design of resolution 6.
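
    A minimal sketch of the approach, under assumptions: a toy dynamic model run over a full factorial design, the time-series output expanded on principal components, first-order (ANOVA-style) indices computed per component score, and a generalised index formed by weighting with the variance each component carries:

```python
import numpy as np
from itertools import product

levels = np.linspace(0.5, 1.5, 5)
design = np.array(list(product(levels, repeat=3)))       # 3 factors, 125 runs
t = np.linspace(0, 1, 50)
Y = np.array([a * np.exp(-b * t) + c * t for a, b, c in design])  # toy dynamics

# expand the centred time series on its principal components
Yc = Y - Y.mean(0)
eigval, eigvec = np.linalg.eigh(np.cov(Yc, rowvar=False))
pcs = Yc @ eigvec[:, ::-1][:, :3]                        # top-3 PC scores

def first_order_index(scores, factor_col):
    """Var(E[score | X_j]) / Var(score) over the factorial design."""
    cond_means = [scores[design[:, factor_col] == lv].mean() for lv in levels]
    return np.var(cond_means) / scores.var()

# generalised index: PC-wise indices weighted by explained variance
weights = eigval[::-1][:3] / eigval[::-1][:3].sum()
for j in range(3):
    gsi = sum(w * first_order_index(pcs[:, k], j) for k, w in enumerate(weights))
    print(f"generalised first-order index, factor {j}: {gsi:.2f}")
```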

  17. Multivariate sensitivity analysis to measure global contribution of input factors in dynamic models

    Energy Technology Data Exchange (ETDEWEB)

    Lamboni, Matieyendou [INRA, Unite MIA (UR341), F78352 Jouy en Josas Cedex (France); Monod, Herve, E-mail: herve.monod@jouy.inra.f [INRA, Unite MIA (UR341), F78352 Jouy en Josas Cedex (France); Makowski, David [INRA, UMR Agronomie INRA/AgroParisTech (UMR 211), BP 01, F78850 Thiverval-Grignon (France)

    2011-04-15

    Many dynamic models are used for risk assessment and decision support in ecology and crop science. Such models generate time-dependent model predictions, with time either discretised or continuous. Their global sensitivity analysis is usually applied separately on each time output, but Campbell et al. (2006) advocated global sensitivity analyses on the expansion of the dynamics in a well-chosen functional basis. This paper focuses on the particular case when principal components analysis is combined with analysis of variance. In addition to the indices associated with the principal components, generalised sensitivity indices are proposed to synthesize the influence of each parameter on the whole time series output. Index definitions are given when the uncertainty on the input factors is either discrete or continuous and when the dynamic model is either discrete or functional. A general estimation algorithm is proposed, based on classical methods of global sensitivity analysis. The method is applied to a dynamic wheat crop model with 13 uncertain parameters. Three methods of global sensitivity analysis are compared: the Sobol'-Saltelli method, the extended FAST method, and the fractional factorial design of resolution 6.

  18. Influence of Road Excitation and Steering Wheel Input on Vehicle System Dynamic Responses

    Directory of Open Access Journals (Sweden)

    Zhen-Feng Wang

    2017-06-01

    Considering the importance of increasing driving safety, the study of safety is a popular and critical topic of research in the vehicle industry. Vehicle roll behavior with sudden steering input is a main source of untripped rollover. However, previous research has seldom considered road excitation and its coupled effect on vehicle lateral response when focusing on lateral and vertical dynamics. To address this issue, a novel method was used to evaluate the effects of varying road level and steering wheel input on vehicle roll behavior. A 9-degree-of-freedom (9-DOF) full-car nonlinear roll model including vertical and lateral dynamics was developed to study vehicle roll dynamics with and without road excitation. Based on a 6-DOF half-car roll model and the 9-DOF full-car nonlinear model, the relationship between three-dimensional (3-D) road excitation and various steering wheel inputs on vehicle roll performance was studied. Finally, an E-Class (SUV-level) car model in CARSIM® was used as a benchmark, with and without road input conditions. Both half-car and full-car models were analyzed under steering wheel inputs of 5°, 10° and 15°. Simulation results showed that the half-car model considering road input had a maximum accuracy of 65%, whereas the full-car model had a minimum accuracy of 85% under the same scenario, significantly higher than the half-car model.

  19. Physical-mathematical model for cybernetic description of the human organs with trace element concentrations as input variables

    International Nuclear Information System (INIS)

    Mihai, Maria; Popescu, I.V.

    2003-01-01

    In this paper we report a physical-mathematical model for studying human organs and fluids based on cybernetic principles. The input variables represent the trace elements, which are determined by atomic and nuclear methods of elemental analysis. We have determined the health limits between which the organs might function. (authors)

  20. Activity and function recognition for moving and static objects in urban environments from wide-area persistent surveillance inputs

    Science.gov (United States)

    Levchuk, Georgiy; Bobick, Aaron; Jones, Eric

    2010-04-01

    In this paper, we describe results from experimental analysis of a model designed to recognize activities and functions of moving and static objects from low-resolution wide-area video inputs. Our model is based on representing the activities and functions using three variables: (i) time; (ii) space; and (iii) structures. The activity and function recognition is achieved by imposing lexical, syntactic, and semantic constraints on the lower-level event sequences. In the reported research, we have evaluated the utility and sensitivity of several algorithms derived from natural language processing and pattern recognition domains. We achieved high recognition accuracy for a wide range of activity and function types in the experiments using Electro-Optical (EO) imagery collected by Wide Area Airborne Surveillance (WAAS) platform.

  1. Modelling pesticide leaching under climate change: parameter vs. climate input uncertainty

    Directory of Open Access Journals (Sweden)

    K. Steffens

    2014-02-01

    Assessing climate change impacts on pesticide leaching requires careful consideration of different sources of uncertainty. We investigated the uncertainty related to climate scenario input and its importance relative to parameter uncertainty of the pesticide leaching model. The pesticide fate model MACRO was calibrated against a comprehensive one-year field data set for a well-structured clay soil in south-western Sweden. We obtained an ensemble of 56 acceptable parameter sets that represented the parameter uncertainty. Nine different climate model projections of the regional climate model RCA3 were available, driven by different combinations of global climate models (GCMs), greenhouse gas emission scenarios, and initial states of the GCM. The future time series of weather data used to drive the MACRO model were generated by scaling a reference climate data set (1970–1999) for an important agricultural production area in south-western Sweden, based on monthly change factors for 2070–2099. Thirty-year simulations were performed for different combinations of pesticide properties and application seasons. Our analysis showed that both the magnitude and the direction of the predicted change in pesticide leaching from present to future depended strongly on the particular climate scenario. The effect of parameter uncertainty was of major importance for simulating absolute pesticide losses, whereas the climate uncertainty was relatively more important for predictions of changes in pesticide losses from present to future. The climate uncertainty should be accounted for by applying an ensemble of different climate scenarios. The aggregated ensemble prediction based on both acceptable parameterizations and different climate scenarios has the potential to provide robust probabilistic estimates of future pesticide losses.

  2. Next-generation phylogeography: a targeted approach for multilocus sequencing of non-model organisms.

    Directory of Open Access Journals (Sweden)

    Jonathan B Puritz

    The field of phylogeography has long since realized the need and utility of incorporating nuclear DNA (nDNA) sequences into analyses. However, the use of nDNA sequence data at the population level has been hindered by technical laboratory difficulty, sequencing costs, and problematic analytical methods for dealing with genotypic sequence data, especially in non-model organisms. Here, we present a method utilizing the 454 GS-FLX Titanium pyrosequencing platform with the capacity to simultaneously sequence two species of sea star (Meridiastra calcar and Parvulastra exigua) at five different nDNA loci across 16 different populations of 20 individuals each per species. We compare results from 3 populations with traditional Sanger-sequencing-based methods, and demonstrate that this next-generation sequencing platform is more time- and cost-effective and more sensitive to rare variants than Sanger-based sequencing. A crucial advantage is that the high coverage of clonally amplified sequences simplifies haplotype determination, even in highly polymorphic species. This targeted next-generation approach can greatly increase the use of nDNA sequence loci in phylogeographic and population genetic studies by mitigating many of the time, cost, and analytical issues associated with highly polymorphic, diploid sequence markers.

  3. Inhibitory Gating of Basolateral Amygdala Inputs to the Prefrontal Cortex.

    Science.gov (United States)

    McGarry, Laura M; Carter, Adam G

    2016-09-07

    with nearby corticostriatal neurons. However, these inputs are even more powerful at parvalbumin- and somatostatin-expressing interneurons. BLA inputs thus activate two parallel inhibitory networks, whose contributions change during repetitive activity. Finally, connections from these interneurons are also more powerful at corticoamygdala neurons compared with corticostriatal neurons. Together, our results demonstrate how the BLA predominantly inhibits the PFC via a complex sequence involving multiple cell-type- and input-specific connections. Copyright © 2016 the authors.

  4. Optimal Input Design for Aircraft Parameter Estimation using Dynamic Programming Principles

    Science.gov (United States)

    Morelli, Eugene A.; Klein, Vladislav

    1990-01-01

    A new technique was developed for designing optimal flight test inputs for aircraft parameter estimation experiments. The principles of dynamic programming were used for the design in the time domain. This approach made it possible to include realistic practical constraints on the input and output variables. A description of the new approach is presented, followed by an example for a multiple input linear model describing the lateral dynamics of a fighter aircraft. The optimal input designs produced by the new technique demonstrated improved quality and expanded capability relative to the conventional multiple input design method.

  5. Plantagora: modeling whole genome sequencing and assembly of plant genomes.

    Directory of Open Access Journals (Sweden)

    Roger Barthelson

    BACKGROUND: Genomics studies are being revolutionized by the next generation sequencing technologies, which have made whole genome sequencing much more accessible to the average researcher. Whole genome sequencing with the new technologies is a developing art that, despite the large volumes of data that can be produced, may still fail to provide a clear and thorough map of a genome. The Plantagora project was conceived to address specifically the gap between having the technical tools for genome sequencing and knowing precisely the best way to use them. METHODOLOGY/PRINCIPAL FINDINGS: For Plantagora, a platform was created for generating simulated reads from several different plant genomes of different sizes. The resulting read files mimicked either 454 or Illumina reads, with varying paired-end spacing. Thousands of datasets of reads were created, most derived from our primary model genome, rice chromosome one. All reads were assembled with different software assemblers, including Newbler, Abyss, and SOAPdenovo, and the resulting assemblies were evaluated by an extensive battery of metrics chosen for these studies. The metrics included both statistics of the assembly sequences and fidelity-related measures derived by alignment of the assemblies to the original genome source for the reads. The results were presented on a website, which includes a data-graphing tool, all created to help the user rapidly compare the feasibility and effectiveness of different sequencing and assembly strategies prior to testing an approach in the lab. Some of our own conclusions regarding the different strategies were also recorded on the website. CONCLUSIONS/SIGNIFICANCE: Plantagora provides a substantial body of information for comparing different approaches to sequencing a plant genome, and some conclusions regarding some of the specific approaches. Plantagora also provides a platform of metrics and tools for studying the process of sequencing and assembly.

  6. High organic inputs explain shallow and deep SOC storage in a long-term agroforestry system - combining experimental and modeling approaches

    Science.gov (United States)

    Cardinael, Rémi; Guenet, Bertrand; Chevallier, Tiphaine; Dupraz, Christian; Cozzi, Thomas; Chenu, Claire

    2018-01-01

    Agroforestry is an increasingly popular farming system enabling agricultural diversification and providing several ecosystem services. In agroforestry systems, soil organic carbon (SOC) stocks are generally increased, but it is difficult to disentangle the different factors responsible for this storage. Organic carbon (OC) inputs to the soil may be larger, but SOC decomposition rates may be modified owing to microclimate, physical protection, or priming effect from roots, especially at depth. We used an 18-year-old silvoarable system associating hybrid walnut trees (Juglans regia × nigra) and durum wheat (Triticum turgidum L. subsp. durum) and an adjacent agricultural control plot to quantify all OC inputs to the soil - leaf litter, tree fine root senescence, crop residues, and tree row herbaceous vegetation - and measured SOC stocks down to 2 m of depth at varying distances from the trees. We then proposed a model that simulates SOC dynamics in agroforestry accounting for both the whole soil profile and the lateral spatial heterogeneity. The model was calibrated to the control plot only. Measured OC inputs to soil were increased by about 40 % (+ 1.11 t C ha-1 yr-1) down to 2 m of depth in the agroforestry plot compared to the control, resulting in an additional SOC stock of 6.3 t C ha-1 down to 1 m of depth. However, most of the SOC storage occurred in the first 30 cm of soil and in the tree rows. The model was strongly validated, properly describing the measured SOC stocks and distribution with depth in agroforestry tree rows and alleys. It showed that the increased inputs of fresh biomass to soil explained the observed additional SOC storage in the agroforestry plot. Moreover, only a priming effect variant of the model was able to capture the depth distribution of SOC stocks, suggesting the priming effect as a possible mechanism driving deep SOC dynamics. This result questions the potential of soils to store large amounts of carbon, especially at depth. Deep

  7. The effect of blood inflow and B(1)-field inhomogeneity on measurement of the arterial input function in axial 3D spoiled gradient echo dynamic contrast-enhanced MRI.

    Science.gov (United States)

    Roberts, Caleb; Little, Ross; Watson, Yvonne; Zhao, Sha; Buckley, David L; Parker, Geoff J M

    2011-01-01

    A major potential confound in axial 3D dynamic contrast-enhanced magnetic resonance imaging studies is the blood inflow effect; therefore, the choice of slice location for arterial input function measurement within the imaging volume must be considered carefully. The objective of this study was to use computer simulations, flow phantom, and in vivo studies to describe and understand the effect of blood inflow on the measurement of the arterial input function. All experiments were done at 1.5 T using a typical 3D dynamic contrast-enhanced magnetic resonance imaging sequence, and arterial input functions were extracted for each slice in the imaging volume. We simulated a set of arterial input functions based on the same imaging parameters and accounted for blood inflow and radiofrequency field inhomogeneities. Measured arterial input functions along the vessel length from both in vivo and the flow phantom agreed with simulated arterial input functions and show large overestimations in the arterial input function in the first 30 mm of the vessel, whereas arterial input functions measured more centrally achieve accurate contrast agent concentrations. Use of inflow-affected arterial input functions in tracer kinetic modeling shows potential errors of up to 80% in tissue microvascular parameters. These errors emphasize the importance of careful placement of the arterial input function definition location to avoid the effects of blood inflow. © 2010 Wiley-Liss, Inc.

  8. Quantum Mechanical Noise in a Michelson Interferometer with Nonclassical Inputs: Nonperturbative Treatment

    Science.gov (United States)

    King, Sun-Kun

    1996-01-01

    The variances of the quantum-mechanical noise in a two-input-port Michelson interferometer within the framework of the Loudon-Ni model were solved exactly in two general cases: (1) one coherent-state input and one squeezed-state input, and (2) two photon-number-state inputs. The low-intensity limit, exponentially decaying signals, and the noise due to mixing are discussed briefly.

  9. Decoding spatiotemporal spike sequences via the finite state automata dynamics of spiking neural networks

    International Nuclear Information System (INIS)

    Jin, Dezhe Z

    2008-01-01

    Temporally complex stimuli are encoded into spatiotemporal spike sequences of neurons in many sensory areas. Here, we describe how downstream neurons with dendritic bistable plateau potentials can be connected to decode such spike sequences. Driven by feedforward inputs from the sensory neurons and controlled by feedforward inhibition and lateral excitation, the neurons transit between UP and DOWN states of the membrane potentials. The neurons spike only in the UP states. A decoding neuron spikes at the end of an input to signal the recognition of specific spike sequences. The transition dynamics is equivalent to that of a finite state automaton. A connection rule for the networks guarantees that any finite state automaton can be mapped into the transition dynamics, demonstrating the equivalence in computational power between the networks and finite state automata. The decoding mechanism is capable of recognizing an arbitrary number of spatiotemporal spike sequences, and is insensitive to the variations of the spike timings in the sequences
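
    A minimal sketch of the equivalence described above, with the biophysics abstracted away: a finite state automaton whose state stands in for the UP/DOWN configuration of the decoding network and which "fires" when a target spike sequence completes (symbols in the target are assumed distinct; a fully general implementation would need KMP-style failure links):

```python
# Build a decoder that "fires" exactly when the target spike sequence has
# been observed in order. The automaton state is the length of the matched
# prefix; a reset models the network falling back to its DOWN configuration.

def make_sequence_decoder(target):
    def decoder(spike_train):
        state = 0                                  # matched-prefix length
        for neuron in spike_train:
            if neuron == target[state]:
                state += 1                         # one more UP-state link
                if state == len(target):
                    return True                    # decoding neuron spikes
            else:
                state = 1 if neuron == target[0] else 0   # reset / restart
        return False
    return decoder

recognise_abc = make_sequence_decoder(["A", "B", "C"])
print(recognise_abc(["D", "A", "B", "C"]))   # True: sequence completed
print(recognise_abc(["A", "C", "B"]))        # False: order violated
```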

  10. Data input guide for SWIFT II. The Sandia waste-isolation flow and transport model for fractured media, Release 4.84

    International Nuclear Information System (INIS)

    Reeves, M.; Ward, D.S.; Johns, N.D.; Cranwell, R.M.

    1986-04-01

    This report is one of three which describes the SWIFT II computer code. The code simulates flow and transport processes in geologic media which may be fractured. SWIFT II was developed for use in the analysis of deep geologic facilities for nuclear-waste disposal. This user's manual should permit the analyst to use the code effectively by facilitating the preparation of input data. A second companion document discusses the theory and implementation of the models employed by the SWIFT II code. A third document provides illustrative problems for instructional purposes. This report contains detailed descriptions of the input data along with an appendix of the input diagnostics. The use of auxiliary files, unit conversions, and program variable descriptors also are included in this document

  11. Monte Carlo simulation of a statistical mechanical model of multiple protein sequence alignment.

    Science.gov (United States)

    Kinjo, Akira R

    2017-01-01

    A grand canonical Monte Carlo (MC) algorithm is presented for studying the lattice gas model (LGM) of multiple protein sequence alignment, which coherently combines long-range interactions and variable-length insertions. MC simulations are used for both parameter optimization of the model and production runs to explore the sequence subspace around a given protein family. In this Note, I describe the details of the MC algorithm as well as some preliminary results of MC simulations with various temperatures and chemical potentials, and compare them with the mean-field approximation. The existence of a two-state transition in the sequence space is suggested for the SH3 domain family, and inappropriateness of the mean-field approximation for the LGM is demonstrated.

  12. Stabilization of (state, input)-disturbed CSTRs through the port-Hamiltonian systems approach

    OpenAIRE

    Lu, Yafei; Fang, Zhou; Gao, Chuanhou

    2017-01-01

    It is a universal phenomenon that the state and input of the continuous stirred tank reactor (CSTR) systems are both disturbed. This paper proposes a (state, input)-disturbed port-Hamiltonian framework that can be used to model and further designs a stochastic passivity based controller to asymptotically stabilize in probability the (state, input)-disturbed CSTR (sidCSTR) systems. The opposite entropy function and the availability function are selected as the Hamiltonian for the model and con...

  13. Enhanced Sensitivity to Rapid Input Fluctuations by Nonlinear Threshold Dynamics in Neocortical Pyramidal Neurons.

    Science.gov (United States)

    Mensi, Skander; Hagens, Olivier; Gerstner, Wulfram; Pozzorini, Christian

    2016-02-01

    The way in which single neurons transform input into output spike trains has fundamental consequences for network coding. Theories and modeling studies based on standard Integrate-and-Fire models implicitly assume that, in response to increasingly strong inputs, neurons modify their coding strategy by progressively reducing their selective sensitivity to rapid input fluctuations. Combining mathematical modeling with in vitro experiments, we demonstrate that, in L5 pyramidal neurons, the firing threshold dynamics adaptively adjust the effective timescale of somatic integration in order to preserve sensitivity to rapid signals over a broad range of input statistics. For that, a new Generalized Integrate-and-Fire model featuring nonlinear firing threshold dynamics and conductance-based adaptation is introduced that outperforms state-of-the-art neuron models in predicting the spiking activity of neurons responding to a variety of in vivo-like fluctuating currents. Our model allows for efficient parameter extraction and can be analytically mapped to a Generalized Linear Model in which both the input filter--describing somatic integration--and the spike-history filter--accounting for spike-frequency adaptation--dynamically adapt to the input statistics, as experimentally observed. Overall, our results provide new insights on the computational role of different biophysical processes known to underlie adaptive coding in single neurons and support previous theoretical findings indicating that the nonlinear dynamics of the firing threshold due to Na+-channel inactivation regulate the sensitivity to rapid input fluctuations.
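
    A minimal sketch of the mechanism highlighted above: an integrate-and-fire neuron whose threshold tracks the membrane potential (a simple linear stand-in for the nonlinear Na+-inactivation dynamics), so the effective integration timescale adapts to the input statistics. All parameter values are illustrative, not the paper's fitted values:

```python
import numpy as np

def gif_neuron(I, dt=0.1, tau_m=20.0, tau_th=15.0, v_rest=-70.0,
               th0=-50.0, coupling=0.6, r_m=10.0):
    """Leaky integrate-and-fire with a membrane-coupled moving threshold."""
    v, th = v_rest, th0
    spikes = []
    for step, i_ext in enumerate(I):
        v += dt / tau_m * (-(v - v_rest) + r_m * i_ext)
        # threshold relaxes towards th0 + coupling * (v - v_rest): a strong
        # mean depolarisation raises the threshold, preserving sensitivity
        # to the fast fluctuations riding on top of it
        th += dt / tau_th * (-(th - th0) + coupling * (v - v_rest))
        if v >= th:
            spikes.append(step * dt)
            v = v_rest
            th += 5.0           # spike-triggered threshold jump (adaptation)
    return spikes

rng = np.random.default_rng(0)
fluct = rng.normal(0.0, 0.8, size=10_000)
# the same fast fluctuations on a weak and on a doubled mean drive
weak, strong = gif_neuron(1.2 + fluct), gif_neuron(2.4 + fluct)
print(len(weak), len(strong))   # the rate grows sublinearly with the mean
```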

  14. Protein model discrimination using mutational sensitivity derived from deep sequencing.

    Science.gov (United States)

    Adkar, Bharat V; Tripathi, Arti; Sahoo, Anusmita; Bajaj, Kanika; Goswami, Devrishi; Chakrabarti, Purbani; Swarnkar, Mohit K; Gokhale, Rajesh S; Varadarajan, Raghavan

    2012-02-08

    A major bottleneck in protein structure prediction is the selection of correct models from a pool of decoys. Relative activities of ∼1,200 individual single-site mutants in a saturation library of the bacterial toxin CcdB were estimated by determining their relative populations using deep sequencing. This phenotypic information was used to define an empirical score for each residue (RankScore), which correlated with the residue depth, and identify active-site residues. Using these correlations, ∼98% of correct models of CcdB (RMSD ≤ 4Å) were identified from a large set of decoys. The model-discrimination methodology was further validated on eleven different monomeric proteins using simulated RankScore values. The methodology is also a rapid, accurate way to obtain relative activities of each mutant in a large pool and derive sequence-structure-function relationships without protein isolation or characterization. It can be applied to any system in which mutational effects can be monitored by a phenotypic readout. Copyright © 2012 Elsevier Ltd. All rights reserved.

  15. Construction sequence scale model: an aid to productivity and quality assurance

    International Nuclear Information System (INIS)

    Clothier, W.A. Sr.

    1978-01-01

    The natural tendencies of an engineering scale model to promote a high level of quality by error prevention during design and construction stages of a project are studied. A brief section on the basic history of engineering modeling is used to describe TVA's usage of the model. The basic design model is explored in an overview touching the highlights of that form of modeling. A detailed look at the construction sequence model, a relatively new form of model, is presented to demonstrate quality and productivity awareness

  16. Regulation of Wnt signaling by nociceptive input in animal models

    Directory of Open Access Journals (Sweden)

    Shi Yuqiang

    2012-06-01

    Full Text Available Abstract Background Central sensitization-associated synaptic plasticity in the spinal cord dorsal horn (SCDH critically contributes to the development of chronic pain, but understanding of the underlying molecular pathways is still incomplete. Emerging evidence suggests that Wnt signaling plays a crucial role in regulation of synaptic plasticity. Little is known about the potential function of the Wnt signaling cascades in chronic pain development. Results Fluorescent immunostaining results indicate that β-catenin, an essential protein in the canonical Wnt signaling pathway, is expressed in the superficial layers of the mouse SCDH with enrichment at synapses in lamina II. In addition, Wnt3a, a prototypic Wnt ligand that activates the canonical pathway, is also enriched in the superficial layers. Immunoblotting analysis indicates that both Wnt3a a β-catenin are up-regulated in the SCDH of various mouse pain models created by hind-paw injection of capsaicin, intrathecal (i.t. injection of HIV-gp120 protein or spinal nerve ligation (SNL. Furthermore, Wnt5a, a prototypic Wnt ligand for non-canonical pathways, and its receptor Ror2 are also up-regulated in the SCDH of these models. Conclusion Our results suggest that Wnt signaling pathways are regulated by nociceptive input. The activation of Wnt signaling may regulate the expression of spinal central sensitization during the development of acute and chronic pain.

  17. Investigations of the sensitivity of a coronal mass ejection model (ENLIL) to solar input parameters

    DEFF Research Database (Denmark)

    Falkenberg, Thea Vilstrup; Vršnak, B.; Taktakishvili, A.

    2010-01-01

    Understanding space weather is not only important for satellite operations and human exploration of the solar system but also to phenomena here on Earth that may potentially disturb and disrupt electrical signals. Some of the most violent space weather effects are caused by coronal mass ejections...... (CMEs), but in order to predict the caused effects, we need to be able to model their propagation from their origin in the solar corona to the point of interest, e.g., Earth. Many such models exist, but to understand the models in detail we must understand the primary input parameters. Here we...... investigate the parameter space of the ENLILv2.5b model using the CME event of 25 July 2004. ENLIL is a time‐dependent 3‐D MHD model that can simulate the propagation of cone‐shaped interplanetary coronal mass ejections (ICMEs) through the solar system. Excepting the cone parameters (radius, position...

  18. Markov model plus k-word distributions: a synergy that produces novel statistical measures for sequence comparison.

    Science.gov (United States)

    Dai, Qi; Yang, Yanchun; Wang, Tianming

    2008-10-15

    Many proposed statistical measures can efficiently compare biological sequences to further infer their structures, functions and evolutionary information. They are related in spirit because all the ideas for sequence comparison try to use the information in the k-word distributions, the Markov model, or both. Motivated by adding k-word distributions to the Markov model directly, we investigated two novel statistical measures for sequence comparison, called wre.k.r and S2.k.r. The proposed measures were tested by similarity search, evaluation on functionally related regulatory sequences, and phylogenetic analysis. This offers a systematic and quantitative experimental assessment of our measures. Moreover, we compared our results with those based on alignment or alignment-free methods. We grouped our experiments into two sets. The first one, performed via receiver operating characteristic (ROC) analysis, aims at assessing the intrinsic ability of our statistical measures to search for similar sequences in a database and to discriminate functionally related regulatory sequences from unrelated sequences. The second one aims at assessing how well our statistical measures serve for phylogenetic analysis. The experimental assessment demonstrates that our similarity measures, which incorporate k-word distributions into the Markov model, are more efficient.
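
    A minimal sketch of the general recipe of combining k-word counts with a Markov background: score each sequence by the deviation of its observed k-word frequencies from the maximal-entropy Markov expectation, then compare the deviation vectors. This composition is our illustration, not the paper's wre.k.r or S2.k.r definitions:

```python
import math
from itertools import product

ALPHABET = "ACGT"

def word_freqs(s, k):
    """Relative frequencies of all k-words (k >= 0) in sequence s."""
    freqs = {"".join(w): 0 for w in product(ALPHABET, repeat=k)}
    total = max(len(s) - k + 1, 1)
    for i in range(len(s) - k + 1):
        if s[i:i + k] in freqs:
            freqs[s[i:i + k]] += 1
    return {w: c / total for w, c in freqs.items()}

def markov_deviations(s, k):
    """Observed k-word frequency minus its Markov estimate (requires k >= 2):
    E[f(w)] = f(w[:-1]) * f(w[1:]) / f(w[1:-1])."""
    fk, fk1, fk2 = word_freqs(s, k), word_freqs(s, k - 1), word_freqs(s, k - 2)
    dev = {}
    for w, obs in fk.items():
        denom = fk2[w[1:-1]]
        expected = fk1[w[:-1]] * fk1[w[1:]] / denom if denom else 0.0
        dev[w] = obs - expected
    return dev

def similarity(s1, s2, k=3):
    """Cosine similarity of the two deviation vectors."""
    d1, d2 = markov_deviations(s1, k), markov_deviations(s2, k)
    words = sorted(d1)
    v1, v2 = [d1[w] for w in words], [d2[w] for w in words]
    num = sum(a * b for a, b in zip(v1, v2))
    den = math.sqrt(sum(a * a for a in v1) * sum(b * b for b in v2))
    return num / den if den else 0.0

print(similarity("ACGTACGTGACGTTACG", "ACGTTCGTGACGATACG"))
```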

  19. A Practical pedestrian approach to parsimonious regression with inaccurate inputs

    Directory of Open Access Journals (Sweden)

    Seppo Karrila

    2014-04-01

    A measurement result often dictates an interval containing the correct value. Interval data are also created by roundoff, truncation, and binning. We focus on such common interval uncertainty in data. Inaccuracy in model inputs is typically ignored during model fitting. We provide a practical approach for regression with inaccurate data: the mathematics is easy, and the linear programming formulations are simple to use even in a spreadsheet. This self-contained elementary presentation introduces interval linear systems and requires only basic knowledge of algebra. Feature selection is automatic, but can be controlled to find only the few most relevant inputs, and joint feature selection is enabled for multiple modeled outputs. With more features than cases, a novel connection to compressed sensing emerges: robustness against interval errors-in-variables implies model parsimony, and the input inaccuracies determine the regularization term. A small numerical example highlights counterintuitive results and a dramatic difference from total least squares.
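
    A minimal sketch of the kind of linear-programming formulation advocated above, assuming interval observations [lo, hi] on the response: find the coefficient vector of minimum L1 norm whose predictions fall inside every interval (splitting w = wp - wn keeps the problem linear); the data are illustrative:

```python
import numpy as np
from scipy.optimize import linprog

def interval_lasso(X, lo, hi):
    """Min-L1 coefficients whose predictions respect all [lo, hi] intervals."""
    n, p = X.shape
    c = np.ones(2 * p)                          # minimise sum(wp) + sum(wn)
    A = np.vstack([np.hstack([X, -X]),          #  X w <= hi
                   np.hstack([-X, X])])         # -X w <= -lo
    b = np.concatenate([hi, -lo])
    res = linprog(c, A_ub=A, b_ub=b, bounds=[(0, None)] * (2 * p))
    if not res.success:
        return None                             # intervals are inconsistent
    wp, wn = res.x[:p], res.x[p:]
    return wp - wn

rng = np.random.default_rng(3)
X = rng.normal(size=(60, 8))
y = 2.0 * X[:, 2]                               # only one relevant input
lo, hi = y - 0.3, y + 0.3                       # interval (binned) observations
print(np.round(interval_lasso(X, lo, hi), 2))   # weight near 2 on input 2 only
```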

  20. WORM: A general-purpose input deck specification language

    International Nuclear Information System (INIS)

    Jones, T.

    1999-01-01

    Using computer codes to perform criticality safety calculations has become common practice in the industry. The vast majority of these codes use simple text-based input decks to represent the geometry, materials, and other parameters that describe the problem. However, the data specified in input files are usually processed results themselves. For example, input decks tend to require the geometry specification in linear dimensions and materials in atom or weight fractions, while the parameter of interest might be mass or concentration. The calculations needed to convert from the item of interest to the required parameter in the input deck are usually performed separately and then incorporated into the input deck. This process of calculating, editing, and renaming files to perform a simple parameter study is tedious at best. In addition, most computer codes require dimensions to be specified in centimeters, while drawings or other materials used to create the input decks might be in other units. This also requires additional calculation or conversion prior to composition of the input deck. These additional calculations, while extremely simple, introduce a source of error in both the calculations and the transcriptions. To overcome these difficulties, WORM (Write One, Run Many) was created. It is an easy-to-use programming language to describe input decks and can be used with any computer code that uses standard text files for input. WORM is available, via the Internet, at worm.lanl.gov. A user's guide, tutorials, example models, and other WORM-related materials are also available at this Web site. Questions regarding WORM should be directed to worm@lanl.gov.

  1. Methodology for deriving hydrogeological input parameters for safety-analysis models - application to fractured crystalline rocks of Northern Switzerland

    International Nuclear Information System (INIS)

    Vomvoris, S.; Andrews, R.W.; Lanyon, G.W.; Voborny, O.; Wilson, W.

    1996-04-01

    Switzerland is one of many nations with nuclear power that is seeking to identify rock types and locations that would be suitable for the underground disposal of nuclear waste. A common challenge among these programs is to provide engineering designers and safety analysts with a reasonably representative hydrogeological input dataset that synthesizes the relevant information from direct field observations as well as inferences and model results derived from those observations. Needed are estimates of the volumetric flux through a volume of rock and the distribution of that flux into discrete pathways between the repository zones and the biosphere. These fluxes are not directly measurable but must be derived based on understandings of the range of plausible hydrogeologic conditions expected at the location investigated. The methodology described in this report utilizes conceptual and numerical models at various scales to derive the input dataset. The methodology incorporates an innovative approach, called the geometric approach, in which field observations and their associated uncertainty, together with a conceptual representation of those features that most significantly affect the groundwater flow regime, were rigorously applied to generate alternative possible realizations of hydrogeologic features in the geosphere. In this approach, the ranges in the output values directly reflect uncertainties in the input values. As a demonstration, the methodology is applied to the derivation of the hydrogeological dataset for the crystalline basement of Northern Switzerland. (author) figs., tabs., refs

  2. Input-output supervisor

    International Nuclear Information System (INIS)

    Dupuy, R.

    1970-01-01

    The input-output supervisor is the program which monitors the flow of information between core storage and the peripheral equipment of a computer. This work is composed of three parts: 1 - Study of a generalized input-output supervisor. With simple modifications it resembles most of the input-output supervisors currently running on computers. 2 - Application of this theory to a magnetic drum. 3 - Hardware requirements for time-sharing. (author) [fr

  3. Modelling groundwater discharge areas using only digital elevation models as input data

    International Nuclear Information System (INIS)

    Brydsten, Lars

    2006-10-01

    Advanced geohydrological models require data on topography, soil distribution in three dimensions, vegetation, land use, and bedrock fracture zones. To model present geohydrological conditions, these factors can be gathered with different techniques. If a future geohydrological condition is modelled in an area with positive shore displacement (say 5,000 or 10,000 years ahead), some of these factors can be difficult to measure. This could include the development of wetlands and the filling of lakes. If the goal of the model is to predict the distribution of groundwater recharge and discharge areas in the landscape, the most important factor is topography. The question is how much topography alone can explain the distribution of geohydrological objects in the landscape. A simplified description of the distribution of geohydrological objects in the landscape is that groundwater recharge areas occur at local elevation curvatures and discharge occurs in lakes, brooks, and low-situated slopes. Areas in-between these make up discharge areas during wet periods and recharge areas during dry periods. A model that could predict this pattern using only topography data needs to be able to predict high ridges and future lakes and brooks. This study uses GIS software with four different functions using digital elevation models as input data: geomorphometrical parameters to predict landscape ridges, basin fill for predicting lakes, flow accumulation for predicting future waterways, and topographical wetness indexes for dividing the in-between areas based on degree of wetness. An area between the village of and Forsmarks' Nuclear Power Plant has been used to calibrate the model. The area is within the SKB 10-metre Elevation Model (DEM) and has a high-resolution orienteering map for wetlands. Wetlands are assumed to be groundwater discharge areas. Five hundred points were randomly distributed across the wetlands. These are potential discharge points. Model parameters were chosen with the
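
    A minimal sketch of one of the four functions, the topographic wetness index TWI = ln(a / tan beta), with the specific catchment area a taken from a simplified D8 flow accumulation and the slope from finite differences; pit/flat handling and the diagonal-distance correction are omitted:

```python
import numpy as np

def twi(dem, cell=10.0):
    """Topographic wetness index on a regular-grid DEM (simplified D8)."""
    rows, cols = dem.shape
    acc = np.ones_like(dem, dtype=float) * cell * cell     # own-cell area
    # visit cells from high ground down so upslope area arrives first
    order = np.dstack(np.unravel_index(np.argsort(-dem, axis=None), dem.shape))[0]
    for r, c in order:
        nbrs = [(r + dr, c + dc) for dr in (-1, 0, 1) for dc in (-1, 0, 1)
                if (dr or dc) and 0 <= r + dr < rows and 0 <= c + dc < cols]
        low = min(nbrs, key=lambda n: dem[n])              # D8: lowest neighbour
        if dem[low] < dem[r, c]:
            acc[low] += acc[r, c]                          # pass area downslope
    gy, gx = np.gradient(dem, cell)
    slope = np.maximum(np.hypot(gx, gy), 1e-4)             # tan(beta), floored
    return np.log((acc / cell) / slope)                    # a = area / width

rng = np.random.default_rng(7)
dem = np.add.outer(np.linspace(50, 0, 40), np.zeros(40)) + rng.normal(0, 0.5, (40, 40))
wetness = twi(dem)
print("downslope cells are wetter:", wetness[-5:].mean() > wetness[:5].mean())
```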

  4. Distribution of return point memory states for systems with stochastic inputs

    International Nuclear Information System (INIS)

    Amann, A; Brokate, M; Rachinskii, D; Temnov, G

    2011-01-01

    We consider the long-term effect of stochastic inputs on the state of an open-loop system which exhibits the so-called return point memory. An example of such a system is the Preisach model; more generally, systems with a Preisach-type input-state relationship, such as spin-interaction models, are considered. We focus on the characterisation of the expected memory configuration after the system has been affected by the input for a sufficiently long period of time. In the case where the input is given by a discrete-time random walk process, or the Wiener process, simple closed-form expressions for the probability density of the vector of the main input extrema recorded by the memory state, and scaling laws for the dimension of this vector, are derived. If the input is given by a general continuous Markov process, we show that the distribution of previous memory elements can be obtained from a Markov chain scheme which is derived from the solution of an associated one-dimensional escape-type problem. Formulas for the transition probabilities defining this Markov chain scheme are presented. Moreover, explicit formulas for the conditional probability densities of previous main extrema are obtained for the Ornstein-Uhlenbeck input process. The analytical results are confirmed by numerical experiments.
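
    The memory state in question is the stack of dominant input extrema that survive the wiping-out (return-point) rule: a new maximum erases every stored extremum it reaches, and symmetrically for minima. A small self-contained sketch of that bookkeeping under a +/-1 random-walk input, written for illustration rather than taken from the paper:

```python
import random

def update(stack, x, is_max):
    """Wiping (return-point) rule: a new maximum erases every stored
    (min, max) pair whose maximum it reaches; symmetrically for minima."""
    while len(stack) >= 2 and ((is_max and x >= stack[-2][0]) or
                               (not is_max and x <= stack[-2][0])):
        stack.pop()
        stack.pop()
    stack.append((x, is_max))

def memory_after_walk(n_steps, seed=0):
    """Feed the turning points of a +/-1 random walk into the memory stack."""
    rng = random.Random(seed)
    stack, x = [], 0
    prev_step = rng.choice([-1, 1])
    for _ in range(n_steps):
        step = rng.choice([-1, 1])
        if step != prev_step:                  # direction reversal: new extremum
            update(stack, x, is_max=(prev_step > 0))
            prev_step = step
        x += step
    return stack                               # alternating dominant extrema

print(memory_after_walk(10000))
```

    The length of the surviving stack corresponds to the dimension of the vector of main extrema for which the abstract derives scaling laws.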

  5. MSuPDA: A Memory Efficient Algorithm for Sequence Alignment.

    Science.gov (United States)

    Khan, Mohammad Ibrahim; Kamal, Md Sarwar; Chowdhury, Linkon

    2016-03-01

    Space complexity is a million-dollar question in DNA sequence alignment. In this regard, memory saving under pushdown automata can help to reduce the space occupied in computer memory. In our proposed process, an anchor seed (AS) is selected from a given dataset of nucleotide base pairs for local sequence alignment. A quick splitting technique separates the AS from all the DNA genome segments. The selected AS is placed in the input unit of a pushdown automaton (PDA), while the whole DNA genome segments are placed on the PDA's stack. The AS from the input unit is matched against the DNA genome segments from the stack of the PDA. Matches, mismatches and indels of nucleotides are popped from the stack under the control unit of the pushdown automaton. Each POP operation on the stack frees the memory cell occupied by the nucleotide base pair.
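
    A toy rendering of this stack discipline may help: the genome segment sits on the stack, the anchor seed drives the comparisons, and every comparison pops, and thereby frees, one cell. This is an illustrative simplification, not the authors' algorithm:

```python
def pda_match(anchor_seed, genome_segment):
    """Stack-based matching sketch: the genome segment is pushed onto a stack
    and compared base-by-base against the anchor seed; every comparison pops
    (frees) one stack cell, mirroring the memory-saving idea above."""
    stack = list(reversed(genome_segment))   # top of stack = first base
    matches = mismatches = 0
    for base in anchor_seed:
        if not stack:
            break
        top = stack.pop()                    # POP frees this cell
        if top == base:
            matches += 1
        else:
            mismatches += 1
    return matches, mismatches

print(pda_match("ACGT", "ACGTTGCA"))   # -> (4, 0)
```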

  6. Jointness through fishing days input in a multi-species fishery

    DEFF Research Database (Denmark)

    Hansen, Lars Gårn; Jensen, Carsten Lynge

    (e.g. translog, normalized quadratic). In this paper we argue that jointness in the latter, essentially separable, fishery is caused by the allocation of fishing days input among harvested species. We developed a structural model of a multi-species fishery where the allocation of fishing days input causes production...

  7. Shaped input distributions for structural damage localization

    DEFF Research Database (Denmark)

    Ulriksen, Martin Dalgaard; Bernal, Dionisio; Damkilde, Lars

    2018-01-01

    localization method is cast that operates on the premise of shaping inputs—whose spatial distribution is fixed—by use of a model, such that these inputs, in one structural subdomain at a time, suppress certain steady-state vibration quantities (depending on the type of damage one seeks to interrogate for......). Accordingly, damage is localized when the vibration signature induced by the shaped inputs in the damaged state corresponds to that in the reference state, hereby implying that the approach does not point directly to damage. Instead, it operates with interrogation based on postulated damage patterns...

  8. Input/Output linearizing control of a nuclear reactor

    International Nuclear Information System (INIS)

    Perez C, V.

    1994-01-01

    The feedback linearization technique is an approach to nonlinear control design. The basic idea is to transform, by means of algebraic methods, the dynamics of a nonlinear control system into a fully or partially linear system. As a result of this linearization process, the well known basic linear control techniques can be used to obtain desired dynamic characteristics. When full linearization is achieved, the method is referred to as input-state linearization, whereas when partial linearization is achieved, the method is referred to as input-output linearization. We will deal with the latter. By means of input-output linearization, the dynamics of a nonlinear system can be decomposed into an external (input-output) part and an internal (unobservable) part. Since the external part consists of a linear relationship between the output of the plant and an auxiliary control input, it is easy to design that auxiliary control input so that the output behaves in a predetermined way. Since the internal dynamics of the system are known, we can check their behavior in order to ensure that the internal states remain bounded. The linearization method described here can be applied to systems with one input and one output, as well as to systems with multiple inputs and multiple outputs. Typical control problems such as stabilization and reference path tracking can be solved using this technique. In this work, the input/output linearization theory is presented, as well as the problem of getting the output variable to track desired trajectories. Further, the design of an input/output control system applied to the nonlinear model of a research nuclear reactor is included, along with the results obtained by computer simulation. (Author)
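
    For a relative-degree-one single-input/single-output system x' = f(x) + g(x)u with output y = h(x), the input-output linearizing law is u = (v - Lf_h) / Lg_h, after which y' = v and v can be chosen as a simple linear tracking law. The sketch below applies this to an illustrative toy system, not to the reactor model of the report:

```python
import numpy as np

# Toy relative-degree-one system:  x' = f(x) + g(x) u,  y = h(x).
def f(x):   return np.array([-x[0] + x[1] ** 2, -x[1]])
def g(x):   return np.array([0.0, 1.0 + 0.5 * x[0] ** 2])
def h(x):   return x[1]
def dh(x):  return np.array([0.0, 1.0])            # gradient of h

def control(x, y_d, y_d_dot, k=5.0):
    Lf_h = dh(x) @ f(x)                            # Lie derivative along f
    Lg_h = dh(x) @ g(x)                            # Lie derivative along g (nonzero here)
    v = y_d_dot - k * (h(x) - y_d)                 # linear tracking law: e' = -k e
    return (v - Lf_h) / Lg_h

# Forward-Euler simulation of tracking a sinusoidal reference.
x, dt = np.array([0.5, -0.5]), 1e-3
for i in range(5000):
    t = i * dt
    u = control(x, np.sin(t), np.cos(t))
    x = x + dt * (f(x) + g(x) * u)
print(h(x), np.sin(5.0))                           # output ~ reference at t = 5
```

    The internal state obeys x0' = -x0 + x1^2 and stays bounded while the output tracks the reference, which is exactly the boundedness check of the internal dynamics that the abstract calls for.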

  9. A connectionist model for the simulation of human spoken-word recognition

    NARCIS (Netherlands)

    Kuijk, D.J. van; Wittenburg, P.; Dijkstra, A.F.J.; Den Brinker, B.P.L.M.; Beek, P.J.; Brand, A.N.; Maarse, F.J.; Mulder, L.J.M.

    1999-01-01

    A new psycholinguistically motivated, neural-network-based model of human word recognition is presented. In contrast to earlier models it uses real speech as input. At the word layer, acoustical and temporal information is stored by sequences of connected sensory neurons that pass on sensor

  10. Modelling Implicit Communication in Multi-Agent Systems with Hybrid Input/Output Automata

    Directory of Open Access Journals (Sweden)

    Marta Capiluppi

    2012-10-01

    Full Text Available We propose an extension of Hybrid I/O Automata (HIOAs) to model agent systems and their implicit communication through perturbation of the environment, such as the localization of objects or the diffusion and detection of radio signals. To this end we specialize some variables of the HIOAs whose values are functions of both time and space. We call them world variables. They are treated similarly to the other variables of HIOAs, but they have the function of representing the interaction of each automaton with the surrounding environment; hence they can be output, input or internal variables. Since these special variables have the role of simulating implicit communication, their dynamics are specified both in time and space, because they model the perturbations induced by the agent on the environment, and the perturbations of the environment as perceived by the agent. Parallel composition of world variables is slightly different from parallel composition of the other variables, since their signals are summed. The theory is illustrated through a simple example of an agent system.

  11. Automated side-chain model building and sequence assignment by template matching.

    Science.gov (United States)

    Terwilliger, Thomas C

    2003-01-01

    An algorithm is described for automated building of side chains in an electron-density map once a main-chain model is built and for alignment of the protein sequence to the map. The procedure is based on a comparison of electron density at the expected side-chain positions with electron-density templates. The templates are constructed from average amino-acid side-chain densities in 574 refined protein structures. For each contiguous segment of main chain, a matrix with entries corresponding to an estimate of the probability that each of the 20 amino acids is located at each position of the main-chain model is obtained. The probability that this segment corresponds to each possible alignment with the sequence of the protein is estimated using a Bayesian approach and high-confidence matches are kept. Once side-chain identities are determined, the most probable rotamer for each side chain is built into the model. The automated procedure has been implemented in the RESOLVE software. Combined with automated main-chain model building, the procedure produces a preliminary model suitable for refinement and extension by an experienced crystallographer.
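
    The sequence-assignment step can be pictured as follows: each main-chain segment carries a (length x 20) matrix of per-position residue log-probabilities derived from the density templates, every offset into the protein sequence is scored by summing the matching entries, and a posterior over offsets follows from Bayes' rule. A toy sketch under a uniform prior, a simplification of the RESOLVE scheme rather than its implementation:

```python
import numpy as np

AA = "ACDEFGHIKLMNPQRSTVWY"

def alignment_posteriors(logp, sequence):
    """logp: (segment_length x 20) matrix of log P(residue type | density)
    per main-chain position; returns the posterior probability of each
    possible alignment offset, assuming a uniform prior over offsets."""
    idx = {a: i for i, a in enumerate(AA)}
    n, L = logp.shape[0], len(sequence)
    scores = np.array([
        sum(logp[j, idx[sequence[off + j]]] for j in range(n))
        for off in range(L - n + 1)
    ])
    w = np.exp(scores - scores.max())      # numerically stable softmax
    return w / w.sum()

# Toy example: a 3-residue segment whose density strongly suggests G, K, W.
logp = np.log(np.full((3, 20), 0.02))      # flat background (rows sum to 1)
for j, aa in enumerate("GKW"):
    logp[j, AA.index(aa)] = np.log(0.62)
post = alignment_posteriors(logp, "MATGKWQQAV")
print(post.argmax(), post.max())           # offset 3, posterior ~1
```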

  12. INPUT-OUTPUT STRUCTURE OF LINEAR-DIFFERENTIAL ALGEBRAIC SYSTEMS

    NARCIS (Netherlands)

    KUIJPER, M; SCHUMACHER, JM

    Systems of linear differential and algebraic equations occur in various ways, for instance, as a result of automated modeling procedures and in problems involving algebraic constraints, such as zero dynamics and exact model matching. Differential/algebraic systems may represent an input-output

  13. Subsidy or subtraction: how do terrestrial inputs influence consumer production in lakes?

    Science.gov (United States)

    Jones, Stuart E.; Solomon, Christopher T.; Weidel, Brian C.

    2012-01-01

    Cross-ecosystem fluxes are ubiquitous in food webs and are generally thought of as subsidies to consumer populations. Yet external or allochthonous inputs may in fact have complex and habitat-specific effects on recipient ecosystems. In lakes, terrestrial inputs of organic carbon contribute to basal resource availability, but can also reduce resource availability via shading effects on phytoplankton and periphyton. Terrestrial inputs might therefore either subsidise or subtract from consumer production. We developed and parameterised a simple model to explore this idea. The model estimates basal resource supply and consumer production given lake-level characteristics including total phosphorus (TP) and dissolved organic carbon (DOC) concentration, and consumer-level characteristics including resource preferences and growth efficiencies. Terrestrial inputs diminished primary production and total basal resource supply at the whole-lake level, except in ultra-oligotrophic systems. However, this system-level generalisation masked complex habitat-specific effects. In the pelagic zone, dissolved and particulate terrestrial carbon inputs were available to zooplankton via several food web pathways. Consequently, zooplankton production usually increased with terrestrial inputs, even as total whole-lake resource availability decreased. In contrast, in the benthic zone the dominant, dissolved portion of the terrestrial carbon load had predominantly negative effects on resource availability via shading of periphyton. Consequently, terrestrial inputs always decreased zoobenthic production except under extreme and unrealistic parameterisations of the model. Appreciating the complex and habitat-specific effects of allochthonous inputs may be essential for resolving the effects of cross-habitat fluxes on consumers in lakes and other food webs.

  14. Image sequence analysis workstation for multipoint motion analysis

    Science.gov (United States)

    Mostafavi, Hassan

    1990-08-01

    This paper describes an application-specific engineering workstation designed and developed to analyze the motion of objects from video sequences. The system combines the software and hardware environment of a modern graphics-oriented workstation with digital image acquisition, processing and display techniques. In addition to automation and increased throughput of data reduction tasks, the objective of the system is to provide less invasive methods of measurement by offering the ability to track objects that are more complex than reflective markers. Grey-level image processing and spatial/temporal adaptation of the processing parameters are used for locating and tracking more complex features of objects under uncontrolled lighting and background conditions. The applications of such an automated and noninvasive measurement tool include analysis of the trajectory and attitude of rigid bodies such as human limbs, robots, aircraft in flight, etc. The system's key features are: 1) acquisition and storage of image sequences by digitizing and storing real-time video; 2) computer-controlled movie loop playback, freeze-frame display, and digital image enhancement; 3) multiple leading-edge tracking in addition to object centroids at up to 60 fields per second from either live input video or a stored image sequence; 4) model-based estimation and tracking of the six degrees of freedom of a rigid body; 5) field-of-view and spatial calibration; 6) image sequence and measurement database management; and 7) offline analysis software for trajectory plotting and statistical analysis.

  15. From cultured to uncultured genome sequences: metagenomics and modeling microbial ecosystems.

    Science.gov (United States)

    Garza, Daniel R; Dutilh, Bas E

    2015-11-01

    Microorganisms and the viruses that infect them are the most numerous biological entities on Earth and enclose its greatest biodiversity and genetic reservoir. With strength in their numbers, these microscopic organisms are major players in the cycles of energy and matter that sustain all life. Scientists have only scratched the surface of this vast microbial world through culture-dependent methods. Recent developments in generating metagenomes, large random samples of nucleic acid sequences isolated directly from the environment, are providing comprehensive portraits of the composition, structure, and functioning of microbial communities. Moreover, advances in metagenomic analysis have created the possibility of obtaining complete or nearly complete genome sequences from uncultured microorganisms, providing important means to study their biology, ecology, and evolution. Here we review some of the recent developments in the field of metagenomics, focusing on the discovery of genetic novelty and on methods for obtaining uncultured genome sequences, including through the recycling of previously published datasets. Moreover we discuss how metagenomics has become a core scientific tool to characterize eco-evolutionary patterns of microbial ecosystems, thus allowing us to simultaneously discover new microbes and study their natural communities. We conclude by discussing general guidelines and challenges for modeling the interactions between uncultured microorganisms and viruses based on the information contained in their genome sequences. These models will significantly advance our understanding of the functioning of microbial ecosystems and the roles of microbes in the environment.

  16. Influence of deleting some of the inputs and outputs on efficiency status of units in DEA

    Directory of Open Access Journals (Sweden)

    Abbas ali Noora

    2013-06-01

    Full Text Available One of the important issues in data envelopment analysis (DEA) is sensitivity analysis. This study discusses deleting some of the inputs and outputs and investigates the influence of doing so on the efficiency status of Decision Making Units (DMUs). To this end, some models are presented for recognizing this influence on efficient DMUs. Model 2 (Model 3) in section 3 investigates the influence of deleting the i-th input (r-th output) on an efficient DMU. Thereafter these models are extended to the deletion of multiple inputs and outputs. Furthermore, a model is presented for recognizing the maximum number of inputs and/or outputs, from among specified inputs and outputs, that can be deleted while an efficient DMU preserves its efficiency. Finally, the presented models are applied to a set of DMUs and the results are reported.

  17. A single point of pressure approach as input for injury models with respect to complex blast loading conditions

    NARCIS (Netherlands)

    Teland, J.A.; Doormaal, J.C.A.M. van; Horst, M.J. van der; Svinsås, E.

    2010-01-01

    Blast injury models, like Axelsson and Stuhmiller, require four pressure signals as input. Those pressure signals must be acquired by a Blast Test Device (BTD) that has four pressure transducers placed in a horizontal plane at intervals of 90 degrees. This can be either in a physical test setup or

  18. A Flexible, Efficient Binomial Mixed Model for Identifying Differential DNA Methylation in Bisulfite Sequencing Data

    Science.gov (United States)

    Lea, Amanda J.

    2015-01-01

    Identifying sources of variation in DNA methylation levels is important for understanding gene regulation. Recently, bisulfite sequencing has become a popular tool for investigating DNA methylation levels. However, modeling bisulfite sequencing data is complicated by dramatic variation in coverage across sites and individual samples, and because of the computational challenges of controlling for genetic covariance in count data. To address these challenges, we present a binomial mixed model and an efficient, sampling-based algorithm (MACAU: Mixed model association for count data via data augmentation) for approximate parameter estimation and p-value computation. This framework allows us to simultaneously account for both the over-dispersed, count-based nature of bisulfite sequencing data, as well as genetic relatedness among individuals. Using simulations and two real data sets (whole genome bisulfite sequencing (WGBS) data from Arabidopsis thaliana and reduced representation bisulfite sequencing (RRBS) data from baboons), we show that our method provides well-calibrated test statistics in the presence of population structure. Further, it improves power to detect differentially methylated sites: in the RRBS data set, MACAU detected 1.6-fold more age-associated CpG sites than a beta-binomial model (the next best approach). Changes in these sites are consistent with known age-related shifts in DNA methylation levels, and are enriched near genes that are differentially expressed with age in the same population. Taken together, our results indicate that MACAU is an efficient, effective tool for analyzing bisulfite sequencing data, with particular salience to analyses of structured populations. MACAU is freely available at www.xzlab.org/software.html. PMID:26599596
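
    The over-dispersion problem that MACAU addresses can already be seen in the beta-binomial baseline the abstract compares against. The sketch below simulates methylation counts with uneven coverage and fits a beta-binomial by maximum likelihood; it illustrates the distributional issue only and is not MACAU itself:

```python
import numpy as np
from scipy.stats import betabinom
from scipy.optimize import minimize

# Simulate methylated counts y_i out of n_i total reads with site-to-site
# variation in the underlying methylation level (over-dispersion relative
# to a plain binomial), as is typical of bisulfite sequencing data.
rng = np.random.default_rng(0)
n = rng.integers(5, 60, size=500)          # uneven coverage across sites
p = rng.beta(8, 12, size=500)              # latent site-level methylation
y = rng.binomial(n, p)

def nll(params):
    a, b = np.exp(params)                  # enforce positivity via log scale
    return -betabinom.logpmf(y, n, a, b).sum()

fit = minimize(nll, x0=[0.0, 0.0], method="Nelder-Mead")
a_hat, b_hat = np.exp(fit.x)
print(a_hat, b_hat)    # recovers roughly (8, 12); mean a/(a+b) ~ 0.4
```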

  19. DendroBLAST: approximate phylogenetic trees in the absence of multiple sequence alignments.

    Science.gov (United States)

    Kelly, Steven; Maini, Philip K

    2013-01-01

    The rapidly growing availability of genome information has created considerable demand for both fast and accurate phylogenetic inference algorithms. We present a novel method called DendroBLAST for reconstructing phylogenetic dendrograms/trees from protein sequences using BLAST. This method differs from other methods by incorporating a simple model of sequence evolution to test the effect of introducing sequence changes on the reliability of the bipartitions in the inferred tree. Using realistic simulated sequence data we demonstrate that this method produces phylogenetic trees that are more accurate than other commonly-used distance based methods though not as accurate as maximum likelihood methods from good quality multiple sequence alignments. In addition to tests on simulated data, we use DendroBLAST to generate input trees for a supertree reconstruction of the phylogeny of the Archaea. This independent analysis produces an approximate phylogeny of the Archaea that has both high precision and recall when compared to previously published analysis of the same dataset using conventional methods. Taken together these results demonstrate that approximate phylogenetic trees can be produced in the absence of multiple sequence alignments, and we propose that these trees will provide a platform for improving and informing downstream bioinformatic analysis. A web implementation of the DendroBLAST method is freely available for use at http://www.dendroblast.com/.

  20. DendroBLAST: approximate phylogenetic trees in the absence of multiple sequence alignments.

    Directory of Open Access Journals (Sweden)

    Steven Kelly

    Full Text Available The rapidly growing availability of genome information has created considerable demand for both fast and accurate phylogenetic inference algorithms. We present a novel method called DendroBLAST for reconstructing phylogenetic dendrograms/trees from protein sequences using BLAST. This method differs from other methods by incorporating a simple model of sequence evolution to test the effect of introducing sequence changes on the reliability of the bipartitions in the inferred tree. Using realistic simulated sequence data we demonstrate that this method produces phylogenetic trees that are more accurate than other commonly-used distance based methods though not as accurate as maximum likelihood methods from good quality multiple sequence alignments. In addition to tests on simulated data, we use DendroBLAST to generate input trees for a supertree reconstruction of the phylogeny of the Archaea. This independent analysis produces an approximate phylogeny of the Archaea that has both high precision and recall when compared to previously published analysis of the same dataset using conventional methods. Taken together these results demonstrate that approximate phylogenetic trees can be produced in the absence of multiple sequence alignments, and we propose that these trees will provide a platform for improving and informing downstream bioinformatic analysis. A web implementation of the DendroBLAST method is freely available for use at http://www.dendroblast.com/.

  1. QDD: a user-friendly program to select microsatellite markers and design primers from large sequencing projects.

    Science.gov (United States)

    Meglécz, Emese; Costedoat, Caroline; Dubut, Vincent; Gilles, André; Malausa, Thibaut; Pech, Nicolas; Martin, Jean-François

    2010-02-01

    QDD is an open access program providing a user-friendly tool for microsatellite detection and primer design from large sets of DNA sequences. The program is designed to deal with all steps of treatment of raw sequences obtained from pyrosequencing of enriched DNA libraries, but it is also applicable to data obtained through other sequencing methods, using FASTA files as input. The following tasks are completed by QDD: tag sorting, adapter/vector removal, elimination of redundant sequences, detection of possible genomic multicopies (duplicated loci or transposable elements), stringent selection of target microsatellites and customizable primer design. It can treat up to one million sequences of a few hundred base pairs in the tag-sorting step, and up to 50,000 sequences in a single input file for the steps involving estimation of sequence similarity. QDD is freely available under the GPL licence for Windows and Linux from the following web site: http://www.univ-provence.fr/gsite/Local/egee/dir/meglecz/QDD.html. Supplementary data are available at Bioinformatics online.
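
    The core microsatellite-selection step can be approximated with a single regular expression over each FASTA record: find maximal perfect tandem repeats of short motifs. The following is a simplified stand-in for illustration; QDD's actual selection criteria are more stringent:

```python
import re

def find_microsatellites(seq, min_repeats=5):
    """Report maximal perfect tandem repeats (unit length 2-6 bp, at least
    `min_repeats` copies): (start, unit, copies) per locus. A simplified
    stand-in for the microsatellite-detection step described above."""
    pattern = re.compile(r"(([ACGT]{2,6}?)\2{%d,})" % (min_repeats - 1))
    return [(m.start(), m.group(2), len(m.group(1)) // len(m.group(2)))
            for m in pattern.finditer(seq)]

print(find_microsatellites("TTGACACACACACAGGTAAAGAGAGAGAGAGAC"))
# -> [(3, 'AC', 5), (19, 'AG', 6)] : an (AC)5 and an (AG)6 locus
```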

  2. Modelling noise in second generation sequencing forensic genetics STR data using a one-inflated (zero-truncated) negative binomial model

    DEFF Research Database (Denmark)

    Vilsen, Søren B.; Tvedebrink, Torben; Mogensen, Helle Smidt

    2015-01-01

    We present a model fitting the distribution of non-systematic errors in STR second generation sequencing, SGS, analysis. The model fits the distribution of non-systematic errors, i.e. the noise, using a one-inflated, zero-truncated, negative binomial model. The model is a two component model...
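
    The distributional form is straightforward to write down: truncate the zero class out of a negative binomial and then move extra probability mass onto the one class. A sketch of the pmf with illustrative parameter values, not the authors' fitted ones:

```python
from scipy.stats import nbinom

def oiztnb_pmf(k, r, p, pi1):
    """One-inflated, zero-truncated negative binomial pmf (defined for k >= 1).

    r, p  -- negative binomial parameters (scipy.stats.nbinom convention)
    pi1   -- extra probability mass placed on k = 1 (one-inflation)
    """
    trunc = nbinom.pmf(k, r, p) / (1.0 - nbinom.pmf(0, r, p))  # zero-truncated
    return pi1 * (k == 1) + (1.0 - pi1) * trunc

# Noise reads in STR SGS data are mostly singletons: mass at k = 1 dominates.
print(oiztnb_pmf(1, r=0.8, p=0.6, pi1=0.4),
      oiztnb_pmf(5, r=0.8, p=0.6, pi1=0.4))
```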

  3. Targeted assembly of short sequence reads.

    Directory of Open Access Journals (Sweden)

    René L Warren

    Full Text Available As next-generation sequencing (NGS) production continues to increase, analysis is becoming a significant bottleneck. However, in situations where information is required only for specific sequence variants, it is not necessary to assemble or align whole genome data sets in their entirety. Rather, NGS data sets can be mined for the presence of sequence variants of interest by localized assembly, which is a faster, easier, and more accurate approach. We present TASR, a streamlined assembler that interrogates very large NGS data sets for the presence of specific variants by only considering reads within the sequence space of input target sequences provided by the user. The NGS data set is searched for reads with an exact match to all possible short words within the target sequence, and these reads are then assembled stringently to generate a consensus of the target and flanking sequence. Typically, variants of a particular locus are provided as different target sequences, and the presence of the variant in the data set being interrogated is revealed by a successful assembly outcome. However, TASR can also be used to find unknown sequences that flank a given target. We demonstrate that TASR has utility in finding or confirming genomic mutations, polymorphisms, fusions and integration events. Targeted assembly is a powerful method for interrogating large data sets for the presence of sequence variants of interest. TASR is a fast, flexible and easy to use tool for targeted assembly.
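
    The recruitment step described above, keeping only reads that share an exact short word with the target, is easy to sketch. The fragment below illustrates the idea and is not TASR's implementation:

```python
def recruit_reads(target, reads, k=15):
    """Keep only reads sharing an exact k-word with the target sequence,
    the read-recruitment idea behind targeted assembly (a sketch, not TASR)."""
    words = {target[i:i + k] for i in range(len(target) - k + 1)}
    return [r for r in reads
            if any(r[i:i + k] in words for i in range(len(r) - k + 1))]

target = "ACGTACGTTAGCCGATAGGCTTAACG"
reads = ["TTAGCCGATAGGCTT",    # overlaps the target -> recruited
         "GGGGGGGGGGGGGGG"]    # unrelated -> discarded
print(recruit_reads(target, reads))
```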

  4. Linear and Non-linear Multi-Input Multi-Output Model Predictive Control of Continuous Stirred Tank Reactor

    Directory of Open Access Journals (Sweden)

    Muayad Al-Qaisy

    2015-02-01

    Full Text Available In this article, a multi-input multi-output (MIMO) linear model predictive controller (LMPC) based on a state space model and a nonlinear model predictive controller based on a neural network (NNMPC) are applied to a continuous stirred tank reactor (CSTR). The idea is to have a good control system that is able to give optimal performance, reject high load disturbances, and track set-point changes. In order to study the performance of the two model predictive controllers, a MIMO Proportional-Integral-Derivative (PID) control strategy is used as a benchmark. The LMPC, NNMPC, and PID strategies are used for controlling the residual concentration (CA) and reactor temperature (T). NNMPC shows superior performance over the LMPC and PID controllers, presenting a smaller overshoot and a shorter settling time.

  5. Method and apparatus for biological sequence comparison

    Science.gov (United States)

    Marr, T.G.; Chang, W.I.

    1997-12-23

    A method and apparatus are disclosed for comparing biological sequences from a known source of sequences with a subject (query) sequence. The apparatus takes as input a set of target similarity levels (such as evolutionary distances in units of PAM), and finds all fragments of known sequences that are similar to the subject sequence at each target similarity level and are long enough to be statistically significant. The device filters out fragments from the known sequences that are too short, or have a lower average similarity to the subject sequence than is required by each target similarity level. The subject sequence is then compared only to the remaining known sequences to find the best matches. The filtering member divides the subject sequence into overlapping blocks, each block being sufficiently large to contain a minimum-length alignment from a known sequence. For each block, the filter member compares the block with every possible short fragment in the known sequences and determines a best match for each comparison. The determined set of short fragment best matches for the block provides an upper threshold on alignment values. Regions of a certain length from the known sequences that have a mean alignment value upper threshold greater than a target unit score are concatenated to form a union. The current block is compared to the union and provides an indication of best local alignment with the subject sequence. 5 figs.

  6. Maximum-Entropy Models of Sequenced Immune Repertoires Predict Antigen-Antibody Affinity

    DEFF Research Database (Denmark)

    Asti, Lorenzo; Uguzzoni, Guido; Marcatili, Paolo

    2016-01-01

    The immune system has developed a number of distinct complex mechanisms to shape and control the antibody repertoire. One of these mechanisms, the affinity maturation process, works in an evolutionary-like fashion: after binding to a foreign molecule, the antibody-producing B-cells exhibit a high...... of an HIV-1 infected patient. The Pearson correlation coefficient between our scoring function and the IC50 neutralization titer measured on 30 different antibodies of known sequence is as high as 0.77 (p-value 10^-6), outperforming other sequence- and structure-based models....

  7. Stochastic modelling of daily rainfall sequences

    NARCIS (Netherlands)

    Buishand, T.A.

    1977-01-01

    Rainfall series of different climatic regions were analysed with the aim of generating daily rainfall sequences. A survey of the data is given in I, 1. When analysing daily rainfall sequences one must be aware of the following points:
    a. Seasonality. Because of seasonal variation
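
    Daily rainfall generators of the kind analysed here are commonly built from a first-order two-state (wet/dry) Markov chain for occurrence plus a skewed distribution, often a gamma, for wet-day amounts. A generic sketch with illustrative parameters, not Buishand's fitted values; seasonality would enter by letting the parameters vary by month:

```python
import random

def simulate_rainfall(n_days, p_wd=0.35, p_ww=0.65, shape=0.8, scale=6.0, seed=1):
    """Generic daily rainfall generator: wet/dry occurrence follows a
    first-order Markov chain (p_wd = P(wet|dry), p_ww = P(wet|wet)) and
    wet-day amounts (mm) are gamma distributed."""
    rng = random.Random(seed)
    wet, series = False, []
    for _ in range(n_days):
        wet = rng.random() < (p_ww if wet else p_wd)
        series.append(rng.gammavariate(shape, scale) if wet else 0.0)
    return series

rain = simulate_rainfall(365)
print(sum(1 for r in rain if r > 0), sum(rain))   # wet days and annual total
```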

  8. Frequency Preference Response to Oscillatory Inputs in Two-dimensional Neural Models: A Geometric Approach to Subthreshold Amplitude and Phase Resonance.

    Science.gov (United States)

    Rotstein, Horacio G

    2014-01-01

    We investigate the dynamic mechanisms of generation of subthreshold and phase resonance in two-dimensional linear and linearized biophysical (conductance-based) models, and we extend our analysis to account for the effect of simple, but not necessarily weak, types of nonlinearities. Subthreshold resonance refers to the ability of neurons to exhibit a peak in their voltage amplitude response to oscillatory input currents at a preferred non-zero (resonant) frequency. Phase-resonance refers to the ability of neurons to exhibit a zero-phase (or zero-phase-shift) response to oscillatory input currents at a non-zero (phase-resonant) frequency. We adapt the classical phase-plane analysis approach to account for the dynamic effects of oscillatory inputs and develop a tool, the envelope-plane diagrams, that captures the role that conductances and time scales play in amplifying the voltage response at the resonant frequency band as compared to smaller and larger frequencies. We use envelope-plane diagrams in our analysis. We explain why the resonance phenomena do not necessarily arise from the presence of imaginary eigenvalues at rest, but rather they emerge from the interplay of the intrinsic and input time scales. We further explain why an increase in the time-scale separation causes an amplification of the voltage response in addition to shifting the resonant and phase-resonant frequencies. This is of fundamental importance for neural models since neurons typically exhibit a strong separation of time scales. We extend this approach to explain the effects of nonlinearities on both resonance and phase-resonance. We demonstrate that nonlinearities in the voltage equation cause amplifications of the voltage response and shifts in the resonant and phase-resonant frequencies that are not predicted by the corresponding linearized model. The differences between the nonlinear response and the linear prediction increase with increasing levels of the time scale separation between
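
    For the linear case, the voltage response to an oscillatory input is captured by the impedance Z(omega), and both resonance (the peak of |Z|) and phase-resonance (the zero of arg Z) can be read off directly. The sketch below uses an illustrative parameter set chosen so that the eigenvalues at rest are real, which makes the abstract's point that resonance need not come from imaginary eigenvalues:

```python
import numpy as np

# Linearized 2D model:  C v' = -gL*v - g1*w + I(t),   tau w' = v - w.
# For input I = exp(i*omega*t), the voltage response is Z(omega) * I with
#   Z(omega) = 1 / (gL + i*omega*C + g1 / (1 + i*omega*tau)).
# Illustrative parameters (ms-based units), giving a stable node at rest:
C, gL, g1, tau = 1.0, 0.1, 0.1, 100.0

f = np.linspace(0.01, 50.0, 20000)              # Hz
omega = 2 * np.pi * f / 1000.0                  # rad/ms
Z = 1.0 / (gL + 1j * omega * C + g1 / (1.0 + 1j * omega * tau))

f_res = f[np.argmax(np.abs(Z))]                 # subthreshold (amplitude) resonance
f_phase = f[np.argmin(np.abs(np.angle(Z)))]     # zero-phase-shift frequency
print(f"|Z| peaks near {f_res:.1f} Hz; phase crosses zero near {f_phase:.1f} Hz")
# Both are non-zero although the eigenvalues at rest are real, so the peak
# emerges from the interplay of the intrinsic and input time scales.
```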

  9. Estimating severity of sideways fall using a generic multi linear regression model based on kinematic input variables.

    Science.gov (United States)

    van der Zijden, A M; Groen, B E; Tanck, E; Nienhuis, B; Verdonschot, N; Weerdesteyn, V

    2017-03-21

    Many research groups have studied fall impact mechanics to understand how fall severity can be reduced to prevent hip fractures. Yet, direct impact force measurements with force plates are restricted to a very limited repertoire of experimental falls. The purpose of this study was to develop a generic model for estimating hip impact forces (i.e. fall severity) in in vivo sideways falls without the use of force plates. Twelve experienced judokas performed sideways Martial Arts (MA) and Block ('natural') falls on a force plate, both with and without a mat on top. Data were analyzed to determine the hip impact force and to derive 11 selected (subject-specific and kinematic) variables. Falls from kneeling height were used to perform a stepwise regression procedure to assess the effects of these input variables and build the model. The final model includes four input variables, involving one subject-specific measure and three kinematic variables: maximum upper body deceleration, body mass, shoulder angle at the instant of 'maximum impact' and maximum hip deceleration. The results showed that estimated and measured hip impact forces were linearly related (explained variances ranging from 46 to 63%). Hip impact forces of MA falls onto the mat from a standing position (3650±916N) estimated by the final model were comparable with measured values (3698±689N), even though these data were not used for training the model. In conclusion, a generic linear regression model was developed that enables the assessment of fall severity through kinematic measures of sideways falls, without using force plates. Copyright © 2017 Elsevier Ltd. All rights reserved.

  10. A statistical model for investigating binding probabilities of DNA nucleotide sequences using microarrays.

    Science.gov (United States)

    Lee, Mei-Ling Ting; Bulyk, Martha L; Whitmore, G A; Church, George M

    2002-12-01

    There is considerable scientific interest in knowing the probability that a site-specific transcription factor will bind to a given DNA sequence. Microarray methods provide an effective means for assessing the binding affinities of a large number of DNA sequences as demonstrated by Bulyk et al. (2001, Proceedings of the National Academy of Sciences, USA 98, 7158-7163) in their study of the DNA-binding specificities of Zif268 zinc fingers using microarray technology. In a follow-up investigation, Bulyk, Johnson, and Church (2002, Nucleic Acid Research 30, 1255-1261) studied the interdependence of nucleotides on the binding affinities of transcription proteins. Our article is motivated by this pair of studies. We present a general statistical methodology for analyzing microarray intensity measurements reflecting DNA-protein interactions. The log probability of a protein binding to a DNA sequence on an array is modeled using a linear ANOVA model. This model is convenient because it employs familiar statistical concepts and procedures and also because it is effective for investigating the probability structure of the binding mechanism.

  11. The Ising model for prediction of disordered residues from protein sequence alone

    International Nuclear Information System (INIS)

    Lobanov, Michail Yu; Galzitskaya, Oxana V

    2011-01-01

    Intrinsically disordered regions serve as molecular recognition elements, which play an important role in the control of many cellular processes and signaling pathways. It is useful to be able to predict positions of disordered residues and disordered regions in protein chains using protein sequence alone. A new method (IsUnstruct) based on the Ising model for prediction of disordered residues from protein sequence alone has been developed. According to this model, each residue can be in one of two states: ordered or disordered. The model is an approximation of the Ising model in which the interaction term between neighbors has been replaced by a penalty for changing between states (the energy of border). IsUnstruct has been compared with other available methods and found to perform well. The method correctly finds 77% of disordered residues as well as 87% of ordered residues in the CASP8 database, and 72% of disordered residues as well as 85% of ordered residues in the DisProt database.
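
    The approximation described, per-residue state energies plus a fixed penalty (the energy of border) for each order/disorder boundary, admits an exact minimum-energy labelling by dynamic programming. A sketch with made-up energies, not IsUnstruct's trained parameters:

```python
def segment_states(energies, border=2.0):
    """Minimum-energy two-state labelling with a fixed penalty for switching
    states, i.e. the 1D Ising-like approximation described above.

    energies[i] = (E_ordered, E_disordered) for residue i; lower is better.
    Returns the optimal 0/1 (ordered/disordered) labelling via DP."""
    INF = float("inf")
    n = len(energies)
    cost = [[INF] * 2 for _ in range(n)]
    back = [[0] * 2 for _ in range(n)]
    cost[0] = list(energies[0])
    for i in range(1, n):
        for s in (0, 1):
            for prev in (0, 1):
                c = cost[i - 1][prev] + energies[i][s] + (border if prev != s else 0.0)
                if c < cost[i][s]:
                    cost[i][s], back[i][s] = c, prev
    s = 0 if cost[-1][0] <= cost[-1][1] else 1
    labels = [s]
    for i in range(n - 1, 0, -1):
        s = back[i][s]
        labels.append(s)
    return labels[::-1]

# Energies favouring order at the termini and disorder in the middle.
e = [(0, 1)] * 5 + [(1, 0)] * 6 + [(0, 1)] * 5
print(segment_states(e))   # one contiguous disordered region in the middle
```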

  12. Model analysis of riparian buffer effectiveness for reducing nutrient inputs to streams in agricultural landscapes

    Science.gov (United States)

    McKane, R. B.; M, S.; F, P.; Kwiatkowski, B. L.; Rastetter, E. B.

    2006-12-01

    Federal and state agencies responsible for protecting water quality rely mainly on statistically-based methods to assess and manage risks to the nation's streams, lakes and estuaries. Although statistical approaches provide valuable information on current trends in water quality, process-based simulation models are essential for understanding and forecasting how changes in human activities across complex landscapes impact the transport of nutrients and contaminants to surface waters. To address this need, we developed a broadly applicable, process-based watershed simulator that links a spatially-explicit hydrologic model and a terrestrial biogeochemistry model (MEL). See Stieglitz et al. and Pan et al., this meeting, for details on the design and verification of this simulator. Here we apply the watershed simulator to a generalized agricultural setting to demonstrate its potential for informing policy and management decisions concerning water quality. This demonstration specifically explores the effectiveness of riparian buffers for reducing the transport of nitrogenous fertilizers from agricultural fields to streams. The interaction of hydrologic and biogeochemical processes represented in our simulator allows several important questions to be addressed. (1) For a range of upland fertilization rates, to what extent do riparian buffers reduce nitrogen inputs to streams? (2) How does buffer effectiveness change over time as the plant-soil system approaches N-saturation? (3) How can buffers be managed to increase their effectiveness, e.g., through periodic harvest and replanting? The model results illustrate that, while the answers to these questions depend to some extent on site factors (climatic regime, soil properties and vegetation type), in all cases riparian buffers have a limited capacity to reduce nitrogen inputs to streams where fertilization rates approach those typically used for intensive agriculture (e.g., 200 kg N per ha per year for corn in the U

  13. Response sensitivity of barrel neuron subpopulations to simulated thalamic input.

    Science.gov (United States)

    Pesavento, Michael J; Rittenhouse, Cynthia D; Pinto, David J

    2010-06-01

    Our goal is to examine the relationship between neuron- and network-level processing in the context of a well-studied cortical function, the processing of thalamic input by whisker-barrel circuits in rodent neocortex. Here we focus on neuron-level processing and investigate the responses of excitatory and inhibitory barrel neurons to simulated thalamic inputs applied using the dynamic clamp method in brain slices. Simulated inputs are modeled after real thalamic inputs recorded in vivo in response to brief whisker deflections. Our results suggest that inhibitory neurons require more input to reach firing threshold, but then fire earlier, with less variability, and respond to a broader range of inputs than do excitatory neurons. Differences in the responses of barrel neuron subtypes depend on their intrinsic membrane properties. Neurons with a low input resistance require more input to reach threshold but then fire earlier than neurons with a higher input resistance, regardless of the neuron's classification. Our results also suggest that the response properties of excitatory versus inhibitory barrel neurons are consistent with the response sensitivities of the ensemble barrel network. The short response latency of inhibitory neurons may serve to suppress ensemble barrel responses to asynchronous thalamic input. Correspondingly, whereas neurons acting as part of the barrel circuit in vivo are highly selective for temporally correlated thalamic input, excitatory barrel neurons acting alone in vitro are less so. These data suggest that network-level processing of thalamic input in barrel cortex depends on neuron-level processing of the same input by excitatory and inhibitory barrel neurons.

  14. SNBRFinder: A Sequence-Based Hybrid Algorithm for Enhanced Prediction of Nucleic Acid-Binding Residues.

    Directory of Open Access Journals (Sweden)

    Xiaoxia Yang

    Full Text Available Protein-nucleic acid interactions are central to various fundamental biological processes. Automated methods capable of reliably identifying DNA- and RNA-binding residues in protein sequence are assuming ever-increasing importance. The majority of current algorithms rely on feature-based prediction, but their accuracy remains to be further improved. Here we propose a sequence-based hybrid algorithm SNBRFinder (Sequence-based Nucleic acid-Binding Residue Finder) by merging a feature predictor SNBRFinderF and a template predictor SNBRFinderT. SNBRFinderF was established using the support vector machine whose inputs include sequence profile and other complementary sequence descriptors, while SNBRFinderT was implemented with the sequence alignment algorithm based on profile hidden Markov models to capture the weakly homologous template of query sequence. Experimental results show that SNBRFinderF was clearly superior to the commonly used sequence profile-based predictor and SNBRFinderT can achieve comparable performance to the structure-based template methods. Leveraging the complementary relationship between these two predictors, SNBRFinder reasonably improved the performance of both DNA- and RNA-binding residue predictions. More importantly, the sequence-based hybrid prediction reached competitive performance relative to our previous structure-based counterpart. Our extensive and stringent comparisons show that SNBRFinder has obvious advantages over the existing sequence-based prediction algorithms. The value of our algorithm is highlighted by establishing an easy-to-use web server that is freely accessible at http://ibi.hzau.edu.cn/SNBRFinder.

  15. SNBRFinder: A Sequence-Based Hybrid Algorithm for Enhanced Prediction of Nucleic Acid-Binding Residues.

    Science.gov (United States)

    Yang, Xiaoxia; Wang, Jia; Sun, Jun; Liu, Rong

    2015-01-01

    Protein-nucleic acid interactions are central to various fundamental biological processes. Automated methods capable of reliably identifying DNA- and RNA-binding residues in protein sequence are assuming ever-increasing importance. The majority of current algorithms rely on feature-based prediction, but their accuracy remains to be further improved. Here we propose a sequence-based hybrid algorithm SNBRFinder (Sequence-based Nucleic acid-Binding Residue Finder) by merging a feature predictor SNBRFinderF and a template predictor SNBRFinderT. SNBRFinderF was established using the support vector machine whose inputs include sequence profile and other complementary sequence descriptors, while SNBRFinderT was implemented with the sequence alignment algorithm based on profile hidden Markov models to capture the weakly homologous template of query sequence. Experimental results show that SNBRFinderF was clearly superior to the commonly used sequence profile-based predictor and SNBRFinderT can achieve comparable performance to the structure-based template methods. Leveraging the complementary relationship between these two predictors, SNBRFinder reasonably improved the performance of both DNA- and RNA-binding residue predictions. More importantly, the sequence-based hybrid prediction reached competitive performance relative to our previous structure-based counterpart. Our extensive and stringent comparisons show that SNBRFinder has obvious advantages over the existing sequence-based prediction algorithms. The value of our algorithm is highlighted by establishing an easy-to-use web server that is freely accessible at http://ibi.hzau.edu.cn/SNBRFinder.

  16. Fast and secure retrieval of DNA sequences

    NARCIS (Netherlands)

    2014-01-01

    Sequence models are retrieved from a sequences index. The sequence models model DNA or RNA sequences stored in a database, and each comprises a finite memory tree source model and parameters for the finite memory tree source model. One or more DNA or RNA sequences stored in the database are

  17. A machine learning model to determine the accuracy of variant calls in capture-based next generation sequencing.

    Science.gov (United States)

    van den Akker, Jeroen; Mishne, Gilad; Zimmer, Anjali D; Zhou, Alicia Y

    2018-04-17

    Next generation sequencing (NGS) has become a common technology for clinical genetic tests. The quality of NGS calls varies widely and is influenced by features like reference sequence characteristics, read depth, and mapping accuracy. With recent advances in NGS technology and software tools, the majority of variants called using NGS alone are in fact accurate and reliable. However, a small subset of difficult-to-call variants that still do require orthogonal confirmation exist. For this reason, many clinical laboratories confirm NGS results using orthogonal technologies such as Sanger sequencing. Here, we report the development of a deterministic machine-learning-based model to differentiate between these two types of variant calls: those that do not require confirmation using an orthogonal technology (high confidence), and those that require additional quality testing (low confidence). This approach allows reliable NGS-based calling in a clinical setting by identifying the few important variant calls that require orthogonal confirmation. We developed and tested the model using a set of 7179 variants identified by a targeted NGS panel and re-tested by Sanger sequencing. The model incorporated several signals of sequence characteristics and call quality to determine if a variant was identified at high or low confidence. The model was tuned to eliminate false positives, defined as variants that were called by NGS but not confirmed by Sanger sequencing. The model achieved very high accuracy: 99.4% (95% confidence interval: +/- 0.03%). It categorized 92.2% (6622/7179) of the variants as high confidence, and 100% of these were confirmed to be present by Sanger sequencing. Among the variants that were categorized as low confidence, defined as NGS calls of low quality that are likely to be artifacts, 92.1% (513/557) were found to be not present by Sanger sequencing. This work shows that NGS data contains sufficient characteristics for a machine-learning-based model to
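
    The two-bucket routing logic is easy to sketch: train a classifier on call-quality features and send a call to orthogonal confirmation unless the predicted probability of being real clears a strict threshold. Feature names and data below are synthetic stand-ins, not the paper's model:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Hypothetical call-quality features: read depth, allele balance, mapping quality.
rng = np.random.default_rng(0)
n = 4000
X = np.column_stack([
    rng.uniform(10, 400, n),     # read depth
    rng.uniform(0.2, 1.0, n),    # allele balance
    rng.uniform(0, 60, n),       # mapping quality
])
# Synthetic labels: shallow, unbalanced, badly mapped calls tend to be artifacts.
y = ((X[:, 0] > 40) & (X[:, 1] > 0.3) & (X[:, 2] > 20)).astype(int)

clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X[:3000], y[:3000])
proba = clf.predict_proba(X[3000:])[:, 1]
high_conf = proba > 0.99         # strict threshold, tuned against false positives
print(f"{high_conf.mean():.1%} of calls would skip orthogonal confirmation")
```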

  18. Preparation and documentation of a CATHENA input file for Darlington NGS

    International Nuclear Information System (INIS)

    1989-03-01

    A CATHENA input model has been developed and documented for the heat transport system of the Darlington Nuclear Generating Station. CATHENA, an advanced two-fluid thermalhydraulic computer code, has been designed for analysis of postulated loss-of-coolant accidents (LOCA) and upset conditions in the CANDU system. This report describes the Darlington input model (or idealization), and gives representative results for a simulation of a small break at an inlet header

  19. Supervised Sequence Labelling with Recurrent Neural Networks

    CERN Document Server

    Graves, Alex

    2012-01-01

    Supervised sequence labelling is a vital area of machine learning, encompassing tasks such as speech, handwriting and gesture recognition, protein secondary structure prediction and part-of-speech tagging. Recurrent neural networks are powerful sequence learning tools—robust to input noise and distortion, able to exploit long-range contextual information—that would seem ideally suited to such problems. However their role in large-scale sequence labelling systems has so far been auxiliary.    The goal of this book is a complete framework for classifying and transcribing sequential data with recurrent neural networks only. Three main innovations are introduced in order to realise this goal. Firstly, the connectionist temporal classification output layer allows the framework to be trained with unsegmented target sequences, such as phoneme-level speech transcriptions; this is in contrast to previous connectionist approaches, which were dependent on error-prone prior segmentation. Secondly, multidimensional...

  20. Highly multiplexed targeted DNA sequencing from single nuclei.

    Science.gov (United States)

    Leung, Marco L; Wang, Yong; Kim, Charissa; Gao, Ruli; Jiang, Jerry; Sei, Emi; Navin, Nicholas E

    2016-02-01

    Single-cell DNA sequencing methods are challenged by poor physical coverage, high technical error rates and low throughput. To address these issues, we developed a single-cell DNA sequencing protocol that combines flow-sorting of single nuclei, time-limited multiple-displacement amplification (MDA), low-input library preparation, DNA barcoding, targeted capture and next-generation sequencing (NGS). This approach represents a major improvement over our previous single nucleus sequencing (SNS) Nature Protocols paper in terms of generating higher-coverage data (>90%), thereby enabling the detection of genome-wide variants in single mammalian cells at base-pair resolution. Furthermore, by pooling 48-96 single-cell libraries together for targeted capture, this approach can be used to sequence many single-cell libraries in parallel in a single reaction. This protocol greatly reduces the cost of single-cell DNA sequencing, and it can be completed in 5-6 d by advanced users. This single-cell DNA sequencing protocol has broad applications for studying rare cells and complex populations in diverse fields of biological research and medicine.

  1. Chance Constrained Input Relaxation to Congestion in Stochastic DEA. An Application to Iranian Hospitals.

    Science.gov (United States)

    Kheirollahi, Hooshang; Matin, Behzad Karami; Mahboubi, Mohammad; Alavijeh, Mehdi Mirzaei

    2015-01-01

    This article develops a model of congestion, based on a relaxed combination of inputs, in stochastic data envelopment analysis (SDEA) using chance constrained programming approaches. Classic data envelopment analysis models with deterministic data have been used by many authors to identify congestion and estimate its levels; however, data envelopment analysis with stochastic data has rarely been used to identify congestion. This article uses chance constrained programming to replace the stochastic models with "deterministic equivalents". This substitution leads to non-linear problems that must be solved. Finally, the proposed method based on a relaxed combination of inputs is used to identify input congestion in six Iranian hospitals, with one input and two outputs, over the period 2009 to 2012.

  2. Self-Structured Organizing Single-Input CMAC Control for Robot Manipulator

    Directory of Open Access Journals (Sweden)

    ThanhQuyen Ngo

    2011-09-01

    Full Text Available This paper presents a self-structured organizing single-input control system based on a differentiable cerebellar model articulation controller (CMAC) for an n-link robot manipulator to achieve high-precision position tracking. In the proposed scheme, the single-input CMAC controller is solely used to control the plant, so the input space dimension of the CMAC can be simplified and no conventional controller is needed. The structure of the single-input CMAC is also self-organized; that is, the layers of the single-input CMAC grow or prune systematically and their receptive functions are automatically adjusted. The online tuning laws of the single-input CMAC parameters are derived using the gradient-descent learning method, and a discrete-type Lyapunov function is applied to determine the learning rates of the proposed control system so that the stability of the system can be guaranteed. Simulation results for a robot manipulator are provided to verify the effectiveness of the proposed control methodology.

  3. Load Estimation from Natural input Modal Analysis

    DEFF Research Database (Denmark)

    Aenlle, Manuel López; Brincker, Rune; Canteli, Alfonso Fernández

    2005-01-01

    One application of Natural Input Modal Analysis consists in estimating the unknown load acting on structures such as wind loads, wave loads, traffic loads, etc. In this paper, a procedure to determine loading from a truncated modal model, as well as the results of an experimental testing programme...... estimation. In the experimental program a small structure subjected to vibration was used to estimate the loading from the measurements and the experimental modal space. The modal parameters were estimated by Natural Input Modal Analysis and the scaling factors of the mode shapes obtained by the mass change...

  4. ColloInputGenerator

    DEFF Research Database (Denmark)

    2013-01-01

    This is a very simple program to help you put together input files for use in Gries' (2007) R-based collostruction analysis program. It basically puts together a text file with a frequency list of lexemes in the construction and inserts a column where you can add the corpus frequencies. It requires...... it as input for basic collexeme collostructional analysis (Stefanowitsch & Gries 2003) in Gries' (2007) program. ColloInputGenerator is, in its current state, based on programming commands introduced in Gries (2009). Projected updates: Generation of complete work-ready frequency lists....
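
    The core output described above is just a frequency list with a blank corpus-frequency column. A Python sketch of the same bookkeeping follows; the column names are our own, and the original tool is built around Gries' R program rather than this code:

```python
from collections import Counter

def construction_frequency_list(tokens, outfile="collo_input.tsv"):
    """Write a lexeme frequency list for the construction slot, with an empty
    CORPUS_FREQ column to be filled in by hand, mirroring the described
    output format (hypothetical column names)."""
    counts = Counter(tokens)
    with open(outfile, "w", encoding="utf-8") as fh:
        fh.write("WORD\tCONSTRUCTION_FREQ\tCORPUS_FREQ\n")
        for word, freq in counts.most_common():
            fh.write(f"{word}\t{freq}\t\n")

construction_frequency_list(["give", "hand", "give", "send", "give", "hand"])
```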

  5. A multifactorial likelihood model for MMR gene variant classification incorporating probabilities based on sequence bioinformatics and tumor characteristics: a report from the Colon Cancer Family Registry.

    Science.gov (United States)

    Thompson, Bryony A; Goldgar, David E; Paterson, Carol; Clendenning, Mark; Walters, Rhiannon; Arnold, Sven; Parsons, Michael T; Walsh, Michael D; Gallinger, Steven; Haile, Robert W; Hopper, John L; Jenkins, Mark A; Lemarchand, Loic; Lindor, Noralane M; Newcomb, Polly A; Thibodeau, Stephen N; Young, Joanne P; Buchanan, Daniel D; Tavtigian, Sean V; Spurdle, Amanda B

    2013-01-01

    Mismatch repair (MMR) gene sequence variants of uncertain clinical significance are often identified in suspected Lynch syndrome families, and this constitutes a challenge for both researchers and clinicians. Multifactorial likelihood model approaches provide a quantitative measure of MMR variant pathogenicity, but first require input of likelihood ratios (LRs) for different MMR variation-associated characteristics from appropriate, well-characterized reference datasets. Microsatellite instability (MSI) and somatic BRAF tumor data for unselected colorectal cancer probands of known pathogenic variant status were used to derive LRs for tumor characteristics using the Colon Cancer Family Registry (CFR) resource. These tumor LRs were combined with variant segregation within families, and estimates of prior probability of pathogenicity based on sequence conservation and position, to analyze 44 unclassified variants identified initially in Australasian Colon CFR families. In addition, in vitro splicing analyses were conducted on the subset of variants based on bioinformatic splicing predictions. The LR in favor of pathogenicity was estimated to be ~12-fold for a colorectal tumor with a BRAF mutation-negative MSI-H phenotype. For 31 of the 44 variants, the posterior probabilities of pathogenicity were such that altered clinical management would be indicated. Our findings provide a working multifactorial likelihood model for classification that carefully considers mode of ascertainment for gene testing. © 2012 Wiley Periodicals, Inc.
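
    The multifactorial arithmetic itself is compact: convert the prior probability of pathogenicity to odds, multiply by the likelihood ratio for each independent line of evidence, and convert back. A sketch using the ~12-fold tumor LR reported above together with an illustrative prior and segregation LR (values are for illustration only):

```python
def posterior_probability(prior, likelihood_ratios):
    """Posterior odds = prior odds x product of independent likelihood ratios
    (e.g. tumor MSI/BRAF status, segregation), converted back to a probability."""
    odds = prior / (1.0 - prior)
    for lr in likelihood_ratios:
        odds *= lr
    return odds / (1.0 + odds)

# A variant with prior 0.1 whose carrier's tumor is MSI-H and BRAF-negative
# (LR ~ 12 per the abstract), plus modest segregation evidence (LR ~ 3):
print(posterior_probability(0.1, [12.0, 3.0]))   # ~0.8
```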

  6. Industrial and ecological cumulative exergy consumption of the United States via the 1997 input-output benchmark model

    International Nuclear Information System (INIS)

    Ukidwe, Nandan U.; Bakshi, Bhavik R.

    2007-01-01

    This paper develops a thermodynamic input-output (TIO) model of the 1997 United States economy that accounts for the flow of cumulative exergy in the 488-sector benchmark economic input-output model in two different ways. Industrial cumulative exergy consumption (ICEC) captures the exergy of all natural resources consumed directly and indirectly by each economic sector, while ecological cumulative exergy consumption (ECEC) also accounts for the exergy consumed in ecological systems for producing each natural resource. Information about exergy consumed in nature is obtained from the thermodynamics of biogeochemical cycles. As used in this work, ECEC is analogous to the concept of emergy, but does not rely on any of its controversial claims. The TIO model can also account for emissions from each sector and their impact and the role of labor. The use of consistent exergetic units permits the combination of various streams to define aggregate metrics that may provide insight into aspects related to the impact of economic sectors on the environment. Accounting for the contribution of natural capital by ECEC has been claimed to permit better representation of the quality of ecosystem goods and services than ICEC. The results of this work are expected to permit evaluation of these claims. If validated, this work is expected to lay the foundation for thermodynamic life cycle assessment, particularly of emerging technologies and with limited information
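
    The cumulative-exergy bookkeeping follows standard input-output algebra: with direct-requirements matrix A and a row vector r of direct resource exergy per unit output, the total (direct plus indirect) intensities are c = r (I - A)^-1. A three-sector toy version of that calculation; the actual model uses the 488-sector benchmark and measured exergy data:

```python
import numpy as np

# Toy 3-sector economy: A[i, j] = dollars of sector i needed per dollar of
# sector j's output; r[i] = MJ of natural-resource exergy consumed directly
# per dollar of sector i's output. Values are illustrative only.
A = np.array([[0.10, 0.04, 0.02],
              [0.30, 0.20, 0.10],
              [0.05, 0.15, 0.25]])
r = np.array([50.0, 5.0, 1.0])

# Leontief inverse: c[j] = exergy consumed directly and indirectly per
# dollar of final demand for sector j (the ICEC intensity).
c = r @ np.linalg.inv(np.eye(3) - A)
print(c)
```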

  7. Predicting musically induced emotions from physiological inputs: linear and neural network models.

    Science.gov (United States)

    Russo, Frank A; Vempala, Naresh N; Sandstrom, Gillian M

    2013-01-01

    Listening to music often leads to physiological responses. Do these physiological responses contain sufficient information to infer the emotion induced in the listener? The current study explores this question by attempting to predict judgments of "felt" emotion from physiological responses alone, using linear and neural network models. We measured five channels of peripheral physiology from 20 participants: heart rate (HR), respiration, galvanic skin response, and activity in the corrugator supercilii and zygomaticus major facial muscles. Using valence and arousal (VA) dimensions, participants rated their felt emotion after listening to each of 12 classical music excerpts. After extracting features from the five channels, we examined their correlation with VA ratings, and then performed multiple linear regression to see whether a linear function of the physiological responses could account for the ratings. Although linear models predicted a significant amount of variance in arousal ratings, they were unable to do so for valence ratings. We then used a neural network to provide a non-linear account of the ratings. The network was trained on the mean ratings of eight of the 12 excerpts and tested on the remainder. The performance of the neural network confirms that physiological responses alone can be used to predict musically induced emotion. The non-linear model derived from the neural network was more accurate than the linear models derived from multiple linear regression, particularly along the valence dimension. A secondary analysis allowed us to quantify the relative contributions of inputs to the non-linear model. The study represents a novel approach to understanding the complex relationship between physiological responses and musically induced emotion.
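
    A minimal sketch of the two model classes being compared, with stand-in data in place of the study's extracted physiological features and ratings (the paper's actual feature set and network architecture are not reproduced here):

        import numpy as np
        from sklearn.linear_model import LinearRegression
        from sklearn.neural_network import MLPRegressor

        rng = np.random.default_rng(0)
        X = rng.normal(size=(96, 5))                   # stand-in physiological features
        W = rng.normal(size=(5, 2))
        y = np.tanh(X @ W) + 0.1 * rng.normal(size=(96, 2))  # stand-in [valence, arousal]

        X_train, X_test = X[:64], X[64:]
        y_train, y_test = y[:64], y[64:]

        # Linear account of the ratings vs. a non-linear neural network account.
        linear = LinearRegression().fit(X_train, y_train)
        net = MLPRegressor(hidden_layer_sizes=(10,), max_iter=5000,
                           random_state=0).fit(X_train, y_train)

        print("linear R^2: ", linear.score(X_test, y_test))
        print("network R^2:", net.score(X_test, y_test))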

  8. Predicting musically induced emotions from physiological inputs: Linear and neural network models

    Directory of Open Access Journals (Sweden)

    Frank A. Russo

    2013-08-01

    Listening to music often leads to physiological responses. Do these physiological responses contain sufficient information to infer the emotion induced in the listener? The current study explores this question by attempting to predict judgments of 'felt' emotion from physiological responses alone, using linear and neural network models. We measured five channels of peripheral physiology from 20 participants – heart rate, respiration, galvanic skin response, and activity in the corrugator supercilii and zygomaticus major facial muscles. Using valence and arousal (VA) dimensions, participants rated their felt emotion after listening to each of 12 classical music excerpts. After extracting features from the five channels, we examined their correlation with VA ratings, and then performed multiple linear regression to see whether a linear function of the physiological responses could account for the ratings. Although linear models predicted a significant amount of variance in arousal ratings, they were unable to do so for valence ratings. We then used a neural network to provide a nonlinear account of the ratings. The network was trained on the mean ratings of eight of the 12 excerpts and tested on the remainder. The performance of the neural network confirms that physiological responses alone can be used to predict musically induced emotion. The nonlinear model derived from the neural network was more accurate than the linear models derived from multiple linear regression, particularly along the valence dimension. A secondary analysis allowed us to quantify the relative contributions of inputs to the nonlinear model. The study represents a novel approach to understanding the complex relationship between physiological responses and musically induced emotion.

  9. Impact of environmental inputs on reverse-engineering approach to network structures.

    Science.gov (United States)

    Wu, Jianhua; Sinfield, James L; Buchanan-Wollaston, Vicky; Feng, Jianfeng

    2009-12-04

    Uncovering complex network structures from a biological system is one of the main topics in systems biology. Network structures can be inferred with dynamic Bayesian networks or Granger causality, but neither technique seriously takes into account the impact of environmental inputs. In consideration of the natural rhythmic dynamics of biological data, we propose a systems biology approach to reveal the impact of environmental inputs on network structures. We first represent the environmental inputs by a harmonic oscillator and combine this representation with Granger causality to identify environmental inputs and then uncover the causal network structures. We also generalize the approach to multiple harmonic oscillators to represent various exogenous influences. This systems approach is extensively tested with toy models and successfully applied to a real biological network of microarray data of the flowering genes of the model plant Arabidopsis thaliana. The aim is to identify those genes that are directly affected by the presence of sunlight and to uncover the interactive network structures associated with flowering metabolism. We demonstrate that environmental inputs are crucial for correctly inferring network structures. The harmonic causality method proves to be a powerful technique for detecting environmental inputs and uncovering network structures, especially when the biological data exhibit periodic oscillations.
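
    The core idea can be sketched with synthetic data: fit the target gene from its own lags plus harmonic terms representing the environmental input, then check how much adding the candidate driver's lags reduces the residual variance. A minimal illustration, not the authors' exact formulation:

        import numpy as np

        rng = np.random.default_rng(1)
        n, omega = 500, 2 * np.pi / 24.0           # e.g. a 24-h light cycle
        t = np.arange(n)
        light = np.sin(omega * t)                  # environmental input
        x = light + 0.3 * rng.normal(size=n)       # candidate driver gene
        y = np.zeros(n)
        for k in range(1, n):                      # y driven by light and by x
            y[k] = 0.5 * y[k-1] + 0.4 * x[k-1] + 0.5 * light[k] + 0.1 * rng.normal()

        def rss(design, target):
            beta, *_ = np.linalg.lstsq(design, target, rcond=None)
            return np.sum((target - design @ beta) ** 2)

        # Restricted model: y's own lag plus harmonic terms for the environment.
        H = np.column_stack([y[:-1], np.sin(omega * t[1:]),
                             np.cos(omega * t[1:]), np.ones(n - 1)])
        # Full model: additionally includes the candidate driver's lag.
        F = np.column_stack([H, x[:-1]])
        print("variance reduction from x:", 1 - rss(F, y[1:]) / rss(H, y[1:]))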

  10. Response of the Black Sea methane budget to massive short-term submarine inputs of methane

    DEFF Research Database (Denmark)

    Schmale, O.; Haeckel, M.; McGinnis, D. F.

    2011-01-01

    A steady state box model was developed to estimate the methane input into the Black Sea water column at various water depths. Our model results reveal a total input of methane of 4.7 Tg yr^-1. The model predicts that the input of methane is largest at water depths between 600 and 700 m (7% of the total input), suggesting that the dissociation of methane gas hydrates at water depths equivalent to their upper stability limit may represent an important source of methane into the water column. In addition we discuss the effects of massive short-term methane inputs (e.g. through eruptions of deep-water mud volcanoes or submarine landslides at intermediate water depths) on the water column methane distribution and the resulting methane emission to the atmosphere. Our non-steady state simulations predict that these inputs will be effectively buffered by intense microbial methane consumption ...
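
    A single box of such a budget model can be sketched with made-up rate constants: the concentration responds to a source flux and first-order microbial oxidation, and a short-lived eruption enters as a pulse whose effect the consumption term buffers away:

        import numpy as np

        dt, n = 0.01, 20000                 # step (years) and number of steps
        V = 1.0e15                          # box volume, m^3 (hypothetical)
        Q = 4.0e10                          # steady methane input, mol/yr (hypothetical)
        k_ox = 2.0                          # first-order oxidation rate, 1/yr (hypothetical)

        C = np.zeros(n)                     # methane concentration, mol/m^3
        for i in range(1, n):
            source = Q / V
            if 50.0 <= i * dt < 50.1:       # short-lived eruption: 100x pulse
                source *= 100.0
            C[i] = C[i-1] + dt * (source - k_ox * C[i-1])

        # The pulse decays on a 1/k_ox timescale back to the steady state.
        print("steady state:", Q / (V * k_ox), "final:", C[-1])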

  11. Sequencing historical specimens: successful preparation of small specimens with low amounts of degraded DNA.

    Science.gov (United States)

    Sproul, John S; Maddison, David R

    2017-11-01

    Despite advances that allow DNA sequencing of old museum specimens, sequencing small-bodied, historical specimens can be challenging and unreliable as many contain only small amounts of fragmented DNA. Dependable methods to sequence such specimens are especially critical if the specimens are unique. We attempt to sequence small-bodied (3-6 mm) historical specimens (including nomenclatural types) of beetles that have been housed, dried, in museums for 58-159 years, and for which few or no suitable replacement specimens exist. To better understand ideal approaches of sample preparation and produce preparation guidelines, we compared different library preparation protocols using low amounts of input DNA (1-10 ng). We also explored low-cost optimizations designed to improve library preparation efficiency and sequencing success of historical specimens with minimal DNA, such as enzymatic repair of DNA. We report successful sample preparation and sequencing for all historical specimens despite our low-input DNA approach. We provide a list of guidelines related to DNA repair, bead handling, reducing adapter dimers and library amplification. We present these guidelines to facilitate more economical use of valuable DNA and enable more consistent results in projects that aim to sequence challenging, irreplaceable historical specimens. © 2017 John Wiley & Sons Ltd.

  12. Modelling groundwater discharge areas using only digital elevation models as input data

    Energy Technology Data Exchange (ETDEWEB)

    Brydsten, Lars [Umeaa Univ. (Sweden). Dept. of Biology and Environmental Science

    2006-10-15

    Advanced geohydrological models require data on topography, three-dimensional soil distribution, vegetation, land use, and bedrock fracture zones. To model present geohydrological conditions, these factors can be gathered with different techniques. If a future geohydrological condition is modelled in an area with positive shore displacement (say, 5,000 or 10,000 years from now), some of these factors can be difficult to estimate, for example the development of wetlands and the infilling of lakes. If the goal of the model is to predict the distribution of groundwater recharge and discharge areas in the landscape, the most important factor is topography. The question is how much topography alone can explain about the distribution of geohydrological objects in the landscape. A simplified description of this distribution is that groundwater recharge occurs at local convex curvatures of the terrain, discharge occurs in lakes, brooks, and low-lying slopes, and the areas in between act as discharge areas during wet periods and recharge areas during dry periods. A model that predicts this pattern from topography alone needs to be able to identify high ridges as well as future lakes and brooks. This study uses GIS software with four different functions that take digital elevation models as input data: geomorphometric parameters to predict landscape ridges, basin filling to predict lakes, flow accumulation to predict future waterways, and topographic wetness indices to divide the in-between areas by degree of wetness. An area between the village of Forsmark and the Forsmark Nuclear Power Plant has been used to calibrate the model. The area is covered by the SKB 10-metre digital elevation model (DEM) and by a high-resolution orienteering map of wetlands. Wetlands are assumed to be groundwater discharge areas. Five hundred points, treated as potential discharge points, were randomly distributed across the wetlands. Model parameters were chosen with the ...
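
    The wetness-index step can be sketched as follows, assuming the DEM and a flow-accumulation grid are already available (flow accumulation would normally come from a D8 routine in the GIS package); the index is TWI = ln(a / tan(beta)), with a the specific catchment area and beta the local slope. The DEM and accumulation grid below are stand-ins:

        import numpy as np

        cell = 10.0                                   # DEM resolution, m
        dem = np.random.default_rng(2).normal(50, 5, (100, 100))  # stand-in DEM
        acc = np.ones_like(dem)                       # stand-in flow accumulation (cells)

        dzdy, dzdx = np.gradient(dem, cell)           # elevation derivatives
        slope = np.arctan(np.hypot(dzdx, dzdy))       # slope angle beta, radians

        a = (acc + 1.0) * cell                        # specific catchment area, m
        twi = np.log(a / np.tan(np.clip(slope, 1e-6, None)))  # wetness index

        # High-TWI cells are candidate discharge areas; low-TWI cells recharge.
        print("TWI range:", twi.min(), twi.max())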

  13. Input Shaping to Reduce Solar Array Structural Vibrations

    Science.gov (United States)

    Doherty, Michael J.; Tolson, Robert J.

    1998-01-01

    Structural vibrations induced by actuators can be minimized using input shaping. Input shaping is a feedforward method in which actuator commands are convolved with shaping functions to yield a shaped set of commands. These commands are designed to perform the maneuver while minimizing the residual structural vibration. In this report, input shaping is extended to stepper motor actuators. As a demonstration, an input-shaping technique based on pole-zero cancellation was used to modify the Solar Array Drive Assembly (SADA) actuator commands for the Lewis satellite. A series of impulses were calculated as the ideal SADA output for vibration control. These impulses were then discretized for use by the SADA stepper motor actuator and simulated actuator outputs were used to calculate the structural response. The effectiveness of input shaping is limited by the accuracy of the knowledge of the modal frequencies. Assuming perfect knowledge resulted in significant vibration reduction. Errors of 10% in the modal frequencies caused notably higher levels of vibration. Controller robustness was improved by incorporating additional zeros in the shaping function. The additional zeros did not require increased performance from the actuator. Despite the identification errors, the resulting feedforward controller reduced residual vibrations to the level of the exactly modeled input shaper and well below the baseline cases. These results could be easily applied to many other vibration-sensitive applications involving stepper motor actuators.
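
    For illustration, a standard two-impulse zero-vibration (ZV) shaper for a single mode can be written in a few lines; note that the report itself derives its shaper by pole-zero cancellation and discretizes it for a stepper motor, which this sketch does not reproduce:

        import numpy as np

        wn, zeta, dt = 2 * np.pi * 0.5, 0.02, 0.001   # mode: 0.5 Hz, 2% damping (illustrative)
        wd = wn * np.sqrt(1 - zeta**2)                # damped natural frequency
        K = np.exp(-zeta * np.pi / np.sqrt(1 - zeta**2))

        # Two-impulse ZV shaper: amplitudes sum to 1, second impulse at half
        # the damped period, so the residual vibrations cancel.
        t2 = np.pi / wd
        shaper = np.zeros(int(round(t2 / dt)) + 1)
        shaper[0] = 1 / (1 + K)
        shaper[-1] = K / (1 + K)

        step = np.ones(5000)                          # raw (unshaped) command
        shaped = np.convolve(step, shaper)            # shaped command
        print("impulses:", shaper[0], shaper[-1], "delay:", t2, "s")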

  14. GARFEM input deck description

    Energy Technology Data Exchange (ETDEWEB)

    Zdunek, A.; Soederberg, M. (Aeronautical Research Inst. of Sweden, Bromma (Sweden))

    1989-01-01

    The input card deck for the finite element program GARFEM version 3.2 is described in this manual. The program includes, but is not limited to, capabilities to handle the following problems: * Linear bar and beam element structures, * Geometrically non-linear problems (bar and beam), both static and transient dynamic analysis, * Transient response dynamics from a catalog of time varying external forcing function types or input function tables, * Eigenvalue solution (modes and frequencies), * Multi point constraints (MPC) for the modelling of mechanisms and e.g. rigid links. The MPC definition is used only in the geometrically linearized sense, * Beams with disjunct shear axis and neutral axis, * Beams with rigid offset. An interface exist that connects GARFEM with the program GAROS. GAROS is a program for aeroelastic analysis of rotating structures. Since this interface was developed GARFEM now serves as a preprocessor program in place of NASTRAN which was formerly used. Documentation of the methods applied in GARFEM exists but is so far limited to the capacities in existence before the GAROS interface was developed.

  15. Fast and Sequence-Adaptive Whole-Brain Segmentation Using Parametric Bayesian Modeling

    DEFF Research Database (Denmark)

    Puonti, Oula; Iglesias, Juan Eugenio; Van Leemput, Koen

    2016-01-01

    the performance of a segmentation algorithm designed to meet these requirements, building upon generative parametric models previously used in tissue classification. The method is tested on four different datasets acquired with different scanners, field strengths and pulse sequences, demonstrating comparable...
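
    The generative idea behind such tissue classification can be illustrated with a plain Gaussian mixture over voxel intensities; the actual method adds spatial priors, bias-field modeling, and atlas deformation, none of which appear in this hypothetical sketch:

        import numpy as np
        from sklearn.mixture import GaussianMixture

        rng = np.random.default_rng(3)
        # Stand-in voxel intensities from three "tissues" (e.g. CSF/GM/WM).
        intensities = np.concatenate([rng.normal(30, 5, 1000),
                                      rng.normal(70, 7, 1000),
                                      rng.normal(110, 6, 1000)])

        # Each tissue class is modeled as a Gaussian; EM fits means/variances,
        # and voxels are assigned to the most probable class.
        gmm = GaussianMixture(n_components=3, random_state=0)
        labels = gmm.fit_predict(intensities.reshape(-1, 1))
        print("class means:", np.sort(gmm.means_.ravel()))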

  16. A Novel Approach to Develop the Lower Order Model of Multi-Input Multi-Output System

    Science.gov (United States)

    Rajalakshmy, P.; Dharmalingam, S.; Jayakumar, J.

    2017-10-01

    A mathematical model is a virtual entity that uses mathematical language to describe the behavior of a system. Mathematical models are used particularly in the natural sciences and engineering disciplines such as physics, biology, and electrical engineering, as well as in the social sciences such as economics, sociology, and political science. Physicists, engineers, computer scientists, and economists use mathematical models most extensively. With the advent of high-performance processors and advanced mathematical computation, it is possible to develop high-performing simulators for complicated multi-input multi-output (MIMO) systems such as quadruple-tank systems, aircraft, and boilers. This paper presents the development of a mathematical model of a 500 MW utility boiler, which is a highly complex system. A synergistic combination of operational experience, system identification, and a lower-order modeling philosophy has been used to develop a simplified but accurate model of the circulation system of the utility boiler, a MIMO system. The results obtained are found to be in good agreement with the physics of the process and with the results obtained through the design procedure. The model can be used directly for control system studies and to realize hardware simulators for boiler testing and operator training.
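
    One standard numerical route to a lower-order model of a MIMO system is balanced truncation; a sketch using the python-control package (which relies on the slycot backend for this routine) with a random stable system standing in for the boiler model. The paper's own approach combines operational experience with system identification rather than this purely numerical reduction:

        import numpy as np
        import control

        full = control.rss(states=8, outputs=2, inputs=2)   # stand-in 2x2 MIMO plant
        reduced = control.balred(full, orders=3)            # balanced truncation to 3 states

        # Compare step responses of the full and reduced models.
        t = np.linspace(0, 10, 500)
        _, y_full = control.step_response(full, t)
        _, y_red = control.step_response(reduced, t)
        print("max step-response error:", np.max(np.abs(y_full - y_red)))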

  17. Sequence selection by dynamical symmetry breaking in an autocatalytic binary polymer model

    DEFF Research Database (Denmark)

    Fellermann, Harold; Tanaka, Shinpei; Rasmussen, Steen

    2017-01-01

    Template-directed replication of nucleic acids is at the essence of all living beings and a major milestone for any origin-of-life scenario. We present an idealized model of prebiotic sequence replication, where binary polymers act as templates for their autocatalytic replication, thereby serving as each other's reactants and products in an intertwined molecular ecology. Our model demonstrates how autocatalysis alters the qualitative and quantitative system dynamics in counterintuitive ways. Most notably, numerical simulations reveal a very strong intrinsic selection mechanism that favors the appearance of a few population structures with highly ordered and repetitive sequence patterns when starting from a pool of monomers. We demonstrate both analytically and through simulation how this "selection of the dullest" is caused by continued symmetry breaking through random fluctuations ...
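
    The selection mechanism can be caricatured in a few lines of stochastic simulation: sequences replicate in proportion to their abundance by consuming matching monomers, on top of a slow random-ligation background, so early fluctuations lock in a few dominant sequences. This is a minimal sketch, not the paper's full kinetic model:

        import random
        from collections import Counter

        random.seed(4)
        pool = Counter({"0": 500, "1": 500})        # free monomers
        polymers = Counter()                        # sequence -> copy number

        for _ in range(20000):
            if polymers and random.random() < 0.9:
                # Autocatalysis: pick a template weighted by abundance and
                # copy it from free monomers, if enough are available.
                seqs, counts = zip(*polymers.items())
                template = random.choices(seqs, weights=counts)[0]
                need = Counter(template)
                if all(pool[m] >= c for m, c in need.items()):
                    pool.subtract(need)
                    polymers[template] += 1
            elif pool["0"] + pool["1"] >= 2:
                # Slow background: random ligation of two monomers.
                a = random.choices("01", weights=[pool["0"], pool["1"]])[0]
                pool[a] -= 1
                b = random.choices("01", weights=[pool["0"], pool["1"]])[0]
                pool[b] -= 1
                polymers[a + b] += 1

        print(polymers.most_common(5))              # a few sequences dominate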

  18. A generative Bezier curve model for surf-zone tracking in coastal image sequences

    CSIR Research Space (South Africa)

    Burke, Michael G

    2017-09-01

    This work introduces a generative Bezier curve model suitable for surf-zone curve tracking in coastal image sequences. The model combines an adaptive curve, parametrised by control points governed by local random walks, with a global sinusoidal motion ...
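
    The curve element itself is easy to sketch: a cubic Bezier curve evaluated from four control points, with a local random walk on the control points serving as one tracking update. Nothing here is specific to the CSIR tracker; all parameters are illustrative:

        import numpy as np

        def bezier(ctrl, n=100):
            """Evaluate a cubic Bezier curve from 4 control points (4x2 array)."""
            t = np.linspace(0.0, 1.0, n)[:, None]
            b = [(1 - t)**3, 3 * t * (1 - t)**2, 3 * t**2 * (1 - t), t**3]
            return sum(w * p for w, p in zip(b, ctrl))   # n x 2 curve points

        rng = np.random.default_rng(5)
        ctrl = np.array([[0.0, 0.0], [1.0, 2.0], [3.0, 2.0], [4.0, 0.0]])

        # Local random walk on the control points (one tracking update).
        ctrl_new = ctrl + rng.normal(scale=0.05, size=ctrl.shape)
        curve = bezier(ctrl_new)
        print(curve.shape)                               # (100, 2)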

  19. Analysis of correlations between sites in models of protein sequences

    International Nuclear Information System (INIS)

    Giraud, B.G.; Lapedes, A.; Liu, L.C.

    1998-01-01

    A criterion based on conditional probabilities, related to the concept of algorithmic distance, is used to detect correlated mutations at noncontiguous sites on sequences. We apply this criterion to the problem of analyzing correlations between sites in protein sequences; however, the analysis applies generally to networks of interacting sites with discrete states at each site. Elementary models, where explicit results can be derived easily, are introduced. The number of states per site considered ranges from 2, illustrating the relation to familiar classical spin systems, to 20, suitable for representing amino acids. Numerical simulations show that the criterion remains valid even when the genetic history of the data samples (e.g., protein sequences), as represented by a phylogenetic tree, introduces nonindependence between samples. Statistical fluctuations due to finite sampling are also investigated and do not invalidate the criterion. A subsidiary result is found: the more homogeneous a population, the more easily its average properties can drift from the properties of its ancestor. © 1998 The American Physical Society.
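
    A closely related, simpler measure of inter-site correlation is the mutual information between two alignment columns, sketched below for integer-coded sequences; this illustrates the idea but is not the paper's exact conditional-probability criterion:

        import numpy as np

        def column_mutual_information(col_i, col_j, n_states=20):
            """MI between two alignment columns given as integer-coded arrays."""
            joint = np.zeros((n_states, n_states))
            for a, b in zip(col_i, col_j):
                joint[a, b] += 1.0
            joint /= joint.sum()                      # empirical joint distribution
            pi, pj = joint.sum(axis=1), joint.sum(axis=0)
            nz = joint > 0
            return np.sum(joint[nz] * np.log(joint[nz] / np.outer(pi, pj)[nz]))

        rng = np.random.default_rng(6)
        col_i = rng.integers(0, 20, 500)        # independent columns: MI near 0
        col_j = rng.integers(0, 20, 500)
        col_k = (col_i + 1) % 20                # fully correlated column: high MI
        print(column_mutual_information(col_i, col_j),
              column_mutual_information(col_i, col_k))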

  20. Synaptic inputs compete during rapid formation of the calyx of Held: a new model system for neural development.

    Science.gov (United States)

    Holcomb, Paul S; Hoffpauir, Brian K; Hoyson, Mitchell C; Jackson, Dakota R; Deerinck, Thomas J; Marrs, Glenn S; Dehoff, Marlin; Wu, Jonathan; Ellisman, Mark H; Spirou, George A

    2013-08-07

    Hallmark features of neural circuit development include early exuberant innervation followed by competition and pruning to a mature innervation topography. Several neural systems, including the neuromuscular junction and climbing fiber innervation of Purkinje cells, serve as models for studying neural development, in part because they establish a recognizable endpoint of monoinnervation of their targets and because the presynaptic terminals are large and easily monitored. We demonstrate here that calyx of Held (CH) innervation of its target, which forms a key element of auditory brainstem binaural circuitry, exhibits all of these characteristics. To investigate CH development, we made the first application of serial block-face scanning electron microscopy to neural development with fine temporal resolution and thereby accomplished the first time series for 3D ultrastructural analysis of neural circuit formation. This approach revealed a growth spurt of added apposed surface area (ASA) > 200 μm²/day centered on a single age at postnatal day 3 in mice, and an initial rapid phase of growth and competition that resolved to monoinnervation in two-thirds of cells within 3 days. This rapid growth occurred in parallel with an increase in action potential threshold, which may mediate selection of the strongest input as the winning competitor. ASAs of competing inputs were segregated on the cell body surface. These data suggest mechanisms that select "winning" inputs by regional reinforcement of the postsynaptic membrane to mediate the size and strength of competing synaptic inputs.