WorldWideScience

Sample records for model input requirements

  1. Input data requirements for performance modelling and monitoring of photovoltaic plants

    DEFF Research Database (Denmark)

    Gavriluta, Anamaria Florina; Spataru, Sergiu; Sera, Dezso

    2018-01-01

    This work investigates the input data requirements in the context of performance modeling of thin-film photovoltaic (PV) systems. The analysis focuses on the PVWatts performance model, well suited for on-line performance monitoring of PV strings due to its low number of parameters and high ... accuracy. The work aims to identify the minimum amount of input data required for parameterizing an accurate model of the PV plant. The analysis was carried out for both amorphous silicon (a-Si) and cadmium telluride (CdTe), using crystalline silicon (c-Si) as a base for comparison. In the studied cases...
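
The PVWatts DC power relation referred to above can be sketched in a few lines. This is a minimal sketch: the nameplate rating, temperature coefficient, and reference conditions below are illustrative defaults, not parameters fitted to the thin-film plants in the study.

```python
def pvwatts_dc_power(g_poa, t_cell, p_dc0=250.0, gamma=-0.0035,
                     g_ref=1000.0, t_ref=25.0):
    """PVWatts-style DC power: nameplate rating p_dc0 [W] scaled by
    plane-of-array irradiance g_poa [W/m^2] and corrected for cell
    temperature t_cell [degC] via the coefficient gamma [1/degC]."""
    return p_dc0 * (g_poa / g_ref) * (1.0 + gamma * (t_cell - t_ref))

# At reference conditions the model returns the nameplate rating.
print(pvwatts_dc_power(1000.0, 25.0))  # 250.0
```

The small number of parameters (essentially `p_dc0` and `gamma`) is what makes this model attractive for on-line monitoring with limited input data.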

  2. MARS code manual volume II: input requirements

    International Nuclear Information System (INIS)

    Chung, Bub Dong; Kim, Kyung Doo; Bae, Sung Won; Jeong, Jae Jun; Lee, Seung Wook; Hwang, Moon Kyu

    2010-02-01

    Korea Atomic Energy Research Institute (KAERI) conceived and started the development of the MARS code with the main objective of producing a state-of-the-art, realistic thermal-hydraulic systems analysis code with multi-dimensional analysis capability. MARS achieves this objective by very tightly integrating the one-dimensional RELAP5/MOD3 code with the multi-dimensional COBRA-TF code. The integration of the two codes is based on dynamic link library techniques, and the system pressure equation matrices of both codes are implicitly integrated and solved simultaneously. In addition, the equation of state (EOS) for light water was unified by replacing the EOS of COBRA-TF with that of RELAP5. This input manual provides a complete list of the inputs required to run MARS. The manual is divided largely into two parts: the one-dimensional part and the multi-dimensional part. The inputs for auxiliary functions, such as minor edit requests and graph formatting, are shared by the two parts, so mixed input is possible. The overall structure of the input is modeled on that of RELAP5, and the layout of the manual is therefore very similar to that of the RELAP5 manual. This similarity to the RELAP5 input is intentional, as the shared input scheme allows minimal modification between the inputs of RELAP5 and MARS 3.1. The MARS 3.1 development team would like to express its appreciation to the RELAP5 development team and the USNRC for making this manual possible.

  3. Mars 2.2 code manual: input requirements

    International Nuclear Information System (INIS)

    Chung, Bub Dong; Lee, Won Jae; Jeong, Jae Jun; Lee, Young Jin; Hwang, Moon Kyu; Kim, Kyung Doo; Lee, Seung Wook; Bae, Sung Won

    2003-07-01

    Korea Atomic Energy Research Institute (KAERI) conceived and started the development of the MARS code with the main objective of producing a state-of-the-art, realistic thermal-hydraulic systems analysis code with multi-dimensional analysis capability. MARS achieves this objective by very tightly integrating the one-dimensional RELAP5/MOD3 code with the multi-dimensional COBRA-TF code. The integration of the two codes is based on dynamic link library techniques, and the system pressure equation matrices of both codes are implicitly integrated and solved simultaneously. In addition, the equation of state (EOS) for light water was unified by replacing the EOS of COBRA-TF with that of RELAP5. This input manual provides a complete list of the inputs required to run MARS. The manual is divided largely into two parts: the one-dimensional part and the multi-dimensional part. The inputs for auxiliary functions, such as minor edit requests and graph formatting, are shared by the two parts, so mixed input is possible. The overall structure of the input is modeled on that of RELAP5, and the layout of the manual is therefore very similar to that of the RELAP5 manual. This similarity to the RELAP5 input is intentional, as the shared input scheme allows minimal modification between the inputs of RELAP5 and MARS. The MARS development team would like to express its appreciation to the RELAP5 development team and the USNRC for making this manual possible.

  4. Modeling and generating input processes

    Energy Technology Data Exchange (ETDEWEB)

    Johnson, M.E.

    1987-01-01

    This tutorial paper provides information relevant to the selection and generation of stochastic inputs to simulation studies. The primary area considered is multivariate but much of the philosophy at least is relevant to univariate inputs as well. 14 refs.
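
One recurring step in generating multivariate stochastic inputs is inducing a target correlation between otherwise independent draws, for example via the Cholesky factor of the correlation matrix. A minimal two-variable sketch (the correlation value 0.8 is illustrative):

```python
import math
import random

def correlated_normals(rho, n, seed=0):
    """Generate n pairs of standard normals with correlation rho, using the
    2x2 Cholesky factor of [[1, rho], [rho, 1]]: x2 = rho*z1 + sqrt(1-rho^2)*z2."""
    rng = random.Random(seed)
    pairs = []
    for _ in range(n):
        z1, z2 = rng.gauss(0.0, 1.0), rng.gauss(0.0, 1.0)
        pairs.append((z1, rho * z1 + math.sqrt(1.0 - rho * rho) * z2))
    return pairs

print(correlated_normals(0.8, 3))
```

The same idea underlies NORTA-style input generators: transform correlated normals through inverse CDFs to obtain correlated inputs with arbitrary marginals.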

  5. Input data required for specific performance assessment codes

    International Nuclear Information System (INIS)

    Seitz, R.R.; Garcia, R.S.; Starmer, R.J.; Dicke, C.A.; Leonard, P.R.; Maheras, S.J.; Rood, A.S.; Smith, R.W.

    1992-02-01

    The Department of Energy's National Low-Level Waste Management Program at the Idaho National Engineering Laboratory generated this report on input data requirements for computer codes to assist States and compacts in their performance assessments. This report gives generators, developers, operators, and users guidelines on which input data are required to satisfy 22 common performance assessment codes. Each of the codes is summarized, and a matrix table is provided to allow comparison of the various inputs required by the codes. This report does not determine or recommend which codes are preferable.

  6. Reducing external speedup requirements for input-queued crossbars

    DEFF Research Database (Denmark)

    Berger, Michael Stubert

    2005-01-01

    This paper presents a modified architecture for an input-queued switch that reduces external speedup. Maximal-size scheduling algorithms for input-buffered crossbars require a speedup between the port card and the switch card. The speedup is typically around 2, to compensate for the scheduler ... performance degradation. This implies that the required bandwidth between port card and switch card is 2 times the actual port speed, adding to cost and complexity. To reduce this bandwidth, a modified architecture is proposed that introduces a small amount of input and output memory on the switch card chip...

  7. Fishing input requirements of artisanal fishers in coastal ...

    African Journals Online (AJOL)

    Increased fish production through artisanal fishery can be achieved by making the needed inputs available. The fishing input requirements of artisanal fishers in coastal communities of Ondo State, Nigeria were studied. Data were obtained from two hundred and sixteen artisans using multistage random sampling ...

  8. Remote sensing inputs to water demand modeling

    Science.gov (United States)

    Estes, J. E.; Jensen, J. R.; Tinney, L. R.; Rector, M.

    1975-01-01

    In an attempt to determine the ability of remote sensing techniques to economically generate data required by water demand models, the Geography Remote Sensing Unit, in conjunction with the Kern County Water Agency of California, developed an analysis model. As a result, it was determined that agricultural cropland inventories utilizing both high-altitude photography and LANDSAT imagery can be conducted cost-effectively. In addition, by using average irrigation application rates in conjunction with cropland data, estimates of agricultural water demand can be generated. However, more accurate estimates are possible if crop type, acreage, and crop-specific application rates are employed. The effect of saline-alkali soils on water demand in the study area is also examined. Finally, reference is made to the detection and delineation of water tables that are perched near the surface by semi-permeable clay layers. Soil salinity prediction, automated crop identification on a by-field basis, and a potential input to the determination of zones of equal benefit taxation are briefly touched upon.
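
The demand estimate described above reduces to summing acreage times a crop-specific application rate. A sketch with hypothetical crops and rates (illustrative numbers, not the Kern County figures):

```python
# Hypothetical crop acreages and per-crop irrigation application rates.
acreage = {"cotton": 1200.0, "alfalfa": 800.0}        # acres
application_rate = {"cotton": 3.0, "alfalfa": 5.5}    # acre-feet per acre

def water_demand(acreage, rates):
    """Crop-specific agricultural demand: sum over crops of
    acreage times the per-crop application rate."""
    return sum(acreage[c] * rates[c] for c in acreage)

print(water_demand(acreage, application_rate))  # 8000.0 acre-feet
```

Replacing the per-crop rates with a single average rate gives the coarser estimate the abstract contrasts with this crop-specific version.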

  9. Software safety analysis on the model specified by NuSCR and SMV input language at requirements phase of software development life cycle using SMV

    International Nuclear Information System (INIS)

    Koh, Kwang Yong; Seong, Poong Hyun

    2005-01-01

    A safety-critical software process is composed of a development process, a verification and validation (V and V) process, and a safety analysis process. The safety analysis process has often been treated as an additional process and is not found in a conventional software process. However, software safety analysis (SSA) is required if software is applied to a safety system, and the SSA shall be performed independently for the safety software throughout the software development life cycle (SDLC). Of all the phases in software development, requirements engineering is generally considered to play the most critical role in determining the overall software quality. NASA data demonstrate that nearly 75% of failures found in operational software were caused by errors in the requirements. The verification process in the requirements phase checks the correctness of the software requirements specification, and the safety analysis process analyzes the safety-related properties in detail. In this paper, a method for safety analysis at the requirements phase of the software development life cycle using the symbolic model verifier (SMV) is proposed. Hazards are discovered by hazard analysis, and in order to use SMV for the safety analysis, the safety-related properties are expressed in computation tree logic (CTL).
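
For intuition, checking an AG ("always globally") safety property of the kind expressed in CTL amounts, on an explicit-state model, to verifying that every reachable state is safe. The toy sketch below uses a hypothetical three-state transition relation; SMV performs the same check symbolically on much larger state spaces.

```python
def holds_AG(init, succ, safe):
    """AG safe: every state reachable from init satisfies the predicate.
    Depth-first reachability over an explicit transition relation."""
    stack, seen = [init], {init}
    while stack:
        s = stack.pop()
        if not safe(s):
            return False
        for t in succ(s):
            if t not in seen:
                seen.add(t)
                stack.append(t)
    return True

# Hypothetical transition relation; state 3 is unreachable from state 0.
transitions = {0: [1, 2], 1: [0], 2: [2]}
succ = lambda s: transitions.get(s, [])
print(holds_AG(0, succ, lambda s: s != 3))  # True: AG (s != 3) holds
```

A counterexample trace, which SMV reports when such a check fails, corresponds here to the path from `init` to the first unsafe state found.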

  10. Agricultural and Environmental Input Parameters for the Biosphere Model

    Energy Technology Data Exchange (ETDEWEB)

    Kaylie Rasmuson; Kurt Rautenstrauch

    2003-06-20

    This analysis is one of nine technical reports that support the Environmental Radiation Model for Yucca Mountain Nevada (ERMYN) biosphere model. It documents input parameters for the biosphere model, and supports the use of the model to develop Biosphere Dose Conversion Factors (BDCF). The biosphere model is one of a series of process models supporting the Total System Performance Assessment (TSPA) for the repository at Yucca Mountain. The ERMYN provides the TSPA with the capability to perform dose assessments. A graphical representation of the documentation hierarchy for the ERMYN is presented in Figure 1-1. This figure shows the interrelationships between the major activities and their products (the analysis and model reports) that were planned in the biosphere Technical Work Plan (TWP, BSC 2003a). It should be noted that some documents identified in Figure 1-1 may be under development and therefore not available at the time this document is issued. The ''Biosphere Model Report'' (BSC 2003b) describes the ERMYN and its input parameters. This analysis report, ANL-MGR-MD-000006, ''Agricultural and Environmental Input Parameters for the Biosphere Model'', is one of the five reports that develop input parameters for the biosphere model. This report defines and justifies values for twelve parameters required in the biosphere model. These parameters are related to use of contaminated groundwater to grow crops. The parameter values recommended in this report are used in the soil, plant, and carbon-14 submodels of the ERMYN.

  11. Robust input design for nonlinear dynamic modeling of AUV.

    Science.gov (United States)

    Nouri, Nowrouz Mohammad; Valadi, Mehrdad

    2017-09-01

    Input design has a dominant role in developing the dynamic model of autonomous underwater vehicles (AUVs) through system identification. Optimal input design is the process of generating informative inputs that can be used to build a good-quality dynamic model of an AUV. In optimal input design, the desired input signal depends on the unknown system that is to be identified. In this paper, an input design approach that is robust to uncertainties in model parameters is used. The Bayesian robust design strategy is applied to design input signals for dynamic modeling of AUVs. The employed approach can design multiple inputs and apply constraints on an AUV system's inputs and outputs. Particle swarm optimization (PSO) is employed to solve the constrained robust optimization problem. The presented algorithm is used to design the input signals for an AUV, and the estimate obtained by robust input design is compared with that of the optimal input design. According to the results, the proposed input design can satisfy both robustness of constraints and optimality. Copyright © 2017 ISA. Published by Elsevier Ltd. All rights reserved.
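
A bare-bones PSO loop of the kind used to solve such optimization problems might look as follows. This is a sketch: the objective here is a generic quadratic stand-in, not the paper's constrained robust-design criterion, and the swarm parameters are common textbook defaults.

```python
import random

def pso_minimize(f, bounds, n_particles=30, iters=200, seed=1):
    """Minimal 1-D particle swarm optimizer (inertia 0.7, cognitive and
    social weights 1.5). Bound constraints are enforced by clamping."""
    rng = random.Random(seed)
    lo, hi = bounds
    xs = [rng.uniform(lo, hi) for _ in range(n_particles)]
    vs = [0.0] * n_particles
    pbest, pval = xs[:], [f(p) for p in xs]           # personal bests
    g = min(range(n_particles), key=lambda i: pval[i])
    gbest, gval = pbest[g], pval[g]                   # global best
    for _ in range(iters):
        for i in range(n_particles):
            r1, r2 = rng.random(), rng.random()
            vs[i] = (0.7 * vs[i] + 1.5 * r1 * (pbest[i] - xs[i])
                     + 1.5 * r2 * (gbest - xs[i]))
            xs[i] = min(max(xs[i] + vs[i], lo), hi)   # clamp to bounds
            val = f(xs[i])
            if val < pval[i]:
                pbest[i], pval[i] = xs[i], val
                if val < gval:
                    gbest, gval = xs[i], val
    return gbest, gval

best_x, best_val = pso_minimize(lambda t: (t - 2.0) ** 2, (-10.0, 10.0))
print(best_x, best_val)
```

In the robust-design setting, `f` would instead be an expectation of the identification criterion over the parameter prior, with constraint violations handled by penalties.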

  12. A probabilistic graphical model based stochastic input model construction

    International Nuclear Information System (INIS)

    Wan, Jiang; Zabaras, Nicholas

    2014-01-01

    Model reduction techniques have been widely used in modeling of high-dimensional stochastic input in uncertainty quantification tasks. However, the probabilistic modeling of random variables projected into reduced-order spaces presents a number of computational challenges. Due to the curse of dimensionality, the underlying dependence relationships between these random variables are difficult to capture. In this work, a probabilistic graphical model based approach is employed to learn the dependence by running a number of conditional independence tests using observation data. Thus a probabilistic model of the joint PDF is obtained and the PDF is factorized into a set of conditional distributions based on the dependence structure of the variables. The estimation of the joint PDF from data is then transformed to estimating conditional distributions under reduced dimensions. To improve the computational efficiency, a polynomial chaos expansion is further applied to represent the random field in terms of a set of standard random variables. This technique is combined with both linear and nonlinear model reduction methods. Numerical examples are presented to demonstrate the accuracy and efficiency of the probabilistic graphical model based stochastic input models. - Highlights: • Data-driven stochastic input models without the assumption of independence of the reduced random variables. • The problem is transformed to a Bayesian network structure learning problem. • Examples are given in flows in random media
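
The conditional-independence tests mentioned above are commonly built, for Gaussian data, on partial correlation: the correlation of two variables after the linear effect of a third is removed. A sketch with synthetic data in which x and y are dependent only through z (the noise level 0.3 is illustrative):

```python
import math
import random

def corr(xs, ys):
    """Pearson correlation coefficient."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sx = math.sqrt(sum((v - mx) ** 2 for v in xs))
    sy = math.sqrt(sum((v - my) ** 2 for v in ys))
    return sum((a - mx) * (b - my) for a, b in zip(xs, ys)) / (sx * sy)

def partial_corr(xs, ys, zs):
    """Correlation of xs and ys with the linear effect of zs removed --
    a standard conditional-independence test statistic for Gaussian data."""
    rxy, rxz, ryz = corr(xs, ys), corr(xs, zs), corr(ys, zs)
    return (rxy - rxz * ryz) / math.sqrt((1 - rxz ** 2) * (1 - ryz ** 2))

rng = random.Random(0)
z = [rng.gauss(0, 1) for _ in range(5000)]
x = [zi + 0.3 * rng.gauss(0, 1) for zi in z]  # x depends on z
y = [zi + 0.3 * rng.gauss(0, 1) for zi in z]  # y depends on z only
# x and y look strongly correlated, but are independent given z:
print(corr(x, y), partial_corr(x, y, z))
```

A near-zero partial correlation is exactly the kind of evidence that lets the graphical-model learner drop the direct edge between two reduced variables and factorize the joint PDF accordingly.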

  13. Inhalation Exposure Input Parameters for the Biosphere Model

    International Nuclear Information System (INIS)

    M. A. Wasiolek

    2003-01-01

    This analysis is one of the nine reports that support the Environmental Radiation Model for Yucca Mountain Nevada (ERMYN) biosphere model. The ''Biosphere Model Report'' (BSC 2003a) describes in detail the conceptual model as well as the mathematical model and its input parameters. This report documents a set of input parameters for the biosphere model, and supports the use of the model to develop biosphere dose conversion factors (BDCFs). The biosphere model is one of a series of process models supporting the Total System Performance Assessment (TSPA) for a Yucca Mountain repository. This report, ''Inhalation Exposure Input Parameters for the Biosphere Model'', is one of the five reports that develop input parameters for the biosphere model. A graphical representation of the documentation hierarchy for the ERMYN is presented in Figure 1-1. This figure shows the interrelationships among the products (i.e., analysis and model reports) developed for biosphere modeling, and the plan for development of the biosphere abstraction products for TSPA, as identified in the ''Technical Work Plan: for Biosphere Modeling and Expert Support'' (BSC 2003b). It should be noted that some documents identified in Figure 1-1 may be under development at the time this report is issued and therefore not available at that time. This figure is included to provide an understanding of how this analysis report contributes to biosphere modeling in support of the license application, and is not intended to imply that access to the listed documents is required to understand the contents of this analysis report. This analysis report defines and justifies values of mass loading, which is the total mass concentration of resuspended particles (e.g., dust, ash) in a volume of air. Measurements of mass loading are used in the air submodel of ERMYN to calculate concentrations of radionuclides in air surrounding crops and concentrations in air inhaled by a receptor. Concentrations in air to which the

  14. Inhalation Exposure Input Parameters for the Biosphere Model

    Energy Technology Data Exchange (ETDEWEB)

    M. A. Wasiolek

    2003-09-24

    This analysis is one of the nine reports that support the Environmental Radiation Model for Yucca Mountain Nevada (ERMYN) biosphere model. The ''Biosphere Model Report'' (BSC 2003a) describes in detail the conceptual model as well as the mathematical model and its input parameters. This report documents a set of input parameters for the biosphere model, and supports the use of the model to develop biosphere dose conversion factors (BDCFs). The biosphere model is one of a series of process models supporting the Total System Performance Assessment (TSPA) for a Yucca Mountain repository. This report, ''Inhalation Exposure Input Parameters for the Biosphere Model'', is one of the five reports that develop input parameters for the biosphere model. A graphical representation of the documentation hierarchy for the ERMYN is presented in Figure 1-1. This figure shows the interrelationships among the products (i.e., analysis and model reports) developed for biosphere modeling, and the plan for development of the biosphere abstraction products for TSPA, as identified in the ''Technical Work Plan: for Biosphere Modeling and Expert Support'' (BSC 2003b). It should be noted that some documents identified in Figure 1-1 may be under development at the time this report is issued and therefore not available at that time. This figure is included to provide an understanding of how this analysis report contributes to biosphere modeling in support of the license application, and is not intended to imply that access to the listed documents is required to understand the contents of this analysis report. This analysis report defines and justifies values of mass loading, which is the total mass concentration of resuspended particles (e.g., dust, ash) in a volume of air. Measurements of mass loading are used in the air submodel of ERMYN to calculate concentrations of radionuclides in air surrounding crops and concentrations in air

  15. Soil-related Input Parameters for the Biosphere Model

    International Nuclear Information System (INIS)

    A. J. Smith

    2003-01-01

    This analysis is one of the technical reports containing documentation of the Environmental Radiation Model for Yucca Mountain Nevada (ERMYN), a biosphere model supporting the Total System Performance Assessment (TSPA) for the geologic repository at Yucca Mountain. The biosphere model is one of a series of process models supporting the Total System Performance Assessment (TSPA) for the Yucca Mountain repository. A graphical representation of the documentation hierarchy for the ERMYN biosphere model is presented in Figure 1-1. This figure shows the interrelationships among the products (i.e., analysis and model reports) developed for biosphere modeling, and the plan for development of the biosphere abstraction products for TSPA, as identified in the ''Technical Work Plan: for Biosphere Modeling and Expert Support'' (BSC 2003 [163602]). It should be noted that some documents identified in Figure 1-1 may be under development at the time this report is issued and therefore not available. This figure is included to provide an understanding of how this analysis report contributes to biosphere modeling in support of the license application, and is not intended to imply that access to the listed documents is required to understand the contents of this report. This report, ''Soil Related Input Parameters for the Biosphere Model'', is one of the five analysis reports that develop input parameters for use in the ERMYN model. This report is the source documentation for the six biosphere parameters identified in Table 1-1. ''The Biosphere Model Report'' (BSC 2003 [160699]) describes in detail the conceptual model as well as the mathematical model and its input parameters. The purpose of this analysis was to develop the biosphere model parameters needed to evaluate doses from pathways associated with the accumulation and depletion of radionuclides in the soil. These parameters support the calculation of radionuclide concentrations in soil from on-going irrigation and ash

  16. On Optimal Input Design and Model Selection for Communication Channels

    Energy Technology Data Exchange (ETDEWEB)

    Li, Yanyan [ORNL; Djouadi, Seddik M [ORNL; Olama, Mohammed M [ORNL

    2013-01-01

    In this paper, the optimal model (structure) selection and input design which minimize the worst-case identification error for communication systems are provided. The problem is formulated using metric complexity theory in a Hilbert space setting. It is pointed out that model selection and input design can be handled independently. The Kolmogorov n-width is used to characterize the representation error introduced by model selection, while the Gelfand and time n-widths are used to represent the inherent error introduced by input design. After the model is selected, an optimal input which minimizes the worst-case identification error is shown to exist. In particular, it is proven that the optimal model for reducing the representation error is a Finite Impulse Response (FIR) model, and the optimal input is an impulse at the start of the observation interval. FIR models are widely used in communication systems, for example in Orthogonal Frequency Division Multiplexing (OFDM) systems.
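
The closing result has a simple concrete reading: if the channel is FIR, its response to an impulse at the start of the observation interval is the tap vector itself, so the taps can be read off without any matrix inversion. A sketch with illustrative taps (the channel values are assumptions, not from the paper):

```python
def fir_output(taps, u):
    """Convolve input u with an FIR channel: y[n] = sum_k taps[k] * u[n-k]."""
    return [sum(taps[k] * u[n - k]
                for k in range(len(taps)) if 0 <= n - k < len(u))
            for n in range(len(u))]

true_taps = [1.0, 0.5, -0.25]          # illustrative channel taps
impulse = [1.0, 0.0, 0.0, 0.0, 0.0]    # impulse at start of observation window
y = fir_output(true_taps, impulse)
estimated_taps = y[:len(true_taps)]    # the response directly reveals the taps
print(estimated_taps)  # [1.0, 0.5, -0.25]
```

With any other probing input, recovering the taps would require solving a least-squares problem; the impulse makes the identification map the identity, which is the intuition behind its worst-case optimality here.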

  17. Analytic uncertainty and sensitivity analysis of models with input correlations

    Science.gov (United States)

    Zhu, Yueying; Wang, Qiuping A.; Li, Wei; Cai, Xu

    2018-03-01

    Probabilistic uncertainty analysis is a common means of evaluating mathematical models. In mathematical modeling, the uncertainty in input variables is specified through distribution laws. Its contribution to the uncertainty in the model response is usually analyzed by assuming that input variables are independent of each other. However, correlated parameters often arise in practical applications. In the present paper, an analytic method is built for the uncertainty and sensitivity analysis of models in the presence of input correlations. With the method, it is straightforward to identify the importance of the independence and correlations of input variables in determining the model response. This allows one to decide whether or not the input correlations should be considered in practice. Numerical examples demonstrate the effectiveness and validity of our analytic method in the analysis of general models. The method is also applied to the uncertainty and sensitivity analysis of a deterministic HIV model.
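
In the simplest (first-order, delta-method) setting, the effect of input correlations on output uncertainty is visible directly: Var(Y) ≈ gᵀΣg, where g is the gradient at the input mean and Σ the input covariance, so off-diagonal covariance terms shift the result. A sketch with illustrative numbers (the paper's analytic method treats more general models than this linearization):

```python
def first_order_variance(grad, cov):
    """Delta-method output variance: Var(Y) ~ sum_ij g_i * Sigma_ij * g_j."""
    n = len(grad)
    return sum(grad[i] * cov[i][j] * grad[j]
               for i in range(n) for j in range(n))

grad = [2.0, 1.0]                       # df/dx1, df/dx2 at the input mean
cov_indep = [[1.0, 0.0], [0.0, 1.0]]    # independent inputs
cov_corr  = [[1.0, 0.5], [0.5, 1.0]]    # correlated inputs (rho = 0.5)
print(first_order_variance(grad, cov_indep))  # 5.0
print(first_order_variance(grad, cov_corr))   # 7.0: correlation adds 2*g1*g2*rho
```

Comparing the two results is a quick screen for whether input correlations matter enough to be carried through a full analysis, which is the practical decision the paper addresses.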

  18. Variance-based sensitivity indices for models with dependent inputs

    International Nuclear Information System (INIS)

    Mara, Thierry A.; Tarantola, Stefano

    2012-01-01

    Computational models are intensively used in engineering for risk analysis and the prediction of future outcomes. Uncertainty and sensitivity analyses are of great help for these purposes. Although several methods exist to perform variance-based sensitivity analysis of model output with independent inputs, only a few have been proposed in the literature for the case of dependent inputs. This is explained by the fact that the theoretical framework for the independent case is settled and a univocal set of variance-based sensitivity indices is defined there. In the present work, we propose a set of variance-based sensitivity indices to perform sensitivity analysis of models with dependent inputs. These measures allow us to distinguish between the mutual dependent contribution and the independent contribution of an input to the model response variance. Their definition relies on a specific orthogonalisation of the inputs and ANOVA representations of the model output. In the applications, we show the interest of the new sensitivity indices for model simplification. - Highlights: ► Uncertainty and sensitivity analyses are of great help in engineering. ► Several methods exist to perform variance-based sensitivity analysis of model output with independent inputs. ► We define a set of variance-based sensitivity indices for models with dependent inputs. ► Inputs' mutual contributions are distinguished from their independent contributions. ► Analytical and computational tests are performed and discussed.
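
For reference, the classical variance-based (first-order Sobol) indices for independent inputs, which the paper generalizes to dependent inputs via orthogonalisation, can be estimated by a pick-freeze Monte Carlo scheme. The sketch below uses Jansen's estimator on a linear test model whose indices are known analytically:

```python
import random

def sobol_first_order(f, d, n=20000, seed=0):
    """Jansen pick-freeze estimator of first-order Sobol indices for a
    model f with d independent U(0,1) inputs (the classical setting)."""
    rng = random.Random(seed)
    A = [[rng.random() for _ in range(d)] for _ in range(n)]
    B = [[rng.random() for _ in range(d)] for _ in range(n)]
    yA = [f(a) for a in A]
    mean = sum(yA) / n
    var = sum((y - mean) ** 2 for y in yA) / n
    S = []
    for i in range(d):
        # B with coordinate i taken from A: shares only input i with sample A
        ABi = [b[:i] + [a[i]] + b[i + 1:] for a, b in zip(A, B)]
        yABi = [f(v) for v in ABi]
        Si = (var - sum((ya - yab) ** 2
                        for ya, yab in zip(yA, yABi)) / (2 * n)) / var
        S.append(Si)
    return S

# Linear test model y = x1 + 2*x2: analytic indices are S1 = 0.2, S2 = 0.8.
S = sobol_first_order(lambda v: v[0] + 2.0 * v[1], 2)
print(S)
```

With dependent inputs, the sampling scheme above is no longer valid as-is, which is precisely the gap the orthogonalisation-based indices in the paper are designed to fill.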

  19. Soil-Related Input Parameters for the Biosphere Model

    International Nuclear Information System (INIS)

    Smith, A. J.

    2004-01-01

    This report presents one of the analyses that support the Environmental Radiation Model for Yucca Mountain Nevada (ERMYN). The ''Biosphere Model Report'' (BSC 2004 [DIRS 169460]) describes the details of the conceptual model as well as the mathematical model and the required input parameters. The biosphere model is one of a series of process models supporting the postclosure Total System Performance Assessment (TSPA) for the Yucca Mountain repository. A schematic representation of the documentation flow for the Biosphere input to TSPA is presented in Figure 1-1. This figure shows the evolutionary relationships among the products (i.e., analysis and model reports) developed for biosphere modeling, and the biosphere abstraction products for TSPA, as identified in the ''Technical Work Plan for Biosphere Modeling and Expert Support'' (TWP) (BSC 2004 [DIRS 169573]). This figure is included to provide an understanding of how this analysis report contributes to biosphere modeling in support of the license application, and is not intended to imply that access to the listed documents is required to understand the contents of this report. This report, ''Soil-Related Input Parameters for the Biosphere Model'', is one of the five analysis reports that develop input parameters for use in the ERMYN model. This report is the source documentation for the six biosphere parameters identified in Table 1-1. The purpose of this analysis was to develop the biosphere model parameters associated with the accumulation and depletion of radionuclides in the soil. These parameters support the calculation of radionuclide concentrations in soil from on-going irrigation or ash deposition and, as a direct consequence, radionuclide concentration in other environmental media that are affected by radionuclide concentrations in soil. The analysis was performed in accordance with the TWP (BSC 2004 [DIRS 169573]) where the governing procedure was defined as AP-SIII.9Q, ''Scientific Analyses''. This

  20. Soil-Related Input Parameters for the Biosphere Model

    Energy Technology Data Exchange (ETDEWEB)

    A. J. Smith

    2004-09-09

    This report presents one of the analyses that support the Environmental Radiation Model for Yucca Mountain Nevada (ERMYN). The ''Biosphere Model Report'' (BSC 2004 [DIRS 169460]) describes the details of the conceptual model as well as the mathematical model and the required input parameters. The biosphere model is one of a series of process models supporting the postclosure Total System Performance Assessment (TSPA) for the Yucca Mountain repository. A schematic representation of the documentation flow for the Biosphere input to TSPA is presented in Figure 1-1. This figure shows the evolutionary relationships among the products (i.e., analysis and model reports) developed for biosphere modeling, and the biosphere abstraction products for TSPA, as identified in the ''Technical Work Plan for Biosphere Modeling and Expert Support'' (TWP) (BSC 2004 [DIRS 169573]). This figure is included to provide an understanding of how this analysis report contributes to biosphere modeling in support of the license application, and is not intended to imply that access to the listed documents is required to understand the contents of this report. This report, ''Soil-Related Input Parameters for the Biosphere Model'', is one of the five analysis reports that develop input parameters for use in the ERMYN model. This report is the source documentation for the six biosphere parameters identified in Table 1-1. The purpose of this analysis was to develop the biosphere model parameters associated with the accumulation and depletion of radionuclides in the soil. These parameters support the calculation of radionuclide concentrations in soil from on-going irrigation or ash deposition and, as a direct consequence, radionuclide concentration in other environmental media that are affected by radionuclide concentrations in soil. The analysis was performed in accordance with the TWP (BSC 2004 [DIRS 169573]) where the governing procedure

  1. A didactic Input-Output model for territorial ecology analyses

    OpenAIRE

    Garry Mcdonald

    2010-01-01

    This report describes a didactic input-output modelling framework created jointly by the team at REEDS, Université de Versailles, and Dr Garry McDonald, Director, Market Economics Ltd. There are three key outputs associated with this framework: (i) a suite of didactic input-output models developed in Microsoft Excel, (ii) a technical report (this report) which describes the framework and the suite of models, and (iii) a two-week intensive workshop dedicated to the training of REEDS researcher...
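
The core of any didactic input-output model is the Leontief quantity relation x = Ax + d, solved as x = (I − A)⁻¹d. A minimal sketch with illustrative two-sector coefficients (the actual Excel models from the report are not reproduced here):

```python
def leontief_output(A, d, iters=200):
    """Solve the Leontief system x = A x + d by fixed-point iteration
    (converges when the technical-coefficient matrix A is productive,
    i.e. has spectral radius below 1)."""
    x = d[:]
    n = len(d)
    for _ in range(iters):
        x = [sum(A[i][j] * x[j] for j in range(n)) + d[i] for i in range(n)]
    return x

A = [[0.2, 0.3],
     [0.1, 0.4]]      # illustrative inter-industry technical coefficients
d = [100.0, 50.0]     # final demand by sector
x = leontief_output(A, d)
print(x)  # gross output required to satisfy final demand
```

The fixed-point iteration also has an economic reading: each pass adds one more round of indirect input requirements, so the solution is the sum of direct and all indirect demands.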

  2. Can Simulation Credibility Be Improved Using Sensitivity Analysis to Understand Input Data Effects on Model Outcome?

    Science.gov (United States)

    Myers, Jerry G.; Young, M.; Goodenow, Debra A.; Keenan, A.; Walton, M.; Boley, L.

    2015-01-01

    Model and simulation (MS) credibility is defined as the quality that elicits belief or trust in MS results. NASA-STD-7009 [1] delineates eight components (Verification, Validation, Input Pedigree, Results Uncertainty, Results Robustness, Use History, MS Management, People Qualifications) that address quantifying model credibility, and provides guidance to model developers, analysts, and end users for assessing MS credibility. Of the eight characteristics, input pedigree, or the quality of the data used to develop model input parameters, governing functions, or initial conditions, can vary significantly. These data quality differences have varying consequences across the range of MS applications. NASA-STD-7009 requires that the lowest input data quality be used to represent the entire set of input data when scoring the input pedigree credibility of the model. This requirement provides a conservative assessment of model inputs, and maximizes the communication of the potential level of risk of using model outputs. Unfortunately, in practice, this may result in overly pessimistic communication of the MS output, undermining the credibility of simulation predictions to decision makers. This presentation proposes an alternative assessment mechanism, utilizing results parameter robustness, also known as model input sensitivity, to improve the credibility scoring process for specific simulations.
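
The contrast between the standard's min rule and a sensitivity-weighted alternative can be sketched numerically. Note the weighted formula is only an illustration of the idea the presentation proposes, not a formula from NASA-STD-7009, and the input names, scores, and sensitivities are hypothetical.

```python
def pedigree_min(scores):
    """NASA-STD-7009 conservative rule: the weakest input sets the score."""
    return min(scores.values())

def pedigree_weighted(scores, sensitivities):
    """Hypothetical alternative: weight each input's pedigree score by the
    output's sensitivity to it, so a low-quality input the output barely
    responds to drags the overall score down less."""
    total = sum(sensitivities.values())
    return sum(scores[k] * sensitivities[k] for k in scores) / total

scores = {"density": 4, "viscosity": 1, "inlet_temp": 3}         # 0-4 scale
sens = {"density": 0.60, "viscosity": 0.05, "inlet_temp": 0.35}  # normalized
print(pedigree_min(scores), pedigree_weighted(scores, sens))  # 1 vs 3.5
```

Here the min rule reports the pedigree of the almost-irrelevant viscosity data, while the weighted score reflects that the influential inputs are well characterized, which is the over-pessimism the presentation argues against.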

  3. Effective Moisture Penetration Depth Model for Residential Buildings: Sensitivity Analysis and Guidance on Model Inputs

    Energy Technology Data Exchange (ETDEWEB)

    Woods, Jason D [National Renewable Energy Laboratory (NREL), Golden, CO (United States); Winkler, Jonathan M [National Renewable Energy Laboratory (NREL), Golden, CO (United States)

    2018-01-31

    Moisture buffering by building materials has a significant impact on a building's indoor humidity, and building energy simulations need to model this buffering to accurately predict the humidity. Researchers requiring a simple moisture-buffering approach typically rely on the effective-capacitance model, which has been shown to be a poor predictor of actual indoor humidity. This paper describes an alternative two-layer effective moisture penetration depth (EMPD) model and its inputs. While this model has been used previously, there is a need to understand its sensitivity to uncertain inputs. In this paper, we consider the moisture-adsorbent materials exposed to the interior air: drywall, wood, and carpet. We use a global sensitivity analysis to determine which inputs are most influential and how the model's prediction capability degrades due to uncertainty in these inputs. We then compare the model's humidity prediction with measured data from five houses, which shows that this model, with a set of simple inputs, can give reasonable predictions of the indoor humidity.
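
The EMPD idea rests on the penetration depth of a periodic moisture excitation, commonly written d = sqrt(D_w · t_p / π): only a thin surface layer of each material exchanges moisture over one cycle. A sketch with an illustrative effective diffusivity (an assumption, not a value from the paper):

```python
import math

def empd_depth(d_w, t_p):
    """Effective moisture penetration depth [m] for an effective vapor
    diffusivity d_w [m^2/s] under a periodic excitation of period t_p [s]."""
    return math.sqrt(d_w * t_p / math.pi)

# Daily humidity cycle with an illustrative effective diffusivity:
d = empd_depth(1.0e-9, 24 * 3600.0)
print(d)  # on the order of a few millimetres
```

Because the depth enters through a square root, moderate uncertainty in the diffusivity input translates into smaller relative uncertainty in the buffering layer thickness, one reason a sensitivity study of the model inputs is informative.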

  4. Environmental Transport Input Parameters for the Biosphere Model

    International Nuclear Information System (INIS)

    Wasiolek, M. A.

    2003-01-01

    This analysis report is one of the technical reports documenting the Environmental Radiation Model for Yucca Mountain Nevada (ERMYN), a biosphere model supporting the total system performance assessment (TSPA) for the geologic repository at Yucca Mountain. A graphical representation of the documentation hierarchy for the ERMYN is presented in Figure 1-1. This figure shows relationships among the reports developed for biosphere modeling and biosphere abstraction products for the TSPA, as identified in the ''Technical Work Plan: for Biosphere Modeling and Expert Support'' (TWP) (BSC 2003 [163602]). Some documents in Figure 1-1 may be under development and not available when this report is issued. This figure provides an understanding of how this report contributes to biosphere modeling in support of the license application (LA), but access to the listed documents is not required to understand the contents of this report. This report is one of the reports that develops input parameter values for the biosphere model. The ''Biosphere Model Report'' (BSC 2003 [160699]) describes the conceptual model, the mathematical model, and the input parameters. The purpose of this analysis is to develop biosphere model parameter values related to radionuclide transport and accumulation in the environment. These parameters support calculations of radionuclide concentrations in the environmental media (e.g., soil, crops, animal products, and air) resulting from a given radionuclide concentration at the source of contamination (i.e., either in groundwater or volcanic ash). The analysis was performed in accordance with the TWP (BSC 2003 [163602]). This analysis develops values of parameters associated with many features, events, and processes (FEPs) applicable to the reference biosphere (DTN: M00303SEPFEPS2.000 [162452]), which are addressed in the biosphere model (BSC 2003 [160699]). The treatment of these FEPs is described in BSC (2003 [160699], Section 6.2). Parameter values

  5. Environmental Transport Input Parameters for the Biosphere Model

    Energy Technology Data Exchange (ETDEWEB)

    M. A. Wasiolek

    2003-06-27

    This analysis report is one of the technical reports documenting the Environmental Radiation Model for Yucca Mountain Nevada (ERMYN), a biosphere model supporting the total system performance assessment (TSPA) for the geologic repository at Yucca Mountain. A graphical representation of the documentation hierarchy for the ERMYN is presented in Figure 1-1. This figure shows relationships among the reports developed for biosphere modeling and biosphere abstraction products for the TSPA, as identified in the ''Technical Work Plan: for Biosphere Modeling and Expert Support'' (TWP) (BSC 2003 [163602]). Some documents in Figure 1-1 may be under development and not available when this report is issued. This figure provides an understanding of how this report contributes to biosphere modeling in support of the license application (LA), but access to the listed documents is not required to understand the contents of this report. This report is one of the reports that develops input parameter values for the biosphere model. The ''Biosphere Model Report'' (BSC 2003 [160699]) describes the conceptual model, the mathematical model, and the input parameters. The purpose of this analysis is to develop biosphere model parameter values related to radionuclide transport and accumulation in the environment. These parameters support calculations of radionuclide concentrations in the environmental media (e.g., soil, crops, animal products, and air) resulting from a given radionuclide concentration at the source of contamination (i.e., either in groundwater or volcanic ash). The analysis was performed in accordance with the TWP (BSC 2003 [163602]). This analysis develops values of parameters associated with many features, events, and processes (FEPs) applicable to the reference biosphere (DTN: M00303SEPFEPS2.000 [162452]), which are addressed in the biosphere model (BSC 2003 [160699]). The treatment of these FEPs is described in BSC (2003 [160699

  6. Global sensitivity analysis of computer models with functional inputs

    International Nuclear Information System (INIS)

    Iooss, Bertrand; Ribatet, Mathieu

    2009-01-01

    Global sensitivity analysis is used to quantify the influence of uncertain model inputs on the response variability of a numerical model. The common quantitative methods are appropriate for computer codes having scalar model inputs. This paper aims at illustrating different variance-based sensitivity analysis techniques, based on the so-called Sobol's indices, when some model inputs are functional, such as stochastic processes or random spatial fields. In this work, we focus on computer codes with large CPU time requirements, which need a preliminary metamodeling step before performing the sensitivity analysis. We propose the use of the joint modeling approach, i.e., modeling simultaneously the mean and the dispersion of the code outputs using two interlinked generalized linear models (GLMs) or generalized additive models (GAMs). The 'mean model' allows one to estimate the sensitivity indices of each scalar model input, while the 'dispersion model' allows one to derive the total sensitivity index of the functional model inputs. The proposed approach is compared to some classical sensitivity analysis methodologies on an analytical function. Lastly, the new methodology is applied to an industrial computer code that simulates nuclear fuel irradiation.
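For scalar inputs, the variance-based indices this work builds on can be estimated directly by Monte Carlo. The sketch below uses a Saltelli-style pick-freeze estimator on the Ishigami function, a standard analytical benchmark (not the industrial code from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)

def ishigami(x, a=7.0, b=0.1):
    # Common analytical benchmark for sensitivity analysis.
    return np.sin(x[:, 0]) + a * np.sin(x[:, 1]) ** 2 \
        + b * x[:, 2] ** 4 * np.sin(x[:, 0])

n, d = 100_000, 3
A = rng.uniform(-np.pi, np.pi, (n, d))   # two independent input samples
B = rng.uniform(-np.pi, np.pi, (n, d))
yA, yB = ishigami(A), ishigami(B)
var_y = yA.var()

# First-order Sobol index of input i: replace column i of A by B's column
# ("pick-freeze") and correlate the output change with yB.
S = []
for i in range(d):
    ABi = A.copy()
    ABi[:, i] = B[:, i]
    S.append(np.mean(yB * (ishigami(ABi) - yA)) / var_y)
# Analytical values for comparison: S1 = 0.314, S2 = 0.442, S3 = 0.
```

A functional input would enter such a scheme only through its total index, which is what motivates the paper's joint mean/dispersion metamodel.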

  7. Use of regional climate model simulations as an input for hydrological models for the Hindukush-Karakorum-Himalaya region

    NARCIS (Netherlands)

    Akhtar, M.; Ahmad, N.; Booij, Martijn J.

    2009-01-01

    The most important climatological inputs required for the calibration and validation of hydrological models are temperature and precipitation that can be derived from observational records or alternatively from regional climate models (RCMs). In this paper, meteorological station observations and

  8. Computer supported estimation of input data for transportation models

    OpenAIRE

    Cenek, Petr; Tarábek, Peter; Kopf, Marija

    2010-01-01

    Control and management of transportation systems frequently rely on optimization or simulation methods based on a suitable model. Such a model uses optimization or simulation procedures and correct input data. The input data define transportation infrastructure and transportation flows. Data acquisition is a costly process and so an efficient approach is highly desirable. The infrastructure can be recognized from drawn maps using segmentation, thinning and vectorization. The accurate definiti...

  9. Applications of flocking algorithms to input modeling for agent movement

    OpenAIRE

    Singham, Dashi; Therkildsen, Meredith; Schruben, Lee

    2011-01-01

    Refereed conference paper. The article of record as published can be found at http://dx.doi.org/10.1109/WSC.2011.6147953. Simulation flocking has been introduced as a method for generating simulation input from multivariate dependent time series for sensitivity and risk analysis. It can be applied to data for which a parametric model is not readily available or imposes too many restrictions on the possible inputs. This method uses techniques from agent-based modeling to generate ...

  10. Space market model space industry input-output model

    Science.gov (United States)

    Hodgin, Robert F.; Marchesini, Roberto

    1987-01-01

    The goal of the Space Market Model (SMM) is to develop an information resource for the space industry. The SMM is intended to contain information appropriate for decision making in the space industry. The objectives of the SMM are to: (1) assemble information related to the development of the space business; (2) construct an adequate description of the emerging space market; (3) disseminate the information on the space market to forecasters and planners in government agencies and private corporations; and (4) provide timely analyses and forecasts of critical elements of the space market. An input-output model of market activity is proposed which is capable of transforming raw data into useful information for decision makers and policy makers dealing with the space sector.
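The mechanics of an input-output model can be sketched in a few lines (the three-sector coefficient matrix and demand vector below are invented for illustration, not SMM data):

```python
import numpy as np

# Technical coefficients: A[i, j] is the input from sector i required per
# unit of output from sector j (hypothetical three-sector economy).
A = np.array([[0.10, 0.30, 0.05],
              [0.20, 0.05, 0.10],
              [0.05, 0.15, 0.10]])
final_demand = np.array([100.0, 50.0, 80.0])

# Leontief model: total output x satisfies x = A x + d, so x = (I - A)^-1 d.
x = np.linalg.solve(np.eye(3) - A, final_demand)
```

Each entry of `x` is the gross output a sector must produce to satisfy both final demand and the intermediate demand of the other sectors.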

  11. Metocean input data for drift models applications: Loustic study

    International Nuclear Information System (INIS)

    Michon, P.; Bossart, C.; Cabioc'h, M.

    1995-01-01

    Real-time monitoring and crisis management of oil slicks or drifting floating structures require good knowledge of the local winds, waves, and currents used as input data for operational drift models. Fortunately, thanks to world-wide, all-weather coverage, satellite measurements have recently enabled the introduction of new methods for remote sensing of the marine environment. Within a French joint industry project, a procedure has been developed that combines satellite measurements with metocean models in order to provide marine operators' drift models with reliable wind, wave, and current analyses and short-term forecasts. In particular, a model now allows the calculation of the drift current under the joint action of wind and sea state, thus radically improving on the classical laws. This global procedure either uses satellite wind and wave measurements directly (if available in the study area) or uses them indirectly, to calibrate metocean model results which are brought to the oil slick or floating structure location. The operational use of this procedure is reported here with an example of floating structure drift offshore of the Brittany coast.

  12. Bayesian tsunami fragility modeling considering input data uncertainty

    OpenAIRE

    De Risi, Raffaele; Goda, Katsu; Mori, Nobuhito; Yasuda, Tomohiro

    2017-01-01

    Empirical tsunami fragility curves are developed based on a Bayesian framework by accounting for uncertainty of input tsunami hazard data in a systematic and comprehensive manner. Three fragility modeling approaches, i.e. lognormal method, binomial logistic method, and multinomial logistic method, are considered, and are applied to extensive tsunami damage data for the 2011 Tohoku earthquake. A unique aspect of this study is that uncertainty of tsunami inundation data (i.e. input hazard data ...
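In its non-Bayesian form, the lognormal method mentioned above reduces to maximum-likelihood fitting of a curve Φ((ln h − μ)/β) to binary damage data. A sketch on synthetic data follows (the depths and the "true" curve are invented; the paper's Bayesian treatment additionally propagates uncertainty in the hazard data h itself):

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

rng = np.random.default_rng(1)

# Synthetic inundation depths (m) and damage outcomes drawn from a "true"
# lognormal fragility curve with median 2.0 m and log-std beta 0.5.
h = rng.lognormal(mean=0.5, sigma=0.8, size=2000)
damage = rng.random(2000) < norm.cdf((np.log(h) - np.log(2.0)) / 0.5)

def neg_log_lik(theta):
    mu, beta = theta
    p = np.clip(norm.cdf((np.log(h) - mu) / beta), 1e-12, 1 - 1e-12)
    return -np.sum(np.where(damage, np.log(p), np.log(1 - p)))

res = minimize(neg_log_lik, x0=[0.0, 1.0], method="Nelder-Mead")
median_est, beta_est = np.exp(res.x[0]), res.x[1]
```

With 2000 observations the recovered median and beta land close to the generating values; the Bayesian framework replaces the point estimate with a posterior over (μ, β).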

  13. Agricultural and Environmental Input Parameters for the Biosphere Model

    International Nuclear Information System (INIS)

    K. Rasmuson; K. Rautenstrauch

    2004-01-01

    This analysis is one of 10 technical reports that support the Environmental Radiation Model for Yucca Mountain Nevada (ERMYN) (i.e., the biosphere model). It documents development of agricultural and environmental input parameters for the biosphere model, and supports the use of the model to develop biosphere dose conversion factors (BDCFs). The biosphere model is one of a series of process models supporting the total system performance assessment (TSPA) for the repository at Yucca Mountain. The ERMYN provides the TSPA with the capability to perform dose assessments. A graphical representation of the documentation hierarchy for the ERMYN is presented in Figure 1-1. This figure shows the interrelationships between the major activities and their products (the analysis and model reports) that were planned in ''Technical Work Plan for Biosphere Modeling and Expert Support'' (BSC 2004 [DIRS 169573]). The ''Biosphere Model Report'' (BSC 2004 [DIRS 169460]) describes the ERMYN and its input parameters

  14. Input-output analysis of energy requirements for short rotation, intensive culture, woody biomass

    International Nuclear Information System (INIS)

    Strauss, C.H.; Grado, S.C.

    1992-01-01

    A production model for short rotation, intensive culture (SRIC) plantations was developed to determine the energy and financial cost of woody biomass. The model was based on hybrid poplars planted on good quality agricultural sites at a density of 2100 cuttings ha⁻¹, with average annual growth forecast at 16 metric tonnes, oven dry (Mg(OD)). Energy and financial analyses showed preharvest costs of 4381 megajoules (MJ) Mg⁻¹(OD) and $16 (US) Mg⁻¹(OD). Harvesting and transportation requirements increased the total costs to 6130 MJ Mg⁻¹(OD) and $39 Mg⁻¹(OD) for the delivered material. On an energy cost basis, the principal input was land, whereas on a financial basis, costs were more uniformly distributed among equipment, land, labor, and materials and fuel.
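The harvest-and-transport share follows directly from the abstract's figures; the net energy ratio below additionally assumes a typical higher heating value for oven-dry poplar (~19.6 GJ Mg⁻¹), which is a literature value, not given in the abstract:

```python
# Energy cost per oven-dry megagram (Mg) of delivered biomass, from the
# abstract's figures.
preharvest_mj = 4381.0                      # MJ/Mg(OD), up to harvest
delivered_mj = 6130.0                       # MJ/Mg(OD), delivered
harvest_transport_mj = delivered_mj - preharvest_mj   # harvest + transport

# Net energy ratio, assuming ~19.6 GJ/Mg(OD) higher heating value for
# oven-dry hybrid poplar (an assumed typical value, not from the abstract).
energy_ratio = 19600.0 / delivered_mj
```

So harvesting and transport account for roughly 1750 MJ Mg⁻¹(OD), and under the assumed heating value the delivered biomass returns about three times the energy invested.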

  15. Quality assurance of weather data for agricultural system model input

    Science.gov (United States)

    It is well known that crop production and hydrologic variation on watersheds is weather related. Rarely, however, is meteorological data quality checks reported for agricultural systems model research. We present quality assurance procedures for agricultural system model weather data input. Problems...

  16. Gaussian-input Gaussian mixture model for representing density maps and atomic models.

    Science.gov (United States)

    Kawabata, Takeshi

    2018-03-06

    A new Gaussian mixture model (GMM) has been developed for better representations of both atomic models and electron microscopy 3D density maps. The standard GMM algorithm employs an EM algorithm to determine the parameters, accepting a set of 3D points with weights corresponding to voxel or atomic centers. Although the standard algorithm worked reasonably well, it had three problems. First, it ignored the size (voxel width or atomic radius) of the input, and thus it could lead to a GMM with a smaller spread than the input. Second, the algorithm had a singularity problem, as it sometimes stopped the iterative procedure due to a Gaussian function with almost zero variance. Third, a map with a large number of voxels required a long computation time for conversion to a GMM. To solve these problems, we have introduced a Gaussian-input GMM algorithm, which considers the input atoms or voxels as a set of Gaussian functions. The standard EM algorithm of GMM was extended to optimize the new GMM. The new GMM has a radius of gyration identical to that of the input, and does not suddenly stop due to the singularity problem. For fast computation, we have introduced down-sampled Gaussian functions (DSG) by merging neighboring voxels into an anisotropic Gaussian function. This provides a GMM with thousands of Gaussian functions in a short computation time. We have also introduced a DSG-input GMM: the Gaussian-input GMM with the DSG as the input. This new algorithm is much faster than the standard algorithm. Copyright © 2018 The Author(s). Published by Elsevier Inc. All rights reserved.
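The standard point-input GMM that the paper extends can be sketched with a plain EM loop (isotropic covariances for brevity; the 3D points below are synthetic stand-ins for atomic centers, not real map data):

```python
import numpy as np

rng = np.random.default_rng(2)

# Two synthetic clusters of 3D points standing in for atomic centers.
pts = np.vstack([rng.normal([0.0, 0.0, 0.0], 0.5, (200, 3)),
                 rng.normal([4.0, 0.0, 0.0], 0.5, (200, 3))])

def fit_isotropic_gmm(x, k=2, iters=60):
    n, d = x.shape
    mu = x[np.linspace(0, n - 1, k).astype(int)].copy()  # spread-out init
    var = np.full(k, x.var())
    w = np.full(k, 1.0 / k)
    for _ in range(iters):
        # E-step: responsibilities r[n, k] under isotropic Gaussians.
        sq = ((x[:, None, :] - mu[None]) ** 2).sum(-1)
        logp = np.log(w) - 0.5 * d * np.log(2 * np.pi * var) - sq / (2 * var)
        logp -= logp.max(axis=1, keepdims=True)
        r = np.exp(logp)
        r /= r.sum(axis=1, keepdims=True)
        # M-step: re-estimate weights, means, and per-component variance.
        nk = r.sum(axis=0)
        w = nk / n
        mu = (r.T @ x) / nk[:, None]
        sq = ((x[:, None, :] - mu[None]) ** 2).sum(-1)
        var = (r * sq).sum(axis=0) / (d * nk)
    return w, mu, var

w, mu, var = fit_isotropic_gmm(pts)
```

The singularity and spread problems described above arise in exactly this variance update: a component collapsing onto a single point drives `var` toward zero. The paper's Gaussian-input variant treats each input point as a small Gaussian, which effectively adds the input's own spread into that update.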

  17. Inhalation Exposure Input Parameters for the Biosphere Model

    International Nuclear Information System (INIS)

    K. Rautenstrauch

    2004-01-01

    This analysis is one of 10 reports that support the Environmental Radiation Model for Yucca Mountain, Nevada (ERMYN) biosphere model. The ''Biosphere Model Report'' (BSC 2004 [DIRS 169460]) describes in detail the conceptual model as well as the mathematical model and its input parameters. This report documents development of input parameters for the biosphere model that are related to atmospheric mass loading and supports the use of the model to develop biosphere dose conversion factors (BDCFs). The biosphere model is one of a series of process models supporting the total system performance assessment (TSPA) for a Yucca Mountain repository. Inhalation Exposure Input Parameters for the Biosphere Model is one of five reports that develop input parameters for the biosphere model. A graphical representation of the documentation hierarchy for the ERMYN is presented in Figure 1-1. This figure shows the interrelationships among the products (i.e., analysis and model reports) developed for biosphere modeling, and the plan for development of the biosphere abstraction products for TSPA, as identified in the Technical Work Plan for Biosphere Modeling and Expert Support (BSC 2004 [DIRS 169573]). This analysis report defines and justifies values of mass loading for the biosphere model. Mass loading is the total mass concentration of resuspended particles (e.g., dust, ash) in a volume of air. Mass loading values are used in the air submodel of ERMYN to calculate concentrations of radionuclides in air inhaled by a receptor and concentrations in air surrounding crops. Concentrations in air to which the receptor is exposed are then used in the inhalation submodel to calculate the dose contribution to the receptor from inhalation of contaminated airborne particles. Concentrations in air surrounding plants are used in the plant submodel to calculate the concentrations of radionuclides in foodstuffs contributed from uptake by foliar interception

  18. Inhalation Exposure Input Parameters for the Biosphere Model

    Energy Technology Data Exchange (ETDEWEB)

    K. Rautenstrauch

    2004-09-10

    This analysis is one of 10 reports that support the Environmental Radiation Model for Yucca Mountain, Nevada (ERMYN) biosphere model. The ''Biosphere Model Report'' (BSC 2004 [DIRS 169460]) describes in detail the conceptual model as well as the mathematical model and its input parameters. This report documents development of input parameters for the biosphere model that are related to atmospheric mass loading and supports the use of the model to develop biosphere dose conversion factors (BDCFs). The biosphere model is one of a series of process models supporting the total system performance assessment (TSPA) for a Yucca Mountain repository. Inhalation Exposure Input Parameters for the Biosphere Model is one of five reports that develop input parameters for the biosphere model. A graphical representation of the documentation hierarchy for the ERMYN is presented in Figure 1-1. This figure shows the interrelationships among the products (i.e., analysis and model reports) developed for biosphere modeling, and the plan for development of the biosphere abstraction products for TSPA, as identified in the Technical Work Plan for Biosphere Modeling and Expert Support (BSC 2004 [DIRS 169573]). This analysis report defines and justifies values of mass loading for the biosphere model. Mass loading is the total mass concentration of resuspended particles (e.g., dust, ash) in a volume of air. Mass loading values are used in the air submodel of ERMYN to calculate concentrations of radionuclides in air inhaled by a receptor and concentrations in air surrounding crops. Concentrations in air to which the receptor is exposed are then used in the inhalation submodel to calculate the dose contribution to the receptor from inhalation of contaminated airborne particles. Concentrations in air surrounding plants are used in the plant submodel to calculate the concentrations of radionuclides in foodstuffs contributed from uptake by foliar interception.

  19. Investigation of RADTRAN Stop Model input parameters for truck stops

    International Nuclear Information System (INIS)

    Griego, N.R.; Smith, J.D.; Neuhauser, K.S.

    1996-01-01

    RADTRAN is a computer code for estimating the risks and consequences associated with transport of radioactive materials (RAM). RADTRAN was developed and is maintained by Sandia National Laboratories for the US Department of Energy (DOE). For incident-free transportation, the dose to persons exposed while the shipment is stopped is frequently a major percentage of the overall dose. This dose is referred to as Stop Dose and is calculated by the Stop Model. Because stop dose is a significant portion of the overall dose associated with RAM transport, the values used as input for the Stop Model are important. Therefore, an investigation of typical values of RADTRAN stop parameters for truck stops was performed. The resulting data from these investigations were analyzed to provide mean values, standard deviations, and histograms. The mean values can be used when an analyst does not have a basis for selecting other input values for the Stop Model. In addition, the histograms and their characteristics can be used to guide statistical sampling techniques to measure sensitivity of the RADTRAN-calculated stop dose to uncertainties in the Stop Model input parameters. This paper discusses the details and presents the results of the investigation of Stop Model input parameters at truck stops.
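The summary statistics described can be produced from any sample of observed stop durations; the gamma-distributed sample below is synthetic and purely illustrative, not the surveyed values from the paper:

```python
import numpy as np

rng = np.random.default_rng(3)

# Synthetic stop durations in hours, standing in for surveyed truck stops.
stop_hours = rng.gamma(shape=2.0, scale=0.4, size=500)

mean_h = stop_hours.mean()        # candidate default Stop Model input
std_h = stop_hours.std(ddof=1)
counts, edges = np.histogram(stop_hours, bins=10)
# The histogram shape can then guide the sampling distribution used in a
# sensitivity study of the calculated stop dose.
```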

  20. Environmental Transport Input Parameters for the Biosphere Model

    International Nuclear Information System (INIS)

    M. Wasiolek

    2004-01-01

    This analysis report is one of the technical reports documenting the Environmental Radiation Model for Yucca Mountain, Nevada (ERMYN), a biosphere model supporting the total system performance assessment for the license application (TSPA-LA) for the geologic repository at Yucca Mountain. A graphical representation of the documentation hierarchy for the ERMYN is presented in Figure 1-1. This figure shows relationships among the reports developed for biosphere modeling and biosphere abstraction products for the TSPA-LA, as identified in the ''Technical Work Plan for Biosphere Modeling and Expert Support'' (BSC 2004 [DIRS 169573]) (TWP). This figure provides an understanding of how this report contributes to biosphere modeling in support of the license application (LA). This report is one of the five reports that develop input parameter values for the biosphere model. The ''Biosphere Model Report'' (BSC 2004 [DIRS 169460]) describes the conceptual model and the mathematical model. The input parameter reports, shown to the right of the Biosphere Model Report in Figure 1-1, contain detailed description of the model input parameters. The output of this report is used as direct input in the ''Nominal Performance Biosphere Dose Conversion Factor Analysis'' and in the ''Disruptive Event Biosphere Dose Conversion Factor Analysis'' that calculate the values of biosphere dose conversion factors (BDCFs) for the groundwater and volcanic ash exposure scenarios, respectively. The purpose of this analysis was to develop biosphere model parameter values related to radionuclide transport and accumulation in the environment. These parameters support calculations of radionuclide concentrations in the environmental media (e.g., soil, crops, animal products, and air) resulting from a given radionuclide concentration at the source of contamination (i.e., either in groundwater or in volcanic ash). The analysis was performed in accordance with the TWP (BSC 2004 [DIRS 169573])

  1. Environmental Transport Input Parameters for the Biosphere Model

    Energy Technology Data Exchange (ETDEWEB)

    M. Wasiolek

    2004-09-10

    This analysis report is one of the technical reports documenting the Environmental Radiation Model for Yucca Mountain, Nevada (ERMYN), a biosphere model supporting the total system performance assessment for the license application (TSPA-LA) for the geologic repository at Yucca Mountain. A graphical representation of the documentation hierarchy for the ERMYN is presented in Figure 1-1. This figure shows relationships among the reports developed for biosphere modeling and biosphere abstraction products for the TSPA-LA, as identified in the ''Technical Work Plan for Biosphere Modeling and Expert Support'' (BSC 2004 [DIRS 169573]) (TWP). This figure provides an understanding of how this report contributes to biosphere modeling in support of the license application (LA). This report is one of the five reports that develop input parameter values for the biosphere model. The ''Biosphere Model Report'' (BSC 2004 [DIRS 169460]) describes the conceptual model and the mathematical model. The input parameter reports, shown to the right of the Biosphere Model Report in Figure 1-1, contain detailed description of the model input parameters. The output of this report is used as direct input in the ''Nominal Performance Biosphere Dose Conversion Factor Analysis'' and in the ''Disruptive Event Biosphere Dose Conversion Factor Analysis'' that calculate the values of biosphere dose conversion factors (BDCFs) for the groundwater and volcanic ash exposure scenarios, respectively. The purpose of this analysis was to develop biosphere model parameter values related to radionuclide transport and accumulation in the environment. These parameters support calculations of radionuclide concentrations in the environmental media (e.g., soil, crops, animal products, and air) resulting from a given radionuclide concentration at the source of contamination (i.e., either in groundwater or in volcanic ash). The analysis

  2. TASS/SMR Code Topical Report for SMART Plant, Vol II: User's Guide and Input Requirement

    International Nuclear Information System (INIS)

    Kim, See Darl; Kim, Soo Hyoung; Kim, Hyung Rae

    2008-10-01

    The TASS/SMR code has been developed with domestic technologies for the safety analysis of the SMART plant, which is an integral-type pressurized water reactor. It can be applied to the analysis of design basis accidents, including non-LOCA (loss of coolant accident) and LOCA events, of the SMART plant. The TASS/SMR code can be applied to any plant regardless of the structural characteristics of the reactor, since the code solves the same governing equations for both the primary and secondary systems. The code has been developed to meet the requirements of a safety analysis code. This report describes the overall structure of TASS/SMR, input processing, and the procedures for steady-state and transient calculations. In addition, the basic differential equations, finite difference equations, state relationships, and constitutive models are described in the report. First, the conservation equations, the discretization process for numerical analysis, and the search method for state relationships are described. Then, a core power model, heat transfer models, physical models for various components, and control and trip models are explained.

  3. Model reduction of nonlinear systems subject to input disturbances

    KAUST Repository

    Ndoye, Ibrahima

    2017-07-10

    The method of convex optimization is used as a tool for model reduction of a class of nonlinear systems in the presence of disturbances. It is shown that under some conditions the nonlinear disturbed system can be approximated by a reduced order nonlinear system with similar disturbance-output properties to the original plant. The proposed model reduction strategy preserves the nonlinearity and the input disturbance nature of the model. It guarantees a sufficiently small error between the outputs of the original and the reduced-order systems, and also maintains the properties of input-to-state stability. The matrices of the reduced order system are given in terms of a set of linear matrix inequalities (LMIs). The paper concludes with a demonstration of the proposed approach on model reduction of a nonlinear electronic circuit with additive disturbances.
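For the linear special case, the classical counterpart of this idea is balanced truncation, sketched below as a baseline (a different, Gramian-based technique; the paper's LMI construction for nonlinear disturbed systems is not reproduced here, and the 4-state system is hypothetical):

```python
import numpy as np
from scipy.linalg import cholesky, solve_continuous_lyapunov, svd

# A stable 4-state SISO system (hypothetical numbers).
A = np.array([[-1.0, 0.5, 0.0, 0.0],
              [0.5, -2.0, 0.5, 0.0],
              [0.0, 0.5, -3.0, 0.5],
              [0.0, 0.0, 0.5, -4.0]])
B = np.array([[1.0], [0.0], [0.0], [0.0]])
C = np.array([[0.0, 0.0, 0.0, 1.0]])

# Controllability / observability Gramians: A P + P A' + B B' = 0, etc.
P = solve_continuous_lyapunov(A, -B @ B.T)
Q = solve_continuous_lyapunov(A.T, -C.T @ C)

# Balancing transformation from the Gramian Cholesky factors; s holds the
# Hankel singular values, which rank the states by input-output relevance.
Lp = cholesky(P, lower=True)
Lq = cholesky(Q, lower=True)
U, s, Vt = svd(Lq.T @ Lp)
T = Lp @ Vt.T / np.sqrt(s)
Tinv = (U / np.sqrt(s)).T @ Lq.T

r = 2                               # keep the two dominant states
Ar = (Tinv @ A @ T)[:r, :r]
Br = (Tinv @ B)[:r]
Cr = (C @ T)[:, :r]
```

In the balanced coordinates both Gramians equal `diag(s)`, so truncating the states with the smallest Hankel singular values discards the least input-output-relevant dynamics while preserving stability.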

  4. Effects of input uncertainty on cross-scale crop modeling

    Science.gov (United States)

    Waha, Katharina; Huth, Neil; Carberry, Peter

    2014-05-01

    The quality of data on climate, soils and agricultural management in the tropics is in general low or data is scarce leading to uncertainty in process-based modeling of cropping systems. Process-based crop models are common tools for simulating crop yields and crop production in climate change impact studies, studies on mitigation and adaptation options or food security studies. Crop modelers are concerned about input data accuracy as this, together with an adequate representation of plant physiology processes and choice of model parameters, are the key factors for a reliable simulation. For example, assuming an error in measurements of air temperature, radiation and precipitation of ± 0.2°C, ± 2 % and ± 3 % respectively, Fodor & Kovacs (2005) estimate that this translates into an uncertainty of 5-7 % in yield and biomass simulations. In our study we seek to answer the following questions: (1) are there important uncertainties in the spatial variability of simulated crop yields on the grid-cell level displayed on maps, (2) are there important uncertainties in the temporal variability of simulated crop yields on the aggregated, national level displayed in time-series, and (3) how does the accuracy of different soil, climate and management information influence the simulated crop yields in two crop models designed for use at different spatial scales? The study will help to determine whether more detailed information improves the simulations and to advise model users on the uncertainty related to input data. We analyse the performance of the point-scale crop model APSIM (Keating et al., 2003) and the global scale crop model LPJmL (Bondeau et al., 2007) with different climate information (monthly and daily) and soil conditions (global soil map and African soil map) under different agricultural management (uniform and variable sowing dates) for the low-input maize-growing areas in Burkina Faso/West Africa. We test the models' response to different levels of input
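The quoted ±0.2 °C / ±2% / ±3% error propagation can be illustrated with a Monte Carlo sweep over a toy yield response (the function below is an invented stand-in for a crop model, not APSIM or LPJmL):

```python
import numpy as np

rng = np.random.default_rng(4)

def toy_yield(temp_c, radiation, precip_mm):
    # Invented response surface: optimum near 25 C, linear in radiation,
    # water-limited below 600 mm.
    return 0.5 * radiation * np.exp(-0.5 * ((temp_c - 25.0) / 5.0) ** 2) \
        * np.minimum(precip_mm / 600.0, 1.0)

base = toy_yield(27.0, 20.0, 550.0)

# Sample measurement errors of +/-0.2 C, +/-2 %, +/-3 % (uniform).
n = 10_000
y = toy_yield(27.0 + rng.uniform(-0.2, 0.2, n),
              20.0 * (1.0 + rng.uniform(-0.02, 0.02, n)),
              550.0 * (1.0 + rng.uniform(-0.03, 0.03, n)))
rel_std = y.std() / base   # relative yield uncertainty from input errors
```

For this toy surface the input errors translate into a yield uncertainty of a few percent, the same order as the 5-7% estimate of Fodor & Kovacs (2005) cited above.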

  5. Inhalation Exposure Input Parameters for the Biosphere Model

    International Nuclear Information System (INIS)

    M. Wasiolek

    2006-01-01

    This analysis is one of the technical reports that support the Environmental Radiation Model for Yucca Mountain, Nevada (ERMYN), referred to in this report as the biosphere model. ''Biosphere Model Report'' (BSC 2004 [DIRS 169460]) describes in detail the conceptual model as well as the mathematical model and its input parameters. This report documents development of input parameters for the biosphere model that are related to atmospheric mass loading and supports the use of the model to develop biosphere dose conversion factors (BDCFs). The biosphere model is one of a series of process models supporting the total system performance assessment (TSPA) for a Yucca Mountain repository. ''Inhalation Exposure Input Parameters for the Biosphere Model'' is one of five reports that develop input parameters for the biosphere model. A graphical representation of the documentation hierarchy for the biosphere model is presented in Figure 1-1 (based on BSC 2006 [DIRS 176938]). This figure shows the interrelationships among the products (i.e., analysis and model reports) developed for biosphere modeling and how this analysis report contributes to biosphere modeling. This analysis report defines and justifies values of atmospheric mass loading for the biosphere model. Mass loading is the total mass concentration of resuspended particles (e.g., dust, ash) in a volume of air. Mass loading values are used in the air submodel of the biosphere model to calculate concentrations of radionuclides in air inhaled by a receptor and concentrations in air surrounding crops. Concentrations in air to which the receptor is exposed are then used in the inhalation submodel to calculate the dose contribution to the receptor from inhalation of contaminated airborne particles. Concentrations in air surrounding plants are used in the plant submodel to calculate the concentrations of radionuclides in foodstuffs contributed from uptake by foliar interception. 
This report is concerned primarily with the

  6. Inhalation Exposure Input Parameters for the Biosphere Model

    Energy Technology Data Exchange (ETDEWEB)

    M. Wasiolek

    2006-06-05

    This analysis is one of the technical reports that support the Environmental Radiation Model for Yucca Mountain, Nevada (ERMYN), referred to in this report as the biosphere model. ''Biosphere Model Report'' (BSC 2004 [DIRS 169460]) describes in detail the conceptual model as well as the mathematical model and its input parameters. This report documents development of input parameters for the biosphere model that are related to atmospheric mass loading and supports the use of the model to develop biosphere dose conversion factors (BDCFs). The biosphere model is one of a series of process models supporting the total system performance assessment (TSPA) for a Yucca Mountain repository. ''Inhalation Exposure Input Parameters for the Biosphere Model'' is one of five reports that develop input parameters for the biosphere model. A graphical representation of the documentation hierarchy for the biosphere model is presented in Figure 1-1 (based on BSC 2006 [DIRS 176938]). This figure shows the interrelationships among the products (i.e., analysis and model reports) developed for biosphere modeling and how this analysis report contributes to biosphere modeling. This analysis report defines and justifies values of atmospheric mass loading for the biosphere model. Mass loading is the total mass concentration of resuspended particles (e.g., dust, ash) in a volume of air. Mass loading values are used in the air submodel of the biosphere model to calculate concentrations of radionuclides in air inhaled by a receptor and concentrations in air surrounding crops. Concentrations in air to which the receptor is exposed are then used in the inhalation submodel to calculate the dose contribution to the receptor from inhalation of contaminated airborne particles. Concentrations in air surrounding plants are used in the plant submodel to calculate the concentrations of radionuclides in foodstuffs contributed from uptake by foliar interception. This

  7. Input point distribution for regular stem form spline modeling

    Directory of Open Access Journals (Sweden)

    Karel Kuželka

    2015-04-01

Aim of study: To optimize an interpolation method and the distribution of measured diameters to represent the regular stem form of coniferous trees using a set of discrete points. Area of study: Central-Bohemian highlands, Czech Republic; a region that represents average stand conditions of production forests of Norway spruce (Picea abies [L.] Karst.) in central Europe. Material and methods: The accuracy of stem curves modeled using natural cubic splines from a set of measured diameters was evaluated for 85 closely measured stems of Norway spruce using five statistical indicators and compared to the accuracy of three additional models based on different spline types selected for their ability to represent stem curves. The optimal positions to measure diameters were identified using an aggregate objective function approach. Main results: The optimal positions of the input points vary depending on the properties of each spline type. If the optimal input points for each spline are used, then all spline types are able to give reasonable results with higher numbers of input points. The commonly used natural cubic spline was outperformed by other spline types. The lowest errors occur when interpolating the points using the Catmull-Rom spline, which gives accurate and unbiased volume estimates, even with only five input points. Research highlights: The study contributes to a more accurate representation of stem form and therefore more accurate estimation of stem volume using data obtained from terrestrial imagery or other close-range remote sensing methods.
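The Catmull-Rom interpolation the abstract favours can be sketched in a few lines. This is a generic uniform Catmull-Rom interpolator applied to a diameter series, not the authors' implementation; the duplicated end knots and the sampling density are illustrative assumptions.

```python
def catmull_rom(p0, p1, p2, p3, t):
    """Uniform Catmull-Rom value between p1 and p2 for t in [0, 1)."""
    return 0.5 * ((2.0 * p1)
                  + (-p0 + p2) * t
                  + (2.0 * p0 - 5.0 * p1 + 4.0 * p2 - p3) * t ** 2
                  + (-p0 + 3.0 * p1 - 3.0 * p2 + p3) * t ** 3)


def stem_profile(diameters, samples_per_segment=10):
    """Densify a stem profile through every measured diameter.

    End knots are duplicated so the curve spans all measurements;
    the curve passes exactly through each input diameter.
    """
    pts = [diameters[0]] + list(diameters) + [diameters[-1]]
    curve = []
    for i in range(1, len(pts) - 2):
        for s in range(samples_per_segment):
            t = s / samples_per_segment
            curve.append(catmull_rom(pts[i - 1], pts[i], pts[i + 1], pts[i + 2], t))
    curve.append(pts[-2])  # close the curve at the last measurement
    return curve
```

Because the Catmull-Rom basis reproduces its control points at t = 0 and t = 1, the interpolated curve is guaranteed to pass through every measured diameter, which is the property that makes it attractive for stem-form modeling with few input points.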

  8. Agricultural and Environmental Input Parameters for the Biosphere Model

    Energy Technology Data Exchange (ETDEWEB)

    K. Rasmuson; K. Rautenstrauch

    2004-09-14

This analysis is one of 10 technical reports that support the Environmental Radiation Model for Yucca Mountain, Nevada (ERMYN) (i.e., the biosphere model). It documents development of agricultural and environmental input parameters for the biosphere model, and supports the use of the model to develop biosphere dose conversion factors (BDCFs). The biosphere model is one of a series of process models supporting the total system performance assessment (TSPA) for the repository at Yucca Mountain. The ERMYN provides the TSPA with the capability to perform dose assessments. A graphical representation of the documentation hierarchy for the ERMYN is presented in Figure 1-1. This figure shows the interrelationships between the major activities and their products (the analysis and model reports) that were planned in "Technical Work Plan for Biosphere Modeling and Expert Support" (BSC 2004 [DIRS 169573]). The "Biosphere Model Report" (BSC 2004 [DIRS 169460]) describes the ERMYN and its input parameters.

  9. Applications of Flocking Algorithms to Input Modeling for Agent Movement

    Science.gov (United States)

    2011-12-01

We apply a flocking algorithm to a leading boid to generate followers. This method uses techniques from agent-based modeling to generate a flock of boids that follow the data, subject to constraints on the possible inputs. Figure 2 of the source plots the path of a boid generated by the Group 4 flocking algorithm.

  10. A stochastic model of input effectiveness during irregular gamma rhythms.

    Science.gov (United States)

    Dumont, Grégory; Northoff, Georg; Longtin, André

    2016-02-01

    Gamma-band synchronization has been linked to attention and communication between brain regions, yet the underlying dynamical mechanisms are still unclear. How does the timing and amplitude of inputs to cells that generate an endogenously noisy gamma rhythm affect the network activity and rhythm? How does such "communication through coherence" (CTC) survive in the face of rhythm and input variability? We present a stochastic modelling approach to this question that yields a very fast computation of the effectiveness of inputs to cells involved in gamma rhythms. Our work is partly motivated by recent optogenetic experiments (Cardin et al. Nature, 459(7247), 663-667 2009) that tested the gamma phase-dependence of network responses by first stabilizing the rhythm with periodic light pulses to the interneurons (I). Our computationally efficient model E-I network of stochastic two-state neurons exhibits finite-size fluctuations. Using the Hilbert transform and Kuramoto index, we study how the stochastic phase of its gamma rhythm is entrained by external pulses. We then compute how this rhythmic inhibition controls the effectiveness of external input onto pyramidal (E) cells, and how variability shapes the window of firing opportunity. For transferring the time variations of an external input to the E cells, we find a tradeoff between the phase selectivity and depth of rate modulation. We also show that the CTC is sensitive to the jitter in the arrival times of spikes to the E cells, and to the degree of I-cell entrainment. We further find that CTC can occur even if the underlying deterministic system does not oscillate; quasicycle-type rhythms induced by the finite-size noise retain the basic CTC properties. Finally a resonance analysis confirms the relative importance of the I cell pacing for rhythm generation. 
Analysis of whole-network behaviour, including computations of synchrony, phase and shifts in excitatory-inhibitory balance, can be further sped up by orders of magnitude.
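The phase-entrainment machinery the abstract mentions (Hilbert-transform phase plus the Kuramoto index) can be sketched as below. The 40 Hz toy signal, noise level, and pulse spacing are illustrative assumptions, not the authors' stochastic network model; the FFT construction is the standard way the analytic signal behind the Hilbert transform is computed.

```python
import numpy as np

def analytic_signal(x):
    """Analytic signal via the FFT (the construction behind the Hilbert transform)."""
    n = len(x)
    spectrum = np.fft.fft(x)
    h = np.zeros(n)
    h[0] = 1.0
    h[1:(n + 1) // 2] = 2.0      # double the positive frequencies
    if n % 2 == 0:
        h[n // 2] = 1.0          # keep the Nyquist bin once
    return np.fft.ifft(spectrum * h)

def kuramoto_index(phases):
    """Phase-locking index R in [0, 1]; R = 1 means perfect entrainment."""
    return np.abs(np.mean(np.exp(1j * np.asarray(phases))))

# Toy 40 Hz rhythm sampled at 1 kHz with additive noise.
t = np.linspace(0.0, 1.0, 1000, endpoint=False)
rng = np.random.default_rng(0)
lfp = np.sin(2 * np.pi * 40 * t) + 0.1 * rng.standard_normal(t.size)
phase = np.angle(analytic_signal(lfp))

# Sample the rhythm's phase at periodic "pulse" times (every 25 ms = one cycle),
# mimicking the periodic light pulses used to stabilize the rhythm.
R = kuramoto_index(phase[::25])
```

Because the pulse interval matches the oscillation period, the sampled phases cluster and R approaches 1; jittering the pulse times or the rhythm lowers R, which is the quantity the entrainment analysis tracks.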

  11. A one-model approach based on relaxed combinations of inputs for evaluating input congestion in DEA

    Science.gov (United States)

    Khodabakhshi, Mohammad

    2009-08-01

This paper provides a one-model approach to input congestion based on the input relaxation model developed in data envelopment analysis (e.g. [G.R. Jahanshahloo, M. Khodabakhshi, Suitable combination of inputs for improving outputs in DEA with determining input congestion -- Considering textile industry of China, Applied Mathematics and Computation (1) (2004) 263-273; G.R. Jahanshahloo, M. Khodabakhshi, Determining assurance interval for non-Archimedean element in improving outputs model in DEA, Applied Mathematics and Computation 151 (2) (2004) 501-506; M. Khodabakhshi, A super-efficiency model based on improved outputs in data envelopment analysis, Applied Mathematics and Computation 184 (2) (2007) 695-703; M. Khodabakhshi, M. Asgharian, An input relaxation measure of efficiency in stochastic data envelopment analysis, Applied Mathematical Modelling 33 (2009) 2010-2023]). The one-model approach reduces the three problems that must be solved under the two-model approach introduced in the first of the above references to two, which is clearly advantageous from a computational point of view. The model is applied to a set of data extracted from the ISI database to estimate the input congestion of 12 Canadian business schools.

  12. Remotely sensed soil moisture input to a hydrologic model

    Science.gov (United States)

    Engman, E. T.; Kustas, W. P.; Wang, J. R.

    1989-01-01

    The possibility of using detailed spatial soil moisture maps as input to a runoff model was investigated. The water balance of a small drainage basin was simulated using a simple storage model. Aircraft microwave measurements of soil moisture were used to construct two-dimensional maps of the spatial distribution of the soil moisture. Data from overflights on different dates provided the temporal changes resulting from soil drainage and evapotranspiration. The study site and data collection are described, and the soil measurement data are given. The model selection is discussed, and the simulation results are summarized. It is concluded that a time series of soil moisture is a valuable new type of data for verifying model performance and for updating and correcting simulated streamflow.
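A "simple storage model" of the kind used here to balance rainfall, evapotranspiration, and drainage can be sketched as a single linear bucket. The drainage coefficient, the ET-before-drainage ordering, and the toy forcing series are illustrative assumptions, not the study's calibrated model.

```python
def linear_bucket(rain, et, k=0.2, s0=0.0):
    """Single-store water balance: gain rain, lose ET, drain a fraction k.

    Returns the streamflow series and the final storage.
    """
    s, flow = s0, []
    for p, e in zip(rain, et):
        s += p                   # rainfall input
        s -= min(e, s)           # evapotranspiration cannot exceed storage
        q = k * s                # linear drainage to streamflow
        s -= q
        flow.append(q)
    return flow, s

# One storm followed by dry days: streamflow recedes as storage drains.
flow, s_end = linear_bucket([10.0, 0.0, 0.0, 0.0], [1.0, 1.0, 1.0, 1.0], k=0.5)
```

Spatially distributed soil moisture maps would enter such a model by initializing or updating the storage state s, which is exactly the verification and updating role the abstract identifies for the remotely sensed data.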

  13. Temporal rainfall estimation using input data reduction and model inversion

    Science.gov (United States)

    Wright, A. J.; Vrugt, J. A.; Walker, J. P.; Pauwels, V. R. N.

    2016-12-01

Floods are devastating natural hazards. To provide accurate, precise and timely flood forecasts there is a need to understand the uncertainties associated with temporal rainfall and model parameters. The estimation of temporal rainfall and model parameter distributions from streamflow observations in complex dynamic catchments adds skill to current areal rainfall estimation methods, allows for the uncertainty of rainfall input to be considered when estimating model parameters and provides the ability to estimate rainfall from poorly gauged catchments. Current methods to estimate temporal rainfall distributions from streamflow are unable to adequately explain and invert complex non-linear hydrologic systems. This study uses the Discrete Wavelet Transform (DWT) to reduce rainfall dimensionality for the catchment of Warwick, Queensland, Australia. The reduction of rainfall to DWT coefficients allows the input rainfall time series to be estimated simultaneously with the model parameters. The estimation process is conducted using multi-chain Markov chain Monte Carlo simulation with the DREAMZS algorithm. The use of a likelihood function that considers both rainfall and streamflow error allows for model parameter and temporal rainfall distributions to be estimated. Estimating the wavelet approximation coefficients of lower-order decomposition structures yielded the most realistic temporal rainfall distributions. These rainfall estimates were all able to simulate streamflow that was superior to the results of a traditional calibration approach. It is shown that the choice of wavelet has a considerable impact on the robustness of the inversion. The results demonstrate that streamflow data contain sufficient information to estimate temporal rainfall and model parameter distributions.
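The dimensionality reduction at the heart of the method can be illustrated with a hand-rolled one-level Haar DWT: keeping only the approximation coefficients halves the number of rainfall unknowns to estimate while preserving total rainfall depth. The study works with other wavelets and deeper decompositions; the Haar wavelet, single level, and toy hyetograph below are simplifying assumptions.

```python
import numpy as np

def haar_dwt(x):
    """One level of the Haar DWT: approximation and detail coefficients."""
    x = np.asarray(x, dtype=float)
    a = (x[0::2] + x[1::2]) / np.sqrt(2.0)
    d = (x[0::2] - x[1::2]) / np.sqrt(2.0)
    return a, d

def haar_idwt(a, d):
    """Inverse of one Haar DWT level."""
    x = np.empty(2 * len(a))
    x[0::2] = (a + d) / np.sqrt(2.0)
    x[1::2] = (a - d) / np.sqrt(2.0)
    return x

rain = np.array([0.0, 0.0, 4.2, 6.8, 1.1, 0.3, 0.0, 0.0])  # toy hyetograph (mm)
a, d = haar_dwt(rain)

# Estimating only the approximation coefficients `a` halves the number of
# unknowns; reconstruction with zeroed details preserves the total depth.
smooth = haar_idwt(a, np.zeros_like(a))
```

In the inversion, the coefficients in `a` would be sampled jointly with the hydrologic model parameters, and each proposal reconstructed to a rainfall series before driving the model.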

  14. Development of the MARS input model for Kori nuclear units 1 transient analyzer

    International Nuclear Information System (INIS)

    Hwang, M.; Kim, K. D.; Lee, S. W.; Lee, Y. J.; Lee, W. J.; Chung, B. D.; Jeong, J. J.

    2004-11-01

KAERI has been developing the 'NSSS transient analyzer', based on best-estimate codes, for the Kori Nuclear Unit 1 plant. The MARS and RETRAN codes have been used as the best-estimate codes for the NSSS transient analyzer. Among these codes, MARS is adopted for realistic analysis of small- and large-break loss-of-coolant accidents with break sizes greater than 2 inches in diameter, so it was necessary to develop a MARS input model for the Kori Nuclear Unit 1 plant. This report includes the input model requirements (hydrodynamic component and heat structure models) and the calculation note for the MARS input data generation for the Kori Nuclear Unit 1 plant analyzer (see the Appendix). In order to confirm the validity of the input data, we performed calculations for a steady state at the 100% power operation condition and for a double-ended cold-leg break LOCA. The results of the steady-state calculation agree well with the design data. The results of the LOCA calculation appear reasonable and are consistent with those of other best-estimate calculations. Therefore, the MARS input data can be used as a base input deck for the MARS transient analyzer for Kori Nuclear Unit 1.

  15. Comprehensive Information Retrieval and Model Input Sequence (CIRMIS)

    Energy Technology Data Exchange (ETDEWEB)

    Friedrichs, D.R.

    1977-04-01

The Comprehensive Information Retrieval and Model Input Sequence (CIRMIS) was developed to provide the research scientist with man-machine interactive capabilities in a real-time environment, and thereby produce results more quickly and efficiently. The CIRMIS system was originally developed to increase data storage and retrieval capabilities and ground-water model control for the Hanford site. The overall configuration, however, can be used in other areas. The CIRMIS system provides the user with three major functions: retrieval of well-based data, special application for manipulating surface data or background maps, and the manipulation and control of ground-water models. These programs comprise only a portion of the entire CIRMIS system. A complete description of the CIRMIS system is given in this report. 25 figures, 7 tables. (RWR)

  16. Input data requirements for special processors in the computation system containing the VENTURE neutronics code

    International Nuclear Information System (INIS)

    Vondy, D.R.; Fowler, T.B.; Cunningham, G.W.

    1979-07-01

User input data requirements are presented for certain special processors in a nuclear reactor computation system. These processors generally read data in formatted form and generate binary interface data files. Some data processing is done to convert from the user-oriented form to the interface file forms. The VENTURE diffusion theory neutronics code and other computation modules in this system use the interface data files which are generated.

  17. Nursing home staffing requirements and input substitution: effects on housekeeping, food service, and activities staff.

    Science.gov (United States)

    Bowblis, John R; Hyer, Kathryn

    2013-08-01

Objective: To study the effect of minimum nurse staffing requirements on the subsequent employment of nursing home support staff. Data sources: Nursing home data from the Online Survey Certification and Reporting (OSCAR) System merged with state nurse staffing requirements, covering surveys from 1999 to 2004. Study design: Facility-level housekeeping, food service, and activities staff levels are regressed on nurse staffing requirements and other controls using fixed-effect panel regression. Principal findings: Increases in state direct care and licensed nurse staffing requirements are associated with decreases in the staffing levels of all types of support staff. Conclusions: Increased nursing home nurse staffing requirements lead to input substitution in the form of reduced support staffing levels. © Health Research and Educational Trust.

  18. Measurement of Laser Weld Temperatures for 3D Model Input

    Energy Technology Data Exchange (ETDEWEB)

    Dagel, Daryl [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Grossetete, Grant [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Maccallum, Danny O. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)

    2016-10-01

    Laser welding is a key joining process used extensively in the manufacture and assembly of critical components for several weapons systems. Sandia National Laboratories advances the understanding of the laser welding process through coupled experimentation and modeling. This report summarizes the experimental portion of the research program, which focused on measuring temperatures and thermal history of laser welds on steel plates. To increase confidence in measurement accuracy, researchers utilized multiple complementary techniques to acquire temperatures during laser welding. This data serves as input to and validation of 3D laser welding models aimed at predicting microstructure and the formation of defects and their impact on weld-joint reliability, a crucial step in rapid prototyping of weapons components.

  19. The definition of input parameters for modelling of energetic subsystems

    Directory of Open Access Journals (Sweden)

    Ptacek M.

    2013-06-01

This paper is a short review and a basic description of mathematical models of renewable energy sources which present individual investigated subsystems of a system created in Matlab/Simulink. It solves the physical and mathematical relationships of photovoltaic and wind energy sources that are often connected to the distribution networks. The fuel cell technology is much less connected to the distribution networks but it could be promising in the near future. Therefore, the paper informs about a new dynamic model of the low-temperature fuel cell subsystem, and the main input parameters are defined as well. Finally, the main evaluated and achieved graphic results for the suggested parameters and for all the individual subsystems mentioned above are shown.

  20. The definition of input parameters for modelling of energetic subsystems

    Science.gov (United States)

    Ptacek, M.

    2013-06-01

    This paper is a short review and a basic description of mathematical models of renewable energy sources which present individual investigated subsystems of a system created in Matlab/Simulink. It solves the physical and mathematical relationships of photovoltaic and wind energy sources that are often connected to the distribution networks. The fuel cell technology is much less connected to the distribution networks but it could be promising in the near future. Therefore, the paper informs about a new dynamic model of the low-temperature fuel cell subsystem, and the main input parameters are defined as well. Finally, the main evaluated and achieved graphic results for the suggested parameters and for all the individual subsystems mentioned above are shown.

  1. Lysimeter data as input to performance assessment models

    International Nuclear Information System (INIS)

    McConnell, J.W. Jr.

    1998-01-01

The Field Lysimeter Investigations: Low-Level Waste Data Base Development Program is obtaining information on the performance of radioactive waste forms in a disposal environment. Waste forms fabricated using ion-exchange resins from EPICOR-II prefilters employed in the cleanup of the Three Mile Island (TMI) Nuclear Power Station are being tested to develop a low-level waste data base and to obtain information on survivability of waste forms in a disposal environment. The program includes reviewing radionuclide releases from those waste forms in the first 7 years of sampling and examining the relationship between code input parameters and lysimeter data. Also, lysimeter data are applied to performance assessment source term models, and initial results from use of data in two models are presented.

  2. Requirements on design earthquake input time-histories in different regulations for nuclear power plant

    International Nuclear Information System (INIS)

    Hou Chunlin; Pan Rong; Li Xiaojun

    2012-01-01

In this paper, provisions concerning artificial ground motions in different nuclear power plant regulations are investigated, and the differences in design earthquake input time-history requirements among the regulations currently cited in China are compared. The corresponding relationships used in the design of the French pressurized water reactor M310 and the third-generation advanced reactor AP1000 are then listed. We review the technical background, detailed requirements, and application of the different regulations, and analyze their differences. This work offers important references for amending the code for seismic design of nuclear power plants and other related codes. (authors)

  3. Phylogenetic mixtures and linear invariants for equal input models.

    Science.gov (United States)

    Casanellas, Marta; Steel, Mike

    2017-04-01

The reconstruction of phylogenetic trees from molecular sequence data relies on modelling site substitutions by a Markov process, or a mixture of such processes. In general, allowing mixed processes can result in different tree topologies becoming indistinguishable from the data, even for infinitely long sequences. However, when the underlying Markov process supports linear phylogenetic invariants, then provided these are sufficiently informative, the identifiability of the tree topology can be restored. In this paper, we investigate a class of processes that support linear invariants once the stationary distribution is fixed, the 'equal input model'. This model generalizes the 'Felsenstein 1981' model (and thereby the Jukes-Cantor model) from four states to an arbitrary number of states (finite or infinite), and it can also be described by a 'random cluster' process. We describe the structure and dimension of the vector spaces of phylogenetic mixtures and of linear invariants for any fixed phylogenetic tree (and for all trees, the so-called 'model invariants'), on any number n of leaves. We also provide a precise description of the space of mixtures and linear invariants for the special case of [Formula: see text] leaves. By combining techniques from discrete random processes and (multi-)linear algebra, our results build on a classic result that was first established by James Lake (Mol Biol Evol 4:167-191, 1987).
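The equal input model has a simple closed form: substitution to state j occurs at a rate proportional to the stationary frequency pi_j, independent of the current state, giving P(t) = exp(-t) I + (1 - exp(-t)) 1·pi. A sketch follows; the unit total substitution rate is a scaling assumption, and with uniform pi over four states the matrix reduces to Jukes-Cantor.

```python
import numpy as np

def equal_input_transition(pi, t):
    """Transition matrix of the equal input model after time t.

    P(t) = exp(-t) * I + (1 - exp(-t)) * 1·pi: with probability exp(-t)
    no substitution has occurred; otherwise the new state is drawn from
    the stationary distribution pi regardless of the current state.
    """
    pi = np.asarray(pi, dtype=float)
    n = len(pi)
    return np.exp(-t) * np.eye(n) + (1.0 - np.exp(-t)) * np.outer(np.ones(n), pi)

pi = np.array([0.1, 0.2, 0.3, 0.4])
P = equal_input_transition(pi, 0.7)
```

This "forget the current state" structure is also why the model admits a 'random cluster' description: edges of the tree either preserve the state or redraw it from pi.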

  4. Assigning probability distributions to input parameters of performance assessment models

    International Nuclear Information System (INIS)

    Mishra, Srikanta

    2002-02-01

This study presents an overview of various approaches for assigning probability distributions to input parameters and/or future states of performance assessment models. Specifically, three broad approaches are discussed for developing input distributions: (a) fitting continuous distributions to data, (b) subjective assessment of probabilities, and (c) Bayesian updating of prior knowledge based on new information. The report begins with a summary of the nature of data and distributions, followed by a discussion of several common theoretical parametric models for characterizing distributions. Next, various techniques are presented for fitting continuous distributions to data. These include probability plotting, method of moments, maximum likelihood estimation and nonlinear least squares analysis. The techniques are demonstrated using data from a recent performance assessment study for the Yucca Mountain project. Goodness of fit techniques are also discussed, followed by an overview of how distribution fitting is accomplished in commercial software packages. The issue of subjective assessment of probabilities is dealt with in terms of the maximum entropy distribution selection approach, as well as some common rules for codifying informal expert judgment. Formal expert elicitation protocols are discussed next, and are based primarily on the guidance provided by the US NRC. The Bayesian framework for updating prior distributions (beliefs) when new information becomes available is discussed. A simple numerical approach is presented for facilitating practical applications of the Bayes theorem. Finally, a systematic framework for assigning distributions is presented: (a) for the situation where enough data are available to define an empirical CDF or fit a parametric model to the data, and (b) to deal with the situation where only a limited amount of information is available.

  5. Assigning probability distributions to input parameters of performance assessment models

    Energy Technology Data Exchange (ETDEWEB)

    Mishra, Srikanta [INTERA Inc., Austin, TX (United States)

    2002-02-01

This study presents an overview of various approaches for assigning probability distributions to input parameters and/or future states of performance assessment models. Specifically, three broad approaches are discussed for developing input distributions: (a) fitting continuous distributions to data, (b) subjective assessment of probabilities, and (c) Bayesian updating of prior knowledge based on new information. The report begins with a summary of the nature of data and distributions, followed by a discussion of several common theoretical parametric models for characterizing distributions. Next, various techniques are presented for fitting continuous distributions to data. These include probability plotting, method of moments, maximum likelihood estimation and nonlinear least squares analysis. The techniques are demonstrated using data from a recent performance assessment study for the Yucca Mountain project. Goodness of fit techniques are also discussed, followed by an overview of how distribution fitting is accomplished in commercial software packages. The issue of subjective assessment of probabilities is dealt with in terms of the maximum entropy distribution selection approach, as well as some common rules for codifying informal expert judgment. Formal expert elicitation protocols are discussed next, and are based primarily on the guidance provided by the US NRC. The Bayesian framework for updating prior distributions (beliefs) when new information becomes available is discussed. A simple numerical approach is presented for facilitating practical applications of the Bayes theorem. Finally, a systematic framework for assigning distributions is presented: (a) for the situation where enough data are available to define an empirical CDF or fit a parametric model to the data, and (b) to deal with the situation where only a limited amount of information is available.
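As a small illustration of the method-of-moments fitting surveyed in these reports: matching the first two sample moments of a gamma(shape k, scale theta) model gives k = mean²/variance and theta = variance/mean. The synthetic sample below stands in for performance-assessment data, which are not reproduced here.

```python
import numpy as np

def gamma_method_of_moments(data):
    """Method-of-moments fit of a gamma(shape k, scale theta) distribution.

    Matching the first two sample moments gives
    k = mean^2 / variance and theta = variance / mean.
    """
    m = np.mean(data)
    v = np.var(data)
    return m * m / v, v / m

# Synthetic stand-in for field data: gamma with shape 2, scale 3.
rng = np.random.default_rng(1)
sample = rng.gamma(shape=2.0, scale=3.0, size=100_000)
k_hat, theta_hat = gamma_method_of_moments(sample)
```

Maximum likelihood estimation would instead solve the gamma likelihood equations numerically; for large, well-behaved samples the two fits are close, which is why moment matching is often a convenient starting point.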

  6. RELAP5/MOD3 code manual: User's guide and input requirements. Volume 2

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    1995-08-01

The RELAP5 code has been developed for best-estimate transient simulation of light water reactor coolant systems during postulated accidents. The code models the coupled behavior of the reactor coolant system and the core for loss-of-coolant accidents and operational transients such as anticipated transient without scram, loss of offsite power, loss of feedwater, and loss of flow. A generic modeling approach is used that permits simulating a variety of thermal-hydraulic systems. Control system and secondary system components are included to permit modeling of plant controls, turbines, condensers, and secondary feedwater systems. Volume II contains detailed instructions for code application and input data preparation.

  7. Input modeling with phase-type distributions and Markov models theory and applications

    CERN Document Server

    Buchholz, Peter; Felko, Iryna

    2014-01-01

    Containing a summary of several recent results on Markov-based input modeling in a coherent notation, this book introduces and compares algorithms for parameter fitting and gives an overview of available software tools in the area. Due to progress made in recent years with respect to new algorithms to generate PH distributions and Markovian arrival processes from measured data, the models outlined are useful alternatives to other distributions or stochastic processes used for input modeling. Graduate students and researchers in applied probability, operations research and computer science along with practitioners using simulation or analytical models for performance analysis and capacity planning will find the unified notation and up-to-date results presented useful. Input modeling is the key step in model based system analysis to adequately describe the load of a system using stochastic models. The goal of input modeling is to find a stochastic model to describe a sequence of measurements from a real system...
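A phase-type (PH) distribution, the central object of this input-modeling approach, is the absorption time of a Markov chain with initial vector alpha and sub-generator T, with CDF F(t) = 1 - alpha·exp(Tt)·1. A sketch using an Erlang(2) example follows; the truncated-Taylor matrix exponential is a simplification adequate only for small, well-scaled matrices, not a production algorithm.

```python
import numpy as np

def expm_taylor(A, terms=60):
    """Matrix exponential by truncated Taylor series (fine for small, mild A)."""
    out = np.eye(A.shape[0])
    term = np.eye(A.shape[0])
    for k in range(1, terms):
        term = term @ A / k
        out = out + term
    return out

def ph_cdf(alpha, T, t):
    """CDF of a phase-type distribution: F(t) = 1 - alpha · exp(T t) · 1."""
    ones = np.ones(T.shape[0])
    return 1.0 - alpha @ expm_taylor(T * t) @ ones

# Erlang(2) with rate 1, written as a 2-phase PH distribution:
# start in phase 1, move to phase 2 at rate 1, then absorb at rate 1.
alpha = np.array([1.0, 0.0])
T = np.array([[-1.0, 1.0],
              [0.0, -1.0]])
```

Fitting algorithms of the kind the book compares estimate alpha and T from measured data, after which the fitted PH distribution can drive a simulation's arrival or service processes.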

  8. Recurrent network models for perfect temporal integration of fluctuating correlated inputs.

    Directory of Open Access Journals (Sweden)

    Hiroshi Okamoto

    2009-06-01

Temporal integration of input is essential to the accumulation of information in various cognitive and behavioral processes, and gradually increasing neuronal activity, typically occurring within a range of seconds, is considered to reflect such computation by the brain. Some psychological evidence suggests that temporal integration by the brain is nearly perfect, that is, the integration is non-leaky, and the output of a neural integrator is accurately proportional to the strength of input. Neural mechanisms of perfect temporal integration, however, remain largely unknown. Here, we propose a recurrent network model of cortical neurons that perfectly integrates partially correlated, irregular input spike trains. We demonstrate that the rate of this temporal integration changes proportionately to the probability of spike coincidences in synaptic inputs. We analytically prove that this highly accurate integration of synaptic inputs emerges from integration of the variance of the fluctuating synaptic inputs, when their mean component is kept constant. Highly irregular neuronal firing and spike coincidences are the major features of cortical activity, but they have been separately addressed so far. Our results suggest that the efficient protocol of information integration by cortical networks essentially requires both features and hence is heterotic.
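The core claim, that a network can integrate the variance contributed by spike coincidences while the mean input stays fixed, can be illustrated with a toy construction. The common-source correlation scheme, the population size, and the rates below are assumptions for illustration, not the authors' recurrent network.

```python
import numpy as np

rng = np.random.default_rng(2)

def population_input(n, steps, rate, c):
    """Binary spike trains with per-neuron rate `rate` and coincidence level c.

    At each step, with probability c all neurons copy one shared draw
    (a spike coincidence); otherwise they draw independently. The mean
    input per neuron is `rate` either way -- only the variance changes.
    """
    shared = rng.random(steps) < rate
    indep = rng.random((steps, n)) < rate
    use_shared = rng.random(steps) < c
    return np.where(use_shared[:, None], shared[:, None], indep)

def variance_integrator(spikes):
    """Accumulate the squared fluctuation of the summed input over time."""
    s = spikes.sum(axis=1).astype(float)
    fluct = s - s.mean()
    return np.cumsum(fluct ** 2)

low = variance_integrator(population_input(50, 20_000, 0.05, c=0.0))
high = variance_integrator(population_input(50, 20_000, 0.05, c=0.3))
```

Both input ensembles deliver the same mean drive, yet the integrated variance grows with a slope set by the coincidence probability c, mirroring the abstract's result that the integration rate tracks the probability of spike coincidences.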

  9. Human upright posture control models based on multisensory inputs; in fast and slow dynamics.

    Science.gov (United States)

    Chiba, Ryosuke; Takakusaki, Kaoru; Ota, Jun; Yozu, Arito; Haga, Nobuhiko

    2016-03-01

Posture control to maintain an upright stance is one of the most important and basic requirements in the daily life of humans. The sensory inputs involved in posture control include visual and vestibular inputs, as well as proprioceptive and tactile somatosensory inputs. These multisensory inputs are integrated to represent the body state (body schema); this is then utilized in the brain to generate the motion. Changes in the multisensory inputs result in postural alterations (fast dynamics), as well as long-term alterations in multisensory integration and posture control itself (slow dynamics). In this review, we discuss the fast and slow dynamics, with a focus on multisensory integration including an introduction of our study to investigate "internal force control" with multisensory integration-evoked posture alteration. We found that the study of the slow dynamics is lagging compared to that of fast dynamics, such that our understanding of long-term alterations is insufficient to reveal the underlying mechanisms and to propose suitable models. Additional studies investigating slow dynamics are required to expand our knowledge of this area, which would support the physical training and rehabilitation of elderly and impaired persons. Copyright © 2015 The Authors. Published by Elsevier Ireland Ltd. All rights reserved.

  10. A Markovian model of evolving world input-output network.

    Directory of Open Access Journals (Sweden)

    Vahid Moosavi

    Full Text Available The initial theoretical connections between Leontief input-output models and Markov chains were established back in 1950s. However, considering the wide variety of mathematical properties of Markov chains, so far there has not been a full investigation of evolving world economic networks with Markov chain formalism. In this work, using the recently available world input-output database, we investigated the evolution of the world economic network from 1995 to 2011 through analysis of a time series of finite Markov chains. We assessed different aspects of this evolving system via different known properties of the Markov chains such as mixing time, Kemeny constant, steady state probabilities and perturbation analysis of the transition matrices. First, we showed how the time series of mixing times and Kemeny constants could be used as an aggregate index of globalization. Next, we focused on the steady state probabilities as a measure of structural power of the economies that are comparable to GDP shares of economies as the traditional index of economies welfare. Further, we introduced two measures of systemic risk, called systemic influence and systemic fragility, where the former is the ratio of number of influenced nodes to the total number of nodes, caused by a shock in the activity of a node, and the latter is based on the number of times a specific economic node is affected by a shock in the activity of any of the other nodes. Finally, focusing on Kemeny constant as a global indicator of monetary flow across the network, we showed that there is a paradoxical effect of a change in activity levels of economic nodes on the overall flow of the world economic network. 
While the economic slowdown of the majority of nodes with high structural power results to a slower average monetary flow over the network, there are some nodes, where their slowdowns improve the overall quality of the network in terms of connectivity and the average flow of the money.

  11. A Markovian model of evolving world input-output network.

    Science.gov (United States)

    Moosavi, Vahid; Isacchini, Giulio

    2017-01-01

    The initial theoretical connections between Leontief input-output models and Markov chains were established back in the 1950s. However, considering the wide variety of mathematical properties of Markov chains, there has so far been no full investigation of evolving world economic networks with the Markov chain formalism. In this work, using the recently available world input-output database, we investigated the evolution of the world economic network from 1995 to 2011 through analysis of a time series of finite Markov chains. We assessed different aspects of this evolving system via known properties of the Markov chains, such as the mixing time, the Kemeny constant, steady-state probabilities and perturbation analysis of the transition matrices. First, we showed how the time series of mixing times and Kemeny constants can be used as an aggregate index of globalization. Next, we focused on the steady-state probabilities as a measure of the structural power of economies, comparable to GDP shares as the traditional index of economic welfare. Further, we introduced two measures of systemic risk, called systemic influence and systemic fragility, where the former is the ratio of the number of nodes influenced by a shock in the activity of a node to the total number of nodes, and the latter is based on the number of times a specific economic node is affected by a shock in the activity of any of the other nodes. Finally, focusing on the Kemeny constant as a global indicator of monetary flow across the network, we showed that a change in the activity levels of economic nodes has a paradoxical effect on the overall flow of the world economic network: while an economic slowdown of the majority of nodes with high structural power results in a slower average monetary flow over the network, there are some nodes whose slowdowns improve the overall quality of the network in terms of connectivity and the average flow of money.
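
    The steady-state probabilities and the Kemeny constant used above can be computed directly from a transition matrix with standard eigenvalue routines. The sketch below uses a small hypothetical 3-node matrix purely for illustration; the study builds its chains from the World Input-Output Database, which is not reproduced here.

```python
import numpy as np

# Hypothetical 3-node transition matrix (rows sum to 1); stands in for the
# matrices the study derives from the World Input-Output Database.
P = np.array([[0.6, 0.3, 0.1],
              [0.2, 0.5, 0.3],
              [0.1, 0.4, 0.5]])

# Steady-state probabilities: left eigenvector of P for eigenvalue 1
eigvals, eigvecs = np.linalg.eig(P.T)
pi = np.real(eigvecs[:, np.argmax(np.real(eigvals))])
pi = pi / pi.sum()

# Kemeny constant: sum of 1/(1 - lambda) over the non-unit eigenvalues
# (for an ergodic chain all non-unit eigenvalues satisfy |lambda| < 1)
lam = np.linalg.eigvals(P)
lam = lam[np.argsort(-lam.real)]          # unit eigenvalue first
kemeny = float(np.real(np.sum(1.0 / (1.0 - lam[1:]))))

print(pi)      # structural-power proxy for each node
print(kemeny)  # global indicator of average monetary flow
```

    For the world network, each node's steady-state probability plays the role of the structural-power index discussed above, and tracking the Kemeny constant year by year gives the globalization time series.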

  12. Optimal input design for model discrimination using Pontryagin's maximum principle: Application to kinetic model structures

    NARCIS (Netherlands)

    Keesman, K.J.; Walter, E.

    2014-01-01

    The paper presents a methodology for an optimal input design for model discrimination. To allow analytical solutions, the method, using Pontryagin’s maximum principle, is developed for non-linear single-state systems that are affine in their joint input. The method is demonstrated on a fed-batch

  13. Regulation of Wnt signaling by nociceptive input in animal models

    Directory of Open Access Journals (Sweden)

    Shi Yuqiang

    2012-06-01

    Full Text Available Abstract Background Central sensitization-associated synaptic plasticity in the spinal cord dorsal horn (SCDH critically contributes to the development of chronic pain, but understanding of the underlying molecular pathways is still incomplete. Emerging evidence suggests that Wnt signaling plays a crucial role in regulation of synaptic plasticity. Little is known about the potential function of the Wnt signaling cascades in chronic pain development. Results Fluorescent immunostaining results indicate that β-catenin, an essential protein in the canonical Wnt signaling pathway, is expressed in the superficial layers of the mouse SCDH with enrichment at synapses in lamina II. In addition, Wnt3a, a prototypic Wnt ligand that activates the canonical pathway, is also enriched in the superficial layers. Immunoblotting analysis indicates that both Wnt3a and β-catenin are up-regulated in the SCDH of various mouse pain models created by hind-paw injection of capsaicin, intrathecal (i.t. injection of HIV-gp120 protein or spinal nerve ligation (SNL. Furthermore, Wnt5a, a prototypic Wnt ligand for non-canonical pathways, and its receptor Ror2 are also up-regulated in the SCDH of these models. Conclusion Our results suggest that Wnt signaling pathways are regulated by nociceptive input. The activation of Wnt signaling may regulate the expression of spinal central sensitization during the development of acute and chronic pain.

  14. ON MODELING METHODS OF REPRODUCTION OF FIXED ASSETS IN DYNAMIC INPUT - OUTPUT MODELS

    Directory of Open Access Journals (Sweden)

    Baranov A. O.

    2014-12-01

    Full Text Available The article presents a comparative study of methods for modeling the reproduction of fixed assets in various types of dynamic input-output models developed at the Novosibirsk State University and at the Institute of Economics and Industrial Engineering of the Siberian Division of the Russian Academy of Sciences. The study compares the techniques of information provision for the investment blocks of the models. The mathematical description of the fixed assets reproduction block is considered in detail for the Dynamic Input-Output Model included in the KAMIN system and for the optimization interregional input-output model, and the peculiarities of the information support of the investment and fixed assets blocks of the two models are analyzed. In conclusion, the article provides suggestions for the joint use of the analyzed models in forecasting the development of the Russian economy: the KAMIN system's models for short-term and middle-term forecasting, and the optimization interregional input-output model for long-term forecasts based on the spatial structure of the economy.

  15. Efficient uncertainty quantification of a fully nonlinear and dispersive water wave model with random inputs

    DEFF Research Database (Denmark)

    Bigoni, Daniele; Engsig-Karup, Allan Peter; Eskilsson, Claes

    2016-01-01

    A major challenge in next-generation industrial applications is to improve numerical analysis by quantifying uncertainties in predictions. In this work we present a formulation of a fully nonlinear and dispersive potential flow water wave model with random inputs for the probabilistic description...... of the evolution of waves. The model is analyzed using random sampling techniques and nonintrusive methods based on generalized polynomial chaos (PC). These methods allow us to accurately and efficiently estimate the probability distribution of the solution and require only the computation of the solution...... at different points in the parameter space, allowing for the reuse of existing simulation software. The choice of the applied methods is driven by the number of uncertain input parameters and by the fact that finding the solution of the considered model is computationally intensive. We revisit experimental...

  16. Development of the MARS input model for Ulchin 1/2 transient analyzer

    International Nuclear Information System (INIS)

    Jeong, J. J.; Kim, K. D.; Lee, S. W.; Lee, Y. J.; Chung, B. D.; Hwang, M.

    2003-03-01

    KAERI has been developing an NSSS transient analyzer based on best-estimate codes for the Ulchin 1/2 plants. The MARS and RETRAN codes are used as the best-estimate codes for the NSSS transient analyzer. Of the two codes, the MARS code is to be used for realistic analysis of small- and large-break loss-of-coolant accidents with break sizes greater than 2 inches in diameter. This report includes the input model requirements and the calculation note for the Ulchin 1/2 MARS input data generation (see the Appendix). In order to confirm the validity of the input data, we performed calculations for a steady state at the 100% power operation condition and for a double-ended cold leg break LOCA. The results of the steady-state calculation agree well with the design data. The results of the LOCA calculation appear reasonable and consistent with those of other best-estimate calculations. Therefore, the MARS input data can be used as a base input deck for the MARS transient analyzer for Ulchin 1/2.

  17. Development of the MARS input model for Ulchin 3/4 transient analyzer

    International Nuclear Information System (INIS)

    Jeong, J. J.; Kim, K. D.; Lee, S. W.; Lee, Y. J.; Lee, W. J.; Chung, B. D.; Hwang, M. G.

    2003-12-01

    KAERI has been developing an NSSS transient analyzer based on best-estimate codes. The MARS and RETRAN codes are adopted as the best-estimate codes for the NSSS transient analyzer. Of these two codes, the MARS code is to be used for realistic analysis of small- and large-break loss-of-coolant accidents with break sizes greater than 2 inches in diameter. This report includes the MARS input model requirements and the calculation note for the MARS input data generation (see the Appendix) for the Ulchin 3/4 plant analyzer. In order to confirm the validity of the input data, we performed calculations for a steady state at the 100% power operation condition and for a double-ended cold leg break LOCA. The results of the steady-state calculation agree well with the design data. The results of the LOCA calculation appear reasonable and consistent with those of other best-estimate calculations. Therefore, the MARS input data can be used as a base input deck for the MARS transient analyzer for Ulchin 3/4.

  18. Fast Prediction of Differential Mode Noise Input Filter Requirements for FLyback and Boost Unity Power Factor Converters

    DEFF Research Database (Denmark)

    Andersen, Michael Andreas E.

    1997-01-01

    Two new and simple methods to make predictions of the differential mode (DM) input filter requirements are presented, one for flyback and one for boost unity power factor converters. They have been verified by measurements. They give the designer the ability to predict the DM input noise filter...

  19. The stability of input structures in a supply-driven input-output model: A regional analysis

    Energy Technology Data Exchange (ETDEWEB)

    Allison, T.

    1994-06-01

    Disruptions in the supply of strategic resources or other crucial factor inputs often present significant problems for planners and policymakers. The problem may be particularly significant at the regional level, where higher levels of product specialization mean supply restrictions are more likely to affect leading regional industries. To maintain economic stability in the event of a supply restriction, regional planners may therefore need to evaluate the importance of market versus non-market systems for allocating the remaining supply of the disrupted resource to the region's leading consuming industries. This paper reports on research that has attempted to show that large short-term changes on the supply side do not lead to substantial changes in input coefficients and therefore do not mean the abandonment of the concept of the production function, as has been suggested (Oosterhaven, 1988). The supply-driven model was tested for six sectors of the economy of Washington State and found to yield new input coefficients whose values were in most cases close approximations of their original values, even with substantial changes in supply. Average coefficient changes from a 50% output reduction in these six sectors were in the vast majority of cases (297 of a total of 315) less than ±2.0% of their original values, excluding coefficient changes for the restricted input. Given these small changes, the most important issue for the validity of the supply-driven input-output model may therefore be the empirical question of the extent to which these coefficient changes are acceptable as being within the limits of approximation.
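
    The stability test described above can be sketched numerically: build the supply-driven (Ghosh) allocation coefficients from a flows matrix, impose a 50% cut in one sector's primary inputs, and recompute the technical input coefficients. The 3-sector flows below are hypothetical stand-ins, not the Washington State data used in the paper.

```python
import numpy as np

# Hypothetical 3-sector flows matrix Z (z_ij = sales of sector i to sector j)
# and gross outputs x; stands in for the Washington State tables.
Z = np.array([[10., 20., 15.],
              [25.,  5., 30.],
              [12., 18.,  8.]])
x = np.array([100., 120., 90.])
v = x - Z.sum(axis=0)                 # primary inputs (value added) by sector

B = Z / x[:, None]                    # Ghosh allocation coefficients b_ij = z_ij / x_i
G = np.linalg.inv(np.eye(3) - B)      # Ghosh inverse, so that x = v @ G

# Supply-driven shock: 50% cut in sector 0's primary inputs
v_new = v.copy()
v_new[0] *= 0.5
x_new = v_new @ G                     # new gross outputs

# Recompute technical input coefficients a_ij = z_ij / x_j and compare
Z_new = B * x_new[:, None]            # flows scale with the selling sector's output
A_old = Z / x[None, :]
A_new = Z_new / x_new[None, :]
print(np.round(100 * (A_new - A_old) / A_old, 2))   # percent coefficient changes
```

    The printed matrix is the analogue of the paper's coefficient-change comparison: small percentage changes would support the stability of the production-function assumption under a supply shock.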

  20. High Temperature Test Facility Preliminary RELAP5-3D Input Model Description

    Energy Technology Data Exchange (ETDEWEB)

    Bayless, Paul David [Idaho National Lab. (INL), Idaho Falls, ID (United States)

    2015-12-01

    A RELAP5-3D input model is being developed for the High Temperature Test Facility at Oregon State University. The current model is described in detail. Further refinements will be made to the model as final as-built drawings are released and when system characterization data are available for benchmarking the input model.

  1. Getting innovation out of suppliers? A conceptual model for characterizing supplier inputs to new product development

    OpenAIRE

    Lakemond, Nicolette; Rosell, David T.

    2011-01-01

    There are many studies on supplier collaboration in NPD. However, not much has been written about what suppliers actually contribute to innovation. Based on a literature review of 80 articles, we develop a conceptual framework categorizing different supplier inputs to innovation. This model is formulated by characterizing supplier inputs related to the component level and the architectural level, and inputs that are incremental or radical in nature. On a component level, supplier inputs ...

  2. Modeling Input Errors to Improve Uncertainty Estimates for Sediment Transport Model Predictions

    Science.gov (United States)

    Jung, J. Y.; Niemann, J. D.; Greimann, B. P.

    2016-12-01

    Bayesian methods using Markov chain Monte Carlo algorithms have recently been applied to sediment transport models to assess the uncertainty in the model predictions due to the parameter values. Unfortunately, the existing approaches can only attribute overall uncertainty to the parameters. This limitation is critical because no model can produce accurate forecasts if forced with inaccurate input data, even if the model is well founded in physical theory. In this research, an existing Bayesian method is modified to consider the potential errors in input data during the uncertainty evaluation process. The input error is modeled using Gaussian distributions, and the means and standard deviations are treated as uncertain parameters. The proposed approach is tested by coupling it to the Sedimentation and River Hydraulics - One Dimension (SRH-1D) model and simulating a 23-km reach of the Tachia River in Taiwan. The Wu equation in SRH-1D is used for computing the transport capacity for a bed material load of non-cohesive material. Three types of input data are considered uncertain: (1) the input flowrate at the upstream boundary, (2) the water surface elevation at the downstream boundary, and (3) the water surface elevation at a hydraulic structure in the middle of the reach. The benefits of modeling the input errors in the uncertainty analysis are evaluated by comparing the accuracy of the most likely forecast and the coverage of the observed data by the credible intervals to those of the existing method. The results indicate that the internal boundary condition has the largest uncertainty among those considered. Overall, the uncertainty estimates from the new method are notably different from those of the existing method for both the calibration and forecast periods.
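
    The core idea above (treating the mean and standard deviation of a Gaussian input error as extra parameters in the Markov chain Monte Carlo sampling) can be sketched with a toy linear stand-in for the sediment model. All data, names and rates here are illustrative assumptions, not the SRH-1D Tachia River application.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "sediment model": predicted load y = theta * inflow, where the recorded
# inflow q_obs carries a Gaussian error whose mean (mu) and standard deviation
# (sigma) are sampled alongside the model parameter theta.
q_true = rng.uniform(50.0, 150.0, size=30)
y_obs = 0.8 * q_true + rng.normal(0.0, 2.0, size=30)   # synthetic observations
q_obs = q_true + rng.normal(5.0, 3.0, size=30)         # biased, noisy input record

def log_post(params):
    theta, mu, sigma = params
    if sigma <= 0.0 or theta <= 0.0:
        return -np.inf
    resid = y_obs - theta * (q_obs - mu)               # mu corrects the input bias
    var = 2.0**2 + (theta * sigma) ** 2                # obs noise + propagated input error
    return -0.5 * np.sum(resid**2 / var + np.log(var)) - 0.5 * (sigma / 10.0) ** 2

# Random-walk Metropolis over (theta, mu, sigma)
chain = np.empty((5000, 3))
cur = np.array([1.0, 0.0, 1.0])
cur_lp = log_post(cur)
for i in range(len(chain)):
    prop = cur + rng.normal(0.0, [0.02, 0.5, 0.2])
    prop_lp = log_post(prop)
    if np.log(rng.uniform()) < prop_lp - cur_lp:
        cur, cur_lp = prop, prop_lp
    chain[i] = cur

print(chain[2500:].mean(axis=0))   # posterior means of (theta, mu, sigma)
```

    The posterior spread of theta now reflects both parameter uncertainty and input-data uncertainty, which is the benefit the abstract reports over attributing all uncertainty to the parameters.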

  3. A time-resolved model of the mesospheric Na layer: constraints on the meteor input function

    Directory of Open Access Journals (Sweden)

    J. M. C. Plane

    2004-01-01

    Full Text Available A time-resolved model of the Na layer in the mesosphere/lower thermosphere region is described, in which the continuity equations for the major sodium species Na, Na+ and NaHCO3 are solved explicitly, and the other short-lived species are treated in steady state. It is shown that the diurnal variation of the Na layer can only be modelled satisfactorily if sodium species are permanently removed below about 85 km, both through the dimerization of NaHCO3 and the uptake of sodium species on meteoric smoke particles that are assumed to have formed from the recondensation of vaporized meteoroids. When the sensitivity of the Na layer to the meteoroid input function is considered, an inconsistent picture emerges. The ratio of the column abundance of Na+ to that of Na is shown to increase strongly with the average meteoroid velocity, because the Na is injected at higher altitudes. Comparison with a limited set of Na+ measurements indicates that the average meteoroid velocity is probably less than about 25 km s⁻¹, in agreement with velocity estimates from conventional meteor radars, and considerably slower than recent observations made by wide-aperture incoherent scatter radars. The Na column abundance is shown to be very sensitive to the meteoroid mass input rate, and to the rate of vertical transport by eddy diffusion. Although the magnitude of the eddy diffusion coefficient in the 80–90 km region is uncertain, there is a consensus among recent models using parameterisations of gravity wave momentum deposition that the average value is less than 3×10⁵ cm² s⁻¹. This requires that the global meteoric mass input rate be less than about 20 t d⁻¹, which is closest to estimates from incoherent scatter radar observations. Finally, the diurnal variation in the meteoroid input rate only slightly perturbs the Na layer, because the residence time of Na in the layer is several days, and diurnal effects are effectively averaged out.
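
    The time-resolved approach can be illustrated with a one-species toy column model: a Gaussian meteoric injection profile, permanent removal below 85 km, and vertical eddy diffusion, integrated explicitly in time. All rates and profiles below are illustrative assumptions, not the paper's chemistry.

```python
import numpy as np

# One-species toy layer model: injection + sink + eddy diffusion,
# explicit time stepping on a 1 km altitude grid (70-110 km).
z_km = np.arange(70.0, 111.0, 1.0)         # altitude grid, km
dz = 1e5                                   # grid spacing, cm
K = 2e5                                    # eddy diffusion coefficient, cm^2 s^-1
inj = 10.0 * np.exp(-0.5 * ((z_km - 92.0) / 4.0) ** 2)   # injection, cm^-3 s^-1
loss = np.where(z_km < 85.0, 1e-4, 0.0)    # uptake/dimerization sink, s^-1

n = np.zeros_like(z_km)
dt = 50.0                                  # s; explicit scheme stable: K*dt/dz**2 = 1e-3
for _ in range(20000):                     # ~12 days of simulated time
    lap = np.zeros_like(n)
    lap[1:-1] = (n[2:] - 2.0 * n[1:-1] + n[:-2]) / dz**2
    n += dt * (K * lap + inj - loss * n)
    n[0] = n[-1] = 0.0                     # absorbing boundaries

peak_km = z_km[np.argmax(n)]
print(peak_km)                             # the layer peaks near the injection height
```

    A faster average meteoroid velocity would shift the injection profile upward, which is the mechanism behind the Na+/Na column-ratio sensitivity described in the abstract.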

  4. Input data for mathematical modeling and numerical simulation of switched reluctance machines

    Directory of Open Access Journals (Sweden)

    Ali Asghar Memon

    2017-10-01

    Full Text Available The modeling and simulation of Switched Reluctance (SR) machines and drives is challenging because of their doubly salient pole structure and magnetic saturation. This paper presents the input data in the form of experimentally obtained magnetization characteristics. This data was used for a computer simulation-based model of the SR machine in "Selecting Best Interpolation Technique for Simulation Modeling of Switched Reluctance Machine" [1] and "Modeling of Static Characteristics of Switched Reluctance Motor" [2]. This data is the primary source of the other data tables, of co-energy and static torque, which are also among the data essential for the simulation and can be derived from it. The procedure and experimental setup for collecting the data are presented in detail.

  5. Input data for mathematical modeling and numerical simulation of switched reluctance machines.

    Science.gov (United States)

    Memon, Ali Asghar; Shaikh, Muhammad Mujtaba

    2017-10-01

    The modeling and simulation of Switched Reluctance (SR) machines and drives is challenging because of their doubly salient pole structure and magnetic saturation. This paper presents the input data in the form of experimentally obtained magnetization characteristics. This data was used for a computer simulation-based model of the SR machine in "Selecting Best Interpolation Technique for Simulation Modeling of Switched Reluctance Machine" [1] and "Modeling of Static Characteristics of Switched Reluctance Motor" [2]. This data is the primary source of the other data tables, of co-energy and static torque, which are also among the data essential for the simulation and can be derived from it. The procedure and experimental setup for collecting the data are presented in detail.
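
    The derivation mentioned above (co-energy and static-torque tables obtained from the magnetization data) can be sketched as follows. The flux-linkage surface psi(i, theta) below is an analytic stand-in with made-up inductances and saturation, not the experimentally measured characteristics of the paper.

```python
import numpy as np

# Synthetic magnetization characteristics psi(i, theta) on a current x angle grid
i_axis = np.linspace(0.0, 10.0, 50)              # phase current, A
theta_axis = np.linspace(0.0, np.pi / 6, 31)     # rotor angle, rad (unaligned -> aligned)
I, TH = np.meshgrid(i_axis, theta_axis, indexing="ij")
L_u, L_a = 0.01, 0.05                            # unaligned/aligned inductances, H
psi = (L_u + (L_a - L_u) * np.sin(3.0 * TH) ** 2) * I * (1.0 - 0.03 * I)  # mild saturation

# Co-energy at rated current: W'(theta) = integral of psi over i (trapezoidal rule)
di = np.diff(i_axis)[:, None]
Wc = np.sum(0.5 * (psi[1:, :] + psi[:-1, :]) * di, axis=0)   # J, one value per angle

# Static torque: T(theta) = dW'/dtheta at constant current
T = np.gradient(Wc, theta_axis)                  # N*m

print(Wc[0], Wc[-1])   # co-energy grows from the unaligned to the aligned position
```

    Repeating the current integral at each measured current level fills in the full co-energy table, and differentiating along the angle axis gives the static-torque table referred to in the abstract.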

  6. Dispersion modeling of accidental releases of toxic gases - Sensitivity study and optimization of the meteorological input

    Science.gov (United States)

    Baumann-Stanzer, K.; Stenzel, S.

    2009-04-01

    based on the weather forecast model ALADIN. The meteorological fields analysed with INCA include temperature, humidity, wind, precipitation and cloudiness. Within the frame of the project, INCA data were compared with measurements conducted at traffic-near sites. INCA analyses and very short term forecast fields (up to 6 hours) are found to be an advanced possibility for providing on-line meteorological input for the model package used by the fire brigade. Nevertheless, a high degree of caution in the interpretation of the model results is required, especially in the case of very low wind speeds, very stable atmospheric conditions, and flow deflection by buildings in the urban area or by complex topography.

  7. TASS/SMR Code Topical Report for SMART Plant, Vol II: User's Guide and Input Requirement

    Energy Technology Data Exchange (ETDEWEB)

    Kim, See Darl; Kim, Soo Hyoung; Kim, Hyung Rae (and others)

    2008-10-15

    The TASS/SMR code has been developed with domestic technologies for the safety analysis of the SMART plant, which is an integral-type pressurized water reactor. It can be applied to the analysis of design basis accidents of the SMART plant, including non-LOCA events and loss-of-coolant accidents (LOCA). The TASS/SMR code can be applied to any plant regardless of the structural characteristics of the reactor, since the code solves the same governing equations for both the primary and the secondary system. The code has been developed to meet the requirements of a safety analysis code. This report describes the overall structure of TASS/SMR, the input processing, and the processes of steady-state and transient calculations. In addition, the basic differential equations, finite difference equations, state relationships, and constitutive models are described. First, the conservation equations, the discretization process for the numerical analysis, and the search method for state relationships are described. Then, a core power model, heat transfer models, physical models for various components, and control and trip models are explained.

  8. The MARINA model (Model to Assess River Inputs of Nutrients to seAs)

    NARCIS (Netherlands)

    Strokal, Maryna; Kroeze, Carolien; Wang, Mengru; Bai, Zhaohai; Ma, Lin

    2016-01-01

    Chinese agriculture has been developing fast towards industrial food production systems that discharge nutrient-rich wastewater into rivers. As a result, nutrient export by rivers has been increasing, resulting in coastal water pollution. We developed a Model to Assess River Inputs of Nutrients to seAs (MARINA) for China.

  9. Combining predictions from linear models when training and test inputs differ

    NARCIS (Netherlands)

    T. van Ommen (Thijs); N.L. Zhang (Nevin); J. Tian (Jin)

    2014-01-01

    textabstractMethods for combining predictions from different models in a supervised learning setting must somehow estimate/predict the quality of a model's predictions at unknown future inputs. Many of these methods (often implicitly) make the assumption that the test inputs are identical to the

  10. Motivation Monitoring and Assessment Extension for Input-Process-Outcome Game Model

    Science.gov (United States)

    Ghergulescu, Ioana; Muntean, Cristina Hava

    2014-01-01

    This article proposes a Motivation Assessment-oriented Input-Process-Outcome Game Model (MotIPO), which extends the Input-Process-Outcome game model with game-centred and player-centred motivation assessments performed right from the beginning of the game-play. A feasibility case-study involving 67 participants playing an educational game and…

  11. Requirements for Medical Modeling Languages

    Science.gov (United States)

    van der Maas, Arnoud A.F.; Ter Hofstede, Arthur H.M.; Ten Hoopen, A. Johannes

    2001-01-01

    Objective: The development of tailor-made domain-specific modeling languages is sometimes desirable in medical informatics. Naturally, the development of such languages should be guided. The purpose of this article is to introduce a set of requirements for such languages and show their application in analyzing and comparing existing modeling languages. Design: The requirements arise from the practical experience of the authors and others in the development of modeling languages in both general informatics and medical informatics. The requirements initially emerged from the analysis of information modeling techniques. The requirements are designed to be orthogonal, i.e., one requirement can be violated without violation of the others. Results: The proposed requirements for any modeling language are that it be “formal” with regard to syntax and semantics, “conceptual,” “expressive,” “comprehensible,” “suitable,” and “executable.” The requirements are illustrated using both the medical logic modules of the Arden Syntax as a running example and selected examples from other modeling languages. Conclusion: Activity diagrams of the Unified Modeling Language, task structures for work flows, and Petri nets are discussed with regard to the list of requirements, and various tradeoffs are thus made explicit. It is concluded that this set of requirements has the potential to play a vital role in both the evaluation of existing domain-specific languages and the development of new ones. PMID:11230383

  12. The Modulated-Input Modulated-Output Model

    National Research Council Canada - National Science Library

    Moskowitz, Ira S; Kang, Myong H

    1995-01-01

    .... The data replication problem in database systems is our motivation. We introduce a new queueing theoretic model, the MIMO model, that incorporates burstiness in the sending side and busy periods in the receiving side...

  13. Modeling and Control of a Dual-Input Isolated Full-Bridge Boost Converter

    DEFF Research Database (Denmark)

    Zhang, Zhe; Thomsen, Ole Cornelius; Andersen, Michael A. E.

    2012-01-01

    In this paper, a steady-state model, a large-signal (LS) model and an ac small-signal (SS) model for a recently proposed dual-input transformer-isolated boost converter are derived respectively by the switching flow-graph (SFG) nonlinear modeling technique. Based upon the converter’s model, the c....... The measured experimental results match the simulation results fairly well on both input source dynamic and step load transient responses....

  14. The MARINA model (Model to Assess River Inputs of Nutrients to seAs)

    OpenAIRE

    Strokal, Maryna; Kroeze, Carolien; Wang, Mengru; Bai, Zhaohai; Ma, Lin

    2016-01-01

    Chinese agriculture has been developing fast towards industrial food production systems that discharge nutrient-rich wastewater into rivers. As a result, nutrient export by rivers has been increasing, resulting in coastal water pollution. We developed a Model to Assess River Inputs of Nutrients to seAs (MARINA) for China. The MARINA Nutrient Model quantifies river export of nutrients by source at the sub-basin scale as a function of human activities on land. MARINA is a downscaled version for...

  15. Precipitation forecasts and their uncertainty as input into hydrological models

    Directory of Open Access Journals (Sweden)

    M. Kobold

    2005-01-01

    Full Text Available Torrential streams and fast runoff are characteristic of most Slovenian rivers, and extensive damage is caused almost every year by rainstorms affecting different regions of Slovenia. Rainfall-runoff models, which are tools for runoff calculation, can be used for flood forecasting. In Slovenia, the lag time between rainfall and runoff is only a few hours, and on-line data are used only for now-casting. Predicted precipitation is necessary for flood forecasting some days ahead. The ECMWF (European Centre for Medium-Range Weather Forecasts) model gives general forecasts several days ahead, while more detailed precipitation data from the ALADIN/SI model are available two days ahead. Combining the weather forecasts with information on catchment conditions and a hydrological forecasting model can give advance warning of potential flooding, notwithstanding a certain degree of uncertainty in using precipitation forecasts based on meteorological models. Analysis of the sensitivity of the hydrological model to rainfall error has shown that the deviation in runoff is much larger than the rainfall deviation. Therefore, verification of predicted precipitation for large precipitation events was performed with the ECMWF model. Measured precipitation data were interpolated on a regular grid and compared with the results from the ECMWF model. The deviation of predicted precipitation from interpolated measurements is shown with the model bias, which results from the inability of the model to predict the precipitation correctly, together with a bias due to the horizontal resolution of the model and the natural variability of precipitation.
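
    The verification step described above (interpolating gauge measurements onto the model grid and computing the model bias) can be sketched as follows. Inverse-distance weighting is used here as a placeholder for whatever interpolation scheme was actually applied, and all stations and fields are synthetic.

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic rain-gauge network: 15 stations with 24 h precipitation totals
stations = rng.uniform(0.0, 100.0, size=(15, 2))      # station x, y (km)
obs = rng.gamma(2.0, 10.0, size=15)                   # measured totals (mm)

# Regular verification grid
gx, gy = np.meshgrid(np.linspace(0, 100, 20), np.linspace(0, 100, 20))
grid = np.column_stack([gx.ravel(), gy.ravel()])

# Inverse-distance-weighted interpolation of the gauges onto the grid
d = np.linalg.norm(grid[:, None, :] - stations[None, :, :], axis=2)
w = 1.0 / np.maximum(d, 1e-6) ** 2
obs_grid = (w @ obs) / w.sum(axis=1)

# Toy forecast field that over-predicts by 15%, plus noise
forecast = obs_grid * 1.15 + rng.normal(0.0, 2.0, obs_grid.size)
bias = np.mean(forecast - obs_grid)
print(bias)   # positive value means the model over-predicts precipitation
```

    Repeating this for each large precipitation event gives the event-by-event bias statistics discussed in the abstract.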

  16. Characteristic length scale of input data in distributed models: implications for modeling grid size

    Science.gov (United States)

    Artan, G. A.; Neale, C. M. U.; Tarboton, D. G.

    2000-01-01

    The appropriate spatial scale for a distributed energy balance model was investigated by: (a) determining the scale of variability associated with the remotely sensed and GIS-generated model input data; and (b) examining the effects of input data spatial aggregation on model response. The semi-variogram and the characteristic length calculated from the spatial autocorrelation were used to determine the scale of variability of the remotely sensed and GIS-generated model input data. The data were collected from two hillsides at Upper Sheep Creek, a sub-basin of the Reynolds Creek Experimental Watershed, in southwest Idaho. The data were analyzed in terms of the semivariance and the integral of the autocorrelation. The minimum characteristic length associated with the variability of the data used in the analysis was 15 m. Simulated and observed radiometric surface temperature fields at different spatial resolutions were compared. The agreement between simulated and observed fields declined sharply beyond a 10×10 m² modeling grid size. A modeling grid size of about 10×10 m² was deemed the best compromise to achieve: (a) a reduction of computation time and of the size of the support data; and (b) a reproduction of the observed radiometric surface temperature.

  17. Characteristic length scale of input data in distributed models: implications for modeling grain size

    Science.gov (United States)

    Artan, Guleid A.; Neale, C. M. U.; Tarboton, D. G.

    2000-01-01

    The appropriate spatial scale for a distributed energy balance model was investigated by: (a) determining the scale of variability associated with the remotely sensed and GIS-generated model input data; and (b) examining the effects of input data spatial aggregation on model response. The semi-variogram and the characteristic length calculated from the spatial autocorrelation were used to determine the scale of variability of the remotely sensed and GIS-generated model input data. The data were collected from two hillsides at Upper Sheep Creek, a sub-basin of the Reynolds Creek Experimental Watershed, in southwest Idaho. The data were analyzed in terms of the semivariance and the integral of the autocorrelation. The minimum characteristic length associated with the variability of the data used in the analysis was 15 m. Simulated and observed radiometric surface temperature fields at different spatial resolutions were compared. The agreement between simulated and observed fields declined sharply beyond a 10×10 m² modeling grid size. A modeling grid size of about 10×10 m² was deemed the best compromise to achieve: (a) a reduction of computation time and of the size of the support data; and (b) a reproduction of the observed radiometric surface temperature.

  18. Modelling groundwater discharge areas using only digital elevation models as input data

    International Nuclear Information System (INIS)

    Brydsten, Lars

    2006-10-01

    Advanced geohydrological models require data on topography, soil distribution in three dimensions, vegetation, land use, and bedrock fracture zones. To model present geohydrological conditions, these factors can be gathered with different techniques. If a future geohydrological condition is modelled in an area with positive shore displacement (say 5,000 or 10,000 years ahead), some of these factors can be difficult to estimate, including the development of wetlands and the filling of lakes. If the goal of the model is to predict the distribution of groundwater recharge and discharge areas in the landscape, the most important factor is topography. The question is how much topography alone can explain the distribution of geohydrological objects in the landscape. A simplified description of this distribution is that groundwater recharge occurs at local elevation curvatures and discharge occurs in lakes, brooks, and low-lying slopes. The areas in between act as discharge areas during wet periods and recharge areas during dry periods. A model that could predict this pattern using only topography data needs to be able to predict high ridges and future lakes and brooks. This study uses GIS software with four different functions taking digital elevation models as input data: geomorphometric parameters to predict landscape ridges, basin filling to predict lakes, flow accumulation to predict future waterways, and a topographic wetness index to divide the in-between areas by degree of wetness. An area between the village of and Forsmarks' Nuclear Power Plant has been used to calibrate the model. The area is within the SKB 10-metre Elevation Model (DEM) and has a high-resolution orienteering map for wetlands. Wetlands are assumed to be groundwater discharge areas. Five hundred points were randomly distributed across the wetlands. These are potential discharge points. Model parameters were chosen with the

  19. Modelling groundwater discharge areas using only digital elevation models as input data

    Energy Technology Data Exchange (ETDEWEB)

    Brydsten, Lars [Umeaa Univ. (Sweden). Dept. of Biology and Environmental Science]

    2006-10-15

    Advanced geohydrological models require data on topography, soil distribution in three dimensions, vegetation, land use, and bedrock fracture zones. To model present geohydrological conditions, these factors can be gathered with different techniques. If a future geohydrological condition is modelled in an area with positive shore displacement (say 5,000 or 10,000 years from now), some of these factors can be difficult to measure. This could include the development of wetlands and the infilling of lakes. If the goal of the model is to predict the distribution of groundwater recharge and discharge areas in the landscape, the most important factor is topography. The question is how much topography alone can explain the distribution of geohydrological objects in the landscape. A simplified description of that distribution is that groundwater recharge areas occur at local elevation curvatures and discharge occurs in lakes, brooks, and low-lying slopes. Areas in between act as discharge areas during wet periods and recharge areas during dry periods. A model that could predict this pattern using only topography data needs to be able to predict high ridges and future lakes and brooks. This study uses GIS software with four different functions, all taking digital elevation models as input data: geomorphometrical parameters to predict landscape ridges, basin fill to predict lakes, flow accumulation to predict future waterways, and a topographical wetness index to divide the in-between areas by degree of wetness. An area between the village of and the Forsmark Nuclear Power Plant has been used to calibrate the model. The area is covered by the SKB 10-metre digital elevation model (DEM) and has a high-resolution orienteering map showing wetlands. Wetlands are assumed to be groundwater discharge areas. Five hundred points, representing potential discharge points, were randomly distributed across the wetlands. Model parameters were chosen with the

  20. Influence of input matrix representation on topic modelling performance

    CSIR Research Space (South Africa)

    De Waal, A

    2010-11-01

    Full Text Available Topic models explain a collection of documents with a small set of distributions over terms. These distributions over terms define the topics. Topic models ignore the structure of documents and use a bag-of-words approach which relies solely...

  1. Identifying best-fitting inputs in health-economic model calibration: a Pareto frontier approach.

    Science.gov (United States)

    Enns, Eva A; Cipriano, Lauren E; Simons, Cyrena T; Kong, Chung Yin

    2015-02-01

    To identify best-fitting input sets using model calibration, individual calibration target fits are often combined into a single goodness-of-fit (GOF) measure using a set of weights. Decisions in the calibration process, such as which weights to use, influence which sets of model inputs are identified as best-fitting, potentially leading to different health economic conclusions. We present an alternative approach to identifying best-fitting input sets based on the concept of Pareto-optimality. A set of model inputs is on the Pareto frontier if no other input set simultaneously fits all calibration targets as well or better. We demonstrate the Pareto frontier approach in the calibration of 2 models: a simple, illustrative Markov model and a previously published cost-effectiveness model of transcatheter aortic valve replacement (TAVR). For each model, we compare the input sets on the Pareto frontier to an equal number of best-fitting input sets according to 2 possible weighted-sum GOF scoring systems, and we compare the health economic conclusions arising from these different definitions of best-fitting. For the simple model, outcomes evaluated over the best-fitting input sets according to the 2 weighted-sum GOF schemes were virtually nonoverlapping on the cost-effectiveness plane and resulted in very different incremental cost-effectiveness ratios ($79,300 [95% CI 72,500-87,600] v. $139,700 [95% CI 79,900-182,800] per quality-adjusted life-year [QALY] gained). Input sets on the Pareto frontier spanned both regions ($79,000 [95% CI 64,900-156,200] per QALY gained). The TAVR model yielded similar results. Choices in generating a summary GOF score may result in different health economic conclusions. The Pareto frontier approach eliminates the need to make these choices by using an intuitive and transparent notion of optimality as the basis for identifying best-fitting input sets. © The Author(s) 2014.
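
    The notion of Pareto-optimality used above can be made concrete with a short sketch. The routine below is illustrative only (the error values and the `pareto_frontier` helper are hypothetical, not from the paper): each candidate input set carries a vector of goodness-of-fit errors, one per calibration target, and a set is on the frontier if no other set fits every target at least as well and at least one target strictly better.

```python
import numpy as np

def pareto_frontier(errors):
    """Return indices of non-dominated rows.

    errors: (n_sets, n_targets) array of fit errors, smaller is better.
    A row is on the Pareto frontier if no other row is <= on every
    target and strictly < on at least one.
    """
    errors = np.asarray(errors, dtype=float)
    keep = []
    for i in range(errors.shape[0]):
        dominated = any(
            np.all(errors[j] <= errors[i]) and np.any(errors[j] < errors[i])
            for j in range(errors.shape[0]) if j != i
        )
        if not dominated:
            keep.append(i)
    return keep

# Three candidate input sets, two calibration targets:
errs = [[1.0, 4.0],   # good on target 1, poor on target 2
        [3.0, 1.0],   # poor on target 1, good on target 2
        [3.5, 4.5]]   # dominated by both of the others
print(pareto_frontier(errs))  # -> [0, 1]
```

    No weights are needed: the two non-dominated sets survive even though a weighted-sum score would rank them differently depending on the chosen weights.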

  2. On Input Vector Representation for the SVR model of Reactor Core Loading Pattern Critical Parameters

    International Nuclear Information System (INIS)

    Trontl, K.; Pevec, D.; Smuc, T.

    2008-01-01

    Determination and optimization of the reactor core loading pattern is an important factor in nuclear power plant operation. The goal is to minimize the amount of enriched uranium (fresh fuel) and burnable absorbers placed in the core, while maintaining nuclear power plant operational and safety characteristics. The usual approach to loading pattern optimization involves a high degree of engineering judgment, a set of heuristic rules, an optimization algorithm, and a computer code used for evaluating proposed loading patterns. The speed of the optimization process is highly dependent on the computer code used for the evaluation. Recently, we proposed a new method for fast loading pattern evaluation based on a general robust regression model relying on state-of-the-art research in the field of machine learning. We employed the Support Vector Regression (SVR) technique. SVR is a supervised learning method in which model parameters are automatically determined by solving a quadratic optimization problem. The preliminary tests revealed good potential of the SVR method for fast and accurate reactor core loading pattern evaluation. However, some aspects of model development are still unresolved. The main objective of the work reported in this paper was to conduct additional tests and analyses required for full clarification of the SVR applicability for loading pattern evaluation. We focused our attention on the parameters defining the input vector, primarily its structure and complexity, and on the parameters defining the kernel functions. All the tests were conducted on the NPP Krsko reactor core, using the MCRAC code for the calculation of reactor core loading pattern critical parameters. The tested input vector structures did not influence the accuracy of the models, suggesting that the initially tested input vector, consisting of the number of IFBAs and the k-inf at the beginning of the cycle, is adequate. The influence of kernel function specific parameters (σ for RBF kernel
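
    As a hedged illustration of the technique named above, the sketch below fits an RBF-kernel SVR to synthetic data shaped like the abstract's input vector (number of IFBAs, k-inf at beginning of cycle). The data, target function, and hyperparameters are invented for the example and are not the authors' NPP Krsko setup.

```python
import numpy as np
from sklearn.svm import SVR
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(0)
# Hypothetical training set: each row is (number of IFBAs, k-inf at BOC);
# the regression target is a loading-pattern critical parameter (synthetic).
X = rng.uniform([0, 1.0], [128, 1.3], size=(200, 2))
y = 0.01 * X[:, 0] + 5.0 * X[:, 1] + rng.normal(0, 0.01, 200)

# Scaling matters for RBF kernels: the kernel width acts on scaled inputs.
model = make_pipeline(StandardScaler(), SVR(kernel="rbf", C=10.0, epsilon=0.001))
model.fit(X, y)
print(round(model.score(X, y), 3))  # R^2 on the training data
```

    Once trained, such a surrogate evaluates a proposed loading pattern in microseconds, which is the speed advantage the abstract refers to.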

  3. Updates to Model Algorithms & Inputs for the Biogenic ...

    Science.gov (United States)

    We have developed new canopy emission algorithms and land use data for BEIS. Simulations with BEIS v3.4 and these updates in CMAQ v5.0.2 are compared to the Model of Emissions of Gases and Aerosols from Nature (MEGAN), and the simulations are evaluated against observations. This has resulted in improved model evaluations of modeled isoprene, NOx, and O3. The National Exposure Research Laboratory (NERL) Atmospheric Modeling and Analysis Division (AMAD) conducts research in support of the EPA's mission to protect human health and the environment. AMAD's research program is engaged in developing and evaluating predictive atmospheric models on all spatial and temporal scales for forecasting air quality and for assessing changes in air quality and air pollutant exposures, as affected by changes in ecosystem management and regulatory decisions. AMAD is responsible for providing a sound scientific and technical basis for regulatory policies based on air quality models to improve ambient air quality. The models developed by AMAD are being used by EPA, NOAA, and the air pollution community in understanding and forecasting not only the magnitude of the air pollution problem, but also in developing emission control policies and regulations for air quality improvements.

  4. Using Crowd Sensed Data as Input to Congestion Model

    DEFF Research Database (Denmark)

    Lehmann, Anders; Gross, Allan

    2016-01-01

    Emission of airborne pollutants and climate gasses from the transport sector is a growing problem, both in industrialised and developing countries. Planning of urban transport systems is essential to minimise the environmental, health and economic impact of congestion in the transport system. To get accurate and timely information on traffic congestion, and by extension information on air pollution, near real-time traffic models are needed. We present in this paper an implementation of the Restricted Stochastic User Equilibrium model that is capable of modelling congestion for very large urban traffic systems in less than an hour. The model is implemented in an open source database system, for easy interfacing with GIS resources and crowd sensed transportation data.

  5. Requirements for effective modelling strategies.

    NARCIS (Netherlands)

    Gaunt, J.L.; Riley, J.; Stein, A.; Penning de Vries, F.W.T.

    1997-01-01

    As a result of a recent BBSRC-funded workshop between soil scientists, modellers, statisticians and others to discuss issues relating to the derivation of complex environmental models, a set of modelling guidelines is presented and the required associated research areas are discussed.

  6. Input-dependent wave attenuation in a critically-balanced model of cortex.

    Directory of Open Access Journals (Sweden)

    Xiao-Hu Yan

    Full Text Available A number of studies have suggested that many properties of brain activity can be understood in terms of critical systems. However, it is still not known how the long-range susceptibilities characteristic of criticality arise in the living brain from its local connectivity structures. Here we prove that a dynamically critically-poised model of cortex acquires an infinitely long-ranged susceptibility in the absence of input. When an input is presented, the susceptibility attenuates exponentially as a function of distance, with a spatial attenuation constant that increases (i.e., a range that decreases) the larger the input. This is in direct agreement with recent results showing that waves of local field potential activity evoked by single spikes in primary visual cortex of cat and macaque attenuate with a characteristic length that also increases with decreasing contrast of the visual stimulus. A susceptibility whose spatial range changes with input strength can be thought of as implementing input-dependent spatial integration: when the input is large, no evidence is needed beyond the local input; when the input is weak, evidence needs to be integrated over a larger spatial domain to reach a decision. Such input-strength-dependent strategies have been demonstrated in visual processing. Our results suggest that input-strength-dependent spatial integration may be a natural feature of a critically-balanced cortical network.

  7. Reissner-Mindlin plate model with uncertain input data

    Czech Academy of Sciences Publication Activity Database

    Hlaváček, Ivan; Chleboun, J.

    2014-01-01

    Roč. 17, Jun (2014), s. 71-88 ISSN 1468-1218 Institutional support: RVO:67985840 Keywords : Reissner-Mindlin model * orthotropic plate Subject RIV: BA - General Mathematics Impact factor: 2.519, year: 2014 http://www.sciencedirect.com/science/article/pii/S1468121813001077

  8. Determining input values for a simple parametric model to estimate ...

    African Journals Online (AJOL)

    Estimating soil evaporation (Es) is an important part of modelling vineyard evapotranspiration for irrigation purposes. Furthermore, quantification of possible soil texture and trellis effects is essential. Daily Es from six topsoils packed into lysimeters was measured under grapevines on slanting and vertical trellises, ...

  9. Land Building Models: Uncertainty in and Sensitivity to Input Parameters

    Science.gov (United States)

    2013-08-01

    Vicksburg, MS: US Army Engineer Research and Development Center. An electronic copy of this CHETN is available from http://chl.erdc.usace.army.mil/chetn... Nourishment Module, Chapter 8. In Coastal Louisiana Ecosystem Assessment and Restoration (CLEAR) Model of Louisiana Coastal Area (LCA) Comprehensive

  10. Scientific and technical advisory committee review of the nutrient inputs to the watershed model

    Science.gov (United States)

    The following is a report by a STAC Review Team concerning the methods and documentation used by the Chesapeake Bay Partnership for evaluation of nutrient inputs to Phase 6 of the Chesapeake Bay Watershed Model. The “STAC Review of the Nutrient Inputs to the Watershed Model” (previously referred to...

  11. Crop growth modelling and crop yield forecasting using satellite derived meteorological inputs

    NARCIS (Netherlands)

    Wit, de A.J.W.; Diepen, van K.

    2006-01-01

    One of the key challenges for operational crop monitoring and yield forecasting using crop models is to find spatially representative meteorological input data. Currently, weather inputs are often interpolated from low density networks of weather stations or derived from output from coarse (0.5

  12. A switchable light-input, light-output system modelled and constructed in yeast

    Directory of Open Access Journals (Sweden)

    Kozma-Bognar Laszlo

    2009-09-01

    Full Text Available Abstract Background Advances in synthetic biology will require spatio-temporal regulation of biological processes in heterologous host cells. We develop a light-switchable, two-hybrid interaction in yeast, based upon the Arabidopsis proteins PHYTOCHROME A and FAR-RED ELONGATED HYPOCOTYL 1-LIKE. Light input to this regulatory module allows dynamic control of a light-emitting LUCIFERASE reporter gene, which we detect by real-time imaging of yeast colonies on solid media. Results The reversible activation of the phytochrome by red light, and its inactivation by far-red light, is retained. We use this quantitative readout to construct a mathematical model that matches the system's behaviour and predicts the molecular targets for future manipulation. Conclusion Our model, methods and materials together constitute a novel system for a eukaryotic host with the potential to convert a dynamic pattern of light input into a predictable gene expression response. This system could be applied for the regulation of genetic networks - both known and synthetic.
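
    The dynamics described above can be caricatured with a toy two-state switch: red light converts the phytochrome to its active form, far-red light reverts it, and reporter output follows the active fraction. All rates below are invented for illustration; they are not the parameters fitted in the paper's model.

```python
# Toy light-switchable module (illustrative rates, not fitted values):
# red light ('R') drives Pr -> Pfr (active), far-red ('FR') reverts
# Pfr -> Pr, dark ('D') leaves the pool unchanged, and a luciferase
# reporter is produced in proportion to the active Pfr fraction.
def simulate(light_schedule, dt=0.01, k_red=1.0, k_fr=2.0,
             k_prod=1.0, k_deg=0.2):
    pfr, luc, out = 0.0, 0.0, []
    for light in light_schedule:
        to_pfr = k_red * (1.0 - pfr) if light == 'R' else 0.0
        to_pr = k_fr * pfr if light == 'FR' else 0.0
        pfr += dt * (to_pfr - to_pr)
        luc += dt * (k_prod * pfr - k_deg * luc)
        out.append(luc)
    return out

# 500 steps of red light, then 500 steps of far-red:
schedule = ['R'] * 500 + ['FR'] * 500
trace = simulate(schedule)
```

    The trace rises while red light keeps the switch on and decays after the far-red pulse turns it off, mimicking the reversible activation the abstract reports.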

  13. Little Higgs model limits from LHC - Input for Snowmass 2013

    International Nuclear Information System (INIS)

    Reuter, Juergen; Tonini, Marco; Vries, Maikel de

    2013-07-01

    The status of the most prominent model implementations of the Little Higgs paradigm, the Littlest Higgs with and without discrete T parity as well as the Simplest Little Higgs, is reviewed. For this, we take into account a fit to 21 electroweak precision observables from LEP, SLC, and the Tevatron, together with the full 25 fb^-1 of Higgs data reported by ATLAS and CMS at Moriond 2013. We also - focusing on the Littlest Higgs with T parity - include an outlook on corresponding direct searches at the 8 TeV LHC and their competitiveness with the EW and Higgs data regarding their exclusion potential. This contribution to the Snowmass procedure serves as a guideline to which regions in the parameter space of Little Higgs models remain viable for the upcoming LHC runs and future experiments at the energy frontier. To this end, we propose two different benchmark scenarios for the Littlest Higgs with T parity, one with heavy mirror quarks and one with light ones.

  14. Mechanistic interpretation of glass reaction: Input to kinetic model development

    International Nuclear Information System (INIS)

    Bates, J.K.; Ebert, W.L.; Bradley, J.P.; Bourcier, W.L.

    1991-05-01

    Actinide-doped SRL 165 type glass was reacted in J-13 groundwater at 90 degrees C for times up to 278 days. The reaction was characterized by both solution and solid analyses. The glass was seen to react nonstoichiometrically, with preferential leaching of alkali metals and boron. High resolution electron microscopy revealed the formation of a complex layer structure which became separated from the underlying glass as the reaction progressed. The formation of the layer and its effect on continued glass reaction are discussed with respect to the current model for glass reaction used in the EQ3/6 computer simulation. It is concluded that the layer formed after 278 days is not protective and may eventually become fractured and generate particulates that may be transported by liquid water. 5 refs., 5 figs., 3 tabs.

  15. Assessing the required additional organic inputs to soils to reach the 4 per 1000 objective at the global scale: a RothC project

    Science.gov (United States)

    Lutfalla, Suzanne; Skalsky, Rastislav; Martin, Manuel; Balkovic, Juraj; Havlik, Petr; Soussana, Jean-François

    2017-04-01

    The 4 per 1000 Initiative underlines the role of soil organic matter in addressing the three-fold challenge of food security, adaptation of the land sector to climate change, and mitigation of human-induced GHG emissions. It sets an ambitious global target of a 0.4% (4/1000) annual increase in topsoil organic carbon (SOC) stock. The present collaborative project between the 4 per 1000 research program, INRA, and IIASA aims at providing a first global assessment of the translation of this soil organic carbon sequestration target into an equivalent organic matter inputs target. Indeed, soil organic carbon builds up in the soil through different processes leading to an increased input of carbon to the system (by increasing returns to the soil, for instance) or a decreased output of carbon from the system (mainly by biodegradation and mineralization processes). Here we answer the question of how much extra organic matter must be added to agricultural soils every year (in otherwise unchanged climatic conditions) in order to guarantee a 0.4% yearly increase of total soil organic carbon stocks (a 40 cm soil depth is considered). We use the RothC model of soil organic matter turnover on a spatial grid over 10 years to model two situations for croplands: a first situation where soil organic carbon remains constant (system at equilibrium) and a second situation where soil organic carbon increases by 0.4% every year. The model accounts for the effects of soil type, temperature, moisture content and plant cover on the turnover process; it is run on a monthly time step, and it can simulate the organic input needed to sustain a certain SOC stock (or evolution of SOC stock). These two SOC conditions lead to two average yearly plant inputs over 10 years. The difference between the two simulated inputs represents the additional yearly input needed to reach the 4 per 1000 objective (input_eq for inputs needed for SOC to remain constant; input_4/1000 for inputs needed for SOC to reach
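
    The logic of the two RothC runs can be illustrated with a much simpler one-pool carbon model (RothC itself uses several pools and monthly climate modifiers, so this is only a sketch with made-up numbers): with an annual decomposition fraction k, the input keeping SOC constant is k·C, and the extra input needed for a 0.4% yearly increase follows directly.

```python
def extra_input_for_target(C0, k, years=10, growth=0.004):
    """One-pool SOC sketch: C[t+1] = C[t] * (1 - k) + I[t].

    Returns, for each year, the input needed beyond the equilibrium
    input k * C0 to grow the stock by `growth` (0.4%) per year.
    C0: initial SOC stock (e.g. t C/ha); k: annual decomposition fraction.
    """
    C = C0
    input_eq = C0 * k          # input that keeps SOC constant at C0
    extra = []
    for _ in range(years):
        target = C * (1.0 + growth)
        I = target - C * (1.0 - k)   # input needed this year
        extra.append(I - input_eq)
        C = target
    return extra

# Illustrative numbers only: 50 t C/ha stock, 2% annual decomposition.
extra = extra_input_for_target(C0=50.0, k=0.02, years=10)
print(round(extra[0], 4))  # first-year extra input, t C/ha
```

    The required extra input grows slightly each year because the stock being maintained (and decomposed) is itself growing, which is why the project reports 10-year average inputs for the two scenarios.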

  16. A latent low-dimensional common input drives a pool of motor neurons: a probabilistic latent state-space model.

    Science.gov (United States)

    Feeney, Daniel F; Meyer, François G; Noone, Nicholas; Enoka, Roger M

    2017-10-01

    Motor neurons appear to be activated with a common input signal that modulates the discharge activity of all neurons in the motor nucleus. It has proven difficult for neurophysiologists to quantify the variability in a common input signal, but characterization of such a signal may improve our understanding of how the activation signal varies across motor tasks. Contemporary methods of quantifying the common input to motor neurons rely on compiling discrete action potentials into continuous time series, assuming the motor pool acts as a linear filter, and requiring signals to be of sufficient duration for frequency analysis. We introduce a state-space model in which the discharge activity of motor neurons is modeled as inhomogeneous Poisson processes and propose a method to quantify an abstract latent trajectory that represents the common input received by motor neurons. The approach also approximates the variation in synaptic noise in the common input signal. The model is validated with four data sets: a simulation of 120 motor units, a pair of integrate-and-fire neurons with a Renshaw cell providing inhibitory feedback, the discharge activity of 10 integrate-and-fire neurons, and the discharge times of concurrently active motor units during an isometric voluntary contraction. The simulations revealed that a latent state-space model is able to quantify the trajectory and variability of the common input signal across all four conditions. When compared with the cumulative spike train method of characterizing common input, the state-space approach was more sensitive to the details of the common input current and was less influenced by the duration of the signal. The state-space approach appears to be capable of detecting rather modest changes in common input signals across conditions. NEW & NOTEWORTHY We propose a state-space model that explicitly delineates a common input signal sent to motor neurons and the physiological noise inherent in synaptic signal
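
    A minimal sketch of the generative idea above (discharge trains as inhomogeneous Poisson processes driven by one shared drive) can be written with Lewis-Shedler thinning. The drive shape, rates, and pool size here are invented for illustration, and the sketch omits the paper's state-space estimation step.

```python
import numpy as np

rng = np.random.default_rng(42)

def common_input(t):
    # Hypothetical latent common drive (spikes/s): slow sinusoidal modulation.
    return 20.0 + 8.0 * np.sin(2.0 * np.pi * 0.5 * t)

def thinning_spikes(rate_fn, t_max, rate_max):
    """Sample one spike train from an inhomogeneous Poisson process
    by Lewis-Shedler thinning (rate_max must bound rate_fn)."""
    spikes, t = [], 0.0
    while True:
        t += rng.exponential(1.0 / rate_max)
        if t > t_max:
            break
        if rng.uniform() < rate_fn(t) / rate_max:
            spikes.append(t)
    return np.array(spikes)

# A pool of 10 motor neurons sharing the same latent drive:
trains = [thinning_spikes(common_input, t_max=10.0, rate_max=30.0)
          for _ in range(10)]
mean_rate = np.mean([len(tr) for tr in trains]) / 10.0
print(mean_rate)  # spikes/s, close to the 20 spikes/s mean drive
```

    Inference then runs in the opposite direction: given only the observed spike times, estimate the latent trajectory `common_input(t)` shared across the pool.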

  17. Remote sensing inputs to landscape models which predict future spatial land use patterns for hydrologic models

    Science.gov (United States)

    Miller, L. D.; Tom, C.; Nualchawee, K.

    1977-01-01

    A tropical forest area of Northern Thailand provided a test case of the application of the approach in more natural surroundings. Remote sensing imagery subjected to proper computer analysis has been shown to be a very useful means of collecting spatial data for the science of hydrology. Remote sensing products provide direct input to hydrologic models and practical data bases for planning large and small-scale hydrologic developments. Combining the available remote sensing imagery together with available map information in the landscape model provides a basis for substantial improvements in these applications.

  18. Lessons learned using HAMMLAB experimenter systems: Input for HAMMLAB 2000 functional requirements

    International Nuclear Information System (INIS)

    Sebok, Angelia L.

    1998-02-01

    To design a usable HAMMLAB 2000, lessons learned from use of the existing HAMMLAB must be documented. User suggestions are important and must be taken into account. Different roles in HAMMLAB experimental sessions are identified, and major functions of each role were specified. A series of questionnaires were developed and administered to different users of HAMMLAB, each tailored to the individual job description. The results of those questionnaires are included in this report. Previous HAMMLAB modification recommendations were also reviewed, to provide input to this document. A trial experimental session was also conducted, to give an overview of the tasks in HAMMLAB. (author)

  19. Modelling of Multi Input Transfer Function for Rainfall Forecasting in Batu City

    Directory of Open Access Journals (Sweden)

    Priska Arindya Purnama

    2017-11-01

    Full Text Available The aim of this research is to model and forecast the rainfall in Batu City using a multi-input transfer function model based on air temperature, humidity, wind speed, and cloud cover. A transfer function model is a multivariate time series model which consists of an output series (Yt) expected to be affected by an input series (Xt) and other inputs grouped into a noise series (Nt). The multi-input transfer function model obtained is (b1,s1,r1)(b2,s2,r2)(b3,s3,r3)(b4,s4,r4)(pn,qn) = (0,0,0)(23,0,0)(1,2,0)(0,0,0)([5,8],2), which shows that rainfall on day t is affected by air temperature on day t, by air humidity 23 days earlier, by wind speed on the previous day, and by cloud cover on day t. The rainfall forecasts for Batu City from the multi-input transfer function model can be considered accurate, since they produce relatively small RMSE values: 7.7921 for the training data and 4.2184 for the testing data. The multi-input transfer function model is therefore suitable for rainfall in Batu City.
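
    The delay parameter b and the RMSE scoring used above can be sketched for a single input. Everything below is synthetic and hypothetical (a one-input model with delay b=1, like the wind-speed term, an AR(1) noise series, and made-up coefficients), not the fitted Batu City model.

```python
import numpy as np

rng = np.random.default_rng(7)

# Hypothetical single-input transfer function with delay b = 1:
# y_t depends on x_{t-1} plus an AR(1) noise series n_t.
T = 120
x = rng.normal(25.0, 3.0, T)            # e.g. a daily weather input (made up)
n = np.zeros(T)
for t in range(1, T):
    n[t] = 0.5 * n[t - 1] + rng.normal(0.0, 1.0)
y = np.zeros(T)
y[1:] = 0.8 * x[:-1] + n[1:]

# Forecast y from the known transfer relation and score it by RMSE:
y_hat = np.zeros(T)
y_hat[1:] = 0.8 * x[:-1]
rmse = np.sqrt(np.mean((y[1:] - y_hat[1:]) ** 2))
print(round(rmse, 3))
```

    The residual RMSE here reflects only the unmodeled noise series, which is exactly what the training/testing RMSE values in the abstract quantify for the full four-input model.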

  20. Loss of GABAergic inputs in APP/PS1 mouse model of Alzheimer's disease

    Directory of Open Access Journals (Sweden)

    Tutu Oyelami

    2014-04-01

    Full Text Available Alzheimer's disease (AD) is characterized by symptoms which include seizures, sleep disruption, loss of memory, and anxiety in patients. Of particular importance is the possibility of preventing the progressive loss of neuronal projections in the disease. Transgenic mice overexpressing EOFAD mutant PS1 (L166P) and mutant APP (APP KM670/671NL Swedish) (APP/PS1) develop a very early and robust amyloid pathology and display synaptic plasticity impairments and cognitive dysfunction. Here we investigated GABAergic neurotransmission, using multi-electrode array (MEA) technology and pharmacological manipulation to quantify the effect of GABA blockers on field excitatory postsynaptic potentials (fEPSPs), and immunostaining of GABAergic neurons. Using MEA technology we confirm impaired LTP induction by high frequency stimulation in the APP/PS1 hippocampal CA1 region, associated with a reduced alteration of the paired-pulse ratio after LTP induction. Synaptic dysfunction was also observed under manipulation of the external calcium concentration and in the input-output curve. Electrophysiological recordings from brain slices of the CA1 hippocampus area, in the presence of cocktails of GABAergic receptor blockers, further demonstrated a significant reduction in the GABAergic inputs in APP/PS1 mice. Moreover, immunostaining of GAD65, a specific marker for GABAergic neurons, revealed a reduction of the GABAergic inputs in the CA1 area of the hippocampus. These results might be linked to increased seizure sensitivity, premature death, and cognitive dysfunction in this animal model of AD. Further in-depth analysis of GABAergic dysfunction in APP/PS1 mice is required and may open new perspectives for AD therapy by restoring GABAergic function.

  1. Analyzing Requirements for and Designing a Collaborative Tool Based on Functional and User Input

    National Research Council Canada - National Science Library

    Curtis, Christopher K; Burneka, Chris; Whited, Vaughan; Kancler, David E

    2006-01-01

    .... Technology provides a multitude of potential collaborative tools and techniques, and this must be balanced against the requirement to leverage and/or support maintainer's existing interaction skills...

  2. Droplet size characteristics and energy input requirements of emulsions formed using high-intensity-pulsed electric fields

    International Nuclear Information System (INIS)

    Scott, T.C.; Sisson, W.G.

    1987-01-01

    Experimental methods have been developed to measure droplet size characteristics and energy inputs associated with the rupture of aqueous droplets by high-intensity-pulsed electric fields. The combination of in situ microscope optics and high-speed video cameras allows reliable observation of liquid droplets down to 0.5 μm in size. Videotapes of electric-field-created emulsions reveal that average droplet sizes of less than 5 μm are easily obtained in such systems. Analysis of the energy inputs into the fluids indicates that the electric field method requires less than 1% of the energy required from mechanical agitation to create comparable droplet sizes. 11 refs., 3 figs., 2 tabs

  3. Garbage In Garbage Out Garbage In : Improving the Inputs and Atmospheric Feedbacks in Seasonal Snowpack Modeling

    Science.gov (United States)

    Gutmann, E. D.

    2016-12-01

    Without good input data, almost any model will produce bad output; however, alpine environments are extremely difficult places to make measurements of those inputs. Perhaps the least well known input is precipitation, but almost as important are temperature, wind, humidity, and radiation. Recent advances in atmospheric modeling have improved the fidelity of the output such that model output is sometimes better than interpolated observations, particularly for precipitation; however, these models come with a tremendous computational cost. We describe the Intermediate Complexity Atmospheric Research model (ICAR) as one path to a computationally efficient method to improve snowpack model inputs over complex terrain. ICAR provides estimates of all inputs at a small fraction of the computational cost of a traditional atmospheric model such as the Weather Research and Forecasting model (WRF). Importantly, ICAR is able to simulate feedbacks from the land surface that are critical for estimating the air temperature. In addition, we will explore future improvements to the local wind fields, including the use of statistics derived from limited-duration Large Eddy Simulation (LES) model runs. These wind fields play a critical role in determining the redistribution of snow, and the redistribution of snow changes the surface topography and thus the wind field. We show that a proper depiction of snowpack redistribution can have a large effect on streamflow timing, and an even larger effect on the climate change signal of that streamflow.

  4. Modeling of heat transfer into a heat pipe for a localized heat input zone

    International Nuclear Information System (INIS)

    Rosenfeld, J.H.

    1987-01-01

    A general model is presented for heat transfer into a heat pipe with a localized heat input. Conduction in the wall of the heat pipe and boiling in the interior structure are treated simultaneously. The model is derived for circumferential heat transfer in a cylindrical heat pipe evaporator and for radial heat transfer in a circular disk with boiling from the interior surface. A comparison is made with data for a localized heat input zone. Agreement between the model and the data is good. This model can be used for design purposes if a boiling correlation is available. The model can be extended to provide improved predictions of heat pipe performance.

  5. Input-output model for MACCS nuclear accident impacts estimation¹

    Energy Technology Data Exchange (ETDEWEB)

    Outkin, Alexander V. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Bixler, Nathan E. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Vargas, Vanessa N [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)

    2015-01-27

    Since the original economic model for MACCS was developed, better quality economic data (as well as the tools to gather and process it) and better computational capabilities have become available. The update of the economic impacts component of the MACCS legacy model will provide improved estimates of business disruptions through the use of Input-Output based economic impact estimation. This paper presents an updated MACCS model, based on Input-Output methodology, in which economic impacts are calculated using the Regional Economic Accounting analysis tool (REAcct) created at Sandia National Laboratories. This new GDP-based model allows quick and consistent estimation of gross domestic product (GDP) losses due to nuclear power plant accidents. This paper outlines the steps taken to combine the REAcct Input-Output-based model with the MACCS code, describes the GDP loss calculation, and discusses the parameters and modeling assumptions necessary for the estimation of long-term effects of nuclear power plant accidents.
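
    The Input-Output step can be sketched with the standard Leontief calculation (the 3-sector coefficient matrix and demand losses below are illustrative numbers, not REAcct data): a direct final-demand loss d propagates through inter-industry linkages, giving a total output loss x = (I - A)^-1 d.

```python
import numpy as np

# Hypothetical 3-sector technical-coefficients matrix A, where a_ij is the
# input from sector i needed per unit output of sector j; illustrative only.
A = np.array([[0.10, 0.20, 0.05],
              [0.15, 0.05, 0.10],
              [0.05, 0.10, 0.08]])
d_loss = np.array([10.0, 5.0, 2.0])   # direct final-demand reduction ($M)

# Total (direct + indirect) output loss via the Leontief inverse:
L = np.linalg.inv(np.eye(3) - A)
x_loss = L @ d_loss
print(np.round(x_loss, 2))
```

    Because indirect effects are added on top of the direct shock, each sector's total loss exceeds its direct final-demand loss; this amplification is what the Input-Output update captures that a purely direct accounting would miss.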

  6. Allocatable Fixed Inputs and Two-Stage Aggregation Models of Multioutput Production Decisions

    OpenAIRE

    Barry T. Coyle

    1993-01-01

    Allocation decisions for a fixed input such as land are incorporated into a two-stage aggregation model of multioutput production decisions. The resulting two-stage model is more realistic and is as tractable for empirical research as the standard model.

  7. Multivariate Self-Exciting Threshold Autoregressive Models with eXogenous Input

    OpenAIRE

    Peter Martey Addo

    2014-01-01

    This study defines multivariate Self-Exciting Threshold Autoregressive with eXogenous input (MSETARX) models and presents an estimation procedure for their parameters. Conditions for stationarity of the nonlinear MSETARX models are provided. In particular, the efficiency of an adaptive parameter estimation algorithm and of the least squares estimation (LSE) algorithm for this class of models is then demonstrated via simulations.
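    As an illustration of the model class (not the paper's estimation procedure), a univariate SETARX process with illustrative coefficients can be simulated as follows:

    ```python
    import random

    def simulate_setarx(n, threshold=0.0, delay=1, seed=42):
        """Simulate a 1-D SETARX process: two AR regimes switched by the
        delayed state y[t - delay], plus an exogenous input x[t].
        Coefficients are illustrative, not taken from the paper."""
        random.seed(seed)
        y = [0.0] * n
        x = [random.gauss(0.0, 1.0) for _ in range(n)]   # exogenous input
        for t in range(max(1, delay), n):
            if y[t - delay] <= threshold:                 # lower regime
                y[t] = 0.5 * y[t - 1] + 0.3 * x[t] + random.gauss(0.0, 0.1)
            else:                                         # upper regime
                y[t] = -0.4 * y[t - 1] + 0.3 * x[t] + random.gauss(0.0, 0.1)
        return y

    series = simulate_setarx(200)
    print(len(series), min(series), max(series))
    ```

    Both regimes have autoregressive coefficients inside the unit circle, consistent with the stationarity conditions the paper discusses.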

  8. Reconstruction of neuronal input through modeling single-neuron dynamics and computations

    International Nuclear Information System (INIS)

    Qin, Qing; Wang, Jiang; Yu, Haitao; Deng, Bin; Chan, Wai-lok

    2016-01-01

    Mathematical models provide a mathematical description of neuron activity, which can help us better understand and quantify neural computations and the corresponding biophysical mechanisms evoked by a stimulus. In this paper, based on the output spike train evoked by an acupuncture mechanical stimulus, we present two different levels of models to describe the input-output system and achieve the reconstruction of neuronal input. The reconstruction process is divided into two steps. First, the neuronal spiking event is treated as a Gamma stochastic process; the scale parameter and the shape parameter of the Gamma process are, respectively, defined as two spiking characteristics, which are estimated by a state-space method. Then, a leaky integrate-and-fire (LIF) model is used to mimic the response system, and the estimated spiking characteristics are transformed into two temporal input parameters of the LIF model through two conversion formulas. We test this reconstruction method on three different groups of simulation data. All three groups of estimates reconstruct the input parameters with fairly high accuracy. We then use this reconstruction method to estimate the non-measurable acupuncture input parameters. Results show that under three different frequencies of acupuncture stimulation, the estimated input parameters differ markedly: the higher the frequency of the acupuncture stimulus, the higher the accuracy of the reconstruction.
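    The LIF response system mentioned above can be sketched as follows; the parameters and the constant-current drive are illustrative placeholders, not the paper's estimated acupuncture inputs:

    ```python
    def simulate_lif(i_input, dt=0.1, t_end=100.0, tau=10.0, r=1.0,
                     v_rest=0.0, v_thresh=1.0, v_reset=0.0):
        """Leaky integrate-and-fire neuron driven by a constant input current.
        Returns the list of spike times.  All parameters are illustrative."""
        n_steps = int(t_end / dt)
        v = v_rest
        spikes = []
        for step in range(n_steps):
            # Euler step of tau * dV/dt = -(V - V_rest) + R * I
            v += dt / tau * (-(v - v_rest) + r * i_input)
            if v >= v_thresh:
                spikes.append(step * dt)
                v = v_reset
        return spikes

    # A stronger input drives the membrane to threshold sooner -> more spikes
    assert len(simulate_lif(2.0)) > len(simulate_lif(1.2))
    ```

    In the paper's framework, the two estimated spiking characteristics would be mapped onto input parameters of such a model through the conversion formulas; here the input is simply a constant current.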

  9. Reconstruction of neuronal input through modeling single-neuron dynamics and computations

    Energy Technology Data Exchange (ETDEWEB)

    Qin, Qing; Wang, Jiang; Yu, Haitao; Deng, Bin, E-mail: dengbin@tju.edu.cn; Chan, Wai-lok [School of Electrical Engineering and Automation, Tianjin University, Tianjin 300072 (China)

    2016-06-15

    Mathematical models provide a mathematical description of neuron activity, which can help us better understand and quantify neural computations and the corresponding biophysical mechanisms evoked by a stimulus. In this paper, based on the output spike train evoked by an acupuncture mechanical stimulus, we present two different levels of models to describe the input-output system and achieve the reconstruction of neuronal input. The reconstruction process is divided into two steps. First, the neuronal spiking event is treated as a Gamma stochastic process; the scale parameter and the shape parameter of the Gamma process are, respectively, defined as two spiking characteristics, which are estimated by a state-space method. Then, a leaky integrate-and-fire (LIF) model is used to mimic the response system, and the estimated spiking characteristics are transformed into two temporal input parameters of the LIF model through two conversion formulas. We test this reconstruction method on three different groups of simulation data. All three groups of estimates reconstruct the input parameters with fairly high accuracy. We then use this reconstruction method to estimate the non-measurable acupuncture input parameters. Results show that under three different frequencies of acupuncture stimulation, the estimated input parameters differ markedly: the higher the frequency of the acupuncture stimulus, the higher the accuracy of the reconstruction.

  10. Monitoring the inputs required to extend and sustain hygiene promotion: findings from the GLAAS 2013/2014 survey.

    Science.gov (United States)

    Moreland, Leslie D; Gore, Fiona M; Andre, Nathalie; Cairncross, Sandy; Ensink, Jeroen H J

    2016-08-01

    There are significant gaps in information about the inputs required to effectively extend and sustain hygiene promotion activities that improve people's health outcomes through water, sanitation and hygiene (WASH) interventions. We sought to analyse current country and global trends in the use of key inputs required for effective and sustainable implementation of hygiene promotion, to help guide hygiene promotion policy and decision-making after 2015. Data collected in response to the GLAAS 2013/2014 survey from 93 of 94 countries were included, and responses were analysed for 12 questions assessing the inputs and enabling environment for hygiene promotion under four thematic areas. Data from 20 of 23 External Support Agencies (ESAs), collected through self-administered surveys, were also included and analysed. Firstly, the data showed a large variation in the way in which hygiene promotion is defined and in what constitutes key activities in this area. Secondly, the challenges to implementing hygiene promotion are considerable: they include poor implementation of policies and plans, weak coordination mechanisms, human resource limitations and a lack of available hygiene promotion budget data. Despite the proven benefits of hand washing with soap, a critical hygiene-related factor in minimising infection, the GLAAS 2013/2014 survey data showed that hygiene promotion remains a neglected component of WASH. Additional research to identify the context-specific strategies and inputs required to enhance the effectiveness of hygiene promotion at scale is needed. Improved data collection methods are also necessary to advance the availability and reliability of hygiene-specific information. © 2016 John Wiley & Sons Ltd.

  11. Backstepping control for a 3DOF model helicopter with input and output constraints

    Directory of Open Access Journals (Sweden)

    Rong Mei

    2016-02-01

    In this article, a backstepping control scheme is developed for the motion control of a three-degrees-of-freedom (3DOF) model helicopter with unknown external disturbance, modelling uncertainties, and input and output constraints. In the developed robust control scheme, augmented state observers are applied to estimate the unknown states, the unknown external disturbance and the modelling uncertainties. Auxiliary systems are designed to deal with input saturation, and a barrier Lyapunov function is employed to handle the output constraints. The stability of the closed-loop system is proved by the Lyapunov method. Simulation results show that the designed control scheme is effective for the motion control of a 3DOF model helicopter in the presence of unknown external disturbance, modelling uncertainties, and input and output saturation.

  12. ASR in a Human Word Recognition Model: Generating Phonemic Input for Shortlist

    OpenAIRE

    Scharenborg, O.E.; Boves, L.W.J.; Veth, J.M. de

    2002-01-01

    The current version of the psycholinguistic model of human word recognition Shortlist suffers from two unrealistic constraints. First, the input of Shortlist must consist of a single string of phoneme symbols. Second, the current version of the search in Shortlist makes it difficult to deal with insertions and deletions in the input phoneme string. This research attempts to fully automatically derive a phoneme string from the acoustic signal that is as close as possible to the number of phone...

  13. Calibration of uncertain inputs to computer models using experimentally measured quantities and the BMARS emulator

    International Nuclear Information System (INIS)

    Stripling, H.F.; McClarren, R.G.; Kuranz, C.C.; Grosskopf, M.J.; Rutter, E.; Torralva, B.R.

    2011-01-01

    We present a method for calibrating the uncertain inputs to a computer model using available experimental data. The goal of the procedure is to produce posterior distributions of the uncertain inputs such that when samples from the posteriors are used as inputs to future model runs, the model is more likely to replicate (or predict) the experimental response. The calibration is performed by sampling the space of the uncertain inputs, using the computer model (or, more likely, an emulator for the computer model) to assign weights to the samples, and applying the weights to produce the posterior distributions and generate predictions of new experiments within confidence bounds. The method is similar to the Markov chain Monte Carlo (MCMC) calibration methods with independent sampling with the exception that we generate samples beforehand and replace the candidate acceptance routine with a weighting scheme. We apply our method to the calibration of a Hyades 2D model of laser energy deposition in beryllium. We employ a Bayesian Multivariate Adaptive Regression Splines (BMARS) emulator as a surrogate for Hyades 2D. We treat a range of uncertainties in our system, including uncertainties in the experimental inputs, experimental measurement error, and systematic experimental timing errors. The results of the calibration are posterior distributions that both agree with intuition and improve the accuracy and decrease the uncertainty in experimental predictions. (author)
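    The weighting scheme that replaces the MCMC candidate-acceptance step can be illustrated with a toy stand-in model (y = x²) in place of Hyades 2D and its BMARS emulator; everything in this sketch is an invented placeholder:

    ```python
    import math
    import random

    def calibrate_by_weighting(observed, noise_sd, n_samples=5000, seed=1):
        """Weighting-based calibration sketch: draw input samples from the
        prior, weight each by the Gaussian likelihood of the observed
        response, and form a posterior mean.  The 'emulator' here is a
        stand-in (y = x**2), not Hyades 2D or BMARS."""
        random.seed(seed)
        samples = [random.uniform(0.0, 3.0) for _ in range(n_samples)]  # prior
        weights = []
        for x in samples:
            y = x * x                                   # emulator prediction
            # un-normalised Gaussian likelihood of the measurement
            w = math.exp(-0.5 * ((y - observed) / noise_sd) ** 2)
            weights.append(w)
        total = sum(weights)
        posterior_mean = sum(w * x for w, x in zip(weights, samples)) / total
        return posterior_mean

    # The true input 2.0 gives y = 4.0; the weighted posterior should
    # concentrate near 2.0
    est = calibrate_by_weighting(observed=4.0, noise_sd=0.5)
    print(est)
    ```

    As in the paper's method, the samples are generated beforehand and weighted, rather than accepted or rejected one at a time as in MCMC.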

  14. 75 FR 30395 - Stakeholder Input; National Pollutant Discharge Elimination System (NPDES) Permit Requirements...

    Science.gov (United States)

    2010-06-01

    ..., the elderly and those with weakened immune systems, can be at a higher risk of illness from exposure... municipal collection systems including satellite portions. 4. What is the appropriate role of NPDES permits... Pollutant Discharge Elimination System (NPDES) Permit Requirements for Municipal Sanitary Sewer Collection...

  15. Sensitivity Analysis of Input Parameters for a Dynamic Food Chain Model DYNACON

    International Nuclear Information System (INIS)

    Hwang, Won Tae; Lee, Geun Chang; Han, Moon Hee; Cho, Gyu Seong

    2000-01-01

    The sensitivity analysis of input parameters for the dynamic food chain model DYNACON was conducted as a function of deposition data for the long-lived radionuclides ¹³⁷Cs and ⁹⁰Sr. The influence of input parameters on the short- and long-term contamination of selected foodstuffs (cereals, leafy vegetables, milk) was also investigated. The input parameters were sampled using the LHS technique, and their sensitivity indices were expressed as partial rank correlation coefficients (PRCC). The sensitivity index was strongly dependent on the contamination period as well as on the deposition data. In the case of deposition during the growing stages of plants, the input parameters associated with contamination by foliar absorption were relatively important in long-term as well as short-term contamination. They were also important in short-term contamination in the case of deposition during the non-growing stages. In long-term contamination, the influence of the input parameters associated with foliar absorption decreased, while the influence of those associated with root uptake increased. These phenomena were more remarkable for deposition during non-growing stages than during growing stages, and for ⁹⁰Sr deposition than for ¹³⁷Cs deposition. In the case of deposition during the growing stages of pasture, the input parameters associated with the characteristics of cattle, such as the feed-milk transfer factor and the daily intake rate of cattle, were relatively important in the contamination of milk.
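    The LHS sampling step can be sketched as follows; the parameter names and ranges are invented for illustration and are not actual DYNACON inputs:

    ```python
    import random

    def latin_hypercube(n_samples, bounds, seed=0):
        """Latin hypercube sample: each parameter range is split into n
        equal strata and each stratum is used exactly once, in random
        order.  `bounds` maps parameter name -> (low, high)."""
        rng = random.Random(seed)
        samples = [{} for _ in range(n_samples)]
        for name, (low, high) in bounds.items():
            strata = list(range(n_samples))
            rng.shuffle(strata)                    # independent permutation per parameter
            for i, k in enumerate(strata):
                u = (k + rng.random()) / n_samples  # uniform draw inside stratum k
                samples[i][name] = low + u * (high - low)
        return samples

    # Illustrative parameter ranges, not actual DYNACON inputs
    params = latin_hypercube(10, {"transfer_factor": (0.001, 0.01),
                                  "intake_rate": (10.0, 20.0)})
    ```

    Stratifying each parameter separately guarantees that even a small sample covers the full range of every input, which is why LHS is popular for sensitivity studies of this kind.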

  16. The interspike interval of a cable model neuron with white noise input.

    Science.gov (United States)

    Tuckwell, H C; Wan, F Y; Wong, Y S

    1984-01-01

    The firing time of a cable model neuron in response to white noise current injection is investigated with various methods. The Fourier decomposition of the depolarization leads to partial differential equations for the moments of the firing time. These are solved by perturbation and numerical methods, and the results obtained are in excellent agreement with those obtained by Monte Carlo simulation. The convergence of the random Fourier series is found to be very slow for small times so that when the firing time is small it is more efficient to simulate the solution of the stochastic cable equation directly using the two different representations of the Green's function, one which converges rapidly for small times and the other which converges rapidly for large times. The shape of the interspike interval density is found to depend strongly on input position. The various shapes obtained for different input positions resemble those for real neurons. The coefficient of variation of the interspike interval decreases monotonically as the distance between the input and trigger zone increases. A diffusion approximation for a nerve cell receiving Poisson input is considered and input/output frequency relations obtained for different input sites. The cases of multiple trigger zones and multiple input sites are briefly discussed.

  17. An integrated model for the assessment of global water resources Part 1: Model description and input meteorological forcing

    Science.gov (United States)

    Hanasaki, N.; Kanae, S.; Oki, T.; Masuda, K.; Motoya, K.; Shirakawa, N.; Shen, Y.; Tanaka, K.

    2008-07-01

    To assess global water availability and use at a subannual timescale, an integrated global water resources model was developed consisting of six modules: land surface hydrology, river routing, crop growth, reservoir operation, environmental flow requirement estimation, and anthropogenic water withdrawal. The model simulates both natural and anthropogenic water flow globally (excluding Antarctica) on a daily basis at a spatial resolution of 1°×1° (longitude and latitude). This first part of the two-part report describes the six modules and the input meteorological forcing. The input meteorological forcing was provided by the second Global Soil Wetness Project (GSWP2), an international land surface modeling project. Several reported shortcomings of the forcing component were improved. The land surface hydrology module was developed based on a bucket-type model that simulates energy and water balance on land surfaces. The crop growth module is a relatively simple model based on concepts of heat unit theory, potential biomass, and a harvest index. In the reservoir operation module, 452 major reservoirs, each with >1 km³ of storage capacity, store and release water according to their own rules of operation. Operating rules were determined for each reservoir by an algorithm that used currently available global data such as reservoir storage capacity, intended purposes, simulated inflow, and water demand in the lower reaches. The environmental flow requirement module was newly developed based on case studies from around the world. Simulated runoff was compared and validated with observation-based global runoff data sets and observed streamflow records at 32 major river gauging stations around the world. Mean annual runoff agreed well with earlier studies at global and continental scales, and in individual basins, the mean bias was less than ±20% in 14 of the 32 river basins and less than ±50% in 24 basins. The error in the peak was less than ±1 mo in 19 of the 27
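    A bucket-type land surface water balance of the kind described above can be sketched in a few lines; the storage capacity and fluxes below are illustrative, not the model's actual parameterization:

    ```python
    def bucket_step(storage, precip, pet, capacity=150.0):
        """One daily step of a simple bucket-type water balance
        (illustrative, not the actual module): evaporation is limited by
        the available water, and storage above capacity leaves as runoff."""
        storage += precip
        evap = min(pet, storage)          # actual evaporation capped by storage
        storage -= evap
        runoff = max(0.0, storage - capacity)
        storage -= runoff
        return storage, evap, runoff

    s = 100.0
    s, e, q = bucket_step(s, precip=20.0, pet=5.0)   # wet day: storage rises
    assert (s, e, q) == (115.0, 5.0, 0.0)
    s, e, q = bucket_step(s, precip=60.0, pet=5.0)   # storage overflows as runoff
    assert (s, e, q) == (150.0, 5.0, 20.0)
    ```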

  18. Modeling the short-run effect of fiscal stimuli on GDP : A new semi-closed input-output model

    NARCIS (Netherlands)

    Chen, Quanrun; Dietzenbacher, Erik; Los, Bart; Yang, Cuihong

    2016-01-01

    In this study, we propose a new semi-closed input-output model, which reconciles input-output analysis with modern consumption theories. It can simulate changes in household consumption behavior when exogenous stimulus policies lead to higher disposable income levels. It is useful for quantifying

  19. Regional disaster impact analysis: comparing Input-Output and Computable General Equilibrium models

    NARCIS (Netherlands)

    Koks, E.E.; Carrera, L.; Jonkeren, O.; Aerts, J.C.J.H.; Husby, T.G.; Thissen, M.; Standardi, G.; Mysiak, J.

    2016-01-01

    A variety of models have been applied to assess the economic losses of disasters, of which the most common ones are input-output (IO) and computable general equilibrium (CGE) models. In addition, an increasing number of scholars have developed hybrid approaches: one that combines both or either of

  20. Improving the Performance of Water Demand Forecasting Models by Using Weather Input

    NARCIS (Netherlands)

    Bakker, M.; Van Duist, H.; Van Schagen, K.; Vreeburg, J.; Rietveld, L.

    2014-01-01

    Literature shows that water demand forecasting models which use water demand as a single input are capable of generating a fairly accurate forecast. However, under changing weather conditions the forecasting errors are quite large. In this paper three different forecasting models are studied: an

  1. Logistics flows and enterprise input-output models: aggregate and disaggregate analysis

    NARCIS (Netherlands)

    Albino, V.; Yazan, Devrim; Messeni Petruzzelli, A.; Okogbaa, O.G.

    2011-01-01

    In the present paper, we propose the use of enterprise input-output (EIO) models to describe and analyse the logistics flows considering spatial issues and related environmental effects associated with production and transportation processes. In particular, transportation is modelled as a specific

  2. On the Influence of Input Data Quality to Flood Damage Estimation: The Performance of the INSYDE Model

    Directory of Open Access Journals (Sweden)

    Daniela Molinari

    2017-09-01

    The IN-depth SYnthetic Model for Flood Damage Estimation (INSYDE) is a model for the estimation of flood damage to residential buildings at the micro-scale. This study investigates the sensitivity of INSYDE to the accuracy of its input data. Starting from the knowledge of the input parameters at the scale of individual buildings for a case study, the level of detail of the input data is progressively downgraded until a representative value is defined for all inputs at the census block scale. The analysis reveals that two conditions are required to limit the errors in damage estimation: the representativeness of the representative values with respect to the micro-scale values, and local knowledge of the footprint area of the buildings, the latter being the main extensive variable adopted by INSYDE. This result allows the usability of the model to be extended to the meso-scale, also in different countries, depending on the availability of aggregated building data.

  3. Modelling pesticide leaching under climate change: parameter vs. climate input uncertainty

    Directory of Open Access Journals (Sweden)

    K. Steffens

    2014-02-01

    Assessing climate change impacts on pesticide leaching requires careful consideration of different sources of uncertainty. We investigated the uncertainty related to climate scenario input and its importance relative to the parameter uncertainty of the pesticide leaching model. The pesticide fate model MACRO was calibrated against a comprehensive one-year field data set for a well-structured clay soil in south-western Sweden. We obtained an ensemble of 56 acceptable parameter sets that represented the parameter uncertainty. Nine different climate projections of the regional climate model RCA3 were available, driven by different combinations of global climate models (GCMs), greenhouse gas emission scenarios and initial states of the GCM. The future time series of weather data used to drive the MACRO model were generated by scaling a reference climate data set (1970–1999) for an important agricultural production area in south-western Sweden, based on monthly change factors for 2070–2099. Thirty-year simulations were performed for different combinations of pesticide properties and application seasons. Our analysis showed that both the magnitude and the direction of the predicted change in pesticide leaching from present to future depended strongly on the particular climate scenario. The effect of parameter uncertainty was of major importance for simulating absolute pesticide losses, whereas the climate uncertainty was relatively more important for predictions of changes in pesticide losses from present to future. The climate uncertainty should be accounted for by applying an ensemble of different climate scenarios. The aggregated ensemble prediction based on both acceptable parameterizations and different climate scenarios has the potential to provide robust probabilistic estimates of future pesticide losses.
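    The monthly change-factor (delta change) scaling described above can be sketched as follows, with invented factors:

    ```python
    def apply_change_factors(daily_precip, months, factors):
        """Scale a reference daily series by monthly change factors (the
        'delta change' approach).  `months` gives the month (1-12) of each
        daily value; `factors` maps month -> multiplier."""
        return [p * factors[m] for p, m in zip(daily_precip, months)]

    # Illustrative factors (not RCA3 output): wetter winters, drier summers
    factors = {m: 1.2 for m in (12, 1, 2)}
    factors.update({m: 1.0 for m in (3, 4, 5, 9, 10, 11)})
    factors.update({m: 0.8 for m in (6, 7, 8)})

    ref = [10.0, 4.0, 5.0]   # reference daily precipitation (mm)
    mon = [1, 7, 10]         # month of each value
    future = apply_change_factors(ref, mon, factors)
    assert all(abs(a - b) < 1e-9 for a, b in zip(future, [12.0, 3.2, 5.0]))
    ```

    The same monthly multipliers would be derived, per climate projection, from the ratio of simulated future to reference climate statistics.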

  4. GEN-IV Benchmarking of Triso Fuel Performance Models under accident conditions modeling input data

    Energy Technology Data Exchange (ETDEWEB)

    Collin, Blaise Paul [Idaho National Lab. (INL), Idaho Falls, ID (United States)

    2016-09-01

    This document presents the benchmark plan for the calculation of particle fuel performance in safety testing experiments that are representative of operational accident transients. The benchmark is dedicated to the modeling of fission product release under accident conditions by fuel performance codes from around the world, and to the subsequent comparison with post-irradiation examination (PIE) data from the modeled heating tests. The accident condition benchmark is divided into three parts: • The modeling of a simplified benchmark problem to assess potential numerical calculation issues at low fission product release. • The modeling of the AGR-1 and HFR-EU1bis safety testing experiments. • The comparison of the AGR-1 and HFR-EU1bis modeling results with PIE data. The simplified benchmark case, hereafter named the NCC (Numerical Calculation Case), is derived from “Case 5” of the International Atomic Energy Agency (IAEA) Coordinated Research Program (CRP) on coated particle fuel technology [IAEA 2012]. It is included so participants can evaluate their codes at low fission product release. “Case 5” of the IAEA CRP-6 showed large code-to-code discrepancies in the release of fission products, which were attributed to “effects of the numerical calculation method rather than the physical model” [IAEA 2012]. The NCC is therefore intended to check whether these numerical effects persist. The first two steps require a modeling effort by the benchmark participants, following the guidelines and recommendations provided by this document. The third step involves the collection of the modeling results by Idaho National Laboratory (INL) and the comparison of these results with the available PIE data. The objective of this document is to provide all the input data necessary to model the benchmark cases, and to give methodology guidelines and recommendations in order to make all results suitable for comparison with each other. The participants should read

  5. Development of an Input Model to MELCOR 1.8.5 for the Oskarshamn 3 BWR

    Energy Technology Data Exchange (ETDEWEB)

    Nilsson, Lars [Lentek, Nykoeping (Sweden)

    2006-05-15

    An input model has been prepared for the code MELCOR 1.8.5 for the Swedish Oskarshamn 3 Boiling Water Reactor (O3). This report describes the modelling work and the various files which comprise the input deck. Input data are mainly based on original drawings and system descriptions made available by courtesy of OKG AB. Comparison and checking of some primary system data were made against an O3 input file for the SCDAP/RELAP5 code that was used in the SARA project. Useful information was also obtained from the FSAR (Final Safety Analysis Report) for O3 and the SKI report '2003 Stoerningshandboken BWR'. The input models the O3 reactor at its current state with an operating power of 3300 MWth. One aim of this work is that the MELCOR input could also be used for power upgrading studies. All fuel assemblies are thus assumed to consist of the new Westinghouse-Atom SVEA-96 Optima2 fuel. MELCOR is a severe accident code developed by Sandia National Laboratories under contract from the U.S. Nuclear Regulatory Commission (NRC). MELCOR is a successor to the STCP (Source Term Code Package) and thus has a long evolutionary history. The input described here is adapted to version 1.8.5, the latest available when the work began. It was released in 2000, but a new version, 1.8.6, was distributed recently; conversion to the new version is recommended. (During the writing of this report still another code version, MELCOR 2.0, has been announced for release shortly.) In version 1.8.5 there is an option to describe the accident progression in the lower plenum and the melt-through of the reactor vessel bottom in more detail by use of the Bottom Head (BH) package developed by Oak Ridge National Laboratory especially for BWRs. This is in addition to the ordinary MELCOR COR package. Since problems arose running with the BH input, two versions of the O3 input deck were produced, a NONBH and a BH deck. The BH package is no longer a separate package in the new 1

  6. RET Functions as a Dual-Specificity Kinase that Requires Allosteric Inputs from Juxtamembrane Elements

    Directory of Open Access Journals (Sweden)

    Iván Plaza-Menacho

    2016-12-01

    Receptor tyrosine kinases exhibit a variety of activation mechanisms despite highly homologous catalytic domains. Such diversity arises through coupling of extracellular ligand-binding portions with highly variable intracellular sequences flanking the tyrosine kinase domain and specific patterns of autophosphorylation sites. Here, we show that the juxtamembrane (JM) segment enhances RET catalytic domain activity through Y687. This phospho-site is also required by the JM region to rescue an otherwise catalytically deficient RET activation-loop mutant lacking tyrosines. Structure-function analyses identified interactions between the JM hinge, αC helix, and an unconventional activation-loop serine phosphorylation site that engages the HRD motif and promotes phospho-tyrosine conformational accessibility and regulatory spine assembly. We demonstrate that this phospho-S909 arises from an intrinsic RET dual-specificity kinase activity and show that an equivalent serine is required for RET signaling in Drosophila. Our findings reveal dual-specificity and allosteric components for the mechanism of RET activation and signaling with direct implications for drug discovery.

  7. Realistic modeling of seismic input for megacities and large urban areas

    International Nuclear Information System (INIS)

    Panza, Giuliano F.; Alvarez, Leonardo; Aoudia, Abdelkrim

    2002-06-01

    The project addressed the problem of pre-disaster orientation: hazard prediction, risk assessment, and hazard mapping, in connection with seismic activity and man-induced vibrations. The definition of realistic seismic input has been obtained from the computation of a wide set of time histories and spectral information, corresponding to possible seismotectonic scenarios for different source and structural models. The innovative modeling technique, that constitutes the common tool to the entire project, takes into account source, propagation and local site effects. This is done using first principles of physics about wave generation and propagation in complex media, and does not require to resort to convolutive approaches, that have been proven to be quite unreliable, mainly when dealing with complex geological structures, the most interesting from the practical point of view. In fact, several techniques that have been proposed to empirically estimate the site effects using observations convolved with theoretically computed signals corresponding to simplified models, supply reliable information about the site response to non-interfering seismic phases. They are not adequate in most of the real cases, when the seismic sequel is formed by several interfering waves. The availability of realistic numerical simulations enables us to reliably estimate the amplification effects even in complex geological structures, exploiting the available geotechnical, lithological, geophysical parameters, topography of the medium, tectonic, historical, palaeoseismological data, and seismotectonic models. The realistic modeling of the ground motion is a very important base of knowledge for the preparation of groundshaking scenarios that represent a valid and economic tool for the seismic microzonation. 
This knowledge can be very fruitfully used by civil engineers in the design of new seismo-resistant constructions and in the reinforcement of the existing built environment, and, therefore

  8. Design of vaccination and fumigation on Host-Vector Model by input-output linearization method

    Science.gov (United States)

    Nugraha, Edwin Setiawan; Naiborhu, Janson; Nuraini, Nuning

    2017-03-01

    Here, we analyze a host-vector model and propose a design of vaccination and fumigation to control the infectious population by using feedback control, in particular the input-output linearization method. The host population is divided into three compartments: susceptible, infectious and recovered. The vector population is divided into two compartments: susceptible and infectious. In this system, vaccination and fumigation are treated as inputs and the infectious population as the output. The objective of the design is to drive the output asymptotically to zero. We also present examples to illustrate the design.
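    A minimal sketch of such a host-vector system under constant vaccination and fumigation rates follows; the parameter values and the simple Euler integration are illustrative, and this is open-loop control rather than the paper's feedback linearization design:

    ```python
    def host_vector_step(state, dt, beta_h=0.3, beta_v=0.2, gamma=0.1,
                         u_vacc=0.0, u_fumi=0.0):
        """One Euler step of a simple host-vector model with a vaccination
        rate u_vacc (moves susceptible hosts to recovered) and a fumigation
        rate u_fumi (removes vectors).  Populations are normalised
        fractions; all parameter values are illustrative."""
        sh, ih, rh, sv, iv = state
        dsh = -beta_h * sh * iv - u_vacc * sh
        dih = beta_h * sh * iv - gamma * ih
        drh = gamma * ih + u_vacc * sh
        dsv = -beta_v * sv * ih - u_fumi * sv
        div = beta_v * sv * ih - u_fumi * iv
        return (sh + dt * dsh, ih + dt * dih, rh + dt * drh,
                sv + dt * dsv, iv + dt * div)

    state = (0.99, 0.01, 0.0, 0.95, 0.05)
    for _ in range(5000):                      # integrate to t = 500
        state = host_vector_step(state, dt=0.1, u_vacc=0.05, u_fumi=0.05)
    # With both controls applied, the infectious host fraction decays
    print(state[1])
    ```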

  9. Latitudinal and seasonal variability of the micrometeor input function: A study using model predictions and observations from Arecibo and PFISR

    Science.gov (United States)

    Fentzke, J. T.; Janches, D.; Sparks, J. J.

    2009-05-01

    In this work, we use a semi-empirical model of the micrometeor input function (MIF) together with meteor head-echo observations obtained with two high-power, large-aperture (HPLA) radars, the 430 MHz Arecibo Observatory (AO) radar in Puerto Rico (18°N, 67°W) and the 450 MHz Poker Flat Incoherent Scatter Radar (PFISR) in Alaska (65°N, 147°W), to study the seasonal and geographical dependence of the meteoric flux in the upper atmosphere. The model, recently developed by Janches et al. [2006a. Modeling the global micrometeor input function in the upper atmosphere observed by high power and large aperture radars. Journal of Geophysical Research 111] and Fentzke and Janches [2008. A semi-empirical model of the contribution from sporadic meteoroid sources on the meteor input function observed at Arecibo. Journal of Geophysical Research (Space Physics) 113 (A03304)], includes an initial mass flux that is provided by the six known meteor sources (i.e. orbital families of dust) as well as detailed modeling of meteoroid atmospheric entry and ablation physics. In addition, we use a simple ionization model to treat radar sensitivity issues by defining the minimum electron volume density production thresholds required in the meteor head-echo plasma for detection. This simplified approach works well because we use observations from two radars with similar frequencies but different sensitivities and locations. This methodology allows us to explore the initial input of particles and how it manifests in different parts of the MLT as observed by these instruments, without the need to invoke more sophisticated plasma models, which are under current development. The comparisons between model predictions and radar observations show excellent agreement between the diurnal, seasonal, and latitudinal variability of the detected meteor rate and radial velocity distributions, allowing us to understand how individual meteoroid populations contribute to the overall flux at a particular

  10. Development of algorithm for depreciation costs allocation in dynamic input-output industrial enterprise model

    OpenAIRE

    Keller Alevtina; Vinogradova Tatyana

    2017-01-01

    The article considers the issue of allocation of depreciation costs in the dynamic input-output model of an industrial enterprise. Accounting for depreciation costs in such a model improves the policy of fixed assets management. It is particularly relevant to develop the algorithm for the allocation of depreciation costs in the construction of a dynamic input-output model of an industrial enterprise, since such enterprises have a significant amount of fixed assets. Implementation of terms of the...

  11. Stream Heat Budget Modeling of Groundwater Inputs: Model Development and Validation

    Science.gov (United States)

    Glose, A.; Lautz, L. K.

    2012-12-01

    Models of physical processes in fluvial systems are useful for improving understanding of hydrologic systems and for predicting future conditions. Process-based models of fluid flow and heat transport in fluvial systems can be used to quantify unknown spatial and temporal patterns of hydrologic fluxes, such as groundwater discharge, and to predict system response to future change. In this study, a stream heat budget model was developed and calibrated to observed stream water temperature data for Meadowbrook Creek in Syracuse, NY. The one-dimensional (longitudinal), transient stream temperature model is programmed in Matlab and solves the equations for heat and fluid transport using a Crank-Nicolson finite difference scheme. The model considers four meteorologically driven heat fluxes: shortwave solar radiation, longwave radiation, latent heat flux, and sensible heat flux. Streambed conduction is also considered. Input data for the model were collected from June 13-18, 2012 over a 500 m reach of Meadowbrook Creek, a first order urban stream that drains a retention pond in the city of Syracuse, NY. Stream temperature data were recorded every 20 m longitudinally in the stream at 5-minute intervals using iButtons (model DS1922L, accuracy of ±0.5°C, resolution of 0.0625°C). Meteorological data, including air temperature, solar radiation, relative humidity, and wind speed, were recorded at 5-minute intervals using an on-site weather station. Groundwater temperature was measured in wells adjacent to the stream. Stream dimensions, bed temperatures, and type of bed sediments were also collected. A constant rate tracer injection of Rhodamine WT was used to quantify groundwater inputs every 10 m independently to validate model results. Stream temperatures fluctuated diurnally by ~3-5 °C during the observation period with temperatures peaking around 2 pm and cooling overnight, reaching a minimum between 6 and 7 am. Spatially, the stream shows a cooling trend along the
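
    A Crank-Nicolson step for the diffusion part of such a stream temperature model can be sketched as below. This is an illustrative sketch only: the grid spacing, diffusivity value, and fixed-temperature boundaries are assumptions, and the meteorological flux terms (shortwave, longwave, latent, sensible, streambed conduction) of the actual model are omitted.

```python
import numpy as np

def crank_nicolson_step(T, alpha, dx, dt):
    """Advance 1-D diffusion dT/dt = alpha * d2T/dx2 by one time step
    using the Crank-Nicolson scheme (Dirichlet ends held fixed)."""
    n = len(T)
    r = alpha * dt / (2.0 * dx**2)
    # Implicit (A) and explicit (B) tridiagonal operators.
    A = np.eye(n) * (1 + 2 * r)
    B = np.eye(n) * (1 - 2 * r)
    for i in range(1, n - 1):
        A[i, i - 1] = A[i, i + 1] = -r
        B[i, i - 1] = B[i, i + 1] = r
    # Hold the boundary temperatures fixed.
    for k in (0, -1):
        A[k, :] = 0.0; A[k, k] = 1.0
        B[k, :] = 0.0; B[k, k] = 1.0
    return np.linalg.solve(A, B @ T)

# Toy example: a 500 m reach with a warm spot relaxing by diffusion.
x = np.linspace(0.0, 500.0, 26)
T = 15.0 + 5.0 * np.exp(-((x - 250.0) / 50.0) ** 2)
for _ in range(100):
    T = crank_nicolson_step(T, alpha=1.0, dx=x[1] - x[0], dt=10.0)
```

The scheme is unconditionally stable, which is why it is a common choice for transient heat transport models like the one described above.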

  12. Input-Output model for waste management plan for Nigeria | Njoku ...

    African Journals Online (AJOL)

    An Input-Output Model for Waste Management Plan has been developed for Nigeria based on the Leontief concept and life cycle analysis. Waste was considered as a source of pollution, loss of resources, and emission of greenhouse gases from bio-chemical treatment and decomposition, with negative impact on the ...
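
    The Leontief concept underlying such input-output models reduces to solving x = Ax + d for total output x given a technology matrix A and final demand d. A minimal sketch with an invented 3-sector matrix (the values are purely illustrative, not Nigerian data):

```python
import numpy as np

# Hypothetical technology matrix A: a_ij is the input from sector i
# required per unit of output of sector j. Final demand d.
A = np.array([[0.10, 0.20, 0.05],
              [0.30, 0.05, 0.10],
              [0.05, 0.10, 0.15]])
d = np.array([100.0, 50.0, 80.0])

# Leontief solution: total output x satisfying x = A x + d.
L = np.linalg.inv(np.eye(3) - A)   # Leontief inverse
x = L @ d
```

Because the Leontief inverse is non-negative with a unit diagonal, each sector's total output is at least its final demand.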

  13. Land cover models to predict non-point nutrient inputs for selected ...

    African Journals Online (AJOL)

    WQSAM is a practical water quality model for use in guiding southern African water quality management. However, the estimation of non-point nutrient inputs within WQSAM is uncertain, as it is achieved through a combination of calibration and expert knowledge. Non-point source loads can be correlated to particular land ...

  14. Comparison of plasma input and reference tissue models for analysing [(11)C]flumazenil studies

    NARCIS (Netherlands)

    Klumpers, Ursula M. H.; Veltman, Dick J.; Boellaard, Ronald; Comans, Emile F.; Zuketto, Cassandra; Yaqub, Maqsood; Mourik, Jurgen E. M.; Lubberink, Mark; Hoogendijk, Witte J. G.; Lammertsma, Adriaan A.

    2008-01-01

    A single-tissue compartment model with plasma input is the established method for analysing [(11)C]flumazenil ([(11)C]FMZ) studies. However, arterial cannulation and measurement of metabolites are time-consuming. Therefore, a reference tissue approach is appealing, but this approach has not been

  15. The economic impact of multifunctional agriculture in The Netherlands: A regional input-output model

    NARCIS (Netherlands)

    Heringa, P.W.; Heide, van der C.M.; Heijman, W.J.M.

    2012-01-01

    Multifunctional agriculture is a broad concept lacking a precise and uniform definition. Moreover, little is known about the societal importance of multifunctional agriculture. This paper is an empirical attempt to fill this gap. To this end, an input-output model is constructed for multifunctional

  16. The economic impact of multifunctional agriculture in Dutch regions: An input-output model

    NARCIS (Netherlands)

    Heringa, P.W.; Heide, van der C.M.; Heijman, W.J.M.

    2013-01-01

    Multifunctional agriculture is a broad concept lacking a precise definition. Moreover, little is known about the societal importance of multifunctional agriculture. This paper is an empirical attempt to fill this gap. To this end, an input-output model was constructed for multifunctional agriculture

  17. INTEGRATION OF INPUT - OUTPUT APPROACH INTO AGENT-BASED MODELING. PART 2. INTERREGIONAL ANALYSIS IN AN ARTIFICIAL ECONOMY

    Directory of Open Access Journals (Sweden)

    Domozhirov D. A.

    2017-06-01

    The article demonstrates the possibilities of spatial analysis provided by the Agent-Based Multiregional Input-Output Model (ABMIOM) of the Russian economy. The basic hypothesis of the ABMIOM is that agents’ decisions at the microeconomic level lead to spatial changes at the macro level. Confirmation of this hypothesis requires experimental calculations with changes in various parameters that influence agents’ decisions (such as prices, taxes, tariffs, etc.). Analyzing the results of these calculations requires moving from microeconomic data to the macro level. The paper proposes a method for the structural analysis of the model simulation results using input-output tables. The method involves statistical aggregation of calculation results, construction of regional, national and interregional input-output tables and structural analysis of the obtained tables, including calculation of regional Leontief multipliers. The method proposed is used to study the influence of the level of transport costs on the geographical structure of trade flows. The results of the experiments confirmed that with the increase of transportation costs economic agents prefer to interact with the nearest agents, which leads to decreased interregional commodity exchange and to economic «insulation» of the regions.
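
    The regional Leontief multipliers mentioned above are conventionally computed as column sums of the Leontief inverse of the interregional coefficient matrix: the total output generated everywhere per unit of final demand for one region-sector. A sketch with an invented two-region, two-sector matrix (values are illustrative only, not drawn from the ABMIOM):

```python
import numpy as np

# Hypothetical interregional technical-coefficient matrix, ordered as
# (region 1 sector 1, region 1 sector 2, region 2 sector 1, region 2 sector 2).
A = np.array([[0.15, 0.10, 0.02, 0.01],
              [0.05, 0.20, 0.01, 0.03],
              [0.03, 0.02, 0.10, 0.15],
              [0.01, 0.04, 0.20, 0.05]])

L = np.linalg.inv(np.eye(4) - A)
# Output multiplier of column j: total output across all regions and
# sectors induced by one unit of final demand for j.
multipliers = L.sum(axis=0)
```

Each multiplier exceeds 1 because the unit of final demand itself is counted plus all induced intermediate production.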

  18. Design, Fabrication, and Modeling of a Novel Dual-Axis Control Input PZT Gyroscope

    Directory of Open Access Journals (Sweden)

    Cheng-Yang Chang

    2017-10-01

    Conventional gyroscopes are equipped with a single-axis control input, limiting their performance. Although researchers have proposed control algorithms with dual-axis control inputs to improve gyroscope performance, most have verified the control algorithms through numerical simulations because they lacked practical devices with dual-axis control inputs. The aim of this study was to design a piezoelectric gyroscope equipped with a dual-axis control input so that researchers may experimentally verify those control algorithms in the future. Designing a piezoelectric gyroscope with a dual-axis control input is more difficult than designing a conventional gyroscope because the control input must be effective over a broad frequency range to compensate for imperfections, and the multiple mode shapes in flexural deformations complicate the relation between flexural deformation and the proof mass position. This study solved these problems by using a lead zirconate titanate (PZT) material, introducing additional electrodes for shielding, developing an optimal electrode pattern, and performing calibrations of undesired couplings. The results indicated that the fabricated device could be operated at 5.5±1 kHz to perform dual-axis actuations and position measurements. The calibration of the fabricated device was completed by system identifications of a new dynamic model including gyroscopic motions, electromechanical coupling, mechanical coupling, electrostatic coupling, and capacitive output impedance. Finally, without the assistance of control algorithms, the “open loop sensitivity” of the fabricated gyroscope was 1.82 μV/deg/s with a nonlinearity of 9.5% full-scale output. This sensitivity is comparable with those of other PZT gyroscopes with single-axis control inputs.

  19. Linear and quadratic models of point process systems: contributions of patterned input to output.

    Science.gov (United States)

    Lindsay, K A; Rosenberg, J R

    2012-08-01

    In the 1880s Volterra characterised a nonlinear system using a functional series connecting continuous input and continuous output. Norbert Wiener, in the 1940s, circumvented problems associated with the application of Volterra series to physical problems by deriving from it a new series of terms that are mutually uncorrelated with respect to Gaussian processes. Subsequently, Brillinger, in the 1970s, introduced a point-process analogue of Volterra's series connecting point-process inputs to the instantaneous rate of point-process output. We derive here a new series from this analogue in which its terms are mutually uncorrelated with respect to Poisson processes. This new series expresses how patterned input in a spike train, represented by third-order cross-cumulants, is converted into the instantaneous rate of an output point-process. Given experimental records of suitable duration, the contribution of arbitrary patterned input to an output process can, in principle, be determined. Solutions for linear and quadratic point-process models with one and two inputs and a single output are investigated. Our theoretical results are applied to isolated muscle spindle data in which the spike trains from the primary and secondary endings from the same muscle spindle are recorded in response to stimulation of one and then two static fusimotor axons in the absence and presence of a random length change imposed on the parent muscle. For a fixed mean rate of input spikes, the analysis of the experimental data makes explicit which patterns of two input spikes contribute to an output spike. Copyright © 2012 Elsevier Ltd. All rights reserved.

  20. Design, Fabrication, and Modeling of a Novel Dual-Axis Control Input PZT Gyroscope.

    Science.gov (United States)

    Chang, Cheng-Yang; Chen, Tsung-Lin

    2017-10-31

    Conventional gyroscopes are equipped with a single-axis control input, limiting their performance. Although researchers have proposed control algorithms with dual-axis control inputs to improve gyroscope performance, most have verified the control algorithms through numerical simulations because they lacked practical devices with dual-axis control inputs. The aim of this study was to design a piezoelectric gyroscope equipped with a dual-axis control input so that researchers may experimentally verify those control algorithms in the future. Designing a piezoelectric gyroscope with a dual-axis control input is more difficult than designing a conventional gyroscope because the control input must be effective over a broad frequency range to compensate for imperfections, and the multiple mode shapes in flexural deformations complicate the relation between flexural deformation and the proof mass position. This study solved these problems by using a lead zirconate titanate (PZT) material, introducing additional electrodes for shielding, developing an optimal electrode pattern, and performing calibrations of undesired couplings. The results indicated that the fabricated device could be operated at 5.5±1 kHz to perform dual-axis actuations and position measurements. The calibration of the fabricated device was completed by system identifications of a new dynamic model including gyroscopic motions, electromechanical coupling, mechanical coupling, electrostatic coupling, and capacitive output impedance. Finally, without the assistance of control algorithms, the "open loop sensitivity" of the fabricated gyroscope was 1.82 μV/deg/s with a nonlinearity of 9.5% full-scale output. This sensitivity is comparable with those of other PZT gyroscopes with single-axis control inputs.

  1. The Neurobiological Basis of Cognition: Identification by Multi-Input, Multioutput Nonlinear Dynamic Modeling

    Science.gov (United States)

    Berger, Theodore W.; Song, Dong; Chan, Rosa H. M.; Marmarelis, Vasilis Z.

    2010-01-01

    The successful development of neural prostheses requires an understanding of the neurobiological bases of cognitive processes, i.e., how the collective activity of populations of neurons results in a higher level process not predictable based on knowledge of the individual neurons and/or synapses alone. We have been studying and applying novel methods for representing nonlinear transformations of multiple spike train inputs (multiple time series of pulse train inputs) produced by synaptic and field interactions among multiple subclasses of neurons arrayed in multiple layers of incompletely connected units. We have been applying our methods to study of the hippocampus, a cortical brain structure that has been demonstrated, in humans and in animals, to perform the cognitive function of encoding new long-term (declarative) memories. Without their hippocampi, animals and humans retain a short-term memory (memory lasting approximately 1 min), and long-term memory for information learned prior to loss of hippocampal function. Results of more than 20 years of studies have demonstrated that both individual hippocampal neurons, and populations of hippocampal cells, e.g., the neurons comprising one of the three principal subsystems of the hippocampus, induce strong, higher order, nonlinear transformations of hippocampal inputs into hippocampal outputs. For one synaptic input or for a population of synchronously active synaptic inputs, such a transformation is represented by a sequence of action potential inputs being changed into a different sequence of action potential outputs. In other words, an incoming temporal pattern is transformed into a different, outgoing temporal pattern. For multiple, asynchronous synaptic inputs, such a transformation is represented by a spatiotemporal pattern of action potential inputs being changed into a different spatiotemporal pattern of action potential outputs. Our primary thesis is that the encoding of short-term memories into new, long

  2. Responses of two nonlinear microbial models to warming and increased carbon input

    Science.gov (United States)

    Wang, Y. P.; Jiang, J.; Chen-Charpentier, B.; Agusto, F. B.; Hastings, A.; Hoffman, F.; Rasmussen, M.; Smith, M. J.; Todd-Brown, K.; Wang, Y.; Xu, X.; Luo, Y. Q.

    2016-02-01

    A number of nonlinear microbial models of soil carbon decomposition have been developed. Some of them have been applied globally but have yet to be shown to realistically represent soil carbon dynamics in the field. A thorough analysis of their key differences is needed to inform future model developments. Here we compare two nonlinear microbial models of soil carbon decomposition: one based on reverse Michaelis-Menten kinetics (model A) and the other on regular Michaelis-Menten kinetics (model B). Using analytic approximations and numerical solutions, we find that the oscillatory responses of carbon pools to a small perturbation in their initial pool sizes dampen faster in model A than in model B. Soil warming always decreases carbon storage in model A, but in model B it predominantly decreases carbon storage in cool regions and increases carbon storage in warm regions. For both models, the CO2 efflux from soil carbon decomposition reaches a maximum value some time after increased carbon input (as in priming experiments). This maximum CO2 efflux (Fmax) decreases with an increase in soil temperature in both models. However, the sensitivity of Fmax to the increased amount of carbon input increases with soil temperature in model A but decreases monotonically with an increase in soil temperature in model B. These differences in the responses to soil warming and carbon input between the two nonlinear models can be used to discern which model is more realistic when compared to results from field or laboratory experiments. These insights will contribute to an improved understanding of the significance of soil microbial processes in soil carbon responses to future climate change.
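
    The structural difference between the two decomposition kinetics compared above can be made concrete with a minimal two-pool sketch (substrate carbon C and microbial biomass B). All rate constants, pool sizes, and the simple forward-Euler integration are illustrative assumptions, not the parameterization used in the study; only the placement of the saturating pool distinguishes the two flux forms.

```python
import numpy as np

def flux_regular(C, B, vmax, K):
    """Regular Michaelis-Menten: decomposition saturates in substrate C."""
    return vmax * B * C / (K + C)

def flux_reverse(C, B, vmax, K):
    """Reverse Michaelis-Menten: decomposition saturates in biomass B."""
    return vmax * C * B / (K + B)

def simulate(flux, C0=100.0, B0=2.0, inputC=1.0, vmax=0.05, K=50.0,
             eps=0.4, death=0.02, dt=0.1, steps=5000):
    """Forward-Euler integration of a two-pool soil carbon model."""
    C, B = C0, B0
    for _ in range(steps):
        F = flux(C, B, vmax, K)
        C += dt * (inputC - F + death * B)  # litter input + dead microbes
        B += dt * (eps * F - death * B)     # growth with efficiency eps
    return C, B

Ca, Ba = simulate(flux_reverse)   # "model A" style kinetics
Cb, Bb = simulate(flux_regular)   # "model B" style kinetics
```

Running both simulations from the same initial state and comparing trajectories is one way to reproduce, qualitatively, the differing responses to warming or increased carbon input described in the abstract (e.g. by making vmax temperature-dependent).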

  3. ANALYSIS OF THE BANDUNG CHANGES EXCELLENT POTENTIAL THROUGH INPUT-OUTPUT MODEL USING INDEX LE MASNE

    Directory of Open Access Journals (Sweden)

    Teti Sofia Yanti

    2017-03-01

    Input-output tables are arranged to present an overview of the interrelationships and interdependence between units of activity (production sectors) in the whole economy; input-output models are therefore complete and comprehensive analytical tools. Input-output tables support analysis of the economic structure at the national/regional level, covering the structure of production and value added (GDP) of each sector. For comprehensive planning and evaluation of development outcomes, both nationally and at smaller scales (district/city), regional development planning can use input-output analysis. The analysis of Bandung's economic structure used the Le Masne index, comparing the technology coefficients of 2003 and 2008, of which nearly 50% changed. The trade sector has grown far more conspicuously than other sectors, followed by road transport services and air transport services; the development priorities and investment of Bandung should therefore be directed to these sectors, since they can act as a driving force and an attraction for the growth of other sectors. The sectors with the largest decline were industrial chemicals and chemical goods, followed by the oil and refinery industry and the textile industry excluding garments.
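
    The underlying comparison of technology coefficients between two benchmark years can be sketched as follows. The two transaction tables are invented for illustration, and the simple relative-change criterion used here is a stand-in for (not an implementation of) the Le Masne index computed in the article.

```python
import numpy as np

def tech_coefficients(Z, x):
    """Technical coefficients a_ij = z_ij / x_j from the transaction
    matrix Z and the gross output vector x (divides column j by x_j)."""
    return Z / x

# Invented 3-sector transaction tables for two benchmark years.
Z2003 = np.array([[20., 35., 10.], [30., 15., 25.], [10., 20., 30.]])
x2003 = np.array([200., 250., 180.])
Z2008 = np.array([[25., 30., 18.], [28., 22., 20.], [15., 35., 28.]])
x2008 = np.array([240., 260., 210.])

A1 = tech_coefficients(Z2003, x2003)
A2 = tech_coefficients(Z2008, x2008)
# Share of coefficients that changed by more than 10% in relative terms.
changed = np.abs(A2 - A1) > 0.10 * np.abs(A1)
share_changed = changed.mean()
```

Comparing the coefficient matrices of two years in this way quantifies the kind of structural change the abstract summarizes ("nearly 50% change" between 2003 and 2008).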

  4. Simulation model structure numerically robust to changes in magnitude and combination of input and output variables

    DEFF Research Database (Denmark)

    Rasmussen, Bjarne D.; Jakobsen, Arne

    1999-01-01

    Mathematical models of refrigeration systems are often based on a coupling of component models forming a “closed loop” type of system model. In these models the coupling structure of the component models represents the actual flow path of refrigerant in the system. Very often numerical instabilities prevent the practical use of such a system model for more than one input/output combination and for other magnitudes of refrigerating capacities. A higher numerical robustness of system models can be achieved by making a model for the refrigeration cycle the core of the system model and by using variables with narrow definition intervals for the exchange of information between the cycle model and the component models. The advantages of the cycle-oriented method are illustrated by an example showing the refrigeration cycle similarities between two very different refrigeration systems.

  5. Dynamics of a Stage Structured Pest Control Model in a Polluted Environment with Pulse Pollution Input

    OpenAIRE

    Liu, Bing; Xu, Ling; Kang, Baolin

    2013-01-01

    By using a pollution model and impulsive delay differential equations, we formulate a pest control model with stage structure for the natural enemy in a polluted environment, introducing a constant periodic pollutant input and killing pests at different fixed moments, and investigate the dynamics of such a system. We assume only that the natural enemies are affected by pollution, and we choose a method of killing pests that does not harm the natural enemies. Sufficient conditions for global attractivity ...

  6. Filtering Based Recursive Least Squares Algorithm for Multi-Input Multioutput Hammerstein Models

    OpenAIRE

    Wang, Ziyun; Wang, Yan; Ji, Zhicheng

    2014-01-01

    This paper considers the parameter estimation problem for Hammerstein multi-input multi-output finite impulse response moving average (FIR-MA) systems. Filtered by the noise transfer function, the FIR-MA model is transformed into a controlled autoregressive model. The key-term variable separation principle is used to derive a data filtering based recursive least squares algorithm. The numerical examples confirm that the proposed algorithm can estimate parameters more accurately and has a higher computational...
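
    The recursive least squares core of such an algorithm can be sketched for a plain linear regression model y_t = φ_tᵀθ + e_t. This sketch omits the key-term separation and data filtering steps that are the paper's contribution; the regressor and true parameters below are synthetic.

```python
import numpy as np

def rls_update(theta, P, phi, y, lam=1.0):
    """One recursive least squares step with forgetting factor lam."""
    Pphi = P @ phi
    k = Pphi / (lam + phi @ Pphi)            # gain vector
    theta = theta + k * (y - phi @ theta)    # parameter update
    P = (P - np.outer(k, Pphi)) / lam        # covariance update
    return theta, P

rng = np.random.default_rng(0)
true_theta = np.array([2.0, -1.0, 0.5])
theta = np.zeros(3)
P = 1000.0 * np.eye(3)                       # large initial covariance
for _ in range(500):
    phi = rng.normal(size=3)                 # synthetic regressor
    y = phi @ true_theta + 0.01 * rng.normal()
    theta, P = rls_update(theta, P, phi, y)
```

After a few hundred noisy samples the estimate converges close to the true parameter vector, which is the behavior the paper's filtered variant improves upon for correlated (moving-average) noise.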

  7. Input parameters for LEAP and analysis of the Model 22C data base

    Energy Technology Data Exchange (ETDEWEB)

    Stewart, L.; Goldstein, M.

    1981-05-01

    The input data for the Long-Term Energy Analysis Program (LEAP) employed by EIA for projections of long-term energy supply and demand in the US were studied and additional documentation provided. Particular emphasis has been placed on the LEAP Model 22C input data base, which was used in obtaining the output projections which appear in the 1978 Annual Report to Congress. Definitions, units, associated model parameters, and translation equations are given in detail. Many parameters were set to null values in Model 22C so as to turn off certain complexities in LEAP; these parameters are listed in Appendix B along with parameters having constant values across all activities. The values of the parameters for each activity are tabulated along with the source upon which each parameter is based - and appropriate comments provided, where available. The structure of the data base is briefly outlined and an attempt made to categorize the parameters according to the methods employed for estimating the numerical values. Due to incomplete documentation and/or lack of specific parameter definitions, few of the input values could be traced and uniquely interpreted using the information provided in the primary and secondary sources. Input parameter choices were noted which led to output projections which are somewhat suspect. Other data problems encountered are summarized. Some of the input data were corrected and a revised base case was constructed. The output projections for this revised case are compared with the Model 22C output for the year 2020, for the Transportation Sector. LEAP could be a very useful tool, especially so in the study of emerging technologies over long-time frames.

  8. CONSTRUCTION OF A DYNAMIC INPUT-OUTPUT MODEL WITH A HUMAN CAPITAL BLOCK

    Directory of Open Access Journals (Sweden)

    Baranov A. O.

    2017-03-01

    The accumulation of human capital is an important factor of economic growth. It seems useful to include «human capital» as a factor of a macroeconomic model, as it helps to take into account the quality differentiation of the workforce. Most models usually distinguish the labor force by level of education, while some of the factors remain unaccounted for. Among them are health status and cultural development level, which influence the productivity level as well as gross product reproduction. Inclusion of a human capital block in the interindustry model can help to make it more reliable for economic development forecasting. The article presents a mathematical description of the extended dynamic input-output model (DIOM) with a human capital block. The extended DIOM is based on the Input-Output Model from the KAMIN system (the System of Integrated Analyses of Interindustrial Information) developed at the Institute of Economics and Industrial Engineering of the Siberian Branch of the Academy of Sciences of the Russian Federation and at the Novosibirsk State University. The extended input-output model can be used to analyze and forecast the development of the Russian economy.

  9. Subjective bias in PRA - the role of judgement in the selection of plant modeling input data for establishing safety goals

    International Nuclear Information System (INIS)

    Haenni, H.P.; Smith, A.L.

    1986-01-01

    The sources of the uncertainties are generally accepted as modeling deficiencies, lack of completeness in the analysis, and input data deficiencies. The role of judgement in selecting input data for establishing safety goals is discussed. As an example, a safety goal for unacceptable radioactivity release is considered: two analysts discuss the introduction of an emergency service water system, each applying engineering judgement in a different way. Using PRA combined with safety goals as a decision-making tool could have an important influence on the design and the costs of the plant. The suitability of the methodology has to be generally accepted before it will be established as a regulatory requirement. (orig.)

  10. An integrated model for the assessment of global water resources - Part 1: Input meteorological forcing and natural hydrological cycle modules

    Science.gov (United States)

    Hanasaki, N.; Kanae, S.; Oki, T.; Masuda, K.; Motoya, K.; Shen, Y.; Tanaka, K.

    2007-10-01

    An integrated global water resources model was developed consisting of six modules: land surface hydrology, river routing, crop growth, reservoir operation, environmental flow requirement estimation, and anthropogenic water withdrawal. It simulates both natural and anthropogenic water flow globally (excluding Antarctica) on a daily basis at a spatial resolution of 1°×1° (longitude and latitude). The simulation period is 10 years, from 1986 to 1995. This first part of the two-part report describes the input meteorological forcing and natural hydrological cycle modules of the integrated model, namely the land surface hydrology module and the river routing module. The input meteorological forcing was provided by the second Global Soil Wetness Project (GSWP2), an international land surface modeling project. Several reported shortcomings of the forcing component were improved. The land surface hydrology module was developed based on a bucket type model that simulates energy and water balance on land surfaces. Simulated runoff was compared and validated with observation-based global runoff data sets and observed streamflow records at 32 major river gauging stations around the world. Mean annual runoff agreed well with earlier studies at global, continental, and continental zonal mean scales, indicating the validity of the input meteorological data and land surface hydrology module. In individual basins, the mean bias was less than ±20% in 14 of the 32 river basins and less than ±50% in 24 of the basins. The performance was similar to the best available precedent studies with closure of energy and water. The timing of the peak in streamflow and the shape of monthly hydrographs were well simulated in most of the river basins when large lakes or reservoirs did not affect them. The results indicate that the input meteorological forcing component and the land surface hydrology module provide a framework with which to assess global water resources, with the potential
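
    The bucket-type water balance described above reduces to a few lines per daily step: evaporation scales with bucket fullness and any excess over capacity becomes runoff. The capacity, potential evaporation rate, and synthetic precipitation below are arbitrary placeholders, not GSWP2 values or the module's actual formulation.

```python
import numpy as np

def bucket_step(W, precip, pet, capacity=150.0):
    """One daily step of a simple bucket water balance (all in mm).
    Evaporation scales with bucket fullness; excess becomes runoff."""
    evap = pet * min(W / capacity, 1.0)
    W = W + precip - evap
    runoff = max(W - capacity, 0.0)
    return W - runoff, evap, runoff

rng = np.random.default_rng(1)
W = W0 = 75.0
total_p = total_e = total_r = 0.0
for day in range(365):
    p = max(rng.normal(3.0, 4.0), 0.0)  # synthetic daily precipitation, mm
    W, e, r = bucket_step(W, p, pet=2.5)
    total_p += p
    total_e += e
    total_r += r
```

A useful sanity check for any such module is water balance closure: accumulated precipitation minus evaporation and runoff must equal the change in storage.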

  11. Application of a Linear Input/Output Model to Tankless Water Heaters

    Energy Technology Data Exchange (ETDEWEB)

    Butcher, T.; Schoenbauer, B.

    2011-12-31

    In this study, the applicability of a linear input/output model to gas-fired, tankless water heaters has been evaluated. This simple model assumes that the relationship between input and output, averaged over both active draw and idle periods, is linear. This approach is being applied to boilers in other studies and offers the potential to make a small number of simple measurements to obtain the model parameters. These parameters can then be used to predict performance under complex load patterns. Both condensing and non-condensing water heaters have been tested under a very wide range of load conditions. It is shown that this approach can be used to reproduce performance metrics, such as the energy factor, and can be used to evaluate the impacts of alternative draw patterns and conditions.
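
    The linear input/output assumption can be illustrated by fitting draw-averaged data to a straight line, output = a·input + b, and using the fitted parameters to predict efficiency under other load patterns. The numbers below are fabricated to show only the fitting step, not measured water heater performance.

```python
import numpy as np

# Synthetic draw-averaged data: average input vs. output rate (kW),
# fabricated to follow output = 0.82 * input - 0.15 plus small noise
# (the negative intercept plays the role of a standby loss).
rng = np.random.default_rng(2)
e_in = np.linspace(0.5, 10.0, 20)
e_out = 0.82 * e_in - 0.15 + 0.01 * rng.normal(size=e_in.size)

# Least-squares fit of the two model parameters.
a, b = np.polyfit(e_in, e_out, 1)

# Predicted average efficiency under an arbitrary load pattern.
load = np.array([1.0, 4.0, 7.5])
eff = (a * load + b) / load
```

With a negative intercept, predicted efficiency rises with load, which is the qualitative behavior that lets a small number of measurements characterize performance under complex draw patterns.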

  12. High Resolution Modeling of the Thermospheric Response to Energy Inputs During the RENU-2 Rocket Flight

    Science.gov (United States)

    Walterscheid, R. L.; Brinkman, D. G.; Clemmons, J. H.; Hecht, J. H.; Lessard, M.; Fritz, B.; Hysell, D. L.; Clausen, L. B. N.; Moen, J.; Oksavik, K.; Yeoman, T. K.

    2017-12-01

    The Earth's magnetospheric cusp provides direct access of energetic particles to the thermosphere. These particles produce ionization and kinetic (particle) heating of the atmosphere. The increased ionization coupled with enhanced electric fields in the cusp produces increased Joule heating and ion drag forcing. These energy inputs cause large wind and temperature changes in the cusp region. The Rocket Experiment for Neutral Upwelling -2 (RENU-2) launched from Andoya, Norway at 0745UT on 13 December 2015 into the ionosphere-thermosphere beneath the magnetic cusp. It made measurements of the energy inputs (e.g., precipitating particles, electric fields) and the thermospheric response to these energy inputs (e.g., neutral density and temperature, neutral winds). Complementary ground based measurements were made. In this study, we use a high resolution two-dimensional time-dependent non hydrostatic nonlinear dynamical model driven by rocket and ground based measurements of the energy inputs to simulate the thermospheric response during the RENU-2 flight. Model simulations will be compared to the corresponding measurements of the thermosphere to see what they reveal about thermospheric structure and the nature of magnetosphere-ionosphere-thermosphere coupling in the cusp. Acknowledgements: This material is based upon work supported by the National Aeronautics and Space Administration under Grants: NNX16AH46G and NNX13AJ93G. This research was also supported by The Aerospace Corporation's Technical Investment program

  13. Input vs. Output Taxation—A DSGE Approach to Modelling Resource Decoupling

    Directory of Open Access Journals (Sweden)

    Marek Antosiewicz

    2016-04-01

    Environmental taxes constitute a crucial instrument aimed at reducing resource use through lower production losses, resource-leaner products, and more resource-efficient production processes. In this paper we focus on material use and apply a multi-sector dynamic stochastic general equilibrium (DSGE) model to study two types of taxation: a tax on material inputs used by the industry, energy, construction, and transport sectors, and a tax on the output of these sectors. We allow for endogenous adoption of resource-saving technologies. We calibrate the model for the EU27 area using an IO matrix. We consider taxation introduced from 2021 and simulate its impact until 2050. We compare the taxes according to their ability to induce a reduction in material use and raise revenue. We also consider the effect of spending this revenue on a reduction of labour taxation. We find that input and output taxation create contrasting incentives and have opposite effects on resource efficiency. The material input tax induces investment in efficiency-improving technology which, in the long term, results in GDP and employment 15%–20% higher than in the case of a comparable output tax. We also find that using revenues to reduce taxes on labour has stronger beneficial effects for the input tax.

  14. A non-linear dimension reduction methodology for generating data-driven stochastic input models

    Science.gov (United States)

    Ganapathysubramanian, Baskar; Zabaras, Nicholas

    2008-06-01

    Stochastic analysis of random heterogeneous media (polycrystalline materials, porous media, functionally graded materials) provides information of significance only if realistic input models of the topology and property variations are used. This paper proposes a framework to construct such input stochastic models for the topology and thermal diffusivity variations in heterogeneous media using a data-driven strategy. Given a set of microstructure realizations (input samples) generated from given statistical information about the medium topology, the framework constructs a reduced-order stochastic representation of the thermal diffusivity. This problem of constructing a low-dimensional stochastic representation of property variations is analogous to the problem of manifold learning and parametric fitting of hyper-surfaces encountered in image processing and psychology. Denote by M the set of microstructures that satisfy the given experimental statistics. A non-linear dimension reduction strategy is utilized to map M to a low-dimensional region, A. We first show that M is a compact manifold embedded in a high-dimensional input space R^n. An isometric mapping F from M to a low-dimensional, compact, connected set A ⊂ R^d (d ≪ n) is constructed. Given only a finite set of samples of the data, the methodology uses arguments from graph theory and differential geometry to construct the isometric transformation F:M→A. Asymptotic convergence of the representation of M by A is shown. This mapping F serves as an accurate, low-dimensional, data-driven representation of the property variations. The reduced-order model of the material topology and thermal diffusivity variations is subsequently used as an input in the solution of stochastic partial differential equations that describe the evolution of dependent variables. A sparse grid collocation strategy (Smolyak algorithm) is utilized to solve these stochastic equations efficiently. We showcase the methodology by constructing low
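The graph-theoretic step of the dimension-reduction strategy described above (approximating geodesic distances on the manifold M by shortest paths in a nearest-neighbour graph, as in Isomap-style methods) can be sketched as follows. The quarter-circle toy data and the parameter k are illustrative assumptions, not the paper's microstructure samples:

```python
import math

def euclidean(a, b):
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def geodesic_distances(points, k=2):
    """Approximate manifold geodesics: connect each point to its k nearest
    neighbours, then run Floyd-Warshall shortest paths on that graph."""
    n = len(points)
    INF = float("inf")
    d = [[INF] * n for _ in range(n)]
    for i in range(n):
        d[i][i] = 0.0
        order = sorted(range(n), key=lambda j: euclidean(points[i], points[j]))
        for j in order[1:k + 1]:          # k nearest neighbours of i
            w = euclidean(points[i], points[j])
            d[i][j] = min(d[i][j], w)
            d[j][i] = min(d[j][i], w)
    for m in range(n):                    # Floyd-Warshall relaxation
        for i in range(n):
            for j in range(n):
                if d[i][m] + d[m][j] < d[i][j]:
                    d[i][j] = d[i][m] + d[m][j]
    return d

# Points on a quarter circle: the geodesic between the end points follows
# the arc, so it is longer than the straight-line (chord) distance.
pts = [(math.cos(i * math.pi / 20), math.sin(i * math.pi / 20))
       for i in range(11)]
g = geodesic_distances(pts, k=2)
print(g[0][10] > euclidean(pts[0], pts[10]))
```

The graph distance recovers the intrinsic (arc-length) geometry that a raw Euclidean embedding would miss, which is the property the isometric mapping F relies on.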

  15. A non-linear dimension reduction methodology for generating data-driven stochastic input models

    International Nuclear Information System (INIS)

    Ganapathysubramanian, Baskar; Zabaras, Nicholas

    2008-01-01

    Stochastic analysis of random heterogeneous media (polycrystalline materials, porous media, functionally graded materials) provides information of significance only if realistic input models of the topology and property variations are used. This paper proposes a framework to construct such input stochastic models for the topology and thermal diffusivity variations in heterogeneous media using a data-driven strategy. Given a set of microstructure realizations (input samples) generated from given statistical information about the medium topology, the framework constructs a reduced-order stochastic representation of the thermal diffusivity. This problem of constructing a low-dimensional stochastic representation of property variations is analogous to the problem of manifold learning and parametric fitting of hyper-surfaces encountered in image processing and psychology. Denote by M the set of microstructures that satisfy the given experimental statistics. A non-linear dimension reduction strategy is utilized to map M to a low-dimensional region, A. We first show that M is a compact manifold embedded in a high-dimensional input space R^n. An isometric mapping F from M to a low-dimensional, compact, connected set A ⊂ R^d (d ≪ n) is constructed. Given only a finite set of samples of the data, the methodology uses arguments from graph theory and differential geometry to construct the isometric transformation F:M→A. Asymptotic convergence of the representation of M by A is shown. This mapping F serves as an accurate, low-dimensional, data-driven representation of the property variations. The reduced-order model of the material topology and thermal diffusivity variations is subsequently used as an input in the solution of stochastic partial differential equations that describe the evolution of dependent variables. A sparse grid collocation strategy (Smolyak algorithm) is utilized to solve these stochastic equations efficiently. We showcase the methodology

  16. Integrate-and-fire models with an almost periodic input function

    Science.gov (United States)

    Kasprzak, Piotr; Nawrocki, Adam; Signerska-Rynkowska, Justyna

    2018-02-01

    We investigate leaky integrate-and-fire models (LIF models for short) driven by Stepanov and μ-almost periodic functions. Special attention is paid to the properties of the firing map and its displacement, which give information about the spiking behavior of the considered system. We provide conditions under which such maps are well-defined and uniformly continuous. We show that the LIF models with Stepanov almost periodic inputs have uniformly almost periodic displacements. We also show that in the case of μ-almost periodic drives it may happen that the displacement map is uniformly continuous, but is not μ-almost periodic (and thus cannot be Stepanov or uniformly almost periodic). By allowing discontinuous inputs, we extend some previous results, showing, for example, that the firing rate for the LIF models with Stepanov almost periodic input exists and is unique. This is a starting point for the investigation of the dynamics of almost-periodically driven integrate-and-fire systems.
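The firing map studied in the abstract can be computed numerically for a concrete LIF model. A minimal sketch is below; the threshold, reset value, and the two-frequency almost periodic drive (incommensurate sine frequencies) are illustrative assumptions, not the classes of drives analysed in the paper:

```python
import math

def firing_map(t0, drive, theta=1.0, dt=1e-4, t_max=50.0):
    """Return Phi(t0): the first time after t0 at which the LIF voltage
    v' = -v + I(t), started from the reset value v = 0 at t0, crosses the
    firing threshold theta (forward-Euler integration)."""
    v, t = 0.0, t0
    while t < t0 + t_max:
        v += dt * (-v + drive(t))
        t += dt
        if v >= theta:
            return t
    raise RuntimeError("no spike before t_max")

# Hypothetical almost periodic drive: two sines with incommensurate
# frequencies 1 and 1/sqrt(2), plus a constant supra-threshold term.
drive = lambda t: 2.0 + 0.5 * math.sin(t) + 0.5 * math.sin(t / math.sqrt(2))

phi = firing_map(0.0, drive)
psi = phi - 0.0               # displacement Psi(t0) = Phi(t0) - t0
print(round(psi, 3))
```

The displacement map Psi, evaluated over many start times t0, is the object whose (almost) periodicity properties determine the firing rate.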

  17. Non parametric, self organizing, scalable modeling of spatiotemporal inputs: the sign language paradigm.

    Science.gov (United States)

    Caridakis, G; Karpouzis, K; Drosopoulos, A; Kollias, S

    2012-12-01

    Modeling and recognizing spatiotemporal, as opposed to static, input is a challenging task since it incorporates input dynamics as part of the problem. The vast majority of existing methods tackle the problem as an extension of the static counterpart, using dynamics, such as input derivatives, at the feature level and adopting artificial intelligence and machine learning techniques originally designed for solving problems that do not specifically address the temporal aspect. The proposed approach deals with the temporal and spatial aspects of the spatiotemporal domain in a discriminative as well as coupling manner. Self-Organizing Maps (SOM) model the spatial aspect of the problem, while Markov models capture its temporal counterpart. Incorporation of adjacency, both in training and classification, endows the overall architecture with robustness and adaptability. The proposed scheme is validated both theoretically, through an error propagation study, and experimentally, on the recognition of individual signs performed by different native Greek Sign Language users. Results illustrate the architecture's superiority when compared to Hidden Markov Model techniques and variations, both in terms of classification performance and computational cost. Copyright © 2012 Elsevier Ltd. All rights reserved.
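The SOM component used above for the spatial aspect can be sketched with a minimal one-dimensional map; the unit count, learning rate, neighbourhood width, and diagonal toy data are illustrative assumptions, not the paper's sign-language features:

```python
import math, random

def train_som(data, n_units=5, epochs=200, lr=0.3, sigma=2.0, seed=0):
    """Minimal 1-D Self-Organizing Map: for each sample, the winning unit
    and its neighbours on the unit line move toward the sample, weighted
    by a Gaussian neighbourhood function."""
    rng = random.Random(seed)
    w = [[rng.random(), rng.random()] for _ in range(n_units)]
    for _ in range(epochs):
        x = rng.choice(data)
        win = min(range(n_units),
                  key=lambda u: sum((w[u][d] - x[d]) ** 2 for d in range(2)))
        for u in range(n_units):
            h = math.exp(-((u - win) ** 2) / (2 * sigma ** 2))
            for d in range(2):
                w[u][d] += lr * h * (x[d] - w[u][d])
    return w

# Inputs spread along a diagonal; after training, the units should lie
# near that diagonal (first coordinate close to the second).
data = [(t / 10, t / 10) for t in range(11)]
weights = train_som(data)
print(all(abs(wx - wy) < 0.2 for wx, wy in weights))
```

In the full architecture, the sequence of winning units over time would then feed the Markov-model stage that captures the temporal dynamics.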

  18. Computational Techniques for Model Predictive Control of Large-Scale Systems with Continuous-Valued and Discrete-Valued Inputs

    Directory of Open Access Journals (Sweden)

    Koichi Kobayashi

    2013-01-01

    Full Text Available We propose computational techniques for model predictive control of large-scale systems with both continuous-valued control inputs and discrete-valued control inputs, which are a class of hybrid systems. In the proposed method, we introduce the notion of virtual control inputs, which are obtained by relaxing discrete-valued control inputs to continuous variables. In online computation, first, we find continuous-valued control inputs and virtual control inputs minimizing a cost function. Next, using the obtained virtual control inputs, only discrete-valued control inputs at the current time are computed in each subsystem. In addition, we also discuss the effect of quantization errors. Finally, the effectiveness of the proposed method is shown by a numerical example. The proposed method enables us to reduce and decentralize the computation load.
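The relax-then-project idea behind the virtual control inputs described above can be illustrated on a one-step toy problem. The quadratic cost, the admissible discrete set, and the grid-search minimizer are illustrative assumptions, not the paper's MPC formulation:

```python
def cost(uc, u, x=0.0, target=3.0):
    """Toy quadratic stage cost: tracking error plus input effort for the
    continuous input uc and the (possibly relaxed) input u."""
    return (x + uc + u - target) ** 2 + uc ** 2 + u ** 2

def argmin_continuous(f, lo=-5.0, hi=5.0, steps=2001):
    """Crude 1-D minimizer over a fine grid (stand-in for a QP solver)."""
    vals = [lo + i * (hi - lo) / (steps - 1) for i in range(steps)]
    return min(vals, key=f)

# Step 1: relax the discrete input to a continuous "virtual" input v and
# jointly minimize the cost (here via coordinate descent).
uc, v = 0.0, 0.0
for _ in range(20):
    uc = argmin_continuous(lambda a: cost(a, v))
    v = argmin_continuous(lambda b: cost(uc, b))

# Step 2: project the virtual input onto the admissible discrete set,
# then recompute the continuous input with the discrete value fixed.
discrete_set = [-1.0, 0.0, 1.0, 2.0]
ud = min(discrete_set, key=lambda d: abs(d - v))
uc = argmin_continuous(lambda a: cost(a, ud))
print(ud, round(uc, 2))
```

The gap between the relaxed optimum and the projected discrete value is exactly the quantization error whose effect the paper analyses.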

  19. The role of additive neurogenesis and synaptic plasticity in a hippocampal memory model with grid-cell like input.

    Directory of Open Access Journals (Sweden)

    Peter A Appleby

    Full Text Available Recently, we presented a study of adult neurogenesis in a simplified hippocampal memory model. The network was required to encode and decode memory patterns despite changing input statistics. We showed that additive neurogenesis was a more effective adaptation strategy compared to neuronal turnover and conventional synaptic plasticity as it allowed the network to respond to changes in the input statistics while preserving representations of earlier environments. Here we extend our model to include realistic, spatially driven input firing patterns in the form of grid cells in the entorhinal cortex. We compare network performance across a sequence of spatial environments using three distinct adaptation strategies: conventional synaptic plasticity, where the network is of fixed size but the connectivity is plastic; neuronal turnover, where the network is of fixed size but units in the network may die and be replaced; and additive neurogenesis, where the network starts out with fewer initial units but grows over time. We confirm that additive neurogenesis is a superior adaptation strategy when using realistic, spatially structured input patterns. We then show that a more biologically plausible neurogenesis rule that incorporates cell death and enhanced plasticity of new granule cells has an overall performance significantly better than any one of the three individual strategies operating alone. This adaptation rule can be tailored to maximise performance of the network when operating as either a short- or long-term memory store. We also examine the time course of adult neurogenesis over the lifetime of an animal raised under different hypothetical rearing conditions. These growth profiles have several distinct features that form a theoretical prediction that could be tested experimentally. Finally, we show that place cells can emerge and refine in a realistic manner in our model as a direct result of the sparsification performed by the dentate gyrus.

  20. Input variable selection for data-driven models of Coriolis flowmeters for two-phase flow measurement

    International Nuclear Information System (INIS)

    Wang, Lijuan; Yan, Yong; Wang, Xue; Wang, Tao

    2017-01-01

    Input variable selection is an essential step in the development of data-driven models for environmental, biological and industrial applications. By eliminating irrelevant or redundant variables, input variable selection identifies a suitable subset of variables as the input of a model; it also simplifies the model structure and improves computational efficiency. This paper describes the procedures of input variable selection for data-driven models for the measurement of liquid mass flowrate and gas volume fraction under two-phase flow conditions using Coriolis flowmeters. Three advanced input variable selection methods, including partial mutual information (PMI), genetic algorithm-artificial neural network (GA-ANN) and tree-based iterative input selection (IIS), are applied in this study. Typical data-driven models incorporating support vector machines (SVM) are established individually based on the input candidates resulting from the selection methods. The validity of the selection outcomes is assessed through an output performance comparison of the SVM based data-driven models and sensitivity analysis. The validation and analysis results suggest that the input variables selected by the PMI algorithm provide more effective information for the models to measure liquid mass flowrate, while the IIS algorithm provides fewer but more effective variables for the models to predict gas volume fraction. (paper)
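The information-theoretic idea behind PMI-style selection can be illustrated with a plain mutual-information estimate: a candidate variable that determines the target carries high MI, while a scrambled (irrelevant) candidate carries little and would be eliminated. This is a simplified stand-in for partial mutual information, with invented toy data:

```python
import math
from collections import Counter

def mutual_information(xs, ys, bins=4):
    """Histogram estimate of I(X;Y) in bits; a plain-MI simplification of
    the PMI selection criterion named in the abstract."""
    def binned(vs):
        lo, hi = min(vs), max(vs)
        return [min(int((v - lo) / (hi - lo + 1e-12) * bins), bins - 1)
                for v in vs]
    bx, by = binned(xs), binned(ys)
    n = len(xs)
    px, py, pxy = Counter(bx), Counter(by), Counter(zip(bx, by))
    return sum(c / n * math.log2((c / n) / (px[a] / n * py[b] / n))
               for (a, b), c in pxy.items())

xs = [i / 100 for i in range(100)]
relevant = [x * x for x in xs]                             # function of x
irrelevant = [((i * 37) % 100) / 100 for i in range(100)]  # scrambled
print(mutual_information(xs, relevant) > mutual_information(xs, irrelevant))
```

Ranking candidates by such a score (and, in PMI proper, conditioning on already-selected inputs) yields the reduced input subset the abstract refers to.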

  1. Input variable selection for data-driven models of Coriolis flowmeters for two-phase flow measurement

    Science.gov (United States)

    Wang, Lijuan; Yan, Yong; Wang, Xue; Wang, Tao

    2017-03-01

    Input variable selection is an essential step in the development of data-driven models for environmental, biological and industrial applications. By eliminating irrelevant or redundant variables, input variable selection identifies a suitable subset of variables as the input of a model; it also simplifies the model structure and improves computational efficiency. This paper describes the procedures of input variable selection for data-driven models for the measurement of liquid mass flowrate and gas volume fraction under two-phase flow conditions using Coriolis flowmeters. Three advanced input variable selection methods, including partial mutual information (PMI), genetic algorithm-artificial neural network (GA-ANN) and tree-based iterative input selection (IIS), are applied in this study. Typical data-driven models incorporating support vector machines (SVM) are established individually based on the input candidates resulting from the selection methods. The validity of the selection outcomes is assessed through an output performance comparison of the SVM based data-driven models and sensitivity analysis. The validation and analysis results suggest that the input variables selected by the PMI algorithm provide more effective information for the models to measure liquid mass flowrate, while the IIS algorithm provides fewer but more effective variables for the models to predict gas volume fraction.

  2. Input vs. Output Taxation—A DSGE Approach to Modelling Resource Decoupling

    OpenAIRE

    Marek Antosiewicz; Piotr Lewandowski; Jan Witajewski-Baltvilks

    2016-01-01

    Environmental taxes constitute a crucial instrument aimed at reducing resource use through lower production losses, resource-leaner products, and more resource-efficient production processes. In this paper we focus on material use and apply a multi-sector dynamic stochastic general equilibrium (DSGE) model to study two types of taxation: tax on material inputs used by industry, energy, construction, and transport sectors, and tax on output of these sectors. We allow for endogenous adoption of...

  3. A New Ensemble of Perturbed-Input-Parameter Simulations by the Community Atmosphere Model

    Energy Technology Data Exchange (ETDEWEB)

    Covey, C; Brandon, S; Bremer, P T; Domyancis, D; Garaizar, X; Johannesson, G; Klein, R; Klein, S A; Lucas, D D; Tannahill, J; Zhang, Y

    2011-10-27

    Uncertainty quantification (UQ) is a fundamental challenge in the numerical simulation of Earth's weather and climate, and other complex systems. It entails much more than attaching defensible error bars to predictions: in particular it includes assessing low-probability but high-consequence events. To achieve these goals with models containing a large number of uncertain input parameters, structural uncertainties, etc., raw computational power is needed. An automated, self-adapting search of the possible model configurations is also useful. Our UQ initiative at the Lawrence Livermore National Laboratory has produced the most extensive set to date of simulations from the US Community Atmosphere Model. We are examining output from about 3,000 twelve-year climate simulations generated with a specialized UQ software framework, and assessing the model's accuracy as a function of 21 to 28 uncertain input parameter values. Most of the input parameters we vary are related to the boundary layer, clouds, and other sub-grid scale processes. Our simulations prescribe surface boundary conditions (sea surface temperatures and sea ice amounts) to match recent observations. Fully searching this 21+ dimensional space is impossible, but sensitivity and ranking algorithms can identify input parameters having relatively little effect on a variety of output fields, either individually or in nonlinear combination. Bayesian statistical constraints, employing a variety of climate observations as metrics, also seem promising. Observational constraints will be important in the next step of our project, which will compute sea surface temperatures and sea ice interactively, and will study climate change due to increasing atmospheric carbon dioxide.
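Ensembles like the one described above are typically built from a space-filling design over the uncertain parameters. A minimal Latin hypercube sampler is sketched below; the parameter count and sample count are illustrative, not the study's 21–28 parameters and ~3,000 runs:

```python
import random

def latin_hypercube(n_samples, n_params, seed=42):
    """Latin hypercube sample in [0,1]^d: each parameter's range is split
    into n_samples equal strata and each stratum is used exactly once."""
    rng = random.Random(seed)
    cols = []
    for _ in range(n_params):
        strata = list(range(n_samples))
        rng.shuffle(strata)
        cols.append([(s + rng.random()) / n_samples for s in strata])
    return [list(row) for row in zip(*cols)]

# E.g. 10 perturbed configurations of 3 hypothetical sub-grid parameters,
# each later rescaled from [0,1] to its physical range.
design = latin_hypercube(10, 3)
# One-sample-per-stratum property holds for every parameter:
print(sorted(int(x[0] * 10) for x in design))
```

Each row of the design becomes one model configuration, which is why such designs cover a high-dimensional parameter space far more efficiently than a full grid.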

  4. Urban Landscape Characterization Using Remote Sensing Data For Input into Air Quality Modeling

    Science.gov (United States)

    Quattrochi, Dale A.; Estes, Maurice G., Jr.; Crosson, William; Khan, Maudood

    2005-01-01

    The urban landscape is inherently complex and this complexity is not adequately captured in the air quality models that are used to assess whether urban areas are in attainment of EPA air quality standards, particularly for ground level ozone. This inadequacy of air quality models to sufficiently respond to the heterogeneous nature of the urban landscape can affect how well these models predict ozone pollutant levels over metropolitan areas and, ultimately, whether cities exceed EPA ozone air quality standards. We are exploring the utility of high-resolution remote sensing data and urban growth projections as improved inputs to meteorological and air quality models, focusing on the Atlanta, Georgia metropolitan area as a case study. The National Land Cover Dataset at 30 m resolution is being used as the land use/land cover input and aggregated to the 4 km scale for the MM5 mesoscale meteorological model and the Community Multiscale Air Quality (CMAQ) modeling schemes. These data have been found to better characterize low density/suburban development than the USGS 1 km land use/land cover data that have traditionally been used in modeling. Air quality prediction for future scenarios to 2030 is being facilitated by land use projections using a spatial growth model. Land use projections were developed using the 2030 Regional Transportation Plan developed by the Atlanta Regional Commission. This allows the State Environmental Protection agency to evaluate how these transportation plans will affect future air quality.

  5. Automated detection of arterial input function in DSC perfusion MRI in a stroke rat model

    Energy Technology Data Exchange (ETDEWEB)

    Yeh, M-Y; Liu, H-L [Graduate Institute of Medical Physics and Imaging Science, Chang Gung University, Taoyuan, Taiwan (China); Lee, T-H; Yang, S-T; Kuo, H-H [Stroke Section, Department of Neurology, Chang Gung Memorial Hospital and Chang Gung University, Taoyuan, Taiwan (China); Chyi, T-K [Molecular Imaging Center Chang Gung Memorial Hospital, Taoyuan, Taiwan (China)], E-mail: hlaliu@mail.cgu.edu.tw

    2009-05-15

    Quantitative cerebral blood flow (CBF) estimation requires deconvolution of the tissue concentration time curves with an arterial input function (AIF). However, image-based determination of the AIF in rodents is challenging due to limited spatial resolution. We evaluated the feasibility of quantitative analysis using automated AIF detection and compared the results with the commonly applied semi-quantitative analysis. Permanent occlusion of the bilateral or unilateral common carotid artery was used to induce cerebral ischemia in rats. Imaging using the dynamic susceptibility contrast method was performed on a 3-T magnetic resonance scanner with a spin-echo echo-planar-image sequence (TR/TE = 700/80 ms, FOV = 41 mm, matrix = 64, 3 slices, SW = 2 mm), starting from 7 s prior to contrast injection (1.2 ml/kg) at four different time points. For quantitative analysis, CBF was calculated by deconvolution with the AIF, which was obtained from the 10 voxels with the greatest contrast enhancement. For semi-quantitative analysis, relative CBF was estimated by the integral divided by the first moment of the relaxivity time curves. We observed that if the AIFs obtained in the three different ROIs (whole brain, hemisphere without lesion and hemisphere with lesion) were similar, the CBF ratios (lesion/normal) between quantitative and semi-quantitative analyses had a similar trend at different operative time points; if the AIFs were different, the CBF ratios differed as well. We concluded that using local maxima one can define a proper AIF without knowing the anatomical location of arteries in a stroke rat model.
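The semi-quantitative estimate mentioned above (curve integral divided by first moment) can be sketched directly from a sampled concentration-time curve. The gamma-variate-like bolus shapes and all parameter values below are invented illustrations, not the study's data:

```python
import math

def trapz(ys, xs):
    """Trapezoidal integration of samples ys over time points xs."""
    return sum((xs[i + 1] - xs[i]) * (ys[i] + ys[i + 1]) / 2
               for i in range(len(ys) - 1))

def relative_cbf(conc, times):
    """Semi-quantitative relative CBF as described in the abstract: the
    curve integral (a blood-volume proxy) divided by the normalized first
    moment (a mean-transit-time proxy)."""
    area = trapz(conc, times)
    first_moment = trapz([t * c for t, c in zip(times, conc)], times) / area
    return area / first_moment

# Hypothetical bolus-passage curves sampled at 1 s intervals:
times = [float(t) for t in range(20)]
normal = [t ** 2 * math.exp(-t / 1.5) for t in times]            # early, sharp
ischemic = [0.1 * t ** 2 * math.exp(-t / 3.0) for t in times]    # late, low
print(relative_cbf(normal, times) > relative_cbf(ischemic, times))
```

A delayed, attenuated bolus passage thus yields a lower relative CBF, which is the behaviour the semi-quantitative index is designed to capture.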

  6. A generic method for automatic translation between input models for different versions of simulation codes

    International Nuclear Information System (INIS)

    Serfontein, Dawid E.; Mulder, Eben J.; Reitsma, Frederik

    2014-01-01

    A computer code was developed for the semi-automatic translation of input models for the VSOP-A diffusion neutronics simulation code to the format of the newer VSOP 99/05 code. In this paper, this algorithm is presented as a generic method for producing codes for the automatic translation of input models from the format of one code version to another, or even to that of a completely different code. Normally, such translations are done manually. However, input model files, such as those for the VSOP codes, often are very large and may consist of many thousands of numeric entries that make no particular sense to the human eye. Therefore the task of, for instance, nuclear regulators to verify the accuracy of such translated files can be very difficult and cumbersome. This may cause translation errors not to be picked up, which may have disastrous consequences later on when a reactor with such a faulty design is built. Therefore a generic algorithm for producing such automatic translation codes may ease the translation and verification process to a great extent. It will also remove human error from the process, which may significantly enhance the accuracy and reliability of the process. The developed algorithm also automatically creates a verification log file which permanently records the names and values of each variable used, as well as the list of meanings of all the possible values. This should greatly facilitate reactor licensing applications
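The translate-and-log pattern described above can be sketched in a few lines: map each old-format variable to its new name, and record every name, value, and value meaning in a verification log. The key names, meanings, and file name below are invented examples, not actual VSOP input variables:

```python
def translate(old_model, mapping, meanings, log_path):
    """Translate an input model from an old key-naming scheme to a new one
    while writing a verification log of every variable, its value, and the
    meaning of that value."""
    new_model, lines = {}, []
    for old_key, value in old_model.items():
        new_key = mapping[old_key]
        new_model[new_key] = value
        meaning = meanings.get((new_key, value), "numeric value")
        lines.append(f"{old_key} -> {new_key} = {value} ({meaning})")
    with open(log_path, "w") as f:
        f.write("\n".join(lines))
    return new_model

# Hypothetical old-format entries and their new-format names:
old = {"IGEOM": 1, "NMAT": 4}
mapping = {"IGEOM": "geometry_type", "NMAT": "n_materials"}
meanings = {("geometry_type", 1): "cylindrical"}
new = translate(old, mapping, meanings, "translate.log")
print(new)
```

The log file gives a reviewer a human-readable audit trail, which is the verification benefit the abstract emphasizes.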

  7. Effects of uncertain topographic input data on two-dimensional modeling of flow hydraulics, habitat suitability, and bed mobility

    Science.gov (United States)

    Legleiter, C. J.; McDonald, R.; Kyriakidis, P. C.; Nelson, J. M.

    2009-12-01

    Numerical models of flow and sediment transport increasingly are used to inform studies of aquatic habitat and river morphodynamics. Accurate topographic information is required to parameterize such models, but this fundamental input is typically subject to considerable uncertainty, which can propagate through a model to produce uncertain predictions of flow hydraulics. In this study, we examined the effects of uncertain topographic input on the output from FaSTMECH, a two-dimensional, finite difference flow model implemented on a regular, channel-centered grid; the model was applied to a simple, restored gravel-bed river. We adopted a spatially explicit stochastic simulation approach because elevation differences (i.e., perturbations) at one node of the computational grid influenced model predictions at nearby nodes, due to the strong coupling between proximal locations dictated by the governing equations of fluid flow. Geostatistical techniques provided an appropriate framework for examining the impacts of topographic uncertainty by generating many, equally likely realizations, each consistent with a statistical model summarizing the variability and spatial structure of channel morphology. By applying the model to each realization in turn, a distribution of model outputs was generated for each grid node. One set of realizations, conditioned to the available survey data and progressively thinned versions thereof, was used to quantify the effects of sampling strategy on topographic uncertainty and hence the uncertainty of model predictions. This analysis indicated that as the spacing between surveyed cross-sections increased, the reach-averaged ensemble standard deviation of water surface elevation, depth, velocity, and boundary shear stress increased as well, for both baseflow conditions and for a discharge of ~75% bankfull. A second set of realizations was generated by retaining randomly selected subsets of the original survey data and used to investigate the

  8. Responses of two nonlinear microbial models to warming or increased carbon input

    Science.gov (United States)

    Wang, Y. P.; Jiang, J.; Chen-Charpentier, B.; Agusto, F. B.; Hastings, A.; Hoffman, F.; Rasmussen, M.; Smith, M. J.; Todd-Brown, K.; Wang, Y.; Xu, X.; Luo, Y. Q.

    2015-09-01

    A number of nonlinear microbial models of soil carbon decomposition have been developed. Some of them have been applied globally but have yet to be shown to realistically represent soil carbon dynamics in the field. Therefore a thorough analysis of their key differences will be very useful for the future development of these models. Here we compare two nonlinear microbial models of soil carbon decomposition: one based on reverse Michaelis-Menten kinetics (model A) and the other on regular Michaelis-Menten kinetics (model B). Using a combination of analytic solutions and numerical simulations, we find that the oscillatory responses of the carbon pools in model A to a small perturbation in the initial pool sizes have a higher frequency and damp faster than those in model B. In response to soil warming, soil carbon always decreases in model A, but in model B it likely decreases in cool regions and increases in warm regions. Maximum CO2 efflux from soil carbon decomposition (Fmax) after an increased carbon addition decreases with an increase in soil temperature in both models; the sensitivity of Fmax to the amount of carbon input increases with soil temperature in model A, but decreases monotonically with an increase in soil temperature in model B. These differences in the responses to soil warming and carbon input between the two nonlinear models can be used to determine which model is more realistic with field or laboratory experiments. This will lead to a better understanding of the significance of soil microbial processes in the responses of soil carbon to future climate change at regional or global scales.
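The structural difference between the two kinetics can be made concrete: regular Michaelis-Menten flux saturates in the substrate pool C, while reverse Michaelis-Menten flux saturates in the microbial biomass B. The parameter values below are illustrative, not the models' calibrated values:

```python
def flux_regular(C, B, vmax=0.02, km=50.0):
    # Regular Michaelis-Menten: decomposition saturates in substrate C.
    return vmax * B * C / (km + C)

def flux_reverse(C, B, vmax=0.02, km=50.0):
    # Reverse Michaelis-Menten: decomposition saturates in biomass B.
    return vmax * B * C / (km + B)

# Doubling substrate far above km barely changes the regular-MM flux,
# but exactly doubles the reverse-MM flux:
r_regular = flux_regular(1000, 2) / flux_regular(500, 2)
r_reverse = flux_reverse(1000, 2) / flux_reverse(500, 2)
print(round(r_regular, 2), round(r_reverse, 2))  # prints 1.05 2.0
```

This differing sensitivity to the substrate pool is what drives the models' contrasting responses to warming and to added carbon input.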

  9. Development of algorithm for depreciation costs allocation in dynamic input-output industrial enterprise model

    Directory of Open Access Journals (Sweden)

    Keller Alevtina

    2017-01-01

    Full Text Available The article considers the issue of allocation of depreciation costs in the dynamic input-output model of an industrial enterprise. Accounting for depreciation costs in such a model improves the policy of fixed assets management. It is particularly relevant to develop the algorithm for the allocation of depreciation costs in the construction of a dynamic input-output model of an industrial enterprise, since such enterprises have a significant amount of fixed assets. Provided the algorithm is adequate, it allows one to evaluate the appropriateness of investments in fixed assets and to study the final financial results of an industrial enterprise depending on management decisions in the depreciation policy. It is necessary to note that the model in question is always degenerate for the enterprise. This is caused by the presence of zero rows in the matrix of capital expenditures, corresponding to structural elements unable to generate fixed assets (part of the service units, households, corporate consumers). The paper presents the algorithm for the allocation of depreciation costs for the model. This algorithm was developed by the authors and served as the basis for further development of the flowchart for subsequent implementation with the use of software. The construction of such an algorithm and its use for dynamic input-output models of industrial enterprises is motivated by the internationally accepted effectiveness of input-output models for national and regional economic systems. This allows us to consider the solutions discussed in the article to be of interest to economists of various industrial enterprises.

  10. Filtering Based Recursive Least Squares Algorithm for Multi-Input Multioutput Hammerstein Models

    Directory of Open Access Journals (Sweden)

    Ziyun Wang

    2014-01-01

    Full Text Available This paper considers the parameter estimation problem for Hammerstein multi-input multioutput finite impulse response (FIR-MA systems. Filtered by the noise transfer function, the FIR-MA model is transformed into a controlled autoregressive model. The key-term variable separation principle is used to derive a data filtering based recursive least squares algorithm. The numerical examples confirm that the proposed algorithm can estimate parameters more accurately and has a higher computational efficiency compared with the recursive least squares algorithm.
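A plain recursive least squares pass over an FIR model illustrates the estimation machinery the abstract builds on; this is a simplified stand-in (linear FIR, no noise filtering, no key-term separation) for the paper's filtering-based algorithm, with invented coefficients:

```python
import random

def rls_fir(us, ys, n_taps=2, lam=1.0, delta=100.0):
    """Recursive least squares for y(k) = b0*u(k) + b1*u(k-1) + ...:
    update the parameter estimate theta and covariance P per sample."""
    theta = [0.0] * n_taps
    P = [[delta if i == j else 0.0 for j in range(n_taps)]
         for i in range(n_taps)]
    for k in range(n_taps - 1, len(us)):
        phi = [us[k - i] for i in range(n_taps)]          # regressor
        Pphi = [sum(P[i][j] * phi[j] for j in range(n_taps))
                for i in range(n_taps)]
        denom = lam + sum(phi[i] * Pphi[i] for i in range(n_taps))
        K = [v / denom for v in Pphi]                     # gain vector
        err = ys[k] - sum(theta[i] * phi[i] for i in range(n_taps))
        theta = [theta[i] + K[i] * err for i in range(n_taps)]
        P = [[(P[i][j] - K[i] * Pphi[j]) / lam for j in range(n_taps)]
             for i in range(n_taps)]
    return theta

# Simulate y(k) = 2*u(k) - 0.5*u(k-1) and recover the coefficients.
rng = random.Random(1)
us = [rng.uniform(-1, 1) for _ in range(200)]
ys = [0.0] + [2 * us[k] - 0.5 * us[k - 1] for k in range(1, 200)]
theta = rls_fir(us, ys)
print([round(t, 2) for t in theta])
```

In the paper's setting, the regressor would additionally contain filtered and nonlinearly transformed terms, but the gain/covariance recursion is the same.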

  11. Unitary input DEA model to identify beef cattle production systems typologies

    Directory of Open Access Journals (Sweden)

    Eliane Gonçalves Gomes

    2012-08-01

    Full Text Available The cow-calf beef production sector in Brazil has a wide variety of operating systems. This suggests the identification and characterization of homogeneous regions of production, with the consequent implementation of actions to achieve its sustainability. In this paper we measured the performance of 21 modal livestock production systems in their cow-calf phase, considering husbandry and production variables. The proposed approach is based on data envelopment analysis (DEA). We used a unitary input DEA model, with apparent input orientation, together with the efficiency measurements generated by the inverted DEA frontier. We identified five typologies of modal production systems using the isoefficiency layers approach. The results showed that knowledge and process management are the most important factors for improving the efficiency of beef cattle production systems.

  12. Discrete element modelling (DEM) input parameters: understanding their impact on model predictions using statistical analysis

    Science.gov (United States)

    Yan, Z.; Wilkinson, S. K.; Stitt, E. H.; Marigo, M.

    2015-09-01

    Selection or calibration of particle property input parameters is one of the key problematic aspects for the implementation of the discrete element method (DEM). In the current study, a parametric multi-level sensitivity method is employed to understand the impact of the DEM input particle properties on the bulk responses for a given simple system: discharge of particles from a flat bottom cylindrical container onto a plate. In this case study, particle properties, such as Young's modulus, friction parameters and coefficient of restitution were systematically changed in order to assess their effect on material repose angles and particle flow rate (FR). It was shown that inter-particle static friction plays a primary role in determining both final angle of repose and FR, followed by the role of inter-particle rolling friction coefficient. The particle restitution coefficient and Young's modulus were found to have insignificant impacts and were strongly cross correlated. The proposed approach provides a systematic method that can be used to show the importance of specific DEM input parameters for a given system and then potentially facilitates their selection or calibration. It is concluded that shortening the process for input parameters selection and calibration can help in the implementation of DEM.
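The parameter-screening logic described above can be sketched with a one-at-a-time sweep: vary each input over its range while holding the others at baseline, and rank parameters by the spread of the response. The response function and parameter ranges below are toy assumptions standing in for a DEM simulation:

```python
def bulk_response(static_mu, rolling_mu, restitution):
    """Toy stand-in for a DEM output (e.g. angle of repose): strongly
    driven by static friction, weakly by rolling friction, and (by
    construction) independent of restitution."""
    return 30.0 * static_mu + 5.0 * rolling_mu + 0.0 * restitution

def one_at_a_time(model, baseline, ranges, n=11):
    """Vary one parameter at a time over its range, others at baseline;
    sensitivity = spread (max - min) of the response."""
    sens = {}
    for name, (lo, hi) in ranges.items():
        outs = []
        for i in range(n):
            args = dict(baseline)
            args[name] = lo + (hi - lo) * i / (n - 1)
            outs.append(model(**args))
        sens[name] = max(outs) - min(outs)
    return sens

baseline = {"static_mu": 0.5, "rolling_mu": 0.05, "restitution": 0.5}
ranges = {"static_mu": (0.1, 0.9), "rolling_mu": (0.0, 0.1),
          "restitution": (0.1, 0.9)}
s = one_at_a_time(bulk_response, baseline, ranges)
ranking = sorted(s, key=s.get, reverse=True)
print(ranking)  # prints ['static_mu', 'rolling_mu', 'restitution']
```

The study's multi-level method additionally captures parameter interactions, but the output (a ranking that tells which parameters merit careful calibration) is of the same kind.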

  13. The sensitivity of ecosystem service models to choices of input data and spatial resolution

    Science.gov (United States)

    Bagstad, Kenneth J.; Cohen, Erika; Ancona, Zachary H.; McNulty, Steven; Sun, Ge

    2018-01-01

    Although ecosystem service (ES) modeling has progressed rapidly in the last 10–15 years, comparative studies on data and model selection effects have become more common only recently. Such studies have drawn mixed conclusions about whether different data and model choices yield divergent results. In this study, we compared the results of different models to address these questions at national, provincial, and subwatershed scales in Rwanda. We compared results for carbon, water, and sediment as modeled using InVEST and WaSSI using (1) land cover data at 30 and 300 m resolution and (2) three different input land cover datasets. WaSSI and simpler InVEST models (carbon storage and annual water yield) were relatively insensitive to the choice of spatial resolution, but more complex InVEST models (seasonal water yield and sediment regulation) produced large differences when applied at differing resolution. Six out of nine ES metrics (InVEST annual and seasonal water yield and WaSSI) gave similar predictions for at least two different input land cover datasets. Despite differences in mean values when using different data sources and resolution, we found significant and highly correlated results when using Spearman's rank correlation, indicating consistent spatial patterns of high and low values. Our results confirm and extend conclusions of past studies, showing that in certain cases (e.g., simpler models and national-scale analyses), results can be robust to data and modeling choices. For more complex models, those with different output metrics, and subnational to site-based analyses in heterogeneous environments, data and model choices may strongly influence study findings.

  14. VSC Input-Admittance Modeling and Analysis Above the Nyquist Frequency for Passivity-Based Stability Assessment

    DEFF Research Database (Denmark)

    Harnefors, Lennart; Finger, Raphael; Wang, Xiongfei

    2017-01-01

    The interconnection stability of a gridconnected voltage-source converter (VSC) can be assessed via the dissipative properties of its input admittance. In this paper, the modeling of the current control loop is revisited with the aim to improve the accuracy of the input-admittance model above the...

  15. The MARINA model (Model to Assess River Inputs of Nutrients to seAs): Model description and results for China.

    Science.gov (United States)

    Strokal, Maryna; Kroeze, Carolien; Wang, Mengru; Bai, Zhaohai; Ma, Lin

    2016-08-15

    Chinese agriculture has been developing fast towards industrial food production systems that discharge nutrient-rich wastewater into rivers. As a result, nutrient export by rivers has been increasing, resulting in coastal water pollution. We developed a Model to Assess River Inputs of Nutrients to seAs (MARINA) for China. The MARINA Nutrient Model quantifies river export of nutrients by source at the sub-basin scale as a function of human activities on land. MARINA is a downscaled version for China of the Global NEWS-2 (Nutrient Export from WaterSheds) model with an improved approach for nutrient losses from animal production and population. We use the model to quantify dissolved inorganic and organic nitrogen (N) and phosphorus (P) export by six large rivers draining into the Bohai Gulf (Yellow, Hai, Liao), Yellow Sea (Yangtze, Huai) and South China Sea (Pearl) in 1970, 2000 and 2050. We addressed uncertainties in the MARINA Nutrient model. Between 1970 and 2000 river export of dissolved N and P increased by a factor of 2-8 depending on sea and nutrient form. Thus, the risk for coastal eutrophication increased. Direct losses of manure to rivers contribute to 60-78% of nutrient inputs to the Bohai Gulf and 20-74% of nutrient inputs to the other seas in 2000. Sewage is an important source of dissolved inorganic P, and synthetic fertilizers of dissolved inorganic N. Over half of the nutrients exported by the Yangtze and Pearl rivers originated from human activities in downstream and middlestream sub-basins. The Yellow River exported up to 70% of dissolved inorganic N and P from downstream sub-basins and of dissolved organic N and P from middlestream sub-basins. Rivers draining into the Bohai Gulf are drier, and thus transport fewer nutrients. For the future we calculate further increases in river export of nutrients. The MARINA Nutrient model quantifies the main sources of coastal water pollution for sub-basins. This information can contribute to formulation of

  16. Assessment of input function distortions on kinetic model parameters in simulated dynamic 82Rb PET perfusion studies

    International Nuclear Information System (INIS)

    Meyer, Carsten; Peligrad, Dragos-Nicolae; Weibrecht, Martin

    2007-01-01

    Cardiac 82-rubidium dynamic PET studies allow quantifying absolute myocardial perfusion by using tracer kinetic modeling. Here, the accurate measurement of the input function, i.e. the tracer concentration in blood plasma, is a major challenge. This measurement is deteriorated by inappropriate temporal sampling, spillover, etc. Such effects may influence the measured input peak value and the measured blood pool clearance. The aim of our study is to evaluate the effect of input function distortions on the myocardial perfusion as estimated by the model. To this end, we simulate noise-free myocardium time activity curves (TACs) with a two-compartment kinetic model. The input function to the model is a generic analytical function. Distortions of this function have been introduced by varying its parameters. Using the distorted input function, the compartment model has been fitted to the simulated myocardium TAC. This analysis has been performed for various sets of model parameters covering a physiologically relevant range. The evaluation shows that ±10% error in the input peak value can easily lead to ±10-25% error in the model parameter K1, which relates to myocardial perfusion. Variations in the input function tail are generally less relevant. We conclude that an accurate estimation especially of the plasma input peak is crucial for a reliable kinetic analysis and blood flow estimation
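The kind of bias described above can be reproduced with a hedged numerical sketch: a one-tissue compartment model (simpler than the paper's two-compartment setup) driven by a generic gamma-variate input, refit after distorting the input peak. All parameter values and the input-function form are illustrative assumptions, not the study's.

```python
# One-tissue compartment sketch: C_T(t) = K1 * conv(Cp, exp(-k2*t)).
# Generate a noise-free tissue curve with the true input, then refit K1
# using an input whose peak is scaled, and observe the induced bias.
import math

def gamma_input(t, peak_scale=1.0):
    """Generic gamma-variate style plasma input (arbitrary units)."""
    return peak_scale * t * math.exp(-t / 0.5)

def tissue_curve(k1, k2, peak_scale=1.0, dt=0.01, t_end=10.0):
    n = int(t_end / dt)
    out, acc = [], 0.0
    for i in range(n):
        t = i * dt
        # discretized convolution of Cp with K1*exp(-k2*t)
        acc = acc * math.exp(-k2 * dt) + k1 * gamma_input(t, peak_scale) * dt
        out.append(acc)
    return out

true_k1, k2 = 0.8, 0.3
target = tissue_curve(true_k1, k2)

def fit_k1(peak_scale):
    """Least-squares K1 for fixed k2 and a (possibly distorted) input;
    the model is linear in K1, so the optimum has a closed form."""
    basis = tissue_curve(1.0, k2, peak_scale)
    num = sum(b * y for b, y in zip(basis, target))
    den = sum(b * b for b in basis)
    return num / den

print(round(fit_k1(1.0), 3))  # recovers ~0.8 with the true input
print(round(fit_k1(1.1), 3))  # +10% input peak biases K1 down by ~9%
```

Because the model is linear in the input amplitude, a +10% peak error maps directly onto a roughly proportional underestimate of K1, consistent with the sensitivity range the abstract reports.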

  17. Scaling precipitation input to spatially distributed hydrological models by measured snow distribution

    Directory of Open Access Journals (Sweden)

    Christian Vögeli

    2016-12-01

    Full Text Available Accurate knowledge on snow distribution in alpine terrain is crucial for various applicationssuch as flood risk assessment, avalanche warning or managing water supply and hydro-power.To simulate the seasonal snow cover development in alpine terrain, the spatially distributed,physics-based model Alpine3D is suitable. The model is typically driven by spatial interpolationsof observations from automatic weather stations (AWS, leading to errors in the spatial distributionof atmospheric forcing. With recent advances in remote sensing techniques, maps of snowdepth can be acquired with high spatial resolution and accuracy. In this work, maps of the snowdepth distribution, calculated from summer and winter digital surface models based on AirborneDigital Sensors (ADS, are used to scale precipitation input data, with the aim to improve theaccuracy of simulation of the spatial distribution of snow with Alpine3D. A simple method toscale and redistribute precipitation is presented and the performance is analysed. The scalingmethod is only applied if it is snowing. For rainfall the precipitation is distributed by interpolation,with a simple air temperature threshold used for the determination of the precipitation phase.It was found that the accuracy of spatial snow distribution could be improved significantly forthe simulated domain. The standard deviation of absolute snow depth error is reduced up toa factor 3.4 to less than 20 cm. The mean absolute error in snow distribution was reducedwhen using representative input sources for the simulation domain. For inter-annual scaling, themodel performance could also be improved, even when using a remote sensing dataset from adifferent winter. In conclusion, using remote sensing data to process precipitation input, complexprocesses such as preferential snow deposition and snow relocation due to wind or avalanches,can be substituted and modelling performance of spatial snow distribution is improved.

  18. MODELING THE INDONESIAN CONSUMER PRICE INDEX USING A MULTI-INPUT INTERVENTION MODEL

    KAUST Repository

    Novianti, Putri Wikie

    2017-01-24

    There are several events which are expected to affect CPI fluctuation, i.e. the 1997/1998 financial crisis, fuel price rises, base year changes, the independence of Timor-Timur (October 1999), and the tsunami disaster in Aceh (December 2004). During the research period, there were eight fuel price rises and four base year changes. The objective of this research is to obtain a multi-input intervention model which can describe the magnitude and duration of the effect of each event on CPI. Most intervention research that has been done contains only a single-input intervention, either a step or a pulse function. A multi-input intervention was used for the Indonesian CPI case because there are several events which are expected to affect CPI. Based on the results, those events did affect CPI. Additionally, other events, such as Eid in January 1999 and events in April 2002, July 2003, December 2005, and September 2008, also affected CPI. In general, those events had a positive effect on CPI, except the events of April 2002 and July 2003, which had negative effects.
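The two intervention input types mentioned above are easy to make concrete. In this sketch, one step regressor (a permanent level shift) and one pulse regressor (a one-period shock) are combined into a toy, noise-free CPI series; the dates and magnitudes are invented for illustration, not estimated from the paper's data.

```python
# Step and pulse intervention inputs, the building blocks of a
# multi-input intervention model: one regressor per event.

def step(T, n):
    """Step function: 1 from time T onward (permanent shift)."""
    return [1 if t >= T else 0 for t in range(n)]

def pulse(T, n):
    """Pulse function: 1 only at time T (temporary shock)."""
    return [1 if t == T else 0 for t in range(n)]

n = 12
# Toy series: baseline 100, a +2.0 level shift at t=5, a +1.5 shock at t=8.
cpi = [100.0 + 2.0 * s + 1.5 * p for s, p in zip(step(5, n), pulse(8, n))]
print(cpi)
```

With real data, the step/pulse regressors for each candidate event enter a transfer-function model, and the fitted coefficients give the magnitude (and, via lag terms, the duration) of each event's effect.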

  19. Input-Output Modeling for Urban Energy Consumption in Beijing: Dynamics and Comparison

    Science.gov (United States)

    Zhang, Lixiao; Hu, Qiuhong; Zhang, Fan

    2014-01-01

    Input-output analysis has been proven to be a powerful instrument for estimating embodied (direct plus indirect) energy usage through economic sectors. Using 9 economic input-output tables of years 1987, 1990, 1992, 1995, 1997, 2000, 2002, 2005, and 2007, this paper analyzes energy flows for the entire city of Beijing and its 30 economic sectors, respectively. Results show that the embodied energy consumption of Beijing increased from 38.85 million tonnes of coal equivalent (Mtce) to 206.2 Mtce over the past twenty years of rapid urbanization; the share of indirect energy consumption in total energy consumption increased from 48% to 76%, suggesting the transition of Beijing from a production-based and manufacturing-dominated economy to a consumption-based and service-dominated economy. Real estate development has shown to be a major driving factor of the growth in indirect energy consumption. The boom and bust of construction activities have been strongly correlated with the increase and decrease of system-side indirect energy consumption. Traditional heavy industries remain the most energy-intensive sectors in the economy. However, the transportation and service sectors have contributed most to the rapid increase in overall energy consumption. The analyses in this paper demonstrate that a system-wide approach such as that based on input-output model can be a useful tool for robust energy policy making. PMID:24595199
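The core calculation behind embodied-energy accounting of this kind is the Leontief inverse: total (direct plus indirect) intensities are the direct intensities multiplied by (I - A)^-1, where A is the technical-coefficient matrix. The 2-sector numbers below are made up for illustration; they are not Beijing's tables.

```python
# Embodied energy intensities via the Leontief inverse, 2-sector toy example.

def mat_inv_2x2(m):
    (a, b), (c, d) = m
    det = a * d - b * c
    return [[d / det, -b / det], [-c / det, a / det]]

A = [[0.1, 0.2],   # inter-industry requirements per unit of output
     [0.3, 0.1]]
e_direct = [5.0, 1.0]  # direct energy use per unit of output (illustrative)

I_minus_A = [[1 - A[0][0], -A[0][1]],
             [-A[1][0], 1 - A[1][1]]]
L = mat_inv_2x2(I_minus_A)  # Leontief inverse

# Embodied intensity: row vector e_direct times L.
e_total = [sum(e_direct[k] * L[k][j] for k in range(2)) for j in range(2)]
print([round(v, 3) for v in e_total])  # indirect demand raises both intensities
```

The gap between `e_total` and `e_direct` is exactly the indirect component whose growing share (48% to 76% in the study) signals the shift toward a consumption-based economy.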

  20. A comparison of numerical and machine-learning modeling of soil water content with limited input data

    Science.gov (United States)

    Karandish, Fatemeh; Šimůnek, Jiří

    2016-12-01

    Soil water content (SWC) is a key factor in optimizing the usage of water resources in agriculture since it provides information to make an accurate estimation of crop water demand. Methods for predicting SWC that have simple data requirements are needed to achieve an optimal irrigation schedule, especially for various water-saving irrigation strategies that are required to resolve both food and water security issues under conditions of water shortages. Thus, a two-year field investigation was carried out to provide a dataset to compare the effectiveness of HYDRUS-2D, a physically-based numerical model, with various machine-learning models, including Multiple Linear Regressions (MLR), Adaptive Neuro-Fuzzy Inference Systems (ANFIS), and Support Vector Machines (SVM), for simulating time series of SWC data under water stress conditions. SWC was monitored using TDRs during the maize growing seasons of 2010 and 2011. Eight combinations of six, simple, independent parameters, including pan evaporation and average air temperature as atmospheric parameters, cumulative growth degree days (cGDD) and crop coefficient (Kc) as crop factors, and water deficit (WD) and irrigation depth (In) as crop stress factors, were adopted for the estimation of SWCs in the machine-learning models. Having Root Mean Square Errors (RMSE) in the range of 0.54-2.07 mm, HYDRUS-2D ranked first for the SWC estimation, while the ANFIS and SVM models with input datasets of cGDD, Kc, WD and In ranked next with RMSEs ranging from 1.27 to 1.9 mm and mean bias errors of -0.07 to 0.27 mm, respectively. However, the MLR models did not perform well for SWC forecasting, mainly due to non-linear changes of SWCs under the irrigation process. The results demonstrated that despite requiring only simple input data, the ANFIS and SVM models could be favorably used for SWC predictions under water stress conditions, especially when there is a lack of data. However, process-based numerical models are undoubtedly a

  1. Modelling Effects on Grid Cells of Sensory Input During Self-motion

    Science.gov (United States)

    2016-04-20

    Modelling effects on grid cells of sensory input during self-motion. Florian Raudies, James R. Hinman and Michael E. Hasselmo, Center for Systems Neuroscience, Centre for Memory and Brain. Symposium review, The Journal of Physiology (2016), pp. 1-14. Accurate updating of grid cells on the basis of sensory features appears to be essential to memory-guided behaviour (Olton et al. 1979, 1986; Morris et al. 1982).

  2. Input-constrained model predictive control via the alternating direction method of multipliers

    DEFF Research Database (Denmark)

    Sokoler, Leo Emil; Frison, Gianluca; Andersen, Martin S.

    2014-01-01

    This paper presents an algorithm, based on the alternating direction method of multipliers, for the convex optimal control problem arising in input-constrained model predictive control. We develop an efficient implementation of the algorithm for the extended linear quadratic control problem (LQCP)... The computational cost is quadratic in the dimensions of the controlled system, and linear in the length of the prediction horizon. Simulations show that the approach proposed in this paper is more than an order of magnitude faster than several state-of-the-art quadratic programming algorithms, and that the difference in computation...

  3. Using Whole-House Field Tests to Empirically Derive Moisture Buffering Model Inputs

    Energy Technology Data Exchange (ETDEWEB)

    Woods, J.; Winkler, J.; Christensen, D.; Hancock, E.

    2014-08-01

    Building energy simulations can be used to predict a building's interior conditions, along with the energy use associated with keeping these conditions comfortable. These models simulate the loads on the building (e.g., internal gains, envelope heat transfer), determine the operation of the space conditioning equipment, and then calculate the building's temperature and humidity throughout the year. The indoor temperature and humidity are affected not only by the loads and the space conditioning equipment, but also by the capacitance of the building materials, which buffer changes in temperature and humidity. This research developed an empirical method to extract whole-house model inputs for use with a more accurate moisture capacitance model (the effective moisture penetration depth model). The experimental approach was to subject the materials in the house to a square-wave relative humidity profile, measure all of the moisture transfer terms (e.g., infiltration, air conditioner condensate) and calculate the only unmeasured term: the moisture absorption into the materials. After validating the method with laboratory measurements, we performed the tests in a field house. A least-squares fit of an analytical solution to the measured moisture absorption curves was used to determine the three independent model parameters representing the moisture buffering potential of this house and its furnishings. Follow on tests with realistic latent and sensible loads showed good agreement with the derived parameters, especially compared to the commonly-used effective capacitance approach. These results show that the EMPD model, once the inputs are known, is an accurate moisture buffering model.

  4. Visual Predictive Check in Models with Time-Varying Input Function.

    Science.gov (United States)

    Largajolli, Anna; Bertoldo, Alessandra; Campioni, Marco; Cobelli, Claudio

    2015-11-01

    The nonlinear mixed effects models are commonly used modeling techniques in pharmaceutical research, as they enable the characterization of individual profiles together with the population to which the individuals belong. To ensure their correct use, it is fundamental to provide powerful diagnostic tools that are able to evaluate the predictive performance of the models. The visual predictive check (VPC) is a commonly used tool that helps the user check by visual inspection whether the model is able to reproduce the variability and the main trend of the observed data. However, simulation from the model is not always trivial, for example, when using models with a time-varying input function (IF). In this class of models, there is a potential mismatch between each set of simulated parameters and the associated individual IF, which can cause an incorrect profile simulation. We introduce a refinement of the VPC by taking into consideration a correlation term (the Mahalanobis or normalized Euclidean distance) that helps associate the correct IF with the individual set of simulated parameters. We investigate and compare its performance with the standard VPC in models of the glucose and insulin system applied to real and simulated data and in a simulated pharmacokinetic/pharmacodynamic (PK/PD) example. The newly proposed VPC performs better than the standard VPC, especially for models with large variability in the IF, where the probability of simulating incorrect profiles is higher.
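The matching step at the heart of the refinement above can be sketched as a nearest-neighbour search under a normalized Euclidean distance: each simulated parameter set is paired with the individual input function (IF) whose summary features are closest, rather than with a randomly drawn IF. The feature vectors summarizing each IF (peak value, time-to-peak) and the spreads used for normalization are invented for illustration.

```python
# Pair a simulated parameter set with the most similar individual IF,
# using a normalized Euclidean distance over IF summary features.

def normalized_euclidean(u, v, sd):
    return sum(((a - b) / s) ** 2 for a, b, s in zip(u, v, sd)) ** 0.5

def match_if(sim_features, if_features, sd):
    """Index of the individual IF closest to the simulated feature vector."""
    return min(range(len(if_features)),
               key=lambda i: normalized_euclidean(sim_features, if_features[i], sd))

# Individual IF summaries: (peak, time_to_peak), hypothetical values.
ifs = [(100.0, 2.0), (150.0, 1.5), (80.0, 3.0)]
sd = (30.0, 0.6)  # per-feature spread used for normalization

print(match_if((145.0, 1.6), ifs, sd))  # -> 1, the most similar IF
```

Normalizing by the per-feature spread keeps the high-magnitude feature (the peak) from dominating the distance, which is the point of using a normalized (or Mahalanobis) metric instead of a raw Euclidean one.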

  5. Dual-input two-compartment pharmacokinetic model of dynamic contrast-enhanced magnetic resonance imaging in hepatocellular carcinoma.

    Science.gov (United States)

    Yang, Jian-Feng; Zhao, Zhen-Hua; Zhang, Yu; Zhao, Li; Yang, Li-Ming; Zhang, Min-Ming; Wang, Bo-Yin; Wang, Ting; Lu, Bao-Chun

    2016-04-07

    To investigate the feasibility of a dual-input two-compartment tracer kinetic model for evaluating tumorous microvascular properties in advanced hepatocellular carcinoma (HCC). From January 2014 to April 2015, we prospectively measured and analyzed pharmacokinetic parameters [transfer constant (Ktrans), plasma flow (Fp), permeability surface area product (PS), efflux rate constant (kep), extravascular extracellular space volume ratio (ve), blood plasma volume ratio (vp), and hepatic perfusion index (HPI)] using dual-input two-compartment tracer kinetic models [a dual-input extended Tofts model and a dual-input 2-compartment exchange model (2CXM)] in 28 consecutive HCC patients. A well-known consensus that HCC is a hypervascular tumor supplied by the hepatic artery and the portal vein was used as a reference standard. A paired Student's t-test and a nonparametric paired Wilcoxon rank sum test were used to compare the equivalent pharmacokinetic parameters derived from the two models, and Pearson correlation analysis was also applied to observe the correlations among all equivalent parameters. The tumor size and pharmacokinetic parameters were tested by Pearson correlation analysis, while correlations among stage, tumor size and all pharmacokinetic parameters were assessed by Spearman correlation analysis. The Fp value was greater than the PS value (FP = 1.07 mL/mL per minute, PS = 0.19 mL/mL per minute) in the dual-input 2CXM; HPI was 0.66 and 0.63 in the dual-input extended Tofts model and the dual-input 2CXM, respectively. There were no significant differences in the kep, vp, or HPI between the dual-input extended Tofts model and the dual-input 2CXM (P = 0.524, 0.569, and 0.622, respectively). 
All equivalent pharmacokinetic parameters, except for ve, were correlated in the two dual-input two-compartment pharmacokinetic models; both Fp and PS in the dual-input 2CXM were correlated with Ktrans derived from the dual-input extended Tofts model (P = 0.002, r = 0.566; P

  6. A Water-Withdrawal Input-Output Model of the Indian Economy.

    Science.gov (United States)

    Bogra, Shelly; Bakshi, Bhavik R; Mathur, Ritu

    2016-02-02

    Managing freshwater allocation for a highly populated and growing economy like India can benefit from knowledge about the effect of economic activities. This study transforms the 2003-2004 economic input-output (IO) table of India into a water withdrawal input-output model to quantify direct and indirect flows. This unique model is based on a comprehensive database compiled from diverse public sources, and estimates the direct and indirect water withdrawal of all economic sectors. It distinguishes between green (rainfall), blue (surface and ground), and scarce groundwater. Results indicate that the total direct water withdrawal is nearly 3052 billion cubic meters (BCM) and 96% of this is used in agriculture sectors, with the contribution of direct green water being about 1145 BCM, excluding forestry. Apart from the 727 BCM direct blue water withdrawal for agriculture, other significant users include "Electricity" with 64 BCM, "Water supply" with 44 BCM and other industrial sectors with nearly 14 BCM. "Construction", "Miscellaneous food products", "Hotels and restaurants", and "Paper, paper products, and newsprint" are other significant indirect withdrawers. The net virtual water import is found to be insignificant compared to direct water used in agriculture nationally, while the scarce groundwater associated with crops is largely contributed by northern states.

  7. INPUT DATA OF BURNING WOOD FOR CFD MODELLING USING SMALL-SCALE EXPERIMENTS

    Directory of Open Access Journals (Sweden)

    Petr Hejtmánek

    2017-12-01

    Full Text Available The paper presents an option for acquiring simplified input data for the modelling of burning wood in CFD programmes. The option lies in combining data from small-scale and molecular-scale experiments in order to describe the material as a one-reaction material property. Such a virtual material would spread fire, develop the fire according to the surrounding environment, and could be extinguished without using a complex molecular reaction description. A series of experiments, including elemental analysis, thermogravimetric analysis, differential thermal analysis and combustion analysis, was performed. An FDS model of burning pine wood in a cone calorimeter was then built, in which those values were used. The model was validated against the HRR (heat release rate) from the real cone calorimeter experiment. The results show that for the purpose of CFD modelling, the effective heat of combustion, which is one of the basic material properties for fire modelling affecting the total intensity of burning, should be used. Using the net heat of combustion in the model leads to higher values of HRR in comparison to the real experiment data. Considering all the results shown in this paper, it is possible to simulate the burning of wood using extrapolated data obtained in small-scale experiments.

  8. Estimation and impact assessment of input and parameter uncertainty in predicting groundwater flow with a fully distributed model

    Science.gov (United States)

    Touhidul Mustafa, Syed Md.; Nossent, Jiri; Ghysels, Gert; Huysmans, Marijke

    2017-04-01

    Transient numerical groundwater flow models have been used to understand and forecast groundwater flow systems under anthropogenic and climatic effects, but the reliability of the predictions is strongly influenced by different sources of uncertainty. Hence, researchers in hydrological sciences are developing and applying methods for uncertainty quantification. Nevertheless, spatially distributed flow models pose significant challenges for parameter and spatially distributed input estimation and uncertainty quantification. In this study, we present a general and flexible approach for input and parameter estimation and uncertainty analysis of groundwater models. The proposed approach combines a fully distributed groundwater flow model (MODFLOW) with the DiffeRential Evolution Adaptive Metropolis (DREAM) algorithm. To avoid over-parameterization, the uncertainty of the spatially distributed model input has been represented by multipliers. The posterior distributions of these multipliers and the regular model parameters were estimated using DREAM. The proposed methodology has been applied in an overexploited aquifer in Bangladesh where groundwater pumping and recharge data are highly uncertain. The results confirm that input uncertainty does have a considerable effect on the model predictions and parameter distributions. Additionally, our approach also provides a new way to optimize the spatially distributed recharge and pumping data along with the parameter values under uncertain input conditions. It can be concluded from our approach that considering model input uncertainty along with parameter uncertainty is important for obtaining realistic model predictions and a correct estimation of the uncertainty bounds.

  9. Targeting the right input data to improve crop modeling at global level

    Science.gov (United States)

    Adam, M.; Robertson, R.; Gbegbelegbe, S.; Jones, J. W.; Boote, K. J.; Asseng, S.

    2012-12-01

    Crop models are designed for location-specific simulations, so their use at a global level raises important questions. Crop models are originally premised on small unit areas where environmental conditions and management practices are considered homogeneous. Specific information describing soils, climate, management, and crop characteristics is used in the calibration process. However, when scaling up for global application, we rely on information derived from geographical information systems and weather generators. To run crop models at a broad scale, we use a modeling platform that assumes a uniformly generated grid cell as a unit area. Specific weather, soil and management practices for each crop are represented for each of the grid cells. Studies on the impacts of the uncertainties of weather information and climate change on crop yield at a global level have been carried out (Osborne et al., 2007; Nelson et al., 2010; van Bussel et al., 2011). Detailed information on soils and management practices at the global level is very scarce but recognized to be of critical importance (Reidsma et al., 2009). Few attempts to assess the impact of their uncertainties on cropping system performance can be found. The objectives of this study are (i) to determine the sensitivity of a crop model to soil and management practices, the inputs most relevant to low-input rainfed cropping systems, and (ii) to define hotspots of sensitivity according to the input data. We ran DSSAT v4.5 globally (CERES-CROPSIM) to simulate wheat yields at 45 arc-minute resolution. Cultivar parameters were calibrated and validated for different mega-environments (results not shown). The model was run for nitrogen-limited production systems. This setting was chosen as the most representative to simulate actual yield (especially for low-input rainfed agricultural systems) and assumes crop growth to be free of any pest and disease damage. We conducted a sensitivity analysis on contrasting management

  10. New insights into mammalian signaling pathways using microfluidic pulsatile inputs and mathematical modeling

    Science.gov (United States)

    Sumit, M.; Takayama, S.; Linderman, J. J.

    2016-01-01

    Temporally modulated input mimics physiology. This chemical communication strategy filters the biochemical noise through entrainment and phase-locking. Under laboratory conditions, it also expands the observability space for downstream responses. A combined approach involving microfluidic pulsatile stimulation and mathematical modeling has led to deciphering of hidden/unknown temporal motifs in several mammalian signaling pathways and has provided mechanistic insights, including how these motifs combine to form distinct band-pass filters and govern fate regulation under dynamic microenvironment. This approach can be utilized to understand signaling circuit architectures and to gain mechanistic insights for several other signaling systems. Potential applications include synthetic biology and biotechnology, in developing pharmaceutical interventions, and in developing lab-on-chip models. PMID:27868126

  11. New insights into mammalian signaling pathways using microfluidic pulsatile inputs and mathematical modeling.

    Science.gov (United States)

    Sumit, M; Takayama, S; Linderman, J J

    2017-01-23

    Temporally modulated input mimics physiology. This chemical communication strategy filters the biochemical noise through entrainment and phase-locking. Under laboratory conditions, it also expands the observability space for downstream responses. A combined approach involving microfluidic pulsatile stimulation and mathematical modeling has led to deciphering of hidden/unknown temporal motifs in several mammalian signaling pathways and has provided mechanistic insights, including how these motifs combine to form distinct band-pass filters and govern fate regulation under dynamic microenvironment. This approach can be utilized to understand signaling circuit architectures and to gain mechanistic insights for several other signaling systems. Potential applications include synthetic biology and biotechnology, in developing pharmaceutical interventions, and in developing lab-on-chip models.

  12. Review of Literature for Inputs to the National Water Savings Model and Spreadsheet Tool-Commercial/Institutional

    Energy Technology Data Exchange (ETDEWEB)

    Whitehead, Camilla Dunham; Melody, Moya; Lutz, James

    2009-05-29

    Lawrence Berkeley National Laboratory (LBNL) is developing a computer model and spreadsheet tool for the United States Environmental Protection Agency (EPA) to help estimate the water savings attributable to their WaterSense program. WaterSense has developed a labeling program for three types of plumbing fixtures commonly used in commercial and institutional settings: flushometer valve toilets, urinals, and pre-rinse spray valves. This National Water Savings-Commercial/Institutional (NWS-CI) model is patterned after the National Water Savings-Residential model, which was completed in 2008. Calculating the quantity of water and money saved through the WaterSense labeling program requires three primary inputs: (1) the quantity of a given product in use; (2) the frequency with which units of the product are replaced or are installed in new construction; and (3) the number of times or the duration the product is used in various settings. To obtain the information required for developing the NWS-CI model, LBNL reviewed various resources pertaining to the three WaterSense-labeled commercial/institutional products. The data gathered ranged from the number of commercial buildings in the United States to numbers of employees in various sectors of the economy and plumbing codes for commercial buildings. This document summarizes information obtained about the three products' attributes, quantities, and use in commercial and institutional settings that is needed to estimate how much water EPA's WaterSense program saves.
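The three primary inputs named above combine into a simple savings calculation: installed stock, replacement or new-install rate, and usage frequency, multiplied by a per-use saving. The figures below are invented placeholders to show the arithmetic, not WaterSense estimates.

```python
# Back-of-envelope national water savings from labeled fixtures.
# All numbers are hypothetical illustrations of the model's three inputs.

units_in_use = 1_000_000        # (1) stock of a product, e.g. flushometer-valve toilets
replaced_per_year = 0.05        # (2) fraction replaced/installed as labeled models per year
uses_per_unit_per_day = 50      # (3) usage frequency per unit
gallons_saved_per_use = 0.3     # labeled vs. baseline fixture, per use

annual_savings_gal = (units_in_use * replaced_per_year
                      * uses_per_unit_per_day * gallons_saved_per_use * 365)
print(f"{annual_savings_gal:,.0f} gallons/yr")
```

In the real NWS-CI model each factor is itself built up from the literature review (building counts, employee numbers, plumbing codes), but the multiplicative structure is the same.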

  13. An Approach for Generating Precipitation Input for Worst-Case Flood Modelling

    Science.gov (United States)

    Felder, Guido; Weingartner, Rolf

    2015-04-01

    There is a lack of suitable methods for creating precipitation scenarios that can be used to realistically estimate peak discharges with very low probabilities. On the one hand, existing methods are methodically questionable when it comes to physical system boundaries. On the other hand, the spatio-temporal representativeness of precipitation patterns as system input is limited. In response, this study proposes a method of deriving representative spatio-temporal precipitation patterns and presents a step towards making methodically correct estimations of infrequent floods by using a worst-case approach. A Monte-Carlo rainfall-runoff model allows for the testing of a wide range of different spatio-temporal distributions of an extreme precipitation event and therefore for the generation of a hydrograph for each of these distributions. Out of these numerous hydrographs and their corresponding peak discharges, the worst-case catchment reactions to the system input can be derived. The spatio-temporal distributions leading to the highest peak discharges are identified and can eventually be used for further investigations.
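The Monte-Carlo worst-case idea above can be sketched in a few lines: fix the total volume of an extreme event, randomly redistribute it in time, route each realization through a rainfall-runoff model, and keep the maximum peak discharge. The linear-reservoir routing and all numbers below are stand-ins for illustration, not the authors' catchment model.

```python
# Worst-case peak discharge over randomly sampled temporal distributions
# of a fixed-volume precipitation event (toy linear-reservoir routing).
import random

def route(precip, k=0.3):
    """Linear reservoir: per step, storage S gains input p and releases Q = k*S."""
    s, peak = 0.0, 0.0
    for p in precip:
        s += p
        q = k * s
        s -= q
        peak = max(peak, q)
    return peak

def random_temporal_distribution(total, steps, rng):
    """Random non-negative weights rescaled to conserve the event volume."""
    w = [rng.random() for _ in range(steps)]
    z = sum(w)
    return [total * x / z for x in w]

rng = random.Random(42)
total, steps = 120.0, 24  # e.g., mm over 24 hours
worst = max(route(random_temporal_distribution(total, steps, rng))
            for _ in range(2000))
print(round(worst, 2))  # worst-case peak discharge over the sampled patterns
```

Because the event volume is held fixed, the search stays inside a physical system boundary; only the temporal arrangement varies, which is the methodical point of the worst-case approach.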

  14. Model-based human reliability analysis: prospects and requirements

    International Nuclear Information System (INIS)

    Mosleh, A.; Chang, Y.H.

    2004-01-01

    Major limitations of the conventional methods for human reliability analysis (HRA), particularly those developed for operator response analysis in probabilistic safety assessments (PSA) of nuclear power plants, are summarized as a motivation for the need for, and a basis for developing requirements for, the next generation of HRA methods. It is argued that a model-based approach that provides explicit cognitive causal links between operator behaviors and directly or indirectly measurable causal factors should be at the core of the advanced methods. An example of such a causal model is briefly reviewed, which, due to the model complexity and input requirements, can currently be implemented only in a dynamic PSA environment. The computer simulation code developed for this purpose is also described briefly, together with current limitations in the models, data, and the computer implementation

  15. Solar Load Inputs for USARIEM Thermal Strain Models and the Solar Radiation-Sensitive Components of the WBGT Index

    National Research Council Canada - National Science Library

    Matthew, William

    2001-01-01

    This report describes processes we have implemented to use global pyranometer-based estimates of mean radiant temperature as the common solar load input for the Scenario model, the USARIEM heat strain...

  16. Boolean modeling of neural systems with point-process inputs and outputs. Part I: theory and simulations.

    Science.gov (United States)

    Marmarelis, Vasilis Z; Zanos, Theodoros P; Berger, Theodore W

    2009-08-01

    This paper presents a new modeling approach for neural systems with point-process (spike) inputs and outputs that utilizes Boolean operators (i.e. modulo 2 multiplication and addition that correspond to the logical AND and OR operations respectively, as well as the AND_NOT logical operation representing inhibitory effects). The form of the employed mathematical models is akin to a "Boolean-Volterra" model that contains the product terms of all relevant input lags in a hierarchical order, where terms of order higher than first represent nonlinear interactions among the various lagged values of each input point-process or among lagged values of various inputs (if multiple inputs exist) as they reflect on the output. The coefficients of this Boolean-Volterra model are also binary variables that indicate the presence or absence of the respective term in each specific model/system. Simulations are used to explore the properties of such models and the feasibility of their accurate estimation from short data-records in the presence of noise (i.e. spurious spikes). The results demonstrate the feasibility of obtaining reliable estimates of such models, with excitatory and inhibitory terms, in the presence of considerable noise (spurious spikes) in the outputs and/or the inputs in a computationally efficient manner. A pilot application of this approach to an actual neural system is presented in the companion paper (Part II).
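A Boolean-Volterra output can be evaluated directly from the definition: product (AND) terms over lag tuples, combined by OR, with AND_NOT terms gating the result. A minimal sketch follows; the lag-tuple encoding and function name are assumptions, not the paper's estimation procedure:

```python
def boolean_volterra(x, excite_lags, inhibit_lags):
    """Evaluate a toy Boolean-Volterra model on a binary input sequence x.
    excite_lags: list of lag tuples; a tuple fires (AND) when all its
    lagged inputs are 1, and fired tuples are combined with OR.
    inhibit_lags: lag tuples whose firing suppresses the output (AND_NOT)."""
    y = []
    for n in range(len(x)):
        term = lambda lags: all(n - l >= 0 and x[n - l] == 1 for l in lags)
        excited = any(term(lags) for lags in excite_lags)
        inhibited = any(term(lags) for lags in inhibit_lags)
        y.append(1 if excited and not inhibited else 0)
    return y
```

For example, with a first-order excitatory term at lag 0 and a second-order inhibitory term over lags (0, 1), the model spikes for every input spike not immediately preceded by another spike.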

  17. Reconstruction of rocks petrophysical properties as input data for reservoir modeling

    Science.gov (United States)

    Cantucci, B.; Montegrossi, G.; Lucci, F.; Quattrocchi, F.

    2016-11-01

    The worldwide increase in energy demand has triggered studies focused on defining the underground energy potential even in areas previously discarded or neglected. Nowadays, geological gas storage (CO2 and/or CH4) and geothermal energy are considered strategic for low-carbon energy development. A widespread and safe application of these technologies needs an accurate characterization of the underground, in terms of geology, hydrogeology, geochemistry, and geomechanics. However, during the prefeasibility stage, the limited number of available direct measurements of reservoirs and the high costs of reopening closed deep wells must be taken into account. The aim of this work is to overcome these limits by proposing a new methodology to reconstruct vertical profiles, from the surface to the reservoir base, of: (i) thermal capacity, (ii) thermal conductivity, (iii) porosity, and (iv) permeability, through integration of well-log information, petrographic observations on inland outcropping samples, and flow and heat transport modeling. As a case study to test our procedure we selected a deep structure located in the middle Tyrrhenian Sea (Italy). The obtained results are consistent with measured data, confirming the validity of the proposed model. Notwithstanding intrinsic limitations due to manual calibration of the model with measured data, this methodology represents a useful tool for reservoir and geochemical modelers that need to define petrophysical input data for underground modeling before well reopening.

  18. Multivariate sensitivity analysis to measure global contribution of input factors in dynamic models

    International Nuclear Information System (INIS)

    Lamboni, Matieyendou; Monod, Herve; Makowski, David

    2011-01-01

    Many dynamic models are used for risk assessment and decision support in ecology and crop science. Such models generate time-dependent model predictions, with time either discretised or continuous. Their global sensitivity analysis is usually applied separately on each time output, but Campbell et al. (2006) advocated global sensitivity analyses on the expansion of the dynamics in a well-chosen functional basis. This paper focuses on the particular case when principal components analysis is combined with analysis of variance. In addition to the indices associated with the principal components, generalised sensitivity indices are proposed to synthesize the influence of each parameter on the whole time series output. Index definitions are given when the uncertainty on the input factors is either discrete or continuous and when the dynamic model is either discrete or functional. A general estimation algorithm is proposed, based on classical methods of global sensitivity analysis. The method is applied to a dynamic wheat crop model with 13 uncertain parameters. Three methods of global sensitivity analysis are compared: the Sobol'-Saltelli method, the extended FAST method, and the fractional factorial design of resolution 6.
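A simplified stand-in for the generalised index: compute a first-order sensitivity index at each time step and average the indices weighted by each step's share of output variance (the paper's construction works in a principal-component basis instead; the function names and the discrete-factor design are assumptions):

```python
def first_order_index(y, levels):
    """First-order sensitivity of a scalar output to a discrete factor:
    between-level variance of conditional means over total variance."""
    n = len(y)
    grand = sum(y) / n
    v_tot = sum((v - grand) ** 2 for v in y) / n
    if v_tot == 0.0:
        return 0.0
    groups = {}
    for yi, li in zip(y, levels):
        groups.setdefault(li, []).append(yi)
    v_between = sum(len(g) * (sum(g) / len(g) - grand) ** 2
                    for g in groups.values()) / n
    return v_between / v_tot

def generalised_sensitivity_index(outputs, levels):
    """outputs[r][t]: model output at time t for run r; levels[r]: factor
    level in run r.  Aggregate per-time-step indices over the whole series,
    weighting each step by its share of the total output variance."""
    n_steps = len(outputs[0])
    num, den = 0.0, 0.0
    for t in range(n_steps):
        y_t = [run[t] for run in outputs]
        grand = sum(y_t) / len(y_t)
        var_t = sum((v - grand) ** 2 for v in y_t) / len(y_t)
        num += var_t * first_order_index(y_t, levels)
        den += var_t
    return num / den if den else 0.0
```

A factor that fully determines the trajectory gets an index of 1; a factor with no influence gets 0.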

  19. Comparison of several climate indices as inputs in modelling of the Baltic Sea runoff

    Energy Technology Data Exchange (ETDEWEB)

    Hanninen, J.; Vuorinen, I. [Turku Univ. (Finland). Archipelago Research Inst.], e-mail: jari.hanninen@utu.fi

    2012-11-01

    Using transfer function (TF) models, we have earlier presented a chain of events between changes in the North Atlantic Oscillation (NAO) and their oceanographical and ecological consequences in the Baltic Sea. Here we tested whether other climate indices as inputs would improve TF models and our understanding of the Baltic Sea ecosystem. Besides the NAO, the predictors were the Arctic Oscillation (AO), sea-level air pressure at Iceland (SLP), and wind speeds at Hoburg (Gotland). All indices produced good TF models when the total riverine runoff to the Baltic Sea was used as a modelling basis. The AO was not applicable in all study areas, showing a delay of about half a year between climate and runoff events, connected with the freezing and melting times of ice and snow in the northern catchment area of the Baltic Sea. The NAO appeared to be the most useful modelling tool, as its area of applicability was the widest of the tested indices and the time lag between climate and runoff events was the shortest. SLP and Hoburg wind speeds showed largely the same results as the NAO, but with smaller areal applicability. Thus the AO and NAO contributed most to the general understanding of climate control of runoff events in the Baltic Sea ecosystem. (orig.)
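The transfer-function idea, runoff responding to a climate index after some delay, can be caricatured as a search over lags for the best simple linear response. This is a heavily simplified stand-in for the study's TF models; the single-lag linear form and all names are assumptions:

```python
def fit_lagged(index, runoff, max_lag=12):
    """Fit runoff[t] = a + b * index[t - lag] by least squares for each
    candidate lag; return (best_lag, a, b, r2) for the best-fitting lag."""
    best = None
    for lag in range(max_lag + 1):
        x = index[:len(index) - lag] if lag else list(index)
        y = runoff[lag:]
        n = len(y)
        mx, my = sum(x) / n, sum(y) / n
        sxx = sum((xi - mx) ** 2 for xi in x)
        sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
        b = sxy / sxx
        a = my - b * mx
        ss_res = sum((yi - a - b * xi) ** 2 for xi, yi in zip(x, y))
        ss_tot = sum((yi - my) ** 2 for yi in y)
        r2 = 1.0 - ss_res / ss_tot
        if best is None or r2 > best[3]:
            best = (lag, a, b, r2)
    return best
```

The selected lag then plays the role of the climate-to-runoff delay discussed above.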

  20. Neural Systems with Numerically Matched Input-Output Statistic: Isotonic Bivariate Statistical Modeling

    Directory of Open Access Journals (Sweden)

    Simone Fiori

    2007-07-01

    Full Text Available Bivariate statistical modeling from incomplete data is a useful statistical tool that allows one to discover the model underlying two data sets when the data in the two sets correspond neither in size nor in ordering. Such a situation may occur when the sizes of the two data sets do not match (i.e., there are “holes” in the data) or when the data sets have been acquired independently. Also, statistical modeling is useful when the amount of available data is enough to reveal the relevant statistical features of the phenomenon underlying the data. We propose to tackle the problem of statistical modeling via a neural (nonlinear) system that is able to match its input-output statistic to the statistic of the available data sets. A key point of the new implementation proposed here is that it is based on look-up-table (LUT) neural systems, which guarantee a computationally advantageous way of implementing neural systems. A number of numerical experiments, performed on both synthetic and real-world data sets, illustrate the features of the proposed modeling procedure.
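The input-output statistic matching can be illustrated with a plain quantile-matching lookup table: sorted quantiles of a source sample are paired with sorted quantiles of a target sample, yielding a monotone (isotonic) map. This is a crude stand-in for the paper's LUT neural systems; the bin count and function names are assumptions:

```python
def build_lut(source, target, n_bins=16):
    """Pair sorted quantiles of `source` with sorted quantiles of `target`,
    producing a lookup table for a monotone source-to-target map."""
    src, tgt = sorted(source), sorted(target)
    lut = []
    for k in range(n_bins + 1):
        i = min(int(k / n_bins * (len(src) - 1)), len(src) - 1)
        j = min(int(k / n_bins * (len(tgt) - 1)), len(tgt) - 1)
        lut.append((src[i], tgt[j]))
    return lut

def apply_lut(lut, x):
    """Piecewise-linear interpolation through the table; values outside the
    table range are clamped to the end points."""
    if x <= lut[0][0]:
        return lut[0][1]
    for (x0, y0), (x1, y1) in zip(lut, lut[1:]):
        if x <= x1:
            return y0 if x1 == x0 else y0 + (y1 - y0) * (x - x0) / (x1 - x0)
    return lut[-1][1]
```

Passing source-distributed values through the table produces values whose empirical distribution approximates that of the target sample, even though the two samples were never paired row by row.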

  1. Assessment of NASA's Physiographic and Meteorological Datasets as Input to HSPF and SWAT Hydrological Models

    Science.gov (United States)

    Alacron, Vladimir J.; Nigro, Joseph D.; McAnally, William H.; OHara, Charles G.; Engman, Edwin Ted; Toll, David

    2011-01-01

    This paper documents the use of simulated Moderate Resolution Imaging Spectroradiometer land use/land cover (MODIS-LULC), NASA-LIS generated precipitation and evapo-transpiration (ET), and Shuttle Radar Topography Mission (SRTM) datasets (in conjunction with standard land use, topographical and meteorological datasets) as input to hydrological models routinely used by the watershed hydrology modeling community. The study focuses on coastal watersheds in the Mississippi Gulf Coast, although one of the test cases concerns an inland watershed located in the northeastern part of the State of Mississippi, USA. The decision support tools (DSTs) into which the NASA datasets were assimilated were the Soil and Water Assessment Tool (SWAT) and the Hydrological Simulation Program FORTRAN (HSPF). These DSTs are endorsed by several US government agencies (EPA, FEMA, USGS) for water resources management strategies. These models use physiographic and meteorological data extensively. Precipitation gages and USGS gage stations in the region were used to calibrate several HSPF and SWAT model applications. Land use and topographical datasets were swapped to assess model output sensitivities. NASA-LIS meteorological data were introduced into the calibrated model applications to simulate watershed hydrology for a time period in which no weather data were available (1997-2006). The performance of the NASA datasets in the context of hydrological modeling was assessed through comparison of measured and model-simulated hydrographs. Overall, the NASA datasets were as useful as standard land use, topographical, and meteorological datasets. Moreover, the NASA datasets made possible analyses that the standard datasets could not, e.g., the introduction of land use dynamics into hydrological simulations.

  2. Realistic modeling of seismic input for megacities and large urban areas

    Science.gov (United States)

    Panza, G. F.; Unesco/Iugs/Igcp Project 414 Team

    2003-04-01

    The project addressed the problem of pre-disaster orientation: hazard prediction, risk assessment, and hazard mapping, in connection with seismic activity and man-induced vibrations. The definition of realistic seismic input has been obtained from the computation of a wide set of time histories and spectral information, corresponding to possible seismotectonic scenarios for different source and structural models. The innovative modeling technique, which constitutes the common tool of the entire project, takes into account source, propagation and local site effects. This is done using first principles of physics about wave generation and propagation in complex media, and does not require resorting to convolutive approaches, which have proven quite unreliable, mainly when dealing with complex geological structures, the most interesting ones from the practical point of view. In fact, several techniques that have been proposed to empirically estimate site effects, using observations convolved with theoretically computed signals corresponding to simplified models, supply reliable information about the site response to non-interfering seismic phases; they are not adequate in most real cases, when the seismic signal is formed by several interfering waves. The availability of realistic numerical simulations enables us to reliably estimate the amplification effects even in complex geological structures, exploiting the available geotechnical, lithological, and geophysical parameters, topography of the medium, tectonic, historical, and palaeoseismological data, and seismotectonic models. The realistic modeling of the ground motion is a very important base of knowledge for the preparation of ground-shaking scenarios, which represent a valid and economic tool for seismic microzonation. This knowledge can be very fruitfully used by civil engineers in the design of new seismo-resistant constructions and in the reinforcement of the existing built environment, and, therefore

  3. Modeling and Controller Design of PV Micro Inverter without Using Electrolytic Capacitors and Input Current Sensors

    Directory of Open Access Journals (Sweden)

    Faa Jeng Lin

    2016-11-01

    Full Text Available This paper outlines the modeling and controller design of a novel two-stage photovoltaic (PV) micro inverter (MI) that eliminates the need for an electrolytic capacitor (E-cap) and input current sensor. The proposed MI uses an active-clamped current-fed push-pull DC-DC converter, cascaded with a full-bridge inverter. Three strategies are proposed to cope with the inherent limitations of a two-stage PV MI: (i) high-speed DC bus voltage regulation using an integrator to deal with the 2nd-harmonic voltage ripples found in single-phase systems; (ii) inclusion of a small film capacitor in the DC bus to achieve ripple-free PV voltage; (iii) improved incremental conductance (INC) maximum power point tracking (MPPT) without the need for current sensing by the PV module. Simulation and experimental results demonstrate the efficacy of the proposed system.

  4. Meta-requirements that Model Change

    OpenAIRE

    Gouri Prakash

    2010-01-01

    One of the common problems encountered in software engineering is addressing and responding to the changing nature of requirements. While several approaches have been devised to address this issue, ranging from instilling resistance to changing requirements in order to mitigate impact to project schedules, to developing an agile mindset towards requirements, the approach discussed in this paper is one of conceptualizing the delta in requirement and modeling it, in order t...

  5. Effects of degraded sensory input on memory for speech: behavioral data and a test of biologically constrained computational models.

    Science.gov (United States)

    Piquado, Tepring; Cousins, Katheryn A Q; Wingfield, Arthur; Miller, Paul

    2010-12-13

    Poor hearing acuity reduces memory for spoken words, even when the words are presented with enough clarity for correct recognition. An "effortful hypothesis" suggests that the perceptual effort needed for recognition draws on resources that would otherwise be available for encoding the word in memory. To assess this hypothesis, we conducted a behavioral task requiring immediate free recall of word lists, some of which contained an acoustically masked word that was just above perceptual threshold. Results show that masking a word reduces the recall of that word and of the words prior to it, as well as weakening the linking associations between the masked and prior words. In contrast, recall probabilities of words following the masked word are not affected. To account for this effect, we conducted computational simulations testing two classes of models: Associative Linking Models and Short-Term Memory Buffer Models. Only a model that integrated both contextual linking and buffer components matched all of the effects of masking observed in our behavioral data. In this Linking-Buffer Model, the masked word disrupts a short-term memory buffer, causing the associative links of words in the buffer to be weakened, which affects memory for the masked word and the word prior to it, while the links of words following the masked word are spared. We suggest that these data support the so-called "effortful hypothesis", whereby distorted input has a detrimental impact on prior information stored in short-term memory. Copyright © 2010 Elsevier B.V. All rights reserved.

  6. Experimental development based on mapping rule between requirements analysis model and web framework specific design model.

    Science.gov (United States)

    Okuda, Hirotaka; Ogata, Shinpei; Matsuura, Saeko

    2013-12-01

    Model Driven Development is a promising approach to developing high-quality software systems. We have proposed a method of model-driven requirements analysis using the Unified Modeling Language (UML). The main feature of our method is to automatically generate a Web user interface prototype from the UML requirements analysis model, so that we can confirm the validity of input/output data for each page and of page transitions on the system by directly operating the prototype. We propose a mapping rule in which design information independent of each web application framework implementation is defined based on the requirements analysis model, so as to improve traceability from the validated requirements analysis model to the final product. This paper discusses the result of applying our method to the development of a Group Work Support System that is currently running in our department.

  7. Efficient design and simulation of an expandable hybrid (wind-photovoltaic) power system with MPPT and inverter input voltage regulation features in compliance with electric grid requirements

    Energy Technology Data Exchange (ETDEWEB)

    Skretas, Sotirios B.; Papadopoulos, Demetrios P. [Electrical Machines Laboratory, Department of Electrical and Computer Engineering, Democritos University of Thrace (DUTH), 12 V. Sofias, 67100 Xanthi (Greece)

    2009-09-15

    In this paper an efficient design, along with modeling and simulation, of a transformer-less small-scale centralized DC-bus grid-connected hybrid (wind-PV) power system for supplying electric power to a single phase of a three-phase low voltage (LV) strong distribution grid is proposed and presented. The main components of the hybrid system are: a PV generator (PVG); and an array of horizontal-axis, fixed-pitch, small-size, variable-speed wind turbines (WTs) with a direct-driven permanent magnet synchronous generator (PMSG) having an embedded uncontrolled bridge rectifier. An overview of the basic theory of such systems, along with their modeling and simulation via the Simulink/MATLAB software package, is presented. An intelligent control method is applied to the proposed configuration to simultaneously achieve three desired goals: to extract maximum power from each hybrid power system component (PVG and WTs); to guarantee DC voltage regulation/stabilization at the input of the inverter; and to transfer the total produced electric power to the electric grid, while fulfilling all necessary interconnection requirements. Finally, a practical case study is conducted for the purpose of fully evaluating a possible installation at a city site of Xanthi, Greece, and the practical results of the simulations are presented. (author)

  8. An extended TRANSCAR model including ionospheric convection: simulation of EISCAT observations using inputs from AMIE

    Directory of Open Access Journals (Sweden)

    P.-L. Blelly

    2005-02-01

    Full Text Available The TRANSCAR ionospheric model was extended to account for the convection of the magnetic field lines in the auroral and polar ionosphere. A mixed Eulerian-Lagrangian 13-moment approach was used to describe the dynamics of an ionospheric plasma tube. In the present study, we focus on large-scale transports in the polar ionosphere. The model was used to simulate a 35-h period of EISCAT-UHF observations on 16-17 February 1993. The first day was magnetically quiet, and characterized by elevated electron concentrations: the diurnal F2 layer reached as much as 10^12 m^-3, which is unusual for a winter period of moderate solar activity (F10.7=130). An intense geomagnetic event occurred on the second day, seen in the data as a strong intensification of the ionospheric convection velocities in the early afternoon (with the northward electric field reaching 150 mV m^-1 and corresponding frictional heating of the ions up to 2500 K). The simulation used time-dependent AMIE outputs to infer flux-tube transports in the polar region, and to provide magnetospheric particle and energy inputs to the ionosphere. The overall very good agreement obtained between the model and the observations demonstrates the high ability of the extended TRANSCAR model for quantitative modelling of the high-latitude ionosphere; however, some differences are found which are attributed to the precipitation of electrons with very low energy. All these results are finally discussed in the frame of modelling the auroral ionosphere with space weather applications in mind.

  10. Predicting musically induced emotions from physiological inputs: Linear and neural network models

    Directory of Open Access Journals (Sweden)

    Frank A. Russo

    2013-08-01

    Full Text Available Listening to music often leads to physiological responses. Do these physiological responses contain sufficient information to infer the emotion induced in the listener? The current study explores this question by attempting to predict judgments of 'felt' emotion from physiological responses alone, using linear and neural network models. We measured five channels of peripheral physiology from 20 participants: heart rate, respiration, galvanic skin response, and activity in the corrugator supercilii and zygomaticus major facial muscles. Using valence and arousal (VA) dimensions, participants rated their felt emotion after listening to each of 12 classical music excerpts. After extracting features from the five channels, we examined their correlation with VA ratings, and then performed multiple linear regression to see if a linear relationship between the physiological responses could account for the ratings. Although linear models predicted a significant amount of variance in arousal ratings, they were unable to do so with valence ratings. We then used a neural network to provide a nonlinear account of the ratings. The network was trained on the mean ratings of eight of the 12 excerpts and tested on the remainder. Performance of the neural network confirms that physiological responses alone can be used to predict musically induced emotion. The nonlinear model derived from the neural network was more accurate than the linear models derived from multiple linear regression, particularly along the valence dimension. A secondary analysis allowed us to quantify the relative contributions of the inputs to the nonlinear model. The study represents a novel approach to understanding the complex relationship between physiological responses and musically induced emotion.

  11. Multi-Layer Perceptron (MLP)-Based Nonlinear Auto-Regressive with Exogenous Inputs (NARX) Stock Forecasting Model

    OpenAIRE

    I. M. Yassin; M. F. Abdul Khalid; S. H. Herman; I. Pasya; N. Ab Wahab; Z. Awang

    2017-01-01

    The prediction of stocks in the stock market is important in investment, as it helps the investor to time buy and sell transactions to maximize profits. In this paper, a Multi-Layer Perceptron (MLP)-based Nonlinear Auto-Regressive with Exogenous Inputs (NARX) model was used to predict the Apple Inc. weekly stock prices over a time horizon of 1995 to 2013. The NARX model is a system identification model that constructs a mathematical model from the dynamic input/outpu...
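The NARX structure regresses the current output on lagged outputs and lagged exogenous inputs. Building that regressor matrix, which would then feed the MLP, can be sketched as follows (the function name and default lag orders are assumptions):

```python
def narx_regressors(u, y, nu=2, ny=2):
    """Build the NARX regressor matrix: each row holds the lagged outputs
    y[t-1..t-ny] and lagged exogenous inputs u[t-1..t-nu] used to predict
    y[t]; returns (rows, targets)."""
    start = max(nu, ny)
    rows, targets = [], []
    for t in range(start, len(y)):
        rows.append([y[t - k] for k in range(1, ny + 1)] +
                    [u[t - k] for k in range(1, nu + 1)])
        targets.append(y[t])
    return rows, targets
```

Any regressor, from a linear least-squares fit to the paper's MLP, can then be trained on `rows` against `targets`.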

  12. Requirements for clinical information modelling tools.

    Science.gov (United States)

    Moreno-Conde, Alberto; Jódar-Sánchez, Francisco; Kalra, Dipak

    2015-07-01

    This study proposes consensus requirements for clinical information modelling tools that can support modelling tasks in medium/large scale institutions. Rather than identify which functionalities are currently available in existing tools, the study has focused on functionalities that should be covered in order to provide guidance about how to evolve the existing tools. After identifying a set of 56 requirements for clinical information modelling tools based on a literature review and interviews with experts, a classical Delphi study methodology was applied to conduct a two round survey in order to classify them as essential or recommended. Essential requirements are those that must be met by any tool that claims to be suitable for clinical information modelling, and if we one day have a certified tools list, any tool that does not meet essential criteria would be excluded. Recommended requirements are those more advanced requirements that may be met by tools offering a superior product or only needed in certain modelling situations. According to the answers provided by 57 experts from 14 different countries, we found a high level of agreement to enable the study to identify 20 essential and 21 recommended requirements for these tools. It is expected that this list of identified requirements will guide developers on the inclusion of new basic and advanced functionalities that have strong support by end users. This list could also guide regulators in order to identify requirements that could be demanded of tools adopted within their institutions. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.

  13. Updates to Model Algorithms & Inputs for the Biogenic Emissions Inventory System (BEIS) Model

    Science.gov (United States)

    We have developed new canopy emission algorithms and land use data for BEIS. Simulations with BEIS v3.4 and these updates in CMAQ v5.0.2 are compared to the Model of Emissions of Gases and Aerosols from Nature (MEGAN) and evaluated against observatio...

  14. Categorical Inputs, Sensitivity Analysis, Optimization and Importance Tempering with tgp Version 2, an R Package for Treed Gaussian Process Models

    Directory of Open Access Journals (Sweden)

    Robert B. Gramacy

    2010-02-01

    Full Text Available This document describes the new features in version 2.x of the tgp package for R, implementing treed Gaussian process (GP) models. The topics covered include methods for dealing with categorical inputs and excluding inputs from the tree or GP part of the model; fully Bayesian sensitivity analysis for inputs/covariates; sequential optimization of black-box functions; and a new Monte Carlo method for inference in multi-modal posterior distributions that combines simulated tempering and importance sampling. These additions extend the functionality of tgp across all models in the hierarchy: from Bayesian linear models, to classification and regression trees (CART), to treed Gaussian processes with jumps to the limiting linear model. It is assumed that the reader is familiar with the baseline functionality of the package, outlined in the first vignette (Gramacy 2007).

  15. Modelling Implicit Communication in Multi-Agent Systems with Hybrid Input/Output Automata

    Directory of Open Access Journals (Sweden)

    Marta Capiluppi

    2012-10-01

    Full Text Available We propose an extension of Hybrid I/O Automata (HIOAs) to model agent systems and their implicit communication through perturbation of the environment, such as the localization of objects or the diffusion and detection of radio signals. To this end we decided to specialize some variables of the HIOAs whose values are functions of both time and space. We call them world variables. Basically, they are treated similarly to the other variables of HIOAs, but they have the function of representing the interaction of each automaton with the surrounding environment; hence they can be output, input or internal variables. Since these special variables have the role of simulating implicit communication, their dynamics are specified both in time and space, because they model the perturbations induced by the agent on the environment, and the perturbations of the environment as perceived by the agent. Parallel composition of world variables is slightly different from parallel composition of the other variables, since their signals are summed. The theory is illustrated through a simple example of an agent system.

  16. Effects of model input data uncertainty in simulating water resources of a transnational catchment

    Science.gov (United States)

    Camargos, Carla; Breuer, Lutz

    2016-04-01

    A landscape consists of different ecosystem components, and how these components affect water quantity and quality needs to be understood. We start from the assumption that water resources are generated in landscapes and that rural land use (particularly agriculture) has a strong impact on water resources that are used downstream for domestic and industrial supply. Partly located in the north of Luxembourg and partly in the southeast of Belgium, the Haute-Sûre catchment covers about 943 km2. As part of the catchment, the Haute-Sûre Lake is an important source of drinking water for the population of Luxembourg, satisfying 30% of the city's demand. The objective of this study is to investigate the impact of spatial input data uncertainty on water resources simulations for the Haute-Sûre catchment. We apply the SWAT model for the period 2006 to 2012 and use a variety of digital information on soils, elevation and land use with various spatial resolutions. Several objective functions are evaluated, and we consider the resulting parameter uncertainty to quantify an important part of the global uncertainty in model simulations.

  17. Modeling uncertainties in workforce disruptions from influenza pandemics using dynamic input-output analysis.

    Science.gov (United States)

    El Haimar, Amine; Santos, Joost R

    2014-03-01

    Influenza pandemic is a serious disaster that can pose significant disruptions to the workforce and associated economic sectors. This article examines the impact of influenza pandemic on workforce availability within an interdependent set of economic sectors. We introduce a simulation model based on the dynamic input-output model to capture the propagation of pandemic consequences through the National Capital Region (NCR). The analysis conducted in this article is based on the 2009 H1N1 pandemic data. Two metrics were used to assess the impacts of the influenza pandemic on the economic sectors: (i) inoperability, which measures the percentage gap between the as-planned output and the actual output of a sector, and (ii) economic loss, which quantifies the associated monetary value of the degraded output. The inoperability and economic loss metrics generate two different rankings of the critical economic sectors. Results show that most of the critical sectors in terms of inoperability are sectors that are related to hospitals and health-care providers. On the other hand, most of the sectors that are critically ranked in terms of economic loss are sectors with significant total production outputs in the NCR such as federal government agencies. Therefore, policy recommendations relating to potential mitigation and recovery strategies should take into account the balance between the inoperability and economic loss metrics. © 2013 Society for Risk Analysis.
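The dynamic input-output (inoperability) recursion that this line of work builds on is commonly written q(t+1) = q(t) + K[A* q(t) + c*(t) - q(t)], where q is the vector of sector inoperabilities, A* the interdependency matrix, c* the demand perturbation, and K a diagonal matrix of sector resilience coefficients. A minimal sketch under an assumed constant perturbation (all matrix values and names are illustrative, not the article's National Capital Region data):

```python
def diim_simulate(a_star, c_star, k, steps=50):
    """Iterate q[t+1] = q[t] + K * (A* q[t] + c* - q[t]) with each sector's
    inoperability clipped to [0, 1]; returns the whole trajectory."""
    n = len(c_star)
    q = [0.0] * n
    history = [list(q)]
    for _ in range(steps):
        aq = [sum(a_star[i][j] * q[j] for j in range(n)) for i in range(n)]
        q = [min(1.0, max(0.0, q[i] + k[i] * (aq[i] + c_star[i] - q[i])))
             for i in range(n)]
        history.append(list(q))
    return history
```

With a constant c*, the trajectory approaches the static equilibrium q = (I - A*)^{-1} c*; economic loss per sector is then the inoperability trajectory weighted by that sector's as-planned output.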

  18. Modeling imbalanced economic recovery following a natural disaster using input-output analysis.

    Science.gov (United States)

    Li, Jun; Crawford-Brown, Douglas; Syddall, Mark; Guan, Dabo

    2013-10-01

    Input-output analysis is frequently used in studies of large-scale weather-related (e.g., hurricanes and flooding) disruption of a regional economy. The economy after a sudden catastrophe shows a multitude of imbalances with respect to demand and production and may take months or years to recover. However, there is no consensus about how the economy recovers. This article presents a theoretical route map for imbalanced economic recovery called dynamic inequalities. Subsequently, it is applied to a hypothetical postdisaster economic scenario of flooding in London around the year 2020 to assess the influence of future shocks to a regional economy and to suggest adaptation measures. Economic projections are produced by a macroeconometric model and used as baseline conditions. The results suggest that London's economy would recover over approximately 70 months by applying a proportional rationing scheme under the assumption of an initial 50% labor loss (with full recovery in six months), a 40% initial loss to service sectors, and a 10-30% initial loss to other sectors. The results also suggest that imbalance will be the norm during the postdisaster period of economic recovery, even though balance may occur temporarily. Model sensitivity analysis suggests that a proportional rationing scheme may be an effective strategy to apply during postdisaster economic reconstruction, and that policies in transportation recovery and in health care are essential for effective postdisaster economic recovery. © 2013 Society for Risk Analysis.
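
    The proportional rationing scheme the article evaluates can be sketched in a few lines: when a sector's post-disaster output falls short of total demand, every customer receives the same fraction of its order. The numbers below are illustrative, not the London scenario.

```python
import numpy as np

def proportional_ration(orders, available):
    """orders: demands placed on one sector; available: its current output."""
    total = orders.sum()
    if total <= available:
        return orders.copy()       # no shortage: deliver every order in full
    ratio = available / total      # uniform delivery fraction under shortage
    return orders * ratio

orders = np.array([40.0, 25.0, 35.0])   # intermediate + final demands (assumed)
deliveries = proportional_ration(orders, available=80.0)
```

    With total orders of 100 against an output of 80, every customer receives 80% of its order. In a full dynamic-inequalities run this rationing is applied sector by sector at each time step, which is what produces the persistent imbalances the article reports.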

  19. Vascular input function correction of inflow enhancement for improved pharmacokinetic modeling of liver DCE-MRI.

    Science.gov (United States)

    Ning, Jia; Schubert, Tilman; Johnson, Kevin M; Roldán-Alzate, Alejandro; Chen, Huijun; Yuan, Chun; Reeder, Scott B

    2018-06-01

    To propose a simple method to correct vascular input function (VIF) errors due to inflow effects and to test whether the proposed method can provide more accurate VIFs for improved pharmacokinetic modeling. A spoiled gradient echo sequence-based inflow quantification and contrast agent concentration correction method was proposed. Simulations were conducted to illustrate improvement in the accuracy of VIF estimation and pharmacokinetic fitting. Animal studies with dynamic contrast-enhanced MR scans were conducted before, 1 week after, and 2 weeks after portal vein embolization (PVE) was performed in the left portal circulation of pigs. The proposed method was applied to correct the VIFs for model fitting. Pharmacokinetic parameters fitted using corrected and uncorrected VIFs were compared between different lobes and visits. Simulation results demonstrated that the proposed method can improve accuracy of VIF estimation and pharmacokinetic fitting. In the animal study, pharmacokinetic fitting using corrected VIFs demonstrated changes in perfusion consistent with changes expected after PVE, whereas the perfusion estimates derived from uncorrected VIFs showed no significant changes. The proposed correction method improves accuracy of VIFs and therefore provides more precise pharmacokinetic fitting. This method may be promising in improving the reliability of perfusion quantification. Magn Reson Med 79:3093-3102, 2018. © 2017 International Society for Magnetic Resonance in Medicine.

  20. Three-scale input-output modeling for urban economy: Carbon emission by Beijing 2007

    Science.gov (United States)

    Chen, G. Q.; Guo, Shan; Shao, Ling; Li, J. S.; Chen, Zhan-Ming

    2013-09-01

    For urban economies, an ecological endowment embodiment analysis has to be supported by endowment intensities at both the international and domestic scales to reflect the international and domestic imports of increasing importance. A three-scale input-output model for an urban economy, yielding nine categories of embodiment fluxes, is presented in this paper by a case study on the carbon dioxide emissions of the Beijing economy in 2007, based on the carbon intensities for the average world and national economies. The total direct emissions are estimated at 1.03E+08 t, of which 91.61% is energy-related emissions. According to the model, emissions embodied in fixed capital formation amount to 7.20E+07 t, emissions embodied in household consumption are 1.58 times those in government consumption, and emissions in gross capital formation are 14.93% more than those in gross consumption. As a net exporter of carbon emissions, Beijing exports 5.21E+08 t of carbon embodied in foreign exported commodities and 1.06E+08 t in domestically exported commodities, while emissions embodied in foreign and domestic imported commodities are 3.34E+07 and 1.75E+08 t, respectively. The algorithm presented in this study is applicable to the embodiment analysis of other environmental resources for regional economies characterized by multiple scales.
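
    The single-scale core of any such embodiment calculation is the propagation of direct emission intensities through the Leontief inverse; the paper repeats this logic at the world, national, and urban scales. A sketch with an assumed 3-sector economy (not the Beijing tables):

```python
import numpy as np

# Technical coefficient matrix A and direct emission intensities d (assumed).
A = np.array([[0.2, 0.3, 0.1],
              [0.1, 0.1, 0.3],
              [0.2, 0.1, 0.2]])      # inputs per unit of sectoral output
d = np.array([1.2, 0.4, 0.8])        # direct emissions per unit output (t/$)

# Embodied intensities: eps = d (I - A)^(-1), i.e. direct plus all
# upstream emissions needed to deliver one unit of final output.
eps = d @ np.linalg.inv(np.eye(3) - A)

y = np.array([50.0, 30.0, 20.0])     # a final-demand bundle (assumed)
embodied = eps @ y                   # emissions embodied in that bundle
```

    Because each unit of final demand drags upstream production with it, every embodied intensity exceeds the corresponding direct intensity; pricing imports with world or national intensities instead of local ones is what turns this into the three-scale model of the paper.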

  1. Three-Verb Clusters in Interference Frisian: A Stochastic Model over Sequential Syntactic Input.

    Science.gov (United States)

    Hoekstra, Eric; Versloot, Arjen

    2016-03-01

    Interference Frisian (IF) is a variety of Frisian, spoken mostly by younger speakers, which is heavily influenced by Dutch. IF exhibits all six logically possible word orders in a cluster of three verbs. This phenomenon has been researched by Koeneman and Postma (2006), who argue for a parameter theory, which leaves frequency differences between various orders unexplained. Rejecting Koeneman and Postma's parameter theory, but accepting their conclusion that Dutch (and Frisian) data are input for the grammar of IF, we argue that the word order preferences of speakers of IF are determined by frequency and similarity. More specifically, three-verb clusters in IF are sensitive to: their linear left-to-right similarity to two-verb clusters and three-verb clusters in Frisian and in Dutch; and the (estimated) frequency of two- and three-verb clusters in Frisian and Dutch. The model is shown to work best if Dutch and Frisian, and two- and three-verb clusters, have equal impact factors. If different impact factors are taken, the model's predictions do not change substantially, testifying to its robustness. This analysis is in line with recent ideas that the sequential nature of human speech is more important to syntactic processes than commonly assumed, and that less burden need be put on the hierarchical dimension of syntactic structure.

  2. Realistic modelling of the seismic input: Site effects and parametric studies

    International Nuclear Information System (INIS)

    Romanelli, F.; Vaccari, F.; Panza, G.F.

    2002-11-01

    We illustrate the work done in the framework of a large international cooperation, showing the very recent numerical experiments carried out within the framework of the EC project 'Advanced methods for assessing the seismic vulnerability of existing motorway bridges' (VAB) to assess the importance of non-synchronous seismic excitation of long structures. The definition of the seismic input at the Warth bridge site, i.e. the determination of the seismic ground motion due to an earthquake with a given magnitude and epicentral distance from the site, has been done following a theoretical approach. In order to perform an accurate and realistic estimate of site effects and of differential motion it is necessary to make a parametric study that takes into account the complex combination of the source and propagation parameters, in realistic geological structures. The computation of a wide set of time histories and spectral information, corresponding to possible seismotectonic scenarios for different sources and structural models, allows the construction of damage scenarios that are beyond the reach of stochastic models, at a very low cost/benefit ratio. (author)

  3. Evaluating the effects of model structure and meteorological input data on runoff modelling in an alpine headwater basin

    Science.gov (United States)

    Schattan, Paul; Bellinger, Johannes; Förster, Kristian; Schöber, Johannes; Huttenlau, Matthias; Kirnbauer, Robert; Achleitner, Stefan

    2017-04-01

    Modelling water resources in snow-dominated mountainous catchments is challenging due to both short concentration times and a highly variable contribution of snow melt in space and time from complex terrain. A number of model setups exist, ranging from physically based models to conceptual models which do not attempt to represent the natural processes in a physically meaningful way. Within the flood forecasting system for the Tyrolean Inn River, two serially linked hydrological models with differing process representation are used. Non-glacierized catchments are modelled by a semi-distributed water balance model (HQsim) based on the HRU approach. A fully distributed energy and mass balance model (SES), purpose-built for snow- and icemelt, is used for highly glacierized headwater catchments. Previous work revealed uncertainties and limitations within the models' structures regarding (i) the representation of snow processes in HQsim, (ii) the runoff routing of SES, and (iii) the spatial resolution of the meteorological input data in both models. To overcome these limitations, a "strengths driven" model coupling is applied. Instead of linking the models serially, a vertical one-way coupling of models has been implemented. The fully distributed snow modelling of SES is combined with the semi-distributed HQsim structure, allowing the coupled model to benefit from the soil and runoff routing schemes in HQsim. A Monte Carlo-based modelling experiment was set up to evaluate the resulting differences in the runoff prediction due to the improved model coupling and a refined spatial resolution of the meteorological forcing. The experiment design follows a gradient of spatial discretisation of hydrological processes and meteorological forcing data with a total of six different model setups for the alpine headwater basin of the Fagge River in the Tyrolean Alps. In general, all setups show a good performance for this particular basin.
It is therefore planned to include other basins with differing

  4. Including operational data in QMRA model: development and impact of model inputs.

    Science.gov (United States)

    Jaidi, Kenza; Barbeau, Benoit; Carrière, Annie; Desjardins, Raymond; Prévost, Michèle

    2009-03-01

    A Monte Carlo model, based on the Quantitative Microbial Risk Analysis approach (QMRA), has been developed to assess the relative risks of infection associated with the presence of Cryptosporidium and Giardia in drinking water. The impact of various approaches for modelling the initial parameters of the model on the final risk assessments is evaluated. The Monte Carlo simulations that we performed showed that the occurrence of parasites in raw water was best described by a mixed distribution: log-Normal for concentrations above the detection limit (DL) and a uniform distribution for concentrations below the DL. The choice of approach for modelling the initial parameters affected the estimated risks significantly. The mean annual risks for conventional treatment are: 1.97E-03 (removal credit adjusted by log parasite = log spores), 1.58E-05 (log parasite = 1.7 x log spores) or 9.33E-03 (regulatory credits based on the turbidity measurement in filtered water). Using full scale validated SCADA data, the simplified calculation of CT performed at the plant was shown to largely underestimate the risk relative to a more detailed CT calculation, which takes into consideration the downtime and system failure events identified at the plant (1.46E-03 vs. 3.93E-02 for the mean risk).
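
    The backbone of such a QMRA Monte Carlo can be sketched compactly: sample raw-water parasite concentrations, apply a treatment log-removal credit, and push the daily dose through an exponential dose-response model. All distributions and parameter values below are assumed for illustration, not the study's fitted values.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 100_000

conc = rng.lognormal(mean=-2.0, sigma=1.0, size=n)  # oocysts/L in raw water (assumed)
log_removal = rng.normal(3.0, 0.3, size=n)          # treatment credit, log10 (assumed)
volume = 1.0                                        # litres ingested per day
r = 0.004                                           # dose-response parameter (assumed)

dose = conc * 10.0 ** (-log_removal) * volume       # daily ingested dose
p_daily = 1.0 - np.exp(-r * dose)                   # exponential dose-response
p_annual = 1.0 - (1.0 - p_daily) ** 365             # annual infection risk
mean_annual_risk = p_annual.mean()
```

    Swapping the concentration model (e.g., the mixed log-Normal/uniform distribution around the detection limit) or the removal-credit model changes only the first two sampling lines, which is exactly the sensitivity the article investigates.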

  5. Discharge simulations performed with a hydrological model using bias corrected regional climate model input

    Directory of Open Access Journals (Sweden)

    S. C. van Pelt

    2009-12-01

    Studies have demonstrated that precipitation on Northern Hemisphere mid-latitudes has increased in recent decades and that it is likely that this trend will continue. This will have an influence on discharge of the river Meuse. The use of bias correction methods is important when the effect of precipitation change on river discharge is studied. The objective of this paper is to investigate the effect of using two different bias correction methods on output from a Regional Climate Model (RCM) simulation. In this study a Regional Atmospheric Climate Model (RACMO2) run is used, forced by ECHAM5/MPIOM under the condition of the SRES-A1B emission scenario, with a 25 km horizontal resolution. The RACMO2 runs contain a systematic precipitation bias on which two bias correction methods are applied. The first method corrects for the wet day fraction and wet day average (WD bias correction) and the second method corrects for the mean and coefficient of variance (MV bias correction). The WD bias correction initially corrects well for the average, but it appears that too many successive precipitation days were removed with this correction. The second method performed less well on average bias correction, but the temporal precipitation pattern was better. Subsequently, the discharge was calculated by using RACMO2 output as forcing to the HBV-96 hydrological model. A large difference was found between the simulated discharge of the uncorrected RACMO2 run, the WD bias corrected run and the MV bias corrected run. These results show the importance of an appropriate bias correction.
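
    A mean-and-coefficient-of-variation (MV) correction of the kind named here is often implemented as a power-law transform of wet-day amounts, P' = a * P**b, with b tuned so the corrected coefficient of variation matches observations and a then rescaling the mean. The sketch below follows that common recipe; the exact MV method of the paper may differ in detail, and the synthetic precipitation series is assumed.

```python
import numpy as np

def mv_correct(model_p, obs_mean, obs_cv):
    """Power-law bias correction of wet-day precipitation amounts."""
    wet = model_p[model_p > 0]
    # Grid-search the exponent that reproduces the observed CV of wet days.
    bs = np.linspace(0.2, 3.0, 561)
    cvs = [np.std(wet**b) / np.mean(wet**b) for b in bs]
    b = bs[np.argmin(np.abs(np.array(cvs) - obs_cv))]
    a = obs_mean / np.mean(wet**b)      # then match the wet-day mean exactly
    out = model_p.copy()
    out[model_p > 0] = a * wet**b       # dry days (zeros) are left untouched
    return out, a, b

rng = np.random.default_rng(0)
# Synthetic RCM-like series: ~40% wet days with gamma-distributed amounts.
model_p = np.where(rng.random(5000) < 0.4, rng.gamma(0.7, 4.0, 5000), 0.0)
corrected, a, b = mv_correct(model_p, obs_mean=5.0, obs_cv=1.0)
```

    Because the transform touches only wet-day amounts, it preserves the model's temporal wet/dry pattern, which matches the paper's observation that the MV method keeps the temporal precipitation structure better than the WD method.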

  6. Comparison of robust input shapers

    Science.gov (United States)

    Vaughan, Joshua; Yano, Aika; Singhose, William

    2008-09-01

    The rapid movement of machines is a challenging control problem because it often results in high levels of vibration. As a result, flexible machines are typically moved relatively slowly. Input shaping is a control method that allows much higher speeds of motion by limiting vibration induced by the reference command. To design an input-shaping controller, estimates of the system natural frequency and damping ratio are required. However, real world systems cannot be modeled exactly, making the robustness to modeling errors an important consideration. Many robust input shapers have been developed, but robust shapers typically have longer durations that slow the system response. This creates a compromise between shaper robustness and rise time. This paper analyzes the compromise between rapidity of motion and shaper robustness for several input-shaping methods. Experimental results from a portable bridge crane verify the theoretical predictions.
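
    The robust shapers compared in this paper all extend the classic zero-vibration (ZV) shaper, whose two impulses are fixed by the estimated natural frequency and damping ratio. A minimal sketch with assumed example values (not the bridge-crane identification):

```python
import numpy as np

def zv_shaper(freq_hz, zeta):
    """Return impulse times and amplitudes of a two-impulse ZV shaper."""
    wd = 2.0 * np.pi * freq_hz * np.sqrt(1.0 - zeta**2)  # damped frequency
    K = np.exp(-zeta * np.pi / np.sqrt(1.0 - zeta**2))
    times = np.array([0.0, np.pi / wd])                  # half damped period apart
    amps = np.array([1.0 / (1.0 + K), K / (1.0 + K)])    # sum to one
    return times, amps

times, amps = zv_shaper(freq_hz=1.0, zeta=0.05)
# The shaped command is the convolution of this impulse sequence with the
# raw reference command; robust shapers (ZVD, EI, ...) add impulses, which
# lengthens the shaper duration -- the robustness/rise-time trade-off
# analyzed in the paper.
```

    Since the impulse amplitudes sum to one, shaping rescales no steady-state motion; it only delays parts of the command so the residual vibrations cancel at the modeled frequency.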

  7. Large uncertainty in soil carbon modelling related to method of calculation of plant carbon input to agricultural systems

    DEFF Research Database (Denmark)

    Keel, S G; Leifeld, Jens; Mayer, Julius

    2017-01-01

    referred to as soil carbon inputs (C). The soil C inputs from plants are derived from measured agricultural yields using allometric equations. Here we compared the results of five previously published equations. Our goal was to test whether the choice of method is critical for modelling soil C and if so...... with the model C-TOOL showed that calculated SOC stocks were affected strongly by the choice of the allometric equation. With four equations, a decrease in SOC stocks was simulated, whereas with one equation there was no change. This considerable uncertainty in modelled soil C is attributable solely...... to the allometric equation used to estimate the soil C input. We identify the evaluation and selection of allometric equations and associated coefficients as critical steps when setting up a model-based soil C inventory for agricultural systems....

  8. Hydrological and sedimentological modeling of the Okavango Delta, Botswana, using remotely sensed input and calibration data

    Science.gov (United States)

    Milzow, C.; Kgotlhang, L.; Kinzelbach, W.; Bauer-Gottwein, P.

    2006-12-01

    medium-term. The Delta's size and limited accessibility make direct data acquisition on the ground difficult. Remote sensing methods are the most promising source of spatially distributed data for both model input and calibration. Besides ground data, METEOSAT and NOAA data are used for precipitation and evapotranspiration inputs, respectively. The topography is taken from a study by Gumbricht et al. (2004), in which the SRTM shuttle mission data are refined using remotely sensed vegetation indexes. The aquifer thickness was determined with an aeromagnetic survey. For calibration, the simulated flooding patterns are compared to patterns derived from satellite imagery: recent ENVISAT ASAR and older NOAA AVHRR scenes. The final objective is to better understand the hydrological and hydraulic aspects of this complex ecosystem and eventually predict the consequences of human interventions. It will provide a tool for decision makers to assess the impact of possible upstream dams and water abstraction scenarios.

  9. Updating the Cornell Net Carbohydrate and Protein System feed library and analyzing model sensitivity to feed inputs.

    Science.gov (United States)

    Higgs, R J; Chase, L E; Ross, D A; Van Amburgh, M E

    2015-09-01

    The Cornell Net Carbohydrate and Protein System (CNCPS) is a nutritional model that evaluates the environmental and nutritional resources available in an animal production system and enables the formulation of diets that closely match the predicted animal requirements. The model includes a library of approximately 800 different ingredients that provide the platform for describing the chemical composition of the diet to be formulated. Each feed in the feed library was evaluated against data from 2 commercial laboratories and updated when required to enable more precise predictions of dietary energy and protein supply. A multistep approach was developed to predict uncertain values using linear regression, matrix regression, and optimization. The approach provided an efficient and repeatable way of evaluating and refining the composition of a large number of different feeds against commercially generated data similar to that used by CNCPS users on a daily basis. The protein A fraction in the CNCPS, formerly classified as nonprotein nitrogen, was reclassified to ammonia for ease and availability of analysis and to provide a better prediction of the contribution of metabolizable protein from free AA and small peptides. Amino acid profiles were updated using contemporary data sets and now represent the profile of AA in the whole feed rather than the insoluble residue. Model sensitivity to variation in feed library inputs was investigated using Monte Carlo simulation. Results showed the prediction of metabolizable energy was most sensitive to variation in feed chemistry and fractionation, whereas predictions of metabolizable protein were most sensitive to variation in digestion rates. Regular laboratory analysis of samples taken on-farm remains the recommended approach to characterizing the chemical components of feeds in a ration. 
However, updates to the CNCPS feed library provide a database of ingredients that are consistent with current feed chemistry information and

  10. Smoke inputs to climate models: optical properties and height distribution for nuclear winter studies

    International Nuclear Information System (INIS)

    Penner, J.E.; Haselman, L.C. Jr.

    1985-04-01

    Smoke from fires produced in the aftermath of a major nuclear exchange has been predicted to cause large decreases in land surface temperatures. The extent of the decrease and even the sign of the temperature change depend on the optical characteristics of the smoke and how it is distributed with altitude. The height distribution of smoke over a fire is determined by the amount of buoyant energy produced by the fire and the amount of energy released by the latent heat of condensation of water vapor. The optical properties of the smoke depend on the size distribution of smoke particles, which changes due to coagulation within the lofted plume. We present calculations demonstrating these processes and estimate their importance for the smoke source term input for climate models. For high initial smoke densities and for absorbing smoke (m = 1.75 - 0.3i), coagulation of smoke particles within the smoke plume is predicted to first increase, then decrease, the size-integrated extinction cross section. However, at the smoke densities predicted in our model (assuming a 3% emission rate for smoke) and for our assumed initial size distribution, the attachment rates for Brownian and turbulent collision processes are not fast enough to alter the smoke size distribution enough to significantly change the integrated extinction cross section. Early-time coagulation is, however, fast enough to allow further coagulation, on longer time scales, to act to decrease the extinction cross section. On these longer time scales appropriate to climate models, coagulation can decrease the extinction cross section by almost a factor of two before the smoke becomes well mixed around the globe. This process has been neglected in past climate effect evaluations, but could have a significant effect, since the extinction cross section enters as an exponential factor in calculating the light attenuation due to smoke. 10 refs., 20 figs
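
    The coagulation mechanism invoked here can be illustrated with the simplest monodisperse Smoluchowski model, dN/dt = -0.5 k N^2: the number density decays while mean particle volume grows, the process that eventually lowers the size-integrated extinction cross section. The rate constant and densities below are assumed round numbers, not the report's plume values.

```python
# Toy Brownian coagulation of a monodisperse aerosol (illustrative values).
def coagulate(n0, k, t, steps=100_000):
    """Explicit Euler integration of dN/dt = -0.5 * k * N**2."""
    n = n0
    dt = t / steps
    for _ in range(steps):
        n += -0.5 * k * n * n * dt
    return n

n0 = 1.0e10    # particles per m^3 (assumed, dense plume)
k = 1.0e-15    # coagulation kernel, m^3/s (assumed constant)
t = 3600.0     # one hour of plume evolution

n_end = coagulate(n0, k, t)
analytic = n0 / (1.0 + 0.5 * k * n0 * t)   # closed-form solution for comparison
```

    Because the loss rate scales with N^2, coagulation is fast only while the plume is dense; once the smoke dilutes, the size distribution freezes, consistent with the abstract's point that early-time coagulation sets up the slower long-term decrease in extinction.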

  11. Parametric modeling of DSC-MRI data with stochastic filtration and optimal input design versus non-parametric modeling.

    Science.gov (United States)

    Kalicka, Renata; Pietrenko-Dabrowska, Anna

    2007-03-01

    In this paper, MRI measurements are used for assessment of brain tissue perfusion and other features and functions of the brain (cerebral blood flow, CBF; cerebral blood volume, CBV; mean transit time, MTT). Perfusion is an important indicator of tissue viability and functioning, since in pathological tissue the blood flow and the vascular and tissue structure are altered with respect to normal tissue. MRI enables diagnosing diseases at an early stage of their course. The parametric and non-parametric approaches to the identification of MRI models are presented and compared. The non-parametric modeling adopts gamma variate functions. The parametric three-compartmental catenary model, based on the general kinetic model, is also proposed. The parameters of the models are estimated on the basis of experimental data. The goodness of fit of the gamma variate and the three-compartmental models to the data and the accuracy of the parameter estimates are compared. Kalman filtering, smoothing the measurements, was adopted to improve the estimate accuracy of the parametric model. Parametric modeling gives a better fit and better parameter estimates than non-parametric modeling and allows an insight into the functioning of the system. To improve the accuracy, optimal experiment design related to the input signal was performed.
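
    The non-parametric side of this comparison fits a gamma-variate bolus curve, C(t) = K (t - t0)^alpha exp(-(t - t0)/beta), to the first-pass concentration series. A sketch follows, with a coarse grid search standing in for the nonlinear optimiser and a synthetic, not patient-derived, curve:

```python
import numpy as np

def fit_gamma_variate(t, c, t0=0.0):
    """Grid-search alpha and beta; the scale K has a closed-form solution."""
    tt = np.clip(t - t0, 1e-9, None)
    best = None
    for alpha in np.linspace(0.5, 5.0, 46):
        for beta in np.linspace(0.5, 5.0, 46):
            shape = tt**alpha * np.exp(-tt / beta)
            K = (shape @ c) / (shape @ shape)     # least-squares scale factor
            err = np.sum((c - K * shape) ** 2)
            if best is None or err < best[0]:
                best = (err, K, alpha, beta)
    return best[1:]                               # K, alpha, beta

t = np.linspace(0.1, 30.0, 120)                   # seconds (assumed sampling)
true = 2.0 * t**2.0 * np.exp(-t / 1.5)            # synthetic bolus passage
rng = np.random.default_rng(3)
K, alpha, beta = fit_gamma_variate(t, true + rng.normal(0, 0.005, t.size))
```

    CBV-type quantities then follow from the area under the fitted curve; the paper's point is that a compartmental model fitted with Kalman-smoothed data recovers physiologically interpretable parameters that this purely descriptive fit cannot provide.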

  12. Ecological input-output modeling for embodied resources and emissions in Chinese economy 2005

    Science.gov (United States)

    Chen, Z. M.; Chen, G. Q.; Zhou, J. B.; Jiang, M. M.; Chen, B.

    2010-07-01

    For the embodiment of natural resources and environmental emissions in the Chinese economy of 2005, biophysical balance modeling is carried out based on an extension of the economic input-output table into an ecological one that integrates the economy with its various environmental driving forces. Included resource flows into the primary resource sectors and environmental emission flows from the primary emission sectors belong to seven categories: energy resources in terms of fossil fuels, hydropower and nuclear energy, biomass, and other sources; freshwater resources; greenhouse gas emissions in terms of CO2, CH4, and N2O; industrial wastes in terms of waste water, waste gas, and waste solid; exergy in terms of fossil fuel resources, biological resources, mineral resources, and environmental resources; and solar emergy and cosmic emergy in terms of climate resources, soil, fossil fuels, and minerals. The resulting database of embodiment intensities and sectoral embodiments of natural resources and environmental emissions has essential implications in the context of systems ecology and ecological economics in general, and of global climate change in particular.

  13. The Use of an Eight-Step Instructional Model to Train School Staff in Partner-Augmented Input

    Science.gov (United States)

    Senner, Jill E.; Baud, Matthew R.

    2017-01-01

    An eight-step instruction model was used to train a self-contained classroom teacher, speech-language pathologist, and two instructional assistants in partner-augmented input, a modeling strategy for teaching augmentative and alternative communication use. With the exception of a 2-hr training session, instruction primarily was conducted during…

  14. Usefulness of non-linear input-output models for economic impact analyses in tourism and recreation

    NARCIS (Netherlands)

    Klijs, J.; Peerlings, J.H.M.; Heijman, W.J.M.

    2015-01-01

    In tourism and recreation management it is still common practice to apply traditional input–output (IO) economic impact models, despite their well-known limitations. In this study the authors analyse the usefulness of applying a non-linear input–output (NLIO) model, in which price-induced input

  15. Urban pluvial flood prediction: a case study evaluating radar rainfall nowcasts and numerical weather prediction models as model inputs.

    Science.gov (United States)

    Thorndahl, Søren; Nielsen, Jesper Ellerbæk; Jensen, David Getreuer

    2016-12-01

    Flooding produced by high-intensity local rainfall and drainage system capacity exceedance can have severe impacts in cities. In order to prepare cities for these types of flood events - especially in the future climate - it is valuable to be able to simulate these events numerically, both historically and in real-time. There is a rather untested potential in real-time prediction of urban floods. In this paper, radar data observations with different spatial and temporal resolution, radar nowcasts of 0-2 h leadtime, and numerical weather models with leadtimes up to 24 h are used as inputs to an integrated flood and drainage systems model in order to investigate the relative difference between different inputs in predicting future floods. The system is tested on the small town of Lystrup in Denmark, which was flooded in 2012 and 2014. Results show it is possible to generate detailed flood maps in real-time with high resolution radar rainfall data, but forecast performance is rather limited when predicting floods with leadtimes of more than half an hour.

  16. Modeling DPOAE input/output function compression: comparisons with hearing thresholds.

    Science.gov (United States)

    Bhagat, Shaum P

    2014-09-01

    Basilar membrane input/output (I/O) functions in mammalian animal models are characterized by linear and compressed segments when measured near the location corresponding to the characteristic frequency. A method of studying basilar membrane compression indirectly in humans involves measuring distortion-product otoacoustic emission (DPOAE) I/O functions. Previous research has linked compression estimates from behavioral growth-of-masking functions to hearing thresholds. The aim of this study was to compare compression estimates from DPOAE I/O functions and hearing thresholds at 1 and 2 kHz. A prospective correlational research design was performed. The relationship between DPOAE I/O function compression estimates and hearing thresholds was evaluated with Pearson product-moment correlations. Normal-hearing adults (n = 16) aged 22-42 yr were recruited. DPOAE I/O functions (L₂ = 45-70 dB SPL) and two-interval forced-choice hearing thresholds were measured in normal-hearing adults. A three-segment linear regression model applied to DPOAE I/O functions supplied estimates of compression thresholds, defined as breakpoints between linear and compressed segments and the slopes of the compressed segments. Pearson product-moment correlations between DPOAE compression estimates and hearing thresholds were evaluated. A high correlation between DPOAE compression thresholds and hearing thresholds was observed at 2 kHz, but not at 1 kHz. Compression slopes also correlated highly with hearing thresholds only at 2 kHz. The derivation of cochlear compression estimates from DPOAE I/O functions provides a means to characterize basilar membrane mechanics in humans and elucidates the role of compression in tone detection in the 1-2 kHz frequency range. American Academy of Audiology.
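
    The three-segment linear regression used here to extract compression thresholds can be written as a continuous piecewise-linear model, y = b0 + b1*x + b2*(x - k1)+ + b3*(x - k2)+, with the two breakpoints found by grid search and the remaining coefficients by least squares. The synthetic I/O function below is illustrative, not measured DPOAE data.

```python
import numpy as np

def fit_three_segment(x, y, knots):
    """Fit a continuous 3-segment line; return breakpoints and segment slopes."""
    best = None
    for i, k1 in enumerate(knots):
        for k2 in knots[i + 1:]:
            X = np.column_stack([np.ones_like(x), x,
                                 np.maximum(x - k1, 0.0),
                                 np.maximum(x - k2, 0.0)])
            b, *_ = np.linalg.lstsq(X, y, rcond=None)
            err = np.sum((y - X @ b) ** 2)
            if best is None or err < best[0]:
                best = (err, k1, k2, b)
    _, k1, k2, b = best
    slopes = (b[1], b[1] + b[2], b[1] + b[2] + b[3])
    return k1, k2, slopes

L2 = np.linspace(45.0, 70.0, 26)    # primary level, dB SPL (assumed grid)
# Synthetic I/O function: linear growth, then two compressed segments.
true = np.piecewise(L2, [L2 < 55, (L2 >= 55) & (L2 < 65), L2 >= 65],
                    [lambda x: x - 45.0,                  # slope 1.0
                     lambda x: 10.0 + 0.4 * (x - 55.0),   # slope 0.4
                     lambda x: 14.0 + 0.1 * (x - 65.0)])  # slope 0.1
k1, k2, slopes = fit_three_segment(L2, true, knots=np.arange(48.0, 68.0, 1.0))
```

    The first breakpoint `k1` plays the role of the compression threshold in the article, and the second and third slopes are the compressed-segment slopes that were correlated with hearing thresholds at 2 kHz.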

  17. Modelling human resource requirements for the nuclear industry in Europe

    Energy Technology Data Exchange (ETDEWEB)

    Roelofs, Ferry [Nuclear Research and Consultancy Group (NRG) (Netherlands); Flore, Massimo; Estorff, Ulrik von [Joint Research Center (JRC) (Netherlands)

    2017-11-15

    The European Human Resource Observatory for Nuclear (EHRO-N) provides the European Commission with essential data related to supply and demand for nuclear experts in the EU-28 and the enlargement and integration countries based on bottom-up information from the nuclear industry. The objective is to assess how the supply of experts for the nuclear industry responds to the needs for the same experts for present and future nuclear projects in the region. Complementary to the bottom-up approach taken by the EHRO-N team at JRC, a top-down modelling approach has been taken in a collaboration with NRG in the Netherlands. This top-down modelling approach focuses on the human resource requirements for operation, construction, decommissioning, and efforts for long term operation of nuclear power plants. This paper describes the top-down methodology, the model input, the main assumptions, and the results of the analyses.

  18. Development of ANFIS models for air quality forecasting and input optimization for reducing the computational cost and time

    Science.gov (United States)

    Prasad, Kanchan; Gorai, Amit Kumar; Goyal, Pramila

    2016-03-01

    This study aims to develop an adaptive neuro-fuzzy inference system (ANFIS) for forecasting of daily air pollution concentrations of five air pollutants [sulphur dioxide (SO2), nitrogen dioxide (NO2), carbon monoxide (CO), ozone (O3) and particulate matter (PM10)] in the atmosphere of a megacity (Howrah). Air pollution in the city (Howrah) is rising in parallel with the economy, and thus observing, forecasting and controlling air pollution become increasingly important due to the health impact. ANFIS serves as a basis for constructing a set of fuzzy IF-THEN rules, with appropriate membership functions to generate the stipulated input-output pairs. The ANFIS model predictor considers the value of meteorological factors (pressure, temperature, relative humidity, dew point, visibility, wind speed, and precipitation) and the previous day's pollutant concentration in different combinations as the inputs to predict the 1-day advance and same day air pollution concentration. The concentration values of the five air pollutants and seven meteorological parameters of the Howrah city during the period 2009 to 2011 were used for development of the ANFIS model. Collinearity tests were conducted to eliminate the redundant input variables. A forward selection (FS) method is used for selecting the different subsets of input variables. Application of collinearity tests and FS techniques reduces the number of input variables and subsets, which helps in reducing the computational cost and time. The performances of the models were evaluated on the basis of four statistical indices (coefficient of determination, normalized mean square error, index of agreement, and fractional bias).
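
    The forward-selection step can be sketched generically: greedily add the candidate input that most improves the fit until a size cap or no candidate helps. A plain linear model and a training-error score stand in for the ANFIS and its validation criterion here, and the synthetic data are assumed; in the study the candidates are the meteorological variables and previous-day concentrations.

```python
import numpy as np

def forward_select(X, y, max_vars=None):
    """Greedy forward selection of input columns for a linear model."""
    n, p = X.shape
    chosen, best_err = [], np.inf
    while len(chosen) < (max_vars or p):
        scores = {}
        for j in range(p):
            if j in chosen:
                continue
            cols = chosen + [j]
            A = np.column_stack([np.ones(n), X[:, cols]])
            b, *_ = np.linalg.lstsq(A, y, rcond=None)
            scores[j] = np.sum((y - A @ b) ** 2)   # sum of squared errors
        j_best = min(scores, key=scores.get)
        if scores[j_best] >= best_err - 1e-9:
            break                                  # no candidate improves the fit
        chosen.append(j_best)
        best_err = scores[j_best]
    return chosen

rng = np.random.default_rng(7)
X = rng.normal(size=(300, 6))                      # six candidate inputs
y = 2.0 * X[:, 0] - 1.0 * X[:, 3] + rng.normal(0, 0.1, 300)  # two informative
selected = forward_select(X, y, max_vars=3)
```

    Because training error never increases when a column is added, a real application would score candidates on held-out data (or an information criterion); the `max_vars` cap here is the simple guard that keeps the subset, and hence the computational cost, small.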

  19. Decreased Hering-Breuer input-output entrainment in a mouse model of Rett syndrome

    Directory of Open Access Journals (Sweden)

    Rishi R Dhingra

    2013-04-01

Full Text Available Rett syndrome, a severe X-linked neurodevelopmental disorder caused by mutations in the gene encoding methyl-CpG-binding protein 2 (Mecp2), is associated with a highly irregular respiratory pattern including severe upper-airway dysfunction. Recent work suggests that hyperexcitability of the Hering-Breuer reflex (HBR) pathway contributes to respiratory dysrhythmia in Mecp2 mutant mice. To assess how enhanced HBR input impacts respiratory entrainment by sensory afferents in closed-loop, in vivo-like conditions, we investigated the input (vagal stimulus trains) – output (phrenic bursting) entrainment via the HBR in wild-type and Mecp2-deficient mice. Using the in situ perfused brainstem preparation, which maintains an intact pontomedullary axis capable of generating an in vivo-like respiratory rhythm in the absence of the HBR, we mimicked the HBR feedback input by stimulating the vagus nerve (at threshold current, 0.5 ms pulse duration, 75 Hz pulse frequency, 100 ms train duration) at an inter-burst frequency matching that of the intrinsic oscillation of the inspiratory motor output of each preparation. Using this approach, we observed significant input-output entrainment in wild-type mice as measured by the maximum of the cross-correlation function, the peak of the instantaneous relative phase distribution, and the mutual information of the instantaneous phases. This entrainment was associated with a reduction in inspiratory duration during feedback stimulation. In contrast, the strength of input-output entrainment was significantly weaker in Mecp2-/+ mice. However, Mecp2-/+ mice also had a reduced inspiratory duration during stimulation, indicating that reflex behavior in the HBR pathway was intact. Together, these observations suggest that the respiratory network compensates for enhanced sensitivity of HBR inputs by reducing HBR input-output entrainment.
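The maximum of the cross-correlation function, one of the entrainment measures named above, can be illustrated with a generic sketch. This is not the study's analysis code: the signals and the 0.3 rad phase lag are invented for illustration.

```python
import numpy as np

def max_xcorr(a, b):
    """Peak of the normalised cross-correlation between two signals,
    used here as a simple entrainment index (~1 = strong, ~0 = none)."""
    a = (a - a.mean()) / a.std()
    b = (b - b.mean()) / b.std()
    c = np.correlate(a, b, mode="full") / len(a)
    return c.max()

t = np.linspace(0, 10, 1000)
stim = np.sin(2 * np.pi * 1.0 * t)         # input: stimulus-train envelope
resp = np.sin(2 * np.pi * 1.0 * t - 0.3)   # output: phase-lagged response
print(round(max_xcorr(stim, resp), 2))
```

A weakly entrained output, as reported for the Mecp2-/+ preparations, would yield a markedly lower peak than a well-entrained one.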

  20. A fuzzy model for exploiting customer requirements

    Directory of Open Access Journals (Sweden)

    Zahra Javadirad

    2016-09-01

Full Text Available Nowadays, quality function deployment (QFD) is one of the total quality management tools, in which customers’ views and requirements are perceived and various techniques are used to improve production requirements and operations. The QFD department, after identifying and analysing the competitors, takes customers’ feedback to meet customer demands for the products compared with the competitors. In this study, a comprehensive model for assessing the importance of customer requirements in an organization's products or services is proposed. The proposed study uses linguistic variables, as a more comprehensive approach, to increase the precision with which the evaluations are expressed. The importance of these requirements specifies the strengths and weaknesses of the organization in meeting the requirements relative to competitors. The results of these experiments show that the proposed method performs better than the other methods.
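As a generic illustration of linguistic variables (not the paper's actual model), customer-requirement ratings can be encoded as triangular fuzzy numbers, averaged across respondents, and defuzzified into a crisp importance weight. The scale values below are assumptions.

```python
# Triangular fuzzy numbers (l, m, u) for linguistic importance ratings.
# This illustrative scale is an assumption, not taken from the paper.
SCALE = {
    "low":    (0.0, 0.0, 0.3),
    "medium": (0.2, 0.5, 0.8),
    "high":   (0.7, 1.0, 1.0),
}

def fuzzy_average(ratings):
    """Aggregate several respondents' linguistic ratings of one customer
    requirement into a single triangular fuzzy number."""
    tfns = [SCALE[r] for r in ratings]
    n = len(tfns)
    return tuple(sum(t[i] for t in tfns) / n for i in range(3))

def defuzzify(tfn):
    """Centroid defuzzification of a triangular fuzzy number."""
    return sum(tfn) / 3.0

avg = fuzzy_average(["high", "medium", "high"])
print(round(defuzzify(avg), 3))
```

The resulting crisp weights can then be ranked to identify which requirements the organization serves well or poorly relative to competitors.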

  1. Comparison of different snow model formulations and their responses to input uncertainties in the Upper Indus Basin

    Science.gov (United States)

    Pritchard, David; Fowler, Hayley; Forsythe, Nathan; O'Donnell, Greg; Rutter, Nick; Bardossy, Andras

    2017-04-01

    Snow and glacier melt in the mountainous Upper Indus Basin (UIB) sustain water supplies, irrigation networks, hydropower production and ecosystems in extensive downstream lowlands. Understanding hydrological and cryospheric sensitivities to climatic variability and change in the basin is therefore critical for local, national and regional water resources management. Assessing these sensitivities using numerical modelling is challenging, due to limitations in the quality and quantity of input and evaluation data, as well as uncertainties in model structures and parameters. This study explores how these uncertainties in inputs and process parameterisations affect distributed simulations of ablation in the complex climatic setting of the UIB. The role of model forcing uncertainties is explored using combinations of local observations, remote sensing and reanalysis - including the high resolution High Asia Refined Analysis - to generate multiple realisations of spatiotemporal model input fields. Forcing a range of model structures with these input fields then provides an indication of how different ablation parameterisations respond to uncertainties and perturbations in climatic drivers. Model structures considered include simple, empirical representations of melt processes through to physically based, full energy balance models with multi-physics options for simulating snowpack evolution (including an adapted version of FSM). Analysing model input and structural uncertainties in this way provides insights for methodological choices in climate sensitivity assessments of data-sparse, high mountain catchments. Such assessments are key for supporting water resource management in these catchments, particularly given the potential complications of enhanced warming through elevation effects or, in the case of the UIB, limited understanding of how and why local climate change signals differ from broader patterns.

  2. Evaluation of precipitation input for SWAT modeling in Alpine catchment: A case study in the Adige river basin (Italy).

    Science.gov (United States)

    Tuo, Ye; Duan, Zheng; Disse, Markus; Chiogna, Gabriele

    2016-12-15

Precipitation is often the most important input in hydrological models when simulating streamflow. The Soil and Water Assessment Tool (SWAT), a widely used hydrological model, only makes use of data from the one precipitation gauge station that is nearest to the centroid of each subbasin, which is eventually corrected using the elevation band method. This generally leads to an inaccurate representation of subbasin precipitation input, particularly in catchments with complex topography. To investigate the impact of different precipitation inputs on SWAT model simulations in Alpine catchments, 13 years (1998-2010) of daily precipitation data from four datasets, including OP (observed precipitation), IDW (Inverse Distance Weighting data), CHIRPS (Climate Hazards Group InfraRed Precipitation with Station data) and TRMM (Tropical Rainfall Measuring Mission), have been considered. Both model performance (comparing simulated and measured streamflow data at the catchment outlet) and parameter and prediction uncertainties have been quantified. For all three subbasins, the use of elevation bands is fundamental to match the water budget. Streamflow predictions obtained using IDW inputs are better than those obtained using the other datasets in terms of both model performance and prediction uncertainty. Models using the CHIRPS product as input provide satisfactory streamflow estimates, suggesting that this satellite product can be applied to this data-scarce Alpine region. Comparing the performance of SWAT models using different precipitation datasets is therefore important in data-scarce regions. This study has shown that precipitation is the main source of uncertainty, and that different precipitation datasets in SWAT models lead to different best-estimate ranges for the calibrated parameters. This has important implications for the interpretation of the simulated hydrological processes.
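The IDW dataset mentioned above rests on inverse distance weighting of nearby gauges. A minimal sketch follows; the gauge coordinates and rainfall values are invented for illustration.

```python
import numpy as np

def idw(xy_gauges, values, xy_target, power=2.0):
    """Inverse Distance Weighting: estimate precipitation at a target
    point (e.g. a subbasin centroid) from surrounding gauge readings."""
    d = np.linalg.norm(xy_gauges - xy_target, axis=1)
    if np.any(d == 0):                 # target coincides with a gauge
        return float(values[np.argmin(d)])
    w = 1.0 / d ** power
    return float(np.sum(w * values) / np.sum(w))

gauges = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0]])  # gauge x,y (km)
rain = np.array([12.0, 4.0, 8.0])                          # daily totals (mm)
print(idw(gauges, rain, np.array([2.0, 2.0])))
```

The estimate always lies between the minimum and maximum gauge values and is pulled toward the nearest gauge, which is why IDW inputs track subbasin precipitation better than a single nearest station.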

  3. Evaluating the efficiency of municipalities in collecting and processing municipal solid waste: a shared input DEA-model.

    Science.gov (United States)

    Rogge, Nicky; De Jaeger, Simon

    2012-10-01

This paper proposes an adjusted "shared-input" version of the popular efficiency measurement technique Data Envelopment Analysis (DEA) that enables evaluating municipal waste collection and processing performance in settings in which one input (waste costs) is shared among the treatment efforts for multiple municipal solid waste fractions. The main advantage of this version of DEA is that it provides not only an estimate of each municipality's overall cost efficiency but also estimates of its cost efficiency in the treatment of the different fractions of municipal solid waste (MSW). To illustrate the practical usefulness of the shared-input DEA model, we apply the model to data on 293 municipalities in Flanders, Belgium, for the year 2008.
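The input-oriented CCR model underlying such DEA studies can be sketched as a linear program. This is the plain (non-shared-input) variant with invented toy data, not the authors' adjusted model.

```python
import numpy as np
from scipy.optimize import linprog

def ccr_input_efficiency(X, Y, k):
    """Input-oriented CCR efficiency of unit k (envelopment form):
    minimise theta s.t.  X @ lam <= theta * X[:, k],  Y @ lam >= Y[:, k].
    X: (m inputs x n units), Y: (s outputs x n units)."""
    m, n = X.shape
    s = Y.shape[0]
    c = np.zeros(1 + n)
    c[0] = 1.0                                # objective: minimise theta
    A_ub = np.zeros((m + s, 1 + n))
    A_ub[:m, 0] = -X[:, k]
    A_ub[:m, 1:] = X                          # X lam - theta * x_k <= 0
    A_ub[m:, 1:] = -Y                         # -Y lam <= -y_k
    b_ub = np.concatenate([np.zeros(m), -Y[:, k]])
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None)] * (1 + n))
    return res.fun

# Toy data: 1 input (waste cost), 2 outputs (two collected waste fractions)
X = np.array([[10.0, 20.0, 30.0]])
Y = np.array([[5.0, 10.0, 10.0],
              [3.0,  6.0,  6.0]])
for k in range(3):
    print(f"unit {k}: efficiency {ccr_input_efficiency(X, Y, k):.2f}")
```

The shared-input extension of the paper additionally splits the cost input across waste fractions inside the program; the envelopment structure stays the same.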

  4. Modelling of capital requirements in the energy sector: capital market access. Final memorandum

    Energy Technology Data Exchange (ETDEWEB)

    1978-04-01

Formal modelling techniques for analyzing the capital requirements of energy industries have been applied at DOE. A survey has been undertaken of a number of models which forecast energy-sector capital requirements or which detail the interactions of the energy sector and the economy. Models are identified which can be useful as prototypes for some portion of DOE's modelling needs. The models are examined to determine any useful data bases which could serve as inputs to an original DOE model. A selected group of models that comply with the stated capabilities is examined. The data sources used by these models are covered and a catalog of the relevant data bases is provided. The models covered are: capital markets and capital availability models (Fossil 1, Bankers Trust Co., DRI Macro Model); models of physical capital requirements (Bechtel Supply Planning Model, ICF Oil and Gas Model and Coal Model, Stanford Research Institute National Energy Model); macroeconomic forecasting models with input-output analysis capabilities (Wharton Annual Long-Term Forecasting Model, Brookhaven/University of Illinois Model, Hudson-Jorgenson/Brookhaven Model); utility models (MIT Regional Electricity Model-Baughman Joskow, Teknekron Electric Utility Simulation Model); and others (DRI Energy Model, DRI/Zimmerman Coal Model, and Oak Ridge Residential Energy Use Model).

  5. Enhancement of information transmission with stochastic resonance in hippocampal CA1 neuron models: effects of noise input location.

    Science.gov (United States)

    Kawaguchi, Minato; Mino, Hiroyuki; Durand, Dominique M

    2007-01-01

Stochastic resonance (SR) has been shown to enhance the signal-to-noise ratio or the detection of signals in neurons. It is not yet clear how this effect of SR on the signal-to-noise ratio affects signal processing in neural networks. In this paper, we investigate the effects of the location of background noise input on information transmission in a hippocampal CA1 neuron model. In the computer simulation, random sub-threshold spike trains (signal) generated by a filtered homogeneous Poisson process were presented repeatedly to the middle point of the main apical branch, while homogeneous Poisson shot noise (background noise) was applied at a location on the dendrite of the hippocampal CA1 model, which consists of the soma with a sodium, a calcium, and five potassium channels. The location of the background noise input was varied along the dendrites to investigate its effect on information transmission. The computer simulation results show that the information rate reached a maximum value at an optimal amplitude of the background noise. It is also shown that this optimal amplitude is independent of the distance between the soma and the noise input location. The results also show that the location of the background noise input does not significantly affect the maximum values of the information rates generated by stochastic resonance.

  6. Output from Statistical Predictive Models as Input to eLearning Dashboards

    Directory of Open Access Journals (Sweden)

    Marlene A. Smith

    2015-06-01

    Full Text Available We describe how statistical predictive models might play an expanded role in educational analytics by giving students automated, real-time information about what their current performance means for eventual success in eLearning environments. We discuss how an online messaging system might tailor information to individual students using predictive analytics. The proposed system would be data-driven and quantitative; e.g., a message might furnish the probability that a student will successfully complete the certificate requirements of a massive open online course. Repeated messages would prod underperforming students and alert instructors to those in need of intervention. Administrators responsible for accreditation or outcomes assessment would have ready documentation of learning outcomes and actions taken to address unsatisfactory student performance. The article’s brief introduction to statistical predictive models sets the stage for a description of the messaging system. Resources and methods needed to develop and implement the system are discussed.

  7. Application of Context Input Process and Product Model in Curriculum Evaluation: Case Study of a Call Centre

    Science.gov (United States)

    Kavgaoglu, Derya; Alci, Bülent

    2016-01-01

The goal of this research, carried out in reputable dedicated call centres within the Turkish telecommunication sector, is to evaluate competence-based curriculums designed by means of internal funding, using Stufflebeam's context, input, process, product (CIPP) model. In the research, a general scanning pattern in the scope of…

  8. SKIRT: The design of a suite of input models for Monte Carlo radiative transfer simulations

    Science.gov (United States)

    Baes, M.; Camps, P.

    2015-09-01

The Monte Carlo method is the most popular technique for performing radiative transfer simulations in a general 3D geometry. The algorithms behind, and acceleration techniques for, Monte Carlo radiative transfer are discussed extensively in the literature, and many different Monte Carlo codes are publicly available. By contrast, the design of a suite of components that can be used for the distribution of sources and sinks in radiative transfer codes has received very little attention. The availability of such models, with different degrees of complexity, has many benefits. For example, they can serve as toy models to test new physical ingredients, or as parameterised models for inverse radiative transfer fitting. For 3D Monte Carlo codes, this requires algorithms to efficiently generate random positions from 3D density distributions. We describe the design of a flexible suite of components for the Monte Carlo radiative transfer code SKIRT. The design is based on a combination of basic building blocks (which can be either analytical toy models or numerical models defined on grids or a set of particles) and the extensive use of decorators that combine and alter these building blocks into more complex structures. For a number of decorators, e.g. those that add spiral structure or clumpiness, we provide a detailed description of the algorithms that can be used to generate random positions. Advantages of this decorator-based design include code transparency, the avoidance of code duplication, and an increase in code maintainability. Moreover, since decorators can be chained without problems, very complex models can easily be constructed out of simple building blocks. Finally, based on a number of test simulations, we demonstrate that our design using customised random position generators is superior to a simpler design based on a generic black-box random position generator.
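One standard way to generate random positions from an analytical 3D density, as required above, is rejection sampling. This sketch uses a toy Plummer-like profile and is not SKIRT's actual implementation.

```python
import random

def sample_positions(density, bounds, rho_max, n):
    """Rejection sampling: draw n random positions from a 3D density.
    density(x, y, z) must be bounded above by rho_max inside bounds."""
    (x0, x1), (y0, y1), (z0, z1) = bounds
    out = []
    while len(out) < n:
        p = (random.uniform(x0, x1),
             random.uniform(y0, y1),
             random.uniform(z0, z1))
        # accept the candidate with probability density(p) / rho_max
        if random.uniform(0.0, rho_max) < density(*p):
            out.append(p)
    return out

def plummer(x, y, z, a=1.0):
    """A simple spherically symmetric toy density (Plummer-like profile)."""
    r2 = x * x + y * y + z * z
    return (1.0 + r2 / (a * a)) ** -2.5

pts = sample_positions(plummer, [(-5, 5)] * 3, rho_max=1.0, n=1000)
```

Rejection sampling is general but can be slow for sharply peaked densities, which is one motivation for the customised per-component generators described in the abstract.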

  9. Modeling requirements for in situ vitrification

    International Nuclear Information System (INIS)

    MacKinnon, R.J.; Mecham, D.C.; Hagrman, D.L.; Johnson, R.W.; Murray, P.E.; Slater, C.E.; Marwil, E.S.; Weaver, R.A.; Argyle, M.D.

    1991-11-01

    This document outlines the requirements for the model being developed at the INEL which will provide analytical support for the ISV technology assessment program. The model includes representations of the electric potential field, thermal transport with melting, gas and particulate release, vapor migration, off-gas combustion and process chemistry. The modeling objectives are to (1) help determine the safety of the process by assessing the air and surrounding soil radionuclide and chemical pollution hazards, the nuclear criticality hazard, and the explosion and fire hazards, (2) help determine the suitability of the ISV process for stabilizing the buried wastes involved, and (3) help design laboratory and field tests and interpret results therefrom

  10. Enhancement of regional wet deposition estimates based on modeled precipitation inputs

    Science.gov (United States)

    James A. Lynch; Jeffery W. Grimm; Edward S. Corbett

    1996-01-01

Application of a variety of two-dimensional interpolation algorithms to precipitation chemistry data gathered at scattered monitoring sites, for the purpose of estimating precipitation-borne ionic inputs at specific points or over regions, has failed to produce accurate estimates. The accuracy of these estimates is particularly poor in areas of high topographic relief....

  11. Input-output supervisor

    International Nuclear Information System (INIS)

    Dupuy, R.

    1970-01-01

The input-output supervisor is the program which monitors the flow of information between core storage and the peripheral equipment of a computer. This work is composed of three parts: 1 - Study of a generalized input-output supervisor. With simple modifications it resembles most of the input-output supervisors currently running on computers. 2 - Application of this theory to a magnetic drum. 3 - Hardware requirements for time-sharing. (author) [fr

  12. The input and output management of solid waste using DEA models: A case study at Jengka, Pahang

    Science.gov (United States)

    Mohamed, Siti Rosiah; Ghazali, Nur Fadzrina Mohd; Mohd, Ainun Hafizah

    2017-08-01

Data Envelopment Analysis (DEA), as a tool for obtaining performance indices, has been used extensively in several organizational sectors. Improving the efficiency of Decision Making Units (DMUs) can be impractical because some inputs and outputs are uncontrollable, and in certain situations they produce weak efficiency scores which often reflect the impact of the operating environment. Based on data from Alam Flora Sdn. Bhd Jengka, this study determines the efficiency of solid waste management (SWM) in the town of Jengka, Pahang, using the CCRI and CCRO models of DEA and the duality formulation with vector average input and output. Three input variables (collection length in meters, collection frequency per week in hours, and number of garbage trucks) and two output variables (collection frequency and total solid waste collected in kilograms) are analyzed. In conclusion, only three of the 23 roads are efficient, achieving an efficiency score of 1, while the other 20 roads are managed inefficiently.

  13. Information Models, Data Requirements, and Agile Data Curation

    Science.gov (United States)

    Hughes, John S.; Crichton, Dan; Ritschel, Bernd; Hardman, Sean; Joyner, Ron

    2015-04-01

The Planetary Data System's next generation system, PDS4, is an example of the successful use of an ontology-based Information Model (IM) to drive the development and operations of a data system. In traditional systems engineering, requirements or statements about what is necessary for the system are collected and analyzed for input into the design stage of systems development. With the advent of big data, the requirements associated with data have begun to dominate, and an ontology-based information model can be used to provide a formalized and rigorous set of data requirements. These requirements address not only the usual issues of data quantity, quality, and disposition but also data representation, integrity, provenance, context, and semantics. In addition, the use of these data requirements during systems development has many characteristics of Agile Curation as proposed by Young et al. [Taking Another Look at the Data Management Life Cycle: Deconstruction, Agile, and Community, AGU 2014], namely adaptive planning, evolutionary development, early delivery, continuous improvement, and rapid and flexible response to change. For example, customers can be satisfied through early and continuous delivery of system software and services that are configured directly from the information model. This presentation will describe the PDS4 architecture and its three principal parts: the ontology-based Information Model (IM), the federated registries and repositories, and the REST-based service layer for search, retrieval, and distribution. The development of the IM will be highlighted with special emphasis on knowledge acquisition, the impact of the IM on development and operations, and the use of shared ontologies at multiple governance levels to promote system interoperability and data correlation.

  14. MODEL AND METHOD FOR SYNTHESIS OF PROJECT MANAGEMENT METHODOLOGY WITH FUZZY INPUT DATA

    Directory of Open Access Journals (Sweden)

    Igor V. KONONENKO

    2016-02-01

Full Text Available A literature analysis concerning the selection or creation of a project management methodology is performed. The creation of a "complete" methodology is proposed which can be applied to managing projects of any complexity, with various degrees of responsibility for results and different predictability of the requirements. To form a "complete" methodology, it is proposed to take the PMBOK standard as the basis, supplemented by processes of the most demanding plan-driven and flexible Agile methodologies. For each knowledge area of the PMBOK standard, the following groups of processes should be provided: initiation, planning, execution, reporting and forecasting, controlling, analysis, decision making, and closing. A method for generating a methodology for a specific project is presented. A multiple-criteria mathematical model and method are developed for the synthesis of a methodology when the initial data about the project and its environment are fuzzy.

  15. Initiation of male sperm-transfer behavior in Caenorhabditis elegans requires input from the ventral nerve cord

    Directory of Open Access Journals (Sweden)

    Gharib Shahla

    2006-08-01

    Full Text Available Abstract Background The Caenorhabditis elegans male exhibits a stereotypic behavioral pattern when attempting to mate. This behavior has been divided into the following steps: response, backing, turning, vulva location, spicule insertion, and sperm transfer. We and others have begun in-depth analyses of all these steps in order to understand how complex behaviors are generated. Here we extend our understanding of the sperm-transfer step of male mating behavior. Results Based on observation of wild-type males and on genetic analysis, we have divided the sperm-transfer step of mating behavior into four sub-steps: initiation, release, continued transfer, and cessation. To begin to understand how these sub-steps of sperm transfer are regulated, we screened for ethylmethanesulfonate (EMS-induced mutations that cause males to transfer sperm aberrantly. We isolated an allele of unc-18, a previously reported member of the Sec1/Munc-18 (SM family of proteins that is necessary for regulated exocytosis in C. elegans motor neurons. Our allele, sy671, is defective in two distinct sub-steps of sperm transfer: initiation and continued transfer. By a series of transgenic site-of-action experiments, we found that motor neurons in the ventral nerve cord require UNC-18 for the initiation of sperm transfer, and that UNC-18 acts downstream or in parallel to the SPV sensory neurons in this process. In addition to this neuronal requirement, we found that non-neuronal expression of UNC-18, in the male gonad, is necessary for the continuation of sperm transfer. Conclusion Our division of sperm-transfer behavior into sub-steps has provided a framework for the further detailed analysis of sperm transfer and its integration with other aspects of mating behavior. By determining the site of action of UNC-18 in sperm-transfer behavior, and its relation to the SPV sensory neurons, we have further defined the cells and tissues involved in the generation of this behavior. We

  16. Initiation of male sperm-transfer behavior in Caenorhabditis elegans requires input from the ventral nerve cord.

    Science.gov (United States)

    Schindelman, Gary; Whittaker, Allyson J; Thum, Jian Yuan; Gharib, Shahla; Sternberg, Paul W

    2006-08-15

    The Caenorhabditis elegans male exhibits a stereotypic behavioral pattern when attempting to mate. This behavior has been divided into the following steps: response, backing, turning, vulva location, spicule insertion, and sperm transfer. We and others have begun in-depth analyses of all these steps in order to understand how complex behaviors are generated. Here we extend our understanding of the sperm-transfer step of male mating behavior. Based on observation of wild-type males and on genetic analysis, we have divided the sperm-transfer step of mating behavior into four sub-steps: initiation, release, continued transfer, and cessation. To begin to understand how these sub-steps of sperm transfer are regulated, we screened for ethylmethanesulfonate (EMS)-induced mutations that cause males to transfer sperm aberrantly. We isolated an allele of unc-18, a previously reported member of the Sec1/Munc-18 (SM) family of proteins that is necessary for regulated exocytosis in C. elegans motor neurons. Our allele, sy671, is defective in two distinct sub-steps of sperm transfer: initiation and continued transfer. By a series of transgenic site-of-action experiments, we found that motor neurons in the ventral nerve cord require UNC-18 for the initiation of sperm transfer, and that UNC-18 acts downstream or in parallel to the SPV sensory neurons in this process. In addition to this neuronal requirement, we found that non-neuronal expression of UNC-18, in the male gonad, is necessary for the continuation of sperm transfer. Our division of sperm-transfer behavior into sub-steps has provided a framework for the further detailed analysis of sperm transfer and its integration with other aspects of mating behavior. By determining the site of action of UNC-18 in sperm-transfer behavior, and its relation to the SPV sensory neurons, we have further defined the cells and tissues involved in the generation of this behavior. 
We have shown both a neuronal and non-neuronal requirement for

  17. The effect of adjusting model inputs to achieve mass balance on time-dynamic simulations in a food-web model of Lake Huron

    Science.gov (United States)

    Langseth, Brian J.; Jones, Michael L.; Riley, Stephen C.

    2014-01-01

Ecopath with Ecosim (EwE) is a widely used modeling tool in fishery research and management. Ecopath requires a mass-balanced snapshot of a food web at a particular point in time, which Ecosim then uses to simulate changes in biomass over time. Initial inputs to Ecopath, including estimates for biomasses, production to biomass ratios, consumption to biomass ratios, and diets, rarely produce mass balance, and thus ad hoc changes to inputs are required to balance the model. There has been little previous research on whether ad hoc changes made to achieve mass balance affect Ecosim simulations. We constructed an EwE model for the offshore community of Lake Huron, and balanced the model using four contrasting but realistic methods. The four balancing methods were based on two contrasting approaches; in the first approach, production of unbalanced groups was increased by increasing either biomass or the production to biomass ratio, while in the second approach, consumption of predators on unbalanced groups was decreased by decreasing either biomass or the consumption to biomass ratio. We compared six simulation scenarios based on three alternative assumptions about the extent to which mortality rates of prey can change in response to changes in predator biomass (i.e., vulnerabilities) under perturbations to either fishing mortality or environmental production. Changes in simulated biomass values over time were used in a principal components analysis to assess the comparative effect of balancing method, vulnerabilities, and perturbation types. Vulnerabilities explained the most variation in biomass, followed by the type of perturbation. Choice of balancing method explained little of the overall variation in biomass. 
Under scenarios where changes in predator biomass caused large changes in mortality rates of prey (i.e., high vulnerabilities), variation in biomass was greater than when changes in predator biomass caused only small changes in mortality rates of prey (i.e., low
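The mass-balance condition at the heart of Ecopath can be illustrated by computing each group's ecotrophic efficiency (EE), which must not exceed 1 for a balanced model. The two-group numbers below are invented for illustration.

```python
import numpy as np

def ecotrophic_efficiency(B, PB, QB, DC):
    """Simplified Ecopath mass balance: EE_i = predation on i / production of i.
    B: biomasses, PB: production/biomass ratios, QB: consumption/biomass
    ratios, DC[j, i]: fraction of predator j's diet made up of prey i.
    A group is balanced only if EE_i <= 1 (ignoring catch and export here)."""
    predation = (B * QB) @ DC      # total consumption of each prey group
    production = B * PB
    return predation / production

B  = np.array([10.0, 50.0])        # predator, prey biomasses
PB = np.array([0.5, 2.0])          # production / biomass
QB = np.array([3.0, 10.0])         # consumption / biomass
DC = np.array([[0.0, 0.8],         # predator's diet: 80% prey
               [0.0, 0.0]])
print(ecotrophic_efficiency(B, PB, QB, DC))
```

The ad hoc balancing methods compared in the study amount to different ways of pushing any EE above 1 back below 1: raising the prey's B or P/B (more production), or lowering the predator's B or Q/B (less consumption).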

  18. A grey neural network and input-output combined forecasting model. Primary energy consumption forecasts in Spanish economic sectors

    International Nuclear Information System (INIS)

    Liu, Xiuli; Moreno, Blanca; García, Ana Salomé

    2016-01-01

A combined forecast of the Grey forecasting method and a back-propagation neural network model, called the Grey Neural Network and Input-Output Combined Forecasting Model (GNF-IO model), is proposed. A real case of energy consumption forecasting is used to validate the effectiveness of the proposed model. The GNF-IO model predicts coal, crude oil, natural gas, renewable and nuclear primary energy consumption volumes for Spain's 36 sub-sectors from 2010 to 2015 according to three different GDP growth scenarios (optimistic, baseline and pessimistic). Model tests show that the proposed model has higher simulation and forecasting accuracy for energy consumption than the Grey models separately and other combination methods. The forecasts indicate that primary energies such as coal, crude oil and natural gas will on average represent 83.6% of total primary energy consumption, raising concerns about security of supply and energy cost and adding risk for some industrial production processes. Thus, Spanish industry must speed up its transition to an energy-efficient economy, achieving a cost reduction and an increase in the level of self-supply. - Highlights: • A forecasting system using Grey models combined with input-output models is proposed. • Primary energy consumption in Spain is used to validate the model. • The grey-based combined model has good forecasting performance. • Natural gas will represent the majority of total primary energy consumption. • Concerns about security of supply, energy cost and industry competitiveness are raised.
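The Grey component of such combined forecasts is typically a GM(1,1) model. A minimal sketch follows, with invented consumption figures rather than the paper's data.

```python
import numpy as np

def gm11_forecast(x, steps=1):
    """GM(1,1) grey model: fit dx/dt + a*x = b on the accumulated series
    and forecast the next `steps` values of the raw series."""
    x = np.asarray(x, dtype=float)
    x1 = np.cumsum(x)                         # accumulated generating series
    z = 0.5 * (x1[1:] + x1[:-1])              # background (mean) sequence
    B = np.column_stack([-z, np.ones(len(z))])
    a, b = np.linalg.lstsq(B, x[1:], rcond=None)[0]

    def x1_hat(k):                            # accumulated forecast at index k
        return (x[0] - b / a) * np.exp(-a * k) + b / a

    n = len(x)
    return [x1_hat(k) - x1_hat(k - 1) for k in range(n, n + steps)]

energy = [102.0, 107.5, 113.1, 119.3, 125.6]  # hypothetical annual consumption
print(gm11_forecast(energy, steps=2))
```

In the combined GNF-IO scheme, such grey forecasts are blended with neural network predictions and constrained by input-output sector relationships.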

  19. A study on the multi-dimensional spectral analysis for response of a piping model with two-seismic inputs

    International Nuclear Information System (INIS)

    Suzuki, K.; Sato, H.

    1975-01-01

The power and cross-power spectrum analysis, by which the vibration characteristics of structures, such as natural frequency, mode of vibration and damping ratio, can be identified, is effective for confirming these characteristics after construction is completed, using the response to small earthquakes or the micro-tremor under operating conditions. This method of analysis, previously utilized only from the viewpoint of single-input systems, is here extensively applied to the analysis of a medium-scale model of a piping system subjected to two seismic inputs. The piping system, attached to a three-storied concrete structure model constructed on a shaking table, was excited by earthquake motions. The inputs to the piping system were recorded at the second floor and the ceiling of the third floor, where the system was attached. The output, the response of the piping system, was measured at a middle point of the system. As a result, the multi-dimensional power spectrum analysis is effective for a more reliable identification of the vibration characteristics of a multi-input structural system.
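Identifying a shared dominant frequency from input and output records, as described above, can be sketched with a cross power spectrum estimate. The signals here are synthetic, not the experiment's records.

```python
import numpy as np

def cross_spectrum_peak(x, y, fs):
    """Estimate the dominant shared frequency of input x and output y
    from the magnitude of their cross power spectrum."""
    X = np.fft.rfft(x - x.mean())
    Y = np.fft.rfft(y - y.mean())
    cross = np.conj(X) * Y                       # cross power spectrum
    freqs = np.fft.rfftfreq(len(x), d=1.0 / fs)
    return freqs[np.argmax(np.abs(cross))]

fs = 200.0                                       # sampling rate (Hz)
t = np.arange(0, 20, 1 / fs)
rng = np.random.default_rng(1)
# synthetic excitation and phase-lagged structural response at 5 Hz
x = np.sin(2 * np.pi * 5.0 * t) + 0.5 * rng.normal(size=t.size)
y = 0.8 * np.sin(2 * np.pi * 5.0 * t - 0.6) + 0.5 * rng.normal(size=t.size)
print(cross_spectrum_peak(x, y, fs))
```

With two inputs, as in the piping experiment, the same machinery generalises to a matrix of cross spectra between each input and the output.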

  20. Pre-Mission Input Requirements to Enable Successful Sample Collection by A Remote Field/EVA Team

    Science.gov (United States)

    Cohen, B. A.; Lim, D. S. S.; Young, K. E.; Brunner, A.; Elphic, R. E.; Horne, A.; Kerrigan, M. C.; Osinski, G. R.; Skok, J. R.; Squyres, S. W.; hide

    2016-01-01

    The FINESSE (Field Investigations to Enable Solar System Science and Exploration) team, part of the Solar System Exploration Virtual Institute (SSERVI), is a field-based research program aimed at generating strategic knowledge in preparation for human and robotic exploration of the Moon, near-Earth asteroids, Phobos and Deimos, and beyond. In contrast to other technology-driven NASA analog studies, the FINESSE WCIS activity is science-focused and, moreover, is sampling-focused, with the explicit intent to return the best samples for geochronology studies in the laboratory. We used the FINESSE field excursion to the West Clearwater Lake Impact structure (WCIS) as an opportunity to test factors related to sampling decisions. We examined the in situ sample characterization and real-time decision-making process of the astronauts, with a guiding hypothesis that pre-mission training including detailed background information on the analytical fate of a sample would better enable future astronauts to select samples that best meet science requirements. We conducted three tests of this hypothesis over several days in the field. Our investigation was designed to document processes, tools and procedures for crew sampling of planetary targets. This was not meant to be a blind, controlled test of crew efficacy, but rather an effort to explicitly recognize the relevant variables that enter into sampling protocol and to develop recommendations for crew and backroom training in future endeavors.

  1. An extended environmental input-output lifecycle assessment model to study the urban food-energy-water nexus

    Science.gov (United States)

    Sherwood, John; Clabeaux, Raeanne; Carbajales-Dale, Michael

    2017-10-01

    We developed a physically-based environmental account of US food production systems and integrated these data into the environmental input-output life cycle assessment (EIO-LCA) model. The extended model was used to characterize the food, energy, and water (FEW) intensities of every US economic sector. The model was then applied to every Bureau of Economic Analysis metropolitan statistical area (MSA) to determine their FEW usages. The extended EIO-LCA model can determine the water resource use (kGal), energy resource use (TJ), and food resource use in units of mass (kg) or energy content (kcal) of any economic activity within the United States. We analyzed every economic sector to determine its FEW intensities per dollar of economic output. These data were applied to each of the 382 MSAs to determine their total and per-dollar-of-GDP FEW usages, by allocating MSA economic production to the corresponding FEW intensities of US economic sectors. Additionally, a longitudinal study was performed for the Los Angeles-Long Beach-Anaheim, CA, metropolitan statistical area to examine trends from this single MSA and compare it to the overall results. Results show a strong correlation between GDP and energy use, and between food and water use across MSAs. There is also a correlation between GDP and greenhouse gas emissions. The longitudinal study indicates that these correlations can shift alongside a shifting industrial composition. Comparing MSAs on a per-GDP basis reveals that central and southern California tend to be more resource intensive than many other parts of the country, while much of Florida has abnormally low resource requirements. Results of this study enable a more complete understanding of food, energy, and water as key ingredients to a functioning economy. With the addition of the food data to the EIO-LCA framework, researchers will be able to better study the food-energy-water nexus and gain insight into how these three vital resources are interconnected.
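The intensity calculation behind EIO-LCA-style accounting is the Leontief total-requirements chain: total output x = (I - A)^-1 y for final demand y, and total resource intensity is the direct intensity propagated through that inverse. A minimal sketch under assumed numbers (the 3-sector matrix, intensities and demand vector below are purely illustrative, not the model's data):

```python
import numpy as np

# Hypothetical 3-sector economy: A[i, j] = purchases from sector i
# per dollar of sector j's output (direct requirements matrix).
A = np.array([[0.10, 0.05, 0.02],
              [0.20, 0.10, 0.10],
              [0.05, 0.15, 0.05]])

# Illustrative direct water intensity of each sector (kGal per $ of output)
water_direct = np.array([0.8, 0.1, 0.3])

L = np.linalg.inv(np.eye(3) - A)        # Leontief inverse: total requirements
water_total = water_direct @ L          # total (direct + indirect) intensity per $

demand = np.array([1000.0, 500.0, 200.0])   # an MSA's final demand ($), invented
x = L @ demand                              # total sector output needed
embodied_water = water_total @ demand       # kGal embodied in that final demand
```

Because L = I + A + A^2 + ... accumulates all upstream rounds of production, the total intensity of every sector is at least its direct intensity, which is what makes the "embodied" figures larger than direct-use figures.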

  2. Terrestrial ecosystem recovery - Modelling the effects of reduced acidic inputs and increased inputs of sea-salts induced by global change

    DEFF Research Database (Denmark)

    Beier, C.; Moldan, F.; Wright, R.F.

    2003-01-01

    to 3 large-scale "clean rain" experiments, the so-called roof experiments at Risdalsheia, Norway; Gardsjon, Sweden, and Klosterhede, Denmark. Implementation of the Gothenburg protocol will initiate recovery of the soils at all 3 sites by rebuilding base saturation. The rate of recovery is small...... and base saturation increases less than 5% over the next 30 years. A climate-induced increase in storm severity will increase the sea-salt input to the ecosystems. This will provide additional base cations to the soils and more than double the rate of the recovery, but also lead to strong acid pulses...... following high sea-salt inputs as the deposited base cations exchange with the acidity stored in the soil. Future recovery of soils and runoff at acidified catchments will thus depend on the amount and rate of reduction of acid deposition, and in the case of systems near the coast, the frequency...

  3. SISTEM KONTROL OTOMATIK DENGAN MODEL SINGLE-INPUT-DUAL-OUTPUT DALAM KENDALI EFISIENSI UMUR-PEMAKAIAN INSTRUMEN

    Directory of Open Access Journals (Sweden)

    S.N.M.P. Simamora

    2014-10-01

    An efficiency condition occurs when the ratio of useful output to the total resources used approaches the value 1 (the absolute limit). An instrument achieves efficiency if its power usage decreases significantly over the instrument's service life compared with the previous condition, in which the instrument was not equipped with the additional system (the proposed improved model). The approach is even more effective if the inputs are used in unison to achieve a homogeneous output. In this research, an automatic control system for a single-input-dual-output model has been designed and implemented, in which the sample instruments used are a lamp and a fan. The source voltage used is AC (alternating current), and the system was tested using quantitative research and instrumentation methods, with observations made using measuring instruments. The results obtained demonstrate that instrument efficiency improved significantly under the single-input-dual-output model, applied in separate trials to the lamp and the fan, compared with the prior condition. They also show that the design that has been built runs well.

  4. Performance assessment of retrospective meteorological inputs for use in air quality modeling during TexAQS 2006

    Science.gov (United States)

    Ngan, Fong; Byun, Daewon; Kim, Hyuncheol; Lee, Daegyun; Rappenglück, Bernhard; Pour-Biazar, Arastoo

    2012-07-01

    To obtain more accurate meteorological inputs than those used in the daily forecast for studying TexAQS 2006 air quality, retrospective simulations were conducted using objective analysis and 3D/surface analysis nudging with surface and upper-air observations. Modeled ozone using the assimilated meteorological fields with improved wind fields shows better agreement with observations than the forecast results. In post-frontal conditions, the important factors for ozone modeling in terms of wind patterns are the weak easterlies in the morning, which bring industrial emissions into the city, and the subsequent clockwise turning of the wind direction, induced by the Coriolis force superimposed on the sea breeze, which keeps pollutants in the urban area. Objective analysis and nudging employed in the retrospective simulation minimize the wind bias but cannot compensate for the general flow-pattern biases inherited from large-scale inputs. By using alternative analysis data to initialize the meteorological simulation, the model can reproduce the flow pattern and generate an ozone peak location closer to reality. Inaccurate simulation of precipitation and cloudiness occasionally causes over-prediction of ozone. Since the meteorological model is limited in simulating precipitation and cloudiness in the fine-scale domain (grid spacing under 4 km), satellite-based cloud data are an alternative way to provide the necessary inputs for the retrospective study of air quality.
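The analysis nudging mentioned above is a Newtonian relaxation of the model state toward an analysed value. A minimal scalar sketch, with an invented relaxation coefficient G and wind values (real models apply this term field-wide inside the dynamical core):

```python
import numpy as np

def nudge_step(x, x_obs, tendency, G, dt):
    """One explicit time step of analysis nudging: the model's own tendency
    plus a relaxation term G*(obs - model) pulling the state toward the analysis."""
    return x + dt * (tendency + G * (x_obs - x))

# Illustrative: a wind component (m/s) with zero physical tendency,
# relaxed toward an analysed value of 6 m/s with G = 3e-4 s^-1, dt = 60 s.
x, x_obs, G, dt = 2.0, 6.0, 3e-4, 60.0
trace = [x]
for _ in range(600):          # 10 simulated hours
    x = nudge_step(x, x_obs, 0.0, G, dt)
    trace.append(x)
```

The state decays toward the analysis with e-folding time 1/G, which is why nudging can correct a wind bias but, as the abstract notes, cannot fix a flow pattern that the driving analyses themselves get wrong.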

  5. Validation of Power Requirement Model for Active Loudspeakers

    DEFF Research Database (Denmark)

    Schneider, Henrik; Madsen, Anders Normann; Bjerregaard, Ruben

    2015-01-01

    The actual power requirement of an active loudspeaker during playback of music has not received much attention in the literature. This is probably because no single and simple solution exists and because a complete system knowledge from input voltage to output sound pressure level is required...

  6. Dynamics of a Birth-Pulse Single-Species Model with Restricted Toxin Input and Pulse Harvesting

    Directory of Open Access Journals (Sweden)

    Yi Ma

    2010-01-01

    We consider a birth-pulse single-species model with restricted toxin input and pulse harvesting in a polluted environment. Pollution accumulates as a slowly decaying stock and is assumed to affect the growth of the renewable resource population. First, by using the discrete dynamical system determined by the stroboscopic map, we obtain an exact 1-period solution of the system whose birth function is the Ricker function or the Beverton-Holt function, and we obtain the threshold conditions for their stability. Furthermore, we show that the timing of harvesting has a strong impact on the maximum annual sustainable yield; the best timing of harvesting is immediately after the birth pulses. Finally, we investigate the effect of the amount of toxin input on the stable resource population size. We find that when the birth rate is comparatively low, the population size decreases with increasing toxin input, and that when the birth rate is high, the population size may first rise and then drop as toxin input increases.
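The stroboscopic-map idea can be sketched numerically. The map below uses continuous mortality between pulses, a Ricker birth pulse, and proportional harvesting taken immediately after the pulse (the timing the paper finds best); the parameter values are invented and the toxin dynamics are omitted for brevity.

```python
import numpy as np

def year_map(N, r, d, h):
    """Hypothetical stroboscopic map for one year: exponential mortality at
    rate d between pulses, a Ricker birth pulse, then proportional harvest h
    applied immediately after the pulse."""
    s = N * np.exp(-d)                   # survivors at the pulse time
    after_birth = s + s * np.exp(r - s)  # Ricker birth pulse
    return (1.0 - h) * after_birth

def equilibrium(h, r=1.5, d=0.4, N0=0.5, iters=400):
    """Iterate the yearly map to its stable fixed point (when one exists)."""
    N = N0
    for _ in range(iters):
        N = year_map(N, r, d, h)
    return N

low_harvest = equilibrium(0.1)    # mild harvesting
high_harvest = equilibrium(0.5)   # heavy harvesting depresses the equilibrium
```

Fixed points of this yearly map correspond to the 1-period solutions analysed in the paper, and sweeping h or a toxin-dependent mortality d reproduces the kind of threshold behaviour the abstract describes.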

  7. Synaptic inputs compete during rapid formation of the calyx of Held: a new model system for neural development.

    Science.gov (United States)

    Holcomb, Paul S; Hoffpauir, Brian K; Hoyson, Mitchell C; Jackson, Dakota R; Deerinck, Thomas J; Marrs, Glenn S; Dehoff, Marlin; Wu, Jonathan; Ellisman, Mark H; Spirou, George A

    2013-08-07

    Hallmark features of neural circuit development include early exuberant innervation followed by competition and pruning to mature innervation topography. Several neural systems, including the neuromuscular junction and climbing fiber innervation of Purkinje cells, are models to study neural development in part because they establish a recognizable endpoint of monoinnervation of their targets and because the presynaptic terminals are large and easily monitored. We demonstrate here that calyx of Held (CH) innervation of its target, which forms a key element of auditory brainstem binaural circuitry, exhibits all of these characteristics. To investigate CH development, we made the first application of serial block-face scanning electron microscopy to neural development with fine temporal resolution and thereby accomplished the first time series for 3D ultrastructural analysis of neural circuit formation. This approach revealed a growth spurt of added apposed surface area (ASA) >200 μm²/day centered on a single age at postnatal day 3 in mice and an initial rapid phase of growth and competition that resolved to monoinnervation in two-thirds of cells within 3 d. This rapid growth occurred in parallel with an increase in action potential threshold, which may mediate selection of the strongest input as the winning competitor. ASAs of competing inputs were segregated on the cell body surface. These data suggest mechanisms to select "winning" inputs by regional reinforcement of postsynaptic membrane to mediate size and strength of competing synaptic inputs.

  8. Development of a MODIS-Derived Surface Albedo Data Set: An Improved Model Input for Processing the NSRDB

    Energy Technology Data Exchange (ETDEWEB)

    Maclaurin, Galen [National Renewable Energy Lab. (NREL), Golden, CO (United States); Sengupta, Manajit [National Renewable Energy Lab. (NREL), Golden, CO (United States); Xie, Yu [National Renewable Energy Lab. (NREL), Golden, CO (United States); Gilroy, Nicholas [National Renewable Energy Lab. (NREL), Golden, CO (United States)

    2016-12-01

    A significant source of bias in the transposition of global horizontal irradiance to plane-of-array (POA) irradiance arises from inaccurate estimations of surface albedo. The current physics-based model used to produce the National Solar Radiation Database (NSRDB) relies on model estimations of surface albedo from a reanalysis climatology produced at relatively coarse spatial resolution compared to that of the NSRDB. As an input to spectral decomposition and transposition models, more accurate surface albedo data from remotely sensed imagery at finer spatial resolutions would improve accuracy in the final product. The National Renewable Energy Laboratory (NREL) developed an improved white-sky (bi-hemispherical reflectance) broadband (0.3-5.0 μm) surface albedo data set for processing the NSRDB from two existing data sets: a gap-filled albedo product and a daily snow cover product. The Moderate Resolution Imaging Spectroradiometer (MODIS) sensors onboard the Terra and Aqua satellites have provided high-quality measurements of surface albedo at 30 arc-second spatial resolution and 8-day temporal resolution since 2001. The high spatial and temporal resolutions and the temporal coverage of the MODIS sensor will allow for improved modeling of POA irradiance in the NSRDB. However, cloud and snow cover interfere with MODIS observations of ground surface albedo, and thus the observations require post-processing. The MODIS production team applied a gap-filling methodology to interpolate observations obscured by clouds or ephemeral snow. This approach filled pixels with ephemeral snow cover because the 8-day temporal resolution is too coarse to accurately capture the variability of snow cover and its impact on albedo estimates. However, for this project, accurate representation of daily snow cover change is important in producing the NSRDB. Therefore, NREL also used the Integrated Multisensor Snow and Ice Mapping System data set, which provides daily snow cover observations of the

  9. The direct and indirect household energy requirements in the Republic of Korea from 1980 to 2000 - An input-output analysis

    International Nuclear Information System (INIS)

    Park, Hi-Chun; Heo, Eunnyeong

    2007-01-01

    As energy conservation can be realized through changes in the composition of goods and services consumed, there is a need to assess indirect and total household energy requirements. The Korean household sector was responsible for about 52% of the national primary energy requirement in the period from 1980 to 2000. Of this total, more than 60% of the household energy requirement was indirect. Thus, not only direct but also indirect household energy requirements should be the target of energy conservation policies. Electricity became the main fuel in household energy use in 2000. Households consume more and more electricity-intensive goods and services, a sign of increasing living standards. Increases in household consumption expenditure were responsible for a relatively high growth of energy consumption. Switching to consumption of less energy-intensive products and the decrease in the energy intensities of products in the 1990s contributed substantially to curbing the increase in the total household energy requirement. A future Korean study should apply a hybrid method to reduce the errors caused by using uniform (average) prices in constructing energy input-output tables and to make the energy intensities of different years more comparable. (author)

  10. A Diffusion Approximation and Numerical Methods for Adaptive Neuron Models with Stochastic Inputs.

    Science.gov (United States)

    Rosenbaum, Robert

    2016-01-01

    Characterizing the spiking statistics of neurons receiving noisy synaptic input is a central problem in computational neuroscience. Monte Carlo approaches to this problem are computationally expensive and often fail to provide mechanistic insight. Thus, the field has seen the development of mathematical and numerical approaches, often relying on a Fokker-Planck formalism. These approaches force a compromise between biological realism, accuracy and computational efficiency. In this article we develop an extension of existing diffusion approximations to more accurately approximate the response of neurons with adaptation currents and noisy synaptic currents. The implementation refines existing numerical schemes for solving the associated Fokker-Planck equations to improve computational efficiency and accuracy. Computer code implementing the developed algorithms is made available to the public.
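Diffusion approximations of this kind are usually validated against direct Monte Carlo simulation. A minimal Euler-Maruyama sketch of an adaptive leaky integrate-and-fire neuron driven by a diffusion input (all parameter values are illustrative, not from the paper):

```python
import numpy as np

def lif_rate(b, T=20000.0, dt=0.1, seed=1):
    """Euler-Maruyama simulation of a leaky integrate-and-fire neuron with a
    spike-triggered adaptation current w (increment b, decay tau_w), driven by
    a diffusion input with drift mu and noise sigma. Returns the rate in Hz."""
    rng = np.random.default_rng(seed)
    tau_m, tau_w = 10.0, 100.0        # membrane / adaptation time constants (ms)
    mu, sigma = 1.2, 0.5              # drift and noise of the diffusion input
    v_th, v_reset = 1.0, 0.0          # spike threshold and reset
    v = w = 0.0
    spikes = 0
    for xi in rng.standard_normal(int(T / dt)):
        v += dt * (mu - v / tau_m - w) + sigma * np.sqrt(dt) * xi
        w += dt * (-w / tau_w)        # adaptation decays between spikes
        if v >= v_th:                 # threshold crossing: reset and adapt
            v, w, spikes = v_reset, w + b, spikes + 1
    return spikes / (T / 1000.0)      # spikes per second

rate_with_adaptation = lif_rate(b=0.1)
rate_without = lif_rate(b=0.0)
```

The Fokker-Planck schemes the article refines compute the same firing statistics directly from the membrane-potential density, avoiding the sampling noise and cost of simulations like this one.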

  11. Nonlinear neural network for hemodynamic model state and input estimation using fMRI data

    KAUST Repository

    Karam, Ayman M.

    2014-11-01

    Originally inspired by biological neural networks, artificial neural networks (ANNs) are powerful mathematical tools that can solve complex nonlinear problems such as filtering, classification, prediction and more. This paper demonstrates the first successful implementation of ANNs, specifically nonlinear autoregressive with exogenous input (NARX) networks, to estimate the hemodynamic states and neural activity from simulated and measured real blood oxygenation level dependent (BOLD) signals. Blocked and event-related BOLD data are used to test the algorithm on real experiments. The proposed method is accurate and robust even in the presence of signal noise, and it does not depend on the sampling interval. Moreover, the structure of the NARX networks is optimized to yield the best estimate with minimal network architecture. The results of the estimated neural activity are also discussed in terms of their potential use.
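The NARX input structure (lagged outputs plus lagged exogenous inputs) can be shown with a transparent linear stand-in fit by least squares; a real NARX network replaces this linear map with a trained nonlinear one but keeps the same lag structure. The synthetic system and coefficients below are invented for illustration.

```python
import numpy as np

def fit_linear_narx(u, y, ny=2, nu=2):
    """Least-squares fit of a linear NARX-structured one-step predictor:
    y[t] ~ w . [y[t-1..t-ny], u[t-1..t-nu], 1]."""
    start = max(ny, nu)
    X = np.array([np.r_[y[t - ny:t][::-1], u[t - nu:t][::-1], 1.0]
                  for t in range(start, len(y))])
    w, *_ = np.linalg.lstsq(X, y[start:], rcond=None)
    return w

# Synthetic noise-free system with known lag coefficients
rng = np.random.default_rng(0)
u = rng.standard_normal(300)
y = np.zeros(300)
for t in range(2, 300):
    y[t] = 0.5 * y[t - 1] - 0.2 * y[t - 2] + 0.3 * u[t - 1]

w = fit_linear_narx(u, y)   # recovers ~[0.5, -0.2, 0.3, 0.0, 0.0]
```

In the paper's setting, u would be the (estimated) neural drive, y the BOLD signal, and the linear map would be replaced by a trained network so the nonlinear hemodynamic response can be captured.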

  12. Realistic modelling of the seismic input Site effects and parametric studies

    CERN Document Server

    Romanelli, F; Vaccari, F

    2002-01-01

    We illustrate the work done in the framework of a large international cooperation, showing the very recent numerical experiments carried out within the framework of the EC project 'Advanced methods for assessing the seismic vulnerability of existing motorway bridges' (VAB) to assess the importance of non-synchronous seismic excitation of long structures. The definition of the seismic input at the Warth bridge site, i.e. the determination of the seismic ground motion due to an earthquake with a given magnitude and epicentral distance from the site, has been done following a theoretical approach. In order to perform an accurate and realistic estimate of site effects and of differential motion it is necessary to make a parametric study that takes into account the complex combination of the source and propagation parameters, in realistic geological structures. The computation of a wide set of time histories and spectral information, corresponding to possible seismotectonic scenarios for different sources and stru...

  13. The sensitivity of ecosystem service models to choices of input data and spatial resolution

    Science.gov (United States)

    Kenneth J. Bagstad; Erika Cohen; Zachary H. Ancona; Steven. G. McNulty; Ge   Sun

    2018-01-01

    Although ecosystem service (ES) modeling has progressed rapidly in the last 10–15 years, comparative studies on data and model selection effects have become more common only recently. Such studies have drawn mixed conclusions about whether different data and model choices yield divergent results. In this study, we compared the results of different models to address...

  14. Modeling moisture content of fine dead wildland fuels: Input to the BEHAVE fire prediction system

    Science.gov (United States)

    Richard C. Rothermel; Ralph A. Wilson; Glen A. Morris; Stephen S. Sackett

    1986-01-01

    Describes a model for predicting moisture content of fine fuels for use with the BEHAVE fire behavior and fuel modeling system. The model is intended to meet the need for more accurate predictions of fine fuel moisture, particularly in northern conifer stands and on days following rain. The model is based on the Canadian Fine Fuel Moisture Code (FFMC), modified to...

  15. Embodied water analysis for Hebei Province, China by input-output modelling

    Science.gov (United States)

    Liu, Siyuan; Han, Mengyao; Wu, Xudong; Wu, Xiaofang; Li, Zhi; Xia, Xiaohua; Ji, Xi

    2018-03-01

    With the accelerating coordinated development of the Beijing-Tianjin-Hebei region, regional economic integration is recognized as a national strategy. As water scarcity places Hebei Province in a dilemma, it is of critical importance for Hebei Province to balance water resources as well as make full use of its unique advantages in the transition to sustainable development. To our knowledge, related embodied water accounting analyses have been conducted for Beijing and Tianjin, while similar work focused on Hebei has not been reported. In this paper, using the most complete and recent statistics available for Hebei Province, the embodied water use in Hebei Province is analyzed in detail. Based on input-output analysis, the paper presents a complete systems accounting framework for water resources. In addition, a database of embodied water intensity is proposed which is applicable to both intermediate inputs and final demand. The results suggest that the total amount of embodied water in final demand is 10.62 billion m3, of which the water embodied in urban household consumption accounts for more than half. Hebei Province is a net embodied water importer; the water embodied in its commodity trade is 17.20 billion m3. The outcome of this work implies that it is particularly urgent to adjust industrial structure and trade policies for water conservation, to upgrade technology and to improve water utilization. As a result, to relieve water shortages in Hebei Province, it is of crucial importance to regulate the balance of water use within the province, thus balancing water distribution among the various industrial sectors.

  16. Influence of the spatial extent and resolution of input data on soil carbon models in Florida, USA

    Science.gov (United States)

    Vasques, Gustavo M.; Grunwald, S.; Myers, D. Brenton

    2012-12-01

    Understanding the causes of spatial variation of soil carbon (C) has important implications for regional and global C dynamics studies. Soil C predictive models can identify sources of C variation, but may be influenced by scale parameters, including the spatial extent and resolution of input data. Our objective was to investigate the influence of these scale parameters on soil C spatial predictive models in Florida, USA. We used data from three nested spatial extents (Florida, 150,000 km2; Santa Fe River watershed, 3,585 km2; and University of Florida Beef Cattle Station, 5.58 km2) to derive stepwise linear models of soil C as a function of 24 environmental properties. Models were derived within the three extents and for seven resolutions (30-1920 m) of input environmental data in Florida and in the watershed, then cross-evaluated among extents and resolutions, respectively. The quality of soil C models increased with an increase in the spatial extent (R2 from 0.10 in the cattle station to 0.61 in Florida) and with a decrease in the resolution of input data (R2 from 0.33 at 1920-m resolution to 0.61 at 30-m resolution in Florida). Soil and hydrologic variables were the most important across the seven resolutions both in Florida and in the watershed. The spatial extent and resolution of environmental covariates modulate soil C variation and soil-landscape correlations influencing soil C predictive models. Our results provide scale boundaries to observe environmental data and assess soil C spatial patterns, supporting C sequestration, budgeting and monitoring programs.
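The stepwise linear modeling used in such soil-carbon studies can be sketched as greedy forward selection on R²; the covariates and response below are synthetic stand-ins, not the Florida data or the authors' procedure.

```python
import numpy as np

def r_squared(X, y, cols):
    """R^2 of an ordinary least-squares fit of y on the selected columns."""
    A = np.column_stack([X[:, cols], np.ones(len(y))])
    beta, *_ = np.linalg.lstsq(A, y, rcond=None)
    resid = y - A @ beta
    tss = np.sum((y - y.mean()) ** 2)
    return 1.0 - resid @ resid / tss

def forward_stepwise(X, y, k):
    """Greedy forward selection: at each step add the covariate that most
    increases R^2 of the linear model."""
    chosen = []
    for _ in range(k):
        scores = {j: r_squared(X, y, chosen + [j])
                  for j in range(X.shape[1]) if j not in chosen}
        chosen.append(max(scores, key=scores.get))
    return chosen

# Synthetic example: 6 candidate environmental covariates, only #3 and #0 matter
rng = np.random.default_rng(2)
X = rng.standard_normal((200, 6))
y = 2.0 * X[:, 3] + 0.5 * X[:, 0] + 0.01 * rng.standard_normal(200)
print(forward_stepwise(X, y, 2))
```

Re-running a selection like this at different resolutions or extents of the covariate grids is one way the scale effects reported in the abstract manifest: the chosen covariates and the achievable R² both change with the input data.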

  17. Input-dependent life-cycle inventory model of industrial wastewater-treatment processes in the chemical sector.

    Science.gov (United States)

    Köhler, Annette; Hellweg, Stefanie; Recan, Ercan; Hungerbühler, Konrad

    2007-08-01

    Industrial wastewater-treatment systems need to ensure a high level of protection for the environment as a whole. Life-cycle assessment (LCA) comprehensively evaluates the environmental impacts of complex treatment systems, taking into account impacts from auxiliaries and energy consumption as well as emissions. However, the application of LCA is limited by a scarcity of wastewater-specific life-cycle inventory (LCI) data. This study presents a modular gate-to-gate inventory model for industrial wastewater purification in the chemical and related sectors. It enables the calculation of inventory parameters as a function of the wastewater composition and the technologies applied. For this purpose, data on energy and auxiliaries' consumption, wastewater composition, and process parameters were collected from the chemical industry. On this basis, causal relationships between wastewater input, emissions, and technical inputs were identified. These causal relationships were translated into a generic inventory model. Generic and site-specific data ranges for LCI parameters are provided for the following processes: mechanical-biological treatment, high-pressure wet-air oxidation, nanofiltration, and extraction. The input- and technology-dependent process inventories help to bridge data gaps where primary data are not available. Thus, they substantially help to perform an environmental assessment of industrial wastewater purification in the chemical and associated industries, which may be used, for instance, for technology choices.

  18. Methodology for deriving hydrogeological input parameters for safety-analysis models - application to fractured crystalline rocks of Northern Switzerland

    International Nuclear Information System (INIS)

    Vomvoris, S.; Andrews, R.W.; Lanyon, G.W.; Voborny, O.; Wilson, W.

    1996-04-01

    Switzerland is one of many nations with nuclear power that is seeking to identify rock types and locations that would be suitable for the underground disposal of nuclear waste. A common challenge among these programs is to provide engineering designers and safety analysts with a reasonably representative hydrogeological input dataset that synthesizes the relevant information from direct field observations as well as inferences and model results derived from those observations. Needed are estimates of the volumetric flux through a volume of rock and the distribution of that flux into discrete pathways between the repository zones and the biosphere. These fluxes are not directly measurable but must be derived based on understandings of the range of plausible hydrogeologic conditions expected at the location investigated. The methodology described in this report utilizes conceptual and numerical models at various scales to derive the input dataset. The methodology incorporates an innovative approach, called the geometric approach, in which field observations and their associated uncertainty, together with a conceptual representation of those features that most significantly affect the groundwater flow regime, were rigorously applied to generate alternative possible realizations of hydrogeologic features in the geosphere. In this approach, the ranges in the output values directly reflect uncertainties in the input values. As a demonstration, the methodology is applied to the derivation of the hydrogeological dataset for the crystalline basement of Northern Switzerland. (author) figs., tabs., refs

  19. Simulation of a Classically Conditioned Response: Components of the Input Trace and a Cerebellar Neural Network Implementation of the Sutton-Barto-Desmond Model.

    Science.gov (United States)

    1987-09-14

    inputs. Tesauro (1986) has criticized the SB model on the grounds that it is only applicable in situations where inputs are represented locally... Barto, A.G., A temporal-difference model of classical conditioning, Technical Report TR87-509.2, GTE Labs, Waltham, Mass. (1987). Tesauro, G., Simple

  20. Adaptive Fault-Tolerant Control for Flight Systems with Input Saturation and Model Mismatch

    Directory of Open Access Journals (Sweden)

    Man Wang

    2013-01-01

    the original reference model may not be appropriate. Under this circumstance, an adaptive reference model which can also provide satisfactory performance is designed. Simulations of a flight control example are given to illustrate the effectiveness of the proposed scheme.

  1. Effect of Manure vs. Fertilizer Inputs on Productivity of Forage Crop Models

    Directory of Open Access Journals (Sweden)

    Pasquale Martiniello

    2011-06-01

    Manure produced by livestock activity is a hazardous product capable of causing serious environmental pollution. Agronomic management practices on the use of manure may transform it from a waste into a resource. Experiments comparing manure with standard chemical fertilizers (CF) were conducted under a double-cropping-per-year regime (alfalfa, model I; Italian ryegrass-corn, model II; barley-seed sorghum, model III; and horse-bean-silage sorghum, model IV). The total amount of manure applied in the annual forage crops of models II, III and IV was 158, 140 and 80 m3 ha−1, respectively. The manure, applied to the soil by broadcast and injection procedures, provided an amount of nitrogen equal to that supplied by CF. The effect of manure applications on animal feed production and biochemical soil characteristics depended on the models. Weather conditions showed little interaction with the manure and CF treatments. The gross biomass production, in MFU ha−1, of the autumn- and spring-sown models under manure applications was 11,769, 20,525, 11,342 and 21,397 in models I through IV, respectively. The reduction in MFU ha−1 under CF ranged from 10.7% to 13.2% relative to the manure models. The effect of manure on organic carbon and total nitrogen of the topsoil, compared to model I, stressed the parameters as CF, whose amounts were higher in models II and III than in model IV. In percentage terms, relative to model I under manure treatment, organic carbon and total nitrogen were reduced by about 18.5% and 21.9% in models II and III and by 8.8% and 6.3% in model IV, respectively. Manure management may substitute for CF without reducing gross production or the sustainability of cropping systems, thus allowing the opportunity to recycle this waste product for animal forage feeding.

  2. 'Fingerprints' of four crop models as affected by soil input data aggregation

    DEFF Research Database (Denmark)

    Angulo, Carlos; Gaiser, Thomas; Rötter, Reimund P

    2014-01-01

    . In this study we used four crop models (SIMPLACE, DSSAT-CSM, EPIC and DAISY) differing in the detail of modeling above-ground biomass and yield as well as of modeling soil water dynamics, water uptake and drought effects on plants to simulate winter wheat in two (agro-climatologically and geo-morphologically...

  3. Modelling the tongue-of-ionisation using CTIP with SuperDARN electric potential input: verification by radiotomography

    Directory of Open Access Journals (Sweden)

    S. E. Pryse

    2009-03-01

    Electric potential patterns obtained by the SuperDARN radar network are used as input to the Coupled Thermosphere-Ionosphere-Plasmasphere model, in an attempt to improve the modelling of the spatial distribution of the ionospheric plasma at high latitudes. Two case studies are considered, one under conditions of stable IMF Bz negative and the other under stable IMF Bz positive. The modelled plasma distributions are compared with sets of well-established tomographic reconstructions, which have been interpreted previously in multi-instrument studies. For IMF Bz negative both the model and observations show a tongue-of-ionisation on the nightside, with good agreement between the electron density and location of the tongue. Under Bz positive, the SuperDARN input allows the model to reproduce a spatial plasma distribution akin to that observed. In this case plasma, unable to penetrate the polar cap boundary into the polar cap, is drawn by the convective flow in a tongue-of-ionisation around the periphery of the polar cap.

  5. A biologically inspired model of bat echolocation in a cluttered environment with inputs designed from field Recordings

    Science.gov (United States)

    Loncich, Kristen Teczar

    Bat echolocation strategies and the neural processing of acoustic information, with a focus on cluttered environments, are investigated in this study. How a bat processes the dense field of echoes received while navigating and foraging in the dark is not well understood. While several models have been developed to describe the mechanisms behind bat echolocation, most are based in mathematics rather than biology, and focus on either peripheral or neural processing, not exploring how these two levels of processing are vitally connected. Current echolocation models also do not use habitat-specific acoustic input, or account for field observations of echolocation strategies. Here, a new approach to echolocation modeling is described, capturing the full picture of echolocation from signal generation to a neural picture of the acoustic scene. A biologically inspired echolocation model is developed using field research measurements of the interpulse interval timing used by a frequency-modulating (FM) bat in the wild, with a whole-method approach to modeling echolocation including habitat-specific acoustic inputs, a biologically accurate peripheral model of sound processing by the outer, middle, and inner ear, and finally a neural model incorporating established auditory pathways and neuron types with echolocation adaptations. The field recordings analyzed underscore differences in bat sonar design observed in the laboratory and the wild, and suggest a correlation between interpulse interval groupings and increased clutter. The scenario model provides habitat- and behavior-specific echoes and is a useful tool for both modeling and behavioral studies, and the peripheral and neural models show that spike-time information and echolocation-specific neuron types can produce target localization in the midbrain.

  6. AN ACCURATE MODELING OF DELAY AND SLEW METRICS FOR ON-CHIP VLSI RC INTERCONNECTS FOR RAMP INPUTS USING BURR’S DISTRIBUTION FUNCTION

    Directory of Open Access Journals (Sweden)

    Rajib Kar

    2010-09-01

    Full Text Available This work presents an accurate and efficient model to compute the delay and slew metrics of on-chip interconnects of high-speed CMOS circuits for ramp inputs. Our metric is based on Burr’s distribution function, which is used to characterize the normalized homogeneous portion of the step response. We used the PERI (Probability distribution function Extension for Ramp Inputs) technique, which extends delay and slew metrics for step inputs to the more general and realistic non-step inputs. The accuracy of our models is justified by comparison with SPICE simulations.

  7. GAROS input deck description

    Energy Technology Data Exchange (ETDEWEB)

    Vollan, A.; Soederberg, M. (Aeronautical Research Inst. of Sweden, Bromma (Sweden))

    1989-01-01

    This report describes the input for the programs GAROS1 and GAROS2, version 5.8 and later, February 1988. The GAROS system, developed by Arne Vollan, Omega GmbH, is used for the analysis of the mechanical and aeroelastic properties for general rotating systems. It has been specially designed to meet the requirements of aeroelastic stability and dynamic response of horizontal axis wind energy converters. Some of the special characteristics are: * The rotor may have one or more blades. * The blades may be rigidly attached to the hub, or they may be fully articulated. * The full elastic properties of the blades, the hub, the machine house and the tower are taken into account. * With the same basic model, a number of different analyses can be performed: Snap-shot analysis, Floquet method, transient response analysis, frequency response analysis etc.

  8. Linear and Non-linear Multi-Input Multi-Output Model Predictive Control of Continuous Stirred Tank Reactor

    Directory of Open Access Journals (Sweden)

    Muayad Al-Qaisy

    2015-02-01

    Full Text Available In this article, a multi-input multi-output (MIMO) linear model predictive controller (LMPC) based on a state-space model and a nonlinear model predictive controller based on a neural network (NNMPC) are applied to a continuous stirred tank reactor (CSTR). The idea is to have a good control system that will be able to give optimal performance, reject high load disturbances, and track set-point changes. In order to study the performance of the two model predictive controllers, a MIMO Proportional-Integral-Derivative (PID) control strategy is used as a benchmark. The LMPC, NNMPC, and PID strategies are used for controlling the residual concentration (CA) and reactor temperature (T). NNMPC shows a superior performance over the LMPC and PID controllers by presenting a smaller overshoot and a shorter settling time.
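
    As a hedged illustration of the receding-horizon idea behind LMPC, the sketch below controls a toy scalar plant rather than the CSTR of the article; the plant coefficients a and b, the set point r, and the input penalty lam are all invented for the example, and the constrained, MIMO, and neural-network variants are not shown.

    ```python
    # One-step model predictive control of the scalar plant
    # x[k+1] = a*x[k] + b*u[k]: at each step, minimize the quadratic cost
    # (x[k+1] - r)**2 + lam*u**2, whose minimizer is u = b*(r - a*x)/(b*b + lam).

    def mpc_step(x, r, a=0.9, b=0.5, lam=0.1):
        """Optimal one-step control input for the quadratic cost above."""
        return b * (r - a * x) / (b * b + lam)

    def simulate(x0=0.0, r=1.0, steps=30, a=0.9, b=0.5, lam=0.1):
        """Apply the receding-horizon law repeatedly and record the state."""
        x, traj = x0, []
        for _ in range(steps):
            u = mpc_step(x, r, a, b, lam)
            x = a * x + b * u
            traj.append(x)
        return traj

    traj = simulate()
    ```

    Because the input penalty lam is nonzero, the state settles slightly below the set point; longer horizons and constraint handling, as in a full LMPC, would require a numerical optimizer.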

  9. Better temperature predictions in geothermal modelling by improved quality of input parameters

    DEFF Research Database (Denmark)

    Fuchs, Sven; Bording, Thue Sylvester; Balling, N.

    2015-01-01

    Thermal modelling is used to examine the subsurface temperature field and geothermal conditions at various scales (e.g. sedimentary basins, deep crust) and in the framework of different problem settings (e.g. scientific or industrial use). In such models, knowledge of rock thermal properties...

  10. Modeling spray drift and runoff-related inputs of pesticides to receiving water.

    Science.gov (United States)

    Zhang, Xuyang; Luo, Yuzhou; Goh, Kean S

    2018-03-01

    Pesticides move to surface water via various pathways including surface runoff, spray drift and subsurface flow. Little is known about the relative contributions of surface runoff and spray drift in agricultural watersheds. This study develops a modeling framework to address the contribution of spray drift to the total loadings of pesticides in receiving water bodies. The modeling framework consists of a GIS module for identifying drift potential, the AgDRIFT model for simulating spray drift, and the Soil and Water Assessment Tool (SWAT) for simulating various hydrological and landscape processes including surface runoff and transport of pesticides. The modeling framework was applied on the Orestimba Creek Watershed, California. Monitoring data collected from daily samples were used for model evaluation. Pesticide mass deposition on the Orestimba Creek ranged from 0.08 to 6.09% of applied mass. Monitoring data suggest that surface runoff was the major pathway for pesticides entering water bodies, accounting for 76% of the annual loading; the remaining 24% came from spray drift. The results from the modeling framework showed 81 and 19%, respectively, for runoff and spray drift. Spray drift contributed over half of the mass loading during summer months. The slightly lower spray drift contribution predicted by the modeling framework was mainly due to SWAT's under-prediction of pesticide mass loading during summer and over-prediction of the loading during winter. Although model simulations were associated with various sources of uncertainty, the overall performance of the modeling framework was satisfactory as evaluated by multiple statistics: for simulation of daily flow, the Nash-Sutcliffe Efficiency Coefficient (NSE) ranged from 0.61 to 0.74 and the percent bias (PBIAS) … The modeling framework will be useful for assessing the relative exposure from pesticides related to spray drift and runoff in receiving waters and the design of management practices for mitigating pesticide
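
    As a hedged sketch, the two skill scores named in the abstract can be computed from paired observed/simulated daily flows as below. The formulas are the standard definitions; any sample series used with them here is illustrative, not the study's data.

    ```python
    # Standard model-evaluation statistics for paired observed (obs) and
    # simulated (sim) series, e.g. daily streamflow.

    def nse(obs, sim):
        """Nash-Sutcliffe Efficiency: 1 minus model SSE over the variance
        of the observations about their mean (1.0 is a perfect fit)."""
        mean_obs = sum(obs) / len(obs)
        sse = sum((o - s) ** 2 for o, s in zip(obs, sim))
        var = sum((o - mean_obs) ** 2 for o in obs)
        return 1.0 - sse / var

    def pbias(obs, sim):
        """Percent bias; positive values indicate model underestimation."""
        return 100.0 * sum(o - s for o, s in zip(obs, sim)) / sum(obs)
    ```

    A model that merely predicts the observed mean scores NSE = 0, which is why values of 0.61-0.74, as reported here, indicate useful skill.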

  11. Solar Sail Models and Test Measurements Correspondence for Validation Requirements Definition

    Science.gov (United States)

    Ewing, Anthony; Adams, Charles

    2004-01-01

    Solar sails are being developed as a mission-enabling technology in support of future NASA science missions. Current efforts have advanced solar sail technology sufficient to justify a flight validation program. A primary objective of this activity is to test and validate solar sail models that are currently under development so that they may be used with confidence in future science mission development (e.g., scalable to larger sails). Both system and model validation requirements must be defined early in the program to guide design cycles and to ensure that relevant and sufficient test data will be obtained to conduct model validation to the level required. A process of model identification, model input/output documentation, model sensitivity analyses, and test measurement correspondence is required so that decisions can be made to satisfy validation requirements within program constraints.

  12. Integrated modelling requires mass collaboration (Invited)

    Science.gov (United States)

    Moore, R. V.

    2009-12-01

    The need for sustainable solutions to the world’s problems is self evident; the challenge is to anticipate where, in the environment, economy or society, the proposed solution will have negative consequences. If we failed to realise that the switch to biofuels would have the seemingly obvious result of reduced food production, how much harder will it be to predict the likely impact of policies whose impacts may be more subtle? It has been clear for a long time that models and data will be important tools for assessing the impact of events and the measures for their mitigation. They are an effective way of encapsulating knowledge of a process and using it for prediction. However, most models represent a single or small group of processes. The sustainability challenges that face us now require not just the prediction of a single process but the prediction of how many interacting processes will respond in given circumstances. These processes will not be confined to a single discipline but will often straddle many. For example, the question, “What will be the impact on river water quality of the medical plans for managing a ‘flu pandemic and could they cause a further health hazard?” spans medical planning, the absorption of drugs by the body, the spread of disease, the hydraulic and chemical processes in sewers and sewage treatment works and river water quality. This question nicely reflects the present state of the art. We have models of the processes and standards, such as the Open Modelling Interface (the OpenMI), allow them to be linked together and to datasets. We can therefore answer the question but with the important proviso that we thought to ask it. The next and greater challenge is to deal with the open question, “What are the implications of the medical plans for managing a ‘flu pandemic?”. This implies a system that can make connections that may well not have occurred to us and then evaluate their probable impact. The final touch will be to

  13. Dependence of Computational Models on Input Dimension: Tractability of Approximation and Optimization Tasks

    Czech Academy of Sciences Publication Activity Database

    Kainen, P.C.; Kůrková, Věra; Sanguineti, M.

    2012-01-01

    Roč. 58, č. 2 (2012), s. 1203-1214 ISSN 0018-9448 R&D Projects: GA MŠk(CZ) ME10023; GA ČR GA201/08/1744; GA ČR GAP202/11/1368 Grant - others: CNR-AV ČR(CZ-IT) Project 2010–2012 Complexity of Neural-Network and Kernel Computational Models Institutional research plan: CEZ:AV0Z10300504 Keywords: dictionary-based computational models * high-dimensional approximation and optimization * model complexity * polynomial upper bounds Subject RIV: IN - Informatics, Computer Science Impact factor: 2.621, year: 2012

  14. Development of a General Form CO2 and Brine Flux Input Model

    Energy Technology Data Exchange (ETDEWEB)

    Mansoor, K. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Sun, Y. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Carroll, S. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States)

    2014-08-01

    The National Risk Assessment Partnership (NRAP) project is developing a science-based toolset for the quantitative analysis of the potential risks associated with changes in groundwater chemistry from CO2 injection. In order to address uncertainty probabilistically, NRAP is developing efficient, reduced-order models (ROMs) as part of its approach. These ROMs are built from detailed, physics-based process models to provide confidence in the predictions over a range of conditions. The ROMs are designed to reproduce accurately the predictions from the computationally intensive process models at a fraction of the computational time, thereby allowing the utilization of Monte Carlo methods to probe variability in key parameters. This report presents the procedures used to develop a generalized model for CO2 and brine leakage fluxes based on the output of a numerical wellbore simulation. The resulting generalized parameters and ranges reported here will be used for the development of third-generation groundwater ROMs.
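
    As an illustrative sketch of the reduced-order-model (ROM) idea in this report, the code below replaces an "expensive" process model with a cheap surrogate fitted to a few of its runs, then applies Monte Carlo sampling to the surrogate. The quadratic stand-in "process model" and the parameter range are invented for the example; they are not NRAP's wellbore simulator or its parameters.

    ```python
    import random

    def process_model(p):
        """Stand-in for a costly physics-based simulation of leakage flux."""
        return 2.0 * p * p + 0.5

    def fit_rom(p1, p2):
        """Fit a surrogate flux ~ c2*p**2 + c0 exactly through two
        training runs of the expensive model."""
        f1, f2 = process_model(p1), process_model(p2)
        c2 = (f1 - f2) / (p1 ** 2 - p2 ** 2)
        c0 = f1 - c2 * p1 ** 2
        return lambda p: c2 * p * p + c0

    rom = fit_rom(0.5, 1.5)          # two "expensive" training runs
    random.seed(0)
    # Monte Carlo over the uncertain parameter, at surrogate cost only
    fluxes = [rom(random.uniform(0.0, 2.0)) for _ in range(10000)]
    mean_flux = sum(fluxes) / len(fluxes)
    ```

    Real ROMs are fitted over many runs and many parameters, but the workflow is the same: train once against the process model, then sample the cheap surrogate thousands of times.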

  15. Why inputs matter: Selection of climatic variables for species distribution modelling in the Himalayan region

    Science.gov (United States)

    Bobrowski, Maria; Schickhoff, Udo

    2017-04-01

    Betula utilis is a major constituent of alpine treeline ecotones in the western and central Himalayan region. The objective of this study is to provide a first-time analysis of the potential distribution of Betula utilis in the subalpine and alpine belts of the Himalayan region using species distribution modelling. Using Generalized Linear Models (GLM), we aim at examining climatic factors controlling the species distribution under current climate conditions. Furthermore, we evaluate the prediction ability of climate data derived from different statistical methods. GLMs were created using least-correlated bioclimatic variables derived from two different climate models: 1) interpolated climate data (i.e. Worldclim, Hijmans et al., 2005) and 2) quasi-mechanistical statistical downscaling (i.e. Chelsa; Karger et al., 2016). Model accuracy was evaluated by the ability to predict the potential species distribution range. We found that models based on variables of the Chelsa climate data had higher predictive power, whereas models using Worldclim climate data consistently overpredicted the potential suitable habitat for Betula utilis. Although climatic variables from Worldclim are widely used in modelling species distributions, our results suggest treating them with caution when remote regions like the Himalayan mountains are in focus. Unmindful usage of climatic variables for species distribution models can produce misleading projections and may lead to wrong implications and recommendations for nature conservation. References: Hijmans, R.J., Cameron, S.E., Parra, J.L., Jones, P.G. & Jarvis, A. (2005) Very high resolution interpolated climate surfaces for global land areas. International Journal of Climatology, 25, 1965-1978. Karger, D.N., Conrad, O., Böhner, J., Kawohl, T., Kreft, H., Soria-Auza, R.W., Zimmermann, N., Linder, H.P. & Kessler, M. (2016) Climatologies at high resolution for the earth land surface areas. arXiv:1607.00217 [physics].
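
    As a hedged sketch of the modelling step named in the abstract, the code below fits a binomial GLM (logistic regression) relating presence/absence to a climatic predictor by full-batch gradient ascent on the log-likelihood. The single standardised "temperature" predictor and all coefficients are invented for the example; the study used sets of least-correlated Worldclim/Chelsa bioclimatic variables and standard GLM fitting.

    ```python
    import math
    import random

    random.seed(1)
    temps = [random.uniform(-2.0, 2.0) for _ in range(400)]
    # synthetic truth: presence probability rises with temperature (slope 2)
    data = [(t, 1 if random.random() < 1.0 / (1.0 + math.exp(-2.0 * t)) else 0)
            for t in temps]

    b0, b1 = 0.0, 0.0                       # intercept and slope
    for _ in range(1500):                   # gradient ascent on log-likelihood
        g0 = g1 = 0.0
        for t, y in data:
            p = 1.0 / (1.0 + math.exp(-(b0 + b1 * t)))
            g0 += y - p                     # score w.r.t. intercept
            g1 += (y - p) * t               # score w.r.t. slope
        b0 += 0.5 * g0 / len(data)
        b1 += 0.5 * g1 / len(data)
    ```

    The fitted slope recovers the sign and rough magnitude of the generating coefficient; in practice one would use a statistics package with proper standard errors and variable selection.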

  16. Sensitivity of modeled estuarine circulation to spatial and temporal resolution of input meteorological forcing of a cold frontal passage

    Science.gov (United States)

    Weaver, Robert J.; Taeb, Peyman; Lazarus, Steven; Splitt, Michael; Holman, Bryan P.; Colvin, Jeffrey

    2016-12-01

    In this study, a four member ensemble of meteorological forcing is generated using the Weather Research and Forecasting (WRF) model in order to simulate a frontal passage event that impacted the Indian River Lagoon (IRL) during March 2015. The WRF model is run to provide high and low, spatial (0.005° and 0.1°) and temporal (30 min and 6 h) input wind and pressure fields. The four member ensemble is used to force the Advanced Circulation model (ADCIRC) coupled with Simulating Waves Nearshore (SWAN) and compute the hydrodynamic and wave response. Results indicate that increasing the spatial resolution of the meteorological forcing has a greater impact on the results than increasing the temporal resolution in coastal systems like the IRL where the length scales are smaller than the resolution of the operational meteorological model being used to generate the forecast. Changes in predicted water elevations are due in part to the upwind and downwind behavior of the input wind forcing. The significant wave height is more sensitive to the meteorological forcing, exhibited by greater ensemble spread throughout the simulation. It is important that the land mask, seen by the meteorological model, is representative of the geography of the coastal estuary as resolved by the hydrodynamic model. As long as the temporal resolution of the wind field captures the bulk characteristics of the frontal passage, computational resources should be focused so as to ensure that the meteorological model resolves the spatial complexities, such as the land-water interface, that drive the land use responsible for dynamic downscaling of the winds.

  17. Industrial and ecological cumulative exergy consumption of the United States via the 1997 input-output benchmark model

    International Nuclear Information System (INIS)

    Ukidwe, Nandan U.; Bakshi, Bhavik R.

    2007-01-01

    This paper develops a thermodynamic input-output (TIO) model of the 1997 United States economy that accounts for the flow of cumulative exergy in the 488-sector benchmark economic input-output model in two different ways. Industrial cumulative exergy consumption (ICEC) captures the exergy of all natural resources consumed directly and indirectly by each economic sector, while ecological cumulative exergy consumption (ECEC) also accounts for the exergy consumed in ecological systems for producing each natural resource. Information about exergy consumed in nature is obtained from the thermodynamics of biogeochemical cycles. As used in this work, ECEC is analogous to the concept of emergy, but does not rely on any of its controversial claims. The TIO model can also account for emissions from each sector and their impact and the role of labor. The use of consistent exergetic units permits the combination of various streams to define aggregate metrics that may provide insight into aspects related to the impact of economic sectors on the environment. Accounting for the contribution of natural capital by ECEC has been claimed to permit better representation of the quality of ecosystem goods and services than ICEC. The results of this work are expected to permit evaluation of these claims. If validated, this work is expected to lay the foundation for thermodynamic life cycle assessment, particularly of emerging technologies and with limited information
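
    The Leontief accounting that underlies such thermodynamic input-output models can be sketched as follows: total sectoral output x satisfies x = A x + d, i.e. x = (I - A)^(-1) d, after which resource (exergy) inputs can be allocated across sectors. The 2-sector coefficient matrix and final demand below are toy numbers, not the 488-sector benchmark model.

    ```python
    def leontief_2x2(A, d):
        """Solve (I - A) x = d for a two-sector economy via Cramer's rule."""
        m = [[1.0 - A[0][0], -A[0][1]],
             [-A[1][0], 1.0 - A[1][1]]]
        det = m[0][0] * m[1][1] - m[0][1] * m[1][0]
        x0 = (d[0] * m[1][1] - m[0][1] * d[1]) / det
        x1 = (m[0][0] * d[1] - m[1][0] * d[0]) / det
        return [x0, x1]

    A = [[0.2, 0.3],    # toy technical coefficients (inputs per unit output)
         [0.1, 0.4]]
    d = [100.0, 50.0]   # final demand by sector
    x = leontief_2x2(A, d)
    # balance check: each sector's output covers intermediate use plus demand
    ```

    The same inverse, scaled by direct resource inputs per sector, yields the ICEC/ECEC intensities the paper aggregates over the full 488-sector table.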

  18. Input characterization of a shock test structure.

    Energy Technology Data Exchange (ETDEWEB)

    Hylok, J. E. (Jeffrey E.); Groethe, M. A.; Maupin, R. D. (Ryan D.)

    2004-01-01

    Often in experimental work, measuring input forces and pressures is a difficult and sometimes impossible task. For one particular shock test article, its input sensitivity required a detailed measurement of the pressure input. This paper discusses the use of a surrogate mass mock test article to measure spatial and temporal variations of the shock input within and between experiments. Also discussed will be the challenges and solutions in making some of the high speed transient measurements. The current input characterization work appears as part of the second phase in an extensive model validation project. During the first phase, the system under analysis displayed sensitivities to the shock input's qualitative and quantitative (magnitude) characteristics. However, multiple shortcomings existed in the characterization of the input. First, the experimental measurements of the input were made on a significantly simplified structure only, and the spatial fidelity of the measurements was minimal. Second, the sensors used for the pressure measurement contained known errors that could not be fully quantified. Finally, the measurements examined only one input pressure path (from contact with the energetic material). Airblast levels from the energetic materials were unknown. The result was a large discrepancy between the energy content in the analysis and experiments.

  19. A Hierarchical multi-input and output Bi-GRU Model for Sentiment Analysis on Customer Reviews

    Science.gov (United States)

    Zhang, Liujie; Zhou, Yanquan; Duan, Xiuyu; Chen, Ruiqi

    2018-03-01

    Multi-label sentiment classification of customer reviews is a practical and challenging task in Natural Language Processing. In this paper, we propose a hierarchical multi-input and output model based on a bi-directional recurrent neural network, which considers both the semantic and lexical information of emotional expression. Our model applies two independent Bi-GRU layers to generate part-of-speech and sentence representations. The lexical information is then considered via attention over the output of a softmax activation on the part-of-speech representation. In addition, we combine the probabilities of auxiliary labels as features with the hidden layer to capture crucial correlations between output labels. The experimental results show that our model is computationally efficient and achieves breakthrough improvements on a customer reviews dataset.
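
    To make the recurrence behind the Bi-GRU layers concrete, here is a hedged single-unit GRU step in plain Python. The scalar weights and toy input sequence are invented; the actual model uses vector states, a bidirectional pass, and the attention mechanism described above, none of which is shown.

    ```python
    import math

    def sigmoid(v):
        return 1.0 / (1.0 + math.exp(-v))

    def gru_step(x, h, w):
        """One GRU update: update gate z, reset gate r, candidate state n."""
        z = sigmoid(w["wz"] * x + w["uz"] * h)
        r = sigmoid(w["wr"] * x + w["ur"] * h)
        n = math.tanh(w["wn"] * x + w["un"] * (r * h))
        return (1.0 - z) * h + z * n     # interpolate old state and candidate

    w = {"wz": 0.5, "uz": 0.5, "wr": 0.5, "ur": 0.5, "wn": 1.0, "un": 1.0}
    h = 0.0
    for x in [1.0, -1.0, 0.5]:           # toy input sequence
        h = gru_step(x, h, w)
    ```

    A bidirectional layer simply runs a second, independently weighted GRU over the reversed sequence and concatenates the two hidden states per position.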

  20. Comparison of squashing and self-consistent input-output models of quantum feedback

    Science.gov (United States)

    Peřinová, V.; Lukš, A.; Křepelka, J.

    2018-03-01

    The paper (Yanagisawa and Hope, 2010) opens with two ways of analysing a measurement-based quantum feedback. The scheme of the feedback includes, along with the homodyne detector, a modulator and a beamsplitter, which does not enable one to extract the nonclassical field. In the present scheme, the beamsplitter is replaced by the quantum noise evader, which makes it possible to extract the nonclassical field. We re-approach the comparison of two models related to the same scheme. The first one admits that in the feedback loop, unusual commutation relations hold between the photon annihilation and creation operators. As a consequence, squashing of the light occurs in the feedback loop. In the second one, the description arrives at the feedback loop via unitary transformations. However, the unitary transformation which describes the modulator changes even the annihilation operator of the mode which passes by the modulator, which is not natural. The first model could be called the "squashing model" and the second the "self-consistent model". Although the predictions of the two models differ only a little and both ways of analysis have their advantages, they also have drawbacks, and further investigation is possible.

  1. Modeling microstructure of incudostapedial joint and the effect on cochlear input

    Science.gov (United States)

    Gan, Rong Z.; Wang, Xuelin

    2015-12-01

    The incudostapedial joint (ISJ) connects the incus to the stapes in the human ear and plays an important role in sound transmission from the tympanic membrane (TM) to the cochlea. The ISJ is a synovial joint composed of articular cartilage on the lenticular process and stapes head, with synovial fluid between them. However, there has been no study on how the synovial ISJ affects middle ear and cochlear functions. Recently, we developed a 3-dimensional finite element (FE) model of the synovial ISJ and connected it to our comprehensive FE model of the human ear. The motions of the TM, stapes footplate, and basilar membrane and the pressures in the scala vestibuli and scala tympani were derived over frequencies and compared with experimental measurements. Results show that the synovial ISJ affects sound transmission into the cochlea, and that the frequency-dependent viscoelastic behavior of the ISJ protects the cochlea from high-intensity sound.

  2. Estimating severity of sideways fall using a generic multi linear regression model based on kinematic input variables.

    Science.gov (United States)

    van der Zijden, A M; Groen, B E; Tanck, E; Nienhuis, B; Verdonschot, N; Weerdesteyn, V

    2017-03-21

    Many research groups have studied fall impact mechanics to understand how fall severity can be reduced to prevent hip fractures. Yet, direct impact force measurements with force plates are restricted to a very limited repertoire of experimental falls. The purpose of this study was to develop a generic model for estimating hip impact forces (i.e. fall severity) in in vivo sideways falls without the use of force plates. Twelve experienced judokas performed sideways Martial Arts (MA) and Block ('natural') falls on a force plate, both with and without a mat on top. Data were analyzed to determine the hip impact force and to derive 11 selected (subject-specific and kinematic) variables. Falls from kneeling height were used to perform a stepwise regression procedure to assess the effects of these input variables and build the model. The final model includes four input variables, involving one subject-specific measure and three kinematic variables: maximum upper body deceleration, body mass, shoulder angle at the instant of 'maximum impact' and maximum hip deceleration. The results showed that estimated and measured hip impact forces were linearly related (explained variances ranging from 46 to 63%). Hip impact forces of MA falls onto the mat from a standing position (3650±916N) estimated by the final model were comparable with measured values (3698±689N), even though these data were not used for training the model. In conclusion, a generic linear regression model was developed that enables the assessment of fall severity through kinematic measures of sideways falls, without using force plates. Copyright © 2017 Elsevier Ltd. All rights reserved.
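
    As a minimal sketch of the paper's approach, the code below fits an ordinary least squares line relating one kinematic input to measured hip impact force. The real model combines four predictors chosen by stepwise regression; the single "maximum upper body deceleration" predictor and all numbers below are invented for illustration.

    ```python
    def ols(xs, ys):
        """Slope and intercept minimising the sum of squared errors."""
        n = len(xs)
        mx, my = sum(xs) / n, sum(ys) / n
        slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
                 / sum((x - mx) ** 2 for x in xs))
        return slope, my - slope * mx

    decel = [30.0, 40.0, 50.0, 60.0, 70.0]            # m/s^2, invented
    force = [2500.0, 3000.0, 3400.0, 4100.0, 4500.0]  # N, invented
    slope, intercept = ols(decel, force)

    def predict(x):
        """Estimated hip impact force (N) from the kinematic input."""
        return intercept + slope * x
    ```

    Extending this to the paper's four-predictor model means stacking the predictors into a design matrix and solving the multivariate normal equations, which is what the stepwise procedure iterates over.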

  3. Investigations of the sensitivity of a coronal mass ejection model (ENLIL) to solar input parameters

    DEFF Research Database (Denmark)

    Falkenberg, Thea Vilstrup; Vršnak, B.; Taktakishvili, A.

    2010-01-01

    Understanding space weather is not only important for satellite operations and human exploration of the solar system but also to phenomena here on Earth that may potentially disturb and disrupt electrical signals. Some of the most violent space weather effects are caused by coronal mass ejections...... investigate the parameter space of the ENLILv2.5b model using the CME event of 25 July 2004. ENLIL is a time‐dependent 3‐D MHD model that can simulate the propagation of cone‐shaped interplanetary coronal mass ejections (ICMEs) through the solar system. Excepting the cone parameters (radius, position...

  4. Linguistic Processing in a Mathematics Tutoring System: Cooperative Input Interpretation and Dialogue Modelling

    Science.gov (United States)

    Wolska, Magdalena; Buckley, Mark; Horacek, Helmut; Kruijff-Korbayová, Ivana; Pinkal, Manfred

    Formal domains, such as mathematics, require exact language to communicate the intended content. Special symbolic notations are used to express the semantics precisely, compactly, and unambiguously. Mathematical textbooks offer plenty of examples of concise, accurate presentations. This effective communication is enabled by interleaved use of formulas and natural language. Since natural language interaction has been shown to be an important factor in the efficiency of human tutoring [29], it would be desirable to enhance interaction with Intelligent Tutoring Systems for mathematics by allowing elements of mixed language combining the exactness of formal expressions with natural language flexibility.

  5. GALEV evolutionary synthesis models – I. Code, input physics and web

    NARCIS (Netherlands)

    Kotulla, R.; Fritze, U.; Weilbacher, P.; Anders, P.

    2009-01-01

    GALEV (GALaxy EVolution) evolutionary synthesis models describe the evolution of stellar populations in general, of star clusters as well as of galaxies, both in terms of resolved stellar populations and of integrated light properties over cosmological time-scales of ≥13 Gyr from the onset of star

  6. Modeling chronic diseases: the diabetes module. Justification of (new) input data

    NARCIS (Netherlands)

    Baan CA; Bos G; Jacobs-van der Bruggen MAM; Baan CA; Bos G; Jacobs-van der Bruggen MAM; PZO

    2005-01-01

    The RIVM chronic disease model (CDM) is an instrument designed to estimate the effects of changes in the prevalence of risk factors for chronic diseases on disease burden and mortality. To enable the computation of the effects of various diabetes prevention scenarios, the CDM has been updated and

  7. Multiscale Deterministic Wave Modeling with Wind Input and Wave Breaking Dissipation

    Science.gov (United States)

    2009-01-01

    Kudryavtsev, V. N., Makin, V. K. & Meirink, J. F. 2001 “Simplified model of the air flow above the waves,” Boundary-Layer Meteorol. 100, 63-90. [Record excerpt, Figure 6 caption: comparison of pressure profiles with exponential decays; solid line, the Kudryavtsev et al. (2001) profile estimated by Donelan et al.]

  8. Packaging tomorrow : modelling the material input for European packaging in the 21st century

    NARCIS (Netherlands)

    Hekkert, M.P.; Joosten, L.A.J.; Worrell, E.

    2006-01-01

    This report is a result of the MATTER project (MATerials Technology for CO2 Emission Reduction). The project focuses on CO2 emission reductions that are related to the Western European materials system. The total impact of the reduction options for different scenarios will be modeled in MARKAL

  9. Input Harmonic Analysis on the Slim DC-Link Drive Using Harmonic State Space Model

    DEFF Research Database (Denmark)

    Yang, Feng; Kwon, Jun Bum; Wang, Xiongfei

    2017-01-01

    variation according to the switching instant, the harmonics at the steady-state condition, as well as the coupling between the multiple harmonic impedances. Using this model, the impact of the film capacitor and the grid inductance on the harmonic performance is derived. Simulation and experimental...

  10. Model-based extraction of input and organ functions in dynamic scintigraphic imaging

    Czech Academy of Sciences Publication Activity Database

    Tichý, Ondřej; Šmídl, Václav; Šámal, M.

    2016-01-01

    Roč. 4, 3-4 (2016), s. 135-145 ISSN 2168-1171 R&D Projects: GA ČR GA13-29225S Institutional support: RVO:67985556 Keywords: blind source separation * convolution * dynamic medical imaging * compartment modelling Subject RIV: BB - Applied Statistics, Operational Research http://library.utia.cas.cz/separaty/2014/AS/tichy-0428540.pdf

  11. Treatment of input uncertainty in hydrologic modeling: Doing hydrology backward with Markov chain Monte Carlo simulation

    NARCIS (Netherlands)

    Vrugt, J.A.; Braak, ter C.J.F.; Clark, M.P.; Hyman, J.M.; Robinson, B.A.

    2008-01-01

    There is increasing consensus in the hydrologic literature that an appropriate framework for streamflow forecasting and simulation should include explicit recognition of forcing and parameter and model structural error. This paper presents a novel Markov chain Monte Carlo (MCMC) sampler, entitled

  12. Evapotranspiration and Precipitation inputs for SWAT model using remotely sensed observations

    Science.gov (United States)

    The ability of numerical models, such as the Soil and Water Assessment Tool (or SWAT), to accurately represent the partition of the water budget and describe sediment loads and other pollutant conditions related to water quality strongly depends on how well spatiotemporal variability in precipitatio...

  13. A stock-flow consistent input-output model with applications to energy price shocks, interest rates, and heat emissions

    Science.gov (United States)

    Berg, Matthew; Hartley, Brian; Richters, Oliver

    2015-01-01

    By synthesizing stock-flow consistent models, input-output models, and aspects of ecological macroeconomics, a method is developed to simultaneously model monetary flows through the financial system, flows of produced goods and services through the real economy, and flows of physical materials through the natural environment. This paper highlights the linkages between the physical environment and the economic system by emphasizing the role of the energy industry. A conceptual model is developed in general form with an arbitrary number of sectors, while emphasizing connections with the agent-based, econophysics, and complexity economics literature. First, we use the model to challenge claims that 0% interest rates are a necessary condition for a stationary economy and conduct a stability analysis within the parameter space of interest rates and consumption parameters of an economy in stock-flow equilibrium. Second, we analyze the role of energy price shocks in contributing to recessions, incorporating several propagation and amplification mechanisms. Third, implied heat emissions from energy conversion and the effect of anthropogenic heat flux on climate change are considered in light of a minimal single-layer atmosphere climate model, although the model is only implicitly, not explicitly, linked to the economic model.
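    The input-output accounting the paper builds on can be illustrated with a minimal Leontief quantity model; the two-sector numbers below are illustrative, not taken from the paper, and the full stock-flow consistent model layers monetary and physical flows on top of this core.

```python
import numpy as np

# Technical coefficient matrix A: A[i, j] is the input required from sector i
# per unit of output of sector j (illustrative values for an energy sector
# and a goods sector, not figures from the paper).
A = np.array([
    [0.1, 0.2],
    [0.3, 0.1],
])
d = np.array([10.0, 40.0])  # final demand per sector

# Leontief solution: gross output x satisfies x = A x + d,
# i.e. x = (I - A)^{-1} d.
x = np.linalg.solve(np.eye(2) - A, d)
```

Solving the linear system directly (rather than inverting I - A) is the standard numerically stable choice; each sector's gross output covers both intermediate use and final demand.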

  14. Future-year ozone prediction for the United States using updated models and inputs.

    Science.gov (United States)

    Collet, Susan; Kidokoro, Toru; Karamchandani, Prakash; Shah, Tejas; Jung, Jaegun

    2017-08-01

    The relationship between emission reductions and changes in ozone can be studied using photochemical grid models. These models are updated with new information as it becomes available. The primary objective of this study was to update the previous Collet et al. studies by using the most up-to-date (at the time the study was done) modeling emission tools, inventories, and meteorology available to conduct ozone source attribution and sensitivity studies. Results show future-year (2030) design values for 8-hr ozone concentrations were lower than base-year (2011) values. The ozone source attribution results for selected cities showed that boundary conditions were the dominant contributors to ozone concentrations at the western U.S. locations and were important for many of the eastern U.S. locations. Point sources were generally more important in the eastern United States than in the western United States. The contributions of on-road mobile emissions were less than 5 ppb at a majority of the cities selected for analysis. The higher-order decoupled direct method (HDDM) results showed that in most of the locations selected for analysis, NOx emission reductions were more effective than VOC emission reductions in reducing ozone levels. The source attribution results from this study provide useful information on the important source categories and provide some initial guidance on future emission reduction strategies.

  15. PLEXOS Input Data Generator

    Energy Technology Data Exchange (ETDEWEB)

    2017-02-01

    The PLEXOS Input Data Generator (PIDG) is a tool that enables PLEXOS users to better version their data, automate data processing, collaborate in developing inputs, and transfer data between different production cost modeling and other power systems analysis software. PIDG can process data that is in a generalized format from multiple input sources, including CSV files, PostgreSQL databases, and PSS/E .raw files and write it to an Excel file that can be imported into PLEXOS with only limited manual intervention.
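    The kind of transformation PIDG performs can be sketched as flattening records from a generalized CSV format into the long (object, property, value) layout that production cost modeling tools typically import. The schema and field names below are hypothetical, not PIDG's actual format.

```python
import csv
import io

# Illustrative generator records in a generalized CSV format
# (hypothetical schema, not PIDG's real input specification).
raw = """name,max_capacity,fuel
Gen_A,100,Gas
Gen_B,250,Coal
"""

def to_property_rows(csv_text):
    """Flatten each record into (object, property, value) rows,
    the long format that modeling tools commonly import."""
    rows = []
    for rec in csv.DictReader(io.StringIO(csv_text)):
        name = rec.pop("name")
        for prop, value in rec.items():
            rows.append((name, prop, value))
    return rows

rows = to_property_rows(raw)
```

In a real pipeline the same long-format rows could be written out with an Excel library for import into PLEXOS; keeping the intermediate format source-agnostic is what allows multiple input sources (CSV, databases, .raw files) to share one writer.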

  16. Uncertainty into statistical landslide susceptibility models resulting from terrain mapping units and landslide input data

    Science.gov (United States)

    Zêzere, José Luis; Pereira, Susana; Melo, Raquel; Oliveira, Sérgio; Garcia, Ricardo

    2017-04-01

    There are multiple sources of uncertainty within statistically based landslide susceptibility assessment that need to be accounted for and monitored. In this work we evaluate and discuss differences observed in landslide susceptibility maps resulting from the selection of the terrain mapping unit and from the selection of the feature type used to represent landslides (polygon vs. point). The work is performed in the Silveira Basin (18.2 square kilometres), located north of Lisbon, Portugal, using a unique database of geo-environmental landslide predisposing factors and an inventory of 81 shallow translational slides. Logistic regression is the statistical method selected to combine the predictive factors with the dependent variable. Four landslide susceptibility models were computed using the complete landslide inventory, considering the total landslide area, over four different terrain mapping units: Slope Terrain Units (STU), Geo-Hydrological Terrain Units (GHTU), Census Terrain Units (CTU) and Grid Cell Terrain Units (GCTU). Four additional landslide susceptibility models were made over the same four terrain mapping units using a landslide training group (50% of the inventory, randomly selected). These models were independently validated with the other 50% of the landslide inventory (landslide test group). Two further landslide susceptibility models were computed over GCTU, one using the landslide training group represented as point features corresponding to the centroid of the landslide, and the other using the centroid of the landslide rupture zone. In total, 10 landslide susceptibility maps were constructed and classified into 10 classes with an equal number of terrain units to allow comparison. The prediction skill of the susceptibility models was evaluated using ROC metrics and success and prediction rate curves, and the landslide susceptibility maps computed over GCTU were compared using the Kappa statistic. With this work we conclude that large differences
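    As a rough sketch of the statistical core of such a study, the following fits a logistic regression by gradient descent on synthetic predisposing factors and scores the resulting susceptibility values with the ROC AUC. All data and parameters are illustrative, not the Silveira Basin inventory.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic predisposing factors (e.g. slope, curvature) and binary
# landslide labels; purely illustrative data.
X = rng.normal(size=(200, 2))
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=200) > 0).astype(float)

def fit_logistic(X, y, lr=0.1, steps=500):
    """Logistic regression fitted by batch gradient descent on the log-loss."""
    Xb = np.c_[np.ones(len(X)), X]           # prepend an intercept column
    w = np.zeros(Xb.shape[1])
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-(Xb @ w)))  # predicted susceptibility
        w -= lr * Xb.T @ (p - y) / len(y)    # gradient step on log-loss
    return w

def roc_auc(scores, labels):
    """Area under the ROC curve via the rank-sum (Mann-Whitney U) identity."""
    order = np.argsort(scores)
    ranks = np.empty(len(scores))
    ranks[order] = np.arange(1, len(scores) + 1)
    pos = labels == 1
    n_pos, n_neg = pos.sum(), (~pos).sum()
    return (ranks[pos].sum() - n_pos * (n_pos + 1) / 2) / (n_pos * n_neg)

w = fit_logistic(X, y)
scores = 1.0 / (1.0 + np.exp(-(np.c_[np.ones(len(X)), X] @ w)))
```

The AUC computed this way is the same quantity summarized by the success/prediction rate curves used in the paper: the probability that a randomly chosen landslide unit scores higher than a randomly chosen stable unit.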

  17. Characteristic 'fingerprints' of crop model responses data at different spatial resolutions to weather input

    Czech Academy of Sciences Publication Activity Database

    Angulo, C.; Rotter, R.; Trnka, Miroslav; Pirttioja, N. K.; Gaiser, T.; Hlavinka, Petr; Ewert, F.

    2013-01-01

    Roč. 49, AUG 2013 (2013), s. 104-114 ISSN 1161-0301 R&D Projects: GA MŠk(CZ) EE2.3.20.0248; GA MŠk(CZ) EE2.4.31.0056 Institutional support: RVO:67179843 Keywords : Crop model * Weather data resolution * Aggregation * Yield distribution Subject RIV: EH - Ecology, Behaviour Impact factor: 2.918, year: 2013

  18. Fingerprints of four crop models as affected by soil input data aggregation

    Czech Academy of Sciences Publication Activity Database

    Angulo, C.; Gaiser, T.; Rötter, R. P.; Børgesen, C. D.; Hlavinka, Petr; Trnka, Miroslav; Ewert, F.

    2014-01-01

    Roč. 61, NOV 2014 (2014), s. 35-48 ISSN 1161-0301 R&D Projects: GA MŠk(CZ) EE2.3.20.0248; GA MŠk(CZ) EE2.4.31.0056; GA MZe QJ1310123 Institutional support: RVO:67179843 Keywords : crop model * soil data * spatial resolution * yield distribution * aggregation Subject RIV: EH - Ecology, Behaviour Impact factor: 2.704, year: 2014

  19. Errors in estimation of the input signal for integrate-and-fire neuronal models

    Czech Academy of Sciences Publication Activity Database

    Bibbona, E.; Lánský, Petr; Sacerdote, L.; Sirovich, R.

    2008-01-01

    Roč. 78, č. 1 (2008), s. 1-10 ISSN 1539-3755 R&D Projects: GA MŠk(CZ) LC554; GA AV ČR(CZ) 1ET400110401 Grant - others:EC(XE) MIUR PRIN 2005 Institutional research plan: CEZ:AV0Z50110509 Keywords : parameter estimation * stochastic neuronal model Subject RIV: BO - Biophysics Impact factor: 2.508, year: 2008 http://link.aps.org/abstract/PRE/v78/e011918

  20. Satellite, climatological, and theoretical inputs for modeling of the diurnal cycle of fire emissions

    Science.gov (United States)

    Hyer, E. J.; Reid, J. S.; Schmidt, C. C.; Giglio, L.; Prins, E.

    2009-12-01

    The diurnal cycle of fire activity is crucial for accurate simulation of atmospheric effects of fire emissions, especially at finer spatial and temporal scales. Estimating diurnal variability in emissions is also a critical problem for the construction of emissions estimates from multiple sensors with variable coverage patterns. An optimal diurnal emissions estimate will use as much information as possible from satellite fire observations, compensate for known biases in those observations, and use detailed theoretical models of the diurnal cycle to fill in missing information. As part of ongoing improvements to the Fire Location and Monitoring of Burning Emissions (FLAMBE) fire monitoring system, we evaluated several different methods of integrating observations with different temporal sampling. We used geostationary fire detections from WF_ABBA, fire detection data from MODIS, empirical diurnal cycles from TRMM, and simple theoretical diurnal curves based on surface heating. Our experiments integrated these data in different combinations to estimate the diurnal cycles of emissions for each location and time. Hourly emissions estimates derived using these methods were tested using an aerosol transport model. We present results of this comparison, and discuss the implications of our results for the broader problem of multi-sensor data fusion in fire emissions modeling.
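    A simple theoretical diurnal curve of the kind mentioned above can be sketched as a normalized afternoon-peaked profile used to distribute a daily emission total into hourly values. The peak hour, width, and Gaussian shape here are assumptions for illustration, not the FLAMBE formulation.

```python
import numpy as np

def diurnal_weights(peak_hour=14, width=3.0):
    """Gaussian-shaped diurnal cycle of fire activity peaking in the early
    afternoon, as a stand-in for a surface-heating-based curve
    (hypothetical shape, not the system's actual parameterization)."""
    hours = np.arange(24)
    w = np.exp(-0.5 * ((hours - peak_hour) / width) ** 2)
    return w / w.sum()            # weights sum to 1 over the day

daily_total = 1200.0              # hypothetical daily emission (arbitrary units)
hourly = daily_total * diurnal_weights()
```

Because the weights are normalized, the hourly series conserves the daily total regardless of the assumed shape, which is the property that matters when reconciling sensors with different overpass times.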

  1. A Novel Approach to Develop the Lower Order Model of Multi-Input Multi-Output System

    Science.gov (United States)

    Rajalakshmy, P.; Dharmalingam, S.; Jayakumar, J.

    2017-10-01

    A mathematical model is a virtual entity that uses mathematical language to describe the behavior of a system. Mathematical models are used particularly in the natural sciences and engineering disciplines like physics, biology, and electrical engineering, as well as in the social sciences like economics, sociology and political science. Physicists, engineers, computer scientists, and economists use mathematical models most extensively. With the advent of high performance processors and advanced mathematical computations, it is possible to develop high performing simulators for complicated Multi-Input Multi-Output (MIMO) systems such as quadruple-tank systems, aircraft, and boilers. This paper presents the development of the mathematical model of a 500 MW utility boiler, which is a highly complex system. A synergistic combination of operational experience, system identification and lower order modeling philosophy has been effectively used to develop a simplified but accurate model of the circulation system of a utility boiler, which is a MIMO system. The results obtained are found to be in good agreement with the physics of the process and with the results obtained through the design procedure. The model obtained can be directly used for control system studies and to realize hardware simulators for boiler testing and operator training.

  2. Optimization modeling of U.S. renewable electricity deployment using local input variables

    Science.gov (United States)

    Bernstein, Adam

    For the past five years, state Renewable Portfolio Standard (RPS) laws have been a primary driver of renewable electricity (RE) deployments in the United States. However, four key trends currently developing, namely (i) lower natural gas prices, (ii) slower growth in electricity demand, (iii) the challenge of balancing intermittent RE within the U.S. transmission regions, and (iv) fewer economical sites for RE development, may limit the efficacy of RPS laws over the remainder of the current RPS statutes' lifetime. An outsized proportion of U.S. RE build occurs in a small number of favorable locations, increasing the effects of these variables on marginal RE capacity additions. A state-by-state analysis is necessary to study the U.S. electric sector and to generate technology-specific generation forecasts. We used LP optimization modeling similar to the National Renewable Energy Laboratory (NREL) Renewable Energy Development System (ReEDS) to forecast RE deployment across the 8 U.S. states with the largest electricity load, and found state-level RE projections to Year 2031 significantly lower than those implied in the Energy Information Administration (EIA) 2013 Annual Energy Outlook forecast. Additionally, the majority of states do not achieve their RPS targets in our forecast. Combined with the tendency of prior research and RE forecasts to focus on larger national and global scale models, we posit that further bottom-up state and local analysis is needed for more accurate policy assessment, forecasting, and ongoing revision of variables as parameter values evolve through time. Current optimization software eliminates much of the need for algorithm coding and programming, allowing for rapid model construction and updating across many customized state and local RE parameters. 
Further, our results can be tested against the empirical outcomes that will be observed over the coming years, and the forecast deviation from the actuals can be attributed to discrete parameter

  3. Impact of Uncertainty Characterization of Satellite Rainfall Inputs and Model Parameters on Hydrological Data Assimilation with the Ensemble Kalman Filter for Flood Prediction

    Science.gov (United States)

    Vergara, H. J.; Kirstetter, P.; Hong, Y.; Gourley, J. J.; Wang, X.

    2013-12-01

    The Ensemble Kalman Filter (EnKF) is arguably the assimilation approach that has found the widest application in hydrologic modeling. Its relatively easy implementation and computational efficiency make it an attractive method for research and operational purposes. However, the scientific literature featuring this approach lacks guidance on how the errors in the forecast need to be characterized so as to get the required corrections from the assimilation process. Moreover, several studies have indicated that the performance of the EnKF is 'sub-optimal' when assimilating certain hydrologic observations. Likewise, some authors have suggested that the underlying assumptions of the Kalman Filter and its dependence on linear dynamics make the EnKF unsuitable for hydrologic modeling. Such assertions are often based on the ineffectiveness and poor robustness of EnKF implementations resulting from restrictive specification of error characteristics and the absence of a-priori information of error magnitudes. Therefore, understanding the capabilities and limitations of the EnKF to improve hydrologic forecasts requires studying its sensitivity to the manner in which errors in the hydrologic modeling system are represented through ensembles. This study presents a methodology that explores various uncertainty representation configurations to characterize the errors in the hydrologic forecasts in a data assimilation context. The uncertainty in rainfall inputs is represented through a Generalized Additive Model for Location, Scale, and Shape (GAMLSS), which provides information about second-order statistics of quantitative precipitation estimate (QPE) error. The uncertainty in model parameters is described by adding perturbations based on parameter covariance information. The method allows for the identification of rainfall and parameter perturbation combinations for which the performance of the EnKF is 'optimal' given a set of objective functions. 
In this process, information about
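    For reference, a minimal stochastic EnKF analysis step looks like the following. This is a generic textbook formulation with hypothetical toy numbers, not the rainfall/parameter perturbation scheme of the study.

```python
import numpy as np

rng = np.random.default_rng(1)

def enkf_update(ensemble, obs, obs_var, H):
    """One stochastic EnKF analysis step.

    ensemble : (n_members, n_state) forecast states
    obs      : (n_obs,) observation vector
    obs_var  : (n_obs,) observation error variances
    H        : (n_obs, n_state) linear observation operator
    """
    n = ensemble.shape[0]
    Hx = ensemble @ H.T                              # predicted observations
    Xp = ensemble - ensemble.mean(axis=0)            # state anomalies
    Yp = Hx - Hx.mean(axis=0)                        # observation anomalies
    P_xy = Xp.T @ Yp / (n - 1)                       # state-obs cross-covariance
    P_yy = Yp.T @ Yp / (n - 1) + np.diag(obs_var)    # innovation covariance
    K = P_xy @ np.linalg.inv(P_yy)                   # Kalman gain
    # Perturbed observations keep the analysis ensemble spread consistent.
    obs_pert = obs + rng.normal(scale=np.sqrt(obs_var), size=Hx.shape)
    return ensemble + (obs_pert - Hx) @ K.T

# Toy example: scalar state, prior ensemble around 5, observation of 3.
prior = rng.normal(5.0, 2.0, size=(100, 1))
analysis = enkf_update(prior, np.array([3.0]), np.array([0.25]), np.eye(1))
```

The point of the perturbation configurations discussed above is that `obs_var` and the parameter perturbations injected into `ensemble` are exactly the knobs that determine whether this update over- or under-corrects the forecast.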

  4. Consumer input into health care: Time for a new active and comprehensive model of consumer involvement.

    Science.gov (United States)

    Hall, Alix E; Bryant, Jamie; Sanson-Fisher, Rob W; Fradgley, Elizabeth A; Proietto, Anthony M; Roos, Ian

    2018-03-07

    To ensure the provision of patient-centred health care, it is essential that consumers are actively involved in the process of determining and implementing health-care quality improvements. However, common strategies used to involve consumers in quality improvements, such as consumer membership on committees and collection of patient feedback via surveys, are ineffective and have a number of limitations, including: limited representativeness; tokenism; a lack of reliable and valid patient feedback data; infrequent assessment of patient feedback; delays in acquiring feedback; and how collected feedback is used to drive health-care improvements. We propose a new active model of consumer engagement that aims to overcome these limitations. This model involves the following: (i) the development of a new measure of consumer perceptions; (ii) low cost and frequent electronic data collection of patient views of quality improvements; (iii) efficient feedback to the health-care decision makers; and (iv) active involvement of consumers that fosters power to influence health system changes. © 2018 The Authors Health Expectations published by John Wiley & Sons Ltd.

  5. Global Nitrous Oxide Emissions from Agricultural Soils: Magnitude and Uncertainties Associated with Input Data and Model Parameters

    Science.gov (United States)

    Xu, R.; Tian, H.; Pan, S.; Yang, J.; Lu, C.; Zhang, B.

    2016-12-01

    Human activities have caused significant perturbations of the nitrogen (N) cycle, resulting in an increase of about 21% in the atmospheric N2O concentration since the pre-industrial era. This large increase is mainly caused by intensive agricultural activities, including the application of nitrogen fertilizer and the expansion of leguminous crops. Substantial efforts have been made to quantify global and regional N2O emission from agricultural soils in the last several decades using a wide variety of approaches, such as ground-based observations, atmospheric inversions, and process-based models. However, large uncertainties exist in those estimates as well as in the methods themselves. In this study, we used a coupled biogeochemical model (DLEM) to estimate the magnitude and the spatial and temporal patterns of N2O emissions from global croplands in the past five decades (1961-2012). To estimate uncertainties associated with input data and model parameters, we have implemented a number of simulation experiments with DLEM, accounting for key parameter values that affect the calculation of N2O fluxes (i.e., maximum nitrification and denitrification rates, N fixation rate, and the adsorption coefficient for soil ammonium and nitrate) and different sets of input data, including climate, land management practices (i.e., nitrogen fertilizer types, application rates and timings, with/without irrigation), N deposition, and land use and land cover change. This work provides a robust estimate of global N2O emissions from agricultural soils as well as identifies key gaps and limitations in the existing model and data that need to be investigated in the future.

  6. Modeling of the impact of Rhone River nutrient inputs on the dynamics of planktonic diversity

    Science.gov (United States)

    Alekseenko, Elena; Baklouti, Melika; Garreau, Pierre; Guyennon, Arnaud; Carlotti, François

    2014-05-01

    Recent studies devoted to the Mediterranean Sea highlight that a large number of uncertainties still exist, particularly as regards the variations of elemental stoichiometry of all compartments of pelagic ecosystems (The MerMex Group, 2011, Pujo-Pay et al., 2011, Malatonne-Rizotti and the Pan-Med Group, 2012). Moreover, during the last two decades, it was observed that the inorganic N:P ratio in all the Mediterranean rivers, including the Rhone River, has increased dramatically, thus strengthening the P-limitation in the Mediterranean waters (Ludwig et al., 2009, The MerMex group, 2011) and increasing the anomaly in the N:P ratio of the Gulf of Lions and all the western part of the NW Mediterranean. At which time scales such a change will impact the biogeochemical stocks and fluxes of the Gulf of Lions and of the whole NW Mediterranean still remains unknown. In the same way, it is still uncertain how this increase in the N:P ratio will modify the composition of the trophic web, and potentially lead to regime shifts, for example by favouring one of the classical food chains of the sea considered in Parsons & Lalli (2002). To address this question, the Eco3M-MED biogeochemical model (Baklouti et al., 2006a,b, Alekseenko et al., 2014), representing the first trophic levels from bacteria to mesozooplankton, coupled with the hydrodynamical model MARS3D (Lazure & Dumas, 2008), is used. This model has already been partially validated (Alekseenko et al., 2014), and the fact that it describes each biogenic compartment in terms of its abundance (for organisms), and carbon, phosphorus, nitrogen and chlorophyll (for autotrophs) implies that all the information on the intracellular status of organisms and on the element(s) that limit(s) their growth will be available. The N:P ratios in water, organisms and the exported material will also be analyzed. In practice, the work will first consist in running different scenarios starting from similar initial early winter

  7. From requirements to Java in a snap model-driven requirements engineering in practice

    CERN Document Server

    Smialek, Michal

    2015-01-01

    This book provides a coherent methodology for Model-Driven Requirements Engineering which stresses the systematic treatment of requirements within the realm of modelling and model transformations. The underlying basic assumption is that detailed requirements models are used as first-class artefacts playing a direct role in constructing software. To this end, the book presents the Requirements Specification Language (RSL) that allows precision and formality, which eventually permits automation of the process of turning requirements into a working system by applying model transformations and co

  8. Remote sensing inputs to National Model Implementation Program for water resources quality improvement

    Science.gov (United States)

    Eidenshink, J. C.; Schmer, F. A.

    1979-01-01

    The Lake Herman watershed in southeastern South Dakota has been selected as one of seven water resources systems in the United States for involvement in the National Model Implementation Program (MIP). MIP is a pilot program initiated to illustrate the effectiveness of existing water resources quality improvement programs. The Remote Sensing Institute (RSI) at South Dakota State University has produced a computerized geographic information system for the Lake Herman watershed. All components necessary for the monitoring and evaluation process were included in the data base. The computerized data were used to produce thematic maps and tabular data for the land cover and soil classes within the watershed. These data are being utilized operationally by SCS resource personnel for planning and management purposes.

  9. Effect of delayed response in growth on the dynamics of a chemostat model with impulsive input

    International Nuclear Information System (INIS)

    Jiao Jianjun; Yang Xiaosong; Chen Lansun; Cai Shaohong

    2009-01-01

    In this paper, a chemostat model with delayed response in growth and impulsive perturbations on the substrate is considered. Using the discrete dynamical system determined by the stroboscopic map, we obtain a microorganism-extinction periodic solution and, further, the condition under which this periodic solution is globally attractive. By use of the theory of delay functional and impulsive differential equations, we also obtain the condition for permanence of the investigated system. Our results indicate that the discrete time delay influences the dynamic behavior of the investigated system, and they provide a tactical basis for experimenters to control the outcome of the chemostat. Furthermore, numerical analysis is included to illuminate the dynamics of the system as affected by the discrete time delay.
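    The impulsive substrate input can be sketched numerically. The Euler simulation below (hypothetical parameters, omitting the growth delay treated analytically in the paper) shows the substrate settling onto the periodic orbit that underlies the microorganism-extinction periodic solution.

```python
import numpy as np

# Substrate washout with periodic impulsive input, in the absence of
# microorganisms (illustrative parameters, not from the paper).
D, dS, T, dt = 0.5, 2.0, 4.0, 0.01   # dilution rate, pulse size, period, step
pulse_every = int(round(T / dt))     # integration steps between pulses

def simulate(t_end=40.0):
    steps = int(round(t_end / dt))
    S, out = 0.0, np.empty(steps)
    for k in range(steps):
        if k > 0 and k % pulse_every == 0:
            S += dS                   # impulsive substrate input
        S += dt * (-D * S)            # washout between pulses
        out[k] = S
    return out

S = simulate()
```

The stroboscopic map of this system, S_next = (S + dS) * exp(-D*T), has the globally attractive fixed point dS * exp(-D*T) / (1 - exp(-D*T)), which is what the trajectory converges to here after a few pulse periods.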

  10. Modeling uncertainty in requirements engineering decision support

    Science.gov (United States)

    Feather, Martin S.; Maynard-Zhang, Pedrito; Kiper, James D.

    2005-01-01

    One inherent characteristic of requirements engineering is a lack of certainty during this early phase of a project. Nevertheless, decisions about requirements must be made in spite of this uncertainty. Here we describe the context in which we are exploring this, and some initial work to support elicitation of uncertain requirements and to deal with the combination of such information from multiple stakeholders.

  11. Development of input data layers for the FARSITE fire growth model for the Selway-Bitterroot Wilderness Complex, USA

    Science.gov (United States)

    Robert E. Keane; Janice L. Garner; Kirsten M. Schmidt; Donald G. Long; James P. Menakis; Mark A. Finney

    1998-01-01

    Fuel and vegetation spatial data layers required by the spatially explicit fire growth model FARSITE were developed for all lands in and around the Selway-Bitterroot Wilderness Area in Idaho and Montana. Satellite imagery and terrain modeling were used to create the three base vegetation spatial data layers of potential vegetation, cover type, and structural stage....

  12. Deriving input parameters for cost-effectiveness modeling: taxonomy of data types and approaches to their statistical synthesis.

    Science.gov (United States)

    Saramago, Pedro; Manca, Andrea; Sutton, Alex J

    2012-01-01

    The evidence base informing economic evaluation models is rarely derived from a single source. Researchers are typically expected to identify and combine available data to inform the estimation of model parameters for a particular decision problem. The absence of clear guidelines on what data can be used and how to effectively synthesize this evidence base under different scenarios inevitably leads to different approaches being used by different modelers. The aim of this article is to produce a taxonomy that can help modelers identify the most appropriate methods to use when synthesizing the available data for a given model parameter. This article developed a taxonomy based on possible scenarios faced by the analyst when dealing with the available evidence. While mainly focusing on clinical effectiveness parameters, this article also discusses strategies relevant to other key input parameters in any economic model (i.e., disease natural history, resource use/costs, and preferences). The taxonomy categorizes the evidence base for health economic modeling according to whether 1) single or multiple data sources are available, 2) individual or aggregate data are available (or both), or 3) individual or multiple decision model parameters are to be estimated from the data. References to examples of the key methodological developments for each entry in the taxonomy, together with citations to where such methods have been used in practice, are provided throughout. The taxonomy developed in this article is intended to improve the quality of the synthesis of evidence informing decision models by bringing recent methodological developments in this field to the attention of health economics modelers. Copyright © 2012 International Society for Pharmacoeconomics and Outcomes Research (ISPOR). Published by Elsevier Inc. All rights reserved.

  13. Optimization model of peach production relevant to input energies – Yield function in Chaharmahal va Bakhtiari province, Iran

    International Nuclear Information System (INIS)

    Ghatrehsamani, Shirin; Ebrahimi, Rahim; Kazi, Salim Newaz; Badarudin Badry, Ahmad; Sadeghinezhad, Emad

    2016-01-01

    The aim of this study was to determine the amount of input-output energy used in peach production and to develop an optimal model of production in Chaharmahal va Bakhtiari province, Iran. Data were collected from 100 producers by administering a questionnaire in face-to-face interviews. Farms were selected based on a random sampling method. Results revealed that the total energy of production is 47,951.52 MJ/ha and the highest share of energy consumption belongs to chemical fertilizers (35.37%). Consumption of direct energy was 47.4% while indirect energy was 52.6%. Total energy consumption was also divided into two groups, renewable and non-renewable (19.2% and 80.8%, respectively). Energy use efficiency, energy productivity, specific energy and net energy were calculated as 0.433, 0.228 (kg/MJ), 4.38 (MJ/kg) and −27,161.722 (MJ/ha), respectively. The negative sign of the net energy indicates a net loss of production energy; with an appropriate strategy, energy losses could be reduced and the negative effect of some parameters diminished. In addition, energy efficiency was not high enough. Some of the input energies were applied to machinery, chemical fertilizer, irrigation water and electricity, which had a significant effect on increasing production, and MPP (marginal physical productivity) was determined for the input variables. This parameter was positive for the energy inputs of machinery, diesel fuel, chemical fertilizer, irrigation water and electricity, while it was negative for other inputs such as chemical pesticides and human labor. Finally, there is a need to pursue a new policy to force producers to undertake energy-efficient practices to establish sustainable production systems without disrupting the natural resources. In addition, extension activities are needed to improve the efficiency of energy consumption and to sustain the natural resources. - Highlights: • Replacing non-renewable energy with renewable
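    The reported indicators follow from standard definitions. The sketch below back-calculates yield and output energy from the paper's ratios, so small rounding differences from the published figures (e.g. the reported 4.38 MJ/kg) are expected.

```python
# Standard energy indicators for crop production.
input_energy = 47951.52                 # total input energy, MJ/ha (reported)
output_energy = 0.433 * input_energy    # MJ/ha, back-calculated from the
                                        # reported energy use efficiency
yield_kg = 0.228 * input_energy         # kg/ha, back-calculated from the
                                        # reported energy productivity

energy_use_efficiency = output_energy / input_energy   # output/input, unitless
energy_productivity = yield_kg / input_energy          # kg/MJ
specific_energy = input_energy / yield_kg              # MJ/kg
net_energy = output_energy - input_energy              # MJ/ha
```

Since output energy is less than half the input energy, net energy is negative, which is the loss of production energy the abstract refers to.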

  14. Evaluation of a Regional Australian Nurse-Led Parkinson's Service Using the Context, Input, Process, and Product Evaluation Model.

    Science.gov (United States)

    Jones, Belinda; Hopkins, Genevieve; Wherry, Sally-Anne; Lueck, Christian J; Das, Chandi P; Dugdale, Paul

    2016-01-01

    A nurse-led Parkinson's service was introduced at Canberra Hospital and Health Services in 2012 with the primary objective of improving the care and self-management of people with a diagnosis of Parkinson's disease (PD) and related movement disorders. Other objectives of the Service included improving the quality of life of patients with PD and reducing their caregiver burden, improving the knowledge and understanding of PD among healthcare professionals, and reducing unnecessary hospital admissions. This article evaluates the first 2 years of this Service. The Context, Input, Process, and Product Evaluation Model was used to evaluate the Parkinson's and Movement Disorder Service. The context evaluation was conducted through discussions with stakeholders, review of PD guidelines and care pathways, and assessment of service gaps. Input: The input evaluation was carried out by reviewing the resources and strategies used in the development of the Service. The process evaluation was undertaken by reviewing the areas of the implementation that went well and identifying issues and ongoing gaps in service provision. Product: Finally, product evaluation was undertaken by conducting stakeholder interviews and surveying patients in order to assess their knowledge and perception of value, and the patient experience of the Service. Admission data before and after implementation of the Parkinson's and Movement Disorder Service were also compared for any notable trends. Several gaps in service provision for patients with PD in the Australian Capital Territory were identified, prompting the development of a PD Service to address some of them. Input: Funding for a Parkinson's disease nurse specialist was made available, and existing resources were used to develop clinics, education sessions, and outreach services. Clinics and education sessions were implemented successfully, with positive feedback from patients and healthcare professionals. 
However, outreach services were limited.

  15. A goal-oriented requirements modelling language for enterprise architecture

    NARCIS (Netherlands)

    Quartel, Dick; Engelsman, W.; Jonkers, Henk; van Sinderen, Marten J.

    2009-01-01

    Methods for enterprise architecture, such as TOGAF, acknowledge the importance of requirements engineering in the development of enterprise architectures. Modelling support is needed to specify, document, communicate and reason about goals and requirements. Current modelling techniques for

  16. Channel responses to varying sediment input: A flume experiment modeled after Redwood Creek, California

    Science.gov (United States)

    Madej, M.A.; Sutherland, D.G.; Lisle, T.E.; Pryor, B.

    2009-01-01

At the reach scale, a channel adjusts to sediment supply and flow through mutual interactions among channel form, bed particle size, and flow dynamics that govern river bed mobility. Sediment can impair the beneficial uses of a river, but the timescales for studying recovery following high sediment loading in the field setting make flume experiments appealing. We use a flume experiment, coupled with field measurements in a gravel-bed river, to explore sediment transport, storage, and mobility relations under various sediment supply conditions. Our flume experiment modeled adjustments of channel morphology, slope, and armoring in a gravel-bed channel. Under moderate sediment increases, channel bed elevation increased and sediment output increased, but channel planform remained similar to pre-feed conditions. During the following degradational cycle, most of the excess sediment was evacuated from the flume and the bed became armored. Under high sediment feed, channel bed elevation increased, the bed became smoother, mid-channel bars and bedload sheets formed, and water surface slope increased. Concurrently, output increased and became more poorly sorted. During the last degradational cycle, the channel became armored and channel incision ceased before all excess sediment was removed. Selective transport of finer material was evident throughout the aggradational cycles and became more pronounced during degradational cycles as the bed became armored. Our flume results of changes in bed elevation, sediment storage, channel morphology, and bed texture parallel those from field surveys of Redwood Creek, northern California, which has exhibited channel bed degradation for 30 years following a large aggradation event in the 1970s. The flume experiment suggested that channel recovery in terms of reestablishing a specific morphology may not occur, but the channel may return to a state of balancing sediment supply and transport capacity.

  17. Modelling of uranium inputs and its fate in soil; Modellierung von Uraneintraegen aus Duengern und ihr Verbleib im Boden

    Energy Technology Data Exchange (ETDEWEB)

    Achatz, M. [Bundesamt fuer Strahlenschutz, Berlin (Germany); Urso, L. [Bundesamt fuer Strahlenschutz, Oberschleissheim (Germany)

    2016-07-01

87 % of mineral phosphate fertilizers are produced from sedimentary rock phosphate, which generally contains heavy metals such as uranium. The solution and migration behavior of uranium is determined not only by redox conditions but also by pH and by the quality and quantity of available ligands. Soil components such as clay minerals, pedogenic oxides, and soil organic matter also play an important role in sorption. To provide a sufficiently detailed speciation model of U in soil, several physical and chemical components have to be included so that distribution coefficients (k{sub D}) and sorption processes can be stated. The model of Hormann and Fischer served as the basis for modelling uranium mobility in soil using the program PhreeqC. The use of real soil and soil water measurements may help to identify the factors and processes influencing the mobility of uranium under realistic conditions. Additionally, further predictions of uranium migration in soil can be made based on modelling with PhreeqC. Modelling uranium inputs and their fate in soil can help to distinguish anthropogenic inputs from the geogenic origin of uranium in soil.
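The distribution coefficient k{sub D} named in the abstract is conventionally defined as the ratio of the sorbed concentration to the equilibrium solution concentration. A minimal sketch with purely illustrative numbers (not values from the study):

```python
def distribution_coefficient(c_sorbed, c_solution):
    """k_D (L/kg): concentration sorbed to the solid phase (mg/kg)
    divided by the equilibrium concentration in solution (mg/L)."""
    return c_sorbed / c_solution

# purely illustrative numbers, not values from the study
kd = distribution_coefficient(c_sorbed=46.0, c_solution=0.92)
```

A high k{sub D} indicates strong retention by the soil and hence low mobility of uranium.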

  18. Comparison of neutron capture cross sections obtained from two Hauser-Feshbach statistical models on a short-lived nucleus using experimentally constrained input

    Science.gov (United States)

    Lewis, Rebecca; Liddick, Sean; Spyrou, Artemis; Crider, Benjamin; Dombos, Alexander; Naqvi, Farheen; Prokop, Christopher; Quinn, Stephen; Larsen, Ann-Cecilie; Crespo Campo, Lucia; Guttormsen, Magne; Renstrom, Therese; Siem, Sunniva; Bleuel, Darren; Couture, Aaron; Mosby, Shea; Perdikakis, George

    2017-09-01

A majority of the abundance of the elements above iron is produced by neutron capture reactions, and, in explosive stellar processes, many of these reactions take place on unstable nuclei. Direct neutron capture experiments can only be performed on stable and long-lived nuclei, requiring indirect methods for the remaining isotopes. Statistical neutron capture can be described using the nuclear level density (NLD), the γ strength function (γSF), and an optical model. The NLD and γSF can be obtained using the β-Oslo method. The NLD and γSF were recently determined for 74Zn using the β-Oslo method, and were used in both TALYS and CoH to calculate the 73Zn(n, γ)74Zn neutron capture cross section. The cross sections calculated in TALYS and CoH are expected to be identical if the inputs for both codes are the same; however, after a thorough investigation into the inputs for the 73Zn(n, γ)74Zn reaction, a factor-of-two discrepancy between the two codes remains.

  19. A 2nd generation static model for predicting greenhouse energy inputs, as an aid for production planning

    CERN Document Server

    Jolliet, O; Munday, G L

    1985-01-01

A model which allows accurate prediction of the energy consumption of a greenhouse is a useful tool for production planning and optimisation of greenhouse components. To date, two types of model have been developed: very simple models of low precision, and precise dynamic models unsuitable for use over long periods and too complex for use in practice. A theoretical study and measurements at the CERN trial greenhouse have allowed development of a new static model named "HORTICERN", easy to use and as precise as more complex dynamic models. This paper demonstrates the potential of this model for long-term production planning. The model gives precise predictions of energy consumption for given greenhouse conditions of use (inside temperatures, dehumidification by ventilation, …) and takes into account local climatic conditions (wind, radiative losses to the sky, and solar gains) and the type of greenhouse (cladding, thermal screen …). The HORTICERN method has been developed for PC use and requires less...
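The HORTICERN model itself is not reproduced in this record, but in its simplest form a static greenhouse energy model of the kind described reduces to a steady-state balance of cladding losses against solar gains. A minimal sketch under that assumption; the U-value, area, temperatures and gains below are illustrative, not parameters from the paper:

```python
def heating_demand(u_value, area, t_inside, t_outside, solar_gain):
    """Static greenhouse energy balance (W): conductive/convective
    losses through the cladding, U*A*(Ti - Te), minus passive solar
    gains; negative demand is clipped to zero (no heating needed)."""
    loss = u_value * area * (t_inside - t_outside)
    return max(loss - solar_gain, 0.0)

# illustrative values: single-glass cladding on a cold night
night = heating_demand(u_value=6.0, area=1000.0,
                       t_inside=18.0, t_outside=3.0, solar_gain=0.0)
# sunny midday: gains exceed losses, so no heating is required
day = heating_demand(u_value=6.0, area=1000.0,
                     t_inside=18.0, t_outside=12.0, solar_gain=200000.0)
```

Integrating such an hourly balance over local climate records is what allows long-term planning without a full dynamic simulation.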

  20. Neonatal intensive care nursing curriculum challenges based on context, input, process, and product evaluation model: A qualitative study

    Directory of Open Access Journals (Sweden)

    Mansoureh Ashghali-Farahani

    2018-01-01

Background: Weakness of curriculum development in nursing education results in a lack of professional skills in graduates. This study was done on master's students in nursing to evaluate the challenges of the neonatal intensive care nursing curriculum based on the context, input, process, and product (CIPP) evaluation model. Materials and Methods: This study was conducted with a qualitative approach, completed according to the CIPP evaluation model. The study was conducted from May 2014 to April 2015. The research community included neonatal intensive care nursing master's students, graduates, faculty members, neonatologists, nurses working in the neonatal intensive care unit (NICU), and mothers of infants who were hospitalized in such wards. Purposeful sampling was applied. Results: The data analysis showed that there were two main categories: “inappropriate infrastructure” and “unknown duties,” which influenced the context formation of the NICU master's curriculum. The input was formed by five categories, including “biomedical approach,” “incomprehensive curriculum,” “lack of professional NICU nursing mentors,” “inappropriate admission process of NICU students,” and “lack of NICU skill labs.” Three categories were extracted in the process, including “more emphasis on theoretical education,” “the overlap of credits with each other and the inconsistency among the mentors,” and “ineffective assessment.” Finally, five categories were extracted in the product, including “preferring routine work instead of professional job,” “tendency to leave the job,” “clinical incompetency of graduates,” “the conflict between graduates and nursing staff expectations,” and “dissatisfaction of graduates.” Conclusions: Some changes are needed in the NICU master's curriculum by considering nursing experts' comments and evaluating the consequences of such a program.

  1. Selection Input Output by Restriction Using DEA Models Based on a Fuzzy Delphi Approach and Expert Information

    Science.gov (United States)

    Arsad, Roslah; Nasir Abdullah, Mohammad; Alias, Suriana; Isa, Zaidi

    2017-09-01

Stock evaluation has always been an interesting problem for investors. In this paper, a comparison of the efficiency of stocks of companies listed on Bursa Malaysia was made through the application of the estimation method of Data Envelopment Analysis (DEA). One of the interesting research subjects in DEA is the selection of appropriate input and output parameters. In this study, DEA was used to measure the efficiency of stocks of listed companies on Bursa Malaysia in terms of financial ratios, in order to evaluate stock performance. Based on previous studies and the Fuzzy Delphi Method (FDM), the most important financial ratios were selected. The results indicated that return on equity, return on assets, net profit margin, operating profit margin, earnings per share, price to earnings and debt to equity were the most important ratios. Using expert information, all the parameters were classified as inputs and outputs. The main objectives were to identify the most critical financial ratios, classify them based on expert information, and compute the relative efficiency scores of stocks, as well as rank them completely within the construction and materials industry. The analysis employed Alirezaee and Afsharian's model, in which the original Charnes, Cooper and Rhodes (CCR) formulation with the assumption of Constant Returns to Scale (CRS) still holds. This method of ranking the relative efficiency of decision making units (DMUs) was value-added by the Balance Index. The data were for the year 2015, and the population of the research comprised the companies accepted on the stock market in the construction and materials industry (63 companies). According to the ranking, the proposed model can completely rank the 63 companies using the selected financial ratios.
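The CCR model with constant returns to scale referred to above can be solved as one linear program per DMU (the input-oriented envelopment form: minimise the radial contraction θ of DMU k's inputs subject to a feasible peer combination λ). A minimal sketch using SciPy's `linprog` on a toy three-DMU dataset; the data are illustrative, not the paper's 63 companies:

```python
import numpy as np
from scipy.optimize import linprog

def ccr_efficiency(X, Y, k):
    """Input-oriented CCR efficiency of DMU k.

    X: (n_dmus, n_inputs) inputs, Y: (n_dmus, n_outputs) outputs.
    Solves: min theta  s.t.  X^T lam <= theta * x_k,  Y^T lam >= y_k,  lam >= 0.
    """
    n, m = X.shape
    s = Y.shape[1]
    c = np.zeros(n + 1)
    c[0] = 1.0                                   # minimise theta
    A_ub, b_ub = [], []
    for i in range(m):                           # sum_j lam_j x_ij - theta x_ik <= 0
        A_ub.append(np.concatenate(([-X[k, i]], X[:, i])))
        b_ub.append(0.0)
    for r in range(s):                           # -sum_j lam_j y_rj <= -y_rk
        A_ub.append(np.concatenate(([0.0], -Y[:, r])))
        b_ub.append(-Y[k, r])
    bounds = [(None, None)] + [(0, None)] * n    # theta free, lam >= 0
    res = linprog(c, A_ub=np.array(A_ub), b_ub=b_ub,
                  bounds=bounds, method="highs")
    return res.fun

# toy data: two efficient DMUs and one dominated by their mix
X = np.array([[2.0, 4.0], [4.0, 2.0], [4.0, 6.0]])   # inputs
Y = np.array([[1.0], [1.0], [1.0]])                   # single output
scores = [ccr_efficiency(X, Y, k) for k in range(len(X))]
```

Efficient DMUs score 1.0; the third DMU is dominated by a convex mix of the first two and scores below 1.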

  2. REQUIREMENTS PARTICLE NETWORKS: AN APPROACH TO FORMAL SOFTWARE FUNCTIONAL REQUIREMENTS MODELLING

    OpenAIRE

    Wiwat Vatanawood; Wanchai Rivepiboon

    2001-01-01

In this paper, an approach to software functional requirements modelling using requirements particle networks is presented. In our approach, a set of requirements particles is defined as an essential tool to construct a visual model of the software functional requirements specification during the software analysis phase, and the relevant formal specification is systematically generated without requiring prior experience in writing formal specifications. A number of algorithms are presented to perform these for...

  3. Building traceable Event-B models from requirements

    OpenAIRE

    Alkhammash, Eman; Butler, Michael; Fathabadi, Asieh Salehi; Cîrstea, Corina

    2015-01-01

    Abstract Bridging the gap between informal requirements and formal specifications is a key challenge in systems engineering. Constructing appropriate abstractions in formal models requires skill and managing the complexity of the relationships between requirements and formal models can be difficult. In this paper we present an approach that aims to address the twin challenges of finding appropriate abstractions and managing traceability between requirements and models. Our approach is based o...

  4. How uncertainty in input and parameters influences transport model output: four-stage model case-study

    DEFF Research Database (Denmark)

    Manzo, Stefano; Nielsen, Otto Anker; Prato, Carlo Giacomo

    2015-01-01

If not properly quantified, the uncertainty inherent to transport models makes analyses based on their output highly unreliable. This study investigated uncertainty in four-stage transport models by analysing a Danish case-study: the Næstved model. The model describes the demand of transport… -year model outputs uncertainty. More precisely, this study contributes to the existing literature on the topic by investigating the effects on model outputs uncertainty deriving from the use of (i) different probability distributions in the sampling process, (ii) different assignment algorithms, and (iii)… of coefficient of variation, resulting from stochastic user equilibrium and user equilibrium is, respectively, of 0.425 and 0.468. Finally, network congestion does not show a high effect on model output uncertainty at the network level. However, the final uncertainty of links with higher volume/capacity ratio…
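The sampling-based uncertainty analysis described above, drawing model inputs from probability distributions and summarising output uncertainty as a coefficient of variation, can be sketched as follows. The demand distribution and the stand-in model here are hypothetical, not the Næstved model:

```python
import numpy as np

rng = np.random.default_rng(42)

def output_cv(sample_input, model, n=10_000):
    """Propagate input uncertainty through a model by Monte Carlo
    sampling; report the output coefficient of variation (std/mean)."""
    draws = sample_input(n)
    out = model(draws)
    return out.std() / out.mean()

# stand-in travel demand: normally distributed input with CV = 0.1
sample_demand = lambda n: rng.normal(100.0, 10.0, n)
# stand-in model: output grows slightly faster than linearly with demand
model = lambda d: 5.0 * d ** 1.2
cv = output_cv(sample_demand, model)
```

For this mildly nonlinear stand-in, the output CV lands near 1.2 times the input CV, as the delta method predicts; comparing such CVs across sampling distributions or assignment algorithms is the kind of experiment the study reports.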

  5. Artificial neural network modelling of biological oxygen demand in rivers at the national level with input selection based on Monte Carlo simulations.

    Science.gov (United States)

    Šiljić, Aleksandra; Antanasijević, Davor; Perić-Grujić, Aleksandra; Ristić, Mirjana; Pocajt, Viktor

    2015-03-01

    Biological oxygen demand (BOD) is the most significant water quality parameter and indicates water pollution with respect to the present biodegradable organic matter content. European countries are therefore obliged to report annual BOD values to Eurostat; however, BOD data at the national level is only available for 28 of 35 listed European countries for the period prior to 2008, among which 46% of data is missing. This paper describes the development of an artificial neural network model for the forecasting of annual BOD values at the national level, using widely available sustainability and economical/industrial parameters as inputs. The initial general regression neural network (GRNN) model was trained, validated and tested utilizing 20 inputs. The number of inputs was reduced to 15 using the Monte Carlo simulation technique as the input selection method. The best results were achieved with the GRNN model utilizing 25% less inputs than the initial model and a comparison with a multiple linear regression model trained and tested using the same input variables using multiple statistical performance indicators confirmed the advantage of the GRNN model. Sensitivity analysis has shown that inputs with the greatest effect on the GRNN model were (in descending order) precipitation, rural population with access to improved water sources, treatment capacity of wastewater treatment plants (urban) and treatment of municipal waste, with the last two having an equal effect. Finally, it was concluded that the developed GRNN model can be useful as a tool to support the decision-making process on sustainable development at a regional, national and international level.
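A GRNN is essentially Nadaraya-Watson kernel regression: each prediction is a Gaussian-kernel-weighted average of the training targets. A minimal sketch on toy one-dimensional data; the inputs and smoothing factor are illustrative, not the study's sustainability indicators:

```python
import numpy as np

def grnn_predict(X_train, y_train, X_query, sigma=0.5):
    """GRNN / Nadaraya-Watson regression: each prediction is the
    Gaussian-kernel-weighted average of the training targets."""
    preds = []
    for x in np.atleast_2d(X_query):
        d2 = np.sum((X_train - x) ** 2, axis=1)     # squared distances
        w = np.exp(-d2 / (2.0 * sigma ** 2))        # pattern-layer weights
        preds.append(np.dot(w, y_train) / np.sum(w))
    return np.array(preds)

# toy 1-D data drawn from y = 2x
X = np.array([[0.0], [1.0], [2.0], [3.0]])
y = np.array([0.0, 2.0, 4.0, 6.0])
pred = grnn_predict(X, y, np.array([[1.0]]), sigma=0.1)
```

The single smoothing parameter sigma is why GRNNs are attractive for small national-level datasets: there are no weights to train, only sigma to tune.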

  6. Crop yield response to soil fertility and N, P, K inputs in different environments: Testing and improving the QUEFTS model

    NARCIS (Netherlands)

    Sattari, S.Z.; Ittersum, van M.K.; Bouwman, A.F.; Smit, A.L.; Janssen, B.H.

    2014-01-01

    Global food production strongly depends on availability of nutrients. Assessment of future global phosphorus (P) fertilizer demand in interaction with nitrogen (N) and potassium (K) fertilizers under different levels of food demand requires a model-based approach. In this paper we tested use of the

  7. Sensitivity of a radiative transfer model to the uncertainty in the aerosol optical depth used as input

    Science.gov (United States)

    Román, Roberto; Bilbao, Julia; de Miguel, Argimiro; Pérez-Burgos, Ana

    2014-05-01

Radiative transfer models can be used to obtain solar radiative quantities at the Earth's surface, such as the erythemal ultraviolet (UVER) irradiance, which is the spectral irradiance weighted with the erythemal (sunburn) action spectrum, and the total shortwave irradiance (SW; 305-2,800 nm). Aerosol and atmospheric properties are necessary as inputs to the model in order to calculate the UVER and SW irradiances under cloudless conditions; however, the uncertainty in these inputs propagates into the simulations. The objective of this work is to quantify the uncertainty in UVER and SW simulations generated by the uncertainty in aerosol optical depth (AOD). Data from different satellite retrievals were downloaded for nine Spanish sites in the Iberian Peninsula: total ozone column from different databases, spectral surface albedo and water vapour column from the MODIS instrument, AOD at 443 nm and Ångström exponent (between 443 nm and 670 nm) from the MISR instrument onboard the Terra satellite, and single scattering albedo from the OMI instrument onboard the Aura satellite. The AOD at 443 nm data obtained from MISR were compared with AERONET measurements at six Spanish sites, yielding an uncertainty in the MISR AOD of 0.074. The radiative transfer model UVSPEC/libRadtran (version 1.7) was used to obtain the SW and UVER irradiance under cloudless conditions for each month and for different solar zenith angles (SZA) at the nine locations. The inputs used for these simulations were monthly climatology tables obtained from the available data at each location. Once the UVER and SW simulations were obtained, they were repeated twice, changing the monthly AOD values to the same AOD plus/minus its uncertainty. The maximum difference between the irradiance computed with the nominal AOD and the irradiance computed with AOD plus/minus its uncertainty was calculated for each month, SZA, and location. This difference was taken as the uncertainty in the model output caused by the AOD.
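The plus/minus bracketing procedure described above can be sketched generically: run the model at the nominal AOD and at AOD ± its uncertainty, then take the larger deviation from the nominal run. The exponential stand-in for the radiative transfer run and all numbers below are illustrative:

```python
import math

def aod_bracket_uncertainty(simulate, aod, aod_unc):
    """Run the model at the nominal AOD and at AOD +/- its uncertainty;
    the larger deviation from the nominal run is taken as the
    AOD-induced uncertainty in the model output."""
    nominal = simulate(aod)
    hi = simulate(aod + aod_unc)
    lo = simulate(aod - aod_unc)
    return max(abs(hi - nominal), abs(lo - nominal))

# illustrative stand-in: direct irradiance decays with optical depth
simulate = lambda aod: 1000.0 * math.exp(-0.5 * aod)
delta = aod_bracket_uncertainty(simulate, aod=0.2, aod_unc=0.074)
```

Because the response is nonlinear, the plus and minus runs deviate by different amounts, which is why the study takes the maximum rather than assuming symmetry.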

  8. Mixing Formal and Informal Model Elements for Tracing Requirements

    DEFF Research Database (Denmark)

    Jastram, Michael; Hallerstede, Stefan; Ladenberger, Lukas

    2011-01-01

Tracing between informal requirements and formal models is challenging. A method for such tracing should permit to deal efficiently with changes to both the requirements and the model. A particular challenge is posed by the persisting interplay of formal and informal elements. In this paper, we describe an incremental approach to requirements validation and systems modelling. Formal modelling facilitates a high degree of automation: it serves for validation and traceability. The foundation for our approach are requirements that are structured according to the WRSPM reference model. We provide a system for traceability with a state-based formal method that supports refinement. We do not require all specification elements to be modelled formally and support incremental incorporation of new specification elements into the formal model. Refinement is used to deal with larger amounts of requirements...

  9. A CORF computational model of a simple cell that relies on LGN input outperforms the Gabor function model

    NARCIS (Netherlands)

    Azzopardi, George; Petkov, Nicolai

    Simple cells in primary visual cortex are believed to extract local contour information from a visual scene. The 2D Gabor function (GF) model has gained particular popularity as a computational model of a simple cell. However, it short-cuts the LGN, it cannot reproduce a number of properties of real

  10. Optimizing the modified microdosimetric kinetic model input parameters for proton and 4He ion beam therapy application

    Science.gov (United States)

    Mairani, A.; Magro, G.; Tessonnier, T.; Böhlen, T. T.; Molinelli, S.; Ferrari, A.; Parodi, K.; Debus, J.; Haberer, T.

    2017-06-01

    Models able to predict relative biological effectiveness (RBE) values are necessary for an accurate determination of the biological effect with proton and 4He ion beams. This is particularly important when including RBE calculations in treatment planning studies comparing biologically optimized proton and 4He ion beam plans. In this work, we have tailored the predictions of the modified microdosimetric kinetic model (MKM), which is clinically applied for carbon ion beam therapy in Japan, to reproduce RBE with proton and 4He ion beams. We have tuned the input parameters of the MKM, i.e. the domain and nucleus radii, reproducing an experimental database of initial RBE data for proton and He ion beams. The modified MKM, with the best fit parameters obtained, has been used to reproduce in vitro cell survival data in clinically-relevant scenarios. A satisfactory agreement has been found for the studied cell lines, A549 and RENCA, with the mean absolute survival variation between the data and predictions within 2% and 5% for proton and 4He ion beams, respectively. Moreover, a sensitivity study has been performed varying the domain and nucleus radii and the quadratic parameter of the photon response curve. The promising agreement found in this work for the studied clinical-like scenarios supports the usage of the modified MKM for treatment planning studies in proton and 4He ion beam therapy.
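RBE at a given survival level is conventionally the ratio of the reference (photon) dose to the ion dose producing the same survival under the linear-quadratic model S = exp(-(αD + βD²)), which is the quantity the tuned MKM is asked to reproduce. A minimal sketch with illustrative α and β values, not the MKM parameters fitted in the study:

```python
import math

def dose_for_survival(alpha, beta, survival):
    """Dose D (Gy) giving the target survival fraction under the
    linear-quadratic model S = exp(-(alpha*D + beta*D**2))."""
    ln_s = math.log(survival)                    # negative for S < 1
    # positive root of beta*D**2 + alpha*D + ln_s = 0
    return (-alpha + math.sqrt(alpha ** 2 - 4.0 * beta * ln_s)) / (2.0 * beta)

def rbe(alpha_ref, beta_ref, alpha_ion, beta_ion, survival=0.1):
    """RBE = reference (photon) dose / ion dose at equal survival."""
    return (dose_for_survival(alpha_ref, beta_ref, survival)
            / dose_for_survival(alpha_ion, beta_ion, survival))

# illustrative LQ parameters: ion beam with a steeper initial slope
r = rbe(alpha_ref=0.2, beta_ref=0.05, alpha_ion=0.6, beta_ion=0.05)
```

Models like the MKM predict the ion α (and sometimes β) from microdosimetric quantities; the RBE then follows from exactly this equal-survival dose ratio.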

  11. Air quality modelling in the Berlin-Brandenburg region using WRF-Chem v3.7.1: sensitivity to resolution of model grid and input data

    Science.gov (United States)

    Kuik, Friderike; Lauer, Axel; Churkina, Galina; Denier van der Gon, Hugo A. C.; Fenner, Daniel; Mar, Kathleen A.; Butler, Tim M.

    2016-12-01

    Air pollution is the number one environmental cause of premature deaths in Europe. Despite extensive regulations, air pollution remains a challenge, especially in urban areas. For studying summertime air quality in the Berlin-Brandenburg region of Germany, the Weather Research and Forecasting Model with Chemistry (WRF-Chem) is set up and evaluated against meteorological and air quality observations from monitoring stations as well as from a field campaign conducted in 2014. The objective is to assess which resolution and level of detail in the input data is needed for simulating urban background air pollutant concentrations and their spatial distribution in the Berlin-Brandenburg area. The model setup includes three nested domains with horizontal resolutions of 15, 3 and 1 km and anthropogenic emissions from the TNO-MACC III inventory. We use RADM2 chemistry and the MADE/SORGAM aerosol scheme. Three sensitivity simulations are conducted updating input parameters to the single-layer urban canopy model based on structural data for Berlin, specifying land use classes on a sub-grid scale (mosaic option) and downscaling the original emissions to a resolution of ca. 1 km × 1 km for Berlin based on proxy data including traffic density and population density. The results show that the model simulates meteorology well, though urban 2 m temperature and urban wind speeds are biased high and nighttime mixing layer height is biased low in the base run with the settings described above. We show that the simulation of urban meteorology can be improved when specifying the input parameters to the urban model, and to a lesser extent when using the mosaic option. On average, ozone is simulated reasonably well, but maximum daily 8 h mean concentrations are underestimated, which is consistent with the results from previous modelling studies using the RADM2 chemical mechanism. 
Particulate matter is underestimated, which is partly due to an underestimation of secondary organic aerosols.

  12. Spreadsheet Decision Support Model for Training Exercise Material Requirements Planning

    National Research Council Canada - National Science Library

    Tringali, Arthur

    1997-01-01

    ... associated with military training exercises. The model combines the business practice of Material Requirements Planning and the commercial spreadsheet software capabilities of Lotus 1-2-3 to calculate the requirements for food, consumable...

  13. Extending enterprise architecture modelling with business goals and requirements

    NARCIS (Netherlands)

    Engelsman, W.; Quartel, Dick; Jonkers, Henk; van Sinderen, Marten J.

    The methods for enterprise architecture (EA), such as The Open Group Architecture Framework, acknowledge the importance of requirements modelling in the development of EAs. Modelling support is needed to specify, document, communicate and reason about goals and requirements. The current modelling

  14. Extended Fitts' model of pointing time in eye-gaze input system - Incorporating effects of target shape and movement direction into modeling.

    Science.gov (United States)

    Murata, Atsuo; Fukunaga, Daichi

    2018-04-01

This study attempted to investigate the effects of the target shape and the movement direction on the pointing time using an eye-gaze input system, and to extend Fitts' model so that these factors are incorporated into the model and its predictive power is enhanced. The target shape, the target size, the movement distance, and the direction of target presentation were set as within-subject experimental variables. The target shapes included a circle and rectangles with aspect ratios of 1:1, 1:2, 1:3, and 1:4. The movement directions included eight directions: upper, lower, left, right, upper left, upper right, lower left, and lower right. On the basis of the data identifying the effects of the target shape and the movement direction on the pointing time, an attempt was made to develop a generalized and extended Fitts' model that took into account the movement direction and the target shape. As a result, the generalized and extended model was found to fit the experimental data better and to be more effective for predicting the pointing time for a variety of human-computer interaction (HCI) tasks using an eye-gaze input system.
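The baseline model being extended here is Fitts' law in its Shannon formulation, MT = a + b·log2(D/W + 1), fitted by ordinary least squares. A minimal sketch on made-up pointing data (the distances, widths, and times below are illustrative, not the study's measurements):

```python
import numpy as np

def fitts_id(distance, width):
    """Index of difficulty, Shannon formulation: ID = log2(D/W + 1)."""
    return np.log2(distance / width + 1.0)

# made-up pointing data (distances and widths in px, times in ms)
D = np.array([64.0, 128.0, 256.0, 512.0])
W = np.array([16.0, 16.0, 32.0, 32.0])
MT = np.array([420.0, 520.0, 540.0, 620.0])

ID = fitts_id(D, W)
# ordinary least squares fit of MT = a + b*ID
A = np.vstack([np.ones_like(ID), ID]).T
(a, b), *_ = np.linalg.lstsq(A, MT, rcond=None)
```

The paper's extension amounts to letting a and b (or ID itself) vary with target shape and movement direction instead of being single constants.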

  15. Formal Requirements Modeling for Simulation-Based Verification

    OpenAIRE

    Otter, Martin; Thuy, Nguyen; Bouskela, Daniel; Buffoni, Lena; Elmqvist, Hilding; Fritzson, Peter; Garro, Alfredo; Jardin, Audrey; Olsson, Hans; Payelleville, Maxime; Schamai, Wladimir; Thomas, Eric; Tundis, Andrea

    2015-01-01

    This paper describes a proposal on how to model formal requirements in Modelica for simulation-based verification. The approach is implemented in the open source Modelica_Requirements library. It requires extensions to the Modelica language, that have been prototypically implemented in the Dymola and Open-Modelica software. The design of the library is based on the FOrmal Requirement Modeling Language (FORM-L) defined by EDF, and on industrial use cases from EDF and Dassault Aviation. It uses...

  16. Oil spill modeling input to the offshore environmental cost model (OECM) for US-BOEMRE's spill risk and costs evaluations

    International Nuclear Information System (INIS)

    French McCay, Deborah; Reich, Danielle; Rowe, Jill; Schroeder, Melanie; Graham, Eileen

    2011-01-01

This paper simulates the consequences of oil spills using a planning model known as the Offshore Environmental Cost Model (OECM). This study aims at creating various predictive models for possible oil spill scenarios in marine waters. A crucial part of this investigation was the SIMAP model, which analyzes the distance and the direction covered by the spill under certain test conditions, generating a regression equation that simulates the impact of the spill. Tests were run in two different regions: the Mid-Atlantic region and the Chukchi Sea. Results showed that the higher wind speeds and higher water temperature of the Mid-Atlantic region had greater impact on wildlife and the water column, respectively. However, shoreline impact was higher in the Chukchi area due to the multi-directional wind. It was also shown that, because of their higher diffusivity in water, lighter crude oils had more impact than heavier oils. It was suggested that this model could ultimately be applied to other oil spill scenarios occurring under similar conditions.

  17. Modeling the cellular mechanisms and olfactory input underlying the triphasic response of moth pheromone-sensitive projection neurons.

    Directory of Open Access Journals (Sweden)

    Yuqiao Gu

In the antennal lobe of the noctuid moth Agrotis ipsilon, most pheromone-sensitive projection neurons (PNs) exhibit a triphasic firing pattern of excitation (E1)-inhibition (I)-excitation (E2) in response to a pulse of the sex pheromone. To understand the mechanisms underlying this stereotypical discharge, we developed a biophysical model of a PN receiving inputs from olfactory receptor neurons (ORNs) via nicotinic cholinergic synapses. The ORN is modeled as an inhomogeneous Poisson process whose firing rate is a function of time, fitted to extracellular data recorded in response to pheromone stimulations at various concentrations and durations. The PN model is based on the Hodgkin-Huxley formalism with realistic ionic currents whose parameters were derived from previous studies. Simulations revealed that the inhibitory phase I can be produced by an SK current (Ca2+-gated small conductance K+ current) and that the excitatory phase E2 can result from the long-lasting response of the ORNs. Parameter analysis further revealed that the ending time of E1 depends on some parameters of the SK, Ca2+, nACh and Na+ currents; the duration of I mainly depends on the time constant of intracellular Ca2+ dynamics, the conductance of Ca2+ currents and some parameters of the nACh currents; and the mean firing frequencies of E1 and E2 depend differentially on the interaction of various currents. Thus it is likely that the interplay between PN intrinsic currents and feedforward synaptic currents is sufficient to generate the triphasic firing patterns observed in the noctuid moth A. ipsilon.
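An inhomogeneous Poisson spike train like the ORN input described above is commonly sampled by thinning (the Lewis-Shedler algorithm). A minimal sketch with an illustrative decaying firing-rate profile, not the rates fitted in the study:

```python
import numpy as np

rng = np.random.default_rng(0)

def inhomogeneous_poisson(rate_fn, t_max, rate_max):
    """Sample spike times by thinning (Lewis-Shedler): draw candidate
    events from a homogeneous process at rate_max and keep each with
    probability rate_fn(t) / rate_max."""
    spikes, t = [], 0.0
    while True:
        t += rng.exponential(1.0 / rate_max)      # next candidate event
        if t > t_max:
            break
        if rng.random() < rate_fn(t) / rate_max:  # thinning step
            spikes.append(t)
    return np.array(spikes)

# illustrative ORN rate: pheromone-evoked burst decaying with tau = 0.3 s
rate = lambda t: 200.0 * np.exp(-t / 0.3)
spikes = inhomogeneous_poisson(rate, t_max=2.0, rate_max=200.0)
```

The expected spike count is the integral of the rate (here about 60 over 2 s); such trains can then drive the synaptic conductances of a Hodgkin-Huxley-type PN model.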

  18. Extending enterprise architecture modelling with business goals and requirements

    Science.gov (United States)

    Engelsman, Wilco; Quartel, Dick; Jonkers, Henk; van Sinderen, Marten

    2011-02-01

    The methods for enterprise architecture (EA), such as The Open Group Architecture Framework, acknowledge the importance of requirements modelling in the development of EAs. Modelling support is needed to specify, document, communicate and reason about goals and requirements. The current modelling techniques for EA focus on the products, services, processes and applications of an enterprise. In addition, techniques may be provided to describe structured requirements lists and use cases. Little support is available however for modelling the underlying motivation of EAs in terms of stakeholder concerns and the high-level goals that address these concerns. This article describes a language that supports the modelling of this motivation. The definition of the language is based on existing work on high-level goal and requirements modelling and is aligned with an existing standard for enterprise modelling: the ArchiMate language. Furthermore, the article illustrates how EA can benefit from analysis techniques from the requirements engineering domain.

  19. Potential impact on the global atmospheric N2O budget of the increased nitrogen input required to meet future global food demands

    NARCIS (Netherlands)

    Mosier, A.; Kroeze, C.

    2000-01-01

    In most soils, biogenic formation of N2O is enhanced by an increase in available mineral N through increased nitrification and denitrification. N-fertilization, therefore, directly results in additional N2O formation. In addition, these inputs may lead to indirect formation of N2O after N leaching

  20. Visual input that matches the content of visual working memory requires less (not faster) evidence sampling to reach conscious access

    NARCIS (Netherlands)

    Gayet, S.; van Maanen, L.; Heilbron, M.; Paffen, C.L.E.; Van Der Stigchel, S.

    2016-01-01

    The content of visual working memory (VWM) affects the processing of concurrent visual input. Recently, it has been demonstrated that stimuli are released from interocular suppression faster when they match rather than mismatch a color that is memorized for subsequent recall. In order to investigate

  1. China’s Input-Output Efficiency of Water-Energy-Food Nexus Based on the Data Envelopment Analysis (DEA) Model

    Directory of Open Access Journals (Sweden)

    Guijun Li

    2016-09-01

    An explanation and quantification of the water-energy-food nexus (WEF-Nexus) is important to advance our understanding of regional resource management, which is presently in its infant stage. Evaluation of the current states, interconnections, and trends of the WEF-Nexus in cities has largely been ignored due to quantification hurdles and the lack of available data. Based on the interaction of the WEF-Nexus with the population, economic, and environmental systems, this paper builds an input-output index system at the city level. Using this index system, we evaluate the WEF-Nexus input-output efficiency with the data envelopment analysis (DEA) model, regarding each decision-making unit as a “black box” in order to explore the states and trends of the nexus. In an empirical study based on data from China, we compare the input-output efficiency of the WEF-Nexus in 30 provinces across China, from 2005 to 2014, to better understand their status and trends holistically. Together with the Malmquist index, factors leading to regional differences in the fluctuation of input-output efficiency are explored. Finally, we conclude that the DEA model captures the regional consumption of WEF resources in the horizontal dimension and its trends in the vertical dimension, and that, combined with the Malmquist index, it explains these variations and supports specific policy implications.
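The Malmquist index mentioned above summarizes productivity change between two periods from four DEA efficiency scores and decomposes it into catch-up and frontier-shift components. A sketch of the standard formulas, with made-up efficiency scores (no connection to the Chinese provincial data):

```python
import math

def malmquist(e_t_t, e_t1_t, e_t_t1, e_t1_t1):
    """Malmquist productivity index from DEA efficiency scores, where
    e_a_b is the efficiency of the period-a observation measured
    against the period-b frontier.  M > 1 means productivity growth."""
    return math.sqrt((e_t1_t / e_t_t) * (e_t1_t1 / e_t_t1))

def decompose(e_t_t, e_t1_t, e_t_t1, e_t1_t1):
    """Split M into efficiency change (catch-up) and technical
    change (frontier shift); their product equals M."""
    eff_change = e_t1_t1 / e_t_t
    tech_change = math.sqrt((e_t1_t / e_t1_t1) * (e_t_t / e_t_t1))
    return eff_change, tech_change

# Illustrative (made-up) scores for one province across two years
m = malmquist(0.80, 0.90, 0.75, 0.85)
eff, tech = decompose(0.80, 0.90, 0.75, 0.85)
print(round(m, 4), round(eff, 4), round(tech, 4))
```

The decomposition identity `M = eff_change * tech_change` is a useful sanity check when wiring the index to per-period DEA solver output.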

  2. Applying the Context, Input, Process, Product Evaluation Model for Evaluation, Research, and Redesign of an Online Master’s Program

    Directory of Open Access Journals (Sweden)

    Hatice Sancar Tokmak

    2013-07-01

    This study aimed to evaluate and redesign an online master’s degree program consisting of 12 courses from the informatics field using the context, input, process, product (CIPP) evaluation model. Research conducted during the redesign of the online program followed a mixed methodology in which data were collected through a CIPP survey, a focus-group interview, and an open-ended questionnaire. An initial CIPP survey sent to students, which had a response rate of approximately 60%, indicated that the Fuzzy Logic course did not fully meet the needs of students. Based on these findings, the program managers decided to improve this course, and a focus group was organized with the students of the Fuzzy Logic course in order to obtain more information to help in redesigning the course. Accordingly, the course was redesigned to include more examples and visuals, including videos; student-instructor interaction was increased through face-to-face meetings; and extra meetings were arranged before exams so that additional examples could be presented for problem-solving, to satisfy students with respect to assessment procedures. Lastly, the modifications to the Fuzzy Logic course were implemented, and the students in the course were sent an open-ended form asking them what they thought about the modifications. The results indicated that most students were pleased with the new version of the course.

  3. Migration of radionuclides with ground water: a discussion of the relevance of the input parameters used in model calculations

    International Nuclear Information System (INIS)

    Jensen, B.S.

    1982-01-01

    It is probably obvious to all that establishing the scientific basis of geological waste disposal by going deeper and deeper into detail may fill the working hours of hundreds of scientists for hundreds of years. Such an endeavor is, however, impossible to attain, and we are forced to define criteria telling us and others when knowledge and insight are sufficient. In the present case of geological disposal, one needs to be able to predict the migration behavior of a series of radionuclides under diverse conditions to ascertain that unacceptable transfer to the biosphere never occurs. We have already collected a huge amount of data concerning migration phenomena, some very useful, others less so, but we still need investigations departing from the simple idealized concepts which have most often provided modellers with input data for their calculations. I therefore advocate that basic research be pursued to the point where it is possible to put limits on the effect of the lesser-known factors on the migration behavior of radionuclides. When such limits have been established, it will be possible to make calculations on the worst cases, which may also occur. Although I personally believe that these extra investigations will prove additional safety in geological disposal, this fact will convince nobody; only experimental facts will do

  4. Fully automated calculation of image-derived input function in simultaneous PET/MRI in a sheep model

    Energy Technology Data Exchange (ETDEWEB)

    Jochimsen, Thies H.; Zeisig, Vilia [Department of Nuclear Medicine, Leipzig University Hospital, Liebigstr. 18, Leipzig (Germany); Schulz, Jessica [Max Planck Institute for Human Cognitive and Brain Sciences, Stephanstr. 1a, Leipzig, D-04103 (Germany); Werner, Peter; Patt, Marianne; Patt, Jörg [Department of Nuclear Medicine, Leipzig University Hospital, Liebigstr. 18, Leipzig (Germany); Dreyer, Antje Y. [Fraunhofer Institute of Cell Therapy and Immunology, Perlickstr. 1, Leipzig, D-04103 (Germany); Translational Centre for Regenerative Medicine, University Leipzig, Philipp-Rosenthal-Str. 55, Leipzig, D-04103 (Germany); Boltze, Johannes [Fraunhofer Institute of Cell Therapy and Immunology, Perlickstr. 1, Leipzig, D-04103 (Germany); Translational Centre for Regenerative Medicine, University Leipzig, Philipp-Rosenthal-Str. 55, Leipzig, D-04103 (Germany); Fraunhofer Research Institution of Marine Biotechnology and Institute for Medical and Marine Biotechnology, University of Lübeck, Lübeck (Germany); Barthel, Henryk; Sabri, Osama; Sattler, Bernhard [Department of Nuclear Medicine, Leipzig University Hospital, Liebigstr. 18, Leipzig (Germany)

    2016-02-13

    Obtaining the arterial input function (AIF) from image data in dynamic positron emission tomography (PET) examinations is a non-invasive alternative to arterial blood sampling. In simultaneous PET/magnetic resonance imaging (PET/MRI), high-resolution MRI angiographies can be used to define major arteries for correction of partial-volume effects (PVE) and point spread function (PSF) response in the PET data. The present study describes a fully automated method to obtain the image-derived input function (IDIF) in PET/MRI. Results are compared to those obtained by arterial blood sampling. To segment the trunk of the major arteries in the neck, a high-resolution time-of-flight MRI angiography was postprocessed by a vessel-enhancement filter based on the inertia tensor. Together with the measured PSF of the PET subsystem, the arterial mask was used for geometrical deconvolution, yielding the time-resolved activity concentration averaged over a major artery. The method was compared to manual arterial blood sampling at the hind leg of 21 sheep (animal stroke model) during measurement of blood flow with O15-water. Absolute quantification of activity concentration was compared after bolus passage during steady state, i.e., between 2.5- and 5-min post injection. Cerebral blood flow (CBF) values from blood sampling and IDIF were also compared. The cross-calibration factor obtained by comparing activity concentrations in blood samples and IDIF during steady state is 0.98 ± 0.10. In all examinations, the IDIF provided a much earlier and sharper bolus peak than in the time course of activity concentration obtained by arterial blood sampling. CBF using the IDIF was 22 % higher than CBF obtained by using the AIF yielded by blood sampling. The small deviation between arterial blood sampling and IDIF during steady state indicates that correction of PVE and PSF is possible with the method presented. The differences in bolus dynamics and, hence, CBF values can be explained by the

  5. Using cognitive modeling for requirements engineering in anesthesiology

    NARCIS (Netherlands)

    Pott, C; le Feber, J

    2005-01-01

    Cognitive modeling is a complexity reducing method to describe significant cognitive processes under a specified research focus. Here, a cognitive process model for decision making in anesthesiology is presented and applied in requirements engineering. Three decision making situations of

  6. Modeling Domain Variability in Requirements Engineering with Contexts

    Science.gov (United States)

    Lapouchnian, Alexei; Mylopoulos, John

    Various characteristics of the problem domain define the context in which the system is to operate and thus impact heavily on its requirements. However, most requirements specifications do not consider contextual properties and few modeling notations explicitly specify how domain variability affects the requirements. In this paper, we propose an approach for using contexts to model domain variability in goal models. We discuss the modeling of contexts, the specification of their effects on system goals, and the analysis of goal models with contextual variability. The approach is illustrated with a case study.

  7. Application of regional physically-based landslide early warning model: tuning of the input parameters and validation of the results

    Science.gov (United States)

    D'Ambrosio, Michele; Tofani, Veronica; Rossi, Guglielmo; Salvatici, Teresa; Tacconi Stefanelli, Carlo; Rosi, Ascanio; Benedetta Masi, Elena; Pazzi, Veronica; Vannocci, Pietro; Catani, Filippo; Casagli, Nicola

    2017-04-01

    The Aosta Valley region is located in the north-western Alpine mountain chain. The geomorphology of the region is characterized by steep slopes and high climatic and altitudinal variability (ranging from 400 m a.s.l. on the Dora Baltea river floodplain to 4810 m a.s.l. on Mont Blanc). In the study area (zone B), located in the eastern part of Aosta Valley, heavy rainfall of about 800-900 mm per year is the main landslide trigger. These features lead to a high hydrogeological risk across the whole territory, as mass movements affect 70% of the municipal areas (mainly shallow rapid landslides and rock falls). An in-depth study of the geotechnical and hydrological properties of the hillslopes controlling shallow landslide formation was conducted, with the aim of improving the reliability of a deterministic model named HIRESS (HIgh REsolution Stability Simulator). In particular, two campaigns of on-site measurements and laboratory experiments were performed. The data obtained were studied in order to assess the relationships existing among the different parameters and the bedrock lithology. The soils analyzed at 12 survey points are mainly composed of sand and gravel, with highly variable contents of silt. The measured ranges of effective internal friction angle (from 25.6° to 34.3°) and effective cohesion (from 0 kPa to 9.3 kPa), and the median ks (10E-6 m/s) value, are consistent with the average grain sizes (gravelly sand). The data collected contribute to generating input maps of parameters for HIRESS (static data). Further static data are: volume weight, residual water content, porosity and grain size index. In order to improve the original formulation of the model, the contribution of root cohesion has also been taken into account, based on the vegetation map and literature values. HIRESS is a physically based distributed slope stability simulator for analyzing shallow landslide triggering conditions in real time and in large areas using parallel computational techniques. The software

  8. Application of nonlinear autoregressive moving average exogenous input models to Geospace: Advances in understanding and space weather forecasts

    OpenAIRE

    Boynton, RJ; Balikhin, MA; Billings, SA; Amariutei, OA

    2013-01-01

    The nonlinear autoregressive moving average with exogenous inputs (NARMAX) system identification technique is applied to various aspects of the magnetosphere's dynamics. It is shown, from an example system, how the inputs to a system can be found from the error reduction ratio (ERR) analysis, a key concept of the NARMAX approach. The application of the NARMAX approach to the Dst (disturbance storm time) index and the electron fluxes at geostationary Earth orbit (GEO) is reviewed, revealing ne...
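The error reduction ratio at the heart of NARMAX structure selection can be illustrated at the first forward-regression step, where no orthogonalization is needed yet: each candidate term's ERR is simply the fraction of output variance that term explains on its own. A toy sketch (hypothetical input sequence; later iterations would additionally orthogonalize the remaining candidates against the terms already selected):

```python
def err_first_step(candidates, y):
    """ERR of each candidate regressor at the first step of forward
    orthogonal least squares: ERR_i = <p_i, y>^2 / (<p_i, p_i><y, y>)."""
    def dot(a, b):
        return sum(x * z for x, z in zip(a, b))
    yy = dot(y, y)
    return [dot(p, y) ** 2 / (dot(p, p) * yy) for p in candidates]

# Toy noise-free system: y(k) = 2 u(k) + 0.5 u(k-1)
u = [0.5, -1.0, 2.0, 0.3, -0.7, 1.2, 0.9, -0.4]
y = [2.0 * u[k] + 0.5 * u[k - 1] for k in range(1, len(u))]
cands = {
    "u(k)":   [u[k] for k in range(1, len(u))],
    "u(k-1)": [u[k - 1] for k in range(1, len(u))],
    "u(k)^2": [u[k] ** 2 for k in range(1, len(u))],
}
errs = dict(zip(cands, err_first_step(list(cands.values()), y)))
best = max(errs, key=errs.get)
print(best, {k: round(v, 3) for k, v in errs.items()})
```

As expected, the dominant linear term u(k) earns the highest ERR and would be selected first; the spurious quadratic term explains some variance only through its correlation with u(k).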

  9. Wind Power Curve Modeling Using Statistical Models: An Investigation of Atmospheric Input Variables at a Flat and Complex Terrain Wind Farm

    Energy Technology Data Exchange (ETDEWEB)

    Wharton, S. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Bulaevskaya, V. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Irons, Z. [Enel Green Power North America, Andover, MA (United States); Qualley, G. [Infigen Energy, Dallas, TX (United States); Newman, J. F. [Univ. of Oklahoma, Norman, OK (United States); Miller, W. O. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States)

    2015-09-28

    The goal of our FY15 project was to explore the use of statistical models and high-resolution atmospheric input data to develop more accurate prediction models for turbine power generation. We modeled power for two operational wind farms in two regions of the country. The first site is a 235 MW wind farm in Northern Oklahoma with 140 GE 1.68 turbines. Our second site is a 38 MW wind farm in the Altamont Pass Region of Northern California with 38 Mitsubishi 1 MW turbines. The farms are very different in topography, climatology, and turbine technology; however, both occupy high wind resource areas in the U.S. and are representative of typical wind farms found in their respective areas.

  10. Software Requirements Specification Verifiable Fuel Cycle Simulation (VISION) Model

    International Nuclear Information System (INIS)

    D. E. Shropshire; W. H. West

    2005-01-01

    The purpose of this Software Requirements Specification (SRS) is to define the top-level requirements for a Verifiable Fuel Cycle Simulation Model (VISION) of the Advanced Fuel Cycle (AFC). This simulation model is intended to serve as a broad systems analysis and study tool applicable to work conducted as part of the AFCI (including cost estimates) and Generation IV reactor development studies

  11. Methods, Devices and Computer Program Products Providing for Establishing a Model for Emulating a Physical Quantity Which Depends on at Least One Input Parameter, and Use Thereof

    DEFF Research Database (Denmark)

    2014-01-01

    The present invention proposes methods, devices and computer program products. To this extent, there is defined a set X including N distinct parameter values x_i for at least one input parameter x, N being an integer greater than or equal to 1, the physical quantity Pm1 first measured for each...... based on the Vandermonde matrix and the first measured physical quantity according to the equation W = (VM^T * VM)^-1 * VM^T * Pm1. The model is iteratively refined so as to obtain a desired emulation precision. The model can later be used to emulate the physical quantity based on input parameters or logs taken...
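The quoted estimation step, W = (VM^T * VM)^-1 * VM^T * Pm1, is ordinary least squares on a Vandermonde (polynomial) design matrix. A self-contained sketch using the normal equations (illustrative data only; the iterative refinement loop of the invention is not reproduced):

```python
def transpose(M):
    return [list(r) for r in zip(*M)]

def matmul(A, B):
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)]
            for row in A]

def solve(A, b):
    """Solve A w = b by Gaussian elimination with partial pivoting."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for c in range(n):
        piv = max(range(c, n), key=lambda r: abs(M[r][c]))
        M[c], M[piv] = M[piv], M[c]
        for r in range(c + 1, n):
            f = M[r][c] / M[c][c]
            for k in range(c, n + 1):
                M[r][k] -= f * M[c][k]
    w = [0.0] * n
    for r in range(n - 1, -1, -1):
        w[r] = (M[r][n] - sum(M[r][k] * w[k]
                              for k in range(r + 1, n))) / M[r][r]
    return w

def fit_vandermonde(xs, pm1, degree):
    """W = (VM^T VM)^-1 VM^T Pm1: least-squares polynomial model of a
    measured quantity Pm1 over the parameter values xs."""
    VM = [[x ** j for j in range(degree + 1)] for x in xs]
    VMT = transpose(VM)
    A = matmul(VMT, VM)
    b = [row[0] for row in matmul(VMT, [[p] for p in pm1])]
    return solve(A, b)

# Illustrative data generated from Pm = 1 + 2x + 0.5x^2 (no noise)
xs = [0.0, 1.0, 2.0, 3.0, 4.0]
w = fit_vandermonde(xs, [1 + 2 * x + 0.5 * x * x for x in xs], degree=2)
print([round(c, 6) for c in w])
```

Solving the normal equations directly is adequate for the low polynomial degrees shown; for higher degrees a QR-based solver is numerically safer than forming VM^T * VM.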

  12. Capabilities and requirements for modelling radionuclide transport in the geosphere

    International Nuclear Information System (INIS)

    Paige, R.W.; Piper, D.

    1989-02-01

    This report gives an overview of geosphere flow and transport models suitable for use by the Department of the Environment in the performance assessment of radioactive waste disposal sites. An outline methodology for geosphere modelling is proposed, consisting of a number of different types of model. A brief description of each of the component models is given, indicating the purpose of the model, the processes being modelled and the methodologies adopted. Areas requiring development are noted. (author)

  13. A robust hybrid model integrating enhanced inputs based extreme learning machine with PLSR (PLSR-EIELM) and its application to intelligent measurement.

    Science.gov (United States)

    He, Yan-Lin; Geng, Zhi-Qiang; Xu, Yuan; Zhu, Qun-Xiong

    2015-09-01

    In this paper, a robust hybrid model integrating an enhanced-inputs-based extreme learning machine with partial least square regression (PLSR-EIELM) is proposed. The proposed PLSR-EIELM model can overcome two main flaws of the extreme learning machine (ELM), i.e. the intractable problem of determining the optimal number of hidden-layer neurons and the over-fitting phenomenon. First, a traditional extreme learning machine (ELM) is selected. Second, the weights between the input layer and the hidden layer are assigned randomly, and the nonlinear transformation of the independent variables is obtained from the output of the hidden-layer neurons. In particular, the original input variables are regarded as enhanced inputs; the enhanced inputs and the nonlinearly transformed variables are then tied together as the whole set of independent variables. In this way, PLSR can be carried out to identify the PLS components not only from the nonlinearly transformed variables but also from the original input variables, which removes the correlation among the independent variables and with the expected outputs. Finally, the optimal relationship model between the whole set of independent variables and the expected outputs can be achieved by using PLSR. Thus, the PLSR-EIELM model is developed. The PLSR-EIELM model then served as an intelligent measurement tool for the key variables of the Purified Terephthalic Acid (PTA) process and the High Density Polyethylene (HDPE) process. The experimental results show that the predictive accuracy of PLSR-EIELM is stable, indicating that PLSR-EIELM is robust. Moreover, compared with ELM, PLSR, hierarchical ELM (HELM), and PLSR-ELM, PLSR-EIELM achieves much smaller relative prediction errors in these two applications.
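The "enhanced inputs" construction described in the abstract, i.e. tying the original variables together with the randomly parameterized hidden-layer outputs of an ELM, can be sketched as follows (the subsequent PLSR step is omitted; dimensions, weight ranges and data are illustrative):

```python
import math
import random

def enhanced_inputs(X, n_hidden, rng):
    """Build the PLSR-EIELM 'enhanced inputs': each sample's original
    variables concatenated with the nonlinear outputs of an ELM hidden
    layer whose input weights and biases are assigned at random."""
    n_in = len(X[0])
    W = [[rng.uniform(-1, 1) for _ in range(n_in)] for _ in range(n_hidden)]
    b = [rng.uniform(-1, 1) for _ in range(n_hidden)]
    Z = []
    for x in X:
        hidden = [math.tanh(sum(w * v for w, v in zip(Wj, x)) + bj)
                  for Wj, bj in zip(W, b)]
        Z.append(list(x) + hidden)   # original + transformed variables
    return Z

rng = random.Random(0)
X = [[0.1, 0.5], [0.9, -0.2], [0.4, 0.4]]
Z = enhanced_inputs(X, n_hidden=5, rng=rng)
print(len(Z), len(Z[0]))
```

The matrix `Z` (here 3 samples by 2 + 5 columns) is what the paper then feeds to PLSR, which extracts components from both the raw and the transformed variables.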

  14. Requirements Validation: Execution of UML Models with CPN Tools

    DEFF Research Database (Denmark)

    Machado, Ricardo J.; Lassen, Kristian Bisgaard; Oliveira, Sérgio

    2007-01-01

    Requirements validation is a critical task in any engineering project. The confrontation of stakeholders with static requirements models is not enough, since stakeholders with non-computer science education are not able to discover all the inter-dependencies between the elicited requirements. Even...... requirements, where the system to be built must explicitly support the interaction between people within a pervasive cooperative workflow execution. A case study from a real project is used to illustrate the proposed approach....

  15. Enhancement of a robust arcuate GABAergic input to gonadotropin-releasing hormone neurons in a model of polycystic ovarian syndrome.

    Science.gov (United States)

    Moore, Aleisha M; Prescott, Mel; Marshall, Christopher J; Yip, Siew Hoong; Campbell, Rebecca E

    2015-01-13

    Polycystic ovarian syndrome (PCOS), the leading cause of female infertility, is associated with an increase in luteinizing hormone (LH) pulse frequency, implicating abnormal steroid hormone feedback to gonadotropin-releasing hormone (GnRH) neurons. This study investigated whether modifications in the synaptically connected neuronal network of GnRH neurons could account for this pathology. The PCOS phenotype was induced in mice following prenatal androgen (PNA) exposure. Serial blood sampling confirmed that PNA elicits increased LH pulse frequency and impaired progesterone negative feedback in adult females, mimicking the neuroendocrine abnormalities of the clinical syndrome. Imaging of GnRH neurons revealed greater dendritic spine density that correlated with increased putative GABAergic but not glutamatergic inputs in PNA mice. Mapping of steroid hormone receptor expression revealed that PNA mice had 59% fewer progesterone receptor-expressing cells in the arcuate nucleus of the hypothalamus (ARN). To address whether increased GABA innervation to GnRH neurons originates in the ARN, a viral-mediated Cre-lox approach was taken to trace the projections of ARN GABA neurons in vivo. Remarkably, projections from ARN GABAergic neurons heavily contacted and even bundled with GnRH neuron dendrites, and the density of fibers apposing GnRH neurons was even greater in PNA mice (56%). Additionally, this ARN GABA population showed significantly less colocalization with progesterone receptor in PNA animals compared with controls. Together, these data describe a robust GABAergic circuit originating in the ARN that is enhanced in a model of PCOS and may underpin the neuroendocrine pathophysiology of the syndrome.

  16. Waste Isolation Pilot Plant environmental impact report: an outline of the input--output model and the impact projections methodology. Technical document, socioeconomic portion

    International Nuclear Information System (INIS)

    1978-07-01

    A static model in the form of a regional input-output model was constructed for Eddy and Lea Counties, New Mexico. Besides the WIPP project, the model was also used for several other projects to determine the economic impact of proposed new facilities and developments. Both private and public sectors are covered. Sub-sectors for WIPP below-ground construction, above-ground construction, and operation and transport are included
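A regional input-output model of this kind answers "what if final demand changes" questions through the Leontief inverse, x = (I - A)^-1 d. A two-sector sketch with hypothetical technical coefficients (not the Eddy/Lea County tables):

```python
def leontief_output(A, d):
    """Total output x solving x = A x + d for a 2-sector input-output
    model, via the closed-form 2x2 inverse of (I - A)."""
    (a11, a12), (a21, a22) = A
    m11, m12 = 1 - a11, -a12
    m21, m22 = -a21, 1 - a22
    det = m11 * m22 - m12 * m21
    return [( m22 * d[0] - m12 * d[1]) / det,
            (-m21 * d[0] + m11 * d[1]) / det]

# Hypothetical technical coefficients and final demand (e.g. a new
# facility adding demand to a two-sector regional economy)
A = [[0.2, 0.3],
     [0.1, 0.4]]
base = leontief_output(A, [100.0, 50.0])
shock = leontief_output(A, [110.0, 50.0])   # +10 in sector-1 demand
print([round(x, 2) for x in base],
      [round(s - b, 2) for s, b in zip(shock, base)])
```

The second printed vector is the extra output induced by a 10-unit rise in sector-1 final demand; such multiplier effects are exactly what impact projections of a new facility rely on.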

  17. Continuous and simultaneous estimation of finger kinematics using inputs from an EMG-to-muscle activation model.

    Science.gov (United States)

    Ngeo, Jimson G; Tamei, Tomoya; Shibata, Tomohiro

    2014-08-14

    Surface electromyography (EMG) signals are often used in many robot and rehabilitation applications because they reflect the motor intentions of users very well. However, very few studies have focused on the accurate and proportional control of the human hand using EMG signals. Many have focused on discrete gesture classification, and some have encountered inherent problems such as electro-mechanical delay (EMD). Here, we present a new method for estimating simultaneous and multiple finger kinematics from multi-channel surface EMG signals. In this study, surface EMG signals from the forearm and finger kinematic data were extracted from ten able-bodied subjects while they performed individual and simultaneous multiple finger flexion and extension movements in free space. Instead of using traditional time-domain features of EMG, an EMG-to-muscle-activation model that parameterizes EMD was used and shown to give better estimation performance. A fast feedforward artificial neural network (ANN) and a nonparametric Gaussian process (GP) regressor were both used and evaluated to estimate complex finger kinematics, the latter rarely used in the related literature. The estimation accuracies, in terms of mean correlation coefficient, were 0.85 ± 0.07, 0.78 ± 0.06 and 0.73 ± 0.04 for the metacarpophalangeal (MCP), proximal interphalangeal (PIP) and distal interphalangeal (DIP) finger joint DOFs, respectively. The mean root-mean-square error in each individual DOF ranged from 5 to 15%. We show that estimation improved using the proposed muscle activation inputs compared to other features, and that using GP regression gave better estimation results when using fewer training samples. The proposed method provides a viable means of capturing the general trend of finger movements and shows a good way of estimating finger joint kinematics using a muscle activation model that parameterizes EMD. The results from this study demonstrate a potential control
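An EMG-to-muscle-activation model that parameterizes EMD is commonly written as a second-order recursive filter with a pure delay, followed by a nonlinear shaping function. The sketch below uses one such standard formulation with illustrative parameter values; the paper's exact model and coefficients may differ:

```python
import math

def muscle_activation(emg, d=4, gamma1=-0.2, gamma2=-0.1, A=-2.0):
    """Map a rectified, normalized EMG envelope to muscle activation:
    a recursive second-order filter with electromechanical delay d
    (in samples), then a nonlinear shaping function with A in [-3, 0)."""
    beta1 = gamma1 + gamma2
    beta2 = gamma1 * gamma2
    alpha = 1.0 + beta1 + beta2          # unity steady-state gain
    u = [0.0] * len(emg)
    for t in range(len(emg)):
        e = emg[t - d] if t >= d else 0.0   # delayed excitation
        u[t] = alpha * e - beta1 * u[t - 1] - beta2 * u[t - 2]
    # Nonlinear EMG-to-activation relationship
    return [(math.exp(A * x) - 1.0) / (math.exp(A) - 1.0) for x in u]

emg = [0.0] * 5 + [1.0] * 20             # step in the EMG envelope
a = muscle_activation(emg)
print(round(a[-1], 3))
```

With gamma1 and gamma2 inside the unit circle, the filter is stable and a unit EMG step settles to an activation of 1; the delay `d` is the EMD parameter the abstract highlights as the advantage over raw time-domain features.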

  18. A Comprehensive Energy Analysis and Related Carbon Footprint of Dairy Farms, Part 2: Investigation and Modeling of Indirect Energy Requirements

    Directory of Open Access Journals (Sweden)

    Giuseppe Todde

    2018-02-01

    Dairy cattle farms are continuously developing more intensive systems of management, which require higher utilization of durable and non-durable inputs. These inputs are responsible for significant direct and indirect fossil energy requirements, which are related to considerable emissions of CO2. This study focused on investigating the indirect energy requirements of 285 conventional dairy farms and the related carbon footprint. A detailed analysis of the indirect energy inputs related to farm buildings, machinery and agricultural inputs was carried out. A partial life cycle assessment approach was used to evaluate the indirect energy inputs and the carbon footprint of the farms over one harvest year. The investigation highlights the importance and weight of agricultural inputs, which represent more than 80% of the total indirect energy requirements. Moreover, the analyses carried out underline that the assumption of similarity among dairy farms in terms of indirect energy requirements and related carbon emissions is incorrect, especially when observing different farm sizes and milk production levels. A mathematical model to estimate the indirect energy requirements of dairy farms has also been developed, in order to provide an instrument allowing researchers to assess the energy incorporated into farm machinery, agricultural inputs and buildings. Combining the results of this two-part series, the total energy demand (expressed in GJ per farm) turns out to be mostly due to agricultural inputs and fuel consumption, which account for the largest share of the annual requirements in each milk yield class. Direct and indirect energy requirements increased going from small farms to larger ones, from 1302 to 5109 GJ·y−1, respectively. However, the related carbon dioxide emissions expressed per 100 kg of milk showed a negative trend going from the <5000 to the >9000 kg milk yield class, where larger farms were able to

  19. Our Electron Model vindicates Schrödinger's Incomplete Results and Require Restatement of Heisenberg's Uncertainty Principle

    Science.gov (United States)

    McLeod, David; McLeod, Roger

    2008-04-01

    The electron model used in our other joint paper here requires revision of some foundational physics. That electron model followed from comparing the experimentally proved results of human vision models using spatial Fourier transformations, SFTs, of pincushion and Hermann grids. Visual systems detect "negative" electric field values for darker so-called "illusory" diagonals that are physical consequences of the lens SFT of the Hermann grid, distinguishing this from light "illusory" diagonals. This indicates that oppositely directed vectors of the separate illusions are discretely observable, constituting another foundational fault in quantum mechanics, QM. The SFT of human vision is merely the scaled SFT of QM. Reciprocal space results of wavelength and momentum mimic reciprocal relationships between space variable x and spatial frequency variable p, by the experiment mentioned. Nobel laureate physicist von Békésy, physiology of hearing, 1961, performed pressure-input Rect x experiments that the brain always reports as truncated Sinc p, showing again that the brain is an adjunct built by sight, preserves the sign sense of EMF vectors, and is hard-wired as an inverse SFT. These require vindication of Schrödinger's actual, but incomplete, wave model of the electron as having physical extent over the wave, and question Heisenberg's uncertainty proposal.

  20. Requirements Traceability and Transformation Conformance in Model-Driven Development

    NARCIS (Netherlands)

    Andrade Almeida, João; van Eck, Pascal; Iacob, Maria Eugenia

    2006-01-01

    The variety of design artefacts (models) produced in a model-driven design process results in an intricate relationship between requirements and the various models. This paper proposes a methodological framework that simplifies management of this relationship. This framework is a basis for tracing

  1. Using the Context, Input, Process, and Product Evaluation Model (CIPP) as a Comprehensive Framework to Guide the Planning, Implementation, and Assessment of Service-Learning Programs

    Science.gov (United States)

    Zhang, Guili; Zeller, Nancy; Griffith, Robin; Metcalf, Debbie; Williams, Jennifer; Shea, Christine; Misulis, Katherine

    2011-01-01

    Planning, implementing, and assessing a service-learning project can be a complex task because service-learning projects often involve multiple constituencies and aim to meet both the needs of service providers and community partners. In this article, Stufflebeam's Context, Input, Process, and Product (CIPP) evaluation model is recommended as a…

  2. Intermediate inputs and economic productivity.

    Science.gov (United States)

    Baptist, Simon; Hepburn, Cameron

    2013-03-13

    Many models of economic growth exclude materials, energy and other intermediate inputs from the production function. Growing environmental pressures and resource prices suggest that this may be increasingly inappropriate. This paper explores the relationship between intermediate input intensity, productivity and national accounts using a panel dataset of manufacturing subsectors in the USA over 47 years. The first contribution is to identify sectoral production functions that incorporate intermediate inputs, while allowing for heterogeneity in both technology and productivity. The second contribution is that the paper finds a negative correlation between intermediate input intensity and total factor productivity (TFP)--sectors that are less intensive in their use of intermediate inputs have higher productivity. This finding is replicated at the firm level. We propose tentative hypotheses to explain this association, but testing and further disaggregation of intermediate inputs is left for further work. Further work could also explore more directly the relationship between material inputs and economic growth--given the high proportion of materials in intermediate inputs, the results in this paper are suggestive of further work on material efficiency. Depending upon the nature of the mechanism linking a reduction in intermediate input intensity to an increase in TFP, the implications could be significant. A third contribution is to suggest that an empirical bias in productivity, as measured in national accounts, may arise due to the exclusion of intermediate inputs. Current conventions of measuring productivity in national accounts may overstate the productivity of resource-intensive sectors relative to other sectors.
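A production function that includes intermediate inputs makes the paper's TFP measurement concrete: total factor productivity is the residual after capital, labor and materials are accounted for. A minimal sketch with hypothetical cost shares and sector data:

```python
def tfp_residual(Y, K, L, M, alpha, beta, gamma):
    """Solow-style TFP residual for a Cobb-Douglas production function
    that includes intermediate inputs M:  Y = A * K^a * L^b * M^g,
    so  A = Y / (K^a * L^b * M^g)."""
    return Y / (K ** alpha * L ** beta * M ** gamma)

# Illustrative sector: output 120, inputs K=100, L=50, M=80, with
# cost shares summing to 1 under constant returns to scale
A = tfp_residual(Y=120.0, K=100.0, L=50.0, M=80.0,
                 alpha=0.2, beta=0.3, gamma=0.5)
print(round(A, 4))
```

Excluding M from the function (a value-added specification) would attribute the contribution of materials to the residual, which is the measurement bias the paper argues may overstate the productivity of resource-intensive sectors.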

  3. Second research co-ordination meeting on development of reference input parameter library for nuclear model calculations of nuclear data. Summary report

    International Nuclear Information System (INIS)

    Oblozinsky, P.

    1996-03-01

    The present report contains the summary of the Second Research Co-ordination Meeting on ''Development of Reference Input Parameter Library for Nuclear Model Calculations of Nuclear Data'', held in Vienna, Austria, from 30 October to 3 November 1995. The library should serve as the input for theoretical calculations of nuclear reaction data induced primarily by neutrons in the incident energy range below 30 MeV. Summarized are the conclusions and recommendations of the meeting, together with a detailed list of actions and deadlines. Attached are the agenda of the meeting, the list of participants, and the titles and abstracts of their presentations. (author)

  4. Commonsense Psychology and the Functional Requirements of Cognitive Models

    National Research Council Canada - National Science Library

    Gordon, Andrew S

    2005-01-01

    In this paper we argue that previous models of cognitive abilities (e.g. memory, analogy) have been constructed to satisfy functional requirements of implicit commonsense psychological theories held by researchers and nonresearchers alike...

  5. Requirements engineering for cross-sectional information chain models.

    Science.gov (United States)

    Hübner, U; Cruel, E; Gök, M; Garthaus, M; Zimansky, M; Remmers, H; Rienhoff, O

    2012-01-01

    Despite the wealth of literature on requirements engineering, little is known about engineering very generic, innovative and emerging requirements, such as those for cross-sectional information chains. The IKM health project aims at building information chain reference models for the care of patients with chronic wounds, cancer-related pain and back pain. Our question therefore was how to appropriately capture information and process requirements that are both generally applicable and practically useful. To this end, we started with recommendations from clinical guidelines and put them up for discussion in Delphi surveys and expert interviews. Despite the heterogeneity we encountered in all three methods, it was possible to obtain requirements suitable for building reference models. We evaluated three modelling languages and then chose to write the models in UML (class and activity diagrams). On the basis of the current project results, the pros and cons of our approach are discussed.

  6. Spreadsheet Decision Support Model for Training Exercise Material Requirements Planning

    National Research Council Canada - National Science Library

    Tringali, Arthur

    1997-01-01

    This thesis focuses on developing a spreadsheet decision support model that can be used by combat engineer platoon and company commanders in determining the material requirements and estimated costs...
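
    The material-requirements logic such a spreadsheet model would implement can be sketched as a standard MRP netting pass. This is a hedged illustration of the general technique, not the thesis's actual model; all quantities are hypothetical:

```python
def mrp_net_requirements(gross, on_hand, scheduled_receipts):
    """Period-by-period MRP netting: projected inventory covers gross
    requirements first; any shortfall becomes a net requirement."""
    net = []
    inventory = on_hand
    for g, r in zip(gross, scheduled_receipts):
        inventory += r                      # receipts arrive at period start
        net.append(max(0, g - inventory))   # shortfall to be ordered
        inventory = max(0, inventory - g)   # carry remaining stock forward
    return net

# Hypothetical 4-period demand for one line item of exercise material.
print(mrp_net_requirements(gross=[50, 60, 70, 40],
                           on_hand=80,
                           scheduled_receipts=[0, 20, 0, 0]))  # → [0, 10, 70, 40]
```

Multiplying each period's net requirement by a unit cost would give the estimated-cost side of such a model.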

  7. Hybriding CMMI and requirement engineering maturity and capability models

    OpenAIRE

    Buglione, Luigi; Hauck, Jean Carlo R.; Gresse von Wangenheim, Christiane; Mc Caffery, Fergal

    2012-01-01

    Estimation represents one of the most critical processes for any project, and it is highly dependent on the quality of requirements elicitation and management. Therefore, the management of requirements should be prioritised in any process improvement program, because the less precise the requirements gathering, analysis and sizing, the greater the error in terms of time and cost estimation. Maturity and Capability Models (MCM) represent a good tool for assessing the status of ...

  8. Generic skills requirements (KSA model) towards future mechanical ...

    African Journals Online (AJOL)

    Generic Skills are a basic requirement that engineers need to master in all areas of Engineering. This study was conducted throughout peninsular Malaysia, involving small, medium and heavy industries, using the KSA Model. The objectives of this study are to study the level of requirement of Generic Skills that need to be ...

  9. Modeling the ionosphere-thermosphere response to a geomagnetic storm using physics-based magnetospheric energy input: OpenGGCM-CTIM results

    Directory of Open Access Journals (Sweden)

    Connor Hyunju Kim

    2016-01-01

    The magnetosphere is a major source of energy for the Earth's ionosphere and thermosphere (IT) system. Current IT models drive the upper atmosphere using empirically calculated magnetospheric energy input. Thus, they do not sufficiently capture the storm-time dynamics, particularly at high latitudes. To improve the prediction capability of IT models, a physics-based magnetospheric input is necessary. Here, we use the Open Global General Circulation Model (OpenGGCM) coupled with the Coupled Thermosphere Ionosphere Model (CTIM). OpenGGCM calculates a three-dimensional global magnetosphere and a two-dimensional high-latitude ionosphere by solving resistive magnetohydrodynamic (MHD) equations with solar wind input. CTIM calculates a global thermosphere and a high-latitude ionosphere in three dimensions using realistic magnetospheric inputs from the OpenGGCM. We investigate whether the coupled model improves the storm-time IT responses by simulating a geomagnetic storm that was preceded by a strong solar wind pressure front on August 24, 2005. We compare the OpenGGCM-CTIM results with low-Earth-orbit satellite observations and with the model results of the Coupled Thermosphere-Ionosphere-Plasmasphere electrodynamics model (CTIPe). CTIPe is an up-to-date version of CTIM that incorporates more IT dynamics, such as a low-latitude ionosphere and a plasmasphere, but uses empirical magnetospheric input. OpenGGCM-CTIM reproduces localized neutral density peaks at ~ 400 km altitude in the high-latitude dayside regions, in agreement with in situ observations during the pressure shock and the early phase of the storm. Although CTIPe is a more advanced model than CTIM, it misses these localized enhancements. Unlike the empirical input models of CTIPe, OpenGGCM-CTIM more faithfully produces localized increases of both auroral precipitation and ionospheric electric fields near the high-latitude dayside region after the pressure shock and after the storm onset.

  10. Differential effects of isoflurane and halothane on aortic input impedance quantified using a three-element Windkessel model.

    Science.gov (United States)

    Hettrick, D A; Pagel, P S; Warltier, D C

    1995-08-01

    Systemic vascular resistance (the ratio of mean aortic pressure [AP] to mean aortic blood flow [AQ]) does not completely describe left ventricular (LV) afterload because of the phasic nature of pressure and blood flow. Aortic input impedance (Zin) is an established experimental description of LV afterload that incorporates the frequency-dependent characteristics and viscoelastic properties of the arterial system. Zin is most often interpreted through an analytical model known as the three-element Windkessel. This investigation examined the effects of isoflurane, halothane, and sodium nitroprusside (SNP) on Zin. Changes in Zin were quantified using three variables derived from the Windkessel: characteristic aortic impedance (Zc), total arterial compliance (C), and total arterial resistance (R). Sixteen experiments were conducted in eight dogs chronically instrumented for measurement of AP, LV pressure, maximum rate of change of LV pressure, subendocardial segment length, and AQ. AP and AQ waveforms were recorded in the conscious state and after 30 min of equilibration at 1.25, 1.5, and 1.75 minimum alveolar concentration (MAC) isoflurane and halothane. Zin spectra were obtained by power spectral analysis of the AP and AQ waveforms and corrected for the phase responses of the transducers. Zc was calculated as the mean of Zin between 2 and 15 Hz, and R as the difference between Zin at zero frequency and Zc. C was determined using the formula C = (Ad · MAP) / [MAQ · (Pes − Ped)], where Ad = diastolic AP area; MAP and MAQ = mean AP and mean AQ, respectively; and Pes and Ped = end-systolic and end-diastolic AP, respectively. Parameters describing the net site and magnitude of arterial wave reflection were also calculated from Zin. Eight additional dogs were studied in the conscious state before and after 15 min of equilibration at three equihypotensive infusions of SNP.
    Isoflurane decreased R (3,205 ± 315 during control to 2,340 ± 2.19 dyn·s·cm−5 during
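
    The three-element Windkessel interpretation of Zin can be written down directly: Zin(f) = Zc + R / (1 + j·2πf·R·C) is the standard three-element form, with Zc dominating at high frequency and Zc + R the zero-frequency (DC) resistance. The parameter values below are hypothetical, not the study's measurements:

```python
import numpy as np

def windkessel_zin(f, Zc, R, C):
    """Input impedance of the three-element Windkessel:
    Zin(f) = Zc + R / (1 + j * 2*pi*f * R * C)."""
    w = 2 * np.pi * f
    return Zc + R / (1 + 1j * w * R * C)

# Hypothetical canine-scale values (Zc, R in dyn.s.cm-5; C in ml/dyn-like units).
Zc, R, C = 150.0, 3000.0, 1.0e-3
f = np.linspace(0, 15, 31)          # the 0-15 Hz band used in the study
Zin = windkessel_zin(f, Zc, R, C)

assert abs(Zin[0]) == Zc + R        # zero frequency: total arterial resistance
assert abs(Zin[-1] - Zc) < abs(Zin[0] - Zc)  # high frequency: Zin approaches Zc
```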

  11. Use of data assimilation procedures in the meteorological pre-processors of decision support systems to improve the meteorological input of atmospheric dispersion models

    International Nuclear Information System (INIS)

    Kovalets, I.; Andronopoulos, S.; Bartzis, J.G.

    2003-01-01

    Full text: The Atmospheric Dispersion Models (ADMs) play a key role in decision support systems for nuclear emergency management, as they are used to determine the current, and predict the future, spatial distribution of radionuclides after an accidental release of radioactivity to the atmosphere. Meteorological pre-processors (MPPs) usually act as the interface between the ADMs and the incoming meteorological data. Therefore, the quality of the results of the ADMs crucially depends on the input that they receive from the MPPs. The meteorological data are measurements from one or more stations in the vicinity of the nuclear power plant and/or prognostic data from Numerical Weather Prediction (NWP) models of National Weather Services. The measurements are representative of the past and current local conditions, while the NWP data cover a wider range in space and future time, where no measurements exist. In this respect, the simultaneous use of both by an MPP immediately poses the questions of consistency and of the appropriate methodology for reconciliation of the two kinds of meteorological data. The main objective of the work presented in this paper is the introduction of data assimilation (DA) techniques in the MPP of the RODOS (Real-time On-line Decision Support) system for nuclear emergency management in Europe, developed under the European Project 'RODOS-Migration', to reconcile the NWP data with the local observations coming from the meteorological stations. More specifically, in this paper: the methodological approach for simultaneous use of both meteorological measurements and NWP data in the MPP is presented; the method is validated by comparing results of calculations with experimental data; future ways of improvement of the meteorological input for the calculations of the atmospheric dispersion in the RODOS system are discussed. The methodological approach for solving the DA problem developed in this work is based on the method of optimal interpolation (OI).
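
    The optimal-interpolation analysis step underlying such a DA scheme can be sketched in a few lines: x_a = x_b + K(y − Hx_b) with gain K = BHᵀ(HBHᵀ + R)⁻¹. This is an illustrative toy example of the OI method in general, not the actual RODOS pre-processor; all matrices and values are hypothetical:

```python
import numpy as np

def oi_analysis(x_b, B, H, y, R):
    """One optimal-interpolation step: blend background x_b with
    observations y using gain K = B H^T (H B H^T + R)^-1."""
    S = H @ B @ H.T + R
    K = B @ H.T @ np.linalg.inv(S)
    return x_b + K @ (y - H @ x_b)

# Three-point NWP background wind field, observed by one station (point 1).
x_b = np.array([2.0, 2.0, 2.0])                  # background (m/s)
B = 0.5 * np.exp(-np.abs(np.subtract.outer(range(3), range(3))))  # bg covariance
H = np.array([[0.0, 1.0, 0.0]])                  # observation operator
y = np.array([3.0])                              # station measurement
R = np.array([[0.1]])                            # obs error variance

x_a = oi_analysis(x_b, B, H, y, R)
assert x_a[1] > x_b[1]   # analysis pulled toward the observation
assert x_a[0] > x_b[0]   # neighbours adjusted via background correlations
```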

  12. Analysis of urban metabolic processes based on input-output method: model development and a case study for Beijing

    Science.gov (United States)

    Zhang, Yan; Liu, Hong; Chen, Bin; Zheng, Hongmei; Li, Yating

    2014-06-01

    Discovering ways in which to increase the sustainability of the metabolic processes involved in urbanization has become an urgent task for urban design and management in China. As cities are analogous to living organisms, the disorders of their metabolic processes can be regarded as the cause of "urban disease". Therefore, identification of these causes through metabolic process analysis and ecological element distribution through the urban ecosystem's compartments will be helpful. By using Beijing as an example, we have compiled monetary input-output tables from 1997, 2000, 2002, 2005, and 2007 and calculated the intensities of the embodied ecological elements to compile the corresponding implied physical input-output tables. We then divided Beijing's economy into 32 compartments and analyzed the direct and indirect ecological intensities embodied in the flows of ecological elements through urban metabolic processes. Based on the combination of input-output tables and ecological network analysis, the description of multiple ecological elements transferred among Beijing's industrial compartments and their distribution has been refined. This hybrid approach can provide a more scientific basis for management of urban resource flows. In addition, the data obtained from distribution characteristics of ecological elements may provide a basic data platform for exploring the metabolic mechanism of Beijing.
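
    The input-output machinery behind such embodied-intensity calculations is the Leontief inverse: total output x = (I − A)⁻¹y, and embodied (direct plus indirect) intensities ε = d(I − A)⁻¹. A minimal sketch with a hypothetical two-compartment economy, not Beijing's 32-compartment tables:

```python
import numpy as np

# Hypothetical technical-coefficient matrix for two compartments.
A = np.array([[0.1, 0.2],
              [0.3, 0.1]])
y = np.array([100.0, 50.0])    # final demand
d = np.array([0.5, 2.0])       # direct ecological input (e.g. water) per unit output

L = np.linalg.inv(np.eye(2) - A)   # Leontief inverse: I + A + A^2 + ...
x = L @ y                          # total output needed to meet final demand
eps = d @ L                        # embodied intensities per unit of final demand

# Indirect requirements can only add to the direct intensity.
assert np.all(eps >= d)
assert np.all(x >= y)
```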

  13. Irrigation Requirement Estimation Using Vegetation Indices and Inverse Biophysical Modeling

    Science.gov (United States)

    Bounoua, Lahouari; Imhoff, Marc L.; Franks, Shannon

    2010-01-01

    We explore an inverse biophysical modeling process forced by satellite and climatological data to quantify irrigation requirements in semi-arid agricultural areas. We constrain the carbon and water cycles modeled under both equilibrium (balance between vegetation and climate) and non-equilibrium (water added through irrigation) conditions. We postulate that the degree to which irrigated dry lands vary from equilibrium climate conditions is related to the amount of irrigation. The amount of water required over and above precipitation is considered the irrigation requirement. For July, results show that spray irrigation resulted in an additional amount of water of 1.3 mm per occurrence with a frequency of 24.6 hours. In contrast, the drip irrigation required only 0.6 mm every 45.6 hours, or 46% of that simulated by the spray irrigation. The modeled estimates account for 87% of the total reported irrigation water use where soil salinity is not important, and 66% in saline lands.
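
    The definition of irrigation requirement as "water required over and above precipitation" can be sketched as a simple water balance. This is a hedged illustration of the accounting idea only, not the paper's inverse biophysical model; the daily values are hypothetical:

```python
def irrigation_requirement(et_mm, precip_mm):
    """Daily irrigation requirement: the shortfall between the modeled
    (non-equilibrium) evapotranspiration and precipitation."""
    return [max(0.0, et - p) for et, p in zip(et_mm, precip_mm)]

july_et     = [6.1, 5.8, 6.4, 6.0]   # modeled daily ET (mm), hypothetical
july_precip = [0.0, 1.2, 0.0, 7.5]   # daily precipitation (mm), hypothetical

total = sum(irrigation_requirement(july_et, july_precip))
assert abs(total - 17.1) < 1e-6      # 6.1 + 4.6 + 6.4 + 0.0 mm
```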

  14. Energy use for building construction. Preliminary progress report for period March 1, 1976--May 15, 1976. [Energy intensities of various sectors and overall industry from Energy Input/Output Model

    Energy Technology Data Exchange (ETDEWEB)

    Hannon, B M; Stein, R G; Segal, B; Serber, D

    1976-05-01

    The building construction industry, as broken down by the Bureau of Economic Analysis, U.S. Department of Commerce, was integrated into the Energy Input/Output Model developed at the Center for Advanced Computation, University of Illinois. The resulting expanded model was used to determine energy intensities of various (49) building construction (new and maintenance) sectors and of the overall building construction industry, for the year 1967. The latter figure was computed at about 70,000 Btu/$, i.e., the construction industry on average required about 70,000 Btu of direct and indirect energy per dollar of output produced. The most energy intensive sector was New Construction of Petroleum Pipelines (about 150,000 Btu/$), while the least intensive was Maintenance Construction for Electric Utilities (about 25,000 Btu/$). Also developed were total energy (direct and indirect) requirements to final demand for the building construction industry, for 1967. The overall industry required about 6000 trillion Btu, or about nine percent of the total U.S. energy requirement. New Highway Construction required the most energy to final demand (about 1000 trillion Btu, or 16 percent of the total construction industry requirement), while Maintenance Construction Residential required the least (about 9 trillion Btu, or 0.1 percent of the total industry requirement).

  15. Business Process Simulation: Requirements for Business and Resource Models

    OpenAIRE

    Audrius Rima; Olegas Vasilecas

    2015-01-01

    The purpose of Business Process Model and Notation (BPMN) is to provide an easily understandable graphical representation of business processes. Thus BPMN is widely used and applied in various areas, one of them being business process simulation. This paper addresses some BPMN model based business process simulation problems. The paper formulates requirements for business process and resource models to enable their use for business process simulation.

  16. Business Process Simulation: Requirements for Business and Resource Models

    Directory of Open Access Journals (Sweden)

    Audrius Rima

    2015-07-01

    The purpose of Business Process Model and Notation (BPMN) is to provide an easily understandable graphical representation of business processes. Thus BPMN is widely used and applied in various areas, one of them being business process simulation. This paper addresses some BPMN model based business process simulation problems. The paper formulates requirements for business process and resource models to enable their use for business process simulation.

  17. The Neurobiological Basis of Cognition: Identification by Multi-Input, Multioutput Nonlinear Dynamic Modeling: A method is proposed for measuring and modeling human long-term memory formation by mathematical analysis and computer simulation of nerve-cell dynamics

    OpenAIRE

    Berger, Theodore W.; Song, Dong; Chan, Rosa H. M.; Marmarelis, Vasilis Z.

    2010-01-01

    The successful development of neural prostheses requires an understanding of the neurobiological bases of cognitive processes, i.e., how the collective activity of populations of neurons results in a higher level process not predictable based on knowledge of the individual neurons and/or synapses alone. We have been studying and applying novel methods for representing nonlinear transformations of multiple spike train inputs (multiple time series of pulse train inputs) produced by synaptic and...

  18. Decentralized control with input saturation

    NARCIS (Netherlands)

    Saberi, Ali; Stoorvogel, Antonie Arij; Sannuti, Peddapullaiah

    In decentralized control it is known that the system can be stabilized only if the so-called fixed modes are all stable. If we have input constraints then (semi-)global stability requires all poles to be in the closed left half plane. This paper establishes that these two requirements are necessary

  19. Testing the importance of accurate meteorological input fields and parameterizations in atmospheric transport modelling using DREAM - Validation against ETEX-1

    DEFF Research Database (Denmark)

    Brandt, J.; Bastrup-Birk, A.; Christensen, J.H.

    1998-01-01

    A tracer model, the DREAM, which is based on a combination of a near-range Lagrangian model and a long-range Eulerian model, has been developed. The meteorological meso-scale model, MM5V1, is implemented as a meteorological driver for the tracer model. The model system is used for studying...

  20. Better temperature predictions in geothermal modelling by improved quality of input parameters: a regional case study from the Danish-German border region

    Science.gov (United States)

    Fuchs, Sven; Bording, Thue S.; Balling, Niels

    2015-04-01

    Thermal modelling is used to examine the subsurface temperature field and geothermal conditions at various scales (e.g. sedimentary basins, deep crust) and in the framework of different problem settings (e.g. scientific or industrial use). In such models, knowledge of rock thermal properties is a prerequisite for the parameterisation of boundary conditions and layer properties. In contrast to hydrogeological ground-water models, where parameterization of the major rock property (i.e. hydraulic conductivity) generally considers lateral variations within geological layers, parameterization of thermal models (in particular regarding thermal conductivity, but also radiogenic heat production and specific heat capacity) is in most cases conducted using constant parameters for each modelled layer. Moreover, initial values for such constant thermal parameters are normally obtained from rare core measurements and/or literature values, which raises questions about their representativeness. A few studies have considered lithological composition or well log information, but still keep the layer values constant. In the present thermal-modelling scenario analysis, we demonstrate how the parameter input type (from literature, well logs and lithology) and the parameter input style (constant or laterally varying layer values) affect the temperature prediction in sedimentary basins. For this purpose, rock thermal properties are deduced from standard petrophysical well logs and lithological descriptions for several wells in a project area. Statistical values of thermal properties (mean, standard deviation, moments, etc.) are calculated at each borehole location for each geological formation and, moreover, for the entire dataset. Our case study is located in the Danish-German border region (model dimension: 135 × 115 km, depth: 20 km). Results clearly show that (i) the use of location-specific well-log derived rock thermal properties and (i
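
    Why layer-wise thermal conductivity matters can be seen from steady-state conduction: by Fourier's law, T(z) = T_surface + q · Σ(Δz_i / k_i) over the layers above depth z, so a single low-conductivity layer dominates the temperature prediction. The layer values below are hypothetical, not the case-study parameters:

```python
def temperature_at_base(t_surface, q, layers):
    """Steady-state conductive temperature at the base of a layer stack.
    layers: list of (thickness_m, thermal_conductivity_W_per_mK)."""
    return t_surface + q * sum(dz / k for dz, k in layers)

q = 0.065  # basal heat flow, W/m^2 (hypothetical)

# Same total thickness, parameterised two ways:
constant_k  = [(3000.0, 2.5)]                                  # one bulk layer
log_derived = [(1000.0, 1.8), (1000.0, 2.5), (1000.0, 3.4)]    # well-log informed

t_constant = temperature_at_base(8.0, q, constant_k)
t_layered  = temperature_at_base(8.0, q, log_derived)

# The harmonic averaging implied by series conduction makes the
# low-conductivity layer dominate: the layered prediction is warmer.
assert t_layered > t_constant
```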

  1. Building a Narrative Based Requirements Engineering Mediation Model

    Science.gov (United States)

    Ma, Nan; Hall, Tracy; Barker, Trevor

    This paper presents a narrative-based Requirements Engineering (RE) mediation model to help RE practitioners effectively identify, define, and resolve conflicts of interest, goals, and requirements. Within the SPI community, there is a common belief that social, human, and organizational issues significantly impact the effectiveness of software process improvement in general and the requirements engineering process in particular. Conflicts among different stakeholders are an important human and social issue that needs more research attention in the SPI and RE communities. Drawing on the conflict resolution literature and the IS literature, we argue that conflict resolution in RE is a mediated process, in which a requirements engineer can act as a mediator among different stakeholders. To address the socio-psychological aspects of conflict in RE and SPI, Winslade and Monk's (2000) narrative mediation model is introduced, justified, and translated into the context of RE.

  2. Modelling Security Requirements Through Extending Scrum Agile Development Framework

    OpenAIRE

    Alotaibi, Minahi

    2016-01-01

    Security is today considered a basic foundation in software development, and therefore the modelling and implementation of security requirements is an essential part of the production of secure software systems. Information technology organisations are moving towards agile development methods in order to satisfy customers' changing requirements in light of accelerated evolution and time restrictions relative to their competitors in software production. Security engineering is considered difficult...

  3. A New Rapid Simplified Model for Urban Rainstorm Inundation with Low Data Requirements

    Directory of Open Access Journals (Sweden)

    Ji Shen

    2016-11-01

    This paper proposes a new rapid simplified inundation model (NRSIM) for flood inundation caused by rainstorms in an urban setting that can simulate the urban rainstorm inundation extent and depth in a data-scarce area. Drainage basins delineated from a floodplain map according to the distribution of the inundation sources serve as the calculation cells of NRSIM. To reduce the data requirements and computational costs of the model, the internal topography of each calculation cell is simplified to a circular cone, and a mass conservation equation based on a volume spreading algorithm is established to simulate the interior water filling process. Moreover, an improved D8 algorithm is outlined for the simulation of water spilling between different cells. The performance of NRSIM is evaluated by comparing the simulated results with those from a traditional rapid flood spreading model (TRFSM) for various resolutions of digital elevation model (DEM) data. The results are as follows: (1) given high-resolution DEM data input, the TRFSM model has better performance in terms of precision than NRSIM; (2) the results from TRFSM are seriously affected by the decrease in DEM data resolution, whereas those from NRSIM are not; and (3) NRSIM always requires less computational time than TRFSM. Apparently, compared with the complex hydrodynamic or traditional rapid flood spreading model, NRSIM has much better applicability and cost-efficiency in real-time urban inundation forecasting for data-sparse areas.
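
    The classic D8 step that the paper's improved algorithm builds on can be sketched in a few lines: each cell drains to the steepest-descent neighbour among its eight neighbours, with diagonal distance accounted for. This is a minimal illustration of the standard D8 scheme, not NRSIM's improved variant:

```python
import numpy as np

def d8_direction(dem, r, c):
    """Return the (dr, dc) offset of the steepest downslope neighbour
    of cell (r, c), or None if the cell is a pit (no lower neighbour)."""
    best, best_drop = None, 0.0
    for dr in (-1, 0, 1):
        for dc in (-1, 0, 1):
            if dr == 0 and dc == 0:
                continue
            rr, cc = r + dr, c + dc
            if 0 <= rr < dem.shape[0] and 0 <= cc < dem.shape[1]:
                dist = np.hypot(dr, dc)                  # diagonals are farther
                drop = (dem[r, c] - dem[rr, cc]) / dist  # slope toward neighbour
                if drop > best_drop:
                    best, best_drop = (dr, dc), drop
    return best

dem = np.array([[9.0, 8.0, 7.0],
                [8.0, 6.0, 4.0],
                [7.0, 5.0, 3.0]])
assert d8_direction(dem, 1, 1) == (1, 1)   # centre drains to the 3.0 corner
assert d8_direction(dem, 2, 2) is None     # lowest cell is a pit
```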

  4. NASA Standard for Models and Simulations: Philosophy and Requirements Overview

    Science.gov (United States)

    Blattnig, Steve R.; Luckring, James M.; Morrison, Joseph H.; Sylvester, Andre J.; Tripathi, Ram K.; Zang, Thomas A.

    2013-01-01

    Following the Columbia Accident Investigation Board report, the NASA Administrator chartered an executive team (known as the Diaz Team) to identify those CAIB report elements with NASA-wide applicability and to develop corrective measures to address each element. One such measure was the development of a standard for the development, documentation, and operation of models and simulations. This report describes the philosophy and requirements overview of the resulting NASA Standard for Models and Simulations.

  5. Critical Business Requirements Model and Metrics for Intranet ROI

    OpenAIRE

    Luqi; Jacoby, Grant A.

    2005-01-01

    Journal of Electronic Commerce Research, Vol. 6, No. 1, pp. 1-30. This research provides the first theoretical model, the Intranet Efficiency and Effectiveness Model (IEEM), to measure intranet overall value contributions based on a corporation’s critical business requirements by applying a balanced baseline of metrics and conversion ratios linked to key business processes of knowledge workers, IT managers and business decision makers -- in effect, closing the gap of understanding...

  6. 77 FR 38804 - Wireline Competition Bureau Seeks Comment on Model Design and Data Inputs for Phase II of the...

    Science.gov (United States)

    2012-06-29

    ... should estimate the costs of Fiber-to-the-Premises (FTTP) or Digital Subscriber Line (DSL) (including... consistent with the requirements set forth in the USF/ICC Transformation Order, 76 FR 73830, November 29... obligations laid out in the USF/ ICC Transformation Order. The requirements laid out in the USF/ICC...

  7. Fusing Quantitative Requirements Analysis with Model-based Systems Engineering

    Science.gov (United States)

    Cornford, Steven L.; Feather, Martin S.; Heron, Vance A.; Jenkins, J. Steven

    2006-01-01

    A vision is presented for fusing quantitative requirements analysis with model-based systems engineering. This vision draws upon and combines emergent themes in the engineering milieu. "Requirements engineering" provides means to explicitly represent requirements (both functional and non-functional) as constraints and preferences on acceptable solutions, and emphasizes early-lifecycle review, analysis and verification of design and development plans. "Design by shopping" emphasizes revealing the space of options available from which to choose (without presuming that all selection criteria have previously been elicited), and provides means to make understandable the range of choices and their ramifications. "Model-based engineering" emphasizes the goal of utilizing a formal representation of all aspects of system design, from development through operations, and provides powerful tool suites that support the practical application of these principles. A first step prototype towards this vision is described, embodying the key capabilities. Illustrations, implications, further challenges and opportunities are outlined.

  8. Models of protein and amino acid requirements for cattle

    Directory of Open Access Journals (Sweden)

    Luis Orlindo Tedeschi

    2015-03-01

    Protein supply and requirements by ruminants have been studied for more than a century. These studies led to the accumulation of a large body of scientific information about the digestion and metabolism of protein by ruminants, as well as the characterization of dietary protein in order to maximize animal performance. During the 1980s and 1990s, when computers became more accessible and powerful, scientists began to conceptualize and develop mathematical nutrition models, and to program them into computers to assist with ration balancing and formulation for domesticated ruminants, specifically dairy and beef cattle. The most commonly known nutrition models developed during this period were those of the National Research Council (NRC) in the United States, the Agricultural Research Council (ARC) in the United Kingdom, the Institut National de la Recherche Agronomique (INRA) in France, and the Commonwealth Scientific and Industrial Research Organization (CSIRO) in Australia. Others were derivative works from these models with different degrees of modification in the supply or requirement calculations, and in the modeling nature (e.g., static or dynamic, mechanistic or deterministic). Circa the 1990s, most models adopted the metabolizable protein (MP) system over the crude protein (CP) and digestible CP systems to estimate the supply of MP, and the factorial system to calculate the MP required by the animal. The MP system included two portions of protein (i.e., the rumen-undegraded dietary CP - RUP - and the contributions of microbial CP - MCP) as the main sources of MP for the animal. Some models explicitly account for the impact of dry matter intake (DMI) on the MP required for maintenance (MPm; e.g., the Cornell Net Carbohydrate and Protein System - CNCPS, the Dutch system - DVE/OEB), while others simply account for scurf, urinary, metabolic fecal, and endogenous contributions independently of DMI.
    All models included milk yield and its components in estimating the MP required for lactation.
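
    The factorial approach described above (summing maintenance and lactation terms) can be sketched as follows. The coefficients are hypothetical placeholders for illustration only, not values from the NRC, INRA, or any other system:

```python
def mp_required(body_weight_kg, milk_kg, milk_protein_frac=0.032,
                maint_g_per_kg075=3.8, efficiency_lact=0.67):
    """Factorial metabolizable-protein requirement (g/d):
    maintenance scaled to metabolic body weight, plus milk protein
    output divided by the efficiency of MP use for lactation.
    All coefficients are hypothetical placeholders."""
    mp_maint = maint_g_per_kg075 * body_weight_kg**0.75
    mp_lact = (milk_kg * milk_protein_frac * 1000) / efficiency_lact
    return mp_maint + mp_lact

# Hypothetical 600 kg cow producing 30 kg milk/day.
req = mp_required(body_weight_kg=600, milk_kg=30)
assert req > mp_required(body_weight_kg=600, milk_kg=0)  # lactation adds to the total
```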

  9. Modeling the effect of nitrogen input from feed on the nitrogen dynamics in an enclosed intensive culture pond of black tiger shrimp (Penaeus monodon)

    OpenAIRE

    Kittiwanich, Jutarat; Songsangjinda, Putth; Yamamoto, Tamiji; Fukami, Kimio; Muangyao, Pensri

    2012-01-01

    In the present study, a mathematical model was developed to evaluate the effect of nitrogen (N) input from feed on the N dynamics under different feeding scenarios in a well-aerated and enclosed culture pond of black tiger shrimp (Penaeus monodon). Three feeding levels were examined: underfeeding, optimum feeding and overfeeding. The model was formulated using field data gathered from an earthen pond (0.69 ha) which was stocked with shrimp post larvae at a density of 340,000 individuals ha−1...

  10. High organic inputs explain shallow and deep SOC storage in a long-term agroforestry system – combining experimental and modeling approaches

    Directory of Open Access Journals (Sweden)

    R. Cardinael

    2018-01-01

    Agroforestry is an increasingly popular farming system enabling agricultural diversification and providing several ecosystem services. In agroforestry systems, soil organic carbon (SOC) stocks are generally increased, but it is difficult to disentangle the different factors responsible for this storage. Organic carbon (OC) inputs to the soil may be larger, but SOC decomposition rates may be modified owing to microclimate, physical protection, or priming effect from roots, especially at depth. We used an 18-year-old silvoarable system associating hybrid walnut trees (Juglans regia × nigra) and durum wheat (Triticum turgidum L. subsp. durum) and an adjacent agricultural control plot to quantify all OC inputs to the soil - leaf litter, tree fine root senescence, crop residues, and tree row herbaceous vegetation - and measured SOC stocks down to 2 m of depth at varying distances from the trees. We then proposed a model that simulates SOC dynamics in agroforestry accounting for both the whole soil profile and the lateral spatial heterogeneity. The model was calibrated to the control plot only. Measured OC inputs to soil were increased by about 40 % (+1.11 t C ha−1 yr−1) down to 2 m of depth in the agroforestry plot compared to the control, resulting in an additional SOC stock of 6.3 t C ha−1 down to 1 m of depth. However, most of the SOC storage occurred in the first 30 cm of soil and in the tree rows. The model was strongly validated, properly describing the measured SOC stocks and distribution with depth in agroforestry tree rows and alleys. It showed that the increased inputs of fresh biomass to soil explained the observed additional SOC storage in the agroforestry plot. Moreover, only a priming effect variant of the model was able to capture the depth distribution of SOC stocks, suggesting the priming effect as a possible mechanism driving deep SOC dynamics. This result questions the potential of soils to store large amounts of carbon, especially at depth.

  11. High organic inputs explain shallow and deep SOC storage in a long-term agroforestry system - combining experimental and modeling approaches

    Science.gov (United States)

    Cardinael, Rémi; Guenet, Bertrand; Chevallier, Tiphaine; Dupraz, Christian; Cozzi, Thomas; Chenu, Claire

    2018-01-01

    Agroforestry is an increasingly popular farming system enabling agricultural diversification and providing several ecosystem services. In agroforestry systems, soil organic carbon (SOC) stocks are generally increased, but it is difficult to disentangle the different factors responsible for this storage. Organic carbon (OC) inputs to the soil may be larger, but SOC decomposition rates may be modified owing to microclimate, physical protection, or priming effect from roots, especially at depth. We used an 18-year-old silvoarable system associating hybrid walnut trees (Juglans regia × nigra) and durum wheat (Triticum turgidum L. subsp. durum) and an adjacent agricultural control plot to quantify all OC inputs to the soil - leaf litter, tree fine root senescence, crop residues, and tree row herbaceous vegetation - and measured SOC stocks down to 2 m of depth at varying distances from the trees. We then proposed a model that simulates SOC dynamics in agroforestry accounting for both the whole soil profile and the lateral spatial heterogeneity. The model was calibrated to the control plot only. Measured OC inputs to soil were increased by about 40 % (+ 1.11 t C ha-1 yr-1) down to 2 m of depth in the agroforestry plot compared to the control, resulting in an additional SOC stock of 6.3 t C ha-1 down to 1 m of depth. However, most of the SOC storage occurred in the first 30 cm of soil and in the tree rows. The model was strongly validated, properly describing the measured SOC stocks and distribution with depth in agroforestry tree rows and alleys. It showed that the increased inputs of fresh biomass to soil explained the observed additional SOC storage in the agroforestry plot. Moreover, only a priming effect variant of the model was able to capture the depth distribution of SOC stocks, suggesting the priming effect as a possible mechanism driving deep SOC dynamics. This result questions the potential of soils to store large amounts of carbon, especially at depth. Deep

  12. Fouling resistance prediction using artificial neural network nonlinear auto-regressive with exogenous input model based on operating conditions and fluid properties correlations

    Energy Technology Data Exchange (ETDEWEB)

    Biyanto, Totok R. [Department of Engineering Physics, Institute Technology of Sepuluh Nopember Surabaya, Surabaya, Indonesia 60111 (Indonesia)

    2016-06-03

    Fouling in a heat exchanger in a Crude Preheat Train (CPT) refinery is an unsolved problem that reduces the plant efficiency and increases fuel consumption and CO{sub 2} emission. The fouling resistance behavior is very complex. It is difficult to develop a model using first-principle equations to predict the fouling resistance under different operating conditions and different crude blends. In this paper, an Artificial Neural Network (ANN) MultiLayer Perceptron (MLP) with a Nonlinear Auto-Regressive with eXogenous input (NARX) structure is utilized to build the fouling resistance model of a shell and tube heat exchanger (STHX). The input data of the model are the flow rates and temperatures of the heat exchanger streams, physical properties of the product, and crude blend data. This model serves as a predicting tool to optimize operating conditions and preventive maintenance of the STHX. The results show that the model can capture the complexity of fouling characteristics in the heat exchanger due to thermodynamic conditions and variations in crude oil properties (blends). The Root Mean Square Error (RMSE) values obtained during the training and validation phases confirm that the model captures the nonlinearity and complexity of the STHX fouling resistance.
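
    As a sketch of the NARX input structure described above (the lagged-regressor layout is standard; the function name and lag counts here are illustrative, not taken from the paper), each training row for the MLP can be assembled from past fouling-resistance values and past exogenous measurements:

```python
import numpy as np

def build_narx_inputs(y, u, ny=2, nu=2):
    """Assemble a NARX regression matrix: each row contains the `ny`
    most recent outputs (e.g. fouling resistance) and the `nu` most
    recent exogenous inputs (flows, temperatures, crude properties)."""
    y = np.asarray(y, dtype=float)
    u = np.asarray(u, dtype=float)
    if u.ndim == 1:
        u = u[:, None]                       # single exogenous channel
    lag = max(ny, nu)
    rows, targets = [], []
    for t in range(lag, len(y)):
        past_y = y[t - ny:t]                 # y(t-ny) ... y(t-1)
        past_u = u[t - nu:t].ravel()         # u(t-nu) ... u(t-1)
        rows.append(np.concatenate([past_y, past_u]))
        targets.append(y[t])
    return np.array(rows), np.array(targets)
```

    The resulting matrix can be fed to any MLP regressor; at prediction time the model's own outputs are fed back in place of the measured output lags.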

  13. Atmospheric disturbance modelling requirements for flying qualities applications

    Science.gov (United States)

    Moorhouse, D. J.

    1978-01-01

    Flying qualities are defined as those airplane characteristics which govern the ease or precision with which the pilot can accomplish the mission. Some atmospheric disturbance modelling requirements for aircraft flying qualities applications are reviewed. It is concluded that some simplifications are justified in identifying the primary influence on aircraft response and pilot control. It is recommended that a universal environmental model be developed, which could form the reference for different applications. This model should include the latest information on winds, turbulence, gusts, visibility, icing and precipitation. A chosen model would be kept by a national agency and updated regularly by feedback from users. A user manual is believed to be an essential part of such a model.

  14. GARFEM input deck description

    Energy Technology Data Exchange (ETDEWEB)

    Zdunek, A.; Soederberg, M. (Aeronautical Research Inst. of Sweden, Bromma (Sweden))

    1989-01-01

    The input card deck for the finite element program GARFEM version 3.2 is described in this manual. The program includes, but is not limited to, capabilities to handle the following problems: * Linear bar and beam element structures, * Geometrically non-linear problems (bar and beam), both static and transient dynamic analysis, * Transient response dynamics from a catalog of time-varying external forcing function types or input function tables, * Eigenvalue solution (modes and frequencies), * Multi-point constraints (MPC) for the modelling of mechanisms and, e.g., rigid links. The MPC definition is used only in the geometrically linearized sense, * Beams with disjunct shear axis and neutral axis, * Beams with rigid offset. An interface exists that connects GARFEM with the program GAROS. GAROS is a program for aeroelastic analysis of rotating structures. Since this interface was developed, GARFEM now serves as a preprocessor program in place of NASTRAN, which was formerly used. Documentation of the methods applied in GARFEM exists but is so far limited to the capabilities in existence before the GAROS interface was developed.

  15. FLUTAN input specifications

    International Nuclear Information System (INIS)

    Borgwaldt, H.; Baumann, W.; Willerding, G.

    1991-05-01

    FLUTAN is a highly vectorized computer code for 3-D fluid-dynamic and thermal-hydraulic analyses in Cartesian and cylindrical coordinates. It is related to the family of COMMIX codes originally developed at Argonne National Laboratory, USA. To a large extent, FLUTAN relies on basic concepts and structures imported from COMMIX-1B and COMMIX-2, which were made available to KfK in the framework of cooperation contracts in the fast reactor safety field. While on the one hand not all features of the original COMMIX versions have been implemented in FLUTAN, the code on the other hand includes some essential innovative options, such as the CRESOR solution algorithm, a general 3-dimensional rebalancing scheme for solving the pressure equation, and LECUSSO-QUICK-FRAM techniques suitable for reducing 'numerical diffusion' in both the enthalpy and momentum equations. This report provides users with detailed input instructions, presents formulations of the various model options, and explains by means of comprehensive sample input how to use the code. (orig.) [de

  16. Designing the input vector to ANN-based models for short-term load forecast in electricity distribution systems

    International Nuclear Information System (INIS)

    Santos, P.J.; Martins, A.G.; Pires, A.J.

    2007-01-01

    The present trend toward electricity market restructuring increases the need for reliable short-term load forecast (STLF) algorithms, in order to assist electric utilities in activities such as planning, operating and controlling electric energy systems. Methodologies such as artificial neural networks (ANN) have been widely used in the next hour load forecast horizon with satisfactory results. However, this type of approach has had some shortcomings. Usually, the input vector (IV) is defined in an arbitrary way, mainly based on experience, on engineering judgment criteria and on concern about the ANN dimension, always taking into consideration the apparent correlations within the available endogenous and exogenous data. In this paper, a proposal is made of an approach to define the IV composition, with the main focus on reducing the influence of trial-and-error and common-sense judgments, which usually are not based on sufficient evidence of comparative advantages over previous alternatives. The proposal includes the assessment of the strictly necessary instances of the endogenous variable, both from the point of view of the contiguous values prior to the forecast to be made, and of the past values representing the trend of consumption at homologous time intervals of the past. It also assesses the influence of exogenous variables, again limiting their presence in the IV to the indispensable minimum. A comparison is made with two alternative IV structures previously proposed in the literature, also applied to the distribution sector. The paper is supported by a real case study in the distribution sector. (author)

  17. Designing the input vector to ANN-based models for short-term load forecast in electricity distribution systems

    Energy Technology Data Exchange (ETDEWEB)

    Santos, P.J. [LabSEI-ESTSetubal-Department of Electrical Engineering at Escola Superior de Tecnologia, Polytechnic Institute of Setubal Rua Vale de Chaves Estefanilha, 2910-761 Setubal (Portugal); Martins, A.G. [Department of Electrical Engineering, FCTUC/INESC, Polo 2 University of Coimbra, Pinhal de Marrocos, 3030 Coimbra (Portugal); Pires, A.J. [LabSEI-ESTSetubal-Department of Electrical Engineering at Escola Superior de Tecnologia, Polytechnic Institute of Setubal Rua Vale de, Chaves Estefanilha, 2910-761 Setubal (Portugal)

    2007-05-15

    The present trend toward electricity market restructuring increases the need for reliable short-term load forecast (STLF) algorithms, in order to assist electric utilities in activities such as planning, operating and controlling electric energy systems. Methodologies such as artificial neural networks (ANN) have been widely used in the next hour load forecast horizon with satisfactory results. However, this type of approach has had some shortcomings. Usually, the input vector (IV) is defined in an arbitrary way, mainly based on experience, on engineering judgment criteria and on concern about the ANN dimension, always taking into consideration the apparent correlations within the available endogenous and exogenous data. In this paper, a proposal is made of an approach to define the IV composition, with the main focus on reducing the influence of trial-and-error and common-sense judgments, which usually are not based on sufficient evidence of comparative advantages over previous alternatives. The proposal includes the assessment of the strictly necessary instances of the endogenous variable, both from the point of view of the contiguous values prior to the forecast to be made, and of the past values representing the trend of consumption at homologous time intervals of the past. It also assesses the influence of exogenous variables, again limiting their presence in the IV to the indispensable minimum. A comparison is made with two alternative IV structures previously proposed in the literature, also applied to the distribution sector. The paper is supported by a real case study in the distribution sector. (author)
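
    One simple, evidence-based way to assemble the endogenous part of such an input vector (a hypothetical illustration in the spirit of the approach above, not the authors' algorithm) is to keep only those lags of the load series whose autocorrelation is strong:

```python
import numpy as np

def select_lags(load, max_lag=168, threshold=0.8):
    """Rank candidate lags of an hourly load series by absolute
    autocorrelation and keep only those above `threshold`, so the
    input vector holds only the strictly necessary past values."""
    x = np.asarray(load, dtype=float)
    x = x - x.mean()
    denom = float(np.dot(x, x))
    keep = []
    for k in range(1, max_lag + 1):
        r = np.dot(x[:-k], x[k:]) / denom    # lag-k autocorrelation
        if abs(r) >= threshold:
            keep.append(k)
    return keep
```

    On an hourly series with a strong daily cycle this typically retains the contiguous lags and the homologous lag 24, mirroring the paper's distinction between contiguous and homologous past values.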

  18. WE-FG-206-06: Dual-Input Tracer Kinetic Modeling and Its Analog Implementation for Dynamic Contrast-Enhanced (DCE-) MRI of Malignant Mesothelioma (MPM)

    Energy Technology Data Exchange (ETDEWEB)

    Lee, S; Rimner, A; Hayes, S; Hunt, M; Deasy, J; Zauderer, M; Rusch, V; Tyagi, N [Memorial Sloan Kettering Cancer Center, New York, NY (United States)

    2016-06-15

    Purpose: To use dual-input tracer kinetic modeling of the lung for mapping spatial heterogeneity of various kinetic parameters in malignant MPM. Methods: Six MPM patients received DCE-MRI as part of their radiation therapy simulation scan. Five patients had the epithelioid subtype of MPM, while one was biphasic. A 3D fast-field echo sequence with TR/TE/Flip angle of 3.62ms/1.69ms/15° was used for DCE-MRI acquisition. The scan was collected for 5 minutes with a temporal resolution of 5-9 seconds depending on the spatial extent of the tumor. A principal component analysis-based groupwise deformable registration was used to co-register all the DCE-MRI series for motion compensation. All the images were analyzed using five different dual-input tracer kinetic models implemented in analog continuous-time formalism: the Tofts-Kety (TK), extended TK (ETK), two compartment exchange (2CX), adiabatic approximation to the tissue homogeneity (AATH), and distributed parameter (DP) models. The following parameters were computed for each model: total blood flow (BF), pulmonary flow fraction (γ), pulmonary blood flow (BF-pa), systemic blood flow (BF-a), blood volume (BV), mean transit time (MTT), permeability-surface area product (PS), fractional interstitial volume (vi), extraction fraction (E), volume transfer constant (Ktrans) and efflux rate constant (kep). Results: Although the majority of patients had epithelioid histologies, kinetic parameter values varied across different models. One patient showed a higher total BF value in all models among the epithelioid histologies, although the γ value varied among these different models. In one tumor with a large area of necrosis, the TK and ETK models showed higher E, Ktrans, and kep values and lower interstitial volume as compared to the AATH, DP and 2CX models. Kinetic parameters such as BF-pa, BF-a, PS, and Ktrans values were higher in the surviving group compared to the non-surviving group across most models. Conclusion: Dual-input tracer
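
    As a minimal sketch of one of the models named above, the dual-input Tofts-Kety tissue response can be written as a convolution of a mixed pulmonary/systemic input with an exponential kernel (a discrete-time toy on a uniform grid, not the analog continuous-time implementation used in the study):

```python
import numpy as np

def dual_input_tofts(t, c_pa, c_a, ktrans, kep, gamma):
    """Tissue concentration for a dual-input Tofts-Kety model: the
    input is a mix of pulmonary (c_pa) and systemic (c_a) arterial
    concentrations weighted by the pulmonary flow fraction gamma, and
    C_t(t) = Ktrans * integral of C_in(tau) * exp(-kep*(t-tau)) dtau."""
    dt = t[1] - t[0]                          # uniform sampling assumed
    c_in = gamma * c_pa + (1.0 - gamma) * c_a
    kernel = np.exp(-kep * t)
    return ktrans * dt * np.convolve(c_in, kernel)[: len(t)]
```

    Fitting `ktrans`, `kep`, and `gamma` voxel-by-voxel to measured concentration curves is what produces the parameter maps discussed in the abstract.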

  19. Incorporation of Plasticity and Damage Into an Orthotropic Three-Dimensional Model with Tabulated Input Suitable for Use in Composite Impact Problems

    Science.gov (United States)

    Goldberg, Robert K.; Carney, Kelly S.; Dubois, Paul; Hoffarth, Canio; Rajan, Subramaniam; Blackenhorn, Gunther

    2015-01-01

    The need for accurate material models to simulate the deformation, damage and failure of polymer matrix composites under impact conditions is becoming critical as these materials are gaining increased usage in the aerospace and automotive industries. While there are several composite material models currently available within commercial transient dynamic finite element codes, several features have been identified as being lacking in the currently available material models that could substantially enhance the predictive capability of the impact simulations. A specific desired feature pertains to the incorporation of both plasticity and damage within the material model. Another desired feature relates to using experimentally based tabulated stress-strain input to define the evolution of plasticity and damage as opposed to specifying discrete input properties (such as modulus and strength) and employing analytical functions to track the response of the material. To begin to address these needs, a combined plasticity and damage model suitable for use with both solid and shell elements is being developed for implementation within the commercial code LS-DYNA. The plasticity model is based on extending the Tsai-Wu composite failure model into a strain-hardening based orthotropic plasticity model with a non-associative flow rule. The evolution of the yield surface is determined based on tabulated stress-strain curves in the various normal and shear directions and is tracked using the effective plastic strain. The effective plastic strain is computed by using the non-associative flow rule in combination with appropriate numerical methods. To compute the evolution of damage, a strain equivalent semi-coupled formulation is used, in which a load in one direction results in a stiffness reduction in multiple coordinate directions. 
A specific laminated composite is examined to demonstrate the process of characterizing and analyzing the response of a composite using the developed
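
    A minimal illustration of the tabulated-input idea described above (hypothetical table values; the actual LS-DYNA implementation is far more involved): the current yield stress is interpolated from a measured stress-strain table at the current effective plastic strain, and a scalar damage variable scales the stiffness down:

```python
import numpy as np

def yield_stress(eps_p, table_strain, table_stress):
    """Interpolate the yield stress from a tabulated hardening curve
    at the current effective plastic strain eps_p."""
    return float(np.interp(eps_p, table_strain, table_stress))

def damaged_modulus(modulus, damage):
    """Semi-coupled damage: a load in one direction produces a damage
    variable that reduces stiffness, here simply as (1 - d) * E."""
    return (1.0 - damage) * modulus
```
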

  20. On the sensitivity of EAS observables at superhigh energies to the physical input of hadron interaction models

    International Nuclear Information System (INIS)

    Kalmykov, N.N.; Ostapchenko, S.S.; Pavlov, A.I.

    1999-01-01

    The differences in the underlying assumptions of hadronic interaction models are analysed. The influence of the assumed momentum distributions of quarks (antiquarks) positioned at the ends of quark-gluon strings is investigated in the framework of the QGSJET model. It is shown that for the choice of the effective exponent of these distributions α_q → 1, the calculated EAS characteristics move closer to the predictions of the SYBILL model.

  1. A critical review of the data requirements for fluid flow models through fractured rock

    International Nuclear Information System (INIS)

    Priest, S.D.

    1986-01-01

    The report is a comprehensive critical review of the data requirements for ten models of fluid flow through fractured rock, developed in Europe and North America. The first part of the report contains a detailed review of rock discontinuities and how their important geometrical properties can be quantified. This is followed by a brief summary of the fundamental principles in the analysis of fluid flow through two-dimensional discontinuity networks and an explanation of a new approach to the incorporation of variability and uncertainty into geotechnical models. The report also contains a review of the geological and geotechnical properties of anhydrite and granite. Of the ten fluid flow models reviewed, only three offer a realistic fracture network model for which it is feasible to obtain the input data. Although some of the other models have some valuable or novel features, there is a tendency to concentrate on the simulation of contaminant transport processes, at the expense of providing a realistic fracture network model. Only two of the models reviewed, neither of them developed in Europe, have seriously addressed the problem of analysing fluid flow in three-dimensional networks. (author)

  2. The SIOP Model: Transforming the Experiences of College Professors. Part I. Lesson Planning, Building Background, and Comprehensible Input

    Science.gov (United States)

    Salcedo, Diana M.

    2010-01-01

    This article, the first of two, presents the introduction, context, and analysis of professor experiences in an on-going research project for implementing a new educational model in a bilingual teacher's college in Bogotá, Colombia. The model, the Sheltered Instruction Observation Protocol (SIOP), promotes eight components for a bilingual education…

  3. Groundwater travel time uncertainty analysis. Sensitivity of results to model geometry, and correlations and cross correlations among input parameters

    International Nuclear Information System (INIS)

    Clifton, P.M.

    1985-03-01

    This study examines the sensitivity of the travel time distribution predicted by a reference case model to (1) scale of representation of the model parameters, (2) size of the model domain, (3) correlation range of log-transmissivity, and (4) cross correlations between transmissivity and effective thickness. The basis for the reference model is the preliminary stochastic travel time model previously documented by the Basalt Waste Isolation Project. Results of this study show the following. The variability of the predicted travel times can be adequately represented when the ratio between the size of the zones used to represent the model parameters and the log-transmissivity correlation range is less than about one-fifth. The size of the model domain and the types of boundary conditions can have a strong impact on the distribution of travel times. Longer log-transmissivity correlation ranges cause larger variability in the predicted travel times. Positive cross correlation between transmissivity and effective thickness causes a decrease in the travel time variability. These results demonstrate the need for a sound conceptual model prior to conducting a stochastic travel time analysis
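
    The core of a stochastic travel time analysis like the one above can be sketched in a few lines (a single-path toy with illustrative parameter values, not the Basalt Waste Isolation Project model): sample transmissivity from a lognormal distribution and propagate it through the travel time relation:

```python
import numpy as np

def travel_time_mc(mu_lnT, sigma_lnT, gradient, porosity, length,
                   n=10000, seed=0):
    """Monte Carlo travel-time distribution for a single flow path:
    ln-transmissivity is normal, and travel time scales inversely with
    transmissivity, t = porosity * length**2 / (T * gradient) for a
    unit-thickness layer.  A larger sigma_lnT (e.g. from a longer
    correlation range) gives larger travel-time variability."""
    rng = np.random.default_rng(seed)
    T = rng.lognormal(mu_lnT, sigma_lnT, n)
    return porosity * length ** 2 / (T * gradient)
```
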

  4. School Inputs, Household Substitution, and Test Scores

    OpenAIRE

    Das, Jishnu; Dercon, Stefan; Krishnan, Pramila; Sundararaman, Venkatesh; Muralidharan, Karthik; Habyarimana, James

    2013-01-01

    Empirical studies of the relationship between school inputs and test scores typically do not account for the fact that households will respond to changes in school inputs. This paper presents a dynamic household optimization model relating test scores to school and household inputs, and tests its predictions in two very different low-income country settings -- Zambia and India. The authors...

  5. Energy balance in the solar transition region. II - Effects of pressure and energy input on hydrostatic models

    Science.gov (United States)

    Fontenla, J. M.; Avrett, E. H.; Loeser, R.

    1991-01-01

    The radiation of energy by hydrogen lines and continua in hydrostatic energy-balance models of the transition region between the solar chromosphere and corona is studied using models which assume that mechanical or magnetic energy is dissipated in the hot corona and is then transported toward the chromosphere down the steep temperature gradient of the transition region. These models explain the average quiet sun and also the entire range of variability of the Ly-alpha lines. The relations between the downward energy flux, the pressure of the transition region, and the different hydrogen emission are described.

  6. Adaptation of a community-based participatory research model to gain community input on identifying indicators of successful parenting.

    Science.gov (United States)

    Zlotnick, Cheryl; Wright, Marguerite; Sanchez, Roberto Macias; Kusnir, Rosario Murga; Te'o-Bennett, Iemaima

    2010-01-01

    Parenting models are generally based on families in stable homes, rather than in transitional situations such as in foster care, homeless shelters, and other temporary, at-risk residences. Consequently, these models do not recognize the unique challenges of families in transition. This study explored the domains of the Circumplex Model and examined its fit for transitional families using tenets from community-based participatory research. Findings suggest that in addition to the Circumplex Model's components, caregivers with children living in transition believe that managing the scrutiny of external authority systems and countering the negative influences of poverty and racism are two indicators that contribute to parenting success. Obtaining consumer-informed views of parenting not only is an important contributor to standards of practice, but also a promising avenue for future research.

  7. Coronal hole evolution from multi-viewpoint data as input for a STEREO solar wind speed persistence model

    Directory of Open Access Journals (Sweden)

    Temmer Manuela

    2018-01-01

    Full Text Available We present a concept study of a solar wind forecasting method for Earth, based on persistence modeling from STEREO in situ measurements combined with multi-viewpoint EUV observational data. By comparing the fractional areas of coronal holes (CHs) extracted from EUV data of STEREO and SoHO/SDO, we perform an uncertainty assessment derived from changes in the CHs and apply those changes to the predicted solar wind speed profile at 1 AU. We evaluate the method for the time period 2008–2012, and compare the results to a persistence model based on ACE in situ measurements and to the STEREO persistence model without implementing the information on CH evolution. Compared to an ACE-based persistence model, the STEREO persistence model that takes into account the evolution of CHs is able to increase the number of correctly predicted high-speed streams by about 12%, to decrease the number of missed streams by about 23%, and to decrease the number of false alarms by about 19%. However, the added information on CH evolution is not able to deliver more accurate speed values for the forecast than the STEREO persistence model without CH information, which performs better than an ACE-based persistence model. Investigating the CH evolution between the STEREO and Earth views for varying separation angles over ∼25–140° East of Earth, we derive some relation between expanding CHs and increasing solar wind speed, but a less clear relation for decaying CHs and decreasing solar wind speed. This fact most likely prevents the method from making more precise forecasts. The obtained results support a future L5 mission and show the importance and valuable contribution of using multi-viewpoint data.

  8. Coronal hole evolution from multi-viewpoint data as input for a STEREO solar wind speed persistence model

    Science.gov (United States)

    Temmer, Manuela; Hinterreiter, Jürgen; Reiss, Martin A.

    2018-03-01

    We present a concept study of a solar wind forecasting method for Earth, based on persistence modeling from STEREO in situ measurements combined with multi-viewpoint EUV observational data. By comparing the fractional areas of coronal holes (CHs) extracted from EUV data of STEREO and SoHO/SDO, we perform an uncertainty assessment derived from changes in the CHs and apply those changes to the predicted solar wind speed profile at 1 AU. We evaluate the method for the time period 2008-2012, and compare the results to a persistence model based on ACE in situ measurements and to the STEREO persistence model without implementing the information on CH evolution. Compared to an ACE-based persistence model, the STEREO persistence model that takes into account the evolution of CHs is able to increase the number of correctly predicted high-speed streams by about 12%, to decrease the number of missed streams by about 23%, and to decrease the number of false alarms by about 19%. However, the added information on CH evolution is not able to deliver more accurate speed values for the forecast than the STEREO persistence model without CH information, which performs better than an ACE-based persistence model. Investigating the CH evolution between the STEREO and Earth views for varying separation angles over ˜25-140° East of Earth, we derive some relation between expanding CHs and increasing solar wind speed, but a less clear relation for decaying CHs and decreasing solar wind speed. This fact most likely prevents the method from making more precise forecasts. The obtained results support a future L5 mission and show the importance and valuable contribution of using multi-viewpoint data.
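
    The basic persistence mapping underlying both models can be sketched as follows (a simplified illustration assuming rigid corotation; the published method additionally corrects the forecast using the CH area evolution):

```python
import numpy as np

def persistence_forecast(times_days, speed, separation_deg,
                         rotation_days=27.27):
    """Persistence forecast of solar wind speed at Earth from in situ
    data measured at a spacecraft trailing Earth: a corotating stream
    seen at the spacecraft is expected at Earth after the Sun rotates
    through the separation angle."""
    lag = separation_deg / 360.0 * rotation_days
    return times_days + lag, np.asarray(speed, dtype=float).copy()
```

    For a 60° separation this shifts the measured speed profile by about 4.5 days, which is the essence of an L5-type forecast.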

  9. A Practical pedestrian approach to parsimonious regression with inaccurate inputs

    Directory of Open Access Journals (Sweden)

    Seppo Karrila

    2014-04-01

    Full Text Available A measurement result often dictates an interval containing the correct value. Interval data is also created by roundoff, truncation, and binning. We focus on such common interval uncertainty in data. Inaccuracy in model inputs is typically ignored when fitting models. We provide a practical approach for regression with inaccurate data: the mathematics is easy, and the linear programming formulations are simple to use, even in a spreadsheet. This self-contained elementary presentation introduces interval linear systems and requires only basic knowledge of algebra. Feature selection is automatic, but can be controlled to find only a few most relevant inputs, and joint feature selection is enabled for multiple modeled outputs. With more features than cases, a novel connection to compressed sensing emerges: robustness against interval errors-in-variables implies model parsimony, and the input inaccuracies determine the regularization term. A small numerical example highlights counterintuitive results and a dramatic difference to total least squares.
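
    A minimal version of such an interval-constrained, parsimony-seeking regression can be posed as a linear program (a sketch using scipy's `linprog`; the paper's spreadsheet formulation is equivalent in spirit): minimise the L1 norm of the coefficients subject to every prediction lying inside its interval:

```python
import numpy as np
from scipy.optimize import linprog

def interval_l1_regression(X, lo, hi):
    """Minimise ||w||_1 subject to lo <= X @ w <= hi, by splitting
    w = u - v with u, v >= 0 so both the objective and constraints
    are linear.  The L1 objective drives unneeded coefficients to
    zero, giving automatic feature selection."""
    X = np.asarray(X, dtype=float)
    n, p = X.shape
    c = np.ones(2 * p)                        # sum(u) + sum(v) = ||w||_1
    A_ub = np.vstack([np.hstack([X, -X]),     #  X w <= hi
                      np.hstack([-X, X])])    # -X w <= -lo
    b_ub = np.concatenate([hi, -np.asarray(lo, dtype=float)])
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None)] * (2 * p))
    u, v = res.x[:p], res.x[p:]
    return u - v
```
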

  10. Phase-field modeling of coring during solidification of Au–Ni alloy using quaternions and CALPHAD input

    International Nuclear Information System (INIS)

    Fattebert, J.-L.; Wickett, M.E.; Turchi, P.E.A.

    2014-01-01

    A numerical method for the simulation of microstructure evolution during the solidification of an alloy is presented. The approach is based on a phase-field model including a phase variable, an orientation variable given by a quaternion, the alloy composition and a uniform temperature field. Energies and diffusion coefficients used in the model rely on thermodynamic and kinetic databases in the framework of the CALPHAD methodology. The numerical approach is based on a finite volume discretization and an implicit time-stepping algorithm. Numerical results for solidification and accompanying coring effect in a Au–Ni alloy are used to illustrate the methodology
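
    To illustrate the phase-field idea in its simplest form (a 1-D Allen-Cahn toy with a double-well potential and illustrative parameters, far removed from the quaternion/CALPHAD model of the paper), one explicit time step reads:

```python
import numpy as np

def allen_cahn_step(phi, dt, dx, mobility=1.0, eps=1.0):
    """One explicit step of a 1-D Allen-Cahn phase-field equation,
    d(phi)/dt = M * (eps^2 * laplacian(phi) - W'(phi)), with the
    double-well W(phi) = phi^2 * (1 - phi)^2 whose minima phi = 0
    and phi = 1 represent liquid and solid (periodic boundaries)."""
    lap = (np.roll(phi, 1) - 2.0 * phi + np.roll(phi, -1)) / dx ** 2
    dW = 2.0 * phi * (1.0 - phi) * (1.0 - 2.0 * phi)   # W'(phi)
    return phi + dt * mobility * (eps ** 2 * lap - dW)
```

    Pure phases are stationary under this update; a diffuse interface between them relaxes toward the equilibrium profile.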

  11. Formal Requirements Modeling for Reactive Systems with Coloured Petri Nets

    DEFF Research Database (Denmark)

    Tjell, Simon

    This is important because it represents the identification of what is being designed (the reactive system), and what is given and being made assumptions about (the environment). The representation of the environment is further partitioned to distinguish human actors from non-human actors. This allows the modeler...... An approach to addressing the problem of validating formal requirements models through interactive graphical animations is presented. Executable Use Cases (EUCs) provide a framework for integrating three tiers of descriptions of specifications and environment assumptions: the lower tier is an informal description...... to distinguish the modeling artifacts describing the environment from those describing the specifications for a reactive system. The formalization allows for clear identification of interfaces between interacting domains, where the interaction takes place through an abstraction of possibly parameterized states...

  12. Use of Bayesian Estimates to determine the Volatility Parameter Input in the Black-Scholes and Binomial Option Pricing Models

    Directory of Open Access Journals (Sweden)

    Shu Wing Ho

    2011-12-01

    Full Text Available The valuation of options and many other derivative instruments requires an estimation of ex-ante or forward-looking volatility. This paper adopts a Bayesian approach to estimate stock price volatility. We find evidence that, overall, Bayesian volatility estimates more closely approximate the implied volatility of stocks derived from traded call and put option prices than historical volatility estimates sourced from IVolatility.com (“IVolatility”). Our evidence suggests that use of the Bayesian approach to estimate volatility can provide a more accurate measure of ex-ante stock price volatility and will be useful in the pricing of derivative securities where the implied stock price volatility cannot be observed.
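
    A minimal conjugate-prior sketch of the Bayesian idea (the prior family and parameter values here are illustrative assumptions, not the authors' specification): with zero-mean log returns and an inverse-gamma prior on the return variance, the posterior is available in closed form:

```python
import numpy as np

def bayesian_volatility(returns, alpha0=3.0, beta0=0.02):
    """Posterior volatility estimate under a conjugate inverse-gamma
    prior IG(alpha0, beta0) on the variance of zero-mean daily log
    returns: the posterior is IG(alpha0 + n/2, beta0 + 0.5*sum(r^2)),
    and we return the annualised square root of its mean variance
    (alpha0, beta0 are illustrative prior choices)."""
    r = np.asarray(returns, dtype=float)
    alpha_n = alpha0 + r.size / 2.0
    beta_n = beta0 + 0.5 * np.sum(r ** 2)
    var_mean = beta_n / (alpha_n - 1.0)      # mean of IG(alpha_n, beta_n)
    return np.sqrt(252.0 * var_mean)         # 252 trading days per year
```

    With few observations the estimate shrinks toward the prior, which is what lets the Bayesian estimate track forward-looking volatility better than a plain historical sample variance.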

  13. Development of a LiDAR derived digital elevation model (DEM) as Input to a METRANS geographic information system (GIS).

    Science.gov (United States)

    2011-05-01

    This report describes an assessment of digital elevation models (DEMs) derived from LiDAR data for a subset of the Ports of Los Angeles and Long Beach. A methodology based on Monte Carlo simulation was applied to investigate the accuracy of DEMs ...

  14. Model of an aquaponic system for minimised water, energy and nitrogen requirements.

    Science.gov (United States)

    Reyes Lastiri, D; Slinkert, T; Cappon, H J; Baganz, D; Staaks, G; Keesman, K J

    2016-01-01

    Water and nutrient savings can be established by coupling water streams between interacting processes. Wastewater from production processes contains nutrients like nitrogen (N), which can and should be recycled in order to meet future regulatory discharge demands. Optimisation of interacting water systems is a complex task. An effective way of understanding, analysing and optimising such systems is by applying mathematical models. The present modelling work aims at supporting the design of a nearly emission-free aquaculture and hydroponic system (aquaponics), thus contributing to sustainable production and to food security for the 21st century. Based on the model, a system that couples 40 m(3) fish tanks and a hydroponic system of 1,000 m(2) can produce 5 tons of tilapia and 75 tons of tomato yearly. The system requires energy to condense and recover evaporated water, for lighting and heating, adding up to 1.3 GJ/m(2) every year. In the suggested configuration, the fish can provide about 26% of the N required in a plant cycle. A coupling strategy that sends water from the fish to the plants in amounts proportional to the fish feed input, reduces the standard deviation of the NO3(-) level in the fish cycle by 35%.
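
    The nitrogen-coupling claim can be illustrated with a back-of-the-envelope balance (all parameter values below are illustrative assumptions, not figures from the paper):

```python
def nitrogen_coverage(feed_kg, feed_n_frac, excreted_frac,
                      plant_n_demand_kg):
    """Fraction of the hydroponic nitrogen demand covered by fish
    wastewater: N contained in the feed, times the fraction excreted
    to the water, divided by the plant demand over the same period."""
    n_supplied = feed_kg * feed_n_frac * excreted_frac
    return n_supplied / plant_n_demand_kg
```

    For example, 10 t of feed per year at 6% N with half of it excreted to the water supplies 300 kg N; against a hypothetical plant demand of 1150 kg N this covers roughly a quarter of the requirement, the same order as the 26% reported above.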

  15. High resolution weather data for urban hydrological modelling and impact assessment, ICT requirements and future challenges

    Science.gov (United States)

    ten Veldhuis, Marie-claire; van Riemsdijk, Birna

    2013-04-01

    Hydrological analysis of urban catchments requires high resolution rainfall and catchment information because of the small size of these catchments, the high spatial variability of the urban fabric, fast runoff processes and related short response times. Rainfall information available from traditional radar and rain gauge networks does not meet the relevant scales of urban hydrology. A new type of weather radar, based on X-band frequency and equipped with Doppler and dual polarimetry capabilities, promises to provide more accurate rainfall estimates at the spatial and temporal scales that are required for urban hydrological analysis. Recently, the RAINGAIN project was started to analyse the applicability of this new type of radar in the context of urban hydrological modelling. In this project, meteorologists and hydrologists work closely together in several stages of urban hydrological analysis: from the acquisition procedure of novel and high-end radar products to data acquisition and processing, rainfall data retrieval, hydrological event analysis and forecasting. The project comprises four pilot locations with various characteristics of weather radar equipment, ground stations, urban hydrological systems, modelling approaches and requirements. Access to data processing and modelling software is handled in different ways in the pilots, depending on ownership and user context. Sharing of data and software among pilots and with the outside world is an ongoing topic of discussion. The availability of high resolution weather data augments requirements with respect to the resolution of hydrological models and input data. This has led to the development of fully distributed hydrological models, the implementation of which remains limited by the unavailability of hydrological input data. On the other hand, if models are to be used in flood forecasting, hydrological models need to be computationally efficient to enable fast responses to extreme event conditions.
This

  16. Development of Off-take Model, Subcooled Boiling Model, and Radiation Heat Transfer Input Model into the MARS Code for a Regulatory Auditing of CANDU Reactors

    International Nuclear Information System (INIS)

    Yoon, C.; Rhee, B. W.; Chung, B. D.; Ahn, S. H.; Kim, M. W.

    2009-01-01

    Korea currently has four operating units of the CANDU-6 type reactor in Wolsong. However, the safety assessment system for CANDU reactors has not been fully established, owing to a lack of independently developed technology. Although the CATHENA code had been introduced from AECL, it is undesirable to use a vendor's code for a regulatory auditing analysis. In Korea, the MARS code has been developed for decades and is being considered by KINS as a thermal hydraulic regulatory auditing tool for nuclear power plants. Before this decision, KINS (Korea Institute of Nuclear Safety) had developed the RELAP5/MOD3/CANDU code for CANDU safety analyses by modifying the models of the existing PWR auditing tool, RELAP5/MOD3. The main purpose of this study is to transplant the CANDU models of the RELAP5/MOD3/CANDU code to the MARS code, including a quality assurance of the developed models

  17. Evaluating the economic damages of transport disruptions using a transnational and interregional input-output model for Japan, China, and South Korea

    Science.gov (United States)

    Irimoto, Hiroshi; Shibusawa, Hiroyuki; Miyata, Yuzuru

    2017-10-01

    Damage to transportation networks as a result of natural disasters can lead to economic losses due to lost trade along those links, in addition to the costs of damage to the infrastructure itself. This study evaluates the economic damages of disruptions to transport links such as highways, tunnels, bridges, and ports using a transnational and interregional input-output model that divides the world into 23 regions: 9 regions in Japan, 7 regions in China, 4 regions in South Korea, plus Taiwan, ASEAN5, and the USA, allowing us to focus on Japan's regional and international links. In our simulation, the economic ripple effects of both international and interregional transport disruptions are measured by changes in the trade coefficients in the input-output model. The simulation showed that, among regional links in Japan, a transport disruption in the Kanmon Straits causes the most damage to our targeted world, resulting in economic damage of approximately 36.3 billion. Among international links between Japan, China, and Korea, damage to the link between Kanto in Japan and Huabei in China causes economic losses of approximately 31.1 billion. Our results highlight the importance of disaster prevention in the Kanmon Straits, Kanto, and Huabei to help ensure economic resilience.
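
The mechanism the paper uses, propagating a change in trade coefficients through a Leontief input-output model, can be sketched at toy scale (a 3-region table with invented coefficients instead of the paper's 23 regions; representing a disruption as a rise in input coefficients on the affected link, e.g. because goods must be rerouted at higher cost, is one simple convention, not necessarily the paper's exact scenario design):

```python
import numpy as np

# Toy 3-region input-output table (the paper uses 23 regions).
# A[i, j]: inputs from region i required per unit of output in region j.
A = np.array([[0.10, 0.05, 0.02],
              [0.04, 0.12, 0.06],
              [0.03, 0.07, 0.08]])
f = np.array([100.0, 80.0, 60.0])           # final demand by region

# Leontief quantity model: x = (I - A)^-1 f
x_base = np.linalg.solve(np.eye(3) - A, f)

# Transport disruption between regions 0 and 1: detours raise the
# trade (input) coefficients on that link by 20%.
A_dis = A.copy()
A_dis[0, 1] *= 1.2
A_dis[1, 0] *= 1.2
x_dis = np.linalg.solve(np.eye(3) - A_dis, f)

# Extra output required to satisfy the same final demand = economic cost
damage = x_dis.sum() - x_base.sum()
print(f"extra output required: {damage:.2f}")
```

Repeating this for every candidate link and ranking the damages is, in essence, how the paper identifies the Kanmon Straits and the Kanto-Huabei link as critical.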

  18. Groundwater travel time uncertainty analysis: Sensitivity of results to model geometry, and correlations and cross correlations among input parameters

    International Nuclear Information System (INIS)

    Clifton, P.M.

    1984-12-01

    The deep basalt formations beneath the Hanford Site are being investigated for the Department of Energy (DOE) to assess their suitability as a host medium for a high level nuclear waste repository. Predicted performance of the proposed repository is an important part of the investigation. One of the performance measures being used to gauge the suitability of the host medium is pre-waste-emplacement groundwater travel times to the accessible environment. Many deterministic analyses of groundwater travel times have been completed by Rockwell and other independent organizations. Recently, Rockwell has completed a preliminary stochastic analysis of groundwater travel times. This document presents analyses that show the sensitivity of the results from the previous stochastic travel time study to: (1) scale of representation of model parameters, (2) size of the model domain, (3) correlation range of log-transmissivity, and (4) cross-correlation between transmissivity and effective thickness. 40 refs., 29 figs., 6 tabs

  19. Thermal properties and unfrozen water content of frozen volcanic ash as a modelling input parameters in mountainous volcanic areas

    Science.gov (United States)

    Kuznetsova, E.

    2016-12-01

    Volcanic eruptions are one of the major causes of the burial of ice and snow in volcanic areas. This has been demonstrated on volcanoes, e.g. in Iceland, Russia, the USA and Chile, where the combination of a permafrost-favorable climate and a thin layer of tephra is sufficient to reduce sub-tephra snow ablation substantially, even to zero, causing ground ice formation and permafrost aggradation. Many numerical models that have been used to investigate and predict the evolution of cold regions under climatic change lack accurate data on the thermal properties (thermal conductivity, heat capacity, thermal diffusivity) of the soil or debris layers involved. The angular shape of the fragments that make up ash and scoria makes it inappropriate to apply existing models to estimate bulk thermal conductivity, and the lack of experimental data on the thermal conductivity of volcanic deposits hinders the development of realistic models. The decrease in thermal conductivity of volcanic ash in the frozen state is associated with the development and presence of unfrozen water films that may have a direct mechanical impact on the movement or slippage between ice and particles, and thus change the stress transfer. This becomes particularly significant during periods of climate change, when enhanced temperatures and associated melting could weaken polythermal glaciers, affect areas with warm and discontinuous permafrost, and induce ice or land movements, perhaps on a catastrophic scale. In the presentation, we summarize existing data regarding: (i) the thermal properties and unfrozen water content of frozen volcanic ash and cinder, (ii) the effects of cold temperatures on the weathering processes of volcanic glass, and (iii) the relationship between the mineralogy of frozen volcanic deposits and their thermal properties, and then discuss their significance in relation to the numerical modelling of the thermal behavior of glaciers and permafrost.

  20. Input torque sensitivity to uncertain parameters in biped robot

    Science.gov (United States)

    Ding, Chang-Tao; Yang, Shi-Xi; Gan, Chun-Biao

    2013-06-01

    Input torque is the main power source for maintaining the bipedal walking of a robot, and can be calculated from trajectory planning and dynamic modeling of the biped robot. During bipedal walking, the input torque usually has to be adjusted to maintain the pre-planned stable trajectory, because of uncertain parameters arising from objective or subjective factors in the dynamical model. Here, a planar 5-link biped robot is used as an illustrating example to investigate the effects of uncertain parameters on the input torques. Kinematic equations of the biped robot are first established by third-order spline curves based on the trajectory planning method, and the dynamic modeling is accomplished by taking both the certain and uncertain parameters into account. Next, several evaluation indices on input torques are introduced to perform a sensitivity analysis of the input torque with respect to the uncertain parameters. Finally, based on Monte Carlo simulation, the values of the evaluation indices on input torques are presented, from which all the robot parameters are classified into three categories, i.e., strongly sensitive, sensitive and almost insensitive parameters.
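
As a minimal stand-in for the paper's 5-link model, the Monte Carlo sensitivity idea can be sketched with a single-link inverse-dynamics model (the trajectory, parameter distributions and coefficient-of-variation index below are illustrative choices, not the paper's):

```python
import numpy as np

rng = np.random.default_rng(42)
g = 9.81
t = np.linspace(0.0, 1.0, 100)
theta = 0.5 * np.sin(np.pi * t)                    # planned joint trajectory (rad)
theta_dd = -0.5 * np.pi ** 2 * np.sin(np.pi * t)   # its angular acceleration

def peak_torque(m, l):
    """Single-link inverse dynamics: tau = m*l^2*theta_dd + m*g*l*sin(theta)."""
    tau = m * l ** 2 * theta_dd + m * g * l * np.sin(theta)
    return np.abs(tau).max()

# Monte Carlo sampling of the uncertain parameters (mass, link length)
n = 5000
m_s = rng.normal(5.0, 0.5, n)     # mass: nominal 5 kg, 10% std
l_s = rng.normal(0.4, 0.04, n)    # length: nominal 0.4 m, 10% std

tau_m = np.array([peak_torque(m, 0.4) for m in m_s])   # vary mass only
tau_l = np.array([peak_torque(5.0, l) for l in l_s])   # vary length only

# Coefficient of variation of the peak torque as a simple sensitivity index
cv_m = tau_m.std() / tau_m.mean()
cv_l = tau_l.std() / tau_l.mean()
print(f"sensitivity to mass: {cv_m:.3f}, to length: {cv_l:.3f}")
```

For this trajectory the torque is linear in the mass, so its index reproduces the 10% mass uncertainty, while the gravity and inertia terms partially cancel in the length, giving a smaller index; a full 5-link analysis repeats this per joint and per uncertain parameter and then classifies the parameters by sensitivity.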

  1. Specification of advanced safety modeling requirements (Rev. 0).

    Energy Technology Data Exchange (ETDEWEB)

    Fanning, T. H.; Tautges, T. J.

    2008-06-30

    The U.S. Department of Energy's Global Nuclear Energy Partnership has led to renewed interest in liquid-metal-cooled fast reactors for the purpose of closing the nuclear fuel cycle and making more efficient use of future repository capacity. However, the U.S. has not designed or constructed a fast reactor in nearly 30 years. Accurate, high-fidelity, whole-plant dynamics safety simulations will play a crucial role by providing confidence that component and system designs will satisfy established design limits and safety margins under a wide variety of operational, design basis, and beyond design basis transient conditions. Current modeling capabilities for fast reactor safety analyses have resulted from several hundred person-years of code development effort supported by experimental validation. The broad spectrum of mechanistic and phenomenological models that have been developed represents an enormous amount of institutional knowledge that needs to be maintained. Complicating this, the existing code architectures for safety modeling evolved from the programming practices of the 1970s. This has led to monolithic applications with interdependent data models which require significant knowledge of the complexities of the entire code in order for each component to be maintained. In order to develop an advanced fast reactor safety modeling capability, the limitations of the existing code architecture must be overcome while preserving the capabilities that already exist. To accomplish this, a set of advanced safety modeling requirements is defined, based on modern programming practices, that focuses on modular development within a flexible coupling framework. An approach for integrating the existing capabilities of the SAS4A/SASSYS-1 fast reactor safety analysis code into the SHARP framework is provided in order to preserve existing capabilities while providing a smooth transition to advanced modeling capabilities. In doing this, the advanced fast reactor safety models

  2. Dynamic modeling and simulation of an induction motor with adaptive backstepping design of an input-output feedback linearization controller in series hybrid electric vehicle

    Directory of Open Access Journals (Sweden)

    Jalalifar Mehran

    2007-01-01

    Full Text Available In this paper, using an adaptive backstepping approach, an adaptive rotor flux observer which provides stator and rotor resistance estimation simultaneously for an induction motor used in a series hybrid electric vehicle is proposed. The controller of the induction motor (IM) is designed based on the input-output feedback linearization technique. Combining this controller with the adaptive backstepping observer makes the system robust against rotor and stator resistance uncertainties. In addition, the mechanical components of a hybrid electric vehicle are called from the Advanced Vehicle Simulator Software Library and then linked with the electric motor. Finally, a typical series hybrid electric vehicle is modeled and investigated. Various tests, such as acceleration, ramp traversal, and fuel consumption and emissions, are performed on the proposed model of a series hybrid vehicle. The computer simulation results confirm the validity and performance of the proposed IM control approach for use in a series hybrid electric vehicle.

  3. An Input and Output Analysis of the Quaternity-Dominating Energy Engineering Model from China’s Countryside

    Science.gov (United States)

    Xie, Xing Long; Xian Xue, Wei

    2017-12-01

    The aim of this study is to qualitatively and quantitatively explore an energy engineering model termed the quaternity-dominating pattern, emerging in North China's countryside. This study finds that methane produced in this model serves household activities such as cooking, reducing the coal or biomass consumption that would otherwise provoke air pollution, water loss and land erosion, and ultimately leading to a better ecological environment. Additionally, the project generates byproducts, biogas liquids and residues, which, as a category of fertilizer, can strengthen fertility preservation capacity, improve the chemical and physical quality of the land, and increase crop output and quality. This study also finds that the project could encourage social stability by efficiently allocating surplus rural labor during winter, and that successfully running it would raise the scientific and technological qualifications of rural citizens. Moreover, cost-profit analysis indicates this pattern can give one rural household access to a hygienic biogas energy resource with a yearly volume of 375 m³, generate annual net earnings of US$3458.82, and recoup the investment in about 2.73 years. Especially for poverty-stricken areas, this energy engineering project has high value and great significance, as it can lift impoverished areas out of poverty in both economics and energy. The paper concludes by pointing out practical proposals for launching and operating this energy engineering project.

  4. Variable Input Power Supply.

    Science.gov (United States)

    An electronic power supply using pulse width modulated (PWM) voltage regulation provides a regulated output for a wide range of input voltages. Thus...switch to change the level of voltage regulation and the turns ratio of the primary winding of the power supply output transformer, thereby obtaining increased tolerance to input voltage change. (Author)

  5. SSYST-2 input description

    International Nuclear Information System (INIS)

    Meyder, R.

    1980-11-01

    The code system SSYST-2 is designed to analyse the thermal and mechanical behaviour of a fuel rod during a LOCA. The report contains a short introduction to the SSYST structure, a complete input list for all modules and several tested input lists for a LOCA analysis. (orig.) [de

  6. MDS MIC Catalog Inputs

    Science.gov (United States)

    Johnson-Throop, Kathy A.; Vowell, C. W.; Smith, Byron; Darcy, Jeannette

    2006-01-01

    This viewgraph presentation reviews the inputs to the MDS Medical Information Communique (MIC) catalog. The purpose of the group is to provide input for updating the MDS MIC Catalog and to request that MMOP assign Action Item to other working groups and FSs to support the MITWG Process for developing MIC-DDs.

  7. A Utility-Based Model for Determining Functional Requirement Target Values (Model Penentuan Nilai Target Functional Requirement Berbasis Utilitas)

    Directory of Open Access Journals (Sweden)

    Cucuk Nur Rosyidi

    2012-01-01

    Full Text Available In a product design and development process, a designer faces the problem of deciding functional requirement (FR) target values. That decision is made under risk, since it is taken in the early design phase using incomplete information. A utility function can be used to reflect the decision maker's attitude towards risk in making such a decision. In this research, we develop a utility-based model to determine FR target values using a quadratic utility function and information from Quality Function Deployment (QFD). A pencil design is used as a numerical example, with a quadratic utility function for each FR. The model can be applied to balance customer and designer interests in determining FR target values.
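
A toy version of the approach, with invented QFD importance weights, ideal values and tolerances for a pencil-like example (the paper's actual numbers are not given in the abstract), might look like:

```python
def quad_utility(x, ideal, tol):
    """Quadratic utility: 1 at the ideal value, falling to 0 at ideal +/- tol."""
    return 1.0 - ((x - ideal) / tol) ** 2

# Hypothetical pencil example: two FRs with QFD importance weights,
# customer-preferred (ideal) values, and tolerances.
weights = {"hardness": 0.6, "diameter_mm": 0.4}
ideal = {"hardness": 2.0, "diameter_mm": 7.0}
tol = {"hardness": 1.0, "diameter_mm": 2.0}

# Candidate FR target vectors the designer can actually achieve
candidates = [(1.5, 7.5), (2.0, 8.5), (2.5, 7.0)]

def score(h, d):
    """Importance-weighted total utility of one candidate target vector."""
    return (weights["hardness"] * quad_utility(h, ideal["hardness"], tol["hardness"])
            + weights["diameter_mm"] * quad_utility(d, ideal["diameter_mm"], tol["diameter_mm"]))

scores = [score(h, d) for h, d in candidates]
best = candidates[scores.index(max(scores))]
print("best FR targets:", best)
```

The quadratic shape penalizes deviations from the customer's ideal increasingly strongly, which is how the decision maker's risk attitude enters the target-setting.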

  8. Evaluation of models for developing biological input for the design and location of water-intake structures

    Energy Technology Data Exchange (ETDEWEB)

    Simmons, M.A.; McKenzie, D.H.

    1981-12-01

    An approach for assessing multiple stimulus/response relations between fish and water intake structures is presented in this report. The approach stresses stimulus/response relations influencing fish and shellfish distribution and is made up of two methods. The first places emphasis on the spatial and temporal distributions of populations; information is presented in the form of a non-predictive model, which allows for organizing information and documenting review processes. The second encompasses functional relationships between environmental and biological stimuli and the responses of organisms. By using the two methods together, functional relationships can be evaluated to define the distribution of a fish or shellfish species. This information can then be used to resolve questions relating to impingement and entrainment.

  9. Requirements traceability in model-driven development: Applying model and transformation conformance

    NARCIS (Netherlands)

    Andrade Almeida, João; Iacob, Maria Eugenia; van Eck, Pascal

    The variety of design artifacts (models) produced in a model-driven design process results in an intricate relationship between requirements and the various models. This paper proposes a methodological framework that simplifies management of this relationship, which helps in assessing the quality of

  10. The advanced LIGO input optics.

    Science.gov (United States)

    Mueller, Chris L; Arain, Muzammil A; Ciani, Giacomo; DeRosa, Ryan T; Effler, Anamaria; Feldbaum, David; Frolov, Valery V; Fulda, Paul; Gleason, Joseph; Heintze, Matthew; Kawabe, Keita; King, Eleanor J; Kokeyama, Keiko; Korth, William Z; Martin, Rodica M; Mullavey, Adam; Peold, Jan; Quetschke, Volker; Reitze, David H; Tanner, David B; Vorvick, Cheryl; Williams, Luke F; Mueller, Guido

    2016-01-01

    The advanced LIGO gravitational wave detectors are nearing their design sensitivity and should begin taking meaningful astrophysical data in the fall of 2015. These resonant optical interferometers will have unprecedented sensitivity to the strains caused by passing gravitational waves. The input optics play a significant part in allowing these devices to reach such sensitivities. Residing between the pre-stabilized laser and the main interferometer, the input optics subsystem is tasked with preparing the laser beam for interferometry at the sub-attometer level while operating at continuous wave input power levels ranging from 100 mW to 150 W. These extreme operating conditions required every major component to be custom designed. These designs draw heavily on the experience and understanding gained during the operation of Initial LIGO and Enhanced LIGO. In this article, we report on how the components of the input optics were designed to meet their stringent requirements and present measurements showing how well they have lived up to their design.

  11. The advanced LIGO input optics

    Energy Technology Data Exchange (ETDEWEB)

    Mueller, Chris L., E-mail: cmueller@phys.ufl.edu; Arain, Muzammil A.; Ciani, Giacomo; Feldbaum, David; Fulda, Paul; Gleason, Joseph; Heintze, Matthew; Martin, Rodica M.; Reitze, David H.; Tanner, David B.; Williams, Luke F.; Mueller, Guido [University of Florida, Gainesville, Florida 32611 (United States); DeRosa, Ryan T.; Effler, Anamaria; Kokeyama, Keiko [Louisiana State University, Baton Rouge, Louisiana 70803 (United States); Frolov, Valery V.; Mullavey, Adam [LIGO Livingston Observatory, Livingston, Louisiana 70754 (United States); Kawabe, Keita; Vorvick, Cheryl [LIGO Hanford Observatory, Richland, Washington 99352 (United States); King, Eleanor J. [University of Adelaide, Adelaide, SA 5005 (Australia); and others

    2016-01-15

    The advanced LIGO gravitational wave detectors are nearing their design sensitivity and should begin taking meaningful astrophysical data in the fall of 2015. These resonant optical interferometers will have unprecedented sensitivity to the strains caused by passing gravitational waves. The input optics play a significant part in allowing these devices to reach such sensitivities. Residing between the pre-stabilized laser and the main interferometer, the input optics subsystem is tasked with preparing the laser beam for interferometry at the sub-attometer level while operating at continuous wave input power levels ranging from 100 mW to 150 W. These extreme operating conditions required every major component to be custom designed. These designs draw heavily on the experience and understanding gained during the operation of Initial LIGO and Enhanced LIGO. In this article, we report on how the components of the input optics were designed to meet their stringent requirements and present measurements showing how well they have lived up to their design.

  12. Three very high resolution optical images for land use mapping of a suburban catchment: input to distributed hydrological models

    Science.gov (United States)

    Jacqueminet, Christine; Kermadi, Saïda; Michel, Kristell; Jankowfsky, Sonja; Braud, Isabelle; Branger, Flora; Beal, David; Gagnage, Matthieu

    2010-05-01

    Keywords: land cover mapping, very high resolution, remote sensing processing techniques, object oriented approach, distributed hydrological model, peri-urban area. Urbanization and other modifications of land use affect the hydrological cycle of suburban catchments. In order to quantify these impacts, the AVuPUR project (Assessing the Vulnerability of Peri-Urban Rivers) is currently developing a distributed hydrological model that includes anthropogenic features. The case study is the Yzeron catchment (150 km²), located close to the city of Lyon, France. This catchment has experienced growing urbanization and a modification of traditional land use since the middle of the 20th century, resulting in an increase in flooding, water pollution and river bank erosion. This contribution discusses the potential of automated data processing techniques on three different VHR images, in order to produce appropriate and detailed land cover data for the models. Of particular interest is the identification of impermeable surfaces (buildings, roads, and parking places) and permeable surfaces (forest areas, agricultural fields, gardens, trees…) within the catchment, because their infiltration capacity and their impact on runoff generation are different. Three aerial and spatial images were acquired: (1) BD Ortho IGN aerial images, 0.50 m resolution, visible bands, May 5th, 2008; (2) QuickBird satellite image, 2.44 m resolution, visible and near-infrared bands, August 29th, 2008; (3) Spot satellite image, 2.50 m resolution, visible and near-infrared bands, September 22nd, 2008. From these images, we developed three image processing methods: (1) a pixel-based method associated with a segmentation using Matlab®, (2) a pixel-based method using ENVI®, (3) an object-based classification using Definiens®. We extracted six land cover types from the BD Ortho IGN (visible bands) and eight classes from the satellite images (visible and near infrared bands). The three classified images are

  13. The Neurobiological Basis of Cognition: Identification by Multi-Input, Multioutput Nonlinear Dynamic Modeling: A method is proposed for measuring and modeling human long-term memory formation by mathematical analysis and computer simulation of nerve-cell dynamics.

    Science.gov (United States)

    Berger, Theodore W; Song, Dong; Chan, Rosa H M; Marmarelis, Vasilis Z

    2010-03-04

    The successful development of neural prostheses requires an understanding of the neurobiological bases of cognitive processes, i.e., how the collective activity of populations of neurons results in a higher level process not predictable based on knowledge of the individual neurons and/or synapses alone. We have been studying and applying novel methods for representing nonlinear transformations of multiple spike train inputs (multiple time series of pulse train inputs) produced by synaptic and field interactions among multiple subclasses of neurons arrayed in multiple layers of incompletely connected units. We have been applying our methods to study of the hippocampus, a cortical brain structure that has been demonstrated, in humans and in animals, to perform the cognitive function of encoding new long-term (declarative) memories. Without their hippocampi, animals and humans retain a short-term memory (memory lasting approximately 1 min), and long-term memory for information learned prior to loss of hippocampal function. Results of more than 20 years of studies have demonstrated that both individual hippocampal neurons, and populations of hippocampal cells, e.g., the neurons comprising one of the three principal subsystems of the hippocampus, induce strong, higher order, nonlinear transformations of hippocampal inputs into hippocampal outputs. For one synaptic input or for a population of synchronously active synaptic inputs, such a transformation is represented by a sequence of action potential inputs being changed into a different sequence of action potential outputs. In other words, an incoming temporal pattern is transformed into a different, outgoing temporal pattern. For multiple, asynchronous synaptic inputs, such a transformation is represented by a spatiotemporal pattern of action potential inputs being changed into a different spatiotemporal pattern of action potential outputs. 
Our primary thesis is that the encoding of short-term memories into new, long
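
The kind of nonlinear spike-train transformation described here is commonly represented with Volterra-type kernel models; a minimal single-input sketch (the kernels, memory length and firing threshold are invented for illustration, not estimates from hippocampal data) is:

```python
import numpy as np

rng = np.random.default_rng(3)
x = (rng.random(200) < 0.1).astype(float)     # input spike train, ~10% rate

# Hypothetical first- and second-order Volterra kernels with a 5-bin memory
M = 5
k1 = np.exp(-np.arange(M) / 2.0)              # exponentially decaying linear kernel
k2 = -0.2 * np.outer(k1, k1)                  # saturating second-order interaction

u = np.zeros_like(x)
for t in range(len(x)):
    past = x[max(0, t - M + 1): t + 1][::-1]  # x(t), x(t-1), ... within the memory
    past = np.pad(past, (0, M - len(past)))
    u[t] = k1 @ past + past @ k2 @ past       # first- plus second-order response

y = (u > 0.9).astype(float)                   # threshold -> output spike train
print(f"input spikes: {int(x.sum())}, output spikes: {int(y.sum())}")
```

With these toy kernels a single isolated spike drives the response only to 0.8 and stays below threshold, while two spikes in adjacent bins exceed 0.9 and fire, so an incoming temporal pattern becomes a different outgoing one; the MIMO models in the article generalize this to many inputs and outputs with kernels estimated from recorded data.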

  14. Identifying weaknesses in undergraduate programs within the context input process product model framework in view of faculty and library staff in 2014

    Directory of Open Access Journals (Sweden)

    Narges Neyazi

    2016-06-01

    Full Text Available Purpose: The objective of this research is to find weaknesses of undergraduate programs in terms of personnel, finance, organizational management and facilities, in the view of faculty and library staff, and to determine factors that may facilitate program quality improvement. Methods: This is a descriptive analytical survey and, in terms of purpose, an applied evaluation study, in which undergraduate departments of selected faculties (Public Health, Nursing and Midwifery, Allied Medical Sciences, and Rehabilitation) at Tehran University of Medical Sciences (TUMS) were surveyed using the context input process product model in 2014. The statistical population consisted of three subgroups: department heads (n=10), faculty members (n=61), and library staff (n=10), with a total of 81 people. Data were collected through three researcher-made questionnaires based on a Likert scale, and were then analyzed using descriptive and inferential statistics. Results: Results showed a desirable or relatively desirable situation for the context, input, process, and product factors, except for administration and finance, and for research and educational spaces and equipment, which were in an undesirable situation. Conclusion: Based on the results, the researchers highlighted weaknesses in the undergraduate programs of TUMS in terms of research and educational spaces and facilities, the educational curriculum, and administration and finance, and recommended steps regarding finance, organizational management and communication with graduates in order to improve the quality of this system.

  15. Automatic individual arterial input functions calculated from PCA outperform manual and population-averaged approaches for the pharmacokinetic modeling of DCE-MR images.

    Science.gov (United States)

    Sanz-Requena, Roberto; Prats-Montalbán, José Manuel; Martí-Bonmatí, Luis; Alberich-Bayarri, Ángel; García-Martí, Gracián; Pérez, Rosario; Ferrer, Alberto

    2015-08-01

    To introduce a segmentation method to calculate an automatic arterial input function (AIF) based on principal component analysis (PCA) of dynamic contrast enhanced MR (DCE-MR) imaging and compare it with individual manually selected and population-averaged AIFs using calculated pharmacokinetic parameters. The study included 65 individuals with prostate examinations (27 tumors and 38 controls). Manual AIFs were individually extracted and also averaged to obtain a population AIF. Automatic AIFs were individually obtained by applying PCA to volumetric DCE-MR imaging data and finding the highest correlation of the PCs with a reference AIF. Variability was assessed using coefficients of variation and repeated measures tests. The different AIFs were used as inputs to the pharmacokinetic model and correlation coefficients, Bland-Altman plots and analysis of variance tests were obtained to compare the results. Automatic PCA-based AIFs were successfully extracted in all cases. The manual and PCA-based AIFs showed good correlation (r between pharmacokinetic parameters ranging from 0.74 to 0.95), with differences below the manual individual variability (RMSCV up to 27.3%). The population-averaged AIF showed larger differences (r from 0.30 to 0.61). The automatic PCA-based approach minimizes the variability associated to obtaining individual volume-based AIFs in DCE-MR studies of the prostate. © 2014 Wiley Periodicals, Inc.
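
The selection step, running PCA on the dynamic volume and keeping the component most correlated with a reference AIF, can be sketched on synthetic data (the curve shapes, mixing weights and noise level are invented; the paper works on real prostate DCE-MR volumes):

```python
import numpy as np

rng = np.random.default_rng(1)
t = np.linspace(0, 60, 60)                       # s, one frame per second

# Reference (population) AIF: gamma-variate bolus shape
ref_aif = (t / 6.0) ** 2 * np.exp(-t / 6.0)
tissue = 1.0 - np.exp(-t / 25.0)                 # slower tissue enhancement

# Synthetic DCE data: each voxel mixes arterial and tissue signal plus noise
n_vox = 500
w = rng.uniform(0, 1, n_vox)
X = np.outer(w, ref_aif) + np.outer(1 - w, tissue)
X += rng.normal(0, 0.02, X.shape)

# PCA via SVD on mean-centred voxel time curves
Xc = X - X.mean(axis=0)
_, _, Vt = np.linalg.svd(Xc, full_matrices=False)
pcs = Vt[:5]                                     # first 5 temporal components

# Keep the component most correlated (in absolute value) with the reference AIF
corrs = [abs(np.corrcoef(pc, ref_aif)[0, 1]) for pc in pcs]
best = pcs[int(np.argmax(corrs))]
print(f"best |r| = {max(corrs):.3f}")
```

The selected component is then used (after rescaling) as the individual AIF, avoiding manual arterial ROI placement and its inter-observer variability.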

  16. Analyzing the Effects of the Iranian Energy Subsidy Reform Plan on Short- Run Marginal Generation Cost of Electricity Using Extended Input-Output Price Model

    Directory of Open Access Journals (Sweden)

    Zohreh Salimian

    2012-01-01

    Full Text Available Subsidizing energy in Iran has imposed high costs on the country's economy. Thus revising energy prices, on the basis of a subsidy reform plan, is a vital remedy to boost the economy. While the direct consequence of cutting subsidies on electricity generation costs can be determined in a simple way, identifying indirect effects, which reflect higher costs for input factors such as labor, is a challenging problem. In this paper, variables such as compensation of employees and private consumption are endogenized by using an extended Input-Output (I-O) price model to evaluate the direct and indirect effects of electricity and fuel price increases on economic subsectors. The determination of the short-run marginal generation cost of electricity using the I-O technique, taking the influence of the Iranian targeted subsidy plan into account, is the main goal of this paper. The marginal cost of electricity, in various scenarios of energy price adjustment, is estimated for three conventional categories of thermal power plants. Our results show that raising the price of energy leads to an increase in electricity production costs. Accordingly, production costs will exceed 1000 Rials per kWh by 2014, as predicted at the beginning of the reform plan by electricity suppliers.
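
The cost-push mechanism behind this approach can be sketched with the basic Leontief price model on an invented 3-sector table (the paper's extended model additionally endogenizes wages and private consumption, which this sketch omits):

```python
import numpy as np

# Toy 3-sector economy: fuels, electricity, rest of economy.
# A[i, j]: inputs from sector i required per unit of output in sector j.
A = np.array([[0.05, 0.30, 0.04],    # fuel inputs
              [0.02, 0.05, 0.06],    # electricity inputs
              [0.10, 0.15, 0.20]])   # other inputs
v = np.array([0.60, 0.40, 0.55])     # value added per unit of output

# Leontief price model: p = (I - A^T)^-1 v  (cost-based baseline prices)
p_base = np.linalg.solve(np.eye(3) - A.T, v)

# Subsidy removal: the exogenous cost of fuel extraction triples,
# which enters the model as higher value added in the fuel sector.
v_shock = v.copy()
v_shock[0] *= 3.0
p_shock = np.linalg.solve(np.eye(3) - A.T, v_shock)

rise = (p_shock / p_base - 1.0) * 100.0
print("price rise by sector (%):", np.round(rise, 1))
```

Electricity prices rise strongly because of the large fuel input coefficient, and the rest of the economy rises less; that indirect propagation through the input structure is what the paper quantifies for the short-run marginal generation cost.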

  17. A GENERALIZATION OF TRADITIONAL KANO MODEL FOR CUSTOMER REQUIREMENTS ANALYSIS

    Directory of Open Access Journals (Sweden)

    Renáta Turisová

    2015-07-01

    Full Text Available Purpose: The theory of attractiveness determines the relationship between the technically achieved and the customer-perceived quality of product attributes. The most frequently used approach in the theory of attractiveness is the implementation of Kano's model. There exist many generalizations of that model which take into consideration various aspects and approaches focused on understanding customer preferences and identifying the customer's priorities for a selling product. The aim of this article is to outline another possible generalization of Kano's model. Methodology/Approach: The traditional Kano's model captures the nonlinear relationship between the achieved attributes of quality and customer requirements. The individual attributes of quality are divided into three main categories: must-be, one-dimensional and attractive quality, and into two side categories: indifferent and reverse quality. A well-selling product has to contain the must-be attributes. It should contain as many one-dimensional attributes as possible. If there are also supplementary attractive attributes, the attractiveness of the entire product, from the viewpoint of the customer, rises sharply and nonlinearly, which has a direct positive impact on the decision of a potential customer when purchasing the product. In this article, we show that the inclusion of individual quality attributes of a product in the mentioned categories depends, among other things, also on the life-cycle costs of the product, or on its market price. Findings: In practice, we often encounter the inclusion of products into different price categories: lower, middle and upper class. For certain types of products the category is either directly declared by the producer (especially in the automotive industry), or is determined by the customer through an assessment of available market prices. Different customer expectations can be assigned to each of those product groups.

  18. Modeling the minimum enzymatic requirements for optimal cellulose conversion

    International Nuclear Information System (INIS)

    Den Haan, R; Van Zyl, W H; Van Zyl, J M; Harms, T M

    2013-01-01

    Hydrolysis of cellulose is achieved by the synergistic action of endoglucanases, exoglucanases and β-glucosidases. Most cellulolytic microorganisms produce a varied array of these enzymes and the relative roles of the components are not easily defined or quantified. In this study we have used partially purified cellulases produced heterologously in the yeast Saccharomyces cerevisiae to increase our understanding of the roles of some of these components. CBH1 (Cel7), CBH2 (Cel6) and EG2 (Cel5) were separately produced in recombinant yeast strains, allowing their isolation free of any contaminating cellulolytic activity. Binary and ternary mixtures of the enzymes at loadings ranging between 3 and 100 mg g^-1 Avicel allowed us to illustrate the relative roles of the enzymes and their levels of synergy. A mathematical model was created to simulate the interactions of these enzymes on crystalline cellulose, under both isolated and synergistic conditions. Laboratory results from the various mixtures at a range of loadings of recombinant enzymes allowed refinement of the mathematical model. The model can further be used to predict the optimal synergistic mixes of the enzymes. This information can subsequently be applied to help determine the minimum protein requirement for complete hydrolysis of cellulose. Such knowledge will be greatly informative for the design of better enzymatic cocktails or processing organisms for the conversion of cellulosic biomass to commodity products. (letter)
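The level of synergy mentioned above is conventionally expressed as a degree of synergy: the hydrolysis yield of an enzyme mixture divided by the sum of the yields of its components acting alone. A minimal sketch with hypothetical Avicel conversion figures (the study's actual yields are not reproduced here):

```python
# Degree of synergy (DS): yield of the mixture relative to the sum of the
# yields of the individual enzymes at equivalent loadings. DS > 1 means
# the enzymes act synergistically.
def degree_of_synergy(yield_mixture, yields_alone):
    return yield_mixture / sum(yields_alone)

# hypothetical Avicel conversion yields (%) after 48 h
alone = {"CBH1": 4.0, "CBH2": 3.0, "EG2": 1.5}
mixture_yield = 14.0              # hypothetical ternary-mixture yield
ds = degree_of_synergy(mixture_yield, alone.values())
print(round(ds, 2))
```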

  19. Comparing urban solid waste recycling from the viewpoint of urban metabolism based on physical input-output model: A case of Suzhou in China.

    Science.gov (United States)

    Liang, Sai; Zhang, Tianzhu

    2012-01-01

    Investigating the impacts of urban solid waste recycling on urban metabolism contributes to sustainable urban solid waste management and urban sustainability. Using a physical input-output model and scenario analysis, the urban metabolism of Suzhou in 2015 is predicted and the impacts of four categories of solid waste recycling on urban metabolism are illustrated: scrap tire recycling, food waste recycling, fly ash recycling and sludge recycling. Sludge recycling has positive effects on reducing all material flows; thus, sludge recycling for biogas is regarded as an acceptable method. Moreover, the technical levels of scrap tire recycling and food waste recycling should be improved so that they produce positive effects on reducing more material flows. Fly ash recycling for cement production has negative effects on reducing all material flows except solid wastes; thus, other fly ash utilization methods should be exploited. In addition, the utilization and treatment of secondary wastes from food waste recycling and sludge recycling deserve attention. Copyright © 2011 Elsevier Ltd. All rights reserved.
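A physical input-output model of this kind works in material units rather than money: total physical output x satisfies x = Ax + y, so x = (I - A)⁻¹y, and a recycling scenario is expressed by lowering the virgin-material input coefficients. A toy two-sector sketch (the table and the 25% recycling share are invented for illustration, not Suzhou data):

```python
import numpy as np

# Toy physical IO table (tonnes): sectors = [materials, manufacturing].
# A[i, j] = tonnes of sector-i input per tonne of sector-j output.
A = np.array([[0.1, 0.4],
              [0.0, 0.1]])
y = np.array([10.0, 100.0])       # final demand, tonnes

def total_output(A, y):
    """Leontief quantity model: x = (I - A)^-1 y."""
    return np.linalg.solve(np.eye(len(y)) - A, y)

x_base = total_output(A, y)
# Scenario: recycling supplies 25% of manufacturing's material input,
# cutting the virgin-material coefficient accordingly.
A_rec = A.copy()
A_rec[0, 1] *= 0.75
x_rec = total_output(A_rec, y)
print(np.round(x_base, 1), np.round(x_rec, 1))
```

The scenario leaves manufacturing output unchanged but shrinks the required throughput of the materials sector — the kind of material-flow reduction the paper compares across its four recycling options.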

  20. Estimating direct and indirect rebound effects by supply-driven input-output model: A case study of Taiwan's industry

    International Nuclear Information System (INIS)

    Wu, Kuei-Yen; Wu, Jung-Hua; Huang, Yun-Hsun; Fu, Szu-Chi; Chen, Chia-Yon

    2016-01-01

    Most existing literature focuses on the direct rebound effect on the demand side, for consumers. This study analyses direct and indirect rebound effects in Taiwan's industry from the perspective of producers. Most studies taking the producers' viewpoint, however, overlook inter-industry linkages. This study applies a supply-driven input-output model to quantify the magnitude of rebound effects by explicitly considering inter-industry linkages. Empirical results showed that total rebound effects for most of Taiwan's sectors were less than 10% in 2011. A comparison among the sectors shows that sectors with lower energy efficiency had higher direct rebound effects, while sectors with higher forward linkages generated higher indirect rebound effects. Taking the Mining sector (S3) as an example: it is an upstream supplier with high forward linkages, and it showed high indirect rebound effects derived from the accumulation of additional energy consumption by its downstream producers. The findings also showed that in almost all sectors, indirect rebound effects were higher than direct rebound effects. In other words, if indirect rebound effects are neglected, the total rebound effects will be underestimated, and the energy-saving potential may therefore be overestimated. - Highlights: • This study quantifies rebound effects with a supply-driven input-output model. • For most of Taiwan's sectors, total rebound magnitudes were less than 10% in 2011. • Direct rebound effects were inversely correlated with energy efficiency. • Indirect rebound effects were positively correlated with industrial forward linkages. • Indirect rebound effects were generally higher than direct rebound effects.
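The supply-driven (Ghosh) model the paper uses propagates an input change downstream through allocation coefficients, and forward linkages are read off the row sums of the Ghosh inverse. A small sketch with an invented allocation matrix (not Taiwan's table) shows the mechanics:

```python
import numpy as np

# Ghosh (supply-driven) model: B[i, j] = share of sector i's output used
# by sector j. An extra primary input dv to a sector propagates to its
# downstream buyers: dx = dv @ inverse(I - B).
B = np.array([[0.10, 0.30, 0.20],   # sector 0 sells heavily downstream
              [0.05, 0.10, 0.25],
              [0.05, 0.05, 0.10]])
G = np.linalg.inv(np.eye(3) - B)    # Ghosh inverse

forward_linkage = G.sum(axis=1)     # row sums: strength of downstream push
dv = np.array([1.0, 0.0, 0.0])      # unit input shock enters via sector 0
dx = dv @ G                         # output stimulated in every sector
print(np.round(forward_linkage, 2), np.round(dx, 2))
```

Sector 0's high allocation shares give it the largest forward linkage, so a shock entering there accumulates the most downstream output — mirroring the paper's finding that high-forward-linkage upstream sectors (like Mining) carry the largest indirect rebound effects.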

  1. Early neonatal loss of inhibitory synaptic input to the spinal motor neurons confers spina bifida-like leg dysfunction in a chicken model

    Directory of Open Access Journals (Sweden)

    Md. Sakirul Islam Khan

    2017-12-01

    Full Text Available Spina bifida aperta (SBA, one of the most common congenital malformations, causes lifelong neurological complications, particularly in terms of motor dysfunction. Fetuses with SBA exhibit voluntary leg movements in utero and during early neonatal life, but these disappear within the first few weeks after birth. However, the pathophysiological sequence underlying such motor dysfunction remains unclear. Additionally, because important insights have yet to be obtained from human cases, an appropriate animal model is essential. Here, we investigated the neuropathological mechanisms of progression of SBA-like motor dysfunctions in a neural tube surgery-induced chicken model of SBA at different pathogenesis points ranging from embryonic to posthatch ages. We found that chicks with SBA-like features lose voluntary leg movements and subsequently exhibit lower-limb paralysis within the first 2 weeks after hatching, coinciding with the synaptic change-induced disruption of spinal motor networks at the site of the SBA lesion in the lumbosacral region. Such synaptic changes reduced the ratio of inhibitory-to-excitatory inputs to motor neurons and were associated with a drastic loss of γ-aminobutyric acid (GABAergic inputs and upregulation of the cholinergic activities of motor neurons. Furthermore, most of the neurons in ventral horns, which appeared to be suffering from excitotoxicity during the early postnatal days, underwent apoptosis. However, the triggers of cellular abnormalization and neurodegenerative signaling were evident in the middle- to late-gestational stages, probably attributable to the amniotic fluid-induced in ovo milieu. In conclusion, we found that early neonatal loss of neurons in the ventral horn of exposed spinal cord affords novel insights into the pathophysiology of SBA-like leg dysfunction.

  2. The Model Intercomparison Project on the Climatic Response to Volcanic Forcing (VolMIP): Experimental Design and Forcing Input Data for CMIP6

    Science.gov (United States)

    Zanchettin, Davide; Khodri, Myriam; Timmreck, Claudia; Toohey, Matthew; Schmidt, Anja; Gerber, Edwin P.; Hegerl, Gabriele; Robock, Alan; Pausata, Francesco; Ball, William T.; hide

    2016-01-01

    The enhancement of the stratospheric aerosol layer by volcanic eruptions induces a complex set of responses causing global and regional climate effects on a broad range of timescales. Uncertainties exist regarding the climatic response to strong volcanic forcing identified in coupled climate simulations that contributed to the fifth phase of the Coupled Model Intercomparison Project (CMIP5). In order to better understand the sources of these model diversities, the Model Intercomparison Project on the climatic response to Volcanic forcing (VolMIP) has defined a coordinated set of idealized volcanic perturbation experiments to be carried out in alignment with the CMIP6 protocol. VolMIP provides a common stratospheric aerosol data set for each experiment to minimize differences in the applied volcanic forcing. It defines a set of initial conditions to assess how internal climate variability contributes to determining the response. VolMIP will assess to what extent volcanically forced responses of the coupled ocean-atmosphere system are robustly simulated by state-of-the-art coupled climate models and identify the causes that limit robust simulated behavior, especially differences in the treatment of physical processes. This paper illustrates the design of the idealized volcanic perturbation experiments in the VolMIP protocol and describes the common aerosol forcing input data sets to be used.

  3. An updated Quantitative Water Air Sediment Interaction (QWASI) model for evaluating chemical fate and input parameter sensitivities in aquatic systems: application to D5 (decamethylcyclopentasiloxane) and PCB-180 in two lakes.

    Science.gov (United States)

    Mackay, Donald; Hughes, Lauren; Powell, David E; Kim, Jaeshin

    2014-09-01

    The QWASI fugacity mass balance model has been widely used since 1983 for both scientific and regulatory purposes to estimate the concentrations of organic chemicals in water and sediment, given an assumed rate of chemical emission, advective inflow in water, or deposition from the atmosphere. It has become apparent that an updated version is required, especially to incorporate improved methods of obtaining input parameters such as partition coefficients. Accordingly, the model has been revised and it is now available in spreadsheet format. Changes to the model are described and the new version is applied to two chemicals, D5 (decamethylcyclopentasiloxane) and PCB-180, in two lakes, Lake Pepin (MN, USA) and Lake Ontario, showing the model's capability of illustrating both chemical-to-chemical and lake-to-lake differences. Since there are now increased regulatory demands for rigorous sensitivity and uncertainty analyses, these aspects are discussed and two approaches are illustrated. It is concluded that the new QWASI water quality model can be of value for both evaluative and simulation purposes, thus providing a tool for obtaining an improved understanding of chemical mass balances in lakes, as a contribution to the assessment of fate and exposure and as a step towards the assessment of risk. Copyright © 2014 The Authors. Published by Elsevier Ltd. All rights reserved.
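The core of a fugacity mass balance like QWASI's is linear: at steady state, the emission rate equals the sum of loss rates D·f over all removal pathways, so the water-column fugacity is f = E / ΣD and concentration follows as C = Z·f. A minimal one-compartment sketch with invented D values (the real QWASI model has coupled water and sediment compartments and many more processes):

```python
# Steady-state fugacity balance for a lake water column (QWASI-style).
# All D values (mol Pa^-1 h^-1), the emission rate, and Z are illustrative.
D = {
    "outflow":        1.0e3,
    "volatilization": 4.0e3,
    "reaction":       5.0e2,
    "to_sediment":    1.5e3,
}
E = 10.0            # emission rate, mol/h
Z_water = 1.0e-2    # fugacity capacity of water, mol m^-3 Pa^-1

f_water = E / sum(D.values())       # Pa; steady state: inputs = sum(D_i) * f
C_water = Z_water * f_water         # mol m^-3
losses = {k: d * f_water for k, d in D.items()}   # mol/h by pathway
print(f"{f_water:.2e} Pa, {C_water:.2e} mol/m3")
```

The D-value formulation makes sensitivity analysis transparent: each pathway's share of total loss is just D_i / ΣD, which is one reason the framework suits the regulatory uncertainty analyses the paper discusses.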

  4. Serial Input Output

    Energy Technology Data Exchange (ETDEWEB)

    Waite, Anthony; /SLAC

    2011-09-07

    Serial Input/Output (SIO) is designed to be a long term storage format of a sophistication somewhere between simple ASCII files and the techniques provided by inter alia Objectivity and Root. The former tend to be low density, information lossy (floating point numbers lose precision) and inflexible. The latter require abstract descriptions of the data with all that that implies in terms of extra complexity. The basic building blocks of SIO are streams, records and blocks. Streams provide the connections between the program and files. The user can define an arbitrary list of streams as required. A given stream must be opened for either reading or writing. SIO does not support read/write streams. If a stream is closed during the execution of a program, it can be reopened in either read or write mode to the same or a different file. Records represent a coherent grouping of data. Records consist of a collection of blocks (see next paragraph). The user can define a variety of records (headers, events, error logs, etc.) and request that any of them be written to any stream. When SIO reads a file, it first decodes the record name and if that record has been defined and unpacking has been requested for it, SIO proceeds to unpack the blocks. Blocks are user provided objects which do the real work of reading/writing the data. The user is responsible for writing the code for these blocks and for identifying these blocks to SIO at run time. To write a collection of blocks, the user must first connect them to a record. The record can then be written to a stream as described above. Note that the same block can be connected to many different records. When SIO reads a record, it scans through the blocks written and calls the corresponding block object (if it has been defined) to decode it. Undefined blocks are skipped. Each of these categories (streams, records and blocks) have some characteristics in common. 
Every stream, record and block has a name with the condition that each
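The stream/record/block layering described above can be sketched in miniature: user-defined blocks do the actual packing, records group named blocks, streams carry whole records, and unknown blocks are skipped on read. This Python analogue is purely illustrative of the layering, not the actual SLAC SIO format or API:

```python
import io
import json
import struct

# Blocks are user objects that do the real (un)packing work.
class Block:
    name = "base"
    def pack(self):
        raise NotImplementedError            # -> bytes
    def unpack(self, data):
        raise NotImplementedError

class EventHeader(Block):
    name = "evt_header"
    def __init__(self, run=0, event=0):
        self.run, self.event = run, event
    def pack(self):
        return struct.pack("<ii", self.run, self.event)
    def unpack(self, data):
        self.run, self.event = struct.unpack("<ii", data)

def write_record(stream, record_name, blocks):
    """A record is a named collection of blocks written to a stream."""
    payload = [(b.name, b.pack().hex()) for b in blocks]
    stream.write((json.dumps([record_name, payload]) + "\n").encode())

def read_record(stream, known_blocks):
    """Decode a record; blocks not registered in known_blocks are skipped."""
    record_name, payload = json.loads(stream.readline())
    blocks = []
    for name, hexdata in payload:
        if name in known_blocks:
            b = known_blocks[name]()
            b.unpack(bytes.fromhex(hexdata))
            blocks.append(b)
    return record_name, blocks

buf = io.BytesIO()                            # stands in for a write-mode stream
write_record(buf, "event", [EventHeader(run=7, event=42)])
buf.seek(0)                                   # reopen the "stream" for reading
name, blocks = read_record(buf, {"evt_header": EventHeader})
print(name, blocks[0].run, blocks[0].event)
```

As in the description, the same block class can be attached to many record types, and a reader that has not registered a block's name simply skips it.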

  5. A Model for Forecasting Enlisted Student IA Billet Requirements

    Science.gov (United States)

    2016-03-01

    represents the time that sailors spend in training while in a student status. Sailors in a student status are not part of the Navy’s distributable... students in training and the average total time they spend in training. Because most students in pre-fleet training are new accessions, the main inputs...for determining the number of students are the monthly accession goals for each rating. The total time students spend at school consists of time

  6. Instantaneous-to-daily GPP upscaling schemes based on a coupled photosynthesis-stomatal conductance model: correcting the overestimation of GPP by directly using daily average meteorological inputs.

    Science.gov (United States)

    Wang, Fumin; Gonsamo, Alemu; Chen, Jing M; Black, T Andrew; Zhou, Bin

    2014-11-01

    Daily canopy photosynthesis is usually temporally upscaled from instantaneous (i.e., seconds) photosynthesis rate. The nonlinear response of photosynthesis to meteorological variables makes the temporal scaling a significant challenge. In this study, two temporal upscaling schemes of daily photosynthesis, the integrated daily model (IDM) and the segmented daily model (SDM), are presented by considering the diurnal variations of meteorological variables based on a coupled photosynthesis-stomatal conductance model. The two models, as well as a simple average daily model (SADM) with daily average meteorological inputs, were validated using the tower-derived gross primary production (GPP) to assess their abilities in simulating daily photosynthesis. The results showed IDM closely followed the seasonal trend of the tower-derived GPP with an average RMSE of 1.63 g C m(-2) day(-1), and an average Nash-Sutcliffe model efficiency coefficient (E) of 0.87. SDM performed similarly to IDM in GPP simulation but decreased the computation time by >66%. SADM overestimated daily GPP by about 15% during the growing season compared to IDM. Both IDM and SDM greatly decreased the overestimation by SADM, and improved the simulation of daily GPP by reducing the RMSE by 34 and 30%, respectively. The results indicated that IDM and SDM are useful temporal upscaling approaches, and both are superior to SADM in daily GPP simulation because they take into account the diurnally varying responses of photosynthesis to meteorological variables. SDM is computationally more efficient, and therefore more suitable for long-term and large-scale GPP simulations.
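The overestimation by SADM follows from Jensen's inequality: photosynthesis responds concavely to drivers such as light, so evaluating the model at the daily-mean input exceeds the mean of the instantaneous fluxes. A minimal demonstration with an assumed saturating light response and a synthetic diurnal PAR cycle (illustrative parameters, not the paper's coupled model):

```python
import numpy as np

# Saturating light response P(I) = Pmax * I / (I + k) is concave in I, so
# model(mean input) >= mean(model(input)) -- the SADM-vs-IDM bias.
Pmax, k = 30.0, 400.0
def photosynthesis(par):
    return Pmax * par / (par + k)

hours = np.arange(0, 24, 0.5)                # half-hourly time steps
par = np.maximum(0.0, 1500.0 * np.sin(np.pi * (hours - 6) / 12))  # day 6-18 h

gpp_integrated = photosynthesis(par).mean()  # "IDM": integrate, then average
gpp_daily_avg  = photosynthesis(par.mean())  # "SADM": average the input first
print(round(gpp_integrated, 2), round(gpp_daily_avg, 2))
```

The daily-average-input estimate comes out well above the integrated one, which is the direction and rough character of the ~15% growing-season overestimation reported for SADM.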

  7. Numerical Modeling of the Effects of Nutrient-rich Coastal-water Input on the Phytoplankton in the Gulf of California

    Science.gov (United States)

    Bermudez, A.; Rivas, D.

    2015-12-01

    Phytoplankton bloom dynamics depend on the interactions of favorable physical, chemical, and biotic conditions, particularly on the available nutrients that enhance phytoplankton growth, such as nitrogen. Coastal and estuarine environments are heavily influenced by exogenous sources of nitrogen; the anthropogenic inputs include urban and rural wastewater from agricultural activities (i.e., fertilizers and animal waste). In response, new production is often enhanced, leading to eutrophication and phytoplankton blooms, including harmful taxa. These events have become more frequent, and with them the interest in evaluating their effects on marine ecosystems and their impact on human health. In the Gulf of California, harmful algal blooms (HABs) have affected aquaculture, fisheries, and even tourism, so it is important to generate information about the biological and physical factors that can influence their appearance. A numerical model is a tool that may bring key information about the origin and distribution of phytoplankton blooms. Herein the analysis is based on a three-dimensional hydrodynamical numerical model coupled to a Nitrogen-Phytoplankton-Zooplankton-Detritus (NPZD) model. Several numerical simulations using different forcings and scenarios are carried out in order to evaluate the processes that influence phytoplankton growth. These numerical results are compared to available observations. Thus, the main environmental factors triggering the generation of HABs can be identified.
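The NPZD formulation couples four state variables through uptake, grazing, mortality and remineralization terms. A zero-dimensional sketch with invented rate constants (the study's model is three-dimensional and coupled to a hydrodynamic model) shows the basic mechanism: a nutrient pulse, standing in for a nutrient-rich coastal input, triggers a larger phytoplankton bloom.

```python
# Zero-dimensional NPZD sketch; all rate constants are illustrative.
def npzd_step(N, P, Z, D, dt=0.05):
    uptake  = 1.0 * N / (N + 0.5) * P    # Michaelis-Menten nutrient uptake
    grazing = 0.6 * P / (P + 0.8) * Z    # saturating grazing response
    p_mort  = 0.05 * P
    z_mort  = 0.05 * Z
    remin   = 0.1 * D                    # detritus remineralization
    dN = -uptake + remin
    dP = uptake - grazing - p_mort
    dZ = 0.3 * grazing - z_mort          # 30% assimilation efficiency
    dD = 0.7 * grazing + p_mort + z_mort - remin
    return N + dt * dN, P + dt * dP, Z + dt * dZ, D + dt * dD

def peak_phytoplankton(n0, steps=2000):
    state = (n0, 0.1, 0.1, 0.0)          # N, P, Z, D
    peak_p = state[1]
    for _ in range(steps):
        state = npzd_step(*state)
        peak_p = max(peak_p, state[1])
    return peak_p

peak_no_pulse = peak_phytoplankton(0.1)  # oligotrophic background
peak_pulse = peak_phytoplankton(3.1)     # plus a nutrient-rich coastal input
print(round(peak_no_pulse, 3), round(peak_pulse, 3))
```

The four tendency terms sum to zero, so total nitrogen is conserved; the added nutrients simply cycle into a much higher phytoplankton peak, the bloom response of interest.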

  8. Automation of Geometry Input for Building Code Compliance Check

    DEFF Research Database (Denmark)

    Petrova, Ekaterina Aleksandrova; Johansen, Peter Lind; Jensen, Rasmus Lund

    2017-01-01

    Documentation of compliance with the energy performance regulations at the end of the detailed design phase is mandatory for building owners in Denmark. Therefore, besides multidisciplinary input, the building design process requires various iterative analyses, so that the optimal solutions can … That has left the industry in constant pursuit of possibilities for integration of the tool within the Building Information Modelling environment, so that the potential provided by the latter can be harvested and the processes can be optimized. This paper presents a solution for automated data extraction from building geometry created in Autodesk Revit and its translation to input for compliance check analysis.

  9. An Input Shaping Method Based on System Output

    Directory of Open Access Journals (Sweden)

    Zhiqiang ZHU

    2014-06-01

    Full Text Available In this paper, an input shaping method is proposed. This method only requires