WorldWideScience

Sample records for model input requirements

  1. User requirements for hydrological models with remote sensing input

    Energy Technology Data Exchange (ETDEWEB)

    Kolberg, Sjur

    1997-10-01

    Monitoring the seasonal snow cover is important for several purposes. This report describes user requirements for hydrological models utilizing remotely sensed snow data. The information is mainly provided by operational users through a questionnaire. The report is primarily intended as a basis for other work packages within the Snow Tools project, which aim at developing new remote sensing products for use in hydrological models. The HBV model is the only model mentioned by users in the questionnaire. It is widely used in Northern Scandinavia and Finland, in the fields of hydroelectric power production, flood forecasting and general monitoring of water resources. The current implementation of HBV is not based on remotely sensed data, although even the present implementation may benefit from it. Moreover, several improvements can be made to hydrological models to incorporate remotely sensed snow data; among these, the most important are a distributed model version, a more physical approach to the snow depletion curve, and a way to combine data from several sources. 1 ref.

  2. Model based optimization of EMC input filters

    Energy Technology Data Exchange (ETDEWEB)

    Raggl, K.; Kolar, J. W. [Swiss Federal Institute of Technology, Power Electronic Systems Laboratory, Zuerich (Switzerland)]; Nussbaumer, T. [Levitronix GmbH, Zuerich (Switzerland)]

    2008-07-01

    Input filters of power converters for compliance with regulatory electromagnetic compatibility (EMC) standards are often over-dimensioned in practice due to a non-optimal selection of number of filter stages and/or the lack of solid volumetric models of the inductor cores. This paper presents a systematic filter design approach based on a specific filter attenuation requirement and volumetric component parameters. It is shown that a minimal volume can be found for a certain optimal number of filter stages for both the differential mode (DM) and common mode (CM) filter. The considerations are carried out exemplarily for an EMC input filter of a single phase power converter for the power levels of 100 W, 300 W, and 500 W. (author)
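
    The stage-count trade-off described in the abstract can be illustrated with a toy calculation. The scaling law below (per-stage component volume inversely proportional to the stage corner frequency, plus a fixed per-stage overhead) and all constants are invented for illustration; they are not the paper's volumetric models.

```python
# Hypothetical volume-vs-stage-count trade-off for an n-stage LC input filter.
# An n-stage filter rolls off at 40*n dB/decade, so more stages allow a higher
# corner frequency and smaller passives, but each stage adds fixed overhead.
f_noise = 150e3        # Hz, switching harmonic to attenuate (illustrative)
att_req_db = 80.0      # required attenuation at f_noise (illustrative)

def filter_volume(n, k_stage=2.0, v_overhead=1.0):
    """Relative volume of an n-stage filter under an invented scaling law:
    per-stage volume ~ k/fc, plus a constant per-stage overhead."""
    # Corner frequency needed so that n stages deliver att_req_db at f_noise.
    fc = f_noise / 10 ** (att_req_db / (40.0 * n))
    return n * (k_stage * 1e5 / fc + v_overhead)

volumes = {n: filter_volume(n) for n in range(1, 6)}
n_opt = min(volumes, key=volumes.get)
```

    With these invented constants the volume drops steeply from one to two stages and then rises again past the optimum, reproducing the paper's qualitative finding that an interior optimal stage count exists.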

  3. Reducing external speedup requirements for input-queued crossbars

    DEFF Research Database (Denmark)

    Berger, Michael Stubert

    2005-01-01

    This paper presents a modified architecture for an input-queued switch that reduces external speedup. Maximal-size scheduling algorithms for input-buffered crossbars require a speedup between port card and switch card, typically around 2, to compensate for the scheduler performance degradation. This implies that the required bandwidth between port card and switch card is twice the actual port speed, adding to cost and complexity. To reduce this bandwidth, a modified architecture is proposed that introduces a small amount of input and output memory on the switch card chip…

  4. Treatments of Precipitation Inputs to Hydrologic Models

    Science.gov (United States)

    Hydrological models are used to assess many water resources problems, from agricultural use and water quality to engineering issues. The success of these models is dependent on correct parameterization; the most sensitive input is the rainfall time series. These records can come from land-based ...

  5. Software safety analysis on the model specified by NuSCR and SMV input language at requirements phase of software development life cycle using SMV

    Energy Technology Data Exchange (ETDEWEB)

    Koh, Kwang Yong; Seong, Poong Hyun [Korea Advanced Institute of Science and Technology, Taejon (Korea, Republic of)

    2005-07-01

    The safety-critical software process is composed of a development process, a verification and validation (V and V) process, and a safety analysis process. The safety analysis process has often been treated as an additional process and is not found in a conventional software process. But software safety analysis (SSA) is required if software is applied to a safety system, and the SSA shall be performed independently for the safety software throughout the software development life cycle (SDLC). Of all the phases in software development, requirements engineering is generally considered to play the most critical role in determining the overall software quality. NASA data demonstrate that nearly 75% of failures found in operational software were caused by errors in the requirements. The verification process in the requirements phase checks the correctness of the software requirements specification, and the safety analysis process analyzes the safety-related properties in detail. In this paper, a method for safety analysis at the requirements phase of the software development life cycle using the symbolic model verifier (SMV) is proposed. Hazards are discovered by hazard analysis, and in order to use SMV for the safety analysis, the safety-related properties are expressed in computation tree logic (CTL).

  6. Model-Free importance indicators for dependent input

    Energy Technology Data Exchange (ETDEWEB)

    Saltelli, A.; Ratto, M.; Tarantola, S.

    2001-07-01

    A number of methods are available to assess uncertainty importance in the predictions of a simulation model for orthogonal sets of uncertain input factors. However, in many practical cases input factors are correlated. Even for these cases it is still possible to compute the correlation ratio and the partial (or incremental) importance measure, two popular sensitivity measures proposed in the recent literature on the subject. Unfortunately, the existing indicators of importance have limitations in terms of their use in sensitivity analysis of model output. Correlation ratios are indeed effective for priority setting (i.e. to find out which input factor needs better determination) but not, for instance, for the identification of the subset of the most important input factors, or for model simplification. In such cases other types of indicators are required that can cope with the simultaneous occurrence of correlation and interaction (a property of the model) among the input factors. In (1) the limitations of current measures of importance were discussed and a general approach was identified to quantify uncertainty importance for correlated inputs in terms of different betting contexts. This work was later submitted to the Journal of the American Statistical Association. However, the computational cost of such an approach is still high, as is usual when dealing with correlated input factors. In this paper we explore how suitable designs could reduce the numerical load of the analysis. (Author) 11 refs.
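
    The correlation ratio mentioned above, eta^2 = Var(E[Y|Xi]) / Var(Y), can be estimated by binning one input at a time. The additive model, the input correlation, and all numbers below are invented for illustration; they are not taken from the paper. Note how, with correlated inputs, the two ratios can sum to more than one.

```python
import numpy as np

rng = np.random.default_rng(0)

# Two correlated Gaussian input factors (correlation 0.6) -- hypothetical.
n = 200_000
cov = np.array([[1.0, 0.6], [0.6, 1.0]])
x = rng.multivariate_normal([0.0, 0.0], cov, size=n)
y = x[:, 0] + 0.3 * x[:, 1]          # a simple additive model output

def correlation_ratio(xi, y, bins=50):
    """Estimate eta^2 = Var(E[Y|Xi]) / Var(Y) by quantile-binning Xi."""
    edges = np.quantile(xi, np.linspace(0.0, 1.0, bins + 1))
    idx = np.clip(np.searchsorted(edges, xi, side="right") - 1, 0, bins - 1)
    cond_mean = np.array([y[idx == b].mean() for b in range(bins)])
    counts = np.bincount(idx, minlength=bins)
    var_cond_mean = np.average((cond_mean - y.mean()) ** 2, weights=counts)
    return var_cond_mean / y.var()

eta2_1 = correlation_ratio(x[:, 0], y)
eta2_2 = correlation_ratio(x[:, 1], y)
```

    Because X2 is correlated with X1, its correlation ratio credits it with part of X1's effect, which is exactly why the abstract argues these indicators suit priority setting but not factor-subset selection.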

  7. Approximate input physics for stellar modelling

    CERN Document Server

    Pols, O R; Eggleton, P P; Han, Z; Pols, O R; Tout, C A; Eggleton, P P; Han, Z

    1995-01-01

    We present a simple and efficient, yet reasonably accurate, equation of state, which at the moderately low temperatures and high densities found in the interiors of stars less massive than the Sun is substantially more accurate than its predecessor by Eggleton, Faulkner & Flannery. Along with the most recently available values in tabular form of opacities, neutrino loss rates, and nuclear reaction rates for a selection of the most important reactions, this provides a convenient package of input physics for stellar modelling. We briefly discuss a few results obtained with the updated stellar evolution code.

  8. Sensitivity Analysis of the ALMANAC Model's Input Variables

    Institute of Scientific and Technical Information of China (English)

    XIE Yun; James R.Kiniry; Jimmy R.Williams; CHEN You-min; LIN Er-da

    2002-01-01

    Crop models often require extensive input data sets to realistically simulate crop growth. Development of such input data sets can be difficult for some model users. The objective of this study was to evaluate the importance of variables in input data sets for crop modeling. Based on published hybrid performance trials in eight Texas counties, we developed standard data sets of 10-year simulations of maize and sorghum for these eight counties with the ALMANAC (Agricultural Land Management Alternatives with Numerical Assessment Criteria) model. The simulation results were close to the measured county yields, with relative errors of only 2.6% for maize and -0.6% for sorghum. We then analyzed the sensitivity of grain yield to solar radiation, rainfall, soil depth, soil plant-available water, and runoff curve number, comparing simulated yields to those with the original, standard data sets. Runoff curve number changes had the greatest impact on simulated maize and sorghum yields for all the counties. The next most critical input was rainfall, and then solar radiation, for both maize and sorghum, especially under dryland conditions. For irrigated sorghum, solar radiation was the second most critical input instead of rainfall. The degree of sensitivity of yield to all variables was larger for maize than for sorghum, except for solar radiation. Many models use a USDA curve number approach to represent soil water redistribution, so it is important to have accurate curve numbers, rainfall, and soil depth to realistically simulate yields.

  9. Determining avalanche modelling input parameters using terrestrial laser scanning technology

    OpenAIRE

    2013-01-01

    International audience; In dynamic avalanche modelling, data about the volumes and areas of the snow released, mobilized and deposited are key input parameters, as well as the fracture height. The fracture height can sometimes be measured in the field, but it is often difficult to access the starting zone due to difficult or dangerous terrain and avalanche hazards. More complex is determining the areas and volumes of snow involved in an avalanche. Such calculations require high-resolution spa...

  10. Input modelling for subchannel analysis of CANFLEX fuel bundle

    Energy Technology Data Exchange (ETDEWEB)

    Park, Joo Hwan; Jun, Ji Su; Suk, Ho Chun [Korea Atomic Energy Research Institute, Taejon (Korea)

    1998-06-01

    This report describes the input modelling for subchannel analysis of the CANFLEX fuel bundle using the CASS (Candu thermalhydraulic Analysis by Subchannel approacheS) code, which has been developed for subchannel analysis of CANDU fuel channels. CASS can give different calculation results depending on the user's input modelling. Hence, the objective of this report is to provide the background information for the input modelling and the accuracy of the input data, and thereby give confidence in the calculation results. (author). 11 refs., 3 figs., 4 tabs.

  11. REFLECTIONS ON THE INOPERABILITY INPUT-OUTPUT MODEL

    NARCIS (Netherlands)

    Dietzenbacher, Erik; Miller, Ronald E.

    2015-01-01

    We argue that the inoperability input-output model is a straightforward - albeit potentially very relevant - application of the standard input-output model. In addition, we propose two less standard input-output approaches as alternatives to take into consideration when analyzing the effects of disasters.
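
    The point that the inoperability model shares the algebra of the standard input-output model can be made concrete: both reduce to solving a Leontief-type system q = (I - A*)^{-1} c*. The 3-sector interdependency matrix and the perturbation below are invented for illustration.

```python
import numpy as np

# Hypothetical 3-sector interdependency matrix A* for an inoperability
# input-output model (entry [i, j]: how much inoperability in sector j
# induces in sector i); spectral radius < 1, so the system is stable.
A_star = np.array([
    [0.0, 0.2, 0.1],
    [0.3, 0.0, 0.2],
    [0.1, 0.1, 0.0],
])
# Initial perturbation: sector 0 loses 10% of its functionality.
c_star = np.array([0.10, 0.0, 0.0])

# Equilibrium inoperability q = A* q + c*, i.e. the Leontief form
# q = (I - A*)^{-1} c*.
q = np.linalg.solve(np.eye(3) - A_star, c_star)
```

    The directly hit sector ends up slightly above its initial 10% inoperability, and the shock propagates into the other two sectors in proportion to their interdependencies, exactly as in a standard input-output multiplier analysis.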

  12. Robust input design for nonlinear dynamic modeling of AUV.

    Science.gov (United States)

    Nouri, Nowrouz Mohammad; Valadi, Mehrdad

    2017-09-01

    Input design has a dominant role in developing the dynamic model of autonomous underwater vehicles (AUVs) through system identification. Optimal input design is the process of generating informative inputs that can be used to obtain a good-quality dynamic model of an AUV. In a problem with optimal input design, the desired input signal depends on the unknown system which is to be identified. In this paper, an input design approach which is robust to uncertainties in model parameters is used. The Bayesian robust design strategy is applied to design input signals for dynamic modeling of AUVs. The employed approach can design multiple inputs and apply constraints on an AUV system's inputs and outputs. Particle swarm optimization (PSO) is employed to solve the constrained robust optimization problem. The presented algorithm is used for designing the input signals for an AUV, and the estimate obtained by robust input design is compared with that of the optimal input design. According to the results, the proposed input design can satisfy both robustness of constraints and optimality. Copyright © 2017 ISA. Published by Elsevier Ltd. All rights reserved.
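
    A particle swarm optimizer of the kind the abstract mentions can be sketched in a few lines. The objective and box constraint below are hypothetical stand-ins for the paper's robust identification cost; the constraint is enforced with a simple quadratic penalty.

```python
import numpy as np

rng = np.random.default_rng(1)

def objective(x):
    """Toy design criterion (stand-in for a robust identification cost):
    minimise a shifted sphere, subject to |x_i| <= 2."""
    return np.sum((x - 1.0) ** 2, axis=-1)

def penalty(x):
    # Quadratic penalty for violating the box constraint |x_i| <= 2.
    return 1e3 * np.sum(np.maximum(np.abs(x) - 2.0, 0.0) ** 2, axis=-1)

def pso(dim=3, n_particles=30, iters=200, w=0.7, c1=1.5, c2=1.5):
    x = rng.uniform(-5.0, 5.0, (n_particles, dim))
    v = np.zeros_like(x)
    pbest, pbest_f = x.copy(), objective(x) + penalty(x)
    g = pbest[np.argmin(pbest_f)].copy()          # global best position
    for _ in range(iters):
        r1, r2 = rng.random((2, n_particles, dim))
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = x + v
        f = objective(x) + penalty(x)
        better = f < pbest_f
        pbest[better], pbest_f[better] = x[better], f[better]
        g = pbest[np.argmin(pbest_f)].copy()
    return g, objective(g)

best_x, best_f = pso()
```

    On this toy problem the swarm settles near the unconstrained optimum, which lies inside the feasible box, so both the constraint and optimality are satisfied.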

  13. Analytical delay models for RLC interconnects under ramp input

    Institute of Scientific and Technical Information of China (English)

    REN Yinglei; MAO Junfa; LI Xiaochun

    2007-01-01

    Analytical delay models for Resistance-Inductance-Capacitance (RLC) interconnects with ramp input are presented for different situations, including the overdamped, underdamped and critically damped response cases. The errors of delay estimation using the analytical models proposed in this paper are less than 3% in comparison to the SPICE-computed delay. These models are meaningful for the delay analysis of actual circuits, in which the input signal is a ramp rather than an ideal step.
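
    The quantity such models approximate, the 50% delay of an RLC line under a ramp input, can be checked numerically. The sketch below integrates a single lumped RLC segment (L C y'' + R C y' + y = u) directly; it is not one of the paper's closed-form models, and the component values are illustrative only.

```python
import numpy as np

def delay_50(R, L, C, t_rise, dt=1e-12, t_end=1e-9):
    """50% delay of a lumped single-segment RLC line driven by a ramp that
    saturates at 1 V, via semi-implicit Euler integration of
    L*C*y'' + R*C*y' + y = u(t)."""
    n = int(t_end / dt)
    t = np.arange(n) * dt
    u = np.clip(t / t_rise, 0.0, 1.0)       # saturating ramp input
    y = np.zeros(n)
    yd = 0.0                                # dy/dt
    for k in range(1, n):
        ydd = (u[k - 1] - y[k - 1] - R * C * yd) / (L * C)
        yd += dt * ydd
        y[k] = y[k - 1] + dt * yd
    k50 = np.argmax(y >= 0.5)               # first crossing of 0.5 V
    return t[k50]

# Illustrative on-chip-scale values: R = 100 ohm, L = 1 nH, C = 100 fF,
# 50 ps input rise time (underdamped case, damping ratio 0.5).
d = delay_50(R=100.0, L=1e-9, C=100e-15, t_rise=50e-12)
```

    The input itself crosses 0.5 V at 25 ps, and the line adds roughly its tracking lag of 2*zeta/omega_n = R*C = 10 ps, so the numerical 50% delay lands in the mid-30 ps range; an analytical ramp-response model would aim to predict exactly this number.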

  14. Sensitivity analysis of a sound absorption model with correlated inputs

    Science.gov (United States)

    Chai, W.; Christen, J.-L.; Zine, A.-M.; Ichchou, M.

    2017-04-01

    Sound absorption in porous media is a complex phenomenon, which is usually addressed with homogenized models depending on macroscopic parameters. Since these parameters emerge from the structure at microscopic scale, they may be correlated. This paper deals with sensitivity analysis methods for a sound absorption model with correlated inputs. Specifically, the Johnson-Champoux-Allard (JCA) model is chosen as the objective model, with correlation effects generated by a secondary micro-macro semi-empirical model. To deal with this case, a relatively new sensitivity analysis method, the Fourier Amplitude Sensitivity Test with Correlation design (FASTC), based on Iman's transform, is applied. This method requires a priori information such as the variables' marginal distribution functions and their correlation matrix. The results are compared to the Correlation Ratio Method (CRM) for reference and validation. The distribution of the macroscopic variables arising from the microstructure, as well as their correlation matrix, are studied. Finally, the test results show that the correlation has a very important impact on the results of sensitivity analysis. The influence of the correlation strength among input variables on the sensitivity analysis is also assessed.

  15. On Optimal Input Design and Model Selection for Communication Channels

    Energy Technology Data Exchange (ETDEWEB)

    Li, Yanyan [ORNL; Djouadi, Seddik M [ORNL; Olama, Mohammed M [ORNL

    2013-01-01

    In this paper, the optimal model (structure) selection and input design which minimize the worst-case identification error for communication systems are provided. The problem is formulated using metric complexity theory in a Hilbert space setting. It is pointed out that model selection and input design can be handled independently. The Kolmogorov n-width is used to characterize the representation error introduced by model selection, while the Gelfand and time n-widths are used to represent the inherent error introduced by input design. After the model is selected, an optimal input which minimizes the worst-case identification error is shown to exist. In particular, it is proven that the optimal model for reducing the representation error is a Finite Impulse Response (FIR) model, and the optimal input is an impulse at the start of the observation interval. FIR models are widely popular in communication systems, such as in Orthogonal Frequency Division Multiplexing (OFDM) systems.
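
    The combination the abstract singles out, an FIR model identified from an impulse applied at the start of the observation interval, is easy to demonstrate with a least-squares fit. The channel coefficients and noise level below are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(2)

# True (hypothetical) FIR channel of order 4.
h_true = np.array([0.9, 0.5, -0.2, 0.1])

def simulate(u, h, noise=0.01):
    """Pass input u through FIR channel h and add measurement noise."""
    y = np.convolve(u, h)[: len(u)]
    return y + noise * rng.standard_normal(len(u))

# Impulse at the start of the observation interval, as the abstract suggests.
N = 64
u = np.zeros(N)
u[0] = 1.0
y = simulate(u, h_true)

# Least-squares FIR fit: regression matrix of delayed copies of u.
order = 4
Phi = np.column_stack(
    [np.concatenate([np.zeros(k), u[: N - k]]) for k in range(order)]
)
h_hat, *_ = np.linalg.lstsq(Phi, y, rcond=None)
```

    With an impulse input the delayed regressors are orthonormal, so the least-squares estimate reduces to reading the first `order` output samples, and the estimation error is just the measurement noise at those samples.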

  16. Water input requirements of the rapidly shrinking Dead Sea.

    Science.gov (United States)

    Abu Ghazleh, Shahrazad; Hartmann, Jens; Jansen, Nils; Kempe, Stephan

    2009-05-01

    The deepest point on Earth, the Dead Sea level, has been dropping alarmingly since 1978 by 0.7 m/a on average due to the accelerating water consumption in the Jordan catchment, and stood in 2008 at 420 m below sea level. In this study, a terrain model of the surface area and water volume of the Dead Sea was developed from the Shuttle Radar Topography Mission data using ArcGIS. The model shows that the lake shrinks on average by 4 km²/a in area and by 0.47 km³/a in volume, amounting to a cumulative loss of 14 km³ in the last 30 years. The receding level leaves erosional terraces almost every year, recorded here for the first time by Differential Global Positioning System field surveys. The terrace altitudes were correlated among the different profiles and dated to specific years of the lake level regression, illustrating the tight correlation between the morphology of the terrace sequence and the receding lake level. Our volume-level model described here and previous work on groundwater inflow suggest that the projected Dead Sea-Red Sea channel or the Mediterranean-Dead Sea channel must have a carrying capacity of >0.9 km³/a in order to slowly re-fill the lake to its former level and to create a sustainable system of electricity generation and freshwater production by desalinization. Moreover, such a channel will maintain tourism and the potash industry on both sides of the Dead Sea and reduce the natural hazard caused by the recession.
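
    The figures quoted in the abstract imply a simple back-of-the-envelope refill estimate. This sketch deliberately ignores the feedback that evaporation grows as the lake surface area recovers (which is why the full volume-level terrain model is needed); the arithmetic uses only the numbers above.

```python
# Volume balance from the abstract's figures (km^3 per year).
net_loss = 0.47                   # current net volume loss (evap - inflow)
channel = 0.90                    # proposed minimum channel carrying capacity
refill_rate = channel - net_loss  # net gain once the channel operates

deficit_30yr = 0.47 * 30          # cumulative loss over the last 30 years (~14 km^3)
years_to_refill = deficit_30yr / refill_rate
```

    Even under this optimistic constant-rate assumption, a 0.9 km³/a channel would need on the order of three decades to restore the 30-year deficit, consistent with the abstract's wording that the lake would be re-filled only "slowly".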

  17. Soil-Related Input Parameters for the Biosphere Model

    Energy Technology Data Exchange (ETDEWEB)

    A. J. Smith

    2004-09-09

    This report presents one of the analyses that support the Environmental Radiation Model for Yucca Mountain Nevada (ERMYN). The "Biosphere Model Report" (BSC 2004 [DIRS 169460]) describes the details of the conceptual model as well as the mathematical model and the required input parameters. The biosphere model is one of a series of process models supporting the postclosure Total System Performance Assessment (TSPA) for the Yucca Mountain repository. A schematic representation of the documentation flow for the biosphere input to TSPA is presented in Figure 1-1. This figure shows the evolutionary relationships among the products (i.e., analysis and model reports) developed for biosphere modeling, and the biosphere abstraction products for TSPA, as identified in the "Technical Work Plan for Biosphere Modeling and Expert Support" (TWP) (BSC 2004 [DIRS 169573]). This figure is included to provide an understanding of how this analysis report contributes to biosphere modeling in support of the license application, and is not intended to imply that access to the listed documents is required to understand the contents of this report. This report, "Soil-Related Input Parameters for the Biosphere Model", is one of the five analysis reports that develop input parameters for use in the ERMYN model. This report is the source documentation for the six biosphere parameters identified in Table 1-1. The purpose of this analysis was to develop the biosphere model parameters associated with the accumulation and depletion of radionuclides in the soil. These parameters support the calculation of radionuclide concentrations in soil from on-going irrigation or ash deposition and, as a direct consequence, radionuclide concentrations in other environmental media that are affected by radionuclide concentrations in soil. The analysis was performed in accordance with the TWP (BSC 2004 [DIRS 169573]), where the governing procedure…

  18. SIMPLE MODEL FOR THE INPUT IMPEDANCE OF RECTANGULAR MICROSTRIP ANTENNA

    Directory of Open Access Journals (Sweden)

    Celal YILDIZ

    1998-03-01

    A very simple model for the input impedance of a coax-fed rectangular microstrip patch antenna is presented. It is based on the cavity model and the equivalent resonant circuits. The theoretical input impedance results obtained from this model are in good agreement with the experimental results available in the literature. This model is well suited for computer-aided design (CAD).

  19. Storm-impact scenario XBeach model inputs and results

    Science.gov (United States)

    Mickey, Rangley; Long, Joseph W.; Thompson, David M.; Plant, Nathaniel G.; Dalyander, P. Soupy

    2017-01-01

    The XBeach model input and output of topography and bathymetry resulting from simulation of storm-impact scenarios at the Chandeleur Islands, LA, as described in USGS Open-File Report 2017–1009 (https://doi.org/10.3133/ofr20171009), are provided here. For further information regarding model input generation and visualization of model output topography and bathymetry refer to USGS Open-File Report 2017–1009 (https://doi.org/10.3133/ofr20171009).

  20. Environmental Transport Input Parameters for the Biosphere Model

    Energy Technology Data Exchange (ETDEWEB)

    M. A. Wasiolek

    2003-06-27

    This analysis report is one of the technical reports documenting the Environmental Radiation Model for Yucca Mountain Nevada (ERMYN), a biosphere model supporting the total system performance assessment (TSPA) for the geologic repository at Yucca Mountain. A graphical representation of the documentation hierarchy for the ERMYN is presented in Figure 1-1. This figure shows relationships among the reports developed for biosphere modeling and biosphere abstraction products for the TSPA, as identified in the "Technical Work Plan for Biosphere Modeling and Expert Support" (TWP) (BSC 2003 [163602]). Some documents in Figure 1-1 may be under development and not available when this report is issued. This figure provides an understanding of how this report contributes to biosphere modeling in support of the license application (LA), but access to the listed documents is not required to understand the contents of this report. This report is one of the reports that develop input parameter values for the biosphere model. The "Biosphere Model Report" (BSC 2003 [160699]) describes the conceptual model, the mathematical model, and the input parameters. The purpose of this analysis is to develop biosphere model parameter values related to radionuclide transport and accumulation in the environment. These parameters support calculations of radionuclide concentrations in the environmental media (e.g., soil, crops, animal products, and air) resulting from a given radionuclide concentration at the source of contamination (i.e., either in groundwater or volcanic ash). The analysis was performed in accordance with the TWP (BSC 2003 [163602]). This analysis develops values of parameters associated with many features, events, and processes (FEPs) applicable to the reference biosphere (DTN: M00303SEPFEPS2.000 [162452]), which are addressed in the biosphere model (BSC 2003 [160699]). The treatment of these FEPs is described in BSC (2003 [160699])…

  1. Preisach models of hysteresis driven by Markovian input processes

    Science.gov (United States)

    Schubert, Sven; Radons, Günter

    2017-08-01

    We study the response of Preisach models of hysteresis to stochastically fluctuating external fields. We perform numerical simulations, which indicate that analytical expressions derived previously for the autocorrelation functions and power spectral densities of the Preisach model with uncorrelated input, hold asymptotically also if the external field shows exponentially decaying correlations. As a consequence, the mechanisms causing long-term memory and 1/f noise in Preisach models with uncorrelated inputs still apply in the presence of fast decaying input correlations. We collect additional evidence for the importance of the effective Preisach density previously introduced even for Preisach models with correlated inputs. Additionally, we present some results for the output of the Preisach model with uncorrelated input using analytical methods. It is found, for instance, that in order to produce the same long-time tails in the output, the elementary hysteresis loops of large width need to have a higher weight for the generic Preisach model than for the symmetric Preisach model. Further, we find autocorrelation functions and power spectral densities to be monotonically decreasing independently of the choice of input and Preisach density.

  2. Space market model space industry input-output model

    Science.gov (United States)

    Hodgin, Robert F.; Marchesini, Roberto

    1987-01-01

    The goal of the Space Market Model (SMM) is to develop an information resource for the space industry. The SMM is intended to contain information appropriate for decision making in the space industry. The objectives of the SMM are to: (1) assemble information related to the development of the space business; (2) construct an adequate description of the emerging space market; (3) disseminate the information on the space market to forecasters and planners in government agencies and private corporations; and (4) provide timely analyses and forecasts of critical elements of the space market. An input-output model of market activity is proposed which is capable of transforming raw data into useful information for decision makers and policy makers dealing with the space sector.

  3. Estimation of the input parameters in the Feller neuronal model

    Science.gov (United States)

    Ditlevsen, Susanne; Lansky, Petr

    2006-06-01

    The stochastic Feller neuronal model is studied, and estimators of the model input parameters, depending on the firing regime of the process, are derived. Closed expressions for the first two moments of functionals of the first-passage time (FPT) through a constant boundary in the suprathreshold regime are derived, which are used to calculate moment estimators. In the subthreshold regime, the exponentiality of the FPT is utilized to characterize the input parameters. The methods are illustrated on simulated data. Finally, approximations of the first-passage-time moments are suggested, and biological interpretations and comparisons of the parameters in the Feller and the Ornstein-Uhlenbeck models are discussed.

  4. Quality assurance of weather data for agricultural system model input

    Science.gov (United States)

    It is well known that crop production and hydrologic variation on watersheds are weather related. Rarely, however, are meteorological data quality checks reported for agricultural systems model research. We present quality assurance procedures for agricultural system model weather data input. Problems...

  5. Optimization of precipitation inputs for SWAT modeling in mountainous catchment

    Science.gov (United States)

    Tuo, Ye; Chiogna, Gabriele; Disse, Markus

    2016-04-01

    Precipitation is often the most important input data in hydrological models when simulating streamflow in mountainous catchments. The Soil and Water Assessment Tool (SWAT), a widely used hydrological model, only makes use of data from the one precipitation gauging station that is nearest to the centroid of each subcatchment, eventually corrected using the band elevation method. This leads in general to an inaccurate representation of subcatchment precipitation, which results in unreliable simulation results in mountainous catchments. To investigate the impact of the precipitation inputs and consider the high spatial and temporal variability of precipitation, we first interpolated 21 years (1990-2010) of daily measured data using the Inverse Distance Weighting (IDW) method. Averaged IDW daily values have been calculated at the subcatchment scale to be further supplied as optimized precipitation inputs for SWAT. Both datasets (measured data and IDW data) are applied to three Alpine subcatchments of the Adige catchment (North-eastern Italy, 12,100 km²) as precipitation inputs. Based on the calibration and validation results, model performances are evaluated according to the Nash-Sutcliffe Efficiency (NSE) and Coefficient of Determination (R²). For all three subcatchments, the simulation results with IDW inputs are better than those of the original method, which uses measured inputs from the nearest station. This suggests that the IDW method can improve the model performance in Alpine catchments to some extent. By taking into account and weighting the distance between precipitation records, IDW supplies more accurate precipitation inputs for each individual Alpine subcatchment, which as a whole leads to an improved description of the hydrological behavior of the entire Adige catchment.
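
    The IDW scheme referred to above weights each gauge by the inverse of its distance to the target point raised to a power (commonly 2). A minimal sketch with invented gauge coordinates and rainfall values (not the Adige data):

```python
import numpy as np

def idw(xy_stations, values, xy_target, power=2.0):
    """Inverse Distance Weighting: weight each gauge by 1/d^power.
    Returns the gauge's own value if the target sits exactly on it."""
    d = np.linalg.norm(xy_stations - xy_target, axis=1)
    if np.any(d == 0.0):
        return float(values[np.argmin(d)])
    w = 1.0 / d ** power
    return float(np.sum(w * values) / np.sum(w))

# Three hypothetical gauges (coordinates in km) and one day's rainfall.
stations = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0]])
rain_mm = np.array([12.0, 4.0, 8.0])

# Estimate at a subcatchment centroid: the nearest gauge dominates,
# but the others still contribute, unlike nearest-station assignment.
p_centroid = idw(stations, rain_mm, np.array([2.0, 2.0]))
```

    Averaging such point estimates over each subcatchment yields the "averaged IDW daily values" the abstract supplies to SWAT in place of the single nearest-gauge record.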

  6. Meteorological input for atmospheric dispersion models: an inter-comparison between new generation models

    Energy Technology Data Exchange (ETDEWEB)

    Busillo, C.; Calastrini, F.; Gualtieri, G. [Lab. for Meteorol. and Environ. Modell. (LaMMA/CNR-IBIMET), Florence (Italy); Carpentieri, M.; Corti, A. [Dept. of Energetics, Univ. of Florence (Italy); Canepa, E. [INFM, Dept. of Physics, Univ. of Genoa (Italy)

    2004-07-01

    The behaviour of atmospheric dispersion models is strongly influenced by meteorological input, especially as far as new generation models are concerned. More sophisticated meteorological pre-processors require more extended and more reliable data. This is true in particular when short-term simulations are performed, while in long-term modelling detailed data are less important. In Europe no meteorological standards exist about data, therefore testing and evaluating the results of new generation dispersion models is particularly important in order to obtain information on reliability of model predictions. (orig.)

  7. Inhalation Exposure Input Parameters for the Biosphere Model

    Energy Technology Data Exchange (ETDEWEB)

    K. Rautenstrauch

    2004-09-10

    This analysis is one of 10 reports that support the Environmental Radiation Model for Yucca Mountain, Nevada (ERMYN) biosphere model. The "Biosphere Model Report" (BSC 2004 [DIRS 169460]) describes in detail the conceptual model as well as the mathematical model and its input parameters. This report documents development of input parameters for the biosphere model that are related to atmospheric mass loading and supports the use of the model to develop biosphere dose conversion factors (BDCFs). The biosphere model is one of a series of process models supporting the total system performance assessment (TSPA) for a Yucca Mountain repository. "Inhalation Exposure Input Parameters for the Biosphere Model" is one of five reports that develop input parameters for the biosphere model. A graphical representation of the documentation hierarchy for the ERMYN is presented in Figure 1-1. This figure shows the interrelationships among the products (i.e., analysis and model reports) developed for biosphere modeling, and the plan for development of the biosphere abstraction products for TSPA, as identified in the "Technical Work Plan for Biosphere Modeling and Expert Support" (BSC 2004 [DIRS 169573]). This analysis report defines and justifies values of mass loading for the biosphere model. Mass loading is the total mass concentration of resuspended particles (e.g., dust, ash) in a volume of air. Mass loading values are used in the air submodel of ERMYN to calculate concentrations of radionuclides in air inhaled by a receptor and concentrations in air surrounding crops. Concentrations in air to which the receptor is exposed are then used in the inhalation submodel to calculate the dose contribution to the receptor from inhalation of contaminated airborne particles. Concentrations in air surrounding plants are used in the plant submodel to calculate the concentrations of radionuclides in foodstuffs contributed from uptake by foliar interception.

  8. Testing agile requirements models

    Institute of Scientific and Technical Information of China (English)

    BOTASCHANJAN Jewgenij; PISTER Markus; RUMPE Bernhard

    2004-01-01

    This paper discusses a model-based approach to validating software requirements in agile development processes through simulation and, in particular, automated testing. The use of models as a central development artifact needs to be added to the portfolio of software engineering techniques to further increase the efficiency and flexibility of development, beginning early in the requirements definition phase. Testing requirements is among the most important techniques for giving feedback and increasing the quality of the result. Therefore, testing of artifacts should be introduced as early as possible, even in the requirements definition phase.

  9. The use of synthetic input sequences in time series modeling

    Energy Technology Data Exchange (ETDEWEB)

    Oliveira, Dair Jose de [Programa de Pos-Graduacao em Engenharia Eletrica, Universidade Federal de Minas Gerais, Av. Antonio Carlos 6627, 31.270-901 Belo Horizonte, MG (Brazil); Letellier, Christophe [CORIA/CNRS UMR 6614, Universite et INSA de Rouen, Av. de l' Universite, BP 12, F-76801 Saint-Etienne du Rouvray cedex (France); Gomes, Murilo E.D. [Programa de Pos-Graduacao em Engenharia Eletrica, Universidade Federal de Minas Gerais, Av. Antonio Carlos 6627, 31.270-901 Belo Horizonte, MG (Brazil); Aguirre, Luis A. [Programa de Pos-Graduacao em Engenharia Eletrica, Universidade Federal de Minas Gerais, Av. Antonio Carlos 6627, 31.270-901 Belo Horizonte, MG (Brazil)], E-mail: aguirre@cpdee.ufmg.br

    2008-08-04

    In many situations time series models obtained from noise-like data settle to trivial solutions under iteration. This Letter proposes a way of producing a synthetic (dummy) input that is included to prevent the model from settling to a trivial solution while maintaining features of the original signal. Simulated benchmark models and a real time series of RR intervals from an ECG are used to illustrate the procedure.

  10. The use of synthetic input sequences in time series modeling

    Science.gov (United States)

    de Oliveira, Dair José; Letellier, Christophe; Gomes, Murilo E. D.; Aguirre, Luis A.

    2008-08-01

    In many situations time series models obtained from noise-like data settle to trivial solutions under iteration. This Letter proposes a way of producing a synthetic (dummy) input that is included to prevent the model from settling to a trivial solution while maintaining features of the original signal. Simulated benchmark models and a real time series of RR intervals from an ECG are used to illustrate the procedure.
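
    A minimal illustration of the problem these two records address, assuming a fixed stable AR(2) model rather than the Letter's identified models: iterated on its own output, the model decays to the trivial fixed point, while a small synthetic (dummy) input keeps it excited:

```python
import numpy as np

# Illustration (with a fixed stable AR(2) model, not the Letter's
# identified models): a free-run iteration decays to the trivial fixed
# point, while a small synthetic (dummy) input keeps the output alive.
def free_run(a1, a2, steps, dummy_input=None):
    y = [1.0, 0.5]
    for k in range(steps):
        u = dummy_input[k] if dummy_input is not None else 0.0
        y.append(a1 * y[-1] + a2 * y[-2] + u)
    return np.array(y)

a1, a2 = 0.5, 0.3   # characteristic roots inside the unit circle
rng = np.random.default_rng(0)
no_input = free_run(a1, a2, 200)                             # collapses
with_input = free_run(a1, a2, 200, rng.normal(0, 0.1, 200))  # stays alive
```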

  11. Environmental Transport Input Parameters for the Biosphere Model

    Energy Technology Data Exchange (ETDEWEB)

    M. Wasiolek

    2004-09-10

    This analysis report is one of the technical reports documenting the Environmental Radiation Model for Yucca Mountain, Nevada (ERMYN), a biosphere model supporting the total system performance assessment for the license application (TSPA-LA) for the geologic repository at Yucca Mountain. A graphical representation of the documentation hierarchy for the ERMYN is presented in Figure 1-1. This figure shows relationships among the reports developed for biosphere modeling and biosphere abstraction products for the TSPA-LA, as identified in the ''Technical Work Plan for Biosphere Modeling and Expert Support'' (BSC 2004 [DIRS 169573]) (TWP). This figure provides an understanding of how this report contributes to biosphere modeling in support of the license application (LA). This report is one of the five reports that develop input parameter values for the biosphere model. The ''Biosphere Model Report'' (BSC 2004 [DIRS 169460]) describes the conceptual model and the mathematical model. The input parameter reports, shown to the right of the Biosphere Model Report in Figure 1-1, contain detailed description of the model input parameters. The output of this report is used as direct input in the ''Nominal Performance Biosphere Dose Conversion Factor Analysis'' and in the ''Disruptive Event Biosphere Dose Conversion Factor Analysis'' that calculate the values of biosphere dose conversion factors (BDCFs) for the groundwater and volcanic ash exposure scenarios, respectively. The purpose of this analysis was to develop biosphere model parameter values related to radionuclide transport and accumulation in the environment. These parameters support calculations of radionuclide concentrations in the environmental media (e.g., soil, crops, animal products, and air) resulting from a given radionuclide concentration at the source of contamination (i.e., either in groundwater or in volcanic ash). The analysis

  12. Effects of input uncertainty on cross-scale crop modeling

    Science.gov (United States)

    Waha, Katharina; Huth, Neil; Carberry, Peter

    2014-05-01

    The quality of data on climate, soils, and agricultural management in the tropics is generally low, or data are scarce, leading to uncertainty in process-based modeling of cropping systems. Process-based crop models are common tools for simulating crop yields and crop production in climate change impact studies, studies on mitigation and adaptation options, or food security studies. Crop modelers are concerned about input data accuracy because this, together with an adequate representation of plant physiology processes and the choice of model parameters, is a key factor for a reliable simulation. For example, assuming errors in measurements of air temperature, radiation, and precipitation of ±0.2°C, ±2%, and ±3%, respectively, Fodor & Kovacs (2005) estimate that this translates into an uncertainty of 5-7% in yield and biomass simulations. In our study we seek to answer the following questions: (1) are there important uncertainties in the spatial variability of simulated crop yields on the grid-cell level displayed on maps, (2) are there important uncertainties in the temporal variability of simulated crop yields on the aggregated, national level displayed in time series, and (3) how does the accuracy of different soil, climate, and management information influence the simulated crop yields in two crop models designed for use at different spatial scales? The study will help to determine whether more detailed information improves the simulations and to advise model users on the uncertainty related to input data. We analyse the performance of the point-scale crop model APSIM (Keating et al., 2003) and the global-scale crop model LPJmL (Bondeau et al., 2007) with different climate information (monthly and daily) and soil conditions (global soil map and African soil map) under different agricultural management (uniform and variable sowing dates) for the low-input maize-growing areas in Burkina Faso/West Africa. We test the models' response to different levels of input
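
    Measurement errors of the kind quoted above can be propagated through a crop model by simple Monte Carlo sampling. The yield response below is a made-up stand-in, not APSIM or LPJmL:

```python
import random

# Monte Carlo propagation of the quoted measurement errors (+/-0.2 C,
# +/-2% radiation, +/-3% precipitation) through a made-up yield response.
def toy_yield(temp_c, radiation, precip):
    return 0.05 * radiation + 2.0 * precip - 0.5 * (temp_c - 25.0) ** 2

random.seed(1)
base = toy_yield(25.0, 100.0, 3.0)
samples = []
for _ in range(5000):
    t = 25.0 + random.uniform(-0.2, 0.2)
    r = 100.0 * (1.0 + random.uniform(-0.02, 0.02))
    p = 3.0 * (1.0 + random.uniform(-0.03, 0.03))
    samples.append(toy_yield(t, r, p))
spread = (max(samples) - min(samples)) / base   # relative uncertainty band
```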

  13. Model reduction of nonlinear systems subject to input disturbances

    KAUST Repository

    Ndoye, Ibrahima

    2017-07-10

    The method of convex optimization is used as a tool for model reduction of a class of nonlinear systems in the presence of disturbances. It is shown that under some conditions the nonlinear disturbed system can be approximated by a reduced order nonlinear system with similar disturbance-output properties to the original plant. The proposed model reduction strategy preserves the nonlinearity and the input disturbance nature of the model. It guarantees a sufficiently small error between the outputs of the original and the reduced-order systems, and also maintains the properties of input-to-state stability. The matrices of the reduced order system are given in terms of a set of linear matrix inequalities (LMIs). The paper concludes with a demonstration of the proposed approach on model reduction of a nonlinear electronic circuit with additive disturbances.

  14. Inhalation Exposure Input Parameters for the Biosphere Model

    Energy Technology Data Exchange (ETDEWEB)

    M. Wasiolek

    2006-06-05

    This analysis is one of the technical reports that support the Environmental Radiation Model for Yucca Mountain, Nevada (ERMYN), referred to in this report as the biosphere model. ''Biosphere Model Report'' (BSC 2004 [DIRS 169460]) describes in detail the conceptual model as well as the mathematical model and its input parameters. This report documents development of input parameters for the biosphere model that are related to atmospheric mass loading and supports the use of the model to develop biosphere dose conversion factors (BDCFs). The biosphere model is one of a series of process models supporting the total system performance assessment (TSPA) for a Yucca Mountain repository. ''Inhalation Exposure Input Parameters for the Biosphere Model'' is one of five reports that develop input parameters for the biosphere model. A graphical representation of the documentation hierarchy for the biosphere model is presented in Figure 1-1 (based on BSC 2006 [DIRS 176938]). This figure shows the interrelationships among the products (i.e., analysis and model reports) developed for biosphere modeling and how this analysis report contributes to biosphere modeling. This analysis report defines and justifies values of atmospheric mass loading for the biosphere model. Mass loading is the total mass concentration of resuspended particles (e.g., dust, ash) in a volume of air. Mass loading values are used in the air submodel of the biosphere model to calculate concentrations of radionuclides in air inhaled by a receptor and concentrations in air surrounding crops. Concentrations in air to which the receptor is exposed are then used in the inhalation submodel to calculate the dose contribution to the receptor from inhalation of contaminated airborne particles. Concentrations in air surrounding plants are used in the plant submodel to calculate the concentrations of radionuclides in foodstuffs contributed from uptake by foliar interception. This

  15. How sensitive are estimates of carbon fixation in agricultural models to input data?

    Directory of Open Access Journals (Sweden)

    Tum Markus

    2012-02-01

    Full Text Available Abstract. Background: Process-based vegetation models are central to understanding the hydrological and carbon cycles. To achieve useful results at regional to global scales, such models require various input data from a wide range of earth observations. Since the geographical extent of these datasets varies from local to global scale, data quality and validity are of major interest when they are chosen for use, and it is important to assess the effect of different input datasets' quality on model outputs. In this article, we reflect on both the uncertainty in input data and the reliability of model results. For our case study analysis we selected the Marchfeld region in Austria. We used independent meteorological datasets from the Central Institute for Meteorology and Geodynamics and the European Centre for Medium-Range Weather Forecasts (ECMWF). Land cover/land use information was taken from the GLC2000 and the CORINE 2000 products. Results: For our case study analysis we selected two different process-based models: the Environmental Policy Integrated Climate (EPIC) model and the Biosphere Energy Transfer Hydrology (BETHY/DLR) model. Both process models show a congruent pattern in their response to changes in input data. The annual variability of NPP reaches 36% for BETHY/DLR and 39% for EPIC when changing major input datasets. However, EPIC is less sensitive to meteorological input data than BETHY/DLR. The ECMWF maximum temperatures show a systematic pattern: temperatures above 20°C are overestimated, whereas temperatures below 20°C are underestimated, resulting in an overall underestimation of NPP in both models. In addition, BETHY/DLR is sensitive to the choice and accuracy of the land cover product. Discussion: This study shows that the impact of input data uncertainty on modelling results needs to be assessed: whenever the models are applied under new conditions, local data should be used for both input and result comparison.

  16. How sensitive are estimates of carbon fixation in agricultural models to input data?

    Science.gov (United States)

    Tum, Markus; Strauss, Franziska; McCallum, Ian; Günther, Kurt; Schmid, Erwin

    2012-02-01

    Process-based vegetation models are central to understanding the hydrological and carbon cycles. To achieve useful results at regional to global scales, such models require various input data from a wide range of earth observations. Since the geographical extent of these datasets varies from local to global scale, data quality and validity are of major interest when they are chosen for use, and it is important to assess the effect of different input datasets' quality on model outputs. In this article, we reflect on both the uncertainty in input data and the reliability of model results. For our case study analysis we selected the Marchfeld region in Austria. We used independent meteorological datasets from the Central Institute for Meteorology and Geodynamics and the European Centre for Medium-Range Weather Forecasts (ECMWF). Land cover/land use information was taken from the GLC2000 and the CORINE 2000 products. For our case study analysis we selected two different process-based models: the Environmental Policy Integrated Climate (EPIC) model and the Biosphere Energy Transfer Hydrology (BETHY/DLR) model. Both process models show a congruent pattern in their response to changes in input data. The annual variability of NPP reaches 36% for BETHY/DLR and 39% for EPIC when changing major input datasets. However, EPIC is less sensitive to meteorological input data than BETHY/DLR. The ECMWF maximum temperatures show a systematic pattern: temperatures above 20°C are overestimated, whereas temperatures below 20°C are underestimated, resulting in an overall underestimation of NPP in both models. In addition, BETHY/DLR is sensitive to the choice and accuracy of the land cover product. This study shows that the impact of input data uncertainty on modelling results needs to be assessed: whenever the models are applied under new conditions, local data should be used for both input and result comparison.

  17. Agricultural and Environmental Input Parameters for the Biosphere Model

    Energy Technology Data Exchange (ETDEWEB)

    K. Rasmuson; K. Rautenstrauch

    2004-09-14

    This analysis is one of 10 technical reports that support the Environmental Radiation Model for Yucca Mountain Nevada (ERMYN) (i.e., the biosphere model). It documents development of agricultural and environmental input parameters for the biosphere model, and supports the use of the model to develop biosphere dose conversion factors (BDCFs). The biosphere model is one of a series of process models supporting the total system performance assessment (TSPA) for the repository at Yucca Mountain. The ERMYN provides the TSPA with the capability to perform dose assessments. A graphical representation of the documentation hierarchy for the ERMYN is presented in Figure 1-1. This figure shows the interrelationships between the major activities and their products (the analysis and model reports) that were planned in ''Technical Work Plan for Biosphere Modeling and Expert Support'' (BSC 2004 [DIRS 169573]). The ''Biosphere Model Report'' (BSC 2004 [DIRS 169460]) describes the ERMYN and its input parameters.

  18. Land Building Models: Uncertainty in and Sensitivity to Input Parameters

    Science.gov (United States)

    2013-08-01

    ERDC/CHL CHETN-VI-44, August 2013. By Ty V. Wamsley. PURPOSE: The purpose of this Coastal and Hydraulics Engineering Technical Note (CHETN) is to document a

  19. Influence of magnetospheric inputs definition on modeling of ionospheric storms

    Science.gov (United States)

    Tashchilin, A. V.; Romanova, E. B.; Kurkin, V. I.

    For numerical modeling of ionospheric storms, the parameters of the neutral atmosphere and magnetosphere are usually specified by empirical models. The statistical nature of these models makes them impractical for simulating an individual storm, so the empirical models must be corrected using various additional assumptions. This work investigates the influence of magnetospheric inputs, such as the distributions of electric potential and of the number and energy fluxes of precipitating electrons, on the results of ionospheric storm simulations. To this end, for the strong geomagnetic storm of September 25, 1998, hourly global distributions of those magnetospheric inputs from 20 to 27 September were calculated by the magnetogram inversion technique (MIT). Then, with the help of a 3-D ionospheric model, two variants of the ionospheric response to this magnetic storm were simulated, using the MIT data and the empirical models of electric fields (Sojka et al., 1986) and electron precipitation (Hardy et al., 1985). Comparison of the results showed that, for high-latitude and subauroral stations, the daily variations of electron density calculated with the MIT data are closer to observations than those from the empirical models. In addition, using the MIT data reveals some peculiarities in the daily variations of electron density during a strong geomagnetic storm. References: Sojka J.J., Rasmussen C.E., Schunk R.W., J. Geophys. Res., 1986, N10, p. 11281. Hardy D.A., Gussenhoven M.S., Holeman E.A., J. Geophys. Res., 1985, N5, p. 4229.

  20. Assessing and propagating uncertainty in model inputs in corsim

    Energy Technology Data Exchange (ETDEWEB)

    Molina, G.; Bayarri, M. J.; Berger, J. O.

    2001-07-01

    CORSIM is a large simulator for vehicular traffic, and is being studied with respect to its ability to successfully model and predict behavior of traffic in a 36 block section of Chicago. Inputs to the simulator include information about street configuration, driver behavior, traffic light timing, turning probabilities at each corner and distributions of traffic ingress into the system. This work is described in more detail in the article Fast Simulators for Assessment and Propagation of Model Uncertainty also in these proceedings. The focus of this conference poster is on the computational aspects of this problem. In particular, we address the description of the full conditional distributions needed for implementation of the MCMC algorithm and, in particular, how the constraints can be incorporated; details concerning the run time and convergence of the MCMC algorithm; and utilisation of the MCMC output for prediction and uncertainty analysis concerning the CORSIM computer model. As this last is the ultimate goal, it is worth emphasizing that the incorporation of all uncertainty concerning inputs can significantly affect the model predictions. (Author)
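
    The constrained MCMC machinery described here can be sketched with a schematic Metropolis sampler. The Gaussian likelihood and the [0, 1] bound on a single "turning probability" input are placeholders, not CORSIM's actual model:

```python
import math
import random

# Schematic Metropolis sampler for an uncertain simulator input subject
# to a hard constraint; the likelihood and the [0, 1] bound on a turning
# probability are placeholders, not CORSIM's actual model.
def log_post(theta, data):
    if not 0.0 < theta < 1.0:       # constraint: infeasible inputs rejected
        return -math.inf
    return -sum((d - theta) ** 2 for d in data) / (2 * 0.05 ** 2)

random.seed(0)
data = [0.32, 0.29, 0.35, 0.31]     # noisy observations of the input
theta, chain = 0.5, []
for _ in range(20000):
    prop = theta + random.gauss(0.0, 0.05)
    if math.log(random.random()) < log_post(prop, data) - log_post(theta, data):
        theta = prop
    chain.append(theta)
post_mean = sum(chain[5000:]) / len(chain[5000:])   # posterior summary
```

    The MCMC output (the post-burn-in chain) can then be fed through the simulator to propagate input uncertainty into predictions.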

  1. Evaluating the uncertainty of input quantities in measurement models

    Science.gov (United States)

    Possolo, Antonio; Elster, Clemens

    2014-06-01

    The Guide to the Expression of Uncertainty in Measurement (GUM) gives guidance about how values and uncertainties should be assigned to the input quantities that appear in measurement models. This contribution offers a concrete proposal for how that guidance may be updated in light of the advances in the evaluation and expression of measurement uncertainty that were made in the course of the twenty years that have elapsed since the publication of the GUM, and also considering situations that the GUM does not yet contemplate. Our motivation is the ongoing conversation about a new edition of the GUM. While generally we favour a Bayesian approach to uncertainty evaluation, we also recognize the value that other approaches may bring to the problems considered here, and focus on methods for uncertainty evaluation and propagation that are widely applicable, including to cases that the GUM has not yet addressed. In addition to Bayesian methods, we discuss maximum-likelihood estimation, robust statistical methods, and measurement models where values of nominal properties play the same role that input quantities play in traditional models. We illustrate these general-purpose techniques in concrete examples, employing data sets that are realistic but that also are of conveniently small sizes. The supplementary material available online lists the R computer code that we have used to produce these examples (stacks.iop.org/Met/51/3/339/mmedia). Although we strive to stay close to clause 4 of the GUM, which addresses the evaluation of uncertainty for input quantities, we depart from it as we review the classes of measurement models that we believe are generally useful in contemporary measurement science. We also considerably expand and update the treatment that the GUM gives to Type B evaluations of uncertainty: reviewing the state-of-the-art, disciplined approach to the elicitation of expert knowledge, and its encapsulation in probability distributions that are usable in
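
    The propagation of input-quantity distributions through a measurement model can be sketched by Monte Carlo, in the spirit of GUM Supplement 1. The model Y = X1/X2 and the assigned distributions below are illustrative, not an example from the GUM itself:

```python
import random
import statistics

# Monte Carlo propagation in the spirit of GUM Supplement 1: assign
# distributions to the input quantities of a measurement model
# Y = X1 / X2 and propagate them numerically. All values are illustrative.
random.seed(42)

def draw_output():
    x1 = random.gauss(10.0, 0.1)      # Type A: normal, from repeated obs.
    x2 = random.uniform(1.98, 2.02)   # Type B: rectangular, from limits
    return x1 / x2

sample = [draw_output() for _ in range(100000)]
y = statistics.mean(sample)           # estimate of the measurand
u_y = statistics.stdev(sample)        # standard uncertainty of Y
```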

  2. Kernel Principal Component Analysis for Stochastic Input Model Generation (PREPRINT)

    Science.gov (United States)

    2010-08-17

    Fig. 13. Contour of saturation at 0.2 PVI: MC mean (a) and variance (b) from experimental samples; MC mean (c) and variance (d) from PC realizations. PVI represents dimensionless time and is computed as PVI = ∫ Q dt / Vp. The stochastic input model provides a fast way to generate many realizations, which are consistent, in a useful sense, with the experimental data.

  3. RELAP5/MOD3 code manual: User's guide and input requirements. Volume 2

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    1995-08-01

    The RELAP5 code has been developed for best estimate transient simulation of light water reactor coolant systems during postulated accidents. The code models the coupled behavior of the reactor coolant system and the core for loss-of-coolant accidents, and operational transients, such as anticipated transient without scram, loss of offsite power, loss of feedwater, and loss of flow. A generic modeling approach is used that permits simulating a variety of thermal hydraulic systems. Control system and secondary system components are included to permit modeling of plant controls, turbines, condensers, and secondary feedwater systems. Volume II contains detailed instructions for code application and input data preparation.

  4. Performance Comparison of Sub Phonetic Model with Input Signal Processing

    Directory of Open Access Journals (Sweden)

    Dr E. Ramaraj

    2006-01-01

    Full Text Available The quest to arrive at a better model of signal transformation for speech has resulted in efforts to develop better signal representations and algorithms. The article explores the word model, a concatenation of state-dependent senones, as an alternative to the phoneme. The research aims to combine the senone with input signal processing (ISP), an algorithm that has been tried with phonemes and has been quite successful, to compare the performance of senones with ISP against phonemes with ISP, and to supply the result analysis. The research model uses the SPHINX IV [4] speech engine for its implementation owing to its flexibility toward new algorithms, its robustness, and performance considerations.

  5. State-shared model for multiple-input multiple-output systems

    Institute of Scientific and Technical Information of China (English)

    Zhenhua TIAN; Karlene A. HOO

    2005-01-01

    This work proposes a method to construct a state-shared model for multiple-input multiple-output (MIMO) systems. A state-shared model is defined as a linear time-invariant state-space structure that is driven by measurement signals (the plant outputs and the manipulated variables) but shared by different multiple input/output models. The genesis of the state-shared model is based on a particular reduced non-minimal realization. Any such realization necessarily fulfills the requirement that the output of the state-shared model is an asymptotically correct estimate of the output of the plant, if the process model is selected appropriately. The approach is demonstrated on a nonlinear MIMO system: a physiological model of the calcium fluxes that control muscle contraction and relaxation in human cardiac myocytes.

  6. Estimation of sectoral prices in the BNL energy input--output model

    Energy Technology Data Exchange (ETDEWEB)

    Tessmer, R.G. Jr.; Groncki, P.; Boyce, G.W. Jr.

    1977-12-01

    Value-added coefficients have been incorporated into Brookhaven's Energy Input-Output Model so that one can calculate the implicit price at which each sector sells its output to interindustry and final-demand purchasers. Certain adjustments to historical 1967 data are required because of the unique structure of the model. Procedures are also described for projecting energy-sector coefficients in future years that are consistent with exogenously specified energy prices.
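
    The implicit-price calculation described above follows the standard Leontief price identity p = pA + v, so p = v(I − A)⁻¹. The coefficients below are invented for illustration, not the Brookhaven model's data:

```python
import numpy as np

# Implicit sectoral prices in a Leontief framework: with technical
# coefficients A and value-added coefficients v per unit of output,
# prices satisfy p = p A + v, i.e. p = v (I - A)^(-1). Numbers are made up.
A = np.array([[0.2, 0.3],
              [0.1, 0.4]])            # interindustry input coefficients
v = np.array([0.5, 0.3])              # value added per unit of output
p = v @ np.linalg.inv(np.eye(2) - A)  # implicit prices by sector
assert np.allclose(p, p @ A + v)      # prices cover inputs plus value added
```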

  7. Measurement of Laser Weld Temperatures for 3D Model Input.

    Energy Technology Data Exchange (ETDEWEB)

    Dagel, Daryl; GROSSETETE, GRANT; Maccallum, Danny O.

    2016-10-01

    Laser welding is a key joining process used extensively in the manufacture and assembly of critical components for several weapons systems. Sandia National Laboratories advances the understanding of the laser welding process through coupled experimentation and modeling. This report summarizes the experimental portion of the research program, which focused on measuring temperatures and thermal history of laser welds on steel plates. To increase confidence in measurement accuracy, researchers utilized multiple complementary techniques to acquire temperatures during laser welding. This data serves as input to and validation of 3D laser welding models aimed at predicting microstructure and the formation of defects and their impact on weld-joint reliability, a crucial step in rapid prototyping of weapons components.

  8. Phylogenetic mixtures and linear invariants for equal input models.

    Science.gov (United States)

    Casanellas, Marta; Steel, Mike

    2017-04-01

    The reconstruction of phylogenetic trees from molecular sequence data relies on modelling site substitutions by a Markov process, or a mixture of such processes. In general, allowing mixed processes can result in different tree topologies becoming indistinguishable from the data, even for infinitely long sequences. However, when the underlying Markov process supports linear phylogenetic invariants, then provided these are sufficiently informative, the identifiability of the tree topology can be restored. In this paper, we investigate a class of processes that support linear invariants once the stationary distribution is fixed, the 'equal input model'. This model generalizes the 'Felsenstein 1981' model (and thereby the Jukes-Cantor model) from four states to an arbitrary number of states (finite or infinite), and it can also be described by a 'random cluster' process. We describe the structure and dimension of the vector spaces of phylogenetic mixtures and of linear invariants for any fixed phylogenetic tree (and for all trees-the so called 'model invariants'), on any number n of leaves. We also provide a precise description of the space of mixtures and linear invariants for the special case of [Formula: see text] leaves. By combining techniques from discrete random processes and (multi-) linear algebra, our results build on a classic result that was first established by James Lake (Mol Biol Evol 4:167-191, 1987).
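
    A small sketch of the equal input model's rate matrix: the substitution rate from any state i into a different state j is the stationary frequency π_j (with four states this reduces to the Felsenstein 1981 model). The frequencies below are arbitrary illustrative values:

```python
import numpy as np

# Rate matrix Q of the equal input model: Q[i, j] = pi[j] for i != j,
# with diagonal entries chosen so that rows sum to zero. With four states
# this is the Felsenstein 1981 model. Frequencies are illustrative.
pi = np.array([0.1, 0.2, 0.3, 0.4])
n = len(pi)
Q = np.tile(pi, (n, 1))               # rate into column j is pi_j
np.fill_diagonal(Q, 0.0)
np.fill_diagonal(Q, -Q.sum(axis=1))   # rows sum to zero
```

    A quick check confirms that π is the stationary distribution (π Q = 0), as the model requires.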

  9. Demand-driven energy requirement of world economy 2007: A multi-region input-output network simulation

    Science.gov (United States)

    Chen, Zhan-Ming; Chen, G. Q.

    2013-07-01

    This study presents a network simulation of the global embodied energy flows in 2007 based on a multi-region input-output model. The world economy is portrayed as a 6384-node network, and the energy interactions between any two nodes are calculated and analyzed. According to the results, about 70% of the world's direct energy input is invested in resource, heavy manufacturing, and transportation sectors, which provide only 30% of the embodied energy to satisfy final demand. By contrast, non-transportation services sectors contribute 24% of the world's demand-driven energy requirement with only 6% of the direct energy input. Commodity trade is shown to be an important alternative to fuel trade in redistributing energy, as international commodity flows embody 1.74E+20 J of energy, equal in magnitude to 89% of the traded fuels. China is the largest embodied energy exporter with a net export of 3.26E+19 J, in contrast to the United States as the largest importer with a net import of 2.50E+19 J. The recent economic fluctuations following the financial crisis accelerated the relative expansion of energy requirements by developing countries; as a consequence, China will overtake the United States as the world's top demand-driven energy consumer in 2022, and India will become the third largest in 2015.
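
    The demand-driven accounting behind such results rests on embodied-energy intensities ε = e(I − A)⁻¹ from input-output analysis. A two-sector sketch with invented numbers, not the 2007 world tables:

```python
import numpy as np

# Two-sector sketch of demand-driven (embodied) energy accounting: total
# energy intensities are eps = e (I - A)^(-1), and final demand y is
# charged eps * y. Numbers are invented, not the 2007 world tables.
A = np.array([[0.1, 0.2],
              [0.3, 0.1]])            # inter-sector requirements
e = np.array([5.0, 1.0])              # direct energy per unit of output
y = np.array([10.0, 20.0])            # final demand by sector
L = np.linalg.inv(np.eye(2) - A)      # Leontief inverse
eps = e @ L                           # embodied energy intensities
embodied = eps * y                    # demand-driven energy requirement
x = L @ y                             # gross output needed for y
assert np.isclose(embodied.sum(), e @ x)   # totals balance
```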

  10. Recurrent network models for perfect temporal integration of fluctuating correlated inputs.

    Directory of Open Access Journals (Sweden)

    Hiroshi Okamoto

    2009-06-01

    Full Text Available Temporal integration of input is essential to the accumulation of information in various cognitive and behavioral processes, and gradually increasing neuronal activity, typically occurring within a range of seconds, is considered to reflect such computation by the brain. Some psychological evidence suggests that temporal integration by the brain is nearly perfect, that is, the integration is non-leaky, and the output of a neural integrator is accurately proportional to the strength of input. Neural mechanisms of perfect temporal integration, however, remain largely unknown. Here, we propose a recurrent network model of cortical neurons that perfectly integrates partially correlated, irregular input spike trains. We demonstrate that the rate of this temporal integration changes proportionately to the probability of spike coincidences in synaptic inputs. We analytically prove that this highly accurate integration of synaptic inputs emerges from integration of the variance of the fluctuating synaptic inputs, when their mean component is kept constant. Highly irregular neuronal firing and spike coincidences are the major features of cortical activity, but they have been separately addressed so far. Our results suggest that the efficient protocol of information integration by cortical networks essentially requires both features and hence is heterotic.
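
    A toy illustration of the variance-integration principle above (not the proposed recurrent network): a non-leaky accumulator driven by zero-mean fluctuations has an across-trial variance that grows linearly in time at a rate set by the input variance:

```python
import numpy as np

# Toy non-leaky accumulator: driven by zero-mean fluctuations, its output
# variance across trials grows linearly in time at a rate equal to the
# input variance (a stand-in for the coincidence-driven input variance).
rng = np.random.default_rng(3)
steps, trials, var_in = 1000, 2000, 0.04
noise = rng.normal(0.0, np.sqrt(var_in), size=(trials, steps))
paths = noise.cumsum(axis=1)          # perfect (non-leaky) integration
var_out = paths.var(axis=0)           # ~ var_in * t at step t
```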

  11. Assigning probability distributions to input parameters of performance assessment models

    Energy Technology Data Exchange (ETDEWEB)

    Mishra, Srikanta [INTERA Inc., Austin, TX (United States)

    2002-02-01

    This study presents an overview of various approaches for assigning probability distributions to input parameters and/or future states of performance assessment models. Specifically, three broad approaches are discussed for developing input distributions: (a) fitting continuous distributions to data, (b) subjective assessment of probabilities, and (c) Bayesian updating of prior knowledge based on new information. The report begins with a summary of the nature of data and distributions, followed by a discussion of several common theoretical parametric models for characterizing distributions. Next, various techniques are presented for fitting continuous distributions to data. These include probability plotting, method of moments, maximum likelihood estimation and nonlinear least squares analysis. The techniques are demonstrated using data from a recent performance assessment study for the Yucca Mountain project. Goodness of fit techniques are also discussed, followed by an overview of how distribution fitting is accomplished in commercial software packages. The issue of subjective assessment of probabilities is dealt with in terms of the maximum entropy distribution selection approach, as well as some common rules for codifying informal expert judgment. Formal expert elicitation protocols are discussed next, and are based primarily on the guidance provided by the US NRC. The Bayesian framework for updating prior distributions (beliefs) when new information becomes available is discussed. A simple numerical approach is presented for facilitating practical applications of the Bayes theorem. Finally, a systematic framework for assigning distributions is presented: (a) for the situation where enough data are available to define an empirical CDF or fit a parametric model to the data, and (b) to deal with the situation where only a limited amount of information is available.
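
    Two of the fitting techniques listed above have closed forms for the lognormal distribution and can be compared on one synthetic data set; the data and parameter values are illustrative, not from the Yucca Mountain study:

```python
import numpy as np

# Two fitting techniques on one synthetic data set: maximum likelihood
# and method of moments for the lognormal distribution, where both
# estimators have closed forms.
rng = np.random.default_rng(7)
data = rng.lognormal(mean=1.0, sigma=0.5, size=20000)

# Maximum likelihood: fit a normal to log(data)
log_x = np.log(data)
mu_mle, sigma_mle = log_x.mean(), log_x.std()

# Method of moments: invert the first two moments of the lognormal
m1, m2 = data.mean(), (data ** 2).mean()
sigma_mom = np.sqrt(np.log(m2 / m1 ** 2))
mu_mom = np.log(m1) - sigma_mom ** 2 / 2.0
```

    For richer families the two estimators generally differ more than here, which is one motivation for the goodness-of-fit comparisons the report discusses.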

  12. Optimal input shaping for Fisher identifiability of control-oriented lithium-ion battery models

    Science.gov (United States)

    Rothenberger, Michael J.

    This dissertation examines the fundamental challenge of optimally shaping input trajectories to maximize parameter identifiability of control-oriented lithium-ion battery models. Identifiability is a property from information theory that determines the solvability of parameter estimation for mathematical models using input-output measurements. This dissertation creates a framework that exploits the Fisher information metric to quantify the level of battery parameter identifiability, optimizes this metric through input shaping, and facilitates faster and more accurate estimation. The popularity of lithium-ion batteries is growing significantly in the energy storage domain, especially for stationary and transportation applications. While these cells have excellent power and energy densities, they are plagued with safety and lifespan concerns. These concerns are often resolved in the industry through conservative current and voltage operating limits, which reduce the overall performance and still lack robustness in detecting catastrophic failure modes. New advances in automotive battery management systems mitigate these challenges through the incorporation of model-based control to increase performance, safety, and lifespan. To achieve these goals, model-based control requires accurate parameterization of the battery model. While many groups in the literature study a variety of methods to perform battery parameter estimation, a fundamental issue of poor parameter identifiability remains apparent for lithium-ion battery models. This fundamental challenge of battery identifiability is studied extensively in the literature, and some groups are even approaching the problem of improving the ability to estimate the model parameters. The first approach is to add additional sensors to the battery to gain more information that is used for estimation. The other main approach is to shape the input trajectories to increase the amount of information that can be gained from input...
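    The core idea — that shaping the input changes how much Fisher information a fixed-length experiment carries about the parameters — can be illustrated on a toy linear-in-parameters model. The two-tap model, noise level, and inputs below are assumptions for illustration, not a battery model:

```python
import numpy as np

def fisher_info(u, sigma=0.1):
    """FIM for the linear-in-parameters model y_k = th1*u_k + th2*u_{k-1} + e_k."""
    U = np.column_stack([u[1:], u[:-1]])      # regressor matrix
    return U.T @ U / sigma**2

N = 100
u_const = np.ones(N)                           # flat input: regressors are collinear
u_prbs = np.random.default_rng(1).choice([-1.0, 1.0], size=N)  # same per-sample energy

det_const = np.linalg.det(fisher_info(u_const))
det_prbs = np.linalg.det(fisher_info(u_prbs))
print(det_const, det_prbs)   # the shaped input carries far more parameter information
```

    With equal input energy, the constant input makes the information matrix singular (the two parameters cannot be separated), while the pseudo-random binary input makes it well conditioned — the D-optimality criterion that input-shaping methods maximize.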

  13. Bootstrap rank-ordered conditional mutual information (broCMI): A nonlinear input variable selection method for water resources modeling

    Science.gov (United States)

    Quilty, John; Adamowski, Jan; Khalil, Bahaa; Rathinasamy, Maheswaran

    2016-03-01

    The input variable selection problem has recently garnered much interest in the time series modeling community, especially within water resources applications, demonstrating that information theoretic (nonlinear)-based input variable selection algorithms such as partial mutual information (PMI) selection (PMIS) provide an improved representation of the modeled process when compared to linear alternatives such as partial correlation input selection (PCIS). PMIS is a popular algorithm for water resources modeling problems considering nonlinear input variable selection; however, this method requires the specification of two nonlinear regression models, each with parametric settings that greatly influence the selected input variables. Other attempts to develop input variable selection methods using conditional mutual information (CMI) (an analog to PMI) have been formulated under different parametric pretenses such as k nearest-neighbor (KNN) statistics or kernel density estimates (KDE). In this paper, we introduce a new input variable selection method based on CMI that uses a nonparametric multivariate continuous probability estimator based on Edgeworth approximations (EA). We improve the EA method by considering the uncertainty in the input variable selection procedure by introducing a bootstrap resampling procedure that uses rank statistics to order the selected input sets; we name our proposed method bootstrap rank-ordered CMI (broCMI). We demonstrate the superior performance of broCMI when compared to CMI-based alternatives (EA, KDE, and KNN), PMIS, and PCIS input variable selection algorithms on a set of seven synthetic test problems and a real-world urban water demand (UWD) forecasting experiment in Ottawa, Canada.
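    A minimal numpy-only sketch of the bootstrap rank-ordering idea, using a simple histogram (plug-in) mutual-information estimate as a stand-in for the paper's Edgeworth-based CMI estimator; the data and all settings are synthetic assumptions:

```python
import numpy as np

def mutual_information(x, y, bins=16):
    """Plug-in (histogram) estimate of MI(x; y) in nats."""
    pxy, _, _ = np.histogram2d(x, y, bins=bins)
    pxy /= pxy.sum()
    px = pxy.sum(axis=1, keepdims=True)
    py = pxy.sum(axis=0, keepdims=True)
    nz = pxy > 0
    return float((pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])).sum())

rng = np.random.default_rng(0)
n = 2000
x1 = rng.uniform(-3, 3, n)               # informative (nonlinear) driver
x2 = rng.uniform(-3, 3, n)               # irrelevant candidate input
y = np.sin(x1) + 0.1 * rng.normal(size=n)

# Bootstrap rank ordering: how often does each candidate rank first?
wins, B = 0, 50
for _ in range(B):
    idx = rng.integers(0, n, n)          # resample with replacement
    if mutual_information(x1[idx], y[idx]) > mutual_information(x2[idx], y[idx]):
        wins += 1
print(wins / B)                          # selection frequency of x1
```

    The bootstrap selection frequency quantifies the uncertainty of the input ranking, which is the motivation behind broCMI.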

  14. Input modeling with phase-type distributions and Markov models theory and applications

    CERN Document Server

    Buchholz, Peter; Felko, Iryna

    2014-01-01

    Containing a summary of several recent results on Markov-based input modeling in a coherent notation, this book introduces and compares algorithms for parameter fitting and gives an overview of available software tools in the area. Due to progress made in recent years with respect to new algorithms to generate PH distributions and Markovian arrival processes from measured data, the models outlined are useful alternatives to other distributions or stochastic processes used for input modeling. Graduate students and researchers in applied probability, operations research and computer science along with practitioners using simulation or analytical models for performance analysis and capacity planning will find the unified notation and up-to-date results presented useful. Input modeling is the key step in model based system analysis to adequately describe the load of a system using stochastic models. The goal of input modeling is to find a stochastic model to describe a sequence of measurements from a real system...

  15. Integrate-and-fire vs Poisson models of LGN input to V1 cortex: noisier inputs reduce orientation selectivity.

    Science.gov (United States)

    Lin, I-Chun; Xing, Dajun; Shapley, Robert

    2012-12-01

    One of the reasons the visual cortex has attracted the interest of computational neuroscience is that it has well-defined inputs. The lateral geniculate nucleus (LGN) of the thalamus is the source of visual signals to the primary visual cortex (V1). Most large-scale cortical network models approximate the spike trains of LGN neurons as simple Poisson point processes. However, many studies have shown that neurons in the early visual pathway are capable of spiking with high temporal precision and their discharges are not Poisson-like. To gain an understanding of how response variability in the LGN influences the behavior of V1, we study response properties of model V1 neurons that receive purely feedforward inputs from LGN cells modeled either as noisy leaky integrate-and-fire (NLIF) neurons or as inhomogeneous Poisson processes. We first demonstrate that the NLIF model is capable of reproducing many experimentally observed statistical properties of LGN neurons. Then we show that a V1 model in which the LGN input to a V1 neuron is modeled as a group of NLIF neurons produces higher orientation selectivity than the one with Poisson LGN input. The second result implies that statistical characteristics of LGN spike trains are important for V1's function. We conclude that physiologically motivated models of V1 need to include more realistic LGN spike trains that are less noisy than inhomogeneous Poisson processes.
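    A noisy leaky integrate-and-fire (NLIF) unit of the kind used here for the LGN inputs can be simulated in a few lines; the time constant, threshold, and noise amplitude below are illustrative choices, not the paper's fitted values. With constant suprathreshold drive, the spike train is more regular (interspike-interval CV well below 1) than a Poisson process:

```python
import numpy as np

def nlif_spike_times(i_input, dt=0.1, tau=10.0, v_th=1.0, v_reset=0.0, sigma=0.1, seed=0):
    """Noisy leaky integrate-and-fire: tau*dV/dt = -V + I(t), plus white noise."""
    rng = np.random.default_rng(seed)
    v, spikes = 0.0, []
    for k, i_k in enumerate(i_input):
        v += dt / tau * (i_k - v) + sigma * np.sqrt(dt) * rng.normal()
        if v >= v_th:                     # threshold crossing -> spike and reset
            spikes.append(k * dt)
            v = v_reset
    return spikes

t = np.arange(0.0, 1000.0, 0.1)                   # 1 s at 0.1 ms resolution
spikes = nlif_spike_times(np.full(t.size, 1.5))   # constant suprathreshold drive
isi = np.diff(spikes)
cv = isi.std() / isi.mean()                       # CV is ~1 for a Poisson process
print(len(spikes), round(cv, 3))
```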

  16. Modelling Analysis of Forestry Input-Output Elasticity in China

    Directory of Open Access Journals (Sweden)

    Guofeng Wang

    2016-01-01

    Based on an extended economic model and spatial econometrics, this essay analyzed the spatial distribution and interdependence of forestry production in China; the input-output elasticities of forestry production were also calculated. Results show that significant spatial correlation exists in forestry production in China, manifested mainly as spatial agglomeration. The output elasticity of the labor force is 0.6649 and that of capital is 0.8412; the contribution of land is significantly negative. Labor and capital are the main determinants of province-level forestry production in China. Research on province-level forestry production should therefore not ignore spatial effects, and policy-making should take into consideration inter-province effects on forestry production. This study provides scientific and technical support for forestry production.
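    Input-output elasticities of this kind are the exponents of a Cobb-Douglas production function, estimated as slopes of a log-log regression. A sketch on synthetic province-level data (the data and noise level are made up; the true exponents are set near the paper's estimates to show recovery):

```python
import numpy as np

rng = np.random.default_rng(7)
n = 31                                       # e.g. one observation per province
labor = rng.lognormal(3.0, 0.4, n)
capital = rng.lognormal(4.0, 0.4, n)
# Cobb-Douglas output with assumed elasticities 0.66 (labor) and 0.84 (capital)
output = 2.0 * labor**0.66 * capital**0.84 * rng.lognormal(0.0, 0.05, n)

# Elasticities are the slopes of the log-log least-squares fit
X = np.column_stack([np.ones(n), np.log(labor), np.log(capital)])
beta, *_ = np.linalg.lstsq(X, np.log(output), rcond=None)
print(beta[1], beta[2])                      # recovered input-output elasticities
```

    A spatial-econometric treatment, as in the paper, would add a spatial lag or error term to this regression; the sketch shows only the elasticity estimation itself.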

  17. Computation of reduced energy input current stimuli for neuron phase models.

    Science.gov (United States)

    Anyalebechi, Jason; Koelling, Melinda E; Miller, Damon A

    2014-01-01

    A regularly spiking neuron can be studied using a phase model. The effect of an input stimulus current on the phase time derivative is captured by a phase response curve. This paper adapts a technique that was previously applied to conductance-based models to discover optimal input stimulus currents for phase models. First, the neuron phase response θ(t) due to an input stimulus current i(t) is computed using a phase model. The resulting θ(t) is taken to be a reference phase r(t). Second, an optimal input stimulus current i*(t) is computed to minimize a weighted sum of the square-integral `energy' of i*(t) and the tracking error between the reference phase r(t) and the phase response due to i*(t). The balance between the conflicting requirements of energy and tracking error minimization is controlled by a single parameter. The generated optimal current i*(t) is then compared to the input current i(t) which was used to generate the reference phase r(t). This technique was applied to two neuron phase models; in each case, the current i*(t) generates a phase response similar to the reference phase r(t), and the optimal current i*(t) has a lower `energy' than the square-integral of i(t). For constant i(t), the optimal current i*(t) need not be constant in time. In fact, i*(t) is large (possibly even larger than i(t)) in regions where the phase response curve indicates a stronger sensitivity to the input stimulus current, and smaller in regions of reduced sensitivity.
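    The forward problem — integrating a phase model dθ/dt = ω + Z(θ)i(t) — and the intuition of the final result (injecting current where the phase response curve Z is large is energetically efficient) can be sketched as follows. The SNIPER-type curve Z(θ) = 1 − cos θ and all gains are illustrative assumptions, and the PRC-proportional input is a heuristic stand-in for the paper's optimization:

```python
import numpy as np

def simulate(omega, u_func, T=2*np.pi, dt=1e-3):
    """Euler integration of the phase model  dtheta/dt = omega + Z(theta)*u,
    with the type-I phase response curve Z(theta) = 1 - cos(theta)."""
    theta, energy = 0.0, 0.0
    for _ in range(int(T / dt)):
        Z = 1.0 - np.cos(theta)
        u = u_func(theta)
        theta += dt * (omega + Z * u)
        energy += dt * u * u              # square-integral 'energy' of the input
    return theta, energy

omega, T = 1.0, 2*np.pi
# Input concentrated where the PRC says the cell is most sensitive
theta_prc, E = simulate(omega, lambda th: 0.1 * (1.0 - np.cos(th)))
# Constant input rescaled to spend exactly the same energy
u_c = np.sqrt(E / T)
theta_const, _ = simulate(omega, lambda th: u_c)

print(theta_prc - omega*T, theta_const - omega*T)   # phase advance of each input
```

    For the same energy budget, the PRC-shaped input produces a larger phase advance than the constant input, mirroring the abstract's observation that the optimal current is largest where the PRC indicates the strongest sensitivity.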

  18. Input-output capital coefficients for energy technologies. [Input-output model]

    Energy Technology Data Exchange (ETDEWEB)

    Tessmer, R.G. Jr.

    1976-12-01

    Input-output capital coefficients are presented for five electric and seven non-electric energy technologies. They describe the durable goods and structures purchases (at a 110 sector level of detail) that are necessary to expand productive capacity in each of twelve energy source sectors. Coefficients are defined in terms of 1967 dollar purchases per 10/sup 6/ Btu of output from new capacity, and original data sources include Battelle Memorial Institute, the Harvard Economic Research Project, The Mitre Corp., and Bechtel Corp. The twelve energy sectors are coal, crude oil and gas, shale oil, methane from coal, solvent refined coal, refined oil products, pipeline gas, coal combined-cycle electric, fossil electric, LWR electric, HTGR electric, and hydroelectric.

  19. Characteristic operator functions for quantum input-plant-output models and coherent control

    Science.gov (United States)

    Gough, John E.

    2015-01-01

    We introduce the characteristic operator as the generalization of the usual concept of a transfer function of linear input-plant-output systems to arbitrary quantum nonlinear Markovian input-output models. This is intended as a tool in the characterization of quantum feedback control systems that fits in with the general theory of networks. The definition exploits the linearity of noise differentials in both the plant Heisenberg equations of motion and the differential form of the input-output relations. Mathematically, the characteristic operator is a matrix of dimension equal to the number of outputs times the number of inputs (which must coincide), but with entries that are operators of the plant system. In this sense, the characteristic operator retains details of the effective plant dynamical structure and is an essentially quantum object. We illustrate the relevance of this definition to model reduction and simplification by showing that the convergence of the characteristic operator in adiabatic elimination limit models requires the same conditions and assumptions appearing in the work on limit quantum stochastic differential theorems of Bouten and Silberfarb [Commun. Math. Phys. 283, 491-505 (2008)]. This approach also shows in a natural way that the limit coefficients of the quantum stochastic differential equations in adiabatic elimination problems arise algebraically as Schur complements, and amounts to a model reduction where the fast degrees of freedom are decoupled from the slow ones and eliminated.

  20. Fast Prediction of Differential Mode Noise Input Filter Requirements for FLyback and Boost Unity Power Factor Converters

    DEFF Research Database (Denmark)

    Andersen, Michael Andreas E.

    1997-01-01

    Two new and simple methods to make predictions of the differential mode (DM) input filter requirements are presented, one for flyback and one for boost unity power factor converters. They have been verified by measurements. They give the designer the ability to predict the DM input noise filter...

  1. Spatial Statistical Procedures to Validate Input Data in Energy Models

    Energy Technology Data Exchange (ETDEWEB)

    Johannesson, G.; Stewart, J.; Barr, C.; Brady Sabeff, L.; George, R.; Heimiller, D.; Milbrandt, A.

    2006-01-01

    Energy modeling and analysis often relies on data collected for other purposes such as census counts, atmospheric and air quality observations, economic trends, and other primarily non-energy related uses. Systematic collection of empirical data solely for regional, national, and global energy modeling has not been established as in the abovementioned fields. Empirical and modeled data relevant to energy modeling is reported and available at various spatial and temporal scales that might or might not be those needed and used by the energy modeling community. The incorrect representation of spatial and temporal components of these data sets can result in energy models producing misleading conclusions, especially in cases of newly evolving technologies with spatial and temporal operating characteristics different from the dominant fossil and nuclear technologies that powered the energy economy over the last two hundred years. Increased private and government research and development and public interest in alternative technologies that have a benign effect on the climate and the environment have spurred interest in wind, solar, hydrogen, and other alternative energy sources and energy carriers. Many of these technologies require much finer spatial and temporal detail to determine optimal engineering designs, resource availability, and market potential. This paper presents exploratory and modeling techniques in spatial statistics that can improve the usefulness of empirical and modeled data sets that do not initially meet the spatial and/or temporal requirements of energy models. In particular, we focus on (1) aggregation and disaggregation of spatial data, (2) predicting missing data, and (3) merging spatial data sets. In addition, we introduce relevant statistical software models commonly used in the field for various sizes and types of data sets.
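    Two of the operations listed — aggregation of spatial data and a mean-preserving disaggregation back to a finer grid — reduce to simple block operations on gridded data. A minimal sketch (the 8×8 "resource" field is synthetic):

```python
import numpy as np

rng = np.random.default_rng(3)
fine = rng.gamma(2.0, 1.0, size=(8, 8))   # synthetic fine-grid resource field

# (1) Aggregation: average 2x2 blocks down to a coarser 4x4 grid
coarse = fine.reshape(4, 2, 4, 2).mean(axis=(1, 3))

# (2) Mean-preserving disaggregation: spread each coarse cell back uniformly
downscaled = np.kron(coarse, np.ones((2, 2)))

print(fine.mean(), coarse.mean(), downscaled.mean())   # identical means
```

    Real disaggregation, as the paper discusses, would replace the uniform spreading with a statistical model conditioned on covariates; the uniform version is the mass-conserving baseline.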

  2. Spatial Statistical Procedures to Validate Input Data in Energy Models

    Energy Technology Data Exchange (ETDEWEB)

    Lawrence Livermore National Laboratory

    2006-01-27

    Energy modeling and analysis often relies on data collected for other purposes such as census counts, atmospheric and air quality observations, economic trends, and other primarily non-energy-related uses. Systematic collection of empirical data solely for regional, national, and global energy modeling has not been established as in the above-mentioned fields. Empirical and modeled data relevant to energy modeling is reported and available at various spatial and temporal scales that might or might not be those needed and used by the energy modeling community. The incorrect representation of spatial and temporal components of these data sets can result in energy models producing misleading conclusions, especially in cases of newly evolving technologies with spatial and temporal operating characteristics different from the dominant fossil and nuclear technologies that powered the energy economy over the last two hundred years. Increased private and government research and development and public interest in alternative technologies that have a benign effect on the climate and the environment have spurred interest in wind, solar, hydrogen, and other alternative energy sources and energy carriers. Many of these technologies require much finer spatial and temporal detail to determine optimal engineering designs, resource availability, and market potential. This paper presents exploratory and modeling techniques in spatial statistics that can improve the usefulness of empirical and modeled data sets that do not initially meet the spatial and/or temporal requirements of energy models. In particular, we focus on (1) aggregation and disaggregation of spatial data, (2) predicting missing data, and (3) merging spatial data sets. In addition, we introduce relevant statistical software models commonly used in the field for various sizes and types of data sets.

  3. 40 CFR 96.76 - Additional requirements to provide heat input data for allocations purposes.

    Science.gov (United States)

    2010-07-01

    ... heat input data for allocations purposes. 96.76 Section 96.76 Protection of Environment ENVIRONMENTAL... to provide heat input data for allocations purposes. (a) The owner or operator of a unit that elects... chapter for any source located in a state developing source allocations based upon heat input. (b)...

  4. Translation of CODEV Lens Model To IGES Input File

    Science.gov (United States)

    Wise, T. D.; Carlin, B. B.

    1986-10-01

    The design of modern optical systems is not a trivial task; even more difficult is the requirement for an opticker to accurately describe the physical constraints implicit in his design so that a mechanical designer can correctly mount the optical elements. Typical concerns include setback of baffles, obstruction of clear apertures by mounting hardware, location of the image plane with respect to fiducial marks, and the correct interpretation of systems having odd geometry. The presence of multiple coordinate systems (optical, mechanical, system test, and spacecraft) only exacerbates an already difficult situation. A number of successful optical design programs, such as CODEV (1), have come into existence over the years while the development of Computer Aided Design (CAD) and Computer Aided Manufacturing (CAM) has allowed a number of firms to install "paperless" design systems. In such a system, a part which is entered by keyboard, or pallet, is made into a real physical piece on a milling machine which has received its instructions from the design system. However, a persistent problem is the lack of a link between the optical design programs and the mechanical CAD programs. This paper will describe a first step which has been taken to bridge this gap. Starting with the neutral plot file generated by the CODEV optical design program, we have been able to produce a file suitable for input to the ANVIL (2) and GEOMOD (3) software packages, using the International Graphics Exchange Standard (IGES) interface. This is accomplished by software of our design, which runs on a VAX (4) system. A description of the steps to be taken in transferring a design will be provided. We shall also provide some examples of designs on which this technique has been used successfully. Finally, we shall discuss limitations of the existing software and suggest some improvements which might be undertaken.

  5. The stability of input structures in a supply-driven input-output model: A regional analysis

    Energy Technology Data Exchange (ETDEWEB)

    Allison, T.

    1994-06-01

    Disruptions in the supply of strategic resources or other crucial factor inputs often present significant problems for planners and policymakers. The problem may be particularly significant at the regional level, where higher levels of product specialization mean supply restrictions are more likely to affect leading regional industries. To maintain economic stability in the event of a supply restriction, regional planners may therefore need to evaluate the importance of market versus non-market systems for allocating the remaining supply of the disrupted resource to the region's leading consuming industries. This paper reports on research that has attempted to show that large short-term changes on the supply side do not lead to substantial changes in input coefficients and do not therefore mean the abandonment of the concept of the production function, as has been suggested (Oosterhaven, 1988). The supply-driven model was tested for six sectors of the economy of Washington State and found to yield new input coefficients whose values were in most cases close approximations of their original values, even with substantial changes in supply. Average coefficient changes from a 50% output reduction in these six sectors were in the vast majority of cases (297 of a total of 315) less than +2.0% of their original values, excluding coefficient changes for the restricted input. Given these small changes, the most important issue for the validity of the supply-driven input-output model may therefore be the empirical question of the extent to which these coefficient changes are acceptable as being within the limits of approximation.
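    The mechanics of such a test — build the Ghosh (supply-driven) allocation coefficients, propagate a supply-side shock forward with those coefficients held fixed, and recompute the demand-side input coefficients — can be sketched on a toy three-sector table. The flows below are hypothetical, not the Washington State data:

```python
import numpy as np

# Hypothetical 3-sector flow table
Z = np.array([[20., 15., 10.],      # sales of sector i to sector j
              [10., 20., 5.],
              [15., 5., 10.]])
x = np.array([100., 80., 60.])      # gross outputs
v = x - Z.sum(axis=0)               # primary inputs (value added) by sector

B = Z / x[:, None]                  # Ghosh allocation coefficients b_ij = z_ij / x_i
assert np.allclose(v @ np.linalg.inv(np.eye(3) - B), x)   # supply-driven identity

v_shock = v.copy()
v_shock[0] *= 0.5                   # 50% restriction of sector 1's primary input
x_new = v_shock @ np.linalg.inv(np.eye(3) - B)   # propagate forward, B held fixed

A_old = Z / x                       # demand-side input coefficients a_ij = z_ij / x_j
Z_new = B * x_new[:, None]          # new flows implied by the fixed allocation shares
A_new = Z_new / x_new
print(np.abs(A_new / A_old - 1.0).max())   # largest relative change in a_ij
```

    In this deliberately small and unbalanced example the coefficient changes are large; the paper's empirical finding is that for the Washington State data the analogous changes were mostly under 2%.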

  6. A diffusion model for drying of a heat sensitive solid under multiple heat input modes.

    Science.gov (United States)

    Sun, Lan; Islam, Md Raisul; Ho, J C; Mujumdar, A S

    2005-09-01

    To obtain optimal drying kinetics as well as quality of the dried product in a batch dryer, the energy required may be supplied by combining different modes of heat transfer. In this work, using potato slice as a model heat sensitive drying object, experimental studies were conducted using a batch heat pump dryer designed to permit simultaneous application of conduction and radiation heat. Four heat input schemes were compared: pure convection, radiation-coupled convection, conduction-coupled convection and radiation-conduction-coupled convection. A two-dimensional drying model was developed assuming the drying rate to be controlled by liquid water diffusion. Both drying rates and temperatures within the slab during drying under all these four heat input schemes showed good accord with measurements. Radiation-coupled convection is the recommended heat transfer scheme from the viewpoint of high drying rate and low energy consumption.
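    In its simplest 1-D, constant-diffusivity form, the liquid-diffusion drying model is Fick's second law with a no-flux condition at the slab center and an equilibrium-moisture condition at the drying surface. A minimal explicit finite-difference sketch (all property values are illustrative, not the potato-slice parameters):

```python
import numpy as np

L, n = 0.01, 21                    # slab half-thickness [m], grid points
D = 1e-9                           # moisture diffusivity [m^2/s] (illustrative)
dx = L / (n - 1)
dt = 0.4 * dx**2 / D               # explicit scheme is stable for D*dt/dx^2 <= 0.5
M = np.ones(n)                     # normalized initial moisture content
M_eq = 0.1                         # equilibrium moisture at the drying surface

for _ in range(500):
    M[1:-1] += D * dt / dx**2 * (M[2:] - 2.0*M[1:-1] + M[:-2])
    M[0] = M[1]                    # symmetry (zero flux) at the slab center
    M[-1] = M_eq                   # drying surface held at equilibrium moisture

print(M.mean())                    # average moisture after partial drying
```

    The paper's 2-D model with coupled heat input modes adds a temperature equation and a moisture-dependent diffusivity; the sketch isolates the diffusion-controlled drying mechanism itself.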

  7. [Bivariate statistical model for calculating phosphorus input loads to the river from point and nonpoint sources].

    Science.gov (United States)

    Chen, Ding-Jiang; Sun, Si-Yang; Jia, Ying-Na; Chen, Jia-Bo; Lü, Jun

    2013-01-01

    Based on the hydrological difference between the point source (PS) and nonpoint source (NPS) pollution processes and the major influencing mechanism of in-stream retention processes, a bivariate statistical model was developed relating river phosphorus load to river water flow rate and temperature. Using the four model coefficients calibrated and validated from in-stream monitoring data, monthly phosphorus input loads to the river from PS and NPS can be easily determined by the model. Compared to current hydrological methods, this model takes the in-stream retention process and the upstream inflow term into consideration; it thus improves knowledge of phosphorus pollution processes and can meet the requirements of both district-based and watershed-based water quality management patterns. Using this model, the total phosphorus (TP) input load to the Changle River in Zhejiang Province was calculated. Results indicated that the annual total TP input load was (54.6 +/- 11.9) t a^-1 in 2004-2009, with upstream water inflow, PS and NPS contributing 5% +/- 1%, 12% +/- 3% and 83% +/- 3%, respectively. The cumulative NPS TP input load during the high flow periods (i.e., June, July, August and September) in summer accounted for 50% +/- 9% of the annual amount, increasing the algal blooming risk in downstream water bodies. The annual in-stream TP retention load was (4.5 +/- 0.1) t a^-1 and occupied 9% +/- 2% of the total input load. The cumulative in-stream TP retention load during the summer periods (i.e., June-September) accounted for 55% +/- 2% of the annual amount, indicating that the in-stream retention function plays an important role in seasonal TP transport and transformation processes. This bivariate statistical model only requires commonly available in-stream monitoring data (i.e., river phosphorus load, water flow rate and temperature) with no requirement of special software knowledge; it thus offers researchers and managers a cost-effective tool for...
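    The abstract does not give the model's exact functional form, but a bivariate load model of this general shape — a constant PS term, a flow-driven NPS term, and a temperature factor, four coefficients in all — can be calibrated by nonlinear least squares. Everything below (the form, the data, and the coefficient values) is a hypothetical illustration:

```python
import numpy as np
from scipy.optimize import curve_fit

def load_model(X, ps, a, b, theta):
    """Hypothetical 4-coefficient form: constant PS term, flow-driven NPS term,
    and a temperature factor standing in for in-stream retention."""
    Q, T = X
    return (ps + a * Q**b) * theta**(T - 20.0)

rng = np.random.default_rng(5)
Q = rng.uniform(1, 20, 120)                  # monthly flow rate
T = rng.uniform(5, 30, 120)                  # water temperature [deg C]
true = (0.5, 0.2, 1.3, 0.98)
L = load_model((Q, T), *true) * (1 + 0.05 * rng.normal(size=120))

popt, _ = curve_fit(load_model, (Q, T), L, p0=(1.0, 0.1, 1.0, 1.0))
print(popt)   # recovered (PS, a, b, theta)
```

    Once calibrated, the PS term evaluated alone gives the point-source contribution and the remainder the nonpoint-source contribution, which is how monthly source apportionment of the kind reported above can be read off the fitted model.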

  8. Efficient uncertainty quantification of a fully nonlinear and dispersive water wave model with random inputs

    DEFF Research Database (Denmark)

    Bigoni, Daniele; Engsig-Karup, Allan Peter; Eskilsson, Claes

    2016-01-01

    of the evolution of waves. The model is analyzed using random sampling techniques and nonintrusive methods based on generalized polynomial chaos (PC). These methods allow us to accurately and efficiently estimate the probability distribution of the solution and require only the computation of the solution...... at different points in the parameter space, allowing for the reuse of existing simulation software. The choice of the applied methods is driven by the number of uncertain input parameters and by the fact that finding the solution of the considered model is computationally intensive. We revisit experimental...
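    The nonintrusive idea — estimate the output statistics by evaluating the existing solver only at selected points in the random-input space — can be shown with Gauss-Hermite stochastic collocation on a toy one-parameter "model" (the exponential stand-in replaces the expensive wave solver):

```python
import numpy as np
from numpy.polynomial.hermite_e import hermegauss

def g(xi):
    """Toy stand-in for an expensive wave-model output, with input xi ~ N(0, 1)."""
    return np.exp(0.3 * xi)

# Nonintrusive collocation: evaluate g only at a handful of quadrature nodes
nodes, weights = hermegauss(8)
weights = weights / weights.sum()          # normalize to the Gaussian measure
mean_pc = float(np.sum(weights * g(nodes)))
var_pc = float(np.sum(weights * (g(nodes) - mean_pc) ** 2))

# Plain Monte Carlo needs orders of magnitude more model evaluations
mean_mc = g(np.random.default_rng(0).normal(size=100_000)).mean()

exact = np.exp(0.3**2 / 2)                 # E[exp(0.3*xi)] in closed form
print(mean_pc, mean_mc, exact)
```

    Eight solver runs match the closed-form mean far more tightly than 100,000 random samples — the efficiency gain that motivates generalized polynomial chaos when each model evaluation is expensive.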

  9. Alternative to Ritt's pseudodivision for finding the input-output equations of multi-output models.

    Science.gov (United States)

    Meshkat, Nicolette; Anderson, Chris; DiStefano, Joseph J

    2012-09-01

    Differential algebra approaches to structural identifiability analysis of a dynamic system model in many instances heavily depend upon Ritt's pseudodivision at an early step in analysis. The pseudodivision algorithm is used to find the characteristic set, of which a subset, the input-output equations, is used for identifiability analysis. A simpler algorithm is proposed for this step, using Gröbner Bases, along with a proof of the method that includes a reduced upper bound on derivative requirements. Efficacy of the new algorithm is illustrated with several biosystem model examples.
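    The Gröbner-basis alternative to pseudodivision amounts to polynomial elimination of the unmeasured states, leaving the input-output equation. A SymPy sketch on a two-compartment example (the model, and the notation y0, y1, y2 for y, y', y'', are illustrative; this is elimination on the prolonged equations, not the paper's full algorithm):

```python
import sympy as sp

a, b, u = sp.symbols('a b u')
x1 = sp.symbols('x1')                    # unmeasured state, to be eliminated
y0, y1, y2 = sp.symbols('y0 y1 y2')      # stand-ins for y, y', y''

# Two-compartment model:  x1' = -a*x1 + u,   y' = a*x1 - b*y,   y measured.
eqs = [a*x1 - b*y0 - y1,                 # y'  = a*x1 - b*y
       -a**2*x1 + a*u - b*y1 - y2]       # y'' = a*x1' - b*y'  with x1' substituted

# Lex order with x1 first: basis elements free of x1 span the elimination ideal
G = sp.groebner(eqs, x1, y0, y1, y2, u, a, b, order='lex')
io_eqs = [p for p in G.exprs if not p.has(x1)]
print(io_eqs)    # the input-output equation  y'' + (a+b)y' + ab*y - a*u = 0
```

    The coefficients of the surviving polynomial (here a+b and ab) are the identifiable parameter combinations that the structural identifiability analysis then examines.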

  10. Direct ventral hippocampal-prefrontal input is required for anxiety-related neural activity and behavior

    Science.gov (United States)

    Padilla-Coreano, Nancy; Bolkan, Scott S.; Pierce, Georgia M.; Blackman, Dakota R.; Hardin, William D.; Garcia-Garcia, Alvaro L.; Spellman, Timothy J.; Gordon, Joshua A.

    2016-01-01

    The ventral hippocampus (vHPC), medial prefrontal cortex (mPFC), and basolateral amygdala (BLA) are each required for the expression of anxiety-like behavior. Yet the role of each individual element of the circuit is unclear. The projection from the vHPC to the mPFC has been implicated in anxiety-related neural synchrony and spatial representations of aversion. The role of this projection was examined using multi-site neural recordings combined with optogenetic terminal inhibition. Inhibition of vHPC input to the mPFC disrupted anxiety and mPFC representations of aversion, and reduced theta synchrony in a pathway-, frequency- and task-specific manner. Moreover, bilateral, but not unilateral inhibition altered physiological correlates of anxiety in the BLA, mimicking a safety-like state. These results reveal a specific role for the vHPC-mPFC projection in anxiety-related behavior and the spatial representation of aversive information within the mPFC. PMID:26853301

  11. Wage Differentials among Workers in Input-Output Models.

    Science.gov (United States)

    Filippini, Luigi

    1981-01-01

    Using an input-output framework, the author derives hypotheses on wage differentials based on the assumption that human capital (in this case, education) will explain workers' wage differentials. The hypothetical wage differentials are tested on data from the Italian economy. (RW)

  12. Endogenous cholinergic input to the pontine REM sleep generator is not required for REM sleep to occur.

    Science.gov (United States)

    Grace, Kevin P; Vanstone, Lindsay E; Horner, Richard L

    2014-10-22

    Initial theories of rapid eye movement (REM) sleep generation posited that induction of the state required activation of the pontine subceruleus (SubC) by cholinergic inputs. Although the capacity of cholinergic neurotransmission to contribute to REM sleep generation has been established, the role of cholinergic inputs in the generation of REM sleep is ultimately undetermined as the critical test of this hypothesis (local blockade of SubC acetylcholine receptors) has not been rigorously performed. We used bilateral microdialysis in freely behaving rats (n = 32), instrumented for electroencephalographic and electromyographic recording, to locally manipulate neurotransmission in the SubC with select drugs. As predicted, combined microperfusion of D-AP5 (glutamate receptor antagonist) and muscimol (GABAA receptor agonist) in the SubC virtually eliminated REM sleep. However, REM sleep was not reduced by scopolamine microperfusion in this same region, at a concentration capable of blocking the effects of cholinergic receptor stimulation. This result suggests that transmission of REM sleep drive to the SubC is acetylcholine-independent. Although SubC cholinergic inputs are not majorly involved in REM sleep generation, they may perform a minor function in the reinforcement of transitions into REM sleep, as evidenced by increases in non-REM-to-REM sleep transition duration and failure rate during cholinergic receptor blockade. Cholinergic receptor antagonism also attenuated the normal increase in hippocampal θ oscillations that characterize REM sleep. Using computational modeling, we show that our in vivo results are consistent with a mutually excitatory interaction between the SubC and cholinergic neurons where, importantly, cholinergic neuron activation is gated by SubC activity.

  13. High Temperature Test Facility Preliminary RELAP5-3D Input Model Description

    Energy Technology Data Exchange (ETDEWEB)

    Bayless, Paul David [Idaho National Laboratory]

    2015-12-01

    A RELAP5-3D input model is being developed for the High Temperature Test Facility at Oregon State University. The current model is described in detail. Further refinements will be made to the model as final as-built drawings are released and when system characterization data are available for benchmarking the input model.

  14. A time-resolved model of the mesospheric Na layer: constraints on the meteor input function

    Directory of Open Access Journals (Sweden)

    J. M. C. Plane

    2004-01-01

    A time-resolved model of the Na layer in the mesosphere/lower thermosphere region is described, where the continuity equations for the major sodium species Na, Na+ and NaHCO3 are solved explicitly, and the other short-lived species are treated in steady-state. It is shown that the diurnal variation of the Na layer can only be modelled satisfactorily if sodium species are permanently removed below about 85 km, both through the dimerization of NaHCO3 and the uptake of sodium species on meteoric smoke particles that are assumed to have formed from the recondensation of vaporized meteoroids. When the sensitivity of the Na layer to the meteoroid input function is considered, an inconsistent picture emerges. The ratio of the column abundance of Na+ to Na is shown to increase strongly with the average meteoroid velocity, because the Na is injected at higher altitudes. Comparison with a limited set of Na+ measurements indicates that the average meteoroid velocity is probably less than about 25 km s^-1, in agreement with velocity estimates from conventional meteor radars, and considerably slower than recent observations made by wide aperture incoherent scatter radars. The Na column abundance is shown to be very sensitive to the meteoroid mass input rate, and to the rate of vertical transport by eddy diffusion. Although the magnitude of the eddy diffusion coefficient in the 80-90 km region is uncertain, there is a consensus between recent models using parameterisations of gravity wave momentum deposition that the average value is less than 3×10^5 cm^2 s^-1. This requires that the global meteoric mass input rate is less than about 20 t d^-1, which is closest to estimates from incoherent scatter radar observations. Finally, the diurnal variation in the meteoroid input rate only slightly perturbs the Na layer, because the residence time of Na in the layer is several days, and diurnal effects are effectively averaged out.

  15. Analysis of the Model Checkers' Input Languages for Modeling Traffic Light Systems

    Directory of Open Access Journals (Sweden)

    Pathiah A. Samat

    2011-01-01

    Full Text Available Problem statement: Model checking is an automated verification technique that can be used for verifying properties of a system. A number of model checking systems have been developed over the last few years. However, no guideline is available for selecting the most suitable model checker for modeling a particular system. Approach: In this study, we compare the use of four model checkers: SMV, SPIN, UPPAAL and PRISM for modeling a distributed control system. In particular, we examine the capabilities of the input languages of these model checkers for modeling this type of system. Limitations and differences of their input languages are compared and analysed by using a set of questions. Results: The results of the study show that although the input languages of these model checkers have a lot of similarities, they also have a significant number of differences. The results also show that one model checker may be more suitable than others for verifying this type of system. Conclusion: Users need to choose the right model checker for the problem to be verified.

  16. INPUT MODELLING USING STATISTICAL DISTRIBUTIONS AND ARENA SOFTWARE

    Directory of Open Access Journals (Sweden)

    Elena Iuliana GINGU (BOTEANU)

    2015-05-01

    Full Text Available The paper presents a method of properly choosing the probability distributions for failure times in a flexible manufacturing system. Several well-known distributions often provide good approximations in practice. The commonly used continuous distributions are: Uniform, Triangular, Beta, Normal, Lognormal, Weibull, and Exponential. This article studies how to use the Input Analyzer in the simulation language Arena to fit probability distributions to data, or to evaluate how well a particular distribution fits the data. The objective was to select the most appropriate statistical distributions and to estimate parameter values of failure times for each machine of a real manufacturing line.
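
The distribution-fitting step that the Input Analyzer automates can be illustrated in miniature: fit each candidate distribution by maximum likelihood and keep the one with the highest log-likelihood. This is a hedged sketch with synthetic data and only two candidates, not Arena's actual procedure:

```python
import math, random

# Sketch of distribution selection for failure times: compare candidate
# distributions by their maximized log-likelihood on the data.
random.seed(42)
data = [random.expovariate(0.5) for _ in range(500)]  # synthetic failure times

def loglik_exponential(x):
    rate = 1.0 / (sum(x) / len(x))           # MLE for the rate: 1 / mean
    return sum(math.log(rate) - rate * v for v in x)

def loglik_uniform(x):
    b = max(x)                                # MLE for Uniform(0, b)
    return -len(x) * math.log(b)

scores = {"Exponential": loglik_exponential(data),
          "Uniform": loglik_uniform(data)}
best = max(scores, key=scores.get)            # the better-fitting candidate
```

With exponentially generated failure times, the exponential candidate wins by a wide margin; in practice one would also inspect goodness-of-fit statistics, as the Input Analyzer reports.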

  17. Dispersion modeling of accidental releases of toxic gases - Sensitivity study and optimization of the meteorological input

    Science.gov (United States)

    Baumann-Stanzer, K.; Stenzel, S.

    2009-04-01

    based on the weather forecast model ALADIN. The meteorological fields analysed with INCA include temperature, humidity, wind, precipitation and cloudiness. In the frame of the project, INCA data were compared with measurements conducted at traffic-near sites. INCA analysis and very short term forecast fields (up to 6 hours) are found to be an advanced possibility to provide on-line meteorological input for the model package used by the fire brigade. Nevertheless, a high degree of caution in the interpretation of the model results is required, especially in the case of very low wind speeds, very stable atmospheric conditions, and flow deflection by buildings in the urban area or by complex topography.

  18. The MARINA model (Model to Assess River Inputs of Nutrients to seAs)

    NARCIS (Netherlands)

    Strokal, Maryna; Kroeze, Carolien; Wang, Mengru; Bai, Zhaohai; Ma, Lin

    2016-01-01

    Chinese agriculture has been developing fast towards industrial food production systems that discharge nutrient-rich wastewater into rivers. As a result, nutrient export by rivers has been increasing, resulting in coastal water pollution. We developed a Model to Assess River Inputs of Nutrients t

  19. Input Response of Neural Network Model with Lognormally Distributed Synaptic Weights

    Science.gov (United States)

    Nagano, Yoshihiro; Karakida, Ryo; Watanabe, Norifumi; Aoyama, Atsushi; Okada, Masato

    2016-07-01

    Neural assemblies in the cortical microcircuit can sustain irregular spiking activity without external inputs. On the other hand, neurons exhibit rich evoked activities driven by sensory stimulus, and both activities are reported to contribute to cognitive functions. We studied the external input response of the neural network model with lognormally distributed synaptic weights. We show that the model can achieve irregular spontaneous activity and population oscillation depending on the presence of external input. The firing rate distribution was maintained for the external input, and the order of firing rates in evoked activity reflected that in spontaneous activity. Moreover, there were bistable regions in the inhibitory input parameter space. The bimodal membrane potential distribution, which is a characteristic feature of the up-down state, was obtained under such conditions. From these results, we can conclude that the model displays various evoked activities due to the external input and is biologically plausible.

  20. Motivation Monitoring and Assessment Extension for Input-Process-Outcome Game Model

    Science.gov (United States)

    Ghergulescu, Ioana; Muntean, Cristina Hava

    2014-01-01

    This article proposes a Motivation Assessment-oriented Input-Process-Outcome Game Model (MotIPO), which extends the Input-Process-Outcome game model with game-centred and player-centred motivation assessments performed right from the beginning of the game-play. A feasibility case-study involving 67 participants playing an educational game and…

  2. Cost-optimal levels for energy performance requirements: The Concerted Action's input to the Framework Methodology

    OpenAIRE

    Thomsen, Kirsten Engelund; Aggerholm, Søren; Kluttig-Erhorn, Heike; Erhorn, Hans; Poel, Bart; Hitchin, Roger

    2011-01-01

    The CA conducted a study on experiences and challenges for setting cost optimal levels for energy performance requirements. The results were used as input by the EU Commission in their work of establishing the Regulation on a comparative methodology framework for calculating cost optimal levels of minimum energy performance requirements. In addition to the summary report released in August 2011, the full detailed report on this study is now also made available, just as the EC is about to publ...

  3. Agent Based Multiviews Requirements Model

    Institute of Scientific and Technical Information of China (English)

    2006-01-01

    Based on current research in viewpoints oriented requirements engineering and intelligent agents, we present the concept of a viewpoint agent and its abstract model, based on a meta-language, for multiviews requirements engineering. This provides a basis for consistency checking and integration of different viewpoint requirements; at the same time, the checking and integration can be realized automatically by virtue of an intelligent agent's autonomy, proactiveness and social ability. Finally, we illustrate the practical application of the model with a case study of a data flow diagram.

  4. Modelling groundwater discharge areas using only digital elevation models as input data

    Energy Technology Data Exchange (ETDEWEB)

    Brydsten, Lars [Umeaa Univ. (Sweden). Dept. of Biology and Environmental Science

    2006-10-15

    Advanced geohydrological models require data on topography, soil distribution in three dimensions, vegetation, land use, and bedrock fracture zones. To model present geohydrological conditions, these factors can be gathered with different techniques. If a future geohydrological condition is modelled in an area with positive shore displacement (say 5,000 or 10,000 years ahead), some of these factors can be difficult to predict. This could include the development of wetlands and the filling of lakes. If the goal of the model is to predict the distribution of groundwater recharge and discharge areas in the landscape, the most important factor is topography. The question is how much topography alone can explain about the distribution of geohydrological objects in the landscape. A simplified description of the distribution of geohydrological objects in the landscape is that groundwater recharge areas occur at local elevation curvatures and discharge occurs in lakes, brooks, and low situated slopes. Areas in-between these make up discharge areas during wet periods and recharge areas during dry periods. A model that could predict this pattern using only topography data needs to be able to predict high ridges and future lakes and brooks. This study uses GIS software with four different functions using digital elevation models as input data: geomorphometrical parameters to predict landscape ridges, basin fill for predicting lakes, flow accumulation for predicting future waterways, and topographical wetness indexes for dividing in-between areas based on degree of wetness. An area between the village of and Forsmarks' Nuclear Power Plant has been used to calibrate the model. The area is within the SKB 10-metre Elevation Model (DEM) and has a high-resolution orienteering map for wetlands. Wetlands are assumed to be groundwater discharge areas. Five hundred points were randomly distributed across the wetlands. These are potential discharge points. Model parameters were chosen with the

  5. Modelling the soil microclimate: does the spatial or temporal resolution of input parameters matter?

    Directory of Open Access Journals (Sweden)

    Anna Carter

    2016-01-01

    Full Text Available The urgency of predicting future impacts of environmental change on vulnerable populations is advancing the development of spatially explicit habitat models. Continental-scale climate and microclimate layers are now widely available. However, most terrestrial organisms exist within microclimate spaces that are very small relative to the spatial resolution of those layers. We examined the effects of multi-resolution, multi-extent topographic and climate inputs on the accuracy of hourly soil temperature predictions for a small island, generated at a very high spatial resolution (<1 m²) using the mechanistic microclimate model in NicheMapR. Achieving an accuracy comparable to lower-resolution, continental-scale microclimate layers (within about 2–3°C of observed values) required the use of daily weather data as well as high resolution topographic layers (elevation, slope, aspect, horizon angles), while inclusion of site-specific soil properties did not markedly improve predictions. Our results suggest that large-extent microclimate layers may not provide accurate estimates of microclimate conditions when the spatial extent of a habitat or other area of interest is similar to or smaller than the spatial resolution of the layers themselves. Thus, effort in sourcing model inputs should be focused on obtaining high resolution terrain data, e.g., via LiDAR or photogrammetry, and local weather information rather than in situ sampling of microclimate characteristics.

  6. Identifying best-fitting inputs in health-economic model calibration: a Pareto frontier approach.

    Science.gov (United States)

    Enns, Eva A; Cipriano, Lauren E; Simons, Cyrena T; Kong, Chung Yin

    2015-02-01

    To identify best-fitting input sets using model calibration, individual calibration target fits are often combined into a single goodness-of-fit (GOF) measure using a set of weights. Decisions in the calibration process, such as which weights to use, influence which sets of model inputs are identified as best-fitting, potentially leading to different health economic conclusions. We present an alternative approach to identifying best-fitting input sets based on the concept of Pareto-optimality. A set of model inputs is on the Pareto frontier if no other input set simultaneously fits all calibration targets as well or better. We demonstrate the Pareto frontier approach in the calibration of 2 models: a simple, illustrative Markov model and a previously published cost-effectiveness model of transcatheter aortic valve replacement (TAVR). For each model, we compare the input sets on the Pareto frontier to an equal number of best-fitting input sets according to 2 possible weighted-sum GOF scoring systems, and we compare the health economic conclusions arising from these different definitions of best-fitting. For the simple model, outcomes evaluated over the best-fitting input sets according to the 2 weighted-sum GOF schemes were virtually nonoverlapping on the cost-effectiveness plane and resulted in very different incremental cost-effectiveness ratios ($79,300 [95% CI 72,500-87,600] v. $139,700 [95% CI 79,900-182,800] per quality-adjusted life-year [QALY] gained). Input sets on the Pareto frontier spanned both regions ($79,000 [95% CI 64,900-156,200] per QALY gained). The TAVR model yielded similar results. Choices in generating a summary GOF score may result in different health economic conclusions. The Pareto frontier approach eliminates the need to make these choices by using an intuitive and transparent notion of optimality as the basis for identifying best-fitting input sets. © The Author(s) 2014.
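
The Pareto-optimality criterion described above can be sketched directly: an input set is on the frontier if no other set fits every calibration target at least as well and at least one target strictly better. The per-target errors below are hypothetical (lower is a better fit):

```python
# Hedged sketch of the Pareto-frontier idea for model calibration.
def pareto_frontier(sets):
    """sets: dict name -> tuple of per-target errors (lower = better fit)."""
    def dominates(a, b):
        # a dominates b: at least as good everywhere, strictly better somewhere
        return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))
    return {n for n, e in sets.items()
            if not any(dominates(o, e) for m, o in sets.items() if m != n)}

# Hypothetical errors on two calibration targets for four input sets
errors = {"A": (1.0, 3.0), "B": (2.0, 2.0), "C": (3.0, 1.0), "D": (3.0, 3.0)}
frontier = pareto_frontier(errors)   # D is dominated by B, the rest survive
```

No weights appear anywhere, which is exactly the paper's argument: the frontier is invariant to the choice of goodness-of-fit weighting.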

  7. Assessing the required additional organic inputs to soils to reach the 4 per 1000 objective at the global scale: a RothC project

    Science.gov (United States)

    Lutfalla, Suzanne; Skalsky, Rastislav; Martin, Manuel; Balkovic, Juraj; Havlik, Petr; Soussana, Jean-François

    2017-04-01

    The 4 per 1000 Initiative underlines the role of soil organic matter in addressing the three-fold challenge of food security, adaptation of the land sector to climate change, and mitigation of human-induced GHG emissions. It sets an ambitious global target of a 0.4% (4/1000) annual increase in top soil organic carbon (SOC) stock. The present collaborative project between the 4 per 1000 research program, INRA and IIASA aims at providing a first global assessment of the translation of this soil organic carbon sequestration target into the equivalent organic matter inputs target. Indeed, soil organic carbon builds up in the soil through different processes leading to an increased input of carbon to the system (by increasing returns to the soil for instance) or a decreased output of carbon from the system (mainly by biodegradation and mineralization processes). Here we answer the question of how much extra organic matter must be added to agricultural soils every year (in otherwise unchanged climatic conditions) in order to guarantee a 0.4% yearly increase of total soil organic carbon stocks (40cm soil depth is considered). We use the RothC model of soil organic matter turnover on a spatial grid over 10 years to model two situations for croplands: a first situation where soil organic carbon remains constant (system at equilibrium) and a second situation where soil organic matter increases by 0.4% every year. The model accounts for the effects of soil type, temperature, moisture content and plant cover on the turnover process, it is run on a monthly time step, and it can simulate the needed organic input to sustain a certain SOC stock (or evolution of SOC stock). These two SOC conditions lead to two average yearly plant inputs over 10 years. The difference between the two simulated inputs represent the additional yearly input needed to reach the 4 per 1000 objective (input_eq for inputs needed for SOC to remain constant; input_4/1000 for inputs needed for SOC to reach
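
The arithmetic behind the target can be sketched with a single-pool model (not RothC; the decay rate and initial stock below are illustrative assumptions): a 0.4% yearly increase compounds to roughly 4.1% over a 10-year horizon, and the input needed each year must cover both the stock increment and the decay of the growing stock:

```python
# Back-of-envelope sketch, not RothC: one SOC pool with first-order decay,
# dSOC/dt = input - k*SOC. Values of SOC0 and k are hypothetical.
SOC0, k = 50.0, 0.02          # t C/ha and 1/yr, illustrative only
soc = SOC0
extra = 0.0                   # cumulative input above the equilibrium input
for year in range(10):
    target = soc * 1.004                  # 0.4% increase this year
    needed = k * soc + (target - soc)     # replace decay + grow the stock
    extra += needed - k * SOC0            # equilibrium input is k * SOC0
    soc = target
growth_pct = (soc / SOC0 - 1) * 100       # compounded growth over 10 years
```

The difference between the two simulated input series (equilibrium versus 0.4% growth) is exactly the "additional yearly input" the project estimates, here in a deliberately simplified form.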

  8. Practical approximation method for firing-rate models of coupled neural networks with correlated inputs

    Science.gov (United States)

    Barreiro, Andrea K.; Ly, Cheng

    2017-08-01

    Rapid experimental advances now enable simultaneous electrophysiological recording of neural activity at single-cell resolution across large regions of the nervous system. Models of this neural network activity will necessarily increase in size and complexity, thus increasing the computational cost of simulating them and the challenge of analyzing them. Here we present a method to approximate the activity and firing statistics of a general firing rate network model (of the Wilson-Cowan type) subject to noisy correlated background inputs. The method requires solving a system of transcendental equations and is fast compared to Monte Carlo simulations of coupled stochastic differential equations. We implement the method with several examples of coupled neural networks and show that the results are quantitatively accurate even with moderate coupling strengths and an appreciable amount of heterogeneity in many parameters. This work should be useful for investigating how various neural attributes qualitatively affect the spiking statistics of coupled neural networks.
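
The kind of transcendental system such a method must solve can be illustrated with a two-population rate model and damped fixed-point iteration. The transfer function and coupling weights here are our assumptions, not the paper's:

```python
import math

# Sketch: steady rates of a Wilson-Cowan-type network satisfy the
# transcendental system r = f(W r + I); solve it by damped iteration.
def f(x):
    return 1.0 / (1.0 + math.exp(-x))    # sigmoidal transfer (an assumption)

W = [[0.5, -1.0], [1.2, -0.3]]           # hypothetical E/I coupling weights
I = [0.2, 0.1]                           # hypothetical background inputs

r = [0.0, 0.0]
for _ in range(500):
    new = [f(sum(W[i][j] * r[j] for j in range(2)) + I[i]) for i in range(2)]
    r = [0.5 * ri + 0.5 * ni for ri, ni in zip(r, new)]   # damping for stability

# residual of the transcendental equations is ~0 at the fixed point
residual = max(abs(r[i] - f(sum(W[i][j] * r[j] for j in range(2)) + I[i]))
               for i in range(2))
```

Solving such a system takes microseconds, which is the speed advantage over Monte Carlo simulation of the corresponding stochastic differential equations that the abstract emphasizes.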

  9. "Updates to Model Algorithms & Inputs for the Biogenic ...

    Science.gov (United States)

    We have developed new canopy emission algorithms and land use data for BEIS. Simulations with BEIS v3.4 and these updates in CMAQ v5.0.2 are compared these changes to the Model of Emissions of Gases and Aerosols from Nature (MEGAN) and evaluated the simulations against observations. This has resulted in improvements in model evaluations of modeled isoprene, NOx, and O3. The National Exposure Research Laboratory (NERL) Atmospheric Modeling and Analysis Division (AMAD) conducts research in support of EPA mission to protect human health and the environment. AMAD research program is engaged in developing and evaluating predictive atmospheric models on all spatial and temporal scales for forecasting the air quality and for assessing changes in air quality and air pollutant exposures, as affected by changes in ecosystem management and regulatory decisions. AMAD is responsible for providing a sound scientific and technical basis for regulatory policies based on air quality models to improve ambient air quality. The models developed by AMAD are being used by EPA, NOAA, and the air pollution community in understanding and forecasting not only the magnitude of the air pollution problem, but also in developing emission control policies and regulations for air quality improvements.

  10. Evapotranspiration Input Data for the Central Valley Hydrologic Model (CVHM)

    Data.gov (United States)

    U.S. Geological Survey, Department of the Interior — This digital dataset contains monthly reference evapotranspiration (ETo) data for the Central Valley Hydrologic Model (CVHM). The Central Valley encompasses an...

  11. Using Crowd Sensed Data as Input to Congestion Model

    DEFF Research Database (Denmark)

    Lehmann, Anders; Gross, Allan

    2016-01-01

    Emission of airborne pollutants and climate gasses from the transport sector is a growing problem, both in industrialised and developing countries. Planning of urban transport systems is essential to minimise the environmental, health and economic impact of congestion in the transport system. To get accurate and timely information on traffic congestion, and by extension information on air pollution, near real time traffic models are needed. We present in this paper an implementation of the Restricted Stochastic User Equilibrium model, which is capable of modelling congestion for very large urban traffic systems in less than an hour. The model is implemented in an open source database system, for easy interfacing with GIS resources and crowd sensed transportation data.

  12. Input-dependent wave attenuation in a critically-balanced model of cortex.

    Directory of Open Access Journals (Sweden)

    Xiao-Hu Yan

    Full Text Available A number of studies have suggested that many properties of brain activity can be understood in terms of critical systems. However, it is still not known how the long-range susceptibilities characteristic of criticality arise in the living brain from its local connectivity structures. Here we prove that a dynamically critically-poised model of cortex acquires an infinitely long-ranged susceptibility in the absence of input. When an input is presented, the susceptibility attenuates exponentially as a function of distance, with an increasing spatial attenuation constant (i.e., decreasing range) the larger the input. This is in direct agreement with recent results showing that waves of local field potential activity evoked by single spikes in primary visual cortex of cat and macaque attenuate with a characteristic length that also increases with decreasing contrast of the visual stimulus. A susceptibility that changes spatial range with input strength can be thought of as implementing an input-dependent spatial integration: when the input is large, no additional evidence is needed beyond the local input; when the input is weak, evidence needs to be integrated over a larger spatial domain to reach a decision. Such input-strength-dependent strategies have been demonstrated in visual processing. Our results suggest that input-strength-dependent spatial integration may be a natural feature of a critically-balanced cortical network.
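
The qualitative relationship, exponential attenuation with a range that shrinks as input grows, can be sketched as follows. All functional forms and constants are illustrative assumptions, not the paper's derivation:

```python
import math

# Sketch of the claimed behavior: susceptibility chi(d) = exp(-d / lambda(I)),
# with the attenuation length lambda shrinking as input strength I grows.
def attenuation_length(I, lam0=10.0):
    return lam0 / (1.0 + I)          # larger input -> shorter range (assumed form)

def susceptibility(d, I):
    return math.exp(-d / attenuation_length(I))

# at a fixed distance, a weak input leaves a longer-ranged susceptibility
weak = susceptibility(5.0, 0.1)
strong = susceptibility(5.0, 2.0)
```

This reproduces the paper's qualitative point: weak inputs (low contrast) propagate influence over larger spatial domains than strong ones.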

  13. High Flux Isotope Reactor system RELAP5 input model

    Energy Technology Data Exchange (ETDEWEB)

    Morris, D.G.; Wendel, M.W.

    1993-01-01

    A thermal-hydraulic computational model of the High Flux Isotope Reactor (HFIR) has been developed using the RELAP5 program. The purpose of the model is to provide a state-of-the-art thermal-hydraulic simulation tool for analyzing selected hypothetical accident scenarios for a revised HFIR Safety Analysis Report (SAR). The model includes (1) a detailed representation of the reactor core and other vessel components, (2) three heat exchanger/pump cells, (3) pressurizing pumps and letdown valves, and (4) the secondary coolant system (with less detail than the primary system). Data from HFIR operation, component tests, tests in facility mockups and the HFIR, HFIR-specific experiments, and other pertinent experiments performed independently of HFIR were used to construct the model and validate it to the extent permitted by the data. The detailed version of the model has been used to simulate loss-of-coolant accidents (LOCAs), while the abbreviated version has been developed for the operational transients that allow use of a less detailed nodalization. Analysis of station blackout with long-term core decay heat removal via natural convection has been performed using the core and vessel portions of the detailed model.

  14. Regional input-output models and the treatment of imports in the European System of Accounts

    OpenAIRE

    Kronenberg, Tobias

    2011-01-01

    Input-output models are often used in regional science due to their versatility and their ability to capture many of the distinguishing features of a regional economy. Input-output tables are available for all EU member countries, but they are hard to find at the regional level, since many regional governments lack the resources or the will to produce reliable, survey-based regional input-output tables. Therefore, in many cases researchers adopt nonsurvey techniques to derive regional input-o...
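
One widely used nonsurvey technique of the kind the abstract mentions is the simple location quotient (SLQ), which scales down national input coefficients for sectors under-represented in the region. A toy sketch with hypothetical employment figures and coefficients:

```python
# Sketch of SLQ regionalization (a standard nonsurvey technique); all
# employment figures and input coefficients below are hypothetical.
def slq(region_emp, nation_emp):
    """Location quotient per sector: regional share / national share."""
    region_share = {s: e / sum(region_emp.values()) for s, e in region_emp.items()}
    nation_share = {s: e / sum(nation_emp.values()) for s, e in nation_emp.items()}
    return {s: region_share[s] / nation_share[s] for s in region_emp}

def regionalize(a_national, lq):
    # regional coefficient = national coefficient * min(1, SLQ of supplying sector)
    return {(i, j): a * min(1.0, lq[i]) for (i, j), a in a_national.items()}

emp_r = {"agri": 30, "manu": 10}      # regional employment
emp_n = {"agri": 100, "manu": 200}    # national employment
lq = slq(emp_r, emp_n)                # agri over-represented, manu under
a_nat = {("agri", "manu"): 0.2, ("manu", "agri"): 0.3}
a_reg = regionalize(a_nat, lq)        # only the manu-supplied coefficient shrinks
```

The known weakness of this shortcut, and a motivation for the survey-based tables the abstract discusses, is that it adjusts only for sector size, not for actual regional trade patterns.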

  15. Large uncertainty in soil carbon modelling related to carbon input calculation method

    DEFF Research Database (Denmark)

    Keel, Sonja; Leifeld, Jens; Mayer, Jochen

    2017-01-01

    The application of dynamic models to report changes in soil organic carbon (SOC) stocks, for example as part of greenhouse gas inventories, is becoming increasingly important. Most of these models rely on input data from harvest residues or decaying plant parts and also organic fertilizer, together referred to as soil carbon (C) inputs. The soil C inputs from plants are derived from measured agricultural yields using allometric equations. Here we compared the results of five previously published equations. Our goal was to test whether the choice of method is critical for modelling soil C and, if so, which of these equations is most suitable for Swiss conditions. For this purpose we used the five equations to calculate soil C inputs based on yield data from a Swiss long-term cropping experiment. Estimated annual soil C inputs from various crops were averaged over 28 years and four fertilizer
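
Allometric equations of the kind compared here generally share one structure: yield is scaled up to total above-ground biomass via a harvest index, residue and root-derived inputs are estimated from it, and the result is converted to carbon. A hedged sketch with placeholder coefficients, not any of the five published equations:

```python
# Generic shape of a yield-to-soil-C-input allometric equation.
# Every coefficient below is a hypothetical placeholder.
def soil_c_input(yield_dm, harvest_index=0.45, root_shoot=0.2,
                 extra_root_frac=0.65, c_frac=0.45):
    shoot = yield_dm / harvest_index              # total above-ground biomass
    residues = shoot - yield_dm                   # straw/stubble left after harvest
    roots = shoot * root_shoot * (1 + extra_root_frac)  # roots + rhizodeposits
    return (residues + roots) * c_frac            # convert dry matter to carbon

c_in = soil_c_input(5.0)   # t C/ha for a 5 t/ha grain yield (illustrative)
```

Because every published equation chooses these coefficients differently, the same yield series can produce substantially different C inputs, which is the source of the modelling uncertainty the study quantifies.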

  16. Scientific and technical advisory committee review of the nutrient inputs to the watershed model

    Science.gov (United States)

    The following is a report by a STAC Review Team concerning the methods and documentation used by the Chesapeake Bay Partnership for evaluation of nutrient inputs to Phase 6 of the Chesapeake Bay Watershed Model. The “STAC Review of the Nutrient Inputs to the Watershed Model” (previously referred to...

  17. From LCC to LCA Using a Hybrid Input Output Model – A Maritime Case Study

    DEFF Research Database (Denmark)

    Kjær, Louise Laumann; Pagoropoulos, Aris; Hauschild, Michael Zwicky;

    2015-01-01

    As companies try to embrace life cycle thinking, Life Cycle Assessment (LCA) and Life Cycle Costing (LCC) have proven to be powerful tools. In this paper, an Environmental Input-Output model is used for analysis as it enables an LCA using the same economic input data as LCC. This approach helps...
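
The coupling the abstract describes rests on the Leontief model: total output x = (I − A)⁻¹y links the monetary final demand y used in LCC to the emissions e = Bx used in LCA. A minimal sketch with hypothetical 2×2 numbers:

```python
# Sketch of environmentally extended input-output analysis: the same monetary
# demand vector drives both costing (LCC) and emissions (LCA).
# All coefficients below are hypothetical.
def solve_2x2(A, y):
    """x = (I - A)^(-1) y, written out for a 2x2 technology matrix A."""
    a, b, c, d = 1 - A[0][0], -A[0][1], -A[1][0], 1 - A[1][1]
    det = a * d - b * c
    return [(d * y[0] - b * y[1]) / det, (a * y[1] - c * y[0]) / det]

A = [[0.1, 0.2], [0.3, 0.1]]       # inter-industry input coefficients
y = [100.0, 50.0]                  # final demand (same money units as the LCC)
B = [0.5, 2.0]                     # kg CO2 per unit output of each sector
x = solve_2x2(A, y)                # total output, including indirect demand
emissions = sum(bi * xi for bi, xi in zip(B, x))
```

Total output exceeds final demand because of the inter-industry supply chains, and the emissions total therefore captures indirect as well as direct burdens.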

  18. Tracking cellular telephones as an input for developing transport models

    CSIR Research Space (South Africa)

    Cooper, Antony K

    2010-08-01

    Full Text Available of tracking cellular telephones and using the data to populate transport and other models. We report here on one of the pilots, known as DYNATRACK (Dynamic Daily Path Tracking), a larger experiment conducted in 2007 with a more heterogeneous group of commuters...

  19. Physics input for modelling superfluid neutron stars with hyperon cores

    CERN Document Server

    Gusakov, M E; Kantor, E M

    2014-01-01

    Observations of massive ($M \\approx 2.0~M_\\odot$) neutron stars (NSs), PSRs J1614-2230 and J0348+0432, rule out most of the models of nucleon-hyperon matter employed in NS simulations. Here we construct three possible models of nucleon-hyperon matter consistent with the existence of $2~M_\\odot$ pulsars as well as with semi-empirical nuclear matter parameters at saturation, and semi-empirical hypernuclear data. Our aim is to calculate for these models all the parameters necessary for modelling dynamics of hyperon stars (such as equation of state, adiabatic indices, thermodynamic derivatives, relativistic entrainment matrix, etc.), making them available for a potential user. To this aim a general non-linear hadronic Lagrangian involving $\\sigma\\omega\\rho\\phi\\sigma^\\ast$ meson fields, as well as quartic terms in vector-meson fields, is considered. A universal scheme for calculation of the $\\ell=0,1$ Landau Fermi-liquid parameters and relativistic entrainment matrix is formulated in the mean-field approximation. ...

  20. Human task animation from performance models and natural language input

    Science.gov (United States)

    Esakov, Jeffrey; Badler, Norman I.; Jung, Moon

    1989-01-01

    Graphical manipulation of human figures is essential for certain types of human factors analyses such as reach, clearance, fit, and view. In many situations, however, the animation of simulated people performing various tasks may be based on more complicated functions involving multiple simultaneous reaches, critical timing, resource availability, and human performance capabilities. One rather effective means for creating such a simulation is through a natural language description of the tasks to be carried out. Given an anthropometrically-sized figure and a geometric workplace environment, various simple actions such as reach, turn, and view can be effectively controlled from language commands or standard NASA checklist procedures. The commands may also be generated by external simulation tools. Task timing is determined from actual performance models, if available, such as strength models or Fitts' Law. The resulting action specifications are animated on a Silicon Graphics Iris workstation in real-time.
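
Fitts' Law, one of the performance models named above, predicts movement time from the distance D to a target and the target width W; in its classic form MT = a + b·log₂(2D/W). A sketch with hypothetical coefficients:

```python
import math

# Fitts' law in its classic form; the coefficients a and b are empirical
# and the values below are hypothetical placeholders.
def fitts_mt(D, W, a=0.1, b=0.15):
    """Movement time in seconds for distance D and target width W."""
    return a + b * math.log2(2 * D / W)

near = fitts_mt(10.0, 5.0)    # index of difficulty log2(4) = 2 bits
far = fitts_mt(80.0, 5.0)     # index of difficulty log2(32) = 5 bits
```

In an animation pipeline like the one described, such a model supplies the duration of each reach so that simulated task timing reflects human performance rather than arbitrary keyframing.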

  1. Tumor Growth Model with PK Input for Neuroblastoma Drug Development

    Science.gov (United States)

    2015-09-01

    9/2012-4/30/2017, 2.40 calendar, NCI: Anticancer Drug Pharmacology in Very Young Children. The proposed studies will use pharmacokinetic... anticancer drugs.
    DOD W81XWH-14-1-0103 / CA130396 (Stewart), 9/1/2014-8/31/2016, 0.60 calendar, DOD-Department of the Army: Tumor Growth Model with PK... anticancer drugs.
    V Foundation Translational (Stewart), 11/1/2012-10/31/2015, 0.60 calendar, The V Foundation for Cancer Research: Identification & preclinical testing...

  2. Influence of input matrix representation on topic modelling performance

    CSIR Research Space (South Africa)

    De Waal, A

    2010-11-01

    Full Text Available model, perplexity is an appropriate measure. It provides an indication of the model's ability to generalise by measuring the exponent of the mean log-likelihood of words in a held-out test set of the corpus. The exploratory abilities of the latent... The phrases are clearly more intelligible than single-word phrases in many cases, thus demonstrating the qualitative advantage of the proposed method. [1] For the CRAN corpus, each subset of chunks includes the top 1000 chunks with the highest...
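
The perplexity measure described above is simply the exponential of the negative mean log-likelihood of held-out words; a minimal sketch:

```python
import math

# Perplexity: exp of the negative mean per-word log-likelihood.
# Lower perplexity means the model generalises better to held-out text.
def perplexity(log_probs):
    return math.exp(-sum(log_probs) / len(log_probs))

# toy held-out set: the model assigns every word probability 1/8,
# so the perplexity equals the effective vocabulary size, 8
lp = [math.log(1 / 8)] * 100
pp = perplexity(lp)
```

This is why perplexity is interpretable as an effective branching factor: a model that is uniformly uncertain over k words has perplexity exactly k.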

  3. Remote sensing inputs to landscape models which predict future spatial land use patterns for hydrologic models

    Science.gov (United States)

    Miller, L. D.; Tom, C.; Nualchawee, K.

    1977-01-01

    A tropical forest area of Northern Thailand provided a test case of the application of the approach in more natural surroundings. Remote sensing imagery subjected to proper computer analysis has been shown to be a very useful means of collecting spatial data for the science of hydrology. Remote sensing products provide direct input to hydrologic models and practical data bases for planning large and small-scale hydrologic developments. Combining the available remote sensing imagery together with available map information in the landscape model provides a basis for substantial improvements in these applications.

  4. Researches on the Model of Telecommunication Service with Variable Input Tariff Rates

    Institute of Scientific and Technical Information of China (English)

    2001-01-01

    The paper sets up and studies a model of a telecommunication queueing service system with variable input tariff rates, which can relieve congested traffic flows during the busy hour and so enhance the utilization rate of the telecom's resources.
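
The mechanism can be illustrated with a textbook M/M/1 queue (our simplification, not the paper's model): raising the busy-hour tariff lowers the arrival rate, and near saturation even a modest reduction cuts waiting time sharply. The demand elasticity and rates below are hypothetical:

```python
# Sketch of tariff-controlled load shedding in a single-server queue.
def mm1_wait(arrival, service):
    """Mean time in system for an M/M/1 queue (requires arrival < service)."""
    assert arrival < service, "queue must be stable"
    return 1.0 / (service - arrival)

def demand(base_rate, tariff, elasticity=0.5):
    """Arrival rate falls as the tariff rises (assumed demand response)."""
    return base_rate / (1.0 + elasticity * tariff)

mu = 10.0                                   # service rate (calls/min)
flat = mm1_wait(demand(9.5, 0.0), mu)       # busy-hour load, flat tariff
peak = mm1_wait(demand(9.5, 1.0), mu)       # same load, higher busy-hour tariff
```

Because M/M/1 delay blows up as the load approaches capacity, shifting even a fraction of busy-hour traffic yields a disproportionate improvement, which is the rationale for variable tariffs.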

  5. Loss of GABAergic inputs in APP/PS1 mouse model of Alzheimer's disease

    Directory of Open Access Journals (Sweden)

    Tutu Oyelami

    2014-04-01

    Full Text Available Alzheimer's disease (AD) is characterized by symptoms which include seizures, sleep disruption, loss of memory as well as anxiety in patients. Of particular importance is the possibility of preventing the progressive loss of neuronal projections in the disease. Transgenic mice overexpressing EOFAD mutant PS1 (L166P) and mutant APP (APP KM670/671NL Swedish) (APP/PS1) develop a very early and robust amyloid pathology and display synaptic plasticity impairments and cognitive dysfunction. Here we investigated GABAergic neurotransmission, using multi-electrode array (MEA) technology and pharmacological manipulation to quantify the effect of GABA blockers on field excitatory postsynaptic potentials (fEPSPs), and immunostaining of GABAergic neurons. Using MEA technology we confirm impaired LTP induction by high frequency stimulation in the APP/PS1 hippocampal CA1 region, which was associated with a reduced alteration of the paired-pulse ratio after LTP induction. Synaptic dysfunction was also observed under manipulation of the external calcium concentration and in the input-output curve. Electrophysiological recordings from brain slices of the hippocampal CA1 area, in the presence of cocktails of GABAergic receptor blockers, further demonstrated a significant reduction of GABAergic inputs in APP/PS1 mice. Moreover, immunostaining of GAD65, a specific marker for GABAergic neurons, revealed a reduction of GABAergic inputs in the CA1 area of the hippocampus. These results might be linked to the increased seizure sensitivity, premature death and cognitive dysfunction in this animal model of AD. Further in-depth analysis of GABAergic dysfunction in APP/PS1 mice is required and may open new perspectives for AD therapy by restoring GABAergic function.

  6. Monitoring the inputs required to extend and sustain hygiene promotion: findings from the GLAAS 2013/2014 survey.

    Science.gov (United States)

    Moreland, Leslie D; Gore, Fiona M; Andre, Nathalie; Cairncross, Sandy; Ensink, Jeroen H J

    2016-08-01

    There are significant gaps in information about the inputs required to effectively extend and sustain hygiene promotion activities to improve people's health outcomes through water, sanitation and hygiene (WASH) interventions. We sought to analyse current country and global trends in the use of key inputs required for effective and sustainable implementation of hygiene promotion, to help guide hygiene promotion policy and decision-making after 2015. Data collected in response to the GLAAS 2013/2014 survey from 93 of 94 countries were included, and responses were analysed for 12 questions assessing the inputs and enabling environment for hygiene promotion under four thematic areas. Data from 20 of 23 External Support Agencies (ESAs), collected through self-administered surveys, were also included and analysed. Firstly, the data showed a large variation in the way in which hygiene promotion is defined and in what constitutes key activities in this area. Secondly, the challenges to implementing hygiene promotion are considerable: they include poor implementation of policies and plans, weak coordination mechanisms, human resource limitations and a lack of available hygiene promotion budget data. Despite the proven benefits of hand washing with soap, a critical hygiene-related factor in minimising infection, GLAAS 2013/2014 survey data showed that hygiene promotion remains a neglected component of WASH. Additional research to identify the context-specific strategies and inputs required to enhance the effectiveness of hygiene promotion at scale is needed. Improved data collection methods are also necessary to advance the availability and reliability of hygiene-specific information. © 2016 John Wiley & Sons Ltd.

  7. Statistical selection of multiple-input multiple-output nonlinear dynamic models of spike train transformation.

    Science.gov (United States)

    Song, Dong; Chan, Rosa H M; Marmarelis, Vasilis Z; Hampson, Robert E; Deadwyler, Sam A; Berger, Theodore W

    2007-01-01

    A multiple-input multiple-output nonlinear dynamic model of spike train to spike train transformations was previously formulated for hippocampal-cortical prostheses. This paper further describes the statistical methods for selecting significant inputs (self-terms) and interactions between inputs (cross-terms) in this Volterra kernel-based model. In our approach, the model structure is determined by progressively adding self-terms and cross-terms using a forward stepwise model selection technique. Model coefficients are then pruned based on the Wald test. Results showed that the reduced kernel models, which contain far fewer coefficients than the full Volterra kernel model, gave good fits to novel data. These models can be used to analyze the functional interactions between neurons during behavior.
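
    The selection principle can be sketched in a plain linear-regression setting: candidate terms (stand-ins for the self- and cross-terms) are added greedily while they reduce the residual error, and the search stops when no remaining term helps. This is a simplified illustration of forward stepwise selection, not the paper's Volterra-kernel code; the tolerance and data below are invented:

```python
import numpy as np

def forward_stepwise(X, y, candidates, tol=1e-3):
    """Greedy forward selection: repeatedly add the candidate column of X
    that most reduces the residual sum of squares, until no addition
    improves the fit by more than tol."""
    candidates = list(candidates)   # work on a copy
    selected = []
    residual = y.copy()
    while candidates:
        sse0 = float(residual @ residual)
        best, best_sse = None, sse0
        for j in candidates:
            cols = X[:, selected + [j]]
            beta, *_ = np.linalg.lstsq(cols, y, rcond=None)
            r = y - cols @ beta
            sse = float(r @ r)
            if sse < best_sse - tol:
                best, best_sse = j, sse
        if best is None:
            break
        selected.append(best)
        candidates.remove(best)
        cols = X[:, selected]
        beta, *_ = np.linalg.lstsq(cols, y, rcond=None)
        residual = y - cols @ beta
    return selected
```

    A Wald-style pruning pass would then drop any selected coefficient whose estimate is small relative to its standard error.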

  8. Advancements in Wind Integration Study Input Data Modeling: The Wind Integration National Dataset (WIND) Toolkit

    Science.gov (United States)

    Hodge, B.; Orwig, K.; McCaa, J. R.; Harrold, S.; Draxl, C.; Jones, W.; Searight, K.; Getman, D.

    2013-12-01

    Regional wind integration studies in the United States, such as the Western Wind and Solar Integration Study (WWSIS), Eastern Wind Integration and Transmission Study (EWITS), and Eastern Renewable Generation Integration Study (ERGIS), perform detailed simulations of the power system to determine the impact of high wind and solar energy penetrations on power systems operations. Some of the specific aspects examined include infrastructure requirements, impacts on grid operations and conventional generators, and ancillary service requirements, as well as the benefits of geographic diversity and forecasting. These studies require geographically broad and temporally consistent wind and solar power production input datasets that realistically reflect the ramping characteristics, spatial and temporal correlations, and capacity factors of wind and solar power plant production, and are time-synchronous with load profiles. The original western and eastern wind datasets were generated independently for 2004-2006 using numerical weather prediction (NWP) models run on a ~2 km grid with 10-minute resolution. Each utilized its own site selection process to augment existing wind plants with simulated sites of high development potential. The original dataset also included day-ahead simulated forecasts. These datasets were the first of their kind and many lessons were learned from their development. For example, the modeling approach used generated periodic false ramps that later had to be removed due to unrealistic impacts on ancillary service requirements. For several years, stakeholders have been requesting an updated dataset that: 1) covers more recent years; 2) spans four or more years to better evaluate interannual variability; 3) uses improved methods to minimize false ramps and spatial seams; 4) better incorporates solar power production inputs; and 5) is more easily accessible. To address these needs, the U.S. Department of Energy (DOE) Wind and Solar Programs have funded two

  9. Bayesian nonlinear structural FE model and seismic input identification for damage assessment of civil structures

    Science.gov (United States)

    Astroza, Rodrigo; Ebrahimian, Hamed; Li, Yong; Conte, Joel P.

    2017-09-01

    A methodology is proposed to update mechanics-based nonlinear finite element (FE) models of civil structures subjected to unknown input excitation. The approach jointly estimates the unknown time-invariant parameters of a nonlinear FE model of the structure and the unknown time histories of the input excitations, using spatially sparse output response measurements recorded during an earthquake event. The unscented Kalman filter, which circumvents the computation of FE response sensitivities with respect to the unknown model parameters and input excitations by using a deterministic sampling approach, is employed as the estimation tool. The use of measurement data obtained from arrays of heterogeneous sensors, including accelerometers, displacement sensors, and strain gauges, is investigated. Based on the estimated FE model parameters and input excitations, the updated nonlinear FE model can be interrogated to detect, localize, classify, and assess damage in the structure. Numerically simulated response data for a three-dimensional 4-story 2-by-1 bay steel frame structure with six unknown model parameters, and for a three-dimensional 5-story 2-by-1 bay reinforced concrete frame structure with nine unknown model parameters, each subjected to unknown bi-directional horizontal seismic excitation, are used to illustrate and validate the proposed methodology. The results of the validation studies show the excellent performance and robustness of the proposed algorithm in jointly estimating the unknown FE model parameters and input excitations.
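
    The deterministic sampling at the heart of the unscented Kalman filter is the unscented transform: a small, fixed set of sigma points is propagated through the nonlinear map, and their weighted mean and covariance approximate the transformed statistics, with no gradients (response sensitivities) required. A minimal sketch with the standard scaled-transform weights; the parameter values are conventional defaults, not those used in the paper:

```python
import numpy as np

def unscented_transform(mean, cov, f, alpha=1.0, beta=2.0, kappa=0.0):
    """Propagate a Gaussian (mean, cov) through a nonlinear map f using
    2n+1 sigma points (scaled unscented transform); returns the
    approximate transformed mean and covariance."""
    n = mean.size
    lam = alpha**2 * (n + kappa) - n
    S = np.linalg.cholesky((n + lam) * cov)      # matrix square root
    sigmas = np.vstack([mean, mean + S.T, mean - S.T])
    wm = np.full(2 * n + 1, 1.0 / (2.0 * (n + lam)))
    wc = wm.copy()
    wm[0] = lam / (n + lam)
    wc[0] = wm[0] + (1.0 - alpha**2 + beta)
    ys = np.array([f(s) for s in sigmas])
    y_mean = wm @ ys
    d = ys - y_mean
    y_cov = (wc[:, None] * d).T @ d
    return y_mean, y_cov
```

    In the joint estimation setting, the state vector is augmented with the unknown parameters and input values, and this transform replaces the linearization step of an extended Kalman filter.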

  10. Multi-bump solutions in a neural field model with external inputs

    Science.gov (United States)

    Ferreira, Flora; Erlhagen, Wolfram; Bicho, Estela

    2016-07-01

    We study the conditions for the formation of multiple regions of high activity or "bumps" in a one-dimensional, homogeneous neural field with localized inputs. Stable multi-bump solutions of the integro-differential equation have been proposed as a model of a neural population representation of remembered external stimuli. We apply a class of oscillatory coupling functions and first derive criteria on the input width and distance, in relation to the synaptic couplings, that guarantee the existence and stability of one and two regions of high activity. These input-induced patterns are attracted by the corresponding stable one-bump and two-bump solutions when the input is removed. We then extend our analytical and numerical investigation to N-bump solutions, showing that the constraints on the input shape derived for the two-bump case can be exploited to generate a memory of N > 2 localized inputs. We discuss the pattern formation process when either the conditions on the input shape are violated or the spatial ranges of the excitatory and inhibitory connections are changed. An important aspect for applications is that the theoretical findings allow us to determine, for a given coupling function, the maximum number of localized inputs that can be stored in a given finite interval.
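
    The bump mechanism can be reproduced numerically from the underlying Amari-type field equation du(x,t)/dt = -u + integral of w(x - x') f(u(x',t)) dx' + h + S(x,t): a transient localized input lifts a region above threshold, and the coupling function then sustains the bump after the input is removed. The sketch below uses a simple difference-of-Gaussians kernel and a Heaviside firing rate; the paper's oscillatory couplings and parameter regime are not reproduced here:

```python
import numpy as np

def simulate_bump(n=201, length=20.0, t_stim=5.0, t_total=10.0, dt=0.01):
    """Forward-Euler integration of u_t = -u + w*f(u) + h + S(x, t)."""
    x = np.linspace(-length / 2, length / 2, n)
    dx = x[1] - x[0]
    # lateral-inhibition kernel: short-range excitation, long-range inhibition
    w = 2.0 * np.exp(-x**2 / 2.0) - np.exp(-x**2 / 8.0)
    theta, h = 0.3, -0.2                       # firing threshold, resting level
    u = np.full(n, h)
    stim = 2.0 * np.exp(-x**2 / 0.5)           # localized input at x = 0
    for step in range(int(t_total / dt)):
        s = stim if step * dt < t_stim else 0.0
        f = (u > theta).astype(float)          # Heaviside firing rate
        u += dt * (-u + np.convolve(f, w, mode="same") * dx + h + s)
    return x, u, theta
```

    With these illustrative parameters, a single self-sustained bump centered on the (removed) stimulus remains at the end of the run.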

  11. Input-output model for MACCS nuclear accident impacts estimation

    Energy Technology Data Exchange (ETDEWEB)

    Outkin, Alexander V. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Bixler, Nathan E. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Vargas, Vanessa N [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)

    2015-01-27

    Since the original economic model for MACCS was developed, better quality economic data (as well as the tools to gather and process them) and better computational capabilities have become available. The update of the economic impacts component of the MACCS legacy model will provide improved estimates of business disruptions through the use of Input-Output based economic impact estimation. This paper presents an updated MACCS model, based on Input-Output methodology, in which economic impacts are calculated using the Regional Economic Accounting analysis tool (REAcct) created at Sandia National Laboratories. This new GDP-based model allows quick and consistent estimation of gross domestic product (GDP) losses due to nuclear power plant accidents. This paper outlines the steps taken to combine the REAcct Input-Output-based model with the MACCS code, describes the GDP loss calculation, and discusses the parameters and modeling assumptions necessary for the estimation of long-term effects of nuclear power plant accidents.
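
    The Input-Output mechanics behind such loss estimates can be sketched with the Leontief inverse: a drop in final demand in the disrupted sectors propagates through inter-industry linkages via (I - A)^-1, and the resulting output change is pro-rated to the outage duration. The coefficients, sectors, and pro-rating step below are illustrative assumptions, not REAcct's data or procedure:

```python
import numpy as np

def output_loss(A, final_demand, disrupted, fraction, days):
    """Change in gross output from a partial loss of final demand in
    the given sectors, held for `days` days (Leontief quantity model)."""
    n = A.shape[0]
    leontief = np.linalg.inv(np.eye(n) - A)    # (I - A)^-1
    dd = np.zeros(n)
    dd[disrupted] = -fraction * final_demand[disrupted]
    return (leontief @ dd) * days / 365.0      # pro-rate to event length
```

    Converting the output change to GDP would further require sector value-added ratios.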

  12. Geochemical inputs for hydrological models of deep-lying sedimentary units: Loss of mineral hydration water

    Science.gov (United States)

    Graf, D. L.; Anderson, D. E.

    1981-12-01

    Hydrological models that treat phenomena occurring deep in sedimentary piles, such as petroleum maturation and retention of chemical and radioactive waste, may require time spans of at least several million years. Many input quantities classically treated as constants will be variables on this time scale. Models sophisticated enough to include transport contributions from such processes as chemical diffusion, mineral dehydration and shale membrane behavior require considerable knowledge about regional geological history as well as the pertinent mineralogical and geochemical relationships. Simple dehydrations such as those of gypsum and halloysite occur at sharply-defined temperatures but, as with all mineral dehydration reactions, the equilibrium temperature is strongly dependent on the pore-fluid salinity and degree of overpressuring encountered in the subsurface. The dehydrations of analcime and smectite proceed by reactions involving other sedimentary minerals. The smectite reaction is crystallographically complex, yielding a succession of mixed-layered illite/smectites, and on the U.S.A. Gulf of Mexico coast continues over several million years at a particular stratigraphic interval.

  13. Reconstruction of neuronal input through modeling single-neuron dynamics and computations

    Energy Technology Data Exchange (ETDEWEB)

    Qin, Qing; Wang, Jiang; Yu, Haitao; Deng, Bin, E-mail: dengbin@tju.edu.cn; Chan, Wai-lok [School of Electrical Engineering and Automation, Tianjin University, Tianjin 300072 (China)

    2016-06-15

    Mathematical models provide a mathematical description of neuron activity, which can help us better understand and quantify the neural computations and corresponding biophysical mechanisms evoked by a stimulus. In this paper, based on the output spike train evoked by an acupuncture mechanical stimulus, we present two different levels of models to describe the input-output system and achieve the reconstruction of the neuronal input. The reconstruction process is divided into two steps. First, the neuronal spiking event is considered as a Gamma stochastic process; the scale parameter and the shape parameter of the Gamma process are defined as two spiking characteristics, which are estimated by a state-space method. Then, a leaky integrate-and-fire (LIF) model is used to mimic the response system, and the estimated spiking characteristics are transformed into two temporal input parameters of the LIF model through two conversion formulas. We test this reconstruction method on three different groups of simulation data. All three groups of estimates reconstruct the input parameters with fairly high accuracy. We then use this reconstruction method to estimate the non-measurable acupuncture input parameters. Results show that the estimated input parameters differ markedly across three different frequencies of acupuncture stimulation; the higher the frequency of the acupuncture stimulus, the higher the accuracy of the reconstruction.
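
    The response-system half of the pipeline, the leaky integrate-and-fire neuron, integrates tau dV/dt = -V + I and resets on reaching threshold. A minimal forward-Euler sketch with a constant input current (parameter values are illustrative; the Gamma-process estimation step and the conversion formulas are not shown):

```python
def lif_spike_times(i_amp, t_total=1.0, dt=1e-4, tau=0.02,
                    v_thresh=1.0, v_reset=0.0):
    """Forward-Euler simulation of tau*dV/dt = -V + I with reset on
    threshold crossing; returns the list of spike times."""
    v, spikes = v_reset, []
    for k in range(int(t_total / dt)):
        v += (dt / tau) * (-v + i_amp)
        if v >= v_thresh:
            spikes.append(k * dt)
            v = v_reset
    return spikes
```

    A subthreshold current (I below v_thresh) never fires, and the firing rate grows with the input; this monotone input-rate relation is what lets estimated spiking characteristics be mapped back to input parameters.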

  14. Reconstruction of neuronal input through modeling single-neuron dynamics and computations

    Science.gov (United States)

    Qin, Qing; Wang, Jiang; Yu, Haitao; Deng, Bin; Chan, Wai-lok

    2016-06-01

    Mathematical models provide a mathematical description of neuron activity, which can help us better understand and quantify the neural computations and corresponding biophysical mechanisms evoked by a stimulus. In this paper, based on the output spike train evoked by an acupuncture mechanical stimulus, we present two different levels of models to describe the input-output system and achieve the reconstruction of the neuronal input. The reconstruction process is divided into two steps. First, the neuronal spiking event is considered as a Gamma stochastic process; the scale parameter and the shape parameter of the Gamma process are defined as two spiking characteristics, which are estimated by a state-space method. Then, a leaky integrate-and-fire (LIF) model is used to mimic the response system, and the estimated spiking characteristics are transformed into two temporal input parameters of the LIF model through two conversion formulas. We test this reconstruction method on three different groups of simulation data. All three groups of estimates reconstruct the input parameters with fairly high accuracy. We then use this reconstruction method to estimate the non-measurable acupuncture input parameters. Results show that the estimated input parameters differ markedly across three different frequencies of acupuncture stimulation; the higher the frequency of the acupuncture stimulus, the higher the accuracy of the reconstruction.

  15. Input-to-output transformation in a model of the rat hippocampal CA1 network

    OpenAIRE

    Olypher, Andrey V; Lytton, William W; Prinz, Astrid A.

    2012-01-01

    Here we use computational modeling to gain new insights into the transformation of inputs in hippocampal field CA1. We considered input-output transformation in CA1 principal cells of the rat hippocampus, with activity synchronized by population gamma oscillations. Prior experiments have shown that such synchronization is especially strong for cells within one millimeter of each other. We therefore simulated a one-millimeter patch of CA1 with 23,500 principal cells. We used morphologically an...

  16. Regional Input Output Models and the FLQ Formula: A Case Study of Finland

    OpenAIRE

    Tony Flegg; Paul White

    2008-01-01

    This paper examines the use of location quotients (LQs) in constructing regional input-output models. Its focus is on the augmented FLQ formula (AFLQ) proposed by Flegg and Webber (2000), which takes regional specialization explicitly into account. In our case study, we examine data for 20 Finnish regions, ranging in size from very small to very large, in order to assess the relative performance of the AFLQ formula in estimating regional imports, total intermediate inputs and output multiplier...
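
    For orientation, the FLQ adjusts the cross-industry location quotient CILQ_ij = SLQ_i/SLQ_j by a regional-scale factor lambda* = [log2(1 + R/N)]^delta, where R/N is the region's share of national employment, and the regional input coefficient is taken as the national one times min(FLQ_ij, 1). The sketch below uses delta = 0.3, a value commonly adopted in the FLQ literature rather than one estimated for Finland; the AFLQ's additional specialization term is omitted:

```python
import math

def flq(slq_i, slq_j, regional_emp, national_emp, delta=0.3):
    """Flegg location quotient: CILQ_ij scaled by region size, capped
    at 1 when used to regionalize a national input coefficient."""
    cilq = slq_i / slq_j
    lam = math.log2(1.0 + regional_emp / national_emp) ** delta
    return min(cilq * lam, 1.0)
```

    Because lambda* is below 1 for any region smaller than the nation, FLQ-based coefficients allow for proportionally more interregional importing in smaller regions.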

  17. Interregional spillovers in Spain: an estimation using an interregional input-output model

    OpenAIRE

    Llano, Carlos

    2009-01-01

    In this note we introduce the 1995 Spanish Interregional Input-Output Model, which was estimated using a wide set of one-region input-output tables and interregional trade matrices, estimated for each sector using interregional transport flows. Based on this framework, and by means of the Hypothetical Regional Extraction Method, the interregional backward and feedback effects are computed, capturing the pull effect of every region on the rest of Spain through their sectoral relations withi...

  18. Autonomous attitude coordinated control for spacecraft formation with input constraint, model uncertainties, and external disturbances

    Institute of Scientific and Technical Information of China (English)

    Zheng Zhong; Song Shenmin

    2014-01-01

    To synchronize the attitude of a spacecraft formation flying system, three novel autonomous control schemes are proposed to deal with the issue in this paper. The first one is an ideal autonomous attitude coordinated controller, which is applied to address the case with certain models and no disturbance. The second one is a robust adaptive attitude coordinated controller, which aims to tackle the case with external disturbances and model uncertainties. The last one is a filtered robust adaptive attitude coordinated controller, which is used to overcome the case with input constraint, model uncertainties, and external disturbances. The above three controllers do not need any external tracking signal and only require angular velocity and relative orientation between a spacecraft and its neighbors. Besides, the relative information is represented in the body frame of each spacecraft. The controllers are proved to be able to result in asymptotical stability almost everywhere. Numerical simulation results show that the proposed three approaches are effective for attitude coordination in a spacecraft formation flying system.

  19. Econometric Model Estimation and Sensitivity Analysis of Inputs for Mandarin Production in Mazandaran Province of Iran

    Directory of Open Access Journals (Sweden)

    Majid Namdari

    2011-05-01

    Full Text Available This study examines the energy consumption of the inputs and output used in mandarin production and the relationship between energy inputs and yield in Mazandaran, Iran. The Marginal Physical Product (MPP) method was used to analyze the sensitivity of mandarin yield to the energy inputs, and the returns to scale of the econometric model were calculated. For this purpose, data were collected from 110 mandarin orchards selected by a random sampling method. The results indicated that total energy input was 77501.17 MJ/ha. The energy use efficiency, energy productivity and net energy of mandarin production were found to be 0.77, 0.41 kg/MJ and -17651.17 MJ/ha, respectively. About 41% of the total energy input used in mandarin production was indirect while about 59% was direct. Econometric estimation revealed that the impact of human labor energy (0.37) was the highest among the inputs to mandarin production. The results also showed that the direct, indirect, renewable and non-renewable energy forms had a positive and statistically significant impact on the output level. The sensitivity analysis of the energy inputs showed that an additional 1 MJ of human labor, farmyard manure or chemical fertilizer energy would increase yield by 2.05, 1.80 or 1.26 kg, respectively. The results also showed that the MPP values of direct and renewable energy were higher.
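
    Sensitivity figures of this kind follow from the marginal physical product of a Cobb-Douglas fit, MPP_i = alpha_i * Y / X_i: the output elasticity of input i times its average product, evaluated at mean levels. A one-line sketch with invented numbers, not the study's data:

```python
def marginal_physical_product(elasticity, mean_yield, mean_input):
    """Cobb-Douglas MPP of one input: alpha_i * (mean yield / mean input)."""
    return elasticity * mean_yield / mean_input

# e.g. elasticity 0.37, mean yield 1000 kg/ha, mean input 100 units/ha
# -> one extra unit of the input raises yield by about 3.7 kg/ha
mpp = marginal_physical_product(0.37, 1000.0, 100.0)
```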

  20. An integrated model for the assessment of global water resources – Part 1: Model description and input meteorological forcing

    Directory of Open Access Journals (Sweden)

    N. Hanasaki

    2008-07-01

    Full Text Available To assess global water availability and use at a subannual timescale, an integrated global water resources model was developed consisting of six modules: land surface hydrology, river routing, crop growth, reservoir operation, environmental flow requirement estimation, and anthropogenic water withdrawal. The model simulates both natural and anthropogenic water flow globally (excluding Antarctica) on a daily basis at a spatial resolution of 1°×1° (longitude and latitude). This first part of the two-feature report describes the six modules and the input meteorological forcing. The input meteorological forcing was provided by the second Global Soil Wetness Project (GSWP2), an international land surface modeling project. Several reported shortcomings of the forcing component were improved. The land surface hydrology module was developed based on a bucket type model that simulates energy and water balance on land surfaces. The crop growth module is a relatively simple model based on concepts of heat unit theory, potential biomass, and a harvest index. In the reservoir operation module, 452 major reservoirs with >1 km3 each of storage capacity store and release water according to their own rules of operation. Operating rules were determined for each reservoir by an algorithm that used currently available global data such as reservoir storage capacity, intended purposes, simulated inflow, and water demand in the lower reaches. The environmental flow requirement module was newly developed based on case studies from around the world. Simulated runoff was compared and validated with observation-based global runoff data sets and observed streamflow records at 32 major river gauging stations around the world. Mean annual runoff agreed well with earlier studies at global and continental scales, and in individual basins, the mean bias was less than ±20% in 14 of the 32 river basins and less than ±50% in 24 basins. The error in the peak was less

  1. An integrated model for the assessment of global water resources Part 1: Model description and input meteorological forcing

    Science.gov (United States)

    Hanasaki, N.; Kanae, S.; Oki, T.; Masuda, K.; Motoya, K.; Shirakawa, N.; Shen, Y.; Tanaka, K.

    2008-07-01

    To assess global water availability and use at a subannual timescale, an integrated global water resources model was developed consisting of six modules: land surface hydrology, river routing, crop growth, reservoir operation, environmental flow requirement estimation, and anthropogenic water withdrawal. The model simulates both natural and anthropogenic water flow globally (excluding Antarctica) on a daily basis at a spatial resolution of 1°×1° (longitude and latitude). This first part of the two-feature report describes the six modules and the input meteorological forcing. The input meteorological forcing was provided by the second Global Soil Wetness Project (GSWP2), an international land surface modeling project. Several reported shortcomings of the forcing component were improved. The land surface hydrology module was developed based on a bucket type model that simulates energy and water balance on land surfaces. The crop growth module is a relatively simple model based on concepts of heat unit theory, potential biomass, and a harvest index. In the reservoir operation module, 452 major reservoirs with >1 km3 each of storage capacity store and release water according to their own rules of operation. Operating rules were determined for each reservoir by an algorithm that used currently available global data such as reservoir storage capacity, intended purposes, simulated inflow, and water demand in the lower reaches. The environmental flow requirement module was newly developed based on case studies from around the world. Simulated runoff was compared and validated with observation-based global runoff data sets and observed streamflow records at 32 major river gauging stations around the world. Mean annual runoff agreed well with earlier studies at global and continental scales, and in individual basins, the mean bias was less than ±20% in 14 of the 32 river basins and less than ±50% in 24 basins. The error in the peak was less than ±1 mo in 19 of the 27
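
    The bucket-type land-surface scheme mentioned above can be sketched in a few lines: storage gains precipitation, loses evapotranspiration capped by the water actually available, and spills anything above the bucket capacity as runoff. Units (mm) and parameter values are illustrative, not the model's calibrated ones:

```python
def bucket_model(precip, pet, capacity=100.0, s0=50.0):
    """Single-bucket daily water balance; returns (runoff series,
    final storage). precip and pet are daily series in mm."""
    storage, runoff = s0, []
    for p, e in zip(precip, pet):
        storage += p
        storage -= min(e, storage)            # ET limited by supply
        spill = max(storage - capacity, 0.0)  # overflow becomes runoff
        runoff.append(spill)
        storage -= spill
    return runoff, storage
```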

  2. Wind Farm Flow Modeling using an Input-Output Reduced-Order Model

    Energy Technology Data Exchange (ETDEWEB)

    Annoni, Jennifer; Gebraad, Pieter; Seiler, Peter

    2016-08-01

    Wind turbines in a wind farm operate individually to maximize their own power regardless of the impact of aerodynamic interactions on neighboring turbines. There is the potential to increase power and reduce overall structural loads by properly coordinating the turbines. To perform control design and analysis, a model needs to be of low computational cost while retaining the necessary dynamics seen in high-fidelity models. The objective of this work is to obtain a reduced-order model that represents the full-order flow computed using a high-fidelity model. A variety of methods, including proper orthogonal decomposition and dynamic mode decomposition, can be used to extract the dominant flow structures and obtain a reduced-order model. In this paper, we combine proper orthogonal decomposition with a system identification technique to produce an input-output reduced-order model. This technique is used to construct a reduced-order model of the flow within a two-turbine array computed using a large-eddy simulation.
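
    The proper orthogonal decomposition step can be sketched with an SVD of the snapshot matrix (states in rows, time samples in columns): the left singular vectors are the POD modes, and the number retained is set by a cumulative-energy criterion. This generic sketch omits the subsequent system-identification stage that produces the input-output model:

```python
import numpy as np

def pod_basis(snapshots, energy=0.99):
    """Return the leading POD modes capturing the requested fraction of
    snapshot variance, plus all singular values."""
    U, s, _ = np.linalg.svd(snapshots, full_matrices=False)
    frac = np.cumsum(s**2) / np.sum(s**2)
    r = int(np.searchsorted(frac, energy)) + 1   # smallest r reaching `energy`
    return U[:, :r], s
```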

  3. Multi input single output model predictive control of non-linear bio-polymerization process

    Energy Technology Data Exchange (ETDEWEB)

    Arumugasamy, Senthil Kumar; Ahmad, Z. [School of Chemical Engineering, Universiti Sains Malaysia, Engineering Campus, Seri Ampangan, 14300 Nibong Tebal, Seberang Perai Selatan, Pulau Pinang (Malaysia)

    2015-05-15

    This paper focuses on Multi Input Single Output (MISO) Model Predictive Control of a bio-polymerization process, in which a mechanistic model is developed and linked with a feedforward neural network model to obtain a hybrid model (Mechanistic-FANN) of the lipase-catalyzed ring-opening polymerization of ε-caprolactone (ε-CL) for Poly(ε-caprolactone) production. In this research a state space model was used, in which the inputs to the model were the reactor temperatures and reactor impeller speeds and the outputs were the molecular weight of the polymer (M{sub n}) and the polymer polydispersity index. The state space model for MISO was created using the System Identification Toolbox of Matlab™ and is used in the MISO MPC. Model predictive control (MPC) has been applied to predict the molecular weight of the biopolymer and consequently control it. The results show that the MPC is able to track the reference trajectory and gives optimal movement of the manipulated variables.

  4. On the Influence of Input Data Quality to Flood Damage Estimation: The Performance of the INSYDE Model

    Directory of Open Access Journals (Sweden)

    Daniela Molinari

    2017-09-01

    Full Text Available The IN-depth SYnthetic Model for Flood Damage Estimation (INSYDE) is a model for estimating flood damage to residential buildings at the micro-scale. This study investigates the sensitivity of INSYDE to the accuracy of the input data. Starting from knowledge of the input parameters at the scale of individual buildings for a case study, the level of detail of the input data is progressively downgraded until a representative value is defined for all inputs at the census block scale. The analysis reveals that two conditions are required to limit the errors in damage estimation: the representativeness of the representative values with respect to the micro-scale values, and local knowledge of the footprint area of the buildings, the latter being the main extensive variable adopted by INSYDE. This result allows the usability of the model to be extended to the meso-scale, including in different countries, depending on the availability of aggregated building data.

  5. Input-to-output transformation in a model of the rat hippocampal CA1 network.

    Science.gov (United States)

    Olypher, Andrey V; Lytton, William W; Prinz, Astrid A

    2012-01-01

    Here we use computational modeling to gain new insights into the transformation of inputs in hippocampal field CA1. We considered input-output transformation in CA1 principal cells of the rat hippocampus, with activity synchronized by population gamma oscillations. Prior experiments have shown that such synchronization is especially strong for cells within one millimeter of each other. We therefore simulated a one-millimeter patch of CA1 with 23,500 principal cells. We used morphologically and biophysically detailed neuronal models, each with more than 1000 compartments and thousands of synaptic inputs. Inputs came from binary patterns of spiking neurons from field CA3 and entorhinal cortex (EC). On average, each presynaptic pattern initiated action potentials in the same number of CA1 principal cells in the patch. We considered pairs of similar and pairs of distinct patterns. In all cases CA1 strongly separated input patterns. However, CA1 cells were considerably more sensitive to small alterations in EC patterns compared to CA3 patterns. Our results can be used for comparison of input-to-output transformations in normal and pathological hippocampal networks.

  6. Modelling pesticide leaching under climate change: parameter vs. climate input uncertainty

    Directory of Open Access Journals (Sweden)

    K. Steffens

    2014-02-01

    Full Text Available Assessing climate change impacts on pesticide leaching requires careful consideration of different sources of uncertainty. We investigated the uncertainty related to climate scenario input and its importance relative to parameter uncertainty of the pesticide leaching model. The pesticide fate model MACRO was calibrated against a comprehensive one-year field data set for a well-structured clay soil in south-western Sweden. We obtained an ensemble of 56 acceptable parameter sets that represented the parameter uncertainty. Nine different climate model projections of the regional climate model RCA3 were available, driven by different combinations of global climate models (GCM), greenhouse gas emission scenarios and initial states of the GCM. The future time series of weather data used to drive the MACRO model were generated by scaling a reference climate data set (1970–1999) for an important agricultural production area in south-western Sweden, based on monthly change factors for 2070–2099. Thirty-year simulations were performed for different combinations of pesticide properties and application seasons. Our analysis showed that both the magnitude and the direction of the predicted change in pesticide leaching from present to future depended strongly on the particular climate scenario. The effect of parameter uncertainty was of major importance for simulating absolute pesticide losses, whereas the climate uncertainty was relatively more important for predicting changes in pesticide losses from present to future. The climate uncertainty should be accounted for by applying an ensemble of different climate scenarios. The aggregated ensemble prediction based on both acceptable parameterizations and different climate scenarios has the potential to provide robust probabilistic estimates of future pesticide losses.
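
    The scaling step described, perturbing a reference series with monthly change factors, can be sketched as follows; additive factors are the usual choice for temperature and multiplicative ones for precipitation. The factor values in the example are invented, not RCA3-derived:

```python
def apply_change_factors(values, months, factors, multiplicative=False):
    """Delta-change scaling of a reference climate series: adjust each
    daily value by its month's (future minus reference) change factor.
    months holds each value's month number, 1-12."""
    out = []
    for v, m in zip(values, months):
        f = factors[m - 1]
        out.append(v * f if multiplicative else v + f)
    return out
```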

  7. Modeling the short-run effect of fiscal stimuli on GDP : A new semi-closed input-output model

    NARCIS (Netherlands)

    Chen, Quanrun; Dietzenbacher, Erik; Los, Bart; Yang, Cuihong

    2016-01-01

    In this study, we propose a new semi-closed input-output model, which reconciles input-output analysis with modern consumption theories. It can simulate changes in household consumption behavior when exogenous stimulus policies lead to higher disposable income levels. It is useful for quantifying

  8. A Model for Gathering Stakeholder Input for Setting Research Priorities at the Land-Grant University.

    Science.gov (United States)

    Kelsey, Kathleen Dodge; Pense, Seburn L.

    2001-01-01

    A model for collecting and using stakeholder input on research priorities is a modification of Guba and Lincoln's model, involving preevaluation preparation, stakeholder identification, information gathering and analysis, interpretive filtering, and negotiation and consensus. A case study at Oklahoma State University illustrates its applicability…

  9. Improving the Performance of Water Demand Forecasting Models by Using Weather Input

    NARCIS (Netherlands)

    Bakker, M.; Van Duist, H.; Van Schagen, K.; Vreeburg, J.; Rietveld, L.

    2014-01-01

    Literature shows that water demand forecasting models which use water demand as single input, are capable of generating a fairly accurate forecast. However, at changing weather conditions the forecasting errors are quite large. In this paper three different forecasting models are studied: an Adaptiv
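
The claimed benefit of a weather input can be illustrated on synthetic data. Everything here is invented (the paper's adaptive models are not reproduced): demand is generated with a genuine temperature effect, then forecast with and without the weather regressor:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 365
# Synthetic daily temperature with a seasonal cycle.
temp = 15 + 10 * np.sin(2 * np.pi * np.arange(n) / 365) + rng.normal(0, 2, n)
# Synthetic daily demand: persistence plus a temperature effect plus noise.
demand = np.empty(n)
demand[0] = 100.0
for t in range(1, n):
    demand[t] = 0.6 * demand[t - 1] + 0.8 * temp[t] + 30 + rng.normal(0, 1)

# Model A: demand history only; Model B: history plus same-day temperature.
y = demand[1:]
X_a = np.column_stack([np.ones(n - 1), demand[:-1]])
X_b = np.column_stack([np.ones(n - 1), demand[:-1], temp[1:]])
beta_a, *_ = np.linalg.lstsq(X_a, y, rcond=None)
beta_b, *_ = np.linalg.lstsq(X_b, y, rcond=None)
rmse_a = np.sqrt(np.mean((X_a @ beta_a - y) ** 2))
rmse_b = np.sqrt(np.mean((X_b @ beta_b - y) ** 2))
print(f"RMSE without weather: {rmse_a:.2f}, with weather: {rmse_b:.2f}")
```

Because the generating process genuinely depends on temperature, the weather-aware model fits with a smaller error, mirroring the paper's motivation.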

  11. Good Modeling Practice for PAT Applications: Propagation of Input Uncertainty and Sensitivity Analysis

    DEFF Research Database (Denmark)

    Sin, Gürkan; Gernaey, Krist; Eliasson Lantz, Anna

    2009-01-01

    The uncertainty and sensitivity analysis are evaluated for their usefulness as part of the model-building within Process Analytical Technology applications. A mechanistic model describing a batch cultivation of Streptomyces coelicolor for antibiotic production was used as case study. The input...

  12. Development of an Input Model to MELCOR 1.8.5 for the Oskarshamn 3 BWR

    Energy Technology Data Exchange (ETDEWEB)

    Nilsson, Lars [Lentek, Nykoeping (Sweden)

    2006-05-15

    An input model has been prepared for the code MELCOR 1.8.5 for the Swedish Oskarshamn 3 Boiling Water Reactor (O3). This report describes the modelling work and the various files which comprise the input deck. Input data are mainly based on original drawings and system descriptions made available by courtesy of OKG AB. Comparisons and checks of some primary system data were made against an O3 input file for the SCDAP/RELAP5 code that was used in the SARA project. Useful information was also obtained from the FSAR (Final Safety Analysis Report) for O3 and the SKI report '2003 Stoerningshandboken BWR'. The input models the O3 reactor in its current state, with an operating power of 3300 MW{sub th}. One aim of this work is that the MELCOR input could also be used for power upgrading studies. All fuel assemblies are thus assumed to consist of the new Westinghouse-Atom SVEA-96 Optima2 fuel. MELCOR is a severe accident code developed by Sandia National Laboratories under contract from the U.S. Nuclear Regulatory Commission (NRC). MELCOR is a successor to STCP (Source Term Code Package) and thus has a long evolutionary history. The input described here is adapted to version 1.8.5, the latest version available when the work began. It was released in the year 2000, but a new version, 1.8.6, was distributed recently; conversion to the new version is recommended. (During the writing of this report yet another code version, MELCOR 2.0, was announced for release shortly.) In version 1.8.5 there is an option to describe the accident progression in the lower plenum and the melt-through of the reactor vessel bottom in more detail by use of the Bottom Head (BH) package, developed by Oak Ridge National Laboratory especially for BWRs. This is in addition to the ordinary MELCOR COR package. Since problems arose running with the BH input, two versions of the O3 input deck were produced, a NONBH and a BH deck. The BH package is no longer a separate package in the new 1

  13. GEN-IV BENCHMARKING OF TRISO FUEL PERFORMANCE MODELS UNDER ACCIDENT CONDITIONS MODELING INPUT DATA

    Energy Technology Data Exchange (ETDEWEB)

    Collin, Blaise Paul [Idaho National Laboratory

    2016-09-01

    This document presents the benchmark plan for the calculation of particle fuel performance on safety testing experiments that are representative of operational accidental transients. The benchmark is dedicated to the modeling of fission product release under accident conditions by fuel performance codes from around the world, and the subsequent comparison to post-irradiation experiment (PIE) data from the modeled heating tests. The accident condition benchmark is divided into three parts: • The modeling of a simplified benchmark problem to assess potential numerical calculation issues at low fission product release. • The modeling of the AGR-1 and HFR-EU1bis safety testing experiments. • The comparison of the AGR-1 and HFR-EU1bis modeling results with PIE data. The simplified benchmark case, thereafter named NCC (Numerical Calculation Case), is derived from “Case 5” of the International Atomic Energy Agency (IAEA) Coordinated Research Program (CRP) on coated particle fuel technology [IAEA 2012]. It is included so participants can evaluate their codes at low fission product release. “Case 5” of the IAEA CRP-6 showed large code-to-code discrepancies in the release of fission products, which were attributed to “effects of the numerical calculation method rather than the physical model” [IAEA 2012]. The NCC is therefore intended to check if these numerical effects subsist. The first two steps imply the involvement of the benchmark participants with a modeling effort following the guidelines and recommendations provided by this document. The third step involves the collection of the modeling results by Idaho National Laboratory (INL) and the comparison of these results with the available PIE data. The objective of this document is to provide all necessary input data to model the benchmark cases, and to give some methodology guidelines and recommendations in order to make all results suitable for comparison with each other. The participants should read

  14. Analysis of MODIS snow cover time series over the alpine regions as input for hydrological modeling

    Science.gov (United States)

    Notarnicola, Claudia; Rastner, Philipp; Irsara, Luca; Moelg, Nico; Bertoldi, Giacomo; Dalla Chiesa, Stefano; Endrizzi, Stefano; Zebisch, Marc

    2010-05-01

    Snow extent and relative physical properties are key parameters in hydrology, weather forecasting and hazard warning as well as in climatological models. Satellite sensors offer a unique advantage in monitoring snow cover due to their temporal and spatial synoptic view. The Moderate Resolution Imaging Spectrometer (MODIS) from NASA is especially useful for this purpose due to its high revisit frequency. However, in order to evaluate the role of snow in the water cycle of a catchment, such as runoff generation due to snowmelt, remote sensing data need to be assimilated into hydrological models. This study presents a comparison on a multi-temporal basis between snow cover data derived from (1) MODIS images, (2) LANDSAT images, and (3) predictions by the hydrological model GEOtop [1,3]. The test area is located in the catchment of the Matscher Valley (South Tyrol, Northern Italy). The snow cover maps derived from MODIS images are obtained using a newly developed algorithm taking into account the specific requirements of mountain regions with a focus on the Alps [2]. This algorithm requires the standard MODIS products MOD09 and MOD02 as input data and generates snow cover maps at a spatial resolution of 250 m. The final output is a combination of MODIS AQUA and MODIS TERRA snow cover maps, thus reducing the presence of cloudy pixels and no-data values due to topography. Using these maps, daily time series from the winter season (November-May) 2002 until 2008/2009 have been created. Along with the snow maps from MODIS images, some snow cover maps derived from LANDSAT images have also been used, due to their high resolution. References: [2] ... "manto nevoso in aree alpine con dati MODIS multi-temporali e modelli idrologici" (snow cover in alpine areas with multi-temporal MODIS data and hydrological models), 13th ASITA National Conference, 1-4.12.2009, Bari, Italy. [3] Zanotti F., Endrizzi S., Bertoldi G. and Rigon R., 2004. The GEOtop snow module. Hydrological Processes, 18: 3667-3679. DOI:10.1002/hyp.5794.

  15. The application of Global Sensitivity Analysis to quantify the dominant input factors for hydraulic model simulations

    Science.gov (United States)

    Savage, James; Pianosi, Francesca; Bates, Paul; Freer, Jim; Wagener, Thorsten

    2015-04-01

    Predicting flood inundation extents using hydraulic models is subject to a number of critical uncertainties. For a specific event, these uncertainties are known to have a large influence on model outputs and any subsequent analyses made by risk managers. Hydraulic modellers often approach such problems by applying uncertainty analysis techniques such as the Generalised Likelihood Uncertainty Estimation (GLUE) methodology. However, these methods do not allow one to determine which source of uncertainty has the most influence on the various model outputs that inform flood risk decision making. Another issue facing modellers is the amount of computational resource that is available to spend on modelling flood inundations that are 'fit for purpose' to the modelling objectives. Therefore a balance needs to be struck between computation time, realism and spatial resolution, and effectively characterising the uncertainty spread of predictions (for example from boundary conditions and model parameterisations). However, it is not fully understood how much of an impact each factor has on model performance, for example how much influence changing the spatial resolution of a model has on inundation predictions in comparison to other uncertainties inherent in the modelling process. Furthermore, when resampling fine-scale topographic data in the form of a Digital Elevation Model (DEM) to coarser resolutions, there are a number of possible coarser DEMs that can be produced. Deciding which DEM is then chosen to represent the surface elevations in the model could also influence model performance. In this study we model a flood event using the hydraulic model LISFLOOD-FP and apply Sobol' Sensitivity Analysis to estimate which input factors, among the uncertainty in model boundary conditions, uncertain model parameters, the spatial resolution of the DEM and the choice of resampled DEM, have the most influence on a range of model outputs. These outputs include whole domain maximum
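
A minimal sketch of how first-order Sobol' indices attribute output variance to input factors, using the standard Saltelli pick-freeze estimator. A toy linear function with known analytic indices stands in for the hydraulic model (the real study ran LISFLOOD-FP):

```python
import numpy as np

def sobol_first_order(f, d, n, rng):
    """Estimate first-order Sobol indices with the Saltelli pick-freeze scheme."""
    A = rng.random((n, d))
    B = rng.random((n, d))
    y_a, y_b = f(A), f(B)
    var = np.var(np.concatenate([y_a, y_b]))
    s = np.empty(d)
    for i in range(d):
        AB = A.copy()
        AB[:, i] = B[:, i]                      # A with column i taken from B
        s[i] = np.mean(y_b * (f(AB) - y_a)) / var
    return s

# Toy stand-in model: a linear function whose analytic first-order indices are
# known (16/21, 4/21, 1/21), so the estimates can be checked.
f = lambda x: 4 * x[:, 0] + 2 * x[:, 1] + x[:, 2]
s = sobol_first_order(f, d=3, n=20000, rng=np.random.default_rng(1))
print("first-order indices:", np.round(s, 3))
```

For a purely additive model like this one the first-order indices sum to one; interactions in a real hydraulic model would leave a gap between the first-order and total-order indices.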

  16. A MULTIYEAR LAGS INPUT-HOLDING-OUTPUT MODEL ON EDUCATION WITH EXCLUDING IDLE CAPITAL

    Institute of Scientific and Technical Information of China (English)

    Xue FU; Xikang CHEN

    2009-01-01

    This paper develops a multi-year lag Input-Holding-Output (I-H-O) model of education, excluding idle capital, to address a reasonable education structure in support of a sustainable development strategy in China. First, the model considers the multi-year lag of human capital, because the lag time of human capital is even longer and more important than that of fixed capital. Second, it considers the idle capital resulting from output decline in education, for example, student decreases in primary school. A new generalized Leontief dynamic inverse is deduced to obtain a positive solution for education when output declines as well as when it expands. After compiling the 2000 I-H-O table on education, the authors adopt a modification-by-step method to treat nonlinear coefficients, and calculate the education scale, the requirement of human capital, and education expenditure from 2005 to 2020. It is found that the structural imbalance of human capital is a serious problem for Chinese economic development.
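
The static Leontief inverse is the building block that the paper's generalized dynamic inverse extends. A sketch with a made-up two-sector coefficient matrix (not taken from the 2000 education table):

```python
import numpy as np

# Static Leontief quantity model: gross output x satisfies x = A x + f,
# so x = (I - A)^{-1} f.  A is an illustrative input coefficient matrix,
# f an illustrative final demand vector.
A = np.array([[0.2, 0.3],
              [0.1, 0.25]])
f = np.array([100.0, 50.0])

L = np.linalg.inv(np.eye(2) - A)     # the Leontief inverse
x = L @ f
print("gross outputs:", np.round(x, 2))
```

Gross output exceeds final demand in every sector because each unit of delivery also requires intermediate inputs; the dynamic inverse in the paper additionally distributes capital (and here, human capital) requirements over lagged years.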

  17. Estimation of Soil Carbon Input in France: An Inverse Modelling Approach

    Institute of Scientific and Technical Information of China (English)

    J.MEERSMANS; M.P.MARTIN; E.LACARCE; T.G.ORTON; S.DE BAETS; M.GOURRAT; N.P.A.SABY

    2013-01-01

    Development of a quantitative understanding of soil organic carbon (SOC) dynamics is vital for management of soil to sequester carbon (C) and maintain fertility, thereby contributing to food security and climate change mitigation. There are well-established process-based models that can be used to simulate SOC stock evolution; however, there are few plant residue C input values and those that exist represent a limited range of environments. This limitation in a fundamental model component (i.e., C input) constrains the reliability of current SOC stock simulations. This study aimed to estimate crop-specific and environment-specific plant-derived soil C input values for agricultural sites in France based on data from 700 sites selected from a recently established French soil monitoring network (the RMQS database). Measured SOC stock values from this large-scale soil database were used to constrain an inverse RothC modelling approach to derive estimated C input values consistent with the stocks. This approach allowed us to estimate significant crop-specific C input values (P < 0.05) for 14 out of 17 crop types, in the range from 1.84 ± 0.69 t C ha-1 year-1 (silage corn) to 5.15 ± 0.12 t C ha-1 year-1 (grassland/pasture). Furthermore, the incorporation of climate variables improved the predictions: C input of 4 crop types could be predicted as a function of temperature and 8 as a function of precipitation. This study offered an approach to meet the urgent need for crop-specific and environment-specific C input values in order to improve the reliability of SOC stock prediction.
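
The inverse-modelling idea, finding the C input that reproduces a measured SOC stock, can be sketched with a one-pool stand-in for RothC. The decomposition rate, initial stock and observed stock below are invented for illustration:

```python
def simulate_soc(c_input, k, c0, years):
    """One-pool SOC model dC/dt = I - k*C, stepped annually (forward Euler)."""
    c = c0
    for _ in range(years):
        c = c + c_input - k * c
    return c

def invert_c_input(c_obs, k, c0, years, lo=0.0, hi=20.0, tol=1e-6):
    """Bisection on the annual C input so the simulated stock matches the observation
    (valid because the simulated stock is monotone increasing in the input)."""
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if simulate_soc(mid, k, c0, years) < c_obs:
            lo = mid          # too little input -> simulated stock too low
        else:
            hi = mid
    return 0.5 * (lo + hi)

# Illustrative numbers only: decay rate k (1/yr), stocks in t C/ha.
k, c0, years, c_obs = 0.05, 40.0, 30, 55.0
c_in = invert_c_input(c_obs, k, c0, years)
print(f"estimated C input: {c_in:.2f} t C/ha/yr")
```

RothC itself partitions carbon into several pools with climate-dependent rate modifiers, but the inversion logic is the same: treat the input flux as the unknown and constrain it with the measured stock.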

  18. Modeling the cardiovascular system using a nonlinear additive autoregressive model with exogenous input

    Science.gov (United States)

    Riedl, M.; Suhrbier, A.; Malberg, H.; Penzel, T.; Bretthauer, G.; Kurths, J.; Wessel, N.

    2008-07-01

    The parameters of heart rate variability and blood pressure variability have proved to be useful analytical tools in cardiovascular physics and medicine. Model-based analysis of these variabilities additionally leads to new prognostic information about mechanisms behind regulations in the cardiovascular system. In this paper, we analyze the complex interaction between heart rate, systolic blood pressure, and respiration by nonparametric fitted nonlinear additive autoregressive models with external inputs. Therefore, we consider measurements of healthy persons and patients suffering from obstructive sleep apnea syndrome (OSAS), with and without hypertension. It is shown that the proposed nonlinear models are capable of describing short-term fluctuations in heart rate as well as systolic blood pressure significantly better than similar linear ones, which confirms the assumption of nonlinear controlled heart rate and blood pressure. Furthermore, the comparison of the nonlinear and linear approaches reveals that the heart rate and blood pressure variability in healthy subjects is caused by a higher level of noise as well as nonlinearity than in patients suffering from OSAS. The residue analysis points at a further source of heart rate and blood pressure variability in healthy subjects, in addition to heart rate, systolic blood pressure, and respiration. Comparison of the nonlinear models within and among the different groups of subjects suggests the ability to discriminate the cohorts that could lead to a stratification of hypertension risk in OSAS patients.
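
A crude sketch of the additive autoregressive idea with an exogenous input. Polynomial basis terms stand in for the paper's nonparametric smoothers, and all signals are synthetic (a tanh nonlinearity plays the role of the nonlinear cardiovascular control):

```python
import numpy as np

rng = np.random.default_rng(3)
n = 500
resp = rng.normal(0, 1, n)                     # exogenous input (stand-in for respiration)
hr = np.zeros(n)
for t in range(1, n):
    # Synthetic "heart rate": nonlinear in its own past, linear in the input.
    hr[t] = 0.7 * np.tanh(hr[t - 1]) + 0.5 * resp[t - 1] + 0.1 * rng.normal()

y = hr[1:]
x_lag, x_exo = hr[:-1], resp[:-1]

# Linear ARX: y_t = c + a*y_{t-1} + b*u_{t-1}
X_lin = np.column_stack([np.ones(n - 1), x_lag, x_exo])
res_lin = y - X_lin @ np.linalg.lstsq(X_lin, y, rcond=None)[0]

# Additive nonlinear ARX: each term replaced by a small polynomial basis.
X_add = np.column_stack([np.ones(n - 1),
                         x_lag, x_lag**2, x_lag**3,
                         x_exo, x_exo**2, x_exo**3])
res_add = y - X_add @ np.linalg.lstsq(X_add, y, rcond=None)[0]

print(f"residual variance linear: {res_lin.var():.4f}, additive: {res_add.var():.4f}")
```

Because the data were generated nonlinearly, the additive model leaves a smaller residual, echoing the paper's finding that nonlinear models describe the short-term fluctuations significantly better than linear ones.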

  19. Validation of Power Requirement Model for Active Loudspeakers

    DEFF Research Database (Denmark)

    Schneider, Henrik; Madsen, Anders Normann; Bjerregaard, Ruben

    2015-01-01

    The actual power requirement of an active loudspeaker during playback of music has not received much attention in the literature. This is probably because no single and simple solution exists, and because complete system knowledge, from input voltage to output sound pressure level, is required. There are, however, many advantages that could be harvested from such knowledge, like size, cost and efficiency improvements. In this paper a recently proposed power requirement model for active loudspeakers is experimentally validated, and the model is expanded to include the closed and vented type enclosures...

  20. Simulation Evaluation of Pilot Inputs for Real Time Modeling During Commercial Flight Operations

    Science.gov (United States)

    Martos, Borja; Ranaudo, Richard; Oltman, Ryan; Myhre, Nick

    2017-01-01

    Aircraft dynamics characteristics can only be identified from flight data when the aircraft dynamics are excited sufficiently. A preliminary study was conducted into what types and levels of manual piloted control excitation would be required for accurate Real-Time Parameter IDentification (RTPID) results by commercial airline pilots. This includes assessing the practicality for the pilot to provide this excitation when cued, and to further understand if pilot inputs during various phases of flight provide sufficient excitation naturally. An operationally representative task was evaluated by 5 commercial airline pilots using the NASA Ice Contamination Effects Flight Training Device (ICEFTD). Results showed that it is practical to use manual pilot inputs only as a means of achieving good RTPID in all phases of flight and in flight turbulence conditions. All pilots were effective in satisfying excitation requirements when cued. Much of the time, cueing was not even necessary, as just performing the required task provided enough excitation for accurate RTPID estimation. Pilot opinion surveys reported that the additional control inputs required when prompted by the excitation cueing were easy to make, quickly mastered, and required minimal training.

  1. Queueing model for an ATM multiplexer with unequal input/output link capacities

    Science.gov (United States)

    Long, Y. H.; Ho, T. K.; Rad, A. B.; Lam, S. P. S.

    1998-10-01

    We present a queuing model for an ATM multiplexer with unequal input/output link capacities. This model can be used to analyze the buffer behavior of an ATM multiplexer which multiplexes low-speed input links into a high-speed output link. For this queuing model, we assume that the input and output slot times are not equal; this is quite different from most analyses of discrete-time queues for ATM multiplexers/switches. In the queuing analysis, we adopt a correlated arrival process represented by the Discrete-time Batch Markovian Arrival Process. The analysis is based upon the M/G/1-type queue technique, which enables easy numerical computation. Queue length distributions observed at different epochs, and the queue length distribution seen by an arbitrary arriving cell when it enters the buffer, are given.
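
A simplified simulation of the unequal-rate multiplexer. Independent Bernoulli arrivals replace the paper's correlated D-BMAP, and the link counts and speed ratio are illustrative; the sketch only shows how the faster output slot drains the shared buffer:

```python
import random

random.seed(7)

# Four low-speed input links each deliver at most one cell per input slot;
# the output link is 3x faster, i.e. it serves up to 3 cells per input slot.
n_inputs, speedup, p_arrival, slots = 4, 3, 0.5, 100_000
queue, max_queue, total = 0, 0, 0

for _ in range(slots):
    arrivals = sum(random.random() < p_arrival for _ in range(n_inputs))
    queue += arrivals                 # cells entering the shared buffer
    queue = max(0, queue - speedup)   # output serves up to `speedup` cells per input slot
    max_queue = max(max_queue, queue)
    total += queue

print(f"mean queue length: {total / slots:.2f}, max: {max_queue}")
```

With load 4 x 0.5 / 3 ≈ 0.67 the queue stays short; a correlated (bursty) arrival process like the D-BMAP would lengthen the queue tail at the same load, which is why the paper models the correlation explicitly.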

  2. Investigations of the sensitivity of a coronal mass ejection model (ENLIL) to solar input parameters

    DEFF Research Database (Denmark)

    Falkenberg, Thea Vilstrup; Vršnak, B.; Taktakishvili, A.;

    2010-01-01

    Severe space weather effects are often caused by coronal mass ejections (CMEs), but in order to predict the caused effects, we need to be able to model their propagation from their origin in the solar corona to the point of interest, e.g., Earth. Many such models exist, but to understand the models in detail we must understand the primary input parameters. Here we investigate the parameter space of the ENLILv2.5b model using the CME event of 25 July 2004. ENLIL is a time-dependent 3-D MHD model that can simulate the propagation of cone-shaped interplanetary coronal mass ejections (ICMEs) through the solar system. Excepting the cone parameters (radius, position, and initial velocity), all remaining parameters are varied, resulting in more than 20 runs investigated here. The output parameters considered are velocity, density, magnetic field strength, and temperature. We find that the largest effects on the model output are the input parameters of upper limit

  3. A seismic free field input model for FE-SBFE coupling in time domain

    Institute of Scientific and Technical Information of China (English)

    阎俊义; 金峰; 徐艳杰; 王光纶; 张楚汉

    2003-01-01

    A seismic free field input formulation of the coupling procedure of the finite element (FE) and the scaled boundary finite-element(SBFE) is proposed to perform the unbounded soil-structure interaction analysis in time domain. Based on the substructure technique, seismic excitation of the soil-structure system is represented by the free-field motion of an elastic half-space. To reduce the computational effort, the acceleration unit-impulse response function of the unbounded soil is decomposed into two functions: linear and residual. The latter converges to zero and can be truncated as required. With the prescribed tolerance parameter, the balance between accuracy and efficiency of the procedure can be controlled. The validity of the model is verified by the scattering analysis of a hemi-spherical canyon subjected to plane harmonic P, SV and SH wave incidence. Numerical results show that the new procedure is very efficient for seismic problems within a normal range of frequency. The coupling procedure presented herein can be applied to linear and nonlinear earthquake response analysis of practical structures which are built on unbounded soil.

  4. Green Input-Output Model for Power Company Theoretical & Application Analysis

    Institute of Scientific and Technical Information of China (English)

    2001-01-01

    Based on the theory of marginal opportunity cost, a kind of green input-output table and model for a power company is put forward in this paper. For application purposes, analyses of integrated planning, cost analysis and pricing for the power company are also given.

  5. The economic impact of multifunctional agriculture in Dutch regions: An input-output model

    NARCIS (Netherlands)

    Heringa, P.W.; Heide, van der C.M.; Heijman, W.J.M.

    2013-01-01

    Multifunctional agriculture is a broad concept lacking a precise definition. Moreover, little is known about the societal importance of multifunctional agriculture. This paper is an empirical attempt to fill this gap. To this end, an input-output model was constructed for multifunctional agriculture

  6. The economic impact of multifunctional agriculture in The Netherlands: A regional input-output model

    NARCIS (Netherlands)

    Heringa, P.W.; Heide, van der C.M.; Heijman, W.J.M.

    2012-01-01

    Multifunctional agriculture is a broad concept lacking a precise and uniform definition. Moreover, little is known about the societal importance of multifunctional agriculture. This paper is an empirical attempt to fill this gap. To this end, an input-output model is constructed for multifunctional

  7. Using a Joint-Input, Multi-Product Formulation to Improve Spatial Price Equilibrium Models

    OpenAIRE

    Bishop, Phillip M.; Pratt, James E.; Novakovic, Andrew M.

    1994-01-01

    Mathematical programming models, as typically formulated for international trade applications, may contain certain implied restrictions which lead to solutions which can be shown to be technically infeasible, or if feasible, then not actually an equilibrium. An alternative formulation is presented which allows joint-inputs and multi-products, with pure transshipment and product substitution forms of arbitrage.

  8. A neuromorphic model of motor overflow in focal hand dystonia due to correlated sensory input

    Science.gov (United States)

    Sohn, Won Joon; Niu, Chuanxin M.; Sanger, Terence D.

    2016-10-01

    Objective. Motor overflow is a common and frustrating symptom of dystonia, manifested as unintentional muscle contraction that occurs during an intended voluntary movement. Although it is suspected that motor overflow is due to cortical disorganization in some types of dystonia (e.g. focal hand dystonia), it remains elusive which mechanisms could initiate and, more importantly, perpetuate motor overflow. We hypothesize that distinct motor elements have low risk of motor overflow if their sensory inputs remain statistically independent. But when provided with correlated sensory inputs, pre-existing crosstalk among sensory projections will grow under spike-timing-dependent-plasticity (STDP) and eventually produce irreversible motor overflow. Approach. We emulated a simplified neuromuscular system comprising two anatomically distinct digital muscles innervated by two layers of spiking neurons with STDP. The synaptic connections between layers included crosstalk connections. The input neurons received either independent or correlated sensory drive during 4 days of continuous excitation. The emulation is critically enabled and accelerated by our neuromorphic hardware created in previous work. Main results. When driven by correlated sensory inputs, the crosstalk synapses gained weight and produced prominent motor overflow; the growth of crosstalk synapses resulted in enlarged sensory representation reflecting cortical reorganization. The overflow failed to recede when the inputs resumed their original uncorrelated statistics. In the control group, no motor overflow was observed. Significance. Although our model is a highly simplified and limited representation of the human sensorimotor system, it allows us to explain how correlated sensory input to anatomically distinct muscles is by itself sufficient to cause persistent and irreversible motor overflow. Further studies are needed to locate the source of correlation in sensory input.
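
The hypothesized mechanism, crosstalk growth under correlated sensory input, can be caricatured in a few lines. A rate-based Hebbian update with Oja normalization stands in for spiking STDP, and all parameters are invented; the diagonal of the weight matrix represents the proper sensory projections and the off-diagonal the pre-existing crosstalk:

```python
import numpy as np

def run_stdp_like(corr, steps=5000, eta=0.001, seed=0):
    """Toy correlation-driven plasticity (Oja-normalized Hebbian rule, a crude
    stand-in for STDP) on a 2x2 weight matrix."""
    rng = np.random.default_rng(seed)
    W = np.array([[1.0, 0.05],
                  [0.05, 1.0]])                         # small pre-existing crosstalk
    L = np.linalg.cholesky(np.array([[1.0, corr], [corr, 1.0]]))
    for _ in range(steps):
        x = L @ rng.normal(size=2)                      # two sensory input channels
        y = W @ x                                       # activity of the two motor pools
        W += eta * (np.outer(y, x) - (y**2)[:, None] * W)  # Hebbian growth + Oja decay
    return W

W_ind = run_stdp_like(corr=0.0)   # statistically independent sensory drive
W_cor = run_stdp_like(corr=0.9)   # correlated sensory drive
print("crosstalk, independent inputs:", round(W_ind[0, 1], 3))
print("crosstalk, correlated inputs: ", round(W_cor[0, 1], 3))
```

Under correlated drive each weight vector rotates toward the shared principal direction of the inputs, so the crosstalk weights grow toward the proper ones; under independent drive they merely diffuse around their small initial value, matching the abstract's control condition.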

  9. Linear and quadratic models of point process systems: contributions of patterned input to output.

    Science.gov (United States)

    Lindsay, K A; Rosenberg, J R

    2012-08-01

    In the 1880's Volterra characterised a nonlinear system using a functional series connecting continuous input and continuous output. Norbert Wiener, in the 1940's, circumvented problems associated with the application of Volterra series to physical problems by deriving from it a new series of terms that are mutually uncorrelated with respect to Gaussian processes. Subsequently, Brillinger, in the 1970's, introduced a point-process analogue of Volterra's series connecting point-process inputs to the instantaneous rate of point-process output. We derive here a new series from this analogue in which its terms are mutually uncorrelated with respect to Poisson processes. This new series expresses how patterned input in a spike train, represented by third-order cross-cumulants, is converted into the instantaneous rate of an output point-process. Given experimental records of suitable duration, the contribution of arbitrary patterned input to an output process can, in principle, be determined. Solutions for linear and quadratic point-process models with one and two inputs and a single output are investigated. Our theoretical results are applied to isolated muscle spindle data in which the spike trains from the primary and secondary endings from the same muscle spindle are recorded in response to stimulation of one and then two static fusimotor axons in the absence and presence of a random length change imposed on the parent muscle. For a fixed mean rate of input spikes, the analysis of the experimental data makes explicit which patterns of two input spikes contribute to an output spike.

  10. Resonance model for non-perturbative inputs to gluon distributions in the hadrons

    CERN Document Server

    Ermolaev, B I; Troyan, S I

    2015-01-01

    We construct non-perturbative inputs for the elastic gluon-hadron scattering amplitudes in the forward kinematic region for both polarized and non-polarized hadrons. We use the optical theorem to relate invariant scattering amplitudes to the gluon distributions in the hadrons. By analyzing the structure of the UV and IR divergences, we can determine theoretical conditions on the non-perturbative inputs, and use these to construct the results in a generalized Basic Factorization framework using a simple Resonance Model. These results can then be related to the K_T and Collinear Factorization expressions, and the corresponding constraints can be extracted.

  11. Generation IV benchmarking of TRISO fuel performance models under accident conditions: Modeling input data

    Energy Technology Data Exchange (ETDEWEB)

    Collin, Blaise P. [Idaho National Laboratory (INL), Idaho Falls, ID (United States)

    2014-09-01

    This document presents the benchmark plan for the calculation of particle fuel performance on safety testing experiments that are representative of operational accidental transients. The benchmark is dedicated to the modeling of fission product release under accident conditions by fuel performance codes from around the world, and the subsequent comparison to post-irradiation experiment (PIE) data from the modeled heating tests. The accident condition benchmark is divided into three parts: the modeling of a simplified benchmark problem to assess potential numerical calculation issues at low fission product release; the modeling of the AGR-1 and HFR-EU1bis safety testing experiments; and, the comparison of the AGR-1 and HFR-EU1bis modeling results with PIE data. The simplified benchmark case, thereafter named NCC (Numerical Calculation Case), is derived from "Case 5" of the International Atomic Energy Agency (IAEA) Coordinated Research Program (CRP) on coated particle fuel technology [IAEA 2012]. It is included so participants can evaluate their codes at low fission product release. "Case 5" of the IAEA CRP-6 showed large code-to-code discrepancies in the release of fission products, which were attributed to "effects of the numerical calculation method rather than the physical model" [IAEA 2012]. The NCC is therefore intended to check if these numerical effects subsist. The first two steps imply the involvement of the benchmark participants with a modeling effort following the guidelines and recommendations provided by this document. The third step involves the collection of the modeling results by Idaho National Laboratory (INL) and the comparison of these results with the available PIE data. The objective of this document is to provide all necessary input data to model the benchmark cases, and to give some methodology guidelines and recommendations in order to make all results suitable for comparison

  13. Input-constrained model predictive control via the alternating direction method of multipliers

    DEFF Research Database (Denmark)

    Sokoler, Leo Emil; Frison, Gianluca; Andersen, Martin S.

    2014-01-01

    This paper presents an algorithm, based on the alternating direction method of multipliers, for the convex optimal control problem arising in input-constrained model predictive control. We develop an efficient implementation of the algorithm for the extended linear quadratic control problem (LQCP) with input and input-rate limits. The algorithm alternates between solving an extended LQCP and a highly structured quadratic program. These quadratic programs are solved using a Riccati iteration procedure and a structure-exploiting interior-point method, respectively. The computational cost per iteration is quadratic in the dimensions of the controlled system and linear in the length of the prediction horizon. Simulations show that the approach proposed in this paper is more than an order of magnitude faster than several state-of-the-art quadratic programming algorithms, and that the difference in computation…
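
    The record's Riccati-based solver is specific to the LQCP structure, but the underlying splitting idea can be illustrated generically. Below is a minimal sketch of ADMM applied to a box-constrained quadratic program (the function name, parameters and stopping rule are ours, not from the paper): the x-step solves an unconstrained QP, the z-step is a projection onto the input limits, and u is the scaled dual variable.

```python
import numpy as np

def admm_box_qp(Q, q, lo, hi, rho=1.0, iters=200):
    """ADMM for min 0.5 x'Qx + q'x  s.t.  lo <= x <= hi.

    Splitting x = z: the x-update solves an unconstrained QP,
    the z-update projects (clips) onto the box, u is the scaled dual."""
    n = len(q)
    x = np.zeros(n); z = np.zeros(n); u = np.zeros(n)
    K = Q + rho * np.eye(n)        # fixed across iterations; factor once
    for _ in range(iters):
        x = np.linalg.solve(K, rho * (z - u) - q)   # x-minimisation
        z = np.clip(x + u, lo, hi)                   # projection step
        u = u + x - z                                # dual update
    return z
```

    In the paper's setting the x-step is the extended LQCP solved by a Riccati recursion, which is what makes the per-iteration cost linear in the horizon length.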

  14. Accessory subunit NUYM (NDUFS4) is required for stability of the electron input module and activity of mitochondrial complex I.

    Science.gov (United States)

    Kahlhöfer, Flora; Kmita, Katarzyna; Wittig, Ilka; Zwicker, Klaus; Zickermann, Volker

    2017-02-01

    Mitochondrial complex I is an intricate 1 MDa membrane protein complex with a central role in aerobic energy metabolism. The minimal form of complex I consists of fourteen central subunits that are conserved from bacteria to man. In addition, eukaryotic complex I comprises some 30 accessory subunits of largely unknown function. The gene for the accessory NDUFS4 subunit of human complex I is a hot spot for fatal pathogenic mutations in humans. We have deleted the gene for the orthologous NUYM subunit in the aerobic yeast Yarrowia lipolytica, an established model system to study eukaryotic complex I and complex I-linked diseases. We observed assembly of complex I which lacked only subunit NUYM and retained weak interaction with assembly factor N7BML (human NDUFAF2). Absence of NUYM caused distortion of iron-sulfur clusters of the electron input domain, leading to decreased complex I activity and increased release of reactive oxygen species. We conclude that NUYM has an important stabilizing function for the electron input module of complex I and is essential for proper complex I function.

  15. Large uncertainty in soil carbon modelling related to carbon input calculation method

    Science.gov (United States)

    Keel, Sonja G.; Leifeld, Jens; Taghizadeh-Toosi, Arezoo; Olesen, Jørgen E.

    2016-04-01

    A model-based inventory of carbon (C) sinks and sources in agricultural soils is being established for Switzerland. As part of this project, five frequently used allometric equations that estimate soil C inputs based on measured yields are compared. To evaluate the different methods, we calculate soil C inputs for a long-term field trial in Switzerland, the DOK experiment (bio-Dynamic, bio-Organic, and conventional (German: Konventionell)), which compares five different management systems applied to identical crop rotations. Average calculated soil C inputs vary widely between allometric equations, ranging from 1.6 t C ha-1 yr-1 to 2.6 t C ha-1 yr-1. Among the most important crops in Switzerland, the uncertainty is largest for barley (difference between highest and lowest estimate: 3.0 t C ha-1 yr-1). For the unfertilized control treatment, the estimated soil C inputs vary less between allometric equations than for the treatment that received mineral fertilizer and farmyard manure. Most likely this is due to the higher yields in the latter treatment, i.e. the difference between methods may be amplified because yields differ more. To evaluate the influence of these allometric equations on soil C dynamics, we simulate the DOK trial for the years 1977-2004 using the model C-TOOL (Taghizadeh-Toosi et al. 2014) and the five different soil C input calculation methods. Across all treatments, C-TOOL simulates a decrease in soil C in line with the experimental data. This decline, however, varies between allometric equations (-2.4 t C ha-1 to -6.3 t C ha-1 for the years 1977-2004) and is of the same order of magnitude as the difference between treatments. In summary, the method used to estimate soil C inputs is identified as a significant source of uncertainty in soil C modelling. Choosing an appropriate allometric equation to derive the input data is thus a critical step when setting up a model-based national soil C inventory. References: Taghizadeh-Toosi A et al. (2014) C…

  16. Modeling the impact of common noise inputs on the network activity of retinal ganglion cells.

    Science.gov (United States)

    Vidne, Michael; Ahmadian, Yashar; Shlens, Jonathon; Pillow, Jonathan W; Kulkarni, Jayant; Litke, Alan M; Chichilnisky, E J; Simoncelli, Eero; Paninski, Liam

    2012-08-01

    Synchronized spontaneous firing among retinal ganglion cells (RGCs), on timescales faster than visual responses, has been reported in many studies. Two candidate mechanisms of synchronized firing include direct coupling and shared noisy inputs. In neighboring parasol cells of primate retina, which exhibit rapid synchronized firing that has been studied extensively, recent experimental work indicates that direct electrical or synaptic coupling is weak, but shared synaptic input in the absence of modulated stimuli is strong. However, previous modeling efforts have not accounted for this aspect of firing in the parasol cell population. Here we develop a new model that incorporates the effects of common noise, and apply it to analyze the light responses and synchronized firing of a large, densely-sampled network of over 250 simultaneously recorded parasol cells. We use a generalized linear model in which the spike rate in each cell is determined by the linear combination of the spatio-temporally filtered visual input, the temporally filtered prior spikes of that cell, and unobserved sources representing common noise. The model accurately captures the statistical structure of the spike trains and the encoding of the visual stimulus, without the direct coupling assumption present in previous modeling work. Finally, we examined the problem of decoding the visual stimulus from the spike train given the estimated parameters. The common-noise model produces Bayesian decoding performance as accurate as that of a model with direct coupling, but with significantly more robustness to spike timing perturbations.
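
    The structure of such a generalized linear spiking model can be sketched in a toy simulation. Everything below is illustrative: the filters, bias and common-noise trace are invented, and in the study the model is fitted to recorded spike trains rather than simulated with hand-picked parameters. Each cell's rate is an exponential function of filtered stimulus, its own spike history, and a shared noise term.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_glm(stim, k, h, common_noise, dt=0.001, b=-2.0):
    """Simulate one cell of a GLM: Poisson spiking driven by a stimulus
    filter k, a spike-history filter h (most recent spike weighted by
    h[0]), and a shared common-noise trace, combined through exp()."""
    T = len(stim)
    drive = np.convolve(stim, k)[:T]            # temporally filtered input
    spikes = np.zeros(T)
    L = len(h)
    for t in range(T):
        past = spikes[max(0, t - L):t][::-1]    # most recent spikes first
        hist = float(past @ h[:len(past)])      # spike-history feedback
        rate = np.exp(b + drive[t] + hist + common_noise[t])
        spikes[t] = rng.poisson(rate * dt)      # conditionally Poisson count
    return spikes
```

    Correlating the shared `common_noise` term across cells is what lets this model family reproduce synchronized firing without direct coupling filters between cells.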

  17. Simulation model structure numerically robust to changes in magnitude and combination of input and output variables

    DEFF Research Database (Denmark)

    Rasmussen, Bjarne D.; Jakobsen, Arne

    1999-01-01

    Mathematical models of refrigeration systems are often based on a coupling of component models forming a “closed loop” type of system model. In these models the coupling structure of the component models represents the actual flow path of refrigerant in the system. Very often numerical instabilities prevent the practical use of such a system model for more than one input/output combination and for other magnitudes of refrigerating capacities. A higher numerical robustness of system models can be achieved by making a model for the refrigeration cycle the core of the system model and by using variables with narrow definition intervals for the exchange of information between the cycle model and the component models. The advantages of the cycle-oriented method are illustrated by an example showing the refrigeration cycle similarities between two very different refrigeration systems.

  18. Global Behaviors of a Chemostat Model with Delayed Nutrient Recycling and Periodically Pulsed Input

    Directory of Open Access Journals (Sweden)

    Kai Wang

    2010-01-01

    The dynamic behaviors in a chemostat model with delayed nutrient recycling and periodically pulsed input are studied. By introducing a new analysis technique, sufficient and necessary conditions for the permanence and extinction of the microorganisms are obtained. Furthermore, by using the Lyapunov function method, a sufficient condition for the global attractivity of the model is established. Finally, an example is given to demonstrate the effectiveness of the results in this paper.

  19. Use of Generalised Linear Models to quantify rainfall input uncertainty to hydrological modelling in the Upper Nile

    Science.gov (United States)

    Kigobe, M.; McIntyre, N.; Wheater, H. S.

    2009-04-01

    Interest in the application of climate and hydrological models in the Nile basin has risen in the recent past; however, the first drawback for most efforts has been the estimation of historic precipitation patterns. In this study we have applied stochastic models to infill and extend observed data sets to generate inputs for hydrological modelling. Several stochastic climate models within the Generalised Linear Modelling (GLM) framework have been applied to reproduce spatial and temporal patterns of precipitation in the Kyoga basin. A logistic regression model (describing rainfall occurrence) and a gamma distribution (describing rainfall amounts) are used to model rainfall patterns. The parameters of the models are functions of spatial and temporal covariates, and are fitted to the observed rainfall data using log-likelihood methods. Using the fitted model, multi-site rainfall sequences over the Kyoga basin are generated stochastically as a function of the dominant seasonal, climatic and geographic controls. The rainfall sequences generated are then used to drive a semi-distributed hydrological model using the Soil and Water Assessment Tool (SWAT). The sensitivity of runoff to uncertainty associated with missing precipitation records is thus tested. In an application to the Lake Kyoga catchment, the performance of the hydrological model depends strongly on the spatial representation of the input precipitation patterns, the model parameterisation and the performance of the GLM stochastic models used to generate the input rainfall. The results obtained so far show that stochastic models can be developed for several climatic regions within the Kyoga basin and that, once a stochastic rainfall model has been identified, input uncertainty due to precipitation can be usefully quantified. The ways forward for rainfall modelling and hydrological simulation in Uganda and the Upper Nile are discussed.
    Key Words: Precipitation, Generalised Linear Models, Input Uncertainty, Soil Water
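
    The two-stage GLM structure described above (logistic occurrence, gamma amounts with a log link) can be sketched as a daily rainfall generator. This is a schematic of the model family only: the coefficients, covariates and gamma shape below are invented, whereas the study fits them to observed Kyoga rainfall by maximum likelihood.

```python
import numpy as np

rng = np.random.default_rng(42)

def simulate_rainfall(covariates, beta_occ, beta_amt, shape=0.7):
    """Two-stage GLM rainfall generator (hypothetical coefficients):
    occurrence ~ Bernoulli(logistic(X @ beta_occ)),
    wet-day amount ~ Gamma(shape, mean = exp(X @ beta_amt))."""
    X = np.asarray(covariates, dtype=float)
    p_wet = 1.0 / (1.0 + np.exp(-(X @ beta_occ)))   # logistic occurrence model
    wet = rng.random(len(X)) < p_wet
    mean_amt = np.exp(X @ beta_amt)                  # log-link for amounts
    amounts = rng.gamma(shape, mean_amt / shape)     # Gamma with that mean
    return np.where(wet, amounts, 0.0)
```

    In the multi-site setting the covariate matrix X would carry the seasonal, climatic and geographic controls mentioned in the abstract, and the generated sequences would then feed the SWAT model.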

  20. Regional disaster impact analysis: comparing input-output and computable general equilibrium models

    Science.gov (United States)

    Koks, Elco E.; Carrera, Lorenzo; Jonkeren, Olaf; Aerts, Jeroen C. J. H.; Husby, Trond G.; Thissen, Mark; Standardi, Gabriele; Mysiak, Jaroslav

    2016-08-01

    A variety of models have been applied to assess the economic losses of disasters, the most common being input-output (IO) and computable general equilibrium (CGE) models. In addition, an increasing number of scholars have developed hybrid approaches that combine either or both of these with noneconomic methods. While both IO and CGE models are widely used, they are mainly compared on theoretical grounds. Few studies have compared disaster impacts of different model types in a systematic way and for the same geographical area, using similar input data. Such a comparison is valuable from both a scientific and a policy perspective, as the magnitude and the spatial distribution of the estimated losses are likely to vary with the chosen modelling approach (IO, CGE, or hybrid). Hence, regional disaster impact loss estimates resulting from a range of models facilitate better decisions and policy making. Therefore, this study analyses the economic consequences for a specific case study, using three regional disaster impact models: two hybrid IO models and a CGE model. The case study concerns two flood scenarios in the Po River basin in Italy. Modelling results indicate that the estimated total (national) economic losses and the regional distribution of those losses may vary by up to a factor of 7 between the three models, depending on the type of recovery path. Total economic impact, comprising all Italian regions, is nevertheless negative in all models.
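
    Behind any IO-type loss estimate sits the open Leontief model x = Ax + d, where A is the technical-coefficient matrix and d is final demand. A disaster can be represented crudely as a demand shock, with indirect losses propagating through the inter-industry linkages. The two-sector numbers below are made up for illustration; the study's hybrid IO and CGE models are far more elaborate.

```python
import numpy as np

def leontief_output(A, final_demand):
    """Total sectoral output x solving x = A x + d (open Leontief model)."""
    n = A.shape[0]
    return np.linalg.solve(np.eye(n) - A, final_demand)

# Illustrative two-sector economy (numbers are invented)
A = np.array([[0.2, 0.3],
              [0.1, 0.4]])
d_base  = np.array([100.0, 200.0])
d_shock = np.array([ 90.0, 200.0])   # disaster cuts sector-1 demand by 10
loss = leontief_output(A, d_base) - leontief_output(A, d_shock)
```

    Note that the total output loss exceeds the direct 10-unit demand shock: the difference is the indirect loss that distinguishes IO-based estimates from a naive direct-damage count.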

  1. Formulation of a hybrid calibration approach for a physically based distributed model with NEXRAD data input

    Science.gov (United States)

    Di Luzio, Mauro; Arnold, Jeffrey G.

    2004-10-01

    This paper describes the background, formulation and results of an hourly input-output calibration approach proposed for the Soil and Water Assessment Tool (SWAT) watershed model, presented for 24 representative storm events occurring during the period between 1994 and 2000 in the Blue River watershed (1233 km², located in Oklahoma). This effort is the first follow-up to participation in the National Weather Service Distributed Model Intercomparison Project (DMIP), an opportunity to apply, for the first time within the SWAT modeling framework, routines for hourly stream flow prediction based on gridded precipitation (NEXRAD) data input. Previous SWAT model simulations, uncalibrated and with moderate manual calibration (only the water balance over the calibration period), were provided for the entire set of watersheds and associated outlets for the comparison designed in the DMIP project. The extended goal of this follow-up was to verify the model efficiency in simulating hourly hydrographs by calibrating each storm event using the formulated approach. This included a combination of a manual and an automatic calibration approach (Shuffled Complex Evolution Method) and the use of input parameter values allowed to vary only within their physical extent. While the model provided reasonable water budget results with minimal calibration, event simulations with the revised calibration were significantly improved. The combination of NEXRAD precipitation data input, the soil water balance and runoff equations, along with the calibration strategy described in the paper, appears to adequately describe the storm events. The presented application and the formulated calibration method are initial steps toward the improvement of the hourly simulation of the SWAT model loading variables associated with storm flow, such as sediment and pollutants, and the success of Total Maximum Daily Load (TMDL) projects.

  2. Radiation Belt and Plasma Model Requirements

    Science.gov (United States)

    Barth, Janet L.

    2005-01-01

    Contents include the following: Radiation belt and plasma model environment. Environment hazards for systems and humans. Need for new models. How models are used. Model requirements. How can space weather community help?

  3. Consolidating soil carbon turnover models by improved estimates of belowground carbon input

    Science.gov (United States)

    Taghizadeh-Toosi, Arezoo; Christensen, Bent T.; Glendining, Margaret; Olesen, Jørgen E.

    2016-09-01

    World soil carbon (C) stocks are third only to those in the ocean and the Earth's crust, and represent twice the amount currently present in the atmosphere. Therefore, any small change in the amount of soil organic C (SOC) may affect carbon dioxide (CO2) concentrations in the atmosphere. Dynamic models of SOC help reveal the interactions among soil carbon systems, climate and land management, and they are also frequently used to help assess SOC dynamics. These models often use allometric functions to calculate soil C inputs, in which the amounts of C in both above- and belowground crop residues are assumed to be proportional to crop harvest yield. Here we argue that simulating changes in SOC stocks based on C inputs that are proportional to crop yield is not supported by data from long-term experiments with measured SOC changes. Rather, there is evidence that root C inputs are largely independent of crop yield, but crop specific. We discuss the implications of applying a fixed belowground C input, regardless of crop yield, for agricultural greenhouse gas mitigation and accounting.

  4. Application of a Linear Input/Output Model to Tankless Water Heaters

    Energy Technology Data Exchange (ETDEWEB)

    Butcher, T.; Schoenbauer, B.

    2011-12-31

    In this study, the applicability of a linear input/output model to gas-fired, tankless water heaters has been evaluated. This simple model assumes that the relationship between input and output, averaged over both active draw and idle periods, is linear. This approach is being applied to boilers in other studies and offers the potential to make a small number of simple measurements to obtain the model parameters. These parameters can then be used to predict performance under complex load patterns. Both condensing and non-condensing water heaters have been tested under a very wide range of load conditions. It is shown that this approach can be used to reproduce performance metrics, such as the energy factor, and can be used to evaluate the impacts of alternative draw patterns and conditions.
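
    The linear input/output model itself amounts to fitting Q_in = a·Q_out + b to period-averaged measurements, where the intercept b captures standby and idle losses. A minimal least-squares sketch (the synthetic numbers are invented, not measurements from the study):

```python
import numpy as np

def fit_linear_io(q_out, q_in):
    """Least-squares fit of the linear input/output model
    q_in = a * q_out + b, where b captures standby/idle losses."""
    A = np.vstack([q_out, np.ones_like(q_out)]).T
    (a, b), *_ = np.linalg.lstsq(A, q_in, rcond=None)
    return a, b

# Synthetic period-averaged measurements in kW (made-up numbers):
q_out = np.array([0.0, 1.0, 2.0, 4.0, 8.0])
q_in  = 1.25 * q_out + 0.15      # 80% marginal efficiency plus idle loss
a, b = fit_linear_io(q_out, q_in)
```

    Once a and b are known, the input energy for any complex draw pattern follows from its average output load, which is what allows metrics such as the energy factor to be reproduced from a small number of measurements.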

  5. Analytical modeling of the input admittance of an electric drive for stability analysis purposes

    Science.gov (United States)

    Girinon, S.; Baumann, C.; Piquet, H.; Roux, N.

    2009-07-01

    Embedded HVDC electric distribution networks face difficult power quality and stability issues. To help resolve these problems, this paper develops an analytical model of an electric drive. This self-contained model includes an inverter, its regulation loops and the PMSM. After comparing the model with its equivalent (abc) full model, the study focuses on frequency analysis. The association with an input filter allows the stability of the whole assembly to be assessed by means of the Routh-Hurwitz criterion.
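
    The Routh-Hurwitz criterion mentioned above checks whether all roots of the closed-loop characteristic polynomial lie in the open left half-plane by inspecting the first column of the Routh array. A generic sketch follows (not the paper's drive model; this plain version assumes no zero appears in the first column, so the singular special cases are not handled):

```python
def routh_first_column(coeffs):
    """First column of the Routh array for a polynomial given with the
    highest power first. Assumes no first-column entry becomes zero."""
    c = list(map(float, coeffs))
    r0 = c[0::2]                         # even-index coefficients
    r1 = c[1::2]                         # odd-index coefficients
    width = len(r0)
    r1 += [0.0] * (width - len(r1))      # pad second row
    table = [r0, r1]
    for _ in range(len(c) - 2):          # build remaining n-1 rows
        prev, cur = table[-2], table[-1]
        new = [(cur[0] * prev[j + 1] - prev[0] * cur[j + 1]) / cur[0]
               for j in range(width - 1)]
        new.append(0.0)
        table.append(new)
    return [row[0] for row in table]

def is_hurwitz(coeffs):
    """All roots in the open left half-plane iff no sign change
    in the first column of the Routh array."""
    return all(v > 0 for v in routh_first_column(coeffs))
```

    For an input filter plus drive, `coeffs` would be the coefficients of the characteristic polynomial of the interconnected system, so stability margins can be explored symbolically or numerically as filter parameters vary.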

  6. New Results on Robust Model Predictive Control for Time-Delay Systems with Input Constraints

    Directory of Open Access Journals (Sweden)

    Qing Lu

    2014-01-01

    This paper investigates the problem of model predictive control for a class of nonlinear systems subject to state delays and input constraints. The time-varying delay is considered with both upper and lower bounds. A new model is proposed to approximate the delay, and the uncertainty is of polytopic type. For the state-feedback MPC design objective, we formulate an optimization problem. Under model transformation, a new model predictive controller is designed such that the robust asymptotic stability of the closed-loop system can be guaranteed. Finally, the applicability of the presented results is demonstrated by a practical example.

  7. A hippocampal cognitive prosthesis: multi-input, multi-output nonlinear modeling and VLSI implementation.

    Science.gov (United States)

    Berger, Theodore W; Song, Dong; Chan, Rosa H M; Marmarelis, Vasilis Z; LaCoss, Jeff; Wills, Jack; Hampson, Robert E; Deadwyler, Sam A; Granacki, John J

    2012-03-01

    This paper describes the development of a cognitive prosthesis designed to restore the ability to form new long-term memories typically lost after damage to the hippocampus. The animal model used is delayed nonmatch-to-sample (DNMS) behavior in the rat, and the "core" of the prosthesis is a biomimetic multi-input/multi-output (MIMO) nonlinear model that provides the capability for predicting spatio-temporal spike train output of hippocampus (CA1) based on spatio-temporal spike train inputs recorded presynaptically to CA1 (e.g., CA3). We demonstrate the capability of the MIMO model for highly accurate predictions of CA1 coded memories that can be made on a single-trial basis and in real-time. When hippocampal CA1 function is blocked and long-term memory formation is lost, successful DNMS behavior also is abolished. However, when MIMO model predictions are used to reinstate CA1 memory-related activity by driving spatio-temporal electrical stimulation of hippocampal output to mimic the patterns of activity observed in control conditions, successful DNMS behavior is restored. We also outline the design in very-large-scale integration for a hardware implementation of a 16-input, 16-output MIMO model, along with spike sorting, amplification, and other functions necessary for a total system, when coupled together with electrode arrays to record extracellularly from populations of hippocampal neurons, that can serve as a cognitive prosthesis in behaving animals.

  8. Skin lesion computational diagnosis of dermoscopic images: Ensemble models based on input feature manipulation.

    Science.gov (United States)

    Oliveira, Roberta B; Pereira, Aledir S; Tavares, João Manuel R S

    2017-10-01

    The number of deaths worldwide due to melanoma has risen in recent times, in part because melanoma is the most aggressive type of skin cancer. Computational systems have been developed to assist dermatologists in early diagnosis of skin cancer, or even to monitor skin lesions. However, there still remains a challenge to improve classifiers for the diagnosis of such skin lesions. The main objective of this article is to evaluate different ensemble classification models based on input feature manipulation to diagnose skin lesions. Input feature manipulation processes are based on feature subset selections from shape properties, colour variation and texture analysis to generate diversity for the ensemble models. Three subset selection models are presented here: (1) a subset selection model based on specific feature groups, (2) a correlation-based subset selection model, and (3) a subset selection model based on feature selection algorithms. Each ensemble classification model is generated using an optimum-path forest classifier and integrated with a majority voting strategy. The proposed models were applied on a set of 1104 dermoscopic images using a cross-validation procedure. The best results were obtained by the first ensemble classification model that generates a feature subset ensemble based on specific feature groups. The skin lesion diagnosis computational system achieved 94.3% accuracy, 91.8% sensitivity and 96.7% specificity. The input feature manipulation process based on specific feature subsets generated the greatest diversity for the ensemble classification model with very promising results.
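
    The ensemble scheme described above, where each member is trained on a different feature subset and predictions are combined by majority vote, can be sketched compactly. As a stand-in for the optimum-path forest members used in the study, the sketch below uses simple nearest-centroid classifiers; the class, data and subsets are all illustrative.

```python
import numpy as np

class SubsetEnsemble:
    """Majority-vote ensemble in which each member sees one feature
    subset. Members here are nearest-centroid classifiers, a stand-in
    for the optimum-path forest classifiers of the study."""
    def __init__(self, subsets):
        self.subsets = subsets        # list of column-index arrays
        self.centroids = []           # per member: {label: centroid}

    def fit(self, X, y):
        for cols in self.subsets:
            cents = {c: X[y == c][:, cols].mean(axis=0) for c in np.unique(y)}
            self.centroids.append(cents)
        return self

    def predict(self, X):
        votes = []
        for cols, cents in zip(self.subsets, self.centroids):
            labels = np.array(sorted(cents))
            D = np.stack([np.linalg.norm(X[:, cols] - cents[c], axis=1)
                          for c in labels], axis=1)
            votes.append(labels[D.argmin(axis=1)])   # member's prediction
        votes = np.stack(votes)                      # members x samples
        # majority vote across members for each sample
        return np.array([np.bincount(col).argmax() for col in votes.T])
```

    The diversity the abstract refers to comes from the members disagreeing on borderline cases because they see different shape, colour or texture subsets; the vote then averages out individual members' mistakes.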

  9. The role of additive neurogenesis and synaptic plasticity in a hippocampal memory model with grid-cell like input.

    Directory of Open Access Journals (Sweden)

    Peter A Appleby

    Recently, we presented a study of adult neurogenesis in a simplified hippocampal memory model. The network was required to encode and decode memory patterns despite changing input statistics. We showed that additive neurogenesis was a more effective adaptation strategy than neuronal turnover and conventional synaptic plasticity, as it allowed the network to respond to changes in the input statistics while preserving representations of earlier environments. Here we extend our model to include realistic, spatially driven input firing patterns in the form of grid cells in the entorhinal cortex. We compare network performance across a sequence of spatial environments using three distinct adaptation strategies: conventional synaptic plasticity, where the network is of fixed size but the connectivity is plastic; neuronal turnover, where the network is of fixed size but units in the network may die and be replaced; and additive neurogenesis, where the network starts out with fewer initial units but grows over time. We confirm that additive neurogenesis is a superior adaptation strategy when using realistic, spatially structured input patterns. We then show that a more biologically plausible neurogenesis rule that incorporates cell death and enhanced plasticity of new granule cells has an overall performance significantly better than any one of the three individual strategies operating alone. This adaptation rule can be tailored to maximise performance of the network when operating as either a short- or long-term memory store. We also examine the time course of adult neurogenesis over the lifetime of an animal raised under different hypothetical rearing conditions. These growth profiles have several distinct features that form a theoretical prediction that could be tested experimentally. Finally, we show that place cells can emerge and refine in a realistic manner in our model as a direct result of the sparsification performed by the dentate gyrus.

  10. Time series analysis as input for clinical predictive modeling: Modeling cardiac arrest in a pediatric ICU

    Directory of Open Access Journals (Sweden)

    Kennedy Curtis E

    2011-10-01

    Background: Thousands of children experience cardiac arrest events every year in pediatric intensive care units. Most of these children die. Cardiac arrest prediction tools are used as part of medical emergency team evaluations to identify patients in standard hospital beds who are at high risk of cardiac arrest. There are no models to predict cardiac arrest in pediatric intensive care units, though, where the risk of an arrest is 10 times higher than for standard hospital beds. Current tools are based on a multivariable approach that does not characterize deterioration, which often precedes cardiac arrests. Characterizing deterioration requires a time series approach. The purpose of this study is to propose a method that will allow time series data to be used in clinical prediction models. Successful implementation of these methods has the potential to bring arrest prediction to the pediatric intensive care environment, possibly allowing for interventions that can save lives and prevent disabilities. Methods: We reviewed prediction models from nonclinical domains that employ time series data, and identified the steps that are necessary for building predictive models using time series clinical data. We illustrate the method by applying it to the specific case of building a predictive model for cardiac arrest in a pediatric intensive care unit. Results: Time course analysis studies from genomic analysis provided a modeling template that was compatible with the steps required to develop a model from clinical time series data. The steps include: (1) selecting candidate variables; (2) specifying measurement parameters; (3) defining data format; (4) defining time window duration and resolution; (5) calculating latent variables for candidate variables not directly measured; (6) calculating time series features as latent variables; (7) creating data subsets to measure model performance effects attributable to various classes of candidate variables; (8)…
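
    The step of calculating time series features over windows (step 6 above) can be sketched as a sliding-window transform that turns a raw vital-sign series into feature rows for a conventional classifier. The particular features (mean, linear trend, variability) and window sizes below are illustrative choices, not the study's protocol.

```python
import numpy as np

def window_features(series, width, step=1):
    """Turn a raw time series into per-window feature rows
    (mean, slope, variability), i.e. latent time-series features
    usable as inputs to a standard prediction model."""
    rows = []
    t = np.arange(width)
    for start in range(0, len(series) - width + 1, step):
        w = series[start:start + width]
        slope = np.polyfit(t, w, 1)[0]   # linear trend inside the window
        rows.append([w.mean(), slope, w.std()])
    return np.array(rows)
```

    Each row summarises how the signal is behaving over one window, so deterioration shows up as drifting means or slopes rather than being invisible to a single-timepoint multivariable model.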

  11. Input dependent cell assembly dynamics in a model of the striatal medium spiny neuron network.

    Science.gov (United States)

    Ponzi, Adam; Wickens, Jeff

    2012-01-01

    The striatal medium spiny neuron (MSN) network is sparsely connected with fairly weak GABAergic collaterals receiving an excitatory glutamatergic cortical projection. Peri-stimulus time histograms (PSTH) of MSN population response investigated in various experimental studies display strong firing rate modulations distributed throughout behavioral task epochs. In previous work we have shown by numerical simulation that sparse random networks of inhibitory spiking neurons with characteristics appropriate for UP state MSNs form cell assemblies which fire together coherently in sequences on long behaviorally relevant timescales when the network receives a fixed pattern of constant input excitation. Here we first extend that model to the case where cortical excitation is composed of many independent noisy Poisson processes and demonstrate that cell assembly dynamics is still observed when the input is sufficiently weak. However if cortical excitation strength is increased more regularly firing and completely quiescent cells are found, which depend on the cortical stimulation. Subsequently we further extend previous work to consider what happens when the excitatory input varies as it would when the animal is engaged in behavior. We investigate how sudden switches in excitation interact with network generated patterned activity. We show that sequences of cell assembly activations can be locked to the excitatory input sequence and outline the range of parameters where this behavior is shown. Model cell population PSTH display both stimulus and temporal specificity, with large population firing rate modulations locked to elapsed time from task events. Thus the random network can generate a large diversity of temporally evolving stimulus dependent responses even though the input is fixed between switches. We suggest the MSN network is well suited to the generation of such slow coherent task dependent response which could be utilized by the animal in behavior.

  12. Input dependent cell assembly dynamics in a model of the striatal medium spiny neuron network

    Directory of Open Access Journals (Sweden)

    Adam Ponzi

    2012-03-01

    The striatal medium spiny neuron (MSN) network is sparsely connected with fairly weak GABAergic collaterals receiving an excitatory glutamatergic cortical projection. Peri-stimulus time histograms (PSTH) of MSN population response investigated in various experimental studies display strong firing rate modulations distributed throughout behavioural task epochs. In previous work we have shown by numerical simulation that sparse random networks of inhibitory spiking neurons with characteristics appropriate for UP state MSNs form cell assemblies which fire together coherently in sequences on long behaviourally relevant timescales when the network receives a fixed pattern of constant input excitation. Here we first extend that model to the case where cortical excitation is composed of many independent noisy Poisson processes and demonstrate that cell assembly dynamics is still observed when the input is sufficiently weak. However, if cortical excitation strength is increased, more regularly firing and completely quiescent cells are found, which depend on the cortical stimulation. Subsequently we further extend previous work to consider what happens when the excitatory input varies as it would when the animal is engaged in behaviour. We investigate how sudden switches in excitation interact with network generated patterned activity. We show that sequences of cell assembly activations can be locked to the excitatory input sequence and delineate the range of parameters where this behaviour is shown. Model cell population PSTH display both stimulus and temporal specificity, with large population firing rate modulations locked to elapsed time from task events. Thus the random network can generate a large diversity of temporally evolving stimulus dependent responses even though the input is fixed between switches. We suggest the MSN network is well suited to the generation of such slow coherent task dependent responses, which could be utilized by the animal in behaviour.

  13. What input data are needed to accurately model electromagnetic fields from mobile phone base stations?

    Science.gov (United States)

    Beekhuizen, Johan; Kromhout, Hans; Bürgi, Alfred; Huss, Anke; Vermeulen, Roel

    2015-01-01

    The increase in mobile communication technology has led to concern about potential health effects of radio frequency electromagnetic fields (RF-EMFs) from mobile phone base stations. Different RF-EMF prediction models have been applied to assess population exposure to RF-EMF. Our study examines what input data are needed to accurately model RF-EMF, as detailed data are not always available for epidemiological studies. We used NISMap, a 3D radio wave propagation model, to test models with various levels of detail in building and antenna input data. The model outcomes were compared with outdoor measurements taken in Amsterdam, the Netherlands. Results showed good agreement between modelled and measured RF-EMF when 3D building data and basic antenna information (location, height, frequency and direction) were used: Spearman correlations were >0.6. Model performance was not sensitive to changes in building damping parameters. Antenna-specific information about down-tilt, type and output power did not significantly improve model performance compared with using average down-tilt and power values, or assuming one standard antenna type. We conclude that 3D radio wave propagation modelling is a feasible approach to predict outdoor RF-EMF levels for ranking exposure levels in epidemiological studies, when 3D building data and information on the antenna height, frequency, location and direction are available.

  14. Input variable selection for data-driven models of Coriolis flowmeters for two-phase flow measurement

    Science.gov (United States)

    Wang, Lijuan; Yan, Yong; Wang, Xue; Wang, Tao

    2017-03-01

    Input variable selection is an essential step in the development of data-driven models for environmental, biological and industrial applications. Through input variable selection to eliminate the irrelevant or redundant variables, a suitable subset of variables is identified as the input of a model. Meanwhile, through input variable selection the complexity of the model structure is simplified and the computational efficiency is improved. This paper describes the procedures of input variable selection for data-driven models for the measurement of liquid mass flowrate and gas volume fraction under two-phase flow conditions using Coriolis flowmeters. Three advanced input variable selection methods, including partial mutual information (PMI), genetic algorithm-artificial neural network (GA-ANN) and tree-based iterative input selection (IIS), are applied in this study. Typical data-driven models incorporating support vector machine (SVM) are established individually based on the input candidates resulting from the selection methods. The validity of the selection outcomes is assessed through an output performance comparison of the SVM based data-driven models and sensitivity analysis. The validation and analysis results suggest that the input variables selected from the PMI algorithm provide more effective information for the models to measure liquid mass flowrate, while the IIS algorithm provides fewer but more effective variables for the models to predict gas volume fraction.
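As a simplified stand-in for the selection methods named above (PMI, GA-ANN and IIS are considerably more involved), a relevance ranking by absolute Pearson correlation illustrates the basic idea of picking a suitable input subset from a pool of candidates. Variable names and values here are hypothetical:

```python
def pearson(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    vx = sum((a - mx) ** 2 for a in x)
    vy = sum((b - my) ** 2 for b in y)
    return cov / (vx * vy) ** 0.5

def select_inputs(candidates, target, k):
    # Rank candidate variables by |correlation| with the target, keep top k.
    scored = sorted(candidates.items(),
                    key=lambda kv: abs(pearson(kv[1], target)),
                    reverse=True)
    return [name for name, _ in scored[:k]]

target = [1.0, 2.0, 3.0, 4.0, 5.0]          # e.g. liquid mass flowrate
candidates = {
    "density_drop": [1.1, 2.0, 2.9, 4.2, 5.1],   # strongly related
    "temperature":  [3.0, 3.1, 2.9, 3.0, 3.2],   # weakly related
    "noise":        [3.0, 1.0, 4.0, 1.0, 3.0],   # unrelated
}
print(select_inputs(candidates, target, 2))
```

Methods such as PMI improve on this by measuring nonlinear dependence and discounting information already carried by previously selected variables.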

  15. Stable isotopes and Digital Elevation Models to study nutrient inputs in high-Arctic lakes

    Science.gov (United States)

    Calizza, Edoardo; Rossi, David; Costantini, Maria Letizia; Careddu, Giulio; Rossi, Loreto

    2016-04-01

    Ice cover, run-off from the watershed, aquatic and terrestrial primary productivity, and guano deposition from birds are key factors controlling nutrient and organic matter inputs in high-Arctic lakes. All these factors are expected to be significantly affected by climate change. Quantifying these controls is a key baseline step to understand what combination of factors subtends the biological productivity in Arctic lakes and will drive their ecological response to environmental change. Based on Digital Elevation Models, drainage maps, and C and N elemental content and stable isotope analysis in sediments, aquatic vegetation and a dominant macroinvertebrate species (Lepidurus arcticus Pallas 1793) belonging to Tvillingvatnet, Storvatnet and Kolhamna, three lakes located in North Spitsbergen (Svalbard), we propose an integrated approach for the analysis of (i) nutrient and organic matter inputs in lakes; (ii) the role of catchment hydro-geomorphology in determining inter-lake differences in the isotopic composition of sediments; and (iii) effects of diverse nutrient inputs on the isotopic niche of Lepidurus arcticus. Given its high run-off and large catchment, organic deposits in Tvillingvatnet were dominated by terrestrial inputs, whereas inputs were mainly of aquatic origin in Storvatnet, a lowland lake with low potential run-off. In Kolhamna, organic deposits seem to be dominated by inputs from birds, which actually colonise the area. Isotopic signatures were similar between samples within each lake, representing precise tracers for studies on the effect of climate change on biogeochemical cycles in lakes. The isotopic niche of L. arcticus reflected differences in sediments between lakes, suggesting a bottom-up effect of the hydro-geomorphology characterising each lake on the nutrients assimilated by this species. The presented approach proved to be an effective research pathway for the identification of factors subtending nutrient and organic matter inputs and transfer

  16. Estimating input parameters from intracellular recordings in the Feller neuronal model

    Science.gov (United States)

    Bibbona, Enrico; Lansky, Petr; Sirovich, Roberta

    2010-03-01

    We study the estimation of the input parameters in a Feller neuronal model from a trajectory of the membrane potential sampled at discrete times. These input parameters are identified with the drift and the infinitesimal variance of the underlying stochastic diffusion process with multiplicative noise. The state space of the process is restricted from below by an inaccessible boundary. Further, the model is characterized by the presence of an absorbing threshold, the first hitting of which determines the length of each trajectory and which constrains the state space from above. We compare, both in the presence and in the absence of the absorbing threshold, the efficiency of different known estimators. In addition, we propose an estimator for the drift term, which is proved to be more efficient than the others, at least in the explored range of the parameters. The presence of the threshold makes the estimates of the drift term biased, and two methods to correct it are proposed.
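The moment-based idea behind such estimators can be sketched for the infinitesimal variance: for a discretely sampled diffusion with multiplicative noise sigma*sqrt(X), the summed squared increments normalised by the integrated state recover sigma^2. This is an illustrative estimator under an Euler discretisation, not one of the specific estimators compared in the paper, and the parameter values are invented:

```python
import random

def simulate_feller(mu, tau, sigma, x0, dt, n, rng):
    # Euler-Maruyama for dX = (mu - X/tau) dt + sigma * sqrt(X) dW.
    xs = [x0]
    for _ in range(n):
        x = xs[-1]
        dw = rng.gauss(0.0, dt ** 0.5)
        x_next = x + (mu - x / tau) * dt + sigma * max(x, 0.0) ** 0.5 * dw
        xs.append(max(x_next, 1e-9))  # keep above the inaccessible lower boundary
    return xs

def estimate_sigma2(xs, dt):
    # Moment estimator: sum of squared increments over the integrated state.
    num = sum((xs[i + 1] - xs[i]) ** 2 for i in range(len(xs) - 1))
    den = sum(xs[i] * dt for i in range(len(xs) - 1))
    return num / den

rng = random.Random(42)
xs = simulate_feller(mu=1.0, tau=1.0, sigma=0.5, x0=1.0, dt=0.001, n=100000, rng=rng)
print(estimate_sigma2(xs, 0.001))   # should be close to sigma^2 = 0.25
```

The drift estimate is the harder problem treated in the paper, since first hitting of the absorbing threshold truncates each trajectory and biases naive estimators.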

  17. On the redistribution of existing inputs using the spherical frontier dea model

    Directory of Open Access Journals (Sweden)

    José Virgilio Guedes de Avellar

    2010-04-01

    Full Text Available The Spherical Frontier DEA Model (SFM) (Avellar et al., 2007) was developed to be used when one wants to fairly distribute a new and fixed input to a group of Decision Making Units (DMUs). SFM's basic idea is to distribute this new and fixed input in such a way that every DMU will be placed on an efficiency frontier with a spherical shape. We use SFM to analyze the problems that appear when one wants to redistribute an already existing input to a group of DMUs such that the total sum of this input remains constant. We also analyze the case in which this total sum may vary.

  18. Automated detection of arterial input function in DSC perfusion MRI in a stroke rat model

    Science.gov (United States)

    Yeh, M.-Y.; Lee, T.-H.; Yang, S.-T.; Kuo, H.-H.; Chyi, T.-K.; Liu, H.-L.

    2009-05-01

    Quantitative cerebral blood flow (CBF) estimation requires deconvolution of the tissue concentration time curves with an arterial input function (AIF). However, image-based determination of the AIF in rodents is challenging due to limited spatial resolution. We evaluated the feasibility of quantitative analysis using automated AIF detection and compared the results with commonly applied semi-quantitative analysis. Permanent occlusion of the bilateral or unilateral common carotid artery was used to induce cerebral ischemia in rats. Imaging using the dynamic susceptibility contrast method was performed on a 3-T magnetic resonance scanner with a spin-echo echo-planar-image sequence (TR/TE = 700/80 ms, FOV = 41 mm, matrix = 64, 3 slices, SW = 2 mm), starting from 7 s prior to contrast injection (1.2 ml/kg), at four different time points. For quantitative analysis, CBF was calculated by deconvolution with the AIF, which was obtained from the 10 voxels with the greatest contrast enhancement. For semi-quantitative analysis, relative CBF was estimated by the integral divided by the first moment of the relaxivity time curves. We observed that if the AIFs obtained in three different ROIs (whole brain, hemisphere without lesion and hemisphere with lesion) were similar, the CBF ratios (lesion/normal) between quantitative and semi-quantitative analyses showed a similar trend at the different operative time points; if the AIFs were different, the CBF ratios might differ. We concluded that using local maxima one can define a proper AIF without knowing the anatomical location of arteries in a stroke rat model.
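The semi-quantitative estimate described above, the integral of the relaxivity curve divided by its first moment, is straightforward to compute. The concentration curve below is synthetic, not data from the study:

```python
def relative_cbf(times, curve):
    # Semi-quantitative estimate: area under the relaxivity time curve
    # divided by its normalised first moment (an analogue of CBV / MTT).
    dt = times[1] - times[0]
    area = sum(curve) * dt
    first_moment = sum(t * c for t, c in zip(times, curve)) * dt / area
    return area / first_moment

times = [i * 0.7 for i in range(20)]   # TR = 0.7 s, as in the sequence above
curve = [0, 0, 1, 4, 9, 12, 10, 7, 5, 3, 2, 1.5, 1,
         0.8, 0.6, 0.5, 0.4, 0.3, 0.2, 0.1]   # synthetic bolus passage
print(relative_cbf(times, curve))
```

Because the arbitrary scaling cancels in lesion/normal ratios, such relative values can be compared against the fully quantitative, AIF-deconvolved CBF.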

  19. Better temperature predictions in geothermal modelling by improved quality of input parameters

    DEFF Research Database (Denmark)

    Fuchs, Sven; Bording, Thue Sylvester; Balling, N.

    2015-01-01

    Thermal modelling is used to examine the subsurface temperature field and geothermal conditions at various scales (e.g. sedimentary basins, deep crust) and in the framework of different problem settings (e.g. scientific or industrial use). In such models, knowledge of rock thermal properties...... region (model dimension: 135 x 115 km, depth: 20 km). Results clearly show that (i) the use of location-specific well-log derived rock thermal properties and (ii) the consideration of laterally varying input data (reflecting changes of thermofacies in the project area) significantly improves...

  20. A New Ensemble of Perturbed-Input-Parameter Simulations by the Community Atmosphere Model

    Energy Technology Data Exchange (ETDEWEB)

    Covey, C; Brandon, S; Bremer, P T; Domyancis, D; Garaizar, X; Johannesson, G; Klein, R; Klein, S A; Lucas, D D; Tannahill, J; Zhang, Y

    2011-10-27

    Uncertainty quantification (UQ) is a fundamental challenge in the numerical simulation of Earth's weather and climate, and other complex systems. It entails much more than attaching defensible error bars to predictions: in particular it includes assessing low-probability but high-consequence events. To achieve these goals with models containing a large number of uncertain input parameters, structural uncertainties, etc., raw computational power is needed. An automated, self-adapting search of the possible model configurations is also useful. Our UQ initiative at the Lawrence Livermore National Laboratory has produced the most extensive set to date of simulations from the US Community Atmosphere Model. We are examining output from about 3,000 twelve-year climate simulations generated with a specialized UQ software framework, and assessing the model's accuracy as a function of 21 to 28 uncertain input parameter values. Most of the input parameters we vary are related to the boundary layer, clouds, and other sub-grid scale processes. Our simulations prescribe surface boundary conditions (sea surface temperatures and sea ice amounts) to match recent observations. Fully searching this 21+ dimensional space is impossible, but sensitivity and ranking algorithms can identify input parameters having relatively little effect on a variety of output fields, either individually or in nonlinear combination. Bayesian statistical constraints, employing a variety of climate observations as metrics, also seem promising. Observational constraints will be important in the next step of our project, which will compute sea surface temperatures and sea ice interactively, and will study climate change due to increasing atmospheric carbon dioxide.
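Sampling designs for such high-dimensional parameter studies are typically stratified so that each parameter's range is covered evenly with a feasible number of runs. A minimal Latin hypercube sketch, shown as an assumption since the abstract does not name the sampling scheme used by the UQ framework:

```python
import random

def latin_hypercube(n_samples, n_dims, rng):
    # One sample per stratum in each dimension, with stratum order shuffled
    # independently per dimension; coordinates are in [0, 1).
    samples = [[0.0] * n_dims for _ in range(n_samples)]
    for d in range(n_dims):
        strata = list(range(n_samples))
        rng.shuffle(strata)
        for i, s in enumerate(strata):
            samples[i][d] = (s + rng.random()) / n_samples
    return samples

rng = random.Random(0)
pts = latin_hypercube(10, 3, rng)
# Each dimension is covered exactly once per decile:
for d in range(3):
    assert sorted(int(p[d] * 10) for p in pts) == list(range(10))
```

With 21+ uncertain parameters and about 3,000 runs, such stratification gives far better coverage than independent random draws at the same cost.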

  1. Minimal state space realisation of continuous-time linear time-variant input-output models

    Science.gov (United States)

    Goos, J.; Pintelon, R.

    2016-04-01

    In the linear time-invariant (LTI) framework, the transformation from an input-output equation into state space representation is well understood. Several canonical forms exist that realise the same dynamic behaviour. If the coefficients become time-varying however, the LTI transformation no longer holds. We prove by induction that there exists a closed-form expression for the observability canonical state space model, using binomial coefficients.

  2. Integrated Flight Mechanic and Aeroelastic Modelling and Control of a Flexible Aircraft Considering Multidimensional Gust Input

    Science.gov (United States)

    2000-05-01

    Integrated Flight Mechanic and Aeroelastic Modelling and Control of a Flexible Aircraft Considering Multidimensional Gust Input, by Patrick Teufel and Martin Hanel. [Record text garbled in extraction; surviving fragments mention a matrix of two-dimensional spectrum functions of the lateral separation distance, Bessel functions, gust models developed by Eichenbaum (Evaluation of 3D..., Journal of Aircraft, Vol. 30, No. 5, Sept.-Oct. 1993), and relations to risk sensitivity (System & Control Letters 11).]

  3. The Role of Spatio-Temporal Resolution of Rainfall Inputs on a Landscape Evolution Model

    Science.gov (United States)

    Skinner, C. J.; Coulthard, T. J.

    2015-12-01

    Landscape Evolution Models are important experimental tools for understanding the long-term development of landscapes. Designed to simulate timescales ranging from decades to millennia, they are usually driven by precipitation inputs that are lumped, both spatially across the drainage basin, and temporally to daily, monthly, or even annual rates. This is based on an assumption that the spatial and temporal heterogeneity of the rainfall will equalise over the long timescales simulated. However, recent studies (Coulthard et al., 2012) have shown that such models are sensitive to event magnitudes, with exponential increases in sediment yields generated by linear increases in flood event size at a basin scale. This suggests that there may be a sensitivity to the spatial and temporal scales of rainfall used to drive such models. This study uses the CAESAR-Lisflood Landscape Evolution Model to investigate the impact of spatial and temporal resolution of rainfall input on model outputs. The sediment response to a range of temporal (15 min to daily) and spatial (5 km to 50 km) resolutions over three different drainage basin sizes was observed. The results showed the model was sensitive to both, generating up to 100% differences in modelled sediment yields with smaller spatial and temporal resolution precipitation. Larger drainage basins also showed a greater sensitivity to both spatial and temporal resolution. Furthermore, analysis of the distribution of erosion and deposition patterns suggested that small temporal and spatial resolution inputs increased erosion in drainage basin headwaters and deposition in the valley floors. Both of these findings may have implications for existing models and approaches for simulating landscape development.

  4. Analyzing the sensitivity of a flood risk assessment model towards its input data

    Science.gov (United States)

    Glas, Hanne; Deruyter, Greet; De Maeyer, Philippe; Mandal, Arpita; James-Williamson, Sherene

    2016-11-01

    The Small Island Developing States are characterized by an unstable economy and low-lying, densely populated cities, resulting in a high vulnerability to natural hazards. Flooding affects more people than any other hazard. To limit the consequences of these hazards, adequate risk assessments are indispensable. Satisfactory input data for these assessments are hard to acquire, especially in developing countries. Therefore, in this study, a methodology was developed and evaluated to test the sensitivity of a flood model towards its input data in order to determine a minimum set of indispensable data. In a first step, a flood damage assessment model was created for the case study of Annotto Bay, Jamaica. This model generates a damage map for the region based on the flood extent map of the 2001 inundations caused by Tropical Storm Michelle. Three damage types were taken into account: building, road and crop damage. Twelve scenarios were generated, each with a different combination of input data, testing one of the three damage calculations for its sensitivity. One main conclusion was that population density, in combination with an average number of people per household, is a good parameter for determining the building damage when exact building locations are unknown. Furthermore, the importance of roads for an accurate visual result was demonstrated.
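The population-density proxy for building damage can be sketched directly: divide population by average household size to approximate the building count, then apply the flood extent and a unit damage value. All figures below are hypothetical, not values from the Annotto Bay study:

```python
def estimate_building_damage(population, persons_per_household,
                             flooded_fraction, damage_per_building):
    # Approximate the number of buildings from population and household
    # size, then apply the flooded fraction and a unit damage value.
    buildings = population / persons_per_household
    return buildings * flooded_fraction * damage_per_building

# Hypothetical district: 5400 people, 3.1 persons per household,
# 40% of buildings inside the flood extent, 2500 USD mean damage each.
print(round(estimate_building_damage(5400, 3.1, 0.40, 2500.0)))
```

The appeal of this proxy is that census population data are usually available even where building footprints are not.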

  5. A generic method for automatic translation between input models for different versions of simulation codes

    Energy Technology Data Exchange (ETDEWEB)

    Serfontein, Dawid E., E-mail: Dawid.Serfontein@nwu.ac.za [School of Mechanical and Nuclear Engineering, North West University (PUK-Campus), PRIVATE BAG X6001 (Internal Post Box 360), Potchefstroom 2520 (South Africa); Mulder, Eben J. [School of Mechanical and Nuclear Engineering, North West University (South Africa); Reitsma, Frederik [Calvera Consultants (South Africa)

    2014-05-01

    A computer code was developed for the semi-automatic translation of input models for the VSOP-A diffusion neutronics simulation code to the format of the newer VSOP 99/05 code. In this paper, this algorithm is presented as a generic method for producing codes for the automatic translation of input models from the format of one code version to another, or even to that of a completely different code. Normally, such translations are done manually. However, input model files, such as for the VSOP codes, often are very large and may consist of many thousands of numeric entries that make no particular sense to the human eye. Therefore the task of, for instance, nuclear regulators to verify the accuracy of such translated files can be very difficult and cumbersome. Translation errors may therefore go unnoticed, which may have disastrous consequences later on when a reactor with such a faulty design is built. Therefore a generic algorithm for producing such automatic translation codes may ease the translation and verification process to a great extent. It will also remove human error from the process, which may significantly enhance the accuracy and reliability of the process. The developed algorithm also automatically creates a verification log file which permanently records the names and values of each variable used, as well as the list of meanings of all the possible values. This should greatly facilitate reactor licensing applications.
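The core of such a translator, mapping old-format variable names to new ones while writing a verification log of every entry, can be sketched as follows. The key names are invented for illustration, not actual VSOP variables:

```python
def translate_input_model(old_model, key_map, log_path=None):
    # Translate each (name, value) entry to the new code's key names and
    # record every mapping in a verification log for later review.
    new_model, log_lines = {}, []
    for old_key, value in old_model.items():
        new_key = key_map.get(old_key, old_key)  # unknown keys pass through
        new_model[new_key] = value
        log_lines.append(f"{old_key} -> {new_key} = {value}")
    if log_path is not None:
        with open(log_path, "w") as f:
            f.write("\n".join(log_lines))
    return new_model, log_lines

old = {"NCORE": 96, "TFUEL": 1173.0}                       # hypothetical entries
mapping = {"NCORE": "n_core_layers", "TFUEL": "fuel_temperature_K"}
new, log = translate_input_model(old, mapping)
print(new)
```

Real input decks also need per-key value conversions and format-aware parsing, but the translate-and-log structure is the part that makes regulator verification tractable.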

  6. Modeling Soil Carbon Dynamics in Northern Forests: Effects of Spatial and Temporal Aggregation of Climatic Input Data.

    Science.gov (United States)

    Dalsgaard, Lise; Astrup, Rasmus; Antón-Fernández, Clara; Borgen, Signe Kynding; Breidenbach, Johannes; Lange, Holger; Lehtonen, Aleksi; Liski, Jari

    2016-01-01

    Boreal forests contain 30% of the global forest carbon with the majority residing in soils. While challenging to quantify, soil carbon changes comprise a significant, and potentially increasing, part of the terrestrial carbon cycle. Thus, their estimation is important when designing forest-based climate change mitigation strategies and soil carbon change estimates are required for the reporting of greenhouse gas emissions. Organic matter decomposition varies with climate in complex nonlinear ways, rendering data aggregation nontrivial. Here, we explored the effects of temporal and spatial aggregation of climatic and litter input data on regional estimates of soil organic carbon stocks and changes for upland forests. We used the soil carbon and decomposition model Yasso07 with input from the Norwegian National Forest Inventory (11275 plots, 1960-2012). Estimates were produced at three spatial and three temporal scales. Results showed that a national level average soil carbon stock estimate varied by 10% depending on the applied spatial and temporal scale of aggregation. Higher stocks were found when applying plot-level input compared to country-level input and when long-term climate was used as compared to annual or 5-year mean values. A national level estimate for soil carbon change was similar across spatial scales, but was considerably (60-70%) lower when applying annual or 5-year mean climate compared to long-term mean climate reflecting the recent climatic changes in Norway. This was particularly evident for the forest-dominated districts in the southeastern and central parts of Norway and in the far north. We concluded that the sensitivity of model estimates to spatial aggregation will depend on the region of interest. Further, that using long-term climate averages during periods with strong climatic trends results in large differences in soil carbon estimates. The largest differences in this study were observed in central and northern regions with strongly
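Why aggregating climatic input changes the estimates follows from the nonlinearity of decomposition: for a convex temperature response, the rate computed from the long-term mean climate differs from the mean of the annual rates (Jensen's inequality). A stylised sketch, not Yasso07's actual parameterisation:

```python
import math

def decomposition_rate(temp_c):
    # Stylised exponential (Q10-like) temperature response; the 0.02 base
    # rate and 0.1 exponent are invented for illustration.
    return 0.02 * math.exp(0.1 * temp_c)

annual_temps = [-2.0, 1.0, 4.0, -5.0, 3.0]   # hypothetical annual mean temps (deg C)
rate_from_mean = decomposition_rate(sum(annual_temps) / len(annual_temps))
mean_of_rates = sum(decomposition_rate(t) for t in annual_temps) / len(annual_temps)

# Because the response is convex, averaging the climate first
# underestimates the mean decomposition rate:
assert rate_from_mean < mean_of_rates
print(rate_from_mean, mean_of_rates)
```

The same argument applies to spatial aggregation, which is why plot-level and country-level inputs yield different national stock estimates.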

  7. An Integrated Hydrologic Bayesian Multi-Model Combination Framework: Confronting Input, parameter and model structural uncertainty in Hydrologic Prediction

    Energy Technology Data Exchange (ETDEWEB)

    Ajami, N K; Duan, Q; Sorooshian, S

    2006-05-05

    This paper presents a new technique, the Integrated Bayesian Uncertainty Estimator (IBUNE), to account explicitly for the major uncertainties of hydrologic rainfall-runoff predictions. The uncertainties from the input (forcing) data, mainly the precipitation observations, and from the model parameters are reduced through a Monte Carlo Markov Chain (MCMC) scheme named the Shuffled Complex Evolution Metropolis (SCEM) algorithm, which has been extended to include a precipitation error model. Afterwards, the Bayesian Model Averaging (BMA) scheme is employed to further improve the prediction skill and uncertainty estimation using multiple model outputs. A series of case studies using three rainfall-runoff models to predict the streamflow in the Leaf River basin, Mississippi, is used to examine the necessity and usefulness of this technique. The results suggest that ignoring either input forcing error or model structural uncertainty will lead to unrealistic model simulations and associated uncertainty bounds that do not consistently capture and represent the real-world behavior of the watershed.
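The BMA combination step can be sketched as a weighted average of per-model predictions, with the weights playing the role of posterior model probabilities. All numbers below are hypothetical, and deriving the weights themselves (via the likelihood of each model) is the substantive part that IBUNE handles:

```python
def bma_combine(predictions, weights):
    # Weighted combination of per-model predictions; weights are the BMA
    # posterior model probabilities and must sum to 1.
    assert abs(sum(weights) - 1.0) < 1e-9
    return [sum(w * p[i] for w, p in zip(weights, predictions))
            for i in range(len(predictions[0]))]

# Three rainfall-runoff models predicting streamflow at four time steps
# (hypothetical values), with weights reflecting each model's skill.
preds = [[10.0, 12.0, 9.0, 8.0],
         [11.0, 13.0, 10.0, 9.0],
         [ 9.0, 11.0,  8.5, 7.5]]
weights = [0.5, 0.3, 0.2]
print(bma_combine(preds, weights))
```

In full BMA the combined forecast is a mixture distribution, so it also yields uncertainty bounds, not just the point forecast shown here.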

  8. Water Yield and Sediment Yield Simulations for Teba Catchment in Spain Using SWRRB Model: Ⅰ. Model Input and Simulation Experiment

    Institute of Scientific and Technical Information of China (English)

    2002-01-01

    Water yield and sediment yield in the Teba catchment, Spain, were simulated using the SWRRB (Simulator for Water Resources in Rural Basins) model. The model is composed of 198 mathematical equations. About 120 items (variables) were input for the simulation, including meteorological and climatic factors, hydrologic factors, topographic factors, parent materials, soils, vegetation, human activities, etc. The simulated results involved surface runoff, subsurface runoff, sediment, peak flow, evapotranspiration, soil water, total biomass, etc. Careful and thorough input data preparation and repeated simulation experiments are key to obtaining accurate results. In this work the simulation accuracy for annual water yield prediction reached 83.68%.

  9. Unitary input DEA model to identify beef cattle production systems typologies

    Directory of Open Access Journals (Sweden)

    Eliane Gonçalves Gomes

    2012-08-01

    Full Text Available The cow-calf beef production sector in Brazil has a wide variety of operating systems. This suggests the identification and characterization of homogeneous regions of production, with consequent implementation of actions to achieve its sustainability. In this paper we attempted to measure the performance of 21 livestock modal production systems in their cow-calf phase. We measured the performance of these systems considering husbandry and production variables. The proposed approach is based on data envelopment analysis (DEA). We used a unitary input DEA model, with apparent input orientation, together with the efficiency measurements generated by the inverted DEA frontier. We identified five modal production system typologies, using the iso-efficiency layers approach. The results showed that knowledge and process management are the most important factors for improving the efficiency of beef cattle production systems.

  10. Dynamics of a Stage Structured Pest Control Model in a Polluted Environment with Pulse Pollution Input

    Directory of Open Access Journals (Sweden)

    Bing Liu

    2013-01-01

    Full Text Available By using a pollution model and impulsive delay differential equations, we formulate a pest control model with stage structure for the natural enemy in a polluted environment, introducing a constant periodic pollutant input and killing pests at different fixed moments, and investigate the dynamics of such a system. We assume that only the natural enemies are affected by pollution, and we choose a method to kill the pests without harming the natural enemies. Sufficient conditions for global attractivity of the natural enemy-extinction periodic solution and permanence of the system are obtained. Numerical simulations are presented to confirm our theoretical results.

  11. System Identification for Nonlinear FOPDT Model with Input-Dependent Dead-Time

    DEFF Research Database (Denmark)

    Sun, Zhen; Yang, Zhenyu

    2011-01-01

    An on-line iterative method of system identification for a kind of nonlinear FOPDT system is proposed in the paper. The considered nonlinear FOPDT model is an extension of the standard FOPDT model in that its dead time depends on the input signal and the other parameters are time dependent. In order to identify these parameters in an online manner, the considered system is discretized at first. Then, the nonlinear FOPDT identification problem is formulated as a stochastic Mixed Integer Non-Linear Programming problem, and an identification algorithm is proposed by combining the Branch...

  12. A Probabilistic Collocation Method Based Statistical Gate Delay Model Considering Process Variations and Multiple Input Switching

    CERN Document Server

    Kumar, Y Satish; Talarico, Claudio; Wang, Janet; 10.1109/DATE.2005.31

    2011-01-01

    Since the advent of new nanotechnologies, the variability of gate delay due to process variations has become a major concern. This paper proposes a new gate delay model that includes impact from both process variations and multiple input switching. The proposed model uses an orthogonal polynomial based probabilistic collocation method to construct a delay analytical equation from circuit timing performance. From the experimental results, our approach has less than 0.2% error on the mean delay of gates and less than 3% error on the standard deviation.

  13. Predicting input impedance and efficiency of graphene reconfigurable dipoles using a simple circuit model

    CERN Document Server

    Tamagnone, Michele

    2014-01-01

    An analytical circuit model able to predict the input impedance of reconfigurable graphene plasmonic dipoles is presented. A suitable definition of plasmonic characteristic impedance, employing natural currents, is used for consistent modeling of the antenna-load connection in the circuit. In its purely analytical form, the model shows good agreement with full-wave simulations, and explains the remarkable tuning properties of graphene antennas. Furthermore, using a single full-wave simulation and scaling laws, additional parasitic elements can be determined for a vast parametric space, leading to very accurate modeling. Finally, we also show that the modeling approach allows a fair estimation of radiation efficiency as well. The approach also applies to thin plasmonic antennas realized using noble metals or semiconductors.

  14. A comparison of numerical and machine-learning modeling of soil water content with limited input data

    Science.gov (United States)

    Karandish, Fatemeh; Šimůnek, Jiří

    2016-12-01

    Soil water content (SWC) is a key factor in optimizing the usage of water resources in agriculture since it provides information to make an accurate estimation of crop water demand. Methods for predicting SWC that have simple data requirements are needed to achieve an optimal irrigation schedule, especially for various water-saving irrigation strategies that are required to resolve both food and water security issues under conditions of water shortages. Thus, a two-year field investigation was carried out to provide a dataset to compare the effectiveness of HYDRUS-2D, a physically-based numerical model, with various machine-learning models, including Multiple Linear Regressions (MLR), Adaptive Neuro-Fuzzy Inference Systems (ANFIS), and Support Vector Machines (SVM), for simulating time series of SWC data under water stress conditions. SWC was monitored using TDRs during the maize growing seasons of 2010 and 2011. Eight combinations of six, simple, independent parameters, including pan evaporation and average air temperature as atmospheric parameters, cumulative growth degree days (cGDD) and crop coefficient (Kc) as crop factors, and water deficit (WD) and irrigation depth (In) as crop stress factors, were adopted for the estimation of SWCs in the machine-learning models. Having Root Mean Square Errors (RMSE) in the range of 0.54-2.07 mm, HYDRUS-2D ranked first for the SWC estimation, while the ANFIS and SVM models with input datasets of cGDD, Kc, WD and In ranked next with RMSEs ranging from 1.27 to 1.9 mm and mean bias errors of -0.07 to 0.27 mm, respectively. However, the MLR models did not perform well for SWC forecasting, mainly due to non-linear changes of SWCs under the irrigation process. The results demonstrated that despite requiring only simple input data, the ANFIS and SVM models could be favorably used for SWC predictions under water stress conditions, especially when there is a lack of data. However, process-based numerical models are undoubtedly a

  15. Scaling precipitation input to spatially distributed hydrological models by measured snow distribution

    Directory of Open Access Journals (Sweden)

    Christian Vögeli

    2016-12-01

    Full Text Available Accurate knowledge of snow distribution in alpine terrain is crucial for various applications such as flood risk assessment, avalanche warning or managing water supply and hydro-power. To simulate the seasonal snow cover development in alpine terrain, the spatially distributed, physics-based model Alpine3D is suitable. The model is typically driven by spatial interpolations of observations from automatic weather stations (AWS), leading to errors in the spatial distribution of atmospheric forcing. With recent advances in remote sensing techniques, maps of snow depth can be acquired with high spatial resolution and accuracy. In this work, maps of the snow depth distribution, calculated from summer and winter digital surface models based on Airborne Digital Sensors (ADS), are used to scale precipitation input data, with the aim to improve the accuracy of simulation of the spatial distribution of snow with Alpine3D. A simple method to scale and redistribute precipitation is presented and its performance is analysed. The scaling method is only applied if it is snowing. For rainfall, the precipitation is distributed by interpolation, with a simple air temperature threshold used for the determination of the precipitation phase. It was found that the accuracy of spatial snow distribution could be improved significantly for the simulated domain. The standard deviation of the absolute snow depth error is reduced by up to a factor of 3.4 to less than 20 cm. The mean absolute error in snow distribution was reduced when using representative input sources for the simulation domain. For inter-annual scaling, the model performance could also be improved, even when using a remote sensing dataset from a different winter. In conclusion, using remote sensing data to process precipitation input, complex processes such as preferential snow deposition and snow relocation due to wind or avalanches can be substituted and modelling performance of spatial snow distribution is improved.
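The scaling idea, redistributing solid precipitation in proportion to the measured snow-depth map while preserving the domain total and leaving rainfall untouched, can be sketched as follows. The temperature threshold and depth values are hypothetical:

```python
def scale_precipitation(precip, snow_depth_map, is_snowing, air_temp_c,
                        rain_threshold_c=1.0):
    # Scale solid precipitation by the measured snow-depth distribution,
    # preserving the domain total; leave rainfall uniform (in Alpine3D it
    # would instead be spatially interpolated from stations).
    if not is_snowing or air_temp_c > rain_threshold_c:
        return [precip for _ in snow_depth_map]
    mean_depth = sum(snow_depth_map) / len(snow_depth_map)
    return [precip * d / mean_depth for d in snow_depth_map]

depths = [0.5, 1.5, 2.0, 1.0, 0.0]   # hypothetical ADS snow depths (m) per cell
scaled = scale_precipitation(2.0, depths, is_snowing=True, air_temp_c=-3.0)
print(scaled)
```

Dividing by the mean depth makes the scaling mass-conserving: the domain receives the same total precipitation, just redistributed to where snow is observed to accumulate.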

  16. Input impedance and reflection coefficient in fractal-like models of asymmetrically branching compliant tubes.

    Science.gov (United States)

    Brown, D J

    1996-07-01

    A mathematical model is described, based on linear transmission line theory, for the computation of hydraulic input impedance spectra in complex, dichotomously branching networks similar to mammalian arterial systems. Conceptually, the networks are constructed from a discretized set of self-similar compliant tubes whose dimensions are described by an integer power law. The model allows specification of the branching geometry, i.e., the daughter-parent branch area ratio and the daughter-daughter area asymmetry ratio, as functions of vessel size. Characteristic impedances of individual vessels are described by linear theory for a fully constrained thick-walled elastic tube. Besides termination impedances and fluid density and viscosity, other model parameters included relative vessel length and phase velocity, each as a function of vessel size (elastic nonuniformity). The primary goal of the study was to examine systematically the effect of fractal branching asymmetry, both degree and location within the network, on the complex input impedance spectrum and reflection coefficient. With progressive branching asymmetry, fractal model spectra exhibit some of the features inherent in natural arterial systems such as the loss of prominent, regularly-occurring maxima and minima; the effect is most apparent at higher frequencies. Marked reduction of the reflection coefficient occurs, due to disparities in wave path length, when branching is asymmetric. Because of path length differences, branching asymmetry near the system input has a far greater effect on minimizing spectrum oscillations and reflections than downstream asymmetry. Fractal-like constructs suggest a means by which arterial trees of realistic complexity might be described, both structurally and functionally.
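
The core of such a computation can be sketched with standard lossy transmission-line relations; for brevity this sketch assumes symmetric branching and an invented power-law scaling of daughter dimensions, whereas the paper's focus is on asymmetric daughters:

```python
import cmath

def segment_input_impedance(z_c, gamma, length, z_load):
    """Input impedance of one compliant tube of characteristic impedance z_c,
    propagation constant gamma and given length, terminated by z_load."""
    t = cmath.tanh(gamma * length)
    return z_c * (z_load + z_c * t) / (z_c + z_load * t)

def tree_input_impedance(z_c, gamma, length, depth, z_term, area_ratio=1.2):
    """Recursively combine daughter branches: each generation's tubes are
    narrower (higher z_c) and shorter, per an assumed power law; two equal
    daughters in parallel halve the load seen by the parent segment."""
    if depth == 0:
        return z_term
    z_daughter = tree_input_impedance(z_c * area_ratio, gamma, 0.8 * length,
                                      depth - 1, z_term, area_ratio)
    return segment_input_impedance(z_c, gamma, length, z_daughter / 2.0)
```

Evaluating `tree_input_impedance` over a range of frequencies (via a frequency-dependent `gamma`) yields the input impedance spectrum; introducing unequal daughter impedances would reproduce the asymmetry effects the study examines.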

  17. The MARINA model (Model to Assess River Inputs of Nutrients to seAs): Model description and results for China.

    Science.gov (United States)

    Strokal, Maryna; Kroeze, Carolien; Wang, Mengru; Bai, Zhaohai; Ma, Lin

    2016-08-15

    Chinese agriculture has been developing fast towards industrial food production systems that discharge nutrient-rich wastewater into rivers. As a result, nutrient export by rivers has been increasing, resulting in coastal water pollution. We developed a Model to Assess River Inputs of Nutrients to seAs (MARINA) for China. The MARINA model quantifies river export of nutrients by source at the sub-basin scale as a function of human activities on land. MARINA is a downscaled version for China of the Global NEWS-2 (Nutrient Export from WaterSheds) model with an improved approach for nutrient losses from animal production and population. We use the model to quantify dissolved inorganic and organic nitrogen (N) and phosphorus (P) export by six large rivers draining into the Bohai Gulf (Yellow, Hai, Liao), Yellow Sea (Yangtze, Huai) and South China Sea (Pearl) in 1970, 2000 and 2050. We addressed uncertainties in the MARINA model. Between 1970 and 2000, river export of dissolved N and P increased by a factor of 2-8, depending on sea and nutrient form. Thus, the risk of coastal eutrophication increased. Direct losses of manure to rivers contributed 60-78% of nutrient inputs to the Bohai Gulf and 20-74% of nutrient inputs to the other seas in 2000. Sewage is an important source of dissolved inorganic P, and synthetic fertilizers of dissolved inorganic N. Over half of the nutrients exported by the Yangtze and Pearl rivers originated from human activities in downstream and middlestream sub-basins. The Yellow River exported up to 70% of dissolved inorganic N and P from downstream sub-basins and of dissolved organic N and P from middlestream sub-basins. Rivers draining into the Bohai Gulf are drier, and thus transport fewer nutrients. For the future we calculate further increases in river export of nutrients. The MARINA model quantifies the main sources of coastal water pollution for sub-basins. This information can contribute to formulation of


  18. Nutrient inputs to the Laurentian Great Lakes by source and watershed estimated using SPARROW watershed models

    Science.gov (United States)

    Robertson, Dale M.; Saad, David A.

    2011-01-01

    Nutrient input to the Laurentian Great Lakes continues to cause problems with eutrophication. To reduce the extent and severity of these problems, target nutrient loads were established and Total Maximum Daily Loads are being developed for many tributaries. Without detailed loading information it is difficult to determine if the targets are being met and how to prioritize rehabilitation efforts. To help address these issues, SPAtially Referenced Regressions On Watershed attributes (SPARROW) models were developed for estimating loads and sources of phosphorus (P) and nitrogen (N) from the United States (U.S.) portion of the Great Lakes, Upper Mississippi, Ohio, and Red River Basins. Results indicated that recent U.S. loadings to Lakes Michigan and Ontario are similar to those in the 1980s, whereas loadings to Lakes Superior, Huron, and Erie decreased. Highest loads were from tributaries with the largest watersheds, whereas highest yields were from areas with intense agriculture and large point sources of nutrients. Tributaries were ranked based on their relative loads and yields to each lake. Input from agricultural areas was a significant source of nutrients, contributing ∼33-44% of the P and ∼33-58% of the N, except for areas around Superior with little agriculture. Point sources were also significant, contributing ∼14-44% of the P and 13-34% of the N. Watersheds around Lake Erie contributed nutrients at the highest rate (similar to intensively farmed areas in the Midwest) because they have the largest nutrient inputs and highest delivery ratio.
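
The source-apportionment idea behind a SPARROW-type model can be illustrated with a toy load calculation for one watershed; all coefficients below are invented for illustration, not fitted values from the study:

```python
def sparrow_load(source_inputs, source_coeffs, land_delivery, stream_fraction):
    """Watershed load = (sum over sources of input x source coefficient)
    attenuated by a land-to-water delivery factor and an in-stream
    transport fraction."""
    land_phase = sum(b * s for b, s in zip(source_coeffs, source_inputs))
    return land_phase * land_delivery * stream_fraction

# e.g. agricultural fertilizer input (100 units) and point-source input (50 units)
load = sparrow_load([100.0, 50.0], [0.30, 0.10], 0.8, 0.9)
```

Summing such loads over all watersheds draining to a lake, and tracking each source term separately, yields the by-source percentages and tributary rankings reported above.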

  19. Self-Triggered Model Predictive Control for Linear Systems Based on Transmission of Control Input Sequences

    Directory of Open Access Journals (Sweden)

    Koichi Kobayashi

    2016-01-01

    Full Text Available A networked control system (NCS) is a control system in which components such as plants and controllers are connected through communication networks. Self-triggered control is well known as one of the control methods for NCSs; it is a method for sampled-data control systems in which both the control input and the aperiodic sampling interval (i.e., the transmission interval) are computed simultaneously. In this paper, a self-triggered model predictive control (MPC) method for discrete-time linear systems with disturbances is proposed. In the conventional MPC method, only the first element of the control input sequence obtained by solving the finite-time optimal control problem is sent and applied to the plant. In the proposed method, the first several elements of the obtained control input sequence are sent to the plant, and each element is sequentially applied to the plant. The number of elements is decided according to the effect of disturbances. In other words, the transmission intervals can be controlled. Finally, the effectiveness of the proposed method is shown by numerical simulations.
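
The transmission scheme described above, sending the first several elements of the optimal input sequence and holding off further transmissions, can be sketched as follows; the disturbance-based interval rule is an invented illustration of the idea, not the paper's actual criterion:

```python
import numpy as np

def apply_transmitted_sequence(x, u_seq, n_send, A, B):
    """Apply the first n_send elements of the MPC input sequence open-loop
    to x(k+1) = A x(k) + B u(k); the next transmission (and the next MPC
    solve) occurs only after n_send steps."""
    for k in range(n_send):
        x = A @ x + B @ u_seq[k]
    return x

def transmission_interval(disturbance_norm, n_max=5, threshold=0.1):
    """Heuristic: a large recent disturbance forces immediate retransmission,
    otherwise several precomputed inputs are trusted."""
    return 1 if disturbance_norm >= threshold else n_max
```

Sending `n_send > 1` elements trades some closed-loop feedback for fewer network transmissions, which is the self-triggering trade-off studied in the paper.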

  20. Input-output modeling for urban energy consumption in Beijing: dynamics and comparison.

    Science.gov (United States)

    Zhang, Lixiao; Hu, Qiuhong; Zhang, Fan

    2014-01-01

    Input-output analysis has been proven to be a powerful instrument for estimating embodied (direct plus indirect) energy usage through economic sectors. Using 9 economic input-output tables of years 1987, 1990, 1992, 1995, 1997, 2000, 2002, 2005, and 2007, this paper analyzes energy flows for the entire city of Beijing and its 30 economic sectors, respectively. Results show that the embodied energy consumption of Beijing increased from 38.85 million tonnes of coal equivalent (Mtce) to 206.2 Mtce over the past twenty years of rapid urbanization; the share of indirect energy consumption in total energy consumption increased from 48% to 76%, suggesting the transition of Beijing from a production-based and manufacturing-dominated economy to a consumption-based and service-dominated economy. Real estate development has shown to be a major driving factor of the growth in indirect energy consumption. The boom and bust of construction activities have been strongly correlated with the increase and decrease of system-side indirect energy consumption. Traditional heavy industries remain the most energy-intensive sectors in the economy. However, the transportation and service sectors have contributed most to the rapid increase in overall energy consumption. The analyses in this paper demonstrate that a system-wide approach such as one based on an input-output model can be a useful tool for robust energy policy making.
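
Embodied (direct plus indirect) energy accounting of this kind rests on the Leontief inverse; a two-sector toy example (the numbers are invented, not Beijing data):

```python
import numpy as np

A = np.array([[0.1, 0.2],        # technical coefficients: inter-sector inputs
              [0.3, 0.1]])       # required per unit of each sector's output
e_direct = np.array([2.0, 1.0])  # direct energy intensity of each sector
y = np.array([100.0, 50.0])      # final demand

L = np.linalg.inv(np.eye(2) - A)  # Leontief inverse: total output per unit final demand
e_embodied = e_direct @ L         # embodied intensities (direct + indirect)
total_energy = e_embodied @ y     # system-wide embodied energy of final demand
```

Because `L` accumulates all upstream supply-chain rounds, each embodied intensity exceeds its direct counterpart, which is exactly the mechanism behind the growing indirect share reported above.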

  1. Modeling the Indonesian Consumer Price Index Using a Multi-Input Intervention Model

    KAUST Repository

    Novianti, P.W.

    2017-01-24

    There are some events which were expected to affect the CPI's fluctuations, i.e. the 1997/1998 financial crisis, fuel price rises, base year changes, the independence of Timor-Timur (October 1999), and the tsunami disaster in Aceh (December 2004). During the research period, there were eight fuel price rises and four base year changes. The objective of this research is to obtain a multi-input intervention model which can describe the magnitude and duration of the effect of each event on the CPI. Most intervention research done so far considers only a single-input intervention, with either a step or a pulse function. A multi-input intervention was used for the Indonesian CPI case because there are several events which were expected to affect the CPI. Based on the results, those events did affect the CPI. Additionally, other events, such as Eid in January 1999 and events in April 2002, July 2003, December 2005, and September 2008, affected the CPI as well. In general, those events had a positive effect on the CPI, except the events of April 2002 and July 2003, which had negative effects.
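
Step and pulse intervention inputs of the kind used here can be encoded as exogenous regressors; a minimal sketch (the event positions are placeholders, not the actual CPI event dates):

```python
import numpy as np

def step_input(n, t0):
    """Step intervention S_t: 0 before the event, 1 from time t0 onwards
    (a permanent effect, e.g. a base year change)."""
    s = np.zeros(n)
    s[t0:] = 1.0
    return s

def pulse_input(n, t0):
    """Pulse intervention P_t: 1 only at the event time t0
    (a transient effect, e.g. a one-off disaster month)."""
    p = np.zeros(n)
    p[t0] = 1.0
    return p

# multi-input design matrix: e.g. one fuel-price rise (step) and one disaster (pulse)
n = 120
X = np.column_stack([step_input(n, 40), pulse_input(n, 80)])
# effect magnitudes would then be estimated jointly with the ARIMA noise model
```

Stacking one column per event is what turns the single-input intervention model into the multi-input form used in this study.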

  2. Input-output modeling for urban energy consumption in Beijing: dynamics and comparison.

    Directory of Open Access Journals (Sweden)

    Lixiao Zhang

    Full Text Available Input-output analysis has been proven to be a powerful instrument for estimating embodied (direct plus indirect) energy usage through economic sectors. Using 9 economic input-output tables of years 1987, 1990, 1992, 1995, 1997, 2000, 2002, 2005, and 2007, this paper analyzes energy flows for the entire city of Beijing and its 30 economic sectors, respectively. Results show that the embodied energy consumption of Beijing increased from 38.85 million tonnes of coal equivalent (Mtce) to 206.2 Mtce over the past twenty years of rapid urbanization; the share of indirect energy consumption in total energy consumption increased from 48% to 76%, suggesting the transition of Beijing from a production-based and manufacturing-dominated economy to a consumption-based and service-dominated economy. Real estate development has shown to be a major driving factor of the growth in indirect energy consumption. The boom and bust of construction activities have been strongly correlated with the increase and decrease of system-side indirect energy consumption. Traditional heavy industries remain the most energy-intensive sectors in the economy. However, the transportation and service sectors have contributed most to the rapid increase in overall energy consumption. The analyses in this paper demonstrate that a system-wide approach such as one based on an input-output model can be a useful tool for robust energy policy making.

  3. A Framework for Modelling Software Requirements

    Directory of Open Access Journals (Sweden)

    Dhirendra Pandey

    2011-05-01

    Full Text Available Requirement engineering plays an important role in producing quality software products. In recent years, several requirement framework approaches have been designed to provide an end-to-end solution for the system development life cycle. Textual requirements specifications are difficult to learn, design, understand, review, and maintain, whereas pictorial modelling is widely recognized as an effective requirement analysis tool. In this paper, we present a requirement modelling framework together with an analysis of modern requirements modelling techniques. We also discuss various domains of requirement engineering with the help of modelling elements such as a semantic map of business concepts, lifecycles of business objects, business processes, business rules, a system context diagram, use cases and their scenarios, constraints, and user interface prototypes. The proposed framework is illustrated with the case study of an inventory management system.

  4. Investigation of effects of varying model inputs on mercury deposition estimates in the Southwest US

    Directory of Open Access Journals (Sweden)

    T. Myers

    2012-04-01

    Full Text Available The Community Multiscale Air Quality (CMAQ) model version 4.7.1 was used to simulate mercury wet and dry deposition for a domain covering the contiguous United States (US). The simulations used MM5-derived meteorological input fields and the US Environmental Protection Agency (EPA) Clean Air Mercury Rule (CAMR) emissions inventory. Using sensitivity simulations with different boundary conditions and tracer simulations, this investigation focuses on the contributions of boundary concentrations to deposited mercury in the Southwest (SW) US. Concentrations of oxidized mercury species along the boundaries of the domain, in particular the upper layers of the domain, can make significant contributions to the simulated wet and dry deposition of mercury in the SW US. In order to better understand the contributions of boundary conditions to deposition, inert tracer simulations were conducted to quantify the relative amount of an atmospheric constituent transported across the boundaries of the domain at various altitudes and to quantify the amount that reaches and potentially deposits to the land surface in the SW US. Simulations using alternate sets of boundary concentrations, including estimates from global models (the Goddard Earth Observing System-Chem (GEOS-Chem) model and the Global/Regional Atmospheric Heavy Metals (GRAHM) model), and alternate meteorological input fields (for different years) are analyzed in this paper. CMAQ dry deposition in the SW US is sensitive to differences in the atmospheric dynamics and atmospheric mercury chemistry parameterizations between the global models used for boundary conditions.

  5. A synaptic input portal for a mapped clock oscillator model of neuronal electrical rhythmic activity

    Science.gov (United States)

    Zariffa, José; Ebden, Mark; Bardakjian, Berj L.

    2004-09-01

    Neuronal electrical oscillations play a central role in a variety of situations, such as epilepsy and learning. The mapped clock oscillator (MCO) model is a general model of transmembrane voltage oscillations in excitable cells. In order to be able to investigate the behaviour of neuronal oscillator populations, we present a neuronal version of the model. The neuronal MCO includes an extra input portal, the synaptic portal, which can reflect the biological relationships in a chemical synapse between the frequency of the presynaptic action potentials and the postsynaptic resting level, which in turn affects the frequency of the postsynaptic potentials. We propose that the synaptic input-output relationship must include a power function in order to be able to reproduce physiological behaviour such as resting level saturation. One linear and two power functions (Butterworth and sigmoidal) are investigated, using the case of an inhibitory synapse. The linear relation was not able to produce physiologically plausible behaviour, whereas both the power function examples were appropriate. The resulting neuronal MCO model can be tailored to a variety of neuronal cell types, and can be used to investigate complex population behaviour, such as the influence of network topology and stochastic resonance.
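
One way to realize the saturating input-output relationship discussed above is a sigmoidal map from presynaptic firing rate to postsynaptic resting-level shift; the function name and all parameter values are illustrative assumptions, not the paper's fitted forms:

```python
import math

def synaptic_portal(presyn_freq_hz, max_shift=1.0, half_freq_hz=10.0, slope=0.5):
    """Sigmoidal synaptic input portal: the postsynaptic resting-level
    shift saturates at max_shift for high presynaptic rates, unlike a
    linear relation, which grows without bound."""
    return max_shift / (1.0 + math.exp(-slope * (presyn_freq_hz - half_freq_hz)))
```

A linear map would predict ever-larger resting-level shifts at high firing rates; the saturation of this sigmoid is what reproduces the physiologically plausible behaviour the abstract describes.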

  6. Investigation of effects of varying model inputs on mercury deposition estimates in the Southwest US

    Directory of Open Access Journals (Sweden)

    T. Myers

    2013-01-01

    Full Text Available The Community Multiscale Air Quality (CMAQ) model version 4.7.1 was used to simulate mercury wet and dry deposition for a domain covering the continental United States (US). The simulations used MM5-derived meteorological input fields and the US Environmental Protection Agency (EPA) Clean Air Mercury Rule (CAMR) emissions inventory. Using sensitivity simulations with different boundary conditions and tracer simulations, this investigation focuses on the contributions of boundary concentrations to deposited mercury in the Southwest (SW) US. Concentrations of oxidized mercury species along the boundaries of the domain, in particular the upper layers of the domain, can make significant contributions to the simulated wet and dry deposition of mercury in the SW US. In order to better understand the contributions of boundary conditions to deposition, inert tracer simulations were conducted to quantify the relative amount of an atmospheric constituent transported across the boundaries of the domain at various altitudes and to quantify the amount that reaches and potentially deposits to the land surface in the SW US. Simulations using alternate sets of boundary concentrations, including estimates from global models (the Goddard Earth Observing System-Chem (GEOS-Chem) model and the Global/Regional Atmospheric Heavy Metals (GRAHM) model), and alternate meteorological input fields (for different years) are analyzed in this paper. CMAQ dry deposition in the SW US is sensitive to differences in the atmospheric dynamics and atmospheric mercury chemistry parameterizations between the global models used for boundary conditions.

  7. Good modeling practice for PAT applications: propagation of input uncertainty and sensitivity analysis.

    Science.gov (United States)

    Sin, Gürkan; Gernaey, Krist V; Lantz, Anna Eliasson

    2009-01-01

    The uncertainty and sensitivity analyses are evaluated for their usefulness as part of model-building within Process Analytical Technology applications. A mechanistic model describing a batch cultivation of Streptomyces coelicolor for antibiotic production was used as a case study. The input uncertainty resulting from assumptions of the model was propagated using the Monte Carlo procedure to estimate the output uncertainty. The results showed that significant uncertainty exists in the model outputs. Moreover, the uncertainties in the biomass, glucose, ammonium and base-consumption predictions were found to be low compared to the large uncertainty observed in the antibiotic and off-gas CO(2) predictions. The output uncertainty was observed to be lower during the exponential growth phase and higher in the stationary and death phases, meaning the model describes some periods better than others. To understand which input parameters are responsible for the output uncertainty, three sensitivity methods (Standardized Regression Coefficients, Morris and differential analysis) were evaluated and compared. The results from these methods were mostly in agreement with each other and revealed that only a few parameters (about 10) out of a total of 56 were mainly responsible for the output uncertainty. Among these significant parameters are parameters related to fermentation characteristics such as biomass metabolism, chemical equilibria and mass transfer. Overall, the uncertainty and sensitivity analyses are found promising for helping to build reliable mechanistic models and to interpret the model outputs properly. These tools form part of good modeling practice, which can contribute to successful PAT applications for increased process understanding, operation and control purposes.
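
Monte Carlo uncertainty propagation followed by Standardized Regression Coefficients can be sketched on a toy model; the real case study has 56 parameters, but three suffice here, and the stand-in model is invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

def toy_model(theta):
    """Stand-in for the mechanistic model: the output is dominated by
    theta[:, 0], weakly affected by theta[:, 1], barely by theta[:, 2]."""
    return 3.0 * theta[:, 0] + 0.5 * theta[:, 1] + 0.01 * theta[:, 2]

# propagate an assumed 10% input uncertainty through the model
theta = rng.normal(1.0, 0.1, size=(2000, 3))
y = toy_model(theta)

# Standardized Regression Coefficients: regress standardized output on
# standardized inputs; |src_i| ranks each parameter's importance
Z = (theta - theta.mean(0)) / theta.std(0)
zy = (y - y.mean()) / y.std()
src, *_ = np.linalg.lstsq(Z, zy, rcond=None)
```

The spread of `y` is the propagated output uncertainty, and the few parameters with large `|src_i|` are the ones "mainly responsible" for it, mirroring the 10-of-56 finding above.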

  8. Realistic modeling of seismic input for megacities and large urban areas

    Science.gov (United States)

    Panza, G. F.; Unesco/Iugs/Igcp Project 414 Team

    2003-04-01

    The project addressed the problem of pre-disaster orientation: hazard prediction, risk assessment, and hazard mapping, in connection with seismic activity and man-induced vibrations. The definition of realistic seismic input has been obtained from the computation of a wide set of time histories and spectral information, corresponding to possible seismotectonic scenarios for different source and structural models. The innovative modeling technique, which constitutes the common tool of the entire project, takes into account source, propagation and local site effects. This is done using first principles of physics about wave generation and propagation in complex media, and does not require resorting to convolutive approaches, which have proven quite unreliable, mainly when dealing with complex geological structures, which are the most interesting from the practical point of view. In fact, several techniques that have been proposed to empirically estimate the site effects using observations convolved with theoretically computed signals corresponding to simplified models supply reliable information about the site response to non-interfering seismic phases. They are not adequate in most real cases, when the seismic signal is formed by several interfering waves. The availability of realistic numerical simulations enables us to reliably estimate the amplification effects even in complex geological structures, exploiting the available geotechnical, lithological, geophysical parameters, topography of the medium, tectonic, historical, palaeoseismological data, and seismotectonic models. The realistic modeling of the ground motion is a very important base of knowledge for the preparation of ground-shaking scenarios that represent a valid and economic tool for seismic microzonation. This knowledge can be very fruitfully used by civil engineers in the design of new seismo-resistant constructions and in the reinforcement of the existing built environment, and, therefore

  9. LMI-Based Fuzzy Optimal Variance Control of Airfoil Model Subject to Input Constraints

    Science.gov (United States)

    Swei, Sean S.M.; Ayoubi, Mohammad A.

    2017-01-01

    This paper presents a study of the fuzzy optimal variance control problem for dynamical systems subject to actuator amplitude and rate constraints. Using Takagi-Sugeno fuzzy modeling and the dynamic Parallel Distributed Compensation technique, the stability and the constraints can be cast as a multi-objective optimization problem in the form of Linear Matrix Inequalities. By utilizing the formulations and solutions for the input and output variance constraint problems, we develop a fuzzy full-state feedback controller. The stability and performance of the proposed controller are demonstrated through its application to airfoil flutter suppression.

  10. Determination of growth rates as an input of the stock discount valuation models

    Directory of Open Access Journals (Sweden)

    Momčilović Mirela

    2013-01-01

    Full Text Available When determining the value of stocks with different stock discount valuation models, one of the important inputs is the expected growth rate of dividends, earnings, cash flows and other relevant parameters of the company. The growth rate can be determined in three basic ways: on the basis of extrapolation of historical data, on the basis of professional assessment by the analysts who follow the business of the company, and on the basis of fundamental indicators of the company. The aim of this paper is to present the theoretical basis and practical application of the stated methods for growth rate determination, and to indicate their advantages and deficiencies.
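
The fundamental approach mentioned above ties the growth rate to the company's own indicators; a minimal sketch, together with the constant-growth (Gordon) dividend discount model it typically feeds into (input figures are illustrative):

```python
def fundamental_growth_rate(roe, payout_ratio):
    """Fundamental growth estimate: g = retention ratio x return on equity."""
    return roe * (1.0 - payout_ratio)

def gordon_value(current_dividend, required_return, growth):
    """Constant-growth dividend discount model: V0 = D1 / (r - g)."""
    if required_return <= growth:
        raise ValueError("required return must exceed the growth rate")
    return current_dividend * (1.0 + growth) / (required_return - growth)

# e.g. 15% ROE with a 40% payout ratio gives g = 9%; with a current
# dividend of 2.0 and a 12% required return, the stock value follows
g = fundamental_growth_rate(0.15, 0.40)
value = gordon_value(2.0, 0.12, g)
```

Because the denominator is `r - g`, small changes in the growth-rate input move the valuation substantially, which is why the choice among the three estimation methods matters.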

  11. A leech model for homeostatic plasticity and motor network recovery after loss of descending inputs.

    Science.gov (United States)

    Lane, Brian J

    2016-04-01

    Motor networks below the site of spinal cord injury (SCI) and their reconfiguration after loss of central inputs are poorly understood but remain of great interest in SCI research. Harley et al. (J Neurophysiol 113: 3610-3622, 2015) report a striking locomotor recovery paradigm in the leech Hirudo verbana with features that are functionally analogous to SCI. They propose that this well-established neurophysiological system could potentially be repurposed to provide a complementary model to investigate basic principles of homeostatic compensation relevant to SCI research.

  12. Using Whole-House Field Tests to Empirically Derive Moisture Buffering Model Inputs

    Energy Technology Data Exchange (ETDEWEB)

    Woods, J.; Winkler, J.; Christensen, D.; Hancock, E.

    2014-08-01

    Building energy simulations can be used to predict a building's interior conditions, along with the energy use associated with keeping these conditions comfortable. These models simulate the loads on the building (e.g., internal gains, envelope heat transfer), determine the operation of the space conditioning equipment, and then calculate the building's temperature and humidity throughout the year. The indoor temperature and humidity are affected not only by the loads and the space conditioning equipment, but also by the capacitance of the building materials, which buffer changes in temperature and humidity. This research developed an empirical method to extract whole-house model inputs for use with a more accurate moisture capacitance model (the effective moisture penetration depth model). The experimental approach was to subject the materials in the house to a square-wave relative humidity profile, measure all of the moisture transfer terms (e.g., infiltration, air conditioner condensate) and calculate the only unmeasured term: the moisture absorption into the materials. After validating the method with laboratory measurements, we performed the tests in a field house. A least-squares fit of an analytical solution to the measured moisture absorption curves was used to determine the three independent model parameters representing the moisture buffering potential of this house and its furnishings. Follow-on tests with realistic latent and sensible loads showed good agreement with the derived parameters, especially compared to the commonly-used effective capacitance approach. These results show that the EMPD model, once the inputs are known, is an accurate moisture buffering model.
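
The least-squares fit of an analytical response to the measured absorption curve can be sketched as follows, assuming (for illustration only) a first-order response shape and synthetic data in place of the field measurements; the actual EMPD analytical solution has a different form:

```python
import numpy as np

def absorption_response(t, tau):
    """Assumed first-order moisture uptake after a step change in RH."""
    return 1.0 - np.exp(-t / tau)

t = np.linspace(0.0, 24.0, 50)                 # hours since the RH step
measured = 3.2 * absorption_response(t, 6.0)   # synthetic stand-in for field data

def fit_parameters(t, measured, taus):
    """Grid over the time constant tau; for each tau the amplitude has a
    closed-form least-squares solution, so only tau needs searching."""
    best = None
    for tau in taus:
        r = absorption_response(t, tau)
        amp = (measured @ r) / (r @ r)          # least-squares amplitude
        sse = np.sum((measured - amp * r) ** 2)
        if best is None or sse < best[0]:
            best = (sse, tau, amp)
    return best[1], best[2]

tau_fit, amp_fit = fit_parameters(t, measured, np.linspace(1.0, 12.0, 45))
```

The fitted amplitude and time constant play the role of the moisture-buffering parameters extracted from the whole-house tests.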

  13. Using Whole-House Field Tests to Empirically Derive Moisture Buffering Model Inputs

    Energy Technology Data Exchange (ETDEWEB)

    Woods, J. [National Renewable Energy Lab. (NREL), Golden, CO (United States); Winkler, J. [National Renewable Energy Lab. (NREL), Golden, CO (United States); Christensen, D. [National Renewable Energy Lab. (NREL), Golden, CO (United States); Hancock, E. [Mountain Energy Partnership, Longmont, CO (United States)

    2014-08-01

    Building energy simulations can be used to predict a building's interior conditions, along with the energy use associated with keeping these conditions comfortable. These models simulate the loads on the building (e.g., internal gains, envelope heat transfer), determine the operation of the space conditioning equipment, and then calculate the building's temperature and humidity throughout the year. The indoor temperature and humidity are affected not only by the loads and the space conditioning equipment, but also by the capacitance of the building materials, which buffer changes in temperature and humidity. This research developed an empirical method to extract whole-house model inputs for use with a more accurate moisture capacitance model (the effective moisture penetration depth model). The experimental approach was to subject the materials in the house to a square-wave relative humidity profile, measure all of the moisture transfer terms (e.g., infiltration, air conditioner condensate) and calculate the only unmeasured term: the moisture absorption into the materials. After validating the method with laboratory measurements, we performed the tests in a field house. A least-squares fit of an analytical solution to the measured moisture absorption curves was used to determine the three independent model parameters representing the moisture buffering potential of this house and its furnishings. Follow-on tests with realistic latent and sensible loads showed good agreement with the derived parameters, especially compared to the commonly-used effective capacitance approach. These results show that the EMPD model, once the inputs are known, is an accurate moisture buffering model.

  14. Comparison of input parameters regarding rock mass in analytical solution and numerical modelling

    Science.gov (United States)

    Yasitli, N. E.

    2016-12-01

    Characteristics of stress redistribution around a tunnel excavated in rock are of prime importance for an efficient tunnelling operation and for maintaining stability. It is a well-known fact that rock mass properties, together with the in-situ stress field and tunnel geometry, are the most important factors affecting stability. Induced stresses and the resultant deformation around a tunnel can be approximated by means of analytical solutions and the application of numerical modelling. However, the success of these methods depends on assumptions and input parameters, which must be representative of the rock mass, whereas laboratory testing yields the mechanical properties of intact rock only. The aim of this paper is to demonstrate the importance of a proper representation of rock mass properties as input data for analytical solutions and numerical modelling. For this purpose, intact rock data were converted into rock mass data by using the Hoek-Brown failure criterion and empirical relations. Stress-deformation analyses, together with yield zone thickness determination, were carried out using analytical solutions and numerical analyses with the FLAC3D programme. The analysis results indicated that an incomplete or incorrect design causes stability and economic problems in the tunnel. For this reason, analytical data and rock mass data should be used together during tunnel design. In addition, this study shows theoretically that numerical modelling results should be applied to the tunnel design for the stability and for the economy of the support.
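
The intact-rock-to-rock-mass conversion can be written directly from the generalized Hoek-Brown relations (2002 edition), using the Geological Strength Index (GSI) and the disturbance factor D:

```python
import math

def hoek_brown_mass_params(mi, gsi, d=0.0):
    """Generalized Hoek-Brown rock-mass constants mb, s and a from the
    intact-rock constant mi, the Geological Strength Index (GSI, 0-100)
    and the disturbance factor D (0 = undisturbed, 1 = highly disturbed)."""
    mb = mi * math.exp((gsi - 100.0) / (28.0 - 14.0 * d))
    s = math.exp((gsi - 100.0) / (9.0 - 3.0 * d))
    a = 0.5 + (math.exp(-gsi / 15.0) - math.exp(-20.0 / 3.0)) / 6.0
    return mb, s, a
```

For intact, undisturbed rock (GSI = 100, D = 0) the relations recover mb = mi, s = 1 and a = 0.5; lower GSI degrades the intact parameters toward representative rock-mass values, which is the conversion step the paper argues must precede both the analytical solution and the FLAC3D model.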

  15. Model Predictive Control of Linear Systems over Networks with State and Input Quantizations

    Directory of Open Access Journals (Sweden)

    Xiao-Ming Tang

    2013-01-01

    Full Text Available Although there have been many works on the synthesis and analysis of networked control systems (NCSs) with data quantization, most of the results were developed for the case where the quantizer exists in only one of the transmission links (either the sensor-to-controller link or the controller-to-actuator link). This paper investigates synthesis approaches of model predictive control (MPC) for an NCS subject to data quantization in both links. First, a novel model to describe the state and input quantizations of the NCS is developed by extending the sector bound approach. Further, from the new model, two synthesis approaches of MPC are developed: one parameterizes the infinite horizon control moves into a single state feedback law, and the other into a free control move followed by the single state feedback law. Finally, stability results that explicitly consider the satisfaction of input and state constraints are presented. A numerical example is given to illustrate the effectiveness of the proposed MPC.

  16. Modeling Requirements for Cohort and Register IT.

    Science.gov (United States)

    Stäubert, Sebastian; Weber, Ulrike; Michalik, Claudia; Dress, Jochen; Ngouongo, Sylvie; Stausberg, Jürgen; Winter, Alfred

    2016-01-01

    The project KoRegIT (funded by TMF e.V.) aimed to develop a generic catalog of requirements for research networks such as cohort studies and registers (KoReg). The catalog supports such research networks in building up and managing their organizational and IT infrastructure. The goal is to make transparent the complex relationships between requirements, which are described as use cases in a given text catalog; by analyzing and modeling the requirements, a better understanding and optimization of the catalog are intended. There are two subgoals: (a) to investigate one cohort study and two registers and to model the current state of their IT infrastructure; (b) to analyze the current-state models and to find simplifications within the generic catalog. Processing the generic catalog was performed by means of text extraction, conceptualization and concept mapping; methods of enterprise architecture planning (EAP) were then used to model the extracted information. For objective (a), questionnaires were developed utilizing the model and used for semi-structured interviews, whose results were evaluated via qualitative content analysis; afterwards the current state was modeled. Objective (b) was addressed by model analysis. A given generic text catalog of requirements was transferred into a model. As a result of objective (a), current-state models of one existing cohort study and two registers were created and analyzed; an optimized model, the KoReg-reference-model, is the result of objective (b). It is possible to use EAP methods to model requirements, which enables a better overview of the partly connected requirements by means of visualization. The model-based approach also enables the analysis and comparison of the empirical data from the current-state models. Information managers could reduce the effort of planning the IT infrastructure by utilizing the KoReg-reference-model. Modeling the current state and the generation of reports from the model, which could be used as

  17. Digital Avionics Information System (DAIS): Training Requirements Analysis Model Users Guide. Final Report.

    Science.gov (United States)

    Czuchry, Andrew J.; And Others

    This user's guide describes the functions, logical operations and subroutines, input data requirements, and available outputs of the Training Requirements Analysis Model (TRAMOD), a computerized analytical life cycle cost modeling system for use in the early stages of system design. Operable in a stand-alone mode, TRAMOD can be used for the…

  18. An Extended Analysis of Requirements Traceability Model

    Institute of Scientific and Technical Information of China (English)

    Jiang Dandong(蒋丹东); Zhang Shensheng; Chen Lu

    2004-01-01

    A new extended meta model of traceability is presented. Then, a formalized fine-grained model of traceability is described. Some major issues about this model, including trace units, requirements and relations within the model, are further analyzed. Finally, a case study that comes from a key project of 863 Program is given.

  19. Fuzzy portfolio model with fuzzy-input return rates and fuzzy-output proportions

    Science.gov (United States)

    Tsaur, Ruey-Chyn

    2015-02-01

    In the finance market, a short-term investment strategy is usually applied in portfolio selection in order to reduce investment risk; however, the economy is uncertain and the investment period is short. Further, an investor has incomplete information for selecting a portfolio with crisp proportions for each chosen security. In this paper we present a new method of constructing a fuzzy portfolio model for the parameters of fuzzy-input return rates and fuzzy-output proportions, based on possibilistic mean-standard deviation models. Furthermore, we consider both excess and shortage of investment in different economic periods by using a fuzzy constraint for the sum of the fuzzy proportions, and we also account for the risks of securities investment and the vagueness of incomplete information during periods of economic depression. Finally, we present a numerical example of a portfolio selection problem to illustrate the proposed model, and a sensitivity analysis is performed on the results.
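
    Possibilistic mean-standard deviation models of the kind the abstract mentions rest on the possibilistic moments of triangular fuzzy numbers: for A = (a, alpha, beta) with center a and left/right spreads alpha, beta, the possibilistic mean is E(A) = a + (beta - alpha)/6 and the possibilistic variance is Var(A) = (alpha + beta)^2/24. A small sketch with made-up securities:

```python
def possibilistic_mean(a, alpha, beta):
    """Possibilistic mean of the triangular fuzzy number (a, alpha, beta)."""
    return a + (beta - alpha) / 6.0

def possibilistic_var(a, alpha, beta):
    """Possibilistic variance of the triangular fuzzy number."""
    return (alpha + beta) ** 2 / 24.0

# Three hypothetical securities with triangular fuzzy return rates and
# crisp proportions (all numbers illustrative).
returns = [(0.05, 0.02, 0.03), (0.08, 0.04, 0.04), (0.03, 0.01, 0.02)]
weights = [0.5, 0.3, 0.2]

portfolio_mean = sum(w * possibilistic_mean(*r)
                     for w, r in zip(weights, returns))
print(round(portfolio_mean, 4))  # possibilistic expected portfolio return
```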

  20. Long-term dynamics simulation: Modeling requirements

    Energy Technology Data Exchange (ETDEWEB)

    Morched, A.S.; Kar, P.K.; Rogers, G.J.; Morison, G.K. (Ontario Hydro, Toronto, ON (Canada))

    1989-12-01

    This report details the required performance and modelling capabilities of a computer program intended for the study of the long-term dynamics of power systems. Following a general introduction which outlines the need for long-term dynamic studies, the modelling requirements for the conduct of such studies are discussed in detail. Particular emphasis is placed on models for system elements not normally modelled in power system stability programs, which have a significant impact in the long-term time frame of minutes to hours following the initiating disturbance. The report concludes with a discussion of the special computational and programming requirements for a long-term stability program. 43 refs., 36 figs.

  1. International trade inoperability input-output model (IT-IIM): theory and application.

    Science.gov (United States)

    Jung, Jeesang; Santos, Joost R; Haimes, Yacov Y

    2009-01-01

    The inoperability input-output model (IIM) has been used for analyzing disruptions due to man-made or natural disasters that can adversely affect the operation of economic systems or critical infrastructures. Taking economic perturbations for each sector as inputs, the IIM provides the degree of economic production impact on all industry sectors as its outputs. The current version of the IIM does not provide a separate analysis for the international trade component of inoperability. If an important port of entry (e.g., the Port of Los Angeles) is disrupted, then international trade inoperability becomes a highly relevant subject for analysis. To complement the current IIM, this article develops the International Trade-IIM (IT-IIM). The IT-IIM investigates the international trade inoperability for all industry sectors resulting from disruptions to a major port of entry. As in traditional IIM analysis, the inoperability metrics that the IT-IIM provides can be used to prioritize economic sectors based on the losses they could potentially incur. The IT-IIM is used to analyze two types of direct perturbations: (1) reduced capacity of ports of entry, including harbors and airports (e.g., a shutdown of any port of entry); and (2) restrictions on commercial goods that foreign countries trade with the base nation (e.g., an embargo).
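
    The core IIM calculation is a Leontief-type equilibrium: the inoperability vector q satisfies q = A*q + c*, hence q = (I - A*)^(-1) c*. A minimal sketch with an illustrative interdependency matrix (not taken from the article):

```python
import numpy as np

# Illustrative 3-sector interdependency matrix A*; entry (i, j) describes
# how inoperability of sector j propagates to sector i.
A_star = np.array([
    [0.10, 0.20, 0.05],
    [0.15, 0.05, 0.10],
    [0.05, 0.10, 0.15],
])
# Direct perturbation c*: sector 0 (say, a port-dependent sector) loses
# 20% of its as-planned production.
c_star = np.array([0.20, 0.0, 0.0])

# Equilibrium inoperability: q = A* q + c*  =>  q = (I - A*)^(-1) c*
q = np.linalg.solve(np.eye(3) - A_star, c_star)
print(q.round(3))  # fraction of lost functionality per sector
```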

  2. Multiregional input-output model for the evaluation of Spanish water flows.

    Science.gov (United States)

    Cazcarro, Ignacio; Duarte, Rosa; Sánchez Chóliz, Julio

    2013-01-01

    We construct a multiregional input-output model for Spain in order to evaluate the pressures on water resources, virtual water flows, and water footprints of the regions, as well as the water impact of trade relationships within Spain and abroad. The study follows the interregional input-output models constructed to study water flows and impacts of regions in China, Australia, Mexico, and the UK. To build our database, we reconcile regional IO tables, national and regional accounts of Spain, and trade and water data. Results show an important imbalance between the origin of water resources and their final destination, with significant water pressures in the South, the Mediterranean, and some central regions. The most populated and dynamic regions of Madrid and Barcelona are important drivers of water consumption in Spain. The main virtual water exporters are the southern and central agrarian regions: Andalusia, Castile-La Mancha, Castile-Leon, Aragon, and Extremadura, while the main virtual water importers are the industrialized regions of Madrid, the Basque Country, and the Mediterranean coast. The paper shows the different locations of direct and indirect consumers of water in Spain and how the economic trade and consumption patterns of certain areas have significant impacts on the availability of water resources in other, often drier, regions.

  3. A Water-Withdrawal Input-Output Model of the Indian Economy.

    Science.gov (United States)

    Bogra, Shelly; Bakshi, Bhavik R; Mathur, Ritu

    2016-02-02

    Managing freshwater allocation for a highly populated and growing economy like India can benefit from knowledge about the effect of economic activities. This study transforms the 2003-2004 economic input-output (IO) table of India into a water withdrawal input-output model to quantify direct and indirect flows. This unique model is based on a comprehensive database compiled from diverse public sources, and estimates direct and indirect water withdrawal of all economic sectors. It distinguishes between green (rainfall), blue (surface and ground), and scarce groundwater. Results indicate that the total direct water withdrawal is nearly 3052 billion cubic meters (BCM), and 96% of this is used in agriculture sectors, with direct green water contributing about 1145 BCM, excluding forestry. Apart from 727 BCM of direct blue water withdrawal for agriculture, other significant users include "Electricity" with 64 BCM, "Water supply" with 44 BCM, and other industrial sectors with nearly 14 BCM. "Construction", "Miscellaneous food products", "Hotels and restaurants", and "Paper, paper products, and newsprint" are other significant indirect withdrawers. The net virtual water import is found to be insignificant compared with direct water use in agriculture nationally, while the scarce groundwater associated with crops is largely contributed by northern states.

  4. Input determination for neural network models in water resources applications. Part 2. Case study: forecasting salinity in a river

    Science.gov (United States)

    Bowden, Gavin J.; Maier, Holger R.; Dandy, Graeme C.

    2005-01-01

    This paper is the second of a two-part series in this issue that presents a methodology for determining an appropriate set of model inputs for artificial neural network (ANN) models in hydrologic applications. The first paper presented two input determination methods. The first method utilises a measure of dependence known as the partial mutual information (PMI) criterion to select significant model inputs. The second method utilises a self-organising map (SOM) to remove redundant input variables, and a hybrid genetic algorithm (GA) and general regression neural network (GRNN) to select the inputs that have a significant influence on the model's forecast. In the first paper, both methods were applied to synthetic data sets and were shown to lead to a set of appropriate ANN model inputs. To verify the proposed techniques, it is important that they are applied to a real-world case study. In this paper, the PMI algorithm and the SOM-GAGRNN are used to find suitable inputs to an ANN model for forecasting salinity in the River Murray at Murray Bridge, South Australia. The proposed methods are also compared with two methods used in previous studies, for the same case study. The two proposed methods were found to lead to more parsimonious models with a lower forecasting error than the models developed using the methods from previous studies. To verify the robustness of each of the ANNs developed using the proposed methodology, a real-time forecasting simulation was conducted. This validation data set consisted of independent data from a six-year period from 1992 to 1998. The ANN developed using the inputs identified by the stepwise PMI algorithm was found to be the most robust for this validation set. The PMI scores obtained using the stepwise PMI algorithm revealed useful information about the order of importance of each significant input.
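
    The dependence-screening idea behind the PMI criterion can be sketched with a crude histogram estimate of mutual information; the full stepwise PMI algorithm additionally removes the influence of already-selected inputs, and the data and bin count here are illustrative:

```python
import numpy as np

def mutual_info(x, y, bins=16):
    """Crude histogram estimate of mutual information I(X;Y) in nats."""
    pxy, _, _ = np.histogram2d(x, y, bins=bins)
    pxy = pxy / pxy.sum()
    px = pxy.sum(axis=1, keepdims=True)   # marginal of X
    py = pxy.sum(axis=0, keepdims=True)   # marginal of Y
    mask = pxy > 0
    return float((pxy[mask] * np.log(pxy[mask] / (px @ py)[mask])).sum())

rng = np.random.default_rng(0)
n = 5000
relevant = rng.normal(size=n)             # informative candidate input
noise = rng.normal(size=n)                # irrelevant candidate input
target = relevant + 0.1 * rng.normal(size=n)

scores = {"relevant": mutual_info(relevant, target),
          "noise": mutual_info(noise, target)}
print(scores)  # the informative input scores far higher
```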

  5. Experimental development based on mapping rule between requirements analysis model and web framework specific design model.

    Science.gov (United States)

    Okuda, Hirotaka; Ogata, Shinpei; Matsuura, Saeko

    2013-12-01

    Model Driven Development is a promising approach to developing high-quality software systems. We have proposed a method of model-driven requirements analysis using the Unified Modeling Language (UML). The main feature of our method is to automatically generate a Web user interface prototype from the UML requirements analysis model, so that we can confirm the validity of input/output data for each page and of page transitions by directly operating the prototype. We propose a mapping rule in which design information independent of any particular web application framework implementation is defined on the basis of the requirements analysis model, so as to improve traceability from the validated requirements analysis model to the final product. This paper discusses the results of applying our method to the development of a Group Work Support System that is currently running in our department.

  6. Review of Literature for Inputs to the National Water Savings Model and Spreadsheet Tool-Commercial/Institutional

    Energy Technology Data Exchange (ETDEWEB)

    Whitehead, Camilla Dunham; Melody, Moya; Lutz, James

    2009-05-29

    Lawrence Berkeley National Laboratory (LBNL) is developing a computer model and spreadsheet tool for the United States Environmental Protection Agency (EPA) to help estimate the water savings attributable to their WaterSense program. WaterSense has developed a labeling program for three types of plumbing fixtures commonly used in commercial and institutional settings: flushometer valve toilets, urinals, and pre-rinse spray valves. This National Water Savings-Commercial/Institutional (NWS-CI) model is patterned after the National Water Savings-Residential model, which was completed in 2008. Calculating the quantity of water and money saved through the WaterSense labeling program requires three primary inputs: (1) the quantity of a given product in use; (2) the frequency with which units of the product are replaced or are installed in new construction; and (3) the number of times or the duration the product is used in various settings. To obtain the information required for developing the NWS-CI model, LBNL reviewed various resources pertaining to the three WaterSense-labeled commercial/institutional products. The data gathered ranged from the number of commercial buildings in the United States to numbers of employees in various sectors of the economy and plumbing codes for commercial buildings. This document summarizes information obtained about the three products' attributes, quantities, and use in commercial and institutional settings that is needed to estimate how much water EPA's WaterSense program saves.

  7. Limited fetch revisited: comparison of wind input terms in surface waves modeling

    CERN Document Server

    Andrei, Pushkarev

    2015-01-01

    The results of the numerical solution of the Hasselmann kinetic equation ($HE$) for wind-driven sea spectra in the fetch-limited geometry are presented. Five versions of the source function, including the recently introduced ZRP model, have been studied using the exact expression for Snl and high-frequency implicit dissipation due to wave breaking. Four of the five experiments were done in the absence of spectral peak dissipation for various Sin terms. They demonstrated the dominance of quadruplet wave-wave interaction in the energy balance and the formation of self-similar regimes of unlimited wave energy growth along the fetch. Among them was the ZRP model, which showed especially good agreement with a dozen field observations performed in seas and lakes since 1971. The fifth experiment, with the WAM3 wind input term, used additional spectral peak dissipation and reproduced the results of a previous similar numerical simulation, but was in good agreement with the field experiments only for moderate fetches, demonstrati...

  8. Efficient design and simulation of an expandable hybrid (wind-photovoltaic) power system with MPPT and inverter input voltage regulation features in compliance with electric grid requirements

    Energy Technology Data Exchange (ETDEWEB)

    Skretas, Sotirios B.; Papadopoulos, Demetrios P. [Electrical Machines Laboratory, Department of Electrical and Computer Engineering, Democritos University of Thrace (DUTH), 12 V. Sofias, 67100 Xanthi (Greece)

    2009-09-15

    In this paper an efficient design, along with modeling and simulation, of a transformer-less small-scale centralized DC-bus grid-connected hybrid (wind-PV) power system for supplying electric power to a single phase of a three-phase low-voltage (LV) strong distribution grid is proposed and presented. The main components of the hybrid system are a PV generator (PVG) and an array of horizontal-axis, fixed-pitch, small-size, variable-speed wind turbines (WTs) with a direct-driven permanent magnet synchronous generator (PMSG) having an embedded uncontrolled bridge rectifier. An overview of the basic theory of such systems, along with their modeling and simulation via the Simulink/MATLAB software package, is presented. An intelligent control method is applied to the proposed configuration to simultaneously achieve three desired goals: to extract maximum power from each hybrid power system component (PVG and WTs); to guarantee DC voltage regulation/stabilization at the input of the inverter; and to transfer the total produced electric power to the electric grid, while fulfilling all necessary interconnection requirements. Finally, a practical case study is conducted for the purpose of fully evaluating a possible installation at a city site of Xanthi/Greece, and the practical results of the simulations are presented. (author)
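
    Maximum power point tracking of the kind described above is commonly implemented with a perturb-and-observe loop; the sketch below is a generic textbook version with a made-up PV power curve, not the paper's specific intelligent control method:

```python
def perturb_and_observe(pv_power, v0=20.0, dv=0.5, steps=60):
    """Generic perturb & observe MPPT: step the operating voltage in the
    direction that increases the measured PV power, reversing on a drop."""
    v, p_prev, direction = v0, pv_power(v0), 1.0
    for _ in range(steps):
        v += direction * dv
        p = pv_power(v)
        if p < p_prev:              # power fell: reverse the perturbation
            direction = -direction
        p_prev = p
    return v

# Toy PV power curve with its maximum power point at 26 V (made-up numbers).
mpp = perturb_and_observe(lambda v: max(0.0, 150.0 - 0.5 * (v - 26.0) ** 2))
print(round(mpp, 1))  # settles near 26 V, oscillating within one step
```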

  9. Model algorithm control using neural networks for input delayed nonlinear control system

    Institute of Scientific and Technical Information of China (English)

    Yuanliang Zhang; Kil To Chong

    2015-01-01

    The performance of the model algorithm control method is partially based on the accuracy of the system's model. It is difficult to obtain a good model of a nonlinear system, especially when the nonlinearity is high. Neural networks have the ability to "learn" the characteristics of a system through nonlinear mapping to represent nonlinear functions as well as their inverse functions. This paper presents a model algorithm control method using neural networks for nonlinear time delay systems. Two neural networks are used in the control scheme. One neural network is trained as the model of the nonlinear time delay system, and the other produces the control inputs. The neural networks are combined with the model algorithm control method to control the nonlinear time delay systems. Three examples are used to illustrate the proposed control method. The simulation results show that the proposed control method has a good control performance for nonlinear time delay systems.

  10. Requirements for clinical information modelling tools.

    Science.gov (United States)

    Moreno-Conde, Alberto; Jódar-Sánchez, Francisco; Kalra, Dipak

    2015-07-01

    This study proposes consensus requirements for clinical information modelling tools that can support modelling tasks in medium/large-scale institutions. Rather than identify which functionalities are currently available in existing tools, the study has focused on functionalities that should be covered in order to provide guidance about how to evolve the existing tools. After identifying a set of 56 requirements for clinical information modelling tools based on a literature review and interviews with experts, a classical Delphi study methodology was applied to conduct a two-round survey in order to classify them as essential or recommended. Essential requirements are those that must be met by any tool that claims to be suitable for clinical information modelling; if a certified tool list were one day available, any tool that does not meet the essential criteria would be excluded. Recommended requirements are more advanced requirements that may be met by tools offering a superior product or that are only needed in certain modelling situations. According to the answers provided by 57 experts from 14 different countries, we found a high level of agreement, enabling the study to identify 20 essential and 21 recommended requirements for these tools. It is expected that this list of identified requirements will guide developers on the inclusion of new basic and advanced functionalities that have strong support from end users. This list could also guide regulators in identifying requirements that could be demanded of tools adopted within their institutions. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.

  11. Teams in organizations: from input-process-output models to IMOI models.

    Science.gov (United States)

    Ilgen, Daniel R; Hollenbeck, John R; Johnson, Michael; Jundt, Dustin

    2005-01-01

    This review examines research and theory relevant to work groups and teams typically embedded in organizations and existing over time, although many studies reviewed were conducted in other settings, including the laboratory. Research was organized around a two-dimensional system based on time and the nature of explanatory mechanisms that mediated between team inputs and outcomes. These mechanisms were affective, behavioral, cognitive, or some combination of the three. Recent theoretical and methodological work is discussed that has advanced our understanding of teams as complex, multilevel systems that function over time, tasks, and contexts. The state of both the empirical and theoretical work is compared as to its impact on present knowledge and future directions.

  12. Documentation of input datasets for the soil-water balance groundwater recharge model of the Upper Colorado River Basin

    Science.gov (United States)

    Tillman, Fred D

    2015-01-01

    The Colorado River and its tributaries supply water to more than 35 million people in the United States and 3 million people in Mexico, irrigating more than 4.5 million acres of farmland, and generating about 12 billion kilowatt hours of hydroelectric power annually. The Upper Colorado River Basin, encompassing more than 110,000 square miles (mi2), contains the headwaters of the Colorado River (also known as the River) and is an important source of snowmelt runoff to the River. Groundwater discharge also is an important source of water in the River and its tributaries, with estimates ranging from 21 to 58 percent of streamflow in the upper basin. Planning for the sustainable management of the Colorado River in future climates requires an understanding of the Upper Colorado River Basin groundwater system. This report documents input datasets for a Soil-Water Balance groundwater recharge model that was developed for the Upper Colorado River Basin.

  13. Multivariate autoregressive models with exogenous inputs for intracerebral responses to direct electrical stimulation of the human brain

    Directory of Open Access Journals (Sweden)

    Jui-Yang Chang

    2012-11-01

    Full Text Available A multivariate autoregressive model with exogenous inputs is developed for describing the cortical interactions excited by direct electrical current stimulation of the cortex. Current stimulation is challenging to model because it excites neurons in multiple locations both near and distant to the stimulation site. The approach presented here models these effects using an exogenous input that is passed through a bank of filters, one for each channel. The filtered input and a random input excite a multivariate autoregressive system describing the interactions between cortical activity at the recording sites. The exogenous input filter coefficients, the autoregressive coefficients, and random input characteristics are estimated from the measured activity due to current stimulation. The effectiveness of the approach is demonstrated using intracranial recordings from three surgical epilepsy patients. We evaluate models for wakefulness and NREM sleep in these patients with two stimulation levels in one patient and two stimulation sites in another resulting in a total of ten datasets. Excellent agreement between measured and model-predicted evoked responses is obtained across all datasets. Furthermore, one-step prediction is used to show that the model also describes dynamics in prestimulus and evoked recordings. We also compare integrated information --- a measure of intracortical communication thought to reflect the capacity for consciousness --- associated with the network model in wakefulness and sleep. As predicted, higher information integration is found in wakefulness than in sleep for all five cases.

  14. Multivariate autoregressive models with exogenous inputs for intracerebral responses to direct electrical stimulation of the human brain.

    Science.gov (United States)

    Chang, Jui-Yang; Pigorini, Andrea; Massimini, Marcello; Tononi, Giulio; Nobili, Lino; Van Veen, Barry D

    2012-01-01

    A multivariate autoregressive (MVAR) model with exogenous inputs (MVARX) is developed for describing the cortical interactions excited by direct electrical current stimulation of the cortex. Current stimulation is challenging to model because it excites neurons in multiple locations both near and distant to the stimulation site. The approach presented here models these effects using an exogenous input that is passed through a bank of filters, one for each channel. The filtered input and a random input excite a MVAR system describing the interactions between cortical activity at the recording sites. The exogenous input filter coefficients, the autoregressive coefficients, and random input characteristics are estimated from the measured activity due to current stimulation. The effectiveness of the approach is demonstrated using intracranial recordings from three surgical epilepsy patients. We evaluate models for wakefulness and NREM sleep in these patients with two stimulation levels in one patient and two stimulation sites in another resulting in a total of 10 datasets. Excellent agreement between measured and model-predicted evoked responses is obtained across all datasets. Furthermore, one-step prediction is used to show that the model also describes dynamics in pre-stimulus and evoked recordings. We also compare integrated information-a measure of intracortical communication thought to reflect the capacity for consciousness-associated with the network model in wakefulness and sleep. As predicted, higher information integration is found in wakefulness than in sleep for all five cases.
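
    The MVARX structure can be sketched in miniature with a single autoregressive lag and a scalar input gain per channel (the paper uses a full filter bank per channel; all matrices and signals below are illustrative), identified by ordinary least squares:

```python
import numpy as np

rng = np.random.default_rng(1)
n_ch, T = 3, 4000
A = np.array([[0.5, 0.1, 0.0],
              [0.0, 0.4, 0.2],
              [0.1, 0.0, 0.3]])    # AR(1) interaction matrix (illustrative)
b = np.array([1.0, 0.5, -0.3])     # per-channel gain on the exogenous input

u = (np.arange(T) % 200 == 0).astype(float)   # sparse "stimulation" pulses
y = np.zeros((T, n_ch))
for t in range(1, T):
    y[t] = A @ y[t - 1] + b * u[t] + 0.05 * rng.normal(size=n_ch)

# Least-squares identification: regress y[t] on [y[t-1], u[t]].
X = np.column_stack([y[:-1], u[1:]])
coef, *_ = np.linalg.lstsq(X, y[1:], rcond=None)
A_hat, b_hat = coef[:n_ch].T, coef[n_ch]
print(A_hat.round(2))  # recovered interaction matrix
```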

  15. Diagnostic analysis of distributed input and parameter datasets in Mediterranean basin streamflow modeling

    Science.gov (United States)

    Milella, Pamela; Bisantino, Tiziana; Gentile, Francesco; Iacobellis, Vito; Trisorio Liuzzi, Giuliana

    2012-11-01

    The paper suggests a methodology, based on performance metrics, to select the optimal set of inputs and parameters to be used for the simulation of river flow discharges with a semi-distributed hydrologic model. The model is applied at the daily scale in a semi-arid basin of Southern Italy (Carapelle river, basin area: 506 km2), for which rainfall and discharge series for the period 2006-2009 are available. Inputs and parameters were classified into two subsets: the former, spatially distributed, to be selected among different options; the latter, lumped, to be calibrated. Different data sources for (or methodologies to obtain) spatially distributed data were explored for the first subset. In particular, the FAO Penman-Monteith, Hargreaves and Thornthwaite equations were tested for the evaluation of reference evapotranspiration, which plays a key role in hydrological modeling in semi-arid areas. The availability of LAI maps from different remote sensing sources was exploited in order to enhance the characterization of the vegetation state and consequently of the spatio-temporal variation in actual evapotranspiration. Different types of pedotransfer functions were used to derive the soil hydraulic parameters of the area. For each configuration of the first subset of data, a manual calibration of the second subset of parameters was carried out. Both the manual calibration of the lumped parameters and the selection of the optimal distributed dataset were based on the calculation and comparison of different performance metrics measuring the distance between observed and simulated discharge series. Results not only show the best options for estimating reference evapotranspiration, crop coefficients, LAI values and soil hydraulic properties, but also provide significant insights regarding the use of different performance metrics, including traditional indexes such as RMSE, NSE, and the index of agreement, with the more recent Benchmark
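
    Of the reference evapotranspiration formulas tested above, Hargreaves is the simplest to illustrate: ET0 = 0.0023 * Ra * (Tmean + 17.8) * sqrt(Tmax - Tmin), with the extraterrestrial radiation Ra expressed as equivalent evaporation in mm/day. A sketch with made-up values:

```python
import math

def hargreaves_et0(t_min, t_max, ra):
    """Hargreaves reference evapotranspiration ET0 (mm/day).
    t_min, t_max: daily air temperature extremes (deg C);
    ra: extraterrestrial radiation as equivalent evaporation (mm/day)."""
    t_mean = (t_min + t_max) / 2.0
    return 0.0023 * ra * (t_mean + 17.8) * math.sqrt(t_max - t_min)

# Made-up mid-summer day in a semi-arid basin.
et0 = hargreaves_et0(t_min=18.0, t_max=34.0, ra=16.5)
print(round(et0, 2), "mm/day")
```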

  16. Comparison of several climate indices as inputs in modelling of the Baltic Sea runoff

    Energy Technology Data Exchange (ETDEWEB)

    Hanninen, J.; Vuorinen, I. [Turku Univ. (Finland). Archipelago Research Inst.], e-mail: jari.hanninen@utu.fi

    2012-11-01

    Using transfer function (TF) models, we have earlier presented a chain of events linking changes in the North Atlantic Oscillation (NAO) to their oceanographical and ecological consequences in the Baltic Sea. Here we tested whether other climate indices as inputs would improve the TF models and our understanding of the Baltic Sea ecosystem. Besides the NAO, the predictors were the Arctic Oscillation (AO), sea-level air pressure at Iceland (SLP), and wind speeds at Hoburg (Gotland). All indices produced good TF models when the total riverine runoff to the Baltic Sea was used as the modelling basis. The AO was not applicable in all study areas, showing a delay of about half a year between climate and runoff events, connected with the freezing and melting times of ice and snow in the northern catchment area of the Baltic Sea. The NAO appeared to be the most useful modelling tool, as its area of applicability was the widest of the tested indices and the time lag between climate and runoff events was the shortest. SLP and Hoburg wind speeds gave largely the same results as the NAO, but with smaller areal applicability. Thus the AO and NAO both contributed most to the general understanding of climate control of runoff events in the Baltic Sea ecosystem. (orig.)

  17. Reconstruction of rocks petrophysical properties as input data for reservoir modeling

    Science.gov (United States)

    Cantucci, B.; Montegrossi, G.; Lucci, F.; Quattrocchi, F.

    2016-11-01

    The worldwide increasing energy demand has triggered studies focused on defining the underground energy potential even in areas previously discarded or neglected. Nowadays, geological gas storage (CO2 and/or CH4) and geothermal energy are considered strategic for low-carbon energy development. A widespread and safe application of these technologies needs an accurate characterization of the underground in terms of geology, hydrogeology, geochemistry, and geomechanics. However, at the prefeasibility study stage, the limited number of available direct measurements of reservoirs and the high costs of reopening closed deep wells must be taken into account. The aim of this work is to overcome these limits by proposing a new methodology to reconstruct vertical profiles, from the surface to the reservoir base, of: (i) thermal capacity, (ii) thermal conductivity, (iii) porosity, and (iv) permeability, through the integration of well-log information, petrographic observations on inland outcropping samples, and flow and heat transport modeling. As a case study to test our procedure we selected a deep structure located in the central Tyrrhenian Sea (Italy). The obtained results are consistent with measured data, confirming the validity of the proposed model. Notwithstanding intrinsic limitations due to manual calibration of the model with measured data, this methodology represents a useful tool for reservoir and geochemical modelers who need to define petrophysical input data for underground modeling before well reopening.

  18. Multivariate sensitivity analysis to measure global contribution of input factors in dynamic models

    Energy Technology Data Exchange (ETDEWEB)

    Lamboni, Matieyendou [INRA, Unite MIA (UR341), F78352 Jouy en Josas Cedex (France); Monod, Herve, E-mail: herve.monod@jouy.inra.f [INRA, Unite MIA (UR341), F78352 Jouy en Josas Cedex (France); Makowski, David [INRA, UMR Agronomie INRA/AgroParisTech (UMR 211), BP 01, F78850 Thiverval-Grignon (France)

    2011-04-15

    Many dynamic models are used for risk assessment and decision support in ecology and crop science. Such models generate time-dependent model predictions, with time either discretised or continuous. Their global sensitivity analysis is usually applied separately on each time output, but Campbell et al. (2006) advocated global sensitivity analyses on the expansion of the dynamics in a well-chosen functional basis. This paper focuses on the particular case when principal components analysis is combined with analysis of variance. In addition to the indices associated with the principal components, generalised sensitivity indices are proposed to synthesize the influence of each parameter on the whole time series output. Index definitions are given when the uncertainty on the input factors is either discrete or continuous and when the dynamic model is either discrete or functional. A general estimation algorithm is proposed, based on classical methods of global sensitivity analysis. The method is applied to a dynamic wheat crop model with 13 uncertain parameters. Three methods of global sensitivity analysis are compared: the Sobol'-Saltelli method, the extended FAST method, and the fractional factorial design of resolution 6.
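
    The combination of principal components analysis with variance-based main effects can be sketched as follows. This is a toy illustration of the idea of a generalised sensitivity index (the model, factor levels and weighting are invented for illustration), not the paper's estimation algorithm:

    ```python
    import numpy as np

    t = np.linspace(0, 1, 20)

    # Toy dynamic model: factor a scales a trend, factor b a seasonal term.
    def model(a, b):
        return a * t + 0.2 * b * np.sin(6.28 * t)

    levels = [-1.0, 0.0, 1.0]
    runs, design = [], []
    for a in levels:
        for b in levels:
            design.append((a, b))
            runs.append(model(a, b))
    Y = np.array(runs)                       # (n_runs, n_times)

    # PCA of the centered output matrix.
    Yc = Y - Y.mean(axis=0)
    U, s, Vt = np.linalg.svd(Yc, full_matrices=False)
    scores = U * s                           # PC scores per run
    expl = s**2 / np.sum(s**2)               # variance share per component

    # First-order (main-effect) index of each factor on each PC score,
    # then a generalised index = variance-weighted sum over components.
    def main_effect(scores_k, column):
        vals = np.array([design[i][column] for i in range(len(design))])
        total = scores_k.var()
        cond = [scores_k[vals == v].mean() for v in levels]
        return np.var(cond) / total if total > 0 else 0.0

    gsi = {}
    for j, name in enumerate(("a", "b")):
        gsi[name] = sum(expl[k] * main_effect(scores[:, k], j)
                        for k in range(len(s)))
    print(gsi)  # factor "a" dominates this toy model
    ```

    The point of the weighting by explained variance is that one number per factor summarises its influence on the whole time series, rather than one index per time step.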

  19. Robust unknown input observer design for state estimation and fault detection using linear parameter varying model

    Science.gov (United States)

    Li, Shanzhi; Wang, Haoping; Aitouche, Abdel; Tian, Yang; Christov, Nicolai

    2017-01-01

    This paper proposes a robust unknown input observer (UIO) for state estimation and fault detection using a linear parameter varying model. Since disturbances and actuator faults are mixed together in the physical system, it is difficult to isolate the fault from the disturbance. Using a state transformation, the estimation of the original state becomes associated with the transformed state. By solving linear matrix inequalities (LMIs) and linear matrix equalities (LMEs), the parameters of the UIO can be obtained. The convergence of the UIO is also analysed using Lyapunov theory. Finally, the proposed method is tested on a wind turbine system with disturbance and actuator fault. The simulations demonstrate the effectiveness and performance of the proposed method.

  20. Applying Input-Output Model to Estimate Broader Economic Impact of Transportation Infrastructure Investment

    Science.gov (United States)

    Anas, Ridwan; Tamin, Ofyar; Wibowo, Sony S.

    2016-09-01

    The purpose of this study is to identify the relationships between infrastructure improvement and economic growth in the surrounding region. Traditionally, microeconomic and macroeconomic analyses are the most widely used tools for analyzing the linkage between transportation sectors and economic growth, but they offer few clues to the mechanisms linking transport improvements and broader economic impacts. This study estimates the broader economic benefits of a new transportation infrastructure investment, the Cipularang tollway in West Java province, Indonesia, to the connected region (Bandung District) using an input-output model. The results show a decrease in freight transportation costs of 17 % and an increase of 1.2 % in Bandung District's GDP after the opening of the Cipularang tollway.
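
    The core of any input-output impact estimate is the Leontief quantity model, x = (I - A)^-1 f. A minimal sketch with invented three-sector coefficients (not the Indonesian tables used in the study) shows how a demand change propagates beyond its direct effect:

    ```python
    import numpy as np

    # Hypothetical 3-sector technical-coefficient matrix A (column j =
    # inputs required from each sector per unit of sector j's output).
    A = np.array([
        [0.10, 0.20, 0.05],   # agriculture
        [0.15, 0.10, 0.25],   # manufacturing
        [0.05, 0.30, 0.10],   # transport/services
    ])
    f = np.array([100.0, 150.0, 80.0])   # final demand

    # Leontief inverse: total (direct + indirect) output needed to meet f.
    L = np.linalg.inv(np.eye(3) - A)
    x = L @ f

    # A transport improvement modelled here (illustratively) as a 10-unit
    # rise in final demand for transport/services.
    x_new = L @ (f + np.array([0.0, 0.0, 10.0]))
    print(x_new - x)  # total effect exceeds the 10-unit direct shock
    ```

    The gap between the direct shock and the total output change is exactly the "broader" impact the study attributes to intersectoral linkages.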

  2. Modeling and Controller Design of PV Micro Inverter without Using Electrolytic Capacitors and Input Current Sensors

    Directory of Open Access Journals (Sweden)

    Faa Jeng Lin

    2016-11-01

    Full Text Available This paper outlines the modeling and controller design of a novel two-stage photovoltaic (PV micro inverter (MI that eliminates the need for an electrolytic capacitor (E-cap and input current sensor. The proposed MI uses an active-clamped current-fed push-pull DC-DC converter, cascaded with a full-bridge inverter. Three strategies are proposed to cope with the inherent limitations of a two-stage PV MI: (i high-speed DC bus voltage regulation using an integrator to deal with the 2nd harmonic voltage ripples found in single-phase systems; (ii inclusion of a small film capacitor in the DC bus to achieve ripple-free PV voltage; (iii improved incremental conductance (INC maximum power point tracking (MPPT without the need for current sensing by the PV module. Simulation and experimental results demonstrate the efficacy of the proposed system.

  3. Synchronized Beta-Band Oscillations in a Model of the Globus Pallidus-Subthalamic Nucleus Network under External Input

    Science.gov (United States)

    Ahn, Sungwoo; Zauber, S. Elizabeth; Worth, Robert M.; Rubchinsky, Leonid L.

    2016-01-01

    Hypokinetic symptoms of Parkinson's disease are usually associated with excessively strong oscillations and synchrony in the beta frequency band. The origin of this synchronized oscillatory dynamics is being debated. Cortical circuits may be a critical source of excessive beta in Parkinson's disease. However, subthalamo-pallidal circuits were also suggested to be a substantial component in generation and/or maintenance of Parkinsonian beta activity. Here we study how the subthalamo-pallidal circuits interact with input signals in the beta frequency band, representing cortical input. We use conductance-based models of the subthalamo-pallidal network and two types of input signals: artificially-generated inputs and input signals obtained from recordings in Parkinsonian patients. The resulting model network dynamics is compared with the dynamics of the experimental recordings from patient's basal ganglia. Our results indicate that the subthalamo-pallidal model network exhibits multiple resonances in response to inputs in the beta band. For a relatively broad range of network parameters, there is always a certain input strength, which will induce patterns of synchrony similar to the experimentally observed ones. This ability of the subthalamo-pallidal network to exhibit realistic patterns of synchronous oscillatory activity under broad conditions may indicate that these basal ganglia circuits are directly involved in the expression of Parkinsonian synchronized beta oscillations. Thus, Parkinsonian synchronized beta oscillations may be promoted by the simultaneous action of both cortical (or some other) and subthalamo-pallidal network mechanisms. Hence, these mechanisms are not necessarily mutually exclusive. PMID:28066222

  4. Operant responding for optogenetic excitation of LDTg inputs to the VTA requires D1 and D2 dopamine receptor activation in the NAcc.

    Science.gov (United States)

    Steidl, Stephan; O'Sullivan, Shannon; Pilat, Dustin; Bubula, Nancy; Brown, Jason; Vezina, Paul

    2017-08-30

    Behavioral studies in rats and mice indicate that laterodorsal tegmental nucleus (LDTg) inputs to the ventral tegmental area (VTA) importantly contribute to reward function. Further evidence from anesthetized rat and mouse preparations suggests that these LDTg inputs may exert this effect by regulating mesolimbic dopamine (DA) signaling. Direct evidence supporting this possibility remains lacking, however. To address this lack, rat LDTg neurons were transfected with adeno-associated viral vectors encoding channelrhodopsin2 and eYFP (ChR2) or eYFP alone (eYFP), and rats were subsequently trained to lever press for intracranial self-stimulation (ICSS) of the inputs of these neurons to the VTA. First, we found that DA overflow in the forebrain nucleus accumbens (NAcc) increased maximally during ICSS to approximately 240% of baseline levels in ChR2, but not in eYFP, rats. Based on these findings, we next tested the contribution of NAcc D1 and D2 DA receptors to the reinforcing effects of optogenetic excitation of LDTg inputs to the VTA. Microinjecting SCH23390 or raclopride, D1 and D2 DA receptor antagonists respectively, into the NAcc significantly reduced operant responding for this stimulation. Together these results demonstrate for the first time that optogenetic ICSS of LDTg inputs to the VTA increases DA overflow in the NAcc and requires activation of D1 and D2 DA receptors in this site. Copyright © 2017 Elsevier B.V. All rights reserved.

  5. A Requirements Analysis Model Based on QFD

    Institute of Scientific and Technical Information of China (English)

    TANG Zhi-wei; Nelson K.H.Tang

    2004-01-01

    The enterprise resource planning (ERP) system has emerged to offer an integrated IT solution, and more and more enterprises are adopting this system and regarding it as an important innovation. However, there is already evidence of high failure risks in ERP project implementation; one major reason is poor analysis of the requirements for system implementation. In this paper, the importance of requirements analysis for ERP project implementation is highlighted, and a requirements analysis model applying quality function deployment (QFD) is presented, which supports conducting requirements analysis for ERP projects.

  6. Scaling precipitation input to distributed hydrological models by measured snow distribution

    Science.gov (United States)

    Voegeli, Christian; Lehning, Michael; Wever, Nander; Bavay, Mathias; Bühler, Yves; Marty, Mauro; Molnar, Peter

    2016-04-01

    Precise knowledge about the snow distribution in alpine terrain is crucial for various applications such as flood risk assessment, avalanche warning or water supply and hydropower. To simulate the seasonal snow cover development in alpine terrain, the spatially distributed, physics-based model Alpine3D is suitable. The model is often driven by spatial interpolations from automatic weather stations (AWS). As AWS are sparsely spread, the data needs to be interpolated, leading to errors in the spatial distribution of the snow cover - especially on subcatchment scale. With the recent advances in remote sensing techniques, maps of snow depth can be acquired with high spatial resolution and vertical accuracy. Here we use maps of the snow depth distribution, calculated from summer and winter digital surface models acquired with the airborne opto-electronic scanner ADS to preprocess and redistribute precipitation input data for Alpine3D to improve the accuracy of spatial distribution of snow depth simulations. A differentiation between liquid and solid precipitation is made, to account for different precipitation patterns that can be expected from rain and snowfall. For liquid precipitation, only large scale distribution patterns are applied to distribute precipitation in the simulation domain. For solid precipitation, an additional small scale distribution, based on the ADS data, is applied. The large scale patterns are generated using AWS measurements interpolated over the domain. The small scale patterns are generated by redistributing the large scale precipitation according to the relative snow depth in the ADS dataset. The determination of the precipitation phase is done using an air temperature threshold. Using this simple approach to redistribute precipitation, the accuracy of spatial snow distribution could be improved significantly. The standard deviation of absolute snow depth error could be reduced by a factor of 2 to less than 20 cm for the season 2011/12. 
The
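
    The redistribution idea (large-scale interpolated precipitation, rescaled at small scale by relative snow depth for the solid phase only) can be sketched as follows; the grid, temperature threshold and snow-depth values are invented for illustration:

    ```python
    import numpy as np

    rng = np.random.default_rng(2)

    # Hypothetical 4x4 grid: large-scale interpolated precipitation (from
    # AWS) and a relative snow-depth map (e.g. from an airborne survey).
    precip_large = np.full((4, 4), 5.0)                  # mm, uniform field
    snow_depth = rng.uniform(0.2, 2.0, size=(4, 4))      # m, measured pattern
    air_temp = np.array([[1.5] * 4, [0.5] * 4, [-0.5] * 4, [-1.5] * 4])

    T_THRESHOLD = 1.0   # deg C; below -> solid precipitation (assumed value)
    solid = air_temp < T_THRESHOLD

    # Solid precipitation is redistributed according to relative snow depth;
    # liquid precipitation keeps the large-scale pattern unchanged.
    weights = snow_depth / snow_depth[solid].mean()
    precip = np.where(solid, precip_large * weights, precip_large)

    # Rescale so the catchment total of solid precipitation is conserved.
    precip[solid] *= precip_large[solid].sum() / precip[solid].sum()
    print(precip.round(2))
    ```

    The rescaling step is the key design choice: the measured snow-depth map only redistributes mass within the domain, while the AWS-based total is preserved.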

  7. MODELING OF THE PRIORITY SCHEDULING INPUT-LINE GROUP OUTPUT WITH MULTI-CHANNEL IN ATM EXCHANGE SYSTEM

    Institute of Scientific and Technical Information of China (English)

    2006-01-01

    In this paper, an extended Kendall model for the priority scheduling input-line group output with multi-channel in Asynchronous Transfer Mode (ATM) exchange systems is proposed, and the mean method is then used to model mathematically the non-typical, non-anticipative PRiority service (PR) model. Compared with the typical non-anticipative PR model, it expresses the characteristics of the priority scheduling input-line group output with multi-channel in ATM exchange systems. The simulation experiment shows that this model can reduce head-of-line (HOL) blocking and dramatically improve the performance of an input-queued ATM switch network. This model has good development prospects in ATM exchange systems.

  8. Modelling pesticide leaching under climate change: parameter vs. climate input uncertainty

    Directory of Open Access Journals (Sweden)

    K. Steffens

    2013-08-01

    Full Text Available The assessment of climate change impacts on the risk for pesticide leaching needs careful consideration of different sources of uncertainty. We investigated the uncertainty related to climate scenario input and its importance relative to parameter uncertainty of the pesticide leaching model. The pesticide fate model MACRO was calibrated against a comprehensive one-year field data set for a well-structured clay soil in south-west Sweden. We obtained an ensemble of 56 acceptable parameter sets that represented the parameter uncertainty. Nine different climate model projections of the regional climate model RCA3 were available as driven by different combinations of global climate models (GCM, greenhouse gas emission scenarios and initial states of the GCM. The future time series of weather data used to drive the MACRO-model were generated by scaling a reference climate data set (1970–1999 for an important agricultural production area in south-west Sweden based on monthly change factors for 2070–2099. 30 yr simulations were performed for different combinations of pesticide properties and application seasons. Our analysis showed that both the magnitude and the direction of predicted change in pesticide leaching from present to future depended strongly on the particular climate scenario. The effect of parameter uncertainty was of major importance for simulating absolute pesticide losses, whereas the climate uncertainty was relatively more important for predictions of changes of pesticide losses from present to future. The climate uncertainty should be accounted for by applying an ensemble of different climate scenarios. The aggregated ensemble prediction based on both acceptable parameterizations and different climate scenarios could provide robust probabilistic estimates of future pesticide losses and assessments of changes in pesticide leaching risks.
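
    The relative weight of climate-scenario versus parameter uncertainty can be illustrated with a toy two-factor ensemble and a simple variance decomposition (the ensemble sizes echo the study's 9 scenarios and 56 parameter sets, but all effect sizes are invented, not MACRO results):

    ```python
    import numpy as np

    rng = np.random.default_rng(3)

    # Hypothetical ensemble: 9 climate scenarios x 56 acceptable parameter
    # sets, each cell a simulated change in pesticide loss (illustrative).
    scenario_effect = rng.normal(0.0, 1.0, size=(9, 1))   # dominates change
    param_effect = rng.normal(0.0, 0.4, size=(1, 56))
    change = scenario_effect + param_effect + rng.normal(0, 0.1, (9, 56))

    # Simple variance decomposition (main effects of the two factors).
    total = change.var()
    v_scenario = change.mean(axis=1).var()   # between climate scenarios
    v_param = change.mean(axis=0).var()      # between parameter sets
    print(v_scenario / total, v_param / total)
    ```

    Aggregating over both axes, as the authors recommend, yields a prediction that reflects both sources of uncertainty instead of conditioning on one scenario or one parameterization.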

  9. Nuclear inputs of key iron isotopes for core-collapse modeling and simulation

    CERN Document Server

    Nabi, Jameel-Un

    2014-01-01

    From the modeling and simulation results of presupernova evolution of massive stars, it was found that isotopes of iron, $^{54,55,56}$Fe, play a significant role inside the stellar cores, primarily decreasing the electron-to-baryon ratio ($Y_{e}$) mainly via electron capture processes thereby reducing the pressure support. The neutrinos produced, as a result of these capture processes, are transparent to the stellar matter and assist in cooling the core thereby reducing the entropy. The structure of the presupernova star is altered both by the changes in $Y_{e}$ and the entropy of the core material. Here we present the microscopic calculation of Gamow-Teller strength distributions for isotopes of iron. The calculation is also compared with other theoretical models and experimental data. Presented also are stellar electron capture rates and associated neutrino cooling rates, due to isotopes of iron, in a form suitable for simulation and modeling codes. It is hoped that the nuclear inputs presented here should ...

  10. Modeling and Testing Legacy Data Consistency Requirements

    DEFF Research Database (Denmark)

    Nytun, J. P.; Jensen, Christian Søndergaard

    2003-01-01

    An increasing number of data sources are available on the Internet, many of which offer semantically overlapping data, but based on different schemas, or models. While it is often of interest to integrate such data sources, the lack of consistency among them makes this integration difficult. This paper addresses the need for new techniques that enable the modeling and consistency checking of legacy data sources. Specifically, the paper contributes to the development of a framework that enables consistency testing of data coming from different types of data sources. The vehicle is UML and its accompanying XMI. The paper presents techniques for modeling consistency requirements using OCL and other UML modeling elements: it studies how models that describe the required consistencies among instances of legacy models can be designed in standard UML tools that support XMI. The paper also considers...

  11. Achieving a System Operational Availability Requirement (ASOAR) Model

    Science.gov (United States)

    1992-07-01

    ASOAR requires only system and end item level input data, not Line Replaceable Unit (LRU) input data. ASOAR usage provides concepts for major logistics...the Corp/Theater ADP Service Center II (CTASC II) to a system operational availability goal. The CTASC II system configuration had many redundant types

  12. Predicting musically induced emotions from physiological inputs: Linear and neural network models

    Directory of Open Access Journals (Sweden)

    Frank A. Russo

    2013-08-01

    Full Text Available Listening to music often leads to physiological responses. Do these physiological responses contain sufficient information to infer emotion induced in the listener? The current study explores this question by attempting to predict judgments of 'felt' emotion from physiological responses alone using linear and neural network models. We measured five channels of peripheral physiology from 20 participants – heart rate, respiration, galvanic skin response, and activity in corrugator supercilii and zygomaticus major facial muscles. Using valence and arousal (VA dimensions, participants rated their felt emotion after listening to each of 12 classical music excerpts. After extracting features from the five channels, we examined their correlation with VA ratings, and then performed multiple linear regression to see if a linear relationship between the physiological responses could account for the ratings. Although linear models predicted a significant amount of variance in arousal ratings, they were unable to do so with valence ratings. We then used a neural network to provide a nonlinear account of the ratings. The network was trained on the mean ratings of eight of the 12 excerpts and tested on the remainder. Performance of the neural network confirms that physiological responses alone can be used to predict musically induced emotion. The nonlinear model derived from the neural network was more accurate than linear models derived from multiple linear regression, particularly along the valence dimension. A secondary analysis allowed us to quantify the relative contributions of inputs to the nonlinear model. The study represents a novel approach to understanding the complex relationship between physiological responses and musically induced emotion.
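
    The linear part of the analysis can be sketched with ordinary least squares; the synthetic data below merely mimic the finding that a linear model captures arousal better than valence (feature count and excerpt count follow the abstract, all numbers are invented):

    ```python
    import numpy as np

    rng = np.random.default_rng(4)

    # Synthetic stand-in data: 12 excerpts x 5 physiological features.
    # Arousal depends (noisily) on two features in a linear way; valence
    # is driven by a nonlinear interaction, so a linear fit struggles.
    n_excerpts = 12
    X = rng.standard_normal((n_excerpts, 5))
    arousal = 0.8 * X[:, 0] + 0.5 * X[:, 2] + 0.1 * rng.standard_normal(n_excerpts)
    valence = np.tanh(X[:, 3] * X[:, 4]) + 0.3 * rng.standard_normal(n_excerpts)

    def r2_linear(features, y):
        """R^2 of an ordinary least-squares fit with intercept."""
        A = np.column_stack([np.ones(len(features)), features])
        beta, *_ = np.linalg.lstsq(A, y, rcond=None)
        resid = y - A @ beta
        return 1.0 - resid.var() / y.var()

    print(r2_linear(X, arousal))  # high: linear structure
    print(r2_linear(X, valence))  # lower: interaction structure
    ```

    A nonlinear model such as the article's neural network would be needed to recover the interaction term driving the valence ratings.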

  13. Modelling Implicit Communication in Multi-Agent Systems with Hybrid Input/Output Automata

    Directory of Open Access Journals (Sweden)

    Marta Capiluppi

    2012-10-01

    Full Text Available We propose an extension of Hybrid I/O Automata (HIOAs to model agent systems and their implicit communication through perturbation of the environment, like localization of objects or radio signals diffusion and detection. To this end we decided to specialize some variables of the HIOAs whose values are functions both of time and space. We call them world variables. Basically they are treated similarly to the other variables of HIOAs, but they have the function of representing the interaction of each automaton with the surrounding environment, hence they can be output, input or internal variables. Since these special variables have the role of simulating implicit communication, their dynamics are specified both in time and space, because they model the perturbations induced by the agent to the environment, and the perturbations of the environment as perceived by the agent. Parallel composition of world variables is slightly different from parallel composition of the other variables, since their signals are summed. The theory is illustrated through a simple example of agents systems.

  14. Effects of model input data uncertainty in simulating water resources of a transnational catchment

    Science.gov (United States)

    Camargos, Carla; Breuer, Lutz

    2016-04-01

    Landscapes consist of different ecosystem components, and how these components affect water quantity and quality needs to be understood. We start from the assumption that water resources are generated in landscapes and that rural land use (particularly agriculture) has a strong impact on water resources that are used downstream for domestic and industrial supply. Partly located in the north of Luxembourg and partly in the southeast of Belgium, the Haute-Sûre catchment is about 943 km2. As part of the catchment, the Haute-Sûre Lake is an important source of drinking water for the Luxembourg population, satisfying 30% of the city's demand. The objective of this study is to investigate the impact of spatial input data uncertainty on water resources simulations for the Haute-Sûre catchment. We apply the SWAT model for the period 2006 to 2012 and use a variety of digital information on soils, elevation and land uses with various spatial resolutions. Several objective functions are evaluated, and we consider the resulting parameter uncertainty to quantify an important part of the global uncertainty in model simulations.

  15. Limited fetch revisited: Comparison of wind input terms, in surface wave modeling

    Science.gov (United States)

    Pushkarev, Andrei; Zakharov, Vladimir

    2016-07-01

    Results pertaining to numerical solutions of the Hasselmann kinetic equation (HE), for wind-driven sea spectra in the fetch-limited geometry, are presented. Five versions of source functions, including the recently introduced ZRP model (Zakharov et al., 2012), have been studied, for the exact expression of Snl and high-frequency implicit dissipation due to wave-breaking. Four of the five experiments were done in the absence of spectral peak dissipation for various Sin terms. They demonstrated the dominance of quadruplet wave-wave interaction in the energy balance, and the formation of self-similar regimes of unlimited wave energy growth along the fetch. Among them was the ZRP model, which strongly agreed with dozens of field observations performed in seas and lakes since 1947. The fifth experiment, with the WAM3 wind input term, used additional spectral peak dissipation and reproduced the results of a previous, similar numerical simulation described in Komen et al. (1994), but only supported the field experiments for moderate fetches, demonstrating a total energy saturation at half of the Pierson-Moskowitz limit. An alternative framework for HE numerical simulation is proposed, along with a set of tests allowing one to select physically justified source terms.

  16. Modeling uncertainties in workforce disruptions from influenza pandemics using dynamic input-output analysis.

    Science.gov (United States)

    El Haimar, Amine; Santos, Joost R

    2014-03-01

    Influenza pandemic is a serious disaster that can pose significant disruptions to the workforce and associated economic sectors. This article examines the impact of influenza pandemic on workforce availability within an interdependent set of economic sectors. We introduce a simulation model based on the dynamic input-output model to capture the propagation of pandemic consequences through the National Capital Region (NCR). The analysis conducted in this article is based on the 2009 H1N1 pandemic data. Two metrics were used to assess the impacts of the influenza pandemic on the economic sectors: (i) inoperability, which measures the percentage gap between the as-planned output and the actual output of a sector, and (ii) economic loss, which quantifies the associated monetary value of the degraded output. The inoperability and economic loss metrics generate two different rankings of the critical economic sectors. Results show that most of the critical sectors in terms of inoperability are sectors that are related to hospitals and health-care providers. On the other hand, most of the sectors that are critically ranked in terms of economic loss are sectors with significant total production outputs in the NCR such as federal government agencies. Therefore, policy recommendations relating to potential mitigation and recovery strategies should take into account the balance between the inoperability and economic loss metrics.
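
    The dynamic input-output (inoperability) recursion this line of work builds on can be sketched as q(t+1) = q(t) + K[A*q(t) + c* - q(t)], with economic loss obtained by scaling inoperability by as-planned output. All sector numbers below are invented, not NCR data:

    ```python
    import numpy as np

    # Hypothetical 3-sector interdependency matrix A* and a pandemic-driven
    # demand perturbation c* (fractions, illustrative numbers only).
    A_star = np.array([
        [0.10, 0.05, 0.02],
        [0.08, 0.12, 0.10],
        [0.03, 0.07, 0.05],
    ])
    c_star = np.array([0.02, 0.10, 0.01])     # health sector hit hardest
    K = np.diag([0.5, 0.4, 0.6])              # sector resilience coefficients
    x_planned = np.array([200.0, 50.0, 400.0])  # as-planned outputs ($M)

    # Dynamic inoperability model: q(t+1) = q(t) + K [A* q(t) + c* - q(t)]
    q = np.zeros(3)
    for _ in range(200):
        q = q + K @ (A_star @ q + c_star - q)

    loss = x_planned * q     # monetary loss per sector
    print(q)                 # equilibrium inoperability per sector
    print(loss)
    ```

    Note how the two metrics rank sectors differently in this toy run: the second sector has the highest inoperability, but the third (with the largest as-planned output) has the highest economic loss, mirroring the article's observation that the two rankings diverge.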

  17. Three-Verb Clusters in Interference Frisian: A Stochastic Model over Sequential Syntactic Input.

    Science.gov (United States)

    Hoekstra, Eric; Versloot, Arjen

    2016-03-01

    Interference Frisian (IF) is a variety of Frisian, spoken by mostly younger speakers, which is heavily influenced by Dutch. IF exhibits all six logically possible word orders in a cluster of three verbs. This phenomenon has been researched by Koeneman and Postma (2006), who argue for a parameter theory, which leaves frequency differences between the various orders unexplained. Rejecting Koeneman and Postma's parameter theory, but accepting their conclusion that Dutch (and Frisian) data are input for the grammar of IF, we argue that the word order preferences of speakers of IF are determined by frequency and similarity. More specifically, three-verb clusters in IF are sensitive to: their linear left-to-right similarity to two-verb and three-verb clusters in Frisian and in Dutch; and the (estimated) frequency of two- and three-verb clusters in Frisian and Dutch. The model is shown to work best if Dutch and Frisian, and two- and three-verb clusters, have equal impact factors. If different impact factors are taken, the model's predictions do not change substantially, testifying to its robustness. This analysis is in line with recent ideas that the sequential nature of human speech is more important to syntactic processes than commonly assumed, and that less burden need be put on the hierarchical dimension of syntactic structure.

  18. A fuzzy model for exploiting customer requirements

    Directory of Open Access Journals (Sweden)

    Zahra Javadirad

    2016-09-01

    Full Text Available Nowadays, quality function deployment (QFD) is one of the total quality management tools, in which customers' views and requirements are perceived and, using various techniques, the production requirements and operations are improved. The QFD department, after identification and analysis of the competitors, takes customers' feedback to meet the customers' demands for the products compared with the competitors. In this study, a comprehensive model for assessing the importance of the customer requirements in the products or services of an organization is proposed. The proposed study uses linguistic variables, as a more comprehensive approach, to increase the precision of the evaluations. The importance of these requirements specifies the strengths and weaknesses of the organization in meeting the requirements relative to competitors. The results of these experiments show that the proposed method performs better than the other methods.
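
    A minimal sketch of the linguistic-variable idea: customer ratings expressed as triangular fuzzy numbers, averaged, and defuzzified into crisp importance weights. The term set and the ratings are invented for illustration, not the paper's model:

    ```python
    # Linguistic importance terms mapped to triangular fuzzy numbers (l, m, u).
    TERMS = {
        "low":       (0.0, 0.0, 0.3),
        "medium":    (0.2, 0.5, 0.8),
        "high":      (0.6, 0.8, 1.0),
        "very high": (0.8, 1.0, 1.0),
    }

    def fuzzy_mean(ratings):
        """Average several triangular fuzzy numbers component-wise."""
        n = len(ratings)
        return tuple(sum(t[i] for t in ratings) / n for i in range(3))

    def defuzzify(tfn):
        """Centroid of a triangular fuzzy number."""
        l, m, u = tfn
        return (l + m + u) / 3.0

    # Three customers rate two hypothetical requirements in linguistic terms.
    reqs = {
        "easy to clean": ["high", "very high", "high"],
        "low price":     ["medium", "high", "medium"],
    }
    weights = {r: defuzzify(fuzzy_mean([TERMS[t] for t in votes]))
               for r, votes in reqs.items()}
    print(weights)  # "easy to clean" outranks "low price" here
    ```

    Using fuzzy numbers rather than a fixed 1-to-5 scale lets the vagueness of each linguistic term carry through the averaging before a single crisp weight is extracted.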

  19. Updates to Model Algorithms & Inputs for the Biogenic Emissions Inventory System (BEIS) Model

    Science.gov (United States)

    We have developed new canopy emission algorithms and land use data for BEIS. Simulations with BEIS v3.4 and these updates in CMAQ v5.0.2 are compared to the Model of Emissions of Gases and Aerosols from Nature (MEGAN), and the simulations are evaluated against observatio...

  20. Evaluating the effects of model structure and meteorological input data on runoff modelling in an alpine headwater basin

    Science.gov (United States)

    Schattan, Paul; Bellinger, Johannes; Förster, Kristian; Schöber, Johannes; Huttenlau, Matthias; Kirnbauer, Robert; Achleitner, Stefan

    2017-04-01

    Modelling water resources in snow-dominated mountainous catchments is challenging due to both short concentration times and a highly variable contribution of snow melt in space and time from complex terrain. A number of model setups exist, ranging from physically based models to conceptual models which do not attempt to represent the natural processes in a physically meaningful way. Within the flood forecasting system for the Tyrolean Inn River, two serially linked hydrological models with differing process representations are used. Non-glacierized catchments are modelled by a semi-distributed water balance model (HQsim) based on the HRU approach. A fully distributed energy and mass balance model (SES), purpose-built for snow- and ice-melt, is used for highly glacierized headwater catchments. Previous work revealed uncertainties and limitations within the models' structures regarding (i) the representation of snow processes in HQsim, (ii) the runoff routing of SES, and (iii) the spatial resolution of the meteorological input data in both models. To overcome these limitations, a "strengths driven" model coupling is applied. Instead of linking the models serially, a vertical one-way coupling of models has been implemented. The fully distributed snow modelling of SES is combined with the semi-distributed HQsim structure, allowing the coupled model to benefit from the soil and runoff routing schemes in HQsim. A Monte Carlo based modelling experiment was set up to evaluate the resulting differences in the runoff prediction due to the improved model coupling and a refined spatial resolution of the meteorological forcing. The experiment design follows a gradient of spatial discretisation of hydrological processes and meteorological forcing data, with a total of six different model setups for the alpine headwater basin of the Fagge River in the Tyrolean Alps. In general, all setups show a good performance for this particular basin. 
It is therefore planned to include other basins with differing

  1. Dynamic Modeling of a Roller Chain Drive System Considering the Flexibility of Input Shaft

    Institute of Scientific and Technical Information of China (English)

    XU Lixin; YANG Yuhu; CHANG Zongyu; LIU Jianping

    2010-01-01

    Roller chain drives are widely used in various high-speed, high-load power transmission applications, but their complex dynamic behavior is not well researched. Most studies have focused only on the analysis of the vibration of the chain's tight span, and in these models many factors are neglected. In this paper, a mathematical model is developed to calculate the dynamic response of a roller chain drive working at constant or variable speed. In the model, the complete chain transmission with two sprockets and the necessary tight and slack spans is used. The effect of the flexibility of the input shaft on the dynamic response of the chain system is taken into account, as well as the elastic deformation in the chain, the inertial forces, gravity and the torque on the driven shaft. The nonlinear equations of motion are derived using Lagrange equations and solved numerically. Given the center distance and the two initial position angles of teeth on the driving and driven sprockets corresponding to the first seating roller on each side of the tight span, the dynamics of any roller chain drive with two sprockets and two spans can be analyzed by the procedure. Finally, a numerical example is given and the validity of the procedure is demonstrated by analyzing the dynamic behavior of a typical roller chain drive. The model can well simulate the transverse and longitudinal vibration of the chain spans and the torsional vibration of the sprockets. This study provides an effective method for analyzing the dynamic characteristics of chain drive systems.

  2. Including operational data in QMRA model: development and impact of model inputs.

    Science.gov (United States)

    Jaidi, Kenza; Barbeau, Benoit; Carrière, Annie; Desjardins, Raymond; Prévost, Michèle

    2009-03-01

    A Monte Carlo model, based on the Quantitative Microbial Risk Analysis (QMRA) approach, has been developed to assess the relative risks of infection associated with the presence of Cryptosporidium and Giardia in drinking water. The impact of various approaches for modelling the initial parameters of the model on the final risk assessments is evaluated. The Monte Carlo simulations that we performed showed that the occurrence of parasites in raw water was best described by a mixed distribution: log-normal for concentrations above the detection limit (DL) and uniform for concentrations below it; the choice of approach for modelling these inputs affected the predicted risks significantly. The mean annual risks for conventional treatment are: 1.97E-03 (removal credit adjusted by log parasite = log spores), 1.58E-05 (log parasite = 1.7 x log spores) or 9.33E-03 (regulatory credits based on the turbidity measurement in filtered water). Using full-scale validated SCADA data, the simplified calculation of CT performed at the plant was shown to largely underestimate the risk relative to a more detailed CT calculation, which takes into consideration the downtime and system failure events identified at the plant (1.46E-03 vs. 3.93E-02 for the mean risk).
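
The mixed-distribution Monte Carlo approach described above can be sketched as follows. All distribution parameters, the 3-log treatment credit, the daily consumption volume and the exponential dose-response coefficient `r` are illustrative assumptions, not values from the paper, and the helper names are mine:

```python
import random
import math

def sample_concentration(p_detect=0.6, log_mean=-1.0, log_sd=0.8, dl=0.1):
    """Sample a parasite concentration (organisms/L) from a mixed distribution:
    log-normal above the detection limit, uniform on [0, DL] below it.
    All parameter values here are illustrative."""
    if random.random() < p_detect:
        return max(dl, 10 ** random.gauss(log_mean, log_sd))
    return random.uniform(0.0, dl)

def annual_infection_risk(n_days=365, log_removal=3.0, volume_l=1.5, r=0.018):
    """Daily dose -> exponential dose-response -> annual risk of infection."""
    p_no_infection = 1.0
    for _ in range(n_days):
        conc = sample_concentration() * 10 ** (-log_removal)  # after treatment
        dose = conc * volume_l
        p_day = 1.0 - math.exp(-r * dose)                     # daily risk
        p_no_infection *= (1.0 - p_day)
    return 1.0 - p_no_infection

random.seed(1)
risks = [annual_infection_risk() for _ in range(200)]
mean_risk = sum(risks) / len(risks)
print(f"mean annual risk: {mean_risk:.2e}")
```

Reported risk estimates such as those above would then be summary statistics of many such simulated annual risks.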

  3. A pre-calibration approach to select optimum inputs for hydrological models in data-scarce regions

    Science.gov (United States)

    Tarawneh, Esraa; Bridge, Jonathan; Macdonald, Neil

    2016-10-01

    This study uses the Soil and Water Assessment Tool (SWAT) model to quantitatively compare available input datasets in a data-poor dryland environment (Wala catchment, Jordan; 1743 km2). Eighteen scenarios combining the best available land-use, soil and weather datasets (1979-2002) are considered to construct SWAT models. Data include local observations and global reanalysis data products. Uncalibrated model outputs are used to assess the variability in model performance derived from input data sources only. Model performance against discharge and sediment load data is compared using r2, Nash-Sutcliffe efficiency (NSE), root mean square error to standard deviation ratio (RSR) and percent bias (PBIAS). The NSE statistic varies from 0.56 to -12 against observed discharge and from 0.79 to -85 against observed sediment data for the best- and poorest-performing scenarios, respectively. Global weather inputs yield considerable improvements over discontinuous local datasets, whilst local soil inputs perform considerably better than global-scale mapping. The methodology provides a rapid, transparent and transferable approach to aid selection of the most robust suite of input data.
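
The four performance statistics named here are standard and can be computed directly from paired observed/simulated series; the numbers below are made up for illustration:

```python
import math

def nse(obs, sim):
    """Nash-Sutcliffe efficiency: 1 - SS_res / SS_tot."""
    mean_obs = sum(obs) / len(obs)
    ss_res = sum((o - s) ** 2 for o, s in zip(obs, sim))
    ss_tot = sum((o - mean_obs) ** 2 for o in obs)
    return 1.0 - ss_res / ss_tot

def rsr(obs, sim):
    """RMSE divided by the standard deviation of the observations."""
    mean_obs = sum(obs) / len(obs)
    rmse = math.sqrt(sum((o - s) ** 2 for o, s in zip(obs, sim)) / len(obs))
    sd_obs = math.sqrt(sum((o - mean_obs) ** 2 for o in obs) / len(obs))
    return rmse / sd_obs

def pbias(obs, sim):
    """Percent bias of the simulation relative to the observations."""
    return 100.0 * sum(o - s for o, s in zip(obs, sim)) / sum(obs)

obs = [12.0, 8.5, 20.1, 5.3, 15.7]   # invented discharge observations
sim = [10.8, 9.2, 18.5, 6.1, 14.9]   # invented model output
print(nse(obs, sim), rsr(obs, sim), pbias(obs, sim))
```

Note that with these definitions RSR = sqrt(1 - NSE), so the two statistics rank scenarios identically; PBIAS adds the direction of the systematic error.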

  4. Hydrological and sedimentological modeling of the Okavango Delta, Botswana, using remotely sensed input and calibration data

    Science.gov (United States)

    Milzow, C.; Kgotlhang, L.; Kinzelbach, W.; Bauer-Gottwein, P.

    2006-12-01

    medium-term. The Delta's size and limited accessibility make direct data acquisition on the ground difficult. Remote sensing methods are the most promising source of spatially distributed data for both model input and calibration. Besides ground data, METEOSAT and NOAA data are used for precipitation and evapotranspiration inputs, respectively. The topography is taken from a study by Gumbricht et al. (2004), in which the SRTM shuttle mission data are refined using remotely sensed vegetation indexes. The aquifer thickness was determined with an aeromagnetic survey. For calibration, the simulated flooding patterns are compared to patterns derived from satellite imagery: recent ENVISAT ASAR and older NOAA AVHRR scenes. The final objective is to better understand the hydrological and hydraulic aspects of this complex ecosystem and eventually to predict the consequences of human interventions. The model will provide a tool for decision makers in assessing the impact of possible upstream dams and water abstraction scenarios.

  5. Discharge simulations performed with a hydrological model using bias corrected regional climate model input

    Directory of Open Access Journals (Sweden)

    S. C. van Pelt

    2009-12-01

    Studies have demonstrated that precipitation in the Northern Hemisphere mid-latitudes has increased in recent decades and that this trend is likely to continue. This will influence the discharge of the river Meuse. The use of bias correction methods is important when the effect of precipitation change on river discharge is studied. The objective of this paper is to investigate the effect of two different bias correction methods on output from a Regional Climate Model (RCM) simulation. In this study a Regional Atmospheric Climate Model (RACMO2) run is used, forced by ECHAM5/MPIOM under the SRES-A1B emission scenario, at 25 km horizontal resolution. The RACMO2 runs contain a systematic precipitation bias, to which two bias correction methods are applied. The first corrects the wet-day fraction and wet-day average (WD bias correction); the second corrects the mean and coefficient of variation (MV bias correction). The WD bias correction corrects well for the average, but it appears that too many successive precipitation days are removed by this correction. The second method corrects the average less well, but reproduces the temporal precipitation pattern better. Subsequently, discharge was calculated using the RACMO2 output as forcing for the HBV-96 hydrological model. A large difference was found between the simulated discharges of the uncorrected RACMO2 run, the WD bias-corrected run and the MV bias-corrected run. These results show the importance of an appropriate bias correction.
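
A minimal sketch of a wet-day (WD) style correction, under the simplifying assumption that the correction is fitted and applied to the same short series; the precipitation values and observed targets below are invented, and the function name is mine:

```python
def wd_correct(model_p, obs_wet_fraction, obs_wet_mean):
    """Simplified wet-day bias correction: choose a threshold so that the
    modelled wet-day fraction matches the observed one, zero out days below
    it, then rescale the remaining wet days to the observed wet-day mean."""
    n_wet = round(obs_wet_fraction * len(model_p))
    if n_wet == 0:
        return [0.0] * len(model_p)
    threshold = sorted(model_p, reverse=True)[n_wet - 1]
    trimmed = [p if p >= threshold else 0.0 for p in model_p]
    wet = [p for p in trimmed if p > 0.0]
    scale = obs_wet_mean / (sum(wet) / len(wet))
    return [p * scale for p in trimmed]

# Invented 10-day modelled series (mm/day) with a typical drizzle bias.
model_p = [0.2, 3.1, 0.0, 5.4, 0.1, 1.2, 0.0, 2.5, 0.3, 4.0]
corrected = wd_correct(model_p, obs_wet_fraction=0.4, obs_wet_mean=6.0)
wet_days = [p for p in corrected if p > 0.0]
print(len(wet_days), sum(wet_days) / len(wet_days))
```

Zeroing the drizzle days is what can remove too many successive precipitation days, the drawback noted in the abstract; an MV-style correction instead transforms the amounts to match the mean and coefficient of variation without changing the wet-day count.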

  6. Assessing Spatial and Attribute Errors of Input Data in Large National Datasets for use in Population Distribution Models

    Energy Technology Data Exchange (ETDEWEB)

    Patterson, Lauren A [ORNL; Urban, Marie L [ORNL; Myers, Aaron T [ORNL; Bhaduri, Budhendra L [ORNL; Bright, Eddie A [ORNL; Coleman, Phil R [ORNL

    2007-01-01

    Geospatial technologies and digital data have developed and disseminated rapidly in conjunction with increasing computing performance and internet availability. The ability to store and transmit large datasets has encouraged the development of national datasets in geospatial format. National datasets are used by numerous agencies for analysis and modeling purposes because these datasets are standardized and are considered to be of acceptable accuracy. At Oak Ridge National Laboratory, a national population model incorporating multiple ancillary variables was developed, and one of the required inputs is a school database. This paper examines inaccuracies present within two national school datasets, TeleAtlas North America (TANA) and the National Center for Education Statistics (NCES). Schools are an important component of the population model because they serve as locations containing dense clusters of vulnerable populations. It is therefore essential to validate the quality of the school input data, which was made possible by increasing national coverage of high-resolution imagery. Schools were also chosen because a 'real-world' representation of K-12 schools for the Philadelphia School District was produced, thereby enabling 'ground-truthing' of the national datasets. Analyses found that the national datasets were neither standardized nor complete, containing 76 to 90% of existing schools. Enrollment values also showed poor temporal accuracy: 89% failed to match 2003 data. Spatial rectification was required for 87% of the NCES points, of which 58% of the errors were attributed to the geocoding process. Lastly, it was found that combining the two national datasets produced a more useful and accurate dataset. Acknowledgment: Prepared by Oak Ridge National Laboratory, P.O. Box 2008, Oak Ridge, Tennessee 37831-6285, managed by UT-Battelle, LLC for the U. S. Department of Energy under contract no

  7. Usefulness of non-linear input-output models for economic impact analyses in tourism and recreation

    NARCIS (Netherlands)

    Klijs, J.; Peerlings, J.H.M.; Heijman, W.J.M.

    2015-01-01

    In tourism and recreation management it is still common practice to apply traditional input–output (IO) economic impact models, despite their well-known limitations. In this study the authors analyse the usefulness of applying a non-linear input–output (NLIO) model, in which price-induced input subs

  8. The Use of an Eight-Step Instructional Model to Train School Staff in Partner-Augmented Input

    Science.gov (United States)

    Senner, Jill E.; Baud, Matthew R.

    2017-01-01

    An eight-step instruction model was used to train a self-contained classroom teacher, speech-language pathologist, and two instructional assistants in partner-augmented input, a modeling strategy for teaching augmentative and alternative communication use. With the exception of a 2-hr training session, instruction primarily was conducted during…

  9. User Requirements and Domain Model Engineering

    NARCIS (Netherlands)

    Specht, Marcus; Glahn, Christian

    2006-01-01

    Specht, M., & Glahn, C. (2006). User requirements and domain model engineering. Presentation at International Workshop in Learning Networks for Lifelong Competence Development. March, 30-31, 2006. Sofia, Bulgaria: TENCompetence Conference. Retrieved June 30th, 2006, from http://dspace.learningnetwor

  11. Ecological input-output modeling for embodied resources and emissions in Chinese economy 2005

    Science.gov (United States)

    Chen, Z. M.; Chen, G. Q.; Zhou, J. B.; Jiang, M. M.; Chen, B.

    2010-07-01

    For the embodiment of natural resources and environmental emissions in the Chinese economy in 2005, a biophysical balance model is constructed by extending the economic input-output table into an ecological one that integrates the economy with its various environmental driving forces. The included resource flows into the primary resource sectors and environmental emission flows from the primary emission sectors belong to seven categories: energy resources in terms of fossil fuels, hydropower and nuclear energy, biomass, and other sources; freshwater resources; greenhouse gas emissions in terms of CO2, CH4, and N2O; industrial wastes in terms of waste water, waste gas, and waste solid; exergy in terms of fossil fuel, biological, mineral, and environmental resources; and solar emergy and cosmic emergy in terms of climate resources, soil, fossil fuels, and minerals. The resulting database of embodiment intensities and sectoral embodiments of natural resources and environmental emissions has essential implications for systems ecology and ecological economics in general, and for global climate change in particular.
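
Embodiment accounting of this kind rests on the balance that each sector's embodied intensity equals its direct resource use per unit output plus the intensities embodied in its intermediate inputs. A minimal sketch with an invented two-sector economy (the function name and all numbers are mine):

```python
def embodiment_intensity(A, e, iters=200):
    """Solve the embodiment balance eps_j = e_j + sum_i eps_i * A[i][j]
    by fixed-point iteration, which converges when the technical
    coefficient matrix A is productive (column sums < 1).
    A[i][j]: input from sector i per unit output of sector j.
    e[j]: direct resource use per unit output of sector j."""
    n = len(e)
    eps = list(e)
    for _ in range(iters):
        eps = [e[j] + sum(eps[i] * A[i][j] for i in range(n)) for j in range(n)]
    return eps

# Invented two-sector technical coefficients and direct resource uses.
A = [[0.2, 0.3],
     [0.1, 0.4]]
e = [1.0, 0.5]
eps = embodiment_intensity(A, e)
print(eps)
```

The fixed point is equivalent to the familiar Leontief-inverse form eps = e(I - A)^-1; embodied flows for a sector are then the intensity times its total output.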

  12. Regional Agricultural Input-Output Model and Countermeasure for Production and Income Increase of Farmers in Southern Xinjiang,China

    Institute of Scientific and Technical Information of China (English)

    2010-01-01

    The status of agricultural input and output in southern Xinjiang, China is introduced: agricultural input is lacking, the level of agricultural modernization is low, excessive fertilizer use causes serious environmental damage, water resources are short, ecological balance is under tremendous pressure, and the economic and social benefits of agricultural production are insignificant; agriculture remains a weak industry, yet the agricultural economy is the economic subject of southern Xinjiang, whose overall economic development is backward. Taking the Aksu area as an example, and based on input and output data for the years 2002-2007, an input-output model for regional agriculture in southern Xinjiang is established by principal component analysis. DPS software is used to solve the model, and Eviews software is adopted to revise and test it, in order to analyze and evaluate the economic significance of the results and to provide additional explanations of the model. Since agricultural economic output is at present seriously restricted in southern Xinjiang, the following countermeasures are put forward: adjusting the structure of agricultural land, improving the utilization ratio of land, increasing agricultural input, realizing agricultural modernization, rationally utilizing water resources, maintaining eco-environmental balance, enhancing awareness of agricultural insurance, minimizing risk and loss, pursuing industrialization of characteristic agricultural products, and transferring surplus labor force.

  13. Parametric modeling of DSC-MRI data with stochastic filtration and optimal input design versus non-parametric modeling.

    Science.gov (United States)

    Kalicka, Renata; Pietrenko-Dabrowska, Anna

    2007-03-01

    In this paper MRI measurements are used for the assessment of brain tissue perfusion and other features and functions of the brain (cerebral blood flow - CBF, cerebral blood volume - CBV, mean transit time - MTT). Perfusion is an important indicator of tissue viability and functioning, since in pathological tissue blood flow and vascular and tissue structure are altered with respect to normal tissue. MRI enables diagnosing diseases at an early stage of their course. The parametric and non-parametric approaches to the identification of MRI models are presented and compared. The non-parametric modeling adopts gamma variate functions. A parametric three-compartmental catenary model, based on the general kinetic model, is also proposed. The parameters of the models are estimated on the basis of experimental data. The goodness of fit of the gamma variate and three-compartmental models to the data, and the accuracy of the parameter estimates, are compared. Kalman filtering, smoothing the measurements, was adopted to improve the estimate accuracy of the parametric model. Parametric modeling gives a better fit and better parameter estimates than non-parametric modeling, and allows an insight into the functioning of the system. To further improve accuracy, optimal experiment design of the input signal was performed.
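
The gamma variate C(t) = K * t^alpha * exp(-t/beta) is the standard functional form for bolus-passage curves in DSC-MRI. A toy fit by coarse grid search over (alpha, beta), with K then obtained by linear least squares, might look like this; the data are synthetic and noise-free, not real measurements, and the fitting scheme is a simple stand-in for the paper's estimation procedure:

```python
import math

def gamma_variate(t, K, alpha, beta, t0=0.0):
    """Gamma variate function commonly used for DSC-MRI bolus curves."""
    if t <= t0:
        return 0.0
    return K * (t - t0) ** alpha * math.exp(-(t - t0) / beta)

# Synthetic curve with known parameters (illustrative values).
true = dict(K=2.0, alpha=1.5, beta=2.0)
ts = [0.5 * i for i in range(1, 40)]
data = [gamma_variate(t, **true) for t in ts]

# Coarse grid search over (alpha, beta); K follows by linear least squares
# because the model is linear in K for fixed (alpha, beta).
best = None
for alpha in [0.5 + 0.25 * i for i in range(10)]:
    for beta in [0.5 + 0.25 * i for i in range(10)]:
        basis = [gamma_variate(t, 1.0, alpha, beta) for t in ts]
        K = sum(b * d for b, d in zip(basis, data)) / sum(b * b for b in basis)
        sse = sum((K * b - d) ** 2 for b, d in zip(basis, data))
        if best is None or sse < best[0]:
            best = (sse, K, alpha, beta)
print(best)
```

CBV-type quantities then follow from integrals of the fitted curve; in practice a gradient-based optimizer replaces the grid search.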

  14. Urban pluvial flood prediction: a case study evaluating radar rainfall nowcasts and numerical weather prediction models as model inputs.

    Science.gov (United States)

    Thorndahl, Søren; Nielsen, Jesper Ellerbæk; Jensen, David Getreuer

    2016-12-01

    Flooding produced by high-intensity local rainfall and drainage system capacity exceedance can have severe impacts in cities. In order to prepare cities for these types of flood events - especially in the future climate - it is valuable to be able to simulate them numerically, both historically and in real time. There is a rather untested potential in real-time prediction of urban floods. In this paper, radar rainfall observations with different spatial and temporal resolutions, radar nowcasts with 0-2 h leadtime, and numerical weather models with leadtimes up to 24 h are used as inputs to an integrated flood and drainage systems model, in order to investigate the relative differences between inputs in predicting future floods. The system is tested on the small town of Lystrup in Denmark, which was flooded in 2012 and 2014. Results show that it is possible to generate detailed flood maps in real time from high-resolution radar rainfall data, but forecast performance in predicting floods at leadtimes beyond half an hour is rather limited.

  15. Modeling requirements for in situ vitrification

    Energy Technology Data Exchange (ETDEWEB)

    MacKinnon, R.J.; Mecham, D.C.; Hagrman, D.L.; Johnson, R.W.; Murray, P.E.; Slater, C.E.; Marwil, E.S.; Weaver, R.A.; Argyle, M.D.

    1991-11-01

    This document outlines the requirements for the model being developed at the INEL which will provide analytical support for the ISV technology assessment program. The model includes representations of the electric potential field, thermal transport with melting, gas and particulate release, vapor migration, off-gas combustion and process chemistry. The modeling objectives are to (1) help determine the safety of the process by assessing the air and surrounding soil radionuclide and chemical pollution hazards, the nuclear criticality hazard, and the explosion and fire hazards, (2) help determine the suitability of the ISV process for stabilizing the buried wastes involved, and (3) help design laboratory and field tests and interpret results therefrom.

  16. Rigorous model-based uncertainty quantification with application to terminal ballistics, part I: Systems with controllable inputs and small scatter

    Science.gov (United States)

    Kidane, A.; Lashgari, A.; Li, B.; McKerns, M.; Ortiz, M.; Owhadi, H.; Ravichandran, G.; Stalzer, M.; Sullivan, T. J.

    2012-05-01

    This work is concerned with establishing the feasibility of a data-on-demand (DoD) uncertainty quantification (UQ) protocol based on concentration-of-measure inequalities. Specific aims are to establish the feasibility of the protocol and its basic properties, including the tightness of the predictions afforded by the protocol. The assessment is based on an application to terminal ballistics and a specific system configuration consisting of 6061-T6 aluminum plates struck by spherical S-2 tool steel projectiles at ballistic impact speeds. The system's inputs are the plate thickness and impact velocity, and the perforation area is chosen as the sole performance measure of the system. The objective of the UQ analysis is to certify the lethality of the projectile, i.e., that the projectile perforates the plate with high probability over a prespecified range of impact velocities and plate thicknesses. The net outcome of the UQ analysis is an M/U ratio, or confidence factor, of 2.93, indicative of a small probability of no perforation of the plate over its entire operating range. The high confidence (>99.9%) in the successful operation of the system afforded by the analysis, and the small number of tests (40) required for the determination of the modeling-error diameter, establish the feasibility of the DoD UQ protocol as a rigorous yet practical approach for model-based certification of complex systems.
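
The quoted confidence factor can be related to a failure-probability bound through a McDiarmid-type concentration-of-measure inequality. The exact bound used in the DoD UQ protocol may differ in its constants and conditioning, so the sketch below only illustrates the generic form P[failure] <= exp(-2 (M/U)^2):

```python
import math

def failure_probability_bound(cf):
    """Generic McDiarmid-type concentration-of-measure upper bound on the
    probability of failure, given the confidence factor CF = M/U (design
    margin over uncertainty diameter). Illustrative, not the paper's exact bound."""
    return math.exp(-2.0 * cf ** 2)

bound = failure_probability_bound(2.93)
print(f"P[failure] <= {bound:.2e}")
```

For CF = 2.93 this generic bound falls far below 10^-3, which is consistent with the >99.9% confidence quoted in the abstract.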

  17. RUSLE2015: Modelling soil erosion at continental scale using high resolution input layers

    Science.gov (United States)

    Panagos, Panos; Borrelli, Pasquale; Meusburger, Katrin; Poesen, Jean; Ballabio, Cristiano; Lugato, Emanuele; Montanarella, Luca; Alewell, Christine

    2016-04-01

    Soil erosion by water is one of the most widespread forms of soil degradation in Europe. On the occasion of the 2015 celebration of the International Year of Soils, the European Commission's Joint Research Centre (JRC) published RUSLE2015, a modified modelling approach for assessing soil erosion in Europe using the best available input data layers. The objective of the recent assessment performed with RUSLE2015 was to improve our knowledge and understanding of soil erosion by water across the European Union and to accentuate the differences and similarities between regions and countries beyond national borders and nationally adapted models. RUSLE2015 maximizes the use of available homogeneous, updated, pan-European datasets (LUCAS topsoil, LUCAS survey, GAEC, Eurostat crops, Eurostat management practices, REDES, DEM 25 m, CORINE, European Soil Database) and uses the best-suited approach at the European scale for modelling soil erosion. The collaboration of JRC with many scientists around Europe and numerous prominent European universities and institutes resulted in an improved assessment of the individual risk factors (rainfall erosivity, soil erodibility, cover-management, topography and support practices) and a final harmonized European soil erosion map at high resolution. The mean soil loss rate in the European Union's erosion-prone lands (agricultural, forest and semi-natural areas) was found to be 2.46 t ha-1 yr-1, resulting in a total soil loss of 970 Mt annually, equivalent to removing 1 m of soil from an area the size of Berlin. According to the RUSLE2015 model, approximately 12.7% of arable land in the European Union is estimated to suffer from moderate to high erosion (>5 t ha-1 yr-1). This equates to an area of 140,373 km2, roughly the surface area of Greece (Environmental Science & Policy, 54, 438-447; 2015). Even the mean erosion rate outstrips the mean soil formation rate (<1.4 t ha-1 yr-1). The recent RUSLE2015
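
RUSLE itself is a simple multiplicative model over the factors listed above, A = R * K * LS * C * P; a one-line sketch with invented factor values (not RUSLE2015 inputs):

```python
def rusle_soil_loss(R, K, LS, C, P):
    """RUSLE: mean annual soil loss A (t ha-1 yr-1) as the product of
    rainfall erosivity R, soil erodibility K, slope length/steepness LS,
    cover-management C and support practice P."""
    return R * K * LS * C * P

# Illustrative factor values for a single cell, not data from the paper.
A = rusle_soil_loss(R=700.0, K=0.03, LS=1.2, C=0.15, P=0.9)
print(f"{A:.2f} t/ha/yr")
```

The continental assessment amounts to evaluating this product cell by cell over the harmonized pan-European factor layers.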

  18. Uncertainties in the magnitudes of inputs of trace metals to the North Sea. Implications for water quality models

    Energy Technology Data Exchange (ETDEWEB)

    Tappin, A.D.; Burton, J.D. [The University, Dept. of Oceanography, Highfield, Southampton (United Kingdom); Millward, G.E.; Statham, P.J. [Univ. of Plymouth, Dept. of Environmental Sciences, Plymouth (United Kingdom)

    1996-12-31

    Numerical modelling is a powerful tool for studying the concentrations, distributions and fates of contaminants in the North Sea, and for the prediction of water quality. Its usefulness, for example with respect to developing source-reduction strategies, depends on how closely the forcing functions and biogeochemical processes that significantly influence contaminant transport and cycling can be reflected in the model. One major consideration is the completeness and quality of data on inputs, which constitute major forcing functions. If estimates of the magnitudes of contaminant inputs are poorly constrained, then model results may become of qualitative value only, rather than quantitative. In this paper, a water quality model for trace metals in the southern North Sea is used to examine how predicted winter concentrations and distributions of cadmium, copper and lead vary in response to the incorporation into the model of uncertainties in inputs. The model is largely driven by data associated with the Natural Environment Research Council North Sea Project (NERC NSP). The range in predicted concentrations of both the dissolved and particulate phases of these metals in a given grid cell, following incorporation of maximum and minimum inputs, is relatively narrow, even when the range in inputs is large. For dissolved copper and lead, and particulate copper, there is reasonable agreement between simulated concentrations and those observed during a winter NSP cruise. For dissolved cadmium, and particulate cadmium and lead, concentrations of the right order are predicted, although the detailed scatter in the observations is not. Significant reductions in river inputs of total lead and copper lead to predictions that water column concentrations of dissolved lead and copper decrease only in the coastal zone, and then by only a small fraction. (au) 49 refs.

  19. Modeling the BOD of Danube River in Serbia using spatial, temporal, and input variables optimized artificial neural network models.

    Science.gov (United States)

    Šiljić Tomić, Aleksandra N; Antanasijević, Davor Z; Ristić, Mirjana Đ; Perić-Grujić, Aleksandra A; Pocajt, Viktor V

    2016-05-01

    This paper describes the application of artificial neural network models for the prediction of biological oxygen demand (BOD) levels in the Danube River. Eighteen regularly monitored water quality parameters at 17 stations on the river stretch passing through Serbia were used as input variables. The optimization of the model was performed in three consecutive steps: firstly, the spatial influence of a monitoring station was examined; secondly, the monitoring period necessary to reach satisfactory performance was determined; and lastly, correlation analysis was applied to evaluate the relationship among water quality parameters. Root-mean-square error (RMSE) was used to evaluate model performance in the first two steps, whereas in the last step, multiple statistical indicators of performance were utilized. As a result, two optimized models were developed, a general regression neural network model (labeled GRNN-1) that covers the monitoring stations from the Danube inflow to the city of Novi Sad and a GRNN model (labeled GRNN-2) that covers the stations from the city of Novi Sad to the border with Romania. Both models demonstrated good agreement between the predicted and actually observed BOD values.

  20. Evaluation of Uncertainty in Constituent Input Parameters for Modeling the Fate of RDX

    Science.gov (United States)

    2015-07-01

    accurately estimated, such as solubility, while others, such as degradation rates, are often far more uncertain. Prior to using improved methods for...meet this purpose, a previous application of TREECS™ was used to evaluate parameter sensitivity and the effects of highly uncertain inputs for...than others. One of the most uncertain inputs in this application is the loading rate (grams/year) of unexploded RDX residue. A value of 1.5 kg/yr was

  1. Development of ANFIS models for air quality forecasting and input optimization for reducing the computational cost and time

    Science.gov (United States)

    Prasad, Kanchan; Gorai, Amit Kumar; Goyal, Pramila

    2016-03-01

    This study aims to develop an adaptive neuro-fuzzy inference system (ANFIS) for forecasting daily air pollution concentrations of five air pollutants [sulphur dioxide (SO2), nitrogen dioxide (NO2), carbon monoxide (CO), ozone (O3) and particulate matter (PM10)] in the atmosphere of a megacity (Howrah). Air pollution in the city is rising in parallel with economic growth, so observing, forecasting and controlling air pollution become increasingly important because of the health impacts. ANFIS serves as a basis for constructing a set of fuzzy IF-THEN rules, with appropriate membership functions to generate the stipulated input-output pairs. The ANFIS predictor considers the values of meteorological factors (pressure, temperature, relative humidity, dew point, visibility, wind speed, and precipitation) and the previous day's pollutant concentrations in different combinations as inputs to predict the one-day-ahead and same-day air pollution concentrations. The concentration values of the five air pollutants and seven meteorological parameters for Howrah during the period 2009 to 2011 were used for development of the ANFIS model. Collinearity tests were conducted to eliminate redundant input variables, and a forward selection (FS) method was used to select different subsets of input variables. Application of collinearity tests and FS techniques reduces the number of input variables and subsets, which reduces computational cost and time. The performances of the models were evaluated on the basis of four statistical indices (coefficient of determination, normalized mean square error, index of agreement, and fractional bias).
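
A greedy forward selection of input variables can be sketched with univariate least-squares fits on successive residuals; the variables and target below are synthetic, the helper names are mine, and the study's actual FS procedure and ANFIS models are of course more involved:

```python
import math

def _fit_line(x, y):
    """Least-squares coefficients (a, b) for y ~ a + b*x."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxx = sum((xi - mx) ** 2 for xi in x)
    b = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) / sxx if sxx else 0.0
    return my - b * mx, b

def rmse_after_fit(x, y):
    """RMSE remaining after the univariate fit y ~ a + b*x."""
    a, b = _fit_line(x, y)
    return math.sqrt(sum((yi - (a + b * xi)) ** 2 for xi, yi in zip(x, y)) / len(x))

def forward_select(inputs, target, k=2):
    """Greedy forward selection: repeatedly add the input whose univariate
    fit to the current residuals yields the lowest RMSE, then refit."""
    selected, resid = [], list(target)
    for _ in range(k):
        scores = {name: rmse_after_fit(x, resid)
                  for name, x in inputs.items() if name not in selected}
        best = min(scores, key=scores.get)
        selected.append(best)
        a, b = _fit_line(inputs[best], resid)
        resid = [ri - (a + b * xi) for xi, ri in zip(inputs[best], resid)]
    return selected

# Synthetic example: the target depends on temperature and wind only.
temp = [float(i) for i in range(1, 11)]
wind = [3.0, 1.0, 4.0, 1.0, 5.0, 9.0, 2.0, 6.0, 5.0, 3.0]
humidity = [7.0] * 10  # constant, carries no information
target = [2.0 * t + 0.5 * w for t, w in zip(temp, wind)]
chosen = forward_select({"temp": temp, "wind": wind, "humidity": humidity}, target)
print(chosen)
```

The selection correctly picks the informative variables and skips the constant one, mirroring how FS prunes redundant meteorological inputs before model training.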

  2. A method of aggregating heterogeneous subgrid land cover input data for multi-scale urban parameterization within atmospheric models

    Science.gov (United States)

    Shaffer, S. R.

    2015-12-01

    A method is proposed for representing grid-scale heterogeneous development density in urban climate models from probability density functions of sub-grid-resolution observed data. Derived values are evaluated in relation to normalized Shannon entropy to provide guidance in assessing model input data. Urban fractions for dominant and mosaic urban class contributions are estimated by combining analysis of 30-meter-resolution National Land Cover Database 2006 data products for continuous impervious surface area and categorical land cover. The method aims at reducing model error through improved urban parameterization and representation of the observations employed as input data. The multi-scale variation of parameter values is demonstrated for several methods of utilizing input. The method provides multi-scale and spatial guidance for determining where parameterization schemes may misrepresent the heterogeneity of input data, along with motivation for employing mosaic techniques based upon assessment of input data. The proposed method has wider potential for geographic application and complements data products that focus on characterizing central business districts. The method yields an urban fraction dependent upon resolution and class partition scheme, based upon improved parameterization of observed data, which provides one means of influencing simulation prediction at various aggregated grid scales.
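
Normalized Shannon entropy over land-cover class fractions is straightforward to compute; the sketch below normalizes by the log of the number of classes present, which is one of several possible conventions, and the fractions are invented:

```python
import math

def normalized_shannon_entropy(fractions):
    """H / H_max over the classes present in a grid cell: 0 when a single
    class dominates completely, 1 when all present classes have equal shares."""
    present = [f for f in fractions if f > 0.0]
    if len(present) < 2:
        return 0.0
    h = -sum(f * math.log(f) for f in present)
    return h / math.log(len(present))

# Two invented grid cells: evenly mixed vs. strongly dominated.
print(normalized_shannon_entropy([0.25, 0.25, 0.25, 0.25]))
print(normalized_shannon_entropy([0.94, 0.02, 0.02, 0.02]))
```

Cells near 0 are safely handled by a dominant-class parameterization; cells near 1 are where a mosaic treatment of sub-grid classes is likely to matter.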

  3. STABILITY ANALYSIS OF THE DYNAMIC INPUT-OUTPUT SYSTEM

    Institute of Scientific and Technical Information of China (English)

    Guo Chonghui; Tang Huanwen

    2002-01-01

    The dynamic input-output model is well known in economic theory and practice. In this paper, the asymptotic stability and balanced growth solutions of the dynamic input-output system are considered. Under some natural assumptions, which do not require the technical coefficient matrix to be indecomposable, it is proved that the dynamic input-output system is not asymptotically stable and that the closed dynamic input-output model has a balanced growth solution.

  4. SKIRT: the design of a suite of input models for Monte Carlo radiative transfer simulations

    CERN Document Server

    Baes, Maarten

    2015-01-01

    The Monte Carlo method is the most popular technique for performing radiative transfer simulations in a general 3D geometry. The algorithms behind, and acceleration techniques for, Monte Carlo radiative transfer are discussed extensively in the literature, and many different Monte Carlo codes are publicly available. By contrast, the design of a suite of components that can be used for the distribution of sources and sinks in radiative transfer codes has received very little attention. The availability of such models, with different degrees of complexity, has many benefits. For example, they can serve as toy models to test new physical ingredients, or as parameterised models for inverse radiative transfer fitting. For 3D Monte Carlo codes, this requires algorithms to efficiently generate random positions from 3D density distributions. We describe the design of a flexible suite of components for the Monte Carlo radiative transfer code SKIRT. The design is based on a combination of basic building blocks (which can...

  5. Information Models, Data Requirements, and Agile Data Curation

    Science.gov (United States)

    Hughes, John S.; Crichton, Dan; Ritschel, Bernd; Hardman, Sean; Joyner, Ron

    2015-04-01

    The Planetary Data System's next generation system, PDS4, is an example of the successful use of an ontology-based Information Model (IM) to drive the development and operations of a data system. In traditional systems engineering, requirements or statements about what is necessary for the system are collected and analyzed for input into the design stage of systems development. With the advent of big data, the requirements associated with data have begun to dominate, and an ontology-based information model can be used to provide a formalized and rigorous set of data requirements. These requirements address not only the usual issues of data quantity, quality, and disposition but also data representation, integrity, provenance, context, and semantics. In addition, the use of these data requirements during system development has many characteristics of Agile Curation as proposed by Young et al. [Taking Another Look at the Data Management Life Cycle: Deconstruction, Agile, and Community, AGU 2014], namely adaptive planning, evolutionary development, early delivery, continuous improvement, and rapid and flexible response to change. For example, customers can be satisfied through early and continuous delivery of system software and services that are configured directly from the information model. This presentation will describe the PDS4 architecture and its three principal parts: the ontology-based Information Model (IM), the federated registries and repositories, and the REST-based service layer for search, retrieval, and distribution. The development of the IM will be highlighted with special emphasis on knowledge acquisition, the impact of the IM on development and operations, and the use of shared ontologies at multiple governance levels to promote system interoperability and data correlation.

  6. Impact of input data uncertainty on environmental exposure assessment models : A case study for electromagnetic field modelling from mobile phone base stations

    NARCIS (Netherlands)

    Beekhuizen, Johan; Heuvelink, Gerard B M; Huss, Anke; Bürgi, Alfred; Kromhout, Hans; Vermeulen, Roel

    2014-01-01

    BACKGROUND: With the increased availability of spatial data and computing power, spatial prediction approaches have become a standard tool for exposure assessment in environmental epidemiology. However, such models are largely dependent on accurate input data. Uncertainties in the input data can the

  7. Impact of input data uncertainty on environmental exposure assessment models: A case study for electromagnetic field modelling from mobile phone base stations

    NARCIS (Netherlands)

    Beekhuizen, J.; Heuvelink, G.B.M.; Huss, A.; Burgi, A.; Kromhout, H.; Vermeulen, R.

    2014-01-01

    Background: With the increased availability of spatial data and computing power, spatial prediction approaches have become a standard tool for exposure assessment in environmental epidemiology. However, such models are largely dependent on accurate input data. Uncertainties in the input data can the

  8. Sensitivity of Global Modeling Initiative chemistry and transport model simulations of radon-222 and lead-210 to input meteorological data

    Directory of Open Access Journals (Sweden)

    D. B. Considine

    2005-01-01

    Full Text Available We have used the Global Modeling Initiative chemistry and transport model to simulate the radionuclides radon-222 and lead-210 using three different sets of input meteorological information: 1. Output from the Goddard Space Flight Center Global Modeling and Assimilation Office GEOS-STRAT assimilation; 2. Output from the Goddard Institute for Space Studies GISS II' general circulation model; and 3. Output from the National Center for Atmospheric Research MACCM3 general circulation model. We intercompare these simulations with observations to determine the variability resulting from the different meteorological data used to drive the model, and to assess the agreement of the simulations with observations at the surface and in the upper troposphere/lower stratosphere region. The observational datasets we use are primarily climatologies developed from multiple years of observations. In the upper troposphere/lower stratosphere region, climatological distributions of lead-210 were constructed from ~25 years of aircraft and balloon observations compiled into the US Environmental Measurements Laboratory RANDAB database. Taken as a whole, no simulation stands out as superior to the others. However, the simulation driven by the NCAR MACCM3 meteorological data compares better with lead-210 observations in the upper troposphere/lower stratosphere region. Comparisons of simulations made with and without convection show that the role played by convective transport and scavenging in the three simulations differs substantially. These differences may have implications for evaluation of the importance of very short-lived halogen-containing species on stratospheric halogen budgets.

  9. Comparison of different snow model formulations and their responses to input uncertainties in the Upper Indus Basin

    Science.gov (United States)

    Pritchard, David; Fowler, Hayley; Forsythe, Nathan; O'Donnell, Greg; Rutter, Nick; Bardossy, Andras

    2017-04-01

    Snow and glacier melt in the mountainous Upper Indus Basin (UIB) sustain water supplies, irrigation networks, hydropower production and ecosystems in extensive downstream lowlands. Understanding hydrological and cryospheric sensitivities to climatic variability and change in the basin is therefore critical for local, national and regional water resources management. Assessing these sensitivities using numerical modelling is challenging, due to limitations in the quality and quantity of input and evaluation data, as well as uncertainties in model structures and parameters. This study explores how these uncertainties in inputs and process parameterisations affect distributed simulations of ablation in the complex climatic setting of the UIB. The role of model forcing uncertainties is explored using combinations of local observations, remote sensing and reanalysis - including the high resolution High Asia Refined Analysis - to generate multiple realisations of spatiotemporal model input fields. Forcing a range of model structures with these input fields then provides an indication of how different ablation parameterisations respond to uncertainties and perturbations in climatic drivers. Model structures considered include simple, empirical representations of melt processes through to physically based, full energy balance models with multi-physics options for simulating snowpack evolution (including an adapted version of FSM). Analysing model input and structural uncertainties in this way provides insights for methodological choices in climate sensitivity assessments of data-sparse, high mountain catchments. Such assessments are key for supporting water resource management in these catchments, particularly given the potential complications of enhanced warming through elevation effects or, in the case of the UIB, limited understanding of how and why local climate change signals differ from broader patterns.

  10. Impacts of the representation of riverine freshwater input in the community earth system model

    Science.gov (United States)

    Tseng, Yu-heng; Bryan, Frank O.; Whitney, Michael M.

    2016-09-01

    The impacts of the representation of riverine freshwater input on the simulated ocean state are investigated through comparison of a suite of experiments with the Community Earth System Model (CESM). The aspects of river and estuary processes investigated include lateral spreading of runoff, runoff contribution to the surface buoyancy flux within the K-Profile Parameterization (KPP), the use of a local salinity in the virtual salt flux (VSF) formulation, and the vertical redistribution of runoff. The horizontal runoff spreading distribution plays an important role in the regional salinity distribution and significantly changes the vertical stratification and mixing. When runoff is considered to be a contribution to the surface buoyancy flux, the calculation of turbulent length and velocity scales in the KPP can be significantly impacted near larger discharge rivers, resulting in local surface salinity changes of up to 12 ppt. Using the local surface salinity instead of a globally constant reference salinity in the conversion of riverine freshwater flux to VSF can reduce biases in the simulated salinity near river mouths but leads to drift in global mean salinity. This is remedied through a global correction approach. We also explore the sensitivity to the vertical redistribution of runoff, which partially mimics the impacts of vertical mixing process within estuaries and coastal river plumes. The impacts of the vertical redistribution of runoff are largest when the runoff effective mixing depth is comparable with the mixed layer depth, resulting from the enhanced vertical mixing and the increase of the available potential energy. The impacts in all sensitivity experiments are predominantly local, but the regional circulation can advect the influences downstream.
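
The virtual salt flux (VSF) issue above follows from the standard conversion VSF = -S · F_fw. A small sketch with illustrative numbers (not CESM values) shows why a globally constant reference salinity overstates the salt removed near a fresh river plume:

```python
# Virtual-salt-flux sketch: the same freshwater discharge implies very
# different salt fluxes depending on which salinity is used in the
# conversion. Numbers are illustrative, not taken from the CESM runs.
def virtual_salt_flux(F_fw, S):
    """F_fw: freshwater flux (kg m^-2 s^-1); S: salinity (g/kg).
    Returns the equivalent salt flux (g m^-2 s^-1, negative = freshening)."""
    return -S * F_fw

F = 1.0e-4                                 # strong riverine freshwater input
global_vsf = virtual_salt_flux(F, 34.7)    # globally constant reference salinity
local_vsf  = virtual_salt_flux(F, 5.0)     # local surface salinity in the plume
bias = global_vsf - local_vsf              # spurious extra freshening at the mouth
```

With these numbers the global-reference formulation removes roughly seven times more salt than the local-salinity one, which is the kind of near-mouth salinity bias the correction approach addresses.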

  11. InGaAs-based mm-wave integrated subharmonic mixer exhibiting low input power requirement and low noise characteristics

    Energy Technology Data Exchange (ETDEWEB)

    Marsh, P.; Hong, K.; Pavlidis, D. [Univ. of Michigan, Ann Arbor, MI (United States)

    1996-12-31

The authors have designed and fabricated an integrated InGaAs-based subharmonic mixer which showed a single-sideband conversion loss of 10.5 dB and a double-sideband noise temperature of 1,164 K at a very low LO power level of 1.1 mW. Also demonstrated was the feasibility of integrating mixer diodes with antenna and other interconnect metal structures on InP. Simulations indicate the potential for performance improvements, with L_c decreasing to 9.6 dB and T_mix decreasing to approximately 700 K for anode sizes of 1 µm. A significant advantage of the InGaAs subharmonic mixers is that their P_LO requirements are approximately a factor of 0.2 to 0.37 of that required by GaAs technology. Another advantage of InGaAs mixer technology is that high-performance three-terminal device technology, available on InP, could potentially be used to integrate LNA front ends and IF amplifiers with the mixers, to form high-performance monolithic millimeter-wave receivers.

  12. Understanding requirements via natural language information modeling

    Energy Technology Data Exchange (ETDEWEB)

    Sharp, J.K.; Becker, S.D.

    1993-07-01

    Information system requirements that are expressed as simple English sentences provide a clear understanding of what is needed between system specifiers, administrators, users, and developers of information systems. The approach used to develop the requirements is the Natural-language Information Analysis Methodology (NIAM). NIAM allows the processes, events, and business rules to be modeled using natural language. The natural language presentation enables the people who deal with the business issues that are to be supported by the information system to describe exactly the system requirements that designers and developers will implement. Computer prattle is completely eliminated from the requirements discussion. An example is presented that is based upon a section of a DOE Order involving nuclear materials management. Where possible, the section is analyzed to specify the process(es) to be done, the event(s) that start the process, and the business rules that are to be followed during the process. Examples, including constraints, are developed. The presentation steps through the modeling process and shows where the section of the DOE Order needs clarification, extensions or interpretations that could provide a more complete and accurate specification.

  13. Initiation of male sperm-transfer behavior in Caenorhabditis elegans requires input from the ventral nerve cord

    Directory of Open Access Journals (Sweden)

    Gharib Shahla

    2006-08-01

    Full Text Available Abstract Background The Caenorhabditis elegans male exhibits a stereotypic behavioral pattern when attempting to mate. This behavior has been divided into the following steps: response, backing, turning, vulva location, spicule insertion, and sperm transfer. We and others have begun in-depth analyses of all these steps in order to understand how complex behaviors are generated. Here we extend our understanding of the sperm-transfer step of male mating behavior. Results Based on observation of wild-type males and on genetic analysis, we have divided the sperm-transfer step of mating behavior into four sub-steps: initiation, release, continued transfer, and cessation. To begin to understand how these sub-steps of sperm transfer are regulated, we screened for ethylmethanesulfonate (EMS-induced mutations that cause males to transfer sperm aberrantly. We isolated an allele of unc-18, a previously reported member of the Sec1/Munc-18 (SM family of proteins that is necessary for regulated exocytosis in C. elegans motor neurons. Our allele, sy671, is defective in two distinct sub-steps of sperm transfer: initiation and continued transfer. By a series of transgenic site-of-action experiments, we found that motor neurons in the ventral nerve cord require UNC-18 for the initiation of sperm transfer, and that UNC-18 acts downstream or in parallel to the SPV sensory neurons in this process. In addition to this neuronal requirement, we found that non-neuronal expression of UNC-18, in the male gonad, is necessary for the continuation of sperm transfer. Conclusion Our division of sperm-transfer behavior into sub-steps has provided a framework for the further detailed analysis of sperm transfer and its integration with other aspects of mating behavior. By determining the site of action of UNC-18 in sperm-transfer behavior, and its relation to the SPV sensory neurons, we have further defined the cells and tissues involved in the generation of this behavior. We

  14. Output from Statistical Predictive Models as Input to eLearning Dashboards

    Directory of Open Access Journals (Sweden)

    Marlene A. Smith

    2015-06-01

    Full Text Available We describe how statistical predictive models might play an expanded role in educational analytics by giving students automated, real-time information about what their current performance means for eventual success in eLearning environments. We discuss how an online messaging system might tailor information to individual students using predictive analytics. The proposed system would be data-driven and quantitative; e.g., a message might furnish the probability that a student will successfully complete the certificate requirements of a massive open online course. Repeated messages would prod underperforming students and alert instructors to those in need of intervention. Administrators responsible for accreditation or outcomes assessment would have ready documentation of learning outcomes and actions taken to address unsatisfactory student performance. The article’s brief introduction to statistical predictive models sets the stage for a description of the messaging system. Resources and methods needed to develop and implement the system are discussed.
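
The proposed data-driven messaging can be sketched as follows. The logistic model, its coefficients, and the message thresholds are invented for illustration and are not taken from the article:

```python
# Hedged sketch of predictive-model output feeding an eLearning dashboard:
# a (hypothetical) fitted logistic model turns current performance into a
# completion probability, and thresholds pick the message shown to the
# student and, for the lowest band, to the instructor.
from math import exp

B0, B1, B2 = -4.0, 0.05, 0.8       # assumed fitted coefficients, for illustration

def completion_probability(quiz_avg, weeks_active):
    z = B0 + B1 * quiz_avg + B2 * weeks_active
    return 1.0 / (1.0 + exp(-z))

def dashboard_message(quiz_avg, weeks_active):
    p = completion_probability(quiz_avg, weeks_active)
    if p < 0.4:
        return p, "At risk: flagged for instructor intervention."
    if p < 0.7:
        return p, "On the bubble: review this week's materials."
    return p, "On track for the certificate."

p, msg = dashboard_message(quiz_avg=80.0, weeks_active=3)
```

Repeated calls as new performance data arrive would give the "repeated prod" behaviour described above, and the returned probabilities double as documentation for outcomes assessment.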

  15. Logic models to predict continuous outputs based on binary inputs with an application to personalized cancer therapy

    Science.gov (United States)

    Knijnenburg, Theo A.; Klau, Gunnar W.; Iorio, Francesco; Garnett, Mathew J.; McDermott, Ultan; Shmulevich, Ilya; Wessels, Lodewyk F. A.

    2016-01-01

    Mining large datasets using machine learning approaches often leads to models that are hard to interpret and not amenable to the generation of hypotheses that can be experimentally tested. We present ‘Logic Optimization for Binary Input to Continuous Output’ (LOBICO), a computational approach that infers small and easily interpretable logic models of binary input features that explain a continuous output variable. Applying LOBICO to a large cancer cell line panel, we find that logic combinations of multiple mutations are more predictive of drug response than single gene predictors. Importantly, we show that the use of the continuous information leads to robust and more accurate logic models. LOBICO implements the ability to uncover logic models around predefined operating points in terms of sensitivity and specificity. As such, it represents an important step towards practical application of interpretable logic models. PMID:27876821
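
LOBICO itself casts the search for an optimal logic model as an integer linear program; as a rough illustration of the underlying idea only, the sketch below brute-forces single-feature and pairwise AND/OR models on made-up data (not the cancer cell-line panel) and scores each candidate by a two-level fit to the continuous output:

```python
# Brute-force stand-in for LOBICO's optimization: enumerate the simplest
# logic models of binary features and keep the one whose two-level
# prediction best explains the continuous response (sum of squared errors).
from itertools import combinations

def fit_two_level(z, y):
    """Given binary predictions z, fit one mean per class; return the SSE."""
    y1 = [yi for zi, yi in zip(z, y) if zi]
    y0 = [yi for zi, yi in zip(z, y) if not zi]
    if not y1 or not y0:
        return float("inf")
    m1, m0 = sum(y1) / len(y1), sum(y0) / len(y0)
    return sum((yi - (m1 if zi else m0)) ** 2 for zi, yi in zip(z, y))

def best_logic_model(X, y):
    """X: list of binary feature columns; returns (description, SSE)."""
    candidates = [(f"x{i}", col) for i, col in enumerate(X)]
    for (i, a), (j, b) in combinations(enumerate(X), 2):
        candidates.append((f"x{i} AND x{j}", [p and q for p, q in zip(a, b)]))
        candidates.append((f"x{i} OR x{j}", [p or q for p, q in zip(a, b)]))
    return min(((name, fit_two_level(z, y)) for name, z in candidates),
               key=lambda t: t[1])

# Invented data: drug response is low only when mutations 0 AND 1 co-occur.
x0 = [1, 1, 0, 0, 1, 0, 1, 0]
x1 = [1, 0, 1, 0, 1, 1, 1, 0]
y  = [0.1, 0.9, 1.0, 0.9, 0.2, 1.1, 0.1, 1.0]
name, sse = best_logic_model([x0, x1], y)
```

The AND combination wins here, mirroring the paper's finding that logic combinations of mutations can outpredict single-gene markers; the real method additionally handles larger formulas and operating-point constraints via the ILP.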

  16. Logic models to predict continuous outputs based on binary inputs with an application to personalized cancer therapy

    Science.gov (United States)

    Knijnenburg, Theo A.; Klau, Gunnar W.; Iorio, Francesco; Garnett, Mathew J.; McDermott, Ultan; Shmulevich, Ilya; Wessels, Lodewyk F. A.

    2016-11-01

    Mining large datasets using machine learning approaches often leads to models that are hard to interpret and not amenable to the generation of hypotheses that can be experimentally tested. We present ‘Logic Optimization for Binary Input to Continuous Output’ (LOBICO), a computational approach that infers small and easily interpretable logic models of binary input features that explain a continuous output variable. Applying LOBICO to a large cancer cell line panel, we find that logic combinations of multiple mutations are more predictive of drug response than single gene predictors. Importantly, we show that the use of the continuous information leads to robust and more accurate logic models. LOBICO implements the ability to uncover logic models around predefined operating points in terms of sensitivity and specificity. As such, it represents an important step towards practical application of interpretable logic models.

  17. Measurement method for urine puddle depth in dairy cow houses as input variable for ammonia emission modelling

    NARCIS (Netherlands)

    Snoek, J.W.; Stigter, J.D.; Ogink, Nico; Groot Koerkamp, P.W.G.

    2015-01-01

    Dairy cow houses are a major contributor to ammonia (NH3) emission in many European countries. To understand and predict NH3 emissions from cubicle dairy cow houses a mechanistic model was developed and a sensitivity analysis was performed to assess the contribution to NH3 emission of each input var

  18. Measurement method for urine puddle depth in dairy cow houses as input variable for ammonia emission modelling

    NARCIS (Netherlands)

    Snoek, J.W.; Stigter, J.D.; Ogink, Nico; Groot Koerkamp, P.W.G.

    2015-01-01

    Dairy cow houses are a major contributor to ammonia (NH3) emission in many European countries. To understand and predict NH3 emissions from cubicle dairy cow houses a mechanistic model was developed and a sensitivity analysis was performed to assess the contribution to NH3 emission of each input

  19. Pre-Mission Input Requirements to Enable Successful Sample Collection by A Remote Field/EVA Team

    Science.gov (United States)

    Cohen, B. A.; Lim, D. S. S.; Young, K. E.; Brunner, A.; Elphic, R. E.; Horne, A.; Kerrigan, M. C.; Osinski, G. R.; Skok, J. R.; Squyres, S. W.; Saint-Jacques, D.; Heldmann, J. L.

    2016-01-01

The FINESSE (Field Investigations to Enable Solar System Science and Exploration) team, part of the Solar System Exploration Research Virtual Institute (SSERVI), is a field-based research program aimed at generating strategic knowledge in preparation for human and robotic exploration of the Moon, near-Earth asteroids, Phobos and Deimos, and beyond. In contrast to other technology-driven NASA analog studies, the FINESSE WCIS activity is science-focused and, moreover, is sampling-focused, with the explicit intent to return the best samples for geochronology studies in the laboratory. We used the FINESSE field excursion to the West Clearwater Lake Impact Structure (WCIS) as an opportunity to test factors related to sampling decisions. We examined the in situ sample characterization and real-time decision-making process of the astronauts, with a guiding hypothesis that pre-mission training that included detailed background information on the analytical fate of a sample would better enable future astronauts to select samples that best meet science requirements. We conducted three tests of this hypothesis over several days in the field. Our investigation was designed to document processes, tools and procedures for crew sampling of planetary targets. This was not meant to be a blind, controlled test of crew efficacy, but rather an effort to explicitly recognize the relevant variables that enter into sampling protocol and to develop recommendations for crew and backroom training in future endeavors.

  20. Uncertainty in photochemical modeling results from using seasonal estimates vs day-specific emissions inputs for utility sources in an urban airshed in the northeast

    Energy Technology Data Exchange (ETDEWEB)

    Arunachalam, S.; Georgopoulos, P.G. [Rutgers, the State Univ. of New Jersey, Piscataway, NJ (United States)

    1996-12-31

Design and development of robust ozone control strategies through photochemical modeling studies depend to a large extent on the quality of the emissions inputs that are used. A key issue in the quality of the emissions inventory is the choice between using day-specific information versus seasonal estimates for emissions from major utilities in the modeling domain of interest. Emissions of NOx from electric utilities constitute more than a third of the total NOx emissions from all sources in a typical urban modeling domain, and hence it is important that the emissions from these sources are characterized as accurately as possible in the photochemical model. Since a considerable amount of resources is required to develop regional or urban-level emissions inventories for modeling purposes, one has to accept the level of detail that can be incorporated in a given modeling inventory and try to develop optimal control strategies based on those inputs. The sensitivity of the model to the differences in emissions inputs mentioned above is examined in the New Jersey-Philadelphia-Delaware Valley Urban Airshed Model State Implementation Plan (SIP) application for two ozone episodes that occurred in the Northeastern US: July 6-8, 1988 and July 18-20, 1991. Day-specific emissions information was collected for a major portion of the elevated point sources within the domain for these two episodes, and various metrics, besides the daily maximum one-hour averaged ozone predictions, are compared from model predictions for the two cases. Such comparative studies will bring into focus the presence of a weekend effect, if any, and differences between weekday and weekend emissions can also be tested with the model, using the same meteorology. Understanding the impact of this difference will lead to a better design of sensitivity-uncertainty simulations and can lead to the development of robust emission control strategies as well.

  1. The input and output management of solid waste using DEA models: A case study at Jengka, Pahang

    Science.gov (United States)

    Mohamed, Siti Rosiah; Ghazali, Nur Fadzrina Mohd; Mohd, Ainun Hafizah

    2017-08-01

Data Envelopment Analysis (DEA) as a tool for obtaining performance indices has been used extensively across many kinds of organizations. Improving the efficiency of Decision Making Units (DMUs) can be impractical because some inputs and outputs are uncontrollable, and in certain situations this produces weak efficiency scores that often reflect the impact of the operating environment. Based on data from Alam Flora Sdn. Bhd. Jengka, the researchers aim to determine the efficiency of solid waste management (SWM) in the town of Jengka, Pahang, using the CCR-I (input-oriented) and CCR-O (output-oriented) models of DEA and a duality formulation with averaged input and output vectors. Three input variables (collection length in metres, collection time per week in hours, and number of garbage trucks) and two output variables (collection frequency and total solid waste collected in kilograms) are analyzed. In conclusion, only three of the 23 roads are efficient, achieving an efficiency score of 1; the other 20 roads are managed inefficiently.
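
The efficiency scores described above come from a linear program. A minimal input-oriented CCR sketch in envelopment form, with invented road data rather than the Alam Flora figures, is:

```python
# Input-oriented CCR DEA efficiency via the envelopment LP:
#   min theta  s.t.  sum_j lam_j * x_j <= theta * x_0,
#                    sum_j lam_j * y_j >= y_0,  lam_j >= 0.
# The road data below are made up for illustration (3 inputs, 2 outputs).
import numpy as np
from scipy.optimize import linprog

def ccr_input_efficiency(X, Y, j0):
    """Efficiency of DMU j0. X: (n_dmu, n_in), Y: (n_dmu, n_out)."""
    n, m = X.shape
    _, s = Y.shape
    # Decision variables: [theta, lam_1 .. lam_n]; minimise theta.
    c = np.r_[1.0, np.zeros(n)]
    # Inputs:  sum_j lam_j x_ji - theta * x_j0,i <= 0
    A_in = np.c_[-X[j0].reshape(m, 1), X.T]
    # Outputs: -sum_j lam_j y_jr <= -y_j0,r
    A_out = np.c_[np.zeros((s, 1)), -Y.T]
    res = linprog(c, A_ub=np.vstack([A_in, A_out]),
                  b_ub=np.r_[np.zeros(m), -Y[j0]],
                  bounds=[(0, None)] * (n + 1), method="highs")
    return res.fun

# Rows = roads: [collection length m, hours/week, trucks] -> [collections, kg]
X = np.array([[3000.0, 10.0, 2.0], [6000.0, 20.0, 4.0], [3000.0, 12.0, 2.0]])
Y = np.array([[3.0, 900.0], [3.0, 900.0], [3.0, 900.0]])
scores = [ccr_input_efficiency(X, Y, j) for j in range(3)]
```

Road 1 uses twice every input of road 0 for the same outputs, so its score is 0.5; roads 0 and 2 land on the frontier with a score of 1, which is the sense in which the abstract's "efficient roads" achieve a score of exactly 1.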

  2. A method for the identification of state space models from input and output measurements

    Directory of Open Access Journals (Sweden)

    David Di Ruscio

    1995-07-01

    Full Text Available In this paper we present a simple and general algorithm for the combined deterministic stochastic realization problem directly from known input and output time series. The solution to the pure deterministic as well as the pure stochastic realization problem are special cases of the method presented.
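
The abstract does not spell the algorithm out; the classical Ho-Kalman realization below illustrates the pure deterministic special case it mentions, recovering (A, B, C) from impulse-response (Markov) parameters:

```python
# Ho-Kalman deterministic realization sketch (a classical algorithm, used
# here as a stand-in for the combined method, whose details the abstract
# does not give): factor the Hankel matrix of Markov parameters by SVD,
# then read off the state-space matrices.
import numpy as np

def ho_kalman(markov, order):
    """Recover (A, B, C) from Markov parameters h_k = C A^(k-1) B."""
    n = len(markov) // 2
    H0 = np.array([[markov[i + j] for j in range(n)] for i in range(n)])
    H1 = np.array([[markov[i + j + 1] for j in range(n)] for i in range(n)])
    U, s, Vt = np.linalg.svd(H0)
    sq = np.sqrt(s[:order])
    Obs = U[:, :order] * sq          # observability factor
    Con = (Vt[:order, :].T * sq).T   # controllability factor
    A = np.linalg.pinv(Obs) @ H1 @ np.linalg.pinv(Con)
    return A, Con[:, :1], Obs[:1, :]

# Impulse response of the scalar system x+ = 0.5 x + u, y = x:
h = [0.5 ** k for k in range(8)]     # h_1, h_2, ... = 1, 0.5, 0.25, ...
A, B, C = ho_kalman(h, order=1)
```

From input-output time series rather than an impulse response, the Markov parameters would first be estimated (e.g. by least squares), which is where subspace methods like the one in the paper take over.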

  3. Utilizing Physical Input-Output Model to Inform Nitrogen related Ecosystem Services

    Science.gov (United States)

    Here we describe the development of nitrogen PIOTs for the midwestern US state of Illinois with large inputs of nitrogen from agriculture and industry. The PIOTs are used to analyze the relationship between regional economic activities and ecosystem services in order to identify...

  4. Impact of Infralimbic Inputs on Intercalated Amygdala Neurons: A Biophysical Modeling Study

    Science.gov (United States)

    Li, Guoshi; Amano, Taiju; Pare, Denis; Nair, Satish S.

    2011-01-01

    Intercalated (ITC) amygdala neurons regulate fear expression by controlling impulse traffic between the input (basolateral amygdala; BLA) and output (central nucleus; Ce) stations of the amygdala for conditioned fear responses. Previously, stimulation of the infralimbic (IL) cortex was found to reduce fear expression and the responsiveness of Ce…

  5. Finding identifiable parameter combinations in nonlinear ODE models and the rational reparameterization of their input-output equations.

    Science.gov (United States)

    Meshkat, Nicolette; Anderson, Chris; Distefano, Joseph J

    2011-09-01

    When examining the structural identifiability properties of dynamic system models, some parameters can take on an infinite number of values and yet yield identical input-output data. These parameters and the model are then said to be unidentifiable. Finding identifiable combinations of parameters with which to reparameterize the model provides a means for quantitatively analyzing the model and computing solutions in terms of the combinations. In this paper, we revisit and explore the properties of an algorithm for finding identifiable parameter combinations using Gröbner Bases and prove useful theoretical properties of these parameter combinations. We prove a set of M algebraically independent identifiable parameter combinations can be found using this algorithm and that there exists a unique rational reparameterization of the input-output equations over these parameter combinations. We also demonstrate application of the procedure to a nonlinear biomodel.
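
A toy version of the idea, assuming the familiar textbook case (not the paper's worked example) where the input-output coefficients are c1 = k1 + k2 and c2 = k1·k2: the combinations are globally identifiable while the individual parameters are determined only up to permutation:

```python
# Identifiable-combinations sketch with sympy. The exhaustive-summary
# equations set model coefficients equal to observed values; a Groebner
# basis (lex order) exposes the algebraic relations, and solving for
# concrete coefficient values shows the permutation ambiguity.
import sympy as sp

k1, k2 = sp.symbols("k1 k2", positive=True)
c1, c2 = sp.symbols("c1 c2", positive=True)

# Groebner basis of {k1 + k2 - c1, k1*k2 - c2} in lex order (k1 > k2):
# the eliminant k2**2 - c1*k2 + c2 shows k2 is fixed only up to the two
# roots of a quadratic, i.e. only the symmetric combinations are unique.
G = sp.groebner([k1 + k2 - c1, k1 * k2 - c2], k1, k2, order="lex")

# Observed coefficients c1 = 5, c2 = 6 admit two parameter solutions:
sols = sp.solve([k1 + k2 - 5, k1 * k2 - 6], [k1, k2])
```

Reparameterizing the model in terms of (c1, c2), as the paper's rational-reparameterization result guarantees, removes this ambiguity entirely.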

  6. Evaluating the efficiency of municipalities in collecting and processing municipal solid waste: A shared input DEA-model

    Energy Technology Data Exchange (ETDEWEB)

    Rogge, Nicky, E-mail: Nicky.Rogge@hubrussel.be [Hogeschool-Universiteit Brussel (HUBrussel), Center for Business Management Research (CBMR), Warmoesberg 26, 1000 Brussels (Belgium); Katholieke Universiteit Leuven (KULeuven), Faculty of Business and Economics, Naamsestraat 69, 3000 Leuven (Belgium); De Jaeger, Simon [Katholieke Universiteit Leuven (KULeuven), Faculty of Business and Economics, Naamsestraat 69, 3000 Leuven (Belgium); Hogeschool-Universiteit Brussel (HUBrussel), Center for Economics and Corporate Sustainability (CEDON), Warmoesberg 26, 1000 Brussels (Belgium)

    2012-10-15

    Highlights: ► Complexity in local waste management calls for more in-depth efficiency analysis. ► Shared-input Data Envelopment Analysis can provide a solution. ► Considerable room for the Flemish municipalities to improve their cost efficiency. - Abstract: This paper proposes an adjusted 'shared-input' version of the popular efficiency measurement technique Data Envelopment Analysis (DEA) that enables evaluating municipal waste collection and processing performance in settings in which one input (waste costs) is shared among the treatment efforts for multiple municipal solid waste fractions. The main advantage of this version of DEA is that it provides not only an estimate of a municipality's overall cost efficiency but also estimates of its cost efficiency in the treatment of the different fractions of municipal solid waste (MSW). To illustrate the practical usefulness of the shared-input DEA model, we apply it to data on 293 municipalities in Flanders, Belgium, for the year 2008.

  7. Using Multi-input-layer Wavelet Neural Network to Model Product Quality of Continuous Casting Furnace and Hot Rolling Mill

    Institute of Scientific and Technical Information of China (English)

    Huanqin Li; Jie Cheng; Baiwu Wan

    2004-01-01

    A new wavelet neural network architecture with multiple input layers is proposed and implemented for modeling a class of large-scale industrial processes. Because these processes are very complicated, the number of technological parameters that determine final product quality is quite large, and these parameters do not act at the same time but work in different procedures, conventional feed-forward neural networks cannot model this class of problems efficiently. The network presented in this paper has several input layers arranged according to the sequence of work procedures in large-scale industrial production processes. The performance of such networks is analyzed, and the network is applied to model the steel plate quality of a continuous casting furnace and hot rolling mill. Simulation results indicate that the developed methodology is competent and shows good prospects for this class of problems.
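
The paper's exact equations are not given in the abstract; the numpy sketch below (invented dimensions and weights) illustrates a multi-input-layer wavelet network in the stated spirit: one input layer per work procedure, merging into a shared hidden layer with a Mexican-hat wavelet activation:

```python
# Forward pass of a hypothetical two-branch wavelet network: casting-stage
# and rolling-stage parameters enter through separate input layers, are
# concatenated, and pass through a wavelet layer with per-neuron dilation
# (a) and translation (b). All sizes and weights are illustrative.
import numpy as np

rng = np.random.default_rng(0)

def mexican_hat(x):
    # psi(x) = (1 - x^2) * exp(-x^2 / 2), a common wavelet activation
    return (1.0 - x ** 2) * np.exp(-0.5 * x ** 2)

def forward(casting_inputs, rolling_inputs, W1, W2, a, b, w_out):
    # Branch-specific linear maps, then concatenate the two procedures:
    h = np.concatenate([W1 @ casting_inputs, W2 @ rolling_inputs])
    # Shared wavelet layer:
    z = mexican_hat((h - b) / a)
    return float(w_out @ z)          # predicted plate-quality index

W1, W2 = rng.standard_normal((4, 3)), rng.standard_normal((4, 5))
a, b = np.ones(8), np.zeros(8)
w_out = rng.standard_normal(8)
quality = forward(rng.standard_normal(3), rng.standard_normal(5),
                  W1, W2, a, b, w_out)
```

Training would fit W1, W2, a, b and w_out to plant data; keeping the branches separate lets each procedure's parameters enter the model at the stage where they actually act.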

  8. The sensitivity of conduit flow models to basic input parameters: there is no need for magma trolls!

    Science.gov (United States)

    Thomas, M. E.; Neuberg, J. W.

    2012-04-01

    Many conduit flow models now exist, and some of these models are becoming extremely complicated: conducted in three dimensions and incorporating the physics of compressible three-phase fluids (magmas), intricate conduit geometries and fragmentation processes, to name but a few examples. These highly specialised models are being used to explain observations of the natural system, and there is a danger that possible explanations may be getting needlessly complex. It is coherent, for instance, to propose the involvement of sub-surface-dwelling magma trolls as an explanation for a change in a volcano's eruptive style, but assuming the simplest explanation would prevent such additions unless they were absolutely necessary. While the understanding of individual, often small-scale conduit processes is increasing rapidly, is this level of detail necessary? How sensitive are these models to small changes in the most basic of governing parameters? Can these changes be used to explain observed behaviour? Here we will examine the sensitivity of conduit flow models to changes in the melt viscosity, one of the fundamental inputs to any such model. However, even addressing this elementary issue is not straightforward. There are several viscosity models in existence; how do they differ? Can models that use different viscosity models be realistically compared? Each of these viscosity models is also heavily dependent on the magma composition and/or temperature, and how well are these variables constrained? Magma temperatures and water contents are often assumed as "ball-park" figures, and are very rarely exactly known for the periods of observation the models are attempting to explain, yet they exert a strong control on the melt viscosity. The role of both these variables will be discussed. For example, using one of the available viscosity models, a 20 K decrease in the temperature of the melt results in a greater than 100% increase in the melt viscosity. With changes of
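
The quoted temperature sensitivity is easy to reproduce with a Vogel-Fulcher-Tammann (VFT) law, log10 η = A + B/(T − C), using hypothetical constants chosen for illustration rather than the authors' actual viscosity model:

```python
# VFT viscosity sketch with assumed constants (A dimensionless; B, C in
# kelvin). These are illustrative values, not fitted silicate-melt data.
from math import log10

A, B, C = -4.5, 5000.0, 500.0

def log10_eta(T):
    """log10 of melt viscosity at temperature T (K) under the VFT law."""
    return A + B / (T - C)

T_hot, T_cold = 1100.0, 1080.0      # a 20 K cooling step
ratio = 10 ** (log10_eta(T_cold) - log10_eta(T_hot))
# ratio ~ 1.9: cooling by 20 K nearly doubles the viscosity, i.e. an
# increase on the order of the >100% figure quoted above.
```

The non-linearity of B/(T − C) is also why the same 20 K error matters far more near the glass transition than at high temperature.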

  9. Observation-Based Dissipation and Input Terms for Spectral Wave Models, with End-User Testing

    Science.gov (United States)

    2014-09-30

    Zieger, S. "Wave climate in the marginal ice zones of Arctic Seas, observations and modelling". ONR Sea State DRU project. Studies wave climate in the... Rinke, and H. Matthes, 2014: Projected changes of wind-wave activity in the Arctic Ocean. Proceedings of the 22nd IAHR International Symposium on Ice... objectives are to use new observation-based source terms for the wind input, wave-breaking (whitecapping) dissipation and swell decay in the third

  10. Coronal energy input and dissipation in a solar active region 3D MHD model

    CERN Document Server

    Bourdin, Philippe-A; Peter, Hardi

    2015-01-01

    Context. We have conducted a 3D MHD simulation of the solar corona above an active region in full scale and high resolution, which shows coronal loops, and plasma flows within them, similar to observations. Aims. We want to find the connection between the photospheric energy input by field-line braiding with the coronal energy conversion by Ohmic dissipation of induced currents. Methods. To this end we compare the coronal energy input and dissipation within our simulation domain above different fields of view, e.g. for a small loops system in the active region (AR) core. We also choose an ensemble of field lines to compare, e.g., the magnetic energy input to the heating per particle along these field lines. Results. We find an enhanced Ohmic dissipation of currents in the corona above areas that also have enhanced upwards-directed Poynting flux. These regions coincide with the regions where hot coronal loops within the AR core are observed. The coronal density plays a role in estimating the coronal temperatur...

  11. SISTEM KONTROL OTOMATIK DENGAN MODEL SINGLE-INPUT-DUAL-OUTPUT DALAM KENDALI EFISIENSI UMUR-PEMAKAIAN INSTRUMEN [Automatic Control System with a Single-Input-Dual-Output Model for Controlling Instrument Service-Life Efficiency]

    Directory of Open Access Journals (Sweden)

    S.N.M.P. Simamora

    2014-10-01

    Full Text Available An efficiency condition occurs when the ratio of used output to the total resources consumed approaches the value 1 (an absolute environment). An instrument achieves efficiency if its power output level has decreased significantly over the instrument's service life, compared to the previous condition in which the instrument was not equipped with the additional system (the proposed model improvement). The approach is even more effective if the model's inputs are used in unison to achieve a homogeneous output. In this research, an automatic control system for a single-input-dual-output model was designed and implemented, in which the sample instruments were a lamp and a fan. The source voltage used was AC (alternating current), and the system was tested using quantitative research methods and instrumentation (observation with measuring instruments). The results demonstrate that the instruments achieved significant efficiency under the single-input-dual-output model when trials were applied separately to the lamp and the fan, compared to their previous condition. The results also show that the design, as built, runs well.

  12. Sensitivity of meteorological input and soil properties in simulating aerosols (dust, PM10, and BC) using CHIMERE chemistry transport model

    Indian Academy of Sciences (India)

    Nishi Srivastava; S K Satheesh; Nadège Blond

    2014-08-01

    The objective of this study is to evaluate the ability of a European chemistry transport model, 'CHIMERE', driven by the US meteorological model MM5, to simulate aerosol concentrations [dust, PM10 and black carbon (BC)] over the Indian region. An evaluation of a meteorological event (a dust storm) and of the impact of changes in soil-related parameters and meteorological input grid resolution on these aerosol concentrations has been performed. A dust storm simulation over the Indo-Gangetic basin indicates the ability of the model to capture dust storm events. Measured (AERONET data) and simulated parameters such as aerosol optical depth (AOD) and the Angstrom exponent are used to evaluate the performance of the model in capturing the dust storm event. A sensitivity study is performed to investigate the impact of changes in soil characteristics (thickness of the soil layer in contact with air, volumetric water, and air content of the soil) and meteorological input grid resolution on the aerosol (dust, PM10, BC) distribution. Results show that soil parameters and meteorological input grid resolution have an important impact on the spatial distribution of aerosol (dust, PM10, BC) concentrations.

  13. Terrestrial ecosystem recovery - Modelling the effects of reduced acidic inputs and increased inputs of sea-salts induced by global change

    DEFF Research Database (Denmark)

    Beier, C.; Moldan, F.; Wright, R.F.

    2003-01-01

    to 3 large-scale "clean rain" experiments, the so-called roof experiments at Risdalsheia, Norway; Gardsjon, Sweden, and Klosterhede, Denmark. Implementation of the Gothenburg protocol will initiate recovery of the soils at all 3 sites by rebuilding base saturation. The rate of recovery is small...... and base saturation increases less than 5% over the next 30 years. A climate-induced increase in storm severity will increase the sea-salt input to the ecosystems. This will provide additional base cations to the soils and more than double the rate of the recovery, but also lead to strong acid pulses...... following high sea-salt inputs as the deposited base cations exchange with the acidity stored in the soil. Future recovery of soils and runoff at acidified catchments will thus depend on the amount and rate of reduction of acid deposition, and in the case of systems near the coast, the frequency...

  14. Development of a MODIS-Derived Surface Albedo Data Set: An Improved Model Input for Processing the NSRDB

    Energy Technology Data Exchange (ETDEWEB)

    Maclaurin, Galen [National Renewable Energy Lab. (NREL), Golden, CO (United States); Sengupta, Manajit [National Renewable Energy Lab. (NREL), Golden, CO (United States); Xie, Yu [National Renewable Energy Lab. (NREL), Golden, CO (United States); Gilroy, Nicholas [National Renewable Energy Lab. (NREL), Golden, CO (United States)

    2016-12-01

    A significant source of bias in the transposition of global horizontal irradiance to plane-of-array (POA) irradiance arises from inaccurate estimations of surface albedo. The current physics-based model used to produce the National Solar Radiation Database (NSRDB) relies on model estimations of surface albedo from a reanalysis climatology produced at relatively coarse spatial resolution compared to that of the NSRDB. As an input to spectral decomposition and transposition models, more accurate surface albedo data from remotely sensed imagery at finer spatial resolutions would improve accuracy in the final product. The National Renewable Energy Laboratory (NREL) developed an improved white-sky (bi-hemispherical reflectance) broadband (0.3-5.0 μm) surface albedo data set for processing the NSRDB from two existing data sets: a gap-filled albedo product and a daily snow cover product. The Moderate Resolution Imaging Spectroradiometer (MODIS) sensors onboard the Terra and Aqua satellites have provided high-quality measurements of surface albedo at 30 arc-second spatial resolution and 8-day temporal resolution since 2001. The high spatial and temporal resolutions and the temporal coverage of the MODIS sensor will allow for improved modeling of POA irradiance in the NSRDB. However, cloud and snow cover interfere with MODIS observations of ground surface albedo, and thus they require post-processing. The MODIS production team applied a gap-filling methodology to interpolate observations obscured by clouds or ephemeral snow. This approach filled pixels with ephemeral snow cover because the 8-day temporal resolution is too coarse to accurately capture the variability of snow cover and its impact on albedo estimates. However, for this project, accurate representation of daily snow cover change is important in producing the NSRDB. 
Therefore, NREL also used the Integrated Multisensor Snow and Ice Mapping System data set, which provides daily snow cover observations of the

  15. Effects of input discretization, model complexity, and calibration strategy on model performance in a data-scarce glacierized catchment in Central Asia

    Science.gov (United States)

    Tarasova, L.; Knoche, M.; Dietrich, J.; Merz, R.

    2016-06-01

    Glacierized high-mountainous catchments are often the water towers for downstream regions, and modeling is often the only available tool for assessing water resource availability in these remote areas. Nevertheless, data scarcity affects different aspects of hydrological modeling in such mountainous glacierized basins. Using the example of a poorly gauged glacierized catchment in Central Asia, we examined the effects of input discretization, model complexity, and calibration strategy on model performance. The study was conducted with the GSM-Socont model driven with climatic input from the corrected High Asia Reanalysis data set at two different discretizations. We analyze the effects of using long-term glacier volume loss, snow cover images, and interior runoff as additional calibration data. In glacierized catchments of the winter accumulation type, where the transformation of precipitation into runoff is mainly controlled by snow and glacier melt processes, the spatial discretization of precipitation tends to have less impact on simulated runoff than a correct prediction of the integral precipitation volume. Increasing model complexity by using spatially distributed input or semidistributed parameter values does not increase model performance in the Gunt catchment, as the more complex model tends to be more sensitive to errors in the input data set. In our case, better model performance and quantification of the flow components can be achieved with additional calibration data, rather than with more distributed model parameters. However, a semidistributed model better predicts the spatial patterns of snow accumulation and provides more plausible runoff predictions at the interior sites.

  16. Synaptic inputs compete during rapid formation of the calyx of Held: a new model system for neural development.

    Science.gov (United States)

    Holcomb, Paul S; Hoffpauir, Brian K; Hoyson, Mitchell C; Jackson, Dakota R; Deerinck, Thomas J; Marrs, Glenn S; Dehoff, Marlin; Wu, Jonathan; Ellisman, Mark H; Spirou, George A

    2013-08-07

    Hallmark features of neural circuit development include early exuberant innervation followed by competition and pruning to mature innervation topography. Several neural systems, including the neuromuscular junction and climbing fiber innervation of Purkinje cells, are models to study neural development in part because they establish a recognizable endpoint of monoinnervation of their targets and because the presynaptic terminals are large and easily monitored. We demonstrate here that calyx of Held (CH) innervation of its target, which forms a key element of auditory brainstem binaural circuitry, exhibits all of these characteristics. To investigate CH development, we made the first application of serial block-face scanning electron microscopy to neural development with fine temporal resolution and thereby accomplished the first time series for 3D ultrastructural analysis of neural circuit formation. This approach revealed a growth spurt of added apposed surface area (ASA) > 200 μm²/d centered on a single age at postnatal day 3 in mice and an initial rapid phase of growth and competition that resolved to monoinnervation in two-thirds of cells within 3 d. This rapid growth occurred in parallel with an increase in action potential threshold, which may mediate selection of the strongest input as the winning competitor. ASAs of competing inputs were segregated on the cell body surface. These data suggest mechanisms to select "winning" inputs by regional reinforcement of postsynaptic membrane to mediate size and strength of competing synaptic inputs.

  17. Dynamical analysis of a five-dimensioned chemostat model with impulsive diffusion and pulse input environmental toxicant

    Energy Technology Data Exchange (ETDEWEB)

    Jiao Jianjun, E-mail: jiaojianjun05@126.co [Guizhou Key Laboratory of Economic System Simulation, Guizhou College of Finance and Economics, Guiyang 550004 (China); Ye Kaili [School of Economics and Management, Xinyang Normal University, Xinyang 464000, Henan (China); Chen Lansun [Institute of Mathematics, Academy of Mathematics and System Sciences, Beijing 100080 (China)

    2011-01-15

    Research Highlights: This work improves on existing chemostat models. The proposed model accounts for natural phenomena. This work improves on existing mathematical methods. - Abstract: In this paper, we consider a five-dimensional chemostat model with impulsive diffusion and pulse input of an environmental toxicant. Using the discrete dynamical system determined by the stroboscopic map, we obtain a microorganism-extinction periodic solution, which is shown to be globally asymptotically stable. The permanence condition of the investigated system is also analyzed using the theory of impulsive differential equations. Our results reveal that chemostat environmental changes play an important role in the outcome of the chemostat.

  18. Standard Model evaluation of $\\varepsilon_K$ using lattice QCD inputs for $\\hat{B}_K$ and $V_{cb}$

    CERN Document Server

    Bailey, Jon A; Lee, Weonjong; Park, Sungwoo

    2015-01-01

    We report the Standard Model evaluation of the indirect CP violation parameter $\\varepsilon_K$ using inputs from lattice QCD: the kaon bag parameter $\\hat{B}_K$, $\\xi_0$, $|V_{us}|$ from the $K_{\\ell 3}$ and $K_{\\mu 2}$ decays, and $|V_{cb}|$ from the axial current form factor for the exclusive decay $\\bar{B} \\to D^* \\ell \\bar{\

  19. Energy Efficiency Analysis and Modeling the Relationship between Energy Inputs and Wheat Yield in Iran

    Directory of Open Access Journals (Sweden)

    Fakher Kardoni

    2015-12-01

    Full Text Available Wheat is the dominant cereal crop and the first staple food in Iran. This paper studies the energy consumption patterns and the relationship between energy inputs and yield for wheat production in Iranian agriculture during the period 1986-2008. The results indicated that total energy inputs in irrigated and dryland wheat production increased from 29.01 and 9.81 GJ ha−1 in 1986 to 44.67 and 12.35 GJ ha−1 in 2008, respectively. Similarly, total output energy rose from 28.87 and 10.43 GJ ha−1 in 1986 to 58.53 and 15.77 GJ ha−1 in 2008. Energy efficiency indicators (the input-output ratio, energy productivity, and net energy) improved over the examined period. The results also revealed that nonrenewable, direct, and indirect energy forms had a positive impact on the output level. Moreover, the regression results showed the significant effect of irrigation water and seed energies in irrigated wheat, and of human labor and fertilizer in dryland wheat, on crop yield. The results of this study indicated that improving fertilizer efficiency and reducing fuel consumption by modifying tillage, harvest methods, and other agronomic operations can significantly affect the energy efficiency of wheat production in Iran.
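The efficiency indicators described above follow directly from the input and output energy figures quoted in the abstract; a minimal sketch of the arithmetic:

```python
# Input/output energy figures quoted in the abstract, GJ ha^-1: (input, output)
irrigated = {1986: (29.01, 28.87), 2008: (44.67, 58.53)}
dryland = {1986: (9.81, 10.43), 2008: (12.35, 15.77)}

def energy_ratio(energy_in, energy_out):
    """Energy use efficiency: output energy divided by input energy."""
    return energy_out / energy_in

for system, data in (("irrigated", irrigated), ("dryland", dryland)):
    for year, (e_in, e_out) in sorted(data.items()):
        print(f"{system} wheat {year}: output/input ratio = "
              f"{energy_ratio(e_in, e_out):.2f}")
```

The output/input ratio for irrigated wheat rises from roughly 1.0 in 1986 to about 1.3 in 2008, consistent with the reported improvement in the energy efficiency indicators.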

  20. Uncertainty squared: Choosing among multiple input probability distributions and interpreting multiple output probability distributions in Monte Carlo climate risk models

    Science.gov (United States)

    Baer, P.; Mastrandrea, M.

    2006-12-01

    Simple probabilistic models which attempt to estimate likely transient temperature change from specified CO2 emissions scenarios must make assumptions about at least six uncertain aspects of the causal chain between emissions and temperature: current radiative forcing (including but not limited to aerosols), current land use emissions, carbon sinks, future non-CO2 forcing, ocean heat uptake, and climate sensitivity. Of these, multiple PDFs (probability density functions) have been published for the climate sensitivity, a couple for current forcing and ocean heat uptake, one for future non-CO2 forcing, and none for current land use emissions or carbon cycle uncertainty (which are interdependent). Different assumptions about these parameters, as well as different model structures, will lead to different estimates of likely temperature increase from the same emissions pathway. Thus policymakers will be faced with a range of temperature probability distributions for the same emissions scenarios, each described by a central tendency and spread. Because our conventional understanding of uncertainty and probability requires that a probabilistically defined variable of interest have only a single mean (or median, or modal) value and a well-defined spread, this "multidimensional" uncertainty defies straightforward utilization in policymaking. We suggest that there are no simple solutions to the questions raised. Crucially, we must dispel the notion that there is a "true" probability: probabilities of this type are necessarily subjective, and reasonable people may disagree. Indeed, we suggest that what is at stake is precisely the question: what is it reasonable to believe, and to act as if we believe? As a preliminary suggestion, we demonstrate how the output of a simple probabilistic climate model might be evaluated regarding the reasonableness of the outputs it calculates with different input PDFs. We suggest further that where there is insufficient evidence to clearly
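The "uncertainty squared" situation described can be sketched in a few lines: two alternative (here entirely hypothetical, lognormal) input PDFs for climate sensitivity are propagated through the same toy forcing-response relation, yielding two different output distributions for the same emissions scenario:

```python
import random

random.seed(0)

F_2X = 3.7       # radiative forcing per CO2 doubling, W m^-2 (standard value)
DELTA_F = 3.0    # illustrative future forcing for the scenario, W m^-2

def delta_t(sensitivity_k):
    # Toy equilibrium response: warming scales linearly with sensitivity.
    return sensitivity_k * DELTA_F / F_2X

# Two alternative, entirely hypothetical PDFs for climate sensitivity (K).
pdf_a = lambda: random.lognormvariate(1.0, 0.30)   # median ~2.7 K, thin tail
pdf_b = lambda: random.lognormvariate(1.2, 0.45)   # median ~3.3 K, fat tail

def summarize(sampler, n=20000):
    draws = sorted(delta_t(sampler()) for _ in range(n))
    return draws[n // 2], draws[int(0.95 * n)]       # median, 95th percentile

for name, sampler in (("input PDF A", pdf_a), ("input PDF B", pdf_b)):
    median, p95 = summarize(sampler)
    print(f"{name}: median warming {median:.2f} K, 95th pct {p95:.2f} K")
```

The same scenario yields two output distributions with different medians and tails; which one a policymaker should act on is exactly the subjective question the abstract raises.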

  1. Integrated modelling requires mass collaboration (Invited)

    Science.gov (United States)

    Moore, R. V.

    2009-12-01

    The need for sustainable solutions to the world’s problems is self-evident; the challenge is to anticipate where, in the environment, economy or society, the proposed solution will have negative consequences. If we failed to realise that the switch to biofuels would have the seemingly obvious result of reduced food production, how much harder will it be to predict the likely impact of policies whose impacts may be more subtle? It has been clear for a long time that models and data will be important tools for assessing the impact of events and the measures for their mitigation. They are an effective way of encapsulating knowledge of a process and using it for prediction. However, most models represent a single process or a small group of processes. The sustainability challenges that face us now require not just the prediction of a single process but the prediction of how many interacting processes will respond in given circumstances. These processes will not be confined to a single discipline but will often straddle many. For example, the question, “What will be the impact on river water quality of the medical plans for managing a ‘flu pandemic and could they cause a further health hazard?” spans medical planning, the absorption of drugs by the body, the spread of disease, the hydraulic and chemical processes in sewers and sewage treatment works and river water quality. This question nicely reflects the present state of the art. We have models of the processes, and standards such as the Open Modelling Interface (the OpenMI) allow them to be linked together and to datasets. We can therefore answer the question, but with the important proviso that we thought to ask it. The next and greater challenge is to deal with the open question, “What are the implications of the medical plans for managing a ‘flu pandemic?”. This implies a system that can make connections that may well not have occurred to us and then evaluate their probable impact. The final touch will be to

  2. A Diffusion Approximation and Numerical Methods for Adaptive Neuron Models with Stochastic Inputs.

    Science.gov (United States)

    Rosenbaum, Robert

    2016-01-01

    Characterizing the spiking statistics of neurons receiving noisy synaptic input is a central problem in computational neuroscience. Monte Carlo approaches to this problem are computationally expensive and often fail to provide mechanistic insight. Thus, the field has seen the development of mathematical and numerical approaches, often relying on a Fokker-Planck formalism. These approaches force a compromise between biological realism, accuracy and computational efficiency. In this article we develop an extension of existing diffusion approximations to more accurately approximate the response of neurons with adaptation currents and noisy synaptic currents. The implementation refines existing numerical schemes for solving the associated Fokker-Planck equations to improve computational efficiency and accuracy. Computer code implementing the developed algorithms is made available to the public.
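The Monte Carlo reference that such diffusion approximations aim to replace can be sketched as an Euler-Maruyama simulation of an adaptive leaky integrate-and-fire neuron driven by Gaussian white noise. All parameters below are illustrative, not taken from the paper:

```python
import math
import random

random.seed(1)

dt, t_total = 0.1, 5000.0         # time step and duration, ms
tau_m, tau_w = 20.0, 200.0        # membrane / adaptation time constants, ms
mu, sigma = 1.2, 0.5              # mean drive and noise intensity (dimensionless)
v_th, v_reset, b = 1.0, 0.0, 0.3  # threshold, reset, adaptation increment

v = w = 0.0
spikes = 0
steps = int(t_total / dt)
for _ in range(steps):
    # Euler-Maruyama step: dv = (mu - w - v) dt/tau_m + sigma dW / sqrt(tau_m)
    v += (mu - w - v) * dt / tau_m \
         + sigma * math.sqrt(dt / tau_m) * random.gauss(0.0, 1.0)
    w += -w * dt / tau_w           # adaptation current decays between spikes
    if v >= v_th:                  # threshold crossing: spike, reset, adapt
        v = v_reset
        w += b
        spikes += 1

print(f"firing rate ~ {1000.0 * spikes / t_total:.1f} spikes/s")
```

A single run like this gives one noisy rate estimate; the Fokker-Planck approaches discussed in the abstract compute the full stationary distribution and rate without sampling, which is where the efficiency gain comes from.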

  3. Nonlinear neural network for hemodynamic model state and input estimation using fMRI data

    KAUST Repository

    Karam, Ayman M.

    2014-11-01

    Originally inspired by biological neural networks, artificial neural networks (ANNs) are powerful mathematical tools that can solve complex nonlinear problems such as filtering, classification, prediction and more. This paper demonstrates the first successful implementation of ANN, specifically nonlinear autoregressive with exogenous input (NARX) networks, to estimate the hemodynamic states and neural activity from simulated and measured real blood oxygenation level dependent (BOLD) signals. Blocked and event-related BOLD data are used to test the algorithm on real experiments. The proposed method is accurate and robust even in the presence of signal noise and it does not depend on sampling interval. Moreover, the structure of the NARX networks is optimized to yield the best estimate with minimal network architecture. The results of the estimated neural activity are also discussed in terms of their potential use.

  4. Realistic modelling of the seismic input Site effects and parametric studies

    CERN Document Server

    Romanelli, F; Vaccari, F

    2002-01-01

    We illustrate the work done in the framework of a large international cooperation, showing the very recent numerical experiments carried out within the framework of the EC project 'Advanced methods for assessing the seismic vulnerability of existing motorway bridges' (VAB) to assess the importance of non-synchronous seismic excitation of long structures. The definition of the seismic input at the Warth bridge site, i.e. the determination of the seismic ground motion due to an earthquake with a given magnitude and epicentral distance from the site, has been done following a theoretical approach. In order to perform an accurate and realistic estimate of site effects and of differential motion it is necessary to make a parametric study that takes into account the complex combination of the source and propagation parameters, in realistic geological structures. The computation of a wide set of time histories and spectral information, corresponding to possible seismotectonic scenarios for different sources and stru...

  5. Embodied water analysis for Hebei Province, China by input-output modelling

    Science.gov (United States)

    Liu, Siyuan; Han, Mengyao; Wu, Xudong; Wu, Xiaofang; Li, Zhi; Xia, Xiaohua; Ji, Xi

    2016-12-01

    With the accelerating coordinated development of the Beijing-Tianjin-Hebei region, regional economic integration is recognized as a national strategy. As water scarcity places Hebei Province in a dilemma, it is of critical importance for Hebei Province to balance water resources as well as make full use of its unique advantages in the transition to sustainable development. To our knowledge, related embodied water accounting analysis has been conducted for Beijing and Tianjin, while similar works with the focus on Hebei are not found. In this paper, using the most complete and recent statistics available for Hebei Province, the embodied water use in Hebei Province is analyzed in detail. Based on input-output analysis, it presents a complete set of systems accounting framework for water resources. In addition, a database of embodied water intensity is proposed which is applicable to both intermediate inputs and final demand. The result suggests that the total amount of embodied water in final demand is 10.62 billion m3, of which the water embodied in urban household consumption accounts for more than half. As a net embodied water importer, the water embodied in the commodity trade in Hebei Province is 17.20 billion m3. The outcome of this work implies that it is particularly urgent to adjust industrial structure and trade policies for water conservation, to upgrade technology and to improve water utilization. As a result, to relieve water shortages in Hebei Province, it is of crucial importance to regulate the balance of water use within the province, thus balancing water distribution in the various industrial sectors.
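In standard input-output analysis, the embodied-water intensity database described above is built by propagating direct water-use coefficients through the Leontief inverse. A sketch with a hypothetical three-sector economy (none of these figures are Hebei data):

```python
import numpy as np

# Hypothetical three-sector economy; all numbers are illustrative.
A = np.array([[0.10, 0.30, 0.05],   # technical coefficient matrix
              [0.20, 0.10, 0.10],
              [0.05, 0.15, 0.20]])
w = np.array([120.0, 15.0, 5.0])    # direct water use, m^3 per unit output
y = np.array([50.0, 80.0, 120.0])   # final demand by sector

# Embodied-water intensity: direct coefficients propagated through the
# Leontief inverse, applicable to both intermediate inputs and final demand.
intensity = w @ np.linalg.inv(np.eye(3) - A)
embodied_in_final_demand = intensity @ y

print("embodied water intensities:", np.round(intensity, 1))
print("water embodied in final demand:", round(float(embodied_in_final_demand), 1))
```

Because the Leontief inverse is I + A + A² + …, each sector's embodied intensity is at least its direct coefficient, which is why water-intensive upstream sectors dominate the embodied totals of downstream consumption.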

  6. Modeling commuting patterns in a multi-regional input-output framework: impacts of an `urban re-centralization' scenario

    Science.gov (United States)

    Ferreira, J.-P.; Ramos, P.; Cruz, L.; Barata, E.

    2017-10-01

    The paper suggests a modeling approach for assessing economic and social impacts of changes in urban forms and commuting patterns that extends a multi-regional input-output framework by incorporating a set of commuting-related consequences. The Lisbon Metropolitan Area case with an urban re-centralization scenario is used as an example to illustrate the relevance of this modeling approach for analyzing commuting-related changes in regional income distribution on the one side and in household consumption structures on the other.

  7. Gamified Requirements Engineering: Model and Experimentation

    NARCIS (Netherlands)

    Lombriser, Philipp; Dalpiaz, Fabiano; Lucassen, Garm; Brinkkemper, Sjaak

    2016-01-01

    [Context & Motivation] Engaging stakeholders in requirements engineering (RE) influences the quality of the requirements and ultimately of the system to-be. Unfortunately, stakeholder engagement is often insufficient, leading to too few, low-quality requirements. [Question/problem] We aim to

  10. Performance assessment of nitrate leaching models for highly vulnerable soils used in low-input farming based on lysimeter data.

    Science.gov (United States)

    Groenendijk, Piet; Heinen, Marius; Klammler, Gernot; Fank, Johann; Kupfersberger, Hans; Pisinaras, Vassilios; Gemitzi, Alexandra; Peña-Haro, Salvador; García-Prats, Alberto; Pulido-Velazquez, Manuel; Perego, Alessia; Acutis, Marco; Trevisan, Marco

    2014-11-15

    The agricultural sector faces the challenge of ensuring food security without an excessive burden on the environment. Simulation models provide excellent instruments for researchers to gain more insight into relevant processes and best agricultural practices and provide tools for planners for decision-making support. The extent to which models are capable of reliable extrapolation and prediction is important for exploring new farming systems or assessing the impacts of future land and climate changes. A performance assessment was conducted by testing six detailed state-of-the-art models for simulation of nitrate leaching (ARMOSA, COUPMODEL, DAISY, EPIC, SIMWASER/STOTRASIM, SWAP/ANIMO) for lysimeter data of the Wagna experimental field station in Eastern Austria, where the soil is highly vulnerable to nitrate leaching. Three consecutive phases were distinguished to gain insight into the predictive power of the models: 1) a blind test for 2005-2008 in which only soil hydraulic characteristics, meteorological data and information about the agricultural management were accessible; 2) a calibration for the same period in which essential information on field observations was additionally available to the modellers; and 3) a validation for 2009-2011 with the corresponding type of data available as for the blind test. A set of statistical metrics (mean absolute error, root mean squared error, index of agreement, model efficiency, root relative squared error, Pearson's linear correlation coefficient) was applied for testing the results and comparing the models. None of the models performed well on all of the statistical metrics. Models designed for nitrate leaching in high-input farming systems had difficulties in accurately predicting leaching in low-input farming systems that are strongly influenced by the retention of nitrogen in catch crops and nitrogen fixation by legumes. An accurate calibration does not guarantee good predictive power of the model. 
Nevertheless all
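The statistical metrics listed above can be computed as follows. The definitions used here are the common textbook forms (the paper's exact formulations may differ slightly), and the observed/simulated series are hypothetical:

```python
import math

def performance_metrics(obs, sim):
    """Goodness-of-fit metrics named in the abstract, in common textbook form."""
    n = len(obs)
    mean_obs = sum(obs) / n
    mean_sim = sum(sim) / n
    err = [s - o for o, s in zip(obs, sim)]
    ss_err = sum(e * e for e in err)
    ss_obs = sum((o - mean_obs) ** 2 for o in obs)
    ss_sim = sum((s - mean_sim) ** 2 for s in sim)
    mae = sum(abs(e) for e in err) / n            # mean absolute error
    rmse = math.sqrt(ss_err / n)                  # root mean squared error
    nse = 1.0 - ss_err / ss_obs                   # model efficiency (Nash-Sutcliffe)
    rrse = math.sqrt(ss_err / ss_obs)             # root relative squared error
    d = 1.0 - ss_err / sum((abs(s - mean_obs) + abs(o - mean_obs)) ** 2
                           for o, s in zip(obs, sim))   # index of agreement
    r = sum((o - mean_obs) * (s - mean_sim)
            for o, s in zip(obs, sim)) / math.sqrt(ss_obs * ss_sim)  # Pearson r
    return {"MAE": mae, "RMSE": rmse, "NSE": nse, "RRSE": rrse, "d": d, "r": r}

obs = [2.0, 3.5, 4.0, 5.5, 6.0]   # hypothetical observed nitrate fluxes
sim = [2.2, 3.1, 4.4, 5.0, 6.3]   # hypothetical simulated fluxes
for name, value in performance_metrics(obs, sim).items():
    print(f"{name}: {value:.3f}")
```

The metrics deliberately disagree in emphasis (MAE weights all errors equally, RMSE penalizes large misses, NSE and d are benchmarked against the observed mean), which is why a model can rank well on some and poorly on others, as found in the study.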

  11. Incorporation of Failure Into an Orthotropic Three-Dimensional Model with Tabulated Input Suitable for Use in Composite Impact Problems

    Science.gov (United States)

    Goldberg, Robert K.; Carney, Kelly S.; Dubois, Paul; Hoffarth, Canio; Khaled, Bilal; Shyamsunder, Loukham; Rajan, Subramaniam; Blankenhorn, Gunther

    2017-01-01

    The need for accurate material models to simulate the deformation, damage and failure of polymer matrix composites under impact conditions is becoming critical as these materials are gaining increased use in the aerospace and automotive communities. The aerospace community has identified several key capabilities which are currently lacking in the available material models in commercial transient dynamic finite element codes. To attempt to improve the predictive capability of composite impact simulations, a next generation material model is being developed for incorporation within the commercial transient dynamic finite element code LS-DYNA. The material model, which incorporates plasticity, damage and failure, utilizes experimentally based tabulated input to define the evolution of plasticity and damage and the initiation of failure as opposed to specifying discrete input parameters such as modulus and strength. The plasticity portion of the orthotropic, three-dimensional, macroscopic composite constitutive model is based on an extension of the Tsai-Wu composite failure model into a generalized yield function with a non-associative flow rule. For the damage model, a strain equivalent formulation is used to allow for the uncoupling of the deformation and damage analyses. In the damage model, a semi-coupled approach is employed where the overall damage in a particular coordinate direction is assumed to be a multiplicative combination of the damage in that direction resulting from the applied loads in various coordinate directions. For the failure model, a tabulated approach is utilized in which a stress or strain based invariant is defined as a function of the location of the current stress state in stress space to define the initiation of failure. Failure surfaces can be defined with any arbitrary shape, unlike traditional failure models where the mathematical functions used to define the failure surface impose a specific shape on the failure surface. 
In the current

  12. ColloInputGenerator

    DEFF Research Database (Denmark)

    2013-01-01

    This is a very simple program to help you put together input files for use in Gries' (2007) R-based collostruction analysis program. It basically puts together a text file with a frequency list of lexemes in the construction and inserts a column where you can add the corpus frequencies. It requires...... it as input for basic collexeme collostructional analysis (Stefanowitsch & Gries 2003) in Gries' (2007) program. ColloInputGenerator is, in its current state, based on programming commands introduced in Gries (2009). Projected updates: Generation of complete work-ready frequency lists....

  13. AN ACCURATE MODELING OF DELAY AND SLEW METRICS FOR ON-CHIP VLSI RC INTERCONNECTS FOR RAMP INPUTS USING BURR’S DISTRIBUTION FUNCTION

    Directory of Open Access Journals (Sweden)

    Rajib Kar

    2010-09-01

    Full Text Available This work presents an accurate and efficient model to compute the delay and slew metrics of on-chip interconnects of high-speed CMOS circuits for ramp inputs. Our metric is based on the Burr distribution function, which is used to characterize the normalized homogeneous portion of the step response. We used the PERI (Probability distribution function Extension for Ramp Inputs) technique, which extends delay and slew metrics for step inputs to the more general and realistic non-step inputs. The accuracy of our models is justified by comparison with SPICE simulations.
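As a sketch of how a distribution-function delay metric works: if the normalized step response at a node is modelled by a Burr XII CDF, the 50% delay and 10-90% slew follow in closed form by inverting the CDF. The parameters below are hypothetical, not fitted to any interconnect from the paper:

```python
def burr_cdf(t, c, k, lam):
    """Burr XII CDF: F(t) = 1 - (1 + (t/lam)**c)**(-k)."""
    return 1.0 - (1.0 + (t / lam) ** c) ** (-k)

def burr_quantile(p, c, k, lam):
    """Invert F(t) = p:  t = lam * ((1 - p)**(-1/k) - 1)**(1/c)."""
    return lam * ((1.0 - p) ** (-1.0 / k) - 1.0) ** (1.0 / c)

# Hypothetical shape/scale parameters (time unit: ps).
c, k, lam = 2.0, 1.5, 40.0
delay_50 = burr_quantile(0.5, c, k, lam)
slew_10_90 = burr_quantile(0.9, c, k, lam) - burr_quantile(0.1, c, k, lam)
print(f"50% delay = {delay_50:.1f} ps, 10-90% slew = {slew_10_90:.1f} ps")
```

In practice the shape parameters would be matched to circuit moments of the RC network rather than chosen directly; the closed-form inversion is what makes this class of metric cheap compared to SPICE.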

  14. Effect of Manure vs. Fertilizer Inputs on Productivity of Forage Crop Models

    Directory of Open Access Journals (Sweden)

    Pasquale Martiniello

    2011-06-01

Full Text Available Manure produced by livestock activity is a dangerous product capable of causing serious environmental pollution. Agronomic management practices on the use of manure may transform it from a waste into a resource. Experiments comparing manure with standard chemical fertilizers (CF) were studied under a double-cropping-per-year regime (alfalfa, model I; Italian ryegrass-corn, model II; barley-seed sorghum, model III; and horse-bean-silage sorghum, model IV). The total amount of manure applied in the annual forage crops of models II, III and IV was 158, 140 and 80 m3 ha−1, respectively. The manure applied to the soil by broadcast and injection provided an amount of nitrogen equal to that supplied by CF. The effect of manure applications on animal feed production and biochemical soil characteristics depended on the model. Weather conditions showed little interaction with the manure and CF treatments. The number of MFU ha−1 of biomass crop gross product produced in the autumn- and spring-sown models under manure applications was 11,769, 20,525, 11,342 and 21,397 in models I through IV, respectively. The reduction of MFU ha−1 under CF ranged from 10.7% to 13.2% relative to the manure models. The effect of manure on the organic carbon and total nitrogen of the topsoil, compared to model I, followed the same pattern as CF, with higher amounts in models II and III than in model IV. In percentage terms, relative to model I under manure treatment, organic carbon and total nitrogen were reduced by about 18.5% and 21.9% in models II and III and by 8.8% and 6.3% in model IV, respectively. Manure management may substitute for CF without reducing gross production or the sustainability of cropping systems, thus allowing the opportunity to recycle a waste product for animal forage feeding.

  15. Solving Inverse Problems for Mechanistic Systems Biology Models with Unknown Inputs

    Science.gov (United States)

    2014-10-16

    frusemide in terms of diuresis and natriuresis can be modeled by indirect response model [18]. In this project, a modified version of this model was used...were derived from their measurements. The model relating the effect site excretion rate of frusemide ( ) to diuresis is given by: 64433-MA-II...time courses of frusemide infusion rate, frusemide urinary excretion rate, diuresis and natriuresis). The “true” parameter values used in the
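The indirect response model mentioned in the excerpt above can be sketched generically. This is a stimulation-type indirect response model in its textbook form, with all parameter values hypothetical rather than taken from the report:

```python
def simulate_indirect_response(conc, k_in, k_out, emax, ec50, dt=0.01, steps=2000):
    """Forward-Euler simulation of a stimulation-type indirect response model:
        dR/dt = k_in * (1 + Emax*C/(EC50 + C)) - k_out * R
    conc(t) is the effect-site drug concentration; the response R starts at
    its drug-free baseline k_in/k_out."""
    r = k_in / k_out
    history = [r]
    for i in range(steps):
        c = conc(i * dt)
        stimulation = 1.0 + emax * c / (ec50 + c)
        r += dt * (k_in * stimulation - k_out * r)
        history.append(r)
    return history

# Constant drug exposure drives the response above its baseline of 2.0
resp = simulate_indirect_response(conc=lambda t: 5.0,
                                  k_in=1.0, k_out=0.5, emax=2.0, ec50=2.5)
```

At constant concentration the response relaxes toward the stimulated steady state k_in*(1 + Emax*C/(EC50+C))/k_out.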

  16. Effect of manure vs. fertilizer inputs on productivity of forage crop models.

    Science.gov (United States)

    Annicchiarico, Giovanni; Caternolo, Giovanni; Rossi, Emanuela; Martiniello, Pasquale

    2011-06-01

Manure produced by livestock activity is a dangerous product capable of causing serious environmental pollution. Agronomic management practices on the use of manure may transform it from a waste into a resource. Experiments comparing manure with standard chemical fertilizers (CF) were studied under a double-cropping-per-year regime (alfalfa, model I; Italian ryegrass-corn, model II; barley-seed sorghum, model III; and horse-bean-silage sorghum, model IV). The total amount of manure applied in the annual forage crops of models II, III and IV was 158, 140 and 80 m3 ha(-1), respectively. The manure applied to the soil by broadcast and injection provided an amount of nitrogen equal to that supplied by CF. The effect of manure applications on animal feed production and biochemical soil characteristics depended on the model. Weather conditions showed little interaction with the manure and CF treatments. The number of MFU ha(-1) of biomass crop gross product produced in the autumn- and spring-sown models under manure applications was 11,769, 20,525, 11,342 and 21,397 in models I through IV, respectively. The reduction of MFU ha(-1) under CF ranged from 10.7% to 13.2% relative to the manure models. The effect of manure on the organic carbon and total nitrogen of the topsoil, compared to model I, followed the same pattern as CF, with higher amounts in models II and III than in model IV. In percentage terms, relative to model I under manure treatment, organic carbon and total nitrogen were reduced by about 18.5% and 21.9% in models II and III and by 8.8% and 6.3% in model IV, respectively. Manure management may substitute for CF without reducing gross production or the sustainability of cropping systems, thus allowing the opportunity to recycle a waste product for animal forage feeding.

  17. 'Fingerprints' of four crop models as affected by soil input data aggregation

    DEFF Research Database (Denmark)

    Angulo, Carlos; Gaiser, Thomas; Rötter, Reimund P;

    2014-01-01

    . In this study we used four crop models (SIMPLACE, DSSAT-CSM, EPIC and DAISY) differing in the detail of modeling above-ground biomass and yield as well as of modeling soil water dynamics, water uptake and drought effects on plants to simulate winter wheat in two (agro-climatologically and geo...

  18. Input-Output Modeling and Control of the Departure Process of Congested Airports

    Science.gov (United States)

    Pujet, Nicolas; Delcaire, Bertrand; Feron, Eric

    2003-01-01

    A simple queueing model of busy airport departure operations is proposed. This model is calibrated and validated using available runway configuration and traffic data. The model is then used to evaluate preliminary control schemes aimed at alleviating departure traffic congestion on the airport surface. The potential impact of these control strategies on direct operating costs, environmental costs and overall delay is quantified and discussed.

  19. Modeling river total bed material load discharge using artificial intelligence approaches (based on conceptual inputs)

    Science.gov (United States)

    Roushangar, Kiyoumars; Mehrabani, Fatemeh Vojoudi; Shiri, Jalal

    2014-06-01

This study presents Artificial Intelligence (AI)-based modeling of total bed material load, aiming to improve the accuracy of the predictions of traditional models. Gene expression programming (GEP) and adaptive neuro-fuzzy inference system (ANFIS)-based models were developed and validated for the estimations. Sediment data from the Qotur River (Northwestern Iran) were used for developing and validating the applied techniques. In order to assess the applied techniques relative to traditional models, stream power-based and shear stress-based physical models were also applied to the studied case. The obtained results reveal that the developed AI-based models, using a minimum number of dominant factors, give more accurate results than the other applied models. It was also found that the k-fold test is a practical but computationally costly technique for completely scanning the applied data and avoiding over-fitting.
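The k-fold test mentioned above partitions the data so that every sample is held out exactly once across the folds. A minimal index-splitting sketch (generic, not the authors' implementation):

```python
def k_fold_splits(n_samples, k):
    """Yield (train_idx, test_idx) index pairs for k-fold cross-validation.
    Fold sizes differ by at most one when k does not divide n_samples."""
    fold_sizes = [n_samples // k + (1 if i < n_samples % k else 0) for i in range(k)]
    start = 0
    for size in fold_sizes:
        test = list(range(start, start + size))
        train = list(range(0, start)) + list(range(start + size, n_samples))
        yield train, test
        start += size

# Every sample appears in exactly one test fold
folds = list(k_fold_splits(n_samples=10, k=3))
```

The "high cost" noted in the abstract comes from refitting the model k times, once per held-out fold.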

  20. From requirements to Java in a snap model-driven requirements engineering in practice

    CERN Document Server

    Smialek, Michal

    2015-01-01

    This book provides a coherent methodology for Model-Driven Requirements Engineering which stresses the systematic treatment of requirements within the realm of modelling and model transformations. The underlying basic assumption is that detailed requirements models are used as first-class artefacts playing a direct role in constructing software. To this end, the book presents the Requirements Specification Language (RSL) that allows precision and formality, which eventually permits automation of the process of turning requirements into a working system by applying model transformations and co

  1. Linear and Non-linear Multi-Input Multi-Output Model Predictive Control of Continuous Stirred Tank Reactor

    Directory of Open Access Journals (Sweden)

    Muayad Al-Qaisy

    2013-04-01

Full Text Available In this article, a multi-input multi-output (MIMO) linear model predictive controller (LMPC) based on a state space model and a nonlinear model predictive controller based on a neural network (NNMPC) are applied to a continuous stirred tank reactor (CSTR). The idea is to have a good control system that is able to give optimal performance, reject high load disturbances, and track set point changes. In order to study the performance of the two model predictive controllers, a MIMO Proportional-Integral-Derivative (PID) control strategy is used as a benchmark. The LMPC, NNMPC, and PID strategies are used for controlling the residual concentration (CA) and reactor temperature (T). NNMPC shows superior performance over the LMPC and PID controllers, presenting a smaller overshoot and a shorter settling time.
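As a toy illustration of the receding-horizon idea behind model predictive control (far simpler than the article's MIMO LMPC/NNMPC: a scalar plant and a one-step horizon, which admits a closed-form control law):

```python
def mpc_step(x, r, a, b, lam):
    """One-step MPC for the scalar plant x+ = a*x + b*u: minimize
    (a*x + b*u - r)**2 + lam*u**2 over u, which has a closed-form solution."""
    return b * (r - a * x) / (b * b + lam)

def simulate(x0, r, a, b, lam, steps):
    """Receding horizon: at each step, re-solve for u from the current state."""
    x, traj = x0, [x0]
    for _ in range(steps):
        u = mpc_step(x, r, a, b, lam)
        x = a * x + b * u
        traj.append(x)
    return traj

traj = simulate(x0=0.0, r=1.0, a=0.9, b=0.5, lam=0.1, steps=50)
```

Note that the input penalty lam leaves a steady-state offset below the set point r, which is one reason practical MPC formulations add integral action or target calculation.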

  2. Input-Dependent Integral Nonlinearity Modeling for Pipelined Analog-Digital Converters

    OpenAIRE

    Samer Medawar; Peter Händel; Niclas Björsell; Magnus Jansson

    2010-01-01

    Integral nonlinearity (INL) for pipelined analog-digital converters (ADCs) operating at RF is measured and characterized. A parametric model for the INL of pipelined ADCs is proposed, and the corresponding least-squares problem is formulated and solved. The INL is modeled both with respect to the converter output code and the frequency stimuli, which is dynamic modeling. The INL model contains a static and a dynamic part. The former comprises two 1-D terms in ADC code that are a sequence of z...
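The least-squares formulation mentioned above can be illustrated for the static part only: fitting a low-order polynomial INL-versus-code curve by normal equations. The data below are synthetic and noise-free; the paper's model additionally contains dynamic, frequency-dependent terms:

```python
def fit_poly_lsq(xs, ys, degree):
    """Least-squares polynomial fit via normal equations and Gaussian
    elimination. Returns coefficients c0 + c1*x + ... + c_degree*x**degree."""
    n = degree + 1
    A = [[sum(x ** (i + j) for x in xs) for j in range(n)] for i in range(n)]
    b = [sum(y * x ** i for x, y in zip(xs, ys)) for i in range(n)]
    for col in range(n):                      # forward elimination with pivoting
        piv = max(range(col, n), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        b[col], b[piv] = b[piv], b[col]
        for r in range(col + 1, n):
            f = A[r][col] / A[col][col]
            for j in range(col, n):
                A[r][j] -= f * A[col][j]
            b[r] -= f * b[col]
    coeffs = [0.0] * n                        # back substitution
    for r in range(n - 1, -1, -1):
        coeffs[r] = (b[r] - sum(A[r][j] * coeffs[j] for j in range(r + 1, n))) / A[r][r]
    return coeffs

# Recover a known quadratic "static INL" curve from noise-free samples
codes = list(range(0, 64, 2))
inl = [0.5 - 0.05 * k + 0.001 * k * k for k in codes]
coef = fit_poly_lsq(codes, inl, degree=2)
```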

  3. Validation of input-noise model for simulations of supercontinuum generation and rogue waves

    DEFF Research Database (Denmark)

    Frosz, Michael Henoch

    2010-01-01

    A new model of pump noise in supercontinuum and rogue wave generation is presented. Simulations are compared with experiments and show that the new model provides significantly better agreement than the currently ubiquitously used one-photon-per-mode model. The new model also allows for a study...... of the influence of the pump spectral line width on the spectral broadening mechanisms. Specifically, it is found that for four-wave mixing (FWM) a narrow spectral line width ( 0.1 nm) initially leads to a build-up of FWM from quantum noise, whereas a broad spectral line width ( 1 nm) initially leads to a gradual...

  4. Error analysis of the quantification of hepatic perfusion using a dual-input single-compartment model

    Science.gov (United States)

    Miyazaki, Shohei; Yamazaki, Youichi; Murase, Kenya

    2008-11-01

    We performed an error analysis of the quantification of liver perfusion from dynamic contrast-enhanced computed tomography (DCE-CT) data using a dual-input single-compartment model for various disease severities, based on computer simulations. In the simulations, the time-density curves (TDCs) in the liver were generated from an actually measured arterial input function using a theoretical equation describing the kinetic behavior of the contrast agent (CA) in the liver. The rate constants for the transfer of CA from the hepatic artery to the liver (K1a), from the portal vein to the liver (K1p), and from the liver to the plasma (k2) were estimated from simulated TDCs with various plasma volumes (V0s). To investigate the effect of the shapes of input functions, the original arterial and portal-venous input functions were stretched in the time direction by factors of 2, 3 and 4 (stretching factors). The above parameters were estimated with the linear least-squares (LLSQ) and nonlinear least-squares (NLSQ) methods, and the root mean square errors (RMSEs) between the true and estimated values were calculated. Sensitivity and identifiability analyses were also performed. The RMSE of V0 was the smallest, followed by those of K1a, k2 and K1p in an increasing order. The RMSEs of K1a, K1p and k2 increased with increasing V0, while that of V0 tended to decrease. The stretching factor also affected parameter estimation in both methods. The LLSQ method estimated the above parameters faster and with smaller variations than the NLSQ method. Sensitivity analysis showed that the magnitude of the sensitivity function of V0 was the greatest, followed by those of K1a, K1p and k2 in a decreasing order, while the variance of V0 obtained from the covariance matrices was the smallest, followed by those of K1a, K1p and k2 in an increasing order. 
The magnitude of the sensitivity function and the variance increased and decreased, respectively, with increasing disease severity and decreased
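A minimal sketch of the dual-input single-compartment model and the linear least-squares (LLSQ) estimation described above, with the plasma volume term V0 omitted for brevity; the input functions and rate constants below are illustrative, not the paper's:

```python
import math

def solve3(A, b):
    """Solve a 3x3 linear system by Cramer's rule (adequate for this sketch)."""
    def det(m):
        return (m[0][0] * (m[1][1] * m[2][2] - m[1][2] * m[2][1])
                - m[0][1] * (m[1][0] * m[2][2] - m[1][2] * m[2][0])
                + m[0][2] * (m[1][0] * m[2][1] - m[1][1] * m[2][0]))
    d = det(A)
    out = []
    for i in range(3):
        M = [row[:] for row in A]
        for r in range(3):
            M[r][i] = b[r]
        out.append(det(M) / d)
    return out

dt, n = 0.1, 300
K1a, K1p, k2 = 0.2, 0.8, 0.3                                # "true" rate constants
Ca = [t * dt * math.exp(-0.5 * t * dt) for t in range(n)]   # arterial input
Cp = [t * dt * math.exp(-0.25 * t * dt) for t in range(n)]  # portal-venous input

# Generate the liver curve with forward Euler: dC/dt = K1a*Ca + K1p*Cp - k2*C
C = [0.0]
for i in range(n - 1):
    C.append(C[i] + dt * (K1a * Ca[i] + K1p * Cp[i] - k2 * C[i]))

# LLSQ on the integrated form: C(t) = K1a*Int(Ca) + K1p*Int(Cp) - k2*Int(C)
Sa = Sp = Sc = 0.0
rows, ys = [], []
for m in range(1, n):
    Sa += dt * Ca[m - 1]
    Sp += dt * Cp[m - 1]
    Sc += dt * C[m - 1]
    rows.append([Sa, Sp, -Sc])
    ys.append(C[m])

A = [[sum(r[i] * r[j] for r in rows) for j in range(3)] for i in range(3)]
b = [sum(r[i] * y for r, y in zip(rows, ys)) for i in range(3)]
K1a_hat, K1p_hat, k2_hat = solve3(A, b)
```

Because the equation is linear in the parameters, the integrated form turns estimation into a single linear solve, which is why the LLSQ method is faster and less variable than iterative NLSQ fitting.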

  5. Modeling uncertainty in requirements engineering decision support

    Science.gov (United States)

    Feather, Martin S.; Maynard-Zhang, Pedrito; Kiper, James D.

    2005-01-01

One inherent characteristic of requirements engineering is a lack of certainty during this early phase of a project. Nevertheless, decisions about requirements must be made in spite of this uncertainty. Here we describe the context in which we are exploring this, and some initial work to support elicitation of uncertain requirements, and to deal with the combination of such information from multiple stakeholders.

  7. A goal-oriented requirements modelling language for enterprise architecture

    NARCIS (Netherlands)

    Quartel, Dick; Engelsman, Wilco; Jonkers, Henk; Sinderen, van Marten

    2009-01-01

    Methods for enterprise architecture, such as TOGAF, acknowledge the importance of requirements engineering in the development of enterprise architectures. Modelling support is needed to specify, document, communicate and reason about goals and requirements. Current modelling techniques for enterpris

  8. A goal-oriented requirements modelling language for enterprise architecture

    NARCIS (Netherlands)

    Quartel, Dick; Engelsman, W.; Jonkers, Henk; van Sinderen, Marten J.

    2009-01-01

    Methods for enterprise architecture, such as TOGAF, acknowledge the importance of requirements engineering in the development of enterprise architectures. Modelling support is needed to specify, document, communicate and reason about goals and requirements. Current modelling techniques for

  9. Effective property determination for input to a geostatistical model of regional groundwater flow: Wellenberg T{yields}K

    Energy Technology Data Exchange (ETDEWEB)

    Lanyon, G.W. [GeoScience Ltd., Falmouth (United Kingdom); Marschall, P.; Vomvoris, S. [NAGRA, Wettingen (Switzerland); Jaquet, O. [Colenco Power Engineering AG, Baden (Switzerland); Mazurek, M. [Bern Univ. (Switzerland). Mineralogisch-petrographisches Inst.

    1998-09-01

This paper describes the methodology used to estimate effective hydraulic properties for input into a regional geostatistical model of groundwater flow at the Wellenberg site in Switzerland. The methodology uses a geologically-based discrete fracture network model to calculate effective hydraulic properties for 100m blocks along each borehole. A description of the most transmissive features (Water Conducting Features or WCFs) in each borehole is used to determine local transmissivity distributions, which are combined with descriptions of WCF extent, orientation and channelling to create fracture network models. WCF geometry is dependent on the class of WCF. WCF classes are defined for each type of geological structure associated with identified borehole inflows. Local to each borehole, models are conditioned on the observed transmissivity and occurrence of WCFs. Multiple realisations are calculated for each 100m block over approximately 400m of borehole. The results from the numerical upscaling are compared with conservative estimates of hydraulic conductivity. Results from unconditioned models are also compared, to identify the consequences of conditioning and the intervals of boreholes that appear to be atypical. An inverse method is also described by which realisations of the geostatistical model can be used to condition discrete fracture network models away from the boreholes. The method can be used as a verification of the modelling approach by predicting data at borehole locations. Applications of the models to the estimation of post-closure repository performance, including cavern inflow and seal zone modelling, are illustrated. 14 refs, 9 figs

  10. Resources use and greenhouse gas emissions in urban economy: Ecological input-output modeling for Beijing 2002

    Science.gov (United States)

    Zhou, S. Y.; Chen, H.; Li, S. C.

    2010-10-01

The embodiment of natural resources and greenhouse gas emissions in the urban economy of Beijing in 2002 is quantified by physical balance modeling, based on an extension of the economic input-output table into an ecological one integrating the economy with its various environmental driving forces. The included resources and greenhouse gas emissions belong to six categories: energy resources in terms of primary and secondary energy; water resources; emissions of CO2, CH4, and N2O; exergy in terms of energy sources, biological resources and minerals; and solar emergy and cosmic emergy in terms of climate resources, soil, energy sources, and minerals.
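Ecological input-output extensions of this kind rest on the standard Leontief embodiment calculation: embodied intensities e satisfy e = direct + e·A, i.e. e(I - A) = direct. A two-sector sketch with illustrative coefficients (the Beijing table is of course much larger):

```python
def embodied_intensities(A, direct):
    """Embodied emission intensities for a two-sector input-output economy:
    e = direct + e @ A, i.e. e @ (I - A) = direct, solved in closed form.
    A[i][j] is the input from sector i per unit output of sector j."""
    L00, L01 = 1.0 - A[0][0], -A[0][1]
    L10, L11 = -A[1][0], 1.0 - A[1][1]
    det = L00 * L11 - L01 * L10
    e0 = (direct[0] * L11 - direct[1] * L10) / det
    e1 = (direct[1] * L00 - direct[0] * L01) / det
    return [e0, e1]

A = [[0.2, 0.3], [0.1, 0.4]]   # illustrative technical coefficients
direct = [1.0, 2.0]            # direct emissions per unit output
e = embodied_intensities(A, direct)
```

Each intensity equals the sector's direct emissions plus the embodied emissions carried in by its intermediate inputs.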

  11. Global qualitative analysis of new Monod type chemostat model with delayed growth response and pulsed input in polluted environment

    Institute of Scientific and Technical Information of China (English)

    MENG Xin-zhu; ZHAO Qiu-lan; CHEN Lan-sun

    2008-01-01

In this paper, we consider a new Monod type chemostat model with time delay and impulsive input concentration of the nutrient in a polluted environment. Using the discrete dynamical system determined by the stroboscopic map, we obtain a "microorganism-extinction" periodic solution. Further, we establish the sufficient conditions for the global attractivity of the microorganism-extinction periodic solution. Using new computational techniques for impulsive and delayed differential equations, we prove that the system is permanent under appropriate conditions. Our results show that time delay is "profitless".
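The stroboscopic-map idea used above can be sketched for the nutrient alone: exponential washout between impulsive inputs gives a contraction map, whose unique fixed point corresponds to the microorganism-extinction periodic solution. The parameters below are illustrative:

```python
import math

def strobe_map(s, p, D, T):
    """Stroboscopic map of the nutrient concentration: exponential washout
    at rate D over one period T, followed by an impulsive input of size p."""
    return s * math.exp(-D * T) + p

def iterate(s0, p, D, T, n):
    """Iterate the map n periods from the initial concentration s0."""
    s = s0
    for _ in range(n):
        s = strobe_map(s, p, D, T)
    return s

p, D, T = 1.0, 0.5, 2.0
s_star = p / (1.0 - math.exp(-D * T))   # globally attracting fixed point
```

Because exp(-D*T) < 1, every initial condition converges to s_star, mirroring the global attractivity result.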

  12. Input Harmonic Analysis on the Slim DC-Link Drive Using Harmonic State Space Model

    DEFF Research Database (Denmark)

    Yang, Feng; Kwon, Jun Bum; Wang, Xiongfei

    2017-01-01

the shortcomings of the present harmonic analysis methods, such as time-domain simulation or Fourier analysis, this paper proposes a Harmonic State Space model to study the harmonic performance of this type of drive. In this study, the model is utilized to describe the behavior of the harmonic...... variation according to the switching instant, the harmonics at the steady-state condition, as well as the coupling between the multiple harmonic impedances. By using this model, the impact of the film capacitor and the grid inductance on the harmonic performance is derived. Simulation and experimental...

  13. Monthly Precipitation Input Data for the Central Valley Hydrologic Model (CVHM)

    Data.gov (United States)

    U.S. Geological Survey, Department of the Interior — This digital dataset contains the monthly precipitation for the Central Valley Hydrologic Model (CVHM). The Central Valley encompasses an approximate 50,000...

  14. Development of a General Form CO2 and Brine Flux Input Model

    Energy Technology Data Exchange (ETDEWEB)

    Mansoor, K. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Sun, Y. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Carroll, S. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States)

    2014-08-01

    The National Risk Assessment Partnership (NRAP) project is developing a science-based toolset for the quantitative analysis of the potential risks associated with changes in groundwater chemistry from CO2 injection. In order to address uncertainty probabilistically, NRAP is developing efficient, reduced-order models (ROMs) as part of its approach. These ROMs are built from detailed, physics-based process models to provide confidence in the predictions over a range of conditions. The ROMs are designed to reproduce accurately the predictions from the computationally intensive process models at a fraction of the computational time, thereby allowing the utilization of Monte Carlo methods to probe variability in key parameters. This report presents the procedures used to develop a generalized model for CO2 and brine leakage fluxes based on the output of a numerical wellbore simulation. The resulting generalized parameters and ranges reported here will be used for the development of third-generation groundwater ROMs.

  15. Inflow Locations and Magnitude Input Files to the Central Valley Hydrologic Model (CVHM)

    Data.gov (United States)

    U.S. Geological Survey, Department of the Interior — This digital dataset contains the name and location for the inflows to the surface-water network for the Central Valley Hydrologic Model (CVHM). The Central Valley...

  16. Scaling precipitation input to spatially distributed hydrological models by measured snow distribution

    OpenAIRE

    2016-01-01

Accurate knowledge on snow distribution in alpine terrain is crucial for various applications such as flood risk assessment, avalanche warning or managing water supply and hydro-power. To simulate the seasonal snow cover development in alpine terrain, the spatially distributed, physics-based model Alpine3D is suitable. The model is typically driven by spatial interpolations of observations from automatic weather stations (AWS), leading to errors in the spatial distribution of atmospheric forcing. ...

  17. Vadose zone-attenuated artificial recharge for input to a ground water model.

    Science.gov (United States)

    Nichols, William E; Wurstner, Signe K; Eslinger, Paul W

    2007-01-01

    Accurate representation of artificial recharge is requisite to calibration of a ground water model of an unconfined aquifer for a semiarid or arid site with a vadose zone that imparts significant attenuation of liquid transmission and substantial anthropogenic liquid discharges. Under such circumstances, artificial recharge occurs in response to liquid disposal to the vadose zone in areas that are small relative to the ground water model domain. Natural recharge, in contrast, is spatially variable and occurs over the entire upper boundary of a typical unconfined ground water model. An improved technique for partitioning artificial recharge from simulated total recharge for inclusion in a ground water model is presented. The improved technique is applied using data from the semiarid Hanford Site. From 1944 until the late 1980s, when Hanford's mission was the production of nuclear materials, the quantities of liquid discharged from production facilities to the ground vastly exceeded natural recharge. Nearly all hydraulic head data available for use in calibrating a ground water model at this site were collected during this period or later, when the aquifer was under the diminishing influence of the massive water disposals. The vadose zone is typically 80 to 90 m thick at the Central Plateau where most production facilities were located at this semiarid site, and its attenuation of liquid transmission to the aquifer can be significant. The new technique is shown to improve the representation of artificial recharge and thereby contribute to improvement in the calibration of a site-wide ground water model.

  18. MODEL PENENTUAN TARIF MENGGUNAKAN MINIMISASI BIAYA DAN PERMINTAAN INPUT UNTUK PERUSAHAAN MONOPOLI

    Directory of Open Access Journals (Sweden)

    Fitrawaty Fitrawaty

    2012-09-01

Full Text Available Provision of some public goods, such as drinking water, electricity, gas and telephone service, is in many countries generally undertaken by the government. This is because such a firm is a natural monopoly, meaning that it requires a huge investment, so that efficiency can only be achieved at a large scale of production. The problem is what price should be charged to the public. This study aimed to determine the price of such a good in theory. The method used is minimization of the cost of production (through the indirect cost function) subject to the constraints of the production function. Keywords: Minimum Cost, Price, Monopolistic
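A minimal numeric sketch of the pricing problem, under assumed linear demand and affine cost (not the paper's specification): for a natural monopoly, a regulator's break-even (average-cost) price solves p·q(p) = C(q(p)):

```python
import math

def average_cost_price(a, b, F, c):
    """Break-even (average-cost) price for a natural monopoly with linear
    demand q = a - b*p and cost C(q) = F + c*q: solve p*q(p) = C(q(p)).
    Returns the lower root, the regulator's preferred break-even price."""
    A, B, Cq = b, -(a + c * b), F + c * a   # quadratic b*p^2 - (a+c*b)*p + (F+c*a) = 0
    disc = B * B - 4 * A * Cq
    if disc < 0:
        raise ValueError("fixed cost too large: no break-even price exists")
    return (-B - math.sqrt(disc)) / (2 * A)

p = average_cost_price(a=100.0, b=2.0, F=300.0, c=5.0)
q = 100.0 - 2.0 * p
```

The resulting price exceeds marginal cost c just enough to cover the fixed cost F, which is the classic regulated outcome for a declining-average-cost firm.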

  19. Simplified models for new physics in vector boson scattering. Input for Snowmass 2013

    Energy Technology Data Exchange (ETDEWEB)

    Reuter, Juergen [Deutsches Elektronen-Synchrotron (DESY), Hamburg (Germany); Kilian, Wolfgang; Sekulla, Marco [Siegen Univ. (Germany). Theoretische Physik I

    2013-07-15

    In this contribution to the Snowmass process 2013 we give a brief review of how new physics could enter in the electroweak (EW) sector of the Standard Model (SM). This new physics, if it is directly accessible at low energies, can be parameterized by explicit resonances having certain quantum numbers. The extreme case is the decoupling limit where those resonances are very heavy and leave only traces in the form of deviations in the SM couplings. Translations are given into higher-dimensional operators leading to such deviations. As long as such resonances are introduced without a UV-complete theory behind it, these models suffer from unitarity violation of perturbative scattering amplitudes. We show explicitly how theoretically sane descriptions could be achieved by using a unitarization prescription that allows a correct description of such a resonance without specifying a UV-complete model.
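The unitarization prescription is not spelled out in the abstract; a standard textbook example is the K-matrix (Thales) projection, which maps any real Born partial-wave amplitude onto the Argand circle and thereby restores elastic unitarity:

```python
def unitarize(a0):
    """K-matrix (Thales) unitarization of a real Born partial-wave amplitude:
    a = a0 / (1 - i*a0) satisfies elastic unitarity Im(a) = |a|^2 exactly."""
    return a0 / (1.0 - 1j * a0)

# A Born amplitude that badly violates the unitarity bound |a| <= 1
# is mapped back onto the Argand circle.
a = unitarize(5.0)
```

For weak coupling (small a0) the unitarized amplitude reduces to the Born term, so low-energy physics is unchanged while resonant growth is tamed.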

  20. SPY: a new scission-point model based on microscopic inputs to predict fission fragment properties

    Energy Technology Data Exchange (ETDEWEB)

Panebianco, Stefano; Lemaître, Jean-François; Sida, Jean-Luc [CEA Centre de Saclay, Gif-sur-Yvette (France); Dubray, Noël [CEA, DAM, DIF, Arpajon (France); Goriely, Stephane [Institut d'Astronomie et d'Astrophysique, Université Libre de Bruxelles, Brussels (Belgium)

    2014-07-01

    Despite the difficulty in describing the whole fission dynamics, the main fragment characteristics can be determined in a static approach based on a so-called scission-point model. Within this framework, a new Scission-Point model for the calculations of fission fragment Yields (SPY) has been developed. This model, initially based on the approach developed by Wilkins in the late seventies, consists in performing a static energy balance at scission, where the two fragments are supposed to be completely separated so that their macroscopic properties (mass and charge) can be considered as fixed. Given the knowledge of the system state density, averaged quantities such as mass and charge yields, mean kinetic and excitation energy can then be extracted in the framework of a microcanonical statistical description. The main advantage of the SPY model is the introduction of one of the most up-to-date microscopic descriptions of the nucleus for the individual energy of each fragment and, in the future, for their state density. These quantities are obtained in the framework of HFB calculations using the Gogny nucleon-nucleon interaction, ensuring an overall coherence of the model. Starting from a description of the SPY model and its main features, a comparison between the SPY predictions and experimental data will be discussed for some specific cases, from light nuclei around mercury to major actinides. Moreover, extensive predictions over the whole chart of nuclides will be discussed, with particular attention to their implication in stellar nucleosynthesis. Finally, future developments, mainly concerning the introduction of microscopic state densities, will be briefly discussed. (author)
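The microcanonical weighting at scission can be caricatured with a Fermi-gas-like state density, a toy stand-in for the HFB-based energies and densities the SPY model actually uses; the numbers below are illustrative only:

```python
import math

def relative_yields(available_energies, a=0.1):
    """Toy statistical weighting in the spirit of a scission-point model:
    each fragmentation with available energy E* > 0 gets a Fermi-gas-like
    weight rho(E*) ~ exp(2*sqrt(a*E*)); negative E* is energetically closed."""
    weights = [math.exp(2.0 * math.sqrt(a * e)) if e > 0 else 0.0
               for e in available_energies]
    total = sum(weights)
    return [w / total for w in weights]

# Illustrative energy balance (MeV) for three competing mass splits
yields = relative_yields([20.0, 25.0, -3.0])
```

Splits with more available energy at scission receive exponentially larger statistical weight, which is the qualitative origin of the predicted yield distributions.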

  1. SPY: a new scission-point model based on microscopic inputs to predict fission fragment properties

    Science.gov (United States)

    Panebianco, Stefano; Dubray, Nöel; Goriely, Stéphane; Hilaire, Stéphane; Lemaître, Jean-François; Sida, Jean-Luc

    2014-04-01

    Despite the difficulty in describing the whole fission dynamics, the main fragment characteristics can be determined in a static approach based on a so-called scission-point model. Within this framework, a new Scission-Point model for the calculations of fission fragment Yields (SPY) has been developed. This model, initially based on the approach developed by Wilkins in the late seventies, consists in performing a static energy balance at scission, where the two fragments are supposed to be completely separated so that their macroscopic properties (mass and charge) can be considered as fixed. Given the knowledge of the system state density, averaged quantities such as mass and charge yields, mean kinetic and excitation energy can then be extracted in the framework of a microcanonical statistical description. The main advantage of the SPY model is the introduction of one of the most up-to-date microscopic descriptions of the nucleus for the individual energy of each fragment and, in the future, for their state density. These quantities are obtained in the framework of HFB calculations using the Gogny nucleon-nucleon interaction, ensuring an overall coherence of the model. Starting from a description of the SPY model and its main features, a comparison between the SPY predictions and experimental data will be discussed for some specific cases, from light nuclei around mercury to major actinides. Moreover, extensive predictions over the whole chart of nuclides will be discussed, with particular attention to their implication in stellar nucleosynthesis. Finally, future developments, mainly concerning the introduction of microscopic state densities, will be briefly discussed.

  2. SPY: a new scission-point model based on microscopic inputs to predict fission fragment properties

    Directory of Open Access Journals (Sweden)

    Panebianco Stefano

    2014-04-01

Full Text Available Despite the difficulty in describing the whole fission dynamics, the main fragment characteristics can be determined in a static approach based on a so-called scission-point model. Within this framework, a new Scission-Point model for the calculations of fission fragment Yields (SPY) has been developed. This model, initially based on the approach developed by Wilkins in the late seventies, consists in performing a static energy balance at scission, where the two fragments are supposed to be completely separated so that their macroscopic properties (mass and charge) can be considered as fixed. Given the knowledge of the system state density, averaged quantities such as mass and charge yields, mean kinetic and excitation energy can then be extracted in the framework of a microcanonical statistical description. The main advantage of the SPY model is the introduction of one of the most up-to-date microscopic descriptions of the nucleus for the individual energy of each fragment and, in the future, for their state density. These quantities are obtained in the framework of HFB calculations using the Gogny nucleon-nucleon interaction, ensuring an overall coherence of the model. Starting from a description of the SPY model and its main features, a comparison between the SPY predictions and experimental data will be discussed for some specific cases, from light nuclei around mercury to major actinides. Moreover, extensive predictions over the whole chart of nuclides will be discussed, with particular attention to their implication in stellar nucleosynthesis. Finally, future developments, mainly concerning the introduction of microscopic state densities, will be briefly discussed.

  3. Input/output models for general aviation piston-prop aircraft fuel economy

    Science.gov (United States)

    Sweet, L. M.

    1982-01-01

A fuel-efficient cruise performance model for general aviation piston-engine airplanes was tested. The following equations were developed: (1) the standard atmosphere; (2) airframe-propeller-atmosphere cruise performance; and (3) naturally aspirated engine cruise performance. Adjustments are made to the compact cruise performance model as follows: corrected quantities, corrected performance plots, and algebraic equations; the model maximizes R with or without constraints and appears suitable for airborne microprocessor implementation. The following hardware is recommended: ignition timing regulator, fuel-air mass ratio controller, microprocessor, sensors, and displays.
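As an illustration of the kind of cruise-range optimization such a model supports, the classic Breguet range relation for propeller aircraft can be sketched as follows. This is a textbook result, not the specific model identified in the report, and all the numeric values are made up:

```python
import math

def breguet_range_km(eta_prop, sfc_kg_per_J, lift_to_drag, w_initial, w_final):
    """Breguet range (km) for a propeller aircraft: R = (eta / (g*c)) * (L/D) * ln(W0/W1).
    eta_prop: propeller efficiency (-); sfc_kg_per_J: specific fuel consumption (kg/J);
    lift_to_drag: L/D ratio (-); w_initial/w_final: gross weights in consistent units.
    A standard textbook relation, not the report's cruise model."""
    g = 9.81  # m/s^2
    range_m = eta_prop / (g * sfc_kg_per_J) * lift_to_drag * math.log(w_initial / w_final)
    return range_m / 1000.0

# Illustrative numbers: eta = 0.8, SFC ~ 7.0e-8 kg/J, L/D = 11, 5% of weight burned as fuel
r = breguet_range_km(0.8, 7.0e-8, 11.0, 1000.0, 950.0)
```

Maximizing R then amounts to flying at the speed and mixture setting that maximize the product of propulsive efficiency and L/D while minimizing specific fuel consumption, which is the trade-off the report's hardware recommendations target.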

  4. A stock-flow consistent input-output model with applications to energy price shocks, interest rates, and heat emissions

    Science.gov (United States)

    Berg, Matthew; Hartley, Brian; Richters, Oliver

    2015-01-01

    By synthesizing stock-flow consistent models, input-output models, and aspects of ecological macroeconomics, a method is developed to simultaneously model monetary flows through the financial system, flows of produced goods and services through the real economy, and flows of physical materials through the natural environment. This paper highlights the linkages between the physical environment and the economic system by emphasizing the role of the energy industry. A conceptual model is developed in general form with an arbitrary number of sectors, while emphasizing connections with the agent-based, econophysics, and complexity economics literature. First, we use the model to challenge claims that 0% interest rates are a necessary condition for a stationary economy and conduct a stability analysis within the parameter space of interest rates and consumption parameters of an economy in stock-flow equilibrium. Second, we analyze the role of energy price shocks in contributing to recessions, incorporating several propagation and amplification mechanisms. Third, implied heat emissions from energy conversion and the effect of anthropogenic heat flux on climate change are considered in light of a minimal single-layer atmosphere climate model, although the model is only implicitly, not explicitly, linked to the economic model.
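The input-output core that such a model embeds can be illustrated with the standard Leontief quantity relation x = (I - A)^(-1) d, where A holds technical coefficients and d is final demand. The two-sector coefficients below are invented for illustration:

```python
def leontief_output(A, d):
    """Solve x = A x + d for a 2-sector economy: gross output x given the
    technical-coefficient matrix A and final demand d, via the closed-form
    2x2 inverse of (I - A). Illustrative numbers only."""
    a, b = 1.0 - A[0][0], -A[0][1]
    c, e = -A[1][0], 1.0 - A[1][1]
    det = a * e - b * c
    x0 = (e * d[0] - b * d[1]) / det
    x1 = (-c * d[0] + a * d[1]) / det
    return [x0, x1]

A = [[0.2, 0.3],   # inputs of each sector needed per unit output of sector 0
     [0.1, 0.4]]   # inputs of each sector needed per unit output of sector 1
d = [100.0, 50.0]  # final demand
x = leontief_output(A, d)
```

A stock-flow consistent extension then tracks the monetary counterparts of these flows period by period, so that every real flow has a matching financial entry.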

  5. Estimation of evapotranspiration for a small catchment as an input for rainfall-runoff model

    Science.gov (United States)

    Hejduk, Leszek; Banasik, Kazimierz; Krajewski, Adam; Mackiewicz, Marta

    2014-05-01

One of the methods for determination of floods is the application of mathematical rainfall-runoff models. Usually, it is possible to distinguish a number of steps in the calculation of a flood hydrograph. The first step is the calculation of effective rainfall, which is the difference between total rainfall and losses (the amount of water which does not participate in flood formation, e.g. through interception, infiltration, or evaporation). One of the most common methods for determination of effective rainfall is the USDA-SCS method, where losses are connected with soil type, vegetation and soil moisture. These factors are combined in the Curve Number (CN) factor. However, there is also a different approach for determination of losses, where soil moisture is calculated as a function of evapotranspiration. In this study, meteorological data from the years 2002-2012 were used for determination of daily evapotranspiration (ETo) by use of the FAO Penman-Monteith model for the Zagozdzonka river catchment in central Poland. Due to gaps in the meteorological data, other, simpler methods of ETo calculation were also applied, such as the Hargreaves model and the Grabarczyk (1976) model. Based on the results received, the uncertainty of ETo was calculated. Grabarczyk S., 1976. Polowe zuzycie wody a czynniki meteorologiczne. Zesz. Probl. Post. Nauk Rol. 181, 495-511. ACKNOWLEDGMENTS: The investigation described in the poster is part of the research project KORANET, funded by the PL-National Centre for Research and Development (NCBiR).
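Of the ETo methods mentioned, the Hargreaves model is the simplest to sketch: it needs only daily temperature extremes and extraterrestrial radiation. The formula below is the standard published form, with illustrative inputs rather than Zagozdzonka data:

```python
import math

def hargreaves_eto(t_min, t_max, ra_mm_day):
    """Daily reference evapotranspiration (mm/day) via the Hargreaves equation:
    ETo = 0.0023 * Ra * (Tmean + 17.8) * sqrt(Tmax - Tmin),
    with t_min/t_max in deg C and Ra (extraterrestrial radiation) expressed as
    equivalent evaporation in mm/day. Inputs below are illustrative."""
    t_mean = (t_min + t_max) / 2.0
    return 0.0023 * ra_mm_day * (t_mean + 17.8) * math.sqrt(t_max - t_min)

# A mid-summer day at mid-latitudes: Tmin = 14 degC, Tmax = 26 degC, Ra ~ 16.8 mm/day
eto = hargreaves_eto(14.0, 26.0, 16.8)
```

Its appeal for gap-filling is exactly this minimal data requirement: when the radiation, humidity, or wind records needed by Penman-Monteith are missing, temperature extremes are usually still available.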

  6. Treatment of input uncertainty in hydrologic modeling: Doing hydrology backward with Markov chain Monte Carlo simulation

    NARCIS (Netherlands)

    Vrugt, J.A.; Braak, ter C.J.F.; Clark, M.P.; Hyman, J.M.; Robinson, B.A.

    2008-01-01

    There is increasing consensus in the hydrologic literature that an appropriate framework for streamflow forecasting and simulation should include explicit recognition of forcing and parameter and model structural error. This paper presents a novel Markov chain Monte Carlo (MCMC) sampler, entitled
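A minimal random-walk Metropolis step illustrates the basic MCMC machinery that such samplers build on (this is a generic sketch, not the specific sampler proposed in the paper; the standard-normal target is chosen only so the result is checkable):

```python
import math
import random

def metropolis(logpost, x0, n_steps, step=1.0, seed=42):
    """Random-walk Metropolis sampler for a 1-D log-posterior: propose a
    Gaussian perturbation and accept with probability min(1, posterior ratio).
    A minimal illustration of MCMC, not the sampler of the paper."""
    rng = random.Random(seed)
    x, lp = x0, logpost(x0)
    chain = []
    for _ in range(n_steps):
        xp = x + rng.gauss(0.0, step)
        lpp = logpost(xp)
        if math.log(rng.random()) < lpp - lp:  # accept with prob min(1, ratio)
            x, lp = xp, lpp
        chain.append(x)
    return chain

# Target: standard normal; chain moments should approach mean 0, variance 1
chain = metropolis(lambda x: -0.5 * x * x, 0.0, 8000)
mean = sum(chain) / len(chain)
```

In the hydrologic setting, `logpost` would instead score rainfall multipliers, model parameters, and structural-error terms against observed streamflow.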

  7. Packaging tomorrow : modelling the material input for European packaging in the 21st century

    NARCIS (Netherlands)

    Hekkert, M.P.; Joosten, L.A.J.; Worrell, E.

    2006-01-01

This report is a result of the MATTER project (MATerials Technology for CO2 Emission Reduction). The project focuses on CO2 emission reductions that are related to the Western European materials system. The total impact of the reduction options for different scenarios will be modeled in MARKAL (MAR

  8. Treatment of input uncertainty in hydrologic modeling: Doing hydrology backward with Markov chain Monte Carlo simulation

    NARCIS (Netherlands)

    Vrugt, J.A.; Braak, ter C.J.F.; Clark, M.P.; Hyman, J.M.; Robinson, B.A.

    2008-01-01

    There is increasing consensus in the hydrologic literature that an appropriate framework for streamflow forecasting and simulation should include explicit recognition of forcing and parameter and model structural error. This paper presents a novel Markov chain Monte Carlo (MCMC) sampler, entitled di

  9. Modeling chronic diseases: the diabetes module. Justification of (new) input data

    NARCIS (Netherlands)

    Baan CA; Bos G; Jacobs-van der Bruggen MAM; Baan CA; Bos G; Jacobs-van der Bruggen MAM; PZO

    2005-01-01

    The RIVM chronic disease model (CDM) is an instrument designed to estimate the effects of changes in the prevalence of risk factors for chronic diseases on disease burden and mortality. To enable the computation of the effects of various diabetes prevention scenarios, the CDM has been updated and

  10. GALEV evolutionary synthesis models – I. Code, input physics and web

    NARCIS (Netherlands)

    Kotulla, R.; Fritze, U.; Weilbacher, P.; Anders, P.

    2009-01-01

    GALEV (GALaxy EVolution) evolutionary synthesis models describe the evolution of stellar populations in general, of star clusters as well as of galaxies, both in terms of resolved stellar populations and of integrated light properties over cosmological time-scales of ≥13 Gyr from the onset of star f

  11. An improved statistical model for linear antenna input impedance in an electrically large cavity.

    Energy Technology Data Exchange (ETDEWEB)

    Johnson, William Arthur; Warne, Larry Kevin; Jorgenson, Roy Eberhardt; Lee, Kelvin S. H. (ITT Industries/AES, Los Angeles, CA)

    2005-03-01

    This report presents a modification of a previous model for the statistical distribution of linear antenna impedance. With this modification a simple formula is determined which yields accurate results for all ratios of modal spectral width to spacing. It is shown that the reactance formula approaches the known unit Lorentzian in the lossless limit.

  12. What input data are needed to accurately model electromagnetic fields from mobile phone base stations?

    NARCIS (Netherlands)

    Beekhuizen, Johan; Kromhout, Hans; Bürgi, Alfred; Huss, Anke; Vermeulen, Roel

    2015-01-01

    The increase in mobile communication technology has led to concern about potential health effects of radio frequency electromagnetic fields (RF-EMFs) from mobile phone base stations. Different RF-EMF prediction models have been applied to assess population exposure to RF-EMF. Our study examines what

  13. Long-term solar UV radiation reconstructed by ANN modelling with emphasis on spatial characteristics of input data

    Science.gov (United States)

    Feister, U.; Junk, J.; Woldt, M.; Bais, A.; Helbig, A.; Janouch, M.; Josefsson, W.; Kazantzidis, A.; Lindfors, A.; den Outer, P. N.; Slaper, H.

    2008-06-01

Artificial Neural Networks (ANN) are efficient tools to derive solar UV radiation from measured meteorological parameters such as global radiation, aerosol optical depths and atmospheric column ozone. The ANN model has been tested with different combinations of data from the two sites Potsdam and Lindenberg, and used to reconstruct solar UV radiation at eight European sites back more than 100 years. Special emphasis will be given to the discussion of small-scale characteristics of input data to the ANN model. Annual totals of UV radiation derived from reconstructed daily UV values reflect interannual variations and long-term patterns that are compatible with variabilities and changes of measured input data, in particular global dimming by about 1980/1990, subsequent global brightening, volcanic eruption effects such as that of Mt. Pinatubo, and the long-term ozone decline since the 1970s. Patterns of annual erythemal UV radiation are very similar at sites located at latitudes close to each other, but different patterns occur between UV radiation at sites in different latitude regions.

  14. Long-term solar UV radiation reconstructed by ANN modelling with emphasis on spatial characteristics of input data

    Directory of Open Access Journals (Sweden)

    U. Feister

    2008-06-01

Full Text Available Artificial Neural Networks (ANN) are efficient tools to derive solar UV radiation from measured meteorological parameters such as global radiation, aerosol optical depths and atmospheric column ozone. The ANN model has been tested with different combinations of data from the two sites Potsdam and Lindenberg, and used to reconstruct solar UV radiation at eight European sites back more than 100 years. Special emphasis will be given to the discussion of small-scale characteristics of input data to the ANN model.

    Annual totals of UV radiation derived from reconstructed daily UV values reflect interannual variations and long-term patterns that are compatible with variabilities and changes of measured input data, in particular global dimming by about 1980/1990, subsequent global brightening, volcanic eruption effects such as that of Mt. Pinatubo, and the long-term ozone decline since the 1970s. Patterns of annual erythemal UV radiation are very similar at sites located at latitudes close to each other, but different patterns occur between UV radiation at sites in different latitude regions.

  15. Dynamic Modeling and Simulation of a Thermoelectric-Solar Hybrid Energy System Using an Inverse Dynamic Analysis Input Shaper

    Directory of Open Access Journals (Sweden)

    A. M. Yusop

    2014-01-01

Full Text Available This study presents a behavioral model of the thermal temperature and power generation of a thermoelectric-solar hybrid energy system exposed to dynamic transient sources. In the development of thermoelectric-solar hybrid energy systems, studies have focused on the regulation of the two subsystems separately. In practice, a separate control system affects hardware pricing. In this study, an inverse dynamic analysis shaping technique based on an exponential function is applied to a solar array (SA) to stabilize its output voltage before this technique is combined with a thermoelectric module (TEM). This method can be used to estimate the maximum power point of the hybrid system by initially shaping the input voltage of the SA. The behavior of the overall system can be estimated by controlling the behavior of the SA, such that the SA can follow the output voltage of the TEM, as the time constant of the TEM is greater than that of the SA. Moreover, by employing a continuous and differentiable function, the required output behavior of the hybrid system can be attained. Data validating the model are obtained from experiments, together with predicted values of the temperature, internal resistance, and current attributes of the TEM. The simulation results show that the proposed input shaper can be used to trigger the output voltage of the SA to follow the TEM behavior under transient conditions.

  16. Correlation-based analysis and generation of multiple spike trains using hawkes models with an exogenous input.

    Science.gov (United States)

    Krumin, Michael; Reutsky, Inna; Shoham, Shy

    2010-01-01

    The correlation structure of neural activity is believed to play a major role in the encoding and possibly the decoding of information in neural populations. Recently, several methods were developed for exactly controlling the correlation structure of multi-channel synthetic spike trains (Brette, 2009; Krumin and Shoham, 2009; Macke et al., 2009; Gutnisky and Josic, 2010; Tchumatchenko et al., 2010) and, in a related work, correlation-based analysis of spike trains was used for blind identification of single-neuron models (Krumin et al., 2010), for identifying compact auto-regressive models for multi-channel spike trains, and for facilitating their causal network analysis (Krumin and Shoham, 2010). However, the diversity of correlation structures that can be explained by the feed-forward, non-recurrent, generative models used in these studies is limited. Hence, methods based on such models occasionally fail when analyzing correlation structures that are observed in neural activity. Here, we extend this framework by deriving closed-form expressions for the correlation structure of a more powerful multivariate self- and mutually exciting Hawkes model class that is driven by exogenous non-negative inputs. We demonstrate that the resulting Linear-Non-linear-Hawkes (LNH) framework is capable of capturing the dynamics of spike trains with a generally richer and more biologically relevant multi-correlation structure, and can be used to accurately estimate the Hawkes kernels or the correlation structure of external inputs in both simulated and real spike trains (recorded from visually stimulated mouse retinal ganglion cells). We conclude by discussing the method's limitations and the broader significance of strengthening the links between neural spike train analysis and classical system identification.
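A univariate special case of the self-exciting Hawkes model class can be simulated with Ogata's thinning algorithm; the exponential kernel and all parameter values below are illustrative assumptions, not those fitted in the paper:

```python
import math
import random

def simulate_hawkes(mu, alpha, beta, horizon, seed=1):
    """Ogata's thinning algorithm for a univariate Hawkes process with an
    exponential kernel: lambda(t) = mu + sum_i alpha * exp(-beta * (t - t_i)).
    Between accepted events the intensity only decays, so the intensity at the
    current time is a valid upper bound for the thinning step.
    Stationarity requires alpha/beta < 1."""
    rng = random.Random(seed)
    t, events = 0.0, []
    while t < horizon:
        lam_bar = mu + sum(alpha * math.exp(-beta * (t - ti)) for ti in events)
        t += rng.expovariate(lam_bar)          # candidate inter-event time
        if t >= horizon:
            break
        lam_t = mu + sum(alpha * math.exp(-beta * (t - ti)) for ti in events)
        if rng.random() <= lam_t / lam_bar:    # keep with prob lam_t / lam_bar
            events.append(t)
    return events

# Branching ratio alpha/beta = 0.4; expected rate = mu / (1 - 0.4) ~ 0.83 events/unit
ev = simulate_hawkes(mu=0.5, alpha=0.8, beta=2.0, horizon=200.0)
```

The multivariate LNH framework of the paper additionally drives such kernels with an exogenous non-negative input passed through a static non-linearity.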

  17. Surviving mossy cells enlarge and receive more excitatory synaptic input in a mouse model of temporal lobe epilepsy.

    Science.gov (United States)

    Zhang, Wei; Thamattoor, Ajoy K; LeRoy, Christopher; Buckmaster, Paul S

    2015-05-01

    Numerous hypotheses of temporal lobe epileptogenesis have been proposed, and several involve hippocampal mossy cells. Building on previous hypotheses we sought to test the possibility that after epileptogenic injuries surviving mossy cells develop into super-connected seizure-generating hub cells. If so, they might require more cellular machinery and consequently have larger somata, elongate their dendrites to receive more synaptic input, and display higher frequencies of miniature excitatory synaptic currents (mEPSCs). To test these possibilities pilocarpine-treated mice were evaluated using GluR2-immunocytochemistry, whole-cell recording, and biocytin-labeling. Epileptic pilocarpine-treated mice displayed substantial loss of GluR2-positive hilar neurons. Somata of surviving neurons were 1.4-times larger than in controls. Biocytin-labeled mossy cells also were larger in epileptic mice, but dendritic length per cell was not significantly different. The average frequency of mEPSCs of mossy cells recorded in the presence of tetrodotoxin and bicuculline was 3.2-times higher in epileptic pilocarpine-treated mice as compared to controls. Other parameters of mEPSCs were similar in both groups. Average input resistance of mossy cells in epileptic mice was reduced to 63% of controls, which is consistent with larger somata and would tend to make surviving mossy cells less excitable. Other intrinsic physiological characteristics examined were similar in both groups. Increased excitatory synaptic input is consistent with the hypothesis that surviving mossy cells develop into aberrantly super-connected seizure-generating hub cells, and soma hypertrophy is indirectly consistent with the possibility of axon sprouting. However, no obvious evidence of hyperexcitable intrinsic physiology was found. Furthermore, similar hypertrophy and hyper-connectivity has been reported for other neuron types in the dentate gyrus, suggesting mossy cells are not unique in this regard. Thus

  18. Adaptive control for an uncertain robotic manipulator with input saturations

    Institute of Scientific and Technical Information of China (English)

    Trong-Toan TRAN; Shuzhi Sam GE; Wei HE

    2016-01-01

In this paper, we address the control problem of an uncertain robotic manipulator with input saturations, unknown input scalings and disturbances. For this purpose, a model reference adaptive control-like (MRAC-like) scheme is used to handle the input saturations. The model reference is input-to-state stable (ISS) and driven by the errors between the required control signals and the input saturations. The uncertain parameters are dealt with by using the linear-in-the-parameters property of robotic dynamics, while unknown input scalings and disturbances are handled by a non-regressor based approach. Our design ensures that all the signals in the closed-loop system are bounded, and that the tracking error converges to a compact set which depends on the predetermined bounds of the control inputs. Simulation on a planar elbow manipulator with two joints is provided to illustrate the effectiveness of the proposed controller.
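A much-reduced illustration of input saturation, a saturated proportional law on a first-order plant rather than the MRAC-like design of the paper, still shows the key qualitative outcome: all signals remain bounded and the state converges to a point determined by the saturation limit and the gains (the plant and numbers are invented):

```python
def simulate_saturated_tracking(u_max=1.0, k=4.0, dt=0.01, steps=3000):
    """First-order plant x' = -x + u under u = sat(k * (r - x)), with the
    actuator clipped to |u| <= u_max. A minimal sketch of control under input
    saturation, not the paper's adaptive controller. With r = 0.5 and k = 4 the
    steady state solves x = 4 * (0.5 - x), i.e. x -> 0.4."""
    sat = lambda u: max(-u_max, min(u_max, u))
    x, r = 0.0, 0.5
    for _ in range(steps):
        u = sat(k * (r - x))
        x += dt * (-x + u)      # forward-Euler integration of the plant
    return x

x_final = simulate_saturated_tracking()
```

Early in the run the commanded input exceeds the limit and the actuator rides the saturation bound, which is exactly the regime the paper's ISS reference model is designed to handle gracefully.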

  19. The Canadian Defence Input-Output Model DIO Version 4.41

    Science.gov (United States)

    2011-09-01

Output models, for instance to study the regional benefits of different large procurement programmes, the data censorship limitation would...

  20. Modeling debris-covered glaciers: extension due to steady debris input

    Directory of Open Access Journals (Sweden)

    L. S. Anderson

    2015-11-01

Debris-forced glacier extension decreases the ratio of accumulation zone to total glacier area (AAR). The model reproduces first-order relationships between debris cover, AARs, and glacier surface velocities from glaciers in High Asia. We provide a quantitative, theoretical foundation to interpret the effect of debris cover on the moraine record, and to assess the effects of climate change on debris-covered glaciers.

  1. Development of NEXRAD Wind Retrievals as Input to Atmospheric Dispersion Models

    Energy Technology Data Exchange (ETDEWEB)

    Fast, Jerome D.; Newsom, Rob K.; Allwine, K Jerry; Xu, Qin; Zhang, Pengfei; Copeland, Jeffrey H.; Sun, Jenny

    2007-03-06

The objective of this study is to determine the feasibility of appropriately using routinely collected data from Doppler radars in Atmospheric Dispersion Models (ADMs) for emergency response. We have evaluated the computational efficiency and accuracy of two variational mathematical techniques that derive the u- and v-components of the wind from radial velocities obtained from Doppler radars. A review of the scientific literature indicated that the techniques employ significantly different approaches in applying the variational method: 2-D Variational (2DVar), developed by NOAA's (National Oceanic and Atmospheric Administration) National Severe Storms Laboratory (NSSL), and the Variational Doppler Radar Analysis System (VDRAS), developed by the National Center for Atmospheric Research (NCAR). We designed a series of numerical experiments in which both models employed the same horizontal domain and resolution, encompassing Oklahoma City for a two-week period during the summer of 2003, so that the computed wind retrievals could be fairly compared. Both models ran faster than real-time on a typical single dual-processor computer, indicating that they could be used to generate wind retrievals in near real-time. 2DVar executed ~2.5 times faster than VDRAS because of its simpler approach.
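The core retrieval idea, fitting horizontal wind components to radial velocities, can be sketched as a toy least-squares problem. Real 2DVar/VDRAS formulations add spatial smoothness and background constraints; the sketch below assumes a single uniform wind and azimuth measured clockwise from north:

```python
import math

def retrieve_uv(azimuths_deg, radial_velocities):
    """Least-squares fit of a uniform horizontal wind (u east, v north) to radial
    velocities, using Vr = u*sin(az) + v*cos(az). Solves the 2x2 normal
    equations directly. A toy version of the variational retrieval idea."""
    suu = svv = suv = bu = bv = 0.0
    for az, vr in zip(azimuths_deg, radial_velocities):
        s, c = math.sin(math.radians(az)), math.cos(math.radians(az))
        suu += s * s; svv += c * c; suv += s * c
        bu += s * vr; bv += c * vr
    det = suu * svv - suv * suv
    return (svv * bu - suv * bv) / det, (suu * bv - suv * bu) / det

# Synthetic check: a true wind of (u, v) = (3, -4) m/s observed at four azimuths
az = [0.0, 45.0, 90.0, 135.0]
vr = [3.0 * math.sin(math.radians(a)) - 4.0 * math.cos(math.radians(a)) for a in az]
u, v = retrieve_uv(az, vr)
```

With noise-free synthetic observations the fit recovers the true wind exactly, which is the basic sanity check such retrieval codes run before ingesting real radar volumes.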

  2. Optimization modeling of U.S. renewable electricity deployment using local input variables

    Science.gov (United States)

    Bernstein, Adam

For the past five years, state Renewable Portfolio Standard (RPS) laws have been a primary driver of renewable electricity (RE) deployments in the United States. However, four key trends currently developing: (i) lower natural gas prices, (ii) slower growth in electricity demand, (iii) challenges of balancing intermittent RE within the U.S. transmission regions, and (iv) fewer economical sites for RE development, may limit the efficacy of RPS laws over the remainder of the current RPS statutes' lifetime. An outsized proportion of U.S. RE build occurs in a small number of favorable locations, increasing the effects of these variables on marginal RE capacity additions. A state-by-state analysis is necessary to study the U.S. electric sector and to generate technology-specific generation forecasts. We used LP optimization modeling similar to the National Renewable Energy Laboratory (NREL) Renewable Energy Development System (ReEDS) to forecast RE deployment across the 8 U.S. states with the largest electricity load, and found state-level RE projections to year 2031 significantly lower than those implied in the Energy Information Administration (EIA) 2013 Annual Energy Outlook forecast. Additionally, the majority of states do not achieve their RPS targets in our forecast. Combined with the tendency of prior research and RE forecasts to focus on larger national and global scale models, we posit that further bottom-up state and local analysis is needed for more accurate policy assessment, forecasting, and ongoing revision of variables as parameter values evolve through time. Current optimization software eliminates much of the need for algorithm coding and programming, allowing for rapid model construction and updating across many customized state and local RE parameters.
Further, our results can be tested against the empirical outcomes that will be observed over the coming years, and the forecast deviation from the actuals can be attributed to discrete parameter
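With a single generation constraint, the LP deployment problem reduces to merit-order (cheapest-first) selection, which makes the structure of the optimization easy to see; the site names, capacities, and costs below are invented for illustration:

```python
def cheapest_build(sites, target_gwh):
    """Fill a renewable generation target from candidate sites in merit order
    (lowest levelized cost first). This is the one-constraint special case in
    which the LP solution reduces to greedy selection; with transmission,
    balancing, and RPS constraints a full LP solver is needed instead.
    Site data: (name, available generation in GWh, cost in $/MWh) -- made up."""
    build, total = [], 0.0
    for name, cap_gwh, cost in sorted(sites, key=lambda s: s[2]):
        take = min(cap_gwh, target_gwh - total)
        if take <= 0:
            break
        build.append((name, take))
        total += take
    return build, total

sites = [("wind_A", 40.0, 32.0), ("solar_B", 25.0, 45.0), ("wind_C", 30.0, 38.0)]
plan, met = cheapest_build(sites, 80.0)
```

The trends the abstract lists enter such a model as parameter changes: lower gas prices raise the effective cost threshold renewables must beat, and fewer economical sites shrink the cheap end of the supply curve.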

  3. Identification of a Manipulator Model Using the Input Error Method in the Mathematica Program

    Directory of Open Access Journals (Sweden)

    Leszek CEDRO

    2009-06-01

    Full Text Available The problem of parameter identification for a four-degree-of-freedom robot was solved using the Mathematica program. The identification was performed by means of specially developed differential filters [1]. Using the example of a manipulator, we analyze the capabilities of the Mathematica program that can be applied to solve problems related to the modeling, control, simulation and identification of a system [2]. The responses of the identification process for the variables and the values of the quality function are included.

  4. Creating Locally-Resolved Mobile-Source Emissions Inputs for Air Quality Modeling in Support of an Exposure Study in Detroit, Michigan, USA

    Directory of Open Access Journals (Sweden)

    Michelle Snyder

    2014-12-01

Full Text Available This work describes a methodology for modeling the impact of traffic-generated air pollutants in an urban area. The methodology presented here utilizes road network geometry, traffic volume, temporal allocation factors, fleet mixes, and emission factors to provide critical modeling inputs. These inputs, assembled from a variety of sources, are combined with meteorological inputs to generate link-based emissions for use in dispersion modeling to estimate pollutant concentration levels due to traffic. A case study implementing this methodology for a large health study is presented, including a sensitivity analysis of the modeling results reinforcing the importance of model inputs and identifying those having greater relative impact, such as fleet mix. In addition, an example use of local measurements of fleet activity to supplement model inputs is described, and its impact on the model outputs is discussed. We conclude that with detailed model inputs supported by local traffic measurements and meteorology, it is possible to capture the spatial and temporal patterns needed to accurately estimate exposure from traffic-related pollutants.
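For one link and one hour, the link-based emission computation described above reduces to volume times length times a fleet-weighted emission factor. All factor values in the sketch below are assumptions for illustration, not values from the study:

```python
def link_emission_grams(aadt, temporal_factor, fleet_mix, ef_g_per_vkm, length_km):
    """Hourly emission (g) on one road link: annual average daily traffic scaled
    by an hourly temporal allocation factor, times link length, times a
    fleet-mix-weighted emission factor. A sketch of the input chain the
    methodology assembles; numbers are illustrative."""
    hourly_vehicles = aadt * temporal_factor
    ef = sum(share * ef_g_per_vkm[cls] for cls, share in fleet_mix.items())
    return hourly_vehicles * length_km * ef

ef_table = {"car": 0.25, "truck": 1.8}   # g NOx per vehicle-km (assumed values)
mix = {"car": 0.92, "truck": 0.08}       # assumed fleet shares on this link
e = link_emission_grams(aadt=20000, temporal_factor=0.07, fleet_mix=mix,
                        ef_g_per_vkm=ef_table, length_km=1.5)
```

The sensitivity to fleet mix noted in the abstract is visible here: the small truck share contributes a large part of the weighted factor because heavy-duty emission factors are several times the light-duty ones.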

  5. Creating locally-resolved mobile-source emissions inputs for air quality modeling in support of an exposure study in Detroit, Michigan, USA.

    Science.gov (United States)

    Snyder, Michelle; Arunachalam, Saravanan; Isakov, Vlad; Talgo, Kevin; Naess, Brian; Valencia, Alejandro; Omary, Mohammad; Davis, Neil; Cook, Rich; Hanna, Adel

    2014-12-01

This work describes a methodology for modeling the impact of traffic-generated air pollutants in an urban area. The methodology presented here utilizes road network geometry, traffic volume, temporal allocation factors, fleet mixes, and emission factors to provide critical modeling inputs. These inputs, assembled from a variety of sources, are combined with meteorological inputs to generate link-based emissions for use in dispersion modeling to estimate pollutant concentration levels due to traffic. A case study implementing this methodology for a large health study is presented, including a sensitivity analysis of the modeling results reinforcing the importance of model inputs and identifying those having greater relative impact, such as fleet mix. In addition, an example use of local measurements of fleet activity to supplement model inputs is described, and its impact on the model outputs is discussed. We conclude that with detailed model inputs supported by local traffic measurements and meteorology, it is possible to capture the spatial and temporal patterns needed to accurately estimate exposure from traffic-related pollutants.

  6. Multiparameter Correction Intensity of Terrestrial Laser Scanning Data as AN Input for Rock Surface Modelling

    Science.gov (United States)

    Paleček, V.; Kubíček, P.

    2016-06-01

A large increase in the creation of 3D models of the objects all around us has been observed in the last few years, thanks to the rapid development of new advanced technologies for spatial data collection and robust software tools. Newly available airborne laser scanning data for the Czech Republic, provided in the form of the Digital terrain model of the fifth generation as irregularly spaced points, enable locating the majority of rock formations. However, the positional and height accuracy for this type of landform can show large errors in some cases. Therefore, it is necessary to start mapping using terrestrial laser scanning, with the possibility of adding point cloud data derived from ground or aerial photogrammetry. Intensity correction and noise removal is usually based on the distance between the measured objects and the laser scanner, the incidence angle of the beam, or the radiometric and topographic characteristics of the measured objects. This contribution describes the major undesirable effects that affect the quality of acquisition and processing of laser scanning data, and introduces solutions to some of these problems.
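A minimal range-and-incidence-angle intensity correction, one ingredient of the multiparameter corrections discussed above, can be sketched under the common diffuse-reflection approximation that received intensity scales as cos(theta)/R^2 (a simplified textbook form, not the paper's full procedure):

```python
import math

def correct_intensity(i_raw, range_m, incidence_deg, ref_range_m=10.0):
    """Normalize a TLS return intensity for range and incidence angle, assuming
    a diffuse (Lambertian) surface so that I ~ cos(theta) / R^2. The result is
    the intensity the same surface would return at ref_range_m and normal
    incidence. A simplified sketch of multiparameter intensity correction."""
    return i_raw * (range_m / ref_range_m) ** 2 / math.cos(math.radians(incidence_deg))

# Sanity check: the same target seen at 20 m and 60 deg incidence should correct
# back to its value at the reference geometry (10 m, 0 deg)
i_near = 1000.0  # arbitrary raw intensity at the reference geometry
i_far_raw = i_near * (10.0 / 20.0) ** 2 * math.cos(math.radians(60.0))
i_recovered = correct_intensity(i_far_raw, 20.0, 60.0)
```

Real corrections additionally account for near-range receiver effects and surface roughness, which is why the contribution above treats the problem as multiparameter rather than purely geometric.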

  7. A web application prototype for the multiscale modelling of seismic input

    CERN Document Server

    Vaccari, Franco

    2014-01-01

A web application prototype is described, aimed at the generation of synthetic seismograms for user-defined earthquake models. The web application's graphical user interface hides the complexity of the underlying computational engine, which is the outcome of the continuous evolution of sophisticated computer codes, some of which saw the light back in the mid-'80s. With the web application, even non-experts can produce ground shaking scenarios at the local or regional scale in very short times, depending on the complexity of the adopted source and medium models, without the need of a deep knowledge of the physics of the earthquake phenomenon. It may even allow neophytes to get some basic education in the field of seismology and seismic engineering, thanks to its simplified, intuitive, experimental approach to the matter. One of the most powerful features made available to the users is indeed the capability of executing quick parametric tests in near real-time, to explore the relations between each mo...

  8. Effect of the spatiotemporal variability of rainfall inputs in water quality integrated catchment modelling for dissolved oxygen concentrations

    Science.gov (United States)

    Moreno Ródenas, Antonio Manuel; Cecinati, Francesca; ten Veldhuis, Marie-Claire; Langeveld, Jeroen; Clemens, Francois

    2016-04-01

Maintaining water quality standards in highly urbanised hydrological catchments is a worldwide challenge. Water management authorities struggle to cope with a changing climate and an increase in pollution pressures. Water quality modelling has been used as a decision support tool for investment and regulatory developments. This approach led to the development of integrated catchment models (ICM), which account for the link between the urban/rural hydrology and the in-river pollutant dynamics. In the modelled system, rainfall triggers the drainage systems of urban areas scattered along a river. When flow exceeds the sewer infrastructure capacity, untreated wastewater enters the natural system through combined sewer overflows. This results in a degradation of the river water quality, depending on the magnitude of the emission and the river conditions. Thus, being capable of representing these dynamics in the modelling process is key for a correct assessment of the water quality. In many urbanised hydrological systems the distances between draining sewer infrastructures go beyond the de-correlation length of rainfall processes, especially for convective summer storms. Hence, the spatial and temporal scales of the selected rainfall inputs are expected to affect water quality dynamics. The objective of this work is to evaluate how the use of rainfall data from different sources and with different space-time characteristics affects modelled output concentrations of dissolved oxygen in a simplified ICM. The study area is located at the Dommel, a relatively small and sensitive river flowing through the city of Eindhoven (The Netherlands). This river stretch receives the discharge of the 750,000 p.e. WWTP of Eindhoven and of over 200 combined sewer overflows scattered along its length. A pseudo-distributed water quality model has been developed in WEST (mikedhi.com); this is a lumped, physically based model that accounts for urban drainage processes, WWTP and river dynamics for several
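The river DO dynamics that such an ICM resolves can be caricatured by the classic Streeter-Phelps oxygen-sag curve, which describes the dissolved-oxygen deficit downstream of an organic load such as a combined sewer overflow. This is a textbook relation, not the WEST model, and the rates and loads below are illustrative:

```python
import math

def streeter_phelps_deficit(t_days, l0, d0, kd=0.35, ka=0.7):
    """Classic Streeter-Phelps oxygen-sag curve: DO deficit D(t) downstream of
    an organic (BOD) load. l0 = initial BOD (mg/L), d0 = initial deficit (mg/L),
    kd = deoxygenation rate, ka = reaeration rate (1/day, ka != kd).
    A textbook sketch, not the WEST river model."""
    return (kd * l0 / (ka - kd)) * (math.exp(-kd * t_days) - math.exp(-ka * t_days)) \
           + d0 * math.exp(-ka * t_days)

# BOD load L0 = 10 mg/L with an initial deficit of 1 mg/L:
# the deficit rises toward a sag minimum in DO, then the river recovers
d2 = streeter_phelps_deficit(2.0, 10.0, 1.0)
```

The spatiotemporal rainfall question in the abstract maps onto where and when such loads (the CSO spills) are injected along the river, which shifts the timing and depth of the resulting oxygen sags.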

  9. Modeling of the impact of Rhone River nutrient inputs on the dynamics of planktonic diversity

    Science.gov (United States)

    Alekseenko, Elena; Baklouti, Melika; Garreau, Pierre; Guyennon, Arnaud; Carlotti, François

    2014-05-01

Recent studies devoted to the Mediterranean Sea highlight that a large number of uncertainties still exist, particularly as regards the variations in elemental stoichiometry of all compartments of pelagic ecosystems (The MerMex Group, 2011; Pujo-Pay et al., 2011; Malanotte-Rizzoli and the Pan-Med Group, 2012). Moreover, during the last two decades it was observed that the inorganic N:P ratio among all the Mediterranean rivers, including the Rhone River, has dramatically increased, thus strengthening the P-limitation in Mediterranean waters (Ludwig et al., 2009; The MerMex Group, 2011) and increasing the anomaly in the N:P ratio of the Gulf of Lions and the whole western part of the NW Mediterranean. The time scales at which such a change will impact the biogeochemical stocks and fluxes of the Gulf of Lions and of the whole NW Mediterranean Sea still remain unknown. In the same way, it is still uncertain how this increase in the N:P ratio will modify the composition of the trophic web, and potentially lead to regime shifts by favouring, for example, one of the classical food chains of the sea considered in Parsons & Lalli (2002). To address this question, the Eco3M-MED biogeochemical model (Baklouti et al., 2006a,b; Alekseenko et al., 2014), representing the first trophic levels from bacteria to mesozooplankton, coupled with the hydrodynamical model MARS3D (Lazure & Dumas, 2008), is used. This model has already been partially validated (Alekseenko et al., 2014), and the fact that it describes each biogenic compartment in terms of its abundance (for organisms) and its carbon, phosphorus, nitrogen and chlorophyll content (for autotrophs) implies that all the information on the intracellular status of organisms and on the element(s) that limit(s) their growth will be available. The N:P ratios in water, organisms and the exported material will also be analyzed. In practice, the work will first consist in running different scenarios starting from similar initial early winter

  10. DECISION MAKING MODELING OF CONCRETE REQUIREMENTS

    Directory of Open Access Journals (Sweden)

    Suhartono Irawan

    2001-01-01

    Full Text Available This paper presents the results of an experimental evaluation of predicted versus achieved concrete strength. The scope of the evaluation is the optimisation of the cement content for different concrete grades, obtained by bringing the target mean value of test cubes closer to the required characteristic strength value through a reduction of the standard deviation. Abstract in Bahasa Indonesia: concrete mix design, acceptance control, optimisation, cement content.

  11. A MODEL FOR ALIGNING SOFTWARE PROJECTS REQUIREMENTS WITH PROJECT TEAM MEMBERS REQUIREMENTS

    Directory of Open Access Journals (Sweden)

    Robert Hans

    2013-02-01

    Full Text Available The fast-paced, dynamic environment within which information and communication technology (ICT) projects are run, as well as ICT professionals’ constantly changing requirements, present a challenge for project managers in terms of aligning projects’ requirements with project team members’ requirements. This paper argues that if a project’s requirements are properly aligned with team members’ requirements, the result is a balanced decision approach; moreover, such an alignment realizes employees’ needs as well as meeting the project’s needs. This paper presents a Project’s requirements and project Team members’ requirements (PrTr) alignment model and argues that a balanced decision which meets both a software project’s requirements and team members’ requirements can be achieved through the application of the PrTr alignment model.

  12. Supporting requirements model evolution throughout the system life-cycle

    OpenAIRE

    Ernst, Neil; Mylopoulos, John; Yu, Yijun; Ngyuen, Tien T.

    2008-01-01

    Requirements models are essential not just during system implementation, but also for managing system changes post-implementation. Such models should be supported by a requirements model management framework that allows users to create, manage and evolve models of domains, requirements, code and other design-time artifacts, along with traceability links between their elements. We propose a comprehensive framework which delineates the necessary operations and elements, and then describe a tool implementation.

  13. Input Parameters for Models of Energetic Electrons Fluxes at the Geostationary Orbit

    Institute of Scientific and Technical Information of China (English)

    V. I. Degtyarev; G.V. Popov; B. S. Xue; S.E. Chudnenko

    2005-01-01

    The results of a cross-correlation analysis between electron fluxes (with energies of > 0.6 MeV, > 2.0 MeV and > 4.0 MeV), geomagnetic indices and solar wind parameters are presented in this paper. It is found that the electron fluxes are controlled not only by the geomagnetic indices but also by the solar wind parameters, with the solar wind velocity showing the strongest relation to the electron fluxes. The numerical value of the relation efficiency of the external parameters with the highly energetic electron fluxes shows a periodicity. Preliminary results of day-ahead forecasts of the daily averaged electron fluxes, based on a neural network model, are also presented.
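A lagged cross-correlation analysis of the kind described above can be sketched as follows. The series and the 2-day lag are synthetic stand-ins for the solar wind and electron flux data, not the authors' pipeline:

```python
import numpy as np

# Hypothetical illustration: lagged cross-correlation between a driver
# (e.g. solar wind speed) and a response (e.g. >2 MeV electron flux).
rng = np.random.default_rng(0)
n = 500
driver = rng.normal(size=n)
lag_true = 2                       # response follows driver with a 2-day lag
response = np.roll(driver, lag_true) + 0.3 * rng.normal(size=n)
response[:lag_true] = 0.0          # discard the wrapped-around samples

def lagged_corr(x, y, max_lag):
    """Pearson correlation of y against x for each candidate lag."""
    out = {}
    for lag in range(max_lag + 1):
        if lag == 0:
            out[lag] = np.corrcoef(x, y)[0, 1]
        else:
            out[lag] = np.corrcoef(x[:-lag], y[lag:])[0, 1]
    return out

corrs = lagged_corr(driver, response, max_lag=5)
best_lag = max(corrs, key=corrs.get)   # lag with the strongest correlation
```

The lag maximising the correlation recovers the built-in delay, which is the basic diagnostic underlying relation-efficiency analyses of this type.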

  14. Models of Angular Momentum Input to a Circumterrestrial Swarm from Encounters with Heliocentric Planetesimals

    Science.gov (United States)

    Davis, D. R.; Greenberg, R.; Hebert, F.

    1985-01-01

    Models of lunar origin in which the Moon accretes in orbit about the Earth from material approaching the Earth from heliocentric orbits must overcome a fundamental problem: the approach orbits of such material would be, in the simplest approximation, equally likely to be prograde or retrograde about the Earth, with the result that accretion of such material adds mass but not angular momentum to circumterrestrial satellites. Satellite orbits would then decay due to the resulting drag, ultimately impacting onto the Earth. One possibility for adding both material and angular momentum to Earth orbit is investigated: an imbalance in the delivered angular momentum between prograde and retrograde Earth-passing orbits, which arises from the three-body dynamics of planetesimals approaching the Earth from heliocentric space. In order to study angular momentum delivery to circumterrestrial satellites, the near-Earth velocities were numerically computed as a function of distance from the Earth for a large array of orbits systematically spanning heliocentric phase space.

  15. Supply Chain Vulnerability Analysis Using Scenario-Based Input-Output Modeling: Application to Port Operations.

    Science.gov (United States)

    Thekdi, Shital A; Santos, Joost R

    2016-05-01

    Disruptive events such as natural disasters, loss or reduction of resources, work stoppages, and emergent conditions have potential to propagate economic losses across trade networks. In particular, disruptions to the operation of container port activity can be detrimental for international trade and commerce. Risk assessment should anticipate the impact of port operation disruptions with consideration of how priorities change due to uncertain scenarios and guide investments that are effective and feasible for implementation. Priorities for protective measures and continuity of operations planning must consider the economic impact of such disruptions across a variety of scenarios. This article introduces new performance metrics to characterize resiliency in interdependency modeling and also integrates scenario-based methods to measure economic sensitivity to sudden-onset disruptions. The methods will be demonstrated on a U.S. port responsible for handling $36.1 billion of cargo annually. The methods will be useful to port management, private industry supply chain planning, and transportation infrastructure management.

  16. Effect of delayed response in growth on the dynamics of a chemostat model with impulsive input

    Energy Technology Data Exchange (ETDEWEB)

    Jiao Jianjun [School of Mathematics and Statistics, Huazhong University of Science and Technology, Wuhan 430074 (China) and Guizhou Key Laboratory of Economics System Simulation, Guizhou College of Finance and Economics, Guiyang 550004 (China)], E-mail: jiaojianjun05@126.com; Yang Xiaosong [School of Mathematics and Statistics, Huazhong University of Science and Technology, Wuhan 430074 (China); Chen Lansun [Institute of Mathematics, Academia Sinica, Beijing 100080 (China)], E-mail: lschen@amss.ac.cn; Cai Shaohong [Guizhou Key Laboratory of Economics System Simulation, Guizhou College of Finance and Economics, Guiyang 550004 (China)

    2009-11-30

    In this paper, a chemostat model with delayed response in growth and impulsive perturbations of the substrate is considered. Using the discrete dynamical system determined by the stroboscopic map, we obtain a microorganism-extinction periodic solution and, further, the condition under which this periodic solution is globally attractive. Using the theory of delay functional and impulsive differential equations, we also obtain the condition for permanence of the investigated system. Our results indicate that the discrete time delay influences the dynamical behavior of the system, and provide a tactical basis for experimenters to control the outcome of the chemostat. Furthermore, numerical analysis is included to illustrate the dynamics of the system as affected by the discrete time delay.
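The stroboscopic-map construction behind the microorganism-extinction solution can be sketched numerically. With the microorganism absent, the substrate obeys S' = -D·S between pulses and jumps by mu at each pulse; the washout rate D, pulse period T and pulse size mu below are illustrative assumptions, not the paper's parameters:

```python
import math

# Stroboscopic map for the substrate-only (microorganism-extinction)
# solution of a chemostat with impulsive substrate input.
D, T, mu = 0.5, 2.0, 1.0

def strobe(S):
    """One period: add the pulse, then exponential washout for time T."""
    return (S + mu) * math.exp(-D * T)

# Iterate the map; it is a contraction, so it converges to the
# globally attractive fixed point of the periodic solution.
S = 0.0
for _ in range(200):
    S = strobe(S)

# Analytical fixed point: S* = mu * e^{-DT} / (1 - e^{-DT})
S_star = mu * math.exp(-D * T) / (1.0 - math.exp(-D * T))
```

The iterated map and the closed-form fixed point agree, illustrating the global attractivity result stated in the abstract.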

  17. Evaluation of a Regional Australian Nurse-Led Parkinson's Service Using the Context, Input, Process, and Product Evaluation Model.

    Science.gov (United States)

    Jones, Belinda; Hopkins, Genevieve; Wherry, Sally-Anne; Lueck, Christian J; Das, Chandi P; Dugdale, Paul

    2016-01-01

    A nurse-led Parkinson's service was introduced at Canberra Hospital and Health Services in 2012 with the primary objective of improving the care and self-management of people with a diagnosis of Parkinson's disease (PD) and related movement disorders. Other objectives of the Service included improving the quality of life of patients with PD and reducing their caregiver burden, improving the knowledge and understanding of PD among healthcare professionals, and reducing unnecessary hospital admissions. This article evaluates the first 2 years of this Service. The Context, Input, Process, and Product Evaluation Model was used to evaluate the Parkinson's and Movement Disorder Service. The context evaluation was conducted through discussions with stakeholders, review of PD guidelines and care pathways, and assessment of service gaps. Input: The input evaluation was carried out by reviewing the resources and strategies used in the development of the Service. The process evaluation was undertaken by reviewing the areas of the implementation that went well and identifying issues and ongoing gaps in service provision. Product: Finally, product evaluation was undertaken by conducting stakeholder interviews and surveying patients in order to assess their knowledge and perception of value, and the patient experience of the Service. Admission data before and after implementation of the Parkinson's and Movement Disorder Service were also compared for any notable trends. Several gaps in service provision for patients with PD in the Australian Capital Territory were identified, prompting the development of a PD Service to address some of them. Input: Funding for a Parkinson's disease nurse specialist was made available, and existing resources were used to develop clinics, education sessions, and outreach services. Clinics and education sessions were implemented successfully, with positive feedback from patients and healthcare professionals. However, outreach services were limited

  18. On learning time delays between the spikes from different input neurons in a biophysical model of a pyramidal neuron.

    Science.gov (United States)

    Koutsou, Achilleas; Bugmann, Guido; Christodoulou, Chris

    2015-10-01

    Biological systems are able to recognise temporal sequences of stimuli or compute in the temporal domain. In this paper we are exploring whether a biophysical model of a pyramidal neuron can detect and learn systematic time delays between the spikes from different input neurons. In particular, we investigate whether it is possible to reinforce pairs of synapses separated by a dendritic propagation time delay corresponding to the arrival time difference of two spikes from two different input neurons. We examine two subthreshold learning approaches where the first relies on the backpropagation of EPSPs (excitatory postsynaptic potentials) and the second on the backpropagation of a somatic action potential, whose production is supported by a learning-enabling background current. The first approach does not provide a learning signal that sufficiently differentiates between synapses at different locations, while in the second approach, somatic spikes do not provide a reliable signal distinguishing arrival time differences of the order of the dendritic propagation time. It appears that the firing of pyramidal neurons shows little sensitivity to heterosynaptic spike arrival time differences of several milliseconds. This neuron is therefore unlikely to be able to learn to detect such differences.

  19. Deriving input parameters for cost-effectiveness modeling: taxonomy of data types and approaches to their statistical synthesis.

    Science.gov (United States)

    Saramago, Pedro; Manca, Andrea; Sutton, Alex J

    2012-01-01

    The evidence base informing economic evaluation models is rarely derived from a single source. Researchers are typically expected to identify and combine available data to inform the estimation of model parameters for a particular decision problem. The absence of clear guidelines on what data can be used and how to effectively synthesize this evidence base under different scenarios inevitably leads to different approaches being used by different modelers. The aim of this article is to produce a taxonomy that can help modelers identify the most appropriate methods to use when synthesizing the available data for a given model parameter. This article developed a taxonomy based on possible scenarios faced by the analyst when dealing with the available evidence. While mainly focusing on clinical effectiveness parameters, this article also discusses strategies relevant to other key input parameters in any economic model (i.e., disease natural history, resource use/costs, and preferences). The taxonomy categorizes the evidence base for health economic modeling according to whether 1) single or multiple data sources are available, 2) individual or aggregate data are available (or both), or 3) individual or multiple decision model parameters are to be estimated from the data. References to examples of the key methodological developments for each entry in the taxonomy together with citations to where such methods have been used in practice are provided throughout. The use of the taxonomy developed in this article hopes to improve the quality of the synthesis of evidence informing decision models by bringing to the attention of health economics modelers recent methodological developments in this field. Copyright © 2012 International Society for Pharmacoeconomics and Outcomes Research (ISPOR). Published by Elsevier Inc. All rights reserved.

  20. Channel responses to varying sediment input: A flume experiment modeled after Redwood Creek, California

    Science.gov (United States)

    Madej, M.A.; Sutherland, D.G.; Lisle, T.E.; Pryor, B.

    2009-01-01

    At the reach scale, a channel adjusts to sediment supply and flow through mutual interactions among channel form, bed particle size, and flow dynamics that govern river bed mobility. Sediment can impair the beneficial uses of a river, but the timescales for studying recovery following high sediment loading in the field setting make flume experiments appealing. We use a flume experiment, coupled with field measurements in a gravel-bed river, to explore sediment transport, storage, and mobility relations under various sediment supply conditions. Our flume experiment modeled adjustments of channel morphology, slope, and armoring in a gravel-bed channel. Under moderate sediment increases, channel bed elevation increased and sediment output increased, but channel planform remained similar to pre-feed conditions. During the following degradational cycle, most of the excess sediment was evacuated from the flume and the bed became armored. Under high sediment feed, channel bed elevation increased, the bed became smoother, mid-channel bars and bedload sheets formed, and water surface slope increased. Concurrently, output increased and became more poorly sorted. During the last degradational cycle, the channel became armored and channel incision ceased before all excess sediment was removed. Selective transport of finer material was evident throughout the aggradational cycles and became more pronounced during degradational cycles as the bed became armored. Our flume results of changes in bed elevation, sediment storage, channel morphology, and bed texture parallel those from field surveys of Redwood Creek, northern California, which has exhibited channel bed degradation for 30 years following a large aggradation event in the 1970s. The flume experiment suggested that channel recovery in terms of reestablishing a specific morphology may not occur, but the channel may return to a state of balancing sediment supply and transport capacity.

  1. Requirements model for an e-Health awareness portal

    Science.gov (United States)

    Hussain, Azham; Mkpojiogu, Emmanuel O. C.; Nawi, Mohd Nasrun M.

    2016-08-01

    Requirements engineering is at the heart and foundation of the software engineering process. Poor-quality requirements inevitably lead to poor-quality software solutions; likewise, poor requirements modeling is tantamount to designing a poor-quality product. Quality-assured requirements development therefore goes hand in hand with usable products, giving the software the quality it demands. In the light of the foregoing, the requirements for an e-Ebola Awareness Portal were modeled with close attention to these software engineering concerns. The requirements for the e-Health Awareness Portal are modeled as a contribution to the fight against Ebola and help in the fulfillment of the United Nations' Millennium Development Goal No. 6. In this study, requirements were modeled using the UML 2.0 modeling technique.

  2. Extending enterprise architecture modelling with business goals and requirements

    NARCIS (Netherlands)

    Engelsman, Wilco; Quartel, Dick; Jonkers, Henk; Sinderen, van Marten

    2011-01-01

    The methods for enterprise architecture (EA), such as The Open Group Architecture Framework, acknowledge the importance of requirements modelling in the development of EAs. Modelling support is needed to specify, document, communicate and reason about goals and requirements. The current modelling techniques for EA focus on the products, services, processes and applications of an enterprise.

  3. A 2nd generation static model for predicting greenhouse energy inputs, as an aid for production planning

    CERN Document Server

    Jolliet, O; Munday, G L

    1985-01-01

    A model which allows accurate prediction of the energy consumption of a greenhouse is a useful tool for production planning and for the optimisation of greenhouse components. To date, two types of model have been developed: very simple models of low precision, and precise dynamic models unsuitable for use over long periods and too complex for use in practice. A theoretical study and measurements at the CERN trial greenhouse have allowed the development of a new static model named "HORTICERN", easy to use and as precise as more complex dynamic models. This paper demonstrates the potential of this model for long-term production planning. The model gives precise predictions of energy consumption for given greenhouse conditions of use (inside temperatures, dehumidification by ventilation, …), taking into account local climatic conditions (wind, radiative losses to the sky and solar gains) and the type of greenhouse (cladding, thermal screen …). The HORTICERN method has been developed for PC use and requires less...

  4. Impact of temporal resolution of inputs on hydrological model performance: An analysis based on 2400 flood events

    Science.gov (United States)

    Ficchì, Andrea; Perrin, Charles; Andréassian, Vazken

    2016-07-01

    Hydro-climatic data at short time steps are considered essential to model the rainfall-runoff relationship, especially for short-duration hydrological events, typically flash floods. Also, using fine time step information may be beneficial when using or analysing model outputs at larger aggregated time scales. However, the actual gain in prediction efficiency using short time-step data is not well understood or quantified. In this paper, we investigate the extent to which the performance of hydrological modelling is improved by short time-step data, using a large set of 240 French catchments, for which 2400 flood events were selected. Six-minute rain gauge data were available and the GR4 rainfall-runoff model was run with precipitation inputs at eight different time steps ranging from 6 min to 1 day. Then model outputs were aggregated at seven different reference time scales ranging from sub-hourly to daily for a comparative evaluation of simulations at different target time steps. Three classes of model performance behaviour were found for the 240 test catchments: (i) significant improvement of performance with shorter time steps; (ii) performance insensitivity to the modelling time step; (iii) performance degradation as the time step becomes shorter. The differences between these groups were analysed based on a number of catchment and event characteristics. A statistical test highlighted the most influential explanatory variables for model performance evolution at different time steps, including flow auto-correlation, flood and storm duration, flood hydrograph peakedness, rainfall-runoff lag time and precipitation temporal variability.

  5. Selection Input Output by Restriction Using DEA Models Based on a Fuzzy Delphi Approach and Expert Information

    Science.gov (United States)

    Arsad, Roslah; Nasir Abdullah, Mohammad; Alias, Suriana; Isa, Zaidi

    2017-09-01

    Stock evaluation has always been an interesting problem for investors. In this paper, a comparison of the efficiency of stocks of companies listed on Bursa Malaysia was made through the application of the estimation method of Data Envelopment Analysis (DEA). One of the interesting research subjects in DEA is the selection of appropriate input and output parameters. In this study, DEA was used to measure the efficiency of stocks of listed companies in Bursa Malaysia in terms of financial ratios, in order to evaluate the performance of the stocks. Based on previous studies and the Fuzzy Delphi Method (FDM), the most important financial ratios were selected. The results indicated that return on equity, return on assets, net profit margin, operating profit margin, earnings per share, price to earnings and debt to equity were the most important ratios. Using expert information, all the parameters were classified as inputs or outputs. The main objectives were to identify the most critical financial ratios, classify them based on expert information, and compute the relative efficiency scores of the stocks as well as rank them completely within the construction and materials sector. The analysis employed Alirezaee and Afsharian's model, in which the originality of the Charnes, Cooper and Rhodes (CCR) model with the assumption of Constant Returns to Scale (CRS) still holds. This method of ranking the relative efficiency of decision-making units (DMUs) is value-added by the Balance Index. The data are for the year 2015, and the population of the research includes the companies accepted in the stock market in the construction and materials sector (63 companies). According to the ranking, the proposed model can completely rank the 63 companies using the selected financial ratios.
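The input-oriented CCR envelopment program underlying such efficiency scores can be sketched with a made-up single-input, single-output data set (four hypothetical DMUs, not the Bursa Malaysia ratios), using scipy's `linprog` as the LP solver:

```python
import numpy as np
from scipy.optimize import linprog

# Illustrative data: one input and one output per DMU (rows).
X = np.array([[2.0], [4.0], [3.0], [5.0]])   # inputs
Y = np.array([[2.0], [4.0], [1.5], [2.5]])   # outputs

def ccr_efficiency(X, Y, o):
    """Input-oriented CCR (constant returns to scale) score of DMU o:
    min theta  s.t.  X^T lambda <= theta * x_o,  Y^T lambda >= y_o,  lambda >= 0."""
    n, m = X.shape
    s = Y.shape[1]
    c = np.r_[1.0, np.zeros(n)]                    # variables: theta, lambda_1..n
    A_in = np.c_[-X[o].reshape(m, 1), X.T]         # X^T lam - theta*x_o <= 0
    A_out = np.c_[np.zeros((s, 1)), -Y.T]          # -Y^T lam <= -y_o
    A_ub = np.vstack([A_in, A_out])
    b_ub = np.r_[np.zeros(m), -Y[o]]
    bounds = [(None, None)] + [(0, None)] * n
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds)
    return res.fun                                 # optimal theta

scores = [round(ccr_efficiency(X, Y, o), 4) for o in range(len(X))]
```

With a single input and output under CRS, the score reduces to each DMU's output/input ratio divided by the best ratio, which gives a quick sanity check on the LP formulation.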

  6. Extending enterprise architecture modelling with business goals and requirements

    Science.gov (United States)

    Engelsman, Wilco; Quartel, Dick; Jonkers, Henk; van Sinderen, Marten

    2011-02-01

    The methods for enterprise architecture (EA), such as The Open Group Architecture Framework, acknowledge the importance of requirements modelling in the development of EAs. Modelling support is needed to specify, document, communicate and reason about goals and requirements. The current modelling techniques for EA focus on the products, services, processes and applications of an enterprise. In addition, techniques may be provided to describe structured requirements lists and use cases. Little support is available however for modelling the underlying motivation of EAs in terms of stakeholder concerns and the high-level goals that address these concerns. This article describes a language that supports the modelling of this motivation. The definition of the language is based on existing work on high-level goal and requirements modelling and is aligned with an existing standard for enterprise modelling: the ArchiMate language. Furthermore, the article illustrates how EA can benefit from analysis techniques from the requirements engineering domain.

  7. Atmospheric Oxidation in a Southeastern US Forest: Sensitivity of Differences Between Modeled and Measured Hydroxyl (OH) to Model Mechanism and Inputs

    Science.gov (United States)

    Brune, W. H.; Feiner, P. A.; Zhang, L.; Miller, D. O.

    2014-12-01

    Forests play a critical role in the atmosphere's oxidation chemistry because of their broad global extent and their prodigious emissions of biogenic volatile organic compounds (BVOCs). The high hydroxyl (OH) reactivity of these BVOCs causes much of the initial chemistry to occur near the forest. Some OH measurements in forests are much greater than calculated with models, leading to close examination of the BVOC oxidation mechanisms and the possibility of significant OH recycling. The 2013 Southern Oxidant and Aerosol Study (SOAS) provides a rigorous test of the BVOC oxidation mechanisms and OH recycling with its extensive measurement suite that was positioned in an Alabama forest for six weeks. OH measurements made with the Ground-based Tropospheric Hydrogen Oxides Sensor (GTHOS) are compared to photochemical box models constrained with other simultaneous measurements in order to test the understanding of this forest photochemistry. In this work, we use a global sensitivity analysis (Random Sampling - High Dimensional Model Representation) to examine the sensitivity of the differences between the modeled and measured OH to the model mechanism and inputs. In this presentation, we will discuss the model reactions and inputs that have the most influence on the modeled OH and its difference with measured OH and will provide recommendations for reducing model and measurement uncertainty.

  8. Mixing Formal and Informal Model Elements for Tracing Requirements

    DEFF Research Database (Denmark)

    Jastram, Michael; Hallerstede, Stefan; Ladenberger, Lukas

    2011-01-01

Tracing between informal requirements and formal models is challenging. A method for such tracing should permit to deal efficiently with changes to both the requirements and the model. A particular challenge is posed by the persisting interplay of formal and informal elements. In this paper, we combine a system for traceability with a state-based formal method that supports refinement. We do not require all specification elements to be modelled formally, and support incremental incorporation of new specification elements into the formal model. Refinement is used to deal with larger amounts of requirements.

  9. Diversity not quantity in caregiver speech: Using computational modeling to isolate the effects of the quantity and the diversity of the input on vocabulary growth.

    Science.gov (United States)

    Jones, Gary; Rowland, Caroline F

    2017-11-01

    Children who hear large amounts of diverse speech learn language more quickly than children who do not. However, high correlations between the amount and the diversity of the input in speech samples makes it difficult to isolate the influence of each. We overcame this problem by controlling the input to a computational model so that amount of exposure to linguistic input (quantity) and the quality of that input (lexical diversity) were independently manipulated. Sublexical, lexical, and multi-word knowledge were charted across development (Study 1), showing that while input quantity may be important early in learning, lexical diversity is ultimately more crucial, a prediction confirmed against children's data (Study 2). The model trained on a lexically diverse input also performed better on nonword repetition and sentence recall tests (Study 3) and was quicker to learn new words over time (Study 4). A language input that is rich in lexical diversity outperforms equivalent richness in quantity for learned sublexical and lexical knowledge, for well-established language tests, and for acquiring words that have never been encountered before. Copyright © 2017 The Authors. Published by Elsevier Inc. All rights reserved.

  10. Prediction of the Nighttime VLF Subionospheric Signal Amplitude by Using Nonlinear Autoregressive with Exogenous Input Neural Network Model

    Science.gov (United States)

    Santosa, H.; Hobara, Y.; Balikhin, M. A.

    2015-12-01

    Very Low Frequency (VLF) waves have been proposed as an approach to study and monitor lower ionospheric conditions. Ionospheric perturbations are identified in relation to thunderstorm activity, geomagnetic storms and other factors. The temporal dependence of VLF amplitude generally shows complicated and large daily variability due to the combination of effects from both above (space weather) and below (atmospheric and crustal processes) the ionosphere, and the quantitative contributions of the different external sources are not yet well known. Thus, modelling and prediction of VLF wave amplitude are important for studying the lower ionospheric response to various external parameters and for detecting ionospheric anomalies. The purpose of this study is to model and predict the nighttime average amplitude of VLF waves propagating from the transmitter in Hawaii (NPM) to the receiver in Chofu (CHO), Tokyo, Japan, using a NARX neural network. The model was trained for the target parameter of nighttime average amplitude along the NPM-CHO path. The NARX model, built on daily input variables of various physical parameters such as stratospheric temperature, cosmic rays and total column ozone, achieved good accuracy. The constructed models are capable of accurate multi-step-ahead predictions while maintaining acceptable one-step-ahead prediction accuracy. The predicted daily VLF amplitudes are in good agreement with the observed (true) values for one-step-ahead prediction (r = 0.92, RMSE = 1.99), multi-step-ahead 5-day prediction (r = 0.91, RMSE = 1.14) and multi-step-ahead 10-day prediction (r = 0.75, RMSE = 1.74). The developed model demonstrates the feasibility and reliability of predicting lower ionospheric properties with the NARX neural network approach, and provides physical insight into the responses of the lower ionosphere to various external forcings.
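In its simplest linear form, a NARX predictor regresses the next value of the series on lagged outputs (the autoregressive part) and a lagged exogenous driver. The sketch below fits such a model by ordinary least squares on synthetic data; the paper's actual model is a NARX neural network with several physical inputs, so the coefficients and driver here are illustrative assumptions:

```python
import numpy as np

# Synthetic NARX process: y_t = 0.6*y_{t-1} - 0.2*y_{t-2} + 0.5*u_{t-1} + noise,
# where u stands in for an exogenous driver (e.g. stratospheric temperature).
rng = np.random.default_rng(1)
n = 400
u = rng.normal(size=n)
y = np.zeros(n)
for t in range(2, n):
    y[t] = 0.6 * y[t - 1] - 0.2 * y[t - 2] + 0.5 * u[t - 1] + 0.05 * rng.normal()

# Regression matrix of lagged terms: [y_{t-1}, y_{t-2}, u_{t-1}]
A = np.column_stack([y[1:-1], y[:-2], u[1:-1]])
b = y[2:]
coef, *_ = np.linalg.lstsq(A, b, rcond=None)   # recover the NARX coefficients
pred = A @ coef                                 # one-step-ahead predictions
r = np.corrcoef(pred, b)[0, 1]                  # correlation skill, as in the paper
```

The recovered coefficients approximate the generating ones, and the correlation r between one-step-ahead predictions and the target plays the role of the skill metric quoted in the abstract.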

  11. An adaptive regional input-output model and its application to the assessment of the economic cost of Katrina.

    Science.gov (United States)

    Hallegatte, Stéphane

    2008-06-01

    This article proposes a new modeling framework to investigate the consequences of natural disasters and the following reconstruction phase. Based on input-output tables, its originalities are (1) the taking into account of sector production capacities and of both forward and backward propagations within the economic system; and (2) the introduction of adaptive behaviors. The model is used to simulate the response of the economy of Louisiana to the landfall of Katrina. The model is found consistent with available data, and provides two important insights. First, economic processes exacerbate direct losses, and total costs are estimated at $149 billion, for direct losses equal to $107 billion. When exploring the impacts of other possible disasters, it is found that total losses due to a disaster affecting Louisiana increase nonlinearly with respect to direct losses when the latter exceed $50 billion. When direct losses exceed $200 billion, for instance, total losses are twice as large as direct losses. For risk management, therefore, direct losses are insufficient measures of disaster consequences. Second, positive and negative backward propagation mechanisms are essential for the assessment of disaster consequences, and the taking into account of production capacities is necessary to avoid overestimating the positive effects of reconstruction. A systematic sensitivity analysis shows that, among all parameters, the overproduction capacity in the construction sector and the adaptation characteristic time are the most important.
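The backward-propagation effect described above, whereby total losses exceed direct losses, already appears in the basic Leontief input-output relation x = (I - A)^{-1} d on which such models build. The three-sector coefficient matrix and demand vector below are invented for illustration, not Louisiana's table:

```python
import numpy as np

# Technical coefficients: A[i, j] is the input from sector i needed
# per unit of output of sector j. Values are illustrative.
A = np.array([[0.1, 0.2, 0.0],
              [0.3, 0.1, 0.2],
              [0.1, 0.0, 0.2]])
d = np.array([100.0, 50.0, 80.0])      # final demand by sector

I = np.eye(3)
x = np.linalg.solve(I - A, d)           # total output: x = (I - A)^{-1} d

# A disruption removes 10 units of final demand in sector 0; total output
# falls by more than 10 because losses propagate backward to suppliers.
d_shock = d - np.array([10.0, 0.0, 0.0])
x_shock = np.linalg.solve(I - A, d_shock)
total_loss = (x - x_shock).sum()        # exceeds the direct loss of 10
```

The amplification factor is the column sum of the Leontief inverse for the shocked sector; the article's contribution is to extend this static picture with production-capacity constraints and adaptive behaviors.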

  12. Mutational Analyses of HAMP Helices Suggest a Dynamic Bundle Model of Input-Output Signaling in Chemoreceptors

    Science.gov (United States)

    Zhou, Qin; Ames, Peter; Parkinson, John S.

    2009-01-01

    To test the gearbox model of HAMP signaling in the E. coli serine receptor, Tsr, we generated a series of amino acid replacements at each residue of the AS1 and AS2 helices. The residues most critical for Tsr function defined hydrophobic packing faces consistent with a 4-helix bundle. Suppression patterns of helix lesions conformed to the predicted packing layers in the bundle. Although the properties and patterns of most AS1 and AS2 lesions were consistent with both proposed gearbox structures, some mutational features specifically indicate the functional importance of an x-da bundle over an alternative a-d bundle. These genetic data suggest that HAMP signaling could simply involve changes in the stability of its x-da bundle. We propose that Tsr HAMP controls output signals by modulating destabilizing phase clashes between the AS2 helices and the adjoining kinase control helices. Our model further proposes that chemoeffectors regulate HAMP bundle stability through a control cable connection between the transmembrane segments and AS1 helices. Attractant stimuli, which cause inward piston displacements in chemoreceptors, should reduce cable tension, thereby stabilizing the HAMP bundle. This study shows how transmembrane signaling and HAMP input-output control could occur without the helix rotations central to the gearbox model. PMID:19656294

  13. Using cognitive modeling for requirements engineering in anesthesiology

    NARCIS (Netherlands)

    Pott, C; le Feber, J

    2005-01-01

    Cognitive modeling is a complexity-reducing method to describe significant cognitive processes under a specified research focus. Here, a cognitive process model for decision making in anesthesiology is presented and applied in requirements engineering. Three decision making situations of

  14. Joint input-response estimation for structural systems based on reduced-order models and vibration data from a limited number of sensors

    Science.gov (United States)

    Lourens, E.; Papadimitriou, C.; Gillijns, S.; Reynders, E.; De Roeck, G.; Lombaert, G.

    2012-05-01

    An algorithm is presented for jointly estimating the input and state of a structure from a limited number of acceleration measurements. The algorithm extends an existing joint input-state estimation filter, derived using linear minimum-variance unbiased estimation, to applications in structural dynamics. The filter has the structure of a Kalman filter, except that the true value of the input is replaced by an optimal estimate. No prior information on the dynamic evolution of the input forces is assumed and no regularization is required, permitting online application. The effectiveness and accuracy of the proposed algorithm are demonstrated using data from a numerical cantilever beam example as well as a laboratory experiment on an instrumented steel beam and an in situ experiment on a footbridge.
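    The principle behind such joint input-state estimation, reconstructing an unknown force from response measurements alone with no prior model of the input, can be illustrated with a deliberately simplified, noise-free scalar system. This is a sketch of the idea only, not the minimum-variance filter of the paper, and all system values below are hypothetical:

```python
# Minimal illustration (not the authors' algorithm): for a scalar linear
# system x[k+1] = a*x[k] + b*u[k] with measurement y[k] = c*x[k], the
# unknown input u[k] can be recovered from consecutive measurements
# without any prior model of the input -- the idea behind joint
# input-state estimation.
a, b, c = 0.9, 0.5, 2.0   # hypothetical system matrices (scalars here)

def simulate(x0, inputs):
    """Generate measurements y[k] = c*x[k] for a known input sequence."""
    x, ys = x0, []
    for u in inputs:
        ys.append(c * x)
        x = a * x + b * u
    ys.append(c * x)          # one extra sample so every u[k] is observable
    return ys

def estimate_inputs(ys):
    """Invert the measurement and state equations: u[k] = (x[k+1] - a*x[k]) / b."""
    xs = [y / c for y in ys]  # exact state estimate in the noise-free case
    return [(xs[k + 1] - a * xs[k]) / b for k in range(len(xs) - 1)]

true_u = [1.0, -0.5, 2.0, 0.0]
ys = simulate(x0=0.0, inputs=true_u)
est_u = estimate_inputs(ys)
print(est_u)   # recovers true_u up to floating-point error
```

In the paper's noisy, multivariate setting the state inversion is replaced by a Kalman-type filter and the input by a minimum-variance unbiased estimate, but the information flow is the same.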

  15. Crop yield response to soil fertility and N, P, K inputs in different environments: Testing and improving the QUEFTS model

    NARCIS (Netherlands)

    Sattari, S.Z.; Ittersum, van M.K.; Bouwman, A.F.; Smit, A.L.; Janssen, B.H.

    2014-01-01

    Global food production strongly depends on availability of nutrients. Assessment of future global phosphorus (P) fertilizer demand in interaction with nitrogen (N) and potassium (K) fertilizers under different levels of food demand requires a model-based approach. In this paper we tested use of the

  17. Comparison of Parameter Estimations Using Dual-Input and Arterial-Input in Liver Kinetic Studies of FDG Metabolism.

    Science.gov (United States)

    Cui, Yunfeng; Bai, Jing

    2005-01-01

    Liver kinetic study of [18F]2-fluoro-2-deoxy-D-glucose (FDG) metabolism in the human body is an important tool for functional modeling and glucose metabolic rate estimation. In general, the arterial blood time-activity curve (TAC) and the tissue TAC are required as the input and output functions for the kinetic model. For liver studies, however, the arterial input may not be consistent with the actual model input because the liver has a dual blood supply from the hepatic artery (HA) and the portal vein (PV). In this study, the result of model parameter estimation using a dual-input function is compared with that using an arterial-input function. First, a dynamic positron emission tomography (PET) experiment is performed after injection of FDG into the human body. The TACs of aortic blood, PV blood, and five regions of interest (ROIs) in the liver are obtained from the PET image. Then, the dual-input curve is generated by calculating a weighted sum of the arterial and PV input curves. Finally, the kinetic parameters of the five liver ROIs are estimated with the arterial-input and dual-input functions, respectively. The results indicate that the two methods provide different parameter estimations and that the dual-input function may lead to more accurate parameter estimation.
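    The dual-input function described above is a weighted sum of the two supply curves. A minimal sketch, with a hypothetical hepatic-artery flow fraction (the study derives its actual weighting from the measured TACs):

```python
# Sketch of a dual-input function: a weighted sum of the hepatic-artery
# (HA) and portal-vein (PV) time-activity curves. The weight f_ha (the
# fraction of hepatic blood flow supplied by the artery) is a hypothetical
# value here, not the one derived in the paper.
def dual_input(c_ha, c_pv, f_ha=0.25):
    return [f_ha * a + (1.0 - f_ha) * p for a, p in zip(c_ha, c_pv)]

c_ha = [0.0, 10.0, 6.0, 3.0]   # arterial TAC (arbitrary units)
c_pv = [0.0, 4.0, 5.0, 3.5]    # portal-vein TAC (delayed and dispersed)
print(dual_input(c_ha, c_pv))  # [0.0, 5.5, 5.25, 3.375]
```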

  18. Inventory development and input-output model of U.S. land use: relating land in production to consumption.

    Science.gov (United States)

    Costello, Christine; Griffin, W Michael; Matthews, H Scott; Weber, Christopher L

    2011-06-01

    As populations and demands for land-intensive products (e.g., cattle and biofuels) increase, the need to understand the relationship between land use and consumption grows. This paper develops a production-based inventory of land use (i.e., the land used to produce goods) in the U.S. With this inventory, an input-output analysis is used to create a consumption-based inventory of land use. This allows for exploration of links between land used in production and the consumption of particular goods. For example, it is possible to estimate the amount of cropland embodied in processed foods or healthcare services. As would be expected, agricultural and forestry industries are the largest users of land in the production-based inventory. Similarly, we find that processed foods and forest products are the largest users of land in the consumption-based inventory. Somewhat less expectedly, this work finds that the majority of manufacturing and service industries, not typically associated with land use, require substantial amounts of land to produce output due to the purchase of food and other agricultural and wood-based products in the supply chain. The quantitative land use results of this analysis could be integrated with qualitative metrics such as weighting schemes designed to reflect environmental impact or life cycle impact assessment methods.
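    A consumption-based inventory of this kind rests on standard Leontief input-output algebra: direct land-use coefficients are propagated through the Leontief inverse so that land used anywhere in the supply chain is attributed to final demand. A hedged two-sector sketch with illustrative numbers, not the paper's data:

```python
# Two-sector Leontief sketch: direct land-use coefficients (land per unit
# of output) are propagated through (I - A)^-1 so land used in production
# is attributed to final consumption. All numbers are illustrative.
A = [[0.1, 0.2],               # inter-industry requirements
     [0.3, 0.1]]               # rows/cols: (agriculture, services)
land_per_output = [2.0, 0.0]   # ha per unit output; services use no land directly
final_demand = [10.0, 20.0]

# 2x2 Leontief inverse (I - A)^-1, computed by hand
i_a = [[1 - A[0][0], -A[0][1]], [-A[1][0], 1 - A[1][1]]]
det = i_a[0][0] * i_a[1][1] - i_a[0][1] * i_a[1][0]
inv = [[ i_a[1][1] / det, -i_a[0][1] / det],
       [-i_a[1][0] / det,  i_a[0][0] / det]]

# total output x = (I - A)^-1 * y, then embodied land = l . x
x = [inv[0][0] * final_demand[0] + inv[0][1] * final_demand[1],
     inv[1][0] * final_demand[0] + inv[1][1] * final_demand[1]]
embodied_land = sum(l * xi for l, xi in zip(land_per_output, x))
print(round(embodied_land, 3))
```

Note that the service sector, which uses no land directly, is still attributed land through the agricultural inputs it purchases along its supply chain — exactly the effect the paper highlights for manufacturing and service industries.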

  19. Software Requirements Specification Verifiable Fuel Cycle Simulation (VISION) Model

    Energy Technology Data Exchange (ETDEWEB)

    D. E. Shropshire; W. H. West

    2005-11-01

    The purpose of this Software Requirements Specification (SRS) is to define the top-level requirements for a Verifiable Fuel Cycle Simulation Model (VISION) of the Advanced Fuel Cycle (AFC). This simulation model is intended to serve as a broad systems analysis and study tool applicable to work conducted as part of the AFCI (including cost estimates) and Generation IV reactor development studies.

  20. Requirements Validation: Execution of UML Models with CPN Tools

    DEFF Research Database (Denmark)

    Machado, Ricardo J.; Lassen, Kristian Bisgaard; Oliveira, Sérgio

    2007-01-01

    with simple unified modelling language (UML) requirements models, it is not easy for the development team to get confidence on the stakeholders' requirements validation. This paper describes an approach, based on the construction of executable interactive prototypes, to support the validation of workflow...

  1. Optimizing the modified microdosimetric kinetic model input parameters for proton and 4He ion beam therapy application

    Science.gov (United States)

    Mairani, A.; Magro, G.; Tessonnier, T.; Böhlen, T. T.; Molinelli, S.; Ferrari, A.; Parodi, K.; Debus, J.; Haberer, T.

    2017-06-01

    Models able to predict relative biological effectiveness (RBE) values are necessary for an accurate determination of the biological effect with proton and 4He ion beams. This is particularly important when including RBE calculations in treatment planning studies comparing biologically optimized proton and 4He ion beam plans. In this work, we have tailored the predictions of the modified microdosimetric kinetic model (MKM), which is clinically applied for carbon ion beam therapy in Japan, to reproduce RBE with proton and 4He ion beams. We have tuned the input parameters of the MKM, i.e. the domain and nucleus radii, reproducing an experimental database of initial RBE data for proton and He ion beams. The modified MKM, with the best fit parameters obtained, has been used to reproduce in vitro cell survival data in clinically-relevant scenarios. A satisfactory agreement has been found for the studied cell lines, A549 and RENCA, with the mean absolute survival variation between the data and predictions within 2% and 5% for proton and 4He ion beams, respectively. Moreover, a sensitivity study has been performed varying the domain and nucleus radii and the quadratic parameter of the photon response curve. The promising agreement found in this work for the studied clinical-like scenarios supports the usage of the modified MKM for treatment planning studies in proton and 4He ion beam therapy.
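    In the linear-quadratic (LQ) framework on which the MKM builds, the RBE at a given ion dose is the ratio of the photon dose producing equal survival to the ion dose; the MKM's role is essentially to predict the ion beam's increased alpha from the tuned domain and nucleus radii. A minimal sketch with hypothetical LQ parameters, not the fitted values from the paper:

```python
import math

# Illustrative iso-survival RBE calculation in the LQ framework underlying
# the MKM. The ion beam is assumed to raise the effective alpha; all
# parameter values below are hypothetical.
alpha_ph, beta = 0.15, 0.05   # photon LQ parameters (Gy^-1, Gy^-2)
alpha_ion = 0.45              # MKM-style increased alpha for the ion beam

def rbe(d_ion):
    """Photon dose giving the same survival as d_ion, divided by d_ion."""
    effect = alpha_ion * d_ion + beta * d_ion ** 2      # -ln(survival)
    d_ph = (-alpha_ph + math.sqrt(alpha_ph ** 2 + 4 * beta * effect)) / (2 * beta)
    return d_ph / d_ion

print(rbe(2.0))   # RBE at a 2 Gy ion dose
```

With these assumed parameters the RBE decreases with dose, the usual LQ behavior that makes per-voxel RBE prediction necessary in biologically optimized planning.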

  2. Air quality modelling in the Berlin-Brandenburg region using WRF-Chem v3.7.1: sensitivity to resolution of model grid and input data

    Science.gov (United States)

    Kuik, Friderike; Lauer, Axel; Churkina, Galina; Denier van der Gon, Hugo A. C.; Fenner, Daniel; Mar, Kathleen A.; Butler, Tim M.

    2016-12-01

    Air pollution is the number one environmental cause of premature deaths in Europe. Despite extensive regulations, air pollution remains a challenge, especially in urban areas. For studying summertime air quality in the Berlin-Brandenburg region of Germany, the Weather Research and Forecasting Model with Chemistry (WRF-Chem) is set up and evaluated against meteorological and air quality observations from monitoring stations as well as from a field campaign conducted in 2014. The objective is to assess which resolution and level of detail in the input data is needed for simulating urban background air pollutant concentrations and their spatial distribution in the Berlin-Brandenburg area. The model setup includes three nested domains with horizontal resolutions of 15, 3 and 1 km and anthropogenic emissions from the TNO-MACC III inventory. We use RADM2 chemistry and the MADE/SORGAM aerosol scheme. Three sensitivity simulations are conducted updating input parameters to the single-layer urban canopy model based on structural data for Berlin, specifying land use classes on a sub-grid scale (mosaic option) and downscaling the original emissions to a resolution of ca. 1 km × 1 km for Berlin based on proxy data including traffic density and population density. The results show that the model simulates meteorology well, though urban 2 m temperature and urban wind speeds are biased high and nighttime mixing layer height is biased low in the base run with the settings described above. We show that the simulation of urban meteorology can be improved when specifying the input parameters to the urban model, and to a lesser extent when using the mosaic option. On average, ozone is simulated reasonably well, but maximum daily 8 h mean concentrations are underestimated, which is consistent with the results from previous modelling studies using the RADM2 chemical mechanism. Particulate matter is underestimated, which is partly due to an underestimation of secondary organic aerosols

  3. Impact of multi-resolution analysis of artificial intelligence models inputs on multi-step ahead river flow forecasting

    Science.gov (United States)

    Badrzadeh, Honey; Sarukkalige, Ranjan; Jayawardena, A. W.

    2013-12-01

    Discrete wavelet transform was applied to decompose ANN and ANFIS inputs. A novel approach of WNF with subtractive clustering was applied for flow forecasting. Forecasting was performed 1-5 steps ahead, using multi-variate inputs. Forecasting accuracy of peak values and at longer lead times was significantly improved.
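    A single level of the discrete wavelet decomposition in these highlights can be sketched with the Haar wavelet (a stand-in for whichever mother wavelet the authors used): each input series is split into a low-frequency approximation and a high-frequency detail, and the sub-series become separate model inputs.

```python
import math

# One-level Haar discrete wavelet transform, a minimal stand-in for the
# multi-resolution decomposition fed to the ANN/ANFIS models.
def haar_dwt(x):
    s = math.sqrt(2.0)
    approx = [(x[i] + x[i + 1]) / s for i in range(0, len(x), 2)]
    detail = [(x[i] - x[i + 1]) / s for i in range(0, len(x), 2)]
    return approx, detail

def haar_idwt(approx, detail):
    s = math.sqrt(2.0)
    x = []
    for a, d in zip(approx, detail):
        x += [(a + d) / s, (a - d) / s]
    return x

flow = [4.0, 6.0, 10.0, 12.0, 8.0, 6.0, 5.0, 7.0]   # toy daily flow series
approx, detail = haar_dwt(flow)
rec = haar_idwt(approx, detail)
assert all(abs(r - f) < 1e-12 for r, f in zip(rec, flow))  # perfect reconstruction
```

The orthonormal scaling keeps the decomposition lossless, so no information is discarded before the forecasting model sees the sub-series.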

  4. A CORF computational model of a simple cell that relies on LGN input outperforms the Gabor function model

    NARCIS (Netherlands)

    Azzopardi, George; Petkov, Nicolai

    Simple cells in primary visual cortex are believed to extract local contour information from a visual scene. The 2D Gabor function (GF) model has gained particular popularity as a computational model of a simple cell. However, it short-cuts the LGN and cannot reproduce a number of properties of real

  5. Selection of Polynomial Chaos Bases via Bayesian Model Uncertainty Methods with Applications to Sparse Approximation of PDEs with Stochastic Inputs

    Energy Technology Data Exchange (ETDEWEB)

    Karagiannis, Georgios; Lin, Guang

    2014-02-15

    Generalized polynomial chaos (gPC) expansions allow the representation of the solution of a stochastic system as a series of polynomial terms. The number of gPC terms increases dramatically with the dimension of the random input variables. When the number of gPC terms is larger than that of the available samples, a scenario that often occurs if evaluations of the system are expensive, the evaluation of the gPC expansion can be inaccurate due to over-fitting. We propose a fully Bayesian approach that allows for global recovery of the stochastic solution, in both the spatial and random domains, by coupling Bayesian model uncertainty and regularization regression methods. It allows the evaluation of the PC coefficients on a grid of spatial points via (1) Bayesian model averaging or (2) the median probability model, and their construction as functions on the spatial domain via spline interpolation. The former accounts for model uncertainty and provides Bayes-optimal predictions, while the latter additionally provides a sparse representation of the solution by evaluating the expansion on a subset of dominating gPC bases. Moreover, the method quantifies the importance of the gPC bases through inclusion probabilities. We design an MCMC sampler that evaluates all the unknown quantities without the need for ad hoc techniques. The proposed method is suitable for, but not restricted to, problems whose stochastic solution is sparse at the stochastic level with respect to the gPC bases while the deterministic solver involved is expensive. We demonstrate the good performance of the proposed method and make comparisons with others on 1D, 14D, and 40D (in random space) elliptic stochastic partial differential equations.

  6. Comparing TRMM 3B42, CFSR and ground-based rainfall estimates as input for hydrological models, in data scarce regions: the Upper Blue Nile Basin, Ethiopia

    Directory of Open Access Journals (Sweden)

    A. W. Worqlul

    2015-02-01

    Accurate prediction with hydrological models requires an accurate spatial and temporal distribution of the rainfall observation network. In developing countries, rainfall observation networks are sparse and unevenly distributed. Satellite-based products have the potential to overcome these shortcomings. The objective of this study is to compare the advantages and limitations of commonly used high-resolution satellite rainfall products as input to hydrological models, as compared to a sparsely populated network of rain gauges. For this comparison we use two semi-distributed hydrological models, Hydrologiska Byråns Vattenbalansavdelning (HBV) and Parameter Efficient Distributed (PED), that performed well in the Ethiopian highlands, in two watersheds: the Gilgel Abay, with a relatively dense network, and the Main Beles, with relatively scarce rain gauge stations. Both are located in the Upper Blue Nile Basin. The two models are calibrated with the observed discharge from 1994 to 2003 and validated from 2004 to 2006. The rainfall inputs used include the Climate Forecast System Reanalysis (CFSR), the Tropical Rainfall Measuring Mission (TRMM) 3B42 version 7, and ground rainfall measurements. The results indicate that both the gauged and the CFSR precipitation estimates were able to reproduce the stream flow well for both models and both watersheds. TRMM 3B42 performed poorly, with Nash-Sutcliffe values less than 0.1. As expected, the HBV model performed slightly better than the PED model, because HBV divides the watershed into sub-basins, resulting in a greater number of calibration parameters. The simulated discharge for the Gilgel Abay was better than for the less well endowed (rain-gauge-wise) Main Beles. Finally, and surprisingly, the ground-based gauges performed better for both watersheds (with the exception of extreme events) than the TRMM and CFSR satellite rainfall estimates. Undoubtedly this will change in the future, when improved satellite products become available.
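    The Nash-Sutcliffe efficiency used to judge the rainfall products compares model errors against the variance of the observations: 1 is a perfect fit, 0 is no better than predicting the mean, and values below 0.1 like those reported for TRMM 3B42 indicate essentially no skill. A minimal implementation with toy data:

```python
# Nash-Sutcliffe efficiency (NSE): 1 - SSE / variance of observations.
def nash_sutcliffe(observed, simulated):
    mean_obs = sum(observed) / len(observed)
    sse = sum((o - s) ** 2 for o, s in zip(observed, simulated))
    var = sum((o - mean_obs) ** 2 for o in observed)
    return 1.0 - sse / var

obs = [10.0, 20.0, 30.0, 40.0]   # toy observed discharge
sim = [12.0, 18.0, 33.0, 39.0]   # toy simulated discharge
print(round(nash_sutcliffe(obs, sim), 3))
```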

  7. Modeling the ionosphere-thermosphere response to a geomagnetic storm using physics-based magnetospheric energy input: OpenGGCM-CTIM results

    Science.gov (United States)

    Connor, Hyunju Kim; Zesta, Eftyhia; Fedrizzi, Mariangel; Shi, Yong; Raeder, Joachim; Codrescu, Mihail V.; Fuller-Rowell, Tim J.

    2016-06-01

    The magnetosphere is a major source of energy for the Earth's ionosphere and thermosphere (IT) system. Current IT models drive the upper atmosphere using empirically calculated magnetospheric energy input. Thus, they do not sufficiently capture the storm-time dynamics, particularly at high latitudes. To improve the prediction capability of IT models, a physics-based magnetospheric input is necessary. Here, we use the Open Global General Circulation Model (OpenGGCM) coupled with the Coupled Thermosphere Ionosphere Model (CTIM). OpenGGCM calculates a three-dimensional global magnetosphere and a two-dimensional high-latitude ionosphere by solving resistive magnetohydrodynamic (MHD) equations with solar wind input. CTIM calculates a global thermosphere and a high-latitude ionosphere in three dimensions using realistic magnetospheric inputs from the OpenGGCM. We investigate whether the coupled model improves the storm-time IT responses by simulating a geomagnetic storm that is preceded by a strong solar wind pressure front on August 24, 2005. We compare the OpenGGCM-CTIM results with low-earth-orbit satellite observations and with the model results of Coupled Thermosphere-Ionosphere-Plasmasphere electrodynamics (CTIPe). CTIPe is an up-to-date version of CTIM that incorporates more IT dynamics such as a low-latitude ionosphere and a plasmasphere, but uses empirical magnetospheric input. OpenGGCM-CTIM reproduces localized neutral density peaks at ~ 400 km altitude in the high-latitude dayside regions in agreement with in situ observations during the pressure shock and the early phase of the storm. Although CTIPe is in many respects a superior model to CTIM, it misses these localized enhancements. Unlike the CTIPe empirical input models, OpenGGCM-CTIM more faithfully produces localized increases of both auroral precipitation and ionospheric electric fields near the high-latitude dayside region after the pressure shock and after the storm onset, which in turn

  8. Process Model for Defining Space Sensing and Situational Awareness Requirements

    Science.gov (United States)

    2006-04-01

    A process model for defining systems for space sensing and space situational awareness is presented. The paper concentrates on eight steps for determining the requirements, including: decision-maker needs, system requirements, exploitation methods and vulnerabilities, critical capabilities, and identification of attack scenarios. Utilization of the USAF anti-tamper (AT) implementation process as a process-model departure point for the space sensing and situational awareness (SSSA...is presented. The AT implementation process model, as an

  9. Model independent determination of the CKM phase γ using input from D{sup 0}-D̄{sup 0} mixing

    Energy Technology Data Exchange (ETDEWEB)

    Harnew, Samuel; Rademacker, Jonas [H H Wills Physics Laboratory, University of Bristol,Bristol (United Kingdom)

    2015-03-31

    We present a new, amplitude model-independent method to measure the CP violation parameter γ in B{sup −}→DK{sup −} and related decays. Information on charm interference parameters, usually obtained from charm threshold data, is obtained from charm mixing. By splitting the phase space of the D meson decay into several bins, enough information can be gained to measure γ without input from the charm threshold. We demonstrate the feasibility of this approach with a simulation study of B{sup −}→DK{sup −} with D→K{sup +}π{sup −}π{sup +}π{sup −}. We compare the performance of our novel approach to that of a previously proposed binned analysis which uses charm interference parameters obtained from threshold data. While both methods provide useful constraints, the combination of the two by far outperforms either of them applied on their own. Such an analysis would provide a highly competitive measurement of γ. Our simulation studies indicate, subject to assumptions about data yields and the amplitude structure of D{sup 0}→K{sup +}π{sup −}π{sup +}π{sup −}, a statistical uncertainty on γ of ∼12{sup ∘} with existing data and ∼4{sup ∘} for the LHCb-upgrade.

  10. Applying the Context, Input, Process, Product Evaluation Model for Evaluation, Research, and Redesign of an Online Master’s Program

    Directory of Open Access Journals (Sweden)

    Hatice Sancar Tokmak

    2013-07-01

    This study aimed to evaluate and redesign an online master's degree program consisting of 12 courses from the informatics field using the context, input, process, product (CIPP) evaluation model. Research conducted during the redesign of the online program followed a mixed methodology in which data were collected through a CIPP survey, a focus-group interview, and an open-ended questionnaire. An initial CIPP survey sent to students, which had a response rate of approximately 60%, indicated that the Fuzzy Logic course did not fully meet the needs of students. Based on these findings, the program managers decided to improve this course, and a focus group was organized with the students of the Fuzzy Logic course in order to obtain more information to help in redesigning the course. Accordingly, the course was redesigned to include more examples and visuals, including videos; student-instructor interaction was increased through face-to-face meetings; and extra meetings were arranged before exams so that additional examples could be presented for problem-solving to satisfy students about assessment procedures. Lastly, the modifications to the Fuzzy Logic course were implemented, and the students in the course were sent an open-ended form asking them what they thought about the modifications. The results indicated that most students were pleased with the new version of the course.

  11. How uncertainty in input and parameters influences transport model output: four-stage model case-study

    DEFF Research Database (Denmark)

    Manzo, Stefano; Nielsen, Otto Anker; Prato, Carlo Giacomo

    2015-01-01

    ) different levels of network congestion. The choice of the probability distributions shows a low impact on the model output uncertainty, quantified in terms of coefficient of variation. Instead, with respect to the choice of different assignment algorithms, the link flow uncertainty, expressed in terms...... of coefficient of variation, resulting from stochastic user equilibrium and user equilibrium is, respectively, of 0.425 and 0.468. Finally, network congestion does not show a high effect on model output uncertainty at the network level. However, the final uncertainty of links with higher volume/capacity ratio...

  12. Requirements for Logical Models for Value-Added Tax Legislation

    DEFF Research Database (Denmark)

    Nielsen, Morten Ib; Simonsen, Jakob Grue; Larsen, Ken Friis

    -specific needs. Currently, these difficulties are handled in most major ERP systems by customising and localising the native code of the ERP systems for each specific country and industry. We propose an alternative that uses logical modeling of VAT legislation. The potential benefit is to eventually transform...... such a model automatically into programs that essentially will replace customisation and localisation by configuration by changing parameters in the model. In particular, we: (1) identify a number of requirements for such modeling, including requirements for the underlying logic; (2) model salient parts...

  13. Fully automated calculation of image-derived input function in simultaneous PET/MRI in a sheep model

    Energy Technology Data Exchange (ETDEWEB)

    Jochimsen, Thies H.; Zeisig, Vilia [Department of Nuclear Medicine, Leipzig University Hospital, Liebigstr. 18, Leipzig (Germany); Schulz, Jessica [Max Planck Institute for Human Cognitive and Brain Sciences, Stephanstr. 1a, Leipzig, D-04103 (Germany); Werner, Peter; Patt, Marianne; Patt, Jörg [Department of Nuclear Medicine, Leipzig University Hospital, Liebigstr. 18, Leipzig (Germany); Dreyer, Antje Y. [Fraunhofer Institute of Cell Therapy and Immunology, Perlickstr. 1, Leipzig, D-04103 (Germany); Translational Centre for Regenerative Medicine, University Leipzig, Philipp-Rosenthal-Str. 55, Leipzig, D-04103 (Germany); Boltze, Johannes [Fraunhofer Institute of Cell Therapy and Immunology, Perlickstr. 1, Leipzig, D-04103 (Germany); Translational Centre for Regenerative Medicine, University Leipzig, Philipp-Rosenthal-Str. 55, Leipzig, D-04103 (Germany); Fraunhofer Research Institution of Marine Biotechnology and Institute for Medical and Marine Biotechnology, University of Lübeck, Lübeck (Germany); Barthel, Henryk; Sabri, Osama; Sattler, Bernhard [Department of Nuclear Medicine, Leipzig University Hospital, Liebigstr. 18, Leipzig (Germany)

    2016-02-13

    Obtaining the arterial input function (AIF) from image data in dynamic positron emission tomography (PET) examinations is a non-invasive alternative to arterial blood sampling. In simultaneous PET/magnetic resonance imaging (PET/MRI), high-resolution MRI angiographies can be used to define major arteries for correction of partial-volume effects (PVE) and point spread function (PSF) response in the PET data. The present study describes a fully automated method to obtain the image-derived input function (IDIF) in PET/MRI. Results are compared to those obtained by arterial blood sampling. To segment the trunk of the major arteries in the neck, a high-resolution time-of-flight MRI angiography was postprocessed by a vessel-enhancement filter based on the inertia tensor. Together with the measured PSF of the PET subsystem, the arterial mask was used for geometrical deconvolution, yielding the time-resolved activity concentration averaged over a major artery. The method was compared to manual arterial blood sampling at the hind leg of 21 sheep (animal stroke model) during measurement of blood flow with O15-water. Absolute quantification of activity concentration was compared after bolus passage during steady state, i.e., between 2.5- and 5-min post injection. Cerebral blood flow (CBF) values from blood sampling and IDIF were also compared. The cross-calibration factor obtained by comparing activity concentrations in blood samples and IDIF during steady state is 0.98 ± 0.10. In all examinations, the IDIF provided a much earlier and sharper bolus peak than in the time course of activity concentration obtained by arterial blood sampling. CBF using the IDIF was 22 % higher than CBF obtained by using the AIF yielded by blood sampling. The small deviation between arterial blood sampling and IDIF during steady state indicates that correction of PVE and PSF is possible with the method presented. The differences in bolus dynamics and, hence, CBF values can be explained by the

  14. A spiking neural network model of model-free reinforcement learning with high-dimensional sensory input and perceptual ambiguity.

    Science.gov (United States)

    Nakano, Takashi; Otsuka, Makoto; Yoshimoto, Junichiro; Doya, Kenji

    2015-01-01

    A theoretical framework of reinforcement learning plays an important role in understanding action selection in animals. Spiking neural networks provide a theoretically grounded means to test computational hypotheses on neurally plausible algorithms of reinforcement learning through numerical simulation. However, most of these models cannot handle observations which are noisy, or occurred in the past, even though these are inevitable and constraining features of learning in real environments. This class of problem is formally known as partially observable reinforcement learning (PORL) problems. It provides a generalization of reinforcement learning to partially observable domains. In addition, observations in the real world tend to be rich and high-dimensional. In this work, we use a spiking neural network model to approximate the free energy of a restricted Boltzmann machine and apply it to the solution of PORL problems with high-dimensional observations. Our spiking network model solves maze tasks with perceptually ambiguous high-dimensional observations without knowledge of the true environment. An extended model with working memory also solves history-dependent tasks. The way spiking neural networks handle PORL problems may provide a glimpse into the underlying laws of neural information processing which can only be discovered through such a top-down approach.

  16. APPLICATION OF FRF ESTIMATOR BASED ON ERRORS-IN-VARIABLES MODEL IN MULTI-INPUT MULTI-OUTPUT VIBRATION CONTROL SYSTEM

    Institute of Scientific and Technical Information of China (English)

    GUAN Guangfeng; CONG Dacheng; HAN Junwei; LI Hongren

    2007-01-01

    The FRF estimator based on the errors-in-variables (EV) model of a multi-input multi-output (MIMO) system is presented to reduce the bias error of the FRF H1 estimator. The H1 estimator is influenced by noise in the inputs of the system and under-estimates the true FRF. The FRF estimator based on the EV model takes into account the errors in both the inputs and outputs of the system and leads to more accurate FRF estimation. The FRF estimator based on the EV model is applied to waveform replication on a 6-DOF (degree-of-freedom) hydraulic vibration table. The result shows that it is favorable for improving the control precision of the MIMO vibration control system.
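    The bias mechanism at issue can be reduced to its simplest form: estimating a static gain from a noisy input measurement. An H1-style estimate divides the input-output cross-power by the input auto-power, which input noise inflates; an EV-style correction also accounts for the input-noise power (here assumed known). All signal values below are illustrative:

```python
# Toy demonstration of why an H1-style estimator under-estimates an FRF
# when the measured input is noisy, reduced to a static gain H = 2.
# The "noise" sequence is chosen orthogonal to the clean input so the
# bias comes purely from the added noise power. Values are illustrative.
true_h = 2.0
u_clean = [1.0, -2.0, 3.0, -1.0, 2.0, -3.0]
noise = [0.6, 0.3, 0.0, 0.0, 0.0, 0.0]        # deterministic "input noise"
u_meas = [u + n for u, n in zip(u_clean, noise)]
y = [true_h * u for u in u_clean]              # noise-free output

suu = sum(u * u for u in u_meas)               # input auto-power
suy = sum(u * v for u, v in zip(u_meas, y))    # input-output cross-power
h1 = suy / suu                                 # H1-style estimate: biased low
n_pow = sum(n * n for n in noise)
h_ev = suy / (suu - n_pow)                     # EV-style correction

print(h1 < true_h, abs(h_ev - true_h) < 1e-9)
```

The real EV estimator works frequency by frequency on spectral densities and does not assume the noise power is known a priori; this sketch only shows why ignoring input errors pulls H1 below the true response.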

  17. Digital Avionics Information System (DAIS): Training Requirements Analysis Model (TRAMOD).

    Science.gov (United States)

    Czuchry, Andrew J.; And Others

    The training requirements analysis model (TRAMOD) described in this report represents an important portion of the larger effort called the Digital Avionics Information System (DAIS) Life Cycle Cost (LCC) Study. TRAMOD is the second of three models that comprise an LCC impact modeling system for use in the early stages of system development. As…

  18. Application of regional physically-based landslide early warning model: tuning of the input parameters and validation of the results

    Science.gov (United States)

    D'Ambrosio, Michele; Tofani, Veronica; Rossi, Guglielmo; Salvatici, Teresa; Tacconi Stefanelli, Carlo; Rosi, Ascanio; Benedetta Masi, Elena; Pazzi, Veronica; Vannocci, Pietro; Catani, Filippo; Casagli, Nicola

    2017-04-01

    The Aosta Valley region is located in the North-West Alpine mountain chain. The geomorphology of the region is characterized by steep slopes and high climatic and altitudinal variability (ranging from 400 m a.s.l. on the Dora Baltea river floodplain to 4810 m a.s.l. on Mont Blanc). In the study area (zone B), located in the Eastern part of Aosta Valley, heavy rainfall of about 800-900 mm per year is the main landslide trigger. These features lead to a high hydrogeological risk across the whole territory, as mass movements affect 70% of the municipal areas (mainly shallow rapid landslides and rock falls). An in-depth study of the geotechnical and hydrological properties of the hillslopes controlling shallow landslide formation was conducted, with the aim of improving the reliability of a deterministic model named HIRESS (HIgh REsolution Stability Simulator). In particular, two campaigns of on-site measurements and laboratory experiments were performed. The data obtained were studied in order to assess the relationships existing among the different parameters and the bedrock lithology. The soils analyzed at 12 survey points are mainly composed of sand and gravel, with highly variable contents of silt. The measured ranges of effective internal friction angle (from 25.6° to 34.3°) and effective cohesion (from 0 kPa to 9.3 kPa) and the median ks (10E-6 m/s) value are consistent with the average grain sizes (gravelly sand). The data collected contribute to generating input maps of parameters for HIRESS (static data). Further static data are: volume weight, residual water content, porosity and grain size index. In order to improve the original formulation of the model, the contribution of root cohesion has also been taken into account, based on the vegetation map and literature values. HIRESS is a physically based distributed slope stability simulator for analyzing shallow landslide triggering conditions in real time and over large areas using parallel computational techniques. The software
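HIRESS's exact formulation is not given in the abstract, but physically based shallow-landslide models of this kind typically evaluate an infinite-slope factor of safety from measured soil parameters. A minimal sketch under that assumption, with root cohesion added to soil cohesion as the abstract describes (all parameter values are hypothetical):

```python
import numpy as np

def factor_of_safety(slope_deg, c_eff, c_root, phi_deg,
                     gamma=19.0, gamma_w=9.81, z=1.5, m=0.8):
    """Infinite-slope factor of safety (a standard formulation, assumed here).
    slope_deg: slope angle [deg]; c_eff: effective cohesion [kPa];
    c_root: root cohesion [kPa]; phi_deg: effective friction angle [deg];
    gamma: soil unit weight [kN/m^3]; gamma_w: water unit weight [kN/m^3];
    z: failure-surface depth [m]; m: saturated fraction of z.
    FS < 1 indicates potential instability."""
    beta = np.radians(slope_deg)
    phi = np.radians(phi_deg)
    resisting = (c_eff + c_root
                 + (gamma - m * gamma_w) * z * np.cos(beta) ** 2 * np.tan(phi))
    driving = gamma * z * np.sin(beta) * np.cos(beta)
    return resisting / driving

# Values chosen within the measured ranges (phi' 25.6-34.3 deg, c' 0-9.3 kPa).
print(f"FS = {factor_of_safety(35.0, c_eff=2.0, c_root=1.0, phi_deg=30.0):.2f}")
```

Raising the saturated fraction `m` (the rainfall-driven dynamic input) lowers the factor of safety, which is how such simulators turn rainfall into triggering conditions.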

  19. Requirements Validation: Execution of UML Models with CPN Tools

    DEFF Research Database (Denmark)

    Machado, Ricardo J.; Lassen, Kristian Bisgaard; Oliveira, Sérgio

    2007-01-01

    Requirements validation is a critical task in any engineering project. The confrontation of stakeholders with static requirements models is not enough, since stakeholders with non-computer science education are not able to discover all the inter-dependencies between the elicited requirements. Even with simple unified modelling language (UML) requirements models, it is not easy for the development team to get confidence on the stakeholders' requirements validation. This paper describes an approach, based on the construction of executable interactive prototypes, to support the validation of workflow requirements, where the system to be built must explicitly support the interaction between people within a pervasive cooperative workflow execution. A case study from a real project is used to illustrate the proposed approach.

  20. Methods, Devices and Computer Program Products Providing for Establishing a Model for Emulating a Physical Quantity Which Depends on at Least One Input Parameter, and Use Thereof

    DEFF Research Database (Denmark)

    2014-01-01

    The present invention proposes methods, devices and computer program products. To this extent, there is defined a set X including N distinct parameter values x_i for at least one input parameter x, N being an integer greater than or equal to 1, with a first measured physical quantity Pm1 for each...... based on the Vandermonde matrix VM and the first measured physical quantity according to the normal equation W = (VM^T * VM)^(-1) * VM^T * Pm1. The model is iteratively refined so as to obtain a desired emulation precision. The model can later be used to emulate the physical quantity based on input parameters or logs taken...
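The equation W = (VM^T · VM)^(-1) · VM^T · Pm1 in this record is the normal-equations least-squares solution for a polynomial model built from a Vandermonde matrix. A sketch with invented data (the names `VM`, `W`, `Pm1` follow the abstract; the values and degree are assumptions):

```python
import numpy as np

# Hypothetical data: N = 5 parameter values x_i and a first measured
# physical quantity Pm1 at each.
x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
Pm1 = np.array([2.1, 4.2, 8.1, 16.3, 31.9])

degree = 3
VM = np.vander(x, degree + 1, increasing=True)   # Vandermonde matrix

# W = (VM^T VM)^-1 VM^T Pm1 -- the normal-equations least-squares solution.
W = np.linalg.solve(VM.T @ VM, VM.T @ Pm1)

def emulate(x_new):
    """Emulate the physical quantity at new parameter values."""
    return np.vander(np.atleast_1d(x_new), degree + 1, increasing=True) @ W

print(emulate(2.5))
```

In practice `np.linalg.lstsq` solves the same problem with better numerical conditioning than forming the normal equations explicitly.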

  1. GENERAL REQUIREMENTS FOR SIMULATION MODELS IN WASTE MANAGEMENT

    Energy Technology Data Exchange (ETDEWEB)

    Miller, Ian; Kossik, Rick; Voss, Charlie

    2003-02-27

    Most waste management activities are decided upon and carried out in a public or semi-public arena, typically involving the waste management organization, one or more regulators, and often other stakeholders and members of the public. In these environments, simulation modeling can be a powerful tool in reaching a consensus on the best path forward, but only if the models that are developed are understood and accepted by all of the parties involved. These requirements for understanding and acceptance of the models constrain the appropriate software and model development procedures that are employed. This paper discusses requirements for both simulation software and for the models that are developed using the software. Requirements for the software include transparency, accessibility, flexibility, extensibility, quality assurance, ability to do discrete and/or continuous simulation, and efficiency. Requirements for the models that are developed include traceability, transparency, credibility/validity, and quality control. The paper discusses these requirements with specific reference to the requirements for performance assessment models that are used for predicting the long-term safety of waste disposal facilities, such as the proposed Yucca Mountain repository.

  2. Wind Power Curve Modeling Using Statistical Models: An Investigation of Atmospheric Input Variables at a Flat and Complex Terrain Wind Farm

    Energy Technology Data Exchange (ETDEWEB)

    Wharton, S. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Bulaevskaya, V. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Irons, Z. [Enel Green Power North America, Andover, MA (United States); Qualley, G. [Infigen Energy, Dallas, TX (United States); Newman, J. F. [Univ. of Oklahoma, Norman, OK (United States); Miller, W. O. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States)

    2015-09-28

    The goal of our FY15 project was to explore the use of statistical models and high-resolution atmospheric input data to develop more accurate prediction models for turbine power generation. We modeled power for two operational wind farms in two regions of the country. The first site is a 235 MW wind farm in Northern Oklahoma with 140 GE 1.68 turbines. Our second site is a 38 MW wind farm in the Altamont Pass Region of Northern California with 38 Mitsubishi 1 MW turbines. The farms are very different in topography, climatology, and turbine technology; however, both occupy high wind resource areas in the U.S. and are representative of typical wind farms found in their respective areas.

  3. Requirements engineering for cross-sectional information chain models.

    Science.gov (United States)

    Hübner, U; Cruel, E; Gök, M; Garthaus, M; Zimansky, M; Remmers, H; Rienhoff, O

    2012-01-01

    Despite the wealth of literature on requirements engineering, little is known about engineering very generic, innovative and emerging requirements, such as those for cross-sectional information chains. The IKM health project aims at building information chain reference models for the care of patients with chronic wounds, cancer-related pain and back pain. Our question therefore was how to appropriately capture information and process requirements that are both generally applicable and practically useful. To this end, we started with recommendations from clinical guidelines and put them up for discussion in Delphi surveys and expert interviews. Despite the heterogeneity we encountered in all three methods, it was possible to obtain requirements suitable for building reference models. We evaluated three modelling languages and then chose to write the models in UML (class and activity diagrams). On the basis of the current project results, the pros and cons of our approach are discussed.

  4. A robust hybrid model integrating enhanced inputs based extreme learning machine with PLSR (PLSR-EIELM) and its application to intelligent measurement.

    Science.gov (United States)

    He, Yan-Lin; Geng, Zhi-Qiang; Xu, Yuan; Zhu, Qun-Xiong

    2015-09-01

    In this paper, a robust hybrid model integrating an enhanced-inputs-based extreme learning machine with partial least square regression (PLSR-EIELM) is proposed. The proposed PLSR-EIELM model can overcome two main flaws of the extreme learning machine (ELM), i.e. the intractable problem of determining the optimal number of hidden layer neurons and the over-fitting phenomenon. First, a traditional extreme learning machine (ELM) is selected. Second, the weights between the input layer and the hidden layer are randomly assigned, and the nonlinear transformation of the independent variables is obtained from the output of the hidden layer neurons. Crucially, the original input variables are retained as enhanced inputs; the enhanced inputs and the nonlinear transformed variables are then tied together as the whole set of independent variables. In this way, PLSR can identify the PLS components not only from the nonlinear transformed variables but also from the original input variables, which removes the correlation among the whole set of independent variables with respect to the expected outputs. Finally, the optimal relationship model between the whole set of independent variables and the expected outputs is achieved by using PLSR. Thus, the PLSR-EIELM model is developed. The PLSR-EIELM model then served as an intelligent measurement tool for the key variables of the Purified Terephthalic Acid (PTA) process and the High Density Polyethylene (HDPE) process. The experimental results show that the predictive accuracy of PLSR-EIELM is stable, which indicates that PLSR-EIELM is robust. Moreover, compared with ELM, PLSR, hierarchical ELM (HELM), and PLSR-ELM, PLSR-EIELM achieves much smaller relative prediction errors in these two applications.

  5. Tropical Atlantic climate response to different freshwater input in high latitudes with an ocean-only general circulation model

    Science.gov (United States)

    Men, Guang; Wan, Xiuquan; Liu, Zedong

    2016-10-01

    Tropical Atlantic climate change is relevant to the variation of the Atlantic meridional overturning circulation (AMOC) through different physical processes. Previous coupled climate model simulations suggested a dipole-like SST structure, with cooling over the North Atlantic and warming over the South Tropical Atlantic, in response to the slowdown of the AMOC. Using an ocean-only global ocean model, an attempt was made here to separate the total influence of various AMOC change scenarios into an oceanic-induced component and an atmospheric-induced component. In contrast with previous freshwater-hosing experiments with coupled climate models, the ocean-only modeling presented here shows a surface warming over the whole tropical Atlantic region, and oceanic-induced processes may play an important role in the SST change in the equatorial South Atlantic. Our results show that the warming is partly governed by oceanic processes through the mechanism of oceanic gateway change, which operates in the regime where freshwater forcing is strong, exceeding 0.3 Sv. A strong AMOC change is required for the gateway mechanism to work in our model, because only when the AMOC is sufficiently weak can the North Brazil Undercurrent flow equatorward, carrying warm and salty North Atlantic subtropical gyre water into the equatorial zone. This threshold is likely to be model-dependent. An improved understanding of these issues may help with the prediction of abrupt climate change.

  6. Inferring Requirement Goals from Model Implementing in UML

    Institute of Scientific and Technical Information of China (English)

    2001-01-01

    UML is widely used in many software development processes. However, it does not make requirement goals explicit. This paper presents a method to establish the semantic relationship between requirement goals and UML models. Before the method is introduced, some relevant concepts are described.

  7. Choice of rainfall inputs for event-based rainfall-runoff modeling in a catchment with multiple rainfall stations using data-driven techniques

    Science.gov (United States)

    Chang, Tak Kwin; Talei, Amin; Alaghmand, Sina; Ooi, Melanie Po-Leen

    2017-02-01

    Input selection for data-driven rainfall-runoff models is an important task, as these models find the relationship between rainfall and runoff by direct mapping of inputs to output. In this study, two different input selection methods were used: cross-correlation analysis (CCA), and a combination of mutual information and cross-correlation analyses (MICCA). Selected inputs were used to develop an adaptive network-based fuzzy inference system (ANFIS) in the Sungai Kayu Ara basin, Selangor, Malaysia. The study catchment has 10 rainfall stations and one discharge station located at the outlet of the catchment. A total of 24 rainfall-runoff events (10-min interval) from 1996 to 2004 were selected, of which 18 events were used for training and the remaining 6 were reserved for validating (testing) the models. The results of the ANFIS models were then compared against those obtained by the conceptual model HEC-HMS. The CCA and MICCA methods selected rainfall inputs from only 2 (stations 1 and 5) and 3 (stations 1, 3, and 5) rainfall stations, respectively. The ANFIS model developed from the MICCA inputs (ANFIS-MICCA) performed slightly better than the one developed from the CCA inputs (ANFIS-CCA). ANFIS-CCA and ANFIS-MICCA performed comparably to the HEC-HMS model, for which rainfall data from all 10 stations had been used; however, in peak estimation, ANFIS-MICCA was the best model. A sensitivity analysis on HEC-HMS was conducted by recalibrating the model using the same rainfall stations selected for ANFIS. It was concluded that HEC-HMS model performance deteriorates as the number of rainfall stations is reduced. In general, ANFIS was found to be a reliable alternative to HEC-HMS in cases where not all rainfall stations are functioning. This study showed that the selected stations received the highest total rainfall and rainfall intensity (stations 3 and 5). Moreover, the contributing rainfall stations selected by CCA and MICCA were found to be located near the outlet of
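The cross-correlation step of input selection can be illustrated with synthetic data: for each station, compute the correlation between runoff and lagged rainfall, and keep the stations and lags with the strongest response. The station count, lags and weights below are invented for illustration, not taken from the Sungai Kayu Ara study:

```python
import numpy as np

rng = np.random.default_rng(3)

# Synthetic example: runoff responds to stations 0 and 2 with lags of
# 2 and 4 time steps respectively.
n, n_stations = 500, 5
rain = rng.gamma(2.0, 1.0, size=(n, n_stations))
runoff = np.roll(rain[:, 0], 2) + 0.8 * np.roll(rain[:, 2], 4)
runoff += 0.1 * rng.normal(size=n)

def best_lag_correlation(x, y, max_lag=10):
    """Pearson correlation between y(t) and x(t - k) for k = 0..max_lag;
    returns the lag with the highest correlation and that correlation."""
    corrs = [np.corrcoef(x[: len(x) - k] if k else x, y[k:])[0, 1]
             for k in range(max_lag + 1)]
    k = int(np.argmax(corrs))
    return k, corrs[k]

for s in range(n_stations):
    lag, r = best_lag_correlation(rain[:, s], runoff)
    print(f"station {s}: best lag {lag}, r = {r:.2f}")
```

Stations whose peak correlation exceeds a chosen threshold would be retained as model inputs at their best lags; the MICCA variant additionally screens candidates by mutual information.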

  8. Rest-to-Rest Attitude Maneuvers and Residual Vibration Reduction of a Finite Element Model of Flexible Satellite by Using Input Shaper

    Directory of Open Access Journals (Sweden)

    Setyamartana Parman

    1999-01-01

    Full Text Available A three-dimensional rest-to-rest attitude maneuver of a flexible spacecraft equipped with on-off reaction jets is studied. The equations of motion of the spacecraft are developed by employing a hybrid system of coordinates and a Lagrangian formulation. The finite element method is used to examine discrete elastic deformations of a particular model of satellite carrying flexible solar panels, by modelling the panels as flat plate structures in bending. Results indicate that, under an unshaped input, the maneuvers induce undesirable attitude angle motions of the satellite as well as vibration of the solar panels. An input shaper is then applied to reduce the residual oscillation of its motion at several natural frequencies in order to achieve the expected pointing precision of the satellite. Once the shaped input is given to the satellite, the performance improves significantly.
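The simplest input shaper of the kind described above is the two-impulse Zero Vibration (ZV) shaper, whose impulse times and amplitudes follow directly from a mode's natural frequency and damping ratio. The sketch below shows the standard ZV design; the solar-panel mode parameters are hypothetical, and the paper's shaper may differ:

```python
import numpy as np

def zv_shaper(wn, zeta):
    """Two-impulse Zero Vibration (ZV) shaper for one flexible mode.
    wn: natural frequency [rad/s]; zeta: damping ratio."""
    wd = wn * np.sqrt(1.0 - zeta ** 2)       # damped natural frequency
    K = np.exp(-zeta * np.pi / np.sqrt(1.0 - zeta ** 2))
    times = np.array([0.0, np.pi / wd])      # impulse instants [s]
    amps = np.array([1.0, K]) / (1.0 + K)    # impulse amplitudes (sum to 1)
    return times, amps

# Hypothetical solar-panel bending mode: 1.2 Hz with 1% damping.
times, amps = zv_shaper(2.0 * np.pi * 1.2, 0.01)
print("impulse times:", times, "amplitudes:", amps)

# Convolving the jet command with this impulse sequence cancels the residual
# vibration of the targeted mode; one shaper per mode is cascaded when
# several natural frequencies must be suppressed.
```

The two impulses are timed half a damped period apart so that the vibration excited by the second impulse cancels that of the first.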

  9. Influence of the meteorological input on the atmospheric transport modelling with FLEXPART of radionuclides from the Fukushima Daiichi nuclear accident.

    Science.gov (United States)

    Arnold, D; Maurer, C; Wotawa, G; Draxler, R; Saito, K; Seibert, P

    2015-01-01

    In the present paper the role of precipitation as FLEXPART model input is investigated for one possible release scenario of the Fukushima Daiichi accident. Precipitation data from the European Centre for Medium-Range Weather Forecasts (ECMWF), NOAA's National Centers for Environmental Prediction (NCEP), the Japan Meteorological Agency's (JMA) mesoscale analysis and a JMA radar-rain gauge precipitation analysis product were utilized. The accident of Fukushima in March 2011 and the following observations enable us to assess the impact of these precipitation products at least for this single case. As expected, the differences in the statistical scores are visible but not large. Increasing the resolution of all ECMWF fields from 0.5° to 0.2° raises the correlation from 0.71 to 0.80 and the overall rank from 3.38 to 3.44. Substituting ECMWF precipitation with the JMA mesoscale precipitation analysis or the JMA radar-rain gauge precipitation data, while the rest of the variables remain unmodified, yields the best results on a regional scale, especially when a new and more robust wet deposition scheme is introduced. The best results are obtained with a combination of ECMWF 0.2° data with precipitation from the JMA mesoscale analyses and the modified wet deposition, with a correlation of 0.83 and an overall rank of 3.58. NCEP-based results with the same source term are generally poorer, giving correlations around 0.66, comparatively large negative biases, and an overall rank of 3.05 that worsens when regional precipitation data are introduced.

  10. Towards a Formalized Ontology-Based Requirements Model

    Institute of Scientific and Technical Information of China (English)

    JIANG Dan-dong; ZHANG Shen-sheng; WANG Ying-lin

    2005-01-01

    The goal of this paper is to take a further step towards an ontological approach for representing requirements information. The motivation for ontologies is discussed, and the definitions of ontology and requirements ontology are given. A collection of informal terms, covering four subject areas, is then presented, and the formalization process of the ontology is discussed. The underlying meta-ontology is determined, and the formalized requirements ontology is analyzed. This formal ontology is built to serve as a basis for a requirements model. Finally, the implementation of the software system is described.

  11. A transformation approach for collaboration based requirement models

    CERN Document Server

    Harbouche, Ahmed; Mokhtari, Aicha

    2012-01-01

    Distributed software engineering is widely recognized as a complex task. Among the inherent complexities is the process of obtaining a system design from its global requirement specification. This paper deals with this transformation process and suggests an approach to derive the behavior of a given system's components, in the form of distributed finite state machines, from the global system requirements, expressed in an augmented UML activity diagram notation. The suggested approach is summarized in three steps: the definition of the appropriate source meta-model (requirements meta-model), the definition of the target design meta-model, and the definition of the rules that govern the transformation during the derivation process. The derivation process transforms the global system requirements, described as UML activity diagrams (extended with collaborations), into system role behaviors represented as UML finite state machines. The approach is implemented using the Atlas Transformation Language (ATL).

  12. Irrigation Requirement Estimation Using Vegetation Indices and Inverse Biophysical Modeling

    Science.gov (United States)

    Bounoua, Lahouari; Imhoff, Marc L.; Franks, Shannon

    2010-01-01

    We explore an inverse biophysical modeling process forced by satellite and climatological data to quantify irrigation requirements in semi-arid agricultural areas. We constrain the carbon and water cycles modeled under both equilibrium (balance between vegetation and climate) and non-equilibrium (water added through irrigation) conditions. We postulate that the degree to which irrigated dry lands vary from equilibrium climate conditions is related to the amount of irrigation. The amount of water required over and above precipitation is considered as the irrigation requirement. For July, results show that spray irrigation supplied an additional 1.3 mm of water per application, with applications every 24.6 hours. In contrast, drip irrigation required only 0.6 mm every 45.6 hours, or 46% of that simulated for spray irrigation. The modeled estimates account for 87% of the total reported irrigation water use where soil salinity is not important, and 66% in saline lands.

  13. Innovative Product Design Based on Customer Requirement Weight Calculation Model

    Institute of Scientific and Technical Information of China (English)

    Chen-Guang Guo; Yong-Xian Liu; Shou-Ming Hou; Wei Wang

    2010-01-01

    In the processes of product innovation and design, it is important for designers to find and capture the customer's focus through customer requirement weight calculation and ranking. Based on fuzzy set theory and Euclidean space distance, this paper puts forward a method for customer requirement weight calculation called the Euclidean space distance weighting ranking method. This method is used in the fuzzy analytic hierarchy process that satisfies the additive consistent fuzzy matrix. A model for the weight calculation steps is constructed; meanwhile, a product innovation design module on the basis of the customer requirement weight calculation model is developed. Finally, combined with the example of titanium sponge production, the customer requirement weight calculation model is validated. Using the innovation design module, the structure of the titanium sponge reactor has been improved and made innovative.

  14. Dopamine modulation of GABAergic function enables network stability and input selectivity for sustaining working memory in a computational model of the prefrontal cortex.

    Science.gov (United States)

    Lew, Sergio E; Tseng, Kuei Y

    2014-12-01

    Dopamine modulation of GABAergic transmission in the prefrontal cortex (PFC) is thought to be critical for sustaining cognitive processes such as working memory and decision-making. Here, we developed a neurocomputational model of the PFC that includes physiological features of the facilitatory action of dopamine on fast-spiking interneurons to assess how a GABAergic dysregulation impacts on the prefrontal network stability and working memory. We found that a particular non-linear relationship between dopamine transmission and GABA function is required to enable input selectivity in the PFC for the formation and retention of working memory. Either degradation of the dopamine signal or the GABAergic function is sufficient to elicit hyperexcitability in pyramidal neurons and working memory impairments. The simulations also revealed an inverted U-shape relationship between working memory and dopamine, a function that is maintained even at high levels of GABA degradation. In fact, the working memory deficits resulting from reduced GABAergic transmission can be rescued by increasing dopamine tone and vice versa. We also examined the role of this dopamine-GABA interaction for the termination of working memory and found that the extent of GABAergic excitation needed to reset the PFC network begins to occur when the activity of fast-spiking interneurons surpasses 40 Hz. Together, these results indicate that the capability of the PFC to sustain working memory and network stability depends on a robust interplay of compensatory mechanisms between dopamine tone and the activity of local GABAergic interneurons.

  15. Using the Context, Input, Process, and Product Evaluation Model (CIPP) as a Comprehensive Framework to Guide the Planning, Implementation, and Assessment of Service-Learning Programs

    Science.gov (United States)

    Zhang, Guili; Zeller, Nancy; Griffith, Robin; Metcalf, Debbie; Williams, Jennifer; Shea, Christine; Misulis, Katherine

    2011-01-01

    Planning, implementing, and assessing a service-learning project can be a complex task because service-learning projects often involve multiple constituencies and aim to meet both the needs of service providers and community partners. In this article, Stufflebeam's Context, Input, Process, and Product (CIPP) evaluation model is recommended as a…

  16. Applying the Context, Input, Process, Product Evaluation Model for Evaluation, Research, and Redesign of an Online Master's Program

    Science.gov (United States)

    Sancar Tokmak, Hatice; Meltem Baturay, H.; Fadde, Peter

    2013-01-01

    This study aimed to evaluate and redesign an online master's degree program consisting of 12 courses from the informatics field using a context, input, process, product (CIPP) evaluation model. Research conducted during the redesign of the online program followed a mixed methodology in which data was collected through a CIPP survey,…

  17. Corrigendum to "Development of ANFIS model for air quality forecasting and input optimization for reducing the computational cost and time" [Atmos. Environ. 128 (2016) 246-262

    Science.gov (United States)

    Prasad, Kanchan; Gorai, Amit Kumar; Goyal, Pramila

    2016-10-01

    In the paper entitled "Development of ANFIS model for air quality forecasting and input optimization for reducing the computational cost and time", the correlation coefficient values of O3 with the other parameters (shown in Table 4) were mistakenly taken from other results. The analyses, however, were based on the actual results. The actual values are listed in the revised Table 4.

  18. Evaluation of Foreign Exchange Risk Capital Requirement Models

    Directory of Open Access Journals (Sweden)

    Ricardo S. Maia Clemente

    2005-12-01

    Full Text Available This paper examines capital requirements for financial institutions to cover market risk stemming from exposure to foreign currencies. The models examined belong to two groups according to the approach involved: standardized and internal models. In the first group, we study the Basel model and the model adopted by the Brazilian legislation. In the second group, we consider models based on the concept of value at risk (VaR): the single- and double-window historical models, the exponential smoothing model (EWMA) and a hybrid approach that combines features of both. The results suggest that the Basel model is inadequate for the Brazilian market, exhibiting a large number of exceptions. The model of the Brazilian legislation has no exceptions, though it generates higher capital requirements than the internal models based on VaR. In general, VaR-based models perform better and result in less capital allocation than the standardized approach applied in Brazil.
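The EWMA model mentioned above produces a VaR figure from an exponentially weighted volatility estimate. A minimal sketch with simulated fat-tailed FX returns (the decay factor 0.94 is the common RiskMetrics choice; the data are invented):

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(4)
# Hypothetical daily FX returns: Student-t for fat tails, ~750 trading days.
returns = 0.01 * rng.standard_t(df=5, size=750)

def ewma_var(returns, lam=0.94, alpha=0.99):
    """One-day parametric VaR from an EWMA volatility estimate:
    sigma2_t = lam * sigma2_{t-1} + (1 - lam) * r_{t-1}^2."""
    sigma2 = returns[0] ** 2
    for r in returns[1:]:
        sigma2 = lam * sigma2 + (1.0 - lam) * r ** 2
    return norm.ppf(alpha) * np.sqrt(sigma2)   # VaR as a fraction of exposure

print(f"99% one-day VaR: {ewma_var(returns):.4f}")
```

Backtesting then counts "exceptions", days on which the realized loss exceeds the previous day's VaR, which is the criterion the paper uses to compare models.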

  19. Toward high-resolution flash flood prediction in large urban areas - Analysis of sensitivity to spatiotemporal resolution of rainfall input and hydrologic modeling

    Science.gov (United States)

    Rafieeinasab, Arezoo; Norouzi, Amir; Kim, Sunghee; Habibi, Hamideh; Nazari, Behzad; Seo, Dong-Jun; Lee, Haksu; Cosgrove, Brian; Cui, Zhengtao

    2015-12-01

    Urban flash flooding is a serious problem in large, highly populated areas such as the Dallas-Fort Worth Metroplex (DFW). Being able to monitor and predict flash flooding at a high spatiotemporal resolution is critical to providing location-specific early warnings and cost-effective emergency management in such areas. Under the idealized conditions of perfect models and precipitation input, one may expect that spatiotemporal specificity and accuracy of the model output improve as the resolution of the models and precipitation input increases. In reality, however, due to the errors in the precipitation input, and in the structures, parameters and states of the models, there are practical limits to the model resolution. In this work, we assess the sensitivity of streamflow simulation in urban catchments to the spatiotemporal resolution of precipitation input and hydrologic modeling to identify the resolution at which the simulation errors may be at minimum given the quality of the precipitation input and hydrologic models used, and the response time of the catchment. The hydrologic modeling system used in this work is the National Weather Service (NWS) Hydrology Laboratory's Research Distributed Hydrologic Model (HLRDHM) applied at spatiotemporal resolutions ranging from 250 m to 2 km and from 1 min to 1 h applied over the Cities of Fort Worth, Arlington and Grand Prairie in DFW. The high-resolution precipitation input is from the DFW Demonstration Network of the Collaborative Adaptive Sensing of the Atmosphere (CASA) radars. For comparison, the NWS Multisensor Precipitation Estimator (MPE) product, which is available at a 4-km 1-h resolution, was also used. The streamflow simulation results are evaluated for 5 urban catchments ranging in size from 3.4 to 54.6 km2 and from about 45 min to 3 h in time-to-peak in the Cities of Fort Worth, Arlington and Grand Prairie. 
The streamflow observations used in evaluation were obtained from water level measurements via rating

  20. The Benefit of Ambiguity in Understanding Goals in Requirements Modelling

    DEFF Research Database (Denmark)

    Paay, Jeni; Pedell, Sonja; Sterling, Leon

    2011-01-01

    This paper examines the benefit of ambiguity in describing goals in requirements modelling for the design of socio-technical systems, using concepts from Agent-Oriented Software Engineering (AOSE) and ethnographic and cultural probe methods from Human Computer Interaction (HCI). The authors' aim is to create technologies that support more flexible and meaningful social interactions, by combining best practice in software engineering with ethnographic techniques to model complex social interactions. The paper presents a holistic approach to eliciting, analyzing, and modelling socially-oriented requirements by combining a particular form of ethnographic technique, cultural probes, with Agent Oriented Software Engineering notations to model these requirements, and focuses on examining the value of maintaining ambiguity in goal descriptions.

  1. Relative importance of nutrient inputs from streams and the sea entrance for phytoplankton dynamics in a shallow estuary - insights from 3D model simulations

    DEFF Research Database (Denmark)

    Timmermann, Karen; Gustafsson, Karin; Markager, Svend Stiig

    Danish estuaries are highly eutrophic due to high N and P inputs from local streams and a general increase in nutrient concentrations in the Baltic Sea region. Recovery plans have been implemented to reduce local loadings. A key issue for these plans is the relative importance of local sources (streams and open sea inputs) on phytoplankton growth and biomass. Model simulations revealed that local streams accounted for about 40% of the phytoplankton nitrogen content, and that changes in local discharges and open sea nutrient concentrations had almost similar impacts on phytoplankton biomass.

  2. Business Process Simulation: Requirements for Business and Resource Models

    Directory of Open Access Journals (Sweden)

    Audrius Rima

    2015-07-01

    Full Text Available The purpose of Business Process Model and Notation (BPMN) is to provide an easily understandable graphical representation of business processes. BPMN is therefore widely applied in various areas, one of them being business process simulation. This paper addresses some BPMN model based business process simulation problems and formulates requirements for business process and resource models that enable their use in business process simulation.

  3. The micrometeoric input in the upper atmosphere. A comparison between model predictions and HPLA and meteor radars observations and AIM-CDE dust detections

    Science.gov (United States)

    Janches, Diego; Sparks, Jonathan; Johnson, Kyle; Poppe, Andrew; James, David; Fentzke, Jonathan; Palo, Scott; Horanyi, Mihaly

    It is now widely accepted that microgram extraterrestrial particles from the sporadic background are the major contributors of metals in the Mesosphere/Lower Thermosphere (MLT). It is also well established that this material gives rise to the upper atmospheric metallic and ion layers observed by radars and lidars. In addition, micrometeoroids are believed to be an important source of condensation nuclei (CN), the existence of which is a prerequisite for the formation of NLC and PMSE particles in the polar mesopause region. In order to understand how this flux gives rise to these atmospheric phenomena, accurate knowledge of the global meteoric input function (MIF) is critical. This function accounts for the annual and diurnal variations of meteor rates, global distribution, directionality, and velocity and mass distributions. Estimates of most of these parameters are still under investigation. In this talk, we present results of a detailed model of the diurnal, seasonal and geographical variability of micrometeoric activity in the upper atmosphere. The principal goal of this effort is to construct a new and more precise sporadic MIF needed for the subsequent modeling of the atmospheric chemistry of meteoric material and the origin and formation of metal layers in the MLT. The model uses Monte Carlo simulation techniques and includes an accepted mass flux provided by the six main known meteor sources (i.e. orbital families of dust) and a detailed modeling of the meteoroid atmospheric entry physics. We compare the model predictions with meteor head-echo observations using the 430 MHz Arecibo (AO) radar in Puerto Rico and the 450 MHz Advanced Modular ISR at Poker Flat (PFISR), AK. The results indicate that although the Earth's Apex-centered source, thought to be composed mostly of dust from long-period comets, is required to be only about ~33% of the dust in the Solar System at 1 AU, it accounts for 60 to 70% of the dust that actually ablates in the atmosphere. These

  4. Coupling LiDAR and thermal imagery to model the effects of riparian vegetation shade and groundwater inputs on summer river temperature.

    Science.gov (United States)

    Wawrzyniak, Vincent; Allemand, Pascal; Bailly, Sarah; Lejot, Jérôme; Piégay, Hervé

    2017-03-16

    In the context of global warming, it is important to understand the drivers controlling river temperature in order to mitigate temperature increases. A modeling approach can be useful for quantifying the respective importance of the different drivers, notably groundwater inputs and riparian shading, which are potentially critical for reducing summer temperature. In this study, we use a one-dimensional deterministic model to predict summer water temperature at an hourly time step over a 21 km reach of the lower Ain River (France). This sinuous gravel-bed river undergoes summer temperature increases with potential impacts on salmonid populations. The model considers heat fluxes at the water-air interface, attenuation of solar radiation by riparian forest, groundwater inputs and the hydraulic characteristics of the river. Modeling is performed over two five-day periods during the summers of 2010 and 2011. River properties are obtained from hydraulic modeling based on cross-section profiles and water level surveys. Shadows cast by the vegetation on the river surface are modeled using LiDAR data. Groundwater inputs are determined using airborne thermal infrared (TIR) images and hydrological data. Results indicate that vegetation and groundwater inputs can mitigate high water temperatures during summer. The riparian shading effect is fairly similar between the two periods (-0.26±0.12°C and -0.31±0.18°C). The cooling effect of groundwater inputs varies between the two studied periods: when groundwater discharge represents 16% of the river discharge, it cools the river by 0.68±0.13°C, whereas the effect is very low (0.11±0.01°C) when groundwater contributes only 2% of the discharge. The effect of shading varies through the day, low in the morning and high during the afternoon and evening, whereas that of groundwater inputs is more constant. Overall, the effect of riparian vegetation and groundwater inputs represents about 10% in 2010 and 24% in 2011
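    The groundwater cooling figures reported above follow the logic of a simple discharge-weighted mixing balance. A minimal sketch, with invented discharges and temperatures rather than the paper's data:

```python
# Hypothetical two-component mixing: river and groundwater inflows with equal
# heat capacity. All numbers are assumptions for illustration only.

def mixed_temperature(q_river, t_river, q_gw, t_gw):
    """Discharge-weighted mixing temperature at a groundwater inflow."""
    return (q_river * t_river + q_gw * t_gw) / (q_river + q_gw)

# Groundwater at 12 C entering a 24 C river (total discharge 100 units)
t_16 = mixed_temperature(84.0, 24.0, 16.0, 12.0)  # gw = 16% of total discharge
t_2 = mixed_temperature(98.0, 24.0, 2.0, 12.0)    # gw = 2% of total discharge
print(round(24.0 - t_16, 2), round(24.0 - t_2, 2))  # cooling: 1.92 0.24
```

    The cooling scales with the groundwater fraction, which is why the 16% scenario cools the reach far more than the 2% one.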

  5. STD-dependent and independent encoding of input irregularity as spike rate in a computational model of a cerebellar nucleus neuron.

    Science.gov (United States)

    Luthman, Johannes; Hoebeek, Freek E; Maex, Reinoud; Davey, Neil; Adams, Rod; De Zeeuw, Chris I; Steuber, Volker

    2011-12-01

    Neurons in the cerebellar nuclei (CN) receive inhibitory inputs from Purkinje cells in the cerebellar cortex and provide the major output from the cerebellum, but their computational function is not well understood. It has recently been shown that the spike activity of Purkinje cells is more regular than previously assumed and that this regularity can affect motor behaviour. We use a conductance-based model of a CN neuron to study the effect of the regularity of Purkinje cell spiking on CN neuron activity. We find that increasing the irregularity of Purkinje cell activity accelerates the CN neuron spike rate and that the mechanism of this recoding of input irregularity as output spike rate depends on the number of Purkinje cells converging onto a CN neuron. For high convergence ratios, the irregularity induced spike rate acceleration depends on short-term depression (STD) at the Purkinje cell synapses. At low convergence ratios, or for synchronised Purkinje cell input, the firing rate increase is independent of STD. The transformation of input irregularity into output spike rate occurs in response to artificial input spike trains as well as to spike trains recorded from Purkinje cells in tottering mice, which show highly irregular spiking patterns. Our results suggest that STD may contribute to the accelerated CN spike rate in tottering mice and they raise the possibility that the deficits in motor control in these mutants partly result as a pathological consequence of this natural form of plasticity.
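    The STD-dependent mechanism described above can be sketched with a Tsodyks-Markram-style depressing synapse. This is a minimal illustration with assumed parameters, not the paper's conductance-based CN model: at the same mean rate, an irregular (Poisson) inhibitory train releases less synaptic resource per spike than a perfectly regular train, so average inhibition drops and the postsynaptic neuron can fire faster.

```python
# Assumed depression parameters (tau_rec, use); illustrative only.
import math
import random

def mean_efficacy(isis, tau_rec=0.2, use=0.5):
    """Average fraction of synaptic resource released per spike."""
    x, total = 1.0, 0.0
    for isi in isis:
        x = 1.0 - (1.0 - x) * math.exp(-isi / tau_rec)  # recovery between spikes
        total += use * x                                # release at the spike
        x *= 1.0 - use                                  # depletion after release
    return total / len(isis)

random.seed(0)
rate = 50.0  # Hz; identical mean rate for both trains
regular = [1.0 / rate] * 5000
irregular = [random.expovariate(rate) for _ in range(5000)]

# Depression penalizes the clustered short ISIs of the irregular train more
# than its long ISIs can compensate, so mean inhibition per spike is lower:
print(mean_efficacy(regular) > mean_efficacy(irregular))  # True
```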

  6. Improving the temperature predictions of subsurface thermal models by using high-quality input data. Part 1: Uncertainty analysis of the thermal-conductivity parameterization

    DEFF Research Database (Denmark)

    Fuchs, Sven; Balling, Niels

    2016-01-01

    The subsurface temperature field and the geothermal conditions in sedimentary basins are frequently examined by using numerical thermal models. For those models, detailed knowledge of rock thermal properties is paramount for a reliable parameterization of layer properties and boundary conditions...... against known observed temperatures of good quality. Results clearly show that the use of location-specific well-log derived rock thermal properties and the integration of laterally varying input data (reflecting changes of lithofacies) significantly improves the temperature prediction...

  7. 投入产出偏差分析模型的建立与应用%Input-output deviation model and its application

    Institute of Scientific and Technical Information of China (English)

    宋文新; 宋辉; 王振涛

    2003-01-01

    In order to study how, and to what extent, factors such as technological progress, final demand and imports-exports affect the total amount and structure of the national economy, this paper establishes an input-output deviation model based on the basic input-output model. Application in practice shows good results, providing a new quantitative analysis method for this class of problems.
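    The basic input-output model that the deviation model above builds on is the Leontief quantity model, x = (I - A)^(-1) f. A minimal sketch with made-up technical coefficients, not data from the paper:

```python
# Leontief input-output sketch; the 2-sector coefficients are invented.
import numpy as np

A = np.array([[0.2, 0.3],    # technical coefficients: input from sector i per unit output of sector j
              [0.1, 0.4]])
f = np.array([100.0, 50.0])  # final demand by sector

# Total output required to satisfy final demand: x = (I - A)^{-1} f
x = np.linalg.solve(np.eye(2) - A, f)
print(np.round(x, 1))  # [166.7 111.1]
```

    Deviation analysis then compares such solutions under perturbed coefficients or demand to attribute changes in total output to each factor.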

  8. A TRANSFORMATION APPROACH FOR COLLABORATION BASED REQUIREMENT MODELS

    Directory of Open Access Journals (Sweden)

    Ahmed Harbouche

    2012-02-01

    Full Text Available Distributed software engineering is widely recognized as a complex task. Among the inherent complexities is the process of obtaining a system design from its global requirement specification. This paper deals with such a transformation process and suggests an approach to derive the behavior of given system components, in the form of distributed Finite State Machines, from the global system requirements, expressed in an augmented UML Activity Diagrams notation. The suggested approach is summarized in three steps: the definition of the appropriate source meta-model (requirements meta-model), the definition of the target design meta-model and the definition of the rules to govern the transformation during the derivation process. The derivation process transforms the global system requirements, described as UML activity diagrams (extended with collaborations), into system role behaviors represented as UML finite state machines. The approach is implemented using the Atlas Transformation Language (ATL).

  9. Modeling the ionosphere-thermosphere response to a geomagnetic storm using physics-based magnetospheric energy input: OpenGGCM-CTIM results

    Directory of Open Access Journals (Sweden)

    Connor Hyunju Kim

    2016-01-01

    Full Text Available The magnetosphere is a major source of energy for the Earth’s ionosphere and thermosphere (IT) system. Current IT models drive the upper atmosphere using empirically calculated magnetospheric energy input. Thus, they do not sufficiently capture the storm-time dynamics, particularly at high latitudes. To improve the prediction capability of IT models, a physics-based magnetospheric input is necessary. Here, we use the Open Geospace General Circulation Model (OpenGGCM) coupled with the Coupled Thermosphere Ionosphere Model (CTIM). OpenGGCM calculates a three-dimensional global magnetosphere and a two-dimensional high-latitude ionosphere by solving resistive magnetohydrodynamic (MHD) equations with solar wind input. CTIM calculates a global thermosphere and a high-latitude ionosphere in three dimensions using realistic magnetospheric inputs from the OpenGGCM. We investigate whether the coupled model improves the storm-time IT responses by simulating a geomagnetic storm that is preceded by a strong solar wind pressure front on August 24, 2005. We compare the OpenGGCM-CTIM results with low-Earth-orbit satellite observations and with the model results of the Coupled Thermosphere-Ionosphere-Plasmasphere electrodynamics model (CTIPe). CTIPe is an up-to-date version of CTIM that incorporates more IT dynamics, such as a low-latitude ionosphere and a plasmasphere, but uses empirical magnetospheric input. OpenGGCM-CTIM reproduces localized neutral density peaks at ~ 400 km altitude in the high-latitude dayside regions, in agreement with in situ observations during the pressure shock and the early phase of the storm. Although CTIPe is in many respects a superior model to CTIM, it misses these localized enhancements. Unlike the empirical input models used by CTIPe, OpenGGCM-CTIM more faithfully produces localized increases of both auroral precipitation and ionospheric electric fields near the high-latitude dayside region after the pressure shock and after the storm onset

  10. Development and optimization of a wildfire plume rise model based on remote sensing data inputs - Part 2

    Science.gov (United States)

    Paugam, R.; Wooster, M.; Atherton, J.; Freitas, S. R.; Schultz, M. G.; Kaiser, J. W.

    2015-03-01

    Using sequentially a Simulated Annealing algorithm and a Markov chain Monte Carlo uncertainty test, and to help ensure appropriate convergence on suitable parameter values, we use a training dataset consisting only of fires where a number of specific quality criteria are met, including local ambient wind shear limits derived from the ECMWF and MISR data, and "steady state" plumes and fires showing only relatively small changes between consecutive MODIS observations. Using our optimised plume rise model (PRMv2) with information from all MODIS-detected active fires in 2003 over North America, with outputs gridded to a 0.1° horizontal and 500 m vertical resolution mesh, we are able to derive wildfire injection height distributions whose maxima extend to the higher altitudes seen in actual observation-based wildfire plume datasets, unlike those derived via the original plume model or any other parametrization tested herein. We also find our model to be the only one tested that correctly simulates the very high plume (6 to 8 km a.s.l.) created by a large fire in Alberta (Canada) on 17 August 2003, though even our approach does not reach the stratosphere as the real plume is expected to have done. Our results lead us to believe that our PRMv2 approach to modelling the injection height of wildfire plumes is a strong candidate for inclusion in CTMs aiming to represent this process, but we note that significant advances in the spatio-temporal resolution of the data required to feed the model would very likely bring key improvements in our ability to represent such phenomena, and that challenges remain in the detailed validation of such simulations due to the relative sparseness of plume height observations and their currently rather limited temporal coverage, which is not necessarily well matched to when fires are most active (MISR being confined to morning observations, for example).
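    The parameter-tuning step described above can be illustrated with a bare-bones simulated-annealing loop. This is a toy sketch only: the power-law "plume height" model, the synthetic data and the cooling schedule are all invented for the example, not taken from PRMv2.

```python
# Recovering two free parameters (a, b) of a toy curve h = a * FRP**b by
# simulated annealing. Everything here is an assumed stand-in.
import math
import random

random.seed(3)
true_a, true_b = 1.2, 0.4
frp = [10.0 + 12.5 * i for i in range(40)]   # toy fire radiative power axis
obs = [true_a * f ** true_b for f in frp]    # noise-free synthetic "plume heights"

def cost(a, b):
    return sum((a * f ** b - h) ** 2 for f, h in zip(frp, obs)) / len(frp)

a, b = 3.0, 0.8                              # deliberately poor initial guess
c = cost(a, b)
best = (a, b, c)
temp = 1.0
for _ in range(20000):
    temp = 0.999 * temp + 1e-6               # geometric cooling with a small floor
    na = a + random.gauss(0.0, 0.05)
    nb = min(1.0, max(0.05, b + random.gauss(0.0, 0.02)))
    nc = cost(na, nb)
    # accept downhill moves always, uphill moves with probability e^(-dc/T)
    if nc < c or random.random() < math.exp(-(nc - c) / temp):
        a, b, c = na, nb, nc
        if c < best[2]:
            best = (a, b, c)
print(round(best[0], 2), round(best[1], 2))
```

    In the real application the annealing result would then seed an MCMC stage to characterise parameter uncertainty, as the abstract describes.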

  11. The Advanced LIGO Input Optics

    CERN Document Server

    Mueller, Chris; Ciani, Giacomo; DeRosa, Ryan; Effler, Anamaria; Feldbaum, David; Frolov, Valery; Fulda, Paul; Gleason, Joseph; Heintze, Matthew; King, Eleanor; Kokeyama, Keiko; Korth, William; Martin, Rodica; Mullavey, Adam; Poeld, Jan; Quetschke, Volker; Reitze, David; Tanner, David; Williams, Luke; Mueller, Guido

    2016-01-01

    The Advanced LIGO gravitational wave detectors are nearing their design sensitivity and should begin taking meaningful astrophysical data in the fall of 2015. These resonant optical interferometers will have unprecedented sensitivity to the strains caused by passing gravitational waves. The input optics play a significant part in allowing these devices to reach such sensitivities. Residing between the pre-stabilized laser and the main interferometer, the input optics is tasked with preparing the laser beam for interferometry at the sub-attometer level while operating at continuous wave input power levels ranging from 100 mW to 150 W. These extreme operating conditions required every major component to be custom designed. These designs draw heavily on the experience and understanding gained during the operation of Initial LIGO and Enhanced LIGO. In this article we report on how the components of the input optics were designed to meet their stringent requirements and present measurements showing how well they h...

  12. A pre-calibration approach to selecting optimum inputs for hydrological models in data-scarce regions: a case study in Jordan

    Science.gov (United States)

    Tarawneh, Esraa; Bridge, Jonathan; Macdonald, Neil

    2016-04-01

    This study reports a pre-calibration methodology to select optimum inputs to hydrological models in dryland environments, demonstrated on the semi-arid Wala catchment, Jordan (1743 km2). The Soil and Water Assessment Tool (SWAT) is used to construct eighteen model scenarios combining three land-use, two soil and three weather datasets spanning 1979-2002. Weather datasets include locally-recorded precipitation and temperature data and global reanalysis data products. Soil data comprise a high-resolution map constructed from national soil survey data and a significantly lower-resolution global soil map. Land-use maps are obtained from global and local sources, with some modifications applied to the latter using available descriptive land-use information. Variability in model performance arising from the different dataset combinations is assessed by testing uncalibrated model outputs against discharge and sediment load data using r2, Nash-Sutcliffe Efficiency (NSE), RSR and PBIAS. A ranking procedure identifies best-performing input data combinations and trends among the scenarios. In the case of Wala, Jordan, global weather inputs yield considerable improvements over discontinuous local datasets; conversely, local high-resolution soil mapping data perform considerably better than globally-available soil data. NSE values vary from 0.56 to -12 and 0.79 to -85 for the best- and worst-performing scenarios against observed discharge and sediment data, respectively. Full calibration remains an essential step prior to model application. However, the methodology presented provides a transparent, transferable approach to selecting the most robust suite of input data and hence minimising structural biases in model performance arising when calibration proceeds from low-quality initial assumptions. In regions where data are scarce, their quality is unregulated and survey resources are limited, such methods are essential in improving confidence in models which underpin critical water
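    The goodness-of-fit metrics used to rank the uncalibrated scenarios above are standard and easy to state in code. The series below are invented, not the Wala catchment data:

```python
# NSE, PBIAS and RSR as commonly defined for hydrological model evaluation.
import numpy as np

def nse(obs, sim):
    obs, sim = np.asarray(obs), np.asarray(sim)
    return 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)

def pbias(obs, sim):
    obs, sim = np.asarray(obs), np.asarray(sim)
    return 100.0 * np.sum(obs - sim) / np.sum(obs)

def rsr(obs, sim):
    obs, sim = np.asarray(obs), np.asarray(sim)
    return np.sqrt(np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2))

obs = [3.1, 5.4, 2.2, 8.9, 4.0]   # invented discharge observations
sim = [2.8, 5.0, 2.5, 8.1, 4.6]   # invented uncalibrated model output
print(round(nse(obs, sim), 3), round(pbias(obs, sim), 2), round(rsr(obs, sim), 3))
```

    Ranking scenarios then reduces to sorting on these scores (NSE and RSR reward low error variance, PBIAS exposes systematic over- or under-prediction).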

  13. Modeling the Effects of Irrigation on Land Surface Fluxes and States over the Conterminous United States: Sensitivity to Input Data and Model Parameters

    Energy Technology Data Exchange (ETDEWEB)

    Leng, Guoyong; Huang, Maoyi; Tang, Qiuhong; Sacks, William J.; Lei, Huimin; Leung, Lai-Yung R.

    2013-09-16

    Previous studies on irrigation impacts on land surface fluxes/states were mainly conducted as sensitivity experiments, with limited analysis of uncertainties from the input data and model irrigation schemes used. In this study, we calibrated and evaluated the performance of irrigation water use simulated by the Community Land Model version 4 (CLM4) against observations from agriculture census. We investigated the impacts of irrigation on land surface fluxes and states over the conterminous United States (CONUS) and explored possible directions of improvement. Specifically, we found large uncertainty in the irrigation area data from two widely used sources and CLM4 tended to produce unrealistically large temporal variations of irrigation demand for applications at the water resources region scale over CONUS. At seasonal to interannual time scales, the effects of irrigation on surface energy partitioning appeared to be large and persistent, and more pronounced in dry than wet years. Even with model calibration to yield overall good agreement with the irrigation amounts from the National Agricultural Statistics Service (NASS), differences between the two irrigation area datasets still dominate the differences in the interannual variability of land surface response to irrigation. Our results suggest that irrigation amount simulated by CLM4 can be improved by (1) calibrating model parameter values to account for regional differences in irrigation demand and (2) accurate representation of the spatial distribution and intensity of irrigated areas.

  14. Input management of production systems.

    Science.gov (United States)

    Odum, E P

    1989-01-13

    Nonpoint sources of pollution, which are largely responsible for stressing regional and global life-supporting atmosphere, soil, and water, can only be reduced (and ultimately controlled) by input management that involves increasing the efficiency of production systems and reducing the inputs of environmentally damaging materials. Input management requires a major change, an about-face, in the approach to management of agriculture, power plants, and industries because the focus is on waste reduction and recycling rather than on waste disposal. For large-scale ecosystem-level situations a top-down hierarchical approach is suggested and illustrated by recent research in agroecology and landscape ecology.

  15. A modified NARMAX model-based self-tuner with fault tolerance for unknown nonlinear stochastic hybrid systems with an input-output direct feed-through term.

    Science.gov (United States)

    Tsai, Jason S-H; Hsu, Wen-Teng; Lin, Long-Guei; Guo, Shu-Mei; Tann, Joseph W

    2014-01-01

    A modified nonlinear autoregressive moving average with exogenous inputs (NARMAX) model-based state-space self-tuner with fault tolerance is proposed in this paper for the unknown nonlinear stochastic hybrid system with a direct transmission matrix from input to output. Through the off-line observer/Kalman filter identification method, one has a good initial guess of the modified NARMAX model to reduce the on-line system identification process time. Then, based on the modified NARMAX-based system identification, a corresponding adaptive digital control scheme is presented for the unknown continuous-time nonlinear system, with an input-output direct transmission term, which also has measurement and system noises and inaccessible system states. Besides, an effective state-space self-tuner with a fault-tolerance scheme is presented for the unknown multivariable stochastic system. A quantitative criterion is suggested by comparing the innovation process errors estimated by the Kalman filter estimation algorithm, so that a weighting-matrix resetting technique, which adjusts and resets the covariance matrices of the parameter estimates obtained by the Kalman filter estimation algorithm, is utilized to achieve parameter estimation for faulty-system recovery. Consequently, the proposed method can effectively cope with partially abrupt and/or gradual system faults and input failures through fault detection.
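    The innovation-monitoring-plus-covariance-reset idea can be shown in miniature. This is a toy scalar sketch with assumed dynamics and thresholds, not the paper's NARMAX scheme: a Kalman filter's normalized innovation flags an abrupt fault, and resetting the covariance lets the estimator re-converge quickly.

```python
# Scalar Kalman filter tracking a level that jumps abruptly (the "fault").
# All noise variances and the detection threshold are assumptions.
import random

random.seed(7)
x_est, p = 0.0, 1.0
q, r = 1e-4, 0.04            # process / measurement noise variances
faults = []
for k in range(200):
    truth = 1.0 if k < 100 else 3.0          # abrupt system fault at k = 100
    z = truth + random.gauss(0.0, 0.2)
    p += q                                   # predict covariance
    s = p + r                                # innovation variance
    innov = z - x_est
    if innov * innov / s > 25.0:             # ~5-sigma normalized innovation test
        faults.append(k)
        p = 1.0                              # covariance reset for fast recovery
        s = p + r
    gain = p / s
    x_est += gain * innov
    p *= 1.0 - gain
print(faults, round(x_est, 1))
```

    Before the reset the gain has shrunk and the filter would track the post-fault level only slowly; inflating the covariance restores a near-unity gain for one step, mimicking the paper's weighting-matrix resetting for faulty-system recovery.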

  16. Ecological network analysis of an urban metabolic system based on input-output tables: model development and case study for Beijing.

    Science.gov (United States)

    Zhang, Yan; Zheng, Hongmei; Fath, Brian D; Liu, Hong; Yang, Zhifeng; Liu, Gengyuan; Su, Meirong

    2014-01-15

    If cities are considered as "superorganisms", then disorders of their metabolic processes cause something analogous to an "urban disease". It is therefore helpful to identify the causes of such disorders by analyzing the inner mechanisms that control urban metabolic processes. Combining input-output analysis with ecological network analysis lets researchers study the functional relationships and hierarchy of the urban metabolic processes, thereby providing direct support for the analysis of urban disease. In this paper, using Beijing as an example, we develop a model of an urban metabolic system that accounts for the intensity of the embodied ecological elements using monetary input-output tables from 1997, 2000, 2002, 2005, and 2007, and use this data to compile the corresponding physical input-output tables. This approach described the various flows of ecological elements through urban metabolic processes and let us build an ecological network model with 32 components. Then, using two methods from ecological network analysis (flow analysis and utility analysis), we quantitatively analyzed the physical input-output relationships among urban components, determined the ecological hierarchy of the components of the metabolic system, and determined the distribution of advantage-dominated and disadvantage-dominated relationships, thereby providing scientific support to guide restructuring of the urban metabolic system in an effort to prevent or cure urban "diseases".
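    The flow-analysis step mentioned above has a compact matrix form. A minimal sketch in the spirit of ecological network analysis, with invented three-compartment flows rather than the Beijing tables: direct flows are normalized by donor throughflow and integrated over all path lengths via N = I + G + G^2 + ... = (I - G)^(-1).

```python
# Toy ecological network flow analysis; all flow values are assumptions.
import numpy as np

F = np.array([[ 0.0, 10.0, 0.0],   # F[i, j]: flow delivered from compartment j to i
              [20.0,  0.0, 5.0],
              [ 5.0, 15.0, 0.0]])
z = np.array([15.0, 0.0, 5.0])     # boundary (external) inputs

T = F.sum(axis=1) + z              # node throughflow (total inflow per compartment)
G = F / T[np.newaxis, :]           # dimensionless direct flows, G[i, j] = F[i, j] / T[j]
N = np.linalg.inv(np.eye(3) - G)   # integral flow matrix: direct plus all indirect paths
print(np.round(N, 2))
```

    Utility analysis proceeds analogously on a net-flow matrix; the integral matrices are what reveal the indirect, system-level relationships between components.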

  17. NVC Based Model for Selecting Effective Requirement Elicitation Technique

    Directory of Open Access Journals (Sweden)

    Md. Rizwan Beg

    2012-10-01

    Full Text Available The Requirement Engineering process starts from the gathering of requirements, i.e., requirements elicitation. Requirements elicitation (RE) is the base building block for a software project and has a very high impact on the subsequent design and build phases as well. Failure to accurately capture system requirements is a major factor in the failure of most software projects. Due to the criticality and impact of this phase, it is very important to perform requirements elicitation as close to perfectly as possible. One of the most difficult jobs for the elicitor is to select an appropriate technique for eliciting requirements. Interviewing and interacting with stakeholders during elicitation is a communication-intensive activity involving verbal and non-verbal communication (NVC). The elicitor should give emphasis to non-verbal communication along with verbal communication so that requirements are recorded more efficiently and effectively. In this paper we propose a model in which stakeholders are classified by observing their non-verbal communication, and this classification is used as a basis for elicitation technique selection. We also propose an efficient plan for requirements elicitation intended to overcome the constraints faced by the elicitor.

  18. Formal Requirements Modeling for Reactive Systems with Coloured Petri Nets

    DEFF Research Database (Denmark)

    Tjell, Simon

    This dissertation presents the contributions of seven publications all concerned with the application of Coloured Petri Nets (CPN) to requirements modeling for reactive systems. The publications are introduced along with relevant background material and related work, and their contributions...... interface composed of recognizable artifacts and activities. The presentation of the three publications related to Use Cases is followed by the presentation of a publication formalizing some of the guidelines applied for structuring the CPN requirements models, namely the guidelines that make it possible...... activity. The traces are automatically recorded during execution of the model. The second publication presents a formally specified framework for automating a large part of the tasks related to integrating Problem Frames with CPN. The framework is specified in VDM++, and allows the modeler to automatically...

  19. On data requirements for calibration of integrated models for urban water systems.

    Science.gov (United States)

    Langeveld, Jeroen; Nopens, Ingmar; Schilperoort, Remy; Benedetti, Lorenzo; de Klein, Jeroen; Amerlinck, Youri; Weijers, Stefan

    2013-01-01

    Modeling of integrated urban water systems (IUWS) has seen a rapid development in recent years. Models and software are available that describe the process dynamics in sewers, wastewater treatment plants (WWTPs), receiving water systems as well as at the interfaces between the submodels. Successful applications of integrated modeling are, however, relatively scarce. One of the reasons for this is the lack of high-quality monitoring data with the required spatial and temporal resolution and accuracy to calibrate and validate the integrated models, even though the state of the art of monitoring itself is no longer the limiting factor. This paper discusses the efforts required to meet the data requirements associated with integrated modeling and describes the methods applied to validate the monitoring data and to use submodels as software sensors to provide the necessary input for other submodels. The main conclusion of the paper is that state-of-the-art monitoring is in principle sufficient to provide the data necessary to calibrate integrated models, but practical limitations resulting in incomplete data-sets hamper widespread application. In order to overcome these difficulties, redundancy of future monitoring networks should be increased and, at the same time, data handling (including data validation, mining and assimilation) should receive much more attention.
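    Automated screening is the first step of the data validation the paper calls for. A minimal sketch with assumed thresholds (not the authors' procedure): out-of-range values and "flatlined" stuck-sensor runs are flagged before a series is used as model input.

```python
# Simple validation pass over a monitoring series; thresholds are assumptions.

def validate(series, lo, hi, max_flat=3):
    """Return one flag per sample: True = suspect (out of range or flatlined)."""
    flags = []
    run = 1
    for k, v in enumerate(series):
        bad = not (lo <= v <= hi)
        if k > 0 and v == series[k - 1]:
            run += 1                        # length of the current constant run
        else:
            run = 1
        flags.append(bad or run > max_flat)
    return flags

level = [1.2, 1.3, 1.3, 1.3, 1.3, 9.9, 1.4]   # water level (m); 9.9 = sensor error
print(validate(level, 0.0, 5.0))
```

    Flagged samples would then be gap-filled or replaced by software-sensor estimates from neighbouring submodels, rather than fed raw into the integrated model.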

  20. A New Rapid Simplified Model for Urban Rainstorm Inundation with Low Data Requirements

    Directory of Open Access Journals (Sweden)

    Ji Shen

    2016-11-01

    Full Text Available This paper proposes a new rapid simplified inundation model (NRSIM) for flood inundation caused by rainstorms in an urban setting that can simulate the urban rainstorm inundation extent and depth in a data-scarce area. Drainage basins delineated from a floodplain map according to the distribution of the inundation sources serve as the calculation cells of NRSIM. To reduce data requirements and computational costs of the model, the internal topography of each calculation cell is simplified to a circular cone, and a mass conservation equation based on a volume spreading algorithm is established to simulate the interior water filling process. Moreover, an improved D8 algorithm is outlined for the simulation of water spilling between different cells. The performance of NRSIM is evaluated by comparing the simulated results with those from a traditional rapid flood spreading model (TRFSM) for various resolutions of digital elevation model (DEM) data. The results are as follows: (1) given high-resolution DEM data input, the TRFSM model has better performance in terms of precision than NRSIM; (2) the results from TRFSM are seriously affected by the decrease in DEM data resolution, whereas those from NRSIM are not; and (3) NRSIM always requires less computational time than TRFSM. Apparently, compared with the complex hydrodynamic or traditional rapid flood spreading model, NRSIM has much better applicability and cost-efficiency in real-time urban inundation forecasting for data-sparse areas.
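    The cone simplification makes the cell-filling step analytic. A hedged sketch (the geometry below is a plausible reading, not NRSIM's exact formulation): treating a cell's depression as an inverted circular cone with side slope s, depth follows from stored volume in closed form.

```python
# Inverted-cone filling: at depth h the water surface radius is r = h / s,
# so V = pi * r^2 * h / 3 = pi * h^3 / (3 * s^2). Parameters are illustrative.
import math

def depth_from_volume(volume, slope):
    """Water depth h (m) holding a given volume (m^3) in an inverted cone."""
    return (3.0 * volume * slope ** 2 / math.pi) ** (1.0 / 3.0)

def volume_from_depth(h, slope):
    """Inverse relation, used when checking spill volumes between cells."""
    return math.pi * h ** 3 / (3.0 * slope ** 2)

h = depth_from_volume(5000.0, 0.02)   # 5000 m^3 spread in a gentle 2% depression
print(round(h, 2), round(volume_from_depth(h, 0.02), 1))
```

    Mass conservation per cell then reduces to tracking volume; any excess above the cell's rim volume is routed to neighbours by the improved D8 spilling step.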

  1. NASA Standard for Models and Simulations: Philosophy and Requirements Overview

    Science.gov (United States)

    Blattnig, Steve R.; Luckring, James M.; Morrison, Joseph H.; Sylvester, Andre J.; Tripathi, Ram K.; Zang, Thomas A.

    2013-01-01

    Following the Columbia Accident Investigation Board report, the NASA Administrator chartered an executive team (known as the Diaz Team) to identify those CAIB report elements with NASA-wide applicability and to develop corrective measures to address each element. One such measure was the development of a standard for the development, documentation, and operation of models and simulations. This report describes the philosophy and requirements overview of the resulting NASA Standard for Models and Simulations.

  2. Single High Fidelity Geometric Data Sets for LCM - Model Requirements

    Science.gov (United States)

    2006-11-01

    Excerpts: material name (for example, an HY80 steel) plus additional material requirements (heat treatment, etc.); creation of a more detailed description of the data ... Typical Stress-Strain Curve for Steel (adapted from Ref 59) ... structures are steel, aluminum and composites. The structural components that make up a global FEA model drive the fidelity of the model. For example

  3. Requirements for a next generation global flood inundation models

    Science.gov (United States)

    Bates, P. D.; Neal, J. C.; Smith, A.; Sampson, C. C.

    2016-12-01

    In this paper we review the current status of global hydrodynamic models for flood inundation prediction and highlight recent successes and current limitations. Building on this analysis we then go on to consider what is required to develop the next generation of such schemes and show that to achieve this a number of fundamental science problems will need to be overcome. New data sets and new types of analysis will be required, and we show that these will only partially be met by currently planned satellite missions and data collection initiatives. A particular example is the quality of available global Digital Elevation data. The current best data set for flood modelling, SRTM, is only available at a relatively modest 30 m resolution, contains pixel-to-pixel noise of 6 m and is corrupted by surface artefacts. Creative processing techniques have sought to address these issues with some success, but fundamentally the quality of the available global terrain data limits flood modelling and needs to be overcome. Similar arguments can be made for many other elements of global hydrodynamic models including their bathymetry data, boundary conditions, flood defence information and model validation data. We therefore systematically review each component of global flood models and document whether planned new technology will solve current limitations and, if not, what exactly will be required to do so.
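    The impact of metre-scale DEM noise on flood mapping is easy to demonstrate synthetically. This toy example (invented terrain, assumed noise level in the spirit of the SRTM figures above) shows how 6 m pixel noise overwhelms the shallow depths typical of floodplains:

```python
# Synthetic gentle floodplain vs. the same terrain with SRTM-like pixel noise.
import numpy as np

rng = np.random.default_rng(0)
true_dem = np.linspace(0.0, 4.0, 10000)                      # elevations 0-4 m
noisy_dem = true_dem + rng.normal(0.0, 6.0, true_dem.size)   # 6 m pixel noise

water_level = 2.0                       # a substantial 2 m flood
true_wet = true_dem < water_level
noisy_wet = noisy_dem < water_level
error_rate = float(np.mean(true_wet != noisy_wet))
print(round(error_rate, 2))             # a large share of pixels misclassified
```

    With noise three times the flood depth, the wet/dry classification degrades toward a coin flip, which is why vegetation removal and noise filtering of SRTM are prerequisites for credible global flood modelling.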

  4. Fusing Quantitative Requirements Analysis with Model-based Systems Engineering

    Science.gov (United States)

    Cornford, Steven L.; Feather, Martin S.; Heron, Vance A.; Jenkins, J. Steven

    2006-01-01

    A vision is presented for fusing quantitative requirements analysis with model-based systems engineering. This vision draws upon and combines emergent themes in the engineering milieu. "Requirements engineering" provides means to explicitly represent requirements (both functional and non-functional) as constraints and preferences on acceptable solutions, and emphasizes early-lifecycle review, analysis and verification of design and development plans. "Design by shopping" emphasizes revealing the space of options available from which to choose (without presuming that all selection criteria have previously been elicited), and provides means to make understandable the range of choices and their ramifications. "Model-based engineering" emphasizes the goal of utilizing a formal representation of all aspects of system design, from development through operations, and provides powerful tool suites that support the practical application of these principles. A first step prototype towards this vision is described, embodying the key capabilities. Illustrations, implications, further challenges and opportunities are outlined.

  6. Models of protein and amino acid requirements for cattle

    Directory of Open Access Journals (Sweden)

    Luis Orlindo Tedeschi

    2015-03-01

    Protein supply and requirements of ruminants have been studied for more than a century. These studies have produced a large body of scientific information about the digestion and metabolism of protein by ruminants, as well as the characterization of dietary protein in order to maximize animal performance. During the 1980s and 1990s, when computers became more accessible and powerful, scientists began to conceptualize and develop mathematical nutrition models, and to program them into computers to assist with ration balancing and formulation for domesticated ruminants, specifically dairy and beef cattle. The most commonly known nutrition models developed during this period were those of the National Research Council (NRC) in the United States, the Agricultural Research Council (ARC) in the United Kingdom, the Institut National de la Recherche Agronomique (INRA) in France, and the Commonwealth Scientific and Industrial Research Organization (CSIRO) in Australia. Others were derivative works from these models with different degrees of modification in the supply or requirement calculations, and in their modeling nature (e.g., static or dynamic, mechanistic or deterministic). Circa the 1990s, most models adopted the metabolizable protein (MP) system over the crude protein (CP) and digestible CP systems to estimate the supply of MP, and the factorial system to calculate the MP required by the animal. The MP system included two portions of protein, the rumen-undegraded dietary CP (RUP) and the contribution of microbial CP (MCP), as the main sources of MP for the animal. Some models explicitly account for the impact of dry matter intake (DMI) on the MP required for maintenance (MPm; e.g., the Cornell Net Carbohydrate and Protein System, CNCPS, and the Dutch system, DVE/OEB), while others simply account for scurf, urinary, metabolic fecal, and endogenous contributions independently of DMI. All models included milk yield and its components in estimating the MP required for lactation.

  7. Assimilation of autoscaled data and regional and local ionospheric models as input sources for real-time 3D IRI modeling

    Science.gov (United States)

    Pezzopane, Michael; Zolesi, Bruno; Settimi, Alessandro; Pietrella, Marco; Cander, Ljiljiana; Bianchi, Cesidio; Pignatelli, Alessandro

    This paper describes the three-dimensional (3-D) electron density mapping of the Earth’s ionosphere by the assimilative IRI-SIRMUP-P (ISP) model. Specifically, it highlights how the joint utilization of autoscaled data such as the critical frequency foF2, the propagation factor M(3000)F2, and the electron density profile N(h) coming from several reference ionospheric stations, as input to the regional SIRMUP (Simplified Ionospheric Regional Model Updated) and global IRI (International Reference Ionosphere) models, can provide a valid tool for obtaining a real-time 3-D electron density mapping of the ionosphere. Performance of the ISP model is shown by comparing the electron density profiles given by the model with the ones measured at dedicated testing ionospheric stations for quiet and disturbed geomagnetic conditions. Overall, the representation of the ionosphere made by the ISP model proves to be better than the climatological representation made by only the IRI-URSI and IRI-CCIR models. However, there are some cases in which the assimilation of the autoscaled data from the reference stations causes either a strong underestimation or a strong overestimation of the real conditions of the ionosphere, and hence the IRI-URSI model performs better. This ISP misrepresentation, which occurs mainly when the number of reference stations covering the region mapped by the model is not sufficient to represent disturbed periods during which the ionosphere is highly variable in both space and time, is the theme for further ISP improvements. Synthesized oblique ionograms obtained by the combined application of the ISP model and IONORT (IONOspheric Ray-Tracing) are also described in this paper. The comparison between these and measured oblique ionograms, both in terms of the ionogram shape and of the Maximum Usable Frequency characterizing the considered radio path, confirms that the ISP model can represent the real conditions of the ionosphere more accurately than IRI.

  8. Modeling requirements for in situ vitrification. Revision 1

    Energy Technology Data Exchange (ETDEWEB)

    MacKinnon, R.J.; Mecham, D.C.; Hagrman, D.L.; Johnson, R.W.; Murray, P.E.; Slater, C.E.; Marwil, E.S.; Weaver, R.A.; Argyle, M.D.

    1991-11-01

    This document outlines the requirements for the model being developed at the INEL which will provide analytical support for the ISV technology assessment program. The model includes representations of the electric potential field, thermal transport with melting, gas and particulate release, vapor migration, off-gas combustion and process chemistry. The modeling objectives are to (1) help determine the safety of the process by assessing the air and surrounding soil radionuclide and chemical pollution hazards, the nuclear criticality hazard, and the explosion and fire hazards, (2) help determine the suitability of the ISV process for stabilizing the buried wastes involved, and (3) help design laboratory and field tests and interpret results therefrom.

  9. Required experimental accuracy to select between supersymmetrical models

    Indian Academy of Sciences (India)

    David Grellscheid

    2004-03-01

    We will present a method to decide a priori whether various supersymmetrical scenarios can be distinguished based on sparticle mass data alone. For each model, a scan over all free SUSY breaking parameters reveals the extent of that model's physically allowed region of sparticle-mass-space. Based on the geometrical configuration of these regions in mass-space, it is possible to obtain an estimate of the required accuracy of future sparticle mass measurements to distinguish between the models. We will illustrate this algorithm with an example. This talk is based on work done in collaboration with B C Allanach (LAPTH, Annecy) and F Quevedo (DAMTP, Cambridge).

  10. Thermodynamic models for bounding pressurant mass requirements of cryogenic tanks

    Science.gov (United States)

    Vandresar, Neil T.; Haberbusch, Mark S.

    1994-01-01

    Thermodynamic models have been formulated to predict lower and upper bounds for the mass of pressurant gas required to pressurize a cryogenic tank and then expel liquid from the tank. Limiting conditions are based on either thermal equilibrium or zero energy exchange between the pressurant gas and the initial tank contents. The models are independent of gravity level and allow specification of autogenous or non-condensable pressurants. Partial liquid fill levels may be specified for initial and final conditions. Model predictions are shown to successfully bound results from limited normal-gravity tests with condensable and non-condensable pressurant gases. Representative maximum collapse factor maps are presented for liquid hydrogen to show the effects of initial and final fill level on the range of pressurant gas requirements. Maximum collapse factors occur for partial expulsions with large final liquid fill fractions.
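
    For an ideal pressurant gas, the two limiting conditions reduce to evaluating the required gas mass at the two limiting temperatures. A minimal sketch of that bounding idea (helium pressurization of a liquid-hydrogen tank; all figures are illustrative assumptions, not values from the paper):

```python
def pressurant_mass(p_tank, v_displaced, t_gas, r_specific):
    """Ideal-gas mass needed to occupy the displaced liquid volume at tank
    pressure: m = p V / (R_specific T)."""
    return p_tank * v_displaced / (r_specific * t_gas)

# Helium pressurization of a liquid-hydrogen tank; all figures illustrative.
R_HE = 2077.0      # J/(kg K), specific gas constant of helium
P_TANK = 300e3     # Pa, expulsion pressure
V_EXPELLED = 1.0   # m^3 of liquid expelled (volume the gas must fill)
T_INLET = 280.0    # K, warm inlet gas: zero energy exchange -> lower bound
T_TANK = 21.0      # K, LH2 temperature: full thermal equilibrium -> upper bound

m_lower = pressurant_mass(P_TANK, V_EXPELLED, T_INLET, R_HE)
m_upper = pressurant_mass(P_TANK, V_EXPELLED, T_TANK, R_HE)
collapse_factor = m_upper / m_lower   # reduces to T_INLET / T_TANK here
```

    In this ideal-gas sketch the collapse factor is simply the ratio of the limiting temperatures; the paper's models additionally handle condensation and autogenous pressurants, which this sketch omits.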

  11. Model Waveform Accuracy Requirements for the $\\chi^2$ Discriminator

    CERN Document Server

    Lindblom, Lee

    2016-01-01

    This paper derives accuracy standards for model gravitational waveforms required to ensure proper use of the $\\chi^2$ discriminator test in gravitational wave (GW) data analysis. These standards are different from previously established requirements for detection and waveform parameter measurement based on signal-to-noise optimization. We present convenient formulae both for evaluating and interpreting the contribution of model errors to measured $\\chi^2$ values. Motivated by these formulae, we also present an enhanced, complexified variant of the standard $\\chi^2$ statistic used in GW searches. While our results are not directly relevant to current searches (which use the $\\chi^2$ test only to veto signal candidates with extremely high $\\chi^2$ values), they could be useful in future GW searches and as figures of merit for model gravitational waveforms.
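
    The statistic in question can be illustrated with the standard time-frequency $\\chi^2$ veto: split the template into p bands of equal expected SNR and penalize uneven sub-band contributions. A minimal sketch (the sub-band SNR values below are made up; real pipelines compute them from matched filters):

```python
import random

def allen_chi2(subband_snrs):
    """Time-frequency chi^2: p * sum_i (rho_i - rho/p)^2, where the rho_i are
    the p sub-band SNR contributions and rho is their sum. Zero for a signal
    that distributes SNR exactly as the template predicts."""
    p = len(subband_snrs)
    rho = sum(subband_snrs)
    return p * sum((r - rho / p) ** 2 for r in subband_snrs)

random.seed(0)
p = 16
# A signal matching the template spreads SNR evenly across the bands...
matching = [1.0 + random.gauss(0, 0.1) for _ in range(p)]
# ...while a glitch concentrates its power in a few bands.
glitch = [8.0 if i < 2 else 0.1 for i in range(p)]
chi2_signal, chi2_glitch = allen_chi2(matching), allen_chi2(glitch)
```

    A template that matches the data gives a small value while a glitch gives a large one; waveform model errors also skew the sub-band split, which is exactly why the paper derives accuracy standards for this test.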

  12. A commuting generation model requiring only aggregated data

    CERN Document Server

    Lenormand, Maxime; Gargiulo, Floriana

    2011-01-01

    We recently proposed, in (Gargiulo et al., 2011), an innovative stochastic model with only one parameter to calibrate. It reproduces the complete commuting network by an iterative process that stochastically chooses, for each commuter living in a municipality of a region, a workplace in the region. The choice considers the job offer in each municipality of the region and the distance to all possible destinations. The model is quite effective if the region is sufficiently autonomous in terms of job offers. However, calibrating the model, or verifying this autonomy, requires data or expertise that are not necessarily available; moreover, the region may simply not be autonomous. In the present work, we overcome these limitations by extending the commuters' geographical job-search base beyond the region and changing the form of the deterrence function. We also found a law to calibrate the improved model that does not require additional data.
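
    The iterative assignment described above can be sketched as weighted sampling, with each workplace weighted by its remaining job offer times a deterrence factor. A minimal sketch (the exponential deterrence form, the distances, and beta are illustrative assumptions; the paper in fact changes the deterrence function):

```python
import math, random

def assign_workplaces(commuter_homes, job_offer, dist, beta=0.1, rng=random):
    """Assign each commuter a workplace drawn with probability proportional to
    the remaining job offer in each municipality times exp(-beta * distance).
    Chosen jobs are consumed, so the offer constrains the final flows."""
    jobs = dict(job_offer)          # remaining vacancies
    flows = {}
    for home in commuter_homes:
        weights = {j: n * math.exp(-beta * dist[home][j])
                   for j, n in jobs.items() if n > 0}
        r = rng.uniform(0, sum(weights.values()))
        for j, w in sorted(weights.items()):
            r -= w
            if r <= 0:
                break
        jobs[j] -= 1
        flows[(home, j)] = flows.get((home, j), 0) + 1
    return flows

# Toy two-municipality region; distances in km, all figures illustrative.
dist = {"A": {"A": 0.0, "B": 20.0}, "B": {"A": 20.0, "B": 0.0}}
random.seed(7)
flows = assign_workplaces(["A"] * 50 + ["B"] * 50, {"A": 60, "B": 40}, dist)
```

    Because vacancies are consumed as commuters are placed, the generated flows never exceed any municipality's job offer, which is what makes the autonomy of the region matter.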

  13. AQM router design for TCP network via input constrained fuzzy control of time-delay affine Takagi-Sugeno fuzzy models

    Science.gov (United States)

    Chang, Wen-Jer; Meng, Yu-Teh; Tsai, Kuo-Hui

    2012-12-01

    In this article, Takagi-Sugeno (T-S) fuzzy control theory is proposed as a key tool to design an effective active queue management (AQM) router for the transmission control protocol (TCP) networks. The probability control of packet marking in the TCP networks is characterised by an input constrained control problem in this article. By modelling the TCP network into a time-delay affine T-S fuzzy model, an input constrained fuzzy control methodology is developed in this article to serve the AQM router design. The proposed fuzzy control approach, which is developed based on the parallel distributed compensation technique, can provide smaller probability of dropping packets than previous AQM design schemes. Lastly, a numerical simulation is provided to illustrate the usefulness and effectiveness of the proposed design approach.

  14. Cognition and procedure representational requirements for predictive human performance models

    Science.gov (United States)

    Corker, K.

    1992-01-01

    Models and modeling environments for human performance are becoming significant contributors to early system design and analysis procedures. Issues of levels of automation, physical environment, informational environment, and manning requirements are being addressed by such man/machine analysis systems. The research reported here investigates the close interaction between models of human cognition and models that describe procedural performance. We describe a methodology for the decomposition of aircrew procedures that supports interaction with models of cognition on the basis of procedures observed; that serves to identify cockpit/avionics information sources and crew information requirements; and that provides the structure to support methods for function allocation among crew and aiding systems. Our approach is to develop an object-oriented, modular, executable software representation of the aircrew, the aircraft, and the procedures necessary to satisfy flight-phase goals. We then encode, in a time-based language, taxonomies of the conceptual, relational, and procedural constraints among the cockpit avionics and control system and the aircrew. We have designed and implemented a goals/procedures hierarchic representation sufficient to describe procedural flow in the cockpit. We then execute the procedural representation in simulation software and calculate the values of the flight instruments, aircraft state variables, and crew resources using the constraints available from the relationship taxonomies. The system provides a flexible, extensible, manipulable, and executable representation of aircrew and procedures that is generally applicable to crew/procedure task analysis. The representation supports developed methods of intent inference and is extensible to include issues of information requirements and functional allocation. We are attempting to link the procedural representation to models of cognitive functions to establish several intent inference methods

  15. Mathematical Modeling of Programmatic Requirements for Yaws Eradication

    Science.gov (United States)

    Mitjà, Oriol; Fitzpatrick, Christopher; Asiedu, Kingsley; Solomon, Anthony W.; Mabey, David C.W.; Funk, Sebastian

    2017-01-01

    Yaws is targeted for eradication by 2020. The mainstay of the eradication strategy is mass treatment followed by case finding. Modeling has been used to inform programmatic requirements for other neglected tropical diseases and could provide insights into yaws eradication. We developed a model of yaws transmission varying the coverage and number of rounds of treatment. The estimated number of cases arising from an index case (basic reproduction number [R0]) ranged from 1.08 to 3.32. To have 80% probability of achieving eradication, 8 rounds of treatment with 80% coverage were required at low estimates of R0 (1.45). This requirement increased to 95% at high estimates of R0 (2.47). Extending the treatment interval to 12 months increased requirements at all estimates of R0. At high estimates of R0 with 12 monthly rounds of treatment, no combination of variables achieved eradication. Models should be used to guide the scale-up of yaws eradication. PMID:27983500
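
    The round-by-round reasoning above can be sketched as a crude Monte-Carlo branching process: each treatment round cures a fraction of cases given by the coverage, and each remaining case seeds new cases according to R0. This is an illustrative stand-in, not the authors' model; the Poisson offspring assumption and all figures are assumptions.

```python
import math, random

def eradication_probability(r0, coverage, rounds, i0=100, trials=2000, rng=random):
    """Monte-Carlo estimate of the probability that treatment drives cases to
    zero. Each round, every case is cured with probability `coverage`; each
    remaining case then generates Poisson(r0) new cases before the next round
    (a crude discrete-generation stand-in for the paper's transmission model)."""
    def poisson(lam):
        # Knuth's multiplication method; adequate for small lam
        limit, k, p = math.exp(-lam), 0, 1.0
        while True:
            p *= rng.random()
            if p <= limit:
                return k
            k += 1
    successes = 0
    for _ in range(trials):
        infected = i0
        for _ in range(rounds):
            untreated = sum(1 for _ in range(infected) if rng.random() > coverage)
            infected = sum(poisson(r0) for _ in range(untreated))
            if infected == 0:
                break
        if infected == 0:
            successes += 1
    return successes / trials

random.seed(42)
# Low-R0 scenario from the abstract: 8 rounds at 80% coverage.
p_low_r0 = eradication_probability(r0=1.45, coverage=0.8, rounds=8)
```

    With 80% coverage and R0 = 1.45, each round multiplies expected cases by (1 - 0.8) * 1.45 = 0.29, so eradication within 8 rounds is nearly certain in this sketch; at high R0 with low coverage the process is supercritical and essentially never dies out, mirroring the abstract's finding.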

  16. The Benefit of Ambiguity in Understanding Goals in Requirements Modelling

    DEFF Research Database (Denmark)

    Paay, Jeni; Pedell, Sonja; Sterling, Leon

    2011-01-01

    of their research is to create technologies that support more flexible and meaningful social interactions, by combining best practice in software engineering with ethnographic techniques to model complex social interactions from their socially oriented life for the purposes of building rich socio......This paper examines the benefit of ambiguity in describing goals in requirements modelling for the design of socio-technical systems using concepts from Agent-Oriented Software Engineering (AOSE) and ethnographic and cultural probe methods from Human Computer Interaction (HCI). The authors’ aim...... of abstraction, ambiguous and open for conversations through the modelling process add richness to goal models, and communicate quality attributes of the interaction being modelled to the design phase, where this ambiguity is regarded as a resource for design....

  17. A wavelet-based non-linear autoregressive with exogenous inputs (WNARX) dynamic neural network model for real-time flood forecasting using satellite-based rainfall products

    Science.gov (United States)

    Nanda, Trushnamayee; Sahoo, Bhabagrahi; Beria, Harsh; Chatterjee, Chandranath

    2016-08-01

    Although a flood forecasting and warning system is a very important non-structural measure in flood-prone river basins, a poor raingauge network as well as the unavailability of rainfall data in real time can hinder its accuracy at different lead times. Conversely, since real-time satellite-based rainfall products are now becoming available for data-scarce regions, their integration with data-driven models could be effectively used for real-time flood forecasting. To address these issues in operational streamflow forecasting, a new data-driven model, namely the wavelet-based non-linear autoregressive with exogenous inputs (WNARX) model, is proposed and evaluated in comparison with four other data-driven models, viz., the linear autoregressive moving average with exogenous inputs (ARMAX), static artificial neural network (ANN), wavelet-based ANN (WANN), and dynamic non-linear autoregressive with exogenous inputs (NARX) models. First, the quality of the input rainfall products of the Tropical Rainfall Measuring Mission Multi-satellite Precipitation Analysis (TMPA), viz., the TRMM and TRMM-real-time (RT) rainfall products, is assessed through statistical evaluation. The results reveal that the satellite rainfall products moderately correlate with the observed rainfall, with the gauge-adjusted TRMM product outperforming the real-time TRMM-RT product. The TRMM rainfall product better captures the ground observations up to the 95th percentile range (30.11 mm/day), although the hit rate decreases for high rainfall intensity. The effect of antecedent rainfall (AR) and the climate forecast system reanalysis (CFSR) temperature product on the catchment response is tested in all the developed models. The results reveal that, during real-time flow simulation, the satellite-based rainfall products generally perform worse than the gauge-based rainfall. Moreover, as compared to the existing models, flow forecasting by the WNARX model is markedly better than that of the other four models studied herein, with the
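
    The wavelet preprocessing that separates the WANN/WNARX models from plain ANN/NARX can be illustrated with a Haar transform: the input rainfall series is split into a low-frequency approximation and per-level detail sub-series, which are then fed to the network as separate inputs. A minimal sketch (the Haar family and the sample series are assumptions for illustration; the paper does not specify them here):

```python
def haar_step(x):
    """One level of the Haar transform: pairwise averages (approximation)
    and half-differences (detail)."""
    a = [(x[2 * i] + x[2 * i + 1]) / 2 for i in range(len(x) // 2)]
    d = [(x[2 * i] - x[2 * i + 1]) / 2 for i in range(len(x) // 2)]
    return a, d

def haar_decompose(x, levels):
    """Multilevel decomposition: the smoothed series plus one detail series
    per level, i.e. the sub-series a wavelet-based model takes as inputs."""
    details, a = [], list(x)
    for _ in range(levels):
        a, d = haar_step(a)
        details.append(d)
    return a, details

rain = [0, 2, 5, 9, 30, 22, 4, 1]          # assumed daily rainfall (mm)
approx, details = haar_decompose(rain, 2)  # low-frequency trend + two detail bands
```

    Feeding these sub-series, rather than the raw series, lets the downstream regression treat slow catchment response and rapid storm peaks separately, which is the usual motivation for wavelet-based hybrids.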

  18. Analysis of Integrated Econometric and Input-Output Model

    Institute of Scientific and Technical Information of China (English)

    孟彦菊; 向蓉美

    2011-01-01

    The classical input-output (IO) model is a popular linear and deterministic system. Although it can only approximately describe the real-world economy, the IO model's characteristically detailed sectoral classification can deeply reveal the quantitative dependencies among the sectors of the national economy at a particular point in time. The econometric (EC) model, by contrast, is a dynamic system that can handle the uncertainty of the real economy by means of probability theory. This paper tries to integrate the econometric (EC) and input-output (IO) models so as to combine their advantages. An empirical study with China data was conducted, and it shows that the integrated model can simulate macroeconomic development more realistically and thus make more accurate predictions.
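
    The linear, deterministic core of the IO side is the Leontief quantity relation x = Ax + d: gross outputs x must cover both intermediate use Ax and final demand d. A minimal two-sector sketch (the technical coefficients and demand figures are illustrative assumptions):

```python
def leontief_output(a, d):
    """Total sectoral output x solving x = A x + d for two sectors, using the
    explicit inverse of the 2x2 Leontief matrix (I - A)."""
    m00, m01 = 1.0 - a[0][0], -a[0][1]
    m10, m11 = -a[1][0], 1.0 - a[1][1]
    det = m00 * m11 - m01 * m10
    x0 = (m11 * d[0] - m01 * d[1]) / det
    x1 = (-m10 * d[0] + m00 * d[1]) / det
    return [x0, x1]

# Assumed technical coefficients: a[i][j] = input from sector i needed per
# unit of sector j's output. All figures are illustrative.
A = [[0.2, 0.3],
     [0.1, 0.4]]
final_demand = [100.0, 50.0]
x = leontief_output(A, final_demand)  # gross outputs consistent with demand
```

    An integrated EC+IO model would let an econometric block forecast the final-demand vector dynamically while the Leontief block translates it into consistent sectoral outputs.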

  19. Vulnerability of shallow ground water and drinking-water wells to nitrate in the United States: Model of predicted nitrate concentration in shallow, recently recharged ground water -- Input data set for water input (gwava-s_wtin)

    Data.gov (United States)

    U.S. Geological Survey, Department of the Interior — This data set represents "water input," the ratio of the total area of irrigated land to precipitation, in square kilometers per centimeter, in the conterminous...

  20. A Model for Forecasting Enlisted Student IA Billet Requirements

    Science.gov (United States)

    2016-03-01

    were promised and had at least one course failure. Training times: student execution depends on TTT, which includes under-instruction (UI) time and... Cleared for Public Release. A Model for Forecasting Enlisted Student IA Billet Requirements. Steven W. Belcher, with David L. Reese... and Kletus S. Lawler. March 2016.